A Token-Based Distributed Group Mutual Exclusion Algorithm with Quorums

Hirotsugu Kakugawa, Member, IEEE, Sayaka Kamei, Member, IEEE, and Toshimitsu Masuzawa, Member, IEEE

Abstract— The group mutual exclusion problem is a generalization of the mutual exclusion problem in which a set of processes in the same group can enter the critical section simultaneously. In this paper, we propose a distributed algorithm for the group mutual exclusion problem in asynchronous message-passing distributed systems. Our algorithm is token-based: a process that obtains a token can enter the critical section. To reduce message complexity, it uses a coterie as a communication structure when a process sends request messages. Informally, a coterie is a set of quorums, each of which is a subset of the process set, such that any two quorums share at least one process. The message complexity of our algorithm is O(|Q|) in the worst case, where |Q| is the size of the quorums that the algorithm adopts. Performance of the proposed algorithm is evaluated by analysis and by discrete event simulation. In particular, the proposed algorithm achieves high concurrency, which is a performance measure for the number of processes that can be in the critical section simultaneously.

Index Terms— D.4.7.b - Distributed systems, E.1.b - Distributed data structures, D.4.1.d - Mutual exclusion, D.4.1.f - Synchronization

I. INTRODUCTION

DISTRIBUTED mutual exclusion is one of the fundamental problems of conflict resolution in distributed computing. The problem appears in many contexts, such as access to a shared object and allocation of a shared resource. Because it is a fundamental problem in distributed computing, many generalized versions have been proposed. In this paper, we consider the problem of distributed group mutual exclusion [1], which is a generalization of the distributed mutual exclusion problem such that only processes in the same group can enter the critical section simultaneously. In other words, no two processes in different groups are in the critical section at the same time.

As an intuitive example, consider the multicast of live musical concerts on the Internet where the network capacity is not enough to transfer more than one live concert. Each user is allowed to request the live concert he/she prefers among many. Although any number of users can receive the same live concert at a time thanks to multicast technology, no two different live concerts can be multicast simultaneously because of the limited network capacity. The distributed group mutual exclusion problem models such a situation, and it is one of the conflict resolution problems in distributed systems.

Hirotsugu Kakugawa and Toshimitsu Masuzawa are with the Graduate School of Information Science and Technology, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka 560-8531, JAPAN. E-mail: {kakugawa,masuzawa}@ist.osaka-u.ac.jp. Sayaka Kamei is with the Department of Information Systems, Tottori University of Environmental Studies, 1-1-1 Wakabadai Kita, Tottori, Tottori 689-1111, JAPAN. E-mail: s-kamei@kankyo-u.ac.jp.

Manuscript received XXX YY, 200Z; revised XXX YY, 200Z.

Concurrency and waiting time are important performance measures for distributed group mutual exclusion algorithms. Concurrency is the number of processes that are in the critical section simultaneously, and higher concurrency is better. Waiting time is the time that a process must wait to enter the critical section after it issues a request. Designing an algorithm that achieves high concurrency and small waiting time is non-trivial when processes in different groups make requests simultaneously. Consider a situation in which some processes are in the critical section and a process in the same group makes a request. If the request is granted immediately, concurrency is increased. On the other hand, requests by processes in different groups must wait to be granted. The difficulty of algorithm design lies in this trade-off between concurrency and waiting time.

Many distributed group mutual exclusion algorithms have been proposed so far, and they are categorized as follows. (See Section VII for a more detailed review.)

• Decentralized permission type [2], [3] — When a process wants to enter the critical section, it sends request messages to some processes, and it enters the critical section if it obtains permissions from some set of processes defined in advance. Typically, quorums [4] or a variant [3] are used to define the set of processes from which a process must obtain permission. (See Definition 2 for the formal definition.)

• Privileged token type [5] — A virtual object called a token is maintained by the processes, and only a process that obtains a token may enter the critical section. In the algorithm proposed in [5], a process sends request messages to all processes so that a request arrives at the process that holds the token.

• Hybrid type [6] — The first process to enter the critical section obtains permission from some set of processes (decentralized permission type), and this process grants other processes in the same group entry to the critical section by sending them a token (privileged token type).

In this paper, we propose a new distributed algorithm TQGmx of the privileged token type. Like [5], our algorithm uses two classes of tokens, the main-token and sub-tokens. Different from [5], our algorithm uses coteries as a communication structure to reduce message complexity. Such an idea was proposed by Mizuno, Neilsen and Rao [7] for (non-group) distributed mutual exclusion – a token is used as the privilege to enter the critical section, and a coterie is used for the propagation of requests. (In Section VII, technical descriptions of the algorithm in [7] and of ours are presented.)

The outline of our algorithm is as follows. When a process obtains the main-token, it notifies the processes in a quorum that it holds the main-token. When a process makes a request, it sends request messages to the processes in a quorum. A received request message is forwarded to the holder of the main-token, eventually reaches the holder, and is enqueued in the main-token.

while (true) do {
    Select a group g ∈ G;
    Request(g);       – Entry protocol.
    Critical Section
    Release;          – Exit protocol.
}

Fig. 1. Model of process behavior.

Finally, the request is granted and a token (main-token or sub-token) is sent to the requesting process.

The main contribution of our algorithm is twofold:

• Flexible control mechanism of concurrency. Requests are aggregated in the main-token, and the holder of the main-token may control the order in which requests in the queue are granted. We propose a priority scheme that can control concurrency and waiting time.

• Lower message complexity. The message complexity of our algorithm is 0 in the best case and O(|Q|) in the worst case, where |Q| is the size of the quorums that the algorithm adopts. When we adopt a coterie based on a finite projective plane [8], we have |Q| ≈ √n.

This paper is organized as follows. In Section II, we describe the computational model assumed in this paper, and define the group mutual exclusion problem and coteries. In Section III, we present the proposed algorithm for the group mutual exclusion problem. In Sections IV and V, we show correctness and performance analysis of the proposed algorithm, respectively. In Section VI, we show simulation results for the proposed algorithm. In Section VII, related works are reviewed and the performance of the proposed algorithm is compared with other algorithms; in particular, we show simulation results for the algorithms proposed in [6]. Finally, in Section VIII, we give concluding remarks.

II. DEFINITIONS

A. The computational model

In this paper, we assume that a distributed system consists of a set of n processes V = {P0, P1, ..., Pn−1} and a set of communication channels E ⊆ V × V. Each process has a unique identifier selected from the set of integers {0, 1, ..., n − 1}. The distributed system is asynchronous, i.e., there is no common global clock. Information exchange between processes is done by asynchronous message passing. Each communication channel is FIFO, and each message sent is delivered within finite time. We assume that there is no upper bound on message delivery time. We assume that the system is error-free.

B. The group mutual exclusion problem

Let G = {0, 1, ..., m − 1} be a set of groups. Each process selects a group g ∈ G, and makes a request for critical section entries. We model such a behavior of each process Pi as shown in Figure 1.

Formally, the problem of group mutual exclusion is defined as follows.

Definition 1: The group mutual exclusion problem is the problem of controlling the execution of processes so that the following two conditions are satisfied.

• Safety: No two processes in different groups are in the critical section simultaneously, and

• Liveness: Any requesting process eventually enters its critical section.

Note that this definition does not include any requirement on concurrent access by processes in the same group. We leave such a condition as a goodness measure of algorithms, because some algorithms may limit concurrency to achieve low message complexity (e.g., [9]).

C. Coterie

A coterie is defined formally as follows.

Definition 2: (Coterie [4]) Let U = {P0, P1, ..., Pn−1} be a set. A set C of subsets of U is a coterie under U if and only if the following three conditions are satisfied.

1) Non-emptiness: For each Q ∈ C, Q is not empty and Q ⊆ U,

2) Intersection property: For any Q, Q′ ∈ C, Q ∩ Q′ is not empty, and

3) Minimality: For any Q, Q′ ∈ C, Q is not a proper subset of Q′.

An element of C is called a quorum. Some examples of coteries are given below.

Example 1: (Majority coterie [4]) For any integer n, let U = {Pi : 0 ≤ i < n}. A majority coterie Cmaj under U is defined as follows.

• When n is odd, Cmaj = {Q ⊆ U : |Q| = (n + 1)/2}, and
• When n is even, Cmaj = C1 ∪ C2, where, for some P ∈ U, C1 = {Q ⊆ U : P ∈ Q and |Q| = n/2} and C2 = {Q ⊆ U : |Q| = (n/2) + 1 and Q′ ⊄ Q for any Q′ ∈ C1}.

Example 2: (Grid coterie [8]) For an integer ℓ, let U = {Px+yℓ : 0 ≤ x, y < ℓ}. A grid coterie Cgrid under U is {Qx,y : 0 ≤ x, y < ℓ}, where Qx,y = ⋃0≤i<ℓ {Pi+yℓ} ∪ ⋃0≤j<ℓ {Px+jℓ}. The size of each quorum is 2ℓ − 1 = 2√n − 1, where n = |U| = ℓ².

As a general concept, a quorum system is defined as a set of quorums in which the intersection and minimality properties may not be satisfied. A coterie is a quorum system in which the two properties are satisfied. Since many variants of quorum systems have been proposed so far [10], [11], [12], [3], [13], a quorum system that is a coterie is called an ordinary quorum system.
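To make the grid coterie concrete, the following sketch (our own illustration in Python; the helper name grid_coterie is ours, not from the paper) builds Cgrid for n = ℓ² processes and checks the intersection property of Definition 2.

# Illustrative sketch: the grid coterie over U = {P_0, ..., P_{l*l - 1}},
# where quorum Q_{x,y} is the union of row y and column x (size 2l - 1).
from itertools import combinations

def grid_coterie(l):
    coterie = []
    for x in range(l):
        for y in range(l):
            row = {i + y * l for i in range(l)}   # processes P_{i+yl}, 0 <= i < l
            col = {x + j * l for j in range(l)}   # processes P_{x+jl}, 0 <= j < l
            coterie.append(frozenset(row | col))
    return coterie

l = 4                                              # n = 16 processes
C = grid_coterie(l)
assert all(len(Q) == 2 * l - 1 for Q in C)         # each quorum has size 2l - 1 = 2*sqrt(n) - 1
assert all(Q & R for Q, R in combinations(C, 2))   # intersection property: any two quorums meet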

D. Performance measures

We define performance measures for distributed group mutual exclusion algorithms.

Definition 3: Message complexity is the number of messages exchanged per request for the critical section. Maximum concurrency is the maximum number of processes that can be in the critical section simultaneously. Waiting time is the time between the moment a process makes a request and the moment it actually enters the critical section, assuming that each message transmission takes 1 time unit and local computation time is zero. Synchronization delay is the time between the moment all the processes in a group have exited the critical section and the moment a process in another group enters the critical section.

Usually, waiting time is measured when the system load is light, and synchronization delay is measured when the system load is heavy, in such a way that different groups are requested by processes simultaneously [14], [6], [5].

procedure Request(g);
    trigger event requestEventi(g);
    wait for event requestDonei;

procedure Release;
    trigger event releaseEventi;
    wait for event releaseDonei;

Fig. 2. Procedures Request and Release.

III. THE PROPOSED ALGORITHM

In this section, we present a distributed algorithm TQGmx for the group mutual exclusion problem based on a coterie. Let C be a coterie under V, and n be the total number of processes, i.e., n = |V|. Procedures Request and Release, which are used to request entry to and exit from the critical section, are shown in Figure 2.

A. Outline

Our algorithm TQGmx uses two types of tokens, the main-token and sub-tokens. The main-token is maintained so that it is unique in the system. A sub-token is generated by the holder of the main-token, and the number of sub-tokens varies. A process obtains the main-token to enter the critical section if no process is in the critical section. A process enters its critical section by receiving a sub-token in case another process in the same group is in its critical section. The process that holds the main-token maintains the following two values:

• Current group, which is the group of the processes that are currently in the critical section, and

• Group size, which is the number of processes currently in the critical section.

The rough sketch of the proposed algorithm is as follows.

1) When process Pi wishes to enter the critical section, it sends a request message to each process Pj in a quorum, and it waits for a token (main-token or sub-token) to arrive.

2) When process Pj receives a request message from Pi, it forwards the request to the holder, say Pk, of the main-token if Pj knows such a Pk. Otherwise, the request is buffered until Pj knows such a Pk.

3) When the holder Pk of the main-token receives a request issued by Pi and forwarded by Pj: the arriving request is, in any case, put into the queue of the main-token. Each request in the queue is granted according to the following rules; a minimal sketch of these rules is given after this list.

• If no process is in the critical section, the main-token is transferred by a token message to a requesting process.

• If some processes are already in the critical section and the requested group is the same as the current group, Pk sends a subtoken message to Pi, as long as there is no request from another group with higher priority. See subsection III-E for a priority scheme of the queue that guarantees liveness.

• Otherwise, the request is kept in the queue of the main-token. The main-token may be transferred to another process to grant that process's request, and this may happen several times. The request of Pi is eventually granted by a process that holds the main-token, say Pℓ, which may not be the same as Pk.

4) Process Pi enters the critical section by receiving the main-token (a token message) or a sub-token (a subtoken message).

5) When process Pi exits the critical section: if Pi is the holder of the main-token, it decrements the group size by one. Otherwise, i.e., Pi holds a sub-token, it returns the sub-token by sending a release message.
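The grant rules of step 3 can be summarized by the following minimal sketch (ours, in Python; it abstracts away the leave/ack handshake and the priority ordering of the formal description in Figures 3–5, and the send hook is an assumption).

from dataclasses import dataclass, field
from typing import Callable, List, Optional, Tuple

@dataclass
class MainToken:
    g_name: Optional[int] = None                 # current group (None: nobody in CS)
    g_size: int = 0                              # number of processes currently in CS
    req_q: List[Tuple[int, int]] = field(default_factory=list)  # (pid, group), priority order

def grant_pending_requests(token: MainToken, send: Callable[[tuple, int], None]) -> None:
    """Grant queued requests at the current holder of the main-token."""
    while token.req_q:
        pid, group = token.req_q[0]
        if token.g_size == 0:
            token.req_q.pop(0)
            send(("token", token), pid)          # nobody in CS: hand over the main-token
            return
        if group == token.g_name:
            token.req_q.pop(0)
            token.g_size += 1
            send(("subtoken",), pid)             # same group: admit with a sub-token
        else:
            return                               # different group: wait until the group empties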

B. Local variables at each Pi

Each process Pi maintains the following local variables.

• modei represents the current status of Pi; its value is IDLE (not interested in the critical section), TRYING (making a request for critical section entry), or INCS (the process is in the critical section).

• toki is the main-token object if Pi holds it. Otherwise, its value is ⊥.

• typei is the type (MAIN or SUB) of the token that Pi holds when Pi is in the critical section. If Pi has no token, its value is ⊥. When Pi is not in the critical section, its value is ⊥ even if Pi holds the main-token.

• tsi is the timestamp value for each request, which is incremented by one when Pi makes a new request. Each request is identified by a pair of process name and timestamp value.

• qi is the quorum that Pi uses.

• grpi is the group name that Pi is currently interested in. If Pi is not interested in the critical section, its value is ⊥.

• holderi is the name of the process that holds the main-token, to the best knowledge of Pi. Its value is ⊥ if Pi does not know it.

• homei is the name of the process to which Pi should return a sub-token. Its value is ⊥ if Pi does not hold a sub-token.

• tmpQi is a temporary queue for request items1 that Pi receives to forward to the holder of the main-token.

• acqsi is the set of processes Pk such that Pi sent an acquired message to Pk but has not yet sent a leave message to Pk.

• acksi is the set of processes Pk such that Pi is waiting for an ack message from Pk as a reply to a leave message sent to Pk.

• leavingi is true if and only if Pi is waiting for ack messages.
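For reference, the per-process state of this subsection can be pictured as the following record (a sketch of ours in Python; the field names mirror the local variables above and the types are assumptions).

from dataclasses import dataclass, field
from typing import List, Optional, Set, Tuple

@dataclass
class ProcessState:
    mode: str = "IDLE"                  # IDLE, TRYING, or INCS
    tok: Optional[object] = None        # the main-token object, or None (= ⊥)
    type: Optional[str] = None          # "MAIN" or "SUB" while in CS, else None
    ts: int = 0                         # timestamp of the latest request
    q: Set[int] = field(default_factory=set)     # quorum used by this process
    grp: Optional[int] = None           # group currently requested
    holder: Optional[int] = None        # known holder of the main-token
    home: Optional[int] = None          # process to which a sub-token must be returned
    tmp_q: List[Tuple[int, int, int]] = field(default_factory=list)  # buffered (pid, ts, group)
    acqs: Set[int] = field(default_factory=set)  # sent 'acquired', not yet 'leave'
    acks: Set[int] = field(default_factory=set)  # awaiting 'ack' after 'leave'
    leaving: bool = False               # true while waiting for ack messages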

C. Structure of the main-token

The main-token contains the following data.

• gName holds the current group. If no process is in the critical section, its value is ⊥.

• gSize holds the group size.

• reqQ is a queue of request items.

• tsReq is an array of timestamps (sequence numbers) of size n. For each 0 ≤ i < n, the value of tsReq[i] is the largest timestamp value among the requests by Pi that the main-token has enqueued so far.

In case a request message is sent to each process in a quorum, more than one request message for the same request may be delivered to the holder of the main-token. To enqueue each request only once into the queue of the main-token, tsReq[i] is used. The timestamp value of Pi is attached to each request message, and a request of Pi that arrives at the holder of the main-token is taken into consideration only when its timestamp is larger than tsReq[i]. Otherwise, the request message is simply discarded.

1A request item is a triplet of process name, timestamp and group, for a mutual exclusion request.
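The duplicate-suppression rule can be sketched as follows (our illustration in Python; try_enqueue is a hypothetical helper, not part of the paper's pseudocode).

def try_enqueue(token, pid, ts, group):
    """Enqueue a forwarded request at most once: copies of the same request that
    arrive via several quorum members carry the same timestamp and are dropped."""
    if ts > token.ts_req[pid]:       # ts_req plays the role of tsReq in the main-token
        token.ts_req[pid] = ts
        token.req_q.append((pid, ts, group))
        return True                  # accepted: first copy of a new request
    return False                     # stale or duplicate copy: silently discarded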

D. Description of the algorithm

A formal description of the proposed algorithm is shown in Figures 3, 4 and 5. A precondition (labeled "PC"), which is satisfied on invocation, is attached to each handler and procedure. When a process, say Pi, sends messages to the processes in a quorum, it does not send a message to Pi itself, to reduce message complexity. In the description below, "Pi sends a message to quorum qi" means that "Pi sends a message to each process in quorum qi except Pi itself".

We assume that each handler is executed atomically. That is, no message arrival or local event ever interrupts the execution of a handler. Message arrival events and local events are handled in the order of their occurrence; that is, after the execution of a handler finishes, the next handler is invoked. (Note that no handler contains any blocking operation.)

Operations enqueue, dequeue and peek are defined on queues. We assume that reqQ of the main-token is a priority queue based on the priority discussed in subsection III-E. We assume that a dequeue that follows a peek returns the same item as the one returned by the peek, as long as the queue contents are not changed after the peek.

1) System startup (lines 1.1 – 1.15): When the system starts, each process selects a quorum2. Process P0 plays a special role. It creates the main-token, initializes it, and sends an acquired message to each process in its quorum. P0 maintains the set acqs0 of processes to which it has sent an acquired message.

2) Making a request (lines 2.1 – 2.14): Process Pi invokes procedure requestEvent. Pi increments its timestamp, and the request of Pi is sent. Our algorithm makes use of quorums to forward the request to the holder of the main-token, with some optimization, as follows.

• Case A1 (line 2.2–), when Pi holds the main-token: Pi acts as if it received a request message from itself. First, Pi updates tsReq[i] in the main-token, and enqueues the request into the queue of the main-token. Then, Pi invokes procedure handlePendingRequests at line 2.5. If the request of Pi has priority, Pi enters the critical section (line 4.7 or 4.25) in procedure handlePendingRequests. Otherwise, it waits for a message of token or subtoken type.

• Case A2 (line 2.6–), when Pi does not hold the main-token and holderi ≠ ⊥: Since Pi knows the holder of the main-token, it sends a request message directly to the holder of the main-token, and it waits for a message of token or subtoken type.

• Case A3 (line 2.8–), when Pi does not know the holder of the main-token: Pi sends a request message to quorum qi. In case Pi ∈ qi, the request is enqueued into its local temporary queue tmpQi. Out-dated requests of Pi in tmpQi, if any, are deleted since they are no longer necessary. Pi waits for a message of token or subtoken type.

3) Exit from the critical section (lines 3.1 – 3.13): Process Pi invokes procedure releaseEvent. There are two cases, depending on the type of token it holds.

2Although the quorum to be used can be switched dynamically, except that quorums for leave messages and the corresponding acquired messages must be the same, we fix it for simplicity of description in this paper.

on initialization;
PC: true
1.1   modei := IDLE; tsi := 0; grpi := ⊥;
1.2   typei := ⊥; holderi := ⊥;
1.3   homei := ⊥; leavingi := false; acksi := ∅;
1.4   tmpQi := new(Queue);
1.5   qi := select a quorum in coterie C;
1.6   if (Pi = P0) {
1.7     toki := new(Token);
1.8     toki.gName := ⊥; toki.gSize := 0;
1.9     toki.reqQ := new(Queue);
1.10    toki.tsReq[j] := 0 for each j = 0..n − 1;
1.11    acqsi := qi − {Pi};
1.12    send ⟨acquired⟩ to each Pj ∈ acqsi;
1.13  } else {
1.14    toki := ⊥; acqsi := ∅;
1.15  }

on event requestEvent(gi);   // Request for CS
PC: (modei = IDLE) ∧ (typei = ⊥)
2.1   modei := TRYING; tsi := tsi + 1; grpi := gi;
2.2   if (toki ≠ ⊥) then {          – case A1
2.3     toki.tsReq[i] := tsi;
2.4     enqueue(toki.reqQ, ⟨Pi, tsi, grpi⟩);
2.5     call handlePendingRequests;
2.6   } else if (holderi ≠ ⊥) {     – case A2
2.7     send ⟨request, Pi, tsi, grpi⟩ to holderi;
2.8   } else {                       – case A3
2.9     send ⟨request, Pi, tsi, grpi⟩ to each P ∈ (qi − {Pi});
2.10    if (Pi ∈ qi) {
2.11      delete ⟨Pi, ∗, ∗⟩ from tmpQi;
2.12      enqueue(tmpQi, ⟨Pi, tsi, grpi⟩);
2.13    }
2.14  }

on event releaseEvent;   // Exit from CS
PC: (modei = INCS) ∧ (typei ≠ ⊥)
    ∧ ((typei = MAIN) ⇒ (toki ≠ ⊥) ∧ (toki.gSize > 0))
    ∧ ((typei = SUB) ⇒ (toki = ⊥) ∧ (homei ≠ ⊥))
    ∧ ¬leavingi ∧ (acksi = ∅)
3.1   if (typei = MAIN) then {      – case B1
3.2     typei := ⊥; modei := IDLE;
3.3     grpi := ⊥; toki.gSize := toki.gSize − 1;
3.4     if (toki.gSize = 0) {
3.5       toki.gName := ⊥;
3.6       call handlePendingRequests;
3.7     }
3.8   } else {                       – case B2
3.9     send ⟨release⟩ to homei;
3.10    grpi := ⊥; homei := ⊥;
3.11    typei := ⊥; modei := IDLE;
3.12  }
3.13  trigger event releaseDonei;   // Exit done

Fig. 3. Description of TQGmx for Pi (1/3).

• Case B1 (line 3.1–), when Pi holds the main-token: It decrements the group size by one. If all processes have exited the critical section, procedure handlePendingRequests is invoked to switch the current group if there is a pending request.

• Case B2 (line 3.8–), when Pi holds a sub-token: Pi sends a release message to the holder of the main-token.

4) Procedure handlePendingRequests (lines 4.1 – 4.30): In this procedure, pending requests are granted. Note that this procedure is invoked only when Pi holds the main-token. If Pi is waiting for any ack message, i.e., leavingi is true, an invocation of this procedure has no effect. Below, we explain the case in which leavingi is false.

procedure handlePendingRequests;
PC: (toki ≠ ⊥)
4.1   if ¬leavingi ∧ (toki ≠ ⊥) ∧ ¬empty(toki.reqQ) ∧ (toki.gSize = 0) {
4.2     ⟨Pj, t, g⟩ := peek(toki.reqQ);    // peek the top item
4.3     if (Pj = Pi) {
4.4       dequeue(toki.reqQ);             // discard the top item
4.5       toki.gName := grpi; toki.gSize := 1;
4.6       typei := MAIN; modei := INCS;
4.7       trigger event requestDonei;     // Enter CS
4.8     } else {
4.9       if (acqsi ≠ ∅) {
4.10        call beginTokenTransfer;
4.11      } else {
4.12        dequeue(toki.reqQ);           // discard the top item
4.13        send ⟨token⟩ to Pj; toki := ⊥;
4.14      }
4.15    }
4.16  }
4.17  while ¬leavingi ∧ (toki ≠ ⊥) ∧ ¬empty(toki.reqQ) {
4.18    ⟨Pj, t, g⟩ := peek(toki.reqQ);
4.19    if (g ≠ toki.gName)
4.20      break;
4.21    dequeue(toki.reqQ);
4.22    if (Pj = Pi) {
4.23      toki.gSize := toki.gSize + 1;
4.24      typei := MAIN; modei := INCS;
4.25      trigger event requestDonei;     // Enter CS
4.26    } else {
4.27      toki.gSize := toki.gSize + 1;
4.28      send ⟨subtoken⟩ to Pj;
4.29    }
4.30  }

on receipt of ⟨token, tok⟩;
PC: (toki = ⊥) ∧ (typei = ⊥) ∧ (tok.gSize = 0)
    ∧ (homei = ⊥) ∧ ¬leavingi ∧ (acksi = ∅) ∧ (acqsi = ∅)
5.1   toki := tok;
5.2   acqsi := qi − {Pi};
5.3   send ⟨acquired, Pi⟩ to each Pj ∈ acqsi;
5.4   typei := MAIN; modei := INCS;
5.5   toki.gName := grpi; toki.gSize := toki.gSize + 1;
5.6   trigger event requestDonei;         // Enter CS
5.7   while ¬empty(tmpQi) {
5.8     ⟨Pj, t, g⟩ := dequeue(tmpQi);
5.9     if (toki.tsReq[j] < t) {
5.10      toki.tsReq[j] := t;
5.11      enqueue(toki.reqQ, ⟨Pj, t, g⟩);
5.12    }
5.13  }
5.14  call handlePendingRequests;

Fig. 4. Description of TQGmx for Pi (2/3).

Requests in the queue of the main-token are granted in order of priority. If no process is in the critical section, let Pj be the process with the highest priority in the queue (line 4.2). If Pj = Pi (line 4.3–), Pi enters the critical section with the main-token. Otherwise (line 4.8–), Pi transfers the main-token to Pj. If acqsi = ∅, the main-token is transferred immediately (line 4.13). If acqsi ≠ ∅, procedure beginTokenTransfer is invoked to schedule the token transfer (line 4.10). In procedure beginTokenTransfer, the value of leavingi is changed to true (line 10.1), and it remains true until Pi receives an ack message from each process in qi − {Pi}.

Then (lines 4.17–4.30), if the main-token has not been transferred (toki ≠ ⊥) and is not scheduled for transfer (leavingi is false), each request in the queue is granted in order of priority as long as its group is the same as the current group.

5) Receiving a token message (lines 5.1–5.14): First, Pi starts a new current group containing Pi only, and sends an acquired message to a quorum (lines 5.1–5.5). Pi is enabled to enter the critical section (line 5.6).

on receipt of ⟨subtoken⟩ from Pk;
PC: (toki = ⊥) ∧ (typei = ⊥) ∧ (homei = ⊥) ∧ ¬leavingi
6.1   typei := SUB; modei := INCS; homei := Pk;
6.2   trigger event requestDonei;   // Enter CS

on receipt of ⟨acquired⟩ from Pk;
PC: (holderi = ⊥)
7.1   holderi := Pk;
7.2   while ¬empty(tmpQi) {
7.3     ⟨Pj, t, g⟩ := dequeue(tmpQi);
7.4     send ⟨request, Pj, t, g⟩ to holderi;
7.5   }

on receipt of ⟨request, Pk, t, g⟩;
PC: true
8.1   if (toki ≠ ⊥) {                – case C1
8.2     if (toki.tsReq[k] < t) {
8.3       toki.tsReq[k] := t;
8.4       enqueue(toki.reqQ, ⟨Pk, t, g⟩);
8.5       call handlePendingRequests;
8.6     }
8.7   } else if (holderi ≠ ⊥) {      – case C2
8.8     send ⟨request, Pk, t, g⟩ to holderi;
8.9   } else {                        – case C3
8.10    delete ⟨Pk, ∗, ∗⟩ from tmpQi;
8.11    enqueue(tmpQi, ⟨Pk, t, g⟩);
8.12  }

on receipt of ⟨release⟩;
PC: (toki ≠ ⊥) ∧ (toki.gSize > 0) ∧ ¬leavingi
9.1   toki.gSize := toki.gSize − 1;
9.2   if (toki.gSize = 0) {
9.3     toki.gName := ⊥;
9.4     call handlePendingRequests;
9.5   }

procedure beginTokenTransfer;
PC: (toki ≠ ⊥) ∧ (toki.gSize = 0) ∧ ¬leavingi ∧ (acqsi ≠ ∅) ∧ (acksi = ∅)
10.1  leavingi := true;
10.2  send ⟨leave⟩ to each Pj ∈ acqsi (= qi − {Pi});
10.3  acqsi := ∅; acksi := qi − {Pi};

on receipt of ⟨leave⟩ from Pk;
PC: (holderi = Pk)
11.1  holderi := ⊥;
11.2  send ⟨ack⟩ to Pk;

on receipt of ⟨ack⟩ from Pk;
PC: (toki ≠ ⊥) ∧ (toki.gSize = 0) ∧ leavingi ∧ (acqsi = ∅) ∧ (Pk ∈ acksi ≠ ∅)
12.1  acksi := acksi − {Pk};
12.2  if (acksi = ∅) {
12.3    leavingi := false;
12.4    if (toki ≠ ⊥) {
12.5      ⟨Pj, t, g⟩ := dequeue(toki.reqQ);
12.6      if (Pj = Pi) {
12.7        acqsi := qi − {Pi};
12.8        send ⟨acquired⟩ to each Pj ∈ acqsi;
12.9        typei := MAIN; modei := INCS;
12.10       toki.gName := grpi; toki.gSize := 1;
12.11       call handlePendingRequests;
12.12       trigger event requestDonei;   // Enter CS
12.13     } else {
12.14       send ⟨token, toki⟩ to Pj; toki := ⊥;
12.15     }
12.16   }
12.17 }

Fig. 5. Description of TQGmx for Pi (3/3).

Then, Pi moves the request items from its temporary queue tmpQi into the queue of the main-token (lines 5.7–5.13). Finally, Pi invokes procedure handlePendingRequests, in which Pi sends subtoken messages (line 5.14).

6) Receiving a subtoken message (lines 6.1–6.2): First, Pi sets the value of homei to the sender of this message, to which Pi should send a release message on exit from the critical section. Then, Pi enters the critical section.

7) Receiving an acquired message (lines 7.1–7.5): The value of holderi is set to the sender of this message (line 7.1). The value of holderi is the process that holds the main-token, and hence the process to which Pi should forward request messages (lines 7.2–7.5).

8) Receiving a request message (lines 8.1–8.12): There are three cases, depending on the value of holderi. Let Pk be the process that issued the request in question. Note that Pk may not be the sender of the message, since the request may have been forwarded by some process.

• Case C1 (line 8.1–), when Pi holds the main-token: If the request has not been enqueued yet, it is enqueued, and Pi invokes procedure handlePendingRequests. Otherwise, the message is ignored.

• Case C2 (line 8.7–), when Pi does not hold the main-token but holderi ≠ ⊥ holds: Pi forwards the request to the holder of the main-token.

• Case C3 (line 8.9–), otherwise: Pi enqueues the request into its local temporary queue tmpQi. Before this action takes place, Pi deletes the out-dated request (if any) in tmpQi.

9) Receiving a release message (lines 9.1–9.5): Process Pi decrements the group size by one. If no process is in the critical section, Pi invokes procedure handlePendingRequests to grant pending requests in the queue.

10) Transfer of the main-token: Procedure beginTokenTransfer is invoked to start a transfer of the main-token.

In procedure beginTokenTransfer (lines 10.1–10.3), Pi sends a leave message to a quorum, and waits for an ack message for each leave message. To avoid further invocations of this procedure while this action is in progress, the local variable leavingi is used. The value of leavingi remains true until Pi receives an ack message from each process in qi − {Pi}.

When a process Pℓ receives a leave message, it nullifies holderℓ and sends an ack message back to Pi (lines 11.1–11.2).

When Pi has received all the ack messages (lines 12.1–12.2), it is ready to transfer the main-token. Let Pj be the process with the highest priority3.

• If Pj = Pi (line 12.6), Pi uses the main-token again. Pi sends an acquired message to each process in a quorum, and invokes procedure handlePendingRequests to send subtoken messages.

• Otherwise (line 12.14), Pi transfers the main-token to Pj by a token message.

Note that the case Pj = Pi occurs in the following scenario: while Pi is waiting for ack messages, (1) it issues a new request for mutual exclusion, and (2) the priority of its request is the highest.

3Note that the queue is always non-empty at line 12.5. Procedure beginTokenTransfer is invoked at line 4.10 when the queue is not empty (see line 4.1). After procedure beginTokenTransfer is invoked, requests in the queue are not dequeued until all the ack messages are received, owing to the value of leavingi. Thus, the queue is not empty at line 12.5.

11) A note on maintenance of the value of holderi: The protocol described above, which maintains the value of holderi and the current holder of the main-token, is summarized as follows.

1) Before the main-token moves from Pk to another process, Pk sends a leave message to each process Pj in a quorum (line 10.2), and Pj nullifies the value of holderj (line 11.1).

2) As a response to a leave message, each Pj sends an ack message to Pk (line 11.2).

3) By receiving an ack message from each process in a quorum (lines 12.1–12.2), Pk becomes ready to transfer the main-token.

By this protocol, we have the following two properties.

• Suppose that Pj sends a request message to holderj when holderj ≠ ⊥ holds. By the ack message and the FIFO property of the channel, the request message is received by Pk before Pk transfers the main-token to another process.

• By the intersection property of the coterie, at least one process in any quorum (eventually) knows the holder of the main-token, and at least one of the request messages sent to the processes in a quorum is successfully forwarded to the holder of the main-token.

E. Priority scheme of the token queue

In the proposed algorithm, each request is put into the queue of the main-token, even if the requesting process holds a main-token that is not in use. Because of this structure, we can define various priority schemes on the queue of the main-token.

A simple priority scheme is the first-come-first-served (FCFS) scheme, in which requests are granted in the order in which they are enqueued. Unfortunately, this scheme yields low expected concurrency of critical section entries. Suppose that each process selects one of two groups uniformly at random when it makes a request. Let r1, r2, ... be requests such that ri is the i-th request that arrives at the queue of the main-token. Let K be the random variable such that the groups of r1, r2, ..., rK are all the same and that of rK+1 is different. By a simple probabilistic analysis (sketched below), the expected value of K is at most 2, and hence the FCFS scheme is not enough to yield higher concurrency. On the other hand, a priority scheme must be non-starving, i.e., every request must be granted eventually in any execution.
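The bound on E[K] follows from a one-line geometric argument under the stated uniform two-group assumption (our sketch):

\Pr[K \ge k] = \left(\tfrac{1}{2}\right)^{k-1}
\quad\Longrightarrow\quad
\mathrm{E}[K] = \sum_{k \ge 1} \Pr[K \ge k]
             = \sum_{k \ge 1} \left(\tfrac{1}{2}\right)^{k-1} = 2,

since, once r1 fixes a group, each subsequent request independently matches that group with probability 1/2.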

Let r1, r2, ..., rL be the requests in the queue, where L is the number of requests in the queue. We assume that ri is the i-th oldest request in the queue; that is, r1 is the oldest request and rL is the latest one. Since the requests are centralized at the main-token, we can define various priority schemes. As an example, we define the priority P(ri) of each ri as follows: P(ri) = αG(ri) + βX(ri) + γSG(ri) + δAG(ri), where

• α, β, γ, δ are constant parameters.

• Group: G(ri) is 1 if the group of ri is the same as the current group, and 0 otherwise.

• Order: X(ri) = (L − i + 1)/L is the reversed order of arrival of ri at the queue. The oldest request has value 1, and the latest one has the smallest value. (0 < X(ri) ≤ 1 for each ri.)

• Group Size: SG(ri) = sg(ri)/L, where sg(ri) is the number of requests that are in the same group as ri. (0 < SG(ri) ≤ 1 for each ri.)

• Group Age: AG(ri) = Σrj ag(rj)/n, where ag(rj) is the age of rj, i.e., the number of requests granted after rj was enqueued. The sum is taken over the requests rj that are in the same group as ri. (AG(ri) ≥ 0 and its value has no upper bound.)

• Vtok is the number of processes Pi such that toki ≠ ⊥.
• gSize is the value of tok.gSize.
• gName is the value of tok.gName.
• tsReq[i] is the value of tok.tsReq[i].
• reqQ is the set of all requests in tok.reqQ.
• tmpQ is the set of all requests in tmpQi for some Pi.
• TMAIN (resp. TSUB) is the number of processes Pi such that typei = MAIN (resp. = SUB).
• Ct is the total number of messages of type t in transit.
• Ct,i,k (resp. Ct,∗,k) is the number of messages of type t in transit from Pi (resp. any process) to Pk.
• Mt is the set of all messages of type t in transit.
• St,i,k is the total number of sends of messages of type t from Pi to Pk since system startup.
• Rt,i,k is the total number of receipts of messages of type t from Pi to Pk since system startup.
• Bacqs,i,j is 1 if acqsi ∋ Pj and 0 otherwise.
• Backs,i,j is 1 if acksi ∋ Pj and 0 otherwise.
• Btype,k,SUB is 1 if typek = SUB and 0 otherwise.

Fig. 6. Symbols used in the invariants.

Based on this framework, we obtain various priority schemes by setting the four parameters α, β, γ, δ.

• α = 0, β = 1, γ = δ = 0: This priority scheme is the same as the FCFS scheme. Although this scheme yields low concurrency, it is non-starving.

• α = 1, β = 0, γ = δ = 0: Although this scheme yields high concurrency, it is starving.

• α, δ > 0: This priority scheme yields both high concurrency and non-starvation. A sketch of the priority function is given below.
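The priority function can be sketched as follows (our illustration in Python; the argument names, the 0-based indexing, and the ages bookkeeping are assumptions, not part of the paper's pseudocode).

def priority(queue, i, current_group, ages, n, alpha=1.0, beta=1.0, gamma=0.0, delta=0.0):
    """Compute P(r_i) for the i-th oldest request (0-based).
    queue[j] = (pid, group); ages[j] = requests granted since queue[j] was enqueued;
    n = total number of processes."""
    L = len(queue)
    _, g_i = queue[i]
    same = [j for j, (_, g) in enumerate(queue) if g == g_i]
    G  = 1.0 if g_i == current_group else 0.0   # same group as the current group?
    X  = (L - i) / L                            # reversed order of arrival; oldest request = 1
    SG = len(same) / L                          # share of queued requests in r_i's group
    AG = sum(ages[j] for j in same) / n         # accumulated age of r_i's group
    return alpha * G + beta * X + gamma * SG + delta * AG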

IV. PROOF OF CORRECTNESS

In this section, we show the proof of correctness of TQGmx. Because of space limitations, we show only an outline. The algorithm maintains the invariants shown in Figure 7, and the symbols used in the invariants are shown in Figure 6. Invariants InvA and InvB assert that the number of tokens and the current group size are correctly maintained. Invariant InvD asserts that the group mutual exclusion condition is maintained.

Lemma 1: Invariant InvA is maintained for any execution.

For simplicity of the description of the invariants in Figure 7, the (unique) main-token is denoted by tok, which may be held by a process or may be in a token message in transit.

By induction, we show that the invariants are maintained. We omit the proofs of the following lemmas because one can easily check that each action of a message handler and event handler maintains the invariants.

Lemma 2: (Base step) When the system is initialized, all the invariants are satisfied, and the precondition of requestEvent is satisfied at each process.

Lemma 3: (Induction step, precondition of message receipt) Suppose that a message arrives at Pi such that (1) the precondition of the handler that sent the message was satisfied, and (2) all the invariants are satisfied just before the message arrives at Pi. Then, the precondition of the corresponding handler is satisfied.

Lemma 4: (Induction step, precondition of procedure call) Suppose that a handler is invoked with its precondition satisfied while all the invariants are maintained. Then, the precondition of any procedure invoked from the handler is also satisfied.

Lemma 5: (Induction step, invariants) Suppose that a handler is invoked with its precondition satisfied and all the invariants are satisfied just before the handler is invoked. Then, the execution of the handler maintains all the invariants.

Lemma 6: Suppose that a requestDonei event is triggered, assuming that (1) the precondition of the message handler or procedure in which the event is triggered is satisfied, and (2) all the invariants are satisfied on invocation of the message handler or procedure. Then, the precondition of the releaseEvent handler becomes true, and it remains true until the releaseEvent handler is invoked.

Lemma 7: Suppose that a releaseDonei event is triggered, assuming that (1) the precondition of the message handler or procedure in which the event is triggered is satisfied, and (2) all the invariants are satisfied on invocation of the message handler or procedure. Then, the precondition of the requestEvent handler becomes true, and it remains true until the requestEvent handler is invoked.

Theorem 1: For any execution, the proposed algorithm maintains all the invariants, and each handler is invoked with its precondition satisfied.

We now have the safety and liveness properties of the proposed algorithm.

Theorem 2: (Safety) For any execution, no two processes in different groups are in the critical section simultaneously.

Proof: Assume, to the contrary, that there exist an execution and two processes Pi and Pj such that

(modei = modej = INCS) ∧ (grpi ≠ grpj).

By InvB (2), we have gSize > 0. This implies gName ≠ ⊥ by InvD (9). By InvD (13), we have gName = grpi = grpj; a contradiction.

Theorem 3: (Liveness) A process that makes a request eventually enters the critical section, provided that the priority scheme of the queue of the main-token is non-starving.

Proof: (Outline) Suppose that Pi makes a request. The request of Pi is eventually enqueued into the queue of the main-token either locally, by a direct request message, or by an indirect request message forwarded by a process in a quorum. Then, the request in the queue is eventually granted, since the priority scheme is non-starving.

Note that the latest request item of Pi is not deleted from tmpQj for any Pj ∈ qi (at line 2.11 and line 8.10). Assume, to the contrary, that for each Pj the latest request item of Pi is deleted from tmpQj, and all the latest request items of Pi are lost. The only possible scenario is that each Pj receives an old request of Pi while the latest request is in tmpQj. By the FIFO property of a channel, this is possible only when

• the latest request arrives at Pj, and then

• an old request of Pi forwarded to Pj via some process Pk arrives, by asynchrony of the system.

Since Pk forwards the request to Pj, Pj is the holder of the main-token. Thus, the latest request is enqueued into the queue of the main-token when it arrives at Pj.

Now we have the following theorem.

Theorem 4: The proposed algorithm is a distributed group mutual exclusion algorithm.

• InvA: The main-token is maintained so that it is unique in the system.
    Vtok + Ctoken = 1   (1)

• InvB: tok.gSize counts the number of tokens in use.
    TMAIN + Csubtoken + TSUB + Crelease = gSize   (2)

• InvC: Maintenance of requests and timestamp values.
    ∀Pi : modei = IDLE ⇒
        (tsi = tsReq[i]) ∧ (∀⟨P, t, g⟩ ∈ reqQ : P ≠ Pi) ∧ (Ctoken,∗,i = Csubtoken,∗,i = 0)   (3)
    ∀Pi : modei = TRYING ⇒
        (tsi = tsReq[i] + 1) ∧ (∀⟨P, t, g⟩ ∈ reqQ : P ≠ Pi) ∧ (Ctoken,∗,i = Csubtoken,∗,i = 0)
      ∨ (tsi = tsReq[i]) ∧ (∃⟨P, t, g⟩ ∈ reqQ : P = Pi) ∧ (Ctoken,∗,i = Csubtoken,∗,i = 0)
      ∨ (tsi = tsReq[i]) ∧ (∀⟨P, t, g⟩ ∈ reqQ : P ≠ Pi) ∧ (Ctoken,∗,i = 1 ∧ Csubtoken,∗,i = 0)
      ∨ (tsi = tsReq[i]) ∧ (∀⟨P, t, g⟩ ∈ reqQ : P ≠ Pi) ∧ (Ctoken,∗,i = 0 ∧ Csubtoken,∗,i = 1)   (4)
    ∀Pi : modei = INCS ⇒
        (tsi = tsReq[i]) ∧ (∀⟨P, t, g⟩ ∈ reqQ : P ≠ Pi) ∧ (Ctoken,∗,i = Csubtoken,∗,i = 0)   (5)
    ∀Pi :
        (∀⟨P, t, g⟩ ∈ Mrequest : P = Pi ⇒ t ≤ tsi)   (6)
      ∧ (∀⟨P, t, g⟩ ∈ tmpQ : P = Pi ⇒ t ≤ tsi)   (7)
      ∧ (|{⟨P, t, g⟩ ∈ reqQ : P = Pi}| ≤ 1)   (8)

• InvD: The current group is maintained by gName.
    (gSize = 0 ⇔ gName = ⊥)   (9)
    ∧ ∀Pi :
        (modei ≠ IDLE ⇔ grpi ≠ ⊥)   (10)
      ∧ (∀⟨P, t, g⟩ ∈ reqQ : P = Pi ⇒ g = grpi)   (11)
      ∧ (Csubtoken,∗,i = 1 ⇒ gName = grpi)   (12)
      ∧ (modei = INCS ⇒ gName = grpi)   (13)

• InvE: Process mode and local variables.
    ∀Pi :
        (modei = INCS ⇔ (typei = MAIN ∨ typei = SUB))   (14)
      ∧ (typei = MAIN ⇒ (toki ≠ ⊥ ∧ homei = ⊥))   (15)
      ∧ (typei = SUB ⇒ toki = ⊥)   (16)
      ∧ (typei = SUB ⇔ homei ≠ ⊥)   (17)
      ∧ (homei ≠ Pi)   (18)

• InvF: Transfer of the main-token.
    (Ctoken = 1) ⇒
        (gSize = 0)   (19)
      ∧ (∀Pi : acqsi = ∅)   (20)
      ∧ (∀Pi : acksi = ∅)   (21)

• InvG: Management of the main-token and sub-tokens.
    ∀Pi :
        (∀Pk : 0 ≤ Ssubtoken,i,k − Rrelease,k,i)   (22)
      ∧ (∀Pk : Csubtoken,i,k + Btype,k,SUB + Crelease,k,i = Ssubtoken,i,k − Rrelease,k,i)   (23)
      ∧ (∀Pk : Ssubtoken,i,k − Rrelease,k,i > 0 ⇒ toki ≠ ⊥)   (24)
    ∧ ∀Pi :
        (∀Pk : 0 ≤ Rsubtoken,k,i − Srelease,i,k ≤ 1)   (25)
      ∧ (∀Pk : Rsubtoken,k,i − Srelease,i,k = 1 ⇔ homei = Pk)   (26)

• InvH: Sending acquired, leave, ack messages.
    ∀Pi ∀Pk :
        (0 ≤ Sacquired,i,k − Sleave,i,k ≤ Sacquired,i,k − Rack,k,i ≤ 1)   (27)
      ∧ (Sacquired,i,k − Sleave,i,k = Bacqs,i,k)   (28)
      ∧ (Sleave,i,k − Rack,k,i = Backs,i,k)   (29)
      ∧ (toki = ⊥ ⇒ acqsi = acksi = ∅)   (30)

• InvI: Receiving acquired, leave, ack messages and maintenance of holderi.
    ∀Pi ∀Pk :
        (Rleave,k,i = Sack,i,k)   (31)
      ∧ (Racquired,k,i − Rleave,k,i = 1 ⇔ holderi = Pk)   (32)

• InvJ: Local variables.
    ∀Pi :
        (toki = ⊥ ⇒ ¬leavingi)   (33)
      ∧ (toki = ⊥ ⇒ acksi = ∅)   (34)
      ∧ (acksi ≠ ∅ ⇒ acqsi = ∅)   (35)
      ∧ (leavingi ⇔ acksi ≠ ∅)   (36)
      ∧ (leavingi ⇒ gSize = 0)   (37)

Fig. 7. Invariants of the algorithm.

V. PERFORMANCE ANALYSIS

In this section, we show a performance analysis of TQGmx for the performance measures defined in Section II-D.

Lemma 8: The message complexity of the proposed algorithm is 0 in the best case and 5|Q| + 1 in the worst case, where |Q| is the size of the quorums used by the algorithm.

Proof: Consider a process Pi that holds the main-token when it makes a request, and suppose the priority of its request is the highest. Pi uses the main-token without any message exchange. Thus, the number of messages in this case is 0.

The worst-case scenario is as follows. (1) The requesting process Pi sends a request message to each process in a quorum, (2) each request is forwarded to the holder, say Pk, of the main-token, (3) Pk sends a leave message to each process in a quorum, and (4) an ack message is sent back to Pk from each process in the quorum. Although the main-token may be sent to another process, say Pℓ, we do not count such messages since they are not caused by Pi. (5) The main-token is transferred to Pi by a token message from some process, and (6) Pi sends an acquired message to each process in a quorum. In total, 5|Q| + 1 messages are exchanged.

Note that the worst-case message complexity 5|Q| + 1 is paid only by a process that receives the main-token. If x processes enter the critical section simultaneously, the amortized number of messages is at most {(5|Q| + 1) + (x − 1) · (2|Q| + 1)}/x, which equals 2.75|Q| + 1 when x = 4. Although the worst-case message complexity is large, the amortized number of messages is small when concurrency is high.
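A quick check of the amortized figure quoted above (our sketch in Python, reusing the constants 5|Q| + 1 and 2|Q| + 1 from the text):

def amortized_messages(q, x):
    # one process pays the worst case 5|Q| + 1; the other x - 1 processes of the
    # same group pay at most 2|Q| + 1 each (constants taken from the text above)
    return ((5 * q + 1) + (x - 1) * (2 * q + 1)) / x

print(amortized_messages(q=19, x=4))   # (11*19 + 4) / 4 = 53.25, i.e. 2.75|Q| + 1 for |Q| = 19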

Lemma 9: The maximum concurrency of the proposed algorithm is n.

Proof: When every process makes a request for the same group simultaneously, n − 1 sub-tokens are generated by the holder of the main-token. Thus, the maximum concurrency is n.

Lemma 10: The waiting time of the proposed algorithm is at most 5 message hops.

Proof: Let us observe a chain of messages. A requesting process sends a request message to each process in a quorum, and a request message is forwarded to the holder of the main-token. Then, leave and ack messages are exchanged, and finally a token message is transferred. Thus, 5 message hops are required in the worst case.

TABLE I
SIMULATION PARAMETERS

• n — the number of processes: 4, 9, 16, ..., 100.
• C — coterie: the grid coterie Cgrid = {qx,y : 0 ≤ x, y < √n} is used. Pi always selects quorum qx,y, where x = i div √n and y = i mod √n.
• Dmsg — message delay: 0.01 (fixed).
• Tcs — duration in CS: 1.0 (fixed).
• Tidle — duration in non-CS: 10−5, 10−4, ..., 10+5 (expected value under an exponential distribution).
• Gsel — group selection scheme: "fixed g(i) = 0" (Pi selects group 0), "fixed g(i) = i" (Pi selects group i), "random g(i) = 0..k" for each k = 1, 2, ..., 4 (Pi selects a group from 0, ..., k uniformly at random).
• α, β, γ, δ — priority parameters: α = 0.25k for each k = 0, 1, ..., 6, β = 1, γ = δ = 0.

Lemma 11: The synchronization delay of the proposed algorithm is at most 4 message hops.

Proof: Let us observe the actions when groups are switched. The worst case occurs when the last process, say Pi, of the current group holds a sub-token. When Pi exits the critical section, it sends a release message to the holder of the main-token, say Pk; then Pk sends leave messages and waits for ack messages. Then, it sends a token message. Thus, 4 hops are required to start the next critical section entries.

In the best case, Pi holds the main-token, and the synchronization delay is 3 message hops.

VI. SIMULATION RESULTS

We carried out simulations to evaluate the average performance of TQGmx. The simulation parameters are shown in Table I. We assumed that local computation and message transmission do not consume local time. The number of trials (simulations) is 100 for each combination of simulation parameters, and each simulation terminates when the total number of critical section entries by all processes reaches 1000n. Although the proposed algorithm assumes an asynchronous distributed system, our simulation model is a synchronous one with a global clock.

The simulation was carried out on a discrete event simulator we developed in the C language. The computing environment is as follows.

• IBM IntelliStation A Pro with dual Opteron 245 (2.8 GHz clock) and 10 GB of memory
• RedHat Enterprise Linux WS4 for AMD64/EM64T
• C compiler: GCC version 3.4.2

A. The number of messages

Figures 8(a) and (b) show the relation between n and the average number of messages per critical section entry. For cases in which every process selects the same group ("fixed g(i) = 0"), fewer messages are exchanged, because most processes enter the critical section by sub-tokens. On the other hand, for cases in which every process selects a different group ("fixed g(i) = i"), more messages are exchanged, because the main-token is transferred for each entry of the critical section. When requests are likely to conflict (Tidle = 0.01, Figure 8(a)), the average message complexity4 is approximately 1.1 × |Q| (resp., 3.3 × |Q|) in the best (resp., worst) case. When requests are unlikely to conflict (Tidle = 1000, Figure 8(b)), it is approximately 4.4 × |Q|.

4The quorum size of the grid coterie Cgrid used in our simulation is 2√n − 1. Although the quorum size is 19 when n = 100, the quorum size for message communication is 18, because each Pi is in its own quorum under our simulation setting and a process does not send messages to itself.

Fig. 8. TQGmx: The average number of messages per CS entry. (a) n and #messages (Tidle = 0.01); (b) n and #messages (Tidle = 100); (c) Tidle and #messages (n = 100).

Figure 8(c) shows the relation between Tidle and the number of messages when n = 100. The simulation results match our estimates shown below.

• When Tidle is large, requests are unlikely to conflict. Assuming that no request conflicts, we derive an upper bound on the expected number of messages. Suppose Pi makes a request while Pj holds the main-token. With probability 1/n, Pi = Pj holds and no message is exchanged. With probability (|Q| − 1)/n, Pi ∈ qj holds and 3|Q| + 2 messages are exchanged. With probability (n − |Q|)/n, Pi ∉ qj holds and at most 5|Q| + 1 messages are exchanged. Thus, the expected number of messages exchanged is at most 0 · (1/n) + (3|Q| + 2) · (|Q| − 1)/n + (5|Q| + 1) · (n − |Q|)/n.

• When Tidle is small, requests are likely to conflict. Assuming that every request conflicts, we derive an upper bound on the expected number of messages. By a probabilistic analysis similar to the one above, we have 0 · (1/n) + 3 · (|Q| − 1)/n + (|Q| + |I| + 2) · (n − |Q|)/n, provided that quorum selection is uniformly at random and request messages that are not forwarded immediately are ignored. Here, |I| is the expected intersection size of two quorums (i.e., |I| = E[|qi ∩ qj|]) when Pi ∉ qj. In the case of Cgrid we have |I| = 2. A numeric evaluation of these two bounds is given below.

Fig. 9. TQGmx: Distribution of concurrency when Tidle = 0.1. (a) Distribution when Gsel is "random g(i) = 0..1"; (b) Distribution when Gsel is "random g(i) = 0..2".
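The two bounds above can be evaluated numerically for the simulated setting n = 100, |Q| = 19, |I| = 2 (a sketch of ours in Python; the function names are ours):

def light_load_bound(n, q):
    # requests unlikely to conflict (large Tidle)
    return 0 * (1 / n) + (3 * q + 2) * (q - 1) / n + (5 * q + 1) * (n - q) / n

def heavy_load_bound(n, q, i_size):
    # requests likely to conflict (small Tidle)
    return 0 * (1 / n) + 3 * (q - 1) / n + (q + i_size + 2) * (n - q) / n

print(light_load_bound(100, 19))      # about 88.4 messages, roughly 4.7 x |Q|
print(heavy_load_bound(100, 19, 2))   # about 19.2 messages, roughly 1.0 x |Q|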

B. Concurrency

Figure 9 shows the distribution of concurrency when Tidle = 0.1. In our simulation, the distribution of concurrency is measured as follows. Each time a process enters the critical section, we measure the concurrency, i.e., the number of processes that are in the critical section at the same time, and the frequency (number of occurrences) of each concurrency value is counted. The normalized frequency is computed so that the sum of the normalized frequencies equals 1.0. For example, Figure 9(a) shows that, in the case α = 0.25, 39 processes are likely to be in the critical section with probability 0.10 when a process enters the critical section.

The figures show that the proposed algorithm achieves high concurrency by setting the value of α.

C. Waiting time

Figure 10(a) shows waiting time decreases when the processesare likely to select the same group. Figure 10(b) shows waiting

0

20

40

60

80

100

10-6 10-4 10-2 100 102 104 106

wai

ting

time

Tidle

TQGmx : Dmsg=0.01, n=100, Tcs=1, α=0.0

fixed g(i) = irandom g(i)=0..2random g(i)=0..1fixed g(i) = 0

(a) Tidle and waiting time for each group selections

0

20

40

60

80

100

10-6 10-4 10-2 100 102 104 106

wai

ting

time

Tidle

TQGmx : Dmsg=0.01, n=100, Gsel="random g(i)=0..2", Tcs=1

α=0.0α=0.25α=0.5α=0.75α=1.0

(b) Tidle and waiting time for eachα when Gsel is “randomg(i) = 0..2”

0.0

0.5

1.0

1.5

2.0

2.5

3.0

3.5

4.0

4.5

5.0

0 10 20 30 40 50 60 70 80 90 100

wai

ting

time

(in m

essa

ge h

ops)

n

TQGmx : Dmsg=0.01, Gsel="fixed g(i) = 0", Tcs=1, α=0.0

Tidle=100000Tidle=1000Tidle=100Tidle=10Tidle=1Tidle=0.00001

(c) n and waiting time for eachTidle whenGsel is “fixed g(i) =0”

Fig. 10. TQGmx: Average waiting time.

When the value of α is increased, (1) concurrency increases as shown in Figure 9(b), (2) the waiting time of processes in the same group decreases, but (3) the waiting time of processes in other groups increases. Figure 10(b) shows that the average waiting time decreases as α increases.

Figure 10(c) shows the waiting time in message hops^5 when each process always selects group 0. As the figure shows, 5 message hops is the worst case and 3 message hops is the best case.

^5 The value is computed by dividing the waiting time measured in simulation time by the message delay Dmsg (= 0.01).
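For example (an illustrative value, not a measured result from the simulation), a waiting time of 0.05 simulation-time units corresponds to 0.05 / 0.01 = 5 message hops under this setting.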

VII. RELATED WORKS

A. Quorum-based distributed algorithms

As a communication structure in distributed systems, coterie (or quorums), proposed by Garcia-Molina and Barbara in [4], is widely used for communication between processes. Distributed mutual exclusion and concurrency control of databases, for example, can be implemented with lower message complexity and higher reliability by using coteries [15], [8]. After Garcia-Molina and Barbara proposed the concept of coterie, many variants of coteries (or quorums) have been proposed for solving various conflict resolution and coordination problems in distributed systems. For example, see [7], [10], [16], [11], [12], [17], [13].
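As one concrete example of such a quorum structure, the following minimal sketch (illustrative only; not necessarily the exact Cgrid construction used in our simulation) builds the classical grid coterie: arrange n = k × k processes in a k × k grid and let each quorum be one full row plus one full column, so that any two quorums intersect and each quorum has size 2k − 1.

    def grid_coterie(k):
        # Process ids 0 .. k*k-1 arranged row by row in a k x k grid.
        ids = [[r * k + c for c in range(k)] for r in range(k)]
        quorums = []
        for r in range(k):
            for c in range(k):
                row = set(ids[r])
                col = {ids[x][c] for x in range(k)}
                quorums.append(row | col)      # size 2k - 1
        return quorums

    # Intersection property of a coterie: any two quorums share a process.
    qs = grid_coterie(4)
    assert all(a & b for a in qs for b in qs)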

B. Distributed group mutual exclusion algorithms

The problem of group mutual exclusion was first proposed by Joung in [1], in which an algorithm for shared memory parallel computer systems is proposed. Below, we review major distributed algorithms for the problem of group mutual exclusion.

1) Joung's broadcast-based algorithm: In [2], Joung proposed a distributed algorithm for the problem by extending Ricart and Agrawala's distributed mutual exclusion algorithm [18]. He proposed two algorithms, RA1 and RA2. Since these algorithms broadcast request messages, their message complexities are both O(n).

In RA1, a process wishing to enter critical section broadcasts a request message to every other process. If it obtains an acknowledgment from every other process, it enters critical section. Although the maximum concurrency of RA1 is n in the best case, the expected concurrency is shown to be O(1) even if the number of groups is two. To this end, RA2 is proposed to increase the expected concurrency by giving priority to requests of the same group as the current group.

The implementation of the priority schemes in RA1 and RA2 is distributed in the sense that each process locally decides whether it grants a request or not. Unfortunately, it is difficult to control the priority scheme of the algorithm because its implementation is distributed.

2) Joung's quorum-based algorithm: In [3], Joung proposed a quorum system, called the surficial quorum system, for solving the problem. A requesting process must obtain a permission from each process in a quorum to enter critical section. The implementation of the priority scheme is distributed. Two algorithms are proposed, differing in the concurrency of issuing request messages. A concurrent version, Maekawa_M, sends all request messages in parallel, which requires additional messages to avoid deadlocks. A serial version, named Maekawa_S, sequentially obtains a permission from each process in a quorum to avoid deadlocks (cf. [19]); a small sketch of this serial acquisition idea is given below.
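The deadlock freedom of the serial version rests on a classical idea (cf. [19]): acquire permissions one at a time following a fixed global order, so that a circular wait cannot form. The following is a minimal sketch of that idea only; the function names and the use of process-id order are our assumptions, not Joung's actual code.

    def acquire_quorum_serially(quorum, request_permission):
        # Request each quorum member's permission one at a time, in a fixed
        # total order over process ids, so circular wait cannot arise.
        granted = []
        for p in sorted(quorum):
            request_permission(p)   # assumed blocking primitive: returns once p grants
            granted.append(p)
        return granted              # caller may now enter critical section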

The surficial quorum system has a limitation on the maximum concurrency because of its structure. These algorithms can use ordinary quorum systems (i.e., coteries) instead of surficial quorum systems, in which case the maximum concurrency becomes n.

3) Atreya and Mittal's algorithm: In [6], Atreya and Mittal proposed an algorithm Surrogate based on quorums, built around the concept of surrogate quorums. The algorithm adopts a leader-follower scheme. A process that obtains a permission from each process in a quorum becomes the leader. Other processes in the same group obtain permission from the leader and enter critical section as followers. The algorithm achieves low message complexity O(|Q|) even in the worst case, where |Q| is the size of the quorum used.

The leader grants followers' requests only when it is about to enter critical section, and no requests are granted while the leader is in critical section. Hence it is difficult to increase concurrency. To improve concurrency, they proposed another algorithm in [6] in which the leader can grant followers' requests while it is in critical section. We call it SurrogateEx in this paper. Unfortunately, as we will show by simulation in Subsection VII-C, the improvement of concurrency is limited in the average case.

The limited concurrency comes from the leader's protocol for granting followers' requests. After a leader exits critical section, even if there are many pending requests in the same group, it never grants any requests. Thus, if a leader exits critical section immediately, the increase of concurrency is limited.

4) Mittal and Mohan's algorithm: In [5], Mittal and Mohan proposed an algorithm TokenGME based on tokens, which is an extension of the distributed mutual exclusion algorithm by Suzuki and Kasami [20]. When a process makes a request, it broadcasts request messages to every other process and waits for the arrival of a token. Thus its message complexity is O(n). There are two types of tokens, primary and secondary, as in our algorithm. The holder of the primary token generates a new secondary token when it receives a request for the same group. Since the primary token holds a request queue, the implementation of the priority scheme is centralized.

5) Proposed algorithm TQGmx: Our algorithm is an extension of the algorithm of Mizuno, Neilsen and Rao [7] for (non-group) distributed mutual exclusion. It is based on a token for permissions and quorums for requesting communication. Their algorithm uses three message types: request for requesting (sent to processes in a quorum), acquired for notification of token acquisition (sent to processes in a quorum), and privileged for token transfer. Each request is forwarded to a token holder and enqueued in the token's queue; the request is then granted.

To make it a group mutual exclusion algorithm, a token holder may simply issue a sub-token for requests in the same group based on the leader-follower scheme. Unfortunately, such a straightforward extension does not increase concurrency because, by the property of the algorithm in [7], at most one request is forwarded after receipt of an acquired message from a token holder, and a request message may be forwarded to a process that does not hold a token. To increase concurrency, our algorithm forwards any number of requests to a holder of the main-token. To keep the message complexity small, our algorithm keeps knowledge of the holder of the main-token up to date by introducing messages leave and ack and a local variable holder_i, so that a forwarded request is received by a holder of the main-token.
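The following highly simplified sketch illustrates only the request-forwarding idea described above; it omits the priority scheme, token transfer, and the leave/ack handshake, and all names (Process, send, pending) are our illustrative assumptions rather than the algorithm's actual pseudocode.

    class Process:
        def __init__(self, pid, quorum, send):
            self.pid = pid
            self.quorum = quorum        # quorum q_i used when requesting
            self.holder = None          # holder_i: last known holder of the main-token
            self.has_main_token = False
            self.pending = []           # request queue kept while holding the main-token
            self.send = send            # send(dest, message): assumed transport primitive

        def request_cs(self, group):
            # Send a request to every member of the quorum.
            for p in self.quorum:
                self.send(p, ("request", self.pid, group))

        def on_request(self, requester, group):
            if self.has_main_token:
                # The main-token holder collects any number of forwarded requests.
                self.pending.append((requester, group))
            elif self.holder is not None:
                # Forward toward the process believed to hold the main-token.
                self.send(self.holder, ("request", requester, group))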

In TQGmx, even after a token holder (leader) exits critical section, it may continue granting followers' requests to increase concurrency as long as there is a request with the highest priority in the current group. When there is a request in the current group, pending requests of different groups can be delayed longer to increase concurrency by setting the parameters of the priority scheme proposed in Subsection III-E.

C. Comparison of performance

In Table II, the performance of distributed group mutual exclusion algorithms is summarized. The proposed algorithm improves on the message complexity of previous algorithms (except Maekawa_S, which requires a large waiting time). Among these algorithms, Surrogate and SurrogateEx have performance competitive with our algorithm.

[Fig. 11. Surrogate and SurrogateEx: The average number of messages per CS entry. (a) Surrogate: Tidle and #messages (n = 100); (b) SurrogateEx: Tidle and #messages (n = 100). (Dmsg = 0.01, Tcs = 1.) Plot data omitted.]

We carried out simulations to evaluate the performance of these two algorithms under the same setting described in Section VI. The results are shown^6 in Figures 11, 12 and 13.

• Message complexity. (See Figures 11 and 8(c).) Surrogate and SurrogateEx are efficient when requests are unlikely to conflict (i.e., Tidle is large), since the deadlock avoidance mechanism is not triggered, and group selection has little effect on message complexity. On the other hand, TQGmx requires a small number of messages when requests are likely to conflict, and its message complexity is reduced if processes are likely to select the same group.

• Concurrency. (See Figures 12 and 9.) Even if every process selects the same group, the concurrency of SurrogateEx is low in practice. On the other hand, TQGmx achieves higher concurrency even if each process selects a group randomly from two or three.

• Waiting time. (See Figures 13 and 10(a).) When the system is busy (i.e., Tidle is small) and each process selects the same group (i.e., “fixed g(i) = 0”), the waiting time of TQGmx is much smaller than that of SurrogateEx. When each process selects a different group (i.e., “fixed g(i) = i”), the waiting time of SurrogateEx is better than that of TQGmx; however, the difference is small.

In conclusion, TQGmx achieves high concurrency and small message complexity when the system is busy.

^6 Since the concurrency and waiting time of Surrogate are worse than those of SurrogateEx, only the results of SurrogateEx are shown.

[Fig. 12. SurrogateEx: Distribution of concurrency when Tidle = 0.1. (a) Distribution when Gsel is “fixed g(i) = 0”. (Dmsg = 0.01, n = 100, Tcs = 1.) Plot data omitted.]

[Fig. 13. SurrogateEx: Average waiting time. (a) Tidle and waiting time for each group selection. (Dmsg = 0.01, n = 100, Tcs = 1.) Plot data omitted.]

VIII. CONCLUSION

In this paper, we proposed a distributed group mutual exclusion algorithm based on tokens for permissions and quorums for requesting communication. Although the proposed algorithm is fully distributed, the management of pending requests is centralized at the holder of the main-token. The proposed algorithm was implemented on a discrete event simulator and its performance was evaluated. In particular, our algorithm is shown to achieve high concurrency on average, which is an important factor for group mutual exclusion and a benefit of the proposed algorithm. The development of an algorithm that reduces waiting time and synchronization delay while keeping high concurrency is left as future work.

ACKNOWLEDGMENT

This work is supported in part by the Global COE (Centers of Excellence) Program of MEXT, Grants-in-Aid for Scientific Research ((B)19300017, (B)19700075 and (B)17300020) of JSPS, and the Kayamori Foundation of Informational Science Advancement.

REFERENCES

[1] Yuh-Jzer Joung, “Asynchronous group mutual exclusion,” Distributed Computing, vol. 13, pp. 189–206, 2000.

[2] Yuh-Jzer Joung, “The congenial talking philosophers problem in computer networks,” Distributed Computing, vol. 15, pp. 155–175, 2002.

[3] Yuh-Jzer Joung, “Quorum-based algorithms for group mutual exclusion,” IEEE Transactions on Parallel and Distributed Systems, vol. 14, no. 5, May 2003.

TABLE II
PERFORMANCE COMPARISON OF DISTRIBUTED GROUP MUTUAL EXCLUSION ALGORITHMS.

Algorithm               | message complexity (best / worst)   | maximum concurrency | waiting time | synchronization delay
TQGmx (this paper)      | 0 / 5|Q| + 1                        | n                   | 0 – 5        | 3 – 4
RA1 [2]                 | 2(n − 1) / 2(n − 1)                 | n                   | 2            | 1
RA2 [2]                 | 2(n − 1) / 3(n − 1)                 | n                   | 2            | 1
Maekawa_M w/ S.Q. [3]   | 3√(2n(m − 1)/m) / O(nm)             | √(2n/(m(m − 1)))    | 2            | 2
Maekawa_M w/ O.Q. [3]   | 3|Q| / O(n|Q|)                      | n                   | 2            | 2
Maekawa_S w/ O.Q. [3]   | 2|Q| + 1 / 2|Q| + 1                 | n                   | |Q| + 1      | |Q| + 1
Surrogate [6]           | 3|Q| + 2 / 8|Q| + 1                 | n                   | 2            | 2
SurrogateEx [6]         | 3|Q| + 2 / 10|Q| + 1                | n                   | 2            | 2
TokenGME [5]            | 0 / 2n − 1                          | n                   | 2            | 1

“S.Q.” denotes surficial quorum systems, “O.Q.” denotes ordinary quorum systems, i.e., coteries. n (resp. m) is the number of processes (resp. groups).

[4] Hector Garcia-Molina and Daniel Barbara, “How to assign votes in a distributed system,” Journal of the ACM, vol. 32, no. 4, pp. 841–860, Oct. 1985.

[5] Neeraj Mittal and Prajwal K. Mohan, “A priority-based distributed group mutual exclusion algorithm when group access is non-uniform,” Journal of Parallel and Distributed Computing, vol. 67, pp. 797–815, 2007.

[6] Ranganath Atreya, Neeraj Mittal, and Sathya Peri, “A quorum-based group mutual exclusion algorithm for a distributed system with dynamic group set,” IEEE Transactions on Parallel and Distributed Systems, vol. 18, no. 10, pp. 1345–1360, 2007.

[7] Masaaki Mizuno, Mitchell L. Neilsen, and Raghavendra Rao, “A token based distributed mutual exclusion algorithm based on quorum agreements,” in Proceedings of the 11th International Conference on Distributed Computing Systems (ICDCS), 1991, pp. 361–368.

[8] Mamoru Maekawa, “A √N algorithm for mutual exclusion in decentralized systems,” ACM Transactions on Computer Systems, vol. 3, no. 2, pp. 145–159, Mar. 1985.

[9] Yuh-Jzer Joung, “Quorum-based algorithms for group mutual exclusion,” in Proceedings of the 15th Conference on Distributed Computing (DISC), 2001, vol. 2180 of LNCS, pp. 16–32, Springer-Verlag.

[10] Hirotsugu Kakugawa, Satoshi Fujita, Masafumi Yamashita, and Tadashi Ae, “A distributed k-mutual exclusion algorithm using k-coterie,” Information Processing Letters, vol. 49, no. 2, pp. 213–218, 1994.

[11] Hirotsugu Kakugawa and Masafumi Yamashita, “Local coteries and a distributed resource allocation algorithm,” Transactions of Information Processing Society of Japan, vol. 37, no. 8, pp. 1487–1496, 1996.

[12] Dahlia Malkhi, Michael K. Reiter, and Rebecca N. Wright, “Probabilistic quorum systems,” in Proceedings of the 16th Annual ACM Symposium on Principles of Distributed Computing (PODC), 1997, pp. 267–273.

[13] Ken Miura, Taro Tagawa, and Hirotsugu Kakugawa, “A quorum-based protocol for searching objects in peer-to-peer networks,” IEEE Transactions on Parallel and Distributed Systems, vol. 17, no. 1, pp. 25–37, Jan. 2006.

[14] Ranganath Atreya and Neeraj Mittal, “A dynamic group mutual exclusion algorithm using surrogate-quorums,” in Proceedings of the 25th IEEE International Conference on Distributed Computing Systems (ICDCS), 2005, pp. 251–260.

[15] Daniel Barbara and Hector Garcia-Molina, “The reliability of voting mechanisms,” IEEE Transactions on Computers, vol. C-36, no. 10, pp. 1197–1208, Oct. 1987.

[16] Shing-Tsaan Huang, Jehn-Ruey Jiang, and Yu-Chen Kuo, “k-coteries for fault-tolerant k entries to a critical section,” in Proceedings of the 13th International Conference on Distributed Computing Systems (ICDCS), 1993, pp. 362–369.

[17] Yoshifumi Manabe, Roberto Baldoni, Michel Raynal, and Shigemi Aoyagi, “k-arbiter: A safe and general scheme for h-out of-k mutual exclusion,” Theoretical Computer Science, vol. 193, no. 1–2, pp. 97–112, Feb. 1998.

[18] Glenn Ricart and Ashok K. Agrawala, “An optimal algorithm for mutual exclusion in computer networks,” Communications of the ACM, vol. 24, no. 1, pp. 9–17, Jan. 1981.

[19] J. W. Havender, “Avoiding deadlocks in multi-tasking systems,” IBM Systems Journal, vol. 7, no. 2, pp. 78–84, 1968.

[20] Ichiro Suzuki and Tadao Kasami, “A distributed mutual exclusion algorithm,” ACM Transactions on Computer Systems, vol. 3, no. 4, pp. 344–349, Nov. 1985.

Hirotsugu Kakugawa received the B.E. degree in engineering in 1990 from Yamaguchi University, and the M.E. and D.E. degrees in information engineering in 1992 and 1995, respectively, from Hiroshima University. He is currently an associate professor at Osaka University. He is a member of the IEEE Computer Society and the Information Processing Society of Japan.

Sayaka Kamei received the B.E., M.E. and D.E. degrees in electronics engineering in 2001, 2003 and 2006, respectively, from Hiroshima University. She is currently an assistant professor at Tottori University of Environmental Studies. Her research interests include distributed algorithms. She is a member of the Institute of Electronics, Information and Communication Engineers and the IEEE Computer Society.

Toshimitsu Masuzawa received the B.E., M.E. and D.E. degrees in computer science from Osaka University in 1982, 1984 and 1987. He worked at Osaka University during 1987-1994, and was an associate professor of the Graduate School of Information Science, Nara Institute of Science and Technology (NAIST) during 1994-2000. He is now a professor of the Graduate School of Information Science and Technology, Osaka University. He was also a visiting associate professor of the Department of Computer Science, Cornell University, during 1993-1994. His research interests include distributed algorithms, parallel algorithms and graph theory. He is a member of ACM, IEEE, IEICE and the Information Processing Society of Japan.

