
Solutions Manual to accompany

Probability, Random Variables

and Stochastic Processes

Fourth Edition

Athanasios Papoulis Polytechnic University

S. Unnikrishna Pillai Polytechnic University

Solutions Manual to accompany PROBABILITY, RANDOM VARIABLES AND STOCHASTIC PROCESSES, FOURTH EDITION ATHANASIOS PAPOULIS Published by McGraw-Hill Higher Education, an imprint of The McGraw-Hill Companies, Inc., 1221 Avenue of the Americas, New York, NY 10020. Copyright © 2002 by The McGraw-Hill Companies, Inc. All rights reserved. The contents, or parts thereof, may be reproduced in print form solely for classroom use with PROBABILITY, RANDOM VARIABLES AND STOCHASTIC PROCESSES, FOURTH EDITION, provided such reproductions bear copyright notice, but may not be reproduced in any other form or for any other purpose without the prior written consent of The McGraw-Hill Companies, Inc., including, but not limited to, in any network or other electronic storage or transmission, or broadcast for distance learning. www.mhhe.com

Problem Solutions for Chapter 3

3-1 (a) P{A occurs at least twice in n trials}

= 1 − P{A never occurs in n trials} − P{A occurs once in n trials}

= 1 − (1 − p)^n − np(1 − p)^{n−1}

(b) P{A occurs at least thrice in n trials}

= 1 − P{A never occurs in n trials} − P{A occurs once in n trials} − P{A occurs twice in n trials}

= 1 − (1 − p)^n − np(1 − p)^{n−1} − (n(n−1)/2) p²(1 − p)^{n−2}

3-2 P{double six} = 1/36.

P{double six at least three times in n trials}

= 1 − P{double six at most twice in n trials}

= 1 − (35/36)^n − n(1/36)(35/36)^{n−1} − (n(n−1)/2)(1/36)²(35/36)^{n−2}

3-3 (a) p_1 = …

(b) …

(c) …

3-4 (a) Let n represent the number of wins required in 50 games so that the net gain or loss does not exceed $1. This gives the net gain to be

−1 ≤ n − (50 − n) ≤ 1

24.5 ≤ n ≤ 25.5

n = 25

P{net gain does not exceed $1} = C(50, 25)(1/2)^25 (1/2)^25 ≈ 0.1123

P{net gain or loss exceeds $1} = 1 − 0.1123 = 0.8877.

(b) Let n represent the number of wins required so that the net gain or loss does not exceed $5. This gives

−5 ≤ n − (50 − n) ≤ 5

22.5 ≤ n ≤ 27.5

P{net gain does not exceed $5} = Σ_{n=23}^{27} C(50, n)(1/2)^50 ≈ 0.5201

P{net gain or loss exceeds $5} = 1 − 0.5201 = 0.4799.

3-5 Define the events
A = "r successes in n Bernoulli trials",
B = "success at the i-th Bernoulli trial",
C = "r−1 successes in the remaining n−1 Bernoulli trials excluding the i-th trial".

Then

P(A) = C(n, r) p^r q^{n−r}

P(B) = p

P(C) = C(n−1, r−1) p^{r−1} q^{n−r}.

We need

P(B|A) = P(AB)/P(A) = P(BC)/P(A) = P(B)P(C)/P(A) = C(n−1, r−1)/C(n, r) = r/n.
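This r/n answer is independent of both p and i, which can be sanity-checked by brute-force enumeration of all 2^n outcome sequences (a quick sketch; the values of n, r and p below are illustrative, not taken from the problem):

```python
from itertools import product

def cond_prob_success_at_i(n, r, p, i):
    """P{success at trial i | exactly r successes in n trials} by enumeration."""
    pA = 0.0   # P{r successes in n trials}
    pAB = 0.0  # P{r successes AND success at trial i}
    for outcome in product([0, 1], repeat=n):
        if sum(outcome) != r:
            continue
        prob = p ** r * (1 - p) ** (n - r)   # every such sequence has equal probability
        pA += prob
        if outcome[i] == 1:
            pAB += prob
    return pAB / pA

n, r = 5, 2
for i in range(n):
    assert abs(cond_prob_success_at_i(n, r, 0.3, i) - r / n) < 1e-12
```

Note that the conditional probability does not depend on p or on which trial i is chosen, exactly as the closed form predicts.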

3-6 There are C(52, 13) ways of selecting 13 cards out of 52 cards. The number of ways to select 13 cards of any one suit (out of 13 cards) equals C(13, 13) = 1. Four such mutually exclusive suits give the total number of favorable outcomes to be 4. Thus the desired probability is given by

4/C(52, 13) ≈ 6.3 × 10^{−12}.
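As a quick numeric check of this count (using Python's `math.comb`; just one way to evaluate it):

```python
from math import comb

total = comb(52, 13)        # all 13-card hands from a 52-card deck
favorable = 4               # one complete hand per suit
p = favorable / total
print(total, p)             # 635013559600, about 6.3e-12
```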

3-7 Using the hint we obtain

p(N_{k+1} − N_k) = q(N_k − N_{k−1}) − 1.

Let

M_{k+1} = N_{k+1} − N_k

so that the above iteration gives

M_{k+1} = (q/p) M_k − 1/p

= (q/p)^k M_1 − (1/(p−q)){1 − (q/p)^k},  p ≠ q
= M_1 − k/p,  p = q.

This gives

N_i = Σ_{k=0}^{i−1} M_{k+1}

= (M_1 + 1/(p−q)) Σ_{k=0}^{i−1} (q/p)^k − i/(p−q),  p ≠ q
= i M_1 − i(i−1)/(2p),  p = q

where we have used N_0 = 0. Similarly N_{a+b} = 0 gives

M_1 + 1/(p−q) = ((a+b)/(p−q)) · (1 − q/p)/(1 − (q/p)^{a+b}).

Thus

N_i = ((a+b)/(p−q)) · (1 − (q/p)^i)/(1 − (q/p)^{a+b}) − i/(p−q),  p ≠ q
N_i = i(a+b−i),  p = q

which gives for i = a

N_a = ((a+b)/(p−q)) · (1 − (q/p)^a)/(1 − (q/p)^{a+b}) − a/(p−q),  p ≠ q
N_a = ab,  p = q

= b/(2p−1) − ((a+b)/(2p−1)) · (1 − (p/q)^b)/(1 − (p/q)^{a+b}),  p ≠ q
= ab,  p = q.

3-8 Here

P_n = p P_{n+1} + q P_{n−1}.

Arguing as in the previous problem, we get the corresponding iteration equation

P_{n+1} − P_n = (q/p)(P_n − P_{n−1})

and proceed as in the gambler's ruin example of the text.

3-9 Suppose one bets on k = 1, 2, ···, 6. Then

p_1 = P{k appears on one die} = C(3, 1)(1/6)(5/6)² = 75/216

p_2 = P{k appears on two dice} = C(3, 2)(1/6)²(5/6) = 15/216

p_3 = P{k appears on all three dice} = (1/6)³ = 1/216

p_0 = P{k appears on none} = (5/6)³ = 125/216.

Thus we get

Net gain = 1·p_1 + 2·p_2 + 3·p_3 − p_0 = (75 + 30 + 3 − 125)/216 = −17/216 ≈ −0.08.
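The expected net gain can be confirmed exactly by enumerating all 6³ equally likely rolls (a small sketch; betting on face 1 is an arbitrary choice, since any face is equivalent by symmetry):

```python
from fractions import Fraction
from itertools import product

k = 1                      # face we bet on (any face works by symmetry)
net = Fraction(0)
for roll in product(range(1, 7), repeat=3):
    matches = roll.count(k)
    payoff = matches if matches > 0 else -1   # win $1 per match, else lose the $1 stake
    net += Fraction(payoff, 6 ** 3)

print(net)   # -17/216
```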


Chapter 15

15.1 The chain represented by

P =
 0    1/2  1/2
 1/2  0    1/2
 1/2  1/2  0

is irreducible and aperiodic. The second chain is also irreducible and aperiodic. The third chain has two aperiodic closed sets {e1, e2} and {e3, e4} and a transient state e5.

15.2 Note that both the row sums and the column sums are unity in this case. Hence P represents a doubly stochastic matrix here, and

P^n → (1/(m+1)) J,  n → ∞,

where J is the (m+1) × (m+1) matrix with every entry equal to 1, so that

lim_{n→∞} P{x_n = e_k} = 1/(m+1),  k = 0, 1, 2, ··· m.
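The convergence of P^n toward the uniform distribution can be illustrated with the 3-state doubly stochastic chain of Problem 15.1 (m + 1 = 3); the pure-Python matrix power below is just a sketch:

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Doubly stochastic chain from Problem 15.1 (row and column sums are 1)
P = [[0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0]]

Pn = P
for _ in range(50):          # P^51: the non-unit eigenvalues (-1/2) have died out
    Pn = mat_mul(Pn, P)

for row in Pn:
    for entry in row:
        assert abs(entry - 1.0 / 3.0) < 1e-9
```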

15.3 This is the “success runs” problem discussed in Examples 15-11 and 15-23. From Example 15-23, we get

u_{i+1} = p_{i,i+1} u_i = u_i/(i+1) = u_0/(i+1)!

so that from (15-206)

Σ_{k=0}^∞ u_k = u_0 Σ_{k=0}^∞ 1/k! = e · u_0 = 1

gives u_0 = 1/e, and the steady-state probabilities are given by

u_k = (1/e)/k!,  k = 1, 2, ···
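A quick numerical sanity check that u_0 = 1/e indeed normalizes the distribution u_k = u_0/k! (truncating the series at 60 terms, far beyond double precision):

```python
from math import factorial, e

u0 = 1.0 / e
u = [u0 / factorial(k) for k in range(60)]   # u_k = (1/e)/k!

assert abs(sum(u) - 1.0) < 1e-12             # steady-state probabilities sum to 1
assert abs(u[1] - u0) < 1e-15                # u_1 = u_0/1! = u_0, matching the recursion
```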

15.4 If the zeroth generation has size m, then the overall process may be considered as the sum of m independent and identically distributed branching processes x_n^(k), k = 1, 2, ··· m, each corresponding to unity size at the zeroth generation. Hence if π_0 represents the probability of extinction for any one of these individual processes, then the overall probability of extinction is given by

lim_{n→∞} P[x_n = 0 | x_0 = m]

= P[{x_n^(1) = 0 | x_0^(1) = 1} ∩ {x_n^(2) = 0 | x_0^(2) = 1} ∩ ··· ∩ {x_n^(m) = 0 | x_0^(m) = 1}]

= Π_{k=1}^m P[x_n^(k) = 0 | x_0^(k) = 1]

= π_0^m
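For a concrete single-line process, π_0 can be found by the standard fixed-point iteration z ← P(z) starting from 0; the quadratic offspring pgf below is an illustrative choice (it also previews the next problem, where the smallest root turns out to be p_0/p_2):

```python
# Hypothetical offspring pgf P(z) = p0 + p1 z + p2 z^2 (illustrative values)
p0, p1, p2 = 0.25, 0.25, 0.5

def P(z):
    return p0 + p1 * z + p2 * z * z

z = 0.0
for _ in range(2000):       # fixed-point iteration converges to the smallest root
    z = P(z)

pi0 = z
assert abs(pi0 - p0 / p2) < 1e-9      # here p0/p2 = 0.5 < 1 is the smallest root

m = 3
extinction_m = pi0 ** m               # extinction starting from m = 3 individuals
assert abs(extinction_m - 0.125) < 1e-8
```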

15.5 From (15-288)-(15-289),

P(z) = p_0 + p_1 z + p_2 z²,  since p_k = 0, k ≥ 3.

Also p_0 + p_1 + p_2 = 1, and from (15-307) the extinction probability is given by solving the equation P(z) = z. Notice that

P(z) − z = p_0 − (1 − p_1)z + p_2 z² = p_0 − (p_0 + p_2)z + p_2 z² = (z − 1)(p_2 z − p_0)

and hence the two roots of the equation P(z) = z are given by

z_1 = 1,  z_2 = p_0/p_2.

Thus if p_2 < p_0, then z_2 > 1 and hence the smallest positive root of P(z) = z is 1, and it represents the probability of extinction. It follows that such a tribe, which does not produce offspring in abundance, is bound to become extinct.

15.6 Define the branching process {x_n} by

x_{n+1} = Σ_{k=1}^{x_n} y_k

where the y_k are i.i.d. random variables with common moment generating function P(z), so that (see (15-287)-(15-289))

P′(1) = E{y_k} = µ.

Thus

E{x_{n+1} | x_n = m} = E{Σ_{k=1}^{x_n} y_k | x_n = m} = E{Σ_{k=1}^{m} y_k | x_n = m} = E{Σ_{k=1}^{m} y_k} = m E{y_k} = x_n µ.

Similarly

E{x_{n+2} | x_n} = E{E{x_{n+2} | x_{n+1}, x_n}} = E{E{x_{n+2} | x_{n+1}} | x_n} = E{µ x_{n+1} | x_n} = µ² x_n

and in general we obtain

E{x_{n+r} | x_n} = µ^r x_n.   (i)

Also from (15-310)-(15-311),

E{x_n} = µ^n.   (ii)

Define

w_n = x_n/µ^n.   (iii)

This gives E{w_n} = 1. Dividing both sides of (i) by µ^{n+r} we get

E{x_{n+r}/µ^{n+r} | x_n = x} = µ^r x_n/µ^{n+r} = x_n/µ^n = w_n

or

E{w_{n+r} | w_n = x/µ^n ≜ w} = w_n

which gives

E{w_{n+r} | w_n} = w_n,

the desired result.
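The first moment relation E{x_n} = µ^n can be verified exactly by composing the offspring pgf with itself n times and reading off the mean (polynomial arithmetic only; the pgf coefficients below are an illustrative choice):

```python
def poly_add(a, b):
    n = max(len(a), len(b))
    a = a + [0.0] * (n - len(a))
    b = b + [0.0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def poly_mul(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_compose(a, b):
    """Coefficients of a(b(z)) for coefficient lists a, b."""
    out, bpow = [0.0], [1.0]
    for ak in a:
        out = poly_add(out, [ak * c for c in bpow])
        bpow = poly_mul(bpow, b)
    return out

def mean(pgf):
    return sum(k * c for k, c in enumerate(pgf))

# Hypothetical offspring pgf P(z) = 0.25 + 0.25 z + 0.5 z^2, so mu = P'(1) = 1.25
P = [0.25, 0.25, 0.5]
mu = mean(P)

Pn = P
for _ in range(2):          # pgf of x_3 given x_0 = 1: P(P(P(z)))
    Pn = poly_compose(Pn, P)

assert abs(sum(Pn) - 1.0) < 1e-9        # still a probability distribution
assert abs(mean(Pn) - mu ** 3) < 1e-9   # E{x_3} = mu^3
```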

15.7 Let

s_n = x_1 + x_2 + ··· + x_n

where the x_n are i.i.d. zero-mean random variables. We have

s_{n+1} = s_n + x_{n+1}

so that

E{s_{n+1} | s_n} = E{s_n + x_{n+1} | s_n} = s_n + E{x_{n+1}} = s_n.

Hence {s_n} represents a Martingale.

15.8 (a) From Bayes’ theorem

P{x_n = j | x_{n+1} = i} = P{x_{n+1} = i | x_n = j} P{x_n = j}/P{x_{n+1} = i} = q_j p_ji/q_i = p*_ij,   (i)

where we have assumed the chain to be in steady state.

(b) Notice that time-reversibility is equivalent to

p*_ij = p_ij

and using (i) this gives

p*_ij = q_j p_ji/q_i = p_ij   (ii)

or, for a time-reversible chain we get

q_j p_ji = q_i p_ij.   (iii)

Thus using (ii) we obtain by direct substitution

p_ij p_jk p_ki = ((q_j/q_i) p_ji)((q_k/q_j) p_kj)((q_i/q_k) p_ik) = p_ik p_kj p_ji,

the desired result.

15.9 (a) It is given that A = A^T (a_ij = a_ji) and a_ij > 0. Define the i-th row sum

r_i = Σ_k a_ik > 0,  i = 1, 2, ···

and let

p_ij = a_ij/Σ_k a_ik = a_ij/r_i.

Then

p_ji = a_ji/Σ_m a_jm = a_ji/r_j = a_ij/r_j = (r_i/r_j)(a_ij/r_i) = (r_i/r_j) p_ij   (i)

or

r_i p_ij = r_j p_ji.

Hence

Σ_i r_i p_ij = Σ_i r_j p_ji = r_j Σ_i p_ji = r_j,   (ii)

since

Σ_i p_ji = Σ_i a_ji/r_j = r_j/r_j = 1.

Notice that (ii) satisfies the steady-state probability distribution equation (15-167) with

q_i = c r_i,  i = 1, 2, ···

where c is given by

c Σ_i r_i = Σ_i q_i = 1 ⟹ c = 1/Σ_i r_i = 1/Σ_i Σ_j a_ij.

Thus

q_i = r_i/Σ_i r_i = Σ_j a_ij/Σ_i Σ_j a_ij > 0   (iii)

represents the stationary probability distribution of the chain. With (iii) in (i) we get

p_ji = (q_i/q_j) p_ij

or

p_ij = q_j p_ji/q_i = p*_ij

and hence the chain is time-reversible.
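Both conclusions, stationarity of q_i = r_i/Σ r_i and detailed balance, are finite identities that can be checked directly on a randomly generated symmetric matrix (a sketch with an arbitrary 4 × 4 example):

```python
import random

random.seed(7)
n = 4
# Symmetric matrix with positive entries (illustrative)
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i, n):
        A[i][j] = A[j][i] = random.uniform(0.1, 1.0)

r = [sum(A[i]) for i in range(n)]               # row sums
P = [[A[i][j] / r[i] for j in range(n)] for i in range(n)]
total = sum(r)
q = [ri / total for ri in r]                    # claimed stationary distribution

# Stationarity: q_j = sum_i q_i p_ij
for j in range(n):
    assert abs(sum(q[i] * P[i][j] for i in range(n)) - q[j]) < 1e-12

# Time-reversibility (detailed balance): q_i p_ij = q_j p_ji
for i in range(n):
    for j in range(n):
        assert abs(q[i] * P[i][j] - q[j] * P[j][i]) < 1e-12
```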

15.10 (a) M = (m_ij) is given by

M = (I − W)^{−1}

or

(I − W) M = I,  i.e.  M = I + WM,

which gives

m_ij = δ_ij + Σ_k w_ik m_kj = δ_ij + Σ_k p_ik m_kj,  e_i, e_j ∈ T.

(b) The general case is solved on pages 743-744. From page 744, with N = 6 (2 absorbing states; 5 transient states), and with r = p/q, we obtain

m_ij = (r^j − 1)(r^{6−i} − 1)/((p − q)(r^6 − 1)),  j ≤ i
m_ij = (r^i − 1)(r^{6−i} − r^{j−i})/((p − q)(r^6 − 1)),  j ≥ i.
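The defining identity M = I + WM of part (a) can be checked numerically for the N = 6 gambler's-ruin chain (p above the diagonal, q below); the Gaussian-elimination solver and the parameter values are illustrative:

```python
def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

p, q = 0.6, 0.4                        # illustrative win/lose probabilities
T = 5                                  # transient states 1..5 (absorbing at 0 and 6)
W = [[0.0] * T for _ in range(T)]
for i in range(T):
    if i + 1 < T: W[i][i + 1] = p
    if i - 1 >= 0: W[i][i - 1] = q

I = [[float(i == j) for j in range(T)] for i in range(T)]
IW = [[I[i][j] - W[i][j] for j in range(T)] for i in range(T)]
cols = [solve(IW, [I[i][j] for i in range(T)]) for j in range(T)]
Mmat = [[cols[j][i] for j in range(T)] for i in range(T)]   # M = (I - W)^{-1}

# Consistency check: M = I + W M
for i in range(T):
    for j in range(T):
        wm = sum(W[i][k] * Mmat[k][j] for k in range(T))
        assert abs(Mmat[i][j] - (I[i][j] + wm)) < 1e-9
```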

15.11 If a stochastic matrix A = (a_ij), a_ij > 0, corresponds to the two-step transition matrix of a Markov chain, then there must exist another stochastic matrix P such that

A = P²,  P = (p_ij)

where

p_ij > 0,  Σ_j p_ij = 1,

and this may not always be possible. For example, in a two-state chain, let

P =
 α     1−α
 1−β   β

so that

A = P² =
 α² + (1−α)(1−β)   (α+β)(1−α)
 (α+β)(1−β)        β² + (1−α)(1−β).

This gives the sum of its diagonal entries to be

a_11 + a_22 = α² + 2(1−α)(1−β) + β² = (α+β)² − 2(α+β) + 2 = 1 + (α+β−1)² ≥ 1.   (i)

Hence condition (i) is necessary. Since 0 < α < 1, 0 < β < 1, we also get 1 ≤ a_11 + a_22 ≤ 2. Further, condition (i) is also sufficient in the 2×2 case, since a_11 + a_22 > 1 gives

(α+β−1)² = a_11 + a_22 − 1 > 0

and hence

α + β = 1 ± √(a_11 + a_22 − 1),

and this equation may be solved for an admissible set of values 0 < α < 1 and 0 < β < 1.

15.12 In this case the chain is irreducible and aperiodic and there are no absorbing states. The steady-state distribution {u_k} satisfies (15-167), and hence we get

u_k = Σ_j u_j p_jk = Σ_{j=0}^N u_j C(N, k) p_j^k q_j^{N−k}.

Then if α > 0 and β > 0, “fixation to the pure genes” does not occur.

15.13 The transition probabilities in all these cases are given by (15A-7) (page 765) for specific choices of A(z) = B(z), as shown in Examples 15A-1, 15A-2 and 15A-3. The eigenvalues in general satisfy the equation

Σ_j p_ij x_j^(k) = λ_k x_i^(k),  k = 0, 1, 2, ··· N

and trivially Σ_j p_ij = 1 for all i implies that λ_0 = 1 is an eigenvalue in all cases. However, to determine the remaining eigenvalues we can exploit the relation in (15A-7). From there, the corresponding conditional moment generating function in (15-291) is given by

G(s) = Σ_{j=0}^N p_ij s^j   (i)

where from (15A-7)

p_ij = {A^i(z)}_j {B^{N−i}(z)}_{N−j}/{A^i(z) B^{N−i}(z)}_N = coefficient of s^j z^N in {A^i(sz) B^{N−i}(z)}/{A^i(z) B^{N−i}(z)}_N.   (ii)

Substituting (ii) in (i) we get the compact expression

G(s) = {A^i(sz) B^{N−i}(z)}_N/{A^i(z) B^{N−i}(z)}_N.   (iii)

. (iii)

Differentiating G(s) with respect to s we obtain

G′(s) = Σ_{j=0}^N p_ij j s^{j−1} = {i A^{i−1}(sz) A′(sz) z B^{N−i}(z)}_N/{A^i(z) B^{N−i}(z)}_N = i · {A^{i−1}(sz) A′(sz) B^{N−i}(z)}_{N−1}/{A^i(z) B^{N−i}(z)}_N.   (iv)

Letting s = 1 in the above expression we get

G′(1) = Σ_{j=0}^N p_ij j = i {A^{i−1}(z) A′(z) B^{N−i}(z)}_{N−1}/{A^i(z) B^{N−i}(z)}_N.   (v)

In the special case when A(z) = B(z), Eq. (v) reduces to

Σ_{j=0}^N p_ij j = λ_1 i   (vi)

where

λ_1 = {A^{N−1}(z) A′(z)}_{N−1}/{A^N(z)}_N.   (vii)

Notice that (vi) can be written as

P x_1 = λ_1 x_1,  x_1 = [0, 1, 2, ··· N]^T

and by direct computation with A(z) = B(z) = (q + pz)² (Example 15A-1) we obtain

λ_1 = {(q + pz)^{2(N−1)} 2p(q + pz)}_{N−1}/{(q + pz)^{2N}}_N = 2p {(q + pz)^{2N−1}}_{N−1}/{(q + pz)^{2N}}_N = 2p C(2N−1, N−1) q^N p^{N−1}/(C(2N, N) q^N p^N) = 1.

Thus Σ_{j=0}^N p_ij j = i, and from (15-224) these chains represent Martingales. (Similarly for Examples 15A-2 and 15A-3 as well.) To determine the remaining eigenvalues we differentiate G′(s) once more. This gives

G″(s) = Σ_{j=0}^N p_ij j(j−1) s^{j−2}

= {i(i−1) A^{i−2}(sz)[A′(sz)]² z B^{N−i}(z) + i A^{i−1}(sz) A″(sz) z B^{N−i}(z)}_{N−1}/{A^i(z) B^{N−i}(z)}_N

= {i A^{i−2}(sz) B^{N−i}(z)[(i−1)(A′(sz))² + A(sz) A″(sz)]}_{N−2}/{A^i(z) B^{N−i}(z)}_N.

With s = 1, and A(z) = B(z), the above expression simplifies to

Σ_{j=0}^N p_ij j(j−1) = λ_2 i(i−1) + i µ_2   (viii)

where

λ_2 = {A^{N−2}(z)[A′(z)]²}_{N−2}/{A^N(z)}_N

and

µ_2 = {A^{N−1}(z) A″(z)}_{N−2}/{A^N(z)}_N.

Eq. (viii) can be rewritten as

Σ_{j=0}^N p_ij j² = λ_2 i² + (polynomial in i of degree ≤ 1)

and in general, repeating this procedure, it follows that (show this)

Σ_{j=0}^N p_ij j^k = λ_k i^k + (polynomial in i of degree ≤ k−1)   (ix)

where

λ_k = {A^{N−k}(z)[A′(z)]^k}_{N−k}/{A^N(z)}_N,  k = 1, 2, ··· N.   (x)

Equations (viii)–(x) motivate us to consider the identities

P q_k = λ_k q_k   (xi)

where the q_k are polynomials in i of degree ≤ k, and by proper choice of constants they can be chosen in that form. It follows that the λ_k, k = 1, 2, ··· N, given by (x) represent the desired eigenvalues.

(a) The transition probabilities in this case follow from Example 15A-1 (pages 765-766) with A(z) = B(z) = (q + pz)². Thus using (x) we obtain the desired eigenvalues to be

λ_k = {(q + pz)^{2(N−k)}[2p(q + pz)]^k}_{N−k}/{(q + pz)^{2N}}_N

= 2^k p^k {(q + pz)^{2N−k}}_{N−k}/{(q + pz)^{2N}}_N

= 2^k C(2N−k, N−k)/C(2N, N),  k = 1, 2, ··· N.

(b) The transition probabilities in this case follow from Example 15A-2 (page 766) with

A(z) = B(z) = e^{λ(z−1)}

and hence

λ_k = {e^{λ(N−k)(z−1)} λ^k e^{λk(z−1)}}_{N−k}/{e^{λN(z−1)}}_N

= λ^k {e^{λNz}}_{N−k}/{e^{λNz}}_N = λ^k ((λN)^{N−k}/(N−k)!)/((λN)^N/N!)

= N!/((N−k)! N^k) = (1 − 1/N)(1 − 2/N) ··· (1 − (k−1)/N),  k = 1, 2, ··· N.

(c) The transition probabilities in this case follow from Example 15A-3 (pages 766-767) with

A(z) = B(z) = q/(1 − pz).

Thus

λ_k = p^k {1/(1 − pz)^{N+k}}_{N−k}/{1/(1 − pz)^N}_N

= (−1)^k C(−(N+k), N−k)/C(−N, N)

= C(2N−1, N−k)/C(2N−1, N),  k = 1, 2, ··· N.

15.14 From (15-240), the mean time to absorption vector is given by

m = (I − W)^{−1} E,  E = [1, 1, ··· 1]^T,

where

W = (w_jk),  w_jk = p_jk,  j, k = 1, 2, ··· N−1,

with the p_jk as given in (15-30) and (15-31) respectively.

15.15 The mean time to absorption satisfies (15-240). From there

m_i = 1 + Σ_{k∈T} p_ik m_k = 1 + p_{i,i+1} m_{i+1} + p_{i,i−1} m_{i−1} = 1 + p m_{i+1} + q m_{i−1},

or

m_k = 1 + p m_{k+1} + q m_{k−1}.

This gives

p(m_{k+1} − m_k) = q(m_k − m_{k−1}) − 1.

Let

M_{k+1} = m_{k+1} − m_k

so that the above iteration gives

M_{k+1} = (q/p) M_k − 1/p = (q/p)^k M_1 − (1/p)[1 + q/p + (q/p)² + ··· + (q/p)^{k−1}]

= (q/p)^k M_1 − (1/(p−q)){1 − (q/p)^k},  p ≠ q
= M_1 − k/p,  p = q.

This gives

m_i = Σ_{k=0}^{i−1} M_{k+1}

= (M_1 + 1/(p−q)) Σ_{k=0}^{i−1} (q/p)^k − i/(p−q),  p ≠ q
= i M_1 − i(i−1)/(2p),  p = q

= (M_1 + 1/(p−q)) (1 − (q/p)^i)/(1 − q/p) − i/(p−q),  p ≠ q
= i M_1 − i(i−1)/(2p),  p = q

where we have used m_0 = 0. Similarly m_{a+b} = 0 gives

M_1 + 1/(p−q) = ((a+b)/(p−q)) · (1 − q/p)/(1 − (q/p)^{a+b}).

Thus

m_i = ((a+b)/(p−q)) · (1 − (q/p)^i)/(1 − (q/p)^{a+b}) − i/(p−q),  p ≠ q
m_i = i(a+b−i),  p = q

which gives for i = a

m_a = ((a+b)/(p−q)) · (1 − (q/p)^a)/(1 − (q/p)^{a+b}) − a/(p−q),  p ≠ q
m_a = ab,  p = q

= b/(2p−1) − ((a+b)/(2p−1)) · (1 − (p/q)^b)/(1 − (p/q)^{a+b}),  p ≠ q
= ab,  p = q

by writing

(1 − (q/p)^a)/(1 − (q/p)^{a+b}) = 1 − ((q/p)^a − (q/p)^{a+b})/(1 − (q/p)^{a+b}) = 1 − (1 − (p/q)^b)/(1 − (p/q)^{a+b})

(see also problem 3-10).
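The closed-form m_i (p ≠ q case) can be checked against its defining recursion m_i = 1 + p m_{i+1} + q m_{i−1} and boundary conditions, together with the rewritten form of m_a (illustrative values of p, a, b):

```python
p, q = 0.7, 0.3          # illustrative win/lose probabilities, p != q
a, b = 4, 5              # starting capital a, opponent capital b
N = a + b

def m(i):
    """Closed-form mean duration starting from state i (p != q case)."""
    r = q / p
    return (N / (p - q)) * (1 - r ** i) / (1 - r ** N) - i / (p - q)

# Boundary conditions and the defining recursion
assert abs(m(0)) < 1e-12 and abs(m(N)) < 1e-9
for i in range(1, N):
    assert abs(m(i) - (1 + p * m(i + 1) + q * m(i - 1))) < 1e-9

# The rewritten form of m_a
ma_alt = b / (2 * p - 1) - (N / (2 * p - 1)) * (1 - (p / q) ** b) / (1 - (p / q) ** N)
assert abs(m(a) - ma_alt) < 1e-9
```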


Chapter 16

16.1 Use (16-132) with r = 1. This gives

p_n = (ρ^n/n!) p_0,  n ≤ 1
p_n = ρ^n p_0,  1 < n ≤ m

i.e. p_n = ρ^n p_0 for 0 ≤ n ≤ m. Thus

Σ_{n=0}^m p_n = p_0 Σ_{n=0}^m ρ^n = p_0 (1 − ρ^{m+1})/(1 − ρ) = 1

⟹ p_0 = (1 − ρ)/(1 − ρ^{m+1})

and hence

p_n = ((1 − ρ)/(1 − ρ^{m+1})) ρ^n,  0 ≤ n ≤ m,  ρ ≠ 1.

Letting ρ → 1, we get

p_n = 1/(m+1),  ρ = 1.
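A quick check that these p_n sum to one and approach the uniform limit as ρ → 1 (illustrative ρ and m):

```python
def finite_mm1(rho, m):
    """Steady-state probabilities of the finite M/M/1/m queue."""
    if abs(rho - 1.0) < 1e-12:
        return [1.0 / (m + 1)] * (m + 1)
    p0 = (1 - rho) / (1 - rho ** (m + 1))
    return [p0 * rho ** n for n in range(m + 1)]

pn = finite_mm1(0.8, 10)
assert abs(sum(pn) - 1.0) < 1e-12

# rho -> 1 limit approaches the uniform distribution 1/(m+1)
near = finite_mm1(1.0 + 1e-9, 10)
for v in near:
    assert abs(v - 1.0 / 11.0) < 1e-6
```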

16.2 (a) Let n_1(t) = X + Y, where X and Y represent the two queues. Then

p_n = P{n_1(t) = n} = P{X + Y = n} = Σ_{k=0}^n P{X = k} P{Y = n−k}

= Σ_{k=0}^n (1−ρ)ρ^k (1−ρ)ρ^{n−k} = (n+1)(1−ρ)² ρ^n,  n = 0, 1, 2, ···   (i)

where ρ = λ/µ.
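The convolution step in (i) can be replayed numerically: convolving two geometric distributions indeed gives (n + 1)(1 − ρ)²ρ^n term by term (truncated series, illustrative ρ):

```python
rho = 0.6
K = 400   # truncation; rho^K is negligible

geo = [(1 - rho) * rho ** k for k in range(K)]   # M/M/1 queue-length distribution

# Distribution of X + Y by discrete convolution
pn = [sum(geo[k] * geo[n - k] for k in range(n + 1)) for n in range(K // 2)]

for n, v in enumerate(pn):
    assert abs(v - (n + 1) * (1 - rho) ** 2 * rho ** n) < 1e-12
```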


(b) When the two queues are merged, the new input rate is λ′ = λ + λ = 2λ. Thus from (16-102)

p_n = ((λ′/µ)^n/n!) p_0 = ((2ρ)^n/n!) p_0,  n < 2
p_n = (2²/2!)(λ/µ)^n p_0 = 2ρ^n p_0,  n ≥ 2.

Hence

Σ_{k=0}^∞ p_k = p_0 (1 + 2ρ + 2 Σ_{k=2}^∞ ρ^k) = p_0 (1 + 2ρ + 2ρ²/(1 − ρ))

= (p_0/(1 − ρ)) ((1 + 2ρ)(1 − ρ) + 2ρ²) = (p_0/(1 − ρ)) (1 + ρ) = 1

⟹ p_0 = (1 − ρ)/(1 + ρ),  (ρ = λ/µ).   (ii)

Thus

p_n = 2(1 − ρ)ρ^n/(1 + ρ),  n ≥ 1
p_0 = (1 − ρ)/(1 + ρ).   (iii)

(c) For an M/M/1 queue the average number of items waiting is given by (use (16-106) with r = 1)

E{X} = L′_1 = Σ_{n=2}^∞ (n−1) p_n

where p_n is as in (16-88). Thus

L′_1 = Σ_{n=2}^∞ (n−1)(1−ρ)ρ^n = (1−ρ)ρ² Σ_{n=2}^∞ (n−1)ρ^{n−2}

= (1−ρ)ρ² Σ_{k=1}^∞ k ρ^{k−1} = (1−ρ)ρ² · 1/(1−ρ)² = ρ²/(1−ρ).   (iv)

Since n_1(t) = X + Y, we have

L_1 = E{n_1(t)} = E{X} + E{Y} = 2L′_1 = 2ρ²/(1−ρ).   (v)

For L_2 we can use (16-106)-(16-107) with r = 2. Using (iii), this gives

L_2 = p_r ρ/(1−ρ)² = (2(1−ρ)ρ²/(1+ρ)) · ρ/(1−ρ)² = 2ρ³/(1−ρ²) = (2ρ²/(1−ρ))(ρ/(1+ρ)) < L_1.   (vi)

From (vi), a single-queue configuration is more efficient than two separate queues.
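Comparing (v) and (vi) numerically for one utilisation value (illustrative ρ):

```python
rho = 0.7   # per-queue utilisation (0 < rho < 1); illustrative value

L1 = 2 * rho ** 2 / (1 - rho)        # two separate M/M/1 queues, Eq. (v)
L2 = 2 * rho ** 3 / (1 - rho ** 2)   # merged two-server queue, Eq. (vi)

assert abs(L2 - L1 * rho / (1 + rho)) < 1e-12
assert L2 < L1                       # pooling the queues reduces the backlog
```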

16.3 The only non-zero transition rates of this process are

λ_{0,0} = −λ_0 = −mλ,  λ_{0,1} = mλ

λ_{i,i+1} = (m−i)λ,  λ_{i,i−1} = iµ

λ_{i,i} = −[(m−i)λ + iµ],  i = 1, 2, ···, m−1

λ_{m,m} = −λ_{m,m−1} = −mµ.

Substituting these into (16-63) of the text, we get

mλ p_0 = µ p_1   (i)

[(m−i)λ + iµ] p_i = (m−i+1)λ p_{i−1} + (i+1)µ p_{i+1},  i = 1, 2, ···, m−1   (ii)

and

mµ p_m = λ p_{m−1}.   (iii)

Solving (i)-(iii) we get

p_i = C(m, i) (λ/(λ+µ))^i (µ/(λ+µ))^{m−i},  i = 0, 1, 2, ···, m.
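The binomial solution can be checked against the balance equations (i)-(iii) directly (illustrative λ, µ, m):

```python
from math import comb

lam, mu, m = 2.0, 3.0, 6      # illustrative rates and population size
p = [comb(m, i) * (lam / (lam + mu)) ** i * (mu / (lam + mu)) ** (m - i)
     for i in range(m + 1)]

assert abs(sum(p) - 1.0) < 1e-12

# Balance equations (i)-(iii)
assert abs(m * lam * p[0] - mu * p[1]) < 1e-9
for i in range(1, m):
    lhs = ((m - i) * lam + i * mu) * p[i]
    rhs = (m - i + 1) * lam * p[i - 1] + (i + 1) * mu * p[i + 1]
    assert abs(lhs - rhs) < 1e-9
assert abs(m * mu * p[m] - lam * p[m - 1]) < 1e-9
```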

16.4 (a) In this case

p_n = (λ/µ_1)(λ/µ_1) ··· (λ/µ_1) p_0 = (λ/µ_1)^n p_0,  n < m
p_n = (λ/µ_1)^{m−1} (λ/µ_2)^{n−m+1} p_0,  n ≥ m

i.e.

p_n = ρ_1^n p_0,  n < m
p_n = ρ_1^{m−1} ρ_2^{n−m+1} p_0,  n ≥ m,

where

Σ_{n=0}^∞ p_n = p_0 [Σ_{k=0}^{m−1} ρ_1^k + ρ_1^{m−1} ρ_2 Σ_{n=0}^∞ ρ_2^n] = p_0 [(1 − ρ_1^m)/(1 − ρ_1) + ρ_2 ρ_1^{m−1}/(1 − ρ_2)] = 1

gives

p_0 = [(1 − ρ_1^m)/(1 − ρ_1) + ρ_2 ρ_1^{m−1}/(1 − ρ_2)]^{−1}.

(b)

L = Σ_{n=0}^∞ n p_n

= p_0 [Σ_{n=0}^{m−1} n ρ_1^n + Σ_{n=m}^∞ n ρ_1^{m−1} ρ_2^{n−m+1}]

= p_0 [ρ_1 Σ_{n=0}^{m−1} n ρ_1^{n−1} + ρ_1 (ρ_1/ρ_2)^{m−2} Σ_{n=m}^∞ n ρ_2^{n−1}]

= p_0 [ρ_1 (d/dρ_1)(Σ_{n=0}^{m−1} ρ_1^n) + ρ_1 (ρ_1/ρ_2)^{m−2} (d/dρ_2)(Σ_{n=m}^∞ ρ_2^n)]

= p_0 [ρ_1 (d/dρ_1)((1 − ρ_1^m)/(1 − ρ_1)) + ρ_1 (ρ_1/ρ_2)^{m−2} (d/dρ_2)(ρ_2^m/(1 − ρ_2))]

= p_0 [ρ_1 (1 + (m−1)ρ_1^m − mρ_1^{m−1})/(1 − ρ_1)² + ρ_1^{m−1} ρ_2 (m − (m−1)ρ_2)/(1 − ρ_2)²].

16.5 In this case

λ_j = λ,  j < r
λ_j = pλ,  j ≥ r

µ_j = jµ,  j < r
µ_j = rµ,  j ≥ r.

Using (16-73)-(16-74), this gives

p_n = ((λ/µ)^n/n!) p_0,  n < r
p_n = ((λ/µ)^r/r!) (pλ/rµ)^{n−r} p_0,  n ≥ r.

16.6

P{w > t} = Σ_{n=r}^{m−1} p_n P(w > t | n) = Σ_{n=r}^{m−1} p_n (1 − F_w(t|n)) = Σ_{n=r}^{m−1} p_r (λ/rµ)^{n−r} (1 − F_w(t|n)).

From (16-116),

f_w(t|n) = e^{−rµt} (rµ)^{n−r+1} t^{n−r}/(n−r)!

and

F_w(t|n) = 1 − Σ_{k=0}^{n−r} ((rµt)^k/k!) e^{−rµt}

so that

1 − F_w(t|n) = Σ_{k=0}^{n−r} ((rµt)^k/k!) e^{−rµt}.

Hence, with ρ = λ/rµ and i = n − r,

P{w > t} = Σ_{n=r}^{m−1} p_r ρ^{n−r} Σ_{k=0}^{n−r} ((rµt)^k/k!) e^{−rµt}

= Σ_{i=0}^{m−r−1} p_r ρ^i Σ_{k=0}^{i} ((rµt)^k/k!) e^{−rµt}

= p_r e^{−rµt} Σ_{i=0}^{m−r−1} ((rµt)^i/i!) Σ_{k=i}^{m−r−1} ρ^k

(interchanging the order of summation, Σ_{k=0}^{M} Σ_{i=0}^{k} = Σ_{i=0}^{M} Σ_{k=i}^{M})

= (p_r/(1 − ρ)) e^{−rµt} Σ_{i=0}^{m−r−1} ((rµt)^i/i!) (ρ^i − ρ^{m−r}),  ρ = λ/rµ.

Note that m → ∞ gives M/M/r/m → M/M/r and

P(w > t) = (p_r/(1 − ρ)) e^{−rµt} Σ_{i=0}^∞ (rµρt)^i/i! = (p_r/(1 − ρ)) e^{−rµ(1−ρ)t},  t > 0,

and it agrees with (16-119).

16.7 (a) Use the hints.

(b)

−Σ_{n=1}^∞ (λ + µ) p_n z^n + (µ/z) Σ_{n=1}^∞ p_{n+1} z^{n+1} + λ Σ_{n=1}^∞ Σ_{k=1}^n p_{n−k} c_k z^n = 0

or, dividing through by µ,

−(ρ + 1)(P(z) − p_0) + (1/z)(P(z) − p_0 − p_1 z) + ρ C(z) P(z) = 0

which, together with the n = 0 equation p_1 = ρ p_0, gives

P(z)[1 − z − ρz(1 − C(z))] = p_0 (1 − z)

or

P(z) = p_0 (1 − z)/(1 − z − ρz(1 − C(z))).

Letting z → 1 (L'Hôpital's rule),

1 = P(1) = −p_0/(−1 − ρ + ρz C′(z) + ρ C(z))|_{z=1} = −p_0/(−1 + ρ C′(1))

⟹ p_0 = 1 − ρ_0,  ρ_0 = ρ C′(1).

Let

D(z) = (1 − C(z))/(1 − z).

Then

P(z) = (1 − ρ_0)/(1 − ρz D(z)).

(c) This gives

P′(z) = (1 − ρ_0)(ρ D(z) + ρz D′(z))/(1 − ρz D(z))²

L = P′(1) = (1 − ρ_0) ρ (D(1) + D′(1))/(1 − ρ_0)² = ρ (C′(1) + D′(1))/(1 − ρ_0)

with

C′(1) = E(X),  D(1) = C′(1)

and

D′(z) = [(1 − z)(−C′(z)) − (1 − C(z))(−1)]/(1 − z)² = [1 − C(z) − (1 − z) C′(z)]/(1 − z)².

By L'Hôpital's rule,

D′(1) = lim_{z→1} [−(1 − z) C″(z)]/[−2(1 − z)] = C″(1)/2 = (1/2) Σ_k k(k−1) c_k = (E(X²) − E(X))/2.

Hence

L = ρ (E(X) + (E(X²) − E(X))/2)/(1 − ρ E(X)) = ρ (E(X) + E(X²))/(2(1 − ρ E(X))).

(d) C(z) = z^m, E(X) = m, E(X²) = m²:

P(z) = (1 − ρm)/(1 − ρ Σ_{k=1}^m z^k)

D(z) = (1 − z^m)/(1 − z) = Σ_{k=0}^{m−1} z^k

L = ρ(m + m²)/(2(1 − ρm)).

(e) C(z) = qz/(1 − pz):

D(z) = (1 − C(z))/(1 − z) = (1 − pz − qz)/((1 − z)(1 − pz)) = (1 − z)/((1 − z)(1 − pz)) = 1/(1 − pz)

P(z) = (1 − ρ_0)/(1 − ρz/(1 − pz)) = (1 − ρ_0)(1 − pz)/(1 − pz − ρz) = (1 − ρ_0)(1 − pz)/(1 − (p + ρ)z)

C′(z) = ((1 − pz)q − qz(−p))/(1 − pz)² = q/(1 − pz)²,  C′(1) = q/q² = 1/q

and, exactly as in (c), D(1) = C′(1) and D′(1) = C″(1)/2, so that

L = (1/(1 − ρ_0)) (ρ E(X) + ρ(E(X²) − E(X))/2) = ρ(E(X) + E(X²))/(2(1 − ρ_0)).

16.8 (a) Use the hints.

(b)

−Σ_{n=1}^∞ (λ + µ) p_n z^n + (µ/z^m) Σ_{n=1}^∞ p_{n+m} z^{n+m} + λz Σ_{n=1}^∞ p_{n−1} z^{n−1} = 0

or

−(1 + ρ)(P(z) − p_0) + (1/z^m)(P(z) − Σ_{k=0}^m p_k z^k) + ρz P(z) = 0

which gives

P(z)[ρ z^{m+1} − (ρ + 1) z^m + 1] = Σ_{k=0}^m p_k z^k − p_0 (1 + ρ) z^m

or

P(z) = [Σ_{k=0}^m p_k z^k − p_0 (1 + ρ) z^m]/[ρ z^{m+1} − (ρ + 1) z^m + 1] = N(z)/M(z).   (i)

(c) Consider the denominator polynomial M(z) in (i) given by

M(z) = ρ z^{m+1} − (1 + ρ) z^m + 1 = f(z) + g(z)

where

f(z) = −(1 + ρ) z^m,  g(z) = 1 + ρ z^{m+1}.

Notice that |f(z)| > |g(z)| on a circle defined by |z| = 1 + ε, ε > 0. Hence by Rouché's theorem f(z) and f(z) + g(z) have the same number of zeros inside the circle |z| = 1 + ε. But f(z) has m zeros inside it. Hence f(z) + g(z) = M(z) also has m zeros there. Hence

M(z) = M_1(z)(z − z_0)   (ii)

where |z_0| > 1 and M_1(z) is a polynomial of degree m whose zeros are all inside or on the unit circle. But the moment generating function P(z) is analytic inside and on the unit circle. Hence all the m zeros of M(z) that are inside or on the unit circle must cancel out with the zeros of the numerator polynomial of P(z). Hence

N(z) = a M_1(z).   (iii)

Using (ii) and (iii) in (i) we get

P(z) = N(z)/M(z) = a/(z − z_0).

But P(1) = 1 gives a = 1 − z_0, or

P(z) = (z_0 − 1)/(z_0 − z) = (1 − 1/z_0) Σ_{n=0}^∞ (z/z_0)^n

⟹ p_n = (1 − 1/z_0)(1/z_0)^n = (1 − r) r^n,  n ≥ 0   (iv)

where r = 1/z_0.

(d) Average system size:

L = Σ_{n=0}^∞ n p_n = r/(1 − r).
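For concrete ρ and m the root z_0 > 1 of M(z) can be located by bisection (z = 1 is always a root, and M′(1) = ρ − m < 0 for a stable queue with ρ < m, so M is negative just to the right of 1); the resulting geometric p_n = (1 − r)r^n then sums to one with mean r/(1 − r). The parameter values below are illustrative:

```python
rho, m = 0.8, 3          # illustrative; rho < m keeps the queue stable

def M(z):
    return rho * z ** (m + 1) - (rho + 1) * z ** m + 1

lo, hi = 1.0 + 1e-6, 50.0
assert M(lo) < 0 < M(hi)
for _ in range(100):                 # bisection for the root z0 > 1
    mid = 0.5 * (lo + hi)
    if M(mid) < 0:
        lo = mid
    else:
        hi = mid
z0 = 0.5 * (lo + hi)
assert abs(M(z0)) < 1e-9

r = 1.0 / z0
pn = [(1 - r) * r ** n for n in range(400)]
assert abs(sum(pn) - 1.0) < 1e-12    # a proper distribution
L = r / (1 - r)
assert abs(L - sum(n * v for n, v in enumerate(pn))) < 1e-9
```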

16.9 (a) Use the hints in the previous problem.

(b)

−Σ_{n=m}^∞ (λ + µ) p_n z^n + µ Σ_{n=m}^∞ p_{n+m} z^n + λ Σ_{n=m}^∞ p_{n−1} z^n = 0

gives

−(1 + ρ)(P(z) − Σ_{k=0}^{m−1} p_k z^k) + (1/z^m)(P(z) − Σ_{k=0}^{2m−1} p_k z^k) + ρz(P(z) − Σ_{k=0}^{m−2} p_k z^k) = 0.

After some simplification we get

P(z)[ρ z^{m+1} − (ρ + 1) z^m + 1] = (1 − z^m) Σ_{k=0}^{m−1} p_k z^k

or

P(z) = (1 − z^m) Σ_{k=0}^{m−1} p_k z^k/[ρ z^{m+1} − (ρ + 1) z^m + 1] = (z_0 − 1) Σ_{k=0}^{m−1} z^k/(m(z_0 − z))

where we have made use of Rouché's theorem and P(1) = 1 as in Problem 16-8.

(c)

P(z) = Σ_{n=0}^∞ p_n z^n = ((1 − r)/m) (Σ_{k=0}^{m−1} z^k)/(1 − rz)

gives

p_n = (1 + r + ··· + r^n) p_0,  n ≤ m−1
p_n = r^{n−m+1} (1 + r + ··· + r^{m−1}) p_0,  n ≥ m

where

p_0 = (1 − r)/m,  r = 1/z_0.

Finally

L = Σ_{n=0}^∞ n p_n = P′(1).

But

P′(z) = ((1 − r)/m) [Σ_{k=1}^{m−1} k z^{k−1} (1 − rz) + r Σ_{k=0}^{m−1} z^k]/(1 − rz)²

so that

L = P′(1) = ((1 − r)/m) [(m(m−1)/2)(1 − r) + rm]/(1 − r)² = (m − 1)/2 + r/(1 − r).

16.10 Proceeding as in (16-212),

ψ_A(u) = ∫_0^∞ e^{−uτ} dA(τ) = (λm/(u + λm))^m.

This gives

B(z) = ψ_A(µ(1 − z)) = (λm/(µ(1 − z) + λm))^m = [1/(1 + (1/ρ)(1 − z))]^m = (ρ/((1 + ρ) − z))^m,  ρ = λm/µ.   (i)

Thus the equation B(z) = z for π_0 reduces to

(ρ/((1 + ρ) − z))^m = z

or

ρ/((1 + ρ) − z) = z^{1/m},

which is the same as

ρ z^{−1/m} = (1 + ρ) − z.   (ii)

Let x = z^{−1/m}. Substituting this into (ii) we get

ρx = (1 + ρ) − x^{−m}

or

ρ x^{m+1} − (1 + ρ) x^m + 1 = 0.   (iii)

16.11 From Example 16.7, Eq. (16-214), the characteristic equation for Q(z) is given by (ρ = λ/mµ)

1 − z[1 + ρ(1 − z)]^m = 0

which is equivalent to

1 + ρ(1 − z) = z^{−1/m}.   (i)

Let x = z^{1/m} in this case, so that (i) reduces to

[(1 + ρ) − ρ x^m] x = 1

or the characteristic equation satisfies

ρ x^{m+1} − (1 + ρ) x + 1 = 0.   (ii)

16.12 Here the service-time distribution is given by

dB(t)/dt = Σ_{i=1}^k d_i δ(t − T_i)

and its Laplace transform equals

Φ_s(s) = Σ_{i=1}^k d_i e^{−sT_i}.   (i)

Substituting (i) into (15.219), we get

A(z) = Φ_s(λ(1 − z)) = Σ_{i=1}^k d_i e^{−λT_i(1−z)} = Σ_{i=1}^k d_i e^{−λT_i} e^{λT_i z}

= Σ_{i=1}^k d_i e^{−λT_i} Σ_{j=0}^∞ (λT_i)^j z^j/j! = Σ_{j=0}^∞ a_j z^j.

Hence

a_j = Σ_{i=1}^k d_i e^{−λT_i} (λT_i)^j/j!,  j = 0, 1, 2, ···.   (ii)

To get an explicit formula for the steady-state probabilities {q_n}, we can make use of the analysis in (16.194)-(16.204) for an M/G/1 queue. From (16.203)-(16.204), let

c_0 = 1 − a_0,  c_n = 1 − Σ_{k=0}^n a_k,  n ≥ 1

and let {c_k^(m)} represent the m-fold convolution of the sequence {c_k} with itself. Then the steady-state probabilities are given by (16.203) as

q_n = (1 − ρ) Σ_{m=0}^∞ Σ_{k=0}^n a_k c_{n−k}^(m).

(b) State-Dependent Service Distribution

Let B_i(t) represent the service-time distribution for those customers entering the system when the most recent departure left i customers in the queue. In that case, (15.218) modifies to

a_{k,i} = P{A_k | B_i}

where

A_k = “k customers arrive during a service time”

and

B_i = “i customers in the system at the most recent departure.”

This gives

a_{k,i} = ∫_0^∞ e^{−λt} ((λt)^k/k!) dB_i(t)

= ∫_0^∞ e^{−λt} ((λt)^k/k!) µ_1 e^{−µ_1 t} dt = µ_1 λ^k/(λ + µ_1)^{k+1},  i = 0
= ∫_0^∞ e^{−λt} ((λt)^k/k!) µ_2 e^{−µ_2 t} dt = µ_2 λ^k/(λ + µ_2)^{k+1},  i ≥ 1.   (i)

This gives

A_i(z) = Σ_{k=0}^∞ a_{k,i} z^k = 1/(1 + ρ_1(1 − z)),  i = 0
A_i(z) = 1/(1 + ρ_2(1 − z)),  i ≥ 1   (ii)

where ρ_1 = λ/µ_1, ρ_2 = λ/µ_2. Proceeding as in Example 15.24, the steady-state probabilities satisfy [(15.210) gets modified]

q_j = q_0 a_{j,0} + Σ_{i=1}^{j+1} q_i a_{j−i+1,i}   (iii)

and (see (15.212))

Q(z) = Σ_{j=0}^∞ q_j z^j = q_0 Σ_{j=0}^∞ a_{j,0} z^j + Σ_{j=0}^∞ Σ_{i=1}^{j+1} q_i a_{j−i+1,i} z^j

= q_0 A_0(z) + Σ_{i=1}^∞ q_i z^i Σ_{m=0}^∞ a_{m,i} z^m z^{−1}

= q_0 A_0(z) + (Q(z) − q_0) A_1(z)/z   (iv)

where (see (ii))

A_0(z) = 1/(1 + ρ_1(1 − z))   (v)

and

A_1(z) = 1/(1 + ρ_2(1 − z)).   (vi)

From (iv),

Q(z) = q_0 (z A_0(z) − A_1(z))/(z − A_1(z)).   (vii)

Since

Q(1) = 1 = q_0 [A′_0(1) + A_0(1) − A′_1(1)]/(1 − A′_1(1)) = q_0 (1 + ρ_1 − ρ_2)/(1 − ρ_2)

we obtain

q_0 = (1 − ρ_2)/(1 + ρ_1 − ρ_2).   (viii)

Substituting (viii) into (vii) we can rewrite Q(z) as

Q(z) = ((1 − ρ_2)/(1 − ρ_2 z)) · (1/(1 + ρ_1 − ρ_2)) · (1 − (ρ_2/(1 + ρ_1)) z)/(1 − (ρ_1/(1 + ρ_1)) z) = Q_1(z) Q_2(z)   (ix)

where

Q_1(z) = (1 − ρ_2)/(1 − ρ_2 z) = (1 − ρ_2) Σ_{k=0}^∞ ρ_2^k z^k

and

Q_2(z) = (1/(1 + ρ_1 − ρ_2)) (1 − (ρ_2/(1 + ρ_1)) z) Σ_{i=0}^∞ (ρ_1/(1 + ρ_1))^i z^i.

Finally, substituting Q_1(z) and Q_2(z) into (ix), we obtain

q_n = q_0 [Σ_{i=0}^n (ρ_1/(1 + ρ_1))^{n−i} ρ_2^i − Σ_{i=0}^{n−1} ρ_2^{i+1} ρ_1^{n−i−1}/(1 + ρ_1)^{n−i}],  n = 1, 2, ···

with q_0 as in (viii).

16.13 From (16-209), the Laplace transform of the waiting-time distribution is given by

Ψ_w(s) = (1 − ρ)/(1 − λ(1 − Φ_s(s))/s) = (1 − ρ)/(1 − ρµ(1 − Φ_s(s))/s).   (i)

Let

F_r(t) = µ ∫_0^t [1 − B(τ)] dτ = µ[t − ∫_0^t B(τ) dτ]   (ii)

represent the residual service-time distribution. Then its Laplace transform is given by

Φ_F(s) = L{F_r(t)} = µ(1/s − Φ_s(s)/s) = µ(1 − Φ_s(s))/s.   (iii)

Substituting (iii) into (i) we get

Ψ_w(s) = (1 − ρ)/(1 − ρ Φ_F(s)) = (1 − ρ) Σ_{n=0}^∞ [ρ Φ_F(s)]^n,  |Φ_F(s)| < 1.   (iv)

Taking the inverse transform of (iv) we get

F_w(t) = (1 − ρ) Σ_{n=0}^∞ ρ^n F_r^(n)(t),

where F_r^(n)(t) is the n-th convolution of F_r(t) with itself.

16.14 Let ρ in (16.198), which represents the average number of customers that arrive during any service period, be greater than one. Notice that

ρ = A′(1) > 1

where

A(z) = Σ_{k=0}^∞ a_k z^k.

From Theorem 15.9 on extinction probability (pages 759-760) it follows that if ρ = A′(1) > 1, the equation

A(z) = z   (i)

has a unique positive root π_0 < 1. On the other hand, the transient-state probabilities {σ_i} satisfy equation (15.236). By direct substitution with x_i = π_0^i we get

Σ_{j=1}^∞ p_ij x_j = Σ_{j=1}^∞ a_{j−i+1} π_0^j   (ii)

where we have made use of p_ij = a_{j−i+1}, i ≥ 1, in (15.33) for an M/G/1 queue. With k = j − i + 1 in (ii), it reduces to

Σ_{k=2−i}^∞ a_k π_0^{k+i−1} = π_0^{i−1} Σ_{k=0}^∞ a_k π_0^k = π_0^{i−1} π_0 = π_0^i = x_i   (iii)

since π_0 satisfies (i). Thus if ρ > 1, the M/G/1 system is transient with probabilities σ_i = π_0^i.

16.15 (a) The transition probability matrix here is the truncated version of (15.34) given by

P =
 a_0  a_1  a_2  ···  a_{m−2}  1 − Σ_{k=0}^{m−2} a_k
 a_0  a_1  a_2  ···  a_{m−2}  1 − Σ_{k=0}^{m−2} a_k
 0    a_0  a_1  ···  a_{m−3}  1 − Σ_{k=0}^{m−3} a_k
 ⋮
 0    0    0    ···  a_0  a_1  1 − (a_0 + a_1)
 0    0    0    ···  0    a_0  1 − a_0   (i)

and it corresponds to the upper-left block matrix in (15.34), followed by an m-th column that makes each row sum equal to unity.

(b) By direct substitution of (i) into (15-167), the steady-state probabilities {q*_j}, j = 0, 1, ···, m−1, satisfy

q*_j = q*_0 a_j + Σ_{i=1}^{j+1} q*_i a_{j−i+1},  j = 0, 1, 2, ···, m−2   (ii)

and the normalization condition gives

q*_{m−1} = 1 − Σ_{i=0}^{m−2} q*_i.   (iii)

Notice that (ii) is the same as the first m−1 equations in (15-210) for an M/G/1 queue. Hence the desired solution {q*_j} must satisfy the first m−1 equations in (15-210) as well. Since the unique solution set to (15.210) is given by the {q_j} in (16.203), it follows that the desired probabilities satisfy

q*_j = c q_j,  j = 0, 1, 2, ···, m−1   (iv)

where the q_j are as in (16.203) for an M/G/1 queue. From (iii) we also get the normalization constant c to be

c = 1/Σ_{i=0}^{m−1} q_i.   (v)

16.16 (a) The event {X(t) = k} can occur in several mutually exclusive ways, viz., in the interval (0, t), n customers arrive and k of them continue their service beyond t. Let A_n = “n arrivals in (0, t)” and B_{k,n} = “exactly k services among the n arrivals continue beyond t”. Then by the theorem of total probability

P{X(t) = k} = Σ_{n=k}^∞ P{A_n ∩ B_{k,n}} = Σ_{n=k}^∞ P{B_{k,n} | A_n} P(A_n).

But P(A_n) = e^{−λt}(λt)^n/n!, and to evaluate P{B_{k,n} | A_n} we argue as follows. From (9.28), under the condition that there are n arrivals in (0, t), the joint distribution of the arrival instants agrees with the joint distribution of n independent random variables arranged in increasing order and distributed uniformly in (0, t). Hence the probability that a service time S does not terminate by t, given that its starting time x has a uniform distribution in (0, t), is given by

p_t = ∫_0^t P(S > t − x | x = x) f_x(x) dx = ∫_0^t [1 − B(t − x)] (1/t) dx = (1/t) ∫_0^t (1 − B(τ)) dτ = α(t)/t.

Thus B_{k,n} given A_n has a binomial distribution, so that

P{B_{k,n} | A_n} = C(n, k) p_t^k (1 − p_t)^{n−k},  k = 0, 1, 2, ··· n,

and

P{X(t) = k} = Σ_{n=k}^∞ e^{−λt} ((λt)^n/n!) C(n, k) (α(t)/t)^k ((1/t) ∫_0^t B(τ) dτ)^{n−k}

= e^{−λt} ([λα(t)]^k/k!) Σ_{n=k}^∞ (λ ∫_0^t B(τ) dτ)^{n−k}/(n−k)!

= ([λα(t)]^k/k!) e^{−λ[t − ∫_0^t B(τ) dτ]} = ([λα(t)]^k/k!) e^{−λ ∫_0^t [1−B(τ)] dτ} = ([λα(t)]^k/k!) e^{−λα(t)},  k = 0, 1, 2, ···   (i)

(b)

lim_{t→∞} α(t) = ∫_0^∞ [1 − B(τ)] dτ = E{s}   (ii)

where we have made use of (5-52)-(5-53). Using (ii) in (i), we obtain

lim_{t→∞} P{x(t) = k} = e^{−ρ} ρ^k/k!   (iii)

where ρ = λE{s}.