Upper Bounds on the Probability of Error for M-ary Orthogonal Signaling in White Gaussian Noise

KAPIL K. CHAWLA, STUDENT MEMBER, IEEE, AND DILIP V. SARWATE, FELLOW, IEEE

Abstract - The optimum detection of M orthogonal equiprobable equal-energy signals in additive white Gaussian noise is considered, and two upper bounds for the probability of error are derived. The behavior of these bounds is discussed, and they are compared with previously known bounds for various values of signal-to-noise ratio and M. Some numerical results are presented.

I. INTRODUCTION

We consider the classical problem of detecting one of M orthogonal equal-energy signals in additive white Gaussian noise (AWGN). The detector observes r(t), 0 ≤ t ≤ T, and has to choose one of the following M hypotheses:

H_0: r(t) = √E s_0(t) + w(t), 0 ≤ t ≤ T
...
H_{M-1}: r(t) = √E s_{M-1}(t) + w(t), 0 ≤ t ≤ T

where s_i(t), i = 0, ..., M-1, are known signals orthonormal on [0, T], E denotes the signal energy, and w(t) is AWGN with one-sided power spectral density N_0. Assuming equally likely hypotheses, it is well known that the decision rule that minimizes the average probability of error is the maximum likelihood estimator. The detector correlates r(t) with each s_i(t), i = 0, ..., M-1. If the ith correlator has the largest output, the detector chooses hypothesis H_i. Let the normalized output of the correlation with s_i(t) be denoted by Y_i (Fig. 1):

Y_i = (2/N_0)^{1/2} ∫_0^T r(t) s_i(t) dt.

Without loss of generality, we assume that H_0 is true. Then, the Y_i's are independent unit-variance Gaussian random variables with E[Y_0] = μ = (2E/N_0)^{1/2} and E[Y_i] = 0, i > 0. E/N_0 is commonly referred to as the signal-to-noise ratio (SNR). Another commonly used parameter is E/(N_0 log_2 M), which is the ratio of the bit energy to the noise density (SNR_b).

Manuscript received January 4, 1989; revised June 20, 1989. This work was supported by the Joint Services Electronics Program under Contract N00014-84-C-0149. A preliminary version of this correspondence was presented at the 1989 Conference on Information Sciences and Systems, Johns Hopkins University, Baltimore, MD, March 22-24, 1989.

The authors are with the Coordinated Science Laboratory and the Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801.

IEEE Log Number 8932930.

Fig. 1. Maximum likelihood receiver: a bank of M correlators; the hypothesis corresponding to the largest normalized output Y_i is chosen.

It is well known that the error probability, i.e., the probability that the receiver chooses hypothesis H_i, i ≠ 0, when H_0 is true, is given by [5]

P_e = 1 - ∫_{-∞}^{∞} [Φ(x)]^{M-1} φ(x - μ) dx    (1)

where Φ(·) is the distribution function and φ(·) is the density function of a unit Gaussian random variable. Except when M = 2, the integral in (1) cannot be simplified further, and numerical integration is required to find the value of P_e. Thus, it is useful to find simple tight bounds or approximations for P_e, especially for system feasibility studies.

Since P_e = P[∪_{i≠0} {Y_i > Y_0}], it may be overbounded using the union bound

P_e ≤ (M-1) P[Y_1 > Y_0] = (M-1) Q(μ/√2)    (2)

where Q(x) = 1 - Φ(x). In addition, using the result [1] that

Q(x) ≤ e^{-x²/2}/(x√(2π)),  x ≥ 0    (3)

we obtain the following bound:

P_e ≤ (M-1) e^{-μ²/4}/(μ√π).    (4)

For later reference, we let P*(M, μ) be the function defined by the right-hand side of (4). The union bound (2) is quite tight when μ is large and M is small, i.e., when SNR_b is large. However, if SNR_b is small, which is true of many communication systems, the bound is loose and often exceeds 1. In fact, at μ = 0, (2) has value (M-1)/2. Note that (4) is even looser, since it increases without bound as μ approaches 0.

Gallager [4] derived two other bounds for P_e. The first is a generalization of the union bound and is given by (5.6.1) in [4]:

P_e ≤ min_{0≤ρ≤1} (M-1)^ρ exp[-ρμ²/(2(1+ρ))].    (5)

When the minimization is carried out, the following bound is obtained (see (2.5.16) in [6]):

P_e ≤ (M-1) e^{-μ²/4},  ln(M-1)/μ² ≤ 1/8    (5a)
P_e ≤ exp[-(μ/√2 - (ln(M-1))^{1/2})²],  1/8 ≤ ln(M-1)/μ² ≤ 1/2    (5b)
P_e ≤ 1,  ln(M-1)/μ² ≥ 1/2.    (5c)



Note that ln(M-1)/μ² is proportional to 1/SNR_b. Thus, when SNR_b is large, (5a) is applicable. However, (5a) is actually a weaker bound than (2). On the other hand, for low SNR_b, bound (5) exceeds 1 (just as (2) does), and this is reflected in (5c). For other values of SNR_b, however, (5b) can provide a better bound than (2). Gallager's second bound (see (8.2.39) in [4]) is tighter than bounds (2) and (5) for low SNR_b and is given by

P_e ≤ min_{d>0} { Q(μ - d) + (μ/(2d)) P*(M, μ) Q(√2 (d - μ/2)) }    (6)

where the minimum value occurs at approximately d = (2 ln M)^{1/2}. In this paper, we obtain two bounds for P_e. The first is a modified version of (6) and is proved using a variation of Gallager's technique for obtaining (6). The second bound is derived differently, using some results from the asymptotic theory of extreme order statistics. The bounds are presented in the following theorems.

Theorem 1: For the problem and the receiver structure previously described, the probability of error may be overbounded by

P_e ≤ min_{d>0} { Q(μ - d) + P*(M, μ)[Q(√2 (d - μ/2)) - e^{(μd - μ²/4)} Q(√2 d)] }    (7)

where the minimum value occurs at d = d_1 satisfying

√2 (M-1) Q(√2 d_1) e^{d_1²/2} = 1.    (8)
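A numerical sketch of Theorem 1 (ours, not the authors' code; SciPy assumed): it solves (8) for d_1 with a scalar root-finder and evaluates (7) at that d, using scipy.special.erfcx, erfcx(x) = e^{x²} erfc(x), so that the factors e^{d²/2} Q(√2 d) and e^{(μd - μ²/4)} Q(√2 d) are computed without overflow:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import erfc, erfcx

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

def d1(M):
    # Equation (8): sqrt(2)(M-1) e^{d^2/2} Q(sqrt(2) d) = 1.  Note that
    # e^{d^2/2} Q(sqrt(2) d) = 0.5 * erfcx(d) * e^{-d^2/2} (stable form).
    g = lambda d: (M - 1) * erfcx(d) * np.exp(-d * d / 2.0) / np.sqrt(2.0) - 1.0
    return brentq(g, 1e-9, 40.0)  # g is decreasing; assumes M >= 3

def bound7(M, mu, d=None):
    # Equation (7) evaluated at a fixed d (default d = d1(M)).
    if d is None:
        d = d1(M)
    p_star = (M - 1) * np.exp(-mu ** 2 / 4.0) / (mu * np.sqrt(np.pi))  # (4)
    # e^{mu d - mu^2/4} Q(sqrt(2) d) = 0.5 * erfcx(d) * exp(-(d - mu/2)^2)
    bracket = (Q(np.sqrt(2.0) * (d - mu / 2.0))
               - 0.5 * erfcx(d) * np.exp(-(d - mu / 2.0) ** 2))
    return Q(mu - d) + p_star * bracket

if __name__ == "__main__":
    print(d1(32), bound7(32, 3.0))
```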

Theorem 2: For the problem and the receiver structure described above, if M ≥ 23, then the probability of error may be overbounded by

P_e ≤ c_1 Q(μ - d_0) + (M-1) e^{(d_0² - μd_0 + (d_0 - μ)r)} Q(2d_0 - μ + r)    (9)

where

d_0 = [2 ln(M-1)]^{1/2} - r    (10)

c_1 = 1 + 1/(2e)    (11)

r = [ln(ln(M-1)) + ln(4π)] / (2[2 ln(M-1)]^{1/2})    (12)

so that (d_0 + r)r = [ln(ln(M-1)) + ln(4π)]/2. Note that d_0 is the approximate solution of the following equation for d [3]:

(M-1) e^{-d²/2} = (2π)^{1/2} d.    (13)
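The constants in (9) depend only on M and can be precomputed once; a minimal sketch (ours; it assumes the definitions (10)-(12) and SciPy):

```python
import numpy as np
from scipy.special import erfc

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

def theorem2_constants(M):
    gamma = np.sqrt(2.0 * np.log(M - 1.0))  # [2 ln(M-1)]^{1/2}
    r = (np.log(np.log(M - 1.0)) + np.log(4.0 * np.pi)) / (2.0 * gamma)  # (12)
    d0 = gamma - r                           # (10)
    c1 = 1.0 + 1.0 / (2.0 * np.e)            # (11)
    return d0, r, c1

def bound9(M, mu):
    # Equation (9); the theorem requires M >= 23.
    d0, r, c1 = theorem2_constants(M)
    second = (M - 1) * np.exp(d0 ** 2 - mu * d0 + (d0 - mu) * r) * Q(2 * d0 - mu + r)
    return c1 * Q(mu - d0) + second

if __name__ == "__main__":
    print(bound9(4096, 4.0))
```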

We prove the theorems in Section II, discuss the various bounds, and present numerical results in Section III.

II. DERIVATION OF THE BOUNDS

We begin by stating and proving two lemmas that are used in the proofs of the theorems.

Lemma 1: For any z ≥ 0, the following inequalities hold:

e^{z²/2} Q(√2 z) ≤ Q(z) ≤ √2 e^{z²/2} Q(√2 z).    (14)

Proof: We first prove that Q(z) < √2 e^{z²/2} Q(√2 z). For z = 0, the result is obvious. As z increases without bound, both Q(z) and √2 e^{z²/2} Q(√2 z) approach zero. It is easy to show that the derivative of (√2 e^{z²/2} Q(√2 z) - Q(z)) is negative for z ≥ 0, and hence Q(z) < √2 e^{z²/2} Q(√2 z), z ≥ 0. To prove the left-hand inequality in (14), we first prove that

Q(z) > z e^{-z²/2} / ((2π)^{1/2}(1 + z²)),  z ≥ 0.    (15)

It is known that (see (26.2.24) in [1]):

Q(z) ≥ φ(z)[(4 + z²)^{1/2} - z]/2.    (16)

Simple algebraic manipulation shows that

[(4 + z²)^{1/2} - z]/2 > z/(1 + z²).    (17)

It follows from (16) and (17) that Q(z) > z e^{-z²/2}/((2π)^{1/2}(1 + z²)). Using the upper bound (3) for Q(√2 z) and the lower bound (15) for Q(z), we obtain

Q(z) / (e^{z²/2} Q(√2 z)) ≥ √2 z²/(1 + z²) ≥ 1  for z ≥ (√2 + 1)^{1/2}.    (18)

In addition, numerical evaluation shows that Q(z) ≥ e^{z²/2} Q(√2 z) for 0 ≤ z ≤ (√2 + 1)^{1/2}. Hence, the left-hand inequality in (14) follows. □
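A quick numerical spot-check of (14) and (15) (ours; the identity e^{z²/2} Q(√2 z) = 0.5 erfcx(z) e^{-z²/2} keeps the middle quantity stable):

```python
import numpy as np
from scipy.special import erfc, erfcx

Q = lambda x: 0.5 * erfc(x / np.sqrt(2.0))

z = np.linspace(0.0, 8.0, 2001)
mid = 0.5 * erfcx(z) * np.exp(-z * z / 2.0)   # e^{z^2/2} Q(sqrt(2) z)
assert np.all(mid <= Q(z) + 1e-15)            # left inequality in (14)
assert np.all(Q(z) <= np.sqrt(2.0) * mid)     # right inequality in (14)
low = z * np.exp(-z * z / 2.0) / (np.sqrt(2.0 * np.pi) * (1.0 + z * z))
assert np.all(Q(z) >= low)                    # (15)
print("(14) and (15) hold on the grid")
```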

Lemma 2: For any z_1, z_2 ≥ 0, the following inequalities hold:

(z_1/(z_1 + z_2)) Q(z_1) e^{-(z_1 z_2 + z_2²/2)} ≤ Q(z_1 + z_2) ≤ Q(z_1) e^{-(z_1 z_2 + z_2²/2)}.    (19)

Proof: We first prove that Q(z_1 + z_2) ≤ Q(z_1) e^{-(z_1 z_2 + z_2²/2)}. Define a function g_1: [0, ∞) → (-∞, ∞) as follows:

g_1(z_2) = Q(z_1) e^{z_1²/2} - Q(z_1 + z_2) e^{(z_1 + z_2)²/2},  z_2 ≥ 0.

Note that to prove the right-hand inequality in (19), it suffices to show that g_1(z_2) ≥ 0 for z_2 ≥ 0. It is easy to verify that g_1'(z_2) > 0. In addition, g_1(0) = 0. It follows that g_1(z_2) ≥ 0 for z_2 ≥ 0. Next, we prove that Q(z_1 + z_2) ≥ (z_1/(z_1 + z_2)) Q(z_1) e^{-(z_1 z_2 + z_2²/2)}. Define a function g_2: [0, ∞) → (-∞, ∞) as follows:

g_2(z_2) = (z_1 + z_2) Q(z_1 + z_2) e^{(z_1 + z_2)²/2} - z_1 Q(z_1) e^{z_1²/2},  z_2 ≥ 0.

To prove the left-hand inequality in (19), it suffices to show that g_2(z_2) ≥ 0, z_2 ≥ 0. We have

g_2'(z_2) = Q(z_1 + z_2) e^{(z_1 + z_2)²/2} (1 + (z_1 + z_2)²) - (z_1 + z_2)/(2π)^{1/2} > 0  (using (15)).


In addition, g_2(0) = 0. It follows that g_2(z_2) ≥ 0 for z_2 ≥ 0. □
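An analogous spot-check of (19) over a grid of (z_1, z_2) pairs (ours):

```python
import numpy as np
from scipy.special import erfc

Q = lambda x: 0.5 * erfc(x / np.sqrt(2.0))

z1, z2 = np.meshgrid(np.linspace(0.01, 6.0, 120), np.linspace(0.0, 6.0, 120))
upper = Q(z1) * np.exp(-(z1 * z2 + z2 ** 2 / 2.0))  # right side of (19)
lower = (z1 / (z1 + z2)) * upper                    # left side of (19)
mid = Q(z1 + z2)
assert np.all(mid <= upper + 1e-15)
assert np.all(mid >= lower - 1e-15)
print("(19) holds on the grid")
```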

Proof of Theorem 1: Let Y_max be the random variable defined by

Y_max = max_{i=1,...,M-1} Y_i.

Then, the probability of error P_e is given by P_e = P[Y_max > Y_0]. The density function of Y_max may be written as

f_{Y_max}(x) = (M-1)[Φ(x)]^{M-2} φ(x),  -∞ < x < ∞.

It follows that for any z,

P[Y_max > z] = ∫_z^∞ (M-1)[Φ(x)]^{M-2} φ(x) dx ≤ ∫_z^∞ (M-1) φ(x) dx = (M-1) Q(z).

Therefore, P[Y_max > z] ≤ min[(M-1)Q(z), 1]. Thus, the probability of error may be written as

P_e = ∫_{-∞}^{∞} P[Y_max > x] φ(x - μ) dx
    ≤ ∫_{-∞}^d φ(x - μ) dx + ∫_d^∞ φ(x - μ)(M-1) Q(x) dx    (20)

where d is any real number. For any d > 0, we obtain

P_e ≤ Q(μ - d) + √2 (M-1) ∫_d^∞ e^{x²/2} Q(√2 x) φ(x - μ) dx  (using Lemma 1).    (21)

Using integration by parts, the above expression reduces to

P_e ≤ Q(μ - d) + P*(M, μ)[Q(√2 (d - μ/2)) - e^{(μd - μ²/4)} Q(√2 d)]

and the best bound is obtained by choosing d to minimize the right-hand side. To find the optimum value of d, we rewrite (21) as

P_e ≤ ∫_{-∞}^d 1 · φ(x - μ) dx + ∫_d^∞ [√2 (M-1) e^{x²/2} Q(√2 x)] φ(x - μ) dx.

Since √2 (M-1) e^{x²/2} Q(√2 x) is a decreasing function of x, x > 0, the best split point is where this weight falls to 1; that is, (7) is minimized for d = d_1 given in (8). □

We have noted before that the derivations of (6) and (7) are similar. Before proceeding to prove Theorem 2, we compare the two derivations and the resulting bounds. Gallager [4] derives (20) but simplifies the integral in (20) by using bound (3) for Q(x) and then overbounding 1/x by 1/d to obtain

P_e ≤ Q(μ - d) + (μ/(2d)) P*(M, μ) Q(√2 (d - μ/2)).    (22)

Although Gallager derived (22) assuming that d = (2 ln M)^{1/2}, it is easy to see that the derivation is valid for any positive d. Hence, (22) can be improved by minimizing over d. In comparison, we use bound (14) for Q(x) to simplify the integral in (20). Because (14) is a tighter bound than (3), bound (7) is tighter than (6), as we show next.

Consider the two bounds (6) and (7) given by

P_e ≤ Q(μ - d) + (μ/(2d)) P*(M, μ) Q(√2 (d - μ/2))    (6)

P_e ≤ Q(μ - d) + P*(M, μ)[Q(√2 (d - μ/2)) - e^{(μd - μ²/4)} Q(√2 d)]    (7)

for any fixed value of d. For μ ≥ 2d, it is easy to see that (7) is tighter than (6). Now, suppose that μ < 2d. Substituting

z_1 = √2 d - μ/√2 = √2 (d - μ/2) > 0
z_2 = μ/√2 ≥ 0

in Lemma 2, we obtain

Q(√2 d) ≥ ((2d - μ)/(2d)) Q(√2 (d - μ/2)) e^{-(μd - μ²/4)}.    (23)

Some algebra shows that (23) is equivalent to

(μ/(2d)) Q(√2 (d - μ/2)) ≥ Q(√2 (d - μ/2)) - e^{(μd - μ²/4)} Q(√2 d).

Hence, (7) is tighter than (6) for all d > 0.
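The improvement is easy to see numerically; a short sketch (ours) evaluates (6) and (7) on a grid of d for fixed M and μ, and by the argument above (7) should never exceed (6):

```python
import numpy as np
from scipy.special import erfc, erfcx

Q = lambda x: 0.5 * erfc(x / np.sqrt(2.0))

M, mu = 32, 3.0
p_star = (M - 1) * np.exp(-mu ** 2 / 4.0) / (mu * np.sqrt(np.pi))
d = np.linspace(0.05, 6.0, 400)
b6 = Q(mu - d) + (mu / (2.0 * d)) * p_star * Q(np.sqrt(2.0) * (d - mu / 2.0))
b7 = Q(mu - d) + p_star * (Q(np.sqrt(2.0) * (d - mu / 2.0))
                           - 0.5 * erfcx(d) * np.exp(-(d - mu / 2.0) ** 2))
assert np.all(b7 <= b6 + 1e-12)
print("min over d:", b6.min(), "(6) vs", b7.min(), "(7)")
```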

Proof of Theorem 2: To derive (9), we use a different technique. Let

b(M-1) = [2 ln(M-1)]^{-1/2}    (24)

and consider a random variable L with distribution function

P[(L - d_0)/b(M-1) ≤ t] = exp(-e^{-t}) - (4/(M-1)) e^{-2t},  t ≥ 0
                         = 0,  t < 0.    (25)

Note that M ≥ ⌈4e + 1⌉ = 12 is required in order for (25) to be a valid distribution function. In the Appendix, we show that L is stochastically larger [2] than Y_max, that is,

P[L ≤ x] ≤ P[Y_max ≤ x],  -∞ < x < ∞

and

P[L > x] ≥ P[Y_max > x],  -∞ < x < ∞.

Now, since Y_0 is a Gaussian random variable with mean μ and unit variance, Y_0' = (Y_0 - d_0)/b(M-1) is a Gaussian random variable with mean μ_0 = (μ - d_0)/b(M-1) and variance σ_0² = 1/[b(M-1)]². Let L' denote the variable (L - d_0)/b(M-1). Then, we have

P_e = P[Y_max > Y_0] ≤ P[L > Y_0] = P[L' > Y_0'] = ∫_{-∞}^{∞} P[L' > t] f_{Y_0'}(t) dt.    (26)
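The dominance claim can be spot-checked numerically (our sketch; it assumes (25) and the constants (10)-(12)): the distribution of L' is compared with the exact distribution of γ(Y_max - d_0), which is Φ(d_0 + t/γ)^{M-1} with γ = 1/b(M-1):

```python
import numpy as np
from scipy.stats import norm

M = 64
gamma = np.sqrt(2.0 * np.log(M - 1.0))
r = (np.log(np.log(M - 1.0)) + np.log(4.0 * np.pi)) / (2.0 * gamma)
d0 = gamma - r
t = np.linspace(0.0, 10.0, 1001)
F_L = np.exp(-np.exp(-t)) - (4.0 / (M - 1)) * np.exp(-2.0 * t)  # (25), t >= 0
F_Ymax = norm.cdf(d0 + t / gamma) ** (M - 1)  # exact law of gamma*(Y_max - d0)
assert np.all(F_L <= F_Ymax + 1e-12)  # L stochastically larger than Y_max
print("P[L' <= t] <= P[gamma (Y_max - d0) <= t] on the grid")
```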


Using a method similar to the one used in proving the right-hand inequality of (14), it is easy to show that

1 - exp(-e^{-t}) < e^{-t},  t > 0.

Thus

1 - F_{L'}(t) = 1 - exp(-e^{-t}) + (4/(M-1)) e^{-2t} < e^{-t} + (4/(M-1)) e^{-2t},  t > 0

and F_{L'}(0) = e^{-1} - 4/(M-1). Hence

P_e ≤ Q(μ - d_0) + ∫_0^∞ [e^{-t} + (4/(M-1)) e^{-2t}] f_{Y_0'}(t) dt.    (27)

The integrals in (27) may be evaluated using integration by parts; they are of the standard form ∫_0^∞ e^{-st} f_{Y_0'}(t) dt = e^{s²σ_0²/2 - sμ_0} Q(sσ_0 - μ_0/σ_0). Noting that 4/(M-1) < 1/(2e) for M ≥ 23 and substituting the values of μ_0 and σ_0², we obtain bound (9),

P_e ≤ c_1 Q(μ - d_0) + (M-1) e^{(d_0² - μd_0 + (d_0 - μ)r)} Q(2d_0 - μ + r)

with c_1 and r as defined in (11) and (12). □

To gain some insight into bound (9), we compare it with bound (6). Assume that 2d_0 ≥ μ. Then, using (19) in (9) and manipulating the expression algebraically, we obtain

P_e ≤ c_1 Q(μ - d_0) + (M-1) e^{(d_0² - μd_0 + (d_0 - μ)r)} Q(2d_0 - μ) e^{-[(2d_0 - μ)r + r²/2]}
    = c_1 Q(μ - d_0) + μ√π P*(M, μ) e^{-(d_0 r + r²/2)} e^{(d_0 - μ/2)²} Q(2d_0 - μ).    (28)

Let P**(M, μ) be the right-hand side of (28). From Lemma 1, it follows that Q(z) ≈ √2 e^{z²/2} Q(√2 z) for large z. Therefore, we use the approximation

Q(2(d_0 - μ/2)) ≈ (1/√2) e^{-(d_0 - μ/2)²} Q(√2 (d_0 - μ/2))

in (28) to obtain

P**(M, μ) ≈ c_1 Q(μ - d_0) + (μ√π/√2) e^{r²/2} e^{-(d_0 + r)r} P*(M, μ) Q(√2 (d_0 - μ/2)).    (29)

Since (d_0 + r)r = [ln(ln(M-1)) + ln(4π)]/2, we have e^{-(d_0 + r)r} = [4π ln(M-1)]^{-1/2}, and substituting in (29), we obtain

P**(M, μ) ≈ c_1 Q(μ - d_0) + (c_2 μ / (2[2 ln(M-1)]^{1/2})) P*(M, μ) Q(√2 (d_0 - μ/2))    (30)

where c_2 = e^{r²/2}. Noting that d_0 ≈ (2 ln(M-1))^{1/2}, the right-hand side of (30) looks similar to Gallager's bound (6) evaluated at d = d_0, except that we would expect it to be somewhat larger than (6) because c_1, c_2 > 1. We conclude that bounds (9) and (6) are similar, except that (6) may be a little tighter than (9).

III. DISCUSSION

A. Numerical Considerations

We comment briefly on the numerical evaluation of bounds (7) and (9) on P_e as compared with determining P_e exactly. In most applications, P_e << 1. Therefore, the right-hand side of (1) is not in a form suitable for numerical evaluation, and it is much better to use the following alternative expression for P_e:

P_e = ∫_{-∞}^{∞} (M-1)[Φ(x)]^{M-2} φ(x) Φ(x - μ) dx.    (31)

This integral can be evaluated via several different numerical integration methods, but the computation can be time-consuming. Note that in most cases, we need to evaluate P_e as a function of SNR, and the numerical integration must be repeated for each SNR.

On the other hand, we have the upper bounds (7) and (9) for P_e that were derived in the last section. The constants c_1, d_0, and r, which appear in (9), are independent of μ (and hence of the SNR). Furthermore, from (8), we see that d_1, which is the value of d that minimizes (7), does not depend on μ. Thus, once we have computed these constants for any particular value of M, bounds (7) and (9) require only the computation of exponentials and Q(·). Strictly speaking, the evaluation of Q(·) also requires numerical integration. However, excellent approximations to Q(·) are known [1] and widely used. In fact, such approximations are generally also used in the numerical evaluation of (31), in order to avoid a double integral and to simplify the factor [Φ(x)]^{M-2} in the integrand. Lastly, consider the solution of (8) to find d_1. From Lemma 1, we have

Q(d_1) < √2 e^{d_1²/2} Q(√2 d_1) ≤ √2 Q(d_1)

and hence, using (8),

(M-1) Q(d_1) ≤ 1 ≤ √2 (M-1) Q(d_1).    (32)

Let d* and d** be the solutions of (M-1)Q(d) = 1 and √2 (M-1)Q(d) = 1, respectively. From (32) and the fact that Q(d) is a decreasing function of d, it follows that d* ≤ d_1 ≤ d**. There are excellent rational approximations to the solution of equations of the type Q(x) = p, 0 < p ≤ 0.5 (see (26.2.22) and (26.2.23) of [1]). Using any of these approximations, it is easy to compute d* and d**. Numerical evaluation shows that for M ≥ 10, (d** - d*) < 0.2. In fact, numerical evaluation also shows that for M ≥ 5, d_0 is an even tighter upper bound for d_1 than d**, and that (d_0 - d*) < 0.04 for M > 100. Thus, it is easy to solve (8) for d_1 starting from d*, d**, or d_0. Of course, it is not necessary to solve (8) at all if one is prepared to accept the slightly looser bound obtained by substituting d_0 or d* in (7).
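These suggestions are easy to implement; a sketch (ours; norm.isf inverts Q):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import erfcx
from scipy.stats import norm

for M in (10, 32, 1024, 4096):
    d_star = norm.isf(1.0 / (M - 1))                      # (M-1) Q(d) = 1
    d_2star = norm.isf(1.0 / (np.sqrt(2.0) * (M - 1)))    # sqrt(2)(M-1) Q(d) = 1
    g = lambda d: (M - 1) * erfcx(d) * np.exp(-d * d / 2.0) / np.sqrt(2.0) - 1.0
    d_1 = brentq(g, d_star - 1.0, d_2star + 1.0)          # refine via (8)
    print(M, round(d_star, 4), round(d_1, 4), round(d_2star, 4))
```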

B. Behavior of the Bounds

To gain a better understanding of the bounds, we examine how the different terms in the bounds compare under various values of SNR and M. Although we investigate only the behavior of (7), the behavior of the other bounds is similar and can be investigated in a similar fashion.

1. μ = cd_0: Let us assume that μ = cd_0, or equivalently SNR_b ≈ c² ln 2, where c ≥ 2. We define this to be a condition of relatively high SNR_b. It is well known [1] that the right-hand side of (3) is also a good approximation for Q(x) when x is large; that is,

Q(x) ≈ e^{-x²/2}/(x√(2π)).    (33)

Using (33) in (4), we obtain

(M-1) Q(μ/√2) ≈ P*(M, μ).    (34)

In addition, using (33) again, we have

Q(μ - d_0) ≈ e^{-(c-1)²d_0²/2} / ((c-1) d_0 √(2π)).    (35)

From (34) and (35), it follows that

[Q(μ - d_0) - P*(M, μ) e^{(μd_0 - μ²/4)} Q(√2 d_0)] / [P*(M, μ) Q(√2 (d_0 - μ/2))]
    ≈ exp[-(μ - 2d_0)²/4] / [2(c-1) d_0 √π Q(√2 (d_0 - μ/2))].    (36)

Observing that for μ ≥ 2d_0, Q(√2 (d_0 - μ/2)) ≥ 0.5, we note that the second term in (7) is larger than the other two. Hence, especially for large d_0, the bound behaves like P*(M, μ) Q(√2 (d_0 - μ/2)). It is well known that the union bound (2) is tight for high SNR_b. Thus, we expect bound (7) also to be quite tight under these conditions.

2. μ = d_0: Next, we assume that μ = d_0 + ε, where ε is small compared to d_0. Then, using (19) as an approximation, we have

Q(μ - d_0) = Q(ε) ≈ (1/2) e^{-ε²/2}.

In addition, using (33), we obtain

P*(M, μ)[Q(√2 (d_0 - μ/2)) - e^{(μd_0 - μ²/4)} Q(√2 d_0)] ≈ 1/(2π d_0²).    (37)

From (37), we see that Q(μ - d_0) is larger than the other two terms in (7), and therefore the bound behaves approximately as Q(μ - d_0), especially for large d_0. It is worth noting that when μ ≤ √2 d_0, the union bound (2) exceeds 1. Therefore, (7) is quite useful as compared to (2) for μ ≈ d_0.

Finally, we investigate the behavior of bound (7) as μ → 0, i.e., as the signal energy is made vanishingly small. Let P_0 be defined as the limit of the probability of error as μ → 0. From symmetry considerations, it is easy to see that

P_0 = 1 - 1/M.

From (7), for any fixed d > 0, we obtain, using L'Hopital's rule,

lim_{μ→0} P*(M, μ)[Q(√2 (d - μ/2)) - e^{(μd - μ²/4)} Q(√2 d)] = (M-1)[e^{-d²}/(2π) - (d/√π) Q(√2 d)]

and therefore

P_0 ≤ 1 - Q(d) + (M-1)[e^{-d²}/(2π) - (d/√π) Q(√2 d)].    (38)

Using [1] (this is (15) with z = √2 d)

Q(√2 d) > √2 d e^{-d²} / ((2π)^{1/2}(1 + 2d²))

we have

(M-1)[e^{-d²}/(2π) - (d/√π) Q(√2 d)] < (M-1) e^{-d²} / (2π(1 + 2d²)).

Choosing d so that Q(d) ≈ e^{-d²/2}/((2π)^{1/2} d) = 1/(M-1) and substituting in (38), we obtain

P_0 ≤ (2M-3)/(2M-2) = 1 - 1/(2M-2).    (39)

Observe that at μ = 0, the union bound (2) has value (M-1)/2, or 1 if we write the right-hand side of (2) as min[1, (M-1)Q(μ/√2)]. On the other hand, (7) not only has value less than 1, but is very nearly the same as P_0. A similar result can be proved for (6). We conclude that at low SNR, bounds (6) and (7) are very useful.
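The low-SNR behavior can be confirmed numerically (our sketch): as μ → 0, (7) evaluated at d = d_0 stays below 1 and approaches roughly 1 - 1/(2M-2), close to P_0 = 1 - 1/M, while the union bound (2) blows up:

```python
import numpy as np
from scipy.special import erfc, erfcx

Q = lambda x: 0.5 * erfc(x / np.sqrt(2.0))

M = 32
gamma = np.sqrt(2.0 * np.log(M - 1.0))
d0 = gamma - (np.log(np.log(M - 1.0)) + np.log(4.0 * np.pi)) / (2.0 * gamma)
for mu in (1e-1, 1e-2, 1e-3):
    p_star = (M - 1) * np.exp(-mu ** 2 / 4.0) / (mu * np.sqrt(np.pi))
    b7 = Q(mu - d0) + p_star * (Q(np.sqrt(2.0) * (d0 - mu / 2.0))
                                - 0.5 * erfcx(d0) * np.exp(-(d0 - mu / 2.0) ** 2))
    print(mu, b7, (M - 1) * Q(mu / np.sqrt(2.0)))
print("P0 =", 1.0 - 1.0 / M, "; bound (39):", (2.0 * M - 3.0) / (2.0 * M - 2.0))
```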


Fig. 2. Plot of P_e versus SNR for M = 32 and M = 4096. SNR_b(dB) = SNR(dB) - 7.0 for M = 32; SNR_b(dB) = SNR(dB) - 10.8 for M = 4096.

C. Numerical Results

In Fig. 2, we plot the various bounds and the actual value of P_e as a function of SNR for two different values of M. The actual value of P_e was evaluated to three significant figures by numerical integration, using the subroutine QDAGI from an IMSL software library. For evaluating (7), we used the value d_0 for d. For evaluating (6), we used [2 ln(M)]^{1/2} for d, which is the value that Gallager used in his derivation. From the figure, we see that bounds (6), (7), and (9) are tighter than the union bound (2) at low SNR. Bound (7) appears to be the tightest of all. For M = 32, bound (9) is nearly as good as bound (7) at low SNR but is not as good at high SNR, whereas for M = 4096, bound (9) is quite tight over the entire range of SNR shown and is also considerably better than bound (6).

D. Concluding Remarks

In this correspondence, we derived two upper bounds for P_e. Both bounds are quite tight and significantly easier to compute than the exact value of P_e. Bound (7) appears to improve upon all previously known bounds for P_e. Bound (9) is reasonably tight, but not as tight as (7) and therefore not as useful. This may be explained by the fact that the convergence of the maximum of Gaussian random variables to its asymptotic distribution is very slow [3]. We note that bound (9) is significantly tighter at M = 4096 than at M = 32. In deriving (9), we used a new technique that may prove useful for similar problems with different noise distributions. In addition, an improved bound on F_L(x) could tighten (9) even further. This is a topic for further research. As a final remark, the probability of error for the case of M simplex signals can be obtained from any of our results by increasing the SNR by a factor of M/(M-1) [5].

APPENDIX

We first state two lemmas, the proofs of which may be found on pages 8-10 of [3].

Lemma A1: For any z, 0 < z < 1/2, the following inequality holds:

e^{-z-z²} < 1 - z < e^{-z}.    (A.1)

Let X_1, ..., X_n be i.i.d. random variables with common distribution function F(x). Define X_max(n) as the random variable

X_max(n) = max_{i=1,...,n} X_i.

Lemma A2: For any x that satisfies

1 - F(x) < 1/(2n^{1/2})

we have

P[X_max(n) < x] > T(x) - 4n[1 - F(x)]² F^n(x)    (A.2)

where T(x) = exp[-n(1 - F(x))]. Bound (A.2) may be derived from (A.1) by the substitution of (1 - F(x)) for z and then using the inequality |e^{-w} - 1| < 2w, 0 < w < 1/2.

For the problem at hand, we have Y_max = max_{i=1,...,M-1}(Y_i), where the Y_i are i.i.d. unit Gaussian random variables. It follows that for x that satisfy

1 - Φ(x) < 1/(2(M-1)^{1/2})

we have

P[Y_max < x] > T(x) - 4(M-1)[1 - Φ(x)]² Φ^{M-1}(x)

where T(x) = exp[-(M-1)(1 - Φ(x))]. Let

x = a(M-1) + b(M-1) y,  y > 0    (A.3)

with a(M-1) = d_0 and b(·) as defined in (24). For convenience, we denote [2 ln(M-1)]^{1/2} by γ, so that b(M-1) = 1/γ and γ = d_0 + r (cf. (12)). Numerical evaluation shows that (M-1)Q(d_0) < 1 for M ≥ 5. Using the fact that Φ(z) is an increasing function of z, it follows that for x defined by (A.3), 1 - Φ(x) ≤ Q(d_0) < 1/(M-1) ≤ 1/(2(M-1)^{1/2}), so the condition above is satisfied for M ≥ 5.

Lemma A3: For all x defined by (A.3) and for M ≥ 5,

(M-1)(1 - Φ(x)) e^y < 1.    (A.4)

Proof: Define a function h: [0, ∞) → (-∞, ∞) as follows:

h(y) = (M-1)(1 - Φ(d_0 + y/γ)) e^y,  y ≥ 0.

Because h(y) is differentiable, any y in the interior of [0, ∞) that maximizes h(y) must satisfy h'(y) = 0. Thus, if y* ∈ (0, ∞) maximizes h(y), then

Q(d_0 + y*/γ) = φ(d_0 + y*/γ)/γ.

Hence, writing z* = d_0 + y*/γ and using (M-1) = e^{γ²/2} together with e^{γr} = (2π)^{1/2} γ (cf. (12)),

h(y*) = (M-1) Q(z*) e^{y*} = (M-1) φ(z*) e^{y*}/γ = e^{-(z* - γ)²/2} ≤ 1

with equality possible only if z* = γ, which is impossible since Q(γ) < φ(γ)/γ. It is easy to see that as y → ∞, h(y) → 0. In addition, we have h(0) = (M-1)Q(d_0) < 1 for M ≥ 5. It follows that (M-1)(1 - Φ(x)) e^y < 1, y ≥ 0. □

Using Lemmas A2 and A3, we obtain, for x = d_0 + y/γ with y ≥ 0 (by (A.4), (M-1)(1 - Φ(x)) < e^{-y}, so T(x) > exp(-e^{-y}) and 4(M-1)[1 - Φ(x)]² Φ^{M-1}(x) < (4/(M-1)) e^{-2y}),

1 - P[Y_max < x] < 1 - exp(-e^{-y}) + (4/(M-1)) e^{-2y} = 1 - P[L < x]    (A.6)

where L is defined by (25). It follows that L is stochastically larger than Y_max.

REFERENCES

[1] M. Abramowitz and I. A. Stegun, Eds., Handbook of Mathematical Functions. New York: Dover, 1965.
[2] P. J. Bickel and K. J. Doksum, Mathematical Statistics. Oakland, CA: Holden-Day, 1977.
[3] J. Galambos, The Asymptotic Theory of Extreme Order Statistics. New York: Wiley, 1978.
[4] R. G. Gallager, Information Theory and Reliable Communication. New York: Wiley, 1968.
[5] A. J. Viterbi, Principles of Coherent Communication. New York: McGraw-Hill, 1966.
[6] A. J. Viterbi and J. K. Omura, Principles of Digital Communication and Coding. New York: McGraw-Hill, 1979.
