
Direct-Sequence Spread-Spectrum Multiple-Access Communications with Random Signature Sequences: A Large Deviations Analysis

John S. Sadowsky, Member, IEEE, and Randall K. Bahr, Member, IEEE

Abstract - A DS-SSMA bit-error probability analysis is developed using large deviations theory. Let m denote the number of interfering spread spectrum signals and let n denote the signature sequence length. Then the large deviations limit is as n → ∞ with m fixed. A tight asymptotic expression for the bit-error probability is proven, and in addition, recent large deviations results dealing with the importance sampling Monte Carlo estimation technique are applied to obtain accurate and computationally efficient estimates of the bit-error probability for finite values of m and n. The large deviations point of view is also compared to the conventional asymptotics of central limit theory and the associated Gaussian approximation. The Gaussian approximation is accurate when the ratio m/n is moderately large and all signals have roughly equal power. In the near/far situation, however, the Gaussian approximation is quite poor. In contrast, large deviations techniques are more accurate in the near/far situation, and it is here that these methods provide some important practical insight.

Index Terms - Spread spectrum multiple access, large deviations.

I. INTRODUCTION

In a DS-SSMA (direct-sequence spread-spectrum multiple-access) communications system, bit errors occur because of MAI (multiple-access interference) as well as other types of additive noise sources. Here we investigate the bit-error probability using large deviations theory. Let m denote the number of interfering DS-SS signals and let n denote the length of the signature sequence (per information bit). Then our large deviations approach considers the limit as n → ∞ with m (and the MAI signal amplitudes) held constant.

We consider a conventional correlator receiver that is assumed to be coherently synchronized with the desired signal, but the MAI signals are not synchronous with respect to this reference. Let Θ = (Φ₁,…,Φ_m, T₁,…,

Manuscript received August 24, 1989; revised October 29, 1990. This work was presented in part at the IEEE International Symposium on Information Theory, San Diego, CA, January 14-19, 1990.

J. S. Sadowsky is with the School of Electrical Engineering, Purdue University, West Lafayette, IN 47907.

R. K. Bahr is with the Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ 85721.

IEEE Log Number 9142955.

T_m) denote the random phases (Φ_i's) and time delays (T_i's) of the m interfering signals. We first consider the conditional bit-error probability

$$P_n(\theta) = \mathcal{P}(\text{bit error} \,|\, \Theta = \theta).$$

The subscript n in the notation simply reflects the fact that we are considering asymptotics in the limit as n → ∞. A straightforward application of large deviations theory yields

$$P_n(\theta) \sim \frac{C(\theta)}{\sqrt{n}}\, e^{-I(\theta)n}, \qquad (1.1)$$

where a_n ~ b_n means precisely that a_n/b_n → 1 as n → ∞.

I(θ) is the large deviations rate that is obtained from the Legendre-Fenchel transform of the asymptotic log-moment generating function Λ(α|θ). Our goal is to estimate the average bit-error probability P̄_n = E[P_n(Θ)]. To this end we shall prove the following limit, which is our main theoretical contribution:

$$\bar P_n \sim \bar C\, n^{-(3m+1)/2}\, e^{-\bar I n}, \qquad (1.2)$$

where Ī = min_θ I(θ) is the worst-case rate with respect to the phase/delay vector θ. Ī and the corresponding worst-case phase/delay vectors are identified as the solution of a minimax problem involving the function Λ(α|θ).

Conventional asymptotic analysis of DS-SSMA systems is performed using central limit theory. As illustrated in Fig. 1, central limit and large deviations theories consider distinctly different limits. To apply central limit theory, one must formulate the asymptotics in such a way that the ratio of the total MAI variance to the desired signal energy is held constant so that the bit-error probability will tend to a positive limit that can be evaluated using the limiting Gaussian distribution. The most appropriate way to formulate this kind of limit is to fix the bit transmission time T and let n → ∞ with the ratio m/n constant. The central limit theorem is then applied to the decision statistic expressed as the sum of m independent

¹Formula (1.2) describes the behavior of P̄_n as n → ∞ with all other parameters fixed. The factor n^{−(3m+1)/2} does not imply that P̄_n decreases with n fixed and increasing m, because both C̄ and Ī also depend on m.


Fig. 1. Comparison of large deviations theory and central limit theory asymptotics (horizontal axis: n = signature sequence length).

MAI signal responses. In this limit we have T_c = the chip duration = T/n, and hence, bandwidth ≈ 2/T_c = 2n/T → ∞. This "large bandwidth limit" is the appropriate limit to consider for SSMA systems that attempt to efficiently utilize an expanded bandwidth by accommodating as many users as possible.

The independent summands to which the central limit theorem is applied are the m independent MAI responses, not the sum over n chips. Unfortunately, in the literature we find both implicit and explicit erroneous claims that central limit theory applies to the limit n → ∞ with other parameters held constant. Since the bit-error probability vanishes in this limit, the convergence in distribution conclusion of a central limit theorem generally does not apply to the "large n limit."²

Consider the near/far situation where the total MAI response is dominated by a few signals having much higher energies than the desired signal. If we apply the Lindeberg-Feller central limit theorem to the large bandwidth limit (with m/n fixed) as previously described, it is not necessary that all MAI signals have exactly the same energy. However, the uniform asymptotic negligibility condition³ requires that the sum of m independent MAI signal responses not be dominated by just a few terms [17, p. 31]. This is a necessary condition for the central limit theorem to hold, and it rules out the near/far case. In contrast, large deviations theory is not restricted by the near/far situation. In fact, it turns out that it is precisely in the near/far case that (1.2) converges most rapidly. Thus, large deviations theory provides an important complement to the conventional central limit theory analysis of DS-SSMA systems because it provides some valuable understanding of the performance of these systems operating in near/far conditions.
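As a concrete (hypothetical) illustration of this condition, suppose one interferer carries 100 times the per-signal variance of each of the other m − 1. Then

$$\frac{\max_i \operatorname{var}_i}{\sum_{i=1}^{m} \operatorname{var}_i} = \frac{100}{(m-1) + 100},$$

which stays close to 1 for any moderate m, so uniform asymptotic negligibility fails and the Lindeberg-Feller theorem offers no guarantee; only m → ∞ restores it.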

In addition to the asymptotic result, we also develop some accurate and numerically efficient methods for evaluating P̄_n for finite n.

²An exception is the formulation in [1] that correctly applies central limit theory to derive Pitman's ARE (asymptotic relative efficiency) for nonlinear correlator receivers, but to do this, signal amplitudes must be programmed to vanish as n → ∞. ARE is useful for the design of receivers that are resistant to impulsive noise. However, MAI is asymptotically negligible in the ARE formulation.

³The "uniform asymptotic negligibility" condition for a sum of m independent (but not necessarily identically distributed) random variables requires that the ratio of the maximum variance to the total variance must tend to zero as m → ∞.

We employ a Monte Carlo estimation technique known as importance sampling for estimation of both P_n(θ) and the expectation P̄_n = E[P_n(Θ)]. The computational cost is O(mn^{3/2}). The n^{3/2} factor in this computational cost is due to the estimation of P_n(θ). However, if we use (1.1) as an approximation for P_n(θ), then the overall computational cost is reduced to just O(m)! This approximation is known to be very accurate even for relatively small values of n, say n = 20 (see [7, p. 130]). Our algorithms are easily programmed and run quite efficiently with a modest amount of computing power.⁴ Moreover, we can also compute precision estimates using the sample variance estimator.

Several previous works have considered the problems of estimating DS-SSMA error probabilities. Some of the early work includes the moment bound approach of [20] and the bounds and approximations of [8] and [16]. The Chernoff bound (which is an element of large deviations theory) had been previously considered in [16]; however, there is a distinction. In [16] the authors compute the unconditional Chernoff bound. Cramér's theorem, which is our fundamental large deviations theorem, does not hold in the unconditional setting. Hence, the observation in [16] that the unconditional Chernoff bound of [16] is not tight is not surprising. A numerical method for computing arbitrarily tight upper and lower bounds is proposed in [11], but this method is extremely computationally intensive for large values of m and n. Our Monte Carlo methods offer an alternative for computing arbitrarily accurate estimates for large values of m or n, particularly in the near/far situation.

The near/far problem has been studied extensively in [12], [13], [18], and [19]. Much of this work deals with the design and performance of optimal and suboptimal near/far resistant receivers, whereas here we consider only the standard correlator receiver. In [19], an "asymptotic efficiency" is defined for Gaussian channels, but this definition is not related to either the ARE of [1] or the large deviations approach here. It does shed light on the near/far problem from yet another point of view. It is pointed out in [19] that for fixed signal sets the bit-error probability of the conventional correlator receiver tends to 1/2 as near/far energy ratios become large. In other words, the correlator receiver is not near/far resistant. In the large deviations setting we see this same phenomenon; specifically, Ī ↓ 0 as the near/far power ratios become large. This indicates that simply lengthening the signature sequence is not an effective strategy for mitigating a severe near/far problem in conventional receivers, and hence, it further justifies the need for near/far resistant DS-SSMA receivers as in [12] and [13].

The body of this paper assumes no previous exposure to modern large deviations theory other than an elementary understanding of the Chernoff bound. Proofs involving technical arguments are deferred to appendixes.

4Contact J. S. Sadowsky for a free copy of Fortran software. Send e-mail to [email protected].


Section II develops a representation of the correlation receiver's decision statistic as a sum of conditionally independent and identically distributed (i.i.d.) random variables. This representation is necessary for the application of large deviations theory. Much of Section II will appear to be standard; however, we have not found this particular representation in the literature, and most references do not develop the conditioning on Θ, which is critical in our analysis. Section III develops the large deviations of P_n(θ) and Section IV considers P̄_n = E[P_n(Θ)]. Section V presents numerical results for both the situation of equal power MAI signals and the near/far problem. Gaussian approximations and large deviations estimates are presented.

II. AN EXPRESSION FOR THE DECISION STATISTIC

For the transmission of a single information bit, we consider a signal of the form ±a₀X(t), where a₀ is the signal amplitude, the ± sign carries the information bit, and

$$X(t) = \sum_{k=0}^{n-1} U_k\, \psi(t - kT_c) \qquad (2.1)$$

is the signature waveform. ψ(t) is the chip waveform, which we assume vanishes outside of the time interval [0, T_c); T_c is the chip duration, and we normalize so that ∫ψ(t)²dt = 1. U_n = (U₀,…,U_{n−1}) is the signature sequence of length n. We assume U_n is an i.i.d. sequence of ±1-valued random variables with 𝒫(U_k = +1) = 𝒫(U_k = −1) = 1/2. The time duration for a single bit transmission is T = nT_c. The total baseband received signal is

$$Y(t) = \pm a_0 X(t) + \sum_{i=1}^{m} Y^{(i)}(t) + N(t), \qquad (2.2)$$

where Y^{(i)}(t) is the ith MAI signal and N(t) is an additive noise process. A linear correlation receiver is used to test for the transmitted bit value. Given the received signal Y(t), the receiver's decision statistic is

$$D = \int_0^T Y(t)\, X(t)\, dt. \qquad (2.3)$$

The ith MAI signal is

$$Y^{(i)}(t) = a_i \cos(\Phi_i) \sum_{k=-\infty}^{\infty} V_k^{(i)}\, \psi(t - kT_c - T_i). \qquad (2.4)$$

This is a baseband formulation; the baseband amplitude is the factor a_i cos(Φ_i), whereas a_i is the carrier amplitude and Φ_i is the signal's random phase with respect to the receiver's carrier reference. The sequence {V_k^{(i)}} includes both the signature sequence and the data modulation of the MAI signal. Since the signature sequence is i.i.d. with values ±1 being equally likely, multiplying by any ±1-valued data sequence does not alter the marginal distribution of the product. That is, the product of data modulation and signature sequences is again i.i.d. with values ±1 being equally likely. Likewise, a factor of ±1 can be absorbed into {V_k^{(i)}}, and hence, we can take the

phase uniformly distributed on [−π/2, π/2) instead of [−π, π). The delay of the MAI signal is also random with respect to the receiver's time reference. However, the i.i.d. sequence distribution of {V_k^{(i)}} is not altered by random time shifts, and hence, {V_k^{(i)}} can absorb the discrete "modulo T_c" delay. As a result, with no loss of generality, we may assume that the delay T_i in (2.4) is uniformly distributed on [0, T_c), instead of uniform on [0, T).

For notational convenience, all MAI phases and delays are grouped into a single random vector Θ = (Φ₁,…,Φ_m, T₁,…,T_m). Specific values are denoted in lowercase θ = (φ₁,…,φ_m, τ₁,…,τ_m).

Now consider the correlation receiver's response to just Y^{(i)}(t). The key to our representation of D is that we break the range of integration in (2.3) into subintervals that are synchronous with the chips of Y^{(i)}(t). This is done separately for each of the MAI signals. In this fashion, the ith MAI response is

$$\int_0^T Y^{(i)}(t)\, X(t)\, dt = \sum_{k=0}^{n-1} Z_k^{(i)}, \qquad (2.5)$$

where Z_k^{(i)} is the partial correlation over the time interval [(k−1)T_c + τ_i, kT_c + τ_i) for k ≥ 1, and Z₀^{(i)} is the sum of the two partial correlations over the time intervals [0, τ_i) and [(n−1)T_c + τ_i, nT_c). Define

$$\tilde U_k = U_{k-1}\, U_k \qquad (2.6)$$

and notice that the sequence Ũ_n = (Ũ₁,…,Ũ_{n−1}) is again an i.i.d. sequence with 𝒫(Ũ_k = +1) = 𝒫(Ũ_k = −1) = 1/2. Then for k ≥ 1, we find that

$$Z_k^{(i)} = \begin{cases} U_{k-1} V_{k-1}^{(i)}\, a_i(\Phi_i, T_i), & \text{if } \tilde U_k = +1,\\ U_{k-1} V_{k-1}^{(i)}\, \hat a_i(\Phi_i, T_i), & \text{if } \tilde U_k = -1, \end{cases} \qquad (2.7)$$

where the functions a_i(·,·) and â_i(·,·) are the partial chip correlations

$$a_i(\phi,\tau) = a_i \cos(\phi) \left[ \int_0^{T_c-\tau} \psi(r+\tau)\,\psi(r)\,dr + \int_{T_c-\tau}^{T_c} \psi(r-T_c+\tau)\,\psi(r)\,dr \right] \qquad (2.8)$$

and

$$\hat a_i(\phi,\tau) = a_i \cos(\phi) \left[ \int_0^{T_c-\tau} \psi(r+\tau)\,\psi(r)\,dr - \int_{T_c-\tau}^{T_c} \psi(r-T_c+\tau)\,\psi(r)\,dr \right]. \qquad (2.9)$$

Notice from (2.7) that Z_k^{(i)} = ±a_i(Φ_i, T_i) when Ũ_k = +1, and Z_k^{(i)} = ±â_i(Φ_i, T_i) when Ũ_k = −1. Note also that 𝒫(U_{k−1}V_{k−1}^{(i)} = +1 | Ũ_k = u) = 1/2, independent of the value u = ±1. It thus follows from (2.7) that

$$Z_k^{(i)} = \begin{cases} \pm a_i(\Phi_i, T_i), & \text{if } \tilde U_k = +1,\\ \pm \hat a_i(\Phi_i, T_i), & \text{if } \tilde U_k = -1, \end{cases}$$

where the + or − signs occur with probability 1/2.


Hence, the conditional probability mass function of Z_k^{(i)} given {Ũ_n = ũ_n; Θ = θ} is p_i(z|ũ_k; θ), where

$$p_i(z\,|\,\tilde u;\theta) = \begin{cases} 1/2, & \text{for } z = \pm a_i \text{ given } \tilde u = +1,\\ 1/2, & \text{for } z = \pm \hat a_i \text{ given } \tilde u = -1. \end{cases} \qquad (2.10)$$

From (2.7) we see that Z_k^{(i)} depends on Ũ_n only through the value of Ũ_k. Since Ũ_n is an i.i.d. sequence, it follows that {Z_k^{(i)}: k = 1,…,n−1} is a conditionally i.i.d. sequence when conditioned on {Θ = θ}.

The random variable Z₀^{(i)} is different than the Z_k^{(i)}'s for k ≥ 1 because it accounts for partial correlation over two time intervals. The formula is

$$Z_0^{(i)} = \begin{cases} V_{n-1}^{(i)} U_{n-1}\, a_i(\Phi_i, T_i), & \text{if } V_{-1}^{(i)} U_0 = V_{n-1}^{(i)} U_{n-1},\\ V_{n-1}^{(i)} U_{n-1}\, \hat a_i(\Phi_i, T_i), & \text{if } V_{-1}^{(i)} U_0 \neq V_{n-1}^{(i)} U_{n-1}. \end{cases} \qquad (2.11)$$

From formula (2.11) we deduce that the conditional distribution of Z₀^{(i)} given {U_n = u_n; Θ = θ} is uniform on the set {±â_i(φ_i, τ_i), ±a_i(φ_i, τ_i)}, and this conditional distribution does not actually depend on u_n. Consequently, when conditioned on {Θ = θ}, Z₀^{(i)} is independent of U_n, and hence, also independent of Ũ_n.

Next we consider the receiver's response to the additive noise process N(t). We write

$$\int_0^T N(t)\, X(t)\, dt = \sum_{k=0}^{n-1} N_k. \qquad (2.12)$$

N(t) is a white noise process, but possibly not Gaussian, and

$$N_k = U_k \int_{kT_c}^{(k+1)T_c} N(t)\, \psi(t - kT_c)\, dt, \qquad (2.13)$$

which is again an i.i.d. sequence. Consideration of non-Gaussian noise is useful for investigating the effects of impulsive noise [1]. However, for brevity we shall proceed under the assumption that N(t) is a Gaussian process, and hence, {N_k} is an i.i.d. Gaussian sequence having variance σ².

Finally, from (2.2), (2.3), (2.5), and (2.12), we can write the receiver's decision statistic as

$$D = \pm a_0 n + S_n, \qquad (2.14)$$

where S_n is the total MAI and noise response:

$$S_n = \sum_{k=0}^{n-1} Z_k, \qquad (2.15)$$

where

$$Z_k = N_k + \sum_{i=1}^{m} Z_k^{(i)}. \qquad (2.16)$$

We summarize three important properties of this representation: 1) for fixed k ≥ 1 the random variables Z_k^{(1)},…,Z_k^{(m)} and N_k are conditionally independent when conditioned on {Θ = θ; Ũ_k = ũ_k}; 2) the random variables Z₀^{(1)},…,Z₀^{(m)} and N₀ are conditionally independent when conditioned on just {Θ = θ}; 3) the random variables Z₀, Z₁,…,Z_{n−1} defined by (2.16) are conditionally independent when conditioned on {Θ = θ}. Moreover, Z₁,…,Z_{n−1} are conditionally i.i.d. given {Θ = θ}.

In the important case of rectangular chip waveforms, ψ(t) = 1 for t ∈ [0, T_c), the partial chip correlations defined in (2.8) and (2.9) reduce to a_i(φ, τ) = a_i T_c cos(φ) and â_i(φ, τ) = a_i T_c cos(φ)[1 − 2τ/T_c].
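To make the representation concrete, the following minimal sketch (ours, not the authors' Fortran software) estimates the conditional error probability 𝒫(S_n ≥ a₀n | Θ = θ) analyzed in the next section by ordinary Monte Carlo, directly from the conditionally i.i.d. structure above. It assumes rectangular chips with T_c = 1 (so that the normalization ∫ψ(t)²dt = 1 holds) and, for simplicity, draws Z₀ from the same law as the k ≥ 1 chips. All names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def pn_ordinary_mc(a0, amps, phis, taus, sigma, n, trials=20_000):
    """Ordinary Monte Carlo estimate of P(S_n >= a0*n | Theta = theta)
    using the representation (2.14)-(2.16). Rectangular chips, Tc = 1."""
    a = amps * np.cos(phis)                       # a_i(phi_i, tau_i)
    ahat = amps * np.cos(phis) * (1 - 2 * taus)   # ahat_i(phi_i, tau_i)
    # U~_k selects a_i or ahat_i for ALL i simultaneously; an independent
    # +/-1 sign then applies per i, as in the display following (2.7).
    utilde = rng.random((trials, n)) < 0.5
    mag = np.where(utilde[:, :, None], a, ahat)
    sign = rng.choice([-1.0, 1.0], size=(trials, n, len(amps)))
    Sn = (sign * mag).sum(axis=(1, 2)) + sigma * np.sqrt(n) * rng.standard_normal(trials)
    return float(np.mean(Sn >= a0 * n))

# Example: three equal-power interferers at the worst-case phases/delays.
# pn_ordinary_mc(1.0, np.ones(3), np.zeros(3), np.zeros(3), 0.1, 31)
```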

III. LARGE DEVIATIONS OF THE CONDITIONAL BIT-ERROR PROBABILITY

In this section we consider the asymptotic behavior of the conditional bit-error probability. With no loss of generality we consider only the case that −a₀X(t) is the transmitted desired signal. Then from (2.14) the conditional bit-error probability is

$$P_n(\theta) = \mathcal{P}(D \ge 0 \,|\, \Theta = \theta) = \mathcal{P}(S_n \ge a_0 n \,|\, \Theta = \theta). \qquad (3.1)$$

Hereafter, when conditioning on the event {Θ = θ} we

use the abbreviated notation a_i = a_i(φ_i, τ_i) and â_i = â_i(φ_i, τ_i). In some intermediate expressions we suppress the conditioning on {Θ = θ} in the notation where it is convenient to do so.

A. An Expression for the Conditional Moment Generating Function

Recalling that the random variables Z₀, Z₁,…,Z_{n−1} are conditionally independent and i.i.d. for k ≥ 1, we can express the conditional moment generating function of S_n as

$$M_n(\alpha|\theta) = E[e^{\alpha S_n} \,|\, \Theta = \theta] = \Lambda_0(\alpha|\theta)\, \lambda(\alpha|\theta)^{n-1}, \qquad (3.2)$$

where

$$\Lambda_0(\alpha|\theta) = E[e^{\alpha Z_0} \,|\, \Theta = \theta] \qquad (3.3a)$$

and

$$\lambda(\alpha|\theta) = E[e^{\alpha Z_k} \,|\, \Theta = \theta] \qquad (3.3b)$$

for k ≥ 1. When conditioned on {Θ = θ}, Z₀^{(1)},…,Z₀^{(m)} and N₀ are independent and Z₀^{(i)} is uniformly distributed on {±a_i, ±â_i}. Hence,

$$\Lambda_0(\alpha|\theta) = E[e^{\alpha N_0}] \prod_{i=1}^{m} E[e^{\alpha Z_0^{(i)}}] = e^{\alpha^2\sigma^2/2} \prod_{i=1}^{m} \left[ \tfrac{1}{2}\cosh(\alpha a_i) + \tfrac{1}{2}\cosh(\alpha \hat a_i) \right]. \qquad (3.4)$$

Next, we obtain λ(α|θ) using the conditional independence of the random variables Z_k^{(1)},…,Z_k^{(m)} given {Θ = θ; Ũ_k = ũ_k} along with the conditional probability mass function (2.10). Define

$$\lambda_+(\alpha|\theta) = \tfrac{1}{2}\, E[e^{\alpha Z_k} \,|\, \Theta = \theta; \tilde U_k = +1] = \tfrac{1}{2}\, e^{\alpha^2\sigma^2/2} \prod_{i=1}^{m} \cosh(\alpha a_i) \qquad (3.5a)$$

and

$$\lambda_-(\alpha|\theta) = \tfrac{1}{2}\, E[e^{\alpha Z_k} \,|\, \Theta = \theta; \tilde U_k = -1] = \tfrac{1}{2}\, e^{\alpha^2\sigma^2/2} \prod_{i=1}^{m} \cosh(\alpha \hat a_i). \qquad (3.5b)$$


Since 𝒫(Ũ_k = +1) = 𝒫(Ũ_k = −1) = 1/2, we have

$$\lambda(\alpha|\theta) = \lambda_-(\alpha|\theta) + \lambda_+(\alpha|\theta). \qquad (3.6)$$

Formulas (3.2), (3.4)-(3.6) together determine the moment generating function M_n(α|θ).

B. The Chernoff Bound and the Large Deviations Limit

The Chernoff bound for P_n(θ) is

$$P_n(\theta) \le \exp\big( -[\alpha a_0 n - \log(M_n(\alpha|\theta))] \big),$$

which holds for all α ≥ 0. Now define Λ(α|θ) = log(λ(α|θ)). Then applying (3.2) we obtain

$$P_n(\theta) \le \frac{\Lambda_0(\alpha|\theta)}{\lambda(\alpha|\theta)}\, \exp\big( -[\alpha a_0 - \Lambda(\alpha|\theta)]\, n \big) \qquad (3.7)$$

for all α ≥ 0. The Chernoff bound (3.7) is seen to be an exponential function of the signature sequence length n. To optimize this bound for large n, we neglect the first factor and minimize just the exponent with respect to α ≥ 0. The result is

$$P_n(\theta) \le \frac{\Lambda_0(\alpha^*(\theta)|\theta)}{\lambda(\alpha^*(\theta)|\theta)}\, \exp(-I(\theta)n), \qquad (3.8)$$

where

$$I(\theta) = \max_{\alpha \ge 0} \{\alpha a_0 - \Lambda(\alpha|\theta)\} \qquad (3.9)$$

and α*(θ) is the value of α that attains the maximum in (3.9).

As a function of a₀, the relationship (3.9) is known as the Legendre-Fenchel transform or convex conjugate of the function Λ(α|θ) [5, Ch. VI]. In this application, a₀ = the desired signal amplitude is fixed.

The optimized Chernoff bound (3.8) establishes that P_n(θ) vanishes exponentially in the limit as n → ∞. There is still the possibility that P_n(θ) may vanish at a somewhat faster rate than the Chernoff bound. However, large deviations theory establishes that this is not the case. The optimized Chernoff bound is exponentially tight in the sense that P_n(θ) = C_n(θ)exp(−I(θ)n) with C_n(θ) varying slowly relative to the exponential factor. The following proposition is a direct application of Cramér's theorem. For proof, see [5].

Proposition 1: The following conclusions hold for each θ ∈ [−π/2, π/2)^m × [0, T_c)^m: 1) as a function of α, Λ(α|θ) is analytic and strictly convex on ℝ; 2) the equation Λ'(α*(θ)|θ) = a₀ has a unique solution α*(θ) > 0 and

$$I(\theta) = \alpha^*(\theta)\, a_0 - \Lambda(\alpha^*(\theta)|\theta); \qquad (3.10)$$

and 3) I(θ) > 0 and

$$\lim_{n\to\infty} \frac{1}{n}\, \log(P_n(\theta)) = -I(\theta). \qquad (3.11)$$
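The rate computation in Proposition 1 is easy to carry out numerically. The sketch below (ours; variable names are illustrative) evaluates Λ(α|θ) from (3.4)-(3.6) and solves Λ'(α*(θ)|θ) = a₀ by bracketing and root-finding, returning α*(θ) and I(θ) per (3.10).

```python
import numpy as np
from scipy.optimize import brentq

def log_lambda(alpha, a, ahat, sigma):
    """Lambda(alpha|theta) = log lambda(alpha|theta), with lambda from (3.5)-(3.6):
    lambda = (1/2) e^{alpha^2 sigma^2/2} [prod_i cosh(alpha a_i) + prod_i cosh(alpha ahat_i)]."""
    return (alpha**2 * sigma**2 / 2 - np.log(2)
            + np.logaddexp(np.sum(np.log(np.cosh(alpha * a))),
                           np.sum(np.log(np.cosh(alpha * ahat)))))

def rate_I(a0, a, ahat, sigma):
    """Solve Lambda'(alpha*) = a0 (Proposition 1); return (alpha*, I(theta)) per (3.10)."""
    dL = lambda al, h=1e-6: (log_lambda(al + h, a, ahat, sigma)
                             - log_lambda(al - h, a, ahat, sigma)) / (2 * h)
    hi = 1.0
    while dL(hi) < a0:      # Lambda' is increasing (strict convexity), so grow the bracket
        hi *= 2.0
    alpha_star = brentq(lambda al: dL(al) - a0, 1e-12, hi)
    return alpha_star, alpha_star * a0 - log_lambda(alpha_star, a, ahat, sigma)
```

For example, rate_I(1.0, np.ones(3), np.ones(3), 0.1) gives the worst-case rate Ī for three equal-power interferers, since at θ = 0 we have a_i = â_i (Proposition 4).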

C. The α-Conjugate Distribution

In the large deviations theory of i.i.d. sums, there is a twisted density function (also called the tilted density) which plays a central role in certain constructions and proofs. The associated i.i.d. sequence distribution is called the α-conjugate distribution. This device is known in the engineering literature [7, Appendix 5A]. We specify this process distribution below for our particular representation (2.14)-(2.16).

Given {Θ = θ}, the α-conjugate distribution of Ũ_n is i.i.d. with

$$\mathcal{P}^{(\alpha)}(\tilde U_k = +1) = \frac{\lambda_+(\alpha|\theta)}{\lambda(\alpha|\theta)} \quad \text{and} \quad \mathcal{P}^{(\alpha)}(\tilde U_k = -1) = \frac{\lambda_-(\alpha|\theta)}{\lambda(\alpha|\theta)}. \qquad (3.12)$$

Next, when conditioned on {Θ = θ; Ũ_n = ũ_n}, the random variables Z_k^{(1)},…,Z_k^{(m)} are conditionally independent and depend on the sequence ũ_n only through the single value ũ_k. For k ≥ 1, the conditional α-conjugate probability mass function of Z_k^{(i)} given {Θ = θ; Ũ_n = ũ_n} is p_i^{(α)}(z|ũ_k; θ), where

$$p_i^{(\alpha)}(z\,|\,\tilde u;\theta) = \begin{cases} \dfrac{\exp(\alpha z)}{2\cosh(\alpha a_i)}, & \text{for } z = \pm a_i \text{ given } \tilde u = +1,\\[2mm] \dfrac{\exp(\alpha z)}{2\cosh(\alpha \hat a_i)}, & \text{for } z = \pm \hat a_i \text{ given } \tilde u = -1. \end{cases} \qquad (3.13)$$

Notice that (3.13) reduces to (2.10) when α = 0. The random variables Z₀^{(1)},…,Z₀^{(m)} are again independent of Ũ_n under the α-conjugate distribution. The α-conjugate probability mass function for Z₀^{(i)} is

$$p_0^{(i,\alpha)}(z|\theta) = \frac{\exp(\alpha z)}{2[\cosh(\alpha a_i) + \cosh(\alpha \hat a_i)]} \qquad (3.14)$$

for z ∈ {±a_i, ±â_i}. Notice that (3.14) reduces to the uniform distribution when α = 0. Finally, the α-conjugate distribution for the additive Gaussian noise component N_k is

$$f_N^{(\alpha)}(v) = \exp(\alpha v - \alpha^2\sigma^2/2)\, f_N(v), \qquad (3.15)$$

where f_N(v) is the zero-mean Gaussian probability density function with variance σ². A straightforward manipulation indicates that f_N^{(α)}(v) is a Gaussian probability density function with variance σ², but the α-conjugate mean is ασ², which is nonzero except when α = 0. Again, we see that f_N^{(α)}(v) reduces to f_N(v) when α = 0.

The α-conjugate distribution is useful because it provides an alternative expression for the conditional bit-error probability. This expression is

$$P_n(\theta) = \frac{\Lambda_0(\alpha|\theta)}{\lambda(\alpha|\theta)}\, \exp\big( -[\alpha a_0 - \Lambda(\alpha|\theta)]\, n \big)\, E^{(\alpha)}\!\left[ e^{-\alpha(S_n - a_0 n)}\, 1_{[a_0 n,\infty)}(S_n) \right],$$

where E^{(α)}[·] denotes the expectation operation for the α-conjugate distribution and 1_{[a₀n,∞)}(·) is the indicator function of the interval [a₀n, ∞). (1_A(s) = 1 for s ∈ A and


1_A(s) = 0 for s ∉ A.) The factor in front of the expectation in the last display is just the Chernoff bound (3.7). In particular, when α = α*(θ) we have

$$P_n(\theta) = C_n(\theta)\, \exp(-I(\theta)n), \qquad (3.16)$$

where

$$C_n(\theta) = \frac{\Lambda_0(\alpha^*(\theta)|\theta)}{\lambda(\alpha^*(\theta)|\theta)}\, E^{(\alpha^*(\theta))}\!\left[ e^{-\alpha^*(\theta)(S_n - a_0 n)}\, 1_{[a_0 n,\infty)}(S_n) \right]. \qquad (3.17)$$

D. A Tighter Large Deviations Approximation

The expressions (3.16) and (3.17) provide a useful formula for the nonexponential part of the error probability P_n(θ). The following proposition indicates that C_n(θ) is O(1/√n). This proposition is a formal statement of the limit (1.1). See Bahadur and Rao [2] for proof.

Proposition 2:

$$\lim_{n\to\infty} \sqrt{n}\, C_n(\theta) = C(\theta), \qquad (3.18)$$

where

$$C(\theta) = \frac{\Lambda_0(\alpha^*(\theta)|\theta)}{\lambda(\alpha^*(\theta)|\theta)} \cdot \frac{1}{\alpha^*(\theta)\sqrt{2\pi\, \Lambda''(\alpha^*(\theta)|\theta)}}. \qquad (3.19)$$
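With the Bahadur-Rao constant (3.19), the approximation (1.1) can be evaluated up to a one-dimensional root-find. The sketch below (ours) reuses log_lambda and rate_I from the earlier sketch and computes Λ''(α*|θ) by central differences.

```python
import numpy as np

def pn_approx(a0, a, ahat, sigma, n, h=1e-5):
    """Large deviations approximation (1.1):
    P_n(theta) ~ C(theta)/sqrt(n) * exp(-I(theta)*n), with C(theta) from (3.19)."""
    alpha, I = rate_I(a0, a, ahat, sigma)
    d2 = (log_lambda(alpha + h, a, ahat, sigma) - 2 * log_lambda(alpha, a, ahat, sigma)
          + log_lambda(alpha - h, a, ahat, sigma)) / h**2     # Lambda''(alpha*)
    # prefactor Lambda_0/lambda from (3.4) and (3.6)
    lam0 = np.exp(alpha**2 * sigma**2 / 2) * np.prod((np.cosh(alpha * a) + np.cosh(alpha * ahat)) / 2)
    lam = np.exp(log_lambda(alpha, a, ahat, sigma))
    C = (lam0 / lam) / (alpha * np.sqrt(2 * np.pi * d2))
    return C / np.sqrt(n) * np.exp(-I * n)
```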

E. Efficient Monte Carlo Estimation

Consider estimation of P_n(θ) using Monte Carlo simulations. Let P̂_n(θ) denote the Monte Carlo estimate of P_n(θ) that results from L independent simulations. Next, consider the Monte Carlo estimator's relative precision, ε = var[P̂_n(θ)]^{1/2}/P_n(θ) = the estimator's standard deviation as a fraction of the true value. Let L_n(θ) denote the number of independent simulations required to estimate P_n(θ) to a 100×ε% relative precision. Then a simple computation indicates that L_n(θ) ~ ε^{−2}/P_n(θ). For example, a 10% relative precision requires L_n(θ) ~ 100/P_n(θ) simulations. Since P_n(θ) vanishes like e^{−I(θ)n}, it follows that L_n(θ) grows like ε^{−2}e^{I(θ)n}. Thus, we see that the ordinary Monte Carlo method has an exponentially growing computational requirement as a function of n!

An alternative to the ordinary Monte Carlo approach is to generate simulation data using the α*(θ)-conjugate distribution and form a Monte Carlo estimate of the expectation in (3.17). This estimate is then applied to (3.16) to get an estimate of P_n(θ). Intuitively, we see that the reason ordinary Monte Carlo simulation is so computationally intensive is that it attempts to estimate something that is exponentially small in n. On the other hand, if we generate data from the α*(θ)-conjugate distribution to estimate the expectation in (3.17), then we are estimating an O(1/√n) quantity. In [4], [5] it is demonstrated that this alternative approach is asymptotically the most efficient way to estimate P_n(θ) using any i.i.d. (or even Markov) importance sampling simulation scheme. (Importance sampling is discussed further in Section IV.) In fact, it is shown in [10] that the required number of simulation runs is L_n(θ) ~ O(√n ε^{−2}), a substantial improvement over the exponential computational cost of ordinary Monte Carlo. The computational requirement (that is, actual computer run time) for a single simulation is O(mn). Hence, the total computational requirement for Monte Carlo estimation using the α*(θ)-conjugate distribution is O(√n ε^{−2}) × O(mn) = O(ε^{−2}mn^{3/2}).
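A sketch of this tilted estimator (ours, building on sample_conjugate_chip and rate_I above): it simulates S_n under the α*(θ)-conjugate distribution and averages the bounded functional in (3.17). For simplicity it draws Z₀ from the k ≥ 1 chip law, which makes the prefactor Λ₀/λ in (3.16) identically 1 for the simulated model.

```python
import numpy as np

def pn_exact_is(a0, a, ahat, sigma, n, runs=1000):
    """Tilted (importance sampling) estimate of P_n(theta) via (3.16)-(3.17)."""
    alpha, I = rate_I(a0, a, ahat, sigma)
    Sn = np.zeros(runs)
    for _ in range(n):                 # accumulate n conditionally i.i.d. chip variables
        Sn += sample_conjugate_chip(alpha, a, ahat, sigma, runs)
    weights = np.exp(-alpha * (Sn - a0 * n)) * (Sn >= a0 * n)
    return float(np.mean(weights)) * np.exp(-I * n)
```

Unlike ordinary Monte Carlo, the averaged quantity here is O(1/√n) rather than exponentially small, which is exactly the efficiency gain described above.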

IV. THE AVERAGE BIT-ERROR PROBABILITY

We now consider the average bit-error probability P̄_n = E[P_n(Θ)]. The key point of the previous section was that the conditional probabilities P_n(θ) decay exponentially with rate I(θ). In this section we demonstrate that P̄_n also vanishes exponentially in n and that the rate of decay is the worst-case (slowest) rate of decay of P_n(θ).

A. Worst Case θ

Recall that θ = (φ₁,…,φ_m, τ₁,…,τ_m) and the admissible range of θ is [−π/2, π/2)^m × [0, T_c)^m. However, it is convenient to use the closed rectangle E = [−π/2, π/2]^m × [0, T_c]^m as the domain of θ. This causes no loss of generality because Λ(α|θ) and Λ'(α|θ) are continuous in θ and have continuous extensions to the closed domain, and the uniform distribution for Θ = (Φ₁,…,Φ_m, T₁,…,T_m) puts zero probability on the boundary of E. We introduce the notation φ = (φ₁,…,φ_m) ∈ [−π/2, π/2]^m and τ = (τ₁,…,τ_m) ∈ [0, T_c]^m, so θ = (φ, τ).

The worst-case rate is Ī = min_θ I(θ). Recalling (3.9), we see that identification of the worst-case phase/delay vector can be viewed as a minimax problem:

$$\bar I = \min_{\theta \in E} I(\theta) = \min_{\theta \in E}\, \max_{\alpha \ge 0} \{\alpha a_0 - \Lambda(\alpha|\theta)\}. \qquad (4.1)$$

Proposition 3: There exists at least one θ̄ ∈ E such that Ī = I(θ̄). Furthermore, for any such θ̄ we have

$$\bar I = \alpha^*(\bar\theta)\, a_0 - \Lambda(\alpha^*(\bar\theta)|\bar\theta) = \max_{\alpha \ge 0}\left\{ \alpha a_0 - \max_{\theta \in E} \Lambda(\alpha|\theta) \right\}. \qquad (4.2)$$

Proof: This proposition is a direct application of Proposition 1 and the minimax theorem (see [6, p. 85]). We require only the compactness of the domain E. □

It turns out that it is easier to work with (4.2); that is, it is easier to first consider the maximization of Λ(α|θ) with respect to θ. Recalling that Λ(α|θ) = log(λ(α|θ)), from (3.5) and (3.6) we have

$$\Lambda(\alpha|\theta) = -\log(2) + \tfrac{1}{2}\alpha^2\sigma^2 + \log\left( \prod_{i=1}^{m} \cosh(\alpha a_i) + \prod_{i=1}^{m} \cosh(\alpha \hat a_i) \right). \qquad (4.3)$$

Proposition 4: There are 2^m worst-case phase/delay vectors, which we label θ̄^{(j)}, j = 1,…,2^m. The worst-case phase vector is φ̄^{(j)} = 0 and each worst-case delay is τ̄_i^{(j)} = 0


or T_c. Moreover, we have Λ(α|θ̄^{(j)}) = Λ(α|0) for each j, and hence, α*(θ̄^{(j)}) = α* does not depend on j.

Proof: Since cosh(x) is a strictly increasing function of |x|, for any value of α we see from (4.3) that Λ(α|θ) will be maximized with respect to θ if we can simultaneously maximize the partial autocorrelations |a_i| and |â_i| for each i = 1,…,m. Since these partial autocorrelations depend on the phases only through the factors cos(φ_i) in (2.8) and (2.9), the solution φ = 0 is trivial. Identification of the worst-case delays follows from an application of the Cauchy-Schwarz inequality as in [15]. Finally, from (2.8) and (2.9) we see that a_i(0,0) = a_i(0,T_c) = â_i(0,0) = −â_i(0,T_c) = a_i. Since cosh(x) is even, it follows from (4.3) that Λ(α|θ̄^{(j)}) = Λ(α|0). □

B. Asymptotics of the Average Bit-Error Probability

Identification of the worst-case phase/delay vectors is important because the worst-case exponential rate Ī turns out to be the asymptotic rate of decay of the average bit-error probability P̄_n = E[P_n(Θ)].

Proposition 5:

$$\lim_{n\to\infty} \frac{1}{n}\, \log(\bar P_n) = -\bar I. \qquad (4.4)$$

Proof: See Appendix A. □

We should not expect that exp(−Īn) will be a very good estimate of P̄_n for finite n. Instead, what Proposition 5 tells us is that the expectation P̄_n = E[P_n(Θ)] is asymptotically dominated by values of θ near the worst case. Theorem 1 is a refinement of Proposition 5. This result is the formal statement of our asymptotic formula (1.2). The proof of Theorem 1 has three key steps: 1) I(θ) is expanded in a Taylor series about each worst-case phase/delay vector θ̄^{(j)}, j = 1,…,2^m; 2) for each j, a change of variables is applied to magnify the expectation E[P_n(Θ)] in the vicinity of θ̄^{(j)} in order to obtain an asymptotically stable integral; 3) the dominated convergence theorem is applied to prove convergence of this integral. The limiting integral is completely determined by the coefficients of the dominant nonconstant terms in the Taylor series. These coefficients determine the constant C̄ in (1.2). A factor n^{−1/2} results from application of (1.1) and a factor n^{−3m/2} results from the stabilizing change of variables. Together these two factors account for the factor n^{−(3m+1)/2} in (1.2).

The Taylor series step in the proof of Theorem 1 requires certain derivatives. Let α* = α*(0) and recall from Proposition 4 that α* = α*(θ̄^{(j)}) for each j = 1,…,2^m. Then the required Taylor series coefficients are

$$\mu_i^{(j)} = \pm \frac{\partial I(\bar\theta^{(j)})}{\partial \tau_i}, \qquad (4.5)$$

where the plus sign occurs when τ̄_i^{(j)} = 0 and the minus sign occurs when τ̄_i^{(j)} = T_c, and

$$\nu_i = \sqrt{\alpha^* a_i \tanh(\alpha^* a_i)}. \qquad (4.6)$$

Theorem 1: Assume that ψ(t) is continuously differentiable on (0, T_c). Then

$$\lim_{n\to\infty} n^{(3m+1)/2}\, \exp(\bar I n)\, \bar P_n = \frac{C(0)}{(\pi T_c)^m} \sum_{j=1}^{2^m} \prod_{i=1}^{m} \frac{\sqrt{2\pi}}{\mu_i^{(j)}\, \nu_i}, \qquad (4.7)$$

where C(0) is as in (3.19) with θ = 0.

Proof: See Appendix B. □

It may happen that some μ_i^{(j)} = 0, in which case the right side of (4.7) is ∞. This breakdown occurs because the first-order Taylor series terms in τ vanish. One could modify the proof by considering the nontrivial second-order terms (as one must do for φ). The end result would be that the n^{−(3m+1)/2} in (1.2) would be replaced by n^{−(2m+1)/2}. By differentiation of (2.8) and (2.9) one can show that μ_i^{(j)} ≠ 0 whenever ψ(0) ≠ 0 and ψ(T_c) ≠ 0, that is, when the chip waveforms are discontinuous at the endpoints of the interval [0, T_c]. This includes the important case of rectangular chip waveforms. An example where μ_i^{(j)} = 0 is ψ(t) = sin(πt/T_c). We will not consider the fairly straightforward extensions of Theorem 1 required to handle this case.

Formula (4.7) can be simplified in the case that ψ(t) is symmetric about T_c/2. In this case, all terms in the sum on the right side of (4.7) are identical, and the result reduces to

$$\lim_{n\to\infty} n^{(3m+1)/2}\, \exp(\bar I n)\, \bar P_n = 2^m\, C(0) \prod_{i=1}^{m} \frac{\sqrt{2\pi}}{\pi T_c\, \mu_i^{(1)}\, \nu_i}. \qquad (4.8)$$

C. Efficient Monte Carlo Estimation of P̄_n

Throughout this subsection we assume the hypothesis of Theorem 1.

We now show how to efficiently estimate P̄_n = E[P_n(Θ)] using importance sampling. We wish to evaluate the integral

$$\bar P_n = \int_E P_n(\theta)\, (\pi T_c)^{-m}\, d\theta. \qquad (4.9)$$

The idea is to generate L independent random samples Θ^{(1)},…,Θ^{(L)} from a probability density function f_n^*(θ) and form the Monte Carlo estimate

$$\hat P_n = \frac{1}{L} \sum_{l=1}^{L} P_n(\Theta^{(l)})\, w_n(\Theta^{(l)}), \qquad (4.10)$$

where w_n(θ) = (πT_c)^{−m}/f_n^*(θ) is the importance sampling weighting function. Notice that w_n(θ) is just the likelihood ratio between the true density f(θ) = (πT_c)^{−m} and the simulation density f_n^*(θ). We assume that f_n^*(θ) > 0 for all θ ∈ E, which is required for w_n(θ) to be well defined. A simple integration against the simulation density f_n^*(θ) indicates that the estimator (4.10) is unbiased, that is,
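The unbiasedness claim can be checked in one line: integrating the weighted estimator against the simulation density cancels f_n^*(θ),

$$E^*[\hat P_n] = \int_E P_n(\theta)\, w_n(\theta)\, f_n^*(\theta)\, d\theta = \int_E P_n(\theta)\, (\pi T_c)^{-m}\, d\theta = \bar P_n.$$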


E*[P̂_n] = P̄_n, where E*[·] is the f_n^*(θ) expectation operation. The estimator's variance is

$$\mathrm{var}^*[\hat P_n] = L^{-1}\left( \eta_n(f_n^*) - \bar P_n^2 \right), \qquad (4.11)$$

where

$$\eta_n(f_n^*) = E^*\!\left[ \left( P_n(\Theta)\, w_n(\Theta) \right)^2 \right]. \qquad (4.12)$$

A good simulation density should tend to minimize var*[P̂_n], or equivalently, it should minimize the functional η_n(f_n^*).

The unconstrained minimum variance simulation density is well known [3], [9]. By Jensen's inequality we have

$$\eta_n(f_n^*) \ge E^*[P_n(\Theta)\, w_n(\Theta)]^2 = E[P_n(\Theta)]^2 = \bar P_n^2,$$

and equality holds if and only if f_n^*(θ) ∝ P_n(θ); but this classical solution is not practical. Notice that the normalization factor that makes f_n^*(θ) ∝ P_n(θ) a probability density is 1/P̄_n, but P̄_n is precisely the quantity that we are trying to estimate. Furthermore, the importance sampling weight is then w_n(θ) = P̄_n/P_n(θ). Thus, this unconstrained minimum variance solution assumes knowledge of the parameter we are trying to estimate.

A practical choice of f_n^*(θ) should be efficient in two ways. First, it must be efficient in the sense that random samples Θ^{(l)} can be easily generated on a digital computer. The importance sampling weighting function w_n(θ) must also be easily evaluated. Second, it should tend to minimize η_n(f_n^*), as does the minimum variance solution, in order to minimize the required number of simulations.

The unconstrained minimum variance simulation density, f_n^*(θ) ∝ P_n(θ), concentrates its probability mass where P_n(θ) is relatively large. Using what we know about the large deviations of P_n(θ), we conclude that asymptotically f_n^*(θ) should be large where I(θ) is small. We thus argue that f_n^*(θ) ∝ exp(−I(θ)n) should be a good alternative to the unconstrained minimum variance solution. Unfortunately, this candidate is also impractical because performing the normalization to a probability distribution, and computing the importance sampling weights, is a formidable task tantamount to computing P̄_n. So instead, we try the next best thing. Applying insight gained from the proof of Theorem 1, we construct a simulation density using the Taylor series expansions of I(θ) about the worst-case phase/delay vectors. From formulas (B.5) and (B.6) in Appendix B, for θ near θ̄^{(j)} we have

$$I(\theta) = \bar I + \tilde I^{(j)}(\theta) + \text{higher order terms},$$

where the dominant nonconstant terms are

$$\tilde I^{(j)}(\theta) = \sum_{i=1}^{m} \mu_i^{(j)}\, |\tau_i - \bar\tau_i^{(j)}| + \frac{1}{2} \sum_{i=1}^{m} \nu_i^2\, \phi_i^2. \qquad (4.13)$$

Define E_j = [−π/2, π/2]^m × [a₁, b₁] × ⋯ × [a_m, b_m], where a_i = 0 and b_i = T_c/2 if τ̄_i^{(j)} = 0, or a_i = T_c/2 and b_i = T_c if τ̄_i^{(j)} = T_c. Define f_n^{(j)}(θ) to be the probability density that is directly proportional to exp(−Ĩ^{(j)}(θ)n) on E_j. After normalization to a probability density, we have

$$f_n^{(j)}(\theta) = \prod_{i=1}^{m} \frac{\nu_i \sqrt{n}\; e^{-n\nu_i^2\phi_i^2/2}}{\sqrt{2\pi}\,\big[1 - 2Q(\nu_i\sqrt{n}\,\pi/2)\big]} \cdot \prod_{i=1}^{m} \frac{\mu_i^{(j)} n\; e^{-n\mu_i^{(j)}|\tau_i - \bar\tau_i^{(j)}|}}{1 - e^{-n\mu_i^{(j)} T_c/2}}, \qquad (4.14)$$

where

$$Q(x) = \int_x^{\infty} e^{-z^2/2}\, \frac{dz}{\sqrt{2\pi}}.$$

Then our candidate simulation density is

$$f_n^*(\theta) = 2^{-m} \sum_{j=1}^{2^m} f_n^{(j)}(\theta). \qquad (4.15)$$

It turns out that the simulation density constructed by (4.14) and (4.15) satisfies both of the requirements for efficient simulation. First, it is easy to generate random samples: randomly select j ∈ {1,…,2^m}, then sample Θ^{(l)} from f_n^{(j)}(θ). Notice from (4.14) that f_n^{(j)}(θ) has a product form, and hence, the Φ_i^{(l)}'s and T_i^{(l)}'s are independent. Furthermore, the marginal densities of the Φ_i^{(l)}'s are Gaussian densities truncated to the interval [−π/2, π/2], and the marginal densities of the T_i^{(l)}'s are exponential densities truncated to the intervals [a_i, b_i]. These observations greatly simplify the generation of the random samples Θ^{(l)} and the computation of the importance sampling weights w_n(Θ^{(l)}). Our second requirement was that f_n^*(θ) should tend to minimize the required number of simulations. Surprisingly, our compromise f_n^*(θ) ∝ exp(−Ĩ^{(j)}(θ)n) on E_j, instead of f_n^*(θ) ∝ P_n(θ), does not sacrifice much in this regard. This is demonstrated by the following result, which is proved in Appendix B.

Theorem 2: Assume the hypothesis of Theorem 1. Let L_n denote the number of independent samples of Θ required to estimate P̄_n to a specified relative precision. If f_n^*(θ) is as specified by (4.14) and (4.15), then L_n → L_∞ < ∞ as n → ∞.

Echoing the discussion in Section III-E, we see that Theorem 2 indicates a dramatic improvement in efficiency over ordinary Monte Carlo simulations. Since P̄_n vanishes like exp(−Īn), it follows that for ordinary Monte Carlo, L_n grows like exp(+Īn). Thus, the reduction from an exponential growth of L_n to an asymptotically constant L_n ~ L_∞ is significant.

The argument leading to the reduction (4.8) applies here as well. If ψ(t) is symmetric about T_c/2, then (4.15) can be replaced by f_n^*(θ) = f_n^{(1)}(θ).
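A sketch of the sampler (ours; it assumes this symmetric-chip case, so that all 2^m corner densities share the same coefficients, with μ_i and ν_i arrays per (4.5)-(4.6)): phases come from a truncated Gaussian by rejection, delays from a truncated exponential by inversion reflected into a random corner, and the weight uses the fact that the 2^m pieces of (4.15) have disjoint supports.

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(2)

def sample_theta_is(mu, nu, n, Tc, L):
    """Draw L samples of Theta from f_n^* of (4.14)-(4.15); return (phis, taus, w)."""
    m = len(mu)
    sd = 1.0 / (nu * np.sqrt(n))                       # phase std dev per (4.14)
    phis = sd * rng.standard_normal((L, m))            # truncate to [-pi/2, pi/2]
    out = np.abs(phis) > np.pi / 2
    while out.any():
        phis[out] = sd[np.nonzero(out)[1]] * rng.standard_normal(out.sum())
        out = np.abs(phis) > np.pi / 2
    rate = n * mu                                      # delay rate per (4.14)
    p_tr = 1.0 - np.exp(-rate * Tc / 2)
    d = -np.log(1.0 - rng.random((L, m)) * p_tr) / rate   # distance from the corner
    taus = np.where(rng.integers(0, 2, (L, m)) == 1, Tc - d, d)
    # evaluate f_n^*: only one of the 2^m pieces is nonzero at each sample
    z_phi = erf(np.pi / (2 * np.sqrt(2) * sd))         # Gaussian truncation mass
    g_phi = np.exp(-0.5 * (phis / sd) ** 2) / (np.sqrt(2 * np.pi) * sd * z_phi)
    g_tau = rate * np.exp(-rate * d) / p_tr
    f_star = 2.0 ** (-m) * np.prod(g_phi, axis=1) * np.prod(g_tau, axis=1)
    return phis, taus, (np.pi * Tc) ** (-m) / f_star   # w_n = (pi*Tc)^{-m} / f_n^*
```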

This discussion has dealt with importance sampling of Θ. It was implicit in this discussion that P_n(θ) could be evaluated for each sample. From Section III we have two methods of evaluating P_n(θ): the approximation (1.1) and Monte Carlo estimation of C_n(θ) using the α*(θ)-conjugate distribution. Thus, we now have two methods of estimating P̄_n = E[P_n(Θ)]. In both methods Θ is sampled as previously described.
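Putting the pieces together, a sketch of the approximate method (ours, reusing sample_theta_is and pn_approx from the earlier sketches; rectangular chips with T_c = 1):

```python
import numpy as np

def pbar_approx_is(a0, amps, sigma, mu, nu, n, Tc=1.0, L=1000):
    """Estimate Pbar_n = E[P_n(Theta)] per (4.10), with P_n evaluated by (1.1)
    as in (4.16). Returns the estimate and its sample relative precision."""
    phis, taus, w = sample_theta_is(mu, nu, n, Tc, L)
    vals = np.empty(L)
    for l in range(L):
        a = amps * np.cos(phis[l])
        ahat = amps * np.cos(phis[l]) * (1 - 2 * taus[l] / Tc)
        vals[l] = pn_approx(a0, a, ahat, sigma, n)
    est = float(np.mean(vals * w))
    rel = float(np.std(vals * w) / (np.sqrt(L) * est))   # sample relative precision
    return est, rel
```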


Fig. 2. Bit-error probability estimates for m = 3 equal power MAI signals (axes: P̄_n vs. n = signature sequence length; legend: Approx. I.S. Method, Exact I.S. Method).

In the approximate method, we will employ (1.1) and actually estimate the expectation

$$E\left[ \frac{C(\Theta)}{\sqrt{n}}\, \exp(-I(\Theta)n) \right] \sim \bar P_n. \qquad (4.16)$$

The computational cost of generating each random sample Θ^{(l)}, applying this sample to (1.1), and computing the importance sampling weight is O(m). Hence, this approximate method requires a total computational cost of O(mε^{−2}), where ε is the desired relative precision. One can prove (4.16) as a formal limit. (This follows from the uniform convergence c_n(θ) → C(θ) that is established in Appendix C.) Thus, the approximate method is asymptotically tight for large n.

The most accurate estimates are obtained using α*(θ)-conjugate simulation for evaluation of P_n(θ) as described in Section III-E. We refer to this method as the exact method because there is no approximation. For each sample Θ^{(l)}, a fixed number of α*(Θ^{(l)})-conjugate simulations are applied to form a crude estimate of P_n(Θ^{(l)}). These conditional estimates are then applied to the estimator (4.10). (This technique is known as conditional importance sampling [3].) The exact method requires O(mn^{3/2}ε^{−2}) computational cost. In comparison, the exact method is limited by the n^{3/2} growth in computational cost. However, the approximate method is asymptotically consistent for large n. Hence, we can use the exact method for moderate values of n to check the convergence of (4.16).

V. NUMERICAL EXAMPLES


Fig. 3. Bit-error probability estimates for m = 10 equal power MAI signals.

Figs. 2-5 present some sample numerical computations. These figures plot various estimates of P̄_n as a function of n with the m MAI signal amplitudes fixed. The chip waveforms are rectangular. The additive Gaussian noise process {N_k} has power equal to 1/100th of the desired signal power, so MAI is the dominant cause of bit errors. The plots differ only in the number and amplitudes of the MAI signals. This is not the usual way that bit-error probabilities are presented in the communications literature. Typically, one finds plots of bit-error rate as a function of E_b/N₀ = transmitted energy per bit over the Gaussian noise power spectral density. By plotting P̄_n as a function of n, we can clearly see where large deviations plays a dominant role, and where it does not.

Figs. 2-5 plot the following estimates of P̄_n: 1) the asymptotic formula (1.2); 2) the exact and approximate method importance sampling estimates developed in Section IV-C; and 3) the Gaussian approximation as motivated by central limit theory.

Figs. 4 and 5 also plot the conditional Gaussian approximation. This is an ad hoc approximation originally proposed in [14], where it is called the "improved Gaussian approximation." The idea is to apply the Gaussian approximation to the conditional bit-error probability: P_n(θ) = Q(a₀√(n/v(θ))), where v(θ) is the per-chip conditional variance of the total MAI and noise response. The average bit-error probability is approximated as P̄_n = E[Q(a₀√(n/v(Θ)))].⁵ Note that the conditional Gaussian approximation is not asymptotically tight because Q(a₀√(n/v(θ))) decays with exponential rate a₀²/2v(θ), which is not the same as I(θ), the true rate of decay of P_n(θ).

For the importance sampling estimates, enough simulations were performed so that the sample standard deviation was smaller than 5% of the estimate P̂_n. (Of course, for the approximate method the sample standard deviation does not measure the approximation error of (4.16).) This degree of empirical precision typically required only L = 500-1000 samples of Θ.

⁵Actually, it is proposed in [14] that one also condition on C = the number of +1 chips, which is a binomial random variable. But this additional conditioning had little effect for the values plotted in Figs. 4 and 5.
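For comparison, the conditional Gaussian approximation described above is a one-liner. In this sketch (ours) the per-chip conditional variance follows from the ± signs in the representation of Section II, v(θ) = σ² + Σ_i (a_i² + â_i²)/2; this is our reading of v(θ), not a formula quoted from [14].

```python
import numpy as np
from scipy.stats import norm

def pn_cond_gaussian(a0, amps, phis, taus, sigma, n, Tc=1.0):
    """Conditional ('improved') Gaussian approximation Q(a0*sqrt(n/v(theta)))."""
    a = amps * np.cos(phis)
    ahat = amps * np.cos(phis) * (1 - 2 * taus / Tc)
    v = sigma**2 + np.sum((a**2 + ahat**2) / 2)   # per-chip conditional variance
    return norm.sf(a0 * np.sqrt(n / v))           # Q(x) = norm.sf(x)
```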


Fig. 4. Bit-error probability estimates for m = 1 MAI signal 10 dB more powerful than the desired signal.

For large n, some of the exact method estimates required as much as 20 minutes of CPU time on a Sun 3/50 workstation. The approximation method usually required less than 2 minutes of CPU time.

Figs. 2 and 3 illustrate the cases of m = 3 and m = 10 equal power MAI signals with each signal having the same power as the desired signal. Notice that in these plots the region where the Gaussian approximation fails is not in the realm of practical interest. The large deviations asymptotics are significant only for the very small probabilities corresponding to large values of n. The conditional Gaussian approximation (not plotted in Figs. 2 and 3) gave very good estimates for the equal power cases of Figs. 2 and 3, although a slight divergence in our computations was noted at large values of n.

Figs. 4 and 5 illustrate the near/far problem. In these figures there is only m = 1 MAI signal. In Fig. 4 the MAI signal is 10 dB more powerful than the desired signal, and in Fig. 5 the MAI is 20 dB more powerful. Unlike in Figs. 2 and 3, in Figs. 4 and 5 the large deviations asymptotics do play a dominant role in the region of practical interest. Moreover, both the Gaussian approximation and the conditional Gaussian approximation break down, while (1.2) converges quite well under these near/far conditions.

VI. CONCLUSIONS

The application of central limit theory to estimate DS-SSMA bit-error probabilities in the presence of strong MAI requires that the ratio m/n be held constant in the limit, where m is the number of significant MAI signals. This is required in order to have a nonvanishing bit-error probability in the limit that can be evaluated using the limiting Gaussian distribution. The convergence is most rapid when m/n is large, which explains why the Gaussian approximations in Figs. 2 and 3 are actually best for


Fig. 5. Bit-error probability estimates for m = 1 MAI signal 20 dB more powerful than the desired signal.

small values of n instead of large n. With m fixed, small n implies a larger ratio m/n, and hence, a better Gaussian approximation.

The practical significance of the present work is the realization that the near/far problem is in the domain of large deviations theory, not central limit theory. We argue this point by first noting that the necessary "uniform asymptotic negligibility" condition of central limit theory is violated by the near/far situation. This forces us to consider the asymptotics of large n with m fixed, which is the large deviations limit.

The distinction between a "large deviations probability" and an "approximately Gaussian probability" (that is, when central limit theory asymptotics dominate) is made clear by considering the decomposition of these small probabilities into an exponential factor and a nonexponential factor. For P_n(θ) this factorization is given by formulas (3.16) and (3.17). The factorization P̄_n = nonexponential factor × exp(−Īn) is given by formula (B.7) in Appendix B. A probability is "approximately Gaussian" when the exponential factor is essentially 1, and hence, the probability is determined by the nonexponential factor. In fact, in this case it turns out that α*(θ)[S_n − a₀n] ≈ 0 and, as a result, in (3.17) we have P_n(θ) ≈ the conditional Gaussian approximation. This explains why the conditional Gaussian approximation is accurate for the case of roughly equal power MAI signals. Conversely, the case of a "large deviations probability" is characterized by a small exponential factor which determines the "order of magnitude" of the probability. In this case Gaussian approximations can easily be in error by orders of magnitude. The importance sampling methods developed here account for both factors, and for this reason, these methods are accurate for any operating point. (Note that the importance sampling schemes reduce to ordinary Monte Carlo in the case of approximately Gaussian probabilities when α*(θ) ≈ 0.) Of course,


a problem with the Gaussian approximation (conditional or unconditional) is that it is difficult to predict where it will break down.

APPENDIX A
PROOF OF PROPOSITION 5

Using the optimized Chernoff bound (3.8), we have

$$P_n(\theta) \le \sup_{\theta \in E}\left[ \frac{\Lambda_0(\alpha^*(\theta)|\theta)}{\lambda(\alpha^*(\theta)|\theta)} \right] \exp(-\bar I n).$$

Since Λ'(α|θ) is jointly continuous in α and θ, it follows that α*(θ) (which is the solution of Λ'(α*(θ)|θ) = a₀) is continuous in θ. Clearly, λ(α|θ) and Λ₀(α|θ) are jointly continuous in α and θ, and hence, this supremum must be finite because of the continuity of α*(θ) and because E is compact. Replacing P_n(θ) by this upper bound in the expectation P̄_n = E[P_n(Θ)], it follows that

$$\limsup_{n\to\infty} \frac{1}{n}\, \log(\bar P_n) \le -\bar I. \qquad (A.1)$$

Define S_ε = {θ: I(θ) ≤ Ī + ε}. By (3.10), the continuity of α*(θ), and the continuity of Λ(α|θ) as a function of both α and θ, it follows that I(θ) is also continuous in θ. This implies that S_ε has a nonempty interior, and hence, p_ε = 𝒫(Θ ∈ S_ε) > 0 for all ε > 0. Since P̄_n ≥ E[P_n(Θ)|Θ ∈ S_ε]p_ε, we have

$$\liminf_{n\to\infty} \frac{1}{n}\, \log(\bar P_n) \ge \liminf_{n\to\infty} \frac{1}{n}\, \log\big( E[P_n(\Theta)\,|\,\Theta \in S_\varepsilon] \big).$$

Applying Jensen's inequality, Fatou's lemma, and Proposition 1, we have

$$\liminf_{n\to\infty} \frac{1}{n}\, \log(\bar P_n) \ge E\Big[ \lim_{n\to\infty} \frac{1}{n}\, \log(P_n(\Theta)) \,\Big|\, \Theta \in S_\varepsilon \Big] = -E[\,I(\Theta)\,|\,\Theta \in S_\varepsilon\,] \ge -(\bar I + \varepsilon). \qquad (A.2)$$

Letting ε ↓ 0 completes the proof. □

APPENDIX B
PROOFS OF THEOREMS 1 AND 2

Proof of Theorem 1: Define

$$E_j = [-\pi/2, \pi/2]^m \times [a_1, b_1] \times \cdots \times [a_m, b_m],$$

where a_i = 0 and b_i = T_c/2 if τ̄_i^{(j)} = 0, and a_i = T_c/2 and b_i = T_c if τ̄_i^{(j)} = T_c. Then

$$\bar P_n = \sum_{j=1}^{2^m} \int_{E_j} P_n(\theta)\, (\pi T_c)^{-m}\, d\theta. \qquad (B.1)$$

The first step is to consider the dominant terms in the Taylor series of I(θ) about the worst-case phase/delay vectors. We consider only the term θ̄^{(j)} = 0, and hereafter drop the superscript (j) to simplify the notation. The other terms follow analogously. Recalling (3.10), we have

$$\frac{\partial I(\theta)}{\partial \tau_i} = \frac{\partial \alpha^*(\theta)}{\partial \tau_i}\, a_0 - \Lambda'(\alpha^*(\theta)|\theta)\, \frac{\partial \alpha^*(\theta)}{\partial \tau_i} - \frac{\partial \Lambda(\alpha^*(\theta)|\theta)}{\partial \tau_i}.$$

However, by definition Λ'(α*(θ)|θ) = a₀, and hence, there is a cancellation resulting in

$$\frac{\partial I(\theta)}{\partial \tau_i} = -\frac{\partial \Lambda(\alpha^*(\theta)|\theta)}{\partial \tau_i}.$$

The derivative with respect to φ_i is similar. From (4.3), using the fact that a_i(0,0) = â_i(0,0) = a_i, we obtain

$$\frac{\partial \Lambda(\alpha^*|0)}{\partial a_i} = \frac{\partial \Lambda(\alpha^*|0)}{\partial \hat a_i} = \frac{\alpha^*}{2}\, \tanh(\alpha^* a_i),$$

where α* = α*(0) (as in Proposition 4). Using the chain rule and the previous displays, we find that

$$\frac{\partial I(0)}{\partial \tau_i} = \mu_i, \qquad (B.2)$$

where μ_i is as defined in (4.5) (with j = 1). Next, we have

$$\frac{\partial I(\theta)}{\partial \phi_1} = -\frac{\partial \Lambda(\alpha^*(\theta)|\theta)}{\partial a_1}\, \frac{\partial a_1(\phi_1,\tau_1)}{\partial \phi_1} - \frac{\partial \Lambda(\alpha^*(\theta)|\theta)}{\partial \hat a_1}\, \frac{\partial \hat a_1(\phi_1,\tau_1)}{\partial \phi_1}. \qquad (B.3)$$

However, a₁(φ, τ) and â₁(φ, τ) depend on φ only through the factor cos(φ) (see (2.8) and (2.9)). Hence, ∂a₁(0,0)/∂φ = ∂â₁(0,0)/∂φ = 0, and as a result, (B.3) reduces to ∂I(0)/∂φ₁ = 0. So, we must differentiate (B.3) again to obtain a nonzero derivative. Note that we also have ∂²a_i(φ_i, τ_i)/∂φ_i∂φ_j = ∂²â_i(φ_i, τ_i)/∂φ_i∂φ_j = 0 for i ≠

j, ∂²a₁(φ₁, τ₁)/∂φ₁² = −a₁(φ₁, τ₁), and ∂²â₁(φ₁, τ₁)/∂φ₁² = −â₁(φ₁, τ₁). We thus obtain

$$\frac{\partial^2 I(0)}{\partial \phi_1^2} = \nu_1^2, \qquad (B.4)$$

where ν₁² is as defined in (4.6), and ∂²I(0)/∂φ_i∂φ_j = 0 for i ≠ j.

Now, since I(0) = Ī, we have the Taylor series

$$I(\theta) = \bar I + \tilde I(\theta) + o(\|\tau\|) + O(\|\tau\|\,\|\phi\|) + O(\|\phi\|^3), \qquad (B.5)$$

where ‖·‖ denotes the Euclidean norm on ℝ^m and

$$\tilde I(\theta) = \sum_{i=1}^{m} \mu_i \tau_i + \frac{1}{2} \sum_{i=1}^{m} \nu_i^2 \phi_i^2. \qquad (B.6)$$

In the remainder terms of (B.5), the "little o" notation means o(ε)/ε → 0 as ε ↓ 0.


Now consider the j = 1 term in (B.1). We rewrite this integral in terms of the variables τ̃_i = nτ_i and φ̃_i = √n φ_i. We also write P_n(θ) = c_n(θ)e^{−I(θ)n}/√n, where c_n(θ) = √n C_n(θ) and C_n(θ) is as defined in (3.17). Recall from Proposition 2 that c_n(θ) → C(θ) of definition (3.19). Applying the change of variables, we have

$$\int_{E_1} P_n(\theta)\, (\pi T_c)^{-m}\, d\theta = (\pi T_c)^{-m}\, n^{-(3m+1)/2}\, e^{-\bar I n} \int_{\tilde E_n} c_n(\tilde\theta_n)\, \exp\big( [\,\bar I - I(\tilde\theta_n)\,]\, n \big)\, d\tilde\theta, \qquad (B.7)$$

where θ̃_n is the vector with components φ_{n,i} = φ̃_i/√n and τ_{n,i} = τ̃_i/n, and Ẽ_n = [−π√n/2, π√n/2]^m × [0, nT_c/2]^m. As n → ∞, Ẽ_n → ℝ^m × [0, ∞)^m. For each fixed θ̃, θ̃_n → 0. It turns out that the convergence c_n(θ) → C(θ) is uniform for θ ∈ E. The proof of this fact is lengthy and technical, so it is deferred to Appendix C. As a result of this uniform convergence, we have c_n(θ̃_n) → C(0) for each fixed θ̃ ∈ ℝ^m × [0, ∞)^m. Applying the change of variables to (B.5) and (B.6) we obtain

$$[\,\bar I - I(\tilde\theta_n)\,]\, n = -\tilde I(\tilde\theta) + o(1), \qquad (B.8)$$

for each fixed θ̃ ∈ ℝ^m × [0, ∞)^m. (The o(1) vanishes as n → ∞.)

Hence, we will appeal to the dominated convergence theorem to obtain

$$\int_{\tilde E_n} \big( C(0) + o(1) \big)\, \exp\big( -\tilde I(\tilde\theta) + o(1) \big)\, d\tilde\theta \;\to\; C(0) \int_{\mathbb{R}^m \times [0,\infty)^m} \exp\big( -\tilde I(\tilde\theta) \big)\, d\tilde\theta. \qquad (B.9)$$

From (B.6) we see that the limiting integral in (B.9) has product form and can be easily evaluated using the formulas

$$\int_0^\infty e^{-\mu\tau}\, d\tau = 1/\mu \quad \text{and} \quad \int_{-\infty}^{\infty} e^{-\nu^2\phi^2/2}\, d\phi = \sqrt{2\pi}/\nu.$$

When all of the 2^m terms in (B.1) are accounted for, the result is the desired limit (4.7). (Note that it follows from Proposition 4 that C(θ̄^{(j)}) = C(0) for each j.)

To complete the proof we must find a dominating function for the application of the dominated convergence theorem in (B.9). First, since we have c_n(θ) → C(θ) uniformly in θ, and since C(θ) is continuous on the compact domain E, and hence uniformly continuous on E, there exists a constant c < ∞ such that c_n(θ) < c for all n sufficiently large and all θ ∈ E₁. We show that there exist positive constants μ̆_i > 0 and ν̆_i > 0 such that

$$I(\theta) \ge I_*(\theta) = \bar I + \sum_{i=1}^{m} \breve\mu_i \tau_i + \frac{1}{2} \sum_{i=1}^{m} \breve\nu_i^2 \phi_i^2.$$

Then the integrable dominating function for (B.9) is

$$c\, \exp\Big( -\sum_{i=1}^{m} \breve\mu_i \tilde\tau_i - \frac{1}{2} \sum_{i=1}^{m} \breve\nu_i^2 \tilde\phi_i^2 \Big).$$

Note that the constants μ̆_i and ν̆_i must be positive in order to have integrability (over the domain ℝ^m × [0, ∞)^m). Referring back to the Taylor series (B.5), there exists an ε > 0 such that

$$I(\theta) \ge \bar I + \frac{1}{2} \sum_{i=1}^{m} \mu_i \tau_i + \frac{1}{4} \sum_{i=1}^{m} \nu_i^2 \phi_i^2$$

whenever ‖θ‖ ≤ ε. Let d = inf_{θ∈D} I(θ) where D = E₁ ∩ {‖θ‖ ≥ ε}, and note that d > Ī. Then μ̆_i = min{μ_i/2, (d − Ī)/(mT_c)} and ν̆_i² = min{ν_i²/2, (d − Ī)/(mπ²)} provide the desired positive constants. This completes the proof of Theorem 1. □

Proof of Theorem 2: We begin by writing

$$
q_n(f_n^*) = \sum_{j=1}^{2^m} \int_{E_j} \frac{\big(P_n(\theta)w_n(\theta)\big)^2}{f_n^*(\theta)}\,d\theta \tag{B.11}
$$

and we consider only the $j = 1$ term, as in the proof of Theorem 1. On $E_1$ we have

$$
f_n^*(\theta) = 2^{-m} f_n^{(1)}(\theta) \propto \exp\big(-\tilde I(\theta)\,n\big).
$$

Note that the total normalizing constant in (4.14) is $\propto n^{3m/2}$. So, from (4.14), (4.15), and (1.1), we can rewrite the first term in (B.11) as

$$
\int_{E_1} \frac{\big(P_n(\theta)w_n(\theta)\big)^2}{f_n^*(\theta)}\,d\theta = K_n\, n^{-(3m/2+1)}\, e^{-2\bar I n} \int_{E_1} c_n(\theta)^2 \exp\big(-2[I(\theta)-\bar I]\,n + \tilde I(\theta)\,n\big)\,d\theta,
$$

where $\tilde I(\theta)$ is as in (B.6), and $K_n \to K \in (0,\infty)$. Next we apply the change of variables, which results in another factor of $n^{-3m/2}$ because of the change of variables formula. Using (B.8), the result is

$$
\int_{E_1} \frac{\big(P_n(\theta)w_n(\theta)\big)^2}{f_n^*(\theta)}\,d\theta = K_n\, n^{-(3m+1)}\, e^{-2\bar I n} \int_{\tilde E_n} c_n(\theta_n)^2 \exp\big(-\tilde I(\tilde\theta) + o(1)\big)\,d\tilde\theta.
$$

By the same dominated convergence argument as in the proof of Theorem 1, after accounting for all the terms in (B.11), we conclude that

$$
q_n(f_n^*) \sim \tilde K\, n^{-(3m+1)} \exp(-2\bar I n). \tag{B.12}
$$

Finally, from (B.12), (4.11), and (1.2), the algebra is somewhat messy; however, one can show that $\tilde C > C^2$. (It is a consequence of the fact that variance is positive that we always have $q_n(f_n^*) \geq \bar P_n^2 > 0$, and hence, $\tilde C \geq C^2$.) This is enough to establish Theorem 2. $\Box$
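As an illustrative aside: the operational meaning of $q_n(f_n^*) \sim \tilde K n^{-(3m+1)}e^{-2\bar I n}$ is that the importance sampling second moment decays at the same exponential rate as $\bar P_n^2$, the hallmark of an asymptotically efficient estimator. The toy sketch below (ours; an i.i.d. Gaussian tail probability with a dominating-point exponential twist, not the paper's DS-SSMA statistic) exhibits both effects:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Estimate p = P(S_n >= n*gamma) for S_n a sum of n i.i.d. N(0,1) variables,
# using the dominating-point exponential twist: sample X_i ~ N(gamma, 1) and
# weight by the likelihood ratio w = exp(-gamma*S_n + n*gamma**2/2).
n, gamma, runs = 50, 0.5, 100_000
S = rng.normal(loc=gamma, scale=1.0, size=(runs, n)).sum(axis=1)
w = np.exp(-gamma * S + n * gamma**2 / 2) * (S >= n * gamma)

p_hat = w.mean()                 # importance sampling estimate of p
q_hat = (w**2).mean()            # sample second moment, analogue of q_n(f*)
exact = norm.sf(np.sqrt(n) * gamma)

print(p_hat, exact)              # close agreement
print(q_hat, exact**2)           # q_hat decays like p**2, up to a constant
```

A crude Monte Carlo estimator would instead have second moment of order $p$ itself, so its relative error blows up as $e^{\bar I n/2}$; the twisted sampler keeps the relative error polynomially bounded in $n$.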

APPENDIX C
THE UNIFORM CONVERGENCE OF $c_n(\theta)$

The proof of Proposition 2 given here follows classical lines [2]. Our concern here is to establish the additional property that $c_n(\theta) \to c(\theta)$ uniformly for $\theta \in E$. This fact is required in the proofs of Theorems 1 and 2.


From formula (3.17), we find that

$$
c_n(\theta) = \sqrt{n}\,C_n(\theta) = \big[\Lambda_n(\alpha^*(\theta)\,|\,\theta)/\Lambda(\alpha^*(\theta)\,|\,\theta)\big]\,\tilde c_n(\theta),
$$

where $\tilde c_n(\theta)$ is just the expectation in (3.17) times $\sqrt{n}$. It is sufficient to establish that $\tilde c_n(\theta) \to \tilde c(\theta)$ uniformly in $\theta$. Let $F_n(s\,|\,\theta)$ denote the $\alpha^*(\theta)$-conjugate cumulative distribution function of $\tilde S_n = (S_n - n\gamma)/\sqrt{n}$. Then, after a change of variables, an integration by parts, and some other manipulations, we obtain

$$
\tilde c_n(\theta) = \int_0^\infty \big[F_n(s\,|\,\theta) - F_n(0\,|\,\theta)\big]\,\alpha^*(\theta)\,n\,e^{-\sqrt{n}\,\alpha^*(\theta)s}\,ds.
$$

From (3.19) and similar manipulations, we find that an equivalent expression for the limiting value of $\tilde c_n(\theta)$ is

$$
\tilde c(\theta) = \int_0^\infty \frac{s}{\sqrt{2\pi\,\Lambda''(\alpha^*(\theta)\,|\,\theta)}}\;\alpha^*(\theta)\,n\,e^{-\sqrt{n}\,\alpha^*(\theta)s}\,ds.
$$

Hence,

$$
\big|\tilde c_n(\theta) - \tilde c(\theta)\big| \leq \int_0^\infty e_n(s\,|\,\theta)\,\alpha^*(\theta)\,n\,e^{-\sqrt{n}\,\alpha^*(\theta)s}\,ds, \tag{C.1}
$$

where

$$
e_n(s\,|\,\theta) = \Big|\,F_n(s\,|\,\theta) - F_n(0\,|\,\theta) - \frac{s}{\sqrt{2\pi\,\Lambda''(\alpha^*(\theta)\,|\,\theta)}}\,\Big|. \tag{C.2}
$$

We demonstrate that

$$
e_n(s\,|\,\theta) \leq k_n s + k s^2, \tag{C.3}
$$

where $k_n$ and $k$ do not depend on $\theta$ and $k_n \to 0$ as $n \to \infty$. Applying (C.3) in (C.1) and evaluating the resulting integrals, we get

$$
\big|\tilde c_n(\theta) - \tilde c(\theta)\big| \leq \frac{k_n}{\alpha^*(\theta)} + \frac{2k}{\sqrt{n}\,\alpha^*(\theta)^2}.
$$

By differentiating (4.3), one can show that $\Lambda'(\alpha\,|\,\theta) \leq K\alpha$ for $\alpha > 0$ and some $K < \infty$ (because the derivative of the "log" term is bounded). Thus, $\gamma = \Lambda'(\alpha^*(\theta)\,|\,\theta) \leq K\alpha^*(\theta)$, and hence, $\alpha^*(\theta) \geq \gamma/K$. Applying this to the last display, we have

$$
\big|\tilde c_n(\theta) - \tilde c(\theta)\big| \leq \frac{K k_n}{\gamma} + \frac{2K^2 k}{\gamma^2\sqrt{n}} \qquad\text{for all } \theta \in E.
$$

This last display gives the desired uniform convergence of $\tilde c_n(\theta)$. To complete the proof we need to establish (C.3).
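Before doing so, we note that the integral evaluations used above are elementary; as a hedge against slips, here is a symbolic confirmation (ours) of the step from (C.1) and (C.3) to the displayed bound:

```python
import sympy as sp

# Plug the bound (C.3) into the right-hand side of (C.1) and evaluate.
s, n, alpha, k_n, k = sp.symbols('s n alpha k_n k', positive=True)

bound = sp.integrate((k_n*s + k*s**2) * alpha * n * sp.exp(-sp.sqrt(n)*alpha*s),
                     (s, 0, sp.oo))
print(sp.simplify(bound))   # k_n/alpha + 2*k/(sqrt(n)*alpha**2)
```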

Since the additive noise variables $\{N_k\}$ are i.i.d. Gaussian with variance $\sigma^2$, we can split each $N_k$ into the sum of two independent Gaussians: $N_k = N_k^a + N_k^b$. Recall that under the $\alpha$-conjugate distribution $E^{(\alpha)}[N_k] = \alpha\sigma^2$ and $\mathrm{var}^{(\alpha)}[N_k] = \sigma^2$. Thus, we define $\{N_k^a\}$ and $\{N_k^b\}$ to be independent i.i.d. Gaussian sequences with $E^{(\alpha)}[N_k^a] = \alpha\sigma^2$, $E^{(\alpha)}[N_k^b] = 0$, and $\mathrm{var}^{(\alpha)}[N_k^a] = \mathrm{var}^{(\alpha)}[N_k^b] = \sigma^2/2$ under the $\alpha$-conjugate distribution. Next define

$$
Y_k = \sum_{i=1}^m Z_k^{(i)} + N_k^a. \tag{C.4}
$$

Note that $Y_k$ differs from $Z_k$ defined in (2.16) because it includes only the $N_k^a$ part of the Gaussian component $N_k = N_k^a + N_k^b$. We have $Z_k = Y_k + N_k^b$. Next define

$$
R_n = \sum_{k=0}^{n-1} Y_k \qquad\text{and}\qquad \tilde R_n = (R_n - n\gamma)/\sqrt{n}.
$$

For $k \geq 1$, we have

E("*@))[ y,] = ,+*(e))[ zk] = Af( a*( e)le) =

and

$$
\mathrm{var}^{(\alpha^*(\theta))}[Z_k] = \mathrm{var}^{(\alpha^*(\theta))}[Y_k] + \sigma^2/2.
$$
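As a numerical aside (ours; plain Gaussian sampling standing in for the conjugate law): exponentially tilting $N(0,\sigma^2)$ by $\alpha$ produces $N(\alpha\sigma^2,\sigma^2)$, and that tilted law is recovered exactly as the sum of the two independent halves $N_k^a \sim N(\alpha\sigma^2,\sigma^2/2)$ and $N_k^b \sim N(0,\sigma^2/2)$ used above:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, sigma, size = 0.7, 1.3, 1_000_000

# The alpha-tilted noise N_k has law N(alpha*sigma**2, sigma**2); split it as
# N_k^a ~ N(alpha*sigma**2, sigma**2/2) plus N_k^b ~ N(0, sigma**2/2).
Na = rng.normal(alpha * sigma**2, sigma / np.sqrt(2), size)
Nb = rng.normal(0.0, sigma / np.sqrt(2), size)
N = Na + Nb

print(N.mean(), alpha * sigma**2)   # matches the tilted mean
print(N.var(), sigma**2)            # matches the tilted variance
```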

Let $G_n(r\,|\,\theta)$ denote the $\alpha^*(\theta)$-conjugate cumulative distribution function of $\tilde R_n$. By the Berry-Esseen theorem [17, p. 33], we have

$$
\sup_{r\in\mathbb{R}}\,\big|G_n(r\,|\,\theta) - \Phi\big(r/\sigma_Y(\theta)\big)\big| \leq K(\theta)/\sqrt{n}, \tag{C.5}
$$

where $\Phi(\cdot)$ is the zero-mean unit-variance Gaussian distribution function, $\sigma_Y^2(\theta) = \mathrm{var}^{(\alpha^*(\theta))}[Y_k]$, and

$$
K(\theta) = \frac{33}{4}\,\frac{E^{(\alpha^*(\theta))}\big[|Y_k - \gamma|^3\big]}{\sigma_Y^3(\theta)}. \tag{C.6}
$$

Since $\mathrm{var}^{(\alpha^*(\theta))}[Y_k] \geq \mathrm{var}^{(\alpha)}[N_k^a] = \sigma^2/2$, we have

$$
K(\theta) \leq \frac{33\,E^{(\alpha^*(\theta))}\big[|Y_k - \gamma|^3\big]}{\sqrt{2}\,\sigma^3}.
$$

It is pointed out in Appendix A that $\alpha^*(\theta)$ varies continuously with $\theta$. $Y_k - N_k^a$ is a discrete random variable with finite support, and the $\alpha$-conjugate probability mass function of $Y_k - N_k^a$ (specified by formulas (3.12) and (3.13)) varies continuously with $\alpha$. Consequently, the third absolute moment $E^{(\alpha^*(\theta))}[|Y_k - \gamma|^3]$ is a continuous function of $\theta$ on the compact set $E$, and hence, from the last display we conclude

$$
\bar K = \sup_{\theta\in E} K(\theta) < \infty. \tag{C.7}
$$

Thus, the convergence in (C.5) is uniform in $\theta \in E$. We comment at this point that the reason for including a Gaussian component in the definition of $Y_k$ is that when all of the phases are $\pm\pi/2$, the variance of the discrete component of $Z_k$ is zero. By including the constant-variance Gaussian component, we have avoided the messy task of considering the ratio (C.6) of two quantities that both vanish at $\phi_i = \pm\pi/2$.
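As a further numerical aside (ours; an arbitrary bounded $\pm 1$ discrete part plus the $N_k^a$-type Gaussian half standing in for $Y_k$, with zero tilt so $\gamma = 0$), the Kolmogorov distance in (C.5) can be estimated by simulation and compared with the conservative envelope $33\,E|Y_k - \gamma|^3/(\sqrt{2}\,\sigma^3\sqrt{n})$; the observed error sits far below the envelope, as it must:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
sigma, n, runs = 1.0, 40, 200_000

# Stand-in for Y_k: a bounded +/-1 discrete part plus an N(0, sigma**2/2)
# component playing the role of N_k^a (no tilt, so gamma = E[Y_k] = 0).
def sample_Y(size):
    return (rng.choice([-1.0, 1.0], size=size)
            + rng.normal(0.0, sigma / np.sqrt(2), size))

Y = sample_Y((runs, n))
sd = np.sqrt(1.0 + sigma**2 / 2)            # per-summand standard deviation
Rn = Y.sum(axis=1) / (np.sqrt(n) * sd)      # normalized sum, as in (C.5)

# Exact Kolmogorov distance between the empirical CDF and Phi.
xs = np.sort(Rn)
cdf = norm.cdf(xs)
i = np.arange(1, runs + 1)
kolmogorov = max((i / runs - cdf).max(), (cdf - (i - 1) / runs).max())

# Conservative Berry-Esseen envelope using sigma_Y >= sigma/sqrt(2).
m3 = (np.abs(sample_Y(runs)) ** 3).mean()   # Monte Carlo estimate of E|Y|^3
envelope = 33 * m3 / (np.sqrt(2) * sigma**3 * np.sqrt(n))
print(kolmogorov, envelope)                 # observed error << envelope
```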

We are interested in $F_n(s\,|\,\theta)$, the $\alpha^*(\theta)$-conjugate cumulative distribution function of $\tilde S_n = (S_n - n\gamma)/\sqrt{n}$. Notice that $\tilde S_n$ equals $\tilde R_n$ plus an independent Gaussian component with zero mean and variance $\sigma^2/2$, namely,

$$
\tilde S_n = \tilde R_n + \frac{1}{\sqrt{n}}\sum_{k=0}^{n-1} N_k^b.
$$

Thus, $\tilde S_n$ is a continuous random variable with probability density function $f_n(s\,|\,\theta) = \partial F_n(s\,|\,\theta)/\partial s$, where

$$
F_n(s\,|\,\theta) = \int_{-\infty}^{\infty} G_n(r\,|\,\theta)\,\eta_{\sigma^2/2}(s - r)\,dr, \tag{C.8}
$$

where $\eta_v(t)$ denotes the zero-mean Gaussian probability density function with variance $v$. Define


U ( e ) = ~ar(*’(~))[Z,] = A(a*(e)le). Using the uniform-error bound (C.6) with K ( 0 ) replaced by K in (C.7), we obtain

$$
\big|f_n(s\,|\,\theta) - \eta_{v(\theta)}(s)\big| \leq K'/\sqrt{n} \qquad\text{for all } s \in \mathbb{R}, \tag{C.9}
$$

where $K' < \infty$. Likewise, differentiating (C.8) again, we have

$$
\big|f_n'(s\,|\,\theta) - \eta_{v(\theta)}'(s)\big| \leq K''/\sqrt{n} \qquad\text{for all } s \in \mathbb{R}, \tag{C.10}
$$

where $K'' < \infty$. Next, from (C.8) we see that $F_n(s\,|\,\theta)$ is analytic in $s$, and hence, we have the Taylor series

$$
F_n(s\,|\,\theta) - F_n(0\,|\,\theta) = f_n(0\,|\,\theta)\,s + r_n(s\,|\,\theta). \tag{C.11}
$$

Using (C.10) we have

$$
\big|r_n(s\,|\,\theta)\big| \leq \frac{1}{2}\,\sup_{r\in\mathbb{R}}\big|f_n'(r\,|\,\theta)\big|\,s^2. \tag{C.12}
$$

Since $\eta_{v(\theta)}(0) = 1/\sqrt{2\pi\,\Lambda''(\alpha^*(\theta)\,|\,\theta)}$, from (C.9), (C.11), and (C.12) we obtain (C.3) with $k_n = K'/\sqrt{n}$. This completes the proof. $\Box$

ACKNOWLEDGMENT

The authors thank D. V. Sarwate for pointing out the simple proof of Proposition 4 for the case $\psi(t) = \hat\psi(t)$.

REFERENCES

[1] B. Aazhang and H. V. Poor, "Performance of DS/SSMA communications in impulsive channels-Part I: Linear correlation receivers," IEEE Trans. Commun., vol. COM-35, pp. 1179-1188, 1987.

[2] R. R. Bahadur and R. R. Rao, "On deviations of the sample mean," Ann. Math. Statist., vol. 31, pp. 43-54, 1960.

[3] P. Bratley, B. L. Fox, and L. E. Schrage, A Guide to Simulation, 2nd ed. New York: Springer, 1987.

[4] J. A. Bucklew, P. Ney, and J. S. Sadowsky, "Monte Carlo simulation and large deviations theory for uniformly recurrent Markov chains," J. Appl. Prob., vol. 27, pp. 44-59, Mar. 1990.

[5] J. A. Bucklew, Large Deviations Techniques in Decision, Simulation, and Estimation. New York: Wiley, 1990.

[6] T. S. Ferguson, Mathematical Statistics. New York: Academic Press, 1967.

[7] R. G. Gallager, Information Theory and Reliable Communication. New York: Wiley, 1968.

[8] E. A. Geraniotis and M. B. Pursley, "Error probability for direct-sequence spread-spectrum multiple-access communications-Part II: Approximations," IEEE Trans. Commun., vol. COM-30, pp. 985-995, 1982.

[9] J. M. Hammersley and D. C. Handscomb, Monte Carlo Methods. New York: Chapman and Hall, 1964.

[10] V. Hunkel and J. A. Bucklew, "Fast simulation for functionals of Markov chains," in Proc. 1988 Conf. Inform. Sci. Syst., Princeton Univ., Princeton, NJ, Mar. 1988.

[11] J. S. Lehnert and M. B. Pursley, "Error probability for binary direct-sequence spread-spectrum communications with random signature sequences," IEEE Trans. Commun., vol. COM-35, pp. 87-98, 1987.

[12] R. Lupas and S. Verdú, "Linear multiuser detectors for synchronous code-division multiple-access channels," IEEE Trans. Inform. Theory, vol. 35, no. 1, pp. 123-136, Jan. 1989.

[13] R. Lupas and S. Verdú, "Near-far resistance of multiuser detectors in asynchronous channels," IEEE Trans. Commun., vol. COM-38, pp. 496-508, 1990.

[14] R. K. Morrow, Jr. and J. S. Lehnert, "Bit-to-bit error dependence in slotted DS/SSMA packet systems with random signature sequences," IEEE Trans. Commun., vol. COM-37, pp. 1052-1061, 1989.

[15] M. B. Pursley, "Spread spectrum multiple-access communications," in Multi-User Communication Systems, G. Longo, Ed. New York: Springer-Verlag, 1981.

[16] M. B. Pursley, D. V. Sarwate, and W. E. Stark, "Error probability for direct-sequence spread-spectrum multiple-access communications-Part I: Upper and lower bounds," IEEE Trans. Commun., vol. COM-30, pp. 975-984, 1982.

[17] R. J. Serfling, Approximation Theorems of Mathematical Statistics. New York: Wiley, 1980.

[18] S. Verdú, "Minimum probability of error for asynchronous Gaussian multiple-access channels," IEEE Trans. Inform. Theory, vol. IT-32, no. 1, pp. 85-96, Jan. 1986.

[19] S. Verdú, "Multiuser detection," to appear in Advances in Statistical Signal Processing.

[20] K. Yao, "Error probability of asynchronous spread spectrum multiple access communication systems," IEEE Trans. Commun., vol. COM-25, pp. 803-809, 1977.

