
Chapter 5

Noncoherent Receivers

In the previous chapters we assumed the receiver had perfect knowledge of the desired received signal except the transmitted data bits which we want to detect. In this chapter we consider the case where the receiver knows everything about the received signal except the data bits transmitted and the carrier phase. The approach is to treat the unknown phase as a random variable and then to average the likelihood of the received signal, given the data bits and the unknown phase, with respect to the distribution of the phase. We begin by assuming a general carrier-modulated waveform with M signals of arbitrary correlation. For this general signal set we derive the optimal receiver in the presence of additive Gaussian noise. We then derive the important special case of additive white Gaussian noise, where it will be shown that the receiver can be implemented using an envelope detector. Next we derive an expression for the error probability of two signals in additive white Gaussian noise. In Section 3 we derive the error probability for M orthogonal signals in additive white Gaussian noise. Finally we derive expressions for the error probability of repetition codes with noncoherent reception.

1. Optimal Receiver in AGN

Assume additive stationary Gaussian noise and that $s_k(t)$ for $0 \le k \le M-1$ has the form
$$s_k(t) = a_k(t)\cos\bigl(\omega_c t + \beta_k(t)\bigr), \qquad 0 \le k \le M-1,$$
with
$$\int s_k^2(t)\,dt = \frac{1}{2}\int a_k^2(t)\,dt = E,$$
where $a_k(t)$ and $\beta_k(t)$ are lowpass waveforms with respect to $\omega_c$. When $s_k(t)$ is transmitted the received waveform has the form
$$r(t) = a_k(t)\cos\bigl(\omega_c t + \beta_k(t) + \theta_k\bigr) + n(t)$$
where $\theta_k$ is a random phase. If $\theta_k = 0$ with probability 1 then we have the usual coherent reception situation already discussed. We will for this section assume that $\theta_k$ is uniformly distributed on the interval $[0, 2\pi)$ and that the receiver does not know the value of $\theta_k$. We can use the representation of bandpass signals and noise in deriving the optimal receiver:
$$r(t) = \operatorname{Re}\left[u_k(t)\,e^{j\theta_k}\,e^{j\omega_c t}\right] + n(t)$$
where
$$u_k(t) = a_k(t)\cos\beta_k(t) + j\,a_k(t)\sin\beta_k(t)$$
and $j = \sqrt{-1}$. Assuming the noise is also narrowband we can express the noise as
$$n(t) = n_c(t)\cos(\omega_c t) - n_s(t)\sin(\omega_c t).$$
Then the lowpass representation of the received signal becomes
$$\tilde r(t) = u_k(t)\,e^{j\theta_k} + z(t)$$


where
$$z(t) = n_c(t) + j\,n_s(t).$$
Now assume that $z(t)$ is a Gaussian process with covariance function $K(s,t)$ which has eigenfunctions $\phi_l(t)$ and eigenvalues $\lambda_l$. Then we can express the received lowpass signal as
$$\tilde r(t) = \sum_{l=0}^{\infty}\left(u_{k,l}\,e^{j\theta_k} + z_l\right)\phi_l(t) = \sum_{l=0}^{\infty}\tilde r_l\,\phi_l(t)$$
where
$$u_{k,l} = \int u_k(t)\,\phi_l^*(t)\,dt, \qquad z_l = \int z(t)\,\phi_l^*(t)\,dt$$
and $z_l$ is a complex Gaussian random variable with mean zero and with
$$E\bigl[(\operatorname{Re} z_l)^2\bigr] = \lambda_l, \qquad E\bigl[(\operatorname{Im} z_l)^2\bigr] = \lambda_l, \qquad E\bigl[\operatorname{Re}(z_l)\operatorname{Im}(z_l)\bigr] = 0.$$
We can now calculate the probability density of $\tilde r = (\tilde r_1,\ldots,\tilde r_N)$ conditioned on the value of $\theta_k$. Let $\tilde r_l = u_{k,l}e^{j\theta_k} + z_l$; then
$$p_k(\tilde r_l \mid \theta_k) = \frac{1}{2\pi\lambda_l}\exp\left\{-\frac{1}{2\lambda_l}\bigl|\tilde r_l - e^{j\theta_k}u_{k,l}\bigr|^2\right\}$$
and
$$\begin{aligned}
p_k(\tilde r \mid \theta_k)
&= \frac{1}{\prod_{l=1}^{N}2\pi\lambda_l}\exp\left\{-\frac{1}{2}\sum_{l=1}^{N}\frac{|\tilde r_l - e^{j\theta_k}u_{k,l}|^2}{\lambda_l}\right\}\\
&= \frac{1}{\prod_{l=1}^{N}2\pi\lambda_l}\exp\left\{-\frac{1}{2}\sum_{l=1}^{N}\frac{|\tilde r_l|^2 + |u_{k,l}|^2 - \tilde r_l e^{-j\theta_k}u_{k,l}^* - \tilde r_l^* e^{j\theta_k}u_{k,l}}{\lambda_l}\right\}\\
&= \prod_{l=1}^{N}(2\pi\lambda_l)^{-1}\exp\left\{-\frac{1}{2}\sum_{l=1}^{N}\frac{|\tilde r_l|^2 + |u_{k,l}|^2 - 2\operatorname{Re}\bigl[\tilde r_l e^{-j\theta_k}u_{k,l}^*\bigr]}{\lambda_l}\right\}\\
&= \prod_{l=1}^{N}(2\pi\lambda_l)^{-1}\exp\left\{-\frac{1}{2}\sum_{l=1}^{N}\frac{|\tilde r_l|^2 + |u_{k,l}|^2}{\lambda_l} + \sum_{l=1}^{N}\frac{\operatorname{Re}[\tilde r_l u_{k,l}^*]\cos\theta_k + \operatorname{Im}[\tilde r_l u_{k,l}^*]\sin\theta_k}{\lambda_l}\right\}\\
&= \prod_{l=1}^{N}(2\pi\lambda_l)^{-1}\exp\left\{-\frac{1}{2}\sum_{l=1}^{N}\frac{|\tilde r_l|^2 + |u_{k,l}|^2}{\lambda_l} + \operatorname{Re}\left[\sum_{l=1}^{N}\frac{\tilde r_l u_{k,l}^*}{\lambda_l}\right]\cos\theta_k + \operatorname{Im}\left[\sum_{l=1}^{N}\frac{\tilde r_l u_{k,l}^*}{\lambda_l}\right]\sin\theta_k\right\}\\
&= \prod_{l=1}^{N}(2\pi\lambda_l)^{-1}\exp\left\{-\frac{1}{2}\sum_{l=1}^{N}\frac{|\tilde r_l|^2 + |u_{k,l}|^2}{\lambda_l} + \left|\sum_{l=1}^{N}\frac{\tilde r_l u_{k,l}^*}{\lambda_l}\right|\cos(\theta_k - \psi)\right\}
\end{aligned}$$
where
$$\psi = \tan^{-1}\left(\frac{\operatorname{Im}\sum_{l=1}^{N}\tilde r_l u_{k,l}^*/\lambda_l}{\operatorname{Re}\sum_{l=1}^{N}\tilde r_l u_{k,l}^*/\lambda_l}\right).$$
The joint density given signal $k$ transmitted is then obtained by averaging with respect to $\theta_k$:
$$p_k(\tilde r_1,\ldots,\tilde r_N) = \int_{\theta=0}^{2\pi}\frac{1}{2\pi}\,p_k(\tilde r_1,\ldots,\tilde r_N \mid \theta_k)\,d\theta_k$$


$$\begin{aligned}
&= \int_{\theta=0}^{2\pi}\frac{1}{2\pi}\prod_{l=1}^{N}(2\pi\lambda_l)^{-1}\exp\left\{-\frac{1}{2}\sum_{l=1}^{N}\frac{|\tilde r_l|^2 + |u_{k,l}|^2}{\lambda_l} + \left|\sum_{l=1}^{N}\frac{\tilde r_l u_{k,l}^*}{\lambda_l}\right|\cos(\theta - \psi)\right\}d\theta\\
&= \prod_{l=1}^{N}(2\pi\lambda_l)^{-1}\exp\left\{-\frac{1}{2}\sum_{l=1}^{N}\frac{|\tilde r_l|^2 + |u_{k,l}|^2}{\lambda_l}\right\}\int_{\theta=0}^{2\pi}\frac{1}{2\pi}\exp\left\{\left|\sum_{l=1}^{N}\frac{\tilde r_l u_{k,l}^*}{\lambda_l}\right|\cos\theta\right\}d\theta\\
&= \prod_{l=1}^{N}(2\pi\lambda_l)^{-1}\exp\left\{-\frac{1}{2}\sum_{l=1}^{N}\frac{|\tilde r_l|^2 + |u_{k,l}|^2}{\lambda_l}\right\}I_0\left(\left|\sum_{l=1}^{N}\frac{\tilde r_l u_{k,l}^*}{\lambda_l}\right|\right)
\end{aligned}$$
where
$$I_0(x) \triangleq \frac{1}{2\pi}\int_{\theta=0}^{2\pi}\exp(x\cos\theta)\,d\theta$$
is the modified Bessel function of order 0. Now let us calculate the likelihood ratio between hypothesis $H_k$ and the hypothesis that no signal was present:
$$\begin{aligned}
\Lambda_k(N) = \frac{p_k(\tilde r_1,\ldots,\tilde r_N)}{p_0(\tilde r_1,\ldots,\tilde r_N)}
&= \frac{\prod_{l=1}^{N}(2\pi\lambda_l)^{-1}\exp\left\{-\frac{1}{2}\sum_{l=1}^{N}\frac{|\tilde r_l|^2 + |u_{k,l}|^2}{\lambda_l}\right\}I_0\left(\left|\sum_{l=1}^{N}\frac{\tilde r_l u_{k,l}^*}{\lambda_l}\right|\right)}{\prod_{l=1}^{N}(2\pi\lambda_l)^{-1}\exp\left\{-\frac{1}{2}\sum_{l=1}^{N}\frac{|\tilde r_l|^2}{\lambda_l}\right\}}\\
&= \exp\left\{-\frac{1}{2}\sum_{l=1}^{N}\frac{u_{k,l}u_{k,l}^*}{\lambda_l}\right\}I_0\left(\left|\sum_{l=1}^{N}\frac{\tilde r_l u_{k,l}^*}{\lambda_l}\right|\right).
\end{aligned}$$
Now
$$\lim_{N\to\infty}\sum_{l=1}^{N}\frac{u_{k,l}u_{k,l}^*}{\lambda_l} = \int u_k(t)\,q_k^*(t)\,dt$$
where
$$q_k(t) = \lim_{N\to\infty}\sum_{l=1}^{N}\frac{u_{k,l}}{\lambda_l}\phi_l(t)$$
is the solution of the integral equation
$$u_k(s) = \int K(s,t)\,q_k(t)\,dt.$$
Similarly
$$\lim_{N\to\infty}\sum_{l=1}^{N}\frac{\tilde r_l u_{k,l}^*}{\lambda_l} = \int \tilde r(t)\,q_k^*(t)\,dt.$$
Thus the optimal receiver computes the following likelihood ratios
$$\Lambda_k\bigl(\tilde r(t)\bigr) = \lim_{N\to\infty}\Lambda_k(N) = \exp\left\{-\frac{1}{2}\int u_k(t)\,q_k^*(t)\,dt\right\}I_0\left(\left|\int \tilde r(t)\,q_k^*(t)\,dt\right|\right)$$
and chooses $k$ for which $\Lambda_k$ is maximum.

Special Case: White Gaussian Noise

In this case the integral equation is easily solved:

$$q_k(t) = \frac{2}{N_0}u_k(t)$$
so that
$$\Lambda_k\bigl(\tilde r(t)\bigr) = \exp\left\{-\frac{1}{N_0}\int |u_k(t)|^2\,dt\right\}I_0\left(\frac{2}{N_0}\left|\int \tilde r(t)\,u_k^*(t)\,dt\right|\right).$$


For equi-energy signals this reduces to choosing $k$ that maximizes
$$I_0\left(\frac{2}{N_0}\left|\int \tilde r(t)\,u_k^*(t)\,dt\right|\right)$$
and since $I_0(x)$ is an increasing function of $x$ the optimal receiver chooses $k$ to maximize
$$\left|\int \tilde r(t)\,u_k^*(t)\,dt\right|^2 = \left(\operatorname{Re}\int \tilde r(t)\,u_k^*(t)\,dt\right)^2 + \left(\operatorname{Im}\int \tilde r(t)\,u_k^*(t)\,dt\right)^2.$$
Now note that
$$r(t) = \operatorname{Re}\left[\tilde r(t)\,e^{j\omega_c t}\right]$$
and consider the following integral:
$$\int r(t)\,a_k(t)\cos\bigl(\omega_c t + \beta_k(t)\bigr)\,dt = \int \operatorname{Re}\left[\tilde r(t)\,e^{j\omega_c t}\right]\operatorname{Re}\left[u_k(t)\,e^{j\omega_c t}\right]dt.$$
Since (can you show this?) for any two complex numbers $a$ and $b$
$$\operatorname{Re}[a]\operatorname{Re}[b] = \frac{1}{2}\operatorname{Re}[ab^*] + \frac{1}{2}\operatorname{Re}[ab],$$
the above integral becomes
$$\int r(t)\,a_k(t)\cos\bigl(\omega_c t + \beta_k(t)\bigr)\,dt = \frac{1}{2}\int \operatorname{Re}\left[\tilde r(t)\,u_k^*(t)\right]dt + \frac{1}{2}\int \operatorname{Re}\left[\tilde r(t)\,u_k(t)\,e^{j2\omega_c t}\right]dt.$$
That the second term is zero is due to the fact that both $\tilde r$ and $u_k$ are lowpass processes. Thus
$$\int r(t)\,a_k(t)\cos\bigl(\omega_c t + \beta_k(t)\bigr)\,dt = \frac{1}{2}\operatorname{Re}\int \tilde r(t)\,u_k^*(t)\,dt.$$
Similarly
$$\int r(t)\,a_k(t)\sin\bigl(\omega_c t + \beta_k(t)\bigr)\,dt = \frac{1}{2}\operatorname{Im}\int \tilde r(t)\,u_k^*(t)\,dt.$$
Thus an equivalent form of the optimal receiver computes, for each $k$,
$$\left[\int r(t)\,a_k(t)\cos\bigl(\omega_c t + \beta_k(t)\bigr)\,dt\right]^2 + \left[\int r(t)\,a_k(t)\sin\bigl(\omega_c t + \beta_k(t)\bigr)\,dt\right]^2$$
and decides that signal $k$ was transmitted if $k$ maximizes the above expression.
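As an illustration of this envelope (square-law) form of the receiver, here is a minimal discrete-time sketch in Python. The waveforms, sample rate, and parameter names are made up for the example and are not from the notes; note that the receiver uses only $a_k(t)$, $\beta_k(t)$, and $\omega_c$, never the unknown phase.

```python
import numpy as np

def noncoherent_receiver(r, a, beta, wc, t):
    """Square-law noncoherent receiver for M carrier-modulated signals.

    r    : received samples r(t) on the time grid t
    a    : array (M, len(t)) of lowpass envelopes a_k(t)
    beta : array (M, len(t)) of lowpass phase functions beta_k(t)
    wc   : carrier frequency in rad/s
    Returns the index k that maximizes the envelope-squared statistic.
    """
    dt = t[1] - t[0]
    metrics = []
    for ak, bk in zip(a, beta):
        xc = np.sum(r * ak * np.cos(wc * t + bk)) * dt   # in-phase correlation
        xs = np.sum(r * ak * np.sin(wc * t + bk)) * dt   # quadrature correlation
        metrics.append(xc**2 + xs**2)                    # envelope-squared statistic
    return int(np.argmax(metrics))

# Example: two orthogonal tones, unknown phase offset (illustrative numbers)
fs, T, fc = 100_000.0, 1e-2, 10_000.0
t = np.arange(0, T, 1.0 / fs)
a = np.ones((2, t.size))                      # constant envelopes
beta = np.vstack([np.zeros_like(t),           # signal 0: the carrier itself
                  2 * np.pi * 500 * t])       # signal 1: tone offset by 500 Hz
theta = 2 * np.pi * np.random.rand()          # phase unknown to the receiver
r = a[1] * np.cos(2 * np.pi * fc * t + beta[1] + theta) \
    + 0.5 * np.random.randn(t.size)           # transmit signal 1 in noise
print(noncoherent_receiver(r, a, beta, 2 * np.pi * fc, t))   # typically prints 1
```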

2. Performance of Binary Signals in AWGN

Consider a binary communication system with noncoherent reception. Assume the two transmitted signals are

$$s_k(t) = a_k(t)\cos\bigl(\omega_c t + \beta_k(t)\bigr), \qquad k = 1, 2,$$
with lowpass-equivalent correlation magnitude
$$\frac{1}{2E}\left|\int u_1(t)\,u_2^*(t)\,dt\right| = \rho, \qquad 0 \le \rho \le 1.$$
Let $H_1$ denote the event that signal $s_1$ is transmitted and $H_2$ the event that $s_2$ is transmitted. The received signal differs from the transmitted signal in that there is a random phase term included and because of the noise. If $s_1$ is transmitted the received signal is
$$r(t) = a_1(t)\cos\bigl(\omega_c t + \beta_1(t) + \theta_1\bigr) + n(t)$$
where $n(t)$ is a white Gaussian noise process. If $s_2$ is transmitted the received signal is
$$r(t) = a_2(t)\cos\bigl(\omega_c t + \beta_2(t) + \theta_2\bigr) + n(t).$$


We would like to compute the error probability of the optimal receiver. The optimal receiver processes the received signal by correlating with the two signals and then summing the squares. That is, the receiver first computes
$$X_{k,c} \triangleq \int r(t)\,a_k(t)\cos\bigl(\omega_c t + \beta_k(t)\bigr)\,dt
\qquad\text{and}\qquad
X_{k,s} \triangleq \int r(t)\,a_k(t)\sin\bigl(\omega_c t + \beta_k(t)\bigr)\,dt,$$
then
$$X_k = X_{k,c}^2 + X_{k,s}^2.$$
The receiver decides signal 2 was transmitted if $X_2 > X_1$ and otherwise decides $s_1$ was transmitted. The probability of error given signal $s_1$ is transmitted is then
$$P(\text{error} \mid H_1) = P(X_2 > X_1 \mid H_1).$$
To calculate the error probability we need to know the density of $X_k$. It is easy to see that $X_{k,c}$ and $X_{k,s}$ are Gaussian random variables with means
$$E[X_{k,c} \mid H_m, \theta_m] = \frac{1}{2}\rho_{k,m}\cos\theta_m, \qquad E[X_{k,s} \mid H_m, \theta_m] = \frac{1}{2}\rho_{k,m}\sin\theta_m$$
and variance
$$\operatorname{Var}[X_{k,c} \mid H_m, \theta_m] = \frac{1}{4}N_0 E,$$
where $E$ is the energy of the transmitted signal. The density of $X_k$ given $H_m$ can then be calculated in a straightforward manner as
$$p_m(x_k) = \frac{1}{2\sigma^2}\exp\left\{-\frac{\mu^2 + x_k}{2\sigma^2}\right\}I_0\left(\frac{\sqrt{x_k}\,\mu}{\sigma^2}\right), \qquad x_k \ge 0,$$
where
$$\mu^2 \triangleq \mu_c^2 + \mu_s^2, \qquad \mu_c \triangleq \frac{1}{2}\rho_{k,m}\cos\theta_k, \qquad \mu_s \triangleq \frac{1}{2}\rho_{k,m}\sin\theta_k, \qquad \sigma^2 \triangleq \frac{1}{4}N_0 E.$$
An involved calculation then yields
$$P(\text{error} \mid H_1) = P(X_2 > X_1 \mid H_1) = Q(a,b) - \frac{1}{2}\exp\left\{-\frac{a^2 + b^2}{2}\right\}I_0(ab)$$
where
$$a = \sqrt{\frac{E}{2N_0}\left(1 - \sqrt{1 - \rho^2}\right)}, \qquad b = \sqrt{\frac{E}{2N_0}\left(1 + \sqrt{1 - \rho^2}\right)},$$
and
$$Q(a,b) = \int_{b^2/2}^{\infty}\exp\left\{-\left(\frac{a^2}{2} + x\right)\right\}I_0\left(\sqrt{2x}\,a\right)dx$$
and is called Marcum's Q function.
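The expression above is straightforward to evaluate numerically. The sketch below is one way to do it (function names and the 10 dB test point are illustrative, not from the notes); it uses the standard identity that Marcum's $Q_1(a,b)$ equals the tail probability at $b^2$ of a noncentral chi-square variable with 2 degrees of freedom and noncentrality $a^2$.

```python
import numpy as np
from scipy.stats import ncx2
from scipy.special import i0e

def marcum_q1(a, b):
    """Marcum Q-function Q_1(a, b) via the noncentral chi-square tail:
    Q_1(a, b) = P[X > b^2] with X ~ chi-square(2 dof, noncentrality a^2)."""
    return ncx2.sf(b**2, df=2, nc=a**2)

def pe_binary_noncoherent(EN0, rho):
    """P(error) for two equal-energy signals with correlation magnitude rho,
    noncoherent detection in AWGN; EN0 is the ratio E/N0 (not in dB)."""
    a = np.sqrt(EN0 / 2 * (1 - np.sqrt(1 - rho**2)))
    b = np.sqrt(EN0 / 2 * (1 + np.sqrt(1 - rho**2)))
    # exp(-(a^2+b^2)/2) * I0(ab), written with the scaled Bessel i0e for stability
    bessel_term = 0.5 * i0e(a * b) * np.exp(-(a - b)**2 / 2)
    return marcum_q1(a, b) - bessel_term

EN0 = 10 ** (10 / 10)                          # 10 dB, as an example
print(pe_binary_noncoherent(EN0, rho=0.0))     # orthogonal signals
print(0.5 * np.exp(-EN0 / 2))                  # cross-check: (1/2) exp(-E/2N0) for rho = 0
```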


[Plot: bit error probability $P_{e,b}$ versus $E_b/N_0$ (dB) for correlation values $\rho = 0, 0.2, 0.4, 0.6, 0.8, 1$.]

Figure 5.1: Performance of Nonorthogonal Signals with Noncoherent Demodulation

3. Error Probability for M-orthogonal Signals in AWGN

In this section we derive the error probability for M-ary orthogonal signals in additive white Gaussian noise. The transmitted signal is one of $M$ signals, $s_0(t),\ldots,s_{M-1}(t)$, where
$$s_i(t) = \sqrt{2P}\,a_i(t)\cos(2\pi f_c t), \qquad 0 \le t \le T,$$
and
$$\int_0^T a_j(t)\,a_i(t)\,dt = \begin{cases} 0, & i \ne j,\\ T, & i = j.\end{cases}$$
The optimal receiver computes
$$X_{c,j} = \int_0^T r(t)\sqrt{\frac{2}{T}}\,a_j(t)\cos(2\pi f_c t)\,dt, \qquad
X_{s,j} = \int_0^T r(t)\sqrt{\frac{2}{T}}\,a_j(t)\sin(2\pi f_c t)\,dt$$
for $j = 0, 1, \ldots, M-1$. These are further processed by computing $Y_j = X_{c,j}^2 + X_{s,j}^2$. The statistics of the receiver outputs (assuming signal $s_i(t)$ is transmitted) are
$$X_{c,j} = \sqrt{E}\,\delta_{ij}\cos\phi_i + n_c \sim N\!\left(\sqrt{E}\,\delta_{ij}\cos\phi_i,\ \frac{N_0}{2}\right), \qquad
X_{s,j} = \sqrt{E}\,\delta_{ij}\sin\phi_i + n_s \sim N\!\left(\sqrt{E}\,\delta_{ij}\sin\phi_i,\ \frac{N_0}{2}\right),$$
for $j = 0, 1, \ldots, M-1$.


Let $Y_j = X_{c,j}^2 + X_{s,j}^2$. We need to determine the probability that $Y_i$, which corresponds to the nonzero-mean random variables, exceeds every $Y_j$, $j \ne i$, which correspond to zero-mean random variables:
$$\begin{aligned}
P_{c,i} &= P(Y_j < Y_i,\ j \ne i \mid s_i \text{ transmitted})\\
&= E\bigl[P(Y_j < Y_i,\ j \ne i \mid Y_i,\ s_i \text{ trans})\bigr]\\
&= E\Bigl[\prod_{j \ne i} P(Y_j < Y_i \mid s_i \text{ trans},\ Y_i)\Bigr]\\
&= \int f_s(y_i)\bigl[P(Y_j < y_i \mid s_i \text{ transmitted})\bigr]^{M-1}dy_i\\
&= \int f_s(y_i)\bigl[F_n(y_i)\bigr]^{M-1}dy_i
\end{aligned}$$
where $F_s(y)$ ($f_s(y)$) is the distribution (density) function of $Y_j$ given signal $s_j(t)$ was transmitted and $F_n(y)$ ($f_n(y)$) is the distribution (density) function of $Y_i$ given signal $s_j(t)$ was transmitted with $i \ne j$. Doing an integration by parts we find
$$P_{c,i} = (M-1)\int_0^{\infty}F_s(y)\bigl[F_n(y)\bigr]^{M-2}f_n(y)\,dy.$$
Thus from the appendix we find that the probability of a correct decision is given as
$$P_{c,i} = (M-1)\int_0^{\infty}\left[1 - Q\!\left(\sqrt{\frac{2E}{N_0}},\sqrt{2y}\right)\right]\left(1 - e^{-y}\right)^{M-2}e^{-y}\,dy.$$
This expression is not too difficult to evaluate for most values of $M$. For small values of $M$ an alternative (and better known) expression can be derived as follows.

Define $Z_j = \sqrt{X_{c,j}^2 + X_{s,j}^2}$. Then
$$\begin{aligned}
P_{c,i} &= P(Z_j < Z_i,\ j \ne i \mid s_i \text{ transmitted})\\
&= E\bigl[P(Z_j < Z_i,\ j \ne i \mid Z_i,\ s_i \text{ trans})\bigr]\\
&= E\Bigl[\prod_{j \ne i}P(Z_j < Z_i \mid s_i \text{ trans},\ Z_i)\Bigr]\\
&= \int p(z_i)\bigl[P(Z_j < z_i)\bigr]^{M-1}dz_i.
\end{aligned}$$
Doing a change of variables ($\nu = \frac{u^2}{2\sigma^2}$, $d\nu = \frac{u\,du}{\sigma^2}$) we obtain
$$P(Z_j < z_i) = \int_{u=0}^{z_i}\frac{u}{\sigma^2}e^{-u^2/2\sigma^2}\,du = \int_{\nu=0}^{z_i^2/2\sigma^2}e^{-\nu}\,d\nu = 1 - e^{-z_i^2/2\sigma^2},$$
so that
$$\begin{aligned}
P_{c,i} &= \int_0^{\infty}p(z_i)\left[1 - e^{-z_i^2/2\sigma^2}\right]^{M-1}dz_i\\
&= \int_0^{\infty}p(z_i)\sum_{l=0}^{M-1}(-1)^l\binom{M-1}{l}e^{-l z_i^2/2\sigma^2}\,dz_i\\
&= \sum_{l=0}^{M-1}(-1)^l\binom{M-1}{l}\int_0^{\infty}p(z_i)\,e^{-l z_i^2/2\sigma^2}\,dz_i
\end{aligned}$$
and
$$\begin{aligned}
\int_0^{\infty}p(z_i)\,e^{-l z_i^2/2\sigma^2}\,dz_i
&= \int_0^{\infty}\frac{z_i}{\sigma^2}e^{-(z_i^2 + \mu^2)/2\sigma^2}I_0\left(\frac{\mu z_i}{\sigma^2}\right)e^{-l z_i^2/2\sigma^2}\,dz_i\\
&= e^{-\mu^2/2\sigma^2}\int_0^{\infty}\frac{z_i}{\sigma^2}e^{-(l+1)z_i^2/2\sigma^2}I_0\left(\frac{\mu z_i}{\sigma^2}\right)dz_i.
\end{aligned}$$


Doing another change of variables ($w_i = \sqrt{l+1}\,z_i$, $dw_i = \sqrt{l+1}\,dz_i$) we get
$$\int_0^{\infty}p(z_i)\,e^{-l z_i^2/2\sigma^2}\,dz_i
= e^{-\mu^2/2\sigma^2}\,\frac{1}{l+1}\int_0^{\infty}\frac{w_i}{\sigma^2}\,e^{-w_i^2/2\sigma^2}\,I_0\left(\frac{\mu w_i}{\sqrt{l+1}\,\sigma^2}\right)dw_i.$$
Let $\hat\mu = \frac{\mu}{\sqrt{l+1}}$. Then
$$\begin{aligned}
\int_0^{\infty}p(z_i)\,e^{-l z_i^2/2\sigma^2}\,dz_i
&= \frac{e^{-\mu^2/2\sigma^2}\,e^{\hat\mu^2/2\sigma^2}}{l+1}\underbrace{\int_0^{\infty}\frac{w_i}{\sigma^2}\,e^{-(w_i^2 + \hat\mu^2)/2\sigma^2}\,I_0\left(\frac{\hat\mu w_i}{\sigma^2}\right)dw_i}_{=1}\\
&= \frac{\exp\left\{\frac{\hat\mu^2 - \mu^2}{2\sigma^2}\right\}}{l+1}
= \frac{\exp\left\{\frac{\frac{\mu^2}{l+1} - \mu^2}{2\sigma^2}\right\}}{l+1}
= \exp\left\{-\frac{l\mu^2}{2(l+1)\sigma^2}\right\}\frac{1}{l+1}.
\end{aligned}$$
Substituting this into the expression for the probability of a correct decision, we obtain
$$P_{c,i} = \sum_{l=0}^{M-1}(-1)^l\binom{M-1}{l}\frac{\exp\left\{-\frac{l\mu^2}{2(l+1)\sigma^2}\right\}}{l+1}$$
where
$$\frac{\mu^2}{2\sigma^2} = \frac{E}{2\,\frac{N_0}{2}} = \frac{E}{N_0}.$$
Thus
$$P_{c,i} = 1 + \sum_{l=1}^{M-1}\frac{(-1)^l\binom{M-1}{l}}{l+1}\exp\left\{-\frac{l}{l+1}\frac{E}{N_0}\right\}$$
and
$$P_{e,i} = 1 - P_{c,i} = \sum_{l=1}^{M-1}\frac{(-1)^{l+1}\binom{M-1}{l}}{l+1}\exp\left\{-\frac{l}{l+1}\frac{E}{N_0}\right\}
= \sum_{l=1}^{M-1}\frac{(-1)^{l+1}\binom{M-1}{l}}{l+1}\exp\left\{-\frac{l}{l+1}\frac{E_b\log_2 M}{N_0}\right\}$$
with
$$P_e = \frac{1}{M}\sum_{i=1}^{M}P_{e,i} = P_{e,i}.$$
We can also determine the bit error probability from the symbol error probability. If $M = 2^k$ then for each symbol transmitted $k$ bits of information are being transmitted. Because of the symmetry
$$P_{e,b} = \frac{1}{2}\,\frac{M}{M-1}\,P_{e,i}.$$
For $M = 2$ the error probability takes a particularly simple form, namely
$$P_{e,b} = P_{e,i} = \frac{1}{2}e^{-E_b/2N_0} \qquad (M = 2).$$
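The finite alternating sum is easy to evaluate directly. The sketch below (illustrative function names; it assumes $E = E_b\log_2 M$ and the bit-error conversion above) computes the symbol and bit error probabilities and reproduces the $M = 2$ special case.

```python
from math import comb, exp, log2

def pe_mfsk_noncoherent(M, EbN0):
    """Symbol error probability of M-ary orthogonal signaling with noncoherent
    detection in AWGN, using the finite alternating sum derived above."""
    EsN0 = EbN0 * log2(M)                       # E = Eb * log2(M)
    return sum((-1) ** (l + 1) * comb(M - 1, l) / (l + 1)
               * exp(-l / (l + 1) * EsN0) for l in range(1, M))

def peb_mfsk_noncoherent(M, EbN0):
    """Bit error probability via P_{e,b} = (1/2) * M/(M-1) * P_{e,i}."""
    return 0.5 * M / (M - 1) * pe_mfsk_noncoherent(M, EbN0)

EbN0 = 10 ** (8 / 10)                           # 8 dB, as an example
for M in (2, 4, 16):
    print(M, peb_mfsk_noncoherent(M, EbN0))
print(0.5 * exp(-EbN0 / 2))                     # M = 2 check: (1/2) exp(-Eb/2N0)
```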


[Plot: bit error probability $P_b$ versus $E_b/N_0$ (dB) for $M = 2, 4, 8, 16, 32, 64$ and the limit $M \to \infty$, which drops to zero at $E_b/N_0 = -1.59$ dB.]

Figure 5.2: Bit error probability for M-ary orthogonal signals with noncoherent detection

The limiting behavior of the error probability for M-ary orthogonal signals for $M$ large with noncoherent demodulation is the same as the limiting performance of coherent demodulation:
$$\lim_{M\to\infty}P_e(M) = \begin{cases} 1, & E_b/N_0 < \ln 2 = -1.59\ \text{dB},\\ 0, & E_b/N_0 > \ln 2 = -1.59\ \text{dB},\end{cases}$$
$$\lim_{M\to\infty}P_{e,b}(M) = \begin{cases} 1/2, & E_b/N_0 < \ln 2 = -1.59\ \text{dB},\\ 0, & E_b/N_0 > \ln 2 = -1.59\ \text{dB}.\end{cases}$$

4. Error Estimates for Repetition Codes with Noncoherent Reception

Consider transmitting one of two codewords of length L using binary frequency shift keying (orthogonal) and noncoherent reception. The optimum receiver would compute the log-likelihood ratio and compare it to zero (for equally likely codewords) to determine which of the two codewords was transmitted. Assume that the first codeword is the all-zero vector (0,0,...,0) of length L and the other codeword is the all-one vector (1,1,...,1) of length L. If the two codewords agreed in some positions then we would not need to process the received signal over the interval of time where they agreed, since no information can be gained about which signal was transmitted from the received signal in that interval.

The transmitted signal would be


$$s_0(t) = \sum_{l=0}^{L-1}\sqrt{2P}\cos(\omega_0 t + \theta_{0,l})\,p_T(t - lT)$$
or
$$s_1(t) = \sum_{l=0}^{L-1}\sqrt{2P}\cos(\omega_1 t + \theta_{1,l})\,p_T(t - lT)$$
where the $\theta_{i,l}$ are independent identically distributed random variables with a uniform distribution, unknown to the receiver. The received signal is
$$r(t) = \begin{cases} s_0(t) + n(t), & H_0 \text{ true},\\ s_1(t) + n(t), & H_1 \text{ true.}\end{cases}$$
The receiver processes the signals using noncoherent matched filters; that is, the received signal is multiplied by $\exp\{j\omega_0 t\}$, passed through a filter with impulse response $p_T(t)$, sampled, and then the magnitude is formed. Denote by $Y_{0,l}$ the output of the noncoherent matched filter:
$$Y_{0,l} = \left|\int_{-\infty}^{\infty}r(s)\,p_T(s - lT)\exp\{j\omega_0(s - lT)\}\,ds\right|^2$$
and
$$Y_{1,l} = \left|\int_{-\infty}^{\infty}r(s)\,p_T(s - lT)\exp\{j\omega_1(s - lT)\}\,ds\right|^2.$$
The statistics of $Y_{0,l}$, $Y_{1,l}$ were calculated in the previous section. Because of the orthogonality of the received signals and the fact that the noise is white, the joint statistics of $Y = (Y_{0,1},\ldots,Y_{0,L},Y_{1,1},\ldots,Y_{1,L})$ factor (conditioned on either of the two hypotheses) as
$$p(y_{0,1},\ldots,y_{0,L},y_{1,1},\ldots,y_{1,L} \mid H_k) = \prod_{l=1}^{L}p(y_{0,l} \mid H_k)\,p(y_{1,l} \mid H_k), \qquad k = 0, 1.$$
The log-likelihood ratio is then
$$\begin{aligned}
\Lambda = \log\frac{p(y \mid H_1)}{p(y \mid H_0)}
&= \log\prod_{l=1}^{L}\frac{p(y_{0,l} \mid H_1)\,p(y_{1,l} \mid H_1)}{p(y_{0,l} \mid H_0)\,p(y_{1,l} \mid H_0)}\\
&= \sum_{l=1}^{L}\log\frac{p(y_{0,l} \mid H_1)\,p(y_{1,l} \mid H_1)}{p(y_{0,l} \mid H_0)\,p(y_{1,l} \mid H_0)}\\
&= \sum_{l=1}^{L}\log\bigl[p(y_{0,l} \mid H_1)\,p(y_{1,l} \mid H_1)\bigr] - \log\bigl[p(y_{0,l} \mid H_0)\,p(y_{1,l} \mid H_0)\bigr].
\end{aligned}$$
Substituting in the appropriate density functions yields
$$\Lambda = \sum_{l=1}^{L}\log I_0\left(\frac{\mu\sqrt{y_{1,l}}}{\sigma^2}\right) - \log I_0\left(\frac{\mu\sqrt{y_{0,l}}}{\sigma^2}\right).$$
The optimum receiver is thus
$$\sum_{l=1}^{L}\log I_0\left(\frac{\mu\sqrt{y_{1,l}}}{\sigma^2}\right)\ \mathop{\gtrless}_{H_0}^{H_1}\ \sum_{l=1}^{L}\log I_0\left(\frac{\mu\sqrt{y_{0,l}}}{\sigma^2}\right),$$
which is quite complex to implement. It would be useful to have suboptimum receivers which are easier to implement but have nearly optimum performance. Before we suggest some suboptimal receivers it would be


useful to get estimates of the performance of the optimum receiver. The error probability (given $H_0$, say) is easy to write down but hard to evaluate except for small $L$. It is
$$P_e = \int_{y \in R_1}p(y \mid H_0)\,dy$$
where $R_1$ is the region where the log-likelihood ratio is positive. This is a $2L$-dimensional integral. The Chernoff bound on the error probability can be expressed as
$$P_e \le D^L$$
where
$$D = \int_y \sqrt{p(y \mid H_0)\,p(y \mid H_1)}\,dy.$$
For the additive white Gaussian noise channel
$$D = \left[\int_0^{\infty}\frac{1}{2\sigma^2}\exp\left\{-\frac{1}{2}\left(\frac{y}{\sigma^2} + \frac{\mu^2}{2\sigma^2}\right)\right\}\sqrt{I_0\left(\frac{\mu\sqrt{y}}{\sigma^2}\right)}\,dy\right]^2
= \left[\int_0^{\infty}\exp\left\{-\left(w + \frac{\Gamma}{2}\right)\right\}\sqrt{I_0\left(\sqrt{4\Gamma w}\right)}\,dw\right]^2$$
and $\Gamma = E/N_0$. This is much more computationally attractive than the exact expression. An asymptotic form for low signal-to-noise ratios is not too difficult to compute:
$$D \approx 1 - \left(\frac{\Gamma}{2}\right)^2, \qquad \Gamma \text{ small}.$$
For hard decisions
$$D = 2\sqrt{p(1-p)}, \qquad p = \frac{1}{2}e^{-\Gamma/2},$$
which for low signal-to-noise ratios becomes
$$D \approx 1 - \frac{(\Gamma/2)^2}{2}, \qquad \Gamma \text{ small}.$$
Thus for low signal-to-noise ratios hard decisions cost a factor of $\sqrt{2}$, i.e. 1.5 dB.

Now let us consider some suboptimal receivers. First note that $\mu/\sigma^2 = 2\sqrt{E}/N_0$, so the argument of the Bessel function will be small if the noise power is large (low signal-to-noise ratio) and large if the noise power is small (high signal-to-noise ratio). Also note that (see Abramowitz and Stegun, Handbook of Mathematical Functions)
$$I_0(x) \approx 1 + \frac{x^2}{4}, \qquad \log I_0(x) \approx \frac{x^2}{4}, \qquad x \text{ small},$$
$$I_0(x) \approx \frac{e^x}{\sqrt{2\pi x}}, \qquad x \text{ large}.$$

Using the approximation for $\log I_0$ in the optimum decision rule we get
$$\sum_{l=1}^{L}\frac{\mu^2 y_{1,l}}{4\sigma^4}\ \mathop{\gtrless}_{H_0}^{H_1}\ \sum_{l=1}^{L}\frac{\mu^2 y_{0,l}}{4\sigma^4}
\quad\Longleftrightarrow\quad
\sum_{l=1}^{L}y_{1,l}\ \mathop{\gtrless}_{H_0}^{H_1}\ \sum_{l=1}^{L}y_{0,l}.$$


So for small signal-to-noise ratios the optimum receiver is the square-law combining receiver. Now consider when the argument of the Bessel function is large. The optimum decision rule then becomes
$$\sum_{l=1}^{L}\left[\frac{\mu\sqrt{y_{1,l}}}{\sigma^2} - \log\sqrt{\frac{2\pi\mu\sqrt{y_{1,l}}}{\sigma^2}}\right] - \left[\frac{\mu\sqrt{y_{0,l}}}{\sigma^2} - \log\sqrt{\frac{2\pi\mu\sqrt{y_{0,l}}}{\sigma^2}}\right]\ \mathop{\gtrless}_{H_0}^{H_1}\ 0.$$
Let $w_{i,l} = y_{i,l}/(2\sigma^2)$; then the decision rule is
$$\sum_{l=1}^{L}\left[\sqrt{4\Gamma w_{1,l}} - \log\sqrt{2\pi\sqrt{4\Gamma w_{1,l}}}\right] - \left[\sqrt{4\Gamma w_{0,l}} - \log\sqrt{2\pi\sqrt{4\Gamma w_{0,l}}}\right]\ \mathop{\gtrless}_{H_0}^{H_1}\ 0.$$
Note that the average value of $W$ given the signal is present is $\Gamma + 1$, while the average value of $W$ given no signal is 1. For very large $\Gamma$ the terms $\sqrt{4\Gamma w}$ dominate the terms of the form $\log\sqrt{2\pi\sqrt{4\Gamma w}}$, and thus the optimum decision rule is
$$\sum_{l=1}^{L}\sqrt{w_{1,l}}\ \mathop{\gtrless}_{H_0}^{H_1}\ \sum_{l=1}^{L}\sqrt{w_{0,l}}.$$
Thus the decision rule for very high signal-to-noise ratio is to add the square roots of the noncoherent matched filter outputs.
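A small numerical sketch makes the two regimes concrete. The helper below (illustrative names; it assumes the matched-filter outputs have already been normalized so that the Bessel argument is $\sqrt{4\Gamma w}$, as in the text) evaluates the exact $\log I_0$ metric next to the square-law and square-root metrics.

```python
import numpy as np
from scipy.special import i0e

def log_i0(x):
    """Numerically stable log I0(x) = log(i0e(x)) + |x|."""
    x = np.asarray(x, dtype=float)
    return np.log(i0e(x)) + np.abs(x)

def decision_statistics(w, snr):
    """Combining metrics for normalized noncoherent matched-filter outputs w
    (shape (L,)), where snr plays the role of Gamma in the text.

    Returns (optimal log-I0 metric, square-law metric, square-root metric)."""
    arg = np.sqrt(4.0 * snr * w)
    return log_i0(arg).sum(), w.sum(), np.sqrt(w).sum()

# The optimal metric orders candidate codewords essentially like the square-law
# metric at low SNR and like the square-root metric at high SNR.
rng = np.random.default_rng(0)
w = rng.exponential(size=8)
for snr in (0.05, 50.0):
    print(snr, decision_statistics(w, snr))
```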

It is of interest to analyze the performance of these suboptimal receivers. The receiver for very low signal-to-noise ratios is (relatively) easy to analyze. First let us normalize the density for the sum of the squares of the $2L$ random variables. Let
$$W_0 = \frac{1}{2\sigma^2}\sum_{l=1}^{L}X_{c,0,l}^2 + X_{s,0,l}^2$$
and similarly for $W_1$. Then the density of $W_0$ is given by
$$f_{W_0}(w_0 \mid H_0) = \left(\frac{w_0}{\Gamma}\right)^{(L-1)/2}\exp\{-(w_0 + \Gamma)\}\,I_{L-1}\left(\sqrt{4w_0\Gamma}\right), \qquad w_0 \ge 0,$$
$$f_{W_0}(w_0 \mid H_1) = \frac{w_0^{L-1}}{(L-1)!}\exp\{-w_0\}, \qquad w_0 \ge 0,$$
and similarly for $W_1$. Then
$$\begin{aligned}
P_e &= 1 - P(W_0 > W_1 \mid H_0)\\
&= 1 - \int_0^{\infty}f_{W_0}(w_0 \mid H_0)\,P(W_1 < w_0 \mid H_0)\,dw_0\\
&= 1 - \int_0^{\infty}f_{W_0}(w_0 \mid H_0)\left[1 - \sum_{m=0}^{L-1}\frac{w_0^m}{m!}e^{-w_0}\right]dw_0,
\end{aligned}$$
which evaluates to
$$P_e = \frac{1}{2}\exp\left(-\frac{\Gamma}{2}\right)\sum_{i=0}^{L-1}\frac{(\Gamma/2)^i}{i!\,(L+i-1)!}\sum_{j=i}^{L-1}\frac{(j+L-1)!}{(j-i)!\,2^{j+L-1}}.$$
For $L = 1$ the above becomes
$$P_e = \frac{1}{2}e^{-\Gamma/2}$$
where $\Gamma = E/N_0$. The Chernoff bound can also be calculated for square-law combining.
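The per-symbol Bhattacharyya parameter $D$ derived above is a one-dimensional integral and is easy to evaluate numerically. The sketch below (function names are illustrative) computes $D$ for soft (optimal) and hard decisions and shows the roughly factor-of-two gap in $1 - D$ at low SNR, which is the 1.5 dB penalty quoted in the text.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0e

def d_soft(gamma):
    """Bhattacharyya parameter for soft combining:
    D = [ int_0^inf exp(-(w + gamma/2)) sqrt(I0(sqrt(4*gamma*w))) dw ]^2."""
    def integrand(w):
        s = np.sqrt(4.0 * gamma * w)
        # sqrt(I0(s)) = sqrt(i0e(s)) * exp(s/2), folded into one exponent
        return np.sqrt(i0e(s)) * np.exp(s / 2.0 - w - gamma / 2.0)
    val, _ = quad(integrand, 0.0, np.inf)
    return val ** 2

def d_hard(gamma):
    """Bhattacharyya parameter for hard decisions, p = 0.5*exp(-gamma/2)."""
    p = 0.5 * np.exp(-gamma / 2.0)
    return 2.0 * np.sqrt(p * (1.0 - p))

for gamma in (0.1, 0.3, 1.0):
    print(gamma, 1 - d_soft(gamma), 1 - d_hard(gamma))
# At small gamma, 1 - d_soft ~ (gamma/2)^2 while 1 - d_hard ~ (gamma/2)^2 / 2, so
# matching the exponent of D^L costs a factor sqrt(2) in gamma, about 1.5 dB.
```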

5. Primer on sums of squares of Gaussian random variables

First we derive the density for the sum of the squares of two Gaussian random variables. Let

$$X_c \sim N(\mu_c, \sigma^2), \qquad X_s \sim N(\mu_s, \sigma^2)$$


with $X_c$, $X_s$ independent. Let $\mu^2 = \mu_c^2 + \mu_s^2$ and
$$Y = X_c^2 + X_s^2.$$
Then
$$\begin{aligned}
P(Y \le y) &= \iint_{x_c^2 + x_s^2 \le y}\frac{1}{2\pi\sigma^2}\exp\left\{-\frac{1}{2\sigma^2}\left[(x_c - \mu_c)^2 + (x_s - \mu_s)^2\right]\right\}dx_c\,dx_s\\
&= \iint_{x_c^2 + x_s^2 \le y}\frac{1}{2\pi\sigma^2}\exp\left\{-\frac{1}{2\sigma^2}\left[x_c^2 + x_s^2 - 2(x_c\mu_c + x_s\mu_s) + \mu^2\right]\right\}dx_c\,dx_s\\
&= \iint_{x_c^2 + x_s^2 \le y}\frac{1}{2\pi\sigma^2}\exp\left\{-\frac{1}{2\sigma^2}\left[x_c^2 + x_s^2 - 2\mu\sqrt{x_c^2 + x_s^2}\cos(\phi - \gamma) + \mu^2\right]\right\}dx_c\,dx_s
\end{aligned}$$
where $\phi = \tan^{-1}(x_s/x_c)$ and $\gamma = \tan^{-1}(\mu_s/\mu_c)$, since
$$x_c\mu_c + x_s\mu_s = \beta\cos(\phi - \gamma) = \beta(\cos\phi\cos\gamma + \sin\phi\sin\gamma)$$
with
$$\cos\phi = \frac{x_c}{\sqrt{x_c^2 + x_s^2}}, \quad \sin\phi = \frac{x_s}{\sqrt{x_c^2 + x_s^2}}, \quad
\cos\gamma = \frac{\mu_c}{\sqrt{\mu_c^2 + \mu_s^2}}, \quad \sin\gamma = \frac{\mu_s}{\sqrt{\mu_c^2 + \mu_s^2}}, \quad
\beta = \sqrt{(\mu_c^2 + \mu_s^2)(x_c^2 + x_s^2)}.$$
Changing to polar coordinates,
$$\begin{aligned}
P(Y \le y) &= \int_{r^2 \le y}\int_{\phi=0}^{2\pi}\frac{r}{2\pi\sigma^2}\exp\left\{-\frac{r^2}{2\sigma^2} + \frac{\mu r}{\sigma^2}\cos(\phi - \gamma) - \frac{\mu^2}{2\sigma^2}\right\}d\phi\,dr\\
&= \int_{r \le \sqrt{y}}\frac{r}{\sigma^2}\exp\left\{-\frac{r^2}{2\sigma^2}\right\}e^{-\mu^2/2\sigma^2}\underbrace{\frac{1}{2\pi}\int_{\phi=0}^{2\pi}\exp\left\{\frac{\mu r}{\sigma^2}\cos(\phi - \gamma)\right\}d\phi}_{I_0\left(\frac{\mu r}{\sigma^2}\right)}\,dr\\
&= \int_{r=0}^{\sqrt{y}}\frac{r}{\sigma^2}\exp\left\{-\frac{r^2 + \mu^2}{2\sigma^2}\right\}I_0\left(\frac{\mu r}{\sigma^2}\right)dr.
\end{aligned}$$
Let $u = r^2$; then $0 \le r \le \sqrt{y}$ is equivalent to $u \le y$, and $du = 2r\,dr$:
$$P(Y \le y) = \int_{u=0}^{y}\frac{1}{2\sigma^2}\exp\left\{-\frac{u + \mu^2}{2\sigma^2}\right\}I_0\left(\frac{\mu\sqrt{u}}{\sigma^2}\right)du,$$
so that
$$f_Y(y) = \frac{1}{2\sigma^2}\exp\left\{-\frac{y + \mu^2}{2\sigma^2}\right\}I_0\left(\frac{\mu\sqrt{y}}{\sigma^2}\right).$$


A change of variables makes for a cleaner expression. Let $W = Y/(2\sigma^2)$. Then $f_W(w) = 2\sigma^2 f_Y(2\sigma^2 w)$, i.e.
$$f_W(w) = \exp\{-(w + \Gamma)\}\,I_0\left(\sqrt{4\Gamma w}\right)$$
where $\Gamma = \mu^2/(2\sigma^2)$. (If the receiver does this normalization then it must know the power spectral density of the noise.) Now let $Z = \sqrt{Y}$. Then
$$P(Z \le z) = P(\sqrt{Y} \le z) = P(Y \le z^2), \qquad F_Z(z) = F_Y(z^2),$$
$$f_Z(z) = f_Y(z^2)\,2z = \frac{z}{\sigma^2}\exp\left\{-\frac{z^2 + \mu^2}{2\sigma^2}\right\}I_0\left(\frac{\mu z}{\sigma^2}\right).$$
For $\mu = 0$,
$$f_Y(y) = \frac{1}{2\sigma^2}e^{-y/2\sigma^2}, \qquad f_Z(z) = \frac{z}{\sigma^2}e^{-z^2/2\sigma^2}.$$
Using the fact that a density must integrate to one we can derive a useful integral:
$$\int_0^{\infty}\frac{r}{\sigma^2}\exp\{-r^2/2\sigma^2\}\exp\{-\alpha r^2\}\,I_0(r\beta)\,dr = \frac{1}{1 + 2\sigma^2\alpha}\exp\left\{\frac{\sigma^2\beta^2}{2(1 + 2\sigma^2\alpha)}\right\}.$$
Generalization: let
$$X_{c,i} \sim N(\mu_{c,i}, \sigma^2), \qquad X_{s,i} \sim N(\mu_{s,i}, \sigma^2), \qquad i = 1, 2, \ldots, L,$$
with all the $X_{c,i}$, $X_{s,i}$ independent. Let $\Lambda = \sum_{i=1}^{L}\mu_{c,i}^2 + \mu_{s,i}^2$ and
$$Y = \sum_{i=1}^{L}X_{c,i}^2 + X_{s,i}^2.$$
Then
$$f_Y(y) = \frac{1}{2\sigma^2}\exp\left\{-\frac{y + \Lambda}{2\sigma^2}\right\}\left(\frac{y}{\Lambda}\right)^{(L-1)/2}I_{L-1}\left(\frac{\sqrt{y\Lambda}}{\sigma^2}\right)$$
and
$$F_Y(y) = 1 - Q_L\left(\frac{\sqrt{\Lambda}}{\sigma}, \frac{\sqrt{y}}{\sigma}\right)$$
where
$$Q_L(a,b) = Q(a,b) + \exp\left\{-\frac{a^2 + b^2}{2}\right\}\sum_{k=1}^{L-1}\left(\frac{b}{a}\right)^k I_k(ab)$$
and
$$Q(a,b) = \exp\left\{-\frac{a^2 + b^2}{2}\right\}\sum_{k=0}^{\infty}\left(\frac{a}{b}\right)^k I_k(ab).$$
For $\Lambda = 0$ we obtain
$$f_Y(y) = \frac{1}{2\sigma^2}\exp\left\{-\frac{y}{2\sigma^2}\right\}\left(\frac{y}{2\sigma^2}\right)^{L-1}\frac{1}{(L-1)!}, \qquad
F_Y(y) = 1 - \exp\left\{-\frac{y}{2\sigma^2}\right\}\sum_{k=0}^{L-1}\frac{1}{k!}\left(\frac{y}{2\sigma^2}\right)^k.$$


Let $Z = \sqrt{Y}$; then
$$f_Z(z) = \frac{z^L}{\sigma^2\Lambda^{(L-1)/2}}\exp\left\{-\frac{z^2 + \Lambda}{2\sigma^2}\right\}I_{L-1}\left(\frac{z\sqrt{\Lambda}}{\sigma^2}\right), \qquad
F_Z(z) = 1 - Q_L\left(\frac{\sqrt{\Lambda}}{\sigma}, \frac{z}{\sigma}\right).$$
For $\Lambda = 0$ we obtain
$$f_Z(z) = \frac{z^{2L-1}}{2^{L-1}\sigma^{2L}(L-1)!}\exp\left\{-\frac{z^2}{2\sigma^2}\right\}, \qquad
F_Z(z) = 1 - \exp\left\{-\frac{z^2}{2\sigma^2}\right\}\sum_{l=0}^{L-1}\frac{\left(z^2/2\sigma^2\right)^l}{l!}.$$
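For numerical work these distributions are available directly: $Y/\sigma^2$ is a noncentral chi-square variable with $2L$ degrees of freedom and noncentrality $\Lambda/\sigma^2$, so $Q_L(a,b)$ is its tail probability at $b^2$ (with noncentrality $a^2$). The sketch below, with illustrative parameter values, cross-checks the closed-form density against SciPy.

```python
import numpy as np
from scipy.stats import ncx2
from scipy.special import ive

def f_Y(y, L, Lam, sigma2):
    """Density of Y = sum of squares of 2L Gaussians (variance sigma2 each,
    Lam = sum of squared means), as given in the text."""
    y = np.asarray(y, dtype=float)
    order = L - 1
    z = np.sqrt(y * Lam) / sigma2
    # I_{L-1}(z) written as ive(order, z) * exp(z) for numerical stability
    return (1.0 / (2.0 * sigma2) * (y / Lam) ** (order / 2.0)
            * ive(order, z) * np.exp(z - (y + Lam) / (2.0 * sigma2)))

def Q_L(a, b, L):
    """Generalized Marcum Q: Q_L(a, b) = P[X > b^2], with X noncentral chi-square
    with 2L degrees of freedom and noncentrality a^2 (unit-variance components)."""
    return ncx2.sf(b**2, df=2 * L, nc=a**2)

L, Lam, sigma2 = 3, 4.0, 0.5
y = np.linspace(0.1, 20.0, 5)
# Y / sigma2 is noncentral chi-square(2L, Lam/sigma2), so the two rows should agree.
print(f_Y(y, L, Lam, sigma2))
print(ncx2.pdf(y / sigma2, df=2 * L, nc=Lam / sigma2) / sigma2)
```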

Consider two random variables $Z_1$ and $Z_2$, where $Z_1$ has the distribution given above with $\Lambda_1 = 0$, $Z_2$ has noncentrality $\Lambda$, and the two have different variances $\sigma_1^2$ and $\sigma_2^2$. Assume that they are independent. We wish to determine the probability that $Z_1 < Z_2$:
$$\begin{aligned}
P(Z_1 < Z_2) &= \int_{z_2=0}^{\infty}P(Z_1 < z_2)\,f_{Z_2}(z_2)\,dz_2
= \int_{z_2=0}^{\infty}F_{Z_1}(z_2)\,f_{Z_2}(z_2)\,dz_2\\
&= \int_{z_2=0}^{\infty}\left[1 - \exp\left\{-\frac{z_2^2}{2\sigma_1^2}\right\}\sum_{l=0}^{L-1}\frac{\left(z_2^2/2\sigma_1^2\right)^l}{l!}\right]f_{Z_2}(z_2)\,dz_2\\
&= \int_{z_2=0}^{\infty}\left[1 - \exp\left\{-\frac{z_2^2}{2\sigma_1^2}\right\}\sum_{l=0}^{L-1}\frac{\left(z_2^2/2\sigma_1^2\right)^l}{l!}\right]\frac{z_2^L}{\sigma_2^2\Lambda^{(L-1)/2}}\exp\left\{-\frac{z_2^2 + \Lambda}{2\sigma_2^2}\right\}I_{L-1}\left(\frac{z_2\sqrt{\Lambda}}{\sigma_2^2}\right)dz_2\\
&= 1 - \sum_{l=0}^{L-1}\frac{1}{(2\sigma_1^2)^l\,l!}\exp\left\{-\frac{\Lambda}{2\sigma_2^2}\right\}\int_{z=0}^{\infty}\exp\left\{-\frac{z^2}{2\sigma_1^2} - \frac{z^2}{2\sigma_2^2}\right\}\frac{z^{L+2l}}{\sigma_2^2\Lambda^{(L-1)/2}}\,I_{L-1}\left(\frac{z\sqrt{\Lambda}}{\sigma_2^2}\right)dz\\
&= 1 - \sum_{l=0}^{L-1}\frac{1}{(2\sigma_1^2)^l\,l!\,\sigma_2^2\Lambda^{(L-1)/2}}\exp\left\{-\frac{\Lambda}{2\sigma_2^2}\right\}\int_{z=0}^{\infty}e^{-\alpha^2 z^2}z^{L+2l}\,I_{L-1}(\gamma z)\,dz
\end{aligned}$$
where
$$\alpha^2 = \frac{1}{2\sigma_1^2} + \frac{1}{2\sigma_2^2}, \qquad \gamma = \frac{\sqrt{\Lambda}}{\sigma_2^2}.$$
The integral may be evaluated as (see Lindsey, or Watson)
$$\int_{z=0}^{\infty}e^{-\alpha^2 z^2}z^{L+2l}\,I_{L-1}(\gamma z)\,dz
= \frac{l!\,\gamma^{L-1}}{2^L\alpha^{2(L+l)}}\,e^{\gamma^2/4\alpha^2}\sum_{k=0}^{l}\binom{l+L-1}{l-k}\frac{\left(\gamma^2/4\alpha^2\right)^k}{k!}.$$


Thus
$$P(Z_1 > Z_2) = 1 - P(Z_1 < Z_2)
= \sum_{l=0}^{L-1}\frac{1}{(2\sigma_1^2)^l\,\sigma_2^2\,\Lambda^{(L-1)/2}}\exp\left\{-\frac{\Lambda}{2\sigma_2^2}\right\}\frac{\gamma^{L-1}}{2^L\alpha^{2(L+l)}}\,e^{\gamma^2/4\alpha^2}\sum_{k=0}^{l}\binom{l+L-1}{l-k}\frac{\left(\gamma^2/4\alpha^2\right)^k}{k!}.$$
Using
$$\alpha^2 = \frac{\sigma_1^2 + \sigma_2^2}{2\sigma_1^2\sigma_2^2}, \qquad
\frac{\gamma^2}{4\alpha^2} = \frac{\Lambda}{\sigma_2^4}\cdot\frac{2\sigma_1^2\sigma_2^2}{4(\sigma_1^2 + \sigma_2^2)} = \frac{\Lambda\sigma_1^2}{2\sigma_2^2(\sigma_1^2 + \sigma_2^2)},$$
$$\frac{\gamma^2}{4\alpha^2} - \frac{\Lambda}{2\sigma_2^2} = \frac{\Lambda}{2\sigma_2^2}\left(\frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2} - 1\right) = -\frac{\Lambda}{2(\sigma_1^2 + \sigma_2^2)},$$
the coefficients simplify and we obtain
$$P(Z_1 > Z_2) = e^{-\Lambda/2(\sigma_1^2 + \sigma_2^2)}\left(\frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}\right)^L
\sum_{l=0}^{L-1}\left(\frac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2}\right)^l
\sum_{k=0}^{l}\binom{l+L-1}{l-k}\frac{1}{k!}\left(\frac{\Lambda\sigma_1^2}{2\sigma_2^2(\sigma_1^2 + \sigma_2^2)}\right)^k.$$
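A quick Monte Carlo check can be run against the closed-form expression above; the sketch below is one way to do it (names and test values are arbitrary, and the $L = 1$, equal-variance case should reproduce $\frac{1}{2}e^{-\Lambda/4\sigma^2}$).

```python
import numpy as np

def p_z1_greater_z2(L, Lam, sigma1, sigma2, n=1_000_000, seed=3):
    """Monte Carlo estimate of P(Z1 > Z2): Z1 is the root-sum-square of 2L zero-mean
    Gaussians with variance sigma1^2; Z2 is the root-sum-square of 2L Gaussians with
    variance sigma2^2 whose means have squared norm Lam (so Z2^2/sigma2^2 is
    noncentral chi-square with 2L dof and noncentrality Lam/sigma2^2)."""
    rng = np.random.default_rng(seed)
    z1 = sigma1 * np.sqrt(rng.chisquare(2 * L, size=n))
    z2 = sigma2 * np.sqrt(rng.noncentral_chisquare(2 * L, Lam / sigma2**2, size=n))
    return np.mean(z1 > z2)

# L = 1 with equal variances should be close to 0.5*exp(-Lam/(4*sigma^2))
print(p_z1_greater_z2(L=1, Lam=3.0, sigma1=1.0, sigma2=1.0), 0.5 * np.exp(-3.0 / 4))
```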

6. Frequency Shift Keying (FSK)

Frequency shift keying communicates information by transmitting different frequencies. It can be demodulated noncoherently (by measuring the received energy at the different frequencies). Its performance is worse than that of coherently demodulated signals, but the receiver may be simpler. The data waveform is
$$b(t) = \sum_{l=-\infty}^{\infty}b_l\,p_T(t - lT), \qquad b_l \in \{-1, +1\}.$$


[Block diagram: the data waveform $b(t)$ drives a VCO to produce $s(t)$; white noise $n(t)$ is added to give the received signal $r(t)$.]

Figure 5.3: FSK Modulator

$$s(t) = \sqrt{2P}\sum_{l=-\infty}^{\infty}\cos\bigl(2\pi(f_c + b(t)\Delta f)t + \theta\bigr)\,p_T(t - lT)$$
where $\Delta f$ is half the difference between the two transmitted frequencies and $\theta$ is a phase unknown to the receiver. We let $f_0 = f_c - \Delta f$ and $f_1 = f_c + \Delta f$. When $b_i = +1$ a signal at frequency $f_1$ is transmitted; when $b_i = -1$ a signal at frequency $f_0$ is transmitted. The two frequencies $f_0$ and $f_1$ are separated far enough to make the two signals orthogonal. (Minimum shift keying has the minimum separation that makes the signals orthogonal.)

The receiver decides signal $-1$ was transmitted if $Y_{-1} > Y_1$ and otherwise decides signal $+1$. The random variables at the outputs of the lowpass filters are
$$X_{c,1}(iT) = \sqrt{E}\,\delta(b_{i-1}, 1)\cos\theta + \eta_{c,1}, \qquad X_{s,1}(iT) = \sqrt{E}\,\delta(b_{i-1}, 1)\sin\theta + \eta_{s,1},$$
$$X_{c,-1}(iT) = \sqrt{E}\,\delta(b_{i-1}, -1)\cos\theta + \eta_{c,-1}, \qquad X_{s,-1}(iT) = \sqrt{E}\,\delta(b_{i-1}, -1)\sin\theta + \eta_{s,-1},$$
where $\delta(a,b) = 1$ if $a = b$ and is zero otherwise. In the absence of noise ($\eta_{x,i} = 0$) it is easy to see that when $b_{i-1} = +1$, $Y_1 = \sqrt{E}$ and $Y_{-1} = 0$. The error probability of binary FSK is
$$P_{e,b} = \frac{1}{2}e^{-E_b/2N_0}.$$
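The exponential form of this error probability is easy to confirm by simulation. The following baseband-equivalent Monte Carlo sketch (hypothetical helper name and test points) compares the envelope outputs of the two tone branches directly.

```python
import numpy as np

def fsk_noncoherent_ber(EbN0_dB, nbits=200_000, seed=1):
    """Monte Carlo of binary orthogonal FSK with noncoherent (envelope) detection,
    modeled at baseband: the transmitted tone's matched-filter output is
    sqrt(E)*exp(j*theta) plus complex noise (N0/2 per dimension); the other tone
    sees noise alone."""
    rng = np.random.default_rng(seed)
    EbN0 = 10 ** (EbN0_dB / 10)
    E, N0 = 1.0, 1.0 / EbN0
    theta = 2 * np.pi * rng.random(nbits)                      # unknown carrier phase
    noise = lambda: np.sqrt(N0 / 2) * (rng.standard_normal(nbits)
                                       + 1j * rng.standard_normal(nbits))
    y_sig = np.abs(np.sqrt(E) * np.exp(1j * theta) + noise())  # tone that was sent
    y_other = np.abs(noise())                                  # orthogonal tone
    return np.mean(y_other > y_sig)                            # envelope comparison

for snr in (6, 9, 12):
    print(snr, fsk_noncoherent_ber(snr), 0.5 * np.exp(-10 ** (snr / 10) / 2))
```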


Noncoherent Demodulator

[Block diagram: $r(t)$ feeds two branches. One branch mixes with $\sqrt{2/T}\cos\bigl(2\pi(f_c + \Delta f)t\bigr)$ and $\sqrt{2/T}\sin\bigl(2\pi(f_c + \Delta f)t\bigr)$, lowpass filters, samples at $t = iT$ to give $X_{c,1}$ and $X_{s,1}$, then squares and sums to form $Y_1^2$. The other branch does the same at $f_c - \Delta f$ to form $Y_{-1}^2$.]

Figure 5.4: Noncoherent Demodulator


Figure 5.5: Output Densities For Noncoherent Receivers.

Figure 5.6: Density for $Y_1 - Y_{-1}$ given +1 Transmitted.

7. Differential Phase Shift Keying (DPSK)

[Block diagram: the data waveform $b(t)$ enters a differential encoder producing $a(t)$, which multiplies the carrier $\sqrt{2P}\cos(2\pi f_c t + \theta)$ to form $s(t)$; noise $n(t)$ is added to give $r(t)$.]

Modulator

$$b(t) = \sum_{l=-\infty}^{\infty}b_l\,p_T(t - lT), \quad b_l \in \{-1, +1\}, \qquad
a(t) = \sum_{l=-\infty}^{\infty}a_l\,p_T(t - lT), \quad a_l \in \{-1, +1\}.$$
The differential encoder is such that
$$b_l = +1 \Rightarrow a_l = a_{l-1}, \qquad b_l = -1 \Rightarrow a_l = -a_{l-1}.$$
For example (taking $a_{-3} = 1$ as the reference symbol):

l   :  ...  -2  -1   0   1   2   3  ...
b_l :        1   1   1  -1   1  -1
a_l :   1    1   1   1  -1  -1   1

The transmitted signal is
$$s(t) = \sqrt{2P}\,a(t)\cos(2\pi f_c t + \theta).$$

Figure 5.7: Error Probability of FSK with Noncoherent Detection.

Optimum Demodulator


Figure 5.8: Error Probability for Differential Phase Shift Keying

[Block diagram: $r(t)$ is mixed with $\sqrt{2/T}\cos(2\pi f_c t)$ and $\sqrt{2/T}\sin(2\pi f_c t)$, lowpass filtered, and sampled at $t = iT$ to give $X_c(iT)$ and $X_s(iT)$. Each sample is multiplied by its value delayed by $T$ and the products are summed to form $Z_i$; decide $b_{i-1} = +1$ if $Z_i > 0$ and $b_{i-1} = -1$ if $Z_i < 0$.]

$$X_c(iT) = \sqrt{E}\,a_{i-1}\cos\theta + \eta_{c,i}, \qquad X_s(iT) = \sqrt{E}\,a_{i-1}\sin\theta + \eta_{s,i}.$$
The random variables $\eta_{c,i}$ and $\eta_{s,i}$ are independent identically distributed Gaussian random variables with mean 0 and variance $N_0/2$. Thus
$$Z_i = X_c(iT)\,X_c\bigl((i-1)T\bigr) + X_s(iT)\,X_s\bigl((i-1)T\bigr) = \operatorname{Re}\left[W(iT)\,W^*\bigl((i-1)T\bigr)\right]$$
where $W(iT) = X_c(iT) + jX_s(iT)$. The error probability for DPSK is
$$P_{e,b} = \frac{1}{2}e^{-E/N_0}.$$
Thus differential phase shift keying is 3 dB better than FSK with noncoherent detection. However, errors tend to occur in pairs.

To derive the above expression for DPSK consider the lowpass filter with impulse response $h(t) = p_T(t)$. The output of the lowpass filters can be expressed as
$$X_c(t) = \int_{-\infty}^{\infty}\sqrt{2/T}\cos(\omega_c\tau)\,h(t - \tau)\,r(\tau)\,d\tau.$$


$$\begin{aligned}
X_c(iT) &= \int_{-\infty}^{\infty}\sqrt{2/T}\cos(\omega_c\tau)\,p_T(iT - \tau)\,r(\tau)\,d\tau\\
&= \int_{(i-1)T}^{iT}\sqrt{2/T}\cos(\omega_c\tau)\left[\sum_{l=-\infty}^{\infty}\sqrt{2P}\,a_l\cos(\omega_c\tau + \theta)\,p_T(\tau - lT) + n(\tau)\right]d\tau\\
&= \int_{(i-1)T}^{iT}\sqrt{2P}\sqrt{2/T}\,a_{i-1}\cos(\omega_c\tau)\cos(\omega_c\tau + \theta)\,d\tau + \eta_{c,i}.
\end{aligned}$$
Here $\eta_{c,i}$ is a Gaussian random variable with mean 0 and variance $N_0/2$. Assuming $\omega_c T = 2\pi n$,
$$X_c(iT) = \sqrt{E}\,a_{i-1}\cos\theta + \eta_{c,i}$$
and similarly
$$X_s(iT) = \sqrt{E}\,a_{i-1}\sin\theta + \eta_{s,i}.$$
Thus
$$Z_i = X_c(iT)\,X_c\bigl((i-1)T\bigr) + X_s(iT)\,X_s\bigl((i-1)T\bigr).$$
Note that if we write $W(iT) = X_c(iT) + jX_s(iT)$ then $Z_i = \operatorname{Re}\left[W(iT)\,W^*\bigl((i-1)T\bigr)\right]$. It is clear that this statistic depends on the phase difference between two consecutive symbols. Let
$$U_1 = \frac{X_c(iT) + X_c\bigl((i-1)T\bigr)}{2}, \qquad U_2 = \frac{X_s(iT) + X_s\bigl((i-1)T\bigr)}{2},$$
$$U_3 = \frac{X_c(iT) - X_c\bigl((i-1)T\bigr)}{2}, \qquad U_4 = \frac{X_s(iT) - X_s\bigl((i-1)T\bigr)}{2},$$
so that
$$Z_i = U_1^2 + U_2^2 - \bigl(U_3^2 + U_4^2\bigr).$$
Assume $b_{i-1} = +1$, so that $a_{i-1} = a_{i-2}$. Then
$$P_e = P\bigl(Z < 0 \mid a_{i-1} = a_{i-2}\bigr) = P\bigl(U_1^2 + U_2^2 < U_3^2 + U_4^2\bigr)$$
with
$$U_1 \sim N(\mu_1, \sigma^2), \qquad U_2 \sim N(\mu_2, \sigma^2),$$
$$\mu_1 = \frac{1}{2}\sqrt{E}\,(a_{i-1} + a_{i-2})\cos\theta, \qquad \mu_2 = \frac{1}{2}\sqrt{E}\,(a_{i-1} + a_{i-2})\sin\theta,$$
$$\sigma^2 = \frac{1}{4}\left(\frac{N_0}{2} + \frac{N_0}{2}\right) = \frac{N_0}{4},$$
$$U_3 \sim N(\mu_3, \sigma^2), \quad U_4 \sim N(\mu_4, \sigma^2), \qquad \mu_3 = 0, \quad \mu_4 = 0.$$
Furthermore,


$$E[U_1 U_2] = E\left[\frac{X_c(iT) + X_c\bigl((i-1)T\bigr)}{2}\cdot\frac{X_s(iT) + X_s\bigl((i-1)T\bigr)}{2}\right]
= \frac{1}{4}E\Bigl[X_c(iT)X_s(iT) + X_c(iT)X_s\bigl((i-1)T\bigr) + X_c\bigl((i-1)T\bigr)X_s(iT) + X_c\bigl((i-1)T\bigr)X_s\bigl((i-1)T\bigr)\Bigr].$$
Since the in-phase and quadrature components are independent,
$$E\bigl[X_c(iT)\,X_s(jT)\bigr] = E\bigl[X_c(iT)\bigr]\,E\bigl[X_s(jT)\bigr],$$
so
$$E[U_1 U_2] = \frac{E\bigl[X_c(iT)\bigr] + E\bigl[X_c((i-1)T)\bigr]}{2}\cdot\frac{E\bigl[X_s(iT)\bigr] + E\bigl[X_s((i-1)T)\bigr]}{2} = E[U_1]\,E[U_2],$$
and $U_1$, $U_2$ are uncorrelated, hence (being jointly Gaussian) independent. Similarly $(U_1, U_3)$, $(U_2, U_3)$, $(U_3, U_4)$, $(U_1, U_4)$ and $(U_2, U_4)$ are independent. Thus $U_1^2 + U_2^2$ is independent of $U_3^2 + U_4^2$. From the results derived for noncoherent FSK it is easy to show that
$$P\bigl(U_1^2 + U_2^2 < U_3^2 + U_4^2\bigr) = \frac{1}{2}e^{-E/N_0}.$$
Thus differential phase shift keying is 3 dB better than FSK with noncoherent detection. However, errors tend to occur in pairs.
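As a sanity check on the DPSK result (and the 3 dB comparison with noncoherent FSK), here is a small baseband Monte Carlo sketch of the delay-and-multiply detector; the function name, seed, and test points are arbitrary.

```python
import numpy as np

def dpsk_ber(EbN0_dB, nbits=200_000, seed=2):
    """Monte Carlo of binary DPSK with the delay-and-multiply detector
    Z_i = Re[W(iT) W*((i-1)T)], modeled at baseband with
    W = sqrt(E)*a*exp(j*theta) + complex noise (N0/2 per dimension)."""
    rng = np.random.default_rng(seed)
    EbN0 = 10 ** (EbN0_dB / 10)
    E, N0 = 1.0, 1.0 / EbN0
    b = rng.choice([-1, 1], size=nbits)                  # information bits
    a = np.concatenate(([1], np.cumprod(b)))             # differential encoding a_l = a_{l-1} b_l
    theta = 2 * np.pi * rng.random()                     # phase, constant over the burst
    noise = (np.sqrt(N0 / 2) * (rng.standard_normal(a.size)
                                + 1j * rng.standard_normal(a.size)))
    W = np.sqrt(E) * a * np.exp(1j * theta) + noise      # matched-filter samples W(iT)
    Z = np.real(W[1:] * np.conj(W[:-1]))                 # delay-and-multiply statistic
    return np.mean((Z > 0) != (b > 0))                   # decide b = +1 when Z > 0

for snr in (6, 8, 10):
    print(snr, dpsk_ber(snr), 0.5 * np.exp(-10 ** (snr / 10)))
```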

8. Problems

1. A noncoherent communication system employs the signals
$$s_i(t) = A\sin\left(\pi\frac{t}{T}\right)p_T(t)\cos(2\pi f_i t + \theta_i), \qquad i = 0, 1,$$
where $2\pi f_i T$ is an integer multiple of $2\pi$ and $\theta_i$ is unknown. Determine the optimal receiver for this system. Determine the error probability. (Assume additive white Gaussian noise.)

2. Consider the following DPSK (differential phase shift keying) communication system. The information to be transmitted is given by the data waveform
$$b(t) = \sum_{l=-\infty}^{\infty}b_l\,p_T(t - lT)$$
where $b_l$ is a sequence of i.i.d. random variables with $P(b_l = +1) = P(b_l = -1) = 1/2$. The differential encoder produces another data stream
$$a(t) = \sum_{l=-\infty}^{\infty}a_l\,p_T(t - lT)$$
where $a_l = a_{l-1}$ if $b_l = 1$ and otherwise $a_l = -a_{l-1}$. The transmitted waveform is
$$s(t) = \sum_{l=-\infty}^{\infty}\sqrt{2P}\,a_l\,p_T(t - lT)\cos(2\pi f_c t + \theta)$$
where $\theta$ is unknown to the receiver and $P$ is the power. Consider the receiver shown below. Determine the error probability for deciding on $b_l$. Express your answer in terms of the signal energy and the (additive white Gaussian) noise spectral density $N_0/2$. Hint: Use the following transformation of variables
$$U_1 = \frac{X_c(iT) + X_c\bigl((i-1)T\bigr)}{2}, \qquad U_2 = \frac{X_s(iT) + X_s\bigl((i-1)T\bigr)}{2},$$
$$U_3 = \frac{X_c(iT) - X_c\bigl((i-1)T\bigr)}{2}, \qquad U_4 = \frac{X_s(iT) - X_s\bigl((i-1)T\bigr)}{2},$$
then write the error probability in terms of these variables.


[Block diagram (receiver for Problem 2): $r(t)$ is mixed with $\sqrt{2/T}\cos(2\pi f_c t)$ and $\sqrt{2/T}\sin(2\pi f_c t)$, lowpass filtered, and sampled at $t = iT$ to give $X_c(iT)$ and $X_s(iT)$; each sample is multiplied by its value delayed by $T$ and the products are summed to form $Z_i$; decide $b_{i-1} = +1$ if $Z_i > 0$ and $b_{i-1} = -1$ if $Z_i < 0$.]

3. Determine the loss in signal-to-noise ratio for using hard decisions (versus soft decisions) on an additive white Gaussian noise channel using orthogonal signals and noncoherent detection (based on the cutoff rate) at low signal-to-noise ratios.

4. (a) Consider a communication system using DPSK. Show that over any time duration of $2T$ seconds the two possible transmitted signals are orthogonal. That is, given an arbitrary reference signal (phase) in the time period $\bigl((l-2)T, lT\bigr)$, show that the two waveforms corresponding to $b_l = +1$ and $b_l = -1$ are orthogonal.
(b) Using the above or otherwise, derive the optimal receiver for detecting data bit $b_l$.
(c) Show that the optimum receiver is identical to that derived in class (if it is not already in that form).

5. (a) Consider a communication system with modulation and demodulation producing a discrete-time (memoryless) channel with transition probabilities $p(b \mid a)$, $a \in A$, $b \in B$. If the decoder knows these transition probabilities then we showed in class (and in the homework) that codes of rate $R$ exist to transmit information with error probability upper bounded by
$$P_e \le 2^{-N(R_0 - R)}$$
where $R_0$ is called the cutoff rate. In this problem consider a communication system with a receiver that does not know the transition probabilities exactly. The receiver uses transition probabilities $p'(b \mid a)$ as if they were the actual transition probabilities. (The channel statistics and the receiver statistics are memoryless.) Show that there exist codes of rate $R$ such that
$$P_e \le 2^{-N(R_0' - R)}$$
where
$$R_0' = -\log J_0'$$


with
$$J_0' = \min_{\lambda > 0}\ \min_{p(x)}\ E\bigl[J_\lambda'(X_1, X_2)\bigr]$$
and
$$J_\lambda'(a_1, a_2) = \sum_{b \in B}p(b \mid a_1)\left[\frac{p'(b \mid a_2)}{p'(b \mid a_1)}\right]^{\lambda}.$$
(b) Show that when $p'(b \mid a) = p(b \mid a)$ the optimizing value for $\lambda$ is 1/2, so the result reduces to the normal cutoff rate. (Hint: Use the Cauchy inequality
$$\left(\sum_{a \in A}p(a)\,w(a)\,u(a)\right)^2 \le \left(\sum_{a \in A}p(a)\,w^2(a)\right)\left(\sum_{a \in A}p(a)\,u^2(a)\right),$$
with equality if $w(a) = c\,u(a)$ for some positive constant $c$, and use $w(a) = \bigl[p(b \mid a)\bigr]^{(1-\lambda)/2}$ and $u(a) = \bigl[p(b \mid a)\bigr]^{\lambda/2}$.)
(c) Show that for noncoherent detection of binary (orthogonal) FSK the use of exponential density functions in the receiver results in square-law combining (as opposed to maximum-likelihood combining) and that the cutoff rate has the form
$$R_0' = 1 - \log_2\bigl(1 + D'\bigr).$$
Determine an explicit form for $D'$.

6. Let $H_0$ and $H_1$ be two events. Let $p_i(x_1,\ldots,x_n)$ be the conditional density of $X_1,\ldots,X_n$ given that $H_i$ occurred.
(a) Assume that given $H_i$, $\{X_j\}_{j=1}^{n}$ is a sequence of i.i.d. Gaussian random variables with mean $(-1)^i\sqrt{E}$ and variance $N_0/2$. Use the Chernoff bound to show that
$$P\bigl(p_1(X) > p_0(X) \mid H_0\bigr) \le e^{-nE/N_0}.$$
(b) Now let $X_1,\ldots,X_n$ be independent discrete random variables taking values $+1$ and $-1$ with
$$P(X_i = +1 \mid H_0) = p, \qquad P(X_i = -1 \mid H_0) = 1 - p,$$
$$P(X_i = -1 \mid H_1) = p, \qquad P(X_i = +1 \mid H_1) = 1 - p.$$
Thus if the number of components of $x$ equal to $+1$ is $d$ then $p_0(x) = p^d(1-p)^{n-d}$ and $p_1(x) = p^{n-d}(1-p)^d$. Use the Chernoff bound to show that
$$P\bigl(p_1(X) > p_0(X) \mid H_0\bigr) \le e^{n\ln\left(2\sqrt{p(1-p)}\right)}.$$
(c) Again let $X_1,\ldots,X_n$ be independent discrete random variables with $X_i$ taking nonnegative integer values only. Let
$$p_0(x_1,\ldots,x_n) = \prod_{i=1}^{n}\frac{\lambda_0^{x_i}e^{-\lambda_0}}{x_i!}, \qquad x_i \ge 0,\ 1 \le i \le n,$$
and
$$p_1(x_1,\ldots,x_n) = \prod_{i=1}^{n}\frac{\lambda_1^{x_i}e^{-\lambda_1}}{x_i!}.$$
If $\lambda_1 > \lambda_0$ find the best Chernoff bound on
$$P_{e,0} = P\bigl(p_1(X) > p_0(X) \mid H_0\bigr)$$
and
$$P_{e,1} = P\bigl(p_0(X) > p_1(X) \mid H_1\bigr).$$


Let $s_i^*$ be the optimal value of $s$ for minimizing the bound to $P_{e,i}$. Show that $s_1^* = 1 - s_0^*$.
(d) Again let $X_1,\ldots,X_{2n}$ be independent discrete random variables with $X_i$ taking nonnegative integer values only. Let
$$p_0(x_1,\ldots,x_{2n}) = \prod_{i=1}^{n}\frac{\lambda_1^{x_i}e^{-\lambda_1}}{x_i!}\prod_{i=n+1}^{2n}\frac{\lambda_0^{x_i}e^{-\lambda_0}}{x_i!}, \qquad x_i \ge 0,\ 1 \le i \le 2n,$$
and
$$p_1(x_1,\ldots,x_{2n}) = \prod_{i=1}^{n}\frac{\lambda_0^{x_i}e^{-\lambda_0}}{x_i!}\prod_{i=n+1}^{2n}\frac{\lambda_1^{x_i}e^{-\lambda_1}}{x_i!}.$$
If $\lambda_1 > \lambda_0$ find the best Chernoff bound on
$$P_{e,0} = P\bigl(p_1(X) > p_0(X) \mid H_0\bigr)$$
and
$$P_{e,1} = P\bigl(p_0(X) > p_1(X) \mid H_1\bigr).$$

7. (a) Show that $\sum_{i=1}^{\infty}\phi_i(t)\,\phi_i^*(s) = \delta(t - s)$ for any complete orthonormal set of functions $\phi_i(t)$. (Hint: Show that the above function satisfies the definition of a delta function, that is,
$$\int_{-\infty}^{\infty}f(t)\,\delta(t - s)\,dt = f(s)$$
for all functions $f(t)$.)
(b) Let $K(s,t)$ be a positive covariance function of the zero-mean Gaussian random process $X(t)$. Let $K^{-1/2}$ be the negative square root of this function as defined in class. Show that the process
$$Y(s) = \int K^{-1/2}(s,t)\,X(t)\,dt$$
is a white Gaussian noise process. (You need to show that $E[Y(s)\,Y^*(t)] = \delta(t - s)$.)

8. Let $K(s,t)$ be a real covariance function of a random process with eigenvalues $\lambda_i$ and (real) eigenfunctions $\phi_i$. Define (as in class) $K_2(s,t)$ as
$$K_2(s,t) = \int K(s,u)\,K(u,t)\,du$$
and $K_n(s,t)$ as
$$K_n(s,t) = \int K_{n-1}(s,u)\,K(u,t)\,du.$$
Also define $e^K(s,t)$ as
$$e^K(s,t) = \sum_{n=0}^{\infty}\frac{K_n(s,t)}{n!}.$$
Show that
$$e^K(s,t) = \sum_{i=1}^{\infty}e^{\lambda_i}\phi_i(s)\,\phi_i(t).$$

