Signal Constellations

EE810: Communication Theory I Page 83

Signal Constellations, Optimum Receivers and Error Probabilities

[Figure: Block diagram of the system: a source emits message mi, the modulator (transmitter) sends si(t), white Gaussian noise w(t) of strength N0/2 watts/Hz is added, and the demodulator (receiver) correlates r(t) with the basis functions over one symbol interval (sampling at t = kTs) to form the observation vector (r1, r2, . . . , rN) in the N-dimensional observation space; the decision device then chooses s1(t) or m1, s2(t) or m2, . . . , sM(t) or mM as the estimate of mi.]

• There are M messages. For example, each message may carry λ = log2 M bits. If the bit duration is Tb, then the message (or symbol) duration is Ts = λTb. The messages in two different time slots are statistically independent.

• The M signals si(t), i = 1, 2, . . . , M, can be arbitrary but of finite energies Ei. The a priori message probabilities are Pi. Unless stated otherwise, it is assumed that the messages are equally probable.

• The noise w(t) is modeled as a stationary, zero-mean, and white Gaussian process with power spectral density of N0/2.

• The receiver observes the received signal r(t) = si(t) + w(t) over one symbol duration and makes a decision as to which message was transmitted. The criterion for the optimum receiver is to minimize the probability of making an error.

• Optimum receivers and their error probabilities shall be considered for various signal constellations.

University of Saskatchewan


Binary Signalling

Received signal:

              On-Off               Orthogonal            Antipodal
H1 (0):  r(t) = w(t)          r(t) = s1(t) + w(t)   r(t) = −s2(t) + w(t)
H2 (1):  r(t) = s2(t) + w(t)  r(t) = s2(t) + w(t)   r(t) = s2(t) + w(t)

(For orthogonal signalling, ∫_0^Ts s1(t)s2(t) dt = 0.)

Decision space and receiver implementation:

[Figure: Decision spaces and correlator receiver implementations. (a) On-Off: r1 = ∫_0^Tb r(t)φ1(t) dt is compared with the threshold √E/2 (choose 1 if r1 ≥ √E/2, else choose 0). (b) Orthogonal: two correlators produce r1 and r2, which are compared (choose 1 if r2 ≥ r1, else choose 0). (c) Antipodal: r1 is compared with 0 (choose 1 if r1 ≥ 0, else choose 0).]


Error performance:

                       On-Off           Orthogonal     Antipodal
From decision space:   Q(√(E/(2N0)))    Q(√(E/N0))     Q(√(2E/N0))
With the same Eb:      Q(√(Eb/N0))      Q(√(Eb/N0))    Q(√(2Eb/N0))

The above shows that, with the same average energy per bit Eb = (E1 + E2)/2, antipodal signalling is 3 dB more efficient than orthogonal signalling, which has the same performance as on-off signalling. This is graphically shown below.
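The entries in the table can be checked numerically. A minimal Python sketch (the function names are ours; Q is written via the complementary error function):

```python
import math

def Q(x):
    # Gaussian tail probability Q(x) = P(N(0,1) > x)
    return 0.5 * math.erfc(x / math.sqrt(2))

# Bit error probabilities as functions of Eb/N0 (linear scale), same average Eb:
def pe_onoff(Eb_N0):       # on-off keying
    return Q(math.sqrt(Eb_N0))

def pe_orthogonal(Eb_N0):  # orthogonal signalling
    return Q(math.sqrt(Eb_N0))

def pe_antipodal(Eb_N0):   # antipodal signalling
    return Q(math.sqrt(2 * Eb_N0))

# The 3 dB claim: antipodal at Eb/N0 matches orthogonal at twice that Eb/N0.
x = 10 ** (8 / 10)   # 8 dB, an arbitrary test point
assert abs(pe_antipodal(x) - pe_orthogonal(2 * x)) < 1e-15
```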

[Figure: Pr[error] versus Eb/N0 (dB); the on-off and orthogonal curves coincide, and the antipodal curve lies 3 dB to their left.]


Example 1: In passband transmission, the digital information is encoded as a variation of the amplitude, frequency and phase (or their combinations) of a sinusoidal carrier signal. The carrier frequency is much higher than the highest frequency of the modulating signals (messages). Binary amplitude-shift keying (BASK), binary phase-shift keying (BPSK) and binary frequency-shift keying (BFSK) are examples of on-off, antipodal and orthogonal signalling, respectively.

[Figure: The binary data sequence 1 0 0 0 0 1 1 1 over bit intervals of duration Tb, with (a) the binary data, (b) the modulating signal (levels ±V), (c) the ASK signal, (d) the PSK signal, and (e) the FSK signal.]

          BASK               BFSK                   BPSK
s1(t):    0                  V cos(2πf1t)           −V cos(2πfct)
s2(t):    V cos(2πfct)       V cos(2πf2t)           V cos(2πfct)
          0 ≤ t ≤ Tb,        f2 + f1 = n/(2Tb),     fc = n/Tb,
          fc = n/Tb,         f2 − f1 = m/(2Tb),     n an integer
          n an integer       n, m integers
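The three signal pairs can be used to synthesize example waveforms. A minimal sketch with illustrative parameter choices (Tb, V, n = 4, m = 4 are ours) that satisfy the integer constraints in the table:

```python
import math

# Illustrative parameters (not from the notes): Tb = 1 ms, and fc = 4/Tb so
# that an integer number of carrier cycles fits in each bit interval.
Tb = 1e-3
V = 1.0
fc = 4 / Tb              # BASK/BPSK carrier, fc = n/Tb with n = 4
f1, f2 = 3 / Tb, 5 / Tb  # BFSK tones, f2 - f1 = m/(2Tb) with m = 4

def waveform(bits, s0, s1, samples_per_bit=64):
    """Concatenate s0(t)/s1(t) segments according to the bit sequence."""
    out = []
    for k, b in enumerate(bits):
        for n in range(samples_per_bit):
            t = (k + n / samples_per_bit) * Tb
            out.append(s1(t) if b else s0(t))
    return out

bask = waveform([1, 0, 1], lambda t: 0.0, lambda t: V * math.cos(2 * math.pi * fc * t))
bpsk = waveform([1, 0, 1], lambda t: -V * math.cos(2 * math.pi * fc * t),
                           lambda t:  V * math.cos(2 * math.pi * fc * t))
bfsk = waveform([1, 0, 1], lambda t: V * math.cos(2 * math.pi * f1 * t),
                           lambda t: V * math.cos(2 * math.pi * f2 * t))
```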


Example 2: Various binary baseband signaling schemes are shown below. The optimum receiver and its error performance follow easily once the two signals s1(t) and s2(t) used in each scheme are identified.

[Figure: Binary data 1 0 0 0 0 1 1 1, the clock, and the corresponding baseband waveforms: (a) NRZ code, (b) NRZ-L, (c) RZ code, (d) RZ-L, (e) Bi-Phase, (f) Bi-Phase-L, (g) Miller code, (h) Miller-L.]


M-ary Pulse Amplitude Modulation

si(t) = (i − 1)∆φ1(t), i = 1, 2, . . . , M (1)

[Figure: M-ary PAM constellation along φ1(t) with points s1(t) at 0, s2(t) at ∆, s3(t) at 2∆, . . . , sk(t) at (k − 1)∆, . . . , sM(t) at (M − 1)∆, together with the conditional pdfs f(r1|sk(t)) and the correlator receiver computing r1 = ∫ r(t)φ1(t) dt sampled at t = Ts.]

Decision rule:

choose sk(t) if (k − 3/2)∆ < r1 < (k − 1/2)∆, k = 2, 3, . . . , M − 1;
choose s1(t) if r1 < ∆/2, and choose sM(t) if r1 > (M − 3/2)∆.

⇒ The optimum receiver computes r1 = ∫_{(k−1)Ts}^{kTs} r(t)φ1(t) dt and determines which signal point r1 is closest to:

[Figure: Correlator receiver: r(t) is multiplied by φ1(t), integrated over ((k − 1)Ts, kTs), sampled at t = kTs to give r1, and passed to the decision device.]
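The decision rule amounts to rounding r1 to the nearest constellation point. A small sketch (the function name is ours; boundary ties have probability zero and are resolved arbitrarily):

```python
def pam_decide(r1, M, delta):
    """Map the correlator output r1 to the index i of the nearest PAM point.

    Signal points sit at (i - 1)*delta for i = 1, ..., M, so the nearest
    point is found by rounding r1/delta and clipping to the valid range.
    """
    i = round(r1 / delta) + 1
    return min(max(i, 1), M)

# r1 = 1.6*delta falls in ((3/2)delta, (5/2)delta) -> choose s3(t)
assert pam_decide(1.6, M=4, delta=1.0) == 3
# anything beyond (M - 3/2)*delta maps to the end point sM(t)
assert pam_decide(7.2, M=4, delta=1.0) == 4
```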


Probability of error:

[Figure: Conditional pdfs f(r1|sk(t)) centered on each signal point; an error occurs when r1 falls outside the ±∆/2 band around the transmitted point.]

• For the M − 2 inner signals: Pr[error|si(t)] = 2Q(∆/√(2N0)).

• For the two end signals: Pr[error|si(t)] = Q(∆/√(2N0)).

• Since Pr[si(t)] = 1/M, then

Pr[error] = Σ_{i=1}^{M} Pr[si(t)] Pr[error|si(t)] = (2(M − 1)/M) Q(∆/√(2N0))

• The above is the symbol (or message) error probability. If each message carries λ = log2 M bits, the bit error probability depends on the mapping and is often tedious to compute exactly. If the mapping is a Gray mapping (i.e., the mappings of the nearest signals differ in only one bit), a good approximation is Pr[bit error] ≈ Pr[error]/λ.

• A modified signal set to minimize the average transmitted energy (M is even):

[Figure: Modified PAM constellation with points at ±∆/2, ±3∆/2, . . . , ±(M − 1)∆/2 along φ1(t).]

• If φ1(t) is a sinusoidal carrier, i.e., φ1(t) = √(2/Ts) cos(2πfct), where 0 ≤ t ≤ Ts, fc = k/Ts and k is an integer, the scheme is also known as M-ary amplitude shift keying (M-ASK).


To express the probability in terms of Eb/N0, compute the average transmitted energy per message (or symbol) as follows:

Es = (1/M) Σ_{i=1}^{M} Ei = (∆²/(4M)) Σ_{i=1}^{M} (2i − 1 − M)²
   = (∆²/(4M)) [M(M² − 1)/3]
   = (M² − 1)∆²/12    (2)

Thus the average transmitted energy per bit is

Eb = Es/log2 M = (M² − 1)∆²/(12 log2 M)  ⇒  ∆ = √((12 log2 M)Eb/(M² − 1))    (3)

⇒ Pr[error] = (2(M − 1)/M) Q(√((6 log2 M)Eb/((M² − 1)N0)))    (4)
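Equation (4) is easy to evaluate numerically. A minimal sketch (function names are ours) that also checks that the M = 2 case reduces to the antipodal result:

```python
import math

def Q(x):
    # Gaussian tail probability Q(x) = P(N(0,1) > x)
    return 0.5 * math.erfc(x / math.sqrt(2))

def pam_symbol_error(M, Eb_N0):
    """Symbol error probability of M-ary PAM/ASK, Eq. (4)."""
    lam = math.log2(M)
    arg = math.sqrt(6 * lam * Eb_N0 / (M ** 2 - 1))
    return 2 * (M - 1) / M * Q(arg)

# For M = 2 this reduces to the antipodal result Q(sqrt(2*Eb/N0)):
EbN0 = 10 ** (8 / 10)   # 8 dB
assert abs(pam_symbol_error(2, EbN0) - Q(math.sqrt(2 * EbN0))) < 1e-12
```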

[Figure: Pr[symbol error] versus Eb/N0 (dB) for M-ASK with M = 2, 4, 8, 16.]


M-ary Phase Shift Keying (M-PSK)

The messages are distinguished by M phases of a sinusoidal carrier:

si(t) = V cos[2πfct − (i − 1)2π/M], 0 < t < Ts; i = 1, 2, . . . , M
      = √E cos[(i − 1)2π/M] φ1(t) + √E sin[(i − 1)2π/M] φ2(t)    (5)

where si1 = √E cos[(i − 1)2π/M], si2 = √E sin[(i − 1)2π/M], E = V²Ts/2, and the two orthonormal basis functions are:

φ1(t) = V cos(2πfct)/√E;  φ2(t) = V sin(2πfct)/√E    (6)

M = 8:

[Figure: circle of radius √E with signal points s1(t), s2(t), . . . , s8(t) spaced 2π/M apart, Gray-labeled 000, 001, 011, 010, 110, 111, 101, 100.]

Figure 1: Signal space for 8-PSK with Gray mapping.


Decision regions and receiver implementation:

[Figure: M-PSK decision regions: angular sectors Z1, Z2, . . . , ZM of width 2π/M around each si(t). The receiver correlates r(t) with φ1(t) and φ2(t) over (0, Ts) to get (r1, r2), computes (r1 − si1)² + (r2 − si2)² for i = 1, 2, . . . , M, and chooses the smallest.]

Probability of error:

Pr[error] = (1/M) Σ_{i=1}^{M} Pr[error|si(t)] = Pr[error|s1(t)]
          = 1 − Pr[(r1, r2) ∈ Z1|s1(t)]
          = 1 − ∫∫_{(r1,r2)∈Z1} p(r1, r2|s1(t)) dr1 dr2    (7)

where p(r1, r2|s1(t)) = (1/(πN0)) exp[−((r1 − √E)² + r2²)/N0].


Changing the variables V = √(r1² + r2²) and Θ = tan⁻¹(r2/r1) (polar coordinate system), the joint pdf of V and Θ is

p(V, Θ) = (V/(πN0)) exp(−(V² + E − 2√E V cos Θ)/N0)    (8)

Integration of p(V, Θ) over the range of V yields the pdf of Θ:

p(Θ) = ∫_0^∞ p(V, Θ) dV = (1/(2π)) e^{−(E/N0) sin²Θ} ∫_0^∞ V e^{−(V − √(2E/N0) cos Θ)²/2} dV    (9)

With the above pdf of Θ, the error probability can be computed as:

Pr[error] = 1 − Pr[−π/M ≤ Θ ≤ π/M |s1(t)] = 1 − ∫_{−π/M}^{π/M} p(Θ) dΘ    (10)

In general, the integral of p(Θ) as above does not reduce to a simple form and must be evaluated numerically, except for M = 2 and M = 4. An approximation to the error probability for large values of M and for large symbol signal-to-noise ratio (SNR) γs = E/N0 can be obtained as follows. First the error probability is lower bounded by

Pr[error] = Pr[error|s1(t)]
          > Pr[(r1, r2) is closer to s2(t) than s1(t)|s1(t)]
          = Q(sin(π/M) √(2E/N0))    (11)

The upper bound is obtained by the following union bound:

Pr[error] = Pr[error|s1(t)]
          < Pr[(r1, r2) is closer to s2(t) OR sM(t) than s1(t)|s1(t)]
          < 2Q(sin(π/M) √(2E/N0))    (12)

Since the lower and upper bounds only differ by a factor of two, they are very tight for high SNR. Thus a good approximation to the error probability of M-PSK is:

Pr[error] ≈ 2Q(sin(π/M) √(2E/N0)) = 2Q(√(λ(2Eb/N0)) sin(π/M))    (13)
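A sketch of the approximation (13) in code, checked against the exact QPSK expression (15) for M = 4 (the function name is ours):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def psk_symbol_error_approx(M, Eb_N0):
    """Approximation (13): Pr[error] ~ 2Q(sqrt(lam*2*Eb/N0) * sin(pi/M))."""
    lam = math.log2(M)
    return 2 * Q(math.sqrt(2 * lam * Eb_N0) * math.sin(math.pi / M))

# For M = 4 the approximation overshoots the exact QPSK result (15) by
# exactly the (small) Q^2 term:
EbN0 = 10 ** (6 / 10)   # 6 dB
q = Q(math.sqrt(2 * EbN0))
exact = 2 * q - q ** 2
approx = psk_symbol_error_approx(4, EbN0)
assert 0 < approx - exact < q ** 2 * 1.0001
```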


Quadrature Phase Shift Keying (QPSK):

[Figure: QPSK constellation: s1(t), s2(t), s3(t), s4(t) on a circle of radius √E with Gray labels m1 = 00, m2 = 01, m3 = 11, m4 = 10; the decision regions are the four quadrants of the (φ1, φ2) plane.]

Pr[correct] = Pr[r1 ≥ 0 and r2 ≥ 0|s1(t)] = [1 − Q(√(E/N0))]²    (14)

⇒ Pr[error] = 1 − Pr[correct] = 1 − [1 − Q(√(E/N0))]² = 2Q(√(E/N0)) − Q²(√(E/N0))    (15)


The bit error probability of QPSK with Gray mapping can be obtained by considering the different conditional message error probabilities:

Pr[m2|m1] = Q(√(E/N0)) [1 − Q(√(E/N0))]    (16)

Pr[m3|m1] = Q²(√(E/N0))    (17)

Pr[m4|m1] = Q(√(E/N0)) [1 − Q(√(E/N0))]    (18)

The bit error probability is therefore

Pr[bit error] = 0.5 Pr[m2|m1] + 0.5 Pr[m4|m1] + 1.0 Pr[m3|m1] = Q(√(E/N0)) = Q(√(2Eb/N0))    (19)

where the viewpoint is taken that one of the two bits is chosen at random, i.e., with a probability of 0.5. The above shows that QPSK with Gray mapping has exactly the same bit-error-rate (BER) performance as BPSK, while its bit rate can be doubled for the same bandwidth. In general, the bit error probability of M-PSK is difficult to obtain for an arbitrary mapping. For Gray mapping, again a good approximation is Pr[bit error] ≈ Pr[symbol error]/λ. The exact calculation of the bit error probability can be found in the following paper:

J. Lassing, E. G. Ström, E. Agrell and T. Ottosson, “Computation of the Exact Bit-Error Rate of Coherent M-ary PSK With Gray Code Bit Mapping,” IEEE Trans. on Communications, vol. 51, Nov. 2003, pp. 1758–1760.
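The algebra in (16)–(19) can also be verified numerically; a small sketch (the function name is ours):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def qpsk_bit_error(E_N0):
    """Eq. (19) assembled from the conditional message errors (16)-(18)."""
    q = Q(math.sqrt(E_N0))
    p_m2 = q * (1 - q)   # adjacent point, one of the two bits flips
    p_m4 = q * (1 - q)   # the other adjacent point
    p_m3 = q * q         # diagonal point, both bits flip
    return 0.5 * p_m2 + 0.5 * p_m4 + 1.0 * p_m3

# The weighted sum collapses to Q(sqrt(E/N0)) = Q(sqrt(2*Eb/N0)):
E_N0 = 10 ** (7 / 10)   # 7 dB
assert abs(qpsk_bit_error(E_N0) - Q(math.sqrt(E_N0))) < 1e-15
```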


Comparison of M-PSK and BPSK

Pr[error]M-PSK ≈ 2Q(√(λ(2Eb/N0)) sin(π/M)), (M > 4)

Pr[error]QPSK = 2Q(√(2Eb/N0)) − Q²(√(2Eb/N0))

Pr[error]BPSK = Q(√(2Eb/N0))

[Figure: Pr[symbol error] versus Eb/N0 (dB) for M-PSK, M = 2, 4, 8, 16, 32, showing the upper and lower bounds.]

λ    M    M-ary BW/binary BW    λ sin²(π/M)    M-ary energy/binary energy
3    8    1/3                   0.44           3.6 dB
4    16   1/4                   0.15           8.2 dB
5    32   1/5                   0.05           13.0 dB
6    64   1/6                   0.014          17.0 dB


M-ary Quadrature Amplitude Modulation

• In QAM modulation, the messages are encoded into both the amplitudes and phases of the carrier:

si(t) = Vc,i √(2/Ts) cos(2πfct) + Vs,i √(2/Ts) sin(2πfct)    (20)
      = √Ei √(2/Ts) cos(2πfct − θi), 0 ≤ t ≤ Ts; i = 1, 2, . . . , M    (21)

where Ei = V²c,i + V²s,i and θi = tan⁻¹(Vs,i/Vc,i).

• Examples of QAM constellations are shown on the next page.

• Rectangular QAM constellations, shown below, are the most popular. For rectangular QAM, the quadrature amplitudes take the values Vc,i, Vs,i ∈ {(2i − 1 − √M)∆/2, i = 1, 2, . . . , √M}.


Examples of QAM Constellations:


Another example: The 16-QAM signal constellation shown below is an international standard for telephone-line modems (called V.29). The decision regions of the minimum distance receiver are also drawn.

[Figure: V.29 16-QAM constellation (axes running from −5 to 5) with 4-bit labels on the signal points and the minimum-distance decision regions.]

Receiver implementation (same as for M-PSK):

[Figure: Correlator receiver computing (r1, r2) and choosing the closest signal point, identical to the M-PSK receiver.]


Error performance of rectangular M-QAM: For M = 2^λ, where λ is even, QAM consists of two ASK signals on quadrature carriers, each having √M = 2^(λ/2) signal points. Thus the probability of symbol error can be computed as

Pr[error] = 1 − Pr[correct] = 1 − (1 − Pr_√M[error])²    (22)

where Pr_√M[error] is the probability of error of √M-ary ASK:

Pr_√M[error] = 2(1 − 1/√M) Q(√(3Es/((M − 1)N0)))    (23)

Note that in the above equation, Es is the average energy per symbol of the QAM constellation.
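Equations (22)–(23) in code, with a sanity check that 4-QAM reproduces the QPSK result (15) (function names are ours):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def qam_symbol_error(M, Es_N0):
    """Rectangular M-QAM symbol error, Eqs. (22)-(23); M = 4, 16, 64, ..."""
    p_ask = 2 * (1 - 1 / math.sqrt(M)) * Q(math.sqrt(3 * Es_N0 / (M - 1)))
    return 1 - (1 - p_ask) ** 2

# Sanity check: 4-QAM is QPSK, so (22) must agree with (15) with E = Es:
Es_N0 = 10.0   # 10 dB
qpsk = 2 * Q(math.sqrt(Es_N0)) - Q(math.sqrt(Es_N0)) ** 2
assert abs(qam_symbol_error(4, Es_N0) - qpsk) < 1e-12
```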

[Figure: Pr[symbol error] versus Eb/N0 (dB) for M-QAM, M = 4, 16, 64.]


Comparison of M-QAM and M-PSK:

Pr[error]PSK ≈ Q(√(2Es/N0) sin(π/M))    (24)

Pr[error]QAM ≈ Pr_√M−ASK[error] = 2(1 − 1/√M) Q(√(3Es/((M − 1)N0)))    (25)

Since the error probability is dominated by the argument of the Q-function, one may simply compare the arguments of Q for the two modulation schemes. The ratio of the arguments is

⇒ RM = [3/(M − 1)] / [2 sin²(π/M)]    (26)

M     10 log10 RM
8     1.65 dB
16    4.20 dB
32    7.02 dB
64    9.95 dB

Thus M-ary QAM yields a better performance than M-ary PSK for the same bit rate (i.e., same M) and the same transmitted energy (Eb/N0).
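The table can be reproduced from (26); a small sketch (the function name is ours):

```python
import math

def rm_db(M):
    """QAM-over-PSK advantage (26) in dB: RM = [3/(M-1)] / [2 sin^2(pi/M)]."""
    rm = (3 / (M - 1)) / (2 * math.sin(math.pi / M) ** 2)
    return 10 * math.log10(rm)

# Reproduce the table rows to within rounding:
for M, expected in [(8, 1.65), (16, 4.20), (32, 7.02), (64, 9.95)]:
    assert abs(rm_db(M) - expected) < 0.05
```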


M-ary Orthogonal Signals

si(t) = √E φi(t), i = 1, 2, . . . , M    (27)

[Figure: Orthogonal signal constellations for (a) M = 2 and (b) M = 3: one signal point at distance √E along each orthonormal axis.]

The decision rule of the minimum distance receiver follows easily:

Choose si(t) if ri > rj, j = 1, 2, . . . , M; j ≠ i    (28)

Minimum distance receiver:

[Figure: Bank of M correlators, φ1(t) = s1(t)/√E through φM(t) = sM(t)/√E, each integrated over (0, Ts) and sampled at t = Ts to give r1, . . . , rM, followed by a choose-the-largest decision.]

(Note: Instead of φi(t) one may use si(t))


When the M orthonormal functions are chosen to be orthogonal sinusoidal carriers, the signal set is known as M-ary frequency shift keying (M-FSK):

si(t) = V cos(2πfit), 0 ≤ t ≤ Ts; i = 1, 2, . . . , M    (29)

where the frequencies are chosen so that the signals are orthogonal over the interval [0, Ts] seconds:

fi = (k ± (i − 1)) (1/(2Ts)), i = 1, 2, . . . , M    (30)

Error performance of M orthogonal signals: To determine the message error probability, consider that message s1(t) was transmitted. Due to the symmetry of the signal space and because the messages are equally likely,

Pr[error] = Pr[error|s1(t)] = 1 − Pr[correct|s1(t)]
          = 1 − Pr[all rj < r1 : j ≠ 1|s1(t)]
          = 1 − ∫_{R1=−∞}^{∞} Π_{j=2}^{M} Pr[rj < R1|s1(t)] p(R1|s1(t)) dR1
          = 1 − ∫_{R1=−∞}^{∞} [∫_{R2=−∞}^{R1} (πN0)^{−0.5} exp{−R2²/N0} dR2]^{M−1}
                × (πN0)^{−0.5} exp{−(R1 − √E)²/N0} dR1    (31)

The above integral can only be evaluated numerically. It can be normalized so that only two parameters, namely M (the number of messages) and E/N0 (the signal-to-noise ratio), enter into the numerical integration as given below:

Pr[error] = (1/√(2π)) ∫_{−∞}^{∞} [1 − ((1/√(2π)) ∫_{−∞}^{y} e^{−x²/2} dx)^{M−1}] · exp{−(1/2)(y − √(2E/N0))²} dy    (32)


The relationship between probability of bit error and probability of symbol error for an M-ary orthogonal signal set can be found as follows. For equiprobable orthogonal signals, all symbol errors are equiprobable and occur with probability

Pr[symbol error]/(M − 1) = Pr[symbol error]/(2^λ − 1)

There are (λ choose k) ways in which k bits out of λ may be in error. Hence the average number of bit errors per λ-bit symbol is

Σ_{k=1}^{λ} k (λ choose k) Pr[symbol error]/(2^λ − 1) = λ (2^{λ−1}/(2^λ − 1)) Pr[symbol error]    (33)

The probability of bit error is the above result divided by λ. Thus

Pr[bit error]/Pr[symbol error] = 2^{λ−1}/(2^λ − 1)    (34)

Note that the above ratio approaches 1/2 as λ → ∞.

Figure 2: Probability of bit error for M-orthogonal signals versus γb = Eb/N0.


Union Bound: To provide insight into the behavior of Pr[error], consider upper bounding (the union bound) the error probability as follows.

Pr[error] = Pr[(r1 < r2) or (r1 < r3) or . . . or (r1 < rM)|s1(t)]
          < Pr[(r1 < r2)|s1(t)] + · · · + Pr[(r1 < rM)|s1(t)]
          = (M − 1)Q(√(E/N0)) < MQ(√(E/N0)) < M e^{−E/(2N0)}    (35)

where a simple bound on Q(x), namely Q(x) < exp{−x²/2}, has been used. Let M = 2^λ; then

Pr[error] < e^{λ ln 2} e^{−λEb/(2N0)} = e^{−λ(Eb/N0 − 2 ln 2)/2}    (36)

The above equation shows that there is a definite threshold effect with orthogonal signalling. As M → ∞, the probability of error approaches zero exponentially, provided that:

Eb/N0 > 2 ln 2 = 1.39 = 1.42 dB    (37)

A different interpretation of the upper bound in (36) can be obtained as follows. Since E = λEb = PsTs (Ps is the transmitted power) and the bit rate is rb = λ/Ts, (36) can be rewritten as:

Pr[error] < e^{λ ln 2} e^{−PsTs/(2N0)} = e^{−Ts[−rb ln 2 + Ps/(2N0)]}    (38)

The above implies that if −rb ln 2 + Ps/(2N0) > 0, or rb < Ps/(2N0 ln 2), the probability of error tends to zero as Ts or M becomes larger and larger. This behaviour of the error probability is quite surprising, since it shows that, provided the bit rate is small enough, the error probability can be made arbitrarily small even though the transmitter power is finite. The obvious disadvantage of the approach, however, is that the bandwidth requirement increases with M. As M → ∞, the transmitted bandwidth goes to infinity. The union bound is not very tight at low SNR, because the bound Q(x) < exp{−x²/2} on the Q function is loose. Using more elaborate bounding techniques it can be shown (see the texts by Proakis, and by Wozencraft and Jacobs) that Pr[error] → 0 as λ → ∞, provided that Eb/N0 > ln 2 = 0.693 (−1.6 dB).
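The threshold behaviour of the bound (36) is easy to see numerically; a small sketch (the function name is ours):

```python
import math

def union_bound(lam, Eb_N0):
    """Upper bound (36): Pr[error] < exp(-lam*(Eb/N0 - 2 ln 2)/2)."""
    return math.exp(-lam * (Eb_N0 - 2 * math.log(2)) / 2)

# Above the threshold Eb/N0 = 2 ln 2 = 1.386 the bound decays as lam grows;
# below the threshold it blows up:
assert union_bound(20, 2.0) < union_bound(10, 2.0)   # 2.0 > 2 ln 2
assert union_bound(20, 1.0) > union_bound(10, 1.0)   # 1.0 < 2 ln 2
```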


Biorthogonal Signals

[Figure: Biorthogonal constellations obtained from the orthogonal sets by adding the negatives −si(t) of each signal; shown for N = 2 (M = 4) and N = 3 (M = 6).]

A biorthogonal signal set can be obtained from an original orthogonal set of N signals by augmenting it with the negative of each signal. Obviously, for the biorthogonal set M = 2N. Denote the additional signals by −si(t), i = 1, 2, . . . , N, and assume that each signal has energy E. The received signal r(t) is closer to si(t) than to −si(t) if and only if (iff) ∫_0^Ts [r(t) − √E φi(t)]² dt < ∫_0^Ts [r(t) + √E φi(t)]² dt. This happens iff ri = ∫_0^Ts r(t)φi(t) dt > 0. Similarly, r(t) is closer to si(t) than to sj(t) iff ri > rj, j ≠ i, and r(t) is closer to si(t) than to −sj(t) iff ri > −rj, j ≠ i. It follows that the decision rule of the minimum-distance receiver for biorthogonal signalling can be implemented as

Choose ±si(t) if |ri| > |rj| and ±ri > 0, ∀ j ≠ i    (39)

The conditional probability of a correct decision for equally likely messages, given that s1(t) is transmitted and that

r1 = √E + w1 = R1 > 0    (40)

is just

Pr[correct|s1(t), R1 > 0] = Pr[−R1 < all rj < R1 : j ≠ 1|s1(t), R1 > 0]
                          = (Pr[−R1 < rj < R1|s1(t), R1 > 0])^{N−1}
                          = [∫_{R2=−R1}^{R1} (πN0)^{−0.5} exp{−R2²/N0} dR2]^{N−1}    (41)

University of Saskatchewan

EE810: Communication Theory I Page 107

Averaging over the pdf of R1 we have

Pr[correct|s1(t)] = ∫_{R1=0}^{∞} [∫_{R2=−R1}^{R1} (πN0)^{−0.5} exp{−R2²/N0} dR2]^{N−1}
                    × (πN0)^{−0.5} exp{−(R1 − √E)²/N0} dR1    (42)

Again, by virtue of symmetry and the equal a priori probability of the messages, the above equation is also Pr[correct]. Finally, by noting that N = M/2 we obtain

Pr[error] = 1 − ∫_{R1=0}^{∞} [∫_{R2=−R1}^{R1} (πN0)^{−0.5} exp{−R2²/N0} dR2]^{M/2−1}
             × (πN0)^{−0.5} exp{−(R1 − √E)²/N0} dR1    (43)

The difference in error performance for M biorthogonal and M orthogonal signals is negligible when M and E/N0 are large, but the number of dimensions (i.e., bandwidth) required is reduced by one half in the biorthogonal case.


Hypercube Signals (Vertices of a Hypercube)

[Figure: Hypercube constellations: (a) λ = 1, M = 2 (binary antipodal); (b) λ = 2, M = 4; (c) λ = 3, M = 8. The signal points sit at (±√Eb, . . . , ±√Eb), i.e., on the vertices of a hypercube of side 2√Eb.]

Here the M = 2^λ signals are located on the vertices of a λ-dimensional hypercube centered at the origin. This configuration is shown geometrically above for λ = 1, 2, 3. The hypercube signals can be formed as follows:

si(t) = √Eb Σ_{j=1}^{λ} bij φj(t), bij ∈ {±1}, i = 1, 2, . . . , M = 2^λ    (44)

Thus the components of the signal vector si = [si1, si2, . . . , siλ]ᵀ are ±√Eb.

To evaluate the error probability, assume that the signal

s1 = (−√Eb, −√Eb, . . . , −√Eb)    (45)

is transmitted. First claim that no error is made if the noise components along φj(t) satisfy

wj < √Eb, for all j = 1, 2, . . . , λ    (46)

The proof is immediate. When r = x = s1 + w is received, the jth component of x − si is:

(xj − sij) = wj, if sij = −√Eb;  −(2√Eb − wj), if sij = +√Eb    (47)

Since Equation (46) implies

2√Eb − wj > wj for all j,    (48)

University of Saskatchewan

EE810: Communication Theory I Page 109

it follows that

|x − si|² = Σ_{j=1}^{λ} (xj − sij)² > Σ_{j=1}^{λ} wj² = |x − s1|²    (49)

for all si ≠ s1 whenever (46) is satisfied. Next claim that an error is made if, for at least one j,

wj > √Eb    (50)

This follows from the fact that x is closer to sj than to s1 whenever (50) is satisfied, where sj denotes the signal with component +√Eb along the jth direction and −√Eb in all other directions. (Of course, x may be still closer to some signal other than sj, but it cannot be closest to s1.) Equations (49) and (50) together imply that a correct decision is made if and only if (46) is satisfied. The probability of this event, given that m = m1, is therefore:

Pr[correct|m1] = Pr[all wj < √Eb; j = 1, 2, . . . , λ]
               = Π_{j=1}^{λ} Pr[wj < √Eb]
               = (1 − Pr[wj ≥ √Eb])^λ = (1 − p)^λ    (51)

in which

p = Pr[wj ≥ √Eb] = Q(√(2Eb/N0))

is again the probability of error for two equally likely signals separated by distance 2√Eb. Finally, from symmetry:

Pr[correct|mi] = Pr[correct|m1] for all i,    (52)

hence,

Pr[correct] = (1 − p)^λ = [1 − Q(√(2Eb/N0))]^λ    (53)

In order to express this result in terms of signal energy, we again recognize that the distance squared from the origin to each signal si is the same. The transmitted energy is therefore independent of i, hence can be designated by E. Clearly

|si|² = Σ_{j=1}^{λ} s²ij = λEb = E    (54)


Thus

p = Q(√(2E/(λN0)))    (55)

The simple form of the result Pr[correct] = (1 − p)^λ suggests that a more immediate derivation may exist. Indeed one does. Note that the jth coordinate of the random signal s is a priori equally likely to be +√Eb or −√Eb, independent of all other coordinates. Moreover, the noise wj disturbing the jth coordinate is independent of the noise in all other coordinates. Hence, by the theory of sufficient statistics, a decision may be made on the jth coordinate without examining any other coordinate. This single-coordinate decision corresponds to the problem of binary signals separated by distance 2√Eb, for which the probability of correct decision is 1 − p. Since in the original hypercube problem a correct decision is made if and only if a correct decision is made on every coordinate, and since these decisions are independent, it follows immediately that Pr[correct] = (1 − p)^λ.
