
# 20 Convolutional Codes 3

Date post: 04-Jun-2018
Author: mailstonaik
• 8/13/2019 20 Convolutional Codes 3

1/29


## Hard Decision Decoding

### Viterbi Decoding of an (n, 1, m) Code on the BSC
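The slides for this section are images, so as a concrete sketch: on a BSC, hard-decision Viterbi decoding keeps, for each trellis state, the survivor path with the smallest accumulated Hamming distance to the received bits. The rate-1/2, m = 2 encoder with generators (7, 5) in octal is an assumption here; it matches the example used later in these notes.

```python
def conv_encode(bits, g=(0b111, 0b101), m=2):
    """Rate-1/2 convolutional encoder; appends m zeros to flush the register."""
    state, out = 0, []
    for u in bits + [0] * m:
        reg = (u << m) | state          # register = [newest input, state bits]
        out += [bin(reg & gi).count("1") % 2 for gi in g]
        state = reg >> 1
    return out

def viterbi_decode(received, n_info, g=(0b111, 0b101), m=2):
    """Hard-decision Viterbi on a BSC: branch metric is Hamming distance."""
    INF = float("inf")
    n_states = 1 << m
    metric = [0] + [INF] * (n_states - 1)    # encoder starts in the zero state
    path = [[] for _ in range(n_states)]
    for t in range(0, len(received), 2):
        r = received[t:t + 2]
        new_metric = [INF] * n_states
        new_path = [[] for _ in range(n_states)]
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for u in (0, 1):
                reg = (u << m) | s
                out = [bin(reg & gi).count("1") % 2 for gi in g]
                ns = reg >> 1
                bm = sum(a != b for a, b in zip(out, r))   # Hamming branch metric
                if metric[s] + bm < new_metric[ns]:        # keep the survivor
                    new_metric[ns] = metric[s] + bm
                    new_path[ns] = path[s] + [u]
        metric, path = new_metric, new_path
    best = min(range(n_states), key=lambda s: metric[s])
    return path[best][:n_info]               # drop the flush bits
```

Since this code has free distance 5, any single channel error (and in fact any two errors) is corrected, which the flipped-bit check below illustrates.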


If the added noise is small (i.e., its variance is small), then the spread will be smaller, as we can see in (c) compared with (b). Intuitively, these pictures show that we are less likely to make decoding errors if the S/N is high or the variance of the noise is small.


Making a hard decision means that a simple decision threshold, usually placed between the two signals, is chosen such that if the received voltage is positive the signal is decoded as a 1, otherwise as a 0. In very simple terms, this is also what maximum-likelihood decoding means.

We can quantify the error made with this decision method. The probability that a 0 will be decoded, given that a 1 was sent, is a function of the two shaded areas seen in the figure above.


This is the familiar bit error rate equation. Note that it assumes a hard decision is made.


Now we ask: if the received voltage falls in region 3, what is the probability of error given that a 1 was sent? With hard decision the answer is easy; it can be calculated from the equation above. How do we calculate similar probabilities for a multi-region space?

We use the Q function, which is tabulated in many books. The Q function gives the area under the tail, measured from a given distance above the mean. So, for a signal whose mean is 2 (and unit noise variance), Q(2) gives the probability of receiving a value of 4 or greater.
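The Q function need not be read from tables; it can be computed from the complementary error function. A minimal sketch of the example above (which implicitly assumes unit noise variance):

```python
from math import erfc, sqrt

def q_func(x):
    """Q(x): tail probability of a standard normal beyond x."""
    return 0.5 * erfc(x / sqrt(2))

# Example from the text: signal mean 2, unit variance.
# P(received value >= 4) = Q((4 - 2) / 1) = Q(2) ~= 0.0228
p = q_func(2.0)
```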


This process of subdividing the decision space into more than two regions, as in this 4-level example, is called soft decision. These probabilities are also called the transition probabilities.

There are 4 different voltage values for each received signal with which to make a decision. In signals, as in real life, more information means better decisions. Soft decision improves the sensitivity of the decoding metrics and improves performance by as much as 3 dB in the case of 8-level soft decision.
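A 4-level quantizer of this kind can be sketched as follows (the threshold value and the ±1 signal levels are assumptions, since the original figure is not reproduced here):

```python
def soft_decision_4level(v, t=0.5):
    """Map a received voltage to one of four confidence levels.
    Signals assumed at -1 (a 0) and +1 (a 1); t is an assumed
    boundary between 'strong' and 'weak' regions."""
    if v < -t:
        return 0   # strong 0
    if v < 0.0:
        return 1   # weak 0
    if v < t:
        return 2   # weak 1
    return 3       # strong 1
```

A soft-decision Viterbi decoder would then weight its branch metrics by these levels instead of using plain Hamming distance.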


1. Break the all-zero (initial) state of the state diagram into a start state and an end state. The result is called the modified state diagram.

2. For every branch of the modified state diagram, assign the symbol D with its exponent equal to the Hamming weight of the output bits.

3. For every branch of the modified state diagram, assign the symbol J.

4. Assign the symbol N to a branch of the modified state diagram if the branch transition is caused by an input bit 1.


### Example

Convolutional encoder with k = 1, n = 2, r = 1/2, m = 2.
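The state diagram of such an encoder is just its state-transition table. As a sketch, the following enumerates it for the generators (7, 5) in octal (an assumption, since the original circuit figure is not reproduced here):

```python
def transitions(g=(0b111, 0b101), m=2):
    """List all (state, input, next_state, output bits) transitions
    of a rate-1/2 feed-forward convolutional encoder."""
    table = []
    for state in range(1 << m):
        for u in (0, 1):
            reg = (u << m) | state   # register = [input, state bits]
            out = tuple(bin(reg & gi).count("1") % 2 for gi in g)
            table.append((state, u, reg >> 1, out))
    return table

for s, u, ns, out in transitions():
    print(f"state {s:02b} --input {u} / output {out[0]}{out[1]}--> state {ns:02b}")
```

Each printed edge corresponds to one branch of the state diagram, to which the rules above attach D (output weight), J, and N (input 1).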


### Modified State Diagram

S_a is the start state and S_e is the end state.


Nodal equations are obtained for all the states except the start state. These results are:


Here, by substituting and rearranging, we obtain the transfer function in closed form, and from it the expanded polynomial form.
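The original slides show these equations only as images. For the standard r = 1/2, m = 2 encoder with generators (7, 5) in octal (an assumption, but one consistent with the d_free = 5 quoted below), the closed form and its expansion work out to:

```latex
T(D, N, J) = \frac{D^5 N J^3}{1 - D N J (1 + J)}
           = D^5 N J^3 + D^6 N^2 J^4 (1 + J) + D^7 N^3 J^5 (1 + J)^2 + \cdots
```

The lowest exponent of D is 5, in agreement with the free distance discussed next.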


## Free Distance of Convolutional Codes

Since a convolutional encoder generates codewords of various sizes (as opposed to block codes), the following approach is used to find the minimum distance between all pairs of codewords.

Since the code is linear, the minimum distance of the code is the minimum distance between each of the codewords and the all-zero codeword.

This is the minimum distance over the set of all arbitrarily long paths along the trellis that diverge from and remerge with the all-zero path.

It is called the minimum free distance, or the free distance of the code, denoted by d_free.


The minimum free distance corresponds to the ability of the convolutional code to estimate the best decoded bit sequence. As d_free increases, the performance of the convolutional code also improves.

From the transfer function, the minimum free distance is identified as the lowest exponent of D. For the transfer function considered above, d_free = 5.

If N and J are set to 1, the coefficient of D^i represents the number of paths through the trellis with weight i. More information about a codeword is obtained by observing the exponents of N and J: the exponent of N indicates the number of 1s in the input sequence (the data weight), and the exponent of J indicates the length of the path before it merges with the all-zero path for the first time.


## Free Distance

[Trellis diagram spanning t1 through t6: each branch is labeled with the Hamming weight of its output bits. The all-zero path and the minimum-weight path that diverges from and remerges with it are highlighted; that path has branch weights 2, 1, 2, giving d_free = 5.]
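The same minimum-weight diverging path can be found programmatically by a shortest-path search over the state diagram: force a divergence from the all-zero state, then find the minimum-weight way back. A sketch for the same assumed (7, 5) encoder:

```python
import heapq

def branch(state, u, g=(0b111, 0b101), m=2):
    """One encoder transition: (next_state, Hamming weight of output bits)."""
    reg = (u << m) | state
    w = sum(bin(reg & gi).count("1") % 2 for gi in g)
    return reg >> 1, w

def free_distance(m=2):
    """Dijkstra from the forced-divergence branch back to the all-zero state."""
    ns0, w0 = branch(0, 1, m=m)        # diverge: input 1 from the zero state
    heap, best = [(w0, ns0)], {ns0: w0}
    while heap:
        w, s = heapq.heappop(heap)
        if s == 0:                     # remerged with the all-zero path
            return w
        if w > best.get(s, float("inf")):
            continue                   # stale heap entry
        for u in (0, 1):
            ns, bw = branch(s, u, m=m)
            if w + bw < best.get(ns, float("inf")):
                best[ns] = w + bw
                heapq.heappush(heap, (w + bw, ns))
    return None
```

For this encoder the search returns 5, matching both the trellis figure and the transfer-function exponent.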


## Interleaving

Convolutional codes are suitable for memoryless channels with random error events.

Some errors are bursty in nature: there is statistical dependence among successive error events (time correlation) due to channel memory. Examples are errors in multipath fading channels in wireless communications, and errors due to switching noise.

Interleaving makes the channel look like a memoryless channel at the decoder.


Interleaving is done by spreading the coded symbols in time before transmission; the reverse (deinterleaving) is done at the receiver.

Interleaving makes bursty errors look random, so convolutional codes can be used on such channels.

Types of interleaving:

- Block interleaving
- Convolutional (cross) interleaving
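A block interleaver can be sketched in a few lines: write the symbols row by row into a rows x cols array, transmit them column by column, and invert the permutation at the receiver.

```python
def block_interleave(symbols, rows, cols):
    """Write row by row into a rows x cols array, read out column by column."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(symbols, rows, cols):
    """Inverse permutation: restores the original symbol order."""
    return [symbols[c * rows + r] for r in range(rows) for c in range(cols)]
```

A channel burst of up to `rows` consecutive errors hits symbols from different rows, so after deinterleaving the errors are spread roughly `cols` positions apart and look random to the decoder.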


Block diagram of a system employing interleaving for a burst-error channel.

