A Tutorial on Convolutional Coding with Viterbi Decoding

by Chip Fleming of Spectrum Applications

Updated 2002-07-05 20:18Z

Copyright © 1999-2002, Spectrum Applications. All rights reserved.

This tutorial is best viewed with Netscape Navigator, Version 4 or higher. Equations will appear out of line with Internet Explorer.

Introduction

The purpose of this tutorial is to introduce the reader to a forward error correction technique known as convolutional coding with Viterbi decoding. More particularly, this tutorial will focus primarily on the Viterbi decoding algorithm itself. The intended audience is anyone interested in designing or understanding wireless digital communications systems.

Following this introduction, I will provide a detailed description of the algorithms for generating random binary data, convolutionally encoding the data, passing the encoded data through a noisy channel, quantizing the received channel symbols, and performing Viterbi decoding on the quantized channel symbols to recover the original binary data. Complete simulation source code examples of these algorithms follow the algorithm descriptions, along with some example results from the simulation code. Since the examples are written in the C programming language, an ability to read C code will be very helpful in achieving a clear understanding. However, I have tried to provide enough explanation in the description of the algorithms, and enough comments in the example source code, that you can understand the algorithms even if you don't know C very well.

The purpose of forward error correction (FEC) is to improve the capacity of a channel by adding some carefully designed redundant information to the data being transmitted through the channel. The process of adding this redundant information is known as channel coding. Convolutional coding and block coding are the two major forms of channel coding. Convolutional codes operate on serial data, one or a few bits at a time. Block codes operate on relatively large (typically, up to a couple of hundred bytes) message blocks. There are a variety of useful convolutional and block codes, and a variety of algorithms for decoding the received coded information sequences to recover the original data. The reader is advised to study the sources listed in the bibliography for a broader and deeper understanding of the digital communications and channel-coding field.

Convolutional encoding with Viterbi decoding is an FEC technique that is particularly suited to a channel in which the transmitted signal is corrupted mainly by additive white Gaussian noise (AWGN). You can think of AWGN as noise whose voltage distribution over time has characteristics that can be described using a Gaussian, or normal, statistical distribution, i.e. a bell curve. This voltage distribution has zero mean and a standard deviation that is a function of the signal-to-noise ratio (SNR) of the received signal. Let's assume for the moment that the received signal level is fixed. Then if the SNR is high, the standard deviation of the noise is small, and vice versa. In digital communications, SNR is usually measured in terms of Eb/N0, which stands for energy per bit divided by the one-sided noise density.

Let's take a moment to look at a couple of examples. Suppose that we have a system where a '1' channel bit is transmitted as a voltage of -1V, and a '0' channel bit is transmitted as a voltage of +1V. This is called bipolar non-return-to-zero (bipolar NRZ) signaling. It is also called binary "antipodal" (which means the signaling states are exact opposites of each other) signaling. The receiver comprises a comparator that decides the received channel bit is a '1' if its voltage is less than 0V, and a '0' if its voltage is greater than or equal to 0V. One would want to sample the output of the comparator in the middle of each data bit interval. Let's see how our example system performs, first when the Eb/N0 is high, and then when the Eb/N0 is lower.

The following figure shows the results of a channel simulation where one million (1 x 10^6) channel bits are transmitted through an AWGN channel with an Eb/N0 level of 20 dB (i.e. the signal voltage is ten times the rms noise voltage). In this simulation, a '1' channel bit is transmitted at a level of -1V, and a '0' channel bit is transmitted at a level of +1V. The x axis of this figure corresponds to the received signal voltages, and the y axis represents the number of times each voltage level was received:

Our simple receiver detects a received channel bit as a '1' if its voltage is less than 0V, and as a '0' if its voltage is greater than or equal to 0V. Such a receiver would have little difficulty correctly receiving a signal as depicted in the figure above. Very few (if any) channel bit reception errors would occur. In this example simulation with the Eb/N0 set at 20 dB, a transmitted '0' was never received as a '1', and a transmitted '1' was never received as a '0'. So far, so good.

The next figure shows the results of a similar channel simulation when 1 x 10^6 channel bits are transmitted through an AWGN channel where the Eb/N0 level has decreased to 6 dB (i.e. the signal voltage is two times the rms noise voltage):


Now observe how the right-hand side of the red curve in the figure above crosses 0V, and how the left-hand side of the blue curve also crosses 0V. The points on the red curve that are above 0V represent events where a channel bit that was transmitted as a one (-1V) was received as a zero. The points on the blue curve that are below 0V represent events where a channel bit that was transmitted as a zero (+1V) was received as a one. Obviously, these events correspond to channel bit reception errors in our simple receiver. In this example simulation with the Eb/N0 set at 6 dB, a transmitted '0' was received as a '1' 1,147 times, and a transmitted '1' was received as a '0' 1,207 times, corresponding to a bit error rate (BER) of about 0.235%. That's not so good, especially if you're trying to transmit highly compressed data, such as digital television. I will show you that by using convolutional coding with Viterbi decoding, you can achieve a BER of better than 1 x 10^-7 at the same Eb/N0 of 6 dB.

Convolutional codes are usually described using two parameters: the code rate and the constraint length. The code rate, k/n, is expressed as a ratio of the number of bits into the convolutional encoder (k) to the number of channel symbols output by the convolutional encoder (n) in a given encoder cycle. The constraint length parameter, K, denotes the "length" of the convolutional encoder, i.e. how many k-bit stages are available to feed the combinatorial logic that produces the output symbols. Closely related to K is the parameter m, which indicates how many encoder cycles an input bit is retained and used for encoding after it first appears at the input to the convolutional encoder. The m parameter can be thought of as the memory length of the encoder. In this tutorial, and in the example source code, I focus on rate 1/2 convolutional codes.

Viterbi decoding was developed by Andrew J. Viterbi, a founder of Qualcomm Corporation. His seminal paper on the technique is "Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm," published in IEEE Transactions on Information Theory, Volume IT-13, pages 260-269, in April 1967. Since then, other researchers have expanded on his work by finding good convolutional codes, exploring the performance limits of the technique, and varying decoder design parameters to optimize the implementation of the technique in hardware and software. Consult the Convolutional Coding/Viterbi Decoding Papers section of the bibliography for more reading on this subject. The Viterbi decoding algorithm is also used in decoding trellis-coded modulation, the technique used in telephone-line modems to squeeze high ratios of bits-per-second to Hertz out of 3 kHz-bandwidth analog telephone lines.

Viterbi decoding is one of two types of decoding algorithms used with convolutional encoding; the other type is sequential decoding. Sequential decoding has the advantage that it can perform very well with long-constraint-length convolutional codes, but it has a variable decoding time. A discussion of sequential decoding algorithms is beyond the scope of this tutorial; the reader can find sources discussing this topic in the Books about Forward Error Correction section of the bibliography.

Viterbi decoding has the advantage that it has a fixed decoding time. It is well suited to hardware decoder implementation. But its computational requirements grow exponentially as a function of the constraint length, so it is usually limited in practice to constraint lengths of K = 9 or less. Stanford Telecom produces a K = 9 Viterbi decoder that operates at rates up to 96 kbps, and a K = 7 Viterbi decoder that operates at up to 45 Mbps. Advanced Wireless Technologies offers a K = 9 Viterbi decoder that operates at rates up to 2 Mbps. NTT has announced a Viterbi decoder that operates at 60 Mbps, but I don't know its commercial availability. Moore's Law applies to Viterbi decoders as well as to microprocessors, so consider the rates mentioned above as a snapshot of the state of the art taken in early 1999.

For years, convolutional coding with Viterbi decoding has been the predominant FEC technique used in space communications, particularly in geostationary satellite communication networks, such as VSAT (very small aperture terminal) networks. I believe the most common variant used in VSAT networks is rate 1/2 convolutional coding using a code with a constraint length K = 7. With this code, you can transmit binary or quaternary phase-shift-keyed (BPSK or QPSK) signals with at least 5 dB less power than you'd need without it. That's a reduction in Watts of more than a factor of three! This is very useful in reducing transmitter and/or antenna cost or permitting increased data rates given the same transmitter power and antenna sizes.

But there's a tradeoff: the same data rate with rate 1/2 convolutional coding takes twice the bandwidth of the same signal without it, given that the modulation technique is the same. That's because with rate 1/2 convolutional encoding, you transmit two channel symbols per data bit. However, if you think of the tradeoff as a 5 dB power savings for a 3 dB bandwidth expansion, you can see that you come out ahead. Remember: if the modulation technique stays the same, the bandwidth expansion factor of a convolutional code is simply n/k.

Many radio channels are AWGN channels, but many others, particularly terrestrial radio channels, also have other impairments, such as multipath, selective fading, interference, and atmospheric (lightning) noise. Transmitters and receivers can add spurious signals and phase noise to the desired signal as well. Although convolutional coding with Viterbi decoding might be useful in dealing with those other problems, it may not be the best technique.

In the past several years, convolutional coding with Viterbi decoding has begun to be supplemented in the geostationary satellite communication arena with Reed-Solomon coding. The two coding techniques are usually implemented as serially concatenated block and convolutional coding, i.e. concatenated Reed-Solomon coding and convolutional encoding with Viterbi decoding. Typically, the information to be transmitted is first encoded with the Reed-Solomon code, then with the convolutional code. On the receiving end, Viterbi decoding is performed first, followed by Reed-Solomon decoding. This is the technique that is used in most if not all of the direct-broadcast satellite (DBS) systems, and in several of the newer VSAT products as well. At least, that's what the vendors are advertising.

Recently (1993), a new parallel-concatenated convolutional coding technique known as turbo coding has emerged. Initial hardware encoder and decoder implementations of turbo coding have already appeared on the market. This technique achieves substantial improvements in performance over concatenated Viterbi and Reed-Solomon coding. It gets its name from the fact that the decoded data are recycled through the decoder several times; I suppose the inventors found this reminiscent of the way a turbocharger operates. A variant in which the codes are product codes has also been developed, along with hardware implementations. Check the appropriate sources listed in the bibliography for more information on turbo coding and turbo code devices.


Description of the Algorithms (Part 1)

The steps involved in simulating a communication channel using convolutional encoding and Viterbi decoding are as follows:

• Generate the data to be transmitted through the channel; the result is binary data bits
• Convolutionally encode the data; the result is channel symbols
• Map the one/zero channel symbols onto an antipodal baseband signal, producing transmitted channel symbols
• Add noise to the transmitted channel symbols; the result is received channel symbols
• Quantize the received channel levels; one-bit quantization is called hard-decision, and two- to n-bit quantization is called soft-decision (n is usually three or four)
• Perform Viterbi decoding on the quantized received channel symbols; the result is again binary data bits
• Compare the decoded data bits to the transmitted data bits and count the number of errors.

Many of you will notice that I left out the steps of modulating the channel symbols onto a transmitted carrier, and then demodulating the received carrier to recover the channel symbols. You're right, but we can accurately model the effects of AWGN even though we bypass those steps.

Generating the Data

Generating the data to be transmitted through the channel can be accomplished quite simply by using a random number generator. One that produces a uniform distribution of numbers on the interval from 0 to a maximum value is provided in C: rand(). Using this function, we can say that any value less than half of the maximum value is a zero; any value greater than or equal to half of the maximum value is a one.
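
As a concrete illustration, here is a minimal sketch of such a data generator in C. The function name and signature are my own, not taken from the tutorial's actual simulation source:

    #include <stdlib.h>

    /* Fill 'data' with nbits random bits: rand() values below half of
       RAND_MAX become zeroes, values at or above half become ones. */
    void generate_data(int *data, int nbits)
    {
        for (int i = 0; i < nbits; i++)
            data[i] = (rand() < RAND_MAX / 2) ? 0 : 1;
    }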

Convolutionally Encoding the Data

Convolutionally encoding the data is accomplished using a shift register and associated combinatorial logic that performs modulo-two addition. (A shift register is merely a chain of flip-flops wherein the output of the nth flip-flop is tied to the input of the (n+1)th flip-flop. Every time the active edge of the clock occurs, the input to the flip-flop is clocked through to the output, and thus the data are shifted over one stage.) The combinatorial logic is often in the form of cascaded exclusive-or gates. As a reminder, exclusive-or gates are two-input, one-output gates often represented by the logic symbol shown below, which implement the following truth table:

Input A   Input B   Output (A xor B)
   0         0             0
   0         1             1
   1         0             1
   1         1             0


The exclusive-or gate performs modulo-two addition of its inputs. When you cascade q two-input exclusive-or gates, with the output of the first one feeding one of the inputs of the second one, the output of the second one feeding one of the inputs of the third one, etc., the output of the last one in the chain is the modulo-two sum of the q + 1 inputs.

Another way to illustrate the modulo-two adder, and the way that is most commonly used in textbooks, is as a circle with a + symbol inside, thus:

Now that we have defined the two basic components of the convolutional encoder (flip-flops comprising the shift register and exclusive-or gates comprising the associated modulo-two adders), let's look at a picture of a convolutional encoder for a rate 1/2, K = 3, m = 2 code:

In this encoder, data bits are provided at a rate of k bits per second. Channel symbols are output at a rate of n = 2k symbols per second. The input bit is stable during the encoder cycle. The encoder cycle starts when an input clock edge occurs. When the input clock edge occurs, the output of the left-hand flip-flop is clocked into the right-hand flip-flop, the previous input bit is clocked into the left-hand flip-flop, and a new input bit becomes available. Then the outputs of the upper and lower modulo-two adders become stable. The output selector (SEL A/B block) cycles through two states: in the first state, it selects and outputs the output of the upper modulo-two adder; in the second state, it selects and outputs the output of the lower modulo-two adder.

The encoder shown above encodes the K = 3, (7, 5) convolutional code. The octal numbers 7 and 5 represent the code generator polynomials, which when read in binary (111₂ and 101₂) correspond to the shift register connections to the upper and lower modulo-two adders, respectively. This code has been determined to be the "best" code for rate 1/2, K = 3. It is the code I will use for the remaining discussion and examples, for reasons that will become readily apparent when we get into the Viterbi decoder algorithm.

Let's look at an example input data stream, and the corresponding output data stream. Let the input sequence be 010111001010001₂.

Assume that the outputs of both of the flip-flops in the shift register are initially cleared, i.e. their outputs are zeroes. The first clock cycle makes the first input bit, a zero, available to the encoder. The flip-flop outputs are both zeroes. The inputs to the modulo-two adders are all zeroes, so the output of the encoder is 00₂.


The second clock cycle makes the second input bit available to the encoder. The left-hand flip-flop clocks in the previous bit, which was a zero, and the right-hand flip-flop clocks in the zero output by the left-hand flip-flop. The inputs to the top modulo-two adder are 100₂, so the output is a one. The inputs to the bottom modulo-two adder are 10₂, so the output is also a one. So the encoder outputs 11₂ for the channel symbols.

The third clock cycle makes the third input bit, a zero, available to the encoder. The left-hand flip-flop clocks in the previous bit, which was a one, and the right-hand flip-flop clocks in the zero from two bit-times ago. The inputs to the top modulo-two adder are 010₂, so the output is a one. The inputs to the bottom modulo-two adder are 00₂, so the output is zero. So the encoder outputs 10₂ for the channel symbols.

And so on. The timing diagram shown below illustrates the process. After all of the inputs have been presented to the encoder, the output sequence will be:

00 11 10 00 01 10 01 11 11 10 00 10 11 00 11₂

Notice that I have paired the encoder outputs: the first bit in each pair is the output of the upper modulo-two adder; the second bit in each pair is the output of the lower modulo-two adder.

You can see from the structure of the rate 1/2 K = 3 convolutional encoder and from the example given above that each input bit has an effect on three successive pairs of output symbols. That is an extremely important point, and that is what gives the convolutional code its error-correcting power. The reason why will become evident when we get into the Viterbi decoder algorithm.

Now if we are only going to send the 15 data bits given above, in order for the last bit to affect three pairs of output symbols, we need to output two more pairs of symbols. This is accomplished in our example encoder by clocking the convolutional encoder flip-flops two (= m) more times, while holding the input at zero. This is called "flushing" the encoder, and results in two more pairs of output symbols. The final binary output of the encoder is thus 00 11 10 00 01 10 01 11 11 10 00 10 11 00 11 10 11₂. If we don't perform the flushing operation, the last m bits of the message have less error-correction capability than the first through (m - 1)th bits had. This is a pretty important thing to remember if you're going to use this FEC technique in a burst-mode environment. So is the step of clearing the shift register at the beginning of each burst. The encoder must start in a known state and end in a known state for the decoder to be able to reconstruct the input data sequence properly.
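
Here is a minimal C sketch of the complete rate 1/2, K = 3, (7, 5) encoding process described above, including the two flushing bits. The function name and array layout are illustrative assumptions, not the tutorial's actual source code:

    #include <stddef.h>

    /* Encode nbits data bits with generators 7 and 5 (octal), then flush
       with m = 2 zeroes. 'symbols' must hold 2 * (nbits + 2) entries.
       s1 is the left-hand flip-flop (the most recent bit), s2 the
       right-hand flip-flop (the bit before that). */
    void encode_k3_7_5(const int *bits, size_t nbits, int *symbols)
    {
        int s1 = 0, s2 = 0;                      /* register starts cleared    */
        size_t n = 0;

        for (size_t i = 0; i < nbits + 2; i++) {
            int u = (i < nbits) ? bits[i] : 0;   /* flushing bits are zeroes   */
            symbols[n++] = u ^ s1 ^ s2;          /* upper adder, 111 (octal 7) */
            symbols[n++] = u ^ s2;               /* lower adder, 101 (octal 5) */
            s2 = s1;                             /* shift the register...      */
            s1 = u;                              /* ...and clock in the input  */
        }
    }

Running this on the example input 010111001010001 reproduces the output sequence 00 11 10 00 01 10 01 11 11 10 00 10 11 00 11 10 11 given above.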

Now, let's look at the encoder from another perspective. You can think of the encoder as a simple state machine. The example encoder has two bits of memory, so there are four possible states. Let's give the left-hand flip-flop a binary weight of 2^1, and the right-hand flip-flop a binary weight of 2^0. Initially, the encoder is in the all-zeroes state. If the first input bit is a zero, the encoder stays in the all-zeroes state at the next clock edge. But if the input bit is a one, the encoder transitions to the 10₂ state at the next clock edge. Then, if the next input bit is zero, the encoder transitions to the 01₂ state, otherwise it transitions to the 11₂ state. The following table gives the next state given the current state and the input, with the states given in binary:

                     Next State, if
    Current State    Input = 0   Input = 1
         00             00          10
         01             00          10
         10             01          11
         11             01          11

The above table is often called a state transition table. We'll refer to it as the next state table. Now let us look at a table that lists the channel output symbols, given the current state and the input data, which we'll refer to as the output table:

                   Output Symbols, if
    Current State    Input = 0   Input = 1
         00             00          11
         01             11          00
         10             10          01
         11             01          10

You should now see that with these two tables, you can completely describe the behavior of the example rate 1/2, K = 3 convolutional encoder. Note that both of these tables have 2^(K - 1) rows and 2^k columns, where K is the constraint length and k is the number of bits input to the encoder for each cycle. These two tables will come in handy when we start discussing the Viterbi decoder algorithm.
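
Rather than typing the two tables in by hand, they can be derived directly from the generator polynomials. The sketch below shows one way to do that in C for the example code; the names and the state convention (left flip-flop = bit 1, right flip-flop = bit 0, matching the binary weights above) are my own, and output symbol pairs are packed into two-bit integers (0 through 3):

    #include <stdio.h>

    #define K  3          /* constraint length        */
    #define G1 07         /* upper generator, octal 7 */
    #define G2 05         /* lower generator, octal 5 */
    #define NSTATES (1 << (K - 1))

    /* modulo-two sum of the bits of x */
    static int parity(int x)
    {
        int p = 0;
        while (x) { p ^= x & 1; x >>= 1; }
        return p;
    }

    int main(void)
    {
        int next_state[NSTATES][2], output[NSTATES][2];

        for (int state = 0; state < NSTATES; state++)
            for (int input = 0; input < 2; input++) {
                /* register contents: input bit, then the two state bits */
                int reg = (input << (K - 1)) | state;
                next_state[state][input] = reg >> 1;
                output[state][input] =
                    (parity(reg & G1) << 1) | parity(reg & G2);
            }

        for (int s = 0; s < NSTATES; s++)
            printf("state %d: next %d/%d  output %d/%d\n", s,
                   next_state[s][0], next_state[s][1],
                   output[s][0], output[s][1]);
        return 0;
    }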

Mapping the Channel Symbols to Signal Levels


Mapping the one/zero output of the convolutional encoder onto an antipodal baseband signaling scheme is simply a matter of translating zeroes to +1s and ones to -1s. This can be accomplished by performing the operation y = 1 - 2x on each convolutional encoder output symbol.
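
In C this mapping is a one-line helper (illustrative, not the tutorial's actual code):

    /* Map a one/zero channel symbol to an antipodal level:
       0 -> +1.0, 1 -> -1.0 */
    static double map_symbol(int x)
    {
        return 1.0 - 2.0 * (double)x;
    }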

Adding Noise to the Transmitted Symbols

Adding noise to the transmitted channel symbols produced by the convolutional encoder involves generating Gaussian random numbers, scaling the numbers according to the desired energy per symbol to noise density ratio, Es/N0, and adding the scaled Gaussian random numbers to the channel symbol values. For the uncoded channel, Es/N0 = Eb/N0, since there is one channel symbol per bit. However, for the coded channel, Es/N0 = Eb/N0 + 10log10(k/n). For example, for rate 1/2 coding, Es/N0 = Eb/N0 + 10log10(1/2) = Eb/N0 - 3.01 dB. Similarly, for rate 2/3 coding, Es/N0 = Eb/N0 + 10log10(2/3) = Eb/N0 - 1.76 dB.

The Gaussian random number generator is the only interesting part of this task. C only provides a uniform random number generator, rand(). In order to obtain Gaussian random numbers, we take advantage of relationships between uniform, Rayleigh, and Gaussian distributions. Given a uniform random variable U, a Rayleigh random variable R can be obtained by

R = sqrt(2σ² ln(1/(1 - U)))

where σ² is the variance of the Rayleigh random variable, and given R and a second uniform random variable V, two Gaussian random variables G and H can be obtained by

G = R cos V and H = R sin V.

In the AWGN channel, the signal is corrupted by additive noise, n(t), which has the power spectrum N0/2 watts/Hz. The variance σ² of this noise is equal to N0/2. If we set the energy per symbol Es equal to 1, then Es/N0 = 1/(2σ²). So

σ = sqrt(1/(2 Es/N0)).
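
Putting these relationships together gives a simple Gaussian noise generator. This is a hedged sketch of my own (with the second uniform variable scaled onto (0, 2π), as the Rayleigh-to-Gaussian step requires); es_n0 is the Es/N0 value expressed as a ratio, not in dB:

    #include <math.h>
    #include <stdlib.h>

    /* Return one Gaussian noise sample with sigma = sqrt(1 / (2 * Es/N0)). */
    static double gaussian(double es_n0)
    {
        /* two uniform samples, kept strictly inside (0, 1) to avoid log(0) */
        double u = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        double v = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        double sigma = sqrt(1.0 / (2.0 * es_n0));
        double r = sqrt(2.0 * sigma * sigma * log(1.0 / (1.0 - u))); /* Rayleigh */
        return r * cos(2.0 * 3.14159265358979 * v);
    }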

Quantizing the Received Channel Symbols

An ideal Viterbi decoder would work with infinite precision, or at least with floating-point numbers. In practical systems, we quantize the received channel symbols with one or a few bits of precision in order to reduce the complexity of the Viterbi decoder, not to mention the circuits that precede it. If the received channel symbols are quantized to one-bit precision (< 0V = 1, >= 0V = 0), the result is called hard-decision data. If the received channel symbols are quantized with more than one bit of precision, the result is called soft-decision data. A Viterbi decoder with soft-decision data inputs quantized to three or four bits of precision can perform about 2 dB better than one working with hard-decision inputs. The usual quantization precision is three bits. More bits provide little additional improvement.

The selection of the quantizing levels is an important design decision because it can have a significant effect on the performance of the link. The following is a very brief explanation of one way to set those levels. Let's assume our received signal levels in the absence of noise are -1V = 1, +1V = 0. With noise, our received signal has mean +/-1 and standard deviation σ = sqrt(1/(2 Es/N0)). Let's use a uniform, three-bit quantizer having the input/output relationship shown in the figure below, where D is a decision level that we will calculate shortly:

The decision level, D, can be calculated according to the formula

D = 0.5 σ = 0.5 sqrt(1/(2 Es/N0))

where Es/N0 is the energy per symbol to noise density ratio. (The above figure was redrawn from Figure 2 of Advanced Hardware Architectures' ANRS07-0795, "Soft Decision Thresholds and Effects on Viterbi Performance". See the bibliography for a link to their web pages.)


Description of the Algorithms (Part 2)

Performing Viterbi Decoding

The Viterbi decoder itself is the primary focus of this tutorial. Perhaps the single most important concept to aid in understanding the Viterbi algorithm is the trellis diagram. The figure below shows the trellis diagram for our example rate 1/2 K = 3 convolutional encoder, for a 15-bit message:

The four possible states of the encoder are depicted as four rows of horizontal dots. There is one column of four dots for the initial state of the encoder and one for each time instant during the message. For a 15-bit message with two encoder memory flushing bits, there are 17 time instants in addition to t = 0, which represents the initial condition of the encoder. The solid lines connecting dots in the diagram represent state transitions when the input bit is a one. The dotted lines represent state transitions when the input bit is a zero. Notice the correspondence between the arrows in the trellis diagram and the state transition table discussed above. Also notice that since the initial condition of the encoder is State 00₂, and the two memory flushing bits are zeroes, the arrows start out at State 00₂ and end up at the same state.

The following diagram shows the states of the trellis that are actually reached during the encoding of our example 15-bit message:

The encoder input bits and output symbols are shown at the bottom of the diagram. Notice the correspondence between the encoder output symbols and the output table discussed above. Let's look at that in more detail, using the expanded version of the transition from one time instant to the next shown below:


The two-bit numbers labeling the lines are the corresponding convolutional encoder channel symbol outputs. Remember that dotted lines represent cases where the encoder input is a zero, and solid lines represent cases where the encoder input is a one. (In the figure above, the two-bit binary numbers labeling dotted lines are on the left, and the two-bit binary numbers labeling solid lines are on the right.)

OK, now let's start looking at how the Viterbi decoding algorithm actually works. For our example, we're going to use hard-decision symbol inputs to keep things simple. (The example source code uses soft-decision inputs to achieve better performance.) Suppose we receive the above encoded message with a couple of bit errors:

Each time we receive a pair of channel symbols, we're going to compute a metric to measure the "distance" between what we received and all of the possible channel symbol pairs we could have received. Going from t = 0 to t = 1, there are only two possible channel symbol pairs we could have received: 00₂ and 11₂. That's because we know the convolutional encoder was initialized to the all-zeroes state, and given one input bit = one or zero, there are only two states we could transition to and two possible outputs of the encoder. These possible outputs of the encoder are 00₂ and 11₂.

The metric we're going to use for now is the Hamming distance between the received channel symbol pair and the possible channel symbol pairs. The Hamming distance is computed by simply counting how many bits are different between the received channel symbol pair and the possible channel symbol pairs. The results can only be zero, one, or two. The Hamming distance (or other metric) values we compute at each time instant for the paths between the states at the previous time instant and the states at the current time instant are called branch metrics. For the first time instant, we're going to save these results as "accumulated error metric" values, associated with states. For the second time instant on, the accumulated error metrics will be computed by adding the previous accumulated error metrics to the current branch metrics.
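
As a small illustration (a helper of my own, assuming each channel symbol pair is packed into a two-bit integer), the branch metric is just a bit count of the exclusive-or of the two pairs:

    /* Hamming distance between two 2-bit channel symbol pairs: 0, 1, or 2. */
    static int hamming2(int a, int b)
    {
        int x = a ^ b;
        return (x & 1) + ((x >> 1) & 1);
    }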

At t = 1, we received 00₂. The only possible channel symbol pairs we could have received are 00₂ and 11₂. The Hamming distance between 00₂ and 00₂ is zero. The Hamming distance between 00₂ and 11₂ is two. Therefore, the branch metric value for the branch from State 00₂ to State 00₂ is zero, and for the branch from State 00₂ to State 10₂ it's two. Since the previous accumulated error metric values are equal to zero, the accumulated metric values for State 00₂ and for State 10₂ are equal to the branch metric values. The accumulated error metric values for the other two states are undefined. The figure below illustrates the results at t = 1:


Note that the solid lines between states at t = 1 and the state at t = 0 illustrate the predecessor-successor relationship between the states at t = 1 and the state at t = 0 respectively. This information is shown graphically in the figure, but is stored numerically in the actual implementation. To be more specific, or maybe clear is a better word, at each time instant t, we will store the number of the predecessor state that led to each of the current states at t.

Now let's look at what happens at t = 2. We received a 11₂ channel symbol pair. The possible channel symbol pairs we could have received in going from t = 1 to t = 2 are 00₂ going from State 00₂ to State 00₂, 11₂ going from State 00₂ to State 10₂, 10₂ going from State 10₂ to State 01₂, and 01₂ going from State 10₂ to State 11₂. The Hamming distance between 00₂ and 11₂ is two, between 11₂ and 11₂ is zero, and between 10₂ or 01₂ and 11₂ is one. We add these branch metric values to the previous accumulated error metric values associated with each state that we came from to get to the current states. At t = 1, we could only be at State 00₂ or State 10₂. The accumulated error metric values associated with those states were 0 and 2 respectively. The figure below shows the calculation of the accumulated error metric associated with each state, at t = 2.

That's all the computation for t = 2. What we carry forward to t = 3 will be the accumulated error metrics for each state, and the predecessor states for each of the four states at t = 2, corresponding to the state relationships shown by the solid lines in the illustration of the trellis.

Now look at the figure for t = 3. Things get a bit more complicated here, since there are now two different ways that we could get from each of the four states that were valid at t = 2 to the four states that are valid at t = 3. So how do we handle that? The answer is, we compare the accumulated error metrics associated with each branch, and discard the larger one of each pair of branches leading into a given state. If the members of a pair of accumulated error metrics going into a particular state are equal, we just save that value. The other thing that's affected is the predecessor-successor history we're keeping. For each state, the predecessor that survives is the one with the lower accumulated error metric. If the two accumulated error metrics are equal, some people use a fair coin toss to choose the surviving predecessor state. Others simply pick one of them consistently, i.e. the upper branch or the lower branch. It probably doesn't matter which method you use. The operation of adding the previous accumulated error metrics to the new branch metrics, comparing the results, and selecting the smaller (smallest) accumulated error metric to be retained for the next time instant is called the add-compare-select operation. The figure below shows the results of processing t = 3:

Note that the third channel symbol pair we received had a one-symbol error. The smallest accumulated error metric is a one, and there are two of these.
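
Here is a hedged C sketch of one complete add-compare-select step for all four states at a single time instant. It reuses the next_state and output tables derived earlier and the hamming2() helper above; the array names and layout are my own illustration, not the tutorial's actual data structures:

    #define NSTATES 4
    #define BIG     1000000   /* stands in for an undefined (infinite) metric */

    /* One ACS step: extend every branch, keep the smaller metric and the
       surviving predecessor for each destination state. Ties keep the
       branch encountered first, i.e. one consistent choice. */
    void acs_step(const int next_state[NSTATES][2],
                  const int output[NSTATES][2],
                  int received_pair,
                  const int prev_metric[NSTATES],
                  int new_metric[NSTATES],
                  int predecessor[NSTATES])
    {
        for (int s = 0; s < NSTATES; s++)
            new_metric[s] = BIG;

        for (int prev = 0; prev < NSTATES; prev++)
            for (int input = 0; input < 2; input++) {
                int s = next_state[prev][input];
                int m = prev_metric[prev]                       /* add     */
                      + hamming2(output[prev][input], received_pair);
                if (m < new_metric[s]) {                        /* compare */
                    new_metric[s] = m;                          /* select  */
                    predecessor[s] = prev;
                }
            }
    }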

Let's see what happens now at t = 4. The processing is the same as it was for t = 3. The results are shown in the figure:

Notice that at t = 4, the path through the trellis of the actual transmitted message, shown in bold, is again associated with the smallest accumulated error metric. Let's look at t = 5:

At t = 5, the path through the trellis corresponding to the actual message, shown in bold, is still associated with the smallest accumulated error metric. This is the thing that the Viterbi decoder exploits to recover the original message.

Perhaps you're getting tired of stepping through the trellis. I know I am. Let's skip to the end. At t = 17, the trellis looks like this, with the clutter of the intermediate state history removed:


The decoding process begins with building the accumulated error metrics for some number of received channel symbol pairs, and the history of what states preceded the states at each time instant t with the smallest accumulated error metric. Once this information is built up, the Viterbi decoder is ready to recreate the sequence of bits that were input to the convolutional encoder when the message was encoded for transmission. This is accomplished by the following steps (a C sketch of the procedure follows the list):

• First, select the state having the smallest accumulated error metric and save the state number of that state.
• Iteratively perform the following step until the beginning of the trellis is reached: working backward through the state history table, for the selected state, select a new state which is listed in the state history table as being the predecessor to that state. Save the state number of each selected state. This step is called traceback.
• Now work forward through the list of selected states saved in the previous steps. Look up what input bit corresponds to a transition from each predecessor state to its successor state. That is the bit that must have been encoded by the convolutional encoder.
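
And here is a sketch of those three steps in C, using the state history table and an input table where input[prev][next] holds the bit that drives the encoder from state prev to state next (impossible transitions are never looked up, because traceback only follows surviving branches). All names and sizes are illustrative:

    #define NSTATES 4

    void traceback(int depth,                     /* e.g. 17 in the example */
                   const int history[NSTATES][depth + 1],
                   const int input[NSTATES][NSTATES],
                   const int final_metric[NSTATES],
                   int *decoded)                  /* receives 'depth' bits  */
    {
        int seq[depth + 1];   /* C99 variable-length array of states */

        /* Step 1: start from the state with the smallest accumulated metric. */
        int best = 0;
        for (int s = 1; s < NSTATES; s++)
            if (final_metric[s] < final_metric[best])
                best = s;

        /* Step 2: walk backward through the state history table. */
        seq[depth] = best;
        for (int t = depth; t > 0; t--)
            seq[t - 1] = history[seq[t]][t];

        /* Step 3: walk forward, mapping each transition to its input bit. */
        for (int t = 0; t < depth; t++)
            decoded[t] = input[seq[t]][seq[t + 1]];
    }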

The following table shows the accumulated error metric for the full 15-bit (plus two flushing bits) example message at each time t. Blank entries correspond to states whose accumulated error metric is undefined at that time:

t =          0   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17
State 00₂:   0   0   2   3   3   3   3   4   1   3   4   3   3   2   2   4   5   2
State 01₂:           3   1   2   2   3   1   4   4   1   4   2   3   4   4   2
State 10₂:       2   0   2   1   3   3   4   3   1   4   1   4   3   3   2
State 11₂:           3   1   2   1   1   3   4   4   3   4   2   3   4   4

It is interesting to note that for this hard-decision-input Viterbi decoder example, the smallest accumulated error metric in the final state indicates how many channel symbol errors occurred.

The following state history table shows the surviving predecessor states for each state at each time t:


t =          0   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17
State 00₂:   0   0   0   1   0   1   1   0   1   0   0   1   0   1   0   0   0   1
State 01₂:   0   0   2   2   3   3   2   3   3   2   2   3   2   3   2   2   2   0
State 10₂:   0   0   0   0   1   1   1   0   1   0   0   1   1   0   1   0   0   0
State 11₂:   0   0   2   2   3   2   3   2   3   2   2   3   2   3   2   2   0   0

The following table shows the states selected when tracing the path back through the survivor state table shown above:

t =          0   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17
state:       0   0   2   1   2   3   3   1   0   2   1   2   1   0   0   2   1   0

Using a table that maps state transitions to the inputs that caused them, we can now recreate the original message. Here is what this table looks like for our example rate 1/2 K = 3 convolutional code:

                     Input was, Given Next State =
    Current State    00₂ = 0   01₂ = 1   10₂ = 2   11₂ = 3
      00₂ = 0           0         x         1         x
      01₂ = 1           0         x         1         x
      10₂ = 2           x         0         x         1
      11₂ = 3           x         0         x         1

Note: In the above table, x denotes an impossible transition from one state to another state.

So now we have all the tools required to recreate the original message from the message we received:

t =    1   2   3   4   5   6   7   8   9  10  11  12  13  14  15
bit:   0   1   0   1   1   1   0   0   1   0   1   0   0   0   1

The two flushing bits are discarded.


Here's an insight into how the traceback algorithm eventually finds its way onto the right path even if it started out choosing the wrong initial state. This could happen if more than one state had the smallest accumulated error metric, for example. I'll use the figure for the trellis at t = 3 again to illustrate this point:

See how at t = 3, both States 01₂ and 11₂ had an accumulated error metric of 1. The correct path goes to State 01₂; notice that the bold line showing the actual message path goes into this state. But suppose we choose State 11₂ to start our traceback. The predecessor state for State 11₂, which is State 10₂, is the same as the predecessor state for State 01₂! This is because at t = 2, State 10₂ had the smallest accumulated error metric. So after a false start, we are almost immediately back on the correct path.

For the example 15-bit message, we built the trellis up for the entire message before starting traceback. For longer messages, or continuous data, this is neither practical nor desirable, due to memory constraints and decoder delay. Research has shown that a traceback depth of K x 5 is sufficient for Viterbi decoding with the type of codes we have been discussing. Any deeper traceback increases decoding delay and decoder memory requirements, while not significantly improving the performance of the decoder. The exception is punctured codes, which I'll describe later; they require deeper traceback to reach their final performance limits.

To implement a Viterbi decoder in software, the first step is to build some data structures around which the decoder algorithm will be implemented. These data structures are best implemented as arrays. The primary six arrays that we need for the Viterbi decoder are as follows (declarations for the example code follow the list):

• A copy of the convolutional encoder next state table, the state transition table of the encoder. The dimensions of this table (rows x columns) are 2^(K - 1) x 2^k. This array needs to be initialized before starting the decoding process.
• A copy of the convolutional encoder output table. The dimensions of this table are 2^(K - 1) x 2^k. This array needs to be initialized before starting the decoding process.
• An array (table) showing, for each convolutional encoder current state and next state, what input value (0 or 1) would produce the next state, given the current state. We'll call this array the input table. Its dimensions are 2^(K - 1) x 2^(K - 1). This array needs to be initialized before starting the decoding process.
• An array to store state predecessor history for each encoder state for up to K x 5 + 1 received channel symbol pairs. We'll call this table the state history table. The dimensions of this array are 2^(K - 1) x (K x 5 + 1). This array does not need to be initialized before starting the decoding process.
• An array to store the accumulated error metrics for each state computed using the add-compare-select operation. This array will be called the accumulated error metric array. The dimensions of this array are 2^(K - 1) x 2. This array does not need to be initialized before starting the decoding process.
• An array to store the list of states determined during traceback, as described above. It is called the state sequence array. The dimensions of this array are (K x 5) + 1. This array does not need to be initialized before starting the decoding process.
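
For the example rate 1/2, K = 3 code with a traceback depth of K x 5, those six arrays might be declared as follows in C (a sketch; the sizes follow the dimensions listed above, and the names are my own):

    #define K       3
    #define NSTATES (1 << (K - 1))   /* 2^(K-1) = 4 states */
    #define DEPTH   (K * 5)          /* traceback depth    */

    int next_state[NSTATES][2];            /* encoder next state table      */
    int output_table[NSTATES][2];          /* encoder output table          */
    int input_table[NSTATES][NSTATES];     /* input bit for each transition */
    int state_history[NSTATES][DEPTH + 1]; /* predecessor state history     */
    int accum_metric[NSTATES][2];          /* previous/current metrics      */
    int state_sequence[DEPTH + 1];         /* states selected in traceback  */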

Before getting into the example source code, for purposes of completeness, I want to talk briefly about other rates of convolutional codes that can be decoded with Viterbi decoders. Earlier, I mentioned punctured codes, which are a common way of achieving higher code rates, i.e. larger ratios of k to n. Punctured codes are created by first encoding data using a rate 1/n encoder such as the example encoder described in this tutorial, and then deleting some of the channel symbols at the output of the encoder. The process of deleting some of the channel output symbols is called puncturing. For example, to create a rate 3/4 code from the rate 1/2 code described in this tutorial, one would simply delete channel symbols in accordance with the following puncturing pattern:

1 0 1
1 1 0

where a one indicates that a channel symbol is to be transmitted, and a zero indicates that a channel symbol is to be deleted. To see how this makes the rate 3/4, think of each column of the above table as corresponding to a bit input to the encoder, and each one in the table as corresponding to an output channel symbol. There are three columns in the table, and four ones. You can even create a rate 2/3 code using a rate 1/2 encoder with the following puncturing pattern:

1 1
1 0

which has two columns and three ones.

To decode a punctured code, one must substitute null symbols for the deleted symbols at the input to the Viterbi decoder. Null symbols can be symbols quantized to levels corresponding to weak ones or weak zeroes, or, better, can be special flag symbols that, when processed by the ACS circuits in the decoder, result in no change to the accumulated error metric from the previous state.
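
As an illustration of the transmit side, here is a sketch of rate 3/4 puncturing using the first pattern above (the names and symbol layout are my own; symbols arrive as upper/lower pairs from the rate 1/2 encoder):

    /* Puncturing pattern for rate 3/4: row 0 = upper symbols, row 1 = lower. */
    static const int pat[2][3] = { { 1, 0, 1 },
                                   { 1, 1, 0 } };

    /* Keep or delete symbols pair by pair; returns the number kept.
       For every 3 input pairs (6 symbols), 4 symbols survive: rate 3/4. */
    int puncture_3_4(const int *in, int npairs, int *out)
    {
        int n = 0;
        for (int i = 0; i < npairs; i++) {
            int col = i % 3;                           /* pattern column */
            if (pat[0][col]) out[n++] = in[2 * i];     /* upper symbol   */
            if (pat[1][col]) out[n++] = in[2 * i + 1]; /* lower symbol   */
        }
        return n;
    }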

Of course, n does not have to be equal to two. For example, a rate 1/3, K = 3, (7, 7, 5) code can be encoded using the encoder shown below:

This encoder has three modulo-two adders, so for each input bit, it can produce three channel symbol outputs. Of course, with suitable puncturing patterns, you can create higher-rate codes using this encoder as well.

I don't have good data to share with you right now about the traceback depth requirements for Viterbi decoders for punctured codes. I have been told that instead of K x 5, depths of K x 7, K x 9, or even more are required to reach the point of diminishing returns. This would be a good topic around which to design some experiments using a modified version of the example simulation code I provide.


Simulation Source Code Examples

The simulation source code comprises a test driver routine and several functions, which will be described below. This code simulates a link through an AWGN channel from data source to Viterbi decoder output. The test driver first dynamically allocates several arrays to store the source data, the convolutionally encoded source data, the output of the AWGN channel, and the data output by the Viterbi decoder. Next, it calls the data generator, convolutional encoder, channel simulation, and Viterbi decoder functions in turn. It then compares the source data output by the data generator to the data output by the Viterbi decoder and counts the number of errors. Once 100 errors (sufficient for +/- 20% measurement error with 95% confidence) are accumulated, the test driver displays the BER for the given Es/N0. The test parameters are controlled by definitions in vdsim.h.

The test driver includes a compile-time option to also measure the BER for an uncoded channel, i.e. a channel without forward error correction. I used this option to validate my Gaussian noise generator, by comparing the simulated uncoded BER to the theoretical uncoded BER given by BER = 0.5 * erfc(sqrt(Eb/N0)), where Eb/N0 is expressed as a ratio, not in dB. I am happy to say that the results agree quite closely.
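
For reference, that theoretical check is a couple of lines of C using the standard library's erfc() (a sketch; eb_n0_db is Eb/N0 in dB):

    #include <math.h>

    /* Theoretical uncoded BER for antipodal signaling on an AWGN channel. */
    double uncoded_ber(double eb_n0_db)
    {
        double eb_n0 = pow(10.0, eb_n0_db / 10.0);   /* dB -> ratio */
        return 0.5 * erfc(sqrt(eb_n0));
    }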

When running the simulations, it is important to remember the relationship between Es/N0 and Eb/N0. As stated earlier, for the uncoded channel, Es/N0 = Eb/N0, since there is one channel symbol per bit. However, for the coded channel, Es/N0 = Eb/N0 + 10log10(k/n). For example, for rate 1/2 coding, Es/N0 = Eb/N0 + 10log10(1/2) = Eb/N0 - 3.01 dB. For rate 1/8 coding, Es/N0 = Eb/N0 + 10log10(1/8) = Eb/N0 - 9.03 dB.

The data generator function simulates the data source. It accepts as arguments a pointer to an input array and the number of bits to generate, and fills the array with randomly-chosen zeroes and ones.

The convolutional encoder function accepts as arguments the pointers to the input and output arrays and the number of bits in the input array. It then performs the specified convolutional encoding and fills the output array with one/zero channel symbols. The convolutional code parameters are in the header file vdsim.h.

The channel simulation function accepts as arguments the desired Es/N0, the number of channel symbols in the input array, and pointers to the input and output arrays. It performs the binary (one and zero) to baseband signal level (+/- 1) mapping on the convolutional encoder channel symbol outputs. It then adds Gaussian random variables to the mapped symbols, and fills the output array. The output data are floating point numbers.

The arguments to the Viterbi decoder function* are the expected Es/N0, the number of channel symbols in the input array, and pointers to its input and output arrays. First, the decoder function sets up its data structures, the arrays described in the algorithm description section. Then, it performs three-bit soft quantization on the floating point received channel symbols, using the expected Es/N0, producing integers. (Optionally, a fixed quantizer designed for a 4 dB Es/N0 can be chosen.) This completes the preliminary processing.

The next step is to start decoding the soft-decision channel symbols. The decoder builds up a trellis of depth K x 5, and then traces back to the beginning of the trellis and outputs one bit. The decoder then shifts the trellis left one time instant, discarding the oldest data, following which it computes the accumulated error metrics for the next time instant, traces back, and outputs a bit. The decoder continues in this way until it reaches the flushing bits. The flushing bits cause the encoder to converge back to state 0, and the decoder exploits this fact. Once the decoder builds the trellis for the last bit, it flushes the trellis, decoding and outputting all the bits in the trellis up to but not including the first flushing bit.


I have compiled and tested the simulation source code described above under Borland C++ Builder Version 3. Simulation results are presented here.


* If you would like to obtain an electronic copy of the Viterbi decoder function that you can copy and paste, and you are an engineering student, send me an email request with the following information: name and email address of your instructor; name, address, and URL of your school, college, or university; name and number of the course you are taking for which you want the electronic copy; and of course your name and email address. I will only accept requests for free copies of the software from email addresses with a .edu domain. If you do not have such an email address, you must purchase the software. Allow at least 48 hours for a response. If you are not an engineering student, you may purchase an electronic copy of the Viterbi decoder function. Contact me for terms.


Example Simulation Results

I obtained the results shown in this chart using the example simulation code, with the trellis depth set to K x 5, using the adaptive quantizer with three-bit channel symbol quantization. For each data point, I ran the simulation until 100 errors (or possibly more) occurred. With this number of errors, I have 95% confidence that the true number of errors for the number of data bits through the simulation lies between 80 and 120.

Notice how the simulation results for BER on an uncoded channel closely track the theoretical BER for an uncoded channel, which is given by the equation P(e) = 0.5 * erfc(sqrt(Eb/N0)) = Q(sqrt(2 Eb/N0)). This validates the uncoded BER algorithm and the Gaussian noise generator. The coded BER results appear to agree well with those obtained by others.

Since I first published this tutorial in 1999, I've gotten a few questions about the 95% confidence interval, so let me elaborate somewhat. Error events occur as a Poisson process, a random sequence of events in time. The Poisson process has a mean rate λ equal to n/t, where n is the number of events (the number of errors, in this case) and t is the time interval of the measurement. For the purposes of the simulation, let's let t = the total number of bits in the simulation. Let's say we measure 100 errors in 100,000 bits. The rate λ is thus 100/100,000, or 1 x 10^-3. If we set up the simulation to run for 100,000 bits, then the mean µ of the Poisson distribution is λt, or 100 errors. The formula for the probability of an expected number r of errors, given a mean of µ errors, is

P(r) = (µ^r / r!) e^(-µ)

So the Poisson distribution for 50 to 150 errors, given a mean of 100 errors, is illustrated in the chart below:


The cumulative probability of the above distribution for the range of 80 to 120 errors is actually 95.99% (approximately).
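
You can verify that figure with a few lines of C, summing P(r) for r = 80 to 120 with µ = 100. The sum is computed in log space (using the standard C99 lgamma() for log r!) to avoid overflowing r! directly:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double mu = 100.0, sum = 0.0;
        for (int r = 80; r <= 120; r++)
            sum += exp(r * log(mu) - mu - lgamma(r + 1.0));
        printf("P(80 <= r <= 120) = %f\n", sum);   /* prints about 0.9599 */
        return 0;
    }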

The obvious next step for you to take is to start varying some of the parameters. For example, you can try different trellis depths, the fixed quantizer instead of the adaptive quantizer, more or fewer bits in the quantizer, and so on.


Bibliography

If you think I should include something here, please email me details.

Web Links for FEC Products--Viterbi, Reed-Solomon, and Turbo Codes

4i2i offers C++, VHDL, and Verilog code for convolutional encoders/Viterbi decoders. In addition, they offer Reed-Solomon and Golay code products.

Advanced Hardware Architectures offers turbo product code encoder/decoders, Reed-Solomon encoder/decoders, and a concatenated Viterbi/Reed-Solomon decoder.

The Communications Research Centre of Canada offers Turbo and Viterbi codecs for the PC platform. They claim that their Turbo decoder has a throughput of over 400 kbps for a full four-iteration decoder, and that their Viterbi decoder has a throughput of over 1 Mbps for rate 1/2, K = 7, both on a 400 MHz Pentium II.

Efficient Channel Coding, Inc. developed the turbo product coding techniques implemented by Advanced Hardware Architectures in their AHA4501 device.

Istari Design, Inc. used to offer Reed-Solomon decoder cores. They also had a Viterbi Decoder

Simulation Accelerator, and they were developing Viterbi decoder cores. The "Technology" section of

their web page was very interesting. A reader has informed me that they are now part of Conexant. I

haven’t investigated further at this time.

Qualcomm offers a Viterbi/Trellis decoder, the Q1900. I have been told that the last-time-buy deadline

on this part is March 15, 2001.

Small World Communications offers a MAP decoder implementation for Xilinx FPGAs, suitable for use

as a component of a turbo code decoder.

The ASIC and Custom Products Division of Stanford Telecom developed a number of different Viterbi

decoder chips. This group is now part of Intel. You can find their products listed on the Intel Developer's

web site.

Web Links for FEC Articles, Papers, Class Notes, Patents, etc.

Brian Joseph of Alantro (now part of Texas Instruments) has written a nice Java applet to run a step-by-

step simulation of a Viterbi decoder. Visit the Viterbi Algorithm Workshop and try it out.

Dr. Robert H. Morelos-Zaragoza, currently at the University of Tokyo, has a comprehensive page listing

links to source code for error-control coding programs in C and links to other pages with more

information on error-control coding. Reed-Solomon, convolutional/Viterbi, BCH, and Golay codes and

Galois-field calculators are among the topics covered.

There is a set of nicely-done lecture notes on Digital Communications, by Dr. Janak Sodha of the

Department of Computer Science, Mathematics & Physics at the University of the West Indies in

Bridgetown, Barbados. Now there's a job.... Unfortunately for us, Dr. Sodha is in the process of

publishing a textbook on the topic, and his publisher has made him password-protect the Digital


Communications lecture notes due to copyright concerns. But some of the other course material is open-

access, and may be useful to you. Contact Dr. Sodha for more info on his forthcoming book.

Phil Karn, KA9Q, has published the source code for his Reed-Solomon, Viterbi, and Fano decoders. His main FEC code page is here. Phil recently released the source code for Viterbi and Reed-Solomon decoders designed to take advantage of the Intel® SIMD instruction sets. His Viterbi decoder reaches speeds of 14 Mbps running on a 1.8 GHz Intel® Pentium® 4 processor. Phil also has a descriptive page on convolutional code decoders for amateur radio; it includes the Fano sequential decoding algorithm as well as the Viterbi algorithm.

There is a very useful web page on turbo codes at this JPL site maintained by Fabrizio Pollara and

Dariush Divsalar. It contains detailed information about turbo codes with emphasis on deep-space

applications, and contains a good bibliography on turbo codes as well as links to other turbo coding

research sites. They also have links to commercial turbo codec providers. Another extensive bibliography

on turbo coding can be found at this University of Virginia site.

If you want to be drowned in information about Viterbi decoders (and perhaps keep yourself out of

trouble), go to the US Patent and Trademark Office patent search site and do a Boolean Search for all

years on "Viterbi decoder" in any field. When I did it on February 14, 1999, I got 702 hits. Other

interesting search terms are "Reed-Solomon," (1304 hits) "parallel concatenated," (7 hits) and "turbo

code" (7 hits).

Some Books about Digital Communications

L. W. Couch, II, Digital and Analog Communication Systems, 4th ed. New York: Macmillan Publishing Company, 1993.

S. Haykin, Communication Systems, 3rd

ed. New York: John Wiley & Sons, 1994.

T. McDermott, Wireless Digital Communications: Design and Theory. Tucson, AZ: Tucson Amateur

Packet Radio Corporation, 1996.

J. G. Proakis, Digital Communications, 3rd ed. Boston, MA: WCB/McGraw-Hill, 1995.

J. G. Proakis and M. Salehi, Contemporary Communication Systems Using MATLAB®. Boston, MA: PWS Publishing Company, 1998.

M. S. Roden, Digital Communication Systems Design. Englewood Cliffs, NJ: Prentice Hall, 1988.

Some Books about Forward Error Correction

S. Lin and D. J. Costello, Error Control Coding. Englewood Cliffs, NJ: Prentice Hall, 1982.

A. M. Michelson and A. H. Levesque, Error Control Techniques for Digital Communication. New York:

John Wiley & Sons, 1985.

Page 26: Tic 1 Tutorial on Convolutional Coding With Viterbi Decoding

26

W. W. Peterson and E. J. Weldon, Jr., Error-Correcting Codes, 2nd ed. Cambridge, MA: The MIT Press, 1972.

V. Pless, Introduction to the Theory of Error-Correcting Codes, 3rd

ed. New York: John Wiley & Sons,

1998.

C. Schlegel and L. Perez, Trellis Coding. Piscataway, NJ: IEEE Press, 1997.

S. B. Wicker, Error Control Systems for Digital Communication and Storage. Englewood Cliffs, NJ:

Prentice Hall, 1995.

Some Papers about Convolutional Coding with Viterbi Decoding

For those interested in VLSI implementations of the Viterbi algorithm, I recommend the following paper

and the papers to which it refers (and so on):

M.-B. Lin, "New Path History Management Circuits for Viterbi Decoders," IEEE Transactions on Communications, vol. 48, October, 2000, pp. 1605-1608.

Other papers are:

G. D. Forney, Jr., "Convolutional Codes II: Maximum-Likelihood Decoding," Information and Control, vol. 25, June, 1974, pp. 222-266.

K. S. Gilhousen et al., "Coding Systems Study for High Data Rate Telemetry Links," Final Contract

Report, N71-27786, Contract No. NAS2-6024, Linkabit Corporation, La Jolla, CA, 1971.

J. A. Heller and I. M. Jacobs, "Viterbi Decoding for Satellite and Space Communications," IEEE

Transactions on Communication Technology, vol. COM-19, October, 1971, pp. 835-848.

K. J. Larsen, "Short Convolutional Codes with Maximal Free Distance for Rates 1/2, 1/3, and 1/4," IEEE

Transactions on Information Theory, vol. IT-19, May, 1973, pp. 371-372.

J. P. Odenwalder, "Optimum Decoding of Convolutional Codes," Ph. D. Dissertation, Department of

Systems Sciences, School of Engineering and Applied Sciences, University of California at Los Angeles,

1970.

A. J. Viterbi, "Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding

Algorithm," IEEE Transactions on Information Theory , vol. IT-13, April, 1967, pp. 260-269.

Some Papers about Turbo Coding

An excellent series of introductory articles on turbo coding appeared in the January through April, 1998

issues of Personal Engineering and Instrumentation News. Although PE&IN appears to be defunct, the

author is in the process of working with ChipCenter to make these articles available on the ChipCenter Column Archives website. The articles are as follows:


C. Gumas, "Turbo codes rev up error-correcting performance," (Part 1), PE&IN, January, 1998, pp. 61-

66.

C. Gumas, "Turbo codes build on classic error-correcting codes and boost performance," (Part 2),

PE&IN, February, 1998, pp. 54-63.

C. Gumas, "Turbo Codes propel new concepts for superior codes," (Part 3), PE&IN, March, 1998, pp. 65-

70.

C. Gumas, "Win, place, or show, Turbo Codes enter the race for next generation error-correcting

systems," (Part 4), PE&IN, April, 1998, pp. 54-62.

Another good introductory article that was published recently in an IEE (UK) magazine is as follows:

A. Burr, "Turbo-codes: the ultimate error control codes?" Electronics and Communication Engineering

Journal, August, 2001, pp. 155-165.

The seminal paper on the MAP algorithm upon which the original turbo-code decoder was based is:

L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate," IEEE Transactions on Information Theory, vol. IT-20, March, 1974, pp. 284-287.

Other papers of interest are (but refer to the web pages mentioned above for extensive bibliographies):

C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error-correcting coding and

decoding," Proceedings of the ICC '93, May, 1993, pp. 1064-1070.

C. Berrou, "Some clinical aspects of turbo codes," International Symposium on Turbo Codes, September,

1997, pp. 26-31.

S. Benedetto and G. Montorsi, "Design of parallel concatenated convolutional codes," IEEE Transactions

on Communications, vol. 44, May, 1996.

S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, "Algorithm for continuous decoding of turbo

codes," Electronic Letters, vol. 32 no. 4, February, 1996.

R. M. Pyndiah, "Near-optimum decoding of product codes: block turbo codes," IEEE Transactions on

Communications, vol. 46, August, 1998, pp. 1003-1010.


For more information, contact:

Chip Fleming

Spectrum Applications

7408 Vinyard Court

Derwood, MD 20855-1142

Phone: +1 301 926 8028

Fax: +1 301 926 6638

email: [email protected]

