
Technical Seminar Report on

TURBO CODE A technical seminar report submitted in partial fulfillment of the requirements for the degree of Bachelor of Engineering under BPUT

SUBMITTED BY:

PRASANTA KUMAR BARIK

REGISTRATION NO: 0701106246

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

COLLEGE OF ENGINEERING AND TECHNOLOGY

Techno Campus, Kalinga Nagar, Ghatikia,

Bhubaneswar-751003

CERTIFICATE

This is to certify that PRASANTA KUMAR BARIK, a student of 8th semester B.Tech Computer Science and Engineering in the College of Engineering & Technology, with registration number 0701106246, of the batch 2007-2011, has taken an active interest in preparing the report on "TURBO CODE".

This is in partial fulfillment of the requirements for the Bachelor of Technology degree in Computer Science under Biju Pattnaik University of Technology, Orissa. This report is verified and attested by

Prof Jibitesh Mishra

HOD, Department of CSE

College of Engineering & Technology, Bhubaneswar

ACKNOWLEDGEMENT

Many people have contributed to the success of this work. Although a single sentence hardly suffices, I would like to thank God for blessing me with His grace. I am profoundly indebted to my seminar guide, Er. Sarita Tripathy, for innumerable acts of timely advice and encouragement, and I sincerely express my gratitude to her.

I express my immense pleasure and thankfulness to all the teachers and staff of the Department of Computer Science, College of Engineering & Technology, for their cooperation and support.

Last but not least, I thank all others, and especially my classmates, who in one way or another helped me in the successful completion of this work.

Prasanta Kumar Barik


Regd no.: 0701106246

8th Semester, CSE

ABSTRACT

During the transmission of data from transmitter to receiver, there is loss of information in the communication channel due to noise. This loss is measured in terms of the bit error rate (BER), and several decoding algorithms and modulation techniques are used to minimize it. Turbo codes are one of the most powerful types of error control codes currently available and can achieve low BERs at signal-to-noise ratios (SNR) very close to the Shannon limit. Nevertheless, the specific performance of the code depends highly on the particular decoding algorithm used at the receiver. In this sense, the choice of the decoding algorithm involves a trade-off between the gain introduced by the code and the complexity of the decoding process.


INDEX

CHAPTER

I. Introduction

II. Channel Coding
    Backward Error Correction
    Forward Error Correction
    Need for Better Codes

III. Turbo Code
    Encoding with Interleaving
    Recursive Systematic Convolutional Encoder
    Decoding
    Performance
    Example

IV. Conclusion

V. References

1. Introduction

Concatenated coding schemes were first proposed by Forney as a method for achieving large coding gains by combining two or more relatively simple building-block or component codes (sometimes called constituent codes). The resulting codes had the error-correction capability of much longer codes, and they were endowed with a structure that permitted relatively easy to moderately complex decoding. A serial concatenation of codes is most often used for power-limited systems such as transmitters on deep-space probes. The most popular of these schemes consists of a Reed-Solomon outer (applied first, removed last) code followed by a convolutional inner (applied last, removed first) code.

A turbo code can be thought of as a refinement of the concatenated encoding structure plus an iterative algorithm for decoding the associated code sequence. Turbo codes were first introduced in 1993 by Berrou, Glavieux, and Thitimajshima, who described a scheme that achieves a bit-error probability of 10−5 using a rate 1/2 code over an additive white Gaussian noise channel and modulation at an Eb/N0 of 0.7 dB. The codes are constructed by using two or more component codes on different interleaved versions of the same information sequence. Whereas, for conventional codes, the final step at the decoder yields hard-decision decoded bits (or, more generally, decoded symbols), for a concatenated scheme such as a turbo code to work properly, the decoding algorithm should not limit itself to passing hard decisions among the decoders. To best exploit the information learned from each decoder, the decoding algorithm must effect an exchange of soft decisions rather than hard decisions. For a system with two component codes, the concept behind turbo decoding is to pass soft decisions from the output of one decoder to the input of the other decoder, and to iterate this process several times so as to produce more reliable decisions.


2. Channel Coding

The task of channel coding is to encode the information sent over a communication channel in such a way that, in the presence of channel noise, errors can be detected and/or corrected. We distinguish between two coding methods:

• Backward error correction (BEC) requires only error detection: if an error is detected, the sender is requested to retransmit the message. While this method is simple and sets lower requirements on the code's error-correcting properties, it requires duplex communication and causes undesirable delays in transmission.

• Forward error correction (FEC) requires that the decoder also be capable of correcting a certain number of errors, i.e. it should be capable of locating the positions where the errors occurred. Since FEC codes require only simplex communication, they are especially attractive in wireless communication systems, helping to improve the energy efficiency of the system.

Figure 1: A convolutional encoder

In the rest of this paper we deal with binary FEC codes only. Next, we briefly recall the concept of conventional convolutional codes. Convolutional codes differ from block codes in the sense that they do not break the message stream into fixed-size blocks. Instead, redundancy is added continuously to the whole stream. The encoder keeps M previous input bits in memory. Each output bit of the encoder then depends on the current input bit as well as the M stored bits. Figure 1 depicts a sample convolutional encoder. The encoder produces two output bits per every input bit, defined by the equations

y1,i = xi + xi−1 + xi−3,
y2,i = xi + xi−2 + xi−3.

For this encoder, M = 3, since the ith bits of the output depend on the input bit i as well as the three previous bits i − 1, i − 2, i − 3. The encoder is nonsystematic, since the input bits do not appear explicitly in its output.

An important parameter of a channel code is the code rate. If the input size (or message size) of the encoder is k bits and the output size (the code word size) is n bits, then the ratio k/n is called the code rate r. Since our sample convolutional encoder produces two output bits for every input bit, its rate is 1/2. The code rate expresses the amount of redundancy in the code: the lower the rate, the more redundant the code.

Finally, the Hamming weight, or simply the weight, of a code word is the number of non-zero symbols in the code word. In the case of binary codes, dealt with in this paper, the weight of a code word is the number of ones in the word.
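To make the encoding rule concrete, here is a minimal Python sketch of the Figure 1 encoder defined by the equations above, assuming the three memory cells start at zero (the function name and bit-list interface are illustrative choices, not from the report):

```python
def conv_encode(bits):
    """Rate-1/2 nonsystematic convolutional encoder of Figure 1.

    y1_i = x_i + x_{i-1} + x_{i-3} (mod 2)
    y2_i = x_i + x_{i-2} + x_{i-3} (mod 2)
    The three memory cells are assumed to start at zero.
    """
    x1 = x2 = x3 = 0              # x_{i-1}, x_{i-2}, x_{i-3}
    out = []
    for x in bits:
        y1 = (x + x1 + x3) % 2
        y2 = (x + x2 + x3) % 2
        out.extend([y1, y2])      # two output bits per input bit -> rate 1/2
        x1, x2, x3 = x, x1, x2    # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))  # -> [1, 1, 1, 0, 1, 0, 1, 0]
```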

3. A Need for Better Codes

Designing a channel code is always a tradeoff between energy efficiency and bandwidth efficiency. Codes with a lower rate (i.e. more redundancy) can usually correct more errors. If more errors can be corrected, the communication system can operate with a lower transmit power, transmit over longer distances, tolerate more interference, use smaller antennas and transmit at a higher data rate. These properties make the code energy efficient. On the other hand, low-rate codes have a large overhead and hence consume more bandwidth. Also, decoding complexity grows exponentially with code length, and long (low-rate) codes set high computational requirements for conventional decoders. According to Viterbi, this is the central problem of channel coding: encoding is easy but decoding is hard.

For every combination of bandwidth (W), channel type, signal power (S) and received noise power (N), there is a theoretical upper limit on the data transmission rate R for which error-free data transmission is possible. This limit is called the channel capacity, or Shannon capacity (after Claude Shannon, who introduced the notion in 1948). For additive white Gaussian noise channels, the formula is

R < W log2(1 + S/N) [bits/second]

In practical settings, there is of course no such thing as an ideal error-free channel. Instead, error-free data transmission is interpreted to mean that the bit error probability can be brought to an arbitrarily small constant. The bit error probability, or bit error rate (BER), used in benchmarking is often chosen to be 10−5 or 10−6. Now, if the transmission rate, the bandwidth and the noise power are fixed, we get a lower bound on the amount of energy that must be expended to convey one bit of information. Hence, Shannon capacity sets a limit on the energy efficiency of a code.

Although Shannon developed his theory already in the 1940s, several decades later code designs were still unable to come close to the theoretical bound. Even at the beginning of the 1990s, the gap between this theoretical bound and practical implementations was still at best about 3 dB. This means that practical codes required about twice as much energy as the theoretically predicted minimum.* Hence, new codes were sought that would allow for easier decoding. One way of making the task of the decoder easier is using a code with mostly high-weight code words. High-weight code words, i.e. code words containing more ones and fewer zeros, can be distinguished more easily.

Another strategy involves combining simple codes in a parallel fashion, so that each part of the code can be decoded separately with less complex decoders, and each decoder can gain from information exchange with the others. This is called the divide-and-conquer strategy. Keeping these design methods in mind, we are now ready to introduce the concept of turbo codes.
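As a quick numerical illustration of the capacity formula above, the following sketch evaluates it for an illustrative bandwidth and SNR (the numbers are made-up examples, not figures from the report):

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Upper bound on the error-free data rate of an AWGN channel (bits/second):
    R < W * log2(1 + S/N).  snr_linear is the ratio S/N, not in dB."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative numbers: a 1 MHz channel at 10 dB SNR.
snr = 10 ** (10 / 10)              # 10 dB -> linear ratio of 10
print(shannon_capacity(1e6, snr))  # about 3.46 Mbit/s
```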

4. Turbo Codes: Encoding with Interleaving

The first turbo code, based on convolutional encoding, was introduced in 1993 by Berrou et al. Since then, several schemes have been proposed and the term "turbo codes" has been generalized to cover block codes as well as convolutional codes. Simply put, a turbo code is formed from the parallel concatenation of two codes separated by an interleaver. The generic design of a turbo code is depicted in Figure 2. Although the general concept allows for a free choice of the encoders and the interleaver, most designs follow the ideas presented below:

• The two encoders used are normally identical;
• The code is in a systematic form, i.e. the input bits also occur in the output;
• The interleaver reads the bits in a pseudo-random order.

The choice of the interleaver is a crucial part of the turbo code design. The task of the interleaver is to "scramble" bits in a (pseudo-)random, albeit predetermined, fashion.

* A decibel is a relative measure. If E is the actual energy and Eref is the theoretical lower bound, then the relative energy increase in decibels is 10 log10(E/Eref). Since log10 2 ≅ 0.3, a twofold relative energy increase equals 3 dB.

Figure 2: The generic turbo encoder (input Xi feeds Encoder 1 directly and Encoder 2 through the interleaver; the outputs are the systematic output, Output 1 and Output 2)

This serves two purposes. Firstly, if the input to the second encoder is interleaved, its output is usually quite different from the output of the first encoder. This means that even if one of the output code words has low weight, the other usually does not, and there is a smaller chance of producing an output with very low weight. Higher weight, as we saw above, is beneficial for the performance of the decoder. Secondly, since the code is a parallel concatenation of two codes, the divide-and-conquer strategy can be employed for decoding. If the input to the second decoder is scrambled, its output will also be different from, or "uncorrelated" with, the output of the first encoder. This means that the corresponding two decoders will gain more from information exchange.

We now briefly review some interleaver design ideas, stressing that the list is by no means complete. The first three designs are illustrated in Figure 3 with a sample input size of 15 bits.

1. A "row-column" interleaver: data is written row-wise and read column-wise. While very simple, it also provides little randomness.

2. A “helical” interleaver: data is written row-wise and read diagonally.

3. An "odd-even" interleaver: first, the bits are left uninterleaved and encoded, but only the odd-positioned coded bits are stored. Then, the bits are scrambled and encoded, but now only the even-positioned coded bits are stored. Odd-even encoders can be used when the second encoder produces one output bit per input bit.


4. A pseudo-random interleaver defined by a pseudo-random number generator or a look-up table.

Input (written row-wise):
X1  X2  X3  X4  X5
X6  X7  X8  X9  X10
X11 X12 X13 X14 X15

Row-column interleaver output:
X1 X6 X11 X2 X7 X12 X3 X8 X13 X4 X9 X14 X5 X10 X15

Helical interleaver output:
X11 X7 X3 X14 X10 X1 X12 X8 X4 X15 X6 X2 X13 X9 X5

Odd-even interleaver output:

Encoder output without interleaving:
X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X11 X12 X13 X14 X15
Y1 -  Y3 -  Y5 -  Y7 -  Y9 -   Y11 -   Y13 -   Y15

Encoder output with row-column interleaving:
X1 X6 X11 X2 X7 X12 X3 X8 X13 X4 X9 X14 X5 X10 X15
-  Z6 -   Z2 -  Z12 -  Z8 -   Z4 -  Z14 -  Z10 -

Final output of the encoder:
Y1 Z6 Y3 Z2 Y5 Z12 Y7 Z8 Y9 Z4 Y11 Z14 Y13 Z10 Y15


Figure 3: Interleaver designs

There is no such thing as a universally best interleaver. For short block sizes, the odd-even interleaver has been found to outperform the pseudo-random interleaver, and vice versa for longer blocks. The choice of the interleaver plays a key part in the success of the code, and the best choice depends on the code design. For further reading, several articles on interleaver design can be found.
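Of the designs listed above, the row-column permutation is simple enough to state directly in code. The following sketch (function and variable names are illustrative) reproduces the 15-bit row-column example of Figure 3:

```python
def row_column_interleave(bits, rows, cols):
    """Write `bits` row-wise into a rows x cols array and read it column-wise."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

# Reproduces the 15-bit example above (3 rows of 5 symbols).
x = [f"X{i}" for i in range(1, 16)]
print(row_column_interleave(x, rows=3, cols=5))
# ['X1', 'X6', 'X11', 'X2', 'X7', 'X12', 'X3', 'X8', 'X13',
#  'X4', 'X9', 'X14', 'X5', 'X10', 'X15']
```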

5. Recursive Systematic Convolutional (RSC) Encoder

The recursive systematic convolutional (RSC) encoder is obtained from the nonrecursive nonsystematic (conventional) convolutional encoder by feeding back one of its encoded outputs to its input. Figure 4.1 shows a conventional convolutional encoder.

Figure 4.1: Conventional convolutional encoder

The conventional convolutional encoder is represented by the generator sequences g1 = [1 1 1] and g2 = [1 0 1], and can be represented equivalently in a more compact form as G = [g1, g2]. The RSC encoder of this conventional convolutional encoder is represented as G = [1, g2/g1], where the first output (represented by g1) is fed back to the input. In the above representation, 1 denotes the systematic output, g2 denotes the feedforward output, and g1 is the feedback to the input of the RSC encoder. Figure 4.2 shows the resulting RSC encoder.

Figure 4.2: The RSC encoder obtained from the previous figure
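A minimal sketch of this RSC encoder follows, assuming the registers start in the all-zero state and ignoring trellis termination; the function name and the (systematic, parity) output pairing are illustrative choices, not mandated by the report:

```python
def rsc_encode(bits):
    """Rate-1/2 RSC encoder G = [1, g2/g1] with g1 = [1 1 1], g2 = [1 0 1].

    For each input bit x it emits (x, parity): the systematic bit plus one
    parity bit computed through the feedback register.  The two registers
    start at zero and no termination tail is appended (a simplification).
    """
    s1 = s2 = 0
    out = []
    for x in bits:
        a = x ^ s1 ^ s2        # feedback node (taps from g1 = 1 1 1)
        p = a ^ s2             # parity output (taps from g2 = 1 0 1)
        out.append((x, p))     # systematic bit, parity bit
        s1, s2 = a, s1         # shift the register
    return out

print(rsc_encode([1, 0, 1, 1]))
```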


6. Turbo Codes: Some Notes on Decoding

In the traditional decoding approach, the demodulator makes a "hard" decision on the received symbol and passes to the error control decoder a discrete value, either a 0 or a 1. The disadvantage of this approach is that while the value of some bits is determined with greater certainty than that of others, the decoder cannot make use of this information.

A soft-in-soft-out (SISO) decoder receives as input a "soft" (i.e. real) value of the signal. The decoder then outputs for each data bit an estimate expressing the probability that the transmitted data bit was equal to one. In the case of turbo codes, there are two decoders, one for the output of each encoder. Both decoders provide estimates of the same set of data bits, albeit in a different order. If all intermediate values in the decoding process are soft values, the decoders can gain greatly from exchanging information, after appropriate reordering of values. Information exchange can be iterated a number of times to enhance performance. At each round, the decoders re-evaluate their estimates using information from the other decoder, and only in the final stage are hard decisions made, i.e. each bit is assigned the value 1 or 0. Such decoders, although more difficult to implement, are essential in the design of turbo codes.
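To make the hard-versus-soft distinction concrete, the sketch below computes the soft value (log-likelihood ratio) a demodulator could hand to a SISO decoder, assuming BPSK signalling over an AWGN channel with the mapping bit 0 → +1 and bit 1 → −1; the mapping, noise model and numbers are illustrative assumptions, not taken from the report:

```python
def bpsk_llr(received, noise_var):
    """Soft value (log-likelihood ratio) of a BPSK symbol over AWGN.

    With the mapping bit 0 -> +1, bit 1 -> -1, L = log P(b=0|r)/P(b=1|r)
    = 2*r / noise_var.  The sign gives the hard decision; the magnitude is
    the confidence that a SISO decoder can exploit and a hard-decision
    decoder cannot.
    """
    return 2.0 * received / noise_var

for r in (+1.3, +0.1, -0.9):
    llr = bpsk_llr(r, noise_var=0.5)
    hard = 0 if llr >= 0 else 1
    print(f"r={r:+.1f}  LLR={llr:+.1f}  hard decision={hard}")
```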

7. Working of the Turbo Code with an Example

Figure 5.1: Encoding


Figure 5.2: Decoding

8. Turbo Codes: Performance

We have seen that conventional codes left a 3 dB gap between theory and practice. After bringing out the arguments for the efficiency of turbo codes, one clearly wants to ask: how efficient are they? Already the first rate 1/3 code proposed in 1993 made a huge improvement: the gap between Shannon's limit and implementation practice was only 0.7 dB, giving a less than 1.2-fold overhead. (In the authors' measurements, the allowed bit error rate BER was 10−5.) In [2], a thorough comparison between convolutional codes and turbo codes is given. In practice, the code rate usually varies between 1/2 and 1/6. Let the allowed bit error rate be 10−6. For code rate 1/2, the relative increase in energy consumption is then 4.80 dB for convolutional codes and 0.98 dB for turbo codes. For code rate 1/6, the respective numbers are 4.28 dB and −0.12 dB (a negative value does not actually violate Shannon's limit: the value is negative because we allow for a small error probability, whereas Shannon capacity applies to perfectly error-free transmission). It can also be noticed that turbo codes gain significantly more from lowering the code rate than conventional convolutional codes.

Figure 6: Performance
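The decibel gaps quoted above translate directly into multiplicative energy overheads via the relation from the earlier footnote; a small sketch of that conversion, using the rate-1/2 figures from the text:

```python
def db_to_energy_ratio(db):
    """Convert a relative gap in decibels to a multiplicative energy overhead."""
    return 10 ** (db / 10)

# Gaps quoted above for rate-1/2 codes at BER 1e-6 (values taken from the text).
for name, gap_db in [("convolutional", 4.80), ("turbo", 0.98)]:
    ratio = db_to_energy_ratio(gap_db)
    print(f"{name}: {gap_db} dB  ->  about {ratio:.2f}x the theoretical minimum energy")
```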


9. The UMTS Turbo Code

Figure 7: The UMTS turbo encoder

The UMTS turbo encoder closely follows the design ideas presented in the original 1993 paper. The starting building block of the encoder is the simple convolutional encoder depicted in Figure 1. This encoder is used twice, once without interleaving and once with the use of an interleaver, exactly as described above. In order to obtain a systematic code, desirable for better decoding, the following modifications are made to the design. Firstly, a systematic output is added to the encoder. Secondly, the second output from each of the two encoders is fed back to the corresponding encoder's input. The resulting turbo encoder is a rate 1/3 encoder, since for each input bit it produces one systematic output bit and two parity bits. Details on the interleaver design can be found in the corresponding specification.
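A highly simplified sketch of this parallel-concatenation structure is given below. It includes a copy of the RSC sketch from the previous section so the block is self-contained, and uses a toy pseudo-random permutation; the real UMTS interleaver, generator polynomials and trellis termination are omitted, so this only illustrates how one systematic and two parity streams are formed per input bit:

```python
import random

def rsc_encode(bits):
    """Memory-2 RSC encoder G = [1, g2/g1], g1 = [1 1 1], g2 = [1 0 1]
    (same sketch as in the RSC section; registers start at zero, no termination)."""
    s1 = s2 = 0
    out = []
    for x in bits:
        a = x ^ s1 ^ s2        # feedback node
        p = a ^ s2             # parity output
        out.append((x, p))
        s1, s2 = a, s1
    return out

def turbo_encode(bits, interleaver):
    """Simplified rate-1/3 parallel-concatenated (turbo) encoder.

    Per input bit it emits (systematic, parity1, parity2): parity1 from an RSC
    encoder fed the bits in order, parity2 from an identical RSC encoder fed
    the interleaved bits.  The actual UMTS interleaver and polynomials differ.
    """
    parity1 = [p for _, p in rsc_encode(bits)]
    interleaved = [bits[i] for i in interleaver]
    parity2 = [p for _, p in rsc_encode(interleaved)]
    return list(zip(bits, parity1, parity2))

bits = [1, 0, 1, 1, 0, 0, 1, 0]
pi = random.Random(0).sample(range(len(bits)), len(bits))  # toy pseudo-random permutation
print(turbo_encode(bits, pi))
```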

As a comparison, the GSM system uses conventional convolutional encoding in combination with block codes. The code rate varies with the type of input; in the case of a speech signal it is 260/456 < 1/2.


10. Conclusions

Turbo codes are a recent development in the field of forward error correction channel coding. The codes make use of three simple ideas: parallel concatenation of codes to allow simpler decoding; interleaving to provide a better weight distribution; and soft decoding to enhance decoder decisions and maximize the gain from decoder interaction.

While earlier conventional codes performed at least twice as badly as the theoretical bound suggested (in terms of energy efficiency or, equivalently, channel capacity), turbo codes immediately achieved performance results in the near range of the theoretically best values, giving a less than 1.2-fold overhead. Since the first proposed design in 1993, research in the field of turbo codes has produced even better results. Nowadays, turbo codes are used in many commercial applications, including both third-generation cellular systems, UMTS and cdma2000.

11. References

[1] University of South Australia, Institute for Telecommunications Research, Turbo Coding Research Group. http://www.itr.unisa.edu.au/~steven/turbo/.

[2] S. A. Barbulescu and S. S. Pietrobon. Turbo codes: A tutorial on a new class of powerful error correction coding schemes. Part I: Code structures and interleaver design. J. Elec. and Electron. Eng., Australia, 19:129–142, September 1999.

[3] S. A. Barbulescu and S. S. Pietrobon. Turbo codes: A tutorial on a new class of powerful error correction coding schemes. Part II: Decoder design and performance. J. Elec. and Electron. Eng., Australia, 19:143–152, September 1999.


