
    DESIGN AND ANALYSIS OF LDPC CONVOLUTIONAL CODES

    A Dissertation

    Submitted to the Graduate School

    of the University of Notre Dame

    in Partial Fulfillment of the Requirements

    for the Degree of

    Doctor of Philosophy

    by

    Arvind Sridharan, M. S.

    Daniel J. Costello, Jr., Director

    Graduate Program in Electrical Engineering

    Notre Dame, Indiana

    February 2005


© Copyright by Arvind Sridharan

    2005

    All Rights Reserved


    DESIGN AND ANALYSIS OF LDPC CONVOLUTIONAL CODES

    Abstract

    by

    Arvind Sridharan

Low-density parity-check (LDPC) block codes, invented by Gallager in 1962, achieve exceptional error performance on a wide variety of communication channels. LDPC convolutional codes are the convolutional counterparts of LDPC block codes. This dissertation describes techniques for the design of LDPC convolutional codes, and analyzes their distance properties and iterative decoding convergence thresholds.

The construction of time invariant LDPC convolutional codes by unwrapping the Tanner graph of algebraically constructed quasi-cyclic LDPC codes is described. The convolutional codes outperform the quasi-cyclic codes from which they are derived. The design of parity-check matrices for time invariant LDPC convolutional codes by the polynomial extension of a base matrix is proposed. An upper bound on free distance, proving that time invariant LDPC convolutional codes are not asymptotically good, is obtained.

The Tanner graph is used to describe a pipelined message passing based iterative decoder for LDPC convolutional codes that outputs decoding results continuously. The performance of LDPC block and convolutional codes is compared for fixed decoding parameters like computational complexity, processor complexity, and decoding delay. In each case, the LDPC convolutional code performance is


at least as good as that of LDPC block codes. An analog circuit to implement pipelined decoding of LDPC convolutional codes is proposed.

The distance properties of a permutation matrix based (time varying) ensemble of (J, K) regular LDPC convolutional codes are investigated. It is proved that these codes (for J > 2) have free distance increasing linearly with constraint length, i.e., they are asymptotically good. Further, the asymptotic free distance to constraint length ratio for the convolutional codes is several times larger than the minimum distance to block length ratio for corresponding LDPC block codes.

Iterative decoding of terminated LDPC convolutional codes, based on the ensemble mentioned above, is analyzed, assuming transmission over a binary erasure channel. The structured irregularity of the codes leads to significantly better convergence thresholds compared to corresponding LDPC block codes. At the calculated thresholds, both the bit and block error probability can be made arbitrarily small. The results obtained suggest that the thresholds approach capacity with increasing J.


    To all my teachers


CONTENTS

FIGURES . . . vi

TABLES . . . ix

ACKNOWLEDGMENTS . . . x

CHAPTER 1: INTRODUCTION . . . 1
  1.1 Digital communication systems . . . 1
  1.2 Channel models . . . 4
  1.3 Channel codes . . . 7
    1.3.1 Block codes . . . 7
    1.3.2 Convolutional codes . . . 10
  1.4 Channel decoding . . . 13
    1.4.1 Optimal decoding . . . 13
    1.4.2 Iterative decoding . . . 14
  1.5 Outline of the thesis . . . 15

CHAPTER 2: BACKGROUND . . . 17
  2.1 LDPC block codes . . . 17
  2.2 Iterative decoding techniques . . . 20
  2.3 LDPC convolutional codes . . . 24

CHAPTER 3: LDPC CONVOLUTIONAL CODES CONSTRUCTED FROM QUASI-CYCLIC CODES . . . 30
  3.1 Code construction . . . 31
    3.1.1 Construction of quasi-cyclic LDPC block codes . . . 31
    3.1.2 Construction of LDPC convolutional codes . . . 36
  3.2 Properties of constructed codes . . . 42
    3.2.1 Relation between the block and convolutional Tanner graphs . . . 42
    3.2.2 Quasi-cyclic block codes viewed as tail-biting convolutional codes . . . 43
    3.2.3 Girth . . . 44
    3.2.4 Minimum distance . . . 46
    3.2.5 Encoding . . . 47
  3.3 Performance results . . . 49

CHAPTER 4: A CONSTRUCTION FOR TIME INVARIANT LDPC CONVOLUTIONAL CODES . . . 55
  4.1 Code construction technique . . . 56
  4.2 Choosing the base matrix . . . 60
    4.2.1 Girth properties . . . 61
    4.2.2 Optimizing thresholds . . . 62
  4.3 Distance bound . . . 63

CHAPTER 5: IMPLEMENTATION OF LDPC CONVOLUTIONAL CODE DECODERS . . . 67
  5.1 Decoder description . . . 68
  5.2 Comparison of message passing decoding of block and convolutional codes . . . 73
  5.3 Analog decoding . . . 77
    5.3.1 Analog decoding of LDPC convolutional codes . . . 79

CHAPTER 6: DISTANCE BOUNDS FOR LDPC CONVOLUTIONAL CODES . . . 87
  6.1 An LDPC convolutional code ensemble . . . 88
  6.2 A lower bound on row distance . . . 92
    6.2.1 Row distance . . . 93
    6.2.2 Probability for a single block of constraints . . . 95
    6.2.3 Lower bound on row distance . . . 102
  6.3 A lower bound on free distance . . . 107
  6.4 Results and discussion . . . 111

CHAPTER 7: CONVERGENCE ANALYSIS ON THE ERASURE CHANNEL . . . 117
  7.1 Tanner graph description . . . 118
  7.2 Termination of LDPC convolutional codes . . . 119
  7.3 Decoding analysis on the erasure channel . . . 134
  7.4 Results . . . 140

CHAPTER 8: CONCLUDING REMARKS . . . 147


APPENDIX A: MAXIMUM OF THE FUNCTION U([1,L]) . . . 150

BIBLIOGRAPHY . . . 152


    FIGURES

    1.1 Block diagram of a digital communication system. . . . . . . . . . 2

    1.2 The binary erasure channel . . . . . . . . . . . . . . . . . . . . . . 5

    1.3 The binary symmetric channel . . . . . . . . . . . . . . . . . . . . 5

    1.4 Illustration of the additive white Gaussian noise channel model. . 6

    1.5 A rate 1/2 convolutional encoder with memory 2. . . . . . . . . . 10

2.1 The Tanner graph of the parity-check matrix of Example 2.1.1. The Tanner graph consists of 20 symbol nodes, each with J = 3 edges, and 15 constraint nodes, each with K = 4 edges. . . . 19

2.2 Illustration of the Jimenez-Zigangirov method to construct the syndrome former of a b = 1, c = 2 LDPC convolutional (4, 3, 6) code. . . . 26

    2.3 The Tanner graph for the convolutional code of Example 2.3.2. . . 28

    3.1 Tanner graph for a [155,64,20] QC code. . . . . . . . . . . . . . . 35

    3.2 Tanner graph for a [21,8,6] QC code. . . . . . . . . . . . . . . . . 36

    3.3 Tanner graph for the rate 1/3 convolutional code of Example 5. . 42

    3.4 A 12-cycle. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

3.5 A systematic syndrome former based encoder for the R = 1/3 code of Example 5. . . . 48

3.6 The performance of convolutional LDPC codes versus the corresponding QC block LDPC codes for (3,5) connectivity. . . . 50

3.7 The performance of convolutional LDPC codes versus the corresponding QC block LDPC codes for (5,7) connectivity. . . . 52

3.8 The performance of convolutional LDPC codes versus the corresponding QC block LDPC codes for (3,17) connectivity. . . . 53

3.9 Performance of regular (3, 5) QC block codes with increasing circulant size m. . . . 54


4.1 Performance of the codes in Examples 4.1.1-4.1.3 on the AWGN channel. The performance of a QC code with N = 1600 obtained from the convolutional code of Example 4.1.3 is also shown. . . . 60

4.2 Existence of a six-cycle with polynomials of weight 3 or more. . . . 61

5.1 The sliding window decoder for I iterations of the code of Example 5. . . . 70
5.2 Continuous decoding of the code of Example 5 with I iterations. . . . 71

5.3 Performance comparison of LDPC block and convolutional codes on the AWGN channel for the same processor complexity and decoding delays. The block code is decoded with a very fast decoder. . . . 76

    5.4 Serial decoding of LDPC block codes using I decoders. . . . . . . 77

    5.5 Transistor circuits for implementing the symbol (SYM) and con-straint (CHK) node updates. . . . . . . . . . . . . . . . . . . . . . 78

    5.6 The rotating ring decoder . . . . . . . . . . . . . . . . . . . . . . 80

5.7 Bit error rate curves on the AWGN channel with rotating ring decoders (Type c)) of different window sizes. The ML performance is also shown. . . . 82

5.8 Bit error rate curves for different decoder types with W = 9. Type a) is the usual sliding window decoder. Type b) is a rotating ring decoder with channel values reloaded at each time. Type c) is a standard rotating window decoder (no reloading). . . . 83

5.9 Decoding results for the time discrete decoder (left two columns) and the time continuous decoder (right two columns) for one received block of length 55 bits. Each row represents a different test case. . . . 85

5.10 Probability distributions of output LLRs for analog and digital decoders. L(z) is the decoder output and p(L(z)) is the probability distribution function of the random variable L(z). . . . 86

6.1 Syndrome former for a code in CP(3, 6, M). . . . 89

6.2 Permuting rows of a syndrome former H^T in CP(3, 6, M). . . . 91
6.3 Illustration of a length L = 3 segment of H^T_[1,L]. . . . 94

6.4 Set of parameters for calculating the function F3([1,3],[1,3]). . . . 103
6.5 Lower bound on the row distance to constraint length ratio d(L)/6M as a function of L. . . . 112

    6.6 Maximizing weight vector [1,L] for different L. . . . . . . . . . . . 113


    7.1 Tanner graph connections of the 2M symbol nodes at time t. . . . 118

7.2 Tanner graph connections of the M parity-check equations at time t. . . . 119

7.3 The matrix H^T_[L+1,L+3]. . . . 123

    7.4 The matrices resulting from row operations on A. . . . . . . . . . 125

7.5 The matrices C3 and C2. . . . 130
7.6 Tanner graph of a terminated convolutional code obtained from C(M, 2M, 3). . . . 133
7.7 Illustration of the messages (a) to a symbol node and (b) to a constraint node for the case J = 3. . . . 138

    7.8 The first level of computation trees for t = 1, 2, 3 with J = 3. . . . 139

    7.9 Illustration of the tunneling effect and the breakout value . . . . . 143


    TABLES

3.1 EXAMPLES OF CODES CONSTRUCTED FROM (PRIME) CIRCULANT SIZES. . . . 34

4.1 BEST THRESHOLDS OBTAINED FOR BASE MATRICES OF DIFFERENT SIZES . . . 64

6.1 DISTANCE BOUND FOR LDPC BLOCK AND CONVOLUTIONAL CODES FOR DIFFERENT (J, K). . . . 115

7.1 THRESHOLDS FOR THE ENSEMBLE CP(3, 6, M) WITH DIFFERENT L AND FOR THE CORRESPONDING IRREGULAR BLOCK CODES. . . . 140

7.2 THRESHOLDS FOR THE ENSEMBLES CP(J, 2J, M) WITH DIFFERENT J. . . . 145


    ACKNOWLEDGMENTS

This dissertation would not have been possible without the help of numerous people over the past five years. Although it is only possible to thank a few of them here, I would like to express my gratitude to all of them.

I would like to thank Dr. Costello for being a wonderful adviser. He provided constant support and encouragement, made himself readily available, and always had a patient ear, even for all the nonsense I had to say. Many thanks for the wonderful six months in Germany!

Dr. Tanner gave a series of interesting talks in the Fall of 2000. I would like to thank him for these talks and for other stimulating discussions, which have resulted in a chapter of this dissertation.

Dr. Kamil Zigangirov's help in shaping my research cannot be overemphasized. Kamil's down-to-earth personality and enthusiasm meant that working with him has been not only a great opportunity and learning experience but also great fun. My heartfelt thanks to him also for the wonderful dinners he cooked and for the demonstrations of the art of drinking vodka, in both Russian and Swedish styles.

    Dr. Michael Lentmaier, during his time at Notre Dame, taught me the value

of being systematic. His clear insight and attitude to research were a wonderful

    example. Very often he dropped everything he was doing to discuss my research

    problems. These discussions have been immensely useful.


    I would like to thank Dr. Fuja and Dr. Collins for serving on my defense

    committee, and Dr. Peter Massey, Andrew Schaefer, and Dr. Dmitry Trukhachev

    for the opportunity to discuss and work together.

My friends and colleagues at Notre Dame: Adrish, Ajay, Chandra, Junying, Ching, Ravi, Rajkumar, Tony, Ajit, Hongzhi, Shirish, Vasu, Sridhar, Ali, Xiaowei, Guanjun, Kameshwar, Shiva, and Jagadish, to name a few, made Notre Dame a pleasant place to work. Special thanks to Ali for meeting every single inane computer problem I had with a smile and solving it. I would also like to thank Sameer for providing the LaTeX class file used to format the dissertation.

    I cannot thank the EE department staff enough for all their help. I would like

to thank Pat in particular for always doing more than was needed.

    The music concerts, cinema, theater, wonderful library, and sports facilities at

    Notre Dame meant I always had more to do than time permitted. The opportunity

    to see several plays of Shakespeare enacted on stage by Actors from the London

    Stage will remain an unforgettable part of my time at Notre Dame.

    A significant reason for the wonderful time I had at Notre Dame was Deepak.

    During his time at Notre Dame, Deepak was my friend, philosopher, guide, punch-

    ing bag, and more. I have had countless discussions with him on virtually every-

    thing under the sun. I consider myself fortunate to have spent time with him.

    My parents and Arathi made home a wonderful place to be. They have always

    provided me with unconditional support, love, and encouragement. Anthony,

    Clement, and Dilip have taught me much and continue to do so. I am much the

    better for knowing them.


    CHAPTER 1

    INTRODUCTION

    Today the use of digital cell phones, the internet, and CD and DVD players is

    ubiquitous. In all of these cases, digitally represented data is either transmitted

    from one place to another or retrieved from a storage device when needed. For

the proper functioning of these systems, the transmitted or retrieved data must

    be sufficiently error free. This can be accomplished efficiently by using channel

    coding techniques. One such channel coding technique for achieving reliable data

    transmission or retrieval is to use low-density parity-check (LDPC) convolutional

    codes. The design of LDPC convolutional codes and an analysis of their properties

    are the topic of this dissertation.

    The next section describes a model of a general digital communication system.

    In Section 1.2, the discrete time model of the communication channels considered

    throughout this dissertation is described. The next two sections describe the basic

    concepts of channel coding and decoding. In Section 1.5, an outline of the rest of

    the thesis is presented.

    1.1 Digital communication systems

    Most generally, a communication system aims at transmitting information

    from a source to a destination across a channel. The nature of the source and


Figure 1.1. Block diagram of a digital communication system.

    destination, e.g., computers exchanging messages or a conversation between two

    humans, is not of importance. Similarly, what the information exchanged repre-

    sents, e.g., music or conversation, is not essential. The information exchange can

    take place across two different locations, as in the case of telephone conversations,

    or across both different times and locations as in the case of storage and retrieval

    systems like CDs.

    Claude Shannon, in his famous paper [1], pioneered the area of information

    theory. This paper quantified the notion of information and has been the basis of

the study and design of communication systems since. One of Shannon's results

is that it is possible to separate the processing of the source output before transmission into source encoding and channel encoding without sacrificing optimality.

    The general model of a digital communication system based on this separation

    principle is shown in Fig. 1.1.


Typically, the data generated by most sources has redundancy, the transmission of which can be avoided. For example, in a phone conversation there are often periods of silence which need not be transmitted. The function of the source encoder is twofold: first, to convert the information from the source into a binary sequence u, the information sequence, and second, to remove all redundancy from the source information in obtaining u.

    Probably the most remarkable result obtained by Shannon in [1] is the noisy

    channel coding theorem. This theorem states that, for a broad class of channels,

    reliable transmission is possible as long as the transmission rate is less than the

    channel capacity. This is achieved by introducing redundancy, by means of a

    channel code, suited for overcoming the debilitating effects of the channel on the

    data to be transmitted. This result led to the subject of coding theory, which

    investigates the use of efficient and practical coding schemes to achieve reliable

    communication at information rates close to the capacity of the channel. The

    function of the channel code is to add redundancy in a controlled fashion to the

    information sequence u and generate the code sequence v to facilitate reliable

    transmission to the destination.

    The modulator converts symbols of the code sequence v into a form suited

    for transmission across the physical channel. The physical channel is the medium

    across which data is transmitted. It can be, for example, a simple wire, a storage

    medium like a CD, or the wireless channel from a base station to a cell phone user.

    The channel, while allowing the transfer of data, corrupts the transmitted signal

    from the modulator in some fashion. As already noted, Shannon proved that as

    long as the transmission rate is less than the channel capacity these effects can be

    overcome.


The blocks after the channel undo the operations performed during data transmission. The demodulator generates a discrete sequence r, the received sequence,

from the output of the channel. The channel decoder uses r to generate an estimate û of the information sequence u. Ideally, û is the same as u. The source decoder uses the estimated information sequence û and regenerates the source data in the form required by the destination.

In the scope of this work, we are concerned with channel encoding and decoding. We therefore group the blocks in Fig. 1.1 as shown, so that we obtain a

binary information source, a discrete channel, and a binary information sink. The information sequence¹ u is transmitted from the binary source across a discrete channel to the binary sink. The discrete channel corrupts the transmitted code sequence, and this can be overcome by appropriately designed coding schemes.

    1.2 Channel models

In this section we describe three different models for the discrete channel: the binary erasure channel (BEC), the binary symmetric channel (BSC), and the additive white Gaussian noise (AWGN) channel.

The binary erasure channel is shown in Fig. 1.2. In this case, there are three possible channel outputs: the transmitted symbols 0 and 1, or an erasure e. The BEC does not cause any errors; a symbol is either received correctly or as an erasure. The probability of a symbol (0 or 1) being received as an erasure, the erasure probability of the channel, is p. The BEC has capacity 1 - p bits per channel use.

¹For an ideal source encoder, the bits of the information sequence are independent and equally likely, and we assume that this is the case throughout the dissertation.


Figure 1.2. The binary erasure channel
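As an illustrative sketch (not from the dissertation; the function name bec and the use of None to mark erasures are our assumptions), the BEC can be simulated in a few lines of Python:

```python
import random

def bec(bits, p, rng=random.Random(0)):
    """Transmit bits over a binary erasure channel with erasure probability p.

    The BEC never flips a bit: each symbol is delivered intact with
    probability 1 - p and erased (returned as None) with probability p."""
    return [None if rng.random() < p else b for b in bits]

tx = [0, 1, 1, 0, 1]
rx = bec(tx, p=0.4)
# Every symbol that survives is guaranteed to equal the transmitted one.
assert all(r is None or r == t for t, r in zip(tx, rx))
```

On average a fraction 1 - p of the transmitted symbols get through unchanged, which matches the channel capacity of 1 - p bits per channel use.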

Figure 1.3. The binary symmetric channel

The BSC is shown in Fig. 1.3. In this case a 0 can be erroneously received as a 1, or vice versa, with crossover probability p. The BSC has capacity 1 - h(p) bits per channel use, where the binary entropy function h(p) is defined as

    h(p) = -p log2(p) - (1 - p) log2(1 - p).    (1.1)
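A small numerical check of (1.1) and of the BSC capacity formula (the function names are ours, not the dissertation's):

```python
import math

def h(p):
    """Binary entropy function of (1.1), in bits; h(0) = h(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of the BSC with crossover probability p: 1 - h(p) bits/use."""
    return 1 - h(p)

assert h(0.5) == 1.0              # entropy is maximal at p = 1/2
assert bsc_capacity(0.0) == 1.0   # a noiseless BSC carries one bit per use
```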

    The third model we consider, the AWGN channel, has binary inputs as earlier

    but the outputs are real numbers. The model for the AWGN channel is shown in

Fig. 1.4. The modulator maps the bits of v into antipodal waveforms with energy


Figure 1.4. Illustration of the additive white Gaussian noise channel model.

Es. The waveforms are represented by their amplitudes ±√Es in Fig. 1.4. The channel adds noise with one-sided power spectral density N0 to the transmitted waveforms. The optimal demodulator generates at each time t the real number

    rt = (1 - 2vt)√Es + nt,    vt ∈ {0, 1},    (1.2)

where nt is a Gaussian random variable with variance σ² = N0/2. The received value rt is therefore a random variable with conditional probability density function

    p(rt | vt) = (1/√(πN0)) exp(-(rt - (1 - 2vt)√Es)²/N0).    (1.3)

The AWGN channel model is a good model for channels where the noise is the result of a superposition of a large number of independent, identically distributed noise sources. The capacity of the discrete AWGN channel considered here must be calculated numerically. In this dissertation we deal mainly with the AWGN channel, except in the penultimate chapter, where we focus on the erasure channel.
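The discrete-time AWGN model of (1.2) can be sketched as follows (a hypothetical simulation of our own, not part of the dissertation):

```python
import math
import random

def awgn_channel(code_bits, es, n0, rng=random.Random(1)):
    """Map bits to antipodal amplitudes +/-sqrt(Es) and add Gaussian noise:
    r_t = (1 - 2 v_t) * sqrt(Es) + n_t, with n_t ~ N(0, N0/2), as in (1.2)."""
    sigma = math.sqrt(n0 / 2.0)   # noise standard deviation, variance N0/2
    return [(1 - 2 * v) * math.sqrt(es) + rng.gauss(0.0, sigma)
            for v in code_bits]

r = awgn_channel([0, 1, 1, 0], es=1.0, n0=0.5)
# A hard decision on the sign recovers the bit whenever the noise is small:
hard = [0 if x > 0 else 1 for x in r]
```

The demodulator output rt is real-valued; an uncoded hard decision simply thresholds at zero, while the iterative decoders discussed in later chapters use rt directly as soft information.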


    1.3 Channel codes

    Channel codes introduce redundancy into the information sequence so that it

    is possible to correct erroneous symbols altered by the channel and thus ensure

    reliable transmission. For this reason, channel codes are often referred to as error

    correcting codes. In this section, a brief introduction to error correcting codes is

    presented. We review the two main classes of codes, block codes and convolutional

    codes, and associated concepts. A more thorough treatment of block codes can

    be found in several textbooks [2][3][4][5][6]. Convolutional codes are described in

    detail in [7][6].

    1.3.1 Block codes

    Consider data transmission over a BSC. A simple way to combat errors in

    this case is to repeat each information bit N times, a length N repetition code.

    The decoder uses a majority vote to determine the information bit. The decoding

    decision is correct as long as the number of errors is less than N/2. Thus by

    increasing N, arbitrary reliability can be achieved. The code rate R, the number

    of information symbols transmitted per channel use, for a length N repetition

code, is R = 1/N. The code rate of a repetition code tends to zero as N is increased, which makes it not very interesting in practice. A more efficient coding

    scheme is to encode K > 1 information symbols together to generate a codeword

    of length N.
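The repetition code and its majority-vote decoder can be written out directly (a minimal sketch; the function names are ours):

```python
def rep_encode(bit, n):
    """Length-n repetition code: rate R = 1/n."""
    return [bit] * n

def rep_decode(received):
    """Majority vote: correct whenever fewer than n/2 bits were flipped."""
    return 1 if sum(received) > len(received) / 2 else 0

word = rep_encode(1, 5)
word[0] ^= 1
word[3] ^= 1          # two errors: still fewer than 5/2, so decoding succeeds
assert rep_decode(word) == 1
```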

    Example 1.3.1 A rate R = 4/7 error correcting code


A codeword v = (v0, v1, v2, v3, v4, v5, v6) of length N = 7 is generated for a group of K = 4 information bits u = (u0, u1, u2, u3) as

    v = (u0, u1, u2, u3, u0 + u1 + u2, u1 + u2 + u3, u0 + u1 + u3).    (1.4)

The addition here is over the binary field GF(2). The code is defined by the 2^K = 16 different codewords corresponding to the 2^K different information blocks u. Since there are N = 7 code bits for K = 4 information bits, the code rate is R = 4/7. □

    A code of the form described in Example 1.3.1, where codewords are generated

for a block of information symbols, is called a block code. In this dissertation we focus only on linear block codes over the binary field GF(2). An (N, K) linear

block code of rate R = K/N is a K-dimensional subspace of the vector space

    GF(2)N. The code in Example 1.3.1 is a (7, 4) linear block code.

    For a linear code, the codeword corresponding to an information block can be

conveniently specified by means of matrix multiplication as v_(1×N) = u_(1×K) G_(K×N),

    where G is a generator matrix. In Example 1.3.1, the generator matrix is

G =

    [ 1 0 0 0 1 0 1 ]
    [ 0 1 0 0 1 1 1 ]
    [ 0 0 1 0 1 1 0 ]
    [ 0 0 0 1 0 1 1 ]  (4×7).

    (1.5)

Any set of K basis vectors can be used to obtain a generator matrix. The different generator matrices specify the same code, but the mapping of information blocks to codewords is different. A systematic generator matrix is one where the K


    information symbols appear as a part of the codeword. The generator matrix in

    (1.5) is systematic.
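Encoding with the generator matrix (1.5) is an ordinary matrix product over GF(2); the following sketch (with helper names of our own) reproduces the mapping (1.4):

```python
# Generator matrix G of (1.5) for the (7, 4) code of Example 1.3.1.
G = [
    [1, 0, 0, 0, 1, 0, 1],
    [0, 1, 0, 0, 1, 1, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 0, 1, 1],
]

def encode(u, G):
    """v = u G over GF(2): each code bit is a parity of information bits."""
    n = len(G[0])
    return [sum(u[k] * G[k][j] for k in range(len(u))) % 2 for j in range(n)]

u = (1, 0, 1, 1)
v = encode(u, G)
assert v[:4] == list(u)                  # systematic: u appears inside v
assert v[4] == (u[0] + u[1] + u[2]) % 2  # first parity bit of (1.4)
```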

The Hamming weight of a vector x is the number of non-zero components in x. The Hamming distance between two vectors x and y is the Hamming weight of the vector x - y = x + y.² The minimum distance dmin of a block code C is the minimum of the Hamming distances between two distinct codewords in C. For a linear block code C, the sum of any two codewords is also a codeword, and hence dmin is equal to the minimum Hamming weight over all non-zero codewords in C. The minimum distance of a code plays a significant role in determining the error correcting capability of the code; a code with minimum distance dmin can correct up to ⌊(dmin - 1)/2⌋ errors³ on the BSC. The code of Example 1.3.1 has dmin = 3. In general, determining the minimum distance of a code is difficult, especially if the number of codewords is large.
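Because a binary linear code has only 2^K codewords, the minimum distance of a small code such as that of Example 1.3.1 can be found by exhaustive search (a brute-force sketch of our own, feasible only for small K):

```python
from itertools import product

# Generator matrix of the (7, 4) code of Example 1.3.1.
G = [
    [1, 0, 0, 0, 1, 0, 1],
    [0, 1, 0, 0, 1, 1, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 0, 1, 1],
]

def min_distance(G):
    """d_min of a linear code: the smallest Hamming weight over all
    2^K - 1 non-zero codewords."""
    k, n = len(G), len(G[0])
    best = n
    for u in product((0, 1), repeat=k):
        if not any(u):
            continue                     # skip the all-zero codeword
        v = [sum(u[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]
        best = min(best, sum(v))
    return best

assert min_distance(G) == 3   # so up to ⌊(3 - 1)/2⌋ = 1 error is correctable
```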

A linear block code can also be specified by means of a parity-check matrix. A parity-check matrix H specifies the dual space of the code, so that for any codeword v we have

    vH^T = 0,    (1.6)

where H^T denotes the transpose of H. Thus, for a generator matrix G of the block code, GH^T = 0. For an (N, K) linear block code of rate R = K/N, the parity-check matrix is an L × N matrix, where L = N - K. The 3 × 7 matrix

    H =

    [ 1 1 1 0 1 0 0 ]
    [ 0 1 1 1 0 1 0 ]
    [ 1 1 0 1 0 0 1 ]  (3×7)    (1.7)

²Over the binary field, addition and subtraction are identical.
³⌊z⌋ is the largest integer less than or equal to z.


    is a parity-check matrix for the code of Example 1.3.1. As in the case of the

generator matrix, any set of (N − K) basis vectors of the dual space can be used to form a parity-check matrix of an (N, K) linear block code. Thus, the same code can be described by several different parity-check matrices.
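The orthogonality condition GH^T = 0 is easy to verify numerically. A small illustrative sketch (Python, not from the original text) checking it for the matrices of (1.5) and (1.7):

```python
G = [[1, 0, 0, 0, 1, 0, 1],
     [0, 1, 0, 0, 1, 1, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 0, 1, 1]]
H = [[1, 1, 1, 0, 1, 0, 0],
     [0, 1, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]

# Entry (k, l) of G H^T is the inner product of row k of G with row l of H, mod 2.
GHt = [[sum(g * h for g, h in zip(grow, hrow)) % 2 for hrow in H] for grow in G]
print(GHt)  # all zeros: every row of G, hence every codeword, satisfies vH^T = 0
```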

    1.3.2 Convolutional codes

    Convolutional codes were introduced by Elias [8] as an alternative to the class

    of block codes. A code sequence of a convolutional code is obtained as the output

    of a convolutional encoder, a linear system. Thus, a code sequence is obtained

    by the convolution of an information sequence with some generating sequence.

    In a convolutional code, the information sequence is continuously encoded into a

    code sequence. Further, the output of a convolutional encoder is dependent on

    information symbols both at the current and previous times.

    Example 1.3.2 A b = 1, c = 2, m = 2, rate R = b/c = 1/2 time-invariant

    convolutional encoder.

Figure 1.5. A rate 1/2 convolutional encoder with memory 2.

Figure 1.5 shows a rate R = b/c = 1/2 convolutional encoder. At each time instant, the encoder maps b = 1 information symbol, u_t = u_t^{(0)}, into c = 2 code


symbols, v_t = (v_t^{(0)}, v_t^{(1)}). The code symbols are obtained as

v_t^{(0)} = u_t^{(0)},
v_t^{(1)} = u_t^{(0)} + u_{t-1}^{(0)} + u_{t-2}^{(0)}.  (1.8)

    The encoder stores information symbols from m = 2 time instants past, and hence

    we say that the encoder has memory 2. The encoding equations in (1.8) are the

    same for each time instant, i.e., the encoder is time invariant. It is also worth

noting that the convolutional encoder is a causal system. ∎
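The encoder of Example 1.3.2 is just a two-cell shift register, so the equations in (1.8) translate directly into a few lines of code. An illustrative sketch (Python, not from the original text):

```python
def conv_encode(u):
    """Rate-1/2, memory-2 encoder of Example 1.3.2:
    v_t^(0) = u_t and v_t^(1) = u_t + u_{t-1} + u_{t-2} (mod 2)."""
    s1 = s2 = 0  # shift-register cells holding u_{t-1} and u_{t-2}
    v = []
    for ut in u:
        v.append((ut, (ut + s1 + s2) % 2))
        s1, s2 = ut, s1
    return v

print(conv_encode([1, 0, 1, 1]))  # [(1, 1), (0, 1), (1, 0), (1, 0)]
```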

    A general rate R = b/c convolutional encoder maps an information sequence

u = ..., u_0, u_1, ..., u_t, ...,  (1.9)

where u_t = (u_t^{(0)}, ..., u_t^{(b-1)}) and u_t^{(ℓ)} ∈ GF(2), into a code sequence

v = ..., v_0, v_1, ..., v_t, ...,  (1.10)

where v_t = (v_t^{(0)}, ..., v_t^{(c-1)}) and v_t^{(ℓ)} ∈ GF(2).

For a convolutional encoder without feedback, as in Example 1.3.2, the encoding can be described as

v_t = u_t G_0(t) + u_{t-1} G_1(t) + ··· + u_{t-m} G_m(t),  (1.11)

where m is the memory of the convolutional code and G_i(t), i = 0, 1, ..., m, are binary b × c matrices for each t. For a time invariant encoder the matrices G_i(t), i = 0, 1, ..., m, are the same for all t. As in the case of block codes, the


    encoding can be described by a matrix multiplication v = uG, where the matrix

\[
G = \begin{pmatrix}
\ddots & & \ddots & & & \\
& G_0(0) & \cdots & G_m(m) & & \\
& & G_0(1) & \cdots & G_m(m+1) & \\
& & & \ddots & & \ddots \\
& & & G_0(t) & \cdots & G_m(t+m) \\
& & & & \ddots & \ddots
\end{pmatrix} \qquad (1.12)
\]

    is called a generator matrix of the code. (The blank area of the matrix is assumed

    to be filled with zeros.) For the convolutional code in Example 1.3.2, we have

G_0 = (1 1), G_1 = (0 1), and G_2 = (0 1).⁴ We note here that, as in the case of

    block codes, there are several different possible generator matrices for the same

    convolutional code.

    For a time invariant convolutional code, the information and code sequences

    are commonly expressed in terms of the delay operator D as

u(D) = ··· + u_0 + u_1 D + ··· + u_t D^t + ···

v(D) = ··· + v_0 + v_1 D + ··· + v_t D^t + ···,  (1.13)

and we have v(D) = u(D)G(D), where the b × c generator matrix G(D) is equal to

G(D) = G_0 + G_1 D + ··· + G_m D^m.  (1.14)

In Example 1.3.2, we have G(D) = [1, 1 + D + D²] = [1 1] + [0 1]D + [0 1]D².

    For a time invariant convolutional encoder without feedback, G(D) consists of

    polynomials. In general, however, G(D) consists of rational functions.

⁴Since the matrices G_i(t), i = 0, 1, 2, are the same for all t, we omit the argument t.
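The delay-operator description can be checked numerically: multiplying u(D) by G(D) = [1, 1 + D + D²], coefficient by coefficient over GF(2), reproduces the two output streams of the time-domain encoder. An illustrative sketch (Python, not from the original text; polynomials are coefficient lists, lowest degree first):

```python
def poly_mul_gf2(a, b):
    """Product of two GF(2) polynomials given as coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

u = [1, 0, 1, 1]                  # u(D) = 1 + D^2 + D^3
v0 = poly_mul_gf2(u, [1])         # v^(0)(D) = u(D) * 1
v1 = poly_mul_gf2(u, [1, 1, 1])   # v^(1)(D) = u(D) * (1 + D + D^2)
print(v0, v1)  # [1, 0, 1, 1] [1, 1, 0, 0, 0, 1]
```

The coefficients of v^(1)(D) agree with the shift-register outputs of Example 1.3.2, including the memory tail after the last information symbol.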


The minimum of the Hamming distances between two distinct code sequences of a convolutional code is called the free distance d_free of the code. Since a convolutional code is linear, d_free is equal to the minimum Hamming weight over all non-zero code sequences. As with block codes, on a BSC the transmitted sequence can be recovered as long as there are no more than ⌊(d_free − 1)/2⌋ symbols in error.

    1.4 Channel decoding

    Suppose that the code sequence5 v corresponding to an information sequence

    u is transmitted over a discrete channel with binary inputs. The function of the

channel decoder is to compute an estimate û of the information sequence u based on the received sequence r. Thus, the reliability of a coding scheme is determined

    not only by the code but also by the decoding algorithm. An optimal decoding

    algorithm is one that minimizes the probability of decoding error.

    1.4.1 Optimal decoding

    The decoding block error probability PB of a block code is the probability that

    the information sequence is decoded incorrectly, i.e.,

P_B = P(û ≠ u) = Σ_r P(û ≠ u | r) P(r).  (1.15)

Since P(r) is independent of the decoding algorithm, P_B is minimized by minimizing P(û ≠ u | r), or equivalently maximizing P(û = u | r), for each r. Thus P_B

⁵Whenever the distinction between block and convolutional codes is not relevant, we also use the term sequence in connection with an information block or codeword of a block code.


is minimized by choosing û as

û = arg max_u P(u | r).  (1.16)

Such a decoder is called a maximum a-posteriori (MAP) decoder. If all information blocks u are equally likely, then a MAP decoder is equivalent to a maximum likelihood (ML) decoder, which chooses û as

û = arg max_u P(r | u).  (1.17)

In the case of convolutional codes, ML decoding can be efficiently performed on a trellis [6][7] using the Viterbi algorithm [9][10]. For equally likely information

    sequences, the ML decoder minimizes the probability of the decoded sequence

    being in error.

    An alternative to minimizing the error probability of the complete sequence

    or block is to minimize the decoding error probability of a single information bit.

The probability that an information bit u_i is decoded incorrectly is called the bit error probability P_b = P(û_i ≠ u_i). A symbol-by-symbol MAP decoder chooses û_i as

û_i = arg max_{u ∈ {0,1}} P(u_i = u | r).  (1.18)
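On a BSC with crossover probability less than 1/2, the ML rule (1.17) reduces to choosing the codeword closest to r in Hamming distance. For a code as small as the (7, 4) code of Example 1.3.1, this can be done by exhaustive search; an illustrative sketch (Python, not from the original text):

```python
import itertools

G = [[1, 0, 0, 0, 1, 0, 1],
     [0, 1, 0, 0, 1, 1, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 0, 1, 1]]

def encode(u):
    return tuple(sum(u[k] * G[k][n] for k in range(4)) % 2 for n in range(7))

codebook = {u: encode(u) for u in itertools.product([0, 1], repeat=4)}

def ml_decode(r):
    """On a BSC, maximizing P(r|v) is equivalent to minimizing d(r, v)."""
    return min(codebook, key=lambda u: sum(a != b for a, b in zip(codebook[u], r)))

r = list(codebook[(1, 0, 1, 1)])
r[2] ^= 1                  # one channel error
print(ml_decode(r))        # (1, 0, 1, 1): one error is within (d_min - 1)/2
```

The exhaustive search over all 2^K codewords is precisely what becomes infeasible for long block lengths, motivating the iterative decoders discussed next.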

    1.4.2 Iterative decoding

    In his noisy channel coding theorem, Shannon showed that randomly con-

    structed codes with long block lengths, decoded using an ML decoder, achieve

    reliable communication at rates equal to the capacity of the channel. One main

drawback to a practical implementation of such a coding scheme is that ML decoding becomes prohibitively complex for long block lengths.

    decoding algorithms which provide a reasonable tradeoff between complexity and

    performance are used in practice.

An attractive way to construct long powerful codes is by concatenating several smaller codes. While optimal decoding of the long code is still too complex, it can

    be decoded in an iterative fashion. Decoding is carried out by performing optimal

    decoding separately for the component codes and then exchanging information be-

    tween the component decoders. The performance of the overall decoder is greatly

    improved if the component decoders exchange soft reliability values rather than

    hard decision values. The component decoders typically calculate the a-posteriori

    probabilities (APP)

P(u_i = u | r) = Σ_{v ∈ C : u_i = u} P(v | r),  u ∈ {0, 1},  (1.19)

using an APP decoder. Note that if the estimated symbol û_i is chosen as the argument that maximizes (1.19), then we obtain the MAP decoder.

In the last few years, coding schemes like turbo codes [11] with iterative decoding have led to the design of systems that operate at rates approaching the capacity of AWGN channels. In this dissertation we deal with low-density

    parity-check (LDPC) codes, invented by Gallager [12], which are another class of

    codes that can operate close to the capacity limit on many channels.

    1.5 Outline of the thesis

    In the next chapter, we describe some background material on LDPC codes.

    We also review the iterative decoding algorithms used for decoding LDPC codes.


    The following three chapters constitute the design component of the thesis. In

    Chapter 3, a construction technique for LDPC convolutional codes starting from

    algebraically constructed quasi-cyclic LDPC codes is presented. The next chapter

generalizes the code construction technique, and a large class of time invariant LDPC convolutional codes is obtained. We indicate techniques to optimize the

    code design for improved performance. In this chapter, we also present a bound

    on the free distance of time invariant LDPC convolutional codes which shows

    that they are not asymptotically good. Chapter 5 begins with a description of

    the sliding window decoder for LDPC convolutional codes. We then compare

    the sliding window decoder with the decoder of LDPC block codes. The chapter

concludes with a description of an analog decoding architecture for LDPC convolutional codes.

Chapters 6 and 7 serve as the analysis component of the thesis. In Chapter

    6, we construct an ensemble of LDPC convolutional codes and show that these

codes are asymptotically good. This result is analogous to Gallager's result for

    LDPC block codes. Chapter 7 describes the iterative decoding analysis of the

ensemble of convolutional codes of Chapter 6 on the BEC. We show

    that the convolutional nature of the codes leads to significantly better thresholds

    compared to the corresponding LDPC block codes. Chapter 8 summarizes the

    results of the dissertation and has pointers for future work.


    CHAPTER 2

    BACKGROUND

    In this chapter we review several concepts and terminology related to LDPC

    codes which shall be useful in the remainder of the thesis. Section 2.1 begins with

    a brief historical sketch of LDPC codes. We then define an LDPC code and point

    out how it may also be represented by means of a Tanner graph. In the next

    section iterative decoding of LDPC codes based on message passing algorithms is

    described. Section 2.3 deals with LDPC convolutional codes. The Tanner graph

    representation of a convolutional code and the features that distinguish it from the

    Tanner graph representation of a block code are also pointed out in this section.

    2.1 LDPC block codes

    LDPC codes and iterative decoding were invented by Gallager in 1963 [12].

    With the notable exception of Zyablov and Pinsker [13], and Tanner [14], iterative

    coding systems were all but forgotten until the introduction of turbo codes [11].

In the wake of the discovery of turbo codes, LDPC codes were rediscovered by MacKay and Neal [15]. Since then, the task of analyzing and designing LDPC codes with performance close to capacity has been carried out in earnest, e.g., [16][17]. Perhaps the culmination of this process was the

    construction of an LDPC code capable of achieving reliable performance within

    0.0045 dB of the capacity limit of the binary input AWGN channel [18].


An LDPC block code is a code defined by a sparse parity-check matrix, one containing mostly 0s and relatively few 1s. The LDPC codes introduced by Gallager

    have the additional property that the number of ones in each row and each column

    of the parity-check matrix is fixed. We call such codes regular LDPC codes.

Definition 2.1.1 A regular LDPC block (N, J, K) code is a code defined by an L × N parity-check matrix H, L < N, with exactly J ones in each column (J < K) and exactly K ones in each row.


Figure 2.1. The Tanner graph of the parity-check matrix of Example 2.1.1. The Tanner graph consists of 20 symbol nodes, each with J = 3 edges, and 15 constraint nodes, each with K = 4 edges.

    It is easy to check that there are four ones in each row and three ones in each

    column so that the parity-check matrix defines a (20, 3, 4) code. H has two

dependent rows, and the rate of this code is R = 7/20. ∎

    In his landmark paper [14], Tanner described how a code can be represented by

    means of a bipartite graph. The bipartite graph representation, called a Tanner

graph, consists of symbol nodes corresponding to the columns of H and constraint nodes corresponding to the rows of H. There is an edge connecting a symbol node to a constraint node if the entry in the corresponding column and row of H is a

    one. Figure 2.1 shows the Tanner graph corresponding to the parity-check matrix

in Example 2.1.1. There is a one-to-one correspondence between a parity-check

    matrix and its Tanner graph representation. LDPC codes by virtue of their sparse

    parity-check matrices have sparse Tanner graph representations. As we shall see,

    the Tanner graph provides a convenient setting to describe iterative decoding

    algorithms.
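The correspondence between H and its Tanner graph is mechanical: symbol node n and constraint node l are adjacent exactly when entry (l, n) of H is a one. An illustrative sketch (Python, not from the original text), using the small matrix of (1.7) rather than the 15 × 20 matrix of Example 2.1.1:

```python
H = [[1, 1, 1, 0, 1, 0, 0],
     [0, 1, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]

# Adjacency lists of the Tanner graph: one entry per one in H.
symbol_nbrs = {n: [l for l in range(len(H)) if H[l][n]] for n in range(len(H[0]))}
constraint_nbrs = {l: [n for n in range(len(H[0])) if H[l][n]] for l in range(len(H))}

print(symbol_nbrs[1])      # [0, 1, 2]: symbol 1 takes part in all three checks
print(constraint_nbrs[0])  # [0, 1, 2, 4]: positions of the ones in row 0 of H
```

For a sparse H, these adjacency lists are short, which is what keeps the per-iteration cost of message passing low.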


    Gallager, in his dissertation [12], showed the existence of LDPC codes with

    minimum distance increasing linearly with block length N. Specifically, Gallager

used an ensemble of (J, K) regular LDPC codes and proved that for large N almost all codes in this ensemble satisfy d_min ≥ G(J, K)·N, provided J > 2. This result parallels the well known Gilbert-Varshamov (GV) bound [19][20] for the ensemble of linear binary block codes. The GV bound shows the existence of block codes of rate R satisfying d_min ≥ GV(R)·N for large N. Typically, the Gallager coefficient G(J, K) is smaller than the corresponding GV coefficient, thus indicating that

    LDPC code ensembles are not as good as the ensemble of randomly constructed

    codes. However, it is computationally feasible to efficiently decode long block

length LDPC codes, which is not the case for the class of randomly constructed

    codes. A linear increase in distance is a remarkable property and does not hold

    for many classes of codes that are useful in practice.

    For a code defined by means of a parity-check matrix H it is possible to find

    the generator matrix G by means of Gaussian elimination. Encoding can then

be carried out as usual by means of matrix multiplication, which has complexity O(N²), i.e., quadratic in the block length. In [21], a graph based encoding procedure exploiting the sparse parity-check matrix of LDPC block codes was described. In this case encoding complexity can be reduced to O(N + g²), where g is typically much smaller than N.

    2.2 Iterative decoding techniques

One of the most attractive features of LDPC codes is that they allow for efficient iterative decoding. There are several algorithms known for iterative decoding of LDPC codes: Gallager's bit flipping algorithm [12], the belief propagation (BP)


decoder, the min-sum decoder, etc. Most of these decoding techniques can be

    described as message passing algorithms.

    The operation of a message passing algorithm can be conveniently described

on the Tanner graph of a code. In a message passing algorithm, messages are exchanged between nodes along the edges of the Tanner graph, and nodes process

    the incoming messages received via their adjacent edges1 to determine the outgoing

    messages. A message along an edge represents the estimate of the bit represented

    by the symbol node associated with the edge. The message can be in the form of a

hard decision, i.e., 0 or 1, a probability vector [p_0, p_1], where p_i is the probability of the bit taking the value i, a log likelihood ratio (LLR) log(p_0/p_1), etc. An iteration

    of message passing consists of a cycle of information passing and processing. In

    a message passing algorithm, the outgoing message is calculated based on the

incoming messages along the other edges. The exact fashion in which the outgoing

    message is calculated depends on the message passing algorithm being used.

The computations in a message passing decoder are localized, e.g., computations at a constraint node are performed independently of the overall structure of

    the code. This implies that highly parallel implementations of a message passing

    decoder are feasible. Further, the computations are distributed, i.e., computations

    are performed by all nodes in the graph. The number of messages exchanged in

    an iteration of message passing is dependent on the number of edges in the Tanner

graph; a sparser Tanner graph means fewer computations. Thus, it is computationally feasible to use message passing algorithms for decoding long block length

    LDPC codes.

The most widely used message passing algorithm is the belief propagation (BP)

    algorithm. The processing at the symbol and constraint nodes for a BP algorithm

¹For symbol nodes, we also treat the information from the channel as an incoming message.


with LLRs as messages is now described. Let c_i denote the LLR of the bit associated with symbol node i, r_{ij} denote the message from symbol node i to constraint node j, and q_{ji} the message from constraint node j to symbol node i.

Let N(i) denote the set of neighbors of node i and N(i)\j the set of neighbors of node i excluding j. Then the outgoing messages calculated at a constraint node

    j and a symbol node i are

\[
q_{ji} = \log \frac{1 + \prod_{i' \in N(j) \setminus i} \tanh(r_{i'j}/2)}{1 - \prod_{i' \in N(j) \setminus i} \tanh(r_{i'j}/2)}, \quad i \in N(j), \qquad (2.1)
\]

\[
r_{ij} = c_i + \sum_{j' \in N(i) \setminus j} q_{j'i}, \quad j \in N(i), \qquad (2.2)
\]

    respectively. An iteration of decoding involves the above computations at all the

    constraint and symbol nodes, respectively, of the Tanner graph. For a BP decoder

    with a maximum of I iterations the process is repeated I times. At the start of

    decoding the outgoing messages from all symbol nodes are initialized to the LLR

of the corresponding bit. When the I decoding iterations are complete, a final LLR, c_i + Σ_{j∈N(i)} q_{ji}, is computed and used for making a hard decision on the bit corresponding to symbol node i.
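The two node update rules translate directly into code. An illustrative sketch (Python, not from the original text; it assumes the tanh(·/2) form of the constraint-node rule, as in (2.1)):

```python
import math

def check_update(incoming):
    """Constraint-node rule (2.1): combine the LLRs r_{i'j} arriving from
    all neighboring symbol nodes *other* than the target node i."""
    prod = math.prod(math.tanh(r / 2) for r in incoming)
    return math.log((1 + prod) / (1 - prod))

def symbol_update(channel_llr, incoming):
    """Symbol-node rule (2.2): channel LLR c_i plus the messages q_{j'i}
    from all neighboring constraint nodes other than the target node j."""
    return channel_llr + sum(incoming)

# Sanity check: with a single other neighbor, the constraint node relays
# the message unchanged, since log((1+tanh(x/2))/(1-tanh(x/2))) = x.
print(round(check_update([0.8]), 6))  # 0.8
```

A full decoder simply applies `check_update` and `symbol_update` at every node of the Tanner graph for I iterations and then forms the final LLRs.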

    The BP decoder is known to be optimal for Tanner graphs with no closed

    paths. However, almost all interesting LDPC Tanner graphs have closed paths or

cycles. In spite of this, the BP decoder has been observed to work remarkably well

    in these cases. Low-complexity versions of the BP algorithm that provide good

    complexity to performance tradeoffs also exist. One common modification is to

simplify the computations at the constraint nodes in (2.1). Another possibility is to quantize the LLRs. Other examples of message passing algorithms are presented in [16].


Gallager in his thesis [12] analyzed an iterative hard-decision decoding algorithm to upper bound the bit error probability for regular LDPC codes on a binary symmetric channel (BSC).² This analysis was extended to a broader class

of channels and message passing algorithms in [16][17][22]. In particular, density evolution [16] makes analysis of message passing algorithms with messages belonging to a continuous alphabet, as for example in the case of the BP algorithm,

feasible. All known techniques for the analysis of message passing algorithms require that the messages exchanged during iterations be independent. It is possible

    to show the existence of a sequence of codes with increasing block length for which

    this holds [12][16]. Thus, the analysis is asymptotic in the block length. In the

case of finite block lengths, where an arbitrary number of independent iterations is

    not possible, little is known about the performance of message passing algorithms.

    The concept of a threshold or iterative decoding limit was defined in [16].

    The threshold depends on the particular class of codes, the iterative decoding

algorithm, and the channel.³ It is the largest channel parameter for which the

    probability of an incorrect message being exchanged, when decoding a code from

    the specified class with the given decoding algorithm, approaches zero as the

    number of iterations tends to infinity.

In [23], it was observed that LDPC codes with irregular Tanner graph⁴ structures lead to significantly better performance on the BEC compared to regular

    LDPC codes. Following this idea, in [17] density evolution was used to optimize

    the degree profile of irregular LDPCs and achieve thresholds close to capacity on

a variety of channels. It should be noted, however, that the thresholds, especially

²A BSC has two inputs and two outputs identical to the two inputs. The probability that an error occurs is the same for both inputs.
³Thresholds are known to exist if the channel is physically degraded and the decoding algorithm respects this ordering [16].
⁴A Tanner graph which is not regular is irregular.


for irregular LDPCs, are channel parameters below which the bit error probability, and not necessarily the block error probability, can be made arbitrarily small.

    2.3 LDPC convolutional codes

In this section we turn our attention to LDPC convolutional codes, first constructed in [24]. We begin with the definition of the syndrome former of a convolutional code.

    A rate R = b/c convolutional code can be defined as the set of sequences

v = (..., v_0, v_1, ..., v_t, ...), v_t ∈ F_2^c, satisfying the equality vH^T = 0, where

\[
H^T = \begin{pmatrix}
\ddots & & \ddots & & \\
& H_0^T(0) & \cdots & H_{m_s}^T(m_s) & & \\
& & \ddots & & \ddots & \\
& & H_0^T(t) & \cdots & H_{m_s}^T(t + m_s) & \\
& & & \ddots & & \ddots
\end{pmatrix}. \qquad (2.3)
\]

The infinite transposed parity-check matrix H^T is also called the syndrome former. The sub-matrices H_i^T(t + i), i = 0, 1, ..., m_s, are c × (c − b) binary matrices. The largest i such that H_i^T(t + i) is a non-zero matrix for some t is called the syndrome former memory m_s. As in the case of block codes, for a generator matrix G of the convolutional code we have GH^T = 0. The constraint imposed by the syndrome former can be written as

v_t H_0^T(t) + v_{t-1} H_1^T(t) + ··· + v_{t-m_s} H_{m_s}^T(t) = 0,  t ∈ ℤ.  (2.4)


    As we shall see in the next chapter, (2.4) can be used for encoding a convolutional

    code.
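As a concrete illustration (not from the original text): for the code of Example 1.3.2, H_0 = (1 1), H_1 = (1 0), and H_2 = (1 0), so the single check of (2.4) at time t reads v_t^(0) + v_t^(1) + v_{t-1}^(0) + v_{t-2}^(0) = 0. Taking v_t^(0) = u_t and solving the check for the parity bit recovers the encoder of (1.8). A sketch in Python:

```python
def sf_encode(u):
    """Syndrome-former encoding for the code of Example 1.3.2: set the
    systematic bit v_t^(0) = u_t and solve the single parity check (2.4)
    for v_t^(1) (past symbols before the start are taken as 0)."""
    v = []
    for t, ut in enumerate(u):
        parity = (ut + (u[t - 1] if t >= 1 else 0) + (u[t - 2] if t >= 2 else 0)) % 2
        v.append((ut, parity))
    return v

print(sf_encode([1, 0, 1, 1]))  # [(1, 1), (0, 1), (1, 0), (1, 0)], as in (1.8)
```

Solving the checks at each time step only requires the current and the m_s most recent symbol blocks, which is what makes this encoding procedure attractive.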

For time invariant convolutional codes, the sub-matrices H_i^T(t + i), i = 0, 1, ..., m_s, are the same for all t. For example, the convolutional code in Example 1.3.2 is defined by the syndrome former with m_s = 2 and H_0(t) = (1 1), H_1(t) = (1 0), and H_2(t) = (1 0) for all t. In the time invariant case it is possible to express

    the syndrome former in terms of the delay operator. The syndrome former for the

code of Example 1.3.2 in terms of the delay operator is H(D)^T = [1 + D + D², 1]^T.

Note that we have G(D)H(D)^T = 0.
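The identity G(D)H(D)^T = 0 can be checked by polynomial arithmetic over GF(2). An illustrative sketch (Python, not from the original text; coefficient lists, lowest degree first):

```python
def pmul(a, b):
    """GF(2)[D] product of coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

def padd(a, b):
    """GF(2)[D] sum (XOR of zero-padded coefficient lists)."""
    n = max(len(a), len(b))
    return [x ^ y for x, y in zip(a + [0] * (n - len(a)), b + [0] * (n - len(b)))]

G_D = [[1], [1, 1, 1]]   # G(D) = [1, 1 + D + D^2]
H_D = [[1, 1, 1], [1]]   # H(D)^T = [1 + D + D^2, 1]^T
# G(D) H(D)^T = 1*(1 + D + D^2) + (1 + D + D^2)*1 = 0 over GF(2)
acc = padd(pmul(G_D[0], H_D[0]), pmul(G_D[1], H_D[1]))
print(acc)  # [0, 0, 0]
```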

    Analogous to LDPC block codes, LDPC convolutional codes are convolutional

    codes defined by sparse parity-check matrices or equivalently sparse syndrome

former matrices. Corresponding to Definition 2.1.1 of the previous section we have

Definition 2.3.1 A regular LDPC convolutional (m_s, J, K) code is a code defined by a syndrome former H^T having exactly J ones in each row and exactly K ones in each column, where J < K.


Figure 2.2. Illustration of the Jimenez-Zigangirov method to construct the syndrome former of a b = 1, c = 2 LDPC convolutional (4, 3, 6) code.


sub-matrices) as illustrated in Fig. 2.2(a). Then the lower part is moved to the right as shown in Fig. 2.2(b). Next, fixed c × (c − b) sub-matrices are appended from the left (Fig. 2.2(c)) to ensure that the matrices H_0^T(t) have full rank. The resulting matrix represents one period and is repeated indefinitely (Fig. 2.2(d)), leading to a syndrome former H^T of memory m_s = 4, with J = 3 ones in each row and K = 6 ones in each column. ∎

We now turn our attention to the Tanner graph representation of convolutional codes. The Tanner graph of a convolutional code can be obtained as in the

    case of block codes. There are symbol nodes corresponding to each row of the

syndrome former H^T and constraint nodes corresponding to each column of H^T. There is an edge connecting a symbol node to a constraint node if the entry in the corresponding row and column of H^T is a one. As earlier, there is a one-to-one correspondence between a syndrome former and its Tanner graph representation. The features unique to convolutional codes make the Tanner graph of a

    convolutional code different in several important ways from that of a block code.

    We illustrate these differences by means of an example.

Example 2.3.2 Tanner graph representation of a b = 1, c = 3 LDPC convolutional (3, 2, 3) code.

    Consider the time invariant convolutional code with syndrome former matrix

\[
H^T(D) = \begin{pmatrix}
1 & D^3 \\
D & D^2 \\
D^3 & 1
\end{pmatrix} \qquad (2.5)
\]

It is easy to see that this code has m_s = 3, J = 2, and K = 3. Fig. 2.3 shows

    the Tanner graph of this convolutional code.
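These parameters can be read off H^T(D) programmatically. An illustrative sketch (Python, not from the original text), representing each entry of (2.5) by its list of monomial degrees:

```python
# H^T(D) of (2.5): entry (row, col) is the list of exponents of D present.
Ht = [[[0], [3]],
      [[1], [2]],
      [[3], [0]]]

ms = max(d for row in Ht for cell in row for d in cell)    # largest delay
J = max(sum(len(cell) for cell in row) for row in Ht)      # ones per row of H^T
K = max(sum(len(row[c]) for row in Ht) for c in range(2))  # ones per column of H^T

print(ms, J, K)  # 3 2 3
```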


Figure 2.3. The Tanner graph for the convolutional code of Example 2.3.2.

The first difference between the Tanner graph of a convolutional code and that of a block code comes from the fact that a convolutional code sequence is infinite, so that the corresponding Tanner graph is also infinite. The code symbols of a

    convolutional code have a time index associated with them. The convolutional

    code of this example is a b = 1, c = 3, convolutional code. The symbol nodes

    corresponding to the c = 3 symbols generated at each time t are shown vertically

aligned in Fig. 2.3. Similarly, at each time t there are c − b = 2 parity-check equations on the symbols; the corresponding constraint nodes are shown vertically aligned in Fig. 2.3. The indexing of the Tanner graph by time is absent in the

    case of block codes.

    The memory of a convolutional code restricts the local structure in the Tanner

    graph. Specifically, the memory determines the maximum separation between

two symbol nodes connected by a single constraint node. Two pairs of symbol nodes m_s = 3 time units apart (time indices t and t + 3) are shown highlighted in Fig. 2.3. On the other hand, in the Tanner graph of a block code two symbol


    nodes connected by a constraint node can be arbitrarily separated. Note also that

    the causality of the convolutional code is reflected by all symbols being connected

    only to constraint nodes either at the same or later time instants. Finally, since

the code is time invariant, the connectivity of nodes in the Tanner graph is the same for all t. ∎

Typically LDPC convolutional codes have large m_s (or c − b), so that decoding based on their trellis representation is no longer feasible. However, LDPC convolutional codes, by virtue of their sparse syndrome formers, have sparse Tanner graph

representations. Thus, they can be decoded by means of message passing algorithms. The special features of the Tanner graph structure of a convolutional code

    pointed out here, as we shall see in Chapter 5, lead to a highly parallel continuous

    sliding window decoder for LDPC convolutional codes. Further, in Chapter 7 we

explain how the structure of the convolutional code Tanner graphs results in better

    thresholds for LDPC convolutional codes compared to corresponding LDPC block

    codes.


    CHAPTER 3

    LDPC CONVOLUTIONAL CODES CONSTRUCTED FROM QUASI-CYCLIC

    CODES

    The construction of periodically time varying LDPC convolutional codes has

    been described in [24]. One big difficulty in implementing time varying LDPC

convolutional codes is the huge storage requirement for storing a period of the

    parity-check matrix. For time invariant convolutional codes the connectivity of

    nodes in the Tanner graph structure at different times is identical. This leads to

    significantly reduced storage requirements for implementing these codes. In this

    chapter we describe the construction of time invariant LDPC convolutional codes

    from algebraically constructed quasi-cyclic (QC) LDPC codes.

    Section 3.1 begins with a description of the construction technique for the QC

    LDPC codes. The QC codes are constructed by generalizing the code construc-

    tion technique of a (155, 64) QC LDPC code by Tanner [25]. This is followed

    by a description of the design technique for constructing time invariant convolu-

    tional codes from the quasi-cyclic codes. Section 3.2 details the properties of the

    constructed codes - both quasi-cyclic and convolutional, and the corresponding

    Tanner graphs. The relationship between the QC and corresponding convolu-

    tional codes, girth and distance properties, and syndrome former based encoding

    of the convolutional code are described. The chapter concludes with performance

    results of the codes with iterative decoding.


    3.1 Code construction

This section describes the means by which the underlying structure of multiplicative groups in the set of integers modulo m may be used to construct low-density parity-check codes, both block codes and convolutional codes.

    3.1.1 Construction of quasi-cyclic LDPC block codes

We use the structure of multiplicative groups in the set of integers modulo m to place circulant matrices within a parity-check matrix so as to form regular quasi-cyclic LDPC block codes with a variety of block lengths and rates. For prime m, the integers {0, 1, ..., m−1} form a field under addition and multiplication modulo m, i.e., the Galois field GF(m). The non-zero elements of GF(m) form a cyclic multiplicative group. Let a and b be two non-zero elements with multiplicative orders o(a) = K and o(b) = J, respectively. (Such elements exist whenever K and J divide φ(m) = m − 1, the order of the multiplicative group.) We then form the J × K matrix P of elements from GF(m) whose (s, t)-th entry is P_{s,t} = b^s a^t, where 0 ≤ s ≤ J − 1 and 0 ≤ t ≤ K − 1:

    P = [ 1          a            a^2          ...  a^{K−1}
          b          ab           a^2 b        ...  a^{K−1} b
          ...        ...          ...          ...  ...
          b^{J−1}    a b^{J−1}    a^2 b^{J−1}  ...  a^{K−1} b^{J−1} ].        (3.1)

The LDPC code is constructed by specifying its parity-check matrix H. Specifically, H is made up of a J × K array of circulant sub-matrices as shown below:
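As a quick check of this existence condition, the multiplicative orders can be computed directly. A minimal Python sketch (the function name `mult_order` is ours, not from the text):

```python
def mult_order(x, m):
    """Multiplicative order of x in the group GF(m)* (m prime, x != 0 mod m)."""
    k, y = 1, x % m
    while y != 1:
        y = (y * x) % m
        k += 1
    return k

# Parameters of the [155, 64, 20] code discussed below: m = 31, a = 2, b = 5.
m, a, b = 31, 2, 5
K, J = mult_order(a, m), mult_order(b, m)
print(K, J)                                    # 5 3
assert (m - 1) % K == 0 and (m - 1) % J == 0   # K and J divide phi(m) = m - 1
```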


    H = [ I_1          I_a            I_{a^2}          ...  I_{a^{K−1}}
          I_b          I_{ab}         I_{a^2 b}        ...  I_{a^{K−1} b}
          ...          ...            ...              ...  ...
          I_{b^{J−1}}  I_{a b^{J−1}}  I_{a^2 b^{J−1}}  ...  I_{a^{K−1} b^{J−1}} ],   (3.2)

where I_x is the m × m identity matrix with its rows cyclically shifted to the left by x − 1 positions. Thus the circulant sub-matrix in position (s, t) within H is obtained by cyclically shifting the rows of the identity matrix to the left by P_{s,t} − 1 places. The resulting binary parity-check matrix has size Jm × Km, which means the associated code has rate R ≥ 1 − (J/K). (The rate may be greater than 1 − (J/K) due to linear dependence among the rows of H; it is easy to see that there are in fact at least J − 1 dependent rows in H.) By construction, every column of H contains J ones and every row contains K ones, so H represents a (J, K) regular LDPC code. (We observe here that for the case J = 2, our construction yields the graph-theoretic error-correcting codes proposed by Hakimi et al. in [26].)
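The construction itself is compact in code. The sketch below (helper names are ours) builds the binary parity-check matrix H from (m, a, b), verifies (J, K)-regularity, and computes the GF(2) rank; for the [155, 64, 20] code of the next example the rank is 91, consistent with the J − 1 = 2 dependent rows noted above:

```python
def circulant_shifts(m, a, b, J, K):
    """Shift values P[s][t] = b^s * a^t mod m, as in equation (3.1)."""
    return [[(pow(b, s, m) * pow(a, t, m)) % m for t in range(K)]
            for s in range(J)]

def parity_check(m, shifts):
    """Binary Jm x Km matrix: block (s, t) is the m x m identity with rows
    cyclically shifted left by shifts[s][t] - 1, so row i of the block has
    its one in column (i + shifts[s][t] - 1) mod m."""
    J, K = len(shifts), len(shifts[0])
    H = [[0] * (K * m) for _ in range(J * m)]
    for s in range(J):
        for t in range(K):
            for i in range(m):
                H[s * m + i][t * m + (i + shifts[s][t] - 1) % m] = 1
    return H

def gf2_rank(masks):
    """Rank over GF(2); rows encoded as int bitmasks."""
    pivots = {}                     # leading-bit position -> reduced row
    for r in masks:
        while r:
            p = r.bit_length() - 1
            if p not in pivots:
                pivots[p] = r
                break
            r ^= pivots[p]
    return len(pivots)

m, J, K = 31, 3, 5
H = parity_check(m, circulant_shifts(m, a=2, b=5, J=J, K=K))
assert all(sum(row) == K for row in H)          # row weight K
assert all(sum(col) == J for col in zip(*H))    # column weight J
rank = gf2_rank(int("".join(map(str, row)), 2) for row in H)
assert rank == 91                               # so k = 155 - 91 = 64
```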

The codes constructed using this technique are quasi-cyclic with period K, i.e., cyclically shifting a codeword by one position within each of the K blocks of circulant sub-matrices (each block consists of m code bits) results in another codeword. (Strictly speaking, quasi-cyclic means that cyclically shifting a codeword by K positions yields another codeword; to observe this property in the codes constructed above, the bit positions in each codeword must be permuted to a different order than the one indicated by the construction.) As described in [27], the construction can also be extended to non-prime m. Examples of LDPC quasi-cyclic codes constructed in this manner from a prime m are shown in Table 3.1.1 [28].

Example 3.1.1 A [155, 64, 20] QC code (m = 31) [25].

Elements a = 2, b = 5 are chosen from GF(31); then o(a) = 5, o(b) = 3, and the parity-check matrix is given by

    H = [ I_1   I_2   I_4   I_8   I_16
          I_5   I_10  I_20  I_9   I_18
          I_25  I_19  I_7   I_14  I_28 ]   (93 × 155),

where I_x is a 31 × 31 identity matrix with rows cyclically shifted to the left by x − 1 positions. The parity-check matrix H describes a (3, 5) regular LDPC code and has rank 91 (determined using Gaussian elimination), so that the corresponding code has rate R = 64/155 ≈ 0.413. The Tanner graph resulting from H is shown in Figure 3.1. The length of the shortest cycle (i.e., the girth) of the Tanner graph is eight. The sparse Tanner graph, together with the large girth for a code of this size, makes the code well suited to graph-based message-passing decoding. The code has minimum distance dmin = 20 (determined using MAGMA), which compares well with the minimum distance dmin = 28 of the best known linear code of the same rate and block length. It can be shown that the Tanner graph of this code has diameter six, which is the best possible for a (3, 5) regular bipartite graph of this size. Also, the Tanner graph has girth eight, while an upper bound on the girth is ten (from the tree bound; see Section 3.2). □
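The diameter claim is easy to confirm computationally. A sketch (node-ordering conventions are ours): build the Tanner graph of Example 3.1.1 and take the largest breadth-first-search eccentricity:

```python
from collections import deque

def tanner_adj(m, shifts):
    """Adjacency lists of the Tanner graph: bit nodes 0..Km-1 come first,
    then constraint nodes.  Constraint (s, i) checks bit (t, j) with
    j = (i + shifts[s][t] - 1) mod m."""
    J, K = len(shifts), len(shifts[0])
    nb = K * m
    adj = [[] for _ in range(nb + J * m)]
    for s in range(J):
        for t in range(K):
            for i in range(m):
                c, v = nb + s * m + i, t * m + (i + shifts[s][t] - 1) % m
                adj[c].append(v)
                adj[v].append(c)
    return adj

def eccentricity(adj, s):
    """Largest BFS distance from node s."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return max(dist.values())

adj = tanner_adj(31, [[1, 2, 4, 8, 16], [5, 10, 20, 9, 18], [25, 19, 7, 14, 28]])
diameter = max(eccentricity(adj, v) for v in range(len(adj)))
assert diameter == 6   # best possible for a (3, 5) regular bipartite graph of this size
```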

Figure 3.1. Tanner graph for a [155, 64, 20] QC code (constraint nodes: 3 rings of 31 each; code bit nodes: 5 rings of 31 each).

Example 3.1.2 A [21, 8, 6] QC code (m = 7).

Elements a = 2, b = 6 are chosen from GF(7); then o(a) = 3, o(b) = 2, and the parity-check matrix is given by

    H = [ I_1  I_2  I_4
          I_6  I_5  I_3 ]   (14 × 21),

where I_x is a 7 × 7 identity matrix with rows cyclically shifted to the left by x − 1 positions. The resulting Tanner graph has girth twelve and is shown in Figure 3.2. (Figure 3.2 shows both the ring-like structure characteristic of these constructions and a flattened representation that will be useful in what follows.) The associated code is a (2, 3) regular LDPC code with minimum distance dmin = 6; the best [21, 8] linear code has dmin = 8. □

Figure 3.2. Tanner graph for a [21, 8, 6] QC code.

Example 3.1.3 A [5219, 4300] QC code (m = 307).

Elements a = 9, b = 17 are chosen from GF(307) (note that 307 is prime); then o(a) = 17, o(b) = 3, and the parity-check matrix is given by

    H = [ I_1    I_9    I_81   I_115  I_114  I_105  I_24   I_216  I_102  I_304  I_280  I_64   I_269  I_272  I_299  I_235  I_273
          I_17   I_153  I_149  I_113  I_96   I_250  I_101  I_295  I_199  I_256  I_155  I_167  I_275  I_19   I_171  I_4    I_36
          I_289  I_145  I_77   I_79   I_97   I_259  I_182  I_103  I_6    I_54   I_179  I_76   I_70   I_16   I_144  I_68   I_305 ],

where I_x is a 307 × 307 identity matrix with rows cyclically shifted to the left by x − 1 positions. H is a 921 × 5219 matrix and describes a (3, 17) regular LDPC code with minimum distance upper bounded by 24 (see Section 3.2). □

These examples show that the construction technique described above yields codes with a wide range of rates and block lengths.

    3.1.2 Construction of LDPC convolutional codes

An LDPC convolutional code can be constructed by replicating the constraint structure of the quasi-cyclic LDPC block code to infinity [29]. Naturally, the parity-check matrices (or, equivalently, the associated Tanner graphs) of the convolutional and quasi-cyclic codes form the key link in this construction.

Each circulant in the parity-check matrix of a QC block code can be specified by a unique polynomial, representing the entries in the first column of the circulant matrix. For example, a circulant matrix whose first column is [1 1 1 0 1 0]^T is represented by the polynomial 1 + D + D^2 + D^4. Thus, the Jm × Km binary parity-check matrix of a regular LDPC code obtained from the construction described above can be expressed in polynomial form (with indeterminate D) to obtain the following J × K matrix:

    H(D) = [ D^0            D^{a−1}          D^{a^2−1}          ...  D^{a^{K−1}−1}
             D^{b−1}        D^{ab−1}         D^{a^2 b−1}        ...  D^{a^{K−1} b−1}
             ...            ...              ...                ...  ...
             D^{b^{J−1}−1}  D^{a b^{J−1}−1}  D^{a^2 b^{J−1}−1}  ...  D^{a^{K−1} b^{J−1}−1} ]   (J × K).

Since all the circulant sub-matrices in the parity-check matrix of the QC LDPC code are shifted identity matrices, H(D) is comprised solely of monomials; the exponent of D indicates how many places the identity matrix was shifted to form the corresponding circulant sub-matrix. H(D) is the parity-check matrix, in polynomial form, of a corresponding LDPC convolutional code. (The indeterminate D is now interpreted as the delay operator of the convolutional code. Note that the parity-check matrix of the QC code, when written in circulant form, is over the ring F2[D]/(D^m + 1), i.e., the polynomial ring F2[D] modulo the ideal (D^m + 1), whereas the parity-check matrix of the convolutional code is over the rational field F2(D).) In all cases that were examined, the rate of the LDPC convolutional code obtained from a QC code was equal to the design rate of the original QC code, i.e., R = 1 − (J/K). This rate is slightly less than the rate of the original QC code. The syndrome former memory of the convolutional code so obtained satisfies ms ≤ m.
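The unwrapping step is a one-line transformation on the shift values: circulant I_x becomes the monomial D^(x−1). A minimal sketch (function name is ours), using the shifts of Example 3.1.1:

```python
def unwrap_exponents(shifts):
    """Circulant I_x in the QC parity-check matrix becomes the monomial
    D^(x-1) in the convolutional parity-check matrix H(D)."""
    return [[x - 1 for x in row] for row in shifts]

# Circulant shifts of the [155, 64, 20] QC code of Example 3.1.1:
shifts = [[1, 2, 4, 8, 16], [5, 10, 20, 9, 18], [25, 19, 7, 14, 28]]
exps = unwrap_exponents(shifts)
print(exps[0])   # [0, 1, 3, 7, 15] -> first row of H(D): 1, D, D^3, D^7, D^15
```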


Example 3.1.4 A rate R = 2/5 LDPC convolutional code.

From the [155, 64, 20] QC code of Example 3.1.1, we can obtain a rate R = 2/5 convolutional code with parity-check and generator matrices given by

    H(D) = [ 1     D     D^3   D^7   D^15
             D^4   D^9   D^19  D^8   D^17
             D^24  D^18  D^6   D^13  D^27 ]   (3 × 5)                        (3.3)

    G(D) = [ a1(D)/Δ(D)  a2(D)/Δ(D)  a3(D)/Δ(D)  1  0
             b1(D)/Δ(D)  b2(D)/Δ(D)  b3(D)/Δ(D)  0  1 ]   (2 × 5),          (3.4)

where

    a1(D) = D^4 (1 + D^7 + D^10 + D^14 + D^18 + D^29),
    a2(D) = D^3 (1 + D^3 + D^6 + D^18 + D^21 + D^36),
    a3(D) = D^7 (1 + D^4 + D^8 + D^11 + D^15 + D^22),
    b1(D) = D^13 (1 + D^6 + D^14 + D^15 + D^23 + D^28),
    b2(D) = D^12 (1 + D^2 + D^11 + D^21 + D^23 + D^35),
    b3(D) = D^21 (1 + D^3 + D^4 + D^5 + D^10 + D^16),
    Δ(D) = 1 + D^4 + D^14 + D^25 + D^26 + D^33.

The generator matrix G(D) was obtained from the parity-check matrix H(D) using Gaussian elimination. The syndrome former memory of H(D) is ms = 28. We conjecture that the above convolutional code has a free distance dfree of 24. By choosing one of the information sequences equal to the denominator polynomial, i.e., 1 + D^4 + D^14 + D^25 + D^26 + D^33, and the other information sequence equal to the all-zero sequence, we obtain a code sequence of weight 24. This only provides an upper bound on the dfree of this convolutional code, but we have been unable to find any lower weight code sequences. Interestingly, as we shall see, this is the same as the upper bound obtained by MacKay and Davey [30] for LDPC matrices constructed from non-overlapping permutation matrices that commute with each other, as is the case here. Further, the results of [29] guarantee that the minimum distance of the QC block code, 20 in this case, provides a lower bound on the free distance of the associated convolutional code.

The matrix G(D) in (3.4) is not in minimal form. The minimal-basic generator matrix [7] has the minimum total memory among all equivalent rational and polynomial generator matrices and is thus of interest. The minimal-basic form of G(D), which has total memory 23 + 22 = 45, is given by

    Gmin(D) = [ a′1(D)  a′2(D)  a′3(D)  a′4(D)  a′5(D)
                b′1(D)  b′2(D)  b′3(D)  b′4(D)  b′5(D) ]   (2 × 5),          (3.5)

where

    a′1(D) = D^6 + D^10 + D^14 + D^17 + D^18,
    a′2(D) = D^5 + D^8 + D^9 + D^11 + D^13 + D^14 + D^15 + D^19 + D^21 + D^22 + D^23,
    a′3(D) = D^9,
    a′4(D) = D^2 + D^10 + D^13 + D^15 + D^16 + D^18 + D^19 + D^20 + D^21 + D^23,
    a′5(D) = 1 + D + D^3 + D^9 + D^10 + D^11 + D^12 + D^13 + D^15,
    b′1(D) = D^4 + D^5 + D^6 + D^8 + D^10 + D^11 + D^12 + D^13 + D^14 + D^16 + D^17,
    b′2(D) = D^3 + D^4 + D^5 + D^6 + D^8 + D^11 + D^12 + D^13 + D^14 + D^16 + D^17 + D^18 + D^19 + D^20,
    b′3(D) = D^7 + D^8 + D^9 + D^12,
    b′4(D) = 1 + D + D^2 + D^5 + D^8 + D^9 + D^10 + D^11 + D^13 + D^14 + D^15 + D^16 + D^21 + D^22,
    b′5(D) = 1 + D^2 + D^4 + D^7 + D^8 + D^13 + D^14. □
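The stated total memory is easy to verify from the exponent sets listed above. A small sketch (polynomials encoded as sets of exponents; names are ours):

```python
# Exponent sets of the entries of the two rows of Gmin(D), as listed above.
row_a = [{6, 10, 14, 17, 18},
         {5, 8, 9, 11, 13, 14, 15, 19, 21, 22, 23},
         {9},
         {2, 10, 13, 15, 16, 18, 19, 20, 21, 23},
         {0, 1, 3, 9, 10, 11, 12, 13, 15}]
row_b = [{4, 5, 6, 8, 10, 11, 12, 13, 14, 16, 17},
         {3, 4, 5, 6, 8, 11, 12, 13, 14, 16, 17, 18, 19, 20},
         {7, 8, 9, 12},
         {0, 1, 2, 5, 8, 9, 10, 11, 13, 14, 15, 16, 21, 22},
         {0, 2, 4, 7, 8, 13, 14}]

def row_degree(row):
    """Memory contributed by one generator row = its maximum polynomial degree."""
    return max(max(p) for p in row)

print(row_degree(row_a), row_degree(row_b))    # 23 22
print(row_degree(row_a) + row_degree(row_b))   # 45, the total memory of Gmin(D)
```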


By pulling out common factors from the matrix H(D) in (3.3) we obtain the matrix

    Hequiv(D) = [ 1     D     D^3   D^7  D^15
                  1     D^5   D^15  D^4  D^13
                  D^18  D^12  1     D^7  D^21 ]   (3 × 5).                   (3.6)

It is easy to see that the matrices H(D) and Hequiv(D) describe the same code. However, Hequiv(D) has syndrome former memory ms = 21, as against ms = 28 for H(D).
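Pulling out common factors amounts to subtracting each row's minimum exponent. A sketch (function name is ours) applied to the exponent matrix of (3.3):

```python
def remove_row_factors(exps):
    """Divide each row of a monomial parity-check matrix H(D) by its lowest
    power of D, i.e., subtract the row minimum from every exponent."""
    return [[e - min(row) for e in row] for row in exps]

# Monomial exponents of H(D) in (3.3):
H_exps = [[0, 1, 3, 7, 15], [4, 9, 19, 8, 17], [24, 18, 6, 13, 27]]
H_red = remove_row_factors(H_exps)
print(H_red)                        # [[0, 1, 3, 7, 15], [0, 5, 15, 4, 13], [18, 12, 0, 7, 21]]
print(max(max(r) for r in H_red))   # 21 -> syndrome former memory of Hequiv(D)
```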

Several different LDPC convolutional codes can be obtained from the same QC LDPC block code. For example, reordering the rows of the parity-check matrix of the quasi-cyclic code within each block of circulant sub-matrices will leave the block code unchanged but can lead to a completely different LDPC convolutional code. (Here we are only interested in convolutional codes that have the same graph connectivity, i.e., nodes of the same degrees, as the base QC block code. Equivalent representations of the QC block code can also be obtained by suitable linear combinations of the rows of H; however, in that case the new representations of the block code, and of the corresponding convolutional code, will not in general have the same node degrees as the original representation of the QC code.) Consider once again the [155, 64, 20] QC code. Cyclically shift the first block of m = 31 rows by one position, the middle block of 31 rows by five positions, and the last block of 31 rows by 25 positions, so that the first row in each block now has a one in the first column. The resulting LDPC matrix is given by

    H = [ I_1  I_2   I_4   I_8   I_16
          I_1  I_6   I_16  I_5   I_14
          I_1  I_26  I_14  I_21  I_4 ]   (93 × 155),

where again I_x is a 31 × 31 identity matrix with rows cyclically shifted to the left by x − 1 positions. Clearly, the QC block code and its associated Tanner graph are unaffected by these row shifts. However, the convolutional code obtained by following the above procedure has the parity-check matrix

    H1(D) = [ 1  D     D^3   D^7   D^15
              1  D^5   D^15  D^4   D^13
              1  D^25  D^13  D^20  D^3 ]   (3 × 5).

Looking at the third constraint equation of H(D) and H1(D), we see that the two codes are in fact different. (Note that the first and second constraint equations of H(D) and H1(D) are equivalent, since the first and second rows of H(D) are just delayed versions of the first and second rows of H1(D), respectively.)

Example 3.1.5 A rate 1/3 LDPC convolutional code.

From the QC code of Example 3.1.2, a rate 1/3 convolutional code with the following parity-check and generator matrices is obtained:

    H(D) = [ 1    D    D^3
             D^5  D^4  D^2 ]   (2 × 3),

    G(D) = [ (D + D^3)/(1 + D^2 + D^4)   1   D^2/(1 + D^2 + D^4) ]   (1 × 3).

In this case, the syndrome former memory (of the parity-check matrix with the common factor removed from the second row, i.e., with second row [D^3 D^2 1]) is ms = 3, and the minimal-basic generator matrix has total memory four. The convolutional code has free distance dfree = 6, equal to the minimum distance of the original QC code. □

In this manner, it is possible to construct numerous convolutional codes with a sparse Tanner graph representation. For example, for every entry in Table 3.1.1, it is possible to construct a convolutional code with an actual rate equal to the design rate indicated in the table.

    3.2 Properties of constructed codes

This section describes the properties of the codes constructed in the previous section: the structure of the Tanner graphs, the minimum distance of the codes, and encoding techniques.

    3.2.1 Relation between the block and convolutional Tanner graphs

The Tanner graphs for the rate 1/3 convolutional code of Example 3.1.5 and the corresponding QC code (Example 3.1.2) are shown in Figure 3.3. We observe that the Tanner graph of the convolutional code is strikingly similar to that of the QC code: it can be viewed as being obtained by unwrapping the Tanner graph of the QC code. The construction technique described in the previous section thus unwraps the Tanner graph of the QC code to obtain the convolutional code.

Figure 3.3. Tanner graphs for the rate 1/3 convolutional code of Example 3.1.5 (panels labeled: quasi-cyclic code, convolutional code).


    3.2.2 Quasi-cyclic block codes viewed as tail-biting convolutional codes

Tail biting is a technique by which a convolutional code can be used to construct a block code without any loss of rate [31], [32]. An encoder for the tail-biting convolutional code is obtained by reducing each polynomial entry in the generator matrix of the convolutional code modulo D^m + 1, for some positive integer m, and replacing each entry so obtained with a circulant matrix of size m × m. The block length of the block code so derived depends on m. (With a feedforward encoder for the convolutional code, a tail-biting code of any block length may be obtained [6].) Since the generator matrix consists of circulant sub-matrices, the tail-biting convolutional code so obtained is a QC block code. However, this tail-biting code has a rate equal to that of the convolutional code, which is less than or equal to the rate of the original QC code. As expected, the two QC codes, i.e., the original QC code and the tail-biting convolutional code, are closely related, as the following theorem shows.

Theorem 3.2.1 Let C be a length-Km QC code with Jm × Km parity-check matrix H, where H is composed of m × m circulants (i.e., its period is K). Let C_conv be a convolutional code obtained by unwrapping H. Then the QC block code (tail-biting convolutional code) C_tb of length Km constructed from C_conv is a sub-code of C.

Proof: Since the tail-biting code C_tb is quasi-cyclic with period K, any codeword in C_tb can be described by a set of polynomials in the ring F2[D]/(D^m + 1). Therefore any codeword in C_tb is of the form a(D)G(D) mod (D^m + 1), where G(D) is the generator matrix of the convolutional code C_conv and a(D) is an information polynomial. Now, we know that the polynomial generator matrix of the convolutional code C_conv satisfies the parity constraints imposed by its parity-check matrix H(D), i.e., G(D)H^T(D) = 0. Therefore, G(D)H^T(D) ≡ 0 mod (D^m + 1). Since, by construction, H(D) is also the parity-check matrix of the original QC block code C, with the entries in H(D) now interpreted as being in the ring F2[D]/(D^m + 1), any codeword in C_tb satisfies the constraints of the original QC code C, i.e., the tail-biting code C_tb is a sub-code of C. □

If there were no rank reduction in the parity-check matrix of the original QC block code, it would have a rate equal to that of the convolutional code. In such a case, it is easy to see that the tail-biting convolutional code would be exactly the same as the original QC block code.

We have derived a rate 2/5 convolutional code (Example 3.1.4) from the [155, 64, 20] QC block code (Example 3.1.1); from the rate 2/5 convolutional code we can in turn derive a tail-biting convolutional code, i.e., another QC block code with block length 155 and 62 information bits (rate 2/5), that is a sub-code of the original [155, 64, 20] QC block code and has dmin ≥ 20.
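The theorem can be illustrated numerically for the rate 1/3 code of Example 3.1.5: reducing the polynomial generator (D + D^3, 1 + D^2 + D^4, D^2) mod D^7 + 1 gives a length-21 tail-biting codeword that must satisfy the parity checks of the [21, 8, 6] QC code of Example 3.1.2. A sketch (helper names and bit conventions are ours):

```python
def qc_syndrome(m, shifts, blocks):
    """Syndrome of a codeword given as K length-m bit blocks, for the QC
    parity-check matrix whose (s, t) circulant is the identity shifted left
    by shifts[s][t] - 1: constraint (s, i) checks bit (t, i + shift - 1 mod m)."""
    J, K = len(shifts), len(shifts[0])
    return [sum(blocks[t][(i + shifts[s][t] - 1) % m] for t in range(K)) % 2
            for s in range(J) for i in range(m)]

m = 7
shifts = [[1, 2, 4], [6, 5, 3]]     # QC code of Example 3.1.2
# Tail-biting codeword: block t holds the coefficients of the t-th entry of
# (D+D^3, 1+D^2+D^4, D^2), already of degree < 7 so reduction mod D^7+1 is trivial.
blocks = [[0, 1, 0, 1, 0, 0, 0], [1, 0, 1, 0, 1, 0, 0], [0, 0, 1, 0, 0, 0, 0]]
assert all(x == 0 for x in qc_syndrome(m, shifts, blocks))
# Quasi-cyclic shifts (rotating every block together) also check, as expected:
rot = [b[-1:] + b[:-1] for b in blocks]
assert all(x == 0 for x in qc_syndrome(m, shifts, rot))
```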

    3.2.3 Girth

The Tanner graph representing the parity-check matrix H in equation (3.2) cannot have girth less than six. (This holds for prime circulant sizes m but is not necessarily true for non-prime m.) This is seen by observing that the relative difference between the shifts across any two columns is different along different rows. However, the Tanner graph cannot have girth larger than twelve. Figure 3.4 is a graphical illustration of why this is so (see also [33]). The dark diagonal lines in the figure represent the non-zero entries of the parity-check matrix, and the empty regions represent the zeros. The dotted lines indicate a cycle of length twelve in the Tanner graph involving bit nodes b1 through b6 and constraint nodes p1 through p6. The structure of the circulant sub-matrices in H gives rise to numerous such 12-cycles.

Figure 3.4. A 12-cycle.

The Tanner graphs of the LDPC convolutional codes constructed here have girth lower bounded by the girth of the corresponding QC LDPC Tanner graph, since for any cycle in the convolutional code Tanner graph we can find an equivalent cycle in the QC Tanner graph. Suppose a particular set of bit and constraint nodes forms a cycle in the convolutional code Tanner graph; then the relative shifts between the bit nodes sum to zero. The corresponding sequence of bit and constraint nodes in the QC code (obtained by reducing indices modulo m, where m is the circulant size in the QC code) has exactly the same relative shifts (now read modulo m) between the bit nodes, and hence these also sum to zero, i.e., in the QC code Tanner graph we find a corresponding cycle of the same length. However, a cycle in the QC Tanner graph does not always lead to a cycle in the convolutional Tanner graph: while the relative shifts between bit nodes may sum to zero modulo m, the sum may be a non-zero multiple of m, in which case the corresponding bit nodes in the convolutional code do not form a cycle. Hence, it is possible that the convolutional code Tanner graph has a larger girth than the QC code Tanner graph. Observe, however, that in Figure 3.4 the relative shifts between bit nodes sum to zero (not merely to a multiple of m), so that the corresponding bit nodes in the convolutional code also form a 12-cycle. Hence, the girth of the convolutional codes is also upper bounded by twelve.
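The girth claims can also be verified computationally. The sketch below (helper names are ours; the BFS cycle detection is exact on bipartite graphs, where all cycles have even length) recovers girth twelve for the [21, 8, 6] code and girth eight for the [155, 64, 20] code:

```python
from collections import deque

def tanner_adj(m, shifts):
    """Tanner graph adjacency lists: bit nodes 0..Km-1, then constraint nodes."""
    J, K = len(shifts), len(shifts[0])
    nb = K * m
    adj = [[] for _ in range(nb + J * m)]
    for s in range(J):
        for t in range(K):
            for i in range(m):
                c, v = nb + s * m + i, t * m + (i + shifts[s][t] - 1) % m
                adj[c].append(v)
                adj[v].append(c)
    return adj

def girth(adj):
    """Shortest cycle length.  A non-tree edge (u, w) met during BFS from s
    closes a cycle of length at most dist[u] + dist[w] + 1; minimizing over
    all start nodes attains the girth (s on a shortest cycle detects it)."""
    best = float("inf")
    for s in range(len(adj)):
        dist, par = {s: 0}, {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w], par[w] = dist[u] + 1, u
                    q.append(w)
                elif par[u] != w and par[w] != u:   # skip BFS tree edges
                    best = min(best, dist[u] + dist[w] + 1)
    return best

assert girth(tanner_adj(7, [[1, 2, 4], [6, 5, 3]])) == 12          # Example 3.1.2
assert girth(tanner_adj(31, [[1, 2, 4, 8, 16],
                             [5, 10, 20, 9, 18],
                             [25, 19, 7, 14, 28]])) == 8           # Example 3.1.1
```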

    3.2.4 Minimum distance

    At high SNRs, the maximum likelihood decodin

