
TDA Progress Report 42-130 August 15, 1997

Hybrid Concatenated Codes and Iterative Decoding

D. Divsalar and F. Pollara

Communications Systems and Research Section

A hybrid concatenated code with two interleavers is the parallel concatenation of an encoder, which accepts the permuted version of the information sequence as its input, with a serially concatenated code, which accepts the unpermuted information sequence. The serially concatenated code consists of an outer encoder, an interleaver, and an inner encoder. An upper bound to the average maximum-likelihood bit-error probability of the hybrid concatenated convolutional coding schemes is obtained. Design rules for the parallel, outer, and inner codes that maximize the interleaver's gain and the asymptotic slope of the error-probability curves are presented. Finally, a low-complexity iterative decoding algorithm that yields performance close to maximum-likelihood decoding is proposed. A special case of hybrid concatenated code where the outer code is a repetition code is analyzed, and another special case called self-concatenated code is introduced. Comparisons with parallel concatenated convolutional codes, known as "turbo codes," and with recent serially concatenated convolutional codes are discussed, showing that the new scheme offers better performance at very low bit-error rates when low-complexity codes are used. An example of the proposed scheme for deep-space communications is presented.

I. Introduction

Concatenated coding schemes were first studied by Forney in [1]. They consist of the cascade of an inner code and an outer code, which, in Forney's approach, would be a relatively short inner block code, or a convolutional code with maximum-likelihood Viterbi decoding, and a long high-rate nonbinary Reed–Solomon outer code decoded by an algebraic error-correction algorithm. Concatenated codes have since evolved as a standard for those applications where very high coding gains are needed, such as deep-space applications. Alternative solutions for concatenation have also been studied in [2], [3], and [19]. In [19], Tanner proposed a method for constructing long error-correcting codes from shorter ones based on bipartite graphs and iterative decoding.

Turbo codes [4] are parallel concatenated convolutional codes (PCCCs) using two constituent codes. Parallel concatenation was extended to more than two codes in [5] (see also [21]). These codes were analyzed in [6,22–24].

Using the same ingredients, namely convolutional encoders and interleavers, serially concatenated convolutional codes (SCCCs) have been shown to yield performances comparable and, in some cases, superior to turbo codes [14]. A third choice is a hybrid concatenation of convolutional codes (HCCC). In this article, we consider, as an example of hybrid concatenated code, only the parallel concatenation of a convolutional code with a serially concatenated convolutional code. Serial concatenation of an outer convolutional code with an inner turbo code and other types of hybrid concatenated codes are considered in [15].

These concatenated coding schemes use a suboptimum decoding process based on iterating an a posteriori probability (APP) algorithm [7] applied to each constituent code. A soft-input, soft-output (SISO) APP module described in [13] was used. As an example, we will show the results obtained by decoding a hybrid concatenation of three codes with very high coding gain for deep-space applications.

For HCCCs, we obtain analytical upper bounds to the performance of a maximum-likelihood (ML) decoder using analytical tools and notations introduced in [6] and [9]. We propose design rules leading to the optimal choice of constituent convolutional codes that maximize the interleaver gain [6] and the asymptotic code performance, and we present a new iterative decoding algorithm with limited complexity using the SISO module. Comparisons with turbo codes and serially concatenated codes of the same complexity and decoding delay are discussed.

In Section II, we derive analytical upper bounds to the bit-error probability of HCCCs using the concept of "uniform interleavers," which decouples the output of the outer encoder from the input of the inner encoder, and the input of the parallel code from the input of the outer code. In Section III, we propose design rules for HCCCs through an asymptotic approximation of the bit-error probability bound, assuming long interleavers or large signal-to-noise ratios (SNRs). Section IV describes a new iterative decoding algorithm. Section V considers a special case of HCCC in which the outer code is a repetition code. Furthermore, if the parallel code is a one-state, rate 1 code (no code), then we obtain an encoding structure that we call "self-concatenated," since there is only one convolutional code involved. The self-concatenated code is analyzed, a simple iterative decoding structure is proposed, and simulation results are shown in Section VI. Simulation results for an example of HCCC for deep-space communications are presented in Section VII.

II. Analytical Bounds on the Performance of Hybrid Concatenated Codes

Consider a linear (n, k) block code C with code rate Rc = k/n and minimum distance hm. An upper bound on the bit-error probability (using the union bound) of the block code C over additive white Gaussian noise (AWGN) channels, with coherent detection and maximum-likelihood decoding, can be obtained as

P_b \le \sum_{h=h_m}^{n} \sum_{w=1}^{k} \frac{w}{k} A^{C}_{w,h} Q\left(\sqrt{2 R_c h \frac{E_b}{N_0}}\right)    (1)

where Eb/N0 is the signal-to-noise ratio per bit and A^C_{w,h} represents the number of codewords of the block code C having output weight h and associated with input sequences of weight w; A^C_{w,h} is the input–output weight coefficient (IOWC). The function Q(\sqrt{2 R_c h E_b/N_0}) represents the pairwise error probability, which is a monotonically decreasing function of the signal-to-noise ratio and the output weight h. The Q function is defined as Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-t^2/2} \, dt.

If we construct an equivalent block code from the convolutional code, this bound applies to convolutional codes as well. Obviously, this result also applies to concatenated codes, including parallel and serial concatenations, as well as to the hybrid concatenated codes discussed in this article. As soon as we obtain the input–output weight coefficients A^C_{w,h} for an HCCC code, we can compute its performance.
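Once the IOWC is available, Expression (1) is straightforward to evaluate numerically. The following is a minimal sketch; the repetition-code IOWC used in the example is our own illustrative input, not a code from this article:

```python
import math

def q_function(x: float) -> float:
    # Q(x) = (1/sqrt(2*pi)) * integral from x to inf of exp(-t^2/2) dt
    #      = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound_ber(iowc, k, rate, ebno_db):
    """Union bound of Expression (1):
    Pb <= sum over (w, h) of (w/k) * A_{w,h} * Q(sqrt(2 * Rc * h * Eb/N0)).

    `iowc` maps (w, h) -> A_{w,h}, the input-output weight coefficients.
    """
    ebno = 10.0 ** (ebno_db / 10.0)
    pb = 0.0
    for (w, h), a in iowc.items():
        pb += (w / k) * a * q_function(math.sqrt(2.0 * rate * h * ebno))
    return pb

# Toy IOWC for the (3,1) repetition code: its single nonzero codeword
# has input weight w = 1 and output weight h = 3.
bound = union_bound_ber({(1, 3): 1}, k=1, rate=1.0 / 3.0, ebno_db=4.0)
print(bound)
```

For a real HCCC, the dictionary would hold the coefficients produced by the transfer-function (or recursive) enumeration of the equivalent block code.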


A. Hybrid Concatenated Convolutional Codes

The structure of a hybrid concatenated convolutional code is shown in Fig. 1. It is composed of three concatenated codes: the parallel code Cp with rate R^p_c = kp/np and equivalent block code representation (N1/R^p_c, N1); the outer code Co with rate R^o_c = ko/po and equivalent block code representation^1 (N1/R^o_c, N1); and the inner code Ci with rate R^i_c = pi/ni and equivalent block code representation (N2/R^i_c, N2), with two interleavers N1 and N2 bits long, generating an HCCC CH with overall rate Rc. For simplicity, we assume kp = ko and po = pi ≜ p; then Rc = ko/(np + ni).

Since the HCCC has two outputs, the upper bound on the bit-error probability in Expression (1) can be modified to

P_b \le \sum_{h_1=h^p_m}^{n_1} \sum_{h_2=h^i_m}^{n_2} \sum_{w=w_m}^{k} \frac{w}{k} A^{C_H}_{w,h_1,h_2} Q\left(\sqrt{2 R_c (h_1 + h_2) \frac{E_b}{N_0}}\right)    (2)

where A^{C_H}_{w,h_1,h_2} for the HCCC code CH represents the number of codewords of the equivalent block code with output weight h1 for the parallel code and output weight h2 for the inner code associated with an input sequence of weight w; A^{C_H}_{w,h_1,h_2} is the IOWC for the HCCC; wm is the minimum weight of an input sequence generating the error events of the parallel code and the outer code; h^p_m is the minimum weight of the codewords of Cp; and h^i_m is the minimum weight of the codewords of Ci.

Fig. 1. A hybrid concatenated code. [Block diagram: the k input bits feed the outer encoder Co (rate ko/po) and, through the interleaver π1 of length N1, the parallel encoder Cp (rate kp/np); the outer encoder output passes through the interleaver π2 of length N2 to the inner encoder Ci (rate pi/ni); the outputs of Cp and Ci are sent to the channel.]

B. Computation of A^{C_H}_{w,h_1,h_2} for Hybrid Concatenated Codes With Random Interleavers

If the input block k is large, then the computation of A^{C_H}_{w,h_1,h_2} for fixed interleavers is an almost impossible task, except for the first few input and output weights. However, the average input–output weight coefficients A^{C_H}_{w,h_1,h_2} for hybrid concatenated codes with two interleavers can be obtained by averaging Expression (2) over all possible interleavers. This average is obtained by replacing the actual interleavers with abstract interleavers called uniform interleavers [6], defined as probabilistic devices that map a given input word of weight w into all its distinct \binom{N}{w} permutations with equal probability p = 1/\binom{N}{w}, so that the input and output weights are preserved, where N represents the size of the interleaver.

With knowledge of the IOWC A^{C_p}_{w,h_1} for the constituent parallel code, the IOWC A^{C_o}_{w,l} for the constituent outer code, and the IOWC A^{C_i}_{l,h_2} for the constituent inner code, the A^{C_H}_{w,h_1,h_2} for the hybrid concatenated code can be obtained using the concept of the uniform interleaver.

1 Here the outer code is assumed to be a convolutional code, but conventional block codes, such as Hamming and Bose–Chaudhuri–Hocquenghem (BCH) codes, can be used as outer codes. To reduce the trellis complexity, short block codes can be used, say, m times, where m is the ratio of the input block size k of the HCCC over the input block size of the short block code.


According to the properties of uniform interleavers, the first interleaver transforms input data of weight w at the input of the outer encoder into all its distinct \binom{N_1}{w} permutations at the input of the parallel encoder. Similarly, the second interleaver transforms a codeword of weight l at the output of the outer encoder into all its distinct \binom{N_2}{l} permutations at the input of the inner encoder. As a consequence, each input data block of weight w, through the action of the first uniform interleaver, enters the parallel encoder generating \binom{N_1}{w} codewords of the parallel code Cp, and each codeword of the outer code Co of weight l, through the action of the second uniform interleaver, enters the inner encoder generating \binom{N_2}{l} codewords of the inner code Ci. Thus, the expression for the IOWC of the HCCC is

A^{C_H}_{w,h_1,h_2} = \sum_{l=0}^{N_2} \frac{A^{C_p}_{w,h_1} \times A^{C_o}_{w,l} \times A^{C_i}_{l,h_2}}{\binom{N_1}{w} \binom{N_2}{l}}    (3)

where A^{C_o}_{w,l} is the number of codewords of the outer code of weight l associated with an input word of weight w.
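The uniform-interleaver average of Eq. (3) can be sketched directly from the constituent IOWC tables. The tiny coefficient tables in the usage example below are hypothetical, chosen only to make the arithmetic checkable by hand:

```python
from math import comb

def hccc_iowc(a_p, a_o, a_i, n1, n2):
    """Average IOWC of Eq. (3):
    A^{CH}_{w,h1,h2} = sum over l of
        A^{Cp}_{w,h1} * A^{Co}_{w,l} * A^{Ci}_{l,h2} / (C(N1,w) * C(N2,l)).

    a_p: (w, h1) -> A^{Cp}_{w,h1}
    a_o: (w, l)  -> A^{Co}_{w,l}
    a_i: (l, h2) -> A^{Ci}_{l,h2}
    Returns a dict (w, h1, h2) -> averaged coefficient.
    """
    out = {}
    for (w, h1), ap in a_p.items():
        for (wo, l), ao in a_o.items():
            if wo != w:
                continue  # parallel and outer codes share the same input word of weight w
            for (li, h2), ai in a_i.items():
                if li != l:
                    continue  # the inner input weight is the outer output weight l
                key = (w, h1, h2)
                out[key] = out.get(key, 0.0) + ap * ao * ai / (comb(n1, w) * comb(n2, l))
    return out

# Hypothetical one-entry tables: parallel (w=1, h1=2), outer (w=1, l=2), inner (l=2, h2=3).
avg = hccc_iowc({(1, 2): 1}, {(1, 2): 1}, {(2, 3): 1}, n1=4, n2=8)
print(avg)  # {(1, 2, 3): 1/(C(4,1)*C(8,2))} = {(1, 2, 3): 1/112}
```

Feeding the resulting coefficients into the union bound then reproduces Expression (4).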

Since we compute the average performance, there will always be, for each value of the signal-to-noise ratio, at least one pair of particular interleavers yielding performance better than or equal to that of the two uniform interleavers. Using Eq. (3) in Expression (2), we can rewrite the upper bound in Expression (2) as

P_b(e) \le \sum_{h_1=h^p_m}^{N_1/R^p_c} \sum_{h_2=h^i_m}^{N_2/R^i_c} \sum_{w=w_m}^{N_1} \sum_{l=0}^{N_2} \frac{A^{C_p}_{w,h_1} \times A^{C_o}_{w,l} \times A^{C_i}_{l,h_2}}{\binom{N_1}{w} \binom{N_2}{l}} \frac{w}{N_1} Q\left(\sqrt{2 R_c (h_1 + h_2) \frac{E_b}{N_0}}\right)    (4)

Example 1. Consider a rate 1/4 HCCC formed by a parallel four-state recursive systematic convolutional code with rate 1/2, where the systematic bits of the parallel encoder (as for turbo codes) are not transmitted; an outer four-state nonrecursive convolutional code with rate 1/2; and an inner four-state recursive systematic convolutional code with rate 2/3, joined by two uniform interleavers of lengths N1 = N and N2 = 2N, where N = 20, 40, 100, 200, and 300.

The code generator matrices are shown in Table 1. Using Expression (4), we have obtained the bit-error probability curves shown in Fig. 2. The input–output weight coefficients of the constituent codes were computed with an efficient recursive algorithm that we have developed. The performance shows a very significant interleaver gain, i.e., lower values of the bit-error probability for higher values of N.

III. Design of Hybrid Concatenated Codes

In the following, we use analytical tools, definitions, and notations introduced in [6] and [9]. The design of hybrid concatenated codes is based on the asymptotic behavior of the upper bound in Expression (4) for large interleavers. The reason for the good performances of parallel and serial concatenated codes with input block sizes of N symbols was that the normalized coefficients A^C_{w,h}/N of a concatenated code decrease with interleaver size for all w and h. For a given signal-to-noise ratio and large interleavers, the maximum component of A^{C_H}_{w,h_1,h_2}/N over all input weights w and output weights h1 and h2 is proportional to N^{α_M}, with corresponding minimum output weight h(α_M).^2 If α_M < 0, then for a given SNR, the

2 The α_M is the largest exponent of N, as will be defined formally in Eq. (17).


Table 1. Generating matrices for the constituent convolutional codes.

Code description                G(D)
Rate 1/2 recursive, parallel    [ 1,  (1 + D^2)/(1 + D + D^2) ]
Rate 1/2 nonrecursive, outer    [ 1 + D + D^2,  1 + D^2 ]
Rate 2/3 recursive, inner       [ 1, 0, (1 + D^2)/(1 + D + D^2) ;  0, 1, (1 + D)/(1 + D + D^2) ]

Fig. 2. Analytical bounds for HCCC of Example 1. [BER versus Eb/N0 in dB; bound curves for N = 20, 40, 100, 200, and 300; simulation points for N = 16,384 with m = 10 and m = 15 iterations; and the capacity limit for rate 1/4 with binary input.]


performance of the concatenated code improves as the input block size is increased. If the input block size increases, then the sizes of the interleavers used in the concatenated code should increase also. When α_M < 0, we say that we have "interleaving gain" [6]. The more negative α_M is, the more interleaving gain we can obtain. In order to compute α_M, we proceed as follows: Consider a rate R = p/n convolutional code C with memory ν and its equivalent (N/R, N − pν) block code whose codewords are all sequences of length N/R bits of the convolutional code starting from and ending at the zero state. By definition, the codewords of the equivalent block code are concatenations of error events of the convolutional code. By "error event of a convolutional code" we mean a sequence diverging from the zero state at time t = 0 and remerging into the zero state at some discrete time t > 0. Let A^C_{w,h,j} be the input–output weight coefficients given that the convolutional code generates j error events with total input weight w and output weight h (see Fig. 3). The A^C_{w,h,j} actually represents the number of sequences of weight h, input weight w, and number of concatenated error events j without any gaps between them, starting at the beginning of the block. For N much larger than the memory of the convolutional code, the coefficient A^C_{w,h} of the equivalent block code can be approximated by

A^{C}_{w,h} \sim \sum_{j=1}^{n_M} \binom{N/p}{j} A^{C}_{w,h,j}    (5)

where n_M, the largest number of error events concatenated in a codeword of weight h and generated by a weight-w input sequence, is a function of h and w that depends on the encoder. The large-N assumption permits neglecting the length of the error events compared to N, which also implies that the number of ways j input sequences producing j error events can be arranged in a register of length N is \binom{N/p}{j}. The ratio N/p derives from the fact that the code has rate p/n and, thus, N bits correspond to N/p input symbols or, equivalently, trellis steps.

Fig. 3. A code sequence in A^C_{w,h,j}. [The sequence consists of j concatenated error events with input weights w_1, ..., w_j and output weights h_1, ..., h_j, where \sum_{i=1}^{j} w_i = w and \sum_{i=1}^{j} h_i = h.]

Let us return now to the block code equivalent to the HCCC. Using Expression (5) with j replaced by n^i for the inner code, j replaced by n^o for the outer code, and, similarly, j replaced by n^p for the parallel code,^3 and noting that N2/p = N1/ko ≜ N, we obtain [9]

A^{C_o}_{w,l} \sim \sum_{n^o=1}^{n^o_M} \binom{N}{n^o} A^{o}_{w,l,n^o}    (6)

3 In the following, the superscripts "p," "o," and "i" will refer to quantities pertaining to the parallel, outer, and inner codes, respectively, and the subscripts "m" and "M" will denote "minimum" and "maximum," respectively.


for the outer code and similar expressions for the inner and parallel codes. Then, substituting them into Expression (4), we obtain the bit-error probability bound of the hybrid concatenated block code equivalent to the HCCC as

P_b(e) \lesssim \sum_{h_1=h^p_m}^{N_1/R^p_c} \sum_{h_2=h^i_m}^{N_2/R^i_c} \sum_{w=w_m}^{N_1} \sum_{l=0}^{N_2} \sum_{n^p=1}^{n^p_M} \sum_{n^o=1}^{n^o_M} \sum_{n^i=1}^{n^i_M} \frac{\binom{N}{n^p} \binom{N}{n^o} \binom{N}{n^i}}{\binom{N_1}{w} \binom{N_2}{l}} A^{p}_{w,h_1,n^p} A^{o}_{w,l,n^o} A^{i}_{l,h_2,n^i} \frac{w}{N_1} Q\left(\sqrt{2 R_c (h_1 + h_2) \frac{E_b}{N_0}}\right)    (7)

We are interested in large interleaver lengths and, thus, use for the binomial coefficient the asymptotic approximation

\binom{N}{n} \sim \frac{N^n}{n!}

Substitution of this approximation in Expression (7) gives the bit-error probability bound in the form

P_b(e) \lesssim \sum_{h_1=h^p_m}^{N_1/R^p_c} \sum_{h_2=h^i_m}^{N_2/R^i_c} \sum_{w=w_m}^{N_1} \sum_{l=d^o_f}^{N_2} \sum_{n^p=1}^{n^p_M} \sum_{n^o=1}^{n^o_M} \sum_{n^i=1}^{n^i_M} N^{n^p+n^o+n^i-w-l-1} B_{h_1,h_2,w,l,n^p,n^o,n^i} Q\left(\sqrt{2 R_c (h_1 + h_2) \frac{E_b}{N_0}}\right)    (8)

where

B_{h_1,h_2,w,l,n^p,n^o,n^i} = \frac{w! \, l!}{p^l \, k_o^w \, n^p! \, n^o! \, n^i!} \, \frac{w}{k_o} \, A^{p}_{w,h_1,n^p} A^{o}_{w,l,n^o} A^{i}_{l,h_2,n^i}    (9)

Using Expression (8), we will obtain some important design rules. The bound in Expression (8) on the bit-error probability can be computed by adding terms of the first two summations with respect to the HCCC weights h1 + h2. The IOWC coefficients, which are functions of h1 and h2, depend, among other parameters, on N. For large N, and for given h1 and h2, the dominant coefficient corresponding to h1 + h2 is the one for which the exponent of N is maximum. Define this maximum exponent as

α(h_1, h_2) \triangleq \max_{w,l} \{ n^p + n^o + n^i - w - l - 1 \}    (10)

where the maximum is over all error events with w, l that produce the output weights h1 and h2. Evaluating α(h1, h2) is in general not possible without specifying the constituent codes. Thus, we will consider two important cases for which general expressions can be found.


A. The Exponent of N for the Minimum Weight

For large values of Eb/N0, the performance of the HCCC is dominated by the first terms of the summations in h1 and h2, corresponding to the minimum values h1 = h^p_m and h2 = h^i_m. Noting that n^p_M, n^o_M, and n^i_M are the maximum numbers of concatenated error events in codewords of the parallel, outer, and inner codes of weights h^p_m, l, and h^i_m, respectively, the following holds true:

n^i_M = \min\left\{ \left\lfloor \frac{h^i_m}{d^i_f} \right\rfloor, \left\lfloor \frac{l}{w^i_m} \right\rfloor \right\}    (11)

n^o_M = \min\left\{ \left\lfloor \frac{l}{d^o_f} \right\rfloor, \left\lfloor \frac{w}{w^o_m} \right\rfloor \right\}    (12)

n^p_M = \min\left\{ \left\lfloor \frac{h^p_m}{d^p_f} \right\rfloor, \left\lfloor \frac{w}{w^p_m} \right\rfloor \right\}    (13)

where w^i_m, w^o_m, and w^p_m are the minimum weights of input sequences generating codewords with nonpropagating low output weights for the inner, outer, and parallel encoders, respectively. If the parallel code is nonrecursive, then n^p_M ≤ w, and we obtain α(h^p_m, h^i_m) ≤ \max_{w,l}\{n^o + n^i − l − 1\}. Using the same method for maximization as in [14], we get

α(h^p_m, h^i_m) \le 1 - d^o_f    (14)

where d^o_f is the minimum Hamming distance of the outer code. This result is similar to the one obtained for serial concatenated codes. However, if the parallel code is recursive, then n^p_M ≤ ⌊w/2⌋ and α(h^p_m, h^i_m) ≤ −1 + \max_{w,l}\{n^o + n^i − l − 1\}, which, again using the method in [14], can be used to obtain

α(h^p_m, h^i_m) \le -d^o_f    (15)

The result in Expression (15) shows that the exponent of N corresponding to the minimum weight of HCCC codewords is always negative, thus yielding an interleaver gain at high Eb/N0. Substitution of the exponent α(h^p_m, h^i_m) into Expression (8), truncated to the first term of the summations in h1 and h2, yields

\lim_{E_b/N_0 \to \infty} P_b(e) \lesssim B_m N^{-d^o_f} Q\left(\sqrt{2 R_c (h^p_m + h^i_m) \frac{E_b}{N_0}}\right)    (16)

where the constant Bm is independent of N and can be computed from Expression (8) and Eq. (9).

Expression (16) suggests that, for the values of Eb/N0 and N where the HCCC performance is dominated by its free distance d^{C_H}_f = h^p_m + h^i_m, increasing the interleaver length yields a gain in performance. To increase the interleaver gain, one should choose a recursive parallel code and an outer code with large d^o_f. To improve the performance with Eb/N0, one should choose an inner–parallel code combination such that h^p_m + h^i_m is large. However, as in serial concatenated codes, there are coefficients corresponding to h1 and h2, for h1 > h^p_m and h2 > h^i_m, that may increase with N. Next, we will evaluate the largest exponent of N, defined as

α_M \triangleq \max_{h_1,h_2} \{ α(h_1, h_2) \} = \max_{w,l,h_1,h_2} \{ n^p + n^o + n^i - w - l - 1 \}    (17)

This exponent will allow us to find the dominant contribution to the bit-error probability for N → ∞.

B. The Maximum Exponent of N

We need to treat the cases of nonrecursive and recursive inner encoders separately.

1. The Nonrecursive Inner Encoder. Consider the inner code and its impact on the exponent of N in Eq. (17). For a nonrecursive inner encoder, we have n^i_M = l. In fact, every input sequence of weight one generates a nonpropagating low-output-weight error event, so that an input sequence of weight l will generate at most l error events, corresponding to the concatenation of l error events of input weight one. Thus, from Eq. (17), we have

α_M = \max_w \{ n^p + n^o - w - 1 \}

(1) At least one nonrecursive parallel or outer encoder. If the parallel encoder is nonrecursive, then n^p_M = w, or, if the outer encoder is nonrecursive, then n^o_M = w. Therefore, in any case, we have

αM ≥ 0

and interleaving gain is not allowed.

(2) Both parallel and outer encoders are recursive. In [9], it is proved that, for recursive convolutional encoders, the minimum weight of input sequences generating error events is 2. As a consequence, an input sequence of weight w can generate at most ⌊w/2⌋ error events. If both the parallel and outer encoders are recursive, then n^p_M = ⌊w/2⌋ and n^o_M = ⌊w/2⌋. In this case, we have

αM = −1

and interleaving gain is allowed. This is the same result as for turbo codes [9].

2. The Recursive Inner Encoder. For a recursive inner encoder, we have n^i_M = ⌊l/2⌋. Thus, from Eq. (17), we have

α_M = \max_{w,l} \left\{ n^p + n^o - w - \left\lfloor \frac{l + 1}{2} \right\rfloor - 1 \right\}

but l ≥ n^o d^o_f (note that d^o_f ≥ 2), so that

α_M \le \max_w \left\{ n^p - w - \left\lfloor \frac{d^o_f + 1}{2} \right\rfloor \right\}


(1) Nonrecursive parallel encoder. If the parallel encoder is nonrecursive, we have n^p_M = w; thus,

α_M \le -\left\lfloor \frac{d^o_f + 1}{2} \right\rfloor

and interleaving gain is allowed. This is the same result as for serial concatenated codes [14].

(2) Recursive parallel encoder. If the parallel encoder is recursive, we have n^p_M = ⌊w/2⌋. Since −⌊(w + 1)/2⌋ ≤ −1, we obtain

α_M \le -\left\lfloor \frac{d^o_f + 3}{2} \right\rfloor

which shows a higher interleaving gain than for serial concatenated codes. Note that this is a simple upper bound, based only on d^o_f. For a specific structure of the outer code, a tighter bound can be obtained since, if the parallel code is recursive, only the input weights w ≥ 2 need be considered for the outer code. Knowing the structure of the outer code, a tighter bound can be obtained by computing α_M ≤ \max_{w \ge 2}\{ −⌊(w + 1)/2⌋ + ⌊l_w/d^o_f⌋ − ⌊(l_w + 1)/2⌋ − 1 \}, where l_w is the minimum output weight of the outer code for input weight w.

In conclusion, in order to achieve the highest interleaving gain for an HCCC, we should select the types of component codes based on the analysis above. Thus, the HCCC should employ

(1) A recursive inner encoder.

(2) A recursive parallel encoder.

(3) An outer encoder that can be either nonrecursive or recursive but that should have a large d^o_f.

3. Computation of h(α_M). Next, we consider the weight h(α_M), which is the sum of the output weights of the inner and parallel codes associated with the highest exponent of N:

(1) For even d^o_f, the weight h(α_M) associated with the highest exponent of N is given by

h(α_M) = \frac{d^o_f \, d^i_{f,eff}}{2} + d^p_{f,eff}    (18)

(2) For odd d^o_f, the value of h(α_M) is given by

h(α_M) = \frac{(d^o_f - 3) \, d^i_{f,eff}}{2} + h^{(3)}_m + d^p_{f,eff}    (19)

where h^{(3)}_m is the minimum weight of sequences of the inner code generated by weight-3 input sequences. In Eqs. (18) and (19), d^i_{f,eff} and d^p_{f,eff} are the effective free distances of the inner and parallel codes, respectively. Tables of recursive systematic convolutional codes with maximum effective free distances and maximum h^{(3)}_m are given in [9] and [10].

IV. Iterative Decoding of Hybrid Concatenated Codes

In Sections II and III, we have shown by analytical findings that HCCCs can outperform PCCCs when decoded using an ML algorithm. In practice, however, ML decoding of these codes with large N is an almost impossible task. Thus, to acquire practical significance, the above-described codes and analytical bound need to be accompanied by a decoding algorithm of the same order of complexity as that for turbo codes, yet retaining their performance advantages. In this section, we present an iterative algorithm with complexity not significantly higher than that needed to separately decode the three constituent convolutional codes.

As for the iterative decoding of parallel (PCCC) [8,11,17,25] and serial (SCCC) concatenated convolutional codes, the core of the decoding procedure consists of an a posteriori probability (APP) decoding algorithm applied to the constituent convolutional codes. The functionality of the APP decoder to be used for HCCC is slightly different from that needed in the PCCC and even the SCCC decoding algorithms, as we will show shortly. For decoding of the received sequence, we will use the soft-input, soft-output (SISO) APP module described in [12] and [13]. A functional diagram of the iterative decoding algorithm for HCCC is presented in Fig. 4.

We will explain how the algorithm works according to the blocks of Fig. 4. The blocks labeled SISO have two inputs and two outputs. The input labeled λ(c; I) represents the reliability of the unconstrained output symbols of the encoder, while that labeled λ(u; I) represents that of the unconstrained input symbols of the encoder. Similarly, the outputs represent the same quantities conditioned on the code constraint as they are evaluated by the APP decoding algorithm in the log domain. As opposed to the iterative decoding algorithm employed for turbo decoding, where the APP algorithm computes only the "extrinsic" information of the input symbols of the encoder conditioned on the code constraint based on the unconstrained reliability of the encoder input symbols, we fully exploit here the potential of the APP algorithm

Fig. 4. The iterative decoding algorithm for HCCCs. [Block diagram: the demodulator output feeds the λ(c; I) ports of the SISO inner and SISO parallel modules; the λ(u; O) output of the SISO inner passes through π2^{-1} to the λ(c; I) port of the SISO outer, and the λ(u; O) output of the SISO parallel passes through π1^{-1} to its λ(u; I) port; the SISO outer extrinsic outputs λ(c; O) and λ(u; O) are fed back through π2 and π1 to the inner and parallel SISOs; the decision is taken from the input reliabilities of the outer and parallel SISOs. Unused ports are marked NOT USED.]

at the outer decoder, where all four ports of the SISO are used in the iterative decoder. The SISO APP module updates the reliability of both the input and output symbols based on the code constraints. Both outputs of the SISO, i.e., λ(c; O) and λ(u; O), directly generate the "extrinsic" information required for iterative decoding, so there is no need to subtract the unconstrained input reliability from the output reliability generated by the APP algorithm.

During the first iteration of the HCCC algorithm,

(1) The block labeled SISO inner is fed with the demodulator soft output, consisting of the reliabilities of the symbols received from the channel, i.e., the received output symbols of the inner encoder. The received reliabilities are processed by the SISO inner module, which computes the extrinsic information of the input symbols conditioned on the inner code constraints. This information is passed through the second inverse interleaver (the block labeled π2^{-1}). As the input symbols of the inner code (after inverse interleaving) correspond to the output symbols of the outer code, they are sent to the SISO outer module's upper port, which corresponds to the output symbols.

(2) The block labeled SISO parallel is fed with the demodulator soft output, consisting of the reliabilities of the symbols received from the channel, i.e., the received output symbols of the parallel encoder. The received reliabilities are processed by the SISO parallel module, which computes the extrinsic information of the input symbols conditioned on the parallel code constraints. This information is passed through the first inverse interleaver (the block labeled π1^{-1}). As the input symbols of the parallel code (after inverse interleaving) correspond to the input symbols of the outer code, they are sent to the SISO outer module's lower port, which corresponds to the input symbols.

(3) The block labeled SISO outer in turn processes the reliabilities of the unconstrained output and input symbols received from the SISO inner and parallel modules, respectively, and computes the extrinsic information of both output and input symbols based on the outer code constraints. The extrinsic information of the output and input symbols is fed back to the SISO inner and SISO parallel modules in the second iteration as unconstrained input reliabilities.

The reliability of the input symbols of the SISO outer module and the reliability of the input symbols of the SISO parallel module will be used in the final iteration to recover the information bits.

A. Bit-by-Bit Iterative Decoding Using the APP SISO Algorithm in Log Domain

For completeness, we briefly describe the SISO algorithm (used for the parallel, inner, and outer convolutional codes) based on the trellis section shown in Fig. 5 for a generic code E with input symbol u and output symbol c. A detailed description is provided in [13]. Consider an inner code with p1 input bits and q1 output bits taking values {0, 1}, a parallel code with p2 input bits and q2 output bits taking values {0, 1}, and an outer code with p3 input bits and q3 binary outputs {0, 1}. Let the input symbol to the convolutional code u_k(e) represent the input bits u_{k,i}(e), i = 1, 2, · · · , p_m, on a trellis edge at time k (m = 1 for the inner code, m = 2 for the parallel code, and m = 3 for the outer code), and let the output symbol of the convolutional code c_k(e) represent the output bits c_{k,i}(e), i = 1, 2, · · · , q_m (again with m = 1, 2, 3 for the inner, parallel, and outer codes, respectively).

Define the reliability of a bit Z taking values {0, 1} at time k as

λ_k[Z; ·] \triangleq \log \frac{P_k[Z = 1; ·]}{P_k[Z = 0; ·]}


Fig. 5. The trellis section for the code E. [Each edge e connects the start state s^S(e) to the end state s^E(e) and is labeled with the input symbol u(e) and the output symbol c(e).]

The second argument in the brackets, shown as a dot, may represent I, the input, or O, the output, of the SISO. We use the following identity:

a = \log \left[ \sum_{i=1}^{L} e^{a_i} \right] = \max_i \{a_i\} + \delta(a_1, \ldots, a_L) \triangleq {\max_i}^{*} \{a_i\}

where δ(a_1, · · · , a_L) is a correction term^4 that can be computed using a look-up table. We define the max* operation as a maximization (compare–select) plus a correction term (look-up table). The notation max* was first used in [18] for MAP detection of intersymbol interference channels (see also [8,11,17]). Assume binary PSK communication (QPSK also can be considered, if it uses Gray code mapping, since it is then equivalent to two parallel PSKs). The received samples {y_{k,i}} at the output of the receiver matched filter are normalized such that the additive noise samples have unit variance per dimension, i.e., y_{k,i} = \sqrt{2E_s/N_0}(2c_{k,i} − 1) + n_{k,i}, which is the assumed channel model. However, the described algorithm works with any other type of normalization if λ_k[C_{k,i}; I] in Eq. (20) is redefined accordingly. Without loss of generality, assume all encoders start (at the beginning of the block) at the all-zero state and end at the all-zero state (at the end of the block, when termination is used). For an encoder with memory ν_m, let s represent the state of the encoder, where s ∈ {0, . . . , 2^{ν_m} − 1}, m = 1, 2, 3.

1. The APP SISO Algorithm for the Inner and Parallel Codes. The forward and backwardrecursions, respectively, are

αk(s) = maxe:sE(e)=s

∗{αk−1

[sS(e)

]+

pm∑i=1

uk,i(e)λk [Uk,i; I] +qm∑i=1

ck,i(e)λk [Ck,i; I]

}+ hαk

βk(s) = maxe:sS(e)=s

∗{βk+1

[sE(e)

]+

pm∑i=1

uk+1,i(e)λk+1 [Uk+1,i; I] +qm∑i=1

ck+1,i(e)λk+1 [Ck+1,i; I]

}+ hβk

4 A. J. Viterbi, “An Intuitive Justification and a Simplified Implementation of the MAP Decoder for Convolutional Codes,”submitted to the JSAC issue on Concatenated Coding Techniques and Iterative Decoding: Sailing Toward Channel Ca-pacity, January 1998.


with initial values α0(s) = 0 if s = 0 (initial zero state) and α0(s) = −∞ if s ≠ 0. Similarly, βn(s) = 0 if s = 0 (final zero state) and βn(s) = −∞ if s ≠ 0. Recursions are done for k = 1, ..., n − 1, where n represents the total number of trellis steps for an encoder (obviously, the value of n can be different for the inner and the parallel codes). The channel reliabilities λk[Ck,i; I] are normalized versions of the observation samples at the output of the matched filter(s). Based on the channel model described above, they can be represented as

$$\lambda_k[C_{k,i}; I] = 2\sqrt{\frac{2E_s}{N_0}}\, y_{k,i} \qquad (20)$$

and hαk and hβk are normalization constants. For the hardware implementation of the SISO, these constants are used to prevent buffer overflow. The above operations are similar to the Viterbi algorithm used in the forward and backward directions, except for a correction term that is added when compare–select operations are performed (this is the so-called dual-generalized Viterbi algorithm). If we ignore the correction term, i.e., we replace max* with max, we have exactly the Viterbi algorithm in the forward and backward directions (this is also referred to as a bidirectional Viterbi algorithm). There is a small degradation in performance when replacing max* with max, which for some cases of turbo codes at low SNRs is roughly 0.5 dB [17].

The extrinsic bit information for Uk,j; j = 1, 2, ..., pm; m = 1, 2, can be obtained from

$$\lambda_k(U_{k,j}; O) = \max_{e:\,u_{k,j}(e)=1}{}^{*}\left\{\alpha_{k-1}\left[s^S(e)\right] + \sum_{\substack{i=1\\ i\neq j}}^{p_m} u_{k,i}(e)\,\lambda_k[U_{k,i}; I] + \sum_{i=1}^{q_m} c_{k,i}(e)\,\lambda_k[C_{k,i}; I] + \beta_k\left[s^E(e)\right]\right\}$$
$$\qquad - \max_{e:\,u_{k,j}(e)=0}{}^{*}\left\{\alpha_{k-1}\left[s^S(e)\right] + \sum_{\substack{i=1\\ i\neq j}}^{p_m} u_{k,i}(e)\,\lambda_k[U_{k,i}; I] + \sum_{i=1}^{q_m} c_{k,i}(e)\,\lambda_k[C_{k,i}; I] + \beta_k\left[s^E(e)\right]\right\}$$

At the first iteration, all input reliabilities λk[Uk,i; I] are set to zero. The inner and parallel SISOs compute the extrinsic information λk(Uk,j; O) from the above equations and provide it to the outer SISO. After the first iteration, the inner and parallel SISOs accept the extrinsics from the outer SISO (to be described next) as the reliabilities of the input bits of the inner and parallel encoders, together with the external observations from the channel, for the computation of the new extrinsic information λk(Uk,j; O) for the input bits. These are then provided to the outer SISO module.
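The two recursions and the extrinsic computation above can be exercised end to end on a toy example. The sketch below is ours, not the report's decoder: it assumes a two-state recursive systematic "accumulator" code with one input bit (pm = 1) and two coded bits (qm = 2) per step, a trellis terminated in the zero state, and it omits the normalization constants hαk and hβk, which floating-point arithmetic does not need:

```python
import math

def maxstar(terms):
    """log(sum_i e^{t_i}) -- exact max* (max plus correction term)."""
    m = max(terms)
    return m + math.log(sum(math.exp(t - m) for t in terms))

# Toy trellis (assumed): 2-state recursive systematic accumulator.
# An edge from state s under input u ends in state s^u and emits coded bits (u, s^u).
def edges(s):
    return [(u, (u, s ^ u), s ^ u) for u in (0, 1)]

def siso(lam_u, lam_c):
    """One SISO pass; returns the extrinsic information lambda(U_k; O) per step.

    lam_u[k] : input-bit reliability lambda[U_k; I]      (p_m = 1 here)
    lam_c[k] : coded-bit reliabilities lambda[C_{k,i}; I] (q_m = 2 here)
    """
    n, NEG = len(lam_u), -1e9  # NEG plays the role of -infinity

    def gamma(k, u, c):  # branch metric of one trellis edge
        return u * lam_u[k] + sum(ci * lam_c[k][i] for i, ci in enumerate(c))

    # forward recursion; alpha_0 pinned to the all-zero state
    alpha = [[0.0, NEG]] + [[0.0, 0.0] for _ in range(n)]
    for k in range(n):
        for s in (0, 1):
            alpha[k + 1][s] = maxstar(
                [alpha[k][sp] + gamma(k, u, c)
                 for sp in (0, 1) for u, c, se in edges(sp) if se == s])

    # backward recursion; beta_n pinned to the all-zero state (termination)
    beta = [[0.0, 0.0] for _ in range(n)] + [[0.0, NEG]]
    for k in range(n - 1, -1, -1):
        for s in (0, 1):
            beta[k][s] = maxstar(
                [beta[k + 1][se] + gamma(k, u, c) for u, c, se in edges(s)])

    # extrinsic for the input bit: exclude its own reliability lambda[U_k; I]
    ext = []
    for k in range(n):
        num, den = [], []
        for s in (0, 1):
            for u, c, se in edges(s):
                m = (alpha[k][s]
                     + sum(ci * lam_c[k][i] for i, ci in enumerate(c))
                     + beta[k + 1][se])
                (num if u else den).append(m)
        ext.append(maxstar(num) - maxstar(den))
    return ext
```

Because maxstar computes an exact log-sum, this is log-domain MAP decoding; replacing maxstar with a plain max gives the bidirectional (max-log) Viterbi approximation discussed above, at the small performance cost noted in the text.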

2. The SISO Algorithm for the Outer Code. The forward and backward recursions, respectively, are

$$\alpha_k(s) = \max_{e:\,s^E(e)=s}{}^{*}\left\{\alpha_{k-1}\left[s^S(e)\right] + \sum_{i=1}^{p_3} u_{k,i}(e)\,\lambda_k[U_{k,i}; I] + \sum_{i=1}^{q_3} c_{k,i}(e)\,\lambda_k[C_{k,i}; I]\right\} + h_{\alpha_k}$$

$$\beta_k(s) = \max_{e:\,s^S(e)=s}{}^{*}\left\{\beta_{k+1}\left[s^E(e)\right] + \sum_{i=1}^{p_3} u_{k+1,i}(e)\,\lambda_{k+1}[U_{k+1,i}; I] + \sum_{i=1}^{q_3} c_{k+1,i}(e)\,\lambda_{k+1}[C_{k+1,i}; I]\right\} + h_{\beta_k}$$

The extrinsic information for Ck,j; j = 1, 2, ..., q3, can be obtained from

$$\lambda_k(C_{k,j}; O) = \max_{e:\,c_{k,j}(e)=1}{}^{*}\left\{\alpha_{k-1}\left[s^S(e)\right] + \sum_{i=1}^{p_3} u_{k,i}(e)\,\lambda_k[U_{k,i}; I] + \sum_{\substack{i=1\\ i\neq j}}^{q_3} c_{k,i}(e)\,\lambda_k[C_{k,i}; I] + \beta_k\left[s^E(e)\right]\right\}$$
$$\qquad - \max_{e:\,c_{k,j}(e)=0}{}^{*}\left\{\alpha_{k-1}\left[s^S(e)\right] + \sum_{i=1}^{p_3} u_{k,i}(e)\,\lambda_k[U_{k,i}; I] + \sum_{\substack{i=1\\ i\neq j}}^{q_3} c_{k,i}(e)\,\lambda_k[C_{k,i}; I] + \beta_k\left[s^E(e)\right]\right\}$$

with initial values α0(s) = 0 if s = 0 and α0(s) = −∞ if s ≠ 0. Similarly, βn(s) = 0 if s = 0 and βn(s) = −∞ if s ≠ 0, where hαk and hβk are again normalization constants that, for a hardware implementation of the SISO, are used to prevent buffer overflow.

The extrinsic information for Uk,j; j = 1, 2, ..., p3, can be obtained from

$$\lambda_k(U_{k,j}; O) = \max_{e:\,u_{k,j}(e)=1}{}^{*}\left\{\alpha_{k-1}\left[s^S(e)\right] + \sum_{\substack{i=1\\ i\neq j}}^{p_3} u_{k,i}(e)\,\lambda_k[U_{k,i}; I] + \sum_{i=1}^{q_3} c_{k,i}(e)\,\lambda_k[C_{k,i}; I] + \beta_k\left[s^E(e)\right]\right\}$$
$$\qquad - \max_{e:\,u_{k,j}(e)=0}{}^{*}\left\{\alpha_{k-1}\left[s^S(e)\right] + \sum_{\substack{i=1\\ i\neq j}}^{p_3} u_{k,i}(e)\,\lambda_k[U_{k,i}; I] + \sum_{i=1}^{q_3} c_{k,i}(e)\,\lambda_k[C_{k,i}; I] + \beta_k\left[s^E(e)\right]\right\}$$

The final decision can be obtained from the bit-reliability computation of Uk,j; j = 1, 2, ..., p3, as

$$L_{k,j} = \lambda_k(U_{k,j}; O) + \lambda_k[U_{k,j}; I]$$

and then passing it through a hard limiter.

In the iterative decoding, the outer SISO accepts the extrinsics from the inner SISO and the parallel SISO as input reliabilities of the coded bits and input bits, respectively, of the outer encoder. For the outer SISO, there is no external observation from the channel. The outer SISO uses the input reliabilities to calculate new extrinsics λk(Ck,j; O) for the coded bits and λk(Uk,j; O) for the input information bits. These are then provided to the inner SISO and parallel SISO modules.

3. The SISO Algorithm for a Very Short Block Code as an Outer Code. When the outer code is a block code, one can obtain a trellis representation of the block code and use the SISO algorithm described in Section IV.A.2. However, if the block code (e.g., a parity-check code, Hamming code, or BCH code) is very short, one can use a simplified version of the SISO algorithm. In this case, there is no need for the forward and backward recursions. Actually, we can have a one-state trellis section, where the number of edges is equal to the number of codewords of the block code under consideration. Then we have the following brute-force-type algorithm: assume a (q, p) block code with input u and output codewords c. Note that the input block size of the HCCC (or SCCC) in this case can be an integer multiple of p, where p is small, say p ≤ 8.

The extrinsic information for Ck,j; j = 1, 2, ..., q, can be obtained from

$$\lambda_k(C_{k,j}; O) = \max_{e:\,c_{k,j}(e)=1}{}^{*}\left\{\sum_{i=1}^{p} u_{k,i}(e)\,\lambda_k[U_{k,i}; I] + \sum_{\substack{i=1\\ i\neq j}}^{q} c_{k,i}(e)\,\lambda_k[C_{k,i}; I]\right\} - \max_{e:\,c_{k,j}(e)=0}{}^{*}\left\{\sum_{i=1}^{p} u_{k,i}(e)\,\lambda_k[U_{k,i}; I] + \sum_{\substack{i=1\\ i\neq j}}^{q} c_{k,i}(e)\,\lambda_k[C_{k,i}; I]\right\}$$

The extrinsic information for Uk,j; j = 1, 2, ..., p, can be obtained from

$$\lambda_k(U_{k,j}; O) = \max_{e:\,u_{k,j}(e)=1}{}^{*}\left\{\sum_{\substack{i=1\\ i\neq j}}^{p} u_{k,i}(e)\,\lambda_k[U_{k,i}; I] + \sum_{i=1}^{q} c_{k,i}(e)\,\lambda_k[C_{k,i}; I]\right\} - \max_{e:\,u_{k,j}(e)=0}{}^{*}\left\{\sum_{\substack{i=1\\ i\neq j}}^{p} u_{k,i}(e)\,\lambda_k[U_{k,i}; I] + \sum_{i=1}^{q} c_{k,i}(e)\,\lambda_k[C_{k,i}; I]\right\}$$
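To make the brute-force enumeration concrete, here is a sketch for an illustrative (3,2) single-parity-check outer code (the code choice and names are ours, not the report's): the "trellis" is just the list of four codewords, and the extrinsic for a coded bit excludes that bit's own reliability:

```python
import math

def maxstar(terms):
    """log(sum_i e^{t_i}) -- exact max* (max plus correction term)."""
    m = max(terms)
    return m + math.log(sum(math.exp(t - m) for t in terms))

# Illustrative (3,2) single-parity-check code (our toy example):
# input u = (u1, u2), codeword c = (u1, u2, u1 XOR u2).
CODEWORDS = [((u1, u2), (u1, u2, u1 ^ u2)) for u1 in (0, 1) for u2 in (0, 1)]

def extrinsic_c(j, lam_u, lam_c):
    """Brute-force extrinsic for coded bit c_j over the one-state trellis
    (one edge per codeword), excluding c_j's own input reliability."""
    num, den = [], []
    for u, c in CODEWORDS:
        m = (sum(ui * lam_u[i] for i, ui in enumerate(u))
             + sum(ci * lam_c[i] for i, ci in enumerate(c) if i != j))
        (num if c[j] else den).append(m)
    return maxstar(num) - maxstar(den)
```

With λ[U; I] = 0 (the first iteration) and the convention λ = log P(1)/P(0) used here, the extrinsic on the parity bit reduces to the familiar "box-plus" combination −2 atanh(tanh(λ1/2) tanh(λ2/2)) of the other two coded-bit reliabilities, which is a handy sanity check.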

V. Special Case of a Hybrid Concatenated Code When the Outer Code Is a Repetition Code

When the outer code is a repetition code, the hybrid concatenated code reduces to a so-called "repetitive turbo code," as suggested by M. Bingeman and A. K. Khandani in [20], where they compared by simulation the suggested scheme with a parallel concatenation of three convolutional codes.

Here we can apply the analytical results and the decoder described in Section IV for hybrid concatenated codes to this special case.⁵ However, for the repetition code, the SISO algorithm reduces to a simpler operation, as will be explained in Section V.A.

For a rate 1/q repetition code concatenated N times, we have $A^{C_o}_{w,l} = \binom{N}{w}$ for l = wq, and zero otherwise, where N is the input block size in bits. Using Expression (5), we obtain

$$P_b(e) \le \sum_{h_1=h_m^p}^{N/R_c^p} \sum_{h_2=h_m^i}^{Nq/R_c^i} \sum_{w=w_m}^{N} \frac{A^{C_p}_{w,h_1} \times A^{C_i}_{wq,h_2}}{\binom{Nq}{wq}}\, \frac{w}{N}\, Q\!\left(\sqrt{2R_c\,(h_1 + h_2)\,\frac{E_b}{N_0}}\right) \qquad (21)$$

In this special case, there is no need to use the interleaver π1. Applying the results in Section II to this special case, we obtain αM and h(αM) for the preferred case where both convolutional codes are recursive. Using the tighter upper bound on αM described in Section III.B.2, or just Eq. (17), and noting that n^o = w, n^p_M = ⌊w/2⌋, n^i_M = ⌊l/2⌋, and l = wq, and after maximization, we obtain

αM = −q

5 A different iterative decoder that uses repetition and a "recombination algorithm" that requires threshold comparisons was suggested by Bingeman and Khandani.


Next we obtain the weight h(αM), which is the sum of the minimum output weights of the inner and parallel codes associated with the highest exponent of N, as

$$h(\alpha_M) = q\,d^{\,i}_{f,\mathrm{eff}} + d^{\,p}_{f,\mathrm{eff}} \qquad (22)$$

where, in Eq. (22), d^i_{f,eff} and d^p_{f,eff} are the effective free distances of the inner and parallel codes, respectively. This is due to a single error event with input weight 2 for the parallel code and q error events, each of input weight 2, for the inner code. For this scheme, as for turbo codes, input data are sent once. This can be done by sending the systematic bits of the parallel encoder and not sending the systematic bits of the inner encoder. It is interesting to note that exactly the same results for αM and h(αM) can be obtained when we consider a parallel concatenation of q + 1 convolutional codes (multiple turbo codes) when q of them are identical.

A. The SISO Algorithm for a Rate 1/q Repetition Outer Code

A rate 1/q repetition code can be viewed as a one-state encoder with a trellis section as shown in Fig. 6. The SISO algorithm described for the outer code can now be simplified by setting αk(s) = 0 and βk(s) = 0. Since there are only one state and two edges, corresponding to input bits 0 and 1, there is no need for the forward recursion, the backward recursion, or the max* operation. If we denote the input bit by uk and the output bits by ck,j; j = 1, ..., q, then we obtain the following results. This is a simplified version of the SISO algorithm for a very short block code, described in Section IV.A.3.

The extrinsic information for Ck,j; j = 1, 2, ..., q, can be obtained from

$$\lambda_k(C_{k,j}; O) = \lambda_k[U_k; I] + \sum_{\substack{i=1\\ i\neq j}}^{q} \lambda_k[C_{k,i}; I]$$

The extrinsic information for Uk can be obtained from

$$\lambda_k(U_k; O) = \sum_{i=1}^{q} \lambda_k[C_{k,i}; I]$$

Fig. 6. The trellis section of the repetition code, q = 3. [One state, two edges: input uk = 0 gives outputs ck,1 ck,2 ck,3 = 0 0 0; input uk = 1 gives outputs 1 1 1.]
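As a sketch (ours, not the report's code), the repetition-code SISO at one trellis step is just q additions per information bit, via a leave-one-out sum:

```python
def rep_siso(lam_u, lam_c):
    """SISO for a rate-1/q repetition code at a single trellis step.

    lam_u : input reliability lambda[U_k; I]
    lam_c : list of q coded-bit reliabilities lambda[C_{k,i}; I]
    Returns (extrinsics for each coded bit, extrinsic for the input bit).
    """
    total = sum(lam_c)
    # extrinsic for coded bit j: input reliability plus all OTHER coded bits
    ext_c = [lam_u + total - lam_c[j] for j in range(len(lam_c))]
    # extrinsic for the input bit: sum of all coded-bit reliabilities
    ext_u = total
    return ext_c, ext_u
```

This is why the self-iterative decoder described later needs only q adders for the repetition code's SISO.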


VI. Self-Concatenated Code

Consider a self-concatenated code, as shown in Fig. 7. This code can be considered as a special case of a hybrid concatenated code where the outer code Co is a rate 1/q repetition code and the parallel code is a rate 1, one-state code (no code). Since only one nontrivial code is used, we call this structure "self-concatenated." If Ci is a systematic recursive convolutional code, the systematic bits of the code are not transmitted. Various overall code rates can be obtained by puncturing the parity bits of Ci.

Fig. 7. A self-concatenated code. [Figure: the input data, of weight w, enter the rate 1/q repetition code Co (branches 1, ..., q), pass through the interleaver π, and drive the code Ci, whose input weight is qw and whose output weight is h; the outputs are x1 and x2.]

For the rate 1/q repetition code and its N concatenations, we have $A^{C_o}_{w,l} = \binom{N}{w}$ for l = wq, and zero otherwise. For the rate 1, one-state parallel code (no code), we have $A^{C_p}_{w,h_1} = \binom{N}{w}$ for h1 = w, and zero otherwise. Using Expression (4), we obtain

$$P_b \le \sum_{h=h_m}^{Nq/R_c^i} \sum_{w=1}^{N} \frac{w}{N} \binom{N}{w} \frac{A^{C_i}_{qw,h}}{\binom{qN}{qw}}\, Q\!\left(\sqrt{2R_c\,\frac{E_b}{N_0}\,(h + w)}\right) \qquad (23)$$
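Numerically, the bound of Eq. (23) is a direct double sum once the inner code's input–output weight enumerator is available. A hypothetical sketch (the enumerator used in any example call would be a made-up toy, not the enumerator of any code in the report):

```python
import math
from math import comb

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def self_concat_bound(N, q, Rc, A_inner, ebno_db):
    """Evaluate the union bound of Eq. (23).

    A_inner maps (input weight l, output weight h) -> multiplicity A^{Ci}_{l,h}.
    Only input weights l = q*w (w = 1..N) can occur at the inner encoder's input,
    since every information bit is repeated q times.
    """
    ebno = 10.0 ** (ebno_db / 10.0)
    pb = 0.0
    for (l, h), a in A_inner.items():
        if l == 0 or l % q != 0:
            continue            # not a multiple of q: cannot occur
        w = l // q
        if w > N:
            continue
        pb += (w / N) * comb(N, w) * a / comb(q * N, q * w) \
              * qfunc(math.sqrt(2.0 * Rc * ebno * (h + w)))
    return pb
```

A real evaluation would take A_inner from the inner code's transfer function; the sketch only shows how the terms of Eq. (23) combine, and the bound shrinks monotonically as Eb/N0 grows because Q is decreasing.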

Let us consider a special case of a self-concatenated code. If the interleaver is split into q equal sections corresponding to the original data and its q − 1 duplicates, and we send the data and its q − 1 permuted versions sequentially through a single convolutional code, then the overall code will be equivalent to a multiple turbo code (a parallel concatenation of q codes). The special case of q = 2 with a structured interleaver, which does not require trellis termination, was proposed by Berrou [16]. This is just another way to implement turbo codes with two identical constituent convolutional codes using a single convolutional encoder. Now let us go back to our proposed self-concatenated code, where we do not partition the interleaver into q interleavers. This allows us to obtain a better fixed interleaver of size qN, using a spread interleaver [5]. However, for the analysis in the next subsection, we use a uniform interleaver [6], which shows the same interleaving gain as for turbo codes.

A. The Maximum Exponent of N for the Self-Concatenated Code

Using Eq. (17), we obtain αM = max_{w,h}{w + n^i − qw − 1}, where n^i is the number of concatenated error events in code Ci. If we use a nonrecursive convolutional encoder Ci, as in Fig. 7, we have n^i ≤ qw. In this case, αM ≥ 0; thus, we have no interleaving gain. However, for a recursive convolutional encoder Ci, the minimum weight of input sequences generating error events is 2. As a consequence, an input sequence of weight qw can generate at most n^i = ⌊qw/2⌋ error events. In this case, the exponent of N is negative, and we have an interleaving gain. For q = 2, the maximum exponent of N is equal to −1, and the minimum output weight is h + w = d_{f,eff} + 1. For q = 3, the maximum exponent of N is equal to −2, and the minimum output weight is h + w = h^{(3)}_m + 1. However, if h = h^{(3)}_m = ∞, then the minimum output weight is h + w = 3d_{f,eff} + 2. We also could use the upper bound on αM of Section III.B.2 for a nonrecursive parallel code to obtain the previous results by noting that d^o_f = q.
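The maximization over w can be checked mechanically; a small sketch (our illustration) of αM = max_w {w + ⌊qw/2⌋ − qw − 1} for the recursive inner encoder:

```python
def alpha_M(q, w_max=64):
    """Maximum exponent of N for the self-concatenated code with a recursive
    inner encoder: an interleaved input of weight q*w can start at most
    floor(q*w / 2) error events, since each event needs input weight >= 2."""
    return max(w + (q * w) // 2 - q * w - 1 for w in range(1, w_max + 1))
```

Evaluating alpha_M(2) gives −1 and alpha_M(3) gives −2, matching the interleaving gains N^{−1} and N^{−2} stated above.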

An iterative decoder for the self-concatenated code, which we call a "self-iterative decoder" (since the SISO for the repetition code requires only q adders), is shown in Fig. 8 for q = 3.


Fig. 8. A self-iterative decoder. [Figure: the SISO for Ci receives the noisy observations x1 and x2 from the demodulator as λ(c; I); its output extrinsics λ(u; O) are demultiplexed, deinterleaved through π−1, summed by the repetition-code adders, reinterleaved through π, and multiplexed back as the input reliabilities λ(u; I); the accumulated bit reliability is hard-limited to give the decoded bits, and the λ(c; O) output is not used.]

VII. Simulation Results

The HCCC code proposed in Example 1, namely a rate 1/4 HCCC with three four-state constituent convolutional codes, was simulated over an AWGN channel for an input block of N = 16,384 bits. The iterative decoding scheme described in Section IV was used for the simulations. Simulation results are compared to analytic bounds (where the bounds are computed for much smaller block sizes) in Fig. 2. The bit-error probability versus the number of iterations, for a given Eb/N0 in dB as a parameter, is shown in Fig. 9(a). The same results are also plotted as bit-error probability versus Eb/N0 in dB, for a given number of iterations as a parameter, in Fig. 9(b). As is seen from Fig. 9(a), at Eb/N0 = 0.1 dB, a bit-error probability of 10^−5 can be achieved with 19 iterations. Thus, at this bit-error probability, the performance is within 0.9 dB of the Shannon limit for rate 1/4 codes over binary-input AWGN channels. An optimum rate 1/4 PCCC (turbo code) with two four-state codes at 10^−5 requires Eb/N0 = 0.4 dB with 15 iterations (there is a very small improvement if we use 18 iterations instead of 15 for the PCCC) and an input block of N = 16,384 bits. The main advantage of HCCC over PCCC, as can be seen from Fig. 9(b), occurs at low bit-error probabilities (less than 10^−6). Even if we increase the number of states of the PCCC constituent codes to 8 states, the HCCC with three four-state codes outperforms the PCCC at low bit-error rates. This also was predicted by our analysis. At high bit-error probabilities, HCCC and SCCC have similar performances (for the simulation performance of a rate 1/4 SCCC with two four-state codes and an input block of N = 16,384 bits, see [14]). However, at very low bit-error probabilities, HCCC outperforms SCCC since, based on our analysis, the interleaving gain is N^−4 for HCCC and N^−3 for SCCC (a factor of 16,384), while h(αM) = 11 in both cases. To obtain the results in Figs. 9(a) and 9(b), 5 × 10^8 random bits were simulated for each point.

A. Simulation Results for a Self-Concatenated Code

Consider a rate 1/4 self-concatenated code with a four-state recursive convolutional code G(D) = [1, (1 + D^2)/(1 + D + D^2)], q = 3, and an input block of N = 256 and 1024 bits, as shown in Fig. 10.


The simulation results for this code, using the iterative decoder in Fig. 8, are shown in Fig. 11. One use of the SISO per input block was considered as one iteration. Interleavers with spreads 11 and 23 were used for N = 256 and N = 1024, respectively, in the simulations. For N = 256 and an interleaver with spread 11, a minimum output weight of 26 was obtained over all possible input sequences (N = 256) of weights 1, 2, and 3. This minimum output weight occurred for input weight 2, only once.

Fig. 9. Simulation results for the HCCC with N = 16,384 bits: (a) BER versus number of iterations at different bit SNRs and (b) BER versus Eb/N0 at different numbers of iterations. [Plot residue omitted: panel (a) shows BER over 0 to 20 iterations at Eb/N0 = 0, 0.1, 0.2, and 0.3 dB; panel (b) shows BER versus Eb/N0 from 0.0 to 0.6 dB for 10, 11, 13, 15, and 18 iterations, together with the PCCC at 15 iterations.]

Fig. 10. A rate 1/4 self-concatenated code with q = 3 and N = 256 and 1024 bits.


Fig. 11. Simulation results for the rate 1/4 self-concatenated code with q = 3 and N = 256 and 1024 bits. [Plot residue omitted: BER versus Eb/N0 from 0.0 to 3.0 dB for the four-state, rate 1/4 code with repetition q = 3, with curves for 4, 6, and 8 iterations at each block size.]

Acknowledgments

The authors would like to thank Sergio Benedetto and Guido Montorsi of the Politecnico di Torino, Italy, for their helpful comments and suggestions.

References

[1] G. D. Forney, Jr., Concatenated Codes, Cambridge, Massachusetts: M.I.T., 1966.

[2] R. H. Deng and D. J. Costello, "High Rate Concatenated Coding Systems Using Bandwidth Efficient Trellis Inner Codes," IEEE Transactions on Communications, vol. COM-37, no. 5, pp. 420–427, May 1989.

[3] J. Hagenauer and P. Hoeher, "Concatenated Viterbi Decoding," Proceedings of the Fourth Joint Swedish–Soviet Int. Workshop on Information Theory, Gotland, Sweden, Studentlitteratur, Lund, pp. 29–33, August 1989.

[4] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo-Codes," Proceedings of ICC'93, Geneva, Switzerland, pp. 1064–1070, May 1993.

[5] D. Divsalar and F. Pollara, "Turbo Codes for PCS Applications," Proceedings of IEEE ICC'95, Seattle, Washington, June 1995.

[6] S. Benedetto and G. Montorsi, "Unveiling Turbo-Codes: Some Results on Parallel Concatenated Coding Schemes," IEEE Transactions on Information Theory, vol. 43, no. 2, March 1996.

[7] L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate," IEEE Transactions on Information Theory, pp. 284–287, March 1974.

[8] P. Robertson, E. Villebrun, and P. Hoeher, "A Comparison of Optimal and Sub-Optimal MAP Decoding Algorithms Operating in the Log Domain," Proceedings of ICC'95, Seattle, Washington, pp. 1009–1013, June 1995.

[9] S. Benedetto and G. Montorsi, "Design of Parallel Concatenated Convolutional Codes," IEEE Transactions on Communications, vol. 44, no. 5, pp. 591–600, May 1996.

[10] D. Divsalar and R. J. McEliece, "The Effective Free Distance of Turbo Codes," IEE Electronics Letters, vol. 32, no. 5, pp. 445–446, February 29, 1996.

[11] S. S. Pietrobon, "Implementation and Performance of a Serial MAP Decoder for Use in an Iterative Turbo Decoder," Proceedings of the 1995 IEEE International Symposium on Information Theory, Whistler, British Columbia, Canada, p. 471, September 1995.

[12] S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, "Soft-Output Decoding Algorithms for Continuous Decoding of Parallel Concatenated Convolutional Codes," Proceedings of ICC'96, Dallas, Texas, June 1996.

[13] S. Benedetto, G. Montorsi, D. Divsalar, and F. Pollara, "A Soft-Input Soft-Output Maximum A Posteriori (MAP) Module to Decode Parallel and Serial Concatenated Codes," The Telecommunications and Data Acquisition Progress Report 42-127, July–September 1996, Jet Propulsion Laboratory, Pasadena, California, pp. 1–20, November 15, 1996.
http://tda.jpl.nasa.gov/tda/progress report/42-127/127H.pdf

[14] S. Benedetto, G. Montorsi, D. Divsalar, and F. Pollara, "Serial Concatenation of Interleaved Codes: Performance Analysis, Design, and Iterative Decoding," The Telecommunications and Data Acquisition Progress Report 42-126, April–June 1996, Jet Propulsion Laboratory, Pasadena, California, pp. 1–26, August 15, 1996.
http://tda.jpl.nasa.gov/tda/progress report/42-126/126D.pdf

[15] S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, "Soft-Input Soft-Output Building Blocks for the Construction and Distributed Iterative Decoding of Code Networks," European Transactions on Telecommunications, invited paper to special issue, to appear in January–February 1998.

[16] C. Berrou and M. Jezequel, "Frame-Oriented Convolutional Turbo Codes," Electronics Letters, vol. 32, no. 15, July 18, 1996.

[17] S. Benedetto, G. Montorsi, D. Divsalar, and F. Pollara, "Soft-Output Decoding Algorithms in Iterative Decoding of Turbo Codes," The Telecommunications and Data Acquisition Progress Report 42-124, October–December 1995, Jet Propulsion Laboratory, Pasadena, California, pp. 63–87, February 15, 1996.
http://tda.jpl.nasa.gov/tda/progress report/42-124/124G.pdf

[18] J. Erfanian, S. Pasupathy, and G. Gulak, "Reduced Complexity Symbol Detectors With Parallel Structures for ISI Channels," IEEE Transactions on Communications, vol. 42, no. 2-4, pt. 3, pp. 1661–1671, February–April 1994.

[19] R. M. Tanner, Error-Correcting Coding System, U.S. Patent 4,295,218, October 13, 1981.

[20] M. Bingeman and A. K. Khandani, "Repetitive Turbo-Codes for High Redundancy Coding," Proceedings of the 1997 Conference on Information Sciences and Systems, Baltimore, Maryland, March 1997.

[21] J. H. Lodge, R. J. Young, P. Hoeher, and J. Hagenauer, "Separable MAP 'Filters' for the Decoding of Product Codes and Concatenated Codes," Proceedings of ICC'93, Geneva, Switzerland, pp. 1740–1745, May 1993.

[22] D. Divsalar, S. Dolinar, R. J. McEliece, and F. Pollara, "Transfer Function Bounds on the Performance of Turbo Codes," The Telecommunications and Data Acquisition Progress Report 42-122, April–June 1995, Jet Propulsion Laboratory, Pasadena, California, pp. 44–55, August 15, 1995.
http://tda.jpl.nasa.gov/tda/progress report/42-122/122A.pdf

[23] S. Dolinar and D. Divsalar, "Weight Distributions for Turbo Codes Using Random and Nonrandom Permutations," The Telecommunications and Data Acquisition Progress Report 42-122, April–June 1995, Jet Propulsion Laboratory, Pasadena, California, pp. 56–65, August 15, 1995.
http://tda.jpl.nasa.gov/tda/progress report/42-122/122B.pdf

[24] L. C. Perez, J. Seghers, and D. J. Costello, Jr., "A Distance Spectrum Interpretation of Turbo Codes," IEEE Transactions on Information Theory, vol. IT-42, pp. 1698–1709, November 1996.

[25] J. Hagenauer, E. Offer, and L. Papke, "Iterative Decoding of Binary Block and Convolutional Codes," IEEE Transactions on Information Theory, vol. 42, no. 2, pp. 429–445, March 1996.

