
A new Reed-Solomon code decoding algorithm based on Newton's interpolation


358 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 39, NO. 2, MARCH 1993

A New Reed-Solomon Code Decoding Algorithm Based on Newton's Interpolation

Ulrich K. Sorger, Student Member, IEEE

Abstract- A new Reed-Solomon code decoding algorithm based on Newton's interpolation is presented. This algorithm has as its main application fast generalized-minimum-distance decoding of Reed-Solomon codes. It uses a modified Berlekamp-Massey algorithm to perform all necessary generalized-minimum-distance decoding steps in only one run. With a time-domain form of the new decoder the overall asymptotic generalized-minimum-distance decoding complexity becomes O(dn), with n the length and d the distance of the code (including the calculation of all error locations and values). This asymptotic complexity is optimal. Other applications are the possibility of fast decoding of Reed-Solomon codes with adaptive redundancy and a general parallel decoding algorithm with zero delay.

Index Terms- Berlekamp-Massey algorithm, fast generalized-minimum-distance decoding, soft-decision decoding of Reed-Solomon codes.

I. INTRODUCTION

REED-SOLOMON codes and their subfield subcodes (Bose-Chaudhuri-Hocquenghem, Goppa, etc., or more generally alternant codes) are one of the most important and perhaps best known classes of codes in coding theory. Up to now the encoding and decoding of Reed-Solomon codes are based on the Fourier transform. The approach proposed here is based on interpolation. To use interpolation for coding was already proposed by Mandelbaum [11] back in 1979 (see also [7]). We shall focus our attention in this paper on Newton's interpolation and possibilities to decode using this interpolation.

We first give a brief review of Reed-Solomon codes and Newton's interpolation. In the next sections, we then derive key equations and a new decoding algorithm based on this interpolation formula. This decoding algorithm will turn out to have properties that are close to the properties of Newton's interpolation. First, it can be used without any change for all punctured or singly extended Reed-Solomon codes and their subfield subcodes. The next property is that this algorithm can be used as a black box where indexed symbols of the codeword corrupted by noise are successively fed into it in any order. After k of these symbols, k the number of information symbols, the algorithm produces at every subsequent step an error-locator polynomial. This error-locator polynomial is the one that is found by a conventional decoder with erasures at

Manuscript received March 26, 1991; revised March 11, 1992. This work was presented in part at the 1991 International Winter Meeting on Coding and Information Theory, Essen, Germany, December 1991.

The author is with the Institut für Netzwerk und Signaltheorie, Merckstrasse 25, 6100 Darmstadt, Germany.

IEEE Log Number 9203974.

the last locations not yet fed into the decoder. This property has many applications (e.g., fast generalized-minimum-distance decoding).

II. REED-SOLOMON CODES

A Reed-Solomon code of distance d and length n = q^s - 1 over GF(q^s) is defined in its simplest form:

Let z be a primitive element of the Galois field GF(q^s). A Reed-Solomon code over GF(q^s) is the ensemble of words c = (c_0, c_1, ..., c_{n-1}), with c_i = C(z^i) and C(x) any polynomial over GF(q^s) with deg{C(x)} <= n - d.
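This evaluation-style definition is easy to exercise numerically. The following sketch builds a codeword over the prime field GF(13), standing in for GF(q^s); the field, the primitive element z = 2, and the parameters n = 12, d = 5 are choices made for this example, not values from the paper:

```python
# Reed-Solomon codeword by evaluation, over GF(13) with primitive element z = 2.
p = 13                      # field size (a prime, so GF(p) is plain modular arithmetic)
z = 2                       # primitive element: its powers enumerate all nonzero elements
n = p - 1                   # code length
d = 5                       # designed distance; deg C <= n - d

def poly_eval(coeffs, x, p):
    """Evaluate a polynomial (lowest coefficient first) at x over GF(p), Horner style."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

# Any polynomial C(x) with deg C <= n - d = 7 yields a codeword c_i = C(z^i).
C = [3, 1, 4, 1, 5, 9, 2, 6]          # n - d + 1 = 8 information symbols (example)
codeword = [poly_eval(C, pow(z, i, p), p) for i in range(n)]
print(codeword)
```

Every choice of the eight information symbols in C yields one codeword of this length-12 toy code.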

The code is decoded in the following way. Let r = c + e be a received word with e the (possibly zero) error word. There is exactly one polynomial R(x) = R_0 + ... + R_{n-1}x^{n-1} over GF(q^s) that fulfills r_i = R(z^i). The coefficients R_i of R(x) and the values r_i of r are related via the Mattson-Solomon polynomial or the Fourier transform [2, pp. 207-246]. It follows that R(x) = C(x) + E(x), with e_i = E(z^i). As the maximum degree of the unknown C(x) is known, the d - 1 upper coefficients of R(x) are the upper coefficients of E(x) = E_0 + ... + E_{n-1}x^{n-1}. These highest coefficients are used as syndrome. To minimize the number of error symbols one tries to choose the unknown coefficients E_i so that E(x) has a maximum number of zeros.

For this purpose, one introduces an error-locator polynomial Λ(x) with Λ_i = Λ(z^i) = 0 if and only if e_i = E(z^i) ≠ 0. It follows that E(z^i)Λ(z^i) = 0 for all 0 <= i <= n - 1. E(x)Λ(x) must then be a multiple of x^n - 1 = prod_{i=0}^{n-1} (x - z^i), i.e., E(x)Λ(x) ≡ 0 mod (x^n - 1). Λ(x) can be found by using the known Fourier coefficients of E(x). To minimize the estimated number of errors is the same as to find such a polynomial Λ(x) that has the smallest number of zeros. With Λ(x) = prod_{i: e_i ≠ 0} (x - z^i) this is equivalent to finding the error-locator polynomial of smallest degree.

The multiplication mod x^n - 1 of R(x) and Λ(x) gives N(x) := R(x)Λ(x) = (C(x) + E(x))Λ(x) ≡ C(x)Λ(x). Hence, deg{N(x)} <= n - d + deg{Λ(x)}. Thus, with the convolution theorem [2, p. 209] one gets

    N_i = sum_{j=0}^{v} Λ_j E_{i-j} = 0,   for n - d + 1 + v <= i <= n - 1,   (1)

with N(x) = N_0 + ... + N_{n-1}x^{n-1} and the unknown v = deg{Λ(x)}. From these linear key equations, Λ(x) is uniquely determined up to degree (d - 1)/2. One may thus correct up to (d - 1)/2 errors. A fast possibility to solve (1) is the Berlekamp-Massey algorithm [1, pp. 184-188], [12].
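The key equations (1) can be checked numerically. The sketch below plants a single error in a small GF(13) example (field, primitive element z = 2, and the parameters n = 12, d = 5 are illustrative choices, not from the paper), computes the Fourier coefficients E_k, and verifies that the convolution with the true error locator vanishes on the syndrome range:

```python
# Numerical check of the key equations (1) over GF(13).
p, z, n, d = 13, 2, 12, 5

def inv(a, p):
    return pow(a, p - 2, p)   # modular inverse via Fermat's little theorem

j, v = 3, 7                            # plant one error: value v at location z^j
e = [v if i == j else 0 for i in range(n)]

# Fourier coefficients E_k with e_i = sum_k E_k z^{ik}:
E = [inv(n % p, p) * sum(e[i] * pow(z, (-i * k) % n, p) for i in range(n)) % p
     for k in range(n)]

Lam = [(-pow(z, j, p)) % p, 1]         # error locator Lambda(x) = x - z^j, lowest first
nu = 1                                 # nu = deg Lambda

# N_i = sum_k Lam_k E_{i-k} (indices mod n); the range n-d+1+nu <= i <= n-1 must vanish
N = [sum(Lam[k] * E[(i - k) % n] for k in range(nu + 1)) % p for i in range(n)]
print(N[n - d + 1 + nu:])              # -> [0, 0, 0]
```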

0018-9448/93$03.00 © 1993 IEEE


The previous procedure only gives the locations of the errors, but not their values. To evaluate the values, we write (x^n - 1)Ω(x) = E(x)Λ(x). Ω(x) can be calculated for deg{Λ(x)} < (d - 1)/2: Ω_i = sum_{j=0}^{v} Λ_j E_{n+i-j}, with E_k = 0 for k >= n. Clearly, E(z^i) = e_i may be calculated as Ω(x)(x^n - 1)/Λ(x) at z^i. However, this expression is undefined at the error locations, Λ(z^i) = 0. Using l'Hopital's rule one gets e_j = n Ω(z^j)/(z^j Λ'(z^j)), with z^{(n-1)j} = z^{-j} and Λ'(x) the formal derivative of Λ(x).
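Continuing a single-error example over GF(13) (an illustrative stand-in for GF(q^s); the error position and value are arbitrary choices), the evaluator Ω(x) and the l'Hopital expression above recover the error value:

```python
# Error evaluation over GF(13): e_j = n * Omega(z^j) / (z^j * Lambda'(z^j)).
p, z, n = 13, 2, 12

def inv(a, p):
    return pow(a, p - 2, p)

def peval(c, x, p):
    """Evaluate a polynomial (lowest coefficient first) at x over GF(p)."""
    acc = 0
    for a in reversed(c):
        acc = (acc * x + a) % p
    return acc

j, v = 3, 7                                   # one error: value v at location z^j
# Fourier coefficients of E(x), so that E(z^i) = e_i:
E = [inv(n % p, p) * v * pow(z, (-j * k) % n, p) % p for k in range(n)]
Lam = [(-pow(z, j, p)) % p, 1]                # Lambda(x) = x - z^j, lowest first
nu = 1                                        # nu = deg Lambda

# Omega_i = sum_k Lam_k E_{n+i-k}, E_k = 0 for k >= n (from E*Lam = (x^n - 1)*Omega)
Omega = [sum(Lam[k] * (E[n + i - k] if n + i - k < n else 0)
             for k in range(nu + 1)) % p for i in range(nu)]

dLam = [(k * Lam[k]) % p for k in range(1, len(Lam))]   # formal derivative Lambda'(x)

zj = pow(z, j, p)
ej = n * peval(Omega, zj, p) * inv((zj * peval(dLam, zj, p)) % p, p) % p
print(ej)                                     # recovers the planted error value v = 7
```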

III. NEWTON'S INTERPOLATION

The general interpolation problem is defined as follows: for the given vector of values a = (a_0, ..., a_{n-1}) and the vector of locations z = (z_0, ..., z_{n-1}) with z_i ≠ z_j for i ≠ j, find a polynomial A(x) with A(z_i) = a_i. This interpolation problem can be solved by the Fourier transform if z_i = z^i and z^n = 1. In the general case, it may be solved by Newton's interpolation.

The idea of Newton's interpolation is as follows. It is initialized by the calculation of a polynomial A_0(x) of degree 0 with A_0(z_0) = a_0, or A_0(x) = Δ_0 = a_0. The first step is the calculation of a polynomial A_1(x) = Δ_0 + (x - z_0)Δ_1 of degree 1 where again A_1(z_0) = a_0 but additionally A_1(z_1) = a_1. We see that A_1(z_0) = a_0 if again Δ_0 = a_0 as for A_0(x). Δ_1 is given by A_1(z_1) = a_0 + (z_1 - z_0)Δ_1 = a_1. So only one new value, Δ_1, is to be calculated. By continuing in this way, the polynomial A(x) is found.

The advantage of Newton's interpolation is that if the vectors a and z are enlarged from length n to length n + 1 (another interpolation value is supplied), then only one additional coefficient of the interpolation polynomial needs to be calculated.

The general form of Newton's interpolation is given by

    A(x) = Δ_0 + sum_{i=1}^{n-1} Δ_i prod_{k=0}^{i-1} (x - z_k),

with z_i ≠ z_j for i ≠ j and Δ_i the Newton coefficients, given by Newton's triangle:

    Δ_{0,0}  Δ_{0,1}  Δ_{0,2}  ...  Δ_{0,n-1}
             Δ_{1,1}  Δ_{1,2}  ...  Δ_{1,n-1}
                      Δ_{2,2}  ...  Δ_{2,n-1}      (2)
                               ...

with Δ_{0,i} = a_i = A(z_i), Δ_k = Δ_{k,k}, and the divided-difference recursion Δ_{k,i} = (Δ_{k-1,i} - Δ_{k-1,i-1})/(z_i - z_{i-k}).

A more detailed introduction to Newton's interpolation can be found, for example, in [13, pp. 668-670].
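As a concrete illustration of the triangle, here is a divided-difference computation of the Newton coefficients over the prime field GF(13) (the field and the sample points are choices made for this sketch; the paper works over GF(q^s)):

```python
# Newton coefficients over GF(13) via the divided-difference triangle.
p = 13

def inv(a, p):
    return pow(a, p - 2, p)

def newton_coeffs(zs, vals, p):
    """Newton coefficients Delta_i = f[z_0,...,z_i], computed in place."""
    coef = [v % p for v in vals]
    for k in range(1, len(zs)):
        for i in range(len(zs) - 1, k - 1, -1):
            # Delta_{k,i} = (Delta_{k-1,i} - Delta_{k-1,i-1}) / (z_i - z_{i-k})
            coef[i] = (coef[i] - coef[i - 1]) * inv((zs[i] - zs[i - k]) % p, p) % p
    return coef

def newton_eval(coef, zs, x, p):
    """Evaluate A(x) = Delta_0 + sum_i Delta_i prod_{k<i} (x - z_k), Horner style."""
    acc = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        acc = (acc * (x - zs[i]) + coef[i]) % p
    return acc

zs = [1, 2, 3, 4, 5]                           # distinct locations (example choice)
vals = [(3 + 2 * x + x * x) % p for x in zs]   # samples of A(x) = 3 + 2x + x^2
coef = newton_coeffs(zs, vals, p)
print(coef)                                    # -> [6, 5, 1, 0, 0]: high coefficients vanish

# Appending one more interpolation point only appends one new coefficient:
coef6 = newton_coeffs(zs + [6], vals + [(3 + 12 + 36) % p], p)
print(coef6[:5] == coef)                       # -> True
```

Because Δ_i depends only on the first i + 1 points, appending an interpolation value only appends one coefficient; this is the property the decoder exploits.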

An important property for our purposes is the inverse of the property that only one new Newton coefficient has to be calculated if a new interpolation value is appended: the k last values a_i influence only the k highest Newton coefficients Δ_i (for k < n). Another important property is the "Newton convolution" formula (3) derived in the Appendix: it gives the Newton coefficients of a product N(x) = B(x)A(x) written in the "Newton form," with B_{i,k} = B_{i,k+1} + (z_{k+1} - z_{k-i})B_{i+1,k+1} the elements of Newton's triangle (2) of B(x) and Δ_i the Newton coefficients of A(x).

IV. DECODING IN NEWTON COEFFICIENTS

We use from now on a definition of Reed-Solomon codes that is up to permutations equivalent to the definition previously given:

Let z_i ≠ 0, z_i ≠ z_j for i ≠ j be the elements of the Galois field GF(q^s). A (permuted) Reed-Solomon code over GF(q^s) is the ensemble of words c = (c_0, c_1, ..., c_{n-1}) with c_i = C(z_i) and C(x) any polynomial over GF(q^s) with deg{C(x)} <= n - d.

For a received word r a polynomial R(x) with R(z_i) = r_i can be found using Newton's interpolation. If r = c + e, with e the (possibly zero) error vector and c an arbitrary Reed-Solomon codeword, then again R(x) = C(x) + E(x). As the highest d - 1 Newton coefficients of C(x) are zero in any form of Newton's interpolation (C(x) has by the definition of the Reed-Solomon code bounded degree), the highest coefficients of R(x) depend only on the error vector e. These d - 1 highest Newton coefficients E_i = R_i of R(x) are thus a modified syndrome.

To find key equations one introduces the error-locator polynomial Λ(x) as previously stated. N(x) = R(x)Λ(x) mod prod_{i=0}^{n-1} (x - z_i) is given in Newton coefficients by the "Newton convolution" formula (3). As again deg{N(x)} <= n - d + deg{Λ(x)}, modified key equations depending on the modified syndrome can be given.

The decoding problem is as follows: if fewer than d/2 errors have occurred, then the zeros of the polynomial Λ(x) of smallest degree v that solves (4) determine the error locations. One recognizes that the general form of (4) will not change if some upper coefficients E_i are unknown. The number of key equations, however, becomes smaller. This implies, with the property of the Newton coefficients described in Section III, that if Λ(x) is the polynomial of smallest degree that solves all but some upper k modified key equations, then it is the solution of the decoding problem with erasures at the last k locations. Note that Newton's interpolation can be applied for all punctured and singly extended Reed-Solomon codes. For a punctured code, the polynomial C(x) is just not evaluated at all points of the


Galois field, for an extended code the evaluation at zero is considered, too.

For a generalized Reed-Solomon code and its subfield subcodes, the code polynomials are defined by C(x) = M(x)B(x), with M(x) a fixed polynomial with M(z_i) ≠ 0 and B(x) a polynomial of degree smaller than n - d + 1 [2, pp. 228-229]. In this case, the modified key equations are found by just replacing r_i by r_i M^{-1}(z_i).

A. Decoding with a Modified Berlekamp-Massey Algorithm

To describe briefly the original Berlekamp-Massey algorithm, we first state its basic operations.

Calculation of the Discrepancy for a Given Trial Error-Locator Polynomial Λ(x): The discrepancy is defined to be the "first"¹ known nonzero "Fourier" coefficient of N(x) = E(x)Λ(x), provided that enough coefficients of E(x) are known so that this coefficient of N(x) is known.

Shift of the Discrepancy: All "Fourier" coefficients of N(x) can be shifted² one position to the right by a multiplication by x: with N^s(x) := xN(x), N^s_{i+1} = N_i. The position of the discrepancy is then also shifted. This operation is equivalent to a multiplication of the trial error-locator polynomial by x.

Force the Discrepancy to Zero: If the positions of the discrepancies Δ⁺ and Δ⁻ of two polynomials Λ⁺(x) and Λ⁻(x) are the same, then the polynomial Λ*(x) = Λ⁻(x)Δ⁺ - Λ⁺(x)Δ⁻ also solves the "next" key equation, as then N*_i = N⁻_i Δ⁺ - N⁺_i Δ⁻.

The Berlekamp-Massey algorithm works in the following way. First the discrepancies Δ⁻ and Δ⁺, and their positions, of two (properly chosen) initializing error-locator polynomials Λ⁻(x) and Λ⁺(x) are calculated. If the positions of these discrepancies are different, then one of the two initializing error-locator polynomials is shifted so that the positions are the same. An error-locator polynomial Λ*(x) that also solves the next key equation can then be calculated by the third operation (force the discrepancy to zero). One of the two polynomials Λ⁻(x) and Λ⁺(x) may then be replaced by Λ*(x) for the next iteration. As the aim is to solve the key equations by an error-locator polynomial of minimal degree, the error-locator polynomial of higher degree (after the shift) is crossed out. The degree (after the shift) of the error-locator polynomial that is crossed out and the degree of Λ*(x) are the same. But Λ*(x) solves at least one more key equation or, equivalently, its discrepancy is located at a later position.

With the remaining two error-locator polynomials this procedure is repeated. This again produces two error-locator polynomials, where one solves at least another key equation. If this procedure is repeated so often that (1) is solved (i.e., the discrepancy becomes undefined), again two polynomials remain. It can be shown that using an appropriate initialization

¹The "first" known nonzero "Fourier" coefficient is found by using the natural ordering of the coefficients (i.e., N_i, N_{i+1}, ... or N_{i+1}, N_i, ...) and the fixed start from either the lowest or the highest coefficient.

²This shift can only be performed because of the Toeplitz structure in (1).

the remaining polynomial of smaller degree is the minimal solution.
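For reference, the original (Fourier-domain) procedure sketched above can be written down compactly. The following is a textbook-style Berlekamp-Massey sketch over a prime field GF(p) that finds the shortest linear recurrence (connection polynomial) for a syndrome-like sequence; the field and the test sequence are example choices, and the bookkeeping (B, b, m) follows the standard formulation rather than the paper's notation:

```python
# Textbook Berlekamp-Massey over GF(p): shortest LFSR for a sequence s.
def berlekamp_massey(s, p):
    """Return (C, L): connection polynomial C (lowest first, C[0] = 1) and length L
    with s_i = -sum_{j=1..L} C_j s_{i-j} for all valid i."""
    C, B = [1], [1]          # current and previous connection polynomials
    L, m, b = 0, 1, 1        # register length, steps since length change, last discrepancy
    for i in range(len(s)):
        d = s[i]             # discrepancy of the current candidate at position i
        for j in range(1, L + 1):
            d = (d + C[j] * s[i - j]) % p
        if d == 0:           # current C already explains s_i
            m += 1
            continue
        coef = d * pow(b, p - 2, p) % p
        T = list(C)
        while len(C) < len(B) + m:       # C(x) <- C(x) - (d/b) x^m B(x)
            C.append(0)
        for j, Bj in enumerate(B):
            C[j + m] = (C[j + m] - coef * Bj) % p
        if 2 * L <= i:                   # length change: keep the old polynomial
            L, B, b, m = i + 1 - L, T, d, 1
        else:
            m += 1
    return C, L

p = 13
s = [1, 2]                                 # seed of an order-2 recurrence (example)
for _ in range(4):
    s.append((3 * s[-1] + 5 * s[-2]) % p)  # s_k = 3 s_{k-1} + 5 s_{k-2}
C, L = berlekamp_massey(s, p)
print(C, L)                                # -> [1, 10, 8] 2, i.e., C(x) = 1 - 3x - 5x^2
```

In a decoder, s would be the syndrome coefficients of (1) and C the error-locator candidate; the modification of this section replaces the multiplication by x in the shift step by a multiplication by x - z_k.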

To derive the basic steps of a modified Berlekamp-Massey algorithm that applies to (4) the three basic operations are modified.

1) Calculation of the Discrepancy: The discrepancy is now the first known nonzero Newton coefficient of N(x) = E(x)Λ(x).

2) "Shifting" the Discrepancy: We first assume, in equivalence to the original algorithm, that Λ(x) is multiplied by x. The equivalent multiplication of N(x) gives, as easily checked, the Newton coefficients N^s_{i+1} = N_i + z_{i+1}N_{i+1} with N^s(x) := xN(x). We here have to distinguish, in difference to the original operation, two cases. If one wants to solve the key equation starting from the highest known Newton coefficient, this operation is already sufficient: the discrepancy is shifted. As the algorithm is in this form strictly equivalent to the original Berlekamp-Massey algorithm, it will not be considered in the remainder of the paper. If one wants to solve the key equation starting from the lowest known Newton coefficient, then Δ = N_k ≠ 0 and thus also Δ^s = N^s_k = z_k N_k ≠ 0; the discrepancy is therefore not yet shifted. If we do not multiply by x but by x - z_k, with k the position of the discrepancy, then N^s(x) := (x - z_k)N(x) and Δ^s = N^s_k = N_{k-1} = 0, as N^s_{i+1} = N_i + (z_{i+1} - z_k)N_{i+1}. Therefore the position of the discrepancy is "shifted" at least one position to the right.³

3) To force the discrepancy to zero does not need any change: Λ*(x) = Λ⁻(x)Δ⁺ - Λ⁺(x)Δ⁻.

Thus, the three operations of the original Berlekamp-Massey algorithm are modified so that they apply to Newton coefficients. We claim that the same general algorithm as in Fourier coefficients (as previously described) gives a modified Berlekamp-Massey algorithm that solves (4). The proof that it produces, with an appropriate initialization, the minimal error-locator polynomial, and how to initialize the algorithm, is deferred to the next section.

The algorithm solves (4) starting from the lowest known Newton coefficient of E(x). Hence, such a decoder has kept the general iterative performance of Newton's interpolation: if a previously unknown interpolation value is added, only one new decoding step has to be performed.

V. ANOTHER DESCRIPTION AND PROOF

In this section, we assume that the distance of the Reed-Solomon code is odd with d = 2t + 1. As before, z is the vector of locations. Let Ψ_{2i}(x) = prod_{j=n-2(t-i)}^{n-1} (x - z_j) be the erasure-locator polynomial erasing the last 2(t - i) locations. We get the key equation on erasing these locations [2, p. 258]

³Here, the structure of (4) is used, which is no longer Toeplitz.


Let E(x) be given in Newton coefficients:

    E(x) = S~(x) + S(x) prod_{j=0}^{n-2t-1} (x - z_j),   (5)

i.e., S(x) = S_0 + S_1(x - z_{n-2t}) + ... and S_i = E_{i+n-2t}. Note that for a received word R(x) = E(x) + C(x), and that then S(x) is the (known) Newton syndrome. Rewriting (5) gives the key equation (6). With Ψ_0(x) = (x^n - 1)/prod_{j=0}^{n-2t-1} (x - z_j), the decoding problem becomes: if the 2(t - i) last locations are erased, find the polynomial Λ_i(x) of smallest degree solving

    S(x)Λ_i(x) ≡ K_i(x) mod Ψ_0(x)/Ψ_{2i}(x)   (7)

with deg{Λ_i(x)} > deg{K_i(x)} (see (6)). This solution is then necessarily unique up to a constant factor. If we have solved (7), we wish to solve the next problem (i → i + 1) using the old solution. However, we do not only need the minimal solution of (7); as seen in the previous section, we also need a second solution.

We shall now prove (using this description) that the modified Berlekamp-Massey algorithm really finds the error-locator polynomials of smallest degree. We shall split the proof into some lemmas.

Lemma 1: If Λ_i(x) is the minimal polynomial solving (7), then gcd[Ω_i(x), Λ_i(x)] = 1.

Proof: If gcd[Ω_i(x), Λ_i(x)] = V(x) ≠ 1, then Λ_i(x) = Λ'_i(x)V(x) and Ω_i(x) = Ω'_i(x)V(x). Thus V(x) divides K_i(x). Therefore, Λ'_i(x) also solves (7) and deg{Λ'_i(x)} < deg{Λ_i(x)}. But Λ_i(x) is minimal. □

Lemma 2: If Λ_i(x) is the minimal polynomial solving (7), then it divides all other solutions Λ'(x) of (7) with deg{Λ'(x)} <= i + 1.

Proof: Equation (7) defines a set of i linear equations. The number of variables is one more than the degree of the error-locator polynomial. If one restricts oneself to degree i, one has i + 1 variables. The degree of the minimal polynomial is nothing else but the rank of this linear system. The number of linearly independent solutions is thus i - l + 1, with l = deg{Λ_i(x)}. These linearly independent solutions may be given by x^k Λ_i(x) with 0 <= k <= i - l. □

Lemma 2 implies that if there is a minimal solution of degree l solving (7), then the next "independent" solution (without common divisors) has to have at least degree 2i - l + 1: the minimal solution can be used to find (by shifting) a solution of (7) with i → 2i - l of degree 2i - l that is, with Lemma 2, divided by the minimal solution.

In the next lemma, the basic steps of the modified Berlekamp-Massey algorithm are proved. This lemma has the iterative structure of the Berlekamp-Massey algorithm.

Lemma 3: Let Λ_i(x) with deg{Λ_i(x)} = l be the minimal polynomial solving (7) and Λ'_i(x) of degree 2i - l + 1 be another solution of (7) that is not divided by Λ_i(x). Then the nonzero polynomials given by (9) and (10), with Λ~(x) the minimal solution of

    Λ(x)Ω_i(x) - D(x)Ψ_{2i}(x) = Θ(x),   (8)

solve (7) with i → i + 1. One of them is minimal with degree k. The other has degree 2(i + 1) - k + 1, and it is not divided by the minimal solution.

Proof: The maximum degree of Λ~(x) is l: Λ~(x) = {Ω_i(x)/D(x)}^{-1}Θ(x) mod Ψ_{2i}(x)/Ψ_{2i+2}(x). It exists because with (10) gcd[Ω_i(x)/D(x), Ψ_{2i}(x)/Ψ_{2i+2}(x)] = 1. This gives that both polynomials solve (7) with i → i + 1.

If deg{Λ_{i+1}(x)} = k = l + 2 - deg{B(x)} <= i + 1, then Λ_{i+1}(x) is with Lemma 2 the minimal solution: any error-locator polynomial of degree less than i + 1 that solves (7) is divided by Λ_i(x). The degree of Λ'_{i+1}(x) is then by (9) the maximum of deg{D(x)} + 2i - l + 1 and deg{Λ~(x)} + l, which gives 2i - k + 3. Finally, Λ_{i+1}(x) does not divide Λ'_{i+1}(x): Λ_i(x) does not divide B(x)Λ'_i(x), using (10) and Lemma 1, which directly implies with (9) that already Λ'_i(x) does not divide Λ'_{i+1}(x).

If deg{Λ_{i+1}(x)} > i + 1, then deg{Λ'_{i+1}(x)} = i + 1, as the maximum degree of the minimal Λ_i(x) is i: i linear equations with i + 1 variables always have a nontrivial solution. Hence, D(x) = 1 and Λ'_{i+1}(x) with deg{Λ'_{i+1}(x)} = i + 1 is then the minimal nonzero solution (Λ_i(x) does not divide Λ'_{i+1}(x), so Λ'_{i+1}(x) ≠ 0). Λ_i(x) does not divide Λ'_{i+1}(x), as this would with (8) imply that gcd[Ψ_{2i}(x)/Ψ_{2i+2}(x), Λ'_{i+1}(x)] = V(x) ≠ 1. But then Λ'_{i+1}(x)/V(x) solves (7) with degree smaller than i + 1, in contradiction to Lemma 2 and the fact that Λ'_{i+1}(x) is not divided by Λ_i(x). □

Note that in Lemma 3, two basic steps of the modified Berlekamp-Massey algorithm as described in the previous section are performed simultaneously.

It remains to describe the initialization of the algorithm. There are many possibilities to find an initial set of two


polynomials. One only has to find the first minimal polynomial of degree 1 and then to take any other polynomial of the necessary degree that is not divided by the minimal solution.

To preserve the iterative structure of the algorithm, we propose the following initialization. If l is the smallest number for which S(z_l) ≠ 0 with l > n - d, then set for the initialization Λ_i(x) = 1, Ω_i(x) = S(x), K_i = 0, Λ'_i(x) = x, Ω'_i(x) = xS(x) - z_l S(z_l), and K'_i(x) = z_l S(z_l). If one proceeds as in Lemma 3, we get Λ⁻(x) = (x - z_l) and Λ⁺(x) = x prod_{j=n-d+1}^{l-1} (x - z_j), which is a valid initialization.

Using this initialization and Lemma 3, the following theorem is proved.

Theorem: The modified Berlekamp-Massey algorithm finds the error-locator polynomials of smallest degree.

One further remark on the proof: we did not use that we work with Newton's interpolation. We only used the fact that the interpolation polynomials P_i(x) := prod_{j=0}^{i-1} (x - z_j) have the property that P_i(x) divides P_{i+1}(x) for all i. Therefore, in any interpolation form with this property, a modified Berlekamp-Massey algorithm exists. Naturally this includes the "Fourier" form with P_i(x) = x^i.

VI. APPLICATIONS

A. Application to Adaptive Decoding

It may be interesting to have a decoder that is adaptive. Usually an adaptation to the channel is done by retransmitting the whole codeword (automatic repeat request). It is, however, better, as shown in [3], to transmit only some more redundant symbols if the decoded word does not fulfill some constraints. This procedure implies that a new decoding attempt is to be made after having received more redundancy. Using the modified Berlekamp-Massey algorithm as proposed in this paper, only some additional decoding steps are to be performed. The complexity of such an adaptive system is thus reduced.

B. Application to Generalized-Minimum-Distance Decoding

The decoding of generalized-concatenated codes [10, pp. 590-592] as well as soft-decision decoding can be done by the help of generalized-minimum-distance decoding [6, pp. 43-47], sometimes called successive erasure decoding, or the closely related Blokh-Zyablov algorithm [15]. These algorithms are also interesting for the decoding of block coded modulation, which may be considered as a generalized-concatenated code [9]. The generalized-minimum-distance algorithm applies if the demodulator, or an inner decoder, produces not only a hard-decision output but also information about the reliability of the symbol. This reliability information is used by the generalized-minimum-distance decoder: the decoder tries in different attempts to decode by the help of an error and erasure decoder. The first attempt is usually to try to decode without having set erasures. For the next attempt, the two least reliable symbols are erased, then the four least reliable, and so forth up to d - 1 erasures. The order in which these attempts are made is not of importance. We restrict ourselves to this short description of the structure of the generalized-minimum-distance decoding algorithm (for its performance, see, e.g., [5], [6, pp. 43-51], [14]).
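The attempt schedule described above is easy to state in code. This sketch (the reliability values and the list layout are invented for the example) produces the erasure set for each generalized-minimum-distance attempt:

```python
# GMD attempt schedule: erase the 0, 2, 4, ..., d-1 least reliable positions in turn.
d = 5                                                      # code distance (example)
reliabilities = [0.9, 0.2, 0.7, 0.1, 0.8, 0.6, 0.3, 0.95]  # per-symbol soft values (example)
order = sorted(range(len(reliabilities)), key=lambda i: reliabilities[i])

attempts = [set(order[:k]) for k in range(0, d, 2)]        # one erasure set per attempt
for k, erased in zip(range(0, d, 2), attempts):
    print(k, sorted(erased))
```

Each of the at most ⌈d/2⌉ attempts would normally require its own error-and-erasure decoding; the point of the modified Berlekamp-Massey algorithm is that, with the least reliable symbols placed at the last interpolation locations, all of these solutions fall out of a single run.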

The complexity (i.e., the number of multiplications, additions, and comparisons) of the generalized-minimum-distance decoding algorithm is originally given by the number of attempts (at most ⌈d/2⌉) times the complexity of the error and erasure decoder (for Reed-Solomon codes O(dn)); hence, O(d²n). With the modified Berlekamp-Massey algorithm this complexity can be significantly reduced.

We might assume, without loss of generality, that the symbols at the last locations have the least reliability and that they are ordered with decreasing reliability. Otherwise just another form of Newton's interpolation has to be used. The modified Berlekamp-Massey algorithm solves the key equation starting from the first known coefficient E_{n-d+1}.

It produces error locator polynomials A(z) with increasing degree. These polynomials solve only the lower key equations. They are thus the solution of the problem that some least reliable symbols are erased. The modified Berlekamp-Massey algorithm produces in only one run the solutions of all attempts of the generalized-minimum-distance decoding algorithm.

So one can find all necessary error-locator polynomials in only one run. To calculate the zeros of these polynomials with low complexity, however, also needs the transform of the algorithm into the "time domain" as in [2, pp. 264-268].

1) Decoding in the Time Domain: We are first interested in the polynomial R(x)Ψ_0(x)/Ψ_i(x) (using the description of the preceding section). Shifting the discrepancies and forcing them to zero can be done knowing only this polynomial for the two error-locator polynomials Λ⁻(x) and Λ⁺(x), with the discrepancy to be forced to zero at location p = n - d + 1 + i (with the same notation as in (12)).

The values of the discrepancies are thus given by the evaluation of these polynomials at z_p, with C any nonzero constant. Set Δ_j = R(z_j)Ψ_0(z_j)/Ψ_i(z_j). We can thus define a "new" discrepancy to be the first Δ_j ≠ 0 with j > n - d;

    Δ*_j = Δ⁻_j Δ⁺ - Δ⁺_j Δ⁻

then gives the values of the polynomial after forcing the discrepancy at p to zero and, therefore, also the "new" discrepancy, and

    Δ^s_j = (z_j - z_p)Δ_j

then gives the values of the polynomial and therefore the discrepancy after a shift from location $p$. These results follow from $A(z_j)B(z_j) = a_j b_j$ and $A(z_j) + B(z_j) = a_j + b_j$ if $a_j = A(z_j)$ and $b_j = B(z_j)$.

$\Lambda(x)$ is calculated using the discrepancies $\Delta_p$. In order to correct the errors for all attempts, one has to find the zeros of all minimal $\Lambda(x)$. This is done, in the simplest form, by a transform or evaluation of $\Lambda(x)$ (i.e., $\Lambda_j = \Lambda(z_j)$). The complexity of this transform is $O(vn)$ with $v$ the degree of the error-locator polynomial. This means that if all error-locator polynomials are transformed independently, no gain in asymptotic complexity is achieved.

It is simple, however, to give an "iterative" transform that is only marginally more complex than a single transform. This "iterative" transform just transfers all operations into the time domain. The transform of $\Lambda^*(x)$ with (3) is calculated to

$$\Lambda_j^* = \Delta^-\Lambda_j^+ - \Delta^+\Lambda_j^-$$

and the transform of $\Lambda^s(x) = (x - z_p)\Lambda(x)$ to

$$\Lambda_j^s = (z_j - z_p)\Lambda_j.$$

Thus, the transforms of the error-locator polynomials can be found iteratively, and with them also the error locations $\Lambda(z_j) = \Lambda_j = 0$. Note that this iterative transform uses the same operations as the update of the discrepancies.
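As a sanity check of the componentwise rules above, the following Python sketch (toy field GF(17), illustrative polynomials and discrepancy values, none taken from the paper) verifies that the two time-domain updates agree with performing the corresponding operations on coefficients and then re-evaluating.

```python
# Time-domain (evaluation-vector) updates vs. coefficient-domain updates.
# Toy check over GF(17) with illustrative polynomials, not the paper's data.
P = 17
zs = [1, 3, 9, 10, 13, 5]   # distinct evaluation points z_j in GF(17)

def ev(poly, x):
    """Evaluate a coefficient list (lowest degree first) at x, mod P."""
    r = 0
    for c in reversed(poly):
        r = (r * x + c) % P
    return r

lam_minus = [2, 1]        # Lambda^-(x) = 2 + x
lam_plus  = [5, 0, 1]     # Lambda^+(x) = 5 + x^2
d_minus, d_plus = 4, 7    # discrepancies Delta^-, Delta^+
zp = zs[2]                # shift location z_p

# Coefficient domain: Lambda^*(x) = D^- Lambda^+(x) - D^+ Lambda^-(x)
star = [(d_minus * a - d_plus * b) % P
        for a, b in zip(lam_plus, lam_minus + [0])]
# Coefficient domain: Lambda^s(x) = (x - z_p) Lambda^-(x)
shift = [(-zp * lam_minus[0]) % P,
         (lam_minus[0] - zp * lam_minus[1]) % P,
         lam_minus[1] % P]

# Time domain: the same updates, componentwise on the evaluations
lm = [ev(lam_minus, z) for z in zs]
lp = [ev(lam_plus, z) for z in zs]
star_td  = [(d_minus * a - d_plus * b) % P for a, b in zip(lp, lm)]
shift_td = [((z - zp) * a) % P for z, a in zip(zs, lm)]

print(star_td == [ev(star, z) for z in zs])    # True
print(shift_td == [ev(shift, z) for z in zs])  # True
```

The agreement is immediate from $A(z_j)B(z_j) = a_j b_j$ and linearity; the point of the time-domain form is that each update costs only $O(n)$ componentwise operations.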

However, only calculating the error locations of $\Lambda(x)$ is not sufficient. The generalized-minimum-distance decoder also needs to know which erasure locations are, for a given error-locator polynomial $\Lambda(x)$, not in error. With the time-domain decoder this is simple: if $\Delta_j = 0$, then this location is correct, as then $E(z_j) = 0$.

One might also be interested in calculating the error values in the time domain. This is not really needed for the generalized-minimum-distance decoder of Reed-Solomon codes, but it is for the one of their subfield-subcodes: there it is not enough to find the locations of the error symbols of a decoded word; one has to know the error and erasure values to check whether it is a codeword. We rewrite (7) in the form

$$\Omega(x) = [S(x)\Lambda(x) - K(x)] \cdot \frac{\Omega_1(x)}{\Omega_0(x)}.$$

Substituting (11) into (5) gives

$$E(x) = [S(x)\Lambda(x) - K(x)] \cdot \frac{x^n - 1}{\Lambda(x)\Omega_0(x)}.$$

The error and erasure values can thus be calculated using only $K(x)$. They become

if $\Omega_0(z_j) \neq 0$ or $\Lambda(z_j) = 0$. $\qquad$ (12)

The "iterative" transform of $K(z_j) = k_j$ is again equivalent to the calculation of $\Lambda_j$ or $\Delta_j$: $k_j^* = \Delta^- k_j^+ - \Delta^+ k_j^-$ and $k_j^s = (z_j - z_k)k_j$. The "iterative" transform of $\Lambda'(z_j) = \Lambda'_j$ is found using $[x\Lambda(x)]' = x\Lambda'(x) + \Lambda(x)$; $K'(z_j)$ is calculated in exactly the same way. For the initialization, we set $\Lambda_j^- = 1$, $\Lambda_j^+ = z_j$, $\Lambda_j'^- = 0$, $\Lambda_j'^+ = 1$, $k_j^- = 0$, $k_j^+ = 0$, and $\Delta$ found as described in Section V. The $\Omega_0(z_j)$ represent the syndrome needed for the time-domain decoder; they can be found by an erasure decoder, that is, a generalized encoder.

This time-domain decoder has almost the same complexity (the factor is smaller than 2) as the time-domain decoder proposed by Blahut [2, pp. 264-268]. The overall asymptotic complexity is thus (as can also be seen in the flowchart given in the next subsection) $O(nd)$.

Kovalev [8] showed that it is sufficient to perform only half the number of generalized-minimum-distance decoding attempts. However, merely checking whether one of these attempts produced the closest codeword, or fulfils some distance properties, still needs at least $O(d)$ operations. The complexity of any generalized-minimum-distance decoding algorithm is thus lower bounded by this value. The proposed algorithm is, therefore, asymptotically optimal.

The complexity of this soft-decision decoding algorithm approaches the complexity of hard-decision decoding of Reed-Solomon codes. Hard-decision decoding, however, can be sped up by using fast convolutions and other fast Fourier techniques. It can nevertheless be stated that the difference in decoding complexities is not as big as assumed up to now. This result fits nicely with a recent result of Dumer [4], who noted that the complexity exponent of some search algorithms for (nearly) maximum-likelihood decoding is the same for soft and hard decisions.

2) Flowchart: The flowchart of a fast generalized-minimum-distance decoder becomes as follows if one is interested only in the error locations (for error values, the previous considerations are to be added). The operations in this flowchart are componentwise-written vector operations.

a) Order the symbols in decreasing reliability: $R(z_j) = r_j$ and the reliability of $r_j \geq$ the reliability of $r_{j+1}$.

b) Use an erasure decoder with the last $d - 1$ locations erased; its output is the starting solution.

c1) Initialize $\Lambda^-$: $\Lambda_j^- = \Omega_0(z_j)$, $k_j^- = 0$, $D^- = 0$, $p^- = n - d + 1$.
IF $\Delta^- = 0$ THEN $p^- \leftarrow p^- + 1$; GO TO c2).

c2) Initialize $\Lambda^+$: $\Lambda_j^+ = z_j\Lambda_j^- - z_{p^-}\Lambda_j^-$, $k_j^+ = z_j\Lambda_j^-$, $p^+ = n - d + 1$, $D^+ = 1$.

d) Basic step:
IF $\Delta^+ = 0$, THEN $p^+ \leftarrow p^+ + 1$; GO TO d).
IF $\Delta^- = 0$, THEN $p^- \leftarrow p^- + 1$; GO TO d).
IF $p^- > p^+$, THEN $\Lambda_j^+ \leftarrow \Lambda_j^+(z_j - z_{p^+})$; $D^+ \leftarrow D^+ + 1$.
ELSE $\Lambda_j^* \leftarrow \Delta^-\Lambda_j^+ - \Delta^+\Lambda_j^-$; $\Lambda_j^- \leftarrow \Lambda_j^-(z_j - z_{p^-})$; $\Lambda_j^+ \leftarrow \Lambda_j^*$; $D^- \leftarrow D^- + 1$.
IF $D^- > D^+$, THEN swap $+$ and $-$.
(Use that $\Lambda_j^* = \Delta^-\Lambda_j^+ - \Delta^+\Lambda_j^-$ and $\Lambda_j^s = (z_j - z_k)\Lambda_j$.)

e) Test $\Lambda^-$ for validity. If it passes the test, put its zeros and the zeros of $\Lambda^+$ on the list for generalized-minimum-distance decoding.

f) IF $p^- < n$ GO TO d).

C. Application to Parallel Decoding with Zero Delay

Parallelizing a decoding algorithm is interesting for implementation in VLSI. This means that one tries to perform as many steps as possible simultaneously to reduce the time complexity. The fast generalized-minimum-distance algorithm as just proposed has such a parallel structure. Moreover, it applies to punctured or extended codes, and it will have no delay if it performs one decoding step while a new symbol is received.

VII. SUMMARY

This paper uses Newton's interpolation to derive a set of new key equations for the decoding of Reed-Solomon codes. We then modified the Berlekamp-Massey algorithm so that it applies to this set of key equations. The intermediate error-locator polynomials turned out to be the solutions of subproblems of the generalized-minimum-distance decoding algorithm. Using this modified algorithm, one finds in one run all solutions of the generalized-minimum-distance decoding problem.

By using a time-domain description of this algorithm and an iterative transform of the error-locator polynomial, the complexity is again significantly reduced. The asymptotic complexity of generalized-minimum-distance decoding becomes $O(nd)$. The overall complexity of time-domain generalized-minimum-distance decoding of BCH codes is around the complexity of time-domain hard-decision decoding. Other applications of the modified Berlekamp-Massey algorithm are adaptive decoding and possibly a more general parallel decoder implementation.

APPENDIX

We derive here a convolution in Newton coefficients. Let $A(x)$ be of the form
$$A(x) = A_0 + \sum_{i=1}^{n-1} A_i \prod_{j=0}^{i-1}(x - z_j).$$
This polynomial $A(x)$ is to be multiplied by another polynomial $B(x)$. The result of this product, $N(x) = B(x)A(x)$, should again be given in the "Newton form"
$$N(x) = N_0 + \sum_{i=1}^{n-1} N_i \prod_{j=0}^{i-1}(x - z_j).$$
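To make the Newton form concrete, here is a small Python sketch (integer arithmetic and illustrative points; the paper works over GF(q)) that evaluates a Newton-form polynomial by a Horner-like rule and recovers the Newton coefficients by repeated synthetic division.

```python
# Evaluation and coefficient extraction for the "Newton form"
# A(x) = A_0 + sum_i A_i * prod_{j<i}(x - z_j), over the integers
# with illustrative interpolation points z_j.

def newton_eval(coeffs, zs, x):
    """Horner-like evaluation of a Newton-form polynomial."""
    r = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        r = coeffs[i] + (x - zs[i]) * r
    return r

def to_newton(poly, zs):
    """Ordinary coefficients (lowest degree first) -> Newton coefficients,
    by repeated synthetic division by (x - z_0), (x - z_1), ..."""
    poly = list(poly)
    out = []
    for z in zs[:len(poly) - 1]:
        q, r = [0] * (len(poly) - 1), poly[-1]
        for i in range(len(poly) - 2, -1, -1):
            q[i] = r                  # quotient coefficient
            r = poly[i] + z * r       # Horner step; final r is the remainder
        out.append(r)                 # remainder = next Newton coefficient
        poly = q
    return out + poly

zs = [0, 1, 2]
print(to_newton([2, 0, 1], zs))       # x^2 + 2  ->  [2, 1, 1]
print(newton_eval([2, 1, 1], zs, 5))  # 2 + 1*5 + 1*5*4 = 27
```

Each synthetic division peels off one basis factor $(x - z_i)$, so the conversion costs $O(n^2)$ operations.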

For this purpose, consider the products $\prod(x - z_j)$ as a basis of the vector space $\mathrm{GF}(q)^n$ described by polynomials, and expand $B(x)$ in this basis:
$$B(x) = B_{0,n-1} + \sum_{j=1}^{n-1} B_{j,n-1} \prod_{k=n-j}^{n-1}(x - z_k).$$
After the multiplication of $A(x)$ by $B(x)$, we get
$$N(x) \equiv N^{n-1}(x) + x^{n-1}\sum_{j=0}^{n-1} A_{n-1-j}B_{j,n-1}$$
with $N^{n-1}(x)$ a polynomial of degree smaller than $n-1$. This result was derived from $\prod_{i=0}^{n-1}(x - z_i) \equiv 0 \bmod (x^n - 1)$

and considering the degrees of the resulting polynomial terms. We thus have
$$N_{n-1} = \sum_{j=0}^{n-1} A_{n-1-j}B_{j,n-1}. \qquad (14)$$
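Identity (14) can be checked numerically. The following Python sketch uses the toy field GF(7) with $n = 3$, where $z_0, z_1, z_2 = 1, 2, 4$ are all cube roots of unity, and arbitrary illustrative polynomials $A$ and $B$ (none of these values come from the paper).

```python
# Numeric check of the Newton-coefficient convolution identity (14),
# N_{n-1} = sum_j A_{n-1-j} B_{j,n-1}, over a toy field GF(7) with n = 3.
# The z_i are all n-th roots of unity (here 1, 2, 4, the cube roots of 1 mod 7).
P, n = 7, 3
zs = [1, 2, 4]

def to_newton(poly, pts):
    """Newton coefficients w.r.t. the basis prod(x - pts[0..i-1]), mod P."""
    poly = [c % P for c in poly]
    out = []
    for z in pts[:len(poly) - 1]:
        q, r = [0] * (len(poly) - 1), poly[-1]
        for i in range(len(poly) - 2, -1, -1):
            q[i] = r
            r = (poly[i] + z * r) % P
        out.append(r)
        poly = q
    return out + poly

def polymul_mod(a, b):
    """Multiply two coefficient lists and reduce mod (x^n - 1) and mod P."""
    c = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[(i + j) % n] = (c[(i + j) % n] + ai * bj) % P
    return c

A = [3, 1, 5]   # an arbitrary polynomial A(x) = 3 + x + 5x^2
B = [2, 6, 4]   # an arbitrary polynomial B(x) = 2 + 6x + 4x^2

An = to_newton(A, zs)                    # A_i  (basis with points z_0, z_1, z_2)
Bn = to_newton(B, list(reversed(zs)))    # B_{j,n-1} (basis with points z_2, z_1, z_0)
Nn = to_newton(polymul_mod(A, B), zs)    # Newton coefficients of N = AB mod x^n - 1

lhs = Nn[n - 1]
rhs = sum(An[n - 1 - j] * Bn[j] for j in range(n)) % P
print(lhs == rhs)  # True
```

The reversed point order for $B$ realizes the "dual" basis $\prod_{k=n-j}^{n-1}(x - z_k)$ of the expansion above.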

To calculate the value of $N_{n-2}$, we write $B(x)$ in another basis (this basis we call dual to $n-2$):
$$B(x) = B_{0,n-2} + \sum_{j=1}^{n-1} B_{j,n-2} \prod_{k=n-1-j}^{n-2}(x - z_k).$$

For the product $N(x) = A(x)B(x) \bmod \prod(x - z_i)$, we get
$$N(x) \equiv N^{n-2}(x) + x^{n-2}\sum_{j=0}^{n-1} A_{n-j-2}B_{j,n-2},$$
with $N^{n-2}(x)$ a polynomial of degree smaller than $n-2$. It follows that
$$N_{n-2} = \sum_{j=0}^{n-1} A_{n-j-2}B_{j,n-2}.$$

Continuing in this manner, we get the general form of the convolution in Newton coefficients
$$N_i = \sum_{j=0}^{n-1} A_{i-j}B_{j,i} \qquad (15)$$
with (from now on all subscripts are taken mod $n$)
$$B(x) = B_{0,i} + \sum_{j=1}^{n-1} B_{j,i} \prod_{k=i-j+1}^{i}(x - z_k).$$

To be able to use (15), we need a relation among the $B_{i,k}$:
$$B_{i,k} = B_{i,k+1} + (z_{k-i} - z_{k+1})B_{i+1,k+1}. \qquad (16)$$
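The recurrence (16) can likewise be checked numerically. The Python sketch below (exact integer arithmetic, illustrative points and polynomial, not the paper's data) computes the dual-basis coefficients $B_{i,k}$ by synthetic division and verifies the recurrence for one pair of adjacent bases.

```python
# Numeric check of the basis-change recurrence (16),
# B_{i,k} = B_{i,k+1} + (z_{k-i} - z_{k+1}) B_{i+1,k+1},
# using exact integer arithmetic and illustrative points z_j.
zs = [5, 2, 7, 11]          # distinct points z_0..z_3 (illustrative)
n = len(zs)

def dual_coeffs(poly, k):
    """Coefficients B_{i,k} of poly in the basis dual to k:
    U_{i,k}(x) = prod_{j=k-i+1..k} (x - z_j), obtained by synthetic
    division at the points z_k, z_{k-1}, ... (indices mod n)."""
    poly = list(poly)
    out = []
    for t in range(len(poly) - 1):
        z = zs[(k - t) % n]
        q, r = [0] * (len(poly) - 1), poly[-1]
        for i in range(len(poly) - 2, -1, -1):
            q[i] = r
            r = poly[i] + z * r
        out.append(r)
        poly = q
    return out + poly

B = [4, 0, 3, 1]            # an arbitrary polynomial B(x) = 4 + 3x^2 + x^3
k = 2
Bk  = dual_coeffs(B, k)     # B_{i,k}
Bk1 = dual_coeffs(B, k + 1) # B_{i,k+1}
ok = all(Bk[i] == Bk1[i] + (zs[(k - i) % n] - zs[k + 1]) * Bk1[i + 1]
         for i in range(n - 1))
print(ok)  # True
```

For $i = 0$ the recurrence reduces to the familiar divided-difference relation $B(z_k) = B(z_{k+1}) + (z_k - z_{k+1})\,[B(z_k) - B(z_{k+1})]/(z_k - z_{k+1})$, which is a quick way to see why the change of basis costs only one multiply-add per coefficient.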

Note that (16) is equivalent to (2); this is why we used the same notation. To prove (16), we must find the relation between the polynomials of the basis dual to $k$,
$$U_{i,k}(x) = \prod_{j=k-i+1}^{k}(x - z_j),$$
and the polynomials of the basis dual to $k+1$,
$$U_{i,k+1}(x) = \prod_{j=k-i+2}^{k+1}(x - z_j).$$

$U_{0,j}(x)$ is defined to be 1. We note that
$$\prod_{j=k-i+2}^{k+1}(x - z_j) = (z_{k+1-i} - z_{k+1})\prod_{j=k-i+2}^{k}(x - z_j) + \prod_{j=k+1-i}^{k}(x - z_j).$$

Hence,
$$U_{i,k+1}(x) = (z_{k+1-i} - z_{k+1})U_{i-1,k}(x) + U_{i,k}(x). \qquad (17)$$
Let $B(x)$ be given in the basis dual to $k+1$. With (17), one gets
$$B(x) = \sum_{i=0}^{n-1} B_{i,k+1}\left[(z_{k+1-i} - z_{k+1})U_{i-1,k}(x) + U_{i,k}(x)\right]$$
and, therefore,
$$B(x) = \sum_{i=0}^{n-1}\left[B_{i,k+1} + (z_{k-i} - z_{k+1})B_{i+1,k+1}\right]U_{i,k}(x).$$

ACKNOWLEDGMENT

The author thanks the referees for several suggestions that improved the quality of the paper and gratefully acknowledges B. Dorsch who proposed the subject.

REFERENCES

[1] E. R. Berlekamp, Algebraic Coding Theory. New York: McGraw-Hill, 1968.

[2] R. E. Blahut, Theory and Practice of Error Control Codes. Reading, MA: Addison-Wesley, 1984.

[3] B. Dorsch, "Successive check digits rather than information repetition," in Proc. IEEE Int. Conf. Commun., Boston, MA, 1983, pp. 323-327.

[4] I. I. Dumer, "On cascade decoding algorithms and suboptimal decoding," presented at the First Int. Symp. Commun. Theory Applicat., Scotland, Sept. 1991.

[5] G. A. Kabatyansky, "About metrics and decoding domains of Forney's algorithm," in Proc. Fifth Int. Workshop Inform. Theory, Moscow, USSR, Jan. 1991, pp. 81-85.

[6] G. D. Forney, Concatenated Codes. Cambridge, MA: M.I.T. Press, 1966.

[7] W. Henkel, Zur Decodierung algebraischer Blockcodes über komplexen Alphabeten. Düsseldorf: VDI Verlag, 1989.

[8] S. I. Kovalev, "Two classes of minimum generalized distance decoding algorithms," Probl. Peredach. Inform., vol. 22, no. 3, pp. 35-42, Sept. 1986.

[9] F. R. Kschischang, P. G. de Buda, and S. Pasupathy, "Block coset codes for M-ary phase shift keying," IEEE J. Select. Areas Commun., vol. 7, pp. 900-912, Aug. 1989.

[10] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes. Amsterdam, The Netherlands: North-Holland, 1977.

[11] D. M. Mandelbaum, "Construction of error-correcting codes by interpolation," IEEE Trans. Inform. Theory, vol. IT-25, pp. 27-35, Jan. 1979.

[12] J. L. Massey, "Shift-register synthesis and BCH decoding," IEEE Trans. Inform. Theory, vol. IT-15, pp. 122-127, Jan. 1969.

[13] I. S. Sokolnikoff and R. M. Redheffer, Mathematics of Physics and Modern Engineering, 2nd ed. Tokyo: McGraw-Hill Kogakusha, 1966.

[14] D. J. Taipale and M. B. Pursley, "An improvement to generalized-minimum-distance decoding," IEEE Trans. Inform. Theory, vol. 37, p. 167, Jan. 1991.

[15] V. A. Zinoviev, "Generalized concatenated codes for channels with error bursts and independent errors," Probl. Peredach. Inform., vol. 17, no. 4, pp. 53-56, Oct. 1981.

