10.1.1.34.5518[1]

Date post: 07-Apr-2018
Upload: nidhi-saxena
  • 8/4/2019 10.1.1.34.5518[1]

    1/153

[3] R.E. Blahut, Theory and Practice of Error Control Codes, Addison-Wesley, Reading, Massachusetts, 1983.

[4] R.T. Chien, Cyclic decoding procedures for Bose-Chaudhuri-Hocquenghem codes, IEEE Trans. Inform. Theory, IT-10 (1964), 357-363.

[5] P. Elias, Error-correcting codes for list decoding, IEEE Trans. Inform. Theory, 37 (1991), 5-12.

[6] G.L. Feng and K.K. Tzeng, A generalization of the Berlekamp-Massey algorithm for multisequence shift-register synthesis with applications to decoding cyclic codes, IEEE Trans. Inform. Theory, 37 (1991), 1274-1287.

[7] S. Gao, M.A. Shokrollahi, Computing roots of polynomials over function fields of curves, draft.

[8] V. Guruswami, M. Sudan, Improved decoding of Reed-Solomon and algebraic-geometric codes, IEEE Trans. Inform. Theory, to appear.

[9] M. Kaminski, D.G. Kirkpatrick, N.H. Bshouty, Addition requirements for matrix and transposed matrix products, J. Algorithms, 9 (1988), 354-364.

[10] R. Lidl, H. Niederreiter, Finite Fields, Addison-Wesley, Reading, Massachusetts, 1983.

[11] F.J. MacWilliams, N.J.A. Sloane, The Theory of Error-Correcting Codes, North-Holland, Amsterdam, 1977.

[12] J.L. Massey, Shift-register synthesis and BCH decoding, IEEE Trans. Inform. Theory, IT-15 (1969), 122-127.

[13] R. Refslund Nielsen, T. Høholdt, Decoding Reed-Solomon codes beyond half the minimum distance, preprint.

[14] M.O. Rabin, Probabilistic algorithms in finite fields, SIAM J. Comput., 9 (1980), 273-280.

[15] K. Saints, C. Heegard, Algebraic-geometric codes and multidimensional cyclic codes: a unified theory and algorithms for decoding using Gröbner bases, IEEE Trans. Inform. Theory, IT-41 (1995), 1733-1751.

[16] S. Sakata, Finding a minimal set of linear recurring relations capable of generating a given finite two-dimensional array, J. Symb. Comput., 5 (1988), 321-337.

[17] M. Sudan, Decoding of Reed-Solomon codes beyond the error-correction bound, J. Compl., 13 (1997), 180-193.

[18] Y. Sugiyama, M. Kasahara, S. Hirasawa, and T. Namekawa, A method for solving key equation for decoding Goppa codes, Inform. Control, 27 (1975), 87-99.

[19] R. Zippel, Effective Polynomial Computation, Kluwer, Boston, 1993.


value (ν_i + δ_i, s_i) with the polynomial T(x,y) = T_{s_i,i}(x,y). In this case we have

    ⟨x^a T(x,y), S(x,y)⟩ = 0.

Case 2: (ν_i + δ_i, s_i) ≺ (ν_i + a, β). For the smallest a such that

    ⟨x^a T(x,y), S(x,y)⟩ ≠ 0,

we can apply Lemma A.1(a) with T(x,y) ← x^a T(x,y), R(x,y) ← T_{s_i,i}(x,y), and r ← δ_i, to obtain a polynomial A(x,y) with lead(A(x,y)) = (μ,β) for which

    ⟨x^b A(x,y), S(x,y)⟩ = 0,   0 ≤ b < a.

By repeatedly applying Lemma A.1 as in Case 2, we can update the polynomial A(x,y) while keeping lead(A(x,y)) = (μ,β) so that it satisfies

    ⟨x^a A(x,y), S(x,y)⟩ = 0,   0 ≤ a ≤ δ_i.

The last equation means that the column (Y_i)_{(μ,β)} is linearly dependent on the columns standing to its left in Y_i. This completes the first step of our proof.

    i

    In our second step, we show that rank(Y) =.+ 1 by applying Gaussian elimination to

    ii

    the columns ofY, where the linear combinations applied to the columns will be determined

    i

    by T

    (x;y). By Lemma A.1 it follows that the sequence of updates carried out onT(x;y)

    si;i.

    in the algorithm to pro duce T(x;y) guarantees that

    si;i

    (

    0 if 0a

  • 8/4/2019 10.1.1.34.5518[1]

    11/153

    si;i

    T

    the columns (Y), (0;

    1)P.(;s), we end up with a column vector (0;0;:::;0;)2

    iPiii

    i

    +1

    F.Next, we take advantage of the Hankel-like structure ofSand generalize our previous

    argument as follows. For every jin the range 0j

  • 8/4/2019 10.1.1.34.5518[1]

    12/153

    ii..1i i..1

    which readily implies by induction the desired result.

References

[1] A.V. Aho, J.E. Hopcroft, and J.D. Ullman, The Design and Analysis of Computer Algorithms, Addison-Wesley, Reading, Massachusetts, 1974.

[2] E.R. Berlekamp, Algebraic Coding Theory, Second Edition, Aegean Park Press, Laguna Hills, California, 1984.


is given by S_{(μ,β)} = (S^{(β)}_μ, S^{(β)}_{μ+1}, ..., S^{(β)}_{μ+δ−1})^T. For instance, when ℓ = 2 the matrix S takes the form

    ⎛ S^{(1)}_0      S^{(1)}_1      ⋯  S^{(1)}_{k−2}    S^{(1)}_{k−1}    S^{(2)}_0      ⋯  S^{(1)}_{N₁−1}    S^{(2)}_{N₂−1}   ⎞
    ⎜ S^{(1)}_1      S^{(1)}_2      ⋯  S^{(1)}_{k−1}    S^{(1)}_k        S^{(2)}_1      ⋯  S^{(1)}_{N₁}      S^{(2)}_{N₂}     ⎟
    ⎜    ⋮              ⋮                  ⋮                ⋮                ⋮                  ⋮                 ⋮           ⎟
    ⎝ S^{(1)}_{δ−1}  S^{(1)}_δ      ⋯  S^{(1)}_{δ+k−3}  S^{(1)}_{δ+k−2}  S^{(2)}_{δ−1}  ⋯  S^{(1)}_{δ+N₁−2}  S^{(2)}_{δ+N₂−2} ⎠

We show that if S_{(μ,s)} is the first (leftmost) column of type s in S that is linearly dependent on previous columns in S, then a polynomial Q(x,y) with lead(Q(x,y)) = (μ,s) is returned as output by the algorithm. The respective linear dependency is given by the coefficients of Q(x,y); namely, the coefficient of x^i y^t in Q(x,y) multiplies the column S_{(i,t)} in the linear combination. By Lemma A.2, it suffices to show that if lead(T_s(x,y)) takes in the course of the algorithm a value greater than (μ,s) (with respect to ⪯), then the column S_{(μ,s)} is linearly independent of previous columns in S (this applies also to the case where lead(T_s(x,y)) was supposed to take a value which is at least (N_s, s), thereby reaching line 13).

By Lemma A.1, line 11 is the only place in the algorithm where lead(T(x,y)) can change. Therefore, all we have to show is that whenever line 11 is reached with given values of Δ, δ, r, and lead(T(x,y)) = (μ,β), then each of the columns S_{(μ,β)} ⋯ first δ + 1 entries. Define the sets D_i inductively as follows:

    D_0 = ∅   and   D_i = D_{i−1} ∪ {(ν_i + j, s_i) : 0 ≤ j < δ_i} ⋯


It can be verified that Z_i consists of all the columns of Y_i, except, possibly, a certain number of rightmost columns of each type other than s_i in Y_i.

The rest of the proof is devoted to showing that every column in Z_i is linearly independent of the columns that stand to its left in Y_i. We show this in two steps:

• We first show that each column of Y_i that is not a column of Z_i is linearly dependent on previous columns in Y_i.

• We then prove that rank(Y_i) = δ_i + 1.

(We point out that in the classical case of ℓ = 1, the matrices Y_i and Z_i coincide, thereby making the first step vacuous.)

Let (μ,β) be the index of a column in Y_i that does not belong to Z_i. It is clear that β ≠ s_i. Since (μ,β) ∉ D_i, it follows that lead(T_{β,i}(x,y)) ⪯ (μ,β). Let a be an integer such that 0 ≤ a ≤ δ_i. We distinguish between two cases:

Case 1: (ν_i + a, β) ≺ (ν_i + δ_i, s_i). Here, the index (μ,β) reaches the value (ν_i + a, β) with a polynomial T(x,y) with lead(T(x,y)) = lead(T_{β,i}(x,y)) ⪯ (μ,β) before (μ,β) takes the


Below we present the proofs of Lemma A.1 and Lemma A.2; we append the proof of Proposition 4.2, as it is similar to proofs already contained in [15] and [16].

Lemma A.1 Let a and r be nonnegative integers, let S(x,y) = Σ_{t=1}^{ℓ} S^{(t)}(x) y^t, and let T(x,y) and R(x,y) be bivariate polynomials such that ⋯, and let A(x,y) = x^{a−r} T(x,y) − ⋯ R(x,y). Then lead(A(x,y)) = lead(x^{a−r} T(x,y)) and ⟨x^b A(x,y), S(x,y)⟩ = 0 for 0 ≤ b ⋯

The remainder of this passage bounds the number of nontrivial iterations of the algorithm in Figure 1 through the conditions in lines 7 and 10, and argues that the degree conditions of (30) are guaranteed on the output polynomial T(x,y) = Σ_{t=0}^{ℓ} T^{(t)}(x) y^t by line 8 and Lemma A.1.

When applying Reconstruct in Step D2 to Q(x,y) = Σ_{t=0}^{4} Q^{(t)}(x) y^t, we obtain the following four different solutions g(x) for f(x):

    18 + 14x ,   18 + 15x ,   14 + 16x ,   8 + 8x .

The first two solutions share the same constant coefficient, 18, which is a multiple root of the polynomial M(0,y) = Q(0,y) = 4 + 14y + 14y² + 2y³ + 17y⁴ at recursion level i = 0. The first solution for f(x) corresponds to the correct codeword. The second solution is not even a y-root of Q(x,y) (yet, as commented by one of the reviewers, it is a prefix of the y-root 18 + 15x + 10x²). The third solution is a y-root of Q(x,y) but not a consistent polynomial (the respective codeword has Hamming distance 15 from v). And the fourth solution is a consistent polynomial but does not correspond to the correct codeword.

In this example, the algorithm in Figure 1 yields a second polynomial Q(x,y), which is given by

    Q^{(0)}(x) = 8 + 12x² + 9x³ + 8x⁴
    Q^{(1)}(x) = 5 + 14x + 7x² + 15x³ + 4x⁴
    Q^{(2)}(x) = 12 + 12x + 15x² + 4x³
    Q^{(3)}(x) = 9 + 10x + 14x²
    Q^{(4)}(x) = 13 + x

and the respective output of Reconstruct is

    18 + 14x ,   13 + 9x ,   10 + x ,   8 + 8x ,


with only the first and fourth polynomials being y-roots of Q(x,y) (as well as being consistent polynomials); the remaining irreducible factor of Q(x,y) is (13 + x)y² + (5 + 18x + 17x²)y + (18 + 6x + 15x²).
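The factorization just given can be verified mechanically; the sketch below rebuilds this second Q(x,y) from its listed coefficients and confirms that 18 + 14x and 8 + 8x are y-roots over GF(19):

```python
P = 19  # GF(19)

def pmul(a, b):
    # product of two univariate polynomials mod P, lowest degree first
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return out

def padd(a, b):
    n = max(len(a), len(b))
    return [((a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)) % P
            for i in range(n)]

# the second Q(x,y): rows are Q^(0)(x), ..., Q^(4)(x)
Q2 = [
    [8, 0, 12, 9, 8],
    [5, 14, 7, 15, 4],
    [12, 12, 15, 4],
    [9, 10, 14],
    [13, 1],
]

def eval_y(Q, g):
    # Q(x, g(x)) as a univariate polynomial in x
    acc, gp = [0], [1]
    for qt in Q:
        acc = padd(acc, pmul(qt, gp))
        gp = pmul(gp, g)
    return acc

for g in ([18, 14], [8, 8]):          # g(x) = 18 + 14x and g(x) = 8 + 8x
    assert all(c == 0 for c in eval_y(Q2, g))
```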

In the example above, the common factors of the two solutions for Q(x,y) correspond to the two (and all) consistent polynomials. However, there are examples where the common y-roots in F_k[x] of all the polynomials Q(x,y) generated in Figure 1 contain, in addition to the consistent polynomials, also inconsistent ones.

We comment that the connection between the error vector e and the polynomials that appear in the EKE seems to be less obvious than in the classical case; recall that when ℓ = 1, e can be obtained from the error-locator and error-evaluator polynomials through Chien search [4] and Forney's algorithm [3]. It would be interesting to find such an intimate relationship between e and the polynomials that appear in the EKE also when ℓ > 1.

We pointed out that Sudan's algorithms, as well as the decoding algorithm in Figure 3, can correct more than (n−k)/2 errors only when k < (n+1)/3. Clearly, in many (if not most) practical applications, higher code rates are used. Therefore, it would be interesting to investigate whether the EKE presented in this paper and the algorithm in Section 4 can be generalized to higher rates. In particular, connecting the results of this work with the improvements presented in [8] might be possible (Nielsen and Høholdt have been working recently and independently on a different approach to accelerate [8]; see [13]).


7 Summary

The three main decoding steps in our algorithm, as it appears in Figure 3, are denoted D0, D1, and D2, to point out their relationship with the classical decoding algorithms as outlined in Section 1. Steps D0 and D1 replace Step S1 in Sudan's algorithm, which finds the bivariate polynomial Q(x,y).

As shown in Section 6, the time complexity of Step D0 is O(ℓ n log² n) and the time complexity of Step D1 is O(ℓ τ²). Compared to classical decoding, those figures are larger by a factor of ℓ. Step D2, which is presented in Section 5, is an efficient application of Step S2 in Sudan's algorithm and has time complexity O((ℓ log² ℓ) k (n + ℓ log q)).

In cases where the particular use of the decoding algorithm does not dictate an upper bound on ℓ, we can select the value of ℓ that maximizes (5) subject to (6). By (7) we will thus have ℓ = O(√(n/k)). For this value of ℓ, the time complexities of Steps D0, D1, and D2 are O(n^{3/2} k^{−1/2} log² n), O((n−k)² √(n/k)), and O((√(nk) + log q) n log²(n/k)), respectively.

The next example is provided to illustrate the various decoding steps; the parameters were selected to be small enough so that the computation can be more easily verified by the reader.

Example 7.1 Let F = GF(19), n = 18, and k = 2. When maximizing (5) we get τ = 12 for ℓ = 4 and m = 1. We select α_j = j and obtain from (3) that λ_j = −α_j.

Suppose we encode the polynomial f(x) = 18 + 14x by (1) and get the following transmitted codeword, error vector, and received word:

    c = (13, 8, 3, 17, 12, 7, 2, 16, 11, 6, 1, 15, 10, 5, 0, 14, 9, 4)
    e = (11, 16, 17, 12, 17, 0, 0, 2, 14, 0, 0, 0, 3, 0, 14, 8, 11, 15)
    v = (5, 5, 1, 10, 10, 7, 2, 18, 6, 6, 1, 15, 13, 5, 14, 3, 1, 0)
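These vectors can be reproduced in a few lines (a sketch, assuming, as the example indicates, that encoding by (1) is evaluation at the locators, c_j = f(α_j) with α_j = j):

```python
P = 19

def f(x):
    return (18 + 14 * x) % P        # the encoded message polynomial

c = [f(j) for j in range(1, 19)]    # evaluate at the locators 1, ..., 18
assert c == [13, 8, 3, 17, 12, 7, 2, 16, 11, 6, 1, 15, 10, 5, 0, 14, 9, 4]

e = [11, 16, 17, 12, 17, 0, 0, 2, 14, 0, 0, 0, 3, 0, 14, 8, 11, 15]
v = [(ci + ei) % P for ci, ei in zip(c, e)]   # received word v = c + e
assert v == [5, 5, 1, 10, 10, 7, 2, 18, 6, 6, 1, 15, 13, 5, 14, 3, 1, 0]
```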

The computation of the syndrome elements in Step D0 results in the following coefficients of the polynomials (S^{(t)}(x) = Σ_{i=0}^{τ+N_t−2} S^{(t)}_i x^i)_{t=1}^{4}:

    S^{(1)}(x):  (13, 14, 5, 11, 3, 4, 10, 14, 13, 14, 11, 14, 17, 4, 0, 2)
    S^{(2)}(x):  (4, 8, 14, 18, 9, 18, 5, 13, 11, 6, 8, 8, 16, 0, 12)
    S^{(3)}(x):  (3, 12, 5, 7, 10, 18, 4, 14, 0, 14, 18, 11, 16, 3)
    S^{(4)}(x):  (14, 13, 0, 13, 10, 1, 9, 3, 7, 8, 11, 0, 7)

Step D1 yields the following polynomials Q^{(t)}(x):

    Q^{(0)}(x) = 4 + 12x + 5x² + 11x³ + 8x⁴ + 13x⁵
    Q^{(1)}(x) = 14 + 14x + 9x² + 16x³ + 8x⁴
    Q^{(2)}(x) = 14 + 13x + x²
    Q^{(3)}(x) = 2 + 11x + x²
    Q^{(4)}(x) = 17

where Q(x,y) = Σ_{t=1}^{4} Q^{(t)}(x) y^t is the first polynomial returned by the algorithm in Figure 1 and Q^{(0)}(x) is computed by (31).
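As a check on the polynomials above, one can verify that the correct message polynomial 18 + 14x is a y-root of Q^{(0)}(x) + Σ_{t=1}^{4} Q^{(t)}(x) y^t over GF(19); a sketch:

```python
P = 19

# rows are Q^(0)(x), ..., Q^(4)(x), lowest x-degree first
Qs = [
    [4, 12, 5, 11, 8, 13],
    [14, 14, 9, 16, 8],
    [14, 13, 1],
    [2, 11, 1],
    [17],
]

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def poly_add(a, b):
    return [((a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)) % P
            for i in range(max(len(a), len(b)))]

def subst_y(Q, g):
    # Q(x, g(x)) as a univariate polynomial in x
    acc, gpow = [0], [1]
    for qt in Q:
        acc = poly_add(acc, poly_mul(qt, gpow))
        gpow = poly_mul(gpow, g)
    return acc

assert all(c == 0 for c in subst_y(Qs, [18, 14]))   # f(x) = 18 + 14x is a y-root
```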


Proposition 6.6 Assume a call to Reconstruct as in Proposition 6.4, where Q(x,y) satisfies (11). The time complexity of such an application to its full recursion depth is O((ℓ log² ℓ) k (n + ℓ log q)), and it can be implemented using space of overall size O(n).

Proof: Each execution of Step R1, R2, or R8 has time complexity which is proportional to the number of coefficients in the polynomials involved in that step. By Lemma 6.5, this number is O(n). By Proposition 6.4, each of those steps is executed at most ℓk times throughout the recursion levels. Therefore, the contribution of Steps R1, R2, and R8 to the overall complexity of Reconstruct is O(ℓkn).

By Proposition 6.4, the sum of the degrees of all the polynomials M(0,y) that Step R3 is applied to at the i-th recursion level is at most ℓ. The roots in F = GF(q) of a polynomial of degree u can be found in expected time complexity O((u log² u) log q) [10, Ch. 4], [14]; also recall that there are known efficient deterministic algorithms for root extraction when the characteristic of F is small [2, Ch. 10], and root extraction is particularly simple when ℓ = 2 [11, pp. 277-278]. Therefore, for any 0 ≤ i < k, the executions of Step R3 at recursion level i have accumulated time complexity O((ℓ log² ℓ) log q), and the contribution of Step R3 to the overall complexity of Reconstruct is O((ℓ log² ℓ) k log q).
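For the toy field sizes in this paper's examples none of that machinery is needed; an exhaustive search (a naive O(uq) sketch, not the cited algorithms) already recovers the roots used at recursion level i = 0 of Example 7.1:

```python
def roots_gf(coeffs, q):
    # exhaustive root search over GF(q), q prime; coeffs lowest degree first
    found = []
    for a in range(q):
        acc = 0
        for c in reversed(coeffs):    # Horner evaluation
            acc = (acc * a + c) % q
        if acc == 0:
            found.append(a)
    return found

# M(0,y) = Q(0,y) = 4 + 14y + 14y^2 + 2y^3 + 17y^4 from Example 7.1
assert roots_gf([4, 14, 14, 2, 17], 19) == [8, 14, 18]
```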

Let M(x,y) be the polynomial that Step R7 is applied to at some iteration level i. Writing M(x,y) as a polynomial in x, we get

    M(x,y) = Σ_{s=0}^{N₀−1} M^{[s]}(y) x^s .

If ℓ_s is the smallest integer t such that N_{t+1} + t ⋯, then it is easy to check that deg M^{[s]}(y) ≤ ℓ_s. ⋯ Writing

    M^c(x,y) = Σ_{s=0}^{N₀−1} M^{c[s]}(y) x^s = M(x, y+γ) = Σ_{s=0}^{N₀−1} M^{[s]}(y+γ) x^s ,

we can compute each polynomial M^{c[s]}(y) by interpolating the values M^{[s]}(β+γ) at ℓ_s distinct points β ∈ F. Therefore, each polynomial M^{c[s]}(y) can be found in time complexity O(ℓ_s log² ℓ_s). We now observe that

    Σ_{s=0}^{N₀−1} ℓ_s = Σ_{t=0}^{ℓ} (N_t + t) ≤ (ℓ+1) N₀ ≤ 2(n + ℓ + 1) ,

the latter inequality following from (32). Hence, each execution of Step R7 has time complexity O(n log² ℓ), and the contribution of Step R7 to the overall complexity of Reconstruct is therefore O(k n ℓ log² ℓ).

Summing up the contributions of the steps of Reconstruct, the time complexity of an application of Reconstruct to its full recursion depth is O((ℓ log² ℓ) k (n + ℓ log q)). As for the space complexity, notice that the input parameter Q(x,y) can be recomputed from r, γ, and the parameter M^f(x,y) passed to the next recursion level. So, instead of keeping the polynomials in each recursion level, we can recompute them after each execution of Step R9. Therefore, Reconstruct can be implemented using space of overall size O(n).

Finally, we point out that the time complexity of computing the consistent codewords out of the output polynomials of Reconstruct is O(ℓ n log² n), as the re-encoding involves the evaluation of those polynomials at the code locators.


Proof: Similarly to the notations in Figure 2, we denote

    M^c(x,y) = Σ_{t=0}^{ℓ} M^{c(t)}(x) y^t = M₁(x, y+γ)

and

    M^f(x,y) = Σ_{t=0}^{ℓ} M^{f(t)}(x) y^t = M₁(x, xy+γ) .

Since γ is a root of multiplicity h of M₁(0,y), then y = 0 is a root of multiplicity h of M^c(0,y). Thus, M^{c(t)}(0) = 0 for 0 ≤ t < h while M^{c(h)}(0) ≠ 0, and the largest integer r for which x^r divides M^f(x,y) satisfies 1 ≤ r ≤ h. Now,

    M₂(x,y) = M^f(x,y)/x^r = Σ_{t=0}^{r} (M^{c(t)}(x)/x^{r−t}) y^t + Σ_{t=r+1}^{ℓ} M^{c(t)}(x) x^{t−r} y^t .

Substituting x = 0 in M₂(x,y) yields a univariate polynomial M₂(0,y) of degree at most r ≤ h.

Corollary 6.3 Consider the special case where the very first execution of Step R2 in Reconstruct results in a polynomial M(x,y) such that all the roots in F of M(0,y) are simple. Then the polynomials obtained in Step R3 throughout all the subsequent recursive calls have degree at most 1, meaning that their roots can be found simply by solving linear equations over F.

As for the general case, we have the following upper bounds.

Proposition 6.4 Suppose that Reconstruct is initially called with the parameters (Q, k, 0), where Q = Q(x,y) = Σ_{t=0}^{ℓ} Q^{(t)}(x) y^t is a nonzero bivariate polynomial. Then the number of output polynomials produced by Reconstruct is at most ℓ and the overall number of recursive calls made to Reconstruct (in Step R9) is at most ℓ(k−1).

Proof: For 0 ≤ s < k, let ω_s ⋯ is at most ℓ, and by Lemma 6.2 we have ω_s ≤ ω_{s−1} for every s ∈ [k−1]. It can therefore be proved by induction on s that ω_s ≤ ℓ. As a result, Reconstruct generates at most ω_{k−1} outputs, and the number of executions of Step R9 is Σ_{s=0}^{k−2} ω_s ≤ ℓ(k−1).

Lemma 6.5 Assume a call to Reconstruct as in Proposition 6.4, and further assume that Q(x,y) satisfies (11). Then the y-degree of each of the bivariate polynomials computed in any of the recursion levels is at most ℓ, and its x-degree is at most m + ℓ(k−1) = O(n/ℓ).

Proof: It is easy to see that Steps R2, R7, and R8 never increase the y-degree. As for the x-degree, let M_i(x,y) be a polynomial computed in Step R2 in recursion level i and write M_i(x,y) = Σ_{t=0}^{ℓ} M_i^{(t)}(x) y^t. The degree of M_i^{(t)}(x) can increase with i only as a result of Step R8, and it is easy to check by induction on i that

    deg M_i^{(t)}(x) ≤ ⋯

S(x,y) are computed by (25), is at most ℓ times the complexity of computing the syndrome in the classical case using (2), namely O(ℓ n log² n) (see the discussion in Section 1).

The time and space complexities of the algorithm in Figure 1, which solves (30) in Step D1, are given in Proposition 6.1 below. Throughout the proof of the proposition, an iteration of the algorithm in Figure 1 in which the variable β takes the value s will be called an iteration of type s. If, in addition, s ∈ L, the iteration will be said to be nontrivial.

Proposition 6.1 The time complexity of the algorithm in Figure 1 is O(ℓ τ²) and its space complexity is O(ℓ τ).

Proof: In every nontrivial iteration of the algorithm of Figure 1, the most time-consuming steps are the computations of Δ in line 4 and the polynomial updates in lines 6 and 11. The time complexity of all those computations is linear in the number of nonzero coefficients of the respective polynomial T_s(x,y). The check in line 10 guarantees that lead(T_s(x,y)) ⪯ (N_s, s) and, so, the number of nonzero coefficients in T_s(x,y) never exceeds Σ_{t=1}^{ℓ} N_t. By (10), the number of coefficients in T_s(x,y), as well as the time complexity of every computation in lines 4, 6, and 11, is O(τ).

As shown in the proof of Lemma A.2 in the appendix, the number of nontrivial iterations of type s is smaller than N_s + τ for every s ∈ [ℓ]. The overall number of nontrivial iterations of all types throughout the execution of the algorithm can therefore be bounded from above by ℓτ + Σ_{s=1}^{ℓ} N_s = O(ℓτ), where the equality follows from (10). The time complexity of the whole algorithm is thus O(ℓτ²).

As for the space complexity, most of the memory is allocated for the ℓ + 1 polynomials (T_s(x,y))_{s=1}^{ℓ} and R(x,y), where for each of them we allocate Σ_{t=1}^{ℓ} N_t = O(τ) coefficients over F.

Next we turn to the time complexity of computing the polynomial Q^{(0)}(x) using (31). Note that deg Q^{(t)}(x) ≤ deg Q^{(0)}(x) ⋯ in time O(n log² n); this will also be the time complexity of interpolating Q^{(0)}(x) out of those computed values.

The rest of this section is devoted to proving Proposition 6.6 below, in which the time and space complexities of the recursive procedure Reconstruct in Figure 2 are established. In each of the recursion levels i of Reconstruct, we find roots of nonzero polynomials M(0,y) = M_i(0,y) with degree at most ℓ. It may seem at first that the number of root extractions could grow exponentially. However, Lemma 6.2 below shows that having more than one root of M_i(0,y) in a given recursion level is compensated by having a multiple root in the respective polynomial M_{i−1}(0,y) in the previous recursion level.

Lemma 6.2 Let M₁(x,y) = Σ_{t=0}^{ℓ} M^{(t)}(x) y^t be a nonzero bivariate polynomial over F and let γ ∈ F be a y-root of multiplicity h of M₁(0,y). Define M₂(x,y) = x^{−r} M₁(x, xy+γ), where r is the largest integer for which x^r divides M₁(x, xy+γ). Then deg M₂(0,y) ≤ h.
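Lemma 6.2 can be checked numerically; the sketch below uses the first polynomial Q(x,y) of Example 7.1 in Section 7 (coefficients copied from there), for which γ = 18 is a root of multiplicity h = 2 of Q(0,y):

```python
from math import comb

P = 19
# M1 = first Q(x,y) of Example 7.1; rows are the y^t coefficients, as x-polynomials
M1 = [[4, 12, 5, 11, 8, 13], [14, 14, 9, 16, 8], [14, 13, 1], [2, 11, 1], [17]]
gamma, h = 18, 2     # gamma is a double root of M1(0,y) = 4+14y+14y^2+2y^3+17y^4

ell = len(M1) - 1
# the y^s coefficient of M1(x, x*y + gamma) is x^s * A_s(x), where
# A_s(x) = sum_{t>=s} C(t,s) gamma^(t-s) M1^(t)(x)
A = []
for s in range(ell + 1):
    As = [0] * max(len(M1[t]) for t in range(s, ell + 1))
    for t in range(s, ell + 1):
        c = comb(t, s) * pow(gamma, t - s, P) % P
        for i, q in enumerate(M1[t]):
            As[i] = (As[i] + c * q) % P
    A.append(As)

def xval(p):
    return next((i for i, c in enumerate(p) if c), None)

r = min(s + xval(A[s]) for s in range(ell + 1) if xval(A[s]) is not None)
# M2(0,y): the y^s coefficient is the x^(r-s) coefficient of A_s(x)
M20 = [A[s][r - s] if 0 <= r - s < len(A[s]) else 0 for s in range(ell + 1)]
deg = max(s for s, c in enumerate(M20) if c)
assert deg <= h                     # the claim of Lemma 6.2
# its roots 14 and 15 are exactly the next coefficients of the two
# solutions 18 + 14x and 18 + 15x found at the next recursion level
for a in (14, 15):
    assert sum(c * pow(a, s, P) for s, c in enumerate(M20)) % P == 0
```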


Proposition 5.2 Let Q(x,y) be a nonzero bivariate polynomial. Every y-root in F_k[x] of Q(x,y) is found by the call Reconstruct(Q, k, 0).

Proof: Using the notations of Lemma 5.1, there is a recursion descent in Reconstruct where recursion level i is called with the parameters (Q_i, k, i) and γ[i] is set to g_i.

We show in Proposition 6.4 below that Reconstruct outputs at most ℓ polynomials. To complete the decoding, each of those polynomials is evaluated at the code locators, producing up to ℓ different codewords, from which we select the consistent codewords as those that are at Hamming distance τ or less from the received word v. Our decoding algorithm for the RS code C_RS defined by (1) is summarized in Figure 3.
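This final selection step can be traced explicitly with the numbers of Example 7.1 (a sketch; the four candidate polynomials, τ = 12, and v are copied from that example):

```python
P = 19
v = [5, 5, 1, 10, 10, 7, 2, 18, 6, 6, 1, 15, 13, 5, 14, 3, 1, 0]
tau = 12
candidates = [(18, 14), (18, 15), (14, 16), (8, 8)]   # g(x) = g0 + g1*x

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

consistent = []
for g0, g1 in candidates:
    c = [(g0 + g1 * j) % P for j in range(1, 19)]     # re-encode g at the locators
    if hamming(c, v) <= tau:
        consistent.append((g0, g1))

# only the first and fourth candidates are within distance tau of v
assert consistent == [(18, 14), (8, 8)]
```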

Preliminary Step: Given n, k, and distinct code locators α₁, α₂, ..., α_n in F, fix an upper bound on the allowed number ℓ of consistent codewords. Compute ℓ, m, and τ so that (5) is maximized, subject to (6) and the upper bound on ℓ. Compute the multipliers λ₁, λ₂, ..., λ_n by (3). Define N_t by (9).

Input: v = (v₁, v₂, ..., v_n).

Step D0: Compute the syndrome elements

    S^{(t)}_i = Σ_{j=1}^{n} λ_j v_j^t α_j^i ,   t ∈ [ℓ], 0 ≤ i < N_t .

Step D1: Find polynomials Q^{(t)}(x) ∈ F[x] with deg Q^{(t)}(x) < N_t, t ∈ [ℓ], by solving (30). Compute the polynomial Q^{(0)}(x) ∈ F_{N₀}[x] by (31).

Step D2: Call Reconstruct with the initial parameters Q(x,y) = Q^{(0)}(x) + Q̃(x,y), k, and 0. For each g(x) ∈ F_k[x] in the output of Reconstruct, compute the corresponding codeword c = (g(α₁), g(α₂), ..., g(α_n)). Output c if the Hamming distance between c and v is τ or less.

Figure 3: Summary of the decoding algorithm.

6 Complexity Analysis

Following is a complexity analysis of our algorithm, as presented in Figure 3. The time complexity of Step D0, in which all the coefficients of the bivariate syndrome polynomial


the consistent polynomials. The procedure Reconstruct is initially called with parameters (Q, k, 0), where Q = Q(x,y) = Σ_{t=0}^{ℓ} Q^{(t)}(x) y^t is a nonzero bivariate polynomial with y-degree ℓ, e.g., a polynomial that satisfies (11). The validity of Reconstruct, as established in Proposition 5.2 below, is based on the following lemma, which shows that the coefficients of a y-root g(x) of Q(x,y) can all be calculated recursively as roots of univariate polynomials.

procedure Reconstruct(bivariate polynomial Q(x,y), integer k, integer i)
    /* A global array γ[0, ..., k−1] is assumed.
       The initial call needs to be with Q(x,y) ≠ 0, k > 0, and i = 0. */
R1  find the largest integer r for which Q(x,y)/x^r is still a (bivariate) polynomial;
R2  M(x,y) ← Q(x,y)/x^r;
R3  find all the roots in F of the univariate polynomial M(0,y);
R4  for each of the distinct roots γ of M(0,y) do {
R5      γ[i] ← γ;
R6      if i = k−1 then output γ[0], ..., γ[k−1];
        else {
R7          M^c(x,y) ← M(x, y+γ);
R8          M^f(x,y) ← M^c(x, xy);
R9          Reconstruct(M^f(x,y), k, i+1);
        }
    }

Figure 2: Recursive procedure for finding a superset of the consistent polynomials.
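Figure 2 translates almost line for line into code; the following Python sketch (an illustration over a prime field, with bivariate polynomials stored as lists of x-coefficient lists, and brute-force root search standing in for Step R3) reproduces the four solutions of Example 7.1 in Section 7:

```python
from math import comb

P = 19  # GF(19), as in Example 7.1

def subst(M, g):
    # Steps R7-R8 combined: return M(x, x*y + g); the y^s coefficient of the
    # result is x^s * sum_{t>=s} C(t,s) g^(t-s) M^(t)(x)
    ell = len(M) - 1
    out = []
    for s in range(ell + 1):
        width = max((len(M[t]) for t in range(s, ell + 1)), default=0)
        acc = [0] * (s + width)
        for t in range(s, ell + 1):
            c = comb(t, s) * pow(g, t - s, P) % P
            for i, q in enumerate(M[t]):
                acc[s + i] = (acc[s + i] + c * q) % P
        out.append(acc)
    return out

def reconstruct(Q, k, i=0, gamma=(), out=None):
    if out is None:
        out = []
    r = min(j for row in Q for j, c in enumerate(row) if c)   # R1
    M = [row[r:] for row in Q]                                # R2
    m0 = [(row[0] if row else 0) for row in M]                # M(0, y)
    roots = [a for a in range(P)                              # R3, brute force
             if sum(c * pow(a, s, P) for s, c in enumerate(m0)) % P == 0]
    for g in roots:                                           # R4-R5
        if i == k - 1:
            out.append(gamma + (g,))                          # R6
        else:
            reconstruct(subst(M, g), k, i + 1, gamma + (g,), out)  # R7-R9
    return out

# the first Q(x,y) of Example 7.1, rows are Q^(0)(x), ..., Q^(4)(x)
Q = [[4, 12, 5, 11, 8, 13], [14, 14, 9, 16, 8], [14, 13, 1], [2, 11, 1], [17]]
sols = reconstruct(Q, k=2)
assert sorted(sols) == [(8, 8), (14, 16), (18, 14), (18, 15)]
```

Each output pair (g₀, g₁) stands for the polynomial g(x) = g₀ + g₁x, matching the four solutions listed in the example.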

Lemma 5.1 Let g(x) = Σ_{s≥0} g_s x^s be a y-root of a nonzero bivariate polynomial Q(x,y) over F. For i ≥ 0, let γ_i(x) = Σ_{s≥i} g_s x^{s−i}, and let Q_i(x,y) and M_i(x,y) be defined inductively by Q₀(x,y) = Q(x,y),

    M_i(x,y) = x^{−r_i} Q_i(x,y)   and   Q_{i+1}(x,y) = M_i(x, xy + g_i) ,   i ≥ 0,

where r_i is the largest integer such that x^{r_i} divides Q_i(x,y). Then, for every i ≥ 0,

    Q_i(x, γ_i(x)) = 0   and   M_i(0, g_i) = 0 ,

while M_i(0,y) ≠ 0.

Proof: First observe that the y-degrees of the polynomials Q_i(x,y) are the same for all i and, so, Q_i(x,y) ≠ 0 and r_i is well-defined. Also, since x does not divide M_i(x,y), then M_i(0,y) ≠ 0. Next, we prove that Q_i(x, γ_i(x)) = 0 by induction on i, where the induction base i = 0 is obvious. As for the induction step, if γ_i(x) is a y-root of Q_i(x,y), then γ_{i+1}(x) = (γ_i(x) − g_i)/x is a y-root of Q_i(x, xy + g_i) and hence of Q_{i+1}(x,y) = M_i(x, xy + g_i) = x^{−r_i} Q_i(x, xy + g_i). Finally, by substituting x = 0 in M_i(x, γ_i(x)) = x^{−r_i} Q_i(x, γ_i(x)) = 0 we obtain M_i(0, g_i) = M_i(0, γ_i(0)) = 0.


Input: Bivariate polynomial S(x,y) = Σ_{t=1}^{ℓ} S^{(t)}(x) y^t.
Data structures: bivariate polynomials T_s(x,y) = Σ_{t=0}^{ℓ} T_s^{(t)}(x) y^t, s ∈ [ℓ], where T_s^{(t)}(x) ∈ F_{N_t}[x]; a bivariate polynomial R(x,y) = Σ_{t=1}^{ℓ} R^{(t)}(x) y^t, where R^{(t)}(x) ∈ F_{N_t}[x]; an index pair (μ,β); a set L ⊆ [ℓ]; integers r and δ; field elements Δ and Δ'.
Initialization: r ← −1; L ← [ℓ]; for s ← 1 to ℓ do T_s(x,y) ← y^s; ⋯
Lines 1-14 of the algorithm include the computation of Δ = ⟨x^δ T_β(x,y), S(x,y)⟩ in line 4, polynomial updates of T_β(x,y) of the form T_β(x,y) − (Δ/Δ') x^{⋯} R(x,y) in lines 6 and 11, the degree check on lead(T_s(x,y)) in line 10, the update L ← L ∖ {s} in line 13, and the advance (μ,β) ← succ(μ,β) in line 14.

Figure 1: Algorithm for solving (30).

The notation (i,t) ⪯ (i',t') means that either (i,t) = (i',t') or (i,t) ≺ (i',t'), and succ(i,t) is the pair that immediately follows (i,t) with respect to the order defined by ≺. For a nonzero bivariate polynomial T(x,y) = Σ_{t=1}^{ℓ} Σ_i T^{(t)}_i x^i y^t, we define lead(T(x,y)) as the maximal pair (i,t), with respect to ⪯, for which T^{(t)}_i ≠ 0, and denote lead_x(T(x,y)) = i and lead_y(T(x,y)) = t (lead(0) is defined to be (−1,−1)).

The algorithm in Figure 1 scans the syndrome elements S^{(β)}_μ in the order defined by ≺ on (μ,β), and maintains up to ℓ bivariate polynomials T₁(x,y), T₂(x,y), ..., T_ℓ(x,y), where lead_y(T_s(x,y)) = s. An invariant of the algorithm is that ⟨x^a T_s(x,y), S(x,y)⟩ = 0 for 0 ≤ a < ⋯

… and

  deg( λ^{(t)}(x) · x^{(ℓ−t)(n−k)} · U^{(t)}(x) ) ≤ ℓ(n−k) − τ − 1.

Setting Ω(x) = Ṽ(x)/x^{(ℓ−1)(n−k)}, we get

  Σ_{t=1}^{ℓ} λ^{(t)}(x) · x^{(t−1)(k−1)} · S^{(t)}(x) ≡ Ω(x)  (mod x^{n−k}),   (24)

where we have replaced S^{(t)}_∞(x) by S^{(t)}(x), since the coefficients of S^{(t)}_∞(x) that actually appear in (23) are those that correspond to the powers x^i for 0 ≤ i ≤ n − 2 − t(k−1).


… decoding scheme. In Step D0, the bivariate syndrome polynomial S(x,y) = Σ_{t=1}^{ℓ} S^{(t)}(x) y^t is computed, and in Step D1, the bivariate polynomial Q(x,y) = Σ_{t=0}^{ℓ} Q^{(t)}(x) y^t is found by solving the EKE. The following proposition shows that the syndrome elements can be computed in Step D0 using a formula which is a generalization of (2).

Proposition 4.1  Let β_1, β_2, …, β_n be as in (3) and let S^{(t)}_∞(x) = Σ_{i=0}^{∞} S^{(t)}_i x^i be as in (20). Then

  S^{(t)}_i = Σ_{j=1}^{n} v_j^t · β_j · α_j^i,   t ∈ [ℓ], i ≥ 0.   (25)

Proof:  For each t ∈ [ℓ], let V^{(t)}(x) ∈ F_n[x] and Ũ^{(t)}(x) ∈ F_{(t−1)(n−1)}[x] be the unique polynomials that satisfy

  (V(x))^t = Ũ^{(t)}(x) · G(x) + V^{(t)}(x).   (26)

Since V^{(t)}(α_j^{−1}) = (V(α_j^{−1}))^t = v_j^t · α_j^{−t(n−1)}, we can express V^{(t)}(x) as an interpolation polynomial by

  V^{(t)}(x) = Σ_{j=1}^{n} v_j^t · α_j^{(1−t)(n−1)} · β_j · Π_{r∈[n]\{j}} (1 − α_r x).


Σ_{t=1}^{ℓ} Q^{(t)}(x) · (V(x))^t must be equal to its counterpart in the right-hand side of (13). If we now reverse the order of coefficients in both sides of (13), then the coefficients of 1, x, …, x^{ℓ(n−k)−1} should be identical in the resulting two polynomials. Formally,

  x^{n−τ+ℓ(n−k)−1} · Σ_{t=1}^{ℓ} Q^{(t)}(x^{−1}) · (V(x^{−1}))^t ≡ x^{ℓ(n−k)−τ−1} · B(x^{−1}) · Π_{j=1}^{n} (1 − α_j x)  (mod x^{ℓ(n−k)}).   (19)

Using the definitions (15) and (18), the equation (19) becomes (17).

As for the "if" part, suppose that (17) holds for B̃(x) ∈ F_{ℓ(n−k)−τ}[x]. Define B(x) to be the polynomial in F_{ℓ(n−k)−τ}[x] that is obtained by reversing the order of coefficients in B̃(x). If we reverse each side of (17), we get two polynomials of degree less than n−τ+ℓ(n−k) that may differ only in their lowest n−τ = N_0 coefficients. In other words, there exists some (unique) polynomial Q^{(0)}(x) ∈ F_{N_0}[x] such that Q^{(0)}(x) + Σ_{t=1}^{ℓ} Q^{(t)}(x) · (V(x))^t = B(x) · Π_{j=1}^{n} (x − α_j). The bivariate polynomial Q̂(x,y) = Q^{(0)}(x) + Q(x,y) thus satisfies (11)–(12), as required.

For t ∈ [ℓ], let S^{(t)}_∞(x) = Σ_{i=0}^{∞} S^{(t)}_i x^i be the formal power series which is defined by

  (V(x))^t / G(x) = x^{(t−1)(n−1)} · S^{(t)}_∞(x) + U^{(t)}(x),   (20)

where U^{(t)}(x) ∈ F_{(t−1)(n−1)}[x]. Indeed, since G(0) = 1, (20) is well-defined (see, for example, [10, Ch. 8]). Further, define the univariate syndrome polynomials S^{(t)}(x) = Σ_{i=0}^{n−2−t(k−1)} S^{(t)}_i x^i and the bivariate syndrome polynomial S(x,y) = Σ_{t=1}^{ℓ} S^{(t)}(x) y^t.

Proposition 3.3  Let Q(x,y) = Σ_{t=1}^{ℓ} Q^{(t)}(x) y^t satisfy (16), and let (λ^{(t)}(x))_{t=1}^{ℓ} be defined by (15). There exists a (unique) polynomial Q^{(0)}(x) such that Q̂(x,y) = Q^{(0)}(x) + Q(x,y) satisfies (11)–(12) if and only if there exists a (unique) polynomial Ω(x) ∈ F_{n−k−τ}[x] that satisfies the EKE

  Σ_{t=1}^{ℓ} λ^{(t)}(x) · x^{(t−1)(k−1)} · S^{(t)}_∞(x) ≡ Ω(x)  (mod x^{n−k}).   (21)

Proof:  By Lemma 3.2, all we need to prove is that the EKE (21) is equivalent to (17). Substituting

  (V(x))^t = ( x^{(t−1)(n−1)} · S^{(t)}_∞(x) + U^{(t)}(x) ) · G(x)

into (17) and rearranging terms yields

  Σ_{t=1}^{ℓ} λ^{(t)}(x) · x^{(ℓ−t)(n−k)+(t−1)(n−1)} · S^{(t)}_∞(x) · G(x) ≡ Ṽ(x) · G(x)  (mod x^{ℓ(n−k)}),   (22)

where

  Ṽ(x) = B̃(x) − Σ_{t=1}^{ℓ} λ^{(t)}(x) · x^{(ℓ−t)(n−k)} · U^{(t)}(x).

Now,

  (ℓ−t)(n−k) + (t−1)(n−1) = (ℓ−1)(n−k) + (t−1)(k−1).


Lemma 3.1  Let Q(x,y) = Σ_{t=0}^{ℓ} Q^{(t)}(x) y^t be a bivariate polynomial that satisfies (11). Then Q(x,y) satisfies (12) if and only if there exists a polynomial B(x) over F for which

  Σ_{t=0}^{ℓ} Q^{(t)}(x) · (V(x))^t = B(x) · Π_{j=1}^{n} (x − α_j),   (13)

where deg B(x) = deg( Σ_{t=0}^{ℓ} Q^{(t)}(x)(V(x))^t ) − n, and V(x) is the (unique) polynomial in F_n[x] that satisfies

  V(α_j) = v_j,   j ∈ [n].   (14)

We next reverse the order of coefficients in the polynomials involved. Define

  G(x) = Π_{j=1}^{n} (1 − α_j x)

and

  λ^{(t)}(x) = x^{N_t − 1} · Q^{(t)}(x^{−1}),   t ∈ [ℓ],   (15)

under the degree constraints

  deg Q^{(t)}(x) ≤ N_t − 1,   t ∈ [ℓ].   (16)

Lemma 3.2  Let Q(x,y) = Σ_{t=1}^{ℓ} Q^{(t)}(x) y^t satisfy (16), and let V(x), G(x), and (λ^{(t)}(x))_{t=1}^{ℓ} be defined by (13)–(15). There exists a (unique) polynomial Q^{(0)}(x) ∈ F_{N_0}[x] such that Q̂(x,y) = Q^{(0)}(x) + Q(x,y) satisfies (11)–(12) if and only if there exists a (unique) polynomial B̃(x) ∈ F_{ℓ(n−k)−τ}[x] satisfying

  Σ_{t=1}^{ℓ} λ^{(t)}(x) · x^{(ℓ−t)(n−k)} · (V(x))^t ≡ B̃(x) · G(x)  (mod x^{ℓ(n−k)}).   (17)

Proof:  We start with the "only if" part. Suppose that such a polynomial Q^{(0)}(x) exists; by Lemma 3.1, there is then a polynomial B(x) for which (13) holds.

If we now set ℓ = 1, then the smallest m satisfying (6) yields τ = ⌊(n−k)/2⌋, i.e., classical decoding. For ℓ = 2 we obtain τ = ⌈2n/3⌉ − k, and τ ≥ (n−k)/2 holds whenever k ≤ (n+1)/3; that is, τ is maximized, for the given k and n, by choosing m as small as (6) permits. Next define

  N_t = n − τ − t(k−1),   (9)

where, by (5), (6), and (8), we have

  n + 1 ≤ Σ_{t=0}^{ℓ} N_t ≤ n + ℓ + 1.

Sudan's algorithm is based on the following lemma, taken from [17].

Lemma 2.1  Whenever (5)–(6) hold, there exists a nonzero bivariate polynomial Q(x,y) over F that satisfies

  Q(x,y) = Σ_{t=0}^{ℓ} Q^{(t)}(x) y^t,   with deg Q^{(t)}(x) ≤ N_t − 1,   (10)

and

  Q(α_j, v_j) = 0   for all j ∈ [n].

In the last step of our decoding algorithm, which appears also in Sudan's algorithm, the codewords are reconstructed from the polynomials (λ^{(t)}(x))_{t=1}^{ℓ} through a procedure for finding linear factors of bivariate polynomials. We show that those factors can be found in time complexity O((ℓ log² ℓ) k (n + ℓ log q)) using root-finders of degree-ℓ univariate polynomials.

There are known general algorithms for factoring multivariate polynomials [19]; yet, those algorithms have relatively large complexity when applied to the particular application in this paper. We also mention the recent work of Gao and Shokrollahi [7], where they study the (slightly different) problem of finding linear factors of bivariate polynomials Q(x,y) where the polynomial arithmetic is carried out modulo a power of x (in which case more solutions may exist).

We comment that increasing the number τ of correctable errors by increasing the number ℓ of consistent codewords in Sudan's algorithm, as well as in ours, requires decreasing the maximum rate k/n of the RS codes to which the algorithm is applied. For example, when ℓ = 2 we will have τ ≥ (n−k)/2 only when k ≤ (n+1)/3. An improvement of Sudan's work [17] has been recently reported by Guruswami and Sudan in [8], where the constraints on the rates have been relaxed.

This work is organized as follows. For the sake of completeness, we review Sudan's algorithm in Section 2. In Section 3, the EKE is derived, and then, in Section 4, we present an algorithm that solves the EKE for the polynomials (λ^{(t)}(x))_{t=1}^{ℓ}. In Section 5, we show how the consistent codewords can be efficiently computed from those polynomials. Complexity analysis is given in Section 6 and, finally, a summary and examples are presented in Section 7.

2  Sudan's algorithm

Given a prescribed upper bound ℓ on the number of consistent codewords, Sudan's algorithm corrects any error pattern of up to τ errors for

  τ = n − (m+1) − ℓ(k−1),   (5)

where m is the smallest nonnegative integer satisfying

  (m+1)(ℓ+1) + (k−1) · binom(ℓ+1, 2) > n.   (6)

We will further assume throughout this paper that ℓ, k, and n satisfy the inequality

  ℓ + (k−1) · binom(ℓ+1, 2) ≤ n.   (7)

Indeed, it can be verified that if (7) did not hold, then, for the same values of k, n, and τ, (5) and (6) would still be satisfied if we decreased ℓ by 1 and chose m = k−1; this means that, for the given k, n, and τ, we could assume a shorter list of consistent codewords. In particular, setting ℓ = 2 in (7) implies k ≤ (n+1)/3. Note that from (6) and the minimality of m we obtain

  (m+1)(ℓ+1) + (k−1) · binom(ℓ+1, 2) ≤ n + ℓ + 1.   (8)
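The parameters m and τ of (5)–(6) can be computed directly by a linear search. A small sketch in pure integer arithmetic (the function name `sudan_params` and the example parameters are illustrative):

```python
from math import comb

def sudan_params(n, k, ell):
    """Smallest nonnegative m satisfying (6), and the resulting tau of (5)."""
    m = 0
    while (m + 1) * (ell + 1) + (k - 1) * comb(ell + 1, 2) <= n:
        m += 1                     # increase m until (6) holds strictly
    return m, n - (m + 1) - ell * (k - 1)
```

For instance, an [16, 4] code with ℓ = 1 gives τ = 6 = ⌊(n−k)/2⌋ (classical decoding), while ℓ = 2 gives τ = 7, one error beyond half the minimum distance; note that k = 4 ≤ (n+1)/3 here, as (7) requires.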


Writing λ(x) = Σ_{s=0}^{n−k−τ} λ_s x^s, we obtain from the KE the following set of τ homogeneous equations in the coefficients of λ(x):

  Σ_{s=0}^{n−k−τ} λ_s · S_{i−s} = 0,   n−k−τ ≤ i ≤ n−k−1.


In a recent paper [17], Sudan presented a decoding algorithm for [n,k] RS codes of the form (1) that corrects more than ⌊(n−k)/2⌋ errors. In this case, the decoding might not be unique, so the decoder's task is to find the list of codewords that differ from the received word in no more than τ locations. This task is referred to in the literature as list decoding [5].

If Gaussian elimination is used as an equation solver in Sudan's algorithm, then its time complexity is O(n³). The algorithm can be described as a method for interpolating the polynomial f(x) ∈ F_k[x] through the set of points {(α_j, v_j)}_{j=1}^{n}, while taking into account that some of the values v_j may be erroneous. The interpolation is done by computing from the received word v a nonzero bivariate polynomial Q(x,y) that vanishes at the points {(α_j, v_j)}_{j=1}^{n}, and then finding the linear factors, y − g(x), of Q(x,y). The codewords to which v is decoded are computed from the polynomials g(x) through re-encoding.
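The interpolation step just described can be sketched as plain Gaussian elimination (the O(n³) route mentioned above). A hedged toy over a small prime field: the field GF(7), the dimensions, the received word, and the function names (`nullspace_vec`, `interpolation_Q`) are illustrative choices, not the algorithm of [17] as stated there.

```python
def nullspace_vec(A, p):
    """Return a nonzero x with A.x = 0 over GF(p), or None if only x = 0 works."""
    rows, cols = len(A), len(A[0])
    A = [[a % p for a in row] for row in A]
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if A[i][c]), None)
        if piv is None:
            x = [0] * cols
            x[c] = 1                      # free column: set x_c = 1 ...
            for i, pc in enumerate(pivots):
                x[pc] = -A[i][c] % p      # ... and read pivot entries off the RREF
            return x
        A[r], A[piv] = A[piv], A[r]
        inv = pow(A[r][c], p - 2, p)      # normalize the pivot row
        A[r] = [a * inv % p for a in A[r]]
        for i in range(rows):
            if i != r and A[i][c]:
                f = A[i][c]
                A[i] = [(a - f * b) % p for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    return None

def interpolation_Q(alphas, v, N, p):
    """Nonzero Q(x,y) = sum_t Q^(t)(x) y^t with deg Q^(t) < N[t] and
    Q(alpha_j, v_j) = 0 for all j, via one nullspace computation."""
    rows = [[pow(vj, t, p) * pow(aj, i, p) % p
             for t in range(len(N)) for i in range(N[t])]
            for aj, vj in zip(alphas, v)]
    x = nullspace_vec(rows, p)
    Q, pos = [], 0
    for t in range(len(N)):
        Q.append(x[pos:pos + N[t]])
        pos += N[t]
    return Q
```

With more unknowns (Σ_t N_t) than equations (n), a nonzero solution always exists, which is exactly the counting argument behind Lemma 2.1.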

Viewing Sudan's algorithm, it is intriguing to find its relationship with the classical RS decoding algorithms; specifically, can his algorithm be somehow regarded as an extension of the previously-known RS decoding algorithms, and, if so, can we reduce the time complexity of the counterpart of Step D1 in his algorithm from cubic in n to quadratic in n−k?

In this work, we provide positive answers to those questions. We use the algorithm in [17] as a basis for developing an extended key equation (in short, EKE) which reduces to the (classical) KE when τ = ⌊(n−k)/2⌋. The EKE involves an integer parameter ℓ which provides an upper bound on the number of codewords that can be at Hamming distance τ from any received word; we refer to those codewords as the consistent codewords. Specifically, the EKE takes the form

  Σ_{t=1}^{ℓ} λ^{(t)}(x) · x^{(t−1)(k−1)} · S^{(t)}(x) ≡ Ω(x)  (mod x^{n−k}),

where S^{(1)}(x), S^{(2)}(x), …, S^{(ℓ)}(x) are `syndrome polynomials' that can be computed from the received word, and λ^{(1)}(x), λ^{(2)}(x), …, λ^{(ℓ)}(x), and Ω(x) are polynomials that satisfy certain degree constraints. Those polynomials are then used to find the consistent codewords as a final step. The KE is a special case of the EKE when ℓ = 1.

To compute the polynomials {λ^{(t)}(x)}_{t=1}^{ℓ}, we first translate the EKE into a set of τ homogeneous linear equations and then apply a generalization of Massey's algorithm that takes advantage of the special structure of the equations and solves them in time complexity O(ℓτ²), which is quadratic in n−k assuming that ℓ is fixed (e.g., ℓ = 2).


An [n,k] (generalized) Reed-Solomon (in short, RS) code C_RS over F is defined by

  C_RS = { c = (f(α_1), f(α_2), …, f(α_n)) : f(x) ∈ F_k[x] },   (1)

where α_1, α_2, …, α_n are n distinct nonzero elements of F, referred to as the code locators.
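Definition (1) can be sketched as a minimal encoder over a prime field GF(p) — an illustrative stand-in for GF(q); the function name `rs_encode` and the parameters below are invented for the example:

```python
def rs_encode(f, alphas, p):
    """The codeword (f(alpha_1), ..., f(alpha_n)) of (1) over GF(p);
    f is the coefficient list of a message polynomial with deg f < k."""
    def ev(c, x):                  # Horner evaluation over GF(p)
        r = 0
        for a in reversed(c):
            r = (r * x + a) % p
        return r
    return [ev(f, a) for a in alphas]
```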


… the syndrome elements S_0, S_1, …, S_{n−k−1} are commonly written in the form of a polynomial S(x) = Σ_{i=0}^{n−k−1} S_i x^i.

Step D1: Solving the key equation (in short, KE) of RS codes,

  λ(x) · S(x) ≡ Ω(x)  (mod x^{n−k}),

for the error-locator polynomial λ(x) of degree at most τ and for the error-evaluator polynomial Ω(x) of degree less than n−k−τ.

Step D2: Computing the error locations and finding their values (from λ(x) and Ω(x)).
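Step D1 is classically carried out with Massey's algorithm, which synthesizes the shortest linear-feedback shift register generating a given sequence. A compact sketch over a prime field GF(p) — the field, the test recurrence, and the function name `massey` are illustrative choices, not the paper's code:

```python
def massey(s, p):
    """Shortest LFSR (connection polynomial C, length L) generating s over GF(p).

    C = [1, c1, ..., cL] encodes s[n] = -(c1*s[n-1] + ... + cL*s[n-L]) mod p.
    """
    C, B = [1], [1]        # current and previous connection polynomials
    L, m, b = 0, 1, 1      # LFSR length, steps since last update, last discrepancy
    for n in range(len(s)):
        # discrepancy between s[n] and the register's prediction
        d = sum(C[i] * s[n - i] for i in range(L + 1)) % p
        if d == 0:
            m += 1
            continue
        coef = d * pow(b, p - 2, p) % p          # d / b in GF(p)
        T = C[:]
        if len(B) + m > len(C):
            C = C + [0] * (len(B) + m - len(C))
        for i in range(len(B)):                  # C <- C - (d/b) * x^m * B
            C[i + m] = (C[i + m] - coef * B[i]) % p
        if 2 * L <= n:                           # length change needed
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C, L
```

In an RS decoder the input sequence would be the syndromes S_0, …, S_{n−k−1} over GF(q), and C plays the role of the error-locator polynomial λ(x).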

To specify the time complexity of algorithms, we count operations on elements of F and use the notation h_1(m) = O(h_2(m)) to mean that there exist positive constants c and m_0 such that h_1(m) ≤ c·h_2(m) for all integers m ≥ m_0 [1, Ch. 1]. Based on techniques presented in [1, Ch. 8], procedures for evaluating polynomials in F_n[x] at n points in F, as well as interpolating such polynomials given their values at n points in F, can be carried out in time complexity O(n log² n).

Denote by [n] the set of integers {1, 2, …, n}. The syndrome elements in Step D0 are computed by

  S_i = Σ_{j=1}^{n} v_j · β_j · α_j^i,   (2)

where

  β_j = Π_{r∈[n]\{j}} (α_j − α_r)^{−1}   (3)

(see Proposition 4.1 below). Using a result by Kaminski et al. in [9], it can be shown that the time complexity of computing (2) is the same as that of evaluating a polynomial in F_n[x] at the code locators α_j, j ∈ [n], and is therefore O(n log² n).
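Formulas (2)–(3) can be sketched naively over a prime field GF(p) — an illustrative stand-in for GF(q), with Fermat inversion in place of a proper field implementation and a quadratic-time β computation rather than the O(n log² n) method cited above:

```python
def syndromes(v, alphas, p, r):
    """S_i = sum_j v_j * beta_j * alpha_j^i for i = 0..r-1, per (2),
    with the beta_j of (3) computed over GF(p)."""
    n = len(alphas)
    betas = []
    for j in range(n):
        prod = 1
        for t in range(n):
            if t != j:
                prod = prod * (alphas[j] - alphas[t]) % p
        betas.append(pow(prod, p - 2, p))        # inverse via Fermat's little theorem
    return [sum(v[j] * betas[j] * pow(alphas[j], i, p) for j in range(n)) % p
            for i in range(r)]
```

As expected, the syndromes S_0, …, S_{n−k−1} of an uncorrupted codeword vanish, and corrupting a single position makes some syndrome nonzero.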

Step D2 can be carried out through Chien search [4] and Forney's algorithm (see [3]). Both algorithms involve evaluation of polynomials at given points, implying that reconstructing the codewords in Step D2 can be executed in time complexity O(n log² n).
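Chien search amounts to evaluating the error locator λ at the inverse code locators and flagging the positions where it vanishes. A toy sketch over GF(p) — the field, locators, and the example λ are illustrative, and `chien_roots` is an invented name (Forney's value computation is omitted):

```python
def chien_roots(lam, alphas, p):
    """Positions j whose locator is flagged: lambda(alpha_j^{-1}) = 0 over GF(p)."""
    def ev(c, x):                  # Horner evaluation over GF(p)
        r = 0
        for a in reversed(c):
            r = (r * x + a) % p
        return r
    return [j for j, a in enumerate(alphas) if ev(lam, pow(a, p - 2, p)) == 0]
```

For example, λ(x) = (1 − 2x)(1 − 5x) over GF(13) flags exactly the positions whose locators are 2 and 5.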


Efficient Decoding of Reed-Solomon Codes
Beyond Half the Minimum Distance

Ron M. Roth        Gitit Ruckenstein
Computer Science Department
Technion, Haifa 32000, Israel
e-mail: {ronny, gitit}@cs.technion.ac.il

Abstract

A list decoding algorithm is presented for [n,k] Reed-Solomon (RS) codes over GF(q), which is capable of correcting more than ⌊(n−k)/2⌋ errors. Based on a previous work of Sudan, an extended key equation (EKE) is derived for RS codes, which reduces to the classical key equation when the number of errors is limited to ⌊(n−k)/2⌋. Generalizing Massey's algorithm that finds the shortest recurrence that generates a given sequence, an algorithm is obtained for solving the EKE in time complexity O(ℓ(n−k)²), where ℓ is a design parameter, typically a small constant, which is an upper bound on the size of the list of decoded codewords (the case ℓ = 1 corresponds to classical decoding of up to ⌊(n−k)/2⌋ errors, where the decoding ends with at most one codeword). This improves on the time complexity O(n³) needed for solving the equations of Sudan's algorithm by a naive Gaussian elimination. The polynomials found by solving the EKE are then used for reconstructing the codewords in time complexity O((ℓ log² ℓ) k (n + ℓ log q)) using root-finders of degree-ℓ univariate polynomials.

Keywords: Decoding; Key equation; Polynomial interpolation; Reed-Solomon codes.

1  Introduction

An [n,k] code C over the finite field F = GF(q) is a k-dimensional linear subspace of F^n. It is well-known that C can recover uniquely and correctly any pattern of τ errors or less if and only if τ ≤ ⌊(d−1)/2⌋, where d is the minimum Hamming distance between any two distinct elements (codewords) of C. Most of the decoding algorithms for error-correcting codes indeed work under the assumption that the number of errors is at most ⌊(d−1)/2⌋.

* This work was done in part at Hewlett-Packard Laboratories, Palo Alto, California, and Hewlett-Packard Laboratories, Israel. This work was supported by grant No. 95-522 from the United States–Israel Binational Science Foundation (BSF), Jerusalem, Israel.
