
# Introduction to Codes on Graphs and Iterative Decoding Algorithms

• Introduction to Codes on Graphs

and Iterative Decoding Algorithms

Igal Sason

Technion–Israel Institute of Technology

Haifa 32000, Israel

[email protected]

March 28, 2004

1

• An important note:

These slides combine the slides presented by Robert McEliece at Allerton 2000 and ISIT 2001 with introductory material presented by Claude Berrou, Tom Richardson, Rüdiger Urbanke, and Charles Thomas et al. in the IEEE Communications Magazine, vol. 41, August 2003. The purpose of the slides is to provide an overview of the subject, and they were edited only for educational purposes in the graduate course “Codes on Graphs and Iterative Decoding Algorithms” which I give in the Electrical Engineering department at the Technion. The credit for the slides goes mainly to the people mentioned above, as only 10% of these slides were created by me.

Sincerely, I. Sason.

2

• Shannon’s Channel Coding Theorem

[Plot: bit error probability vs. rate (in multiples of capacity), showing the attainable region below capacity and the forbidden region above it.]

Theorem: For any (discrete-input memoryless) channel, there exists a number C, the channel capacity, such that for any desired data rate R < C and any desired error probability p > 0, it is possible to design an encoder-decoder pair that permits the transmission of data over the channel at rate R with decoded error probability < p.

3


• How Hard is it to Approach Channel Capacity?

• Desired transmission rate R = C(1 − ε).

• Desired decoder error probability = p.

• χE(ε, p) = the minimum possible encoding complexity, in operations per information bit.

• χD(ε, p) = the minimum possible decoding complexity, in operations per information bit.

For fixed p, how do χE(ε, p) and χD(ε, p) behave as ε → 0?

9


• The Classical Results.

Theorem: On a discrete memoryless channel of capacity

C, for any fixed p > 0, as ε → 0,

χE(ε, p) = O(1/ε²)

χD(ε, p) = 2^O(1/ε²).

Proof: Use linear codes with (per-bit) encoding complexity O(n), and ML decoding with complexity 2^O(n). And n = O(1/ε²), because of the random coding exponent:

p ≤ e^(−n·Er(R)),

[Plot: the error exponent Er(R), positive for R < C and reaching 0 at R = C.]

where

Er(C(1 − ε)) ≈ Kε² as ε → 0.

11

• Theoreticians: Reduce the decoding complexity to

χD(ε, p) = O((1/ε)^m), m ≥ 0.

Engineers: Approach the Shannon limit practically.

12

• Pre-1993 State of the Art on the AWGN Channel

RS Encoder → Π (Interleaver) → Convolutional Encoder

[Plot: BER vs. Eb/N0 (dB) for uncoded transmission and the concatenated deep-space codes:
Voyager = (7,1/2) + (255,223); Galileo (not S-band) = (15,1/4) + (255,223); Pathfinder/Cassini = (15,1/6) + (255,223).]

13

• 1993: And Then Came. . .

14

• The Turbo-Era State of the Art on the AWGN Channel

[Plot: BER vs. Eb/N0 (dB). The pre-1993 deep-space codes (Uncoded, Voyager = (7,1/2) + (255,223), Galileo (not S-band) = (15,1/4) + (255,223), Pathfinder/Cassini = (15,1/6) + (255,223)) are shown alongside turbo codes of rates 1/3 and 1/6 with information block sizes k = 1784 and k = 8920, decoded with 10 iterations.]

15

• 1963: The Grandaddy of Them All:

16


• Codes that can be Decoded

in the “Turbo-Style”

• Classical turbo codes: two parallel IIR encoders; encoder 1 takes the data directly, encoder 2 takes it through the interleaver Π.

• “Serial” turbo codes: encoder 1, then the interleaver Π, then encoder 2 (IIR).

18

• Codes that can be Decoded

in the “Turbo-Style”

• Gallager codes (Low-Density Parity-Check), regular and irregular:

[Tanner graph: codeword symbol nodes connected through an interleaver Π to parity-check nodes (+).]

19

• We Should Include Variations on the Turbo-Theme (“Turbolike” Codes)

• Gallager (Low-Density Parity-Check) Codes

• Irregular LDPC Codes (Luby, Mitzenmacher, Richardson, Shokrollahi, Spielman, Stemann, and Urbanke)

• Repeat-Accumulate Codes (Divsalar, Jin, McEliece)

• Irregular Turbo Codes (Frey and MacKay)

• Asymmetric Turbo Codes (Costello and Massey)

• Mixture Turbo Codes (Divsalar, Dolinar, and Pollara)

• Doped Turbo Codes (ten Brink)

• Concatenated Tree Codes (Ping and Wu)

...

20

• Repeat-Accumulate (RA) Codes

(nonsystematic)

rate-1/q repetition → Π (Interleaver) → rate-1 accumulator 1/(1+D)

21

• Repeat-Accumulate (RA) Codes

(nonsystematic)

rate-1/q repetition → Π (Interleaver) → rate-1 accumulator 1/(1+D)

[Tanner graph representation (k = 2, q = 3): variable nodes connected through the interleaver to check nodes (+).]

22
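The encoder just drawn can be sketched in a few lines of Python (a hypothetical `ra_encode` helper, not from the slides): repeat each information bit q times, permute, then accumulate, since the rate-1 filter 1/(1+D) over GF(2) is just a running XOR.

```python
import random

def ra_encode(info_bits, q, perm):
    """Nonsystematic RA encoding: repeat each bit q times,
    interleave with the permutation `perm`, then pass through the
    rate-1 accumulator 1/(1+D), i.e. a running XOR."""
    repeated = [b for b in info_bits for _ in range(q)]
    interleaved = [repeated[perm[i]] for i in range(len(perm))]
    out, acc = [], 0
    for b in interleaved:
        acc ^= b                 # accumulator state
        out.append(acc)
    return out

# k = 2, q = 3, as in the Tanner graph above, with a random interleaver
random.seed(0)
perm = list(range(2 * 3))
random.shuffle(perm)
codeword = ra_encode([1, 0], 3, perm)
```

With the identity permutation and input (1, 0), the repeated stream is 1 1 1 0 0 0 and the accumulator outputs 1 0 1 1 1 1.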

• Decoding an RA Code

Using Message Passing

+ + + + + +

23


• What are the Messages?

[Diagram: messages m flow on the edges between a variable node x and its neighboring check nodes (+).]

m = log [p(x = 0) / p(x = 1)].

32

• How Messages are Updated

At Variable Nodes

[Diagram: a variable node with incoming messages m1, m2, …, mk and one outgoing message m.]

m = m1 + m2 + · · · + mk.

33

• How Messages are Updated

At Check Nodes

[Diagram: a check node (+) with incoming messages m1, m2, …, mk and one outgoing message m.]

m = m1 ⊞ m2 ⊞ · · · ⊞ mk, defined by

tanh(m/2) = tanh(m1/2) tanh(m2/2) · · · tanh(mk/2).

34
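The two update rules translate directly into code. A minimal sketch (the function names `variable_update` and `check_update` are my own), including a numerical clip so the atanh of a product of tanh's stays finite:

```python
import math

def variable_update(incoming):
    """Variable-node rule: the outgoing LLR is the sum of the incoming LLRs."""
    return sum(incoming)

def check_update(incoming):
    """Check-node rule: tanh(m/2) = product of tanh(mi/2) over incoming LLRs."""
    t = 1.0
    for m in incoming:
        t *= math.tanh(m / 2.0)
    # clip so atanh stays finite under floating point
    t = max(min(t, 1.0 - 1e-12), -1.0 + 1e-12)
    return 2.0 * math.atanh(t)
```

As a sanity check, a check node with a single incoming message passes it through unchanged, and the sign of the output is the product of the incoming signs.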

• Decoding an RA Code on the BEC

Using Message Passing

(legend: one arrow type carries the value 1, the other the value 0)

+ + + + + +

35

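On the BEC, the message-passing schedule animated above degenerates into a peeling decoder: any check node with exactly one erased neighbor determines that bit. A sketch under that interpretation (the helper name `peel_decode` is mine):

```python
def peel_decode(H_rows, received):
    """Iterative erasure decoding on the BEC: repeatedly find a parity
    check with exactly one erased bit and solve for it.
    H_rows: list of checks, each a list of variable indices.
    received: list of 0, 1, or None (erasure)."""
    bits = list(received)
    progress = True
    while progress:
        progress = False
        for check in H_rows:
            erased = [v for v in check if bits[v] is None]
            if len(erased) == 1:
                # the erased bit equals the XOR of the known bits on the check
                x = 0
                for v in check:
                    if bits[v] is not None:
                        x ^= bits[v]
                bits[erased[0]] = x
                progress = True
    return bits
```

For example, with checks {x0⊕x1⊕x2 = 0, x2⊕x3⊕x4 = 0} and received word (1, ?, 0, ?, 0), both erasures are recovered; if every check sees two or more erasures, the decoder stalls, which is exactly the stopping-set phenomenon.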

• Turbolike Codes Have Certainly Met the Challenge

on the Binary Erasure Channel !

[Diagram: the binary erasure channel: each input bit is received correctly with probability 1 − p and erased (?) with probability p.]

Theorem: For the binary erasure channel, for the ensemble of (degree-profile optimized) irregular LDPC codes with iterative belief propagation decoding, as ε → 0,

χD(ε, p) = O(log(1/ε)).

Irregular LDPC Codes, Density Evolution

(Luby, Mitzenmacher, Shokrollahi, Spielman, Stemann)

42
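Behind the theorem sits the density-evolution recursion; for a regular (dv, dc) ensemble on BEC(ε) it is x_{l+1} = ε(1 − (1 − x_l)^(dc−1))^(dv−1), and the threshold is the largest ε that drives x_l to 0. A numerical sketch (function names are mine; for the regular (3,6) ensemble this bisection lands near the known threshold ε* ≈ 0.429):

```python
def de_fraction(eps, dv, dc, iters=1000):
    """Density evolution for a regular (dv, dc) LDPC ensemble on BEC(eps):
    x_{l+1} = eps * (1 - (1 - x_l)**(dc-1))**(dv-1), starting from x_0 = eps."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
    return x

def bec_threshold(dv, dc, tol=1e-4):
    """Largest erasure probability for which the recursion dies to zero
    (bisection on eps; the convergence test is a small numerical cutoff)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if de_fraction(mid, dv, dc) < 1e-6:
            lo = mid
        else:
            hi = mid
    return lo
```

Below threshold the erasure fraction collapses quadratically (each variable node has two other neighbors), above it the recursion sticks at a nonzero fixed point.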

• Turbolike Codes Appear to Have Met the Challengeon the Additive Gaussian Noise Channel (I)

[Plot: BER vs. Eb/N0 (dB), within 0.5 dB of the Shannon limit, for irregular LDPC codes with maximum variable-node degree dl = 100 and dl = 200, together with their density-evolution thresholds for dl = 100, 200, and 8000.]

Irregular LDPC Codes, Density Evolution

(Chung, Forney, Richardson, Urbanke, 2001)

43

• Turbolike Codes Appear to Have Met the Challengeon the Additive Gaussian Noise Channel (II)

[Plot: BER vs. Eb/N0 (dB) for two turbo-Hadamard codes after 5 to 50 decoding iterations: N = 200, r = 5, M = 4, S = 4, G(x) = (1+x)/(1+x+x²); and N = 65534, r = 7, M = 3, S = 2, G(x) = 1/(1+x).]

Turbo-Hadamard Codes (Ping, Leung, Wu, ISIT 2001)

44

• What is the Complexity of Iterative

Message-Passing Decoding?

• Complexity per iteration:

χIT = 2E/k,

where E is the number of edges in the Tanner graph, and k is the number of information bits (χIT is an ensemble invariant).

• N(ε, π) = the number of iterations needed to achieve error probability π.

χD(ε, π) = χIT · N(ε, π).

45

• One Interesting Point

[Plot: BER vs. Eb/N0 (dB). Galileo, (15,1/4) + (255,223) RS: χ = 50,000 messages per decoded bit. Rate-1/4 RA code (k = 4096): χ = 500 messages per decoded bit.]

46

• A Conjecture

Turbolike codes meet the Shannon challenge for any symmetric binary-input channel. To be precise: there exists a sequence of turbolike code ensembles plus matched iterative belief propagation decoding algorithms, such that for any fixed p, as ε → 0,

χE(ε, p) = O(log(1/ε))

χD(ε, p) = O((1/ε) log(1/ε)).

(Khandekar and McEliece, ISIT 2001)

47

• Three Garden-Variety SBIC’s

[Diagrams: the binary symmetric channel (each bit flipped with probability p), the binary erasure channel (each bit erased with probability p), and the binary-input AWGN channel y = x + z with x ∈ {+1, −1} and Gaussian noise z ~ N(0, σ²).]

48

• The Generalization (Gallager, 1963)

Definition: A symmetric binary-input channel is a memoryless, discrete-time channel with

• Input alphabet X = {+1,−1}.

• Output alphabet Y ⊆ Real Numbers.

• Transition probabilities

p(y|x = +1) = f(y)

p(y|x = −1) = f(−y).

49

• Examples of SBIC’s

• The Binary Erasure Channel:

f(y) = (1 − p)δ(y − 1) + pδ(y).

• The Binary Symmetric Channel:

f(y) = (1 − p)δ(y − 1) + pδ(y + 1).

• The Binary-Input AWGN Channel:

f(y) = K exp(−(y − 1)²/2σ²).

• Fast Rayleigh Fading (noncoherent model):

f(y) = K exp(−y/A) if y ≥ 0, K exp(y(1 + A)/A) if y < 0.

50
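For iterative decoding, an SBIC enters the decoder only through the channel LLR m = log f(y)/f(−y). For the garden-variety channels above this reduces to closed forms; a sketch (function names are mine, with the BSC/BEC outputs mapped to ±1 and 0 standing for an erasure):

```python
import math

def awgn_llr(y, sigma2):
    """BPSK over AWGN, noise variance sigma2: log f(y|+1)/f(y|-1) = 2y/sigma2."""
    return 2.0 * y / sigma2

def bsc_llr(y, p):
    """BSC with crossover p, output mapped to {+1, -1}: LLR = y * log((1-p)/p)."""
    return y * math.log((1.0 - p) / p)

def bec_llr(y):
    """BEC with outputs {+1, -1} and 0 for an erasure: LLR is +/-inf or 0."""
    if y == 0:
        return 0.0
    return math.inf if y > 0 else -math.inf
```

These are exactly the values fed into the variable nodes as the zeroth-iteration messages.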


• But What About Non-SBIC’s, i.e.,(Memoryless) Channels that are

[Diagram: the Z-channel: input 0 is always received as 0; input 1 is received as 1 with probability 1 − p and as 0 with probability p.]

Nonsymmetric?

Nonbinary?

Multiuser?

Etc.?

55

• The Simplest Nonsymmetric Channel: The Z-Channel

[Diagram: the Z-channel: input 0 is always received as 0; input 1 flips to 0 with probability p.]

[Plot: the capacity of the Z-channel as a function of p, decreasing from 1 at p = 0 to 0 at p = 1.]

56
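The capacity curve can be reproduced numerically by maximizing I(X;Y) over the single input parameter P(X = 1). A brute-force grid-search sketch (the helper name `z_capacity` is mine):

```python
import math

def z_capacity(p, grid=10000):
    """Capacity of the Z-channel (input 1 flips to 0 with prob p) by
    maximizing I(X;Y) over the input distribution on a grid."""
    def h2(x):
        if x <= 0.0 or x >= 1.0:
            return 0.0
        return -x * math.log2(x) - (1 - x) * math.log2(1 - x)
    best = 0.0
    for i in range(1, grid):
        a = i / grid                      # P(X = 1)
        # P(Y = 1) = a(1-p); I(X;Y) = H(Y) - H(Y|X) = h2(a(1-p)) - a*h2(p)
        best = max(best, h2(a * (1 - p)) - a * h2(p))
    return best
```

At p = 0.5 this matches the known closed form log2(1 + (1 − p) p^(p/(1−p))) ≈ 0.322 bits, and the optimal input prior is biased toward the noiseless symbol.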

• An Experiment on the Z-channel (Rate-1/3 Repeat-Accumulate Code, k = 1000)

[Plot: bit and word error rates of rate-1/3 RA codes (k = 1000, 50 decoding iterations) on the Z-channel, as a function of −log10 p, with the thresholds p* and p*0 marked.]

57


• “Unfortunately, the problem of finding decoding algorithms

is not so simple.” —R.G.G.

Joint Decoding-Demapping

Arbitrary DMC → Demapper → Binary Iterative Decoder

How does the demapper interact with the iterative decoder?

59

• A Bayesian Network For a 4:1 Mapper

Problem: infer z1, z2, z3, z4 after observing y.

[Bayesian network: z1, z2, z3, z4 feed the mapping x = f(z1, z2, z3, z4); x passes through the noisy channel q(y|x) to produce y.]

60

• The Corresponding Junction Tree

[Junction tree: the priors p(z1), p(z2), p(z3), p(z4) (supplied by the iterative decoder) attach to the mapping node x = f(z1, z2, z3, z4), which attaches to the channel evidence q(y|x).]

61

• Example: The 16-ary Symmetric Channel

[Diagram: the 16-ary symmetric channel over the 4-bit symbols 0000 through 1111; each symbol is received correctly with probability 1 − p and as each of the other 15 symbols with probability p/15.]

62

• An Experiment on the 16-ary Symmetric Channel

Rate-1/3 RA Code

[Plot: WER vs. −log10 p for rate-1/3 codes on the 16-ary symmetric channel: a regular RA code, a GF(256) RS code, a GF(16) RS code, and a (750,250) GF(16) MDS code.]

63

• This Approach has Proved Effective on the

[Plot: throughput (bits per channel use) vs. Eb/N0 (dB) for BPSK, QPSK, 8PSK, and 16QAM signaling against the unconstrained capacity, comparing many coded-modulation operating points before and after the advent of turbo codes.]

(graph due to Divsalar and Pollara)

64

• A Simple Multiuser (Multiaccess) Channel

[Diagram: inputs w and x enter the multiaccess channel p(y|x, w), producing the single output y.]

(The users w and x must encode independently.)

65

• A Tanner Graph for a (,) LDPC Code

[Tanner graph: variable nodes x1, …, x6 connected to four parity-check nodes (+).]

66

• Splitting the Graph (I)

[Diagram: the same Tanner graph, redrawn with its parity checks separated as the first step of the split.]

67

• Splitting the Graph (II)

[Diagram: each variable node is split into a pair (wi, xi), i = 1, …, 6, sharing the original parity checks.]

68

• A Tanner Graph for a Multiaccess LDPC Code

(n = , R = /, R = /).

[Tanner graph of the multiaccess LDPC code: user 1 variable nodes w1, …, w6 and user 2 variable nodes x1, …, x6 share the parity-check nodes (+), and each pair (wi, xi) is attached to a channel output node yi.]

yi = channel response to (wi, xi).

69

• The Corresponding Junction Graph

[Junction graph: parity clusters (w1, w2, w3), (x1, x4, x6), (x3, x5, x6), and (x2, x4, x5) connected to the channel-evidence nodes p(yi|wi, xi), i = 1, …, 6.]

Each parity cluster carries the indicator

χ = 1 if even parity, 0 if odd parity.

70

• Example: The Binary Adder Channel

[Diagram: x1 and x2 enter a real adder: y = x1 + x2.]

x1 x2 → y
0 0 → 0
0 1 → 1
1 0 → 1
1 1 → 2

71
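The table says exactly what the channel output reveals: y = 0 or y = 2 pins down both users' bits, while y = 1 fixes only their XOR, which is the erasure-with-parity structure that joint decoding exploits. A tiny sketch (the helper name `bac_demap` is mine):

```python
def bac_demap(y):
    """What a BAC output y = x1 + x2 reveals about the input pair:
    y = 0 or 2 determines both bits; y = 1 leaves them individually
    unknown but fixes x1 XOR x2 = 1."""
    if y == 0:
        return {"x1": 0, "x2": 0, "xor": 0}
    if y == 2:
        return {"x1": 1, "x2": 1, "xor": 0}
    return {"x1": None, "x2": None, "xor": 1}
```

The joint decoder then treats the y = 1 positions like correlated erasures, resolving them through the parity checks of the two split codes.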

• BAC Joint Decoding Example

[Animation across several slides: the received channel sums are (1, 0, 1, 2, 1, 2). The joint decoder repeatedly applies the demapping rules (y = 0 forces both user bits to 0, y = 2 forces both to 1, y = 1 fixes only their XOR) together with the parity checks of the two split codes, until every unknown bit pair is resolved.]

• The Capacity Region for the BAC

[Plot: the capacity region of the BAC in the (R1, R2) plane, bounded by R1 ≤ 1, R2 ≤ 1, and R1 + R2 ≤ 1.5.]

86

• Experimental Results Based on SplittingIrregular RA Codes (n = )

[Plot: BER vs. total rate R1 + R2 (from 1.43 to 1.5) for IRA codes designed by graph-splitting on the BAC, approaching the sum-rate limit of 1.5.]

87

• A Theorem.

Theorem: Turbolike codes meet Shannon's challenge on the BAC (without the need to timeshare).

Proof: Use density evolution. It's almost exactly like the BEC.

(Palanki, Khandekar and McEliece, Allerton 2001)

88


• Conclusions?

• On standard channel models (SBIC's), coding technology is quite mature.

• Still, theory has a lot of catching up to do. Density evolution may not be enough.

• Although applications of turbolike codes to nonstandard channels are just beginning to appear, graph-based iterative message-passing may be a panacea.

Coding theory is alive, and provides many hot topics for further research and for practical applications!

93

• Practical Applications of Turbo-Like Codes and Iterative Decoding Algorithms in Wireless Communications

94

• Practical Implementations of Turbo Codes

• Turbo codes have been proposed to the Consultative Committee for Space Data Systems (CCSDS) by the Jet Propulsion Laboratory. The latency issue due to large interleavers is not crucial for deep-space communications.

• The standardized turbo encoder (see Fig. 1) consists of two 16-state recursive systematic convolutional (RSC) codes, connected with an algorithmically described interleaver.

• The constituent convolutional encoders are terminated independently at the end of each block. Code rates close to 1/2, 1/3, 1/4 and 1/6 are achieved by appropriate puncturing of the output symbols.

• Block lengths of 1784 through 8920 information bits are specified, to match those of the (255, 223) Reed-Solomon code with interleaving depth 1 through 5.

95
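A recursive systematic convolutional (RSC) constituent encoder of the kind described above can be sketched generically as a shift register with feedback and feedforward taps. The taps below are illustrative only, not the actual CCSDS polynomials:

```python
def rsc_encode(bits, fb_taps, ff_taps, memory):
    """Toy recursive systematic convolutional (RSC) encoder sketch.
    fb_taps / ff_taps: register indices for the feedback and feedforward
    polynomials (hypothetical example taps, not the CCSDS ones).
    Returns (systematic stream, parity stream)."""
    reg = [0] * memory
    systematic, parity = [], []
    for b in bits:
        fb = b
        for t in fb_taps:
            fb ^= reg[t]          # feedback: recursion on the register
        out = fb
        for t in ff_taps:
            out ^= reg[t]         # feedforward: parity output taps
        systematic.append(b)      # systematic branch passes the data through
        parity.append(out)
        reg = [fb] + reg[:-1]     # shift the register
    return systematic, parity
```

In a turbo code, two such encoders run in parallel: one on the information block directly, one on its interleaved copy, and the parity streams are punctured to the target rate.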

• [Diagram: the CCSDS turbo encoder. The input information block is buffered and fed to ENCODER a directly and to ENCODER b through the interleaver; each encoder is a recursive circuit of four delay elements (D) with taps G0, G1, G2, G3, and the output lines (out 0a through out 3a, out 1b, out 3b) are punctured to produce rates 1/2, 1/3, 1/4, and 1/6. Legend: exclusive OR, take every symbol / take every other symbol, D = single-bit delay.]

Figure 1: The Consultative Committee for Space Data Systems (CCSDS) turbo-code encoder.

96

• [Plot (a): BER vs. Eb/N0 (dB) for rate-1/2 turbo codes with k = 1784 and k = 8920 against (7,1/2) + RS with interleaving depths I = 1 and I = 5. Plot (b): rate-1/6 turbo codes with the same block lengths against (15,1/6) + RS with I = 1 and I = 5.]

Figure 2: Bit error rates for (a) several codes with rates near 1/2 and (b) several codes with rates near 1/6.

97

• Practical Use of Turbo Codes (Cont.)

• Turbo codes have already made their presence felt in practical applications and are part of the third-generation (3G) wireless communication standards.

• In third-generation (3G) mobile cellular wireless systems, error-control coding must accommodate both voice and data users, whose requirements vary considerably in terms of latency, throughput and the impact of errors on the user application.

• Turbo encoding and decoding is used for data transmission in 3G communication systems.

• The latency issue is a weak point for turbo codes, which need several repeated calculations at the receiver side, because of the length of the interleaver.

⇒ The latency issue is the reason why a simple convolutional code was preferred for 3G voice transmission.

98

• [Diagrams a)-d): the four practical turbo-code structures, each built from two constituent encoders and a permutation Π; the binary versions take k binary data bits and output X, Y1, Y2, while the duobinary versions take k/2 binary couples (A, B) and output Y1, Y2.]

Figure 3: The four turbo codes used in practice: a) 8-state binary, b) 8-state duobinary, both with polynomials 15, 13 (or their symmetric form 13, 15); c) 16-state binary; d) 16-state duobinary, both with polynomials 23, 35 (or their symmetric form 31, 27). Binary codes are suitable for rates lower than 1/2, duobinary codes for rates higher than 1/2.

99

• Table 1. Current known applications of convolutional turbo codes.

| Application | Turbo code | Termination | Polynomials | Rates |
|---|---|---|---|---|
| CCSDS (deep space) | Binary, 16-state | Tail bits | 23, 33, 25, 37 | 1/6, 1/4, 1/3, 1/2 |
| UMTS, cdma2000 (3G mobile) | Binary, 8-state | Tail bits | 13, 15, 17 | 1/4, 1/3, 1/2 |
| DVB-RCS (return channel over satellite) | Duobinary, 8-state | Circular | 15, 13 | 1/3 up to 6/7 |
| DVB-RCT (return channel over terrestrial) | Duobinary, 8-state | Circular | 15, 13 | 1/2, 3/4 |
| Inmarsat (M4) | Binary, 16-state | No | 23, 35 | 1/2 |
| Eutelsat (Skyplex) | Duobinary, 8-state | Circular | 15, 13 | 4/5, 6/7 |

Figure 4: Current known applications of convolutional turbo codes.

100

• [Plot: frame error rate vs. Eb/N0 (dB) for QPSK with an 8-state binary code (R = 1/3, 640 bits), QPSK with 8-state and 16-state duobinary codes (R = 2/3, 1504 bits), and 8-PSK with a 16-state duobinary code (R = 2/3, 1504 bits, pragmatic coded modulation).]

Figure 5: Some examples of performance, expressed in frame error rate (FER), achievable with turbo codes on Gaussian channels. In all cases: decoding using the Max-Log-MAP algorithm with eight iterations, and 4-bit (QPSK) or 8-bit (8-PSK) input quantization.

101

• [Diagram: 3G transmitter and receiver chains. The voice stream passes through a shift-register convolutional encoder and is decoded with a Viterbi decoder; the data stream passes through a turbo encoder (two recursive RSC encoders and a permutation π) and is decoded with an iterative turbo decoder using π and π⁻¹ between its component decoders (D1).]

Figure 6: Voice and data channel encoding and decoding in third-generation wireless communication systems.

102

• Representation of Binary Linear Codes by Bipartite Graphs

H =
 1 1 1 1 0 1 1 0 0 0
 0 0 1 1 1 1 1 1 0 0
 0 1 0 1 0 1 0 1 1 1
 1 0 1 0 1 0 0 1 1 1
 1 0 0 1 0 1 0 1 1 1

[Tanner graph: variable nodes v1, …, v10 (the columns of H) connected to check nodes C1, …, C5 (the rows of H).]

Figure 7: A parity-check matrix H and the corresponding Tanner graph. To illustrate the relation more clearly, column 8 and row 2 of H are shown in bold. The corresponding variable node and check node, as well as the attached edges, are emphasized as well.

103
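The correspondence in the figure is mechanical: every 1 in H is one edge of the Tanner graph. A sketch using the matrix above (the helper name `tanner_edges` is mine):

```python
H = [
    [1, 1, 1, 1, 0, 1, 1, 0, 0, 0],
    [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
    [0, 1, 0, 1, 0, 1, 0, 1, 1, 1],
    [1, 0, 1, 0, 1, 0, 0, 1, 1, 1],
    [1, 0, 0, 1, 0, 1, 0, 1, 1, 1],
]

def tanner_edges(H):
    """Edges of the Tanner graph: one (check, variable) pair per 1 in H."""
    return [(c, v) for c, row in enumerate(H) for v, x in enumerate(row) if x]

edges = tanner_edges(H)
deg_v8 = sum(1 for c, v in edges if v == 7)   # variable node v8 (column 8)
```

Column 8 of H has weight 4, so variable node v8 has degree 4, matching the emphasized column in the figure; each row of H has weight 6, so every check node has degree 6.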

• Practical Implementations of LDPC Codes

• An LDPC-based solution was adopted for the latest DVB satellite communications.

• Flarion Technologies has implemented a programmable LDPC decoder. The hardware supports a wide range of LDPC designs. The iterative message-passing algorithm used is a 5-bit approximation of belief propagation. A version of the decoder is working in the Flarion wireless system. Encoding is done on a DSP.

• Digital Fountain was founded by a group of people working on LDPC codes. This company is using LDPC-like codes to provide efficient and reliable content delivery over the internet.

• Lucent Technologies has implemented an LDPC code for optical networking. The device runs at 10 Gbps throughput with a coding rate of 0.93 and target error performance 10⁻¹⁵.

• The storage industry has also shown serious interest in incorporating LDPC codes into future-generation devices, but no products have been announced to date.

104

• [Plot: FER and BER vs. Eb/N0 (dB) for rate-1/2 LDPC codes with k = 512, 1024, 2048, and 4096, shown against the Shannon limit; secondary axes give the noise standard deviation σ and the corresponding uncoded-BPSK BER.]

Figure 8: Frame error rate (FER) and bit error rate (BER) curves for rate-1/2 LDPC codes over the additive white Gaussian noise channel under belief propagation decoding. The performance improves significantly with increasing block length. The second axis shows the performance as a function of the standard deviation of the noise, assuming that the signal has unit energy. The third axis, finally, is labelled by the corresponding bit error probability of uncoded BPSK.

105

• [Plot: FER and BER vs. Eb/N0 (dB), relative to the Shannon limit, for software and hardware implementations of LDPC code A and a hardware implementation of code B.]

Figure 9: Frame error rate (FER) and bit error rate (BER) curves for rate-1/2 LDPC codes over the AWGN channel, decoded with message-passing decoding and 5-bit quantization. Code B was designed for a lower error floor, illustrating the trade-off.

106

• References

[1] Special issue on Codes on Graphs and Iterative Algorithms, IEEE Trans. on Information Theory, vol. 47, no. 2, February 2001.

[2] Special issues on The Turbo Principle: From Theory to Practice (Parts I, II), IEEE Journal on Selected Areas in Communications, vol. 19, no. 5 & 9, May and September 2001.

[3] K. Andrews et al., "Turbo-decoder implementation for the deep space network," JPL Progress Report 42-128, pp. 1-20, February 2002.

[4] C. Berrou and A. Glavieux, "Near optimum error-correcting coding and decoding: turbo codes," IEEE Trans. on Communications, vol. 44, no. 10, pp. 1261-1271, October 1996.

[5] C. Berrou, "The ten-year-old turbo codes are entering into service," IEEE Communications Magazine, vol. 41, pp. 110-116, August 2003.

[6] R. G. Gallager, "Low-density parity-check codes," IRE Trans. on Information Theory, vol. 8, no. 1, pp. 21-28, January 1962.

[7] R. J. McEliece, "Achieving the Shannon Limit: A Progress Report," plenary talk given at the 38th Allerton Conference, Illinois, USA, October 5, 2000.

[8] R. J. McEliece, "Are turbo-like codes effective on nonstandard channels?," plenary talk given at the 2001 IEEE International Symposium on Information Theory (ISIT 2001), Washington, DC, USA, June 25, 2001.

[9] T. Richardson and R. Urbanke, "The renaissance of Gallager's low-density parity-check codes," IEEE Communications Magazine, vol. 41, pp. 126-131, August 2003.

[10] T. Richardson and R. Urbanke, "An introduction to the analysis of iterative coding systems," The IMA Volumes in Mathematics and its Applications, vol. 123 (Codes, Systems, and Graphical Models), pp. 1-37, 2001.

107

• [11] C. Thomas et al., "Integrated circuits for channel coding in 3G mobile wireless systems," IEEE Communications Magazine, vol. 41, pp. 150-159, August 2003.

[12] S. B. Wicker and S. Kim, Fundamentals of Codes, Graphs, and Iterative Decoding, Kluwer Academic Publishers, 2003.

108
