
# LDPC Codes

Date post: 02-Apr-2015

Alma Bregaj

Project, ECE 534

LDPC Codes

A.Introduction

Low-density parity-check (LDPC) codes are a class of linear block codes.

Their main advantage is that they offer high performance together with linear-time decoding algorithms.

LDPC codes were first introduced by Gallager in his PhD thesis in 1960.


1. Error correction using parity-checks


Single parity check code (SPC)

Example:

The 7-bit ASCII string for the letter S is 1010011, and a parity bit is to be added as the eighth bit. The string for S already has an even number of ones (namely four) and so the value of the parity bit is 0, and the codeword for S is 10100110.
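As a short sketch, this parity computation can be written in a few lines of Python (the function name `even_parity_bit` is illustrative, not from the slides):

```python
def even_parity_bit(bits):
    """Return the parity bit that makes the total number of ones even."""
    return sum(bits) % 2

# 7-bit ASCII for 'S' is 1010011; it already contains four ones,
# so the parity bit is 0 and the codeword is 10100110.
s_bits = [1, 0, 1, 0, 0, 1, 1]
codeword = s_bits + [even_parity_bit(s_bits)]
```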

More formally, for the 7-bit ASCII plus even parity code we define a codeword c to have the following structure:

c = [c1 c2 c3 c4 c5 c6 c7 c8],

where each ci is either 0 or 1, and every codeword satisfies the constraint:

c1 ⊕ c2 ⊕ c3 ⊕ c4 ⊕ c5 ⊕ c6 ⊕ c7 ⊕ c8 = 0,

where ⊕ denotes modulo-2 addition. This equation is called a parity-check equation.


In matrix form a string y = [c1 c2 c3 c4 c5 c6] is a valid codeword for the code with parity-check matrix H if and only if it satisfies the matrix equation:

Hy^T = 0.    (1.4)
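Condition (1.4) is easy to check numerically. A Python sketch follows, with H assembled from the parity-check constraints of the (6,3) example used in the encoding section (an assumption, since the slides' matrix figure is not reproduced in this transcript):

```python
import numpy as np

# Parity-check matrix of the running (6,3) example
# (rows: c1+c2+c4, c2+c3+c5, c1+c2+c3+c6, all mod 2).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 1, 1, 0, 0, 1]])

def is_codeword(y, H):
    """y is a codeword iff H y^T = 0 (mod 2), equation (1.4)."""
    return not np.any(H @ y % 2)

y = np.array([1, 1, 0, 0, 1, 0])   # a valid codeword of this code
```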

2.Encoding


The code constraints from the example can be re-written as:

c4 = c1 ⊕ c2
c5 = c2 ⊕ c3
c6 = c1 ⊕ c2 ⊕ c3

The codeword bits c1, c2, c3 contain the three-bit message, while the codeword bits c4, c5, c6 contain the three parity-check bits.

Written this way the codeword constraints show how to encode the message. For the message [c1 c2 c3] = [1 1 0]:

c4 = 1 ⊕ 1 = 0
c5 = 1 ⊕ 0 = 1
c6 = 1 ⊕ 1 ⊕ 0 = 0


and so the codeword for this message is c = [110010].

Again these constraints can be written in matrix form as follows:

c = [c1 c2 c3] G,  where  G = [ 1 0 0 1 0 1
                                0 1 0 1 1 1
                                0 0 1 0 1 1 ],

and the matrix G is called the generator matrix of the code.

The message bits are conventionally labeled by u = [u1 u2 ... uk], where the vector u holds the k message bits. Thus the codeword c corresponding to the binary message u = [u1 u2 u3] can be found using the matrix equation:

c = uG.
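A minimal Python sketch of c = uG, with G assembled from the example's parity-check constraints (the identity columns carry the message, the remaining columns the parity bits):

```python
import numpy as np

# Generator matrix of the example: c = [u1 u2 u3 | parity],
# with c4 = u1+u2, c5 = u2+u3, c6 = u1+u2+u3 (mod 2).
G = np.array([[1, 0, 0, 1, 0, 1],
              [0, 1, 0, 1, 1, 1],
              [0, 0, 1, 0, 1, 1]])

def encode(u, G):
    """Encode the message u via c = uG (mod 2)."""
    return u @ G % 2

c = encode(np.array([1, 1, 0]), G)   # the message from the text
```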

3. Error detection and correction


Suppose a codeword has been sent down a binary symmetric

channel and one or more of the codeword bits may have been

flipped. The task is to detect any flipped bits and, if possible,

to correct them.

Firstly, we know that every codeword in the code must satisfy

(1.4), and so errors can be detected in any received word

which does not satisfy this equation.


Example:

The codeword c = [101110] from the code in Example 1.3 was sent through a channel and the string y = [101010] received. Substitution into equation (1.4) gives:

Hy^T = [1 0 0]^T.

The result is nonzero and so the string y is not a codeword of this code.


The vector s = Hy^T is called the syndrome of y.

The syndrome indicates which parity-check constraints are not satisfied by y.

The syndrome in this example indicates that the first parity-check equation in H is not satisfied by y. Since this parity-check equation involves the 1st, 2nd and 4th codeword bits, we can conclude that at least one of these three bits has been inverted by the channel.
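A short Python sketch of the syndrome computation for this example (H assembled from the example's parity-check constraints, an assumption about the slides' figure):

```python
import numpy as np

# Parity-check matrix of the (6,3) example
# (rows: c1+c2+c4, c2+c3+c5, c1+c2+c3+c6, all mod 2).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 1, 1, 0, 0, 1]])

def syndrome(y, H):
    """s = H y^T (mod 2); a nonzero entry flags an unsatisfied check."""
    return H @ y % 2

y = np.array([1, 0, 1, 0, 1, 0])   # received string from the example
s = syndrome(y, H)                 # only the first check fails
```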


The Hamming distance between two codewords is defined as the number of bit positions in which they differ.

The minimum distance of a code, d_min, is defined as the smallest Hamming distance between any pair of codewords in the code.

A code with minimum distance d_min can always detect t errors whenever:

t < d_min.
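For a code this small, d_min can be found by brute force. A Python sketch (H is the (6,3) example's parity-check matrix; for a linear code, d_min equals the minimum Hamming weight over nonzero codewords):

```python
from itertools import product

import numpy as np

# Parity-check matrix of the (6,3) example code.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 1, 1, 0, 0, 1]])

def min_distance(H):
    """Enumerate all strings, keep the codewords (H y^T = 0 mod 2),
    and return the smallest nonzero codeword weight."""
    n = H.shape[1]
    weights = [
        sum(bits)
        for bits in product([0, 1], repeat=n)
        if any(bits) and not np.any(H @ np.array(bits) % 2)
    ]
    return min(weights)
```

For this example min_distance(H) is 3, so the code detects any pattern of t < 3 (i.e. up to two) bit errors.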


To go further and correct the bit flipping errors requires that the decoder determine which codeword was most likely to have been sent. Based only on knowing the binary received string, y, the best decoder will choose the codeword closest in Hamming distance to y. When there is more than one codeword at the minimum distance from y the decoder will randomly choose one of them.

This decoder is called the maximum likelihood (ML) decoder, as it will always choose the codeword which is most likely to have produced y.
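A brute-force sketch of this ML decoder in Python; it is practical only for tiny codes, since it enumerates all 2^k codewords (G is the example's generator matrix, assumed from the encoding section):

```python
from itertools import product

import numpy as np

# Generator matrix of the (6,3) example code.
G = np.array([[1, 0, 0, 1, 0, 1],
              [0, 1, 0, 1, 1, 1],
              [0, 0, 1, 0, 1, 1]])

def ml_decode(y, G):
    """ML decoding for the binary symmetric channel: return the
    codeword closest to y in Hamming distance (ties broken by
    enumeration order here, arbitrarily in the text)."""
    best, best_d = None, len(y) + 1
    for u in product([0, 1], repeat=G.shape[0]):
        c = np.array(u) @ G % 2
        d = int(np.sum(c != y))
        if d < best_d:
            best, best_d = c, d
    return best
```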

B.Representations for LDPC codes

Matrix representation

Graphical Representation


1.Matrix Representation


The matrix defined below is a parity-check matrix with dimension m × n for a (10, 5) code. We can now define two numbers describing this matrix: wr for the number of 1's in each row and wc for the number of 1's in each column.

Regular and Irregular LDPC codes

A low-density parity-check code is a linear block code for

which the parity check matrix has a low density of 1's.

A regular LDPC code is a linear block code whose parity-check matrix H contains exactly wc 1's in each column and exactly wr = wc · (n/m) 1's in each row.

An LDPC code is irregular if H is low density, but the number of 1's in each row or column is not constant.

It is easiest to see the sense in which an LDPC code is regular or irregular through its graphical representation.
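A small Python sketch of this regularity test; H_reg is a made-up regular example, and H_irr is the (6,3) code from earlier, which is irregular:

```python
import numpy as np

def is_regular(H):
    """Regular iff every column has the same weight w_c and every row
    the same weight w_r; the weights then satisfy w_r = w_c * (n/m)."""
    wc = H.sum(axis=0)   # ones per column
    wr = H.sum(axis=1)   # ones per row
    return len(set(wc)) == 1 and len(set(wr)) == 1

# Regular toy example: w_c = 1, w_r = 2 = w_c * (4 / 2).
H_reg = np.array([[1, 1, 0, 0],
                  [0, 0, 1, 1]])

# The (6,3) example code: its column weights differ, so irregular.
H_irr = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 1, 1, 0, 0, 1]])
```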

2. Graphical Representation

The Tanner graph of a code is drawn according to the following rule: check node j is connected to variable node i whenever element h_ji in H is 1.
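This rule translates directly into code. A Python sketch that lists the Tanner-graph edges of the earlier (6,3) example matrix:

```python
import numpy as np

def tanner_edges(H):
    """Edge list of the Tanner graph: a (check j, variable i) pair
    is an edge whenever h_ji = 1."""
    return [(j, i) for j, row in enumerate(H) for i, h in enumerate(row) if h]

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 1, 1, 0, 0, 1]])
edges = tanner_edges(H)   # one edge per 1 in H
```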

C.Constructing LDPC codes

Several different algorithms exist to construct suitable LDPC codes.

Gallager himself introduced one. Furthermore, MacKay proposed a method to semi-randomly generate sparse parity-check matrices. In fact, completely randomly chosen codes are good with high probability.

The problem that arises is that the encoding complexity of such codes is usually rather high.


D.Decoding LDPC Codes

Hard-decision decoding

Soft-decision decoding


1.Hard-Decision Decoding


Step 1: All v-nodes send a message to their c-nodes containing the bit they believe to be the correct one for them.

Step 2: Every check node fj calculates a response to every connected variable node. The response message contains the bit that fj believes to be the correct one for this v-node ci, assuming that the other v-nodes connected to fj are correct. In other words: if you look at the example, every c-node is connected to 4 v-nodes. So a c-node looks at the messages received from three v-nodes and calculates the bit that the fourth v-node should have in order to fulfill the parity-check equation.


Step 3: The v-nodes receive the messages from the check nodes and use this additional information to decide whether their originally received bit is correct. A simple way to do this is a majority vote. Coming back to our example, this means that each v-node has three sources of information concerning its bit: the original bit received and two suggestions from the check nodes. Table 3 illustrates this step. Now the v-nodes can send another message with their (hard) decision for the correct value to the check nodes.

Step 4: Go to step 2, until all parity checks are satisfied or a maximum number of iterations is reached.
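Steps 1-4 can be sketched as a majority-vote bit-flipping decoder. The 4×8 matrix below is an assumed stand-in for the slides' example figure (every check node connected to four v-nodes, every v-node to two checks); this simple decoder corrects the single-bit error shown, but is not guaranteed to converge for every error pattern:

```python
import numpy as np

# Assumed 4x8 example parity-check matrix: row weight 4, column weight 2.
H = np.array([[0, 1, 0, 1, 1, 0, 0, 1],
              [1, 1, 1, 0, 0, 1, 0, 0],
              [0, 0, 1, 0, 0, 1, 1, 1],
              [1, 0, 0, 1, 1, 0, 1, 0]])

def bit_flip_decode(H, received, max_iter=20):
    """Hard-decision majority-vote decoding (steps 1-4).

    Each check sends every connected v-node the bit that would satisfy
    the check given the other connected v-nodes (step 2); each v-node
    takes a majority vote over its originally received bit and the
    check suggestions (step 3); repeat until all checks pass (step 4)."""
    y = received.copy()
    m, n = H.shape
    for _ in range(max_iter):
        if not np.any(H @ y % 2):          # all parity checks satisfied
            return y
        new_y = y.copy()
        for i in range(n):
            checks = np.nonzero(H[:, i])[0]
            # bit v-node i should hold to satisfy each check, assuming
            # the other connected v-nodes are correct
            suggestions = [(H[j] @ y - y[i]) % 2 for j in checks]
            ones = received[i] + sum(suggestions)
            voters = 1 + len(suggestions)
            if 2 * ones > voters:
                new_y[i] = 1
            elif 2 * ones < voters:
                new_y[i] = 0
            # on a tie, keep the current value
        y = new_y
    return y
```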


2.Soft-Decision Decoding

The above description of hard-decision decoding was mainly for educational purposes, to give an overview of the idea.

Soft-decision decoding of LDPC codes yields better decoding performance and is therefore the preferred method. The underlying idea is exactly the same as in hard-decision decoding.


E.Encoding LDPC Codes

First, choose certain variable nodes to place the message bits on; in the second step, calculate the missing values of the other nodes.

An obvious solution would be to solve the parity-check equations. This would involve operations with the whole parity-check matrix, and the complexity would again be quadratic in the block length.

In practice, however, more clever methods are used to ensure that encoding can be done in much shorter time.
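A sketch of the obvious (and expensive) approach: place the k message bits on the first k variable nodes and solve for the parity bits by Gaussian elimination over GF(2). It assumes the parity part of H is invertible; H here is the small (6,3) example, not a practical LDPC matrix:

```python
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 1, 1, 0, 0, 1]])

def solve_parity_bits(H, message):
    """Solve H c^T = 0 (mod 2) for the parity bits given the message.

    This is the naive encoder the text describes: it manipulates the
    whole matrix, which is exactly why its cost grows quickly with the
    block length. Assumes the last m columns of H are invertible."""
    m, n = H.shape
    k = n - m
    A = H[:, :k]                       # columns multiplying message bits
    B = H[:, k:]                       # columns multiplying parity bits
    b = A @ message % 2                # B p^T = A u^T (mod 2)
    M = np.concatenate([B, b.reshape(-1, 1)], axis=1)
    for col in range(m):               # Gaussian elimination over GF(2)
        pivot = next(r for r in range(col, m) if M[r, col])
        M[[col, pivot]] = M[[pivot, col]]
        for r in range(m):
            if r != col and M[r, col]:
                M[r] ^= M[col]
    return np.concatenate([message, M[:, -1]])
```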


F.Conclusions

Low-density parity-check codes are being studied for a large variety of applications, much like turbo codes, trellis codes, etc.

They make it possible to implement parallelizable decoders.

The main disadvantages are that the encoders are somewhat more complex and that the code length has to be rather long to yield good results.

Their main advantage is that they provide performance very close to capacity on many different channels, together with linear-time decoding algorithms.


Thank you!

