
A STUDY OF LOW DENSITY PARITY-CHECK CODES USING

SYSTEMATIC REPEAT-ACCUMULATE CODES

_______________

A Thesis

Presented to the

Faculty of

San Diego State University

_______________

In Partial Fulfillment

of the Requirements for the Degree

Master of Science

in

Electrical Engineering

_______________

by

Jose Ruvalcaba

Summer 2015



Copyright © 2015

by

Jose Ruvalcaba

All Rights Reserved


For I know the plans that I have for you”, declares the Lord, “plans to prosper you and not to

harm you, plans to give you hope and a future.

–Jeremiah 29:11


ABSTRACT OF THE THESIS

A Study of Low Density Parity-Check Codes Using Systematic

Repeat-Accumulate Codes

by

Jose Ruvalcaba

Master of Science in Electrical Engineering

San Diego State University, 2015

Low Density Parity-Check (LDPC) codes have been a popular error correction choice in recent years. Their use of soft-decision decoding through a message-passing algorithm and their channel-capacity-approaching performance have made LDPC codes a strong alternative to Turbo codes. However, their disadvantages, such as encoding complexity, discourage designers from implementing these codes.

This thesis will present a type of error correction code that can be considered a subset of LDPC codes. These codes are called Repeat-Accumulate (RA) codes and are named after their encoder structure. They can be seen as LDPC codes with a simple encoding method similar to that of Turbo codes. What makes these codes special is that they combine a simple encoding process with good soft-decision decoding performance. At the same time, RA codes have been shown to work well at short to medium lengths when they are systematic. Therefore, this thesis will argue that LDPC codes can avoid some of their encoding disadvantages by being constructed as LDPC codes with systematic RA structure.

This thesis will also show in detail how RA codes are good LDPC codes by comparing their bit error performance against other LDPC simulation results at short to medium code lengths and with different LDPC parity-check matrix constructions. With an RA parity-check matrix describing our LDPC code, we will see how changing the interleaver structure from a random construction to a structured one can lead to improved performance. Therefore, this thesis will experiment with three different types of interleavers which maintain the encoding simplicity of the encoder while at the same time showing potential improvement in bit error performance compared to what has been previously seen with regular LDPC codes.


TABLE OF CONTENTS

PAGE

ABSTRACT ...............................................................................................................................v

LIST OF TABLES ................................................................................................................... ix

LIST OF FIGURES ...................................................................................................................x

ACKNOWLEDGEMENTS .................................................................................................... xii

CHAPTER

1 INTRODUCTION .........................................................................................................1

1.1 Outline of Thesis ................................................................................................2

2 BACKGROUND ...........................................................................................................4

2.1 Digital Communications ....................................................................................4

2.2 Modulator/Demodulator ....................................................................................5

2.2.1 Channel .....................................................................................................6

2.2.2 Demodulator .............................................................................................7

2.3 Channel Coding .................................................................................................7

2.3.1 Linear Block Codes...................................................................................7

2.3.2 Convolutional Coding .............................................................................11

2.4 Hard-Decision and Soft-Decision Decoding ...................................................13

2.4.1 Turbo Codes ............................................................................................14

2.4.2 Low Density Parity Check Codes ...........................................................15

3 LOW-DENSITY PARITY CHECK CODES ..............................................................17

3.1 LDPC Definition ..............................................................................................17

3.1.1 Irregular Versus Regular .........................................................................18

3.1.2 LDPC Code Rate.....................................................................................19

3.1.3 LDPC Matrix and Graphical Representation ..........................................20

3.2 Message-Passing Iterative Decoding ...............................................................21

3.2.1 Bit-Flipping Decoding ............................................................................22


3.2.1.1 Bit-Flipping Process.......................................................................22

3.2.2 Sum-Product Algorithm ..........................................................................23

3.2.2.1 Sum-Product Algorithm Representation ........................................25

3.2.2.2 Sum-Product Algorithm Process....................................................26

3.3 LDPC Parity-Check Matrix Construction ........................................................26

3.3.1 Gallager Codes ........................................................................................28

3.3.2 Repeat-Accumulate Codes ......................................................................29

3.4 LDPC Encoding ...............................................................................................29

3.4.1 Simple Encoding .....................................................................................30

4 REPEAT-ACCUMULATE CODES ...........................................................................32

4.1 Systematic and Non-Systematic Codes ...........................................................33

4.2 RA Parity-Check Matrix ..................................................................................34

4.3 Encoding RA Codes .........................................................................................35

4.4 Parity-Check Matrix H Construction ...............................................................38

4.5 Encoder and Parity-Check Construction Complexity .....................................41

4.6 Message-Passing Decoding for RA Codes ......................................................41

4.6.1 Graphical Representation for RA Codes .................................................42

4.6.2 Sum-Product Algorithm for RA Codes...................................................43

4.7 Interleavers for RA Codes ...............................................................................44

4.7.1 RA Interleaver Definition .......................................................................44

4.7.2 Interleaver Properties ..............................................................................45

4.7.3 Pseudo-Random Interleavers ..................................................................46

4.7.4 Structured-Type Interleavers ..................................................................46

4.7.5 L-Type Interleavers .................................................................................47

4.7.6 Modified L-Type Interleavers .................................................................50

4.8 Advantages and Disadvantages........................................................................52

5 SIMULATION .............................................................................................................54

5.1 Information Sequence ......................................................................................55

5.2 Channel Encoder ..............................................................................................55

5.2.1 RA Parity-Check Matrix and Encoder ....................................................55

5.2.2 Gallager Parity-Check Matrix .................................................................56

5.3 BPSK Modulator ..............................................................................................58


5.4 AWGN Channel ...............................................................................................59

5.5 Sum-Product Decoder ......................................................................................59

6 RESULTS ....................................................................................................................61

6.1 Simulation Results ...........................................................................................61

6.1.1 Code Length, N = 96 ...............................................................................61

6.1.2 Code Length, N = 204 .............................................................................65

6.1.3 Code Length, N = 408 .............................................................................72

6.1.4 Code Length, N = 816 .............................................................................77

6.2 Summary ..........................................................................................................80

7 CONCLUSION AND FUTURE WORK ....................................................................81

REFERENCES ........................................................................................................................82


LIST OF TABLES

PAGE

Table 3.1. Comparison Between Irregular and Regular LDPC Codes ....................................19

Table 6.1. N = 96 Simulation Points ........................................................................................68

Table 6.2. N = 204 Simulation Points ......................................................................................72

Table 6.3. N = 408 Simulation Points ......................................................................................76

Table 6.4. N = 816 Simulation Points ......................................................................................80


LIST OF FIGURES

PAGE

Figure 2.1. Digital communication system. ...............................................................................4

Figure 2.2. Linear block code with k = 4 and n = 7. ..................................................................8

Figure 2.3. Systematic codeword structure. ...............................................................................9

Figure 2.4. A rate 1/3 convolutional code. ...............................................................................12

Figure 2.5. Viterbi algorithm example. ....................................................................................13

Figure 2.6. Turbo code encoder and decoder. Encoder on the top and decoder on the

bottom. .........................................................................................................................15

Figure 3.1. Tanner graph for a LDPC code. Note the squares represent check nodes

and circles bit nodes. ....................................................................................................21

Figure 4.1. Block diagram structure of an RA code. ...............................................................32

Figure 4.2. Block diagram structure of a systematic RA code. ...............................................33

Figure 4.3. Block diagram of a non-systematic RA code. .......................................................34

Figure 4.4. A systematic RA code Tanner graph. .....................................................................42

Figure 4.5. Block diagram with an RA encoder and a SPA LDPC decoder............................43

Figure 4.6. Equation for L-type interleavers ............................................................................49

Figure 4.7. Block diagram of the encoding circuit for a combined q = 3 repetition

code and modified L-type interleaver. .........................................................................52

Figure 5.1. MATLAB simulation block diagram. ...................................................................55

Figure 5.2. Block diagram of our systematic RA encoder. ......................................................56

Figure 5.3. RA parity check matrix w/ random interleaver (N = 408). ...................................57

Figure 5.4. RA parity check matrix w/ L-type interleaver. ......................................................57

Figure 5.5. RA parity check matrix w/ modified L-type interleaver. ......................................58

Figure 5.6. Gallager parity-check matrix with N = 408. ..........................................................58

Figure 6.1. Simulation results for [1]. ......................................................................................62

Figure 6.2. The original simulation results computed by MacKay..........................................62

Figure 6.3. Gallager parity-check matrix BER vs. SNR plot (N = 96). ...................................63


Figure 6.4. SNR vs. BER plot with random interleaver (N = 96)............................................64

Figure 6.5. SNR vs. BER plot with L-type interleaver (L = 8, N = 96). .................................65

Figure 6.6. SNR vs. BER plot with modified L-type interleaver (L = 8, N = 96). ..................66

Figure 6.7. SNR performance with L-type interleaver (L = 30, N = 96). ................................67

Figure 6.8. SNR performance with modified L-type interleaver (L = 30, N = 96). ................68

Figure 6.9. Gallager parity-check matrix BER vs. SNR plot (N = 204). .................................69

Figure 6.10. SNR vs. BER plot with random interleaver (N = 204). ......................................69

Figure 6.11. SNR vs BER plot with L-type interleaver (L = 8, N = 204). ..............................70

Figure 6.12. SNR vs. BER plot with modified L-type interleaver (L = 8, N = 204). ..............70

Figure 6.13. SNR vs. BER plot with modified L-type interleaver (L = 30, N = 204). ............71

Figure 6.14. SNR vs. BER plot with L-type interleaver (L = 30, N = 204). ...........................71

Figure 6.15. Gallager code SNR vs. BER plot (N = 408). .......................................................72

Figure 6.16. SNR vs. BER plot for RA code with random interleaver (N = 408). ..................73

Figure 6.17. SNR vs. BER plot with modified L-type interleaver (L = 8, N = 408). ..............74

Figure 6.18. SNR vs. BER plot with an L-type interleaver (L = 8, N = 408). .........................75

Figure 6.19. SNR vs. BER plot with modified L-type interleaver (L = 30, N = 408). ............75

Figure 6.20. SNR vs. BER plot with L-type interleaver (L = 30, N = 408). ...........................76

Figure 6.21. SNR vs. BER plot with L-type interleaver (L = 8, N = 816). .............................78

Figure 6.22. SNR vs. BER plot with modified L-type interleaver (L = 8, N = 816). ..............78

Figure 6.23. SNR vs. BER plot with L-type interleaver (L = 30, N = 816). ...........................79

Figure 6.24. SNR vs. BER plot with modified L-type interleaver (L = 30, N = 816). ...........79


ACKNOWLEDGEMENTS

To begin I want to thank God for this great blessing he has given me. For without

God, no part of this thesis would be possible. Second I want to give thanks to Professor

Harris, my advisor, who guided me throughout this whole project. To Professor Nagaraj and

Professor Sarah Johnson from the University of Newcastle in Australia whose help on the

topic allowed for the completion of this project. To Professor O’Sullivan who agreed to be

part of my panel. I want to give special thanks to my mom, dad and sister who had strong

faith in me and who went out of their way for me, in their own way, to help me get this

research completed. To my girlfriend, Leslie Flores, who was my greatest cheerleader and

has supported me through this process, even when at times it seemed like I was never going

to finish. To all my old Broadcom colleagues especially, Mrs. Ana Ramos, who always kept

me accountable about my thesis, even though work sometimes interfered with its completion.

However, I am most indebted to Mr. Juan Garcia, who along with God has been there with me every step of the way, who dealt with my personal lows and highs throughout this project, and whose wise words and heavy prayers always kept me going, even when it felt there was no end to this. To all of these people and friends who cheered me on, I thank you so much for keeping me going. Thank you all for your support, and know that I feel so blessed that God put so many people in my path to get through this journey in my academic career. Again, thank you, and may God bless you all!


CHAPTER 1

INTRODUCTION

Today, information transmission is digital and is found in many of the gadgets we use. From cell phones to satellite TV, consumers want information transmission to be as fast and error-free as possible; hence, any discussion of digital systems must mention error-correcting methods. In 1948, Claude Shannon theorized that communication over a noisy channel can be improved by the use of a channel code. Shannon stated that given a discrete channel with capacity C and a source with entropy per second H:

If 𝐻 ≤ 𝐶 there exists a coding system such that the output of the source can be

transmitted over the channel with an arbitrarily small frequency of errors (or an

arbitrarily small equivocation). If 𝐻 > 𝐶 it is possible to encode the source so that

the equivocation is less than H. [1]

In other words, if we have a channel code rate R that is less than or equal to the channel capacity, then it is possible to find an error-correction code able to achieve any given probability of error. However, Shannon did not give many details on how to construct such codes. Ever since, researchers have pushed the limits in the search for error-correction codes that give improved performance without being limited by the available communication techniques.

In 1960, Robert Gallager, a PhD student at MIT, developed Low Density Parity-Check codes, or LDPC codes, in his doctoral dissertation. He proposed using sparse parity-check matrices as linear block codes along with soft-decision iterative decoding for error correction. Due to the lack of advanced computer processing at the time, further research and evaluation on the topic stalled and lay idle. As the years went on, some work on LDPC codes was carried out, most notably Professor Michael Tanner's 1981 work on an effective graphical representation of LDPC codes with so-called bipartite graphs, or Tanner graphs, which provided a simpler way of understanding how iterative decoding worked. It was not until the introduction of Turbo Codes by Berrou, Glavieux, and Thitimajshima in 1993, which re-ignited interest in error correction codes that could come close to Shannon capacity, that LDPC codes were rediscovered. Independently, David MacKay, a professor at the University of Cambridge, rediscovered LDPC codes and proved that these codes can achieve SNR levels close to the Shannon limit, to an extent similar to those of Turbo Codes.

As turbo codes and LDPC codes have risen in popularity in recent years, researchers began to look for ways to improve these two methods or to create new sets of codes that fit into the family of either "turbo-like" or "LDPC-like" codes. Repeat-Accumulate codes, or RA codes for short, are a set of codes that can be considered part of both families due to their ability to be represented both as serially concatenated Turbo codes and as LDPC codes, depending on how they are viewed. This is because RA codes have the interesting implementation efficiency of being encoded using a Turbo code representation while at the same time being decoded using a message-passing algorithm, as is done for LDPC codes.

In this thesis, we will look into RA codes and the construction of practical repeat-accumulate parity-check matrices. Our goal is to show through simulations that RA codes can achieve similar, if not better, bit error rate (BER) performance than LDPC codes. This thesis will also focus on the construction of the interleaver building block within the RA encoder, which is essential to the encoder's complexity and performance. Overall, the takeaway from this project is that an RA parity-check matrix and encoder used as an error correction code can achieve good performance at low encoding and decoding complexity, leaving us to ponder whether RA codes can potentially end the debate between LDPC and Turbo codes and eventually replace them.

1.1 OUTLINE OF THESIS

The outline of this thesis is as follows:

Chapter 2 will recap the basic building blocks of a communication system. It will briefly discuss the modulator, demodulator, and channel, as well as the channel encoder and decoder typically used.

Chapter 3 will give an overview of Low Density Parity-Check (LDPC) codes. This chapter will look into the definition of LDPC codes, the various encoding methods, the soft- and hard-decision decoding algorithms used, and some of the advantages and disadvantages of using LDPC codes.

Chapter 4 will discuss Repeat-Accumulate codes. The chapter will present their encoding methods as well as the construction of their parity-check matrix. It will also describe how they use the soft-decision algorithm, belief propagation, to achieve good decoding performance.

Chapter 5 gives the layout used for our MATLAB simulations comparing regular Gallager LDPC codes and systematic Repeat-Accumulate codes.

Chapter 6 continues from the previous chapter and will show the results of the simulations. It will show various BER performance curves compared against an uncoded BPSK performance curve at different code lengths and using different interleavers.

Finally, Chapter 7 will conclude this thesis with a summary of the study performed and an interpretation of the results obtained.


CHAPTER 2

BACKGROUND

2.1 DIGITAL COMMUNICATIONS

A digital communication system can be described by a set of building blocks, such as in Figure 2.1, through which transmitted information passes. The system starts with information coming out of the source. This information source can be either a person or a machine, for example a voice signal or a digital computer. The information then goes through the source encoder, which transforms the source output into a sequence of binary digits (bits) called the information sequence. It is in this block that an A/D converter is found to convert data from analog to digital. The source encoder is ideally designed so that:

1. The number of bits per unit time required to represent the source output is minimized.

2. The source output can be unambiguously reconstructed from the information sequence.

Figure 2.1. Digital communication system.

Next, the information sequence is encoded with redundant bits to create a binary sequence called a codeword. A channel encoder is needed to protect our information sequence from noisy environments that can distort our signal, and the channel decoder is used to recover the code sequence after it passes through the channel. Sections 2.3 and 2.4 will discuss channel encoders and decoders in more detail.


Following encoding, the encoded codeword is modulated. The modulator is responsible for transforming each output symbol from the channel encoder into a waveform suitable for transmission over a channel. The channel introduces noise to our signal, which is then picked up by the receiver and demodulated. The demodulator processes each received waveform and produces either a discrete or continuous output.

2.2 MODULATOR/DEMODULATOR

Modulation involves converting the original information signal (the baseband signal) into another signal with a frequency convenient for transmission. To achieve this we may vary the amplitude, frequency, or phase of the signal to "modulate" it. PSK (Phase Shift Keying), FSK (Frequency Shift Keying), and ASK (Amplitude Shift Keying) are some of the methods used to modulate signals.

Our baseband signals are transmitted as pulse trains generated by the voltage of electrical signals. Because our data stream consists of 0's and 1's, we map the bits to a "bipolar" pulse train, called Non-Return to Zero (NRZ), in which 0 and 1 correspond to voltages -1 and +1, respectively. To modulate this signal we select, for each encoded output symbol, a waveform of duration T seconds that is suitable for transmission. For a wideband channel, we will have:

s1(t) = √(2Es/T) cos(2πf0t),  0 ≤ t ≤ T

and

s2(t) = √(2Es/T) cos(2πf0t + π) = −√(2Es/T) cos(2πf0t),  0 ≤ t ≤ T

In this case, s1(t) is mapped to 1 and s2(t) is mapped to 0. T represents the symbol duration in seconds and Es is the symbol energy, or bit energy in this case. This form of modulation is called binary phase-shift keying, or BPSK. BPSK is defined as having our binary bits, 0 and 1, mapped to two signals whose phases differ by π. Each signal is transmitted every T seconds and carries only 1 bit at a time over the channel.¹ When we increase the modulation to an M-ary (M = 2^k) scheme, i.e., if we transmit k bits at a time instead of 1 bit, the symbol energy relates to the bit energy as

Es = kEb

where k is the number of bits transmitted per symbol.

Finally, we can also define the BPSK bit error probability as

p = Q(√(2Es/N0))

where Q(x) ≜ (1/√(2π)) ∫ₓ^∞ e^(−y²/2) dy is the complementary error function, or the Q-function, of Gaussian statistics.
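As a quick numerical check, the bit error probability above is easy to evaluate in software. The sketch below is illustrative only (Python is used here rather than the MATLAB of the thesis simulations) and relies on the standard identity Q(x) = ½ erfc(x/√2):

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x), computed as 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bpsk_bit_error_prob(es_over_n0):
    """BPSK bit error probability p = Q(sqrt(2 * Es/N0)).

    es_over_n0 is the linear (not dB) ratio Es/N0.
    """
    return q_func(math.sqrt(2.0 * es_over_n0))

p = bpsk_bit_error_prob(1.0)  # Es/N0 = 1, i.e. 0 dB
```

At Es/N0 = 1 (0 dB) this evaluates to p ≈ 0.0786, a commonly quoted reference point for uncoded BPSK.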

2.2.1 Channel

If the transmitted signal is 𝑠(𝑡), then the received signal becomes

𝑟(𝑡) = 𝑠(𝑡) + 𝑛(𝑡)

where 𝑛(𝑡) is a Gaussian random process with one-sided power spectral density (PSD), 𝑁0.

The channel adds a Gaussian random process to our signal, which distorts it. By definition, a channel is a medium in which the signal is distorted by noise. The physical channel described by n(t) is called an Additive White Gaussian Noise (AWGN) channel: its output is a Gaussian random variable with zero mean, μ = 0, and variance σ² = N0/2. Therefore, our AWGN channel will have a probability density function of

p(x) = (1/√(2πσ²)) e^(−(x−μ)²/(2σ²))

Aside from AWGN, we could also consider other types of noise, such as thermal noise, which comes from components, or multipath noise, called fading, which consists of delayed versions of our original signal added together with the transmitted signal.

¹ Note that because only 1 bit is transmitted at a time, the symbol energy Es is the same as the bit energy Eb.
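The AWGN model above is straightforward to sanity-check in simulation: draw noise samples with variance N0/2, add them to the transmitted samples, and confirm the empirical statistics. A minimal sketch (Python for illustration; the thesis's own simulations are in MATLAB, and the function name `awgn_channel` is our own):

```python
import random
import statistics

def awgn_channel(signal, n0, rng):
    """Pass samples through an AWGN channel: add zero-mean Gaussian
    noise with variance N0/2 to every sample."""
    sigma = (n0 / 2.0) ** 0.5
    return [s + rng.gauss(0.0, sigma) for s in signal]

rng = random.Random(1)
tx = [1.0] * 100_000                   # constant test signal s(t)
rx = awgn_channel(tx, n0=0.5, rng=rng)
noise = [r - s for r, s in zip(rx, tx)]

mean = statistics.fmean(noise)         # should be close to mu = 0
var = statistics.pvariance(noise)      # should be close to N0/2 = 0.25
```

With 100,000 samples the empirical mean and variance land very close to the theoretical μ = 0 and σ² = N0/2.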

2.2.2 Demodulator

The demodulator must produce an output corresponding to the received signal. An

optimum demodulator always includes a matched filter, or correlation detector, followed by a

switch that samples the output once every T seconds. For BPSK modulation with coherent

detection the sampled output is

y = ∫₀ᵀ r(t) √(2Es/T) cos(2πf0t) dt

The sequence of unquantized demodulator outputs must be quantized so that it can be passed directly to the channel decoder for processing [2]. Decoding then occurs by matching each output to a decision region, which maps the output back to the original point.
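Putting the modulator, channel, and demodulator together, a hedged end-to-end sketch of BPSK over AWGN with hard decisions might look as follows (illustrative Python, not the thesis's MATLAB code; the matched-filter integral is abstracted into a single sample y per symbol, and the decision regions are y ≥ 0 and y < 0):

```python
import random

def bpsk_modulate(bits):
    """NRZ mapping: bit 0 -> -1, bit 1 -> +1."""
    return [2 * b - 1 for b in bits]

def bpsk_demodulate(samples):
    """Hard decision on each matched-filter sample: 1 if y >= 0, else 0."""
    return [1 if y >= 0 else 0 for y in samples]

rng = random.Random(42)
bits = [rng.randrange(2) for _ in range(1000)]
tx = bpsk_modulate(bits)
rx = [s + rng.gauss(0.0, 0.1) for s in tx]  # mild AWGN (sigma = 0.1)
decoded = bpsk_demodulate(rx)
```

At this high SNR the hard decisions recover every bit; lowering the SNR produces the bit errors that the channel code of Section 2.3 must then correct.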

2.3 CHANNEL CODING

Channel coding is a way of introducing redundant bits, or parity bits, into a

transmitted bit sequence in order to increase the transmission reliability in noisy

environments and improve the system’s error performance. Simple channel coding schemes

allow the received data signal to detect errors while more advanced channel coding schemes

provide the ability to correct channel errors as well [1]. Forward Error Correction (FEC)

codes enable the detection and correction of channel errors and are used in practice due to their ability to reduce the bit error rate (BER) at a fixed power level, or to reduce the power level at a fixed error rate, at the cost of increased bandwidth [2]. To describe channel

coding, we look into two structurally different classes of coding methods: block codes and

convolutional codes.

2.3.1 Linear Block Codes

A linear block code starts by dividing the information sequence into message blocks

of k information bits, or symbols, each. A message block is represented by the binary k-tuple

𝒖 = (𝑢0, 𝑢1, … , 𝑢𝑘−1) called a message. There are a total of 2𝑘 different possible messages.

The encoder transforms each message u into an n-tuple c= (𝑐0, 𝑐1, … , 𝑐𝑛−1) of discrete


symbols, called a codeword. Therefore, corresponding to the 2^k different possible messages, there are 2^k different possible codewords at the encoder output. This set of 2^k codewords of length n is called an (n, k) block code. It is considered 'linear' if and only if the modulo-2 sum of any two codewords is also a codeword [3]. When looking at the codeword and message bits, it is of interest to examine the ratio R = k/n, which we call the code rate. The code rate is defined as the ratio of the number of information bits entering the encoder to the number of encoded bits leaving the channel encoder. In other words, it indicates how much redundancy we add per message bit.

For a binary code, each codeword c is also binary, and since every message u must be assigned a distinct codeword, we require k ≤ n. When k < n, the n − k redundant bits added to each message to form a codeword [2] provide protection against channel impairments. Figure 2.2 shows an example of a linear block code with k = 4 and n = 7.

Figure 2.2. Linear block code with k = 4 and n = 7.


Each message u generates a codeword c through a k × n matrix G called a generator matrix. In other words, we encode the message vector u by multiplying it with the generator matrix G, which yields the codeword c:

c = uG

Example 1. To develop the codewords from Figure 2.2, we multiply each message by a 4 × 7 generator matrix. This is shown as:

[c1 c2 c3 c4 c5 c6 c7] = [u1 u2 u3 u4] × G,    G = | 1 1 0 1 0 0 0 |
                                                    | 0 1 1 0 1 0 0 |
                                                    | 1 1 1 0 0 1 0 |
                                                    | 1 0 1 0 0 0 1 |
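The encoding operation above can be sketched in a few lines; the matrix G is the one from Example 1, and the message value and function names are illustrative.

```python
import numpy as np

# Generator matrix G from Example 1 (4 x 7); arithmetic is modulo 2.
G = np.array([[1, 1, 0, 1, 0, 0, 0],
              [0, 1, 1, 0, 1, 0, 0],
              [1, 1, 1, 0, 0, 1, 0],
              [1, 0, 1, 0, 0, 0, 1]])

def encode(u, G):
    """Encode a k-bit message u into the n-bit codeword c = uG (mod 2)."""
    return np.mod(np.array(u) @ G, 2)

c = encode([1, 0, 1, 1], G)
print(c)  # the last four bits reproduce the message (systematic form)
```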

It follows that an (n, k) linear code is completely specified by the k rows of a generator matrix G. A desirable property of linear block codes is the systematic structure codewords can take, as shown in Figure 2.3. A systematic codeword can be divided into two parts: the message bits and the parity bits. The message part consists of the k message bits sent into the encoder, while the parity part consists of n − k parity-check bits, which are linear sums of the information bits. A linear code produces systematic codewords when its generator matrix contains a k × k identity submatrix.

Figure 2.3. Systematic codeword structure.

For each k × n generator matrix G, there exists an (n − k) × n matrix H such that every vector in the row space of G is orthogonal to the rows of H, and every vector orthogonal to the rows of H is in the row space of G. This means that an n-tuple c generated by the generator matrix G is a codeword if and only if it satisfies

cH^T = 0


It follows that

GH^T = 0

Matrix H is called a parity-check matrix. Each row of H corresponds to a parity-check

equation and each column of H corresponds to a bit in the codeword.

Example 2. Given the generator matrix from Example 1 which was a (7, 4) linear

block code, our corresponding 3 × 7 parity-check matrix is

H = | 1 0 0 1 0 1 1 |
    | 0 1 0 1 1 1 0 |
    | 0 0 1 0 1 1 1 |
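A quick way to confirm that the two matrices form a valid pair is to verify the orthogonality condition GH^T = 0 (mod 2) directly; this minimal sketch uses the G of Example 1 and the H of Example 2.

```python
import numpy as np

# Every row of G must be orthogonal (mod 2) to every row of H,
# i.e. G H^T = 0, so every generated codeword passes all parity checks.
G = np.array([[1, 1, 0, 1, 0, 0, 0],
              [0, 1, 1, 0, 1, 0, 0],
              [1, 1, 1, 0, 0, 1, 0],
              [1, 0, 1, 0, 0, 0, 1]])
H = np.array([[1, 0, 0, 1, 0, 1, 1],
              [0, 1, 0, 1, 1, 1, 0],
              [0, 0, 1, 0, 1, 1, 1]])

product = np.mod(G @ H.T, 2)
print(product)  # the 4 x 3 all-zero matrix
```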

Note that the parity-check equations are used to detect whether a received word is a valid codeword: for each parity-check equation, the modulo-2 sum of the codeword bits involved must equal zero. If any parity-check equation is not satisfied, we can conclude that c is not a valid codeword.

Example 3. The parity-check equations for Example 2 are

p0 = c1 + c4 + c6 + c7 = 0
p1 = c2 + c4 + c5 + c6 = 0
p2 = c3 + c5 + c6 + c7 = 0

Error detection and correction for linear block codes become quite simple with the use of a parity-check matrix H. Because every codeword must satisfy cH^T = 0, we can detect errors in a received word by noting that it fails this equation. After transmitting a codeword c through a noisy channel, we receive the word

r = c + e

where e is the error vector, or error pattern, which marks the bit positions corrupted by the channel. To determine whether the received word contains errors, we compute the vector

s = rH^T


which we call the syndrome of r. The syndrome indicates which parity-check constraints are not satisfied by r. If s = 0, then r is a codeword; otherwise s ≠ 0 and r is not a codeword. In general, a linear block code fails to detect a set of bit errors only when the error pattern e is itself a nonzero codeword, since then r = c + e is another valid codeword.
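The syndrome test can be sketched directly; this example uses the H of Example 2 with an illustrative single-bit error.

```python
import numpy as np

# Syndrome computation s = rH^T (mod 2) for the (7, 4) code of
# Example 2. A zero syndrome accepts r; a single-bit error yields a
# nonzero syndrome equal to the corresponding column of H.
H = np.array([[1, 0, 0, 1, 0, 1, 1],
              [0, 1, 0, 1, 1, 1, 0],
              [0, 0, 1, 0, 1, 1, 1]])

def syndrome(r, H):
    return np.mod(np.array(r) @ H.T, 2)

c = np.array([1, 0, 0, 1, 0, 1, 1])   # a valid codeword
e = np.array([0, 0, 0, 0, 1, 0, 0])   # error in bit position 5
r = np.mod(c + e, 2)

print(syndrome(c, H))  # [0 0 0] -> no error detected
print(syndrome(r, H))  # [0 1 1] -> error detected (column 5 of H)
```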

The ability of a code to detect errors is measured through the minimum Hamming distance, or simply minimum distance, d_min. The Hamming distance between two codewords is the number of bit positions in which they differ, and the minimum distance is defined as the smallest Hamming distance between any pair of codewords in the code [3]. A code's d_min tells us that t errors can be detected as long as

t < d_min

Similarly, the number of error bits that can be corrected is

t = ⌊(d_min − 1) / 2⌋
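For a linear code, d_min equals the smallest weight among the nonzero codewords, so for a short code it can be found by brute force; the sketch below uses the (7, 4) generator of Example 1.

```python
from itertools import product

import numpy as np

# Enumerate all 2^k - 1 nonzero messages, encode each, and take the
# minimum Hamming weight of the resulting codewords.
G = np.array([[1, 1, 0, 1, 0, 0, 0],
              [0, 1, 1, 0, 1, 0, 0],
              [1, 1, 1, 0, 0, 1, 0],
              [1, 0, 1, 0, 0, 0, 1]])

weights = [int(np.mod(np.array(u) @ G, 2).sum())
           for u in product([0, 1], repeat=4) if any(u)]
d_min = min(weights)
t = (d_min - 1) // 2
print(d_min, t)  # d_min = 3, so t = 1 bit error can be corrected
```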

2.3.2 Convolutional Coding

A binary convolutional code is defined by three parameters: n, k and m. As with block-code encoders, k refers to the number of bits in each incoming message block of u (the number of input bits entering the encoder), and n refers to the number of bits in each block of the codeword c (the number of output bits leaving the encoder). The symbols u and c here refer to sequences of blocks, not a single block as in linear block codes. Each output block depends not only on the current k-bit message block but also on the m previous message blocks; the parameter m refers to the memory registers available in the encoder. Therefore, to encode a message sequence, a convolutional code takes into consideration the current and previous message bits to create its codeword sequence. Hence this encoder contains memory, and its implementation requires sequential logic.

Similar to linear block codes, the code rate is defined as R = k/n; it tells us how many message bits enter the encoder for every n coded bits that leave it. To encode the input sequence, we connect the memory registers to modulo-2 adders in a particular pattern, which we call the generator polynomial (g) for that particular output bit. Each generator polynomial selects the bits that are combined to create one output sequence. Figure 2.4 shows these polynomials.


Figure 2.4. A rate 1/3 convolutional code.

A generator polynomial can be thought of like a generator matrix in the sense that it "generates" the codeword c. Using Figure 2.4 as an example, the generator polynomials are g1 = (1,1,1), g2 = (0,1,1) and g3 = (1,0,1); the combination of these taps composes the output sequence we obtain. Each memory register can be described using the delay operator 'D', which gives a polynomial representation of the generators. For Figure 2.4, the generator polynomials would be g1 = 1 + D + D^2, g2 = D + D^2 and g3 = 1 + D^2.
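A minimal sketch of this rate-1/3 encoder, assuming the tap vectors above and shift registers initialized to zero (the figure itself is not reproduced here, so the wiring is taken from the stated polynomials):

```python
# Rate-1/3 convolutional encoder with generators g1 = 1 + D + D^2,
# g2 = D + D^2, g3 = 1 + D^2, given as tap vectors over (input, D, D^2).
GENERATORS = [(1, 1, 1), (0, 1, 1), (1, 0, 1)]

def conv_encode(bits, generators=GENERATORS):
    state = [0, 0]                        # the two memory registers
    out = []
    for b in bits:
        window = [b] + state              # current input, D, D^2
        for g in generators:
            out.append(sum(t * x for t, x in zip(g, window)) % 2)
        state = [b, state[0]]             # shift the register
    return out

print(conv_encode([1, 0, 1]))             # 3 output bits per input bit
```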

Aside from the memory m, convolutional codes are described by their constraint length L. The constraint length represents the number of bits in the encoder memory that affect the generation of the n output bits [3]; by definition, L = k(m − 1). When describing the encoding process of convolutional codes, it is best to think in terms of state diagrams: the output bits depend on current and past inputs to the encoder, exactly as in a state machine. Therefore, we can determine which output bits will be created by noting the different states the encoder can take. The number of states of a convolutional encoder is

Number of states = 2^L


This means that the constraint length determines the number of encoder states. Because our goal is to protect the message as much as possible against noise, more output combinations give better protection. To increase the number of states we must increase the constraint length L, and ultimately the memory m, to obtain better error correction [3].

One popular method of decoding convolutional codes is the Viterbi algorithm, which provides maximum-likelihood path decoding by tracing a trellis structure. Depending on the received bit pattern, the decoder selects the most likely path using either a minimum Hamming distance or a minimum Euclidean distance metric. Like many decoders, the Viterbi algorithm can use either hard-decision or soft-decision decoding; although different metrics are taken, the decision-making procedure of the algorithm is the same. Figure 2.5 shows a typical decoder trellis diagram.

Figure 2.5. Viterbi algorithm example.

2.4 HARD-DECISION AND SOFT-DECISION DECODING

Recall that when BPSK modulation is used on an AWGN channel with optimum coherent detection and binary output quantization, the bit error probability (BER) for an uncoded BPSK signal becomes

p = Q(√(2E_s/N_0))


If binary coding is used, the modulator has only binary inputs (M = 2). If the decoder input is also quantized to two levels, each received value becomes either a 0 or a 1; this is hard-decision decoding. Due to its implementation simplicity, linear block codes, and sometimes convolutional codes, tend to use hard-decision decoding. This corresponds to a quantization level of Q = 2, i.e. for every binary input we receive a binary output. When Q > 2, or the output is left unquantized, the demodulator is said to support soft-decision decoding: instead of a 0 or 1 at the output, we get a multi-level or continuous-valued output. The input is still binary, but the output values are left unquantized.

The choice between hard-decision and soft-decision decoding depends on the type of code implemented and the requirements to be met. If implementation simplicity is required, hard-decision decoding is used; if more accurate decisions are needed to improve error performance, soft-decision decoding is the target [3]. Although soft-decision decoding is more difficult to implement, its significant performance improvement over hard-decision decoding is reason enough to use it. Soft-decision decoding is heavily used today, especially with iteratively decoded codes such as Turbo codes and Low Density Parity Check codes.

2.4.1 Turbo Codes

Turbo codes were created and introduced by Berrou, Glavieux, and Thitimajshima in 1993. The codes are created by applying two or more component codes to different interleaved versions of the same information. Their decoding method involves not only hard-decision decoding but also soft-decision decoding within an iterative decoding algorithm. To best exploit the information learned from each decoder, the decoding algorithm must exchange soft decisions rather than hard decisions. For a system with two component codes, the concept behind turbo decoding is to pass soft decisions from the output of one decoder to the input of the other decoder, and to iterate this process several times so as to produce better decisions.

Turbo codes are also known as parallel concatenated convolutional codes (PCCC)

because in their implementation two convolutional encoders are used in parallel. Since the


encoders are parallel, they act on the same information at the same time, rather than one encoder processing the information and then passing it on to the second encoder. Turbo decoders are based on a soft-in-soft-out (SISO) technique. At the decoder side, the systematic input and the two encoded data sequences from the two encoders are fed as inputs. The decoder first decodes the various inputs in order; the result is then fed back through the feedback path, and the decoder iteratively decodes the inputs given to it. As a result of this feedback mechanism, after a few iterations we can make a good estimate of the data bits that were transmitted [4]. Figure 2.6 shows an example of a Turbo code encoder and decoder.

Figure 2.6. Turbo code encoder and decoder. Encoder on the top and decoder on the bottom.

2.4.2 Low Density Parity Check Codes

Low Density Parity Check codes were discovered in the early 1960s and were re-developed once research intensified into codes that could approach Shannon's capacity limit. Around the time Turbo codes were gaining popularity, research into Low Density Parity Check codes, or LDPC for short, began to increase as well. LDPC codes are characterized by their sparse parity-check matrices and the use of soft-decision decoding. In


the next chapter we will discuss LDPC codes in detail and show why they are a good alternative for soft-decision decoding.


CHAPTER 3

LOW-DENSITY PARITY CHECK CODES

As we discussed earlier, Low Density Parity Check codes, or LDPC codes, are capacity-approaching codes that, with the use of soft-decision decoding, can achieve desirable performance levels. To begin understanding them, we first look into the definition of a regular LDPC code.

3.1 LDPC DEFINITION

A parity-check matrix H defines a (j, k)-regular LDPC code if the following properties are met (here k denotes the row weight, the number of ones per row, and j the column weight):

- Each row consists of k ones.
- Each column consists of j ones.
- The number of ones in common between any two columns of H is no greater than one.
- Both j and k are small compared with the code length N and with the number of rows M of the parity-check matrix H.

The first two points tell us that, for the code to be regular, every row weight must equal k and every column weight must equal j; each row and each column must have a constant non-zero weight. In other words, to be a regular LDPC code, each row must have exactly k ones and each column exactly j ones. If this condition is not met, then our code becomes an irregular

LDPC code. The last point refers to the requirement that k and j should be small compared to the number of columns and rows, respectively, of our parity-check matrix. When this property is met, it ensures that the parity-check matrix is 'sparse'. This is the main characteristic of LDPC codes and hence where their name comes from: a sparse matrix contains only a small number of non-zero entries, making H 'low density'.

LDPC codes are simply linear block codes with sparse parity-check matrices, i.e. parity-check matrices containing a very small number of non-zero entries (equal to '1' in binary form). This small number of non-zero entries is what makes the parity-check matrix sparse, and it is necessary for efficient soft-decision decoding whose complexity grows only linearly with code length. In his paper [1], Gallager showed that an advantage of a sparse H is that it can guarantee a minimum distance d_min that grows linearly with block length. The sparser the code, the better its error detection and correction become; this is why a classical block code works well with an iterative decoding algorithm only if it can be represented by a sparse parity-check matrix.

3.1.1 Irregular Versus Regular

As defined previously, Irregular LDPC codes are defined when the number of non-

zero column weights, j, or row weights, k, are not constant throughout the parity-check

matrix. At times, it is convenient to use Irregular compared to Regular codes mainly because

of performance conditions. It has been shown that long random Irregular LDPC codes can

perform close to Shannon’s limit on very low noise channels. Although it is stated as

random, random codes would provide complex hardware designs which would not be

optimal for practical use. Therefore a pseudo-random irregular LDPC code construction is

followed. In Table 3.1 we show a simple comparison between Irregular and Regular LDPC

codes. Although this thesis will focus solely on regular LDPC codes, it is necessary to briefly

mention irregular codes which a popular in code construction today.


Table 3.1. Comparison Between Irregular and Regular LDPC Codes

Short code lengths
  Irregular: may develop a small minimum distance d_min at these lengths, so small values of N are not advised.
  Regular: regular and graph-based parity-check matrices H do well with small N and outperform codes with randomly structured H, because girth and minimum-distance properties that are difficult to achieve with random codes can be guaranteed.

Long code lengths
  Irregular: superior to regular LDPC codes; as N grows, H can be made sparser, giving a higher d_min and therefore better performance.
  Regular: simulations have shown that regular H do not perform as well at long code lengths.

Performance
  Irregular: achieve performance close to channel capacity, but these codes can exhibit poor word error rates and high BER error floors, making them undesirable in some applications.
  Regular: performance under iterative decoding is best when the column weight is restricted to j ≥ 3, which allows the code to reach a good minimum distance d_min.

Hardware
  Irregular: longer lengths and pseudo-random structures require more computing power, making implementation complex.
  Regular: advantageous for hardware implementation because the iterative decoder can be simplified.

3.1.2 LDPC Code Rate

For regular LDPC codes, we can state the following relationship:

kM = jN

where M is the number of parity-check equations (rows) in the matrix H. If we assume that the parity-check matrix has full rank, the code rate is

R = 1 − j/k

This is because, when the parity-check matrix is full rank,

rank_2(H) = M


In some cases the parity-check matrix will not have full rank; the code rate is then only approximated by the previous expression, and we call R = 1 − j/k the design rate of the code [4]. The actual rate is slightly higher, since fewer than M of the parity-check equations are independent.
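The distinction between design rate and actual rate can be sketched on a small regular matrix (here the 4 × 6, j = 2, k = 3 matrix used as an example in Section 3.1.3); the GF(2) rank routine is a generic illustration, not code from the thesis.

```python
import numpy as np

# Design rate: 1 - j/k.  Actual rate: 1 - rank2(H)/N.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

def rank_gf2(M):
    """Gaussian elimination over GF(2); returns the number of pivots."""
    A = M.copy() % 2
    rank = 0
    for col in range(A.shape[1]):
        pivot = next((r for r in range(rank, A.shape[0]) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]       # move pivot row up
        for r in range(A.shape[0]):
            if r != rank and A[r, col]:
                A[r] ^= A[rank]                   # eliminate the column
        rank += 1
    return rank

j, k, N = 2, 3, H.shape[1]
design_rate = 1 - j / k                 # 1/3
actual_rate = 1 - rank_gf2(H) / N
print(design_rate, actual_rate)         # H is rank-deficient: 1/3 < 1/2
```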

3.1.3 LDPC Matrix and Graphical Representation

Often we can represent an LDPC code in either a matrix representation or a graphical representation. In matrix form, we simply express it as a parity-check matrix.

Example 1. Consider a code with length-6 codewords

c = [c1 c2 c3 c4 c5 c6]

and satisfies the following parity-check equations:

c1 + c2 + c4 = 0
c2 + c3 + c5 = 0
c1 + c2 + c3 + c6 = 0

Then we could represent a regular LDPC parity-check matrix for this code as:

H = | 1 1 0 1 0 0 |
    | 0 1 1 0 1 0 |
    | 1 0 0 0 1 1 |
    | 0 0 1 1 0 1 |

This matrix has j = 2, k = 3 and rank_2(H) = 3. Note that each row of H is satisfied by every codeword, since each row is a modulo-2 combination of the parity-check equations above.
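The regularity claims for this matrix can be verified mechanically; the overlap check below is the "no two columns share more than one '1'" condition, which also rules out 4-cycles in the Tanner graph.

```python
import numpy as np

# Verify that H is (j, k)-regular with j = 2, k = 3, and that no two
# columns have more than one '1' in common.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

row_weights = H.sum(axis=1)          # should all equal k = 3
col_weights = H.sum(axis=0)          # should all equal j = 2
max_overlap = max(int(H[:, a] @ H[:, b])
                  for a in range(H.shape[1])
                  for b in range(a + 1, H.shape[1]))
print(row_weights, col_weights, max_overlap)
```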

Even though the matrix representation is straightforward, LDPC parity-check matrices are often represented graphically by a Tanner graph. A Tanner graph is a bipartite graph divided into two sets of nodes: bit nodes and check nodes. Bit nodes represent the N codeword bits and check nodes represent the M parity-check equations of H. An edge is a line connecting a bit node to a check node, corresponding to a bit appearing in the respective parity-check equation; the number of edges in the Tanner graph therefore equals the number of 1's in the parity-check matrix. A cycle in a Tanner graph occurs when a


sequence of connected nodes starts and ends in the same node in the graph and contains other

nodes no more than once. The length of a cycle is the number of edges it contains, and the

girth of a graph is the size of its smallest cycle. Figure 3.1 shows the Tanner graph for the

parity-check matrix shown in Example 1 and highlights a length-6 cycle.

Figure 3.1. Tanner graph for an LDPC code; the squares represent check nodes and the circles represent bit nodes.

3.2 MESSAGE-PASSING ITERATIVE DECODING

Typically, error detection in linear block codes is performed by comparing the received word against the code's constraints: as explained previously, the syndrome s = rH^T tells us which parity-check constraints are not satisfied by r. However, decoding based directly on the syndrome is only practical when the number of message bits k is relatively small; otherwise error detection and correction become tedious and complex. For LDPC codes, a class of algorithms was developed that approaches the performance of maximum-likelihood methods at greatly reduced complexity. These


types of decoding algorithms are called message-passing algorithms, and their operation can be explained by passing messages along the edges of a Tanner graph [4]. In message-passing algorithms, each node in the Tanner graph works in isolation, having access only to the information contained in the messages on the edges connected to it. These algorithms are also known as iterative decoding algorithms because messages are passed back and forth between bit nodes and check nodes iteratively. The process continues until the decoder converges to a result or a maximum number of iterations is reached. To understand how this works, we will look at two types of message-passing decoding: bit-flipping decoding and belief-propagation decoding. Bit-flipping uses binary messages passed back and forth between nodes, making a hard decision after each iteration. The belief-propagation algorithm, also known as the sum-product algorithm, represents the node messages as log-likelihood ratios computed with sum and product operations; just as in bit-flipping, these log-likelihood ratios are then passed between nodes.

3.2.1 Bit-Flipping Decoding

Bit-flipping is the name given to hard-decision message-passing algorithms for LDPC codes. A binary (hard) decision is made by the detector and passed to the decoder; binary messages are then passed along the Tanner graph edges, with decisions made at both the bit and check nodes. The following steps show how bit-flipping decoding works.

3.2.1.1 BIT-FLIPPING PROCESS

Assuming we receive a hard-decision binary channel output r from the channel (each element a 0 or a 1), we take the following steps:

Step 1. The first step is to check whether the syndrome vector S is zero, i.e. compute

S = rH^T

If S = 0, then c = r and we can stop decoding. If S ≠ 0, then we consider all the non-zero components of S, which correspond to the parity-check equations not satisfied by the elements of r.


Step 2. Once step one is complete, we check which bits appear in the unsatisfied parity-check equations and update r by flipping those components of r that fail to satisfy the check equations.

Step 3. After the update, we recalculate the syndrome. The whole process is repeated for a fixed number of iterations or until the syndrome is 0.
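The steps above can be sketched as follows; this is one common variant that flips the bit(s) involved in the largest number of unsatisfied checks each round, applied to the length-6 H of Section 3.1.3 with an illustrative single-bit error.

```python
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

def bit_flip_decode(r, H, max_iter=10):
    r = np.array(r) % 2
    for _ in range(max_iter):
        s = np.mod(H @ r, 2)                       # Step 1: syndrome
        if not s.any():
            break                                  # r is a valid codeword
        fails = s @ H                              # failed checks per bit
        r = np.mod(r + (fails == fails.max()), 2)  # Step 2: flip worst bit(s)
    return r                                       # Step 3: loop repeats

c = np.array([1, 0, 1, 1, 1, 0])                   # a valid codeword of H
r = c.copy()
r[1] ^= 1                                          # single-bit channel error
print(bit_flip_decode(r, H))                       # recovers c
```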

3.2.2 Sum-Product Algorithm

As we saw in bit-flipping, binary messages are sent to each of the nodes and hard decisions are made accordingly to converge to the decoded codeword; each message is a binary value of either 0 or 1. The sum-product algorithm works similarly, but it is a soft-decision message-passing algorithm: instead of binary-valued messages, the messages are now probabilities.

The sum-product algorithm accepts as input the probability of each received bit coming from the channel. These input (channel) bit probabilities are called a priori probabilities because they were known in advance, before the LDPC decoder operated. Once the a priori probabilities are received, extrinsic information is passed between nodes as probabilities rather than hard decisions. Finally, the decoder outputs bit probabilities called a posteriori probabilities. The aim of the sum-product algorithm is to accomplish two things:

1. To compute the a posteriori probability (APP) for each codeword bit,

P_i = P[c_i = 1 | s = 0]

where s = 0 refers to the event that all parity-check constraints are satisfied.

2. To select the decoded value for each bit as the value with the maximum APP (MAP).

The extrinsic messages E_j,i and M_j,i in a Tanner graph are defined as follows. (Recall that extrinsic information refers to probabilities passed from the check nodes to the bit nodes that exclude the contribution of the corresponding bit node itself, and that the sum-product algorithm iteratively computes an approximation of the MAP value for each code bit.)


E_j,i : the probability message sent from check node j to bit node i.

M_j,i : the probability message sent from bit node i to check node j.

The extrinsic message E_j,i gives the probability that c_i = 1 causes parity-check equation j to be satisfied. Note that E_j,i is not defined if bit i is not included in check j, since no extrinsic information is passed between nodes i and j in that case. This probability is expressed as:

P_j,i^ext = 1/2 − (1/2) ∏_{i'∈B_j, i'≠i} (1 − 2 P_j,i'^int)

where P_j,i'^int is the current estimate available to check node j of the probability that c_i' = 1. Note that the probability that the parity-check equation is satisfied given c_i = 0 is (1 − P_j,i^ext).

Although we can express the rest of the algorithm in terms of probabilities, it is conventionally represented in terms of log-likelihood ratios (LLRs), which represent the metrics for a binary variable by a single value. Define the LLR as

L(x) = log( p(x = 0) / p(x = 1) )

where p(x = 1) = 1 − p(x = 0). The sign of L(x) gives a hard decision on x, and its magnitude shows how confident we are in that decision:

L(x) > 0 : the larger L(x), the more confident we are that x = 0
L(x) < 0 : the more negative L(x), the more confident we are that x = 1
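A numeric illustration of this definition (the probability values are arbitrary examples):

```python
import math

# L(x) = log(p(x=0) / p(x=1)): sign carries the hard decision,
# magnitude carries the confidence in it.
def llr(p0):
    return math.log(p0 / (1.0 - p0))

for p0 in (0.9, 0.6, 0.1):
    L = llr(p0)
    hard = 0 if L > 0 else 1
    print(f"p(x=0) = {p0}: L(x) = {L:+.2f}, hard decision = {hard}")
```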

The benefit of representing probabilities as log-likelihood ratios lies in the probability computations: with LLRs, the decoder hardware can use adders instead of the multipliers required when working with raw probabilities, making the implementation less complex. This is one of the advantages that makes LDPC codes so attractive for use.


3.2.2.1 SUM-PRODUCT ALGORITHM REPRESENTATION

In terms of LLRs, the sum-product algorithm is expressed as follows:

1. We begin by defining the extrinsic message E_j,i to be the LLR of the probability that bit i causes parity check j to be satisfied:

E_j,i = LLR(P_j,i^ext) = log( (1 − P_j,i^ext) / P_j,i^ext )

2. Substituting the earlier expression for P_j,i^ext into E_j,i gives

E_j,i = log( (1/2 + (1/2) ∏_{i'∈B_j, i'≠i} (1 − 2 P_i'^int)) / (1/2 − (1/2) ∏_{i'∈B_j, i'≠i} (1 − 2 P_i'^int)) )

3. To simplify the previous expression, we use the relationship

tanh( (1/2) log((1 − p)/p) ) = 1 − 2p

which makes E_j,i become

E_j,i = log( (1/2 + (1/2) ∏_{i'∈B_j, i'≠i} tanh(M_j,i'/2)) / (1/2 − (1/2) ∏_{i'∈B_j, i'≠i} tanh(M_j,i'/2)) )

where

M_j,i' = LLR(P_j,i'^int) = log( (1 − P_j,i'^int) / P_j,i'^int )

As an alternative, we could also use the relationship

2 tanh⁻¹(p) = log( (1 + p)/(1 − p) )

which lets the extrinsic message be represented as:

E_j,i = 2 tanh⁻¹( ∏_{i'∈B_j, i'≠i} tanh(M_j,i'/2) )

4. Each bit node has access to the input a priori LLR, r_i, and to the LLRs from every connected check node. The total LLR of the i-th bit is the sum of these LLRs:

L_i = LLR(P_i^int) = r_i + Σ_{j∈A_i} E_j,i

5. Aside from the extrinsic messages from check node to bit node, it is convenient to define the messages sent from the bit nodes to the check nodes, which we call M_j,i. Note that M_j,i is not the full LLR value for the bit: it excludes the extrinsic information E_j,i received from check node j itself, so that no check node is sent back its own information.
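The two forms of the check-to-bit message given in step 3 are algebraically identical, which is easy to confirm numerically; the incoming bit-to-check LLRs below are illustrative values.

```python
import math

# Log-ratio form vs. the 2*atanh(prod tanh(M/2)) form of E_j,i.
def E_log_form(Ms):
    prod = math.prod(math.tanh(m / 2) for m in Ms)
    return math.log((0.5 + 0.5 * prod) / (0.5 - 0.5 * prod))

def E_tanh_form(Ms):
    return 2 * math.atanh(math.prod(math.tanh(m / 2) for m in Ms))

Ms = [1.2, -0.7, 2.5]     # LLRs from the other bits on check j
print(E_log_form(Ms), E_tanh_form(Ms))  # identical up to rounding
```

Note that an odd number of negative incoming LLRs (bits believed to be 1) makes the product negative, so the outgoing message is negative, exactly as parity requires.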

3.2.2.2 SUM-PRODUCT ALGORITHM PROCESS

Let us summarize how the message-passing algorithm works.

Step 1. The decoder receives the channel output values, which are initialized as the a priori LLRs. These values are initially assigned to the bit nodes as

M_j,i = r_i

Step 2. Each bit-node value is sent to its corresponding check nodes, according to the locations of the non-zero elements in the parity-check matrix. (For example, if there is a '1' in code bit, or matrix column, 3 and parity-check equation, or row, 2, then the value at bit node 3 is sent to check node 2.)

Step 3. At the check node, the extrinsic message E_j,i is computed for each message received, using its LLR equation. Note that if we compute the extrinsic message E_2,3, for example, the extrinsic probability from the 2nd check node to the 3rd bit node does not use the incoming message from bit node v_3 itself.

Step 4. After all the appropriate extrinsic probabilities are calculated, they are sent back to the corresponding bit nodes. At this point we combine the intrinsic message (the LLR value from the channel) with the extrinsic messages and calculate the total LLR for each bit i.

Step 5. Finally, a hard decision is made on each received bit from the sign of its LLR. This gives an estimated codeword z, which is a valid codeword if its syndrome satisfies s = 0.

Step 6. If s = 0, then z is a valid codeword and decoding stops. Otherwise, we use the equation

M_j,i = Σ_{j'∈A_i, j'≠j} E_j',i + r_i

to set the bit-node messages and repeat the process until we converge to a valid codeword or reach the maximum number of decoding iterations.
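Steps 1-6 can be condensed into a compact LLR-domain decoder; this is a sketch, not code from the thesis, using the length-6 H of Section 3.1.3 and illustrative channel LLRs (positive favours bit 0, negative favours bit 1) in which one bit arrives with the wrong sign.

```python
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

def sum_product_decode(r_llr, H, max_iter=20):
    n_checks, n_bits = H.shape
    M = H * r_llr                          # Steps 1-2: bit-to-check LLRs
    for _ in range(max_iter):
        # Step 3: E_j,i = 2 atanh( prod_{i' != i} tanh(M_j,i' / 2) )
        E = np.zeros_like(M, dtype=float)
        for j in range(n_checks):
            bits = np.flatnonzero(H[j])
            t = np.tanh(M[j, bits] / 2)
            for a, i in enumerate(bits):
                p = np.prod(np.delete(t, a))
                E[j, i] = 2 * np.arctanh(np.clip(p, -0.999999, 0.999999))
        # Steps 4-5: total LLRs and hard decision
        L = r_llr + E.sum(axis=0)
        z = (L < 0).astype(int)
        if not np.mod(H @ z, 2).any():     # Step 6: syndrome check
            return z
        M = H * (L - E)                    # exclude each check's own E_j,i
    return z

r_llr = np.array([-2.0, 2.5, -4.0, 1.0, -2.0, 0.5])  # bit c4 has wrong sign
print(sum_product_decode(r_llr, H))
```

With these values the extrinsic messages from the two checks on c4 outweigh its weak channel LLR, and the decoder converges to a valid codeword.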

3.3 LDPC PARITY-CHECK MATRIX CONSTRUCTION

When we consider defining or modifying an LDPC code, we are really considering changing its parity-check matrix, because the construction of sparse parity-check matrices is what differentiates LDPC codes from classical linear block codes. The construction of the parity-check matrix is therefore critical to LDPC code design. When constructing a parity-check matrix, one needs to keep in mind the properties LDPC codes should meet: for example, the code length N, the avoidance of 4- or 6-cycles in the Tanner graph, or a desired minimum distance. These properties should also match the designer's criteria, such as near-capacity code performance, efficient encoding and decoding, or low error floors.

Practical LDPC codes require long code construction. At large code lengths, a randomly constructed parity-check matrix almost always produces a better code than a structured one. Although performance close to capacity can be achieved this way, implementing a truly random code is too complex for practical applications, so H matrices are instead constructed pseudo-randomly: the codes are built randomly, but certain undesirable configurations, such as 4-cycles, are either avoided during construction or removed afterwards.

The removal of cycles is especially important when we use a message-passing algorithm as a soft-decision decoder. When cycles are present, the probabilities between nodes become more correlated, and more so when the cycle length is small, i.e. 4 or 6. This correlation tends to have a negative impact on decoding performance: the correlated probabilities may prevent the decoder from converging to the original codeword. Hence, if we recall the LDPC code definition, property 3 is stated so that cycles can be avoided. Although this is our goal, it is almost impossible to avoid all cycles in LDPC codes, nor is it always good to do so: completely eliminating small cycles can also reduce the minimum distance of the code, so a combination of code construction and flexible decoding methods is needed [4]. Therefore, as a rule of thumb, it is best to avoid small cycles, especially 4-cycles, and a good way to start is by considering different types of LDPC code constructions. The next two sections describe two methods of constructing LDPC codes.


3.3.1 Gallager Codes

The original LDPC codes presented by Gallager were regular and defined by a

banded structure in 𝐻 [5]. For a given choice of j and k, Gallager developed a construction of

a class of linear codes specified by their parity-check matrices that are represented as

$$H = \begin{bmatrix} H_1 \\ H_2 \\ \vdots \\ H_j \end{bmatrix}$$

where each submatrix $H_d$, $d = 1, \ldots, j$, is of size $\mu \times \mu k$; here k is the row weight of each submatrix and $\mu$ is an integer greater than 1. When $\mu > 1$, H has a very small density and becomes a sparse matrix. Each row of a submatrix has k 1's and each column of a submatrix contains a single 1, so each submatrix has a total of $\mu k$ 1's. The total number of ones in H is therefore $kj\mu$, the total number of entries in H is $\mu^2 jk$, and the density of H is $kj\mu / \mu^2 jk = 1/\mu$. The overall parity-check matrix H is of size $\mu j \times \mu k$.

In this representation, the rows of Gallager's parity-check matrix are divided into j sets with $M/j = \mu$ rows in each set. The first set of rows contains k consecutive ones ordered from left to right across the columns. Every other set of rows is a randomly chosen column permutation of this first set. Consequently, every column of H has a '1' entry exactly once in every one of the j sets [5].

Example 2. A length-12 (3, 4)-regular Gallager parity-check matrix is represented as

$$H = \begin{bmatrix}
1&1&1&1&0&0&0&0&0&0&0&0\\
0&0&0&0&1&1&1&1&0&0&0&0\\
0&0&0&0&0&0&0&0&1&1&1&1\\
1&0&1&0&0&1&0&0&0&1&0&0\\
0&1&0&0&0&0&1&1&0&0&0&1\\
0&0&0&1&1&0&0&0&1&0&1&0\\
1&0&0&1&0&0&1&0&0&1&0&0\\
0&1&0&0&0&1&0&1&0&0&1&0\\
0&0&1&0&1&0&0&0&1&0&0&1
\end{bmatrix}$$

This example shows a few things. First, no two rows of a submatrix of H have any 1-component in common, and no two columns of a submatrix have more than one 1 in common. Second, the matrix shows that Gallager's construction is of 'regular' form, since each row and column has a constant number of ones. Third, and most important, one


could notice that the submatrices after $H_1$ are simply column permutations of the first submatrix. Therefore, to construct a Gallager code, one simply must define $H_1$, which then defines the overall parity-check matrix. However, as trivial as it may sound, Gallager did not provide a method for choosing the column permutations of $H_1$ that form the other submatrices such that the overall matrix H gives an LDPC code with good minimum distance and the required structural properties. Therefore, computer searches are needed to find good LDPC codes, especially at long code lengths and when low-complexity encoder hardware is desired.
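Gallager's recipe — a banded first submatrix stacked with random column permutations of itself — can be sketched as follows. This is an illustrative construction only (function name and seed are ours); it performs no cycle avoidance or computer search:

```python
import numpy as np

def gallager_ldpc(mu, j, k, seed=None):
    """Sketch of Gallager's construction: a (j, k)-regular parity-check
    matrix of size (mu*j) x (mu*k) built from j stacked mu x (mu*k)
    submatrices. Column permutations are chosen at random; Gallager gave
    no rule for picking good ones, so real designs add computer search."""
    rng = np.random.default_rng(seed)
    # First submatrix: row i has k consecutive ones in columns i*k .. i*k+k-1.
    H1 = np.zeros((mu, mu * k), dtype=int)
    for i in range(mu):
        H1[i, i * k:(i + 1) * k] = 1
    # Remaining j-1 submatrices: random column permutations of H1.
    blocks = [H1] + [H1[:, rng.permutation(mu * k)] for _ in range(j - 1)]
    return np.vstack(blocks)

# Same parameters as Example 2: mu = 3, j = 3, k = 4 -> a 9 x 12 matrix.
H = gallager_ldpc(mu=3, j=3, k=4, seed=0)
```

Whatever permutations the generator draws, every row has weight k and every column has weight j, since each submatrix contributes exactly one 1 per column.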

3.3.2 Repeat-Accumulate Codes

Another type of LDPC construction, which is the main focus of this thesis, is the repeat-accumulate (RA) code. RA codes are constructions in which the parity-check matrix is split in a systematic form: one part of the matrix is built in a step (staircase) pattern with each column having weight 2. This part of H comprises the last M columns of the matrix [6]. The benefit of these codes is that they are systematic and simple to encode.

Example 3. A length-12 rate-1/4 repeat-accumulate code has

$$H = \begin{bmatrix}
1&0&0&1&0&0&0&0&0&0&0&0\\
1&0&0&1&1&0&0&0&0&0&0&0\\
0&1&0&0&1&1&0&0&0&0&0&0\\
0&0&1&0&0&1&1&0&0&0&0&0\\
0&0&1&0&0&0&1&1&0&0&0&0\\
0&1&0&0&0&0&0&1&1&0&0&0\\
1&0&0&0&0&0&0&0&1&1&0&0\\
0&1&0&0&0&0&0&0&0&1&1&0\\
0&0&1&0&0&0&0&0&0&0&1&1
\end{bmatrix}$$

This construction is that of a systematic RA code where the first three columns correspond to the message bits. The fourth column of H is considered to be the first parity bit, which can be used to encode our message. The next chapter goes into detail about RA codes.

3.4 LDPC ENCODING

Typically in linear codes we find that the encoding process is simpler than the decoding process. However, with LDPC codes this changes because of the sparse parity-check matrix, which leads to tedious and complex ways of encoding LDPC codes. The next sections give a brief example of how researchers have gone about encoding LDPC codes.

3.4.1 Simple Encoding

To begin encoding linear block codes we have to go back to the following equation

𝒄 = 𝒖𝐺

in which G is the generator matrix of the code. For a binary code with K message bits and length-N codewords, the generator matrix is a $K \times N$ matrix. If the code is systematic, the generator matrix includes a $K \times K$ identity matrix, $I_K$, as its first K columns. Each generator matrix G has a corresponding parity-check matrix H whose rows are orthogonal to the rows of G. Knowing this, we can relate a generator matrix to its parity-check matrix as

$$G H^T = \mathbf{0} \pmod 2$$

where $\mathbf{0}$ is a $K \times (N-K)$ all-zero matrix.

In general, a generator matrix for a code with parity-check matrix H can be found by applying Gauss-Jordan elimination to H to obtain the form

$$H = [A \quad I_{N-K}]$$

where $I_{N-K}$ is the $(N-K) \times (N-K)$ identity matrix and A is an $(N-K) \times K$ binary matrix.

Based on this form we can define our generator matrix to be

$$G = [I_K \quad A^T]$$

Therefore, by using the Gauss-Jordan elimination method, we can encode and decode a code simply from its G and H matrices. However, this method has a few drawbacks, which has led to a search for alternatives and makes LDPC encoding a complex task. First, finding G this way cannot guarantee that the generator matrix will be sparse. What makes these codes attractive is their ability to converge to accurate codewords under soft-decision iterative decoding


methods. However, while the iterative decoder only needs H to be sparse, a G obtained by Gauss-Jordan elimination is generally dense, so the matrix multiplication $\mathbf{c} = \mathbf{u}G$ requires on the order of $N^2$ operations, where N is the number of bits in a codeword. This leads to the second drawback: implementation complexity. With $N^2$ operations the encoder becomes very complex, especially for LDPC codes, which work best at large N, ranging from hundreds to thousands of bits. Therefore this method is not optimal. For arbitrary parity-check matrices it is a good approach to avoid constructing G altogether and instead encode by back-substitution with H, or to encode LDPC codewords using RA codes. In the next chapter we go into detail on how to encode using systematic RA codes.
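Once H is in the systematic form $[A \; I_{N-K}]$, reading off $G = [I_K \; A^T]$ and encoding is mechanical. A sketch assuming NumPy; the (7,4) Hamming parity-check matrix below is only a small illustration, not a code from this thesis:

```python
import numpy as np

def generator_from_systematic_H(H):
    """Given H = [A | I_{N-K}] already in systematic form (as produced by
    Gauss-Jordan elimination), return G = [I_K | A^T] over GF(2)."""
    m, n = H.shape
    K = n - m
    A = H[:, :K]
    return np.hstack([np.eye(K, dtype=int), A.T])

# A (7,4) Hamming parity-check matrix in systematic form (toy example):
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
G = generator_from_systematic_H(H)

# Encode a message. Since this G is dense, u @ G costs on the order of
# N^2 operations, which is the drawback discussed above.
u = np.array([1, 0, 1, 1])
c = u.dot(G) % 2
# Every codeword satisfies H c^T = 0 (mod 2), because G H^T = A + A = 0.
```

The last comment is the whole point of the construction: with $H = [A \; I]$ and $G = [I \; A^T]$, the product $GH^T = A + A$ vanishes modulo 2, so every row-space combination of G passes the parity checks.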


CHAPTER 4

REPEAT-ACCUMULATE CODES

Repeat-Accumulate codes, or RA codes, are serially concatenated codes in which the outer code is a rate-1/q repetition code and the inner code is a rate-1 convolutional code with transfer function 1/(1+D). This 1/(1+D) convolutional code simply outputs the modulo-2 sum of the current input bit and the previous output bit; in other words, it accumulates the sum of all inputs seen so far. Due to this process, the block is called an accumulator. Just as in Turbo codes, between the two constituent codes there is an interleaver block

which allows our RA code to produce a high-weight output thereby avoiding low-weight

codewords. Figure 4.1 shows the basic structure of an RA code. When talking about an RA

code, we are actually referring to the encoder design of the code, as Figure 4.1 shows. From

the figure it is observed that an RA code has a similar characteristic to that of Turbo codes

which is why many refer to RA codes as “Turbo-like” codes. With RA codes the only design concern is the construction of the encoder block, since RA codes have the advantage of working

extremely well with either a Turbo or LDPC decoder. However, to begin understanding RA

codes, it will be beneficial to look into the systematic and non-systematic aspects of the

codes as well as their parity-check matrix construction.

Figure 4.1. Block diagram structure of an RA code.


4.1 SYSTEMATIC AND NON-SYSTEMATIC CODES

RA codes can transmit both the message bits and parity bits in its codeword creating a

systematic RA code. Typically, when dealing with systematic RA codes, our encoder block

diagram includes an extra block we call a combiner. This block is placed in between the

interleaver and the accumulator as shown in Figure 4.2. A rate-a combiner simply modulo-2

sums each set of a bits coming into it. When adding the combiner to the mix, we typically group the combiner and the accumulator together as the inner code, while the repetition code is still considered the outer code. Our interleaver in this case is not affected and still sits between the inner and outer codes. The purpose of having a combiner is that it allows us to

represent and decode our RA codes as LDPC codes. In other words, by having a systematic

codeword, we will be able to use a message passing algorithm as a decoding option.

Figure 4.2. Block diagram structure of a systematic RA code.

When we deal with non-systematic RA codes, the code is typically decoded as a Turbo code. However, the implementation of such a code requires the removal of the combiner block. It is not that a non-systematic RA code cannot be encoded with a combiner; rather, decoding would then not be possible, because given only the parity bits from the channel, there is no way the decoder can determine the values of the a bits that are summed by the combiner to produce each parity bit [4]. This is why we mostly focus on the systematic approach in this thesis. Figure 4.3 shows the block diagram for a non-systematic RA code.


Figure 4.3. Block diagram of a non-systematic RA code.

4.2 RA PARITY-CHECK MATRIX

In this thesis we will focus on viewing RA codes in terms of LDPC codes.⁷ If viewed

as LDPC codes, we can see that RA codes are LDPC codes with an upper triangular form

already built into the parity-check matrix during the code design [6]. To begin, the

construction of an RA parity-check matrix is completely linked to the design of an RA

encoder; that is to say, each building block of an RA encoder plays a role in the
construction of our parity-check matrix.

An RA parity-check matrix H can be seen as an $m \times n$ matrix divided into two parts

$$H = [H_1 \quad H_2]$$

in which $H_1$ is a $Kq/a \times K$ matrix⁸ specified by the interleaver, with column weight q and row weight a,⁹ and $H_2$ is a $Kq/a \times Kq/a$ matrix of the form

$$H_2 = \begin{bmatrix}
1&0&0& &0&0&0\\
1&1&0&\cdots&0&0&0\\
0&1&1& &0&0&0\\
\vdots& & &\ddots& & &\vdots\\
0&0&0& &1&0&0\\
0&0&0&\cdots&1&1&0\\
0&0&0& &0&1&1
\end{bmatrix}$$

⁷ Regardless of the point of view we take, i.e. Turbo-like or LDPC-like, the construction of these codes is the same.

⁸ Recall K is the number of message bits entering our code.

⁹ Note that compared to how we defined a parity-check matrix for LDPC codes, RA codes are defined with different column and row weight notation. Still, q and a refer to what we called j and k, respectively, in chapter 2.


$H_2$ is characterized by a diagonal staircase of non-zeros, or 1's, with column weight 2. When this is the case, we refer to the parity-check matrix as a weight-2 RA code. We call H a regular (q, a) RA parity-check matrix if all the rows of $H_1$ have the same weight a and all the columns of $H_1$ have the same weight q. Note that in terms of LDPC codes, an RA parity-check matrix does not match the regular LDPC code definition. This is due to matrix $H_2$ having columns of weight 2 and one column of weight 1. Therefore, a regular RA code is defined solely by $H_1$ and cannot strictly be called a regular LDPC code due to its H construction.

Example 1. A (3, 2)-regular RA parity-check matrix for a length-10 rate-2/5 code looks like:

$$H = \begin{bmatrix}
1&0&1&0&1&0&0&0&0&0\\
0&1&0&1&1&1&0&0&0&0\\
1&1&0&0&0&1&1&0&0&0\\
0&0&1&1&0&0&1&1&0&0\\
1&0&1&0&0&0&0&1&1&0\\
0&1&0&1&0&0&0&0&1&1
\end{bmatrix}$$

4.3 ENCODING RA CODES

The encoding process of RA codes is straightforward when it uses the building blocks described earlier. To refresh, an RA code comprises a rate-1/q repetition code, followed by an interleaver, and ends with a rate-1 1/(1+D) convolutional code which we call an accumulator. Once we pass our message input through these blocks, the parity bits are calculated and joined together with the message bits. Our focus in this thesis is on systematic RA codes; therefore, we add a combiner between the interleaver and the accumulator.¹⁰

¹⁰ Note that from this point on it is assumed that our RA codes are systematic and our encoder includes a combiner.


We begin encoding by sending our K input message bits $\mathbf{u} = [u_1\, u_2 \cdots u_K]$ into the repetition code. The qK bits at the output of the repetition code are q copies of $\mathbf{u}$ in the form

$$\mathbf{b} = [b_1\, b_2 \cdots b_{qK}] = [u_1 u_1 \cdots u_1\;\; u_2 u_2 \cdots u_2\; \cdots\; u_K u_K \cdots u_K]$$

In other words, each message bit $u_1, u_2, \ldots, u_K$ is repeated q times, each repetition giving one entry $b_i$, for a total of qK bits in the repetition output $\mathbf{b}$. Next, the interleaver pattern $\Pi = [\pi_1\, \pi_2 \cdots \pi_{qK}]$ defines a permutation of the bits in $\mathbf{b}$, producing the output vector

$$\mathbf{d} = [d_1\, d_2 \cdots d_{qK}] = [b_{\pi_1}\, b_{\pi_2} \cdots b_{\pi_{qK}}]$$

The output vector $\mathbf{d}$ is still of size qK, just like the repetition code output $\mathbf{b}$.

When the bits arrive at the combiner, they are summed, modulo 2, in groups of a. In other words, each set of a bits in $\mathbf{d}$ is combined to give the output vector $\mathbf{r}$ with entries

$$r_i = d_{(i-1)a+1} + d_{(i-1)a+2} + \cdots + d_{ia} \pmod 2, \qquad i = 1, 2, \ldots, Kq/a$$

Finally, the $Kq/a$ parity bits $\mathbf{p}$ output by the accumulator are defined as

$$p_i = p_{i-1} + r_i$$

where each parity bit depends on the previous parity bit value. Considering that our focus is on systematic RA codes, which transmit both message bits and parity bits, the encoded codeword is

$$\mathbf{c} = [u_1\, u_2 \cdots u_K\;\; p_1\, p_2 \cdots p_M], \qquad M = Kq/a$$

and thus we have a codeword with code length N and code rate R given by

$$N = K(1 + q/a)$$

$$R = \frac{a}{a+q}$$

Note that for non-systematic RA codes, only the parity bits are sent to the receiver, giving a code with length $N = Kq/a$ and rate $R = a/q$. Typically when we deal with regular LDPC codes, we regard our codewords as non-systematic, as in the latter case, because of the notion that non-systematic codewords provide improved performance. In the case of RA codes, we will see that systematic codewords are preferred, for the same reason that regular LDPC codes use non-systematic ones.

Example 2. Suppose we have a message

𝒖 = [1 0 0 1]

that will be encoded using a length-10 RA code consisting of a $q = 3$ repetition code, an $a = 2$ combiner, and the interleaver $\Pi = [1, 7, 4, 10, 2, 5, 8, 11, 3, 9, 6, 12]$. To begin, we will

first repeat each bit in our message q times to make output vector

𝒃 = [1 1 1 0 0 0 0 0 0 1 1 1]

Next, we permute the repeater output using the interleaver pattern. The way it works is that each number in the interleaver gives the bit position in b that will take the new permuted position. For example, position 2 of our interleaver vector holds the number '7'. This means that bit position '7' of the repeater output b, which in this case corresponds to a binary '0', is permuted to the second position of the interleaver output. This process is repeated for each bit in b. The interleaver output, d, will now be:

𝒅 = [1 0 0 1 1 0 0 1 1 0 0 1]

After this, we send d to the combiner, which combines each consecutive set of $a = 2$ bits to make an output vector r of $Kq/a$ bits. Recalling the previous definition of r, our combiner output becomes

𝒓 = [1 1 1 1 1 1]

Finally, we can calculate our parity bits from the accumulator by adding each previously calculated parity to the current combiner bit. For example, the computation of the parity bits proceeds as


$$p_1 = p_0 + r_1 = 0 + 1 = 1$$

$$p_2 = p_1 + r_2 = 1 + 1 = 0$$

$$\vdots$$

$$p_6 = p_5 + r_6 = 1 + 1 = 0$$

Note that because there is no defined value for 𝑝0 we set it to 0.

After computing these values, our accumulator output, p, becomes

𝒑 = [1 0 1 0 1 0]

so, our encoded codeword, c, becomes

𝒄 = [1 0 0 1 1 0 1 0 1 0]

Once again note that the first K bits in c are the message bits and the last $M = Kq/a$ bits are the parity bits we computed, giving us our systematic codeword.
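The encoding chain of Example 2 (repeat, interleave, combine, accumulate) can be reproduced with a short NumPy sketch; the function name is ours:

```python
import numpy as np

def ra_encode(u, q, a, perm):
    """Systematic RA encoding: repeat each bit q times, permute with the
    interleaver pattern, combine groups of a bits (mod 2), accumulate."""
    b = np.repeat(u, q)                    # repetition code output
    d = b[np.array(perm) - 1]              # interleave (pattern is 1-based)
    r = d.reshape(-1, a).sum(axis=1) % 2   # rate-a combiner
    p = np.cumsum(r) % 2                   # accumulator: p_i = p_{i-1} + r_i
    return np.concatenate([u, p])          # systematic codeword [u | p]

# Example 2 from the text: K = 4, q = 3, a = 2.
u = np.array([1, 0, 0, 1])
perm = [1, 7, 4, 10, 2, 5, 8, 11, 3, 9, 6, 12]
c = ra_encode(u, q=3, a=2, perm=perm)
# c reproduces the codeword [1 0 0 1 1 0 1 0 1 0] computed above.
```

The running modulo-2 sum `np.cumsum(r) % 2` is exactly the accumulator recursion, with $p_0 = 0$ implicit.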

4.4 PARITY-CHECK MATRIX H CONSTRUCTION

Similar to LDPC codes, an RA code is described by its parity-check matrix H. In

section 4.2 we showed how the parity-check matrix H is defined for RA codes. However, to see how to construct a parity-check matrix, it is simplest to relate the encoding process to what we know about H.

We can make a direct relation between the parity-check equations in H and the combiner and accumulator equations. Recall that the accumulator output is

$$p_i = p_{i-1} + r_i$$

and the combiner output is

$$r_i = d_{(i-1)a+1} + d_{(i-1)a+2} + \cdots + d_{ia} \pmod 2$$

This means we can write $p_i$ in terms of the combiner inputs as

$$p_i = p_{i-1} + d_{(i-1)a+1} + d_{(i-1)a+2} + \cdots + d_{ia}$$

Therefore, because the $d_i$ are simply (permuted) copies of the message bits and $p_i$ is the i-th parity bit, we can represent these parity-check equations directly in the parity-check matrix H. The first K columns of H correspond to the message bits of our codeword; likewise, the last M columns correspond to the parity bits. With this in mind, we can see how to compose the two matrices $H_1$ and $H_2$. The non-zero entries in the rows of $H_1$ are found from the message-bit positions that compose d, and the $M \times M$ matrix $H_2$ is defined by the accumulator output $p_i$.

Example 3. Continuing from Example 2, we want to construct its parity-check matrix H. Let's start by re-stating that q = 3, which means that the repeater output b is

$$\mathbf{b} = [b_1\, b_2\, b_3\, b_4\, b_5\, b_6\, b_7\, b_8\, b_9\, b_{10}\, b_{11}\, b_{12}] = [u_1 u_1 u_1\;\; u_2 u_2 u_2\;\; u_3 u_3 u_3\;\; u_4 u_4 u_4]$$

Next, using our interleaver $\Pi = [1, 7, 4, 10, 2, 5, 8, 11, 3, 9, 6, 12]$, the bits of b are interleaved to make up the interleaver output

$$\mathbf{d} = [d_1\, d_2 \cdots d_{12}] = [b_1\, b_7\, b_4\, b_{10}\, b_2\, b_5\, b_8\, b_{11}\, b_3\, b_9\, b_6\, b_{12}]$$

or, in terms of $u_i$,

$$\mathbf{d} = [u_1\, u_3\, u_2\, u_4\;\; u_1\, u_2\, u_3\, u_4\;\; u_1\, u_3\, u_2\, u_4]$$

With this, we can now go ahead and find the ones in the rows of 𝐻1.

For the first row, note that it corresponds to the set of values in d that make up bit $r_1$ of the combiner output. So,

$$r_1 = d_1 + d_2 = b_1 + b_7 = u_1 + u_3 \pmod 2$$

This tells us that row one of $H_1$ will have a '1' in columns 1 and 3.


By continuing with this method, we can construct $H_1$ to be

$$H_1 = \begin{bmatrix}
1&0&1&0\\
0&1&0&1\\
1&1&0&0\\
0&0&1&1\\
1&0&1&0\\
0&1&0&1
\end{bmatrix}$$

There are a few things to mention about $H_1$. First, note that the number of rows, or parity-check equations, of $H_1$, as well as of H, equals $M = Kq/a$; in this case, there are M = 6 parity-check equations. Second, note that the number of columns of $H_1$ is the same as the number of information, or message, bits K.

To construct the values in $H_2$ we simply note that the last M columns of H make up the $H_2$ matrix. Therefore, in this case the last 6 columns comprise a diagonal '1' staircase matrix such as

$$H_2 = \begin{bmatrix}
1&0&0&0&0&0\\
1&1&0&0&0&0\\
0&1&1&0&0&0\\
0&0&1&1&0&0\\
0&0&0&1&1&0\\
0&0&0&0&1&1
\end{bmatrix}$$

Recall that each column of $H_2$ corresponds to a parity bit of our codeword, i.e. to an entry of the accumulator output $p_i$. In other words, column 3 of $H_2$ corresponds to $p_3$ in our accumulator output.

If we calculate our first parity bit, we obtain

$$p_1 = p_0 + r_1$$

Because the parity-check equations in H need to be satisfied, we rewrite our equation as

$$p_1 + r_1 = 0$$

$$p_1 + u_1 + u_3 = 0$$

Therefore, for this parity-check equation to be met, we need '1' values in each of these positions in the matrix. As we see, parity bit $p_1$, or column 1 of $H_2$, needs to be filled with a non-zero value, along with the corresponding bits from $H_1$. If we look at the second parity bit

𝑝2 = 𝑝1 + 𝑟2

we see that to satisfy the second parity-check equation we rewrite it as

$$p_2 + p_1 + r_2 = p_2 + p_1 + u_2 + u_4 = 0$$

This shows that, aside from the message bits, parity bits 1 and 2 in $H_2$ need to be filled.

When combining the matrices $H_1$ and $H_2$ to make H, we obtain the parity-check matrix:

$$H = \begin{bmatrix}
1&0&1&0&1&0&0&0&0&0\\
0&1&0&1&1&1&0&0&0&0\\
1&1&0&0&0&1&1&0&0&0\\
0&0&1&1&0&0&1&1&0&0\\
1&0&1&0&0&0&0&1&1&0\\
0&1&0&1&0&0&0&0&1&1
\end{bmatrix}$$
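Under the mapping just described — entry i of the interleaver places a one at row $\lceil i/a \rceil$, column $\lceil \pi_i/q \rceil$ of $H_1$, and the accumulator contributes the staircase $H_2$ — the matrix of Example 3 can be rebuilt programmatically. A sketch assuming NumPy; the function name is ours:

```python
import numpy as np

def ra_parity_check(K, q, a, perm):
    """Build H = [H1 | H2] for a systematic RA code: the i-th interleaver
    entry pi maps message bit ceil(pi / q) into parity-check row
    ceil(i / a); H2 is the M x M dual-diagonal 'staircase'."""
    M = K * q // a
    H1 = np.zeros((M, K), dtype=int)
    for i, pi in enumerate(perm, start=1):      # i and pi are 1-based
        row = (i + a - 1) // a                  # ceil(i / a)
        col = (pi + q - 1) // q                 # ceil(pi / q)
        H1[row - 1, col - 1] = 1
    H2 = np.eye(M, dtype=int) + np.eye(M, k=-1, dtype=int)
    return np.hstack([H1, H2])

# Example 3's parameters: K = 4, q = 3, a = 2.
perm = [1, 7, 4, 10, 2, 5, 8, 11, 3, 9, 6, 12]
H = ra_parity_check(4, 3, 2, perm)
# H matches the 6 x 10 matrix above, and the Example 2 codeword
# [1 0 0 1 1 0 1 0 1 0] satisfies H c^T = 0 (mod 2).
```

This makes explicit that the interleaver alone determines $H_1$, while $H_2$ is fixed by the accumulator.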

4.5 ENCODER AND PARITY-CHECK CONSTRUCTION COMPLEXITY

As has been observed, the construction of an RA encoder is straightforward. In terms of implementation complexity, it only uses adders to find its parity bits and, eventually, to create its parity-check matrix. Compared to regular LDPC encoders, the hardware implementation is less complex, especially because a general LDPC encoder needs both adders and multipliers to find the parity bits.

4.6 MESSAGE-PASSING DECODING FOR RA CODES

To decode RA codes, we have the option of decoding with the sum-product algorithm, as we do for LDPC codes, or with Turbo decoding. However, due to the complexity and performance advantages observed with SPA decoders, it is best to see how an RA code interacts with a belief propagation algorithm. As with LDPC codes, the simplest way to understand SPA decoding of RA codes is to look at their Tanner graph representation.


4.6.1 Graphical Representation for RA Codes

With the parity-check matrix construction shown in section 4.4, we can go ahead and

draw the Tanner graph for our matrix; Figure 4.4 shows this representation. From Figure 4.4 we can see that, unlike general LDPC codes, the Tanner graph representation of RA codes allows the message bits in the codeword to be easily distinguished [4].

We distinguish between systematic bit nodes corresponding to the K message bits, which are

shown at the top of the graph, and the parity-bit nodes corresponding to the M parity bits in

the codeword, which are shown at the bottom of the graph. Similar to LDPC codes, the check nodes sit right above the parity-bit nodes, and the edges coming from the bit nodes are permuted by the interleaver. The systematic bit nodes have degree q while

the parity-bit nodes have degree 2 except for the final parity-bit node, which has degree 1.¹¹ The check nodes have degree $a + 2$ except for the first, which has degree $a + 1$ [4]. When

compared to our encoder, we can see which part of our Tanner graph corresponds to the

encoder building block and how it graphically represents our parity-check matrix.

Figure 4.4. A systematic RA code Tanner graph.

¹¹ This refers to how $H_2$ is constructed.


4.6.2 Sum-Product Algorithm for RA Codes

If parity-bit nodes and systematic bit nodes are treated as indistinguishable, then the

sum-product decoding of an RA code is exactly the same as the sum-product decoding of an

LDPC code with the same parity-check matrix [4]. Therefore, a sum-product decoder for an

RA code decodes the parity-bit nodes exactly the same way as the systematic-bit nodes.

There may be some difference in scheduling while using sum-product decoding; however, typically we do not focus on this. Also, just as in LDPC codes, convergence to a valid codeword is easily detected, and it is possible to stop decoding once a valid codeword has been found.

Aside from the sum-product algorithm, one may decide to decode RA codes using turbo decoding. This works best for non-systematic RA codes, since the decoding process is identical to turbo decoding of serially concatenated codes. However, turbo decoding means running BCJR decoding on a trellis for the parity bits, which makes the computation of a single iteration more complex than a single iteration of sum-product decoding. Hence it is preferred to use SPA decoding for RA codes, and hence, when representing RA codes in terms of LDPC codes, we mainly focus on that decoding algorithm. Although this seems a clear advantage, the biggest disadvantage of a sum-product decoder is that overall it needs more iterations to decode than turbo decoding [4].

Figure 4.5. Block diagram with an RA encoder and a SPA LDPC decoder.


4.7 INTERLEAVERS FOR RA CODES

An RA code is completely specified by its code length N, row weight a, column weight q, and its interleaver pattern $\Pi$. This means that to design a good RA code we need to focus on the interleaver design, which ultimately amounts to designing the submatrix $H_1$. While an RA interleaver can be chosen randomly, which for long codes produces good performance, random permutations pose implementation-complexity challenges, leading to a search for structured methods that alleviate this issue. Because an RA encoder can be seen in terms of turbo encoders, one may think of reusing interleavers that work well for Turbo codes. However, due to the presence of the repeater and accumulator, these turbo interleavers do not suit RA codes, mainly because RA codes are decoded as LDPC codes, not Turbo codes. Also, the interleaver must be designed to control the cycles that can arise in the Tanner graph. This means that even if turbo interleavers were used as RA interleavers, they would perform poorly, since they do nothing to avoid the cycles that harm LDPC decoding. Hence, in this section we focus on what is needed when designing RA code interleavers and present a couple of interleaver structures used in practice.

4.7.1 RA Interleaver Definition

To design a (q, a)-regular RA code interleaver requires a permutation of the form

$$\Pi = [\pi_1\, \pi_2 \cdots \pi_{Kq}]$$

such that

$$\pi_i \in \{1, 2, \ldots, Kq\}, \qquad \pi_i \neq \pi_j \;\; \forall\, i \neq j$$

This means that, for the i-th entry $\pi_i$, the $\lceil i/a \rceil$-th row of H has a one in the $\lceil \pi_i/q \rceil$-th column¹² [4]. Since H is binary, it cannot contain repeated entries; in other words, we require
that

¹² Note that the $\lceil x \rceil$ notation refers to the smallest integer greater than or equal to x.


$$\lceil \pi_i/q \rceil \neq \lceil \pi_j/q \rceil \;\; \forall\, i \neq j \text{ such that } \lceil i/a \rceil = \lceil j/a \rceil$$

4.7.2 Interleaver Properties

Because an RA encoder is paired with a sum-product decoder, achieving good decoding performance requires taking the properties of the Tanner graph into consideration. An RA code will produce fewer errors if small cycles in the code's Tanner graph are avoided. Therefore, as with LDPC codes, 4-cycles and 6-cycles should be avoided as much as possible to improve decoding performance. For an RA code to satisfy this condition, its interleaver must permute the repeater output in a way that avoids length-4 cycles. There are two classes of 4-cycles RA codes can encounter in their Tanner graphs. The first, called a Type-1 4-cycle, occurs if a column of $H_1$ contains two consecutive ones and the second pair of 1s comprising the 4-cycle occurs in $H_2$; Example 4 shows how a Type-1 4-cycle can be identified. The second, called a Type-2 4-cycle, occurs if two columns of $H_1$ contain two entries in common. Note that this type does not involve $H_2$, since two columns of $H_2$ cannot have more than one entry in common. Example 5 shows how a Type-2 4-cycle can be identified.

Example 4. Looking at the length-10 parity-check matrix H that we previously constructed, we can find a Type-1 4-cycle, shown on the matrix in bold:

$$H = \begin{bmatrix}
1&0&1&0&1&0&0&0&0&0\\
0&\mathbf{1}&0&1&1&\mathbf{1}&0&0&0&0\\
1&\mathbf{1}&0&0&0&\mathbf{1}&1&0&0&0\\
0&0&1&1&0&0&1&1&0&0\\
1&0&1&0&0&0&0&1&1&0\\
0&1&0&1&0&0&0&0&1&1
\end{bmatrix}$$

Example 5. We can also find a Type-2 4-cycle in the H from Example 4. The cycle is shown on the matrix in bold:

$$H = \begin{bmatrix}
\mathbf{1}&0&\mathbf{1}&0&1&0&0&0&0&0\\
0&1&0&1&1&1&0&0&0&0\\
1&1&0&0&0&1&1&0&0&0\\
0&0&1&1&0&0&1&1&0&0\\
\mathbf{1}&0&\mathbf{1}&0&0&0&0&1&1&0\\
0&1&0&1&0&0&0&0&1&1
\end{bmatrix}$$


To avoid Type-1 4-cycles we require our interleaver to satisfy

$$\lceil \pi_i/q \rceil \neq \lceil \pi_j/q \rceil \;\; \forall\, i \neq j \text{ such that } \lceil i/a \rceil = \lceil j/a \rceil \pm 1$$

while to avoid Type-2 4-cycles we require that

$$\lceil \pi_j/q \rceil \neq \lceil \pi_i/q \rceil \;\; \forall\, i \neq j \text{ such that } \exists\, k, l \text{ where } \lceil l/a \rceil = \lceil j/a \rceil,\; \lceil k/a \rceil = \lceil i/a \rceil,\; \lceil \pi_l/q \rceil = \lceil \pi_k/q \rceil$$

Note that cycles cannot be formed solely within the columns of $H_2$, which means Type-1 and Type-2 4-cycles cover all length-4 cycles in an RA code [7].
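Both cycle types can be detected mechanically, since a 4-cycle exists exactly when two columns of H share ones in two or more rows. A sketch assuming NumPy, using the length-10 H from Examples 4 and 5; the helper name is ours:

```python
import numpy as np

def has_4cycle(H):
    """A 4-cycle exists iff some pair of distinct columns of H shares ones
    in two or more rows. (The staircase columns of H2 pairwise overlap in
    at most one row, so every 4-cycle in an RA code involves H1, matching
    the Type-1 / Type-2 classification.)"""
    overlap = H.T.dot(H)              # overlap[i, j] = rows shared by cols i, j
    np.fill_diagonal(overlap, 0)      # ignore a column paired with itself
    return bool((overlap >= 2).any())

# The length-10 H from Examples 4 and 5 contains both cycle types:
H = np.array([[1,0,1,0, 1,0,0,0,0,0],
              [0,1,0,1, 1,1,0,0,0,0],
              [1,1,0,0, 0,1,1,0,0,0],
              [0,0,1,1, 0,0,1,1,0,0],
              [1,0,1,0, 0,0,0,1,1,0],
              [0,1,0,1, 0,0,0,0,1,1]])
```

For instance, columns 1 and 3 of this H share ones in rows 1 and 5 (the Type-2 cycle of Example 5), so the check reports a 4-cycle, while a bare staircase matrix passes.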

4.7.3 Pseudo-Random Interleavers

As previously mentioned, random interleavers produce effective results in terms of

performance, hence why implementation of random interleavers have been applied to

simulate RA codes. However, randomly constructed interleavers pose implementation

challenges in terms of its hardware design. Therefore, designers have relied on using pseudo-

random type of interleavers which can be structurally designed, allowing for a simpler

hardware implementation, while still providing a random permutation. Along with designing

pseudo-random interleavers that show good performance, it is also important to design these

interleaver such that they remove small cycles.

For example, to remove type-1 4-cycles, research has shown that applying a pseudo-random interleaver called an S-random, or S-type, interleaver will help. An S-type interleaver requires that no two entries of ∏ within S positions of each other have values within S of each other [7]. To avoid all type-1 4-cycles in the strict sense, we need 𝑆 ≥ 𝑚𝑎𝑥(𝑞 − 1, 2𝑎 − 1); however, this rule can be relaxed and still remove type-1 4-cycles. Although an S-type interleaver may appear to be an attractive option, it is not recommended, most importantly because it cannot be used to avoid type-2 4-cycles when 𝑎 > 1. Therefore, other structured interleavers may be of interest.
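Although not recommended here, the S-constraint itself is easy to realize with the classic generate-and-test procedure: draw candidates at random and accept a value only if it differs by more than S from the last S accepted values. A minimal Python sketch (the function name, retry limit, and greedy strategy are our own illustration, not an algorithm from [7]):

```python
import random

def s_random_interleaver(n, S, max_tries=1000, seed=1):
    """Greedy generate-and-test construction of an S-random permutation
    of 1..n: entries within S positions differ by more than S in value."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        remaining = list(range(1, n + 1))
        rng.shuffle(remaining)
        out = []
        while remaining:
            # accept the first candidate far enough from the last S picks
            for idx, v in enumerate(remaining):
                if all(abs(v - w) > S for w in out[-S:]):
                    out.append(remaining.pop(idx))
                    break
            else:
                break              # dead end: reshuffle and start over
        if not remaining:
            return out
    raise RuntimeError("no S-random interleaver found; try a smaller S")
```

The retry loop reflects that the greedy search can dead-end near the end of the permutation; a fresh shuffle usually resolves it for feasible S.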

4.7.4 Structured-Type Interleavers

To meet all the properties required for our RA code's Tanner graph, it is best for interleavers to be structured, not random. One design option is to use a row-column, or block, interleaver, as is done for Turbo codes. However, as mentioned earlier, because sum-product decoding is used for RA codes and the encoder contains a repeater and an accumulator, this type of block interleaver will not work. Also, when 𝑎 > 1 this interleaver has been found to add a large number of 4-cycles, which gives bad performance in RA codes [7]. It nevertheless remains an option many consider because its algorithm is simpler to implement than that of an S-type interleaver. A code designer is therefore interested in developing an interleaver that is structured, easy to describe, easy to implement, and avoids small cycles, such as 4-cycles, as much as possible. Hence it is important to look into the two practical interleavers studied in [7]: the L-type interleaver and its modified version.

4.7.5 L-Type Interleavers

An L-type interleaver is defined by the code parameters K and q and an integer-valued parameter L. The L parameter is used to avoid small cycles that may arise. Example 6 shows the L-type interleaver construction process in detail. The step process for an L-type interleaver is

Step 1. The L-type interleaver first selects K message bits to make ∏1. See Figure 4.6 for how to find ∏1.

Step 2. It then proceeds with its permutation process by starting at the first bit of ∏1, skipping L bits at a time, and selecting message bits to make permutation vector ∏2.

Step 3. After it has reached the end of the vector, it repeats Step 2 starting at the first unselected bit. This is repeated until every message bit is selected.13

Step 4. We continue to repeat this permutation process 𝑞 times in total, so that we end with ∏𝑞.

Step 5. Add 𝑖 − 1, where i refers to the sub-interleaver index, to each value in the ∏𝑖 pattern.

Step 6. Finally, combine each ∏𝑖 in order to form the L-type interleaver.

13 Note that we should be able to select all elements when we reach the L bit position.


Example 6. Assume we have a message vector coming into our L-type interleaver with 𝐾 = 8, 𝑁 = 16, and 𝐿 = 2. Also, assume that this is a (2, 2)-regular RA code with rate 1/2. To start off, the vector entering the interleaver has size 𝐾𝑞 = 16, which means that our final interleaver will be of length 16.

Following the definition of ∏1 in Figure 4.6, we calculate

∏1 = [1 3 5 7 9 11 13 15]

Next, we skip 𝐿 = 2 bits at a time through ∏1 to find ∏2, continuing this process until we have started from each of the first 𝐿 = 2 positions of ∏1. Therefore,

∏2 = [1 5 9 13], [3 7 11 15]

Because this is our q-th selection, we stop finding further sub-interleavers. Next, we add 𝑖 − 1 to each value found in the sub-interleaver. In our case, ∏2 becomes

∏2 = [2 6 10 14 4 8 12 16]

To make the overall interleaver, we append ∏1 and ∏2 to get our L-type interleaver

∏ = [1 3 5 7 9 11 13 15 2 6 10 14 4 8 12 16]

Therefore, its parity-check matrix will come out to be:

𝐻 =

[ 1 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0
  0 0 1 1 0 0 0 0 1 1 0 0 0 0 0 0
  0 0 0 0 1 1 0 0 0 1 1 0 0 0 0 0
  0 0 0 0 0 0 1 1 0 0 1 1 0 0 0 0
  1 0 1 0 0 0 0 0 0 0 0 1 1 0 0 0
  0 0 0 0 1 0 1 0 0 0 0 0 1 1 0 0
  0 1 0 1 0 0 0 0 0 0 0 0 0 1 1 0
  0 0 0 0 0 1 0 1 0 0 0 0 0 0 1 1 ]
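The six construction steps above can be sketched compactly. A minimal Python version (a sketch; the exact form of ∏1 from Figure 4.6 is assumed here to be the positions 1, 1 + q, 1 + 2q, …) reproduces the interleaver of Example 6:

```python
def l_type_interleaver(K, q, L):
    """L-type interleaver of length K*q (values are 1-based positions)."""
    # Pi_1: positions 1, 1+q, 1+2q, ... (assumed form of Figure 4.6).
    pi = list(range(1, K * q + 1, q))
    out = list(pi)
    for i in range(2, q + 1):
        # Steps 2-3: take every L-th entry of the previous sub-interleaver,
        # restarting at the first unselected entry (equivalently, a row-write
        # into L columns followed by a column-read).
        pi = [x for c in range(L) for x in pi[c::L]]
        out += [x + (i - 1) for x in pi]      # Step 5: add i - 1
    return out                                # Step 6: concatenation
```

For K = 8, q = 2, L = 2 this returns [1, 3, 5, 7, 9, 11, 13, 15, 2, 6, 10, 14, 4, 8, 12, 16], matching Example 6.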


Figure 4.6. Equation for L-type interleavers

In practice, bits 𝑖𝐾 + 1 to (𝑖 + 1)𝐾 of ∏ can be seen as a type of row-column permutation of bits (𝑖 − 1)𝐾 + 1 to 𝑖𝐾 of ∏. In this case the bits are written row-wise into a matrix with L columns and read out column-wise. Thus the L-type interleaver is the concatenation of 𝑞 − 1 row-write, column-read operations, and so it becomes straightforward to implement [7]. With this interleaver design we can find various girth properties that are important to mention.

Lemma 1: A (3, a)-regular RA code can be constructed without 4-cycles whenever 𝐾 > 𝑎³ by using an L-type interleaver with 𝐿 = 𝑎.

Lemma 2: A (3, a)-regular RA code can be constructed without 6-cycles whenever 𝐾 ≥ 8𝑎³ by using an L-type interleaver with 𝐿 = 2𝑎.

These lemmas show that aside from being simple to construct, L-type interleavers can avoid small cycles and enable good performance in our code. The same construction can also be used to guarantee RA codes without 6-cycles. In this case, we define a type-1 6-cycle as one containing two accumulator columns, a type-2 6-cycle as one containing one accumulator column, and a type-3 6-cycle as one formed solely within 𝐻1. If we want to remove 6-cycles, we can follow Lemma 2 for the construction [7].

For the case when 𝑎 = 1 we can do even better at avoiding small cycles. Typically, we can achieve girth > 10 with 𝐿 = 2 when K is odd and 𝐾 ≥ 7, or girth > 12 with 𝐿 = 3 when 𝐾 = 1, 2 mod 3 and 𝐾 ≥ 21. Unfortunately, when 𝑎 > 1 the L-type interleaver always adds 8-cycles. Thus, while the L-type interleaver will produce good codes at shorter lengths, where a girth of 8 is still acceptable, it is not a good option for long codes, where codes with large girth can easily be constructed randomly [7].

4.7.6 Modified L-Type Interleavers

To avoid these 8-cycles especially for longer codes, RA codes can use a “modified”

version of the L-type interleaver. The Modified L-type interleaver is simply an L-type

interleaver with an extra step included. In order to understand how a modified L-type

interleaver works, it is best to recall the construction of an L-type interleaver as a row-wise,

column-read matrix.

Construction starts with ∏1, which keeps the same structure and is left unchanged. Subsequent sub-interleavers, ∏𝑖, are formed by writing the bits of ∏𝑖−1 row-wise into a matrix, 𝑀𝑖, with L columns and reading them out column-wise.14 However, the process changes in that, after they are read, the bits from each column are written row-wise into another matrix, which we call 𝐴𝑗, and read out column-wise once again. The j-th column of 𝑀𝑖 is written into a matrix 𝐴𝑗 with j columns. We repeat this process (𝑞 − 1)𝐿 times.

Example 7. When constructing a modified L-type interleaver for the same K message bits as in Example 6, we first set the 𝐾𝑞 bit positions up in a matrix with 𝐿 = 2 columns, written row-wise, such as

𝑀1 =

[ 1 2
  3 4
  5 6
  7 8
  9 10
  11 12
  13 14
  15 16 ]

14 Note that this process is the same as what we would do with L-type interleavers if construction in matrix

form is desired.


Next, each column gets written into another matrix, 𝐴𝑗 with j columns in a row-wise manner.

This means

𝐴1 =

[ 1
  3
  5
  7
  9
  11
  13
  15 ]

and

𝐴2 =

[ 2 4
  6 8
  10 12
  14 16 ]

We then make another write-read operation on these 𝐴𝑗 matrices; the process is repeated (𝑞 − 1)𝐿 = 2 times in total. Because we have already done it once, our next matrices are:

𝐴1 =

[ 1
  3
  5
  7
  9
  11
  13
  15 ]

𝐴2 =

[ 2
  6
  10
  14 ]

and

𝐴2 =

[ 4 8
  12 16 ]

Therefore, if we read each column in order, we will get the interleaver

∏ = [1 3 5 7 9 11 13 15 2 6 10 14 4 12 8 16]
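The recursive write-read process of Example 7 can be sketched as follows (a Python sketch generalizing from the example, not our MATLAB code: each sub-matrix is tracked together with its column count, and column j of a matrix always spawns a new matrix with j columns):

```python
def modified_l_type(K, q, L):
    """Modified L-type interleaver, following the write-read process of
    Example 7 (the generalization to other q and L is assumed)."""
    # Each leaf is (sequence, number of columns of its current matrix).
    leaves = [(list(range(1, K * q + 1)), L)]
    for _ in range((q - 1) * L):
        new_leaves = []
        for seq, ncols in leaves:
            # Column j of the row-wise filled matrix goes to matrix A_j
            # with j columns.
            for j in range(1, ncols + 1):
                new_leaves.append((seq[j - 1::ncols], j))
        leaves = new_leaves
    # Final read-out: each matrix is read column-wise, in order.
    return [x for seq, ncols in leaves for c in range(ncols) for x in seq[c::ncols]]
```

For K = 8, q = 2, L = 2 this reproduces the interleaver of Example 7, [1, 3, 5, 7, 9, 11, 13, 15, 2, 6, 10, 14, 4, 12, 8, 16].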


Implementation of this interleaver now requires (𝑞 − 1)𝐿 matrix write-read operations instead of the 𝑞 − 1 row-write, column-read operations of an L-type interleaver. The modified interleaver is still simple to specify, requiring just three parameters to construct: L, q, and K. In Figure 4.7, we see the block diagram of the encoding circuit for both the regular and modified L-type interleavers [7].

With this construction we are able to break most of the 8-cycles encountered in our code. The disadvantage is that it re-introduces 4-cycles. Even so, previous simulations have shown that removing 4-cycles at large code lengths does not produce much benefit. Just as in LDPC codes, removing small cycles is not always helpful, since it can reduce the minimum distance of the code, which in turn impacts error correction.

Figure 4.7. Block diagram of the encoding circuit for a combined q = 3 repetition code and modified L-type interleaver. If we remove the dashed boxes, we obtain the encoding circuit for an L-type interleaver.

4.8 ADVANTAGES AND DISADVANTAGES

Like LDPC codes, RA codes have many advantages that make them attractive to use. As mentioned, the main advantage of RA codes is the simplicity of the encoder and of the parity-check matrix construction, especially in hardware: the encoding complexity is linear. Along with a simpler encoder, RA codes can be used with the sum-product algorithm, whose decoder is less complex than Turbo decoders, to achieve very good performance at short to medium lengths. However, RA codes also have significant disadvantages that can hinder their use. First, systematic RA codes only work for code rates of 1/2 or less. Also, they only work well for short to medium code lengths, which prevents them from being capacity-approaching codes [8]. However, many modifications to RA codes can be made to work around these limitations; see references [9] and [8] for details.


CHAPTER 5

SIMULATION

The goal behind our simulation was to look into the bit error performance of LDPC

codes when using a regular Repeat-Accumulate parity-check matrix and a regular LDPC

Gallager code parity-check matrix. The outline of the block diagram is shown in Figure 5.1.

Our purpose was to see whether using RA codes with an LDPC decoder, specifically the sum-product decoder, could show that a simpler encoding process and hardware implementation still provides similar, or perhaps better, BER performance than what we know LDPC codes achieve at various code lengths. At the same time, we wanted to see how much impact a change of interleaver has within an RA code and what benefit we could obtain from choosing certain interleavers.

Our simulation procedure involved simulating through MATLAB various RA parity-

check matrices and encoders using different practical interleavers at different code lengths.

The parameters used for simulation were

Code length, N: 96, 204, 408, 816

RA interleavers: Random, L-type, and Modified L-type

We compared each LDPC code against an uncoded BPSK signal to highlight two points. The first is to remind us that coded systems are preferred to uncoded systems because they achieve a fixed BER value at a substantially lower SNR. The second is to show how much improvement we get when we vary the interleavers and compare them to a regular LDPC code.

The building blocks used for our simulation were the information source, channel

encoder and modulator for the transmitter side. For our receiver we used only the soft-

decision decoder as both a demodulator and channel decoder. The next sections will explain

each block in detail.


Figure 5.1. MATLAB simulation block diagram.

5.1 INFORMATION SEQUENCE

The simulation begins with our information source set to an all-zero sequence. There was no motivation behind picking this particular source other than debugging: when initially starting simulations, we could easily see whether or not our codeword was correctly decoded.

5.2 CHANNEL ENCODER

One of our goals is to compare the performance between a regular LDPC code based

on an RA parity-check matrix and a Gallager code parity-check matrix. Therefore, we had

two encoders to consider and simulate. Before encoding each message, we went ahead and

designed the parity-check matrices for each.

5.2.1 RA Parity-Check Matrix and Encoder

The parity-check matrix was based on a systematic RA code, which has the structure defined in Chapter 4 and shown in Figure 5.2. The RA parity-check matrix was built alongside the encoder simulation, because the RA encoder block structure is used for our parity-check matrix design. Our RA parity-check matrix involved a 𝑘 × 𝑘 𝐻2 matrix which

had weight-2 columns and created a diagonal staircase matrix. Our RA encoder function also

switched between three interleavers: random, L-type and modified L-type interleaver. Our

random interleaver was developed by using the ‘randperm’ command in MATLAB and

setting it to be the interleaver output length of 𝑞𝐾. The use of this command enabled us to

have a permutation of values without repeating. Figure 5.3 shows an example of an RA

parity-check matrix with a random interleaver.
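As an illustration of this construction, here is a hedged Python sketch (not our MATLAB simulation code) that assembles H = [H1 | H2]: interleaved position i feeds check row ⌈i/a⌉ and carries message bit ⌈π_i/q⌉, and H2 is the weight-2 staircase:

```python
import numpy as np

def ra_parity_check(K, q, a, perm):
    """Systematic RA parity-check matrix H = [H1 | H2].
    perm is a length K*q interleaver with 1-based values."""
    M = K * q // a                             # number of parity checks
    H1 = np.zeros((M, K), dtype=int)
    for i, p in enumerate(perm, start=1):
        # row ceil(i/a), message bit ceil(p/q); toggle keeps entries mod 2
        H1[(i - 1) // a, (p - 1) // q] ^= 1
    # dual-diagonal "staircase" accumulator part
    H2 = np.eye(M, dtype=int) + np.eye(M, k=-1, dtype=int)
    return np.hstack([H1, H2])
```

Fed the interleaver of Example 6, this reproduces the parity-check matrix given there.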


Figure 5.2. Block diagram of our systematic RA encoder.

The L-type and modified L-type interleavers were based on the description from [4]. Two MATLAB functions were developed, one for each, and implemented within our RA encoder function. Our L-type function was developed by following the step process shown in Example 6 in Chapter 4, using Figure 4.6 for the equations. Our modified L-type interleaver is based on the row-write, column-read method, where a matrix with L columns is created and the index positions of our message are written row-wise and read column-wise (𝑞 − 1)𝐿 times. For both of our L-type interleavers, we used the parameters 𝐿 = 8 and 𝐿 = 30. These values were chosen because reference [7] showed that at code rate 𝑅 = 1/2 our L-type interleavers could achieve the best overall performance, and because RA codes have been found to work well for code rates smaller than or equal to 𝑅 = 1/2 [8]. Figure 5.4 shows an RA parity-check matrix constructed with an L-type interleaver, and Figure 5.5 shows the parity-check matrix constructed with a modified L-type interleaver. Comparing Figures 5.4 and 5.5 shows where the two L-type interleavers differ and how different the 𝐻1 structure is.

5.2.2 Gallager Parity-Check Matrix

The second parity-check matrix used was a Gallager code parity-check matrix. The

MATLAB code for this parity-check matrix was found online and was developed by Sanket

Kalamkar [10]. We used this code because it did a good job in developing a Gallager code by

simply inputting the code length, column weight and row weight desired. Figure 5.6 shows

an example of a Gallager code parity-check matrix. It is important to note that due to

complexity of the encoder, we did not develop an encoder for this matrix. The encoding

planned was via a generator matrix, which would involve converting H into G through Gauss-Jordan elimination such that 𝐺𝐻𝑇 = 0. To avoid this step, we used the all-zero codeword: with any regular LDPC encoder, an all-zero message always produces parity bits equal to zero. For this reason, we applied the all-zero codeword in all of our simulations.

Figure 5.3. RA parity check matrix w/ random interleaver (N = 408).

Figure 5.4. RA parity check matrix w/ L-type interleaver (L = 8, N = 408).



Figure 5.5. RA parity check matrix w/ modified L-type interleaver.

Figure 5.6. Gallager parity-check matrix with N = 408.

5.3 BPSK MODULATOR

BPSK modulation was used in order to keep the design simple and easy to compare

against the theoretical BPSK bit probability error. The mapping utilized in this simulation

was

{0,1} → {√𝐸𝑠, −√𝐸𝑠}

where 𝐸𝑠 is the signal energy per transmitted symbol [11]. In this simulation, we set 𝐸𝑠 = 1 in order to make computation simpler. Therefore, with this definition we obtained



{0,1} → {1,−1}

Note that our BPSK mapping maps our 0 and 1 to NRZ values 1 and -1 respectively [11].

5.4 AWGN CHANNEL

The simulation used an AWGN, Additive White Gaussian Noise, channel with zero mean and noise variance 𝑁0/2. Recall that our BPSK signal over an AWGN channel is received as

𝑟 = 𝑏 + 𝑛

where r refers to the received signal, b is the BPSK-modulated signal, and n is the Gaussian noise added by the channel. To model our AWGN channel, we used the MATLAB function ‘randn’ to generate zero-mean, unit-variance Gaussian samples, which we scaled and added to our transmitted signal. Along with this, we used the code rate and symbol energy to define our noise variance.

Because our noise variance is 𝜎² = 𝑁0/2 and our SNR is defined as 𝐸𝑏/𝑁0, we can redefine the noise variance as

𝜎² = 𝐸𝑠 / (2𝑅 ∙ 𝑆𝑁𝑅)
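The mapping and the noise variance above translate directly into code. A small Python sketch of the modulator and channel (illustrative; our simulation used MATLAB’s ‘randn’, and the function name and seed handling here are our own):

```python
import numpy as np

def awgn_bpsk(bits, snr_db, R=0.5, Es=1.0, seed=0):
    """BPSK map {0,1} -> {+sqrt(Es), -sqrt(Es)} plus AWGN with
    variance sigma^2 = Es / (2 * R * SNR), SNR given in dB (Eb/N0)."""
    snr = 10.0 ** (snr_db / 10.0)              # Eb/N0 as a linear ratio
    sigma2 = Es / (2.0 * R * snr)
    b = np.sqrt(Es) * (1.0 - 2.0 * np.asarray(bits))   # 0 -> +1, 1 -> -1
    rng = np.random.default_rng(seed)
    return b + rng.normal(0.0, np.sqrt(sigma2), size=b.shape)
```

Transmitting the all-zero codeword then amounts to passing a vector of zeros through this channel.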

5.5 SUM-PRODUCT DECODER

The channel decoder for our code was the LDPC soft-decision decoder used for regular LDPC codes, i.e., the sum-product algorithm. Recall that the decoder receives noisy, a priori probabilities from the channel, which are sent to the bit nodes of our Tanner graph. A series of message-passing computations then makes the received codeword converge to the originally sent message. In our simulation, we treated our bit nodes and parity nodes as indistinguishable from each other and dealt only with systematic codewords, enabling us to use the same decoder as we would for regular LDPC codes. Note that this decoder is based on the algorithm described by Sarah Johnson in [6].


Our sum-product algorithm used the alternative relationship for 𝐸𝑗,𝑖,

2 tanh⁻¹(𝑝) = log((1 + 𝑝)/(1 − 𝑝))

in order to describe the extrinsic message between a check node and a bit node. This alternative was used to reduce the implementation complexity of our simulation and its computation time. Note that this was the only block implemented for the receiver side of our communication channel. No separate demodulator block was needed before the channel decoder because, with soft-decision decoding, the decoder itself operates on the un-quantized received values and makes the final hard decision on the codeword; adding another hard-decision block would make no sense. Hence, when using soft-decision decoding, there is no need for a separate demodulator.
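For concreteness, here is a compact Python sketch of the sum-product decoder built around the tanh rule above (illustrative only; our MATLAB implementation follows [6], and a production decoder would guard the product/division step against numerical edge cases):

```python
import numpy as np

def sum_product_decode(H, r, sigma2, max_iter=13):
    """Sum-product decoding of a received BPSK vector r (+1 -> 0, -1 -> 1)."""
    M, N = H.shape
    Lch = 2.0 * r / sigma2                    # a priori channel LLRs
    Mji = np.tile(Lch, (M, 1)) * H            # bit-to-check messages
    for _ in range(max_iter):
        # check-to-bit: E_ji = 2 atanh( prod_{i' != i} tanh(M_ji'/2) )
        T = np.tanh(Mji / 2.0)
        T[H == 0] = 1.0
        prod = T.prod(axis=1, keepdims=True)
        Eji = 2.0 * np.arctanh(np.clip(prod / T, -0.999999, 0.999999)) * H
        L = Lch + Eji.sum(axis=0)             # total LLR for each bit
        hard = (L < 0).astype(int)            # negative LLR -> bit 1
        if not np.any((H @ hard) % 2):        # all checks satisfied: done
            return hard
        Mji = (L - Eji) * H                   # extrinsic bit-to-check update
    return hard
```

The 13-iteration default matches the iteration count used in our simulations; the decoder can be run on any parity-check matrix H.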


CHAPTER 6

RESULTS

6.1 SIMULATION RESULTS

For all of our simulations, we used a code rate of 𝑅 = 1/2 and a column weight of 3.

We decided to focus on how a code length increase would impact the performance of RA

codes with different practical interleavers. In order to show how our results fared with those

of regular LDPC codes, we decided to compare them against a regular LDPC code with a

Gallager parity-check matrix and some simulation results found in [1]. Because this paper

also compared BER vs. SNR for LDPC codes, we decided to see how the results obtained by

[1] compared to RA codes when we changed different practical interleavers. Hence why we

used the same parameters as in [1] i.e. same code lengths (96,204,408 and 816), column

weight (3), code rate ( 1

2 ), and 13 iterations for SPA decoder. The simulation results obtained

by [1] can be seen in Figure 6.1 and the original results, based on David MacKay’s paper

Information Theory, Inference, and Learning Algorithms [1], are seen in Figure 6.2.

When simulating each code length, our plots included the performance curve of the respective parity-check matrix and that of uncoded BPSK. We used 5 simulation points, enough to show the behavior of our code while saving simulation time. Our simulation results are shown in the next sections.

6.1.1 Code Length, N = 96

At 𝑁 = 96, we observed that when using a Gallager code parity-check matrix we

were able to reach similar values as those obtained in [1]. Figure 6.3 shows the BER vs SNR

plot of a Gallager parity-check matrix with 𝑁 = 96.


Figure 6.1. Simulation results from [1].

Figure 6.2. The original simulation results computed by MacKay.


Figure 6.3. Gallager parity-check matrix BER vs. SNR plot (N = 96).

When simulating an RA parity-check matrix with a random interleaver we observed

an improvement from that of a Gallager parity-check matrix. It seems that the column

weight-2 structure for 𝐻2 allowed our waterfall plot to decrease our BER further at 4 and 5

dB. Figure 6.4 shows our simulation plot for an RA parity-check matrix with random

interleaving.

For an RA parity-check matrix with an L-type interleaver, we obtained better results than with a random interleaver. Overall, the L-type interleaver construction gave a performance curve improved over that of Figure 6.1, coming close to that of Figure 6.2. Figure 6.5 shows this plot when 𝐿 = 8.

A modified L-type interleaver showed very small improvement over the L-type interleaver; for the most part the two performed similarly. Only at SNR values of 4 dB and 5 dB did we observe a very small decrease in BER for the modified L-type interleaver. Overall, we can conclude that at small lengths, such as 𝑁 = 96, the regular and modified L-type interleavers give similar performance. Figure 6.6 shows the modified L-type simulated plot.

Figure 6.4. SNR vs. BER plot with random interleaver (N = 96).

Unfortunately, when we changed our L parameter to 𝐿 = 30, the performance curves for both the regular and modified L-type interleavers became poorer, giving us the worst performance of all. Because the L parameter defines how the interleaver shuffles values around, at small code lengths a large L parameter does not create enough spread in our H, giving poor performance overall. Even with the extra steps of the modified L-type version, the interleaver still does not add enough permutation. Our results therefore show that the modified and regular L-type interleavers perform the same here. Figures 6.7 and 6.8 show the SNR plots for the L-type and modified L-type interleavers, respectively.

In summary, Table 6.1 shows the BER values obtained for each SNR. Overall from

this table, both the modified and regular L-type interleavers with 𝐿 = 8 gave us the best performance. The worst performance came from the same interleavers when 𝐿 = 30. Although we did not obtain the exact values reported in [1], the waterfall plots are similar enough for comparison.

Figure 6.5. SNR vs. BER plot with L-type interleaver (L = 8, N = 96).

6.1.2 Code Length, N = 204

For code length 𝑁 = 204 we observed trends similar to those at length 96. The performance of the Gallager code is shown in Figure 6.9. From our plot, we can see that as the code length increases, the BER improves at every SNR point. Compared to [1], we again see that the simulations in that paper are better than the ones simulated here with regard to the Gallager parity-check matrix.

With an RA parity-check matrix with random interleaver, we again saw a major

improvement compared to the one in [1]. Our RA code plot in Figure 6.10 showed that it was

close to simulated results in [1]. Because length 96 also gave this behavior, we begin to

notice that an RA parity-check matrix and encoder at short lengths can achieve performance similar to a regular LDPC code. In theory, then, we can conclude that a far simpler encoder implementation still achieves comparable performance.

Figure 6.6. SNR vs. BER plot with modified L-type interleaver (L = 8, N = 96).

Just like with code length 96, our RA parity-check matrix with an L-type interleaver

showed an improvement in terms of performance compared to that with a random interleaver.

Compared to the Gallager parity-check matrix our L-type interleaver shows that it can

produce a parity-check matrix which can greatly improve performance and still be systematic

with a small L parameter of 𝐿 = 8. Figure 6.11 shows the L-type RA parity-check matrix

BER vs. SNR plot.

Finally, as with 𝑁 = 96, our RA parity-check matrix with a modified L-type interleaver performed similarly to the L-type interleaver, with the modified version again showing a slight improvement. Our simulations show that the H matrix with a modified L-type interleaver provides a slightly steeper SNR plot than the L-type interleaver. As we

expected, as the code length increases, our modified L-type interleaver becomes less similar to an L-type interleaver. The modified L-type version shows a steepness comparable to the simulated results in [1] and to MacKay’s original results. See Figure 6.12 for the plot.

Figure 6.7. SNR performance with L-type interleaver (L = 30, N = 96).

Contrary to what we noticed at 𝑁 = 96, changing the L parameter to 𝐿 = 30 gives a performance curve similar to that of 𝐿 = 8. Due to the increased code length N, the interleaver can now protect the code from noise more robustly than at a smaller code length. We still noticed a slight decrease in performance for both practical interleavers compared with an H matrix using 𝐿 = 8; however, both were still better than the Gallager code simulation. Figures 6.13 and 6.14 show the performance curves for both L-type interleavers. Table 6.2 gives the values at each SNR point for this code length. In the table we again see that the L-type structures give steeper performance curves at a low L-parameter. Compared to the results in [1], our L-type structures come closest to approaching their curves.



Figure 6.8. SNR performance with modified L-type interleaver (L = 30,

N = 96).

Table 6.1. N = 96 Simulation Points

SNR (dB) 1 2 3 4 5

BER (GC) 0.0702 0.0312 0.008 0.0031 0.0022

BER(RA) 0.0526 0.2 0.0037 6.0827E-04 9.0616E-04

BER (RA w/ Mod. L-type) L=8 0.059 0.0196 0.0036 3.5726E-04 2.6039E-05

BER (RA w/ Mod. L-type) L=30 0.0592 0.0336 0.0154 0.007 0.003

BER(RA w/ L-type) L=8 0.0482 0.0164 0.0032 3.9684E-04 4.1663E-05

BER(RA w/ L-type) L=30 0.0598 0.0337 0.0147 0.0069 0.0035

BER(Uncoded) 0.0563 0.0375 0.0229 0.0125 0.006



Figure 6.9. Gallager parity-check matrix BER vs. SNR plot (N =

204).

Figure 6.10. SNR vs. BER plot with random interleaver (N =

204).



Figure 6.11. SNR vs BER plot with L-type interleaver (L = 8, N =

204).

Figure 6.12. SNR vs. BER plot with modified L-type interleaver

(L = 8, N = 204).



Figure 6.13. SNR vs. BER plot with modified L-type interleaver (L

= 30, N = 204).

Figure 6.14. SNR vs. BER plot with L-type interleaver (L = 30, N =

204).



Table 6.2. N = 204 Simulation Points

SNR (dB) 1 2 3 4 5

BER (GC) 0.07 0.0164 0.005 0.0023 9.6510E-04

BER(RA) 0.0509 0.0119 0.002 2.4850E-04 3.6761E-05

BER (RA w/ Mod. L-type) L=8 0.0425 0.011 0.0014 9.4598E-05 0

BER (RA w/ Mod. L-type) L=30 0.0526 0.0167 0.0025 4.2747E-04 2.60E-05

BER(RA w/ L-type) L=8 0.0443 0.0112 0.0011 6.6660E-05 1.4704E-06

BER(RA w/ L-type) L=30 0.052 0.0186 0.0028 3.6908E-04 3.0389E-05

BER(Uncoded) 0.0563 0.0375 0.0229 0.0125 0.006

6.1.3 Code Length, N = 408

When trying our parity-check matrices H with a code length of 𝑁 = 408, we found that some of the behavior trends observed since code length 𝑁 = 96 still hold. When applying 𝑁 = 408 to our Gallager code H, we did not see much change in performance compared to 𝑁 = 204. Figure 6.15 shows this plot.

Figure 6.15. Gallager code SNR vs. BER plot (N = 408).

[Plot for Figure 6.15: Eb/No (dB) vs. bit error probability.]

When random interleavers were used, we observed better performance at low SNR values than that shown in [1]; however, above roughly 3-4 dB our results appeared somewhat worse. We cannot conclude this with certainty, since neither [1] nor MacKay's paper plots points at 5 dB and above for 𝑁 = 408. Figure 6.16 shows the RA code with a random interleaver.

Figure 6.16. SNR vs. BER plot for RA code with random interleaver (N = 408).

Just as before, the L-type and modified L-type interleavers at 𝐿 = 8 showed behavior similar to the random interleaver, but our curves appear slightly steeper than those in [1]. The L-type interleaver gave the best performance curve: at 𝑁 = 408 it was the steepest of all, even steeper than the modified L-type interleaver. In our simulations the soft decoder was able to converge to the codeword after 4 dB, unlike the modified version, which did not converge at all. Figures 6.17 and 6.18 show both L-type interleavers.
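Convergence here means that the hard decisions on the decoder's soft outputs satisfy every parity check, i.e. H·x̂ᵀ = 0 (mod 2); message passing stops early once this syndrome test passes. A small sketch of that test (the 3×6 matrix is a toy example, not one of our simulated codes):

```python
def syndrome_ok(H, x):
    """Return True when x satisfies every parity check of H (mod 2)."""
    return all(sum(h * b for h, b in zip(row, x)) % 2 == 0 for row in H)

# Toy parity-check matrix and a word that satisfies all three checks.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]
assert syndrome_ok(H, [1, 1, 0, 0, 1, 1])       # decoder would stop here
assert not syndrome_ok(H, [1, 0, 0, 0, 0, 0])   # a failed check keeps iterating
```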

When 𝐿 = 30 we still see steeper performance curves than those with random interleavers and Gallager codes; however, the curves with 𝐿 = 8 retain a slight advantage. Especially above 3 dB, the 𝐿 = 8 curves show a lower bit error probability. Also, just as at 𝐿 = 8, our L-type interleaver still outperforms the modified L-type interleaver at the higher L parameter. The difference between them is small, so either interleaver is a reasonable choice at these small code lengths.
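Part of the appeal of these deterministic interleavers is that the whole permutation is generated from a couple of parameters rather than stored. As an illustration only (a plain row-column block interleaver, not the exact L-type construction of [7]; the parameters here are chosen so that L divides N):

```python
def block_interleaver(N, L):
    """Row-column permutation: write N indices into rows of length L,
    then read them back out column by column. Requires L to divide N."""
    assert N % L == 0
    rows = N // L
    # index i sits at (row, col) = (i // L, i % L); read out column-major
    return [r * L + c for c in range(L) for r in range(rows)]

perm = block_interleaver(24, 8)
assert sorted(perm) == list(range(24))   # a valid permutation of 0..23
```

The same two parameters regenerate the permutation at the decoder, which is what makes the encoder construction so lightweight.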

Compared to [1], we still obtain similar simulation results, which tells us that using these interleavers at such small lengths can give performance comparable to what we can achieve with other encoders and parity-check matrix constructions. Figures 6.19 and 6.20 show our simulation plots for both L-type interleavers, and Table 6.3 gives the simulated bit error probabilities for each plot.

Figure 6.17. SNR vs. BER plot with modified L-type interleaver (L = 8, N = 408).


Figure 6.18. SNR vs. BER plot with an L-type interleaver (L = 8, N = 408).

[Plot: Eb/No (dB) vs. bit error probability; RA code and uncoded BPSK curves.]

Figure 6.19. SNR vs. BER plot with modified L-type interleaver (L = 30, N = 408).


Figure 6.20. SNR vs. BER plot with L-type interleaver (L = 30, N = 408).

Table 6.3. N = 408 Simulation Points

SNR (dB) 1 2 3 4 5
BER (GC) 0.0655 0.011 0.0023 0.0016 8.3546E-04
BER(RA) 0.0493 0.0082 6.1734E-04 5.5632E-05 1.1764E-05
BER (RA w/ Mod. L-type) L=8 0.0481 0.0083 5.1269E-04 1.3724E-05 1.7155E-06
BER (RA w/ Mod. L-type) L=30 0.0503 0.0092 6.9135E-04 4.1172E-05 4.9015E-06
BER(RA w/ L-type) L=8 0.0464 0.0068 5.5681E-04 1.9361E-05 0
BER(RA w/ L-type) L=30 0.0454 0.0093 6.5043E-04 3.2350E-05 9.8029E-07
BER(Uncoded) 0.0563 0.0375 0.0229 0.0125 0.006

[Plot for Figure 6.20: Eb/No (dB) vs. bit error probability; RA code and uncoded BPSK curves.]


6.1.4 Code Length, N = 816

Unlike the simulations performed in [1], we were able to simulate our RA parity-check matrices at a code length of N = 816. Although simulations at this length strain the computer's processing power and take a long time to run, we were still able to complete them and present the results here. Our goal was twofold. First, we wanted to know whether, at a higher code length, the modified L-type interleaver becomes a better option than a regular L-type interleaver. Second, we wanted to see whether a higher L-parameter at a higher code length would improve our performance curves further.

Our simulations show that at this higher code length, the modified L-type interleaver begins to show a slight improvement over the L-type interleaver. Although the plots in Figures 6.21 and 6.22 show little difference between the two, they do show that, compared to the other code lengths, the RA code with a modified L-type interleaver is breaking away from its close similarity to the L-type. We predict that as the code length increases, the modified L-type interleaver will push the performance curve further down than a regular L-type interleaver will. When comparing to the results MacKay showed in his paper, our results still underperform slightly. This was expected, since one of the main disadvantages of RA codes is that they perform well only at short and medium code lengths [8]. It is here, therefore, that enhancements to RA codes could markedly improve their performance curves.

Finally, when changing the L-parameter to 𝐿 = 30, the results were somewhat mixed relative to 𝐿 = 8. As can be seen from Figures 6.23 and 6.24, with a regular L-type interleaver the bit error probabilities at the higher SNR points are lower than those with 𝐿 = 8. Comparing the modified L-type interleaver at 𝐿 = 8 and 𝐿 = 30, we see similar performance, so the smaller L parameter still performs well. We therefore conclude that as higher code lengths are attempted, a change in L does not make much of a difference. However, we cannot be completely certain unless we try N values of 1000 and above.
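Extending these runs to N = 1000 and beyond is mostly a matter of compute time; the Monte Carlo loop keeps the same shape as the uncoded baseline sketched below (stdlib only, BPSK over AWGN; the function and parameter names are ours):

```python
import random

def mc_uncoded_bpsk_ber(ebno_db, n_bits, seed=1):
    """Monte Carlo BER estimate for uncoded BPSK on an AWGN channel."""
    rng = random.Random(seed)
    ebno = 10 ** (ebno_db / 10)
    sigma = (1 / (2 * ebno)) ** 0.5        # noise std for unit-energy symbols
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        x = 1.0 if bit else -1.0           # BPSK mapping
        y = x + rng.gauss(0, sigma)        # AWGN channel
        errors += (y > 0) != bool(bit)     # hard decision vs. transmitted bit
    return errors / n_bits

ber = mc_uncoded_bpsk_ber(1, 100_000)      # close to the 0.0563 table entry
```

For a coded run, the channel and counting stay the same; only the encoder and decoder around them change, which is where the long computation times come from.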


Figure 6.21. SNR vs. BER plot with L-type interleaver (L = 8, N = 816).

Figure 6.22. SNR vs. BER plot with modified L-type interleaver (L = 8, N = 816).

[Plots for Figures 6.21 and 6.22: Eb/No (dB) vs. bit error probability; RA code and uncoded BPSK curves.]


Figure 6.23. SNR vs. BER plot with L-type interleaver (L = 30, N = 816).

[Plot: Eb/No (dB) vs. bit error probability; RA code and uncoded BPSK curves.]

Figure 6.24. SNR vs. BER plot with modified L-type interleaver (L = 30, N = 816).

[Plot: Eb/No (dB) vs. bit error probability.]


Table 6.4. N = 816 Simulation Points

SNR (dB) 1 2 3 4 5

BER (RA w/ Mod. L-type) L=8 0.0464 0.0039 9.5579E-05 1.4704E-06 0.0000E+00

BER (RA w/ Mod. L-type) L=30 0.0473 0.0049 1.2536E-04 2.5733E-06 0.0000E+00

BER (RA w/ L-type) L=8 0.0473 0.0059 3.4274E-04 3.0267E-05 1.2254E-06

BER( RA w/ L-type) L=30 0.0494 0.0056 2.4164E-04 6.4944E-06 4.9015E-07

BER (Uncoded) 0.0563 0.0375 0.0229 0.0125 0.006

6.2 SUMMARY

Our simulation results demonstrate deterministic construction methods for practical RA interleavers. They show that simple, straightforward interleaver structures can give excellent decoding performance for RA codes. When compared against Gallager codes, for example, RA codes surpassed their performance at every code length attempted. When compared against the simulation results of [1] at small code lengths, our results are similar, or slightly better. This shows that with a simpler parity-check matrix and encoder construction, we can achieve the same performance at small lengths using RA codes. Focusing on the interleaver structures within RA codes, the deterministic nature of the interleaver gives improved performance over random interleavers for short codes without hindering performance for long codes [7], giving way to a simpler encoding construction. We therefore conclude that RA codes with structured interleavers achieve performance similar to regular LDPC codes without requiring a complex encoder implementation.
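The encoder simplicity summarized above comes from the repeat-interleave-accumulate chain itself: repeat each information bit q times, permute the repeated stream, then run it through an accumulator (a running XOR). A minimal sketch of systematic RA encoding under those assumptions (q and the fixed toy permutation are illustrative, not our simulated parameters):

```python
def ra_encode(info_bits, q, perm):
    """Systematic RA encoding: codeword = info bits followed by parity bits."""
    repeated = [b for b in info_bits for _ in range(q)]   # repeat each bit q times
    interleaved = [repeated[p] for p in perm]             # deterministic permutation
    parity, acc = [], 0
    for bit in interleaved:                               # accumulator: running XOR
        acc ^= bit
        parity.append(acc)
    return info_bits + parity

# Toy parameters: q = 3 and a fixed length-12 permutation.
codeword = ra_encode([1, 0, 1, 1], 3, [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11])
assert codeword[:4] == [1, 0, 1, 1]        # systematic part is the message itself
assert len(codeword) == 4 * (1 + 3)        # rate 1/(1+q) before any puncturing
```

No matrix multiplication is needed at the encoder, which is the practical advantage over a generic LDPC generator-matrix encoder.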


CHAPTER 7

CONCLUSION AND FUTURE WORK

Low Density Parity-Check codes have not yet been fully exploited. Even though they are very popular today, there is still vast room for improvement. In this thesis, we showed how systematic RA codes can serve as a simple alteration of regular LDPC codes. By utilizing an RA parity-check matrix and encoder, we showed that we can achieve similar or better performance curves at small to medium code lengths than those simulated with regular LDPC codes. Although this work is just a glimpse of the potential of RA codes, it should spark curiosity for further research into RA codes, such as the construction of irregular RA codes and their degree distributions, or non-systematic RA codes with turbo decoding.

As a recommendation for further work, we suggest examining how the systematic RA codes shown here perform at longer code lengths, for example 1,000, 2,000 or even 10,000 bits. Due to the processing power available, we were not able to explore these ranges. Further research into interleaver design should also be pursued, specifically combinatorial design interleavers, which have shown great potential for improving performance compared to randomly or structurally constructed interleavers [9]. Finally, these systematic RA codes could be tried with the same parameters using modified weight-3 accumulators [12]. Changing the accumulator creates weight-3 columns in 𝐻2, and it would be interesting to see how this compares to regular LDPC codes at small to medium lengths.
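For context on the 𝐻2 mentioned above: with the standard weight-2 accumulator, 𝐻2 is the dual-diagonal ("staircase") matrix, with ones on the main diagonal and the first subdiagonal, which is exactly why parity bits can be computed by back-substitution. A sketch of that structure (the weight-3 variant of [12] would add a third one per column, which we do not reproduce here):

```python
def dual_diagonal(m):
    """m x m H2 for a weight-2 accumulator: H2[i][i] = H2[i][i-1] = 1."""
    H2 = [[0] * m for _ in range(m)]
    for i in range(m):
        H2[i][i] = 1
        if i > 0:
            H2[i][i - 1] = 1
    return H2

H2 = dual_diagonal(4)
# Every column except the last has weight 2; the last has weight 1.
weights = [sum(H2[r][c] for r in range(4)) for c in range(4)]
assert weights == [2, 2, 2, 1]
```

Row i of this 𝐻2 encodes the accumulator recursion p_i = p_{i-1} XOR v_i, so solving 𝐻2 for the parity bits is a single forward pass rather than a matrix inversion.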


REFERENCES

[1] D. Dechene and K. Peets, “Simulated performance of low-density parity-check codes: A MATLAB implementation,” Ph.D. dissertation, Fac. Eng., Lakehead Univ., Thunder Bay, ON, Canada, 2006.

[2] S. Lin and D. J. Costello Jr., Error Control Coding: Fundamentals and Applications, 2nd ed. Upper Saddle River, NJ, USA: Pearson Prentice Hall, 2004.

[3] C. Langton, “Tutorial 12: Encoding and decoding with convolutional codes,” Complex to Real, Palo Alto, CA, USA, 2012. [Online]. Available: http://complextoreal.com/tutorials/tutorial-12-convolutional-coding-and-decoding-made-easy/#.VLRbHivF-So

[4] S. J. Johnson, Iterative Error Correction: Turbo, Low-Density Parity-Check and Repeat-Accumulate Codes. Cambridge, UK: Cambridge Univ. Press, 2010.

[5] R. G. Gallager, Low-Density Parity-Check Codes. Cambridge, MA, USA: MIT Press, 1963.

[6] S. J. Johnson, “Introduction to low-density parity-check codes,” School Elect. Eng. and Comput. Sci., Univ. Newcastle, Callaghan, New South Wales, Australia, 2006.

[7] S. J. Johnson and S. R. Weller, “Practical interleavers for repeat-accumulate codes,” IEEE Trans. Commun., vol. 57, no. 5, pp. 1225-1228, May 2009.

[8] W. Ryan and S. Lin, Channel Codes: Classical and Modern. Cambridge, UK: Cambridge Univ. Press, 2009.

[9] S. J. Johnson and S. R. Weller, “Combinatorial interleavers for systematic regular repeat-accumulate codes,” IEEE Trans. Commun., vol. 56, no. 8, pp. 1201-1206, Aug. 2008.

[10] S. Kalamkar. (2013). Gallager's Construction of Parity Check Matrix for LDPC Codes [Online]. Available: http://www.mathworks.com/matlabcentral/fileexchange/44454-gallager-s-construction-of-parity-check-matrix-for-ldpc-codes/content//Gallager_construction_LDPC.m

[11] B. Sklar, Digital Communications: Fundamentals and Applications, 2nd ed. Upper Saddle River, NJ, USA: Prentice Hall, 2001.

[12] S. Johnson and S. Weller, “Interleaver and accumulator design for systematic repeat-accumulate codes,” in Proc. 6th Australian Commun. Theory Workshop, 2005, pp. 1-7.

