
One-Bit LDPC Message Passing Decoding Based on Maximization of Mutual Information

LDPC Workshop

September 29, 2009

Tokyo Institute of Technology, Tokyo

ZOU Sheng and Brian M. Kurkoski [email protected]

University of Electro-Communications

Tokyo, Japan

University of Science and Technology of China

Hefei, China

Zou and Kurkoski. University of Electro-Communications /15

Conventional LDPC Message Quantization

2

Belief-propagation decoding of LDPC is well understood. Variable node function:

Z = Y1 + Y2 + Y3

where Z and the Yi are continuous values:

[Figure: variable node with inputs Y1, Y2, Y3 and output Z. Each message is a log-likelihood ratio,

Y = log( Pr[y | x = 1] / Pr[y | x = 0] ),

shown on an axis from -2 to 1.75 in steps of 0.25, with each level labeled by its 4-bit fixed-point representation (e.g. 11.00, ..., 01.11).]

VLSI Implementation: Y and Z are quantized using fixed-point representations.

Increasing the number of bits improves performance, but increases complexity. Typically, 6-7 bits per message are needed to match floating-point performance.

Can we do something better?
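The fixed-point message representation above can be sketched as follows; the step size, bit width, and clipping range here are illustrative assumptions, not the values of any particular VLSI design.

```python
import numpy as np

def quantize_llr(llr, n_bits=4, step=0.25):
    """Uniformly quantize an LLR to an n_bits fixed-point value.

    Values are rounded to the nearest multiple of `step` and clipped to the
    two's-complement representable range, as a fixed-point datapath would.
    (Illustrative parameters, not from the slides.)
    """
    levels = 2 ** n_bits
    # Representable integer codes: -levels/2 .. levels/2 - 1
    code = np.clip(np.round(llr / step), -levels // 2, levels // 2 - 1)
    return code * step

# A variable node then adds (quantized) incoming messages:
y1, y2, y3 = 0.8, -0.3, 1.1
z = quantize_llr(y1 + y2 + y3)
```

With 4 bits and step 0.25 the representable range is [-2.0, 1.75], matching the axis sketched above.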


Theory

Compute fundamental limits: capacity, bounds.

Coding theory:
• find good codes
• efficient decoding algorithms
• implement in C/Matlab

Break the wall between Theory and Practice

3

Broad Research Goal: Break this wall
~ Find the fundamental limits on implementation complexity ~

• Theory: Find and solve new information theoretic problems
• Practice: Improve the performance/complexity tradeoff

Cheaper devices, longer battery life, etc.

Practice

Circuits for mobile communications, storage, etc.

Implement in VLSI:
• low power consumption
• high performance

Basic questions:
• How to quantize?
• Which decoding algorithm?


History of Quantization of Message-Passing Algorithms

BCJR Algorithm: vector quantization of the state metrics
• Convolutional codes, erasure channel: exact quantization [Globe 2003, ISIT 2004]
• Inter-symbol interference channel [ISIT 2005]
• High complexity

GF(q) LDPC codes: vector quantization of q-ary messages
• “Heuristic” vector quantization [ITA 2007]
• Good only for certain channels

Vector quantization is hard! Try scalar quantization

Binary LDPC codes: quantize messages to maximize mutual information
• Channel quantization ≈ message-passing decoding maps [Globecom 2008]
• Algorithm to quantize a DMC [ITW 2010], proof of optimality [sub. IT 2011]

Typical VLSI: 6-7 bits/message → our method: 4 bits/message

Finite-length binary codes (this talk):
• Show results hold for finite-length codes
• Look at one-bit-per-message LDPC decoding, compare with bit-flipping

Above papers are my joint work with P. Siegel, J. Wolf, K. Yamaguchi, K. Kobayashi and H. Yagi.

[Margin labels: “Vector quantization” (BCJR, GF(q) work); “Scalar quantization” (binary LDPC work).]


Mutual information of a discrete memoryless channel (DMC):

Channel capacity C is the maximization of mutual information (over input distribution pj):

• Arimoto-Blahut algorithm computes the capacity.• Mutual information gives highest achievable rate R

Thus:Maximization of mutual information is an excellent metric for quantization!

Background: Maximizing Mutual Information

5

[Figure: mutual information I(X;Z) is a concave function of the input probability p_j; block diagram: X → DMC → Z with transition probabilities Q_{k|j}.]

I(X;Z) = \sum_k \sum_j p_j Q_{k|j} \log \frac{Q_{k|j}}{\sum_{j'} p_{j'} Q_{k|j'}}

R \le C = \max_{p_j} I(X;Z)


Suppose a bit X is transmitted over two independent DMCs.
Goal: combine Y1 and Y2 into Z.
We want to maximize the mutual information I(X;Z). How to combine?

It depends upon the alphabet size of Z:
• Easy. Size 9: trivial to get I(X;Z) = I(X;Y1,Y2)
• Easy. Size 2: make hard decisions
• Hard. Size 3: let me tell you....

A Question For You

6

X ∈ {0, 1}
Y1 ∈ {1, 2, 3}
Y2 ∈ {1, 2, 3}
Z ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9}, Z ∈ {1, 2}, or Z ∈ {1, 2, 3}
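For alphabets this small, the question can also be answered by brute force: enumerate every map from the nine pair-outputs down to three symbols and keep the one with the largest I(X;Z). A sketch, with a hypothetical component DMC (the transition probabilities are ours, for illustration):

```python
import itertools
import numpy as np

def mutual_information(p, Q):
    """I(X;Z) in bits for input distribution p and transition matrix Q[j, k]."""
    q = p @ Q
    with np.errstate(divide="ignore", invalid="ignore"):
        t = Q * np.log2(Q / q)
    return float(p @ np.nan_to_num(t).sum(axis=1))

# Hypothetical component DMC: binary input, ternary output
P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.2, 0.7]])
p = np.array([0.5, 0.5])

# Product channel: Pr[(y1, y2) | x] = P[x, y1] * P[x, y2], nine outputs
Qprod = np.array([[P[x, a] * P[x, b] for a in range(3) for b in range(3)]
                  for x in range(2)])

# Brute force over all 3^9 maps from the nine pair-outputs to Z in {0, 1, 2}
best_I, best_map = -1.0, None
for m in itertools.product(range(3), repeat=9):
    Qz = np.zeros((2, 3))
    for i, k in enumerate(m):        # merge pair-output i into symbol k
        Qz[:, k] += Qprod[:, i]
    I = mutual_information(p, Qz)
    if I > best_I:
        best_I, best_map = I, m
```

Brute force is only feasible at this scale; the “DMC Quantization Algorithm” on the next slide avoids the exponential search.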


Answer: “DMC Quantization Algorithm”

Create a “product channel”. K: number of quantizer outputs.

K = 9: a one-to-one mapping → no loss of mutual information.
K ≤ 8: the “DMC Quantization Algorithm” finds the optimal quantizer.

[K. and Yagi, sub. IT 2011, http://arxiv.org/abs/1107.5637]
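The slides do not spell the algorithm out, but the Kurkoski-Yagi quantizer for binary-input DMCs can be sketched as a dynamic program over contiguous groups of LLR-sorted outputs; the function names, structure, and test channel below are our reconstruction, and the arXiv paper above gives the exact formulation.

```python
import numpy as np

def optimal_binary_quantizer(p, Q, K):
    """DP sketch of the optimal K-level quantizer of a binary-input DMC Q[x, y]
    maximizing I(X;Z). Assumes strictly positive transition probabilities;
    the optimal quantizer merges contiguous groups after sorting by LLR."""
    order = np.argsort(np.log(Q[1] / Q[0]))    # sort outputs by LLR
    Q = Q[:, order]
    N = Q.shape[1]

    def g(a, b):
        # Partial mutual information of merging outputs a..b-1 into one symbol
        t = p * Q[:, a:b].sum(axis=1)          # joint Pr[X = x, Z = cluster]
        s = t.sum()                            # Pr[Z = cluster]
        return float((t * np.log2(t / (p * s))).sum())

    S = np.full((N + 1, K + 1), -np.inf)       # S[b, k]: best MI, b outputs, k clusters
    back = np.zeros((N + 1, K + 1), dtype=int)
    S[0, 0] = 0.0
    for b in range(1, N + 1):
        for k in range(1, min(b, K) + 1):
            for a in range(k - 1, b):
                v = S[a, k - 1] + g(a, b)
                if v > S[b, k]:
                    S[b, k], back[b, k] = v, a
    bounds, b = [], N                          # recover cluster boundaries
    for k in range(K, 0, -1):
        bounds.append(b)
        b = back[b, k]
    return S[N, K], order, bounds[::-1]

# Example: a 4-output channel quantized to K = 2 (reduces to a BSC(0.3))
p = np.array([0.5, 0.5])
Q = np.array([[0.4, 0.3, 0.2, 0.1],
              [0.1, 0.2, 0.3, 0.4]])
I2, order, bounds = optimal_binary_quantizer(p, Q, 2)
```

The returned boundaries are indices into the LLR-sorted outputs, so each quantizer cell is an LLR interval, as in the figures that follow.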

7

[Figure: the product channel. X is transmitted over two DMCs with outputs Y1, Y2 ∈ {1, 2, 3}; the pairs (Y1, Y2), such as (1,1), (2,1), (2,2), (2,3), (3,1), (3,2), (3,3), form a single channel with nine outputs.]


From the quantizer, we can easily construct a table that gives Z from Y1 and Y2.

This table is a decoding rule! Y1 and Y2 are inputs at a variable node; Z is the output.

This easily extends to check nodes, multiple inputs, etc.: message-passing decoding which maximizes mutual information.

From Channel Quantizers to Decoding Algorithm

8

[Figure: the quantizer maps the pairs (Y1, Y2) of the product channel to Z ∈ {1, 2, 3}.]

Z as a function of Y1 (rows) and Y2 (columns):

        Y2=1  Y2=2  Y2=3
Y1=1      1     1     1
Y1=2      1     2     2
Y1=3      3     3     3
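Implemented directly, the decoding rule is just a table lookup; a minimal sketch using the table above (messages take values in {1, 2, 3}, indices adjusted for 0-based lists):

```python
# Decoding rule from the quantizer: Z = T[Y1][Y2]
T = [[1, 1, 1],
     [1, 2, 2],
     [3, 3, 3]]

def variable_node(y1, y2):
    """Variable-node update by table lookup (messages in {1, 2, 3})."""
    return T[y1 - 1][y2 - 1]
```

No arithmetic is needed at decode time; the mutual-information optimization is done once, offline, when the table is built.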


[Figure: optimal 8-level quantizer boundaries for the binary-input AWGN channel, plotted against noise variance σ² from 0.1 to 0.6.]

Quantization of a Binary-Input AWGN Channel

9

Before density evolution, we need to quantize the AWGN channel.

Use the Quantization Algorithm:

• Quantization Algorithm cannot operate on continuous output channels

• First create a DMC (using uniform quantization)

• Then apply the Quantization Algorithm

Example:

• AWGN channel at various noise variances
• DMC with 30/500 outputs
• quantized to 8 outputs (boundaries are shown)
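The first step of the two-step pipeline (uniform quantization of the AWGN channel to a DMC, before the Quantization Algorithm is applied) amounts to integrating Gaussian densities over bins; a sketch, assuming inputs ±1 and an illustrative clipping range (the range and the x → mean convention are our assumptions):

```python
import numpy as np
from math import erf, sqrt

def awgn_to_dmc(sigma2, n_outputs=30, lo=-2.0, hi=2.0):
    """Uniformly quantize a binary-input AWGN channel (inputs ±1) into a DMC.

    Returns Q[x, k] = Pr[Y in bin k | X = x] from Gaussian CDFs; the two
    outer bins absorb the tails. Bin edges span [lo, hi] (assumed range).
    """
    sigma = sqrt(sigma2)
    edges = np.linspace(lo, hi, n_outputs - 1)         # interior boundaries

    def cdf(y, mean):
        return 0.5 * (1 + erf((y - mean) / (sigma * sqrt(2))))

    Q = np.zeros((2, n_outputs))
    for x, mean in enumerate([+1.0, -1.0]):            # x=0 → +1, x=1 → -1
        c = np.array([cdf(e, mean) for e in edges])
        Q[x] = np.diff(np.concatenate(([0.0], c, [1.0])))
    return Q
```

The resulting 30-output (or 500-output) DMC is then fed to the Quantization Algorithm to find the 8 mutual-information-optimal outputs.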


Infinite Block Length — (3,6) Regular LDPC: Density Evolution Noise Thresholds

10

[Figure: density evolution noise threshold (channel variance σ²) versus the number of AWGN channel quantization levels K_ch, log scale from 2 to 32; thresholds from 0.35 to 0.8. Shown: unquantized messages (conventional DE), the AWGN noise threshold, and the BP threshold for DMCs.]

[Same figure, later builds: curves for 1, 2, 3, and 4 bits/message are added; the 4 bits/message curve is very close to the unquantized one. Also marked: channel quantized to 1 bit with an unquantized decoder, and an unquantized channel with a 1-bit decoder (a low-complexity decoder).]

What about finite-length codes? Investigate the proposed technique with one bit per message:

• Variable-check message consists of one bit
• Decoding maps found using the “DMC Quantization Algorithm”
• Channel is AWGN quantized to 16 levels
• Compare with the “Improved Modified Weighted Bit Flipping” (IMWBF) algorithm [Jiang et al, Comm Letters, 2005]

Check node map: the map below is “obvious”, but it was obtained automatically, using optimization of mutual information.

Proposed One-Bit Message-Passing vs. Weighted Bit Flipping

11

Number of 1's at input | Output
even                   |   0
odd                    |   1
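The check-node map above is the parity (XOR) of the incoming one-bit messages; as a minimal sketch:

```python
def check_node(messages):
    """One-bit check-node map: output 1 iff an odd number of input bits are 1.

    This is the XOR of the inputs, i.e. the parity map recovered
    automatically by the mutual-information optimization.
    """
    parity = 0
    for b in messages:
        parity ^= b
    return parity
```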


Variable Node Map — SNR of 3 dB — Automatically Obtained Using DMC Quantizer

12

Rows: check message (number of 1's); columns: channel message.

Iteration 1:

            -8 -7 -6 -5 -4 -3 -2 -1  1  2  3  4  5  6  7  8
check = 0:   0  0  0  0  0  0  0  0  0  0  0  1  1  1  1  1
check = 1:   0  0  0  0  0  0  0  0  1  1  1  1  1  1  1  1
check = 2:   0  0  0  0  0  1  1  1  1  1  1  1  1  1  1  1

Iterations 2-3:

            -8 -7 -6 -5 -4 -3 -2 -1  1  2  3  4  5  6  7  8
check = 0:   0  0  0  0  0  0  0  0  0  0  0  0  1  1  1  1
check = 1:   0  0  0  0  0  0  0  0  1  1  1  1  1  1  1  1
check = 2:   0  0  0  0  1  1  1  1  1  1  1  1  1  1  1  1

If message disagrees with channel, do not flip bit

If two messages disagree, use channel’s hard decision

As iterations increase, the influence of check message becomes stronger

The maps are almost always symmetrical

[Figure: degree-3 node; inputs are the check message and the channel message.]
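Read as thresholds, the maps above say: output 1 when the channel message reaches a threshold that depends on the number of 1's from the checks and on the iteration. A sketch of that reading (the threshold table is our interpretation of the maps, and iterations beyond 3 are assumed to reuse the last map):

```python
# Thresholds on the channel message (in -8..-1, 1..8), by iteration and by
# number of 1's arriving from the check nodes — our reading of the tables.
THRESH = {1: {0: 4, 1: 1, 2: -3},     # iteration 1
          2: {0: 5, 1: 1, 2: -4}}     # iterations 2-3 (and later, assumed)

def variable_node_map(iteration, ones_from_checks, channel_msg):
    """One-bit variable-node output for a degree-3 node at SNR 3 dB."""
    t = THRESH[min(iteration, 2)][ones_from_checks]
    return 1 if channel_msg >= t else 0
```

This makes the slide's observations concrete: with no check disagreement the channel dominates, and as iterations increase the check messages pull the threshold further from zero.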


One-Bit Message Passing Decoding — Simulation

13

[Figure: block error rate versus Eb/N0 (1 to 10 dB), log scale from 10^0 down to 10^-7. Curves: one-bit message passing (“New”), “New (Mismatch)”, bit-flipping (IMWBF), and belief propagation.]

Channel:
• AWGN quantized to 16 levels

Code:
• rate 1/2
• (816, 408), from MacKay's web site

One-bit message passing has about the same performance as bit flipping.

More complicated BP has better performance.



Complexity Comparison

IMWBF algorithm must compute a flipping function:

Same complexity as one iteration of min-sum decoding!

At high SNR, only a few iterations needed.

Flipping function is high fraction of total complexity.

The two algorithms required about the same amount of computer time.

14

[Figure: average number of iterations versus Eb/N0 (3 to 9 dB), from 0 to 60. Curves: “New”, “New (Mismatch)”, and IMWBF.]

w_m = \min_i |y_i|

e = \sum_m (2 s_m - 1) \cdot w_m - \alpha |y_n|



Conclusions

There is a “wall” between information theory and VLSI implementation.

Quantization of messages is important for practical implementations:
• Reducing quantization can reduce power consumption, cost, etc.

New perspective breaks the wall:
• Implementation is an information theoretic problem
• The “DMC Quantization Algorithm” optimizes mutual information

Already known: how to optimally quantize channels
• For infinite-length codes, reduce to 4 bits/message (from 6-7 bits)

In this talk, we showed: for finite-length codes, one-bit-per-message decoders perform as well as advanced bit-flipping algorithms.

Open questions:
• Better understanding of the performance/complexity trade-off
• The role of symmetry
• Implementation in VLSI

15
