LT Codes. Paper by Michael Luby, FOCS ‘02. Presented by Ashish Sabharwal, Feb 26, 2003, CSE 590vg.



Slide 2: Binary Erasure Channel

Code distance d ⇒ can decode d − 1 erasures.

Probabilistic model: bits get erased with prob p. (Shannon) Capacity of the BEC = 1 − p.

In particular, even p > 1/2 is decodable!

Example (“packet loss”):

Input 00101 --encode--> Codeword 10100101 --BEC--> Received 10?001?? --decode--> Input 00101
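The channel itself is a one-liner to simulate. A minimal sketch (the function name `bec` and the use of None for '?' are our own conventions, not from the paper):

```python
import random

def bec(codeword, p, rng=random):
    """Binary erasure channel: each bit is erased independently
    with probability p; an erased bit is returned as None ('?')."""
    return [None if rng.random() < p else bit for bit in codeword]
```

For example, `bec([1,0,1,0,0,1,0,1], 0.25)` might return the received word with a couple of positions replaced by None.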

Slide 3: LT Codes: Encoding

Example: a code bit with degree d = 2 has input-bit neighbors 1 and 0, so its value is 1 XOR 0 = 1.

1. Choose degree d from a distribution
2. Pick d neighbors uniformly at random
3. Compute the XOR of those d input bits
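The three steps above can be sketched directly (an illustrative helper, not code from the paper; the degree distribution is passed as (degree, probability) pairs, and the neighbor list is returned explicitly for illustration, whereas a real system would derive it from a shared key):

```python
import random

def lt_encode_bit(input_bits, degree_dist, rng=random):
    """Produce one LT code bit from input_bits.

    degree_dist: list of (degree, probability) pairs summing to 1.
    Returns (code_bit, neighbors).
    """
    k = len(input_bits)
    # 1. Choose degree d from the distribution
    r = rng.random()
    d = degree_dist[-1][0]
    for deg, prob in degree_dist:
        if r < prob:
            d = deg
            break
        r -= prob
    # 2. Pick d neighbors uniformly at random (without replacement)
    neighbors = rng.sample(range(k), d)
    # 3. Compute the XOR of the chosen input bits
    code_bit = 0
    for i in neighbors:
        code_bit ^= input_bits[i]
    return code_bit, neighbors
```

Each code bit is generated independently, which is what lets the encoder emit an unbounded stream of them.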

Slide 4: LT Codes: Encoding

(Figure: the full bipartite graph between the input bits and the codeword bits; each code bit is the XOR of its input-bit neighbors.)

Slide 5: LT Codes: Decoding

1. Identify a code bit of remaining degree 1
2. Recover the corresponding input bit

(Figure: the bipartite graph with some bits still unknown, marked '?'.)

Slide 6: LT Codes: Decoding

(Figure: a degree-1 code bit recovers an input bit: 1 = 0 XOR 1.)

3. Update the neighbors of this input bit
4. Delete the edges
5. Repeat
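Steps 1-5 form the classic peeling decoder. A minimal sketch (the function name and its (code_bit, neighbors) input format are our own; it assumes only unerased symbols are passed in):

```python
def lt_decode(k, symbols):
    """Peeling decoder: repeatedly take a code bit of remaining
    degree 1, recover its input bit, XOR that bit out of every other
    code bit, and delete the edges.

    symbols: list of (code_bit, neighbor_indices) pairs.
    Returns the k input bits; None marks positions the process
    could not recover (decoding unsuccessful there).
    """
    work = [[bit, set(nbrs)] for bit, nbrs in symbols]
    recovered = [None] * k
    ripple = [s for s in work if len(s[1]) == 1]  # degree-1 code bits
    while ripple:
        sym = ripple.pop()
        if len(sym[1]) != 1:
            continue                    # already consumed earlier
        i = sym[1].pop()                # step 1: remaining degree 1
        if recovered[i] is not None:
            continue
        recovered[i] = sym[0]           # step 2: recover the input bit
        for s in work:
            if i in s[1]:
                s[0] ^= recovered[i]    # step 3: update neighbors
                s[1].discard(i)         # step 4: delete the edges
                if len(s[1]) == 1:
                    ripple.append(s)    # step 5: repeat
    return recovered
```

The `ripple` list here is exactly the ripple analyzed later in the deck: the set of bits waiting to be processed; decoding dies when it empties before all input bits are recovered.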

Slide 7: LT Codes: Decoding

(Figure: the recovered input bit's edges have been deleted; decoding continues.)

Slide 8: LT Codes: Decoding

(Figure: a further step recovers 0 = 1 XOR 1, but no code bit of remaining degree 1 is left.)

Decoding unsuccessful!

Slide 9: LT Codes: Features

• Binary, efficient
• Code bits can arrive in any order
• Probabilistic model
• No preset rate: generate as many or as few code bits as the channel requires
• Almost optimal (Reed-Solomon codes are inefficient; Tornado codes are optimal and linear time, but have a fixed rate)

Slide 10: Larger Encoding Alphabet

Why? Less overhead.

Partition the input into m-bit chunks; an encoding symbol is the bit-wise XOR of its neighbor chunks.

We'll think of these as binary codes.

Slide 11: Caveat: Transmitting the Graph

• Send the degree + the list of neighbors with each code bit
• Associate a key with each code bit; encoder and decoder apply the same function to the key to compute the neighbors
• Share a random seed for a pseudo-random generator
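The key-based option can be sketched as: seed a PRNG with the symbol's key, then draw the degree and neighbors from it; since encoder and decoder run the identical function, only the key (or a shared seed that generates the keys) travels. A sketch with invented names, assuming both sides agree on Python's `random.Random` as the generator:

```python
import random

def graph_from_key(key, k, degree_cdf):
    """Derive (degree, neighbors) for one code symbol from its key.

    degree_cdf: list of (degree, cumulative_probability) pairs,
    last cumulative probability 1.0. Encoder and decoder both call
    this, so the graph never has to be transmitted explicitly.
    """
    rng = random.Random(key)             # the key seeds the PRNG
    r = rng.random()
    d = degree_cdf[-1][0]
    for deg, cum in degree_cdf:
        if r <= cum:
            d = deg
            break
    neighbors = rng.sample(range(k), d)  # identical picks on both sides
    return d, neighbors
```

Determinism is the whole point: the same key always yields the same degree and neighbor set.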

Slide 12: Outline

The Goal

All 1’s distribution: Balls and Bins case

LT Process; Probabilistic machinery

Ideal Soliton Distribution

Robust Soliton Distribution

Slide 13: The Goal

Construct a degree distribution s.t.

1. Few encoding bits are required for recovery (small t)

2. Few bit operations are needed, i.e. a small sum of degrees (small s)

Slide 14: All 1's distribution: Balls and Bins

All encoding degrees are 1, so each unerased code bit covers one uniformly random input bit: t balls (t unerased code bits) thrown into k bins (the k-bit input).

• Pr [can't recover input] = Pr [some input bit is never covered] ≤ k · (1 − 1/k)^t ≈ k · e^(−t/k)

• Pr [failure] ≤ δ guaranteed if t ≥ k ln(k/δ)
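The bound k · e^(−t/k) is easy to sanity-check by simulation (an illustrative sketch; the parameter choices k = 50, δ = 0.05 are arbitrary, not from the slides):

```python
import math
import random

def uncovered_failure_rate(k, t, trials=2000, seed=1):
    """Throw t balls (degree-1 code bits) into k bins (input bits);
    estimate Pr[some input bit is never covered]."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        covered = [False] * k
        for _ in range(t):
            covered[rng.randrange(k)] = True
        failures += not all(covered)
    return failures / trials

k, delta = 50, 0.05
t = math.ceil(k * math.log(k / delta))   # t = k ln(k/delta)
# union bound predicts Pr[failure] <= k*(1 - 1/k)**t ~ k*exp(-t/k) = delta
```

With t = k ln(k/δ) the empirical failure rate lands near δ, while t = k balls leave some bin empty almost always, matching the "too much overhead" complaint on the next slide.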

Slide 15: All 1's distribution: Balls and Bins

t = k ln(k/δ)
s = k ln(k/δ)

GOOD: s is optimal!

BAD: t carries too much overhead; t = k + O(√k · ln²(k/δ)) suffices.

Slide 16: Why is s = k ln(k/δ) optimal?

• s edges thrown from code bits onto a k-bit input = s balls thrown into k bins

• can't recover the input if some input bit is never covered

• Pr [some input bit is never covered] ≈ k · (1 − 1/k)^s ≈ k · e^(−s/k)

• so Pr [failure] ≤ δ needs s ≥ k ln(k/δ)

NOTE: This line of reasoning is not quite right for a lower bound! Use a coupon-collector-type argument.

Slide 17: The LT Process

STATE: covered = { }; processed = { }; ripple = { }; released = { }

ACTION: Init: Release c2, c4, c6

(Figure: bipartite graph with input bits a1, …, a5 and code bits c1, …, c6.)

Slide 18: The LT Process

STATE: released = {c2, c4, c6}; covered = {a1, a3, a5}; processed = { }; ripple = {a1, a3, a5}

ACTION: Process a1

Slide 19: The LT Process

STATE: released = {c2, c4, c6, c1}; covered = {a1, a3, a5}; processed = {a1}; ripple = {a3, a5}

ACTION: Process a3

Slide 20: The LT Process

STATE: released = {c2, c4, c6, c1}; covered = {a1, a3, a5}; processed = {a1, a3}; ripple = {a5}

ACTION: Process a5

Slide 21: The LT Process

STATE: released = {c2, c4, c6, c1, c5}; covered = {a1, a3, a5, a4}; processed = {a1, a3, a5}; ripple = {a4}

ACTION: Process a4

Slide 22: The LT Process

STATE: released = {c2, c4, c6, c1, c5, c3}; covered = {a1, a3, a5, a4, a2}; processed = {a1, a3, a5, a4}; ripple = {a2}

ACTION: Process a2

Slide 23: The LT Process

STATE: released = {c2, c4, c6, c1, c5, c3}; covered = {a1, a3, a5, a4, a2}; processed = {a1, a3, a5, a4, a2}; ripple = { }

ACTION: Success!

Slide 24: The LT Process: Properties

The LT process corresponds to decoding.

When a code bit cp is released:
• the step at which this happens is independent of the other cq's
• the input bit that cp covers is independent of the other cq's

Slide 25: Ripple size

Desired property of the ripple:
• Not too large: redundant covering
• Not too small: might die prematurely

GOAL: a “good” degree distribution, under which the ripple neither grows nor shrinks: 1 input bit added per step. Why??

Slide 26: Degree Distributions

Degrees of code bits are chosen independently.

μ(d) = Pr [degree = d]

All 1's distribution: μ(1) = 1, μ(d) = 0 for d > 1; initial ripple = all input bits (“All-At-Once distribution”).

Slide 27: Machinery: q(d,L), r(d,L), r(L)

L = |unprocessed input bits|; L runs down k, k−1, …, 1.

q(d,L) = Pr [ cp is released at L | deg(cp) = d ]

r(d,L) = Pr [ cp is released at L, deg(cp) = d ] = μ(d) · q(d,L)

r(L) = Pr [ cp is released at L ] = Σ_d r(d,L)

r(L) controls the ripple size.

Slide 28: q(d,L)

(The closed-form expression for q(d,L) was a figure on this slide and is not reproduced here.)

Slide 29: Ideal Soliton Distribution, ρ(.)

“Soliton wave”: dispersion balances refraction.

ρ(1) = 1/k; ρ(d) = 1/(d(d−1)) for d = 2, …, k.

Expected degree ≈ ln k; r(L) = 1/k for all L = k, …, 1.
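The Ideal Soliton distribution ρ(1) = 1/k, ρ(d) = 1/(d(d−1)) is concrete enough to check numerically: the probabilities telescope to exactly 1, and the expected degree Σ d·ρ(d) = 1/k + H_{k−1} is within a constant of ln k. A quick sketch:

```python
import math

def ideal_soliton(k):
    """Ideal Soliton: rho(1) = 1/k, rho(d) = 1/(d*(d-1)) for d = 2..k."""
    rho = {1: 1.0 / k}
    for d in range(2, k + 1):
        rho[d] = 1.0 / (d * (d - 1))
    return rho

k = 1000
rho = ideal_soliton(k)
total = sum(rho.values())                        # telescopes to 1
exp_degree = sum(d * p for d, p in rho.items())  # ~ ln k
```

The telescoping is why ρ gives r(L) = 1/k at every step: in expectation exactly one code bit is released per processed input bit.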

Slide 30: Expected Behavior

Choose t = k.

Exp(s) = t · Exp(deg) = k ln k

Exp(initial ripple size) = t · ρ(1) = 1

Exp(# code bits released per step) = t · r(L) = 1

⇒ Exp(ripple size) stays at 1

s is optimal.

Slide 31: We expect too much…

What if the ripple vanishes too soon? In fact, this is very likely!

FIX: the Robust Soliton Distribution

• Higher initial ripple size, ≈ √k · ln(k/δ)
• Expected change per step still 0

Slide 32: Robust Soliton Distribution, μ(.)

R = c · √k · ln(k/δ)

μ(d) = (ρ(d) + τ(d)) / β, where

τ(d) = R/(d·k) for d = 1, …, k/R − 1
τ(k/R) = R · ln(R/δ) / k
τ(d) = 0 for d > k/R

β = Σ_d (ρ(d) + τ(d)), and the decoder collects t = k·β code bits.

Slide 33: Robust Soliton Distribution, μ(.)

t is small: t = kβ ≤ k + O(√k · ln²(k/δ))

Exp(s) is small: Exp(s) = t · Σ_d d·μ(d) = O(k · ln(k/δ))

Slide 34: Robust Soliton Distribution, μ(.)

Initial ripple size is not too small:
Exp(initial ripple size) = t · μ(1) ≈ R ≈ √k · ln(k/δ)

Ripple unlikely to vanish:
the ripple size is a random walk of length k; it deviates from its mean by more than ≈ √k · ln(k/δ) only with small probability.

Slide 35: Robust Release Probability

t · r(L) ≥ L / (L − R) for L ≥ R

t · Σ_{L=R..2R} r(L) ≥ γ · R · ln(R/δ) for some constant γ > 0

Proofs on board…

Slide 36: Pessimistic Filtering

Let Z = the ripple size when L input bits remain unprocessed.

Let h = Pr [a released code bit covers an input bit not in the ripple]; h should be around (L − Z) / L.

If h is lowered to any value ≤ (L − Z)/L, then Pr [success] doesn't increase.

Slide 37: Pessimistic Filtering

Applying this to the robust release probability:

t · r(L) ≥ L/(L − R) turns into t · r(L) = L/(L − R) for the worst-case analysis.

We will use pessimistic filtering again later.

Slide 38: Main Theorem: Pr[success] ≥ 1 − δ

Idea: the ripple size behaves like a random walk of length k with mean R ≈ √k · ln(k/δ).

1. Initial ripple size ≥ R/2 with prob ≥ 1 − δ/3 (Chernoff bound on the # of code bits of degree 1)

2. Ripple does not vanish for L ≥ R, with prob ≥ 1 − δ/3

3. Last R input bits are covered by the τ(k/R) spike, with prob ≥ 1 − δ/3

Slide 39: Ripple does not vanish for L ≥ R

Let X_L = |{code bits released at L}|; Exp(X_L) = L / (L − R).

Let Y_L = a 0-1 random variable with Pr [Y_L = 1] = (L − R) / L (the pessimistically filtered probability that a released code bit covers an input bit not in the ripple).

Let I = any end interval of {R, …, k−1} starting at L, i.e. I = {L, …, k−1}.

Ripplesize_L = R/2 + Σ_{L'∈I} X_{L'}·Y_{L'} − (k − L)

(R/2 is the filtered-down initial ripple size.)

Slide 40: Ripple does not vanish for L ≥ R

|Σ_{L'∈I} X_{L'}·Y_{L'} − (k−L)|

  ≤ |Σ_{L'∈I} (X_{L'}·Y_{L'} − Exp(X_{L'})·Y_{L'})|             [≥ R/4 only with prob ≤ δ/(6k)]

  + |Σ_{L'∈I} (Exp(X_{L'})·Y_{L'} − Exp(X_{L'})·Exp(Y_{L'}))|   [≥ R/4 only with prob ≤ δ/(6k)]

  + |Σ_{L'∈I} Exp(X_{L'})·Exp(Y_{L'}) − (k−L)|                  [= 0]

⇒ Pr [ |Σ_{L'∈I} X_{L'}·Y_{L'} − (k−L)| ≥ R/2 ] ≤ δ/(3k)

Slide 41: Ripple does not vanish for L ≥ R

Recall Ripplesize_L = R/2 + Σ_{L'∈I} X_{L'}·Y_{L'} − (k − L). There are k − R intervals I.

Pr [ the summation deviates by ≥ R/2 for some I ] ≤ δ/3

⇒ 0 < Ripplesize_L < R for all L ≥ R, with prob ≥ 1 − δ/3.

The ripple doesn't vanish!

Slide 42: Main Theorem: Pr[success] ≥ 1 − δ

Idea: the ripple size behaves like a random walk of length k with mean R ≈ √k · ln(k/δ).

1. Initial ripple size ≥ R/2 with prob ≥ 1 − δ/3 (Chernoff bound on the # of code bits of degree 1)

2. Ripple does not vanish for L ≥ R, with prob ≥ 1 − δ/3

3. Last R input bits are covered by the τ(k/R) spike, with prob ≥ 1 − δ/3

Slide 43: Last R input bits are covered

Recall t · Σ_{L=R..2R} r(L) ≥ γ · R · ln(R/δ).

By an argument similar to Balls and Bins,

Pr [the last R input bits are not all covered] ≤ δ/3

Slide 44: Main Theorem

With the Robust Soliton Distribution, the LT Process succeeds with prob ≥ 1 − δ, using

t = k + O(√k · ln²(k/δ)) encoding bits

s = O(k · ln(k/δ)) bit operations