Efficiently decoding Reed-Muller codes from random errors

Posted on 15-Jan-2017


Keywords: error-correcting codes, low-degree polynomials, randomness, linear algebra, questions begging to be resolved

A Puzzle

Your friend picks a polynomial f(x) ∈ F2[x1, ..., xm] of degree r ≈ √m.

She gives you the entire truth table of f, i.e. the value of f(v) for every v ∈ {0,1}^m:

1 0 1 1 0 0 0 1 0 1 1 0 0 0 1 1
0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111

Can you recover f? Of course! Just interpolate.

Now suppose that, before handing you the truth table, she corrupts 49% of the bits:

0 1 0 0 1 1 1 1 0 1 1 0 0 0 1 1
0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111

Can you still recover f? Impossible if the errors are adversarial... But if she corrupts 49% of the bits randomly, then yes, and this talk shows we can even do it efficiently!

(Hence the title: "Efficiently decoding Reed-Muller codes from random errors".)
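The noiseless case, "just interpolate", has a concrete one-liner over F2: on {0,1}^m every polynomial is multilinear, and its coefficients are recovered from the truth table by the subset-XOR (Moebius) transform, which over F2 is its own inverse. A sketch; the indexing convention (bit i of the index is the value of x_{i+1}) is chosen here for illustration:

```python
def interpolate_f2(tt):
    """Recover the multilinear coefficients of f from its truth table.

    tt[v] = f(v), where bit i of the index v is the value of x_{i+1}.
    Returns coef with coef[S] = coefficient of prod_{i in S} x_{i+1}.
    Over F2 the subset-XOR (Moebius) transform is an involution.
    """
    m = (len(tt) - 1).bit_length()
    coef = list(tt)
    for i in range(m):
        for mask in range(len(tt)):
            if mask & (1 << i):
                # fold the value on the subcube x_{i+1}=0 into x_{i+1}=1
                coef[mask] ^= coef[mask ^ (1 << i)]
    return coef
```

For example, the truth table of x1*x2 + x3*x4 yields exactly two nonzero coefficients, at the masks of {x1, x2} and {x3, x4}.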

Efficiently decoding Reed-Muller codes from random errors

Ramprasad Saptharishi (TIFR), Amir Shpilka (Tel Aviv University), Ben Lee Volk (Tel Aviv University)

Typical Setting

Alicia Machado sends "Dont vote for Trump".

The channel corrupts the message...

Bobby Jindal receives "Dude vote for Trump".

We want to encode messages with redundancy to make them resilient to errors.

Reed-Muller Codes: RM(m, r)

Message: A polynomial f ∈ F2[x1, ..., xm] of degree at most r.

Encoding: The evaluation of f on all points in {0,1}^m. For example,

f(x1, x2, x3, x4) = x1x2 + x3x4 ↦

0 0 0 1 0 0 0 1 0 0 0 1 1 1 1 0
0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111

A linear code, with
▶ Block Length: 2^m =: n
▶ Distance: 2^(m−r) (lightest codeword: x1x2···xr)
▶ Dimension: (m choose 0) + (m choose 1) + ... + (m choose r) =: (m choose ≤ r)
▶ Rate: dimension/block length = (m choose ≤ r)/2^m
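The slide's example encoding and the code parameters can be reproduced directly; the bit convention (x1 as the most significant index bit, matching the 0000...1111 column labels) is read off the slide:

```python
from math import comb

m, r = 4, 2

def f(x1, x2, x3, x4):
    # the slide's message polynomial: x1*x2 + x3*x4 over F2
    return (x1 & x2) ^ (x3 & x4)

# encoding: evaluate f at all points of {0,1}^4, ordered 0000, 0001, ..., 1111
codeword = [f((v >> 3) & 1, (v >> 2) & 1, (v >> 1) & 1, v & 1)
            for v in range(2 ** m)]

n = 2 ** m                                   # block length
dim = sum(comb(m, i) for i in range(r + 1))  # C(m,0) + ... + C(m,r)
rate = dim / n
```

For RM(4, 2) this gives block length 16, dimension 11, and rate 11/16.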

Reed-Muller Codes: RM(m, r)

Generator Matrix: the evaluation matrix E(m, r) of monomials of degree ≤ r. Rows are indexed by monomials M of degree ≤ r, columns by points v ∈ F2^m, and the (M, v) entry is M(v). (Every codeword is in the span of the rows.)

Also called the inclusion matrix: M(v) = 1 if and only if "M ⊆ v", i.e. the variable set of M is contained in the support of v.

Decoding RM(m, r)

Worst Case Errors: Up to d/2, where d = 2^(m−r) is the minimum distance (algorithm by [Reed54]).

List Decoding: the maximum radius within which there is a constant number of codewords. [Gopalan-Klivans-Zuckerman08, Bhowmick-Lovett15]: the list decoding radius is d.

Two schools of study

▶ Hamming Model, a.k.a. worst-case errors:
▶ generally the model of interest for complexity theorists;
▶ Reed-Muller codes are not the best for these (far from optimal rate-distance tradeoffs).

▶ Shannon Model, a.k.a. random errors:
▶ the standard model for coding theorists;
▶ recent breakthroughs (e.g. Arıkan's polar codes);
▶ an ongoing research endeavor: how do Reed-Muller codes perform in the Shannon model?

Models for random corruptions (channels)

Binary Erasure Channel — BEC(p)
Each bit is independently replaced by '?' with probability p.
(Illustration: 0 0 1 1 0 with three of the bits replaced by '?'.)

Binary Symmetric Channel — BSC(p)
Each bit is independently flipped with probability p.
(Illustration: 0 0 1 1 0 with three of the bits flipped.)

(Almost) equivalent: a fixed number t ≈ pn of random errors.
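The two channels are a few lines each to simulate; a minimal sketch (function names and the explicit RNG argument are choices made here, not part of the talk):

```python
import random

def bec(bits, p, rng):
    # Binary Erasure Channel: each bit independently becomes '?' w.p. p
    return ['?' if rng.random() < p else b for b in bits]

def bsc(bits, p, rng):
    # Binary Symmetric Channel: each bit is independently flipped w.p. p
    return [b ^ 1 if rng.random() < p else b for b in bits]

rng = random.Random(0)
word = [0, 0, 1, 1, 0]
erased = bec(word, 0.4, rng)    # some positions replaced by '?'
flipped = bsc(word, 0.4, rng)   # some positions flipped
```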

Channel Capacity

Question: Given a channel, what is the best rate we can hope for?

If X^n is transmitted through the channel and received as Y^n, how many bits of information about X^n do we get from Y^n?

For BEC(p): intuitively, (1 − p)n.

For BSC(p): intuitively, (1 − H(p))n (as (n choose pn) ≈ 2^(H(p)·n)).

[Shannon48] The maximum rate that enables decoding (w.h.p.) is:
1 − p for BEC(p),
1 − H(p) for BSC(p).

Codes achieving this bound are called capacity achieving.
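The two capacity formulas, and the entropy estimate (n choose pn) ≈ 2^(H(p)·n) behind the BSC bound, can be checked numerically; a small sketch:

```python
import math

def H2(p):
    # binary entropy function H(p), in bits
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def capacity_bec(p):
    return 1 - p

def capacity_bsc(p):
    return 1 - H2(p)

# sanity check of the estimate log2 C(n, pn) ~ H(p) * n
n, p = 1000, 0.1
approx_exponent = math.log2(math.comb(n, int(p * n))) / n
```

For n = 1000 and p = 0.1, the normalized exponent is already within 0.01 of H(0.1).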

Motivating questions for this talk

How well do Reed-Muller codes perform in the Shannon Model? In BEC(p)? In BSC(p)? Are they as good as polar codes?

Outline

▶ So far
▶ RM codes and erasures
▶ Decoding errors, efficiently
▶ Conclusions and Q&A

(Diagram not to scale.)

Dual of a code

Any linear space can be specified by a generating basis, or as the solution set of a system of constraints.

C⊥ = {u : ⟨v, u⟩ = 0 for every v ∈ C}

Parity Check Matrix: a basis for C⊥ stacked as rows.

PCM · v = 0 ⇐⇒ v ∈ C
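A tiny worked instance of this duality, using the length-3 repetition code (the code choice is ours, for illustration):

```python
import itertools

def dot_gf2(u, v):
    # inner product over F2
    return sum(a & b for a, b in zip(u, v)) % 2

# C: the length-3 repetition code {000, 111}
C = [(0, 0, 0), (1, 1, 1)]

# C-perp: every word orthogonal to all of C (here, the even-weight words)
dual = [u for u in itertools.product((0, 1), repeat=3)
        if all(dot_gf2(u, c) == 0 for c in C)]

# a parity check matrix: a basis of the dual stacked as rows
H = [(1, 1, 0), (0, 1, 1)]

def in_code(v):
    # H v = 0  <=>  v in C
    return all(dot_gf2(h, v) == 0 for h in H)
```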

Linear codes and erasures

Question: When can we decode from a pattern of erasures?

A pattern is decodable if and only if no non-zero codeword is supported on the erased positions. In particular, decodability depends only on the erasure pattern, not on the transmitted codeword.

Observation: A pattern of erasures is decodable if and only if the corresponding columns of the Parity Check Matrix are linearly independent.

So in order for a code to be good for BEC(p), the Parity Check Matrix of the code must be "robustly high-rank".

Reed-Muller codes under erasures

Cool Fact: The dual of RM(m, r) is RM(m, r′), where r′ = m − r − 1.

Hence, the Parity Check Matrix of RM(m, r) is the generator matrix of RM(m, r′): the inclusion matrix with rows indexed by monomials M of degree ≤ r′, columns indexed by points v ∈ {0,1}^m, and entries M(v).

Question: Let R = (m choose ≤ r′). Suppose you pick (0.99)R columns at random. Are they linearly independent with high probability?

Known answers, as r′ ranges over [0, m]:
▶ r′ = o(√(m/log m)): yes [ASW-15],
▶ r′ = m/2 ± O(√m): yes [KMSU+KP-16],
▶ r′ within o(m) of m: yes [ASW-15],
▶ the ranges in between: OPEN.
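The baseline fact behind this question, that the full evaluation matrix of degree-≤r′ monomials has full row rank R, is easy to check by machine; a sketch (parameters m = 5, r′ = 2 are our choice):

```python
import itertools

def monomial_masks(m, d):
    # all multilinear monomials of degree <= d, as variable-set bitmasks
    return [sum(1 << i for i in c)
            for k in range(d + 1)
            for c in itertools.combinations(range(m), k)]

def gf2_rank(vecs):
    # rank over GF(2); vectors encoded as int bitmasks
    pivot = {}
    for v in vecs:
        while v:
            h = v.bit_length() - 1
            if h in pivot:
                v ^= pivot[h]
            else:
                pivot[h] = v
                break
    return len(pivot)

m, rp = 5, 2
masks = monomial_masks(m, rp)   # R = C(5,0) + C(5,1) + C(5,2) = 16

def column(u):
    # the column of the inclusion matrix E(m, r') indexed by the point u
    return sum(1 << j for j, mk in enumerate(masks) if (u & mk) == mk)

all_cols = [column(u) for u in range(1 << m)]
```

The open question is how the rank behaves when only a 0.99R-size random subset of these columns is kept.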

From erasures to errors

Remark: If r falls in the green zone, then RM(m, m − r − 1) can correct ≈ (m choose ≤ r) random erasures.

Theorem [ASW]: Any pattern correctable from erasures in RM(m, m − r − 1) is correctable from errors in RM(m, m − 2r − 2).

Corollary #1 (high-rate): Decodable from (1 − o(1))·(m choose ≤ r) random errors in RM(m, m − 2r) if r = o(√(m/log m)) (the minimum distance of RM(m, m − 2r) is 2^(2r)).

Corollary #2 (low-rate): (If r = m/2 − o(√m), then m − 2r − 2 = o(√m).) Decodable from (1/2 − o(1))·2^m random errors in RM(m, o(√m)) (the minimum distance of RM(m, √m) is 2^(m−√m)).

[S-Shpilka-Volk]: Efficient decoding from errors.


What we want to prove

Theorem [S-Shpilka-Volk]: There exists an efficient algorithm with the following guarantee:

Given a corrupted codeword w = v + err_S of RM(m, m − 2r − 2), if S happens to be a correctable erasure pattern in RM(m, m − r − 1), then the algorithm correctly decodes v from w.

What we have access to

Received word is w := v + err_S for some v ∈ RM(m, m − 2r − 2) and S = {u1, ..., ut}.

The Parity Check Matrix for RM(m, m − 2r − 2) is the generator matrix E(m, 2r + 1) for RM(m, 2r + 1). Hence E(m, 2r + 1) · v = 0, while E(m, 2r + 1) · w is the vector whose entry for the monomial M is M(u1) + ··· + M(ut).

So we have access to Σ_{i∈S} M(ui) for every monomial M with deg(M) ≤ 2r + 1, and hence, by linearity, to Σ_{i∈S} f(ui) for every polynomial f with deg(f) ≤ 2r + 1.
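This "access" is just a syndrome computation; a small sketch with assumed parameters (m = 5, r = 1, so the code is RM(5, 1) and the syndrome ranges over monomials of degree ≤ 3; the error locations are our choice):

```python
import itertools

def monomial_masks(m, d):
    # multilinear monomials of degree <= d, as variable-set bitmasks
    return [sum(1 << i for i in c)
            for k in range(d + 1)
            for c in itertools.combinations(range(m), k)]

def mono_eval(mask, u):
    return 1 if (u & mask) == mask else 0

m, r = 5, 1
monos = monomial_masks(m, 2 * r + 1)    # degree <= 2r + 1 = 3

# v: truth table of x1 + x3, a codeword of RM(5, m - 2r - 2) = RM(5, 1)
v = [((u >> 0) & 1) ^ ((u >> 2) & 1) for u in range(1 << m)]
S = [0b00001, 0b00110]                  # error locations u1, u2
w = [v[u] ^ (1 if u in S else 0) for u in range(1 << m)]

def syndrome(word):
    # E(m, 2r+1) * word: one inner product per monomial of degree <= 2r+1
    return [sum(mono_eval(mk, u) & word[u] for u in range(1 << m)) % 2
            for mk in monos]
```

Since E(m, 2r + 1) annihilates the codeword, the syndrome of w equals the sum of the evaluation vectors of the error locations, exactly the quantity the algorithm gets to use.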

Erasure Correctable Patterns

A pattern of erasures is correctable if and only if the corresponding columns in the parity check matrix are linearly independent. The parity check matrix for RM(m, m − r − 1) is the generator matrix for RM(m, r).

Corollary: A set of points S = {u1, ..., ut} is erasure-correctable in RM(m, m − r − 1) if and only if {u1^r, ..., ut^r} are linearly independent, where ui^r is the vector of all monomials of degree ≤ r evaluated at ui.

The Decoding Algorithm

Input: Received word w (= v + err_S).

Lemma: Assume that S is a pattern of erasures correctable in RM(m, m − r − 1). For any u ∈ {0,1}^m, we have u ∈ S if and only if there exists a polynomial g with deg(g) ≤ r such that

Σ_{ui∈S} (f · g)(ui) = f(u) for every f with deg(f) ≤ r + 1.

Since deg(f · g) ≤ 2r + 1, the left-hand side is computable from the received word, so the condition can be checked by solving a system of linear equations. The algorithm is then straightforward: flag every u for which the system is solvable, and flip those positions.
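The lemma turns directly into code: for each point u, set up a linear system over GF(2) in the unknown coefficients of g (one constraint per monomial f = N of degree ≤ r + 1, with coefficients read off the syndrome, since deg(N·M) ≤ 2r + 1) and flag u when the system is consistent. A toy end-to-end sketch with small assumed parameters (m = 5, r = 1, two errors chosen by us):

```python
import itertools

def monomial_masks(m, d):
    return [sum(1 << i for i in c)
            for k in range(d + 1)
            for c in itertools.combinations(range(m), k)]

def mono_eval(mask, u):
    return 1 if (u & mask) == mask else 0

def system_consistent(rows, width):
    # rows are ints: bits 0..width-1 hold coefficients, bit `width` the RHS
    pivot = {}
    for row in rows:
        while True:
            coeffs = row & ((1 << width) - 1)
            if coeffs == 0:
                if row:          # reduced to "0 = 1": inconsistent
                    return False
                break
            h = coeffs.bit_length() - 1
            if h in pivot:
                row ^= pivot[h]
            else:
                pivot[h] = row
                break
    return True

m, r = 5, 1
monos_g = monomial_masks(m, r)           # candidate monomials of the witness g
monos_f = monomial_masks(m, r + 1)       # test polynomials f
monos_syn = monomial_masks(m, 2 * r + 1)

# a codeword of RM(5, m - 2r - 2) = RM(5, 1): the polynomial x1 + x3
tt = [((u >> 0) & 1) ^ ((u >> 2) & 1) for u in range(1 << m)]
S = {0b00001, 0b00010}   # error set; its degree-<=r evaluation vectors are independent
w = [tt[u] ^ (1 if u in S else 0) for u in range(1 << m)]

# syndrome entry for each monomial M of degree <= 2r + 1: sum_{i in S} M(u_i)
syn = {mk: sum(mono_eval(mk, u) & w[u] for u in range(1 << m)) % 2
       for mk in monos_syn}

def flagged(u):
    # u in S  iff  there is g, deg(g) <= r, with
    #   sum_{i in S} (f*g)(u_i) = f(u)   for all f of degree <= r + 1
    rows = []
    for N in monos_f:
        row = 0
        for j, M in enumerate(monos_g):
            if syn[N | M]:               # sum_i (N*M)(u_i), a syndrome entry
                row |= 1 << j
        if mono_eval(N, u):
            row |= 1 << len(monos_g)     # right-hand side f(u)
        rows.append(row)
    return system_consistent(rows, len(monos_g))

S_hat = {u for u in range(1 << m) if flagged(u)}
decoded = [w[u] ^ (1 if u in S_hat else 0) for u in range(1 << m)]
```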

Proof of LemmaClaim 1Let u1, . . . , ut ∈ {0,1}m such that

�u r

1 , . . . , u rtare linearly independent.

Then, for each i ∈ [t ], there is a polynomial hi such that:▶ deg(hi )≤ r ,▶ hi (u j ) = 1 if and only if i = j , and 0 otherwise.

Proof of LemmaClaim 1Let u1, . . . , ut ∈ {0,1}m such that

�u r

1 , . . . , u rtare linearly independent.

Then, for each i ∈ [t ], there is a polynomial hi such that:▶ deg(hi )≤ r ,▶ hi (u j ) = 1 if and only if i = j , and 0 otherwise.

Proof.

u r1 · · · u r

t

Row operationsI

0

Proof of LemmaClaim 1Let u1, . . . , ut ∈ {0,1}m such that

�u r

1 , . . . , u rtare linearly independent.

Then, for each i ∈ [t ], there is a polynomial hi such that:▶ deg(hi )≤ r ,▶ hi (u j ) = 1 if and only if i = j , and 0 otherwise.

Proof.

u r1 · · · u r

t Row operationsI

0

Proof of LemmaClaim 1Let u1, . . . , ut ∈ {0,1}m such that

�u r

1 , . . . , u rtare linearly independent.

Then, for each i ∈ [t ], there is a polynomial hi such that:▶ deg(hi )≤ r ,▶ hi (u j ) = 1 if and only if i = j , and 0 otherwise.

Proof of LemmaClaim 1Let u1, . . . , ut ∈ {0,1}m such that

�u r

1 , . . . , u rtare linearly independent.

Then, for each i ∈ [t ], there is a polynomial hi such that:▶ deg(hi )≤ r ,▶ hi (u j ) = 1 if and only if i = j , and 0 otherwise.

Main Lemma (⇒): If u ∈ S , then there is a polynomial g withdeg(g )≤ r such that∑ui∈S

( f · g )(ui ) = f (u) for every f with deg( f )≤ r + 1.

Proof of LemmaClaim 1Let u1, . . . , ut ∈ {0,1}m such that

�u r

1 , . . . , u rtare linearly independent.

Then, for each i ∈ [t ], there is a polynomial hi such that:▶ deg(hi )≤ r ,▶ hi (u j ) = 1 if and only if i = j , and 0 otherwise.

Main Lemma (⇒): If u ∈ S , then there is a polynomial g withdeg(g )≤ r such that∑ui∈S

( f · g )(ui ) = f (u) for every f with deg( f )≤ r + 1.

If u = ui , then g = hi satisfies the conditions.

Proof of LemmaClaim 1Let u1, . . . , ut ∈ {0,1}m such that

�u r

1 , . . . , u rtare linearly independent.

Then, for each i ∈ [t ], there is a polynomial hi such that:▶ deg(hi )≤ r ,▶ hi (u j ) = 1 if and only if i = j , and 0 otherwise.

Proof of LemmaClaim 1Let u1 , . . . , ut ∈ {0,1}m such that

�u r

1 , . . . , u rtare linearly independent.

Then, for each i ∈ [t ], there is a polynomial hi such that:▶ deg(hi )≤ r ,▶ hi (u j ) = 1 if and only if i = j , and 0 otherwise.

Proof of LemmaClaim 1Let u1 , . . . , ut ∈ {0,1}m such that

�u r

1 , . . . , u rtare linearly independent.

Then, for each i ∈ [t ], there is a polynomial hi such that:▶ deg(hi )≤ r ,▶ hi (u j ) = 1 if and only if i = j , and 0 otherwise.

Claim 2If u /∈ {u1, . . . , ut }, then there is a polynomial f such that deg( f )≤ r + 1 and

f (u) = 1 , but f (ui ) = 0 for all i = 1, . . . , t .

Proof of LemmaClaim 1Let u1 , . . . , ut ∈ {0,1}m such that

�u r

1 , . . . , u rtare linearly independent.

Then, for each i ∈ [t ], there is a polynomial hi such that:▶ deg(hi )≤ r ,▶ hi (u j ) = 1 if and only if i = j , and 0 otherwise.

Claim 2If u /∈ {u1, . . . , ut }, then there is a polynomial f such that deg( f )≤ r + 1 and

f (u) = 1 , but f (ui ) = 0 for all i = 1, . . . , t .

Main Lemma (⇐): If u /∈ S , then there is no polynomial g withdeg(g )≤ r such that∑

ui∈S

( f · g )(ui ) = f (u) for every f with deg( f )≤ r + 1.

Proof of Claim 2:
Case 1: Suppose hi(u) = 1 for some i.

Since u ≠ ui, we have u(ℓ) ≠ (ui)(ℓ) for some coordinate ℓ.

Then f(x) = hi(x) · (xℓ − (ui)(ℓ)) works: deg(f) ≤ r + 1, f(u) = 1, and f(uj) = 0 for every j.
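Spelling out why this choice of f settles Case 1 (a routine check over F2):

```latex
% f(x) = h_i(x) (x_\ell - (u_i)_{(\ell)}), so deg(f) <= deg(h_i) + 1 <= r + 1.
\begin{align*}
f(u)   &= h_i(u)\,\bigl(u_{(\ell)} - (u_i)_{(\ell)}\bigr) = 1 \cdot 1 = 1
          && \text{(since } u_{(\ell)} \neq (u_i)_{(\ell)} \text{ in } \mathbb{F}_2\text{)},\\
f(u_i) &= h_i(u_i)\,\bigl((u_i)_{(\ell)} - (u_i)_{(\ell)}\bigr) = 0, \\
f(u_j) &= h_i(u_j)\,\bigl((u_j)_{(\ell)} - (u_i)_{(\ell)}\bigr) = 0
          && \text{for } j \neq i \text{, since } h_i(u_j) = 0.
\end{align*}
```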

Case 2: Suppose hi(u) = 0 for all i.

Look at ∑i hi(x): it equals 1 at each uj, but 0 at u.

So f(x) = 1 − ∑i hi(x) works: f(u) = 1, f(uj) = 0 for all j, and deg(f) ≤ r.



Therefore, if u1^{≤r}, . . . , ut^{≤r} are linearly independent, then there exists a polynomial g with deg(g) ≤ r satisfying ∑_{ui ∈ S} (f · g)(ui) = f(u) for every polynomial f with deg(f) ≤ r + 1, if and only if u = ui for some i.

Decoding Algorithm
▶ Input: received word w (= v + errS).
▶ Compute E(m, 2r + 1) · w to get the value of ∑_{ui ∈ S} f(ui) for every polynomial f with deg(f) ≤ 2r + 1.
▶ For each u ∈ {0,1}m, solve for a polynomial g with deg(g) ≤ r satisfying ∑_{ui ∈ S} (f · g)(ui) = f(u) for every f with deg(f) ≤ r + 1. If there is a solution, add u to Corruptions.
▶ Flip the coordinates in Corruptions and interpolate.

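The algorithm above can be run end-to-end for toy parameters. The following sketch is my own illustration (not from the talk; the parameters m = 5, r = 1, the error set S, and helper names like consistent_gf2 are assumptions). It uses the identity x^A · x^B = x^{A∪B} over F2 (since x² = x) to build each per-u linear system directly from the syndrome:

```python
# Toy run of the decoder: code RM(5, m - 2r - 2) = RM(5, 1), checks from E(5, 2r + 1).
from itertools import combinations, product

m, r = 5, 1
pts = list(product([0, 1], repeat=m))

def mons(d):  # multilinear monomials of degree <= d, as variable sets
    return [frozenset(A) for k in range(d + 1) for A in combinations(range(m), k)]

def ev(mono, v):  # evaluate a monomial at a point of {0,1}^m
    return int(all(v[j] for j in mono))

def consistent_gf2(A, b):  # is Ax = b solvable over F2? (Gaussian elimination)
    rows = [row[:] + [bi] for row, bi in zip(A, b)]
    rk = 0
    for c in range(len(A[0])):
        for i in range(rk, len(rows)):
            if rows[i][c]:
                rows[rk], rows[i] = rows[i], rows[rk]
                for k in range(len(rows)):
                    if k != rk and rows[k][c]:
                        rows[k] = [x ^ y for x, y in zip(rows[k], rows[rk])]
                rk += 1
                break
    return not any(row[-1] and not any(row[:-1]) for row in rows)

# Codeword: evaluations of p(x) = x0 + x2 (degree 1), corrupted on S.
v = [(u[0] + u[2]) % 2 for u in pts]
S = [(1, 0, 0, 0, 0), (0, 1, 0, 0, 0), (0, 0, 1, 0, 0)]  # (1, u) vectors indep.
w = [vi ^ int(u in S) for vi, u in zip(v, pts)]

# Syndrome: sigma[f] = sum_{ui in S} f(ui), read off from w via E(m, 2r + 1).
sigma = {f: sum(ev(f, u) * wi for u, wi in zip(pts, w)) % 2 for f in mons(2 * r + 1)}

corruptions = []
for u in pts:
    # Unknowns: coefficients g_B of g.  One equation per monomial f of deg <= r + 1:
    # sum_B g_B * sigma[f ∪ B] = f(u).
    A = [[sigma[f | B] for B in mons(r)] for f in mons(r + 1)]
    b = [ev(f, u) for f in mons(r + 1)]
    if consistent_gf2(A, b):
        corruptions.append(u)

print(set(corruptions) == set(S))  # solvable exactly at the corrupted positions
decoded = [wi ^ int(u in corruptions) for wi, u in zip(w, pts)]
print(decoded == v)                # flipping Corruptions recovers the codeword
```

Note how the reduction mirrors the lemma: the system is consistent precisely when u ∈ S, so no interpolation step is needed beyond flipping the identified coordinates.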

Outline
▶ So far
▶ RM codes and erasures
▶ Decoding errors, efficiently
▶ Conclusions and Q&A


ICYMI
Remark: If r falls in the green zone, then RM(m, m − r − 1) can correct ≈ (m choose ≤r) random erasures.
[Number line for r from 0 to m, with marks at o(√(m/ log m)), O(√m), o(m), and m/2.]


Theorem [S-Shpilka-Volk]: Any pattern that is erasure-correctable in RM(m, m − r − 1) is efficiently error-correctable in RM(m, m − 2r − 2).


Corollary #1 (high-rate): Efficiently decodable from (1 − o(1)) · (m choose ≤r) random errors in RM(m, m − 2r) if r = o(√(m/ log m)). (The min distance of RM(m, m − 2r) is 2^{2r}.)


Corollary #2 (low-rate): Efficiently decodable from (1/2 − o(1)) · 2^m random errors in RM(m, o(√m)). (The min distance of RM(m, √m) is 2^{m−√m}.)

The obvious open question
[Figure: the evaluation matrix M, with rows indexed by [m]^{≤r} and columns indexed by {0,1}^m; the column of a point v is M(v).]
Question: Let R = (m choose ≤r). Suppose you pick (0.99)R columns at random. Are they linearly independent with high probability?
[Number line for r from 0 to m: resolved for r = o(√(m/ log m)) [ASW-15], r = O(√m) [KMSU+KP-16], and r = o(m) [ASW-15]; the remaining regimes up to m/2 are OPEN.]
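The question can be probed numerically at small scale. A hedged sketch (my own experiment, not from the talk; the parameters m = 10, r = 2 and the helper rank_gf2 are made up): sample 0.99·R columns of M, i.e. 0.99·R random points v with their evaluation vectors v^{≤r}, and test linear independence over F2.

```python
# Small-scale experiment: how often are 0.99*R random columns of M independent?
from itertools import combinations, product
import random

def rank_gf2(rows):  # rank of a 0/1 matrix over F2, by Gaussian elimination
    rows = [row[:] for row in rows]
    rk = 0
    for c in range(len(rows[0])):
        for i in range(rk, len(rows)):
            if rows[i][c]:
                rows[rk], rows[i] = rows[i], rows[rk]
                for k in range(len(rows)):
                    if k != rk and rows[k][c]:
                        rows[k] = [x ^ y for x, y in zip(rows[k], rows[rk])]
                rk += 1
                break
    return rk

m, r = 10, 2
mons = [A for d in range(r + 1) for A in combinations(range(m), d)]
R = len(mons)        # R = (m choose <= r) = 56 here
k = int(0.99 * R)    # number of sampled columns of M
pts = list(product([0, 1], repeat=m))
random.seed(0)

trials, indep = 50, 0
for _ in range(trials):
    sample = random.sample(pts, k)
    # the column of v is v^{<= r}: all monomial evaluations of degree <= r at v
    cols = [[int(all(v[j] for j in A)) for A in mons] for v in sample]
    indep += (rank_gf2(cols) == k)
print(f"{indep}/{trials} samples of {k} columns were linearly independent")
```

Such experiments only suggest behavior at small m; the question above asks for the asymptotic statement.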
