
Binary Input Broadcast Channel

Abbas El Gamal

Stanford University

ITW, Jerusalem 2015

Broadcast channel (BC) (Cover 1972)

[Block diagram: encoder maps (M1, M2) to X^n; channel p(y1, y2 | x); decoder 1 observes Y1^n and outputs M̂1; decoder 2 observes Y2^n and outputs M̂2]

∙ Private messages: M1 ∈ [1 : 2^{nR1}], M2 ∈ [1 : 2^{nR2}]

∙ Capacity region: closure of the set of achievable rate pairs (R1, R2)

∙ Capacity region is not known in general

∙ Inner and outer bounds that coincide for some special classes, but are in general very difficult to evaluate

∙ Recent progress on computing the bounds for binary input BC (|X | ≤ 2)


Motivation

∙ Study achievable rates for Poisson BC (Kim–Nachman–EG 2015)

[Figure: continuous-time Poisson BC — input X(t) drives two Poisson-process channels PP(·) with parameters α, s1, s2, producing outputs Y1(t), Y2(t)]

Continuous-time Poisson BC:
Y1(t) | {X(t) = x(t)} is PP(α(x(t) + s1)),
Y2(t) | {X(t) = x(t)} is PP(x(t) + s2),
X(t) ∈ [0, 1]; t, α, s1, s2 ≥ 0; s1 ≤ s2

[Figure: binary P-BC — binary-input channels X → Y1 and X → Y2 with transition parameters a1, b1 and a2, b2]

Binary P-BC (Wyner 1988a):
a1 = α s1 Δ + O(Δ²),  a2 = s2 Δ + O(Δ²),
b1 = α(1 + s1) Δ + O(Δ²),  b2 = (1 + s2) Δ + O(Δ²)


∙ We used recent techniques/results on binary input BC:

– Concave envelope method (Nair–Wang 2008)

– Evaluation of Marton's region (Anantharam–Gohari–Nair 2013)

∙ This talk:

– Highlight some of these techniques/results

– Describe new results for the Poisson BC

– Focus on computing capacity and bounds (no new coding schemes)


Outline

I. Optimality of superposition coding for BI-BC

BEC/BSC (Nair 2010) (proposed by Montanari)

– Superposition coding is always optimal

– Concave envelope method

– Degraded ⇒ less noisy ⇒ more capable (reverse implications do not hold)

– Effectively less noisy

II. Poisson broadcast channel (Kim–Nachman–EG 2015):

– Superposition coding is almost always optimal

III. Evaluation of Marton region for BI-BC (Anantharam–Gohari–Nair 2013)

– Gap between Marton region and UV outer bound for Poisson BC

Superposition coding inner bound for BC

Superposition coding inner bound RUX (Cover 1972, Bergmans 1973)

A rate pair (R1 , R2) is achievable for the BC if

R2 < I(U; Y2),
R1 < I(X; Y1 | U),
R1 + R2 < I(X; Y1)

for some pmf p(u, x)

∙ Y1 recovers both M1 and M2

∙ Y2 recovers M2 (carried by U)

∙ |U | ≤ |X | + 1 suffices (Ahlswede–Korner 1975)
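A minimal numerical sketch of evaluating the three mutual-information bounds above for one fixed pmf p(u, x) and given channel matrices; the helper names and the example BEC/BSC channels are illustrative choices, not part of the talk:

    import numpy as np

    def mutual_info(p_in, W):
        # I(A; B) in bits for input pmf p_in (length m) and channel W (m x n), rows W[a] = p(b | a)
        p_ab = p_in[:, None] * W
        p_b = p_ab.sum(axis=0)
        with np.errstate(divide="ignore", invalid="ignore"):
            ratio = np.where(p_ab > 0, p_ab / (p_in[:, None] * p_b[None, :]), 1.0)
        return float(np.sum(p_ab * np.log2(ratio)))

    def superposition_bounds(p_ux, W1, W2):
        # returns (I(U; Y2), I(X; Y1 | U), I(X; Y1)) for a joint pmf p_ux[u, x] with all p(u) > 0
        p_u = p_ux.sum(axis=1)
        p_x = p_ux.sum(axis=0)
        p_x_given_u = p_ux / p_u[:, None]
        I_U_Y2 = mutual_info(p_u, p_x_given_u @ W2)          # U -> X -> Y2
        I_X_Y1_given_U = sum(p_u[u] * mutual_info(p_x_given_u[u], W1) for u in range(len(p_u)))
        I_X_Y1 = mutual_info(p_x, W1)
        return I_U_Y2, I_X_Y1_given_U, I_X_Y1

    # illustrative example: Y1 = BEC(0.3) with outputs {0, e, 1}, Y2 = BSC(0.1), |U| = 2
    W1 = np.array([[0.7, 0.3, 0.0],
                   [0.0, 0.3, 0.7]])
    W2 = np.array([[0.9, 0.1],
                   [0.1, 0.9]])
    p_ux = np.array([[0.3, 0.2],
                     [0.1, 0.4]])
    R2, R1, Rsum = superposition_bounds(p_ux, W1, W2)
    print(f"R2 < {R2:.4f}, R1 < {R1:.4f}, R1 + R2 < {Rsum:.4f}")

Sweeping p(u, x) over a grid (with |U| ≤ |X| + 1) and taking the union of the resulting rate regions traces out RUX.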


Classes of BC for which superposition coding is optimal

∙ Degraded (Gallager 1974): p(y′1 | x) = p(y1 | x) such that X → Y′1 → Y2

∙ Less noisy (Korner–Marton 1977): I(U ; Y1) ≥ I(U ; Y2) for every p(u, x)

∙ More capable (EG 1979): I(X ;Y1) ≥ I(X ; Y2) for every p(x)

∙ Degraded ⇒ less noisy ⇒ more capable (Korner–Marton 1977); the reverse implications do not hold

∙ Essentially less noisy and more capable (Nair 2010)

– Less noisy ⇒ essentially less noisy (not conversely)

– More capable ⇒ essentially more capable (not conversely)


BEC/BSC (Nair 2010)

[Figure: X → Y1 is a BEC with erasure probability є; X → Y2 is a BSC with crossover probability p]

∙ Superposition coding is always optimal

∙ Concave envelope

∙ Degraded ⇒ less noisy ⇒ more capable (reverse implications do not hold)

∙ Effectively less noisy BC


Y2 degraded version of Y1 for BEC/BSC

∙ Y2 degraded version of Y1 if 0 ≤ є ≤ 2p

[Figure: є-axis, 0 to 1, with the degraded region 0 ≤ є ≤ 2p marked]

Less noisy via concave envelope (van Dijk 1997)

∙ Y1 less noisy than Y2 if: max_{p(u,x)} [I(U; Y2) − I(U; Y1)] = 0, |U| ≤ |X| + 1

∙ Consider:

max_{p(u,x)} [I(U; Y2) − I(U; Y1)]
 = max_{p(u,x)} [I(X; Y2) − I(X; Y2 | U) − I(X; Y1) + I(X; Y1 | U)]
 = max_{p(x)} { max_{p(u|x): |U| ≤ |X|+1} [I(X; Y1 | U) − I(X; Y2 | U)] − [I(X; Y1) − I(X; Y2)] }
 = max_{p(x)} { C[I(X; Y1) − I(X; Y2)] − [I(X; Y1) − I(X; Y2)] },

where C[·] denotes the upper concave envelope, taken over p(x)

∙ Hence, Y1 less noisy than Y2 if:

I(X; Y1) − I(X; Y2) = C[I(X; Y1) − I(X; Y2)] for every p(x)

Equivalently, if I(X; Y1) − I(X; Y2) is concave in p(x)

[Figure: I(X; Y1) − I(X; Y2) and its upper concave envelope C[I(X; Y1) − I(X; Y2)], plotted over p(x)]
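The computational primitive behind this test is the upper concave envelope C[·]; for a binary input it is a function of a single parameter. A minimal sketch of evaluating it on a grid via the upper convex hull of the graph (pure numpy; the function name and toy example are illustrative):

    import numpy as np

    def upper_concave_envelope(q, f):
        # upper concave envelope of the points (q[i], f[i]); q must be increasing
        # (equivalently, the upper convex hull of the graph, re-sampled on the grid q)
        hull = []
        for i in range(len(q)):
            while len(hull) >= 2:
                i0, i1 = hull[-2], hull[-1]
                cross = (q[i1] - q[i0]) * (f[i] - f[i0]) - (f[i1] - f[i0]) * (q[i] - q[i0])
                if cross >= 0:        # hull[-1] lies on or below the chord from hull[-2] to i
                    hull.pop()
                else:
                    break
            hull.append(i)
        return np.interp(q, q[hull], f[hull])

    # toy example: a non-concave function and its envelope
    q = np.linspace(0.0, 1.0, 1001)
    f = np.minimum(q, 1.0 - q) ** 2
    C = upper_concave_envelope(q, f)
    print("max gap C - f:", float(np.max(C - f)))   # > 0 exactly because f is not concave

Replacing the toy f with I(X; Y1) − I(X; Y2) evaluated on a grid of input pmfs turns the less-noisy condition above into a direct comparison of the function with its envelope.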

Less noisy via concave envelope for BI-BC

∙ Y1 less noisy than Y2 if: I(X; Y1) − I(X; Y2) is concave in p(x)

∙ For BI-BC, define

I_j(q) = I(X; Yj), j = 1, 2, where X ∼ Bern(q), q ∈ [0, 1]

∙ Y1 less noisy than Y2 if: the second derivative of I1(q) − I2(q) ≤ 0 for every q


Y1 less noisy than Y2 for BEC/BSC

∙ Y1 less noisy than Y2 if: є ≤ 4p(1 − p)

[Figure: I1(q) − I2(q) and its concave envelope C[I1(q) − I2(q)] versus q; є-axis showing the degraded (є ≤ 2p) and less noisy (є ≤ 4p(1 − p)) regions]

Y1 more capable than Y2 for BEC/BSC

∙ Y1 more capable than Y2 if: I1(q) − I2(q) ≥ 0 for every q ∈ [0, 1]

∙ For BEC/BSC: Y1 more capable than Y2 if є ≤ H(p)


∙ This shows: degraded ⇒ less noisy ⇒ more capable, and none of the reverse implications hold
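A small numerical check of the thresholds above, writing out I1(q) = (1 − є)H(q) for the BEC and I2(q) = H(q ∗ p) − H(p) for the BSC; the є and p values below are illustrative:

    import numpy as np

    def h2(x):
        # binary entropy in bits, with the convention h2(0) = h2(1) = 0
        x = np.clip(x, 1e-12, 1 - 1e-12)
        return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

    def I1(q, eps):       # X ~ Bern(q) through a BEC(eps)
        return (1 - eps) * h2(q)

    def I2(q, p):         # X ~ Bern(q) through a BSC(p)
        return h2(q * (1 - p) + (1 - q) * p) - h2(p)

    def classify(eps, p, n=2001):
        q = np.linspace(0.0, 1.0, n)
        d = I1(q, eps) - I2(q, p)
        d2 = np.diff(d, 2) / (q[1] - q[0]) ** 2      # discrete second derivative of I1 - I2
        less_noisy = bool(np.all(d2 <= 1e-6))        # concavity of I1 - I2
        more_capable = bool(np.all(d >= -1e-9))      # nonnegativity of I1 - I2
        return less_noisy, more_capable

    p = 0.1
    for eps in (0.2, 0.4, 0.6):
        ln, mc = classify(eps, p)
        print(f"eps = {eps}: less noisy = {ln} (4p(1-p) = {4 * p * (1 - p):.2f}), "
              f"more capable = {mc} (H(p) = {h2(p):.3f})")

With p = 0.1 the flags change as є crosses 4p(1 − p) = 0.36 and H(p) ≈ 0.47, consistent with the thresholds above.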


Beyond more capable

∙ For H(p) < є < 1, BEC/BSC is not more capable


Superposition coding is always optimal for BEC/BSC

∙ Nair (2010) showed: Superposition coding is optimal for H(p) < є ≤ 1

– Introduced class of essentially less noisy

– Difficult to verify in general (when channel is not symmetric)

∙ We show that the channel is effectively less noisy:

– More general than essentially less noisy

– Easier to verify using concave envelope


Effectively less noisy (Kim–Nachman–EG 2015)

∙ Consider the outer bound Ro on the BC capacity region (Marton 1979):

R2 ≤ I(U; Y2),
R1 + R2 ≤ I(U; Y2) + I(X; Y1 | U)

for some p(u, x). Expressed via supporting hyperplanes:

max_{(R1,R2) ∈ Ro} (λR1 + R2) = max_{p(u,x)} [λ I(X; Y1 | U) + I(U; Y2)] for 0 ≤ λ ≤ 1

∙ Compare to the superposition coding inner bound RUX:

max_{(R1,R2) ∈ RUX} (λR1 + R2) = max_{p(u,x)} [λ min{I(X; Y1 | U), I(X; Y1) − I(U; Y2)} + I(U; Y2)] for 0 ≤ λ ≤ 1

∙ Clearly, RUX = Ro if I(U; Y1) ≥ I(U; Y2) for every p(u, x) (less noisy)

∙ This doesn't need to hold for every p(u, x); for example,

RUX = Ro if I(U; Y1) ≥ I(U; Y2) for every p*(x) ∈ P and every p(u|x), where P satisfies

max_{p(x)} max_{p(u|x)} [λ I(X; Y1 | U) + I(U; Y2)] = max_{p*(x) ∈ P} max_{p(u|x)} [λ I(X; Y1 | U) + I(U; Y2)] for 0 ≤ λ ≤ 1

Effectively less noisy (Kim–Nachman–EG 2015)

Effectively less noisy BC

Y1 is effectively less noisy than Y2 if:

I(U; Y1) ≥ I(U; Y2) for every p*(x) ∈ P and every p(u|x), such that

p*(x) = argmax_{p(x)} { max_{p(u|x)} [λ I(X; Y1 | U) + I(U; Y2)] },  0 ≤ λ ≤ 1

∙ In terms of the concave envelope, Y1 is effectively less noisy than Y2 if:

I(X; Y1) − I(X; Y2) = C[I(X; Y1) − I(X; Y2)] for every p*(x) ∈ P

∙ p*(x) can also be recast in terms of the concave envelope as:

p*(x) = argmax_{p(x)} { max_{p(u|x)} [λ I(X; Y2 | U) + I(U; Y1)] }
     = argmax_{p(x)} { I(X; Y1) + C[λ I(X; Y2) − I(X; Y1)] }

Y2 effectively less noisy than Y1 for BEC/BSC

∙ For BEC/BSC, it can be shown that:

p*_X(1) = q* = argmax_q { I2(q) + C[λ I2(q) − I1(q)] } = 0.5

– λ I2(q) − I1(q) is symmetric about q = 0.5

– Both I2(q) and C[λ I2(q) − I1(q)] are maximum at q* = 0.5

∙ By inspection, I2(0.5) − I1(0.5) = C[I2(q) − I1(q)]|_{q=0.5} for H(p) < є ≤ 1

[Figure: λ I2(q) − I1(q) and its concave envelope versus q; є-axis showing the degraded (є ≤ 2p), less noisy (є ≤ 4p(1 − p)), more capable (є ≤ H(p)), and effectively less noisy regions]
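The "by inspection" step can be reproduced numerically: build I2(q) − I1(q) for the BEC/BSC pair on a grid, take its upper concave envelope, and check the touching condition at q = 0.5. A sketch with illustrative parameter values in the regime H(p) < є ≤ 1:

    import numpy as np

    def h2(x):
        x = np.clip(x, 1e-12, 1 - 1e-12)
        return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

    def upper_concave_envelope(q, f):
        hull = []                      # upper convex hull of the points (q[i], f[i])
        for i in range(len(q)):
            while len(hull) >= 2:
                i0, i1 = hull[-2], hull[-1]
                if (q[i1] - q[i0]) * (f[i] - f[i0]) - (f[i1] - f[i0]) * (q[i] - q[i0]) >= 0:
                    hull.pop()
                else:
                    break
            hull.append(i)
        return np.interp(q, q[hull], f[hull])

    eps, p = 0.6, 0.1                  # illustrative: H(0.1) ~ 0.469 < eps, beyond "more capable"
    q = np.linspace(0.0, 1.0, 4001)    # grid contains q = 0.5 exactly
    d = (h2(q * (1 - p) + (1 - q) * p) - h2(p)) - (1 - eps) * h2(q)   # I2(q) - I1(q)
    C = upper_concave_envelope(q, d)
    mid = len(q) // 2
    print("I2 - I1 at 0.5:", round(float(d[mid]), 6),
          "envelope at 0.5:", round(float(C[mid]), 6),
          "touch:", abs(C[mid] - d[mid]) < 1e-9)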

BEC/BSC summary

∙ Superposition coding is always optimal

∙ Concave envelope method

∙ Degraded ⇒ less noisy ⇒ more capable (reverse implications do not hold)

∙ Effectively less noisy

– More general than essentially less noisy (Nair 2010)

– Easier to compute in general

[Figure: є-axis summary for the BEC(є)/BSC(p) pair showing the degraded (є ≤ 2p), less noisy (є ≤ 4p(1 − p)), more capable (є ≤ H(p)), and effectively less noisy regions]


Poisson broadcast channel (P-BC)

[Figure: continuous-time Poisson BC — input X(t), parameters α, s1, s2, Poisson-process outputs Y1(t), Y2(t)]

Continuous-time Poisson BC:
Y1(t) | {X(t) = x(t)} is PP(α(x(t) + s1)),
Y2(t) | {X(t) = x(t)} is PP(x(t) + s2),
X(t) ∈ [0, 1]; t, α, s1, s2 ≥ 0; s1 ≤ s2

[Figure: binary P-BC — time divided into slots of width Δ; binary-input channels X → Y1, X → Y2 with transition parameters a1, b1 and a2, b2]

Binary P-BC (Wyner 1988a):
a1 = α s1 Δ + O(Δ²),  a2 = s2 Δ + O(Δ²),
b1 = α(1 + s1) Δ + O(Δ²),  b2 = (1 + s2) Δ + O(Δ²)

∙ Wyner (1988a,b) and Lapidoth–Telatar–Urbanke (2003) argued that:

capacity of the P-BC ≡ capacity of the 1/Δ-extension binary P-BC as Δ → 0

Optimality of superposition coding for P-BC?

∙ Lapidoth–Telatar–Urbanke (2003) showed: Y2 is a degraded version of Y1 if α ≥ 1 (s1 ≤ s2)

∙ Can define less noisy / more capable / effectively less noisy through the binary P-BC:

– Let I_j(q) = lim_{Δ→0} (1/Δ) I(X; Yj^Δ), X ∼ Bern(q), j = 1, 2

– Less noisy: second derivative of I1(q) − I2(q) ≤ 0 for every q ∈ [0, 1]

– More capable: I1(q) − I2(q) ≥ 0 for every q ∈ [0, 1]

– Effectively less noisy: I1(q*) − I2(q*) = C[I1(q) − I2(q)]|_{q=q*} for every q* ∈ P, where
  q* = argmax_q { I2(q) + C[λ I2(q) − I1(q)] }
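For numerical work, the limit above evaluates (via the standard first-order expansion of the binary P-BC transition probabilities) to I_j(q) = E[φ(λ_j(X))] − φ(E[λ_j(X)]) in nats per unit time, with φ(u) = u log u, λ1(x) = α(x + s1), λ2(x) = x + s2. A minimal sketch that uses this to test the less noisy / more capable conditions for one illustrative parameter choice (the function names are illustrative):

    import numpy as np

    def phi(u):
        # phi(u) = u * ln(u), with phi(0) = 0
        u = np.asarray(u, dtype=float)
        return np.where(u > 0, u * np.log(np.maximum(u, 1e-300)), 0.0)

    def I_rate(q, lam0, lam1):
        # lim_{Delta -> 0} I(X; Y^Delta) / Delta in nats per unit time, X ~ Bern(q),
        # Poisson intensity lam0 when X = 0 and lam1 when X = 1
        mean = (1 - q) * lam0 + q * lam1
        return (1 - q) * phi(lam0) + q * phi(lam1) - phi(mean)

    def classify(alpha, s1, s2, n=2001):
        q = np.linspace(0.0, 1.0, n)
        I1 = I_rate(q, alpha * s1, alpha * (1 + s1))   # receiver 1: PP(alpha (x + s1))
        I2 = I_rate(q, s2, 1 + s2)                     # receiver 2: PP(x + s2)
        d = I1 - I2
        d2 = np.diff(d, 2) / (q[1] - q[0]) ** 2
        return {"Y1 less noisy": bool(np.all(d2 <= 1e-6)),
                "Y1 more capable": bool(np.all(d >= -1e-9)),
                "Y2 less noisy": bool(np.all(d2 >= -1e-6)),
                "Y2 more capable": bool(np.all(d <= 1e-9))}

    for alpha in (0.2, 0.5, 0.9):      # illustrative values for s1 = 0.1, s2 = 1
        print(alpha, classify(alpha, s1=0.1, s2=1.0))

Sweeping α locates the boundaries of these regions numerically.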

Y1 less noisy than Y2 for P-BC

∙ s1 ≤ s2

∙ α1 = (1 + s1)/(1 + s2)


Y1 more capable than Y2 for P-BC

∙ s1 ≤ s2

∙ α1 = (1 + s1)/(1 + s2)

∙ α2 = (s2 log(1 + 1/s2) − 1)/(s1 log(1 + 1/s1) − 1)


Y2 less noisy than Y1 for P-BC

∙ s1 ≤ s2

∙ α1 = (1 + s1)/(1 + s2)

∙ α2 = (s2 log(1 + 1/s2) − 1)/(s1 log(1 + 1/s1) − 1)

∙ α4 = s1/s2


Y2 more capable than Y1 for P-BC

∙ s1 ≤ s2

∙ α1 = (1 + s1)/(1 + s2)

∙ α2 = (s2 log(1 + 1/s2) − 1)/(s1 log(1 + 1/s1) − 1)

∙ α4 = s1/s2

∙ α3 = ((1 + s2) log(1 + 1/s2) − 1)/((1 + s1) log(1 + 1/s1) − 1)


Y1 effectively less noisy than Y2 for P-BC

∙ s1 ≤ s2

∙ Y1 effectively less noisy than Y2 if I1(q*) − I2(q*) = C[I1(q) − I2(q)]|_{q=q*},
q* ≤ q2 = (1 + s2)^{1+s2}/(e s2^{s2}) − s2

∙ α12 = d(1 + s2, q2 + s2)/d(1 + s1, q2 + s1), where d(x, y) = x log(x/y) − x + y


Y2 effectively less noisy than Y1 for P-BC

∙ s1 ≤ s2

∙ Y2 effectively less noisy than Y1 if I2(q*) − I1(q*) = C[I2(q) − I1(q)]|_{q=q*},
q* ≥ q1 = (1 + s1)^{1+s1}/(e s1^{s1}) − s1

∙ α23 = (s2 log(1 + q1/s2) − q1)/(s1 log(1 + q1/s1) − q1)
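These closed-form boundaries are straightforward to evaluate; a short script (threshold names as in the slides, natural logarithms assumed) that, for s1 = 0.1 and s2 = 1, reproduces α23 ≈ 0.27 and α2 ≈ 0.40, the values quoted for the sum-rate plot later in the talk:

    import math

    def q_star(s):
        # optimal duty cycle of the single-user Poisson channel with dark current s (s > 0)
        return (1 + s) ** (1 + s) / (math.e * s ** s) - s

    def d(x, y):
        return x * math.log(x / y) - x + y

    def thresholds(s1, s2):
        q1, q2 = q_star(s1), q_star(s2)
        return {
            "alpha1":  (1 + s1) / (1 + s2),
            "alpha2":  (s2 * math.log(1 + 1 / s2) - 1) / (s1 * math.log(1 + 1 / s1) - 1),
            "alpha3":  ((1 + s2) * math.log(1 + 1 / s2) - 1) / ((1 + s1) * math.log(1 + 1 / s1) - 1),
            "alpha4":  s1 / s2,
            "alpha12": d(1 + s2, q2 + s2) / d(1 + s1, q2 + s1),
            "alpha23": (s2 * math.log(1 + q1 / s2) - q1) / (s1 * math.log(1 + q1 / s1) - q1),
        }

    print({k: round(v, 3) for k, v in thresholds(0.1, 1.0).items()})
    # for s1 = 0.1, s2 = 1 this gives alpha2 ~ 0.404 and alpha23 ~ 0.270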


Summary

∙ Values of α for which superposition coding is optimal for P-BC (s1 ≤ s2)

[Figure: α-axis showing for which α each relation holds, marked by the boundary points α1, α2, α3, α4, α12, α23. DEG: degraded, LN: less noisy, MC: more capable, ELN: effectively less noisy]

[Figure: four panels (s1 = 0, 0.1, 1, 3) plotting, versus s2, the values of α ∈ [0, 1] for which superposition coding is known to be optimal]

Superposition coding is almost always optimal for P-BC

[Figure: parameter box s1 ∈ [0, t1], s2 ∈ [0, t2], α ∈ [0, a]]

Fraction of the parameter volume for which superposition coding is optimal → 1 as t1, t2, a → ∞

Why superposition coding almost always optimal for P-BC

∙ Transition probabilities for the binary P-BC have comparable magnitudes:

a1 = α s1 Δ + O(Δ²),  a2 = s2 Δ + O(Δ²),
b1 = α(1 + s1) Δ + O(Δ²),  b2 = (1 + s2) Δ + O(Δ²)

∙ This is not the case for BI-BC in general


Capacity region of P-BC for remaining set of parameters

∙ What is the capacity region for α ∈ (α23 , α2), (s1 ≤ s2)?

∙ We compare Marton’s inner bound and UV outer bound



General inner bounds on capacity region of BC

Cover (1975), van der Meulen (1975) RUVW

A rate pair (R1 , R2) is achievable for BC if

R1 < I(W; Y1) + I(V; Y1 | W),
R2 < I(W; Y2) + I(U; Y2 | W),
R1 + R2 < min{I(W; Y1), I(W; Y2)} + I(V; Y1 | W) + I(U; Y2 | W)

for some pmf p(w)p(u|w)p(v|w) and function x(w, u, v)

∙ Split message M j = (M j0 ,M j j), j = 1, 2

∙ W carries (M10 ,M20), recovered by Y1 and Y2

∙ V carries M11, recovered by Y1; U carries M22, recovered by Y2

∙ Includes the superposition coding inner bound (U ← X, W ← U, V = ∅)


General inner bounds on capacity region of BC

Marton (1979) RM inner bound

A rate pair (R1 , R2) is achievable for BC if

R1 < I(W; Y1) + I(V; Y1 | W),
R2 < I(W; Y2) + I(U; Y2 | W),
R1 + R2 < min{I(W; Y1), I(W; Y2)} + I(V; Y1 | W) + I(U; Y2 | W) − I(U; V | W)

for some pmf p(w, u, v) and function x(w, u, v)

∙ U and V given W correlated (they carry independent messages!)

∙ Correlation results in rate penalty

∙ Tight for all classes of BC with known capacity regions


Marton inner bound for BI-BC ≡ randomized time sharing

Marton region for BI-BC RRTD (Anantharam–Gohari–Nair 2013)

A rate pair (R1 , R2) is achievable for BC if

R1 < I(W; Y1) + Σ_{j=1}^{k} αj I(X; Y1 | W = j),

R2 < I(W; Y2) + Σ_{j=k+1}^{5} αj I(X; Y2 | W = j),

R1 + R2 < min{I(W; Y1), I(W; Y2)} + Σ_{j=1}^{k} αj I(X; Y1 | W = j) + Σ_{j=k+1}^{5} αj I(X; Y2 | W = j)

for some pW(j) = αj, j ∈ [1 : 5], and p(x|w)

∙ Extends result on skewed binary BC (Hajek–Pursley 1979)

∙ For BI-BC: RM = RUVW = RRTD

⇒ U and V independent given W suffices


Randomized time division (Hajek–Pursley 1979)

∙ Let W ∈ {1, 2}, fix pW(1) = α and p(x|w), and set

U = X if W = 1, U = є otherwise;  V = X if W = 2, V = є otherwise

∙ Generate 2^{n(R10+R20)} sequences w^n(m10, m20)

∙ For every (m10, m20), generate 2^{nR11} sequences u^n(m10, m20, m11)

∙ For every (m10, m20), generate 2^{nR22} sequences v^n(m10, m20, m22)

∙ Set Xi(m1, m2) = ui(m10, m20, m11) if ui ≠ є, and Xi(m1, m2) = vi(m10, m20, m22) if vi ≠ є

Wn 1 2 2 1 1 1 2 1 2 2 1 2 1 1

Un 0 є є 0 0 0 є 1 є є 0 є 0 1

Vn є 0 1 є є є 1 є 0 1 є 1 є є

Xn 0 0 1 0 0 0 1 1 0 1 0 1 0 1
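A toy sketch of the codeword-assembly step for a single (m10, m20) pair, with random sequences standing in for the codebooks and None playing the role of є (block length and probabilities chosen only to mirror the table above):

    import random

    random.seed(0)
    n, alpha = 14, 0.5                 # block length and P(W = 1)
    p_x_given_w = {1: 0.5, 2: 0.5}     # p(x | w); any fixed choice works for the illustration

    # one (m10, m20) pair: w^n, then u^n and v^n with U = X iff W = 1 and V = X iff W = 2
    w = [1 if random.random() < alpha else 2 for _ in range(n)]
    u = [(1 if random.random() < p_x_given_w[1] else 0) if wi == 1 else None for wi in w]
    v = [(1 if random.random() < p_x_given_w[2] else 0) if wi == 2 else None for wi in w]

    # codeword assembly: x_i comes from u_i in the W = 1 slots and from v_i in the W = 2 slots
    x = [ui if ui is not None else vi for ui, vi in zip(u, v)]

    print("W:", w)
    print("U:", ["e" if s is None else s for s in u])
    print("V:", ["e" if s is None else s for s in v])
    print("X:", x)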


Proving RM = RRTD for BI-BC

∙ Cardinality bounds via perturbation (Gohari–Anantharam 2009):

– |U|, |V| ≤ |X|, |W| ≤ |X| + 4

∙ Sum-rate proof (Geng–Jog–Nair–Wang 2013a):

– For the BI-BC sum rate: |W| ≤ |X| = 2 (|U|, |V| ≤ 2)

– Key inequality for BI-BC (only a few x(u, v, w) need to be considered):

I(V; Y1) + I(U; Y2) − I(U; V) ≤ max{I(X; Y1), I(X; Y2)}

∙ General case (Anantharam–Gohari–Nair 2013): |U| + |V| ≤ |X| + 1

– Concave envelope

– Perturbation

– A min-max theorem for rate regions
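The key inequality can be exercised numerically by drawing random binary-input channels and random choices of p(u, v) with a deterministic map x(u, v) (conditioning on W is dropped here); a sanity-check sketch, not a proof, with illustrative alphabet sizes:

    import numpy as np

    rng = np.random.default_rng(1)

    def mi(p_joint):
        # I(A; B) in bits from a joint pmf matrix p(a, b)
        pa = p_joint.sum(1, keepdims=True)
        pb = p_joint.sum(0, keepdims=True)
        mask = p_joint > 0
        return float(np.sum(p_joint[mask] * np.log2(p_joint[mask] / (pa @ pb)[mask])))

    def check_once(nu=3, nv=3, ny=3):
        W1 = rng.dirichlet(np.ones(ny), size=2)        # p(y1 | x), binary input x in {0, 1}
        W2 = rng.dirichlet(np.ones(ny), size=2)        # p(y2 | x)
        p_uv = rng.dirichlet(np.ones(nu * nv)).reshape(nu, nv)
        x_of = rng.integers(0, 2, size=(nu, nv))       # deterministic x(u, v)
        p_ux = np.zeros((nu, 2))
        p_vx = np.zeros((nv, 2))
        for a in range(nu):
            for b in range(nv):
                p_ux[a, x_of[a, b]] += p_uv[a, b]
                p_vx[b, x_of[a, b]] += p_uv[a, b]
        p_x = p_ux.sum(axis=0)
        lhs = mi(p_vx @ W1) + mi(p_ux @ W2) - mi(p_uv)              # I(V;Y1) + I(U;Y2) - I(U;V)
        rhs = max(mi(np.diag(p_x) @ W1), mi(np.diag(p_x) @ W2))     # max{I(X;Y1), I(X;Y2)}
        return lhs - rhs

    gaps = [check_once() for _ in range(1000)]
    print("largest LHS - RHS over 1000 random instances:", max(gaps))  # the inequality says <= 0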


Perturbation method

∙ Used to bound cardinality of U , V

∙ Caratheodory method (Ahlswede–Korner 1975) does not work

∙ Gohari–Anantharam (2009) used perturbation to show: |U|, |V| ≤ |X|

∙ Consider the Marton sum-rate bound S(p(u, v|x)) = I(U; Y1) + I(V; Y2) − I(U; V)

– Want to show: max S(p) can be attained with |U|, |V| ≤ |X|

– Let p0(u, v|x) = argmax S(p) and define the perturbed pmf
  pє(u, v|x) = p0(u, v|x)(1 + є L(u)), where min_u (1 + є L(u)) ≥ 0 and
  Σ_u pє(u, v|x) L(u) = 0 for all x  ⇒  a non-zero L(u) exists if |U| > |X|

– p0(u, v|x) maximizer  ⇒  ∂S(pє)/∂є |_{p0} = 0 and ∂²S(pє)/∂є² |_{p0} ≤ 0

  After some calculation, we find that S(pє) must be constant in є!

– Choose є large enough that pє(u, v|x) = p0(u, v|x)(1 + є L(u)) = 0 for some u

– Repeat until |U| ≤ |X|

UV outer bound on capacity region of BC

UV outer bound (Nair–EG 2007)

If (R1 , R2) is achievable, then

R1 ≤ I(V; Y1),
R2 ≤ I(U; Y2),
R1 + R2 ≤ I(V; Y1) + I(U; Y2 | V),
R1 + R2 ≤ I(V; Y1 | U) + I(U; Y2)

for some p(u, v) and function x(u, v)

∙ Tight for almost all classes of BCs with known capacity regions

∙ Not tight in general (Geng–Gohari–Nair–Yu 2014)


Gap between Marton and UV for P-BC

∙ Recall, capacity region for α ∈ (α23 , α2), (s1 ≤ s2) is not known


Gap between Marton and UV for P-BC

[Figure: sum rate versus α ∈ (0.28, 0.4) for s1 = 0.1, s2 = 1 (so α23 = 0.27, α2 = 0.4): Marton inner bound, UV outer bound, and max{C1, C2}]

∙ Superposition coding is optimal for α ∈ (0.27, 0.286)

∙ No analytical expression for the cutoff α


Beyond effectively less noisy

∙ Recall that Y1 is effectively less noisy than Y2 if:

I(U; Y1) ≥ I(U; Y2) for every p*(x) ∈ P and every p(u|x), such that

max_{p(x)} max_{p(u|x)} [λ I(X; Y1 | U) + I(U; Y2)] = max_{p*(x) ∈ P} max_{p(u|x)} [λ I(X; Y1 | U) + I(U; Y2)]

for every 0 ≤ λ ≤ 1

∙ This definition can be tightened to:

I(U; Y2) ≤ I(U; Y1) for every p*(u, x) ∈ P(U, X), such that

max_{p(u,x)} [λ I(X; Y1 | U) + I(U; Y2)] = max_{p*(u,x) ∈ P(U,X)} [λ I(X; Y1 | U) + I(U; Y2)]

for every 0 ≤ λ ≤ 1

∙ This is the condition satisfied for the P-BC with α ∈ (0.27, 0.286)

∙ We don't have an explicit analytical expression for this range in general

Summary

∙ Highlighted techniques for computing capacity and bounds for BI-BC

– Concave envelope (less noisy, effectively less noisy, . . . )

– Marton = Cover–van der Meulen = randomized time division for BI-BC

∙ Takeaway: Only extremal distributions matter

∙ Presented new results on the Poisson BC:

– Superposition coding inner bound is almost always tight

– Gap between Marton and UV bounds


Related work

∙ Other applications of concave envelope:

– MIMO BC with private and common messages (Geng–Nair 2014)

Direct proof of DPC optimality (Weingarten–Steinberg–Shamai 2006)

Extends result to setup with common message

– Product of reversely semideterministic BCs (Geng–Gohari–Nair–Yu 2014)

Shows that UV outer bound is not tight

∙ Binary-input symmetric-output BC (Geng–Nair–Shamai–Wang 2013b)

– Either superposition coding is optimal or gap between Marton and UV


Final remarks

∙ Network information theory aims to:

– Find computable (analytical and numerical) expressions for capacity

– Coding schemes that achieve the capacity

∙ We know the capacity for very few channels

∙ We have inner and outer bounds for other channels

∙ Computational tools play a key role in the development of this theory

– Find explicit capacity characterizations for important channels

– Help evaluate optimality (or lack thereof) of inner and outer bounds

– Develop deeper understanding of the bounds

∙ Need more tools to evaluate capacity and its bounds


Acknowledgments

∙ Hyeji Kim, Ben Nachman

∙ Chandra Nair

∙ Young-Han Kim


References

Ahlswede, R. and Korner, J. (1975). Source coding with side information and a converse for degraded broadcast channels. IEEE Trans. Inf. Theory, 21(6), 629–637.

Anantharam, V., Gohari, A., and Nair, C. (2013). Improved cardinality bounds on the auxiliary random variables in Marton's inner bound. In Proc. IEEE Int. Symp. Inf. Theory, pp. 1272–1276.

Bergmans, P. P. (1973). Random coding theorem for broadcast channels with degraded components. IEEE Trans. Inf. Theory, 19(2), 197–207.

Cover, T. M. (1972). Broadcast channels. IEEE Trans. Inf. Theory, 18(1), 2–14.

Cover, T. M. (1975). An achievable rate region for the broadcast channel. IEEE Trans. Inf. Theory, 21(4), 399–404.

El Gamal, A. (1979). The capacity of a class of broadcast channels. IEEE Trans. Inf. Theory, 25(2), 166–169.

Gallager, R. G. (1974). Capacity and coding for degraded broadcast channels. Probl. Inf. Transm., 10(3), 3–14.

Geng, Y., Gohari, A., Nair, C., and Yu, Y. (2014). On Marton's inner bound and its optimality for classes of product broadcast channels. IEEE Trans. Inf. Theory, 60(1), 22–41.

Geng, Y., Jog, V., Nair, C., and Wang, Z. (2013a). An information inequality and evaluation of Marton's inner bound for binary input broadcast channels. IEEE Trans. Inf. Theory, 59(7), 4095–4105.

Geng, Y. and Nair, C. (2014). The capacity region of the two-receiver Gaussian vector broadcast channel with private and common messages. IEEE Trans. Inf. Theory, 60(4), 2087–2104.

Geng, Y., Nair, C., Shamai, S., and Wang, Z. (2013b). On broadcast channels with binary inputs and symmetric outputs. IEEE Trans. Inf. Theory, 59(11), 6980–6989.

Gohari, A. A. and Anantharam, V. (2009). Evaluation of Marton's inner bound for the general broadcast channel. In Proc. IEEE Int. Symp. Inf. Theory, Seoul, Korea, pp. 2462–2466.

Hajek, B. E. and Pursley, M. B. (1979). Evaluation of an achievable rate region for the broadcast channel. IEEE Trans. Inf. Theory, 25(1), 36–46.

Kim, H., Nachman, B., and El Gamal, A. (2015). Superposition coding is almost always optimal for the Poisson broadcast channel. In Proc. IEEE Int. Symp. Inf. Theory, Hong Kong.


Korner, J. and Marton, K. (1977). Comparison of two noisy channels. In I. Csiszar and P. Elias (eds.), Topics in Information Theory (Colloquia Mathematica Societatis Janos Bolyai, Keszthely, Hungary, 1975), pp. 411–423. North-Holland, Amsterdam.

Lapidoth, A., Telatar, I., and Urbanke, R. (2003). On wide-band broadcast channels. IEEE Trans. Inf. Theory, 49(12), 3250–3258.

Marton, K. (1979). A coding theorem for the discrete memoryless broadcast channel. IEEE Trans. Inf. Theory, 25(3), 306–311.

Nair, C. (2010). Capacity regions of two new classes of two-receiver broadcast channels. IEEE Trans. Inf. Theory, 56(9), 4207–4214.

Nair, C. (2013). Upper concave envelopes and auxiliary random variables. Int. J. Adv. Eng. Sci. Appl. Math., 5(1), 12–20.

Nair, C. and El Gamal, A. (2007). An outer bound to the capacity region of the broadcast channel. IEEE Trans. Inf. Theory, 53(1), 350–355.

Nair, C. and Wang, Z. V. (2008). On the inner and outer bounds for 2-receiver discrete memoryless broadcast channels. In Proc. UCSD Inf. Theory Appl. Workshop, La Jolla, CA, pp. 226–229.

van der Meulen, E. C. (1975). Random coding theorems for the general discrete memoryless broadcast channel. IEEE Trans. Inf. Theory, 21(2), 180–190.

van Dijk, M. (1997). On a special class of broadcast channels with confidential messages. IEEE Trans. Inf. Theory, 43(2), 712–714.

Weingarten, H., Steinberg, Y., and Shamai, S. (2006). The capacity region of the Gaussian multiple-input multiple-output broadcast channel. IEEE Trans. Inf. Theory, 52(9), 3936–3964.

Wyner, A. D. (1988a). Capacity and error exponent for the direct detection photon channel—Part I. IEEE Trans. Inf. Theory, 34(6), 1449–1461.

Wyner, A. D. (1988b). Capacity and error exponent for the direct detection photon channel—Part II. IEEE Trans. Inf. Theory, 34(6), 1462–1471.


Wyner, A. D. (1973). A theorem on the entropy of certain binary sequences and applications—II. IEEE Trans. Inf. Theory, 19(6), 772–777.

Wyner, A. D. and Ziv, J. (1973). A theorem on the entropy of certain binary sequences and applications—I. IEEE Trans. Inf. Theory, 19(6), 769–772.