Fundamental bounds for power consumption at the physical layer: "waterslide curves" and the price of certainty
Anant Sahai, based on joint work with student Pulkit Grover
Wireless Foundations, Department of Electrical Engineering and Computer Sciences
University of California at Berkeley
Support from NSF and Sumitomo Electric
LIDS Seminar: Apr 28th, 2008
Anant Sahai (UC Berkeley) Low-Power Communication MIT LIDS 1 / 31
Shannon tells us:
Delay: needed for laws of large numbers to apply
Power: needed to apply the laws of large numbers
The promise of the waterfall curve
[Figure: log10(⟨Pe⟩) versus power for uncoded transmission (BSC), the Shannon waterfall (BSC), and the Shannon waterfall (AWGN).]
The new problem
Classical goal: arbitrarily low probability of error. Classical assumption: not delay-sensitive at all. New twist: minimize total power consumption.

Important technology trends:
"Moore's law" allows billions of transistors, but only mildly reduces power consumption per transistor.
New short-range applications: in-home networks, dense meshes, personal-area networks, UWB, between-chip communication, on-chip communication, etc.
Review of key prior work
Ephremides, Wireless Communications Magazine, 2002: power consumption is crucial across networking layers; many related papers.
Howard, Schlegel, and Iniewski, EURASIP Wireless Comm/Net, 2006: empirical study; found uncoded is often better.
Cui, Goldsmith, Bahai, Journal of Wireless Comm, 2005: semi-empirical with uncoded modulation; found larger constellations and higher rates better.
Massaad, Médard, and Zheng, ISITA 2004: information-theoretic with constant decoder power; found it better to use higher rates.
Bhardwaj and Chandrakasan, Allerton 2007: focus on receiver sampling cost in UWB; found lower rates and adaptive sampling better.
Outline
1. Motivation and introduction
2. Classical results revisited
3. A model for decoder power consumption
4. General lower bounds
5. Asymptotic behavior near capacity
6. Optimal choice of rate
What should capacity-achieving mean?
min ξ_T P_T + ξ_C P_C + ξ_D P_D

ξ_T: net path loss (e.g. ≈ 86 dB for short range).
ξ_C, ξ_D: choice of weights for encoder and decoder power.

Assume P_e → 0. What happens to the optimizing P_T, P_C, P_D?

P_T stays bounded: weakly certainty achieving.
P_T, P_C, P_D all stay bounded: strongly certainty achieving.

What happens as ξ_C, ξ_D → 0? Computation is asymptotically free, but not actually free. P_T → C^{-1}(R): capacity achieving.
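As a concrete anchor for the capacity-achieving limit P_T → C^{-1}(R), the AWGN case inverts in closed form; a minimal sketch (Python), assuming unit-variance noise and C = ½ log2(1 + P_T/σ²) in bits per channel use:

```python
import math

def awgn_capacity(P, sigma2=1.0):
    """AWGN capacity in bits per channel use: C = 0.5*log2(1 + P/sigma2)."""
    return 0.5 * math.log2(1.0 + P / sigma2)

def min_transmit_power(R, sigma2=1.0):
    """Invert C(P) = R: the capacity-achieving transmit power C^{-1}(R)."""
    return sigma2 * (2.0 ** (2.0 * R) - 1.0)

# At rate R = 1 bit per channel use, the Shannon limit is P_T = 3*sigma^2.
P = min_transmit_power(1.0)  # -> 3.0
```

This is exactly the P_T that the transmit power converges to when computation is treated as asymptotically free.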
Decoding power vs communication range
[Figure: the decoding-power weight γ in dB versus communication distance, from 1 mm to 10 km.]
Dense linear codes with brute-force decoding
[Figure: waterslide curves, log10(⟨Pe⟩) versus power, showing total power, optimal transmit power, decoding power, and the Shannon waterfall.]

Decoding power: nR·2^{nR}. Error probability: 2^{−E_sp(R,P)n}.
Convolutional codes with Viterbi decoding
[Figure: waterslide curves, log10(⟨Pe⟩) versus power, showing total power, optimal transmit power, decoding power, and the Shannon waterfall.]

Decoding power: L_c R·2^{L_c R}. Error probability: 2^{−E_conv(R,P)L_c}.
Convolutional with “magical” sequential decoding
[Figure: waterslide curves, log10(⟨Pe⟩) versus power, showing the Shannon waterfall, optimal transmit power, decoding power, and total power.]

Decoding power: L_c R. Error probability: 2^{−E_conv(R,P)L_c}.
Dense linear codes with “magical” syndrome decoding
[Figure: waterslide curves, log10(⟨Pe⟩) versus power, showing the Shannon waterfall, optimal transmit power, total power, and decoding power.]

Decoding power: (1 − R)nR. Error probability: 2^{−E_sp(R,P)n}.
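To make the four decoding-power scalings concrete, one can compare the cost each architecture pays to hit a target error probability. The sketch below (Python) takes the slide scalings at face value with all proportionality constants dropped; the error exponent E and the choice L_c = n are illustrative assumptions, not from the slides:

```python
import math

# Illustrative numbers (assumptions, not from the slides):
E = 0.1           # error exponent, bits per symbol
R = 0.5           # rate in bits per channel use
target_pe = 1e-6

# Block length n (or constraint length L_c) so that 2^{-E*n} <= target Pe.
n = math.ceil(math.log2(1.0 / target_pe) / E)   # -> 200
Lc = n            # treat the constraint length like the block length here

# Decoding-power scalings as stated on the slides (constants dropped):
brute_force = n * R * 2 ** (n * R)     # dense linear code, brute-force decoding
viterbi     = Lc * R * 2 ** (Lc * R)   # convolutional code, Viterbi decoding
sequential  = Lc * R                   # "magical" sequential decoding
syndrome    = (1 - R) * n * R          # "magical" syndrome decoding
```

The exponential terms dominate: brute-force and Viterbi decoding blow up with the block/constraint length, while the two "magical" decoders stay linear in it.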
A new hope: iterative decoding
Make assumptions about the decoder rather than the code.

[Figure: a computational node in the decoder's message-passing graph, with channel outputs Y_i and decoded bits B_j.]

Each node consumes E_node energy per iteration and can send arbitrary messages to its α + 1 neighbors.
Run for a fixed number of iterations i.
Rich enough to capture LDPC, RA, Turbo, etc. codes. Power consumption ≥ i·E_node per received sample.
How to lower-bound the number of iterations?
[Figure: the computation tree rooted at a single bit node, spanning channel outputs Y_i and bit nodes B_j. Key concept: the decoding neighborhood.]

Decoding neighborhood size n ≤ 1 + (α + 1)α^{i−1} ≈ α^i.
Need to lower-bound the average probability of bit error in terms of n.
Key insight: n is playing a role analogous to delay.
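The neighborhood bound is easy to check numerically; a small sketch (Python) computes n(i) for node degree α + 1 and inverts it to get the minimum number of iterations before a decoding neighborhood could even span a given block size:

```python
def neighborhood_size(alpha, i):
    """Upper bound on the decoding neighborhood after i iterations:
    n <= 1 + (alpha + 1) * alpha**(i - 1), roughly alpha**i."""
    return 1 + (alpha + 1) * alpha ** (i - 1)

def min_iterations(alpha, n):
    """Smallest i such that the neighborhood could reach n nodes."""
    i = 1
    while neighborhood_size(alpha, i) < n:
        i += 1
    return i

# With alpha = 3, neighborhoods grow roughly like 3^i:
sizes = [neighborhood_size(3, i) for i in (1, 2, 3)]  # -> [5, 13, 37]
```

Because n grows exponentially in i, the iteration count only needs to grow like the logarithm of the "delay" n, which is what makes double-exponential error decay possible.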
A local “sphere-packing” bound for the AWGN
Decoding neighborhood size n ≤ 1 + (α + 1)α^{i−1} ≈ α^i.

$$\langle P_e \rangle \;\ge\; \sup_{\sigma_G^2 > \sigma_P^2\,\mu(n):\; C(G) < R} \frac{h_b^{-1}(\delta(G))}{2} \exp\!\left(-n\,D(\sigma_G^2 \,\|\, \sigma_P^2) \;-\; \frac{1}{2}\,\phi\!\left(n, h_b^{-1}(\delta(G))\right)\left(\frac{\sigma_G^2}{\sigma_P^2} - 1\right)\right)$$

where $C(G) = \frac{1}{2}\log_2\!\left(1 + \frac{P_T}{\sigma_G^2}\right)$ and $\delta(G) = 1 - \frac{C(G)}{R}$, and

$$\mu(n) = \frac{1}{2}\left(1 + \frac{1}{T(n)+1} + \frac{4T(n)+2}{n\,T(n)\,(1+T(n))}\right), \qquad T(n) = -W_L\!\left(-e^{-1}(1/4)^{1/n}\right),$$

$$\phi(n, y) = -n\left(W_L\!\left(-e^{-1}\left(\frac{y}{2}\right)^{2/n}\right) + 1\right),$$

where $W_L(x)$ solves $x = W_L(x)\exp(W_L(x))$ (the lower branch of the Lambert W function).
Double-exponential potential return on investments in decoding power!
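The helper functions T(n) and φ(n, y) need the lower branch W_L of the Lambert W function on (−1/e, 0). A self-contained numerical sketch (Python; plain bisection rather than a library routine, since only the real branch with W ≤ −1 is needed, and the (y/2)^{2/n} exponent in φ is an assumed reading of the slide):

```python
import math

def lambert_w_lower(x):
    """Lower branch W_L of the Lambert W function on [-1/e, 0):
    solves w*exp(w) = x with w <= -1. On w <= -1 the map w -> w*exp(w)
    is monotone decreasing, so plain bisection suffices."""
    assert -1.0 / math.e <= x < 0.0
    lo, hi = -745.0, -1.0       # exp(-745) underflows, so f(lo) is just below 0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid * math.exp(mid) > x:
            lo = mid            # f(mid) > x: need a larger w (f decreases in w)
        else:
            hi = mid
    return 0.5 * (lo + hi)

def T(n):
    """T(n) = -W_L(-exp(-1) * (1/4)^(1/n)) from the slide."""
    return -lambert_w_lower(-math.exp(-1.0) * 0.25 ** (1.0 / n))

def phi(n, y):
    """phi(n, y) = -n*(W_L(-exp(-1)*(y/2)^(2/n)) + 1); the (y/2)^(2/n)
    exponent is a hypothesized reconstruction of the garbled original."""
    return -n * (lambert_w_lower(-math.exp(-1.0) * (y / 2.0) ** (2.0 / n)) + 1.0)
```

With W_L in hand, μ(n), T(n), and φ(n, y) plug directly into the displayed lower bound.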
Waterslide curves for general AWGN case
[Figure: waterslide curves for the general AWGN case, log10(⟨Pe⟩) versus power, for γ = 0.4, 0.3, 0.2, with the Shannon limit.]
A local “sphere-packing” bound for the BSC
$$\langle P_e \rangle \;\ge\; \sup_{C^{-1}(R) < g \le \frac{1}{2}} \frac{h_b^{-1}(\delta(g))}{2}\, 2^{-n D(g\|p)} \left(\frac{p(1-g)}{g(1-p)}\right)^{\epsilon\sqrt{n}}$$

where
h_b(p): binary entropy function;
δ(g) = 1 − C(g)/R;
D(g‖p): KL divergence;

$$\epsilon = \sqrt{\frac{1}{K(g)} \log\frac{2}{h_b^{-1}(\delta(g))}}, \qquad K(g) = \inf_{0<\eta<1-g} \frac{D(g+\eta\,\|\,g)}{\eta^2}.$$

Double-exponential actual returns observed by Lentmaier, et al. 2005 for regular LDPC codes with iterative decoding!
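The BSC bound is directly computable; a numerical sketch (Python, grid search over g, with h_b^{-1} by bisection; taking the log in ε base 2 is an assumption, chosen to match the 2^{-nD} form):

```python
import math

def hb(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def hb_inv(y):
    """Inverse of hb on [0, 1/2], by bisection (hb is increasing there)."""
    lo, hi = 0.0, 0.5
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if hb(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def dkl(g, p):
    """KL divergence D(g||p) between Bernoulli(g) and Bernoulli(p), in bits."""
    return g * math.log2(g / p) + (1 - g) * math.log2((1 - g) / (1 - p))

def K(g, steps=1000):
    """K(g) = inf over 0 < eta < 1-g of D(g+eta||g)/eta^2, grid approximation."""
    best = float("inf")
    for k in range(1, steps):
        eta = k * (1.0 - g) / steps
        best = min(best, dkl(g + eta, g) / eta ** 2)
    return best

def bsc_lower_bound(n, R, p, steps=200):
    """Local sphere-packing lower bound on <Pe> for BSC(p) at rate R with
    decoding-neighborhood size n: sup over g in (C^{-1}(R), 1/2]."""
    g_min = hb_inv(1.0 - R)          # C^{-1}(R) for the BSC: 1 - hb(g) = R
    best = 0.0
    for k in range(1, steps + 1):
        g = g_min + (0.5 - g_min) * k / steps
        delta = 1.0 - (1.0 - hb(g)) / R
        if delta <= 0.0:
            continue
        pe_g = hb_inv(delta)         # h_b^{-1}(delta(g))
        eps = math.sqrt(math.log2(2.0 / pe_g) / K(g))
        val = (pe_g / 2.0) * 2.0 ** (-n * dkl(g, p)) \
              * ((p * (1.0 - g)) / (g * (1.0 - p))) ** (eps * math.sqrt(n))
        best = max(best, val)
    return best
```

Since n ≈ α^i, plotting this bound against n (and hence i) traces out the waterslide: the bound decays only singly exponentially in n, i.e. doubly exponentially in the iteration count.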
Detailed look at BPSK case
[Figure: waterslide curves for BPSK, log(⟨Pe⟩) versus power, for γ = 0.4, 0.3, 0.2, with the Shannon waterfall.]
[Figure: upper and lower bounds on total power versus log(⟨Pe⟩), with the optimal transmit power, the Shannon waterfall, and the optimal transmit power at low ⟨Pe⟩.]
Deviations from Shannon
[Figure: P_opt/C^{-1}(R) in dB versus log10(γ).]
[Figure: normalized power gap, optimal transmit power, and the Shannon limit (power in dB) versus rate.]
[Figure: log10(⟨Pe⟩) versus log10(γ).]
Sketch of proof
Pretend the code runs over channel G instead of P.
δ(G) > 0 implies the average probability of error is h_b^{-1}(δ(G)).
The channel only needs to misbehave over the decoding neighborhoods to cause a bit error.
ε/φ assure that the misbehavior is "typical" (even if n is small).
The probability of an error event under P is lower-bounded by a convex-∪ function f of the probability under G.
The average probability under P is minimized by f of the average probability under G.
The low Pe limit
Dominant term: the D(G||P) term in the exponent.

$$P_e \approx \exp\!\left(-D(C^{-1}(R)\|P)\,n\right) = \exp\!\left(-D(C^{-1}(R)\|P)\,\alpha^i\right)$$

Take double logs of both sides:

$$\ln\ln\frac{1}{P_e} \approx \ln D(C^{-1}(R)\|P) + i\ln\alpha$$

Minimize the sum ζ + γi, where γ = ξ_D E_node/(σ_P² ξ_T ln α) and ζ = P_T/σ_P².

Solved by:

$$f(R,\zeta)\Big/\frac{\partial f(R,\zeta)}{\partial\zeta} = \gamma, \qquad \text{where } f\!\left(R, \frac{P_T}{\sigma_P^2}\right) = D(C^{-1}(R)\|P).$$
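The tradeoff can be explored numerically. The sketch below (Python) specializes to an AWGN link with unit noise variance: at normalized transmit power ζ and rate R, the tilted channel G has noise variance σ_G² = ζ/(2^{2R} − 1), so f(R, ζ) = D(σ_G² ‖ 1) = ½(σ_G² − 1 − ln σ_G²); the iteration count then follows from the double-log relation, and we sweep ζ to minimize ζ + γ·i. The specific numbers (R, α, γ, target Pe) are illustrative assumptions:

```python
import math

# Illustrative parameters (assumptions, not from the slides):
R = 1.0            # rate in bits per channel use
alpha = 3.0        # each computational node talks to alpha + 1 neighbors
gamma = 0.3        # normalized decoding-energy weight xi_D*E_node/(sigma_P^2*xi_T*ln(alpha))
target_pe = 1e-10

shannon_limit = 2.0 ** (2.0 * R) - 1.0     # C^{-1}(R) with sigma_P^2 = 1

def f(zeta):
    """f(R, zeta) = D(sigma_G^2 || 1), where sigma_G^2 = zeta/(2^{2R}-1) > 1 is
    the noise level that pulls the capacity down to exactly R."""
    sg2 = zeta / shannon_limit
    return 0.5 * (sg2 - 1.0 - math.log(sg2))

def iterations(zeta):
    """Invert ln ln(1/Pe) ~ ln f(R, zeta) + i*ln(alpha) for i."""
    return (math.log(math.log(1.0 / target_pe)) - math.log(f(zeta))) / math.log(alpha)

# Sweep normalized transmit power above the Shannon limit; minimize zeta + gamma*i.
candidates = [shannon_limit * (1.0 + k / 100.0) for k in range(1, 400)]
total_cost, zeta_opt = min((z + gamma * iterations(z), z) for z in candidates)
```

The optimum lands strictly above the Shannon limit: transmitting a little extra power buys a larger divergence f and hence fewer (costly) iterations.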
The waterfall curve revisited
[Figure: log10(⟨Pe⟩) versus power for uncoded transmission (BSC), the Shannon waterfall (BSC), and the Shannon waterfall (AWGN).]

Two gaps: (C/(1 − h_b(P_e)) − C) and gap = C − R.

Pick a path to certainty: P_e → 0, so

$$i \approx \frac{\ln\ln\frac{1}{P_e}}{\ln\alpha} + \frac{\ln\frac{1}{D(C^{-1}(R)\|P)}}{\ln\alpha}$$

Let R → C, and so i = log_α ln(1/P_e) + 2 log_α(1/gap) + o(···).
Alternatively, pick a joint path to certainty: P_e = gap^β, and let R → C . . .
Zoom into the neighborhood of capacity
[Figure: log10(n) versus log10(gap) for β = 2, β = 1.5, "balanced" gaps, β = 0.75, and β = 0.5.]
Asymptotic scaling of iterations and gap
Assume P_e = gap^β and a BSC:

If β ≥ 1, i ≥ 2 log(1/gap) + log log(1/gap) + c_β.
If β ≤ 1, i ≥ 2β log(1/gap) + log log(1/gap) + c_β.

Proved by taking g* = p + gap^r and using careful Taylor expansions around g = p.

Much better than the semi-trivial bound of Ω(log log(1/gap)).

Much more optimistic than the Khandekar-McEliece conjectured Ω(1/gap).
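To see how much weaker the logarithmic lower bound is than the conjectured linear one, a quick numerical comparison (Python; the constant c_β is dropped, which only shifts the curve, and the logs are taken base 2 as an assumption):

```python
import math

def iteration_lower_bound(gap, beta=1.0):
    """The slide's lower bound with the constant c_beta dropped:
    i >= 2*min(beta, 1)*log(1/gap) + log log(1/gap)."""
    return 2.0 * min(beta, 1.0) * math.log2(1.0 / gap) \
        + math.log2(math.log2(1.0 / gap))

def khandekar_mceliece(gap):
    """The conjectured Omega(1/gap) scaling, with the constant taken as 1."""
    return 1.0 / gap

# As the gap to capacity shrinks, the conjectured complexity dwarfs the bound:
for gap in (1e-2, 1e-4, 1e-6):
    assert iteration_lower_bound(gap) < khandekar_mceliece(gap)
```

At gap = 10^-6 the logarithmic bound asks for a few dozen iterations, while the conjectured scaling asks for on the order of a million, which is the sense in which this bound is "much more optimistic."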
Assume a fixed message size
Suppose the message is very large, but not urgent. Why not increase the rate?

Advantage: fewer samples to process.
Disadvantage: need higher capacity and thus more power.

Energy per message bit: (1/R)·(P_T/σ_P²) + max(1, 1/R)·γ·i.

Optimize over the choice of R as P_e varies.
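The per-bit energy expression can be swept numerically. A sketch (Python) that assumes a unit-noise AWGN link with P_T(R) = C^{-1}(R) = 2^{2R} − 1 as the transmit power needed at rate R, and treats γ and the iteration count i as fixed illustrative constants rather than solving the full joint optimization:

```python
# Illustrative constants (assumptions, not from the slides):
gamma = 0.1    # weighted decoding energy per received sample per iteration
iters = 10     # fixed iteration count

def energy_per_bit(R):
    """(1/R)*P_T/sigma_P^2 + max(1, 1/R)*gamma*i, with P_T = C^{-1}(R) =
    2^{2R} - 1 for a unit-noise AWGN link (assumed channel model)."""
    p_t = 2.0 ** (2.0 * R) - 1.0
    return p_t / R + max(1.0, 1.0 / R) * gamma * iters

# Sweep rates and pick the cheapest per-bit energy.
rates = [k / 100.0 for k in range(5, 300)]
r_opt = min(rates, key=energy_per_bit)
```

At very low rates the per-sample decoding cost is paid over many samples per bit; at very high rates the required transmit power explodes; the optimum sits in between and shifts upward as γ grows.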
Results for BPSK signaling
[Figure: optimal rate R_opt versus log10(⟨Pe⟩) for γ = 100, γ = 0.4, and γ = 0.1.]
[Figure: log10(⟨Pe⟩) versus energy per bit for γ = 1, γ = 0.4, γ = 0.1, and uncoded transmission, with the limiting value of optimal transmit power neglecting the processing energy.]
Concluding remarks
More details: arXiv:0801.0352, www.eecs.berkeley.edu/∼sahai/

Bounds can probably be tightened.
Encoding power still needs to be better understood.
Is there a way around our model of decoding? How to reconcile with "linear-time" codes like expanders?
Expand scope to cover multiterminal problems and other components.