
Combinatorial Channel Signature Modulation for Wireless Networks

Dino Sejdinovic (Gatsby Unit, UCL), joint work with Rob Piechocki (CCR, Bristol)

Signal Processing and Communications Laboratory, University of Cambridge

February 8, 2012

Problem outline

• wireless mutual broadcast with N + 1 half-duplex nodes

• each node has a k-bit message to transmit to all others

• Can all nodes transmit at the same frequency without elaborate time scheduling?


Introduction

CCSM (Combinatorial Channel Signature Modulation):

• sparse, combinatorial representation of messages

• compressed sensing based decoding

• minimal MAC-layer coordination: access to a shared channel

• no need for a collision detection/avoidance scheme (CSMA/CA)

• robust to time dispersion

• no need for guard intervals to eliminate ISI


Overview

CS for multiterminal communications

Combinatorial encoder

Sparse recovery solver

Simulation results


Compressed sensing

• Compressed sensing (Candès, Romberg and Tao 2006; Donoho 2006) combines the sampling and compression steps into one: taking random linear projections.

[Figure: classical pipeline: sample N values, compress to K ≪ N coefficients, store/transmit, load/receive, decompress back to f of length N. Compressed sensing pipeline: project f to M ≪ N random measurements, store/transmit, load/receive, reconstruct f.]

number of projections ≈ number of non-zeros × log(size of the vector)

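As a sanity check of that rule of thumb, here is a minimal numpy sketch (mine, not from the talk): an s-sparse vector of length N is recovered from on the order of s log N random projections, with a plain orthogonal matching pursuit loop standing in for a production CS solver.

    import numpy as np

    rng = np.random.default_rng(0)
    N, s = 256, 8                        # ambient dimension, non-zeros
    M = int(4 * s * np.log(N))           # projections ~ non-zeros x log(size)

    x = np.zeros(N)
    x[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
    A = rng.standard_normal((M, N)) / np.sqrt(M)   # random projection matrix
    y = A @ x                            # M << N linear measurements

    support, r = [], y.copy()
    for _ in range(s):                   # OMP: add the most correlated column
        support.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef     # re-fit and update the residual
    x_hat = np.zeros(N)
    x_hat[support] = coef
    print(np.linalg.norm(x - x_hat))     # ~1e-15: exact recovery w.h.p.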

Embracing the additive nature of wireless

• (Zhang and Guo 2010): each node is assigned a (known) dictionary of (sparse) on-off signalling codewords, each codeword corresponding to a single message

• Own transmissions are seen as erasures in the dictionary

• The dictionary size is exponential in the number of bits k in a message: a sparse recovery problem of size 2^k N

[Figure: the 2^k on-off codewords of a node's dictionary; the slots in which the recipient itself transmits appear as erasures.]

Combinations of the codebook elements?

[Figure: the codeword transmitted by user 1 is convolved with the channel impulse response, and the resulting codeword span appears in the waveform received by user 2.]

• Idea: transmit (weighted) sums of a fixed number l of on-off signalling codewords.

• the choice of the l-combination of the dictionary elements carries information

• Requires efficient encoding of messages into l-combinations: constant weight coding (l-out-of-L codes)

• Need l ≪ L for sparse recovery

Reduction of complexity

• (Zhang and Guo, 2010): k = log₂ L, giving a sparse recovery problem of size 2^k N

• CCSM: k ≈ log₂ (L choose l) = O(L^α log₂ L) for l = L^α, 0 < α < 1, giving a sparse recovery problem of size ∼ k^(1/α) N (a numeric check follows below)
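A quick numeric check of these problem sizes (my arithmetic, using the parameter values that appear later in the talk):

    from math import comb, log2

    L, l = 64, 12
    k = int(log2(comb(L, l)))   # bits carried by the choice of l-combination
    print(k)                    # 41
    print(2 ** k)               # one-codeword-per-message dictionary: ~2.2e12 columns
    print(L)                    # CCSM dictionary columns per node: 64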

Encoder

[Figure: encoder i. The information vector b_i ∈ F_2^(k+lq) is split: k bits feed the CWC mapper, which outputs the constant weight codeword c_i, and lq bits feed the weighing mapper, which outputs weights w (±1 in the figure); c_i selects l columns s_{i,1}, . . . , s_{i,l} of the on-off signalling dictionary, which are weighted and summed over the time slots to form the transmitted codeword x_i.]

φ_C : 00000 ↦ 010100000
φ_C : 00001 ↦ 000101000
· · ·


Constant weight coding

• Combinatorial representation of information

• using l-combinations of a set of L elements, i.e., length-L binary vectors with exactly l ones.

• l ≪ L (sparse)

Definition
An (L, l)-constant weight code is a set of length-L binary vectors with Hamming weight l:

C ⊆ {c ∈ F_2^L : w_H(c) = l}.

• single-bit error / single-type error detection

• two-out-of-five barcode

Constant weight coding

• Set of possible messages: S = {0, 1, . . . , K − 1}. Usually K = 2^k, and we identify S ↔ F_2^k.

• An (L, l)-constant weight code is C ⊆ {c ∈ F_2^L : w_H(c) = l}.

• Goal: construct a bijective map φ_C : S → C

• Enumeration approaches:

  • (Schalkwijk 1972), (Cover 1973): lexicographic ordering of codewords; requires registers of length O(L) (sketched below)

  • (Ramabadran 1990): arithmetic coding, computational complexity O(L)

  • (Knuth 1986): complementation method, computational complexity O(L), but much faster in practice; works only for balanced codes, l = ⌊L/2⌋

• Can we do better when l ≪ L?

• (Tian, Vaishampayan, Sloane 2009): embed both S and C into R^l and establish bijective maps by dissecting certain polytopes in R^l.

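As a concrete instance of the enumeration approach, here is a sketch of a Schalkwijk/Cover-style lexicographic ranking via the combinatorial number system (my construction, not the paper's dissection-based map); the big-integer arithmetic on (L choose l) is exactly where the O(L)-length registers come from.

    from math import comb

    def cwc_encode(s, L, l):
        """Map message index s in [0, comb(L, l)) to a weight-l binary list."""
        c = [0] * L
        for pos in range(L):
            rest = comb(L - pos - 1, l - 1)   # codewords with a 1 at this position
            if s < rest:
                c[pos], l = 1, l - 1
                if l == 0:
                    break
            else:
                s -= rest                     # skip past those codewords
        return c

    def cwc_decode(c, l):
        """Inverse map: weight-l word back to its message index."""
        L, s = len(c), 0
        for pos, bit in enumerate(c):
            if bit:
                l -= 1
                if l == 0:
                    break
            else:
                s += comb(L - pos - 1, l - 1)
        return s

    # round trip over all 36 codewords of the (9, 2) code
    assert all(cwc_decode(cwc_encode(s, 9, 2), 2) == s for s in range(comb(9, 2)))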

Embedding C into R^l

• Given a constant-weight codeword c, define φ(c) = (y_1/L, . . . , y_l/L) ∈ R^l, where y_i := position of the i-th 1 in c.

• For L = 5, l = 2: φ(01010) = (2/5, 4/5).

• φ(C) is a discrete subset of the convex hull T_l of the points (0, 0, . . . , 0, 0), (0, 0, . . . , 0, 1), . . . , (0, 1, . . . , 1, 1), (1, 1, . . . , 1, 1).

[Figure: C embedded in a tetrahedron. The triangle T_2 in the (x, y) plane with vertices (0, 0), (0, 1), (1, 1), and the tetrahedron T_3 in (x, y, z) with vertices (0, 0, 0), (0, 0, 1), (0, 1, 1), (1, 1, 1).]

• T_2 is the right triangle with vertices (0, 0), (0, 1), (1, 1)

• Vol(T_l) = 1/l! (the unit cube can be split into l! tetrahedra congruent to T_l)

The case l = 2

[Figure: the image of φ for l = 2. Left panel, n = 5: points of φ(C) inside the triangle, with (1/5, 2/5) = φ(11000) and (2/5, 4/5) = φ(01010) marked. Right panel, n = 50: the points of φ(C) densely fill T_2.]
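A two-line check of the embedding against the slide's examples (assuming 1-based positions of the ones, as the examples imply):

    import numpy as np

    def phi(c):
        """Map a weight-l binary word to (y_1/L, ..., y_l/L)."""
        return (np.flatnonzero(c) + 1) / len(c)   # 1-based positions of ones

    print(phi([0, 1, 0, 1, 0]))   # [0.4 0.8] = (2/5, 4/5)
    print(phi([1, 1, 0, 0, 0]))   # [0.2 0.4] = (1/5, 2/5)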

Assume: information is a brick

• The brick B_l ⊂ R^l is the hyper-rectangle

  B_l := [0, 1] × [1/2, 1] × [2/3, 1] × · · · × [(l − 1)/l, 1]

• Represent messages s ∈ {0, 1, . . . , K − 1}, K ≤ (L choose l), as integer l-tuples (b_1^(s), b_2^(s), . . . , b_l^(s)) such that (b_1^(s)/L, b_2^(s)/L, . . . , b_l^(s)/L) ∈ B_l: quotient and remainder

• Vol(B_l) = 1/l! = Vol(T_l)

[Figure: the brick B_2 with vertices (0, 1/2), (0, 1), (1, 1), (1, 1/2), and the brick B_3 = [0, 1] × [1/2, 1] × [2/3, 1].]

Dissections

Hilbert's third problem: for any two polyhedra of the same volume, is it possible to dissect one into a finite number of pieces that can be rearranged to give the other?

• In l = 2 dimensions (Bolyai-Gerwien 1833): yes, "scissor-equivalence"

• In d ≥ 3 dimensions (Dehn, 1902): no; the polyhedra must also have equal Dehn invariants.


Encoding

[Figure: the brick B_3 = [0, 1] × [1/2, 1] × [2/3, 1] and the tetrahedron T_3 with vertices (0, 0, 0), (0, 0, 1), (0, 1, 1), (1, 1, 1).]

• messages S = {0, 1, . . . , K − 1} ←→ B_l ⊂ R^l ←(dissection)→ T_l ⊂ R^l ←→ constant weight code C

• (Tian, Vaishampayan, Sloane 2009) give an explicit recursive dissection of B_l into T_l, computable in O(l²)

Rate

• CWC rate: R(C) = (1/L) log₂ |C| ≤ (1/L) log₂ (L choose l) = O((l log₂ L)/L) → 0 as L → ∞ when l ≪ L.

CCSM rate ≠ CWC rate

• CCSM rate: (N/M) (log₂ (L choose l) + lq), where

• M is the time duration of the waveforms (number of rows in the CS problem + number of own transmissions)

• Typical CS results: it suffices to take M ≈ number of non-zeros × log(size of the vector)

• M = O(l N log₂(L N)) ⇒ CCSM rate = O(log₂ L / (log₂ L + log₂ N)) (worked numbers below)

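Worked numbers (my arithmetic, using the simulation parameters from the plots that follow): with N + 1 = 5 users, L = 64, l = 12 and q = 2, each message carries k = ⌊log₂ (L choose l)⌋ + lq = 65 bits, so at waveform length M the network throughput is N·k/M bits per symbol interval.

    from math import comb, log2

    N, L, l, q = 4, 64, 12, 2               # N + 1 = 5 users
    k = int(log2(comb(L, l))) + l * q       # 41 + 24 = 65 bits per message
    for M in (150, 200, 250):               # waveform lengths from the plots
        print(M, round(N * k / M, 2))       # 1.73, 1.3, 1.04 bits/symbol interval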


Channel signatures

[Figure: the codeword transmitted by user 1 is convolved with the channel impulse response, stretching the codeword span in the waveform received by user 2; user 2's own transmitted codeword likewise occupies a modified codeword span.]

• Each user convolves the codebook elements with the channel impulse response

• Each user can subtract the echoes of its own transmissions (self-interference remover), as illustrated below
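A toy model of the self-interference remover (my simplification: one interfering node, a perfectly known self-channel, and no erasures): since the node knows its own codeword and self-channel, the echo of its own transmission can be subtracted exactly.

    import numpy as np

    rng = np.random.default_rng(1)
    M = 64
    x_own = rng.standard_normal(M)        # node's own transmitted codeword
    x_other = rng.standard_normal(M)      # another node's codeword
    h_self = rng.standard_normal(4)       # self-channel impulse response
    h_other = rng.standard_normal(4)      # channel from the other node

    y = np.convolve(h_other, x_other)[:M] + np.convolve(h_self, x_own)[:M]
    y_clean = y - np.convolve(h_self, x_own)[:M]   # subtract own echo
    print(np.allclose(y_clean, np.convolve(h_other, x_other)[:M]))  # True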

Decoder

[Figure: decoder i. The received waveform ỹ_i superposes every user's codeword x_j = S_j c_j convolved with the channel h_{i,j} between transmitter j and receiver i, plus additive noise z̃_i; node i's own transmit slots act as an erasure channel. The self-interference remover subtracts h_{i,i} ∗ x_i (the self-channel echo), the sparse recovery solver produces v_i, and the weighing and CWC demappers output the estimated codewords ĉ_0, . . . , ĉ_N.]

x_i = S_i c_i

y_i = E_i ( Σ_{j=0}^{N} h_{i,j} ∗ S_j c_j + z_i ) − E_i ( h_{i,i} ∗ S_i c_i ) = A_{−i} v_{−i} + z̃_i

Sparse recovery solver

y_i = E_i ( Σ_{j=0}^{N} h_{i,j} ∗ S_j c_j + z_i ) − E_i ( h_{i,i} ∗ S_i c_i ) = A_{−i} v_{−i} + z̃_i

• User i needs to solve the following problem to detect the desired signal:

  v̂_{−i} = argmin_{v_{−i}} ‖y_i − A_{−i} v_{−i}‖₂  s.t. ‖c_j‖₀ = l for all j ≠ i

• A non-convex optimisation problem.

• Exactly Nl out of NL entries of v_{−i} are non-zero, but the sparsity level is l/L ≪ 1

• Can use any of the myriad of CS decoding algorithms.


Sparse recovery solver

• Basis Pursuit / LASSO / convex relaxation (Tibshirani 1996), (Chen, Donoho, and Saunders 1998)

• LARS / homotopy (Efron et al. 2004)

• Greedy iterative methods:

  • Compressive Sampling Matching Pursuit, CoSaMP (Needell and Tropp 2009)

  • Subspace Pursuit, SP (Dai and Milenkovic 2009)

Subspace Pursuit

Greedily searches for the support set S such that y is closest to span(A_S).

1. Initialise. Set S to the Nl columns that maximize |⟨a_i, y⟩|.

2. Identify further candidates. Set S′ to the Nl columns that maximize |⟨a_i, y − proj(y, span(A_S))⟩|.

3. Merge and prune. Set S to the Nl columns from S ∪ S′ with largest magnitudes in A⁺_{S∪S′} y.

4. Iterate (2)-(3) until the stopping criterion holds.

[Figure: y, its projection proj(y, span(A_S)), and the residual r = y − proj(y, span(A_S)).]
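A compact numpy sketch of those four steps (written from the description above, not the authors' code; s plays the role of Nl, and the stopping criterion is a non-decreasing residual):

    import numpy as np

    def subspace_pursuit(A, y, s, max_iter=20):
        def lsq(cols):
            coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
            return coef

        S = np.argsort(-np.abs(A.T @ y))[:s]          # 1. initialise
        r = y - A[:, S] @ lsq(S)
        for _ in range(max_iter):
            cand = np.argsort(-np.abs(A.T @ r))[:s]   # 2. further candidates
            union = np.union1d(S, cand)
            coef = lsq(union)                         # 3. merge ...
            S = union[np.argsort(-np.abs(coef))[:s]]  # ... and prune
            r_new = y - A[:, S] @ lsq(S)
            if np.linalg.norm(r_new) >= np.linalg.norm(r):
                break                                 # 4. stopping criterion
            r = r_new
        x = np.zeros(A.shape[1])
        x[S] = lsq(S)
        return x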

Group Subspace Pursuit

The sparse vector has additional structure: each of the N subvectors has exactly l non-zero entries.

1. Initialise. Set S to the union of the sets of l columns within each group of L that maximize |⟨a_i, y⟩|.

2. Identify further candidates. Set S′ to the union of the sets of l columns within each group of L that maximize |⟨a_i, y − proj(y, span(A_S))⟩|.

3. Merge and prune. Set S to the union of the sets of l columns within each group of L with largest magnitudes in A⁺_{S∪S′} y.

4. Iterate (2)-(3) until the stopping criterion holds.
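The only change relative to plain Subspace Pursuit is the selection rule. A sketch of the group selection (my framing): score all NL columns, then keep the l best within each of the N length-L groups; dropping this into the selection and pruning steps of the sketch above yields Group Subspace Pursuit.

    import numpy as np

    def group_select(scores, N, L, l):
        """Indices of the l largest-|score| columns within each length-L group."""
        idx = []
        for g in range(N):
            block = np.abs(scores[g * L:(g + 1) * L])
            idx.extend((np.argsort(-block)[:l] + g * L).tolist())
        return np.array(sorted(idx))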


Group Subspace Pursuit (2)

[Plot: Pr(error) versus 1/σ_n² [dB] for BP, GBP and GSP at M = 100, 200, 300.]

Figure: Performance comparison of Basis Pursuit/Lasso (BP), Group Basis Pursuit (GBP) and Group Subspace Pursuit (GSP). N = 10, l = 4, L = 32.

CCSM Performance

[Plot: Pr(error) versus SNR [dB] for M = 150, 175, 200, 225, 250.]

Figure: Performance of the proposed method in terms of message error rate. Here (N + 1) = 5 users simultaneously broadcast messages using CCSM with L = 64, l = 12.

Comparison to CSMA/CA and TDMA (1)

1. TDMA

  • central controlling mechanism

  • time divided equally: no collisions, performance independent of the number of nodes

  • guard interval (cyclic prefix) of 20% of the slot duration

2. CSMA/CA

  • randomised deferment of transmissions in order to avoid collisions; contention window: 16-1024 symbol intervals

  • no symbol intervals wasted on distributed or short interframe space (DIFS/SIFS), propagation delay, physical or MAC message headers, or ACK responses

  • a single message in each transmission queue

  • guard interval (cyclic prefix) of 20% of the slot duration

[Figure: randomized deferment time under CSMA/CA.]

Comparison to CSMA/CA and TDMA (2)

[Plot: throughput (in bits/symbol interval) versus number of users (5-25) for CSMA/CA with 20% GI, a fully centralized system with 20% GI, and CCSM.]

CCSM: the minimum number of symbol intervals at which no message errors occurred in at least 100,000 simulation trials

L = 64, l = 12, q = 2 (QPSK)

k = ⌊log₂ (L choose l)⌋ + lq = 65

Concluding remarks

• a novel decentralized modulation and multiplexing method for wireless networks

  • effective time/frequency duplex

  • minimal MAC

  • inherent robustness to time dispersion

• low computational complexity, which takes advantage of

  • combinatorial representation of messages

  • sparse recovery detection

• significant throughput improvement in comparison to collision avoidance schemes


References

R. Piechocki and D. Sejdinovic, "Combinatorial Channel Signature Modulation for Wireless ad-hoc Networks," preprint available from arxiv.org/abs/1201.5608v1 (to appear in Proc. IEEE Int. Conf. on Communications ICC 2012).

L. Zhang and D. Guo, "Wireless Peer-to-Peer Mutual Broadcast via Sparse Recovery," preprint available from arxiv.org/abs/1101.0294, in Proc. IEEE Int. Symp. Inform. Theory ISIT 2011.

W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing signal reconstruction," IEEE Trans. Inform. Theory, vol. 55, pp. 2230-2249, 2009.

C. Tian, V. Vaishampayan and N. J. A. Sloane, "A coding algorithm for constant weight vectors: a geometric approach based on dissections," IEEE Trans. Inform. Theory, vol. 55, pp. 1051-1060, 2009.

G. Bianchi, "Performance Analysis of the IEEE 802.11 Distributed Coordination Function," IEEE Journal on Selected Areas in Communications, vol. 18, pp. 535-547, 2000.

Signalling dictionary

• One construction of the on-off signalling dictionary (sketched below):

  • all columns of S_i have an equal number of non-zero entries, set to ⌊M/L⌋

  • every two columns of S_i have disjoint support

  • non-zero entries selected uniformly at random from some discrete constellation

• the transmitted codeword x_i has exactly l · ⌊M/L⌋ non-zero entries (on-slots)

• the remaining M̄ = M − l · ⌊M/L⌋ off-slots are used to listen to the incoming signals

• Can also use LDPC / regular Gallager constructions (Baron, Sarvotham, Baraniuk 2010)
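A sketch of this construction (the QPSK constellation is my choice for concreteness): each of the L columns gets ⌊M/L⌋ on-slots, supports are disjoint across columns, and the remaining slots stay silent for listening.

    import numpy as np

    def signalling_dictionary(M, L, rng):
        """On-off dictionary S_i: disjoint supports, floor(M/L) on-slots per column."""
        p = M // L
        S = np.zeros((M, L), dtype=complex)
        slots = rng.permutation(M)[:p * L]                            # disjoint supports
        symbols = np.exp(1j * np.pi / 2 * rng.integers(0, 4, p * L))  # QPSK entries
        for col in range(L):
            S[slots[col * p:(col + 1) * p], col] = symbols[col * p:(col + 1) * p]
        return S

    rng = np.random.default_rng(0)
    S = signalling_dictionary(200, 64, rng)
    print((np.abs(S) > 0).sum(axis=0))   # every column has 3 on-slots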

Assume: information is a brick

• The brick B_l ⊂ R^l is the hyper-rectangle B_l := [0, 1] × [1/2, 1] × [2/3, 1] × · · · × [(l − 1)/l, 1]

• Let l = 2 and L even. Then one can uniquely represent message indices s ∈ {0, 1, . . . , K − 1} (where K ≤ L(L − 1)/2) as integer pairs (b_1^(s), b_2^(s)), where

  0 ≤ α = ⌊s/(L/2)⌋ ≤ L − 1

  0 ≤ β = s − (L/2)α ≤ L/2 − 1

• Define b_1^(s) = α and b_2^(s) = β + L/2.

• It follows that {(b_1^(s)/L, b_2^(s)/L)}_{s=0}^{K−1} ⊂ B_2
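A worked check of the l = 2 map (my code): for every message index s, the pair (b_1^(s)/L, b_2^(s)/L) lands in B_2 = [0, 1] × [1/2, 1].

    L = 6                                  # any even L
    for s in range(L * (L - 1) // 2):      # all K = L(L-1)/2 message indices
        alpha, beta = divmod(s, L // 2)    # quotient and remainder
        b1, b2 = alpha, beta + L // 2
        assert 0 <= b1 / L <= 1 and 0.5 <= b2 / L <= 1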

Step 1: brick to triangular prism

B_3 = B_2 × [2/3, 1] → T_2 × [2/3, 1]

Step 2: triangular prism to tetrahedron

• inductive dissection for general w:

1. B_w = B_{w−1} × [(w − 1)/w, 1] → T_{w−1} × [(w − 1)/w, 1]

2. T_{w−1} × [(w − 1)/w, 1] → T_w


• Some loss in rate due to rounding (when n is in the range 100-1000 and w = √n, the loss is 1-2 bits/block)