Log-Likelihood Algebra

Page 1: Log-Likelihood Algebra


Log-Likelihood Algebra

Sum of two LLRs

L(d1) ⊞ L(d2) ≜ L(d1 ⊕ d2)

= log { ( exp[L(d1)] + exp[L(d2)] ) / ( 1 + exp[L(d1)] · exp[L(d2)] ) }

≈ (-1) · sgn[L(d1)] · sgn[L(d2)] · min( |L(d1)| , |L(d2)| )

(⊞ denotes the LLR addition, ⊕ modulo-2 addition of the data bits)

Two limiting cases:

L(d) ⊞ (+∞) = -L(d)

L(d) ⊞ 0 = 0
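A minimal Python sketch of this LLR addition (the function names are illustrative, not from the slides); the exact formula and the sign-min approximation above are implemented side by side:

```python
import math

def boxplus_exact(L1, L2):
    """Exact LLR addition: log( (e^L1 + e^L2) / (1 + e^L1 * e^L2) )."""
    return math.log((math.exp(L1) + math.exp(L2)) /
                    (1.0 + math.exp(L1) * math.exp(L2)))

def boxplus_approx(L1, L2):
    """Approximation: (-1) * sgn(L1) * sgn(L2) * min(|L1|, |L2|)."""
    sgn = lambda v: 1.0 if v >= 0 else -1.0
    return -sgn(L1) * sgn(L2) * min(abs(L1), abs(L2))

# The two limiting cases quoted above
print(boxplus_exact(1.2, 50.0))   # second operand very large: tends to -L(d) = -1.2
print(boxplus_exact(1.2, 0.0))    # second operand zero: result is 0.0
```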

Page 2: Log-Likelihood Algebra


Iterative decoding example

2D single parity code

Parity rule: di ⊕ dj = pij

Transmitted data and parity bits:

d1 = 1     d2 = 0     p12 = 1
d3 = 0     d4 = 1     p34 = 1
p13 = 1    p24 = 1

Received (noisy) values:

x1 = 0.75    x2 = 0.05    x12 = 1.25
x3 = 0.10    x4 = 0.15    x34 = 1.0
x13 = 3.0    x24 = 0.5

Page 3: Log-Likelihood Algebra


Iterative decoding example

Estimate the channel LLRs: Lc(xk) = 2·xk / σ², assuming σ² = 1

Lc(x1) = 1.5     Lc(x2) = 0.1     Lc(x12) = 2.5
Lc(x3) = 0.2     Lc(x4) = 0.3     Lc(x34) = 2.0
Lc(x13) = 6.0    Lc(x24) = 1.0
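As a small sketch (variable names are illustrative), the values above follow directly from Lc(x) = 2x/σ² with σ² = 1:

```python
sigma_sq = 1.0
received = {'x1': 0.75, 'x2': 0.05, 'x12': 1.25,
            'x3': 0.10, 'x4': 0.15, 'x34': 1.0,
            'x13': 3.0,  'x24': 0.5}

# Channel LLR for each received sample: Lc(x) = 2x / sigma^2
Lc = {name: 2.0 * value / sigma_sq for name, value in received.items()}
print(Lc['x1'], Lc['x12'], Lc['x13'])   # 1.5, 2.5, 6.0
```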

Page 4: Log-Likelihood Algebra


Iterative decoding example

Compute the extrinsic LLRs: Le(dj) = [ Lc(xj) + L(dj) ] ⊞ Lc(xij)

Horizontal pass:

Leh(d1) = [ Lc(x2) + L(d2) ] ⊞ Lc(x12) = new L(d1)
Leh(d2) = [ Lc(x1) + L(d1) ] ⊞ Lc(x12) = new L(d2)
Leh(d3) = [ Lc(x4) + L(d4) ] ⊞ Lc(x34) = new L(d3)
Leh(d4) = [ Lc(x3) + L(d3) ] ⊞ Lc(x34) = new L(d4)

Page 5: Log-Likelihood Algebra


Iterative decoding example

Vertical pass:

Lev(d1) = [ Lc(x3) + L(d3) ] ⊞ Lc(x13) = new L(d1)
Lev(d2) = [ Lc(x4) + L(d4) ] ⊞ Lc(x24) = new L(d2)
Lev(d3) = [ Lc(x1) + L(d1) ] ⊞ Lc(x13) = new L(d3)
Lev(d4) = [ Lc(x2) + L(d2) ] ⊞ Lc(x24) = new L(d4)

After many iterations the LLR is computed for decision making:

L(d̂i) = Lc(xi) + Leh(d̂i) + Lev(d̂i)

Page 6: Log-Likelihood Algebra


First Pass output

Lc(x1) = 1.5    Leh(d1) = -0.1    Lev(d1) =  0.1    L(d1) =  1.5
Lc(x2) = 0.1    Leh(d2) = -1.5    Lev(d2) = -0.1    L(d2) = -1.5
Lc(x3) = 0.2    Leh(d3) = -0.3    Lev(d3) = -1.4    L(d3) = -1.5
Lc(x4) = 0.3    Leh(d4) = -0.2    Lev(d4) =  1.0    L(d4) =  1.1
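The first-pass numbers above can be reproduced with a short sketch (helper names are illustrative) that uses the sign-min approximation of the LLR addition from the first slide, takes the a priori L(d) as 0 on the horizontal pass, and feeds the horizontal extrinsic values in as the updated a priori for the vertical pass:

```python
def boxplus(a, b):
    """Sign-min approximation of the LLR addition."""
    return -1.0 * (1 if a >= 0 else -1) * (1 if b >= 0 else -1) * min(abs(a), abs(b))

# Channel LLRs Lc = 2x/sigma^2 with sigma^2 = 1 (values from the previous slides)
Lc = {'x1': 1.5, 'x2': 0.1, 'x3': 0.2, 'x4': 0.3,
      'x12': 2.5, 'x34': 2.0, 'x13': 6.0, 'x24': 1.0}

# Horizontal (row) extrinsic values, a priori L(d) = 0 on the first pass
Leh = {
    'd1': boxplus(Lc['x2'] + 0.0, Lc['x12']),
    'd2': boxplus(Lc['x1'] + 0.0, Lc['x12']),
    'd3': boxplus(Lc['x4'] + 0.0, Lc['x34']),
    'd4': boxplus(Lc['x3'] + 0.0, Lc['x34']),
}

# Vertical (column) extrinsic values, using Leh as the updated a priori
Lev = {
    'd1': boxplus(Lc['x3'] + Leh['d3'], Lc['x13']),
    'd2': boxplus(Lc['x4'] + Leh['d4'], Lc['x24']),
    'd3': boxplus(Lc['x1'] + Leh['d1'], Lc['x13']),
    'd4': boxplus(Lc['x2'] + Leh['d2'], Lc['x24']),
}

# Soft-decision LLRs: L(di) = Lc(xi) + Leh(di) + Lev(di)
for i in range(1, 5):
    L = Lc[f'x{i}'] + Leh[f'd{i}'] + Lev[f'd{i}']
    print(f"L(d{i}) = {L:+.1f} -> d{i} = {1 if L > 0 else 0}")
```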

Page 7: Log-Likelihood Algebra


Parallel Concatenation Codes

- Component codes are convolutional codes: Recursive Systematic Codes (RSC)
- The codes should have maximum effective free distance
- At large Eb/No: maximize the minimum codeword weight (effective free distance)
- At small Eb/No: optimize the weight distribution of the codewords
- Interleaving is used to avoid low-weight codewords

Page 8: Log-Likelihood Algebra


Non-Systematic Codes - NSC

[Encoder diagram: shift register dk, dk-1, dk-2 feeding two modulo-2 adders, producing outputs {uk} and {vk} from the input {dk}]

uk = Σ_{i=0}^{L-1} g1i · dk-i   (mod 2)

vk = Σ_{i=0}^{L-1} g2i · dk-i   (mod 2)

G1 = [ 1 1 1 ],  G2 = [ 1 0 1 ]
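A minimal sketch of this nonsystematic encoder (the function name and tap ordering are assumptions: G1 = [1 1 1] is read as taps on dk, dk-1, dk-2):

```python
def nsc_encode(bits, G1=(1, 1, 1), G2=(1, 0, 1)):
    state = [0] * (len(G1) - 1)          # shift register: d_{k-1}, d_{k-2}
    u_seq, v_seq = [], []
    for d in bits:
        taps = [d] + state               # d_k, d_{k-1}, d_{k-2}
        u_seq.append(sum(g * t for g, t in zip(G1, taps)) % 2)
        v_seq.append(sum(g * t for g, t in zip(G2, taps)) % 2)
        state = [d] + state[:-1]         # shift in the new input bit
    return u_seq, v_seq

print(nsc_encode([1, 0, 0]))   # -> ([1, 1, 1], [1, 0, 1])
```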

Page 9: Log-Likelihood Algebra


Recursive Systematic Codes - RSC

[Encoder diagram: recursive shift register ak, ak-1, ak-2 with a feedback adder; systematic output {uk} taken from the input {dk}, parity output {vk}]

ak = dk + Σ_{i=1}^{L-1} gi′ · ak-i   (mod 2) ;   gi′ = g1i if uk = dk, gi′ = g2i if vk = dk
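A sketch of the corresponding recursive systematic encoder, assuming the usual construction in which G1 supplies the feedback taps for the recursion on ak, the systematic output is uk = dk, and G2 generates the parity from (ak, ak-1, ak-2); the function name is illustrative:

```python
def rsc_encode(bits, G1=(1, 1, 1), G2=(1, 0, 1)):
    state = [0] * (len(G1) - 1)          # a_{k-1}, a_{k-2}
    u_seq, v_seq = [], []
    for d in bits:
        # a_k = d_k + sum_{i=1..L-1} g_{1i} a_{k-i}  (mod 2): feedback from G1
        a = (d + sum(G1[i] * state[i - 1] for i in range(1, len(G1)))) % 2
        u_seq.append(d)                  # systematic output u_k = d_k
        taps = [a] + state               # a_k, a_{k-1}, a_{k-2}
        v_seq.append(sum(g * t for g, t in zip(G2, taps)) % 2)   # parity from G2
        state = [a] + state[:-1]
    return u_seq, v_seq

print(rsc_encode([1, 0, 0, 0]))
```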

Page 10: Log-Likelihood Algebra


Trellis for NSC & RSC

[Trellis diagrams for the NSC and RSC encoders: states a = 00, b = 01, c = 10, d = 11; branch labels (uk vk) ∈ {00, 11, 10, 01}]

Page 11: Log-Likelihood Algebra


Concatenation of RSC Codes

[Encoder diagram: input {dk} gives the systematic output {uk}; one RSC encoder (register ak, ak-1, ak-2) produces parity {v1k}, and an interleaver followed by a second RSC encoder produces parity {v2k}; the parity streams form {vk}]

Input patterns such as { 0 0 …. 0 1 1 1 0 0 …..0 0 } or { 0 0 …. 0 0 1 0 0 1 0 … 0 0 } produce low-weight codewords in the component coders; the interleaver permutes the input so that a pattern producing a low-weight codeword in one component encoder is unlikely to do so in the other.

Page 12: Log-Likelihood Algebra


Feedback Decoder

APP joint probability: λ_k^{i,m} = P{ dk = i, Sk = m | R_1^N }

where Sk is the state at time k and R_1^N is the received sequence from time 1 to N.

APP: P{ dk = i | R_1^N } = Σ_m λ_k^{i,m} ;   i = 0, 1 for binary data

Log-likelihood ratio: L(dk) = log [ Σ_m λ_k^{1,m} / Σ_m λ_k^{0,m} ]

Likelihood ratio: Λ(dk) = Σ_m λ_k^{1,m} / Σ_m λ_k^{0,m}

Page 13: Log-Likelihood Algebra


Feedback Decoder

MAP rule: d̂k = 1 if L(dk) > 0 ;   d̂k = 0 if L(dk) < 0

L(d̂k) = Lc(xk) + L(dk) + Le(d̂k)

L1(d̂k) = [ Lc(xk) + Le1(d̂k) ]
L2(d̂k) = [ f{ L1(d̂n) } for n ≠ k + Le2(d̂k) ]

Page 14: Log-Likelihood Algebra


Feedback Decoder

[Block diagram of the feedback decoder: the received samples y1k, y2k are demultiplexed into xk and yk and fed to DECODER 1; its output L1(dk) is interleaved to give L1(dn) for DECODER 2; the output L2(dk) of DECODER 2 is de-interleaved once to give the decision d̂k and once to feed the extrinsic term Le2(dk) back to DECODER 1]

Page 15: Log-Likelihood Algebra


Modified MAP Vs. SOVA

SOVA
- Viterbi algorithm acting on soft inputs over the forward path of the trellis for a block of bits
- Add the branch metric (BM) to the state metric (SM), compare, and select the ML path

Modified MAP
- Viterbi algorithm acting on soft inputs over the forward and reverse paths of the trellis for a block of bits
- Multiply BM and SM, and sum in both directions to obtain the best overall statistic

Page 16: Log-Likelihood Algebra


MAP Decoding Example

[Encoder diagram: shift register dk, dk-1, dk-2 with input {dk} and outputs {uk}, {vk}, together with its trellis: states a = 00, b = 10, c = 01, d = 11; branch labels (uk vk) ∈ {00, 11, 10, 01}]

Page 17: Log-Likelihood Algebra


MAP Decoding Example

d = { 1, 0, 0 }   u = { 1, 0, 0 }   x = { 1.0, 0.5, -0.6 }
                  v = { 1, 0, 1 }   y = { 0.8, 0.2, 1.2 }

A priori probabilities: π_k^1 = π_k^0 = 0.5

Branch metric:

δ_k^{i,m} = P{ dk = i, Sk = m, Rk }
          = P{ Rk | dk = i, Sk = m } · P{ Sk = m | dk = i } · P{ dk = i }

With P{ Sk = m | dk = i } = 1/2^L = 1/4 and P{ dk = i } = 1/2 :

δ_k^{i,m} = P{ xk | dk = i, Sk = m } · P{ yk | dk = i, Sk = m } · { π_k^i / 2^L }

Page 18: Log-Likelihood Algebra


MAP Decoding Example

δ_k^{i,m} = P{ xk | dk = i, Sk = m } · P{ yk | dk = i, Sk = m } · { π_k^i / 2^L }

For an AWGN channel:

δ_k^{i,m} = { π_k^i / 2^L } · (1/(σ√(2π))) exp{ -(xk − uk^i)² / (2σ²) } dxk
                            · (1/(σ√(2π))) exp{ -(yk − vk^{i,m})² / (2σ²) } dyk

δ_k^{i,m} = { Ak · π_k^i } exp{ ( xk·uk^i + yk·vk^{i,m} ) / σ² }

Assuming Ak = 1 and σ² = 1:

δ_k^{i,m} = 0.5 exp{ xk·uk^i + yk·vk^{i,m} }

Page 19: Log-Likelihood Algebra


Subsequent steps

Calculate the branch metric:

δ_k^{i,m} = 0.5 exp{ xk·uk^i + yk·vk^{i,m} }

Calculate the forward state metric:

α_{k+1}^m = Σ_{j=0}^{1} δ_k^{j, b(j,m)} · α_k^{b(j,m)}

Calculate the reverse state metric:

β_k^m = Σ_{j=0}^{1} δ_k^{j,m} · β_{k+1}^{f(j,m)}

(f(j,m) is the next state reached from state m with input j; b(j,m) is the previous state from which input j leads to state m)

Page 20: Log-Likelihood Algebra


Subsequent steps

Calculate the LLR for all times:

L(d̂k) = log [ Σ_m α_k^m · δ_k^{1,m} · β_{k+1}^{f(1,m)}  /  Σ_m α_k^m · δ_k^{0,m} · β_{k+1}^{f(0,m)} ]

Hard decision based on LLR
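A compact sketch of these recursions for the three-bit example (all names are illustrative, and several details are assumptions: the trellis is inferred from the example values as a systematic encoder with uk = dk and parity vk = dk ⊕ dk-2, bits are mapped 1 → +1 and 0 → -1, the encoder starts in state 00, no trellis termination is assumed so β starts uniform, and constant factors such as 1/2^L cancel in the LLR ratio and are omitted):

```python
import numpy as np

x = [1.0, 0.5, -0.6]          # received systematic samples
y = [0.8, 0.2, 1.2]           # received parity samples
pi0 = pi1 = 0.5               # a priori probabilities for the first pass
N, n_states = len(x), 4

def bipolar(bit):
    return 1.0 if bit == 1 else -1.0

def step(state, d):
    """state = (d_{k-1}, d_{k-2}); returns next state and bipolar (u, v)."""
    dk1, dk2 = state
    u = d                      # systematic output (assumption)
    v = d ^ dk2                # parity, assumed generator [1 0 1]
    return (d, dk1), bipolar(u), bipolar(v)

states = [(a, b) for a in (0, 1) for b in (0, 1)]
idx = {s: i for i, s in enumerate(states)}

# Branch metrics delta_k^{i,m} and the next-state table f(i, m)
delta = np.zeros((N, n_states, 2))
nxt = np.zeros((n_states, 2), dtype=int)
for m, s in enumerate(states):
    for d in (0, 1):
        s_next, u, v = step(s, d)
        nxt[m, d] = idx[s_next]
        for k in range(N):
            prior = pi1 if d == 1 else pi0
            delta[k, m, d] = prior * np.exp(x[k] * u + y[k] * v)

# Forward state metrics alpha (trellis assumed to start in state 00)
alpha = np.zeros((N + 1, n_states))
alpha[0, idx[(0, 0)]] = 1.0
for k in range(N):
    for m in range(n_states):
        for d in (0, 1):
            alpha[k + 1, nxt[m, d]] += alpha[k, m] * delta[k, m, d]
    alpha[k + 1] /= alpha[k + 1].sum()          # normalise for stability

# Reverse state metrics beta (no termination assumed -> uniform start)
beta = np.ones((N + 1, n_states)) / n_states
for k in range(N - 1, -1, -1):
    for m in range(n_states):
        beta[k, m] = sum(delta[k, m, d] * beta[k + 1, nxt[m, d]] for d in (0, 1))
    beta[k] /= beta[k].sum()

# LLR for each bit and the corresponding hard decision
for k in range(N):
    num = sum(alpha[k, m] * delta[k, m, 1] * beta[k + 1, nxt[m, 1]] for m in range(n_states))
    den = sum(alpha[k, m] * delta[k, m, 0] * beta[k + 1, nxt[m, 0]] for m in range(n_states))
    L = np.log(num / den)
    print(f"L(d{k + 1}) = {L:+.3f}  ->  d{k + 1} = {1 if L > 0 else 0}")
```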

Page 21: Log-Likelihood Algebra


Iterative decoding steps

LLR: L(d̂k) = L(dk) + 2xk + log[ Λ_k^e ]    (2xk = Lc(xk) for σ² = 1)

Likelihood ratio:

Λ(dk) = [ {π_k^1} Σ_m α_k^m exp{ xk·uk^1 + yk·vk^{1,m} } β_{k+1}^{f(1,m)} ]
      / [ {π_k^0} Σ_m α_k^m exp{ xk·uk^0 + yk·vk^{0,m} } β_{k+1}^{f(0,m)} ]

      = {Λ_k} · exp{ 2xk } · [ Σ_m α_k^m exp{ yk·vk^{1,m} } β_{k+1}^{f(1,m)} ]
                           / [ Σ_m α_k^m exp{ yk·vk^{0,m} } β_{k+1}^{f(0,m)} ]

      = {Λ_k} · exp{ 2xk } · {Λ_k^e}

where Λ_k = π_k^1 / π_k^0 is the a priori likelihood ratio and Λ_k^e is the extrinsic likelihood ratio.

Page 22: Log-Likelihood Algebra


Iterative decoding

For the second iteration:

δ_k^{i,m} = π_k^{e,i} · exp{ xk·uk^i + yk·vk^{i,m} }

(the a priori probability is replaced by the extrinsic probability π_k^{e,i} obtained from the previous iteration)

Calculate the LLR for all times:

L(d̂k) = log [ Σ_m α_k^m · δ_k^{1,m} · β_{k+1}^{f(1,m)}  /  Σ_m α_k^m · δ_k^{0,m} · β_{k+1}^{f(0,m)} ]

m

Hard decision based on LLR after multiple iterations
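As a sketch of the feedback step (names are illustrative): given the total LLR from one pass, the extrinsic part is obtained by subtracting the a priori and channel terms, and is converted back to the probabilities π_k^{e,i} used in the next iteration's branch metric:

```python
import math

def next_priors(L_total, L_apriori, x, sigma_sq=1.0):
    """Return (pi_e0, pi_e1) per bit for the next iteration's branch metric."""
    priors = []
    for Lt, La, xk in zip(L_total, L_apriori, x):
        Le = Lt - La - 2.0 * xk / sigma_sq      # extrinsic LLR, log(Lambda_k^e)
        p1 = 1.0 / (1.0 + math.exp(-Le))        # pi_k^{e,1}
        priors.append((1.0 - p1, p1))
    return priors
```

In a full turbo decoder this hand-off happens between the two component decoders on every iteration, as sketched in the feedback-decoder diagram earlier.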

