Chapter 10 Convolutional Codes
Dr. Chih-Peng Li (李志鵬)
10.1 Encoding of Convolutional Codes
10.2 Structural Properties of Convolutional Codes
10.3 Distance Properties of Convolutional Codes
Convolutional Codes
Convolutional codes differ from block codes in that the encoder contains memory, and the n encoder outputs at any time unit depend not only on the k inputs at that time unit but also on the m previous input blocks. An (n, k, m) convolutional code can be implemented with a k-input, n-output linear sequential circuit with input memory m.
Typically, n and k are small integers with k<n, but the memory order m must be made large to achieve low error probabilities.
In the important special case when k=1, the information sequence is not divided into blocks and can be processed continuously. Convolutional codes were first introduced by Elias in 1955 as an alternative to block codes.
Shortly thereafter, Wozencraft proposed sequential decoding as an efficient decoding scheme for convolutional codes, and experimental studies soon began to appear. In 1963, Massey proposed a less efficient but simpler-to-implement decoding method called threshold decoding. Then in 1967, Viterbi proposed a maximum likelihood decoding scheme that was relatively easy to implement for codes with small memory orders. This scheme, called Viterbi decoding, together with improved versions of sequential decoding, led to the application of convolutional codes to deep-space and satellite communication in the early 1970s.
Convolutional Codes
Convolutional Code A convolutional code is generated by passing the information sequence to be transmitted through a linear finite-state shift register. In general, the shift register consists of K (k-bit) stages and n linear algebraic function generators.
Convolutional Code
Convolutional codes:
k = number of bits shifted into the encoder at one time (k = 1 is usually used)
n = number of encoder output bits corresponding to the k information bits
Rc = k/n = code rate
K = constraint length (encoder memory)
Each encoded bit is a function of the present input bits and their past ones. Note that the definition of constraint length here is the same as that of Shu Lin's, while the shift register's representation is different.
Encoding of Convolutional Code Example 1:
Consider the binary convolutional encoder with constraint length K=3, k=1, and n=3. The generators are: g1=[100], g2=[101], and g3=[111]. The generators are more conveniently given in octal form as (4,5,7).
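As a quick illustration (not part of the original slides), the following Python sketch implements this K = 3, k = 1, n = 3 encoder as a shift register with mod-2 taps; the helper name conv_encode is our own.

    def conv_encode(u, generators):
        """Encode the bit list u with a rate-1/n convolutional code.

        generators: connection vectors, e.g. [1, 0, 0] for g1 = 4 (octal).
        The register is flushed with K-1 zeros so it returns to the zero state.
        """
        K = max(len(g) for g in generators)
        reg = [0] * K                        # shift register, reg[0] = newest bit
        out = []
        for bit in list(u) + [0] * (K - 1):
            reg = [bit] + reg[:-1]           # shift the new bit in
            for g in generators:
                out.append(sum(r & t for r, t in zip(reg, g)) % 2)
        return out

    # K = 3, k = 1, n = 3 encoder with octal generators (4, 5, 7)
    g = [[1, 0, 0], [1, 0, 1], [1, 1, 1]]
    print(conv_encode([1, 0, 1], g))         # 3 output bits per input bit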
Encoding of Convolutional Code Example 2:
Consider a rate 2/3 convolutional encoder. The generators are: g1=[1011], g2=[1101], and g3=[1010]. In octal form, these generators are (13, 15, 12).
Representations of Convolutional Code There are three alternative methods that are often used to describe a convolutional code:
Tree diagram
Trellis diagram
State diagram
Representations of Convolutional Code Tree diagram
Note that the tree diagram on the right repeats itself after the third stage. This is consistent with the fact that the constraint length is K = 3. The output sequence at each stage is determined by the input bit and the two previous input bits. In other words, we may say that the 3-bit output sequence for each input bit is determined by the input bit and the four possible states of the shift register, denoted as a = 00, b = 01, c = 10, and d = 11.
Tree diagram for rate 1/3, K=3 convolutional code.
Representations of Convolutional Code Example: K=2, k=2, n=3 convolutional code
Tree diagram
Representations of Convolutional Code Example: K=2, k=2, n=3 convolutional code
Trellis diagram
Representations of Convolutional Code Example: K=2, k=2, n=3 convolutional code
State diagram
Representations of Convolutional Code
In general, we state that a rate k/n, constraint length K convolutional code is characterized by 2^k branches emanating from each node of the tree diagram. The trellis and the state diagrams each have 2^(k(K−1)) possible states. There are 2^k branches entering each state and 2^k branches leaving each state.
10.1 Encoding of Convolutional Codes
Example: A (2, 1, 3) binary convolutional code:
The encoder consists of an m = 3-stage shift register together with n = 2 modulo-2 adders and a multiplexer for serializing the encoder outputs. The mod-2 adders can be implemented as EXCLUSIVE-OR gates.
Since mod-2 addition is a linear operation, the encoder is a linear feedforward shift register. All convolutional encoders can be implemented using a linear feedforward shift register of this type.
The information sequence u = (u_0, u_1, u_2, …) enters the encoder one bit at a time. Since the encoder is a linear system, the two encoder output sequences can be obtained as the convolution of the input sequence u with the two encoder "impulse responses." The impulse responses are obtained by letting u = (1 0 0 …) and observing the two output sequences. Since the encoder has an m-time-unit memory, the impulse responses can last at most m + 1 time units and are written as
g^(1) = (g_0^(1), g_1^(1), …, g_m^(1)) and g^(2) = (g_0^(2), g_1^(2), …, g_m^(2)).
10.1 Encoding of Convolutional Codes
For the encoder of the binary (2, 1, 3) code, the impulse responses are
g^(1) = (1 0 1 1)
g^(2) = (1 1 1 1).
The impulse responses g^(1) and g^(2) are called the generator sequences of the code. The encoding equations can now be written as
v^(1) = u * g^(1)
v^(2) = u * g^(2),
where * denotes discrete convolution and all operations are mod-2. The convolution operation implies that for all l ≥ 0,
v_l^(j) = Σ_{i=0}^{m} u_{l−i} g_i^(j) = u_l g_0^(j) + u_{l−1} g_1^(j) + ··· + u_{l−m} g_m^(j),  j = 1, 2,
where u_{l−i} = 0 for all l < i.
Hence, for the encoder of the binary (2, 1, 3) code,
v_l^(1) = u_l + u_{l−2} + u_{l−3}
v_l^(2) = u_l + u_{l−1} + u_{l−2} + u_{l−3},
as can easily be verified by direct inspection of the encoding circuit. After encoding, the two output sequences are multiplexed into a single sequence, called the code word, for transmission over the channel. The code word is given by
v = (v_0^(1) v_0^(2), v_1^(1) v_1^(2), v_2^(1) v_2^(2), …).
10.1 Encoding of Convolutional Codes
Example 10.1 Let the information sequence u = (1 0 1 1 1). Then the output sequences are
v^(1) = (1 0 1 1 1) * (1 0 1 1) = (1 0 0 0 0 0 0 1)
v^(2) = (1 0 1 1 1) * (1 1 1 1) = (1 1 0 1 1 1 0 1),
and the code word is
v = (1 1, 0 1, 0 0, 0 1, 0 1, 0 1, 0 0, 1 1).
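Example 10.1 can be reproduced numerically with a direct mod-2 discrete convolution; the small helper below is our own sketch, not from the text.

    def conv_mod2(u, g):
        """Discrete convolution of binary sequences u and g over GF(2)."""
        out = [0] * (len(u) + len(g) - 1)
        for i, ub in enumerate(u):
            for j, gb in enumerate(g):
                out[i + j] ^= ub & gb
        return out

    u = [1, 0, 1, 1, 1]
    v1 = conv_mod2(u, [1, 0, 1, 1])          # g(1) = (1 0 1 1)
    v2 = conv_mod2(u, [1, 1, 1, 1])          # g(2) = (1 1 1 1)
    print(v1)                                # [1, 0, 0, 0, 0, 0, 0, 1]
    print(v2)                                # [1, 1, 0, 1, 1, 1, 0, 1]
    v = [b for pair in zip(v1, v2) for b in pair]   # multiplex the two outputs
    print(v)                                 # 11 01 00 01 01 01 00 11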
10.1 Encoding of Convolutional Codes
If the generator sequences g^(1) and g^(2) are interlaced and then arranged in the matrix

G = | g_0^(1)g_0^(2)  g_1^(1)g_1^(2)  g_2^(1)g_2^(2)  ⋯  g_m^(1)g_m^(2)                              |
    |                 g_0^(1)g_0^(2)  g_1^(1)g_1^(2)  ⋯  g_{m−1}^(1)g_{m−1}^(2)  g_m^(1)g_m^(2)      |
    |                                 g_0^(1)g_0^(2)  ⋯  g_{m−2}^(1)g_{m−2}^(2)  g_{m−1}^(1)g_{m−1}^(2)  g_m^(1)g_m^(2) |
    |                                                 ⋱                                              |

where the blank areas are all zeros, the encoding equations can be rewritten in matrix form as v = uG.
10.1 Encoding of Convolutional Codes If u has finite length L, then G has L rows and 2(m+L) columns, and v has length 2(m + L).
Example 10.2 If u = (1 0 1 1 1), then

v = uG = (1 0 1 1 1) ·
| 11 01 11 11             |
|    11 01 11 11          |
|       11 01 11 11       |
|          11 01 11 11    |
|             11 01 11 11 |
= (1 1, 0 1, 0 0, 0 1, 0 1, 0 1, 0 0, 1 1),

which agrees with our previous calculation using discrete convolution.
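A minimal sketch (ours) of the same matrix computation, building G for a rate-1/n code and forming v = uG over GF(2):

    def generator_matrix(gens, L):
        """Time-domain generator matrix G of a rate-1/n code: L rows,
        n*(m+L) columns, interleaved generators shifted n places per row."""
        n, m = len(gens), len(gens[0]) - 1
        G = [[0] * (n * (m + L)) for _ in range(L)]
        for row in range(L):
            for t in range(m + 1):
                for j in range(n):
                    G[row][n * (row + t) + j] = gens[j][t]
        return G

    G = generator_matrix([[1, 0, 1, 1], [1, 1, 1, 1]], L=5)
    u = [1, 0, 1, 1, 1]
    v = [sum(ub & gb for ub, gb in zip(u, col)) % 2 for col in zip(*G)]
    print(v)        # matches (1 1, 0 1, 0 0, 0 1, 0 1, 0 1, 0 0, 1 1)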
Consider a (3, 2, 1) convolutional code.
Since k = 2, the encoder consists of two m = 1-stage shift registers together with n = 3 modulo-2 adders and two multiplexers.
10.1 Encoding of Convolutional Codes
The information sequence enters the encoder k = 2 bits at a time, and can be written as
u = (u_0^(1) u_0^(2), u_1^(1) u_1^(2), u_2^(1) u_2^(2), …)
or as the two input sequences
u^(1) = (u_0^(1), u_1^(1), u_2^(1), …) and u^(2) = (u_0^(2), u_1^(2), u_2^(2), …).
There are three generator sequences corresponding to each input sequence. Letting g_i^(j) = (g_{i,0}^(j), g_{i,1}^(j), …, g_{i,m}^(j)) represent the generator sequence corresponding to input i and output j, the generator sequences of the (3, 2, 1) convolutional code are
g_1^(1) = (1 1), g_1^(2) = (0 1), g_1^(3) = (1 1),
g_2^(1) = (0 1), g_2^(2) = (1 0), g_2^(3) = (1 0),
and the encoding equations can be written as
v^(1) = u^(1) * g_1^(1) + u^(2) * g_2^(1)
v^(2) = u^(1) * g_1^(2) + u^(2) * g_2^(2)
v^(3) = u^(1) * g_1^(3) + u^(2) * g_2^(3).
The convolution operation implies that
v_l^(j) = Σ_{i=1}^{2} Σ_{t=0}^{m} u_{l−t}^(i) g_{i,t}^(j),  j = 1, 2, 3.
After multiplexing, the code word is given by
v = (v_0^(1)v_0^(2)v_0^(3), v_1^(1)v_1^(2)v_1^(3), v_2^(1)v_2^(2)v_2^(3), …).
10.1 Encoding of Convolutional Codes
Example 10.3 If u^(1) = (1 0 1) and u^(2) = (1 1 0), then
v^(1) = (1 0 1) * (1 1) + (1 1 0) * (0 1) = (1 0 0 1)
v^(2) = (1 0 1) * (0 1) + (1 1 0) * (1 0) = (1 0 0 1)
v^(3) = (1 0 1) * (1 1) + (1 1 0) * (1 0) = (0 0 1 1),
and
v = (1 1 0, 0 0 0, 0 0 1, 1 1 1).
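For k = 2 inputs, each output sequence is the mod-2 sum of k convolutions; the sketch below (our own, restating the conv_mod2 helper) reproduces Example 10.3.

    def conv_mod2(u, g):                     # same helper as in the earlier sketch
        out = [0] * (len(u) + len(g) - 1)
        for i, ub in enumerate(u):
            for j, gb in enumerate(g):
                out[i + j] ^= ub & gb
        return out

    def add_mod2(a, b):
        return [x ^ y for x, y in zip(a, b)]

    u1, u2 = [1, 0, 1], [1, 1, 0]
    v1 = add_mod2(conv_mod2(u1, [1, 1]), conv_mod2(u2, [0, 1]))
    v2 = add_mod2(conv_mod2(u1, [0, 1]), conv_mod2(u2, [1, 0]))
    v3 = add_mod2(conv_mod2(u1, [1, 1]), conv_mod2(u2, [1, 0]))
    v = [b for triple in zip(v1, v2, v3) for b in triple]
    print(v)        # (1 1 0, 0 0 0, 0 0 1, 1 1 1)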
10.1 Encoding of Convolutional Codes
The generator matrix of a (3, 2, m) code is

G = | g_{1,0}  g_{1,1}  ⋯  g_{1,m}                 |
    | g_{2,0}  g_{2,1}  ⋯  g_{2,m}                 |
    |          g_{1,0}  ⋯  g_{1,m−1}  g_{1,m}      |
    |          g_{2,0}  ⋯  g_{2,m−1}  g_{2,m}      |
    |                   ⋱                          |

where g_{i,l} denotes the triple g_{i,l}^(1) g_{i,l}^(2) g_{i,l}^(3) and the blank areas are all zeros. The encoding equations in matrix form are again given by v = uG. Note that each set of k = 2 rows of G is identical to the preceding set of rows but shifted n = 3 places to the right.
10.1 Encoding of Convolutional Codes
Example 10.4 If u^(1) = (1 0 1) and u^(2) = (1 1 0), then u = (1 1, 0 1, 1 0) and

v = uG = (1 1, 0 1, 1 0) ·
| 1 0 1  1 1 1               |
| 0 1 1  1 0 0               |
|        1 0 1  1 1 1        |
|        0 1 1  1 0 0        |
|               1 0 1  1 1 1 |
|               0 1 1  1 0 0 |
= (1 1 0, 0 0 0, 0 0 1, 1 1 1),

which agrees with our previous calculation using discrete convolution.
10.1 Encoding of Convolutional Codes
In particular, the encoder now contains k shift registers, not all of which must have the same length. If K_i is the length of the ith shift register, then the encoder memory order m is defined as
m ≜ max_{1≤i≤k} K_i.
An example is a (4, 3, 2) convolutional encoder in which the shift register lengths are 0, 1, and 2.
10.1 Encoding of Convolutional Codes
The constraint length is defined as n_A ≜ n(m + 1). Since each information bit remains in the encoder for up to m + 1 time units, and during each time unit can affect any of the n encoder outputs, n_A can be interpreted as the maximum number of encoder outputs that can be affected by a single information bit. For example, the constraint lengths of the (2, 1, 3), (3, 2, 1), and (4, 3, 2) convolutional codes are 8, 6, and 12, respectively.
10.1 Encoding of Convolutional Codes
In the general case of an (n, k, m) code, the generator matrix is

G = | G_0  G_1  G_2  ⋯  G_m                      |
    |      G_0  G_1  ⋯  G_{m−1}  G_m             |
    |           G_0  ⋯  G_{m−2}  G_{m−1}  G_m    |
    |                ⋱                           |

where each G_l is a k × n submatrix whose entries are

G_l = | g_{1,l}^(1)  g_{1,l}^(2)  ⋯  g_{1,l}^(n) |
      | g_{2,l}^(1)  g_{2,l}^(2)  ⋯  g_{2,l}^(n) |
      |     ⋮                                    |
      | g_{k,l}^(1)  g_{k,l}^(2)  ⋯  g_{k,l}^(n) |,

and the blank areas in G are all zeros.
10.1 Encoding of Convolutional Codes
For an information sequence
u = (u_0, u_1, …) = (u_0^(1)⋯u_0^(k), u_1^(1)⋯u_1^(k), …),
the code word is given by v = uG. Since the code word v is a linear combination of rows of the generator matrix G, an (n, k, m) convolutional code is a linear code.
10.1 Encoding of Convolutional Codes
A convolutional encoder generates n encoded bits for each k information bits, and R = k/n is called the code rate. For a finite-length information sequence of kL bits, the corresponding code word has length n(L + m), where the final n·m outputs are generated after the last nonzero information block has entered the encoder. Viewing a convolutional code as a linear block code with generator matrix G, the block code rate is given by kL/n(L + m), the ratio of the number of information bits to the length of the code word. If L ≫ m, then L/(L + m) ≈ 1, and the block code rate and the convolutional code rate are approximately equal.
10.1 Encoding of Convolutional Codes
If L were small, however, the ratio kL/n(L + m), which is the effective rate of information transmission, would be reduced below the code rate by the fractional amount
(k/n − kL/n(L + m)) / (k/n) = m / (L + m),
called the fractional rate loss. To keep the fractional rate loss small, L is always assumed to be much larger than m.
Example 10.5
For a (2, 1, 3) convolutional code with L = 5, the fractional rate loss is 3/8 = 37.5%. However, if the length of the information sequence is L = 1000, the fractional rate loss is only 3/1003 = 0.3%.
10.1 Encoding of Convolutional Codes
In a linear system, time-domain operations involving convolution can be replaced by more convenient transform-domain operations involving polynomial multiplication. Since a convolutional encoder is a linear system, each sequence in the encoding equations can be replaced by a corresponding polynomial, and the convolution operation replaced by polynomial multiplication. In the polynomial representation of a binary sequence, the sequence itself is represented by the coefficients of the polynomial. For example, for a (2, 1, m) code, the encoding equations become
v^(1)(D) = u(D) g^(1)(D)
v^(2)(D) = u(D) g^(2)(D),
where u(D) = u_0 + u_1D + u_2D^2 + ··· is the information sequence.
The encoded sequences are
v^(1)(D) = v_0^(1) + v_1^(1)D + v_2^(1)D^2 + ···
v^(2)(D) = v_0^(2) + v_1^(2)D + v_2^(2)D^2 + ···.
The generator polynomials of the code are
g^(1)(D) = g_0^(1) + g_1^(1)D + ··· + g_m^(1)D^m
g^(2)(D) = g_0^(2) + g_1^(2)D + ··· + g_m^(2)D^m,
and all operations are modulo-2. After multiplexing, the code word becomes
v(D) = v^(1)(D^2) + D v^(2)(D^2);
the indeterminate D can be interpreted as a delay operator, the power of D denoting the number of time units a bit is delayed with respect to the initial bit in the sequence.
10.1 Encoding of Convolutional Codes
Example 10.6
For the previous (2, 1, 3) convolutional code, the generator polynomials are g^(1)(D) = 1 + D^2 + D^3 and g^(2)(D) = 1 + D + D^2 + D^3. For the information sequence u(D) = 1 + D^2 + D^3 + D^4, the encoding equations are
v^(1)(D) = (1 + D^2 + D^3 + D^4)(1 + D^2 + D^3) = 1 + D^7
v^(2)(D) = (1 + D^2 + D^3 + D^4)(1 + D + D^2 + D^3) = 1 + D + D^3 + D^4 + D^5 + D^7,
and the code word is
v(D) = v^(1)(D^2) + D v^(2)(D^2) = 1 + D + D^3 + D^7 + D^9 + D^11 + D^14 + D^15.
Note that the result is the same as previously computed using convolution and matrix multiplication.
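The transform-domain computation is carry-less polynomial multiplication over GF(2); here is a small sketch (ours) using integer bit masks, with bit i standing for D^i.

    def polymul_gf2(a, b):
        """Multiply two GF(2) polynomials held as integer bit masks."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            b >>= 1
        return r

    def to_poly(mask):
        return ' + '.join(f'D^{i}' if i else '1'
                          for i in range(mask.bit_length()) if (mask >> i) & 1)

    u  = 0b11101       # u(D)    = 1 + D^2 + D^3 + D^4
    g1 = 0b1101        # g(1)(D) = 1 + D^2 + D^3
    g2 = 0b1111        # g(2)(D) = 1 + D + D^2 + D^3
    print(to_poly(polymul_gf2(u, g1)))   # 1 + D^7
    print(to_poly(polymul_gf2(u, g2)))   # 1 + D^1 + D^3 + D^4 + D^5 + D^7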
10.1 Encoding of Convolutional Codes
The generator polynomials of an encoder can be determined directly from its circuit diagram. Since each shift register stage represents a one-time-unit delay, the sequence of connections (a 1 representing a connection and a 0 no connection) from a shift register to an output is the sequence of coefficients in the corresponding generator polynomial. Since the last stage of the shift register in an (n, 1) code must be connected to at least one output, at least one of the generator polynomials must have degree equal to the shift register length m; that is,
m = max_{1≤j≤n} deg[g^(j)(D)].
10.1 Encoding of Convolutional Codes
In an (n, k) code where k > 1, there are n generator polynomials for each of the k inputs. Each set of n generators represents the connections from one of the shift registers to the n outputs. The length K_i of the ith shift register is given by
K_i = max_{1≤j≤n} deg[g_i^(j)(D)],  1 ≤ i ≤ k,
where g_i^(j)(D) is the generator polynomial relating the ith input to the jth output, and the encoder memory order m is
m = max_{1≤i≤k} K_i.
10.1 Encoding of Convolutional Codes
Since the encoder is a linear system, where u^(i)(D) is the ith input sequence and v^(j)(D) is the jth output sequence, the generator polynomial g_i^(j)(D) can be interpreted as the encoder transfer function relating input i to output j. As with any k-input, n-output linear system, there are a total of k·n transfer functions. These can be represented by the k × n transfer function matrix

G(D) = | g_1^(1)(D)  g_1^(2)(D)  ⋯  g_1^(n)(D) |
       | g_2^(1)(D)  g_2^(2)(D)  ⋯  g_2^(n)(D) |
       |     ⋮                                  |
       | g_k^(1)(D)  g_k^(2)(D)  ⋯  g_k^(n)(D) |.
10.1 Encoding of Convolutional Codes
Using the transfer function matrix, the encoding equations for an (n, k, m) code can be expressed as
V(D) = U(D) G(D),
where U(D) = [u^(1)(D), u^(2)(D), …, u^(k)(D)] is the k-tuple of input sequences and V(D) = [v^(1)(D), v^(2)(D), …, v^(n)(D)] is the n-tuple of output sequences. After multiplexing, the code word becomes
v(D) = v^(1)(D^n) + D v^(2)(D^n) + ··· + D^(n−1) v^(n)(D^n).
Example 10.7
For the previous (3, 2, 1) convolutional code,

G(D) = | 1 + D   D   1 + D |
       |   D     1     1   |.

For the input sequences u^(1)(D) = 1 + D^2 and u^(2)(D) = 1 + D, the encoding equations are
[v^(1)(D), v^(2)(D), v^(3)(D)] = [1 + D^2, 1 + D] G(D) = [1 + D^3, 1 + D^3, D^2 + D^3],
and the code word is
v(D) = v^(1)(D^3) + D v^(2)(D^3) + D^2 v^(3)(D^3) = 1 + D + D^8 + D^9 + D^10 + D^11.
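Encoding with the transfer function matrix is then a polynomial matrix-vector product over GF(2); this sketch (ours, restating polymul_gf2) checks Example 10.7.

    def polymul_gf2(a, b):                   # as in the earlier sketch
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            b >>= 1
        return r

    U = [0b101, 0b011]                       # u(1)(D) = 1 + D^2, u(2)(D) = 1 + D
    G = [[0b11, 0b10, 0b11],                 # row 1: 1 + D,  D,  1 + D
         [0b10, 0b01, 0b01]]                 # row 2:   D,    1,    1
    V = [0, 0, 0]
    for i in range(2):
        for j in range(3):
            V[j] ^= polymul_gf2(U[i], G[i][j])
    print([bin(p) for p in V])               # v(1) = v(2) = 1 + D^3, v(3) = D^2 + D^3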
10.1 Encoding of Convolutional Codes
Then, we can find a means of representing the code word v(D) directly in terms of the input sequences. A little algebraic manipulation yields
v(D) = Σ_{i=1}^{k} u^(i)(D^n) g_i(D),
where
g_i(D) ≜ g_i^(1)(D^n) + D g_i^(2)(D^n) + ··· + D^(n−1) g_i^(n)(D^n),  1 ≤ i ≤ k,
is a composite generator polynomial relating the ith input sequence to v(D).
10.1 Encoding of Convolutional Codes Example 10.8
For the previous (2, 1, 3) convolutional code, the composite generator polynomial is
g(D) = g^(1)(D^2) + D g^(2)(D^2) = 1 + D + D^3 + D^4 + D^5 + D^6 + D^7,
and for u(D) = 1 + D^2 + D^3 + D^4, the code word is
v(D) = u(D^2) g(D) = (1 + D^4 + D^6 + D^8)(1 + D + D^3 + D^4 + D^5 + D^6 + D^7) = 1 + D + D^3 + D^7 + D^9 + D^11 + D^14 + D^15,
again agreeing with previous calculations.
10.2 Structural Properties of Convolutional Codes
Since a convolutional encoder is a sequential circuit, its operation can be described by a state diagram. The state of the encoder is defined as its shift register contents. For an (n, k, m) code with k > 1, the ith shift register contains K_i previous information bits. Defining K ≜ K_1 + K_2 + ··· + K_k as the total encoder memory, the encoder state at time unit l is the binary K-tuple of inputs
(u_{l−1}^(1) u_{l−2}^(1) ⋯ u_{l−K_1}^(1);  u_{l−1}^(2) u_{l−2}^(2) ⋯ u_{l−K_2}^(2);  …;  u_{l−1}^(k) u_{l−2}^(k) ⋯ u_{l−K_k}^(k)),
and there are a total of 2^K different possible states.
10.2 Structural Properties of Convolutional Codes
For an (n, 1, m) code, K = K_1 = m and the encoder state at time unit l is simply (u_{l−1} u_{l−2} ⋯ u_{l−m}). Each new block of k inputs causes a transition to a new state. There are 2^k branches leaving each state, one corresponding to each different input block. Note that for an (n, 1, m) code, there are only two branches leaving each state. Each branch is labeled with the k inputs (u_l^(1) u_l^(2) ⋯ u_l^(k)) causing the transition and the n corresponding outputs (v_l^(1) v_l^(2) ⋯ v_l^(n)). The states are labeled S_0, S_1, …, S_{2^K−1}, where by convention S_i represents the state whose binary K-tuple representation b_0, b_1, …, b_{K−1} is equivalent to the integer
i = b_0·2^0 + b_1·2^1 + ··· + b_{K−1}·2^(K−1).
Assuming that the encoder is initially in state S0 (the all-zero state), the code word corresponding to any given information sequence can be obtained by following the path through the state diagram and noting the corresponding outputs on the branch labels. Following the last nonzero information block, the encoder is returned to state S0 by a sequence of m all-zero blocks appended to the information sequence.
10.2 Structural Properties of Convolutional Codes
Encoder state diagram of a (2, 1, 3) code
If u = (1 1 1 0 1), the code word v = (1 1, 1 0, 0 1, 0 1, 1 1, 1 0, 1 1, 1 1)
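The same code word can be generated by walking the state diagram; a minimal sketch (ours) for (n, 1, m) codes, where the state is the previous m input bits:

    def encode_by_state(u, gens, m):
        """Trace an (n, 1, m) encoder through its state diagram.
        The state is (u_{l-1}, ..., u_{l-m}); m zeros flush it back to S0."""
        state = (0,) * m
        v = []
        for bit in list(u) + [0] * m:
            window = (bit,) + state          # (u_l, u_{l-1}, ..., u_{l-m})
            v.append(tuple(sum(w & t for w, t in zip(window, g)) % 2
                           for g in gens))
            state = window[:-1]              # the new state
        return v

    gens = [(1, 0, 1, 1), (1, 1, 1, 1)]      # the (2, 1, 3) code
    print(encode_by_state([1, 1, 1, 0, 1], gens, m=3))
    # [(1,1), (1,0), (0,1), (0,1), (1,1), (1,0), (1,1), (1,1)]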
10.2 Structural Properties of Convolutional Codes
The state diagram can be modified to provide a complete description of the Hamming weights of all nonzero code words (i.e., a weight distribution function for the code). State S0 is split into an initial state and a final state, the self-loop around state S0 is deleted, and each branch is labeled with a branch gain X^i, where i is the weight of the n encoded bits on that branch. Each path connecting the initial state to the final state represents a nonzero code word that diverges from and remerges with state S0 exactly once. The path gain is the product of the branch gains along the path, and the weight of the associated code word is the power of X in the path gain.
Modified encoder state diagram of a (2, 1, 3) code.
The path representing the state sequence S0S1S3S7S6S5S2S4S0 has path gain X^2·X^1·X^1·X^1·X^2·X^1·X^2·X^2 = X^12.
Modified encoder state diagram of a (3, 2, 1) code.
The path representing the state sequence S0S1S3S2S0 has path gain X^2·X^1·X^0·X^1 = X^4.
10.2 Structural Properties of Convolutional Codes
The weight distribution function of a code can be determined by considering the modified state diagram as a signal flow graph and applying Mason's gain formula to compute its "generating function"
T(X) = Σ_i A_i X^i,
where A_i is the number of code words of weight i. In a signal flow graph, a path connecting the initial state to the final state that does not go through any state twice is called a forward path. A closed path starting at any state and returning to that state without going through any other state twice is called a loop.
10.2 Structural Properties of Convolutional Codes
Let C_i be the gain of the ith loop. A set of loops is nontouching if no state belongs to more than one loop in the set. Let {i} be the set of all loops, {i′, j′} the set of all pairs of nontouching loops, {i″, j″, l″} the set of all triples of nontouching loops, and so on. Then define
Δ = 1 − Σ_{i} C_i + Σ_{i′,j′} C_{i′}C_{j′} − Σ_{i″,j″,l″} C_{i″}C_{j″}C_{l″} + ···,
where Σ_{i} C_i is the sum of the loop gains, Σ_{i′,j′} C_{i′}C_{j′} is the product of the loop gains of two nontouching loops summed over all pairs of nontouching loops, and Σ_{i″,j″,l″} C_{i″}C_{j″}C_{l″} is the product of the loop gains of three nontouching loops summed over all triples of nontouching loops.
10.2 Structural Properties of Convolutional Codes
And Δ_i is defined exactly like Δ, but only for that portion of the graph not touching the ith forward path; that is, all states along the ith forward path, together with all branches connected to these states, are removed from the graph when computing Δ_i. Mason's formula for computing the generating function T(X) of a graph can now be stated as
T(X) = Σ_{i} F_i Δ_i / Δ,
where the sum in the numerator is over all forward paths and F_i is the gain of the ith forward path.
Example (2,1,3) Code: There are 11 loops in the modified encoder state diagram.
[Slide table listing the state sequences and gains C_1(X), …, C_11(X) of the 11 loops omitted.]
10.2 Structural Properties of Convolutional Codes
Example (2,1,3) Code: (cont.) There are 10 pairs of nontouching loops and two triples of nontouching loops; there are no other sets of nontouching loops. [Slide tables listing the loop-gain products omitted.] Combining these terms gives
Δ = 1 − 2X − X^3.
10.2 Structural Properties of Convolutional Codes
Example (2,1,3) Code: (cont.) There are seven forward paths in this state diagram:
Forward path 1: S0S1S3S7S6S5S2S4S0, gain F_1 = X^12
Forward path 2: S0S1S3S7S6S4S0, gain F_2 = X^7
Forward path 3: S0S1S3S6S5S2S4S0, gain F_3 = X^11
Forward path 4: S0S1S3S6S4S0, gain F_4 = X^6
Forward path 5: S0S1S2S5S3S7S6S4S0, gain F_5 = X^8
Forward path 6: S0S1S2S5S3S6S4S0, gain F_6 = X^7
Forward path 7: S0S1S2S4S0, gain F_7 = X^7
10.2 Structural Properties of Convolutional Codes
Example (2,1,3) Code: (cont.) Forward paths 1 and 5 touch all states in the graph, and hence the subgraph not touching these paths contains no states. Therefore,
Δ_1 = Δ_5 = 1.
The subgraphs not touching forward paths 3 and 6 give
Δ_3 = Δ_6 = 1 − X.
Example (2,1,3) Code: (cont.) The subgraph not touching forward path 2 gives
Δ_2 = 1 − X.
The subgraph not touching forward path 4 gives
Δ_4 = 1 − (X + X) + (X^2) = 1 − 2X + X^2.
10.2 Structural Properties of Convolutional Codes
Example (2,1,3) Code: (cont.) The subgraph not touching forward path 7 gives
Δ_7 = 1 − (X + X^4 + X^5) + (X^5) = 1 − X − X^4.
10.2 Structural Properties of Convolutional Codes
Example (2,1,3) Code: (cont.) The generating function for this graph is then given by
T(X) = [X^12·1 + X^7(1 − X) + X^11(1 − X) + X^6(1 − 2X + X^2) + X^8·1 + X^7(1 − X) + X^7(1 − X − X^4)] / (1 − 2X − X^3)
     = (X^6 + X^7 − X^8) / (1 − 2X − X^3)
     = X^6 + 3X^7 + 5X^8 + 11X^9 + 25X^10 + ···.
The code thus contains one code word of weight 6, three code words of weight 7, five code words of weight 8, and so on.
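The series expansion of T(X) can be checked symbolically; a small sympy sketch (our own verification of the closed form above):

    from sympy import symbols, series

    X = symbols('X')
    T = (X**6 + X**7 - X**8) / (1 - 2*X - X**3)
    print(series(T, X, 0, 11))
    # X**6 + 3*X**7 + 5*X**8 + 11*X**9 + 25*X**10 + O(X**11)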
10.2 Structural Properties of Convolutional Codes
Example (3,2,1) Code: The same procedure can be applied to the modified encoder state diagram of the (3, 2, 1) code. There are 15 forward paths in this graph. Hence, the generating function T(X) follows from Mason's formula. [The loop-gain computations and the explicit expression for T(X) are omitted here; the lowest power of X in T(X) is X^3, consistent with d_free = 3 for this code (see Section 10.3).]
10.2 Structural Properties of Convolutional Codes
Additional information about the structure of a code can be obtained using the same procedure. If the modified state diagram is augmented by labeling each branch corresponding to a nonzero information block with Y^j, where j is the weight of the k information bits on the branch, and labeling every branch with Z, the generating function is given by
T(X, Y, Z) = Σ_{i,j,l} A_{i,j,l} X^i Y^j Z^l,
where A_{i,j,l} denotes the number of code words of weight i whose associated information sequence has weight j and whose length is l branches.
The augmented state diagram for the (2, 1, 3) code.
10.2 Structural Properties of Convolutional Codes
Example (2,1,3) Code: For the graph of the augmented state diagram for the (2, 1, 3) code, carrying out the same computation with the augmented branch gains gives [the intermediate loop and forward-path terms are omitted here]
T(X, Y, Z) = X^6Y^2Z^5 + X^7YZ^4 + X^7Y^3Z^6 + X^7Y^3Z^7 + ···.
Example (2,1,3) Code: (cont.) This implies that the code word of weight 6 has length 5 branches and an information sequence of weight 2; one code word of weight 7 has length 4 branches and information sequence weight 1, another has length 6 branches and information sequence weight 3, and the third has length 7 branches and information sequence weight 3; and so on.
10.2 Structural Properties of Convolutional Codes
The Transfer Function of a Convolutional Code
The state diagram can be used to obtain the distance properties of a convolutional code. Without loss of generality, we assume that the all-zero code sequence is the input to the encoder.
First, we label the branches of the state diagram as either D^0 = 1, D^1, D^2, or D^3, where the exponent of D denotes the Hamming distance between the sequence of output bits corresponding to each branch and the sequence of output bits corresponding to the all-zero branch. The self-loop at node a can be eliminated, since it contributes nothing to the distance properties of a code sequence relative to the all-zero code sequence. Furthermore, node a is split into two nodes, one of which represents the input and the other the output of the state diagram.
The Transfer Function of a Convolutional Code
Using the modified state diagram, we can obtain four state equations:
X_c = D^3 X_a + D X_b
X_b = D X_c + D X_d
X_d = D^2 X_c + D^2 X_d
X_e = D^2 X_b.
The transfer function for the code is defined as T(D) = X_e/X_a. By solving the state equations, we obtain
T(D) = D^6 / (1 − 2D^2) = D^6 + 2D^8 + 4D^10 + 8D^12 + ··· = Σ_{k=0}^{∞} 2^k D^(6+2k).
The transfer function indicates that there is a single path of Hamming distance 6 from the all-zero path, two paths of distance 8, and so on; the minimum free distance of the code is d_free = 6.
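Because the state equations are linear, they can be solved mechanically; a sympy sketch (ours) reproducing T(D):

    from sympy import symbols, Eq, solve, series, simplify

    D, Xa, Xb, Xc, Xd, Xe = symbols('D X_a X_b X_c X_d X_e')
    eqs = [Eq(Xc, D**3*Xa + D*Xb),
           Eq(Xb, D*Xc + D*Xd),
           Eq(Xd, D**2*Xc + D**2*Xd),
           Eq(Xe, D**2*Xb)]
    sol = solve(eqs, [Xb, Xc, Xd, Xe], dict=True)[0]
    T = simplify(sol[Xe] / Xa)    # D**6/(1 - 2*D**2), up to rearrangement
    print(T)
    print(series(T, D, 0, 13))    # D**6 + 2*D**8 + 4*D**10 + 8*D**12 + O(D**13)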
The transfer function can be used to provide more detailed information than just the distance of the various paths. Suppose we introduce a factor N into all branch transitions caused by the input bit 1. Furthermore, we introduce a factor of J into each branch of the state diagram so that the exponent of J will serve as a counting variable to indicate the number of branches in any given path from node a to node e.
The Transfer Function of a Convolutional Code
The state equations for the state diagram are:
X_c = JND^3 X_a + JND X_b
X_b = JD X_c + JD X_d
X_d = JND^2 X_c + JND^2 X_d
X_e = JD^2 X_b.
Upon solving these equations for the ratio X_e/X_a, we obtain the transfer function:
T(D, N, J) = J^3ND^6 / (1 − JND^2(1 + J))
           = J^3ND^6 + J^4N^2D^8 + J^5N^2D^8 + J^5N^3D^10 + 2J^6N^3D^10 + J^7N^3D^10 + ···.
The exponent of the factor J indicates the length of the path that merges with the all-zero path for the first time. The exponent of the factor N indicates the number of 1s in the information sequence for that path. The exponent of D indicates the distance of the sequence of encoded bits for that path from the all-zero sequence.
Reference: John G. Proakis, Digital Communications, 4th ed., McGraw-Hill, 2001, pp. 477–482.
The Transfer Function of a Convolutional Code
10.2 Structural Properties of Convolutional Codes
An important subclass of convolutional codes is the class of systematic codes. In a systematic code, the first k output sequences are exact replicas of the k input sequences, i.e.,
v^(i) = u^(i),  i = 1, 2, …, k,
and the generator sequences satisfy
g_i^(j) = (1 0 0 ⋯ 0) if j = i, and g_i^(j) = (0 0 0 ⋯ 0) if j ≠ i,  for i = 1, 2, …, k and j = 1, 2, …, k.
10.2 Structural Properties of Convolutional Codes
The generator matrix is given by

G = | I P_0   0 P_1   0 P_2   ⋯   0 P_m                        |
    |         I P_0   0 P_1   ⋯   0 P_{m−1}   0 P_m            |
    |                 I P_0   ⋯   0 P_{m−2}   0 P_{m−1}  0 P_m |
    |                         ⋱                                |

where I is the k × k identity matrix, 0 is the k × k all-zero matrix, and P_l is the k × (n − k) matrix

P_l = | g_{1,l}^(k+1)  g_{1,l}^(k+2)  ⋯  g_{1,l}^(n) |
      | g_{2,l}^(k+1)  g_{2,l}^(k+2)  ⋯  g_{2,l}^(n) |
      |     ⋮                                        |
      | g_{k,l}^(k+1)  g_{k,l}^(k+2)  ⋯  g_{k,l}^(n) |.

And the transfer function matrix becomes

G(D) = | 1 0 ⋯ 0  g_1^(k+1)(D) ⋯ g_1^(n)(D) |
       | 0 1 ⋯ 0  g_2^(k+1)(D) ⋯ g_2^(n)(D) |
       |    ⋮                               |
       | 0 0 ⋯ 1  g_k^(k+1)(D) ⋯ g_k^(n)(D) |.
10.2 Structural Properties of Convolutional Codes
Example (2,1,3) Systematic Code: The transfer function matrix is
G(D) = [1   1 + D + D^3].
For an input sequence u(D) = 1 + D^2 + D^3, the information sequence is
v^(1)(D) = u(D) g^(1)(D) = (1 + D^2 + D^3) · 1 = 1 + D^2 + D^3,
and the parity sequence is
v^(2)(D) = u(D) g^(2)(D) = (1 + D^2 + D^3)(1 + D + D^3) = 1 + D + D^2 + D^3 + D^4 + D^5 + D^6.
10.2 Structural Properties of Convolutional Codes
One advantage of systematic codes is that encoding is somewhat simpler than for nonsystematic codes because less hardware is required. For an (n, k, m) systematic code with k > n − k, there exists a modified encoding circuit that normally requires fewer than K shift register stages.
The (2, 1, 3) systematic code requires only one modulo-2 adder with three inputs.
10.2 Structural Properties of Convolutional Codes
Example (3,2,2) Systematic Code: Consider a (3, 2, 2) systematic code with transfer function matrix

G(D) = | 1  0  g_1^(3)(D) |
       | 0  1  g_2^(3)(D) |.

The information sequences are given by v^(1)(D) = u^(1)(D) and v^(2)(D) = u^(2)(D), and the parity sequence is given by
v^(3)(D) = u^(1)(D) g_1^(3)(D) + u^(2)(D) g_2^(3)(D).
The (3, 2, 2) systematic encoder requires only two stages of encoder memory rather than four.
A complete discussion of the minimal encoder memory required to realize a convolutional code is given by Forney. In most cases the straightforward realization requiring K stages of shift register memory is most efficient. In the case of an (n, k, m) systematic code with k > n − k, a simpler realization usually exists. Another advantage of systematic codes is that no inverting circuit is needed to recover the information sequence from the code word.
10.2 Structural Properties of Convolutional Codes
Nonsystematic codes, on the other hand, require an inverter to recover the information sequence; that is, an n × k matrix G^−1(D) must exist such that
G(D)G^−1(D) = I D^l
for some l ≥ 0, where I is the k × k identity matrix. Since V(D) = U(D)G(D), we can obtain
V(D)G^−1(D) = U(D)G(D)G^−1(D) = U(D)D^l,
and the information sequence can be recovered with an l-time-unit delay from the code word by letting V(D) be the input to the n-input, k-output linear sequential circuit whose transfer function matrix is G^−1(D).
10.2 Structural Properties of Convolutional Codes
10.2 Structural Properties of Convolutional Codes
For an (n, 1, m) code, a transfer function matrix G(D) has a feedforward inverse G^−1(D) of delay l if and only if
GCD[g^(1)(D), g^(2)(D), …, g^(n)(D)] = D^l
for some l ≥ 0, where GCD denotes the greatest common divisor. For an (n, k, m) code with k > 1, let Δ_i(D), i = 1, 2, …, (n choose k), be the determinants of the (n choose k) distinct k × k submatrices of the transfer function matrix G(D). A feedforward inverse of delay l exists if and only if
GCD[Δ_i(D) : i = 1, 2, …, (n choose k)] = D^l
for some l ≥ 0.
Example (2,1,3) Code: For the (2, 1, 3) code,
GCD[1 + D^2 + D^3, 1 + D + D^2 + D^3] = 1 = D^0,
and the transfer function matrix
G^−1(D) = | 1 + D + D^2 |
          |   D + D^2   |
provides the required inverse of delay 0 [i.e., G(D)G^−1(D) = 1]. The implementation of the inverse is shown below.
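The delay-0 inverse can be checked by confirming g^(1)(D)(1 + D + D^2) + g^(2)(D)(D + D^2) = 1 over GF(2); a short sketch (ours, restating polymul_gf2):

    def polymul_gf2(a, b):                   # as in the earlier sketch
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            b >>= 1
        return r

    g1, g2 = 0b1101, 0b1111                  # 1 + D^2 + D^3, 1 + D + D^2 + D^3
    a,  b  = 0b0111, 0b0110                  # 1 + D + D^2,   D + D^2
    print(polymul_gf2(g1, a) ^ polymul_gf2(g2, b))   # 1 -> inverse of delay 0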
10.2 Structural Properties of Convolutional Codes
Example (3,2,1) Code: For the (3, 2, 1) code, the 2 × 2 submatrices of G(D) yield determinants 1 + D + D^2, 1 + D^2, and 1. Since
GCD[1 + D + D^2, 1 + D^2, 1] = 1 = D^0,
there exists a feedforward inverse of delay 0. The required transfer function matrix is given by:

G^−1(D) = | 0     0   |
          | 1   1 + D |
          | 1     D   |.
10.2 Structural Properties of Convolutional Codes
To understand what happens when a feedforward inverse does not exist, it is best to consider an example.
For the (2, 1, 2) code with g^(1)(D) = 1 + D and g^(2)(D) = 1 + D^2,
GCD[1 + D, 1 + D^2] = 1 + D,
and a feedforward inverse does not exist. If the information sequence is u(D) = 1/(1 + D) = 1 + D + D^2 + ···, the output sequences are v^(1)(D) = 1 and v^(2)(D) = 1 + D; that is, the code word contains only three nonzero bits even though the information sequence has infinite weight. If this code word is transmitted over a BSC, and the three nonzero bits are changed to zeros by the channel noise, the received sequence will be all zeros.
10.2 Structural Properties of Convolutional Codes
An MLD will then produce the all-zero code word as its estimate, since this is a valid code word and it agrees exactly with the received sequence. The estimated information sequence will be û(D) = 0, implying an infinite number of decoding errors caused by a finite number (only three in this case) of channel errors. Clearly, this is a very undesirable circumstance; the code is said to be subject to catastrophic error propagation and is called a catastrophic code. The conditions GCD[g^(1)(D), g^(2)(D), …, g^(n)(D)] = D^l and GCD[Δ_i(D) : i = 1, 2, …, (n choose k)] = D^l are then necessary and sufficient conditions for a code to be noncatastrophic.
10.2 Structural Properties of Convolutional Codes
Any code for which a feedforward inverse exists is noncatastrophic. Another advantage of systematic codes is that they are always noncatastrophic. A code is catastrophic if and only if the state diagram contains a loop of zero weight other than the self-loop around the state S0. Note that the self-loop around the state S3 has zero weight.
State diagram of a (2, 1, 2) catastrophic code.
10.2 Structural Properties of Convolutional Codes
In choosing nonsystematic codes for use in a communication system, it is important to avoid the selection of catastrophic codes. Only a fraction 1/(2^n − 1) of (n, 1, m) nonsystematic codes are catastrophic. A similar result for (n, k, m) codes with k > 1 is still lacking.
10.3 Distance Properties of Convolutional Codes
The performance of a convolutional code depends on the decoding algorithm employed and the distance properties of the code. The most important distance measure for convolutional codes is the minimum free distance d_free, defined as
d_free ≜ min{d(v′, v″) : u′ ≠ u″},
where v′ and v″ are the code words corresponding to the information sequences u′ and u″, respectively. In the equation above, it is assumed that if u′ and u″ are of different lengths, zeros are added to the shorter sequence so that their corresponding code words have equal lengths.
d_free is the minimum distance between any two code words in the code. Since a convolutional code is a linear code,
d_free = min{w(v′ + v″) : u′ ≠ u″} = min{w(v) : u ≠ 0},
where v is the code word corresponding to the information sequence u.
Also, d_free is the minimum weight of all paths in the state diagram that diverge from and remerge with the all-zero state S0, and it is the lowest power of X in the code-generating function T(X). For example, d_free = 6 for the (2, 1, 3) code of Example 10.10(a), and d_free = 3 for the (3, 2, 1) code of Example 10.10(b). Another important distance measure for convolutional codes is the column distance function (CDF). Letting
[v]_i = (v_0^(1)v_0^(2)⋯v_0^(n), v_1^(1)v_1^(2)⋯v_1^(n), …, v_i^(1)v_i^(2)⋯v_i^(n))
denote the ith truncation of the code word v, and
[u]_i = (u_0^(1)u_0^(2)⋯u_0^(k), u_1^(1)u_1^(2)⋯u_1^(k), …, u_i^(1)u_i^(2)⋯u_i^(k))
denote the ith truncation of the information sequence u.
10.3 Distance Properties of Convolutional Codes
The column distance function of order i, d_i, is defined as
d_i ≜ min{d([v′]_i, [v″]_i) : [u′]_0 ≠ [u″]_0} = min{w([v]_i) : [u]_0 ≠ 0},
where v is the code word corresponding to the information sequence u.
[G]_i is a k(i + 1) × n(i + 1) submatrix of G with the form

[G]_i = | G_0  G_1  ⋯  G_i     |
        |      G_0  ⋯  G_{i−1} |
        |           ⋱    ⋮     |
        |              G_0     |,   i ≤ m,

or

[G]_i = | G_0  G_1  ⋯  G_m              |
        |      G_0  ⋯  G_{m−1}  G_m     |
        |           ⋱            ⋱      |
        |               G_0  ⋯  G_m     |
        |                    ⋱    ⋮     |
        |                        G_0    |,   i > m,

where each row is the previous row shifted n places to the right and truncated after n(i + 1) columns.
Then [v]_i = [u]_i[G]_i, and
d_i = min{w([u]_i[G]_i) : [u]_0 ≠ 0}
is seen to depend only on the first n(i + 1) columns of G; this accounts for the name "column distance function." The definition implies that d_i cannot decrease with increasing i (i.e., it is a monotonically nondecreasing function of i). The complete CDF of the (2, 1, 16) code with
g^(1)(D) = 1 + D^2 + D^5 + D^6 + D^8 + D^13 + D^16
g^(2)(D) = 1 + D + D^3 + D^4 + D^7 + D^9 + D^10 + D^11 + D^12 + D^14 + D^15 + D^16
is shown in the figure on the following page.
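For small i, the CDF can be computed by brute force directly from the definition. The sketch below (ours) uses the (2, 1, 3) code rather than the (2, 1, 16) code to keep the search space tiny.

    from itertools import product

    def cdf(gens, i):
        """Column distance d_i: minimum truncated code word weight over all
        [u]_i whose first information bit is nonzero (k = 1 codes)."""
        best = None
        for tail in product([0, 1], repeat=i):
            u = (1,) + tail                  # [u]_0 != 0
            w = sum(sum(u[l - t] & g[t] for t in range(len(g)) if t <= l) % 2
                    for l in range(i + 1) for g in gens)
            best = w if best is None else min(best, w)
        return best

    gens = [(1, 0, 1, 1), (1, 1, 1, 1)]      # the (2, 1, 3) code
    print([cdf(gens, i) for i in range(13)]) # nondecreasing; approaches d_free = 6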
10.3 Distance Properties of Convolutional Codes
[Figure: the complete CDF of the (2, 1, 16) code.]
Two cases are of specific interest: i = m and i → ∞. For i = m, d_m is called the minimum distance of a convolutional code and will also be denoted d_min. From the definition d_i = min{w([u]_i[G]_i) : [u]_0 ≠ 0}, we see that d_min represents the minimum-weight code word over the first constraint length whose initial information block is nonzero. For the (2, 1, 16) convolutional code, d_min = d_16 = 8. For i → ∞, lim_{i→∞} d_i is the minimum weight of any code word whose first information block is nonzero. Comparing the definitions of lim_{i→∞} d_i and d_free, it can be shown that for noncatastrophic codes
lim_{i→∞} d_i = d_free.
In other words, d_i eventually reaches d_free and then increases no more. This usually happens within three to four constraint lengths (i.e., when i reaches 3m or 4m). The equation above is not necessarily true for catastrophic codes.
Take as an example the (2, 1, 2) catastrophic code whose state diagram is shown in Figure 10.11. For this code, d_0 = 2 and d_1 = d_2 = ··· = lim_{i→∞} d_i = 3, since the truncated information sequence [u]_i = (1, 1, 1, …, 1) always produces the truncated code word [v]_i = (1 1, 0 1, 0 0, 0 0, …, 0 0), even in the limit as i → ∞. Note that all paths in the state diagram that diverge from and remerge with the all-zero state S0 have weight at least 4, and hence d_free = 4.
10.3 Distance Properties of Convolutional Codes
Hence, we have a situation in which lim_{i→∞} d_i = 3 ≠ d_free = 4.
It is characteristic of catastrophic codes that an infinite-weight information sequence produces a finite-weight code word. In some cases, as in the example above, this code word can have weight less than the free distance of the code. This is due to the zero-weight loop in the state diagram. In other words, an information sequence that cycles around this zero-weight loop forever will itself pick up infinite weight without adding to the weight of the code word.
10.3 Distance Properties of Convolutional Codes
In a noncatastrophic code, which contains no zero-weight loop other than the self-loop around the all-zero state S0, all infinite-weight information sequences must generate infinite-weight code words, and the minimum-weight code word always has finite length. Unfortunately, the information sequence that produces the minimum-weight code word may be quite long in some cases, thereby making the calculation of d_free a rather formidable task. The best achievable d_free for a convolutional code with a given rate and encoder memory has not been determined exactly. Upper and lower bounds on d_free for the best code have been obtained using a random coding approach.
10.3 Distance Properties of Convolutional Codes
A comparison of the bounds for nonsystematic codes with the bounds for systematic codes implies that more free distance is available with nonsystematic codes of a given rate and encoder memory than with systematic codes. This observation is verified by the code construction results presented in succeeding chapters, and it has important consequences when a code with large d_free must be selected for use with either Viterbi or sequential decoding. Heller (1968) derived an upper bound on the minimum free distance of a rate 1/n convolutional code:
d_free ≤ min_{l≥1} ⌊ (2^(l−1) / (2^l − 1)) · (m + l) · n ⌋.
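A quick numeric check of the bound (our own sketch): for m = 3, n = 2 it gives d_free ≤ 6, which the (2, 1, 3) code meets with equality.

    def heller_bound(n, m, lmax=20):
        """Heller upper bound on d_free for a rate-1/n code with memory m."""
        return min(int(2**(l - 1) / (2**l - 1) * (m + l) * n)
                   for l in range(1, lmax + 1))

    print(heller_bound(n=2, m=3))            # 6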
[Tables of free-distance bounds for rate k/5 and rate k/7 codes omitted.]