Lecture 6A
Hamming Distance and Euclidean Distance
Performance: Block Error Rate
Coding Gain
Asymptotic Coding Gain
Soft-Decision Decoding
Codes on Graphs
Lecture 6A: Coding 2
Wireless Embedded Systems - Communication Systems Lab - Winter 2019 Lecture 6A 1
Hamming and Euclidean Distance
Consider two transmitted codewords a1 and a2 that differ by the Hamming distance d. The modulated codewords can be expressed as two signal vectors s1 and s2.
Suppose that the codewords are modulated using BPSK. Each received code bit is then one of two signal levels ±√Ec, where Ec is the average energy in a code bit (not usually equal to the average energy Eb in an uncoded data bit).
For each bit position where a1 and a2 differ, the corresponding code bits are separated by a euclidean distance of 2√Ec.
[Plot omitted: two BPSK constellation points at ±√Eb separated by dmin.]
Figure: Euclidean distance between uncoded symbols for BPSK. For coded signals, the energy per coded bit Ec is reduced as compared to the energy Eb per uncoded bit.
Hamming Distance for BPSK
For a binary block code, the minimum number of positions that differ between any two codewords is the minimum Hamming distance dmin.
The squared euclidean distance between two codewords at this minimum Hamming distance is

d²min = |s1 − s2|² = Σ_{i=1}^{n} (s1i − s2i)² = 4 dmin Ec = 4(2t + 1) Ec. (1)
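As a numerical check of (1), the following sketch (an illustration, not part of the lecture's derivation; the codewords and Ec = 1 are chosen arbitrarily) maps two codewords to BPSK signal vectors and confirms that the squared euclidean distance equals 4·dmin·Ec:

```python
import math

def bpsk(bits, Ec=1.0):
    # Map each code bit to an antipodal signal level +/- sqrt(Ec).
    return [math.sqrt(Ec) if b else -math.sqrt(Ec) for b in bits]

# Two example codewords that differ in four positions (Hamming distance 4).
a1 = [0, 1, 1, 0, 0, 1, 1, 0]
a2 = [0, 1, 0, 1, 0, 1, 0, 1]
d_hamming = sum(x != y for x, y in zip(a1, a2))

s1, s2 = bpsk(a1), bpsk(a2)
d2 = sum((x - y) ** 2 for x, y in zip(s1, s2))  # squared euclidean distance
assert abs(d2 - 4 * d_hamming * 1.0) < 1e-12    # matches 4*dmin*Ec in (1)
```

Each differing position contributes (2√Ec)² = 4Ec, so the total is 4·dmin·Ec.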
For other binary modulation formats, the expression for the euclidean distance is different, but the form of (1) is valid.
For nonbinary modulation formats, the form given in (1) is not necessarily valid.
Code Bits and (User) Data Bits
To determine how well a code works, set the total energy in the codeword to be equal to the total energy in the dataword.
The total energy in the dataword is kEb, where Eb is the energy in an uncoded bit.
The energy in the codeword is nEc, where Ec is the energy in a code bit.
Equating the two expressions we have

Ec = EbRc, (2)

so that the energy per code bit is reduced relative to the energy per uncoded data bit by the code rate Rc = k/n.
Therefore, for the same noise power spectral density N and the same average symbol energy, a coded system will always, on average, produce more single code bit errors than an uncoded system because Ec < Eb.
Nevertheless, while the probability of a single code bit error increases before decoding, the block error rate decreases for a properly designed code.
Performance of Hard-Decision Decoding
Consider hard-decision decoding for a channel with white additive gaussian noise.
Let pe be the probability of a block error.
pC = 1− pe is the probability of correctly decoding a block.
For an uncoded system, any bit error within a block of length n will produce a block error. For independent transmitted bits, this error probability is given by

pe = 1 − pC = 1 − (1 − pb)^n (3)
   ≈ n pb, (4)

where the last form is valid for pb ≪ 1.
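A short numerical sketch (the block length and bit error rate below are illustrative choices, not values from the lecture) shows how close the approximation (4) is to the exact expression (3) when pb ≪ 1:

```python
pb = 1e-4   # probability of a single bit error (illustrative)
n = 23      # block length (illustrative)

pe_exact = 1 - (1 - pb) ** n   # eq. (3)
pe_approx = n * pb             # eq. (4)

# For pb << 1 the two expressions agree to well under one percent.
assert abs(pe_exact - pe_approx) / pe_approx < 0.01
```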
Uncoded Error Rate for BPSK
The probability of a bit error for binary phase-shift-keying is pb = (1/2) erfc(√(Eb/N)) (see Lecture 4B).
Substituting, the block error rate for an uncoded binary phase-shift-keying system is

pe ≈ (n/2) erfc(√(Eb/N)), (5)

and is simply n times the probability of a single bit error if that probability is small.
Block Error Rate
Now compare this uncoded block error rate to the block error rate using a code. If the block code can correct up to t errors in a word of length n, then the probability of correctly decoding the block is

pC = 1 − pe = Σ_{ℓ=0}^{t} C(n, ℓ) (pb)^ℓ (1 − pb)^(n−ℓ),

where C(n, ℓ) = n!/ℓ!(n − ℓ)! is the binomial coefficient, which is the number of patterns of ℓ errors within a word of length n.
The term (pb)^ℓ (1 − pb)^(n−ℓ) is the probability that the word contains ℓ code bits that are in error and n − ℓ code bits that are correct.
Every senseword that contains up to t errors can be corrected.
A block decoding error occurs if the senseword contains more than t errors. This probability is

pe = 1 − pC = Σ_{ℓ=t+1}^{n} C(n, ℓ) (pb)^ℓ (1 − pb)^(n−ℓ). (6)

(Hard-decision block error rate)
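Equation (6) can be evaluated directly as a tail of the binomial distribution; the sketch below uses illustrative parameter values (n = 23, t = 3, as for a Golay (23,12) code, and an arbitrary pb):

```python
from math import comb

def block_error_hard(n, t, pb):
    # Probability that more than t of the n code bits are in error, eq. (6).
    return sum(comb(n, l) * pb**l * (1 - pb)**(n - l)
               for l in range(t + 1, n + 1))

# Sanity check: with t = -1 every senseword is a block error, so the sum is 1.
assert abs(block_error_hard(23, -1, 0.01) - 1.0) < 1e-12

pe = block_error_hard(23, 3, 0.01)  # illustrative: n = 23, t = 3, pb = 1e-2
```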
Figure
[Plot omitted: log bit error rate versus Eb/N0 in dB. Panel (a) shows uncoded BPSK, the code bit error rate before decoding, and the hard-decision and soft-decision decoding curves with their coding gains. Panel (b) marks the Shannon limit at −1.6 dB, the hard-decision limit for BPSK at 0.37 dB, and an 11.2 dB span to uncoded BPSK.]
Figure: (a) The code bit error rate before decoding using binary phase-shift-keying, the user data bit error rate, the bit error rate after hard-decision decoding for a Golay (23,12,7) code, and the upper bound on the bit error rate using soft-decision decoding from (11). (b) The maximum achievable coding gain.
Coding Gain
The coding gain is defined as the difference in Eb/N between the uncoded probability of an error and the coded probability of an error.
The maximum coding gain is limited because there is a fundamental lower bound on the value of Eb/N required to communicate reliably over a memoryless channel with white additive gaussian noise.
For this channel, the minimum value of Eb/N is called the Shannon limit and is loge 2, or −1.59 dB.
The performance of all coded-modulation formats for a memoryless channel with white additive gaussian noise, regardless of complexity, lies to the right of this limit.
If hard-decision detection is used, then the lower limit for the minimum value of Eb/N is increased to (π/2) loge 2, or 0.37 dB, which is about 2 dB larger than the Shannon limit.
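Both limits can be reproduced numerically (a small sketch, included only to verify the quoted dB values):

```python
import math

def to_db(x):
    # Convert a linear energy ratio to decibels.
    return 10 * math.log10(x)

shannon_limit = to_db(math.log(2))               # ln 2, about -1.59 dB
hard_limit = to_db((math.pi / 2) * math.log(2))  # (pi/2) ln 2, about 0.37 dB
```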
Simplified Forms
An estimate valid for pb ≪ 1 can be obtained by noting that the probability of each error pattern is a rapidly decreasing function of the number of errors in the pattern.
Therefore, the error pattern with the fewest errors that still produces a block error is the most likely error pattern.
The most likely error pattern that produces a block error is the pattern with t + 1 errors.
Simplified Block Error Probability
The block error probability for this error pattern is determined using only the first term in the summation in (6) and is

C(n, t+1) (pb)^(t+1) (1 − pb)^(n−t−1).

Examining this term, if n is large and pb is small, then (1 − pb)^(n−t−1) ≈ 1.
Accordingly, the approximate probability of a block error is

pe ≈ C(n, t+1) pb^(t+1) ≈ nt pb^(t+1), (7)

where nt ≐ C(n, t+1) is the number of possible patterns with t + 1 errors.
Asymptotic Coding Gain
The probability of a single code bit error depends on the modulation format.
For binary phase-shift-keying in additive gaussian noise, pb = (1/2) erfc(√(Ec/N)), where Ec is the energy in a code bit.
Using this expression in (7) and substituting Ec = EbRc for the energy in a code bit, the probability of a block error is

pe ≈ nt [(1/2) erfc(√(Rc Eb/N))]^(t+1).

If Eb/N is large, then pb ≪ 1 and erfc(x) ≈ e^(−x²). Therefore erfc(x)^(t+1) ≈ erfc(x√(t + 1)) and

pe ≈ nt [(1/2) erfc(√(Rc(t + 1) Eb/N))]. (8)

(Hard-decision block error probability)
Asymptotic Hard-Decision Coding Gain
Comparing the coded probability of a block error to the uncoded probability of a block error given in (3), there is an extra factor of Rc(t + 1) multiplying Eb/N in the argument of the erfc function.
This factor is defined as the asymptotic coding gain G.
For the Golay code used in Figure 2, Rc = 12/23, dmin = 7, and t = ⌊(dmin − 1)/2⌋ = 3.
The asymptotic coding gain is G = Rc(t + 1) = 48/23, or 3.2 dB, in good agreement with the 3.1 dB derived from (3) at pe = 10⁻⁶, because Eb/N is large, pe is small, and t is small compared to n, so that the term nt is of second-order importance.
If these conditions are satisfied, then using erfc(x) ≈ e^(−x²) and 2(t + 1) ≈ dmin, we can write

pe ≈ e^(−Rc dmin Eb / 2N). (9)

(Approximate hard-decision block error rate)
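The asymptotic coding gain quoted for the Golay code can be reproduced directly (a short check, not part of the lecture):

```python
import math

Rc = 12 / 23          # code rate of the Golay (23,12) code
dmin = 7
t = (dmin - 1) // 2   # = 3 correctable errors

G = Rc * (t + 1)            # 48/23 on a linear scale
G_db = 10 * math.log10(G)   # about 3.2 dB
```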
Repetition Codes
The expression for the asymptotic coding gain is suspect if any of the conditions used in the derivation of (9) is not satisfied.
For example, for the (3,1) repetition code, the number of correctable errors t = 1 is not small with respect to the length of the code n = 3.
Consequently, while the calculated coding gain at pe = 10⁻⁶ is about 1 dB, the value of the asymptotic coding gain, G = 2/3, is less than one and is meaningless.
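The repetition-code caveat is easy to see numerically: the formula returns a "gain" below unity, i.e. a negative value in dB (a quick illustration):

```python
import math

Rc, t = 1 / 3, 1           # (3,1) repetition code: rate 1/3, corrects 1 error
G = Rc * (t + 1)           # = 2/3, less than one
G_db = 10 * math.log10(G)  # about -1.76 dB, a meaningless negative "gain"
assert G < 1
```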
Soft-Decision Decoding
Now consider ideal soft-decision decoding in which the number of quantization levels is large.
An upper bound for the soft-decision probability of a block error can be determined using the probability of a detection error derived from the union bound (see Lecture 4B):

pe ≈ (n/2) erfc(√(d²min / 4N)). (10)

Now substitute the relationship between the minimum euclidean distance dmin and the minimum Hamming distance dmin given in (1) into (10) to yield

pe(coded) ≤ (n/2) erfc(√((2t + 1) Rc Eb/N)), (11)

(Bound on soft-decision block error rate)

where Ec = RcEb has been used, and n is the average number of nearest blocks.
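The bound (11) can be evaluated with the standard complementary error function; the operating point below (the Golay code at Eb/N = 6 dB) is chosen only for illustration:

```python
import math

def pe_soft_bound(n, t, Rc, EbN):
    # Union-bound estimate of the soft-decision block error rate, eq. (11).
    return (n / 2) * math.erfc(math.sqrt((2 * t + 1) * Rc * EbN))

EbN = 10 ** (6 / 10)                      # Eb/N of 6 dB on a linear scale
pe = pe_soft_bound(23, 3, 12 / 23, EbN)   # Golay (23,12) example
```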
Soft-decision Asymptotic Coding Gain
Examining the argument of the erfc function in (11), the asymptotic coding gain G for soft-decision decoding is

Rc(2t + 1) = Rc dmin.

For the Golay (23,12) code, dmin = 7, and the asymptotic gain is 84/23, or 5.6 dB.
For a block code that can correct a large number of errors, t ≫ 1, t + 1 ≈ t and 2t + 1 ≈ 2t.
For this type of code, the argument of the erfc function for soft-decision decoding is a factor of √2 larger than the argument for hard-decision decoding, and

pe ≈ e^(−Rc dmin Eb / N). (12)

(Approximate soft-decision block error rate)

Comparing this expression to the approximate probability of a block error for hard-decision decoding in (9), the exponent for soft-decision decoding is a factor of two larger, so soft-decision decoding has an asymptotic coding gain that is a factor of two, or 3 dB, larger.
Trellis and Tanner Graph
Linear codes are defined algebraically by matrix equations, but can be depicted graphically in several ways.
Graphs can be powerful aids for understanding and designing encoders and decoders for large codes.
The two graphical models that will be described here are the trellis graph and the Tanner graph.
These graphs will be illustrated for the code known as the (8,4,4) extended Hamming code.
This code, which is the Hamming (7,4,3) code with an additional check bit, has a minimum Hamming distance equal to four.
Trellis for a Graph
[Trellis diagram omitted: a left-to-right graph of nodes connected by branches, each branch labeled 0 or 1, with one path highlighted.]
Figure: The (8,4,4) extended Hamming code on a minimal trellis. The highlighted path is 01100110 and is one of sixteen codewords.
Trellis for a Graph
Each branch of the trellis is labeled with either a zero or a one.
There are sixteen paths through the trellis from the left node to the right node.
The sequence of labels on each such path specifies a codeword.
Every four-bit dataword is represented by a unique path, with any convenient method used to assign the sixteen four-bit datawords to the sixteen paths.
The extended (8,4,4) Hamming code has minimum distance four. There are 256 eight-bit words in total, of which the sixteen described by the trellis are codewords.
Another 128 words are at Hamming distance one from a unique codeword, and the remaining 112 words are each at Hamming distance two from two codewords.
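These counts can be verified by brute force. The sketch below (an illustration added here, not part of the lecture) uses the check matrix (13) from the Tanner-graph discussion, since the column permutation does not affect distance properties, and enumerates all 256 eight-bit words:

```python
from itertools import product

# Check matrix of the extended (8,4,4) Hamming code, eq. (13).
H = [
    [1, 1, 1, 0, 0, 1, 0, 0],
    [1, 0, 1, 1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1, 1],
]

def is_codeword(w):
    # A word is a codeword when every parity check is satisfied (H w = 0 mod 2).
    return all(sum(h * b for h, b in zip(row, w)) % 2 == 0 for row in H)

def dist(u, v):
    # Hamming distance between two words.
    return sum(a != b for a, b in zip(u, v))

words = list(product([0, 1], repeat=8))
code = [w for w in words if is_codeword(w)]

# Distance from each word to the nearest codeword.
nearest = [min(dist(w, c) for c in code) for w in words]
counts = {d: nearest.count(d) for d in set(nearest)}
# counts[0]: codewords, counts[1]: distance-one words, counts[2]: distance-two words
```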
Hard and Soft Decisions on Graphs
A hard-decision decoder, when given a senseword, sees the trellis labeled with ones and zeros as shown in Figure 3.
In effect, the decoder finds the path that agrees with the senseword in all but at most one place.
A soft-decision decoder sees the trellis labels as ±A instead of 0 and 1, and finds the path that is closest to the senseword in total squared euclidean distance.
Such a search for the best codeword could be carried out by the methods of sequential decoding, such as the Viterbi algorithm, topics discussed in the context of a convolutional code in the next lecture.
Convolutional codes are described in Section ?? on a trellis that is similar to the trellis shown in Figure 3.
Tanner Graphs
A Tanner graph is a bipartite graph, an alternative to the trellis graph, that is useful for describing some iterative algorithms.
As an example, consider the extended (8,4,4) Hamming code using a check matrix under a preferred permutation of columns given by

H = [ 1 1 1 0 0 1 0 0
      1 0 1 1 0 0 0 1
      1 0 0 0 1 1 0 1
      0 0 1 0 0 1 1 1 ], (13)

where the permutation is chosen to give the check matrix a kind of symmetry.
Because the check matrix has a symmetric structure with four ones in every row, it is described by a Tanner graph.
Tanner Graphs
[Diagram omitted: bit nodes r0 through r7 in a row, connected by edges to check nodes f1 through f4 above.]
Figure: Tanner graph for an (8,4,4) code.
Tanner Graphs
The Tanner graph displays connections rather than paths.
It consists of two rows of nodes connected by lines called graph edges.
The row of nodes, depicted by circles on the bottom, corresponds to the eight bits ci of a codeword.
These are called codebit nodes or bit nodes.
The bit nodes are labeled ri and are initially identified with the senseword components.
The nodes in the row on the top are called check nodes, with those nodes depicted as squares.
Tanner Graphs - 2
Each check node represents one row of the check matrix H.
These check nodes are labeled fj .
Each check node is connected to the four bit nodes that have ones in the corresponding row of H, so that a check node fj is connected to bit node ri if Hji = 1.
The Tanner graph and the check matrix are equivalent in that one can be developed from the other.
However, the Tanner graph makes visible the loops in the dependencies that are not distinguished in the check matrix.
Therefore, the Tanner graph is useful for understanding the structure of the dependencies in a code.
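A minimal sketch of this equivalence (the variable names are illustrative): build the edge list from the check matrix (13), then let the check nodes test whether a word satisfies every parity check.

```python
# Check matrix of the extended (8,4,4) Hamming code, eq. (13).
H = [
    [1, 1, 1, 0, 0, 1, 0, 0],
    [1, 0, 1, 1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1, 1],
]

# Tanner graph: check node f_j is joined to bit node r_i whenever H[j][i] = 1.
edges = {j: [i for i, h in enumerate(row) if h == 1]
         for j, row in enumerate(H)}

def checks_satisfied(word):
    # Each check node sums its neighboring bit nodes modulo 2.
    return all(sum(word[i] for i in nbrs) % 2 == 0 for nbrs in edges.values())

# Every check node has degree four, reflecting the four ones in each row of H.
assert all(len(nbrs) == 4 for nbrs in edges.values())
```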
Tanner Graphs - 3
The girth of a Tanner graph is the length of the shortest loop in the graph.
Section ?? studies the use of the Tanner graph to describe iterative decoding algorithms.
Note that evidence about bit node r4 of the senseword is not directly informative about decoding the correct value of bit c3.
That evidence reaches c3 only indirectly, through the bit nodes r0 and r7, and even more indirectly through other paths.
This metaphor of evidence propagating in a graph is useful for describingiterative decoding, and for other kinds of analysis.