ECE 6640 Digital Communications
Dr. Bradley J. Bazuin, Assistant Professor
Department of Electrical and Computer Engineering, College of Engineering and Applied Sciences
Chapter 8
8. Channel Coding: Part 3.1. Reed-Solomon Codes. 2. Interleaving and Concatenated Codes. 3. Coding and Interleaving Applied to the Compact Disc
Digital Audio System. 4. Turbo Codes.5. Appendix 8A. The Sum of Log-Likelihood Ratios.
Sklar’s Communications System
Notes and figures are based on or taken from materials in the course textbook: Bernard Sklar, Digital Communications, Fundamentals and Applications,
Prentice Hall PTR, Second Edition, 2001.
Reed-Solomon Codes
• Nonbinary cyclic codes with symbols consisting of m-bit sequences
  – (n, k) codes of m-bit symbols exist for all n and k with 0 < k < n < 2^m + 2
  – Convenient example: (n, k) = (2^m − 1, 2^m − 1 − 2t) for a t-symbol-error-correcting code
  – An “extended code” could use n = 2^m and become a perfect-length hexadecimal or byte-length word.
• R-S codes achieve the largest possible code minimum distance for any linear code with the same encoder input and output block lengths:
  d_min = n − k + 1
  t = ⌊(d_min − 1)/2⌋ = ⌊(n − k)/2⌋
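The parameter relationships above can be collected in a short helper. A minimal sketch (not from the slides, just the stated formulas):

```python
def rs_params(m: int, t: int):
    """Return (n, k, d_min) for a t-error-correcting RS code over GF(2^m)."""
    n = 2**m - 1          # block length in m-bit symbols
    k = n - 2 * t         # information symbols
    d_min = n - k + 1     # RS codes are MDS: d_min = n - k + 1
    assert t == (d_min - 1) // 2
    return n, k, d_min

print(rs_params(3, 2))  # the (7, 3) example from the slides
```

For the convenient (7, 3) example this gives d_min = 5, so t = 2 symbol errors are correctable.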
Comparative Advantage to Binary
• For a (7, 3) binary code:
  – 2^7 = 128 n-tuples
  – 2^3 = 8 codewords
  – 8/128 = 1/16 of the n-tuples are codewords
• For a (7, 3) R-S code with 3-bit symbols:
  – (2^3)^7 = 2^21 = 2,097,152 n-tuples
  – (2^3)^3 = 2^9 = 512 codewords
  – 2^9/2^21 = 1/2^12 = 1/4,096 of the n-tuples are codewords
• Significantly larger Hamming distances are possible!
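The density comparison can be checked with exact arithmetic; a minimal sketch of the slide’s numbers:

```python
from fractions import Fraction

# Codeword density: fraction of all n-tuples that are valid codewords.
binary_density = Fraction(2**3, 2**7)        # (7,3) binary: 8 of 128
rs_density = Fraction((2**3)**3, (2**3)**7)  # (7,3) RS over 3-bit symbols

print(binary_density, rs_density)
```

The sparser the codewords, the farther apart they can be placed, which is the point of the comparison.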
R-S Error Probability
• Useful for burst-error corrections– Numerous systems suffer from burst-errors
• Error Probability
• The bit error probability can be upper bounded by the symbol error probability for specific modulation types. For MFSK
P_E ≈ (1/(2^m − 1)) · Σ_{j = t+1}^{2^m − 1} j · C(2^m − 1, j) · p^j · (1 − p)^{(2^m − 1) − j}

P_B ≤ (2^(m−1) / (2^m − 1)) · P_E

where p is the channel symbol error probability and C(·,·) is the binomial coefficient.
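The symbol-error expression can be evaluated numerically. A minimal sketch, where p is the channel symbol error probability (variable names are mine, not from the text):

```python
import math

def rs_symbol_error_prob(m: int, t: int, p: float) -> float:
    """P_E ~= 1/(2^m - 1) * sum_{j=t+1}^{2^m-1} j*C(2^m-1, j)*p^j*(1-p)^(2^m-1-j)."""
    n = 2**m - 1
    total = sum(j * math.comb(n, j) * p**j * (1 - p)**(n - j)
                for j in range(t + 1, n + 1))
    return total / n

def mfsk_bit_error_bound(m: int, pe: float) -> float:
    """MFSK bound: P_B <= 2^(m-1)/(2^m - 1) * P_E."""
    return 2**(m - 1) / (2**m - 1) * pe
```

At small p the decoded symbol error probability falls well below p, which is the coding gain the slide is pointing at.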
Burst Errors
• Result in a series of bits or symbols being corrupted.
• Causes:
  – Signal fading (cell-phone Rayleigh fading)
  – Lightning or other “impulse noise” (radar, switches, etc.)
  – Rapid transients
  – CD/DVD damage
• See Wikipedia for references: http://en.wikipedia.org/wiki/Burst_error
• Note that for R-S codes, the t corrections are for symbols, not just bits; t = 4 therefore corrects any burst confined to 4 sequential m-bit symbols (a bit burst of length 3m + 1 always falls within at most 4 symbols).
R-S and Finite Fields
• R-S codes use generator polynomials
  – Encoding may be done in a systematic form
  – Operations (addition, subtraction, multiplication, and division) must be defined for the m-bit symbol system.
• Galois Fields (GF) allow operations to be readily defined
R-S Encoding/Decoding
• Done similarly to binary cyclic codes
  – GF math performed for multiplication and addition of the feedback polynomial
• U(X) = m(X) · g(X), with p(X) parity computed
• Syndrome computation performed
• Errors detected and corrected, but with higher complexity (a binary error calls for flipping a bit; what about an m-bit symbol?)
  – r(X) = U(X) + e(X)
  – Must determine both the error location and the error value …
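The m-bit symbol arithmetic can be illustrated for GF(2^3), the field behind the (7, k) codes above. A minimal sketch, assuming the primitive polynomial x^3 + x + 1 (one common choice; the slides do not specify it):

```python
# GF(2^3) arithmetic. Elements are 3-bit integers 0..7; addition is XOR.
# Assumed field polynomial: x^3 + x + 1 (binary 1011).

def gf8_add(a: int, b: int) -> int:
    return a ^ b  # polynomial addition over GF(2) is bitwise XOR

def gf8_mul(a: int, b: int, poly: int = 0b1011) -> int:
    """Shift-and-XOR (carryless) multiply, reduced modulo the field polynomial."""
    result = 0
    while b:
        if b & 1:
            result ^= a       # accumulate the current multiple of a
        b >>= 1
        a <<= 1
        if a & 0b1000:        # degree reached 3: reduce
            a ^= poly
    return result

print(gf8_mul(2, 4))  # alpha * alpha^2 = alpha^3 = alpha + 1 = 3
```

With this polynomial, alpha = 2 generates all seven nonzero field elements, which is what makes the symbol arithmetic well defined.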
Reed-Solomon Summary
• Widely used in data storage and communications protocols
• You may need to know more in the future (systems you work with may use it)
7.11 Reed-Solomon Codes
• Reed-Solomon codes are a special class of nonbinary BCH codes that were first introduced by Reed and Solomon.
• A good overview can be found at:
  – http://www.cs.cmu.edu/~guyb/realworld/reedsolomon/reed_solomon_codes.html
• Matlab Information
  – http://www.mathworks.com/help/comm/ug/error-detection-and-correction.html#bsxtjo1
John G. Proakis, Digital Communications, 5th ed., McGraw-Hill, 2008. ISBN: 978-0-07-295716-6.
Reed-Solomon Code Options
• m = 3
  – (7, 5) 3-bit symbols, t = 1
  – (7, 3) 3-bit symbols, t = 2
• m = 4
  – (15, 13) 4-bit symbols, t = 1
  – (15, 11) 4-bit symbols, t = 2
  – (15, 9) 4-bit symbols, t = 3
  – (15, 7) 4-bit symbols, t = 4
  – (15, 5) 4-bit symbols, t = 5
• Byte-wide coding: m = 8
  – (255, 239) 8-bit symbols, t = 8
  – (255, 223) 8-bit symbols, t = 16
Note: The symbols may be transmitted as m-ary elements (i.e., m = 3 with 8-PSK or m = 4 with 16-QAM).
t represents m-bit “symbol” error corrections.
Example 7.11-2
Interleaving
• Convolutional codes are suitable for memoryless channels with random error events.
• Some errors have a bursty nature:
  – Statistical dependence among successive error events (time correlation) due to channel memory, e.g., errors in multipath fading channels in wireless communications, or errors due to switching noise, …
• “Interleaving” makes the channel look memoryless at the decoder.
Digital Communications I: Modulation and Coding Course, Period 3 – 2006, Sorour Falahati, Lecture 13
Interleaving …
• Interleaving is done by spreading the coded symbols in time before transmission.
• The reverse is done at the receiver by deinterleaving the received sequence.
• Interleaving makes bursty errors look random, so convolutional codes can be used.
• Types of interleaving:
  – Block interleaving
  – Convolutional or cross interleaving
Interleaving …
• Consider a code with t = 1 and codewords of 3 coded bits. A burst error of length 3 cannot be corrected.
• Let us use a 3×3 block interleaver:
  – Codewords before interleaving: A1 A2 A3 | B1 B2 B3 | C1 C2 C3
  – Transmitted (interleaved) order: A1 B1 C1 A2 B2 C2 A3 B3 C3
  – A channel burst of length 3 corrupts three consecutive transmitted symbols (e.g., A2 B2 C2).
  – After deinterleaving: A1 A2 A3 | B1 B2 B3 | C1 C2 C3, with 1 error per codeword — each correctable by the t = 1 code.
A Block Interleaver
• A block interleaver formats the encoded data in a rectangular array of m rows and n columns. Usually, each row of the array constitutes a codeword of length n. An interleaver of degree m consists of m rows (m codewords) as illustrated in Figure 7.12–2.
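A block interleaver of this kind can be sketched in a few lines: write the codewords in by rows, read out by columns, and invert at the receiver. A minimal sketch (function names are mine):

```python
# m-row by n-column block interleaver: write by rows, read by columns.
# A burst of up to m channel symbols lands in m different rows
# (codewords) after deinterleaving.

def interleave(symbols, m, n):
    assert len(symbols) == m * n
    return [symbols[r * n + c] for c in range(n) for r in range(m)]

def deinterleave(symbols, m, n):
    assert len(symbols) == m * n
    return [symbols[c * m + r] for r in range(m) for c in range(n)]
```

Running the 3×3 case reproduces the transmit order A1 B1 C1 A2 B2 C2 A3 B3 C3 from the example above.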
Convolutional Interleaving
• A simple banked switching and delay structure can be used, as proposed by Ramsey and Forney.
  – Interleave after encoding, prior to transmission
  – Deinterleave after reception, prior to decoding
Forney Reference
• Forney, G., Jr., “Burst-Correcting Codes for the Classic Bursty Channel,” IEEE Transactions on Communication Technology, vol. 19, no. 5, pp. 772–781, October 1971.
Convolutional Example
• Data fills the commutator registers
• Output sequence (in repeating blocks of 16):
  – 1 14 11 8
  – 5 2 15 12
  – 9 6 3 16
  – 13 10 7 4
  – (pattern repeats)
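One delay assignment consistent with the sequence above is a 4-branch Forney-style interleaver in which branch i delays its symbol by 4·i slots. A sketch that reproduces the repeating block in steady state (the branch/delay model is my reading of the figure, not stated on the slide):

```python
# Steady-state output of a 4-branch convolutional interleaver with
# periodic input 1..16: output slot n carries the input from
# 4*((n-1) % 4) positions earlier, wrapped modulo the period.

def conv_interleave_steady(period: int = 16, branches: int = 4):
    out = []
    for n in range(1, period + 1):
        delay = branches * ((n - 1) % branches)
        out.append((n - delay - 1) % period + 1)
    return out

print(conv_interleave_steady())
```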
Proakis 7.13 Combining Codes
• The problem, however, is that the decoding complexity of a block code generally grows with the block length, and this growth is, in general, exponential. Improved performance through longer block codes is therefore achieved at the cost of increased decoding complexity.
• One approach to design block codes with long block lengths and with manageable complexity is to begin with two or more simple codes with short block lengths and combine them in a certain way to obtain codes with longer block length that have better distance properties.
• Then some kind of suboptimal decoding can be applied to the combined code based on the decoding algorithms of the simple constituent codes.
– Product Codes– Concatenated Codes
Product Codes
• A simple method of combining two or more codes is described in this section. Let us assume we have two systematic linear block codes; code Ci is an (ni , ki ) code with minimum distance dmin i for i = 1, 2. The product of these codes is an (n1n2, k1k2) linear block code whose bits are arranged in a matrix form as shown in Figure 7.13–1.
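The product-code parameters can be sketched directly; the d_min1 · d_min2 minimum distance is the standard result for product codes (it is not stated explicitly above):

```python
# Parameters of a product code built from two systematic linear block
# codes (n_i, k_i) with minimum distances d_i: the product is
# (n1*n2, k1*k2) with minimum distance d1*d2.

def product_code_params(c1, c2):
    (n1, k1, d1), (n2, k2, d2) = c1, c2
    return (n1 * n2, k1 * k2, d1 * d2)

# e.g., the product of two (7, 4) Hamming codes (d_min = 3 each):
print(product_code_params((7, 4, 3), (7, 4, 3)))
```

Two short, easily decoded codes thus yield a long (49, 16) code with d_min = 9, which is the complexity trade the paragraph describes.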
Concatenated codes
• A concatenated code uses two levels of coding: an inner code and an outer code (the outer at higher rate).
  – Popular concatenated codes: convolutional codes with Viterbi decoding as the inner code and Reed-Solomon codes as the outer code
• The purpose is to reduce the overall complexity while achieving the required error performance.
• Typical signal chain:
  Input data → Outer encoder → Interleaver → Inner encoder → Modulate → Channel → Demodulate → Inner decoder → Deinterleaver → Outer decoder → Output data
Practical example: Compact Disc
• The channel in a CD playback system consists of a transmitting laser, a recorded disc, and a photodetector.
• Sources of errors are manufacturing defects, fingerprints, or scratches.
• Errors have a bursty-like nature.
• Error correction and concealment is done using a concatenated error-control scheme called the Cross-Interleave Reed-Solomon Code (CIRC).
“Without error correcting codes, digital audio would not be technically feasible.”
CD CIRC Specifications
• Maximum correctable burst length: 4000 bits (2.5 mm track length)
• Maximum interpolatable burst length: 12,000 bits (8 mm)
• Sample interpolation rate: one sample every 10 hours at P_B = 10^-4; 1000 samples/min at P_B = 10^-3
• Undetected error samples (clicks): less than one every 750 hours at P_B = 10^-3; negligible at P_B = 10^-4
• New discs are characterized by P_B = 10^-4.
Compact disc – cont’d
• CIRC encoder: interleave → C2 encode → D* interleave → C1 encode → D interleave
• CIRC decoder: D deinterleave → C1 decode → D* deinterleave → C2 decode → deinterleave
CD Encoder Process
• Input: 16-bit left + 16-bit right audio samples grouped into a 24-byte frame; RS codes use 8-bit symbols.
• C2: RS(255, 251) with 24 symbols used and 227 unused — equivalent to a shortened RS(28, 24).
• C1: RS(255, 251) with 28 symbols used and 223 unused — equivalent to a shortened RS(32, 28).
• Overall rate: 3/4
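The overall-rate figure follows from multiplying the two shortened-code rates; a quick check:

```python
from fractions import Fraction

# CIRC concatenates shortened RS codes C2 = (28, 24) and C1 = (32, 28);
# the overall code rate is the product of the individual rates.
rate_c2 = Fraction(24, 28)
rate_c1 = Fraction(28, 32)
overall = rate_c2 * rate_c1
print(overall)  # 3/4
```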
CD Decoder Process
Advanced Topic: Turbo Codes
• Concatenated coding scheme for achieving large coding gains
  – Combine two or more relatively simple building blocks or component codes, often combined with interleaving.
  – For example: a Reed-Solomon outer code with a convolutional inner code
• May use soft decisions in the first decoder to pass to the next decoder. Multiple iterations of decoding may be used to improve decisions!
• A popular topic for research, publications, and applications.
Turbo Code MATLAB
• I have been trying to run a simulation ….
  – Reed-Solomon examples
  – Turbo code examples
Turbo Code Performance
• The decoding operation can be performed for multiple iterations.
• There is a degree of improvement as shown.
MATLAB Simulations
[Figure: BER vs. Eb/N0 (dB) from −0.5 to 4.5 dB, BER from 10^0 down to 10^-8 — LTE Turbo-Coding, N = 2048, with 1 iteration (left) and 2 iterations (right)]
MATLAB Simulations
[Figure: BER vs. Eb/N0 (dB) from −0.5 to 4.5 dB, BER from 10^0 down to 10^-8 — LTE Turbo-Coding, N = 2048, with 3 iterations (left) and 4 iterations (right)]
Section 8.9 Turbo Codes
• The construction and decoding of concatenated codes with interleaving, using convolutional codes.
• Parallel concatenated convolutional codes (PCCCs) with interleaving, also called turbo codes, were introduced by Berrou et al. (1993) and Berrou and Glavieux (1996).
• A basic turbo encoder, shown in Figure 8.9–1, employs two recursive systematic convolutional (RSC) encoders in parallel, where the second encoder is preceded by an interleaver.
Turbo Coding
• We observe that the nominal rate at the output of the turbo encoder is Rc = 1/3.
• As in the case of concatenated block codes, the interleaver is usually selected to be a block pseudorandom interleaver that reorders the bits in the information sequence before feeding them to the second encoder.
• In effect, as will be shown later, the use of two recursive convolutional encoders in conjunction with the interleaver produces a code that contains very few codewords of low weight.
  – The use of the interleaver in conjunction with the two encoders results in codewords that have relatively few nearest neighbors; that is, the codewords are relatively sparse.
A Recursive Systematic Convolutional Encoder (RSC)
• EXAMPLE 8.9–1:
  – A (31, 27) RSC encoder is represented by g1 = (11001) and g2 = (10111), corresponding to
    g1(D) = 1 + D + D^4
    g2(D) = 1 + D^2 + D^3 + D^4
  – The encoder is given by the block diagram shown in Figure 8.9–2.
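The Example 8.9–1 encoder can be sketched in software. A minimal sketch, assuming (the slide does not spell out this convention) that g1(D) is the feedback polynomial and g2(D) the feedforward polynomial:

```python
# RSC encoder sketch with feedback g1(D) = 1 + D + D^4 and
# feedforward g2(D) = 1 + D^2 + D^3 + D^4. Emits (systematic, parity)
# bit pairs, so the rate is 1/2.

def rsc_encode(bits):
    s = [0, 0, 0, 0]                # shift register contents s1..s4
    out = []
    for u in bits:
        a = u ^ s[0] ^ s[3]         # feedback taps: D and D^4
        p = a ^ s[1] ^ s[2] ^ s[3]  # feedforward taps: 1, D^2, D^3, D^4
        out.append((u, p))          # systematic output plus parity
        s = [a] + s[:3]             # shift the register
    return out

print(rsc_encode([1, 0, 0, 0]))
```

The systematic stream always equals the input, while the recursive feedback gives a single 1 an infinite impulse response, the property turbo codes exploit.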
Performance Bounds
• Turbo codes are two recursive systematic convolutional codes concatenated by an interleaver.
• Although the constituent codes are linear and time-invariant, the interleaver, while linear, is not time-invariant.
• The trellis of the resulting linear but time-varying finite-state machine has a huge number of states that makes maximum-likelihood decoding hopeless.
• Therefore the text offers a “union bound” approach but refers readers to other papers.
Iterative Decoding
• A suboptimal iterative decoding algorithm, known as the turbo decoding algorithm, was proposed by Berrou et al. (1993) which achieves excellent performance very close to the theoretical bound predicted by Shannon.
• The turbo decoding algorithm is based on iterative use of the Log-APP or the Max-Log-APP algorithm (APP: a posteriori probability), a BCJR simplification described on p. 546.
• A soft-input soft-output decoder is used that allows multiple iterations to be performed.
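The soft values exchanged between the decoders are log-likelihood ratios; Appendix 8A’s “sum of LLRs” is, in standard form, the boxplus operation, with the Max-Log variant as its common simplification. A sketch using the standard formulas (these are textbook identities, not taken from the slides):

```python
import math

# L1 [+] L2 = 2*atanh(tanh(L1/2)*tanh(L2/2))
#          ~= sign(L1)*sign(L2)*min(|L1|, |L2|)   (Max-Log approximation)

def boxplus(l1: float, l2: float) -> float:
    return 2.0 * math.atanh(math.tanh(l1 / 2.0) * math.tanh(l2 / 2.0))

def boxplus_maxlog(l1: float, l2: float) -> float:
    return math.copysign(1.0, l1) * math.copysign(1.0, l2) * min(abs(l1), abs(l2))
```

The combined reliability is always smaller in magnitude than the weaker of the two inputs, which is why the min approximation works well.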
Decoder Performance
It is seen from these plots that three regions are distinguishable.
• In the low-SNR region, the error probability changes very slowly as a function of Eb/N0 and the number of iterations.
• For moderate SNRs, the error probability drops rapidly with increasing Eb/N0, and over many iterations Pb decreases consistently. This region is called the waterfall region or the turbo cliff region.
• Finally, for moderately large Eb/N0 values, the code exhibits an error floor, which is typically reached within a few iterations.
• As discussed before, the error floor effect in turbo codes is due to their low minimum distance.
Drawback and Summary
• The major drawback with decoding turbo codes with large interleavers is the decoding delay and the computational complexity inherent in the iterative decoding algorithm.
• In most data communication systems, however, the decoding delay is tolerable, and the additional computational complexity is usually justified by the significant coding gain that is achieved by the turbo code.
References
• http://www.eg.bucknell.edu/~kozick/elec47601/notes.html
• Digital Communications I: Modulation and Coding Course, Period 3 – 2006, Sorour Falahati, Lecture 13
• A Tutorial on Convolutional Coding with Viterbi Decoding, by Chip Fleming of Spectrum Applications
  – http://home.netcom.com/~chip.f/viterbi/tutorial.html
• Robert Morelos-Zaragoza, The Error Correcting Codes (ECC) Page
  – http://www.eccpage.com/
• Matthew C. Valenti, Center for Identification Technology Research (CITeR), West Virginia University
  – http://www.csee.wvu.edu/~mvalenti/turbo.html