
Error Control Coding

Transcript
  • 3F4 Error Control Coding
    Dr. I. J. Wassell

  • Introduction
    Error Control Coding (ECC)
    Extra bits are added to the data at the transmitter (redundancy) to permit error detection or correction at the receiver
    This is done to prevent the output of erroneous bits despite noise and other imperfections in the channel
    The positions of the error control coding and decoding are shown in the transmission model

  • Transmission Model

  • Error Models
    Binary Symmetric Memoryless Channel
    Assumes transmitted symbols are binary
    Errors affect 0s and 1s with equal probability (i.e., symmetric)
    Errors occur randomly and are independent from bit to bit (memoryless)
    (Diagram: channel transition probabilities are P(out = in) = 1 - p and P(out != in) = p)
    p is the probability of bit error, i.e., the Bit Error Rate (BER) of the channel
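The memoryless BSC model above is straightforward to simulate. A minimal Python sketch (the helper name `bsc` and the fixed seed are illustrative assumptions, not from the notes):

```python
import random

def bsc(bits, p, seed=42):
    """Binary symmetric channel: flip each bit independently with
    probability p (the channel BER); 0s and 1s are affected equally."""
    rng = random.Random(seed)
    return [b ^ (rng.random() < p) for b in bits]

sent = [0, 1, 1, 0, 1] * 2000            # 10,000 bits
received = bsc(sent, p=0.01)
errors = sum(s != r for s, r in zip(sent, received))
print(errors / len(sent))                # empirical BER, close to p = 0.01
```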

  • Error Models
    Many other types exist:
    Burst errors, i.e., contiguous bursts of bit errors
      e.g., output from a DFE (error propagation); common in radio channels
    Insertion, deletion and transposition errors
    We will consider mainly random errors

  • Error Control Techniques
    Error detection in a block of data
    The receiver can then request a retransmission, known as automatic repeat request (ARQ), for data that must be received correctly
    Appropriate for:
      Low delay channels
      Channels with a return path
    Not appropriate for delay sensitive data, e.g., real time speech

  • Error Control Techniques
    Forward Error Correction (FEC)
    Coding designed so that errors can be corrected at the receiver
    Appropriate for delay sensitive and one-way (e.g., broadcast TV) transmission of data
    Two main types, namely block codes and convolutional codes; we will only look at block codes

  • Block Codes
    We will consider only binary data
    Data is grouped into blocks of length k bits (datawords)
    Each dataword is coded into a block of length n bits (a codeword), where in general n > k
    This is known as an (n,k) block code

  • Block Codes
    A vector notation is used for the datawords and codewords:
      Dataword d = (d1 d2 ... dk)
      Codeword c = (c1 c2 ... cn)
    The redundancy introduced by the code is quantified by the code rate:
      Code rate = k/n
    i.e., the higher the redundancy, the lower the code rate

  • Block Code - Example
    Dataword length k = 4
    Codeword length n = 7
    This is a (7,4) block code with code rate = 4/7
    For example, d = (1101), c = (1101001)

  • Error Control Process
    (Diagram: source coded data is chopped into datawords of k bits; the encoder maps each dataword to a codeword of n bits; the channel delivers the codeword plus possible errors; the decoder outputs the dataword of k bits plus error flags)

  • Error Control Process
    The decoder gives corrected data
    It may also give error flags to:
      Indicate the reliability of the decoded data
      Help with schemes employing multiple layers of error correction

  • Parity Codes
    Example of a simple block code: the Single Parity Check Code
    In this case, n = k + 1, i.e., the codeword is the dataword with one additional bit
    For even parity the additional bit is
      q = d1 + d2 + ... + dk (mod 2)
    For odd parity the additional bit is 1 - q
    That is, the additional bit ensures that there is an even or odd number of 1s in the codeword

  • Parity Codes Example 1
    Even parity:
      d = (10110) so c = (101101)
      d = (11011) so c = (110110)
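The parity rule above can be expressed in a few lines; this sketch (the helper name `parity_encode` is illustrative) reproduces both example codewords:

```python
def parity_encode(dataword, even=True):
    """Single parity check code: n = k + 1. Append one bit so the
    codeword has an even (or odd) number of 1s."""
    q = sum(dataword) % 2                 # mod-2 sum of the data bits
    return dataword + [q if even else 1 - q]

print(parity_encode([1, 0, 1, 1, 0]))     # -> [1, 0, 1, 1, 0, 1]
print(parity_encode([1, 1, 0, 1, 1]))     # -> [1, 1, 0, 1, 1, 0]
```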

  • Parity Codes Example 2
    Coding table for the (4,3) even parity code:
      Dataword  Codeword
      000       0000
      001       0011
      010       0101
      011       0110
      100       1001
      101       1010
      110       1100
      111       1111

  • Parity Codes
    To decode:
    Calculate the sum of the received bits in the block (mod 2)
    If the sum is 0 for even parity (1 for odd parity) then the dataword is the first k bits of the received codeword
    Otherwise an error is flagged
    The code can detect single errors
    But it cannot correct them, since the error could be in any bit
    For example, if the received codeword is (100000), the transmitted codeword could have been (000000) or (110000), with the error in the first or second place respectively
    Note the error could also lie in other positions, including the parity bit

  • Parity Codes
    Known as a single error detecting (SED) code
    Only useful if the probability of getting 2 errors is small, since an even number of errors makes the parity correct again
    Used in serial communications
    Low overhead but not very powerful
    The decoder can be implemented efficiently using a tree of XOR gates
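The decoding rule (mod-2 sum of the received block) can be sketched as follows; `parity_check` is an illustrative name, and in hardware the same sum is just the XOR tree mentioned above:

```python
def parity_check(codeword, even=True):
    """Decode a single parity check code: return (dataword, error_flag).
    The mod-2 sum of all received bits should be 0 (even) or 1 (odd)."""
    s = sum(codeword) % 2
    error = (s != 0) if even else (s != 1)
    return codeword[:-1], error

print(parity_check([1, 0, 1, 1, 0, 1]))   # -> ([1, 0, 1, 1, 0], False)
print(parity_check([1, 0, 0, 1, 0, 1]))   # one error -> ([1, 0, 0, 1, 0], True)
```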

  • Hamming Distance
    Error control capability is determined by the Hamming distance
    The Hamming distance between two codewords is equal to the number of differences between them, e.g.,
      10011011
      11010010 have a Hamming distance of 3
    Alternatively, it can be computed by adding the codewords (mod 2):
      10011011 + 11010010 = 01001001 (now count up the ones)
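Both views of the distance (counting differences, or adding mod 2 and counting ones) are captured by this small sketch:

```python
def hamming_distance(a, b):
    """Add the codewords mod 2 and count the 1s, which equals the
    number of positions in which a and b differ."""
    return sum((x + y) % 2 for x, y in zip(a, b))

c1 = [1, 0, 0, 1, 1, 0, 1, 1]             # 10011011
c2 = [1, 1, 0, 1, 0, 0, 1, 0]             # 11010010
print(hamming_distance(c1, c2))           # -> 3
```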

  • Hamming Distance
    The Hamming distance of a code is equal to the minimum Hamming distance between two codewords
    If the Hamming distance is 1: no error control capability; i.e., a single error in a received codeword yields another valid codeword
    (Diagram: X marks a valid codeword)
    Note that this representation is diagrammatic only
    In reality each codeword is surrounded by n codewords, that is, one for every bit that could be changed

  • Hamming Distance
    If the Hamming distance is 2: can detect single errors (SED); i.e., a single error will yield an invalid codeword
    (Diagram: X is a valid codeword, O is not a valid codeword)
    See that 2 errors will yield a valid (but incorrect) codeword

  • Hamming Distance
    If the Hamming distance is 3: can correct single errors (SEC) or can detect double errors (DED)
    (Diagram: X is a valid codeword, O is not a valid codeword)
    See that 3 errors will yield a valid but incorrect codeword

  • Hamming Distance - Example
    Hamming distance 3 code, i.e., SEC/DED
    Alternatively, it can perform single error correction (SEC)
    (Diagram: X is a valid codeword, O is an invalid codeword)

  • Hamming Distance
    The maximum number of detectable errors is
      dmin - 1

    That is, the maximum number of correctable errors is given by
      t = floor((dmin - 1)/2)

    where dmin is the minimum Hamming distance between 2 codewords and floor(x) means the largest integer less than or equal to x
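These two formulas can be checked numerically; the function name below is illustrative:

```python
def error_control_capability(d_min):
    """Detectable errors: d_min - 1.
    Correctable errors: t = floor((d_min - 1) / 2)."""
    return d_min - 1, (d_min - 1) // 2

print(error_control_capability(3))        # -> (2, 1): DED or SEC
print(error_control_capability(7))        # -> (6, 3): e.g., the (23,12) Golay code
```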

  • Linear Block Codes
    As seen from the second parity code example, it is possible to use a table to hold all the codewords for a code and to look up the appropriate codeword based on the supplied dataword
    Alternatively, it is possible to create codewords by addition of other codewords. This has the advantage that there is no longer a need to hold every possible codeword in the table

  • Linear Block Codes
    If there are k data bits, all that is required is to hold k linearly independent codewords, i.e., a set of k codewords none of which can be produced by linear combinations of 2 or more codewords in the set
    The easiest way to find k linearly independent codewords is to choose those which have a 1 in just one of the first k positions and 0 in the other k-1 of the first k positions

  • Linear Block Codes
    For example, for a (7,4) code, only four codewords are required, e.g.,
      1000011
      0100101
      0010110
      0001111
    So, to obtain the codeword for dataword 1011, the first, third and fourth codewords in the list are added together (mod 2), giving 1011010
    This process will now be described in more detail

  • Linear Block Codes
    An (n,k) block code has code vectors
      d = (d1 d2 ... dk) and
      c = (c1 c2 ... cn)
    The block coding process can be written as c = dG
    where G is the Generator Matrix

  • Linear Block Codes
    Thus,
      c = d1*a1 + d2*a2 + ... + dk*ak (mod 2)
    where the ai are the rows of G
    The ai must be linearly independent: since codewords are given by summations of the ai vectors, then to avoid 2 datawords having the same codeword the ai vectors must be linearly independent
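The encoding c = dG is a mod-2 matrix product. In the sketch below the rows of G are a reconstruction chosen to match the worked examples in these notes (d = 1101 gives c = 1101001, and d = 1011 gives 1011010); treat the specific matrix as an assumption:

```python
G = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(d, G):
    """c = dG (mod 2): sum (mod 2) the rows of G selected by the 1s in d."""
    n = len(G[0])
    return [sum(d[i] * G[i][j] for i in range(len(d))) % 2 for j in range(n)]

print(encode([1, 1, 0, 1], G))            # -> [1, 1, 0, 1, 0, 0, 1]
print(encode([1, 0, 1, 1], G))            # -> [1, 0, 1, 1, 0, 1, 0]
```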

  • Linear Block Codes
    The sum (mod 2) of any 2 codewords is also a codeword:
    since for datawords d1 and d2 we have
      c1 = d1G and c2 = d2G
    So,
      c1 + c2 = d1G + d2G = (d1 + d2)G
    and d1 + d2 is itself a dataword, so c1 + c2 is a codeword

  • Linear Block Codes
    0 is always a codeword: since all zeros is a dataword,
      c = 0G = 0

  • Error Correcting Power of LBC
    The Hamming distance of a linear block code (LBC) is simply the minimum Hamming weight (number of 1s, or equivalently the distance from the all-0 codeword) of the non-zero codewords
    Note d(c1,c2) = w(c1 + c2), as shown previously
    For an LBC, c1 + c2 = c3, another codeword
    So min(d(c1,c2)) = min(w(c1 + c2)) = min(w(c3))
    Therefore, to find the minimum Hamming distance we just need to search among the 2^k codewords for the minimum Hamming weight, which is far simpler than doing a pairwise check over all possible codeword pairs

  • Linear Block Codes example 1
    For example, a (4,2) code. Suppose
      a1 = [1011]
      a2 = [0101]
    For d = [1 1], then
      c = a1 + a2 = [1110]

  • Linear Block Codes example 2
    A (6,5) code with
      G = [I5 | 1]
    (the 5*5 identity matrix with a column of 1s appended) is an even single parity check code

  • Systematic Codes
    For a systematic block code the dataword appears unaltered in the codeword, usually at the start
    The generator matrix has the structure
      G = [I | P]
    where I is k columns wide and P is R = n - k columns wide
    P is often referred to as the parity bits

  • Systematic Codes
    I is the k*k identity matrix; it ensures the dataword appears at the beginning of the codeword
    P is a k*R matrix

  • Decoding Linear Codes
    One possibility is a ROM look-up table
    In this case the received codeword is used as an address
    Example: even single parity check code
      Address   Data
      000000    0
      000001    1
      000010    1
      000011    0
      ...
    The data output is the error flag, i.e., 0 means the codeword is OK
    If no error, the dataword is the first k bits of the codeword
    For an error correcting code the ROM can also store datawords

  • Decoding Linear Codes
    Another possibility is algebraic decoding, i.e., the error flag is computed from the received codeword (as in the case of simple parity codes)
    How can this method be extended to more complex error detection and correction codes?

  • Parity Check Matrix
    A linear block code is a linear subspace Ssub of the space S of all length-n vectors
    Consider the subset Snull of all length-n vectors in S that are orthogonal to all vectors in Ssub
    It can be shown that the dimensionality of Snull is n - k, where n is the dimensionality of S and k is the dimensionality of Ssub
    It can also be shown that Snull is a valid subspace of S, and consequently Ssub is also the null space of Snull

  • Parity Check Matrix
    Snull can be represented by its basis vectors; collected as rows, these form the matrix H, the generator matrix for Snull, of dimension n - k = R
    This matrix is called the parity check matrix of the code defined by G, where G is the generator matrix for Ssub, of dimension k
    Note that the number of vectors in the basis defines the dimension of the subspace

  • Parity Check Matrix
    So the dimension of H is n - k (= R) and all vectors in the null space are orthogonal to all the vectors of the code
    Since the rows of H, namely the vectors bi, are members of the null space, they are orthogonal to any code vector
    So a vector y is a codeword only if yHT = 0
    Note that a linear block code can be specified by either G or H

  • Parity Check Matrix
    So H is used to check if a codeword is valid:
      cHT = 0, where HT has dimensions n * R (R = n - k)
    The rows of H, namely bi, are chosen to be orthogonal to the rows of G, namely ai
    Consequently the dot product of any valid codeword with any bi is zero

  • Parity Check Matrix
    This is so since c = dG, and so
      cHT = dGHT = 0
    because every row of G is orthogonal to every row of H (GHT = 0)
    This means that a codeword is valid (but not necessarily correct) only if cHT = 0. To ensure this it is required that the rows of H are independent and are orthogonal to the rows of G
    That is, the bi span the remaining R (= n - k) dimensions of the codespace

  • Parity Check Matrix
    For example, consider a (3,2) code. In this case G has 2 rows, a1 and a2
    Consequently all valid codewords sit in the subspace (in this case a plane) spanned by a1 and a2
    In this example the H matrix has only one row, namely b1. This vector is orthogonal to the plane containing the rows of the G matrix, i.e., a1 and a2
    Any received codeword which is not in the plane containing a1 and a2 (i.e., an invalid codeword) will thus have a component in the direction of b1, yielding a non-zero dot product between itself and b1

  • Parity Check Matrix
    Similarly, any received codeword which is in the plane containing a1 and a2 (i.e., a valid codeword) will have no component in the direction of b1, yielding a zero dot product between itself and b1

  • Error Syndrome
    For error correcting codes we need a method to compute the required correction
    To do this we use the error syndrome s of a received codeword cr:
      s = crHT
    If cr is corrupted by the addition of an error vector e, then cr = c + e and
      s = (c + e)HT = cHT + eHT = 0 + eHT = eHT
    The syndrome depends only on the error

  • Error Syndrome
    That is, we can add the same error pattern to different codewords and get the same syndrome
    There are 2^(n-k) syndromes but 2^n error patterns
    For example, for a (3,2) code there are 2 syndromes and 8 error patterns; clearly no error correction is possible in this case
    Another example: a (7,4) code has 8 syndromes and 128 error patterns
    With 8 syndromes we can provide a different value to indicate single errors in any of the 7 bit positions, as well as the zero value to indicate no errors
    We now need to determine which error pattern caused the syndrome

  • Error Syndrome
    For systematic linear block codes, H is constructed as follows:
      G = [I | P] and so H = [-PT | I] = [PT | I] (since -1 = 1 in mod-2 arithmetic)
    where I is the k*k identity for G and the R*R identity for H
    Example: (7,4) code, dmin = 3

  • Error Syndrome - Example
    For a correctly received codeword cr = [1101001],
      s = crHT = [000]
    In this case the zero syndrome indicates no error

  • Error Syndrome - Example
    For the same codeword, this time with an error in the last bit position, i.e., cr = [1101000]:
      s = crHT = [001]
    The syndrome 001 matches the seventh column of H, indicating an error in bit 7 of the codeword
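The syndrome computation s = crHT for this example can be sketched as below. The rows of H are an assumed [PT | I] construction consistent with the (7,4) worked examples in these notes, so treat the specific matrix as a reconstruction:

```python
H = [
    [0, 1, 1, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
]

def syndrome(cr, H):
    """Each syndrome bit is the mod-2 dot product of cr with a row of H."""
    return [sum(c * h for c, h in zip(cr, row)) % 2 for row in H]

print(syndrome([1, 1, 0, 1, 0, 0, 1], H))  # valid codeword -> [0, 0, 0]
print(syndrome([1, 1, 0, 1, 0, 0, 0], H))  # last bit in error -> [0, 0, 1]
```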

  • Comments about H
    The minimum distance of the code is equal to the minimum number of (non-zero) columns of H which sum to zero
    We can express
      crHT = cr1*d0 + cr2*d1 + ... + crn*d(n-1)
    where d0, d1, ..., d(n-1) are the column vectors of H
    Clearly crHT is a linear combination of the columns of H

  • Comments about H
    For a codeword with weight w (i.e., w ones), crHT is a linear combination of w columns of H
    Thus we have a one-to-one mapping between weight-w codewords and linear combinations of w columns of H
    The minimum number of columns of H summing to zero therefore equals the minimum weight w over the non-zero codewords (those satisfying crHT = 0), and so dmin = min(w)

  • Comments about H
    For the example code, a codeword with minimum weight (dmin = 3) is given by the first row of G, i.e., [1000011]
    Now form the linear combination of the first and last 2 columns in H, i.e., [011] + [010] + [001] = 0
    So a minimum of 3 columns (= dmin) is needed to get a zero value of cHT in this example

  • Standard Array
    From the standard array we can find the most likely transmitted codeword for a particular received codeword, without the decoder needing a look-up table containing every possible received codeword in the standard array
    Not surprisingly, it makes use of syndromes

  • Standard Array
    The Standard Array is constructed as follows,

      0 (= c1)    c2            c3           ...  c2^k           s0 = 0
      e1          c2 + e1       c3 + e1      ...  c2^k + e1      s1
      e2          c2 + e2       c3 + e2      ...  c2^k + e2      s2
      ...
      e(2^R - 1)  c2 + e(2^R-1) ...          ...  c2^k + e(2^R-1)  s(2^R - 1)

    All patterns in a row have the same syndrome
    Different rows have distinct syndromes
    The array has 2^k columns (i.e., equal to the number of valid codewords) and 2^R rows (i.e., the number of syndromes)

  • Standard Array
    The standard array is formed by initially choosing ei to be:
      all 1 bit error patterns, then
      all 2 bit error patterns, and so on
    Ensure that each error pattern not already in the array has a new syndrome. Stop when all syndromes are used

  • Standard Array
    Imagine that the received codeword (cr) is c2 + e3 (shown in bold in the standard array)
    The most likely codeword is the one at the head of the column containing c2 + e3
    The corresponding error pattern is the one at the beginning of the row containing c2 + e3
    So in theory we could implement a look-up table (in a ROM) which maps every codeword in the array to the most likely codeword (i.e., the one at the head of the column containing the received codeword)
    This could be quite a large table, so a simpler way is to use syndromes

  • Standard ArrayThis block diagram shows the proposed implementation

  • Standard Array
    For the same received codeword c2 + e3, note that the unique syndrome is s3
    This syndrome identifies e3 as the corresponding error pattern
    So we calculate the syndrome as described previously, i.e., s = crHT
    All we need now is a relatively small table which associates each s with its respective error pattern; in the example, s3 will yield e3
    Finally we subtract (or equivalently, in modulo-2 arithmetic, add) e3 from the received codeword (c2 + e3) to yield the most likely codeword, c2
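The syndrome-table decoder just described can be sketched end to end for a (7,4) SEC code. The H below is an assumed [PT | I] construction consistent with the notes' (7,4) examples; the table holds one entry per syndrome, namely the lowest-weight error pattern in that row of the standard array:

```python
H = [[0, 1, 1, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]
n = 7

def syn(v):
    """Syndrome s = vHT as a tuple of mod-2 dot products."""
    return tuple(sum(a * b for a, b in zip(v, row)) % 2 for row in H)

# Build the syndrome table: no error, plus the 7 single-bit errors.
table = {syn([0] * n): [0] * n}
for i in range(n):
    e = [0] * n
    e[i] = 1
    table[syn(e)] = e

def correct(cr):
    """Look up the error pattern from the syndrome and add it (mod 2),
    which in modulo-2 arithmetic is the same as subtracting it."""
    e = table[syn(cr)]
    return [(a + b) % 2 for a, b in zip(cr, e)]

print(correct([1, 1, 0, 1, 0, 0, 0]))     # -> [1, 1, 0, 1, 0, 0, 1]
```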

  • Hamming Codes
    We will consider a special class of SEC codes (i.e., Hamming distance = 3) where:
      Number of parity bits R = n - k and n = 2^R - 1
      The syndrome has R bits
      The 0 value implies zero errors
      The 2^R - 1 other syndrome values provide one for each bit that might need to be corrected
    This is achieved if each column of H is a different binary word (remember s = eHT)

  • Hamming Codes
    The systematic form of the (7,4) Hamming code is
      G = [1000011; 0100101; 0010110; 0001111]
      H = [0111100; 1011010; 1101001]
    The original form is non-systematic:
      H = [0001111; 0110011; 1010101]
    Compared with the systematic code, the column orders of both G and H are swapped so that the columns of H are a binary count

  • Hamming CodesThe column order is now 7, 6, 1, 5, 2, 3, 4, i.e., col. 1 in the non-systematic H is col. 7 in the systematic H.

  • Hamming Codes - Example
    For a non-systematic (7,4) code:
      d = 1011
      c = 1110000 + 0101010 + 1101001 = 0110011

    Suppose the error pattern is
      e = 0010000, so cr = 0100011

    Then
      s = crHT = eHT = 011
    Note the error syndrome is the binary address of the bit to be corrected (here, bit 3)
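Because the columns of the non-systematic H count upwards in binary, the syndrome read as a binary number is directly the address of the bit to flip. A sketch (the H below has columns 001 through 111, matching the example above):

```python
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def decode(cr):
    """Syndrome = binary address of the corrupted bit (0 means no error)."""
    s = [sum(c * h for c, h in zip(cr, row)) % 2 for row in H]
    pos = s[0] * 4 + s[1] * 2 + s[2]      # read syndrome as a binary number
    if pos:                               # non-zero: flip the addressed bit
        cr = cr.copy()
        cr[pos - 1] ^= 1
    return cr

print(decode([0, 1, 0, 0, 0, 1, 1]))      # error in bit 3 -> [0, 1, 1, 0, 0, 1, 1]
```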

  • Hamming Codes
    Double errors will always result in the wrong bit being corrected, since:
      A double error is the sum of 2 single errors
      The resulting syndrome will be the sum of the corresponding 2 single error syndromes
      This syndrome will correspond with a third single bit error
    Consequently the corrected codeword will now contain 3 bit errors, i.e., the original double bit error plus the incorrectly corrected bit!

  • Bit Error Rates after Decoding
    For a given channel bit error rate (BER), what is the BER after correction (assuming a memoryless channel, i.e., no burst errors)?
    To do this we will compute the probability of receiving 0, 1, 2, 3, ... errors
    And then compute their effect

  • Bit Error Rates after Decoding
    Example: a (7,4) Hamming code with a channel BER of 1%, i.e., p = 0.01
      P(0 errors received) = (1 - p)^7 = 0.9321
      P(1 error received) = 7p(1 - p)^6 = 0.0659
      P(2 errors received) = 21p^2(1 - p)^5 = 0.0020

      P(3 or more errors) = 1 - P(0) - P(1) - P(2) = 0.000034

  • Bit Error Rates after Decoding
    Single errors are corrected, so 0.9321 + 0.0659 = 0.998 of codewords are correctly decoded
    Double errors cause 3 bit errors in a 7-bit codeword, i.e., (3/7)*4 bit errors per 4-bit dataword, that is 3/7 bit errors per bit
    Therefore the double error contribution is 0.0020*3/7 = 0.000856

  • Bit Error Rates after Decoding
    The contribution of triple or more errors will be less than 0.000034 (since the worst that can happen is that every data bit becomes corrupted)
    So the BER after decoding is approximately 0.000856 + 0.000034 = 0.0009 = 0.09%
    This is an improvement over the channel BER by a factor of about 11
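The whole calculation can be reproduced numerically from the binomial probabilities for a memoryless channel:

```python
from math import comb

n, p = 7, 0.01
# P[i] = probability of exactly i bit errors in a 7-bit codeword
P = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]

# Double errors are mis-corrected into 3 bit errors: 3/7 errors per bit.
# Three or more errors: worst case, every bit is corrupted.
ber_out = P[2] * 3 / 7 + sum(P[3:])
print(round(ber_out, 4))                  # -> 0.0009, vs channel BER 0.01
```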

  • Perfect Codes
    If a codeword has n bits and we wish to correct up to t errors, how many parity bits (R) are needed?
    Clearly we need sufficient error syndromes (2^R of them) to identify all error patterns of up to t errors:
      1 syndrome to represent 0 errors
      n syndromes to represent all 1 bit errors
      n(n-1)/2 syndromes to represent all 2 bit errors
      nCe = n!/((n-e)! e!) syndromes to represent all e bit errors

  • Perfect Codes
    So,
      2^R >= 1 + n + n(n-1)/2 + ... + nCt
    If equality holds then the code is Perfect
    The only known perfect codes are the SEC Hamming codes and the TEC Golay (23,12) code (dmin = 7). For the Golay code the previous equation yields
      2^11 = 1 + 23 + 253 + 1771 = 2048
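The Hamming bound can be checked directly for both families; `is_perfect` is an illustrative helper name:

```python
from math import comb

def is_perfect(n, k, t):
    """Hamming bound: 2^(n-k) >= sum of nCe for e = 0..t.
    The code is perfect when the bound holds with equality."""
    return 2 ** (n - k) == sum(comb(n, e) for e in range(t + 1))

print(is_perfect(7, 4, 1))                # SEC Hamming (7,4): 8 = 1 + 7 -> True
print(is_perfect(23, 12, 3))              # Golay: 2048 = 1 + 23 + 253 + 1771 -> True
```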

  • Summary
    In this section we have:
      Used block codes to add redundancy to messages to control the effects of transmission errors
      Encoded and decoded messages using Hamming codes
      Determined overall bit error rates as a function of the error control strategy

