
Analysis of LDPC Convolutional Codes Derived from LDPC Block Codes


ANALYSIS OF LDPC CONVOLUTIONAL CODES DERIVED FROM LDPC BLOCK CODES

A Dissertation

Submitted to the Graduate School of the University of Notre Dame in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

by

Ali Emre Pusane, B.S., M.S., M.S., M.S.

Daniel J. Costello, Jr., Director

Graduate Program in Electrical Engineering

Notre Dame, Indiana

April 2008

© Copyright by Ali Emre Pusane, 2008. All Rights Reserved.

ANALYSIS OF LDPC CONVOLUTIONAL CODES DERIVED FROM LDPC BLOCK CODES

Abstract

by Ali Emre Pusane

LDPC convolutional codes have been shown to be capable of achieving the same capacity-approaching performance as LDPC block codes with iterative message-passing decoding. In this dissertation, we present several methods of deriving families of time-varying and time-invariant LDPC convolutional codes from LDPC block codes. We demonstrate that the derived LDPC convolutional codes significantly outperform the underlying LDPC block codes, and we investigate the reasons for these convolutional gains.

It is well known that cycles in the Tanner graph representation of a sparse code affect the iterative decoding algorithm, with short cycles generally pushing its performance further away from optimum. Hence it is common practice to design codes that do not contain short cycles, so as to obtain independent messages in at least the initial iterations of the decoding process. We show that the derived LDPC convolutional codes have better graph-cycle properties than their block code counterparts. In particular, we show that the unwrapping process that is used to derive LDPC convolutional codes from LDPC block codes can break some cycles of the underlying LDPC block code, but cannot create any shorter cycles. We prove that any cycle in an LDPC convolutional code always maps to a cycle of the same or smaller length in the underlying LDPC block code.

Minimum distance is an important code design parameter when the channel quality is high, since codewords that are minimum distance apart are the most likely to cause errors with ML or near-ML decoding. In this case, the minimum distance determines the so-called error floor behavior in the performance curve corresponding to high channel signal-to-noise ratios.
Thus studying the minimum distance properties of a code family gives insight into its error floor behavior. We use asymptotic methods to calculate a lower bound on the free distance of several ensembles of asymptotically good LDPC convolutional codes derived from protograph-based LDPC block codes. Further, we show that the free distance to constraint length ratio of the LDPC convolutional codes exceeds the minimum distance to block length ratio of the corresponding LDPC block codes.

Message-passing iterative decoders for LDPC block codes are known to be subject to decoding failures due to so-called pseudo-codewords. These failures can cause the large signal-to-noise ratio performance of message-passing iterative decoding to be worse than that predicted by the maximum-likelihood decoding union bound. We address the pseudo-codeword problem from the convolutional code perspective. In particular, we show that the minimum pseudo-weight of an LDPC convolutional code is at least as large as the minimum pseudo-weight of an underlying LDPC block code. This result, which parallels a well-known relationship between the minimum Hamming weight of convolutional codes and the minimum Hamming weight of their quasi-cyclic counterparts, is due to the fact that every pseudo-codeword in the LDPC convolutional code induces a pseudo-codeword in the LDPC block code with pseudo-weight no larger than that of the convolutional pseudo-codeword. More generally, we demonstrate a difference in the weight spectra of LDPC block and convolutional codes that leads to improved performance at low-to-moderate signal-to-noise ratios for the convolutional codes, a conclusion supported by simulation results.

To my wife Özlem and my parents Ayşe and Özkan Şendir

CONTENTS

FIGURES
TABLES
ACKNOWLEDGMENTS
AUTHOR'S NOTE

CHAPTER 1: INTRODUCTION
1.1 A Simple Digital Communication System
1.2 Channel Model
1.3 Channel Encoding
1.3.1 Block codes
1.3.2 Convolutional codes
1.3.3 Low-density parity-check block codes
1.4 Channel Decoding
1.4.1 Decoding of LDPC block codes
1.5 Dissertation Outline

CHAPTER 2: LDPC CONVOLUTIONAL CODES
2.1 An Introduction to LDPC Convolutional Codes
2.2 Improvements to the Pipeline Decoder
2.2.1 A stopping rule for the pipeline decoder
2.2.2 On-demand variable node activation schedule
2.2.3 Compact pipeline decoder architecture
2.3 Implementation Complexity Comparisons with LDPC Block Codes
2.3.1 Computational complexity
2.3.2 Processor (hardware) complexity
2.3.3 Storage requirements
2.3.4 Decoding delay

CHAPTER 3: DERIVING LDPC CONVOLUTIONAL CODES BY UNWRAPPING LDPC BLOCK CODES
3.1 Deriving Time-Invariant LDPC Convolutional Codes by Unwrapping QC-LDPC Block Codes
3.2 Deriving Time-Varying LDPC Convolutional Codes by Unwrapping Randomly Constructed LDPC Block Codes
3.3 A Unified Approach to Unwrapping

CHAPTER 4: PERFORMANCE OF DERIVED LDPC CONVOLUTIONAL CODES AND THE CONVOLUTIONAL GAIN
4.1 Unwrapped LDPC Convolutional Code Examples
4.2 Unwrapping with Different Step Sizes
CHAPTER 5: GRAPH STRUCTURE OF DERIVED LDPC CONVOLUTIONAL CODES
5.1 A Bound on the Girth of Unwrapped LDPC Convolutional Codes

CHAPTER 6: MINIMUM DISTANCE GROWTH RATES OF DERIVED LDPC CONVOLUTIONAL CODE FAMILIES
6.1 An ensemble of protograph-based LDPC block codes
6.2 Protograph Weight Enumerators
6.3 Free Distance Bounds
6.3.1 Tail-biting convolutional codes
6.3.2 A tail-biting LDPC convolutional code ensemble
6.3.3 A free distance bound
6.3.4 The free distance growth rate
6.4 Distance Growth Rate Results

CHAPTER 7: PSEUDO-CODEWORD STRUCTURE OF DERIVED LDPC CONVOLUTIONAL CODES
7.1 Background and Notations
7.2 Pseudo-Codeword Structures of Time-Invariant LDPC Convolutional Codes Derived From QC-LDPC Block Codes
7.2.1 The fundamental cone
7.2.2 Minimum pseudo-weights
7.3 Pseudo-Codewords in Time-Varying LDPC Convolutional Codes

CHAPTER 8: CONCLUSIONS AND RECOMMENDATIONS FOR FUTURE RESEARCH

APPENDIX A: GRAPH CYCLE HISTOGRAMS OF LDPC BLOCK AND CONVOLUTIONAL CODES

BIBLIOGRAPHY

FIGURES

1.1 A binary communication system model.
1.2 The binary symmetric channel.
1.3 The binary erasure channel.
1.4 A rate R = 1/2 convolutional encoder with memory 2.
1.5 Bipartite Tanner graph corresponding to the (10,3,6)-regular LDPC block code of Example 1.3.8.
1.6 Cycles of length 4 (solid) and 6 (dashed).
2.1 Shift-register based encoder.
2.2 Tanner graph of an R = 1/3 LDPC convolutional code and an illustration of pipeline decoding.
2.3 Bit error rate performance of on-demand variable node scheduling and the compact decoder.
2.4 Performance comparison of LDPC block and convolutional codes.
3.1 Tanner graph of (a) a QC block code of length n = rc = 7·3 = 21, (b) a QC block code of length n = rc = 14·3 = 42, (c) the derived LDPC convolutional code with ms = 2 and νs = (ms + 1)c = 3·3 = 9.
3.2 Tanner graph of (a) the original LDPC block code, (b) the LDPC block code after cutting along the diagonal, (c) after appending the submatrices for fast encoding, and (d) the resulting LDPC convolutional code.
3.3 Deriving a time-invariant LDPC convolutional code from a QC-LDPC block code: (a) QC-LDPC code, (b) after reordering of rows and columns, (c) resulting time-varying LDPC convolutional code, (d) resulting time-invariant LDPC convolutional code.
4.1 Performance of three (3,5)-regular QC-LDPC block codes and their associated time-invariant and time-varying LDPC convolutional codes.
4.2 Performance of a [4608,2304] QC-LDPC block code and the associated time-invariant and time-varying LDPC convolutional codes.
4.3 Performance of a family of randomly constructed protograph-based LDPC block codes and the associated time-varying LDPC convolutional codes.
4.4 Performance of LDPC convolutional codes unwrapped from the [400,162] QC-LDPC block code using different step sizes.
4.5 The shape of the convolutional code parity-check matrix non-zero region for (a) k = r (left), (b) k < r (right).
6.1 The copy-and-permute operation for a protograph.
6.2 Tanner graph representation of a protograph-based LDPC block code ensemble.
6.3 Distance growth rates for Example 6.4.1.
6.4 Distance growth rates for Example 6.4.2.
6.5 Distance growth rates for Example 6.4.3.
6.6 Simulation results for Example 6.4.3.
7.1-7.4 [Captions garbled in extraction: the Tanner graph of H1 and related figures for the sum-product-algorithm-type and min-sum-algorithm-type iterative message-passing decoders.]
7.5 The fundamental polytope for H = [1 1 1].
7.6 The performance of a rate R = 1/4 (3,4)-regular LDPC convolutional code and three associated (3,4)-regular QC-LDPC block codes. Note that the horizontal axis is Es/N0 and not the more common Eb/N0 = (1/R)Es/N0.

TABLES

2.1 COMPUTATIONAL AND MEMORY REQUIREMENTS OF THE PROPOSED DECODERS
7.1 THE PSEUDO-WEIGHTS OF THE PSEUDO-CODEWORDS IN EXAMPLE 7.2.9
7.2 THE MINIMUM PSEUDO-WEIGHTS OF THE CODES C_QC^(r), GIVEN BY THE PARITY-CHECK MATRIX H_QC^(r)(X), FOR r = 1, 2, 3, 4
A.1 NORMALIZED GRAPH CYCLE HISTOGRAM FOR LDPC CODES DERIVED FROM THE [155,64] QC CODE
A.2 NORMALIZED GRAPH CYCLE HISTOGRAM FOR LDPC CODES DERIVED FROM THE [4608,2304] QC CODE
A.3 NORMALIZED GRAPH CYCLE HISTOGRAM FOR LDPC CODES DERIVED FROM THE RATE R = 1/2 PROTOGRAPH-BASED LDPC CODE

ACKNOWLEDGMENTS

When I arrived on the Notre Dame campus six years ago, I had no idea that the time I would spend here would be the most rewarding years of my life.
I met some of the most amazing people I know on this campus, including my wife Özlem, and this dissertation would not have been possible without their help and support.

First of all, I would like to express my deepest gratitude to my advisor Dr. Daniel J. Costello, Jr. for his guidance during my research and study. Throughout my doctoral work he encouraged me to develop independent thinking and research skills. He continually stimulated my analytical thinking and greatly assisted me with scientific writing. He was always accessible and willing to help his students with their research. I am very fortunate to have him as my advisor.

When Dr. Kamil Sh. Zigangirov visited the Coding Research Group in the Spring of 2003, I was the newest member in the group. This gave me a unique opportunity to start working on several problems with Dr. Zigangirov. His endless string of ideas and our long discussions were very helpful in shaping my research as well as developing my research techniques. I would like to thank Dr. Thomas E. Fuja for his help and support throughout the years and for being part of my committee. I would also like to thank Dr. J. Nicholas Laneman and Dr. Robert L. Stevenson for serving on my committee. Alongside my doctoral work, I received an M.S. in applied mathematics under the supervision of Dr. Joachim Rosenthal in the Department of Mathematics. Although we did not have much chance to collaborate more due to geographical distance, I enjoyed working with him, and I thank him for his help and support.

My dissertation is a compilation of results from several collaborations I made with leading researchers in the field. I have very much enjoyed working with Dr. Stephen Bates, Zhengang Chen, Dr. Dariush Divsalar, Dr. Norbert Goertz, Logan Gunthorpe, Dr. Alberto Jimenez-Feltström, Dr. Christopher R. Jones, Dr. Michael Lentmaier, David G. M. Mitchell, Dr. Roxana Smarandache, Dr. Arvind Sridharan, Dr. Pascal O. Vontobel, and Dr. Dmitri K. Zigangirov on several projects. I would like to acknowledge and thank them for their contribution to my research during my doctoral studies. Dr. Michael Lentmaier, Dr. Roxana Smarandache, and Dr. Pascal O.
Vontobel deserve special thanks for their contribution to my dissertation.

I wish to thank my friends and colleagues at Notre Dame: Ajit, Arvind, Ching, Christian, Deepak, Junying, Marcin, Ralf, Wei, Xiaowei, and many others. I also thank David Mitchell for being a great research partner. Mehtap and Demirhan Tunç have been our best friends since they arrived at Notre Dame three years ago. I would like to thank them for their continuous support. Especially since the arrival of their son Aslan Tunç in August 2007, Özlem and I have spent pretty much all our free time trying to make Aslan smile.

Prior to coming to Notre Dame, I received an M.S. degree from Istanbul Technical University under the supervision of Dr. Ümit Aygölü. My heartfelt thanks go to Dr. Aygölü, whose guidance and support prepared me for my doctoral work. I would also like to thank Dr. Erdal Panayırcı for his help and support in the same period.

Finally, I would like to thank my wife Özlem. Her love, encouragement, and support have inspired me and given me strength since the day we met. I thank my parents Ayşe and Özkan Şendir. They have always supported and encouraged me to do my best in all matters of life. Özlem's parents Ülker and Cevdet Kayhan have always been there for us when we needed their support. This dissertation is dedicated to Özlem and our parents.

AUTHOR'S NOTE

This dissertation includes results from several collaborations with leading researchers in the field from across the globe. I would like to acknowledge and thank my collaborators for their contribution to my research during my Ph.D. studies. On this note, I will list our collaborations and acknowledge my co-authors on the published papers.

Chapter 2 of this dissertation deals with the background for LDPC convolutional codes, and it is an essential part of the dissertation leading into the work presented in the following chapters. Section 2.2 deals with several improvements we have proposed for the pipeline decoder that is used to decode LDPC convolutional codes. This has been joint work with Dr. Michael Lentmaier, Dr. Kamil Sh. Zigangirov, and my dissertation advisor, Dr. Daniel J. Costello, Jr. Our results were published in [1].
This work was later extended to include block transmission of LDPC convolutional codes and comparisons between LDPC block and convolutional codes. These extensions were aided by the contributions of Dr. Alberto Jimenez-Feltström and Dr. Arvind Sridharan and resulted in [2]. The LDPC block vs. LDPC convolutional code comparisons were further developed with the help of Dr. Stephen Bates, Dr. Christopher R. Jones, and Dr. Dariush Divsalar and resulted in [3, 4].

Chapters 3, 4, and 5 constitute the heart of this dissertation and were published in [5]. Dr. Roxana Smarandache, Dr. Pascal O. Vontobel, and Dr. Daniel J. Costello, Jr.'s contributions are greatly appreciated. Our collaboration also led to the pseudo-codeword analysis presented in Chapter 7, which was published in [6, 7].

The distance growth rate analysis presented in Chapter 6 is a collaboration with David G. M. Mitchell, Dr. Kamil Sh. Zigangirov, and Dr. Daniel J. Costello, Jr. Our results have been submitted for publication in [8].

There are a couple of projects I have participated in that are not included in this dissertation. I would also like to acknowledge my collaborators on these projects.

A code construction algorithm for irregular LDPC convolutional codes was presented in [9]. This was joint work with Dr. Kamil Sh. Zigangirov and Dr. Daniel J. Costello, Jr.

A bandwidth-efficient LDPC convolutional code construction technique was proposed in [10]. This was joint work with Dr. Michael Lentmaier, Dr. Thomas E. Fuja, Dr. Kamil Sh. Zigangirov, and Dr. Daniel J. Costello, Jr.

Serial decoding architectures were considered for the decoding of LDPC convolutional codes in [11-13]. This was joint work with Dr. Stephen Bates, Logan Gunthorpe, Zhengang Chen, Dr. Kamil Sh. Zigangirov, and Dr. Daniel J. Costello, Jr.

Finally, an analysis of the error correcting capability of LDPC block codes was presented in [14]. This was joint work with Dr. Dmitri K. Zigangirov, Dr. Kamil Sh. Zigangirov, and Dr. Daniel J. Costello, Jr.

CHAPTER 1

INTRODUCTION

1.1 A Simple Digital Communication System

A digital communication system block diagram is given in Figure 1.1.
A binary information source produces the data to be transmitted. The original message can be digital, as in computers, or digitized versions of analog signals, as in voice communications. In either case, a source encoder is employed in order to eliminate the redundancy in the digital data. This operation is generally known as compression, and it increases the throughput of the communication system by saving valuable signaling time. The compressed data is expected to be restored by the source decoder at the receiver side. Ideally, it is desirable for a source decoder to decode the compressed data perfectly (lossless data compression); however, some sacrifice in performance is sometimes made to increase the compression ratio (lossy data compression). An ideal source encoder minimizes the number of symbols needed to represent the data, such that the symbols in the information sequence at the output of the source encoder are independent and equally likely. The channel encoder takes fixed-length blocks u = [u1, u2, ..., uk] of the information sequence and adds redundancy to form a codeword v = [v1, v2, ..., vn]. The ratio R = k/n, k < n, is called the code rate. The channel decoder utilizes this redundancy in the noisy received block r = [r1, r2, ..., rn] to try and recover the original information block u. Finally, the modulator uses the codeword v to generate a waveform x(t) that is suitable for transmission over the physical transmission channel. The noisy received waveform r(t) is demodulated to arrive at the received block r, which is decoded to produce an estimate û = [û1, û2, ..., ûk] of the original information block. While the information block u and the codeword v are assumed to be binary, depending on the channel model, the received block r may be unquantized.

Figure 1.1.
A binary communication system model.

One major result of the pioneering paper by Claude Shannon [15] is that, for a broad class of channel codes, arbitrarily low error probabilities can be achieved so long as the code rate R is less than the channel capacity C, which is determined by the physical properties of the channel. The converse theorem states that if R > C, reliable transmission with arbitrarily low error probabilities is not possible. Another major result of [15] is that one can design the source and channel codes separately and still achieve optimality. Based on this fact, we focus only on the channel coding part of the communication system in this dissertation.

1.2 Channel Model

Throughout the dissertation, binary phase shift keying (BPSK) modulation is assumed. In BPSK, a binary symbol vi ∈ {0, 1} is modulated to obtain a BPSK symbol xi, where xi = 2vi − 1 ∈ {+1, −1}. xi then modulates a sinusoidal pulse to produce the signal x(t), which is transmitted over an additive white Gaussian noise (AWGN) channel. The effect of the AWGN channel on a received symbol ri takes the form of additive noise, i.e.,

ri = xi + ni,  i = 1, 2, ..., n,

where the noise variables ni are independent, identically distributed, Gaussian random variables with zero mean and variance σ² = N0/2, and N0 is the single-sided power spectral density of the noise process. The relative strength of a transmitted channel symbol is given in terms of the bit signal-to-noise ratio (SNR), which is defined as Eb/N0, where Eb denotes the average energy per information symbol.

Given the above definition of SNR, Shannon's noisy-channel coding theorem can now be restated in a way that applies to designing a code with a certain target rate. For a given code rate R, the noisy-channel coding theorem determines a minimum channel quality SNR* such that reliable transmission is possible if and only if SNR > SNR*. Code families that achieve reliable communications at SNRs very close to SNR* with asymptotically large block lengths are conveniently named capacity-approaching codes.

The output of the AWGN channel described above is continuous valued, i.e., soft demodulation is assumed.
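The channel model above translates directly into a small simulation. The sketch below is illustrative only and is not part of the dissertation: the mapping xi = 2vi − 1 and the noise variance σ² = N0/2 follow the definitions in the text, while the function names, the 6 dB operating point, and the use of Python's `random` module are my own choices.

```python
import math
import random

def awgn_bpsk_channel(bits, ebn0_db, rate=1.0, rng=random):
    """Map bits 0/1 to BPSK symbols -1/+1 and add Gaussian noise.

    With unit symbol energy and code rate R, the per-dimension noise
    variance is sigma^2 = N0/2 = 1 / (2 * R * Eb/N0).
    """
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * rate * ebn0))
    return [(2 * b - 1) + rng.gauss(0.0, sigma) for b in bits]

def hard_decision(received):
    """Hard demodulation: decide 1 if r_i > 0, else 0."""
    return [1 if r > 0 else 0 for r in received]

rng = random.Random(1)
bits = [rng.randrange(2) for _ in range(10000)]
r = awgn_bpsk_channel(bits, ebn0_db=6.0, rate=1.0, rng=rng)
ber = sum(b != d for b, d in zip(bits, hard_decision(r))) / len(bits)
```

At 6 dB, uncoded BPSK gives a bit error rate on the order of a few tenths of a percent, which the estimate `ber` reflects up to simulation noise.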
However, in some communication scenarios, hard demodulation may be an option. Hard demodulation quantizes the output of the channel into a pre-determined set of symbols. The simplest hard demodulation example is the binary symmetric channel (BSC) model depicted in Figure 1.2.

Figure 1.2. The binary symmetric channel.

The BSC is described by a single parameter, its crossover probability. With this probability, a symbol is inverted during transmission over a BSC. The hard demodulation employed on a BSC assumes a certain decision boundary to distinguish between the two possible values. The closer the received symbol is to this decision boundary, the larger the probability that a transmission error has occurred. The binary erasure channel (BEC) model utilizes this approach to arrive at the transmission diagram shown in Figure 1.3. Each symbol is assumed to be erased with a fixed erasure probability. The biggest difference from the BSC model is that the received symbol cannot be erroneous, i.e., a symbol transmitted over a BEC is either received correctly or an erasure occurs.
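Both hard-demodulation models are easy to mimic in code. This is a minimal sketch under my own conventions, not the dissertation's notation; in particular, marking an erasure with the string 'E' is an arbitrary choice.

```python
import random

def bsc(bits, crossover, rng=random):
    """Binary symmetric channel: flip each bit with the crossover probability."""
    return [b ^ 1 if rng.random() < crossover else b for b in bits]

def bec(bits, erasure_prob, rng=random):
    """Binary erasure channel: each bit arrives intact or is erased ('E')."""
    return ['E' if rng.random() < erasure_prob else b for b in bits]
```

The defining property of the BEC shows up directly: every output symbol is either 'E' or exactly the transmitted bit, whereas the BSC can deliver a wrong bit.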

Figure 1.3. The binary erasure channel.

1.3 Channel Encoding

Symbols of a sequence transmitted over a channel are prone to random channel errors. Channel codes are employed to introduce redundancy into the information sequence and to use this redundant information to recover the information sequence from a received noisy sequence. Channel codes can be categorized into two broad classes, namely block codes and convolutional codes. Block codes operate on blocks of the information sequence and encode/decode these blocks independently, whereas convolutional codes work continuously at the symbol level. We will review these two approaches in this section.

1.3.1 Block codes

A linear block code C of rate R = k/n takes a block of information symbols u = [u1, u2, ..., uk] and produces a codeword v = [v1, v2, ..., vn]. This dissertation focuses only on linear block codes defined over the binary field F_2. In this case, there is a one-to-one correspondence between the 2^k possible distinct information blocks and the 2^k codewords of length n. The linearity condition yields a compact description of the block code as a k-dimensional subspace of the vector space F_2^n. Using this description, it is possible to represent each codeword v corresponding to an information block u as a linear combination of k linearly independent codewords

v = Σ_{i=1}^{k} ui gi,  (1.1)

where gi = [gi,1, gi,2, ..., gi,n], 1 ≤ i ≤ k, and modulo-2 addition is assumed. These k linearly independent codewords can therefore be arranged to form a generator matrix G for the block code C, where G is given as

G = [ g1 ]   [ g1,1 g1,2 ... g1,n ]
    [ g2 ] = [ g2,1 g2,2 ... g2,n ]
    [ .. ]   [ ...                ]
    [ gk ]   [ gk,1 gk,2 ... gk,n ].  (1.2)

A block code can be defined in terms of its generator matrix as

v_{1×n} = u_{1×k} G_{k×n}.  (1.3)

Different sets of linearly independent codewords lead to the same block code. However, the mapping of information blocks to codewords is changed in this case. Also, a block code is called systematic if the codeword symbols can be arranged in such a way that the first k symbols correspond to the information block u.

Example 1.3.1 Consider a rate R = 4/7 block code of length n = 7. This code maps information blocks of length k = 4 to codewords of length n = 7 using a generator matrix G given as

G = [ 1 0 0 0 1 0 1 ]
    [ 0 1 0 0 1 1 1 ]
    [ 0 0 1 0 1 1 0 ]
    [ 0 0 0 1 0 1 1 ]_{4×7}.  (1.4)

The information block u = [0, 1, 1, 0] can be encoded to produce

v = uG = [0, 1, 1, 0] G  (1.5)
       = [0, 1, 1, 0, 0, 0, 1].  (1.6)

The 4×4 identity matrix that constitutes the first 4 columns of G results in a systematic block code.

The Hamming distance between two codewords v1 and v2 of C is the number of positions in which they differ, which equals the Hamming weight¹ of the mod-2 sum of v1 and v2, i.e., v1 ⊕ v2. The minimum distance dmin of C is the minimum Hamming distance between any two codewords of C. Since the linearity of the code guarantees that the sum of any two codewords is still a codeword in the code, the minimum distance dmin equals the minimum weight of any non-zero codeword in C.

A linear block code can also be described by its parity-check matrix. A parity-check matrix H is the null space of the generator matrix G, i.e.,

G H^T = 0.  (1.7)

Therefore, for any codeword v in C,

v H^T = 0,  (1.8)

where H^T (also called the syndrome former matrix) is the transposed parity-check matrix.

¹The Hamming weight of a vector is the number of non-zero components.
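The encoding in Example 1.3.1 can be reproduced in a few lines. This is a sketch rather than anything from the dissertation; the helper name `encode` is mine, but the arithmetic is exactly v = uG over F_2 as in (1.3).

```python
# Generator matrix of the rate R = 4/7 systematic code in Example 1.3.1.
G = [
    [1, 0, 0, 0, 1, 0, 1],
    [0, 1, 0, 0, 1, 1, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 0, 1, 1],
]

def encode(u, G):
    """v = uG over F2: XOR together the rows of G selected by the 1s in u."""
    v = [0] * len(G[0])
    for bit, row in zip(u, G):
        if bit:
            v = [a ^ b for a, b in zip(v, row)]
    return v

# encode([0, 1, 1, 0], G) -> [0, 1, 1, 0, 0, 0, 1], matching (1.5)-(1.6).
```

Because the first four columns of G form an identity matrix, the first four symbols of every codeword equal the information block, i.e., the code is systematic.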
For a rate R = k/n linear block code of length n, the parity-check matrix is an (n−k)×n matrix.

Example 1.3.2 A parity-check matrix for the linear block code given in Example 1.3.1 is given by

H = [ 1 1 1 0 1 0 0 ]
    [ 0 1 1 1 0 1 0 ]
    [ 1 1 0 1 0 0 1 ]_{3×7}.  (1.9)
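The orthogonality relations (1.7) and (1.8) can be checked numerically for Examples 1.3.1 and 1.3.2. This is a minimal sketch with my own helper name; by linearity, verifying the rows of G against H suffices to verify all 2^4 codewords.

```python
# G from Example 1.3.1 and H from Example 1.3.2.
G = [
    [1, 0, 0, 0, 1, 0, 1],
    [0, 1, 0, 0, 1, 1, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 0, 1, 1],
]
H = [
    [1, 1, 1, 0, 1, 0, 0],
    [0, 1, 1, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
]

def syndrome(v, H):
    """Compute v H^T over F2: one parity bit per row of H."""
    return [sum(vi & hi for vi, hi in zip(v, row)) % 2 for row in H]

# G H^T = 0: every row of G, hence every codeword, satisfies all three checks.
assert all(syndrome(row, H) == [0, 0, 0] for row in G)
```

A non-codeword such as [1, 0, 0, 0, 0, 0, 0] produces a non-zero syndrome, which is exactly the redundancy the channel decoder exploits.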

Similar to the case for the generator matrix, several parity-check matrices for the same code C can be formed by taking different sets of n−k basis vectors of the null space of G.

1.3.2 Convolutional codes

Convolutional coding takes a very different approach to encoding information sequences. Rather than taking large blocks of the information sequence and encoding/decoding them separately, convolutional codes continuously encode an information sequence into a code sequence. We use a different notation for convolutional codes. The number of information symbols that enter a convolutional encoder per unit time is denoted by b, while the number of code symbols at the output is denoted by c. Thus, the code rate of a convolutional code is given as R = b/c. At each time instant, in order to produce a c-tuple, a convolutional encoder processes b incoming information symbols as well as several information symbols that were encoded and stored in previous time instants. The encoder memory m determines how long previous information symbols can be stored and used in the encoder. We now introduce a simple example to demonstrate the encoding process.

Example 1.3.3 Consider a rate R = b/c = 1/2 convolutional code with memory m = 2. The block diagram for the convolutional encoder is given in Figure 1.4, where the squares represent the delay elements. At each time instant t, the convolutional encoder takes b = 1 information symbol u_t from the information sequence u and produces c = 2 code symbols v_t^(1) and v_t^(2) to form the code sequence v. The input-output relation described in Figure 1.4 is given as

v_t^(1) = u_t,  (1.10)
v_t^(2) = u_t + u_{t-2}.  (1.11)
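The shift-register relations (1.10) and (1.11) can be sketched as a short routine. The state-update convention below is my own, but the arithmetic is exactly that of Example 1.3.3 with both delay elements initialized to zero.

```python
def conv_encode(u):
    """Rate 1/2, memory-2 encoder of Example 1.3.3:
    v_t^(1) = u_t and v_t^(2) = u_t + u_{t-2} (mod 2)."""
    s1 = s2 = 0          # the two delay elements, initially zero
    out = []
    for ut in u:
        out.append((ut, ut ^ s2))   # (v_t^(1), v_t^(2))
        s1, s2 = ut, s1             # shift the register
    return out

# conv_encode([1, 1, 0, 1]) -> [(1, 1), (1, 1), (0, 1), (1, 0)]
```

Unlike the block encoder of Example 1.3.1, this routine can be fed an arbitrarily long stream one symbol at a time, which is precisely the "continuous" character of convolutional coding described above.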

The convolutional encoder in Example 1.3.3 is said to be time-invariant, since the input-output relations given in (1.10) and (1.11) do not change with time. In the following chapters, we will introduce and analyze both time-invariant and time-varying convolutional codes.

Figure 1.4. A rate R = 1/2 convolutional encoder with memory 2.

Similar to the notation of block codes, convolutional codes can also be described using generator matrices. In general, a rate R = b/c convolutional encoder takes an information sequence u in the form

u = [..., u_0, u_1, ..., u_t, ...],  (1.12)

where u_i = [u_i^1, u_i^2, ..., u_i^b], for all i ∈ Z. This information sequence is mapped to a code sequence v given by

v = [..., v_0, v_1, ..., v_t, ...],  (1.13)

where v_i = [v_i^1, v_i^2, ..., v_i^c]. For a convolutional encoder without feedback, the relation between u and v is given by

v_t = u_t G_0(t) + u_{t-1} G_1(t) + ... + u_{t-m} G_m(t).  (1.14)

Here, each binary matrix G_i(t), i = 0, 1, ..., m, of size b×c determines which of the b symbols in the corresponding information b-tuple takes part in the computation of each of the c output symbols at time instant t. For time-invariant codes, the G_i(t) matrices are the same for each time unit and can therefore simply be denoted as G_i.

The encoding operation given in (1.14) can be rewritten as v = uG using the generator matrix

G = [ ...                                   ]
    [ G_0(0) ... G_m(m)                     ]
    [        G_0(1) ... G_m(m+1)            ]
    [               ...                     ]
    [            G_0(t) ... G_m(m+t)        ]
    [                               ...     ].  (1.15)

Example 1.3.4 The generator matrix G corresponding to the convolutional code of Example 1.3.3 is given as

G = [ ...                ]
    [ 11 00 01           ]
    [    11 00 01        ]
    [       11 00 01     ]
    [              ...   ].  (1.16)

As seen in (1.16), the generator matrix has a repetitive structure for time-invariant convolutional codes. A shorthand notation can be defined using the delay variable D. The information and code sequences can be described in terms of the delay variable as

u(D) = ... + u_0 + u_1 D + ... + u_t D^t + ...,  (1.17)
v(D) = ... + v_0 + v_1 D + ... + v_t D^t + ....  (1.18)

The encoding operation can then be described as

v(D) = u(D) G(D),  (1.19)

where the generator matrix G(D) is given as

G(D) = G_0 + G_1 D + ... + G_m D^m.  (1.20)

Similar to the scalar representation given in (1.12)-(1.14), the coefficients u_i, v_i, and G_i are of size 1×b, 1×c, and b×c, respectively.

Example 1.3.5 The generator matrix G(D) corresponding to the convolutional code of Example 1.3.3 is given by

G(D) = [1 1] + [0 0] D + [0 1] D² = [1  1 + D²].  (1.21)
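The delay-operator description can be exercised with ordinary polynomial multiplication over F_2. This is a sketch under my own conventions: a polynomial is a coefficient list indexed by the power of D, and the helper name is mine; G(D) = [1, 1 + D²] is taken from Example 1.3.5.

```python
def poly_mul_gf2(a, b):
    """Multiply two binary polynomials (coefficient lists, index = power of D),
    with coefficient addition mod 2."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

# G(D) = [1, 1 + D^2] from Example 1.3.5.
g1, g2 = [1], [1, 0, 1]

u = [1, 1]                    # u(D) = 1 + D
v1 = poly_mul_gf2(u, g1)      # v^(1)(D) = u(D)            -> 1 + D
v2 = poly_mul_gf2(u, g2)      # v^(2)(D) = u(D)(1 + D^2)   -> 1 + D + D^2 + D^3
```

Reading the coefficients of v1 and v2 off time instant by time instant reproduces what the shift-register encoder of Example 1.3.3 emits for the input 1, 1, 0, 0, ...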

Each codeword v(D) satisfies

v(D) H^T(D) = 0,  (1.22)

where H^T(D) (also called the syndrome former matrix) is the transposed parity-check matrix of size c×(c−b).

Example 1.3.6 The syndrome former matrix H^T(D) corresponding to the generator matrix G(D) given in Example 1.3.5 is given by

H^T(D) = [ 1 + D² ]
         [   1    ].  (1.23)
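Relation (1.22) can be verified for Examples 1.3.5 and 1.3.6 by checking G(D)H^T(D) = 0 with binary polynomial arithmetic. The helpers below are my own sketch; the identity itself is just 1·(1 + D²) + (1 + D²)·1 = 0 over F_2.

```python
def poly_mul_gf2(a, b):
    """Multiply binary polynomials (coefficient lists, index = power of D) over F2."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

def poly_add_gf2(a, b):
    """Add binary polynomials over F2: componentwise XOR after zero-padding."""
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [x ^ y for x, y in zip(a, b)]

# G(D) = [1, 1 + D^2] and H^T(D) = [1 + D^2, 1]^T, so
# G(D) H^T(D) = 1*(1 + D^2) + (1 + D^2)*1 = 0 over F2.
s = poly_add_gf2(poly_mul_gf2([1], [1, 0, 1]),
                 poly_mul_gf2([1, 0, 1], [1]))
assert not any(s)
```

Since every v(D) is u(D)G(D), the same cancellation makes v(D)H^T(D) vanish for any input polynomial u(D).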

Finally, similar to the minimum distance dmin of block codes, the free distance dfree of a convolutional code is defined as the minimum Hamming distance between any two finite-length code sequences. If one of the two code sequences is shorter than the other, zeros can be added to the shorter sequence so that the corresponding codewords have equal lengths. Since the linearity of the code guarantees that the sum of any two code sequences is still a valid code sequence, the free distance dfree equals the minimum weight of any finite-length non-zero code sequence.

1.3.3 Low-density parity-check block codes

In the last fifteen years, the area of channel coding has been revolutionized by the practical realization of capacity-approaching codes, initiated by the invention of turbo codes in 1993 [16]. A few years after the invention of turbo codes, researchers became aware that Gallager's low-density parity-check (LDPC) block codes, first introduced in [17], were also capable of capacity-approaching performance. We now give a formal definition of LDPC block codes.

Definition 1.3.7 An LDPC block code is a linear block code whose parity-check matrix H has a relatively low number of ones in its rows and columns. In particular, an (n, J, K)-regular LDPC block code is a linear block code of length n and rate R ≥ 1 − J/K whose parity-check matrix H has exactly J ones in each column and K ones in each row, where J, K
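Both distance measures defined above can be found by brute force for the small codes of this section. The search below is my own sketch: the helpers are hypothetical, and the input-length cap of 6 is an assumption that is safe here only because the encoder of Example 1.3.3 has memory 2.

```python
from itertools import product

# Generator matrix of the block code in Example 1.3.1.
G = [
    [1, 0, 0, 0, 1, 0, 1],
    [0, 1, 0, 0, 1, 1, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 0, 1, 1],
]

def encode(u, G):
    """v = uG over F2."""
    v = [0] * len(G[0])
    for bit, row in zip(u, G):
        if bit:
            v = [a ^ b for a, b in zip(v, row)]
    return v

# d_min = minimum weight of a non-zero codeword (the linearity argument above).
d_min = min(sum(encode(list(u), G))
            for u in product([0, 1], repeat=4) if any(u))

def conv_encode(u):
    """Encoder of Example 1.3.3, flushed with two zeros to clear the register."""
    s1 = s2 = 0
    out = []
    for ut in list(u) + [0, 0]:
        out.extend((ut, ut ^ s2))
        s1, s2 = ut, s1
    return out

# d_free = minimum weight of a non-zero finite-length code sequence.
d_free = min(sum(conv_encode(u))
             for length in range(1, 7)
             for u in product([0, 1], repeat=length) if any(u))
```

For these toy codes the search gives d_min = 3 and d_free = 3; the free distance is achieved by the single-one input u(D) = 1, whose output [1, 1 + D²] has weight 3.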
