Low-density Parity-check (LDPC) Codes
Asha Rao
School of Science (Mathematical Sciences), RMIT University
Australia
Joint work with Diane Donovan and Emine Sule Yazıcı
Arasufest, Kalamata, Greece, 3 August 2019
Outline
Some History
The Technical Details
  Some Peculiarities
Known Combinatorial Constructions
  Construction from BIBDs
A New Construction
LDPC Codes © Asha Rao
Some History
Some History of LDPC Codes
• First discovered by Robert Gallager in the 1960s, ... and then forgotten for the next 30 years.
Understanding LDPC Codes – Shannon Limit
• Shannon's 1948 paper.
• Communications channel: bandwidth and noise.
• Bandwidth: the range of electronic, optical or electromagnetic frequencies used to transmit a signal.
• Noise: anything that can disturb this signal.
The Shannon Limit
• Shannon Limit: given a channel with known bandwidth and noise characteristics,
• one can calculate the maximum rate at which data can be sent over this channel with arbitrarily small error.
• This rate is called the channel capacity,
• now also called the Shannon Limit.
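For a concrete sense of channel capacity: for a band-limited channel with additive white Gaussian noise, the Shannon–Hartley theorem gives C = B log₂(1 + S/N). A minimal sketch; the bandwidth and SNR figures below are illustrative, not taken from the talk:

```python
import math

def channel_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley capacity of a band-limited AWGN channel, in bits/s."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative values: a 3 kHz telephone-grade channel at 30 dB SNR.
snr = 10 ** (30 / 10)               # 30 dB -> 1000 in linear terms
print(channel_capacity(3000, snr))  # about 29900 bits/s
```

No matter how clever the code, a channel like this one tops out near 30 kbit/s.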
Noisy Channels
• If a channel is noisy, then zero error can be approached only by adding redundancy.
• Example: wish to transmit a message consisting of 3 bits: 001.
• You could send it 3 times: send 001001001.
• If the receiver got 001011001, she could be pretty sure you sent 001.
• This process is an error-correcting code.
• The noisier the channel, the more redundancy has to be added.
• The longer the code, the lower the transmission rate!
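The three-fold repetition scheme above can be sketched with majority-vote decoding (a toy illustration):

```python
def encode_repeat3(msg):
    """Send the whole message three times."""
    return msg * 3

def decode_repeat3(received):
    """Majority vote, bit position by bit position, across the three copies."""
    n = len(received) // 3
    copies = [received[i * n:(i + 1) * n] for i in range(3)]
    return "".join(
        "1" if sum(c[i] == "1" for c in copies) >= 2 else "0"
        for i in range(n)
    )

print(encode_repeat3("001"))        # 001001001
print(decode_repeat3("001011001"))  # 001 -- the single flipped bit is outvoted
```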
Shannon’s Ground-breaking Work
• Shannon showed that better error-correcting codes are possible.
• He proved that given any communications channel, there exists an error-correcting code that enables transmissions to approach the Shannon Limit.
• Unfortunately, his proof was an existence proof.
• How should one go about constructing such codes?
Example
• You wish to send a single 4-bit message.
• There are 16 possible 4-bit messages: 0000, 0001, 0010, . . . , 1111.
• Shannon's proof: assign each message its own randomly selected code, like a serial number.
• Suppose now that the channel is so noisy that the 4-bit message needs an 8-bit code.
Example (Cont.)
• The receiver would have a "code-book" in which the 16 possible 4-bit messages are matched to 16 eight-bit codes.
• Since there are 256 8-bit sequences, at least 240 of these are not in the code book.
• If the receiver receives one of the 240, then she knows an error has occurred.
• But there is likely only one of the 16 codes that is closest to the received word: the sent codeword.
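Decoding with such a codebook amounts to finding the entry at the smallest Hamming distance from the received word. A sketch with a hypothetical random codebook; the particular codes chosen here are arbitrary, since only the shape of the computation matters:

```python
import random

random.seed(1)
messages = [format(m, "04b") for m in range(16)]
# Hypothetical codebook: 16 distinct 8-bit codes chosen at random.
codes = random.sample([format(c, "08b") for c in range(256)], 16)
codebook = dict(zip(messages, codes))

def hamming(a, b):
    """Number of positions where two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(received):
    """Return the message whose code is closest to the received word."""
    return min(codebook, key=lambda m: hamming(codebook[m], received))

sent = codebook["0110"]
print(decode(sent))  # 0110: an uncorrupted word decodes to its own message
```

Note the catch the next slide makes precise: this lookup touches every codebook entry, which is hopeless once messages are long.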
What Shannon showed
• Statistically: consider all possible ways in which random codes could be assigned to messages;
• =⇒ ∃ at least one assignment that comes close to the Shannon limit.
• The longer the code, the closer you can get:
• not very close using 8-bit codes for 4-bit messages, but much closer using 2000-bit codes for 1000-bit messages.
• A codebook is, however, totally impractical.
• For a 1000-bit message, the codebook would not fit into all the digital storage space in the world.
Iterative Codes
• 1990s: codes discovered that could do what Shannon described.
• The world had forgotten (ignored) for 30 years the work that Robert Gallager had done.
• In 1993, at the IEEE International Conference on Communications, Alain Glavieux and Claude Berrou presented a new set of codes
• that came very close to the Shannon limit.
• They were electronics engineers, not coding theorists,
• but they were found to be right.
• What they had discovered were Turbo codes.
The Search was On
• Turbo codes are "iterative" codes:
• the decoder makes a series of guesses about what the message is supposed to be.
• Every time the new guess is fed back into the decoder, the guess gets closer to the message.
• Doing this many times gets the error rate as low as required.
• Immediately everyone started searching for iterative coding schemes.
• They discovered Gallager's work!
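The guess-and-feed-back idea can be made concrete with the simplest iterative decoder, Gallager's bit-flipping algorithm: while some parity checks fail, flip the bit involved in the most failing checks. A toy sketch on a small [6,3] parity-check matrix; a code this short only corrects some error patterns, and the example error below is one it handles:

```python
def bit_flip_decode(H, word, max_iters=20):
    """Gallager-style bit flipping: while some parity checks fail, flip the
    bit that appears in the largest number of failing checks."""
    word = list(word)
    for _ in range(max_iters):
        failing = [row for row in H if sum(h * w for h, w in zip(row, word)) % 2]
        if not failing:
            break  # every parity check is satisfied
        votes = [sum(row[j] for row in failing) for j in range(len(word))]
        word[votes.index(max(votes))] ^= 1
    return word

H = [[1, 1, 1, 1, 0, 0],   # x1 + x2 + x3 + x4 = 0
     [0, 1, 1, 0, 1, 0],   # x2 + x3 + x5 = 0
     [1, 1, 0, 0, 0, 1]]   # x1 + x2 + x6 = 0
# 101011 is a codeword; receive it with the second bit flipped.
print(bit_flip_decode(H, [1, 1, 1, 0, 1, 1]))  # [1, 0, 1, 0, 1, 1]
```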
Why was Gallager’s work ignored for so long?
• Gallager's decoding was too difficult for the 1960s.
• Gallager's codes used parity bits:
• bits that contain information about the message bits.
• Reliability estimates of which message bits were sent are improved by feeding back the received bits.
• The only problem is a closed loop, which could give false reliability.
• Gallager's codes were designed to decrease the likelihood of such closed loops.
Gallager’s codes
• They achieve the closest approaches to the Shannon Limit,
• better even than turbo codes.
• They are highly sought after for high-speed communication and data storage,
• and are integrated into standards for wireless data transmission.
• Computer chips dedicated to decoding them can be found in cell phones.
The Technical Details
What is an LDPC code?
• LDPC codes are linear codes.
• One starts with a parity-check matrix.
• Each check digit is the (modulo 2) sum of a particular, preassigned set of information (message) digits.
• Hence the parity-check matrix H represents a set of linear homogeneous binary equations.
• The set of codewords is the set of solutions (the null space) of these equations.
Example
• Here is the parity-check matrix of a [6,3] code.
        x1 x2 x3 x4 x5 x6
H = [    1  1  1  1  0  0
         0  1  1  0  1  0
         1  1  0  0  0  1  ]
• The parity-check equations are:
x4 = x1 + x2 + x3
x5 = x2 + x3
x6 = x1 + x2
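The codewords are exactly the binary vectors in the null space of H, so this [6,3] code has 2³ = 8 codewords. A quick brute-force check (sketch):

```python
from itertools import product

H = [[1, 1, 1, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 1, 0, 0, 0, 1]]

def is_codeword(x):
    """x is a codeword iff every row of H gives a zero parity sum (mod 2)."""
    return all(sum(h * xi for h, xi in zip(row, x)) % 2 == 0 for row in H)

codewords = [x for x in product((0, 1), repeat=6) if is_codeword(x)]
print(len(codewords))  # 8 = 2^(6-3), as expected for a [6,3] code
```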
The Design of LDPC codes
• LDPC codes have sparse (low-density) parity-check matrices:
• the matrix has mostly 0s and very few 1s.
• An (n, γ, ρ) low-density code is a code of block length n
• whose parity-check matrix has γ 1s in each column and ρ 1s in each row.
• Such a code, with γ and ρ fixed, is called a regular LDPC code.
• Note that this type of matrix would not have the identity-matrix part.
• The equations represented by these matrices can always be solved to give the check digits as explicit sums of information digits.
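A regular parity-check matrix of this type can be built by Gallager's original construction: stack γ blocks of n/ρ rows, where the first block has ρ consecutive 1s per row and the other blocks are random column permutations of it. A sketch with small illustrative parameters (the permutation seed is arbitrary):

```python
import random

def gallager_ldpc(n, gamma, rho, seed=0):
    """Sketch of Gallager's (n, gamma, rho)-regular construction: gamma
    stacked blocks of n/rho rows each; the first block has rho consecutive
    1s per row, the rest are random column permutations of it."""
    assert n % rho == 0
    rng = random.Random(seed)
    base = [[1 if r * rho <= c < (r + 1) * rho else 0 for c in range(n)]
            for r in range(n // rho)]
    H = list(base)
    for _ in range(gamma - 1):
        perm = rng.sample(range(n), n)  # a random column permutation
        H += [[row[perm[c]] for c in range(n)] for row in base]
    return H

H = gallager_ldpc(n=12, gamma=3, rho=4)
print(all(sum(row) == 4 for row in H))                        # True: rho 1s per row
print(all(sum(row[c] for row in H) == 3 for c in range(12)))  # True: gamma 1s per column
```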
The Technical Details Some Peculiarities
A Side Note
I Random codes are capacity achieving, as Shannon implicitly proved in his paper.
I However, achieving capacity is only part of the story.
I To be used for communication one needs to be able to encode and decode these codes, FAST.
I Random codes of rate R are just 2^{Rn} random vectors of length n over the input alphabet.
I But to encode and decode them we would need a codebook.
LDPC Codes c©Asha Rao 20 / 44
The Technical Details Some Peculiarities
The Encoding Problem
I If the input alphabet is a field, for example F_2, then encoding is easier.
I There are many codes defined over such alphabets.
I A linear code of block length n and dimension k is a subspace of the vector space F_2^n.
I Linear codes can be encoded in polynomial time.
I Decoding of these codes - maximum likelihood decoding (MLD) - is NP-hard.
LDPC Codes c©Asha Rao 21 / 44
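Polynomial-time encoding is just a vector-matrix product over F_2. The sketch below is our own toy example (not from the talk), using the systematic generator matrix of the [7, 4] Hamming code.

```python
import numpy as np

# Systematic generator matrix G = [I_k | P] of the binary [7, 4] Hamming code.
# Encoding a length-k message is one matrix multiplication over F_2,
# i.e. O(k * n) time.
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])                      # parity part, k = 4, n = 7
G = np.hstack([np.eye(4, dtype=int), P])       # generator matrix [I_4 | P]

def encode(m):
    """Encode message bits m (length k) into a codeword (length n) over F_2."""
    return (np.array(m) @ G) % 2

print(encode([1, 0, 1, 1]))   # first 4 bits are the message itself
```

Because G is in systematic form, the message bits appear verbatim in the codeword, followed by the check bits.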
The Technical Details Some Peculiarities
Bipartite Representation of linear codes
I Linear codes can be represented as bipartite graphs.
I The top nodes: message symbols. The bottom nodes: check symbols.
[Figure: bipartite graph with message nodes x1, x2, x3 and check nodes x4, x5, x6]
LDPC Codes c©Asha Rao 22 / 44
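The bipartite (Tanner) graph is read directly off a parity-check matrix: variable node j is joined to check node i exactly when H[i][j] = 1. The toy matrix below is ours, for illustration only.

```python
# A small 3x6 parity-check matrix, rows = checks, columns = variables.
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 0, 0, 1, 1]]

def tanner_edges(H):
    """Return the edge list {(check i, variable j) : H[i][j] == 1}."""
    return [(i, j) for i, row in enumerate(H)
                   for j, bit in enumerate(row) if bit]

print(tanner_edges(H))
# [(0, 0), (0, 1), (0, 3), (1, 1), (1, 2), (1, 4), (2, 0), (2, 4), (2, 5)]
```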
The Technical Details Some Peculiarities
What’s different about LDPC codes?
I Not every binary linear code can be represented by a sparse bipartite graph.
I If there is such a representation then we get a low-density parity-check (LDPC) code.
I The sparsity is the key property.
I It allows the algorithmic efficiency of LDPC codes.
I Better than MLD are sub-optimal algorithms that are polynomial time by construction.
LDPC Codes c©Asha Rao 23 / 44
The Technical Details Some Peculiarities
Why do LDPC codes stand out?
I LDPC codes achieve remarkable performance with iterative decoding.
I This performance is close to the Shannon limit.
I They are some of the best codes for error control,
I both for communication as well as digital storage systems.
I Until the early 2000s, no analytical method (algebraic or geometric) was available for constructing these codes.
I Good LDPC codes until then were computer-generated, i.e. random.
I Encoding these long codes is quite complex.
LDPC Codes c©Asha Rao 24 / 44
Known Combinatorial Constructions
Outline
Some History
The Technical Details
Some Peculiarities
Known Combinatorial Constructions
Construction from BIBDs
A New Construction
LDPC Codes c©Asha Rao 25 / 44
Known Combinatorial Constructions
Combinatorial Constructions - Finite Geometry
I In 2000 Kou, Lin and Fossorier came up with the first geometric constructions.
I The construction was based on the lines and points of a finite geometry.
I They used Euclidean and projective geometries.
I These results were followed by others.
I Tanner, Sridhara and Fuja constructed LDPC codes from the subgroups of the multiplicative group of a prime field GF(p). (arXiv)
LDPC Codes c©Asha Rao 26 / 44
Known Combinatorial Constructions
LDPC Codes from BIBDs
I In 2004 Vasic and Milenkovic showed the construction from BIBDs.
I They introduced constructions based on cyclic difference families
I Also cycle-invariant difference sets,
I and affine 1-configurations.
I These codes have low-complexity implementation.
LDPC Codes c©Asha Rao 27 / 44
Known Combinatorial Constructions Construction from BIBDs
Balanced Incomplete Block Design (BIBD)
I A (v, c, λ) BIBD is an ordered pair (V, B),
I where V is a v-element set and
I B is a collection of b c-subsets of V, called blocks,
I such that every element of V is contained in exactly r blocks,
I and every 2-subset of V is contained in exactly λ blocks.
I Such a design is also called a 2-design.
I A BIBD with block size c = 3 is called a Steiner triple system.
LDPC Codes c©Asha Rao 28 / 44
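The definition can be checked mechanically. The helper below is our own sketch (not part of any construction in the talk); it verifies the block size and the λ-balance condition, exercised on the Fano plane as a (7, 3, 1) BIBD.

```python
from itertools import combinations
from collections import Counter

def is_bibd(V, B, c, lam):
    """True iff every block of B has size c and every 2-subset of V
    lies in exactly lam blocks."""
    if any(len(block) != c for block in B):
        return False
    pair_count = Counter(frozenset(p) for block in B
                                      for p in combinations(sorted(block), 2))
    return all(pair_count[frozenset(p)] == lam for p in combinations(V, 2))

# The Fano plane, a (7, 3, 1) BIBD, i.e. a Steiner triple system:
fano = [{0, 1, 3}, {1, 2, 4}, {2, 3, 5}, {3, 4, 6},
        {4, 5, 0}, {5, 6, 1}, {6, 0, 2}]
print(is_bibd(range(7), fano, c=3, lam=1))  # True
```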
Known Combinatorial Constructions Construction from BIBDs
Incidence Matrix of a BIBD
I The point-block incidence matrix of a (V, B) design is a v × b matrix Ap,b = (aij) in which, for i ∈ V,

aij = 1 if i ∈ Bj, and aij = 0 otherwise.

I Think of the points as the parity-check equations and the blocks as the bits of a linear block code.
I Then Ap,b defines a parity-check matrix H of an LDPC code.
LDPC Codes c©Asha Rao 29 / 44
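The definition above translates directly into code. This sketch (helper name and toy blocks are ours) builds the point-block incidence matrix that serves as the parity-check matrix H.

```python
def incidence_matrix(V, B):
    """v x b 0/1 matrix A with A[i][j] = 1 iff point i lies in block B_j."""
    return [[1 if i in Bj else 0 for Bj in B] for i in V]

blocks = [{0, 1, 3}, {1, 2, 4}, {2, 3, 5}]      # a few blocks for illustration
H = incidence_matrix(range(6), blocks)
print(H[0])  # row of point 0: it lies only in the first block -> [1, 0, 0]
```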
Known Combinatorial Constructions Construction from BIBDs
Example – BIBD(7,3,1)
The symmetric STS with v = 7 and b = 7.
I Let B = {B1, B2, . . . , B7} be the blocks given by
B1 = {0, 1, 3}, B2 = {1, 2, 4}, . . . , B7 = {0, 2, 6}
I The point-block incidence matrix is of the form:

Ap,b =
1 0 0 0 1 0 1
1 1 0 0 0 1 0
0 1 1 0 0 0 1
1 0 1 1 0 0 0
0 1 0 1 1 0 0
0 0 1 0 1 1 0
0 0 0 1 0 1 1
LDPC Codes c©Asha Rao 30 / 44
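The seven blocks above are the cyclic shifts of the base block {0, 1, 3} modulo 7. A short script (ours, for illustration) regenerates them and confirms both the Steiner property (λ = 1) and the constant row and column weights of the incidence matrix.

```python
from itertools import combinations

# Cyclic development of the base block {0, 1, 3} modulo 7.
blocks = [{(0 + i) % 7, (1 + i) % 7, (3 + i) % 7} for i in range(7)]

# Every 2-subset of Z_7 lies in exactly one block:
for pair in combinations(range(7), 2):
    assert sum(set(pair) <= b for b in blocks) == 1

A = [[1 if p in b else 0 for b in blocks] for p in range(7)]
print([sum(row) for row in A])          # each point lies in r = 3 blocks
print([sum(col) for col in zip(*A)])    # each block contains c = 3 points
```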
Known Combinatorial Constructions Construction from BIBDs
Parameters of the LDPC code constructed
I The column weight of a parity-check matrix constructed from a BIBD is c.
I The code rate can then be given by (b − rank(H))/b.
I Unfortunately, the rank of H is usually quite hard to determine.
I The rate of an LDPC code based on a 2-(v, c, λ) design can be bounded:

R ≥ (λ v(v−1)/(c(c−1)) − v) / (λ v(v−1)/(c(c−1))) = (b − v)/b

since b = λ v(v−1)/(c(c−1)) for such a design.
I Unfortunately, this bound is often very loose.
LDPC Codes c©Asha Rao 31 / 44
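The rank can be computed by Gaussian elimination over GF(2). The helper below is our own sketch (not from the talk), applied to the STS(7) incidence matrix of the previous slide: the binary rank is 4, so the exact rate is (7 − 4)/7 = 3/7.

```python
def rank_gf2(H):
    """Rank of a 0/1 matrix over GF(2), rows stored as integer bit masks."""
    rows = [int("".join(map(str, r)), 2) for r in H]
    rank = 0
    while any(rows):
        pivot = max(rows)                 # row with the highest leading bit
        rows.remove(pivot)
        # eliminate that leading bit from every row that shares it:
        rows = [r ^ pivot if r.bit_length() == pivot.bit_length() else r
                for r in rows]
        rank += 1
    return rank

blocks = [{(0 + i) % 7, (1 + i) % 7, (3 + i) % 7} for i in range(7)]
H = [[1 if p in b else 0 for b in blocks] for p in range(7)]
b = len(H[0])
print(rank_gf2(H), (b - rank_gf2(H)) / b)   # rank 4, so rate 3/7
```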
Known Combinatorial Constructions Construction from BIBDs
Some of the Limitations of this Construction
I The construction given by Vasic and Milenkovic (2004) is restricted to BIBDs with λ = 1.
I =⇒ no cycles of length 4 in the bipartite graph of the code.
I They give several constructions of STSs from families of abelian groups.
I However, such BIBDs only exist when v is a power of a prime of the form
v ≡ 1 (mod 6) or v ≡ 7 (mod 12),
or v = p^n where p ≡ 7 (mod 12).
LDPC Codes c©Asha Rao 32 / 44
A New Construction
Outline
Some History
The Technical Details
Some Peculiarities
Known Combinatorial Constructions
Construction from BIBDs
A New Construction
LDPC Codes c©Asha Rao 33 / 44
A New Construction
Some new results - Rao, Donovan, Yazici
I Zhang et al. (2010) construct LDPC codes from sets of MOLS.
I MOLS are only known to exist for orders a power of a prime.
I We have come up with a similar construction, but for all orders 6n, n ∈ Z.
I Our construction uses the cyclic group of order 2n.
I Most importantly, we are able to give explicit algebraic expressions for the rate of the code.
I These codes achieve high rate (≥ 0.8) for n ≥ 8.
LDPC Codes c©Asha Rao 34 / 44
A New Construction
Some details of our construction
I Given a Difference Covering Array DCA(κ, η; m):
I an η × κ matrix Q = [q(i, j)] with entries from an abelian group (G, ∗), such that
for all distinct pairs of columns 0 ≤ j, j′ ≤ κ − 1 the difference set
∆j,j′ = {q(i, j) ∗ (q(i, j′))⁻¹ | 0 ≤ i ≤ η − 1}
contains every element of G at least once.
I Here, take (G, ∗) = (Z2n, ∗) where n ≥ 2 and ∗ is addition modulo 2n.
I The DCA is said to be cyclic in this case.
LDPC Codes © Asha Rao 35 / 44
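The covering condition in this definition can be tested directly. Below is a minimal sketch (my own, not from the talk), taking ∗ to be addition in Z_m; the function name and the tiny example array are illustrative only.

```python
# Sketch: test the difference-covering condition over the cyclic group
# (Z_m, +); Q is given as a list of rows.

def covers_all_differences(Q, m):
    """True iff for every ordered pair of distinct columns j, j' the
    differences q(i,j) - q(i,j') mod m hit every element of Z_m."""
    rows, cols = len(Q), len(Q[0])
    for j in range(cols):
        for jp in range(cols):
            if j == jp:
                continue
            diffs = {(Q[i][j] - Q[i][jp]) % m for i in range(rows)}
            if diffs != set(range(m)):
                return False
    return True

# A tiny (hypothetical) DCA(2, 3; 3) over Z_3: both orderings of the
# column pair give the full difference set {0, 1, 2}.
example = [[0, 0],
           [0, 1],
           [0, 2]]
```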
A New Construction
Details
I We work with arrays Q = [q(i, j)] that satisfy the following properties:
P1. The first column of Q contains only 0; the remaining columns contain each entry of Z2n precisely once.
P2. For all pairs of distinct columns j and j′ with j ≠ 0 ≠ j′,
∆j,j′ = {q(i, j) − q(i, j′) mod 2n | 0 ≤ i ≤ 2n − 1} = Z2n \ {0}.
I Example (shown transposed)

QT4 =
  0 0 0 0
  0 1 2 3
  1 3 0 2

QT6 =
  0 0 0 0 0 0
  0 1 2 3 4 5
  1 3 5 0 2 4
  3 0 4 1 5 2

LDPC Codes © Asha Rao 36 / 44
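Properties P1 and P2 can be verified mechanically on the two example arrays. The following is my own sketch (not from the talk); Q is stored transposed, exactly as QT4 and QT6 are printed above.

```python
# Sketch: check P1 and P2 for an array given as its transpose QT
# (one inner list per column of Q), over Z_{2n}.

def satisfies_P1_P2(QT, two_n):
    Q = list(zip(*QT))                   # rows of Q, indexed by i
    # P1: column 0 is all zeros; every other column permutes Z_{2n}.
    if any(row[0] != 0 for row in Q):
        return False
    if any(sorted(col) != list(range(two_n)) for col in QT[1:]):
        return False
    # P2: distinct nonzero columns give difference set Z_{2n} \ {0}.
    for j in range(1, len(QT)):
        for jp in range(1, len(QT)):
            if j == jp:
                continue
            diffs = {(Q[i][j] - Q[i][jp]) % two_n for i in range(two_n)}
            if diffs != set(range(1, two_n)):
                return False
    return True

QT4 = [[0, 0, 0, 0], [0, 1, 2, 3], [1, 3, 0, 2]]
QT6 = [[0, 0, 0, 0, 0, 0], [0, 1, 2, 3, 4, 5],
       [1, 3, 5, 0, 2, 4], [3, 0, 4, 1, 5, 2]]
```

Note that 0 never occurs as a difference of two distinct nonzero columns: both columns permute Z2n, so the check tests equality with {1, …, 2n − 1}.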
A New Construction
Details
I Large numbers of DCAs satisfying P1 and P2 exist.
I Removing one row r0 where q(r0, 1) − q(r0, 2) = n mod 2n gives a PBIBD.
I Take a starter block SBj = (0, j, q(j, 2)) and
I construct sets of blocks Bj,a, for 0 ≤ a ≤ 2n − 1, a ∈ Z2n, as follows:
Bj,a = {a, (j + a (mod 2n)) + 2n, (q(j, 2) + a (mod 2n)) + 4n}.
LDPC Codes © Asha Rao 37 / 44
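The translation step transcribes directly into code. This is my own sketch (the function name is mine); it reproduces orbits of the n = 2 example that appears later in the talk.

```python
# Sketch: from starter block SB_j = (0, j, q(j,2)) build the blocks
# B_{j,a} = {a, ((j + a) mod 2n) + 2n, ((q(j,2) + a) mod 2n) + 4n}.

def blocks_for_starter(j, qj2, two_n):
    return [{a,
             (j + a) % two_n + two_n,
             (qj2 + a) % two_n + 2 * two_n}
            for a in range(two_n)]
```

For n = 2 (so 2n = 4) and starter (0, 0, 1), this yields {0, 4, 9}, {1, 5, 10}, {2, 6, 11}, {3, 7, 8}.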
A New Construction
Details
I The orbit of j, for j ∈ Z2n \ {r0}: Oj = {Bj,a | 0 ≤ a ≤ 2n − 1}.
I The starter block (0, q(r0, 1), q(r0, 2)) and the corresponding orbit have been omitted.
I The set
B = ∪j∈Z2n\{r0} Oj
is a set of b = 4n² − 2n 3-subsets (blocks) of V = Z6n.
LDPC Codes © Asha Rao 38 / 44
A New Construction
Example
SB0 = (0, 0, 1), SB1 = (0, 1, 3), SB3 = (0, 3, 2),

O0 = {B0,0 = {0, 4, 9}, B0,1 = {1, 5, 10}, B0,2 = {2, 6, 11}, B0,3 = {3, 7, 8}}
O1 = {B1,0 = {0, 5, 11}, B1,1 = {1, 6, 8}, B1,2 = {2, 7, 9}, B1,3 = {3, 4, 10}}
O3 = {B3,0 = {0, 7, 10}, B3,1 = {1, 4, 11}, B3,2 = {2, 5, 8}, B3,3 = {3, 6, 9}}

LDPC Codes © Asha Rao 39 / 44
A New Construction
Parity-check matrix
H = [ H0 H1 H3 ], with columns grouped by orbit and rows indexed by the points 0, …, 11:

H =
  1 0 0 0   1 0 0 0   1 0 0 0
  0 1 0 0   0 1 0 0   0 1 0 0
  0 0 1 0   0 0 1 0   0 0 1 0
  0 0 0 1   0 0 0 1   0 0 0 1
  1 0 0 0   0 0 0 1   0 1 0 0
  0 1 0 0   1 0 0 0   0 0 1 0
  0 0 1 0   0 1 0 0   0 0 0 1
  0 0 0 1   0 0 1 0   1 0 0 0
  0 0 0 1   0 1 0 0   0 0 1 0
  1 0 0 0   0 0 1 0   0 0 0 1
  0 1 0 0   0 0 0 1   1 0 0 0
  0 0 1 0   1 0 0 0   0 1 0 0

LDPC Codes © Asha Rao 40 / 44
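The matrix above can be rebuilt from the example blocks. A sketch of my own, with rows indexed by the points 0, …, 11 and columns by the blocks of O0, O1, O3 in order:

```python
# Sketch: point-block incidence matrix for the n = 2 example.
orbits = [
    [{0, 4, 9}, {1, 5, 10}, {2, 6, 11}, {3, 7, 8}],   # O_0
    [{0, 5, 11}, {1, 6, 8}, {2, 7, 9}, {3, 4, 10}],   # O_1
    [{0, 7, 10}, {1, 4, 11}, {2, 5, 8}, {3, 6, 9}],   # O_3
]
blocks = [b for orbit in orbits for b in orbit]

# H[p][c] = 1 iff point p lies in block c.
H = [[1 if p in b else 0 for b in blocks] for p in range(12)]
```

Each row and each column of H sums to 3: every point lies in one block per orbit, and every block contains three points.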
A New Construction
Sketch of Proof
I The incidence matrix H of the DCA has 4n² − 2n columns (blocks) and 6n rows (points).
I Property: all pairs of points of the constructed PBIBD occur in at most one block ⇒ the girth of the Tanner graph is ≥ 6.
I Property: each PBIBD block contains one point from each of V1 = {0, …, 2n − 1}, V2 = {2n, …, 4n − 1} and V3 = {4n, …, 6n − 1} ⇒ there exist at least 2 linearly dependent rows ⇒ the rank is ≤ 6n − 2.
LDPC Codes © Asha Rao 41 / 44
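Both properties are easy to confirm mechanically on the n = 2 example; the following sketch (my own, not from the talk) checks them.

```python
# Sketch: verify, for the n = 2 example, that (a) every pair of points
# lies in at most one block and (b) every block meets each of V1, V2, V3
# exactly once.
from itertools import combinations

blocks = [{0, 4, 9}, {1, 5, 10}, {2, 6, 11}, {3, 7, 8},
          {0, 5, 11}, {1, 6, 8}, {2, 7, 9}, {3, 4, 10},
          {0, 7, 10}, {1, 4, 11}, {2, 5, 8}, {3, 6, 9}]

def max_pair_count(blocks):
    """Largest number of blocks any single pair of points shares."""
    counts = {}
    for b in blocks:
        for pair in combinations(sorted(b), 2):
            counts[pair] = counts.get(pair, 0) + 1
    return max(counts.values())

def tripartite(blocks, n):
    """Every block has exactly one point in each of V1, V2, V3."""
    parts = [set(range(0, 2*n)), set(range(2*n, 4*n)), set(range(4*n, 6*n))]
    return all(len(b & part) == 1 for b in blocks for part in parts)
```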
A New Construction
Sketch of Proof
I A set of 6n − 2 linearly independent columns in the parity-check matrix:
1. the entire orbit O0,
2. all but the last block of the orbit O1,
3. all but the last three blocks of the orbit O2,
4. the first two blocks of the orbit On+1,
⇒ the rank of this matrix is ≥ 6n − 2.
I The rank of H, the parity-check matrix of the LDPC code, is therefore 6n − 2.
I Hence the rate of the LDPC code is (4n² − 8n + 2)/(4n² − 2n).
LDPC Codes © Asha Rao 42 / 44
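For the n = 2 example the rank can be computed outright. My own sketch below does Gaussian elimination over GF(2) on bitmask rows and recovers rank 6n − 2 = 10, hence rate (12 − 10)/12, agreeing with the closed form at n = 2.

```python
# Sketch: GF(2) rank of the example parity-check matrix (n = 2) and the
# resulting rate (b - rank(H)) / b.

blocks = [{0, 4, 9}, {1, 5, 10}, {2, 6, 11}, {3, 7, 8},
          {0, 5, 11}, {1, 6, 8}, {2, 7, 9}, {3, 4, 10},
          {0, 7, 10}, {1, 4, 11}, {2, 5, 8}, {3, 6, 9}]
# One bitmask per point: bit c is set iff the point lies in block c.
rows = [sum(1 << c for c, b in enumerate(blocks) if p in b)
        for p in range(12)]

def gf2_rank(rows):
    """Rank over GF(2); each row is an integer bitmask."""
    basis = {}                 # leading-bit position -> reduced row
    for r in rows:
        while r:
            lead = r.bit_length() - 1
            if lead not in basis:
                basis[lead] = r
                break
            r ^= basis[lead]
    return len(basis)

rate = (len(blocks) - gf2_rank(rows)) / len(blocks)
```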
A New Construction
Rates of some of our codes

  n   Length of code (4n² − 2n)   Rate of code
  3    30                         0.467
  4    56                         0.607
  5    90                         0.689
  6   132                         0.742
  7   182                         0.780
  8   240                         0.808
  9   306                         0.830

I In the past, combinatorial QC-LDPC codes (based on finite fields) have needed to be quite long (> 5000 bits) to achieve high rates (≥ 0.8).
LDPC Codes © Asha Rao 43 / 44
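The table is reproduced exactly by the two closed-form expressions; a quick check of my own:

```python
# Sketch: length 4n^2 - 2n and rate (4n^2 - 8n + 2)/(4n^2 - 2n)
# for n = 3..9, matching the table above.
table = [(n, 4*n*n - 2*n, (4*n*n - 8*n + 2) / (4*n*n - 2*n))
         for n in range(3, 10)]
for n, length, rate in table:
    print(f"{n}  {length:4d}  {rate:.3f}")
```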
A New Construction
Thank You
Ask me for any references you wish to see.
LDPC Codes © Asha Rao 44 / 44