Information Theory
Prepared by: Amit Degada, Teaching Assistant, ECED, NIT Surat
Transcript
Page 1

Information Theory

Prepared by: Amit Degada, Teaching Assistant, ECED, NIT Surat

Page 2

Goal of Today’s Lecture

- Information theory: some introduction
- Information measure
- Determining a function for information
- Average information per symbol
- Information rate
- Coding
- Shannon-Fano coding

Page 3

Information Theory

It is a study that combines communication engineering and mathematics.

A communication engineer has to contend with:
- Limited power
- Inevitable background noise
- Limited bandwidth

Page 4

Information Theory deals with

The Measure of Source Information

The Information Capacity of the channel

Coding

If the rate of information from a source does not exceed the capacity of the channel, then there exists a coding scheme such that information can be transmitted over the communication channel with an arbitrarily small probability of error, despite the presence of noise.

Block diagram: Source Encoder → Channel Encoder → Noisy Channel → Channel Decoder → Source Decoder; the coded chain behaves as an equivalent noiseless channel.

Page 5

Information Measure

The information measure is used to determine the information rate of discrete sources.

Consider two Messages

A dog bites a man: high probability, low information.

A man bites a dog: low probability, high information.

So we can say that

Information ∝ 1 / (probability of occurrence)

Page 6

Information Measure

We can also state three rules from intuition.

Rule 1: The information I(mk) approaches 0 as Pk approaches 1.

Mathematically, I(mk) → 0 as Pk → 1.

e.g., the sun rises in the east.

Page 7

Information Measure

Rule 2: The information content I(mk) must be a non-negative quantity.

It may be zero.

Mathematically, I(mk) >= 0 for 0 <= Pk <= 1.

e.g., the sun rises in the west.

Page 8

Information Measure

Rule 3: The information content of a message with higher probability is less than the information content of a message with lower probability.

Mathematically, I(mk) > I(mj) when Pk < Pj.

Page 9

Information Measure

We can also state, for two messages, that the information content of the combined message is the same as the sum of the information content of each message, provided their occurrences are mutually independent.

e.g. There will be Sunny weather Today. There will be Cloudy weather Tomorrow

Mathematically

I(mk and mj) = I(mk mj) = I(mk) + I(mj)

Page 10

Information Measure

So the question is: which function can we use to measure information?

Information = F(1/Probability)

Requirements the function must satisfy:
1. Its output must be a non-negative quantity.
2. Its minimum value is 0.
3. It should turn a product into a summation.

Information I(mk) = log_b(1/Pk)

Here b may be 2, e or 10

If b = 2, the unit is bits; if b = e, the unit is nats; if b = 10, the unit is decits.

Page 11

Conversion Between Units

log2(v) = ln(v) / ln(2) = log10(v) / log10(2)
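
As a quick illustration of the measure and the unit conversion above, here is a minimal Python sketch (the function name information and the example probability are my own, not from the slides):

import math

def information(p, base=2):
    # I(mk) = log_b(1/Pk); base 2 gives bits, base e gives nats, base 10 gives decits
    return math.log(1.0 / p, base)

p = 0.25                           # hypothetical symbol probability
in_bits   = information(p, 2)      # 2.0 bits
in_nats   = information(p, math.e) # about 1.386 nats
in_decits = information(p, 10)     # about 0.602 decits

# conversion check: log2(v) = ln(v)/ln(2) = log10(v)/log10(2)
assert abs(in_bits - in_nats / math.log(2)) < 1e-12
assert abs(in_bits - in_decits / math.log10(2)) < 1e-12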

Page 12

Example

A source generates one of four symbols during each interval, with probabilities P1 = 1/2, P2 = 1/4, P3 = P4 = 1/8. Find the information content of these messages.
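
Reading the exercise as asking for the information content of each symbol (an assumption on my part), it follows directly from I(mk) = log2(1/Pk):

I(s1) = log2(2) = 1 bit
I(s2) = log2(4) = 2 bits
I(s3) = I(s4) = log2(8) = 3 bits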

Page 13

Average Information Content

It is necessary to define the information content per symbol, since the communication channel deals with symbols.

Here we make the following assumptions:

1. The source is stationary, so the probabilities remain constant with time.

2. The successive symbols are statistically independent and are emitted at an average rate of r symbols per second.

Page 14

Average Information Content

Suppose a source emits M possible symbols s1, s2, ..., sM having probabilities of occurrence p1, p2, ..., pM.

For a long message having N symbols (N >> M):

s1 will occur about p1N times,
s2 will occur about p2N times, and so on, where

Σ (i = 1 to M) pi = 1

Page 15

Average Information Content

Since s1 occurs p1N times, the information contributed by s1 is p1N log2(1/p1).

Similarly, the information contributed by s2 is p2N log2(1/p2), and so on.

Hence the total information content is

I_total = N Σ (i = 1 to M) pi log2(1/pi)

and the average information is obtained by

H = I_total / N = Σ (i = 1 to M) pi log2(1/pi)   bits/symbol

It means that in a long message we can expect H bits of information per symbol. Another name for H is entropy.
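
A minimal Python sketch of this average (the function name is mine; the example reuses the four-symbol source from the earlier example):

import math

def entropy(probs):
    # H = sum over i of pi * log2(1/pi), in bits per symbol
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

# four-symbol source with probabilities 1/2, 1/4, 1/8, 1/8
H = entropy([0.5, 0.25, 0.125, 0.125])
print(H)   # 1.75 bits/symbol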

Page 16

Information Rate

Information rate = total information / time taken.

Suppose n symbols are transmitted at a rate of r symbols per second, so the total information is nH and the time taken is T = n / r. The information rate is therefore

R = nH / T = nH / (n / r)

R = rH   bits/sec
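
As a quick illustrative calculation (the numbers are my own, not from the slides): the four-symbol source above has H = 1.75 bits/symbol, so if it emits r = 2000 symbols per second, the information rate is R = rH = 2000 × 1.75 = 3500 bits/sec.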

Page 17

Some Maths

H satisfies the following inequality:

0 <= H <= log2(M)

The maximum H occurs when all the messages have equal probability.

Hence H also indicates the uncertainty about which symbol will occur.

As H approaches its maximum value, we cannot predict which message will occur.

Consider a system that transmits only 2 messages, each with probability of occurrence 0.5. In that case H = 1 bit/symbol,

and at every instant we cannot say which one of the two messages will occur.

So what happens for a source with more than two symbols?

Page 18

Variation of H Vs. p

Let's consider a binary source, i.e. M = 2.

Let the two symbols occur with probabilities p and 1 - p respectively, where 0 < p < 1.

So the entropy is

H = p log2(1/p) + (1 - p) log2(1/(1 - p))

This function of p is sometimes called the horseshoe function.
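
A minimal Python sketch of this binary-source entropy, evaluated at a few values of p to show the horseshoe shape (the function name is mine):

import math

def binary_entropy(p):
    # H(p) = p*log2(1/p) + (1-p)*log2(1/(1-p)), for 0 < p < 1
    return p * math.log2(1.0 / p) + (1 - p) * math.log2(1.0 / (1 - p))

for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(p, round(binary_entropy(p), 3))
# 0.1 0.469   0.3 0.881   0.5 1.0   0.7 0.881   0.9 0.469
# symmetric about p = 0.5, where H reaches its maximum of 1 bit/symbol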

Page 19

Variation of H Vs. P

Now we want to obtain the shape of the curve. Setting the first derivative to zero,

dH/dp = log2((1 - p) / p) = 0

which requires (1 - p)/p = 1, i.e. p = 0.5.

Verify that this is a maximum by double differentiation:

d²H/dp² = -(1 / ln 2) [1/p + 1/(1 - p)] < 0 for 0 < p < 1

so H attains its maximum value of 1 bit/symbol at p = 0.5.

Page 20

Example

Page 21

Maximum Information rate

We know that R = rH.

Also, H_max = log2(M).

Hence R_max = r log2(M).

Page 22

Coding for Discrete memoryless Source

Here, discrete means the source emits symbols from a fixed set of possible symbols.

Memoryless means the occurrence of the present symbol is independent of the previous symbols.

Average Code Length

N̄ = Σ (i = 1 to M) pi Ni

where Ni is the code length of symbol i in binary digits (binits).

Page 23

Coding for Discrete memoryless Source

Efficiency:

η = R / rb = H / N̄

where rb = r N̄ is the binit rate (binits per second).

Page 24

Coding for Discrete memoryless Source

Kraft’s inequality

K = Σ (i = 1 to M) 2^(-Ni) <= 1

A code is uniquely decipherable (separable) only if this inequality is satisfied.
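
A one-function Python sketch of this check (the function name is mine):

def kraft_sum(code_lengths):
    # K = sum over i of 2^(-Ni); K <= 1 is necessary for a uniquely decipherable binary code
    return sum(2 ** (-n) for n in code_lengths)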

Page 25

Example: Find the efficiency and check Kraft's inequality for the following codes.

mi    pi     Code I   Code II   Code III   Code IV
A     1/2    00       0         0          0
B     1/4    01       1         01         10
C     1/8    10       10        011        110
D     1/8    11       11        0111       111

Code II is not uniquely decipherable.
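
Applying the kraft_sum sketch from the previous slide to the code lengths in this table (an illustrative check, not part of the original slides):

for name, lengths in [("Code I", [2, 2, 2, 2]), ("Code II", [1, 1, 2, 2]),
                      ("Code III", [1, 2, 3, 4]), ("Code IV", [1, 2, 3, 3])]:
    print(name, kraft_sum(lengths))
# Code I: 1.0   Code II: 1.5 (violates Kraft)   Code III: 0.9375   Code IV: 1.0

For Code IV the average code length is N̄ = (1/2)(1) + (1/4)(2) + (1/8)(3) + (1/8)(3) = 1.75 binits/symbol, which equals H for this source, so its efficiency is 100%.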

Page 26

Shannon-Fano Coding Technique

Algorithm:

Step 1: Arrange all messages in descending order of probability.

Step 2: Divide the sequence into two groups in such a way that the sum of probabilities in each group is as nearly equal as possible.

Step 3: Assign 0 to the upper group and 1 to the lower group.

Step 4: Repeat Steps 2 and 3 within each group, and so on, until every group contains a single message.
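
A minimal recursive Python sketch of this procedure (the function name and structure are mine; it assumes the messages are already sorted in descending order of probability):

def shannon_fano(symbols):
    # symbols: list of (name, probability) pairs, sorted in descending probability
    # returns a dict mapping each name to its binary code string
    codes = {name: "" for name, _ in symbols}

    def split(group):
        if len(group) <= 1:
            return
        total = sum(p for _, p in group)
        # find the split point whose partial sum is closest to half the total (step 2)
        running, best_i, best_diff = 0.0, 1, float("inf")
        for i in range(1, len(group)):
            running += group[i - 1][1]
            diff = abs(2 * running - total)
            if diff < best_diff:
                best_i, best_diff = i, diff
        upper, lower = group[:best_i], group[best_i:]
        for name, _ in upper:    # step 3: 0 to the upper group
            codes[name] += "0"
        for name, _ in lower:    # step 3: 1 to the lower group
            codes[name] += "1"
        split(upper)             # step 4: repeat within each group
        split(lower)

    split(list(symbols))
    return codes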

Page 27

Example

Message Mi   Pi      No. of bits   Code
M1           1/2     1             0
M2           1/8     3             100
M3           1/8     3             101
M4           1/16    4             1100
M5           1/16    4             1101
M6           1/16    4             1110
M7           1/32    5             11110
M8           1/32    5             11111

Coding procedure: at each successive division, 0 is assigned to the upper group and 1 to the lower group; the columns of 0s and 1s on the original slide show the bit added at each split, and the codes above are the result.
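
Running the shannon_fano sketch from the algorithm slide on this source reproduces the codes in the table (an illustrative check, not part of the original slides):

source = [("M1", 1/2), ("M2", 1/8), ("M3", 1/8), ("M4", 1/16),
          ("M5", 1/16), ("M6", 1/16), ("M7", 1/32), ("M8", 1/32)]
print(shannon_fano(source))
# {'M1': '0', 'M2': '100', 'M3': '101', 'M4': '1100',
#  'M5': '1101', 'M6': '1110', 'M7': '11110', 'M8': '11111'}

Here every code length equals log2(1/Pi), so the average code length N̄ = Σ Pi Ni = 2.3125 binits/symbol equals H, and the efficiency of this code is 100%.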

Page 28

This can be downloaded from

www.amitdegada.weebly.com/download

After 5:30 Today

Page 29

Questions

Page 30

Thank You

