
Image compression Mechanism

Date post: 09-Apr-2017
Upload: amit-lute
Transcript
Page 1: Image compression Mechanism

G.H. RAISONI COLLEGE OF ENGINEERING (An Autonomous Institute under the UGC Act 1956 & affiliated to R.T.M. Nagpur University)

Department of Electronics & Telecommunication Engineering

Session 2013-14

Presentation on “IMAGE COMPRESSION IN DIGITAL IMAGE PROCESSING”

Submitted By:- AMIT V. LUTE

M.Tech I Sem. (Communication Engg.)

Roll No. 13

Page 2: Image compression Mechanism

•Image compression addresses the problem of reducing the amount of data required to represent a digital image with no significant loss of information.

• Interest in image compression dates back more than 25 years.

•The field is now poised for significant growth through the practical application of the theoretical work that began in the 1940s.

•C.E. Shannon and others first formulated the probabilistic view of information and its representation, transmission, and compression.

Image Compression

Page 3: Image compression Mechanism

• Images take a lot of storage space:

1. A 1024 × 1024 image at 32 bits per pixel requires 4 MB.

2. One minute of video at 640 × 480 pixels, 24 bits per pixel, and 30 frames per second requires about 1.54 GB.

• Many bytes take a long time to transfer over slow connections. Suppose we have a 56,000 bps link:

1. The 4 MB image will take almost 10 minutes.

2. The 1.54 GB of video will take almost 66 hours.

• Storage problems, plus the desire to exchange images over the Internet, have led to a large interest in image compression algorithms.

Need of Image Compression:-
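The arithmetic on this slide can be checked with a short sketch (a minimal illustration, assuming binary units with 1 MB = 2^20 bytes and no protocol overhead on the 56,000 bps link):

```python
def transfer_seconds(num_bytes: int, bps: int = 56_000) -> float:
    """Seconds needed to move num_bytes over a link of bps bits per second."""
    return num_bytes * 8 / bps

# 1024 x 1024 image at 32 bits per pixel -> 4 MB
image_bytes = 1024 * 1024 * 32 // 8
print(image_bytes / 2**20, "MB")

# 640 x 480 pixels, 24 bits/pixel, 30 frames/s, 60 s of video -> ~1.54 GB
video_bytes = 640 * 480 * 3 * 30 * 60
print(video_bytes / 2**30, "GB")

print(transfer_seconds(image_bytes) / 60, "minutes")   # ~10 minutes
print(transfer_seconds(video_bytes) / 3600, "hours")   # ~66 hours
```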

Page 4: Image compression Mechanism

•If more data are used than is strictly necessary, then we say that there is redundancy in the dataset.

•Data redundancy is not an abstract concept but a mathematically quantifiable entity. If n1 and nc denote the number of information-carrying units in two data sets that represent the same information, the relative data redundancy RD of the first data set (n1) can be defined as

RD = 1 − 1/CR (1)

where CR is the compression ratio, defined as

CR = n1/nc (2)

•Here n1 is the number of information-carrying units used in the uncompressed dataset and nc is the number of units in the compressed dataset. The same units should be used for n1 and nc; bits or bytes are typically used. When nc << n1, CR takes a large value and RD approaches 1. Larger values of CR indicate better compression.

Compression algorithms remove redundancy:-
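Equations (1) and (2) can be expressed as a tiny helper (a sketch; the 4 MB → 1 MB figures below are hypothetical, and n1 and nc must be in the same units):

```python
def compression_ratio(n1: int, nc: int) -> float:
    """CR = n1 / nc  (equation 2)."""
    return n1 / nc

def relative_redundancy(n1: int, nc: int) -> float:
    """RD = 1 - 1/CR  (equation 1)."""
    return 1 - 1 / compression_ratio(n1, nc)

# A hypothetical 4 MB image compressed to 1 MB:
cr = compression_ratio(4 * 2**20, 1 * 2**20)    # CR = 4.0
rd = relative_redundancy(4 * 2**20, 1 * 2**20)  # RD = 0.75
```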

Page 5: Image compression Mechanism

Input image f(x,y) → Source encoder (data redundancy reduction) → Channel encoder → Channel → Channel decoder → Source decoder (reconstruction) → Reconstructed image f’(x,y)

•An input image is fed into the encoder which creates a set of symbols from the input data.

•After transmission over the channel, the encoded representation is fed to the decoder, where a reconstructed output image f’(x,y) is generated.

A general algorithm for data compression and image reconstruction

Page 6: Image compression Mechanism

Example of Image Compression

Original Image → Encoder → Bitstream (0101100111...) → Decoder → Decoded Image

Page 7: Image compression Mechanism

Data compression algorithms can be divided into two groups:-

1. Lossless algorithms remove only the redundancy present in the data.

•The reconstructed image is identical to the original, i.e., all of the information present in the input image has been preserved by compression.

2. Higher compression is possible using lossy algorithms, which create redundancy (by discarding some information) and then remove it.

Page 8: Image compression Mechanism

Three basic types of redundancy can be identified in a

single image:

1) Coding redundancy

2) Interpixel redundancy

3) Psychovisual redundancy

Types of redundancy:-

Page 9: Image compression Mechanism

•Quantized data are represented using code words.

•The code words are ordered in the same way as the intensities that they represent; thus the bit pattern 00000000, corresponding to the value 0, represents the darkest points in an image, and the bit pattern 11111111, corresponding to the value 255, represents the brightest points.

•If the size of the code word is larger than is necessary to represent all quantization levels, then we have coding redundancy.

•An 8-bit coding scheme has the capacity to represent 256 distinct levels of intensity in an image. But if there are only 16 different grey levels in an image, the image exhibits coding redundancy because it could be represented using a 4-bit coding scheme.

•Coding redundancy can also arise from the use of fixed-length code words.

Coding Redundancy
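The 16-grey-level case above can be sketched as follows (the pixel values are hypothetical; any image using at most 16 distinct 8-bit levels can be recoded this way):

```python
image = [0, 16, 16, 32, 240, 240, 0, 48]   # 8-bit pixels, few distinct levels

levels = sorted(set(image))                # distinct grey levels actually used
assert len(levels) <= 16                   # fits in 4 bits

code = {g: i for i, g in enumerate(levels)}   # 4-bit code word per level
encoded = [code[g] for g in image]            # each entry now needs only 4 bits

decode = {i: g for g, i in code.items()}
assert [decode[c] for c in encoded] == image  # lossless round trip

cr = 8 / 4           # bits per pixel before / after
rd = 1 - 1 / cr      # relative redundancy = 0.5
```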

Page 10: Image compression Mechanism

•The grey-level histogram of an image can also provide a great deal of insight into the construction of codes that reduce the amount of data used to represent it.

•Let us assume that a discrete random variable rk in the interval (0,1) represents the grey levels of an image and that each rk occurs with probability Pr(rk). This probability can be estimated from the histogram of the image using Pr(rk) = hk/n for k = 0, 1, …, L−1 (3)

•Here L is the number of grey levels, hk is the frequency of occurrence of grey level k (the number of times the kth grey level appears in the image), and n is the total number of pixels in the image. If the number of bits used to represent each value of rk is l(rk), the average number of bits required to represent each pixel is:

Lavg = Σk=0..L−1 l(rk) Pr(rk) (4)

Coding redundancy
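Equations (3) and (4) can be combined into a short sketch (the pixel values and code-word lengths below are hypothetical):

```python
from collections import Counter

def average_code_length(pixels, code_length):
    """Lavg = sum over k of l(r_k) * Pr(r_k), with Pr(r_k) = h_k / n  (eqs. 3-4)."""
    n = len(pixels)
    hist = Counter(pixels)                  # h_k per grey level
    return sum(code_length[g] * (h / n) for g, h in hist.items())

# Hypothetical variable-length code: the common level gets the short word.
pixels = [0, 0, 0, 0, 0, 0, 1, 2]          # Pr(0)=0.75, Pr(1)=Pr(2)=0.125
lengths = {0: 1, 1: 2, 2: 2}               # l(r_k) in bits
lavg = average_code_length(pixels, lengths)  # 0.75*1 + 0.125*2 + 0.125*2 = 1.25
```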

Page 11: Image compression Mechanism

•The intensity at a pixel may correlate strongly with the intensity value of its neighbors.

•Because the value of any given pixel can be reasonably predicted from the values of its neighbors, much of the visual contribution of a single pixel to an image is redundant; it could have been guessed on the basis of its neighbors’ values.

•We can remove this redundancy by representing changes in intensity rather than absolute intensity values.

•For example, the differences between adjacent pixels can be used to represent an image. Transformations of this type are referred to as mappings. They are called reversible if the original image elements can be reconstructed from the transformed data set.

Interpixel redundancy

Page 12: Image compression Mechanism

•Consider an image with 256 possible grey levels.

•We can apply uniform quantization to four bits, or 16 possible levels. The resulting compression ratio is 2:1.

•However, false contouring appears in the previously smooth regions of the original image.

•Significant improvements are possible with quantization that takes advantage of the peculiarities of the human visual system. One method used to produce such results is known as improved gray-scale (IGS) quantization.

•It recognizes the eye’s inherent sensitivity to edges and breaks them up by adding to each pixel a pseudo-random number, generated from the low-order bits of neighboring pixels, before quantizing the result.

Psychovisual redundancy
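The IGS procedure above can be rendered as a minimal sketch (assuming 8-bit pixels quantized to 4 bits, with the low four bits of the running sum acting as the pseudo-random perturbation; the guard for pixels whose high nibble is 1111 follows the textbook description, and the pixel values are hypothetical):

```python
def igs_quantize(pixels):
    """Quantize 8-bit pixels to 4-bit IGS codes."""
    out = []
    prev_sum = 0
    for p in pixels:
        if p & 0xF0 == 0xF0:            # high nibble 1111: add nothing (avoids overflow)
            s = p
        else:
            s = p + (prev_sum & 0x0F)   # add low 4 bits of the previous sum
        out.append(s >> 4)              # keep the 4 most significant bits
        prev_sum = s
    return out

codes = igs_quantize([108, 139, 135, 244])
assert all(0 <= c <= 15 for c in codes)   # every code fits in 4 bits
```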

Page 13: Image compression Mechanism

•Delta compression is a very simple, lossless technique in which we recode an image in terms of the difference in grey level between each pixel and the previous pixel in the row.

•The first pixel must be represented as an absolute value, but subsequent values can be represented as differences, or “deltas”.

For example:

FIGURE: Example of delta encoding. The first value in the encoded file is the same as the first value in the original file. Thereafter, each sample in the encoded file is the difference between the current and previous sample in the original file.

Delta compression
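The scheme above can be sketched in a few lines (one row of hypothetical pixel values; a practical format would still need to encode the signed deltas compactly, e.g. one byte each):

```python
def delta_encode(row):
    """First pixel absolute, then differences between successive pixels."""
    return [row[0]] + [b - a for a, b in zip(row, row[1:])]

def delta_decode(deltas):
    """Rebuild the row by accumulating the deltas."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

row = [100, 101, 103, 103, 102, 150]
encoded = delta_encode(row)           # [100, 1, 2, 0, -1, 48]
assert delta_decode(encoded) == row   # lossless round trip
```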

Page 14: Image compression Mechanism

•A “run” of consecutive pixels whose grey levels are identical is replaced with two values: the length of the run and the grey level of all pixels in the run.

•Example: (50, 50, 50, 50) becomes (4, 50).

•Especially suited to synthetic images containing large homogeneous regions.

•The encoding process is effective only if there are sequences of four or more repeating characters.

CTRL | COUNT | CHAR

FIGURE: Format of a three-byte code word.

CTRL – control character used to indicate compression
COUNT – number of counted characters in a stream of the same character
CHAR – the repeating character

Run length encoding Compression
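The run-length idea above as a minimal sketch (plain (count, value) pairs, without the CTRL byte or the four-character minimum used by practical formats):

```python
from itertools import groupby

def rle_encode(pixels):
    """Replace each run of identical grey levels with a (count, value) pair."""
    return [(len(list(run)), value) for value, run in groupby(pixels)]

def rle_decode(pairs):
    """Expand each (count, value) pair back into a run of pixels."""
    return [value for count, value in pairs for _ in range(count)]

pixels = [50, 50, 50, 50, 7, 7, 9]
pairs = rle_encode(pixels)            # [(4, 50), (2, 7), (1, 9)]
assert rle_decode(pairs) == pixels    # lossless round trip
```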

Page 15: Image compression Mechanism

•RLE is used as part of various image file formats such as BMP, PCX, and TIFF, and also in the PDF file format, but RLE also exists as a separate compression technique and file format.

•The MS Windows standard for RLE has the same file format as the well-known BMP format, but its RLE variant is defined only for 4-bit and 8-bit color images.

•Two types of RLE compression are used, 4-bit RLE and 8-bit RLE; as expected, the first type is used for 4-bit images, the second for 8-bit images.

Examples of RLE implementations

Page 16: Image compression Mechanism

•Statistical coding Compression techniques remove the coding redundancy in an image.

•Information theory tells us that the amount of information conveyed by a codeword relates to its probability of occurrence.

•Code words that occur rarely convey more information than code words that occur frequently in the data.

•A random event i that occurs with probability P(i) is said to contain I(i) = −log P(i) units of information (its self-information).

•If P(i) = 1 (that is, the event always occurs), then I(i) = 0 and no information is attributed to it. Let us assume that an information source generates a random sequence of symbols (grey levels).

•The probability of occurrence of grey level i is P(i). If we have grey levels 0 to 2^b − 1 (i.e., 2^b symbols), the average self-information over all outputs is called the entropy.

Statistical coding Compression
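The self-information and entropy definitions above can be checked with a short sketch (using log base 2, so the results are in bits; the probabilities are hypothetical):

```python
from math import log2

def self_information(p: float) -> float:
    """I = -log2 p, in bits; certain events (p = 1) carry no information."""
    return -log2(p)

def entropy(probs) -> float:
    """H = sum of p * I(p) over all symbols with p > 0."""
    return sum(p * self_information(p) for p in probs if p > 0)

assert self_information(1.0) == 0.0      # the certain event carries no information
assert self_information(0.5) == 1.0      # a fair coin flip carries 1 bit

h = entropy([0.5, 0.25, 0.25])           # 1.5 bits per symbol
```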

Page 17: Image compression Mechanism

• After computing and normalizing the histogram, the task is to construct a set of code words to represent each pixel value. These code words must have the following properties:

1. Different code words must have different lengths.

2. Code words that occur infrequently (low probability) should use more bits; code words that occur frequently (high probability) should use fewer bits.

3. It must not be possible to mistake a particular sequence of concatenated code words for any other sequence.

The average bit length of the code words is

Lavg = Σi=0..2^b−1 l(i) P(i)

where l(i) is the length of the code word used to represent grey level i, and b is the smallest number of bits needed to represent the number of quantization levels observed in the image. The upper limit for Lavg is b and the lower limit is the entropy.

Statistical coding Compression
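A code with these properties can be built with Huffman's algorithm (not named on this slide, but the classic construction; the sketch below uses the standard-library heap, hypothetical probabilities, and checks that entropy ≤ Lavg ≤ b holds):

```python
import heapq
from math import log2

def huffman_lengths(probs):
    """Return the Huffman code-word length for each symbol index."""
    heap = [(p, [i]) for i, p in enumerate(probs)]
    lengths = [0] * len(probs)
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)
        p2, s2 = heapq.heappop(heap)
        for i in s1 + s2:               # every merge adds one bit to these symbols
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return lengths

probs = [0.5, 0.25, 0.125, 0.125]       # four grey levels; fixed-length needs b = 2 bits
lengths = huffman_lengths(probs)        # [1, 2, 3, 3]
lavg = sum(l * p for l, p in zip(lengths, probs))   # average bits per symbol
h = -sum(p * log2(p) for p in probs)                # entropy of the source
b = 2
assert h <= lavg <= b                   # entropy is the lower bound, b the upper
```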

Page 18: Image compression Mechanism

Advantages of Image Compression:

•Less disk space: more images can be stored in the same space.

•Faster writing and reading of images.

•Faster image file transfer.

•Variable dynamic range of images.

•Byte-order independence in images.

Page 19: Image compression Mechanism

Applications of Image Compression

• Satellite imagery

• Fax

• Digital cameras

• High definition television (HDTV)

• DVD technology

Page 20: Image compression Mechanism

References:-

[1] R. C. Gonzalez and R. E. Woods, "Digital Image Processing", 2nd Ed., Prentice Hall, 2004.

[2] Subhasis Saha, "Image Compression - from DCT to Wavelets : A Review.”

[3] G. K. Wallace, "The JPEG Still Picture Compression Standard.”

Page 21: Image compression Mechanism

THANK YOU…!!!!

