Dip Answer Key Sep2010


Transcript

    Sep 2010, 10 marks

    Explain with a neat block diagram the steps involved in compressing a video signal.

    Sep 2010

    Discuss the following:
    (i) Simultaneous contrast (5 marks)
    (ii) Optical illusion (5 marks)

    Sep 2010, 10 marks

    What is image segmentation? Describe briefly thresholding and adaptive thresholding. (3+3+4)

    Introduction

    Segmentation is a very important image-analysis pre-processing step.

    Note: in image analysis the inputs are images, but the outputs are attributes extracted from these images.

    More points

    Segmentation is mostly based on rather ad hoc methods.

    Segmentation usually makes sense only in the scope of a particular application; it depends on the application and its semantics. Methods are not universally applicable to all images.

    Video on Fundamentals

    Image segmentation approach

    Image segmentation algorithms are based on one of two basic properties of intensity values: discontinuity and similarity.

    Discontinuity: the assumption is that the boundaries of regions are sufficiently different from each other and from the background to allow boundary detection based on local discontinuities in intensity. Edge-based segmentation is the principal approach used in this category.
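    A minimal sketch of the discontinuity idea (an illustration added here, assuming a 2-D grayscale NumPy array; the function name and the threshold value are arbitrary): compute a Sobel gradient magnitude and keep the pixels where the local discontinuity is strong.

```python
import numpy as np
from scipy import ndimage

def edge_map(img: np.ndarray, thresh: float = 50.0) -> np.ndarray:
    """Mark pixels where local intensity discontinuities are strong."""
    img = img.astype(float)
    gx = ndimage.sobel(img, axis=1)   # horizontal gradient
    gy = ndimage.sobel(img, axis=0)   # vertical gradient
    grad = np.hypot(gx, gy)           # gradient magnitude
    return grad > thresh              # True at candidate boundary pixels
```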

    Continued

    Similarity: region-based segmentation approaches in the second category are based on partitioning an image into regions that are similar according to a set of predefined criteria.

    Thresholding

    Thresholding is a fundamental approach that is quite popular when speed is an important factor.

    Threshold-based segmentation partitions the image according to a global property, usually intensity, where the global knowledge is represented by the intensity histogram.
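    A minimal sketch of global thresholding, assuming an 8-bit grayscale NumPy array; using the image mean as the threshold is only an illustrative choice, since in practice the threshold would be picked from the intensity histogram.

```python
import numpy as np

def global_threshold(img: np.ndarray) -> np.ndarray:
    """Segment an 8-bit grayscale image with a single global threshold."""
    t = img.mean()                              # one global value for the whole image
    return (img >= t).astype(np.uint8) * 255    # foreground = 255, background = 0
```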

    How adaptive thresholding works

    Whereas the conventional thresholding operator uses a global threshold for all pixels, adaptive thresholding changes the threshold dynamically over the image. This more sophisticated version of thresholding can accommodate changing lighting conditions in the image, e.g. those occurring as a result of a strong illumination gradient or shadows.

    Adaptive thresholding typically takes a grayscale or color image as input and, in the simplest implementation, outputs a binary image representing the segmentation. For each pixel in the image a threshold has to be calculated: if the pixel value is below the threshold it is set to the background value, otherwise it assumes the foreground value.

    There are two main approaches to finding the threshold: (i) the Chow and Kaneko approach and (ii) local thresholding.

    Basis: the assumption behind both methods is that smaller image regions are more likely to have approximately uniform illumination, and are thus more suitable for thresholding.

    (i) The Chow and Kaneko approach and (ii) local thresholding

    Chow and Kaneko approach: the image is divided into an array of overlapping sub-images, and the optimum threshold for each sub-image is found by investigating its histogram.

    Local thresholding: the threshold for each pixel is found by statistically examining the intensity values of its local neighborhood.
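    A minimal sketch of the local-thresholding idea, assuming a grayscale NumPy array; the window size and the offset constant are illustrative assumptions, not values from the answer key.

```python
import numpy as np
from scipy import ndimage

def local_mean_threshold(img: np.ndarray, win: int = 15, c: float = 5.0) -> np.ndarray:
    """Adaptive thresholding: compare each pixel with the mean of its neighborhood."""
    img = img.astype(float)
    local_mean = ndimage.uniform_filter(img, size=win)   # per-pixel neighborhood mean
    return (img >= local_mean - c).astype(np.uint8) * 255
```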

    10 marks

    Explain JPEG compression

    JPEG Compression (the steps include the DCT part in the next question, so read in continuation)

    Lossless and lossy image compression

    The JPEG process is a widely used form of lossy image compression that centers around the DCT.

    JPEG Compression process

    Subsampling

    JPEG uses the 4:2:0 subsampling scheme. This scheme subsamples the chroma channels in both the horizontal and vertical directions by a factor of two, so that, theoretically, an average chroma pixel is positioned between the rows and columns.
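    A rough sketch of 4:2:0-style subsampling on one chroma plane (assuming even dimensions): each output sample is the average of a 2x2 block, which corresponds to the "average chroma pixel" positioned between the rows and columns.

```python
import numpy as np

def subsample_420(chroma: np.ndarray) -> np.ndarray:
    """Subsample a chroma plane by 2 horizontally and vertically (4:2:0 style)."""
    h, w = chroma.shape
    blocks = chroma.astype(float).reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))   # one averaged sample per 2x2 block
```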

    Subsampling YCrCb

    Sep 2010, 10 marks

    Discuss the working principle of the DCT in image compression.

    DCT - Introduction

    The discrete cosine transform (DCT) is a technique for converting a signal into elementary frequency components. It is widely used in image compression. All three of the following standards employ a basic technique known as the discrete cosine transform (DCT). Developed by Ahmed, Natarajan, and Rao [1974], the DCT is a close relative of the discrete Fourier transform (DFT).

    JPEG - for compression of still images
    MPEG - for compression of motion video
    H.261 - for compression of video telephony and teleconferencing

    JPEG and DCT

    Psychophysical experiments suggest that humans are much less likely to notice the loss of very high spatial frequency components than lower-frequency components.

    JPEG uses the DCT to reduce the high-frequency content and then efficiently code the result into a bit string. As the frequency becomes higher it becomes less important, and the coefficient may be set to zero.
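    A small sketch of this point using SciPy's DCT on one 8x8 block: keeping only the low-frequency 4x4 corner of the coefficients is an arbitrary illustrative choice, not the actual JPEG quantization step, but it shows that discarding high-frequency coefficients changes a smooth block very little.

```python
import numpy as np
from scipy.fft import dctn, idctn

x = np.arange(8, dtype=float)
block = 50 + 5 * x[:, None] + 10 * np.sin(x / 3.0)[None, :]   # smooth 8x8 test block

coeffs = dctn(block, norm='ortho')      # 2D DCT of the block
kept = np.zeros_like(coeffs)
kept[:4, :4] = coeffs[:4, :4]           # keep only the low-frequency corner

approx = idctn(kept, norm='ortho')      # reconstruct from truncated coefficients
print(np.abs(block - approx).max())     # error is small compared with the pixel range
```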

    DCT

    Each image is subdivided into 8x8 blocks, and the 2D DCT is applied to each block image f(i,j), with the output being the DCT coefficients F(u,v) for each block.
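    A minimal sketch of the block subdivision, assuming image dimensions that are multiples of 8 and using SciPy's dctn for the 2D DCT of each block.

```python
import numpy as np
from scipy.fft import dctn

def blockwise_dct(img: np.ndarray, bs: int = 8) -> np.ndarray:
    """Apply the 2D DCT independently to each bs x bs block of a grayscale image."""
    h, w = img.shape
    out = np.empty_like(img, dtype=float)
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            out[i:i+bs, j:j+bs] = dctn(img[i:i+bs, j:j+bs].astype(float), norm='ortho')
    return out   # F(u, v) coefficients stored block by block
```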

    The Two-Dimensional DCT

    The one-dimensional DCT is useful in processing one-dimensional signals such as speech waveforms. For analysis of two-dimensional (2D) signals such as images, we need a 2D version of the DCT. For an n x m matrix s, the 2D DCT is computed in a simple way: the 1D DCT is applied to each row of s and then to each column of the result. Thus, the transform of s is given by the 2D DCT equation below.
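    The row-then-column procedure can be checked directly with SciPy's 1D DCT; this is only a quick verification sketch assuming the orthonormal DCT-II.

```python
import numpy as np
from scipy.fft import dct, dctn

s = np.random.default_rng(1).random((4, 6))      # an n x m matrix

rows_then_cols = dct(dct(s, norm='ortho', axis=1), norm='ortho', axis=0)
direct_2d = dctn(s, norm='ortho')

print(np.allclose(rows_then_cols, direct_2d))    # True: the 2D DCT is separable
```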

    The DCT Equation
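    A commonly used form of the 2D DCT (the orthonormal DCT-II; normalization conventions vary between texts, so treat this as a reference form rather than the exact slide content):

$$F(u,v) = \frac{2}{\sqrt{nm}}\, C(u)\, C(v) \sum_{i=0}^{n-1} \sum_{j=0}^{m-1} f(i,j)\, \cos\frac{(2i+1)u\pi}{2n}\, \cos\frac{(2j+1)v\pi}{2m}, \qquad C(k) = \begin{cases} 1/\sqrt{2} & k = 0 \\ 1 & k > 0 \end{cases}$$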

    The DCT Matrix
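    For an N x N block (N = 8 in JPEG), the DCT matrix T that implements the block transform as D = T M T^T is commonly defined as follows (again a standard reference form, assumed rather than taken from the slide):

$$T_{ij} = \begin{cases} \dfrac{1}{\sqrt{N}} & i = 0 \\[4pt] \sqrt{\dfrac{2}{N}}\,\cos\dfrac{(2j+1)\,i\,\pi}{2N} & i > 0 \end{cases}, \qquad 0 \le i, j \le N-1$$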

    Doing the DCT on an 8x8 block

    Performing the DCT (final step)

    JPEG Compression (continued)


    Sep 2010, 10 marks

    What are the differences between smoothing and sharpening filters? Explain with an example.

    Image enhancements

    Smoothing

    Image sensors and transmission channels may produce a certain type of noise characterized by random and isolated pixels with out-of-range gray levels, which are either much lower or higher than the gray levels of the neighboring pixels. The challenge is to distinguish between the details (small features, edges, lines, etc.) in the image and this type of isolated, out-of-range noise, with the goal of keeping the former while suppressing the latter.

    Smoothing Domain Filters

    Edges and other sharp transitions (such as noise) in the gray levels of an image contribute significantly to the high-frequency content of its Fourier transform. Hence smoothing (blurring) is achieved in the frequency domain by attenuating a specified range of high-frequency components in the transform of a given image. Low-pass filtering would be part of the preprocessing stage for an image analysis system looking for features in an image bank.

    Low-pass filters reveal the underlying two-dimensional waveform with a long wavelength or low-frequency image contrast at the expense of higher spatial frequencies. Low-frequency information allows the identification of the background pattern, and produces an output image in which the detail has been smoothed or removed from the original.

    In the spatial domain, choosing the median value from the moving window does a better job of suppressing noise and preserving edges than the mean filter [3-4].
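    A small sketch contrasting the two spatial smoothing filters mentioned above, assuming a grayscale NumPy array; the window size is an illustrative choice.

```python
import numpy as np
from scipy import ndimage

def smooth_mean(img: np.ndarray, win: int = 3) -> np.ndarray:
    """Mean (averaging) filter: suppresses noise but also blurs edges."""
    return ndimage.uniform_filter(img.astype(float), size=win)

def smooth_median(img: np.ndarray, win: int = 3) -> np.ndarray:
    """Median filter: removes isolated out-of-range pixels while preserving edges."""
    return ndimage.median_filter(img, size=win)
```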

    Sharpening

    As the opposite of smoothing operations, image sharpening has the goal of enhancing the details (the high spatial frequency components) of the image.

    A high-pass filtered image can be obtained as the difference between the original image and its low-pass filtered version.
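    A minimal spatial-domain sketch of this idea (high-pass = original minus low-pass, then add some multiple of it back), assuming an 8-bit grayscale NumPy array; the Gaussian sigma and the gain k are illustrative.

```python
import numpy as np
from scipy import ndimage

def sharpen(img: np.ndarray, sigma: float = 2.0, k: float = 1.0) -> np.ndarray:
    """Sharpen by adding back a multiple of the high-pass component."""
    img = img.astype(float)
    lowpass = ndimage.gaussian_filter(img, sigma=sigma)   # smoothed version
    highpass = img - lowpass                              # detail (high frequencies)
    return np.clip(img + k * highpass, 0, 255)            # sharper, de-blurred result
```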

    Sharpening Domain Filters

    Image sharpening can be achieved in the frequency domain by a high-pass filtering process. Simply subtracting the low-frequency image resulting from a low-pass filter from the original image can enhance high spatial frequencies. High-frequency information allows us either to isolate or to amplify the local detail. If the high-frequency detail is amplified by adding back to the image some multiple of the high-frequency component extracted by the filter, then the result is a sharper, de-blurred image.

    The above two approaches can be implemented in the following manner:

    The Fourier transform of an image, as expressed by the amplitude spectrum, is a breakdown of the image into its frequency or scale components. Filtering of these components uses frequency domain filters that operate on the amplitude spectrum of an image and remove, attenuate or amplify the amplitudes in specified wavebands. The frequency domain can be represented as a 2-dimensional scatter plot known as a Fourier spectrum, in which lower frequencies fall at the center and progressively higher frequencies are plotted outward [3-6].

    Filtering in the frequency domain consists of 3 steps:

    1. Obtain the Fourier transform of the original image and compute the Fourier spectrum.
    2. Select an appropriate filter transfer function and multiply it by the elements of the Fourier spectrum. (Here the transfer functions are the standard Gaussian and Butterworth filters used as low- and high-pass filters with varying input arguments. The filter order and value of sigma have been selected by default as 1 and 10 respectively.)
    3. Take the inverse Fourier transform of the result to obtain the filtered image.
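    A sketch of the three steps with a Gaussian low-pass transfer function (sigma = 10, as in the default mentioned above); a high-pass filter would use 1 - H instead. This assumes a grayscale NumPy array and is an illustration of the procedure, not the exact code behind the slides.

```python
import numpy as np

def gaussian_lowpass_freq(img: np.ndarray, sigma: float = 10.0) -> np.ndarray:
    """Low-pass filter an image in the frequency domain with a Gaussian transfer function."""
    img = img.astype(float)
    F = np.fft.fftshift(np.fft.fft2(img))        # step 1: Fourier transform, centered

    h, w = img.shape
    y = np.arange(h) - h // 2
    x = np.arange(w) - w // 2
    d2 = y[:, None] ** 2 + x[None, :] ** 2       # squared distance from the spectrum center
    H = np.exp(-d2 / (2.0 * sigma ** 2))         # Gaussian low-pass transfer function

    G = F * H                                    # step 2: multiply the spectrum by H
    return np.real(np.fft.ifft2(np.fft.ifftshift(G)))   # step 3: inverse transform

# A high-pass (sharpening) filter uses 1 - H in place of H.
```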

    Continued

    Refer to the slides "04 Enhancement - Spatial Filtering - Ch3".

    Sep 2010

    Q2B - What is morphological image processing? Explain any two basic operations. (2+4+4)

    Answer: please refer to the PDF "11 Morphological IP - Ch 9.pdf".
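    As a small supplementary sketch (the full answer is in the referenced PDF), two basic operations, erosion and dilation, on a binary NumPy image with an assumed 3x3 structuring element:

```python
import numpy as np
from scipy import ndimage

structure = np.ones((3, 3), dtype=bool)   # 3x3 structuring element

def erode(binary_img: np.ndarray) -> np.ndarray:
    """Erosion: a pixel stays foreground only if the whole structuring element fits inside the object."""
    return ndimage.binary_erosion(binary_img, structure=structure)

def dilate(binary_img: np.ndarray) -> np.ndarray:
    """Dilation: a pixel becomes foreground if the structuring element centered there touches any foreground."""
    return ndimage.binary_dilation(binary_img, structure=structure)
```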

    Sep 2010

    Q2A - What is bit plane image slicing? Explain the procedure of performing bit plane slicing.
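    A minimal sketch of the procedure, assuming an 8-bit grayscale NumPy array: bit plane k is obtained by testing bit k of every pixel, giving eight binary images.

```python
import numpy as np

def bit_planes(img: np.ndarray) -> list[np.ndarray]:
    """Slice an 8-bit grayscale image into its 8 binary bit planes (LSB first)."""
    img = img.astype(np.uint8)
    return [(img >> k) & 1 for k in range(8)]   # plane k holds bit k of each pixel

# Reconstruction: summing plane_k * 2**k over k recovers the original image.
```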

    Sep 2010

    1A - What are intensity and spatial resolution of an image? Explain.

    Spatial Resolution

    Spatial resolution is a measure of the smallest discernible detail in an image.

    Quantitatively, spatial resolution can be stated as:
    - line pairs per unit distance
    - dots (pixels) per unit distance

    Image resolution definition: the largest number of discernible line pairs per unit distance. In the U.S. this is commonly given in dpi (dots per inch), e.g. a newspaper at about 75 dpi and a book page at about 2400 dpi.

    Spatial Resolution (cont.)

    To be meaningful, measures of spatial resolution must be stated with respect to spatial units. Image size by itself doesn't tell the complete story. To say that an image has, say, a resolution of 1024 x 1024 pixels is not a meaningful statement without stating the spatial dimensions encompassed by the image.

    Intensity resolution

    Intensity resolution similarly refers to the smallest discernible change in intensity level. We have considerable discretion regarding the number of samples used to generate a digital image, but this is not true regarding the number of intensity levels. Based on hardware considerations, the number of intensity levels usually is an integer power of two. The most common number is 8 bits.

    Intensity resolution (cont.)

    Unlike spatial resolution, which must be based on a per-unit-of-distance basis to be meaningful, it is common practice to refer to the number of bits used to quantize intensity as the intensity resolution. For example, it is common to say that an image whose intensity is quantized into 256 levels has 8 bits of intensity resolution. True discernible changes in intensity are influenced not only by noise and saturation values but also by the capabilities of human perception.
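    As a small added illustration (not from the slides), requantizing an 8-bit image to k bits leaves only 2^k distinguishable intensity levels:

```python
import numpy as np

def requantize(img: np.ndarray, bits: int) -> np.ndarray:
    """Reduce an 8-bit grayscale image to `bits` bits of intensity resolution."""
    shift = 8 - bits
    return (img.astype(np.uint8) >> shift) << shift   # 2**bits distinct levels remain
```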

    Sep 2010

    1B - What is the two-dimensional Discrete Fourier Transform? Why is frequency domain image analysis very important? Explain. (2+4+4)
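    For reference, the 2D DFT of an M x N image f(x, y) is commonly written as

$$F(u,v) = \sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x,y)\, e^{-j2\pi\left(\frac{ux}{M} + \frac{vy}{N}\right)}$$

    and a minimal NumPy sketch (an illustration, not the original answer slides) for computing it and its centered magnitude spectrum is:

```python
import numpy as np

def dft_spectrum(img: np.ndarray) -> np.ndarray:
    """Compute the 2D DFT of a grayscale image and return its centered log-magnitude spectrum."""
    F = np.fft.fft2(img.astype(float))   # 2D discrete Fourier transform
    F_centered = np.fft.fftshift(F)      # move the zero-frequency term to the center
    return np.log1p(np.abs(F_centered))  # log scale makes the spectrum easier to view
```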

    Why frequency domain image analysis is very important
