Elements of Visual Perception

CS804B, M1_2, Lecture Notes

Resmi N.G.
Reference: Digital Image Processing, Rafael C. Gonzalez and Richard E. Woods



- Human eye: nearly a sphere, approximately 20 mm in diameter.
- Three membranes enclose it: the cornea and sclera (the outer cover), the choroid, and the retina.
- Cornea: tough, transparent tissue that covers the anterior surface of the eye.
- Sclera: opaque membrane enclosing the remainder of the optical globe.
- Choroid: lies directly below the sclera and contains a network of blood vessels (the major source of nutrition to the eye).


- The choroid coat is heavily pigmented, which helps to reduce extraneous light entering the eye and backscatter within the optical globe.
- At its anterior extreme, the choroid is divided into the ciliary body and the iris diaphragm.
- The diaphragm contracts and expands to control the amount of light that enters the eye.
- The pupil, the central opening of the iris, varies in diameter from about 2 to 8 mm.
- The front of the iris contains the visible pigment of the eye; the back contains a black pigment.


- Lens: made up of concentric layers of fibrous cells and suspended by fibers that attach to the ciliary body.
- It contains 60-70% water, about 6% fat, and a large amount of protein.
- It is colored by a slightly yellow pigmentation.
- Excessive clouding of the lens, known as cataract, leads to poor color discrimination and loss of clear vision.
- The lens absorbs about 8% of the visible light spectrum, with higher absorption at shorter wavelengths.


- Retina: when the eye is properly focused, light from an object outside the eye is imaged on the retina.
- Pattern vision is afforded by the distribution of discrete light receptors over the surface of the retina.
- Two classes of receptors: cones and rods.


- Cones: 6-7 million in each eye, located mainly in the central portion of the retina called the fovea.
  - Highly sensitive to color.
  - Each cone is connected to its own nerve end, so cones can resolve fine detail.
  - Muscles controlling the eye rotate the eyeball until the image of an object of interest falls on the fovea.
- Cone vision is bright-light (or photopic) vision.


- Rods: 75-150 million distributed over the retinal surface.
  - Larger area of distribution; several rods are connected to a single nerve end, which reduces the amount of detail they can resolve.
  - Give an overall picture of the field of view.
  - Not involved in color vision; sensitive to low levels of illumination.
- Rod vision is dim-light (or scotopic) vision.
- Blind spot: the area of the retina without receptors.


Image Formation in the Eye
- The principal difference between the lens of the eye and an ordinary optical lens is that the lens of the eye is flexible.
- The radius of curvature of the anterior surface of the lens is greater than that of its posterior surface.
- The shape of the lens is controlled by the tension in the fibers of the ciliary body.


- To focus on distant objects, the controlling muscles cause the lens to be relatively flattened.
- To focus on nearby objects, these muscles allow the lens to become thicker.


- Focal length: the distance between the center of the lens and the retina, which varies from about 17 mm down to about 14 mm.
- Example: for an observer looking at a 15 m tall object from a distance of 100 m, the height h of the retinal image follows from similar triangles: 15/100 = h/17, so h = 2.55 mm.


- The retinal image is focused primarily on the region of the fovea.
- Perception then takes place by the relative excitation of light receptors, which transform radiant energy into electrical impulses that are ultimately decoded by the brain.


Brightness Adaptation and Discrimination

- Digital images are displayed as a discrete set of intensities, so the eye's ability to discriminate between different intensity levels is important.
- Subjective brightness (intensity as perceived by the human visual system) is a logarithmic function of the light intensity incident on the eye.


- The visual system can adapt to a large range of intensities by changing its overall sensitivity; this property is called brightness adaptation.
- The total range of distinct intensity levels it can discriminate simultaneously is small by comparison.
- The current sensitivity level of the visual system for any given set of conditions is called the brightness adaptation level.


Brightness Discrimination
- Experiment to determine the ability of the human visual system to discriminate brightness:
  - An opaque glass is illuminated from behind by a light source of intensity I.
  - An increment ΔI is added and increased until a change is first perceived.


- The ratio of the increment threshold to the background intensity, ΔIc/I, is called the Weber ratio.
- When ΔIc/I is small, a small percentage change in intensity is discriminable, so brightness discrimination is good.
- When ΔIc/I is large, a large percentage change in intensity is required, so brightness discrimination is poor (as at low levels of illumination).
- Analogy: in a noisy environment you must shout to be heard, while a whisper works in a quiet room.
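As a small numeric sketch of the Weber ratio (the measurements below are invented for illustration):

```python
def weber_ratio(delta_i_c: float, background_i: float) -> float:
    """Ratio of the just-noticeable increment to the background intensity."""
    return delta_i_c / background_i

# Hypothetical thresholds: at a dim background the needed increment is
# proportionally larger than at a bright background.
print(weber_ratio(0.5, 2.0))    # 0.25 -> poor brightness discrimination
print(weber_ratio(0.02, 1.0))   # 0.02 -> good brightness discrimination
```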


- Brightness discrimination improves as background illumination increases.
- As the eye roams about an image, a different set of incremental changes is detected at each new adaptation level.
- The eye is thus capable of a much broader range of overall intensity discrimination.


- Perceived brightness is not a simple function of intensity; two phenomena demonstrate this.
- Mach bands: the visual system tends to undershoot or overshoot around the boundaries of regions of different intensity, so a band of constant intensity appears to have a scalloped brightness profile near its edges.


- Simultaneous contrast: a region's perceived brightness does not depend simply on its intensity but also on the intensity of its background.


- Optical illusions: cases in which the eye fills in non-existing information or wrongly perceives the geometrical properties of objects.


Light and Electromagnetic Spectrum


- Wavelength λ and frequency ν are related by the expression

  λ = c / ν

  where c is the speed of light.
- The energy of the various components of the electromagnetic spectrum is given by

  E = hν

  where h is Planck's constant.
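A quick numeric sketch of these two relations (standard physical constants; not part of the slides):

```python
C = 2.998e8      # speed of light, m/s
H = 6.626e-34    # Planck's constant, J*s

def photon_energy(wavelength_m: float) -> float:
    """Energy in joules of one photon, via nu = c / lambda and E = h * nu."""
    nu = C / wavelength_m
    return H * nu

# Shorter wavelengths carry more energy per photon:
print(photon_energy(700e-9))   # red light,    ~2.8e-19 J
print(photon_energy(400e-9))   # violet light, ~5.0e-19 J
```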

- An electromagnetic wave can be viewed as a stream of massless particles, each traveling in a wavelike pattern at the speed of light.
- Each massless particle contains a certain bundle of energy; each bundle is called a photon.
- Light is the particular type of electromagnetic radiation that can be seen and sensed by the human eye.


- Visible band: from violet to red (chromatic light).
- The colors that we perceive in an object are determined by the nature of the light reflected from the object.
- Three basic quantities describe the quality of a chromatic light source: radiance, luminance, and brightness.


- Radiance: the total amount of energy that flows from the light source (measured in watts).
- Luminance: a measure of the amount of energy an observer perceives from a light source (measured in lumens).
- Brightness: intensity as perceived by the human visual system.


- Luminance is the amount of visible light that comes to the eye from a surface.
- Illuminance is the amount of light incident on a surface.
- Reflectance is the proportion of incident light that is reflected from a surface.
- Lightness is the perceived reflectance of a surface.
- Brightness is the perceived intensity of light coming from the image itself, and is also defined as perceived luminance.


- Achromatic or monochromatic light: light that is void of color; its only attribute is intensity (ranging from black through grays to white).


Image Sensing and Acquisition
- Images are generated by the combination of an illumination source and the reflection or absorption of energy from that source by the objects to be imaged.
- Three principal sensor arrangements transform illumination energy into digital images: the single imaging sensor, the line (strip) sensor, and the array sensor.


Image Acquisition Using Single Sensor

- Incoming energy is converted to a voltage by the combination of input electrical power and a sensor material responsive to the type of energy being detected.
- The response of the sensor is an output voltage waveform, which has to be digitized.
- A filter is used to improve selectivity.


- Example: a photodiode, constructed of silicon materials, whose output voltage waveform is proportional to the incident light.


- To generate a 2-D image using a single sensor, there must be relative displacement in both the x- and y-directions between the sensor and the area to be imaged.


- Another example of imaging with a single sensor:
  - Place a laser source coincident with the sensor.
  - Moving mirrors are used to control the outgoing beam in a scanning pattern and to direct the reflected laser signal onto the sensor.


Image Acquisition Using Sensor Strips

- A sensor strip has an in-line arrangement of sensors and provides imaging elements in one direction.
- Motion perpendicular to the strip provides imaging in the other direction, thereby completing the 2-D image.


Image Acquisition Using Sensor Arrays


- Since the sensor array is 2-D, a complete image can be obtained by focusing the energy onto the surface of the array.
- The imaging system collects the incoming energy from an illumination source and focuses it onto an image plane.
- The front end of the imaging system is a lens (if the illumination is light), which projects the viewed scene onto the lens focal plane.


- The sensor array coincident with the focal plane produces outputs proportional to the intensity of light received at each sensor.
- This output is then digitized by another section of the imaging system.


A Simple Image Formation Model

- Images are denoted by two-dimensional functions of the form f(x,y).
- The value of f is a positive scalar quantity.
- When an image is generated from a physical process, its values are proportional to the energy radiated by a physical source. Hence, f(x,y) must be nonzero and finite:

  0 < f(x,y) < ∞


- The function f(x,y) is characterized by two components:
  - Illumination component i(x,y): the amount of source illumination incident on the scene being viewed.
  - Reflectance component r(x,y): the amount of illumination reflected by the objects in the scene.
- f(x,y) is expressed as the product of these two components:

  f(x,y) = i(x,y) r(x,y)


- where 0 < i(x,y) < ∞ and 0 < r(x,y) < 1, the bounds of r corresponding to total absorption (0) and total reflectance (1).
- The nature of i(x,y) is determined by the illumination source.
- The nature of r(x,y) is determined by the characteristics of the imaged objects.
- For images formed by transmission of the illumination through a medium (as in X-ray imaging), reflectivity is replaced by transmissivity.
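A minimal sketch of this product model (synthetic NumPy arrays; the values are illustrative):

```python
import numpy as np

# Illumination i(x,y): a smooth horizontal gradient, values in (0, inf).
i = np.tile(np.linspace(50.0, 100.0, 64), (64, 1))

# Reflectance r(x,y): values in (0, 1); a bright square on a dark background.
r = np.full((64, 64), 0.1)
r[16:48, 16:48] = 0.9

# Image formation: f(x,y) = i(x,y) * r(x,y), pointwise.
f = i * r
print(f.min(), f.max())   # all values positive and finite, as the model requires
```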


- The intensity of a monochrome image at any point (x0,y0) is called the gray level l of the image at that point:

  l = f(x0,y0)

- l lies in the range Lmin ≤ l ≤ Lmax, where Lmin must be positive and Lmax must be finite.
- In terms of the model, Lmin = imin rmin and Lmax = imax rmax.
- The interval [Lmin, Lmax] is called the gray scale.


Image Sampling and Quantization
- The output of most sensors is a continuous voltage waveform whose amplitude and spatial behaviour are related to the physical phenomenon being sensed.
- This continuous sensed data has to be converted to digital form.
- This involves two processes: sampling and quantization.


Basic Concepts
- An image may be continuous with respect to the x- and y-coordinates and also in amplitude.
- To convert it to digital form, the function must be sampled in both coordinates and in amplitude.
- Digitizing the coordinate values is called sampling.
- Digitizing the amplitude values is called quantization.


- Generating a digital image:


- To sample the plot of amplitude values of the continuous image along a line AB, take equally spaced samples along AB. This set of discrete locations gives the sampled function.
- The sample values still span a continuous range of gray-level values. These values must also be converted to discrete quantities (quantization) to obtain a digital image.
- The gray-level scale can be divided into a number of discrete levels ranging from black to white.


- In the figure, one of eight discrete gray levels is assigned to each sample.
- Starting at the top of the image and carrying out this procedure line by line for the entire image produces a two-dimensional digital image.


- The method of sampling is determined by the sensor arrangement used to generate the image:
  - Single sensing element combined with mechanical motion: sampling is set by selecting the number of individual mechanical increments at which the sensor is activated to collect data.
  - Sensing strip: the number of sensors in the strip limits sampling in one direction.
  - Sensor array: the number of sensors in the array limits sampling in both directions.


Representing Digital Images

- The result of sampling and quantization is a matrix of real numbers.
- Let the image f(x,y) be sampled so that the resulting digital image has M rows and N columns.
- The values of the coordinates are now discrete quantities.


- The complete M×N image can be represented in matrix form, as shown below.
- Each element of the matrix array is called an image element, picture element, or pixel.


  f(x,y) = | f(0,0)    f(0,1)    ...  f(0,N-1)   |
           | f(1,0)    f(1,1)    ...  f(1,N-1)   |
           | ...       ...       ...  ...        |
           | f(M-1,0)  f(M-1,1)  ...  f(M-1,N-1) |

- Equivalently, in conventional matrix notation:

  A = | a(0,0)    a(0,1)    ...  a(0,N-1)   |
      | a(1,0)    a(1,1)    ...  a(1,N-1)   |
      | ...       ...       ...  ...        |
      | a(M-1,0)  a(M-1,1)  ...  a(M-1,N-1) |

  where a(i,j) = f(x=i, y=j) = f(i,j).

- The sampling process may be viewed as partitioning the xy-plane into a grid.
- f(x,y) is a digital image if (x,y) are integers from Z² and f is a function that assigns a gray-level value to each distinct pair of coordinates (x,y).
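A small sketch of this matrix view (NumPy, which the slides do not use; the values are random):

```python
import numpy as np

M, N, k = 4, 4, 3            # a 4 x 4 image with k = 3 bits per pixel
L = 2 ** k                   # number of gray levels: L = 2^k = 8

# A digital image is an M x N matrix of gray levels in [0, L-1].
f = np.random.randint(0, L, size=(M, N))
print(f)
print(f[2, 3])               # the pixel a(i,j) = f(i,j) at row 2, column 3
```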


- The number of distinct gray levels allowed for each pixel is an integer power of 2:

  L = 2^k

- The range of values spanned by the gray scale is called the dynamic range of an image.
- High dynamic range gives a high-contrast image; low dynamic range gives a low-contrast image.
- The number of bits required to store a digitized image is

  b = M × N × k

- When M = N, b = N²k.
- An image that can have 2^k gray levels is referred to as a k-bit image.
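A quick worked sketch of the storage formula (the image size matches the subsampling example later in these notes):

```python
def storage_bits(M: int, N: int, k: int) -> int:
    """Bits needed for an M x N image with k bits per pixel: b = M * N * k."""
    return M * N * k

b = storage_bits(1024, 1024, 8)          # a 1024 x 1024, 8-bit image
print(b, "bits =", b // 8, "bytes")      # 8388608 bits = 1048576 bytes (1 MiB)
```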


Spatial and Gray-Level Resolution

- Sampling determines the spatial resolution of an image, which is the smallest discernible detail in an image.
- Resolution is the smallest number of discernible line pairs per unit distance, where a line pair consists of a line and its adjacent space.
- Resolution can also be represented as the number of pixel columns (width) by the number of pixel rows (height).


- Resolution can also be defined as the total number of pixels in an image, given as a number of megapixels.
- The more pixels in a fixed range, the higher the resolution.
- Gray-level resolution refers to the smallest discernible change in gray level.
- The more bits per pixel, the higher the gray-level resolution.


- Consider an image of size 1024 × 1024 pixels whose gray levels are represented by 8 bits.
- The image can be subsampled to reduce its size.
- Subsampling is done by deleting an appropriate number of rows and columns from the original image.
- Example: a 512 × 512 image can be obtained by deleting every other row and column of the 1024 × 1024 image, keeping the number of gray levels constant (a sketch follows).
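A minimal subsampling sketch (NumPy slicing on a synthetic image):

```python
import numpy as np

img = np.random.randint(0, 256, size=(1024, 1024), dtype=np.uint8)

# Delete every other row and column: 1024 x 1024 -> 512 x 512.
# Only spatial resolution drops; the gray levels are untouched.
sub = img[::2, ::2]
print(sub.shape)   # (512, 512)
```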


- The number of samples is kept constant and the number of gray levels is reduced.


- The number of bits is reduced while keeping the spatial resolution constant.


- False contouring: when the bit depth becomes insufficient to accurately sample a continuous gradation of color tone, the continuous gradient appears as a series of discrete steps or bands. This effect is termed false contouring (a sketch follows).
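A short sketch of gray-level reduction by uniform requantization (illustrative only):

```python
import numpy as np

def requantize(img: np.ndarray, k: int) -> np.ndarray:
    """Reduce an 8-bit image to 2**k gray levels, keeping spatial resolution."""
    step = 256 // (2 ** k)           # width of each quantization bin
    return (img // step) * step      # snap each pixel to its bin's base level

ramp = np.tile(np.arange(256, dtype=np.uint8), (64, 1))   # smooth gradient
coarse = requantize(ramp, 3)         # only 8 gray levels remain
print(np.unique(coarse).size)        # 8: the gradient now shows discrete bands
```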


- Images can be of low, intermediate, or high detail depending on the values of N and k.


- Each point in the Nk-plane represents an image whose values of N and k equal the coordinates of that point.
- Isopreference curves: curves in the Nk-plane that correspond to images of equal subjective quality.


- The quality of the images tends to increase as N and k are increased.
- A decrease in k generally increases the apparent contrast of an image.
- For images with a large amount of detail, only a few gray levels may be needed.


Aliasing and Moiré Patterns

- Aliasing: the distortion that results from undersampling, when the signal reconstructed from samples differs from the original continuous signal.
- Shannon sampling theorem: to avoid aliasing, the sampling rate should be greater than or equal to twice the highest frequency present in the signal.


[Figures: a sine wave; the sine wave sampled once per cycle; the sine wave sampled 1.5 times per cycle, where reconstruction yields a lower-frequency wave; and the sine wave sampled twice per cycle.]
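A short sketch of the 1.5-samples-per-cycle case from the figures (the frequencies are illustrative):

```python
import numpy as np

f_signal = 10.0                    # true frequency, Hz
f_sample = 15.0                    # 1.5 samples per cycle: below the 20 Hz Nyquist rate

n = np.arange(30)                  # sample indices
t = n / f_sample                   # sample times
x = np.sin(2 * np.pi * f_signal * t)

# These samples match a sine at the alias frequency |f_sample - f_signal| = 5 Hz:
alias = np.sin(2 * np.pi * (f_sample - f_signal) * t)
print(np.allclose(x, -alias))      # True: same samples, lower apparent frequency
```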

- For an image, aliasing occurs if the resolution is too low.
- To reduce the aliasing effects on an image, its high-frequency components are reduced prior to sampling by blurring the image.
- Moiré pattern: the interference pattern created when two grids are overlaid at an angle.


Zooming and Shrinking Digital Images

- Zooming can be viewed as oversampling; shrinking as undersampling.
- Zooming involves two steps:
  - Creation of new pixel locations.
  - Assigning gray levels to the new pixel locations, by one of:
    - Nearest neighbour interpolation (with pixel replication as a special case)
    - Bilinear interpolation


- Nearest neighbour interpolation:
  - The size of the zoomed image need not be an integer multiple of the size of the original image.
  - Fit a finer grid over the original image.
  - Assign to each new pixel the gray level of the closest pixel in the original image.
  - Expand the grid to the required size.
- Pixel replication (a sketch follows this list):
  - A special case of nearest neighbour interpolation, used when the size of the zoomed image is an integer multiple of the size of the original image.
  - Columns and rows are duplicated the required number of times.
  - Produces a checkerboard effect.
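A minimal pixel-replication sketch (NumPy; the zoom factor and array are illustrative):

```python
import numpy as np

img = np.array([[10, 20],
                [30, 40]], dtype=np.uint8)

# Zoom by an integer factor of 2: duplicate each row, then each column.
zoomed = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
print(zoomed)
# [[10 10 20 20]
#  [10 10 20 20]
#  [30 30 40 40]
#  [30 30 40 40]]
```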


- Bilinear interpolation uses the 4 nearest neighbours of a point.


- Linear interpolation fits a straight line between the 2 known points.


- This can be understood as a weighted average, where the weights are inversely related to the distance from the end points to the unknown point.
- For an unknown point x between known points x0 and x1, the weights are (x1 − x)/(x1 − x0) and (x − x0)/(x1 − x0), which are the normalized distances between the unknown point and each of the end points.


- Interpolating in the x-direction: f(x,y0) ≈ ((x1 − x)/(x1 − x0)) f(x0,y0) + ((x − x0)/(x1 − x0)) f(x1,y0), and similarly for f(x,y1).


- Interpolating in the y-direction: f(x,y) ≈ ((y1 − y)/(y1 − y0)) f(x,y0) + ((y − y0)/(y1 − y0)) f(x,y1).


- Bilinear interpolation uses the 4 nearest neighbours of a point.
- The gray level assigned to the new pixel at (x',y') is given by

  v(x',y') = ax' + by' + cx'y' + d

  where the four coefficients are determined from the four equations in four unknowns written using the four nearest neighbours of (x',y').
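A compact sketch of bilinear interpolation in its weighted-average form, which is equivalent to solving for a, b, c, d (the function name and test values are mine):

```python
import numpy as np

def bilinear(img: np.ndarray, x: float, y: float) -> float:
    """Interpolate img at fractional coordinates (x, y) from its 4 neighbours."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    dx, dy = x - x0, y - y0
    # Interpolate along x on both neighbouring rows, then along y between them.
    top = (1 - dx) * img[x0, y0] + dx * img[x1, y0]
    bot = (1 - dx) * img[x0, y1] + dx * img[x1, y1]
    return (1 - dy) * top + dy * bot

img = np.array([[10.0, 20.0],
                [30.0, 40.0]])
print(bilinear(img, 0.5, 0.5))   # 25.0: the average of all four neighbours
```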


- Shrinking:
  - Shrinking by a non-integer factor:
    - Expand a grid to fit over the original image.
    - Do gray-level nearest neighbour or bilinear interpolation.
    - Shrink the grid back to its specified size.


Basic Relationships Between Pixels
- Neighbours of a pixel: 4-neighbours, diagonal neighbours, 8-neighbours.
- Adjacency: 4-adjacency, 8-adjacency, m-adjacency.

The 8-neighbourhood of a pixel at (i,j):

  (i-1,j-1)  (i-1,j)  (i-1,j+1)
  (i,j-1)    (i,j)    (i,j+1)
  (i+1,j-1)  (i+1,j)  (i+1,j+1)

- Neighbours of a pixel:
- A pixel p at (x,y) has 4 horizontal and vertical neighbours whose coordinates are given by

  (x+1,y), (x-1,y), (x,y+1), and (x,y-1)

  This set of pixels, called the 4-neighbours of p, is denoted by N4(p).
- Each of these pixels is at unit distance from p, and some may lie outside the digital image when p is on the border of the image.
- The 4 diagonal neighbours are given by

  (x+1,y+1), (x+1,y-1), (x-1,y+1), and (x-1,y-1)

  This set of pixels is denoted by ND(p). These points, together with the 4-neighbours, are called the 8-neighbours of p, denoted by N8(p). A sketch of these sets follows.
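A small sketch of the three neighbourhood sets (the function names are mine):

```python
def n4(x: int, y: int):
    """4-neighbours of p = (x, y)."""
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def nd(x: int, y: int):
    """Diagonal neighbours of p = (x, y)."""
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

def n8(x: int, y: int):
    """8-neighbours: the union of N4(p) and ND(p)."""
    return n4(x, y) + nd(x, y)

print(n8(1, 1))   # for a border pixel, some of these fall outside the image
```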


- Adjacency, Connectivity, Regions and Boundaries
- Connectivity: two pixels are connected if they are neighbours and if their gray levels satisfy a specified criterion of similarity (e.g., if their gray levels are equal).
- Adjacency: let V be the set of gray-level values used to define adjacency. In a binary image, V = {1} for adjacency of pixels with value 1.
- a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).


- b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).
- c) m-adjacency: two pixels p and q with values from V are m-adjacent if (see the sketch after this list):
  - q is in N4(p), or
  - q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
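A sketch of the m-adjacency test on a binary image (the helper and function names are mine):

```python
import numpy as np

def n4(x, y):
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(x, y):
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def m_adjacent(img: np.ndarray, p, q, V={1}) -> bool:
    """True if pixels p and q (row, col tuples) with values in V are m-adjacent."""
    if img[p] not in V or img[q] not in V:
        return False
    if q in n4(*p):                        # condition 1: q is a 4-neighbour of p
        return True
    if q in nd(*p):                        # condition 2: q is diagonal, and the
        common = n4(*p) & n4(*q)           # shared 4-neighbours have no value in V
        inside = lambda r: 0 <= r[0] < img.shape[0] and 0 <= r[1] < img.shape[1]
        return not any(inside(r) and img[r] in V for r in common)
    return False

img = np.array([[0, 1, 1],
                [0, 1, 0],
                [0, 0, 1]])
# (1,1) and (2,2) are diagonal, and neither shared 4-neighbour (1,2) nor (2,1)
# has value 1, so the diagonal link counts:
print(m_adjacent(img, (1, 1), (2, 2)))     # True
```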


- Two image subsets S1 and S2 are adjacent if some pixel in S1 is adjacent to some pixel in S2.
- A digital path (or curve) from pixel p with coordinates (x,y) to pixel q with coordinates (s,t) is a sequence of distinct pixels with coordinates (x0,y0), (x1,y1), ..., (xn,yn), where (x0,y0) = (x,y), (xn,yn) = (s,t), and pixels (xi,yi) and (xi-1,yi-1) are adjacent for 1 ≤ i ≤ n. Here n is the length of the path.
- If (x0,y0) = (xn,yn), the path is a closed path.


- Paths are called 4-, 8-, or m-paths depending on the type of adjacency.


- Let S be a subset of pixels in an image.
- Connectivity: two pixels p and q are said to be connected in S if there exists a path between them consisting entirely of pixels in S.
- Connected component: for any pixel p in S, the set of pixels that are connected to it in S is called a connected component of S.
- Connected set: if the set S has only one connected component, it is called a connected set. (A sketch of extracting a connected component follows.)
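A short sketch that extracts the connected component containing a seed pixel, using 4-adjacency on a binary image (breadth-first search; the names are mine):

```python
import numpy as np
from collections import deque

def connected_component(img: np.ndarray, seed, V={1}):
    """All pixels 4-connected to seed through pixels whose values are in V."""
    comp, frontier = {seed}, deque([seed])
    while frontier:
        x, y = frontier.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < img.shape[0] and 0 <= ny < img.shape[1]
                    and (nx, ny) not in comp and img[nx, ny] in V):
                comp.add((nx, ny))
                frontier.append((nx, ny))
    return comp

img = np.array([[1, 1, 0],
                [0, 1, 0],
                [0, 0, 1]])
print(sorted(connected_component(img, (0, 0))))  # [(0,0), (0,1), (1,1)]
```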


- Region: let R be a subset of pixels in an image. If R is a connected set, it is called a region of the image.
- Boundary: the boundary of a region R is the set of pixels in the region that have one or more neighbours that are not in R. It forms a closed path and is a global concept.
- Edge: edges are formed from pixels with derivative values that exceed a threshold. An edge is based on a measure of gray-level discontinuity at a point and is a local concept.


- Distance Measures
- For pixels p, q and z with coordinates (x,y), (s,t) and (v,w) respectively, D is a distance function if:
  - a) D(p,q) ≥ 0, with D(p,q) = 0 iff p = q,
  - b) D(p,q) = D(q,p), and
  - c) D(p,z) ≤ D(p,q) + D(q,z).


- The Euclidean distance between p and q is defined as

  De(p,q) = [(x-s)² + (y-t)²]^(1/2)

- The D4 distance (or city-block distance) between p and q is defined as

  D4(p,q) = |x-s| + |y-t|

  Pixels with D4 = 1 are the 4-neighbours of (x,y).
- The D8 distance (or chessboard distance) between p and q is defined as

  D8(p,q) = max(|x-s|, |y-t|)

  Pixels with D8 = 1 are the 8-neighbours of (x,y). (A sketch computing all three follows.)
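A small sketch computing the three distances (the function names are mine):

```python
def d_euclidean(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def d4(p, q):   # city-block distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):   # chessboard distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(d_euclidean(p, q), d4(p, q), d8(p, q))   # 5.0 7 4
```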


- The D4 and D8 distances between p and q are independent of any paths that might exist between the points, because these distances involve only the coordinates of the points.


[Figure: contours of constant distance from a center pixel under the Euclidean distance (2-norm), the D4 distance (city-block distance), and the D8 distance (chessboard distance). Pixels with D4 ≤ 2 form a diamond centered on the pixel; pixels with D8 ≤ 2 form a square.]

- The Dm distance between p and q is defined as the length of the shortest m-path between the two points; unlike D4 and D8, it depends on the values of the pixels along the path.
- Consider the arrangement below, in which p, p2, and p4 have value 1 and the values of p1 and p3 vary from case to case:

      p3  p4
  p1  p2
  p

- a) V = {1}, p1 = 0, p3 = 0: Dm = 2, via the m-path p, p2, p4.
- b) V = {1}, p1 = 1, p3 = 0: Dm = 3, via the m-path p, p1, p2, p4.
- c) V = {1}, p1 = 0, p3 = 1: Dm = 3, via the m-path p, p2, p3, p4.
- d) V = {1}, p1 = 1, p3 = 1: Dm = 4, via the m-path p, p1, p2, p3, p4.


Image Operations on Pixels
- Images are represented as matrices, but matrix division is not defined.
- Arithmetic operations, including division, are therefore defined between corresponding pixels in the images involved.


Linear and Non-linear Operations
- Let H be an operator whose input and output are images.
- H is said to be a linear operator if, for any two images f and g and any two scalars a and b,

  H(af + bg) = aH(f) + bH(g)

- Example: adding 2 images is a linear operation.
- A non-linear operation does not obey the above condition. (A sketch checking the condition follows.)
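A small numeric sketch of the linearity test, comparing a doubling operator with the non-linear absolute value (synthetic arrays):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.normal(size=(4, 4))
g = rng.normal(size=(4, 4))
a, b = 2.0, -3.0

H_double = lambda img: img + img     # adding an image to itself: linear
H_abs = np.abs                       # absolute value: non-linear

for H in (H_double, H_abs):
    lhs = H(a * f + b * g)
    rhs = a * H(f) + b * H(g)
    print(np.allclose(lhs, rhs))     # True for H_double, False for H_abs
```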


Thank You