CHAPTER 14

Digital Manipulation of Brightfield and Fluorescence Images: Noise Reduction, Contrast Enhancement, and Feature Extraction

Richard A. Cardullo* and Edward H. Hinchcliffe†

*Department of Biology, The University of California, Riverside, California 92521
†Department of Biological Sciences, University of Notre Dame, Notre Dame, Indiana 46556

I. Introduction
II. Digitization of Images
III. Using Gray Values to Quantify Intensity in the Microscope
IV. Noise Reduction
    A. Temporal Averaging
    B. Spatial Methods
V. Contrast Enhancement
VI. Transforms, Convolutions, and Further Uses for Digital Masks
    A. Transforms
    B. Convolution
    C. Digital Masks as Convolution Filters
VII. Conclusions
References

I. Introduction

The theoretical basis of image processing, along with its applications, is an extensive topic that cannot be adequately covered here but has been presented in a number of texts dedicated exclusively to this topic (Andrews and Hunt, 1977; Bates and McDonnell, 1986; Chellappa and Sawchuck, 1985; Gonzalez and Wintz, 1987; Inoue and Spring, 1997; Russ, 1994; Shotton, 1993). In this chapter, the basic principles of image processing used routinely by microscopists will be presented. Since image processing allows the investigator to convert the microscope/detector system into a quantitative device, this chapter will focus on three basic problems: (1) reducing "noise," (2) enhancing contrast, and (3) quantifying the intensity of an image. These techniques can then be applied to a number of different methodologies such as video-enhanced differential interference contrast (VEDIC) microscopy (Chapter 16 by Salmon and Tran, this volume), nanovid microscopy, fluorescence recovery after photobleaching, fluorescence correlation spectroscopy, fluorescence resonance energy transfer, and fluorescence ratio imaging (Cardullo, 1999). In all cases, knowledge of the basic principles of microscopy, image formation, and image-processing routines is absolutely required to convert the microscope into a device capable of pushing the limits of resolution and contrast.

II. Digitization of Images

An image must first be digitized before an arithmetic, or logical, operation can be performed on it (Pratt, 1978). For this discussion, a digital image is a discrete representation of light intensity in space (Fig. 1). A particular scene can be viewed as being continuous in both space and light intensity, and the process of digitization converts these to discrete values. The discrete representation of intensity is commonly referred to as gray values, whereas the discrete representation of position is given as picture elements, or pixels. Therefore, each pixel has a corresponding gray value which is related to light intensity [e.g., at each coordinate (x,y) there is a corresponding gray value designated as GV(x,y)]. The key to digitizing an image is to provide enough pixels and grayscale values to adequately describe the original image.

Clearly, the fidelity of reproduction between the true image and the digitized image depends on both the spacing between the pixels (e.g., the number of bits that map the image) and the number of gray values used to describe the intensity of that image. Figure 1B shows a theoretical one-dimensional scan across a portion of an image. Note that the more pixels used to describe, or sample, an image, the better the digitized image reflects the true nature of the original. Conversely, as the number of pixels is progressively reduced, the true nature of the original image is lost.

When choosing the digitizing device for a microscope, particular attention must be paid to matching the resolution limit of the microscope (≈0.2 μm for visible light; see Chapter 1 by Sluder and Nordberg, this volume) to the resolution limit of the digitizer (Inoue, 1986). A digitizing array that has an effective separation of 0.05 μm per pixel is, at best, using four pixels to describe resolvable objects in a microscope, resulting in a highly digitized representation of the original image (note that this is most clearly seen when using the digital zoom feature of many imaging devices, which results in a "boxy" image representation). In contrast, a digitizer whose pixel elements are separated by 1 μm effectively averages gray values five times above the resolution limit of the microscope, resulting in a degraded representation of the original image.

Fig. 1 (A) A densitometric line scan through a microscopic image, described by intensity values on the y-axis and position along the x-axis. (B) A 6-bit-digitized representation (64 gray values) of the object in (A), with 32 bits used to describe the position across 10 μm. The digital representation captures the major details of the original object but some finer detail is lost. Note that the image is degraded further when the position is described by only (C) 16 bits or (D) 8 bits.

In addition to assigning the number of pixels for an image, it is also important to know the number of gray values needed to faithfully represent the intensity of that image. In Fig. 1B, the original image has been digitized at 6-bit resolution (6 bits = 2^6 = 64 gray values, from 0 to 63). The image could be better described by more gray values (e.g., 8 bits = 256 gray levels) but would be poorly described by fewer gray values (e.g., 2 bits = 4 gray values).

The decision on how many pixels and gray values are needed to describe an image is dictated by the properties of the original image. Figure 1 represents a low-contrast, high-resolution image, which needs many gray scales and pixels to describe it adequately. However, some images are by their very nature high contrast and low resolution and require fewer pixels and gray values to describe them (e.g., a line drawing may require only 1 bit of gray-level depth, black or white). Ultimately, the trade-off is one of contrast, resolution, and speed of processing. The more descriptors used to represent an image, the slower the processing routines will be performed. In general, an image should be described by as few pixels and gray values as needed so that speed of processing can be optimized. For many applications, the user can select a narrower window, or region of interest (ROI), within the image to speed up processing.
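
As an illustration of these sampling and quantization trade-offs, the following minimal Python sketch (assuming only NumPy; the function name and the synthetic sine profile are illustrative, not from the original) samples a continuous intensity profile at a given pixel count and quantizes it to a given bit depth, mimicking Fig. 1:

    import numpy as np

    def digitize_profile(n_pixels, n_bits, length_um=10.0):
        """Sample and quantize a synthetic intensity profile."""
        x = np.linspace(0.0, length_um, n_pixels)          # pixel positions
        intensity = 40 + 15 * np.sin(2 * np.pi * x / 5.0)  # synthetic object (arbitrary units)
        levels = 2 ** n_bits                               # number of available gray values
        # Map intensity onto the available gray values (0 .. levels - 1)
        gv = np.round((intensity - intensity.min())
                      / (intensity.max() - intensity.min()) * (levels - 1))
        return x, gv.astype(int)

    # Fewer pixels (or fewer bits) give a progressively cruder representation:
    for n in (32, 16, 8):
        x, gv = digitize_profile(n_pixels=n, n_bits=6)
        print(n, "pixels:", gv[:8], "...")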

III. Using Gray Values to Quantify Intensity in the Microscope

A useful feature shared by all image processors is that they give the microscopist a way to quantify image intensity values into some meaningful parameter (Green, 1989; Russ, 1990). In standard light microscopy, the light intensity, and therefore the digitized gray value, is related to the optical density (OD), which is proportional to the log of the relative light intensity. In dilute solutions (i.e., in the absence of significant light scattering), the OD is proportional to the concentration of absorbers, C, the molar absorptivity, ε, and the path length, l, through the vessel containing the absorbers. In such a situation, the OD is related to these parameters by Beer's law:

OD = log(I0/I) = εCl

where I and I0 are the intensities of light in the presence and absence of absorber, respectively. In dilute solutions, it therefore might be possible to equate a change in OD with a change in molar absorptivity, path length, or concentration. However, with objects as complex as cells, all three parameters can vary tremendously, and it is difficult to use OD to measure a change in any one parameter.
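
A one-function sketch of this relationship, assuming NumPy and background-corrected intensities (the function name is illustrative):

    import numpy as np

    def optical_density(I, I0):
        """OD from transmitted (I) and incident (I0) intensity, per Beer's law."""
        return np.log10(I0 / I)

    print(optical_density(I=50.0, I0=100.0))  # ~0.301: a twofold attenuation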

Although difficult to interpret in cells, measuring changes in digitized gray values in an OD wedge offers the investigator a good way to calibrate an entire microscope system coupled to an image processor. Figure 2 shows such a calibration using a brightfield microscope coupled to a CCD camera and an image processor. The wedge had 0.15-OD increments. The camera/image processor unit was digitized to 8 bits (0–255), and the median gray value was recorded for a 100 × 100 pixel array (the ROI) in each step of the wedge. In this calibration, the black level of the camera was adjusted so that the highest OD corresponded to a gray value of 5. At the other end of the scale (the lowest OD used), the relative intensity was normalized so that I/I0 was equal to 1, and the corresponding gray value was ≈95% of the maximum gray value (≈243). As seen in Fig. 2, as the step wedge is moved through the microscope, the median gray value increases as the log of I/I0. In addition to acting as a useful calibration, this figure shows that an 8-bit processor can reliably quantify changes in light intensity over two orders of magnitude.

Fig. 2 Calibration of a detector using an image processor. The light intensity was varied incrementally using an OD step wedge (0.15-OD increments), and the gray value was plotted as a function of the normalized intensity. In this instance the camera/image processor system was able to quantify differences in light intensity over a 40-fold range.

IV. Noise Reduction

The previous sections have assumed that the object being imaged is relatively free of noise and is of sufficient contrast to generate a usable image. Although this may be true in some instances, the ultimate challenge in many applications is to obtain reliable quantitative information from objects which produce a low-contrast, noisy signal (Erasmus, 1982). This is particularly true in cell physiological measurements using specialized modes of microscopy such as VEDIC, fluorescence ratio imaging, nanovid microscopy, and so on. There are different ways to reduce noise, and the methods chosen depend on many different factors, including the source of the noise, the type of camera employed for a particular application, and the contrast of the specimen. For the purposes of this chapter, we shall distinguish between temporal and spatial techniques to increase the signal-to-noise ratio (SNR) of an image.

A. Temporal Averaging

In most low-light-level applications, there is a considerable amount of shot noise associated with the signal. If quantitation is needed, it is often necessary to reduce the amount of shot noise in order to improve the SNR. Because this type of noise reduction requires averaging over a number of frames (≥2 frames), the method is best suited to static objects. Clearly, temporal averaging can be difficult to use for optimizing contrast for dynamic processes such as cell movement, detecting rapid changes in intracellular ion concentrations over time, quantifying molecular motions using fluorescence recovery after photobleaching, or single-particle tracking. The trade-off is between improving SNR and blurring or missing the capture of a dynamic event. Current digital microscopy equipment allows for very short exposure times, even with the low light levels associated with live cell imaging. Thus, frame averaging can be an acceptable solution to improve SNR, provided that the light exposures are sufficiently short (Fig. 3).

Assume that at any given time, t, within a given pixel, a signal, Si(t), represents both the true image, I, which may be inclusive of background, and some source of noise, Ni(t). Since the noise is stochastic in nature, Ni(t) will vary in time, taking on both positive and negative values, and the signal, Si(t), will vary about some mean value. For each frame, the signal is therefore just:

Si(t) = I + Ni(t)

As the signal is averaged over M frames, an average value for Si(t) and Ni(t) is obtained:

⟨Si⟩M = I + ⟨Ni⟩M

where ⟨Si⟩M and ⟨Ni⟩M represent the average values of Si(t) and Ni(t) over M frames. As the number of frames, M, goes to infinity, the average value of Ni goes to zero and therefore:

⟨Si⟩M→∞ = I

The question facing the microscopist is how large M should be so that the SNR is acceptable. This is determined by a number of factors, including the magnitude of the original signal, the amount of noise, and the degree of precision required by the particular quantitative measurement. A quantitative measure of noise reduction can be obtained by looking at the standard deviation of the noise, which decreases inversely as the square root of the number of frames (σM = σ0/√M). Therefore, averaging a field for 4 frames gives a 2-fold improvement in the SNR, averaging for 16 frames yields a 4-fold improvement, and averaging for 256 frames yields a 16-fold improvement. Eventually the user reaches a point of diminishing returns, where the noise level is below the resolution limit of the digitizer and any improvement in SNR is minimal (Fig. 4).
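
A minimal frame-averaging sketch, assuming NumPy and a hypothetical grab_frame() function that returns one 2D image array; the synthetic frames below only demonstrate the 1/√M behavior:

    import numpy as np

    def average_frames(grab_frame, M):
        """Average M frames; the noise standard deviation falls as 1/sqrt(M)."""
        acc = grab_frame().astype(np.float64)
        for _ in range(M - 1):
            acc += grab_frame()
        return acc / M

    # Demonstration with synthetic noisy frames (true image = 100 everywhere):
    rng = np.random.default_rng(0)
    grab = lambda: 100 + rng.normal(0, 10, size=(64, 64))
    for M in (1, 4, 16, 256):
        print(M, "frames -> residual noise:", average_frames(grab, M).std().round(2))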

Although frame-averaging techniques are not always appropriate for moving objects, it is possible to apply a running average in which the resulting image is a weighted sum of all previous frames. Because the image is constantly updated on a frame-by-frame basis, these types of recursive techniques are useful for following moving objects, but the improvement in SNR is always less than that obtained with the simple averaging technique outlined in the previous paragraph (Erasmus, 1982).
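
A sketch of one such recursive (running-average) filter, assuming NumPy; the weight w, which sets how quickly old frames are forgotten, is an illustrative parameter, not a value from the original:

    import numpy as np

    def running_average(frames, w=0.25):
        """Recursive filter: out = w*new_frame + (1 - w)*previous_out."""
        out = None
        for frame in frames:
            out = frame.astype(np.float64) if out is None else w * frame + (1 - w) * out
            yield out

    # Example with synthetic frames (true image = 100 everywhere, noise std = 10):
    rng = np.random.default_rng(1)
    frames = (100 + rng.normal(0, 10, (64, 64)) for _ in range(50))
    for smoothed in running_average(frames):
        pass
    print("residual noise:", smoothed.std().round(2))  # lower than the ~10 of one frame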

Fig. 3 Frame averaging improves the SNR. A dividing mammalian cell expressing GFP-α-tubulin was imaged using a spinning disk confocal microscope. Images were collected with no frame averaging, with 4 frames averaged, or with 8 frames averaged. Random noise is reduced by frame averaging.

Fig. 4 Reduction in noise as a function of the number of frames averaged. The noise is reduced inversely as the square root of the number of frames averaged. In this instance, the noise was normalized to the average value obtained for a single frame. The major gain in noise reduction is obtained after averaging very few frames (inset), and averaging for more than 64 frames leads to only minor gains in the SNR.

Additional recursive filters that further optimize the SNR are possible, but these are typically not available on commercial image processors.

B. Spatial Methods

A number of spatial techniques are available which allow the user to reduce noise on a pixel-by-pixel basis. The simplest of these techniques generally use simple arithmetic operations within a single frame or, alternatively, between two different frames. In general, these types of routines involve either image subtraction or division from a background image, or calculate a mean or median value around the neighborhood of a particular pixel. More sophisticated methods use groups of pixels (known as masks, kernels, or filters), which perform higher-order functions to extract particular features from an image. These types of techniques will be discussed separately in Section VI.


1. Arithmetic Operations Between an Object and a Background Image

If an image has a constant noise component in a given pixel in each frame, that component can be removed by performing a simple subtraction, which optimizes the SNR. Although the SNR is improved, subtraction methods can also significantly decrease the dynamic range; these problems can generally be avoided when the microscope and camera systems are adjusted to give the optimal signal.

Any constant noise component can be removed by subtraction and, in general, it is always best to subtract the noise component using a uniform background image (Fig. 5). Thus, if a pixel within an image has a gray value of, say, 242, with the background having a gray value of 22 in that same pixel, then a simple subtraction yields a resultant value of 220. Image subtraction therefore preserves the majority of the signal, and the subtracted image can then be processed further using other routines. In order to reduce temporal noise, both images can first be averaged as described in Section IV.A.

Fig. 5 Subtracting noise from an image. (A) Line scan across an object and the surrounding background. (B) Line scan across the background alone reveals variations in intensity that may be due to uneven light intensity across the field, camera defects, dirt on the optics, and so on. (C) The background in (B) subtracted from the image in (A). The result is a "cleaner" image with a higher SNR than the original in (A).
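
A hedged sketch of this background-subtraction step, assuming NumPy arrays for the (optionally frame-averaged) image and background:

    import numpy as np

    def subtract_background(image, background):
        """Subtract a background image, clipping at 0 to stay within 8-bit range."""
        diff = image.astype(np.int16) - background.astype(np.int16)
        return np.clip(diff, 0, 255).astype(np.uint8)

    image = np.array([[242]], dtype=np.uint8)
    background = np.array([[22]], dtype=np.uint8)
    print(subtract_background(image, background))  # [[220]], as in the text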

2. Concept of a Digital Mask

A number of mathematical manipulations of images involve using an array (or digital mask) around the neighborhood of a particular pixel. These digital masks can be used either to select particular pixels from the neighborhood (as in the averaging or median filtering discussed in Section IV.B.3) or, alternatively, to apply some mathematical weighting function to an image on a pixel-by-pixel basis to extract particular features from that image (discussed in detail in Section VI). When the mask is overlaid on an image, the particular mathematical operation is performed, the resultant value is placed into the same array position, and the operation is repeated until the entire image has been transformed. Although a digital mask can take on any shape, the most common masks are square, with the center pixel being the pixel operated on at any given time (Fig. 6). The most common masks are 3 × 3 or 5 × 5 arrays, so that only the nearest neighbors have an effect on the pixel being operated on. Larger arrays also greatly increase the number of computations that need to be performed, which can significantly slow down the rate of processing a particular image.

p(x−1,y+1)  p(x,y+1)  p(x+1,y+1)
p(x−1,y)    p(x,y)    p(x+1,y)
p(x−1,y−1)  p(x,y−1)  p(x+1,y−1)

Fig. 6 Digital mask used for computing medians, averages, and higher-order mathematical operations, especially convolutions. In the case of the median and average filters, the mask is overlaid over each pixel in the image, and the resultant value is calculated and placed into the identical pixel location in a new image buffer.


3. Averaging Versus Median Filters

When an image contains random and infrequent intensity spikes in particular pixels, a digital mask can be used around each pixel to remove them. Two common ways to remove these intensity spikes are to calculate either the average value or the median value within the neighborhood and assign that value to the center pixel in the processed image (Fig. 7). The choice of filter depends on the type of processed image that is desired. Although both types of filters will degrade an image, the median filter preserves edges better than the averaging filter, since the averaging filter uses all values within the digital mask, spikes included, to compute the mean. For this reason, averaging filters are seldom used to remove intensity spikes: the spikes themselves contribute to the new intensity value in the processed image, and the resultant image is therefore blurred.

Fig. 7 Comparison of 3 × 3 averaging and median filters to reduce noise. (A) Digital representation of an image displaying gray values at different pixel locations. In general, the object possesses a boundary which is detected as a line scan from left to right. However, the image has a number of intensity spikes which significantly mask the true boundary. A line scan across a particular row (denoted by an arrow; the scan is on the right-hand side) reveals both high- and low-intensity values which greatly distort the image. (B) When a 3 × 3 averaging filter is applied to the image, the extreme intensity values are significantly reduced but the image is smoothed in the vicinity of the boundary. (C) In contrast, a 3 × 3 median filter removes the extreme intensity values while preserving the true nature of the boundary.

The median filter is more desirable for removing infrequent intensity spikes from an image, since those intensity values are always removed from the processed image once the median is computed. In this case, any spike is replaced with the median value within the digital mask, which gives a more uniform appearance to the processed image. Hence, a uniform background that contains infrequent intensity spikes will look absolutely uniform in the processed image. Since the median filter preserves edges (acting as a sharpening filter), it is often used for high-contrast images (Fig. 8).
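
A short comparison sketch, assuming only NumPy; it applies 3 × 3 mean and median masks to an image containing one spike:

    import numpy as np

    def filter3x3(img, op):
        """Apply op (np.mean or np.median) over each interior 3x3 neighborhood."""
        out = img.copy().astype(np.float64)
        for i in range(1, img.shape[0] - 1):
            for j in range(1, img.shape[1] - 1):
                out[i, j] = op(img[i-1:i+2, j-1:j+2])
        return out

    img = np.full((5, 5), 50.0)
    img[2, 2] = 250.0                        # a single intensity spike
    print(filter3x3(img, np.mean)[2, 2])     # ~72: the spike bleeds into the mean
    print(filter3x3(img, np.median)[2, 2])   # 50: the spike is removed entirely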

Fig. 8 Effects of median sharpen and smooth filters on image contrast. A dividing mammalian cell expressing GFP-α-tubulin was imaged using a spinning disk confocal microscope. After collection, separate digital filters were applied to the image.


V. Contrast Enhancement

One of the most common uses of image processing is to digitally enhance the contrast of an image using a number of different methods (Castleman, 1979). In brightfield modes such as phase contrast or differential interference contrast, the addition of a camera and an image processor can significantly enhance contrast, so that specimens with inherently low contrast can be observed. Additionally, contrast routines can be used to enhance an image in a particular region, which may allow the investigator to quantify structures or events not accessible with the microscope alone. This is the basis for VEDIC, which allows, for example, the motion of low-contrast specimens such as microtubules or chromosomes to be quantified (Chapter 16 by Salmon and Tran, this volume).

In order to optimize contrast enhancement digitally, it is imperative that the microscope optics and the camera be adjusted so that the full dynamic range of the system is utilized. This is discussed further in Chapter 17 by Wolf et al., this volume. The gray values of the image and background can then be displayed as a histogram (Fig. 9), and the user is then able to adjust the brightness and contrast within a particular region of the image. Within a particular gray value range, the user can stretch the histogram so that values within that range are spread out over a different range in the processed image. Although this type of contrast enhancement is artificial, it allows the user to discriminate features which otherwise may not have been detectable by eye in the original image.

Stretching gray values over a particular range in an image is one type of mathematical manipulation which can be performed on a pixel-by-pixel basis. In general, any digital image can be mathematically manipulated to produce an image with different gray values. The user-defined function that transforms the original image is known as the image transfer function (ITF), which specifies the value and the mathematical operation that will be performed on the original image. This type of operation is a point operation, meaning that the output gray value of the ITF depends only on the input gray value, on a pixel-by-pixel basis. The gray values of the processed image, I2, are therefore transformed at every pixel location relative to the original image using the same ITF. Hence, every gray value in the processed image is transformed according to the generalized relationship:

GV2 = f(GV1)

where GV2 is the gray value at every pixel location in the processed image, GV1 is the input gray value of the original image, and f(GV1) is the ITF acting on the original image.

The simplest type of ITF is a linear equation of slope m and intercept b:

GV2 = m·GV1 + b

Fig. 9 Histogram representation of gray values for an entire image. (A) The image contains two distributions of intensity over the entire gray value range (0–255). (B) The lower distribution can be removed either through subtraction (if the lower values are due to a uniform background) or by applying an ITF that assigns a value of 0 to all input pixels having a gray value less than 100. The resulting distribution contains only information from input pixels with a value greater than 100. (C) The histogram of the higher distribution can be stretched to fill the lower gray values, resulting in a lower mean value than the original.

In this case, the digital contrast of the processed image is linearly transformed, with the brightness and contrast determined by the values of the slope and intercept chosen. In the most trivial case, choosing values of m = 1 and b = 0 would leave all gray values of the processed image identical to the original image (Fig. 10A). Raising the value of the intercept while leaving the slope unchanged has the effect of increasing all gray values by some fixed value (identical to increasing the DC or black-level control on a camera). Similarly, decreasing the value of the intercept produces a darker image than the original. The value of the slope is known as the contrast enhancement factor, and changes in the value of m have significant effects on how the gray values are distributed in an image. A value of m > 1 has the effect of spreading out the gray values over a wider range in the processed image relative to the original image. Conversely, values of m < 1 reduce the number of gray values used to describe a processed image relative to the original (Fig. 10).

Fig. 10 Application of different linear ITFs to a low-intensity, low-contrast image. (A) Intensity line scan through an object which is described by few gray values. Applying a linear ITF with m = 1 and b = 0 (right) results in no change from the initial image. (B) Applying a linear ITF with m = 5 and b = 0 (right) leads to a significant improvement in contrast. (C) Applying a linear ITF with m = 2 and b = 50 (right) slightly improves contrast and increases the brightness of the entire image.

As noted by Inoue (1986), although linear ITFs can be useful, the same effects can often be better achieved by properly adjusting the camera's black-level and gain controls. However, this may not always be practical if conditions under the microscope are constantly changing, or if contrast enhancement is needed after the original images have been stored.
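
A hedged sketch of a linear ITF applied through the generalized relationship above, assuming NumPy and an 8-bit image:

    import numpy as np

    def linear_itf(img, m, b):
        """GV2 = m*GV1 + b, clipped to the 8-bit range 0-255."""
        gv2 = m * img.astype(np.float64) + b
        return np.clip(gv2, 0, 255).astype(np.uint8)

    img = np.array([[20, 30, 40]], dtype=np.uint8)  # a dim, low-contrast row
    print(linear_itf(img, m=5, b=0))    # [[100 150 200]]: contrast stretched
    print(linear_itf(img, m=1, b=50))   # [[ 70  80  90]]: brightened only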


The ITF is obviously not restricted to linear functions, and nonlinear ITFs can be extremely useful for enhancing particular features of an image while eliminating or reducing others (Fig. 11). Nonlinear ITFs are also useful for correcting sources of nonlinear response in an optical system, or for calibrating the light response of an optical system (Inoue, 1986).

Fig. 11 Application of different nonlinear ITFs to the same low-intensity, low-contrast image as in Fig. 10. (A) Initial image and ITF (right), resulting in no change. (B) Application of a hyperbolic ITF (right) to the image amplifies lower input values and only slightly increases the gray values for higher input values. (C) Application of a Gaussian ITF (right) to the image amplifies low values, with an offset, and minimizes input values beyond 100.

The actual form of the ITF, linear or nonlinear, is generally application dependent and user defined. For example, nonlinear ITFs that are sigmoidal in shape are useful for enhancing images because they compress the contrast in the center of the histogram and increase contrast in the tail regions of the histogram. This type of enhancement is useful for images where most of the information is in the tails of the histogram, while the central portion of the histogram contains mostly background information. One type of nonlinear ITF, sigmoidal in shape, that will enhance an 8-bit image of this type is given by the equation:

GV2 = [128/(b − c)^a] × [(b − c)^a − (b − GV1)^a + (GV1 + c)^a]

where b and c are the maximum and minimum gray values of the input image, respectively, and a is an arbitrary contrast enhancement factor (Inoue, 1986). For a = 1, this normally sigmoidal ITF becomes linear with a slope of 256/(b − c). As a increases beyond 1, the ITF becomes more sigmoidal in nature, with greater compression occurring at the middle gray values.
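
A hedged sketch of this sigmoidal ITF, assuming NumPy; b, c, and a are as defined above, and the clipping to 0-255 is an implementation assumption:

    import numpy as np

    def sigmoidal_itf(img, a=2.0):
        """GV2 = 128/(b-c)^a * [(b-c)^a - (b-GV1)^a + (GV1+c)^a], for 8-bit input."""
        gv1 = img.astype(np.float64)
        b, c = gv1.max(), gv1.min()           # maximum and minimum input gray values
        k = 128.0 / (b - c) ** a
        gv2 = k * ((b - c) ** a - (b - gv1) ** a + (gv1 + c) ** a)
        return np.clip(gv2, 0, 255).astype(np.uint8)

    img = np.array([[10, 60, 110, 160, 210]], dtype=np.uint8)
    print(sigmoidal_itf(img, a=1.0))   # linear case, slope 256/(b - c)
    print(sigmoidal_itf(img, a=3.0))   # more sigmoidal, mid-values compressed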

In practice, ITFs are generally calculated in memory using a lookup table (LUT). An LUT represents the transformation that is performed on each pixel on the basis of that pixel's intensity value (Figs. 12 and 13). In addition to LUTs which perform particular ITFs, LUTs are also useful for pseudo-coloring images, where particular user-defined colors represent gray values in particular ranges. This is particularly useful in techniques such as ratio imaging, where color LUTs are used to represent concentrations of Ca2+, pH, or other ions when various indicator dyes are employed within cells.

Fig. 12 Some different gray value LUTs used to alter contrast in images. (A) Inverse LUT, (B) logarithmic LUT, (C) square root LUT, (D) square LUT, and (E) exponential LUT. Pseudo-color LUTs would assign different colors instead of gray values.

Fig. 13 Different LUTs applied to the image of a cheek cell. (A) No filter, (B) reverse contrast LUT, (C) square root LUT, (D) pseudo-color LUT.
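
A minimal sketch of how an LUT implements an ITF, assuming NumPy: the transform is precomputed once for all 256 possible 8-bit input values and then applied by simple indexing rather than per-pixel arithmetic:

    import numpy as np

    # Precompute an inverse-contrast LUT for an 8-bit image: GV2 = 255 - GV1
    lut = np.array([255 - gv for gv in range(256)], dtype=np.uint8)

    img = np.array([[0, 64, 128, 255]], dtype=np.uint8)
    print(lut[img])  # [[255 191 127 0]]: each pixel is looked up, not recomputed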

VI. Transforms, Convolutions, and Further Uses for Digital Masks

In the previous sections, the most frequently used methods for enhancing contrast and reducing noise using temporal methods, simple arithmetic operations, and LUTs were described. However, more advanced methods are often needed to extract particular features from an image, which may not be possible using these simple methods (Jahne, 1991). In this section, some of the concepts and applications associated with transforms and convolutions will be introduced.

A. Transforms

Transforms take an image from one space to another. Probably the most used transform is the Fourier transform, which takes one from coordinate space to spatial frequency space (see Chapter 2 by Wolf, this volume, for a discussion of Fourier transforms). In general, a transform of a function in one dimension has the form:

T(u) = Σx f(x) g(x,u)

where T(u) is the transform of f(x) and g(x,u) is known as the forward transformation kernel. Similarly, the inverse transform is given by the relation:

f(x) = Σu T(u) h(x,u)

where h(x,u) is the inverse transformation kernel. In two dimensions, these transformation pairs simply become:

T(u,v) = Σx Σy f(x,y) g(x,y,u,v)

f(x,y) = Σu Σv T(u,v) h(x,y,u,v)

It is the kernel functions that provide the link that brings a function from one space to another. The discrete forms shown above suggest that these operations can be performed on a pixel-by-pixel basis, and many transforms in image processing are computed in this manner (giving a discrete Fourier transform, or DFT). However, DFTs are generally approximated using different algorithms to yield a fast Fourier transform, or FFT.

In the Fourier transform, the forward transformation kernel is:

g(x,u) = (1/N) e^(−2πiux/N)

and the reverse transformation kernel is:

h(x,u) = (1/N) e^(+2πiux/N)

Hence, a Fourier transform is achieved by multiplying a digitized image, whose gray value at each pixel is given by f(x,y), pixel-by-pixel by the forward transformation kernel given above. Transforms, and in particular Fourier transforms, can make certain mathematical manipulations of images considerably easier than if they were performed in coordinate space directly.

One example where conversion to frequency space using an FFT is useful is in identifying both high- and low-frequency components of an image, which allows one to make quantitative choices about information that can be either used or discarded. Sharp edges and many types of noise contribute to the high-frequency content of an image's Fourier transform. Image smoothing and noise removal can therefore be achieved by attenuating a range of high-frequency components in the transform. In this case, a filter function, F(u,v), is selected that eliminates the high-frequency components of the transformed image, I(u,v). The ideal filter would simply cut off all frequencies above some threshold value, I0 (known as the cutoff frequency):

F(u,v) = 1 if |I(u,v)| ≤ I0
F(u,v) = 0 if |I(u,v)| > I0

The absolute value brackets refer to the fact that these are zero-phase-shift filters, because they do not change the phase of the transform. A graphical representation of an ideal low-pass filter is shown in Fig. 14. Just as an image can be blurred by attenuating high-frequency components using a low-pass filter, it can be sharpened by attenuating low-frequency components (Fig. 14). In analogy to the low-pass filter, an ideal high-pass filter has the following characteristics:

F(u,v) = 0 if |I(u,v)| ≤ I0
F(u,v) = 1 if |I(u,v)| > I0

Although useful, Fourier transforms can be computationally intense and are still not routinely used in most microscopic applications of image processing.
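
A hedged sketch of ideal low-pass filtering in frequency space, assuming NumPy's FFT routines; note that, as an implementation choice, the cutoff here is expressed as a radius in frequency space rather than in the amplitude-threshold notation used above:

    import numpy as np

    def ideal_lowpass(img, cutoff):
        """Zero all spatial frequencies beyond `cutoff` (cycles/image), then invert."""
        F = np.fft.fftshift(np.fft.fft2(img))      # centered transform
        ny, nx = img.shape
        y, x = np.ogrid[-ny // 2:ny - ny // 2, -nx // 2:nx - nx // 2]
        F[np.sqrt(x * x + y * y) > cutoff] = 0     # ideal cutoff filter
        return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

    rng = np.random.default_rng(2)
    noisy = np.ones((64, 64)) * 100 + rng.normal(0, 20, (64, 64))
    smooth = ideal_lowpass(noisy, cutoff=8)
    print("before:", noisy.std().round(1), "after:", smooth.std().round(1))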

A mathematically related technique known as convolution, which utilizes digital masks to select particular features of an image, is the preferred method of microscopists, since many of these operations can be performed at faster rates and carry out the mathematical operation in coordinate space instead of frequency space. These operations are outlined in Section VI.B.

Fig. 14 Frequency-domain cutoff filters. The filter function in frequency space, F(u,v), is used to cut off all frequencies above or below some cutoff frequency, I0. (A) A high-pass filter attenuates all frequencies below I0, leading to a sharpening of the image. (B) A low-pass filter attenuates all frequencies above I0, which eliminates high-frequency noise but leads to smoothing or blurring of the image.

B. Convolution

The convolution of two functions, f(x) and g(x), is given mathematically by:

∫ f(a) g(x − a) da,  integrated over −∞ < a < +∞

where a is a dummy variable of integration. It is easiest to visualize the mechanics of convolution graphically, as demonstrated in Fig. 15, which, for simplicity, shows the convolution of two square pulses. The convolution can be broken down into three simple steps:

1. Before carrying out the integration, reflect g(a) about the origin, yielding g(−a), and then displace it by some distance x to give g(x − a).
2. For all values of x, multiply f(a) by g(x − a). The product will be nonzero at all points where the functions overlap.
3. Integrating this product yields the convolution between f(x) and g(x).

Hence, the properties of the convolution are determined by the independent function f(x) and a function g(x) that selects for certain desired details in f(x). The selecting function g(x) is therefore analogous to the forward transformation kernel in frequency space, except that it selects for features in coordinate space instead of frequency space. This clearly makes the convolution an important image-processing technique for microscopists who are interested in feature extraction.
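
A brief numerical sketch of these steps, assuming NumPy; convolving two square pulses produces the triangular result described in Fig. 15:

    import numpy as np

    dx = 0.01
    x = np.arange(0, 1, dx)
    f = np.ones_like(x)          # square pulse of height 1 on [0, 1)
    g = 2 * np.ones_like(x)      # square pulse of height 2 on [0, 1)

    conv = np.convolve(f, g) * dx    # discrete approximation of the integral
    print(conv.max().round(2))       # ~2.0: peak of the triangle at x = 1
    print(conv[0], conv[-1])         # ~0 at the edges (x = 0 and x = 2)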

Fig. 15 Graphical representation of one-dimensional convolution. (A) In this simple example, the function f(x) to be convolved is a square pulse of equal height and width. (B) The convolving function g(x) is a rectangular pulse which is twice as high as it is wide. The convolving function is reflected and then moved from −∞ to +∞. (C) In all areas where there is no overlap, the product of f(x) and g(x) is zero. However, g(x) overlaps f(x) by different amounts from x = 0 to x = 2, with maximum overlap occurring at x = 1. The operation therefore detects the trailing edge of f(x) at x = 0, and the convolution results in a triangle which increases in height from 0 to 2 for 0 < x ≤ 1 and decreases in height from 2 to 0 for 1 ≤ x < 2.

One simple application of convolutions is the convolution of a function with an impulse function (commonly known as a delta function), δ(x − x0):

∫ f(x) δ(x − x0) dx = f(x0),  integrated over −∞ < x < +∞

For our purposes, δ(x − x0) is located at x = x0, and the intensity of the impulse is determined by the value of f(x) at x = x0 and is zero everywhere else. In this example, we will let the kernel g(x) represent three impulse functions separated by a period, t:

g(x) = δ(x + t) + δ(x) + δ(x − t)

As shown in Fig. 16, the convolution of the square pulse f(x) with these three impulses results in a copying of f(x) at the impulse points.

Fig. 16 Using a convolution to copy an object. (A) The function f(x) is a rectangular pulse of amplitude A with its leading edge at x = 0. (B) The convolving function g(x) consists of three delta functions at x = −t, x = 0, and x = +t. (C) The convolution f(x) * g(x) results in copies of the rectangular pulse at x = −t, x = 0, and x = +t.

As with Fourier transforms, the actual mechanics of convolution can rapidly become computationally intensive for a large number of points. Fortunately, many complex procedures can be adequately performed using a variety of digital masks, as illustrated in Section VI.C.

C. Digital Masks as Convolution Filters

For many purposes, the appropriate digital mask can be used to extract features from images. The convolution filter, acting as a selection function g(x), can be used to modify images in a particular fashion. Convolution filters reassign intensities by multiplying the gray value of each pixel in the image by the corresponding values in the digital mask and then summing all the values; the resultant is then assigned to the center pixel of the new image, and the operation is repeated for every pixel in the image (Fig. 17). Convolution filters can vary in size (i.e., 3 × 3, 5 × 5, 7 × 7, and so on) depending on the type of filter chosen and the relative weight that is required from values neighboring the center pixel.

Fig. 17 Performing convolutions using a digital mask. The convolution mask is applied to each pixel in the image. The value assigned to the central pixel results from multiplying each element in the mask by the gray value in the corresponding image location, summing the result, and assigning the value to the corresponding pixel in a new image buffer. The operation is repeated for every pixel, resulting in the processed image. For different operations, a scalar multiplier and/or offset may be needed.

For example, consider a simple 3 × 3 convolution filter, which has the form:

1/9  1/9  1/9
1/9  1/9  1/9
1/9  1/9  1/9
Applied to a pixel with an intensity of 128 surrounded by other intensity values as follows:

123   62   97
237  128    6
 19   23  124

the gray value at that pixel in the processed image would have a new value of 1/9 × (123 + 62 + 97 + 237 + 128 + 6 + 19 + 23 + 124) = 819/9 = 91. Note that this convolution filter is simply an averaging filter identical to the operation described in Section IV (in contrast, a median filter would have returned a value of 97). A 5 × 5 averaging filter would simply be a mask which contains 1/25 in each element, whereas a 7 × 7 averaging filter would contain 1/49 in each element. Since the speed of processing decreases with the size of the digital mask, the most frequently used filters are 3 × 3 masks.

In practice, the values found in digital masks tend to be integers, with a divisor that can vary depending on the desired operation. In addition, because many operations can lead to resultant values that are negative (since the values in the convolution filter can be negative), offset values are often used to prevent this from occurring. In the example of the averaging filter, the values in the kernel would be:

1  1  1
1  1  1
1  1  1

with a divisor value of 9 and an offset of zero. In general, for an 8-bit image, divisors and offsets are chosen so that all processed values following the convolution fall between 0 and 255.
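
A hedged sketch of this mask-divisor-offset scheme, assuming NumPy; the helper name is illustrative, and the example reproduces the worked averaging computation above:

    import numpy as np

    def convolve3x3(img, mask, divisor=1, offset=0):
        """Apply an integer 3x3 mask with divisor and offset to interior pixels."""
        out = np.zeros_like(img, dtype=np.float64)
        for i in range(1, img.shape[0] - 1):
            for j in range(1, img.shape[1] - 1):
                total = np.sum(img[i-1:i+2, j-1:j+2] * mask)
                out[i, j] = total / divisor + offset
        return np.clip(out, 0, 255).astype(np.uint8)

    img = np.array([[123,  62,  97],
                    [237, 128,   6],
                    [ 19,  23, 124]], dtype=np.float64)
    mask = np.ones((3, 3), dtype=int)
    print(convolve3x3(img, mask, divisor=9)[1, 1])  # 91, as computed in the text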

Understanding the nature of convolution filters is absolutely necessary when using the microscope as a quantitative tool. User-defined convolution filters can be used to extract information specific to a particular application. When beginning to use these filters, it is important to have a set of standards to which the filters can be applied, in order to see whether the desired effect has been achieved. In general, the best test objects for convolution filters are simple geometric objects such as squares, grids, isosceles and equilateral triangles, circles, and so on. Many commercially available graphics packages provide such test objects in a variety of graphics formats. Examples of some widely used convolution masks are given in the following sections.


1. Point Detection in a Uniform Field

Assume that an image consists of a series of grains on a constant background (e.g., a dark-field image of a cellular autoradiogram). The following 3 × 3 mask is designed to detect these points:

−1  −1  −1
−1  +8  −1
−1  −1  −1

When the mask encounters a uniform background, the gray value of the processed center pixel will be zero. If, on the other hand, a value above the constant background is encountered, its value will be amplified above that background and a high-contrast image will result.

2. Line Detection in a Uniform Field

Similar to the point mask in the previous example, a number of line masks can be used to detect sharp, orthogonal edges in an image. These line masks can be used alone or in tandem to detect horizontal, vertical, or diagonal edges in an image. Horizontal and vertical line masks are represented as:

−1  −1  −1        −1  +2  −1
+2  +2  +2        −1  +2  −1
−1  −1  −1        −1  +2  −1

whereas diagonal line masks are given as:

−1  −1  +2        +2  −1  −1
−1  +2  −1        −1  +2  −1
+2  −1  −1        −1  −1  +2
In any line mask, the direction of the positive values reflects the direction of the line detected. When choosing the type of line mask to be utilized, the user must know a priori the direction of the edges to be enhanced.

3. Edge Detection: Computing Gradients

Of course, lines and points are seldom encountered in nature, and another method for detecting edges is desirable. By far the most useful edge detection procedure is one that picks up any inflection point in intensity. This is best achieved by using gradient operators, which take the first derivative of light intensity in both the x- and y-directions. One type of gradient convolution filter that is often used is the Sobel filter. An example of a Sobel filter which detects horizontal edges is the Sobel North filter, expressed as the following 3 × 3 kernel:

+1  +2  +1
 0   0   0
−1  −2  −1

This filter is generally not used alone, but along with the Sobel East filter, which detects vertical edges in an image. The 3 × 3 kernel for this filter is:

−1  0  +1
−2  0  +2
−1  0  +1

These two Sobel filters can be used to calculate both the angle of edges in an image and the relative steepness of intensity (i.e., the derivative of intensity with respect to position). The so-called Sobel Angle filter returns the arctangent of the ratio of the Sobel North filtered pixel value to the Sobel East filtered pixel value, while the Sobel Magnitude filter calculates the square root of the sum of the squares of the Sobel North and Sobel East values.
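
A hedged sketch of the Sobel Magnitude and Sobel Angle computation, assuming NumPy (the vertical-edge test image is illustrative):

    import numpy as np

    sobel_north = np.array([[ 1,  2,  1],
                            [ 0,  0,  0],
                            [-1, -2, -1]])
    sobel_east  = np.array([[-1,  0,  1],
                            [-2,  0,  2],
                            [-1,  0,  1]])

    def sobel(img):
        """Return edge magnitude and angle from the two Sobel responses."""
        out_n = np.zeros_like(img, dtype=np.float64)
        out_e = np.zeros_like(img, dtype=np.float64)
        for i in range(1, img.shape[0] - 1):
            for j in range(1, img.shape[1] - 1):
                patch = img[i-1:i+2, j-1:j+2]
                out_n[i, j] = np.sum(patch * sobel_north)
                out_e[i, j] = np.sum(patch * sobel_east)
        return np.hypot(out_n, out_e), np.arctan2(out_n, out_e)

    img = np.zeros((5, 5)); img[:, 2:] = 100.0   # a vertical edge
    mag, ang = sobel(img)
    print(mag[2, 1], mag[2, 2])                  # strong response along the edge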

In addition to the Sobel filters, a number of different gradient filters can be used (specifically, the Prewitt and Roberts gradient filters), depending on the specific application. Figure 18 shows the design and outlines the basic properties of these filters, and Fig. 19 shows the effects of these filters on a fluorescence micrograph.

Fig. 18 Different 3 × 3 gradient filters used in imaging. Shown are four different gradient operators (Gradient, Sobel, Prewitt, and Roberts) and their common uses in microscopy and imaging: the Gradient kernels detect the vertical edges of objects in an image; the Sobel North and East kernels detect horizontal and vertical edges, respectively, and are used together to calculate the Sobel Angle and Sobel Magnitude (see text) rather than independently (if horizontal or vertical detection alone is desired, Prewitt should be used); the Prewitt North and East kernels detect horizontal and vertical edges; and the Roberts Northeast and Northwest kernels detect diagonal edges running from top-left to bottom-right and from top-right to bottom-left, respectively.


4. Laplacian Filters

Laplacian operators calculate the second derivative of intensity with respect to position and are useful for determining whether a pixel is on the dark or light side of an edge. Specifically, the Laplace-4 convolution filter, given as:

 0  −1   0
−1  +4  −1
 0  −1   0

detects the light and dark sides of an edge in an image. Because of its sensitivity to noise, this convolution mask is seldom used by itself as an edge detector. In order to keep all values of the processed image positive and within 8 bits, a divisor of 8 and an offset value of 128 are often employed.

Fig. 19 Different filters applied to a fluorescence image of a dividing mammalian cell: inverse contrast LUT, gradient filter, Laplacian filter, and Sobel filter.

The point detection filter shown earlier is also a kind of Laplace filter (known as the Laplace-8 filter). This filter uses a divisor value of 16 and an offset value of 128. Unlike the Laplace-4 filter, which only enhances edges, the Laplace-8 filter enhances edges and other features of the object.

VII. Conclusions

The judicious choice of image-processing routines can greatly enhance an image and can extract features from it that would not otherwise be accessible. When applying digital manipulations to an image, it is imperative to understand the routines being employed and to make use of well-designed standards when testing them. With the advent of high-speed digital detectors and computers, near real-time processing involving moderately complicated routines is now possible.


References

Andrews, H. C., and Hunt, B. R. (1977). "Digital Image Restoration." Prentice-Hall, Englewood Cliffs, NJ.
Bates, R. H. T., and McDonnell, M. J. (1986). "Image Restoration and Construction." Oxford University Press, New York, NY.
Cardullo, R. A. (1999). Electronic and computer image enhancement in light microscopy. In "Encyclopedia of Life Sciences." Wiley & Sons, Hoboken, NJ.
Castleman, K. R. (1979). "Digital Image Processing." Prentice-Hall, Englewood Cliffs, NJ.
Chellappa, R., and Sawchuck, A. A. (1985). "Digital Image Processing and Analysis." IEEE Press, New York, NY.
Erasmus, S. J. (1982). Reduction of noise in a TV rate electron microscope image by digital filtering. J. Microsc. 127, 29-37.
Gonzalez, R. C., and Wintz, P. (1987). "Digital Image Processing." Addison-Wesley, Reading, MA.
Green, W. B. (1989). "Digital Image Processing: A Systems Approach." Van Nostrand Reinhold, New York, NY.
Inoue, S. (1986). "Video Microscopy." Plenum, New York, NY.
Inoue, S., and Spring, K. R. (1997). "Video Microscopy," 2nd edn. Plenum, New York, NY.
Jahne, B. (1991). "Digital Image Processing." Springer-Verlag, New York, NY.
Pratt, W. K. (1978). "Digital Image Processing." Wiley, New York, NY.
Russ, J. C. (1990). "Computer-Assisted Microscopy: The Measurement and Analysis of Images." Plenum, New York, NY.
Russ, J. C. (1994). "The Image Processing Handbook." CRC Press, Ann Arbor, MI.
Shotton, D. (1993). "Electronic Light Microscopy: Techniques in Modern Biomedical Microscopy." Wiley-Liss, New York, NY.

