
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. PAMI-5, NO. 1, JANUARY 1983

Correspondence

Gray Level Image Processing by Cellular Logic Transforms

KENDALL PRESTON, JR.

Abstract-The cellular logic transform has been used extensively in the analysis of bilevel images generated by thresholding gray level images. Now, by a process of gray level resynthesis, it is shown to be useful in gray level image processing.

Index Terms-Cellular logic, digital filtering, image processing, neighborhood logic.

I. INTRODUCTION

In using cellular logic transforms for image processing, the traditional data flow is to first digitize the analog image information. Typically, 8-16 bits are used per picture element (pixel) over a 512 × 512 or 1024 × 1024 array. (Larger arrays are usually "windowed" due to the limitations of digital image display systems.) Second, the probability density function (PDF) of the digitized image is computed, and many of its parameters, e.g., mean, variance, range, midrange, median, etc., are calculated. Third, using these parameters of the PDF, one or more thresholds are selected using certain task-specific criteria. Fourth, the digitized image is transformed into one or more bilevel images utilizing the selected threshold values. Fifth, using the bilevel images as inputs, cellular logic transforms (CLT's) are designed and used for the purpose of extracting features from the bilevel images. These features include such measures as area, perimeter, size, shape, texture, etc., for various image components. Sixth, from these measurements, objects within the original image are located and sorted as to their class and identity.

Recently, in our laboratory, we have changed the traditional

data flow to include an image resynthesis step during which the bilevel images which result from CLT image analysis are used to form an output gray level image. The purpose of this correspondence is to present some of our results, and to indicate the usefulness of the CLT in gray level image processing.

II. COMPRESSION AND DECOMPRESSION

In the particular system used, the digitized gray level image

is presented at 8 bits/pixel in a 512 × 512 array (Fig. 1). The image is processed using the SUPRPIC image processing system developed jointly between Carnegie-Mellon University and the University of Pittsburgh [1]. When this system is used to convert a gray level image to a set of bilevel images, the 99.8 percent range of the PDF is computed and divided into 16 equal increments. These increments are used to define 16 equispaced thresholds over the range of the PDF. The COMPRESS command of SUPRPIC automatically generates bilevel images at these thresholds (Fig. 2).

The cellular logic command of SUPRPIC is the AUGRED

Manuscript received July 7, 1981; revised February 16, 1982. This work was supported by the National Institute of General Medical Science under Grant R01-GM28221-01.

The author is with the Department of Electrical Engineering, Carnegie-Mellon University, Pittsburgh, PA 15213 and the Department of Radiation Health, University of Pittsburgh, Pittsburgh, PA 15213.

Fig. 1. The digitized micrograph of normal human liver tissue produced by the Automatic Light Microscope Scanner (ALMS) model 2 of the Jet Propulsion Laboratory using illumination at 550 nm (green).

command whose action has been described extensively elsewhere by the author [2]. In order to carry out the AUGRED command, the following quantities are calculated:

fac_j = Σ_{i=1}^{8} x_{ij}                          (1)

cnum_j = Σ_{i=1}^{8} x_{ij} ⊕ x_{i+1,j}             (2)

where x_{ij} is the value at the jth pixel location of the ith member of the directly adjacent neighbors of that pixel. There are clearly eight such neighbors so that i = 1, ..., 8. The order usually taken for i is clockwise and, in the circular summation taken in (2), (i + 1) = 1 when i = 8.

These quantities fac_j and cnum_j are computed for each pixel

over the array of bilevel numbers. Pixel values over an output array are then computed as given by the Boolean expression

x_j' = A_j x̄_j + B_j x_j                            (3)

where

A_j = { 0 if fac_j ≤ FAC
      { 1 if fac_j > FAC                             (4)

and

B_j = { 0 if cnum_j ≤ CNUM
      { 1 if cnum_j > CNUM                           (5)

where FAC and CNUM are parameters of the particular version of the AUGRED command utilized. Although detailed discussion of the actions produced by the AUGRED command is beyond the scope of this correspondence, it should be noted

0162-8828/83/0100-0055$01.00 © 1983 IEEE

55


[Fig. 2 panels, headed: ORIGINAL IMAGE DATA FROM J280 AT THRS 00-15 -SUPRPIC- 07JUN81]

Fig. 2. Twelve of the 16 bilevel images generated when the gray level image shown in Fig. 1 is thresholded at 16 equal intervals across the range of the probability density function.

that FAC = 8 indicates an interior point in a contiguous region of the bilevel image, FAC = 0 indicates an exterior point, CNUM = 2 indicates an edge point, CNUM = 4 indicates a link in a contiguous chain one pixel wide, etc.

Numerous combinations of FAC and CNUM are possible,

leading to a wide repertoire of image processing sequences. The great advantage of the CLT implemented by AUGRED is that it may be carried out by table lookup, and thus executed in less than 1 µs/picture point using the array processor available to the SUPRPIC system. (More elaborate and expensive machines, such as the diff3 [3], the CLIP4 [4], and the DAP [5], execute this command with picture point operation times of 25, 2, and 0.05 ns, respectively.) It may also be noted that certain of the AUGRED results commute with the max-min transform, a nonlinear numerical processing command initially described by Nakagawa and Rosenfeld [6], as originally pointed out by Goetcherian [7].
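One pass of this transform can be sketched in a few lines. This is a hedged reconstruction from (1)-(5), not SUPRPIC code: the function name augred is invented here, and border pixels are zero-padded, an assumption the correspondence does not state.

```python
import numpy as np

def augred(x, FAC, CNUM):
    """One pass of a cellular logic transform in the style of AUGRED,
    following (1)-(5) on a bilevel (0/1) array x. Sketch only:
    borders are zero-padded (an assumption)."""
    h, w = x.shape
    p = np.pad(x, 1)  # zero border
    # The eight directly adjacent neighbors of each pixel, taken clockwise.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    n = [p[1 + di:1 + di + h, 1 + dj:1 + dj + w] for di, dj in shifts]
    fac = sum(n)                                           # eq. (1)
    cnum = sum(n[i] ^ n[(i + 1) % 8] for i in range(8))    # eq. (2), circular
    A = (fac > FAC).astype(np.uint8)                       # eq. (4)
    B = (cnum > CNUM).astype(np.uint8)                     # eq. (5)
    return (A & (1 - x)) | (B & x)                         # eq. (3)
```

For example, under FAC = 0 a single isolated one spreads to its eight neighbors (an augment step), while the original pixel itself, whose cnum is zero, is controlled by the CNUM test in (5).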

It has been found that cellular image processing may be used to process gray level images and generate gray level outputs. The algorithm described in this correspondence uses several cycles of the AUGRED command with FAC = 2 and CNUM = 9 (the DON'T CARE value) for the purpose of noise removal. This operation causes the erasure of all contiguous clusters which are smaller than 2 × 2 pixels. It also erases long strings which are one pixel in width. Next, using FAC = 5 and CNUM = 4 (in order to retain continuity), all remaining clusters in the size range 2 × 2 to 3 × 3 are reduced to residues (isolated elements surrounded by zero background). The residues which result are then transferred to a separate array where they are used to label the original clusters from which

they were derived using the original array as a reference. The labeled clusters are accumulated by Boolean ORing into an accumulator as all 16 bilevel images are processed. The contents of this accumulator are then ORed into each of the original bilevel images (Fig. 3).

At this point, it is possible to use the SUPRPIC DECOMPRESS

command to convert the 16 bilevel images back to a single gray level image. This is done by examining each picture point in each of the images shown in Fig. 3 and counting the number of pixels which are black. This count then becomes the integer which represents the corresponding picture point in the gray level image. The resultant gray level image is shown in Fig. 4. Since this 4 bit image has a narrower dynamic range than the original 8 bit image, it appears slightly out of focus. This effect may be ameliorated by utilizing 32 or even 64 thresholds in performing the cellular logic transforms described above. Usually this is unnecessary so that the added computational expense is not required.
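The COMPRESS/DECOMPRESS round trip described above amounts to threshold decomposition followed by a per-pixel count. A minimal sketch follows, assuming equispaced thresholds over the central 99.8 percent of the intensity distribution; the function names and the exact threshold placement are assumptions, not SUPRPIC internals.

```python
import numpy as np

def compress(gray, n=16, coverage=0.998):
    """Threshold an 8-bit image into n bilevel images at n equispaced
    levels spanning the 99.8 percent range of its PDF (a sketch)."""
    tail = 100 * (1 - coverage) / 2
    lo, hi = np.percentile(gray, [tail, 100 - tail])
    thresholds = np.linspace(lo, hi, n)
    return [(gray >= t).astype(np.uint8) for t in thresholds]

def decompress(bilevels):
    """Count, at each picture point, how many bilevel images are
    black there; the count is the output gray value."""
    return np.sum(np.stack(bilevels), axis=0).astype(np.uint8)
```

Because the thresholded images are nested, decompress(compress(g)) is a 16-level requantization of g, which is why the reconstructed 4 bit image looks slightly softer than the 8 bit original.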

III. RESULTS

In the image shown in Fig. 1, the viewer desires to analyze

the distribution of cell nuclei and study their interrelationships. This is a difficult task using the original image as some cell nuclei appear at low contrast and are, therefore, almost invisible, while others appear at such high contrast that they clump together in those areas of the image which are dark.

In Fig. 4, it is readily seen that all nuclei have been rectified

to equal contrast and are far more visible to the observer for analysis. Still further enhancement has been carried out using SUPRPIC by constructing the deformation retract of the contents of the accumulator (Fig. 5) and ORing this result into bilevel images from threshold 00 to threshold 09. By using only thresholds 00-09, the deformation retract is made to appear somewhat lighter in the gray level image which results when the DECOMPRESS command is applied to yield Fig. 6. The result given in Fig. 6 allows the trained observer to analyze the "architecture" of the original image. Fig. 6 shows both cell nuclei (at high contrast) and the deformation retract at lower contrast. It should be noted that the apparent cell boundaries in this figure merely provide an estimate of the actual location of cell boundaries. Even this estimate has already been found useful in the computer analysis of tissue structure.

[Fig. 3 panels, headed: NUCLEI FOUND AND ORRED INTO IMAGES FOR THRS 02-13 -SUPRPIC- 08JUN81]

Fig. 3. Result when small clusters of binary ones in the bilevel images for thresholds 8-16 are labeled and ORed into the bilevel images at all other thresholds.

Fig. 4. Gray level image reconstructed from the 16 bilevel images generated by the ORing operation illustrated in Fig. 3.

Fig. 5. Deformation retract generated from the centroids of the labeled clusters found in the bilevel images generated at thresholds 8-16.

Fig. 6. Gray level image generated from the bilevel images produced when the deformation retract (Fig. 5) is ORed into those bilevel images which correspond to thresholds 0-9.

IV. DISCUSSION

This correspondence is not intended to present a thorough analysis of all possible gray level image processing techniques available using the cellular logic regime. These are still under investigation and will be the subject of a further, more elaborate report. However, it is felt that, at this time, a preview of the utility of cellular logic in gray level image processing is worth presenting to the image analysis community.

Further efforts along these lines will include: 1) a more

rigorous analysis of the specific effect of cellular logic image processing using spatial frequency analysis to compare the output versus the input, 2) a systematic presentation of the various types of transforms available to the user of cellular logic image enhancement, and 3) a comparison to other image enhancement techniques such as digital spatial filtering, max-min filtering, and three-dimensional cellular logic filtering.

The computational expense of cellular logic filtering is not

great, even when using general-purpose computers. In our laboratory, a Perkin-Elmer 3230 having 4K of cache is utilized which, when inner loops are coded in assembly language, requires only 30 µs/picture point per CLT. Thus, only 8 s are required to operate upon a 512 × 512 bilevel image for one iteration. In the case described above, the major computational burden (20 iterations/bilevel image at each of thresholds 8-16) was required in the computation of the result shown in Fig. 3. The total time for this computation (including a negligible amount of time for nine Boolean OR's) was 20 min. Reconstruction of the gray level image from the bilevel images (Fig. 4) requires less than 1 min. Construction of the deformation retract uses 40 total iterations (5 min), while the final reconstruction (Fig. 6) adds another minute. Thus, in approximately 2 h on a general-purpose 32 bit minicomputer, all processing may be completed. If a more powerful mainframe, e.g., the CDC 7600, is utilized, as has been done recently for purposes of comparison, the result is a reduction in computational time to less than 5 min. With the special-purpose hardware available in our laboratory which performs one 512 × 512 iteration on a bilevel image in 0.3 s, the computational time is on the order of 1 min.
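The 8 s figure follows directly from the stated per-point time; a quick arithmetic check:

```python
points = 512 * 512            # picture points in one bilevel image
per_point = 30e-6             # 30 microseconds per picture point
t_iter = points * per_point   # seconds for one CLT iteration
print(round(t_iter, 2))       # 7.86, i.e. "only 8 s" per iteration
```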

ACKNOWLEDGMENT

The work reported here was performed using the Biomedical Image Processing Unit of the Department of Radiation Health, School of Public Health, University of Pittsburgh, Pittsburgh, PA. The manuscript was typed by R. Kabler of Executive Suite, Tucson, AZ.

REFERENCES

[1] K. Preston, Jr., "Image processing software: A survey," in Progress in Pattern Recognition, L. N. Kanal and A. Rosenfeld, Eds. Amsterdam, The Netherlands: North-Holland, 1981.

[2] -, "Image manipulative languages: A preliminary survey," in Pattern Recognition in Practice, E. S. Gelsema and L. N. Kanal, Eds. Amsterdam, The Netherlands: North-Holland, 1980.

[3] D. Graham and P. E. Norgren, "The diff3 analyzer," in Real-Time Medical Image Processing, M. Onoe, K. Preston, Jr., and A. Rosenfeld, Eds. New York: Plenum, 1980.

[4] T. J. Fountain, "CLIP4: A progress report," in Languages and Architectures for Image Processing, M. J. B. Duff and S. Levialdi, Eds. London, England: Academic, 1981.

[5] D. J. Hunt, "The ICL DAP and its application to image processing," in Languages and Architectures for Image Processing, M. J. B. Duff and S. Levialdi, Eds. London, England: Academic, 1981.

[6] Y. Nakagawa and A. Rosenfeld, "A note on the use of local min and max operations in digital picture processing," IEEE Trans. Syst., Man, Cybern., vol. SMC-8, pp. 632-635, 1978.

[7] V. Goetcherian, "From binary to grey tone image processing using fuzzy logic concepts," Pattern Recognition, vol. 12, no. 1, pp. 7-15, 1980.
