
1644 J. Opt. Soc. Am. A/Vol. 2, No. 10/October 1985

Image gathering and processing: information and fidelity

Friedrich O. Huck and Carl L. Fales

NASA Langley Research Center, Hampton, Virginia 23665

Nesim Halyo and Richard W. Samms

Information & Control Systems, Inc., Hampton, Virginia 23666

Kathryn Stacy

Computer Sciences Corporation, Hampton, Virginia 23666

Received September 21, 1984; accepted May 13, 1985

In this paper we formulate and use information and fidelity criteria to assess image gathering and processing, combining optical design with image-forming and edge-detection algorithms. The optical design of the image-gathering system revolves around the relationship among sampling passband, spatial response, and signal-to-noise ratio (SNR). Our formulations of information, fidelity, and optimal (Wiener) restoration account for the insufficient sampling (i.e., aliasing) common in image gathering as well as for the blurring and noise that conventional formulations account for. Performance analyses and simulations for ordinary optical-design constraints and random scenes indicate that (1) different image-forming algorithms prefer different optical designs; (2) informationally optimized designs maximize the robustness of optimal image restorations and lead to the highest-spatial-frequency channel (relative to the sampling passband) for which edge detection is reliable (if the SNR is sufficiently high); and (3) combining the informationally optimized design with a 3 by 3 lateral-inhibitory image-plane-processing algorithm leads to a spatial-response shape that approximates the optimal edge-detection response of (Marr's model of) human vision and thus reduces the data preprocessing and transmission required for machine vision.

1. INTRODUCTION

After years of assessing photographic images in terms of information content and of image fidelity and related criteria of visual quality,1-4 Linfoot5 commented (in 1958) that "If the arithmetical recording of optical images were a standard practice today, instead of a prospect for the future opened up by the advent of the fast computing machines, we would go on to add that informationally optimized designs were always to be preferred. When, however, the process of recording (under the name of image interpretation) has to be carried out in the nervous systems of human beings, the choice between, say, a fidelity-maximized design and an informationally optimized one ought to be made on the basis of the experimentally determined needs and capabilities of the human interpreter."

Digital image processing has become a standard practice today and will continue to grow in importance as the computational capacity of computers increases and the goals of image processing become more ambitious. In addition to processing image data for the human interpreter, it has also become increasingly of interest to process data for interpretation by computer algorithms, that is, to develop machine vision. Thus Linfoot's quest to match design criteria to task continues to be important as technology progresses and objectives change.

One important task is to combine the optical design of digital image-gathering systems with data-processing algorithms in order to improve end-to-end performance. For example, edge detection is used as a first step in constructing primal sketches for machine vision.7-12 This step is ordinarily performed with little regard to the performance characteristics of the image-gathering system. However, optimum edge-detection responses can be closely approximated with a minimal amount of data processing if the optical design, including lens apodization, and the lateral-inhibitory algorithm are properly combined.13 Furthermore, data transmission can be substantially reduced if the processing is performed during image acquisition (i.e., in the image plane) rather than in the computer.13 Natural vision seems to do this to acquire, transmit, and process visual information efficiently.13-15

The formation of photographic images that Linfoot1-5 studied consists of a single process that can be mathematically expressed as the convolution of the object with the point-spread function of the imaging system. The resultant images are degraded by blurring (e.g., diffraction and aberrations) and random noise (e.g., film granularity).

In this paper we combine the study of two separate processes6: image gathering, the design of line-scan and sensor-array imaging systems, and image processing, the algorithms used to form images or extract spatial features (e.g., edges) from sampled data. The reconstructions are degraded by aliasing (and high-frequency artifacts) as well as by blurring and noise. These degradations have received considerable attention in the literature, for example, Refs. 16 to 31 (Refs. 16 to 24 emphasize image gathering, and Refs. 25 to 28 emphasize image processing; only Refs. 29 to 31 combine image gathering with processing). However, despite this general recognition of aliasing as a major source of image degradation, it appears nevertheless that the insufficient sampling that causes aliasing has not been accounted for in the image-restoration algorithms, such as the optimal Wiener filter, that are found in the literature.25-28

We investigate the fidelity of images and edges that can be optimally restored from the sampled data obtained with informationally optimized image-gathering systems and compare the resultant performance with that obtained with other designs and algorithms. The optical-design optimization revolves around the relationship among sampling passband, spatial response, and rms signal-to-noise ratio (SNR). To account for this relationship, we formulate information, fidelity, and the Wiener filter as a function of insufficient sampling as well as of the spatial response and noise that conventional formulations account for.

0740-3232/85/101644-23$02.00 © 1985 Optical Society of America

The data-processing algorithms considered in this paper metamorphose the spatial detail in different ways without loss of information. Since optimal-restoration algorithms depend on the often unknown statistical properties of the radiance field, we also examine the robustness of the combined optical-response and data-processing algorithm to erroneously assumed radiance-field statistics.

We do not attempt here to optimize the spatial-response shape of the image-gathering system; this problem is treated elsewhere: Fales et al.24 for maximizing information capacity and Park and Schowengerdt29 for minimizing the mean-squared error of image reconstructions. Instead, we use a set of Gaussian-response shapes: They are nearly informationally optimum for the constraints ordinarily imposed on the design of image-gathering systems by the realizability of optical-aperture responses,24 and the difference-of-Gaussian (DOG) responses approximate the optimal edge-detection response of Marr's model of human vision.7-9

Edge detection may be regarded as the extreme limit of edge enhancement, which is often used to improve the visibility of boundaries and fine detail in low-contrast scenes. Thus our assessment of image fidelity and edge fidelity spans the range of objectives that are ordinarily encountered in image processing.

It may be reasonable to anticipate, as did Linfoot,5 that informationally optimized image-gathering systems offer the opportunity for extracting optimally the widest range of spatial features. One of Shannon's theorems32 (the 21st) states that the communication-channel design that maximizes the information density of the acquired (sufficiently sampled) data also maximizes the fidelity of optimally restored representations of the continuous-input source. More specifically, Frieden33 shows how the spectral information density of the image data limits the mean-squared restoration error (MSRE) of the images that are optimally restored with the Wiener filter,34,35 and he concludes that the MSRE should monotonically decrease as the total acquired information density increases. However, both Shannon and Frieden constrained the upper limit of system performance only by SNR and bandwidth, whereas the performance of practical image-gathering systems is also constrained by the realizability of optical-aperture responses. This additional constraint leads to the inevitable trade-off between aliasing and blurring that Shannon and Frieden did not account for.

2. IMAGE GATHERING AND PROCESSING

A. Formulation
Figure 1 illustrates the space-invariant (or isoplanatic) image-gathering and -processing system analyzed in this paper. Image gathering converts a continuous (incoherent) radiance field L(x, y) into a sampled signal s(x, y), and image processing converts this signal into a continuous reconstruction R(x, y). The signal s(x, y) and the reconstruction R(x, y) are defined by the expressions

s(x, y) = [K L(x, y) * τ_g(x, y) + N(x, y)] Ш(x, y)   (1)

and

R(x, y) = s(x, y) * K⁻¹ τ_p(x, y)   (2a)

       = {[L(x, y) * τ_g(x, y) + K⁻¹ N(x, y)] Ш(x, y)} * τ_p(x, y),   (2b)

where K is the steady-state gain of the (linear) radiance-to-signal conversion, τ_g(x, y) and τ_p(x, y) are the spatial responses of the image-gathering system and the processing algorithm, respectively, N(x, y) is the (additive) sensor noise, * denotes spatial convolution, and Ш(x, y) denotes sampling in the x, y rectangular coordinate system that is used as reference for the imaging process. Fales et al.24 have presented a detailed theoretical treatment of the relationship between sensitivity, which depends on K and N(x, y), and the normalized spatial response τ_g(x, y). (The noise that may be introduced by forming an image, e.g., film granularity, is neglected.)
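Equation (1) can be sketched numerically: blur a fine-grid stand-in for the continuous radiance field with a Gaussian response, add sensor noise, and retain one value per sampling interval for the comb function Ш(x, y). All array sizes and parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

fine = 4                        # fine cells per sampling interval (X = Y = 1)
L = rng.standard_normal((64 * fine, 64 * fine))   # zero-mean radiance field

K = 1.0                         # steady-state radiance-to-signal gain
beta = 0.6                      # optical spread, in sampling-interval units
sigma_N = 1.0 / 32              # sensor-noise rms, for an rms SNR near 32

# K L(x, y) * tau_g(x, y): a Gaussian blur stands in for the optical response.
blurred = K * gaussian_filter(L, sigma=beta * fine, mode='wrap')

# + N(x, y), then the comb function: keep one value per sampling interval.
s = (blurred + sigma_N * rng.standard_normal(blurred.shape))[::fine, ::fine]

print(s.shape)                  # the sampled signal on the 64-by-64 lattice
```

The `[::fine, ::fine]` slice is the discrete counterpart of multiplying by Ш(x, y): only the lattice values survive, which is what later produces the sampling sidebands.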

Taking the Fourier transform of R(x, y) given by Eq. (2b) yields the spatial-frequency representation of the reconstruction

R(ν, ω) = {[L(ν, ω) τ_g(ν, ω) + K⁻¹ N(ν, ω)] * Ш(ν, ω)} τ_p(ν, ω),   (2c)

where L(ν, ω) and N(ν, ω) are the spatial-radiance and noise transforms, respectively, and τ_g(ν, ω) and τ_p(ν, ω) are the spatial-frequency responses of the image-gathering system and the processing algorithm, respectively. The sampling function for a rectangular lattice is given by

Ш(ν, ω) = Σ_{m=−∞}^{∞} Σ_{n=−∞}^{∞} δ(ν − m/X, ω − n/Y)

        = δ(ν, ω) + Ш_{≠0,0}(ν, ω),

where X, Y are the sampling intervals and the term Ш_{≠0,0}(ν, ω) accounts for the sampling sidebands. The associated sampling passband is

B = {(ν, ω): |ν| < 1/(2X), |ω| < 1/(2Y)},

and its area is

|B| = 1/(XY).

Fig. 1. Model of image gathering and processing.


Expanding Eq. (2c) yields

R(ν, ω) = R_s(ν, ω) + R_a(ν, ω) + R_n(ν, ω),   (2d)

where

R_s(ν, ω) = L(ν, ω) τ_g(ν, ω) τ_p(ν, ω),

R_a(ν, ω) = [L(ν, ω) τ_g(ν, ω) * Ш_{≠0,0}(ν, ω)] τ_p(ν, ω),

R_n(ν, ω) = K⁻¹ [N(ν, ω) * Ш(ν, ω)] τ_p(ν, ω).

Taking the inverse Fourier transform of R(ν, ω) yields

R(x, y) = R_s(x, y) + R_a(x, y) + R_n(x, y).   (2e)

This formulation of the reconstruction R(x, y) distinguishes among three components: R_s(x, y), the desired signal component; R_a(x, y), the sampling sideband components; and R_n(x, y), the sensor noise component.

We assume that the radiance field L(x, y) is a random process, effectively confined to some isoplanatism patch A centered at x = y = 0. Since it is the variation of L(x, y) from the mean radiance level of patch A that is of interest, we can let the mean of L(x, y) be zero. The Wiener spectrum (or power spectral density) of L(x, y) can then be approximated by1,2

Φ_L(ν, ω) = |A|⁻¹ ⟨|L(ν, ω)|²⟩,

where the angle brackets indicate that |L(ν, ω)|² has been averaged over the ensemble of radiance fields to which L(x, y) belongs. The associated variance of L(x, y) is

σ_L² = ∫∫ Φ_L(ν, ω) dν dω.

Our treatment of the signal component R_s(x, y) and the sampling sideband components R_a(x, y) is based on the assumption that the radiance field L(x, y) is a wide-sense stationary process. (See Appendix A for details.) With this assumption, we can express the Wiener spectrum of the reconstruction R(x, y),

Φ_R(ν, ω) = |A|⁻¹ ⟨|R(ν, ω)|²⟩,

as

Φ_R(ν, ω) = {[Φ_L(ν, ω) |τ_g(ν, ω)|² + K⁻² Φ_N(ν, ω)] * Ш(ν, ω)} |τ_p(ν, ω)|²   (3a)

         = Φ_s(ν, ω) + Φ_a(ν, ω) + Φ_n(ν, ω),   (3b)


where

Φ_s(ν, ω) = Φ_L(ν, ω) |τ_g(ν, ω)|² |τ_p(ν, ω)|²,

Φ_a(ν, ω) = [Φ_L(ν, ω) |τ_g(ν, ω)|² * Ш_{≠0,0}(ν, ω)] |τ_p(ν, ω)|²,

Φ_n(ν, ω) = K⁻² [Φ_N(ν, ω) * Ш(ν, ω)] |τ_p(ν, ω)|².

The Wiener spectrum of the reconstructed white noise can also be given by24

Φ_n(ν, ω) = |B|⁻¹ K⁻² σ_N² |τ_p(ν, ω)|²,

where

σ_N² = ∫∫ Φ_N(ν, ω) dν dω

is the variance of the sensor noise. [For the latter formulation of Φ_n(ν, ω) to be valid for line-scan as well as sensor-array imaging systems, the effects of filtering on noise, which reduces it, and of undersampling, which increases it, must be negligible.]

B. Image-Gathering Responses
Conventional image-gathering systems consist of an objective lens (or lens system) and some sort of photon-detection and sampling mechanism. The most common mechanisms are sensor-array and line-scan devices. The lens and the photosensor apertures are basically low-pass spatial-frequency filters. The spatial-frequency response of the image-gathering system, which is the product of these two low-pass-filter responses, ordinarily decreases smoothly with increasing spatial frequency until the lens diffraction limit is reached.

The low-pass frequency response of conventional image-gathering systems can be changed to a bandpass frequency response by lateral-inhibitory (or neighborhood) signal processing, as is depicted in Fig. 2. It is convenient here, because we are interested in edge detection as well as image formation, to represent the image-gathering responses by the DOG function

τ_g(x, y) = (1/(2πβ²)) exp(−r²/2β²) − (W/(2π(aβ)²)) exp[−r²/2(aβ)²],   (4a)

τ_g(ν, ω) = exp[−2(πβρ)²] − W exp[−2(πaβρ)²],   (4b)

where r² = x² + y², ρ² = ν² + ω², a = 1.6, and the center-to-center spacing (or sampling interval) of the sensor array is X = Y = 1, so that the sampling passband B = {(ν, ω): |ν| < 0.5, |ω| < 0.5}.
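As a quick numerical check on Eqs. (4), the sketch below evaluates the DOG frequency response of Eq. (4b) along a radial cut (the particular β and grid values are illustrative assumptions): W = 0 gives the monotone low-pass response, while W = 1 cancels the response at dc and yields a bandpass shape.

```python
import numpy as np

def tau_g(rho, beta=0.6, W=0.0, a=1.6):
    """DOG spatial-frequency response of Eq. (4b); rho^2 = nu^2 + omega^2."""
    return (np.exp(-2 * (np.pi * beta * rho)**2)
            - W * np.exp(-2 * (np.pi * a * beta * rho)**2))

rho = np.linspace(0.0, 1.5, 301)
lowpass = tau_g(rho, W=0.0)        # monotone low-pass shape
bandpass = tau_g(rho, W=1.0)       # zero at dc, peak off axis

print(lowpass[0], bandpass[0])     # 1.0 0.0 -- W = 1 cancels the dc response
print(rho[np.argmax(bandpass)])    # the bandpass peak sits at a nonzero frequency
```

The dc cancellation for W = 1 is what converts the low-pass optical response into the edge-enhancing bandpass response discussed below.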

Fig. 2. Sensor-array image-gathering system with lateral-inhibitory signal processing.


Fig. 3. Responses of the image-gathering system. (a) Spatial response, W = 0. (b) Spatial frequency response, W = 0. (c) Spatial frequency response, W = 1.

Appendix B demonstrates how the response given by Eqs. (4) for W = 1 can be implemented by combining the optical design, including lens apodization, with the lateral-inhibitory processing algorithm.

The optical-design parameter β controls the high-frequency response, as is shown in Fig. 3(b) for W = 0. These Gaussian-response shapes have been shown to provide favorable trade-offs between aliasing and blurring for the range of SNR's (from 10 to 100) that are commonly encountered in practice19,22,23 and thus to be nearly informationally optimum for the design constraints that are ordinarily imposed on practically realizable image-gathering systems.24 However, there is no single, practically realizable, best response. For example, shaping the lens's optical transfer function (OTF) by apodization36,37 to enhance the response inside and suppress it outside the sampling passband also reduces the amount of light reaching the photosensor and hence the SNR.

The neighborhood-weighting parameter W controls the low-frequency response. Figure 3(c) illustrates the effect of neighborhood processing with W = 1 on the responses shown in Fig. 3(b). The effect on the noise is accounted for by multiplying the photosensor noise term N(x, y) by (1 + W²/8)^{1/2}.13

The DOG function for W = 1 is similar in shape to Marr's ∇²G(x, y) function [where ∇² is the Laplacian operator ∂²/∂x² + ∂²/∂y² and G(x, y) is the two-dimensional Gaussian distribution].8,9 According to his model, the highest-spatial-frequency channel of human vision corresponds to β = 6.3 × 10⁻³ deg for the center-to-center distance between the eye's foveal photoreceptors of 0.008 deg,13 or β = 0.76 for the normalized sampling intervals X = Y = 1.

Edges can be detected either by digital processing in the computer or by image-plane processing (as depicted in Fig. 2). Digital filtering often requires large (up to 31-by-31) filter masks to approximate the DOG function satisfactorily. However, this response can also be obtained, as is shown in Appendix B, by combining merely a 3-by-3 element operation with the optical design. One can thereby reduce the number of computations by as much as a factor of 100. If the processing is done in the image plane, then one can also substantially reduce signal dynamic range (for analog-to-digital conversion) and data transmission. The required sensor-array technology is becoming available. Several silicon charge-coupled devices have been developed38-40 that perform 3-by-3 or more element operations during readout.

The mechanism of natural vision similarly combines optical response with signal processing. The angular sensitivity of the eye's photoreceptors (i.e., the Stiles-Crawford effect41), which produces the same effect as lens apodization,42,43 enhances the combined pupil and photoreceptor spatial-frequency response within the photoreceptor sampling passband and suppresses it outside the passband, while the lateral-inhibitory preprocessing made possible by the interconnection of neighboring photoreceptors in the retina suppresses the low-frequency response.13
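The noise factor (1 + W²/8)^{1/2} is exactly what a 3-by-3 operation with a unit center weight and −W/8 on each of the eight neighbors produces for white sensor noise, since the output noise variance is the sum of the squared mask weights, 1 + 8(W/8)². The sketch below is a hypothetical mask in that spirit (the paper's actual operation is developed in its Appendix B) and checks the factor numerically.

```python
import numpy as np
from scipy.ndimage import convolve

W = 1.0
mask = -(W / 8) * np.ones((3, 3))   # -W/8 on the eight neighbors...
mask[1, 1] = 1.0                    # ...and a unit center weight

rng = np.random.default_rng(1)
noise = rng.standard_normal((512, 512))          # unit-variance white noise
processed = convolve(noise, mask, mode='wrap')

gain = processed.std() / noise.std()
print(gain, np.sqrt(1 + W**2 / 8))   # both close to 1.06 for W = 1
```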

C. Radiance Field
We assume that the radiance field L(x, y) is both homogeneous and isotropic, so that the variance of L(x, y) is independent of (x, y).28 Furthermore, we assume that the autocorrelation of L(x, y) is44,45

φ_L(x, y) = σ_L² exp(−r/μ_r).   (5a)

The associated Wiener spectrum, which is the circularly symmetric Fourier transform (i.e., the Hankel transform) of φ_L(x, y), is44,45

Φ_L(ν, ω) = 2π σ_L² μ_r² [1 + (2πμ_r ρ)²]^{−3/2}.   (5b)

Equations (5) can be derived by assuming that L(x, y) is a random set of two-dimensional pulses whose separation r obeys the Poisson probability density function with the (expected) mean separation (or spatial detail) μ_r and whose


magnitude obeys the Gaussian probability density function with the (expected) mean μ_L and variance σ_L².45

Equation (5b) can be used to generate various Wiener spectra simply by changing the parameter μ_r. The set of Wiener spectra shown in Fig. 4 for several μ_r can reasonably be expected to account for many natural scenes.44 The μ_r considered here range from about one order of magnitude smaller to one order of magnitude larger than the sampling intervals X = Y = 1. As a result, the Wiener spectra range within the sampling passband from nearly a constant magnitude to a magnitude that decreases with frequency ρ at the rate ρ⁻³.
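Equation (5b) is easy to probe numerically. The sketch below (parameter values illustrative) confirms that the spectrum integrates to the variance σ_L² and that, within the sampling passband, it is nearly flat for small μ_r but falls off roughly as ρ⁻³ for large μ_r.

```python
import numpy as np
from scipy.integrate import quad

def phi_L(rho, mu_r, sigma_L=1.0):
    """Wiener spectrum of Eq. (5b) as a function of radial frequency rho."""
    return 2 * np.pi * sigma_L**2 * mu_r**2 / (1 + (2 * np.pi * mu_r * rho)**2)**1.5

# Integrating over the whole frequency plane recovers the variance sigma_L^2.
var, _ = quad(lambda r: 2 * np.pi * r * phi_L(r, 1.0), 0, np.inf)
print(round(var, 4))                      # ≈ 1.0

# Within the passband (rho <= 0.5): nearly flat for small mu_r,
# roughly rho**-3 falloff for large mu_r.
for mu in (0.1, 10.0):
    print(mu, phi_L(0.5, mu) / phi_L(0.05, mu))
```

The ratio Φ_L(0.5)/Φ_L(0.05) stays near unity for μ_r = 0.1 but drops close to the ρ⁻³ prediction of (0.05/0.5)³ = 10⁻³ for μ_r = 10.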

A significant departure from this set of Wiener spectra could result from periodic components in the radiance field. The Wiener spectrum of a radiance field with a periodic as well as a random component is the sum of two parts: the continuous curve for the random component (as shown in Fig. 4) and a series of impulses for the periodic component46 (not accounted for in this study). The effect of periodic components, usually referred to as moiré patterns in optical-system analyses, has been illustrated in numerous publications (e.g., Biberman21 and Rosenfeld and Kak28).

3. INFORMATION DENSITY

A. Formulation
Information theory treats the reconstruction R(x, y) as a received message giving information about the incident radiance field L(x, y) and accounts for degradations as loss of information. To formulate the information contained in R(x, y), it must be assumed that the radiance field L(x, y) and the noise N(x, y) are Gaussian processes, in addition to being wide-sense stationary. The unsampled component R_s(x, y) and the sampling sideband components R_a(x, y) of the reconstruction have been treated as statistically independent, Gaussian processes. Thus the information contained in R(x, y) at a given frequency, or spectral information density, is47

h(ν, ω) = log₂ [1 + Φ_s(ν, ω) / (Φ_a(ν, ω) + Φ_n(ν, ω))].   (6)

The (total) information density h is obtained by summing h(ν, ω) over all spatial frequencies that provide nonredundant information. This restricts the summation to the sampling passband B, since the random variables L(ν, ω)τ_g(ν, ω) * Ш(ν, ω) and N(ν, ω) * Ш(ν, ω) are periodic. Furthermore, one-half of the sampling passband is redundant. Thus, for a sufficiently smooth h(ν, ω), the information density becomes

h = (1/2) ∫∫_B h(ν, ω) dν dω

  = (1/2) ∫∫_B log₂ {1 + Φ_s(ν, ω) / [Φ_a(ν, ω) + Φ_n(ν, ω)]} dν dω   (7a)

  = (1/2) ∫∫_B log₂ {1 + Φ_L′(ν, ω)|τ_g(ν, ω)|² / [Φ_L′(ν, ω)|τ_g(ν, ω)|² * Ш_{≠0,0}(ν, ω) + |B|⁻¹(Kσ_L/σ_N)⁻²]} dν dω,   (7b)


where Φ_L′(ν, ω) = σ_L⁻² Φ_L(ν, ω) and Kσ_L/σ_N is the rms SNR.
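Equation (7b) can be approximated on a discrete frequency grid, with the aliasing convolution Φ_L′|τ_g|² * Ш_{≠0,0} evaluated as a finite sum of displaced sidebands. The sketch below is illustrative only (the grid size, sideband range, and parameter values are assumptions, with X = Y = 1 so that |B| = 1).

```python
import numpy as np

a, beta, W = 1.6, 0.6, 0.0      # DOG parameters of Eq. (4b), low-pass case
SNR = 32.0                      # rms SNR, K sigma_L / sigma_N
mu_r = 1.0                      # mean spatial detail of Eq. (5b)

def tau_g_sq(nu, om):
    """|tau_g|^2 from Eq. (4b)."""
    rho2 = nu**2 + om**2
    t = np.exp(-2 * (np.pi * beta)**2 * rho2) - W * np.exp(-2 * (np.pi * a * beta)**2 * rho2)
    return t**2

def phi_Lp(nu, om):
    """Normalized Wiener spectrum Phi_L' of Eq. (5b), divided by sigma_L^2."""
    rho2 = nu**2 + om**2
    return 2 * np.pi * mu_r**2 / (1 + (2 * np.pi * mu_r)**2 * rho2)**1.5

n = 101
grid = np.linspace(-0.5, 0.5, n)          # the sampling passband B
nu, om = np.meshgrid(grid, grid)

signal = phi_Lp(nu, om) * tau_g_sq(nu, om)
alias = sum(phi_Lp(nu - m, om - k) * tau_g_sq(nu - m, om - k)
            for m in range(-3, 4) for k in range(-3, 4) if (m, k) != (0, 0))
noise = 1.0 / SNR**2                      # |B|^-1 (K sigma_L / sigma_N)^-2

h = 0.5 * np.mean(np.log2(1 + signal / (alias + noise)))
print(h)                                  # information density h, bits
```

The ±3 sideband range suffices here because |τ_g|² decays rapidly beyond the passband; a larger range would be needed for smaller β.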

The reconstruction R(x, y) is a wide-sense stationary process only if the processing passband is contained within the sampling passband B. However, a careful formulation of information for sampling systems based on an extension of the approach taken by Fellgett and Linfoot1 reveals that stationarity of R(x, y) is not required for the information density h to be independent of the processing response τ_p(ν, ω), provided, of course, that τ_p(ν, ω) ≠ 0 for (ν, ω) ∈ B. If the response τ_p(ν, ω) does not extend to the sampling passband B, that is, if τ_p(ν, ω) = 0 for some (ν, ω) ∈ B, then the limits B on Eqs. (7) must be replaced by the processing passband. The actual shape of τ_p(ν, ω) is not important so far as information transfer is concerned, provided that the above constraint is satisfied and the transfer does not introduce noise (e.g., film granularity). The processing can, in principle, always be continued until the information is available again in its original form. For example, in the limit as the response τ_p(ν, ω) approaches unity, the reconstruction R(x, y) reverts to the original sampled signal s(x, y) given by Eq. (1).

The image-gathering response that maximizes the information density h is

τ_g°(ν, ω) = 1 for (ν, ω) ∈ B, and 0 elsewhere.   (8)

The corresponding maximum information density is

h° = (1/2) ∫∫_B log₂ [1 + |B| (Kσ_L/σ_N)² Φ_L′(ν, ω)] dν dω.   (9)

If the Wiener spectrum Φ_L(ν, ω) of the radiance field is uniform within the sampling passband (i.e., behaves like white noise), then the maximum information density becomes

h° = (1/2) |B| log₂ [1 + (SNR)²].   (10)

Fig. 4. Wiener spectra of the radiance field.

This expression is the familiar channel capacity of a system that is bounded only by bandwidth and rms SNR (SNR = Kσ_L/σ_N).
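Equation (10) in numbers: with X = Y = 1 (so |B| = 1), a minimal sketch of the channel-capacity bound over the range of SNR's the paper cites as common in practice (10 to 100).

```python
import math

def h_max(snr, area=1.0):
    """Eq. (10): h° = (1/2) |B| log2(1 + SNR^2), in bits per sampling interval."""
    return 0.5 * area * math.log2(1 + snr**2)

for snr in (10, 32, 100):
    print(snr, round(h_max(snr), 2))   # 10 -> 3.33, 32 -> 5.0, 100 -> 6.64
```

Note that each doubling of the SNR adds only about one bit, which is why the realizable responses of Fig. 6 track the bound with a roughly constant offset.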

B. Computational Results
Figures 5 to 7 compare the information obtained by the practically realizable image-gathering responses shown in Fig. 3 with the theoretically maximum information that could be obtained if the image-gathering system were constrained only by the sampling passband and the SNR; the difference represents the information lost by aliasing and blurring in the presence of noise. This loss of information manifests itself in Fig. 5 by the rapid decrease in the spectral information density h(ν, ω) near the sampling passband. The small extension of h(ν, ω) beyond the sampling passband is redundant and so does not add to the (total) information density h.

Figure 6 demonstrates that the informationally optimized optical design varies with SNR. In general, as the SNR is increased, the optical response should be shaped to decrease aliasing at the (inevitable) cost of increasing blur.

Fig. 5. Spectral information density h(ν, ω) for (a) the theoretically optimal spatial-frequency response, (b) a typical low-pass spatial-frequency response (β = 0.6, W = 0), and (c) a typical bandpass spatial-frequency response (β = 0.6, W = 1). The log curves are given for five mean spatial details μ_r, and the linear three-dimensional perspectives are given for μ_r = 1. Sampling passband, B = {(ν, ω): |ν| < 0.5, |ω| < 0.5}; SNR, Kσ_L/σ_N = 32.

Fig. 6. Information density h versus SNR Kσ_L/σ_N for the theoretically optimal spatial-frequency response τ_g°(ν, ω) and for the typically realizable spatial-frequency responses shown in Fig. 3: (a) β = 1, W = 0; (b) β = 1, W = 1. The difference represents the information lost by aliasing and blurring in the presence of noise. The mean spatial detail μ_r = 1.


Fig. 7. Information density h versus SNR Kσ_L/σ_N and mean spatial detail μ_r for (a) the theoretically optimal spatial-frequency response, (b) a typical low-pass spatial-frequency response, and (c) a typical bandpass spatial-frequency response.

(4b), it turns out that when aliasing is reduced to zero (by letting β → ∞), the resultant blurring reduces the signal to zero as well. As a consequence, there always remains a gap between the typically realizable information density and the theoretically optimum one. However, it would, in practice, be possible to passband-limit the optical response exactly to the sampling passband with an appropriate objective-lens F/No. Aliasing would then be reduced to zero while the signal, although substantially blurred, would still retain some contrast for all the spatial frequencies within the sampling passband. Then, in the limit as the SNR approaches infinity, the practically realizable information density would approach the theoretically optimum one. But the information density would be low for the SNR's usually encountered in practice.²⁴

As the results given in Fig. 6(a) demonstrate, as much as about one half of the theoretically maximum information density is usually lost with practically realizable optical responses because of the degradations caused by aliasing and blurring, even for informationally optimized relationships between sampling passband, spatial response, and SNR. Substantially more information may be lost if this relationship is not optimized.

Linfoot⁵ observed that "quantitatively speaking, most of the information in a high-quality [or high-SNR] image is ordinarily contained in its fine detail." This observation is consistent with the results shown in Figs. 6 and 7. Although the loss of information caused by the dynamic-range compression of low spatial frequencies can be large when SNR's are below 32 or so, this loss of information becomes small for higher SNR's.

Figure 7 also demonstrates that the variation of information density with mean spatial detail reaches its maximum value when the sampling intervals are approximately equal to the mean spatial detail (i.e., when X = Y ≈ μ_r = 1). That is, as can be seen from Fig. 4 (remembering that the sampling passband extends to ν, ω = 0.5), the information density is maximum when the sampling passband is most closely matched to the Wiener spectrum of the scene, nearly regardless of the other image-gathering-system characteristics (i.e., spatial response and SNR).

It is interesting to note here how Linfoot³ and Biberman²² have related, based on their experiences, information and bandwidth to the visual quality of images. First Linfoot: "An optical system can properly be said to be of high quality only if the amount of information contained in its image approaches the maximum possible ..., and it is an agreeable consequence ... that those which are efficient according to this criterion also form images which are sharp and clear in the usual sense of the words." Then Biberman: "... nothing


compares with providing a bright, large structureless display, driven by adequate signal with bandwidth matched to frequency content of the image." The results in Fig. 7 demonstrate that these two attributes of a good design, "informationally optimized" and "high-SNR bandwidth matched," are closely related.

4. IMAGE FIDELITY

A. Formulation
Following Fellgett and Linfoot,¹ Linfoot,³,⁴ and O'Neill,⁴⁸ we use the image-fidelity criterion, which assesses the mean-squared difference between the object radiance field L(x, y) and its image R(x, y), as defined by

f = 1 − ∬_A |L(x, y) − R(x, y)|² dx dy / ∬_A |L(x, y)|² dx dy   (11a)

  = 1 − ∬ |L̂(ν, ω) − R̂(ν, ω)|² dν dω / ∬ |L̂(ν, ω)|² dν dω,   (11b)

(a) Kσ_L/σ_N = 8. (b) Kσ_L/σ_N = 32. (c) Kσ_L/σ_N = 128.

Fig. 8. Information density h and image fidelity f versus optical-design parameter β [Fig. 3(b)] for three SNR's Kσ_L/σ_N. Image fidelity is given for the matched and the two unmatched Wiener restorations and for the sinc, the cubic, and the linear interpolations. The assumed mean spatial detail for the unmatched restoration is μ_r = 1/9, which approximates the scene spectrum as white noise within the sampling passband. One of the two unmatched restorations is done with the parametric Wiener filter, letting γ = 1/4.


(a) Image gathering. (b) Image restoring. (c) Image gathering and restoring.

Fig. 9. Typical response of the image-gathering system, the Wiener restoration, and the combined image-gathering and -restoring process. The sampling intervals, X = Y = 1; the optical-design parameter, β = 0.6; the neighborhood-weighting parameter, W = 0; the SNR, Kσ_L/σ_N = 32; and the mean spatial detail, μ_r = 1.

where again |(·)|² denotes the ensemble average, or expected value, of |(·)|². The image fidelity f and the mean-squared restoration error (MSRE)³³ ε² are related by

f = 1 − ε²/σ_L².   (11c)

(In Appendix A we distinguish between three sources of errors: blurring, aliasing, and noise, and in Appendix D we examine the high-frequency artifacts that may occur in Wiener restorations when insufficient sampling is severe.)
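As a concrete numerical illustration (not part of the paper), the fidelity criterion of Eqs. (11) can be evaluated for discretized fields; the random test field, the separable Gaussian blur, and the noise level below are arbitrary stand-ins for L(x, y), the image-gathering response, and the photosensor noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in radiance field L and a degraded image R: L blurred by a separable
# Gaussian (a surrogate image-gathering response) plus photosensor-like noise.
L = rng.standard_normal((64, 64))
kernel = np.exp(-0.5 * (np.arange(-3, 4) / 1.5) ** 2)
kernel /= kernel.sum()
R = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, L)
R = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, R)
R += 0.05 * rng.standard_normal(L.shape)

# Eq. (11a) as discrete sums: f = 1 - sum|L - R|^2 / sum|L|^2, which is
# Eq. (11c), f = 1 - eps^2 / sigma_L^2, for this zero-mean field.
eps2 = np.mean((L - R) ** 2)
f = 1.0 - eps2 / np.mean(L ** 2)
print(f"image fidelity f = {f:.3f}")   # 1 for a perfect replica; lower when degraded
```

A white-noise field is a worst case for blur, so the fidelity here is low; a perfect replica R = L would give f = 1 exactly.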

The response of the Wiener filter, which by definition minimizes the MSRE ε², may be derived either by a variational method (e.g., Frieden³³) or by following the approach taken by Wiener³⁴ and Helstrom.³⁵ With the latter approach, the terms of ε² are grouped in such a manner that it becomes directly evident that ε² is minimized by the restoration

Ψ_p(ν, ω) = Ψ(ν, ω)

= Φ_L(ν, ω)τ̂_g*(ν, ω) / {[Φ_L(ν, ω)|τ̂_g(ν, ω)|² + K⁻²Φ_N(ν, ω)] ∗ ш(ν, ω)}   (12a)

= Φ_L′(ν, ω)τ̂_g*(ν, ω) / [Φ_L′(ν, ω)|τ̂_g(ν, ω)|² ∗ ш(ν, ω) + (Kσ_L/σ_N)⁻²],   (12b)

where Φ_L′(ν, ω) = σ_L⁻²Φ_L(ν, ω) and Kσ_L/σ_N is the rms SNR. It follows that the maximum image fidelity that can be realized with the Wiener restoration is

f_m = ∬ Φ_L′(ν, ω)τ̂_g(ν, ω)Ψ(ν, ω) dν dω.   (13)

For a sufficiently sampled and noise-free signal, the Wiener filter Ψ(ν, ω) becomes the inverse filter τ̂_g*(ν, ω)|τ̂_g(ν, ω)|⁻². If, in addition to these two conditions, the image-gathering response τ̂_g(ν, ω) also completely encompasses the Wiener spectrum Φ_L(ν, ω), then the reconstruction R(x, y) becomes an exact replica of the radiance field L(x, y). As Sondhi⁴⁹ observed, inverse filtering attempts perfect resolution without regard to noise and aliasing, whereas Wiener filtering minimizes the mean-squared error without regard to resolution.

Some compromise between the disadvantages of inverse and Wiener filtering is commonly made by the simple expedient (among other approaches²⁵) of reducing the influence that the noise spectrum has in shaping the Wiener-filter response. This is done by multiplying the Wiener spectrum of the noise Φ_N(ν, ω) by the parameter γ. The parametric Wiener filter is, therefore, the Wiener filter for γ = 1 and the inverse filter for γ = 0. (We disregard aliasing here as a separate and distinct source of noise, as is done in practice.)
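The parametric Wiener filter described above can be sketched in a few lines of Python (an illustration, not the paper's implementation); the power-law scene spectrum, the Gaussian response, and the flat noise spectrum are assumed models, and the aliasing term (the convolution with the sampling lattice) is neglected for brevity.

```python
import numpy as np

def parametric_wiener(phi_L, tau_g, snr, gamma=1.0):
    """Parametric Wiener filter: gamma = 1 gives the Wiener filter,
    gamma = 0 the inverse filter (noise term suppressed entirely)."""
    phi_N = 1.0 / snr ** 2                     # flat noise spectrum
    return (phi_L * np.conj(tau_g)) / (phi_L * np.abs(tau_g) ** 2 + gamma * phi_N)

nu = np.linspace(0.01, 0.5, 50)                # spatial frequency (cycles/sample)
phi_L = 1.0 / (1.0 + (nu / 0.1) ** 2)          # assumed scene Wiener spectrum
tau_g = np.exp(-(nu / 0.3) ** 2)               # assumed Gaussian gathering response

psi_wiener = parametric_wiener(phi_L, tau_g, snr=32, gamma=1.0)
psi_inverse = parametric_wiener(phi_L, tau_g, snr=32, gamma=0.0)

print(np.allclose(psi_inverse, 1.0 / tau_g))   # → True: gamma = 0 is 1/tau_g
print(np.all(psi_wiener < psi_inverse))        # → True: the noise term tempers the gain
```

Intermediate values of γ trade resolution against noise amplification, which is the compromise the text describes.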

The image-restoration algorithms found in the literature²⁵⁻²⁸ do not, as do Eqs. (12), completely account for the image-gathering process: they explicitly account for the blurring caused by the spatial-frequency-response limitation of optical apertures and for the photosensor noise; however, they do not account for the aliasing caused by insufficient sampling, which often degrades the restored image as much as, and sometimes even more than, the blurring and noise.²²

B. Relationship to Information
Frieden³³ has shown that the Wiener restoration Ψ(ν, ω) and the MSRE ε² can be elegantly expressed as functions of the spectral information density h(ν, ω). To derive these expressions, it is convenient to express h(ν, ω) given by Eq. (6) as

h(ν, ω) = (1/2) log₂ {Φ_s(ν, ω) / [Φ_s(ν, ω) − Φ_L(ν, ω)|τ̂_g(ν, ω)|²]},

where Φ_s(ν, ω) is the Wiener spectrum of the sampled signal, and to express the Wiener filter given by Eqs. (12) as

Ψ(ν, ω) = [1/τ̂_g(ν, ω)] Φ_L(ν, ω)|τ̂_g(ν, ω)|² / Φ_s(ν, ω).

It can then be seen that Ψ(ν, ω) and h(ν, ω) are related by

Ψ(ν, ω) = [1/τ̂_g(ν, ω)][1 − 2^(−2h(ν, ω))],   (14a)

h(ν, ω) = −(1/2) log₂[1 − τ̂_g(ν, ω)Ψ(ν, ω)],   (14b)

and that ε² and h(ν, ω) are related by

ε² = ∬ Φ_L(ν, ω) 2^(−2h(ν, ω)) dν dω.   (15)

Equation (14a) shows how information limits the ability of the Wiener filter to compensate for the response of the image-gathering system, and Eq. (15) shows how information limits the MSRE. Substituting Eq. (14a) into Eq. (13) shows how information limits the maximum-realizable image fidelity to

f_m = ∬ Φ_L′(ν, ω)[1 − 2^(−2h(ν, ω))] dν dω.   (16)
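To make Eqs. (14) concrete, the following Python sketch (illustrative only; the power-law scene spectrum, Gaussian response, and flat noise spectrum are assumptions, and the aliasing term is neglected) checks numerically that the Wiener-filter gain and the spectral information density are consistent.

```python
import numpy as np

nu = np.linspace(0.01, 0.5, 50)            # spatial frequency (cycles/sample)
phi_L = 1.0 / (1.0 + (nu / 0.1) ** 2)      # assumed scene Wiener spectrum
tau_g = np.exp(-(nu / 0.3) ** 2)           # assumed Gaussian gathering response
phi_N = 1.0 / 32 ** 2                      # flat noise spectrum (rms SNR = 32)

# Signal spectrum and Wiener filter (aliasing neglected in this sketch).
phi_s = phi_L * tau_g ** 2 + phi_N
psi = phi_L * tau_g / phi_s

# Spectral information density: h = (1/2) log2[phi_s / (phi_s - phi_L |tau_g|^2)].
h = 0.5 * np.log2(phi_s / (phi_s - phi_L * tau_g ** 2))

# Eq. (14a): the Wiener filter equals (1/tau_g)[1 - 2^(-2h)].
print(np.allclose(psi, (1.0 - 2.0 ** (-2.0 * h)) / tau_g))   # → True
```

The identity holds term by term because 2^(−2h) reduces to Φ_N/Φ_s under these assumptions, so 1 − 2^(−2h) is exactly the fraction of the signal spectrum attributable to the scene.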

C. Computational Results

Conventional Responses
Figure 8 characterizes information density h and image fidelity f versus optical-design parameter β. The images are either restored with a Wiener filter (see Fig. 9) or reconstructed with the sinc,²⁶ cubic,²⁹ or linear²⁶ interpolation (see Fig. 10).

The shape of the Wiener filter Ψ(ν, ω), as shown in Fig. 9(b), depends strongly on the sampling lattice. If the sampling lattice were neglected (as is usually done in practice²⁵⁻²⁸), then the shape of Ψ(ν, ω) would be circularly symmetric like the image-gathering response τ̂_g(ν, ω). However, as can be observed by comparing the spectral information density h(ν, ω) in Fig. 5(b) with the image-gathering and -restoring response τ̂_g(ν, ω)Ψ(ν, ω) in Fig. 9(c), the Wiener filter that accounts for the sampling lattice also utilizes the available information more effectively and, therefore, can be expected to improve image fidelity.

When the scene spectrum is exactly known and accounted for, the matched Wiener restoration compensates so effectively for the image-gathering response that, judging from these results alone, it would appear that neither an informationally (or otherwise) optimized design nor a high SNR is important for high image fidelity. However, the scene statistics are seldom known a priori, nor can they be determined exactly from the aliased and noisy signal a posteriori. Thus it is the robustness of the optimal restoration that becomes an important consideration. To test the common practice of assuming that the scene spectrum is flat, we let the scene parameter μ_r = 1/9 for shaping the Wiener-filter response while the actual μ_r remains either 1/3, 1, or 3. As the curves in Fig. 8 demonstrate, the most robust image restoration is realized for the informationally optimized design.

Figure 8 also illustrates the effect on image fidelity of letting the Wiener-filter parameter γ = 1/4 (as is commonly done in practice²⁵) to enhance the contrast of small detail, again assuming that μ_r = 1/9 instead of the actual 1/3, 1, and 3. This choice of γ has the agreeable (and fortuitous) consequence of slightly improving image fidelity because it compensates to some degree for differences between actual and assumed scene spectra.

Finally, Fig. 8 demonstrates that the fidelities of images formed by the sinc, cubic, and linear interpolations are each maximized by a different optical design. Although these designs are not substantially different from one another, they are substantially different from the designs that maximize the robustness of optimally restored images, especially when


(a) Sinc interpolation.


(b) Cubic interpolation.


(c) Linear interpolation.

Fig. 10. Responses of the reconstruction algorithms.
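The three reconstruction kernels of Fig. 10 can be sketched as follows; the cubic here is the Keys-style cubic-convolution kernel with a = −1/2, a common choice that may differ in detail from the paper's cubic. All three are interpolating kernels, equal to 1 at the origin and 0 at the other sample points.

```python
import numpy as np

def linear_kernel(x):
    """Triangle function for linear interpolation."""
    return np.clip(1.0 - np.abs(x), 0.0, None)

def cubic_kernel(x, a=-0.5):
    """Keys-style cubic-convolution kernel (a = -1/2 is a common choice)."""
    x = np.abs(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    near = x <= 1.0
    far = (x > 1.0) & (x < 2.0)
    out[near] = (a + 2.0) * x[near] ** 3 - (a + 3.0) * x[near] ** 2 + 1.0
    out[far] = a * (x[far] ** 3 - 5.0 * x[far] ** 2 + 8.0 * x[far] - 4.0)
    return out

def sinc_kernel(x):
    """Ideal band-limited interpolation kernel."""
    return np.sinc(x)

# Each kernel equals 1 at x = 0 and 0 at the other sample points, so all
# three reproduce the samples exactly on the sampling lattice.
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
for kernel in (linear_kernel, cubic_kernel, sinc_kernel):
    assert np.allclose(kernel(x), [0.0, 0.0, 1.0, 0.0, 0.0], atol=1e-12)
```

Between sample points the kernels differ in support and smoothness, which is why, as the text notes, each prefers a different optical design.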



Wiener restoration, τ_r(x) = Ψ(x).

Cubic interpolation, τ_r(x) = C(x).


Linear interpolation, τ_r(x) = Λ(x).

(a) Optical design parameter β = 0.3. (b) Optical design parameter β = 0.6.

Fig. 11. One-dimensional simulations for the matched Wiener restoration and the cubic and the linear interpolations. L(x) is the input radiance field; s(x) is the sampled signal; and R(x) is the representation constructed from s(x). The mean spatial detail μ_r = 1, and the SNR Kσ_L/σ_N = 32.

SNR's are high. Furthermore, the image fidelity for the computationally simple cubic interpolation approaches that for the Wiener restoration.

Together, these results show that different image-forming algorithms prefer different optical designs. This preference arises because different optical designs trade off aliasing and blurring in different ways, while different algorithms also metamorphose the spatial detail that is degraded by aliasing and blurring in different ways. The Wiener restoration is less sensitive to blurring because it partly compensates for the image-gathering response, whereas the interpolations are more sensitive to blurring because they do not account for this response.

The one-dimensional simulations shown in Figs. 11 and 12


Wiener restoration, Tr(x) = xp=l).


Mismatched Wiener restoration, τ_r(x) = Ψ(x, μ_r = 1/9).


Mismatched parametric Wiener restoration, τ_r(x) = Ψ(x, μ_r = 1/9, γ = 1/4).

(a) Optical design parameter β = 0.3. (b) Optical design parameter β = 0.6.

Fig. 12. One-dimensional simulations for the matched and the two unmatched Wiener restorations using the same conditions as for Fig. 11. The sampling-frequency artifact in the mismatched Wiener restorations is ordinarily blurred in practice by the two-dimensional image-reconstruction mechanism.

support these general observations. (See Appendix C for differences between two-dimensional and one-dimensional formulations.) As is shown in Fig. 11, the visual quality of the matched Wiener restorations is nearly the same for the two different optical designs β = 0.3 and 0.6, and this quality is closely approached with the cubic interpolations for one of these designs, β = 0.3.

As is shown in Fig. 12, the quality of the mismatched restorations is not significantly degraded for the informationally optimized design β = 0.6; however, it is significantly degraded for β = 0.3. The degradation manifests itself most noticeably in slowly varying regions as a high-frequency artifact, namely, the sampling frequency. This occurs, as is demonstrated in Appendix D, because the mismatched Wiener filter extends

beyond the sampling frequency. (See Fig. 13.) The sampling-frequency artifact is ordinarily blurred in practice, in two-dimensional images, because the image-reconstruction mechanism, say a cathode-ray tube or laser beam, paints the image onto a screen or film with a spot of light that acts, because of its finite size, as a low-pass filter; the goal often is to suppress raster-scanning effects, that is, to paint a structureless image.²²

(a) Wiener restoration, μ_r = 1. (b) Mismatched Wiener restoration, μ_r = 1/9.

Fig. 13. Responses of the Wiener restoration filters for the one-dimensional simulations shown in Fig. 12.

Dynamic-Range Compression
The dynamic range of the signal can be compressed by lateral-inhibitory processing with little loss of information if both the optical design is informationally optimized and the SNR is sufficiently high. (Compare, for example, the results in Figs. 6 and 7 for W = 0 with those for W = 1.) The results shown in Fig. 14 indicate that it is then also possible to restore images with little loss of fidelity.

The improvement in image fidelity that the choice of the Wiener-filter parameter γ can make (by compensating to some degree for differences between the actual and the assumed scene spectra) is now more pronounced than before. (Compare Figs. 8 and 14.) The improvement is very substantial for the low SNR of 8 and is also noticeable for the higher SNR of 32.

As is illustrated by the simulations shown in Fig. 15, the mismatched restorations introduce a very modest degradation for the informationally optimized design β = 0.6 but introduce severe degradations for β = 0.3 caused by the appearance of the sampling frequency as a high-frequency artifact. Furthermore, it may be noted that the mismatched parametric Wiener restoration for β = 0.6 slightly improves the visual resemblance between L(x) and R(x) compared with the nonparametric restoration, consistent with the results shown in Fig. 14.

The compression of the signal dynamic range by lateral-inhibitory processing is proportional to the ratio σ_s(W)/σ_s(W = 0), where σ_s²(W) is the variance of the signal s(x, y; W) given by

σ_s²(W) = ∬ Φ_L(ν, ω)|τ̂_g(ν, ω; W)|² dν dω.

The dynamic-range compression depends, as is shown in Fig. 16, on the radiance-field statistics.
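As an illustrative sketch (not from the paper), a discrete 3 × 3 lateral-inhibitory weighting and its variance-compressing effect on a smooth scene can be demonstrated in Python; the kernel shape, the weighting W = 0.8, and the test scene are assumptions chosen only to mimic the center-surround behavior described above.

```python
import numpy as np

def lateral_inhibition_kernel(W):
    """3x3 center-surround weighting: unit center, surround inhibited by W
    (W = 0: no inhibition; W = 1: zero-mean, DOG-like response)."""
    k = -(W / 8.0) * np.ones((3, 3))
    k[1, 1] = 1.0
    return k

def filter3x3(img, k):
    # Minimal 'same' 3x3 neighborhood sum with zero padding (the kernels
    # here are symmetric, so correlation and convolution coincide).
    p = np.pad(img, 1)
    out = np.zeros_like(img)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

rng = np.random.default_rng(1)
scene = rng.standard_normal((128, 128))
for _ in range(8):                       # heavy smoothing: a coarse-detail scene
    scene = filter3x3(scene, np.ones((3, 3)) / 9.0)

s0 = filter3x3(scene, lateral_inhibition_kernel(0.0))   # W = 0: signal unchanged
s1 = filter3x3(scene, lateral_inhibition_kernel(0.8))   # W = 0.8: inhibited
print(s1.std() / s0.std())               # < 1: the dynamic range is compressed
```

Because neighboring samples of a coarse-detail scene are highly correlated, subtracting the surround cancels most of the slowly varying component, which is exactly the σ_s(W)/σ_s(W = 0) compression discussed above.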

(a) Kσ_L/σ_N = 8. (b) Kσ_L/σ_N = 32. (c) Kσ_L/σ_N = 128.

Fig. 14. Information density h and image fidelity f versus optical-design parameter β for three SNR's Kσ_L/σ_N. Neighborhood-weighting parameter, W = 0.8; otherwise the conditions are the same as for Fig. 8.



Wiener restoration, τ_r(x) = Ψ(x, μ_r = 1).

Mismatched Wiener restoration, τ_r(x) = Ψ(x, μ_r = 1/9).

Mismatched parametric Wiener restoration, τ_r(x) = Ψ(x, μ_r = 1/9, γ = 1/4).

(a) Optical design parameter β = 0.3. (b) Optical design parameter β = 0.6.

Fig. 15. One-dimensional simulations for the matched and the two unmatched Wiener restorations. Neighborhood-weighting parameter, W = 0.8; otherwise the conditions are the same as for Figs. 11 and 12.

For the informationally optimized edge-detection response (i.e., for β = 0.6 and W = 1 as shown in Fig. 3), lateral inhibition reduces the signal variance to between about one half for fine spatial detail (i.e., μ_r = 1/9) and one tenth for coarse detail (i.e., μ_r = 9). Thus lateral inhibition can significantly reduce the signal variance with little loss of information. This result supports Barlow's⁵⁰ suggestion that lateral inhibition in the human eye evolved because of the narrow dynamic range of the nerve fibers that transmit visual information from the eye to the brain and that the edge-detection response of the eye is a natural consequence of this "critical limiting factor."

5. EDGE FIDELITY

A. Formulation
The objective for machine vision is not to construct a representation that closely resembles the scene but a representation


that extracts a specific feature of the scene, ordinarily its edges for constructing a primal sketch. If we let τ_e(x, y) be the ideal edge-detection response, say, the DOG function τ_g(x, y; 1, W = 1), then L_e(x, y) = L(x, y) * τ_e(x, y) becomes the ideal representation. Thus the question that naturally arises is: What is the highest-spatial-frequency channel τ̂_e(ν, ω; β) for which edges can be detected with reasonable accuracy? Or, similarly, what is the finest spatial detail for which a primal sketch can be reliably constructed?

To assess performance and design trade-offs for edge detection, we define edge fidelity as

f_e = 1 − ∬_A |L_e(x, y) − R(x, y)|² dx dy / ∬_A |L_e(x, y)|² dx dy   (17a)

    = 1 − ∬ |L̂_e(ν, ω) − R̂(ν, ω)|² dν dω / ∬ |L̂_e(ν, ω)|² dν dω   (17b)

    = 1 − ε_e²/σ_L(e)²,   (17c)

where L̂_e(ν, ω) = L̂(ν, ω)τ̂_e(ν, ω), ε_e² is the mean-squared difference between the ideal representation L_e(x, y) and the actual one R(x, y), and σ_L(e)² is the variance of L_e(x, y).

Other, more common approaches do not, or do not fully, account for the image-gathering process. For example, Marr and Hildreth⁸ and others,⁹⁻¹² who use the ∇²G or the similar DOG function to detect edges in image data, do not account for either the spatial response or the sampling associated with image gathering, whereas Modestino and Fries,⁵¹ who use least-mean-square spatial filtering, do account for the spatial response but not for the sampling.

The optimal restoration or Wiener filter for edge detection that minimizes the MSRE ε_e² is

Ψ_p(ν, ω) = Ψ_e(ν, ω) = Ψ(ν, ω)τ̂_e(ν, ω),   (18)

where Ψ(ν, ω) is the Wiener filter given by Eqs. (12). For a sufficiently sampled and noise-free signal, Ψ_e(ν, ω) becomes τ̂_g*(ν, ω)τ̂_e(ν, ω)|τ̂_g(ν, ω)|⁻², so that Ψ_e(ν, ω) = 1 when τ̂_g(ν, ω) = τ̂_e(ν, ω). It follows that the maximum-realizable edge fidelity is

f_em = σ_L(e)⁻² ∬ Φ_L(ν, ω)|τ̂_e(ν, ω)|²τ̂_g(ν, ω)Ψ(ν, ω) dν dω.   (19)

B. Relationship to Information
The relationship that exists between spectral information density and image fidelity for the Wiener restoration can also be extended to edge fidelity. The Wiener edge filter given by Eq. (18) becomes

Ψ_e(ν, ω) = [τ̂_e(ν, ω)/τ̂_g(ν, ω)][1 − 2^(−2h(ν, ω))],   (20)

and the maximum-realizable edge fidelity becomes

f_em = σ_L(e)⁻² ∬ Φ_L(ν, ω)|τ̂_e(ν, ω)|²[1 − 2^(−2h(ν, ω))] dν dω.   (21)

Fig. 16. Standard deviation of the signal versus lateral-inhibitory weighting W. The optical-design parameter, β = 0.6.

The expressions for the maximum image fidelity [Eq. (16)] and edge fidelity [Eq. (21)] differ only by the weighting, or emphasis, of the spatial-frequency components of the radiance field. Image fidelity weights the spatial frequencies equally, whereas edge fidelity weights the spatial frequencies by the edge-detection filter response τ̂_e(ν, ω).

Nevertheless, we should not expect that there will ordinarily be a significant difference between the optical design that maximizes the optimally restored image fidelity and the design that maximizes edge fidelity. Both the spatial-frequency response of optical systems [e.g., Fig. 3(b)] and the Wiener spectrum of random radiance fields (e.g., Fig. 4) decrease smoothly with spatial frequency. Thus the spectral information density h(ν, ω) [e.g., Fig. 5(b)] also decreases smoothly with spatial frequency until the spatial frequency approaches the sampling-passband limit, where aliasing and/or blurring rapidly accelerate this decrease. Because of this basic constraint, we should expect that the informationally optimized design ordinarily comes close to maximizing both the optimally restored image fidelity and the edge fidelity. The major difference is one of emphasis: edge fidelity is more sensitive to the restoration of fine detail and therefore also more sensitive to the optical design.

C. Computational Results
Figure 17 characterizes information density h and edge fidelity f_e versus the optical-design parameter β. The ideal edge-detection response is assumed to be the DOG function τ̂_e(ν, ω; β) given by Eq. (4b) and illustrated in Fig. 3(c). The reconstruction is performed with either the Wiener edge filter or the sinc, cubic, or linear interpolation.

As can be observed, the performance obtained with the interpolations approaches that obtained with the optimal Wiener edge filter when the SNR's are high. The simulations shown in Fig. 18 confirm that for high SNR's (i.e., Kσ_L/σ_N = 32) the cubic and the linear interpolations locate the zero crossings of R(x), which represent the edges of L(x), nearly as well as does the optimal filter. It should also be noted that different algorithms for edge detection do not prefer different optical designs as they do for image formation.
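The zero-crossing readout described above can be illustrated with a one-dimensional DOG filter; this is a Marr-style sketch, not the paper's τ̂_e, and the σ and the 1.6 excitatory/inhibitory scale ratio are conventional assumptions.

```python
import numpy as np

def dog(x, sigma=1.0, ratio=1.6):
    """Difference-of-Gaussians edge response; sigma and the 1.6 scale
    ratio are conventional Marr-style assumptions."""
    g = lambda s: np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    return g(sigma) - g(ratio * sigma)

# A step edge at x = 0 in a one-dimensional stand-in radiance field.
x = np.arange(-32.0, 33.0)
L = (x >= 0).astype(float)
R = np.convolve(L, dog(np.arange(-8.0, 9.0)), mode="same")

# Edges are read out as zero crossings of the filtered signal; keeping the
# crossing with the steepest slope rejects numerical ripple at the borders.
zc = np.where(np.signbit(R[:-1]) != np.signbit(R[1:]))[0]
edge = zc[np.argmax(np.abs(R[zc + 1] - R[zc]))]
print(x[edge])   # within one sample of the true edge at x = 0
```

The DOG response to a step is negative on one side of the edge and positive on the other, so its zero crossing marks the edge location, which is the readout the simulations of Fig. 18 compare across algorithms.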

Comparing in Fig. 17 the information-density and the edge-fidelity curves suggests that the informationally optimized design for sufficiently high SNR's represents the highest-spatial-frequency channel τ̂_e(ν, ω; β) for which edges can be detected with reasonable accuracy. This observation is further illustrated by the simulations shown in Fig. 19. The actual representation R(x) closely approximates the ideal


representation L(x) * τ_e(x; β) for β > 0.6 and departs from the ideal representation for β < 0.6. It is interesting to note that the informationally optimized design for high SNR's, β ≈ 0.6 to 0.7 for Kσ_L/σ_N = 128, is close to the normalized optical design β = 0.76 of Marr's model of the highest-spatial-frequency channel of human vision.


(a) Kσ_L/σ_N = 8. (b) Kσ_L/σ_N = 32. (c) Kσ_L/σ_N = 128.

Fig. 17. Information density h and edge fidelity f_e versus optical-design parameter β [Fig. 3(c)] for three SNR's Kσ_L/σ_N. Edge fidelity is given for the Wiener edge restoration and three interpolations. The ideal edge-detection response is assumed to be τ̂_e(ν, ω; β) = τ̂_g(ν, ω; β, W = 1).



Wiener restoration, τ_r(x) = Ψ_e(x). Cubic interpolation, τ_r(x) = C(x). Linear interpolation, τ_r(x) = Λ(x).

(a) Optical design parameter β = 0.3. (b) Optical design parameter β = 0.6.

Fig. 18. One-dimensional simulations for the Wiener edge filter and the cubic and the linear interpolations. L(x) is the input radiance field, s(x) is the sampled signal, L(x) * τ_e(x; β) is the ideal representation, and R(x) is the representation constructed from s(x). Mean spatial detail, μ_r = 1; SNR, Kσ_L/σ_N = 32. [The magnitude of L(x) * τ_e(x), s(x), and R(x) has been amplified by a factor of 3 relative to L(x) for easier visual comparison.]

6. CONCLUSIONS

In this paper we investigated the relationship among the information density of sampled image data and the image fidelity and the edge fidelity of representations constructed therefrom. The design of the image-gathering system revolved around the relationship among sampling passband, spatial response, and SNR. We used (Gaussian) spatial responses that are nearly informationally optimum for the constraints ordinarily imposed on the design of image-gathering systems by the realizability of optical-aperture responses, and we used random radiance-field spectra that are typical of those ordinarily encountered in nature.

We have found that no image-gathering-system design other than the one that optimizes the acquired information density can be expected to improve further the fidelity of the optimally restored image. That is, the two objectives, to optimize the information density of the acquired image data and to maximize the fidelity of the image that is optimally restored from these data, are ordinarily consistent with each other.

The relationship between information density and image fidelity is further clarified when we consider the robustness of the optimal restoration. If the radiance field is a priori known and exactly accounted for, then the matched Wiener restoration compensates so effectively for the image-gathering response that the optimally restored image fidelity remains nearly constant over a wide range of designs (which includes the informationally optimized design). However, it is the design that maximizes information density for which the Wiener restoration is ordinarily most robust (i.e., most independent of uncertainties about the scene statistics).


Fig. 19. One-dimensional simulations for six optical-design parameters β and the Wiener edge filter using the same conditions as those for Fig. 18.




Different image-forming algorithms prefer different optical designs. The designs that maximize image fidelity for the sinc, cubic, and linear interpolations differ from one another and substantially differ from the informationally optimized design. Thus, end-to-end performance can be maximized only by combining the optical design with the image-forming algorithm. Therefore it follows also that the visual or measured quality of an image cannot be used as the standard for judging the performance of the image-gathering system independent of the image-forming algorithm (and vice versa).

Most information in a high-SNR image is ordinarily contained in its fine detail. The dynamic range of low-spatial-frequency radiance-field variations can be reduced with little loss of information if the optical design is informationally optimized and the SNR is sufficiently high. It is then also possible to restore images optimally with little loss of fidelity. Furthermore, when the informationally optimized optical design is combined with a minimal 3-by-3 lateral-inhibitory edge-detection algorithm, the resultant spatial response approaches that of (Marr's model of) the optimal edge-detection shape of human vision.

The relationship between information density and edge detection is further clarified by the result that the informationally optimized design for sufficiently high SNR's corresponds to the highest-spatial-frequency channel (relative to a fixed sampling passband) for which edges can be detected with reasonable reliability. Furthermore, the relationship between this edge-detection response and the associated sampling interval is similar to the relationship between Marr's model of the "smallest channel in early human vision"14 and the center-to-center distance between the photoreceptors in the foveal region of the eye's retina. Thus there exists an intuitively satisfying relationship between the efficient acquisition and processing of visual information, the spatial response of natural vision, and the reliable detection of edges at the smallest possible scale.

The two figures of merit, image fidelity and edge fidelity, emphasize two extreme attributes of the processed data: image fidelity emphasizes resemblance to the object with little regard to fine spatial detail, whereas edge fidelity emphasizes fine spatial detail with little regard to resemblance to the object. Images with high visual quality, in which "a human observer [can] identify the original objects with as much detail as possible,"49 ordinarily fall somewhere between these two extremes. Thus our results demonstrate, as Linfoot5 anticipated, that "informationally optimized designs are always to be preferred" when computer processing is used for various tasks ranging from the optimal restoration of images to the enhancement or the extraction of spatial features, without regard to the complexity of the processing that may be required to do so.

The SNR's required for various tasks differ: high image fidelity can be obtained with SNR's less than 10; however, higher SNR's up to about 30 are desirable to make Wiener restorations robust, and still higher SNR's above 30 may be desirable for Wiener restorations from dynamic-range compressed data and for edge detection at the smallest possible scale. If an image-gathering system is destined for a wide range of tasks, then the task that requires the highest SNR should dictate the optical-design trade-off.

APPENDIX A: CORRELATION AND MEAN-SQUARE ERROR PROPERTIES

We discuss here the correlation properties of the reconstruction R(x, y), as given by Eqs. (2), that justify our treatment of insufficient sampling under the assumption that the radiance field L(x, y) is a wide-sense stationary process. Furthermore, we explicitly distinguish between three mean-square error components in R(x, y): those caused by blurring, aliasing, and noise. The following approach along classical lines, although lacking the elegance of generalized harmonic analysis, is both simple and intuitively appealing.

In the isoplanatism patch A, R(x, y) possesses the Fourier series representation

\[R(x, y) = \sum_{p,q} \hat R_{pq} \exp[i2\pi(\nu_p x + \omega_q y)], \tag{A1}\]

where

\[\hat R_{pq} = \frac{1}{|A|}\,\hat R_A(\nu_p, \omega_q),\]

\(\hat R_A(\nu_p, \omega_q)\) is the finite Fourier transform of R(x, y), |A| = l_x l_y, ν_p = p/l_x, ω_q = q/l_y, and p, q are integers. For simplicity, we neglect the noise term in Eqs. (2) until later. The noise is assumed to be uncorrelated with the radiance field, and the conclusions drawn about the radiance field and its sampling sidebands also apply to the noise and its sampling sidebands.

The patch A is assumed to be sufficiently large so that frequency-space smearing owing to its finiteness is negligible and hence is suppressed as a subscript. Therefore, neglecting the noise term, we get

\[\hat R(\nu_p, \omega_q) = \sum_{m,n} \hat R_{mn}(\nu_p, \omega_q),\]

where

\[\hat R_{mn}(\nu_p, \omega_q) = \hat L\!\left(\nu_p + \frac{m}{X},\, \omega_q + \frac{n}{Y}\right) \hat\tau_g\!\left(\nu_p + \frac{m}{X},\, \omega_q + \frac{n}{Y}\right) \hat\tau_p(\nu_p, \omega_q)\]

is the Fourier component associated with the (m, n) sideband.

We wish to consider the correlation

\[E\{\hat R_{mn}(\nu_p, \omega_q)\hat R^*_{m'n'}(\nu_{p'}, \omega_{q'})\} = E\left\{\hat L\!\left(\nu_p + \frac{m}{X},\, \omega_q + \frac{n}{Y}\right) \hat L^*\!\left(\nu_{p'} + \frac{m'}{X},\, \omega_{q'} + \frac{n'}{Y}\right)\right\} \hat\tau_g\!\left(\nu_p + \frac{m}{X},\, \omega_q + \frac{n}{Y}\right) \hat\tau_g^*\!\left(\nu_{p'} + \frac{m'}{X},\, \omega_{q'} + \frac{n'}{Y}\right) \hat\tau_p(\nu_p, \omega_q)\,\hat\tau_p^*(\nu_{p'}, \omega_{q'}), \tag{A2}\]

where E{·} denotes the expected value, or ensemble average, of {·}. We assume that L(x, y) is a wide-sense stationary process. For such a process, the Fourier series coefficients \(\hat L_{pq}\) are approximately uncorrelated for a finite but sufficiently large isoplanatism patch A and become uncorrelated in the limit as |A| approaches infinity.52 Thus


\[E\{\hat L_{pq}\hat L^*_{p'q'}\} \simeq E\{|\hat L_{pq}|^2\}\,\delta_{pp';qq'}. \tag{A3}\]

We will assume that A is indeed large enough that the equality is valid to any desired degree of accuracy. The average power in mode (p, q) is

\[E\{|\hat L_{pq}|^2\} = \frac{E\{|\hat L(\nu_p, \omega_q)|^2\}}{|A|^2}.\]

Substituting for the incremental element of area in frequency space \(\Delta\nu_p\,\Delta\omega_q = 1/|A|\), we can write the power spectral density, or Wiener spectrum, of the radiance field L(x, y) as

\[\hat\Phi_L(\nu_p, \omega_q) = \frac{E\{|\hat L_{pq}|^2\}}{\Delta\nu_p\,\Delta\omega_q} = \frac{E\{|\hat L(\nu_p, \omega_q)|^2\}}{|A|}.\]

Thus Eq. (A2) becomes

\[E\{\hat L(\nu_p, \omega_q)\hat L^*(\nu_{p'}, \omega_{q'})\} = \hat\Phi_L(\nu_p, \omega_q)\,|A|\,\delta_{pp';qq'},\]

and the correlation function given by Eq. (A2) becomes

\[E\{\hat R_{mn}(\nu_p, \omega_q)\hat R^*_{m'n'}(\nu_{p'}, \omega_{q'})\} = |A|\,\hat\Phi_L\!\left(\nu_p + \frac{m}{X},\, \omega_q + \frac{n}{Y}\right)\delta_{\nu_p+m/X,\,\nu_{p'}+m'/X;\;\omega_q+n/Y,\,\omega_{q'}+n'/Y}\left|\hat\tau_g\!\left(\nu_p + \frac{m}{X},\, \omega_q + \frac{n}{Y}\right)\right|^2 \hat\tau_p(\nu_p, \omega_q)\,\hat\tau_p^*(\nu_{p'}, \omega_{q'}). \tag{A4}\]

Now we discuss the implication of the result given by Eq. (A4). The Fourier components in the same sideband (m, n = m', n') are uncorrelated. Therefore

\[R_{mn}(x, y) = \frac{1}{|A|}\sum_{p,q} \hat R_{mn}(\nu_p, \omega_q)\exp[i2\pi(\nu_p x + \omega_q y)]\]

is a wide-sense stationary process. However, the Fourier components from distinct sidebands are not uncorrelated. Thus the reconstruction R(x, y) is not in general a stationary process in the presence of sampling. There exists, however, an interesting special case for which R(x, y) is stationary, namely, if

\[\hat\tau_p(\nu_p, \omega_q)\,\hat\tau_p^*\!\left(\nu_p + \frac{m-m'}{X},\, \omega_q + \frac{n-n'}{Y}\right) = 0\]

for (m, n) ≠ (m', n') and all ν_p, ω_q. The necessary condition is that \(\hat\tau_p(\nu_p, \omega_q) \equiv 0\) for |ν_p| > 1/2X, |ω_q| > 1/2Y, that is, that the reconstruction filter be confined to the sampling passband. For this case, the definition used here for \(\hat\Phi_R(\nu, \omega)\) represents the true power spectral density. Otherwise, it can be shown that \(\hat\Phi_R(\nu, \omega)\) corresponds to an average power spectral density obtained from the Fourier transform of the autocorrelation averaged over A.

The mean-square restoration error (MSRE) ε² is defined by

\[\epsilon^2 = E\left\{\frac{1}{|A|}\int_A |\epsilon(x, y)|^2\,dx\,dy\right\}, \tag{A5}\]

where ε(x, y) = L(x, y) − R(x, y). As before, ε(x, y) possesses the Fourier series representation valid in the patch A:

\[\epsilon(x, y) = \sum_{p,q} \hat\epsilon_{pq}\exp[i2\pi(\nu_p x + \omega_q y)], \tag{A6}\]

where

\[\hat\epsilon_{pq} = \frac{1}{|A|}\,\hat E_A(\nu_p, \omega_q)\]

and \(\hat E_A(\nu_p, \omega_q)\) is the finite Fourier transform of ε(x, y). For a particular member of the ensemble, the power in mode (p, q) is \(|\hat\epsilon_{pq}|^2\), so that, from the properties of Fourier series,

\[\frac{1}{|A|}\int_A |\epsilon(x, y)|^2\,dx\,dy = \sum_{p,q} |\hat\epsilon_{pq}|^2. \tag{A7}\]

Thus we can express the MSRE defined by Eq. (A5) as

\[\epsilon^2 = \frac{1}{|A|^2}\sum_{p,q} E\left\{\left|\hat L(\nu_p, \omega_q)[1 - \hat\tau_g(\nu_p, \omega_q)\hat\tau_p(\nu_p, \omega_q)] - \sum_{(m,n)\neq(0,0)} \hat L\!\left(\nu_p + \frac{m}{X},\, \omega_q + \frac{n}{Y}\right)\hat\tau_g\!\left(\nu_p + \frac{m}{X},\, \omega_q + \frac{n}{Y}\right)\hat\tau_p(\nu_p, \omega_q)\right|^2\right\}. \tag{A8}\]

A typical cross term contains

\[E\left\{\hat L\!\left(\nu_p + \frac{m}{X},\, \omega_q + \frac{n}{Y}\right)\hat L^*\!\left(\nu_p + \frac{m'}{X},\, \omega_q + \frac{n'}{Y}\right)\right\} = |A|\,\hat\Phi_L\!\left(\nu_p + \frac{m}{X},\, \omega_q + \frac{n}{Y}\right)\delta_{mm';nn'},\]

revealing that there is no interference between the terms from distinct sidebands. For sufficiently smooth power spectral densities and spatial-frequency responses the discrete sum becomes an integral (recall that 1/|A| = Δν_p Δω_q), and we obtain

\[\epsilon^2 = \epsilon_b^2 + \epsilon_a^2,\]

where

\[\epsilon_b^2 = \iint \hat\Phi_L(\nu, \omega)\,|1 - \hat\tau_g(\nu, \omega)\hat\tau_p(\nu, \omega)|^2\,d\nu\,d\omega \tag{A9}\]

is the mean-square error caused by blurring and

\[\epsilon_a^2 = \iint \left[\hat\Phi_L(\nu, \omega)|\hat\tau_g(\nu, \omega)|^2 * \mathrm{III}(\nu, \omega)\right]_{(m,n)\neq(0,0)} |\hat\tau_p(\nu, \omega)|^2\,d\nu\,d\omega \tag{A10}\]

is the mean-square error caused by insufficient sampling, or aliasing.

For completeness, we may add the noise term in Eqs. (2) so that the total MSRE becomes

\[\epsilon^2 = \epsilon_b^2 + \epsilon_a^2 + \epsilon_n^2, \tag{A11a}\]

where

\[\epsilon_n^2 = K^{-2}\iint \left[\hat\Phi_N(\nu, \omega) * \mathrm{III}(\nu, \omega)\right]|\hat\tau_p(\nu, \omega)|^2\,d\nu\,d\omega \tag{A11b}\]

is the mean-square error caused by noise. Thus the image fidelity given by Eq. (11) can also be expressed as

\[f = 1 - (\epsilon_b^2 + \epsilon_a^2 + \epsilon_n^2)/\sigma_L^2.\]

Our treatment of the aliasing noise caused by insufficient sampling is similar to the treatment of Fellgett and Linfoot (Ref. 1, p. 399) of photon noise in photographic images; both


are dependent on the incident radiance field but appear as additive, uncorrelated noise terms.
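The error decomposition above can be checked numerically. The following one-dimensional sketch evaluates the analogs of the blurring term of Eq. (A9) and the aliasing term of Eq. (A10) for illustrative Gaussian gathering and restoration responses and an assumed exponential-correlation radiance spectrum; none of these particular responses or parameter values come from the paper.

```python
import numpy as np

# Sketch of the one-dimensional analog of the MSRE decomposition, Eqs. (A9)
# and (A10), with illustrative Gaussian responses and an assumed
# exponential-correlation radiance spectrum (not the paper's designs).
# The sampling interval is unity, so the sidebands sit at integer frequencies m.
phi_L = lambda v: 2.0 / (1.0 + (2*np.pi*v)**2)   # radiance-field Wiener spectrum
tau_g = lambda v: np.exp(-2*(np.pi*0.5*v)**2)    # image-gathering response
tau_p = lambda v: np.exp(-2*(np.pi*0.5*v)**2)    # image-restoration filter

v = np.linspace(-4.0, 4.0, 8001)
dv = v[1] - v[0]

# Blurring error, Eq. (A9): signal imperfectly passed by tau_g * tau_p.
eps_b2 = np.sum(phi_L(v) * (1.0 - tau_g(v)*tau_p(v))**2) * dv

# Aliasing error, Eq. (A10): sidebands m != 0 folded in by sampling
# (the comb is truncated at |m| <= 3 for this sketch).
eps_a2 = sum(np.sum(phi_L(v + m) * tau_g(v + m)**2 * tau_p(v)**2) * dv
             for m in range(-3, 4) if m != 0)

sigma_L2 = np.sum(phi_L(v)) * dv                 # (truncated) signal variance
f = 1.0 - (eps_b2 + eps_a2) / sigma_L2           # noise-free fidelity
print(0.0 < f < 1.0)
```

Narrowing the restoration filter toward the sampling passband drives the aliasing term toward zero, consistent with the stationarity condition discussed in this appendix.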

APPENDIX B: COMBINING OPTICAL DESIGN WITH ALGORITHM TO APPROXIMATE THE DIFFERENCE-OF-GAUSSIAN RESPONSE

The DOG edge-detection response given by Eqs. (4) can be approximated by combining a minimal 3-by-3 lateral-inhibitory processing mask (or Laplacian operator) with the optical design of the image-gathering system, as is depicted in Fig. 2. We use the square sensor-array sampling lattice because it is commonly available; however, the regular hexagonal sampling lattice provides better circular symmetry with fewer (six instead of eight) neighborhood subtractions per photosensor.13,24

The spatial response τ_s(x, y) and the spatial-frequency response τ̂_s(ν, ω) of a square photosensor array with symmetric 3-by-3 lateral-inhibitory processing are given, respectively, by

\[\tau_s(x, y) = \Pi(x, y) + a[\Pi(x/3, y) + \Pi(x, y/3) - 2\Pi(x, y)] + b[\Pi(x/3, y/3) - \Pi(x/3, y) - \Pi(x, y/3) + \Pi(x, y)],\]

where

\[\Pi(x, y) = \begin{cases} 1, & |x|, |y| \le 1/2 \\ 0, & \text{elsewhere,} \end{cases}\]

and

\[\hat\tau_s(\nu, \omega) = (1 - 2a + b)\,\mathrm{sinc}\,\nu\,\mathrm{sinc}\,\omega + 3(a - b)(\mathrm{sinc}\,3\nu\,\mathrm{sinc}\,\omega + \mathrm{sinc}\,\nu\,\mathrm{sinc}\,3\omega) + 9b\,\mathrm{sinc}\,3\nu\,\mathrm{sinc}\,3\omega.\]

We assume that the photosensor apertures are contiguous and that their width is unity. Figure 20 illustrates the responses τ_s(x, y) and τ̂_s(ν, ω) for the weightings a = −0.18 and b = −0.07, which provide the most circularly symmetric response.
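The separable sinc expression for τ̂_s(ν, ω) is easy to evaluate directly. The sketch below (not the authors' code) does so and checks one property that follows from the formula: at ν = ω = 0 the response reduces to 1 + 4a + 4b, which vanishes for the weightings a = −0.18 and b = −0.07, as expected of a lateral-inhibitory (bandpass, DOG-like) operator.

```python
import numpy as np

# Sketch: spatial-frequency response of a unit-width square photosensor array
# with symmetric 3x3 lateral-inhibitory weightings a, b, per the separable
# sinc expression of Appendix B; np.sinc(x) = sin(pi x)/(pi x).
def tau_s_hat(v, w, a=-0.18, b=-0.07):
    s = np.sinc
    return ((1 - 2*a + b) * s(v) * s(w)
            + 3*(a - b) * (s(3*v) * s(w) + s(v) * s(3*w))
            + 9*b * s(3*v) * s(3*w))

# DC response: (1 - 2a + b) + 6(a - b) + 9b = 1 + 4a + 4b = 0 for these weights.
print(abs(tau_s_hat(0.0, 0.0)) < 1e-9)
```

The zero DC response means the operator rejects uniform radiance, responding only to spatial variation, which is the defining behavior of an edge detector.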

As is shown in Fig. 20, the spatial-frequency response τ̂_s(ν, ω) extends far beyond the sampling passband. The response beyond the passband can be reduced either by parallel processing of signals from a larger number of neighboring photosensors (i.e., a larger mask) or, without further processing, by appropriately shaping the response of the objective lens. Figure 20 also illustrates the point-spread function (PSF) τ_l(x, y) and the OTF τ̂_l(ν, ω) of a lens with a coherent cutoff frequency 1/2λF = 0.4 and an aperture transmittance T(ν, ω) = 1 − (ν² + ω²)

Fig. 20. Combining optical design with sensor-array lateral-inhibitory processing to approximate the DOG function. (a) Spatial response. (b) Spatial-frequency response. (Panels: sensor array with 3×3 lateral-inhibitory processing; optics; combined response.)


Fig. 21. Comparison of image-gathering response with DOG function. (a) Spatial response. (b) Spatial-frequency response.

for geometrical coordinates Dν/2, Dω/2 of the lens with aperture diameter D. (For further details, see Refs. 13 and 24.) The combined spatial response τ_g(x, y) = τ_l(x, y) ∗ τ_s(x, y) and spatial-frequency response τ̂_g(ν, ω) = τ̂_l(ν, ω)τ̂_s(ν, ω) closely approximate, as is demonstrated in Fig. 21, the DOG-function response given by Eqs. (4) for W = 1.

APPENDIX C: ONE-DIMENSIONAL SIMULATION

The formulations given for two-dimensional image gathering and processing are also applicable to the one-dimensional simulations (simply by dropping the y dependence), except that (1) the Wiener spectrum of the radiance field becomes

\[\hat\Phi_L(\nu) = \frac{2\mu_L\sigma_L^2}{1 + (2\pi\mu_L\nu)^2};\]

(2) the spatial response of the image-gathering system becomes

\[\tau_g(x) = \frac{1}{\sqrt{2\pi}\,\sigma_x}\left\{\exp(-x^2/2\sigma_x^2) - \frac{W}{\alpha}\exp[-x^2/2(\alpha\sigma_x)^2]\right\},\]

with the corresponding spatial-frequency response

\[\hat\tau_g(\nu) = \exp[-2(\pi\sigma_x\nu)^2] - W\exp[-2(\pi\alpha\sigma_x\nu)^2],\]

where, as before, the sampling interval X = 1; and (3) the effect of lateral-inhibitory processing on the noise is accounted for by multiplying the photosensor noise term N(x) by (1 + W²/2)^{1/2}.
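A minimal sketch of these one-dimensional quantities follows. The symbol names σ_x, α, W, and μ_L track the reconstruction above and should be treated as assumptions about the original notation, as are the default parameter values.

```python
import numpy as np

# Sketch of the one-dimensional model of Appendix C. sigma_x, alpha, W, and
# mu_L are assumed names for the DOG width, center/surround width ratio,
# inhibition strength, and radiance-field correlation length, respectively.
def phi_L(v, mu_L=1.0, var_L=1.0):
    # Wiener spectrum of an exponential-correlation radiance field
    return 2.0 * mu_L * var_L / (1.0 + (2*np.pi*mu_L*v)**2)

def tau_g_hat(v, sigma_x=0.5, alpha=1.6, W=0.8):
    # DOG frequency response: excitatory Gaussian minus weighted inhibitory one
    return (np.exp(-2*(np.pi*sigma_x*v)**2)
            - W*np.exp(-2*(np.pi*alpha*sigma_x*v)**2))

# At zero frequency the DOG response is 1 - W; lateral-inhibitory processing
# raises the effective noise level by the factor (1 + W**2/2)**0.5.
print(tau_g_hat(0.0))
```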

APPENDIX D: HIGH-FREQUENCY ARTIFACT IN RESTORATIONS

Using one-dimensional notation and disregarding noise for convenience, we can write Eq. (2c) as

\[\hat R(\nu) = [\hat L(\nu)\hat\tau_g(\nu) * \mathrm{III}(\nu)]\hat\tau_p(\nu) = \hat L(\nu)\hat\tau_g(\nu)\hat\tau_p(\nu) + \hat L(\nu - 1)\hat\tau_g(\nu - 1)\hat\tau_p(\nu) + \hat L(\nu + 1)\hat\tau_g(\nu + 1)\hat\tau_p(\nu) + \cdots. \tag{D1}\]

Taking the inverse Fourier transform, we obtain after some mathematical manipulation

\[R(x) = R_0(x) + 2\,\mathrm{Re}\{e^{i2\pi x} R_1(x)\} + \cdots, \tag{D2}\]

where R_0(x) is the desired restoration given by

\[R_0(x) = \int \hat L(\nu)\hat\tau_g(\nu)\hat\tau_p(\nu)\exp(i2\pi x\nu)\,d\nu, \tag{D3}\]

and R_1(x) represents an amplitude and a phase modulation of the high-frequency carrier \(e^{i2\pi x}\) as given by

\[R_1(x) = \int \hat L(\nu)\hat\tau_g(\nu)\hat\tau_p(\nu + 1)\exp(i2\pi x\nu)\,d\nu. \tag{D4}\]

The high-frequency artifact is most noticeable in slowly varying regions of R_0(x). An examination of the structures of Eqs. (D3) and (D4) suggests that the complex quantity R_1(x) is also slowly varying in these regions. Since image reconstructions emphasize these slowly varying regions, only low-frequency components (i.e., |ν| ≪ 1) contribute to the integral, i.e., to the modulation of the high-frequency carrier \(e^{i2\pi x}\).

Note that the mismatched Wiener restorations shown in Figs. 12 and 15 exhibit the artifact for β = 0.3, for which τ̂_p(ν = 1) is considerable, but not for β = 0.6, for which τ̂_p(ν = 1) is small. (See Fig. 13.) Similarly, the high-frequency artifact is absent for the Wiener, cubic, and linear reconstructions shown in Fig. 11 as a consequence of τ̂_p(ν = 1) ≈ 0. (See Figs. 9 and 10.)
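The sideband-leakage mechanism of Eqs. (D1) and (D4) can be illustrated numerically. The sketch below uses illustrative Gaussian responses and an assumed radiance spectrum (not the paper's specific designs) and compares the magnitude of the R_1 leakage term for a reconstruction filter with appreciable versus negligible response at ν = 1.

```python
import numpy as np

# Sketch of the sideband term of Eq. (D4): the m = 1 sampling sideband leaks
# through the reconstruction filter in proportion to tau_p near v = 1
# (sampling interval X = 1). The responses below are illustrative Gaussians.
def leakage(sigma_p):
    v = np.linspace(-0.5, 0.5, 1001)
    dv = v[1] - v[0]
    L_hat = 1.0 / (1.0 + (2*np.pi*v)**2)               # assumed radiance spectrum
    tau_g = np.exp(-2*(np.pi*0.4*v)**2)                # image-gathering response
    tau_p = lambda u: np.exp(-2*(np.pi*sigma_p*u)**2)  # reconstruction filter
    # |R1| is bounded by the integral of |L tau_g tau_p(v + 1)|, per Eq. (D4)
    return np.sum(L_hat * tau_g * tau_p(v + 1.0)) * dv

# A frequency-broad filter (small sigma_p) passes more of the v = 1 sideband,
# hence a stronger modulated high-frequency artifact.
print(leakage(0.3) > leakage(0.6))
```

This mirrors the observation above: the artifact appears when the restoration filter retains appreciable response at the first sideband and disappears when that response is negligible.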

ACKNOWLEDGMENT

We dedicate this paper to E. H. Linfoot, whose papers on information and fidelity were the catalyst of this one.

REFERENCES

1. P. B. Fellgett and E. H. Linfoot, "On the assessment of optical images," Philos. Trans. R. Soc. London 247, 369-407 (1955).
2. E. H. Linfoot, "Information theory and optical images," J. Opt. Soc. Am. 45, 808-819 (1955).
3. E. H. Linfoot, "Transmission factors and optical design," J. Opt. Soc. Am. 46, 740-752 (1956).
4. E. H. Linfoot, "Quality evaluations of optical systems," Opt. Acta 5, 1-14 (1958).
5. E. H. Linfoot, "Optical image evaluation from the standpoint of communication theory," Physica XXIV, 476-494 (1958).
6. W. T. Cathey, B. R. Frieden, W. T. Rhodes, and C. K. Rushforth, "Image gathering and processing for enhanced resolution," J. Opt. Soc. Am. A 1, 241-250 (1984).
7. D. Marr and S. Ullman, "Bandpass channels, zero-crossings, and early visual information processing," J. Opt. Soc. Am. 69, 914-916 (1979).


8. D. Marr and E. Hildreth, "Theory of edge detection," Proc. R. Soc. London Ser. B 207, 187-217 (1980).
9. D. Marr, Vision (Freeman, San Francisco, Calif., 1982).
10. M. Brady, "Computational approaches to image understanding," Comput. Surv. 14, 3-71 (1982).
11. E. C. Hildreth, "The detection of intensity changes by computer and biological vision systems," Comput. Vision Graphics Image Process. 22, 1-27 (1983).
12. D. H. Ballard and C. M. Brown, Computer Vision (Prentice-Hall, Englewood Cliffs, N.J., 1982).
13. F. O. Huck, C. L. Fales, S. K. Park, D. J. Jobson, and R. W. Samms, "Image-plane processing of visual information," Appl. Opt. 23, 3160-3167 (1984).
14. D. Marr, T. Poggio, and E. Hildreth, "Smallest channel in early human vision," J. Opt. Soc. Am. 70, 868-870 (1980).
15. M. E. Jernigan and R. W. Wardell, "Does the eye contain optimal edge detection mechanisms?" IEEE Trans. Syst. Man Cybern. SMC-11, 441-444 (1981).
16. P. Mertz and F. Gray, "Theory of scanning and its relation to characteristics of transmitted signal in telephotography and television," Bell Syst. Tech. J. 13, 464-515 (1934).
17. O. H. Schade, Sr., "Image gradation, graininess and sharpness in television and motion-picture systems," J. Soc. Motion Pict. Telev. Eng. 56, 137-171 (1951); 58, 181-222 (1952); 61, 97-164 (1953); 64, 593-617 (1955).
18. L. G. Callahan and W. M. Brown, "One- and two-dimensional processing in line scanning systems," Appl. Opt. 2, 401-407 (1963).
19. A. Macovski, "Spatial and temporal analysis of scanned systems," Appl. Opt. 9, 1906-1910 (1970).
20. S. J. Katzberg, F. O. Huck, and S. D. Wall, "Photosensor aperture shaping to reduce aliasing in optical-mechanical line-scan imaging systems," Appl. Opt. 12, 1054-1060 (1973).
21. A. H. Robinson, "Multidimensional Fourier transforms and image processing with finite scanning apertures," Appl. Opt. 12, 2344-2352 (1973).
22. L. M. Biberman, ed., Perception of Displayed Information (Plenum, New York, 1973).
23. F. O. Huck, N. Halyo, and S. K. Park, "Aliasing and blurring in 2-D sampled imagery," Appl. Opt. 19, 2174-2181 (1980).
24. C. L. Fales, F. O. Huck, and R. W. Samms, "Imaging system design for improved information capacity," Appl. Opt. 23, 872-888 (1984).
25. H. C. Andrews and B. R. Hunt, Digital Image Restoration (Prentice-Hall, Englewood Cliffs, N.J., 1977).
26. W. K. Pratt, Digital Image Processing (Wiley, New York, 1978).
27. T. S. Huang, ed., Picture Processing and Digital Filtering (Springer-Verlag, Berlin, 1979).
28. A. Rosenfeld and A. C. Kak, Digital Picture Processing (Academic, New York, 1982).
29. S. K. Park and R. A. Schowengerdt, "Image reconstruction by parametric cubic convolution," Comput. Vision Graphics Image Process. 23, 258-272 (1983).
30. S. K. Park and R. A. Schowengerdt, "Image sampling, reconstruction, and the effect of sample-scene phasing," Appl. Opt. 21, 3142-3151 (1982).
31. R. A. Schowengerdt, S. K. Park, and R. Gray, "Topics in the two-dimensional sampling and reconstruction of images," Int. J. Remote Sensing 5, 333-347 (1984).
32. C. E. Shannon, "A mathematical theory of communication," Bell Syst. Tech. J. 27, 379-423; 28, 623-656 (1948); C. E. Shannon and W. Weaver, The Mathematical Theory of Communication (U. Illinois Press, Urbana, Ill., 1964).
33. B. R. Frieden, "Information and the restorability of images," J. Opt. Soc. Am. 60, 575-576 (1970).
34. N. Wiener, Extrapolation, Interpolation, and Smoothing of Stationary Time Series (Wiley, New York, 1949).
35. C. W. Helstrom, "Image restoration by the method of least squares," J. Opt. Soc. Am. 57, 297-303 (1967).
36. B. R. Frieden, "How well can a lens system transmit entropy?," J. Opt. Soc. Am. 58, 1105-1112 (1968).
37. M. Mino and Y. Okano, "Improvement in the OTF of a defocused optical system through the use of shaded apertures," Appl. Opt. 10, 2219-2225 (1970).
38. J. E. Hall and J. D. Awtrey, "Real-time image enhancement using 3 by 3 pixel neighborhood operator functions," Opt. Eng. 19, 421-424 (1980).
39. L. S. Davis, "Computer architectures for image processing," IEEE Comput. Soc. Workshop Proc. CH1530-5, 249-254 (1980).
40. W. L. Eversole, J. F. Salzman, F. V. Taylor, and W. L. Harland, "Programmable image processing element," SPIE Proc. 301, 66-77 (1981).
41. J. M. Enoch and F. L. Tobey, eds., Vertebrate Photoreceptor Optics (Springer-Verlag, Berlin, 1981).
42. H. J. Metcalf, "Stiles-Crawford apodization," J. Opt. Soc. Am. 55, 72-74 (1965).
43. J. P. Carroll, "Apodization model of the Stiles-Crawford effect," J. Opt. Soc. Am. 70, 1155-1156 (1980).
44. M. Kass and J. Hughes, "A stochastic image model for AI," IEEE Proc. Syst. Man Cybern. Conf., 369-372 (1983).
45. Y. Itakura, S. Tsutsumi, and T. Takagi, "Statistical properties of the background noise for the atmospheric windows in the intermediate infrared region," Infrared Phys. 14, 17-29 (1974).
46. Y. L. Lee, Statistical Theory of Communications (Wiley, New York, 1964).
47. F. O. Huck and S. K. Park, "Optical-mechanical line-scan imaging process: its information capacity and efficiency," Appl. Opt. 14, 2509-2520 (1975).
48. E. L. O'Neill, Introduction to Statistical Optics (Addison-Wesley, Reading, Mass., 1963).
49. M. M. Sondhi, "Image restoration: the removal of spatially invariant degradations," IEEE Proc. 60, 842-852 (1972).
50. H. B. Barlow, "Critical factors in the design of the eye and visual cortex," Proc. R. Soc. London Ser. B 212, 1 (1981).
51. J. W. Modestino and R. W. Fries, "Edge detection in noisy images using recursive digital filtering," Comput. Graphics Image Process. 6, 409-433 (1977).
52. A. Papoulis, Probability, Random Variables, and Stochastic Processes (McGraw-Hill, New York, 1965).


