
Vol. 7, No. 12/December 1990/J. Opt. Soc. Am. A 2193

Probabilistic method for integrating multiple sources of range data

V. Michael Bove, Jr.

Media Laboratory, Massachusetts Institute of Technology, Room E15-322, 20 Ames Street, Cambridge, Massachusetts 02139

Received January 11, 1990; accepted May 15, 1990

It is suggested that passive-camera range-sensing methods be considered as returning not just range values but rather probability density functions, characterized by two-dimensional arrays of both expected value and variance. Employing variance data permits probabilistic integration of the range values from multiple range-sensing algorithms, with results demonstrably superior to simple averaging. This method is analyzed and carried out for sign-bit correlation stereopsis and two-lens depth-from-focus. Calculating the Cramér-Rao lower bound on variance for these two range-sensing algorithms can be done with relatively little additional computation.

INTRODUCTION

A variety of passive-camera range-sensing methods have been described in the machine vision literature, including stereopsis, depth-from-focus, shape-from-shading, and structure-from-motion.1 The accuracy of all algorithms is dependent on the signal-to-noise ratio and the content of the input images; however, this dependence varies in such a fashion that different algorithms commonly err in different image regions. For this reason, a better scene description may be obtained by combining the results of applying more than one method.

Research into integrating range data from various methods, or from more than one observation using a single method, has often approached the problem via either parallel or iterative algorithms to fit surfaces given multiple constraints.2,3 Such an approach is particularly useful when the range information available is sparse, which may result from certain algorithms, or when assumptions may be made about the surface structure of the objects imaged. When data points (perhaps of varying fidelity) are available everywhere, and when one does not wish to apply constraints such as surface smoothness to the result, a different sort of approach to the problem is preferable.

The author has previously noted4 that when employing a passive-camera range-sensing method it is possible to compute both a range value and a variance for each point in the image. Previously, this variance information was used in a surface-fitting method where points with a large variance were permitted to move more than those for which the software returned a lower variance. Considering the output of a range-sensing algorithm as a probability density function (PDF), however, will permit the development of probability-based methods of handling the range data, which, in turn, will provide a solution to the problem of combining multiple range-sensing modalities.

PROBABILISTIC MODEL FOR RANGE SENSING

While the actual error characteristics of a range-sensing algorithm measured under certain circumstances might take a somewhat different form, a Gaussian distribution will be assumed throughout this paper. It is to be understood that probabilistic integration of range data can be performed, if less straightforwardly, for whatever PDF has been experimentally or theoretically determined for the processes in question; a Gaussian model for error has also proven to work quite well in experiments. If the PDF of the range-sensing modality is modeled as Gaussian, then it is specified by locally varying parameters of expected value E and variance σ², where E is the true distance to the point in question and σ² is a function of the image's local spectral content and signal-to-noise ratio, as well as algorithmic details, such as window size and filters. A useful variance measure for estimation problems of this type is the Cramér-Rao inequality (see, for example, Ref. 5), which sets a lower bound on the variance of the estimate. In Appendix A this is derived for two different range-sensing schemes. To supply a figure for the expected value it will be necessary to take a common statistical step and to observe that (lacking any other data) the best estimate for E is the z value returned by the algorithm.

Given that a range-sensing technique may be modeled at some point in the image as a Gaussian random process with expected value E_0 and variance σ_0², its PDF at that point will be

p_{z|E}(z_0|E_0, σ_0) = [1/(√(2π) σ_0)] exp[−(z_0 − E_0)²/2σ_0²].   (1)

The result of another range-sensing process will provide some other expected value and variance, and the problem of integrating the range data becomes that of finding the maximum-likelihood estimator for the expected value and computing the variance of the resulting estimate. This is essentially the data-adjustment process as formulated by Gauss (see historical discussion in Ref. 6; an early detailed treatment of the topic is in Ref. 7). Given n range-sensing processes, the maximum-likelihood estimate for E is that which satisfies the equation

(∂/∂z_0) ∏_{i=1}^{n} p_{z|E,σ}(z_0|E_i, σ_i) |_{z_0 = Ê} = 0.   (2)

0740-3232/90/122193-06$02.00 © 1990 Optical Society of America


It is customary for simplicity to take the partial derivative of the natural logarithm of the PDF;8 monotonicity of the natural logarithm function means that this equation will have the same solution. Applying this technique to n observations of Eq. (1), the resulting estimate for the expected value is

Ê = [Σ_{i=1}^{n} E_i/σ_i²] / [Σ_{i=1}^{n} 1/σ_i²].   (3)

Gauss's law of propagation of error for a function of several measured quantities allows computation of the variance of the result:

σ² = Σ_{i=1}^{n} (∂Ê/∂E_i)² σ_i² = [Σ_{i=1}^{n} 1/σ_i²]^{−1}.   (4)
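Eqs. (3) and (4) amount to inverse-variance weighting: each process contributes in proportion to the reciprocal of its variance. A minimal sketch of the n-process fusion (an illustration, not the paper's implementation; the names are invented) might read:

```python
def fuse_gaussian_estimates(values, variances):
    """Maximum-likelihood fusion of n Gaussian range estimates.

    Implements Eqs. (3) and (4): the fused estimate is the
    inverse-variance weighted mean of the inputs, and the fused
    variance is the reciprocal of the summed inverse variances.
    """
    weights = [1.0 / v for v in variances]
    e_hat = sum(w * z for w, z in zip(weights, values)) / sum(weights)
    var_hat = 1.0 / sum(weights)
    return e_hat, var_hat

# Two estimates with equal variance: the fused value is their midpoint
# and the fused variance is half that of either input.
e_hat, var_hat = fuse_gaussian_estimates([2.0, 4.0], [1.0, 1.0])
```

Note that the fused variance is never larger than the smallest input variance, which is the formal sense in which adding a second sensor never hurts under this model.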

It may be instructive also to compute explicitly the expected value and variance for the case of combining two Gaussian PDF's. Given a second process in addition to that of Eq. (1) above, with a Gaussian PDF whose parameters are (for clarity called) F_0 and ρ_0², then it should be possible to compute a better model for z. The result should be another PDF with a smaller variance and an expected value with more statistical support. This PDF may be expressed as

p_{z|E,F,σ,ρ}(z_0|E_0, F_0, σ_0, ρ_0) = p_{z|E,σ}(z_0|E_0, σ_0) p_{z|F,ρ}(z_0|F_0, ρ_0) / p_{E,F,σ,ρ}(E_0, F_0, σ_0, ρ_0).   (5)

Finding the joint PDF in the denominator seems to pose a challenge, but first the two conditional PDF's might be multiplied together:

p_{z|E,σ}(z_0|E_0, σ_0) p_{z|F,ρ}(z_0|F_0, ρ_0) = [1/(2π σ_0 ρ_0)] exp[−(z_0 − E_0)²/2σ_0² − (z_0 − F_0)²/2ρ_0²].   (6)

This is surely none too informative, but by completing the square in the exponent one may express the equation in a different form:

p_{z|E,σ}(z_0|E_0, σ_0) p_{z|F,ρ}(z_0|F_0, ρ_0) = [1/(2π σ_0 ρ_0)] exp{−[z_0 − (σ_0²F_0 + ρ_0²E_0)/(σ_0² + ρ_0²)]² / [2σ_0²ρ_0²/(σ_0² + ρ_0²)] − (E_0 − F_0)²/[2(σ_0² + ρ_0²)]}.   (7)

This can in turn be rewritten as

p_{z|E,σ}(z_0|E_0, σ_0) p_{z|F,ρ}(z_0|F_0, ρ_0) = {1/[2π(σ_0² + ρ_0²)]^{1/2}} exp{−(E_0 − F_0)²/[2(σ_0² + ρ_0²)]} × {1/[2π σ_0²ρ_0²/(σ_0² + ρ_0²)]^{1/2}} exp{−[z_0 − (σ_0²F_0 + ρ_0²E_0)/(σ_0² + ρ_0²)]² / [2σ_0²ρ_0²/(σ_0² + ρ_0²)]}.   (8)

Fig. 1. When both input processes (dashed curves) have equal variances (top), the resulting PDF (solid curves) has its expected value centered between the two samples. However, unequal variances (bottom) lead to an expected value close to the sample from the process with the smaller variance.

The leading factor here may be seen as the joint PDF p_{E,F,σ,ρ}(E_0, F_0, σ_0, ρ_0), which means [in light of Eq. (5)] that the second factor is the desired conditional PDF p_{z|E,F,σ,ρ}(z_0|E_0, F_0, σ_0, ρ_0), a Gaussian PDF with expected value (σ_0²F_0 + ρ_0²E_0)/(σ_0² + ρ_0²) and variance σ_0²ρ_0²/(σ_0² + ρ_0²). This is in agreement with Eqs. (3) and (4).

Derived in either manner, the result has some properties of interest. At the top of Fig. 1, two input processes with equal variances (dashed curves) result in a PDF whose expected value is centered between them. However, when the variances are unequal (lower plot), the expected value gravitates strongly toward the narrower PDF.

It should be evident that this approach is computationally simple (it is, of course, not necessary to compute the PDF's explicitly, rather only to manipulate the variances and expected values), and as the calculations are strictly local, this approach may be performed rapidly on parallel processing hardware.
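To illustrate the locality claim, the two-process combination (the expected value and variance read off the second factor of Eq. (8)) reduces to a few elementwise array operations over whole range maps. A hedged NumPy sketch, with invented array names:

```python
import numpy as np

def fuse_range_maps(z1, var1, z2, var2):
    """Per-pixel probabilistic combination of two dense range maps.

    Every pixel is processed independently with the expected value
    and variance of the combined Gaussian PDF, so the whole pass is
    strictly local and trivially parallelizable.
    """
    fused_z = (var2 * z1 + var1 * z2) / (var1 + var2)
    fused_var = (var1 * var2) / (var1 + var2)
    return fused_z, fused_var

# Toy 2x2 "range images": the second process is noisier everywhere,
# so the fused estimate leans toward the first.
z1 = np.array([[1.0, 2.0], [3.0, 4.0]])
z2 = np.array([[2.0, 2.0], [5.0, 4.0]])
var1 = np.ones((2, 2))
var2 = np.full((2, 2), 4.0)
zf, vf = fuse_range_maps(z1, var1, z2, var2)
```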

APPLYING THE TECHNIQUE TO TWO RANGE-SENSING PROCESSES

To assess the usefulness of the approach described above for the problem at hand, it has been applied to the task of integrating range data from a sign-bit correlation stereopsis process with that from a two-lens depth-from-focus algorithm developed by the author.

In sign-bit correlation stereopsis,9 disparity is estimated between left and right views of a scene by applying a bandpass filter to the images, quantizing the result to one bit per point, and finding the peak of the correlation between regions in the two sign-bit images. This operation is typically carried out at multiple scales to reduce false matches.

The bandpass filter most commonly used is the Laplacian of a Gaussian, often called the ∇²G operator. If an original image is R(x, y), then the sign-bit image upon which the correlation will take place is


R_s(x, y) = sgn[∇²G * R(x, y)].   (9)

The algorithm searches for the value of disparity τ that maximizes the correlation


K_corr(τ) = Σ_{i=1}^{K} r(x_i + τ, y_i) s(x_i, y_i),   (10)

where r and s are K-pixel subimages of sign-bit images representing two horizontally offset views of the scene. In the simplest possible case, the two cameras are parallel, with optical axes separated by a distance b. Then distance z relates to disparity τ as follows:

z = fb/τ,   (11)

where f is the focal length of the camera lenses.
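The pipeline of Eqs. (9)-(11) (bandpass filtering, sign quantization, correlation search, and conversion of disparity to depth) can be sketched in one dimension. This is an illustrative reconstruction, not the author's implementation; the filter width, search range, and the lens constants f and b are assumptions:

```python
import numpy as np

def log_kernel(sigma=1.5, radius=6):
    """Sampled one-dimensional Laplacian-of-Gaussian (the LoG filter)."""
    x = np.arange(-radius, radius + 1, dtype=float)
    return (x**2 / sigma**4 - 1.0 / sigma**2) * np.exp(-x**2 / (2 * sigma**2))

def sign_bit(row):
    """Eq. (9): filter with the LoG and keep only the sign bit."""
    return np.where(np.convolve(row, log_kernel(), mode="same") >= 0, 1, -1)

def sign_bit_disparity(left, right, max_disp):
    """Eq. (10): choose the shift tau maximizing the sign-bit correlation."""
    s, r = sign_bit(left), sign_bit(right)
    n = len(s) - max_disp
    scores = [np.sum(r[tau:tau + n] * s[:n]) for tau in range(max_disp + 1)]
    return int(np.argmax(scores))

# Toy scene: the "left" view is the "right" view shifted by 3 pixels.
rng = np.random.default_rng(0)
scene = rng.normal(size=200)
tau = sign_bit_disparity(scene[3:], scene[:-3], max_disp=10)
f, b = 0.05, 0.3          # hypothetical focal length and baseline (meters)
z = f * b / tau           # Eq. (11)
```

A real implementation would run this at multiple scales over 2-D windows, as the text describes, to suppress false correlation peaks.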

Two-lens depth-from-focus examines a pair of images that differ in lens parameters such as focal length, focus distance, or iris aperture. In the present algorithm,10 an image made with a large aperture (and thus a short depth of field) is modeled as the result of applying a distance-dependent point-spread function of known form to another image identical except for having a smaller aperture. In this case, if R is the defocused image and S is the image intensities in the absence of defocusing (approximated by the long-depth-of-field image),

R(x, y) = S(x, y) * F[c(x, y)].   (12)

Fig. 3. Range image resulting from depth-from-focus process. Lighter points are closer to the camera.


Here F is the point-spread function, which varies with (x, y) and c, a quantity related to its spread. (The term c is its diameter, if it is modeled as a cylinder, or the diameter at which it rolls off to some defined level for some other shape, i.e., what is called the "circle of confusion" by photographers.) Assuming that the camera is focused forward of the scene, c relates to z as

z = fv/(v − f − nc),   (13)

where f is the focal length, n the numerical aperture, and v the lens-to-image-plane distance. It should be possible to recover the point-spread function, and thus the range, by locally deconvolving the two images. To do this, one takes discrete Fourier transforms of corresponding windowed regions of the two images, divides the spectrum coefficients, and subjects the result (which should represent the spectrum of the point-spread function) to a polynomial regression fit to estimate c, which is then converted to a z value corresponding to an average over the windowed region. The form of the polynomial approximation for the point-spread function's transform is provided by calibrated measurements on the lens under known conditions. In actual practice the point-spread function is assumed to be rotationally symmetrical, and the division and regression are carried out in one dimension (that of radial frequency) rather than two, by collapsing together and normalizing all terms at equal distances from the origin of the frequency plane (ω_x, ω_y) = (0, 0).

Fig. 2. View of a scene imaged with a small lens aperture. Another view from the same location with a larger aperture and a small-aperture view from a location offset slightly to the left were used together with this as input to the range-sensing processes.

Fig. 4. Variance lower bound computed for data in Fig. 3. Lighter points have higher variance.
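The spectral-division procedure can be sketched end to end. For concreteness this sketch assumes a Gaussian point-spread function, whose transform is exp(−2π²c²ρ²) with ρ in cycles per sample, in place of the calibrated polynomial model described above; the window size and bin count are likewise assumptions:

```python
import numpy as np

def radial_average(spectrum):
    """Collapse a square 2-D magnitude spectrum to radial frequency by
    averaging all bins at (approximately) equal distance from the origin."""
    n = spectrum.shape[0]
    fy = np.fft.fftfreq(n)[:, None]
    fx = np.fft.fftfreq(n)[None, :]
    bins = np.minimum((np.hypot(fx, fy) * n).round().astype(int), n // 2)
    sums = np.bincount(bins.ravel(), weights=spectrum.ravel())
    return sums / np.bincount(bins.ravel())

def estimate_c(sharp, blurred, nbins=10):
    """Divide the two spectra (Eq. 12 in the frequency domain), collapse
    to radial frequency, and regress for the spread c of a Gaussian PSF:
    ln F(rho) = -2 pi^2 c^2 rho^2 is linear in rho^2."""
    n = sharp.shape[0]
    S = radial_average(np.abs(np.fft.fft2(sharp)))
    R = radial_average(np.abs(np.fft.fft2(blurred)))
    rho = np.arange(1, nbins) / n
    slope = np.polyfit(rho**2, np.log(R[1:nbins] / S[1:nbins]), 1)[0]
    return np.sqrt(max(-slope, 0.0) / (2 * np.pi**2))

# Synthesize a long- and short-depth-of-field pair with known blur c.
rng = np.random.default_rng(1)
sharp = rng.normal(size=(64, 64))
fy = np.fft.fftfreq(64)[:, None]
fx = np.fft.fftfreq(64)[None, :]
c_true = 2.0
H = np.exp(-2 * np.pi**2 * c_true**2 * (fx**2 + fy**2))
blurred = np.fft.ifft2(np.fft.fft2(sharp) * H).real
c_est = estimate_c(sharp, blurred)
# Eq. (13) would then map c (expressed in physical units via the
# pixel pitch) to a range z for the calibrated lens.
```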

In the present experiment, three images of a scene have been digitized: two from the same camera viewpoint at lens apertures of f/22 and f/5.6 (the latter with a neutral-density filter to equalize the exposure) and another at f/22 from a horizontally offset position (Fig. 2). The first two of these images were fed to the depth-from-focus algorithm, resulting in the range and variance data pictured in Figs. 3 and 4. The first and third images were used to compute the sign-bit correlation stereopsis data in Figs. 5 and 6. It should be noted that in both cases the range data points that are obviously incorrect occur in regions where the variance is large; it is also important to observe that the variance peaks for the two different algorithms occur in widely different regions of the image; thus the potential benefit in using the variance to combine the two sets of data. In particular, the depth-from-focus algorithm has problems with regions having little detail, such as the rabbit's chest and the margins of the book; variance peaks for the stereopsis algorithm occur at object boundaries because of occlusion effects and at regions with ambiguous correspondences like the small type on the book cover.

The obvious errors are largely eliminated in Fig. 7, which shows the variance-weighted combination of the range data, much more so than in the outcome of simple, unweighted averaging (Fig. 8). Calibrated measurements of actual range were not available for the scene used in this experiment, but a useful technique for evaluating the quality of the result is to make a computer-graphics rendering of the range data values from a different viewpoint, with the intensity values texture mapped onto the surface. Objects are quite recognizable in the rendering of the variance-weighted combination (Fig. 9) but severely distorted in the unweighted average (Fig. 10).

Fig. 5. Range image resulting from sign-bit correlation stereopsis.

Fig. 6. Variance lower bound computed for data in Fig. 5. Note that peaks occur in different regions than for depth-from-focus.

Fig. 7. Range image computed from both range-finding processes by the method outlined in this paper.

Fig. 8. Range image computed by unweighted averaging of the data from the two processes.

Fig. 9. Scene rerendered from a new viewpoint using probabilistically combined range data.

Fig. 10. Scene rerendered from a new viewpoint using averaged range data.


CONCLUSION

The central problem in passive-camera range sensing is to extract as much information as possible from a given set of images with characteristics over which the algorithm has no control. With a better understanding of how each algorithm's accuracy varies, each method's contribution to the estimation process may be used in the best possible manner. In the two cases analyzed here, calculating the variance lower bound provides an improved result well worth the small added computation.

APPENDIX A: COMPUTING THE CRAMÉR-RAO LOWER BOUND

By applying the calculus of variations to the depth-from-focus algorithm, a relationship between variance in the point-spread-function estimate and variance in the distance estimate may be shown4:

σ_z² = (nz²/fv)² σ_c²,   (A1)

where n is the numerical aperture of the lens, f is the focal length, and v is the lens-to-image-plane distance.

Given a model for computing variance in z from variance in c, the next step is to relate variance in c to image characteristics. A useful bound on the variance of an estimate of this sort is the Cramér-Rao inequality. Given additive noise, the observed short-depth-of-field spectrum r(ω) will be modeled as the sum of some s(ω, c) and white, zero-mean Gaussian noise n(ω) with standard deviation σ_n (Ref. 11):

r(ω) = s(ω, c) + n(ω),   0 ≤ ω < K.   (A2)

The Cramér-Rao lower bound on minimum mean-square error of an estimate of the random variable c is

σ_c² ≥ {−E[∂² ln p_{r,c}(r_0, c_0)/∂c²]}^{−1},   (A3)

where r is the vector of observations of r(ω), in this case the observations of the ratios of spectral terms. The expression for s(ω, c) may be split,

s(ω, c) = s(ω)F(ω, c).   (A4)

The joint PDF for r and c may be rewritten as

p_{r,c}(r_0, c_0) = p_{r|c}(r_0|c_0) p_c(c_0).   (A5)

Proceeding further requires some knowledge of the scene and camera parameters. It might be assumed that all c are equiprobable between some minimum and some maximum value determined by the imaging situation.12 Thus, the PDF for c is

p_c(c_0) = 1/(c_max − c_min) for c_min ≤ c_0 ≤ c_max, and 0 otherwise.   (A6)

The PDF for the noise component n is

p_n(n_0) = [1/(√(2π) σ_n)] exp(−n_0²/2σ_n²).   (A7)

If s(ω) is assumed to be known, then the PDF for the observation vector may be expressed as

p_{r|c}(r_0|c_0) = [1/(2πσ_n²)^{K/2}] exp{−[1/(2σ_n²)] Σ_{i=1}^{K} [r(ω_i) − F(ω_i, c)s(ω_i)]²}.   (A8)

After multiple substitutions and simplifications, the Cramér-Rao bound becomes

σ_c² ≥ {(1/σ_n²) Σ_{i=1}^{K} s²(ω_i) [∂F(ω_i, c)/∂c]²}^{−1},   (A9)

an expression that allows computing the lower bound for the variance in the estimate of c given a particular s(ω), a measure of noise content, and a presumed-correct model for the blur function F(ω, c). This is then converted to variance in the z estimate by applying Eq. (A1). As the form of the function F(ω, c) is known in advance (it is specified as part of the algorithm), the function representing the partial derivative is known in advance as well and must only be evaluated numerically for the computed value of c and summed over each ω_i in the power spectrum. This represents an extremely small increase in computation given that a pair of discrete Fourier transforms and a regression fit have already been carried out.

That this bound is highly optimistic and will not likely be approached by an actual implementation is due to several causes, in particular: F(ω, c) is not a perfect model; the noise model assumed is not a perfect model; s(ω) is not actually known (instead there is only a long-depth-of-field spectrum, which itself contains some amount of blur, noise, and aliasing13); and the spectrum estimates result from a windowing process that may introduce a certain amount of bias, variance, and spatial uncertainty.

Proceeding in a similar fashion, one can compute the lower variance bound for sign-bit correlation stereopsis. Again, first it is necessary to relate variance in z to variance in the computed quantity, in this case disparity. In the simplest possible case, the two cameras are parallel, with optical axes separated by distance b. If the disparity is τ,

σ_z² = (fb/τ²)² σ_τ².   (A10)

Nishihara9 has shown that when both the signal and the noise can be modeled as zero-mean Gaussian processes, the probability of some point in the convolution changing sign as a result of the noise is

P = (1/π) tan^{−1}(σ_n/σ_s);   (A11)

in the analysis to follow, though, any known P will work, so the assumption that the signal is Gaussian is not essential.
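Eq. (A9) needs only the spectrum samples, the noise level, and the closed-form derivative of the blur model, so evaluating it is cheap. A sketch, using an assumed Gaussian F rather than the paper's calibrated model, with illustrative numbers throughout:

```python
import numpy as np

def crlb_c(s, dF_dc, sigma_n):
    """Eq. (A9): Cramer-Rao lower bound on the variance of the c estimate.

    s       -- spectrum samples s(omega_i), assumed known
    dF_dc   -- dF(omega_i, c)/dc evaluated at the estimated c
    sigma_n -- noise standard deviation
    """
    return sigma_n**2 / np.sum(s**2 * dF_dc**2)

def crlb_z(var_c, z, f, v, n_ap):
    """Eq. (A1): convert the bound on c into a bound on the range z."""
    return (n_ap * z**2 / (f * v))**2 * var_c

# Assumed Gaussian blur transform F = exp(-2 pi^2 c^2 w^2); its
# derivative dF/dc = -4 pi^2 c w^2 F is known in closed form.
w = np.linspace(0.01, 0.5, 50)
c = 2.0
F = np.exp(-2 * np.pi**2 * c**2 * w**2)
dF = -4 * np.pi**2 * c * w**2 * F
s = np.ones_like(w)                      # flat spectrum, for illustration
var_c = crlb_c(s, dF, sigma_n=0.1)
var_z = crlb_z(var_c, z=0.5, f=0.05, v=0.055, n_ap=4.0)  # hypothetical lens
```

Doubling the signal spectrum or halving the noise each cuts the bound by a factor of 4, matching the intuition that the estimate is most trustworthy where the image has local detail.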

As in the depth-from-focus discussion, some simplifying assumptions are in order. Misalignments, vertical disparities, occlusions, optical distortions, and similar factors will be neglected, and it will be assumed that if the correlation patches from the two images are r and s, in the absence of noise

r(x_i + τ) = s(x_i).   (A12)

Assuming a uniform PDF for τ, and assuming s is known, the PDF for an r of K points is the familiar Pascal distribution

p_{r|τ}(r_0|τ_0) = P^U (1 − P)^{K−U},   (A13)

where U, the number of points in r whose sign has been changed by noise, is

U = (1/2)[K − Σ_{i=1}^{K} r(x_i) s(x_i − τ)].   (A14)

[Note that a point r(x_i) or s(x_i) is restricted to either 1 or −1.] Simplification yields the expression

σ_τ² ≥ {(1/2) ln[P/(1 − P)] (∂/∂τ) Σ_{i=1}^{K} r(x_i) s(x_i − τ)}^{−2}.   (A15)

The partial derivative can be seen as simply the slope of the correlation function in the vicinity of the peak. When the slope is small (that is, the peak is broad) or P is large, the variance increases. Since the correlation function was already computed as part of the stereo matching process, little additional computation is needed to estimate the derivative by, for example, differencing between the peak and its neighbors.
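Since the correlation values around the peak are already in hand, evaluating Eq. (A15) with P from Eq. (A11) costs one finite difference per point. A sketch with illustrative numbers:

```python
import numpy as np

def sign_flip_probability(sigma_n, sigma_s):
    """Eq. (A11): probability that noise flips a sign bit, for
    zero-mean Gaussian signal and noise."""
    return np.arctan(sigma_n / sigma_s) / np.pi

def disparity_variance_bound(corr, peak, P):
    """Eq. (A15): bound sigma_tau^2 using the correlation slope near
    the peak, estimated by differencing the peak and a neighbor."""
    slope = corr[peak] - corr[peak - 1]
    return (0.5 * np.log(P / (1.0 - P)) * slope) ** -2

corr = np.array([10.0, 40.0, 90.0, 100.0, 85.0, 30.0])  # illustrative
P = sign_flip_probability(sigma_n=0.2, sigma_s=1.0)
bound = disparity_variance_bound(corr, peak=3, P=P)
```

A broad peak (small slope) or a P near 1/2 (noise frequently flipping bits) both inflate the bound, matching the qualitative behavior described above.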

ACKNOWLEDGMENTS

The author thanks Tamas Sandor for reading and providing helpful comments on an early draft of this paper.

The study described in this paper has been supported by CPW Technologies (Columbia Pictures Entertainment Incorporated, Paramount Pictures Corporation, and Warner Brothers Incorporated).

REFERENCES AND NOTES

1. R. A. Jarvis, "A perspective on range finding techniques for computer vision," IEEE Trans. Pattern Anal. Mach. Intell. PAMI-5, 122-139 (1983).

2. W. E. L. Grimson, From Images to Surfaces (MIT Press, Cambridge, Mass., 1981), pp. 101-137.

3. D. Terzopoulos, "The computation of visible-surface representations," IEEE Trans. Pattern Anal. Mach. Intell. 10, 417-438 (1988).

4. V. M. Bove, Jr., "Synthetic movies derived from multi-dimensional image sensors," Ph.D. dissertation (Massachusetts Institute of Technology, Cambridge, Mass., 1989).

5. H. L. Van Trees, Detection, Estimation, and Modulation Theory, Part I (Wiley, New York, 1968), pp. 65-73.

6. W. Gellert, H. Küstner, M. Hellwich, and H. Kästner, eds., The VNR Concise Encyclopedia of Mathematics (Van Nostrand Reinhold, New York, 1977), pp. 607-624.

7. S. W. Holman, Discussion of the Precision of Measurements(Wiley, New York, 1892), pp. 4-99.

8. K. Fukunaga, Introduction to Statistical Pattern Recognition(Academic, New York, 1972), p. 127.

9. H. K. Nishihara, "Practical real-time imaging stereo matcher,"Opt. Eng. 23, 536-545 (1984).

10. V. M. Bove, Jr., "Discrete Fourier transform based depth-from-focus," in Digest of Topical Meeting on Image Understanding and Machine Vision (Optical Society of America, Washington, D.C., 1989), pp. 118-121.

11. Because of image sensor characteristics, the assumption of white, Gaussian noise is probably not strictly correct. If the actual spectral distribution of noise were known, it would be regarded here as the product of white noise and some transfer function H(ω). Whiteness is also technically violated by the collapse of two-dimensional spectra to radial frequency, as not all radial frequency terms result from the same number of terms in the original spectrum, and thus variance is a function of ω.

12. More correctly, one might assume that all z are equiprobable and compute the PDF for c based on this knowledge as well as on the shape of the c(z) curve for the particular lens parameters employed. For a scene in which the range of c is fairly limited, assuming a uniform PDF for c is not unreasonable given a uniform PDF for z.

13. Aliasing content can be included in the expression for r(ω) as an additional term s(W − ω), where W is the sampling frequency.


