
D. R. Gerwe and P. S. Idell Vol. 20, No. 5 /May 2003/J. Opt. Soc. Am. A 797

Cramér–Rao analysis of orientation estimation: viewing geometry influences on the information conveyed by target features

David R. Gerwe and Paul S. Idell

Boeing Lasers & Electro-Optics Systems, 6633 Canoga Avenue, MC WB63, Canoga Park, California 91309

Received May 20, 2002; revised manuscript received December 9, 2002; accepted January 2, 2003

A methodology for analyzing an imaging sensor's ability to assess target properties is developed. By the application of a Cramér–Rao covariance analysis to a statistical model relating the sensor measurements to the target, a lower bound can be calculated on the accuracy with which any unbiased algorithm can form estimates of target properties. Such calculations are important in understanding how a sensor's design influences its performance for a given assessment task and in performing feasibility studies or system architecture design studies between sensor designs and sensing modalities. A novel numerical model relating a sensor's measurements to a target's three-dimensional geometry is developed in order to overcome difficulties in accurately performing the required numerical computations. The accuracy of the computations is verified against simple test cases that can be solved in closed form. Examples are presented in which the approach is used to investigate the influence of viewing perspective on orientation accuracy limits. These examples are also used to examine the potential accuracy improvement that could be gained by fusing multiperspective data. © 2003 Optical Society of America

OCIS codes: 100.2960, 100.5010.

1. INTRODUCTION

One of the most common uses of imagery is the assessment of the properties of the imaged target and its subcomponents. Properties that are directly related to an image include size, shape, position, orientation, temperature, and reflectance characteristics, to name a few. In the process of designing a sensor or planning a measurement operation, it is important to understand and predict how a sensor's characteristics and the method of its employment influence the accuracy with which the desired target assessments can be made. It is also of interest to understand the performance benefit that might be achieved by fusing data from multiple sensing sources. Examples of data fusion include combining information from images taken at multiple perspectives and times and/or from a mix of sensing modalities such as conventional passive optical, hyperspectral, ranging laser radar, radar, and range-Doppler imaging measurements.

Assessment accuracies are often characterized by analyzing the noise properties of specific estimation algorithms or by testing algorithms against simulated or experimental data and comparing the results against the known or otherwise inferred state of the target.1–8 This approach is most useful in predicting the accuracy of a specific target, sensor, and processing algorithm combination. The accuracies obtained will, however, be colored by the optimality of the algorithm. To use this approach in determining how sensor and target characteristics limit assessment accuracy, the algorithms employed must have optimality characteristics such as those exhibited by maximum-likelihood, maximum a posteriori, and minimum-mean-square-error (MMSE) estimators. Simulations using optimal estimators have been widely and successfully used to characterize target assessment accuracies on tasks such as detection, recognition, location, and orientation estimation.1,2,9–17 The development and the implementation of such algorithms can be difficult and time-consuming and may involve significant computational requirements, especially if many repetitions are needed to build up adequate performance statistics. This approach is convenient, however, if an end goal is to realize an optimal estimation algorithm.

This paper addresses the discussed analysis needs by using the approach of the Cramér–Rao lower bound (CRLB).18,19 Because this accuracy bound is derived from a probabilistic and presumably physics-based model of the target–sensor interaction and measurement noise processes, it represents a fundamental limit that is independent of the ability or the requirement to implement an optimal estimation algorithm. The CRLB simply gives the lowest mean square error (MSE) achievable by the theoretically optimal unbiased estimator. This is powerful for performing feasibility studies and system architecture design studies, since it permits sensor performance to be analyzed without knowledge of an optimal estimation algorithm. The CRLB is also a useful benchmark for comparing actual algorithms against, since it describes a limit on the best possible performance. It must be kept in mind, though, that if in practice an efficient estimation algorithm is not implemented, performance could be significantly worse than the limit indicated by the CRLB. This is strictly true only if the actual physics of the target/sensor system truly follows the forward model used to compute the CRLB. Differences could permit the real system to perform better than the CRLB. Typically, however, increasing the fidelity of the model to include more real-life uncertainties in the system drives the CRLB up. Since it is virtually impossible to include all actual uncertainties in the model, the CRLB is generally optimistic. In many cases, an estimator that achieves the CRLB (an efficient estimator) does not exist. Other bounds may be derived (for example, the Bhattacharyya, Bobrovsky–Zakai, and Weiss–Weinstein bounds) that are equal to or greater than the CRLB and thus may provide tighter limits on the accuracy that the optimal estimator could achieve.18–20

An advantageous feature of CRLBs is that they are solely a function of the local derivatives of the Bayesian probability density function (pdf) used to describe the measurement process as averaged over the noise processes. A significant body of work has been performed with the use of bounds on the MSE of various error metrics that have many of the same advantages as those of the CRLBs for evaluating theoretical limits on target assessment accuracy.21–24 MSE bounds typically involve evaluating integrals of the error metric over the space of possible parameter values, weighted by the posterior pdf. The CRLB provides an alternative approach, which in many cases will require fewer computational operations because it is solely a function of local derivatives. This is especially important in addressing estimation problems that involve many degrees of freedom. Though similar, Cramér–Rao and MSE bounds offer slightly different and complementary interpretations of estimation accuracy limits.18–20 Relationships between CRLBs and MSE metrics on target assessment under asymptotic limits such as high signal-to-noise ratio have been examined by Grenander et al.23 and Cooper and Miller.24 It should be pointed out that the application of CRLBs is restricted to parameter spaces that are flat, or to limiting situations in which nonflat parameter spaces such as orientation Euler angles can be approximated as flat. MSE-type bounds more naturally accommodate such issues.21–23,25 This issue is discussed in further detail in Subsection 2.B.

Because the pdfs from which Cramér–Rao and MSE bounds are derived are often easily generalized26 to express the joint statistics of measurement sequences and multiple sensors, they naturally address analysis of the potential performance benefits of data fusion. This permits the performance limits on assessing the same target features using fundamentally different measurement modalities to be compared on a common footing. It also provides a method for understanding the ways in which combinations of multiple measurements may provide complementary information. This use of CRLBs is demonstrated in Section 6, in which the potential accuracy gains of fusing multiperspective data for target orientation estimation are analyzed. Numerous other examples of this feature of probability-model-based approaches exist in the literature, both as a method of accuracy prediction and as a framework for development of data fusion algorithms.1,2,11,12,15,16,22 Through the use of Fisher information, which is a function of the Bayesian model and from which the CRLB is derived, this paper also examines how information regarding target orientation is exhibited within an image. Insights on how the location of features on a target influences the information provided regarding different orientational aspects are developed through illustrative examples presented in Sections 4 and 6 by using simple rectangular targets and complex three-dimensional (3-D) targets, respectively.

The starting point for CRLB calculations is a forward probabilistic model describing the statistical relation of the imaging measurement to a state vector of target properties. For common additive noise sources with Gaussian or Poisson statistics, the required computations primarily involve calculating derivatives of the mean image measurement with respect to the object state parameter values. Because of the highly nonlinear relations between the two-dimensional (2-D) image and 3-D target structure, accurate numerical calculation of the derivatives is challenging. However, once an adequate target/sensor model is developed, it is straightforward to analyze assessment bounds for any combination of target and sensor properties and for multiple images and sensor types. The calculated bound corresponds to the accuracy with which the estimated target properties will tend to match the true stipulated values. This implies that the bound is specific to the actual values of all target properties, to the imaging geometry (e.g., range, illumination, and perspective), and to the sensor characteristics [e.g., noise sources, aperture size, focal length, instantaneous field of view (IFOV), spectral sensitivity, and exposure time]. IFOV refers to the angular portion of the field of view (FOV) subtended by one focal-plane-array (FPA) element, that is, the actual width of the element divided by the focal length of the system.

In exploring the use of CRLBs for assessment accuracy characterization, we focus on estimating properties of an incoherently illuminated target against a uniform background using gray-scale imagery as would be produced by a conventional passive electro-optic imaging system. This encompasses a wide class of problems, including assessing properties of bodies in space or in earth's atmosphere against a uniform background, and many earth-bound situations such as surveillance or robotic vision (so long as clutter is not significant). The dominant noise sources are assumed to be a combination of photon noise from target, background, and ambient light sources, along with FPA readout noise. General CRLB expressions are derived in Section 2 for an arbitrary set of target and sensor parameters. Accurate treatment of more complicated noise sources such as clutter, coherent-illumination-related speckle, and turbulence-induced speckle would require further generalization of the imaging model presented here.

The illustrative estimation problem focused on in this paper is target orientation. Section 3 discusses the difficulties of calculating partial derivatives on the nonlinear relationships of image plane measurements to orientation of targets with complex 3-D shapes and lighting geometries. It then presents a novel approach to overcoming this problem by development of a highly accurate rendering model that generates smooth and continuous changes in the measurement values with respect to small changes in target shape and orientation. The approach is based on an analytic method of calculating the radiant exitance from discrete surface elements on the target's surface and incident on each pixel of the sensor's FPA. The accuracy of the approach is verified in Section 4 by comparing numerical results against an exact closed-form expression derived for a simple target. Sensitivity of the calculations to the step size used in the finite-difference approximations is also examined.

Section 5 explores the impact of nuisance parameters. These correspond to unknown information about the target or the sensor that is not really of concern except for the impact of the added uncertainties on the ability to estimate the quantities of principal interest. However, inclusion of all the relevant target and system uncertainties is quite important to the CRLB analysis approach.10,14,19,21,22,24,27,28 If there are cross correlations in the measurement between the parameters of principal interest and other quantities not included in the analysis, the calculated bound will be artificially low and thus decreased in relevance to the real-world problem.

Section 6 demonstrates the presented methodology by analyzing the decrease in the CRLB on the accuracy with which a satellite's orientation can be estimated when several images taken from different perspectives are considered jointly. This section also examines the influence of viewing perspective on bounds for estimating different aspects of a target's orientation.

2. MATHEMATICAL DEVELOPMENT

A. Imaging Sensor Cramér–Rao Lower Bound

Assume that the vector ξ = {ξ_1, ξ_2 ,…, ξ_k ,…, ξ_N} specifies all relevant target/sensor system parameters that are unknown. The Cramér–Rao bound describes a lower limit on the accuracy achievable (minimum error variance) in forming an unbiased estimate of ξ_k from a measurement vector d:18,19

$$\langle (\hat{\xi}_k - \xi_k)^2 \rangle \geq \mathrm{CRLB}\{\xi_k | \boldsymbol{\xi}\} \equiv \big[ \mathbf{F}_D^{-1}(\boldsymbol{\xi}) \big]_{kk}, \qquad (1)$$

$$\big[ \mathbf{F}_D(\boldsymbol{\xi}) \big]_{kl} = -\left\langle \frac{\partial^2 \ln p(\mathbf{d}|\boldsymbol{\xi})}{\partial \xi_k \, \partial \xi_l} \right\rangle = \left\langle \frac{\partial \ln p(\mathbf{d}|\boldsymbol{\xi})}{\partial \xi_k} \, \frac{\partial \ln p(\mathbf{d}|\boldsymbol{\xi})}{\partial \xi_l} \right\rangle. \qquad (2)$$

In these equations, ξ̂_k is the estimated value of ξ_k, F_D(ξ) is the Fisher information matrix representing information obtainable from the data, and the averages are performed over the noise statistics. The quantity p(d|ξ) is the pdf for obtaining measurement d given values of ξ corresponding to a particular system state. This function must be specified such that it accurately models the system of interest.
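As a concrete numerical illustration of relations (1) and (2), the bound for each parameter is the corresponding diagonal element of the inverted Fisher matrix. A minimal sketch in Python, assuming a purely hypothetical two-parameter Fisher matrix:

```python
import numpy as np

def crlb_from_fisher(F):
    """Per-parameter CRLB: the diagonal of the inverse Fisher matrix, Eq. (1)."""
    return np.diag(np.linalg.inv(F))

# Hypothetical 2-parameter Fisher matrix with correlated parameters.
F = np.array([[4.0, 1.0],
              [1.0, 2.0]])
bounds = crlb_from_fisher(F)

# The off-diagonal coupling inflates each bound above the naive 1/F_kk value.
assert np.all(bounds >= 1.0 / np.diag(F))
```

The comparison in the last line shows why the full matrix inverse is needed: cross correlations between parameters (nonzero off-diagonal Fisher terms) always loosen the bound relative to treating each parameter in isolation.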

Typically, in addition to the principal parameters that one wishes to estimate, the vector of unknowns ξ includes additional uncertainties in the system. To estimate the principal parameters, either we must specify values of these additional "nuisance" parameters by using whatever a priori information is available and integrate the nuisance parameters out of the pdf (possibly weighted by any a priori statistics), or we must jointly estimate the full parameter set. These additional uncertainties can have a significant influence on accuracy bounds. To obtain realistic results, it is important for their influence to be considered in the analysis along with the influence of any a priori information that can be employed. The influence of a priori knowledge is easily incorporated into the bound by adding a second term F_P(ξ) to relation (1), which characterizes the additional information provided by the prior p(ξ):

$$\big[ \mathbf{F}_P(\boldsymbol{\xi}) \big]_{kl} = -\left\langle \frac{\partial^2 \ln p(\boldsymbol{\xi})}{\partial \xi_k \, \partial \xi_l} \right\rangle = \left\langle \frac{\partial \ln p(\boldsymbol{\xi})}{\partial \xi_k} \, \frac{\partial \ln p(\boldsymbol{\xi})}{\partial \xi_l} \right\rangle. \qquad (3)$$

The modified bound is29

$$\langle (\hat{\xi}_k - \xi_k)^2 \rangle \geq \big[ \{ \mathbf{F}_D(\boldsymbol{\xi}) + \mathbf{F}_P(\boldsymbol{\xi}) \}^{-1} \big]_{kk}. \qquad (4)$$

When the a priori statistics of ξ are jointly Gaussian with covariance matrix Λ_ξ and mean ⟨ξ⟩, i.e.,

$$p(\boldsymbol{\xi}) = \big[ (2\pi)^{N/2} |\boldsymbol{\Lambda}_\xi|^{1/2} \big]^{-1} \exp\!\big[ -\tfrac{1}{2} (\boldsymbol{\xi} - \langle\boldsymbol{\xi}\rangle)^{\mathrm{T}} \boldsymbol{\Lambda}_\xi^{-1} (\boldsymbol{\xi} - \langle\boldsymbol{\xi}\rangle) \big], \qquad (5)$$

then the a priori information is simply described by the inverse of the covariance matrix of ξ, i.e., F_P = Λ_ξ^{-1}. The influence of nuisance parameters and a priori information on target parameter estimation accuracy bounds will be explored further in Section 5.
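For a Gaussian prior, relation (4) therefore amounts to adding the inverse prior covariance to the data Fisher matrix before inverting. A small illustrative sketch; the matrices below are hypothetical, chosen so that the prior is informative for only one of the two parameters:

```python
import numpy as np

def crlb_with_prior(F_D, Lambda_prior):
    """Relation (4) with a Gaussian prior: F_P = inv(Lambda), per Eq. (5)."""
    F_P = np.linalg.inv(Lambda_prior)
    return np.diag(np.linalg.inv(F_D + F_P))

F_D = np.diag([4.0, 0.1])        # the data alone barely constrain parameter 2
Lambda = np.diag([100.0, 0.25])  # but the prior on parameter 2 is tight

b_data = np.diag(np.linalg.inv(F_D))   # data-only bound, relation (1)
b_post = crlb_with_prior(F_D, Lambda)  # prior-augmented bound, relation (4)

# Adding prior information never loosens the bound on any parameter.
assert np.all(b_post <= b_data)
```

In this hypothetical case the data-only bound on the second parameter is 10, while the prior tightens it to roughly the prior variance of 0.25, illustrating how a priori knowledge compensates for weakly observed parameters.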

1. Noise Statistics

To develop the imaging CRLB further, we must now specify the noise statistics described by p(d|ξ). We consider imaging systems for which the measurement noise is statistically independent between each FPA element and is an additive sum of Poisson and Gaussian noise sources:

$$d(\mathbf{x}) = \mathrm{Poisson}\{ g(\mathbf{x}, \boldsymbol{\xi}) \} + n_{\sigma_x}. \qquad (6)$$

In Eq. (6), n_{σ_x} is a zero-mean Gaussian random variable with standard deviation σ_x, Poisson{J} denotes a Poisson random variable with mean J, and x = (x, y) indexes the grid of elements on the imaging system's FPA. The function g(x|ξ) specifies the mean number of photoelectrons plus dark-current electrons detected at sensor element x of the image measurement when the target/sensor system state is ξ, and thus it embodies the relation of the ensemble-average image to all target/sensor parameters.

Note that both the Poisson and Gaussian terms may each include contributions from several signal and noise processes that are in addition to the target, such as target background, stray light, dark current, and electronics noise. Noise associated with digital quantization of the signal has been neglected. This approximation is reasonable as long as the quantization step size is significantly smaller than either the Poisson or Gaussian noise components and the signal does not saturate. The described noise model is appropriate for incoherent image measurements using a wide class of sensors such as CCD, complementary metal-oxide semiconductor, and photomultiplier arrays along with many infrared sensors.14,15,30,31 Accurate treatment of noise sources with pixel-to-pixel correlations, such as laser speckle, turbulence-induced scintillation, and backgrounds with random structure (such as clutter), would require further generalization of the imaging model presented here.
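The measurement model of Eq. (6) is straightforward to simulate, which is useful for sanity-checking a noise budget: the per-pixel variance of simulated frames should equal g + σ². A sketch using NumPy; the flat 50-electron mean image and 3-electron read noise are illustrative values, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_measurement(g, sigma):
    """Draw one noisy frame per Eq. (6): Poisson shot noise plus Gaussian read noise.
    g     : mean photoelectron image g(x|xi) (2-D array)
    sigma : per-pixel read-noise standard deviation (scalar or array)"""
    return rng.poisson(g) + rng.normal(0.0, sigma, size=g.shape)

g = np.full((64, 64), 50.0)  # hypothetical flat 50-electron mean image
frames = np.stack([simulate_measurement(g, 3.0) for _ in range(2000)])

# Sample variance should approach g + sigma^2 = 59 electrons^2 per pixel.
var = frames.var(axis=0).mean()
assert abs(var - 59.0) < 3.0
```

The final check reflects the fact, used later in Eq. (13), that the variance of the combined noise at a pixel is the sum of the Poisson variance (equal to the mean g) and the Gaussian variance σ².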



2. Cramér–Rao Lower Bound for Poisson Noise

Examining first the simplified case for which the measurement vector d includes only a Poisson noise contribution (i.e., σ_x = 0), we obtain the following conditional pdf:

$$p(\mathbf{d}|\boldsymbol{\xi}) = \prod_{\mathbf{x} \in F} \frac{\exp[-g(\mathbf{x}|\boldsymbol{\xi})] \, g(\mathbf{x}|\boldsymbol{\xi})^{d(\mathbf{x})}}{d(\mathbf{x})!}. \qquad (7)$$

In Eq. (7), F denotes the set of FPA elements in the FOV. If we substitute Eq. (7) into Eq. (2) and employ the fact that for Poisson statistics the variance of d(x|ξ) is equal to its mean g(x|ξ), it is straightforward to show that

$$\big[ \mathbf{F}_D(\boldsymbol{\xi}) \big]_{kl} = \sum_{\mathbf{x} \in F} \frac{1}{g(\mathbf{x}|\boldsymbol{\xi})} \, \frac{\partial g(\mathbf{x}|\boldsymbol{\xi})}{\partial \xi_k} \, \frac{\partial g(\mathbf{x}|\boldsymbol{\xi})}{\partial \xi_l}. \qquad (8)$$

3. Cramér–Rao Lower Bound for Gaussian Noise

If instead we assume that the noise associated with the measurements at each FPA element x consists solely of a Gaussian component with variance σ_x², i.e.,

$$d(\mathbf{x}) = g(\mathbf{x}, \boldsymbol{\xi}) + n_{\sigma_x}, \qquad (9)$$

then the measurement d is described by the following conditional pdf:

$$p(\mathbf{d}|\boldsymbol{\xi}) = \prod_{\mathbf{x} \in F} \frac{1}{\sqrt{2\pi\sigma_x^2}} \exp\!\big[ -|d(\mathbf{x}) - g(\mathbf{x}|\boldsymbol{\xi})|^2 / (2\sigma_x^2) \big]. \qquad (10)$$

Upon substituting Eq. (10) into Eq. (2), we find that

$$\big[ \mathbf{F}_D(\boldsymbol{\xi}) \big]_{kl} = \sum_{\mathbf{x} \in F} \frac{1}{\sigma_x^2} \, \frac{\partial g(\mathbf{x}|\boldsymbol{\xi})}{\partial \xi_k} \, \frac{\partial g(\mathbf{x}|\boldsymbol{\xi})}{\partial \xi_l}. \qquad (11)$$

When k = l, each term of the sums in Eqs. (8) and (11) expresses the Fisher information gained from an individual FPA element x about a single state parameter ξ_k. From this, we see that the Fisher information about parameter ξ_k is proportional to the sensitivity of a pixel measurement to variations in that state parameter and inversely proportional to the measurement noise. Because of statistical independence of the noise at each pixel, the total Fisher information is simply the sum over the Fisher information obtained from all detector elements. When k ≠ l, each term in Eq. (8) or (11) expresses the joint Fisher information for system parameters ξ_k and ξ_l contributed by each FPA element.
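This per-pixel structure of Eqs. (8) and (11) suggests a direct numerical implementation: weight each pixel's derivative products by the inverse noise variance and sum over the array. A sketch under that reading; the array shapes and the toy derivative images below are assumptions for illustration:

```python
import numpy as np

def fisher_matrix(dg, noise_var):
    """Assemble [F_D]_kl by summing per-pixel contributions, as in Eqs. (8) and (11).
    dg        : array (N_params, H, W) of derivatives dg/dxi_k at each pixel
    noise_var : per-pixel noise variance (g for Poisson, sigma^2 for Gaussian)"""
    D = dg.reshape(dg.shape[0], -1)  # one row of pixel derivatives per parameter
    w = 1.0 / np.ravel(noise_var)    # inverse-variance weight per pixel
    return (D * w) @ D.T

# Toy 2x2 image: two parameters sensed by disjoint pixel sets.
dg = np.zeros((2, 2, 2))
dg[0, 0, :] = 3.0  # parameter 0 only affects the top row of pixels
dg[1, 1, :] = 2.0  # parameter 1 only affects the bottom row of pixels
F = fisher_matrix(dg, noise_var=np.full((2, 2), 4.0))

# Disjoint pixel support means no joint (off-diagonal) Fisher information.
assert F[0, 1] == 0.0
```

The off-diagonal check mirrors the k ≠ l discussion above: cross terms arise only where two parameters modulate the same pixels.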

4. Cramér–Rao Lower Bound for Additive Combination of Poisson and Gaussian Noise

When the noise is an additive combination of Poisson and Gaussian sources, the pdf becomes more complicated. However, it is often a reasonable treatment of this situation32 to use the following approximate noise model:

$$d(\mathbf{x}) = \mathrm{Poisson}\{ g(\mathbf{x}, \boldsymbol{\xi}) + \sigma_x^2 \} - \sigma_x^2, \qquad (12)$$

which matches the mean and the variance of the combined noise. Within this approximation,

$$\big[ \mathbf{F}_D(\boldsymbol{\xi}) \big]_{kl} = \sum_{\mathbf{x} \in F} \frac{1}{g(\mathbf{x}|\boldsymbol{\xi}) + \sigma_x^2} \, \frac{\partial g(\mathbf{x}|\boldsymbol{\xi})}{\partial \xi_k} \, \frac{\partial g(\mathbf{x}|\boldsymbol{\xi})}{\partial \xi_l}. \qquad (13)$$

Note that the denominator in Eq. (13), g(x|ξ) + σ_x², is the noise variance of pixel x.

5. Cramér–Rao Lower Bound for Multiple Images

Let us now generalize the result to the case for which d corresponds to multiple images, taken either as a temporal sequence from one sensor and/or by multiple sensors that may each employ different imaging modalities. Assuming that the noise for each sensor is statistically independent and using the described method of approximating the Poisson and Gaussian noise combination by a biased Poisson process, we obtain the following pdf:

$$p(\mathbf{d} + \boldsymbol{\sigma}^2 | \boldsymbol{\xi}) = \prod_q \prod_{\mathbf{x} \in F_q} \frac{\exp\!\big\{ -[g_q(\mathbf{x}|\boldsymbol{\xi}) + \sigma_{x,q}^2] \big\} \, [g_q(\mathbf{x}|\boldsymbol{\xi}) + \sigma_{x,q}^2]^{d_q(\mathbf{x}) + \sigma_{x,q}^2}}{[d_q(\mathbf{x}) + \sigma_{x,q}^2]!}, \qquad (14)$$

where q is used to index the image set and F_q represents the FOV for image q. This final generalization results in

$$\big[ \mathbf{F}_D(\boldsymbol{\xi}) \big]_{kl} \cong \sum_q \sum_{\mathbf{x} \in F_q} \frac{1}{g_q(\mathbf{x}|\boldsymbol{\xi}) + \sigma_{x,q}^2} \, \frac{\partial g_q(\mathbf{x}|\boldsymbol{\xi})}{\partial \xi_k} \, \frac{\partial g_q(\mathbf{x}|\boldsymbol{\xi})}{\partial \xi_l}, \qquad (15)$$

which is just the sum of the Fisher information matrices corresponding to each image. Because of this simple result, it is easy to calculate the potential performance benefits that could be achieved by fusing data from multiple image measurements and imaging modalities. It can be shown that combining information from multiple independent measurements always reduces the CRLB on every element of the vector parameter ξ below that obtained from any single measurement.27 This is a direct consequence of the data processing theorem for Fisher information.33
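Because the total Fisher information is a sum over images, fusion analysis reduces to summing the per-image matrices before inverting. The following sketch, using two hypothetical single-view Fisher matrices, illustrates the stated property that joint processing lowers the bound on every parameter:

```python
import numpy as np

def fused_crlb(fisher_list):
    """Relation (15): total Fisher information is the sum over images;
    the fused CRLB is the diagonal of its inverse."""
    return np.diag(np.linalg.inv(np.sum(fisher_list, axis=0)))

# Hypothetical Fisher matrices from two viewing perspectives.
F1 = np.array([[5.0, 0.0], [0.0, 0.5]])  # view 1: strong on parameter 0
F2 = np.array([[0.5, 0.0], [0.0, 5.0]])  # view 2: strong on parameter 1

single = [np.diag(np.linalg.inv(F)) for F in (F1, F2)]
fused = fused_crlb([F1, F2])

# Fusion lowers the bound on every parameter relative to either view alone.
assert np.all(fused <= np.minimum(*single))
```

The two views here were deliberately chosen to be complementary, each constraining a parameter the other observes poorly, which is exactly the multiperspective situation analyzed in Section 6.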

It is of interest to note that when the Gaussian noise component can be neglected, such as in the photon-noise limit (i.e., σ² ≪ g for all x ∈ F_q, q = 1, 2,…), the Fisher information scales linearly with the light level of the image. Correspondingly, estimation accuracy scales as the square root of the inverse of the light level. In the opposite limit, for which background noise dominates (σ² ≫ g for all x ∈ F_q, q = 1, 2,…), the Fisher information scales in proportion to the square of the light level, and estimation accuracy scales as its inverse.
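These two scaling regimes can be checked on a single pixel of Eq. (13), where scaling the light level by a factor α scales both g and ∂g/∂ξ by α. The numbers below are illustrative only:

```python
def fisher_1px(g, dg, sigma2):
    """Single-pixel, single-parameter Fisher term of Eq. (13)."""
    return dg**2 / (g + sigma2)

g, dg = 100.0, 10.0

# Photon-noise limit (sigma^2 << g): Fisher information scales linearly in alpha.
r = fisher_1px(4 * g, 4 * dg, sigma2=0.0) / fisher_1px(g, dg, sigma2=0.0)
assert r == 4.0

# Background-noise limit (sigma^2 >> g): scaling is approximately quadratic.
s2 = 1.0e6
r = fisher_1px(4 * g, 4 * dg, s2) / fisher_1px(g, dg, s2)
assert abs(r - 16.0) < 0.01
```

Quadrupling the light level quadruples the Fisher information in the photon-noise limit but multiplies it by roughly sixteen in the background-dominated limit, matching the linear and quadratic scalings stated above.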

Later sections of this paper will focus on sensors that measure the intensity distribution of light reflected off a target as a function of angle about the line of sight (LOS). Photon- and sensor-noise-dominated gray-scale images taken by a conventional passive electro-optic system operating at visible wavelengths are common examples that follow this model. Equation (14) and relation (15) also apply to IR and multispectral imaging modalities and may be appropriate to other sensor types as well, including nonimaging modalities if the index x of the measurement parameter d(x) is constructed as a vector instead of as a 2-D array.

B. Interpretation Issues of Cramér–Rao Lower Bound for Orientation Estimation

The target assessment task focused on in this paper is target orientation. The classical Cramér–Rao bound formulation applies only to flat Euclidean spaces in which each element of the vector parameter may take on any real value. Alternatives have been developed that provide a rigorous treatment of accuracy bounds on parameter spaces with curved geometries.12,13,22–25 Since Euler angles do not form such a space, the use of CRLBs to address orientation estimation is limited in its applicability. However, as long as the actual bounds are small relative to the range of the parameter space, the Cramér–Rao approach can be successfully used to produce reasonable results. For example, use of the CRLB should provide a sensible method of analysis for systems in which the calculated bound on an orientation angle is typically less than 10°. (In this paper, all calculated bounds are less than 1.0°.) However, when bounds exceed 10°, the accuracy of the results may become suspect. Of course, bounds greater than 360° are meaningless except for the implication that the measurement provides almost no information regarding the target's orientation! Related issues in which two or more orientations produce nearly indistinguishable images and are construed as equivalently the same target state have been discussed by Grenander et al.22,23 In this paper, it is assumed that a priori knowledge of the target's orientation state and the resolution of the sensor act to break any such degeneracies, so that they do not pose an issue.

When one is considering bounds on Euler angles, care must also be given to interpreting them within the context of the relation between the actual and reference orientations of the target. This is due to the fact that the ordering of the Euler rotations used to specify an object's pose is not commutative. To make interpretation as simple as possible, we calculate the CRLBs presented in this paper by using the actual orientation of the object as the reference orientation. Given that the resulting bounds are small (in this case, less than 1.0°), differences between the orderings of rotations are negligible, and the bounds can be interpreted simply as the accuracy in yaw, pitch, and roll about the actual orientation.

3. TARGET/SENSOR MODEL

Section 2 demonstrated that a general statistical model for many imaging modalities and the corresponding CRLB for estimating a set of unknown parameters could be described by Eq. (14) and relation (15), respectively. Lurking within these simple formulas is the complicated function g_q(x|ξ), which describes the ensemble average of measurement d_q(x) when the state of the target/sensor system is ξ. For a typical imaging system, g_q(x|ξ) must describe the 3-D relations between the sensed image and the target structure along with effects such as directional lighting, viewing perspectives, radiometry, spectral effects, system point-spread function (PSF), properties of the detector elements (e.g., CCD FPA), and the bidirectional reflectivity distribution function (BRDF) describing the target's surface reflectance properties. Relation (15) also requires calculation of the derivative of the mean measurement at each FPA element as a function of each state parameter of interest, i.e., ∂g_q(x|ξ)/∂ξ_k. Self-obscuration and other lighting effects generally cause the projection of a 3-D structure onto a 2-D measurement to be highly nonlinear, making a closed-form solution intractable for all but the simplest of target structures. To solve more complicated problems, this paper examines the approach of making finite-difference approximations to the derivatives. This reduces the problem to that of constructing a numerical model for accurately rendering images of the target as would be seen by the sensor in the absence of noise. As will be discussed in detail below, this approach necessitated development of a novel rendering method in order to accurately calculate the required derivatives. Although this paper will focus on imaging sensors that measure the 2-D spatial intensity distribution of incoherent light reflected by a target (e.g., a typical camera), the approach taken could easily be extended to other imaging modalities such as thermal, hyperspectral, laser radar, and range-Doppler sensors.
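The finite-difference route from rendered images to a CRLB can be sketched as follows. This is a minimal illustration, not the authors' code: `render` is a hypothetical stand-in for the full target/sensor model, and the per-pixel variance follows the Poisson-plus-Gaussian noise model used later in expression (17).

```python
import numpy as np

def fisher_crlb(render, xi, sigma2, h=1e-4):
    """Finite-difference Fisher matrix and CRLBs for parameters xi.

    render(xi) -> mean noiseless measurement g(x|xi) as an array
    (a stand-in for the full target/sensor model); sigma2 is the
    variance of the signal-independent Gaussian noise.
    """
    xi = np.asarray(xi, dtype=float)
    g0 = render(xi)
    derivs = []
    for k in range(xi.size):
        step = np.zeros_like(xi)
        step[k] = h
        # central difference in parameter k approximates dg/dxi_k
        derivs.append((render(xi + step) - render(xi - step)) / (2 * h))
    var = g0 + sigma2                       # Poisson + Gaussian variance
    F = np.array([[np.sum(dj * dk / var) for dk in derivs] for dj in derivs])
    return np.sqrt(np.diag(np.linalg.inv(F)))   # CRLB standard deviations

# Toy 1-D "sensor": a Gaussian blob with unknown position and width.
def render(xi):
    x = np.arange(64.0)
    pos, width = xi
    return 5e4 * np.exp(-0.5 * ((x - pos) / width) ** 2)

print(fisher_crlb(render, [32.0, 4.0], sigma2=40.0 ** 2))
```

The whole difficulty addressed in this section is making `render` both radiometrically accurate and smooth enough in ξ for the central differences above to be trustworthy.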

A. Image-Rendering Algorithm

Two rendering approaches were investigated. The first was based on the established tracking for antisatellite (TASAT) code developed by the U.S. Air Force Research Laboratories.34 As is common practice in many imaging codes, TASAT first generates a pristine image on a fine grid that is representative of the radiance reflected from the target toward the sensor. Convolving the pristine image with the system PSF (also modeled on a fine grid) produces the intensity pattern incident on the FPA. The noise-free sensor measurement can then be obtained by downsampling.

As described in detail in Appendix A, by forcing each element of the pristine image grid either to be fully occupied by a segment of the target or to be empty, this approach leads to a quantization of the positions of edges of target substructures. Antialiasing measures in TASAT may be employed to reduce this effect; the result is a grid with a locally adaptive, yet still finite, cell spacing. The rendering approach used by TASAT is a proven method for generating radiometrically accurate images and possesses the advantages of being straightforward and of utilizing the efficiency of the fast Fourier transform for performing convolutions. However, as shown in Appendix A, the edge-location quantization effect can cause large inaccuracies when approximating derivatives by using finite differences. In theory, this limitation can be overcome by using finer-resolution grids, but in practice the grid sizes become prohibitively large.

To circumvent this problem, we developed a novel target/sensor model based on populating the surfaces of the target with surface elements (as illustrated in Fig. 1) and on the use of a fully analytic representation of the system PSF. Each surface element has associated data values, such as location, differential area, surface normal, and

Page 6: Cramér–Rao analysis of orientation estimation: viewing geometry influences on the information conveyed by target features

802 J. Opt. Soc. Am. A/Vol. 20, No. 5 /May 2003 D. R. Gerwe and P. S. Idell

material type. The various calculation grids employed by this analytic rendering algorithm are summarized in Table 1. The algorithmic procedure is performed through the following steps:

1. For the viewing perspective and for each directional light source, an obscuration mapping operation is performed in which the target surface elements are projected onto the obscuration grid G_obscur, facilitating classification of each element as being visible from that direction or as being obscured by other surface elements. Under this mapping, the surface elements are treated as having circular shapes with radii associated with their differential areas. The obscuration profiles are determined by the relation between the surface normal and the LOS. To determine obscurations of one surface element by another, G_obscur should be fine enough that, when projected onto the target, the height and the width of each target surface element subtend two or more grid cells. As a result of this criterion and the criterion given in step 3 below, G_obscur will also be several times finer than the convolved image grid G_conv img used in steps 3 and 4 below. By-products of this step are depth and component maps calculated on G_obscur. An example of a depth map is displayed in Fig. 2. The reader should understand that although partial obscuration of one surface element by another is not considered, this step of the rendering process does not result in any type of quantization of the positions of the surface elements.

Fig. 1. 3-D plot of the centers of the surface elements used to populate a target's exterior. For purposes of illustration, the number of surface elements in the model shown here has been significantly reduced below that of a typical model.

Table 1. Sampling Density and Purpose of the Series of Grids Used by the Analytic Rendering Algorithm

Grid Type | Sample Spacing | Purpose
FPA grid G_FPA | FPA pixel spacing | Sensor measurement
Convolved image grid G_conv img | ≤(FPA pixel width)/4 and ≤(PSF width)/4 | Convolved image (photon flux density incident on the detector)
Obscuration grid G_obscur | Finer sampling than G_conv img; several samples across each target surface element | Determines self-obscurations of target segments; generates depth- and component-map by-products

2. The radiant exitance of each illuminated and nonobscured surface element is calculated from its BRDF properties and the relative geometry associated with the normal to the surface and with the lighting and viewing directions.

3. With each target surface element treated as a point source, the optical intensity distribution incident on the FPA associated with an individual element α is calculated on the convolved image grid G_conv img. The optical intensity distribution is given by a closed-form expression describing the sensor's PSF, which is a function of the vector separation between the projection of surface element α's center onto G_conv img and the centers of the grid elements of G_conv img themselves:

image intensity distribution(x′, y′)|(surf. elem. α)
    = K × (radiant intensity)_α
      × psf(x′ − P_x′(z_α), y′ − P_y′(z_α)).    (16)

In Eq. (16), z_α denotes the position of surface element α in the target coordinates, P_x′ and P_y′ project the target coordinate frame onto the FPA plane, and the constant K includes all unit-conversion and system-throughput factors. The cell spacing of the convolved image grid G_conv img used in this step is finer than that of the FPA and is such that the PSF is well sampled. This provides an accurate representation of the intensity distribution before the effects of sampling onto the FPA.

4. The results of the previous step are summed over all surface elements of the target to produce the photon flux distribution incident on the FPA. This data product is termed the convolved image.

5. The convolved image is downsampled onto the FPA pixel grid G_FPA, producing the noiseless FPA measurement. This grid is the same as that used in the equations of Subsection 2.A. An example is shown in Fig. 2.
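Step 2 above, for the Lambertian case used in the validation examples later in the paper, can be sketched as follows (function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def element_radiance(normal, sun_dir, view_dir, albedo, irradiance):
    """Reflected radiance of one Lambertian surface element (step 2 sketch).

    All direction vectors are unit length and point away from the surface.
    Returns 0 if the element faces away from the light or the viewer
    (self-shadowed / back-facing at the single-element level).
    """
    cos_illum = np.dot(normal, sun_dir)
    cos_view = np.dot(normal, view_dir)
    if cos_illum <= 0 or cos_view <= 0:
        return 0.0
    # Lambertian BRDF = albedo / pi; reflected radiance is then
    # independent of the viewing angle.
    return (albedo / np.pi) * irradiance * cos_illum

n = np.array([0.0, 0.0, 1.0])
print(element_radiance(n, n, n, albedo=0.3, irradiance=1000.0))  # ≈ 95.5
```

The radiant intensity toward the sensor in Eq. (16) is then this radiance times the element's projected differential area.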

Since both the projection operator and the PSF function in Eq. (16) are closed-form expressions, the influence of each surface element on the FPA measurement varies smoothly with changes in the element locations. Subsection 3.B demonstrates that this allows extremely accurate calculations of the required derivatives of the FPA measurements to be achieved with the use of finite-difference approximations.
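This smooth dependence is the essential property. A miniature version of steps 3-5, with illustrative grid sizes and a separable triangle PSF rather than the paper's actual model, can be written as:

```python
import numpy as np

def render_fpa(points, fluxes, psf_width, fpa_pix=32, oversample=4):
    """Steps 3-5 of the analytic rendering procedure, in miniature.

    points: (N, 2) projected surface-element centers in FPA pixel units;
    fluxes: per-element radiant intensity (arbitrary units).  The PSF is
    a separable triangle function evaluated in closed form, so pixel
    values vary smoothly as element positions change (no edge
    quantization).  Grid sizes here are illustrative, not the paper's.
    """
    n = fpa_pix * oversample
    # cell-center coordinates of the convolved image grid, in pixel units
    coords = (np.arange(n) + 0.5) / oversample
    img = np.zeros((n, n))
    for (px, py), f in zip(points, fluxes):
        tx = np.clip(1 - np.abs(coords - px) / psf_width, 0, None)
        ty = np.clip(1 - np.abs(coords - py) / psf_width, 0, None)
        img += f * np.outer(ty, tx)          # separable closed-form PSF
    # step 5: downsample onto the FPA grid by block averaging
    return img.reshape(fpa_pix, oversample, fpa_pix, oversample).mean(axis=(1, 3))

pts = np.array([[10.2, 12.7], [18.6, 20.1]])
fpa = render_fpa(pts, [1.0, 0.5], psf_width=2.0)
print(fpa.shape)
```

Shifting a point by a hundredth of a pixel here changes the output image by a comparably small amount, which is exactly the behavior finite differencing needs.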

Fig. 2. Example of a depth map and corresponding radiometric rendering using the surface-element target/sensor modeling approach.


B. Target Self-Obscuration

One difficulty remains unresolved by this model: an accurate treatment of the continuum of changes in the obscuration of one surface by another with respect to relative changes in viewing and lighting geometry. Our model does treat this effect to some degree by allowing a smooth and continuous variation of the location of the overlap between surfaces. For relatively simple objects superimposed on dark backgrounds, it may be reasonable to assume that the border of the silhouette is the predominant source of information regarding position and orientation. However, this issue undoubtedly warrants further exploration. One approach to assessing the significance of the possible error is to compare the Fisher information calculations obtained by using the analytic rendering code with a closed-form solution derived for a simple target consisting of two overlapping rectangular plates.35 Section 4 will follow this approach in validating other aspects of the analytic rendering methodology described in this paper for performing CRLB analysis.

Some thought has been given to the development of a scheme for modeling smooth and continuous obscuration of target features by one another. This goal might be accomplished by modifying the algorithm described above as follows36:

1. The surface of each of the primitive shapes composing the target is tessellated into an interlocking set of triangles.

2. At a given orientation, each triangle is compared with all other triangles, areas of overlap are identified, and the unobscured area of each triangle is calculated. In performing this comparison, one uses a series of simple criteria to eliminate combinations of triangles that cannot overlap (such as separation transverse to and along the LOS). This reduces the need for the more computationally burdensome region-of-overlap calculations to be performed for all (number of elements)² combinations. Care must be given to address situations in which surface elements physically intersect and in which an element is obscured by multiple other elements.

3. As above, the radiance exitant from each surface element (i.e., in this case, each triangular facet) is treated as emanating from a point located at the center of the element.

4. A high density of surface elements samples the target such that the light distribution is effectively continuous relative to the PSF and IFOV widths.
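A minimal sketch of the cheap-rejection idea in step 2 of this proposed scheme, using axis-aligned bounding boxes as the "simple criteria" (an assumption for illustration; the paper does not specify the exact tests):

```python
import numpy as np

def candidate_overlaps(tris):
    """Cheap rejection pass for the proposed triangle-obscuration scheme.

    tris: (N, 3, 2) triangle vertices projected transverse to the LOS.
    Returns index pairs whose axis-aligned bounding boxes overlap; only
    these pairs need the expensive exact region-of-overlap computation.
    """
    lo = tris.min(axis=1)        # (N, 2) bounding-box lower corners
    hi = tris.max(axis=1)        # (N, 2) bounding-box upper corners
    pairs = []
    for i in range(len(tris)):
        for j in range(i + 1, len(tris)):
            # reject if separated along either transverse axis
            if np.all(lo[i] <= hi[j]) and np.all(lo[j] <= hi[i]):
                pairs.append((i, j))
    return pairs

tris = np.array([
    [[0, 0], [1, 0], [0, 1]],        # overlaps the next triangle
    [[0.5, 0.5], [1.5, 0.5], [0.5, 1.5]],
    [[10, 10], [11, 10], [10, 11]],  # far away: rejected cheaply
], dtype=float)
print(candidate_overlaps(tris))      # → [(0, 1)]
```

A depth test along the LOS would then decide which member of each surviving pair obscures the other.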

4. ACCURACY VALIDATION OF THE TARGET/SENSOR MODEL

This section compares the accuracy of the numerical model against calculations from an exact closed-form solution for a simple target. These comparisons will be used to validate that the analytic rendering algorithm described in Section 3 performs as expected and to indicate the degree of accuracy that can be obtained in using it to compute CRLBs. For an arbitrary target, a closed-form expression of the mean FPA measurement is quite difficult to find. However, for a simple target such as a flat rectangular plate, the calculations are tractable. Two rotation aspects are considered: in-plane orientation about the LOS and out-of-plane orientation around an axis perpendicular to the LOS. For both cases, we assume that the nominal orientation of the plate is such that it is perpendicular to the LOS and that the long edges of the rectangle are parallel to columns of the FPA array. The plate's surface is assumed to have a Lambertian BRDF and to be illuminated uniformly from all directions, i.e., ambientlike lighting conditions. The closed-form expression for the PSF is described by Eqs. (B4) and (B5) below, which result in a pyramidlike profile with square support. Note that since a function with finite support or discontinuous derivatives has infinite bandwidth, the sample spacing of the convolved image grid G_conv img must approach zero to satisfy the Nyquist criterion. However, using a grid with cells ≤1/4 the width of the PSF is found to produce image renderings that are not visibly aliased and that exhibit smooth and continuous changes in pixel values with respect to changes in target position and orientation.

A. Out-of-Plane Orientation

Under the described imaging geometry, the effect of an out-of-plane rotation is to cause a foreshortening of the projection of the plate onto the FPA. The angle of rotation is considered to be about the y axis, which corresponds to the vertical direction of the FPA and is perpendicular to the LOS. To simplify the math, we set the dimensions of the plate such that it is narrow along the x axis and extends greatly beyond the FPA's FOV along the y axis. As a result, the FPA image will not vary as a function of y, reducing the derivation to a one-dimensional (1-D) calculation. Under the Lambertian BRDF assumption, the photon flux seen by the FPA from any portion of the plate remains constant as a function of orientation. The resulting source intensity distribution in the plane of the FPA and its derivatives are derived in Appendix B. Except for a proportionality constant, expressions (B9)–(B25) completely specify the Fisher information matrix of relation (15) in closed form.

We now compare the CRLB on estimating the plate's out-of-plane orientation as calculated numerically and by the exact closed-form expression. The imaging geometry is as described above, and details of the target/sensor system are given in Table 2.

Figure 3 shows the resulting FPA image and the derivative of the FPA image for the plate at θ_y = 20°, corresponding to rotation about the y axis by 20° from its nominal position of being perpendicular to the LOS. The dotted–dashed lines in Fig. 3(b) encircle the slice of elements used for the plots of Fig. 4. The values of the FPA measurements, their derivatives, and the single-pixel Fisher information

    [∂g(x, y|θ_y)/∂θ_y]² / [g(x, y|θ_y) + σ_{x,y}²]    (17)

are plotted in Figs. 4(a), 4(b), and 4(c), respectively, for the FPA elements along this horizontal slice. The total Fisher information is found by summing the term in relation (17) over all FPA elements. As derived in Subsection 2.A, the terms g(x, y|θ_y) and σ_{x,y}² correspond to the variances of the signal-dependent photon noise and background Gaussian noise sources (e.g., CCD-electronics-related read noise), respectively. As described in Table 2, the background Gaussian noise is σ = 40 e⁻ rms, and g(x, y|θ_y), plotted in Fig. 3(a), is the average number of photoelectrons at each pixel (x, y), typically on the order of 5 × 10⁴ photoelectrons for pixels that lie on the target, corresponding to a photon noise level of approximately 224 photoelectrons.

Fig. 3. (a) FPA image of the flat rectangular plate at a 20° out-of-plane rotation from the nominal position (face perpendicular to the LOS), (b) derivative of the image measurement with respect to out-of-plane rotational motion dθ_y about the vertical axis, (c) relative magnitude of the Fisher information at each pixel.

Fig. 4. Plots of the pixel values corresponding to the image slices indicated in Fig. 3. Calculations performed by using the exact closed-form expression and by using the numerical model are plotted with solid curves and a series of plus signs, respectively. Except for (c), the difference in the results is too small to be visible. The dashed vertical lines indicate the position of the edge of the target.

Table 2. Target/Sensor Parameters for Comparing the Numerical and Closed-Form Expression Results for Imaging of a Flat Plate

System Parameter | Value
Aperture diameter | 1.57 m
FPA IFOV | 0.414 μrad
FPA size | 32 × 32 pixels
PSF width | 0.21, 0.41, 0.82, 1.7, 2.5 μrad
PSF profile | Triangle function
Target range | 600 km
Target width W_R | 3.5 m
Directional illumination | 0 W m⁻² λ⁻¹ (λ in μm)
Ambient lighting | 500 W m⁻² λ⁻¹ (λ in μm)
Exposure time | 4 ms
Typical signal | 24.7 × 10⁶ photoelectrons total, 49.4 × 10³ per pixel on target
BRDF | Lambertian
Wavelength | 0.65 μm
Bandwidth | 0.2 μm
Analog-to-digital gain | 1
Background noise | 40 e⁻ rms
Throughput efficiency | 0.05

The agreement between the numerical-approximation and closed-form expression results is quite good. Using the exact expression gives a CRLB equal to 0.0152°. The numerical approximation gives 0.0154°, indicating a discrepancy of 1.3%. Similar tests were done for a range of orientation angles, PSF widths, surface-element densities, and sampling ratios between the grids G_conv img and G_FPA used for the convolved and the FPA images. The results indicated that to obtain numerical results that are within 10% of the closed-form solutions, the following criteria on the various sampling rates associated with the target surface-element density and the different calculation grids should be met:

1. The density of surface elements representing the target's surface should be such that the width of the main PSF lobe as projected onto the target subtends at least four surface elements.

2. The density of surface elements should satisfy a similar criterion for the projected height/width of a single FPA element, i.e., the IFOV height and width should each subtend at least four surface elements.

3. The convolved image grid G_conv img should have four or more samples across the width of the PSF's main lobe and at least two samples across each FPA element.

4. The width of the PSF needs to be at least twice the width of an FPA element.

The first two criteria correspond directly to the intuitively obvious need for the target to be well sampled relative to the resolution of the imaging system in order to prevent aliasing effects. Similarly, the third criterion requires that the imaging model adequately sample the structure of the PSF. The last criterion is surprising and may indicate a limitation of the model. It may be that this last criterion can be relaxed if the first three are strengthened. Further investigation is warranted in understanding this issue.
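The four criteria can be collected into a simple checker; parameter names and the example values are illustrative, not the paper's:

```python
def check_sampling(elem_spacing, psf_width_on_target, ifov_on_target,
                   conv_grid_spacing, psf_width, fpa_pitch):
    """Check the four empirical sampling criteria from Subsection 4.A.

    Target-side lengths share one unit; sensor-side angular quantities
    share another.  Parameter names are illustrative assumptions.
    """
    return {
        # 1: main PSF lobe projected onto the target spans >= 4 elements
        "psf_spans_4_elements": psf_width_on_target >= 4 * elem_spacing,
        # 2: projected IFOV spans >= 4 elements
        "ifov_spans_4_elements": ifov_on_target >= 4 * elem_spacing,
        # 3: convolved grid samples both PSF lobe and FPA pixel
        "conv_grid_samples_psf_and_pixel":
            conv_grid_spacing <= psf_width / 4
            and conv_grid_spacing <= fpa_pitch / 2,
        # 4: PSF at least twice the FPA pixel width
        "psf_at_least_2_pixels": psf_width >= 2 * fpa_pitch,
    }

report = check_sampling(elem_spacing=0.05, psf_width_on_target=0.25,
                        ifov_on_target=0.25, conv_grid_spacing=0.1,
                        psf_width=0.9, fpa_pitch=0.414)
print(report)
```

Such a check is cheap to run before committing to an expensive CRLB computation.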

For the 58 test cases that met these criteria, the mean accuracy of the CRLB calculation was 1.24%, with a worst-case discrepancy of 6.4%. It was seen that improving the target and PSF sampling rates (i.e., by increasing the target surface-element density and using a finer grid spacing for the convolved image grid G_conv img) tended to improve the accuracy. Conversely, tests in which the above criteria were not quite met generally exhibited less agreement with the closed-form results. Over the full range of cases considered, in which the inequalities listed above were never broken by more than a factor of 4, accuracy decreased gracefully as the criteria were exceeded and aliasing-type effects increased. Further exploration is needed to understand and characterize the detailed relationships between the various sampling rates and the accuracy of the numerical calculations. The main interest for this paper, though, is verification that sampling densities can be chosen for which the numerical CRLB calculations consistently agree with the exact closed-form expression within levels reasonable for a first-order analysis (e.g., within 10% or better).

It is somewhat surprising and of significant interest that the plot in Fig. 4(c) indicates that the single-pixel Fisher information peaks just beyond the outside edge of the plate (indicated by the dashed vertical lines). This is because for Poisson statistics the noise level is highest on the bright interior side of the target edge and drops toward the background noise level on the exterior side. As a result, expression (17) has a peak just beyond the edge of the target, at a point where the derivative is still high but the signal (and thus the noise) has decreased to a small value. Note that this phenomenon is directly related to the fact that the dominant noise for this case is signal dependent. If the measurement were of a low-reflectance target superimposed on a bright background and still photon-noise dominated, the same phenomenon would occur but with the Fisher information peak shifted to the interior side of the target edge. When the noise is predominantly signal independent and invariant with pixel location, so that the denominator of expression (17) is constant, the Fisher peak would lie, as expected, centered over the edge location. The width and the location of the Fisher peak are also influenced by the relative edge response of the system, which depends strongly on the ratio of PSF width to IFOV. It is apparent that the complex interplay among PSF, IFOV, background noise, and strength of the target signal has a significant effect on the Fisher information corresponding to an edge's location. Such phenomena will have a strong impact on the accuracy limits with which a system can be used to perform assessment tasks related to target feature positions. Using CRLBs to analyze and understand such influences on the accuracy limits for performing target assessment tasks provides a means for optimizing the design and operation of sensors and associated estimation algorithms.
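The peak-displacement effect is easy to reproduce in one dimension. The sketch below uses a logistic edge response and illustrative signal levels (not the paper's model) and locates the single-pixel Fisher information peak of expression (17):

```python
import numpy as np

# Per-pixel Fisher information for a blurred edge under Poisson noise:
# a 1-D illustration of why the peak sits just outside the bright side
# of the edge.  All numbers are illustrative, not the paper's.
x = np.linspace(-5, 5, 1001)
blur = 1.0                                   # edge-response width
signal = 5e4                                 # photoelectrons on target
sigma2 = 40.0 ** 2                           # read-noise variance

def g(edge):
    # smooth edge response: bright for x < edge, dark for x > edge
    return signal / (1 + np.exp((x - edge) / blur))

dg = (g(0.001) - g(-0.001)) / 0.002          # derivative w.r.t. edge position
info = dg ** 2 / (g(0.0) + sigma2)           # single-pixel Fisher information

peak = x[np.argmax(info)]
print(f"Fisher peak at x = {peak:.2f} (edge at x = 0, dark side is x > 0)")
```

Because the denominator is largest on the bright side, the maximum lands on the dark side of the edge; setting the denominator to a constant moves it back to x = 0.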

B. In-Plane Orientation

This subsection validates the numerical approach to CRLB calculation for estimating the in-plane orientation of a target (rotation about the LOS). Again, the validation is accomplished by comparison of the numerical results against exact closed-form expressions corresponding to a flat-plate target. These expressions are derived in Appendix B. The comparisons were repeated for a series of PSF widths and for several sampling-density combinations in the numerical model. Typical results are displayed in Figs. 5 and 6. For this particular case, the exact CRLB was 0.0046°, and the numerical result was in error by 0.5%. Fifty-eight cases were tested that satisfied the criteria outlined in Subsection 4.A. The maximum discrepancy was 3%.

It is interesting to note that the CRLB for determining in-plane orientation is much lower than the out-of-plane CRLB calculated in Subsection 4.A. Comparing Figs. 3 and 5, we see that the derivatives of the FPA measurements near the plate's edge are higher for in-plane rotation than for out-of-plane rotation. As a result, the

Fisher information is stronger and the bound is lower. The magnitude of the information is also seen to increase with distance from the center of rotation, since the change in position induced by a differential change in attitude is proportional to this length. In Section 6, the trend for bounds on estimating in-plane orientation to be generally lower (allowing better accuracy) than those on estimating out-of-plane orientation is further demonstrated in an analysis example involving estimating the pose of the Hubble Space Telescope (HST).

The results just described correspond to a plate that is oriented such that its normal is close to parallel with the LOS. Similar comparisons were performed for a plate that was nearly on edge with respect to the LOS. The closed-form expressions are not presented here, but the derivation is analogous to that of Appendix B. Again, the numerical and exact results agreed well, with the largest error over the 52 cases considered being 8% and with a mean of approximately 4%.

C. Sensitivity of Difference Approximations to Perturbation Size

As discussed in Section 3 and Appendix A, there is potential for the calculations to be sensitive to the size of the perturbation used in the finite-difference approximations of the derivatives required by relation (15). If the perturbation to the system state is too large, the average FPA

Fig. 5. (a) FPA image of the flat rectangular plate at its nominal position (face perpendicular to the LOS), (b) derivative of the image measurement with respect to in-plane rotational motion dθ_z about the LOS, (c) relative magnitude of the Fisher information at each pixel.

Fig. 6. Plots of the pixel values corresponding to the image slices indicated in Fig. 5. Calculations performed by using the exact closed-form expression and by using the numerical model are plotted with solid curves and a series of plus signs, respectively. As seen from the plots, the results agree almost perfectly.


measurement g(x|ξ + δξ) will not vary linearly in the perturbation δξ. If it is too small, the numerical precision of the computer will become an issue. For the computations to be robust, it is desirable that there be a large range of magnitudes of |δξ| for which the derivative approximations are accurate and consistent.
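The desired behavior, a wide flat region of step sizes over which central differences are accurate, can be illustrated with a smooth scalar stand-in for the rendering model (purely illustrative; any quantization in the model would corrupt the small-step end of this sweep):

```python
import numpy as np

# Central differences of a smooth (analytic) function stay accurate over
# many orders of magnitude of step size h; only at extreme h do
# truncation error (large h) or floating-point roundoff (tiny h) appear.
def g(theta):
    return np.sin(theta) * 5e4          # smooth stand-in for g(x | xi)

true_deriv = np.cos(0.3) * 5e4
for h in [1e-12, 1e-9, 1e-6, 1e-3, 1e-1]:
    fd = (g(0.3 + h) - g(0.3 - h)) / (2 * h)
    rel_err = abs(fd - true_deriv) / true_deriv
    print(f"h = {h:.0e}   relative error = {rel_err:.1e}")
```

A model with edge-location quantization instead produces derivative estimates that jump erratically once h drops below the quantization scale, which is the failure mode investigated below.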

We tested for these sensitivity issues by calculating the CRLB for 13 different target/sensor parameters over a large range of perturbation step sizes. The renderings were performed by using both the method described in Section 3 and a variant mode that consists of the following steps:

1. A high-resolution pristine rendering of the target is generated on a grid that has eight cells across each FPA element. This intermediate rendering does not yet include any blurring effects. As in the procedure described in Section 3, the target is modeled as a set of discrete surface elements. Lighting and obscuration calculations are made for the particular viewing geometry, and the results are multiplied by a system throughput factor to give the photon flux incident on the focal plane from each surface element. Each individual flux rate is added to the pristine grid at the cell corresponding to the projection of the surface element's center onto the focal plane. A final multiplicative factor adjusts for quantum efficiency and exposure time. Although the details differ, this rendering method is similar to the one used in TASAT in the sense that light reflected/emitted from facets of the target's surface is treated such that it emanates from the nearest point on a projected grid, i.e., the radiance is spatially quantized.

2. The system PSF distribution is calculated on a similar grid.

3. A convolved image corresponding to the photon flux distribution incident on the FPA array is produced by discrete convolution of the pristine image with the PSF.

4. Downsampling the convolved image and adding any background sources gives the noiseless FPA measurement.

We will refer to the mode just described as the grid-based rendering mode and to the mode described in Section 3 as the analytic rendering mode.

A model of the HST is used for the target, and the sensor is modeled after the Maui Space Surveillance System's 1.6-m telescope and GEMINI imaging system. Details are given in Table 3. Note that to avoid exceeding the memory limits of the computer used for these computations, the modeled FPA elements are twice as wide as the elements of the true GEMINI system. However, since resolution was primarily limited by the blur size of the PSF, this discrepancy is probably minor.

Using both the analytic and grid-based rendering modes, we calculated CRLBs on the HST's orientation as a function of the size of the difference used in the derivative approximations. The results are plotted in Fig. 7. The analytic rendering approach is seen to be consistent for step sizes ranging from Δθ = 10⁻¹³° to 1.0°, which is 14 orders of magnitude! Selecting a finite-difference step size from the middle of this range gives a very good margin of comfort that the calculations will be accurate for a

large variety of imaging conditions. At the largest step size considered (Δθ = 10°), features in the image shift by several pixels. This is undoubtedly beyond the linear region, causing the magnitudes of the derivatives to be overestimated. As expected, the CRLB appears to err on the low side. For step sizes smaller than 10⁻¹³°, numerical errors become significant.

Results using the grid-based rendering method are in good agreement with the analytic approach for Δθ = 0.1° to 1.0°, jump around a bit from 10⁻⁵° to 0.01°, and exhibit a second region of stability from 10⁻¹²° to 10⁻⁵°. We suspect that for Δθ = 0.1° to 1.0°, the step size is large enough that spatial quantization is not limiting the accuracy, but that from Δθ = 10⁻⁵° to 0.01°, the quantization effect is causing derivatives to be overestimated and as a result is artificially reducing the CRLB. At step

Fig. 7. Dependency of the target orientation CRLB calculation on the size of the difference used in the derivative approximations and on the method of rendering (analytic or grid based).

Table 3. Target/Sensor Parameters Used in Testing the Sensitivity of the CRLB Calculations to the Magnitude of the Perturbations Used in Approximating Derivatives by Finite Differences

System Parameter | Value
Aperture diameter | 1.57 m
FPA IFOV | 0.414 μrad
FPA size | 64 × 64 pixels
PSF width | 2.484 μrad
PSF profile | Hermite–Gaussian expansion
Target range | 900 km
Target length | 12.91 m
Solar illumination | 2 × 10³ W m⁻² λ⁻¹ (λ in μm)
Ambient lighting (earthshine) | 500 W m⁻² λ⁻¹ (λ in μm)
Exposure time | 4 ms
Typical signal | 5.0 × 10⁷ photoelectrons total, 21 × 10³ per pixel on target
BRDF | Lambertian
Wavelength | 0.65 μm
Bandwidth | 0.2 μm
Analog-to-digital gain | 1
Background noise | 160 e⁻ rms
Throughput efficiency | 0.05


sizes of Δθ = 10⁻¹²° to 10⁻⁵°, the surface elements do not usually move enough to traverse any of the pristine grid boundaries. However, some effect must cause slight changes in the images; otherwise, the derivatives would all be zero, causing the calculated CRLB to be infinite. We suspect that the changes are due to BRDF-induced variations in reflected light. If this is the case, it indicates that this effect may contribute as strongly to the Fisher information as does feature location.

CRLBs were also calculated on estimating offsets to the FOV and on estimating the PSF width. The results are plotted in Figs. 8 and 9, respectively. Units of the ordinate axis are in fractions of an FPA element and in fractions of the nominal PSF width. For both bounds, the analytic rendering approach again gives consistent calculations over an enormous range of finite-difference step sizes. For the FPA offset, the grid-based results agree with the analytic approach at only Δθ = 0.01° and 0.1°, and no other range of stability is exhibited. Selecting a value from this range would not leave much margin for error if the grid-based approach to image rendering were to be used for CRLB calculation. Although it never agrees with the analytic approach, the grid-based method produces consistent answers for the PSF-width CRLB over a large range. This is because finite differences in the image due to changes in the PSF shape are not affected by locational quantization of target features. Discrepancies between the two curves in Fig. 9 are most likely attributable to the fact that the analytic rendering model calculations had been performed by using a triangularly shaped PSF profile while the grid-based renderings were produced by using a Gaussian PSF profile. Since the PSF profiles were different, moderate differences in the accuracy bound on estimating PSF width are to be expected.

A similar analysis is performed in Appendix A, using TASAT to produce the renderings required for the finite-difference calculations. Problems related to spatial quantization caused this approach to be even less robust to the choice of step size than the grid-based rendering approach just described.

Fig. 8. Dependency of the FOV offset CRLB calculation on the size of the difference used in the derivative approximations and on the method of rendering.

5. NUISANCE PARAMETERS

Typically, in addition to the primary target properties to be estimated, there are other unknown quantities that affect the measurement but are otherwise not of interest. Since these quantities serve only to complicate the problem, they are commonly referred to as nuisance parameters.19 The additional uncertainty related to the nuisance parameters tends to increase the CRLB for the quantities that are actually of interest. This occurs through cross correlations in the Fisher information. These correlations are expressed in the off-diagonal elements of the Fisher matrix and increase the CRLB when they differ from zero.19 As pointed out in Section 1, inclusion of all relevant nuisance parameters can be crucial to obtaining calculated bounds that are connected to real-world problems.
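The mechanism described above can be sketched numerically. The following is a minimal illustration (with a made-up 2 × 2 Fisher matrix, not values from this paper) of how a nonzero off-diagonal term raises the bound on the parameter of interest from 1/F₀₀ to the (0,0) element of the full inverse:

```python
# Hypothetical 2x2 Fisher information matrix: entry 0 is the orientation
# angle of interest, entry 1 is a nuisance parameter (e.g., a FOV offset).
# The numbers are purely illustrative.
F00, F01, F11 = 50.0, 8.0, 20.0

# CRLB when the nuisance parameter is known exactly: only the diagonal
# entry for the parameter of interest matters.
crlb_known = 1.0 / F00

# CRLB when it must be estimated jointly: the (0,0) element of the
# inverse of the full 2x2 Fisher matrix.
det = F00 * F11 - F01 * F01
crlb_joint = F11 / det

# Since F11/det = 1/(F00 - F01**2/F11), a nonzero off-diagonal term
# always inflates the bound relative to 1/F00.
print(crlb_known, crlb_joint)
```

The closed-form ratio makes the point directly: the joint bound equals the known-nuisance bound only when the cross correlation F₀₁ vanishes.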

For assessment tasks such as target orientation, the list of possible nuisance parameters includes any unknown quantity that might affect the image. Examples of possible unknowns are details about the target's 3-D structure, reflective properties of its surface, and the strength and direction of the illumination sources. For purposes of illustration, we assume that, other than orientation, the only quantities that are not well characterized are uncertainties in the PSF size and shape and in the relative position of the FOV with respect to the target's center (i.e., offsets to the LOS). We explored the effect of these nuisance parameters on target orientation for the imaging scenario detailed in Table 3 for a range of PSF widths and target orientations. Variations in the PSF shape were parameterized by using a sum of the first four terms of the 2-D Hermite–Gaussian sequence.37 The impact of uncertainties in the PSF shape and size was surprisingly small. For all examples tested, their inclusion never caused the CRLB to increase by more than 20%. Variations in the offsets to the FOV had a stronger effect and in some cases increased the CRLB by up to a factor of 2. The influence of nuisance parameters related to uncertainties in target/sensor characteristics is explored in more detail by Gerwe and Idell.27

It is also important to note that the orientation of the object is calculated in reference to the sensor's LOS. Although in some cases the system's pointing angles relative to the earth and the orientation of the image about the LOS may be estimated from the image, more generally they will be provided by the mount control system. Any errors in these parameters will cause the estimate of the target's orientation to be biased. In this paper, the CRLBs on orientation estimation are always in reference to the true LOS.

Fig. 9. Dependency of the PSF width CRLB calculation on the size of the difference used in the derivative approximations and on the method of rendering.

808 J. Opt. Soc. Am. A / Vol. 20, No. 5 / May 2003, D. R. Gerwe and P. S. Idell

6. MULTIPERSPECTIVE DATA FUSION

This section demonstrates the utility of the analytic rendering approach developed and validated in the previous sections for performing CRLB calculations. As an illustrative example, we analyze the influence of viewing perspective on estimation accuracy limits and show the potential for reducing the single-sensor CRLB by combining image data taken at multiple perspectives. The specific assessment task considered is the estimation of the HST's pose using three geographically separated sensing systems with characteristics similar to those of the 1.6-m telescope and the GEMINI sensor at the U.S. Air Force Research Laboratory's Maui Space Surveillance Site.

As portrayed in Fig. 10, the viewing geometry is such that the perspectives of sensors A, B, and C are along orthogonal LOS vectors described by the x, y, and z axes, respectively. A second set of orthogonal body-centered axes is defined in reference to the HST, with Y directed through the antennas, P directed along the struts connecting the solar panels to the satellite body, and R directed along the optic axis of its telescope. These will be referred to as the yaw, pitch, and roll axes, respectively.

Fig. 10. Imaging geometry of sensors A, B, and C with respect to the target and diagram of the yaw, pitch, and roll body-centered orientation axes {δφ_Y, δφ_P, δφ_R}. A series of orientations is considered as the HST is rotated around sensor B's LOS with its pitch axis coaligned with the Y axis.

Fig. 11. Mean image measurements obtained by sensors A, B, and C at a series of orientations of the HST as it is rotated about sensor B's LOS or, equivalently, the HST's P axis (pitch). To assist interpretation of the imaging geometry, a corresponding set of depth maps is also displayed. The x-, y-, and z-axis directions (see Fig. 10) are overlaid for convenience. The three labels P_Y, P_P, and P_R denote particular viewing perspectives referred to later in Section 6.

The calculations are performed for a sequence of orientations (θ_y) as the HST is rotated about sensor B's LOS (i.e., the y axis or, equivalently, P) (see Fig. 11). The reference pose (θ_y = 0°) is defined such that the HST has the optic axis of its telescope directed along the z direction and its antennas aligned with the x axis. This is equivalent to setting {Y = x, P = y, R = z}. CRLBs are calculated on rotational deviations in yaw, pitch, and roll, i.e., {δφ_Y, δφ_P, δφ_R}, about the body-centered axis vectors. This is equivalent to using Euler angles but with the {0, 0, 0} position always equal to the true orientation.

Fig. 12. CRLB on estimating the yaw aspect (δφ_Y) of the HST's orientation with the use of image measurements from sensors A, B, and C, both individually and in combination. The bound is calculated for a sequence of orientations as the HST is rotated 360° about sensor B's LOS or, equivalently, the HST's P axis. As expected, the bounds vary as a function of the target/sensor geometry. Combining measurements from all three sensors always reduces the lower bound below that obtained for any single sensor.

Fig. 13. CRLB on estimating the pitch aspect (δφ_P) of the HST's orientation.

Calculating the CRLBs on angles referenced to another base orientation (e.g., on {θ_x, θ_y, θ_z}) can make the results difficult to interpret because of the nonuniqueness and noncommutativity of the order in which rotations are applied.

The details for all three sensors are identical to those described in Table 3. Vertical and horizontal offsets to the FOV of the FPA, and the height and the width of the PSF, are included as nuisance parameters. The results are plotted in Figs. 12–14. Immediate inspection reveals that, as expected, the bounds on the yaw, pitch, and roll aspects of the pose vary as a function of viewing perspective. At orientations near θ_y = −90° and 90°, sensor C has the best leverage on δφ_Y, and near θ_y = 0° and 180°, sensor A has the upper hand. More extreme variations are seen in Figs. 13 and 14 for δφ_P and δφ_R, respectively.

The bounds on orientation estimation with sensor B are roughly identical for the entire sequence of poses considered. This is because the viewing perspective is the same for all values of θ_y, and as a result the images are almost identical within a rotation about the LOS. The only differences among them are slight details involving the relative spatial relations between target features and FPA element boundaries. However, the small fluctuations in the calculated values are probably just as likely to be attributable to accuracy limitations of the target/sensor model as to the influence of the location of the FPA boundaries on the CRLB.

The plots also clearly demonstrate that the CRLB for the combined measurement is always superior to that of any sensor individually.27 The potential benefit is obvious. Multiple perspectives reduce the occurrence of "blind spots" at which the combined measurement is insensitive to an aspect of the satellite's pose.
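Why the fused bound can never be worse follows from the additivity of Fisher information for statistically independent measurements. A minimal sketch, with made-up 2 × 2 Fisher matrices standing in for sensors A, B, and C (not values computed in this paper):

```python
def crlb_2x2(F):
    """Per-parameter CRLBs: diagonal of the inverse of a 2x2 Fisher matrix."""
    (a, b), (c, d) = F
    det = a * d - b * c
    return (d / det, a / det)

def add(F, G):
    """Element-wise sum of two 2x2 matrices."""
    return tuple(tuple(f + g for f, g in zip(rf, rg)) for rf, rg in zip(F, G))

# Illustrative Fisher matrices for two orientation angles as seen by
# three hypothetical sensors; each sensor is strong on a different axis.
F_A = ((40.0, 5.0), (5.0, 2.0))
F_B = ((2.0, 1.0), (1.0, 30.0))
F_C = ((10.0, 0.0), (0.0, 10.0))

# For independent measurements the Fisher information adds, so the
# fused CRLB comes from inverting F_A + F_B + F_C.
fused = crlb_2x2(add(add(F_A, F_B), F_C))
for F in (F_A, F_B, F_C):
    single = crlb_2x2(F)
    # The fused bound is never worse than any single-sensor bound.
    assert all(f <= s for f, s in zip(fused, single))
```

Because the summed Fisher matrix dominates each individual matrix in the positive-semidefinite ordering, its inverse is dominated the other way, which is exactly the "no blind spots" behavior seen in Figs. 12–14.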

Further examination of the results provides some insight as to which perspectives and target features provide the best orientation-related information. Let us start with δφ_Y. As labeled in Fig. 11, we will refer to the perspectives of sensors A, B, and C at θ_y = 180° as the P_Y, P_P, and P_R perspectives, respectively. We also differentiate between two regimes of foreshortening effects that occur when out-of-plane rotation alters the appearance of a broad surface as projected onto the sensor's FPA. Both are instances of the same effect, but since the projected area of a surface is roughly A cos θ, where θ is the angle between the LOS direction n̂_LOS and the surface normal n̂_surf, the rate at which the projected area changes depends on that angle. When n̂_surf points in the general direction of the sensor, out-of-plane rotation will cause a weak expansion or contraction of the projected area. This projection geometry will be referred to as foreshortening of type I. When n̂_surf is generally perpendicular to the LOS, out-of-plane rotation will cause a similar but much stronger effect on the surface's silhouette. This regime of foreshortening will be called type II. In either case, the magnitude of the change in appearance will be proportional to the width of the surface along the direction perpendicular to both n̂_surf and the axis of the rotational aspect under consideration.
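The two regimes fall directly out of differentiating the projected-area relation. A small numerical sketch (unit area; the angle values are arbitrary illustrations):

```python
import math

A = 1.0  # surface area (arbitrary units)

def projected_area(theta_deg):
    # Projected area ~ A cos(theta), where theta is the angle between
    # the surface normal and the line of sight.
    return A * math.cos(math.radians(theta_deg))

def sensitivity(theta_deg):
    # |d(projected area)/d(theta)| = A sin(theta): the leverage that an
    # out-of-plane rotation has on the projected appearance.
    return A * math.sin(math.radians(theta_deg))

# Type I: normal nearly along the LOS (theta small) -> weak change.
# Type II: normal nearly perpendicular to the LOS -> strong change.
print(sensitivity(5.0), sensitivity(85.0))
```

The sin θ factor is small near θ = 0° (type I) and near its maximum as θ approaches 90° (type II), matching the weak/strong distinction drawn above.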

In the P_Y perspective, changes in yaw correspond to in-plane rotations of the satellite and result in the best (lowest) CRLB, σ_δφY = 0.0038°. Figure 15 displays a map of the relative strength of the Fisher information gained from each FPA element. Examination indicates that for this perspective, the in-plane rotation of the edges of the solar panels and the silhouette of the satellite body are very informative about yaw. Estimation of δφ_Y is more difficult from perspective P_R, which results in σ_δφY = 0.0108°. In this case, most of the Fisher information is gained from a combination of translational movement and type II foreshortening of the solar panels, of the sides of the main body, and of the telescope shutter. From perspective P_P, weak yaw information is gained from type I foreshortening of the silhouette of the body corresponding to out-of-plane rotational orientation. A combination of changes in the shutter's projected appearance also contributes to the Fisher information. This perspective is relatively uninformative compared with P_Y and P_R, and it results in the worst CRLB, σ_δφY = 0.0392°.

P_P and P_R give roughly equal leverage for estimating pitch, with CRLBs of σ_δφP = 0.0042° and 0.0046°, respectively. Examining Fig. 15, we see that for the P_P perspective, in-plane orientation aspects of the silhouette of the satellite body provide strong information about pitch. The strongest contributions in the P_R perspective are type II foreshortening of the solar panels and of the telescope shutter. Differences in the tilt of each solar panel and of the shutter cause the CRLB to actually be slightly lower at an orientation approximately 5° off P_R. Perspective P_Y is the least informative for estimating pitch, with a CRLB of σ_δφP = 0.0133°. In this case, the primary hints at pitch are type I foreshortening of the solar panels and of the sides of the body, both of which are relatively weak.

Fig. 14. CRLB on estimating the roll aspect (δφ_R) of the HST's orientation.

Fig. 15. These images portray the relative strength of the Fisher information provided by each pixel of an FPA regarding the yaw, pitch, and roll orientation aspects of the HST. The top, middle, and bottom rows correspond to the viewing perspectives seen by sensors A, B, and C, respectively, with the HST positioned in its θ_y = 180° pose. These perspectives are referred to as P_Y, P_P, and P_R and are identical to those seen in the bottom row of Fig. 11.

As also seen with δφ_Y and δφ_P, the optimum orientation for estimating δφ_R occurs when this orientational aspect corresponds to in-plane rotation of the target's pose, i.e., when the rotational axis is parallel with the LOS. It appears that the Fisher information regarding in-plane orientation tends to be strong relative to that of the other orientational aspects and as such should be easier to estimate. Of course, this trend will be highly subject to other factors such as target shape and lighting properties. In this particular case, the features that contribute the most information are the projected positions of the solar panels and of the antenna on the FPA array. The second-best perspective for estimating roll is P_P. Here the Fisher information is strongly dominated by type II foreshortening of the shutter. Without this feature, the CRLB would be much higher (worse accuracy), since the satellite body and solar panels provide almost no clues to the roll aspect. Least favored is perspective P_Y, for which the main information source is the weaker type I foreshortening observed on the solar panels and on the telescope shutter.

Most of the insights demonstrated by this illustrative example agree with what our intuitive feel would predict to be the best perspectives for determining different orientational aspects of the target. However, the perspectives discussed were chosen to be along lines of symmetry of the target, which tends to simplify interpretation of the results. More realistic examples would include additional complexities such as directional lighting effects, a higher level of detail in the target, and more complicated geometries.

One obvious utility of this type of analysis is to help plan the optimal viewing geometry for estimating parameters of interest. As a hypothetical example, one might be able to predict that the most advantageous perspective for estimating the cross-track orientation of a low-earth-orbit satellite with a ground-based telescope would be achieved shortly after it rises past the horizon. Furthermore, optimal lighting might be found to occur when the sun is as close as possible to being directly behind the sensor. This would indicate that the images are best taken at morning terminator conditions (with the target in the west). Cross-track orientation might turn out to be most easily estimated at the satellite's culmination. If it could be assumed that the satellite was not maneuvering, or if a prior was used to describe constraints on the satellite's maneuvering capabilities, data fusion benefits gained from joint estimation of all orientation aspects and from the use of a full sequence of pictures taken throughout the pass would probably greatly reduce the CRLB.

We must comment that the calculated orientation accuracy bounds seem small compared with the resolution of the simulated imagery. However, implicit in the calculations of these bounds is the assumption that the 3-D structure and material properties of the target and the sensor characteristics are known exactly. As described in Section 5, the only nuisance parameters included in the simulations of this paper are target position and PSF size and shape. As shown by Gerwe and Idell,27 including a more realistic and comprehensive set of target- and sensor-related uncertainties may significantly increase the CRLB.

7. SUMMARY

Fundamental limits on the accuracy with which target properties can be estimated from imaging methods were explored through an approach based on the CRLB. The paper focused on orientation estimation using conventional imaging of incoherent light sources. However, the methodology is equally applicable to other assessment tasks and sensing modalities. A novel rendering technique was developed based on representation of the 3-D structure of a target's surface by populating it with a large number of surface elements. This representation circumvented difficulties that were encountered in approximating derivatives with finite differences. By enabling a continuum in the relations between the location of one surface's edge with respect to another, varying the target's orientation caused smooth and continuous changes in the rendered image. Differential changes with which one surface obscures another were not, however, accounted for. Further exploration is needed to determine whether this effect is small enough that it is reasonable to ignore its contribution to the total Fisher information.36

The modeling approach was validated by comparing the numerical results against numbers obtained from an exact closed-form expression derived for a simple rectangular plate. The results indicated that the CRLB calculations for orientation estimation could be expected to be accurate to within 6%. We also explored the degree to which inclusion of a few nuisance parameters increased the CRLB. Uncertainties in the structure of the PSF were found to have a surprisingly small effect, but, under some conditions, uncertainties regarding the target's location relative to the FOV increased the CRLB by a factor of 2.

An illustrative example was presented, demonstrating the utility of this approach for understanding the influence of various system parameters on estimation accuracy limits. Specifically, we explored how the informational content of an image measurement regarding a target's 3-D orientation could be optimized by choice of viewing perspective. Maps indicating the relative strength of Fisher information as a function of location in the image provided insights in understanding how the location and orientation of different target features relative to viewing perspective influenced their contribution to the total information. The potential benefit that could be derived through the combined information gained from images taken at multiple viewpoint perspectives was also shown.

APPENDIX A: EFFECT OF EDGE LOCATION QUANTIZATION ON FINITE-DIFFERENCE APPROXIMATIONS

The aim of this appendix is to demonstrate the difficulties encountered when attempting to make finite-difference derivative approximations based on rendering codes that have the effect of quantizing the position of the edges of target substructures. In this context, the term edge is used to denote either the true edge of a flat facet or the apparent edge of a curved surface, such as a sphere, which appears in its projection onto a 2-D plane.

TASAT is a good example of a prominently used code with the quantization properties just described. TASAT is commonly used to simulate tracking and imaging of satellites and other high-altitude bodies. TASAT was used in the first attempt at performing the CRLB calculations described in this paper (until the associated problems in accurately calculating derivatives were discovered). This code models the target as a construct of abstract geometric primitives (rectangle, cylinder, sphere, cone, etc.), each with a set of assigned material properties. Rendering is effectively performed through the following steps:34

1. Calculate a pristine radiance map on a fine grid that is representative of the radiance emitted from the target toward the sensor. This is performed by

(a) tracing rays toward the target through the corners of each grid cell,
(b) determining the location at which each ray first touches one of the target's primitives and noting the material type of the intersected surface,
(c) associating radiance values with each ray according to the BRDF of the materials and the light source direction(s),
(d) obtaining values at the center of each cell by averaging the radiances from the rays at each corner, and,
(e) as an optional antialiasing measure, if the values at the corners of a cell differ by more than 10%–30%, throwing additional rays from within that cell in order to more accurately sample the radiance variations.

2. Perform a discrete convolution of the pristine image with a PSF corresponding to the effects of blurring by the optics, jitter, etc. The result corresponds to the intensity pattern incident on the FPA.

3. Downsample from the fine grid to a grid matching the spacing of the FPA detector elements.
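The three steps above can be sketched in one dimension. This is a minimal stand-in, not the TASAT implementation: a bright half-plane with a vertical edge, a top-hat PSF, and an 8:1 fine grid, all illustrative choices. Note how the fine grid quantizes the effective edge position:

```python
import numpy as np

SUB = 8  # fine-grid cells per FPA element, matching the setup discussed here

def render(edge_pos, n_fpa=16, psf_width_pix=4):
    """Sketch of the three rendering steps for a bright half-plane whose
    vertical edge sits at `edge_pos` (in FPA-pixel units)."""
    n_fine = n_fpa * SUB
    x = (np.arange(n_fine) + 0.5) / SUB             # fine-grid centres, FPA units
    pristine = (x < edge_pos).astype(float)         # step 1: pristine radiance map
    psf = np.ones(psf_width_pix * SUB)              # top-hat PSF on the fine grid
    psf /= psf.sum()
    blurred = np.convolve(pristine, psf, mode="same")   # step 2: PSF convolution
    return blurred.reshape(n_fpa, SUB).mean(axis=1)     # step 3: downsample to FPA

# Moving the edge by less than one fine-grid cell leaves the rendering
# unchanged; a larger move finally registers.
img_a = render(8.00)
img_b = render(8.05)   # edge has not crossed a fine-grid cell centre
img_c = render(8.10)   # edge has crossed one
```

The insensitivity of `render` to sub-cell edge motion is exactly the spatial quantization that undermines the finite-difference derivatives analyzed below.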

As briefly pointed out in Section 3, the antialiasing steps 1(d) and 1(e) above help to reduce aliasing effects by a local adaptation of the grid resolution to the level of target detail and by a small degree of low-pass filtering provided through averaging of rays at the corners of each grid cell to get the value in the center. However, despite this measure, spatial location is still quantized to finite step sizes, and it is precisely this fact that causes difficulties in finite-difference derivative approximations.

Consider the following example. Derivatives of the noiseless FPA measurements with respect to the orientation angle θ_z are approximated by (1) rendering the target at slightly perturbed orientations about the nominal position, (2) taking the difference, and (3) dividing by the magnitude of the perturbation:

∂g(x|θ_z)/∂θ_z ≈ [g(x|θ_z + δθ_z/2) − g(x|θ_z − δθ_z/2)] / δθ_z.  (A1)
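For a smooth rendering model, the central difference of Eq. (A1) converges rapidly as the step shrinks. A quick sketch, with a smooth analytic function standing in for one noiseless FPA measurement g(x|θ_z):

```python
import math

def central_diff(f, x, dx):
    # Central-difference approximation of df/dx, as in Eq. (A1).
    return (f(x + dx / 2) - f(x - dx / 2)) / dx

# Stand-in for a smooth measurement model; sin is illustrative only.
f = math.sin
exact = math.cos(1.0)

# The truncation error of a central difference scales as dx**2,
# so each tenfold reduction in dx cuts the error ~100-fold.
errors = [abs(central_diff(f, 1.0, dx) - exact) for dx in (1e-1, 1e-2, 1e-3)]
print(errors)
```

This smooth-model behavior is the baseline against which the quantized-rendering failure below should be judged.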

As portrayed in Fig. 16, let us suppose that the gross effect of the slight rotation δθ_z on the upper right edge of the plate is for it to shift to the right by approximately 1 pixel as projected onto the FPA. Assume a system PSF with a top-hat-like shape and a width of approximately 4 FPA pixels. Let us see how rotations of the plate change the simulated FPA measurements when the fine-resolution grid used for the pristine and convolved images has eight subcells across each FPA element. A plot of the expected response along the horizontal line of FPA elements highlighted in Fig. 16 is shown in Fig. 17.

Fig. 16. Pristine image projection of the upper right corner of a flat plate as it is rotated by δθ, causing the edges to shift approximately 1 FPA pixel to the right. The fine lines correspond to the pristine grid, with the FPA element edges highlighted by the thicker grid lines. The illustration also contains an overlay of the PSF spot and a highlighted row of FPA elements.

Fig. 17. Change in the simulated intensity distribution along a row of FPA elements as the edge of a target structure in Fig. 16 shifts approximately 1 FPA element to the right. The numbers on the ordinate axis denote FPA pixel edges, and the small steps indicate edges of the convolved image grid elements.

As the plate is rotated smoothly from θ_z − δθ_z/2 to θ_z + δθ_z/2, the edge shifts to the right through approximately eight fine-grid elements. The values of the elements of the simulated convolved image near the moving edge undergo a corresponding sequence of eight step changes. However, depending on the exact amount of the total rotation δθ_z and the relative position of the edge on the grid, the number of step changes could range from 7 to 9. The accuracy of the finite-difference approximation will be only 1 part in 8, or approximately 12.5%. By virtue of causing edges to shift across more pixels, better accuracy can be obtained by choosing larger values of δθ_z. However, too large a value will also start to introduce effects from higher-order derivative terms and will invalidate the finite-difference approximation.

A much worse case can occur for target edges near the center of rotation, at which spatial shifts will be very small. Most edges will not shift enough to cross any fine-grid boundaries and thus will have zero contribution to the derivative approximations. This is unlikely to be a severe problem, since the actual effect of these edges on the CRLB should be small anyway. However, depending on the exact value of δθ_z and the relative position of the edge to the grid lines, there will be occasional occurrences for which an edge moves by a minuscule amount yet crosses a grid boundary. The resulting effect will be a grossly overestimated magnitude in the finite-difference approximation. It is clearly evident that an exceedingly high grid resolution will be needed to comfortably ensure that a value of δθ_z can be chosen that produces adequate accuracy for the calculations at all regions of the FPA and for all target features.
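Both failure modes, zero derivatives and grossly overestimated ones, can be reproduced with a toy model in which the edge position is snapped to a grid (the slope of 1 pixel per degree and the grid spacing are illustrative choices, not values from the paper):

```python
GRID = 1.0 / 8.0  # fine-grid cell size in FPA-pixel units

def quantized_edge(theta):
    # Edge position grows linearly with theta but is snapped to the
    # rendering grid, mimicking the spatial quantization described above.
    true_pos = 1.0 * theta  # assumed true slope: 1 pixel per degree
    return GRID * round(true_pos / GRID)

def central_diff(f, x, dx):
    return (f(x + dx / 2) - f(x - dx / 2)) / dx

# Large step: the edge crosses many grid cells, and the estimated
# slope is close to the true value of 1.
big = central_diff(quantized_edge, 0.30, 1.0)

# Tiny step away from a grid transition: no cell is crossed, so the
# estimated derivative collapses to zero.
zero = central_diff(quantized_edge, 0.30, 0.01)

# Tiny step straddling a grid transition: one full grid jump is divided
# by a minuscule perturbation, grossly overestimating the derivative.
huge = central_diff(quantized_edge, 0.3125, 0.001)
print(big, zero, huge)
```

The same trade-off appears in Fig. 18: too small a step produces spurious zeros or spikes, while too large a step admits higher-order derivative errors.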

With the use of TASAT, the CRLB on estimating the orientation of the HST about its LOS was calculated by using a range of finite-difference step sizes. The results provide a clear demonstration of the difficulties in accurately approximating derivatives when using the approach employed by TASAT to produce renderings. (Similar calculations were performed in Subsection 3.C to demonstrate the robustness of the analytic rendering method to the choice of finite-difference step size.) The renderings were performed with the antialiasing features in TASAT enabled. The intermediate pristine and convolved images were calculated on grids with eight points across each IFOV. The signal strength was higher, but otherwise the sensor characteristics and the target geometry were similar to those outlined in Table 3. Nuisance parameters were not included, and only the in-plane orientation aspect was considered. The results are plotted in Fig. 18. As the size of the difference used in the derivative approximations decreases below 0.1°, the quantization effect quickly causes severe overestimation of the derivatives. Above 1°, we expect higher-order derivatives to begin to significantly affect the approximations. This leaves a slim region between 0.1° and 1° where one might hope that the derivatives have been accurately calculated both near the center of rotation and near the edges of the FOV. Of course, this range is expected to vary for different imaging geometries and sensor characteristics. Obviously, ensuring that the calculations are accurate will be difficult and will require re-evaluation of an appropriate step size range each time the target, sensor, or imaging geometry characteristic is significantly altered.

APPENDIX B: CLOSED-FORM EXPRESSIONS FOR A FOCAL-PLANE-ARRAY IMAGE OF A FLAT-PLATE TARGET

This appendix derives closed-form expressions for the flux incident on the elements of an FPA from a flat-plate target and its derivative as a function of the plate's orientation. Two rotational aspects are considered: in-plane orientation about the LOS and out-of-plane orientation around an axis perpendicular to the LOS. For both cases, we assume that the nominal orientation of the plate is such that it is perpendicular to the LOS and that the long edges of the rectangle are parallel to columns of the FPA array. The plate's surface is treated as having a Lambertian BRDF and as being illuminated uniformly from all directions, i.e., ambientlike lighting conditions. Equations (B4) and (B5) below provide a closed-form expression for the PSF corresponding to a pyramidlike profile with square support.

Fig. 18. Dependency of the target orientation CRLB calculation on the size of the difference used in the derivative approximations when using TASAT to render the images.

1. Out-of-Plane Orientation

Under the described imaging geometry, the effect of an out-of-plane rotation is to cause a foreshortening of the projected width of the plate onto the FPA. The angle of rotation is considered to be about the y axis. To simplify the math, we set the dimensions of the plate such that it is narrow along the x axis and extends greatly beyond the FPA's FOV along the y axis. As a result, the FPA image will not vary as a function of y, reducing the derivation to a 1-D calculation. Under the Lambertian BRDF assumption, the radiant intensity (corresponding to incident photons per steradian) seen by the FPA from any portion of the plate remains constant as a function of orientation. The resulting source intensity distribution in the plane of the FPA can now be expressed by the equations that follow:

plate intensity(x, y|θ_y) ∝ rect[x / W_o(θ_y)],  (B1)

W_o(θ_y) = W_R cos θ_y,  (B2)

rect(x) = { 0,  |x| > 1/2
            1,  |x| ≤ 1/2 }.  (B3)

The quantity Wo(uy) is the projected width of the plate asseen on the FPA when rotated out of plane by uy . WR isthe true width of the plate.

The PSF is represented by a 2-D triangle function of width W_p in x and in y:

psf(x, y) = tri(x/W_p) tri(y/W_p),  (B4)

tri(x) = { 1 + x,  −1 ≤ x ≤ 0
           1 − x,  0 ≤ x ≤ 1
           0,      otherwise }.  (B5)
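The building blocks of Eqs. (B1)–(B5) translate directly into code. A small sketch (the function and parameter names are our own; the sample values in the test are arbitrary):

```python
import math

def rect(x):
    # Eq. (B3): unit rectangle function.
    return 1.0 if abs(x) <= 0.5 else 0.0

def tri(x):
    # Eq. (B5): unit triangle function, written compactly as max(0, 1 - |x|).
    return max(0.0, 1.0 - abs(x))

def psf(x, y, w_p):
    # Eq. (B4): separable pyramid-profile PSF of width w_p in x and y.
    return tri(x / w_p) * tri(y / w_p)

def plate_intensity(x, theta_y_deg, w_r):
    # Eqs. (B1)-(B2): projected plate profile, with the true width w_r
    # foreshortened by cos(theta_y) under out-of-plane rotation.
    w_o = w_r * math.cos(math.radians(theta_y_deg))
    return rect(x / w_o)
```

These scalar functions could be substituted into a numerical quadrature of relation (B6) to cross-check the closed-form results derived below.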

The integrated photon flux g at each FPA element (x, y) is the convolution of the projected source distribution and the PSF, integrated over the element area. If we denote A_{x,y} as the set of points within the region subtended by FPA element (x, y), then

g(x, y) ∝ ∫∫_{(ξ,η)∈A_{x,y}} dξ dη ∫∫ dξ′ dη′ psf(ξ − ξ′, η − η′) × plate intensity(ξ′, η′).  (B6)

If we set the unit distance equal to the detector element spacing, (ξ, η) ∈ A_{x,y} is true when rect(x − ξ) = 1 (assuming square FPA elements and a 100% fill factor). Multiplying the integrand by this rect function allows us to change the limits of the integral over ξ and η to ±∞. Substituting relation (B1) and Eq. (B4) into relation (B6), we obtain

g_o(x, y) ∝ { ∫ dξ rect(x − ξ) ∫ dξ′ tri[(ξ − ξ′)/W_p] rect[ξ′/W_o(θ_y)] }
          × { ∫ dη rect(y − η) ∫ dη′ tri[(η − η′)/W_p] }.  (B7)

The subscript o in g_o has been added to denote that the quantity corresponds to the expected FPA photon fluxes for out-of-plane rotations of the plate. The second bracketed term is not dependent on the source orientation and evaluates to a constant, giving



g_o(x, y) ∝ ∫ dξ rect(x − ξ) ∫ dξ′ tri[(ξ − ξ′)/W_p] rect[ξ′/W_o(θ_y)].  (B8)

A closed-form expression for the mean measurement at FPA element (x, y) is now found by substituting Eqs. (B2), (B3), and (B5) into relation (B8), a straightforward but tedious exercise. The form of the solution depends on the magnitudes of W_p and W_o(θ_y). For W_p > 1 and W_o(θ_y) > 2W_p + 1, the following result is obtained:

g_o(x, y) ∝ { 0,      W_p + 0.5 ≤ χ
              I1(χ),  W_p − 0.5 ≤ χ ≤ W_p + 0.5
              I2(χ),  0.5 ≤ χ ≤ W_p − 0.5
              I3(χ),  −0.5 ≤ χ ≤ 0.5
              I4(χ),  −W_p + 0.5 ≤ χ ≤ −0.5
              I5(χ),  −W_p − 0.5 ≤ χ ≤ −W_p + 0.5
              I6(χ),  χ ≤ −W_p − 0.5 },

where χ = χ_o ≡ |x| − W_o(θ_y)/2.  (B9)

The functions I1–I6 are defined as follows:

I1(χ) = T1(W_p + 0.5) − T1(χ),  (B10)

I2(χ) = T1(W_p + 0.5) − T1(W_p − 0.5) + T2(W_p − 0.5) − T2(χ),  (B11)

I3(χ) = T1(W_p + 0.5) − T1(W_p − 0.5) + T2(W_p − 0.5) − T2(0.5) + T3(0.5) − T3(χ),  (B12)

I4(χ) = T1(W_p + 0.5) − T1(W_p − 0.5) + T2(W_p − 0.5) − T2(0.5) + T3(0.5) − T3(−0.5) + T4(−0.5) − T4(χ),  (B13)

I5(χ) = T1(W_p + 0.5) − T1(W_p − 0.5) + T2(W_p − 0.5) − T2(0.5) + T3(0.5) − T3(−0.5) + T4(−0.5) − T4(−W_p + 0.5) + T5(−W_p + 0.5) − T5(χ),  (B14)

I6(χ) = T1(W_p + 0.5) − T1(W_p − 0.5) + T2(W_p − 0.5) − T2(0.5) + T3(0.5) − T3(−0.5) + T4(−0.5) − T4(−W_p + 0.5) + T5(−W_p + 0.5) − T5(−W_p − 0.5),  (B15)

where

T1(z) = [(W_p + 0.5)² z − (W_p + 0.5) z² + z³/3] / (2W_p²),  (B16)

T2(z) = (W_p z − 0.5 z²) / W_p²,  (B17)

T3(z) = z/W_p − z/(4W_p²) − z³/(3W_p²),  (B18)

T4(z) = (W_p z + 0.5 z²) / W_p²,  (B19)

T5(z) = [(W_p + 0.5)² z + (W_p + 0.5) z² + z³/3] / (2W_p²).  (B20)

Taking the derivative of relation (B9) with respect to W_o(θ_y) gives

∂g_o(x, y)/∂W_o ∝ { 0,      W_p + 0.5 ≤ χ
                    D1(χ),  W_p − 0.5 ≤ χ ≤ W_p + 0.5
                    D2(χ),  0.5 ≤ χ ≤ W_p − 0.5
                    D3(χ),  −0.5 ≤ χ ≤ 0.5
                    D2(χ),  −W_p + 0.5 ≤ χ ≤ −0.5
                    D1(χ),  −W_p − 0.5 ≤ χ ≤ −W_p + 0.5
                    0,      χ ≤ −W_p − 0.5 },

where χ = χ_o ≡ |x| − W_o(θ_y)/2,  (B21)

with functions D1–D3 defined as

$$D_1(\chi) = \frac{(W_p + 0.5 - \chi)^2}{4W_p^2}, \qquad (B22)$$

$$D_2(\chi) = \frac{W_p - \chi}{2W_p^2}, \qquad (B23)$$

$$D_3(\chi) = \frac{2W_p - 0.5 - 2\chi^2}{4W_p^2}. \qquad (B24)$$

The derivative with respect to $\theta_y$ is easily found by applying the chain rule to Eq. (B2) and relation (B21):

$$\frac{\partial g_o(x, y|\theta_y)}{\partial \theta_y} = \frac{\partial g_o(x, y|\theta_y)}{\partial W_o(\theta_y)}\,\frac{\partial W_o(\theta_y)}{\partial \theta_y} = -\frac{\partial g_o(x, y|\theta_y)}{\partial W_o(\theta_y)}\,W_R \sin\theta_y. \qquad (B25)$$
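The chain-rule factor in relation (B25) can be checked numerically. Eq. (B2) is not reproduced in this excerpt, so the Python sketch below assumes the projected width $W_o(\theta_y) = W_R \cos\theta_y$, which is what the factor $\partial W_o/\partial\theta_y = -W_R\sin\theta_y$ implies; $W_R$, the sample angle, and the stand-in response function are illustrative choices, not values from the paper.

```python
# Numeric check of the chain rule in relation (B25), under the assumption
# Wo(theta_y) = WR*cos(theta_y) for a plate of width WR rotated out of plane.
import math

WR = 40.0                          # plate width in FPA elements (assumed)
Wo = lambda t: WR * math.cos(t)    # projected width vs. out-of-plane angle
g = lambda w: w**2 / (1.0 + w)     # smooth stand-in for the response go(Wo)

t, h = 0.3, 1e-6
# d/dtheta of g(Wo(theta)) by central finite difference ...
fd = (g(Wo(t + h)) - g(Wo(t - h))) / (2 * h)
# ... versus the chain rule (B25): (dg/dWo) * (-WR*sin(theta))
dg_dWo = (g(Wo(t) + h) - g(Wo(t) - h)) / (2 * h)
assert abs(fd - dg_dWo * (-WR * math.sin(t))) < 1e-4
```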

Except for a proportionality constant, expressions (B9)–(B25) completely specify the Fisher information matrix of relation (15) in closed form.

2. In-Plane Orientation

This subsection will derive an exact expression for the image as a function of the in-plane orientation of the plate about the LOS. As a function of orientation, the projected profile can be found by transforming the equations for the lines that form the left and right edges of the plate. In the nominal ($\theta_z = 0°$) orientation, these equations are $x_o = \pm W_R/2$. The in-plane rotational transformation to orientation angle $\theta_z$ is


$$\begin{pmatrix} x \\ y \end{pmatrix} = \begin{bmatrix} \cos\theta_z & -\sin\theta_z \\ \sin\theta_z & \cos\theta_z \end{bmatrix}\begin{pmatrix} x_o \\ y_o \end{pmatrix}. \qquad (B26)$$

Substituting $x_o = \pm W_R/2$ and solving for $x$ as a function of $y$ produces

$$x = \frac{\pm W_R/2 - y \sin\theta_z}{\cos\theta_z}. \qquad (B27)$$
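As a quick consistency check of Eq. (B27), one can rotate a point on the unrotated right edge $x_o = +W_R/2$ through the rotation of Eq. (B26) and confirm that the resulting $(x, y)$ satisfies Eq. (B27). In the Python sketch below, $W_R$, the angle, and the sample points are illustrative values:

```python
# Check of Eq. (B27): points on the plate's right edge, x_o = +WR/2, map under
# the in-plane rotation (B26) to points satisfying
# x = (WR/2 - y*sin(theta_z)) / cos(theta_z).
import math

WR, tz = 40.0, 0.25           # plate width, in-plane angle theta_z (radians)
for yo in (-10.0, 0.0, 7.5):  # positions along the unrotated edge
    xo = WR / 2.0
    x = math.cos(tz) * xo - math.sin(tz) * yo  # rotation matrix of (B26)
    y = math.sin(tz) * xo + math.cos(tz) * yo
    assert abs(x - (WR / 2.0 - y * math.sin(tz)) / math.cos(tz)) < 1e-9
```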

As projected onto the FPA, the resulting intensity distribution function for the plate is

$$\text{plate intensity}(x, y|\theta_z) = \mathrm{rect}\!\left[\frac{|x|}{W_i(x, y|\theta_z)}\right] \quad \text{for } \theta_z{:}\; H \sin\theta_z < W_R, \qquad (B28)$$

$$W_i(x, y|\theta_z) = \frac{W_R - \mathrm{sign}(x)\,2y\sin\theta_z}{\cos\theta_z}, \qquad (B29)$$

$$\mathrm{sign}(x) = \begin{cases} -1, & x < 0 \\ \phantom{-}1, & x > 0, \end{cases} \qquad (B30)$$

where $H$ is the height of the FPA and the units are in FPA elements. Substituting Eqs. (B4) and (B28) into relation (B6) gives the mean expected FPA measurement:

$$g_i(x, y|\theta_z) \propto \int d\xi\,\mathrm{rect}(x - \xi)\int d\xi' \left\{\mathrm{tri}\!\left(\frac{\xi - \xi'}{W_p}\right) \int d\eta\,\mathrm{rect}(y - \eta)\int d\eta'\,\mathrm{tri}\!\left(\frac{\eta - \eta'}{W_p}\right)\mathrm{rect}\!\left[\frac{|\xi'|}{W_i(x, y|\theta_z)}\right]\right\}, \qquad (B31)$$

where we have used the subscript $i$ on $g_i$ to denote that this quantity corresponds to in-plane rotations of the plate.

Assuming that the orientation angle $\theta_z$ is near zero and that $W_R \gg 1$, the plate's intensity distribution is slowly varying in $\eta'$, and it is reasonable to take the term $\mathrm{rect}[|\xi'|/W_i(x, y|\theta_z)]$ outside the two innermost integrals. Within this approximation, the result is analogous to relation (B7), obtained for the case of out-of-plane rotations, and the integrals in $\eta$ and $\eta'$ evaluate to a constant, giving

$$g_i(x, y|\theta_z) \propto \int d\xi\,\mathrm{rect}(x - \xi)\int d\xi'\,\mathrm{tri}\!\left(\frac{\xi - \xi'}{W_p}\right)\mathrm{rect}\!\left[\frac{|\xi'|}{W_i(x, y|\theta_z)}\right]. \qquad (B32)$$

By substituting Eqs. (B3) and (B5) into relation (B32), we obtain the final closed-form expression for the expected FPA flux at each element $(x, y)$ for in-plane rotations and its derivative with respect to $W_i(x, y|\theta_z)$, i.e., $\partial g_i(x, y|\theta_z)/\partial W_i(x, y|\theta_z)$. For $W_p > 1$ and $W_R - H \sin\theta_z > 2W_p + 1$, these expressions are identical to relations (B9) and (B21), except that the quantity $\chi$ must be redefined to correspond to edge positions of the plate for in-plane rotations:

$$\chi = \chi_i \equiv |x| - W_i(x, y|\theta_z)/2. \qquad (B33)$$

The functions $I_1$–$I_6$ and $D_1$–$D_3$ are defined just as above.

Applying the chain rule, we find the derivative of $g_i$ with respect to $\theta_z$ to be

$$\frac{\partial g_i(x, y|\theta_z)}{\partial \theta_z} = \frac{\partial g_i(x, y|\theta_z)}{\partial W_i(x, y|\theta_z)}\,\frac{\partial W_i(x, y|\theta_z)}{\partial \theta_z} = \frac{\partial g_i(x, y|\theta_z)}{\partial W_i(x, y|\theta_z)}\left\{-\mathrm{sign}(x)\,2y + \frac{[W_R - \mathrm{sign}(x)\,2y\sin\theta_z]\sin\theta_z}{\cos^2\theta_z}\right\}. \qquad (B34)$$
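The width-derivative factor in relation (B34) can be verified against a finite difference of the width expression of Eq. (B29). In the Python sketch below, the plate width $W_R$, row offset $y$, sign, and angle are illustrative sample values:

```python
# Finite-difference verification of the dWi/dtheta_z factor of (B34), using
# the projected width Wi of Eq. (B29). Sample values are illustrative.
import math

WR, y, s = 40.0, 6.0, 1.0   # plate width, row offset, sign(x)
Wi = lambda t: (WR - s * 2 * y * math.sin(t)) / math.cos(t)

t, h = 0.2, 1e-6
fd = (Wi(t + h) - Wi(t - h)) / (2 * h)   # numerical dWi/dtheta_z
analytic = -s * 2 * y + (WR - s * 2 * y * math.sin(t)) * math.sin(t) / math.cos(t)**2
assert abs(fd - analytic) < 1e-5
```

Note that the braced factor simplifies to $[W_R\sin\theta_z - \mathrm{sign}(x)\,2y]/\cos^2\theta_z$, which reduces to $-\mathrm{sign}(x)\,2y$ at $\theta_z = 0$, consistent with Eq. (B29).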

The Fisher information at each pixel of the imaged plate can now be specified for in-plane rotations to within a multiplicative constant by substituting the new definition of $\chi$ expressed by Eq. (B33) into expressions (B9), (B21), and (B34).

ACKNOWLEDGMENTS

The authors extend thanks to Joel Vaughn and Jennifer Hill of the Boeing Co. for their contributions in implementing the rendering code used in this paper and in running simulations that provided supporting data. The authors also acknowledge their appreciation of the constructive reviews made by the anonymous referees, which led to a general improvement of this paper. Most specifically, limitations on the applicability of Cramér–Rao bounds on nonflat parameter spaces were noted, and a significant body of work12,13,21–25 of which the authors were unaware was pointed out.

Corresponding author David Gerwe can be reached by e-mail at [email protected] or by phone at 818-586-8220.

REFERENCES AND NOTES

1. S. M. Hannon and J. H. Shapiro, "Laser radar target detection with a multipixel joint range-intensity processor," in Laser Radar III, R. J. Becherer, ed., Proc. SPIE 999, 162–175 (1988).

2. S. M. Hannon and J. H. Shapiro, "Active-passive detection of multipixel targets," in Laser Radar V, R. J. Becherer, ed., Proc. SPIE 1222, 2–23 (1990).

3. T. J. Green, Jr., and J. H. Shapiro, "Maximum-likelihood laser radar range profiling with the expectation-maximization algorithm," Opt. Eng. 31, 2343–2354 (1992).

4. J. Zhao, S. Ahalt, and C. B. Stribling, "3-D orientation vector estimation from satellite imagery," in Signal Processing, Sensor Fusion, and Target Recognition V, I. Kadar and V. Libby, eds., Proc. SPIE 2755, 472–483 (1996).

5. L. Hassebrook, M. Lhamon, M. Wang, and J. Chatterjee, "Postprocessing of correlation for orientation estimation," Opt. Eng. 36, 2710–2718 (1997).

6. X. Du, S. Ahalt, and B. Stribling, "Three-dimensional vector estimation for subcomponents of space object imagery," Opt. Eng. 37, 798–807 (1998).

7. B. Li, Q. Zheng, S. Der, R. Chellappa, N. M. Nasrabadi, L. A. Chhan, and L.-C. Wang, "Experimental evaluation of neural, statistical and model-based approaches to FLIR ATR," in Automatic Target Recognition VIII, F. A. Sadjadi, ed., Proc. SPIE 3371, 388–397 (1998).

8. A. E. Koksal, J. H. Shapiro, and W. M. Wells III, "Model-based object recognition using laser radar range imagery," in Automatic Target Recognition IX, F. A. Sadjadi, ed., Proc. SPIE 3718, 256–266 (1999).

9. R. Li, "Model-based target recognition using laser radar," Opt. Eng. 31, 322–327 (1992).

10. T. J. Green and J. H. Shapiro, "Detecting objects in three-dimensional laser radar range images," Opt. Eng. 33, 865–874 (1994).

11. M. I. Miller, A. Srivastava, and U. Grenander, "Conditional-mean estimation via jump-diffusion processes in multiple target tracking/recognition," IEEE Trans. Signal Process. 43, 2678–2690 (1995).

12. M. I. Miller, U. Grenander, J. A. O'Sullivan, and D. L. Snyder, "Automatic target recognition organized via jump-diffusion algorithms," IEEE Trans. Image Process. 6, 157–174 (1997).

13. A. D. Lanterman, M. I. Miller, and D. L. Snyder, "General Metropolis–Hastings jump diffusions for automatic target recognition in infrared scenes," Opt. Eng. 36, 1123–1137 (1997).

14. M. Cooper, U. Grenander, M. I. Miller, and A. Srivastava, "Accommodating geometric and thermodynamic variability for forward-looking infrared sensors," in Algorithms for Synthetic Aperture Radar Imagery IV, E. G. Zelnio, ed., Proc. SPIE 3070, 162–172 (1997).

15. J. Kostakis, M. Cooper, T. Green, Jr., M. Miller, J. O'Sullivan, J. Shapiro, and D. Snyder, "Multispectral active-passive sensor fusion for ground-based target orientation estimation," in Automatic Target Recognition VIII, F. A. Sadjadi, ed., Proc. SPIE 3371, 500–507 (1998).

16. J. Kostakis, M. Cooper, T. Green, Jr., M. Miller, J. O'Sullivan, J. Shapiro, and D. Snyder, "Multispectral sensor fusion for ground-based target orientation estimation: FLIR, LADAR, HRR," in Automatic Target Recognition IX, F. A. Sadjadi, ed., Proc. SPIE 3718, 14–24 (1999).

17. A. Srivastava, U. Grenander, G. R. Jensen, and M. I. Miller, "Jump-diffusion Markov processes on orthogonal groups for object pose estimation," J. Stat. Plann. Infer. 103, 15–37 (2002).

18. H. L. Van Trees, Detection, Estimation, and Modulation Theory: Part 1 (Wiley, New York, 1968).

19. S. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory (Prentice-Hall, Englewood Cliffs, N.J., 1993).

20. E. Weinstein and A. J. Weiss, "A general class of lower bounds in parameter estimation," IEEE Trans. Inf. Theory 34, 338–342 (1988).

21. A. Srivastava and U. Grenander, "Metrics for target recognition," in Applications of Artificial Neural Networks in Image Processing III, N. M. Nasrabadi and A. K. Katsaggelos, eds., Proc. SPIE 3307, 29–36 (1998).

22. U. Grenander, M. I. Miller, and A. Srivastava, "Hilbert–Schmidt lower bounds for estimators on matrix Lie groups," IEEE Trans. Pattern Anal. Mach. Intell. 20, 790–801 (1998).

23. U. Grenander, A. Srivastava, and M. I. Miller, "Asymptotic performance analysis on Bayesian target recognition," IEEE Trans. Inf. Theory 46, 1658–1665 (2000).

24. M. L. Cooper and M. Miller, "Information measures for object recognition accommodating signature variability," IEEE Trans. Inf. Theory 46, 1896–1907 (2000).

25. H. Hendriks, "A Cramér–Rao type lower bound for estimators with values in a manifold," J. Multivar. Anal. 38, 245–261 (1991).

26. As long as the noise of each measurement and sensor is statistically independent, which is true in a large number of situations, the joint pdf is simply the product of the pdfs corresponding to the individual measurements.

27. D. R. Gerwe and P. S. Idell, "Cramér–Rao bound analysis of target characterization accuracy limits for imaging systems," in Multifrequency Electronic/Photonic Devices and Systems for Dual-Use Applications, A. R. Pirich, P. L. Repak, P. S. Idell, and S. R. Czyzak, eds., Proc. SPIE 4490, 245–255 (2001).

28. A. D. Lanterman, M. I. Miller, and D. L. Snyder, "Representations of thermodynamic variability in the automated understanding of FLIR scenes," in Automatic Object Recognition VI, F. A. Sadjadi, ed., Proc. SPIE 2756, 26–37 (1996).

29. The bound given in relation (4) corresponds to the lowest MMSE achievable by an optimal estimator for the specific value of ξ used in the calculations. Another, more global bound can be calculated by averaging Eq. (3) over all values of ξ weighted by the a priori distribution p(ξ). This Bayesian bound gives the minimum mean-square accuracy achievable by any estimator, including those that are biased. See pp. 72–73 and 84–85 of Van Trees.18

30. D. Snyder, D. Angelisanti, W. Smith, and G.-M. Dai, "Correction for nonuniform flat-field response in focal-plane arrays," in Digital Image Recovery and Synthesis III, P. S. Idell and T. J. Schulz, eds., Proc. SPIE 2827, 60–67 (1996).

31. D. Snyder, C. Helstrom, A. Lanterman, M. Faisal, and R. White, "Compensation for readout noise in CCD images," J. Opt. Soc. Am. A 12, 272–283 (1995).

32. D. Snyder, A. M. Hammoud, and R. L. White, "Image recovery from data acquired with a charge-coupled-device camera," J. Opt. Soc. Am. A 10, 1014–1023 (1993).

33. R. E. Blahut, Principles and Practice of Information Theory (Addison-Wesley, Reading, Mass., 1987).

34. J. Riker, R. Butts, G. Crockett, C. Baer, G. Kroncke, G. Cochran, D. Briscoe, M. Stephens, R. Suizu, and D. Clark, "Tracking for anti-satellite (TASAT) systems simulation. Vol. II—Physical models," Tech. Rep. RDA-TR-154306-001 (Research and Development Associates Logicon, 105 East Vermijo, Suite 450, Colorado Springs, CO, 1989).

35. Such a comparison was performed after the compilation of this paper. The details of the calculations are too long to include here but were similar to those presented in Section 4 and Appendix B. It was found that the numerical CRLB calculations were generally 5%–20% larger than that indicated by the closed-form expression. This difference is too small relative to the potential inaccuracies in the numerical calculations to infer much about how the obscuration effects influence the Fisher information. It does, however, provide an indication that the overall effect is fairly small and that the relative location of target edges dominates the Fisher information.

36. The described modifications to the rendering algorithm were implemented subsequent to the compilation of this paper. Tests indicate that as a complex 3-D target was rotated, the pixel values in the vicinity of obscuration edges changed smoothly and continuously. A rigorous evaluation of the accuracy of the approach for computing CRLBs has not been performed, but preliminary results are promising.

37. A. E. Siegman, Lasers (University Science Books, Mill Valley, Calif., 1986).

