
IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 54, NO. 5, MAY 2006

Performance Bounds on Image Registration

İmam Şamil Yetik and Arye Nehorai, Fellow, IEEE

Abstract—Registration is a fundamental step in image processing systems where there is a need to match two or more images. Applications include motion detection, target recognition, video processing, and medical imaging. Although a vast number of publications have appeared on image registration, performance analysis is usually performed visually, and little attention has been given to statistical performance bounds. Such bounds can be useful in evaluating image registration techniques, determining parameter regions where accurate registration is possible, and choosing features to be used for the registration. In this paper, Cramér–Rao bounds on a wide variety of geometric deformation models, including translation, rotation, shearing, rigid, more general affine and nonlinear transformations, are derived. For some of the cases, closed-form expressions are given for the maximum-likelihood (ML) estimates, as well as their variances, as space permits. The bounds are also extended to unknown original objects. Numerical examples illustrating the analytical performance bounds are presented.

Index Terms—Cramér–Rao bounds, image registration, performance analysis.

I. INTRODUCTION

IMAGE registration is the process of matching two or more images that differ in certain aspects but essentially represent the same object. The images to be registered may be obtained using different viewpoints, sensors, or time instants. Many engineering problems involve such cases, including motion detection, where images taken at different time instants need to be registered; target recognition, where images taken from different viewpoints should be combined; and video processing, where different frames should be registered for more efficient compression. Other application areas are three-dimensional (3-D) modeling, where two-dimensional (2-D) images must be integrated to construct a 3-D model, and medical imaging, where combining information from different modalities is useful since each of them may have certain advantages, as well as registering images taken at different time instants to analyze tumor growth or regular child growth. In some of these applications, the deformation itself is of interest, for example in motion estimation or tumor growth, whereas in others deformation analysis is required only to correct or align the images, such as combining medical images obtained using different modalities.

Manuscript received June 1, 2004; revised May 3, 2005. This work was supported by the Air Force Office of Scientific Research Grant F49620-02-1-0339 and the National Science Foundation Grants CCR-0105334 and CCR-0330342. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Hilde M. Huizenga.

İ. Ş. Yetik was with the Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, IL 60607 USA. He is now with the Department of Biomedical Engineering, University of California, Davis, CA 95616-8678 USA (e-mail: [email protected]).

A. Nehorai was with the Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, IL 60607 USA. He is now with Washington University in St. Louis, St. Louis, MO 63130-4899 USA (e-mail: [email protected]).

Digital Object Identifier 10.1109/TSP.2006.870552

There are excellent tutorials on image registration, including [1], focusing on geometric registration; [2], discussing intensity matching; and [3], which presents a well-organized classification of image registration algorithms. Interested readers are encouraged to read these papers and the references therein for a more exhaustive background on image registration than the following summary.

Image registration can be of two main types, depending on the nature of the images and distortion model assumptions. The first is matching only the coordinates of the images, assuming that the intensity values of the image pixels are not altered, but only their coordinates are transformed. This may occur in images taken from different viewpoints ignoring the effect of lighting, images taken at different time instants but under similar conditions (e.g., satellite images where the actual content of the image is moving), or simply as a preprocessing step for intensity matching. The second type of image registration is more general and includes both intensity matching and geometric correction (correcting the coordinates). Intensity values of two images representing the same real-world object can differ for many reasons, including different sensors, lighting conditions, equipment settings, and environmental conditions (e.g., in satellite images).

Image registration methods can be classified in many ways other than the above two main clusters. These include dimension-based classification: registration of 2-D to 2-D images is the most common; 3-D to 3-D registration occurs in medical imaging, where 3-D images are available from tomographic imaging techniques; and 2-D to 3-D registration arises when registering tomographic images to X-ray images. Another classification can be made based on the application area, such as medical imaging or video processing. We can also distinguish between fully automatic and semiautomatic registration algorithms, where the latter require user interaction at some level. It is further possible to divide image registration algorithms into methods requiring preprocessing versus those applied directly to the images, and to group them as global (space-invariant) or local (space-variant) methods. Finally, it is possible to categorize them based on the methods used, such as intensity-based or information-based algorithms.

All image registration methods essentially use a feature set (either extracted from the image or the image intensities themselves), a similarity measure between these feature sets, and a search for the maximum of this similarity measure.

The class of registration algorithms that preprocess the image before registration extracts the coordinates of features that are common in both images and then matches the images using these coordinates. The features used can be isolated points [4], [5], lines [6], [7], or surfaces [8], [9].

These common features in the images can be found using feature extraction and segmentation algorithms when we have no access to the real-world object. With some medical imaging techniques, we may have access to the object and can embed artificial features that will be visible in both images and then use them for the registration, as in [10]. After obtaining the feature set, we can use statistical methods to match the two images when there are enough features that the linear system relating the images (or their features) to the real-world object (or its features) is overdetermined. Intensity matching is performed after a geometric correction is made.

The class of registration algorithms that directly uses the image intensity values (without preprocessing) simultaneously determines geometric deformation and intensity distortion. Such algorithms use voxel similarity measures, for instance, intensity difference [11] or correlation measures [12]. Other techniques based on information have been proposed [13], [14], where the basic idea is to maximize the information shared by the images after the registration is performed.

Although many methods have been proposed for image registration, very little attention has been given to evaluating their performance and limitations. The common methods for evaluating registration include using a gold standard, which is rarely available (e.g., in medical imaging techniques where fixed visible markers can be attached to the object [15]); visual evaluation [16], requiring the interaction of trained personnel; and consistency measures [17]. A novel method that quantifies registration errors can be found in [18], where the registration accuracy is evaluated using a "circular" approach; namely, images from multiple modalities are registered serially, ending with the modality that is first in the series. In this way, a perfect registration should result in the identity operation, and deviations from the identity are related to the registration error [18].

In this paper, we formulate the image registration problem as a statistical parameter estimation problem and derive Cramér–Rao bounds (CRBs) as performance measures (see also [19]). The CRB is a lower bound on the covariance of any unbiased estimator and is asymptotically achieved by the maximum-likelihood (ML) estimator. It is an important benchmark performance measure that can be used to evaluate the efficiency of registration algorithms, to determine the parameter regions where good and poor estimates are expected, and to optimize image registration by selecting the features to be used. The CRB is widely studied in statistical signal processing areas, such as communications, radar, sonar, and biomedicine. The image registration literature lacks a study of this important bound, except for a very limited deformation model (translation only) in the context of motion estimation [20]. This paper aims to fill this gap by deriving CRB expressions for a wide class of geometric deformation models, including translation, rotation, scaling, shearing, rigid, more general affine, and nonlinear transformations shown in Fig. 1 (see Section II for definitions and details).

In Section II, we present the image registration problem as a parameter estimation problem and give two basic frameworks for geometric registration: registration using 1) isolated points, in Section II-A, and 2) the image intensities, in Section II-C. We refer to these isolated points simply as "points" in the rest of the paper. We also derive CRBs using these two frameworks in Section II. The maximum-likelihood estimates (MLEs) and their variances are given for some of the cases as space permits. Extensions to unknown real-world objects are also presented. We give numerical examples in Section III for easier visualization of the analytically derived bounds. Section IV is the conclusion and discussion.

Fig. 1. Illustrations of geometric deformations. Part (a) shows the original image, (b) translation, (c) rotation, (d) skew, (e) rigid (rotation + translation), (f) affine (rigid + skew), and (g) nonlinear transformations.

II. PROBLEM FORMULATION AND PERFORMANCE ANALYSIS

Let be a real-world object of interest, and and two functions that represent the images to be registered of this object. The coordinates of the image are geometrically deformed versions of the coordinates of the image , as follows:

(2.1)

where and represent operators that map the coordinates (i.e., operate on coordinates and result in transformed coordinates). The intensity of the image is also distorted, resulting in the image . Let represent an operator that alters the intensity values of the image; then the overall relationship between the images and is

(2.2)

Here, operates on the geometrically distorted image, changing its intensity values and resulting in the other image. The image registration problem is estimating and , and , according to some criterion, using parametric models for the unknown geometric deformation and intensity distortion between and . In this paper, we focus on geometric registration and assume that is not present, resulting in

(2.3)


A. Registration Using Isolated Points

Many registration algorithms use isolated points on the images to find the geometric distortion [4], [5] and then determine the optimum intensity matching using geometrically aligned images. Let represent the points corresponding to features of , be the corresponding isolated points of , and the isolated points of representing the same features. In this framework, and are the observations (hence, random functions due to measurement noise), and is the real-world coordinate set (assumed to be deterministic). Assuming a global geometric deformation (i.e., the same deformation for all isolated points), the statistical model is

(2.4)

(2.5)

(2.6)

(2.7)

(2.8)

where the subscripts of the block matrices denote their dimensions, the subscripts " " of the points denote the th component ( , or ) of the th point, is the identity matrix, a matrix with all zero entries, a matrix with all one entries, " " the Kronecker product, the geometric deformation, and the translation, and finally a matrix denoting additive noise. Equation (2.4) formulates two observations of the real landmarks. One of the observations is assumed to be a noisy version of the real locations, and the other is a noisy version of the geometrically deformed real-world locations. The additive noise is assumed to be white Gaussian and independent for different points and directions [1], [20], [14]. This additive noise is essentially a result of the preprocessing step. Depending on the application and the preprocessing method, the Gaussian noise model may not be valid [21]. In this case, the log-likelihood function and the bounds will have different forms, and the CRBs can be calculated following similar steps using the noise model that fits the physical problem.

The Gaussian noise is allowed to have different variances along different directions, with covariance as follows:

(2.9)
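The displayed covariance in (2.9) is not legible in this copy. As a rough sketch of one construction consistent with the surrounding description (per-direction variances, independence across points), and assuming the stacked noise vector orders all points of one axis before the next, the covariance could be built as follows; the ordering and all names here are illustrative, not the paper's:

```python
import numpy as np

def landmark_noise_covariance(num_points, var_x, var_y, var_z):
    """Covariance of white Gaussian landmark noise with per-axis variances.

    Assumes the stacked noise vector orders all x-components first, then y,
    then z (an illustrative ordering; the paper's (2.9) may differ).
    Independence across points gives a Kronecker structure with the identity.
    """
    per_axis = np.diag([var_x, var_y, var_z])
    return np.kron(per_axis, np.eye(num_points))

# Example: 5 landmarks with unequal variances along x, y, and z.
C = landmark_noise_covariance(5, 1.0, 2.0, 0.5)   # 15 x 15 diagonal matrix
```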

Let be the unknown parameters that define the geometric deformation and intensity distortion, and the probability density function (pdf) of the random functions and given . The CRB matrix for the unknown parameters is the inverse of the Fisher information matrix (FIM), denoted by [22]:

CRB (2.10)

(2.11)

where denotes the th entry of the th component of , and denotes the expectation operator. We derive expressions for the FIM and CRB matrices for a wide class of geometric deformations. All of the derivations follow three main steps: i) finding , ii) taking the partial derivatives, and iii) calculating the expectation.
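The displayed forms of (2.10) and (2.11) are not legible here; for reference, the standard relations that the text describes are, in generic notation (the paper's own symbols may differ):

```latex
% Generic statement of the CRB/FIM relations described in the text.
\mathrm{CRB}(\boldsymbol{\theta}) = \mathbf{J}^{-1}(\boldsymbol{\theta}),
\qquad
[\mathbf{J}(\boldsymbol{\theta})]_{kl}
  = -\,\mathrm{E}\!\left[
      \frac{\partial^{2} \ln p(\mathbf{y}_{1},\mathbf{y}_{2};\boldsymbol{\theta})}
           {\partial \theta_{k}\, \partial \theta_{l}}
    \right].
```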

We present the general form of here, omit the special forms in the following derivations, and list the resulting FIM and CRB matrices. Considering the model in (2.4), we have

constant (2.12)

where

(2.13)

Note that the first and last terms in (2.12) do not involve deformation parameters; hence, they become zero in computing the CRB because of the derivatives with respect to the parameters in step ii). Taking the derivatives and expectation gives the th element of the FIM

(2.14)

We now use (2.14) to calculate the FIMs and CRBs for a wide class of geometric deformations using the described framework. The MLEs for some of the deformation parameters and expressions for their variances are given. We also extend the results to unknown and observe how the bounds change for some of the cases. It is not possible to list all MLEs and extend all results to unknown because of limited space, but the derivations are essentially similar to those we give here.

1) 3-D Translation: The CRBs on the translation parameters were previously given in [20], where images are used directly for the registration, in contrast to this case of isolated points. For 3-D translation, is the identity matrix, and the only unknown parameters are the translation parameters, i.e., . The 3 × 3 FIM is

(2.15)


where the subscript "T" stands for translation. The CRB matrix is the inverse of the FIM:

CRB (2.16)

This shows that the performance bound for estimating a translation parameter in a certain direction is independent of the translation amount and of the noise variance in the other directions. It is further independent of the locations of the points used for the registration. The MLEs of the translation parameters can be obtained by solving the likelihood equation, i.e., finding the maximum of (2.12), resulting in

(2.17)

with the probability distribution

(2.18)

where denotes the Gaussian pdf. The variance of the estimate, , is . Observe that this is equal to the CRB for all values of .

Extension to unknown : We need to simultaneously estimate and the translation parameters, since the real-world object is not known. The resulting MLEs are

(2.19)

(2.20)

The FIM components for the translation parameters do not change compared with the FIM assuming a known real-world object, but the whole set of CRB components is different from the MLE variances because of the cross terms. After some arithmetic manipulation, the CRB on the translation parameters becomes

CRB (2.21)

Observe that estimation of increases the bounds by a factor of two. The MLEs of the translation parameters now have the distribution , resulting in the error covariance , which is equal to the CRB for all values of .
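As a quick numerical check of the translation results above, the following sketch simulates the point model as described around (2.4) for pure 3-D translation with a known object. Under that model, the mean of the coordinate differences is the natural (least-squares/ML) estimate, and its per-axis variance works out to roughly sigma^2/N; the closed form of (2.17) is not legible here, and the variable names below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

N, sigma = 50, 2.0                          # landmarks, noise std (equal per axis)
t_true = np.array([3.0, -1.5, 0.7])         # true 3-D translation
X = rng.uniform(0.0, 100.0, size=(N, 3))    # known real-world landmark coordinates

trials = 5000
est = np.empty((trials, 3))
for k in range(trials):
    # Noisy observation of the translated landmarks (the noisy copy of X itself
    # carries no information about the translation when X is known).
    y2 = X + t_true + sigma * rng.standard_normal((N, 3))
    est[k] = (y2 - X).mean(axis=0)          # mean of differences

print(est.var(axis=0))                      # empirical variance per axis
print(sigma**2 / N)                         # bound sigma^2 / N for a known object
```

With the real-world coordinates unknown, the paragraph above says the bound doubles; replacing X in the estimator by its noisy observation reproduces that factor-of-two behavior empirically.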

2) 2-D Rotation: A 2-D rotation matrix is

(2.22)

where is the rotation angle. We assume that the rotation origin is the lower left corner, where the origin (0, 0) of the image is assumed to appear. The Fisher information and CRB become scalars since there is only one unknown parameter . Here

(2.23)

with CRB . For the case of equal variance in the and directions, simplifies to , and CRB . Observe that better rotation estimates (lower CRB values) are possible when the 's and 's are larger. This is intuitive, since points further from the rotation center are affected more and hence make it easier to estimate the rotation. Observe also that the bound is independent of the rotation angle only for equal variances.

The MLE of the rotation angle for the case of equal variances is

(2.24)

where

(2.25)

The pdf of the random variable can be found using the derived pdf method, and we can find the variance of this estimator using the distribution of as follows:

(2.26)

where denotes the pdf of . Using the relationship between and , we have

(2.27)

This variance can be calculated by substituting the expression for ; however, we skip this here because of limited space. Note that the pdf of has a complicated expression, given in [23].
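The closed-form MLE in (2.24)-(2.25) is not legible in this copy. The sketch below is a minimal simulation of the equal-variance, known-object case: it uses the standard least-squares rotation fit (an assumed form, not a transcription of the paper's expression) and compares its spread with the corresponding bound, which under the model as described evaluates to sigma^2 divided by the sum of x_i^2 + y_i^2 over the points:

```python
import numpy as np

rng = np.random.default_rng(1)

N, sigma, theta_true = 30, 1.0, np.deg2rad(20.0)
P = rng.uniform(-50.0, 50.0, size=(N, 2))            # known points (x_i, y_i)
R = np.array([[np.cos(theta_true), -np.sin(theta_true)],
              [np.sin(theta_true),  np.cos(theta_true)]])

crb = sigma**2 / np.sum(P[:, 0]**2 + P[:, 1]**2)     # sigma^2 / sum(x^2 + y^2)

est = []
for _ in range(5000):
    Q = P @ R.T + sigma * rng.standard_normal((N, 2))   # noisy rotated points
    x, y, u, v = P[:, 0], P[:, 1], Q[:, 0], Q[:, 1]
    # Least-squares rotation fit (assumed equal-variance ML form).
    est.append(np.arctan2(np.sum(x * v - y * u), np.sum(x * u + y * v)))

print(np.var(est), crb)   # empirical variance should be close to the bound
```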

Extension to unknown : The FIM now has components for and the 's as well. The additional elements of the FIM are

(2.28)

(2.29)

(2.30)


These additional entries will alter the CRB components, which can be calculated by a simple inversion of the FIM.

3) 2-D Rigid Transformations: A transformation is called rigid if it involves only rotation and translation. The general form of a 2-D rigid transformation is

(2.31)

The unknown parameters for this deformation model are , and the 3 × 3 FIM has the following elements:

(2.32)

Consider the special case of equal variances , to have a clearer understanding, as follows:

(2.33)

Observe that larger and values result in higher information (better estimates) for the rotation angle, for the choice of the lower-left corner as the rotation origin. As with simpler deformation models, the FIM values for the translation parameters depend only on the number of points and the noise variances; however, there will be other factors due to the inversion when the CRB is calculated.

Extension to unknown : We obtain the following FIM components, assuming equal noise variances in the and directions, when the estimation of the unknown parameters is taken into account:

(2.34)

4) 2-D Skew: Skew deformation, or shear, is a nonrigid transformation that maps parallel lines to parallel lines. Shears along the and directions are

(2.35)

and the general form is

(2.36)

with unknown parameters , resulting in the following FIM:

(2.37)

where “SK” stands for skew. The inverse gives the CRB matrix

CRB (2.38)

The CRB shows that the estimation performance of the shear in one direction does not affect the other one. Moreover, the performance is not affected by the amount of shear, but depends only on the number and locations of the points and the noise variances.

Solving the likelihood equation yields the MLEs for the shear parameters, as follows:

(2.39)


The variances of these estimates are

(2.40)

Note that the MLE variances are equal to the CRB components for all values of .

Extension to unknown : In this case, the additional FIM entries are

(2.41)

5) 2-D Affine Transformation: Affine transformations are the most general form of geometric deformations mapping parallel lines to parallel lines:

(2.42)

Here, are the six free parameters. The resulting 6 × 6 FIM has the block form

(2.43)

where

(2.44)

The bounds for the estimation of the affine parameters are independent of the parameter values, but depend only on the locations, number of points, and noise variances.

We can easily obtain the MLEs for the 2-D affine transforms by solving the likelihood equation

(2.45)

where, and

(2.46)

Estimates of the affine transform parameters are normally distributed, since they are all linear combinations of other normally distributed random variables (e.g., ). The variances of these estimates are

(2.47)

Extension to unknown : In this case, we obtain

(2.48)

A part of the results presented in this subsection was independently derived in [24].
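Because the affine estimates in (2.45) are linear in the observations, for a known object they coincide with an ordinary least-squares fit of each output coordinate on [x, y, 1]. The sketch below illustrates this under an equal-variance assumption; the variable names and the normalization of the parameter covariance are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

N, sigma = 40, 1.5
A_true = np.array([[1.1, 0.2],
                   [-0.1, 0.9]])
t_true = np.array([5.0, -3.0])

P = rng.uniform(0.0, 100.0, size=(N, 2))                         # known points
Q = P @ A_true.T + t_true + sigma * rng.standard_normal((N, 2))  # noisy affine image

# Each output coordinate is a separate linear model in [x, y, 1], so the two
# axes decouple, matching the block structure described around (2.43)-(2.44).
D = np.column_stack([P, np.ones(N)])
coef_x, *_ = np.linalg.lstsq(D, Q[:, 0], rcond=None)   # estimates of (a11, a12, tx)
coef_y, *_ = np.linalg.lstsq(D, Q[:, 1], rcond=None)   # estimates of (a21, a22, ty)

# Covariance of the x-axis estimates under this linear Gaussian model:
cov_x = sigma**2 * np.linalg.inv(D.T @ D)
print(coef_x, np.sqrt(np.diag(cov_x)))
```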


6) 2-D Nonlinear Transformations: We consider a more general form of geometric deformation compared with the previous deformation models, where the transformed coordinates are not linear in the original coordinates. The transformations commonly used for this purpose are bivariate polynomials, as follows:

(2.49)

where and are the unknown parameters of the transformation. Note that the affine transformation is a special case of the bivariate polynomial transformation for . The FIM components are

(2.50)

The FIM has a block diagonal form, resulting in a block diagonal CRB matrix. That is, the bounds for the parameters of the direction are independent of those of the direction. Observe also that the bounds do not depend on the deformation parameter values, but only on the points and noise variances.

Extension to unknown : The additional FIM entries are

(2.51)
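For the bivariate polynomial model, the per-axis FIM block has the usual linear-model form: the Gram matrix of the monomial design matrix scaled by the inverse noise variance. A small sketch follows (with an illustrative monomial ordering; the paper's exact indexing in (2.49)-(2.50) is not visible here):

```python
import numpy as np

def poly_design(points, m):
    """Monomial design matrix with columns x**i * y**j for i + j <= m.
    Degree m = 1 recovers the affine case; the column ordering is illustrative."""
    x, y = points[:, 0], points[:, 1]
    cols = [x**i * y**j for i in range(m + 1) for j in range(m + 1 - i)]
    return np.column_stack(cols)

P = np.random.default_rng(3).uniform(0.0, 100.0, size=(25, 2))
Phi = poly_design(P, m=2)

var_x = 4.0
# FIM block for the x-direction coefficients; the y-direction block has the
# same form with its own variance, giving the block-diagonal structure noted above.
J_x = Phi.T @ Phi / var_x
crb_x = np.linalg.inv(J_x)   # independent of the coefficient values, as remarked
```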

B. 3-D Deformations

We have derived the corresponding bounds for the 3-D rotation, 3-D rigid, 3-D affine, and 3-D nonlinear transformations, but we skip them here because of the limited space. These derivations and results, which are similar to the previous cases, can be found in [25] and [26].

We summarize the results of this section, in terms of the dependence of the registration bounds on associated aspects, in Table I.

TABLE I
DEPENDENCE OF BOUNDS ON IMAGE AND DEFORMATION PARAMETERS

C. Geometric Registration Using Images

In this case, no preprocessing is performed on the images to extract features; hence, we do not know the corresponding points. We also note that the two images will not have the same domain, as explained in detail in [2]. Here, we consider the overlapping domain sets of the two images. It is not possible to obtain explicit expressions for the MLEs, since the unknown parameters are hidden in the coordinates and a search is necessary to find the maximum of the likelihood function.

We assume that the intensity values are not altered except for the additive noise. The statistical model we use is

(2.52)

(2.53)

where , and are the transformed coordinates and is the additive noise. It is not possible to model the noise as additive Gaussian when different techniques are used to acquire the images. In certain cases, e.g., with different modalities of medical imaging, the images are related by a nonlinear and nonrandom function. Therefore, the results we obtain here are valid for the same modality, where additive Gaussian noise is a reasonable assumption. The unknown parameters are hidden in , , and , as follows:

(2.54)

Using the shorthand notation for the image coordinates and , our observations are now , and the term is

constant (2.55)

Here, the summation is over all the coordinates of the overlapping domain of the two images. In general, this domain depends on the transformation for finite images. However, we assume that the registration is performed using a single domain that appears in both images for a certain set of values of the distortion parameters. Therefore, we ignore this dependence in the rest of the paper and assume that the domain is the same for the values of the deformation parameters that are of interest. It is possible to use several such domains when the change in the overlapping domain is significant. In that case, we would have to calculate FIMs for all domains separately, and each of these would be valid for the set of distortion parameters that corresponds to a specific domain.


With the above assumption, the first term does not depend on the unknown parameters and becomes zero during the derivative operation when calculating the FIM. We need to take the derivative of a function of three variables, which are all functions of the elements of . Hence, we use the chain rule:

(2.56)

Computing the derivatives using (2.56) and then taking the expectations gives the th element of the FIM for 3-D, as follows:

(2.57)

and for 2-D, as follows:

(2.58)

where the subscripts " and " denote partial derivatives with respect to and , respectively. These derivatives of can be calculated by first interpolating the discrete image and then using the resulting interpolated continuous function for direct derivative calculation. It is also possible to approximate the derivatives using the difference function.

We now derive the FIM and CRB matrices for the deformation models that we described in Section II-A.

1) 3-D Translation: In this case, depends on , and the FIM has the components

(2.59)

Taking the inverse would give the CRB components (explicit expressions are omitted here). The CRBs depend on the image derivatives and noise variances, and also on the amount of translation, unlike the registration methods that use isolated points as in Section II-A-1. We also observe that the cross-terms are not zero, in contrast to the methods using isolated points. The expressions in (2.59) for 2-D are given in [20]. We give the generalized 3-D version here to keep the flow of the paper.
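A rough numerical sketch of the 2-D translation case (the form studied in [20]) is given below: the FIM is built from image gradients summed over the overlapping domain. The exact normalization by the noise variance(s) in (2.59) is not legible in this copy, so a single variance is assumed, and np.gradient stands in for whichever derivative approximation is used:

```python
import numpy as np

def translation_fim_2d(image, noise_var):
    """Gradient-based FIM for pure 2-D translation (sketch).

    Sums outer products of the image gradient over all pixels; normalizing by
    a single noise variance is an assumption.
    """
    gy, gx = np.gradient(image.astype(float))        # derivatives along rows, columns
    J = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]]) / noise_var
    return J

img = np.random.default_rng(4).integers(0, 256, size=(110, 110)).astype(float)
J = translation_fim_2d(img, noise_var=100.0)
crb = np.linalg.inv(J)    # note the generally nonzero cross-term, as remarked above
```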

2) 2-D Rotation: The Fisher information is a scalar, since there is only one unknown parameter, namely the rotation angle , as follows:

(2.60)

with CRB . In this case, larger values do not guarantee lower bounds because of the term .

3) 2-D Rigid Transform: First, define the following variables to simplify the presentation:

(2.61)

FIM has the following components for the unknown parameters:

(2.62)


We observe that many cross-terms, which are zero when isolated points are used, are now nonzero due to the lack of knowledge of the corresponding points in the two images to be registered.

4) 2-D Skew: We have

(2.63)

and the inverse gives the CRB matrix shown in (2.64):

CRB (2.64)

We observe that the bounds now depend on the amount of shear, unlike the case of registration using isolated points as in Section II-A-4.

5) 2-D Affine Transform: The FIM is

(2.65)

where

(2.66)

The bounds on the variances of the estimates of the affine parameters depend on the parameter values, as for translation, rotation, and rigid transformations using images for registration; note that they are independent of the parameter values when the isolated points are used for the registration (see Section II-A). The results of this subsection were independently derived in [24].

6) 3-D Rotation and Rigid Transformations: We skip the explicit forms of the FIM components here; however, the derivation is quite similar to the 2-D case.

7) 3-D Skew: The unknown parameters are , resulting in

(2.67)

where

(2.68)

and the inverse gives the CRB. We have similar observations about the dependence of the bounds on the image derivatives and shear parameter values as in the 2-D case.

8) 3-D Affine Transform: The FIM has the block form

(2.69)

where

(2.70)

Similar observations can be made as in the 2-D affine transformations.

We note that larger derivatives result in better performance for registration using the images with the above deformation models. That is, images that change more rapidly are easier to register. This is, in fact, intuitive, since smaller derivatives mean images closer to a constant background, and the difference between the images to be registered is more difficult to detect.

D. Relation Between the Parameters and Image Coordinates

The bounds we have derived are on the variances of the distortion parameters, not directly on the image coordinates. In practice, the error between the coordinates of the registered images is of importance, rather than the variance of the distortion parameters. We now explain how the variance of the distortion parameters can be used to obtain information on the error of the image coordinates.

In practice, the estimated transformation is applied to , and the registration error can be defined as the mean-squared error between the points of one image and the registered version of the other image, as follows:

(2.71)

where is the squared Euclidean distance between the reference and registered points

(2.72)

and

(2.73)



Here, denotes the estimated transformation, and it is a random variable. Hence, we calculate the mean of the error defined in (2.71), as follows:

(2.74)

The terms and can be calculated as

(2.75)

This expression can be calculated for each of the cases separately, using the distributions of the parameter estimates. In the following, we give this derivation for the affine case.

For the affine case

(2.76)

Expanding the square results in terms that involve the CRBs of the deformation parameters ; we have

(2.77)

where CRB are the components of the CRB of the affine parameters, which can be calculated using (2.42). The second term is

(2.78)

and the third term can be calculated using the estimates givenby (2.45)

(2.79)

Similar steps can be followed to obtain a bound on . This concludes the relation between a lower bound on the registration error and the CRBs on the distortion parameter estimates.
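As an illustration of the expansion in (2.77), the quadratic-form sketch below propagates a given CRB block for the three x-related affine parameters to a per-point lower bound on the squared error of the registered x-coordinate; it assumes an unbiased estimator, and the example CRB matrix is arbitrary:

```python
import numpy as np

def coord_error_bound_x(points, crb_x):
    """Per-point lower bound on the mean-squared error of the registered
    x-coordinate, given the 3x3 CRB block of (a11, a12, tx).

    Follows the expansion sketched in (2.77): the bound is the quadratic form
    [x, y, 1] C [x, y, 1]^T (unbiasedness assumed, so MSE reduces to variance).
    """
    phi = np.column_stack([points, np.ones(len(points))])   # rows [x, y, 1]
    return np.einsum('ij,jk,ik->i', phi, crb_x, phi)

pts = np.array([[10.0, 20.0],
                [55.0, 80.0]])
C = np.diag([1e-4, 1e-4, 0.05])        # arbitrary illustrative CRB block
print(coord_error_bound_x(pts, C))
```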

III. NUMERICAL EXAMPLES

We use basic examples to calculate and plot the CRBs and MLE variances that were analytically derived in Section II, for better visualization. These plots illustrate the analytical results of Section II. We note that the bounds are strong functions of the images; hence, the derived conclusions are valid for these specific examples, although it may be possible to gain some insight from the following examples.

Fig. 2. Comparison of MLE variances (solid lines) with CRBs (dashed lines) as a function of the number of feature points. (a) is for the rotation angle θ; (b), (c), and (d) are for parameters of the affine transformation.

Fig. 3. MRI images of size 110 × 110 used to calculate the bounds. (a) shows the landmarks used for registration using points, (b) shows the image used for registration using image intensities, and (c) is another image with higher frequency content.

A. Effect of Number of Points

We plot the CRB components and MLE variances for the case where points are used for registration. We use points that are uniformly distributed on the image in both directions ( and ). Fig. 2(a) shows the MLE variance and CRB for the rotation angle, (2.23) and (2.26), when the geometric deformation is only rotation. Fig. 2(b)-(d) shows the CRBs and MLE variances for the affine parameters, given by (2.43) and (2.47); (b) is for , (c) for , and (d) for . We observe that the CRBs are achieved by the MLEs when the number of points used is sufficiently large. We can also see that the CRB for the translation parameter of the affine transformation is smaller compared with the others.

B. Registration Using Isolated Points

For this case, we use landmarks of a brain MRI slice as the points used for the registration. These landmarks are shown in Fig. 3(a). We first plot the CRB on the variance of the estimate of the rotation angle (2.23) as a function of the rotation angle. Fig. 4 shows the CRBs for three different noise levels, . These noise levels correspond to a standard deviation of around 3-5 pixels, which are reasonable error margins. We can see that the CRB approaches a constant as the noise levels along the and directions become closer.


Fig. 4. Cramér–Rao bounds on the rotation angle as a function of the rotation angle θ, assuming the geometric deformation is only rotation. The three curves show the CRB for noise variances of (10, 30), (10, 20), and (10, 10) along the x and y directions. The CRB approaches a constant as the variances along the x and y directions become closer.

Fig. 5. CRB components for a rigid transformation as a function of the rotation angle θ. (a) is for the θ component, (b) for the cross-component of t_x and θ, (c) for the t_x component, and (d) for the cross-component of t_x and t_y. The periodic structure is a result of the sines and cosines of the rotation matrix.

This is intuitive, since equal noise variances bring an isotropy to the problem. This is clearer when we consider circular coordinates. The rotation angle depends on the ratio of the and coordinates, and when the variances are equal, the effect is canceled, resulting in a constant CRB.

Next, we analyze the rigid transformation. Fig. 5 shows the CRBs on the rotation angle , the translation parameter (we only plot , since will have a similar form), and the cross CRB components, all given by (2.34). For this case, we use noise variances and translation parameters . It is interesting to note that the CRB for the rotation angle has a different minimum compared with the case when the deformation is only rotation. This result shows the effect of the translation on the performance of estimating the rotation angle.

Fig. 6. CRB components for two different images, assuming rigid transformations. The image with smaller derivatives (p1(x, y)) results in larger bounds.

C. Registration Using Images

We use the images shown in Fig. 3(b) and (c) for calculating the bounds in this section. All three images were obtained from the single image shown in Fig. 3(b), by applying artificial changes to obtain the image in Fig. 3(c) and manually determining landmarks to obtain the image in Fig. 3(a). The image in Fig. 3(c) has higher frequency content, hence larger derivative values, so that we can observe the effect of image content on the registration performance. The images have size 110 × 110, with integer gray values between 0 and 255. We have used additive noise of variance 100, resulting in a standard deviation of 10. We approximate the derivatives using the difference function

(3.80)
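The difference approximation in (3.80) is not legible here; a minimal forward-difference sketch of the kind of approximation described (an assumption about its exact form) is:

```python
import numpy as np

def forward_diff(image):
    """Forward-difference approximation of the image derivatives, e.g.
    g_x(m, n) ~ g(m, n + 1) - g(m, n); the last row/column is zero-padded.
    This is a guess at the form of (3.80), which is not legible in this copy."""
    img = image.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = np.diff(img, axis=1)    # derivative along columns (x)
    gy[:-1, :] = np.diff(img, axis=0)    # derivative along rows (y)
    return gx, gy
```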

In the first row of Fig. 6, we show the images that we used to calculate the CRB components for the rigid transformations. The CRBs (2.62) are plotted in the next three rows.


The second row shows the CRB for the rotation angle, the third row the cross-component between the rotation angle and translation parameter, and the fourth row the CRB for the translation parameter.

These plots first show that we do not have a periodic structure of the CRBs. That is, the lack of knowledge of the corresponding points makes it very difficult to recover small deformations. In this case, CRB values are significantly larger for rotation angles smaller than approximately 15°. In the case where the landmarks are used, it was possible to obtain small CRB values for small deformations, basically because the corresponding points are known.

Observe also that the CRBs for the high-frequency image are smaller compared with those for the low-frequency image. This suggests that preprocessing images before registration has the potential to improve registration accuracy.

IV. CONCLUSION

We have derived statistical performance bounds for image registration algorithms. For some of the cases, we have also derived expressions for the MLEs and their finite-sample variances. We have considered a wide class of geometric deformations, for both registration using points and registration using the intensity values of the images. We summarize the results below.

Registration using isolated points:

• The bounds on the variances of the translation parameter estimates (when the deformation model is translation only) depend only on the number of points, not on the amount of translation or the locations of the points. The variances of the MLEs are equal to the CRB values for any number of points.

• Better estimates can be obtained for the rotation angle (if the deformation model is rotation only) when the points are further from the rotation origin. The bounds are independent of the rotation amount for the case of equal noise variances along the and directions.

• For rigid transformations (rotation and translation), better estimates are obtained for the rotation angle when points further from the rotation center are used, for the case of equal noise variances in all directions.

• For 2-D skew, the bounds for the shear parameters along the two directions are decoupled. The bounds depend only on the number and locations of the points used, not on the shear amount. The MLE variances are equal to the CRB components.

• The bounds for the parameters of affine and bivariate polynomial transformations are independent of the parameter values, and depend only on the points and noise variances.

Registration using images:

• The bounds for all geometric deformation parameters depend on the values of the deformation parameters, in contrast to some of the cases of registration using isolated points.

• Unlike the case of registration using points, the bounds attain large values as the deformation amount is increased.

• Images with larger derivative values result in smaller CRB values and better performance for the distortion parameter estimates.

As future work, it is possible to compare the CRBs with the performance of standard methods, such as information-based or intensity-similarity-based methods. It would also be useful to derive confidence intervals for easier visualization.

Another direction is to consider the registration of video frames and exploit the fact that the deformation (e.g., the motion of a certain object in the video) is correlated in time. In this case, iterative estimation can be applied, and the performance analysis for this case is of interest.

It would also be interesting to search for distance metrics other than the mean-squared error. For instance, the error for the rotation angle may be represented using the covariance of the vector angular error [27]. The performance bounds will then be on these metrics rather than the conventional mean-squared error.

REFERENCES

[1] L. G. Brown, "A survey of image registration techniques," ACM Comput. Surv., vol. 24, pp. 325–376, 1992.
[2] D. L. G. Hill, P. G. Batchelor, M. Holden, and D. J. Hawkes, "Medical image registration," Phys. Med. Biol., vol. 46, pp. R41–R45, 2001.
[3] J. B. A. Maintz and M. A. Viergever, "A survey of medical image registration," Med. Image Anal., vol. 2, pp. 1–36, 1998.
[4] J. Fitzpatrick, J. West, and C. R. Maurer, Jr., "Predicting error in rigid-body, point-based registration," IEEE Trans. Med. Imag., vol. 17, no. 5, pp. 694–702, Oct. 1998.
[5] D. L. G. Hill, D. J. Hawkes, M. J. G. J. E. Crossman, T. C. S. Cox, A. J. S. E. E. C. M. L. Bracey, and P. Graves, "Registration of MR and CT images for skull base surgery using point-like anatomical features," Br. J. Radiol., vol. 64, pp. 1030–1035, 1991.
[6] X. Pennec and J.-P. Thirion, "A framework for uncertainty and validation of 3-D registration methods based on points and frames," Int. J. Comput. Vis., vol. 25, pp. 203–229, 1997.
[7] A. Gueziec and N. Ayache, "Smoothing and matching of 3-D space curves," Int. J. Comput. Vis., vol. 12, pp. 79–104, 1994.
[8] C. R. Maurer, Jr., R. J. Maciunas, and J. M. Fitzpatrick, "Registration of head CT images to physical space using a weighted combination of points and surfaces," IEEE Trans. Med. Imag., vol. 17, no. 5, pp. 753–761, Oct. 1998.
[9] C. A. Pelizzari, G. T. Y. Chen, D. R. Spelbring, R. R. Weichselbaum, and C.-T. Chen, "Accurate three-dimensional registration of CT, PET, and/or MR images of the brain," J. Comput. Assist. Tomogr., vol. 13, pp. 20–26, 1989.
[10] T. M. Peters, A. J. Clark, A. Olivier, E. P. Merchand, M. D. G. Mawko, L. Muresan, and R. Ethier, "Integrated stereotaxic imaging with CT, MR imaging, and digital subtraction angiography," Radiology, vol. 161, pp. 821–826, 1986.
[11] K. J. Friston, J. Ashburner, J. B. Poline, C. D. Frith, J. D. Heather, and R. Frackowiak, "Spatial registration and normalization of images," Human Brain Mapping, vol. 2, pp. 165–189, 1995.
[12] L. Lemieux, U. C. Wieshmann, F. N. Moran, D. R. Fish, and S. D. Shorvon, "The detection and significance of subtle changes in mixed-signal brain lesions by serial MRI scan matching and spatial normalization," Med. Image Anal., vol. 2, pp. 227–242, 1998.
[13] D. Rueckert, L. I. Sonoda, C. Hayes, D. L. G. Hill, M. O. Leach, and D. J. Hawkes, "Nonrigid registration using free-form deformations: Application to breast MR images," IEEE Trans. Med. Imag., vol. 18, pp. 712–721, 1999.
[14] F. Maes, A. Collignon, D. Vandermeulen, G. Marchal, and P. Suetens, "Multimodality image registration by maximization of mutual information," IEEE Trans. Med. Imag., vol. 16, no. 2, pp. 187–198, Apr. 1997.
[15] B. J. West, "Comparison and evaluation of retrospective intermodality brain image registration techniques," J. Comput. Assist. Tomogr., vol. 21, pp. 554–566, 1997.
[16] J. D. Mackinlay, "Opportunities for information visualization," IEEE Comput. Graph. Appl., vol. 20, no. 1, pp. 22–23, Jan.–Feb. 2000.
[17] R. P. Woods, S. T. Grafton, C. J. Holmes, S. R. Cherry, and J. C. Mazziotta, "Automated image registration: I. General methods and intrasubject, intramodality validation," J. Comput. Assist. Tomogr., vol. 22, pp. 139–152, 1998.
[18] M. van Herk, J. C. de Munck, J. V. Lebesque, S. Muller, C. Rasch, and A. Touw, "Automatic registration of pelvic computed tomography data and magnetic resonance scans including a full circle method for quantitative accuracy evaluation," Med. Phys., vol. 25, pp. 2054–2067, 1998.
[19] I. S. Yetik and A. Nehorai, "Performance bounds on image registration," in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing, vol. 2, Philadelphia, PA, Mar. 2005, pp. 117–120.
[20] D. Robinson and P. Milanfar, "Fundamental performance limits in image registration," IEEE Trans. Image Process., vol. 13, no. 9, pp. 1185–1199, Sep. 2004.
[21] J. P. C. R. Meyer and P. H. Bland, "Retrospective correction of intensity inhomogeneities in MRI," IEEE Trans. Med. Imag., vol. 14, no. 1, pp. 36–41, Mar. 1995.
[22] H. V. Poor, An Introduction to Signal Detection and Estimation, 2nd ed. New York: Springer, 1994.
[23] D. V. Hinkley, "On the ratio of two correlated random variables," Biometrika, vol. 56, pp. 635–639, 1969.
[24] W. Li and H. Leung, "A maximum likelihood approach for image registration using control point and intensity," IEEE Trans. Image Process., vol. 13, no. 8, pp. 1115–1127, Aug. 2004.
[25] I. S. Yetik, "Extended-source estimation using magnetoencephalography and performance bounds on image registration," Ph.D. dissertation, Dept. Electrical and Computer Eng., Univ. of Illinois at Chicago, Chicago, IL, 2004.
[26] I. S. Yetik and A. Nehorai, "Performance bounds on image registration," Dept. Electrical and Computer Engineering, Univ. of Illinois at Chicago, Rep. UIC-ECE-04-8, 2004.
[27] A. Nehorai and E. Paldi, "Vector-sensor array processing for electromagnetic source localization," IEEE Trans. Signal Process., vol. 42, no. 2, pp. 376–398, Feb. 1994.

İmam Şamil Yetik was born in Istanbul, Turkey, in 1978. He received the B.Sc. degree in electrical and electronics engineering from Boğaziçi University, Istanbul, Turkey, in 1998, the M.S. degree in electrical and electronics engineering from Bilkent University, Ankara, Turkey, in 2000, and the Ph.D. degree in electrical and computer engineering from the University of Illinois at Chicago in 2004.

He is currently with the Department of Biomedical Engineering at the University of California, Davis.

His research interests are mainly statistical sensor array signal processing and image processing, with applications to biomedicine.

Arye Nehorai (S'80–M'83–SM'90–F'94) received the B.Sc. and M.Sc. degrees in electrical engineering from The Technion—Israel Institute of Technology, Haifa, Israel, and the Ph.D. degree in electrical engineering from Stanford University, Stanford, CA.

After graduation, he worked as a Research Engineer for Systems Control Technology, Inc., Palo Alto, CA. From 1985 to 1989, he was an Assistant Professor, and from 1989 to 1995 an Associate Professor, with the Department of Electrical Engineering at Yale University, New Haven, CT. In 1995, he joined the Department of Electrical Engineering and Computer Science at the University of Illinois at Chicago (UIC) as a Full Professor. From 2000 to 2001, he was Chair of the department's Electrical and Computer Engineering (ECE) Division, which is now a new department. In 2001, he was named University Scholar of the University of Illinois. He holds a joint professorship with the ECE and Bioengineering Departments at UIC. His research interests are in signal processing, communications, and biomedicine.

Dr. Nehorai has been a Fellow of the Royal Statistical Society since 1996. He served as Chairman of the IEEE Signal Processing Connecticut Chapter from 1986 to 1995, and as a Founding Member, Vice-Chair, and later Chair of the IEEE Signal Processing Society's Technical Committee on Sensor Array and Multichannel (SAM) Processing from 1998 to 2002. He was the Co-General Chair of the First and Second IEEE SAM Signal Processing Workshops, held in 2000 and 2002. He is Vice President—Publications and Chair of the Publications Board of the IEEE Signal Processing Society. He was Editor-in-Chief of the IEEE TRANSACTIONS ON SIGNAL PROCESSING from January 2000 to December 2002. He is currently a Member of the Editorial Board of Signal Processing, the IEEE SIGNAL PROCESSING MAGAZINE, and The Journal of the Franklin Institute. He has previously been an Associate Editor of the IEEE TRANSACTIONS ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, the IEEE SIGNAL PROCESSING LETTERS, the IEEE TRANSACTIONS ON ANTENNAS AND PROPAGATION, the IEEE JOURNAL OF OCEANIC ENGINEERING, and Circuits, Systems, and Signal Processing. He was co-recipient, with P. Stoica, of the 1989 IEEE Signal Processing Society's Senior Award for Best Paper. He received the Faculty Research Award from the UIC College of Engineering in 1999 and was Adviser of the UIC Outstanding Ph.D. Dissertation Award of Aleksandar Dogandzic in 2001. He was elected Distinguished Lecturer of the IEEE Signal Processing Society for the term 2004 to 2005.

