A Robust, Feature-based Algorithm for Aerial
Image Registration
Mohamed S. Yasein
Department of Electrical and Computer Engineering,
University of Victoria
Victoria, B.C., Canada, V8W 3P6
Email: [email protected]
Pan Agathoklis
Department of Electrical and Computer Engineering,
University of Victoria
Victoria, B.C., Canada, V8W 3P6
Email: [email protected]
Abstract— In this paper an algorithm for aerial image registration is proposed. The objective of this algorithm is to register aerial images that have only partial overlap and are geometrically distorted due to different sensing conditions; in addition, they may be contaminated with noise, blurred, etc. The geometric distortions considered in the registration process are rotation, translation and scaling. The proposed algorithm consists of three main steps: feature point extraction using a feature point extractor based on scale-interaction of Mexican-hat wavelets; obtaining the correspondence between the feature points of the first (reference) and the second image based on Zernike moments of neighborhoods centered on the feature points; and estimating the transformation parameters between the first and the second images using an iterative weighted least squares algorithm. Experimental results illustrate the accuracy of image registration for images with partial overlap in the presence of additional image distortions, such as noise contamination and image blurring.
I. INTRODUCTION
Image registration is used in numerous real-life
applications such as remote sensing, medical image analysis, computer vision and pattern recognition [1]. Given two, or
more, images to be registered, image registration estimates the
parameters of the geometric transformation model that maps
a given image to the reference one.
Many image registration techniques have been proposed in
the literature. In general, existing image registration techniques
can be categorized into two classes. The first class utilizes im-
age intensity to estimate the parameters of the transformation
between two images using an approach involving all pixels of
the image, such as [2], [3]. On the other hand, the second class
extracts a set of feature points from the image and uses only
these points to obtain the parameters of the transformation,
such as [4], instead of using all pixels. Extensive surveys of image registration techniques can be found in [1], [5].
One of the applications of image registration is in remote
sensing where several aerial images are being used to obtain
coverage of a region. These individual images usually do
not cover the same area and may be sensed under different
conditions. A typical situation is when images have only
partial overlap and because of different sensing conditions,
they appear distorted. Further, due to environmental condi-
tions, some of them may be noisy or they may not be well
focused. The registration of such images has been extensively
considered in the literature due to the potential applications.
Earlier techniques used manual markers or GPS locations
for registration. Recently, techniques have been developed
for automatic registration of aerial images. Some of these
techniques rely on image contours, such as [6], [7], [8], others
rely on feature points of the image, such as [9], and others rely on lines and feature points of the image, such as [10]. The
performance of such techniques depends on several factors,
such as the area of overlap between images and to what extent
it is possible to model the different orientation between images
with simple geometric transformations. Further, image quality,
affected by distortions such as noise contamination and blur-
ring, as well as, image characteristics such as smooth/textured
areas or similarity of different areas, also play a role in the
techniques’ performance.
Feature point-based techniques rely on locating feature
points in both images and using these feature points to obtain
the transformation parameters for registering the two images.
They tend to give good results, but their performance depends on the accuracy of the feature point extractor. In order to
improve the performance, robust estimation techniques have
been used in [9], [10], [11].
In this paper, an algorithm for aerial image registration is
proposed. The main objective of the proposed algorithm is
accurately registering aerial images, which are distorted due
to different sensing conditions. The images may have partial
overlap and are further geometrically distorted. The possible
geometric distortions considered are rotation, scaling and
translation. The algorithm proposed here is an extension of the
one presented in [12] and can deal with images having partial
overlap using an adaptive weighted least squares technique.
The proposed algorithm involves three stages: feature point extraction, obtaining the correspondence between the feature
points of the two images, and transformation parameters
estimation. An enhanced efficient feature point extractor that is
based on scale-interaction of Mexican-hat wavelets [13], [12]
is utilized to extract two sets of feature points from the first and
the second images respectively. The correspondence between
these two sets of points is evaluated using Zernike moments
invariants of circular neighborhoods that are centered on the
feature points. Zernike moments have proved to be superior in
1-4244-0755-9/07/$20.00 ©2007 IEEE
terms of information redundancy, low sensitivity to noise [14],
and their rotation invariance property [15]. The transformation
parameters are estimated using an adaptive weighted least
squares technique with an objective function that depends
on the weighted difference between the locations of a set of
feature points in the first image and another set in the second
image. Experimental results show that the proposed algorithm
leads to accurate registration and robustness against several
types of distortions, e.g., image blurring and noise contamination.
The paper is organized as follows. Section II describes
the proposed registration algorithm in detail. In Section III,
experimental results are presented and the performance of
the proposed algorithm is discussed. Finally, conclusions are
drawn in Section IV.
II. THE PROPOSED REGISTRATION ALGORITHM
A geometric distortion of an image can take many forms,
from relatively simple transformations to complex geometric
distortions. The types of distortions considered here are rotation, translation, and scaling (RTS) transformations. Such transformations are rigid in the sense that shapes and angles are preserved. A combined transformation of these types typically has four parameters: translation parameters in the x and y directions (t_x and t_y, respectively), rotation angle θ, and scaling parameter s. This transformation maps a point p = (x, y) of the first image I to a point p′ = (x′, y′) of the transformed image I′ as follows:

    p′ = f(p; β) = T_(t_x, t_y)( s R(θ) p )                        (1)

where f is the transformation function, β = [t_x, t_y, θ, s]^T is the transformation parameters vector, T and R represent translation and rotation operations respectively, and I(p) represents the pixel value at location p = (x, y).

The problem of image registration is to estimate the transformation parameters in the above equation using the first and the second images. The images may be further distorted, for example by noise contamination, or the two images may have only partial overlap between them. The approach used in this paper is based on estimating the transformation parameters using an adaptive weighted least squares technique with an objective function that depends on the weighted difference between the locations of a set of feature points in the first image I and another set in the second image I′. In order
to obtain the transformation parameters, three main steps are
performed in the proposed algorithm: feature extraction, finding correspondence between feature points, and transformation parameters estimation. Typically, the feature extraction process
and obtaining the correspondence between feature points are
carried out on gray scale images or the luminance component
of color images.
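As a concrete illustration of the RTS model, the following sketch (plain NumPy; the function name is ours, not from the paper) applies scaling, rotation, and then translation to a set of 2-D points:

```python
import numpy as np

def rts_transform(points, tx, ty, theta, s):
    """Apply the RTS model: scale by s, rotate by theta, then
    translate by (tx, ty). `points` is an (N, 2) array of (x, y)."""
    c, si = np.cos(theta), np.sin(theta)
    R = np.array([[c, -si],
                  [si,  c]])
    return s * points @ R.T + np.array([tx, ty])

# Registration then amounts to estimating (tx, ty, theta, s) such that
# rts_transform maps the second image's feature points onto the first's.
```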
A. Feature Point Extraction
A set of feature points is extracted from the image using
an enhanced efficient feature point extractor that is based on
scale-interaction of Mexican-hat wavelets [13], [12]. This step
involves two stages. In the first stage, the response of the image
to a feature detection operation is obtained and in the second
stage, the feature points are localized by finding the local
maxima in the response. Obtaining the response S(x, y; σ_i, σ_j) in the first stage can be represented as

    S(x, y; σ_i, σ_j) = | V(x, y; σ_i) − V(x, y; σ_j) |            (2)

where

    V(x, y; σ) = I(x, y) ∗∗ ψ(x, y; σ)                             (3)

denotes the 2-D convolution of the image I with the Mexican hat wavelet, and I(x, y) represents the intensity of the image at location (x, y). In the spatial domain, the Mexican hat wavelet can be expressed as

    ψ(x, y; σ) = ( 2 − ρ²/σ² ) e^( −ρ²/(2σ²) )                     (4)

where ρ² = x² + y², σ is the scale of the function, and y and x are the vertical and horizontal coordinates respectively.
The second stage localizes the feature points of the image by finding the local maxima of the response S. A local maximum of S is a point with maximum value in a disk-shaped neighborhood of a given radius. An example showing the process of feature point extraction from an image is illustrated in Fig. 1.
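A minimal sketch of this two-stage extractor is given below, assuming NumPy, an FFT-based circular convolution for brevity, and a square neighborhood standing in for the disk-shaped one; the function names are ours, not from the paper:

```python
import numpy as np

def mexican_hat(size, sigma):
    # Sample psi(x, y; sigma) = (2 - rho^2/sigma^2) * exp(-rho^2/(2 sigma^2))
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    rho2 = xx ** 2 + yy ** 2
    return (2.0 - rho2 / sigma ** 2) * np.exp(-rho2 / (2.0 * sigma ** 2))

def scale_interaction_response(image, sigma_i, sigma_j, ksize=21):
    # S = |V_i - V_j|, where V is the image convolved with the wavelet;
    # the centered kernel is applied via FFT (circular boundary).
    def conv(img, kernel):
        padded = np.zeros_like(img, dtype=float)
        kh, kw = kernel.shape
        padded[:kh, :kw] = kernel
        padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))
        return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(padded)))
    v_i = conv(image, mexican_hat(ksize, sigma_i))
    v_j = conv(image, mexican_hat(ksize, sigma_j))
    return np.abs(v_i - v_j)

def local_maxima(S, radius=5, rel_thresh=0.5):
    # A feature point is a maximum of S within its neighborhood,
    # above a relative threshold on the global peak.
    peak = max(S.max(), 1e-12)
    pts = []
    for y in range(radius, S.shape[0] - radius):
        for x in range(radius, S.shape[1] - radius):
            patch = S[y - radius:y + radius + 1, x - radius:x + radius + 1]
            if S[y, x] >= rel_thresh * peak and S[y, x] == patch.max():
                pts.append((x, y))
    return pts
```

Applied to an image containing an isolated blob, the response S peaks at the blob center, which is then picked up as a feature point.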
Fig. 1. Feature point extraction process: (a) The feature points superimposed on the input image, (b) Response of applying the Mexican hat wavelet with scale σ_i, (c) Response of applying the Mexican hat wavelet with scale σ_j, (d) Absolute difference of the two responses and the locations of the obtained local maxima.
B. Correspondence between Points
Applying the feature extraction process on the first and the second images results in two sets of feature points, P and P′ respectively. The number of feature points in the first image is N and the corresponding number in the second image is N′. The objective of this step is to pair feature points of the first image with the corresponding ones of the second image. This is done using circular neighborhoods of a fixed radius centered on each feature point p_i in the first image and each point p′_j in the second image. The similarity measure used is
based on computing Zernike moments-based descriptors [14]
using these circular neighborhoods. One important feature of
the magnitude of the complex Zernike moments is that they
are rotationally invariant [16] and this is the reason for using
circular neighborhoods. If the image (or regions of interest,
i.e., the circular neighborhoods) is rotated by an angle φ, then the Zernike moment Z′_nm of the rotated image can be obtained as

    Z′_nm = Z_nm e^( −jmφ )                                        (5)
Thus, the magnitudes of the Zernike moments can be used as
rotationally invariant image features. Translation invariance is
achieved by taking the locations of the image feature points
as the centers of the neighborhoods.
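To make this invariance concrete, the following sketch (our own Python illustration; helper names such as `zernike_moment` are not from the paper) computes Zernike-moment magnitudes over the unit disk inscribed in a square patch and verifies that they are unchanged by a 90° rotation of the patch:

```python
import numpy as np
from math import factorial

def radial_poly(n, m, rho):
    # Radial polynomial R_nm(rho); requires |m| <= n and n - |m| even.
    m = abs(m)
    total = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s)
                * factorial((n + m) // 2 - s)
                * factorial((n - m) // 2 - s)))
        total += c * rho ** (n - 2 * s)
    return total

def zernike_moment(patch, n, m):
    # Z_nm over the unit disk inscribed in a square patch; the patch
    # center (the feature point) is taken as the origin.
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    x = (xx - (w - 1) / 2) / ((w - 1) / 2)
    y = (yy - (h - 1) / 2) / ((h - 1) / 2)
    rho = np.hypot(x, y)
    theta = np.arctan2(y, x)
    mask = rho <= 1.0
    V = radial_poly(n, m, rho) * np.exp(1j * m * theta)
    return (n + 1) / np.pi * np.sum(patch[mask] * np.conj(V[mask]))

def descriptor(patch, n_max=4):
    # Descriptor vector of magnitudes |Z_nm| for all valid (n, m), m >= 0.
    return np.array([abs(zernike_moment(patch, n, m))
                     for n in range(n_max + 1)
                     for m in range(0, n + 1)
                     if (n - m) % 2 == 0])
```

Only the magnitudes enter the descriptor vector, so the phase factor introduced by rotation drops out, which is exactly what makes the matching step below rotation-tolerant.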
The correspondence between feature points in the two
images is obtained using the following algorithm:
1) For each point p_i of P and p′_j of P′ in images I and I′, respectively, take a circular neighborhood of a fixed radius and construct a descriptor vector D [15] as

    D = ( |Z_00|, |Z_11|, …, |Z_nm|, … )                           (6)

where |Z_nm| is the magnitude of the Zernike moment of a non-negative integer order n and repetition m, n − |m| is even, and |m| ≤ n. When computing the Zernike moments of a circular neighborhood located around a feature point, the feature point is taken as the origin and the coordinates of each pixel inside the neighborhood are mapped to the range inside a unit circle, i.e., x² + y² ≤ 1. Zernike moments of order n with repetition m are given by

    Z_nm = ((n + 1)/π) Σ_x Σ_y V*_nm(ρ, θ) I(x, y),  x² + y² ≤ 1   (7)

where ρ = √(x² + y²), θ = tan⁻¹(y/x), and I(x, y) represents the intensity at a pixel inside the circular neighborhood. In the above equation, V*_nm denotes the complex conjugate of the Zernike polynomial of order n and repetition m, which can be defined as

    V_nm(ρ, θ) = R_nm(ρ) e^( jmθ )                                 (8)

where R_nm(ρ) is a real-valued radial polynomial defined as

    R_nm(ρ) = Σ_(s=0)^((n−|m|)/2) (−1)^s (n − s)! / [ s! ((n + |m|)/2 − s)! ((n − |m|)/2 − s)! ] ρ^(n−2s)   (9)

where s = 0, 1, …, (n − |m|)/2; |m| ≤ n; and n − |m| is even. While higher order moments contain information about fine details in the image, they are more sensitive to noise than lower order moments [14]. Therefore, the highest moment order used in the descriptor vector is chosen to achieve a compromise between noise sensitivity and the information content of the moments.
2) Construct the distance matrix E, where each entry E_ij of this matrix is given by

    E_ij = ‖ D_i − D′_j ‖ = √( Σ_k ( D_i(k) − D′_j(k) )² )         (10)

where D_i(k) and D′_j(k) are the entries of D_i and D′_j, respectively, i = 1, 2, …, N and j = 1, 2, …, N′. In other words, each entry E_ij represents the norm of the difference between the two descriptor vectors of the feature points p_i and p′_j in the first and the second images, respectively. In the distance matrix E, find the minimum distance coefficients along rows and along columns. A correspondence between two points p_i and p′_j is established if, and only if, the minimum distance coefficient in a row is also the minimum distance coefficient in the associated column of E. This results in N_c paired points, where N_c ≤ min(N, N′).
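The mutual-minimum pairing rule above can be sketched as follows (our own NumPy illustration; `match_mutual_min` is a hypothetical helper name):

```python
import numpy as np

def match_mutual_min(D1, D2):
    """Pair descriptors by the mutual-minimum rule: (i, j) is a pair
    iff E[i, j] is the minimum of row i AND of column j.
    D1: (N, d) descriptors of image 1; D2: (N', d) of image 2."""
    # Distance matrix E: Euclidean norm of descriptor differences.
    E = np.linalg.norm(D1[:, None, :] - D2[None, :, :], axis=2)
    pairs = []
    for i in range(E.shape[0]):
        j = int(np.argmin(E[i]))
        if int(np.argmin(E[:, j])) == i:   # row minimum is also column minimum
            pairs.append((i, j))
    return pairs
```

Points without a mutual minimum (for instance, features lying outside the overlap area) are simply left unpaired.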
C. Transformation Parameters Estimation
The transformation parameters, required to transform the distorted image to its appropriate size, orientation and position, will be estimated by solving an iterative weighted least squares minimization problem where the objective function depends on the distance between the feature point pairs in the two images. The objective function is defined in terms of the norm of the weighted errors

    e(β) = Σ_(i=1)^(N_c) w_i ‖ f(p′_i; β) − p_i ‖²                 (11)

where W = [w_1, …, w_i, …, w_(N_c)], w_i is the weight associated with the distance between the feature point pair (p_i and p′_i), and β = [t_x, t_y, θ, s]^T is a vector of the transformation parameters. The transformation parameters can then be obtained by solving the optimization problem

    β̂ = arg min_β e(β)                                             (12)

The solution of this optimization problem would give the correct transformation parameters provided that the correspondence obtained in the previous sub-section is correct for all feature point pairs. This will not be the case if, for example, the two images have only partial overlap between them.
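A minimal sketch of such an estimator is given below, assuming a closed-form weighted least-squares fit of the RTS model and a simple residual-based weight update; the reweighting rule here is our illustrative choice and may differ from the paper's adaptive scheme:

```python
import numpy as np

def fit_rts_weighted(p, q, w):
    """Weighted least-squares fit of p ~ s*R(theta)*q + t.
    Linear in a = s*cos(theta), b = s*sin(theta), tx, ty."""
    x, y = q[:, 0], q[:, 1]
    A = np.zeros((2 * len(q), 4))
    A[0::2] = np.column_stack([x, -y, np.ones_like(x), np.zeros_like(x)])
    A[1::2] = np.column_stack([y,  x, np.zeros_like(x), np.ones_like(x)])
    rhs = p.reshape(-1)
    sw = np.sqrt(np.repeat(w, 2))          # weight each point's two equations
    a, b, tx, ty = np.linalg.lstsq(A * sw[:, None], rhs * sw, rcond=None)[0]
    return np.hypot(a, b), np.arctan2(b, a), tx, ty

def register(p, q, n_iter=10):
    """Iteratively reweighted estimation: pairs with large residuals
    (false correspondences, points outside the overlap) are
    progressively downweighted."""
    w = np.ones(len(p))
    for _ in range(n_iter):
        s, th, tx, ty = fit_rts_weighted(p, q, w)
        c, si = np.cos(th), np.sin(th)
        R = np.array([[c, -si], [si, c]])
        r = np.linalg.norm(p - (s * q @ R.T + np.array([tx, ty])), axis=1)
        # Soft outlier rejection: weight decays for residuals far above
        # the median residual (our choice of update rule).
        w = 1.0 / (1.0 + (r / (np.median(r) + 1e-9)) ** 2)
    return s, th, tx, ty
```

With a majority of correct pairs, the few false correspondences receive vanishing weights after a handful of iterations and no longer bias the estimate.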
Fig. 2. Two examples (Example 1 and Example 2) of registering aerial images: (a) First images, (b) Second images, (c) Correspondence between the feature points of the first images, represented by crosses, and the feature points of the transformed distorted images, represented by squares, and (d) The transformed distorted images overlaid on the corresponding reference images.
between images, in the presence of additional distortions, such
as image blurring and noise contamination. Results indicate
that the use of the iterative weighted least squares algorithm
is very effective in eliminating feature points that have false
correspondence and that the proposed algorithm leads to an
accurate estimation of the transformation parameters.
ACKNOWLEDGMENT
This work has been supported by the Natural Sciences and
Engineering Research Council of Canada (NSERC).
REFERENCES
[1] L. G. Brown, "A survey of image registration techniques," ACM Computing Surveys (CSUR), vol. 24, no. 4, pp. 325–376, 1992.
[2] B. S. Reddy and B. N. Chatterji, "An FFT-based technique for translation, rotation, and scale-invariant image registration," IEEE Transactions on Image Processing, vol. 5, no. 5, 1996.
[3] G. Wolberg and S. Zokai, "Robust image registration using log-polar transform," in Proceedings of the IEEE International Conference on Image Processing (ICIP), Vancouver, BC, Canada, Sept. 2000.
[4] Y. Jianchao, "Image registration based on both feature and intensity matching," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Salt Lake City, UT, USA, May 2001, pp. 1693–1696.
[5] B. Zitová and J. Flusser, "Image registration methods: a survey," Image and Vision Computing, vol. 21, no. 11, pp. 977–1000, 2003.
[6] H. Li, B. S. Manjunath and S. K. Mitra, "A contour-based approach to multisensor image registration," IEEE Transactions on Image Processing, vol. 4, no. 3, pp. 320–334, 1995.
[7] X. Dai and S. Khorram, "A feature-based image registration algorithm using improved chain-code representation combined with invariant moments," IEEE Transactions on Geoscience and Remote Sensing, vol. 37, no. 5, pp. 2351–2362, 1999.
[8] V. Govindu and C. Shekhar, "Alignment using distributions of local geometric properties," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 10, pp. 1031–1043, 1999.
[9] T. Kim and Y.-J. Im, "Automatic satellite image registration by combination of matching and random sample consensus," IEEE Transactions on Geoscience and Remote Sensing, vol. 41, no. 5, pp. 1111–1117, 2003.
[10] C. Rao, Y. Guo, H. Sawhney and R. Kumar, "A heterogeneous feature-based image alignment method," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR 2006), Hong Kong, vol. 2, pp. 345–350, Aug. 2006.
[11] M. A. Fischler and R. C. Bolles, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," Communications of the ACM, vol. 24, no. 6, pp. 381–395, Jun. 1981.
[12] M. S. Yasein and P. Agathoklis, "Automatic and robust image registration using feature points extraction and Zernike moments invariants," in Proceedings of the Fifth IEEE International Symposium on Signal Processing and Information Technology, Athens, Greece, Dec. 2005, pp. 566–571.
[13] M. Kutter, S. K. Bhattacharjee and T. Ebrahimi, "Toward second generation watermarking schemes," in Proceedings of the IEEE International Conference on Image Processing (ICIP), Kobe, Japan, Oct. 1999, pp. 320–323.
[14] C.-H. Teh and R. T. Chin, "On image analysis by the methods of moments," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 10, no. 4, pp. 496–513, 1988.
[15] A. Khotanzad and Y. H. Hong, "Invariant image recognition by Zernike moments," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 5, pp. 489–497, 1990.
[16] S. X. Liao and M. Pawlak, "On the accuracy of Zernike moments for image analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 12, pp. 1358–1364, 1998.
[17] F. R. Hampel, E. M. Ronchetti, P. J. Rousseeuw and W. A. Stahel, Robust Statistics: The Approach Based on Influence Functions. Wiley, 1986.