8/3/2019 Tutorial_Wavelet for Image Fusion
Wavelet for Image Fusion
Shih-Gu Huang
Graduate Institute of Communication Engineering & Department of Electrical Engineering,
National Taiwan University
Abstract
Image fusion is the process of combining information from multiple images of the same
scene. The result of image fusion is a new image that retains the most desirable information
and characteristics of each input image. The main application of image fusion is merging the
gray-level high-resolution panchromatic image and the colored low-resolution multispectral
image. It has been found that the standard fusion methods perform well spatially but usually
introduce spectral distortion. To overcome this problem, numerous multiscale-transform
based fusion schemes have been proposed. In this paper, we focus on the fusion methods
based on the discrete wavelet transform (DWT), the most popular tool for image processing.
Because of the numerous multiscale transforms, different fusion rules have been proposed for
different purposes and applications. In this paper, experimental results of several applications
and comparisons between different fusion schemes and rules are presented.
1. Introduction
1.1. Image fusion
Image fusion is the process of combining information from multiple images of the same
scene. These images may be captured from different sensors, acquired at different times, or
have different spatial and spectral characteristics. The objective of image fusion is to retain
the most desirable characteristics of each image. With the availability of multisensor data in
many fields, image fusion has been receiving increasing attention in research for a wide
spectrum of applications. We use the following four examples to illustrate the purpose of
image fusion:
(1) In optical remote sensing fields, the multispectral (MS) image which contains color
information is produced by three sensors covering the red, green and blue spectral
wavelengths. Because of the trade-off imposed by the physical constraint between spatial
and spectral resolutions, the MS image has poor spatial resolution. In contrast, the
panchromatic (PAN) image has high spatial resolution but no color information.
Image fusion can combine the geometric detail of the PAN image and the color
information of the MS image to produce a high-resolution MS image. Fig. 1.1 shows the
MS and PAN images provided by IKONOS, a commercial earth observation satellite, and
the resulting fused image [1].
Fig. 1.1. Image fusion for an IKONOS scene of Cairo, Egypt: (a) multispectral low-resolution input
image, (b) panchromatic high-resolution input image, and (c) the fused image obtained by the IHS method.
(2) As the optical lenses in CCD devices have limited depth of focus, it is often impossible to
obtain an image in which all relevant objects are in focus. To capture all interesting objects
in focus, several CCD images, each of which contains some part of the objects in focus, are
required. The fusion process is expected to select all focused objects from these images.
An experiment is shown in Fig. 1.2 [2].
Fig. 1.2. CCD visual images with the (a) right and (b) left clocks out of focus, respectively; (c) the
resulting fused image from (a) and (b) with the two clocks in focus.
(3) Sometimes the in-focus problem is due to the different characteristics of different types of optical sensors, such as visual sensors, infrared sensors, gamma sensors and X-ray
sensors. Each of these types of sensors offers different information to the human operator
or a computer vision system. An experiment is shown in Fig. 1.3 [3]. The image in Fig.
1.3(a), captured from a visual sensor, provides most visual and textural details, while the
image in Fig. 1.3(b), captured from an infrared sensor, can highlight the man hiding behind
the smoke. Therefore, the image fusion process can combine all the interesting details into
a composite image (see Fig. 1.3(c)).
Fig. 1.3. Images captured from the (a) visual sensor and (b) infrared sensor, respectively; (c) the
resulting fused image from (a) and (b) with all interesting details in focus.
(4) In medical imaging, different medical imaging techniques may provide scans with
complementary and occasionally conflicting information, such as magnetic resonance
imaging (MRI), computed tomography (CT), positron emission tomography (PET), and single
photon emission computed tomography (SPECT). In Fig. 1.4, the MRI and PET images are
fused [4]. The PET is a functional image displaying the brain activity, but without
anatomical information. The MRI, having higher spatial resolution than the PET, provides
anatomical information but without functional activity. The objective of image fusion is to achieve a high spatial resolution image with functional and anatomical information.
Fig. 1.4. (a) MRI and (b) PET images; (c) fused image from (a) and (b).
1.2. Image fusion methods
Various methods have been developed to perform image fusion. Some
well-known image fusion methods are listed below [5-9].
(1) Intensity-hue-saturation (IHS) transform based fusion
(2) Principal component analysis (PCA) based fusion
(3) Arithmetic combinations
(a) Brovey transform
(b) Synthetic variable ratio technique
(c) Ratio enhancement technique
(4) Multiscale transform based fusion
(a) High-pass filtering method
(b) Pyramid method
(i) Gaussian pyramid
(ii) Laplacian Pyramid
(iii) Gradient pyramid
(iv) Morphological pyramid
(v) Ratio of low pass pyramid
(vi) Contrast pyramid
(vii) Filter-subtract-decimate pyramid
(c) Wavelet transform
(i) Discrete wavelet transform (DWT)
(ii) Stationary wavelet transform
(iii) Dual-tree discrete wavelet transform
(iv) Lifting wavelet transform
(v) Multi-wavelet transform
(d) Curvelet transform
(5) Total probability density fusion
(6) Biologically inspired information fusion
In this paper, we first introduce the IHS-based and PCA-based fusion schemes because they are
classical, simple and probably the most popular. Then we focus our attention on the DWT-based
fusion methods and hybrid methods which combine the IHS and DWT. As for the other
fusion methods mentioned above, a brief introduction to the various pyramid methods, total
probability density fusion and biologically inspired information fusion is available in [6].
A comparison of different wavelet transforms in image fusion is available in [8], while the
arithmetic combinations are discussed in [9].
1.3. Wavelet based image fusion
The standard image fusion techniques, such as the IHS-based method, the PCA-based method and
the Brovey transform method, operate in the spatial domain. However, spatial-domain fusion
may produce spectral degradation. This is particularly crucial in optical remote sensing if the
images to fuse were not acquired at the same time. Therefore, compared with the ideal
output of the fusion, these methods often produce poor results. Over the past decade, new
approaches or improvements on the existing approaches have regularly been proposed to
overcome the problems in the standard techniques. As multiresolution analysis has become
one of the most promising methods in image processing, the wavelet transform has become a
very useful tool for image fusion. It has been found that wavelet-based fusion techniques
outperform the standard fusion techniques in spatial and spectral quality, especially in
minimizing color distortion [7,10-11]. Schemes that combine the standard methods (IHS or
PCA) with wavelet transforms produce results superior to those of either the standard methods or
simple wavelet-based methods alone. However, the tradeoff is higher complexity and cost.
1.4. Organization
The rest of this paper is organized as follows. Section 2 describes some image processing
background that will be used in the fusion schemes. In Section 3, we give a brief
introduction to the discrete wavelet transform (DWT). In Section 4, the standard image fusion
schemes and some DWT based fusion schemes are introduced, while several basic and simple
fusion rules are introduced in Section 5. Section 6 presents some experimental results and
comparisons between different fusion schemes and rules. Finally, this paper is concluded in Section 7.
2. Background
2.1 Image registration (IR) [16]
IR is the process that transforms several images into the same coordinate system. For example, given a reference image, several copies of it may be deformed by rotation, shearing,
twisting, etc. With the given image as reference, IR can align the deformed images to match
the given image. Therefore, IR is an essential preprocessing operation for image
fusion.
2.2 Image resampling (RS) [17]
RS is the procedure that creates a new version of the original image with a different width and
height in pixels. Simply speaking, RS changes the size of the image. Increasing the size is
called upsampling; for example, the left image in Fig. 2.1 is enlarged by a factor of 10 as
shown in the right image in Fig. 2.1. In contrast, decreasing the size is called downsampling.
Note that the spatial resolution does not change after the RS procedure, whether upsampling
or downsampling.
Fig. 2.1. Image upsampling with a factor of 10.
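As a minimal sketch of RS, the following NumPy function performs nearest-neighbour resampling; the function name and the interpolation choice are assumptions for illustration, not prescribed by the text:

```python
import numpy as np

def resample_nn(img, factor):
    """Nearest-neighbour resampling by a (possibly fractional) factor.

    Upsampling (factor > 1) adds pixels but, as noted above, does not
    increase the spatial resolution: no new detail is created.
    """
    h, w = img.shape[:2]
    new_h, new_w = int(round(h * factor)), int(round(w * factor))
    # Map each output pixel back to its nearest source pixel.
    rows = np.minimum((np.arange(new_h) / factor).astype(int), h - 1)
    cols = np.minimum((np.arange(new_w) / factor).astype(int), w - 1)
    return img[np.ix_(rows, cols)]
```

The same function covers both cases: `factor = 2` upsamples and `factor = 0.5` downsamples.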
2.3 Histogram matching [18]
Consider two images X and Y. If Y is histogram-matched to X, the pixel values of Y are changed
by a nonlinear transform such that the histogram of the new Y is the same as that of X.
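The nonlinear transform can be sketched by mapping grey levels through the empirical cumulative distribution functions (CDFs) of the two images; the function name and the interpolation-based mapping are illustrative assumptions:

```python
import numpy as np

def histogram_match(y, x):
    """Monotonically remap the pixel values of y so that its histogram
    approximates that of the reference image x."""
    y_vals, y_idx, y_counts = np.unique(y.ravel(), return_inverse=True,
                                        return_counts=True)
    x_vals, x_counts = np.unique(x.ravel(), return_counts=True)
    # Empirical CDFs of both images.
    y_cdf = np.cumsum(y_counts) / y.size
    x_cdf = np.cumsum(x_counts) / x.size
    # For each grey level of y, pick the x grey level with the closest CDF.
    matched_vals = np.interp(y_cdf, x_cdf, x_vals)
    return matched_vals[y_idx].reshape(y.shape)
```

The mapping is monotonic, so the relative ordering of grey levels in Y is preserved while its histogram is pulled toward that of X.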
3. Discrete Wavelet Transform: A Review
In this paper, we only introduce the discrete wavelet transform (DWT) based fusion schemes
because the DWT is the most basic and simplest of the numerous multiscale transforms, and
other types of wavelet-based fusion schemes are usually similar to the DWT fusion scheme.
Therefore, we give a brief introduction to the DWT [15] in this section.
3.1. Continuous wavelet transform (CWT)
Given an input signal x(t), the CWT of x(t) is defined as

X_w(a, b) = (1/√b) ∫ x(t) ψ((t − a)/b) dt,

where the location factor a can be any real number, and the scaling factor b can be any positive
real number. The mother wavelet ψ(t) is a well-designed function so that the CWT has low
computational complexity and is reversible. As b becomes smaller, ψ((t − a)/b) is
compressed and becomes more like a high-frequency signal, and thus the output X_w(a, b)
represents the high-frequency component of x(t) after the inner product with ψ((t − a)/b).
A smaller b also implies that the window size of ψ((t − a)/b) is smaller; that is, the location
(time) resolution is finer. Therefore, the location-scaling (time-frequency) resolution plane is
as shown in Fig. 3.1.
Fig. 3.1. Time-frequency resolution plane for wavelet transform.
3.2. Continuous wavelet transform with discrete coefficients (DC-CWT)
Although the CWT performs well mathematically, it is hard to implement and thus not useful
in practice. If we restrict the values of the parameters a and b to

a = n·2^m and b = 2^m,

the CWT can be rewritten as

X_w(n, m) = 2^(−m/2) ∫ x(t) ψ(2^(−m) t − n) dt.

We call this special case the CWT with discrete coefficients (DC-CWT). The main reason for this
setting is ease of implementation. As the scaling function φ(t) and the mother wavelet ψ(t)
satisfy the two-scale relations

φ(t) = Σ_k h[k] √2 φ(2t − k) and ψ(t) = Σ_k g[k] √2 φ(2t − k),
X_w(n, m) can then be computed from the level-(m−1) coefficients by digital convolution,

X_w(n, m) = Σ_k g[k] X_φ(2n + k, m − 1);
X_φ(n, m) = Σ_k h[k] X_φ(2n + k, m − 1),

where X_φ denotes the scaling (approximation) coefficients, defined as in the CWT but with φ
in place of ψ.
The function φ(t), called the scaling function, can be deemed a low-pass filter compared to the high-pass
filter ψ(t). Under the setting a = n·2^m and b = 2^m, we can only obtain coefficients
at particular positions of the time-frequency distribution, as shown in Fig. 3.2. However,
these coefficients are sufficient for most acoustic and image processing applications.
Fig. 3.2. Time-frequency positions of the DC-CWT coefficients.
3.3. Discrete wavelet transform (DWT)
3.3.1. 1-D discrete wavelet transform
The DWT is similar to the DC-CWT except that the input signal is discrete. Therefore, the
design rules for φ(t), ψ(t), g[k] and h[k] are similar to those in the DC-CWT. The block diagram of the
1-D DWT is illustrated in Fig. 3.3.
Fig. 3.3. The block diagram of the 1-D DWT.
3.3.2. Multi-level 1-D discrete wavelet transform
Furthermore, the 1-D DWT coefficients can be decomposed again using the 1-D DWT. This
scheme is called the multi-level 1-D DWT (see Fig. 3.4 for the 2-level 1-D DWT).
Fig. 3.4. The block diagram of the 2-level 1-D DWT.
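The analysis filter bank above (filter, then downsample by 2, then re-decompose the approximation) can be sketched as follows. The Haar filter pair is an assumption for concreteness; any QMF pair would serve, and the function names are illustrative:

```python
import numpy as np

# Haar analysis filters (an assumed choice; any QMF pair works).
H = np.array([1, 1]) / np.sqrt(2)   # low-pass h[k]
G = np.array([1, -1]) / np.sqrt(2)  # high-pass g[k]

def dwt1d(x):
    """One level of the 1-D DWT: filter, then downsample by 2."""
    x = np.asarray(x, dtype=float)
    approx = np.convolve(x, H)[1::2]  # scaling (low-pass) coefficients
    detail = np.convolve(x, G)[1::2]  # wavelet (high-pass) coefficients
    return approx, detail

def dwt1d_multilevel(x, levels):
    """Multi-level 1-D DWT: re-decompose the approximation at each level."""
    details = []
    approx = np.asarray(x, dtype=float)
    for _ in range(levels):
        approx, d = dwt1d(approx)
        details.append(d)
    return approx, details
```

Each level halves the length of the approximation, matching the dyadic tiling of Fig. 3.2.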
3.3.3. Inverse discrete wavelet transform (IDWT)
The reconstruction process from the DWT coefficients, called the inverse DWT (IDWT), is shown
in the right part of Fig. 3.5. The filters h[n], g[n], h1[n] and g1[n] in the figure can be designed by
the quadrature mirror filter (QMF) method or the orthonormal filter method.
Fig. 3.5. The block diagrams of DWT and IDWT.
3.3.4. 2-D discrete wavelet transform
The 2-D DWT is very useful for image processing because image data are discrete and the
spatial-spectral resolution depends on the frequency. An example is shown in Fig. 3.6. The
DWT has the property that the spatial resolution is small in the low-frequency bands but large in the
high-frequency bands. The top-left sub-image (the band with the lowest frequencies) has the
smallest spatial resolution and represents the approximation information of the original image.
Thus, the DWT is suitable for image compression. In contrast, the other sub-images (the
bands with high frequencies) show the detailed information of the original image. Therefore,
these sub-images can be used for edge detection or corner detection.
Fig. 3.6. The 2-D DWT of a square image.
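The 2-D DWT is separable: filter the rows, then the columns. A minimal sketch, again assuming Haar filters (the band names LL, H1, H2, H3 follow the text; the helper names are my own):

```python
import numpy as np

H = np.array([1, 1]) / np.sqrt(2)   # Haar low-pass (an assumed choice)
G = np.array([1, -1]) / np.sqrt(2)  # Haar high-pass

def _filter_rows(img, f):
    # Convolve every row with f, then downsample along that row by 2.
    return np.apply_along_axis(lambda r: np.convolve(r, f)[1::2], 1, img)

def dwt2d(img):
    """One level of the separable 2-D DWT.

    Returns the approximation LL and the detail bands H1, H2, H3
    described in the text.
    """
    img = np.asarray(img, dtype=float)
    low = _filter_rows(img, H)         # low-pass along rows
    high = _filter_rows(img, G)        # high-pass along rows
    ll = _filter_rows(low.T, H).T      # then filter along columns
    h1 = _filter_rows(low.T, G).T
    h2 = _filter_rows(high.T, H).T
    h3 = _filter_rows(high.T, G).T
    return ll, (h1, h2, h3)
```

Each sub-image has half the width and half the height of the input, which is why the four bands tile the original image area in Fig. 3.6.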
4. Image Fusion Schemes
In this section, we use two examples of image fusion to introduce some basic fusion schemes, including
intensity-hue-saturation (IHS) transform fusion, principal component analysis (PCA) fusion and
wavelet-based fusion schemes. One example is the fusion of panchromatic (PAN) and
multispectral (MS) images. The fused image is required to combine the high-resolution
spatial information of the PAN image and the color information of the MS image. See Fig. 1.1 in
Section 1. The other example is the fusion of multifocus images. In this example, several images
with different focus regions are combined to produce a fused image that contains all these
focus regions. See Fig. 1.2 in Section 1.
4.1. Panchromatic and multispectral image fusion
Consider PAN and MS images acquired from a remote IKONOS sensor. IKONOS is
a commercial earth observation satellite. It offers MS and PAN imagery characterized by
4-meter and 1-meter spatial resolution, respectively. The PAN image has no color
information while the MS image covers three spectral bands (MS1 red, MS2 green, MS3
blue) as shown in Table 4.1. In order to take advantage of the high spatial information of the
PAN image and the essential spectral information of the MS image, we introduce the IHS-based,
PCA-based, DWT-based and DWT combined with IHS fusion schemes.
Band Spectral resolution Spatial resolution
PAN 0.45-0.90 μm 1 meter
MS3 (Blue) 0.445-0.516 μm 4 meters
MS2 (Green) 0.506-0.595 μm 4 meters
MS1 (Red) 0.632-0.698 μm 4 meters
Near IR 0.757-0.853 μm 4 meters
Table 4.1. The spectral and spatial resolution of IKONOS.
4.1.1. Standard IHS fusion method
As the MS image is represented in RGB color space, we can separate the intensity (I) and the color
information, hue (H) and saturation (S), by the IHS transform. The I component can be deemed
an image without color information. Because the I component resembles the PAN image, we
match the histogram of the PAN image to the histogram of the I component. Then, the I
component is replaced by the high-resolution PAN image before the inverse IHS transform is
applied. The main steps, illustrated in Fig. 4.1, of the standard IHS fusion scheme are
[7,12-13]:
(1) Perform image registration (IR) to PAN and MS, and resample MS.
(2) Convert MS from RGB space into IHS space.
(3) Match the histogram of PAN to the histogram of the I component.
(4) Replace the I component with PAN.
(5) Convert the fused MS back to RGB space.
Fig. 4.1. Standard IHS fusion scheme.
In step (1), image resampling (RS) is utilized so that the MS image has the same spatial
resolution as the PAN image. IR and RS have been introduced in Section 2. In step (3), because the
mean and variance of the I component (I = (R+G+B)/3) are different from those of the PAN image, histogram matching is employed to prevent a change in the histogram of the MS image.
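The scheme can be sketched compactly under two assumptions: the linear intensity I = (R+G+B)/3 is used, and step (3) is approximated by matching only the mean and variance of PAN to I. With this linear model, substituting I by PAN and inverting the transform reduces to adding the difference (PAN' − I) to every band; the function name is illustrative:

```python
import numpy as np

def ihs_fuse(ms, pan):
    """Sketch of the standard IHS scheme (steps 2-5 above).

    ms:  (H, W, 3) registered, resampled RGB multispectral image
    pan: (H, W) panchromatic image
    """
    ms = ms.astype(float)
    i = ms.mean(axis=2)                       # intensity component I
    # Step 3 surrogate: match mean and variance of PAN to those of I
    # (an assumption; full histogram matching would be used in practice).
    pan = pan.astype(float)
    pan_m = (pan - pan.mean()) / (pan.std() + 1e-12) * i.std() + i.mean()
    # Steps 4-5: replace I by the matched PAN and convert back to RGB,
    # which for the linear I model is an additive correction per band.
    return ms + (pan_m - i)[..., None]
```

A quick sanity check of the model: substituting I by I itself must leave the MS image unchanged.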
Fig. 4.2. Spectral response of the PAN and MS sensor of IKONOS.
4.1.2. Standard PCA fusion method
An alternative to the IHS-based method is principal component analysis (PCA). In Fig. 4.2 [14], it is
found that the MS bands are somewhat correlated. The PCA transform can convert the
correlated MS bands into a set of uncorrelated components, say PC1, PC2, PC3, .... The first
principal component (PC1) also resembles the PAN image. Therefore, the PCA fusion scheme
is similar to the IHS fusion scheme [9-10,12]:
(1) Perform IR to PAN and MS, and resample MS.
(2) Convert the MS bands into PC1, PC2, PC3, ... by the PCA transform.
(3) Match the histogram of PAN to the histogram of PC1.
(4) Replace PC1 with PAN.
(5) Convert PAN, PC2, PC3, ... back by the inverse PCA.
Fig. 4.3. Standard PCA fusion scheme.
In general, the PC1 collects the spatial information which is common to all the bands, while
PC2, PC3, ... collect the spectral information that is specific to each band [7]. Therefore, PCA-based
fusion is very suitable for merging the MS and PAN images. Compared to the IHS fusion,
the PCA fusion has the advantage that the MS image is allowed to contain more than three
bands. For instance, the near-infrared component (Table 4.1) may also be taken into account, or the
MS image may be formed from more than one (satellite) sensor. In such cases, the MS bands cannot be
exactly separated into R, G and B bands.
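The PCA scheme above can be sketched for an arbitrary number of bands B via the eigen-decomposition of the band covariance matrix; the function name and the mean/variance matching surrogate for step (3) are assumptions for illustration:

```python
import numpy as np

def pca_fuse(ms, pan):
    """Sketch of the standard PCA scheme for an (H, W, B) MS image
    (any number of bands B) and an (H, W) PAN image."""
    h, w, b = ms.shape
    X = ms.reshape(-1, b).astype(float)
    mean = X.mean(axis=0)
    # Eigenvectors of the band covariance matrix give the PCA transform.
    cov = np.cov((X - mean).T)
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1]          # PC1 (largest variance) first
    eigvec = eigvec[:, order]
    pcs = (X - mean) @ eigvec                 # PC1, PC2, ...
    # Step 3 surrogate: match mean/variance of PAN to PC1, then substitute.
    p = pan.astype(float).ravel()
    pc1 = pcs[:, 0].copy()
    pcs[:, 0] = (p - p.mean()) / (p.std() + 1e-12) * pc1.std() + pc1.mean()
    # Step 5: inverse PCA back to the band space.
    fused = pcs @ eigvec.T + mean
    return fused.reshape(h, w, b)
```

Because the transform is driven by the data covariance, the same code handles a four-band MS image (including the near-IR band of Table 4.1) without change, which is the advantage noted above.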
4.1.3. Substitutive DWT fusion method
The standard IHS and PCA based fusion methods are suitable for the case that the I component
(or PC1) of the MS image is highly correlated with the PAN image. This is possible only if the
PAN band covers all the MS bands, and also if the I component (or PC1) of the MS image
and the PAN image have similar spectral responses. The second condition is usually impossible
(see Fig. 4.2). Thus, the two standard fusion methods usually introduce spectral distortion.
As mentioned in Section 3, the one-level DWT can decompose an image into a set of
low-resolution sub-images (DWT coefficients): LL, H1, H2 and H3. The LL sub-image is the
approximation image while the H1, H2 and H3 sub-images contain the details of the image. It
is straightforward for the fusion method to retain the LL sub-image of the MS image and
replace H1, H2 and H3 by those of the PAN image. Therefore, the fused image contains
the extra spatial details from the high-resolution PAN image. Also, if we downsample the
fused image, the low-resolution fused image will be approximately equivalent to the original
low-resolution MS image. That is, the DWT fusion method may outperform the standard
fusion methods in terms of minimizing the spectral distortion. The main steps, illustrated in
Fig. 4.4, of the substitutive DWT fusion scheme are [10]:
(1) Perform IR to PAN and MSi, and resample MSi.
(2) Match the histogram of PAN to the histogram of MSi.
(3) Apply the DWT to both the histogram-matched PAN and MSi.
(4) Replace the detail sub-images (H1, H2 and H3) of MSi with those of PAN.
(5) Perform the IDWT on the new combined set of sub-images.
Fig. 4.4. Substitutive DWT fusion scheme.
If the MS image contains three bands, i.e. MS1-MS3, the steps mentioned above need to be
repeated three times. That is, four DWTs and three IDWTs are required. Since the resolution of
the MS image is 1/4 of that of the PAN image in IKONOS, we can modify the scheme as:
(1) Perform IR to PAN and MSi.
(2) Match the histogram of PAN to the histogram of MSi.
(3) Transform the histogram-matched PAN using the n-level DWT (n = 2 in this case).
(4) Replace the approximation (LL) sub-image of PAN with MSi.
(5) Perform the IDWT on the new combined set of sub-images.
This modification saves three DWT computations, and it can be applied to any DWT-based
fusion method. However, in some applications, the RS process is still necessary in step
(1), and we will discuss it in Section 4.2.
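The modified scheme can be sketched for n = 1 (a resolution ratio of 2; the IKONOS case above uses n = 2) with an orthonormal Haar DWT implemented on 2×2 blocks. The Haar choice, the block implementation and the function names are assumptions for illustration:

```python
import numpy as np

def haar2(img):
    """One-level 2-D orthonormal Haar DWT via 2x2 blocks."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2          # approximation
    h1 = (a - b + c - d) / 2          # detail bands
    h2 = (a + b - c - d) / 2
    h3 = (a - b - c + d) / 2
    return ll, h1, h2, h3

def ihaar2(ll, h1, h2, h3):
    """Inverse of haar2 (perfect reconstruction)."""
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2] = (ll + h1 + h2 + h3) / 2
    out[0::2, 1::2] = (ll - h1 + h2 - h3) / 2
    out[1::2, 0::2] = (ll + h1 - h2 - h3) / 2
    out[1::2, 1::2] = (ll - h1 - h2 + h3) / 2
    return out

def substitutive_dwt_fuse(pan, ms_band):
    """Steps 3-5 of the modified scheme for a PAN image whose resolution
    is twice that of one (histogram-matched) MS band: one-level DWT of
    PAN, replace its LL by the MS band, then IDWT."""
    ll, h1, h2, h3 = haar2(pan.astype(float))
    # The orthonormal Haar LL equals 2x the 2x2-block mean, so the MS
    # band is rescaled accordingly before substitution (an implementation
    # detail of this sketch, not prescribed by the text).
    return ihaar2(2 * ms_band.astype(float), h1, h2, h3)
```

Downsampling the fused result by 2×2-block averaging recovers the MS band exactly, which illustrates why this scheme minimizes spectral distortion.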
4.1.4. Substitutive IHS-DWT fusion method
In this subsection, we introduce the hybrid method that combines the IHS fusion and the DWT
fusion. A great deal of research has focused on incorporating the IHS transform into wavelet
methods, since the IHS fusion methods perform well spatially while the wavelet fusion
methods perform well spectrally [10]. Moreover, as the IHS transform is employed, the fusion
process is reduced to the fusion of the I component and the PAN image, and thus we do not
need to repeat the fusion process for all MS bands. However, again the restriction of the
IHS-DWT fusion method is that the MS image should contain only three bands. The main steps,
illustrated in Fig. 4.5, of the substitutive IHS-DWT fusion scheme are [10,12]:
(1) Perform IR to PAN and MS, and resample MS.
(2) Convert MS from RGB space into IHS space.
(3) Match the histogram of PAN to the histogram of the I component.
(4) Apply the DWT to both the histogram-matched PAN and the I component.
(5) Replace the detail sub-images (H1, H2 and H3) of the I component with those of PAN.
(6) Perform the IDWT to obtain the fused I component.
(7) Convert the resulting MS back to RGB space.
Fig. 4.5. Substitutive IHS-DWT fusion scheme.
4.2. Multifocus image fusion
As the optical lenses in CCD devices have limited depth of focus, it is often impossible to
obtain an image in which all relevant objects are in focus. One possible solution to this
problem is to combine several CCD images, each of which contains some part of the objects in
focus. The fusion schemes for this example (Fig. 4.6 [4]) are similar to the DWT fusion scheme
and the modified one in Section 4.1.3. Consider two CCD images X and Y. If X and Y
have similar spatial resolution, an RS followed by an IR is required beforehand, as shown in Fig.
4.6(a). If the resolution of one image, say Y, is smaller than 1/2^n of the resolution of the other
image, say X, an n-level DWT is applied only to X. If Y and the approximation (LL) sub-image of X still
have different resolutions, IR and RS are still required (see Fig. 4.6(b) with n = 1). Besides, the
histogram matching is omitted since the histogram reference is not determined.
Fig. 4.6. Schemes of multifocus image fusion where (a) the input images have similar resolution and (b)
one input image has a resolution at most 1/2 of that of the other.
For merging the DWT coefficients, the substitution strategy used in all the fusion methods in
Section 4.1 is no longer suitable in this example. A simple and straightforward method is
choose-max (CM) [4], i.e. choosing, at each position, the DWT coefficient of X or Y with the
larger magnitude. The steps of this method using the scheme in Fig. 4.6(a) are:
(1) Perform IR and RS to X and Y.
(2) Apply the DWT to both X and Y; their coefficients at pixel p are DX(p) and DY(p),
respectively.
(3) The output DWT coefficient at pixel p is DZ(p), given by

DZ(p) = DX(p), if |DX(p)| ≥ |DY(p)|;
DZ(p) = DY(p), if |DX(p)| < |DY(p)|.

(4) Perform the IDWT on DZ.
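Step (3) can be sketched directly on coefficient arrays (the function name is illustrative; the magnitude comparison follows the rule above, since DWT coefficients can be negative):

```python
import numpy as np

def choose_max(dx, dy):
    """Choose-max merging: keep, at every position p, the DWT
    coefficient with the larger magnitude."""
    dx = np.asarray(dx, dtype=float)
    dy = np.asarray(dy, dtype=float)
    return np.where(np.abs(dx) >= np.abs(dy), dx, dy)
```

The same function is applied band by band to the DWT coefficients of X and Y before the IDWT in step (4).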
5. Fusion Rules
In this section, we focus on the DWT fusion methods. Most DWT fusion schemes are similar to
the schemes illustrated in Figs. 4.4 and 4.6. However, various fusion rules have been proposed for a
wide variety of applications. A fusion rule includes two parts: the activity-level measurement
method and the coefficient combining method. In Section 4, we introduced the substitution
fusion rule in the example of PAN and MS image fusion. In this fusion rule, the activity-level
measurement is not used while the coefficient combining method is simply substitution.
In the example of multifocus image fusion, the activity-level measurement is |DI(p)| while
the coefficient combining method is choose-max (CM). In the following, some simple and
basic fusion rules, picked out from [4], are introduced.
First, consider two source images X and Y, and the fused image Z. Generally, an image I has its
DWT coefficients denoted as DI and its activity level as AI. Thus, we shall encounter DX, DY, DZ,
AX and AY. The index of each DWT coefficient is denoted by p = (m, n, k, l), illustrated in Fig.
5.1, where k is the decomposition level and l is the index of the frequency band. So (m, n, k, l)
refers to the (m,n)-th coefficient in the LL (k=0, l=0) or Hkl band. Therefore, DI(p) and AI(p) are the
value and activity level of the DWT coefficient at p. Note that coefficients at different levels can use
different fusion rules.
[Fig. 5.1 layout: LL band (m,n,0,0); level-1 bands (m,n,1,1), (m,n,1,2), (m,n,1,3); level-2 bands (m,n,2,1), (m,n,2,2), (m,n,2,3).]
Fig. 5.1. The two-level DWT coefficients.
5.1. Activity-level measurement methods
There are three categories of methods for computing the activity level AI(p) at position p:
coefficient-based, window-based and region-based measures.
(1) Coefficient-based activity (CBA)
In CBA, the activity level is given by

AI(p) = |DI(p)| or AI(p) = DI(p)².
(2) Window-based activity (WBA)
The WBA employs a small (typically 3×3 or 5×5) window centered at the current
coefficient position. Thus, the activity level AI(p) is determined by the coefficients
surrounding p within a small window. One option of the WBA is the weighted average method
(WA-WBA),

AI(p) = Σ_{s∈S, t∈T} w(s, t) |DI(m + s, n + t, k, l)|,

where w(s, t) is the weighting function, and the sets S and T are determined by the
window size. Compared with the CBA, the WBA has the benefit of smaller interference
from noise. Other possible options of the WBA include the rank filter method (RF-WBA),
the spatial frequency method (SF-WBA) and statistical method (ST-WBA).
(3) Region-based activity (RBA)
The regions used in the RBA measurement are similar to windows with irregular shapes. In the
following, we list the procedure for calculating the activity level of the RBA.
(a) Perform image segmentation on the LL band sub-image, so that the LL band
sub-image is divided into several regions.
(b) Any region Rv in the low-frequency band has a corresponding group of coefficients,
defined as C(Rv), in each high-frequency band, as illustrated in Fig. 5.2.
(c) AI(Rv) = (1/Nv) Σ_{p∈C(Rv)} |DI(p)| or AI(Rv) = (1/Nv) Σ_{p∈C(Rv)} DI(p)².
(d) AI(p) = CBA or RF-WBA, if p is on an edge; AI(p) = AI(Rv), if p is inside Rv.
Fig. 5.2. The different black boxes, associated with each decomposition level, are coefficients
corresponding to the same spatial representation in each original image, i.e. the same pixel
positions in the original images.
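The window-based activity in (2) can be sketched as follows, assuming uniform weights w(s, t) = 1/9 over a 3×3 window and edge padding at the borders (both assumptions of this sketch, not prescribed by the text):

```python
import numpy as np

def wa_wba_activity(d, window=3):
    """WA-WBA: weighted average of |coefficient| over a small window
    centred at each position (uniform weights here)."""
    d = np.asarray(d, dtype=float)
    k = window // 2
    pad = np.pad(np.abs(d), k, mode='edge')
    out = np.zeros_like(d)
    # Sum the padded |D| over every offset (s, t) in the window.
    for s in range(-k, k + 1):
        for t in range(-k, k + 1):
            out += pad[k + s: k + s + d.shape[0],
                       k + t: k + t + d.shape[1]]
    return out / window ** 2
```

Averaging over the window is what gives the WBA its smaller interference from noise compared with the CBA.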
5.2. Coefficient combining methods
Selection and averaging are probably the most popular coefficient combining methods.
The selection method collects the larger DWT coefficients between two images. Therefore, it
is suitable for collecting the edges or corners, i.e. the detailed information. The averaging method is
employed when the DWT coefficients of both images are important. Therefore, the
averaging method is suitable for combining the low-frequency bands because the
approximation images of the images to be fused usually look similar.
(1) Selection
The simplest selection method is choose-max (CM),

DZ(p) = DX(p), if AX(p) > AY(p);
DZ(p) = DY(p), if AX(p) ≤ AY(p).
In the high-frequency bands (H11, H12, H13, H21, ...), the larger DWT coefficients
correspond to sharper brightness changes and thus to salient features in the image
such as edges, lines, and region boundaries [4]. Therefore, the CM method is useful for
collecting the detailed information. In multifocus image fusion, the input images X and
Y have different regions out of focus. It is obvious that the detail DWT coefficients of the
out-of-focus region in X would be smaller than those of the corresponding region in Y.
Thus, the CM scheme is very suitable for merging the detail information of X and Y.
(2) General weighted average (WA)
For each p, the composite DZ is obtained by

DZ(p) = wX(p) DX(p) + wY(p) DY(p).
The weighting factors wX(p) and wY(p) can be deterministic or dependent on the activity
levels of X and Y, given by (the index p is omitted)

wX = 1 and wY = 0, if AX > AY and MXY ≤ α;
wX = 1/2 + (1/2)(1 − MXY)/(1 − α) and wY = 1 − wX, if AX > AY and MXY > α;
wX = 0 and wY = 1, if AX ≤ AY and MXY ≤ α;
wY = 1/2 + (1/2)(1 − MXY)/(1 − α) and wX = 1 − wY, if AX ≤ AY and MXY > α,

where MXY is a match measure (e.g. the normalized correlation between X and Y around p)
and α is a threshold.
The WA method is useful for merging the LL bands of X and Y because the approximation
images usually look similar, i.e. have high cross-correlation. However, since the DWT
coefficients in the high-frequency bands can have negative values, the WA may introduce
the pattern cancellation problem if employed in the high-frequency bands.
(3) Adaptive weighted average (AWA)
The AWA scheme is a special WA scheme in which the weight wX(p) is neither deterministic nor
dependent on the cross-correlation, but only relevant to the neighborhood around p in X,

wX(p) = |DX(p) − D̄X(p)|^a,

where D̄X(p) is the average value over the neighborhood (say N×M) centered at p. Simply
speaking, the weight represents the degree of interest of p. For example, the warmer and
cooler pixels in a thermal image will be assigned larger weights. Thus, the AWA scheme
is useful for distinguishing objects having special characteristics compared to their
neighborhoods.
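Under the reading of the WA weight formulas in (2) above (match measure MXY in [0, 1] and threshold α), the rule can be sketched per coefficient; the function name, default α and scalar/array handling are illustrative assumptions:

```python
import numpy as np

def wa_combine(dx, dy, ax, ay, m_xy, alpha=0.75):
    """General weighted average combining.

    Where the images match poorly (m_xy <= alpha) the rule falls back
    to pure selection of the higher-activity coefficient; where they
    match well, both weights move toward 1/2 (plain averaging).
    """
    dx, dy = np.asarray(dx, float), np.asarray(dy, float)
    ax, ay = np.asarray(ax, float), np.asarray(ay, float)
    m_xy = np.asarray(m_xy, float)
    w_max = np.where(m_xy <= alpha, 1.0,
                     0.5 + 0.5 * (1 - m_xy) / (1 - alpha))
    w_min = 1.0 - w_max
    wx = np.where(ax > ay, w_max, w_min)
    wy = 1.0 - wx
    return wx * dx + wy * dy
```

The two limiting behaviours are easy to check: a low match measure reduces to choose-max on activity, while a perfect match (MXY = 1) reduces to the plain average of the two coefficients.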
Other coefficient combining methods include fusion by energy comparison (EC), region-based
fusion by multiresolution feature detection (RFD), background elimination (BE) and variance
area based (VA) fusion [4].
6. Experimental Results and Comparison
In this section, we first give an experiment for each of the three typical fusion applications: PAN
and MS image fusion, multifocus image fusion and medical image fusion. Next, we give a
comparison between the fusion schemes discussed in Section 4.1 and a brief comparison
between some fusion rules.
6.1. PAN and MS image fusion
Consider the remote sensing images acquired from an IKONOS sensor: panchromatic (PAN)
image with 1 m spatial resolution and multispectral (MS) image with 4 m spatial resolution.
Three bands (MS1 red, MS2 green, MS3 blue) form the color MS image as displayed in
Fig. 6.1. When the IHS-DWT fusion scheme with the substitution fusion rule is employed, the resulting
fused MS image is as shown in Fig. 6.2 [4].
Fig. 6.1. (a) MS1 band, (b) MS2 band, (c) MS3 band and (d) color MS image with 4 m spatial resolution;
(e) PAN image with 1 m spatial resolution.
Fig. 6.2. Fused color MS image using IHS-DWT fusion scheme and substitution fusion rule.
6.2. Multifocus image fusion
Due to the depth-of-focus problem in CCD optical lenses, one possibility for obtaining an image
with all interesting objects in focus is to perform image fusion on several images with
partial objects in focus. An example is shown in Fig. 6.3. The image in Fig. 6.3(a) has the left
vehicle out of focus while the image in Fig. 6.3(b) has the right vehicle out of focus. As the two
Fig. 6.5. (a) Colored MS image with 4 m spatial resolution; (b) PAN image with 1 m spatial resolution.
6.4. Comparison of IHS, PCA, and IHS-DWT fusion methods
Again, we use the IKONOS MS and PAN images to analyze the performance of the IHS, PCA,
and IHS-DWT fusion methods. The images used here are shown in Fig. 6.5. The fused MS
image is compared with the original high-resolution MS image (i.e., 1 m spatial resolution). The
mean square error (MSE) and root mean square error (RMSE) values for the MS1 red, MS2
green, and MS3 blue bands are listed in Table 6.1 [12]. We find that the IHS-DWT method
outperforms the PCA method, while the IHS method performs worst. It has also been found
in [7] that PCA performs better than IHS; in particular, the spectral distortion in the fused
bands is usually less noticeable, even though it cannot be completely avoided. The wavelet-based
method further reduces the spectral distortion, which makes the IHS-DWT fusion method
the best choice in most cases.
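Per-band MSE and RMSE figures of this kind can be computed directly with NumPy. The sketch below assumes the fused and reference images are aligned arrays of shape (H, W, bands):

```python
import numpy as np

def band_errors(fused, reference):
    """Per-band MSE and RMSE between a fused MS image and the
    high-resolution reference, both shaped (H, W, bands)."""
    diff = fused.astype(float) - reference.astype(float)
    mse = (diff ** 2).mean(axis=(0, 1))   # average over spatial axes only
    return mse, np.sqrt(mse)
```

Averaging over the spatial axes only keeps one MSE/RMSE value per band, which is how the figures in Table 6.1 are reported.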
Method    MS3 (0.445–0.516 µm)    MS2 (0.506–0.595 µm)    MS1 (0.632–0.698 µm)
          MSE        RMSE         MSE        RMSE         MSE        RMSE
IHS       43987.13   209.73       45672.59   213.71       33824.55   183.91
PCA       1366.13    36.96        5790.80    76.09        7193.81    84.81
IHS-DWT   1302.91    36.09        1306.33    36.14        1304.04    36.11
Table 6.1. MSE and RMSE for IKONOS image fusion.
6.5 Comparison of different fusion rules
In this comparison, we use the medical image fusion example mentioned in Section 6.3.
Different fusion rules are applied to this example to compare their performance.
6.5.1. Comparison of different activity-level measurement methods
Recall all the activity-level measurement methods mentioned in this paper: coefficient-based
activity (CBA); window-based activity (WBA), including WA-WBA, RF-WBA, SF-WBA and
ST-WBA; and region-based activity (RBA). It has been concluded in [4] that the performance
ranking of these methods is
CBA = WA-WBA = RF-WBA > SF-WBA = ST-WBA > RBA.
RBA performs worse than CBA and WBA because it requires good region segmentation, and
is therefore ineffective for some applications.
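For reference, the two simplest of these measurements can be sketched as follows. The window size and the uniform window weights in the WA-WBA variant are illustrative assumptions for this sketch, not the exact choices of [4]:

```python
import numpy as np

def cba(coefs):
    # coefficient-based activity: the magnitude of each coefficient
    return np.abs(coefs)

def wa_wba(coefs, win=3):
    # window-based activity, weighted-average variant: mean magnitude
    # over a win x win neighborhood (uniform weights assumed here)
    pad = win // 2
    m = np.pad(np.abs(coefs), pad, mode='reflect')
    h, w = coefs.shape
    out = np.zeros((h, w))
    for di in range(win):
        for dj in range(win):
            out += m[di:di + h, dj:dj + w]
    return out / (win * win)
```

CBA reacts to single salient coefficients, while WA-WBA smooths the activity map over each neighborhood, which makes the subsequent coefficient selection less sensitive to isolated noise spikes.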
6.5.2. Comparison of different coefficient-combining methods
We have introduced three different coefficient-combining methods: the choose-max (CM),
weighted average (WA) and adaptive weighted average (AWA) methods. Note that the
approximation coefficients (APX) and detail coefficients (DET) can use different combining
methods. In this experiment, the biorthogonal wavelet family bior(N, Ñ) is used. For each
choice of bior(N, Ñ) and number of DWT levels, we evaluate the RMSE of all possible
combinations of combining methods for APX and DET, i.e. CM-CM, CM-WA, CM-AWA,
WA-CM, and so on. The best results are shown in Table 6.2 [4]. For example, when a 5-level
DWT with bior(1,1) is used, the best choice is AWA-CM. From the table, we can conclude that
the CM method is the best choice for combining detail coefficients, while the WA and AWA
methods are better choices than CM for combining approximation coefficients. Furthermore,
the AWA method is more suitable than the WA method.
Table 6.2. RMSE for the wavelets Biorthogonal family.
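A minimal sketch of one such APX/DET combination is given below. Plain coefficient magnitude is used as the activity level for choose-max (an assumption made for the sketch; any of the measurements from Section 6.5.1 could be substituted), and the fixed WA weight of 0.5 is likewise an illustrative choice:

```python
import numpy as np

def choose_max(a, b, act_a, act_b):
    # CM: keep the coefficient whose activity level is larger
    return np.where(act_a >= act_b, a, b)

def weighted_average(a, b, w=0.5):
    # WA: fixed-weight average, typically used for approximation coefs
    return w * a + (1 - w) * b

def fuse_subbands(apx, det):
    """apx: pair of approximation subbands; det: list of pairs of
    detail subbands.  Applies a WA-CM combination: WA for APX,
    choose-max (by coefficient magnitude) for DET."""
    fused_apx = weighted_average(*apx)
    fused_det = [choose_max(a, b, np.abs(a), np.abs(b)) for a, b in det]
    return fused_apx, fused_det
```

Averaging the approximations preserves the overall brightness of both inputs, while choose-max on the details keeps the sharpest edges from either input, which matches the pattern observed in Table 6.2.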
7. Conclusions
In this paper, the objective and definition of image fusion are addressed. We also give a brief
introduction to the wavelet transform, the main tool for image fusion. A number of different
fusion schemes have been discussed in terms of two fusion applications: panchromatic (PAN)
and multispectral (MS) image fusion, and multifocus image fusion. In the former application,
the objective of image fusion is to generate a new image that enjoys the high spatial resolution
of the PAN image and the color information of the MS image. Schemes including the
intensity-hue-saturation (IHS) fusion scheme, the principal component analysis (PCA) fusion
scheme, the discrete wavelet transform (DWT) fusion scheme and the IHS-DWT hybrid
scheme are introduced. We have concluded from the experimental results that the PCA
scheme outperforms the IHS scheme, while the IHS-DWT scheme has the best performance
because its spectral distortion is minimal. In the other application, the objective of image
fusion is to collect all the objects in focus from several CCD images of the same scene. Since
the input images are gray-level, only the DWT scheme is suitable. However, there are
numerous fusion rules for merging the DWT coefficients of the input images. A fusion rule
includes the choice of an activity-level measurement and the choice of a coefficient-combining
method. It has been shown in the simulation results that CBA is the best activity-level
measurement, that the weighted average (WA) and adaptive WA (AWA) methods are good for
combining approximation coefficients, and that choose-max (CM) is the best method for
combining detail coefficients.
8. References
[1] S. Ibrahim and M. Wirth, Multiresolution region-based image fusion using the Contourlet
Transform, in IEEE TIC-STH, Sept. 2009
[2] W. Huang and Z. L. Jing, Multi-focus image fusion using pulse coupled neural network,
Pattern Recognition Letters, vol. 28, no. 9, pp. 1123–1132, 2007.
[3] Dr. Nikolaos Mitianoudis, Image fusion: theory and application,
http://www.iti.gr/iti/files/document/seminars/iti_mitianoudis_280410.pdf
[4] G. Pajares and J. M. Cruz, A wavelet-based image fusion tutorial, Pattern Recognit., vol.
37, no. 9, pp. 1855–1872, 2004.
[5] T. Stathaki, Image Fusion: Algorithms and Applications. New York: Academic, 2008.
[6] F. Sadjadi, Comparative image fusion analysis, in Proc. IEEE Conf. Comput. Vision Pattern
Recogn., San Diego, CA, Jun. 2005, vol. 3.
[7] M. González Audícana and A. Seco, Fusion of multispectral and panchromatic images
using wavelet transform: evaluation of crop classification accuracy, in Proc. 22nd EARSeL
Annu. Symp. Geoinformation for Eur.-Wide Integr., Prague, Czech Republic, 4–6 June 2002,
T. Benes, Ed., 2003, pp. 265–272.
[8] Performance Comparison of various levels of Fusion of Multi-focused Images using
Wavelet Transform
[9] Y. Zhang, Understanding image fusion, Photogramm. Eng. Remote Sens., vol. 70, no. 6,
pp. 657–661, Jun. 2004.
[10] K. Amolins, Y. Zhang, and P. Dare, Wavelet based image fusion techniques: an
introduction, review and comparison, ISPRS Journal of Photogrammetry and Remote
Sensing, vol. 62, pp. 249–263, 2007.
[11] J. Núñez, X. Otazu, O. Fors, A. Prades, V. Palà, and R. Arbiol, Multiresolution-based
image fusion with additive wavelet decomposition, IEEE Trans. Geosci. Remote Sensing,
vol. 37, pp. 1204–1211, May 1999.
[12] V. Vijayaraj, N. H. Younan, and C. G. O'Hara, Quantitative analysis of pansharpened
images, Opt. Eng., vol. 45, no. 4, pp. 046202-1–046202-12, 2006.
[13] H. Liu, B. Zhang, X. Zhang, J. Li, Z. Chen and X. Zhou, An improved fusion method for
pan-sharpening Beijing-1 micro-satellite images, in Proc. IEEE IGARSS, 2009.
[14] K. A. Kalpoma and J. Kudoh, Image fusion processing for IKONOS 1-m color imagery,
IEEE Trans. Geosci. Remote Sens., vol. 45, no. 10, pp. 3075–3086, Oct. 2007.
[15] Jian-Jiun Ding, Time-Frequency Analysis and Wavelet Transform,
http://djj.ee.ntu.edu.tw/index.php.
[16] Isaac Bankman, Handbook of Medical Imaging: Medical image processing and analysis,
1st edition, Academic Press, 2000
[17] Jonathan Sachs, Image Resampling, http://www.dl-c.com/Resampling.pdf
[18] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed., Englewood Cliffs,
NJ: Prentice-Hall, 2002.