
Research Article

Effective Multifocus Image Fusion Based on HVS and BP Neural Network

Yong Yang,1,2 Wenjuan Zheng,1 and Shuying Huang3

1 School of Information Technology, Jiangxi University of Finance and Economics, Nanchang 330013, China
2 School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
3 School of Software and Communication Engineering, Jiangxi University of Finance and Economics, Nanchang 330013, China

Correspondence should be addressed to Yong Yang; greatyangy@126.com

Received 17 July 2013; Accepted 19 December 2013; Published 6 February 2014

The Scientific World Journal, Volume 2014, Article ID 281073, 10 pages; http://dx.doi.org/10.1155/2014/281073

Academic Editors: P. Bifulco, C. Saravanan, K. Teh, and C. Zhang

Copyright © 2014 Yong Yang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The aim of multifocus image fusion is to fuse images taken from the same scene with different focuses to obtain a resultant image with all objects in focus. In this paper, a novel multifocus image fusion method based on the human visual system (HVS) and back propagation (BP) neural network is presented. Three features which reflect the clarity of a pixel are firstly extracted and used to train a BP neural network to determine which pixel is clearer. The clearer pixels are then used to construct the initial fused image. Thirdly, the focused regions are detected by measuring the similarity between the source images and the initial fused image, followed by morphological opening and closing operations. Finally, the final fused image is obtained by a fusion rule for those focused regions. Experimental results show that the proposed method can provide better performance and outperform several existing popular fusion methods in terms of both objective and subjective evaluations.

1. Introduction

Due to the finite depth of field of optical lenses, it is usually impossible to get an image in which all relevant objects are in focus; that is, only those objects within the depth of field of the camera will be in focus, while other objects will be out of focus [1]. Consequently, in order to obtain an image with every object in focus, images taken from the same scene focusing on different objects need to be fused, that is, multifocus image fusion [2]. Image fusion refers to an image preprocessing technique that combines two or more source images that have been registered into a single image according to some fusion rules. Its aim is to integrate complementary and redundant information of multiple images coming from the same scene to form a single image that contains more information of the scene than any of the individual source images [3]. Multifocus image fusion is an important branch of this field. The fused image obtained then turns out to be more suitable for human/machine perception, segmentation, feature extraction, detection, or target recognition tasks [4].

Image fusion is generally performed at different levels of information representation, namely, pixel level, feature level, and decision level [5]. Up to now, many multifocus image fusion techniques have been developed. Basically, the fusion techniques can be categorized into spatial domain fusion and transform domain fusion [6]. The spatial domain-based methods directly select the clearer pixels or regions from the source images in the spatial domain to construct the fused image [7, 8]. The basic idea of the transform domain-based methods is to perform a certain multiresolution decomposition on each source image, then integrate all these decompositions to obtain one combined representation according to some fusion rules, and finally reconstruct the fused image by applying the inverse transformation to the combined representation [9].

The simplest fusion method is to take the average of the source images pixel by pixel. The method is simple and suitable for real-time processing. However, it does not consider the correlation between the surrounding pixels and often leads to several undesired side effects such as reduced contrast [3]. In order to improve the quality of the fused image, block-based multifocus image fusion methods have been proposed [7, 8]. These methods are shift-invariant, and all of the operations are performed in the spatial domain, so they have high computational efficiency. However, they also face some problems. The first problem is how to determine a suitable size for the subblocks: these methods usually suffer from block effects, which severely reduce the quality of the fused image, if the size of the subblock is selected unreasonably. Another problem is which evaluation criterion would be more suitable to measure the clarity of the subblocks. In recent years, various approaches based on multiscale transforms have been proposed, including pyramid transforms and wavelet transforms, such as the Laplacian pyramid [10], gradient pyramid [11], the ratio of low pass pyramid [12], discrete wavelet transform (DWT) [13–15], shift-invariant discrete wavelet transform (SIDWT) [16], curvelet transform [17], contourlet transform [18], and nonsubsampled contourlet transform (NSCT) [19]. Pyramid decomposition-based image fusion can achieve a good effect. However, the pyramid decomposition of the image is a redundant decomposition: the information of the different decomposition layers is correlated, which tends to reduce the stability of the algorithm. Generally, DWT is superior to the earlier pyramid-based methods because it provides directional information without carrying redundant information across different resolutions. Moreover, DWT has good time-frequency locality. However, these methods based on multiscale transforms are shift-variant; namely, their performance will quickly deteriorate when there is a slight camera/object movement or there is misregistration of the source images [7, 20]. Although the SIDWT [16] and NSCT [19] algorithms can both overcome the shortcoming mentioned above, their implementation is more complicated and more time-consuming. Besides, some information of the source images may be lost during the inverse multiresolution transform implementation [21]. Recently, the pulse coupled neural network (PCNN) has also been introduced to multifocus image fusion, as seen in the literature [22, 23]. However, the PCNN technique is very complex, has too many parameters, and is time-consuming.

In order to overcome the shortcomings of the methods mentioned above, in this paper we propose a pixel-level multifocus image fusion method based on HVS and BP neural network. Firstly, three features, including texture feature, local visibility, and local visual feature contrast, are extracted based on HVS and are used to train the BP neural network. Secondly, the initial fused image is acquired using the BP neural network followed by a consistency verification process. Then, in order to avoid any artificial or erroneous information that may be introduced during the preliminary fusion, the focused regions in each source image are determined by a hybrid procedure. Finally, the fused image is obtained based on the focused regions and the initial fused image. The experiments show that the performance of the proposed method is superior to several existing fusion methods.

The rest of the paper is organized as follows. The related theory of the proposed method is described in Section 2. The fusion method based on HVS and BP neural network is introduced in Section 3. Experimental results and performance analysis are presented and discussed in Section 4, and the last section gives some concluding remarks.

Figure 1: Architecture of the BP neural network (input layer, hidden layer, and output layer).

2. Related Theoretical Knowledge

2.1. BP Neural Network. The BP neural network is a multilayer feed-forward neural network and one of the most widely used neural networks. The problem of multifocus fusion based on a BP neural network can be considered as a classification problem: focused or blurred.

The basic BP neural network is a three-layer network including an input layer, a hidden layer, and an output layer. The architecture of the BP neural network used in this paper is shown in Figure 1. Following [24], we adopt an empirical formula to determine the number of nodes of the hidden layer, defined as follows:

\[ n_h = \sqrt{n_i + n_o} + 1, \quad (1) \]

where $n_h$, $n_i$, and $n_o$ are the number of nodes of the hidden layer, the input layer, and the output layer, respectively.
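To make the setup concrete, the following sketch wires the three clarity features into a three-layer classifier whose hidden layer is sized by formula (1). It is only an illustration: scikit-learn's MLPClassifier is used as a stand-in for the BP network, and the function names and training parameters are assumptions rather than settings reported in this paper.

```python
# Hedged sketch: a three-layer BP-style classifier trained on per-pixel
# feature differences (TF, LVI, LVC). Names and hyperparameters are
# illustrative assumptions, not values from the paper.
import numpy as np
from sklearn.neural_network import MLPClassifier

def hidden_nodes(n_i, n_o):
    """Empirical formula (1): n_h = sqrt(n_i + n_o) + 1."""
    return int(round(np.sqrt(n_i + n_o) + 1))

def train_bp_classifier(feature_diffs, targets):
    """feature_diffs: (N, 3) array of (TF_A-TF_B, LVI_A-LVI_B, LVC_A-LVC_B);
    targets: (N,) array with 1 if the pixel from A is clearer, else 0."""
    n_h = hidden_nodes(n_i=3, n_o=1)            # 3 hidden nodes for this setup
    net = MLPClassifier(hidden_layer_sizes=(n_h,),
                        activation='logistic',   # sigmoid units, as in classic BP
                        solver='sgd', learning_rate_init=0.1,
                        max_iter=2000, random_state=0)
    net.fit(feature_diffs, targets)
    return net
```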

2.2. Feature Extraction. In this paper, for each pixel, we extract three features based on the 3 × 3 window centered at the pixel to reflect its clarity. These are the texture feature, local visibility, and local visual feature contrast.

2.2.1. Texture Features. The log-Gabor filter is designed in the log coordinate system, which is more conducive to texture feature extraction [25]. The main advantage of log-Gabor functions is that they can construct filters with arbitrary bandwidth while keeping the DC component at 0, which reduces filter redundancy. Furthermore, log-Gabor filters are more in line with the HVS. Texture features (TF) based on amplitude information reflect the high and low frequency energy distribution of the images. Therefore, taking the advantages of the log-Gabor filters into account, texture features of the multifocus image based on amplitude information are extracted using log-Gabor filters. The 2D log-Gabor filter is defined in the frequency domain as follows [26]:

\[ H(f, \theta) = H_f \times H_\theta, \quad (2) \]

where $H_f$ is the radial component and $H_\theta$ is the directional component. Specifically, the expressions are as follows:

\[ H_f = \exp\left\{ \frac{-[\log(f/f_0)]^2}{2[\log(\sigma_f/f_0)]^2} \right\}, \qquad H_\theta = \exp\left\{ \frac{-(\theta - \theta_0)^2}{2\sigma_\theta^2} \right\}, \quad (3) \]

in which $f_0$ is the center frequency of the filter, $\theta_0$ is the orientation of the filter, and $\sigma_f$ is a constant that controls the radial bandwidth $B_f$:

\[ B_f = 2\sqrt{\frac{2}{\log 2}} \times \left| \log\left( \frac{\sigma_f}{f_0} \right) \right|. \quad (4) \]

In order to obtain log-Gabor filters with the same bandwidth, $\sigma_f$ must be changed along with $f_0$ so that the value of $\sigma_f/f_0$ is constant. $\sigma_\theta$ determines the angular bandwidth $B_\theta$:

\[ B_\theta = 2\sigma_\theta \sqrt{2 \log 2}. \quad (5) \]
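As an illustration of equations (2)–(5), the sketch below builds the log-Gabor transfer function on a discrete frequency grid and applies it to an image to obtain an amplitude-based texture feature. The grid construction, default parameter values, and helper names are assumptions, not values taken from the paper.

```python
# Hedged sketch of the 2D log-Gabor transfer function in equations (2)-(3).
import numpy as np

def log_gabor_filter(rows, cols, f0=0.1, theta0=0.0,
                     sigma_f_ratio=0.55, sigma_theta=0.6):
    """Return H(f, theta) = Hf * Htheta on a centered frequency grid.
    sigma_f_ratio is sigma_f / f0, kept constant so all filters share the
    same radial bandwidth (see equation (4))."""
    fy = np.fft.fftshift(np.fft.fftfreq(rows))
    fx = np.fft.fftshift(np.fft.fftfreq(cols))
    y, x = np.meshgrid(fy, fx, indexing='ij')
    radius = np.sqrt(x ** 2 + y ** 2)
    radius[rows // 2, cols // 2] = 1.0            # avoid log(0) at the DC term
    theta = np.arctan2(y, x)

    Hf = np.exp(-(np.log(radius / f0)) ** 2 / (2 * np.log(sigma_f_ratio) ** 2))
    Hf[rows // 2, cols // 2] = 0.0                # keep the DC component at 0
    dtheta = np.arctan2(np.sin(theta - theta0), np.cos(theta - theta0))
    Htheta = np.exp(-dtheta ** 2 / (2 * sigma_theta ** 2))
    return Hf * Htheta                            # equation (2)

def texture_feature(img, H):
    """Amplitude of the log-Gabor response, used as the texture feature TF."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    return np.abs(np.fft.ifft2(np.fft.ifftshift(spectrum * H)))
```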

2.2.2. Local Visibility. In this paper we introduce the concept of image visibility (VI), which is inspired by the HVS and defined as follows [27]:

\[ \mathrm{VI} = \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} \left( \frac{1}{m_k} \right)^{\alpha} \times \frac{|I(i,j) - m_k|}{m_k}, \quad (6) \]

where $m_k$ is the mean intensity value of the image, $\alpha$ is a visual constant ranging from 0.6 to 0.7, and $I(i,j)$ denotes the gray value of the pixel at position $(i,j)$.

VI is more significant in multifocus image fusion than in different-sensor image fusion, and the measure has been successfully used in multifocus image fusion [27]. In this paper, in order to represent the clarity of a pixel, the local visibility (LVI) in the spatial domain is proposed. The LVI is defined as

\[ \mathrm{LVI}(x,y) = \begin{cases} \dfrac{1}{(2m+1)\times(2n+1)} \displaystyle\sum_{i=-m}^{m} \sum_{j=-n}^{n} \left( \dfrac{1}{\bar{I}(x,y)} \right)^{\alpha} \times \dfrac{\left| I(x+i, y+j) - \bar{I}(x,y) \right|}{\bar{I}(x,y)}, & \text{if } \bar{I}(x,y) \neq 0, \\ \bar{I}(x,y), & \text{otherwise}, \end{cases} \quad (7) \]

where $(2m+1)\times(2n+1)$ is the size of the neighborhood window and $\bar{I}(x,y)$ is the mean intensity value of the $(2m+1)\times(2n+1)$ window centered at pixel $(x,y)$.
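A direct implementation of equation (7) over a whole image might look like the following sketch; the 3 × 3 window (m = n = 1), α = 0.65, and the reflected border handling are assumptions consistent with Sections 2.2 and 4.1.

```python
# Hedged sketch of the local visibility (LVI) of equation (7).
import numpy as np
from scipy.ndimage import uniform_filter

def local_visibility(img, m=1, n=1, alpha=0.65):
    """LVI(x, y) per equation (7); falls back to I_bar where I_bar == 0."""
    img = img.astype(np.float64)
    win = (2 * m + 1, 2 * n + 1)
    mean = uniform_filter(img, size=win, mode='reflect')     # I_bar(x, y)
    padded = np.pad(img, ((m, m), (n, n)), mode='reflect')
    acc = np.zeros_like(img)
    for i in range(2 * m + 1):                               # sum of |I(x+i, y+j) - I_bar(x, y)|
        for j in range(2 * n + 1):
            shifted = padded[i:i + img.shape[0], j:j + img.shape[1]]
            acc += np.abs(shifted - mean)
    acc /= (2 * m + 1) * (2 * n + 1)
    safe = np.where(mean != 0, mean, 1.0)                    # avoid division by zero
    lvi = (1.0 / safe) ** alpha * acc / safe
    return np.where(mean != 0, lvi, mean)                    # I_bar where the mean is zero
```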

2.2.3. Local Visual Feature Contrast. Findings in psychology and physiology have shown that the HVS is highly sensitive to changes in the local contrast of the image but insensitive to the real luminance at each pixel [28]. The local luminance contrast is defined as follows:

\[ C = \frac{L - L_B}{L_B} = \frac{\Delta L}{L_B}, \quad (8) \]

where $L$ is the local luminance and $L_B$ is the local luminance of the background, namely, the low frequency component. Therefore, $\Delta L$ can be taken as the high frequency component. However, the value of a single pixel is not enough to determine which pixel is focused without considering the correlation between the surrounding pixels. Therefore, to represent the salient features of the image more accurately, the local visual feature contrast (LVC) in the spatial domain is introduced and is defined as

\[ \mathrm{LVC}(x,y) = \begin{cases} \left( \dfrac{1}{\bar{I}(x,y)} \right)^{\alpha} \times \dfrac{\mathrm{SML}(x,y)}{\bar{I}(x,y)}, & \text{if } \bar{I}(x,y) \neq 0, \\ \mathrm{SML}(x,y), & \text{otherwise}, \end{cases} \quad (9) \]

where $\bar{I}(x,y)$ is the mean intensity value of the neighborhood window centered at pixel $(x,y)$, $\alpha$ is a visual constant ranging from 0.6 to 0.7, and $\mathrm{SML}(x,y)$ denotes the sum-modified-Laplacian (SML) located at $(x,y)$; more details about the SML can be found in [7].
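The sketch below evaluates equation (9); the sum-modified-Laplacian follows the commonly used definition cited from [7], and the step size and window size are assumed values.

```python
# Hedged sketch of the local visual feature contrast (LVC) of equation (9).
import numpy as np
from scipy.ndimage import uniform_filter

def sum_modified_laplacian(img, step=1, m=1, n=1):
    """SML(x, y): modified Laplacian summed over a (2m+1)x(2n+1) window."""
    img = img.astype(np.float64)
    p = np.pad(img, step, mode='reflect')
    c = p[step:-step, step:-step]
    up, down = p[:-2 * step, step:-step], p[2 * step:, step:-step]
    left, right = p[step:-step, :-2 * step], p[step:-step, 2 * step:]
    ml = np.abs(2 * c - up - down) + np.abs(2 * c - left - right)
    win = (2 * m + 1, 2 * n + 1)
    return uniform_filter(ml, size=win, mode='reflect') * win[0] * win[1]

def local_visual_contrast(img, alpha=0.65, m=1, n=1):
    """LVC(x, y) per equation (9)."""
    img = img.astype(np.float64)
    mean = uniform_filter(img, size=(2 * m + 1, 2 * n + 1), mode='reflect')
    sml = sum_modified_laplacian(img, m=m, n=n)
    safe = np.where(mean != 0, mean, 1.0)
    lvc = (1.0 / safe) ** alpha * sml / safe
    return np.where(mean != 0, lvc, sml)       # SML(x, y) where the mean is zero
```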

3. The Proposed Multifocus Image Fusion Method

3.1. Initial Fused Image Obtained by BP Neural Network. Figure 2 shows the schematic diagram of the proposed method for obtaining the initial fused image based on the BP neural network. Here we only consider the case of two-source-image fusion, though the method can be extended straightforwardly to handle more than two, with the assumption that the source images have always been registered.

The algorithm first calculates the salient features of each pixel from each source image by averaging over a small window. Assume that there are two pixels (one from each source image); a BP neural network is trained to determine which one is in focus. Then the initial fused image is constructed by selecting the clearer pixel, followed by a consistency verification process. Specifically, the algorithm consists of the following steps.

Step 1. Assume that there are two source images $A$ and $B$. Denote the $i$th pixel pair by $A_i$ and $B_i$, respectively.

Step 2. For each pixel, extract the three features based on the 3 × 3 window centered at the pixel, which reflect its clarity (details in Section 2.2). Denote the feature vectors for $A_i$ and $B_i$ by $(\mathrm{TF}_{A_i}, \mathrm{LVI}_{A_i}, \mathrm{LVC}_{A_i})$ and $(\mathrm{TF}_{B_i}, \mathrm{LVI}_{B_i}, \mathrm{LVC}_{B_i})$, respectively.

Figure 2: Schematic diagram of the BP neural network based fusion method (source images A and B → feature extraction: texture feature, local visibility, local visual feature contrast → BP neural network → fused result based on the BP neural network → consistency verification → initial fused image).

Figure 3: Schematic diagram of the proposed image fusion method (the initial fusion of Figure 2, followed by a similarity measure, morphological opening and closing, determination of the focused regions, selection of border pixels, and the final fused image).

Step 3. Train a BP neural network to determine which pixel is clearer. The difference vector $(\mathrm{TF}_{A_i} - \mathrm{TF}_{B_i}, \mathrm{LVI}_{A_i} - \mathrm{LVI}_{B_i}, \mathrm{LVC}_{A_i} - \mathrm{LVC}_{B_i})$ is used as input, and the output is labeled according to

\[ \mathrm{target}_i = \begin{cases} 1, & \text{if } A_i \text{ is clearer than } B_i, \\ 0, & \text{otherwise}. \end{cases} \quad (10) \]

Step 4. Perform simulation of the trained BP neural network on all pixel pairs. The $i$th pixel $F_i$ of the fused image is then constructed as

\[ F_i = \begin{cases} A_i, & \text{if } \mathrm{out}_i \ge 0.5, \\ B_i, & \text{otherwise}, \end{cases} \quad (11) \]

where $\mathrm{out}_i$ is the BP neural network output using the $i$th pixel pair as the corresponding input.

Step 5. Verify the consistency of the fusion result obtained in Step 4. In particular, when the BP neural network decides that a particular pixel is to come from $A$ but the majority of its surrounding pixels come from $B$, this pixel will be changed to come from $B$.
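Steps 2–5 can be summarized in a few lines, assuming the per-pixel feature maps have already been computed with the helpers sketched earlier; the 3 × 3 majority filter used here for consistency verification is an illustrative simplification of Step 5, and `net` is the classifier from the Section 2.1 sketch.

```python
# Hedged sketch of the initial fusion (Steps 2-5). Variable names are
# illustrative; feats_A and feats_B are (H, W, 3) arrays of (TF, LVI, LVC).
import numpy as np
from scipy.ndimage import uniform_filter

def initial_fusion(A, B, net, feats_A, feats_B):
    h, w = A.shape
    diff = (feats_A - feats_B).reshape(-1, 3)           # Step 3 input vectors
    out = net.predict_proba(diff)[:, 1].reshape(h, w)   # Step 4 network output
    take_A = out >= 0.5
    # Step 5: consistency verification - flip a pixel's decision when the
    # majority of its 3x3 neighbourhood disagrees with it.
    take_A = uniform_filter(take_A.astype(np.float64), size=3, mode='reflect') > 0.5
    F = np.where(take_A, A, B)                           # equation (11)
    return F, take_A
```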

3.2. The Method for Obtaining the Final Fused Image. In order to ensure that the pixels of the fused image come from the focused regions of each source image, we first need to identify the focused regions in each source image. Then the fused image can be constructed by simply selecting pixels in those regions. As for the boundary of the focused regions, the corresponding pixel of the initial fused image is selected as the pixel of the final fused image. The flow chart for obtaining the final fused image is illustrated in Figure 3.

3.2.1. Detection of the Focused Regions. The pixels of the source images with higher similarity to the corresponding initial fused image pixels can be considered to be located in the focused regions. Thus, the focused regions in each source image can be determined by this method. In this paper, we adopt the root mean square error (RMSE) [14] to measure the similarity between the source images and the initial fused image. Specifically, the algorithm for the detection of the focused regions consists of the following steps.

Step 1. Calculate the RMSE of each pixel within a $(2m+1) \times (2n+1)$ window between the source images and the initial fused image. Assume that $A$ and $B$ are two source images and $F$ is the initial fused image; the windowed RMSE maps $\mathrm{RMSE}_{AF}(x,y)$ and $\mathrm{RMSE}_{BF}(x,y)$ are computed accordingly. In order to acquire the best fusion effect, we tried different window sizes and found that the fusion effect is best when the size of the window is 5 × 5 or 7 × 7.

Step 2. Compare the values $\mathrm{RMSE}_{AF}(x,y)$ and $\mathrm{RMSE}_{BF}(x,y)$ to determine which pixel is in focus. The decision map, which is a binary image, is constructed as follows:

\[ Z(x,y) = \begin{cases} 1, & \text{if } \mathrm{RMSE}_{AF}(x,y) < \mathrm{RMSE}_{BF}(x,y), \\ 0, & \text{otherwise}, \end{cases} \quad (12) \]

where a "1" in $Z$ indicates that the pixel at position $(x,y)$ in source image $A$ is in focus; conversely, the pixel in source image $B$ is in focus. That is, the pixel with the smaller $\mathrm{RMSE}(x,y)$ value is more likely to be in focus.
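A sketch of Steps 1 and 2 follows. The per-pixel RMSE formulas are not reproduced above, so the sketch assumes the standard windowed RMSE between each source image and the initial fused image F, which is consistent with the text.

```python
# Hedged sketch of Steps 1-2 of Section 3.2.1.
import numpy as np
from scipy.ndimage import uniform_filter

def windowed_rmse(X, F, win=7):
    """RMSE_XF(x, y) over a win x win neighbourhood (5x5 or 7x7 per the paper)."""
    diff2 = (X.astype(np.float64) - F.astype(np.float64)) ** 2
    return np.sqrt(uniform_filter(diff2, size=win, mode='reflect'))

def decision_map(A, B, F, win=7):
    """Equation (12): Z = 1 where A is judged in focus, 0 where B is."""
    return (windowed_rmse(A, F, win) < windowed_rmse(B, F, win)).astype(np.uint8)
```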

Step 3. In order to determine all the focused pixels and avoid the misjudgment of pixels, morphological opening and closing with a small square structuring element and connected-domain analysis are employed. Opening, denoted as $Z \circ b$, means that $Z$ is first eroded by the structuring element $b$ and the result is then dilated by $b$. It can smooth the contours of the object and remove narrow connections and small protrusions. Like opening, closing can also smooth the contours of the object; the difference is that closing can join narrow gaps and fill holes smaller than the structuring element $b$. Closing is dilation by $b$ followed by erosion by $b$ and is denoted as $Z \bullet b$. In fact, those small holes are usually generated by the misjudgment of pixels. What is worse, holes larger than $b$ are hard to remove simply by using the opening and closing operators. Therefore, a threshold TH should be set to remove the holes smaller than the threshold but larger than $b$. Then opening and closing are used again to smooth the contours of the object. Finally, the focused regions of each source image can be acquired, which are more uniform and well connected.

As for the structuring element $b$ and the threshold TH, they can be determined according to the experimental results. In this paper, the structuring element $b$ is a 7 × 7 matrix of logical 1s. In order to remove small and isolated areas which are misjudged, two different thresholds are set. The first threshold is set to 20000 to remove areas which are focused in image $B$ but misjudged as blurred. The second threshold is set to 3000 to remove those areas which are focused in image $A$ but misjudged as blurred.
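One possible realization of Step 3 is sketched below; interpreting the two thresholds as connected-region areas in pixels, one per direction of misjudgment, is our reading of the text rather than an explicit statement in it.

```python
# Hedged sketch of Step 3: morphological cleanup of the decision map.
import numpy as np
from scipy.ndimage import binary_opening, binary_closing, label

def remove_small_regions(mask, min_area):
    """Keep only connected True-regions with at least min_area pixels."""
    lbl, n = label(mask)
    keep = np.zeros_like(mask, dtype=bool)
    for k in range(1, n + 1):
        region = lbl == k
        if region.sum() >= min_area:
            keep |= region
    return keep

def refine_decision_map(Z, th_region_A=20000, th_region_B=3000):
    b = np.ones((7, 7), dtype=bool)                      # 7x7 structuring element
    ZZ = binary_closing(binary_opening(Z.astype(bool), structure=b), structure=b)
    # drop small "A in focus" islands (areas focused in B but misjudged)
    ZZ = remove_small_regions(ZZ, th_region_A)
    # fill small "B in focus" holes (areas focused in A but misjudged)
    ZZ = ~remove_small_regions(~ZZ, th_region_B)
    # smooth the region contours once more
    ZZ = binary_closing(binary_opening(ZZ, structure=b), structure=b)
    return ZZ.astype(np.uint8)
```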

3.2.2. Fusion of the Focused Regions. The final fused image FF can be acquired according to the following fusion rule:

\[ \mathrm{FF}(x,y) = \begin{cases} A(x,y), & \text{if } ZZ(x,y) = 1 \text{ and } \mathrm{count}(x,y) = (2m+1) \times (2n+1), \\ B(x,y), & \text{if } ZZ(x,y) = 0 \text{ and } \mathrm{count}(x,y) = 0, \\ F(x,y), & \text{otherwise}, \end{cases} \quad (13) \]

where

\[ \mathrm{count}(x,y) = \sum_{i=-m}^{m} \sum_{j=-n}^{n} ZZ(x+i, y+j). \quad (14) \]

$ZZ$ is the modified $Z$ matrix of Step 3 in Section 3.2.1; $A(x,y)$, $B(x,y)$, $F(x,y)$, and $\mathrm{FF}(x,y)$ denote the gray value of the pixel at position $(x,y)$ of the source images ($A$ and $B$), the initial fused image $F$, and the final fused image FF, respectively; and $(2m+1) \times (2n+1)$ is the size of the sliding window. $\mathrm{count}(x,y) = (2m+1) \times (2n+1)$ means that the pixel at position $(x,y)$ in image $A$ is in focus and will be selected as the pixel of the final fused image FF directly. On the contrary, $\mathrm{count}(x,y) = 0$ indicates that the pixel at that position in image $B$ is focused and can be chosen as the pixel of the final fused image FF. Other cases, namely $0 < \mathrm{count}(x,y) < (2m+1) \times (2n+1)$, imply that the pixel at position $(x,y)$ is located on the boundary of the focused regions, and the corresponding pixel of the initial fused image $F$ is selected as the pixel of the final fused image FF.
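Given the refined decision map ZZ and the initial fused image F, the fusion rule of equations (13)-(14) can be written compactly; the window half-sizes below are illustrative.

```python
# Hedged sketch of the fusion rule in equations (13)-(14).
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_focused_regions(A, B, F, ZZ, m=3, n=3):
    win = (2 * m + 1) * (2 * n + 1)
    # count(x, y): number of ZZ == 1 pixels in the sliding window, equation (14)
    count = np.rint(uniform_filter(ZZ.astype(np.float64),
                                   size=(2 * m + 1, 2 * n + 1),
                                   mode='reflect') * win)
    FF = np.where(count == win, A,          # fully inside A's focused region
         np.where(count == 0, B, F))        # fully inside B's region, else boundary
    return FF
```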

4. Experimental Results and Performance Analysis

4.1. Experimental Setup. In this section, the first step is to train the BP neural network. The training experiment is performed on the standard, widely used "lena" image, which is a 256-level image that is entirely in focus. We then artificially produce three out-of-focus images blurred with Gaussian radii of 0.5, 1.0, and 1.5, respectively. A training set with a total of 4 × 256 × 256 pixel pairs is formed. The three features of each pixel, TF, LVI, and LVC, are extracted with α = 0.65. In addition, we artificially produce a pair of out-of-focus images, shown in Figures 4(a) and 4(b), which are acquired by blurring the left part and the middle part of the original image using the Gaussian function, respectively. To evaluate the advantages of the proposed fusion method, experiments are performed on three sets of source images, as shown in Figures 4, 5, and 6, respectively, including one set of source images produced artificially and two sets of source images acquired naturally. Their sizes are 256 × 256, 256 × 256, and 640 × 480, respectively. These images all contain multiple objects at different distances from the camera, and only those objects within the depth of field of the camera will be focused, while other objects will naturally be out of focus when the image is taken. For example, Figure 5(a) is focused on the testing card, while Figure 5(b) is focused on the Pepsi can.
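The training-set construction could be sketched as follows; since the exact pairing that yields 4 × 256 × 256 pixel pairs is not spelled out above, the code shows one plausible construction in which each blurred copy is paired with the sharp original in both orderings, and the Gaussian radius is treated as the filter's standard deviation.

```python
# Hedged sketch of the training-set construction described above.
# `extract_features` is an assumed helper returning the (TF, LVI, LVC) maps.
import numpy as np
from scipy.ndimage import gaussian_filter

def build_training_set(lena, extract_features, radii=(0.5, 1.0, 1.5)):
    """Return (X, y): feature-difference vectors and clearer/blurred labels."""
    X, y = [], []
    f_sharp = extract_features(lena)                     # (H, W, 3) of (TF, LVI, LVC)
    for r in radii:
        blurred = gaussian_filter(lena.astype(np.float64), sigma=r)
        f_blur = extract_features(blurred)
        # sharp-vs-blurred pairs in both orderings, so the network sees
        # positive (target 1) and negative (target 0) examples
        X.append((f_sharp - f_blur).reshape(-1, 3))
        y.append(np.ones(lena.size))
        X.append((f_blur - f_sharp).reshape(-1, 3))
        y.append(np.zeros(lena.size))
    return np.vstack(X), np.concatenate(y)
```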

In order to compare the performance of the proposed fusion method, fusion of these multifocus images is also performed using conventional and classical methods, namely, taking the average of the source images pixel by pixel, the gradient pyramid method [11], the DWT-based method, and the SIDWT-based method [16]. The decomposition level of the multiscale transforms is 4 layers. The wavelet bases of the DWT and SIDWT are DBSS(2,2) and Haar, respectively. The fusion rules for the lowpass subband coefficients and the highpass subband coefficients are the "averaging" scheme and the "absolute maximum choosing" scheme, respectively.

4.2. Evaluation Criteria. In general, the evaluation methods for image fusion can be categorized into subjective methods and objective methods. However, observers' personal visual differences and psychological factors will affect the results of image evaluation. Furthermore, in most cases it is difficult to perceive the difference among fusion results, so subjective evaluation of the fused results alone is never comprehensive. Hence, in addition to the subjective evaluation, we also adopt several metrics in this paper to objectively evaluate the image fusion results and quantitatively compare the different fusion methods.

Figure 4: Original images and fused images of "lena": (a) focus on the right; (b) focus on left and right sides; (c) fused image using average; (d) fused image using gradient pyramid; (e) fused image using DWT; (f) fused image using SIDWT; (g) fused image using the proposed method; (h) the initial focused region; (i) the modified focused region.

4.2.1. Mutual Information (MI). The mutual information $\mathrm{MI}_{AF}$ between the source image $A$ and the fused image $F$ is defined as follows:

\[ \mathrm{MI}_{AF} = \sum_{k=0}^{L-1} \sum_{i=0}^{L-1} p_{AF}(k,i) \log_2 \frac{p_{AF}(k,i)}{p_A(k) \times p_F(i)}, \quad (15) \]

where $p_{AF}$ is the jointly normalized histogram of $A$ and $F$; $p_A$ and $p_F$ are the normalized histograms of $A$ and $F$; $L$ is the number of gray levels of the image; and $k$ and $i$ represent the pixel values of images $A$ and $F$, respectively. The mutual information $\mathrm{MI}_{BF}$ between the source image $B$ and the fused image $F$ is defined similarly to $\mathrm{MI}_{AF}$. The mutual information between the source images $A$, $B$ and the fused image $F$ is then defined as

\[ \mathrm{MI}^{AB}_{F} = \mathrm{MI}_{AF} + \mathrm{MI}_{BF}. \quad (16) \]

This metric reflects the total amount of information that the fused image $F$ contains about the source images $A$ and $B$. The larger the value, the more information is obtained from the original images and the better the fusion effect.
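A histogram-based sketch of equations (15)-(16) for 8-bit images is given below; the 256-bin choice corresponds to L = 256 gray levels.

```python
# Hedged sketch of the mutual information metric of equations (15)-(16).
import numpy as np

def mutual_information(X, F, bins=256):
    joint, _, _ = np.histogram2d(X.ravel(), F.ravel(), bins=bins,
                                 range=[[0, 256], [0, 256]])
    p_xf = joint / joint.sum()                     # joint normalized histogram
    p_x = p_xf.sum(axis=1, keepdims=True)          # marginal of X
    p_f = p_xf.sum(axis=0, keepdims=True)          # marginal of F
    nz = p_xf > 0                                  # skip empty histogram cells
    return float(np.sum(p_xf[nz] * np.log2(p_xf[nz] / (p_x @ p_f)[nz])))

def fusion_mutual_information(A, B, F):
    """MI_F^{AB} = MI_AF + MI_BF, equation (16)."""
    return mutual_information(A, F) + mutual_information(B, F)
```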

Figure 5: Original and fused images of "pepsi": (a) focus on the right; (b) focus on the left; (c) fused image using average; (d) fused image using gradient pyramid; (e) fused image using DWT; (f) fused image using SIDWT; (g) fused image using the proposed method; (h) the initial focused region; (i) the modified focused region.

4.2.2. $Q^{AB/F}$. The metric $Q^{AB/F}$ evaluates the sum of edge information preservation values and is defined as follows:

\[ Q^{AB/F} = \frac{\sum_{m=1}^{M} \sum_{n=1}^{N} \left( Q^{AF}(m,n)\,\omega_A(m,n) + Q^{BF}(m,n)\,\omega_B(m,n) \right)}{\sum_{m=1}^{M} \sum_{n=1}^{N} \left( \omega_A(m,n) + \omega_B(m,n) \right)}, \quad (17) \]

where $Q^{AF}(m,n) = Q_g^{AF}(m,n)\,Q_\alpha^{AF}(m,n)$; $Q_g^{AF}(m,n)$ and $Q_\alpha^{AF}(m,n)$ are the edge strength and orientation preservation values, respectively; $Q^{BF}(m,n)$ is defined similarly to $Q^{AF}(m,n)$; and $\omega_A(m,n)$ and $\omega_B(m,n)$ are weights measuring the importance of $Q^{AF}(m,n)$ and $Q^{BF}(m,n)$, respectively. The dynamic range of $Q^{AB/F}$ is $[0, 1]$, and it should be as close to 1 as possible; for the "ideal fusion", $Q^{AB/F} = 1$. In addition, $(m,n)$ represents the pixel location, and $M$ and $N$ are the dimensions of the images.

The $Q^{AB/F}$ metric reflects the quality of visual information obtained from the fusion of the input images. Therefore, the larger the value, the better the performance.

Figure 6: Original and fused images of "disk": (a) focus on the right; (b) focus on the left; (c) fused image using average; (d) fused image using gradient pyramid; (e) fused image using DWT; (f) fused image using SIDWT; (g) fused image using the proposed method; (h) the initial focused region; (i) the modified focused region.

4.2.3. Correlation Coefficient (CORR). The correlation coefficient between the fused image $F$ and the standard reference image $R$ is defined as follows:

\[ \mathrm{CORR} = \frac{\sum_{m=1}^{M} \sum_{n=1}^{N} \left( R(m,n) - \bar{R} \right) \left( F(m,n) - \bar{F} \right)}{\sqrt{\sum_{m=1}^{M} \sum_{n=1}^{N} \left( R(m,n) - \bar{R} \right)^2 \times \sum_{m=1}^{M} \sum_{n=1}^{N} \left( F(m,n) - \bar{F} \right)^2}}, \quad (18) \]

where $\bar{R}$ and $\bar{F}$ represent the mean gray values of the standard reference image $R$ and the fused image $F$, respectively.

This metric reflects the degree of correlation between the fused image and the standard reference image. The larger the value, the better the fusion effect.

4.2.4. Root Mean Squared Error (RMSE). The root mean square error (RMSE) between the fused image $F$ and the standard reference image $R$ is defined as follows:

\[ \mathrm{RMSE} = \sqrt{\frac{\sum_{m=1}^{M} \sum_{n=1}^{N} \left( R(m,n) - F(m,n) \right)^2}{M \times N}}. \quad (19) \]

This metric is used to measure the difference between the fused image and the standard reference image. The smaller the value, the better the fusion effect.
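Both reference-based metrics, equations (18) and (19), reduce to a few lines; the sketch assumes a gray-level reference image R of the same size as F.

```python
# Hedged sketch of the reference-based metrics: CORR (18) and RMSE (19).
import numpy as np

def correlation_coefficient(R, F):
    R = R.astype(np.float64) - R.mean()
    F = F.astype(np.float64) - F.mean()
    return float(np.sum(R * F) / np.sqrt(np.sum(R ** 2) * np.sum(F ** 2)))

def rmse(R, F):
    diff = R.astype(np.float64) - F.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))
```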

4.3. Fusion of Artificial Test Images. This experiment is performed on the pair of "lena" multifocus images shown in Figures 4(a) and 4(b). The initial and modified detected focused regions are shown in Figures 4(h) and 4(i), respectively. The white pixels in Figure 4(i) indicate that the corresponding pixels from Figure 4(a) are in focused regions, while the black pixels indicate that the corresponding pixels from Figure 4(b) are in focused regions. By comparison, we can observe that the detected focused regions of Figure 4(i) are better than those of Figure 4(h); for example, there are some misdetected focused regions on the right side of Figure 4(h), whereas they are correctly detected in Figure 4(i), because the right side of the image is almost totally white. The fusion results obtained by the five different methods mentioned above are shown in Figures 4(c)–4(g), respectively. It can be found that the results of the pixel averaging and gradient pyramid methods have a poor contrast compared to those of the DWT-based method, the SIDWT-based method, and the proposed method. However, it is difficult to perceive the difference among the results of the DWT-based method, the SIDWT-based method, and the proposed method by subjective evaluation. Therefore, to objectively evaluate these five fusion methods, quantitative assessments of the five fusion results are needed. The results of the quantitative assessments are shown in Table 1. As can be seen from Table 1, the MI, $Q^{AB/F}$, and CORR values of the proposed method are higher, and the RMSE value is lower, than those of the other methods, which means that the best quantitative evaluation results have been achieved by the proposed method.

Table 1: Performance comparison of different fusion algorithms in Figure 4.

Fusion algorithms    MI        Q^AB/F     CORR       RMSE
Average method       7.4946    0.72072    0.98987    8.8023
Gradient pyramid     5.102     0.72693    0.98876    13.791
DWT                  7.1207    0.76948    0.99784    4.0532
SIDWT                7.3712    0.77171    0.99553    5.7595
Proposed method      9.8067    0.81128    0.99962    2.8699

4.4. Fusion of Real Digital Camera Images. The experiments carried out in this section are performed on two sets of source images acquired naturally, as shown in Figures 5(a)-5(b) and Figures 6(a)-6(b), respectively. The initial and modified detected focused regions of these two sets of source images are shown in Figures 5(h)-5(i) and Figures 6(h)-6(i), respectively. The fused images obtained by using the pixel averaging method, the gradient pyramid method, the DWT-based method, the SIDWT-based method, and the proposed method on these two sets of source images are shown in Figures 5(c)–5(g) and Figures 6(c)–6(g), respectively. From the fusion results, we can easily observe that the fusion effects obtained with the pixel averaging and gradient pyramid methods are not satisfactory and have poor contrast. For example, the regions of the testing card in Figures 5(c)-5(d) are not clear, but they are clear in Figures 5(e)–5(g). However, it is difficult to discriminate the difference among the results of the DWT-based method, the SIDWT-based method, and the proposed method by subjective evaluation, so objective evaluation is needed. It should be noted that the reference image is usually not available for real multifocus images, so only two evaluation criteria, MI and $Q^{AB/F}$, are used to objectively compare the fusion results. The quantitative comparison of the five methods for the fusion of these two sets of source images is shown in Tables 2 and 3, respectively. As can be seen from the two tables, the MI and $Q^{AB/F}$ values of the proposed method are significantly higher than those of the other methods. It should be noted that we have carried out experiments on other multifocus images, and their results are consistent with these two examples, so we do not report all of them here. Therefore, the results of the subjective and objective evaluation presented here verify that the performance of the proposed method is superior to that of the other methods.

Table 2: Performance comparison of different fusion algorithms in Figure 5.

Fusion algorithms    MI        Q^AB/F
Average method       7.2941    0.64922
Gradient pyramid     5.9768    0.67975
DWT                  6.4442    0.68264
SIDWT                6.8216    0.70890
Proposed method      9.1957    0.75904

Table 3: Performance comparison of different fusion algorithms in Figure 6.

Fusion algorithms    MI        Q^AB/F
Average method       5.9845    0.52143
Gradient pyramid     5.3657    0.63792
DWT                  5.3949    0.64323
SIDWT                5.8380    0.67620
Proposed method      8.3105    0.73806

5. Conclusions

By combining the idea of the correlation between neighboring pixels with BP neural networks, a novel multifocus image fusion method based on HVS and BP neural network is proposed in this paper. Three features, which are based on HVS and can reflect the clarity of a pixel, are extracted and used to train a BP neural network to determine which pixel is clearer. The clearer pixels are combined to form the initial fused image. Then the focused regions are detected by judging whether pixels from the initial fused image are in the focused regions or not. Finally, the final fused image is obtained with the help of the focused region detection technique and a certain fusion rule. The results of the subjective and objective evaluation of several experiments show that the proposed method outperforms several popular, widely used fusion methods. In the future, we will focus on improving the robustness of the method to noise.

Conflict of Interests

The authors declare no conflict of interests


Acknowledgments

The authors would like to thank the anonymous reviewers for their valuable comments and constructive suggestions. This work was supported by the National Natural Science Foundation of China (nos. 60963012 and 61262034), by the Key Project of the Chinese Ministry of Education (no. 211087), by the Natural Science Foundation of Jiangxi Province (nos. 20114BAB211020 and 20132BAB201025), by the Young Scientist Foundation of Jiangxi Province (no. 20122BCB23017), and by the Science and Technology Research Project of the Education Department of Jiangxi Province (no. GJJ13302).

References

[1] P. Shah, S. N. Merchant, and U. B. Desai, "Multifocus and multispectral image fusion based on pixel significance using multiresolution decomposition," Signal, Image and Video Processing, vol. 7, pp. 95–109, 2013.

[2] V. Aslantas and R. Kurban, "Fusion of multi-focus images using differential evolution algorithm," Expert Systems with Applications, vol. 37, no. 12, pp. 8861–8870, 2010.

[3] R. Benes, P. Dvorak, M. Faundez-Zanuy, V. Espinosa-Duro, and J. Mekysk, "Multi-focus thermal image fusion," Pattern Recognition Letters, vol. 34, pp. 536–544, 2013.

[4] J. Dong, D. Zhuang, Y. Huang, and J. Fu, "Advances in multi-sensor data fusion algorithms and applications," Sensors, vol. 9, no. 10, pp. 7771–7784, 2009.

[5] S. Li and B. Yang, "Hybrid multiresolution method for multisensor multimodal image fusion," IEEE Sensors Journal, vol. 10, no. 9, pp. 1519–1526, 2010.

[6] S. Li, B. Yang, and J. Hu, "Performance comparison of different multi-resolution transforms for image fusion," Information Fusion, vol. 12, no. 2, pp. 74–84, 2011.

[7] W. Huang and Z. Jing, "Evaluation of focus measures in multi-focus image fusion," Pattern Recognition Letters, vol. 28, no. 4, pp. 493–500, 2007.

[8] S. Li, J. T. Kwok, and Y. Wang, "Multifocus image fusion using artificial neural networks," Pattern Recognition Letters, vol. 23, no. 8, pp. 985–997, 2002.

[9] M. B. A. Haghighat, A. Aghagolzadeh, and H. Seyedarabi, "Multi-focus image fusion for visual sensor networks in DCT domain," Computers and Electrical Engineering, vol. 37, no. 5, pp. 789–797, 2011.

[10] P. J. Burt and E. H. Adelson, "The Laplacian pyramid as a compact image code," IEEE Transactions on Communications, vol. 31, no. 4, pp. 532–540, 1983.

[11] P. J. Burt, "A gradient pyramid basis for pattern selective image fusion," in Proceedings of the Society for Information Display Conference, pp. 467–470, 1992.

[12] A. Toet, "Image fusion by a ratio of low-pass pyramid," Pattern Recognition Letters, vol. 9, no. 4, pp. 245–253, 1989.

[13] G. Pajares and J. M. de la Cruz, "A wavelet-based image fusion tutorial," Pattern Recognition, vol. 37, no. 9, pp. 1855–1872, 2004.

[14] Y. Zheng, E. A. Essock, B. C. Hansen, and A. M. Haun, "A new metric based on extended spatial frequency and its application to DWT based fusion algorithms," Information Fusion, vol. 8, no. 2, pp. 177–192, 2007.

[15] Y. Yang, D. S. Park, S. Huang, and N. Rao, "Medical image fusion via an effective wavelet-based approach," EURASIP Journal on Advances in Signal Processing, vol. 2010, Article ID 579341, 13 pages, 2010.

[16] M. Unser, "Texture classification and segmentation using wavelet frames," IEEE Transactions on Image Processing, vol. 4, no. 11, pp. 1549–1560, 1995.

[17] F. Nencini, A. Garzelli, S. Baronti, and L. Alparone, "Remote sensing image fusion using the curvelet transform," Information Fusion, vol. 8, no. 2, pp. 143–156, 2007.

[18] L. Yang, B. L. Guo, and W. Ni, "Multimodality medical image fusion based on multiscale geometric analysis of contourlet transform," Neurocomputing, vol. 72, no. 1–3, pp. 203–211, 2008.

[19] Q. Zhang and B.-L. Guo, "Multifocus image fusion using the nonsubsampled contourlet transform," Signal Processing, vol. 89, no. 7, pp. 1334–1346, 2009.

[20] B. Yang and S. Li, "Multifocus image fusion and restoration with sparse representation," IEEE Transactions on Instrumentation and Measurement, vol. 59, no. 4, pp. 884–892, 2010.

[21] W. Wu, X. M. Yang, Y. Pang, J. Peng, and G. Jeon, "A multifocus image fusion method by using hidden Markov model," Optics Communications, vol. 287, pp. 63–72, 2013.

[22] Z. Wang, Y. Ma, and J. Gu, "Multi-focus image fusion using PCNN," Pattern Recognition, vol. 43, no. 6, pp. 2003–2016, 2010.

[23] D. Agrawal and J. Singhai, "Multifocus image fusion using modified pulse coupled neural network for improved image quality," IET Image Processing, vol. 4, no. 6, pp. 443–451, 2010.

[24] F. Zhang and H. Y. Chang, "Employing BP neural networks to alleviate the sparsity issue in collaborative filtering recommendation algorithms," Journal of Computer Research and Development, vol. 43, pp. 667–672, 2006.

[25] P. F. Xiao and X. Z. Feng, Segmentation and Information Extraction of High-Resolution Remote Sensing Image, Science Press, Beijing, China, 2012.

[26] R. B. Huang, F. N. Lang, and Z. Shi, "Log-Gabor and 2D semi-supervised discriminant analysis based face image retrieval," Application Research of Computers, vol. 29, pp. 393–396, 2012.

[27] Y. Chai, H. Li, and Z. Li, "Multifocus image fusion scheme using focused region detection and multiresolution," Optics Communications, vol. 284, no. 19, pp. 4376–4389, 2011.

[28] H.-F. Li, Y. Chai, and X.-Y. Zhang, "Multifocus image fusion algorithm based on multiscale products and property of human visual system," Control and Decision, vol. 27, no. 3, pp. 355–361, 2012.

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 2: Research Article Effective Multifocus Image Fusion Based ...

2 The Scientific World Journal

the spatial domain so they have high computational effi-ciency However they are also faced with some problemsThe first problem is how to determine the suitable size ofthe subblockThesemethods usually suffer from block effectswhich severely reduce the quality of the fused image if the sizeof the subblock is selected unreasonably Another problemis that which evaluation criteria would be more suitable tomeasure the clarity of the subblocks In recent years variousapproaches based on multiscale transforms have been pro-posed including pyramid transform and wavelet transformsuch as the Laplacian pyramid [10] gradient pyramid [11]the ratio of low pass pyramid [12] discrete wavelet transform(DWT) [13ndash15] shift-invariant discrete wavelet transform(SIDWT) [16] curvelet transform [17] contourlet transform[18] and nonsubsampled contourlet transform (NSCT) [19]Pyramid decomposition-based image fusion can achieve agood effect However the pyramid decomposition of theimage is redundant decomposition The information of thedifferent decomposition layers is correlative which makesit easy to reduce the stability of the algorithm GenerallyDWT is superior to the previous pyramid-based methodsbecause of providing directional information and withoutcarrying redundant information across different resolutionsMoreover DWT has good locality of time frequency How-ever these methods based on multiscale transforms are shift-variant namely their performance will quickly deterioratewhen there is a slight cameraobject movement or there ismisregistration of the source images [7 20] Although theSIDWT [16] and NSCT [19] algorithms both can overcomethe shortcomingmentioned above the implementation of thealgorithm is more complicated and more time-consumingBesides some information of the source images may be lostduring the inverse multiresolution transform implementa-tion [21] Recently pulse coupled neural network (PCNN)hasalso been introduced to the multifocus image fusion as seenin literature [22 23] However the PCNN technique is verycomplex and has too many parameters In addition it is longand time-consuming

In order to overcome the shortcoming of the methodsmentioned above in this paper we propose a pixel levelmultifocus image fusion method based on HVS and BPneural network Firstly three features including texturefeature local visibility and local visual feature contrast areextracted based on HVS and are used to train the BP neuralnetwork Secondly the initial fused image is acquired usingBP neural network followed by a consistency verificationprocess Then in order to avoid yielding any artificial orerroneous information that may be introduced during theprocess of preliminary fusion the focused regions in eachsource image are determined by a hybrid procedure Finallythe fused image is obtained based on the focused regionsand initial fused image The experiments show that theperformance of the proposed method is superior to severalexisting fusion methods

The rest of the paper is organized as follows The relatedtheory of the proposed method is described in Section 2The fusion method that is based on HVS and BP neuralnetwork is introduced in Section 3 Experimental results

Inputlayer

Hiddenlayer

Output layer

Figure 1 Architecture of BP neural network

and performance analysis are presented and discussedin Section 4 and the last section gives some concludingremarks

2 Related Theoretical Knowledge

21 BP Neural Network BP neural network is a multilayerfeed-forward neural network which is one of themost widelyused neural networks The problem of multifocus fusionbased on BP neural work can be considered as a classificationproblem focused or blurred

The basic BP neural network is a three-layer networkincluding input layer hidden layer and output layer Thearchitecture of BP neural network in the paper is shown inFigure 1 According to [24] we also adopt empirical formulato determine the number of nodes of the hidden layer andthe formula is defined as follows

119899ℎ = sqrt (119899119894 + 119899119900) + 1 (1)

where 119899ℎ 119899119894 and 119899119900 are the number of nodes of the hiddenlayer the number of nodes of the input layer and the numberof nodes of the output layer respectively

22 Features Extraction In this paper for each pixel weextract three features based on the pixel centered of the 3 times 3

window to reflect its clarity These are the texture featureslocal visibility and local visual feature contrast

221 Texture Features Log-Gabor filter was designed inthe log coordinate system which is more conducive to thetexture feature extraction [25] The main advantage of thelog-Gabor functions is that it can construct filters witharbitrary bandwidth under the condition of maintainingthe DC component 0 which reduces filters redundancyFurthermore log-Gabor filters are more in line with theHVS Texture features (TF) based on amplitude informationreflect the high and low frequency energy distribution of theimages Therefore taking the advantages of the log-Gaborfilters into account texture features of the multifocus imagebased on amplitude information will be extracted using log-Gabor filters 2D Log-Gabor filter is defined in the frequencydomain as follows [26]

119867 (119891 120579) = 119867119891 times 119867120579 (2)

The Scientific World Journal 3

where 119867119891 is radial component and 119867120579 is direction compo-nents Specifically the expressions are as follows

119867119891 = exp

minus[log(1198911198910)]2

2[log(1205901198911198910)]

2

119867120579 = expminus

(120579 minus 1205790)2

21205902

120579

(3)

inwhich1198910 is the center frequency of filters 1205790 is the directionof filters and 120590119891 is a constant that controls radial filtersbandwidth 119861119891 Consider

119861119891 = 2radic

2

log 2

times

10038161003816100381610038161003816100381610038161003816

log(

120590119891

1198910

)

10038161003816100381610038161003816100381610038161003816

(4)

In order to obtain log-Gabor filters with the same bandwidth120590119891 must be changed along with 1198910 so that the value of

1205901198911198910 is constant 120590120579 determines direction bandwidth 119861120579Consider

119861120579 = 2120590120579radic2 log 2 (5)

222 Local Visibility In the paper we introduce the conceptof the image visibility (VI) which is inspired from the HVSand defined as follows [27]

VI =

1

119872 times 119873

119872

sum

119894=1

119873

sum

119895=1

(

1

119898119896

)

120572

times

1003816100381610038161003816119868 (119894 119895) minus 119898119896

1003816100381610038161003816

119898119896

(6)

where 119898119896 is themean intensity value of the image120572 is a visualconstant ranging from 06 to 07 and 119868(119894 119895) denotes the grayvalue of pixel at position (119894 119895)

VI is more significant in multifocus image fusion thandifferent sensor image fusion and the measurement has beensuccessfully used in multifocus image fusion [27] In thepaper in order to represent the clarity of a pixel the localvisibility (LVI) in spatial domain is proposed The LVI isdefined as

LVI (119909 119910) =

1

(2119898 + 1) times (2119899 + 1)

119898

sum

119894=minus119898

119899

sum

119895=minus119899

(

1

119868(119909 119910)

)

120572

times

10038161003816100381610038161003816119868 (119909 + 119894 119910 + 119895) minus 119868 (119909 119910)

10038161003816100381610038161003816

119868 (119909 119910)

if 119868 (119909 119910) = 0

119868 (119909 119910) otherwise(7)

where (2119898 + 1) times (2119899 + 1) is the size of neighborhood windowand 119868(119909 119910) is the mean intensity value of the pixel (119909 119910)

centered of the (2119898 + 1) times (2119899 + 1) window

223 Local Visual Feature Contrast The findings of psychol-ogy and physiology have shown that HVS is highly sensitiveto changes in the local contrast of the image but insensitiveto real luminance at each pixel [28] The local luminancecontrast formula is defined as follows

119862 =

119871 minus 119871119861

119871119861

=

Δ119871

119871119861

(8)

where 119871 is the local luminance and 119871119861 is the local luminanceof the background namely the low frequency componentThereforeΔ119871 can be taken as the high frequency componentHowever the value of single pixel is not enough to determinewhich pixel is focused without considering the correlationbetween the surrounding pixels Therefore to represent thesalient features of the image more accurately the local visualfeature (LVC) contrast in spatial domain is introduced and isdefined as

LVC (119909 119910) =

(

1

119868(119909 119910)

)

120572

times

SML (119909 119910)

119868 (119909 119910)

if 119868 (119909 119910) = 0

SML (119909 119910) otherwise(9)

where 119868(119909 119910) is the mean intensity value of the pixel (119909 119910)

centered of the neighborhood window 120572 is a visual constant

ranging from 06 to 07 and the SML(119909 119910) denotes the sum-modified-Laplacian (SML) located at (119909 119910) and more detailsabout SML can be found in [7]

3. The Proposed Multifocus Image Fusion Method

3.1. Initial Fused Image Obtained by BP Neural Network. Figure 2 shows the schematic diagram of the proposed method for obtaining the initial fused image based on a BP neural network. Here we only consider the case of two-source-image fusion, though the method can be extended straightforwardly to handle more than two sources, under the assumption that the source images have already been registered.

The algorithm first calculates the salient features of each pixel from each source image by averaging over a small window. For each pair of pixels (one from each source image), a BP neural network is trained to determine which one is in focus. The initial fused image is then constructed by selecting the clearer pixel, followed by a consistency verification process. Specifically, the algorithm consists of the following steps.

Step 1. Assume that there are two source images $A$ and $B$. Denote the $i$th pixel pair by $A_i$ and $B_i$, respectively.

Step 2. For each pixel, extract the three features that reflect its clarity, computed over the $3\times 3$ window centered at the pixel (details in Section 2.2).


Figure 2: Schematic diagram of the BP neural network based fusion method. (Pipeline: source images A and B → feature extraction (texture feature, local visibility, local visual feature contrast) → BP neural network → fused result based on the BP neural network → consistency verification → initial fused image.)

Figure 3: Schematic diagram of the proposed image fusion method. (Pipeline: source images A and B → feature extraction (texture feature, local visibility, local visual feature contrast) → BP neural network → fused result based on the BP neural network → consistency verification → initial fused image → similarity measure → morphological opening and closing → determine focused regions → select border pixels → final fused image.)

Denote the feature vectors for $A_i$ and $B_i$ by $(\mathrm{TF}_{A_i}, \mathrm{LVI}_{A_i}, \mathrm{LVC}_{A_i})$ and $(\mathrm{TF}_{B_i}, \mathrm{LVI}_{B_i}, \mathrm{LVC}_{B_i})$, respectively.

Step 3. Train a BP neural network to determine which pixel is clearer. The difference vector $(\mathrm{TF}_{A_i} - \mathrm{TF}_{B_i},\ \mathrm{LVI}_{A_i} - \mathrm{LVI}_{B_i},\ \mathrm{LVC}_{A_i} - \mathrm{LVC}_{B_i})$ is used as the input, and the output label is assigned according to

$$\mathrm{target}_i = \begin{cases} 1, & \text{if } A_i \text{ is clearer than } B_i,\\ 0, & \text{otherwise}. \end{cases} \quad (10)$$

Step 4. Apply the trained BP neural network to all pixel pairs. The $i$th pixel $F_i$ of the fused image is then constructed as

$$F_i = \begin{cases} A_i, & \text{if } \mathrm{out}_i \ge 0.5,\\ B_i, & \text{otherwise}, \end{cases} \quad (11)$$

where $\mathrm{out}_i$ is the output of the BP neural network for the $i$th pixel pair.

Step 5. Verify the consistency of the fusion result obtained in Step 4. Specifically, when the BP neural network decides that a particular pixel should come from $A$ while the majority of its surrounding pixels come from $B$, this pixel is changed to come from $B$.
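The following sketch illustrates Steps 3–5 under our own assumptions (a small scikit-learn MLP stands in for the BP network, and a 3 × 3 majority window is used for the consistency verification); it is not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.neural_network import MLPClassifier

def train_bp(feat_diff, labels):
    """Step 3: fit a small back-propagation network on per-pixel feature differences."""
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500)
    return clf.fit(feat_diff.reshape(-1, feat_diff.shape[-1]), labels.ravel())

def initial_fusion(A, B, feat_diff, clf):
    """Steps 4-5: pick the clearer pixel per Eq. (11), then enforce 3 x 3 consistency."""
    out = clf.predict_proba(feat_diff.reshape(-1, feat_diff.shape[-1]))[:, 1]
    from_A = (out.reshape(A.shape) >= 0.5).astype(np.float64)      # 1 -> take pixel from A
    from_A = uniform_filter(from_A, size=3, mode='reflect') > 0.5  # majority vote of neighbours
    return np.where(from_A, A, B)
```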

3.2. The Method for Obtaining the Final Fused Image. In order to ensure that the pixels of the fused image come from the focused regions of the source images, we first need to identify the focused regions in each source image. The fused image can then be constructed by simply selecting pixels from those regions. As for the boundary of the focused regions, the corresponding pixel of the initial fused image is selected as the pixel of the final fused image. The flow of the proposed procedure for obtaining the final fused image is illustrated in Figure 3.

3.2.1. Detection of the Focused Regions. Pixels of a source image with higher similarity to the corresponding pixels of the initial fused image can be considered to lie in focused regions, so the focused regions of each source image can be determined by measuring this similarity. In this paper we adopt the root mean square error (RMSE) [14] to measure the similarity between the source images and the initial fused image. Specifically, the detection of the focused regions consists of the following steps.

Step 1. Calculate the RMSE of each pixel within a $(2m+1)\times(2n+1)$ window between each source image and the initial fused image. Assume that $A$ and $B$ are the two source images and $F$ is the initial fused image; then

$$\mathrm{RMSE}_{AF}(x,y) = \sqrt{\frac{1}{(2m+1)\times(2n+1)}\sum_{i=-m}^{m}\sum_{j=-n}^{n}\bigl(A(x+i,y+j)-F(x+i,y+j)\bigr)^{2}},$$

and $\mathrm{RMSE}_{BF}(x,y)$ is defined analogously for $B$ and $F$. In order to acquire the best fusion effect, we tried different window sizes and found that the fusion effect is best when the window size is $5\times 5$ or $7\times 7$.


Step 2. Compare the values $\mathrm{RMSE}_{AF}(x,y)$ and $\mathrm{RMSE}_{BF}(x,y)$ to determine which pixel is in focus. A decision map $Z$, which is a binary image, is constructed as follows:

$$Z(x,y) = \begin{cases} 1, & \text{if } \mathrm{RMSE}_{AF}(x,y) < \mathrm{RMSE}_{BF}(x,y),\\ 0, & \text{otherwise}, \end{cases} \quad (12)$$

where a "1" in $Z$ indicates that the pixel at position $(x,y)$ in source image $A$ is in focus, and a "0" indicates that the corresponding pixel in source image $B$ is in focus; that is, the pixel with the smaller $\mathrm{RMSE}(x,y)$ value is more likely to be in focus.
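A compact sketch of Steps 1–2, using a box filter to evaluate the windowed RMSE maps; the 7 × 7 window and reflective border handling are assumptions, not prescriptions from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def windowed_rmse(X, F, size=7):
    """Per-pixel RMSE between image X and the initial fused image F over a size x size window."""
    d2 = (X.astype(np.float64) - F.astype(np.float64)) ** 2
    return np.sqrt(uniform_filter(d2, size=size, mode='reflect'))

def decision_map(A, B, F, size=7):
    """Binary map Z of Eq. (12): 1 where A is judged in focus, 0 where B is."""
    return (windowed_rmse(A, F, size) < windowed_rmse(B, F, size)).astype(np.uint8)
```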

Step 3. To determine all the focused pixels and avoid misjudging pixels, morphological opening and closing with a small square structuring element, together with connected-component processing, are employed. Opening, denoted $Z \circ b$, first erodes $Z$ by the structuring element $b$ and then dilates the result by $b$; it smooths the contours of an object and removes narrow connections and small protrusions. Like opening, closing also smooths object contours, but it joins narrow gaps and fills holes smaller than the structuring element $b$; closing is dilation by $b$ followed by erosion by $b$ and is denoted $Z \bullet b$. In practice, small holes are usually generated by misjudged pixels. Worse, holes larger than $b$ are hard to remove using only the opening and closing operators. Therefore a threshold TH is set to remove the holes that are larger than $b$ but smaller than the threshold, and opening and closing are then applied again to smooth the object contours. Finally, the focused regions of each source image are obtained; they are more uniform and form well-connected regions.

The structuring element $b$ and the threshold TH are determined from experimental results. In this paper, $b$ is a $7\times 7$ matrix of logical ones. In order to remove small, isolated areas that are misjudged, two different thresholds are set: the first threshold is set to 20000 to remove areas that are focused in image $B$ but misjudged as blurred, and the second threshold is set to 3000 to remove areas that are focused in image $A$ but misjudged as blurred.
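A possible post-processing sketch for Step 3, written with scikit-image; the use of remove_small_holes/remove_small_objects and the way the two area thresholds are applied are our assumptions about one reasonable realization, not the authors' code.

```python
import numpy as np
from skimage.morphology import (binary_opening, binary_closing, square,
                                remove_small_holes, remove_small_objects)

def refine_decision_map(Z, se_size=7, hole_thresh=20000, object_thresh=3000):
    """Refine the binary decision map Z with opening/closing and small-region removal."""
    b = square(se_size)                                        # 7 x 7 structuring element of ones
    ZZ = binary_closing(binary_opening(Z.astype(bool), b), b)
    ZZ = remove_small_holes(ZZ, area_threshold=hole_thresh)    # fill misjudged holes
    ZZ = remove_small_objects(ZZ, min_size=object_thresh)      # drop small isolated regions
    ZZ = binary_closing(binary_opening(ZZ, b), b)              # smooth contours again
    return ZZ.astype(np.uint8)
```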

3.2.2. Fusion of the Focused Regions. The final fused image FF is acquired according to the following fusion rule:

$$\mathrm{FF}(x,y) = \begin{cases} A(x,y), & \text{if } ZZ(x,y) = 1 \text{ and } \mathrm{count}(x,y) = (2m+1)\times(2n+1),\\ B(x,y), & \text{if } ZZ(x,y) = 0 \text{ and } \mathrm{count}(x,y) = 0,\\ F(x,y), & \text{otherwise}, \end{cases} \quad (13)$$

where

$$\mathrm{count}(x,y) = \sum_{i=-m}^{m}\sum_{j=-n}^{n} ZZ(x+i,\,y+j). \quad (14)$$

Here $ZZ$ is the modified $Z$ matrix of Step 3 in Section 3.2.1; $A(x,y)$, $B(x,y)$, $F(x,y)$, and $\mathrm{FF}(x,y)$ denote the gray values at position $(x,y)$ of the source images ($A$ and $B$), the initial fused image $F$, and the final fused image FF, respectively; and $(2m+1)\times(2n+1)$ is the size of the sliding window. $\mathrm{count}(x,y) = (2m+1)\times(2n+1)$ indicates that the pixel at position $(x,y)$ in image $A$ is in focus and is selected directly as the pixel of the final fused image FF. Conversely, $\mathrm{count}(x,y) = 0$ indicates that the pixel at that position in image $B$ is in focus and is chosen as the pixel of the final fused image FF. The remaining cases, namely $0 < \mathrm{count}(x,y) < (2m+1)\times(2n+1)$, imply that the pixel at position $(x,y)$ lies on the boundary of the focused regions, and the corresponding pixel of the initial fused image $F$ is then selected as the pixel of the final fused image FF.
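For illustration, the fusion rule of (13)–(14) could be written as follows; the window size is an assumption, and zero padding at the image border is a simplification.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def final_fusion(A, B, F, ZZ, m=3, n=3):
    """Fusion rule of Eqs. (13)-(14): keep A inside fully focused regions of A, keep B
    inside fully focused regions of B, and fall back to the initial fused image F on borders."""
    win = (2 * m + 1) * (2 * n + 1)
    count = uniform_filter(ZZ.astype(np.float64), size=(2 * m + 1, 2 * n + 1),
                           mode='constant') * win            # windowed sum of ZZ
    count = np.rint(count)
    return np.where((ZZ == 1) & (count == win), A,
                    np.where((ZZ == 0) & (count == 0), B, F))
```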

4. Experimental Results and Performance Analysis

4.1. Experimental Setup. The first step in this section is to train the BP neural network. The training experiment is performed on the standard, widely used "lena" image, a 256-level gray image that is everywhere in focus. We then artificially produce three out-of-focus versions blurred with Gaussian radii of 0.5, 1.0, and 1.5, respectively, so that a training set with a total of 4 × 256 × 256 pixel pairs is formed. The three features of each pixel, TF, LVI, and LVC, are extracted with α = 0.65. In addition, we artificially produce a pair of out-of-focus images, shown in Figures 4(a) and 4(b), which are acquired by blurring the left part and the middle part of the original image with the Gaussian function, respectively. To evaluate the proposed fusion method, experiments are performed on three sets of source images, shown in Figures 4, 5, and 6: one set produced artificially and two sets acquired naturally. Their sizes are 256 × 256, 256 × 256, and 640 × 480, respectively. These images all contain multiple objects at different distances from the camera, so only the objects within the depth of field of the camera are in focus, while the other objects are naturally out of focus. For example, Figure 5(a) is focused on the testing card, while Figure 5(b) is focused on the Pepsi can.

To compare the performance of the proposed fusion method, these multifocus images are also fused using conventional and classical methods, namely, taking the average of the source images pixel by pixel, the gradient pyramid method [11], the DWT-based method, and the SIDWT-based method [16]. The decomposition level of the multiscale transforms is 4. The wavelet bases of the DWT and SIDWT are DBSS(2,2) and Haar, respectively. The fusion rules for the lowpass and highpass subband coefficients are the "averaging" scheme and the "absolute maximum choosing" scheme, respectively.

4.2. Evaluation Criteria. In general, evaluation methods for image fusion can be categorized into subjective methods and objective methods. However, observers' personal visual differences and psychological factors affect the results of subjective image evaluation, and in most cases it is difficult to perceive the differences among fusion results.


Figure 4: Original images and fused images of "lena": (a) focus on the right; (b) focus on the left and right sides; (c) fused image using average; (d) fused image using gradient pyramid; (e) fused image using DWT; (f) fused image using SIDWT; (g) fused image using the proposed method; (h) the initial focused region; (i) the modified focused region.

Subjective evaluation of the fused results alone is therefore always incomplete. Hence, in addition to the subjective evaluation, we also adopt several metrics to objectively evaluate the image fusion results and to quantitatively compare the different fusion methods.

4.2.1. Mutual Information (MI). The mutual information $\mathrm{MI}_{AF}$ between the source image $A$ and the fused image $F$ is defined as follows:

$$\mathrm{MI}_{AF} = \sum_{k=0}^{L-1}\sum_{i=0}^{L-1} p_{AF}(k,i)\,\log_2\frac{p_{AF}(k,i)}{p_A(k)\times p_F(i)}, \quad (15)$$

where $p_{AF}$ is the jointly normalized histogram of $A$ and $F$; $p_A$ and $p_F$ are the normalized histograms of $A$ and $F$; $L$ is the number of gray levels of the image; and $k$ and $i$ represent the pixel values of images $A$ and $F$, respectively. The mutual information $\mathrm{MI}_{BF}$ between the source image $B$ and the fused image $F$ is defined analogously to $\mathrm{MI}_{AF}$. The mutual information between the source images $A$, $B$ and the fused image $F$ is then defined as

$$\mathrm{MI}_{ABF} = \mathrm{MI}_{AF} + \mathrm{MI}_{BF}. \quad (16)$$

This metric reflects the total amount of information that the fused image $F$ contains about the source images $A$ and $B$. The larger the value, the more information is obtained from the original images and the better the fusion effect.
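A short sketch of the MI computation of (15)–(16) for 8-bit images; the histogram binning is an assumption.

```python
import numpy as np

def mutual_information(X, F, levels=256):
    """MI between source image X and fused image F, following Eq. (15)."""
    joint, _, _ = np.histogram2d(X.ravel(), F.ravel(),
                                 bins=levels, range=[[0, levels], [0, levels]])
    p_xf = joint / joint.sum()                        # jointly normalized histogram
    p_x = p_xf.sum(axis=1, keepdims=True)             # marginal histogram of X
    p_f = p_xf.sum(axis=0, keepdims=True)             # marginal histogram of F
    nz = p_xf > 0                                      # skip empty bins (log of 0)
    return float(np.sum(p_xf[nz] * np.log2(p_xf[nz] / (p_x @ p_f)[nz])))

# MI_ABF of Eq. (16):  mi_abf = mutual_information(A, F) + mutual_information(B, F)
```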


Figure 5: Original and fused images of "pepsi": (a) focus on the right; (b) focus on the left; (c) fused image using average; (d) fused image using gradient pyramid; (e) fused image using DWT; (f) fused image using SIDWT; (g) fused image using the proposed method; (h) the initial focused region; (i) the modified focused region.

4.2.2. $Q^{AB/F}$. The metric $Q^{AB/F}$ evaluates the sum of edge information preservation values and is defined as follows:

$$Q^{AB/F} = \frac{\displaystyle\sum_{m=1}^{M}\sum_{n=1}^{N}\bigl(Q^{AF}(m,n)\,\omega_{A}(m,n) + Q^{BF}(m,n)\,\omega_{B}(m,n)\bigr)}{\displaystyle\sum_{m=1}^{M}\sum_{n=1}^{N}\bigl(\omega_{A}(m,n) + \omega_{B}(m,n)\bigr)}, \quad (17)$$

where $Q^{AF}(m,n) = Q_{g}^{AF}(m,n)\,Q_{\alpha}^{AF}(m,n)$, in which $Q_{g}^{AF}(m,n)$ and $Q_{\alpha}^{AF}(m,n)$ are the edge strength and orientation preservation values, respectively; $Q^{BF}(m,n)$ is defined similarly to $Q^{AF}(m,n)$; and $\omega_{A}(m,n)$ and $\omega_{B}(m,n)$ are weights measuring the importance of $Q^{AF}(m,n)$ and $Q^{BF}(m,n)$, respectively. The dynamic range of $Q^{AB/F}$ is $[0,1]$, and it should be as close to 1 as possible; for the "ideal fusion", $Q^{AB/F} = 1$. In addition, $(m,n)$ represents the pixel location, and $M$ and $N$ are the dimensions of the images.

The $Q^{AB/F}$ metric reflects the quality of the visual information obtained from the fusion of the input images; therefore, the larger the value, the better the performance.


Figure 6: Original and fused images of "disk": (a) focus on the right; (b) focus on the left; (c) fused image using average; (d) fused image using gradient pyramid; (e) fused image using DWT; (f) fused image using SIDWT; (g) fused image using the proposed method; (h) the initial focused region; (i) the modified focused region.

4.2.3. Correlation Coefficient (CORR). The correlation coefficient between the fused image $F$ and the standard reference image $R$ is defined as follows:

$$\mathrm{CORR} = \frac{\displaystyle\sum_{m=1}^{M}\sum_{n=1}^{N}\bigl(R(m,n)-\bar{R}\bigr)\bigl(F(m,n)-\bar{F}\bigr)}{\sqrt{\displaystyle\sum_{m=1}^{M}\sum_{n=1}^{N}\bigl(R(m,n)-\bar{R}\bigr)^{2}\;\times\;\sum_{m=1}^{M}\sum_{n=1}^{N}\bigl(F(m,n)-\bar{F}\bigr)^{2}}}, \quad (18)$$

where $\bar{R}$ and $\bar{F}$ represent the mean gray values of the standard reference image $R$ and the fused image $F$, respectively.

This metric reflects the degree of correlation between the fused image and the standard reference image. The larger the value, the better the fusion effect.

4.2.4. Root Mean Squared Error (RMSE). The root mean square error (RMSE) between the fused image $F$ and the standard reference image $R$ is defined as follows:

$$\mathrm{RMSE} = \sqrt{\frac{\sum_{m=1}^{M}\sum_{n=1}^{N}\bigl(R(m,n)-F(m,n)\bigr)^{2}}{M \times N}}. \quad (19)$$

This metric measures the difference between the fused image and the standard reference image. The smaller the value, the better the fusion effect.
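For reference, the two reference-based criteria of (18)–(19) can be computed directly, for example as in this small sketch.

```python
import numpy as np

def corr_metric(R, F):
    """Correlation coefficient of Eq. (18) between the reference R and the fused image F."""
    r = R.astype(np.float64) - R.mean()
    f = F.astype(np.float64) - F.mean()
    return float((r * f).sum() / np.sqrt((r ** 2).sum() * (f ** 2).sum()))

def rmse_metric(R, F):
    """Root mean squared error of Eq. (19)."""
    diff = R.astype(np.float64) - F.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))
```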

4.3. Fusion of Artificial Test Images. This experiment is performed on the pair of "lena" multifocus images shown in Figures 4(a) and 4(b). The initial and modified detected focused regions are shown in Figures 4(h) and 4(i), respectively. The white pixels in Figure 4(i) indicate that the corresponding pixels of Figure 4(a) lie in focused regions, while the black pixels indicate that the corresponding pixels of Figure 4(b) lie in focused regions. By comparison, we can observe that the detected focused regions of Figure 4(i) are better than those of Figure 4(h); for example, there are some misdetected focused regions on the right side of Figure 4(h), whereas they are correctly detected in Figure 4(i); the misdetections occur because the right side of the image is almost entirely white.


Table 1: Performance comparison of different fusion algorithms in Figure 4.

Fusion algorithms     MI       Q^AB/F    CORR      RMSE
Average method        7.4946   0.72072   0.98987   8.8023
Gradient pyramid      5.102    0.72693   0.98876   13.791
DWT                   7.1207   0.76948   0.99784   4.0532
SIDWT                 7.3712   0.77171   0.99553   5.7595
Proposed method       9.8067   0.81128   0.99962   2.8699

The fusion results obtained by the five methods are shown in Figures 4(c)–4(g), respectively. It can be seen that the results of the pixel averaging and gradient pyramid methods have poor contrast compared with those of the DWT-based method, the SIDWT-based method, and the proposed method. However, it is difficult to perceive the differences among the results of the DWT-based method, the SIDWT-based method, and the proposed method by subjective evaluation alone. Therefore, to objectively evaluate these five fusion methods, quantitative assessments of the five fusion results are needed; the results are shown in Table 1. As can be seen from Table 1, the MI, $Q^{AB/F}$, and CORR values of the proposed method are higher, and its RMSE value is lower, than those of the other methods, which means that the proposed method achieves the best quantitative evaluation results.

4.4. Fusion of Real Digital Camera Images. The experiments in this section are performed on two sets of source images acquired naturally, as shown in Figures 5(a)-5(b) and Figures 6(a)-6(b), respectively. The initial and modified detected focused regions of these two sets of source images are shown in Figures 5(h)-5(i) and Figures 6(h)-6(i), respectively. The fused images obtained by the pixel averaging method, the gradient pyramid method, the DWT-based method, the SIDWT-based method, and the proposed method on these two sets of source images are shown in Figures 5(c)–5(g) and Figures 6(c)–6(g), respectively. From the fusion results we can easily observe that the fusion effects of the pixel averaging and gradient pyramid methods are not satisfactory and have poor contrast; for example, the regions of the testing card in Figures 5(c)-5(d) are not clear, whereas they are clear in Figures 5(e)–5(g). It is again difficult to discriminate among the results of the DWT-based method, the SIDWT-based method, and the proposed method by subjective evaluation, so objective evaluation is needed. However, it should be noted that a reference image is usually not available for real multifocus images, so only two evaluation criteria, MI and $Q^{AB/F}$, are used to objectively compare the fusion results. The quantitative comparison of the five methods on these two sets of source images is shown in Tables 2 and 3, respectively. As can be seen from the two tables, the MI and $Q^{AB/F}$ values of the proposed method are significantly higher than those of the other methods. It should also be noted that we have carried out experiments on other multifocus images and that their results are consistent with these two examples.

Table 2: Performance comparison of different fusion algorithms in Figure 5.

Fusion algorithms     MI       Q^AB/F
Average method        7.2941   0.64922
Gradient pyramid      5.9768   0.67975
DWT                   6.4442   0.68264
SIDWT                 6.8216   0.70890
Proposed method       9.1957   0.75904

Table 3: Performance comparison of different fusion algorithms in Figure 6.

Fusion algorithms     MI       Q^AB/F
Average method        5.9845   0.52143
Gradient pyramid      5.3657   0.63792
DWT                   5.3949   0.64323
SIDWT                 5.8380   0.67620
Proposed method       8.3105   0.73806

For brevity, those additional results are not presented here. The results of the subjective and objective evaluations presented above therefore verify that the performance of the proposed method is superior to that of the other methods.

5. Conclusions

By combining the correlation between neighboring pixels with BP neural networks, a novel multifocus image fusion method based on the HVS and a BP neural network has been proposed in this paper. Three features that are based on the HVS and reflect the clarity of a pixel are extracted and used to train a BP neural network to determine which pixel is clearer; the clearer pixels are combined to form the initial fused image. The focused regions are then detected by judging whether the pixels of the initial fused image lie in focused regions or not. Finally, the final fused image is obtained from the detected focused regions by the proposed fusion rule. The subjective and objective evaluations of several experiments show that the proposed method outperforms several popular, widely used fusion methods. In future work we will focus on improving the robustness of the method to noise.

Conflict of Interests

The authors declare no conflict of interests.


Acknowledgments

The authors would like to thank the anonymous reviewers for their valuable comments and constructive suggestions. This work was supported by the National Natural Science Foundation of China (nos. 60963012 and 61262034), by the Key Project of the Chinese Ministry of Education (no. 211087), by the Natural Science Foundation of Jiangxi Province (nos. 20114BAB211020 and 20132BAB201025), by the Young Scientist Foundation of Jiangxi Province (no. 20122BCB23017), and by the Science and Technology Research Project of the Education Department of Jiangxi Province (no. GJJ13302).

References

[1] P. Shah, S. N. Merchant, and U. B. Desai, "Multifocus and multispectral image fusion based on pixel significance using multiresolution decomposition," Signal, Image and Video Processing, vol. 7, pp. 95–109, 2013.
[2] V. Aslantas and R. Kurban, "Fusion of multi-focus images using differential evolution algorithm," Expert Systems with Applications, vol. 37, no. 12, pp. 8861–8870, 2010.
[3] R. Benes, P. Dvorak, M. Faundez-Zanuy, V. Espinosa-Duro, and J. Mekysk, "Multi-focus thermal image fusion," Pattern Recognition Letters, vol. 34, pp. 536–544, 2013.
[4] J. Dong, D. Zhuang, Y. Huang, and J. Fu, "Advances in multi-sensor data fusion: algorithms and applications," Sensors, vol. 9, no. 10, pp. 7771–7784, 2009.
[5] S. Li and B. Yang, "Hybrid multiresolution method for multisensor multimodal image fusion," IEEE Sensors Journal, vol. 10, no. 9, pp. 1519–1526, 2010.
[6] S. Li, B. Yang, and J. Hu, "Performance comparison of different multi-resolution transforms for image fusion," Information Fusion, vol. 12, no. 2, pp. 74–84, 2011.
[7] W. Huang and Z. Jing, "Evaluation of focus measures in multi-focus image fusion," Pattern Recognition Letters, vol. 28, no. 4, pp. 493–500, 2007.
[8] S. Li, J. T. Kwok, and Y. Wang, "Multifocus image fusion using artificial neural networks," Pattern Recognition Letters, vol. 23, no. 8, pp. 985–997, 2002.
[9] M. B. A. Haghighat, A. Aghagolzadeh, and H. Seyedarabi, "Multi-focus image fusion for visual sensor networks in DCT domain," Computers and Electrical Engineering, vol. 37, no. 5, pp. 789–797, 2011.
[10] P. J. Burt and E. H. Adelson, "The Laplacian pyramid as a compact image code," IEEE Transactions on Communications, vol. 31, no. 4, pp. 532–540, 1983.
[11] P. J. Burt, "A gradient pyramid basis for pattern selective image fusion," in Proceedings of the Society for Information Display Conference, pp. 467–470, 1992.
[12] A. Toet, "Image fusion by a ratio of low-pass pyramid," Pattern Recognition Letters, vol. 9, no. 4, pp. 245–253, 1989.
[13] G. Pajares and J. M. de la Cruz, "A wavelet-based image fusion tutorial," Pattern Recognition, vol. 37, no. 9, pp. 1855–1872, 2004.
[14] Y. Zheng, E. A. Essock, B. C. Hansen, and A. M. Haun, "A new metric based on extended spatial frequency and its application to DWT based fusion algorithms," Information Fusion, vol. 8, no. 2, pp. 177–192, 2007.
[15] Y. Yang, D. S. Park, S. Huang, and N. Rao, "Medical image fusion via an effective wavelet-based approach," EURASIP Journal on Advances in Signal Processing, vol. 2010, Article ID 579341, 13 pages, 2010.
[16] M. Unser, "Texture classification and segmentation using wavelet frames," IEEE Transactions on Image Processing, vol. 4, no. 11, pp. 1549–1560, 1995.
[17] F. Nencini, A. Garzelli, S. Baronti, and L. Alparone, "Remote sensing image fusion using the curvelet transform," Information Fusion, vol. 8, no. 2, pp. 143–156, 2007.
[18] L. Yang, B. L. Guo, and W. Ni, "Multimodality medical image fusion based on multiscale geometric analysis of contourlet transform," Neurocomputing, vol. 72, no. 1–3, pp. 203–211, 2008.
[19] Q. Zhang and B.-L. Guo, "Multifocus image fusion using the nonsubsampled contourlet transform," Signal Processing, vol. 89, no. 7, pp. 1334–1346, 2009.
[20] B. Yang and S. Li, "Multifocus image fusion and restoration with sparse representation," IEEE Transactions on Instrumentation and Measurement, vol. 59, no. 4, pp. 884–892, 2010.
[21] W. Wu, X. M. Yang, Y. Pang, J. Peng, and G. Jeon, "A multifocus image fusion method by using hidden Markov model," Optics Communications, vol. 287, pp. 63–72, 2013.
[22] Z. Wang, Y. Ma, and J. Gu, "Multi-focus image fusion using PCNN," Pattern Recognition, vol. 43, no. 6, pp. 2003–2016, 2010.
[23] D. Agrawal and J. Singhai, "Multifocus image fusion using modified pulse coupled neural network for improved image quality," IET Image Processing, vol. 4, no. 6, pp. 443–451, 2010.
[24] F. Zhang and H. Y. Chang, "Employing BP neural networks to alleviate the sparsity issue in collaborative filtering recommendation algorithms," Journal of Computer Research and Development, vol. 43, pp. 667–672, 2006.
[25] P. F. Xiao and X. Z. Feng, Segmentation and Information Extraction of High-Resolution Remote Sensing Image, Science Press, Beijing, China, 2012.
[26] R. B. Huang, F. N. Lang, and Z. Shi, "Log-Gabor and 2D semi-supervised discriminant analysis based face image retrieval," Application Research of Computers, vol. 29, pp. 393–396, 2012.
[27] Y. Chai, H. Li, and Z. Li, "Multifocus image fusion scheme using focused region detection and multiresolution," Optics Communications, vol. 284, no. 19, pp. 4376–4389, 2011.
[28] H.-F. Li, Y. Chai, and X.-Y. Zhang, "Multifocus image fusion algorithm based on multiscale products and property of human visual system," Control and Decision, vol. 27, no. 3, pp. 355–361, 2012.


Page 3: Research Article Effective Multifocus Image Fusion Based ...

The Scientific World Journal 3

where 119867119891 is radial component and 119867120579 is direction compo-nents Specifically the expressions are as follows

119867119891 = exp

minus[log(1198911198910)]2

2[log(1205901198911198910)]

2

119867120579 = expminus

(120579 minus 1205790)2

21205902

120579

(3)

inwhich1198910 is the center frequency of filters 1205790 is the directionof filters and 120590119891 is a constant that controls radial filtersbandwidth 119861119891 Consider

119861119891 = 2radic

2

log 2

times

10038161003816100381610038161003816100381610038161003816

log(

120590119891

1198910

)

10038161003816100381610038161003816100381610038161003816

(4)

In order to obtain log-Gabor filters with the same bandwidth120590119891 must be changed along with 1198910 so that the value of

1205901198911198910 is constant 120590120579 determines direction bandwidth 119861120579Consider

119861120579 = 2120590120579radic2 log 2 (5)

222 Local Visibility In the paper we introduce the conceptof the image visibility (VI) which is inspired from the HVSand defined as follows [27]

VI =

1

119872 times 119873

119872

sum

119894=1

119873

sum

119895=1

(

1

119898119896

)

120572

times

1003816100381610038161003816119868 (119894 119895) minus 119898119896

1003816100381610038161003816

119898119896

(6)

where 119898119896 is themean intensity value of the image120572 is a visualconstant ranging from 06 to 07 and 119868(119894 119895) denotes the grayvalue of pixel at position (119894 119895)

VI is more significant in multifocus image fusion thandifferent sensor image fusion and the measurement has beensuccessfully used in multifocus image fusion [27] In thepaper in order to represent the clarity of a pixel the localvisibility (LVI) in spatial domain is proposed The LVI isdefined as

LVI (119909 119910) =

1

(2119898 + 1) times (2119899 + 1)

119898

sum

119894=minus119898

119899

sum

119895=minus119899

(

1

119868(119909 119910)

)

120572

times

10038161003816100381610038161003816119868 (119909 + 119894 119910 + 119895) minus 119868 (119909 119910)

10038161003816100381610038161003816

119868 (119909 119910)

if 119868 (119909 119910) = 0

119868 (119909 119910) otherwise(7)

where (2119898 + 1) times (2119899 + 1) is the size of neighborhood windowand 119868(119909 119910) is the mean intensity value of the pixel (119909 119910)

centered of the (2119898 + 1) times (2119899 + 1) window

223 Local Visual Feature Contrast The findings of psychol-ogy and physiology have shown that HVS is highly sensitiveto changes in the local contrast of the image but insensitiveto real luminance at each pixel [28] The local luminancecontrast formula is defined as follows

119862 =

119871 minus 119871119861

119871119861

=

Δ119871

119871119861

(8)

where 119871 is the local luminance and 119871119861 is the local luminanceof the background namely the low frequency componentThereforeΔ119871 can be taken as the high frequency componentHowever the value of single pixel is not enough to determinewhich pixel is focused without considering the correlationbetween the surrounding pixels Therefore to represent thesalient features of the image more accurately the local visualfeature (LVC) contrast in spatial domain is introduced and isdefined as

LVC (119909 119910) =

(

1

119868(119909 119910)

)

120572

times

SML (119909 119910)

119868 (119909 119910)

if 119868 (119909 119910) = 0

SML (119909 119910) otherwise(9)

where 119868(119909 119910) is the mean intensity value of the pixel (119909 119910)

centered of the neighborhood window 120572 is a visual constant

ranging from 06 to 07 and the SML(119909 119910) denotes the sum-modified-Laplacian (SML) located at (119909 119910) and more detailsabout SML can be found in [7]

3 The Proposed Multifocus ImageFusion Method

31 Initial Fused Image Obtained by BP Neural NetworkFigure 2 shows the schematic diagram of the proposedmethod for obtaining the initial fused image based on BPneural network Here we only consider the case of two-source-image fusion though the method can be extendedstraightforwardly to handle more than two with the assump-tion that the source images have always been registered

The algorithm first calculates salient features of each pixelform each source image by averaging over a small windowAssume that there are two pixels (one from each sourceimage) and BP neural network is trained to determine whichone is in focus Then the initial fused image is constructedby selecting the clearer pixel followed by a consistencyverification process Specifically the algorithm consists of thefollowing steps

Step 1 Assume that there are two source images 119860 and 119861Denote the 119894th pixel pair by 119860 119894 and 119861119894 respectively

Step 2 For each pixel extract three features based on the pixelcentered of the 3 times 3 window which reflect its clarity (details

4 The Scientific World Journal

A

B

BP neural network

Fused result based on BP

neural network

Consistency verification

Initial fused image

Extract featurestexture featurelocal visibility

local visual feature contrast

Figure 2 Schematic diagram of the BP neural network based fusion method

A

B

BP neural network

Fused result based on BP

neural network

Consistency verification

Initial fused image

Extract featurestexture featurelocal visibility

local visual feature contrast

Similaritymeasure

Determine focused regions

Final fused image

Select border pixels

Morphological opening and

closing

Figure 3 Schematic diagram of the proposed image fusion method

in Section 22) Denote the feature vectors for 119860 119894 and 119861119894 by(TF119860119894 LVI119860119894 LVC119860119894) and (TF119861119894 LVI119861119894 LVC119861119894) respectively

Step 3 Train a BP neural network to determine which pixelis clearer The difference vector (TF119860119894 minus TF119861119894 LVI119860119894 minus LVI119861119894 LVC119860119894 minus LVC119861119894) is used as input and the output is labeledaccording to

target119894 =

1 if 119860 119894 is clearer than 119861119894

0 otherwise(10)

Step 4 Perform simulation of the trained BP neural networkon all pixel pairs The 119894th pixel 119865119894 of the fused image is thenconstructed as

119865119894 =

119860 119894 if out119894 ge 05

119861119894 otherwise(11)

where out119894 is the BP neural network output using the 119894th pixelpair as corresponding input

Step 5 Verify consistency of the result of the fusion obtainedin Step 4 Especially when the BPneural network decides thata particular pixel is to come from119860 butwith themajority of itssurrounding pixel from 119861 this pixel will be changed to comefrom 119861

32 The Method for Obtaining Final Fused Image In orderto ensure that the pixels of the fused image come from the

focused regions of each source image we need to identifythe focused regions in each source image firstly Then thefused image can be constructed by simply selecting pixels inthose regions And as for the boundary of focused regionsthe corresponding pixel of the initial fused image is selectedas the pixel of the final fused image Therefore we proposedthe following flow chart for obtaining the final fused image asillustrated in Figure 3

321 Detection of the Focused Regions The pixels of thesource images with higher similarity to the correspondinginitial fused image pixels can be considered to be located inthe focused regionsThus the focused regions in each sourceimage can be determined by this method In the paper weadopt root mean square error (RMSE) [14] to measure thesimilarity between the source images and the initial fusedimage Specifically the algorithm of the detection of focusedregions consists of the following steps

Step 1 Calculate the RMSE of each pixel within (2119898 + 1) times

(2119899 + 1) window between the source images and the initialfused image Assume that 119860 and 119861 are two source imagesand 119865 is the initial fused image The formulas are definedas follows respectively In order to acquire the best fusioneffect we have tried different window sizes and found thatthe fusion effect is best when the size of the window is 5times5 or7 times 7

The Scientific World Journal 5

Step 2 Compare the values RMSE119860119865(119909 119910) andRMSE119861119865(119909 119910)

to determine which pixel is in focus The decision diagramwhich is a binary image will be constructed as follows

119885 (119909 119910) =

1 if RMSE119860119865 (119909 119910) lt RMSE119861119865 (119909 119910)

0 otherwise(12)

where ldquo1rdquo in 119885 indicates that the pixel at position (119909 119910) insource image 119860 is in focus conversely the pixel in sourceimage119861 is in focus which indicates that the pixel with smallerRMSE(119909 119910) value is more possible in focus

Step 3 In order to determine all the focused pixels and avoidthe misjudgement of pixels morphological opening andclosing with small square structuring element and connecteddomain are employed Opening denoted as 119885 ∘ 119887 is that 119885 iseroded firstly by the structure element 119887 followed by dilationof the result by 119887 It can smooth the contours of the objectand remove narrow connections and small protrusions Likethe opening closing can also smooth the contours of theobject However the difference is that closing can join narrowgaps and fill the hole which is smaller than the structureelement 119887 Closing is dilation by 119887 followed by erosion by 119887

and is denoted as 119885 ∙ 119887 In fact those small holes are usuallygenerated by themisjudgement of pixelsWhatwasworse theholes larger than 119887 are hard to remove simply using openingand closing operators Therefore a threshold TH should beset to remove the holes smaller than the threshold but largerthan 119887 Then opening and closing are again used to smooththe contours of the object Finally the focused regions of eachsource image can be acquired which can be more uniformand have well connected regions

As for the structure element 119887 and the TH they canbe determined according to the experimental results In thepaper the structure element 119887 is a 7 times 7 matrix with logical1 In order to remove small and isolated areas which aremisjudged two different thresholds are setThefirst thresholdis set to be 20000 to remove areas which are focused in image119861 but misjudged as blurred The second threshold is set to be3000 to remove those areas which are focused in image 119860 butmisjudged as blurred

322 Fusion of the Focused Regions The final fused imageFF can be acquired according to the fusion rules that are asfollows

FF (119909 119910) =

119860 (119909 119910) if 119885119885 (119909 119910) == 1 count (119909 119910)

= (2119898 + 1) times (2119899 + 1)

119861 (119909 119910) if 119885119885 (119909 119910) == 0 count (119909 119910) = 0

119865 (119909 119910) otherwise(13)

where

count (119909 119910) =

119894=119898

sum

119894=minus119898

119895=119899

sum

119895=minus119899

119885119885 (119909 + 119894 119910 + 119895) (14)

119885119885 is the modified 119885 matrix of Step 3 in Section 321119860(119909 119910) 119861(119909 119910) 119865(119909 119910) and FF(119909 119910) denote the gray value

of pixel at position (119909 119910) of the source images (119860 and 119861)the initial fused image 119865 and the final fused image FFrespectively and (2119898 + 1) times (2119899 + 1) is the size of slippingwindow count(119909 119910) = (2119898 + 1) times (2119899 + 1) suggests thatthe pixel at position (119909 119910) in image 119860 is in focus and will beselected as the pixel of the final fused image FF directly Onthe contrary count(119909 119910) = 0 indicates that the pixel at theposition coming from image 119861 is focused and can be chosenas the pixel of the final fused image FF Other cases namely0 lt count(119909 119910) lt (2119898 + 1) times (2119899 + 1) imply that the pixel atposition (119909 119910) is located in the boundary of focused regionsand the corresponding pixel of the initial fused image 119865 isselected as the pixel of the final fused image FF

4 Experimental Results andPerformance Analysis

41 Experimental Setup In this section the first step weshould do is to train the BP neural network The trainingexperiment is performed on the standard popular widelyused ldquolenardquo image which is a 256-level image with all infocus We then artificially produce three out-of-focus imagesblurred with Gaussian radius of 05 10 and 15 respectivelyA training set with a total of 4 times 256 times 256 pixel pairs isformedThe three features of each pixel TF LVI andLVC areextracted with 120572 = 065 In addition we artificially producea pair of out-of-focus images shown in Figures 4(a) and 4(b)which are acquired by blurring the left part and the middlepart of the original image using the Gaussian functionrespectively To evaluate the advantage of the proposed fusionmethod experiments are performed on three sets of sourceimages as shown in Figures 4 5 and 6 respectively includingone set of source images produced artificially and two sets ofsource images acquired naturally Their sizes are 256 times 256256times256 and 640times480 respectivelyThese images all containmultiple objects at different distances from the camera andonly those objects within the depth of field of the camera willbe focused while other objects naturally will be out of focuswhen taken For example Figure 5(a) is focused on testingcard while Figure 5(b) is focused on the pepsi can

In order to compare the performance of the proposedfusion method these multifocus images are also performedusing the conventional and classical methods such as takingthe average of the source images pixel by pixel the gradientpyramid method [11] the DWT-based method and theSIDWT-based method [16] The decomposition level of themultiscale transform is 4 layers The wavelet basis of theDWT and SIDWT is DBSS (2 2) and Haar respectivelyThe fusion rules of lowpass subband coefficients and thehighpass subband coefficients are the ldquoaveragingrdquo scheme andthe ldquoabsolute maximum choosingrdquo scheme respectively

42 Evaluation Criteria In general the evaluation methodsof image fusion can be categorized into subjective methodsand objective methods However observer personal visualdifferences and psychological factors will affect the results ofimage evaluation Furthermore inmost cases it is difficult forus to perceive the difference among fusion results Therefore

6 The Scientific World Journal

(a) (b) (c)

(d) (e) (f)

(g) (h) (i)

Figure 4 Original images and fused images of ldquolenardquo (a) focus on the right (b) focus on left and right sides (c) fused image using average(d) fused image using gradient pyramid (e) fused image using DWT (f) fused image using SIDWT (g) fused image using the proposedmethod (h) the initial focused region (i) the modified focused region

the subjective evaluation of the fused results is always incom-prehensive Hence in addition to the subjective evaluationwe also adopt severalmetrics to objectively evaluate the imagefusion results and quantitatively compare the different fusionmethods in the paper

421 Mutual Information (MI) The mutual informationMI119860119865 between the source image 119860 and the fused image 119865 isdefined as follows

MI119860119865 =

119871minus1

sum

119896=0

119871minus1

sum

119894=0

119901119860119865 (119896 119894) log2

119901119860119865 (119896 119894)

119901119860 (119896) times 119901119865 (119894)

(15)

where 119901119860119865 is the jointly normalized histogram of 119860 and 119865119901119860 and 119901119865 are the normalized histograms of 119860 and 119865 119871 isthe gray level of the image and 119896 and 119894 represent thepixel value of the images 119860 and 119865 respectively The mutualinformation MI119861119865 between the source image 119861 and the fusedimage 119865 is similar to MI119860119865 The mutual information betweenthe source images 119860 119861 and the fused image 119865 is defined asfollows

MI119860119861119865

= MI119860119865 + MI119861119865 (16)The metric reflects the total amount of information that

the fused image 119865 contains about source images119860 and 119861Thelarger the value is the more the information is obtained fromthe original image and the better the fusion effect is

The Scientific World Journal 7

(a) (b) (c)

(d) (e) (f)

(g) (h) (i)

Figure 5 Original and fused image of ldquopepsirdquo (a) focus on the right (b) focus on the left (c) fused image using average (d) fused imageusing gradient pyramid (e) fused image using DWT (f) fused image using SIDWT (g) fused image using the proposed method (h) theinitial focused region (i) the modified focused region

422 119876119860119861119865 The metric 119876

119860119861119865 evaluates the sum of edgeinformation preservation values and is defined as follows

119876119860119861119865

= (

119872

sum

119898=1

119873

sum

119899=1

(119876119860119865

(119898 119899) times 120596119860 (119898 119899)

+ 119876119861119865

(119898 119899) times 120596119861 (119898 119899)))

times (

119872

sum

119898=1

119873

sum

119899=1

(120596119860 (119898 119899) + 120596119861 (119898 119899)))

minus1

(17)

where 119876119860119865

(119898 119899) = 119876119860119865

119892(119898 119899)119876

119860119865

120572(119898 119899) 119876

119860119865

119892(119898 119899) and

119876119860119865

120572(119898 119899) are the edge strength and orientation preservation

values respectively 119876119861119865

(119898 119899) is similar to 119876119860119865

(119898 119899) and120596119860(119898 119899) and120596119861(119898 119899) areweights tomeasure the importanceof 119876119860119865

(119898 119899) and 119876119861119865

(119898 119899) respectively The dynamic rangeof 119876119860119861119865 is [0 1] and it should be as close to 1 as possible

and for the ldquoideal fusionrdquo 119876119860119861119865

= 1 In addition (119898 119899)

represents the pixel location and 119872 and 119873 are the size ofimages respectively

The119876119860119861119865metric reflects the quality of visual information

obtained from the fusion of input images Therefore thelarger the value the better the performance

8 The Scientific World Journal

(a) (b) (c)

(d) (e) (f)

(g) (h) (i)

Figure 6 Original and fused image of ldquodiskrdquo (a) focus on the right (b) focus on the left (c) fused image using average (d) fused image usinggradient pyramid (e) fused image using DWT (f) fused image using SIDWT (g) fused image using the proposed method (h) the initialfocused region (i) the modified focused region

423 CorrelationCoefficient (CORR) Correlation coefficientbetween the fused image 119865 and the standard reference image119877 is defined as follows

CORR

= (

119872

sum

119898=1

119873

sum

119899=1

(119877 (119898 119899) minus 119877 (119898 119899)) (119865 (119898 119899) minus 119865 (119898 119899)))

times (

119872

sum

119898=1

119873

sum

119899=1

(119877 (119898 119899) minus 119877 (119898 119899))

2

times

119872

sum

119898=1

119873

sum

119899=1

(119865 (119898 119899) minus 119865 (119898 119899))

2

)

minus12

(18)

where 119877(119898 119899) and 119865(119898 119899) represent the pixel gray averagevalue of the standard reference image 119877 and fused image 119865respectively

The metric reflects the degree of correlation between thefused image and the standard reference image The larger thevalue is the better the fusion effect is

424 Root Mean Squared Error (RMSE) Root mean squareerror (RMSE) between the fusion image 119865 and the standardreference image 119877 is defined as follows

RMSE =radic

sum119872

119898=1sum119873

119899=1(119877 (119898 119899) minus 119865 (119898 119899))

2

119872 times 119873

(19)

The metric is used to measure the difference between thefused image and the standard reference image The smallerthe value is the better the fusion effect is

43 Fusion of Artificial Test Images The experiment is per-formed on a pair of ldquolenardquomultifocus images as shown in Fig-ures 4(a) and 4(b) The initial and modified detected focusedregions are shown in Figures 4(h) and 4(i) respectivelyThe white pixels in Figure 4(i) indicate that correspondingpixels from Figure 4(a) are in focused regions while the blackpixels suggest that corresponding pixels from Figure 4(b)are in focused regions By comparison we can observe thatthe detected focused regions of Figure 4(i) are better thanthose of Figure 4(h) for example there are somemisdetectedfocused regions in the right side of Figure 4(h) whereas theyare correctly detected in Figure 4(i) because the right side

The Scientific World Journal 9

Table 1 Performance comparison of different fusion algorithms in Figure 4

Fusion algorithms MI 119876119860119861119865 CORR RMSE

Average method 74946 072072 098987 88023Gradient pyramid 5102 072693 098876 13791DWT 71207 076948 099784 40532SIDWT 73712 077171 099553 57595Proposed method 98067 081128 099962 28699

of it is almost totally white The fusion results obtained bythe previous five different methods are shown in Figures4(c)ndash4(g) respectively It can be found that the results of thepixel averaging and gradient pyramid method have a poorcontrast compared to those of the DWT-based method theSIDWT-based method and the proposed method Howeverit is difficult for us to perceive the difference among the resultsof the DWT-based method the SIDWT-based method andthe proposed method according to the subjective evaluationTherefore to objectively evaluate these five fusion methodsquantitative assessments of the five fusion results are neededThe results of the quantitative assessments are shown inTable 1 As can be seen from Table 1 MI 119876

119860119861119865 and CORRvalues of the proposed method are higher and RMSE valueis less than those of the other methods which means that byusing our proposed method the best quantitative evaluationresults have been achieved

44 Fusion of Real Digital Camera Images The experimentscarried out in this section are performed on two sets ofsource images acquired naturally as shown in Figures 5(a)-5(b) and Figures 6(a)-6(b) respectively The initial andmodified detected focused regions of those two sets ofsource images are shown in Figures 5(h)-5(i) and Figures6(h)-6(i) respectively The fused images obtained by usingpixel averaging method gradient pyramid method DWT-based method the SIDWT-based method and the proposedmethod on these two sets of source images are shown inFigures 5(c)ndash5(g) and Figures 6(c)ndash6(g) respectively Fromthe fusion results we can easily observe that fusion effectsacquired based on the pixel averaging and gradient pyramidare not satisfactory and with poor contrast For example theregions of the testing card in Figures 5(c)-5(d) are not clearbut they are clear in Figures 5(e)ndash5(g) But it is difficult todiscriminate the difference among the results of the DWT-based method the SIDWT-based method and the proposedmethod by subjective evaluation so we need to do objectiveevaluation However it should be noted that the referenceimage is usually not available for real multifocus imagesso only the two evaluation criteria including the MI and119876119860119861119865 are used to objectively compare the fusion results The

quantitative comparison of the five methods for fusion ofthese two sets of source images is shown in Tables 2 and 3respectively As can be seen from the two tables we can findthat the MI and 119876

119860119861119865 values of the proposed method aresignificantly higher than those of the othermethods It shouldbe noted that we have carried out experiments on othermultifocus images and their results are identical to these two

Table 2 Performance comparison of different fusion algorithms inFigure 5

Fusion algorithms MI 119876119860119861119865

Average method 72941 064922Gradient pyramid 59768 067975DWT 64442 068264SIDWT 682160 070890Proposed method 91957 075904

Table 3 Performance comparison of different fusion algorithms inFigure 6

Fusion algorithms MI 119876119860119861119865

Average method 59845 052143Gradient pyramid 53657 063792DWT 53949 064323SIDWT 58380 067620Proposed method 83105 073806

examples so we did not mention all of them here Thereforethe results of subjective and objective evaluation presentedhere can verify that the performance of the proposedmethodis superior to those of the other methods

5 Conclusions

By combining the idea of the correlation between the neigh-boring pixels and BP neural networks a novel multifocusimage fusion method based on HVS and BP neural networkis proposed in the paper Three features which are basedon HVS and can reflect the clarity of a pixel are extractedand used to train a BP neural network to determine whichpixel is clearer The clearer pixels are combined to form theinitial fused image Then the focused regions are detectedby judging whether pixels from the initial fused image arein the focused regions or not Finally the final fused imageis obtained with the help of the technique of focused regiondetection by a certain fusion rule The results of subjectiveand objective evaluation of several experiments show that theproposed method outperforms several popular widely usedfusionmethods In the future we will focus on improving therobustness of the method for noise

Conflict of Interests

The authors declare no conflict of interests

10 The Scientific World Journal

Acknowledgments

The authors would like to thank the anonymous reviewersfor their valuable comments and constructive suggestionsThis work was supported by the National Natural ScienceFoundation of China (no 60963012 and no 61262034) by theKey Project of Chinese Ministry of Education (no 211087)by the Natural Science Foundation of Jiangxi Province (no20114BAB211020 and no 20132BAB201025) by the Young Sci-entist Foundation of Jiangxi Province (no 20122BCB23017)and by the Science and Technology Research Project of theEducation Department of Jiangxi Province (no GJJ13302)

References

[1] P. Shah, S. N. Merchant, and U. B. Desai, "Multifocus and multispectral image fusion based on pixel significance using multiresolution decomposition," Signal, Image and Video Processing, vol. 7, pp. 95–109, 2013.

[2] V. Aslantas and R. Kurban, "Fusion of multi-focus images using differential evolution algorithm," Expert Systems with Applications, vol. 37, no. 12, pp. 8861–8870, 2010.

[3] R. Benes, P. Dvorak, M. Faundez-Zanuy, V. Espinosa-Duro, and J. Mekysk, "Multi-focus thermal image fusion," Pattern Recognition Letters, vol. 34, pp. 536–544, 2013.

[4] J. Dong, D. Zhuang, Y. Huang, and J. Fu, "Advances in multi-sensor data fusion algorithms and applications," Sensors, vol. 9, no. 10, pp. 7771–7784, 2009.

[5] S. Li and B. Yang, "Hybrid multiresolution method for multisensor multimodal image fusion," IEEE Sensors Journal, vol. 10, no. 9, pp. 1519–1526, 2010.

[6] S. Li, B. Yang, and J. Hu, "Performance comparison of different multi-resolution transforms for image fusion," Information Fusion, vol. 12, no. 2, pp. 74–84, 2011.

[7] W. Huang and Z. Jing, "Evaluation of focus measures in multi-focus image fusion," Pattern Recognition Letters, vol. 28, no. 4, pp. 493–500, 2007.

[8] S. Li, J. T. Kwok, and Y. Wang, "Multifocus image fusion using artificial neural networks," Pattern Recognition Letters, vol. 23, no. 8, pp. 985–997, 2002.

[9] M. B. A. Haghighat, A. Aghagolzadeh, and H. Seyedarabi, "Multi-focus image fusion for visual sensor networks in DCT domain," Computers and Electrical Engineering, vol. 37, no. 5, pp. 789–797, 2011.

[10] P. J. Burt and E. H. Adelson, "The Laplacian pyramid as a compact image code," IEEE Transactions on Communications, vol. 31, no. 4, pp. 532–540, 1983.

[11] P. J. Burt, "A gradient pyramid basis for pattern selective image fusion," in Proceedings of the Society for Information Display Conference, pp. 467–470, 1992.

[12] A. Toet, "Image fusion by a ratio of low-pass pyramid," Pattern Recognition Letters, vol. 9, no. 4, pp. 245–253, 1989.

[13] G. Pajares and J. M. de la Cruz, "A wavelet-based image fusion tutorial," Pattern Recognition, vol. 37, no. 9, pp. 1855–1872, 2004.

[14] Y. Zheng, E. A. Essock, B. C. Hansen, and A. M. Haun, "A new metric based on extended spatial frequency and its application to DWT based fusion algorithms," Information Fusion, vol. 8, no. 2, pp. 177–192, 2007.

[15] Y. Yang, D. S. Park, S. Huang, and N. Rao, "Medical image fusion via an effective wavelet-based approach," EURASIP Journal on Advances in Signal Processing, vol. 2010, Article ID 579341, 13 pages, 2010.

[16] M. Unser, "Texture classification and segmentation using wavelet frames," IEEE Transactions on Image Processing, vol. 4, no. 11, pp. 1549–1560, 1995.

[17] F. Nencini, A. Garzelli, S. Baronti, and L. Alparone, "Remote sensing image fusion using the curvelet transform," Information Fusion, vol. 8, no. 2, pp. 143–156, 2007.

[18] L. Yang, B. L. Guo, and W. Ni, "Multimodality medical image fusion based on multiscale geometric analysis of contourlet transform," Neurocomputing, vol. 72, no. 1–3, pp. 203–211, 2008.

[19] Q. Zhang and B.-L. Guo, "Multifocus image fusion using the nonsubsampled contourlet transform," Signal Processing, vol. 89, no. 7, pp. 1334–1346, 2009.

[20] B. Yang and S. Li, "Multifocus image fusion and restoration with sparse representation," IEEE Transactions on Instrumentation and Measurement, vol. 59, no. 4, pp. 884–892, 2010.

[21] W. Wu, X. M. Yang, Y. Pang, J. Peng, and G. Jeon, "A multifocus image fusion method by using hidden Markov model," Optics Communications, vol. 287, pp. 63–72, 2013.

[22] Z. Wang, Y. Ma, and J. Gu, "Multi-focus image fusion using PCNN," Pattern Recognition, vol. 43, no. 6, pp. 2003–2016, 2010.

[23] D. Agrawal and J. Singhai, "Multifocus image fusion using modified pulse coupled neural network for improved image quality," IET Image Processing, vol. 4, no. 6, pp. 443–451, 2010.

[24] F. Zhang and H. Y. Chang, "Employing BP neural networks to alleviate the sparsity issue in collaborative filtering recommendation algorithms," Journal of Computer Research and Development, vol. 43, pp. 667–672, 2006.

[25] P. F. Xiao and X. Z. Feng, Segmentation and Information Extraction of High-Resolution Remote Sensing Image, Science Press, Beijing, 2012.

[26] R. B. Huang, F. N. Lang, and Z. Shi, "Log-Gabor and 2D semi-supervised discriminant analysis based face image retrieval," Application Research of Computers, vol. 29, pp. 393–396, 2012.

[27] Y. Chai, H. Li, and Z. Li, "Multifocus image fusion scheme using focused region detection and multiresolution," Optics Communications, vol. 284, no. 19, pp. 4376–4389, 2011.

[28] H.-F. Li, Y. Chai, and X.-Y. Zhang, "Multifocus image fusion algorithm based on multiscale products and property of human visual system," Control and Decision, vol. 27, no. 3, pp. 355–361, 2012.

Page 6: Research Article Effective Multifocus Image Fusion Based ...

6 The Scientific World Journal

Figure 4: Original images and fused images of "lena": (a) focus on the right; (b) focus on the left and right sides; (c) fused image using average; (d) fused image using gradient pyramid; (e) fused image using DWT; (f) fused image using SIDWT; (g) fused image using the proposed method; (h) the initial focused region; (i) the modified focused region.

Subjective evaluation of the fused results alone is not comprehensive. Hence, in addition to the subjective evaluation, we also adopt several metrics to objectively evaluate the image fusion results and to quantitatively compare the different fusion methods in this paper.

4.2.1. Mutual Information (MI). The mutual information MI_{AF} between the source image A and the fused image F is defined as follows:

MI_{AF} = \sum_{k=0}^{L-1} \sum_{i=0}^{L-1} p_{AF}(k, i) \log_2 \frac{p_{AF}(k, i)}{p_A(k) p_F(i)},    (15)

where p_{AF} is the jointly normalized histogram of A and F, p_A and p_F are the normalized histograms of A and F, L is the number of gray levels, and k and i represent the pixel values of the images A and F, respectively. The mutual information MI_{BF} between the source image B and the fused image F is defined in the same way. The mutual information between the source images A, B and the fused image F is then defined as

MI_{ABF} = MI_{AF} + MI_{BF}.    (16)

This metric reflects the total amount of information that the fused image F contains about the source images A and B. The larger the value, the more information is obtained from the source images and the better the fusion effect.
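For clarity, a minimal NumPy sketch of (15)-(16) is given below; the function names, the 256-bin histograms, and the assumption of 8-bit grayscale arrays are illustrative choices, not details specified in the paper.

    import numpy as np

    def mutual_information(a, f, levels=256):
        # Jointly normalized histogram p_AF of gray levels, as in Eq. (15).
        joint, _, _ = np.histogram2d(a.ravel(), f.ravel(), bins=levels,
                                     range=[[0, levels], [0, levels]])
        p_af = joint / joint.sum()
        p_a = p_af.sum(axis=1)            # marginal histogram of the source image
        p_f = p_af.sum(axis=0)            # marginal histogram of the fused image
        nz = p_af > 0                     # skip empty bins to avoid log2(0)
        return float(np.sum(p_af[nz] * np.log2(p_af[nz] / np.outer(p_a, p_f)[nz])))

    def fusion_mi(a, b, f):
        # MI_ABF = MI_AF + MI_BF, as in Eq. (16).
        return mutual_information(a, f) + mutual_information(b, f)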


Figure 5: Original and fused images of "pepsi": (a) focus on the right; (b) focus on the left; (c) fused image using average; (d) fused image using gradient pyramid; (e) fused image using DWT; (f) fused image using SIDWT; (g) fused image using the proposed method; (h) the initial focused region; (i) the modified focused region.

4.2.2. Q_{ABF}. The metric Q_{ABF} evaluates the sum of the edge information preservation values and is defined as follows:

Q_{ABF} = \frac{\sum_{m=1}^{M} \sum_{n=1}^{N} \left( Q_{AF}(m, n) \, \omega_A(m, n) + Q_{BF}(m, n) \, \omega_B(m, n) \right)}{\sum_{m=1}^{M} \sum_{n=1}^{N} \left( \omega_A(m, n) + \omega_B(m, n) \right)},    (17)

where Q_{AF}(m, n) = Q_g^{AF}(m, n) Q_\alpha^{AF}(m, n); Q_g^{AF}(m, n) and Q_\alpha^{AF}(m, n) are the edge strength and orientation preservation values, respectively; Q_{BF}(m, n) is defined analogously to Q_{AF}(m, n); and \omega_A(m, n) and \omega_B(m, n) are weights that measure the importance of Q_{AF}(m, n) and Q_{BF}(m, n), respectively. The dynamic range of Q_{ABF} is [0, 1], and it should be as close to 1 as possible; for the "ideal fusion", Q_{ABF} = 1. In addition, (m, n) denotes the pixel location, and M and N are the dimensions of the images.

The Q_{ABF} metric reflects the quality of the visual information obtained from the fusion of the input images; therefore, the larger the value, the better the performance.
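The paper does not spell out its Q_{ABF} implementation, so the following is only a rough sketch of one common realization of the idea: Sobel operators supply the edge strength and orientation, and the sigmoid constants and weight exponent are values often quoted in the literature rather than ones taken from this work.

    import numpy as np
    from scipy.ndimage import sobel

    # Assumed edge-preservation model constants (not given in the paper).
    GAMMA_G, KAPPA_G, SIGMA_G = 0.9994, -15.0, 0.5
    GAMMA_A, KAPPA_A, SIGMA_A = 0.9879, -22.0, 0.8

    def _edges(img):
        img = img.astype(float)
        sx, sy = sobel(img, axis=1), sobel(img, axis=0)
        strength = np.hypot(sx, sy)                    # edge strength g(m, n)
        orientation = np.arctan(sy / (sx + 1e-12))     # edge orientation in (-pi/2, pi/2)
        return strength, orientation

    def _preservation(g_s, o_s, g_f, o_f):
        # Relative strength and orientation agreement between a source and the fused image.
        G = np.minimum(g_s, g_f) / np.maximum(np.maximum(g_s, g_f), 1e-12)
        A = 1.0 - np.abs(o_s - o_f) / (np.pi / 2)
        Qg = GAMMA_G / (1.0 + np.exp(KAPPA_G * (G - SIGMA_G)))
        Qa = GAMMA_A / (1.0 + np.exp(KAPPA_A * (A - SIGMA_A)))
        return Qg * Qa                                 # Q_AF(m, n)

    def q_abf(a, b, f, weight_exp=1.5):
        g_a, o_a = _edges(a)
        g_b, o_b = _edges(b)
        g_f, o_f = _edges(f)
        q_af = _preservation(g_a, o_a, g_f, o_f)
        q_bf = _preservation(g_b, o_b, g_f, o_f)
        w_a, w_b = g_a ** weight_exp, g_b ** weight_exp   # edge-strength weights in Eq. (17)
        return float((q_af * w_a + q_bf * w_b).sum() / (w_a + w_b).sum())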


Figure 6: Original and fused images of "disk": (a) focus on the right; (b) focus on the left; (c) fused image using average; (d) fused image using gradient pyramid; (e) fused image using DWT; (f) fused image using SIDWT; (g) fused image using the proposed method; (h) the initial focused region; (i) the modified focused region.

4.2.3. Correlation Coefficient (CORR). The correlation coefficient between the fused image F and the standard reference image R is defined as follows:

CORR = \frac{\sum_{m=1}^{M} \sum_{n=1}^{N} \left( R(m, n) - \bar{R} \right) \left( F(m, n) - \bar{F} \right)}{\sqrt{\sum_{m=1}^{M} \sum_{n=1}^{N} \left( R(m, n) - \bar{R} \right)^2 \, \sum_{m=1}^{M} \sum_{n=1}^{N} \left( F(m, n) - \bar{F} \right)^2}},    (18)

where \bar{R} and \bar{F} represent the mean gray values of the standard reference image R and the fused image F, respectively.

This metric reflects the degree of correlation between the fused image and the standard reference image. The larger the value, the better the fusion effect.
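When a reference image is available, CORR can be computed directly; a minimal NumPy sketch (the function name is ours) follows.

    import numpy as np

    def corr(r, f):
        # Correlation coefficient of Eq. (18) between reference r and fused image f.
        r = r.astype(float) - r.mean()
        f = f.astype(float) - f.mean()
        return float((r * f).sum() / np.sqrt((r ** 2).sum() * (f ** 2).sum()))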

4.2.4. Root Mean Squared Error (RMSE). The root mean squared error (RMSE) between the fused image F and the standard reference image R is defined as follows:

RMSE = \sqrt{\frac{\sum_{m=1}^{M} \sum_{n=1}^{N} \left( R(m, n) - F(m, n) \right)^2}{M \times N}}.    (19)

This metric measures the difference between the fused image and the standard reference image. The smaller the value, the better the fusion effect.
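A correspondingly short sketch of Eq. (19), again assuming grayscale arrays of equal size:

    import numpy as np

    def rmse(r, f):
        # Root mean squared error of Eq. (19).
        return float(np.sqrt(np.mean((r.astype(float) - f.astype(float)) ** 2)))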

4.3. Fusion of Artificial Test Images. The experiment is performed on a pair of "lena" multifocus images, as shown in Figures 4(a) and 4(b). The initial and modified detected focused regions are shown in Figures 4(h) and 4(i), respectively. The white pixels in Figure 4(i) indicate that the corresponding pixels of Figure 4(a) are in focused regions, while the black pixels indicate that the corresponding pixels of Figure 4(b) are in focused regions. By comparison, we can observe that the detected focused regions in Figure 4(i) are better than those in Figure 4(h); for example, there are some misdetected focused regions on the right side of Figure 4(h), whereas they are correctly detected in Figure 4(i), because the right side


of it is almost totally white. The fusion results obtained by the five different methods are shown in Figures 4(c)-4(g), respectively. It can be found that the results of the pixel averaging and gradient pyramid methods have poor contrast compared with those of the DWT-based method, the SIDWT-based method, and the proposed method. However, it is difficult to perceive the difference among the results of the DWT-based method, the SIDWT-based method, and the proposed method by subjective evaluation alone. Therefore, to objectively evaluate these five fusion methods, quantitative assessments of the five fusion results are needed. The results of the quantitative assessments are shown in Table 1. As can be seen from Table 1, the MI, Q_{ABF}, and CORR values of the proposed method are higher, and its RMSE value is lower, than those of the other methods, which means that the proposed method achieves the best quantitative evaluation results.

Table 1: Performance comparison of different fusion algorithms in Figure 4.

Fusion algorithm      MI       Q_{ABF}   CORR      RMSE
Average method        7.4946   0.72072   0.98987    8.8023
Gradient pyramid      5.102    0.72693   0.98876   13.791
DWT                   7.1207   0.76948   0.99784    4.0532
SIDWT                 7.3712   0.77171   0.99553    5.7595
Proposed method       9.8067   0.81128   0.99962    2.8699
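As an illustration, an evaluation loop of the kind behind Table 1 could look like the sketch below; the image file names and the loader are placeholders, and the metric functions are the ones sketched in Section 4.2.

    # Hypothetical driver: score fused results against the reference image using the
    # metric sketches above (fusion_mi, q_abf, corr, rmse).
    from imageio.v3 import imread   # assumed I/O helper; any grayscale image loader works

    reference = imread("lena_reference.png")       # placeholder file names
    source_a = imread("lena_focus_right.png")
    source_b = imread("lena_focus_left.png")
    fused = {"average": imread("fused_average.png"),
             "proposed": imread("fused_proposed.png")}

    for name, img in fused.items():
        print(name,
              round(fusion_mi(source_a, source_b, img), 4),
              round(q_abf(source_a, source_b, img), 5),
              round(corr(reference, img), 5),
              round(rmse(reference, img), 4))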

4.4. Fusion of Real Digital Camera Images. The experiments in this section are performed on two sets of source images acquired naturally, as shown in Figures 5(a)-5(b) and Figures 6(a)-6(b), respectively. The initial and modified detected focused regions of these two sets of source images are shown in Figures 5(h)-5(i) and Figures 6(h)-6(i), respectively. The fused images obtained by the pixel averaging method, the gradient pyramid method, the DWT-based method, the SIDWT-based method, and the proposed method on these two sets of source images are shown in Figures 5(c)-5(g) and Figures 6(c)-6(g), respectively. From the fusion results, we can easily observe that the fusion effects of the pixel averaging and gradient pyramid methods are not satisfactory and have poor contrast. For example, the regions of the testing card in Figures 5(c)-5(d) are not clear, but they are clear in Figures 5(e)-5(g). It is difficult, however, to discriminate among the results of the DWT-based method, the SIDWT-based method, and the proposed method by subjective evaluation, so objective evaluation is needed. It should be noted that a reference image is usually not available for real multifocus images, so only two evaluation criteria, MI and Q_{ABF}, are used to objectively compare the fusion results. The quantitative comparison of the five methods on these two sets of source images is shown in Tables 2 and 3, respectively. As can be seen from the two tables, the MI and Q_{ABF} values of the proposed method are significantly higher than those of the other methods. It should be noted that we have also carried out experiments on other multifocus images, and their results are consistent with these two examples, so they are not all reported here. Therefore, the results of the subjective and objective evaluations presented here verify that the performance of the proposed method is superior to that of the other methods.

Table 2: Performance comparison of different fusion algorithms in Figure 5.

Fusion algorithm      MI       Q_{ABF}
Average method        7.2941   0.64922
Gradient pyramid      5.9768   0.67975
DWT                   6.4442   0.68264
SIDWT                 6.8216   0.70890
Proposed method       9.1957   0.75904

Table 3: Performance comparison of different fusion algorithms in Figure 6.

Fusion algorithm      MI       Q_{ABF}
Average method        5.9845   0.52143
Gradient pyramid      5.3657   0.63792
DWT                   5.3949   0.64323
SIDWT                 5.8380   0.67620
Proposed method       8.3105   0.73806

5. Conclusions

By combining the correlation between neighboring pixels with BP neural networks, a novel multifocus image fusion method based on HVS and a BP neural network is proposed in this paper. Three features that are based on HVS and reflect the clarity of a pixel are extracted and used to train a BP neural network to determine which pixel is clearer. The clearer pixels are combined to form the initial fused image. Then the focused regions are detected by judging whether pixels from the initial fused image lie in the focused regions or not. Finally, the final fused image is obtained from the detected focused regions by a fusion rule. The results of the subjective and objective evaluations of several experiments show that the proposed method outperforms several popular, widely used fusion methods. In the future, we will focus on improving the robustness of the method to noise.

Conflict of Interests

The authors declare no conflict of interests


Acknowledgments

The authors would like to thank the anonymous reviewers for their valuable comments and constructive suggestions. This work was supported by the National Natural Science Foundation of China (nos. 60963012 and 61262034), by the Key Project of the Chinese Ministry of Education (no. 211087), by the Natural Science Foundation of Jiangxi Province (nos. 20114BAB211020 and 20132BAB201025), by the Young Scientist Foundation of Jiangxi Province (no. 20122BCB23017), and by the Science and Technology Research Project of the Education Department of Jiangxi Province (no. GJJ13302).

References

[1] P. Shah, S. N. Merchant, and U. B. Desai, "Multifocus and multispectral image fusion based on pixel significance using multiresolution decomposition," Signal, Image and Video Processing, vol. 7, pp. 95–109, 2013.
[2] V. Aslantas and R. Kurban, "Fusion of multi-focus images using differential evolution algorithm," Expert Systems with Applications, vol. 37, no. 12, pp. 8861–8870, 2010.
[3] R. Benes, P. Dvorak, M. Faundez-Zanuy, V. Espinosa-Duro, and J. Mekysk, "Multi-focus thermal image fusion," Pattern Recognition Letters, vol. 34, pp. 536–544, 2013.
[4] J. Dong, D. Zhuang, Y. Huang, and J. Fu, "Advances in multi-sensor data fusion: algorithms and applications," Sensors, vol. 9, no. 10, pp. 7771–7784, 2009.
[5] S. Li and B. Yang, "Hybrid multiresolution method for multisensor multimodal image fusion," IEEE Sensors Journal, vol. 10, no. 9, pp. 1519–1526, 2010.
[6] S. Li, B. Yang, and J. Hu, "Performance comparison of different multi-resolution transforms for image fusion," Information Fusion, vol. 12, no. 2, pp. 74–84, 2011.
[7] W. Huang and Z. Jing, "Evaluation of focus measures in multi-focus image fusion," Pattern Recognition Letters, vol. 28, no. 4, pp. 493–500, 2007.
[8] S. Li, J. T. Kwok, and Y. Wang, "Multifocus image fusion using artificial neural networks," Pattern Recognition Letters, vol. 23, no. 8, pp. 985–997, 2002.
[9] M. B. A. Haghighat, A. Aghagolzadeh, and H. Seyedarabi, "Multi-focus image fusion for visual sensor networks in DCT domain," Computers and Electrical Engineering, vol. 37, no. 5, pp. 789–797, 2011.
[10] P. J. Burt and E. H. Adelson, "The Laplacian pyramid as a compact image code," IEEE Transactions on Communications, vol. 31, no. 4, pp. 532–540, 1983.
[11] P. J. Burt, "A gradient pyramid basis for pattern-selective image fusion," in Proceedings of the Society for Information Display Conference, pp. 467–470, 1992.
[12] A. Toet, "Image fusion by a ratio of low-pass pyramid," Pattern Recognition Letters, vol. 9, no. 4, pp. 245–253, 1989.
[13] G. Pajares and J. M. de la Cruz, "A wavelet-based image fusion tutorial," Pattern Recognition, vol. 37, no. 9, pp. 1855–1872, 2004.
[14] Y. Zheng, E. A. Essock, B. C. Hansen, and A. M. Haun, "A new metric based on extended spatial frequency and its application to DWT based fusion algorithms," Information Fusion, vol. 8, no. 2, pp. 177–192, 2007.
[15] Y. Yang, D. S. Park, S. Huang, and N. Rao, "Medical image fusion via an effective wavelet-based approach," EURASIP Journal on Advances in Signal Processing, vol. 2010, Article ID 579341, 13 pages, 2010.
[16] M. Unser, "Texture classification and segmentation using wavelet frames," IEEE Transactions on Image Processing, vol. 4, no. 11, pp. 1549–1560, 1995.
[17] F. Nencini, A. Garzelli, S. Baronti, and L. Alparone, "Remote sensing image fusion using the curvelet transform," Information Fusion, vol. 8, no. 2, pp. 143–156, 2007.
[18] L. Yang, B. L. Guo, and W. Ni, "Multimodality medical image fusion based on multiscale geometric analysis of contourlet transform," Neurocomputing, vol. 72, no. 1–3, pp. 203–211, 2008.
[19] Q. Zhang and B.-L. Guo, "Multifocus image fusion using the nonsubsampled contourlet transform," Signal Processing, vol. 89, no. 7, pp. 1334–1346, 2009.
[20] B. Yang and S. Li, "Multifocus image fusion and restoration with sparse representation," IEEE Transactions on Instrumentation and Measurement, vol. 59, no. 4, pp. 884–892, 2010.
[21] W. Wu, X. M. Yang, Y. Pang, J. Peng, and G. Jeon, "A multifocus image fusion method by using hidden Markov model," Optics Communications, vol. 287, pp. 63–72, 2013.
[22] Z. Wang, Y. Ma, and J. Gu, "Multi-focus image fusion using PCNN," Pattern Recognition, vol. 43, no. 6, pp. 2003–2016, 2010.
[23] D. Agrawal and J. Singhai, "Multifocus image fusion using modified pulse coupled neural network for improved image quality," IET Image Processing, vol. 4, no. 6, pp. 443–451, 2010.
[24] F. Zhang and H. Y. Chang, "Employing BP neural networks to alleviate the sparsity issue in collaborative filtering recommendation algorithms," Journal of Computer Research and Development, vol. 43, pp. 667–672, 2006.
[25] P. F. Xiao and X. Z. Feng, Segmentation and Information Extraction of High-Resolution Remote Sensing Image, Science Press, Beijing, China, 2012.
[26] R. B. Huang, F. N. Lang, and Z. Shi, "Log-Gabor and 2D semi-supervised discriminant analysis based face image retrieval," Application Research of Computers, vol. 29, pp. 393–396, 2012.
[27] Y. Chai, H. Li, and Z. Li, "Multifocus image fusion scheme using focused region detection and multiresolution," Optics Communications, vol. 284, no. 19, pp. 4376–4389, 2011.
[28] H.-F. Li, Y. Chai, and X.-Y. Zhang, "Multifocus image fusion algorithm based on multiscale products and property of human visual system," Control and Decision, vol. 27, no. 3, pp. 355–361, 2012.

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 7: Research Article Effective Multifocus Image Fusion Based ...

The Scientific World Journal 7

(a) (b) (c)

(d) (e) (f)

(g) (h) (i)

Figure 5 Original and fused image of ldquopepsirdquo (a) focus on the right (b) focus on the left (c) fused image using average (d) fused imageusing gradient pyramid (e) fused image using DWT (f) fused image using SIDWT (g) fused image using the proposed method (h) theinitial focused region (i) the modified focused region

422 119876119860119861119865 The metric 119876

119860119861119865 evaluates the sum of edgeinformation preservation values and is defined as follows

119876119860119861119865

= (

119872

sum

119898=1

119873

sum

119899=1

(119876119860119865

(119898 119899) times 120596119860 (119898 119899)

+ 119876119861119865

(119898 119899) times 120596119861 (119898 119899)))

times (

119872

sum

119898=1

119873

sum

119899=1

(120596119860 (119898 119899) + 120596119861 (119898 119899)))

minus1

(17)

where 119876119860119865

(119898 119899) = 119876119860119865

119892(119898 119899)119876

119860119865

120572(119898 119899) 119876

119860119865

119892(119898 119899) and

119876119860119865

120572(119898 119899) are the edge strength and orientation preservation

values respectively 119876119861119865

(119898 119899) is similar to 119876119860119865

(119898 119899) and120596119860(119898 119899) and120596119861(119898 119899) areweights tomeasure the importanceof 119876119860119865

(119898 119899) and 119876119861119865

(119898 119899) respectively The dynamic rangeof 119876119860119861119865 is [0 1] and it should be as close to 1 as possible

and for the ldquoideal fusionrdquo 119876119860119861119865

= 1 In addition (119898 119899)

represents the pixel location and 119872 and 119873 are the size ofimages respectively

The119876119860119861119865metric reflects the quality of visual information

obtained from the fusion of input images Therefore thelarger the value the better the performance

8 The Scientific World Journal

(a) (b) (c)

(d) (e) (f)

(g) (h) (i)

Figure 6 Original and fused image of ldquodiskrdquo (a) focus on the right (b) focus on the left (c) fused image using average (d) fused image usinggradient pyramid (e) fused image using DWT (f) fused image using SIDWT (g) fused image using the proposed method (h) the initialfocused region (i) the modified focused region

423 CorrelationCoefficient (CORR) Correlation coefficientbetween the fused image 119865 and the standard reference image119877 is defined as follows

CORR

= (

119872

sum

119898=1

119873

sum

119899=1

(119877 (119898 119899) minus 119877 (119898 119899)) (119865 (119898 119899) minus 119865 (119898 119899)))

times (

119872

sum

119898=1

119873

sum

119899=1

(119877 (119898 119899) minus 119877 (119898 119899))

2

times

119872

sum

119898=1

119873

sum

119899=1

(119865 (119898 119899) minus 119865 (119898 119899))

2

)

minus12

(18)

where 119877(119898 119899) and 119865(119898 119899) represent the pixel gray averagevalue of the standard reference image 119877 and fused image 119865respectively

The metric reflects the degree of correlation between thefused image and the standard reference image The larger thevalue is the better the fusion effect is

424 Root Mean Squared Error (RMSE) Root mean squareerror (RMSE) between the fusion image 119865 and the standardreference image 119877 is defined as follows

RMSE =radic

sum119872

119898=1sum119873

119899=1(119877 (119898 119899) minus 119865 (119898 119899))

2

119872 times 119873

(19)

The metric is used to measure the difference between thefused image and the standard reference image The smallerthe value is the better the fusion effect is

43 Fusion of Artificial Test Images The experiment is per-formed on a pair of ldquolenardquomultifocus images as shown in Fig-ures 4(a) and 4(b) The initial and modified detected focusedregions are shown in Figures 4(h) and 4(i) respectivelyThe white pixels in Figure 4(i) indicate that correspondingpixels from Figure 4(a) are in focused regions while the blackpixels suggest that corresponding pixels from Figure 4(b)are in focused regions By comparison we can observe thatthe detected focused regions of Figure 4(i) are better thanthose of Figure 4(h) for example there are somemisdetectedfocused regions in the right side of Figure 4(h) whereas theyare correctly detected in Figure 4(i) because the right side

The Scientific World Journal 9

Table 1 Performance comparison of different fusion algorithms in Figure 4

Fusion algorithms MI 119876119860119861119865 CORR RMSE

Average method 74946 072072 098987 88023Gradient pyramid 5102 072693 098876 13791DWT 71207 076948 099784 40532SIDWT 73712 077171 099553 57595Proposed method 98067 081128 099962 28699

of it is almost totally white The fusion results obtained bythe previous five different methods are shown in Figures4(c)ndash4(g) respectively It can be found that the results of thepixel averaging and gradient pyramid method have a poorcontrast compared to those of the DWT-based method theSIDWT-based method and the proposed method Howeverit is difficult for us to perceive the difference among the resultsof the DWT-based method the SIDWT-based method andthe proposed method according to the subjective evaluationTherefore to objectively evaluate these five fusion methodsquantitative assessments of the five fusion results are neededThe results of the quantitative assessments are shown inTable 1 As can be seen from Table 1 MI 119876

119860119861119865 and CORRvalues of the proposed method are higher and RMSE valueis less than those of the other methods which means that byusing our proposed method the best quantitative evaluationresults have been achieved

44 Fusion of Real Digital Camera Images The experimentscarried out in this section are performed on two sets ofsource images acquired naturally as shown in Figures 5(a)-5(b) and Figures 6(a)-6(b) respectively The initial andmodified detected focused regions of those two sets ofsource images are shown in Figures 5(h)-5(i) and Figures6(h)-6(i) respectively The fused images obtained by usingpixel averaging method gradient pyramid method DWT-based method the SIDWT-based method and the proposedmethod on these two sets of source images are shown inFigures 5(c)ndash5(g) and Figures 6(c)ndash6(g) respectively Fromthe fusion results we can easily observe that fusion effectsacquired based on the pixel averaging and gradient pyramidare not satisfactory and with poor contrast For example theregions of the testing card in Figures 5(c)-5(d) are not clearbut they are clear in Figures 5(e)ndash5(g) But it is difficult todiscriminate the difference among the results of the DWT-based method the SIDWT-based method and the proposedmethod by subjective evaluation so we need to do objectiveevaluation However it should be noted that the referenceimage is usually not available for real multifocus imagesso only the two evaluation criteria including the MI and119876119860119861119865 are used to objectively compare the fusion results The

quantitative comparison of the five methods for fusion ofthese two sets of source images is shown in Tables 2 and 3respectively As can be seen from the two tables we can findthat the MI and 119876

119860119861119865 values of the proposed method aresignificantly higher than those of the othermethods It shouldbe noted that we have carried out experiments on othermultifocus images and their results are identical to these two

Table 2 Performance comparison of different fusion algorithms inFigure 5

Fusion algorithms MI 119876119860119861119865

Average method 72941 064922Gradient pyramid 59768 067975DWT 64442 068264SIDWT 682160 070890Proposed method 91957 075904

Table 3 Performance comparison of different fusion algorithms inFigure 6

Fusion algorithms MI 119876119860119861119865

Average method 59845 052143Gradient pyramid 53657 063792DWT 53949 064323SIDWT 58380 067620Proposed method 83105 073806

examples so we did not mention all of them here Thereforethe results of subjective and objective evaluation presentedhere can verify that the performance of the proposedmethodis superior to those of the other methods

5 Conclusions

By combining the idea of the correlation between the neigh-boring pixels and BP neural networks a novel multifocusimage fusion method based on HVS and BP neural networkis proposed in the paper Three features which are basedon HVS and can reflect the clarity of a pixel are extractedand used to train a BP neural network to determine whichpixel is clearer The clearer pixels are combined to form theinitial fused image Then the focused regions are detectedby judging whether pixels from the initial fused image arein the focused regions or not Finally the final fused imageis obtained with the help of the technique of focused regiondetection by a certain fusion rule The results of subjectiveand objective evaluation of several experiments show that theproposed method outperforms several popular widely usedfusionmethods In the future we will focus on improving therobustness of the method for noise

Conflict of Interests

The authors declare no conflict of interests

10 The Scientific World Journal

Acknowledgments

The authors would like to thank the anonymous reviewersfor their valuable comments and constructive suggestionsThis work was supported by the National Natural ScienceFoundation of China (no 60963012 and no 61262034) by theKey Project of Chinese Ministry of Education (no 211087)by the Natural Science Foundation of Jiangxi Province (no20114BAB211020 and no 20132BAB201025) by the Young Sci-entist Foundation of Jiangxi Province (no 20122BCB23017)and by the Science and Technology Research Project of theEducation Department of Jiangxi Province (no GJJ13302)

References

[1] P Shah S N Merchant and U B Desai ldquoMultifocus andmultispectral image fusion based on pixel significance usingmultiresolution decompositionrdquo Signal Image and Video Pro-cessing vol 7 pp 95ndash109 2013

[2] V Aslantas and R Kurban ldquoFusion of multi-focus imagesusing differential evolution algorithmrdquo Expert Systems withApplications vol 37 no 12 pp 8861ndash8870 2010

[3] R Benes P Dvorak M Faundez-Zanuy V Espinosa-Duroand J Mekysk ldquoMulti-focus thermal image fusionrdquo PatternRecognition Letters vol 34 pp 536ndash544 2013

[4] J Dong D Zhuang Y Huang and J Fu ldquoAdvances in multi-sensor data fusion algorithms and applicationsrdquo Sensors vol 9no 10 pp 7771ndash7784 2009

[5] S Li and B Yang ldquoHybrid multiresolution method for multi-sensor multimodal image fusionrdquo IEEE Sensors Journal vol 10no 9 pp 1519ndash1526 2010

[6] S Li B Yang and J Hu ldquoPerformance comparison of differentmulti-resolution transforms for image fusionrdquo InformationFusion vol 12 no 2 pp 74ndash84 2011

[7] W Huang and Z Jing ldquoEvaluation of focus measures in multi-focus image fusionrdquo Pattern Recognition Letters vol 28 no 4pp 493ndash500 2007

[8] S Li J T Kwok and Y Wang ldquoMultifocus image fusion usingartificial neural networksrdquo Pattern Recognition Letters vol 23no 8 pp 985ndash997 2002

[9] M B A Haghighat A Aghagolzadeh and H SeyedarabildquoMulti-focus image fusion for visual sensor networks in DCTdomainrdquoComputers and Electrical Engineering vol 37 no 5 pp789ndash797 2011

[10] P J Burt and E H Adelson ldquoThe Laplacian pyramid as acompact image coderdquo IEEE Transactions on Communicationsvol 31 no 4 pp 532ndash540 1983

[11] P J Burt ldquoA gradient pyramid basis for pattern selective imagefusionrdquo in Proceedings of the Society for Information DisplayConference pp 467ndash470 1992

[12] A Toet ldquoImage fusion by a ration of low-pass pyramidrdquo PatternRecognition Letters vol 9 no 4 pp 245ndash253 1989

[13] G Pajares and J M de la Cruz ldquoA wavelet-based image fusiontutorialrdquo Pattern Recognition vol 37 no 9 pp 1855ndash1872 2004

[14] Y Zheng E A Essock B C Hansen and A M Haun ldquoA newmetric based on extended spatial frequency and its applicationto DWT based fusion algorithmsrdquo Information Fusion vol 8no 2 pp 177ndash192 2007

[15] Y Yang D S Park S Huang and N Rao ldquoMedical imagefusion via an effective wavelet-based approachrdquo Eurasip Journal

on Advances in Signal Processing vol 2010 Article ID 579341 13pages 2010

[16] M Unser ldquoTexture classification and segmentation usingwavelet framesrdquo IEEE Transactions on Image Processing vol 4no 11 pp 1549ndash1560 1995

[17] F Nencini A Garzelli S Baronti and L Alparone ldquoRemotesensing image fusion using the curvelet transformrdquo InformationFusion vol 8 no 2 pp 143ndash156 2007

[18] L Yang B L Guo and W Ni ldquoMultimodality medical imagefusion based on multiscale geometric analysis of contourlettransformrdquo Neurocomputing vol 72 no 1ndash3 pp 203ndash211 2008

[19] Q Zhang and B-L Guo ldquoMultifocus image fusion using thenonsubsampled contourlet transformrdquo Signal Processing vol89 no 7 pp 1334ndash1346 2009

[20] B Yang and S Li ldquoMultifocus image fusion and restorationwithsparse representationrdquo IEEE Transactions on Instrumentationand Measurement vol 59 no 4 pp 884ndash892 2010

[21] WWu X M Yang Y Pang J Peng and G Jeon ldquoA multifocusimage fusion method by using hidden Markov modelrdquo OpticsCommunications vol 287 pp 63ndash72 2013

[22] Z Wang Y Ma and J Gu ldquoMulti-focus image fusion usingPCNNrdquo Pattern Recognition vol 43 no 6 pp 2003ndash2016 2010

[23] D Agrawal and J Singhai ldquoMultifocus image fusion usingmodified pulse coupled neural network for improved imagequalityrdquo IET Image Processing vol 4 no 6 pp 443ndash451 2010

[24] F Zhang and H Y Chang ldquoEmploying BP neural networksto alleviate the sparsity issue in collaborative filtering rec-ommendation algorithmsrdquo Journal of Computer Research andDevelopment vol 43 pp 667ndash672 2006

[25] P F Xiao and X Z Feng Segmentation and InformationExtraction of High-Resolution Remote Sensing Image BeijingScience Press 2012

[26] R B Huang F N Lang and Z Shi ldquoLog-Gabor and 2D semi-supervised discriminant analysis based face image retrievalrdquoApplication Research of Computers vol 29 pp 393ndash396 2012

[27] Y Chai H Li and Z Li ldquoMultifocus image fusion schemeusing focused region detection and multiresolutionrdquo OpticsCommunications vol 284 no 19 pp 4376ndash4389 2011

[28] H-F Li Y Chai and X-Y Zhang ldquoMultifocus image fusionalgorithm based onmultiscale products and property of humanvisual systemrdquo Control and Decision vol 27 no 3 pp 355ndash3612012

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 8: Research Article Effective Multifocus Image Fusion Based ...

8 The Scientific World Journal

(a) (b) (c)

(d) (e) (f)

(g) (h) (i)

Figure 6 Original and fused image of ldquodiskrdquo (a) focus on the right (b) focus on the left (c) fused image using average (d) fused image usinggradient pyramid (e) fused image using DWT (f) fused image using SIDWT (g) fused image using the proposed method (h) the initialfocused region (i) the modified focused region

423 CorrelationCoefficient (CORR) Correlation coefficientbetween the fused image 119865 and the standard reference image119877 is defined as follows

CORR

= (

119872

sum

119898=1

119873

sum

119899=1

(119877 (119898 119899) minus 119877 (119898 119899)) (119865 (119898 119899) minus 119865 (119898 119899)))

times (

119872

sum

119898=1

119873

sum

119899=1

(119877 (119898 119899) minus 119877 (119898 119899))

2

times

119872

sum

119898=1

119873

sum

119899=1

(119865 (119898 119899) minus 119865 (119898 119899))

2

)

minus12

(18)

where 119877(119898 119899) and 119865(119898 119899) represent the pixel gray averagevalue of the standard reference image 119877 and fused image 119865respectively

The metric reflects the degree of correlation between thefused image and the standard reference image The larger thevalue is the better the fusion effect is

424 Root Mean Squared Error (RMSE) Root mean squareerror (RMSE) between the fusion image 119865 and the standardreference image 119877 is defined as follows

RMSE =radic

sum119872

119898=1sum119873

119899=1(119877 (119898 119899) minus 119865 (119898 119899))

2

119872 times 119873

(19)

The metric is used to measure the difference between thefused image and the standard reference image The smallerthe value is the better the fusion effect is

43 Fusion of Artificial Test Images The experiment is per-formed on a pair of ldquolenardquomultifocus images as shown in Fig-ures 4(a) and 4(b) The initial and modified detected focusedregions are shown in Figures 4(h) and 4(i) respectivelyThe white pixels in Figure 4(i) indicate that correspondingpixels from Figure 4(a) are in focused regions while the blackpixels suggest that corresponding pixels from Figure 4(b)are in focused regions By comparison we can observe thatthe detected focused regions of Figure 4(i) are better thanthose of Figure 4(h) for example there are somemisdetectedfocused regions in the right side of Figure 4(h) whereas theyare correctly detected in Figure 4(i) because the right side

The Scientific World Journal 9

Table 1 Performance comparison of different fusion algorithms in Figure 4

Fusion algorithms MI 119876119860119861119865 CORR RMSE

Average method 74946 072072 098987 88023Gradient pyramid 5102 072693 098876 13791DWT 71207 076948 099784 40532SIDWT 73712 077171 099553 57595Proposed method 98067 081128 099962 28699

of it is almost totally white The fusion results obtained bythe previous five different methods are shown in Figures4(c)ndash4(g) respectively It can be found that the results of thepixel averaging and gradient pyramid method have a poorcontrast compared to those of the DWT-based method theSIDWT-based method and the proposed method Howeverit is difficult for us to perceive the difference among the resultsof the DWT-based method the SIDWT-based method andthe proposed method according to the subjective evaluationTherefore to objectively evaluate these five fusion methodsquantitative assessments of the five fusion results are neededThe results of the quantitative assessments are shown inTable 1 As can be seen from Table 1 MI 119876

119860119861119865 and CORRvalues of the proposed method are higher and RMSE valueis less than those of the other methods which means that byusing our proposed method the best quantitative evaluationresults have been achieved

44 Fusion of Real Digital Camera Images The experimentscarried out in this section are performed on two sets ofsource images acquired naturally as shown in Figures 5(a)-5(b) and Figures 6(a)-6(b) respectively The initial andmodified detected focused regions of those two sets ofsource images are shown in Figures 5(h)-5(i) and Figures6(h)-6(i) respectively The fused images obtained by usingpixel averaging method gradient pyramid method DWT-based method the SIDWT-based method and the proposedmethod on these two sets of source images are shown inFigures 5(c)ndash5(g) and Figures 6(c)ndash6(g) respectively Fromthe fusion results we can easily observe that fusion effectsacquired based on the pixel averaging and gradient pyramidare not satisfactory and with poor contrast For example theregions of the testing card in Figures 5(c)-5(d) are not clearbut they are clear in Figures 5(e)ndash5(g) But it is difficult todiscriminate the difference among the results of the DWT-based method the SIDWT-based method and the proposedmethod by subjective evaluation so we need to do objectiveevaluation However it should be noted that the referenceimage is usually not available for real multifocus imagesso only the two evaluation criteria including the MI and119876119860119861119865 are used to objectively compare the fusion results The

quantitative comparison of the five methods for fusion ofthese two sets of source images is shown in Tables 2 and 3respectively As can be seen from the two tables we can findthat the MI and 119876

119860119861119865 values of the proposed method aresignificantly higher than those of the othermethods It shouldbe noted that we have carried out experiments on othermultifocus images and their results are identical to these two

Table 2 Performance comparison of different fusion algorithms inFigure 5

Fusion algorithms MI 119876119860119861119865

Average method 72941 064922Gradient pyramid 59768 067975DWT 64442 068264SIDWT 682160 070890Proposed method 91957 075904

Table 3 Performance comparison of different fusion algorithms inFigure 6

Fusion algorithms MI 119876119860119861119865

Average method 59845 052143Gradient pyramid 53657 063792DWT 53949 064323SIDWT 58380 067620Proposed method 83105 073806

examples so we did not mention all of them here Thereforethe results of subjective and objective evaluation presentedhere can verify that the performance of the proposedmethodis superior to those of the other methods

5 Conclusions

By combining the idea of the correlation between the neigh-boring pixels and BP neural networks a novel multifocusimage fusion method based on HVS and BP neural networkis proposed in the paper Three features which are basedon HVS and can reflect the clarity of a pixel are extractedand used to train a BP neural network to determine whichpixel is clearer The clearer pixels are combined to form theinitial fused image Then the focused regions are detectedby judging whether pixels from the initial fused image arein the focused regions or not Finally the final fused imageis obtained with the help of the technique of focused regiondetection by a certain fusion rule The results of subjectiveand objective evaluation of several experiments show that theproposed method outperforms several popular widely usedfusionmethods In the future we will focus on improving therobustness of the method for noise

Conflict of Interests

The authors declare no conflict of interests

10 The Scientific World Journal

Acknowledgments

The authors would like to thank the anonymous reviewersfor their valuable comments and constructive suggestionsThis work was supported by the National Natural ScienceFoundation of China (no 60963012 and no 61262034) by theKey Project of Chinese Ministry of Education (no 211087)by the Natural Science Foundation of Jiangxi Province (no20114BAB211020 and no 20132BAB201025) by the Young Sci-entist Foundation of Jiangxi Province (no 20122BCB23017)and by the Science and Technology Research Project of theEducation Department of Jiangxi Province (no GJJ13302)

References

[1] P Shah S N Merchant and U B Desai ldquoMultifocus andmultispectral image fusion based on pixel significance usingmultiresolution decompositionrdquo Signal Image and Video Pro-cessing vol 7 pp 95ndash109 2013

[2] V Aslantas and R Kurban ldquoFusion of multi-focus imagesusing differential evolution algorithmrdquo Expert Systems withApplications vol 37 no 12 pp 8861ndash8870 2010

[3] R Benes P Dvorak M Faundez-Zanuy V Espinosa-Duroand J Mekysk ldquoMulti-focus thermal image fusionrdquo PatternRecognition Letters vol 34 pp 536ndash544 2013

[4] J Dong D Zhuang Y Huang and J Fu ldquoAdvances in multi-sensor data fusion algorithms and applicationsrdquo Sensors vol 9no 10 pp 7771ndash7784 2009

[5] S Li and B Yang ldquoHybrid multiresolution method for multi-sensor multimodal image fusionrdquo IEEE Sensors Journal vol 10no 9 pp 1519ndash1526 2010

[6] S Li B Yang and J Hu ldquoPerformance comparison of differentmulti-resolution transforms for image fusionrdquo InformationFusion vol 12 no 2 pp 74ndash84 2011

[7] W Huang and Z Jing ldquoEvaluation of focus measures in multi-focus image fusionrdquo Pattern Recognition Letters vol 28 no 4pp 493ndash500 2007

[8] S Li J T Kwok and Y Wang ldquoMultifocus image fusion usingartificial neural networksrdquo Pattern Recognition Letters vol 23no 8 pp 985ndash997 2002

[9] M B A Haghighat A Aghagolzadeh and H SeyedarabildquoMulti-focus image fusion for visual sensor networks in DCTdomainrdquoComputers and Electrical Engineering vol 37 no 5 pp789ndash797 2011

[10] P J Burt and E H Adelson ldquoThe Laplacian pyramid as acompact image coderdquo IEEE Transactions on Communicationsvol 31 no 4 pp 532ndash540 1983

[11] P J Burt ldquoA gradient pyramid basis for pattern selective imagefusionrdquo in Proceedings of the Society for Information DisplayConference pp 467ndash470 1992

[12] A Toet ldquoImage fusion by a ration of low-pass pyramidrdquo PatternRecognition Letters vol 9 no 4 pp 245ndash253 1989

[13] G Pajares and J M de la Cruz ldquoA wavelet-based image fusiontutorialrdquo Pattern Recognition vol 37 no 9 pp 1855ndash1872 2004

[14] Y Zheng E A Essock B C Hansen and A M Haun ldquoA newmetric based on extended spatial frequency and its applicationto DWT based fusion algorithmsrdquo Information Fusion vol 8no 2 pp 177ndash192 2007

[15] Y Yang D S Park S Huang and N Rao ldquoMedical imagefusion via an effective wavelet-based approachrdquo Eurasip Journal

on Advances in Signal Processing vol 2010 Article ID 579341 13pages 2010

[16] M Unser ldquoTexture classification and segmentation usingwavelet framesrdquo IEEE Transactions on Image Processing vol 4no 11 pp 1549ndash1560 1995

[17] F Nencini A Garzelli S Baronti and L Alparone ldquoRemotesensing image fusion using the curvelet transformrdquo InformationFusion vol 8 no 2 pp 143ndash156 2007

[18] L Yang B L Guo and W Ni ldquoMultimodality medical imagefusion based on multiscale geometric analysis of contourlettransformrdquo Neurocomputing vol 72 no 1ndash3 pp 203ndash211 2008

[19] Q Zhang and B-L Guo ldquoMultifocus image fusion using thenonsubsampled contourlet transformrdquo Signal Processing vol89 no 7 pp 1334ndash1346 2009

[20] B Yang and S Li ldquoMultifocus image fusion and restorationwithsparse representationrdquo IEEE Transactions on Instrumentationand Measurement vol 59 no 4 pp 884ndash892 2010

[21] WWu X M Yang Y Pang J Peng and G Jeon ldquoA multifocusimage fusion method by using hidden Markov modelrdquo OpticsCommunications vol 287 pp 63ndash72 2013

[22] Z Wang Y Ma and J Gu ldquoMulti-focus image fusion usingPCNNrdquo Pattern Recognition vol 43 no 6 pp 2003ndash2016 2010

[23] D Agrawal and J Singhai ldquoMultifocus image fusion usingmodified pulse coupled neural network for improved imagequalityrdquo IET Image Processing vol 4 no 6 pp 443ndash451 2010

[24] F Zhang and H Y Chang ldquoEmploying BP neural networksto alleviate the sparsity issue in collaborative filtering rec-ommendation algorithmsrdquo Journal of Computer Research andDevelopment vol 43 pp 667ndash672 2006

[25] P F Xiao and X Z Feng Segmentation and InformationExtraction of High-Resolution Remote Sensing Image BeijingScience Press 2012

[26] R B Huang F N Lang and Z Shi ldquoLog-Gabor and 2D semi-supervised discriminant analysis based face image retrievalrdquoApplication Research of Computers vol 29 pp 393ndash396 2012

[27] Y Chai H Li and Z Li ldquoMultifocus image fusion schemeusing focused region detection and multiresolutionrdquo OpticsCommunications vol 284 no 19 pp 4376ndash4389 2011

[28] H-F Li Y Chai and X-Y Zhang ldquoMultifocus image fusionalgorithm based onmultiscale products and property of humanvisual systemrdquo Control and Decision vol 27 no 3 pp 355ndash3612012

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 9: Research Article Effective Multifocus Image Fusion Based ...

The Scientific World Journal 9

Table 1 Performance comparison of different fusion algorithms in Figure 4

Fusion algorithms MI 119876119860119861119865 CORR RMSE

Average method 74946 072072 098987 88023Gradient pyramid 5102 072693 098876 13791DWT 71207 076948 099784 40532SIDWT 73712 077171 099553 57595Proposed method 98067 081128 099962 28699

of it is almost totally white The fusion results obtained bythe previous five different methods are shown in Figures4(c)ndash4(g) respectively It can be found that the results of thepixel averaging and gradient pyramid method have a poorcontrast compared to those of the DWT-based method theSIDWT-based method and the proposed method Howeverit is difficult for us to perceive the difference among the resultsof the DWT-based method the SIDWT-based method andthe proposed method according to the subjective evaluationTherefore to objectively evaluate these five fusion methodsquantitative assessments of the five fusion results are neededThe results of the quantitative assessments are shown inTable 1 As can be seen from Table 1 MI 119876

119860119861119865 and CORRvalues of the proposed method are higher and RMSE valueis less than those of the other methods which means that byusing our proposed method the best quantitative evaluationresults have been achieved

44 Fusion of Real Digital Camera Images The experimentscarried out in this section are performed on two sets ofsource images acquired naturally as shown in Figures 5(a)-5(b) and Figures 6(a)-6(b) respectively The initial andmodified detected focused regions of those two sets ofsource images are shown in Figures 5(h)-5(i) and Figures6(h)-6(i) respectively The fused images obtained by usingpixel averaging method gradient pyramid method DWT-based method the SIDWT-based method and the proposedmethod on these two sets of source images are shown inFigures 5(c)ndash5(g) and Figures 6(c)ndash6(g) respectively Fromthe fusion results we can easily observe that fusion effectsacquired based on the pixel averaging and gradient pyramidare not satisfactory and with poor contrast For example theregions of the testing card in Figures 5(c)-5(d) are not clearbut they are clear in Figures 5(e)ndash5(g) But it is difficult todiscriminate the difference among the results of the DWT-based method the SIDWT-based method and the proposedmethod by subjective evaluation so we need to do objectiveevaluation However it should be noted that the referenceimage is usually not available for real multifocus imagesso only the two evaluation criteria including the MI and119876119860119861119865 are used to objectively compare the fusion results The

quantitative comparison of the five methods for fusion ofthese two sets of source images is shown in Tables 2 and 3respectively As can be seen from the two tables we can findthat the MI and 119876

119860119861119865 values of the proposed method aresignificantly higher than those of the othermethods It shouldbe noted that we have carried out experiments on othermultifocus images and their results are identical to these two

Table 2 Performance comparison of different fusion algorithms inFigure 5

Fusion algorithms MI 119876119860119861119865

Average method 72941 064922Gradient pyramid 59768 067975DWT 64442 068264SIDWT 682160 070890Proposed method 91957 075904

Table 3 Performance comparison of different fusion algorithms inFigure 6

Fusion algorithms MI 119876119860119861119865

Average method 59845 052143Gradient pyramid 53657 063792DWT 53949 064323SIDWT 58380 067620Proposed method 83105 073806

examples so we did not mention all of them here Thereforethe results of subjective and objective evaluation presentedhere can verify that the performance of the proposedmethodis superior to those of the other methods

5 Conclusions

By combining the idea of the correlation between the neigh-boring pixels and BP neural networks a novel multifocusimage fusion method based on HVS and BP neural networkis proposed in the paper Three features which are basedon HVS and can reflect the clarity of a pixel are extractedand used to train a BP neural network to determine whichpixel is clearer The clearer pixels are combined to form theinitial fused image Then the focused regions are detectedby judging whether pixels from the initial fused image arein the focused regions or not Finally the final fused imageis obtained with the help of the technique of focused regiondetection by a certain fusion rule The results of subjectiveand objective evaluation of several experiments show that theproposed method outperforms several popular widely usedfusionmethods In the future we will focus on improving therobustness of the method for noise

Conflict of Interests

The authors declare no conflict of interests

10 The Scientific World Journal

Acknowledgments

The authors would like to thank the anonymous reviewersfor their valuable comments and constructive suggestionsThis work was supported by the National Natural ScienceFoundation of China (no 60963012 and no 61262034) by theKey Project of Chinese Ministry of Education (no 211087)by the Natural Science Foundation of Jiangxi Province (no20114BAB211020 and no 20132BAB201025) by the Young Sci-entist Foundation of Jiangxi Province (no 20122BCB23017)and by the Science and Technology Research Project of theEducation Department of Jiangxi Province (no GJJ13302)

References

[1] P. Shah, S. N. Merchant, and U. B. Desai, "Multifocus and multispectral image fusion based on pixel significance using multiresolution decomposition," Signal, Image and Video Processing, vol. 7, pp. 95–109, 2013.

[2] V. Aslantas and R. Kurban, "Fusion of multi-focus images using differential evolution algorithm," Expert Systems with Applications, vol. 37, no. 12, pp. 8861–8870, 2010.

[3] R. Benes, P. Dvorak, M. Faundez-Zanuy, V. Espinosa-Duro, and J. Mekysk, "Multi-focus thermal image fusion," Pattern Recognition Letters, vol. 34, pp. 536–544, 2013.

[4] J. Dong, D. Zhuang, Y. Huang, and J. Fu, "Advances in multi-sensor data fusion algorithms and applications," Sensors, vol. 9, no. 10, pp. 7771–7784, 2009.

[5] S. Li and B. Yang, "Hybrid multiresolution method for multi-sensor multimodal image fusion," IEEE Sensors Journal, vol. 10, no. 9, pp. 1519–1526, 2010.

[6] S. Li, B. Yang, and J. Hu, "Performance comparison of different multi-resolution transforms for image fusion," Information Fusion, vol. 12, no. 2, pp. 74–84, 2011.

[7] W. Huang and Z. Jing, "Evaluation of focus measures in multi-focus image fusion," Pattern Recognition Letters, vol. 28, no. 4, pp. 493–500, 2007.

[8] S. Li, J. T. Kwok, and Y. Wang, "Multifocus image fusion using artificial neural networks," Pattern Recognition Letters, vol. 23, no. 8, pp. 985–997, 2002.

[9] M. B. A. Haghighat, A. Aghagolzadeh, and H. Seyedarabi, "Multi-focus image fusion for visual sensor networks in DCT domain," Computers and Electrical Engineering, vol. 37, no. 5, pp. 789–797, 2011.

[10] P. J. Burt and E. H. Adelson, "The Laplacian pyramid as a compact image code," IEEE Transactions on Communications, vol. 31, no. 4, pp. 532–540, 1983.

[11] P. J. Burt, "A gradient pyramid basis for pattern selective image fusion," in Proceedings of the Society for Information Display Conference, pp. 467–470, 1992.

[12] A. Toet, "Image fusion by a ratio of low-pass pyramid," Pattern Recognition Letters, vol. 9, no. 4, pp. 245–253, 1989.

[13] G. Pajares and J. M. de la Cruz, "A wavelet-based image fusion tutorial," Pattern Recognition, vol. 37, no. 9, pp. 1855–1872, 2004.

[14] Y. Zheng, E. A. Essock, B. C. Hansen, and A. M. Haun, "A new metric based on extended spatial frequency and its application to DWT based fusion algorithms," Information Fusion, vol. 8, no. 2, pp. 177–192, 2007.

[15] Y. Yang, D. S. Park, S. Huang, and N. Rao, "Medical image fusion via an effective wavelet-based approach," EURASIP Journal on Advances in Signal Processing, vol. 2010, Article ID 579341, 13 pages, 2010.

[16] M. Unser, "Texture classification and segmentation using wavelet frames," IEEE Transactions on Image Processing, vol. 4, no. 11, pp. 1549–1560, 1995.

[17] F. Nencini, A. Garzelli, S. Baronti, and L. Alparone, "Remote sensing image fusion using the curvelet transform," Information Fusion, vol. 8, no. 2, pp. 143–156, 2007.

[18] L. Yang, B. L. Guo, and W. Ni, "Multimodality medical image fusion based on multiscale geometric analysis of contourlet transform," Neurocomputing, vol. 72, no. 1–3, pp. 203–211, 2008.

[19] Q. Zhang and B.-L. Guo, "Multifocus image fusion using the nonsubsampled contourlet transform," Signal Processing, vol. 89, no. 7, pp. 1334–1346, 2009.

[20] B. Yang and S. Li, "Multifocus image fusion and restoration with sparse representation," IEEE Transactions on Instrumentation and Measurement, vol. 59, no. 4, pp. 884–892, 2010.

[21] W. Wu, X. M. Yang, Y. Pang, J. Peng, and G. Jeon, "A multifocus image fusion method by using hidden Markov model," Optics Communications, vol. 287, pp. 63–72, 2013.

[22] Z. Wang, Y. Ma, and J. Gu, "Multi-focus image fusion using PCNN," Pattern Recognition, vol. 43, no. 6, pp. 2003–2016, 2010.

[23] D. Agrawal and J. Singhai, "Multifocus image fusion using modified pulse coupled neural network for improved image quality," IET Image Processing, vol. 4, no. 6, pp. 443–451, 2010.

[24] F. Zhang and H. Y. Chang, "Employing BP neural networks to alleviate the sparsity issue in collaborative filtering recommendation algorithms," Journal of Computer Research and Development, vol. 43, pp. 667–672, 2006.

[25] P. F. Xiao and X. Z. Feng, Segmentation and Information Extraction of High-Resolution Remote Sensing Image, Science Press, Beijing, China, 2012.

[26] R. B. Huang, F. N. Lang, and Z. Shi, "Log-Gabor and 2D semi-supervised discriminant analysis based face image retrieval," Application Research of Computers, vol. 29, pp. 393–396, 2012.

[27] Y. Chai, H. Li, and Z. Li, "Multifocus image fusion scheme using focused region detection and multiresolution," Optics Communications, vol. 284, no. 19, pp. 4376–4389, 2011.

[28] H.-F. Li, Y. Chai, and X.-Y. Zhang, "Multifocus image fusion algorithm based on multiscale products and property of human visual system," Control and Decision, vol. 27, no. 3, pp. 355–361, 2012.
