International Symposium on Signals, Circuits and Systems (ISSCS 2013), Iasi, Romania, 11-12 July 2013

Parallel Approach for Multifocus Image Fusion

Silviu Ioan Bejinariu, Florin Rotaru, Cristina Diana Niţă, Ramona Luca

Institute of Computer Science, Romanian Academy, Iaşi Branch, Iaşi, Romania {silviu.bejinariu, florin.rotaru, cristina.nita, ramona.luca}@iit.academiaromana-is.ro

Abstract— A parallel approach to multifocus image fusion is proposed. Most parallelization techniques for image processing and image fusion presented in the literature rely either on the computing power of graphics processing units (GPU) or on the message passing interface (MPI) paradigm. Few exploit the capabilities offered by multi-core processors. This paper presents two parallelized methods for multifocus image fusion: the first is based on spatial frequency evaluation at the pixel level, and the second uses the morphological wavelet decomposition. An efficiency evaluation is presented for an implementation on multi-core processors.

I. INTRODUCTION

Image fusion is the process of combining relevant information from two or more images into a single image with richer informational content. Multifocus image fusion addresses scenes of very large depth, in which the sharpness of objects varies with their distance to the camera. To obtain a clear image, multiple images are captured with the camera focused on objects located at different distances, and these images are then fused into an image of higher quality.

Most multifocus image correction techniques define and evaluate a sharpness criterion that quantifies the high-frequency content corresponding to edges or region borders. The input images are then combined so that the resulting image maximizes the sharpness criterion. Fusion techniques may be applied at the pixel, block, or region level.

The most widely used and easiest to implement fusion techniques are pixel-level approaches [1], [2]. Other approaches use spatial frequency [3], morphological operators, or multiresolution decomposition techniques such as the wavelet transform [4] to fuse the images.

In [5], the authors use two measures of information level, spatial frequency and visibility, to define a two-step fusion scheme. The spatial frequency based method is used in this paper to build a parallelized fusion scheme. Spatial frequency is also used in [9] to develop a fusion method based on segmenting the image obtained by averaging the inputs, followed by a recombination of regions. More elaborate block-based fusion schemes use neural networks to select blocks from the source images [10].

The morphological version of the Haar wavelet transform, built on the morphological dilation and erosion operators introduced in [6], [7], was used in [8] to define a morphological wavelet based fusion scheme involving only integer operations, which reduces the computational cost drastically. This is the second fusion method for which a parallelized version is proposed in this paper.

Parallel approaches to image processing and image fusion are typically based on: CUDA (Compute Unified Device Architecture), a parallel platform implemented on NVIDIA graphics processing units; and MPI (Message Passing Interface), a distributed-memory model used mostly for processing very large datasets. Now that multi-core processors are common, parallel algorithms based on the shared-memory model can be easily implemented with standard development environments [16] on a single computer, without special hardware requirements. More efficient algorithms have been developed for image processing and fusion using hybrid parallelization schemes [17], [18].

The next section of this paper presents the sequential versions of the fusion algorithm based on morphological wavelets [8] and of a fusion scheme built on the information level measures proposed in [5]. A post-processing enhancement procedure proposed in [16] is also briefly presented. The third section analyzes the parallelization opportunities for the fusion schemes previously described. The last section presents the results obtained by applying the proposed parallel fusion schemes to some well-known multifocus test images.

II. MULTIFOCUS IMAGE FUSION

A. Morphological Wavelets for Multifocus Image Fusion

Let X be the input image, represented as an N × M matrix (N, M even positive integers), and B a block of four pixels (r,c), (r,c+1), (r+1,c) and (r+1,c+1). The analysis (ψ↑, ω↑) and synthesis (ψ↓, ω↓) operators are defined by the following relations [8]:

ψ↑(X)(B) = M = max{ X(r,c), X(r,c+1), X(r+1,c), X(r+1,c+1) }
ω↑(X)(B) = (y_v, y_h, y_d)                                        (1)

y_v = M − X(r+1,c)     if M − X(r+1,c) > 0,     y_v = X(r,c) − M  otherwise
y_h = M − X(r,c+1)     if M − X(r,c+1) > 0,     y_h = X(r,c) − M  otherwise
y_d = M − X(r+1,c+1)   if M − X(r+1,c+1) > 0,   y_d = X(r,c) − M  otherwise   (2)

978-1-4673-6143-9/13/$31.00 ©2013 IEEE


where y_v, y_h and y_d are the vertical, horizontal and diagonal signal details.

The signal reconstruction is made using the synthesis operator [8]:

X'(u,v) = X̂(u,v) +̇ Ŷ(u,v),  (u,v) ∈ { (r,c), (r,c+1), (r+1,c), (r+1,c+1) }   (3)

The +̇ operator is the usual addition and

X̂(r,c) = X̂(r,c+1) = X̂(r+1,c) = X̂(r+1,c+1) = M
Ŷ(r,c)     = min{ y_v, y_h, y_d, 0 }
Ŷ(r,c+1)   = min{ −y_h, 0 }
Ŷ(r+1,c)   = min{ −y_v, 0 }
Ŷ(r+1,c+1) = min{ −y_d, 0 }                                        (4)

In the fusion step, the signal details with the greatest absolute value are selected on each level of the image reconstruction.

B. Image Fusion Based on the Information Level

Other fusion techniques use the information level to choose the source of each pixel in the fused image. In [5], two measures of the information level were proposed: spatial frequency and block visibility. The two-step algorithm divides the source images into blocks and uses the two criteria to select blocks for two intermediate fused images; these two images are then fused again using one of the two criteria. The parallel fusion scheme proposed in this paper uses only the spatial frequency measure, computed in a rectangular neighborhood centered at each pixel of the input images. The spatial frequency in an image block of size M × N is defined as [5]:

SF = sqrt( RF² + CF² )                                             (5)

where RF and CF are the row and column frequencies, defined by the following relations:

RF = sqrt( (1/(M·N)) Σ_{r=1..N} Σ_{c=2..M} [ F(r,c) − F(r,c−1) ]² )
CF = sqrt( (1/(M·N)) Σ_{r=2..N} Σ_{c=1..M} [ F(r,c) − F(r−1,c) ]² )   (6)

In [16], an enhancement procedure based on a fusion map was proposed. The fusion map describes the contribution of each input image to the fusion result. Because in most cases the in-focus planes form compact regions in the input images, a smoothing procedure is applied to the fusion map, and the fused image is then reconstructed from the filtered map. The enhancement procedure was combined with both fusion methods described above: morphological wavelet based and information level based multifocus image fusion.

The fusion results are evaluated using a similarity measure between two images, expressed in terms of the Roberts gradient operator [8]:

S(G, G') = 1 − Σ_{r,c} [ G(r,c) − G'(r,c) ]² / Σ_{r,c} [ G(r,c)² + G'(r,c)² ]   (7)

where G(r,c) = max{ G_1(r,c), …, G_n(r,c) } for all positions (r,c) in the gradients G_i, i = 1…n, of the input images X_i, i = 1…n, and G' is the gradient of the fused image X'. G(r,c) is the magnitude of the Roberts operator at each position. The quality of the fused image is better when the similarity value is close to 1.

III. PARALLEL TASKS IN THE FUSION PROCEDURE

This section analyzes the possibilities of implementing the fusion procedures in parallel. It should be noted that the number of available processor cores is small (2, 4 or 8) and the image size is not very large. Excessive task partitioning leads to poor performance because of the large number of synchronization operations, so the partitioning must be kept coarse.

A. Parallelism in the Spatial Frequency Based Fusion Procedure

In the spatial frequency based fusion, the measure is computed at each pixel of both input images. Because the images have the same size, the sequential version computes the spatial frequency and builds the fused image in the same loop, as described below:

for each row
    for each column
        compute the spatial frequency at the pixel of the 1st image
        compute the spatial frequency at the pixel of the 2nd image
        compute the pixel value in the fused image
    end for
end for
compute the similarity evaluation measure

The fusion procedure contains two nested loops whose body is executed M × N times, where M and N are the image dimensions. The body of the outer loop is executed M times; it is time consuming and its iterations are independent, so this outer loop should be executed in parallel. The spatial frequency itself is computed by two further nested loops whose inner body is executed P² times, where P is the size of the pixel neighborhood in which the spatial frequency is evaluated. Since this task is already inside a parallel loop, it is executed sequentially to avoid excessive fragmentation.

Computing the similarity measure requires the evaluation of three different gradients (the two input images and the fused image). This task is executed only once and is not time consuming, but the three gradient computations are completely independent.


These tasks will be executed in parallel without a significant speed increase.

B. Parallelism in the Morphological Wavelet Based Fusion Procedure

In the morphological wavelet based fusion there is a preliminary step in which the morphological wavelet transform (MWT) is computed for both input images. Then the fused image is built by evaluating the fused wavelet coefficients on all decomposition levels, in all pixels, as below:

compute the MWT of the 1st image
compute the MWT of the 2nd image
for each MWT decomposition level
    for each row
        for each column
            evaluate the fused coefficients and compute the fused pixel value
        end for
    end for
end for
compute the similarity evaluation measure

For the initialization step there are two options: implement a parallel version of the MWT itself, or compute the transforms of the input images in parallel. We chose the second, having in view the case of more than two input images. The image fusion and reconstruction are unified in three nested loops. The outer loop cannot be executed in parallel because its input data changes from one decomposition level to the next, so only the middle loop is parallelized. The similarity evaluation is parallelized as in the first case.

As described in [16], the enhancement procedure is performed in two steps which cannot be executed in parallel because of their data dependency.

The most common evaluation of parallel algorithms uses the parallel efficiency, defined in [14] as

E = t_s / (n · t_p)

where t_s is the execution time of the sequential version of the algorithm, t_p is the processing time of the parallel version, and n is the number of processors used.

IV. EXPERIMENT

The proposed parallelization method was applied to different sets of multifocus images available on the Internet [11]. This section presents the results obtained for a single multifocus image set. The original images are depicted in figure 1.a,b. Figure 1.c,d contains the fusion results obtained with the original methods described above, and figure 1.e,f the results obtained after the post-processing procedure proposed in [16]. The results are summarized in table I. For the morphological wavelet a two-level decomposition was applied, and a 7x7 window was used to compute the spatial frequency [16].

The parallel implementation of the described fusion procedures was evaluated on two computers: one with a Core2Duo 2.16 GHz processor and one with an Intel Core i5 3.10 GHz processor. Both systems have 4 GB RAM and run 64-bit Windows 7. To highlight the efficiency of the parallelization, the evaluation used the 'pepsi' images scaled to 200%.

Table III presents the processing time (in seconds) under different conditions: Core2Duo or Core i5 processor (first column), 32- or 64-bit build (second column), and sequential or parallel implementation (third column). The average processing time is given for the following fusion procedures: morphological wavelet (MWT) based fusion, spatial frequency (SPF) based fusion, and MWT and SPF completed by the post-processing procedure (MWT-Enh and SPF-Enh).

In table II, the parallel efficiency is computed for the same situations.

It must be noticed that:
• Even if the MWT based fusion is faster, because only integer operations are performed, a better efficiency is obtained for the spatial frequency method, where floating point operations are also involved;
• A small additional speed-up is obtained when the application is compiled for 64 bits, because fewer conversion operations are required;
• When the post-processing procedure is applied, the parallel efficiency is lower than for the original method, because the post-processing is not parallelized;
• The lower efficiency values obtained for MWT based fusion on the Core i5 processor are caused by the reduced degree of parallelization in the wavelet transform;
• The most obvious gain was obtained for the spatial frequency based image fusion: the parallel efficiency is 0.91 on the Core2Duo and 0.87 on the Core i5 processor, with similar values in both the 32- and 64-bit versions of the application.

The proposed parallelization of the image fusion procedures was implemented and tested in an image processing framework developed by the authors. The framework is implemented in C++ as a Windows application. For image manipulation and some processing functions the OpenCV library is used [12], [13]. Parallelization was implemented using the parallel programming support available in Microsoft Visual Studio 2010 [15].

V. CONCLUSIONS

In this paper, a parallel approach for multifocus image fusion is proposed. It is based on the shared-memory parallelization model, which can be easily implemented on common multi-core processors. Parallel versions are proposed for the multifocus image fusion schemes based on the morphological wavelet transform and on spatial frequency. The results, shown in tables II and III, are encouraging. The proposed method will be extended to process more than two multifocus source images and to other image fusion methods. Parallel implementation on multi-core processors is a worthwhile path for those not satisfied with the speed gain obtained from higher CPU clock rates alone, without using the full computing power of modern processors.

TABLE I. SUMMARY OF THE FUSION RESULTS FOR IMAGES IN FIG. 1

                                           Morphological wavelet   Information level
                                           based fusion            based fusion
Similarity measure, original method        0.850                   0.937
Similarity measure after post-processing   0.947                   0.950

Figure 1. Fusion map and fusion results obtained using "pepsi1" and "pepsi2" as input images. The left column corresponds to the morphological wavelet multifocus image fusion and the right column to the spatial frequency based method. Images a., b. are from [11]; images c.-f. were obtained by the authors.
a. Original image "pepsi1", focused on the near plane [11]
b. Original image "pepsi2", focused on the far plane [11]
c. Fused image obtained using morphological wavelets; similarity measure 0.850
d. Fused image obtained using the information level criteria; similarity measure 0.937
e. Fusion result after post-processing; similarity measure 0.947
f. Fusion result after post-processing; similarity measure 0.950

TABLE II. PARALLEL EFFICIENCY FOR MULTIFOCUS FUSION PROCEDURES

Fusion method      MWT    SPF    MWT-Enh  SPF-Enh
Core2Duo, Win32    0.68   0.91   0.56     0.75
Core2Duo, x64      0.64   0.84   0.53     0.79
Core i5,  Win32    0.38   0.87   0.31     0.45
Core i5,  x64      0.29   0.75   0.30     0.47

TABLE III. PROCESSING TIME (SECONDS) FOR MULTIFOCUS FUSION PROCEDURES

Processing conditions        MWT    SPF    MWT-Enh  SPF-Enh
Core2Duo, Win32, Seq.        0.171  0.484  0.312    0.796
Core2Duo, Win32, Par.        0.125  0.265  0.281    0.531
Core2Duo, x64,   Seq.        0.140  0.468  0.250    0.687
Core2Duo, x64,   Par.        0.110  0.280  0.234    0.436
Core i5,  Win32, Seq.        0.141  0.327  0.249    0.531
Core i5,  Win32, Par.        0.093  0.094  0.203    0.296
Core i5,  x64,   Seq.        0.109  0.327  0.203    0.468
Core i5,  x64,   Par.        0.093  0.109  0.172    0.251

REFERENCES
[1] Z. Li, Z. Jing, G. Liu, S. Sun and H. Leung, "Pixel visibility based multifocus image fusion", Proceedings of the 2003 International Conference on Neural Networks and Signal Processing, 2003, Vol. 2, pp. 1050-1053.
[2] S. Li, J. T. Kwok and Y. Wang, "Combination of images with diverse focuses using the spatial frequency", Information Fusion, Vol. 2, 2001, pp. 169-176.
[3] B. Yang and S. Li, "Multi-focus image fusion based on spatial frequency and morphological operators", Chinese Optics Letters, Vol. 5, 2007, pp. 452-453.
[4] W. W. Wang, P. L. Shui and G. X. Song, "Multifocus image fusion in wavelet domain", International Conference on Machine Learning and Cybernetics, 2003, Vol. 5, pp. 2887-2890.
[5] R. Maruthi and K. Sankarasubramanian, "Multi focus image fusion based on the information level in the regions of the images", Journal of Theoretical and Applied Information Technology, 2007, pp. 80-85.
[6] J. Goutsias and H. J. Heijmans, "Nonlinear multiresolution signal decomposition schemes, part 1: morphological pyramids", IEEE Trans. Image Processing, Vol. 9, November 2000, pp. 1862-1876.
[7] H. J. Heijmans and J. Goutsias, "Nonlinear multiresolution signal decomposition schemes, part 2: morphological wavelets", IEEE Trans. Image Processing, Vol. 9, November 2000, pp. 1897-1913.
[8] I. De and B. Chanda, "A simple and efficient algorithm for multifocus image fusion using morphological wavelets", Signal Processing, Vol. 86, 2006, pp. 924-936.
[9] S. Li and B. Yang, "Multifocus image fusion using region segmentation and spatial frequency", Image and Vision Computing, Vol. 26, 2008, pp. 971-979.
[10] W. Huang and Z. Jing, "Multi-focus image fusion using pulse coupled neural network", Pattern Recognition Letters, Vol. 28, 2007, pp. 1123-1132.
[11] Lehigh University, Multifocus images database, www.ece.lehigh.edu/SPCRL/IF/pepsi.htm.
[12] Open Source Computer Vision Library, Reference Manual, Copyright 1999-2001 Intel Corporation.
[13] G. Bradski and A. Kaehler, Learning OpenCV, O'Reilly Media, Inc., 2008.
[14] Y. Cheng, Y. Li and R. Zhao, "A parallel image fusion algorithm based on wavelet packet", Proceedings of the 8th International Conference on Signal Processing, ICSP 2006, Vol. 2.
[15] C. Campbell and A. Miller, Parallel Programming with Microsoft Visual C++, Microsoft Corporation, 2012.
[16] S. Bejinariu, F. Rotaru and C. Niţă, "Post-processing enhancement for multifocus image fusion", Proceedings of the 16th International Conference on System Theory, Control and Computing, ICSTCC 2012, Sinaia, 12-14 October 2012.
[17] F. Wei and A. E. Yilmaz, "A hybrid message passing/shared memory parallelization of the adaptive integral method for multi-core clusters", Parallel Computing, Vol. 37, 2011, pp. 279-301.
[18] H. Jin et al., "High performance computing using MPI and OpenMP on multi-core parallel systems", Parallel Computing, Vol. 37, 2011, pp. 562-575.

