
Computers and Electrical Engineering 40 (2014) 29–40


Extended depth of field in images through complex amplitude pre-processing and optimized digital post-processing

0045-7906/$ - see front matter © 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.compeleceng.2013.11.003

This paper is for CAEE special issue SI-40th. Reviews processed and approved for publication by Editor-in-Chief Dr. Manu Malek. Corresponding author. Tel.: +52 (464) 6479940.

E-mail address: [email protected] (E. Cabal-Yepez).

L.M. Ledesma-Carrillo a, M. Lopez-Ramirez a, C.A. Rivera-Romero a, A. Garcia-Perez a, G. Botella b, E. Cabal-Yepez a (corresponding author)

a Division de Ingenierias, Campus Irapuato-Salamanca, Universidad de Guanajuato, Carr. Salamanca-Valle km 3.5 + 1.8, Comunidad de Palo Blanco, 36700 Salamanca, Guanajuato, Mexico
b Departamento de Arquitectura de Computadores y Automatica, Universidad Complutense de Madrid, Avda. Complutense S/N, 28040 Madrid, Spain

Article history: Available online 25 November 2013

Abstract

Many applications require images with high resolution and an extended depth of field. Directly changing the depth of field in optical systems results in losing resolution and information from the captured scene. Different methods have been proposed for carrying out the task of extending the depth of field. Traditional techniques consist of optical-system manipulation by reducing the pupil aperture along with the image resolution. Other methods propose the use of optical arrays with computing-intensive digital post-processing for extending the depth of field. This work proposes a pre-processing optical system and a cost-effective post-processing digital treatment based on an optimized Kalman filter to extend the depth of field in images. Results demonstrate that the proposed pre-processing and post-processing techniques provide images with high resolution and extended depth of field for different focalization errors without requiring optical system calibration. In assessing the resulting image through the universal image quality index, this technique proves superior.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

Many applications such as robot vision, bio-imaging, microscopy and others require optical systems for capturing scenes of interest. These systems depend upon images with high resolution and extended depth of field [1–3]. The system must be calibrated for correct functioning, since relevant regions may be in front of or behind the focal plane. By directly changing the optical system depth of field, resolution and information are lost from the captured scene [4,5]. An optical system cannot provide high resolution and extended depth of field simultaneously by optical means alone; therefore, it is convenient to improve the system characteristics by utilizing independent processing techniques [6–8]. To improve the system characteristics, the first step is to carry out pre-processing by using a set of optical filter arrays for image smoothing, noise cancelation, edge detection, contrast manipulation, etc. The second step involves the treatment of the pre-processed data (post-processing) through computing algorithms like digital filtering in the time or frequency domain to retrieve the captured image with fewer noise effects from the environment, the quantization process, and the pre-processing [9–11]. It is worth noticing that, to achieve a good image restoration, familiarity with the applied pre-processing is convenient; otherwise, an approximation must be used, complicating the post-processing task to such a degree that information from the captured scene could be lost [12,13].


Different techniques have emerged for extending the depth of field in an optical system. The traditional technique consists of reducing the pupil aperture of the optical system [14]; however, this action reduces the image resolution, which is directly correlated with the pupil aperture, and affects the light capture capability of the optical system. On the other hand, if more resolution is required, the pupil aperture must be increased, reducing the system depth of field [4–8,15–17]. In [18], extended depth of field with improved image resolution is reached through digital processing combined with optical arrays and a variable pupil aperture. The process consists of taking several images for different pupil apertures and obtaining their average to produce a digitally restored image. The effectiveness of the method depends on the total number of different pupil apertures used, considerably increasing the computational cost of the restoration process: the more pupil apertures used, the better the restoration results. In [19], a similar block-based algorithm is presented, combining multi-focus images of a scene for enhancing the depth of field in an image. In [20], an image fusion algorithm based on a multi-scale top-hat transform and a toggle contrast operator is described to improve depth of field in images. In [21], a camera with a large-format lens that projects a perspective image, which is scanned by a digital sensor one part at a time, is proposed. The process must be repeated at different focal distances to form an image of the scene with extended depth of field. In [22], an algorithm is presented for compensating limited depth of field in an image from a sequence of images. In [23], a system for remote control of the extended focal depth region is proposed for applications where it is desirable to minimize the movement of the focusing lens. The approach uses three axicons: two external ones for generating a ring and one internal for producing the focal region. Taking a different route, a segmentation method is proposed in [24] to extract focused objects from images with low depth of field. The method uses a focus energy map based on the difference of high-frequency components between the focused region and the defocused background; the construction of region/boundary masks for the focused object by watershed transform, entropy thresholding, and flood filling; a boundary linking method to obtain closed region/boundary masks; and a trimap containing seed regions for the focused object, the defocused background, and uncertain regions. Finally, the trimap is used as the input to an image matting model, which classifies the pixels in the uncertain regions to obtain an accurate focused-object segmentation result. In [15], depth of field extension in images is reached by segmenting the image into multiple clusters using a color-based region classification method and finding a rectangular region that encloses each cluster. Color shifting vectors are estimated using a phase correlation method. The set of clusters is aligned in the opposite direction and, finally, the clusters are fused to produce an approximate in-focus image. In [25,26], a cubic phase mask, which is insensitive to focalization errors, is used to reduce the influence of these errors on the modulation transfer function (MTF) of the optical system. However, the MTF is not on the system focal plane and induces partial losses of information, which can be recovered through additional digital image processing.

During the restoration of images that have been treated with a degradation process, like the use of masks, it is important to know the optical transfer function (OTF) of the applied pre-processing. For a known OTF, restoring the image utilizing inverse filtering should be possible; however, inverse filtering recovery is quite complicated because of the OTF's high sensitivity to small frequency variations [27,28]. On the other hand, when the system OTF is unknown, the restoration approaches proposed in the reviewed literature utilize computationally expensive approximations that take long processing times and consume large amounts of resources. For instance, in [29], an algorithmic implementation of a parametric image processing framework is presented to speed up the computation of addition, subtraction, and multiplication during image restoration. In real applications, the image is captured and digitally stored with inherent quantization errors that make it impossible to restore the pre-processed image by inverse filtering, even when the system OTF is well known [30,31]. For instance, in [32], a fuzzy-based two-step filter for restoring images corrupted with additive noise is proposed: the difference between the central pixel and its neighbors inside a selected window is computed; then, a fuzzy association degree is calculated for each difference value using a Gaussian membership function. The process is repeated to improve the obtained result by reducing the error in the differences. In [33], intensity histogram equalization is used for image contrast enhancement through histogram matching, where the input image is matched to its smoothed version to reduce undesirable artifacts. Using this approach, pixel intensities are randomly perturbed by distributing the resultant image intensities over the available range to increase image contrast. As described above, a generalized and optimized approach for extending the depth of field of an image by using just one pupil aperture, without affecting its resolution, is desirable.

The contribution of this work is a novel methodology composed of a pre-processing optical system and a post-processing digital treatment for extending the depth of field in images. A complex-amplitude mask (filtering) that uses just one pupil aperture is applied during the pre-processing stage. The proposed mask eliminates the need to take large numbers of images with variable pupil apertures. The complex amplitude filter consists of a cubic-phase filter and a Gaussian-amplitude filter, both of which provide an optical system with low sensitivity to focalization errors; since they affect the image modulation, it becomes necessary to apply digital post-processing for image restoration. An iterative Kalman filter in the frequency domain is proposed for image restoration during the post-processing. Different from previous works in the reviewed literature, where large numbers of images taken with different pupil apertures are treated through computing-intensive algorithms for extending the depth of field, the proposed approach is applied to a single image taken with one pupil aperture to enhance its depth of field in an optimized way. The universal image quality index (UIQI) [34] is used as a benchmark for determining the number of iterations of the Kalman filter that provides optimal image restoration with extended depth of field during post-processing utilizing the proposed optical system.

The remainder of this paper is organized as follows: Fundamentals on the introduced pre-processing algorithm and post-processing method for depth of field extension are given in Section 2. Section 3 describes the experimentation for testing the effectiveness of the proposed methodology. Results are presented in Section 4, highlighting the advantages of the applied technique over the traditional approach for depth of field extension. Conclusions and remarks are given in Section 5.

2. Theoretical background

2.1. MTF and focalization errors

To visualize the focalization error effects on the optical system MTF, the complex amplitude transmittance of a generalized pupil aperture C(μ) in 1-D is used [35]:

$C(\mu) = Q(\mu)\, e^{\,j 2\pi w (\mu / \Omega)^{2}},$   (1)

where μ is the frequency domain variable, Ω is the maximum cut-off frequency, w represents the aberration coefficient for focalization errors in wavelengths, and Q(μ) describes the complex-amplitude transmittance of the pupil aperture.

The proposed system OTF H(μ) is the normalized version of the C(μ) autocorrelation; therefore, the MTF M(μ) is the OTF modulus. Fig. 1 shows the focalization error effects on the optical system MTF.

$H(\mu) = \frac{1}{N} \int_{-1}^{1} C\!\left(a + \frac{\mu}{2}\right) C^{*}\!\left(a - \frac{\mu}{2}\right) e^{\,j 2\pi \left(\frac{2 w \mu}{\Omega^{2}}\right) a}\, da$   (2)

$M(\mu) = |H(\mu)|$   (3)

To reduce the oscillations of the MTF in Fig. 1, a complex amplitude filter that combines a cubic phase mask and a Gaussian-amplitude filter is required.
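As an illustration of Eqs. (1)–(3), the following NumPy sketch builds the defocused pupil transmittance C(μ) and evaluates the MTF as the modulus of its normalized autocorrelation; since the defocus phase is already folded into C(μ), a plain discrete autocorrelation stands in for the integral in (2). The clear rectangular aperture used for Q(μ) and the sampling grid are assumed here for illustration only.

```python
import numpy as np

def pupil_transmittance(mu, Omega, w):
    """Generalized 1-D pupil transmittance C(mu) of Eq. (1).
    Q(mu) is assumed to be a clear aperture: 1 for |mu| <= Omega, 0 elsewhere."""
    Q = (np.abs(mu) <= Omega).astype(float)
    return Q * np.exp(1j * 2 * np.pi * w * (mu / Omega) ** 2)

def mtf_from_pupil(C):
    """MTF (Eq. (3)) as the modulus of the normalized autocorrelation of C (Eq. (2))."""
    H = np.correlate(C, C, mode="full")   # discrete stand-in for the integral
    return np.abs(H) / np.abs(H).max()    # normalize so that MTF(0) = 1

if __name__ == "__main__":
    Omega = 128.0
    mu = np.linspace(-Omega, Omega, 1024)
    for w in (0.0, 3.0):                  # in-focus and out-of-focus cases of Fig. 1
        M = mtf_from_pupil(pupil_transmittance(mu, Omega, w))
        print(f"w = {w}: mean MTF over the band = {M.mean():.3f}")
```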

2.2. Complex amplitude filtering

The utilization of a phase-mask filter and a Gaussian-amplitude filter to obtain an MTF with low sensitivity to focalization errors is discussed in [36], where the complex amplitude transmittance of the generalized pupil aperture P(μ) is defined as

$P(\mu) = C(\mu)\, e^{-2\pi \gamma (\mu / \Omega)^{2}}\, e^{\,j 2\pi \alpha (\mu / \Omega)^{3}},$   (4)

where C(μ) is the complex amplitude transmittance of a generalized pupil aperture, γ denotes the attenuation factor of the Gaussian-amplitude filter, and α expresses the maximum value of the optical-path difference for the cubic phase mask.

Fig. 2a depicts the in-focus and out-of-focus MTF, with induced oscillations, for w = 0 and w = 3, respectively, obtained by applying the cubic-phase mask with a maximum optical-path difference α = 33. The Gaussian-amplitude filter lessens the oscillations induced by the cubic-phase mask. Fig. 2b shows the obtained MTF when the Gaussian-amplitude filter is applied with an attenuation factor γ = 0.7. In Fig. 2, it is clear how the complex amplitude filter, composed of the cubic-phase mask and the Gaussian-amplitude filter, reduces the influence of focalization errors on the optical system MTF, providing the same frequency response for the in-focus and out-of-focus cases. However, the modulation is substantially reduced; consequently, digital post-processing for image restoration is in order.
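A minimal sketch of the complex amplitude filter of Eq. (4), assuming the same clear rectangular aperture for C(μ) as in the previous sketch and the parameter values reported later in the paper (Ω = 128, γ = 0.7, α = 33); it compares the in-focus and out-of-focus MTFs, in the spirit of Fig. 2b.

```python
import numpy as np

def complex_amplitude_pupil(mu, Omega, w, gamma, alpha):
    """P(mu) of Eq. (4): generalized pupil C(mu) multiplied by a Gaussian-amplitude
    filter (attenuation gamma) and a cubic-phase mask (optical-path strength alpha)."""
    C = (np.abs(mu) <= Omega) * np.exp(1j * 2 * np.pi * w * (mu / Omega) ** 2)
    gaussian = np.exp(-2 * np.pi * gamma * (mu / Omega) ** 2)
    cubic = np.exp(1j * 2 * np.pi * alpha * (mu / Omega) ** 3)
    return C * gaussian * cubic

def mtf(P):
    """MTF as the modulus of the normalized autocorrelation of the pupil function."""
    H = np.correlate(P, P, mode="full")
    return np.abs(H) / np.abs(H).max()

if __name__ == "__main__":
    Omega = 128.0
    mu = np.linspace(-Omega, Omega, 1024)
    M_in = mtf(complex_amplitude_pupil(mu, Omega, w=0.0, gamma=0.7, alpha=33.0))
    M_out = mtf(complex_amplitude_pupil(mu, Omega, w=3.0, gamma=0.7, alpha=33.0))
    # With gamma = 0.7 and alpha = 33 the two curves nearly coincide (cf. Fig. 2b).
    print("max |M_in - M_out| =", np.abs(M_in - M_out).max())
```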

Fig. 1. Modulation transfer function (MTF). (a) In focus (w = 0). (b) Out of focus (w = 3).

Fig. 2. In-focus and out-of-focus MTF for the cubic-phase mask with (a) α = 33 and γ = 0; (b) α = 33 and γ = 0.7.


2.3. Image restoration

Considering an image i(x,y) that has been corrupted through a degradation process h(x,y) plus noise n(x,y), the resulting image g(x,y) can be described as

$g(x, y) = i(x, y) * h(x, y) + n(x, y),$   (5)

where h(x,y) is also known as the point spread function (PSF). A direct way of obtaining the restoration r(x,y) of i(x,y) is through inverse filtering, considering the degradation process h(x,y) as known and the additive noise as zero, n(x,y) = 0, for computational simplicity. Computing the Fourier transform of the process described in (5), and taking into account the considerations above, the process can be rewritten as

$G(u, v) = I(u, v)\, H(u, v).$   (6)

From (6), the restoration R(u,v) of the image I(u,v) through inverse filtering is given by

$R(u, v) = \frac{G(u, v)}{H(u, v)}.$   (7)

However, image recovery through inverse filtering is quite complicated because of the OTF's high sensitivity to small frequency variations [27,28], as described above.
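A short 1-D sketch of why Eq. (7) is fragile: a toy signal, an assumed Gaussian PSF, and a small amount of additive noise are enough to make the naive inverse filter blow up wherever |H(u)| is close to zero. All the numbers below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "image", an assumed Gaussian PSF, and small additive noise (Eq. (5)).
i = rng.random(256)
h = np.exp(-0.5 * ((np.arange(256) - 128) / 2.0) ** 2)
h /= h.sum()
n = 1e-3 * rng.standard_normal(256)

I = np.fft.fft(i)
H = np.fft.fft(np.fft.ifftshift(h))      # PSF centered at the origin
G = I * H + np.fft.fft(n)                # degraded spectrum, Eq. (6) plus noise

r = np.fft.ifft(G / H).real              # naive inverse filter, Eq. (7)
print("relative restoration error:", np.linalg.norm(r - i) / np.linalg.norm(i))
# Where |H| is tiny the noise term is divided by almost zero and dominates the
# estimate, which is why an iterative Kalman restoration is used instead.
```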

2.4. Kalman filtering

The Kalman filter is a set of mathematical equations that provides a computationally efficient (recursive) means to estimate the state x_k of a discrete-time controlled process in a way that minimizes the mean of the squared error [37]. It is defined by the linear stochastic difference equation

$x_{k} = A x_{k-1} + B u_{k-1} + \eta_{k-1},$   (8)

with a measurement z_k defined by the difference equation

$z_{k} = H x_{k} + v_{k}.$   (9)

The random variables η_k and v_k represent the process and measurement noise, respectively. The matrix A in (8) relates the state at the previous time step x_{k-1} to the state at the current step x_k, in the absence of either a driving function or process noise. The matrix B relates the optional control input u_k to the state x_k. The matrix H in (9) relates the state x_k to the measurement z_k.

In the Kalman filter, the goal is to find an equation that computes an a posteriori state estimate x̂_k as a linear combination of an a priori estimate x̂_{k-1} and a weighted difference between an actual measurement z_k and a measurement prediction H x̂_{k-1}, as

$\hat{x}_{k} = \hat{x}_{k-1} + K\left(z_{k} - H \hat{x}_{k-1}\right).$   (10)

The residual (z_k − H x̂_{k-1}) in (10) reflects the discrepancy between the predicted measurement H x̂_{k-1} and the actual measurement z_k. The matrix K is the gain that minimizes the a posteriori error covariance V_k, and it can be obtained by

Fig. 3. Description of proposed methodology for depth of field extension in images.

Fig. 4. Images with different degrees of aberration (w = 0 to w = 3.0) used as input to the proposed technique for extending their depth of field.


$K_{k} = \frac{V_{k-1} H^{T}}{H V_{k-1} H^{T} + S_{k}}.$   (11)

From (11), it can be seen that as the measurement error covariance S_k approaches zero, the gain K weights the residual more heavily, whereas, as the a priori estimate error covariance V_{k-1} approaches zero, the gain K weights the residual less heavily.
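The two limiting cases can be checked with a scalar version of Eq. (11); the numbers below (H = 1, V = 1) are arbitrary and only meant to illustrate the behavior of the gain.

```python
def kalman_gain(V_prev, H, S):
    """Scalar form of Eq. (11): K = V_{k-1} H / (H V_{k-1} H + S)."""
    return V_prev * H / (H * V_prev * H + S)

# S -> 0: the gain tends to 1/H, so the measurement is trusted almost completely.
print(kalman_gain(V_prev=1.0, H=1.0, S=1e-6))   # ~1.0
# V_{k-1} -> 0: the gain tends to 0, so the prediction is trusted instead.
print(kalman_gain(V_prev=1e-6, H=1.0, S=1.0))   # ~0.0
```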

Applying the Kalman filter equations described above to the restoration R(u,v) of the corrupted image G(u,v), the Kalman filter gain K_k(u,v) in the Fourier domain can be written as

$K_{k}(u, v) = \frac{V_{k-1}(u, v)\, H^{T}(u, v)}{V_{k-1}(u, v)\, |H(u, v)|^{2} + S_{k}(u, v)},$   (12)

where H(u,v) is the system OTF, S_k is the 2-D DFT of the noise covariance matrix, and V_k(u,v) is the 2-D DFT of the error covariance matrix. The noise covariance matrix S_k in (12) can be written as

Fig. 5. Pre-processed images with different degrees of aberration (w = 0 to w = 3.0) through the complex-amplitude filter with parameters Ω = 128, γ = 0.7, and α = 33.


$S_{k}(u, v) = \sigma^{2} U,$   (13)

where U is the unit (identity) matrix and σ² is the variance of the measurement noise. On the other hand, the process error covariance matrix V_k is given by

$V_{k}(u, v) = V_{k-1}(u, v) - K_{k}(u, v)\, H(u, v)\, V_{k-1}.$   (14)

The restored image estimate $\hat{R}_{k}(u, v)$ can be written as

$\hat{R}_{k}(u, v) = \hat{R}_{k-1}(u, v) + K_{k}(u, v)\left[G_{k}(u, v) - H(u, v)\, \hat{R}_{k-1}(u, v)\right].$   (15)

The initial conditions for the equations described above are $\hat{R}_{0}(u, v) = 0$ and $V_{0}(u, v) = 255$, since 256 different gray levels are used.
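A compact sketch of the restoration loop of Eqs. (12)–(15), treating every frequency bin independently so that the covariances become per-frequency arrays; using conj(H) in place of H^T and modelling S_k as a constant σ² everywhere (Eq. (13)) are simplifying assumptions of this sketch.

```python
import numpy as np

def kalman_restore(G, H, sigma2=1.0, iterations=8):
    """Iterative frequency-domain image restoration following Eqs. (12)-(15).

    G : 2-D DFT of the degraded (pre-processed) image.
    H : 2-D OTF of the optical system, same shape as G.
    sigma2 : measurement-noise variance, so S_k = sigma2 everywhere (Eq. (13)).
    """
    R = np.zeros_like(G)                 # initial estimate R_0(u,v) = 0
    V = np.full(G.shape, 255.0)          # V_0(u,v) = 255, for 256 gray levels
    for _ in range(iterations):
        K = V * np.conj(H) / (V * np.abs(H) ** 2 + sigma2)   # Eq. (12)
        R = R + K * (G - H * R)                              # Eq. (15)
        V = V - (K * H).real * V            # Eq. (14); K*H is real up to round-off
    return R

# Usage sketch: g is the captured pre-processed image and otf its 2-D OTF.
# restored = np.fft.ifft2(kalman_restore(np.fft.fft2(g), otf, iterations=8)).real
```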

2.5. Universal image quality index

The universal image quality index (UIQI) is designed to model any image distortion as a combination of three factors: loss of correlation, luminance distortion, and contrast distortion [34,38]. The index takes values between −1 and 1, with 1 being the best quality. For two images i(x,y) and r(x,y) of dimensions N × M, the UIQI is defined as

$\mathrm{UIQI} = \frac{4\, \sigma_{ir}\, \bar{i}\, \bar{r}}{\left(\sigma_{i}^{2} + \sigma_{r}^{2}\right)\left(\bar{i}^{2} + \bar{r}^{2}\right)},$   (16)

where

$\bar{i} = \frac{1}{NM} \sum_{x=1}^{N} \sum_{y=1}^{M} i(x, y),$   (17)

$\bar{r} = \frac{1}{NM} \sum_{x=1}^{N} \sum_{y=1}^{M} r(x, y),$   (18)

$\sigma_{i}^{2} = \frac{1}{NM} \sum_{x=1}^{N} \sum_{y=1}^{M} \left(i(x, y) - \bar{i}\right)^{2},$   (19)

$\sigma_{r}^{2} = \frac{1}{NM} \sum_{x=1}^{N} \sum_{y=1}^{M} \left(r(x, y) - \bar{r}\right)^{2},$ and   (20)

$\sigma_{ir} = \frac{1}{(N-1)(M-1)} \sum_{x=1}^{N} \sum_{y=1}^{M} \left(i(x, y) - \bar{i}\right)\left(r(x, y) - \bar{r}\right).$   (21)

In (16)–(21), i(x,y) and r(x,y) are the original image, used as a benchmark without any degradation, and the restored image after pre- and post-processing, respectively. The means of the original and restored images are given by ī and r̄, respectively. The variances of the original and restored images are given by σᵢ² and σᵣ², respectively. The cross variance between the original image i(x,y) and the restored image r(x,y) is given by σᵢᵣ. Finally, UIQI = 1 means r(x,y) = i(x,y).
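For reference, a direct NumPy transcription of Eqs. (16)–(21); it keeps the normalizations exactly as written above, so for identical inputs the index equals NM/((N−1)(M−1)), which tends to 1 for large images.

```python
import numpy as np

def uiqi(i, r):
    """Universal image quality index, Eqs. (16)-(21), for two N x M images."""
    i = i.astype(float)
    r = r.astype(float)
    N, M = i.shape
    i_mean, r_mean = i.mean(), r.mean()                       # Eqs. (17), (18)
    i_var = ((i - i_mean) ** 2).mean()                        # Eq. (19)
    r_var = ((r - r_mean) ** 2).mean()                        # Eq. (20)
    cross = ((i - i_mean) * (r - r_mean)).sum() / ((N - 1) * (M - 1))   # Eq. (21)
    return 4 * cross * i_mean * r_mean / ((i_var + r_var) * (i_mean ** 2 + r_mean ** 2))

# Identical 256 x 256 images give 65536/65025, i.e. about 1.008, with these normalizations.
img = np.random.default_rng(1).random((256, 256))
print(uiqi(img, img))
```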

2.6. Proposed methodology for depth of field extension

To test the performance of the proposed optical system on extending the depth of field in images, the methodology described in Fig. 3 is used. In the first step, the image is captured and pre-processed utilizing the complex amplitude filter described in Section 2.2. Then, the Kalman filter is applied to the pre-processed image to restore it, as described in Section 2.4. Additionally, a study on the optimal number of iterations for the Kalman filter is carried out to obtain an optimal image restoration with extended depth of field, utilizing the UIQI computation as a benchmark with respect to the original image without focalization errors (w = 0).

Fig. 6. UIQI evolution in the image restoration, according to the number of iterations in the Kalman filter.


3. Experimentation

Six different standard images with high-frequency content in greyscale format are used for testing the proposed technique described in Section 2.6: Lena, Peppers, Baboon, Boat, Barbara, and Siemens Star. Images with different defocus error or aberration degree (from w = 0 to w = 3) are used as input to the proposed system, as shown in each column of Fig. 4.

A complex-amplitude mask (4) is used on the images in Fig. 4 to carry out the pre-processing described in Section 2.2. Optimal results in the pre-processing stage are obtained for the attenuation factor of the Gaussian-amplitude filter γ = 0.7 and the maximum value of the optical-path difference α = 33, according to [36].

Following the Kalman filter principle, the pre-processed result can be used for retrieving the original image with extended depth of field by applying (14) and (15) in the Fourier domain. However, the Kalman filter is an adaptive, recursive method that requires a precise number of iterations to produce the best estimate. Therefore, in an additional study, the number of iterations k that produces an optimal image restoration through the proposed technique is determined quantitatively, utilizing the universal image quality index (UIQI) as a benchmark.
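The iteration study can be sketched as a simple sweep over k, keeping the value that maximizes the UIQI against the in-focus reference; the restore and quality callables are assumed to be the kalman_restore and uiqi helpers sketched in Section 2, and the sweep range is an assumption.

```python
import numpy as np

def tune_iterations(G, H, reference, restore, quality, k_max=20):
    """Sweep the Kalman iteration count k and return the value that maximizes
    the quality index of the restored image against the in-focus reference."""
    scores = []
    for k in range(1, k_max + 1):
        restored = np.fft.ifft2(restore(G, H, iterations=k)).real
        scores.append(quality(reference, restored))
    best_k = int(np.argmax(scores)) + 1       # iterations are counted from 1
    return best_k, scores

# Usage sketch, with g_pre a pre-processed image, otf its 2-D OTF and original
# the undegraded (w = 0) image:
# k_opt, curve = tune_iterations(np.fft.fft2(g_pre), otf, original, kalman_restore, uiqi)
```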

4. Results

4.1. Pre-processing results

Obtained results from the optical pre-processing stage, which applies the complex-amplitude filter in (4) to the input images in Fig. 4 with the optimal parameters γ = 0.7 and α = 33, determined in [36], and Ω = 128, are shown in Fig. 5. From this figure, it is clear that the image quality is brought to a common level independently of the aberration on the pupil, reducing the influence of errors in the image but affecting its modulation. Hence, a post-processing stage for image restoration through digital techniques is necessary.

Fig. 7. Image restoration through the pre-processing and post-processing (8 iterations in the Kalman filter) in the proposed technique with aberration-error annihilation.

Fig. 8. Relevance of the pre- and post-processing stages for high-performance depth of field extension applying the proposed methodology.

Fig. 9. Qualitative comparison of obtained results using the traditional technique against the proposed approach for depth-of-field extension in images with different focus errors (from w = 0 to w = 3).

4.2. Post-processing results

A numeric analysis to determine how many iterations k are required in the Kalman filter to obtain an optimal restoration of the treated image is carried out utilizing the UIQI as a benchmark. Fig. 6 depicts the evolution of the UIQI against the number of iterations in the Kalman filter for each treated image with respect to its in-focus version without any degradation (w = 0). From the numerical analysis of each treated image, an average number of k = 8 iterations in the Kalman filter is obtained for the proposed system tuning. In Fig. 7, images with extended depth of field are obtained from the pre-processed images in Fig. 5. In the post-processing stage, 8 iterations are performed in the Kalman filter for the six treated cases (Lena, Peppers, Baboon, Boat, Barbara, and Siemens Star), to compensate the modulation loss from the pre-processing stage and obtain an optimal restoration (UIQI ≈ 1) independently of the pupil aberration w in the original image (Fig. 4). Fig. 8 depicts the relevance of the pre- and post-processing stages for reaching high-performance results (UIQI ≈ 1) applying the introduced methodology. In the first row of Fig. 8, input images to the proposed system with different focalization errors (from w = 0 to w = 3) are shown. The second row depicts the output images after pre-processing. Finally, row 3 gives the output images from the proposed system after post-processing.

4.3. Result discussion

From the obtained results, it is clear that the proposed pre-processing and post-processing techniques provide images with high resolution and extended depth of field for different focalization errors without requiring optical system calibration, as shown in Fig. 7. This outperforms previous approaches from the reviewed literature that apply the traditional technique, where the image depth of field is extended by processing several images taken with different pupil apertures [14,18–22], as shown in Fig. 9. The maximum obtained UIQI utilizing the traditional technique is 0.9476 for different aberrations (from w = 0 to w = 3), whereas the proposed methodology provides a UIQI = 0.9864 for all aberrations. Different from [23], where large lens arrays are used for extending the focal depth region, the proposed technique extends the depth of field in images by applying a complex amplitude mask using only one pupil aperture. Regarding computational cost, different from other techniques in the reviewed literature, where several computing-intensive algorithms like region/boundary saliency maps, watershed transform, entropy thresholding, flood filling [24], and fuzzy logic [32], among others, are applied for extending the image depth of field [15], the proposed technique applies just the complex amplitude mask for pre-processing and a Kalman filter with an optimized fixed number of iterations for image restoration (post-processing). From the obtained results, it is clear that the proposed technique is a generalized and computationally effective approach, with low sensitivity to focalization errors, for extending the depth of field in images.

5. Conclusions

High-resolution images with extended depth of field are very important in many applications; however, directly changing the optical system depth of field causes loss of information and resolution in the captured scene. Traditional techniques to extend depth of field are computationally expensive, require a great number of images taken with different pupil apertures, and take long processing times to obtain an adequate result. Therefore, the novelty and contribution of this work is a generalized and computationally effective optical system with low sensitivity to focalization errors for extending the depth of field in images for many real applications. Different from the traditional techniques, the proposed approach applies a complex amplitude mask during pre-processing and an iterative Kalman filter with an optimal number of iterations during post-processing, on a single image. The usability and high performance of the introduced technique for extending depth of field in images is demonstrated by the obtained results from the different study cases, where the UIQI is around 1 for all treated images, outperforming those obtained by applying the traditional approach.

Acknowledgments

This work was supported in part by the National Council on Science and Technology (CONACYT), Mexico, under Scholarships 254859, 254860, and 252262.

The authors thank Dr. J. Ojeda-Castaneda for many enlightening discussions regarding depth of field extension.

References

[1] Yang S, Weidong C, Heng H, Yue W, Dagan FD. Object localization in medical images based on graphical model with contrast and interest-region terms. In: Proc IEEE computer vision and pattern recognition workshops (CVPRW), Providence, RI; 2012. p. 1–7.
[2] Ikeda M, Theuwissen A, Solhusvik J, Bosiers J. Computational imaging. In: Proc IEEE solid-state circuits conference digest of technical papers (ISSCC), San Francisco, CA; 2012. p. 504–5.
[3] Lakshman P. Combining deblurring and denoising for handheld HDR imaging in low light conditions. Comput Electr Eng 2012;38:434–43.
[4] Ferran C, Bosch S, Carnicer A. Design of optical systems with extended depth of field: an educational approach to wavefront coding techniques. IEEE Trans Educ 2012;55:271–8.
[5] Bishop TE, Favaro P. The light field camera: extended depth of field, aliasing, and superresolution. IEEE Trans Pattern Anal 2012;34:972–86.
[6] Boddeti VN, Kumar BV. Extended-depth-of-field iris recognition using unrestored wavefront-coded imagery. IEEE Trans Syst Man Cybern A 2010;40:495–508.
[7] Aslantas V, Kurban R. Extending depth-of-field of a digital camera using particle swarm optimization based image fusion. In: Proc IEEE 14th international symposium on consumer electronics (ISCE), Braunschweig, Germany; 2010. p. 1–5.
[8] Bagheri S. Maximizing signal-to-noise-ratio in extended depth-of-field bio-imaging. In: Proc 9th Euro-American workshop on information optics (WIO), Helsinki, Finland; 2010. p. 1–2.
[9] Srisomboon K, Srisaiprai S, Thongdit P, Lee W, Pattanavijit A. A performance comparison of two WS filters for image reconstruction technique under different image types. In: Proc 9th international conference on electrical engineering/electronics, computer, telecommunications and information technology (ECTI-CON), Phetchaburi, Thailand; 2012. p. 1–4.
[10] Luqman Bin Muhd Zain M, Elamvazuthi I, Begam M. Comparative analysis of filtering techniques for ultrasound images of bone fracture. In: Proc international conference on intelligent and advanced systems (ICIAS), Kuala Lumpur, Malaysia; 2010. p. 1–4.
[11] Bhattacharyya D, Dutta J, Das P, Bandyopadhyay R, Bandyopadhyay SK, Tai-Hoon K. Discrete Fourier transformation based image authentication technique. In: Proc 8th IEEE international conference on cognitive informatics (ICCI), Kowloon, Hong Kong; 2009. p. 196–200.
[12] de Vieilleville F, Basarab A, Kouamee D, Lobjois V. Light sheet fluorescence microscopy images deblurring with background estimation. In: Proc IEEE workshop on signal processing systems (SIPS), San Francisco, CA; 2010. p. 254–9.
[13] Vega M, Mateos J, Molina R, Katsaggelos AK. Astronomical image restoration using variational methods and model combination. Stat Methodol 2012;9:19–31.
[14] Ramsay E, Serrels KA, Waddie AJ, Taghizadeh MR, Reid DT. Optical super-resolution with aperture-function engineering. Am J Phys 2008;76:1002–6.
[15] Kim S, Lee E, Hayes MH, Paik J. Multifocusing and depth estimation using a color shift model-based computational camera. IEEE Trans Image Process 2012;21:4152–66.
[16] Zhang R, Zhang M, Lu K, Wang T, Wang L, Zhuang S. Analyzing wavelength behavior of a cubic phase mask imaging system based on modulation transfer function. Opt Commun 2012;285:1082–6.
[17] Ni M, Aubrun J, Bishop M, Byler E, Hamilton HH, Lorell KR, et al. The control system of a distributed aperture imaging testbed. IEEE Trans Contr Syst Technol 2010;18:1338–44.
[18] Hong D, Cho H. Depth-of-field extension method using variable annular pupil division. IEEE-ASME Trans Mech 2012;17:390–6.
[19] De I, Chand B. Multi-focus image fusion using a morphology-based focus measure in a quad-tree structure. Inform Fusion 2013;14:136–46.
[20] Bai X. Morphological image fusion using the extracted image regions and details based on multi-scale top-hat transform and toggle contrast operator. Digit Signal Process 2013;23:542–54.
[21] Ben-Ezra M. A digital gigapixel large-format tile-scan camera. IEEE Comput Graph 2011;31:49–61.
[22] Pertuz S, Puig D, Garcia MA, Fusiello A. Generation of all-in-focus images by noise-robust selective fusion of limited depth-of-field images. IEEE Trans Image Process 2013;22:1242–51.
[23] Chebbi B, Minko S, Al-Akwaa N, Golub I. Remote control of extended depth of field focusing. Opt Commun 2010;283:1678–83.
[24] Liu Z, Li W, Shen L, Han Z, Zhang Z. Automatic segmentation of focused objects from images with low depth of field. Pattern Recogn Lett 2010;31:572–81.
[25] Liu P, Liu Q-X, Zhao T-Y, Chen Y-P, Yu F-H. Biconjugate gradient stabilized method in image deconvolution of wavefront coding system. Opt Laser Technol 2013;47:329–35.
[26] Carles G, Ferran C, Carnicer A, Bosch S. Design and implementation of a scene-dependant dynamically selfadaptable wavefront coding imaging system. Comput Phys Commun 2012;41:147–54.
[27] Rostami M, Michailovich O, Zhou W. Image deblurring using derivative compressed sensing for optical imaging application. IEEE Trans Image Process 2012;21:3139–49.
[28] Cho TS, Zitnick CL, Joshi N, Kang SB, Szeliski R, Freeman WT. Image restoration by matching gradient distributions. IEEE Trans Pattern Anal 2012;34:683–94.
[29] Mohammad K, Agaian S, Hudson F. Implementation of digital electronic arithmetics and its application in image processing. Comput Electr Eng 2010;36:424–34.
[30] Figueiredo MAT, Bioucas-Dias JM. Algorithms for imaging inverse problems under sparsity regularization. In: Proc 3rd international workshop on cognitive information processing (CIP), Baiona, Spain; 2012. p. 1–6.
[31] Oswald-Tranta B, Sorger M, Leary PO. Motion deblurring of infrared images from a microbolometer camera. Infrared Phys Technol 2010;53:274–9.
[32] Nair MS, Raju G. Additive noise removal using a novel fuzzy-based filter. Comput Electr Eng 2011;37:644–55.
[33] Kwok NM, Jia X, Wang D, Chen SY, Fang G, Ha QP. Visual impact enhancement via image histogram smoothing and continuous intensity relocation. Comput Electr Eng 2011;37:681–94.
[34] Wang Z, Bovik AC. A universal image quality index. IEEE Signal Proc Lett 2002;9:81–4.
[35] Goodman JW. Introduction to Fourier optics. 2nd ed. New York: McGraw-Hill; 1996.
[36] Ojeda-Castaneda J, Yepez-Vidal E, Garcia-Almanza E. Complex amplitude filters for extended depth of field. Photon Lett Poland 2010;2:1–3.
[37] da Silva LA, Joaquim MB. Noise reduction in biomedical speech signal processing based on time and frequency Kalman filtering combined with spectral subtraction. Comput Electr Eng 2008;34:154–64.
[38] Bhadauria HS, Dewal ML. Medical image denoising using adaptive fusion of curvelet transform and total variation. Comput Electr Eng 2013;39:1451–60.

L.M. Ledesma-Carrillo received the B.E. and M.E. degrees in electrical engineering from the Universidad de Guanajuato, Mexico, in 2011 and 2013, respectively. Currently, he is pursuing the Ph.D. degree at the Universidad de Guanajuato, Mexico. His fields of interest include digital signal and image processing on FPGAs for applications in robotic vision and optics.

M. Lopez-Ramirez received the B.E. and M.E. degrees in electrical engineering from the Universidad de Guanajuato, Mexico, in 2011 and 2013, respectively. He is currently pursuing the Ph.D. degree at the same institution. His research interests include image and signal processing, power electronics, power quality, and digital systems applied to industry.

C.A. Rivera-Romero received the B.E. degree in computer programming from the University of Zacatecas in 2006. She received the M.E. degree in electronics from the Universidad de Guanajuato, Mexico, in 2012. She is currently a lecturer and researcher at the Unidad Academica de Ingenieria Electrica, Jalpa, Universidad Autonoma de Zacatecas.

A. Garcia-Perez received the B.E. and M.E. degrees in electronics from the University of Guanajuato, Mexico, and the Ph.D. degree in electrical engineering from the University of Texas, Dallas, USA. He is currently a Titular Professor with the Department of Electronic Engineering, University of Guanajuato. His fields of interest include digital signal processing.

G. Botella received the Ph.D. in 2007 from the University of Granada, Spain. He was a research fellow funded by the EU working at University College London, UK, and a visiting professor at Florida State University, USA. In 2006 he joined the Complutense University of Madrid, Spain. His research interests include image and video processing acceleration for FPGAs, GPUs, and many-core based systems.

E. Cabal-Yepez received the Ph.D. degree from the University of Sussex in the United Kingdom, and the B.E. and M.E. degrees from FIMEE, Universidad de Guanajuato, Mexico, where he is currently working as a full-time professor and doing research work focused on digital signal and image processing, FPGAs, and embedded systems for real-time processing. He can be contacted at [email protected].

