Fusion of Bracketing Pictures

Marcelo Bertalmío¹ and Stacey Levine²

¹ Departamento de Tecnologías de la Información y las Comunicaciones, Universitat Pompeu Fabra, Tànger 122-140, 08018 Barcelona, Spain. [email protected]

² Department of Mathematics and Computer Science, Duquesne University, Pittsburgh, PA, USA 15282. [email protected]

Abstract

When taking pictures in a night scene with artificial lighting, a very common situation, for most cameras the light coming from the scene is not enough and we are posed with the following problem: we can use the flash, which yields a bright, sharp image but with colors quite different from the ones present in the real scene; we can increase the exposure setting so as to let the camera absorb more light; or we can keep the normal (short) exposure setting and take a dark picture. If we discard the flash as an option and use instead the exposure bracketing feature available in most photo cameras, we obtain a series of pictures taken in rapid succession with different exposure times, with the implicit idea that the user picks from this set the best-looking image. But it is quite common that none of these images is good enough: in general, good color information is retained from longer exposure settings, while sharp details are obtained from the shorter ones. In this work we propose a technique for automatically recombining a bracketed pair of images into a single picture that reflects the optimal properties of each one. The proposed technique uses stereo matching to extract the optimal color information from the overexposed image, and Non-local means denoising to suppress noise while retaining sharp details in the luminance of the underexposed image.

Keywords: Exposure bracketing, Image fusion, Denoising, Stereo matching.

1 Introduction

Oftentimes there are challenging lighting conditions that prevent an ideal photograph, with both clear detail and accurate color, from being obtained. In particular, if the lighting is too poor, objects might still be distinguishable when using a standard shutter speed, but the true colors are almost completely lost. The only mechanism for obtaining proper color information would be to use a very slow shutter speed, but this would inevitably cause blurring of the main subjects. This situation is far from uncommon. For instance, for most photo cameras an indoor night scene with (domestic) artificial lighting would present the problem just described, which is why the camera usually suggests using the flash, which yields a bright, sharp image but with colors quite different from the ones present in the real scene.

Many photo cameras allow the user to perform exposure bracketing: taking in rapid succession a series of pictures of the same field of view but with varying shutter speeds, with the idea that the user picks from this set the image with the best compromise between color information and sharp details. But it is quite common that none of these images is good enough: in general, good color information is retained from longer exposure settings, while sharp details are obtained from the shorter ones (see figure 1).

In this work we propose a technique for automatically recombining a bracketed pair of images into a single picture that reflects the optimal properties of each one. The proposed technique uses stereo matching to extract the optimal color information from the overexposed image, and Non-local means denoising [3] to suppress noise while retaining sharp details in the luminance of the underexposed image. While the method proposed in this paper is intended for still images, in the final section we suggest how to extend it to deal with motion pictures, which suffer from the same limitations, as pointed out in [12, 9]: night shooting on location requires that everything be lit artificially, which is always very time consuming and may also be a big problem if the location covers a wide area or is difficult to access.

There are related works on image fusion that deal with, for example, fusing a pair of pictures taken with and without a flash, or deblurring a long-exposure image using a blurry and non-blurry image pair (see e.g. [14, 6, 7, 15, 16] and references therein). However, these existing methods cannot be applied in the most common scenarios. The current literature on fusing flash/no-flash image sets assumes there is no motion, which in general is not the case whenever the picture features human beings, unless they keep absolutely still (see figure 1). Current work on picture deblurring assumes the blur comes only from camera motion and that the scene is static, which again prevents this method from being applied to most pictures featuring people. In this work we aim to handle both static and non-static scenes, with camera and/or subject motion. In [11]

2009 Conference for Visual Media Production

978-0-7695-3893-8/09 $26.00 © 2009 IEEE

DOI 10.1109/CVMP.2009.13


Figure 1: (a) Long-exposure image. (b) Short-exposure image.

the authors propose a fusion method that combines histogram matching with spatial color matching, with very good results; but they require the underexposed image to be noise-free, and the spatial color matching of small regions is problematic.

The paper is laid out as follows. In section 2, we describe the proposed automated algorithm. Numerical results demonstrating the effectiveness of the algorithm are presented in section 3. In section 4, we draw some conclusions and discuss future work.

2 Proposed Algorithm

For the proposed algorithm we work with a bracketed series of two images, one underexposed and one overexposed. A fast shutter speed typically yields the underexposed image, which we will call $I_u$. This image has sharp details, but very little color information. A slower shutter speed typically yields an overexposed image, $I_o$, which retains good color information, but whose details are often blurred. Our goal is to automatically adjust the color in $I_u$ to match that of $I_o$ while retaining sharp details.

The main steps in the algorithm are:

1. Perform stereo matching of both the rows and columns of $I_u$ to those of $I_o$ to obtain a new image $I_u^{HV}$.

2. Denoise and equalize the grayscale luminance $L_u$ of $I_u$ to obtain a new luminance $\tilde{L}_u$.

3. Replace the luminance of $I_u^{HV}$ with $\tilde{L}_u$ to obtain the final result $O$.

Each of these steps requires both pre- and post-processing, as well as some deviations from the standard approaches, all of which are outlined below.

2.1 Transfer the color from $I_o$ via stereo matching

The purpose of this first step is to extract the color information from $I_o$ and transfer it to $I_u$. The stereo matching performs much better if the histograms of $I_u$ and of its luminance $L_u$ are first modified to match those of $I_o$ and its luminance $L_o$, respectively. Both histogram modifications are performed globally, and in the case of $I_u$ the modification is applied in each of the RGB color channels separately (we have used the histogram matching algorithm described in [10], section 3.2). This yields the new equalized images $I_u^h$ and $L_u^h$. Then $I_u^h$ is modified so that its luminance is exactly $L_u^h$. That is, for each pixel $(i, j)$ we set

$$I_u(i, j) = I_u^h(i, j) \, \frac{L_u^h(i, j)}{\mathrm{luminance}(I_u^h(i, j))}. \qquad (1)$$
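Equation (1) amounts to a per-pixel rescaling of the color channels so that the image takes on a prescribed luminance. A minimal sketch in Python/NumPy, assuming luminance is computed as the plain RGB mean (the paper does not fix a particular luminance formula here):

```python
import numpy as np

def luminance(img):
    # Assumed luminance: plain mean of the RGB channels.
    return img.mean(axis=-1)

def impose_luminance(img, target_lum, eps=1e-6):
    """Rescale each pixel of `img` so its luminance becomes `target_lum`,
    mirroring equation (1): I^h * L^h / luminance(I^h)."""
    ratio = target_lum / (luminance(img) + eps)
    return img * ratio[..., None]

# Example: a single pixel whose luminance 0.4 is pushed to 0.8.
pixel = np.array([[[0.2, 0.4, 0.6]]])
out = impose_luminance(pixel, np.array([[0.8]]))
```

Hue is preserved because all three channels are scaled by the same factor; `eps` merely guards against division by zero in dark pixels.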

Now stereo matching can be used to transfer the bright color contained in the overexposed $I_o$ onto the preprocessed $I_u$. The matching is performed line by line via dynamic programming, using the following approach, adapted from the one proposed by Cox et al. [4] in the context of dense stereo matching and already used for several other image restoration and enhancement tasks such as deinterlacing [1], denoising [2] and demosaicking [8].

A dense matching of the $k$th row of $I_u$ to the set of rows $\{k-d, k-d+1, \ldots, k+d-1, k+d\}$ in $I_o$ is computed via dynamic programming. With this procedure we find, for each pixel $(k, p)$ in the $k$th row of $I_u$, a match in each of the rows $\{k-d, \ldots, k+d\}$ in $I_o$. Then we create the value $I_u^H(k, p)$ in the image $I_u^H$ by taking the median of the matches of $I_u(k, p)$:

$$I_u^H(k, p) = \mathrm{median}\{\text{matches of } I_u(k, p) \text{ in rows } k-d, \ldots, k+d \text{ of } I_o\}.$$

To avoid vertical or horizontal artifacts (see figure 2), the matching is performed horizontally (row by row), producing $I_u^H$, and also vertically (column by column), yielding $I_u^V$.
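The dynamic programming at the heart of this step can be illustrated with a toy scanline aligner in the spirit of Cox et al. [4]: each pixel of one line is either matched to a pixel of the other (paying the absolute intensity difference) or declared occluded (paying a fixed cost). This is only a sketch: the actual method compares pixel neighborhoods rather than single intensities, and the `occlusion_cost` parameter is an illustrative stand-in for the role played by $\sigma^2$ (see section 2.4).

```python
import numpy as np

def match_scanlines(a, b, occlusion_cost=1.0):
    """Toy dynamic-programming alignment of two 1-D scanlines.
    Returns the list of (i, j) index pairs that were matched."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = np.full((n + 1, m + 1), INF)
    cost[0, :] = occlusion_cost * np.arange(m + 1)  # leading occlusions in b
    cost[:, 0] = occlusion_cost * np.arange(n + 1)  # leading occlusions in a
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost[i, j] = min(
                cost[i - 1, j - 1] + abs(a[i - 1] - b[j - 1]),  # match
                cost[i - 1, j] + occlusion_cost,                # a[i-1] occluded
                cost[i, j - 1] + occlusion_cost,                # b[j-1] occluded
            )
    # Backtrack through the cost table to recover the matches.
    matches, i, j = [], n, m
    while i > 0 and j > 0:
        if cost[i, j] == cost[i - 1, j - 1] + abs(a[i - 1] - b[j - 1]):
            matches.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif cost[i, j] == cost[i - 1, j] + occlusion_cost:
            i -= 1
        else:
            j -= 1
    return matches[::-1]
```

On identical lines every pixel is matched to its counterpart; on lines that differ only by padding, the bright pixels pair up and the rest are left occluded.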

Both the horizontal and vertical matchings in the dynamic programming have an associated cost, which is the sum of absolute differences of the neighborhoods of a pixel and its match; so the image $I_u^H$ has an associated cost image $C^H$, and $I_u^V$ has the cost image $C^V$. These costs are used to combine the images $I_u^H$ and $I_u^V$ to create $I_u^{HV}$. In particular, for each pixel


Figure 2: Top: $I_u^H$, result of matching the rows of fig. 1b to the rows of fig. 1a. Middle: $I_u^V$, result of matching the columns of fig. 1b to the columns of fig. 1a. Bottom: $I_u^{HV}$, combination of the previous two images according to equation (2).

$(i, j)$, a combined image $I_u^{HV}$ is computed using the formula

$$I_u^{HV}(i, j) = w^H(i, j)\, I_u^H(i, j) + w^V(i, j)\, I_u^V(i, j), \qquad (2)$$

where

$$w^H(i, j) = \frac{e^H(i, j)}{e^H(i, j) + e^V(i, j)}, \qquad w^V(i, j) = \frac{e^V(i, j)}{e^H(i, j) + e^V(i, j)}.$$

If $C^H(i, j) \le C^V(i, j)$,

$$e^H(i, j) = 1.0, \qquad e^V(i, j) = \exp\!\left(-\frac{|C^H(i, j) - C^V(i, j)|}{\rho}\right),$$

otherwise

$$e^V(i, j) = 1.0, \qquad e^H(i, j) = \exp\!\left(-\frac{|C^H(i, j) - C^V(i, j)|}{\rho}\right).$$

The value $\rho$ is just a positive constant.

In other words, $I_u^{HV}$ is an average in which more weight is given to the match (either horizontal or vertical) with the smaller error. This is the same procedure used in [2] to combine spatial and temporal deinterlacing images.
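The weighting of equation (2) can be sketched directly; this assumes grayscale match and cost images of equal shape (hypothetical inputs, not the paper's code):

```python
import numpy as np

def combine_matches(IH, IV, CH, CV, rho=1.0):
    """Blend horizontally- and vertically-matched images, giving more
    weight to the direction with the lower matching cost (equation (2))."""
    diff = np.abs(CH - CV)
    # The cheaper direction gets weight-factor 1; the other decays
    # exponentially with the cost gap, controlled by rho.
    eH = np.where(CH <= CV, 1.0, np.exp(-diff / rho))
    eV = np.where(CH <= CV, np.exp(-diff / rho), 1.0)
    wH = eH / (eH + eV)
    wV = eV / (eH + eV)
    return wH * IH + wV * IV
```

With equal costs the result is the plain average of the two matches; as one cost grows relative to the other, the blend converges to the cheaper match.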

Since the purpose of this first step was to extract the color information from $I_o$ and transfer it to $I_u$, one might wonder why it is necessary to use the rather involved technique just presented instead of a straightforward approach like histogram matching. True, the simplest approach would be to match, channel by channel, the histogram of $I_u$ to that of $I_o$, but this does not give optimal results. An example can be found in figure 3 (right), where we see that the problems with histogram matching are twofold: the noise becomes magnified, and shifted colors are transferred (notice the wrong hues of yellow and blue in the boy's shirt).

2.2 Re-adjust the luminance

The second step consists of computing an optimal grayscale luminance component, $\tilde{L}_u$, that will then be transferred to the image $I_u^{HV}$ generated above. This new luminance should retain the sharp details in $L_u$ while keeping the brightness distribution of $L_o$.

A direct contrast enhancement applied to $L_u$ would magnify the noise (see figure 4). Therefore we first remove the 'extrema' (edges, textures, and noise), then enhance the contrast, and finally add back the extrema. This is based on a similar procedure proposed in [13] (section 8.4) and is performed using the following approach.

Non-local means [3] is used to denoise $L_u$, obtaining $L_u^{NL}$. Non-local means is a denoising methodology that exploits the natural redundancy in images by averaging similar 'patches' within the image. This non-local technique has been found to preserve edges and details better than typical local smoothing approaches.

The smoothed image $L_u^{NL}$ is now used in two ways. First, a 'grain image' $G^{NL} = L_u - L_u^{NL}$ is obtained, which consists of the information lost by the NL-means smoothing; this is also called the 'method noise' in [3]. Second, the histogram of $L_u^{NL}$ is modified so that it matches the histogram of $L_o$, obtaining $L_u^{NL,h}$.

The information in the grain image $G^{NL}$ is added to the smoothed, equalized luminance $L_u^{NL,h}$ to obtain the luminance that will be used in the final image:

$$\tilde{L}_u = L_u^{NL,h} + G^{NL}. \qquad (3)$$
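The luminance re-adjustment of this section can be sketched as follows. Two simplifications are assumed: a sorted-value histogram matcher stands in for the method of [10], the `denoise` argument stands in for NL-means [3], and the two luminance images are assumed to have the same number of pixels.

```python
import numpy as np

def match_histogram(src, ref):
    """Monotone histogram matching of `src` to `ref` via sorted-value
    lookup; a simple stand-in for the method of [10]. Assumes
    src.size == ref.size."""
    order = np.argsort(src, axis=None)
    out = np.empty_like(src).ravel()
    out[order] = np.sort(ref, axis=None)
    return out.reshape(src.shape)

def readjust_luminance(Lu, Lo, denoise):
    """Section 2.2: denoise Lu, histogram-match the smooth part to Lo,
    then add back the grain (equation (3))."""
    L_nl = denoise(Lu)     # e.g. NL-means; any denoiser fits this sketch
    grain = Lu - L_nl      # 'method noise' of the denoiser
    L_nl_h = match_histogram(L_nl, Lo)
    return L_nl_h + grain
```

Because the histogram matching is applied only to the smooth part, the grain (fine detail and residual noise) is carried through unamplified, which is the point of equation (3).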


Figure 3: Left: overexposed image (detail). Middle: our final result. Right: the result we would obtain with global histogram matching (see text).

Figure 4: Left: $L_o$, luminance of the overexposed image (detail). Middle: $L_u$. Right: if we use histogram matching to modify $L_u$ directly, without denoising it first, the noise is visibly enhanced (see text).

Note that the noise in $\tilde{L}_u$ is not significantly magnified, while the details are still preserved (see figure 5).

2.3 Transfer the luminance to the final image

The final output of the algorithm, $O$, is generated by replacing the luminance of $I_u^{HV}$ from equation (2) with $\tilde{L}_u$ from equation (3), but with a slight twist. For each pixel $(i, j)$, we set

$$O(i, j) = I_u^{HV}(i, j)\, \frac{\tilde{L}_u(i, j) - g * \tilde{L}_u(i, j) + g * L^{HV}(i, j)}{L^{HV}(i, j)}, \qquad (4)$$

where $L^{HV}$ is the luminance of $I_u^{HV}$ and $g$ is a fixed Gaussian kernel of effective radius $\sigma_g$.

Note that equation (4) consists of the subtraction and addition of local averages of the luminance. If both averages are equal, then this is no different from the earlier computation in equation (1). However, if the averages are indeed different, this ensures that the output $O$ will have a luminance whose local average equals that of the luminance of the combined matched image $I_u^{HV}$. This will happen even if the luminance $\tilde{L}_u$ that we would like to impose differs from the luminance of $I_u^{HV}$.

This slight modification is in fact crucial for obtaining plausible results. Otherwise, the luminance of $O$ would simply be the imposed $\tilde{L}_u$, and if this is very different from $L^{HV}$ (which often happens, since $\tilde{L}_u$ was obtained globally with histogram equalization) then the colors in $O$ would look very different from the colors in $I_o$, which is precisely the opposite of what we want.
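A sketch of equation (4), with two simplifying assumptions flagged here and in the comments: luminance is taken as the RGB mean, and the Gaussian kernel $g$ is replaced by a box filter of the same effective radius.

```python
import numpy as np

def local_mean(img, radius):
    """Box-filter local average; a stand-in for convolution with the
    Gaussian kernel g of equation (4)."""
    n = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(n):
        for dx in range(n):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (n * n)

def transfer_luminance(I_hv, L_tilde, radius=2, eps=1e-6):
    """Equation (4): impose L_tilde on I_hv while forcing the local mean
    of the output luminance to agree with that of I_hv's luminance."""
    L_hv = I_hv.mean(axis=-1)  # luminance assumed to be the RGB mean
    num = L_tilde - local_mean(L_tilde, radius) + local_mean(L_hv, radius)
    return I_hv * (num / (L_hv + eps))[..., None]
```

When `L_tilde` already equals the luminance of `I_hv`, the ratio is 1 everywhere and the image passes through unchanged, which matches the "both averages are equal" case discussed above.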

2.4 Parameters

There are three parameters which affect the quality of the final output, and which are related to the amount of noise and motion present in the image.

The stereo matching procedure described in section 2.1 has two key parameters. The first is the number of matches that are averaged, $D = 2d + 1$. If, for instance, there is significant vertical motion (either global or just local) between $I_u$ and $I_o$,



Figure 5: (a) $L_u$, luminance image of fig. 1b. (b) $L_u^{NL}$, Non-local means [3] denoising of $L_u$. (c) Grain image $G^{NL} = L_u - L_u^{NL}$ (scaled for visualization purposes). (d) $L_o$, luminance image of fig. 1a. (e) $L_u^{NL,h}$, obtained by matching the histogram of $L_u^{NL}$ to that of $L_o$. (f) $\tilde{L}_u = L_u^{NL,h} + G^{NL}$, the luminance that will be used in the final output of our algorithm.

the value of $D$ should be large when performing the matching of horizontal lines. Otherwise the matching will be poor, since for many pixels the dynamic programming procedure will not find acceptable matches. Of course, the same can be said for horizontal motion and the matching of vertical lines. In our set-up we use the same value of $D$ for horizontal and vertical matching, so larger values of $D$ are needed whenever there is any significant motion. But increasing $D$ also implies a significant increase in the computational cost, so a compromise must be reached.

The second parameter is $\sigma^2$, which in the original context of stereo matching by Cox et al. [4] was the estimated variance of the additive Gaussian noise present in the stereo pair. In our setting, however, $\sigma^2$ is related both to noise and to motion, because it sets the threshold for the maximum acceptable difference that the neighborhoods of two pixels can have and still be matched (in [4], differences above this threshold implied an occlusion, i.e. the pixel is visible in one image of the pair but not in the other). Therefore both strong noise and significant motion blur require a larger value of $\sigma^2$. But increasing $\sigma^2$ implies a greater tolerance regarding the similarity of the matched pixels, and therefore the overall matching results might be poor.

Finally, the third parameter of our algorithm is the effective


Figure 6: Top: overexposed image. Bottom: underexposed image. Middle: our final result, obtained by transferring the luminance image $\tilde{L}_u$ in fig. 5f to the combination image $I_u^{HV}$ in fig. 2c.

radius $h$ of the neighborhood used for averaging in the Non-local means denoising method [3]: noisier images require a larger $h$, but too large a value of $h$ may cause over-smoothing, and this problem is not solved by adding back the grain image $G^{NL}$.

3 Numerical Results

We have run our algorithm on several pairs of exposure bracketing images taken with a consumer camera. Using non-optimized code on a 3 GHz, 1 GB PC, the computational cost for each 770 × 430 image is 170 seconds for the Non-local means denoising, between 30 seconds ($D = 3$) and 180 seconds ($D = 23$) for the stereo matching, and negligible for the other steps (histogram matching, luminance transfer, etc.).

The results can be found in figures 6-10. Note that these image sets have some challenging characteristics that our algorithm is able to handle. For example, the images in figure 7 feature people whose mouths are open in one image and closed in the other. Moreover, although the subjects in figure 7 move very little, this small motion causes noticeable motion blur due to the exposure time required to properly capture the colors. The physical motion in figure 10 is a bit more problematic, but the proposed algorithm still produces reasonable results. Again, these are very common scenarios, and methods that require static scenes and still subjects are not practical.

The values for the parameters $h$ and $D$ were fairly consistent, but the images benefited from a more adaptive $\sigma^2$. A Non-local means parameter of $h = 1$ worked well for the images in figures 6 and 7 and the left image in figure 9. The image on the right in figure 9 worked better with $h = 2$, while the noisier images in figures 8 and 10 did better with the higher value $h = 4$. As for the stereo matching parameters, $D = 23$ was used for all of the images. Values for the parameter $\sigma^2$ ranged from 0.3 to 2, and thus were a bit more image dependent. The result in figure 6 was generated using $\sigma^2 = 0.5$; figure 7 used values of (from left to right) $\sigma^2 = 2, 1$; figure 8 used $\sigma^2 = 1, 1.5$; figure 9 used $\sigma^2 = 0.5, 0.75$; and figure 10 used $\sigma^2 = 0.3, 1$.

However, too much motion and/or occlusion may cause problems which cannot be solved simply by increasing $\sigma^2$ or $D$. Figure 10 demonstrates two of these problems. The front of the guitar in the overexposed image of the first set is completely occluded. The overexposed image in the second set is quite blurred, and the main subject changes position much more than in the previous image sets. This causes the colors on the guitar to be inaccurate, as well as severe discoloration of the guitar player's shirt.

4 Conclusion

In this work we proposed a novel methodology for automatically combining a bracketed set of images (one underexposed, one overexposed) in a way that yields an image retaining the optimal qualities of each one. In particular, the bright color of the overexposed image is superimposed onto the sharp details contained in the underexposed one. The algorithm can handle moving subjects as well as minor occlusions, which was not possible in previous works.

There are still improvements that can be made to the algorithm. A mechanism for automatically estimating the parameters would be highly desirable, especially given the variability of some of them. The Non-local means parameter $h$ can be directly tied to the noise level, but the parameter $\sigma^2$ is a bit more difficult to estimate. Furthermore, although combining the horizontal and vertical stereo matching reduces line artifacts, the approach is still a one-dimensional method trying to match two-dimensional objects. A two-dimensional matching algorithm would be very beneficial in this setting, although far from trivial. We also note that large occlusions


Figure 7: Top: overexposed image. Bottom: underexposed image. Middle: our fusion result.

pose a problem as demonstrated in figure 10.

This work has an interesting potential extension to video processing. In particular, if the first frame $F_0$ of a video is obtained with a long exposure time and the remaining frames $F_i$, $i = 1, \ldots, T$, with short exposures, we can apply the proposed method in the following way. First apply the algorithm described in section 2 to the pair $(F_0, F_1)$, obtaining $\tilde{F}_1$; then apply it successively to each pair $(\tilde{F}_{n-1}, F_n)$, obtaining $\tilde{F}_n$. In theory, the color from $F_0$ should be transferred to all remaining frames, while they still retain a good level of detail. But occlusion can become a problem as soon as the frames are too far apart; also, as the geometric configuration of the scene changes over time, so will the global illumination configuration. This will be the subject of further work.
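The sequential scheme just described is a simple fold over the frame list; the `fuse` argument stands in for the full pairwise algorithm of section 2 (this skeleton is ours, not the paper's code):

```python
def colorize_video(frame_long, frames_short, fuse):
    """Propagate the color of the long-exposure frame F0 through the
    short-exposure frames F1..FT: each output frame is the fusion of the
    previous (already colorized) result with the next short frame."""
    out, ref = [], frame_long
    for f in frames_short:
        ref = fuse(ref, f)  # corresponds to applying section 2 to (F~_{n-1}, F_n)
        out.append(ref)
    return out
```

With a toy `fuse` that simply averages its two inputs, the color reference is blended forward frame by frame, illustrating both the propagation and why drift/occlusion accumulates as frames get farther from $F_0$.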

Finally, we are currently investigating the application of the method proposed in this paper to the formation of High Dynamic Range (HDR) images from a sequence of different-exposure photographs [5], a setting where camera and subject motion are also very relevant and hard to deal with.

Acknowledgements

Thanks to Lisandro Cilento and Luis Sanchez for their help. The first author acknowledges partial support by PNPGC project, reference MTM2006-14836. The second author is funded in part by NSF-DMS #0505729.

References

[1] C. Ballester, M. Bertalmío, V. Caselles, L. Garrido, A. Marques, and F. Ranchin. An inpainting-based deinterlacing method. IEEE Transactions on Image Processing, 16(10):2476-2491, 2007.


Figure 8: Top: overexposed image. Bottom: underexposed image. Middle: our fusion result.

[2] M. Bertalmío, V. Caselles, and A. Pardo. Movie denoising by average of warped lines. IEEE Transactions on Image Processing, 16(9):2333-2347, 2007.

[3] A. Buades, B. Coll, and J. M. Morel. A review of image denoising algorithms, with a new one. Multiscale Modeling and Simulation, 4(2):490-530, 2005.

[4] I. J. Cox, S. L. Hingorani, S. B. Rao, and B. M. Maggs. A maximum likelihood stereo algorithm. Computer Vision and Image Understanding, 63(3):542-567, 1996.

[5] P. Debevec and J. Malik. Recovering high dynamic range radiance maps from photographs. In Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, pages 369-378, 1997.

[6] E. Eisemann and F. Durand. Flash photography enhancement via intrinsic relighting. ACM Transactions on Graphics, 23(3):673-678, 2004.

[7] R. Fattal, M. Agrawala, and S. Rusinkiewicz. Multiscale shape and detail enhancement from multi-light image collections. ACM Transactions on Graphics, 26(3):51, 2007.

[8] S. Ferradans, M. Bertalmío, and V. Caselles. Geometry-based demosaicking. IEEE Transactions on Image Processing, 18(3):665-670, 2009.

[9] G. Haro, M. Bertalmío, and V. Caselles. Visual acuity in day for night. International Journal of Computer Vision, 69(1):109-117, 2006.

[10] D. J. Heeger and J. R. Bergen. Pyramid-based texture analysis/synthesis. In Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, pages 229-238. ACM, 1995.

[11] J. Jia, J. Sun, C.-K. Tang, and H. Shum. Bayesian correction of image intensity with spatial consideration.


Figure 9: Top: overexposed image. Bottom: underexposed image. Middle: our fusion result.

In Proceedings of ECCV 2004, number 3023 in LNCS,2004.

[12] S. Lumet. Making movies. Alfred A. Knopf, 1995.

[13] R. Palma-Amestoy, E. Provenzi, M. Bertalmío, and V. Caselles. A perceptually inspired variational framework for color enhancement. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(3):458-474, 2009.

[14] G. Petschnigg, M. Agrawala, H. Hoppe, R. Szeliski, M. Cohen, and K. Toyama. Digital photography with flash and no-flash image pairs. ACM Transactions on Graphics, 23(3):664-672, 2004.

[15] L. Yuan, J. Sun, L. Quan, and H.-Y. Shum. Image deblurring with blurred/noisy image pairs. ACM Transactions on Graphics, 26(3):1-10, 2007.

[16] L. Yuan, J. Sun, L. Quan, and H.-Y. Shum. Progressive inter-scale and intra-scale non-blind image deconvolution. In SIGGRAPH '08: ACM SIGGRAPH 2008 Papers, pages 1-10, New York, NY, USA, 2008. ACM.


Figure 10: When there is significant occlusion (left) or motion (right) from the overexposed to the underexposed image, the color matching presents problems (e.g. see the singer's guitar). Top: overexposed image. Bottom: underexposed image. Middle: our fusion result.


