
Color-to-grayscale conversion through weighted multiresolution channel fusion

Tirui Wu^a and Alexander Toet^(b,c,*)
^a Ford Motor Research and Engineering (Nanjing) Co. Ltd., No. 118 General Road, Jiangning Development Zone, Nanjing 211100, China
^b Utrecht University, Helmholtz Institute and Department of Experimental Psychology, Utrecht, The Netherlands
^c TNO, Kampweg 5, 3769 DE Soesterberg, The Netherlands

Abstract. We present a color-to-gray conversion algorithm that retains both the overall appearance and the discriminability of details of the input color image. The algorithm employs a weighted pyramid image fusion scheme to blend the R, G, and B color channels of the input image into a single grayscale image. The use of simple visual quality metrics as weights in the fusion scheme serves to retain visual contrast from each of the input color channels. We demonstrate the effectiveness of the method by qualitative and quantitative comparison with several state-of-the-art methods. © 2014 SPIE and IS&T [DOI: 10.1117/1.JEI.23.4.043004]

Keywords: color-to-grayscale; image fusion.

Paper 14199 received Apr. 9, 2014; revised manuscript received Jun. 3, 2014; accepted for publication Jun. 11, 2014; published online Jul. 7, 2014.

1 Introduction

Despite the widespread availability of color sensors and color display technologies, color-to-grayscale conversion is still widely used in many applications, including black-and-white printing (e-ink-based book readers), producing reading materials for color blind people, single-channel image processing, and nonphotorealistic rendering with black-and-white media. A good color-to-grayscale conversion should preserve both the overall appearance and the discriminability of the features of the original color image. However, since color-to-grayscale conversion reduces three-dimensional color data to one-dimensional data, loss of information is inevitable. To retain feature discrimination in color-to-gray conversion, the resulting grayscale values should reflect chromatic differences in the input image. Hence, the problem is to find a lower dimension embedding of the original data that optimally preserves the perceptual contrast between data points (pixels) in the original data. This implies that the following constraints should be taken into account:1

• Global consistency: pixels with the same color in the input color image should be mapped to the same gray value in the grayscale image.

• Local luminance consistency: luminance gradients should be preserved.

• Preservation of chromatic contrast: different colors should be mapped to different grayscale values.

• Preservation of hue order: the natural ordering of hues should be reflected in their converted grayscale values.

Many different color-to-gray conversion algorithms have been proposed that aim to retain the original discriminability of color images (for some excellent overviews see Refs. 2 and 3). They are typically divided into two main categories: local and global mappings. In a local mapping method, the color-to-gray mapping of pixel values is spatially varying, depending on the local distributions of colors. Although a local mapping can accurately preserve local features, it may produce inhomogeneous representations of constant color regions when the mapping changes in those regions. In a global mapping method, a spatially uniform color-to-gray mapping is used: the same colors are consistently mapped to the same grayscale values over an image, guaranteeing homogeneous representations of constant color regions. However, it is a challenge to determine a global mapping that preserves local features. Previous studies have shown that global mappings are typically efficient and fast and ensure that identical color values are mapped to identical gray values, while local approaches can yield better results but are typically complex and computationally expensive.3,4

In this paper, we present a color-to-gray conversion algorithm that retains both the overall appearance and the discriminability of the input image. The algorithm uses a weighted pyramid image fusion scheme to blend the R, G, and B color channels of the input image into a single grayscale image. First, a scalar-valued weight map is computed for each of the R, G, and B color channels. This weight map is composed of several simple visual quality metrics and serves to preserve only the most salient (informative) visual details from each of the input color channels in the final grayscale image. Next, a multiresolution weight map is constructed for each channel by applying a Gaussian pyramid transform to its corresponding weight map. The three multiresolution weight maps are then normalized such that they sum to one for each pixel. Then, the R, G, and B color channels of the input image are decomposed into Laplacian pyramids, which basically contain band-pass filtered versions at different spatial scales.5 Finally, after multiplication of these Laplacian pyramids by their corresponding Gaussian pyramid weight maps and summation, the final grayscale converted image is obtained by reconstructing the resulting pyramid. The method is straightforward and automatic (requires no user interaction), has limited complexity, and performs at least as well as the best performing state-of-the-art color-to-grayscale conversion algorithms. The main contribution of this paper is that it formulates the color-to-grayscale conversion problem as a visual saliency-weighted multiscale channel fusion scheme, to achieve a grayscale representation that optimally represents the information contained in each of the channels of an RGB color image.

*Address all correspondence to: Alexander Toet, E-mail: [email protected]

The rest of this paper is organized as follows. Section 2 briefly reviews related work. Section 3 describes the proposed method. Section 4 demonstrates the effectiveness of the proposed method, and Sec. 5 provides the conclusions.

2 Related Work

A simple and widely used method to convert an input color image into a grayscale image is to first separate its luminance and chrominance channels (e.g., by linearly combining its R, G, and B channels6), and then take the luminance channel as its grayscale representation while discarding the chrominance information.7 A typical approach is, for instance, to take the Y channel in the CIE XYZ color space.8 In this approach, clearly distinguishable regions with isoluminant colors in the original image will be mapped to the same grayscale and become indistinguishable in the result image.
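To make the baseline concrete: a minimal sketch of such a luminance-only mapping, using the Rec. 709 luminance coefficients as one common choice of linear combination (the function name is ours):

```python
import numpy as np

def luminance_gray(rgb: np.ndarray) -> np.ndarray:
    """Luminance-only color-to-gray baseline.

    rgb: (H, W, 3) float array in [0, 1]. Any fixed linear combination
    of R, G, and B behaves the same way: isoluminant colors collapse
    to one gray value, so their mutual contrast is lost.
    """
    return rgb @ np.array([0.2126, 0.7152, 0.0722])
```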

Local color-to-grayscale conversion methods typically use local chrominance edges for enhancement. Bala and Eschbach9 add high frequency chromaticity components to the luminance channel to preserve the local distinction between adjacent regions with different colors in the color-to-grayscale conversion. Neumann et al.4 determine a single gradient field that is most consistent with both the color and the luminance gradient fields of the input color image and integrate the result to reconstruct a grayscale image. Smith et al.10 presented a two-step grayscale transformation that combines a global mapping based on perceived lightness with local chromatic contrast enhancement. The method first applies a global mapping based on the Helmholtz–Kohlrausch effect,11 and then locally enhances chrominance edges using adaptively weighted multiscale unsharp masking. Although the global mapping is image independent, the local enhancement reintroduces lost chromatic discontinuities. However, this method may distort the appearance of constant color regions,1,12 and the second step requires user control. Song et al.2 defined three visual cues by considering the color spatial distribution, the gradient information, and the perceptual priority of the color channels (hue, chroma, and lightness) and formulated color-to-gray conversion as a visual cue preservation procedure.

Global color-to-grayscale conversion methods that aim to minimize the overall perceptual difference between the input color images and the output grayscale images typically involve complex optimization steps. Gooch et al.13 try to preserve color contrast by minimizing an objective function that is based on the local contrast between all pixel pairs. The method is computationally complex and requires user interaction. Kuk et al.14 extended the method of Gooch et al.13 by considering both local and global contrasts. Rasche et al.15,16 aim to preserve contrast while maintaining luminance consistency by minimizing an error function based on matching perceived grayscale differences to corresponding perceptual color differences. Color quantization is required to reduce the extreme computational costs of the optimization procedure, which results in artifacts in (natural) images with continuous tones. Grundland and Dodgson17 presented a global mapping that adds a fixed amount of chrominance to the luminance to enhance grayscale contrast, whereas the original luminance and the color ordering are better preserved by restraining the added amount of chrominance. However, luminance and chrominance contrasts may cancel out on addition and the method does not discriminate between colors perpendicular to the projection axis.1 Kim et al.1 presented a nonlinear global mapping that preserves feature discriminability, color ordering, and luminance fidelity by minimizing the difference between color and resultant grayscale image gradients. The influence of chromatic contrast on feature discriminability is controlled by the user. Lu et al.18 used a bimodal energy function as a more flexible (weaker) color ordering constraint, which enables a fast implementation.19 Zhu et al.12 globally modulate the luminance channel of a color input image with the normalized contrast of the most salient (L, H, or S) channel and adopt the result as the grayscale conversion. Lee et al.20 preserve color contrast by adding chromatic contrast to the luminance component. Hsin et al.21 combine a global mapping that preserves the natural hue order of the input image with a local mapping that restores local contrast. Lau et al.22 define an energy function on a clustered color image. This method is able to perform transformation between different color spaces. Wu et al.23 presented an interactive two-scale (global-local) algorithm that first segments the input image, then determines a grayscale for each segment in a global optimization procedure, and finally performs local contrast enhancement to restore local details.

Fig. 1 Flowchart of the color-to-grayscale fusion method.

3 Method

Mertens et al.24,25 presented a method to fuse a multiple exposure sequence into a single high quality low dynamic range image. To retain as much detail and color as possible, the fusion process selects the visually most salient pixels using simple quality measures (e.g., saturation and contrast) that are computed for each pixel in the multiexposure sequence. The method is related to image fusion techniques used for, e.g., depth-of-field enhancement,26 multimodal image fusion,27–29 and video enhancement.30 It is made more flexible than earlier approaches (e.g., Ref. 31) by incorporating adjustable visual saliency metrics. Mertens et al.'s24,25 pyramid blending strategy is similar to that of Grundland et al.32 but it employs different saliency metrics. In this paper, we present a color-to-grayscale conversion method inspired by Mertens et al.24,25 that converts an input color image into a grayscale image by hierarchical saliency-weighted fusing of its three color channels.

Following Mertens et al.,24,25 we first compute a scalar-valued weight map for each of the R, G, and B channels of the input color image. This weight map is composed of three simple visual quality metrics and serves to preserve only the most salient (informative) visual details from each of the input color channels in the final grayscale image.

Fig. 2 Comparison of the results of our method with those of (CIEY), (Smith08),10 (Neumann07),4 (Grundland07),33 (Rasche05),15 (Gooch05),13 and (Bala04)9 for each of the 24 test images. Input and result images of other methods are courtesy of Cadik.3

The first quality measure C represents contrast and is computed by taking the square value of a Laplacian-filtered version of the input channel. It tends to assign a large weight to visually important elements such as edges and texture. The second measure S is defined for a given channel as one minus the absolute difference between that channel and the mean of all three channels (e.g., for the R channel, the S measure is $1.0 - |R - (R + G + B)/3|$, assuming pixel values ranging from 0 to 1). This measure assigns larger weights to pixels with a smaller deviation from the mean of the three channels. The rationale for this assumption is the fact that the color channels of naturalistic images are highly correlated in RGB space. Therefore, pixels near the mean of the three channels are assumed to be well defined, whereas pixels far from this mean are assumed to represent outliers. The last quality measure E (corresponding to Mertens et al.'s well-exposedness24,25) is computed by weighting each pixel intensity i with its distance from 0.5 (the mean pixel value) using a Gaussian weighting function: $\exp[-(i - 0.5)^2 / (2\sigma^2)]$. Following Mertens et al.,24,25 we adopt $\sigma = 0.2$. This weighting ensures that pixels that are either under- (near 0) or over- (near 1) exposed are weighted less than pixels that are well exposed (near 0.5). This metric prioritizes information from intermediate valued image regions that typically provide a more articulated representation of image details.24,25 The three quality measures are then combined into a single weight map through multiplication:

$$W_{ij,k} = C_{ij,k}^{\omega_C} \times S_{ij,k}^{\omega_S} \times E_{ij,k}^{\omega_E}, \qquad (1)$$

where C, S, and E represent, respectively, contrast, saturation, and well-exposedness, with corresponding "weighting" exponents $\omega_C$, $\omega_S$, and $\omega_E$. The subscripts $ij, k$ refer, respectively, to a pixel with coordinates $(i, j)$ in channel $k$, $k \in \{R, G, B\}$.
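As an illustration, a sketch of the weight-map computation of Eq. (1) in Python/NumPy. The paper does not specify the Laplacian kernel, so `scipy.ndimage.laplace` is used here as a plausible stand-in; all arrays are assumed to be floats in [0, 1]:

```python
import numpy as np
from scipy.ndimage import laplace

def weight_map(channel, channel_mean, w_c=1.0, w_s=1.0, w_e=1.0, sigma=0.2):
    """Scalar-valued weight map for one color channel (Eq. 1).

    channel:      (H, W) float array, one of R, G, B, values in [0, 1]
    channel_mean: (H, W) float array, the per-pixel mean of R, G, and B
    """
    C = laplace(channel) ** 2                  # contrast: squared Laplacian response
    S = 1.0 - np.abs(channel - channel_mean)   # closeness to the channel mean
    E = np.exp(-(channel - 0.5) ** 2 / (2.0 * sigma ** 2))  # well-exposedness
    return C**w_c * S**w_s * E**w_e
```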

After computing a weight map for each of the (R, G, and B) input channels, the three weight maps are normalized such that they sum to one for each pixel:

Wij;k ¼"XNk 0¼1

Wij;k 0

#−1

Wij;k: (2)
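The normalization of Eq. (2) is a per-pixel division; in the sketch below, the small epsilon is our addition to guard against pixels where all three weights vanish, a case the paper does not discuss:

```python
def normalize_weights(W_r, W_g, W_b, eps=1e-12):
    """Rescale the three weight maps so they sum to one at each pixel (Eq. 2)."""
    total = W_r + W_g + W_b + eps
    return W_r / total, W_g / total, W_b / total
```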

Straightforward application of the normalized weight maps to fuse the three color channels of the input color image may result in undesirable halos around edges and may spill information across object boundaries. This problem can be solved by using a pyramidal image fusion scheme5 as suggested by Mertens et al.24,25 The hierarchical weighted fusion scheme we propose for color-to-grayscale conversion is as follows (see Fig. 1). First, a multiresolution weight map is constructed for each input channel by applying a Gaussian pyramid transform to its corresponding weight map. Next, the three input channels are decomposed into Laplacian pyramids, which basically contain band-pass filtered versions at different spatial scales.5 Then, these Laplacian pyramids are multiplied by their corresponding Gaussian pyramid weight maps and the results are summed. The final grayscale converted image is then obtained by reconstructing the resulting pyramid. This color-to-grayscale conversion method is straightforward and automatic (it requires no user interaction) and has limited complexity. In Sec. 4, we will show that it performs at least as well as the best performing state-of-the-art color-to-grayscale conversion algorithms.
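The fusion stage could be sketched as follows, with OpenCV's pyrDown/pyrUp standing in for the Burt–Adelson pyramid construction.5 The level count and the final clipping are our choices, and `weight_map`/`normalize_weights` refer to the sketches above:

```python
import cv2
import numpy as np

def gaussian_pyramid(img, levels):
    """Successively low-pass filtered and downsampled copies of img."""
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    """Band-pass levels: each Gaussian level minus its upsampled successor."""
    gp = gaussian_pyramid(img, levels)
    lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
          for i in range(levels - 1)]
    lp.append(gp[-1])  # coarsest level keeps the low-pass residue
    return lp

def rgb_to_gray(rgb, w_c=1.0, w_s=1.0, w_e=1.0, levels=6):
    """Weighted multiresolution channel fusion of an (H, W, 3) float image in [0, 1]."""
    mean = rgb.mean(axis=2)
    weights = normalize_weights(
        *[weight_map(rgb[..., k], mean, w_c, w_s, w_e) for k in range(3)])
    fused = None
    for k in range(3):
        lp = laplacian_pyramid(rgb[..., k], levels)   # band-pass channel content
        gp = gaussian_pyramid(weights[k], levels)     # multiresolution weights
        term = [l * g for l, g in zip(lp, gp)]
        fused = term if fused is None else [f + t for f, t in zip(fused, term)]
    gray = fused[-1]                                  # collapse coarse to fine
    for level in reversed(fused[:-1]):
        gray = cv2.pyrUp(gray, dstsize=(level.shape[1], level.shape[0])) + level
    return np.clip(gray, 0.0, 1.0)
```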

4 Experimental Results

To evaluate the performance of our algorithm, we applied it to a set of 24 color images (first column in Fig. 2) that are part of a publicly available color-to-gray benchmark dataset, which also includes the results of several state-of-the-art color-to-gray conversion algorithms.3 In all cases, we used the parameter setting $\omega_C = \omega_S = \omega_E = 1$. Figure 2 shows the results of our color-to-grayscale conversion algorithm (second column) and those of the CIE-Y conversion, Smith et al.,10 Neumann et al.,4 Grundland and Dodgson,33 Rasche et al.,15 Gooch et al.,13 and Bala and Eschbach,9 for each of the 24 input color images. These results show that our method preserves the overall visual appearance and color contrast of all the input images just as well as or even better than the best performing state-of-the-art methods. Figure 3 shows the results for the fifth input image from Fig. 2 together with an enlarged part of each of the results. This figure clearly shows the ability of our algorithm to preserve both local and more global consistencies.

To investigate the relative contribution of the different factors in the fusion weight map, we also tested seven different combinations of the parameters $[\omega_C, \omega_S, \omega_E]$: [1 0 0], [0 1 0], [0 0 1], [1 1 0], [1 0 1], [0 1 1], and [1 1 1]. Figure 4 shows an example of the results of our algorithm with each of these parameter settings on image 5 from Fig. 2. This figure clearly shows that the elimination of the contrast measure from the weight map ($\omega_C = 0$; images 2, 3, and 6 in Fig. 4) results in grayscale images with significantly reduced luminance contrast. Figure 5 shows a comparison of the relative effect of our three quality measures. In this example, color-to-grayscale conversion is performed with either contrast, saturation, or well-exposedness as a weighting factor. This example shows that the contrast factor C preserves local luminance gradients, whereas the relative saliency factor S and the well-exposedness factor E serve to preserve, respectively, chrominance and global luminance contrast to some extent.
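Assuming the `rgb_to_gray` sketch from Sec. 3 and a float RGB test image (the filename below is hypothetical), the parameter sweep could be reproduced as:

```python
import imageio.v3 as iio

img = iio.imread("image5.png").astype(float) / 255.0  # hypothetical test image
combos = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
          (1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)]
results = {c: rgb_to_gray(img, *c) for c in combos}   # one grayscale image per setting
```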

To enable an objective quantification of the performance of a color-to-grayscale conversion method, we adopt the normalized cross-correlation NCC between the resulting grayscale image and the R, G, and B color channels of the original input image as a conversion quality metric,

Fig. 3 The results of the different color-to-grayscale conversion algorithms applied to image 5 from Fig. 2 (top row). The bottom row shows an enlargement of the bottom left corner from each corresponding image in the top row.


$$\mathrm{NCC} = \frac{1}{3} \sum_{i=1}^{3} \frac{\sum_{x,y} I_i(x,y) \cdot I_g(x,y)}{\sqrt{\sum_{x,y} I_i(x,y)^2 \cdot \sum_{x,y} I_g(x,y)^2}}, \qquad (3)$$

where $I_i$ represents one of the three (R, G, or B) channels of the color input image, $I_g$ represents the grayscale output image, and x, y represent the image coordinates. The rationale for using this metric is the requirement that the color variations (features) in each of the individual color channels should be optimally preserved in the resulting grayscale image. We computed the NCC metric for each of the conversion results shown in Fig. 2. The results shown in Fig. 6 demonstrate that our new method performs better than or equal to the other methods for all images tested here.
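Eq. (3) transcribes directly into a few lines of NumPy; a sketch, assuming float arrays and a helper name of our choosing:

```python
import numpy as np

def ncc(rgb, gray):
    """Mean normalized cross-correlation between the grayscale result
    and each of the R, G, B input channels (Eq. 3)."""
    scores = []
    for k in range(3):
        ch = rgb[..., k]
        num = np.sum(ch * gray)
        den = np.sqrt(np.sum(ch**2) * np.sum(gray**2))
        scores.append(num / den)
    return float(np.mean(scores))
```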

5 Conclusions

We presented a color-to-gray conversion algorithm that employs a weighted pyramid image fusion scheme to blend the R, G, and B color channels of the input image into a single grayscale image. The use of simple visual quality metrics as weights in the fusion scheme serves to retain contrast details from each of the input color channels. Visual (subjective) and quantitative (objective) comparisons of the results of our method with those produced by several state-of-the-art methods demonstrate that our new method retains as many or even more features from the original color image in the final grayscale fused image than the best performing state-of-the-art methods.

Fig. 4 The results of our color-to-grayscale conversion algorithm applied to image 5 from Fig. 2 for seven different parameter combinations.

Fig. 5 Illustration of the relative contribution of the contrast measure C [1 0 0], the relative saliency measure S [0 1 0], and the well-exposedness measure E [0 0 1] to the results of our color-to-grayscale conversion algorithm applied to image 3 from Fig. 2.

Fig. 6 Normalized cross-correlation between the original input images and their grayscale conversions produced by our method and previous methods [(CIEY), (Smith08),10 (Neumann07),4 (Grundland07),33 (Rasche05),15 (Gooch05),13 (Bala04)9].

5.1 Limitations of the Method

For most of the examples tested and for a single fixed parameter setting ($\omega_C = \omega_S = \omega_E = 1$; i.e., equal weights for each of the quality measures), our algorithm already retains more information in the final grayscale image representation than several of the state-of-the-art methods. As for any color-to-grayscale conversion algorithm, there may, of course, be exceptions to this finding. For instance, the relatively strong contribution of the contrast factor to the composite weight map may result in an over-enhancement of edges (e.g., image id3 in Fig. 2). This can easily be remedied by adjusting the relative weights of the individual quality measures.

The relative luminance contribution of the individual channels to the final grayscale image is weighted with their respective saliency maps (composed of the three feature quality factors C, S, and E). In this way, the method converts the input color image into a grayscale representation that retains visually salient information at each level of detail from the individual color channels. A limitation of this approach is the fact that the perceptual hue order may be lost in this process.

Since there are currently no validated objective metrics available to quantify the performance of color-to-grayscale conversion methods, we adopted the normalized cross-correlation NCC between the resulting grayscale image and the R, G, and B color channels of the original input image as the computational conversion quality metric. This metric measures to what extent information from the individual input color channels is retained in the final result. Since this metric has no direct relation to human visual perception, it may, in some cases, assign a quality ranking that is not strictly related to human judgment (e.g., the visual quality of the CIE-Y results for images 3 and 5 in Fig. 2 is actually quite low, whereas their NCC values are relatively high).

Acknowledgments

Effort sponsored by the Air Force Office of Scientific Research, Air Force Materiel Command, USAF, under grant number FA8655-11-1-3015. The U.S. Government is authorized to reproduce and distribute reprints for governmental purpose notwithstanding any copyright notation thereon.

References

1. Y. Kim et al., "Robust color-to-gray via nonlinear global mapping," ACM Trans. Graph. 28(5), 1–4 (2009).

2. M. Song et al., "Color to gray: visual cue preservation," IEEE Trans. Pattern Anal. Mach. Intell. 32(9), 1537–1552 (2010).

3. M. Cadik, "Perceptual evaluation of color-to-grayscale image conversions," Comput. Graphics Forum 27(7), 1745–1754 (2008).

4. L. Neumann, M. Cadik, and A. Nemcsics, "An efficient perception-based adaptive color to gray transformation," in Proc. Comput. Aesthetics, pp. 73–80, Eurographics Association, Aire-la-Ville, Switzerland (2007).

5. P. J. Burt and E. H. Adelson, "The Laplacian pyramid as a compact image code," IEEE Trans. Commun. 31(4), 532–540 (1983).

6. G. Wyszecki and W. S. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulae, 2nd ed., John Wiley & Sons, New York (2000).

7. H. R. Wu and K. R. Rao, Digital Video Image Quality and Perceptual Coding, CRC Press, Boca Raton, FL (2005).

8. M. D. Fairchild, Color Appearance Models, 3rd ed., Wiley-IS&T, Chichester, Sussex (2013).

9. R. Bala and R. Eschbach, "Spatial color-to-grayscale transform preserving chrominance edge information," in Proc. of the Color Imaging Conf. 2004, pp. 82–86, Society for Imaging Science and Technology, Springfield, VA (2004).

10. K. Smith et al., "Apparent greyscale: a simple and fast conversion to perceptually accurate images and video," Comput. Graphics Forum 27(2), 193–200 (2008).

11. Y. Nayatani, "Simple estimation methods for the Helmholtz–Kohlrausch effect," Color Res. Appl. 22(6), 385–401 (1997).

12. W. Zhu, R. Hu, and L. Liu, "Grey conversion via perceived-contrast," The Visual Computer 30(3), 299–309 (2014).

13. A. A. Gooch et al., "Color2Gray: salience-preserving color removal," ACM Trans. Graphics 24(3), 634–639 (2005).

14. J. G. Kuk, J. H. Ahn, and N. I. Cho, "A color to grayscale conversion considering local and global contrast," in Computer Vision—ACCV 2010, R. Kimmel, R. Klette, and A. Sugimoto, Eds., pp. 513–524, Springer, Berlin Heidelberg (2011).

15. K. Rasche, R. Geist, and J. Westall, "Re-coloring images for gamuts of lower dimension," Comput. Graphics Forum 24(3), 423–432 (2005).

16. K. Rasche, R. Geist, and J. Westall, "Detail preserving reproduction of color images for monochromats and dichromats," IEEE Comput. Graphics Appl. 25(3), 22–30 (2005).

17. M. Grundland and N. A. Dodgson, "The decolorize algorithm for contrast enhancing, color to grayscale conversion," Technical Report UCAM-CL-TR-649, University of Cambridge Computer Laboratory, Cambridge, UK (2005).

18. C. Lu, L. Xu, and J. Jia, "Contrast preserving decolorization," in IEEE Int. Conf. Computational Photography (ICCP), pp. 1–7, IEEE, Piscataway, New Jersey (2012).

19. C. Lu, L. Xu, and J. Jia, "Real-time contrast preserving decolorization," in Proc. SIGGRAPH Asia 2012 (SA '12), pp. 1–4, ACM, New York (2012).

20. T.-H. Lee, B.-K. Kim, and W.-J. Song, "Converting color images to grayscale images by reducing dimensions," Opt. Eng. 49(5), 057006 (2010).

21. C. Hsin, H.-N. Le, and S.-J. Shin, "Color to grayscale transform preserving natural order of hues," in Int. Conf. on Electrical Engineering and Informatics (ICEEI), pp. 1–6, IEEE, Piscataway, New Jersey (2011).

22. C. Lau, W. Heidrich, and R. Mantiuk, "Cluster-based color space optimizations," in Proc. of the IEEE Int. Conf. Computer Vision (ICCV 2011), pp. 1172–1179, IEEE, Piscataway, New Jersey (2011).

23. J. Wu, X. Shen, and L. Liu, "Interactive two-scale color-to-gray," The Visual Computer 28(6–8), 723–731 (2012).

24. T. Mertens, J. Kautz, and F. Van Reeth, "Exposure fusion: a simple and practical alternative to high dynamic range photography," Comput. Graphics Forum 28(1), 161–171 (2009).

25. T. Mertens, J. Kautz, and F. Van Reeth, "Exposure fusion," in Proc. of the 15th Pacific Conf. on Computer Graphics and Applications PG '07, pp. 382–390, IEEE Computer Society, Washington, DC (2007).

26. J. M. Ogden et al., "Pyramid-based computer graphics," RCA Engineer 30(5), 4–15 (1985).

27. A. Toet, "Image fusion by a ratio of low-pass pyramid," Pattern Recognit. Lett. 9(4), 245–253 (1989).

28. A. Toet, J. J. van Ruyven, and J. M. Valeton, "Merging thermal and visual images by a contrast pyramid," Opt. Eng. 28(7), 789–792 (1989).

29. A. Toet, "Hierarchical image fusion," Mach. Vision Appl. 3(1), 1–11 (1990).

30. R. Raskar, A. Ilie, and J. Yu, "Image fusion for context enhancement and video surrealism," in Proc. of the Third Int. Symp. on Non-Photorealistic Animation and Rendering, pp. 85–152, ACM Press, New York (2004).

31. P. J. Burt and R. J. Kolczynski, "Enhanced image capture through fusion," in Proc. Fourth Int. Conf. Computer Vision, pp. 173–182, IEEE Computer Society Press, Washington (1993).

32. M. Grundland et al., "Cross dissolve without cross fade: preserving contrast, color and salience in image compositing," Comput. Graphics Forum 25(3), 577–586 (2006).

33. M. Grundland and N. A. Dodgson, "Decolorize: fast, contrast enhancing, color to grayscale conversion," Pattern Recognit. 40(11), 2891–2896 (2007).

Tirui Wu is currently a control engineer at Ford REC (China), where he investigates vision-based active safety technologies (lane departure warning, forward collision avoidance, traffic sign recognition, pedestrian detection, etc.), machine learning, image fusion, face recognition, and motion magnification.

Alexander Toet is currently a senior research scientist at TNO (Soesterberg, the Netherlands), where he investigates multimodal image fusion, image quality, computational models of human visual search and detection, the quantification of visual target distinctness, and cross-modal perceptual interactions between the visual, auditory, olfactory, and tactile senses. He is a fellow of SPIE and a senior member of the Institute of Electrical and Electronics Engineers (IEEE).
