
IPASJ International Journal of Electronics & Communication (IIJEC)
Web Site: http://www.ipasj.org/IIJEC/IIJEC.htm
A Publisher for Research Motivation
Email: [email protected]
Volume 2, Issue 3, March 2014
ISSN 2321-5984

Improved Perceptibility of Road Scene Images under Homogeneous and Heterogeneous Fog

Shalini Gupta 1, Vijay Prakash Singh 2 and Ashutosh Gupta 3

1,2 Department of Electronics & Communication Engineering, RKDF College of Engineering, RGPV Bhopal, India
3 Department of Electronics and Communication Engineering, RKDF College of Engineering, RGPV Bhopal, India

Abstract
In this article, we present a technique for measuring visibility distances under foggy weather conditions using a camera mounted onboard a moving vehicle. Our research has focused in particular on the problem of detecting daytime fog and estimating visibility distances; thanks to these efforts, an original method has been developed, tested and patented. The approach consists of dynamically implementing Koschmieder's law. Our method computes the meteorological visibility distance, a measure defined by the International Commission on Illumination (CIE) as the distance beyond which a black object of an appropriate dimension is perceived with a contrast of less than 5%. The proposed solution is an original one, featuring the advantage of utilizing a single camera and necessitating the presence of just the road and sky in the scene. As opposed to other methods that require the explicit extraction of the road, this method offers fewer constraints by virtue of being applicable with no more than the extraction of a homogeneous surface containing a portion of the road and sky within the image. This image preprocessing also serves to identify the level of compatibility of the processed image with the set of hypotheses behind Koschmieder's model.

The presence of homogeneous and heterogeneous fog is one source of accidents when driving a vehicle, and haze, fog or smoke is likewise a source of difficulties when processing outdoor images: it fades the colors and reduces the contrast of the observed objects in proportion to their distances. Various camera-based Advanced Driver Assistance Systems (ADAS) can be improved if efficient algorithms are designed for visibility enhancement of road images. Many visibility enhancement algorithms exist, but none is dedicated to road images, which leads to results of limited quality on images of this kind. In this paper, we interpret the visibility enhancement algorithm as the inference of the local atmospheric veil subject to two constraints. From this interpretation, we propose an extended algorithm which better handles road images by taking into account that a large part of the image can be assumed to be a planar road. The advantages of the proposed local algorithm are its speed, the possibility to handle both color and gray-level images, and its small number of parameters. A comparative study and quantitative evaluation with other state-of-the-art algorithms is proposed on synthetic images with several types of generated fog. This evaluation demonstrates that the new algorithm produces results of similar quality with homogeneous fog and that it is able to better deal with the presence of heterogeneous fog. The main advantage of the proposed algorithm compared with others is its speed: its complexity is a linear function of the number of image pixels only. This speed allows visibility restoration to be applied for the first time within real-time processing applications such as sign, lane-marking and obstacle detection from an in-vehicle camera. Another advantage is the possibility to handle both color and gray-level images, since the ambiguity between the presence of fog and objects with low color saturation is resolved by assuming that only small objects can have colors with low saturation. The algorithm is controlled by only a few parameters and consists of three steps: atmospheric veil inference, image restoration and smoothing, and tone mapping. A comparative study and quantitative evaluation against a few other state-of-the-art algorithms demonstrates that similar or better quality results are obtained. Finally, an application to lane-marking extraction in gray-level images is presented, illustrating the interest of the approach. Overall, we introduce a novel algorithm and variants for visibility restoration from a single image.

Keywords: ADAS, Real-time Scenario, Image Restoration, Homogeneous Fog, Heterogeneous Fog.

1. INTRODUCTION
Processing images that are foggy or full of haze is quite difficult. Under degraded weather conditions, the contrast of images grabbed by a classical in-vehicle camera in the visible light range is drastically reduced, which makes current in-vehicle applications relying on such sensors very sensitive to weather conditions. An in-vehicle vision system should take fog effects into account to be more reliable.

A first solution is to adapt the operating thresholds of the system or to deactivate it momentarily if these thresholds have been surpassed.

A second solution is to remove fog effects from the image beforehand.



Unfortunately, the nature of fog or haze is unpredictable, i.e., the haze effects vary across the scene, which means that modeling the real fog or haze is not always appropriate. These effects are exponential with respect to the depth of the scene points. Consequently, space-invariant filtering techniques cannot be used directly to adequately remove weather effects from images. A judicious approach is to detect the weather conditions so as to estimate the decay in the image and then to remove it.

The majority of sensors dedicated to measuring visibility distances (scatterometer, transmissometer) are expensive to operate and quite often complicated to install and calibrate correctly. Moreover, this type of equipment cannot easily be placed onboard a vehicle. Using a camera does not entail such obstacles. Bush [5] and Kwon [10] relied upon a fixed camera placed above the roadway for the purpose of measuring visibility distances. However, systems that meet this purpose with an onboard camera are encountered less frequently. Pomerleau [22] estimates visibility by measuring the contrast attenuation of road markings at various distances in front of a moving vehicle. Hautière et al. [9] detect the presence of daytime fog and estimate the meteorological visibility distance. An extension of their method using stereovision is presented in [8].

Methods which restore image contrast under bad weather conditions are encountered more often in the literature. Unfortunately, they all have constraints rather too strong to be used onboard a moving vehicle. Some techniques require prior information about the scene [21]. Others require dedicated hardware in order to estimate the weather conditions [25]. Some techniques rely only on the acquired images and exploit the atmospheric scattering to obtain the range map of the scene [19, 6]. This range map is then used to adequately restore the contrast. However, these methods require fog conditions to change between image acquisitions. Techniques based on polarization can also be used to reduce haziness in the image [23]. Unfortunately, they require two differently filtered images of the same scene. Some techniques assume a flat-world scene, like [20], whose authors compute the extinction coefficient of fog for a flat world seen from a forward-looking airborne camera; however, they approximate the distribution of radiances in the image with a simple Gaussian of known variance. Finally, it is proposed in [17] to restore the contrast of more complex scenes; however, the user must manually specify a location for the sky region, the vanishing point and an approximation of the distance distribution in the image.

In this paper, we propose to restore automatically the contrast of images grabbed by an in-vehicle camera. Because it is not possible in this context to compute the road scene structure beforehand, we propose a scheme quite opposite to the one proposed in [19]. Thus, weather conditions are first estimated and used to restore the contrast according to a road scene structure, which is inferred a priori and then refined during the contrast restoration process. According to the targeted in-vehicle application, different algorithms with increasing complexities are proposed. Results are presented using sample frames from three different video clips of road scenes under foggy weather.

The paper is organized as follows. We first present a model of fog visual effects. Then, a technique estimating the extinction coefficient of the atmosphere in the current image from a single camera is presented. Once the weather condition is known, a contrast restoration principle is proposed with its associated tools.

2. FOG EFFECTS ON VISION
The literature on the interaction of light with the atmosphere has been written over more than two centuries [4, 2]. Different reviews on the subject have long been available in the literature [15, 16, 14] and still serve as references for more recent works in computer vision [6, 18, 9]. In this section, selected results dealing with fog effects on vision are presented.

Figure 2 Effect of fog on vision

2.1 Optical Properties of Fog
In fog, a proportion of the light is scattered by water droplets and thus deviates from its path. Because the absorption of visible light by water droplets proves to be negligible [3], the scattering and extinction coefficients tend to be interchangeable. The relation tying the incident luminous flux T0 to the transmitted flux T over a distance d is known as Beer-Lambert's law and is expressed as follows:

T = T0 e^(-βd)   (1)

where β denotes the extinction coefficient of the atmosphere.

2.2 Visual Properties of Fog
The attenuation of luminance through the atmosphere was studied by Koschmieder [15], who derived an equation relating the apparent luminance or radiance L of an object located at distance d to the luminance L0 measured close to this object:
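For illustration, assuming an extinction coefficient β = 0.03 m⁻¹ (a moderate fog), the transmitted fraction over d = 100 m is T/T0 = e^(−3) ≈ 0.05: only about 5% of the incident flux reaches the observer without being scattered.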

L = L0 e^(-βd) + A∞ (1 - e^(-βd))   (2)

This expression indicates that the luminance of the object seen through fog is attenuated by the factor e^(-βd) (Beer-Lambert law); it also reveals a luminance reinforcement of the form A∞(1 - e^(-βd)) resulting from daylight scattered by the slab of fog between the object and the observer, also named airlight. A∞ is the atmospheric luminance; in the presence of fog, it is also the background luminance against which the target can be detected. The previous equation may then be written as follows:

L - A∞ = (L0 - A∞) e^(-βd)   (3)

2.3 Estimation of the Extinction Coefficient of Fog
In this section, a method to compute the extinction coefficient β using a single camera behind the vehicle windshield is recalled from [7, 10].

2.3.1 Flat World Hypothesis
In the image plane, the position of a pixel is given by its (u, v) coordinates. The coordinates of the projection of the optical center in the image are designated by (u0, v0). H denotes the height of the camera, θ the angle between the optical axis of the camera and the horizontal, and vh the position of the horizon line in the image. The intrinsic parameters of the camera are its focal length f, and the horizontal size tpu and vertical size tpv of a pixel. We have also made use herein of αu = f/tpu and αv = f/tpv, and have typically considered αu ≈ αv = α.

The hypothesis of a flat road is adopted, which makes it possible to associate a distance d with each line v of the image:

d = λ/(v - vh) if v > vh, where λ = Hα/cos θ   (4)

2.3.2 Recovery of Koschmieder's Law Parameters
Following the variable change from d to v, equation (2) becomes:

I = R - (R - A∞)(1 - e^(-βλ/(v - vh)))   (5)

where I is the intensity of the pixel on image line v and R the intrinsic intensity of the road surface. By twice taking the derivative of I with respect to v, one obtains the following:

∂²I/∂v² = β Φ(v) e^(-βλ/(v - vh)) (βλ/(v - vh) - 2)   (6)

where Φ(v) = (R - A∞) λ/(v - vh)³. The equation ∂²I/∂v² = 0 has two solutions. The solution β = 0 is of no interest. The only useful solution is given in (7):

β = 2(vi - vh)/λ   (7)

where vi denotes the position of the inflection point of I(v). In this manner, the parameter β of Koschmieder's law is obtained once vh is known. Finally, thanks to the vh, vi and β values, the values of the other parameters of (5) are deduced through use of Ii and I'i, which are respectively the values of the function I and of its derivative at v = vi:

R = Ii - (1 - e²)(vi - vh) I'i / 2   (8)

A∞ = Ii - (vi - vh) I'i / 2   (9)

where R is the mean intrinsic intensity of the road surface.
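The step from (6) to (7) can be made explicit: since Φ(v) and the exponential factor never vanish for v > vh, setting the second derivative to zero isolates the remaining factor. In LaTeX form:

\frac{\partial^2 I}{\partial v^2} = 0
\;\Longleftrightarrow\;
\beta = 0
\quad \text{or} \quad
\frac{\beta \lambda}{v_i - v_h} = 2,
\qquad \text{hence} \qquad
\beta = \frac{2\,(v_i - v_h)}{\lambda}.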


2.4 Estimation of the Horizon Line Position
To obtain the values of the parameters of (5), the position vh of the horizon line must be estimated. It can be estimated by means of the pitching of the vehicle when an inertial sensor is available, but it is generally estimated by additional image processing.
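To make Sections 2.3.2 and 2.5 concrete, the following is a minimal Python sketch (an illustration under the stated hypotheses, not the patented implementation of [12]): given the median intensity profile of the selected vertical band, it locates the inflection point, applies equation (7), and converts β into the meteorological visibility distance using the CIE 5% contrast threshold, Vmet = -ln(0.05)/β ≈ 3/β. The profile, the horizon position vh and the calibration term λ are assumed to be available; the numeric values in the closing comment are hypothetical.

import numpy as np

def estimate_beta(profile, v_h, lam, smooth=9):
    """Sketch of Sec. 2.3.2/2.5: extinction coefficient from a band profile.

    profile : 1-D array, median intensity of each image line v (top to bottom)
    v_h     : estimated horizon line (row index)
    lam     : lambda = H * alpha / cos(theta), from the camera calibration
    """
    # Smooth the profile so the numerical second derivative is usable.
    kernel = np.ones(smooth) / smooth
    I = np.convolve(profile, kernel, mode="same")
    d2I = np.gradient(np.gradient(I))
    # Inflection point v_i: sign change of d2I below the horizon (v > v_h).
    v = np.arange(len(I))
    mask = (v[:-1] > v_h) & (np.sign(d2I[:-1]) != np.sign(d2I[1:]))
    candidates = np.flatnonzero(mask)
    if candidates.size == 0:
        raise ValueError("no inflection point: image incompatible with the model")
    v_i = candidates[0]
    beta = 2.0 * (v_i - v_h) / lam        # equation (7)
    v_met = -np.log(0.05) / beta          # CIE 5% contrast threshold, approx. 3/beta
    return beta, v_met

# Hypothetical example: H = 1.4 m, alpha = 800 px, theta ~ 0 gives lam = 1120;
# an inflection point 30 rows below the horizon then gives
# beta = 2 * 30 / 1120 ≈ 0.054 m^-1 and v_met ≈ 3 / 0.054 ≈ 56 m.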

Figure 3 (a) Black segments: measurement bandwidth. (b) Curve on the left: vertical variation of intensity within the bandwidth. (c) Horizontal line: image line representative of the meteorological visibility distance.

2.5 Estimation of the Inflection Point
To estimate the parameters of (5), the median intensity on each line of a vertical band is computed and an inflection point is detected. So as to be in accordance with the assumptions of Koschmieder's law, this band should only take into account a homogeneous area and the sky. Thus, a region within the image that displays minimal line-to-line gradient variation when browsed from bottom to top is identified thanks to a region growing process, illustrated in Fig. 3. A vertical band is then selected in the detected area. Finally, taking the median intensity of each segment yields the vertical variation of the intensity of the image and the position of the inflection point.

3. SIMULATIONS
3.1 Synthetic Images
To evaluate visibility enhancement algorithms, we need images of the same scene with and without fog. However, obtaining such pairs of images is extremely difficult in practice, since it requires checking that the illumination conditions of the scene are the same with and without fog. As a consequence, for the evaluation of the proposed visibility enhancement algorithm and its comparison with existing algorithms, we built up a set of synthetic images with and without fog. The software we used is named SiVIC™; it makes it possible to build physically-based road environments, to generate a moving vehicle with a physically-driven model of its dynamic behavior [14], and to simulate virtual embedded sensors (proprioceptive, exteroceptive and communication). From a realistic complex urban model, we produced images from a virtual camera onboard a simulated vehicle moving along a road path. We generated a set of 18 images from various viewpoints, trying to sample as many scene aspects as possible. Each image is of size 640×480, and a subset of 6 images is shown in the first column of Fig. 4. For each point of view, the associated depth map is also generated, as shown in the second column of Fig. 4. Indeed, the depth map is required to be able to add fog consistently to the images. We generate 4 different types of fog:


Uniform fog: Koschmieder's law (2) is applied to the 18 original images with a visibility distance of 85.6 m, to generate 18 images with a uniform fog added.

Heterogeneous k fog: fog being not always perfectly homogeneous, we introduce variability in Koschmieder's law (2) by weighting the extinction coefficient k (noted β in Section 2) differently with respect to the pixel position. These spatial weights are obtained as a Perlin noise between 0 and 1, i.e. a noise spatially correlated at different scales (2, 4, 8, ... up to the size of the image in pixels) [15]. The Perlin noise is obtained as a linear combination of the spatially correlated noise generated at the different scales, with a weight depending on the scale s.

Heterogeneous A∞ fog: rather than having k heterogeneous and A∞ constant, we also test the case where A∞ is heterogeneous, thanks again to a Perlin noise, and where k is constant.

Heterogeneous k and A∞ fog: in order to challenge the algorithms, we also generate a fog based on Koschmieder's law (2) where k and A∞ are both heterogeneous, thanks to two independent Perlin noises.
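As an illustration of this synthesis step, the following sketch applies Koschmieder's law (2) pixel-wise to an image with a known depth map. A crude multi-scale value noise stands in for the Perlin noise of [15], and the modulation ranges are arbitrary choices for illustration, not the paper's settings.

import numpy as np

def multiscale_noise(h, w, scales=(2, 4, 8, 16, 32), rng=None):
    """Spatially correlated noise in [0, 1]; crude stand-in for Perlin noise."""
    rng = np.random.default_rng() if rng is None else rng
    acc = np.zeros((h, w))
    for s in scales:
        coarse = rng.random((h // s + 1, w // s + 1))
        acc += np.kron(coarse, np.ones((s, s)))[:h, :w]  # nearest-neighbour upsampling
    acc -= acc.min()
    return acc / acc.max()

def add_fog(image, depth, v_met=85.6, a_inf=255.0, hetero_k=False, hetero_a=False):
    """Apply Koschmieder's law (2): I = R e^(-k d) + A_inf (1 - e^(-k d))."""
    h, w = depth.shape
    k = -np.log(0.05) / v_met                # 5% contrast reached at distance v_met
    if hetero_k:
        k = k * (0.5 + multiscale_noise(h, w))               # arbitrary modulation
    a = a_inf * (0.8 + 0.4 * multiscale_noise(h, w)) if hetero_a else a_inf
    t = np.exp(-k * depth)                   # per-pixel transmission
    if image.ndim == 3:                      # broadcast over the color channels
        t = t[..., None]
        if np.ndim(a) == 2:
            a = a[..., None]
    return image * t + a * (1.0 - t)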

Table 1 Average absolute difference on 18 images between enhanced images and target images without fog, for the 4 compared algorithms, and for the 4 types of synthetic fog.

Algorithm  | Uniform    | Variable k | Variable A∞ | Variable k & A∞
Nothing    | 70.6       | 49.9       | 56.9        | 39.1
MSR        | 46.0       | 71.4       | 46.4 / 5.0  | 71.1 / 13.6
FSS        | 34.7 / 6.3 | 34.1 / 3.0 | 44.2 / 6.6  | 43.8 / 9.1
NBPC       | 48.8 / 5.8 | 35.5 / 6.4 | 42.9 / 5.9  | 35.5 / 6.2
NBPC+PA    | 31.9 / 4.6 | 29.0 / 4.3 | 40.2 / 4.3  | 37.2 / 4.4

4. RESULTS
Here, the synthetic images described above are processed with the different algorithms (PA, NBPC and NBPC+PA) in order to overcome the effect of haze or fog. The black pixels, which are a problem with PA alone, are eliminated in this section.
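For context, the following is a simplified single-image restoration sketch in the spirit of the veil-based enhancement evaluated here; it is not the NBPC or NBPC+PA algorithm itself, and the parameter values are illustrative assumptions. A rough atmospheric veil is inferred from the per-pixel minimum over the color channels, smoothed, and Koschmieder's law is then inverted:

import numpy as np
from scipy.ndimage import median_filter

def restore_visibility(image, a_inf=255.0, strength=0.9, win=15):
    """Crude veil-inference and inversion sketch (not the paper's algorithm).

    image    : float RGB array with values in [0, a_inf]
    strength : fraction of the inferred veil removed; keeping some veil
               limits noise amplification and black-pixel artifacts
    """
    darkest = image.min(axis=2)               # darkest channel bounds the veil
    veil = strength * median_filter(darkest, size=win)
    veil = np.minimum(veil, darkest)          # veil cannot exceed the darkest channel
    t = 1.0 - veil / a_inf                    # transmission implied by the veil
    restored = (image - veil[..., None]) / np.maximum(t[..., None], 0.1)
    return np.clip(restored, 0.0, a_inf)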

Figure 4 (a) Colour NBPC; (b) Colour NBPC+PA.


5. COMPARISON
We apply each algorithm to the 4 types of synthetic fog. The algorithms used are: multi-scale retinex (MSR), enhancement with free-space segmentation (FSS), enhancement with the no-black-pixel constraint (NBPC), and enhancement with the no-black-pixel constraint combined with the planar scene assumption (NBPC+PA). The results on 4 images with a uniform fog are presented in Figure 4. Notice the increase of contrast for the farther objects: some objects that were barely visible in the foggy image appear clearly in the enhanced images. A first visual analysis confirms that MSR is not suited to foggy images, that vertical objects appear too dark with FSS, that roads look over-corrected by NBPC, and that NBPC+PA comes as a nice trade-off.

The quantified comparison consists simply in computing the absolute difference between the image without fog and the image obtained after enhancement. Results, averaged over the 18 images, the number of image pixels and the number of image color components, are shown in Table 1. To qualify easily the improvement obtained by the different algorithms, the average absolute difference between the images with and without fog is also computed and shown in the first data row of the table. One can notice that the proposed algorithms are able, in the best case, to divide the average difference by approximately a factor of two.

The multi-scale retinex (MSR) is not a visibility enhancement algorithm dedicated to scenes with various object depths. Its average difference is decreased for the uniform fog and for the fog with heterogeneous A∞. Interestingly, when k is heterogeneous, the multi-scale retinex is worse than doing nothing. This is explained by the fact that MSR increases some contrasts corresponding to the fog and not to the scene. With uniform fog, enhancement with free-space segmentation (FSS) and with the no-black-pixel constraint combined with the planar scene assumption (NBPC+PA) gives the best results, while enhancement with the no-black-pixel constraint (NBPC) alone is worse than the multi-scale retinex (MSR), due to too-strong contrast distortions on the road part of the image. Nevertheless, enhancement with NBPC is better than MSR at long range. The enhancement with NBPC+PA keeps the good properties of the enhancement with NBPC at long range without contrast distortions on the road part of the image, thanks to the combination with the planar assumption. For the three types of heterogeneous fog, enhancement with NBPC+PA leads to better results compared to FSS. This can be explained by the fact that the FSS enhancement algorithm relies strongly on the assumption that k and A∞ are constant over the whole image, whereas the NBPC+PA algorithm does not. Indeed, the NBPC+PA algorithm only assumes that k and A∞ are locally constant in the image and thus, most of the time, it performs better with heterogeneous fog compared to the others.
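The quantitative criterion used in Table 1 is straightforward to reproduce; a minimal sketch (variable names hypothetical):

import numpy as np

def mean_abs_diff(enhanced, fog_free):
    """Absolute difference averaged over pixels and color components (Table 1)."""
    return float(np.mean(np.abs(enhanced.astype(float) - fog_free.astype(float))))

# Averaging over the 18 synthetic images:
# score = np.mean([mean_abs_diff(e, g) for e, g in image_pairs])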

6. CONCLUSION
Thanks to the interpretation of the local visibility enhancement algorithm [1] in terms of two constraints on the inference of the atmospheric veil, we introduce a third constraint to take into account the fact that road images contain a large planar road region, assuming a fog with a visibility distance higher than 50 m. The obtained visibility enhancement algorithm performs better than the original algorithm on road images, as demonstrated on a set of 18 synthetic images where a uniform fog is added following Koschmieder's law. We also generate different types of heterogeneous fog, a situation never considered previously in our domain. The proposed algorithm also demonstrates its ability to improve visibility in such difficult heterogeneous situations. The obtained results are compared with state-of-the-art algorithms: multi-scale retinex [9], enhancement with free-space segmentation (FSS) [8] and enhancement based on the no-black-pixel constraint (NBPC) [1]. We are planning to extend the set of synthetic images used for the ground truth and for research purposes, in particular to allow other researchers to rate their own visibility enhancement algorithms.

Future Work
Our work also has the common limitation of most haze removal methods: the haze imaging model may be invalid. More advanced models can be designed to describe complicated phenomena, such as the sun's influence on the sky region and the bluish hue near the horizon. We intend to investigate haze removal based on these models in the future. Furthermore, a combination of two different algorithms could be implemented for further enhancement of the image.

References
[1] J.-P. Tarel and N. Hautière, "Fast visibility restoration from a single color or gray level image," in Proceedings of the IEEE International Conference on Computer Vision (ICCV'09), Kyoto, Japan, 2009, pp. 2201–2208.
[2] S. G. Narasimhan and S. K. Nayar, "Interactive deweathering of an image using physical model," in IEEE Workshop on Color and Photometric Methods in Computer Vision, Nice, France, 2003.


[3] N. Hautière, J.-P. Tarel, and D. Aubert, "Towards fog-free in-vehicle vision systems through contrast restoration," in IEEE International Conference on Computer Vision and Pattern Recognition (CVPR'07), Minneapolis, Minnesota, USA, 2007, pp. 1–8.

[4] R. Tan, N. Pettersson, and L. Petersson, “Visibility in bad weather from a single image,” in Proceedings of the IEEE Intelligent Vehicles Symposium (IV’07), Istanbul, Turkey, 2007, pp. 19–24.

[5] R. Tan, “Visibility in bad weather from a single image,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR’08), Anchorage, Alaska, 2008, pp. 1–8.

[6] K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR'09), Miami Beach, Florida, USA, 2009, pp. 1956–1963.

[7] N. Hautière and D. Aubert, "Contrast restoration of foggy images through use of an onboard camera," in Proc. IEEE Conference on Intelligent Transportation Systems (ITSC'05), Vienna, Austria, 2005, pp. 1090–1095.

[8] N. Hautière, J.-P. Tarel, and D. Aubert, "Mitigation of visibility loss for advanced camera-based driver assistances," to appear in IEEE Transactions on Intelligent Transportation Systems, 2010.

[9] Z. Rahman, D. J. Jobson, and G. W. Woodell, "Multi-scale retinex for color rendition and dynamic range compression," in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, A. G. Tescher, Ed., vol. 2847, Nov. 1996, pp. 183–191.

[10] N. Hautière, J.-P. Tarel, J. Lavenant, and D. Aubert, "Automatic fog detection and estimation of visibility distance through use of an onboard camera," Machine Vision and Applications, vol. 17, no. 1, pp. 8–20, 2006.

[11] Z. Rahman, D. J. Jobson, G. A. Woodell, and G. D. Hines, "Image enhancement, image quality, and noise," in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, K. M. Iftekharuddin and A. A. S. Awwal, Eds., vol. 5907, Aug. 2005, pp. 164–178.

[12] J. Lavenant, J.-P. Tarel, and D. Aubert, "Procédé de détermination de la distance de visibilité et procédé de détermination de la présence d'un brouillard," French patent number 0201822, INRETS/LCPC, February 2002.

[13] N. Hautière, J.-P. Tarel, and D. Aubert, "Free space detection for autonomous navigation in daytime foggy weather," in Proceedings of IAPR Conference on Machine Vision Applications (MVA'09), Yokohama, Japan, 2009, pp. 501–504.

[14] D. Gruyer, C. Royere, N. du Lac, G. Michel, and J.-M. Blosseville, "SiVIC and RTMaps, interconnected platforms for the conception and the evaluation of driving assistance systems," in Proc. Intelligent Transport Systems World Congress, London, England, 2006, pp. 1–8.
[15] K. Perlin, "An image synthesizer," SIGGRAPH Computer Graphics, vol. 19, no. 3, pp. 287–296, 1985.

