In the fields of computer and machine vision, haze and fog degrade images through several mechanisms, including contrast attenuation, blurring and pixel distortion. This limits the effectiveness of machine vision systems such as video surveillance, target tracking and recognition. Various single image dark channel dehazing algorithms aim to remove haze quickly and efficiently. Such algorithms rely on the dark channel prior theory to estimate the atmospheric light, a crucial parameter for dehazing. This paper studies the state of the art in this area and sets out the strengths and weaknesses of each approach. Experiments reveal the efficiencies and shortcomings of these algorithms. This information provides researchers and developers with a reference for application development and for the future direction of the research field.
Images and videos have become integral parts of our daily lives, either directly, when we view them, or indirectly, when the information they contain is extracted and applied toward other goals. In the field of machine vision, images and videos are relied upon heavily to transform real-world information into digital data. This digital data becomes the basis for algorithms that realize tasks including, but not limited to, traffic monitoring [
Haze and fog are common, naturally occurring weather phenomena that directly degrade the contrast and quality of images. Haze affects image quality because light is randomly scattered within the medium at irregular angles, preventing all pixels of the scene from being completely reconstructed at the image acquisition point. This phenomenon is illustrated in
Because of the hindrance that haze and fog pose to machine and computer vision systems, a considerable and rapidly growing research effort is dedicated to removing haze and its impact from digital images. Earlier dehazing approaches usually relied on additional information being supplied. Examples are seen in the work of Ref. [
In image dehazing research, and in computer vision in general, the widely accepted and adopted model of haze formation is represented in Equation (1) below:

$I(x) = J(x)\,t(x) + A\,(1 - t(x))$ (1)
In Equation (1), the captured image intensity is represented by I, while A and J represent the global atmospheric light and the scene radiance, respectively. The parameter t(x) represents the transmission of the medium: a measure of the portion of light that is not scattered or diffused by the medium and so reaches the image acquisition device. This parameter depends strongly on the medium under observation. Generally speaking, the goal of image dehazing is to recover the parameters A, J and t(x) from the acquired image I(x). As previously stated, various approaches have tackled this task by assuming that certain scene parameters are provided, an assumption that has caused the majority of such techniques to fail on real-life problems. The dark channel prior is a relatively new approach built upon the shortcomings of these predecessor techniques.
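To make the role of each parameter concrete, the following is a minimal sketch of how Equation (1) is typically inverted once A and t(x) have been estimated; the function name, the [0, 1] value range and the lower bound t0 on the transmission (commonly used to avoid noise amplification in dense haze) are illustrative assumptions rather than details of any one surveyed algorithm.

```python
import numpy as np

def recover_radiance(I, A, t, t0=0.1):
    """Invert Equation (1): J = (I - A) / max(t, t0) + A.

    I : hazy image, float array in [0, 1], shape (H, W, 3)
    A : estimated atmospheric light, shape (3,)
    t : estimated transmission map, shape (H, W)
    t0: assumed lower bound on t, limiting noise amplification
    """
    t = np.clip(t, t0, 1.0)[..., np.newaxis]  # broadcast over color channels
    J = (I - A) / t + A
    return np.clip(J, 0.0, 1.0)
```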
The dark channel prior approach to image dehazing is built upon the observation that, in non-sky image patches, at least one color channel contains pixels with very low intensities, sometimes close to zero. Intuitively, the minimum intensity computed within such patches approaches zero. This concept is mathematically represented in Equation (2) below:

$J^{\mathrm{dark}}(x) = \min_{y \in \Omega(x)} \Big( \min_{c \in \{r,g,b\}} J^{c}(y) \Big) \rightarrow 0$ (2)
In Equation (2), $J^{c}$ denotes a color channel of the scene radiance J, $\Omega(x)$ is a local patch centered at pixel x, and $J^{\mathrm{dark}}$ is the dark channel of J.
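As a concrete illustration, the dark channel of Equation (2) reduces to two nested minima, first across the color channels and then over the patch Ω(x); this sketch assumes a SciPy-style minimum filter and a 15-pixel patch, both illustrative choices.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(J, patch=15):
    """Equation (2): per-pixel minimum over color channels, followed
    by a local minimum filter over the patch Omega(x)."""
    min_rgb = J.min(axis=2)                     # min over c in {r, g, b}
    return minimum_filter(min_rgb, size=patch)  # min over y in Omega(x)
```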
As previously established, in order to dehaze an image effectively with the dark channel prior, three core parameters must be recovered: A, J and t(x). The atmospheric light A, a global summary of how light deviates as it travels through the medium, is typically extrapolated from the patch of the hazy image with the highest haze intensity. Examples of such work are seen in Ref. [
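A common realization of this estimation step is sketched below, under the assumption that the brightest 0.1% of dark-channel pixels mark the haziest region and that A is taken from the most intense of those candidate pixels; the helper names and the 0.1% fraction are illustrative.

```python
import numpy as np

def estimate_atmospheric_light(I, dark, top=0.001):
    """Select A from the brightest fraction of dark-channel pixels,
    i.e. the haziest region of the input image I."""
    h, w = dark.shape
    n = max(1, int(h * w * top))
    flat = dark.reshape(-1)
    idx = np.argpartition(flat, -n)[-n:]      # indices of the haziest pixels
    candidates = I.reshape(-1, 3)[idx]
    # among the candidates, keep the pixel with the highest overall intensity
    return candidates[candidates.sum(axis=1).argmax()]
```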
In this section we present and analyze the current state-of-the-art dark channel prior image dehazing approaches. The algorithms are addressed analytically here, followed in Section 4 by an experimental evaluation aimed at a practical assessment of each approach. The work in Ref. [
Although this approach set a key milestone and succeeds in dehazing the image by means of the dark channel prior, one of its core components is the Laplacian matrix associated with its soft matting scheme. Denoting this matrix by L, L becomes a prerequisite for the optimization problem represented in Equation (3) as:

$(L + \lambda U)\,t = \lambda\,\tilde{t}$ (3)
In Equation (3), the parameter U represents an identity matrix of the same size as L, λ is a small regularization weight, and $\tilde{t}$ is the coarse transmission estimate. This core step in the processing pipeline is responsible for the high computational cost of the algorithm, which has also motivated an extension of the algorithm into the work presented in Ref. [
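For reference, solving Equation (3) amounts to one large sparse linear system. The sketch below assumes the matting Laplacian L has already been assembled as a SciPy sparse matrix (the assembly itself is the dominant cost and is omitted here), and the regularization weight lam = 1e-4 is an assumed small value.

```python
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def refine_transmission(L, t_coarse, lam=1e-4):
    """Solve Equation (3), (L + lam*U) t = lam * t_coarse, where L is the
    (N x N) matting Laplacian and U the identity of the same size."""
    n = L.shape[0]
    U = sp.identity(n, format="csr")
    t = spsolve((L + lam * U).tocsr(), lam * t_coarse.reshape(-1))
    return t.reshape(t_coarse.shape)
```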
The work presented in Ref. [
tightly coupled with the Laplacian-matrix-based matting approach. In this way, the optimized form of
The work in Ref. [
In the above Equation (4),
The approach then proceeds to derive the coarse transmission estimate t(x) as:

$\tilde{t}(x) = 1 - \omega \min_{y \in \Omega(x)} \Big( \min_{c} \frac{I^{c}(y)}{A^{c}} \Big)$
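In code, this coarse estimate is one minus the dark channel of the input image normalized by the atmospheric light; the sketch below assumes ω = 0.95 and a 15-pixel patch, values commonly quoted for the dark channel prior but not confirmed by the text above.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def estimate_transmission(I, A, omega=0.95, patch=15):
    """Coarse transmission: one minus the dark channel of the
    atmospheric-light-normalized image. omega < 1 deliberately keeps
    a trace of haze so distant objects still look distant."""
    normalized = (I / A).min(axis=2)
    return 1.0 - omega * minimum_filter(normalized, size=patch)
```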
A bilateral filter is also adopted to refine the coarse transmission map. Although the bilateral filter is capable of preserving edges and attaining robust, stable performance, it fails at enhancing detail as well as at speed, and this motivates the work in Ref. [
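A minimal sketch of this refinement step with OpenCV's bilateral filter is given below; the filter diameter and the two sigma values are illustrative defaults, not the parameters used in the surveyed work.

```python
import cv2
import numpy as np

def refine_with_bilateral(t_coarse, d=9, sigma_color=0.1, sigma_space=15):
    """Edge-preserving smoothing of the coarse transmission map.
    OpenCV's bilateralFilter expects a single-channel float32 input."""
    return cv2.bilateralFilter(t_coarse.astype(np.float32),
                               d, sigma_color, sigma_space)
```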
The final algorithm analyzed in this section is the work presented in Ref. [
A key contribution of the work in Ref. [
In this section, in order to verify the performance of the analyzed algorithms, we implement them in a test-bed environment. The test-bed is realized to ensure fairness of analysis while allowing every algorithm to perform in its most natural manner, without hindrance. The test-bed and experimental evaluation are carried out on a Pentium quad-core processor at 2.8 GHz with 4 GB of internal memory. The experimental results presented in this section are divided into qualitative visual experiments and objective quantitative experiments. The first category discusses the visual impact of the dehazing algorithms on the image, while the second addresses the effect of the algorithms on the intrinsic properties of the images. The computational speed of each algorithm is also discussed. For ease of reference and simplicity, the algorithms presented and discussed are labeled as: Ref. [
As illustrated in the presented results, some degree of dehazing is achieved regardless of which algorithm is applied. Closer observation shows, however, that although the dehazing effect achieved by He et al. and He et Sun is stronger than that of Yu et al., these algorithms produce results riddled with a bluish sheen, a drawback attributed to their insufficient estimation of the atmospheric light parameter. The results attained on the "trees.jpg" dataset follow the same trend, with Ketan et al. achieving the best balance between dehazing effect and contrast. The results of He et al. and He et Sun follow closely in performance, with the least enhanced results attained by Yu et al.
While the previous subsection compared the algorithms using visual effect as a metric, this subsection carries out a more quantitative comparison, in an attempt to bring to light how these algorithms truly perform and how they affect the non-visual components of the image. In the experiments carried out here, we examine how the various dehazing algorithms affect image quality by assessing the following image metrics (a computation sketch follows the list):
1. Mean Squared Error (MSE)
2. Peak Signal to Noise Ratio (PSNR)
3. Signal to Noise Ratio (SNR)
4. Structural Similarity Index (SSIM)
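The four metrics can be extracted as sketched below, assuming scikit-image for MSE, PSNR and SSIM and a direct decibel computation for SNR; both images are assumed to be float arrays in [0, 1].

```python
import numpy as np
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

def quality_metrics(reference, dehazed):
    """Full-reference metrics between a haze-free reference image and a
    dehazed result."""
    mse = mean_squared_error(reference, dehazed)
    psnr = peak_signal_noise_ratio(reference, dehazed, data_range=1.0)
    # SNR: signal power of the reference over the error power, in dB
    snr = 10 * np.log10((reference ** 2).mean() / mse)
    ssim = structural_similarity(reference, dehazed,
                                 channel_axis=-1, data_range=1.0)
    return mse, psnr, snr, ssim
```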
Since almost all of the metrics applied here are full-reference in nature, their implementations require both the haze-free reference image and the dehazed output of each algorithm in order to extract metrics that are truly representative of dehazing performance. For this reason, and because paired haze-free and hazy image datasets are virtually non-existent to the best of our knowledge, we select haze-free outdoor images and synthesize haze on them using a hazing function similar to that applied by Ketan et al. With this dataset, we then perform dehazing with the various algorithms and extract the metrics, using the original images as reference and the dehazed outputs as target.
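Since the exact hazing function is not reproduced here, the following stand-in applies Equation (1) forward, with a constant transmission taking the place of a depth-dependent exp(−βd(x)); the constants A = 0.9 and t = 0.5 are assumptions for illustration only.

```python
import numpy as np

def synthesize_haze(J, A=0.9, t=0.5):
    """Apply Equation (1) forward to a haze-free image J. A constant
    transmission t stands in for exp(-beta * depth) when no depth map
    is available."""
    return np.clip(J * t + A * (1.0 - t), 0.0, 1.0)
```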
From the results presented in
| Algorithms | MSE | PSNR (dB) | SNR (dB) | SSIM |
|---|---|---|---|---|
| He et Sun | 7205.8 | 9.5 | 5.9 | 0.6 |
| Ketan et al. | 6120.5 | 9.7 | 6.2 | 0.7 |
| Yu et al. | 9607.5 | 6.4 | 5.2 | 0.3 |
| He et al. | 8403.7 | 8.2 | 5.7 | 0.5 |
in this area, followed by the algorithm presented in Yu et al. Since some image dehazing systems are required to operate online, dehazing speed is sometimes a requirement of the dehazing scheme, and we therefore compare the various state-of-the-art algorithms in terms of dehazing speed. This comparison is presented in the computational time table below.
This paper has studied dark channel prior dehazing from a review perspective. It first presented a theoretical framework of the dark channel image dehazing research field, along with its core concepts and theories, establishing that the dark channel prior has indeed addressed problems associated with already well-established dehazing schemes. The paper then presented several state-of-the-art algorithms and brought to light their major contributions. In order to provide a useful reference for researchers in the field, we implemented these algorithms and compared them with each other both theoretically and through experimental evaluation.

The experimental results suggest that while all of the state-of-the-art algorithms achieve some level of dehazing, some outperform others in terms of visual effect, computational speed and image quality improvement. In terms of visual and perceptive improvements, the work by He et Sun and Ketan et al. achieves the highest level of performance. While He et Sun enhances the contrast and sharpness of the image with minimal reduction of haze density, Ketan et al. excels at haze intensity reduction while achieving little in the way of sharpness or contrast enhancement. Extending this observation to the results of the intrinsic image parameter experiments, we conclude that both algorithms are applicable where dehazing serves as a low-level operation and a foundation for more complex, higher-level algorithms such as target detection and recognition; both algorithms improve not only the visual quality but also the intrinsic image properties, restoring them almost to their original state. While the algorithms of He et al. and Yu et al. do not attain efficiencies comparable to He et Sun and Ketan et al., they have also proven applicable, since their dehazing performance is stable and acceptable.

The computation experiments suggest that the algorithm of He et al. is more applicable to offline systems, while Yu et al. and He et Sun are more applicable to online systems, owing to the computational load and time required to resolve single image dehazing. Finally, the algorithm of Ketan et al., which requires a training phase prior to dehazing, could be applicable in both online and offline settings.
| Algorithms | Computational time: Toys.jpg (s) | Computational time: Trees.jpg (s) |
|---|---|---|
| He et al. | 551.717 | 1650.429 |
| He et Sun | 0.952 | 0.7993 |
| Ketan et al. | 1.536 | 2.214 |
| Yu et al. | 2.152 | 4.172 |
In online systems, however, prior knowledge of the target scene would be required in order to pre-train the scheme in an offline environment before its online deployment.
This work was supported by the Natural Science Foundation of Guangdong Province (Grant No. 2015A030310278) and the Fundamental Research Funds for the Central Universities (Grant No. 2015ZZ131).