Journal of Computer and Communications
Vol. 05, No. 11 (2017), Article ID: 79303, 12 pages
10.4236/jcc.2017.511005

A Novel Dark-Channel Dehazing Algorithm Based on Adaptive-Filter Enhanced SSR Theory

Ebtesam Mohameed Alharbi1,2, Hong Wang1,2*, Peng Ge1,2

1School of Electronics and Information, South China University of Technology, Guangzhou, China

2Engineering Research Center for Optoelectronics of Guangdong Province, School of Science and Opto-Electronics, South China University of Technology, Guangzhou, China

Copyright © 2017 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: September 1, 2017; Accepted: September 22, 2017; Published: September 25, 2017

ABSTRACT

Low visibility on foggy days results in blurred, low-contrast images with color distortion, which adversely affects image and video monitoring systems and leads to sub-optimal performance. The causes of foggy image degradation were explained in detail, and the image enhancement and image restoration approaches to defogging were introduced. The study proposed an enhanced and advanced form of the improved Retinex theory-based dehazing algorithm. The novelty of the proposed algorithm lies in the manner in which the improved single-scale Retinex (SSR) scheme was efficiently combined with the dark-channel prior into a single dehazing framework. The proposed approach performed the first stage of dehazing within the dark channel domain through implementation with an adaptive filter. This novel approach allowed the dark channel features to be efficiently refined and boosted, a scheme which, according to the obtained results, significantly improved dehazing in the later stages. Experimental results showed that this approach traded off little dehazing speed for efficiency, which makes the proposed algorithm a strong candidate for real-time systems owing to its capability to realize efficient dehazing at considerably rapid speeds. Finally, experimental results were provided to validate the superior performance and efficiency of the proposed dehazing algorithm.

Keywords:

Retinex Theory, Dehazing, Image Enhancement and Image Restoration, Image Defogging

1. Introduction

Image dehazing is an interdisciplinary challenge which involves not only machine vision but also meteorology, optics, and some aspects of computer graphics. Haze and fog limit the visual range within the atmosphere and have the potential to significantly reduce the contrast of the target scene. A core objective of image dehazing is the improvement of visibility and the recovery of the constituent colors, as well as of all other constituent image parameters, as if the image had been captured under favorable conditions. The core benefit of image dehazing lies in the manner in which it allows computer vision and human vision systems to capitalize on such improved and refined images across various applications. Additionally, most computer vision applications, ranging from low-level image analysis schemes to high-level object recognition, tend to assume that the input image is the ultimate and most reliable source of the scene radiance. This establishes that the performance of computer vision algorithms, no matter how high-level they may be, has a strong dependence upon the quality and reliability of the input image. Such algorithms will invariably suffer from biased and corrupted input images resulting from the presence of haze or fog within the target scene.

Research work dedicated to image dehazing has been ongoing for several decades. We can loosely group the current state-of-the-art into two main categories. First, there are the image enhancement-based schemes that do not consider the physical modeling of the image or image formation principles. Such schemes only seek to boost image quality in order to please the viewer. Enhancement-based dehazing schemes have included histogram equalization [1] and single- and multi-scale Retinex [2], to mention but a few. While successful in enhancing the overall appeal of the image, such schemes fail to address intrinsic image properties that may be crucial for computer vision algorithms. That is to say, by enhancing only the visual appeal of the image, they do not address actual feature enhancement and reconstruction. This means that such schemes may be useful in other areas such as multimedia but fail to meet the requirements of actual machine vision applications. The latter category of dehazing schemes, more closely related to the work presented in this paper, comprises the dehazing schemes based on image recovery principles. Such algorithms model the atmospheric scattering and rely heavily on supplementary scene information and priors. Examples of such schemes are found in [3]-[11].

Finally, it has been established that even with depth-sensing technologies and other means by which depth information could be extracted directly from the scene, most machine vision systems continue to rely on intensity and color images for the extraction of depth information. This suggests that the presence of haze and fog at the time of image acquisition further impacts algorithms that attempt to make sense of, or draw on, scene depth information. It is therefore arguable that the removal of haze can help recover depth information and ultimately benefits many vision algorithms. Consequently, the presence of haze and fog within an image could itself be a premise for the extraction of useful scene depth information.

The core necessity for image enhancement and dehazing stems from the fact that the atmosphere is never fully free of particles. For pure air, the visible range is expected to be between 277 km and about 348 km [12] [13] when the curvature of the earth's surface is not considered. While this may hold for theoretical analysis, real-world images have shown ranges that are much shorter than these theoretical values. Using the international visibility code for meteorological range as a metric, visibility ranges from under 50 meters in dense fog up to over 50 km for air conditions considered exceptionally clear. These metrics reflect the true conditions of the air in our daily lives, and while we focus on digital applications of captured images, we present this theoretical background to allow an easy definition and understanding of the motivation behind image dehazing and defogging. It is therefore deducible that dehazing and defogging are two techniques that can be applied towards enhancing the visibility of a scene for the human eye as well as for computer and machine vision systems. Beyond human viewing, such enhancement techniques have the potential to smooth and boost subsequent processing over areas of an image under various weather conditions, both favorable and unfavorable.

To truly understand the origins and physical constituents of haze formation in images, the scattering coefficient β_sc is a key contributing factor that must be thoroughly understood in order to establish an effective background.
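For background, β_sc enters the widely used atmospheric scattering (Koschmieder) model that underlies restoration-based dehazing schemes such as [3]-[11]. The formulation below is the standard textbook form rather than an equation given explicitly in this paper, and the symbols J, A, t, and d are introduced here only for this illustration.

```latex
% Standard haze formation model (textbook form, not taken verbatim from this paper):
%   I(x)  observed hazy intensity,   J(x)  scene radiance,   A  atmospheric light,
%   t(x)  transmission,              d(x)  scene depth,      \beta_{sc}  scattering coefficient.
\begin{align}
  I(x) &= J(x)\, t(x) + A\,\bigl(1 - t(x)\bigr), \\
  t(x) &= e^{-\beta_{sc}\, d(x)}.
\end{align}
```

The exponential decay of the transmission t(x) with depth d(x) is why distant scene points appear more washed out: the larger β_sc, the shorter the distance over which scene radiance is replaced by atmospheric light.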

The goal is the production of an image of a specific scene that is free of haze and its effects, even though the original image acquired from the scene contains some amount of haze. This is depicted in Figure 1 below.

2. Related Theory

2.1. Algorithm Overview

We briefly revisit the original version of the SSR theory in order to establish the context upon which the advanced components of the algorithm are built.


Figure 1. (a) An original hazy image and (b) the result of dehazing achieved with the state-of-the-art.

Equation (1) embodies the dehazing problem according to the SSR theory. Here, I(x, y) represents the perceived image intensity, R(x, y) expresses the reflectance of the object, and L(x, y) specifies the illumination, whose dynamic range varies as the surroundings change. According to the equation, the noise n(x, y) forms part of the atmospheric light and participates directly in the imaging process.

I(x, y) = R(x, y) L(x, y) + n(x, y)    (1)

Due to the additive manner in which it participates, the noise can be estimated and removed by means of a low-pass filter. By performing the single scale Retinex based dehazing, guided by the atmospheric scattering model, the n(x, y) and L(x, y) components are estimated and the final dehazing result is obtained.
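As a concrete illustration of this step, the sketch below implements the estimation just described under the assumption that the low-pass filter is a Gaussian; the scale `sigma` and the final normalization are illustrative choices, not values prescribed by the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(intensity, sigma=80.0, eps=1e-6):
    """Single-scale Retinex sketch following Equation (1).

    The slowly varying illumination L(x, y) (together with the additive
    term n(x, y)) is approximated by low-pass filtering the observed
    intensity I(x, y); the reflectance is then recovered in the log domain.
    `sigma` is an assumed, tunable Gaussian scale.
    """
    intensity = intensity.astype(np.float64) + eps
    illumination = gaussian_filter(intensity, sigma=sigma) + eps  # low-pass estimate of L
    reflectance = np.log(intensity) - np.log(illumination)        # log I - log L ~ log R
    reflectance -= reflectance.min()                              # rescale to [0, 1] for display
    return reflectance / (reflectance.max() + eps)
```

Applied per color channel, the output of this stage serves as the intermediary result that the next section feeds into the dark channel computation.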

The proposed advanced form of the single scale Retinex theory-based dehazing algorithm is presented graphically in Figure 2 below. The figure depicts the functional components of the proposed dark-channel-based SSR dehazing algorithm: the components consistent with the original improved SSR algorithm are contained within the blue block, while the red block designates the new components introduced into the original framework.

2.2. Estimation of the Dark Channel Prior

In the advanced form of the algorithm, however, this dehazing output becomes the intermediary result for initializing the higher-level operations. In this second phase, the dark channel prior approach originally proposed in [14] is applied to extract the dark channel components of the input image (the output from the first stage of the algorithm). The dark channel is computed as follows:

J^dark(k) = min_C ( min_{y ∈ Ω(k)} J^C(y) )    (2)

In Equation (2), J^C represents a color channel of J, while Ω(k) represents the local patch centered around k. The results at this point are depicted in Figure 3 below.
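A minimal sketch of Equation (2) follows, assuming a square patch of side `patch_size` (the patch size is not specified in the text) and an input image normalized to [0, 1].

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, patch_size=15):
    """Dark channel of Equation (2): a per-pixel minimum over the color
    channels followed by a local minimum over the patch Omega(k).

    `image` is an H x W x 3 array in [0, 1]; `patch_size` is an assumed
    side length for the square patch (not stated in the text).
    """
    per_pixel_min = image.min(axis=2)                       # min over color channels J^C
    return minimum_filter(per_pixel_min, size=patch_size)   # min over the patch Omega(k)
```

The two nested minima mirror the two panels of Figure 3: the per-pixel minimum over the RGB channels corresponds to (b), and the subsequent patch-wise minimum yields the dark channel in (c).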

Figure 2. A graphical illustration of the functional components of the advanced dark channel SSR dehazing algorithm.


Figure 3. In the proposed form of the SSR dehazing algorithm, the output from the standard form (a) is used as the input to the second stage, the minimum of the RGB values is computed as (b), and finally the dark channel prior is yielded as (c).

2.3. Design and Implementation of Adaptive Filters

Once the dark channel prior of the image has been successfully computed, the algorithm proceeds to the adaptive filtering component. This component plays a major role in enhancing the features of the dark channel prior-based image and in boosting the efficiency of the subsequent components of the algorithm. From a general perspective, a filter is termed adaptive when it is capable of changing its filtering parameters (coefficients) over time, in order to allow adaptation to the image dynamics. To satisfy this task, an adaptive filter must self-learn. As the input image arrives at the filter, the adaptive filter coefficients adjust themselves in order to achieve an optimal outcome, such as identifying an unknown filter component or canceling out noise in the input image.

In designing an adaptive filter, several filter properties must be taken into account in order to realize filters that perform optimally as adaptive filters. These benchmark properties are briefly presented below.

• Filter Convergence Rate:

The convergence rate determines the rate at which the filter converges to its resultant state. Usually, a faster convergence rate is the desired characteristic of an adaptive system. The convergence rate is not, however, independent of the other performance characteristics. There is a trade-off: an improved convergence rate costs other performance criteria, and improving other criteria decreases convergence performance. For example, if the convergence rate is increased, the stability characteristics will decrease, making the system more likely to diverge instead of converging to the proper solution. Likewise, a decrease in convergence rate can make the system more stable. This shows that the convergence rate can only be considered in relation to the other performance metrics, not by itself with no regard to the rest of the system.

• Minimum Mean Square Error:

The minimum mean square error (MSE) is a metric indicating how well a system can adapt to a given solution. A small minimum MSE is an indication that the adaptive system has accurately modeled, predicted, adapted and/or converged to a solution for the system. A very large MSE usually indicates that the adaptive filter cannot accurately model the given system, or that the initial state of the adaptive filter is an inadequate starting point for convergence. A number of factors determine the minimum MSE, including, but not limited to: quantization noise, the order of the adaptive system, measurement noise, and the error of the gradient due to the finite step size.

• Computational Complexity:

Computational complexity is particularly important in real time adaptive filter applications. When a real time system is being implemented, there are hardware limitations that may affect the performance of the system. A highly complex algorithm will require much greater hardware resources than a simplistic algorithm.

• Stability:

Stability is the most important performance measure for the adaptive system. By the nature of the adaptive system, there are very few completely asymptotically stable systems that can be realized. In most cases, the systems that are implemented are marginally stable, with the stability determined by the initial conditions, the transfer function of the system, and the step size of the input.

• Robustness:

The robustness of a system is directly related to the stability of a system. Robustness is a measure of how well the system can resist both input and quantization noise.

• Filter Length:

The filter length of the adaptive system is inherently tied to many of the other performance measures. The length of the filter specifies how accurately a given system can be modeled by the adaptive filter. In addition, the filter length affects the convergence rate, by increasing or decreasing computation time; it can affect the stability of the system at certain step sizes; and it affects the minimum MSE. If the filter length of the system is increased, the number of computations will increase, decreasing the maximum convergence rate. Conversely, if the filter length is decreased, the number of computations will decrease, increasing the maximum convergence rate. With regard to stability, increasing the length of the filter for a given system may add additional poles or zeros that are smaller than those that already exist; in this case, the maximum step size, or maximum convergence rate, will have to be decreased to maintain stability. Finally, if the system is under-specified, meaning there are not enough poles and/or zeros to model the system, the mean square error will converge to a nonzero constant. If the system is over-specified, meaning it has too many poles and/or zeros for the system model, the error has the potential to converge to zero, but the increased calculations will limit the maximum achievable convergence rate.

Now we proceed to formulate a general form of the adaptive filter applied at this stage of the algorithm.

y(n) = Σ_{i=0}^{L−1} w_i(n) x(n − i) = W^T(n) X(n)    (3)

where,

Input vector:

X(n) = [x(n), x(n − 1), …, x(n − L + 1)]

Parameter:

W(n) = [w_0(n), w_1(n), …, w_{L−1}(n)]^T
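Equation (3) only defines the filter output; the coefficient-update rule is not stated here, so the sketch below assumes the common least-mean-squares (LMS) recursion w(n+1) = w(n) + μ e(n) X(n) purely for illustration. The signal names x and d and the step size mu are hypothetical placeholders, not quantities defined by the paper.

```python
import numpy as np

def lms_adaptive_filter(x, d, filter_length=8, mu=0.01):
    """Adaptive FIR filter of the form y(n) = W^T(n) X(n) (Equation (3)).

    The coefficient update below is the standard LMS rule, assumed here
    for illustration; the paper does not state which adaptation rule it
    uses.  x: input signal, d: desired signal, mu: step size.
    """
    x = np.asarray(x, dtype=np.float64)
    d = np.asarray(d, dtype=np.float64)
    n_samples = len(x)
    w = np.zeros(filter_length)              # W(n) = [w_0(n), ..., w_{L-1}(n)]^T
    y = np.zeros(n_samples)                  # filter output
    e = np.zeros(n_samples)                  # estimation error
    for n in range(filter_length - 1, n_samples):
        x_vec = x[n - filter_length + 1 : n + 1][::-1]   # X(n) = [x(n), ..., x(n-L+1)]
        y[n] = w @ x_vec                                 # Equation (3)
        e[n] = d[n] - y[n]
        w = w + mu * e[n] * x_vec                        # LMS coefficient update
    return y, e, w
```

The step size mu governs the convergence-rate/stability trade-off discussed above: a larger mu converges faster but risks divergence, while a smaller mu is slower and more stable.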

Once adaptive filtering has been applied to the dark channel components of the still-hazy image, a pixel-level subtraction between the hazy input image and the adaptive-filtered dark channel components is performed. This yields the final output of the dehazing algorithm, as the operation removes the haze contribution systematically until a final haze-free output image is obtained. A data-flow sketch of the complete pipeline is given below, after which we present the implementation and experimental results of the proposed scheme.
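The following sketch only illustrates the data flow just described; it reuses the single_scale_retinex() and dark_channel() sketches from earlier, and a Gaussian smoothing stands in as a placeholder for the adaptive filtering stage. It is not the authors' exact implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dehaze_pipeline(hazy_rgb, ssr_sigma=80.0, patch_size=15, filter_sigma=5.0):
    """Data-flow sketch of the proposed scheme (illustrative only).

    1. SSR-style enhancement of the hazy input, per color channel.
    2. Dark channel of the intermediate result (Equation (2)).
    3. A smoothing filter stands in for the adaptive filtering stage.
    4. Pixel-level subtraction of the filtered dark channel from the input.
    `hazy_rgb` is an H x W x 3 float array in [0, 1]; reuses
    single_scale_retinex() and dark_channel() defined above.
    """
    ssr = np.dstack([single_scale_retinex(hazy_rgb[..., c], sigma=ssr_sigma)
                     for c in range(3)])
    dark = dark_channel(ssr, patch_size=patch_size)
    dark_filtered = gaussian_filter(dark, sigma=filter_sigma)   # adaptive-filter placeholder
    dehazed = hazy_rgb - dark_filtered[..., None]               # pixel-level subtraction
    return np.clip(dehazed, 0.0, 1.0)
```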

3. Experimental Results and Discussion

This section presents the implementation results of the algorithm along with experimental verification of its performance. We cover the implementation results, a qualitative comparison with the state-of-the-art, and image quality assessment results. The algorithm is implemented on the platform shown in Table 1.

With implementation realized on the target platform, we first present some qualitative comparison results with the state-of-the-art. The proposed algorithm was implemented on the Matlab platform and applied in verification and comparison experiments against the state-of-the-art. The Matlab platform was selected for implementation and verification because it allows algorithms to be realized with minimal complexity and overhead. This allows the algorithm's true performance to be evaluated without alterations in behavior resulting from the test-bed platform. In both the qualitative and the image quality assessment experiments, the dehazing datasets used in the state-of-the-art were also applied here for fairness of comparison. The results obtained from the experiments are presented in Figures 4-6.


Figure 4. Visual qualitative comparison results with (a) Input Haze Image (b) Histogram Equalization (c) SSR algorithm (d) Improved Retinex algorithm and (e) Proposed on the Canon dataset.


Figure 5. Visual qualitative comparison results with (a) Input Haze Image (b) Histogram Equalization (c) SSR algorithm (d) Improved Retinex algorithm and (e) Proposed on the Cones dataset.


Figure 6. Visual qualitative comparison results with (a) Input Haze Image (b) Histogram Equalization (c) SSR algorithm (d) Improved Retinex algorithm and (e) Proposed on the Dolls dataset.

Table 1. Platform for implementation and verification of the proposed algorithm.

The qualitative assessment results demonstrate the visual performance of the algorithm compared with the state-of-the-art, while the image quality assessment results demonstrate the feature-level improvement achieved by the proposed algorithm over the state-of-the-art. The qualitative visual experiments demonstrate the superior performance attained by the proposed algorithm in terms of dehazing accuracy. It is clear from the results that the least efficient performance is achieved by histogram equalization. The single scale Retinex algorithm outperforms histogram equalization but does not attain the degree of recovery achieved by the improved Retinex on any of the datasets. The unique area in which the improved version of the Retinex algorithm outperforms the classical form is depth recovery: the improved algorithm is more capable of removing haze over a longer depth range than the classical algorithm. However, despite its superior performance, the improved form of the algorithm still suffers from an inability to fully recover the color features of the restored image. While haze features are successfully removed by the improved algorithm, some color features are also lost along the dehazing pipeline, and this highlights the major contribution of the proposed algorithm.

The proposed advanced form of the algorithm outperforms the improved Retinex algorithm in this area. With the implementation of the proposed algorithm, haze pixels are efficiently removed without significant loss of the color features that are crucial for certain high-level machine vision and pattern recognition tasks such as object recognition. The superior performance of the proposed technique is attributed to the adaptive filter implemented within the framework, which allows the dehazing to adapt to the dynamic image features. In this way, the haze pixels are targeted and filtered out efficiently without other intrinsic image features being affected. Table 2 shows the image assessment results below.

As already established by the qualitative comparison results, the same trend holds for the quantitative image assessment experiments. The proposed algorithm outperforms the others in almost all areas, with the improved algorithm achieving second place in terms of performance. While the strongest area for the proposed algorithm remains its ability to restore the true color features of the target image, the improved form of the algorithm slightly outperforms the proposed one in the area of edge strength. This lack of edge strength when implementing the proposed algorithm can be attributed to the smoothing effect of the filtering operation applied to the dark channel image. Future work can therefore extend the scheme proposed here by targeting the filtering component and proposing more efficient filtering schemes.

Table 2. Image quality assessment comparison with the state-of-the-art.

This would allow the smoothing problem to be addressed and further enhance the performance of the proposed dehazing scheme.

4. Conclusion

This work addresses the dehazing problem from the perspective of the Retinex theory. The paper proposed a new single-image dehazing algorithm based on the single scale Retinex (SSR) algorithm, combined with the theory of atmospheric scattering. While the simple form of the algorithm is sufficient to improve the information entropy of the image and remove the impact of haze, it fails to extend this performance to the far field of the image. This deficiency is alleviated in the improved form of the algorithm. Compared with the SSR algorithm and the histogram equalization algorithm, the improved form shows many advantages in recovering details and colors, in terms of both objective and subjective assessment.

Acknowledgements

This work was supported by the Natural Science Foundation of Guangdong Province (2015A030310278, 2016A030313473), the Key Technologies R&D Program of Guangdong Province (2016B090918057), the Key Technologies R&D Program of Guangzhou City (201604046021, 201604010008), and the Hunan Province Key Laboratory of Videometric and Vision Navigation.

Cite this paper

Alharbi, E.M., Wang, H. and Ge, P. (2017) A Novel Dark-Channel Dehazing Algorithm Based on Adaptive-Filter Enhanced SSR Theory. Journal of Computer and Communications, 5, 60-71. https://doi.org/10.4236/jcc.2017.511005

References

1. Kim, J.H., Sim, J.Y. and Kim, C.S. (2011) Single Image Dehazing Based on Contrast Enhancement. ICASSP, 1273-1276.

2. Land, E.H. and McCann, J.J. (1971) Lightness and Retinex Theory. Journal of the Optical Society of America, 61, 1-11. https://doi.org/10.1364/JOSA.61.000001

3. Tan, R.T. (2008) Visibility in Bad Weather from a Single Image. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 2008.

4. Fattal, R. (2008) Single Image Dehazing. ACM Transactions on Graphics, 27, 1-9. https://doi.org/10.1145/1360612.1360671

5. Tarel, J.-P. and Hautière, N. (2009) Fast Visibility Restoration from a Single Color or Gray Level Image. Proceedings of the IEEE 12th International Conference on Computer Vision, 2201-2208.

6. He, K.M., Sun, J. and Tang, X.O. (2010) Single Image Haze Removal Using Dark Channel Prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33, 2341-2353.

7. Tao, S.Y., Feng, H.J., Xu, Z.H. and Li, Q. (2012) Image Degradation and Recovery Based on Multiple Scattering in Remote Sensing and Bad Weather Condition. Optics Express, 20, 16584-16595. https://doi.org/10.1364/OE.20.016584

8. Fang, S., Xia, X., Huo, X. and Chen, C. (2014) Image Dehazing Using Polarization Effects of Objects and Airlight. Optics Express, 22, 19523-19537. https://doi.org/10.1364/OE.22.019523

9. Liu, F., Cao, L., Shao, X., Han, P. and Bin, X. (2015) Polarimetric Dehazing Utilizing Spatial Frequency Segregation of Images. Applied Optics, 54, 8116-8122. https://doi.org/10.1364/AO.54.008116

10. Mudge, J. and Virgen, M. (2013) Real Time Polarimetric Dehazing. Applied Optics, 52, 1932-1938. https://doi.org/10.1364/AO.52.001932

11. Yeh, C., Kang, L., Lee, M. and Lin, C. (2013) Haze Effect Removal from Image via Haze Density Estimation in Optical Model. Optics Express, 21, 27127-27141. https://doi.org/10.1364/OE.21.027127

12. Hulburt, E.O. (1941) Optics of Atmospheric Haze. Journal of the Optical Society of America, 31, 467-476. https://doi.org/10.1364/JOSA.31.000467

13. Middleton, W.E.K. (1952) Vision through the Atmosphere. University of Toronto Press, Toronto.

14. He, K., Sun, J. and Tang, X. (2011) Single Image Haze Removal Using Dark Channel Prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33, 2341-2353. https://doi.org/10.1109/TPAMI.2010.168