Journal of Computer and Communications
Vol. 06, No. 05 (2018), Article ID: 84957, 13 pages
DOI: 10.4236/jcc.2018.65009

A New Method of Multi-Focus Image Fusion Using Laplacian Operator and Region Optimization

Chao Wang, Rui Yuan*, Yuqiu Sun, Yuanxiang Jiang, Changsheng Chen, Xiangliang Lin

School of Information and Mathematics, Yangtze University, Jingzhou, China

Copyright © 2018 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: April 24, 2018; Accepted: May 27, 2018; Published: May 30, 2018

ABSTRACT

With the continuous advancement of imaging sensors, a host of new issues has emerged. A major problem is how to find focus areas more accurately for multi-focus image fusion. Multi-focus image fusion extracts the focused information from the source images to construct a globally in-focus image that contains more information than any single source image. In this paper, a novel multi-focus image fusion method based on the Laplacian operator and region optimization is proposed. The evaluation of image saliency based on the Laplacian operator can easily distinguish the focused region from the out-of-focus region, and the decision map obtained by Laplacian processing contains less residual information than those of other methods. To obtain a precise decision map, the focus area and its edges are optimized using region connectivity and edge detection. Finally, the original images are fused according to the decision map. Experimental results indicate that the proposed algorithm outperforms a series of other algorithms in terms of both subjective and objective evaluations.

Keywords:

Image Fusion, Laplacian Operator, Multi-Focus, Region Optimization

1. Introduction

Image fusion is one of the most important techniques for extracting and integrating as much information as possible for image analysis tasks such as surveillance, target tracking, target detection and face recognition [1] [2] . It is often applied to multi-focus image processing. Due to the limited depth of field of an optical lens, objects outside the focused region are blurred in the process of optical imaging [3] . Multi-focus image fusion is an effective technique for obtaining a fully focused image: it integrates the focused areas from images taken at different focal depths. Many multi-focus image fusion algorithms have been proposed so far, and they can be divided into two categories: spatial domain fusion and transform domain fusion [4] .

In the transform domain, multi-scale decomposition closely resembles the coarse-to-fine understanding of scenes in the human visual system and in computer vision, and it introduces no block effect in the fusion process [5] . For these reasons, transform-domain algorithms have received wide attention from researchers in multi-focus image fusion and in the image fusion field generally. Current research on multiscale image fusion is mainly focused on multiscale analysis tools and fusion rules. In recent years, researchers have proposed many tools for multiscale analysis of images, including the pyramid transform, the wavelet transform and other multiscale geometric analysis methods.

Spatial-domain methods perform image fusion according to the spatial feature information of image pixels [6] . Since a single pixel cannot represent spatial feature information, a block-based approach is generally used. This approach works well on images rich in detail. However, it is prone to misjudgment in flat areas, the block size is difficult to select, and small discontinuous pieces appear at image edges, resulting in a serious block effect. To address these shortcomings of block-based fusion, several improved schemes have been proposed. V. Aslantas and R. Kurban proposed a differential evolution algorithm to determine the size of the segmented image blocks, which to some extent solved the problem of block-size selection [6] . A. Goshtasby and others compute the corresponding blocks of the fused image as a weighted sum of the sub-blocks, introducing a weighting factor for each corresponding block in the source images [7] . H. Hariharan et al. defined the focal connectivity of the same focal plane and segmented the source images according to this connectivity [8] . In addition to these spatial-domain fusion algorithms, many scholars have recently proposed fusion methods based on focus region detection.

Across the literature, one of the key problems of spatial-domain fusion algorithms is how to measure the sharpness, or saliency level, of blocks or regions. To address this problem, a new multi-focus image fusion method in the spatial domain is proposed, based on the Laplacian operator and region optimization. The saliency level of regions is the main subject of this paper. Methods for evaluating the saliency level of an image include the Tenengrad gradient function [9] , the Laplacian gradient function [10] , the sum-modulus-difference (SMD) function [11] and the energy gradient function [12] , among others. The image is first processed with the best of these saliency evaluation methods to obtain a rough focused region. The focused region is then optimized according to the focal connectivity of the focal plane and edge detection. Finally, the multi-focus image fusion is completed using the final decision map.

2. Materials and Methodology

2.1. Materials

In order to demonstrate the superiority of the proposed fusion method, three sets of images are selected for multi-focus image fusion, as shown in Figures 1(a)-(c). The images on the top row are mainly focused on the foreground, while the images on the bottom row are mainly focused on the background. To evaluate the performance of the fusion method, the proposed method is compared with several current mainstream multi-focus image fusion methods based on DWT [13] , NSCT [14] , OPT [15] and LP [16] . All experiments are carried out in MATLAB 2016a.

2.2. The Evaluation of Image Saliency

In no-reference image quality evaluation [17] , the saliency of an image is an important index of its quality and corresponds well to human subjective perception. If the saliency of an image is low, the image appears blurred. In this paper, the Laplacian gradient [10] is used.

The Laplacian operator is an important algorithm in image processing; it is an edge-point detection operator that is independent of edge direction. The Laplacian operator is a second-order differential operator. For a continuous two-variable function f(x, y), the Laplacian is defined as

$$\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} \quad (1)$$

For digital images, the Laplacian operation can be simplified as

$$g(i, j) = 4 f(i, j) - f(i+1, j) - f(i-1, j) - f(i, j+1) - f(i, j-1) \quad (2)$$

At the same time, the above formula can be expressed in convolution form, that is

$$g(i, j) = \sum_{r=-k}^{k} \sum_{s=-l}^{l} f(i-r, j-s)\, H(r, s) \quad (3)$$

In the above formula, $i, j = 0, 1, 2, \ldots, N-1$; $k = 1$, $l = 1$. H(r, s) can take many values, one of which is

$$H_1 = \begin{bmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{bmatrix}$$

Experiments show that the higher the image saliency, the greater the sum of the absolute responses after the image is processed by the Laplacian operator. Therefore, the image saliency D(f) based on the Laplacian gradient function is defined as follows:


Figure 1. Images for multi-focus image fusion. (a) Backgammon, the upper one is foreground focus and the lower one is background focus; (b) Clock, the upper one is foreground focus and the lower one is background focus; (c) Lab, the upper one is foreground focus and the lower one is background focus.

$$D(f) = \sum_{y} \sum_{x} |g(x, y)| \quad \big(g(x, y) > T\big) \quad (4)$$

Among them, g(x, y) is the Laplacian convolution response at pixel (x, y), and T is a threshold.
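As a concrete illustration, a minimal sketch of Formula (4) in Python/NumPy (rather than the MATLAB used in the experiments) might look as follows; the threshold T and the border handling are our assumptions, since the paper does not specify them.

```python
import numpy as np
from scipy.ndimage import convolve

# The 3x3 Laplacian template H1 from the paper.
H1 = np.array([[0, -1, 0],
               [-1, 4, -1],
               [0, -1, 0]], dtype=float)

def saliency_D(image, T=0.0):
    """Formula (4): sum of |g(x, y)| over responses exceeding the
    threshold T (T and the border mode are assumed, not given
    in the paper)."""
    g = np.abs(convolve(image.astype(float), H1, mode='nearest'))
    return g[g > T].sum()
```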

Using the value of D(f), images of different clarity can easily be separated. Next, it is applied to the saliency decision of different regions of an image. Accordingly, the region saliency of an image can be defined as:

$$D_I(i, j) = D\big(I(i-n : i+n,\; j-n : j+n)\big) \quad (5)$$

Among them, D is the saliency function based on the Laplacian gradient operator, $D_I$ is the saliency matrix of image I, and $(2n+1) \times (2n+1)$ is the size of the processing template.

In multi-focus image processing, the saliency matrices $D_{I_1}$ and $D_{I_2}$ of the differently focused images are computed, and a decision matrix $M_{\text{decision}}$ is obtained by comparing them:

$$M_{\text{decision}} = \big(D_{I_1} > D_{I_2}\big) \quad (6)$$

For various reasons, the decision map contains some noise and erroneous judgments, which affect the quality of image fusion. The erroneous judgments are discussed later in the article.
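A sketch of Formulas (5) and (6) under the same assumptions: the window radius n is a free parameter, and the window sum of the thresholded Laplacian responses is computed with a uniform filter.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

H1 = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)

def region_saliency(image, n=3, T=0.0):
    """Formula (5): D(f) evaluated on the (2n+1) x (2n+1) window
    centred at every pixel; n and T are assumed parameters."""
    g = np.abs(convolve(image.astype(float), H1, mode='nearest'))
    g[g <= T] = 0.0
    w = 2 * n + 1
    # uniform_filter returns the window mean; multiplying by the window
    # area turns it into the window sum of the thresholded |g|.
    return uniform_filter(g, size=w, mode='nearest') * (w * w)

# Formula (6): binary decision map from two differently focused images.
# M_decision = region_saliency(img1) > region_saliency(img2)
```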

2.3. Region Optimization

The initial decision map often contains noise and misjudged areas that need to be corrected. Most methods use morphological processing to solve this problem, but that often destroys the boundary. H. Hariharan et al. [18] defined the focal connectivity of the same focal plane, according to which most of the noise and misjudged areas can be corrected:

$$M_{\text{DF-decision}} = \text{DeleteLarea}\big(M_{\text{decision}}\big) \quad (7)$$

The function DeleteLarea deletes the smaller connected areas, which contain most of the noise and misjudged areas.
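A possible implementation of DeleteLarea using connected-component labelling is sketched below; the minimum region size is an assumed parameter.

```python
import numpy as np
from scipy.ndimage import label

def delete_small_areas(decision, min_size=1000):
    """A sketch of DeleteLarea in Formula (7): flip connected regions
    smaller than min_size (an assumed parameter) in both the focused
    (1) and defocused (0) parts of the binary decision map."""
    out = decision.copy()
    for value in (True, False):
        regions, num = label(out == value)
        sizes = np.bincount(regions.ravel())
        for k in range(1, num + 1):
            if sizes[k] < min_size:
                out[regions == k] = not value
    return out
```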

At this stage, an important problem remains: erroneous judgments that adhere to the focus edge are not removed by the above method. When the Laplacian method is applied to the edges of multi-focus images, edge information interference often occurs, as in Figure 2. Because the black part outnumbers the white part in their corresponding templates, point A and the points around it turn black; this can be understood from Formula (2), where f is processed with a 3 × 3 template. Misjudgments can also arise for other reasons. Therefore, we propose a focus edge optimization method based on edge detection. Edge detection finds the edges of the original images, and a template is scanned along these edges to modify the area inside the template. In Formula (8), g is an edge detection function, and h is the function that, whenever one side of the edge is dominated by one element, sets that whole side to the element. A is a decision map, B is an edge map, and C is the optimized decision map.

$$\left.\begin{aligned} \begin{bmatrix} 1&1&1&0&0\\ 1&1&0&0&0\\ 1&1&1&0&0\\ 1&1&0&0&0\\ 1&0&0&0&0 \end{bmatrix} &\xrightarrow{f} A = \begin{bmatrix} 1&1&1&0&0\\ 1&1&1&0&0\\ 1&1&0&0&0\\ 1&1&0&0&0\\ 1&0&0&0&0 \end{bmatrix} \\ &\xrightarrow{g} B = \begin{bmatrix} 1&1&1&0&0\\ 0&1&0&0&0\\ 0&0&1&0&0\\ 0&1&0&0&0\\ 1&0&0&0&0 \end{bmatrix} \end{aligned}\right\} \xrightarrow{h} C = \begin{bmatrix} 1&1&1&0&0\\ 1&1&0&0&0\\ 1&1&1&0&0\\ 1&1&0&0&0\\ 1&0&0&0&0 \end{bmatrix} \quad (8)$$
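One possible reading of the correction h is sketched below: around each edge pixel of B, a small window is split into the connected sides of the edge, and each side takes its majority label. The window size w and the majority rule are our assumptions, not details given in the paper.

```python
import numpy as np
from scipy.ndimage import label

def edge_optimize(decision, edge_map, w=2):
    """A possible reading of h in Formula (8): around every edge pixel
    of B, split a (2w+1) x (2w+1) window into the connected sides of
    the edge and give each side its majority label (assumed rule)."""
    out = decision.astype(float)
    for y, x in zip(*np.nonzero(edge_map)):
        y0, y1 = max(y - w, 0), min(y + w + 1, out.shape[0])
        x0, x1 = max(x - w, 0), min(x + w + 1, out.shape[1])
        # Connected components of the non-edge pixels inside the window
        # are the "sides" of the edge.
        sides, num = label(~edge_map[y0:y1, x0:x1].astype(bool))
        window = out[y0:y1, x0:x1]
        for k in range(1, num + 1):
            mask = sides == k
            window[mask] = np.round(window[mask].mean())
    return out
```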

2.4. Multi-Focus Image Fusion

Image fusion is carried out according to the final decision map $D_{\text{final}}$. The fused image f(x, y) can be expressed as:

$$f(x, y) = D_{\text{final}} \times f_1(x, y) + \big(1 - D_{\text{final}}\big) \times f_2(x, y) \quad (9)$$

That is, the fused image is composed of the focused regions of the images $f_1(x, y)$ and $f_2(x, y)$. Through these steps, a fully focused fused image is obtained.
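Formula (9) amounts to a per-pixel selection, as in this minimal sketch:

```python
import numpy as np

def fuse_two(f1, f2, D_final):
    """Formula (9): take pixels from f1 where the decision map is 1
    and from f2 where it is 0."""
    D = D_final.astype(float)
    return D * f1 + (1.0 - D) * f2
```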

When more than two images are to be fused, the form of the decision map must change: it stores the serial number of the most salient image for each region. Figure 3 is a schematic map of a decision map in the fusion of four multi-focus images. During fusion, each pixel of the output image is assigned according to this index value. The fused image f(x, y) can be expressed as:

Figure 2. Example image of edge information interference.

Figure 3. Schematic map of a decision map in the process of fusion of four multi-focus images.

$$f(x, y) = f_{D_{\text{final}}(x, y)}(x, y) \quad (10)$$
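A sketch of Formula (10) follows; here the index map is assumed to be 0-based, while the serial numbers in Figure 3 may be 1-based.

```python
import numpy as np

def fuse_many(images, index_map):
    """A sketch of Formula (10): each output pixel is copied from the
    source image whose index the decision map stores at that pixel."""
    stack = np.stack(images, axis=0)                      # (K, H, W)
    idx = index_map[None].astype(np.intp)                 # (1, H, W)
    return np.take_along_axis(stack, idx, axis=0)[0]
```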

2.5. Evaluation Index System

The performance of a fusion algorithm can be evaluated subjectively and objectively. Since subjective evaluation depends heavily on human visual characteristics, it is difficult to distinguish between fused images that are approximately similar. Therefore, one subjective evaluation method and four objective evaluation methods are adopted in this article.

1) Subjective evaluation method

a) Comparison of residual maps

The residual map displays the difference between two images. The effect of image fusion can be observed by comparing the residual maps of different methods. The residual map $I_r$ between the source image and the fused image is defined as follows:

$$I_r = I_{\text{origin}} - I_{\text{fusion}} + \max\big(I_{\text{origin}}\big)/2 \quad (11)$$
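In code, Formula (11) is simply:

```python
import numpy as np

def residual_map(I_origin, I_fusion):
    """Formula (11): the difference image, shifted by half the source
    image's maximum intensity so zero difference shows as mid-grey."""
    I_origin = I_origin.astype(float)
    return I_origin - I_fusion.astype(float) + I_origin.max() / 2.0
```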

2) Objective evaluation methods

a) Mutual information (MI)

The greater the sum of the mutual information between the fused image and the source images, the richer the information the fused image obtains from the sources, and the better the fusion effect. The MI between the source images and the fused image is defined as follows:

$$MI = \sum_{k=0}^{L} \sum_{i=0}^{L} p_{AF}(i, k) \log_2 \frac{p_{AF}(i, k)}{p_A(i)\, p_F(k)} + \sum_{k=0}^{L} \sum_{j=0}^{L} p_{BF}(j, k) \log_2 \frac{p_{BF}(j, k)}{p_B(j)\, p_F(k)} \quad (12)$$

Among them, $p_A$, $p_B$ and $p_F$ are the normalized gray histograms of A, B and F; $p_{AF}(i, k)$ and $p_{BF}(j, k)$ are the joint gray histograms between the fused image and each source image; and L is the number of intensity levels.
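A sketch of one term of Formula (12), computed from histograms; L = 256 levels is assumed for 8-bit images.

```python
import numpy as np

def mutual_information(A, F, L=256):
    """One term of Formula (12): MI between a source image A and the
    fused image F from the joint grey-level histogram."""
    pAF, _, _ = np.histogram2d(A.ravel(), F.ravel(),
                               bins=L, range=[[0, L], [0, L]])
    pAF /= pAF.sum()
    pA, pF = pAF.sum(axis=1), pAF.sum(axis=0)
    nz = pAF > 0  # avoid log(0)
    return float((pAF[nz] * np.log2(pAF[nz] / np.outer(pA, pF)[nz])).sum())

# Formula (12) adds the MI of the fused image with both sources:
# MI_total = mutual_information(A, F) + mutual_information(B, F)
```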

b) Peak signal to noise ratio (PSNR)

PSNR is the most common and widely used objective measure of image quality. The larger the PSNR, the smaller the distortion. The PSNR is calculated as follows:

$$PSNR = 10 \log_{10} \left( \frac{MAX_I^2}{\frac{1}{mn} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \big[ I(i, j) - K(i, j) \big]^2} \right) \quad (13)$$

where I represents one of the source images, K represents the fused image, and $MAX_I$ is the maximum possible pixel value of the image.
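A direct transcription of Formula (13):

```python
import numpy as np

def psnr(I, K, max_i=255.0):
    """Formula (13): PSNR between a source image I and the fused
    image K, with max_i the maximum possible pixel value."""
    mse = np.mean((I.astype(float) - K.astype(float)) ** 2)
    return 10.0 * np.log10(max_i ** 2 / mse)
```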

c) Spatial frequencies (SF)

SF reflects the change of the pixel gray level of the image in space. To some extent, SF can reflect the clarity of images. SF is defined as follows:

$$SF = \sqrt{ \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=2}^{N} \big[ I(i, j) - I(i, j-1) \big]^2 + \frac{1}{M \times N} \sum_{i=2}^{M} \sum_{j=1}^{N} \big[ I(i, j) - I(i-1, j) \big]^2 } \quad (14)$$

where I(i, j) represents the image, and M and N are the numbers of rows and columns of the image.
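A direct transcription of Formula (14):

```python
import numpy as np

def spatial_frequency(I):
    """Formula (14): spatial frequency from the horizontal (row) and
    vertical (column) first differences, both normalised by M x N."""
    I = I.astype(float)
    M, N = I.shape
    rf = np.sum((I[:, 1:] - I[:, :-1]) ** 2) / (M * N)  # row term
    cf = np.sum((I[1:, :] - I[:-1, :]) ** 2) / (M * N)  # column term
    return np.sqrt(rf + cf)
```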

d) Edge intensity (EI)

EI is a measure of the local change intensity of the image in the normal direction along the edge, and also reflects the image sharpness to some extent. Its formula is expressed as:

$$EI = \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} \sqrt{ I_x^2(i, j) + I_y^2(i, j) } \quad (15)$$

where $I_x(i, j)$ and $I_y(i, j)$ are the horizontal and vertical gradients of the image.
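A sketch of Formula (15); Sobel responses are used here for the two gradients, which is our assumption since the paper does not name the gradient operator.

```python
import numpy as np
from scipy.ndimage import sobel

def edge_intensity(I):
    """Formula (15): mean gradient magnitude, with the horizontal and
    vertical gradients taken as Sobel responses (an assumption)."""
    I = I.astype(float)
    ix = sobel(I, axis=1)   # horizontal gradient
    iy = sobel(I, axis=0)   # vertical gradient
    return np.mean(np.sqrt(ix ** 2 + iy ** 2))
```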

3. Results

3.1. Laplacian Gradient

In order to test the accuracy of the above method, the algorithm was implemented in MATLAB. The experiment uses the Lena image, of size 512 × 512 pixels. Four defocused versions are generated by blurring it with Gaussian radii of 2.5, 5, 7.5 and 10, respectively. The five images, Lena, Lena 2.5, Lena 5, Lena 7.5 and Lena 10, are shown in Figures 4(a)-(e).

The five images were tested using the image saliency assessment method based on the Laplacian gradient to obtain the corresponding D(f) values, which are given in Figure 4.


Figure 4. Initial and blurred versions of the Lena image. (a) Initial Lena image, D(f) = 1.0000; (b) Lena image blurred with a Gaussian radius of 2.5, D(f) = 0.1117; (c) Gaussian radius of 5, D(f) = 0.0920; (d) Gaussian radius of 7.5, D(f) = 0.0842; (e) Gaussian radius of 10, D(f) = 0.0797.

The data shows that this method is very sensitive to blur. Contrast experiments are performed using the group of multi-focus images in Figure 1(a), and the results are shown in Figure 5(a).

From Figures 5(a)-(d), one can clearly see that these fusion methods perform differently on the same multi-focus images. On detailed observation, the fused images obtained by Tenengrad and SMD are not clear, and there are a large number of residuals in Figure 5(c) and Figure 5(d). Besides, the edges of the object are fuzzy in the decision maps of Tenengrad, SMD and the energy gradient. Compared with the actual situation, the decision maps obtained by these methods clearly contain more false information. In contrast, the fused image acquired by the saliency assessment method based on the Laplacian gradient is subjectively more satisfactory: its residual information is less than that of the other methods, which means the method transfers almost all focus information to the fused image. Good preprocessing also greatly simplifies the later operations, especially edge optimization.

3.2. Region Optimization

The focal connectivity of the same focal plane and the focus edge optimization method based on edge detection are applied in turn to the initial decision map. Obvious interference has been removed from the decision map in Figure 6(e), and the edge of the decision map after edge optimization in Figure 6(f) is clearly smoother. This is shown more clearly in Figures 7(a)-(d).

3.3. Multi-Focus Image Fusion

The whole process can be summarized as follows. First, a set of multi-focus images (Figure 6(a) and Figure 6(b)) is processed to obtain the corresponding saliency maps (Figure 6(c)), from which an initial decision map (Figure 6(d)) is derived. Next, the focal connectivity of the same focal plane is used to remove most of the noise and misjudged areas, and the edge correction method is used to optimize the decision map (Figure 6(e)). Finally,


Figure 5. Decision map of different methods. (a) Decision map using the image saliency assessment method based on the Laplacian gradient; (b) Decision map using the Tenengrad method; (c) Decision map using the SMD method; (d) Decision map using the energy gradient method.


Figure 6. Results of multi-focus image fusion. (a) multi-focus images with foreground focus; (b) multi-focus images with background focus; (c) the edge map; (d) the initial decision map; (e) the decision map without obvious interference; (f) the final decision map; (g) the fused image.


Figure 7. Images of local magnification. (a) one of images in Figure 1(a) marked the processing area; (b) partial decision map before edge optimization; (c) partial edge map; (d) partial decision map after edge optimization.

the multi-focus images are fused according to the final decision map. In the final decision map (Figure 6(f)), it is clear from Figure 7(d) that the edge is smoother and consistent with the actual situation. A globally clear image is thereby obtained (Figure 6(g)).

4. Discussions

4.1. Subjective Evaluation

The fused images and corresponding residual maps of Backgammon, Clock and Lab obtained by the different methods are shown in Figures 8(a)-(j), Figures 9(a)-(j) and Figures 10(a)-(j). From Figures 8(a)-(e), all five algorithms produce fused images, but it is very difficult to distinguish the differences between some fusion results by visual observation alone. A better way to evaluate the visual quality of the fused images is to compare their residual maps, shown in Figures 8(f)-(j). The comparison is clear: the residual maps obtained by DWT, NSCT, OPT and LP contain more residual information, while that of the proposed method contains less.

Figure 8(c) and Figure 9(c) show that the images fused by OPT are not clear enough. Many fusion errors exist at the right edge of the Gobang box in Figure 8(a) and Figure 8(b) and on the surface of the back clock in Figure 9(a), Figure 9(b) and Figure 9(d). In the residual maps, there is more residual information on the surface of the clock in Figure 10(f), Figure 10(h) and Figure 10(i). Compared with the other methods, the method proposed in this paper has better subjective performance.

4.2. Objective Evaluation

In the last part, residual maps were used to compare the different image fusion methods. To further verify the performance of the proposed method, an objective quality evaluation is carried out using the indicators introduced above: MI, PSNR, SF and EI. The evaluation results are shown in Table 1.

The evaluation results in Table 1 are clear. For the source image "Backgammon", most values of the four indexes for the proposed method are noticeably higher than those of the other methods, and the same holds for "Clock" and "Lab". Given the meaning of these evaluation measures, this shows that the image fused by the proposed method contains more information and has higher definition.

5. Conclusions

This paper presents an improved algorithm for multi-focus image fusion based


Figure 8. The fused images and corresponding residual maps of Backgammon using different methods. (a) Fused image of DWT; (b) Fused image of NSCT; (c) Fused image of OPT; (d) Fused image of LP; (e) Fused image of the proposed method; (f) Residual map of DWT; (g) Residual map of NSCT; (h) Residual map of OPT; (i) Residual map of LP; (j) Residual map of the proposed method.


Figure 9. The fused images and corresponding residual maps of Clock using different methods. (a) Fused image of DWT; (b) Fused image of NSCT; (c) Fused image of OPT; (d) Fused image of LP; (e) Fused image of the proposed method; (f) Residual map of DWT; (g) Residual map of NSCT; (h) Residual map of OPT; (i) Residual map of LP; (j) Residual map of the proposed method.


Figure 10. The fused images and corresponding residual maps of Lab using different methods. (a) Fused image of DWT; (b) Fused image of NSCT; (c) Fused image of OPT; (d) Fused image of LP; (e) Fused image of the proposed method; (f) Residual map of DWT; (g) Residual map of NSCT; (h) Residual map of OPT; (i) Residual map of LP; (j) Residual map of the proposed method.

Table 1. Quantitative indexes of the fusion results.

on the Laplacian operator and region optimization. The algorithm contains two innovations: the evaluation of image saliency based on the Laplacian gradient, and the optimization of the focus area and its edges based on the connectedness of the focused region and edge detection.

The evaluation of image saliency based on the Laplacian gradient performs well in distinguishing image clarity and facilitates the extraction of precise focus areas. At the same time, focus area and edge optimization make the focus area more accurate. The subjective and objective evaluations show that the proposed algorithm is effective for multi-focus image fusion and performs better than the other four representative fusion algorithms. Many experiments have been done, and the algorithm still needs improvement in edge detection: more accurate edge detection would bring better fusion results.

Acknowledgements

This work is partially supported by the Hubei Provincial Department of Education, the National Natural Science Foundation of China (11571041) and the Natural Science Foundation of Hubei Province (2013CFA053).

Cite this paper

Wang, C., Yuan, R., Sun, Y.Q., Jiang, Y.X., Chen, C.S. and Lin, X.L. (2018) A New Method of Multi-Focus Image Fusion Using Laplacian Operator and Region Optimization. Journal of Computer and Communications, 6, 106-118. https://doi.org/10.4236/jcc.2018.65009

References

1. Sankaranarayanan, G., Veeraraghavan, A. and Chellappa, R. (2008) Object Detection, Tracking and Recognition for Multiple Smart Cameras. Proceedings of the IEEE, 96, 1606-1624. https://doi.org/10.1109/JPROC.2008.928758

2. Stathaki, T. (2008) Image Fusion: Algorithms and Applications. Academic Press, Cambridge.

3. Rahman, M.A., Liu, S., Wong, C.Y., Lin, S.C.F., Liu, S.C. and Kwok, N.M. (2017) Multi-Focal Image Fusion Using Degree of Focus and Fuzzy Logic. Digital Signal Processing, 60, 1-19. https://doi.org/10.1016/j.dsp.2016.08.004

4. Balasubramaniam, P. and Ananthi, V.P. (2014) Image Fusion Using Intuitionistic Fuzzy Sets. Information Fusion, 20, 21-30. https://doi.org/10.1016/j.inffus.2013.10.011

5. Piella, G. (2002) A Region-Based Multiresolution Image Fusion Algorithm. International Conference on Information Fusion, Annapolis, 8-11 July 2002, 1557-1564. https://doi.org/10.1109/ICIF.2002.1021002

6. Aslantas, V. and Kurban, R. (2010) Fusion of Multi-Focus Images Using Differential Evolution Algorithm. Expert Systems with Applications, 37, 8861-8870. https://doi.org/10.1016/j.eswa.2010.06.011

7. Rattá, G.A., Vega, J., Murari, A. and Contributors, J.E. (2007) Image Fusion: Advances in the State of the Art. Information Fusion, 8, 114-118. https://doi.org/10.1016/j.inffus.2006.04.001

8. Hariharan, H., Gribok, A., Abidi, M.A. and Koschan, A. (2006) Image Fusion and Enhancement via Empirical Mode Decomposition. Journal of Pattern Recognition Research, 1, 16-32. https://doi.org/10.13176/11.6

9. Yu, M.Y., Han, M.L., Cheng, Y.S. and Wei, T.A. (2011) Autofocusing Algorithm Comparison in Bright Field Microscopy for Automatic Vision Aided Cell Micromanipulation. IEEE International Conference on Nano/Molecular Medicine and Engineering, Hong Kong, 5-9 December 2010, 88-92.

10. Raghunandana, R., Manikantan, K., Murthy, N.N. and Ramachandran, S. (2012) Face Recognition Using DWT Thresholding Based Feature Extraction with Laplacian-Gradient Masking as a Pre-Processing Technique. Proceedings of Cube International Information Technology Conference, Pune, 3-5 September 2012, 82-89.

11. Choi, K.S., Lee, J.S. and Ko, S.J. (2002) New Autofocusing Technique Using the Frequency Selective Weighted Median Filter for Video Cameras. IEEE Transactions on Consumer Electronics, 45, 820-827. https://doi.org/10.1109/30.793616

12. Guo, J.B., Feng, H.J., Wang, L., Peng, Q.J. and Li, X.F. (2016) Design of Focusing Window Based on Energy Function of Gradient. Infrared Technology, 38, 197-202.

13. Li, S. and Yang, B. (2008) Multifocus Image Fusion by Combining Curvelet and Wavelet Transform. Pattern Recognition Letters, 29, 1295-1301. https://doi.org/10.1016/j.patrec.2008.02.002

14. Bhatnagar, G., Wu, Q.M.J. and Liu, Z. (2013) Directive Contrast Based Multimodal Medical Image Fusion in NSCT Domain. IEEE Transactions on Multimedia, 15, 1014-1024. https://doi.org/10.1109/TMM.2013.2244870

15. Jie-Feng, X.U., Ai-Guo, L.I. and Qin, Z. (2006) Image Fusion Algorithm Based on Orthogonal Polynomial Transform. Microelectronics & Computer, 23, 93-95.

16. Wang, W. and Chang, F. (2011) A Multi-Focus Image Fusion Method Based on Laplacian Pyramid. Journal of Computers, 6, 2559-2566. https://doi.org/10.4304/jcp.6.12.2559-2566

17. Choi, M.G., Jung, J.H. and Jeon, J.W. (2009) No-Reference Image Quality Assessment Using Blur and Noise. International Journal of Electrical & Electronics Engineering, 3, 184-188.

18. Hariharan, H., Koschan, A. and Abidi, M. (2007) Multifocus Image Fusion by Establishing Focal Connectivity. IEEE International Conference on Image Processing, 3, 321-324.