Advances in Remote Sensing
Vol.06 No.03 (2017), Article ID: 79461, 15 pages
10.4236/ars.2017.63017

A Novel Hybrid Pan-Sharpen Method Using IHS Transform and Optimization

Haiyong Ding1*, Wenzhong Shi2

1School of Geography and Remote Sensing, Nanjing University of Information Science and Technology, Nanjing, China

2Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Hong Kong, China

Copyright © 2017 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: January 16, 2016; Accepted: September 25, 2017; Published: September 29, 2017

ABSTRACT

The intensity-hue-saturation (IHS) transform is one of the most commonly used methods for image fusion. Usually, the intensity image is replaced by the panchromatic (PAN) image, or the difference between the PAN and intensity images is added to each band of the RGB images. Spatial structure information in the PAN image can be effectively injected into the fused multi-spectral (MS) images by the IHS method; however, spectral distortion is the typical factor deteriorating the quality of the fused results. In this study, a hybrid image fusion method integrating IHS and minimum mean-square-error (MMSE) estimation was proposed to mitigate spectral distortion. Firstly, the IHS transform is used to derive the intensity image; secondly, the MMSE algorithm is used to fuse the histogram-matched PAN image and the intensity image; thirdly, an optimization calculation derives the combination coefficients, so that the new intensity image can be expressed as a linear combination of the intensity image and the PAN image. Fused MS images with high spatial resolution are then generated by the inverse IHS transform. In numerical experiments, QuickBird images were used to evaluate the performance of the proposed algorithm. It was found that the spatial resolution was increased significantly, while spectral distortion was abated in the fusion results.

Keywords:

IHS Transform, Pan-Sharpen, Minimum Mean-Square-Error, Spectral Distortion, Optimization Calculation

1. Introduction

Multispectral (MS) remotely sensed imagery, which records the radiance of different land covers in many spectral bands, can accurately map land surface composition. However, multispectral sensors usually have lower spatial resolution, which limits their application in mapping complex land surface morphological structure. High spatial resolution remotely sensed imagery, obtained from commercial satellite sensors, has the potential to describe the urban surface more accurately, and has been used extensively in urban planning, urban building extraction and decision support [1] [2]. Therefore, there is a desire to integrate the high spatial and high spectral information from these two kinds of imagery to give the most complete and accurate description of the study scene [3].

Image fusion, or pan-sharpening, is a technique that produces images with high spatial and spectral resolution simultaneously by injecting the spatial detail of the higher resolution panchromatic (PAN) image into the MS channels [4].

Pan-sharpening uses a panchromatic image to sharpen the multispectral images. A pan-sharpening algorithm involves several steps. Firstly, the PAN and MS images are registered to obtain spatially aligned images, a pivotal step for attaining effective fusion results. Secondly, spatial information is extracted from the high resolution PAN image using an algorithm such as the wavelet transform, the intensity-hue-saturation (IHS) transform or the principal component transform. Thirdly, the extracted spatial information is injected into the MS images to sharpen the spatial resolution while preserving the spectral information contained in the MS images. Finally, an assessment is made to evaluate the effectiveness of the pan-sharpened results. Another key point in this process lies in the mechanism of extracting and injecting spatial information, which has become a central issue in the application of remotely sensed imagery.

Many pan-sharpening methods have been proposed in the past twenty years. These algorithms fall into four categories: projection substitution methods, numerical methods, multi-resolution analysis based methods and hybrid methods.

The IHS and PCA transformations are two representative projection substitution fusion methods. In the IHS transform, MS images are converted from the Red-Green-Blue (RGB) color space into the intensity-hue-saturation color space, and the intensity image, which mainly contains the low resolution spatial detail, is substituted by the histogram-matched PAN image. Fusion results are attained by the inverse transform from IHS back to RGB color space [1] [4] [5] [6]. This algorithm provides an effective and fast implementation for sharpening MS images. However, significant spectral distortion has been reported in the results, which may be induced by adding inappropriate spatial information. A fast IHS algorithm with spectral adjustment for IKONOS imagery fusion was proposed by Tu et al. [2] to reduce the spectral distortion as far as possible.

PCA-based fusion is commonly used because the principal components are uncorrelated after the PCA transform. The first principal component, considered to contain sufficient spatial information because its variance is the largest among the principal components, is replaced by the histogram-matched PAN image [4] [7] [8]. The new first principal component and all the other principal components, which preserve the spectral information, are converted back to obtain fused MS images with higher spatial resolution than the original MS images. However, “a higher variance of the first PC does not necessarily mean it has higher correlation with the PAN image” [8]. Therefore, several modified PCA-based fusion algorithms have recently been proposed to improve the effectiveness of the algorithm [4] [7] [8].

In the numerical fusion algorithms, the PAN image is assumed to be a linear combination of the original high resolution MS bands, and the combination coefficients are estimated using the degraded low resolution MS bands [9]. The Brovey method [10], color normalization, and P+XS [9] belong to this family. The disadvantage of such algorithms lies in the linear combination assumption, which does not hold exactly in reality and can lead to incorrect fusion results. Recently, Garzelli et al. [11] suggested an optimal algorithm, based on the minimum mean-square-error (MMSE) criterion, to sharpen MS images. In this algorithm, the fused high resolution MS images are modeled as a weighted combination of the low resolution MS images and the PAN image, and the weight coefficients are estimated using the least-squares (LS) algorithm. Another model in that work, the band-dependent spatial-detail model, assumes that spatial information can be derived from the difference between the PAN image and the sum of the LRMS images.

Much attention has been paid to the multi-resolution analysis based methods. The idea behind such methods is that the spatial information missing from the MS images can be inferred from the high frequencies of the PAN image, which is the foundation of the ARSIS concept [3] [12] [13]. ARSIS is the French acronym for “Amélioration de la Résolution Spatiale par Injection de Structures” (improving spatial resolution by structure injection) [3]. Multi-resolution analysis tools such as wavelet analysis [4] [13] [14] [15] [16] [17], pyramid decomposition, contourlet analysis [7] [8] [18] - [24] and shearlet analysis [25] are used to derive a scale-by-scale description of the information content of the PAN and MS images [9]. Among these algorithms, the key points are how to extract as much spatial information as possible and how to define a fusion rule that integrates the spatial and spectral information. Although different kinds of rules have been tested, a thorough investigation of this class of algorithms is still needed to assess their performance.

Due to the limitations of individual fusion algorithms, hybrid algorithms such as IHS plus wavelet, PCA plus wavelet and IHS plus contourlet are used to give better fusion results. The intensity image or the first principal component is extracted using the corresponding transform, and wavelet decomposition is then applied to the intensity image and the PAN image simultaneously. The wavelet coefficients corresponding to the approximation part of the intensity image are replaced by the PAN image’s approximation coefficients, and the fused MS image is obtained by the inverse wavelet transform. Better fusion results are usually obtained with such hybrid algorithms.

As pointed out by Tu [2] [6], the key point in IHS-based fusion algorithms lies in the extraction of spatial information, which can be deduced from the difference between the PAN and intensity images. The new intensity image can therefore be seen as a linear combination of the PAN image and the original intensity image. This idea motivates us to use the minimum mean-square-error (MMSE) method to derive the new intensity image through an optimization calculation. The proposed hybrid fusion method is thus based on the IHS transform and the MMSE optimal algorithm.

The outline of this paper is as follows. A brief introduction is given in the first section. The proposed hybrid fusion algorithm is introduced in Section 2. Numerical experiments are described in Section 3, Section 4 presents and discusses the experimental results, and conclusions are drawn in Section 5.

2. The Hybrid Pan-Sharpen Method

Based on the fact that the fused high resolution MS images contain spatial information coming from both the low resolution MS images and the panchromatic image, the proposed hybrid pan-sharpen method uses optimal combination coefficients of the MS images and the panchromatic image to obtain the optimum fusion result. The flowchart of the hybrid pan-sharpen method is shown in Figure 1. There are two key steps: the IHS transformation and the optimization calculation. The IHS transformation is used to obtain the intensity image, which contains the spatial information of the MS images. The optimization calculation is used to obtain the final intensity image by computing the optimal combination coefficients.

2.1. IHS-Based Fusion Method

The IHS transform is extensively used to convert MS images from the RGB color space into the IHS color space. The intensity image contains most of the spatial information of the scene, while the hue and saturation images reflect the spectral information of the land cover. Compared with the PAN image, the intensity image has lower spatial resolution, which leaves the MS images short of spatial information. Therefore, the intensity image is usually replaced by the histogram-matched PAN image to enhance the spatial structure of the MS images. The standard IHS-based fusion algorithm is briefly introduced in the following four steps.

Firstly, a band combination of the MS images is used to form the RGB components, and the low spatial resolution RGB images are then upsampled to match the size of the high spatial resolution PAN image [2].

Figure 1. The logic flow of the hybrid IHS pan-sharpen method.

Secondly, the IHS transform converts the images from the RGB color space into the IHS color space using Equation (1):

$$\begin{bmatrix} I \\ v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\ -\frac{\sqrt{2}}{6} & -\frac{\sqrt{2}}{6} & \frac{2\sqrt{2}}{6} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \quad (1)$$

where $v_1$ and $v_2$ are intermediate variables. The hue and saturation components in the IHS space are given by

$$H = \tan^{-1}\left(\frac{v_2}{v_1}\right), \quad S = \sqrt{v_1^2 + v_2^2} \quad (2)$$

Thirdly, the intensity image $I$ is replaced by the histogram-matched PAN image. Finally, the inverse IHS transform yields the fused MS images using Equation (3):

$$\begin{bmatrix} R' \\ G' \\ B' \end{bmatrix} = \begin{bmatrix} 1 & -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ 1 & -\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ 1 & \sqrt{2} & 0 \end{bmatrix} \begin{bmatrix} \mathrm{PAN} \\ v_1 \\ v_2 \end{bmatrix} \quad (3)$$

where $R'$, $G'$ and $B'$ are the fused MS bands.
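The substitution scheme of Equations (1)-(3) can be condensed into a few lines of code. The following is a minimal numpy sketch, assuming the MS bands are already upsampled to the PAN grid and the PAN image is histogram-matched; the function and array names are illustrative, not from the paper.

```python
import numpy as np

# Coefficient matrix of Equation (1): rows map (R, G, B) to (I, v1, v2).
IHS_FWD = np.array([
    [1 / 3,            1 / 3,             1 / 3],
    [-np.sqrt(2) / 6,  -np.sqrt(2) / 6,   2 * np.sqrt(2) / 6],
    [1 / np.sqrt(2),   -1 / np.sqrt(2),   0.0],
])

def ihs_substitution_fusion(ms, pan):
    """Standard IHS fusion: replace I by PAN and invert back to RGB.

    ms  : float array (H, W, 3), upsampled R, G, B bands
    pan : float array (H, W), histogram-matched PAN image
    """
    i, v1, v2 = np.einsum('kc,hwc->khw', IHS_FWD, ms)   # Equation (1)
    # Equation (3): the inverse of IHS_FWD is exactly the matrix in Eq. (3)
    return np.einsum('ck,khw->hwc', np.linalg.inv(IHS_FWD),
                     np.stack([pan, v1, v2]))
```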

Tu [2] introduced a computationally efficient method by rewriting the previous two equations; the new formulation is given as

$$\begin{bmatrix} R' \\ G' \\ B' \end{bmatrix} = \begin{bmatrix} 1 & -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ 1 & -\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ 1 & \sqrt{2} & 0 \end{bmatrix} \begin{bmatrix} I + (\mathrm{PAN} - I) \\ v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} 1 & -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ 1 & -\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ 1 & \sqrt{2} & 0 \end{bmatrix} \begin{bmatrix} I + \delta \\ v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} R + \delta \\ G + \delta \\ B + \delta \end{bmatrix} \quad (4)$$

where

$$\delta = \mathrm{PAN} - I. \quad (5)$$

It was found that the spectral distortion is mainly due to the change of the saturation value, i.e., “the saturation value is expanded and stretched $(S' > S)$ when the PAN value is less than its corresponding $I$ value; the saturation value is compressed $(S' < S)$ when the PAN value is larger than the $I$ value” [2]. To avoid changing the saturation value over different land surface materials, we suggest using an optimized version of the intensity image as the replacement for the original intensity image.
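In code, the fast IHS formulation of Equations (4)-(5) reduces to adding the difference image to each band. A minimal numpy sketch under the same assumptions as above (upsampled MS bands, histogram-matched PAN; names illustrative):

```python
import numpy as np

def fast_ihs_fusion(ms, pan):
    """Fast IHS fusion, Equations (4)-(5): add delta = PAN - I to every band.

    ms  : float array (H, W, 3), upsampled R, G, B bands
    pan : float array (H, W), histogram-matched PAN image
    """
    intensity = ms.mean(axis=2)    # I = (R + G + B) / 3
    delta = pan - intensity        # Equation (5)
    return ms + delta[..., None]   # (R + delta, G + delta, B + delta), Equation (4)
```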

2.2. The Logic Flow of the Hybrid IHS Pan-Sharpen Method

To give a concise description of the hybrid pan-sharpen algorithm, some symbols are introduced for the images and variables. Let $M_i$, $i = 1, \dots, N$, of size $N_r \times N_c$, denote the $i$th band of the $N$ MS images. The matrix $P$ is the PAN image of size $rN_r \times rN_c$, where $r$ is the ratio of spatial resolution between the PAN image and the MS images; for example, $r = 4$ for the QuickBird sensor. $M_i$ and $P$ are also used to denote the lexicographically ordered vectors of size $N_r N_c \times 1$ and $r^2 N_r N_c \times 1$, respectively.

According to the resolution ratio $r$ between the PAN and MS images, the low resolution MS images are upsampled to new MS images of the same size as the PAN image. The IHS transform then converts the new MS images from the RGB color space into the IHS color space to obtain three component images: the intensity image $I$, the hue image $H$ and the saturation image $S$.

A new PAN image $P_1$ is derived by histogram matching using Equation (6):

$$P_1 = (P - \mu_P)\frac{\sigma_I}{\sigma_P} + \mu_I \quad (6)$$

where $\mu_P$ and $\mu_I$ are the mean values of the PAN image and the intensity image, respectively, and $\sigma_P$ and $\sigma_I$ are their standard deviations.
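Equation (6) is a global mean/variance match rather than a full histogram specification. A minimal numpy sketch (names illustrative):

```python
import numpy as np

def histogram_match(pan, intensity):
    """Equation (6): rescale PAN to the mean and std of the intensity image."""
    mu_p, sigma_p = pan.mean(), pan.std()
    mu_i, sigma_i = intensity.mean(), intensity.std()
    return (pan - mu_p) * sigma_i / sigma_p + mu_i
```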

The new intensity image $\tilde{I}$, which will be estimated using an optimization algorithm, can be written as

$$\tilde{I} = w_1 I + w_2 P_1 \quad (7)$$

where $w_1$ and $w_2$ are coefficients to be determined.

This formulation is similar to the single spatial-detail (SSD) model given by Garzelli et al. for image fusion [11], but a closer look reveals two differences between the two models. Firstly, the SSD model describes the relationship among the estimated HRMS image, the LRMS image and the PAN image, i.e., the $i$th band of the HRMS image can be expressed as

$$\mathrm{HRMS}_i = \mathrm{LRMS}_i + \gamma_i \,\mathrm{PAN} \quad (8)$$

where $\gamma_i$ is the parameter to be estimated, whereas our model describes the relationship among the new intensity image, the original intensity image and the histogram-matched PAN image. Secondly, in the SSD model, part of the spatial information of the PAN image is added to the low resolution MS images to obtain the high resolution MS images, which is necessary to enhance the spatial structure of the high resolution MS images. In our model, by contrast, the new intensity image $\tilde{I}$ is estimated as a linear combination of the intensity image $I$ and the histogram-matched PAN image, which is better than adding the PAN image to $I$ or replacing $I$ entirely, because in those cases the spatial information in the intensity image would be lost.

To estimate the parameters $w_1$ and $w_2$, we employ the least-squares criterion, i.e., we minimize the following objective function:

$$\min_{w_1, w_2} \left\| \tilde{I} - w_1 I - w_2 P_1 \right\|_2^2 \quad (9)$$

where $\|\cdot\|_2^2$ denotes the squared 2-norm of a vector.

The least-squares solution is obtained by setting the partial derivatives to zero and can be expressed as

$$\begin{bmatrix} w_1 \\ w_2 \end{bmatrix} = \begin{bmatrix} I^T I & I^T P_1 \\ P_1^T I & P_1^T P_1 \end{bmatrix}^{-1} \begin{bmatrix} I^T \tilde{I} \\ P_1^T \tilde{I} \end{bmatrix} \quad (10)$$

where $M^T$ denotes the transpose of a matrix $M$ and $M^{-1}$ its inverse.
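Numerically, Equation (10) is the normal-equation solution of the two-parameter regression in Equation (9), which is more robustly computed with a least-squares solver. A minimal numpy sketch (names illustrative):

```python
import numpy as np

def estimate_weights(i_tilde, intensity, pan1):
    """Solve Equations (9)-(10): fit i_tilde ~ w1 * intensity + w2 * pan1."""
    A = np.column_stack([intensity.ravel(), pan1.ravel()])   # design matrix [I, P1]
    w, *_ = np.linalg.lstsq(A, i_tilde.ravel(), rcond=None)  # solves (A^T A) w = A^T i_tilde
    return w                                                 # w[0] = w1, w[1] = w2
```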

To give a better fusion result, the optimization can be carried out within a sliding window of size $3 \times 3$, i.e., the parameters are estimated in each non-overlapping window. The proposed algorithm includes the following three steps (a code sketch follows the steps):

Step 1: Let $\tilde{I}_0 = I + P_1$ be the initial estimate of the new intensity image; set the maximum number of iterations and the tolerance $\alpha$.

Step 2: Calculate the parameters $w_1$ and $w_2$ using Equation (10); estimate the intensity image $\tilde{I}_1$ using Equation (7).

Step 3: If $\|\tilde{I}_1 - \tilde{I}_0\|_2 < \alpha$, output $\tilde{I}_1$ as the estimated intensity image; otherwise, set $\tilde{I}_0 = \tilde{I}_1$ and go back to Step 2.
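The iteration can be sketched as follows in numpy. For brevity this sketch estimates one global pair $(w_1, w_2)$; in the paper the estimation is repeated in each $3 \times 3$ non-overlapping window. The function name, default tolerance and iteration cap are illustrative assumptions.

```python
import numpy as np

def estimate_intensity(intensity, pan1, alpha=1e-3, max_iter=50):
    """Steps 1-3: iteratively refine the new intensity image I_tilde."""
    i_tilde = intensity + pan1                                   # Step 1: initialization
    for _ in range(max_iter):
        A = np.column_stack([intensity.ravel(), pan1.ravel()])
        w, *_ = np.linalg.lstsq(A, i_tilde.ravel(), rcond=None)  # Step 2: Equation (10)
        i_new = w[0] * intensity + w[1] * pan1                   # Equation (7)
        if np.linalg.norm(i_new - i_tilde) < alpha:              # Step 3: convergence test
            return i_new
        i_tilde = i_new                                          # otherwise iterate again
    return i_tilde
```

The estimated $\tilde{I}$ then replaces the original intensity image, and the fused MS images follow from the inverse IHS transform of Equation (3).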

3. Numerical Experiments

3.1. Experiment Data

The QuickBird images were downloaded from http://www.digitalglobe.com. DigitalGlobe provides commercial QuickBird satellite images, which contain one 0.6 m spatial resolution panchromatic band (450 - 900 nm) and four 2.4 m MS bands: blue (450 - 520 nm), green (520 - 600 nm), red (630 - 690 nm) and near-infrared (760 - 900 nm). A subset of size 387 × 390 was cut from the original QuickBird images and used as the experiment images. The MS images were resampled to the same pixel size as the PAN image. The experiment images are shown in Figure 2.

The second set of remotely sensed images used in this paper consists of Landsat ETM+ images,

Figure 2. QuickBird experiment data. Left: PAN image; right: MS image, R: band 3, G: band 2, B: band 1.

which were downloaded from https://www.usgs.gov/. The panchromatic band of the ETM+ sensor has a spatial resolution of 15 m, while the multi-spectral bands have a spatial resolution of 30 m. It is therefore desirable to merge the abundant spectral information of the multi-spectral images with the panchromatic image to obtain high resolution multi-spectral images. The images are shown in Figure 3.

3.2. Assessment Index

To give an objective assessment, correlation coefficients are used to assess the spectral distortion between the fused MS images and the up-sampled MS images, since the original high resolution MS images are unavailable. The correlation coefficient is defined as

$$cc_{f,g} = \frac{\displaystyle\sum_{i=1}^{M}\sum_{j=1}^{N}\left(f(i,j)-\mu_f\right)\left(g(i,j)-\mu_g\right)}{\sqrt{\left(\displaystyle\sum_{i=1}^{M}\sum_{j=1}^{N}\left(f(i,j)-\mu_f\right)^2\right)\left(\displaystyle\sum_{i=1}^{M}\sum_{j=1}^{N}\left(g(i,j)-\mu_g\right)^2\right)}} \quad (11)$$

where $f$ and $g$ are two images of size $M \times N$ and $\mu_f$, $\mu_g$ are their mean values. The correlation coefficient measures the similarity of the same spectral band between the fused image and the original image; its value should be as close to 1 as possible.
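A minimal numpy sketch of Equation (11) for two single-band images (the function name is illustrative):

```python
import numpy as np

def correlation_coefficient(f, g):
    """Equation (11): Pearson correlation between two images of equal size."""
    fd, gd = f - f.mean(), g - g.mean()
    return (fd * gd).sum() / np.sqrt((fd ** 2).sum() * (gd ** 2).sum())
```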

Another index is ERGAS (Erreur Relative Globale Adimensionnelle de Synthèse) [3] [12], or relative dimensionless global error in synthesis, which is defined as

$$\mathrm{ERGAS} = 100\,\frac{h}{l}\sqrt{\frac{1}{N}\sum_{i=1}^{N}\frac{\mathrm{RMSE}^2(n_i)}{\tilde{n}_i^2}} \quad (12)$$

where $h$ is the spatial resolution of the PAN image, $l$ is that of the MS images, $\tilde{n}_i$ is the mean radiance of the $i$th spectral band, and RMSE is the root mean square error, calculated as

$$\mathrm{RMSE}(n_i) = \sqrt{\frac{1}{NP}\sum_{j=1}^{NP}\left(O_j - F_j\right)^2} \quad (13)$$

Figure 3. Landsat ETM+ images: panchromatic image (left); multi-spectral images (right): R: band 5, G: band 4, B: band 3.

where $NP$ is the total number of pixels in the original and fused images, and $O_j$ and $F_j$ are the radiance values of pixel $j$ in the $i$th band of the original image and the fused image, respectively. ERGAS assesses the spectral quality of the fused image: the lower the ERGAS value, the higher the spectral quality of the merged image [26].
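A minimal numpy sketch of Equations (12)-(13); the resolution ratio $h/l$ is 0.25 for the QuickBird pair (0.6 m / 2.4 m). The function name and argument layout are illustrative.

```python
import numpy as np

def ergas(original, fused, ratio):
    """Equations (12)-(13): ERGAS between (H, W, N) original and fused images.

    ratio : h / l, the PAN-to-MS resolution ratio (e.g. 0.6 / 2.4 = 0.25).
    """
    n_bands = original.shape[-1]
    acc = 0.0
    for i in range(n_bands):
        rmse2 = np.mean((original[..., i] - fused[..., i]) ** 2)  # RMSE^2, Equation (13)
        acc += rmse2 / original[..., i].mean() ** 2               # normalized by mean radiance
    return 100.0 * ratio * np.sqrt(acc / n_bands)
```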

4. Results and Discussion

4.1. Results of Fusing QuickBird Images

The outputs of applying the different fusion methods to the QuickBird images are shown in Figure 4. Firstly, visual interpretation shows that there is more spatial detail in the fused results than in the original multi-spectral images. The spatial resolution of the fused results is much higher than that of the original MS images, and most of the detailed spatial structure of the PAN image has been merged into the fused results. Some small spatial details that cannot be discerned in the original MS image (Figure 2) can be identified in the fused results. The results of the fast IHS and wavelet fusion methods have sharper edges and textures than the results of the Brovey, PCA and proposed hybrid IHS fusion methods, which is confirmed by the correlation coefficients between the fused MS images and the PAN image.

It can also be seen that the results of Brovey, PCA and fast IHS fusion are severely affected by spectral distortion, which is confirmed by the correlation coefficients between the fused MS images and the up-sampled MS images in Table 1. By preserving more spatial structure information in the fused images, the wavelet fusion and the proposed hybrid IHS fusion generated better results than the other fusion methods.

The fast IHS results display higher spectral distortion than the results derived from the hybrid IHS. The reason for the spectral distortion of fast IHS has been investigated by Tu [2] [6]. In the proposed hybrid IHS algorithm, the difference between the PAN image and the intensity image is optimally weighted to reduce the change of the saturation image, which is critical for preserving the spectral information contained in the original MS images. As a result, the hybrid IHS results have spectral characteristics similar to those of the original MS images.

In addition to the visual inspection, the performance of these two methods is further analyzed quantitatively using the assessment indexes. Firstly, the correlation coefficients verify that the results of hybrid IHS are more similar to the original MS images than the results of fast IHS; little spectral distortion emerged in the results of the proposed method, as can also be seen by visual inspection. There is a major difference in RMSE and ERGAS between the results derived from the different methods: the results of HIHS have smaller RMSE and ERGAS than those of fast IHS, which demonstrates that the hybrid IHS results have higher quality. The correlation coefficient to the PAN image of the hybrid IHS result is lower than that of the fast IHS result, which indicates a shortage of spatial information in the hybrid IHS results compared with those of fast IHS.

Figure 4. Image fusion results of QuickBird images using different methods. Panels: Brovey fusion, wavelet fusion, PCA fusion, fast IHS fusion, hybrid IHS fusion.

Table 1. Values of different indexes to evaluate the quality of the fused QuickBird images.


4.2. Results of Fusing Landsat ETM+ Images

In this subsection, the proposed hybrid IHS method, together with the other fusion methods, is used to fuse the MS images and the panchromatic image taken by the Landsat ETM+ sensor. The fused Landsat ETM+ images are shown in Figure 5. Visual interpretation shows that wavelet fusion and hybrid IHS fusion produced better fused images than the other methods.

Figure 5. Landsat ETM+ fusion results using different fusion methods. Panels: Brovey fusion, wavelet fusion, PCA fusion, fast IHS fusion, hybrid IHS fusion.

Table 2. Values of different indexes to evaluate the quality of the fused Landsat ETM+ images.

Spatial information in the fused images has been increased to some degree. However, significant spectral distortion emerged in the fused results of the Brovey, PCA and fast IHS methods.

To give a thorough investigation of the proposed hybrid IHS fusion method, different indexes are used to assess the performance of these methods (Table 2). The correlation coefficients, which assess the correlation between the fused high resolution MS images and the low resolution MS images, indicate the degree of spectral similarity between the two images. It was found that the results of wavelet fusion and the proposed hybrid IHS fusion, which show only slight spectral distortion, outperformed the other fusion methods. The other three indexes, namely RMSE, the correlation coefficient to the panchromatic image and ERGAS, demonstrate that the proposed hybrid IHS outperformed the other fusion methods.

5. Conclusions

In this paper, we present a hybrid of IHS and minimum mean-square-error (MMSE) estimation for fusing low resolution multi-spectral and panchromatic images of the same scene. IHS is one of the most commonly used fusion algorithms for merging the spatial information of the PAN image with the spectral information of the LRMS images. However, the spectral distortion of the IHS method seriously deteriorates the quality of the fused images. The spectral distortion arises from adding the difference between the PAN image and the intensity image directly to the original RGB bands. To avoid or mitigate the influence of pixels with unusually large values in the difference image, the MMSE model is utilized to estimate the new intensity image from the PAN and intensity images.

QuickBird PAN and LRMS images were fused to evaluate the performance of the proposed algorithm, with the fast IHS (FIHS) method used as a reference for analyzing the results of the hybrid IHS (HIHS) method. The comparison confirms that the HIHS results preserve most of the spectral information with little spectral distortion, while the FIHS results show significant spectral distortion and hence worse fusion quality. The proposed hybrid method therefore outperforms the commonly used FIHS method by providing higher quality fusion results.

Cite this paper

Ding, H.Y. and Shi, W.Z. (2017) A Novel Hybrid Pan-Sharpen Method Using IHS Transform and Optimization. Advances in Remote Sensing, 6, 229-243. https://doi.org/10.4236/ars.2017.63017

References

1. Malpica, J.A. (2007) Hue Adjustment to IHS Pan-Sharpened IKONOS Imagery for Vegetation Enhancement. IEEE Geoscience and Remote Sensing Letters, 4, 27-31. https://doi.org/10.1109/LGRS.2006.883523

2. Tu, T.M., Huang, P.S., Hung, C.L., et al. (2004) A Fast Intensity-Hue-Saturation Fusion Technique with Spectral Adjustment for IKONOS Imagery. IEEE Geoscience and Remote Sensing Letters, 1, 309-312. https://doi.org/10.1109/LGRS.2004.834804

3. Ranchin, T. and Wald, L. (2000) Fusion of High Spatial and Spectral Resolution Images: The ARSIS Concept and Its Implementation. Photogrammetric Engineering and Remote Sensing, 66, 49-61.

4. Gonzalez-Audicana, M., Saleta, J.L., Catalan, R.G., et al. (2004) Fusion of Multispectral and Panchromatic Images Using Improved IHS and PCA Mergers Based on Wavelet Decomposition. IEEE Transactions on Geoscience and Remote Sensing, 42, 1291-1299. https://doi.org/10.1109/TGRS.2004.825593

5. Chu, H. and Zhu, W. (2008) Fusion of IKONOS Satellite Imagery Using IHS Transform and Local Variation. IEEE Geoscience and Remote Sensing Letters, 5, 653-657. https://doi.org/10.1109/LGRS.2008.2002034

6. Tu, T.M., Su, S.C., Shyu, H.C., et al. (2001) A New Look at IHS-Like Image Fusion Methods. Information Fusion, 2, 177-186. https://doi.org/10.1016/S1566-2535(01)00036-7

7. Malik, M.H. and Gilani, S.A.M. (2008) Adaptive Image Fusion Scheme Based on Contourlet Transform, Kernel PCA and Support Vector Machine. In: Innovation and Advanced Techniques in Systems, Computing Sciences and Software Engineering, 313-318.

8. Shah, V.P., Younan, N.H. and King, R.L. (2008) An Efficient Pan-Sharpening Method via a Combined Adaptive PCA Approach and Contourlets. IEEE Transactions on Geoscience and Remote Sensing, 46, 1323-1334. https://doi.org/10.1109/TGRS.2008.916211

9. Thomas, C., Ranchin, T., Wald, L., et al. (2008) Synthesis of Multispectral Images to High Spatial Resolution: A Critical Review of Fusion Methods Based on Remote Sensing Physics. IEEE Transactions on Geoscience and Remote Sensing, 46, 1301-1312. https://doi.org/10.1109/TGRS.2007.912448

10. Pohl, C. and van Genderen, J.L. (1998) Multisensor Image Fusion in Remote Sensing: Concepts, Methods and Applications. International Journal of Remote Sensing, 19, 823-854. https://doi.org/10.1080/014311698215748

11. Garzelli, A., Nencini, F. and Capobianco, L. (2008) Optimal MMSE Pan Sharpening of Very High Resolution Multispectral Images. IEEE Transactions on Geoscience and Remote Sensing, 46, 228-236. https://doi.org/10.1109/TGRS.2007.907604

12. Ranchin, T., Aiazzi, B., Alparone, L., et al. (2003) Image Fusion: The ARSIS Concept and Some Successful Implementation Schemes. ISPRS Journal of Photogrammetry and Remote Sensing, 58, 4-18.

13. Sirguey, P., Mathieu, R., Arnaud, Y., et al. (2008) Improving MODIS Spatial Resolution for Snow Mapping Using Wavelet Fusion and ARSIS Concept. IEEE Geoscience and Remote Sensing Letters, 5, 78-82. https://doi.org/10.1109/LGRS.2007.908884

14. Amolins, K., Zhang, Y. and Dare, P. (2007) Wavelet Based Image Fusion Techniques: An Introduction, Review and Comparison. ISPRS Journal of Photogrammetry and Remote Sensing, 62, 249-263.

15. Kim, Y., Lee, C., Han, D., et al. (2011) Improved Additive-Wavelet Image Fusion. IEEE Geoscience and Remote Sensing Letters, 8, 263-267. https://doi.org/10.1109/LGRS.2010.2067192

16. Zhang, Y. and Hong, G. (2005) An IHS and Wavelet Integrated Approach to Improve Pan-Sharpening Visual Quality of Natural Colour IKONOS and QuickBird Images. Information Fusion, 6, 225-234.

17. Shi, W., Zhu, C., Tian, Y., et al. (2005) Wavelet-Based Image Fusion and Quality Assessment. International Journal of Applied Earth Observation and Geoinformation, 6, 241-251.

18. Alejaily, A.M., Rube, I.A.E. and Mangoud, M.A. (2008) Fusion of Remote Sensing Images Using Contourlet Transform. In: Innovation and Advanced Techniques in Systems, Computing Sciences and Software Engineering, 213-218. https://doi.org/10.1007/978-1-4020-8735-6_40

19. Chang, X., Jiao, L., Liu, F., et al. (2010) Multicontourlet-Based Adaptive Fusion of Infrared and Visible Remote Sensing Images. IEEE Geoscience and Remote Sensing Letters, 7, 549-553. https://doi.org/10.1109/LGRS.2010.2041323

20. Chitaliya, N.G. and Trivedi, A.I. (2010) An Efficient Method for Face Feature Extraction and Recognition Based on Contourlet Transforms and Principal Component Analysis. Procedia Computer Science, 2, 52-61.

21. Qu, X.B., Yan, J.W., Xiao, H.Z., et al. (2008) Image Fusion Algorithm Based on Spatial Frequency-Motivated Pulse Coupled Neural Networks in Nonsubsampled Contourlet Transform Domain. Acta Automatica Sinica, 34, 1508-1514.

22. Saeedi, J. and Faez, K. (2011) A New Pan-Sharpening Method Using Multiobjective Particle Swarm Optimization and the Shiftable Contourlet Transform. ISPRS Journal of Photogrammetry and Remote Sensing, 66, 365-381.

23. Yang, X.H. and Jiao, L.C. (2008) Fusion Algorithm for Remote Sensing Images Based on Nonsubsampled Contourlet Transform. Acta Automatica Sinica, 34, 274-281. https://doi.org/10.3724/SP.J.1004.2008.00274

24. Zaveri, T. and Zaveri, M. (2009) A Novel Hybrid Pansharpening Method Using Contourlet Transform. In: Pattern Recognition and Machine Intelligence (PReMI), LNCS 5909, 363-368.

25. Miao, Q., Shi, C., Xu, P., et al. (2011) A Novel Algorithm of Image Fusion Using Shearlets. Optics Communications, 284, 1540-1547.

26. Gonzalez-Audicana, M., Otazu, X., Fors, O., et al. (2006) A Low Computational-Cost Method to Fuse IKONOS Images Using the Spectral Response Function of Its Sensors. IEEE Transactions on Geoscience and Remote Sensing, 44, 1683-1691. https://doi.org/10.1109/TGRS.2005.863299