
In the interpretation of remote sensing images, images supplied by different sensors can be made more understandable. For better visual perception of these images, it is essential to apply a series of pre-processing steps and elementary corrections, followed by the main processing steps for more precise analysis. Several processing approaches exist, depending on the type of remote sensing image. The approach discussed in this article, image fusion, uses the natural colors of an optical image to add color to a grayscale satellite image, enabling better observation of the high-resolution (HR) image from the OLI sensor of Landsat-8. This process has previously been performed with emphasis on the details of the fusion technique; here, however, we apply the concept of the interpolation process. In fact, widely used software tools such as ENVI and ERDAS, the best-known remote sensing image processing packages, offer only classical interpolation techniques (such as bi-linear (BL) and bi-cubic/cubic convolution (CC)). Consequently, ENVI- and ERDAS-based research on image fusion, and even other fusion research, rarely uses newer and better interpolators and concentrates mainly on the fusion algorithm's details to achieve better quality; we therefore focus exclusively on the impact of interpolation on fusion quality in Landsat-8 multispectral images. The important feature of this approach is the use of a statistical, adaptive, and edge-guided interpolation method to improve the color quality of the images in practice. Numerical simulations show that selecting a suitable interpolation technique for MRF-based images yields better quality than the classical interpolators.

Processing of remote sensing images is a way of obtaining information for different applications in the geosciences. These images are widely used in fields such as physical geography and satellite photogrammetric studies, the study of climate change, earth physics and earthquake engineering, hydrology and soil erosion, forestry studies, and various fields of agriculture. Remote sensing images come in different types, such as ground photos, airborne photos, and satellite images. Satellite images can be divided into images in the visible region (optical images), thermal (infrared) images, radar images, laser images, and so on. For better visual perception of the images, it is essential to apply a series of pre-processing steps and elementary corrections, followed by the main processing steps for more precise analysis of the image [

Several remote sensing satellites have been launched for different missions. One or more sensors are placed on each of these satellites for different purposes. Depending on the application, each sensor takes images in particular frequency bands and provides the corresponding information. The Landsat satellite series consists (presently) of a group of 8 satellites built through the cooperation of NASA and the USGS, launched since 1972. Currently, only the Landsat-7 and Landsat-8 satellites are active (available); they were launched in 1999 and 2013, respectively.

The most popular satellite of this group is Landsat-7, and its multispectral sensor, named ETM+, is one of the most popular remote sensing sensors in the world. The Landsat-8 satellite was designed for a mission of at least ten years, and two multispectral sensors were placed on it. These two sensors are named OLI and TIRS, and they provide images in nine and two frequency bands, respectively. The OLI sensor provides multispectral images that contain almost all of the ETM+ bands; however, they have been improved in SNR and spatial resolution. This sensor takes images in the visible and IR regions, while the TIRS sensor, which is a thermal sensor, takes images in only two IR bands. Some of the OLI sensor's bands are presented in the table below (spatial resolution measured in meters on the ground), so it has a good resolution.

In the following, we overview some fusion approaches; however, due to our concentration on another process (i.e., interpolation [

| Band No. | Spectral Band | Wavelength Range (µm) | Spatial Resolution (m) |
|---|---|---|---|
| 2 | Blue (B) | 0.450 - 0.515 | 30 |
| 3 | Green (G) | 0.525 - 0.600 | 30 |
| 4 | Red (R) | 0.630 - 0.680 | 30 |
| 8 | Panchromatic (PAN) | 0.500 - 0.680 | 15 |

There are two principal approaches to the colorization of grayscale images. The first approach comprises methods that use virtual (pseudo) color to produce color images. Although these techniques are mostly old, they remain the only means of colorization in some specific applications. The second approach comprises methods that use another color image to produce color in a grayscale image; generally, this color image may be produced by other tools or by the same tools. In this article, the second approach is studied: we colorize the grayscale image with a color image provided by the same sensor. This process is called MS image fusion or pan-sharpening, and it has previously been performed with emphasis on the details of the fusion technique; here, however, we apply the concept of the interpolation process, which has not received suitable attention in past studies. In fact, widely used software tools such as ENVI and ERDAS, the best-known remote sensing image processing packages, offer only classical interpolation techniques (such as BL and CC), so ENVI- and ERDAS-based research on image fusion, and even other fusion research, rarely uses newer and better interpolators. We therefore focus exclusively on the impact of interpolation on fusion quality in a specific application, i.e., Landsat multispectral images. Obviously, these images do not need geometrical corrections because they were provided by the same sensor. The important point with these images is the fusion of high- and middle-resolution images. We use an elementary approach called IHS [
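The IHS substitution idea described above can be sketched in a few lines. This is a minimal illustration, not the article's exact implementation: it assumes the simple linear IHS model I = (R + G + B)/3 and substitutes the PAN band for the intensity component by adding the intensity difference to each band; array names and value ranges are illustrative assumptions.

```python
import numpy as np

def ihs_fuse(ms_rgb, pan):
    """IHS-style pan-sharpening sketch.

    ms_rgb : float array (H, W, 3), MS image already upsampled to the PAN grid.
    pan    : float array (H, W), panchromatic band, same value range as ms_rgb.
    Uses the linear model I = (R + G + B) / 3 and replaces the intensity
    with PAN by adding the difference (pan - I) to every band.
    """
    intensity = ms_rgb.mean(axis=2)        # I component of the MS image
    delta = pan - intensity                # intensity-substitution term
    fused = ms_rgb + delta[..., None]      # add equally to R, G, B
    return np.clip(fused, 0.0, 1.0)

# toy example: flat 2x2 MS image fused with a brighter PAN band
ms = np.full((2, 2, 3), 0.4)
pan = np.full((2, 2), 0.7)
out = ihs_fuse(ms, pan)                    # each band becomes 0.7
```

After fusion, the intensity of the result equals the PAN band exactly, which is the defining property of IHS substitution.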

the Kruse and Raines method [

Therefore, in addition to the IHS method, another question must be answered, namely the resizing issue [
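The resizing step mentioned above is where the classical interpolators (BL and CC) enter. As a point of reference, here is a minimal sketch of 2x bilinear upsampling in pure numpy; the function name and the 2x factor are illustrative assumptions, and real tools such as ENVI and ERDAS apply the same idea with configurable scale factors.

```python
import numpy as np

def upsample2x_bilinear(img):
    """2x bilinear upsampling sketch (the classical 'BL' interpolator).

    Known low-resolution pixels are copied onto the even positions of the
    fine grid; the new samples are averages of their low-res neighbors.
    """
    h, w = img.shape
    out = np.zeros((2 * h - 1, 2 * w - 1), dtype=float)
    out[::2, ::2] = img                                   # copy known pixels
    out[::2, 1::2] = (img[:, :-1] + img[:, 1:]) / 2       # horizontal midpoints
    out[1::2, ::2] = (img[:-1, :] + img[1:, :]) / 2       # vertical midpoints
    out[1::2, 1::2] = (img[:-1, :-1] + img[:-1, 1:] +
                       img[1:, :-1] + img[1:, 1:]) / 4    # cell centers
    return out

lr = np.array([[0.0, 2.0],
               [4.0, 6.0]])
hr = upsample2x_bilinear(lr)    # 3x3 grid; center pixel is the 4-neighbor mean
```

Bilinear interpolation averages across edges regardless of their direction, which is precisely the weakness that the edge-guided method of the next section addresses.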

In this section, we discuss the LMMSE interpolation method, which was presented in [. First, two directional estimates of the missing high-resolution pixel x_{h}(2i, 2j) are calculated for the same position; they are named x_{45} and x_{135} and are derived from Equation (1). We can then introduce an error value for each of these two directions and use them in the subsequent computations; Equation (2) shows the desired error values. In Equation (2), x_{h}(2i, 2j), or in brief x_{h}, is unknown, but it can be estimated such that the estimation error is minimized. We name this estimate of x_{h} as x'_{h}; it is obtained by minimizing the errors. Based on the LMMSE method, according to Equation (3), x'_{h} is a linear combination of x_{45} and x_{135} with weights w_{45} and w_{135}, and the estimation error reaches its minimum by choosing appropriate weights w_{45} and w_{135}.

$$x_{45} = \frac{x(i, j+1) + x(i+1, j)}{2}, \qquad x_{135} = \frac{x(i, j) + x(i+1, j+1)}{2} \tag{1}$$

$$e_{45}(2i, 2j) = x_{45}(2i, 2j) - x_h(2i, 2j), \qquad e_{135}(2i, 2j) = x_{135}(2i, 2j) - x_h(2i, 2j) \tag{2}$$

The weights are given by Equation (4) after the necessary calculations. To compute w_{45} and w_{135}, the directional error variances are calculated according to Equation (5), where u, S_{45}, and S_{135} are obtained from Equations (6) and (7), respectively. Pixels at the other positions are estimated analogously from the two orthogonal directions (0 and 90 degrees), using pixel values that were either available beforehand or obtained as results of the statistical estimation in

$$x'_h = w_{45} x_{45} + w_{135} x_{135}, \quad w_{45} + w_{135} = 1, \quad \{w_{45}, w_{135}\} = \underset{w_{45} + w_{135} = 1}{\arg\min}\, E\{(x'_h - x_h)^2\} \tag{3}$$

$$w_{45} = \frac{\sigma^2(e_{135})}{\sigma^2(e_{45}) + \sigma^2(e_{135})}, \qquad w_{135} = \frac{\sigma^2(e_{45})}{\sigma^2(e_{45}) + \sigma^2(e_{135})} = 1 - w_{45} \tag{4}$$

$$\sigma^2(e_{45}) = \frac{1}{3} \sum_{k=1}^{3} \left(S_{45}(k) - u\right)^2, \qquad \sigma^2(e_{135}) = \frac{1}{3} \sum_{k=1}^{3} \left(S_{135}(k) - u\right)^2 \tag{5}$$

$$u = \frac{x_{45} + x_{135}}{2} \tag{6}$$

$$S_{45} = \{x(i, j+1),\; x'_{45},\; x(i+1, j)\}, \qquad S_{135} = \{x(i, j),\; x'_{135},\; x(i+1, j+1)\} \tag{7}$$
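Equations (1)-(7) can be sketched for a single missing pixel as follows. This is a minimal illustration under stated assumptions: the function name is hypothetical, and x'_{45} and x'_{135} in the sample sets of Equation (7) are approximated here by the directional averages x_{45} and x_{135} themselves.

```python
import numpy as np

def lmmse_diagonal(x, i, j):
    """LMMSE estimate of the missing HR pixel x_h(2i, 2j) from its four
    diagonal low-resolution neighbors, following Equations (1)-(7).

    x is the low-resolution image; (i, j) indexes the top-left neighbor.
    """
    # Eq. (1): two directional averages (45 and 135 degrees)
    x45 = (x[i, j + 1] + x[i + 1, j]) / 2.0
    x135 = (x[i, j] + x[i + 1, j + 1]) / 2.0
    # Eq. (6): common mean of the two directional estimates
    u = (x45 + x135) / 2.0
    # Eq. (7): directional sample sets (x'_45, x'_135 approximated by x45, x135)
    S45 = np.array([x[i, j + 1], x45, x[i + 1, j]])
    S135 = np.array([x[i, j], x135, x[i + 1, j + 1]])
    # Eq. (5): directional error variances
    v45 = np.mean((S45 - u) ** 2)
    v135 = np.mean((S135 - u) ** 2)
    # Eq. (4): each weight is proportional to the OPPOSITE direction's variance,
    # so the smoother (lower-variance) direction dominates the estimate
    if v45 + v135 == 0.0:
        w45 = 0.5                      # flat region: plain average
    else:
        w45 = v135 / (v45 + v135)
    w135 = 1.0 - w45
    # Eq. (3): fused estimate
    return w45 * x45 + w135 * x135

# toy example: a 45-degree edge, so the 45-degree direction is smooth
lr = np.array([[0.0, 0.0],
               [0.0, 10.0]])
est = lmmse_diagonal(lr, 0, 0)   # pulled toward x45 = 0, away from x135 = 5
```

In the example, the variance along the 45-degree diagonal is zero while the 135-degree diagonal crosses the edge, so the estimate lands well below the naive four-neighbor average, which is the edge-guided behavior the method is designed for.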

To evaluate the proposed method, two panchromatic images shown in

from the combination of the 2nd, 3rd, and 4th bands of the OLI, with 30-meter resolution, shown in

in this article. SSIM is shown in Equation (8). In this equation, u_{x} and u_{y} are the mean values of images x and y, σ_{x} and σ_{y} are their standard deviations, σ_{xy} is their covariance, and C_{1}, C_{2}, and C_{3} are small non-negative constants; see more information in this respect in [

$$\text{SSIM} = \frac{(2 u_x u_y + C_1)(2 \sigma_x \sigma_y + C_2)(\sigma_{xy} + C_3)}{(u_x^2 + u_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)(\sigma_x \sigma_y + C_3)} \tag{8}$$
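A direct, global implementation of Equation (8) can be sketched as below. Note this is a whole-image version for illustration; standard SSIM practice computes the index over local windows and averages, and the constant values here are illustrative assumptions.

```python
import numpy as np

def ssim_global(x, y, C1=1e-4, C2=9e-4, C3=4.5e-4):
    """Global SSIM per Equation (8): product of the luminance, contrast,
    and structure terms. C1, C2, C3 are small stabilizing constants
    (assumed values for images scaled to [0, 1])."""
    ux, uy = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    sxy = ((x - ux) * (y - uy)).mean()          # covariance
    lum = (2 * ux * uy + C1) / (ux**2 + uy**2 + C1)
    con = (2 * sx * sy + C2) / (sx**2 + sy**2 + C2)
    struct = (sxy + C3) / (sx * sy + C3)
    return lum * con * struct

a = np.array([[0.2, 0.4],
              [0.6, 0.8]])
score = ssim_global(a, a)   # identical images give SSIM = 1
```

For identical images each of the three factors equals 1, so the index is exactly 1, which matches the interpretation of the similarity percentages reported in the table below.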

| Color Band | Lake | Mountain |
|---|---|---|
| Red | 0.7010 | 0.5713 |
| Green | 0.7165 | 0.5862 |
| Blue | 0.6669 | 0.5691 |
| Average | 0.6948 | 0.5755 |
| Similarity (%) | 84.74 | 78.77 |

One of the best ways to tackle complex problems is through innovative heuristic solutions. Proving such heuristic solutions mathematically is usually not possible, except through simulation; that is, the problems are solved using simulation-based solutions [

Khosravi, M.R., Sharif-Yazd, M., Moghimi, M.K., Keshavarz, A., Rostami, H. and Mansouri, S. (2017) MRF-Based Multispectral Image Fusion Using an Adaptive Approach Based on Edge-Guided Interpolation. Journal of Geographic Information System, 9, 114-125. https://doi.org/10.4236/jgis.2017.92008