Advances in Remote Sensing
Vol.06 No.01(2017), Article ID:73980,10 pages
10.4236/ars.2017.61005

Hyperspectral Image Classification Based on Hierarchical SVM Algorithm for Improving Overall Accuracy

Lida Hosseini, Ramin Shaghaghi Kandovan

Department of Communications, College of Electrical Engineering, Yadegar-e-Imam Khomeini (RAH) Shahr-e-Rey Branch, Islamic Azad University, Tehran, Iran

Copyright © 2017 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: December 27, 2016; Accepted: February 5, 2017; Published: February 8, 2017

ABSTRACT

Hyperspectral image classification is one of the main challenges in remote sensing applications. Its accuracy depends on the number of classes, the number of training samples, and the dimension of the feature space. Performance degrades as the number of classes increases and the number of training samples decreases, while a larger number of features brings considerable data redundancy and computational complexity that confuse the classifier. In order to deal with the Hughes phenomenon in hyperspectral image data, a hierarchical algorithm based on SVM is proposed in this paper. In the proposed hierarchical algorithm, classification is accomplished in two levels. First, clusters containing similar classes are defined according to the Euclidean distance between the class centers, and the SVM algorithm is applied to the clusters using the selected features. In the next step, the classes within every cluster are discriminated by the SVM algorithm using even fewer features. The features are selected based on the correlation between the features and the classes determined at each level. The numerical results show that classification accuracy is improved by the proposed hierarchical SVM compared with the conventional SVM: the number of bands used for classification was reduced to 50, while the classification accuracy increased from 73% with the conventional SVM to 80% with the proposed hierarchical SVM algorithm.

Keywords:

Feature Reduction Methods, Clustering Methods, Hyperspectral Image Classification, Support Vector Machine

1. Introduction

In order to discriminate between similar species, hyperspectral images (HSI), which contain a large number of spectral bands, were introduced. The large number of spectral bands in hyperspectral remote sensing images challenges classification algorithms from two perspectives. First, because the spectral bands are narrow and close together, the information redundancy is significant. Second, the high information volume leads to confusion and degrades the performance of the classification algorithm.

HSI classification is a significant challenge in remote sensing applications. Generally, HSI classification algorithms fall into three categories: supervised, unsupervised, and semi-supervised. Due to the high dimension of the feature space of hyperspectral images, supervised algorithms encounter the Hughes phenomenon. Two approaches have been proposed to solve this problem. The first, semi-supervised learning [1] , avoids the Hughes phenomenon by predicting initial labels for the test pixels. The second, feature space reduction [2] , which includes two different methods, feature extraction [3] and feature selection [4] , reduces computational complexity and increases prediction accuracy. In [5] , a Genetic Algorithm (GA) based wrapper method is presented for the classification of hyperspectral images using the SVM, a state-of-the-art classifier that has found success in a variety of areas.

A large number of algorithms have been proposed for HSI classification in recent decades. Among these methods, SVMs are the most compatible with the HSI classification optimization problem [6] [7] [8] [9] . In [10] , the SVM is introduced to classify the spectral data directly with a polynomial kernel. In order to improve classification performance, different kinds of SVM-based algorithms [11] - [19] have been proposed. Semi-supervised learning based on labeled and unlabeled samples, and kernel combinations that integrate both spectral and spatial information, are two ways to deal with the Hughes phenomenon and the linear nature of the basic SVM algorithm.

The SVM algorithm is particularly attractive in remote sensing (RS) applications. Its main properties can be summarized as follows:

・ The SVM algorithm is designed based on the structural risk minimization principle, which results in high classification accuracy and very good generalization capability. This property is significant in the HSI classification problem, with its high-dimensional feature space and few training samples.

・ The data are mapped into a high-dimensional feature space by the kernel function in order to solve non-linearly separable classification problems. In that space, the data can be separated by a simple linear function.

・ The optimization problem in the learning process of the classifier is convex and is solved by linearly constrained quadratic programming (QP), which is characterized by a unique solution. Thus, the system cannot fall into suboptimal solutions associated with local minima.

・ A dual formulation of the convex optimization problem can be derived in which only the non-zero Lagrange multipliers are needed to define the separating hyperplane. This is related to the sparseness of the solution.
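As a minimal illustration of these properties (not taken from the paper), the following Python sketch fits an SVM with a Gaussian (RBF) kernel to a toy non-linearly separable data set and inspects the sparseness of the dual solution; the use of scikit-learn and the toy data are assumptions of the sketch.

```python
# Minimal sketch (assumption, not from the paper): an RBF-kernel SVM on a toy
# non-linearly separable data set; only the support vectors (non-zero Lagrange
# multipliers) define the separating hyperplane.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)  # toy non-linear data

clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # Gaussian (RBF) kernel
clf.fit(X, y)                                  # convex QP with a unique solution

# Sparseness of the dual solution: only a subset of training samples matter.
print("training samples:", len(X))
print("support vectors :", clf.support_vectors_.shape[0])
print("non-zero dual coefficients:", clf.dual_coef_.shape)
```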

Given the high information volume of hyperspectral images, a hierarchical algorithm based on SVM is proposed that processes the data in two stages without processing the redundant information. In the proposed algorithm, clusters containing several similar classes are introduced. Similar classes are determined based on the Euclidean distance between the class centers. Because there are fewer clusters and less similarity between clusters, only a limited number of features is required to assign pixels to clusters. Features are selected based on the correlation between the cluster labels and the features. The number of clusters and the features used at each stage are determined by the preprocessing block. Then, the SVM algorithm is applied at each stage and the predicted labels are produced. First, the proposed classifier and the preprocessing block are explained. Then, the data set used in the evaluation process is presented. Finally, experimental results are shown to evaluate the hierarchical SVM method.

The proposed method is presented in Section 2. Section 3 presents the simulation results and discusses the parameters that affect classification accuracy. Finally, Section 4 concludes the paper.

2. Material and Method

In this paper, the proposed algorithm is based on the SVM algorithm, which is a supervised machine learning method. In general, supervised learning consists of the following stages:

1) Prepare the image: the preprocessing block is responsible for preparing the data for the image classification algorithm.

2) Select the algorithm: the algorithm is selected based on factors such as the speed of the learning process, memory requirements, prediction accuracy on new data, and the transparency of the relationship between output and input.

3) Fit the model to the training data.

4) Apply the model to the test data (prediction).

In this paper, in order to cope with the effect of the high information volume of the images on classification accuracy, the preprocessing block is designed according to the data structure. The proposed hierarchical classifier is shown in Figure 1. In this design, the data set is analyzed by the preprocessing block so that the required number of classes and features is determined at each stage. Classification accuracy depends on the number of classes, training samples, and features. Assuming a constant number of samples and features, classification accuracy decreases as the number of classes increases. On the other hand, reducing the number of training samples degrades the classification performance. The large number of features in HSI and the correlation among them increase data redundancy and computational complexity, which confuses the classification results. The high spectral resolution of hyperspectral images is intended to enhance the discrimination of highly similar classes. The proposed algorithm reduces computational complexity by combining similar classes and choosing a limited number of features at the first level. In the next step, the classes within every cluster are separated.
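The following Python sketch outlines the two-level scheme described above under stated assumptions: the helper names (cluster_of, feats_level1, feats_level2) and the use of scikit-learn's SVC are illustrative and do not appear in the paper; the cluster assignments and band subsets are assumed to come from the preprocessing block.

```python
# Simplified sketch of a two-level hierarchical SVM (illustrative, not the
# authors' code). X_* are pixel-by-band numpy arrays, y_* are integer labels.
import numpy as np
from sklearn.svm import SVC

def hierarchical_predict(X_train, y_train, X_test,
                         cluster_of,        # assumed dict: class label -> cluster id
                         feats_level1,      # assumed band indices for level 1
                         feats_level2):     # assumed dict: cluster id -> band indices
    # Level 1: classify pixels into clusters of similar classes (few features).
    c_train = np.array([cluster_of[c] for c in y_train])
    svm1 = SVC(kernel="rbf").fit(X_train[:, feats_level1], c_train)
    c_pred = svm1.predict(X_test[:, feats_level1])

    # Level 2: within each predicted cluster, separate the original classes.
    y_pred = np.empty(len(X_test), dtype=y_train.dtype)
    for k in np.unique(c_pred):
        mask_tr = c_train == k
        mask_te = c_pred == k
        classes_k = np.unique(y_train[mask_tr])
        if classes_k.size == 1:              # singleton cluster: label directly
            y_pred[mask_te] = classes_k[0]
            continue
        svm2 = SVC(kernel="rbf").fit(X_train[mask_tr][:, feats_level2[k]],
                                     y_train[mask_tr])
        y_pred[mask_te] = svm2.predict(X_test[mask_te][:, feats_level2[k]])
    return y_pred
```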

Preprocessing Block

The preprocessing block determines the new clusters and features that the classifiers require at both levels. The mean of the pixels in each class is considered the class center. A new cluster groups together the classes whose centers have the minimum Euclidean distance. In this block, the feature selection method is of the filter type. The feature ranking is determined by the correlation criterion between the features and the labels, according to Equation (1).

R(i) = cov(x_i, Y) / √(var(x_i) · var(Y))     (1)

Here, x_i and Y are the i-th feature and the label vector, respectively. The correlation criterion reveals the linear dependence between a feature and the labels. In addition, for class aggregation, the mean of each class is used to represent that class. This block is shown in Figure 2.
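A minimal sketch of the preprocessing computations, assuming numpy arrays X (pixels × bands) and y (labels): class centers are the per-class mean spectra, cluster candidates follow from the Euclidean distances between centers, and bands are ranked by the correlation criterion of Equation (1). The function names are illustrative, not from the paper.

```python
# Illustrative preprocessing sketch (assumed helpers, not the authors' code).
import numpy as np

def class_centers(X, y):
    """Mean spectrum of each class (the class center)."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def pairwise_center_distances(centers):
    """Euclidean distances between all pairs of class centers; the closest
    pairs are candidates for merging into one cluster."""
    labels = sorted(centers)
    return {(a, b): np.linalg.norm(centers[a] - centers[b])
            for i, a in enumerate(labels) for b in labels[i + 1:]}

def correlation_ranking(X, y):
    """Rank bands by |R(i)| = |cov(x_i, Y)| / sqrt(var(x_i) var(Y)), Eq. (1)."""
    Y = y.astype(float)
    R = np.array([np.cov(X[:, i], Y, bias=True)[0, 1] /
                  np.sqrt(X[:, i].var() * Y.var())
                  for i in range(X.shape[1])])
    return np.argsort(-np.abs(R))  # band indices, most correlated first
```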

3. Discussion and Simulation Results

As noted earlier, the classification accuracy of HSI remote sensing images depends on the number of classes, features, and training data, as well as on the kernel function. Overall classification accuracy is reduced as the number of classes grows. According to the simulation, overall classification accuracy reaches 73% when the SVM is applied to the IPS image with 16 classes and 100 features, while the overall accuracy approaches 86% when the number of classes and features is reduced to 7 and 50, respectively. Different kernel functions vary the classification accuracy by about 20%. Given the tradeoff between accuracy and complexity, the Gaussian kernel function is acceptable. The limited amount of training data is the most important factor reducing classification accuracy. In order to determine the range of changes, the simulation was performed assuming 50 features of the IPS data. As shown

Figure 1. The proposed hierarchical classifier block.

Figure 2. Preprocessing block diagram.

Figure 3. Classification accuracy of SVM algorithm versus training data set.

in Figure 3, if the training data set is increased from 10% to 75%, the overall accuracy increases by as much as 10%.

Another factor contributing to HSI classification accuracy is the number of features. It might seem that a larger number of features should increase the classifier accuracy, but because the amount of training data is limited, the classification accuracy does not improve in practice as the number of features increases. The SVM algorithm was applied to the IPS data with different numbers of features. As shown in Figure 4, classification accuracy is not a monotonically increasing function of the number of features. Not only does increasing the number of features beyond 170 fail to improve the classification accuracy, the accuracy actually decreases as the computational complexity and information redundancy increase.
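A sketch of how an accuracy-versus-feature-count curve such as Figure 4 could be produced is given below; it assumes a labeled pixel matrix X, labels y (as loaded in the data-set sketch of Section 3.1), and the correlation_ranking helper sketched earlier, and the feature counts and random split are illustrative.

```python
# Illustrative sweep over the number of selected bands (assumed setup).
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.2,
                                           stratify=y, random_state=0)
ranked = correlation_ranking(X_tr, y_tr)       # bands ordered by |R(i)|

for n_feats in (10, 50, 100, 170, 200):
    bands = ranked[:n_feats]
    clf = SVC(kernel="rbf").fit(X_tr[:, bands], y_tr)
    oa = accuracy_score(y_te, clf.predict(X_te[:, bands]))
    print(f"{n_feats:3d} bands -> overall accuracy {oa:.3f}")
```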

3.1. Data Set

First, we evaluate the performance of the proposed method on the Indian Pines data set. This data set consists of 145 × 145 pixels and 224 spectral bands in the

Figure 4. Classification accuracy of SVM algorithm versus the number of feature.

wavelength range 0.4 - 2.5 μm, and contains 16 ground-truth classes corresponding to different plants. Figure 5 illustrates the original image and the ground-truth classes.
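A loading sketch is given below for convenience; it assumes the commonly distributed MATLAB files Indian_pines_corrected.mat and Indian_pines_gt.mat are available locally (the file and variable names are assumptions, not stated in the paper).

```python
# Illustrative loading of the Indian Pines scene (assumed file/variable names).
import numpy as np
from scipy.io import loadmat

cube = loadmat("Indian_pines_corrected.mat")["indian_pines_corrected"]  # 145 x 145 x bands
gt   = loadmat("Indian_pines_gt.mat")["indian_pines_gt"]                # 145 x 145 labels

X = cube.reshape(-1, cube.shape[-1]).astype(float)   # pixels x bands
y = gt.ravel()
labeled = y > 0                                       # keep only ground-truth pixels
X, y = X[labeled], y[labeled]
print(X.shape, np.unique(y).size, "classes")
```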

3.2. Results

In order to evaluate the proposed hierarchical algorithm, its performance was compared with that of the SVM algorithm on the Indian Pines data. The steps of the proposed algorithm are given in Table 1.

The simulation of the SVM algorithm assumes a Gaussian kernel function, 100 features, and a 20% training data set, while the proposed hierarchical algorithm assumes a Gaussian kernel function, 20% training data, and 50 (level 1) and 30 (level 2) features. Figure 6 and Figure 7 illustrate the classification results of the SVM algorithm and the proposed hierarchical algorithm, respectively.
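This comparison could be scripted roughly as follows, reusing the hypothetical helpers from the earlier sketches (correlation_ranking, hierarchical_predict) and an assumed cluster_of mapping produced by the preprocessing block; it is an illustrative setup, not the authors' code.

```python
# Illustrative comparison: flat SVM (100 bands) vs. hierarchical SVM (50/30 bands),
# both with a Gaussian kernel and a 20% training split (assumed setup).
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.2,
                                           stratify=y, random_state=0)
ranked = correlation_ranking(X_tr, y_tr)

# Flat SVM baseline on the 100 top-ranked bands.
b100 = ranked[:100]
oa_flat = accuracy_score(
    y_te, SVC(kernel="rbf").fit(X_tr[:, b100], y_tr).predict(X_te[:, b100]))

# Hierarchical SVM: 50 bands at level 1, 30 bands per cluster at level 2
# (cluster_of and the per-cluster band lists would come from the preprocessing block).
y_hier = hierarchical_predict(
    X_tr, y_tr, X_te, cluster_of,
    feats_level1=ranked[:50],
    feats_level2={k: ranked[:30] for k in set(cluster_of.values())})
oa_hier = accuracy_score(y_te, y_hier)

print(f"flat SVM OA: {oa_flat:.3f}   hierarchical SVM OA: {oa_hier:.3f}")
```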

Table 2 details the classification accuracy of the two classifiers, the conventional SVM and the proposed hierarchical method, on the Indian Pines data.

4. Conclusion

Because of the correlation between classes and features in hyperspectral images, not all features are needed to discriminate the classes. In the proposed algorithm, classification is accomplished in two levels so that the computational complexity is reduced and the overall accuracy increases. Feature selection is based on a filter method whose decision criterion is the correlation between classes and features. Thus, the proposed hierarchical SVM algorithm achieves an acceptable accuracy


Figure 5. (a) Original image; (b) Ground-truth classes of Indian Pines data set.

Table 1. The proposed algorithm steps.

Figure 6. Classification map with SVM algorithm.

Figure 7. Classification map with proposed hierarchical SVM algorithm.

Table 2. Classification Accuracy on IPS for Hierarchical SVM and SVM.

with a smaller number of features. The simulation results also show an improvement of about 7% in the accuracy of the proposed method compared with the SVM algorithm.

Cite this paper

Hosseini, L. and Kandovan, R.S. (2017) Hyperspectral Image Classification Based on Hierarchical SVM Algorithm for Improving Overall Accuracy. Advances in Remote Sensing, 6, 66-75. https://doi.org/10.4236/ars.2017.61005

References

1. Chi, M. and Bruzzone, L. (2007) Semisupervised Classification of Hyperspectral Images by SVMs Optimized in the Primal. IEEE Transactions on Geoscience and Remote Sensing, 45, 1870-1880. https://doi.org/10.1109/TGRS.2007.894550

2. Plaza, A., Martínez, P., Plaza, J. and Pérez, R. (2005) Dimensionality Reduction and Classification of Hyperspectral Image Data Using Sequences of Extended Morphological Transformations. IEEE Transactions on Geoscience and Remote Sensing, 43, 466-479. https://doi.org/10.1109/TGRS.2004.841417

3. Benediktsson, J.A., Pesaresi, M. and Arnason, K. (2003) Classification and Feature Extraction for Remote Sensing Images from Urban Areas Based on Morphological Transformations. IEEE Transactions on Geoscience and Remote Sensing, 41, 1940-1949. https://doi.org/10.1109/TGRS.2003.814625

4. Chandrashekar, G. and Sahin, F. (2014) A Survey on Feature Selection Methods. Computers and Electrical Engineering, 16-28. https://doi.org/10.1016/j.compeleceng.2013.11.024

5. Li, Z., et al. (2008) A Genetic Algorithm Based Wrapper Feature Selection Method for Classification of Hyperspectral Images Using Support Vector Machine. Geoinformatics 2008 and Joint Conference on GIS and Built Environment: Classification of Remote Sensing Images, International Society for Optics and Photonics. https://doi.org/10.1117/12.813256

6. Melgani, F. and Bruzzone, L. (2004) Classification of Hyperspectral Remote Sensing Images with Support Vector Machines. IEEE Transactions on Geoscience and Remote Sensing, 42, 1778-1790. https://doi.org/10.1109/TGRS.2004.831865

7. Melgani, F. and Bruzzone, L. (2004) Classification of Hyperspectral Remote Sensing Images with Support Vector Machines. IEEE Transactions on Geoscience and Remote Sensing, 42, 1778-1790.

8. Meyer, D. and Wien, F.T. (2015) Support Vector Machines: The Interface to libsvm in Package e1071.

9. Ding, S. and Chen, L. (2009) Classification of Hyperspectral Remote Sensing Images with Support Vector Machines and Particle Swarm Optimization. 2009 International Conference on Information Engineering and Computer Science (ICIECS), 1-5. https://doi.org/10.1109/iciecs.2009.5363456

10. Gualtieri, J.A., Chettri, S.R., Cromp, R.F. and Johnson, L.F. (1999) Support Vector Machine Classifiers as Applied to AVIRIS Data. Proceedings of the Airborne Geoscience Workshop, 217-227.

11. Xia, J., Chanussot, J., Du, P. and He, X. (2015) Rotation-Based Support Vector Machine Ensemble in Classification of Hyperspectral Data with Limited Training Samples. IEEE Transactions on Geoscience and Remote Sensing, 54, 1519-1531. https://doi.org/10.1109/TGRS.2015.2481938

12. Shao, Z., Zhang, L., Zhou, X. and Ding, L. (2014) A Novel Hierarchical Semisupervised SVM for Classification of Hyperspectral Images. IEEE Geoscience and Remote Sensing Letters, 11, 1609-1613. https://doi.org/10.1109/LGRS.2014.2302034

13. He, M., Imran, F., Belkacem, B. and Mei, S. (2013) Improving Hyperspectral Image Classification Accuracy Using Iterative SVM with Spatial-Spectral Information. 2013 IEEE China Summit & International Conference on Signal and Information Processing, Beijing, 6-10 July 2013, 471-475. https://doi.org/10.1109/ChinaSIP.2013.6625384

14. Kuo, B., Ho, H., Li, C., Hung, C. and Taur, J. (2014) A Kernel-Based Feature Selection Method for SVM with RBF Kernel for Hyperspectral Image Classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 7, 317-326. https://doi.org/10.1109/JSTARS.2013.2262926

15. Rick, A. and Fann, G. (2007) Feature Selection and Classification of Hyperspectral Images with Support Vector Machines. IEEE Geoscience and Remote Sensing Letters, 4, 674-677. https://doi.org/10.1109/LGRS.2007.905116

16. Camps-Valls, G. and Bruzzone, L. (2009) Kernel Methods for Remote Sensing Data Analysis. Vol. 26, Wiley, New York. https://doi.org/10.1002/9780470748992

17. Grégoire, M. and Lennon, M. (2003) Support Vector Machines for Hyperspectral Image Classification with Spectral-Based Kernels. 2003 IEEE International Geoscience and Remote Sensing Symposium, 1, 288-290.

18. Baassou, B., He, M. and Mei, S. (2013) An Accurate SVM-Based Classification Approach for Hyperspectral Image Classification. 2013 21st International Conference on Geoinformatics, Kaifeng, 20-22 June 2013, 1-7. https://doi.org/10.1109/Geoinformatics.2013.6626036

19. Yuliya, T., Fauvel, M., Chanussot, J. and Benediktsson, J. (2010) SVM- and MRF-Based Method for Accurate Classification of Hyperspectral Images. IEEE Geoscience and Remote Sensing Letters, 7, 736-740. https://doi.org/10.1109/LGRS.2010.2047711