International Journal of Geosciences
Vol. 10, No. 01 (2019), Article ID: 89791, 11 pages
10.4236/ijg.2019.101001

A Review of Researches on Deep Learning in Remote Sensing Application

Ming Zhu1,2*, Yongning He2, Qingyu He2

1Institute of Geoscience and Resources, China University of Geosciences, Beijing, China

2Geographic Information Center of Guangxi, Nanning, China

Copyright © 2019 by author(s) and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: December 18, 2018; Accepted: January 7, 2019; Published: January 10, 2019

ABSTRACT

In recent years, deep learning has been widely used in the field of image understanding and has made breakthrough progress there. Because remote sensing applications are inseparable from image understanding, researchers have carried out a great deal of work on applying deep learning in the remote sensing field and have extended deep learning methods to its various application areas. This paper summarizes the basic principles of deep learning and its research progress and typical applications in remote sensing, introduces the main current deep learning models and their development history, and focuses on analyzing the research status of deep learning in remote sensing image classification, object detection and change detection; on this basis, typical applications and their effects are summarized. Finally, in light of the current application of deep learning in remote sensing, the main problems and future development directions are outlined.

Keywords:

Deep Learning, Remote Sensing Application, CNN, Land Cover Classification, Object Detection, Change Detection

1. Introduction

Remote sensing is a technical means of using sensors on satellites, aircraft or other platforms to collect the radiation information of targets, from which specific information can be derived. In recent years, with the rapid development of remote sensing technology, the capacity for acquiring remote sensing data has kept growing, and the spectral, spatial and temporal resolution of remote sensing imagery has kept improving [1], providing a solid data basis for remote sensing applications. Although better and better imagery can be acquired through remote sensing, in practice its application still relies heavily on manual processing, with machine interpretation serving only as an aid to manual work. Traditionally, machine interpretation of remote sensing imagery has been achieved through statistical methods such as maximum likelihood and K-means clustering, which are based on remote sensing features like spectrum and texture. In the past few years, methods including artificial neural networks, support vector machines, genetic algorithms and object-oriented analysis have developed rapidly and achieved certain results [2]. Generally speaking, however, all these methods require the manual extraction of image features or the design of interpretation rules, which leads to long design cycles and limits the potential for algorithmic improvement. Moreover, the accuracy and efficiency of automatic interpretation of remote sensing imagery cannot meet the needs of most applications. Since remote sensing applications are heavily dependent on manual work, their effectiveness is severely restricted by the experience and expertise of the operator [3].

Deep learning is an important domain of machine learning research. Compared with traditional machine learning, deep learning is a representation-learning method with multiple layers. Data abstraction and feature extraction from lower layers to higher layers are accomplished through simple nonlinear modules. Current deep learning usually uses deep neural networks (DNN) to construct the layers, which are stacks of such simple nonlinear modules. Input data is passed between the layers, whose mapping relationships reduce the dimensionality and extract the key characteristics of the data [4]. Relying on deep convolutional neural networks (DCNN), deep learning provides an end-to-end machine learning model that can extract image features automatically, without extraction algorithms designed by humans. Compared with traditional methods, deep learning is completely data-driven and can automatically find the best ways to extract image features through learning [5] [6].
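As an illustration of this end-to-end, layer-stacking idea, the following minimal sketch (assuming PyTorch; the architecture, band count and class count are hypothetical and not taken from any cited work) builds a small convolutional network in which each layer transforms the output of the previous one through simple nonlinear operations, and the final layer outputs class scores directly from raw image patches.

```python
# A minimal sketch of stacked non-linear modules trained end-to-end:
# no hand-designed features are involved; the convolutions learn them.
import torch
import torch.nn as nn

class SimpleDCNN(nn.Module):
    """Toy deep convolutional network for illustration only."""
    def __init__(self, in_bands=4, num_classes=6):   # hypothetical band/class counts
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 32, kernel_size=3, padding=1),  # low-level features
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),        # higher-level features
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, num_classes),                         # image-level prediction
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example: a batch of 4-band (e.g. R, G, B, NIR) 64 x 64 image patches
scores = SimpleDCNN()(torch.randn(8, 4, 64, 64))
print(scores.shape)  # torch.Size([8, 6])
```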

This paper briefly introduces the development of deep learning, makes a detailed analysis of its current remote sensing application fields of land cover classification, object detection and change detection, expounds the main deep learning methods and research progress in these three fields, and summarizes the current research work and main models. Finally, the application of deep learning in remote sensing is summarized, the existing problems are pointed out, and future development directions are prospected.

2. Common Deep Learning Methods in Remote Sensing Application

Deep learning methods in remote sensing applications are mainly used in three areas, namely land cover classification, object detection and change detection. A review of the current research results indicates that the major technical approach is to translate specific problems into classification or object detection tasks, which are then processed with computer vision deep learning models that have been redesigned and adjusted for the targets of the remote sensing application. The main structure is shown in Figure 1.

2.1. Land Cover Classification Methods of Remote Sensing Image

Land cover classification is a major field of remote sensing application. Its main task is to divide the pixels or regions in remote sensing imagery into several categories according to application requirements [7]. Deep learning models for land cover classification are generally based on the deep belief network (DBN), the convolutional neural network (CNN) and the sparse autoencoder (SAE), among which the deep convolutional neural network is currently the most popular approach.

Many early studies used deep CNNs such as AlexNet and VGGNet and achieved certain results. However, the nature of the AlexNet and VGGNet classification approach is to transform an image into a corresponding feature vector through convolution, pooling and fully connected layers, and to output a value representing the class of the image based on this feature vector. Such an approach therefore addresses classification of the whole image at the image level, whereas land cover classification is an image segmentation problem: what needs to be addressed is multi-class labeling through semantic segmentation of a single image.

To solve the problem of semantic segmentation and multi-class labeling, Long et al. proposed the fully convolutional network (FCN) for semantic segmentation [8]. Based on CNN, FCN replaces the fully connected layers with convolutional layers. At the end of the network, FCN introduces a transposed convolution layer, which upsamples the image features and predicts an output of the same size as the input image, so that every input pixel is predicted and the image is classified pixel by pixel. FCN realizes end-to-end semantic segmentation, but it does not perform well in edge processing and classification accuracy.
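The following minimal sketch (assuming PyTorch; the layer sizes are hypothetical and this is not the original FCN code) illustrates the core idea described above: a 1 × 1 convolution takes the place of the fully connected layer, and a transposed convolution upsamples the coarse score map back to the input resolution so that every pixel receives a prediction.

```python
# A minimal fully convolutional sketch: all layers are convolutional, and a
# transposed convolution restores the input resolution for per-pixel labels.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, in_bands=3, num_classes=6):   # hypothetical counts
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # 1/2 resolution
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # 1/4 resolution
        )
        # 1x1 convolution plays the role of the former fully connected layer
        self.score = nn.Conv2d(64, num_classes, kernel_size=1)
        # transposed convolution upsamples the score map by a factor of 4
        self.upsample = nn.ConvTranspose2d(num_classes, num_classes,
                                           kernel_size=4, stride=4)

    def forward(self, x):
        return self.upsample(self.score(self.encoder(x)))

out = TinyFCN()(torch.randn(1, 3, 128, 128))
print(out.shape)  # torch.Size([1, 6, 128, 128]) -> one class score per pixel
```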

Figure 1. Structural diagram of deep learning model.

With a further optimized network, Badrinarayanan et al. proposed SegNet [9]. SegNet's encoder is based on the first 13 layers of VGG-16, with improvements in the upsampling of the decoding stage; in addition, each decoder has a corresponding encoder, so the same segmentation accuracy can be achieved with fewer training parameters and lower memory overhead. To address the reduced resolution caused by subsampling and pooling, and building on the advantages of the above networks, DeepLab [10] adopts atrous convolution to expand the receptive field and acquire more contextual information. The latest DeepLab V3+ [11] [12] comes with an improved atrous convolution scheme: ResNet pre-trained on ImageNet is used as the main feature extraction network, and within the ResNet residual blocks atrous convolutions with different dilation rates are used to capture multi-scale contextual information in each convolution. To integrate multi-scale information, DeepLab V3+ introduces an encoder-decoder architecture and adopts the Xception model. With these improvements, segmentation accuracy is maintained while the back-end dense CRF is discarded.
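A minimal sketch of atrous (dilated) convolution, assuming PyTorch, is given below; the channel counts and dilation rates are illustrative only. With dilation rate r, a 3 × 3 kernel samples the input with gaps of r − 1 pixels, enlarging the receptive field without adding parameters or reducing resolution, which is why branches with different rates can be combined to capture multi-scale context.

```python
# Atrous (dilated) convolution: same kernel size, larger receptive field,
# unchanged spatial resolution.
import torch
import torch.nn as nn

x = torch.randn(1, 64, 65, 65)                                       # toy feature map
standard  = nn.Conv2d(64, 64, kernel_size=3, padding=1)              # receptive field 3x3
atrous_r2 = nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2)  # effective 5x5
atrous_r4 = nn.Conv2d(64, 64, kernel_size=3, padding=4, dilation=4)  # effective 9x9

# All three preserve the spatial size, so multi-rate branches can be
# concatenated to capture multi-scale context (the idea behind ASPP).
print(standard(x).shape, atrous_r2(x).shape, atrous_r4(x).shape)
```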

At present, although there is a variety of deep learning models for land cover classification, the main architectures all adopt an encoder-decoder structure (Figure 2). In the encoding stage, convolution, pooling and subsampling are used to acquire segmentation features; in the decoding stage, transposed convolution, pooling and upsampling are used to label image regions with the same features, so that land cover classification is achieved through semantic segmentation. At the same time, to improve classification accuracy, some deep learning models introduce a post-processing stage to remove noise and optimize the edges. A comparison of representative image classification methods is shown in Table 1.

Figure 2. Remote sensing image semantic segmentation flow chart.

Table 1. Comparison of representative image classification methods.

2.2. Object Detection

Object detection is another common application of remote sensing. Deep learning models for object detection are mainly based on region-based convolutional neural networks (R-CNN), the earliest proposed deep learning object detection method. Its main idea is to transform the object detection problem into a classification problem: the image is divided into a large number of candidate regions by a selective search algorithm, a CNN is then applied to obtain the feature vectors of the candidate regions, and finally object detection is completed by a classifier that determines the type of each candidate region [13]. The proposal of R-CNN greatly improved the success rate of image object detection, but R-CNN generates partially overlapping candidate regions for each detection target, and these regions are repeatedly fed into the CNN for feature calculation, which reduces detection efficiency. To reduce this repeated computation, He et al. proposed Spatial Pyramid Pooling Networks (SPP-Net) [14], which introduce a spatial pyramid pooling layer after the last convolution layer, so that repetitive processing is eliminated and images of any size can be processed by the CNN. With these improvements, SPP-Net greatly increased the speed of object detection. Based on SPP-Net, Girshick proposed Fast R-CNN [15], which simplifies the spatial pyramid pooling layer of SPP-Net into the RoI pooling layer for feature extraction; the substitution of the SVM classifier by Softmax greatly improves the speed of training and detection, making it more accurate and 213 times faster than R-CNN. To further improve the efficiency of Fast R-CNN in generating candidate regions, Ren et al. proposed Faster R-CNN [16], which introduces the Region Proposal Network (RPN) and combines RPN and Fast R-CNN into an integrated network for generating candidate regions. With further improved network structures, YOLO [17] and the Single Shot MultiBox Detector (SSD) [18] maintain almost the same detection accuracy with significantly higher detection speed. A comparison of representative image object detection methods is shown in Table 2.

Table 2. Comparison of representative image object detection methods.
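As an illustration of how such a two-stage detector is applied in practice, the following sketch uses the pre-trained Faster R-CNN implementation from the torchvision library (an assumption for illustration; none of the cited works use this exact code, and torchvision ≥ 0.13 is assumed for the weights argument). The model proposes candidate regions internally through its RPN and returns boxes, class labels and confidence scores in a single forward pass.

```python
# Applying a Faster R-CNN style detector with torchvision (illustrative only).
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# A dummy 3-band tile; in practice this would be a remote sensing image chip
image = torch.rand(3, 512, 512)
with torch.no_grad():
    detections = model([image])[0]

# Each detection is a bounding box with a class label and a confidence score
for box, label, score in zip(detections["boxes"],
                             detections["labels"],
                             detections["scores"]):
    if score > 0.5:
        print(label.item(), round(score.item(), 3), box.tolist())
```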

2.3. Change Detection

Change detection is the process of detecting changes using remote sensing imagery obtained at different times. These changes are due partly to natural phenomena, such as droughts, floods and landslides, and partly to human activities, such as building new roads, excavating the surface or constructing new houses. Compared with models for land cover classification and object detection, there are fewer deep learning models for image change detection [7]. Current deep-learning-based change detection mainly adopts two technical approaches. One is to detect the corresponding points of two images through deep learning and determine whether those corresponding points have changed. The other is to translate the change detection problem into a land cover classification problem and acquire the changed regions through semantic segmentation, followed by comparison and classification of the resulting map spots. Experimental results show that the semantic segmentation approach is easier to implement, faster, and better in detection accuracy.
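A minimal sketch of the second, segmentation-based approach is given below (assuming NumPy; the `segment` function stands in for any trained semantic segmentation model and is hypothetical): each date is classified independently, and pixels whose class labels differ between the two dates are flagged as changed.

```python
# Change detection via per-date semantic segmentation and label comparison.
import numpy as np

def detect_changes(image_t1, image_t2, segment):
    """Return a boolean change mask from two co-registered images."""
    classes_t1 = segment(image_t1)   # per-pixel class map at time 1
    classes_t2 = segment(image_t2)   # per-pixel class map at time 2
    return classes_t1 != classes_t2  # True where the land cover class changed

# Toy example with a dummy "segmenter" that thresholds the first band
dummy_segment = lambda img: (img[0] > 0.5).astype(np.int32)
t1 = np.random.rand(4, 256, 256)
t2 = np.random.rand(4, 256, 256)
change_mask = detect_changes(t1, t2, dummy_segment)
print(change_mask.mean())            # fraction of pixels flagged as changed
```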

3. Progress in Researches on Deep Learning in Remote Sensing Application

With the constant optimization of deep learning models for remote sensing, deep learning has gradually been applied to land cover classification, object detection and change detection of remote sensing imagery. The results of various applications show that, compared with traditional methods, new breakthroughs have been made in both accuracy and efficiency.

3.1. Imagery-Based Land Cover Classification

Fu et al. [19] extended the FCN for remote sensing image land cover classification: a skip-layer structure is added to enable the FCN to classify multi-resolution images, atrous convolution is introduced to improve the density of the output features, and a CRF is applied during inference to refine the output classes, thus improving the accuracy of high-resolution image classification. To address the problems in vegetation classification, namely the small differences between object features and the loss of features in the encoding stage of the FCN, Zhang et al. [20] added a feature extraction layer whose convolution kernels contain the features of the vegetation to be extracted, together with an encoding layer adopting a non-linear activation function; as a result, the accuracy of vegetation classification is improved. Sharma et al. [21] proposed a deep learning land cover classification method for medium-resolution imagery. Taking Landsat 8 images as the research object, the method changes the CNN input from a single pixel to a 5 × 5 pixel image block, so that the input contains not only the image band information but also the spatial relations of adjacent pixels. The experimental data show that, compared with the pixel-based CNN, the block-based deep learning method increased the overall classification accuracy of farmland, wetland, forest, water body and other classes by 24.23%. Zhang et al. [22] proposed a high-resolution imagery deep learning land cover classification method that integrates a CNN and a Multi-Layer Perceptron (MLP). By combining the image features extracted by the CNN and the MLP with integration rules, the overall classification accuracy is improved and reaches 90.56%, higher than the CNN or MLP used alone. Zhao et al. [23] proposed a deep learning network suitable for multi-scale imagery classification; multi-scale land cover classification with sound accuracy is realized by combining spectral and spatial features with improved classifiers.
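The patch-based input used by Sharma et al. can be illustrated with the following sketch (assuming NumPy; the band count and image size are hypothetical, and this is not the authors' code): each training sample is the 5 × 5 neighbourhood of a pixel across all bands, so the classifier sees spatial context in addition to the spectral values.

```python
# Extracting a 5 x 5 multi-band patch around a pixel as one CNN input sample.
import numpy as np

def extract_patch(image, row, col, size=5):
    """Return the size x size block of all bands centred on (row, col)."""
    half = size // 2
    return image[:, row - half:row + half + 1, col - half:col + half + 1]

landsat = np.random.rand(7, 1000, 1000)   # toy stand-in: 7 bands, 1000 x 1000 pixels
patch = extract_patch(landsat, row=500, col=500)
print(patch.shape)                        # (7, 5, 5) -> one block-based input sample
```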

In agricultural applications, Cai et al. [24] proposed a high-performance crop classification method that takes both time and space into account. Based on Common Land Unit (CLU) data, the spectral information of long time-series imagery over field blocks is combined, and spectral image stacking together with a deep learning algorithm is applied to eliminate the interference of cloud, fog and shadow in local images. Compared with USDA crop data, the overall accuracy of this method for classifying soybeans and corn reached 96%. Wei et al. [25] proposed a cube-pair-based deep convolutional neural network architecture for hyperspectral crop image classification. By using cube pairs, it exploits the data of different bands of hyperspectral imagery and greatly reduces the number of training samples required. Experiments show that, compared with an ordinary deep convolutional neural network, the cube-pair architecture effectively improves the classification accuracy.

3.2. Object Extraction

Chen et al. [26] proposed an urban water body detection method based on deep learning. In this approach, A-SLIC is first applied to segment the remote sensing imagery into superpixels, and then a well-designed deep convolutional neural network is used to extract the high-level features of water bodies. Experiments on several types of water bodies in three cities gave an overall detection accuracy between 98.31% and 99.81%, which is a great improvement.
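The superpixel pre-segmentation step can be sketched as follows, using the standard SLIC implementation from scikit-image as a stand-in for the A-SLIC variant used in the cited work (the parameters are illustrative); each resulting superpixel would then be passed to a CNN for water/non-water classification.

```python
# Superpixel pre-segmentation with standard SLIC (approximation of A-SLIC).
import numpy as np
from skimage.segmentation import slic

rgb = np.random.rand(512, 512, 3)                   # toy image in place of real imagery
segments = slic(rgb, n_segments=500, compactness=10, start_label=1)
print(segments.max(), "superpixels")                # label map, one id per superpixel
```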

Zhong et al. [27] proposed a position-sensitive balancing (PSB) object detection method and designed a detection framework for HSR remote sensing imagery. This framework combines Region Proposal Networks (RPN) with ResNet; a position-sensitive pooling layer is added to enhance translation invariance and improve object detection performance. Experiments show that both the accuracy and the speed of detecting aircraft, vehicles, bridges, ships, sports grounds and other objects in high-resolution remote sensing imagery are significantly improved.

Tian et al. [28] proposed an urban area detection method based on deep learning. It involves constructing a visual dictionary on the basis of a pre-trained deep neural network, followed by training with labeled urban area imagery. The key to this method is how to construct the visual dictionary and perform detection with the deep neural network. Experiments show that, even with small-sample training, this scheme can accurately distinguish urban from non-urban areas.

3.3. Change Detection

To obtain the spectral and texture changes of corresponding points between images, Zhang Xinlong et al. applied a modified change vector analysis algorithm and the grey level co-occurrence matrix, both of which take spatial-contextual information into account. By setting adaptive sampling intervals, samples of the most likely changed and unchanged areas are extracted. A Gaussian-Bernoulli Deep Boltzmann Machine model containing a label layer is constructed and trained to extract the deep features of changed and unchanged areas, thus effectively identifying changed areas [29].
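The basic change vector analysis step referenced above can be sketched as follows (assuming NumPy; the cited work uses a modified CVA that also incorporates spatial context, which is not reproduced here): for each pixel, the magnitude of the spectral difference vector between the two dates indicates how strongly that pixel has changed.

```python
# Basic change vector analysis (CVA): per-pixel magnitude of spectral change.
import numpy as np

def cva_magnitude(image_t1, image_t2):
    """Euclidean norm of the spectral change vector for arrays of shape (bands, H, W)."""
    diff = image_t2.astype(np.float64) - image_t1.astype(np.float64)
    return np.sqrt((diff ** 2).sum(axis=0))

t1 = np.random.rand(6, 300, 300)
t2 = np.random.rand(6, 300, 300)
magnitude = cva_magnitude(t1, t2)
change_candidates = magnitude > magnitude.mean() + 2 * magnitude.std()  # simple threshold
print(change_candidates.sum(), "pixels flagged as likely changed")
```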

Khan et al. proposed a forest change detection method that transforms the change detection task into a region classification problem. Features of change are extracted through a deep neural network; based on these features, a multiresolution profile (MRP) of the target area is built and a candidate set of bounding boxes is generated to detect potentially changed areas. The detection accuracy of the improved model reached 91.6%, which is 16% higher than traditional methods. The model generalizes well and can be widely used in change detection tasks across various regions [30].

3.4. Discussion

Although great progress has been made in the application of deep learning methods in remote sensing, the following shortcomings remain:

1) Lack of strict mathematical interpretation. Deep learning merely fits a mapping between the input data and the output result; there is no strict mathematical basis for the design and improvement of the networks.

2) High requirements for training samples. To achieve good results in application, the requirements for the quantity and quality of training samples are very high. Although some scholars have made progress in small-sample training, practical applications in specific areas still require a large number of training samples to reach higher accuracy.

3) Poor comprehensibility of network features. The features extracted by the network lack practical significance once passed to the deeper levels. Although visual development tools are available, the specific meaning of the features the network extracts automatically cannot be designed, and the construction, adjustment and improvement of deep networks still rely on the experience of developers.

4) Few engineering applications. Most research focuses on network architecture and algorithm verification; there is little research on cloud computing architectures, data storage and retrieval mechanisms for engineering applications, and few engineering projects have been completed and put into practical use.

5) Image recognition based on deep learning relies only on sample training, and images are mapped to specific results through complex computations. In this process, however, deep learning does not truly understand the meaning of the mapping, so prior knowledge cannot be used for image recognition and judgment.

4. Conclusion

Although the application of deep learning in remote sensing is still in its infancy, a large number of studies have proved that deep learning methods can be widely combined with remote sensing applications and achieve higher accuracy and efficiency than traditional methods in land cover classification, object detection and change detection. With the continuous improvement of remote sensing deep learning models, end-to-end application frameworks free of manual feature design will become an important direction for the development of intelligent remote sensing applications.

Acknowledgements

This study is supported by the High Resolution Earth Observation System Regional Industrial Project "Guangxi Beibu Gulf Economic Region Remote Sensing Integrated Service Platform Construction and Application" (84-Y40G07-9010-15/18), by the Guangxi National Geo-survey Project of the Guangxi Bureau of Surveying, Mapping and Geo-information, and by the "Guangxi Sugar Industry Development Big-data Platform" project of the Guangxi Innovation-Driven Development Program (major science and technology special project).

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

Cite this paper

Zhu, M., He, Y.N. and He, Q.Y. (2019) A Review of Researches on Deep Learning in Remote Sensing Application. International Journal of Geosciences, 10, 1-11. https://doi.org/10.4236/ijg.2019.101001

References

1. Li, D.R., Tong, Q.X., Li, R.X., Gong, J.Y. and Zhang, L.P. (2012) Current Issues in High-Resolution Earth Observation Technology. Science China Earth Sciences, 55, 1043-1051. https://doi.org/10.1007/s11430-012-4445-9

2. Pagot, E. and Pesaresi, M. (2008) Systematic Study of the Urban Postconflict Change Classification Performance Using Spectral and Structural Features in a Support Vector Machine. IEEE Journal of Selected Topics in Applied Earth Observations & Remote Sensing, 1, 120-128. https://doi.org/10.1109/JSTARS.2008.2001154

3. Li, D.R., Zhang, L.P. and Xia, G.S. (2014) Automatic Analysis and Mining of Remote Sensing Big Data. Acta Geodaetica et Cartographica Sinica, 43, 1211-121

4. Lecun, Y., Bengio, Y. and Hinton, G. (2015) Deep Learning. Nature, 521, 436-444. https://doi.org/10.1038/nature14539

5. Gong, J.Y. and Ji, S.P. (2018) Photogrammetry and Deep Learning. Acta Geodaetica et Cartographica Sinica, 47, 693-704.

6. Gong, J.Y. and Ji, S.P. (2017) From Photogrammetry to Computer Vision. Geomatics and Information Science of Wuhan University, 42, 1518-1522.

7. Ball, J.E., Anderson, D.T. and Chan, C.S. (2017) A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools, and Challenges for the Community. Journal of Applied Remote Sensing, 11. https://doi.org/10.1117/1.JRS.11.042609

8. Long, J., Shelhamer, E. and Darrell, T. (2015) Fully Convolutional Networks for Semantic Segmentation. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, 3431-3440. https://doi.org/10.1109/CVPR.2015.7298965

9. Badrinarayanan, V., Kendall, A. and Cipolla, R. (2017) SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39, 2481-2495. https://doi.org/10.1109/TPAMI.2016.2644615

10. Chen, L., Papandreou, G., Kokkinos, I., et al. (2018) DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40, 834-848. https://doi.org/10.1109/TPAMI.2017.2699184

11. Chen, L.C., Papandreou, G., Schroff, F., et al. (2017) Rethinking Atrous Convolution for Semantic Image Segmentation. CoRR, abs/1706.05587.

12. Chen, L.C., Zhu, Y., Papandreou, G., Schroff, F. and Adam, H. (2018) Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. CoRR, abs/1802.02611.

13. Girshick, R., Donahue, J., Darrell, T. and Malik, J. (2014) Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, 580-587. https://doi.org/10.1109/CVPR.2014.81

14. He, K., Zhang, X., Ren, S. and Sun, J. (2015) Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37, 1904-1916. https://doi.org/10.1109/TPAMI.2015.2389824

15. Girshick, R. (2015) Fast R-CNN. 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, 1440-1448. https://doi.org/10.1109/ICCV.2015.169

16. Ren, S., He, K., Girshick, R. and Sun, J. (2017) Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39, 1137-1149. https://doi.org/10.1109/TPAMI.2016.2577031

17. Redmon, J., Divvala, S., Girshick, R. and Farhadi, A. (2016) You Only Look Once: Unified, Real-Time Object Detection. IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, 779-788. https://doi.org/10.1109/CVPR.2016.91

18. Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.Y. and Berg, A.C. (2016) SSD: Single Shot MultiBox Detector. Computer Vision-ECCV, Amsterdam, 21-37. https://doi.org/10.1007/978-3-319-46448-0_2

19. Fu, G., Liu, C., Zhou, R., Sun, T. and Zhang, Q. (2017) Classification for High Resolution Remote Sensing Imagery Using a Fully Convolutional Network. Remote Sensing, 9, 498-513. https://doi.org/10.3390/rs9050498

20. Zhang, C., Liu, J., Yu, F., Wan, S., Han, Y., Wang, J. and Wang, G. (2018) Segmentation Model Based on Convolutional Neural Networks for Extracting Vegetation from Gaofen-2 Images. Journal of Applied Remote Sensing, 12, Article ID: 042804.

21. Sharma, A., Liu, X., Yang, X. and Shi, D. (2017) A Patch-Based Convolutional Neural Network for Remote Sensing Image Classification. Neural Networks, 95, 19-28. https://doi.org/10.1016/j.neunet.2017.07.017

22. Zhang, C., Pan, X., Li, H., Gardiner, A., Sargent, A., Hare, J. and Atkinson, P. (2018) A Hybrid MLP-CNN Classifier for Very Fine Resolution Remotely Sensed Image Classification. ISPRS Journal of Photogrammetry and Remote Sensing, 140, 133-144. https://doi.org/10.1016/j.isprsjprs.2017.07.014

23. Yuksel, M.E., Basturk, N.S., Badem, H., Abdullah, C. and Alper, B. (2018) Classification of High Resolution Hyperspectral Remote Sensing Data Using Deep Neural Networks. Journal of Intelligent & Fuzzy Systems, 34, 2273-2285. https://doi.org/10.3233/JIFS-171307

24. Cai, Y., Guan, K., Peng, J., Wang, S., Seifert, C., Wardlow, B. and Li, Z. (2018) A High-Performance and In-Season Classification System of Field-Level Crop Types Using Time-Series Landsat Data and a Machine Learning Approach. Remote Sensing of Environment, 210, 35-47. https://doi.org/10.1016/j.rse.2018.02.045

25. Wei, W., Zhang, J., Zhang, L., Tian, C. and Zhang, Y. (2018) Deep Cube-Pair Network for Hyperspectral Imagery Classification. Remote Sensing, 10, 783. https://doi.org/10.3390/rs10050783

26. Chen, Y., Fan, R., Yang, X., Wang, J. and Latif, A. (2018) Extraction of Urban Water Bodies from High-Resolution Remote-Sensing Imagery Using Deep Learning. Water, 10, 585. https://doi.org/10.3390/w10050585

27. Zhong, Y., Han, X. and Zhang, L. (2018) Multi-Class Geospatial Object Detection Based on a Position-Sensitive Balancing Framework for High Spatial Resolution Remote Sensing Imagery. ISPRS Journal of Photogrammetry and Remote Sensing, 138, 281-294. https://doi.org/10.1016/j.isprsjprs.2018.02.014

28. Tian, T., Li, C., Xu, J. and Ma, J. (2018) Urban Area Detection in Very High Resolution Remote Sensing Images Using Deep Convolutional Neural Networks. Sensors, 18, 904-920. https://doi.org/10.3390/s18030904

29. Zhang, X., Chen, X., Li, F. and Yang, T. (2017) Change Detection Method for High Resolution Remote Sensing Images Using Deep Learning. Acta Geodaetica et Cartographica Sinica, 46, 999-1008.

30. Khan, S.H., He, X., Porikli, F. and Bennamoun, M. (2017) Forest Change Detection in Incomplete Satellite Images with Deep Neural Networks. IEEE Transactions on Geoscience and Remote Sensing, 55, 5407-5423. https://doi.org/10.1109/TGRS.2017.2707528