Commenced in January 2007
Frequency: Monthly
Edition: International
Satellite Imagery Classification Based on Deep Convolution Network

Authors: Zhong Ma, Zhuping Wang, Congxin Liu, Xiangzeng Liu

Abstract:

Satellite imagery classification is a challenging problem with many practical applications. In this paper, we design a deep convolutional neural network (DCNN) to classify satellite imagery. The contributions of this paper are twofold. First, to cope with the large scale variance in satellite images, we introduce the inception module, which applies multiple filters of different sizes at the same level, as the building block of our DCNN model. Second, we propose a genetic-algorithm-based method to efficiently search for the best hyper-parameters of the DCNN in a large search space. The proposed method is evaluated on a benchmark database. The results show that the hyper-parameter search guides the search towards better regions of the parameter space. Based on the hyper-parameters found, we build our DCNN models and evaluate their performance on satellite imagery classification; the results show that the classification accuracy of the proposed models outperforms the state-of-the-art method.
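The genetic-algorithm hyper-parameter search described above can be sketched in a few lines. The paper does not give its encoding, selection scheme, or fitness function here, so everything below is an illustrative assumption: the hyper-parameter names, the discrete search space, truncation selection, uniform crossover, and the toy fitness (a stand-in for validation accuracy of a trained DCNN) are all hypothetical.

```python
import random

# Hypothetical DCNN search space; names and values are illustrative,
# not taken from the paper.
SEARCH_SPACE = {
    "learning_rate": [1e-1, 1e-2, 1e-3, 1e-4],
    "filters_1x1":   [16, 32, 64],
    "filters_3x3":   [32, 64, 128],
    "filters_5x5":   [16, 32, 64],
    "dropout":       [0.2, 0.4, 0.5],
}
KEYS = list(SEARCH_SPACE)

def random_individual(rng):
    # One individual = one full hyper-parameter assignment.
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def crossover(a, b, rng):
    # Uniform crossover: each gene is copied from one parent at random.
    return {k: (a if rng.random() < 0.5 else b)[k] for k in KEYS}

def mutate(ind, rng, rate=0.1):
    # With small probability, resample a gene from the search space.
    for k in KEYS:
        if rng.random() < rate:
            ind[k] = rng.choice(SEARCH_SPACE[k])
    return ind

def genetic_search(fitness, pop_size=20, generations=10, seed=0):
    rng = random.Random(seed)
    pop = [random_individual(rng) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]  # truncation selection (elitist)
        children = [
            mutate(crossover(rng.choice(parents), rng.choice(parents), rng), rng)
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: in the paper's setting this would be the validation
# accuracy of a DCNN trained with the candidate hyper-parameters.
def toy_fitness(ind):
    return -abs(ind["learning_rate"] - 1e-3) - abs(ind["dropout"] - 0.5)

best = genetic_search(toy_fitness)
```

In the real setting each fitness evaluation requires training a network, so the population size and generation count trade search quality against compute; the elitist selection above is what lets the search concentrate on better regions of the parameter space over successive generations.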

Keywords: Satellite imagery classification, deep convolution network, genetic algorithm, hyper-parameter optimization.

Digital Object Identifier (DOI): https://doi.org/10.5281/zenodo.1125019

