Continual Learning Using Data Generation for Hyperspectral Remote Sensing Scene Classification

Authors: Samiah Alammari, Nassim Ammour

Abstract:

When a deep learning model is exposed to a long sequence of tasks, maintaining good performance normally requires storing the data of previous tasks and retraining the model for every new classification task; otherwise, the model performs poorly due to the catastrophic forgetting phenomenon. To overcome this shortcoming, we developed a continual learning deep model for remote sensing hyperspectral image region classification. The proposed neural network architecture encapsulates two trainable subnetworks. The first module adapts its weights by minimizing the discrimination error between the land-cover classes while learning the new task, and the second module learns to replicate the data of previous tasks by discovering the latent structure of the new task dataset. We conducted experiments on the Indian Pines hyperspectral image (HSI) dataset. The results confirm the capability of the proposed method.
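As an illustration of the two-subnetwork idea described above, the following is a minimal Python/PyTorch sketch, not the authors' implementation: a Classifier subnetwork minimizes the class-discrimination loss on the new task, while a Replicator subnetwork (here a plain autoencoder, an assumed choice) learns to reconstruct the task data so that pseudo-samples of earlier tasks can be replayed during later training. All names, layer sizes, and the replay mechanism are illustrative assumptions.

# Sketch of the two-subnetwork continual learning scheme with generative replay.
# Layer sizes, optimizers, and the replay format are assumptions for illustration.
import torch
import torch.nn as nn


class Classifier(nn.Module):
    """Subnetwork 1: discriminates the land-cover classes of the current task."""
    def __init__(self, n_bands: int, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bands, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.net(x)


class Replicator(nn.Module):
    """Subnetwork 2: learns the latent structure of the task data so that
    samples of past tasks can be replicated (replayed) later."""
    def __init__(self, n_bands: int, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_bands, latent_dim), nn.ReLU())
        self.decoder = nn.Linear(latent_dim, n_bands)

    def forward(self, x):
        return self.decoder(self.encoder(x))


def train_task(classifier, replicator, loader, replay=None, epochs=1, lr=1e-3):
    """Train on a new task while mixing in replayed pseudo-samples of old tasks."""
    opt_c = torch.optim.Adam(classifier.parameters(), lr=lr)
    opt_r = torch.optim.Adam(replicator.parameters(), lr=lr)
    ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()
    for _ in range(epochs):
        for x, y in loader:
            if replay is not None:
                x_old, y_old = replay  # pseudo-spectra and labels from past tasks
                x = torch.cat([x, x_old])
                y = torch.cat([y, y_old])
            # Subnetwork 1: minimize the class-discrimination error.
            opt_c.zero_grad()
            ce(classifier(x), y).backward()
            opt_c.step()
            # Subnetwork 2: learn to reconstruct the current task's data.
            opt_r.zero_grad()
            mse(replicator(x), x).backward()
            opt_r.step()

In such a scheme, after each task the replicator would be used to synthesize a replay set (with labels supplied by the previous classifier) that is mixed into the batches of the next task, which is what removes the need to store the raw data of earlier tasks.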

Keywords: Continual learning, data reconstruction, remote sensing, hyperspectral image segmentation.

