Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, identification systems can be attacked with a variety of spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method for Convolutional Neural Networks (CNNs) by examining the choice of loss function and optimizer. The CNN architectures used in this paper are AlexNet, VGGNet, and ResNet. By combining several loss functions, including Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss, with several optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we observe significant changes in performance. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. We validate our approach and compare generalization power on a subset of the LivDet 2017 database; the same subset is used for training and testing every model, so generalization to unseen data can be compared fairly across all models. The best CNN (AlexNet) with the appropriate loss function and optimizer achieves a performance gain of more than 3% over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports parameter counts and mean error rates for the high-accuracy models in order to identify the model that consumes the least memory and computation time for training and testing. Although AlexNet is less complex than the other CNN models, it proves to be very efficient. A practical, deployed anti-spoofing system should use a small amount of memory and run very fast while maintaining high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning, were applied to the final model.
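
The following is a minimal sketch, not the authors' code, of how a loss-function and optimizer sweep of the kind described above could be configured in TensorFlow/Keras. The small stand-in CNN, the input size, and the omitted LivDet 2017 data loading are illustrative assumptions; Center Loss is left out because it requires an auxiliary embedding head rather than a built-in Keras loss.

import tensorflow as tf
from tensorflow import keras

def build_cnn(input_shape=(224, 224, 1)):
    # Small stand-in network; the paper's AlexNet/VGGNet/ResNet variants are not reproduced here.
    return keras.Sequential([
        keras.layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        keras.layers.MaxPooling2D(),
        keras.layers.Conv2D(64, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Flatten(),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(2, activation="softmax"),  # live vs. spoof
    ])

# Loss functions and optimizers named in the abstract.
losses = {
    "cross_entropy": keras.losses.CategoricalCrossentropy(),
    "cosine_proximity": keras.losses.CosineSimilarity(),
    "hinge": keras.losses.CategoricalHinge(),
}
optimizers = {
    "adam": keras.optimizers.Adam,
    "sgd": keras.optimizers.SGD,
    "rmsprop": keras.optimizers.RMSprop,
    "adadelta": keras.optimizers.Adadelta,
    "adagrad": keras.optimizers.Adagrad,
    "nadam": keras.optimizers.Nadam,
}

for loss_name, loss in losses.items():
    for opt_name, opt_cls in optimizers.items():
        model = build_cnn()
        model.compile(optimizer=opt_cls(), loss=loss, metrics=["accuracy"])
        # model.fit(train_ds, validation_data=val_ds, epochs=...)  # same LivDet 2017 subset for every model
        print(f"configured: loss={loss_name}, optimizer={opt_name}")

For the smartphone deployment step mentioned at the end of the abstract, post-training quantization and pruning of the trained model could be applied with tools such as TensorFlow Lite and the TensorFlow Model Optimization Toolkit; the exact pipeline used by the authors is not specified here.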

Keywords: Anti-spoofing, CNN, fingerprint recognition, loss function, optimizer.

