Spatial-Temporal Awareness Approach for Extensive Re-Identification

Authors: Tyng-Rong Roan, Fuji Foo, Wenwey Hseush

Abstract:

Recent developments in AI and edge computing play a critical role in capturing meaningful events, such as the detection of an unattended bag. One of the core problems is re-identification across multiple CCTVs: immediately after a meaningful event is detected, the objects related to it must be tracked and traced. In an extensive environment, the challenge becomes severe as the number of CCTVs grows substantially, making it difficult to achieve high accuracy while maintaining real-time performance. The algorithm that re-identifies cross-boundary objects for extensive tracking is referred to as Extensive Re-Identification, which emphasizes the issues arising from the complexity of a large number of CCTVs. The Spatial-Temporal Awareness approach challenges the conventional, labor-intensive and time-consuming concept of operations. The ability to perform Extensive Re-Identification through a multi-sensory network provides next-level insights, creating value beyond traditional risk management.

Keywords: Long short-term memory, re-identification, security-critical application, spatial-temporal awareness.
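
The abstract gives no implementation details, but the core idea of fusing appearance matching with spatial-temporal constraints across a camera network can be illustrated with a minimal, hypothetical Python sketch. The camera topology, transit-time statistics, function names (appearance_similarity, temporal_plausibility, match_score), and fusion weights below are assumptions for illustration only, not the authors' actual system.

import numpy as np

# Hypothetical transit-time statistics between adjacent cameras (seconds):
# (from_camera, to_camera) -> (mean, std). In a real deployment these would be
# derived from the camera network topology and historical trajectories.
TRANSIT_PRIOR = {
    ("cam_01", "cam_02"): (45.0, 10.0),
    ("cam_02", "cam_03"): (60.0, 15.0),
}

def appearance_similarity(emb_a, emb_b):
    """Cosine similarity between two appearance embeddings."""
    emb_a, emb_b = np.asarray(emb_a, float), np.asarray(emb_b, float)
    return float(emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b) + 1e-12))

def temporal_plausibility(cam_a, t_a, cam_b, t_b):
    """Gaussian plausibility that an object seen at (cam_a, t_a) reappears at (cam_b, t_b)."""
    prior = TRANSIT_PRIOR.get((cam_a, cam_b))
    if prior is None:
        return 0.0  # cameras not adjacent in the assumed topology
    mean, std = prior
    dt = t_b - t_a
    if dt <= 0:
        return 0.0  # an object cannot reappear before it left
    return float(np.exp(-0.5 * ((dt - mean) / std) ** 2))

def match_score(obs_a, obs_b, w_app=0.7, w_st=0.3):
    """Fuse appearance and spatial-temporal evidence into a single re-identification score."""
    app = appearance_similarity(obs_a["embedding"], obs_b["embedding"])
    st = temporal_plausibility(obs_a["camera"], obs_a["time"], obs_b["camera"], obs_b["time"])
    return w_app * app + w_st * st

# Example: two observations from adjacent cameras roughly 50 seconds apart.
a = {"camera": "cam_01", "time": 0.0, "embedding": [0.10, 0.90, 0.30]}
b = {"camera": "cam_02", "time": 50.0, "embedding": [0.12, 0.88, 0.31]}
print(f"fused re-ID score: {match_score(a, b):.3f}")

In such a sketch the spatial-temporal gate also prunes the candidate set: observations whose transit time is implausible under the assumed topology can be skipped before any appearance comparison, which is what keeps matching cost manageable as the number of CCTVs grows.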

