Vision-Based Collision Avoidance for Unmanned Aerial Vehicles by Recurrent Neural Networks

Authors: Yao-Hong Tsai

Abstract:

Advances in sensor technology have made video surveillance the primary means of security monitoring in major cities around the world. Governments typically use surveillance for intelligence gathering, the prevention and investigation of crime, and the protection of a process, person, group, or object. Many surveillance systems based on computer vision have been developed in recent years. The most common task for an Unmanned Aerial Vehicle (UAV) in mobile aerial surveillance for civilian applications is moving-target tracking: finding and following objects of interest. This paper focuses on vision-based collision avoidance for UAVs using recurrent neural networks. First, images from the cameras on a UAV are fused by a deep convolutional neural network. A recurrent neural network then extracts high-level image features for object tracking and low-level image features for noise reduction. The system distributes its computation between local and cloud platforms to perform object detection, tracking, and collision avoidance efficiently across multiple UAVs. Experiments on several challenging datasets show that the proposed algorithm outperforms state-of-the-art methods.
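The abstract describes a two-stage pipeline, convolutional fusion of multi-camera imagery followed by a recurrent network for tracking, without giving implementation details. The sketch below shows one way such a pipeline could be wired, assuming PyTorch; the layer sizes, the two-camera early-fusion scheme, and the bounding-box regression head are illustrative assumptions, not the authors' architecture.

# Minimal sketch (not the authors' code) of the described pipeline:
# a convolutional encoder fuses frames from multiple UAV cameras,
# and an LSTM tracks the target over time by regressing a bounding box.
# All names, layer sizes, and the two-camera setup are assumptions.
import torch
import torch.nn as nn

class FusionTracker(nn.Module):
    def __init__(self, num_cameras=2, feat_dim=128, hidden_dim=64):
        super().__init__()
        # Early fusion: stack the RGB frames from all cameras along channels.
        self.encoder = nn.Sequential(
            nn.Conv2d(3 * num_cameras, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (batch*time, 64, 1, 1)
            nn.Flatten(),             # -> (batch*time, 64)
            nn.Linear(64, feat_dim),
        )
        # Recurrent stage: integrate per-frame features over time.
        self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        # Regress a normalized bounding box (cx, cy, w, h) per time step.
        self.head = nn.Linear(hidden_dim, 4)

    def forward(self, frames):
        # frames: (batch, time, 3 * num_cameras, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)
        return torch.sigmoid(self.head(out))  # boxes scaled to [0, 1]

# Smoke test on random data: a 10-frame clip from two fused cameras.
model = FusionTracker()
clip = torch.rand(1, 10, 6, 96, 96)
boxes = model(clip)
print(boxes.shape)  # torch.Size([1, 10, 4])

In a deployment matching the paper's local/cloud split, one plausible partitioning (again an assumption, not specified in the abstract) is to run the convolutional encoder on the UAV's onboard computer and stream the compact per-frame feature vectors to the cloud, where the recurrent tracking and collision-avoidance logic operate across multiple UAVs.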

Keywords: Unmanned aerial vehicle, object tracking, deep learning, collision avoidance.

Digital Object Identifier (DOI): https://doi.org/10.5281/zenodo.2643866

