Vision-Based Collision Avoidance for Unmanned Aerial Vehicles by Recurrent Neural Networks
Authors: Yao-Hong Tsai
Abstract: Owing to advances in sensor technology, video surveillance has become the main means of security control in major cities worldwide. Surveillance is commonly used by governments for intelligence gathering, the prevention and investigation of crime, and the protection of a process, person, group, or object. Many surveillance systems based on computer vision have been developed in recent years. Moving-target tracking, in which an Unmanned Aerial Vehicle (UAV) finds and tracks objects of interest, is the most common task in mobile aerial surveillance for civilian applications. This paper focuses on vision-based collision avoidance for UAVs using recurrent neural networks. First, images from the cameras on a UAV are fused by a deep convolutional neural network. A recurrent neural network is then constructed to extract high-level image features for object tracking and low-level image features for noise reduction. The system distributes computation between local and cloud platforms to efficiently perform object detection, tracking, and collision avoidance across multiple UAVs. Experiments on several challenging datasets show that the proposed algorithm outperforms state-of-the-art methods.
Digital Object Identifier (DOI): doi.org/10.5281/zenodo.2643866
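The paper does not publish its network weights or exact architecture, but the recurrent building block it cites (the LSTM of Hochreiter and Schmidhuber, referenced below) can be illustrated with a minimal sketch. The following pure-Python code steps a scalar LSTM cell over a short sequence of per-frame feature values; the weight values, state sizes, and input sequence are arbitrary assumptions chosen for demonstration, not the authors' configuration.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM time step for a scalar input and scalar state.
    w maps each gate name to its (input, hidden, bias) weights."""
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])    # input gate
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])    # forget gate
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])    # output gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2])  # candidate state
    c = f * c_prev + i * g       # new cell state: keep part of the old, add the new
    h = o * math.tanh(c)         # new hidden state (the feature carried across frames)
    return h, c

# Toy weights (shared values purely for illustration).
weights = {k: (0.5, 0.5, 0.0) for k in ("i", "f", "o", "g")}

# Run a short sequence of hypothetical per-frame feature values through the cell.
h, c = 0.0, 0.0
for x in [0.2, 0.4, 0.6]:
    h, c = lstm_step(x, h, c, weights)
print(round(h, 4))
```

The gating structure is what lets the cell retain information about a tracked object across frames: the forget gate decides how much of the previous cell state survives, while the input gate controls how much of the current frame's evidence is written in.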
References:
 The website of the Taipei Traffic Control Center, http://tms.bote.taipei.gov.tw/main.jsp?lang=zh_TW.
 “Unmanned Aircraft Systems,” ICAO. Accessed 2 August 2016. http://www.icao.int/Meetings/UAS/Documents/Circular%20328_en.pdf.
 S. A. Cambone, K. J. Krieg, P. Pace and L. Wells II, “Unmanned aircraft systems (UAS) roadmap 2005–2030,” USA: Office of the Secretary of Defense, 2005.
 M. Corcoran, "Drone wars: The definition dogfight". Accessed 2nd August 2016. http://www.abc.net.au/news/2013-03-01/dronewars-the-definition-dogfight/4546598.
 A. Ahmed, M. Nagai, C. Tianen, and R. Shibasaki, “UAV-based monitoring system and object detection technique development for a disaster area,” International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 37, pp. 373–377, 2008.
 J. Polo, G. Hornero, C. Duijneveld, A. García and O. Casas, “Design of a low-cost Wireless Sensor Network with UAV mobile node for agricultural applications,” Computers and Electronics in Agriculture, vol. 119, pp. 19–32, 2015.
 B. Chen, Z. Chen, L. Deng, Y. Duan and J. Zhou, “Building change detection with RGB-D map generated from UAV images,” Neurocomputing, vol. 208, pp. 350–364, 2016.
 B. Coifman, M. McCord, R. Mishalani, M. Iswalt and Y. Ji, “Roadway traffic monitoring from an unmanned aerial vehicle,” IEE Proceedings - Intelligent Transport Systems, vol. 153, no. 1, pp. 11–20, 2006.
 K. Kanistras, G. Martins, M. J. Rutherford and K. P. Valavanis, “Survey of unmanned aerial vehicles (UAVs) for traffic monitoring,” in Handbook of Unmanned Aerial Vehicles, pp. 2643–2666, 2015.
 P. J. Hiltner, “The Drones Are Coming: Use of Unmanned Aerial Vehicles for Police Surveillance and Its Fourth Amendment Implications,” Wake Forest Journal of Law & Policy, vol. 3, p. 397, 2013.
 V. Reilly, H. Idrees and M. Shah, “Detection and tracking of large number of targets in wide area surveillance,” Computer Vision - ECCV 2010, pp. 186–199, 2010.
 Y. Wang, Z. Zhang and Y. Wang, “Moving Object Detection in Aerial Video,” 11th International Conference on Machine Learning and Applications, pp. 446–450, 2012.
 C. Lin, S. Pankanti, G. Ashour, D. Porat and J. R. Smith, “Moving camera analytics: Emerging scenarios, challenges, and applications,” IBM Journal of Research and Development, vol. 59, pp. 5:1–5:10, 2015.
 H. Zhou, H. Kong, L. Wei and D. Creighton, “Efficient Road Detection and Tracking for Unmanned Aerial Vehicle,” IEEE Transactions on Intelligent Transportation Systems, vol. 16, pp. 297–309, 2015.
 T. Moranduzzo and F. Melgani, “Automatic Car Counting Method for Unmanned Aerial Vehicle Images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 52, pp. 1635–1647, 2014.
 S. Parameswaran, C. Lane, B. Bagnall and H. Buck, “Marine Object Detection in UAV full-motion video,” Proc. SPIE 9076, Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications XI, 907608, 2014.
 The website of ImageFusion.Org, The Online Resource for Research in Image Fusion, http://www.imagefusion.org/.
 The website of the FLIR camera, http://www.flir.tw/flirone/.
 The website of Softmax, http://ufldl.stanford.edu/tutorial/supervised/SoftmaxRegression/.
 Y. Lecun, L. Bottou, Y. Bengio and P. Haffner, “Gradient-based learning applied to document recognition”, Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998.
 A. Mnih and G. E. Hinton, “Learning nonlinear constraints with contrastive backpropagation,” in Proc. 2005 IEEE International Joint Conference on Neural Networks (IJCNN'05), pp. 1302–1307, 2005.
 R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Region-based convolutional networks for accurate object detection and segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 1, pp. 142–158, 2016.
 A. Graves, M. Liwicki, S. Fernandez, R. Bertolami, H. Bunke and J. Schmidhuber, “A Novel Connectionist System for Improved Unconstrained Handwriting Recognition.” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 5, pp. 855–868, 2009.
 R. Girshick, “Fast R-CNN,” 2015 IEEE International Conference on Computer Vision (ICCV), 2015.
 S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
 G. E. Hinton, et al., “Deep Neural Networks for Acoustic Modeling in Speech Recognition,” IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82–97, 2012.
 The website of Mission Planner, http://ardupilot.org/planner/docs/mission-planner-overview.html.
 Y. Wu, J. Lim and M. H. Yang, “Online Object Tracking: A Benchmark,” in Proc. 2013 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2411–2418, 2013.
 Y. Wu, J. Lim and M. H. Yang, “Object tracking benchmark,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1834-1848, 2015.
 P. Liang, E. Blasch and H. Ling, “Encoding color information for visual tracking: Algorithms and benchmark,” IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 5630-5644, 2015.
 A. W. M. Smeulders, D. M. Chu, R. Cucchiara, S. Calderara, A. Dehghan and M. Shah, “Visual tracking: An experimental survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 7, pp. 1442-1468, 2014.
 M. Mueller, N. Smith and B. Ghanem, “A Benchmark and Simulator for UAV Tracking,” ECCV 2016: European Conference on Computer Vision, pp. 445-461, 2016.
 A. Anjos, and S. Marcel, “Counter-measures to photo attacks in face recognition: A public database and a baseline,” in Proc. IJCB, pp. 1–7, 2011.
 H.-Y. Wu, M. Rubinstein, E. Shih, J. Guttag, F. Durand and W. Freeman, “Eulerian video magnification for revealing subtle changes in the world,” ACM Transactions on Graphics, vol. 31, no. 4, Art. ID 65, 2012.