Using Satellite Image Datasets for Road Intersection Detection in Route Planning
Authors: Fatma El-zahraa El-taher, Ayman Taha, Jane Courtney, Susan Mckeever
Abstract:
Understanding road networks plays an important role in navigation applications such as self-driving vehicles and route planning for individual journeys. Road intersections are essential components of road networks. Understanding the features of an intersection, from a simple T-junction to a larger multi-road junction, is critical to decisions such as crossing roads or selecting the safest route. The identification and profiling of intersections from satellite images is a challenging task. While deep learning approaches offer state-of-the-art performance in image classification and detection, the availability of training datasets is a bottleneck for this approach. In this paper, a labelled satellite image dataset for the intersection recognition problem is presented. It consists of 14,692 satellite images of Washington DC, USA. To support other users of the dataset, an automated download and labelling script is provided for dataset replication. The challenges of constructing and fine-grained feature labelling of a satellite image dataset are examined, including how to handle features that span multiple images. Finally, the accuracy of intersection detection in satellite images is evaluated.
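The abstract mentions an automated download and labelling script for replicating the dataset; the released script should be consulted for the authors' exact implementation. The following is only a minimal sketch of such a workflow, assuming satellite tiles are fetched from the Google Static Maps API and centred on intersection coordinates exported from the DC Open Data intersection-points layer. The CSV file name, its LATITUDE/LONGITUDE column names, the zoom level, and the output layout are illustrative assumptions, not details taken from the paper.

```python
# Sketch only: download satellite tiles centred on known intersection
# coordinates using the Google Static Maps API (maptype=satellite).
import csv
import pathlib
import requests

API_KEY = "YOUR_GOOGLE_MAPS_API_KEY"  # placeholder, supply your own key
STATIC_MAPS_URL = "https://maps.googleapis.com/maps/api/staticmap"
OUT_DIR = pathlib.Path("intersection_tiles")
OUT_DIR.mkdir(exist_ok=True)

def fetch_tile(lat: float, lon: float, out_path: pathlib.Path,
               zoom: int = 19, size: str = "400x400") -> None:
    """Request one satellite tile centred on (lat, lon) and save it to disk."""
    params = {
        "center": f"{lat},{lon}",
        "zoom": zoom,
        "size": size,
        "maptype": "satellite",
        "key": API_KEY,
    }
    resp = requests.get(STATIC_MAPS_URL, params=params, timeout=30)
    resp.raise_for_status()
    out_path.write_bytes(resp.content)

# Hypothetical CSV export of the DC Open Data intersection points;
# column names are assumptions and may differ in the real export.
with open("intersection_points.csv", newline="") as f:
    for i, row in enumerate(csv.DictReader(f)):
        lat, lon = float(row["LATITUDE"]), float(row["LONGITUDE"])
        # The file name encodes the label: each tile here contains an intersection.
        fetch_tile(lat, lon, OUT_DIR / f"intersection_{i:05d}.png")
```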
Keywords: Satellite images, remote sensing images, data acquisition, autonomous vehicles, robot navigation, route planning, road intersections.
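The abstract also states that the accuracy of intersection detection in the satellite images is evaluated. The sketch below shows one plausible evaluation setup, not the paper's exact pipeline: a binary intersection / non-intersection split of the tiles and transfer learning from an ImageNet-pretrained Xception backbone in Keras. The directory layout, epoch count, and learning rate are assumptions rather than the reported configuration.

```python
# Sketch only: fine-tune a pretrained Xception backbone to classify tiles as
# intersection / non-intersection and report accuracy on a held-out split.
import tensorflow as tf

IMG_SIZE = (299, 299)  # Xception's default input resolution

# Assumed layout: tiles/train and tiles/test each contain two class folders,
# e.g. intersection/ and no_intersection/.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "tiles/train", image_size=IMG_SIZE, batch_size=32)
test_ds = tf.keras.utils.image_dataset_from_directory(
    "tiles/test", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", pooling="avg")
base.trainable = False  # transfer learning: freeze the pretrained backbone

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.xception.preprocess_input(inputs)
x = base(x, training=False)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
loss, acc = model.evaluate(test_ds)
print(f"Held-out intersection-detection accuracy: {acc:.3f}")
```

Freezing the backbone and training only the final classification layer is a common first pass; unfreezing the top backbone layers for a few further epochs is a typical refinement.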