A Control Model for Improving Safety and Efficiency of Navigation System Based on Reinforcement Learning

Authors: Almutasim Billa A. Alanazi, Hal S. Tharp

Abstract:

Artificial Intelligence (AI), and specifically Reinforcement Learning (RL), has proven useful in many control and path planning technologies, such as navigation systems, by enhancing their performance. RL learns from experience by interacting with the environment to determine the optimal policy, which takes the best action in a particular state while accounting for long-term rewards. Most navigation systems focus primarily on "arriving faster" and overlook safety and efficiency when estimating the optimum path, even though safety and efficiency are essential factors when planning a long-distance journey. This paper presents an RL control model that proposes a control mechanism for improving navigation systems. Because the model is adjustable and can accept different properties and parameters, it could also be applied to other control and path planning applications; the navigation system application is taken here as the case and evaluation study for the proposed model. The model utilizes a Q-learning algorithm for training and updating the policy, which allows the agent to evaluate the quality of an action taken in the environment in order to maximize rewards. The model can update rewards regularly based on safety and efficiency assessments, allowing the policy to consider the desired safety and efficiency benefits while making decisions, which improves the quality of path planning decisions compared with conventional RL approaches.

Keywords: Artificial intelligence, control system, navigation systems, reinforcement learning.
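To make the mechanism concrete, below is a minimal sketch of tabular Q-learning on a toy grid world in which the reward blends a base travel cost with per-cell safety and efficiency assessments, in the spirit of the model described in the abstract. The grid layout, the safety and efficiency tables, and the weights w_s and w_e are illustrative assumptions for this sketch, not the paper's actual environment or parameters.

import random

# Illustrative sketch only: a 5x5 grid stands in for a road network. Each cell
# carries safety and efficiency scores in [0, 1]; in the paper's setting these
# assessments would be refreshed regularly and folded into the reward.
SIZE = 5
GOAL = (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

random.seed(0)
safety = {(r, c): random.random() for r in range(SIZE) for c in range(SIZE)}
efficiency = {(r, c): random.random() for r in range(SIZE) for c in range(SIZE)}

def reward(state):
    """Base travel cost plus penalties for unsafe or inefficient cells.

    The weights w_s and w_e are assumed values, not the paper's parameters.
    """
    if state == GOAL:
        return 100.0
    w_s, w_e = 2.0, 1.0
    return -1.0 - w_s * (1.0 - safety[state]) - w_e * (1.0 - efficiency[state])

def step(state, action):
    """Move one cell, clamped to the grid bounds."""
    r = min(max(state[0] + action[0], 0), SIZE - 1)
    c = min(max(state[1] + action[1], 0), SIZE - 1)
    return (r, c)

# Tabular Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
Q = {(s, a): 0.0 for s in safety for a in range(len(ACTIONS))}
alpha, gamma, epsilon = 0.1, 0.95, 0.1

for episode in range(2000):
    state = (0, 0)
    for _ in range(200):  # cap episode length
        if random.random() < epsilon:          # explore
            a = random.randrange(len(ACTIONS))
        else:                                  # exploit current estimates
            a = max(range(len(ACTIONS)), key=lambda x: Q[(state, x)])
        nxt = step(state, ACTIONS[a])
        best_next = max(Q[(nxt, x)] for x in range(len(ACTIONS)))
        Q[(state, a)] += alpha * (reward(nxt) + gamma * best_next - Q[(state, a)])
        state = nxt
        if state == GOAL:
            break

# Greedy rollout of the learned policy from the start cell.
state, path = (0, 0), [(0, 0)]
while state != GOAL and len(path) < 50:
    a = max(range(len(ACTIONS)), key=lambda x: Q[(state, x)])
    state = step(state, ACTIONS[a])
    path.append(state)
print(path)

In a real navigation setting, the reward function would be re-evaluated as new safety and efficiency assessments arrive, so the learned policy would shift away from routes that are fast but unsafe or inefficient.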

