Biologically Inspired Controller for the Autonomous Navigation of a Mobile Robot in an Evasion Task

Authors: Dejanira Araiza-Illan, Tony J. Dodd

Abstract:

A novel biologically inspired controller for the autonomous navigation of a mobile robot in an evasion task is proposed. The controller takes advantage of the environment by calculating a measure of danger and subsequently choosing the parameters of a reinforcement learning based decision process. Two reinforcement learning algorithms were used: Q-learning and Sarsa(λ). Simulations show that selecting the parameters dynamically reduces the time spent on the decision-making process, so the robot can obtain a policy for succeeding in the escape task within a realistic time.
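The core idea described above, using a danger measure computed from the environment to set the parameters of the reinforcement learning process, can be illustrated with a minimal tabular Q-learning sketch. The danger function, the parameter mapping, and the environment interface below are illustrative assumptions for a grid-world evader, not the authors' implementation.

import numpy as np

N_STATES = 100          # illustrative discretised state space
N_ACTIONS = 4           # e.g. up, down, left, right
Q = np.zeros((N_STATES, N_ACTIONS))

def danger_measure(evader_pos, pursuer_pos):
    """Assumed danger signal in [0, 1]: higher when the pursuer is closer."""
    dist = np.linalg.norm(np.asarray(evader_pos) - np.asarray(pursuer_pos))
    return 1.0 / (1.0 + dist)

def dynamic_parameters(danger):
    """Hypothetical mapping from danger to RL parameters (alpha, epsilon)."""
    alpha = 0.1 + 0.4 * danger      # learn faster when danger is high
    epsilon = 0.3 * (1.0 - danger)  # explore less when danger is high
    return alpha, epsilon

def q_learning_step(state, evader_pos, pursuer_pos, step_fn, gamma=0.95):
    """One Q-learning update with danger-dependent parameters.

    step_fn(state, action) -> (next_state, reward, next_evader_pos) stands in
    for the simulated pursuit-evasion environment.
    """
    danger = danger_measure(evader_pos, pursuer_pos)
    alpha, epsilon = dynamic_parameters(danger)

    # epsilon-greedy action selection
    if np.random.rand() < epsilon:
        action = np.random.randint(N_ACTIONS)
    else:
        action = int(np.argmax(Q[state]))

    next_state, reward, next_evader_pos = step_fn(state, action)

    # standard Q-learning target: r + gamma * max_a' Q(s', a')
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])
    return next_state, next_evader_pos

A Sarsa(λ) variant would replace the max over next actions with the value of the actually selected next action and maintain eligibility traces, but the danger-dependent choice of alpha and epsilon would be applied in the same way.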

Keywords: Autonomous navigation, mobile robots, reinforcement learning.

Digital Object Identifier (DOI): doi.org/10.5281/zenodo.1055665

