Acquiring Contour Following Behaviour in Robotics through Q-Learning and Image-based States

Authors: Carlos V. Regueiro, Jose E. Domenech, Roberto Iglesias, Jose L. Correa

Abstract:

In this work, a visual and reactive contour-following behaviour is learned by reinforcement. With artificial vision, the environment is perceived in 3D, and it is possible to avoid obstacles that are invisible to the sensors more commonly used in mobile robotics. Reinforcement learning reduces the need for intervention in behaviour design and simplifies adjusting the behaviour to the environment, the robot, and the task. In order to facilitate generalisation to other behaviours and to further reduce the role of the designer, we propose a regular, image-based codification of states. Even though this makes the learning problem considerably more difficult, our implementation converges and is robust. Results are presented for a Pioneer 2 AT robot in the Gazebo 3D simulator.
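The abstract does not spell out the codification itself. As a rough sketch only, assuming states are formed by quantising a regular grid of vertical image stripes into coarse levels and actions are a small set of steering commands, a tabular Q-learning loop could look like the following Python fragment (the stripe count, bin count, actions, and reward here are hypothetical placeholders, not the authors' settings):

import random

import numpy as np

# Hypothetical discretisation: N_COLS vertical image stripes, each quantised
# into N_BINS coarse levels, giving N_BINS ** N_COLS discrete states.
N_COLS = 4
N_BINS = 3
ACTIONS = [-0.3, 0.0, 0.3]          # example angular velocities (rad/s)
ALPHA, GAMMA, EPSILON = 0.2, 0.95, 0.1

q_table = np.zeros((N_BINS ** N_COLS, len(ACTIONS)))

def encode_state(image):
    """Map an image (H x W array with values in [0, 1]) to one discrete
    state by averaging each vertical stripe and quantising the mean."""
    _, w = image.shape
    state = 0
    for c in range(N_COLS):
        stripe = image[:, c * w // N_COLS:(c + 1) * w // N_COLS]
        level = min(int(stripe.mean() * N_BINS), N_BINS - 1)
        state = state * N_BINS + level
    return state

def select_action(state):
    """Epsilon-greedy selection over the Q-table."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    return int(np.argmax(q_table[state]))

def q_update(s, a, reward, s_next):
    """One-step Q-learning update."""
    td_target = reward + GAMMA * q_table[s_next].max()
    q_table[s, a] += ALPHA * (td_target - q_table[s, a])

# Toy loop: random arrays stand in for camera frames, and the reward is
# faked; a real controller would send ACTIONS[a] to the robot and reward
# staying close to, and aligned with, the contour.
s = encode_state(np.random.rand(48, 64))
for _ in range(100):
    a = select_action(s)
    reward = random.uniform(-1.0, 1.0)
    s_next = encode_state(np.random.rand(48, 64))
    q_update(s, a, reward, s_next)
    s = s_next

A regular codification of this kind needs no hand-designed features, which is what allows the same scheme to transfer to other behaviours, at the cost of a larger and less informative state space, consistent with the harder but still convergent learning the abstract reports.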

Keywords: Image-based State Codification, Mobile Robotics, Reinforcement Learning, Visual Behaviour.

Digital Object Identifier (DOI): doi.org/10.5281/zenodo.1075962

