A Constructivist Approach and Tool for Autonomous Agent Bottom-up Sequential Learning

Authors: Jianyong Xue, Olivier L. Georgeon, Salima Hassas

Abstract:

During the initial phase of cognitive development, infants exhibit remarkable abilities to generate novel behaviors in unfamiliar situations and to explore actively in order to learn, despite lacking extrinsic rewards from the environment. These abilities set them apart from even the most advanced autonomous robots. This work seeks to contribute to understanding and replicating some of these abilities. We propose the Bottom-up hiErarchical sequential Learning algorithm with Constructivist pAradigm (BEL-CA) to design agents capable of learning autonomously and continuously through their interactions. The algorithm makes no assumptions about the semantics of the input and output data, nor does it rely on a model of the world given a priori in the form of a set of states and transitions. In addition, we propose GAIT (Generating and Analyzing Interaction Traces), a toolkit for analyzing the learning process at run time. We use GAIT to report and explain the detailed learning process and the structured behaviors that the agent has learned at each decision step. We report an experiment in which the agent learned to interact successfully with its environment and to avoid unfavorable interactions using regularities discovered through interaction.
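
To make the bottom-up sequential learning idea concrete, the following is a minimal Python sketch of such a loop. It is not the authors' implementation: the valence table, the toy environment, and the proclivity-based selection rule are illustrative assumptions inspired by interactional motivation, and BEL-CA itself builds deeper hierarchies of composite interactions than this two-level example.

import random
from collections import defaultdict

# Primitive interactions are (experiment, result) pairs with a predefined
# valence; the agent has no other semantics for its inputs and outputs.
# The values below are illustrative assumptions.
VALENCE = {
    ("e1", "r1"): -1,  # unfavorable interaction
    ("e1", "r2"): 1,   # favorable interaction
    ("e2", "r1"): -1,
    ("e2", "r2"): 1,
}

def environment(experiment, step):
    """Toy environment stub: which experiment succeeds alternates over time,
    creating a sequential regularity the agent can discover."""
    return "r2" if (experiment == "e1") == (step % 4 < 2) else "r1"

def run(steps=20):
    weights = defaultdict(int)  # learned composite interactions (pre, post)
    previous = None             # previously enacted interaction: the context
    for step in range(steps):
        # Propose the experiment whose anticipated interaction, in the
        # current context, has the highest proclivity (weight * valence).
        proclivity = defaultdict(int)
        for (pre, post), w in weights.items():
            if pre == previous:
                proclivity[post[0]] += w * VALENCE[post]
        if proclivity:
            experiment = max(("e1", "e2"), key=lambda e: proclivity.get(e, 0))
        else:
            experiment = random.choice(("e1", "e2"))
        result = environment(experiment, step)
        enacted = (experiment, result)
        # Bottom-up learning: reinforce the composite interaction
        # <previous, enacted>, a regularity over sequences of interactions.
        if previous is not None:
            weights[(previous, enacted)] += 1
        # A stand-in for a GAIT-style trace line: the enacted interaction
        # and its valence at each decision step.
        print(f"step {step}: enacted {enacted}, valence {VALENCE[enacted]}")
        previous = enacted

if __name__ == "__main__":
    run()

After a few steps, the agent's composite weights bias it toward the experiment that is currently favorable, and the printed lines stand in for the kind of interaction trace that GAIT renders and analyzes.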

Keywords: Cognitive development, constructivist learning, hierarchical sequential learning, self-adaptation.

