A Probabilistic Reinforcement-Based Approach to Conceptualization

Authors: Hadi Firouzi, Majid Nili Ahmadabadi, Babak N. Araabi

Abstract:

Conceptualization strengthens intelligent systems in generalization, effective knowledge representation, real-time inference, and the handling of uncertain and ill-defined situations, in addition to facilitating knowledge communication for learning agents situated in the real world. Concept learning introduces a form of abstraction in which the continuous state space is organized into entities, called concepts, that are connected to the action space and thereby capture the structure of a complex action space. Among computational concept-learning approaches, action-based conceptualization is favored for its simplicity and its grounding in the mirror-neuron findings of neuroscience. In this paper, a new biologically inspired concept-learning approach based on a probabilistic framework is proposed. The approach exploits and extends the role of mirror neurons in conceptualization for a reinforcement learning agent in nondeterministic environments. In the proposed method, instead of building a large body of numerical knowledge, concepts are learned gradually from rewards through interaction with the environment. Moreover, the probabilistic formulation of concepts is employed to cope with the uncertain and dynamic nature of real problems, in addition to providing generalization ability. Taken together, these characteristics distinguish the proposed learning algorithm from both a pure classification algorithm and typical reinforcement learning. Simulation results show the advantages of the proposed framework in convergence speed, generalization, and asymptotic behavior, since it exploits both successful and failed attempts through the received rewards. Experimental results, in turn, show the applicability and effectiveness of the proposed method in continuous and noisy environments on a real robotic task, maze traversal, as well as the benefits of an incremental learning scenario in artificial agents.
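The formal algorithm appears in the paper itself, not in this abstract, but the idea described above can be illustrated with a small Python sketch. Everything in it is an illustrative assumption rather than the authors' formulation: the Gaussian form of concept membership, the names GaussianConcept and ConceptLearner, the parameters sigma, alpha, eta, and tau, and the one-dimensional toy task. What the sketch preserves is the core of the abstract: concepts are probabilistic regions of a continuous state space, each tied to action preferences, and both the regions and the preferences are shaped by reward rather than by labeled examples.

    import numpy as np

    rng = np.random.default_rng(0)

    class GaussianConcept:
        """A concept: a Gaussian region of the continuous state space
        paired with reward-weighted preferences over discrete actions
        (an assumed, minimal representation)."""
        def __init__(self, center, sigma, n_actions):
            self.mu = np.asarray(center, dtype=float)
            self.sigma = sigma                 # isotropic spread (assumption)
            self.q = np.zeros(n_actions)       # per-concept action values

        def likelihood(self, state):
            # Unnormalized Gaussian membership of the state in this concept.
            d2 = np.sum((state - self.mu) ** 2)
            return np.exp(-d2 / (2.0 * self.sigma ** 2))

    class ConceptLearner:
        """Sketch of probabilistic, reward-driven concept learning:
        concepts compete probabilistically for each state, and the
        selected concept's action values and center are nudged by reward."""
        def __init__(self, n_concepts, state_dim, n_actions,
                     sigma=0.3, alpha=0.1, eta=0.05, tau=0.5):
            self.concepts = [GaussianConcept(rng.uniform(0, 1, state_dim),
                                             sigma, n_actions)
                             for _ in range(n_concepts)]
            self.alpha, self.eta, self.tau = alpha, eta, tau

        def responsibilities(self, state):
            # Normalized concept memberships: P(concept | state).
            lik = np.array([c.likelihood(state) for c in self.concepts])
            return lik / lik.sum()

        def act(self, state):
            r = self.responsibilities(state)
            k = rng.choice(len(self.concepts), p=r)    # sample a concept
            prefs = self.concepts[k].q / self.tau
            p = np.exp(prefs - prefs.max())
            p /= p.sum()
            a = rng.choice(len(p), p=p)                # softmax action choice
            return k, a

        def update(self, k, a, state, reward):
            c = self.concepts[k]
            # Reward-driven value update for the chosen action...
            c.q[a] += self.alpha * (reward - c.q[a])
            # ...and the concept center drifts toward rewarding states
            # (and away from punished ones) -- a heuristic assumption.
            c.mu += self.eta * reward * (state - c.mu)

    # Toy task (assumed): action 0 is rewarded for states < 0.5, action 1 otherwise.
    learner = ConceptLearner(n_concepts=4, state_dim=1, n_actions=2)
    for _ in range(5000):
        s = rng.uniform(0, 1, 1)
        k, a = learner.act(s)
        r = 1.0 if (a == 0) == (s[0] < 0.5) else -1.0
        learner.update(k, a, s, r)

Note that nothing in the sketch consumes class labels: the only feedback is the scalar reward, which shapes both where a concept sits in state space and which action it prefers. That is the distinction the abstract draws between this kind of concept learning and a pure classification algorithm.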

Keywords: Concept learning, probabilistic decision making, reinforcement learning.

Digital Object Identifier (DOI): https://doi.org/10.5281/zenodo.1078317

