
Digital Library

of the European Council for Modelling and Simulation


Title:

Learning Of Autonomous Agent In Virtual Environment

Authors:

Pavel Nahodil, Jaroslav Vítků

Published in:


(2012). ECMS 2012 Proceedings edited by: K. G. Troitzsch, M. Moehring, U. Lotzmann. European Council for Modelling and Simulation. doi:10.7148/2012


ISBN: 978-0-9564944-4-3


26th European Conference on Modelling and Simulation,

Shaping reality through simulation

Koblenz, Germany, May 29 – June 1 2012


Citation format:

Nahodil, P., & Vitku, J. (2012). Learning Of Autonomous Agent In Virtual Environment. ECMS 2012 Proceedings edited by: K. G. Troitzsch, M. Moehring, U. Lotzmann (pp. 373-379). European Council for Modeling and Simulation. doi:10.7148/2012-0373-0379

DOI:

http://dx.doi.org/10.7148/2012-0373-0379

Abstract:

The presented topic comes from the area of development of artificial creatures and proposes a new architecture for an autonomous agent. The work builds on research into the latest approaches to Artificial Life carried out by the Department of Cybernetics, CTU in Prague, over the last twenty years. The architecture design combines knowledge from Artificial Intelligence (AI), Ethology, Artificial Life (ALife) and Intelligent Robotics. From the field of classical AI, it fuses reinforcement learning, planning and an artificial neural network into one more complex control system. The main principle of its function is inspired by the field of Ethology: the life of the agent is meant to resemble the life of an animal in nature, where the animal learns relatively autonomously, progressing from simpler principles towards more complex ones. The architecture supports on-line learning of all knowledge from scratch. Its core principle is hierarchical Reinforcement Learning (RL), where the action hierarchy is created autonomously, based solely on the agent's interaction with the environment. The key idea behind this approach is an original implementation of a domain-independent hierarchical planner. Our planner is able to operate with behaviors learned by the RL. This means that an autonomously gained hierarchy of actions can be used not only by action-selection mechanisms based on reinforcement learning, but also by a planning system. This gives the agent the ability to perform high-level deliberative problem solving based solely on its own experiences. In order to focus on higher-level control rather than the sensory system, the life of the agent was simulated in a virtual environment.
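Illustration (not from the paper): the abstract gives no implementation details, so the short Python sketch below only illustrates the general idea of learning a low-level behavior with reinforcement learning and then reusing the learned policy as a single higher-level action that a planner could chain. All names (GridWorld, q_learn, greedy_option) and the toy environment are hypothetical and are not the authors' architecture.

# Toy sketch: learn a primitive behavior with tabular Q-learning, then execute the
# learned policy as one reusable "macro-action". Everything here is an assumption
# made for illustration; the paper's hierarchy and planner are not reproduced.
import random
from collections import defaultdict

class GridWorld:
    """A 1-D corridor; the agent moves left/right and wants to reach position `goal`."""
    def __init__(self, size=6, goal=5):
        self.size, self.goal = size, goal
    def step(self, state, action):            # action: -1 (left) or +1 (right)
        nxt = min(max(state + action, 0), self.size - 1)
        reward = 1.0 if nxt == self.goal else -0.01
        return nxt, reward, nxt == self.goal

def q_learn(env, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    """Plain tabular Q-learning over the primitive actions."""
    q = defaultdict(float)
    for _ in range(episodes):
        s = 0
        for _ in range(200):                  # cap episode length
            a = random.choice([-1, 1]) if random.random() < eps else \
                max([-1, 1], key=lambda a_: q[(s, a_)])
            s2, r, done = env.step(s, a)
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, -1)], q[(s2, 1)]) - q[(s, a)])
            s = s2
            if done:
                break
    return q

def greedy_option(q, start, env, max_steps=20):
    """Roll out the learned policy from `start`; the rollout acts as one high-level behavior."""
    s, trajectory = start, [start]
    for _ in range(max_steps):
        a = max([-1, 1], key=lambda a_: q[(s, a_)])
        s, _, done = env.step(s, a)
        trajectory.append(s)
        if done:
            break
    return trajectory

if __name__ == "__main__":
    env = GridWorld()
    q = q_learn(env)                  # learn the low-level behavior on-line
    print(greedy_option(q, 0, env))   # a planner could treat this rollout as a single
                                      # "reach-goal" action when composing longer plans

In the paper the action hierarchy is built autonomously and the planner is domain independent; in this sketch a single hand-written environment and a flat Q-table merely stand in for those components.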

Full text: