Abstract
Recent breakthroughs in machine learning are paving the way towards the vision of a "Software 2.0" era, in which traditional software development is replaced by learning techniques for many applications. In the context of agent-oriented programming, we believe that combining cognitive architectures such as BDI with learning techniques could enable interesting new scenarios. In that view, our previous work presented Jason-RL, a framework that integrates BDI agents and Reinforcement Learning (RL) more deeply than prior proposals in the literature. The framework allows the development of BDI agents that have both explicitly programmed plans and plans learned by the agent through RL. The two kinds of plans are seamlessly integrated and can be used interchangeably. Here, we take autonomous driving as a case study to assess the advantages of the proposed approach and framework. The BDI agent has hard-coded plans that define high-level directions, while fine-grained navigation is learned by trial and error. Compared to plain RL, this approach is encouraging, since RL struggles with temporally extended planning. We defined and trained an agent able to drive on a track with an intersection, at which it has to choose the correct path to reach the assigned target. As a first step towards porting the system to the real world, we built a 1/10-scale racecar prototype that learned to drive on a simple track.
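To make the integration concrete, the following is a minimal, hypothetical Python sketch of the idea described above: a BDI-style agent whose plan library mixes hand-coded plans (high-level route choice at the intersection) with a plan whose body is an RL-learned policy (fine-grained navigation). It is not the Jason-RL API, which is built on Jason/AgentSpeak; all class and goal names here are illustrative.

    import random

    class LearnedPlan:
        """A plan whose body is a learned policy: state -> action (toy Q-table)."""
        def __init__(self, actions):
            self.actions = actions
            self.q = {}  # (state, action) -> estimated return

        def act(self, state):
            # Greedy action under the current Q-values (exploration omitted).
            return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

        def update(self, state, action, reward, next_state, alpha=0.1, gamma=0.9):
            # One-step Q-learning update, applied during trial-and-error training.
            best_next = max(self.q.get((next_state, a), 0.0) for a in self.actions)
            old = self.q.get((state, action), 0.0)
            self.q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

    class BDIAgent:
        """Plans are indexed by the goal that triggers them; at execution time
        the agent does not distinguish hand-coded plans from learned ones."""
        def __init__(self):
            self.plans = {}

        def add_plan(self, goal, plan):
            self.plans[goal] = plan

        def achieve(self, goal, state):
            return self.plans[goal](state)

    agent = BDIAgent()
    nav = LearnedPlan(actions=["steer_left", "steer_right", "straight"])

    # Hand-coded plan: high-level direction at the intersection.
    agent.add_plan("reach_target",
                   lambda s: "turn_left" if s["target"] == "A" else "turn_right")
    # Learned plan: fine-grained navigation, delegated to the RL policy.
    agent.add_plan("follow_lane", lambda s: nav.act(s["lane_offset"]))

    print(agent.achieve("reach_target", {"target": "A"}))     # -> turn_left
    print(agent.achieve("follow_lane", {"lane_offset": -1}))  # greedy policy action

The design choice this sketch illustrates is the one claimed in the abstract: both kinds of plans are invoked through the same goal-achievement interface, so learned behaviour plugs into the BDI reasoning cycle without special-casing.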