A multi-level explainability framework for BDI Multi-Agent Systems

Yan, Elena (2023) A multi-level explainability framework for BDI Multi-Agent Systems. [Master's thesis (Laurea magistrale)], Università di Bologna, degree programme in Ingegneria e scienze informatiche [LM-DM270], Cesena.
Full-text documents available:
PDF document (Thesis)
Available under licence: Creative Commons Attribution - NonCommercial - ShareAlike 4.0 (CC BY-NC-SA 4.0)

Download (5MB)

Abstract

As software systems become more complex and their level of abstraction increases, programming them and understanding their behaviour become more difficult. This is particularly evident in autonomous systems, which need to be resilient to change and adapt to possibly unexpected problems, since mature tools to support such understanding do not yet exist. A thorough understanding of the system is indispensable at every stage of software development, from the initial requirements analysis by domain experts, through design, implementation, debugging, and testing, to product validation. A common and effective approach to increasing understandability in the field of Explainable AI is to provide explanations that convey the decision-making processes and the motivations behind the choices made by the system. Because explanations serve different use cases and address different classes of target users, each with their own requirements and goals, the generated explanations must be offered at different levels of abstraction. This thesis introduces the idea of multi-level explainability as a way to generate explanations of the same system at different levels of detail. A low-level explanation tied to the detailed code can help developers in the debugging and testing phases, while a high-level explanation can support domain experts and designers, or contribute to the validation phase by aligning the system with its requirements. The reference model for the automatic generation of explanations is the BDI (Belief-Desire-Intention) model, since a mentalistic explanation of a system that behaves rationally, given its desires and current beliefs, is easier for humans to understand. In this work we prototype an explainability tool for BDI agents and multi-agent systems that produces explanations at multiple levels of abstraction, usable for different purposes by different classes of users.
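
The abstract's central idea, one reasoning trace rendered at several levels of abstraction, can be illustrated with a minimal sketch. The Python fragment below is not the thesis's actual tool (which targets BDI agents in the JaCaMo framework); the TraceStep schema and the explain function are hypothetical names chosen for illustration. It shows how the same BDI-style record of a decision (triggering belief, active desire, selected plan, executed actions) might be verbalised once in code-oriented detail for developers and once as a mentalistic summary for domain experts.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class TraceStep:
        """One recorded BDI deliberation step (hypothetical schema)."""
        belief: str         # triggering belief, e.g. "temperature(35)"
        desire: str         # active goal/desire, e.g. "keep_room_cool"
        plan: str           # label of the selected plan, e.g. "turn_on_ac"
        actions: List[str]  # low-level actions the plan executed

    def explain(trace: List[TraceStep], level: str) -> str:
        """Render the same trace at a chosen abstraction level:
        'low'  -> plan/action detail, useful while debugging and testing;
        'high' -> mentalistic summary, useful for designers and validation."""
        lines = []
        for step in trace:
            if level == "low":
                lines.append(
                    f"plan '{step.plan}' fired on belief {step.belief}; "
                    f"executed: {', '.join(step.actions)}"
                )
            else:
                lines.append(
                    f"the agent believed {step.belief} and, desiring "
                    f"'{step.desire}', chose to {step.plan.replace('_', ' ')}"
                )
        return "\n".join(lines)

    trace = [TraceStep(belief="temperature(35)",
                       desire="keep_room_cool",
                       plan="turn_on_ac",
                       actions=["send(ac, switch_on)", "log('ac on')"])]

    print(explain(trace, "low"))   # developer-facing explanation
    print(explain(trace, "high"))  # domain-expert-facing explanation

A real implementation would extract such trace entries from the agent platform's reasoning log rather than build them by hand; the point of the sketch is only that the level parameter selects a vocabulary (code-level versus belief/desire-level) over the same underlying trace.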

Document type: Thesis (Laurea magistrale / Master's degree)
Thesis author: Yan, Elena
Degree programme: Ingegneria e scienze informatiche [LM-DM270], Cesena
Programme regulations (Ordinamento CdS): DM270
Keywords: Agent-oriented software engineering, Multi-Agent Systems, Debugging agent program, Explainability, BDI agents, JaCaMo framework
Thesis defence date: 5 October 2023
