Meller, Grzegorz
(2024)
Explainable Artificial Intelligence: A Study of Methods, Applications, and Future Directions.
[Master's degree thesis], Università di Bologna, Degree Programme in Artificial Intelligence [LM-DM270]
Abstract
As artificial intelligence (AI) models, particularly deep neural networks, achieve success across various domains, concerns about their interpretability arise. These "black box" models often operate opaquely, making it difficult to understand their reasoning. This lack of explainability raises issues of trustworthiness, accountability, and ethics, especially in critical areas such as healthcare, finance, and criminal justice. This thesis studies Explainable AI (XAI) as an interdisciplinary field, addressing technological, ethical, legal, and design challenges. A taxonomy of XAI methods is presented, categorizing them into interpretable models, model-agnostic tools, model-specific tools, neuro-symbolic approaches, and explainable tools for generative AI (GenXAI). Tools such as Shapley values, LIME, and integrated gradients are discussed, with special attention to the potential of neuro-symbolic AI to enhance neural network explainability through symbolic reasoning. Emerging XAI techniques for large language models (LLMs), such as Retrieval-Augmented Generation (RAG) and Agentic RAG, are explored for their ability to enhance traceability and transparency. An overview summarizes these tools, classifying each method according to the defined taxonomy and its features. The thesis also explores user experience (UX) considerations, focusing on different personas of digital systems with varying needs for comprehensible explanations. An XAI question bank is introduced, pairing diverse questions and levels of depth with tailored XAI tools to address them. Legal and ethical considerations are presented, focusing on regulations such as the EU's GDPR and the AI Act, and on how XAI can support compliance. Finally, an experiment combining LIME and RAG in the healthcare domain for heart disease prediction and explanation demonstrates the practical application of XAI, showcasing its ability to improve transparency and decision-making in high-stakes scenarios.
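To make the LIME step of the experiment described above more concrete, the following is a minimal, illustrative sketch rather than the thesis's actual pipeline: the dataset, model, and feature names are assumptions for demonstration, it relies on scikit-learn and the lime package, and the RAG component used for natural-language explanation is omitted. It shows how a single heart-disease prediction from a black-box classifier can be explained with a local interpretable surrogate.

    # Minimal LIME sketch (illustrative only; not the thesis's actual experiment).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    # Hypothetical tabular heart-disease data: rows are patients, columns are features.
    feature_names = ["age", "resting_bp", "cholesterol", "max_heart_rate"]
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, len(feature_names)))
    y_train = (X_train[:, 1] + X_train[:, 2] > 0).astype(int)  # synthetic labels

    # Black-box model whose individual predictions we want to explain.
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

    # LIME fits a local, interpretable surrogate around one prediction.
    explainer = LimeTabularExplainer(
        X_train,
        feature_names=feature_names,
        class_names=["no disease", "disease"],
        mode="classification",
    )
    explanation = explainer.explain_instance(
        X_train[0], model.predict_proba, num_features=4
    )
    for feature, weight in explanation.as_list():
        print(f"{feature}: {weight:+.3f}")  # per-feature contribution to this prediction

In a combined pipeline of the kind the abstract describes, such per-feature weights could then be passed, together with retrieved domain documents, to an LLM so that the prediction is accompanied by a traceable natural-language explanation.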
Document type
Degree thesis (Master's degree)
Thesis author
Meller, Grzegorz
Thesis supervisor
School
Degree programme
Artificial Intelligence [LM-DM270]
Programme regulations (Ordinamento CdS)
DM270
Keywords
Explainable Artificial Intelligence (XAI), Neuro-Symbolic AI, AI Ethics, AI Transparency, Large Language Models (LLMs), Retrieval-Augmented Generation (RAG), Local Interpretable Model-agnostic Explanations (LIME), User Experience Design
Thesis defence date
8 October 2024
URI