Rossetto, Eric
(2023)
Decoding anomalies: utilizing SHAP techniques for interpretable time series auto-encoder models.
[Laurea magistrale], Università di Bologna, Corso di Studio in
Artificial intelligence [LM-DM270]. Full-text document not available.
The full text is not available at the author's request.
Abstract
Anomaly detection is a pivotal task in many real-world applications, including predictive maintenance, fraud detection, and root cause analysis. The evolution of machine learning has led to complex neural networks being used for this problem, delivering excellent performance across several domains. However, the prevalent use of these sophisticated yet opaque black-box models becomes unsuitable in high-stakes applications, where the demand for transparency and accountability outweighs mere efficiency. It is therefore natural that explainability has emerged as a necessary concept in AI. Its simple yet fundamental objective is to endow AI systems with transparency, thereby improving their trustworthiness, the confidence they inspire, and their informativeness. Moreover, the presence of a "right to explanation" in the GDPR shows that the concept also has ethical and political implications. Notably, at the intersection of anomaly detection and explainability the literature remains scarce, particularly for time series data. This work tries to bridge this gap by integrating and exploring TimeSHAP, a Shapley-value-based generator of time-series "explanations", with an encoder-decoder model tailored to a reconstruction-based anomaly detection task on aerospace data. Since humans ultimately interact with such agents, the framework is evaluated qualitatively along several dimensions. The outcomes, although falling short of expectations, underline a critical need for a paradigm shift within the research community, which more often than not fixates solely on defining explanations as feature-importance attributions. The machine-human alignment problem observed in the field is a clear indication that the human factor needs to be considered more carefully in the design of Explainable AI systems, even at the expense of performance and cost.
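The abstract names the components only at a high level; as a rough illustration of how such a pipeline fits together, the hypothetical sketch below pairs an LSTM encoder-decoder (whose reconstruction error serves as the anomaly score) with KernelSHAP from the shap package, the model-agnostic Shapley estimator that TimeSHAP adapts to sequential data. This is not the thesis's actual pipeline: model sizes, window dimensions, and data are placeholder assumptions.

```python
# Hypothetical sketch: reconstruction-based anomaly detection explained with KernelSHAP.
# All shapes, hyperparameters, and data are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
import shap

class LSTMAutoencoder(nn.Module):
    """Encoder-decoder that reconstructs a multivariate time-series window."""
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, n_features, batch_first=True)

    def forward(self, x):                      # x: (batch, time, features)
        z, _ = self.encoder(x)                 # latent sequence
        recon, _ = self.decoder(z)             # reconstructed window
        return recon

T, F = 20, 4                                   # window length, number of signals (assumed)
model = LSTMAutoencoder(n_features=F)
model.eval()

def anomaly_score(flat_windows: np.ndarray) -> np.ndarray:
    """Reconstruction MSE per window; higher values flag anomalies."""
    x = torch.tensor(flat_windows, dtype=torch.float32).reshape(-1, T, F)
    with torch.no_grad():
        recon = model(x)
    return ((recon - x) ** 2).mean(dim=(1, 2)).numpy()

# Background of "normal" windows and one suspicious window, flattened to (n, T*F).
background = np.random.randn(50, T * F).astype(np.float32)
suspect = np.random.randn(1, T * F).astype(np.float32)

# KernelSHAP attributes the anomaly score to individual (timestep, feature) cells;
# TimeSHAP builds on the same idea with event- and feature-level pruning for sequences.
explainer = shap.KernelExplainer(anomaly_score, background)
shap_values = explainer.shap_values(suspect, nsamples=200)
print(np.array(shap_values).reshape(T, F))     # importance of each timestep/feature cell
```

In this kind of setup the attribution map highlights which timesteps and which signals drive the reconstruction error, which is the form of "explanation" the abstract argues is, on its own, insufficient for human users.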
Document type
Degree thesis
(Laurea magistrale)
Thesis author
Rossetto, Eric
Thesis supervisor
Thesis co-supervisor
School
Degree programme
Degree programme regulations
DM270
Keywords
Explainable AI, Anomaly Detection, Time Series, SHAP, Autoencoder
Thesis defence date
21 October 2023
URI