Stanzione, Vincenzo Maria
(2022)
Developing a new approach for machine learning explainability combining local and global model-agnostic approaches.
[Master's degree thesis (Laurea magistrale)], Università di Bologna, Degree Programme in
Ingegneria Informatica [LM-DM270]
Abstract
The past two decades have seen a new flourishing of Artificial Intelligence, and of Machine Learning (ML) in particular. This is reflected in the great number of fields that employ ML solutions to tackle a broad spectrum of problems. However, most of the ML models employed today behave as black boxes: given a certain input, we are not able to understand why the model produced a certain output or made a certain decision. Most of the time we are not interested in how the model reasons, but when a model makes extremely critical decisions, or decisions that heavily affect people's lives, explainability becomes a duty.
A great variety of techniques for global or local explanations are available. One of the most widespread is Local Interpretable Model-Agnostic Explanations (LIME), which fits a local linear model in the neighborhood of an input to understand how each feature contributes to the final output. However, LIME is not immune to instability problems and can sometimes produce incoherent predictions. Furthermore, being a local explainability technique, LIME must be run anew for every input we want to explain.
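To make the mechanism concrete, the following is a minimal sketch of the local-surrogate idea behind LIME, not the implementation used in this thesis: it perturbs an instance, queries the black-box model, weights the samples by proximity, and fits a weighted linear model whose coefficients act as per-feature contributions. Function names, the kernel, and parameters are illustrative assumptions.

    import numpy as np
    from sklearn.linear_model import Ridge

    def local_linear_explanation(black_box_predict, x, n_samples=1000, scale=0.1):
        # Perturb the instance x to build a synthetic neighborhood around it
        rng = np.random.default_rng(0)
        Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
        y = black_box_predict(Z)  # query the black-box model on the neighborhood
        # Proximity kernel: perturbations closer to x receive larger weight
        w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * scale ** 2))
        # Weighted linear surrogate; its coefficients are the local feature contributions
        surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=w)
        return surrogate.coef_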
In this work, we take inspiration from the LIME approach based on local linear models to craft a novel technique. By combining it with Model-Based Recursive Partitioning (MOB), a new score function that assesses the quality of a partition, and Sobol quasi-Monte Carlo sampling, we developed a new global model-agnostic explainability technique that we call Global-Lime. Global-Lime gives a global understanding of the original ML model through an ensemble of spatially non-overlapping hyperplanes, and provides a local explanation for a given output by considering only the corresponding linear approximation. The idea is to train the black-box model and then supply its explainable version along with it.
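Among the ingredients listed above, Sobol quasi-Monte Carlo sampling can be illustrated with an off-the-shelf library. The sketch below, assuming scipy and hypothetical feature dimensions and bounds, only shows how low-discrepancy points can be drawn to query a black-box model over its input space; it is not Global-Lime itself.

    from scipy.stats import qmc

    # A 4-dimensional feature space and the bounds below are assumptions for illustration
    sampler = qmc.Sobol(d=4, scramble=True, seed=0)
    unit_points = sampler.random_base2(m=10)  # 2**10 low-discrepancy points in [0, 1)^4
    # Rescale to the (hypothetical) ranges of the original features before
    # querying the black-box model on these points
    points = qmc.scale(unit_points,
                       l_bounds=[0.0, 0.0, 0.0, 0.0],
                       u_bounds=[1.0, 10.0, 5.0, 100.0])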
Document type
Degree thesis (Master's degree, Laurea magistrale)
Thesis author
Stanzione, Vincenzo Maria
Thesis supervisor
Thesis co-supervisor
School
Degree programme
Degree programme regulations
DM270
Keywords
Machine Learning, Artificial Intelligence, Machine Learning Explainability, LIME, MOB, Interpretable Machine Learning
Thesis defence date
22 March 2022
URI