Khantigul, Pasit
(2023)
Concept Whitening for Interpretable Link Prediction on Large Graphs.
[Laurea magistrale (Master's degree)], Università di Bologna, Corso di Studio in
Ingegneria informatica [LM-DM270]
The full text is not available at the author's request.
Abstract
How much can we trust a neural network to make decisions that can potentially impact human lives?
This is a crucial question that arises in various critical domains, including healthcare, where graph-based approaches have proven to be useful in predicting drug interactions, modeling patient trajectories, and identifying disease subtypes, among other applications.
Nowadays graphs are ubiquitous, and so are the techniques for processing them.
In this context, the thesis addresses the trustworthiness of Graph Neural Networks (GNNs) by applying Concept Whitening (CW) to interpretable link prediction, so that the computations leading up to the CW layer can be understood. Specifically, adding a CW layer to a GNN applies a rotation to the latent space, aligning its axes with known concepts of interest.
Experiments conducted on a real-world medical network show the benefit of an inherently interpretable model based on the GraphSAGE architecture, which is designed to scale efficiently to large, dynamic graphs.
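The core mechanism the abstract describes, whitening the latent embeddings and then rotating them so that chosen axes align with concept directions, can be illustrated with a minimal NumPy sketch. This is not the thesis's implementation: the ZCA whitening, the Gram-Schmidt construction of the rotation, and the toy data below are all illustrative assumptions.

```python
import numpy as np

def whiten(X):
    """ZCA-whiten embeddings: zero mean, (approximately) identity covariance."""
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = Xc.T @ Xc / X.shape[0]
    vals, vecs = np.linalg.eigh(cov)          # symmetric eigendecomposition
    W = vecs @ np.diag(1.0 / np.sqrt(vals + 1e-8)) @ vecs.T
    return Xc @ W

def concept_rotation(Z, concept_dirs):
    """Rotate whitened embeddings Z so the first axes align with the given
    concept directions (Gram-Schmidt, then completion to a full basis)."""
    d = Z.shape[1]
    basis = []
    for c in concept_dirs:
        v = c.astype(float)
        for b in basis:
            v = v - (v @ b) * b               # orthogonalize against earlier axes
        basis.append(v / np.linalg.norm(v))
    rng = np.random.default_rng(0)
    while len(basis) < d:                     # complete with random orthogonal axes
        v = rng.normal(size=d)
        for b in basis:
            v = v - (v @ b) * b
        n = np.linalg.norm(v)
        if n > 1e-6:
            basis.append(v / n)
    Q = np.stack(basis, axis=1)               # columns are the new concept axes
    return Z @ Q

# Toy usage: 200 random 4-d "node embeddings", two hypothetical concept directions
X = np.random.default_rng(1).normal(size=(200, 4)) @ np.diag([3.0, 1.0, 0.5, 2.0])
Z = whiten(X)
R = concept_rotation(Z, [np.array([1, 1, 0, 0]), np.array([0, 0, 1, 0])])
```

After whitening, the covariance of `Z` is the identity, and because `Q` is orthogonal the rotation preserves it; interpretability comes from the fact that coordinate 0 of `R` now measures alignment with the first concept direction. In the actual CW layer the rotation is learned so that activations of examples belonging to each concept concentrate along its axis.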
Document type
Thesis (Master's degree)
Thesis author
Khantigul, Pasit
Thesis supervisor
Thesis co-supervisor
School
Degree programme
Degree programme regulations
DM270
Keywords
graph neural networks, concept whitening, graph sage, graphs, explainable ai, deep learning, embedding space
Thesis defence date
23 March 2023
URI