Morra, Simone (2025)
EEG decoding and analysis of reach-to-grasping using an explainable artificial intelligence approach.
[Laurea magistrale], Università di Bologna, Degree programme in Biomedical Engineering [LM-DM270] - Cesena. Full-text document not available.
The full text is not available at the author's request. (Contact the author)
Abstract
Motor decoding from electroencephalography (EEG) using deep learning models has gained significant attention in applications such as brain-computer interfaces. While these models offer high accuracy, their "black-box" nature limits their transparency, which is especially problematic in neurophysiological contexts where understanding the decision-making process of these models is essential. To address this challenge, this thesis focuses on the analysis and decoding of EEG signals during a reach-to-grasping task using an explainable artificial intelligence approach. The study involved the acquisition and processing of EEG signals from 19 healthy participants, who performed movements to reach and grasp four different objects. Two classification tasks were considered: movement vs. rest, and the discrimination between three types of grasping (power, intermediate, and precision). After EEG preprocessing, movement-related cortical potentials were extracted and analyzed to assess whether the EEG signals are modulated under the conditions of interest. EEGNet, a convolutional neural network widely used for EEG signal classification, was then trained and its performance evaluated. Five post-hoc explainability methods were applied to understand the spatio-temporal features that EEGNet identifies as relevant for decoding motor activity in the first task (movement vs. rest): saliency maps, DeepLIFT, Input × Gradient, Gradient SHAP, and Integrated Gradients. The outputs of the explainability methods were compared with the dynamics of movement-related cortical potentials, which are known to reflect the neurophysiological correlates of movement. DeepLIFT, Input × Gradient, Gradient SHAP, and Integrated Gradients emerged as the most reliable explainers for this analysis. This thesis provides a preliminary analysis of post-hoc explanation techniques for improving the comprehension of deep learning models applied to motor EEG decoders.
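As a concrete illustration of how such post-hoc analyses can be run, the sketch below applies the five attribution methods named in the abstract to a PyTorch classifier using the Captum library. This is a minimal sketch, not the thesis code: the tiny CNN stands in for a trained EEGNet, and the input shape (61 channels × 500 samples), the zero-signal baseline, and the class index are illustrative assumptions.

import torch
from captum.attr import (Saliency, DeepLift, InputXGradient,
                         GradientShap, IntegratedGradients)

# Stand-in for a trained EEGNet: a tiny CNN over (batch, 1, channels, samples).
model = torch.nn.Sequential(
    torch.nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32)),  # temporal conv
    torch.nn.ELU(),
    torch.nn.AdaptiveAvgPool2d((1, 16)),
    torch.nn.Flatten(),
    torch.nn.Linear(8 * 16, 2),  # two classes, e.g. movement vs. rest
)
model.eval()  # attribution is computed on a frozen model

x = torch.randn(1, 1, 61, 500, requires_grad=True)  # one EEG trial (placeholder shapes)
baselines = torch.zeros_like(x)  # zero-signal reference for baseline-based methods
target = 1                       # class index for "movement" (assumption)

explainers = {
    "saliency":             Saliency(model),
    "deeplift":             DeepLift(model),
    "input_x_gradient":     InputXGradient(model),
    "gradient_shap":        GradientShap(model),
    "integrated_gradients": IntegratedGradients(model),
}

attributions = {}
for name, explainer in explainers.items():
    if name in ("deeplift", "gradient_shap", "integrated_gradients"):
        attr = explainer.attribute(x, baselines=baselines, target=target)
    else:
        attr = explainer.attribute(x, target=target)
    # Collapse to a (channels, samples) relevance map.
    attributions[name] = attr.squeeze().detach()

In a setting like the one described in the abstract, each resulting channels × samples relevance map would then be averaged over trials and compared against the spatio-temporal dynamics of movement-related cortical potentials.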
Document type
Degree thesis
(Laurea magistrale)
Thesis author
Morra, Simone
Thesis supervisor
Thesis co-supervisor
School
Degree programme
Curriculum
Biomedical Engineering for Neuroscience
Degree programme regulations
DM270
Keywords
EEG, Deep Learning, Post-hoc Explainability Techniques, Motor-Related Cortical Potential
Thesis defence date
6 February 2025
URI