Benedetti, Agnese
(2024)
Motor imagery decoding from EEG via interpretable convolutional neural networks.
[Master's degree thesis], Università di Bologna, Corso di Studio in
Biomedical engineering [LM-DM270] - Cesena. Full text not available.
The full text is not available by the author's choice.
Abstract
Convolutional neural networks (CNNs) are emerging as the new frontier for EEG decoding: these techniques enhance classification accuracy by automatically learning relevant features directly from the data, bypassing separate feature extraction and classification stages. However, current implementations often lack interpretability and require substantial computational resources due to the high number of trainable parameters. Introducing interpretable components into the network design facilitates the interpretation of the learned features in a given domain (e.g., the frequency domain). This can be useful for checking that the model relies on neurophysiological rather than artifactual features, and for clarifying which EEG features are the most relevant for decoding.
In this thesis, a benchmark analysis on motor-imagery EEG datasets is performed, comparing six traditional machine learning methods (TS+LR, TS+SVM, CSP+LDA, regCSP+shLDA, MDM and FgMDM) and a state-of-the-art CNN model (EEGNet) with six interpretable CNNs for EEG decoding (SincShallowNet, SincEEGNet, WaSFCNN, MagEEGminer, CorrEEGminer and PLVEEGminer). The interpretable CNNs incorporate network elements devoted to learning interpretable features in the frequency domain, enabling a direct analysis of the frequency content most relevant for EEG decoding.
Results indicate that some of the interpretable models (SincShallowNet, SincEEGNet, WaSFCNN) achieve state-of-the-art performance with a limited number of trainable parameters. Furthermore, the analysis of the learned spectral features suggests that the features learned by the interpretable CNNs matched the key EEG motor-related activity in the frequency domain, primarily within the α, β, and low γ frequency bands. In conclusion, this thesis highlights the role and the possible contributions of interpretable neural networks to the field of deep learning applied to motor imagery EEG decoding.
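The sinc-based interpretable models named in the abstract (SincShallowNet, SincEEGNet) constrain the first convolutional layer to parametrized sinc band-pass filters, so each learned kernel is fully described by two cutoff frequencies that can be read off directly and compared with the α, β, and γ bands. A minimal NumPy sketch of such a kernel, not taken from the thesis code (the function name, kernel length, and sampling rate are illustrative assumptions):

```python
import numpy as np

def sinc_bandpass_kernel(f_low, f_high, kernel_len=65, fs=250.0):
    """Band-pass FIR kernel parametrized only by its two cutoff
    frequencies in Hz, in the style of sinc-based CNN layers: the
    difference of two low-pass sinc filters, Hamming-windowed.
    In a trained network, f_low and f_high are the learnable
    (and directly interpretable) parameters."""
    # Symmetric time axis in seconds, centered on zero
    t = (np.arange(kernel_len) - (kernel_len - 1) / 2) / fs

    def lowpass(fc):
        # Ideal low-pass impulse response 2*fc*sinc(2*fc*t);
        # np.sinc is the normalized sinc, sin(pi x)/(pi x)
        return 2 * fc * np.sinc(2 * fc * t)

    kernel = lowpass(f_high) - lowpass(f_low)
    # Window to tame the truncation ripples of the ideal filter
    return kernel * np.hamming(kernel_len)

# Example: a kernel covering the mu/alpha band (8-13 Hz)
alpha_kernel = sinc_bandpass_kernel(8.0, 13.0)
```

Because the kernel is a closed-form function of `f_low` and `f_high`, inspecting a trained sinc layer reduces to reading out these two numbers per filter, which is the direct frequency-domain analysis the abstract refers to.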
Document type
Thesis
(Master's degree)
Thesis author
Benedetti, Agnese
Thesis supervisor
School
Degree programme
Curriculum
CURRICULUM BIOENGINEERING OF HUMAN MOVEMENT
Degree programme regulations
DM270
Keywords
Electroencephalography (EEG), Motor Imagery, Decoding, Machine Learning, Deep Convolutional Neural Networks (CNN), Interpretability
Thesis defence date
21 November 2024
URI