Profili, Alessandro
(2023)
Machine learning algorithms for emotion recognition through the analysis of eye-tracking data in a sample of healthy subjects.
[Master's degree thesis], Università di Bologna, Degree Programme in
Biomedical Engineering [LM-DM270] - Cesena. Restricted-access document.
Full-text documents available:
PDF (Thesis), 3MB. Full text accessible only to institutional users of the University.
Available under licence: unless broader permissions are granted by the author, the thesis may be freely consulted, and a copy may be saved and printed for strictly personal purposes of study, research, and teaching; any direct or indirect commercial use is expressly forbidden. All other rights to the material are reserved.
Abstract
Emotion recognition is a research field of major interest to areas such as health care and psychology. Many emotion models and emotion recognition algorithms have been developed in recent decades to identify and better understand human emotions. Recognition can be attempted through the analysis of physiological responses to specific emotional stimuli; among others, oculomotor activity and pupil dilation responses have been repeatedly investigated in the literature, due to their low cost and non-invasive nature. This work investigates, in a sample of 15 healthy participants, differences in these responses to customized emotional audio-visual stimuli delivered through a virtual reality headset, consisting of pre-recorded videos featuring either acquaintances or strangers highlighting positive characteristics and future plans of either the persons under study or other individuals, respectively. To this aim, eye-tracking data were acquired and pre-processed, and a feature extraction procedure was performed. Subsequently, machine learning algorithms were used, first, to differentiate between stimulation and non-stimulation periods; second, to differentiate between stimuli featuring acquaintances and strangers; third, to differentiate between stimuli with specific self-reported pleasure (i.e. valence) and excitation (i.e. arousal) values. Our findings showed an accuracy higher than 90% at differentiating stimulation and non-stimulation periods, and an accuracy around 80% at differentiating videos featuring acquaintances from those featuring strangers. The accuracy at identifying stimuli with specific characteristics was higher than 65% and comparable to the accuracy reported in previous studies. In light of these results, additional studies including more participants and a wider range of emotions are needed to obtain more generalised and accurate results.
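The classification pipeline described in the abstract (pre-processed eye-tracking features fed to a machine learning classifier to separate stimulation from non-stimulation periods) can be illustrated with a minimal sketch. The feature names, their distributions, and the choice of a random forest classifier are assumptions for demonstration only, not the thesis's actual data or methods:

```python
# Illustrative sketch only: binary classification of stimulation vs.
# non-stimulation windows from synthetic "eye-tracking" features.
# Feature definitions and class separations are invented for the demo.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200  # number of analysis windows per class

# Hypothetical per-window features: mean pupil diameter (mm),
# fixation count, mean saccade amplitude (deg).
stim = np.column_stack([
    rng.normal(4.2, 0.3, n),   # pupil tends to dilate under emotional stimulation
    rng.normal(8.0, 2.0, n),
    rng.normal(3.0, 0.8, n),
])
rest = np.column_stack([
    rng.normal(3.6, 0.3, n),
    rng.normal(12.0, 2.0, n),
    rng.normal(4.5, 0.8, n),
])
X = np.vstack([stim, rest])
y = np.array([1] * n + [0] * n)  # 1 = stimulation, 0 = non-stimulation

# Cross-validated accuracy of a random forest on the synthetic features
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"5-fold CV accuracy: {scores.mean():.2f}")
```

In a real study, the same structure would apply, but the feature matrix would come from the recorded eye-tracking signal (e.g. windowed pupillometry and oculomotor-event statistics), and cross-validation would typically be stratified by participant to avoid leakage between training and test folds.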
Document type
Degree thesis
(Master's degree)
Thesis author
Profili, Alessandro
Thesis supervisor
Thesis co-supervisor
School
Degree programme
Curriculum
CURRICULUM BIOMEDICAL ENGINEERING FOR NEUROSCIENCE
Degree programme regulation
DM270
Keywords
Emotions, Emotion recognition, Audio-visual stimulation, Oculomotor events, Classification, Machine Learning
Thesis defence date
21 July 2023
URI