Bolognini, Luca
(2023)
Emotion Recognition for Human-Centered Conversational Agents.
[Laurea magistrale], Università di Bologna, Corso di Studio in
Artificial intelligence [LM-DM270]
Full-text documents available:
Abstract
This thesis presents a study on Emotion Recognition in Conversation (ERC), using a chatbot as a reference case study, with the aim of enhancing conversational agents' ability to understand and respond appropriately to human emotions. The study consists of two phases. The first involves several baselines and an implementation of EmoBERTa to explore aspects of the task, such as preprocessing, balancing techniques, and context modelling, evaluated on an ERC benchmark dataset. The results reveal that punctuation provides key information for the task, that balancing techniques can yield marginal improvements when appropriately selected, and that context contributes additional information, suggesting that a non-static context construction could be beneficial.
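A context construction of this kind can be sketched as follows. This is an illustrative sketch, not the thesis's exact implementation: the speaker-tag format, the `</s></s>` separator (RoBERTa-style), and the `max_past` window parameter are all assumptions.

```python
def build_context(dialogue, idx, max_past=3):
    """Concatenate up to max_past preceding utterances (with speaker tags)
    to the target utterance, joined with a RoBERTa-style </s></s> separator.

    dialogue: list of (speaker, utterance) tuples; idx: target position.
    A non-static variant could choose max_past per utterance instead of fixing it.
    """
    window = dialogue[max(0, idx - max_past): idx + 1]
    tagged = [f"{speaker}: {utterance}" for speaker, utterance in window]
    return " </s></s> ".join(tagged)


# Hypothetical three-turn dialogue for illustration.
dialogue = [
    ("Monica", "How was the interview?"),
    ("Rachel", "It was a disaster!"),
    ("Monica", "Oh no, what happened?"),
]
print(build_context(dialogue, 2, max_past=1))
```

Varying `max_past` (or selecting it dynamically) is one way the non-static context construction mentioned above could be realised.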
In the second phase, the effectiveness of a Few-Shot learning method, SetFit, is explored for ERC to address the scarcity of labelled real-world data. An incompatibility between the given context definition and the architecture employed by SetFit called for an adaptation, which proved to be ineffective. The performance of SetFit and standard fine-tuning is compared in a limited-data regime. Finally, the study explores the ability of a model trained on a specific ERC dataset to adapt to limited data from a different domain via Transfer Learning and fine-tuning, with inconclusive results. These findings and insights can lay the groundwork for future developments and studies in the growing field of emotion-aware conversational agents and the application of Few-Shot learning to this task.
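For orientation, SetFit's first stage fine-tunes a sentence-transformer contrastively on pairs built from the few labelled examples before training a classification head on the resulting embeddings. A minimal sketch of that pair-generation idea follows; the function name, pair format, and example emotion labels are hypothetical and do not reflect the SetFit library's actual API.

```python
from itertools import combinations


def make_contrastive_pairs(texts, labels):
    """SetFit-style pair generation from a handful of labelled utterances:
    pairs sharing a label become positives (1), pairs with different
    labels become negatives (0)."""
    pairs = []
    for (t1, y1), (t2, y2) in combinations(zip(texts, labels), 2):
        pairs.append((t1, t2, 1 if y1 == y2 else 0))
    return pairs


# Four hypothetical labelled utterances, two per emotion class.
texts = ["I love this!", "This is great!", "I hate waiting.", "So annoying."]
labels = ["joy", "joy", "anger", "anger"]
pairs = make_contrastive_pairs(texts, labels)
# 6 pairs total: 2 positives (joy-joy, anger-anger) and 4 negatives
```

Because every pairing of the few labelled examples becomes a training instance, this quadratic expansion is what lets SetFit extract more signal from limited data than direct fine-tuning on the examples alone.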
Document type
Degree thesis
(Laurea magistrale)
Thesis author
Bolognini, Luca
Thesis supervisor
Thesis co-supervisor
School
Degree programme
Degree programme regulations
DM270
Keywords
Emotion Recognition, Few-Shot Learning, Conversational agents
Thesis defence date
23 March 2023
URI