Ciapponi, Stefano
(2023)
On the use of Prompting for Fine-Tuning Neural models for Speech Processing.
[Master's degree thesis], Università di Bologna, Degree Programme in
Artificial Intelligence [LM-DM270]
Full-text documents available:
PDF document (Thesis)
Available under licence: Unless the author has granted broader permissions, the thesis may be freely consulted, and a copy may be saved and printed strictly for personal study, research, and teaching purposes; any direct or indirect commercial use is expressly forbidden. All other rights to the material are reserved.
Download (1MB)
Abstract
Recent advances in the development of extremely large, multi-purpose models have motivated computer scientists to explore methods for adapting them to more specific tasks.
Fine-tuning is the most widely used approach to this problem: a general model is further trained on a new labeled dataset for the target task. While fine-tuning mitigates the data-availability problem and enables models trained on small labeled datasets to achieve state-of-the-art performance, it also has key disadvantages: it is inefficient, computationally expensive, and makes the models less general.
This study investigates the use of learnable prompts, a parameter-efficient alternative to full fine-tuning, in spoken language understanding (SLU) tasks. To our knowledge, learnable prompts have not previously been applied to SLU, although they have been tested on text-based natural language processing (NLP) tasks and computer vision tasks with promising results. We therefore introduce our proposed approach, which uses learnable prompts in an SLU context, and analyse experimental results on two deep-learning-based end-to-end SLU models.
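The abstract describes prompt tuning only in general terms. As a rough illustration of the idea (not the thesis's actual models or hyperparameters, which are not given here), a minimal PyTorch sketch might prepend a small set of learnable prompt vectors to the input of a frozen pre-trained encoder, so that only the prompts and a small task head are trained; all names (PromptTunedSLU, n_prompts, n_intents) are hypothetical:

```python
import torch
import torch.nn as nn

class PromptTunedSLU(nn.Module):
    """Minimal sketch of prompt tuning for SLU: learnable prompt
    vectors are prepended to the input sequence of a frozen encoder.
    Architecture and names are illustrative assumptions, not the
    thesis's actual setup."""

    def __init__(self, encoder: nn.Module, hidden_dim: int,
                 n_prompts: int = 20, n_intents: int = 31):
        super().__init__()
        self.encoder = encoder
        # Freeze the pre-trained backbone: only the prompts and the
        # small classification head receive gradient updates.
        for p in self.encoder.parameters():
            p.requires_grad = False
        # One learnable embedding per "virtual token".
        self.prompts = nn.Parameter(torch.randn(n_prompts, hidden_dim) * 0.02)
        self.classifier = nn.Linear(hidden_dim, n_intents)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, time, hidden_dim) speech feature frames.
        batch = features.size(0)
        prompt = self.prompts.unsqueeze(0).expand(batch, -1, -1)
        x = torch.cat([prompt, features], dim=1)  # prepend prompts
        h = self.encoder(x)                       # frozen forward pass
        return self.classifier(h.mean(dim=1))     # pooled intent logits

# Toy usage with a generic transformer encoder (again, an assumption):
# enc = nn.TransformerEncoder(
#     nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True),
#     num_layers=2)
# model = PromptTunedSLU(enc, hidden_dim=256)
# logits = model(torch.randn(8, 120, 256))
```

The appeal, as the abstract notes, is parameter efficiency: the trainable state is just the prompt matrix and the head, so adapting to a new task does not require updating or storing a full copy of the backbone.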
Document type
Degree thesis
(Master's degree)
Thesis author
Ciapponi, Stefano
Thesis supervisor
Thesis co-supervisor
School
Degree programme
Degree programme regulations
DM270
Keywords
Speech Processing, Spoken Language Understanding, Natural Language Processing, Neural Networks, Model Fine-tuning, Transformers, Artificial Intelligence
Thesis defence date
21 October 2023
URI