Baldelli, Davide
(2023)
A two-step LLM-augmented distillation method for passage reranking.
[Master's degree thesis], Università di Bologna, Degree Programme in Artificial intelligence [LM-DM270]
Abstract
This thesis explores and enhances passage reranking in Information Retrieval (IR) systems, focusing on the distillation of knowledge from Large Language Models (LLMs) to augment the capabilities of smaller cross-encoders. The research pivots on two questions: whether the knowledge of LLMs can be distilled into smaller models without compromising reranking capability, and how the distillation process affects the adaptability of the resulting model across diverse scenarios. To address these questions, a novel distillation method, termed TWOLAR (TWO-step LLM-Augmented distillation method for passage Reranking), is introduced.
TWOLAR is characterized by a new scoring strategy and a distillation process built around the creation of a novel and diverse training dataset. The dataset consists of 20K queries, each associated with a set of documents retrieved via four distinct retrieval methods to ensure diversity, then reranked by exploiting the zero-shot reranking capabilities of an LLM. An ablation study demonstrates the contribution of each introduced component. The experimental results show that TWOLAR significantly enhances the document reranking ability of the underlying model, obtaining state-of-the-art performance on the TREC-DL test sets and on the zero-shot evaluation benchmark BEIR, thereby contributing a novel perspective and methodology to the discourse on optimizing IR systems via knowledge distillation from LLMs.
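Since the abstract does not spell out the scoring strategy or the training objective, the following minimal Python sketch illustrates one plausible instantiation of such a distillation step: a T5-style cross-encoder student scores (query, document) pairs, and a pairwise RankNet-style loss pushes its scores toward the document ordering produced by the LLM teacher. The model name, the prompt format, the "true"-token scoring head, and the choice of loss are assumptions made for illustration, not the method defined in the thesis.

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder student: any T5-style seq2seq model could stand in here.
tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

def student_score(query: str, document: str) -> torch.Tensor:
    # Encode the pair in a monoT5-like prompt (prompt format is an assumption).
    inputs = tokenizer(
        f"Query: {query} Document: {document} Relevant:",
        return_tensors="pt", truncation=True, max_length=512,
    )
    # One-step decode; use the logit of the "true" token as the relevance
    # score (a common cross-encoder trick, assumed here, not taken from the thesis).
    decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
    logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits
    true_id = tokenizer.convert_tokens_to_ids("▁true")
    return logits[0, 0, true_id]

def distillation_loss(student_scores: torch.Tensor) -> torch.Tensor:
    # student_scores is ordered by the LLM teacher, best document first.
    # RankNet-style pairwise loss: penalize every pair the student inverts.
    n = student_scores.shape[0]
    loss = student_scores.new_zeros(())
    for i in range(n):
        for j in range(i + 1, n):
            loss = loss + torch.nn.functional.softplus(
                student_scores[j] - student_scores[i]
            )
    return loss / (n * (n - 1) / 2)

# Usage: score the teacher-ordered candidates, then backpropagate the loss.
query = "what is knowledge distillation"
docs_teacher_order = ["doc the LLM ranked best", "second best", "worst"]
scores = torch.stack([student_score(query, d) for d in docs_teacher_order])
distillation_loss(scores).backward()

The pairwise term softplus(s_j - s_i) is the standard RankNet objective: it decays to zero when the student already ranks document i above document j, so training only has to repair the orderings that disagree with the LLM teacher.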
To facilitate future work, we release our dataset, fine-tuned models, and code.
Document type: Thesis (Master's degree)
Thesis author: Baldelli, Davide
Thesis supervisor:
Thesis co-supervisor:
School:
Degree programme: Artificial intelligence [LM-DM270]
Degree programme regulations: DM270
Keywords: Information Retrieval, Reranking, Knowledge distillation, Large Language Models
Thesis defence date: 21 October 2023
URI: