Tanzi, Giuseppe
(2024)
Automating Test Case Generation for Automotive Industry using Large Language Models.
[Laurea magistrale], Università di Bologna, Degree Programme in Artificial intelligence [LM-DM270]. Full-text document not available.
The full text is not available at the author's request. (Contact the author)
Abstract
The rapid evolution of the automotive industry towards highly connected, software-driven vehicles underscores the need for robust testing methodologies that ensure safety, reliability, and compliance with industry standards. Traditional sequential testing methodologies, however, struggle with test case generation in particular, because manually writing test cases is time-consuming and does not scale to the complexity of modern automotive software requirements. This thesis investigates how Large Language Models (LLMs) can be used to generate test cases automatically, aiming to revolutionize the automotive software testing landscape. Leveraging advanced natural language understanding and generation capabilities, LLMs offer a novel approach to automating test case creation from requirements. The thesis focuses on the application of open-source LLMs for test case generation, emphasizing the importance of balancing innovation with data confidentiality concerns. Additionally, advanced prompt engineering techniques such as Retrieval-Augmented Generation and zero-shot and few-shot prompting are employed to enhance LLM performance. Through case studies involving datasets from renowned automotive companies, the research highlights the capabilities of LLMs in generating test cases across various documentation scenarios. The findings suggest that LLMs have the potential to improve on traditional testing methods, offering solutions that enhance efficiency, accuracy, and confidentiality in automotive software testing.
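To make the prompting setup concrete, the minimal sketch below shows how few-shot prompting, one of the techniques mentioned in the abstract, might be used to draft a test case from a single requirement with an open-source LLM. The model name, prompt wording, example requirement, and helper function are illustrative assumptions, not the configuration used in the thesis.

# Illustrative sketch: few-shot prompting an open-source LLM to draft a test case
# from a natural-language requirement. Model choice, prompt wording, and the
# example requirements are assumptions for illustration, not the thesis setup.
from transformers import pipeline

MODEL_NAME = "mistralai/Mistral-7B-Instruct-v0.2"  # hypothetical choice of open-source LLM

FEW_SHOT_PROMPT = """You are an automotive software test engineer.
Write a test case (title, preconditions, steps, expected result) for each requirement.

Requirement: The infotainment system shall mute audio within 200 ms when a phone call starts.
Test case:
Title: Audio mute on incoming call
Preconditions: Vehicle on, audio playing, phone paired via Bluetooth.
Steps: 1. Start media playback. 2. Place an incoming call to the paired phone.
Expected result: Audio output is muted within 200 ms of the call notification.

Requirement: {requirement}
Test case:
"""

def generate_test_case(requirement: str) -> str:
    # Build the few-shot prompt and let the model continue it with a new test case.
    generator = pipeline("text-generation", model=MODEL_NAME)
    prompt = FEW_SHOT_PROMPT.format(requirement=requirement)
    output = generator(prompt, max_new_tokens=200, do_sample=False)
    # The pipeline returns the prompt plus the completion; keep only the completion.
    return output[0]["generated_text"][len(prompt):].strip()

if __name__ == "__main__":
    print(generate_test_case(
        "The rear camera view shall appear within 2 seconds of selecting reverse gear."
    ))

A Retrieval-Augmented Generation variant of this sketch would additionally retrieve relevant passages from the requirement documentation and prepend them to the prompt before generation.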
Document type
Degree thesis
(Laurea magistrale)
Thesis author
Tanzi, Giuseppe
Thesis supervisor
Thesis co-supervisor
School
Degree programme
Artificial intelligence [LM-DM270]
Degree programme regulations
DM270
Keywords
Large Language Models, Automotive, Prompt Engineering, Retrieval-Augmented Generation, Natural Language Processing, NLP, LLM, RAG
Thesis defence date
19 March 2024
URI