Paolini, Riccardo
(2024)
Deep Reinforcement Learning in combat simulations.
[Master's degree thesis], Università di Bologna, Degree Programme in
Artificial Intelligence [LM-DM270], restricted-access document.
Abstract
Recent advancements in model-based reinforcement learning have demonstrated remarkable potential across various tasks, often achieving performance comparable to or better than model-free algorithms while requiring significantly fewer environment steps. In contrast, multi-agent reinforcement learning typically demands a large number of samples for effective training, making it a promising area to benefit from model-based approaches. Despite this, limited research has been conducted on model-based algorithms in multi-agent settings. This thesis investigates the potential of model-based multi-agent reinforcement learning in warfare simulations, where two battalions aim to either defend or capture a target position. We utilize the DreamerV3 algorithm to command companions independently while training them on shared experience. In addition, we introduce new encoder, decoder, and action networks tailored for DreamerV3 to handle the complex observation and action spaces of the ReLeGSim simulator. Our experiments evaluate the impact of various information sources, model sizes, and table embeddings on agent performance across increasing levels of difficulty. The results show significant improvements, achieving a 74- to 98-fold increase in sample efficiency and enhanced robustness during training compared to the previously employed model-free single-agent approach.
Document type
Degree thesis
(Master's degree)
Thesis author
Paolini, Riccardo
Thesis supervisor
Thesis co-supervisor
School
Degree programme
Degree programme regulations
DM270
Keywords
Reinforcement Learning, Model-Based RL, Multi-Agent RL, DreamerV3, World-Models, ReLeGSim
Thesis defence date
8 October 2024
URI