PDF document (Thesis)
Available under license: Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0)
Abstract
Robots in real-world environments must adapt their strategies as conditions change. This thesis investigates online strategy adaptation using an Actor-Critic Reinforcement Learning framework optimized by a Genetic Algorithm (GA). By evolving meta-parameters and initial network weights over 50 generations, the agent achieves an effective balance between exploration and exploitation. Using the ARGoS3 simulator, the research evaluates two scenarios: an energy-driven survival task and a multi-headed architecture for contextual rewards based on visual stimuli. Results show that combining learning and evolution enables robots to autonomously adapt to non-stationary environments.
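The abstract describes evolving the learning meta-parameters (and initial weights) of an actor-critic agent with a GA over 50 generations. The thesis's actual encoding, fitness function, and GA operators are not given here, so the following is only a minimal illustrative sketch: a genome holds two hypothetical meta-parameters (learning rate and discount factor), fitness is a stand-in surrogate rather than a real simulated episode return, and the GA uses simple truncation selection, one-point crossover, and Gaussian mutation.

```python
import random

def evaluate(genome):
    # Stand-in for running an episode and returning its reward.
    # Illustrative surrogate: peaks at lr = 0.05, gamma = 0.95.
    lr, gamma = genome
    return -(lr - 0.05) ** 2 - (gamma - 0.95) ** 2

def evolve(generations=50, pop_size=20, seed=0):
    rng = random.Random(seed)
    # Each genome encodes the meta-parameters under evolution:
    # (learning rate, discount factor).
    pop = [(rng.uniform(0.0, 0.2), rng.uniform(0.5, 1.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=evaluate, reverse=True)
        elite = pop[: pop_size // 2]  # truncation selection: keep top half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            # One-point crossover over the two genes,
            # followed by clipped Gaussian mutation.
            lr = min(max(a[0] + rng.gauss(0, 0.01), 0.0), 0.2)
            gamma = min(max(b[1] + rng.gauss(0, 0.02), 0.5), 1.0)
            children.append((lr, gamma))
        pop = elite + children
    return max(pop, key=evaluate)

best_lr, best_gamma = evolve()
```

In the thesis's setting, `evaluate` would instead launch an ARGoS3 simulation with the candidate meta-parameters, let the actor-critic agent learn online, and return the accumulated task reward as fitness.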
