The full text is not available at the author's request. (Contact the author)
Abstract
This thesis presents a neural network-based approach to robotic palletizing, addressing
the challenge of optimizing the packing efficiency and stability of objects in industrial
environments. Palletizing, a crucial task in logistics and automation, requires precise
placement strategies to maximize space utilization while ensuring structural integrity.
The proposed method leverages a custom-designed convolutional neural network (CNN)
to predict feasible packing actions in a three-dimensional space. A Proximal Policy Optimization
(PPO) algorithm is employed to train a reinforcement learning agent, enabling it
to autonomously learn efficient packing policies. A feasibility mask is integrated into the
agent’s decision-making process, ensuring only valid actions are selected during training
and deployment.
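To make the feasibility-mask idea concrete, a minimal sketch follows (written in PyTorch for illustration; the thesis does not publish its code here). A small CNN scores candidate placement cells on a pallet heightmap, and infeasible cells are masked out of the logits before the action distribution is sampled. The heightmap observation, grid size, layer sizes, and all names are assumptions, not the author's actual architecture.

import torch
import torch.nn as nn

class MaskedPackingPolicy(nn.Module):
    # Illustrative policy head: maps a pallet heightmap to per-cell placement
    # logits and suppresses infeasible cells before sampling an action.
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.logit_head = nn.Conv2d(32, 1, kernel_size=1)  # one logit per (x, y) placement cell

    def forward(self, heightmap, feasibility_mask):
        # heightmap: (B, 1, H, W); feasibility_mask: (B, H, W) with 1 = valid placement.
        features = self.backbone(heightmap)
        logits = self.logit_head(features).squeeze(1)              # (B, H, W)
        logits = logits.masked_fill(feasibility_mask == 0, -1e9)   # invalid cells get ~zero probability
        dist = torch.distributions.Categorical(logits=logits.flatten(1))
        action = dist.sample()                                     # flat cell index on the pallet grid
        return action, dist.log_prob(action)

# Minimal usage on a 10x10 pallet grid with a random feasibility mask.
policy = MaskedPackingPolicy()
heightmap = torch.zeros(1, 1, 10, 10)
mask = (torch.rand(1, 10, 10) > 0.5).float()
action, log_prob = policy(heightmap, mask)

Masking the logits, rather than discarding invalid actions after sampling, keeps the PPO policy gradient well defined because every infeasible action receives effectively zero probability.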
The system was tested on synthetic datasets representing various box dimensions and
pallet sizes. Results demonstrate significant improvements in packing density, stability,
and computational efficiency compared to heuristic-based methods. The feasibility mask
and reward shaping techniques effectively guided the agent toward optimal packing configurations,
achieving a high bin fill rate while avoiding structural inconsistencies.
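As a rough illustration of the quantities mentioned above, the bin fill rate can be computed as placed box volume over pallet volume, and a shaped reward can pay out the incremental gain in fill rate at each placement. The exact definitions, weights, and penalties used in the thesis are not reported on this page, so the values in the sketch below are assumptions.

def fill_rate(placed_boxes, pallet_dims):
    # Fraction of pallet volume occupied by axis-aligned, non-overlapping boxes.
    # placed_boxes: iterable of (length, width, height); pallet_dims: (length, width, height).
    pallet_volume = pallet_dims[0] * pallet_dims[1] * pallet_dims[2]
    placed_volume = sum(l * w * h for l, w, h in placed_boxes)
    return placed_volume / pallet_volume

def shaped_reward(prev_fill, new_fill, infeasible=False):
    # Dense reward: scaled incremental fill-rate gain, with a penalty for invalid actions.
    # The scale (10.0) and penalty (-1.0) are illustrative, not the thesis's values.
    if infeasible:
        return -1.0
    return 10.0 * (new_fill - prev_fill)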
This work contributes to the field of robotic automation by providing a scalable, data-driven
framework for solving complex packing problems, with potential applications in
manufacturing, logistics, and supply chain management.
Document type
Degree thesis
(Laurea magistrale)
Thesis author
Garooge, Ammar
Thesis supervisor
Thesis co-supervisor
School
Degree programme
Degree programme regulations
DM270
Keywords
Neural networks, Robotic palletizing, Packing optimization, Bin packing, Deep reinforcement learning, Automation, Logistics, Data-driven framework, Sequential decision making
Thesis defence date
24 March 2025
URI