The full text is not available by choice of the author.

## Abstract

This thesis focuses on distributed aggregative optimization, a recently emerged framework in which a network of agents cooperates to solve a global optimization problem characterized by local cost functions depending both on the local decision variable and on an aggregation of all of them (e.g., the mean of the agents' decision variables). The focus is on a so-called personalized version of this scenario, in which each local cost consists of a (possibly) time-varying known term and a fixed unknown part, respectively representing the so-called engineering function (concerning measurable quantities such as energy or time) and the user's satisfaction (concerning human preferences whose model cannot be known in advance). To compensate for the lack of knowledge about the unknown part of each cost, this work enhances an existing distributed optimization scheme with an automatic differentiation procedure applied to neural networks. In more detail, the designed algorithm combines two independent loops devoted to performing the optimization and learning steps. In turn, the distributed optimization algorithm embeds a consensus mechanism aimed at reconstructing, at each agent, the global information, namely the aggregative variable and the gradient of the cost function with respect to the aggregative variable. Finally, numerical examples involving a quadratic scenario are reported to show the effectiveness of the proposed method in comparison with existing algorithms.
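The consensus mechanism mentioned in the abstract can be illustrated with a minimal sketch of an aggregative tracking iteration on a quadratic scenario. This is not the thesis' algorithm (which additionally learns the unknown cost term with neural networks); it is an assumed, simplified instance with fully known quadratic costs, where each agent keeps two trackers: one for the aggregative variable (here, the mean of the decision variables) and one for the average gradient of the cost with respect to it. All names, constants, and the cost choice below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5                              # number of agents
r = rng.normal(size=N)             # local targets for the decision variables
b = rng.normal(size=N)             # local targets for the aggregative variable

# Ring graph with doubly stochastic (Metropolis-style) weights.
W = np.zeros((N, N))
for i in range(N):
    for j in ((i - 1) % N, (i + 1) % N):
        W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

# Local cost: f_i(x_i, sigma) = 0.5*(x_i - r_i)^2 + 0.5*(sigma - b_i)^2,
# with aggregation sigma = mean(x), i.e. phi_i(x_i) = x_i.
x = rng.normal(size=N)
s = x.copy()                       # tracker of the aggregative variable sigma
y = s - b                          # tracker of the average gradient w.r.t. sigma
alpha = 0.05                       # step size

for _ in range(5000):
    # Gradient step using the locally reconstructed global information (s, y).
    x_new = x - alpha * ((x - r) + y)
    # Consensus on the aggregative-variable tracker + local innovation.
    s_new = W @ s + (x_new - x)
    # Consensus on the gradient tracker + local innovation.
    y = W @ y + (s_new - b) - (s - b)
    x, s = x_new, s_new

# Closed-form optimum of this quadratic problem, for comparison:
# sigma* = (mean(r) + mean(b))/2 and x_i* = r_i + (mean(b) - mean(r))/2.
x_star = r + (b.mean() - r.mean()) / 2.0
print(np.max(np.abs(x - x_star)))
```

Because the weight matrix is doubly stochastic, the mean of the trackers `s` stays equal to the mean of `x` at every iteration, which is what allows each agent to converge to the true aggregative variable while only exchanging information with its neighbors.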


Document type

Degree thesis
(Master's degree)

Thesis author

Brumali, Riccardo

Thesis advisor

Thesis co-advisor

School

Degree programme

Degree programme regulations

DM270

Keywords

online optimization, distributed optimization, deep learning, neural network, users' feedback

Thesis defence date

14 October 2023

URI
