Full-text documents available:
PDF document (Thesis)
Available under licence: Subject to any broader authorisations granted by the author, the thesis may be freely consulted, and a copy may be saved and printed for strictly personal purposes of study, research, and teaching, with any directly or indirectly commercial use expressly forbidden. All other rights to the material are reserved.
Download (6MB)
Abstract
In every domain of scientific research, comparing innovative solutions against the state of the art is crucial. This practice makes it possible to evaluate whether the system under examination outperforms the established reference, either comprehensively or in specific aspects. In various fields of computer science, tools have been developed to benchmark new and existing solutions. By contrast, in the domain of collective adaptive systems there is a conspicuous gap in software designed to facilitate such comparisons.
The primary objective of this thesis is to create a prototype of a benchmarking platform focused on Collective Adaptive Systems (CAS). By leveraging existing simulators, the aim is to establish a comprehensive framework for testing, validating, and comparing these dynamic systems.
The presented platform is designed to allow users to define benchmarks, execute them, and extract results of interest, all while preserving flexibility and extensibility.
This inherent adaptability makes it possible to incorporate additional simulators into the testbed.
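To make the idea concrete, the following is a minimal, purely hypothetical Kotlin sketch of how a benchmark definition and a pluggable simulator interface could look. None of the names (Benchmark, Simulator, Metric, StubSimulator, runBenchmark) come from the thesis or from Alchemist; they are invented for illustration only, and the thesis should be consulted for the actual design.

```kotlin
// Hypothetical sketch of a CAS benchmarking API; all names are invented
// for illustration and do not reflect the thesis' actual implementation.

// A metric extracted from a finished simulation run.
data class Metric(val name: String, val value: Double)

// Minimal contract a simulator backend would have to satisfy,
// so that further simulators (e.g. Alchemist) can be plugged in.
interface Simulator {
    val name: String
    fun run(scenario: String, steps: Int): List<Metric>
}

// A stub backend standing in for a real simulator integration.
class StubSimulator(override val name: String) : Simulator {
    override fun run(scenario: String, steps: Int): List<Metric> =
        listOf(
            Metric("executionTimeMs", steps * 0.42),   // placeholder values
            Metric("messagesExchanged", steps * 3.0)
        )
}

// A benchmark couples a scenario with the simulators it should run on.
data class Benchmark(
    val scenario: String,
    val steps: Int,
    val simulators: List<Simulator>
)

// Execute the benchmark on every registered simulator and collect results.
fun runBenchmark(benchmark: Benchmark): Map<String, List<Metric>> =
    benchmark.simulators.associate { sim ->
        sim.name to sim.run(benchmark.scenario, benchmark.steps)
    }

fun main() {
    val benchmark = Benchmark(
        scenario = "self-organising gradient",
        steps = 1000,
        simulators = listOf(StubSimulator("alchemist"), StubSimulator("other-sim"))
    )
    runBenchmark(benchmark).forEach { (sim, metrics) ->
        println("$sim -> $metrics")
    }
}
```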
An experiment was conducted to validate the framework's anticipated functionalities and to understand its strengths and weaknesses. This analysis serves to identify areas for future improvement of the tool.
Document type: Degree thesis (Master's degree)
Thesis author: Penazzi, Paolo
Thesis supervisor:
Thesis co-supervisor:
School:
Degree programme:
Degree programme regulations: DM270
Keywords: Aggregate Programming, Collective Adaptive Systems, Alchemist
Thesis defence date: 15 March 2024
URI