Full-text documents available:
PDF document (Thesis)
Full-text accessible only to institutional users of the University
Available under licence: Except where the author grants broader permissions, the thesis may be freely consulted, and a copy may be saved and printed strictly for personal study, research, and teaching purposes; any direct or indirect commercial use is expressly forbidden. All other rights to the material are reserved.
Download (406kB)
Abstract
This work provides a methodological approach to fairness and bias mitigation in the design and development of data-driven methods. A central focus is the proposal and implementation of an innovative Fair-by-Design workflow that integrates bias mitigation strategies at the level of the data, the algorithms (both well-known and newly proposed ones), and the decision-making process.
The study adopts a broad perspective, applying several algorithms to a single dataset with the aim of establishing equitable and unbiased applications of data-driven algorithms across various domains. The primary objective is to ensure the general, equitable, and unbiased application of data-driven algorithms. The methodology systematically evaluates multiple bias mitigation strategies, with a critical emphasis on comparing their impact on the predictive accuracy of the algorithms. This approach yields practical insights into the trade-offs between fairness and accuracy, illustrating how different mitigation strategies can lead to different accuracy scores on the same dataset with the same models.
This thesis contributes to the ongoing discourse on fairness in machine learning and data-driven decision-making. The results offer guidance to stakeholders across sectors, helping them make informed decisions about algorithm deployment in order to promote fairness and minimize bias.
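The trade-off the abstract describes can be made concrete. Below is a minimal, self-contained sketch (not taken from the thesis; the data, predictions, and "mitigated" model are purely illustrative) that quantifies both sides of the trade-off with two common metrics: predictive accuracy and the demographic parity difference, i.e. the absolute gap in positive-prediction rates between two groups.

```python
# Illustrative sketch: quantifying a fairness/accuracy trade-off.
# Demographic parity difference = |P(yhat=1 | group A) - P(yhat=1 | group B)|;
# a smaller value means more balanced positive rates across groups.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_diff(y_pred, group):
    """Absolute gap in positive-prediction rate between groups A and B."""
    def rate(g):
        return sum(p for p, s in zip(y_pred, group) if s == g) / group.count(g)
    return abs(rate("A") - rate("B"))

# Toy data: true labels, group membership, and two hypothetical models.
y_true    = [1, 0, 1, 1, 0, 1, 0, 0]
group     = ["A", "A", "A", "A", "B", "B", "B", "B"]
biased    = [1, 1, 1, 1, 0, 1, 0, 0]  # over-predicts positives for group A
mitigated = [1, 0, 1, 0, 0, 1, 1, 0]  # balanced positive rates, more errors

for name, pred in [("biased", biased), ("mitigated", mitigated)]:
    print(name, accuracy(y_true, pred), demographic_parity_diff(pred, group))
```

On this toy data the biased model scores higher accuracy (0.875 vs 0.75) but has a large parity gap (0.75 vs 0.0), which is exactly the kind of comparison the thesis performs across mitigation strategies, datasets, and models.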
Document type
Degree thesis
(Master's degree)
Thesis author
Iannotta, Antonio
Thesis supervisor
Thesis co-supervisor
School
Degree programme
Degree programme regulations
DM270
Keywords
Fairness in AI, Machine Learning, Data Augmentation, Bias Addressing
Thesis defence date
15 March 2024
URI