Full-text documents available:
Abstract
Automated decision-making systems are rapidly permeating socially sensitive domains such as finance, healthcare, justice, and autonomous mobility. While these data-driven solutions can increase efficiency, they can also perpetuate or amplify existing inequities whenever the underlying algorithms exhibit unfair behavior. This thesis provides a systematic investigation of algorithmic fairness, clarifying multiple, often competing, formal definitions adopted in the literature and mapping them to practical risks of bias and discrimination that arise throughout the machine-learning pipeline.
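The abstract refers to competing formal definitions without stating them. For concreteness, two standard group-fairness criteria from the literature are sketched below in our own notation, for a binary classifier \hat{Y} and protected attribute A; these are illustrative and not necessarily the exact set covered in the thesis.

    % Demographic parity: equal positive-prediction rates across groups.
    \[ P(\hat{Y} = 1 \mid A = a) = P(\hat{Y} = 1 \mid A = b) \qquad \forall\, a, b \]
    % Equalized odds: equal true- and false-positive rates across groups.
    \[ P(\hat{Y} = 1 \mid A = a, Y = y) = P(\hat{Y} = 1 \mid A = b, Y = y) \qquad \forall\, a, b,\ y \in \{0, 1\} \]

These criteria genuinely compete: when base rates P(Y=1 | A=a) differ across groups, a classifier can satisfy both only if its true- and false-positive rates coincide, i.e. if its predictions carry no information about Y.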
After surveying the main sources of bias (data imbalance, historical prejudice, model opacity, and feedback loops), the work reviews mitigation strategies grouped into three families: pre-processing (data repair and re-sampling), in-processing (fairness-aware losses, constraints, and regularizers), and post-processing (prediction adjustment and explanation tools). Building upon these foundations, the thesis introduces FairLib, a modular, open-source library designed to address limitations of existing fairness toolkits by unifying bias-diagnosis metrics and mitigation algorithms behind a consistent API. FairLib is model-agnostic, integrates with popular ML frameworks, and supports reproducible experimentation through configurable pipelines.
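The abstract does not expose FairLib's actual interface, so the sketch below only illustrates the pre-processing family it mentions, using the classic Kamiran-Calders reweighing scheme with NumPy and scikit-learn. Every name here (reweighing_weights, the variables X, y, a) is ours for illustration and is not FairLib's API.

    # Illustrative pre-processing mitigation (Kamiran-Calders reweighing);
    # a generic sketch, not FairLib's actual interface.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def reweighing_weights(a, y):
        """Per-(group, label) weights w = P(A=g)P(Y=l) / P(A=g, Y=l), so the
        protected attribute and the label become statistically independent
        in the reweighted training distribution."""
        a, y = np.asarray(a), np.asarray(y)
        w = np.ones(len(y), dtype=float)
        for g in np.unique(a):
            for lbl in np.unique(y):
                cell = (a == g) & (y == lbl)
                if cell.any():  # guard against empty (group, label) cells
                    w[cell] = (a == g).mean() * (y == lbl).mean() / cell.mean()
        return w

    # Usage on a tabular dataset with protected attribute `a`:
    # clf = LogisticRegression(max_iter=1000)
    # clf.fit(X, y, sample_weight=reweighing_weights(a, y))

An in-processing approach would instead bake the fairness constraint into the training objective, while a post-processing approach would adjust decision thresholds per group after training; reweighing has the advantage of working with any downstream model that accepts sample weights.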
A preliminary evaluation on canonical benchmark datasets shows that selected FairLib pipelines can reduce unfairness while leaving predictive accuracy broadly unchanged. Although limited to a modest set of benchmarks, these findings suggest that systematic fairness interventions are achievable without prohibitive trade-offs.
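The abstract reports the fairness/accuracy trade-off only qualitatively. As a hedged sketch of how such a claim is typically quantified (metric names follow common usage, not necessarily FairLib's or the thesis's), one can track a group-fairness gap alongside accuracy before and after an intervention:

    # Generic fairness/accuracy measurement in the spirit of the evaluation
    # described above; not taken from the thesis.
    import numpy as np

    def demographic_parity_difference(y_pred, a):
        """Largest gap in positive-prediction rates between any two groups."""
        y_pred, a = np.asarray(y_pred), np.asarray(a)
        rates = [y_pred[a == g].mean() for g in np.unique(a)]
        return max(rates) - min(rates)

    def accuracy(y_true, y_pred):
        return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

    # A pipeline "reduces unfairness at broadly unchanged accuracy" when the
    # parity gap shrinks while accuracy stays within a small tolerance.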
By coupling a critical analysis of fairness concepts with a practical, extensible toolkit, this thesis aims to foster greater transparency and accountability in AI systems and help practitioners deploy models that respect fundamental principles of equity.
Document type
Degree thesis
(Laurea magistrale)
Thesis author
Di Zio, Valerio
Thesis supervisor
Thesis co-supervisor
School
Degree programme
Degree programme regulations
DM270
Keywords
bias mitigation, machine learning, fairlib, python, ai fairness
Thesis defence date
17 July 2025
URI