Abstract
The absence of in-domain labeled data hinders the applicability of powerful deep neural networks. Unsupervised Domain Adaptation (UDA) methods have emerged to exploit such models even when labeled data is not available in the target domain. All these techniques aim to mitigate the distribution shift that affects models trained on one dataset and tested on a different one. However, most works do not consider relationships among tasks to further boost performance. In this thesis, we study a recent method called AT/DT (Across Tasks Domain Transfer), which combines Domain Adaptation with Task Adaptation by leveraging the correlation between two popular Computer Vision tasks: Semantic Segmentation and Monocular Depth Estimation. Inspired by the Domain Adaptation literature, we propose several extensions to the original work and show how they enhance the framework's performance. Our contributions operate at different levels: we first study how different architectures affect the transferability of features across tasks; we then further improve performance by deploying Adversarial training; finally, we explore the possibility of replacing Depth Estimation with popular Self-supervised tasks, demonstrating that two tasks must be semantically related for features to transfer between them.
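A minimal conceptual sketch of the across-task feature transfer at the core of AT/DT is given below (PyTorch-style code written for this summary; the module names, feature shapes, and L1 alignment objective are illustrative assumptions, not the thesis implementation). The idea: a transfer network is fitted on the source domain, where both tasks are supervised, to map depth-encoder features into segmentation-encoder features; on the target domain, the segmentation decoder is then applied to the transferred depth features.

import torch
import torch.nn as nn

class FeatureTransfer(nn.Module):
    # Hypothetical transfer network mapping depth-encoder features
    # to segmentation-encoder features (channel count is illustrative).
    def __init__(self, channels=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, feat_depth):
        return self.net(feat_depth)

def fit_transfer_step(depth_encoder, seg_encoder, transfer, optimizer, images):
    # Source-domain step: align G(E_depth(x)) with E_seg(x);
    # the two task encoders are kept frozen.
    with torch.no_grad():
        f_depth = depth_encoder(images)
        f_seg = seg_encoder(images)
    loss = nn.functional.l1_loss(transfer(f_depth), f_seg)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def segment_target(depth_encoder, transfer, seg_decoder, images):
    # Target-domain inference: depth features -> transferred features -> segmentation decoder.
    with torch.no_grad():
        return seg_decoder(transfer(depth_encoder(images)))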
Document type
Degree thesis
(Master's degree)
Thesis author
Cardace, Adriano
Thesis supervisor
Thesis co-supervisor
School
Degree programme
Programme regulations
DM270
Keywords
Domain Adaptation, Task Adaptation, Deep Learning, Computer Vision
Thesis defence date
12 March 2020
URI