Interpretation of the user interface of a domestic appliance using robotic vision

Ahmadli, Ismayil (2019) Interpretation of the user interface of a domestic appliance using robotic vision. [Master's thesis], Università di Bologna, Corso di Studio in Automation engineering / ingegneria dell'automazione [LM-DM270]. Full text not available.
The full text is not available at the author's request. (Contact the author)

Abstract

This master's thesis is part of a project conducted by Prof. Gianluca Palli in cooperation with a private company at the Laboratory of Automation and Robotics (LAR) of the University of Bologna. The project aims to control the washing machines in a laundromat using a mobile-based manipulator and to visualize all information for the user in a comparative manner. The first goal of this thesis is to interpret the data shown on the user interface of the washing machine using Deep Learning and to compare it with the actual data acquired from the machine itself. The focus lies on presenting these two types of information, together with the result of the comparison, directly to the user in a GUI application. The second goal is to detect the washing machine and its parts in a given area using convolutional neural networks. Deep learning has dominated computer vision over the last few years, achieving top scores on many tasks in this field, with neural networks repeatedly pushing the frontier of visual recognition. After introducing basic notions of Deep Learning, and the MobileNet architectures in particular, Image Processing and Computer Vision concepts are presented. These concepts, such as homography matrix estimation, are used to automatically localize the regions of interest of the user interface in the scene frame without any external markers, and to recognize the washing program, options and functionalities with the help of the OpenCV library. Next, the training and validation procedures of MobileNetV2 for both visual recognition and object detection tasks are described. Results are reported after fine-tuning on sample data with the TF-Slim image recognition library and training the network for object detection with TensorFlow's Object Detection API. Finally, the loss functions on training and validation are reported, the results on the scene frame are shown visually in a systematic way, and further possible applications are briefly discussed.
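The abstract itself contains no code, but the homography-based localization it describes can be illustrated with a short sketch. As a minimal, self-contained example (not the thesis implementation, which uses SURF features and OpenCV's estimator), the function below estimates the 3x3 homography from point correspondences via the Direct Linear Transform, the same mathematical core behind OpenCV's `cv2.findHomography`, and uses it to project the corners of a reference region of interest onto the scene frame:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst with the DLT algorithm.
    src, dst: (N, 2) arrays of corresponding points, N >= 4, no 3 collinear."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two rows of the linear system A h = 0.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

def project(H, pts):
    """Apply homography H to (N, 2) points (homogeneous multiply + dehomogenize)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

In the thesis pipeline, `src` would be keypoint locations on a reference image of the user interface and `dst` the matched keypoints in the camera frame; projecting the reference ROI corners through the estimated `H` then localizes the display region without external markers.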

Document type
Thesis (Master's degree)
Thesis author
Ahmadli, Ismayil
Thesis supervisor
Thesis co-supervisor
School
Degree programme
Degree programme regulation
DM270
Keywords
Deep Learning, MobileNetV2, Homography, SURF, Pose estimation, train, data, the user interface
Thesis defence date
15 March 2019
URI

Other metadata

