Full-text documents available:
Abstract
In the context of industrial robotics, the manipulation of rigid objects has been studied in depth; handling deformable objects, however, remains a major challenge. Moreover, thanks to new techniques introduced in the object-detection literature, the use of visual data is becoming increasingly popular among researchers. This thesis studies how to exploit visual data to detect the end-point of a deformable linear object. A deep learning model is trained to perform the object-detection task. First, the basics of neural networks are reviewed to clarify the mechanism behind object detection. Then, a state-of-the-art object-detection algorithm, YOLOv3, is reviewed so that it can be used to its full potential. Following that, the collection of visual data is explained, together with several points that can improve the data-gathering procedure. After the data-annotation process is clarified, the model is trained and then tested. The trained model localizes the end-point; this information can be used directly by the robot to perform tasks such as pick-and-place, or to obtain further information on the shape of the object.
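The abstract's final step is turning a detection into coordinates the robot can act on. The thesis text here includes no code, so the following is only an illustrative sketch: it assumes a single "end-point" class and YOLO-style output rows `[cx, cy, w, h, objectness, class_score]` with values normalized to the image size, and picks the highest-confidence detection as a pixel-space box.

```python
import numpy as np

def best_endpoint_box(detections, img_w, img_h, conf_thresh=0.5):
    """Pick the highest-confidence 'end-point' detection from YOLO-style
    rows [cx, cy, w, h, objectness, class_score] (normalized 0-1) and
    convert it to a pixel-space (x, y, w, h) box with the top-left corner.
    Returns None if no detection clears the confidence threshold."""
    best, best_conf = None, conf_thresh
    for cx, cy, w, h, obj, cls in np.asarray(detections, dtype=float):
        conf = obj * cls  # YOLO confidence = objectness * class probability
        if conf > best_conf:
            x = int((cx - w / 2) * img_w)   # center -> top-left corner
            y = int((cy - h / 2) * img_h)
            best = (x, y, int(w * img_w), int(h * img_h))
            best_conf = conf
    return best

# Hypothetical raw output: one strong and one weak candidate.
dets = [[0.5, 0.5, 0.2, 0.2, 0.9, 0.8],
        [0.1, 0.1, 0.1, 0.1, 0.3, 0.5]]
print(best_endpoint_box(dets, 640, 480))
```

In a real pipeline these rows would come from the trained YOLOv3 network (e.g. via a Darknet or OpenCV DNN forward pass), and non-maximum suppression would typically precede this selection; both are omitted here for brevity.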
  
  
    
    
Document type: Thesis (Laurea magistrale)
Thesis author: Roohi, Masood
Thesis supervisor:
Thesis co-supervisor:
School:
Degree programme:
Degree programme regulation (Ordinamento Cds): DM270
Keywords: deep learning, object detection, DLO, YOLOv3
Thesis defence date: 21 July 2020
URI:
  
  
  
  
  
  
    