Gattoni, Giacomo (2021). Improving the reliability of recurrent neural networks while dealing with bad data. [Master's degree thesis], Università di Bologna, Degree programme in Ingegneria informatica [LM-DM270]. Full-text document not available.
The full text is not available at the author's request. (Contact the author)
      Abstract
In practical applications, machine learning and deep learning models can struggle to generalize, especially when the training samples are noisy or limited in quantity. Standard neural networks do not guarantee monotonicity of the output with respect to the input features, so they lack interpretability and predictability when the input-output relationship is known a priori to be monotonic. This problem arises in the CPG (consumer packaged goods) industry, where there is no guarantee that a deep learning model will learn the increasing monotonic relationship between promotional mechanics and sales. To overcome this issue, the combined use of recurrent neural networks, a class of artificial neural networks specifically designed to process data structured as sequences, and lattice networks, conceived to guarantee monotonicity of the output with respect to the desired input features, is proposed. The proposed architecture proved more reliable when new samples are fed to the network, demonstrating its ability to infer how sales evolve with promotions even when trained on bad data.
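The monotonicity guarantee mentioned in the abstract can be illustrated with a minimal sketch (not the thesis code): a one-dimensional monotonic look-up table, the basic calibrator building block of lattice networks. Monotonicity is enforced structurally, by parameterizing the table as a base value plus non-negative increments, so the output can never decrease as the input grows regardless of the learned parameter values. All names and numbers below are hypothetical.

```python
import numpy as np

def monotonic_lut(x, base, raw_increments, keypoints):
    """Piecewise-linear calibrator that is non-decreasing in x.

    base           -- output value at the first keypoint
    raw_increments -- unconstrained parameters, one per interval
    keypoints      -- sorted input locations of the table entries
    """
    # Squaring (softplus or abs would also work) makes every increment >= 0,
    # so the cumulative table values can only go up as x grows.
    increments = raw_increments ** 2
    values = base + np.concatenate([[0.0], np.cumsum(increments)])
    # Linear interpolation between table entries.
    return np.interp(x, keypoints, values)

keypoints = np.array([0.0, 1.0, 2.0, 3.0])  # hypothetical promo-depth levels
base = 10.0                                  # hypothetical baseline sales
raw = np.array([0.5, -1.2, 0.8])             # sign of raw params is irrelevant

xs = np.linspace(0.0, 3.0, 50)
ys = monotonic_lut(xs, base, raw, keypoints)
assert np.all(np.diff(ys) >= 0)              # output never decreases
```

In a lattice network, calibrators like this are composed with a multi-dimensional lattice layer; in the architecture described above, such monotonic layers would sit on top of an RNN so that the constrained features (e.g. promotional mechanics) provably push sales in one direction only.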
     
    
  
  
    
    
Document type: Degree thesis (Master's degree)
Thesis author: Gattoni, Giacomo
Thesis supervisor:
School:
Degree programme:
Degree regulation (Ordinamento CdS): DM270
Keywords: Neural networks, artificial neural networks, ANN, deep learning, DL, machine learning, ML, recurrent neural networks, RNN, long short-term memory, LSTM, gated recurrent unit, GRU, monotonicity constraints, lattice, look-up table, LUT, feature selection, consumer packaged goods, CPG
Thesis defence date: 11 March 2021
URI:
   
  
  
  
  
  
  
  
    