Delvecchio, Giovanni Pio (2024). Zero-Shot Warning Generation for Misinformative Multimodal Content Detection. [Laurea magistrale], Università di Bologna, Corso di Studio in Artificial intelligence [LM-DM270].
   
  
  
        
        
	
  
  
  
  
  
  
  
    
  
    
    
  
  
    
Abstract
The widespread prevalence of misinformation poses serious societal concerns. Out-of-context misinformation, which pairs authentic images with false text, is particularly insidious because it can easily deceive audiences. Existing detection methods primarily assess the consistency between images and text but often fall short in providing sufficient explanations for their assessments, and such explanations are crucial for effectively debunking misinformation. We have designed a model that detects multimodal misinformation through cross-modality consistency checks and surpasses current state-of-the-art models in both accuracy and training time. Furthermore, we have developed a lightweight model that achieves better accuracy than state-of-the-art models while using only one-third of the parameters. In addition, we devised a dual-purpose zero-shot learning task for generating contextualized warnings, enabling automatic debunking; the result is enhanced user comprehension and more informed decision-making. Finally, qualitative and human evaluation of the generated warnings sheds light on both the limitations and the potential of our proposed approach.
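
To make the idea of a cross-modality consistency check concrete, the sketch below scores how well a caption matches an image with a pretrained CLIP encoder. It is a minimal, hypothetical illustration: the model name, threshold, and decision rule are assumptions for exposition and do not reproduce the architecture or training procedure developed in the thesis.

    # Minimal sketch of an image-caption consistency check with a pretrained
    # CLIP encoder (illustrative only; not the thesis's proposed model).
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    def consistency_score(image_path: str, caption: str) -> float:
        """Cosine similarity between the image and caption embeddings."""
        image = Image.open(image_path)
        inputs = processor(text=[caption], images=image,
                           return_tensors="pt", padding=True)
        with torch.no_grad():
            out = model(**inputs)
        img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
        txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
        return float((img @ txt.T).item())

    # Captions scoring below a calibrated threshold would be flagged as
    # potentially out of context; the value here is a placeholder, not tuned.
    THRESHOLD = 0.25
    if consistency_score("photo.jpg", "Flooding in Venice, 2019") < THRESHOLD:
        print("Warning: this caption may be out of context for the image.")

In this simplified view, a low image-text similarity triggers a warning; the thesis goes further by also generating a contextualized explanation for why the pairing is suspect.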
     
    
     
  
  
    
    
Document type: Degree thesis (Laurea magistrale)
Thesis author: Delvecchio, Giovanni Pio
Thesis supervisor:
Thesis co-supervisor:
School:
Degree programme: Artificial intelligence [LM-DM270]
Programme regulations (Ordinamento CdS): DM270
Keywords: Multimodal Models, out of context detection, Explainable AI
Thesis defence date: 23 July 2024
URI:
   
  
      
      
     
   
  
  
  
  
  
    