Interpretable Prediction of Galactic Cosmic-Ray Short-Term Variations with Artificial Neural Networks

Sabbatini, Federico (2020) Interpretable Prediction of Galactic Cosmic-Ray Short-Term Variations with Artificial Neural Networks. Master's degree thesis (Laurea magistrale), Università di Bologna, Degree Programme in Ingegneria e Scienze Informatiche [LM-DM270], Cesena
Full-text documents available:
PDF document (Thesis)
Available under licence: Creative Commons Attribution - NonCommercial - NoDerivatives 3.0 (CC BY-NC-ND 3.0)

Download (8MB)

Abstract

Monitoring galactic cosmic-ray flux variations is crucial for space missions in which cosmic rays limit the performance of the on-board instruments. When short-term galactic cosmic-ray fluctuations cannot be studied on board, models capable of predicting these flux modulations are needed. Artificial neural networks are nowadays among the most widely used tools for solving a wide range of problems in disciplines including medicine, technology and business. All artificial neural networks are black boxes, i.e. their internal logic is hidden from the user. When this lack of explanation constitutes a problem, knowledge extraction algorithms can be applied to neural networks to obtain explainable models. This thesis describes the implementation and optimisation of an explainable model for predicting the short-term galactic cosmic-ray flux variations observed on board the European Space Agency mission LISA Pathfinder. The model is based on an artificial neural network that takes as input solar wind speed and interplanetary magnetic field intensity measurements gathered by the National Aeronautics and Space Administration missions Wind and ACE, orbiting near LISA Pathfinder. The knowledge extraction is performed by applying both the ITER algorithm and a linear regressor to the underlying neural network; ITER was selected after a thorough survey of the available literature. The model presented here provides explainable predictions with errors smaller than the LISA Pathfinder statistical uncertainty.
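The thesis itself extracts knowledge with the ITER algorithm from a network trained on real Wind/ACE and LISA Pathfinder data. As an illustration of the general surrogate idea only, the sketch below uses a toy fixed-weight network mapping (solar wind speed, IMF intensity) to a flux-variation value, then fits a linear surrogate to its outputs by ordinary least squares. All weights, sample ranges and function names are invented for this example and are not taken from the thesis.

```python
import math

def network(v, b):
    """Toy 1-hidden-layer tanh network (invented weights): stands in for the
    trained ANN mapping solar wind speed v [km/s] and IMF magnitude b [nT]
    to a galactic cosmic-ray flux variation."""
    h1 = math.tanh(0.004 * v - 0.5 * b)
    h2 = math.tanh(-0.002 * v + 0.3 * b)
    return 1.5 * h1 - 0.8 * h2

def fit_linear_surrogate(samples):
    """Fit y ~ c0 + c1*v + c2*b to the network's outputs via the normal
    equations: an interpretable global approximation of the black box."""
    A = [[0.0] * 3 for _ in range(3)]   # X^T X
    rhs = [0.0] * 3                     # X^T y
    for v, b in samples:
        y = network(v, b)
        x = [1.0, v, b]
        for i in range(3):
            rhs[i] += x[i] * y
            for j in range(3):
                A[i][j] += x[i] * x[j]
    # Solve the 3x3 system by Gaussian elimination with partial pivoting.
    for col in range(3):
        p = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        rhs[col], rhs[p] = rhs[p], rhs[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    coef = [0.0] * 3
    for i in range(2, -1, -1):
        coef[i] = (rhs[i] - sum(A[i][j] * coef[j]
                                for j in range(i + 1, 3))) / A[i][i]
    return coef

# Probe the network on a grid of plausible solar wind speeds (km/s)
# and IMF magnitudes (nT), then read off the interpretable coefficients.
samples = [(v, b) for v in range(300, 701, 50) for b in range(1, 11)]
c0, c1, c2 = fit_linear_surrogate(samples)
```

The coefficients c1 and c2 then summarise, in one readable equation, how the opaque network responds to each input; ITER goes further by partitioning the input space into hypercubes with piecewise-constant outputs, which this sketch does not attempt.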

Document type
Tesi di laurea (Master's degree thesis)
Thesis author
Sabbatini, Federico
Degree programme regulations
DM270
Keywords
knowledge extraction, explainable artificial intelligence, interpretable prediction, LISA Pathfinder, cosmic rays, artificial neural networks
Thesis defence date
17 December 2020