Deployment of a data analysis workflow of the ATLAS experiment on HPC systems

Corchia, Federico Andrea Guillaume (2022) Deployment of a data analysis workflow of the ATLAS experiment on HPC systems. [Laurea magistrale], Università di Bologna, Corso di Studio in Physics [LM-DM270]
Full-text documents available:
PDF document (Thesis), 3 MB
Available under license: Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0)

Abstract

LHC experiments produce an enormous amount of data, estimated at the order of a few petabytes per year. Data management relies on the Worldwide LHC Computing Grid (WLCG) infrastructure for both storage and processing. In recent years, however, substantial additional resources have become available on High Performance Computing (HPC) farms, which typically consist of many computing nodes, each with a large number of processors. The large collaborations are working to use these resources as efficiently as possible, within the constraints imposed by their computing models (data distributed on the Grid, authentication, software dependencies, etc.). The aim of this thesis project is to develop a software framework that allows users to run a typical data analysis workflow of the ATLAS experiment on HPC systems. The developed analysis framework is to be deployed on the computing resources of the Open Physics Hub project and on the CINECA Marconi100 cluster, in view of the switch-on of the Leonardo supercomputer, foreseen for 2023.

Document type
Tesi di laurea (Master's degree, Laurea magistrale)
Thesis author
Corchia, Federico Andrea Guillaume
Thesis supervisor
Thesis co-supervisor
School
Degree programme
Physics [LM-DM270]
Curriculum
NUCLEAR AND SUBNUCLEAR PHYSICS
Degree programme regulation
DM270
Keywords
High Energy Physics, High Performance Computing, Grid Computing, ATLAS
Thesis defence date
15 July 2022