Content Delivery Network solutions for the CMS experiment: the evolution towards HL-LHC
F.M. Josep*, C. Pérez, A. Sikora, P. Serrano on behalf of the CMS Collaboration
Published on: October 29, 2024
Abstract
The Large Hadron Collider (LHC) at CERN in Geneva is undergoing a significant upgrade in anticipation of a tenfold increase in proton-proton collisions expected in its forthcoming high-luminosity phase, starting by 2029. This necessitates an expansion of the World-Wide LHC Computing Grid (WLCG) within a constant budgetary framework. While technological advancements offer some relief for the expected increase, numerous research and development projects are underway. Their aim is to bring future resource needs to manageable levels and to provide cost-effective solutions for handling the expanding volume of generated data. In the quest for optimised data access and resource utilisation, the LHC community is exploring Content Delivery Network (CDN) techniques. A comprehensive study focuses on implementing data caching solutions for the Compact Muon Solenoid (CMS) experiment, particularly in Spanish compute facilities, revealing benefits for user analysis tasks. The study details the implementation of a data caching system in the PIC Tier-1 compute facility, discussing its positive impact on CPU usage and exploring optimal requirements and cost benefits. Furthermore, it investigates the potential broader integration of this solution into the CMS computing infrastructure.
DOI: https://doi.org/10.22323/1.458.0041