LHC, ATLAS and the Grid

The Large Hadron Collider (LHC) experiments, in particular ATLAS, use a network of computer centres to process the billions of events coming from the proton-proton collisions. This collection of centres is seen as a single enormous distributed machine thanks to the use of a software layer called “GRID middleware”. The Worldwide LHC Computing Grid (WLCG), which has its headquarters at the CERN laboratory (Geneva), coordinates all the experiments’ computing resources and promotes common policies and tools to be used among all participating centres.

The amount of data transferred to the GRID for processing is huge: dozens of PB (1 PB = 1000 terabytes) a year, making this one of the most challenging data-processing efforts in the history of science. There are almost a hundred computer centres on the ATLAS GRID, distributed all over the world. At present these centres are classified into two categories according to the services they provide: Tier-1 centres, which keep early-stage data, and Tier-2 centres, used mainly for simulations and for the storage of final-stage data.

The second period of data-taking (Run 2), which started in June 2015, brought a large increase in data volume due to higher beam luminosity and energy. The software has been adapted to process data under these new conditions. In particular, jobs evolved from sequential to parallel execution to take advantage of the new multi-core CPUs while keeping the memory footprint under control. These modifications require a change in the local batch-system configuration so that both types of jobs (single-core and multi-core) can run simultaneously, allowing efficient use of the resources.
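As a concrete illustration (not the configuration of any particular ATLAS site), a batch system such as HTCondor, widely used at WLCG centres, can be set up with “partitionable slots” so that a worker node’s cores are handed out dynamically to single-core and multi-core jobs. A minimal sketch, with hypothetical resource values:

    # Worker-node configuration: one partitionable slot owning all of
    # the node's resources; the batch system carves it into dynamic
    # slots matching each job's request (1 core, 8 cores, ...).
    NUM_SLOTS = 1
    NUM_SLOTS_TYPE_1 = 1
    SLOT_TYPE_1 = cpus=100%, memory=100%, disk=100%
    SLOT_TYPE_1_PARTITIONABLE = TRUE

    # Job side: a multi-core job simply asks for more cores in its
    # submit file; a single-core job omits request_cpus (default 1).
    request_cpus = 8
    request_memory = 16 GB

With a scheme of this kind the same pool of worker nodes serves both workloads: cores left over after placing a multi-core job can still be matched to single-core jobs, instead of sitting idle in a dedicated multi-core partition.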

[Image: Cables in the ATLAS cavern]