T: Particle Physics Division (Fachverband Teilchenphysik)
T 66: Grid Computing
T 66.1: Group Report
Tuesday, 28 March 2017, 16:45–17:05, JUR 372
Computing strategy to cope with the upcoming massive HEP and HI data collection — •Thomas Kreß1 and Kilian Schwarz2 — 1RWTH Aachen University, Physics Institute III B — 2GSI, Helmholtzzentrum für Schwerionenforschung, Darmstadt
The LHC scientific program has led to numerous important physics results. This would not have been possible without the efficient processing of petabytes of data using the Worldwide LHC Computing Grid (WLCG). In the running periods following the upcoming accelerator and detector upgrades, a large increase in the data rate is expected. In addition, other large experiments such as Belle II and the FAIR collaborations will also record large amounts of data in the coming years. So far, the LHC computing strategy, based on Grid computing, i.e. the distribution of data and CPU resources over a few hundred dedicated sites, has met these challenges. However, to cope with substantially increased data volumes and correspondingly higher CPU requirements, new techniques such as cloud computing and the use of opportunistic resources are necessary. In parallel, a reorganisation of the interplay between the computing sites is being addressed by the evolving computing models of the experiments concerned. Recently, the Technical Advisory Board of the WLCG German Tier-1 site GridKa in Karlsruhe organised a meeting aimed at identifying guidelines for keeping German HEP and heavy-ion computing excellent under future requirements. In a follow-up meeting, working groups were launched to organise the work on these topics effectively. The presentation will address the challenges, the German strategy, and the current status of the work packages.
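To illustrate the Grid-computing idea underlying this strategy, the distribution of data over a hierarchy of sites, the following is a minimal Python sketch of tiered replica placement. All site names, tier assignments, capacities, dataset sizes, and the greedy placement policy are illustrative assumptions for this sketch; they do not reflect any experiment's actual computing model or real WLCG parameters.

from dataclasses import dataclass, field

@dataclass
class Site:
    # Toy model of a Grid site; numbers below are invented for illustration.
    name: str
    tier: int            # 0 = host lab, 1 = national centre, 2 = institute cluster
    capacity_tb: float   # assumed free disk capacity in TB
    stored: list = field(default_factory=list)

    def free_tb(self) -> float:
        return self.capacity_tb - sum(size for _, size in self.stored)

def place_replicas(dataset: str, size_tb: float, sites: list, n_replicas: int = 2) -> list:
    """Greedily place n_replicas copies of a dataset, preferring higher
    tiers and then the sites with the most free space (a much-simplified
    placement policy, not an actual WLCG algorithm)."""
    candidates = sorted(sites, key=lambda s: (s.tier, -s.free_tb()))
    placed = []
    for site in candidates:
        if len(placed) == n_replicas:
            break
        if site.free_tb() >= size_tb:
            site.stored.append((dataset, size_tb))
            placed.append(site.name)
    return placed

# Hypothetical site list; GridKa appears only because the talk mentions it.
sites = [
    Site("CERN-T0", 0, 100.0),
    Site("GridKa-T1", 1, 50.0),
    Site("Aachen-T2", 2, 20.0),
    Site("GSI-T2", 2, 20.0),
]

for ds, size in [("run2017A", 30.0), ("run2017B", 15.0)]:
    print(ds, "->", place_replicas(ds, size, sites))

Running the sketch places two replicas of each toy dataset on the emptiest, highest-tier sites with room to spare; the point is only to make the "data and CPUs distributed over dedicated sites" idea concrete, not to model real placement decisions.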