Centre de Calcul de l'Institut National de Physique Nucléaire et de Physique des Particules
IN2P3 Computing Center
Visit of NCSA representatives, 2015-05-27
IN2P3 Computing Center
- Dedicated computing center (HTC)
- Pooling of resources: CC-IN2P3 federates the main computing resources for high-energy physics, nuclear physics and astroparticle physics
- Some openings to other sciences
- Staff: 84 people, including 63 IT engineers and 2 researchers
- 2015 budget: ~7.5 M€ (excluding salaries)
- Partnership with CEA/DSM/IRFU
Experiments using CC-IN2P3 include LHC, HESS, Auger, Planck, AMS, ANTARES, Supernovae, VIRGO…
~70 groups are using CC-IN2P3 resources
CPU: ~26,000 vcores, ~250 kHEPSpec06, ~240 TFlops
Disk storage: standard-performance disk = 14 PB; high-performance disk (for GPFS) = 1.7 PB
Tapes: storage used on magnetic tape: 25 PB, out of 340 PB nominal capacity
Backup (TSM): stored volume of about 5.5 PB
Users: distribution and CPU consumption (May 2015)
[Pie chart, CPU share by domain: Particle Physics 53.9%, Astroparticles + Neutrino 25.2%, Nuclear Physics 14.3%, with Theory, Hadron Physics, IN2P3 labs and multidisciplinary openings sharing the remaining ~6.5%]
[Pie chart, Top HEP: ATLAS 46.7%, LHCb 30.2%, CMS 23.0%, others 0.1%]
[Pie chart, Top 11 Astroparticle + Neutrino sharing: HESS 23.4%, GLAST 22.3%, Planck 14.2%, Antares 14.1%, AMS 11.3%, km3net 4%, lisaf 4%, Virgo 2%, CTA 2%, dchooz 2%, Auger 1%]
Organization
4 IT teams:
- Development: tools, services, software, reporting & BI, databases
- Infrastructure: server installation and setup, base software (OS, grid components, middleware, cloud…), storage systems (dCache, GPFS, HPSS…), network (WAN & LAN)
- Operation: QoS, day-to-day operation, user support:
  -> dedicated (LHC, Euclid, LSST): to give the "best services", to test solutions, to adapt the CC offer and vice versa
  -> generic: for experiments without specific needs
- Research team: strengthens links between computer scientists and users of large-scale DCIs in production; brings HPC expertise to physics collaborations
-> Flexibility and adaptability:
- hardware may be (re-)assigned to specific needs
- (most) money is fungible
- "Task force" set up for LSST: Fabio (lead), Yvan (deputy), plus Rachid (user support) and Infrastructure, Ops & Dev…
WLCG and LCG-France: a French cloud
LCG-France share in WLCG
2015 French T1 pledges: 107 kHS06 CPU, 18 PB tape, 10 PB disk.
[Pie charts, 2015 T1 pledged CPU, disk and tape capacity by country (Canada, France, Germany, Italy, Netherlands, Nordic, Korea, Russia, Spain, Taïwan, UK, USA); France holds roughly 10% of each]
[Chart, evolution of LCG-France T2-T3 resources outside CC-IN2P3, 2006-2014: disk [TB] and CPU [HEP-SPEC06]]

% of 2015 pledges           CPU     Disk    Tapes
French T1 / ∑ T1            10.15   10.29   10.11
French T1+T2s / ∑ T1+T2s     9.57   10.00   10.11
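As an illustrative cross-check (not from the slides themselves), the French T1 pledge and its share percentage together imply the WLCG-wide total; a minimal sketch:

```python
# Back out the WLCG-wide 2015 T1 CPU pledge implied by the two French
# numbers on this slide (illustrative arithmetic only).
french_cpu_khs06 = 107     # French T1 pledged CPU
french_share = 0.1015      # French T1 / sum of all T1 pledges (CPU)

total_cpu = french_cpu_khs06 / french_share
print(f"Implied total T1 CPU pledge: ~{total_cpu:.0f} kHS06")  # ~1054 kHS06
```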
Network connectivity for WLCG and LCG-France
- 2013 RENATER report: LCG accounts for 46% of total French academic traffic
- [Charts: traffic Lyon <-> CERN; traffic Lyon <-> general Internet]
- ~50 PB of data per year (in + out)
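As a back-of-the-envelope check (illustrative arithmetic, not from the slides), ~50 PB/year corresponds to a sustained average of roughly 13 Gb/s:

```python
# Average bandwidth implied by ~50 PB/year of traffic (in + out).
# The 50 PB/year figure is from the slide above; the rest is arithmetic.
PB = 1e15                           # bytes per petabyte (decimal convention)
volume_bytes = 50 * PB              # annual traffic volume
seconds_per_year = 365.25 * 24 * 3600

avg_bits_per_s = volume_bytes * 8 / seconds_per_year
print(f"Average rate: {avg_bits_per_s / 1e9:.1f} Gb/s")  # ~12.7 Gb/s
```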
French academic network and IN2P3 usage
Data: Outlook for HL-LHC (Predrag Buncic, ECFA Workshop, Aix-les-Bains, October 3, 2013)
- Very rough estimate of new RAW data per year of running, using a simple extrapolation of current data volume scaled by the output rates.
- To be added: derived data (ESD, AOD), simulation, user data…
[Bar chart: RAW data per year in PB (0-450), Run 1 to Run 4, for CMS, ATLAS, ALICE and LHCb]
CPU: Online + Offline (Predrag Buncic, ECFA Workshop, Aix-les-Bains, October 3, 2013)
- Very rough estimate of new CPU requirements for online and offline processing per year of data taking, using a simple extrapolation of current requirements scaled by the number of events.
- Little headroom left; we must work on improving the performance.
[Chart: CPU needs in MHS06 (0-160), Run 1 to Run 4, online (ATLAS, CMS, LHCb, ALICE) plus GRID, annotated with "Moore's law limit", "Historical growth of 25%/year" and "Room for improvement"]
Run 2: 2015-18; Run 3: 2020-22; Run 4: 2025-28. Beyond Run 2: a huge effort is needed on software, switching to and/or complementing with other technologies.
Estimated needs of future (and ongoing) experiments
- LSST (Large Synoptic Survey Telescope): reprocessing of half of the data; CC-IN2P3 will host all the processed data.
- EUCLID: CC-IN2P3 will be one of the 8 "Science Data Centers" of this European space mission and should provide 30% of the resources (CPU and storage).
- CTA: CC-IN2P3 may be led to operate for it, also by 2030, at least 25% of 88 kHS06 of CPU, 207 PB of disk and 507 PB of tape.
- LHC: growth of ~+25% per year implies, at the end of LHC Run 3, a T1 capacity of:

In 2024      CPU [kHS06]   Disk [PB]   MSS [PB]
LHC          1,000         80          150
x 2014       5             6           6

In 2030      CPU [kHS06]   Disk [PB]   MSS [PB]
LSST         2,400         100         266
EUCLID       67            150         52
CTA          22            52          127
∑            2,489         302         445
x 2014       12            22          18

[Chart: cumulated capacities by year for the 4 experiments, 2015-2030: CPU in kHS06 (0-5,000), disk and tapes in PB (0-800)]
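A minimal sketch of the LHC line above, assuming the "~+25% per year" historical growth compounds annually from a 2014 baseline; 7-8 years of such growth lands in the x5-x6 range quoted in the 2024 table:

```python
# Compound growth at the historical ~25%/year from a 2014 baseline
# (illustrative sketch; the table above quotes x5 for CPU and x6 for
# disk and MSS at the end of LHC Run 3).
growth = 1.25
for years in (7, 8):
    print(f"after {years} years: x{growth ** years:.1f}")
# after 7 years: x4.8
# after 8 years: x6.0 -- in line with the quoted x5-x6 multipliers
```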
Computing rooms: VIL2
- Base infrastructure designed to fit the final configuration (cabling, piping…)
- Everything else is modular (transformers, UPS, chillers, etc.) and can be installed later as needs grow
- Best PUE: ~1.47
- Capacity: 80 racks (28 installed, 52 left); 1 rack = 730 TB or 20.5 kHS06 (2014)
- 2 × 250 m² to use
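A minimal sketch, using only the per-rack figures above (2014 densities, which will of course evolve with hardware generations), of the headroom the 52 free racks represent:

```python
# Headroom in VIL2 at 2014 rack densities (from the slide:
# 1 rack = 730 TB or 20.5 kHS06; 52 of 80 racks still free).
racks_free = 52
tb_per_rack = 730          # a rack filled with storage
khs06_per_rack = 20.5      # a rack filled with compute

# A rack holds either storage or compute, so these are two separate ceilings.
print(f"All-storage ceiling: {racks_free * tb_per_rack / 1000:.1f} PB")   # ~38.0 PB
print(f"All-compute ceiling: {racks_free * khs06_per_rack:.0f} kHS06")    # ~1066 kHS06
```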
Conclusion
- The French central point for storing and computing on large datasets
- Main backbone of the French NGI
- The main user of the French NREN
- Acts as a leader in driving network bandwidth increases
- Good connections
- Accustomed to working in international collaborations: WLCG, EU projects, consortium of similar HEP centers (EU-T0), LIA (China, Japan), and with other communities (Humanities, Biomedical…)
Thank you