LHC Computing Resources
24 March 2000

ATLAS Computing organization

[Organization chart. Boxes shown: Computing Oversight Board; National Computing Board; Computing Steering Group; Physics; Technical Group; QC group; Architecture team; Event Filter; detector systems; and coordinators for simulation, reconstruction, and databases.]

Scales of Effort

The best benchmarks are the Tevatron collider experiments (CDF, D0). Scaling from there to the LHC:
- CPU: factor of 1000 (event complexity)
- Data volume: 10x to 100x
- User/developer community: 5x
- Distribution effort: 5x

The ATLAS Computing Model

Data sizes per event (CTP numbers):
- RAW: 1 MB (100 Hz)
- ESD: 100 kB (moving up)
- AOD: 10 kB
- TAG: 100 B

Data held at each tier (might be different for the first year(s)):
- Tier-0: RAW, ESD, AOD, TAG
- Tier-1: ESD, AOD, TAG
- Tier-2: AOD, TAG

The sketch after this list works out the annual volumes these sizes imply.
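As a cross-check, the 100 Hz rate and per-event sizes fix the yearly data volumes. A minimal sketch, assuming the canonical 1e7 live seconds per running year (a planning convention, not stated on the slide):

```python
# Sketch: annual data volumes implied by the CTP per-event sizes.
# Assumption (not on the slide): 1e7 live seconds per running year.
RATE_HZ = 100
LIVE_SECONDS = 1e7

event_sizes_bytes = {
    "RAW": 1e6,  # 1 MB
    "ESD": 1e5,  # 100 kB
    "AOD": 1e4,  # 10 kB
    "TAG": 1e2,  # 100 B
}

events_per_year = RATE_HZ * LIVE_SECONDS  # 1e9 events
for name, size in event_sizes_bytes.items():
    print(f"{name}: {events_per_year * size / 1e12:,.1f} TB/year")
# RAW: 1,000 TB/year (1 PB); ESD: 100; AOD: 10; TAG: 0.1
```
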
U.S. ATLAS Model as example

[Hierarchy diagram: the ATLAS CERN Computing Center (international) connects across the Atlantic to the US ATLAS Tier 1 Computing Center (national); this serves several US ATLAS Tier 2 Computing Centers (regional), which in turn serve Tier 3 computing (institutional) and, over the LAN, individual US ATLAS users.]

Data Grid Hierarchy

[Diagram: Tier 0 (CERN) at the top; a Tier 1 center (FNAL/BNL) below it; five Tier 2 centers below that; Tier 3 sites below the Tier 2s, with additional Tier 4 resources indicated at the bottom.]

ATLAS Milestones
- 2001: the number and locations of Tier-1 centers should be known
- 2002: the basic worldwide computing strategy should be defined
- 2003: typical sizes for Tier-0 and Tier-1 centers should be proposed
- 2003: the role of Tier-2 centers in the GRID should be known

Facilities Architecture: USA as Example
- US ATLAS Tier-1 Computing Center at BNL: national in scope, at ~20% of Tier-0 (see notes at end)
- US ATLAS Tier-2 Computing Centers: regional in scope, at ~20% of Tier-1; likely one of them at CERN
- US ATLAS institutional computing facilities
- US ATLAS individual desktop systems

U.S. ATLAS as example

Total US ATLAS facilities in ‘05 should include:
- 10,000 SPECint95 for re-reconstruction
- 85,000 SPECint95 for analysis
- 35,000 SPECint95 for simulation
- 190 TB/year of on-line (disk) storage
- 300 TB/year of near-line (robotic tape) storage
- Dedicated OC12 (622 Mbit/s) connectivity from the Tier-1 to each Tier-2
- Dedicated OC12 (622 Mbit/s) to CERN

The sketch below puts the OC12 figure in context.
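For context on the OC12 requirement, a small sketch of what one such link can sustain; the 50% average utilization is an illustrative assumption, and protocol overhead is ignored:

```python
# Sketch: sustained capacity of one dedicated OC12 link.
# Assumptions: 50% average utilization (illustrative), overhead ignored.
OC12_MBIT_S = 622
utilization = 0.5

mbytes_per_s = OC12_MBIT_S / 8 * utilization  # ~39 MB/s sustained
tb_per_year = mbytes_per_s * 3.15e7 / 1e6     # seconds/year -> TB/year
print(f"~{tb_per_year:,.0f} TB/year per link")
```

Even at half utilization a single link carries on the order of 1 PB/year, comfortably above the 300 TB/year of near-line growth quoted above.
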
US ATLAS: Integrated Capacities by Year

                                FY99  FY00  FY01  FY02  FY03  FY04  FY05  FY06
Operational Tier 2 facilities      -     -     1     2     6     6     6     6
CPU (kSI95)
  Tier 1                         0.2     1     1     3     6    17    50    83
  Tier 2                           -     -     1     3    12    30    89   154
  Total CPU                      0.2     1     2     6    18    47   140   237
Disk (TB)
  Tier 1                         0.2     0     2     5    13    34   100   169
  Tier 2                           -     -     1     3    12    28    89   147
  Total disk                     0.2     0     3     8    25    62   189   316
Tape (Tier 1, TB)
  Total tape                       1     5    11    20    34   101   304   607

Resource Estimates for the 1st Year

Assumptions:
- 100 Hz event rate
- 2 passes through reconstruction
- Low-luminosity running (1.0E+33)
- Two-pass calibration
- Year-2000 costing, adjusted by Moore’s law

Note: some estimates are “bottom-up”, using ATLAS Physics TDR numbers.

ATLAS and the RC Hierarchy

Intentions to set up a local Tier-1 have already been expressed in:
- Canada (ATLAS, Tier-1/2)
- France (LHC)
- Germany (LHC, or multinational at CERN?)
- Italy (ATLAS?)
- Japan (ATLAS, Tier-1/2)
- Netherlands (LHC)
- Russia (LHC)
- UK (LHC)
- USA (ATLAS)

CTP Estimate: Tier-1 Center

A Tier-1 RC should have at startup (at least):
- 30,000 SPECint95 for analysis
- 20,000 SPECint95 for simulation
- 100 TB/year of on-line (disk) storage
- 200 TB/year of near-line (mass) storage
- 100 Mbit/s connectivity to CERN

Assume no major raw-data processing or handling outside of CERN; re-reconstruction is done partially in the RCs.

Calibration Assumptions
- Muon system: 100 Hz of “autocalibration” data at 200 SI95/event; 2nd pass = 20 Hz for alignment
- Inner Detector: 10 Hz at 1 SI95/event for calibration (muon tracks); 2nd pass = alignment
- EM calorimeter: 0.2 Hz at 10 SI95/event (Z -> e+e-); 2nd pass = repeat analysis
- Hadronic calorimeter: 1 Hz at 100 SI95/event (isolated tracks); 2nd pass = repeat, with found tracks

Calibration Numbers
- CPU required: 24,000 SI95
- Data storage: 1.3 PB (assuming the data from this pass are stored and fed into the raw data store)

The arithmetic behind the CPU figure is sketched below.
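The CPU figure follows almost entirely from the muon stream. A minimal sketch of the sum, assuming the muon second pass costs the same 200 SI95/event (the slide gives only its rate) and that second-pass costs for the other systems are negligible:

```python
# Sketch: calibration CPU as sum of rate [Hz] x cost [SI95*s/event].
# Assumption: the muon 2nd pass (20 Hz) also costs 200 SI95/event;
# 2nd-pass costs of the other systems are taken as negligible.
streams = [
    ("muon, 1st pass",            100,  200),
    ("muon, 2nd pass (alignment)", 20,  200),
    ("inner detector",             10,    1),
    ("EM calorimeter",              0.2, 10),
    ("hadronic calorimeter",        1,  100),
]
total = sum(rate * cost for _, rate, cost in streams)
print(f"{total:,.0f} SI95")  # ~24,100, matching the quoted 24,000
```
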
Reconstruction

Two passes. Breakdown by system (per event):
- Muon: 200 SI95
- Had + EM calorimeters: 10 SI95
- Inner Detector: 100 SI95

NB: at high luminosity the Inner Detector numbers may rise drastically, and all numbers may vary substantially by 2006.

- Total CPU: 64,000 SI95 (Robertson: 65,000)
- Robotic store: 2 PB
- Reprocessing: 128k SI95 (one pass per 3 months); see the sketch below
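The totals can be reproduced from the per-system costs. A minimal sketch, using the 100 Hz rate from the first-year assumptions and reading “one per 3 months” as pushing a full year of data through one pass every quarter (four times real time); the latter reading is an assumption:

```python
# Sketch: reconstruction CPU from the per-system costs above.
# 100 Hz is from the first-year assumptions; treating "one per 3 months"
# as one full-year pass per quarter (4x real time) is an assumption.
RATE_HZ = 100
per_event = 200 + 10 + 100      # muon + calorimeters + inner detector, SI95
per_pass = RATE_HZ * per_event  # 31,000 SI95 steady state for one pass
print(f"two passes:   {2 * per_pass:,} SI95")  # 62,000 ~ quoted 64,000
print(f"reprocessing: {4 * per_pass:,} SI95")  # 124,000 ~ quoted 128,000
```
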
Generation and Simulation

“Astrophysical” uncertainties: highly model dependent, e.g. the scale of G4 activities vs. fast simulation (CDF vs. D0 models).
- Assume 1% of the total data volume is simulated via G4 at 3000 SI95/event; data store: 10 TB
- Remainder (10x) via fast simulation: 30(?) TB, negligible CPU
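The 10 TB store is consistent with simulating 1% of the events at RAW size. A minimal sketch, assuming 1e9 real events/year, 1 MB stored per simulated event, and production spread over 1e7 seconds (the per-event size and the spread are assumptions):

```python
# Sketch: G4 simulation load implied by the 1% assumption.
# Assumptions: 1e9 real events/year; 1 MB stored per simulated event;
# production spread evenly over 1e7 seconds.
n_g4 = 0.01 * 1e9              # 1e7 fully simulated events
store_tb = n_g4 * 1e6 / 1e12   # -> 10 TB, matching the slide
cpu_si95 = n_g4 * 3000 / 1e7   # -> ~3,000 SI95 sustained
print(f"{store_tb:.0f} TB store, ~{cpu_si95:,.0f} SI95 sustained")
```
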
Analysis
- 130,000 SI95 from the ATLAS CTP; MONARC has pushed this number up
- Depends strongly on assumptions. Example: the U.S. estimate of 85k SI95 would suggest a minimum of 500k SI95 for ATLAS as a whole, though with large uncertainties (see the sketch below)
- 300 TB of storage per regional center
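One way to read the U.S.-to-ATLAS extrapolation, assuming the U.S. share is roughly the ~20% used earlier for the Tier-1 vs. Tier-0 sizing; this share is an assumption, for illustration only:

```python
# Sketch: scaling the U.S. analysis estimate to all of ATLAS.
# Assumption: the U.S. is ~20% of the collaboration's capacity.
us_si95 = 85_000
atlas_si95 = us_si95 / 0.20
print(f"~{atlas_si95:,.0f} SI95")  # 425,000, consistent with "minimum 500k"
```
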
Resources

CERN (Tier-0):
- Raw data store
- 2 passes of reconstruction
- Calibration
- Reprocessing
- Analysis etc. assumed to be part of contributions (e.g. an RC at CERN)

Tier-1s:
- Each has 20% of CERN capacity in CPU/tape/disk (reconstruction, ...)
- Monte Carlo, calibration, and analysis

Costing via 2000 prices and Moore’s law (1.4/year for CPU, 1.18/year for tape, 1.35/year for disk).

CPU
- CERN: 216,000 SI95 (calibration, reconstruction, and reprocessing only)
- Single Tier-1: 130k SI95 (U.S. example)
- Total: 1,500 kSI95 (see the sketch below)

NB: uncertainties in the analysis model and reprocessing times can dominate these estimates.
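The CERN figure is simply the sum of the three workloads estimated above, and the total is consistent with roughly ten Tier-1 centers; the count of ten is an assumption, based on the candidate list shown earlier:

```python
# Sketch: composition of the CPU totals.
# Assumption: ~10 Tier-1 centers (roughly the candidate list shown earlier).
cern = 24_000 + 64_000 + 128_000  # calibration + reconstruction + reprocessing
total = cern + 10 * 130_000
print(f"CERN:  {cern:,} SI95")    # 216,000, as quoted
print(f"Total: {total:,} SI95")   # 1,516,000 ~ quoted 1,500 kSI95
```
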
Data Storage

Tape:
- CERN: 2 PB (was 1 PB in the TP)
- Each Tier-1: 400 TB (U.S. estimate)
- Total: 4.0 PB

Disk Storage

More uncertainty here: usage of compressed data, etc. Figure of merit: 25% of the robotic tape.
- 540 TB at CERN (the ATLAS Computing TP had 100 TB)
- U.S. estimate: 100 TB
- Sum of CERN + Tier-1s: 1,540 TB (see the sketch below)
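A minimal sketch of how the disk sum combines, again assuming roughly ten Tier-1 centers, each at the U.S. estimate:

```python
# Sketch: the 25%-of-tape figure of merit and the disk sum.
# Assumption: ~10 Tier-1 centers, each at the U.S. estimate of 100 TB.
cern_disk_tb = 540                 # ~25% of the 2 PB CERN tape store
total_disk_tb = cern_disk_tb + 10 * 100
print(f"{total_disk_tb:,} TB")     # 1,540 TB, as quoted
```
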
Multipliers
- CPU: 70 CHF/SI95 in 2000; factor of 10 from Moore’s law
- Robotic tape: 2,700 CHF/TB in 2000; factor of 2.5 from Moore’s law
- Disk: 50,000 CHF/TB in 2000; factor of 5 from Moore’s law
- Networking: 20% of the sum of the other hardware costs (a decent “rule of thumb”)
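The quoted factors can be checked against the annual improvement rates from the Resources slide (1.4/year CPU, 1.18/year tape, 1.35/year disk); a minimal sketch of how many years of improvement each factor corresponds to:

```python
# Sketch: years of improvement implied by each quoted Moore's-law factor,
# using the annual rates from the Resources slide.
from math import log

for name, annual, factor in [("CPU", 1.4, 10), ("tape", 1.18, 2.5),
                             ("disk", 1.35, 5)]:
    years = log(factor) / log(annual)
    print(f"{name}: factor {factor} ~ {years:.1f} years")
# CPU ~6.8 y, tape ~5.5 y, disk ~5.4 y: purchases deferred to roughly
# 2005-2007 relative to year-2000 prices.
```
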
Costs (at 2000 prices)
- CPU: CERN 15 MCHF; total (Tier-1s + CERN) 106 MCHF
- Tape: CERN 5.4 MCHF; total 11 MCHF
- Disk: CERN 27 MCHF; total 77 MCHF
- Networking: 37 MCHF
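These figures follow from the quantities on the preceding slides multiplied by the year-2000 unit prices, with networking from the 20% rule; a minimal sketch:

```python
# Sketch: cost = quantity x year-2000 unit price (quantities from the
# CPU, tape, and disk slides; networking from the 20% rule of thumb).
cpu_mchf = 1_500_000 * 70 / 1e6   # ~105 (CERN alone: 216k -> ~15)
tape_mchf = 4_000 * 2_700 / 1e6   # ~10.8 (CERN: 2 PB -> 5.4)
disk_mchf = 1_540 * 50_000 / 1e6  # 77 (CERN: 540 TB -> 27)
net_mchf = 0.20 * (cpu_mchf + tape_mchf + disk_mchf)  # ~39
print(f"CPU {cpu_mchf:.0f}, tape {tape_mchf:.0f}, disk {disk_mchf:.0f}, "
      f"network {net_mchf:.0f} MCHF")
# ~105 / 11 / 77 / 39, close to the quoted 106 / 11 / 77 / 37 MCHF
```
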
Moore’s Law (costs after applying the factors above)
- CPU: CERN 2 MCHF; total (Tier-1s + CERN) 11 MCHF
- Tape: CERN 2.2 MCHF; total 4.3 MCHF
- Disk: CERN 1.9 MCHF; total 5.5 MCHF
- Networking: 7.1 MCHF

Comment: one cannot buy everything at the last moment.

Commentary

Comparisons (ATLAS TP, Robertson):
- Unit costs show wide variation (the unit cost of an SI95 now, robotic tape, disk)
- Moore’s law: varying assumptions
- Requirements can have large variations (ATLAS, CMS, MONARC, etc.)

One should not take these numbers as cast in stone; within ATLAS alone there are variations in CPU/event, Monte Carlo methodology, and analysis models. Nonetheless, this serves as a starting point.