
1

A Summary of CHEP 2007
Dmitry Emeliyanov, RAL PPD
Victoria, BC, Canada, 2-7 Sept. 2007


2/24

CHEP’07: The conference
• Expected audience:
  – attract 500 people
  – 90% from outside of Canada
  – 25% from US
• Total: 474


3/24

CHEP’07: Some statistics
• 429 abstracts submitted with 1208 authors
• 29 plenary talks and 7 parallel tracks:
[Chart: number of presentations and posters per track, for the tracks Online computing; Software components, tools and databases; Computer facilities, production grids and networking; Collaborative tools; Distributed data analysis and information management; Event processing; Grid middleware and tools]


4/24

Selected topics
• Status of the LHC and experiments
• Multi-core CPUs and HEP software: news from Intel and a view from CERN
• Online computing: Trigger and DAQ activities in LHC experiments and beyond

• All presentations are available in Indico: http://indico.cern.ch/conferenceTimeTable.py?confId=3580

• Papers will be published in the Journal of Physics: Conference Series


5/24

General LHC schedule
T. Virdee (CERN/Imperial)

• Engineering run originally foreseen at end 2007 now precluded by delays in installation and equipment commissioning

• 450 GeV operation now part of normal setting up procedure for beam commissioning to high-energy

• General schedule has been revised, accounting for inner triplet repairs and their impact on sector commissioning

• All technical systems commissioned to 7 TeV operation, and machine closed April 2008

• Beam commissioning starts May 2008
• First collisions at 14 TeV c.m. July 2008
• Luminosity evolution will be dominated by our confidence in the machine protection system and by the ability of the detectors to absorb the rates.

• No provision in success-oriented schedule for major mishaps, e.g. additional warm-up/cooldown of sector


6/24

LHC experiments status

• Construction essentially completed
• Installation is very advanced - beam pipes closed end of March 2008
• Test beam and commissioning work already carried out gives confidence that detectors will behave as expected
• Commissioning using cosmics with more and more complete setups (complexity and functionality), using final readout, trigger and DAQ, software and computing systems
• Computing, Software & Analysis 24/7 Challenges, Dress Rehearsals @ 50% of 2008 expectation by end of 2007
• Preparations for the rapid extraction of physics being made
• By spring 2008 experiments will be in 2008 configurations, fields ON, taking cosmics

T. Virdee (CERN/Imperial)


7

News from Intel
Addressing Future HPC Demand with Multi-core Processors
Stephen S. Pawlowski, Intel Senior Fellow
GM, Architecture and Planning; CTO, Digital Enterprise Group
September 5, 2007


8

Accelerating Multi- and Many-core Performance Through Parallelism
[Diagram: a many-core die combining a few big cores with many small cores, together with power delivery and management, high-bandwidth memory, reconfigurable cache, a scalable fabric and fixed-function units]
• Big cores for single-thread performance
• Small cores for multi-thread performance


9

Addressing Memory Bandwidth: Bringing Memory Closer to the Cores
[Diagram: two packaging concepts, Memory on Package (fast DRAM on the package next to the CPU and its last-level cache) and 3D Memory Stacking (DRAM stacked directly on the silicon chip)]
*Future vision, does not represent a real Intel product


10

How good is the match between LHC software and current/future processors?
Sverre Jarp, CERN openlab CTO
CHEP 2007, 5 September 2007


11

Implications of Moore’s law
• Initially the processor was simple
  – Modest frequency; single instruction issue; in order; tiny caches; no hardware multithreading or multi-core; no major problems with cooling
• Since then:
  – Frequency scaling (from 150 MHz to 3 GHz)
  – Multiple execution ports, wide execution (SSE)
  – Out-of-order execution, larger caches
  – Multithreading, multi-core
  – Heat

All of this has been absorbed without any change to our software model: single-threaded processes farmed out per processor core.
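(Not from the talk: a minimal C++ sketch of the event-parallel model described above, one independent single-threaded process farmed out per core, each holding its own full copy of the application state. The worker name and event count are hypothetical.)

// Minimal sketch of the "one single-threaded process per core" model.
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

// Hypothetical worker: processes its share of events serially,
// with a full private copy of geometry, field map, calibrations, ...
void process_events(int worker_id, int n_workers) {
    for (int ev = worker_id; ev < 1000; ev += n_workers) {
        // ... reconstruct event 'ev' ...
    }
    std::printf("worker %d done\n", worker_id);
}

int main() {
    const int n_cores = static_cast<int>(sysconf(_SC_NPROCESSORS_ONLN));
    for (int i = 0; i < n_cores; ++i) {
        if (fork() == 0) {            // child: an independent single-threaded job
            process_events(i, n_cores);
            _exit(0);
        }
    }
    while (wait(nullptr) > 0) {}      // parent waits for all workers
    return 0;
}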


12

HEP Software Profile
• Our memory usage:
  – Today, we need 2 – 4 GB per single-threaded process.
  – In other words, a dual-socket server needs at least:
    • single core: 4 – 8 GB; quad core: 16 – 32 GB
    • future 16-way CPU: 64 – 128 GB; 64-way CPU: 256 – 512 GB
• “We have floating point work wrapped in ‘if/else’ logic”
  – Overall estimate: 50% is floating point
• Our LHC programs typically issue (on average) only 1 instruction per CPU clock cycle, about 1/8 of the maximum. This is very low:
  – the Core 2 architecture can handle 4 instructions per cycle
  – each SSE instruction can operate on 128 bits (2 doubles)

“We are not getting out of first gear”
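(Not from the talk: a small C++ illustration of the SSE point above. A single SSE2 instruction works on a 128-bit register holding two doubles, so the vectorised loop performs two additions per instruction where the scalar loop performs one.)

// Illustration of "each SSE instruction can operate on 128 bits (2 doubles)".
#include <emmintrin.h>   // SSE2 intrinsics
#include <cstddef>

// Scalar version: one double addition at a time.
void add_scalar(const double* a, const double* b, double* out, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = a[i] + b[i];
}

// SSE2 version: each _mm_add_pd adds two doubles at once.
void add_sse2(const double* a, const double* b, double* out, std::size_t n) {
    std::size_t i = 0;
    for (; i + 2 <= n; i += 2) {
        __m128d va = _mm_loadu_pd(a + i);                // load 2 doubles (128 bits)
        __m128d vb = _mm_loadu_pd(b + i);
        _mm_storeu_pd(out + i, _mm_add_pd(va, vb));      // 2 additions per instruction
    }
    for (; i < n; ++i)                                   // remainder, if n is odd
        out[i] = a[i] + b[i];
}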


13

Recommendations
• Industry will bombard us with new designs based on multi-billion transistor budgets
  – Hundreds of cores
  – Multiple threads per core
  – Unbelievable floating-point performance
• Clearly, the emphasis now is to get the LHC started, and there is plenty of compute power across the Grid.
• If we want to extract (much) more compute power out of new chip generations:
  – Try to increase the Instruction Level Parallelism
  – Investigate “intelligent” multithreading
  – Reduce our overall memory footprint (see the sketch after the diagram below)

[Diagram: proposed multi-core memory layout in which reentrant code, the magnetic field, physics processes and global data are shared by all cores, while each of Core 0 to Core 3 keeps its own event-specific data]
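(Not from the talk: a minimal C++ sketch of the footprint-reduction idea in the diagram above. Read-only data such as the field map and physics tables are held once and shared by all worker threads, while each core keeps only its own event-specific data. All type and function names are hypothetical.)

// Shared read-only conditions, per-core event data (illustration only).
#include <functional>
#include <thread>
#include <vector>

struct FieldMap   { /* magnetic field map, read-only after initialisation */ };
struct PhysTables { /* physics process tables, read-only */ };
struct EventData  { /* hits, tracks, ... private to one event */ };

// Shared between all cores: held once per node, not once per process.
struct SharedConditions {
    FieldMap   field;
    PhysTables physics;
};

void worker(const SharedConditions& shared, int worker_id, int n_workers) {
    for (int ev = worker_id; ev < 1000; ev += n_workers) {
        EventData event;              // event-specific data, private to this core
        // ... reconstruct event 'ev' using the shared, read-only conditions ...
        (void)shared; (void)event;
    }
}

int main() {
    const SharedConditions shared{};  // one copy for the whole node
    const int n_cores = 4;            // Core 0 .. Core 3 as in the diagram
    std::vector<std::thread> pool;
    for (int i = 0; i < n_cores; ++i)
        pool.emplace_back(worker, std::cref(shared), i, n_cores);
    for (auto& t : pool) t.join();
    return 0;
}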


14

Online Computing: CPU farms for high-level triggering; farm configuration and run control; describing and managing configuration data and conditions databases; online software frameworks and tools; online calibration procedures

• 48 abstracts total: 27 oral presentations / 21 posters

• By experiment:
  – 38 LHC / 10 non-LHC or generic
  – ALICE: 4
  – ATLAS: 15
  – CMS: 14
  – LHCb: 5


15

Data Acquisition at the LHC experiments
Plenary talk by Sylvain CHAPELAND (CERN)


16

LHC Experiments: Trigger and DAQ Status
• “Alea iacta est” – all fundamental choices are made
  – All use commercial components wherever possible
  – All based on powerful LAN technology and PC server farms
  – Installation is progressing rapidly
• Status reports:
  – “Integration of the Trigger and Data Acquisition Systems in ATLAS”
  – “Commissioning of the ALICE Data Acquisition System”
• Commissioning and cosmics running
  – Commissioning of larger and larger slices has started in all 4 experiments
  – Large-scale and cosmic (ATLAS) tests already look very promising
  – Extremely valuable feedback
  – Requires customized settings / algorithms


17

Combined Cosmic run in June 2007
In June we had a 14-day combined cosmic run with no magnetic field. It included the following systems:
• Muons: RPC (~1/32), MDT (~1/16), TGC (~1/36)
• Calorimeters: EM (LAr) (~50%) & Hadronic (Tile) (~75%)
• Tracking: Transition Radiation Tracker (TRT) (~6/32 of the barrel of the final system)
The only systems missing are the silicon strips and pixels and the muon-system CSCs.

From “The ATLAS Trigger Commissioning with Cosmic rays”


18

Trigger steering

• Sophisticated frameworks for high-level trigger steering have been developed
  – Lightweight (caching of calculations, ATLAS)
  – Work both offline and online
  – Use a database for configurations (CMS)
  – Ready to be given to non-expert physicists!
  – “The ATLAS High Level Trigger Steering”
  – “High Level Trigger Configuration and Handling of Trigger Tables in the CMS Filter Farm”


19

Data Quality Monitoring
• Essential for commissioning and running
• Works also with “offline” data
• Standalone viewers vs plug-ins (e.g. web CMS)
• Databases are used to store histograms or to describe them (LHCb)
• Reports from all four experiments:
  – “The ALICE-LHC Online Data Quality Monitoring Framework”
  – “A software framework for Data Quality Monitoring in ATLAS”
  – “CMS Online Web Based Monitoring”
  – “Online Data Monitoring in the LHCb experiment”


20

Slow and Run Controls

• Slow and run-control face huge numbers of elements, ~O(10^7)
• Final run-control is beginning to be used on a wide scale; scalability has been tested. Configuration stored in an RDBMS (ALICE, CMS, LHCb) or as objects (ATLAS)

• All run-controls support partitioning and use finite state machines

– “The ATLAS DAQ System Online Configurations Database Service Challenge”

– “The Run Control and Monitoring System of the CMS Experiment”

• Detector Control is maybe “slow” but certainly big: “The CMS Tracker Control System”, O(50000) HV channels + O(100000) environment sensors controlled by 5 PCs


21

TDAQ Activities Outside the LHC

• Reports from mature systems
  – “The DZERO Run 2 L3/DAQ System Performance”
  – “The PHENIX Experiment in the RHIC Run 7”
  – “The BaBar Online Detector Control System Upgrade”
• And new frameworks
  – “Multi-Agent Framework for Experiment Control Systems (AFECS)”

• Successful upgrades (to overcome legacy hardware), hardware extensions, high availability, running with very small crews


22

The D0 Run II L3/DAQ System Performance
• Mainly run by 3 (part-time) people
• Heterogeneous trigger farm scaled up from 90 to ~330 nodes
• Has lived reliably through numerous detector and hardware upgrades


23

To summarize ...

• The LHC experiments are looking forward to seeing the first data
  – All core DAQ components have been tested
  – Good fraction of equipment is installed (except for the filter farms and part of the DAQ network)
  – Integration and commissioning are well underway
  – A lot of activity in trigger control and steering
• Handing over to the physicists
  – Monitoring frameworks evolving quickly

Many interesting Online stories will be told at the next CHEP


24/24

CHEP 2009

• Will be held in Prague, Czech Republic on 21-27 March 2009