HLT - data compression vs event rejection





Assumptions

• Need for an online rudimentary event reconstruction for monitoring

• Detector readout rate (i.e. TPC) >> DAQ bandwidth and mass storage bandwidth

• Some physics observables require running detectors at maximum rate (e.g. quarkonium spectroscopy: TPC/TRD dielectrons; jets in p+p: TPC tracking)

• Online combination of different detectors can increase the selectivity of triggers (e.g. jet quenching: PHOS/TPC high-pT γ-jet events)

Data volume and event rate

TPC detector: data volume = 300 Mbyte/event, data rate = 200 Hz

Bandwidth along the readout chain:
• front-end electronics: 60 Gbyte/sec
• DAQ – event building: 15 Gbyte/sec
• Level-3 system: < 1.2 Gbyte/sec
• permanent storage system: < 2 Gbyte/sec
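A quick sanity check of these numbers (a sketch; the only inputs are the figures quoted above, the reduction factor is derived from them):

```python
# Sanity check of the TPC data rates quoted above.
event_size_gb = 300.0 / 1000.0   # 300 Mbyte/event
event_rate_hz = 200.0            # maximum TPC gating rate

fee_rate = event_size_gb * event_rate_hz          # Gbyte/s out of the front-end
print(f"front-end data rate: {fee_rate:.0f} Gbyte/s")          # -> 60 Gbyte/s

# Most restrictive downstream limit quoted above (< 1.2 Gbyte/s):
reduction = fee_rate / 1.2
print(f"required overall reduction: factor ~{reduction:.0f}")  # -> ~50
```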

HLT tasks

• Online (sub)-event reconstruction
  – optimization and monitoring of detector performance
  – monitoring of trigger selectivity
  – fast check of physics program

• Data rate reduction
  – data volume reduction
    • regions-of-interest and partial readout
    • data compression
  – event rate reduction
    • (sub)-event reconstruction and event rejection

• p+p program
  – pile-up removal
  – charged particle jet trigger, etc.

Data rate reduction

• Volume reduction
  – regions-of-interest and partial readout
  – data compression
    • entropy coder
    • vector quantization
    • TPC-data modeling

• Rate reduction
  – (sub)-event reconstruction and event rejection before event building

TPC event (only about 1% is shown)

Regions-of-interest and partial readout

• Example: selection of TPC sector and η-slice based on a TRD track candidate
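As an illustration of this kind of region-of-interest selection, the sketch below maps a TRD track candidate (represented here only by its azimuth φ and pseudorapidity η) onto a TPC sector and an η-slice; the granularity constants and function names are assumptions for the example, not the actual trigger code.

```python
import math

N_PHI_SECTORS = 18            # TPC sectors in azimuth (illustrative granularity)
N_ETA_SLICES  = 18            # eta-slices per sector (illustrative granularity)
ETA_MIN, ETA_MAX = -0.9, 0.9  # approximate TPC acceptance

def region_of_interest(phi, eta):
    """Map a TRD track candidate (phi, eta) to the TPC readout region."""
    sector = int((phi % (2.0 * math.pi)) / (2.0 * math.pi) * N_PHI_SECTORS)
    frac = (eta - ETA_MIN) / (ETA_MAX - ETA_MIN)
    eta_slice = min(max(int(frac * N_ETA_SLICES), 0), N_ETA_SLICES - 1)
    return sector, eta_slice

# Example: only this sector / eta-slice would be read out for the candidate.
print(region_of_interest(phi=1.3, eta=0.25))
```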

Data compression: entropy coder

Variable length coding: short codes for frequent values, long codes for infrequent values

Results: NA49: compressed event size = 72%; ALICE: 65%

(Arne Wiebalck, diploma thesis, Heidelberg)

Probability distribution of 8-bit TPC data
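The entropy coder exploits exactly this steeply falling amplitude distribution: a variable-length code (e.g. a Huffman code) assigns short codes to the frequent small ADC values. A minimal sketch with assumed toy data, not the NA49/ALICE coder:

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a variable-length (Huffman) code: short codes for frequent
    ADC values, long codes for infrequent ones."""
    freq = Counter(symbols)
    # heap entries: (frequency, tie_breaker, [(symbol, code), ...])
    heap = [(f, i, [(s, "")]) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = [(s, "0" + c) for s, c in c1] + [(s, "1" + c) for s, c in c2]
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return dict(heap[0][2])

# Toy 8-bit "TPC" data: small amplitudes dominate, as in the measured
# probability distribution referred to above.
adc = [0] * 800 + [1] * 120 + [2] * 50 + [10] * 20 + [200] * 10
code = huffman_code(adc)
compressed_bits = sum(len(code[v]) for v in adc)
print(f"compressed size: {100.0 * compressed_bits / (8 * len(adc)):.0f}% of original")
```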

Data compression: vector quantization

• Sequence of ADC-values on a pad = vector:

• Vector quantization = transformation of vectors into codebook entries

• Quantization error: distance between the input vector and its nearest codebook entry

Results: NA49: compressed event size = 29%; ALICE: 48%-64% (Arne Wiebalck, diploma thesis, Heidelberg)

(diagram: input vectors compared against codebook entries)
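A minimal sketch of the idea: the ADC sequence on a pad is treated as one vector and replaced by the index of the nearest codebook entry; the quantization error is the distance to that entry. The tiny k-means style codebook training and all numbers below are assumptions for illustration only.

```python
import numpy as np

def nearest_entry(vector, codebook):
    """Index of the closest codebook vector and the quantization error."""
    dist = np.linalg.norm(codebook - vector, axis=1)
    i = int(np.argmin(dist))
    return i, float(dist[i])

def train_codebook(vectors, n_entries=16, n_iter=10, seed=0):
    """Tiny k-means style codebook training (illustration only)."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), n_entries, replace=False)].astype(float)
    for _ in range(n_iter):
        labels = np.array([nearest_entry(v, codebook)[0] for v in vectors])
        for k in range(n_entries):
            members = vectors[labels == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

# Toy pad data: each row is the sequence of ADC values on one pad.
rng = np.random.default_rng(1)
pads = np.abs(rng.normal(0.0, 5.0, size=(500, 10)))
codebook = train_codebook(pads)
index, error = nearest_entry(pads[0], codebook)
print(f"pad 0 -> codebook entry {index}, quantization error {error:.2f}")
```

Only the codebook index (here 4 bits for 16 entries) is stored per pad instead of the full ADC sequence, which is where the volume reduction comes from.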

Data compression: TPC-data modeling

• Fast local pattern recognition:
  – simple local track model (e.g. helix) → local track parameters

• Track and cluster modeling:
  – analytical cluster model
  – comparison to raw data
  – quantization of deviations from track and cluster model

Result: NA49: compressed event size = 7%
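One way to picture the compression step (a hedged sketch, not the NA49 implementation): store the fitted local track parameters once and, per cluster, only a coarsely quantized deviation from the track/cluster model. The step size and helper names below are assumed values for illustration.

```python
import numpy as np

RESIDUAL_STEP = 0.05  # cm per quantization step (assumed value)

def compress(cluster_xy, model_xy):
    """Quantized deviations of measured clusters from the track model."""
    return np.round((cluster_xy - model_xy) / RESIDUAL_STEP).astype(np.int8)

def decompress(quantized, model_xy):
    return model_xy + quantized * RESIDUAL_STEP

clusters = np.array([[12.03, 4.71], [12.51, 4.98], [13.02, 5.24]])  # measured (x, y)
model    = np.array([[12.00, 4.70], [12.50, 5.00], [13.00, 5.25]])  # helix prediction
packed   = compress(clusters, model)                                # 1 byte per coordinate
print(packed)
print("max reconstruction error:", np.max(np.abs(decompress(packed, model) - clusters)))
```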

Fast pattern recognition

Essential part of Level-3 system

– crude complete event reconstruction → monitoring

– redundant local tracklet finder for cluster evaluation → efficient data compression

– selection of (η, φ, pT)-slices → ROI

– high-precision tracking for selected track candidates → jets, dielectrons, ...

Fast pattern recognition

• Sequential approach
  – cluster finder, vertex finder and track follower
    • STAR code adapted to ALICE TPC
  – reconstruction efficiency
  – timing results

• Iterative feature extraction
  – tracklet finder on raw data and cluster evaluation
    • Hough transform

Fast cluster finder (1)

• Timing: 5 ms per padrow

Fast cluster finder (2)

Fast cluster finder (3)

• Efficiency

• Offline efficiency
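A minimal one-dimensional version of such a padrow cluster finder (threshold and toy data are assumptions): consecutive ADC samples above threshold are grouped and replaced by their charge-weighted centroid.

```python
def find_clusters(adc, threshold=3):
    """Minimal 1-D cluster finder on a single padrow: group consecutive
    samples above threshold and return (centroid, total charge) pairs."""
    clusters, start = [], None
    for i, a in enumerate(list(adc) + [0]):          # sentinel closes the last cluster
        if a > threshold and start is None:
            start = i
        elif a <= threshold and start is not None:
            window = adc[start:i]
            charge = sum(window)
            centroid = sum(j * q for j, q in zip(range(start, i), window)) / charge
            clusters.append((centroid, charge))
            start = None
    return clusters

print(find_clusters([0, 1, 5, 9, 6, 2, 0, 0, 4, 7, 3, 0]))
```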

Fast vertex finder

• Resolution

• Timing result: 19 ms on ALPHA (667 MHz)

Fast track finder

• Tracking efficiency

Fast track finder

• Timing results

Hough transform (1)

• Data flow

Hough transform (2)

• η-slices

Hough transform (3)

• Transformation and maxima search
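The transformation and maxima search can be illustrated with a generic straight-line Hough transform (a sketch; after a conformal mapping, circular tracks in an η-slice can be treated approximately as straight lines, but the binning, parameters and toy data below are assumptions): every space point votes for all parameter combinations consistent with it, and track candidates appear as maxima in the accumulator.

```python
import numpy as np

def hough_line_transform(points, n_theta=180, n_rho=100, rho_max=50.0):
    """Fill a (theta, rho) accumulator with one vote per point and angle,
    then return the most-voted bin as the track candidate."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=np.int32)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)          # all angles at once
        rho_bin = ((rho + rho_max) / (2.0 * rho_max) * n_rho).astype(int)
        ok = (rho_bin >= 0) & (rho_bin < n_rho)
        acc[np.flatnonzero(ok), rho_bin[ok]] += 1
    t, r = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[t], (r + 0.5) / n_rho * 2.0 * rho_max - rho_max

# Toy space points along the line y = 0.5 * x + 2 plus a little noise.
xs = np.linspace(0.0, 40.0, 30)
rng = np.random.default_rng(0)
pts = np.column_stack([xs, 0.5 * xs + 2.0 + rng.normal(0.0, 0.2, xs.size)])
theta, rho = hough_line_transform(pts)
print(f"maximum at theta = {np.degrees(theta):.1f} deg, rho = {rho:.1f}")
```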

Level-3 system architecture

(block diagram)

• detector input: TPC sector #1 ... TPC sector #36, TRD, ITS, XYZ
• local processing (subsector/sector)
• global processing I (2×18 sectors)
• global processing II (detector merging)
• global processing III (event reconstruction)
• Level-3 trigger: ROI, data compression, event rejection, monitoring, momentum filter
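The layering in the diagram can be summarized as a processing chain (a skeleton only; the function bodies are placeholders and the names are assumptions, only the hierarchy follows the slide):

```python
# Skeleton of the hierarchical Level-3 processing shown above.

def local_processing(sector_raw):
    """Per-(sub)sector processing: cluster / tracklet finding."""
    return {"sector_tracks": sector_raw}                  # placeholder

def global_processing_1(sector_results):
    """Merge tracks across the 2 x 18 TPC sectors."""
    return {"tpc_tracks": sector_results}                 # placeholder

def global_processing_2(tpc_tracks, other_detectors):
    """Detector merging (TPC with TRD, ITS, ...)."""
    return {"matched_tracks": (tpc_tracks, other_detectors)}

def global_processing_3(merged):
    """Event reconstruction; input to ROI selection, data compression,
    event rejection, monitoring and the momentum filter."""
    return {"event": merged}

sectors = [local_processing(f"TPC sector #{i}") for i in range(1, 37)]
event = global_processing_3(
    global_processing_2(global_processing_1(sectors), ["TRD", "ITS"]))
print(list(event))
```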

TPC on-line tracking

Assumptions:
• Bergen fast tracker
• DEC Alpha 667 MHz
• Fast cluster finder, excluding cluster deconvolution

Note: this cluster finder is suboptimal for the inner sectors and additional work is required here. However, in order to get some estimate, the computation requirements were based on the outer pad rows. It should be noted that the possibly necessary deconvolution in the inner padrows may require comparably more CPU cycles.

TPC L3 tracking estimate:

• Cluster finder on one pad row of the outer sector: 5 ms
• Tracking of all (Monte Carlo) space points for one TPC sector: 600 ms
  Note: this data may not include realistic noise; tracking is, to first order, linear in the number of tracks provided there are few overlaps; one ideal processor is assumed below.
• Cluster finder on one sector (145 padrows): 725 ms
• Processing of a complete sector: 1.325 s
• Processing of the complete TPC: 47.7 s
• Running at maximum TPC rate (200 Hz), January 2000: 9540 CPUs
• Assuming 20% overhead (parallel computation, network transfer, additional inner-sector overhead, sector merging, etc.): 11500 CPUs
• Moore's law (60%/a) @ 2006, minus 1 year of commissioning: factor ×10.5 → 1095 CPUs
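The arithmetic behind this estimate can be reproduced directly from the quoted numbers (a sketch; only the figures from the slide enter):

```python
# Reproduce the CPU-count estimate from the per-sector timing figures above.
cluster_finder_per_padrow = 5e-3        # s (outer-sector padrow)
padrows_per_sector        = 145
tracking_per_sector       = 0.600       # s (all space points, one sector)
sectors                   = 36
tpc_rate                  = 200.0       # Hz
overhead                  = 0.20        # parallelisation, network, merging, ...
moore_factor              = 1.6 ** 5    # 60%/year from 2000 to 2005 (2006 minus 1 year commissioning)

per_sector = cluster_finder_per_padrow * padrows_per_sector + tracking_per_sector
per_event  = per_sector * sectors
cpus_2000  = per_event * tpc_rate * (1 + overhead)

print(f"per sector  : {per_sector:.3f} s")               # 1.325 s
print(f"per event   : {per_event:.1f} s")                # 47.7 s
print(f"CPUs (2000) : {cpus_2000:.0f}")                  # ~11450 (quoted as 11500)
print(f"CPUs (2005) : {cpus_2000 / moore_factor:.0f}")   # ~1090  (quoted as 1095)
```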