HLT - data compression vs event rejection
Page 1: HLT - data compression vs event rejection

HLT - data compression vs event rejection

Page 2: HLT - data compression vs event rejection

Assumptions

• Need for an online rudimentary event reconstruction for monitoring

• Detector readout rate (e.g. TPC) >> DAQ bandwidth > mass storage bandwidth

• Some physics observables require running detectors at maximum rate

(e.g. quarkonium spectroscopy: TPC/TRD dielectrons; jets in p+p: TPC tracking)

• Online combination of different detectors can increase selectivity of triggers

(e.g. jet quenching: PHOS/TPC high-pT γ-jet events)

Page 3: HLT - data compression vs event rejection

Data volume and event rate

TPC detector: data volume = 300 Mbyte/event, data rate = 200 Hz

Readout chain and bandwidths:

• front-end electronics: 60 Gbyte/sec

• Level-3 system: 15 Gbyte/sec

• DAQ – event building: < 2 Gbyte/sec

• permanent storage system: < 1.2 Gbyte/sec
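
These figures are self-consistent; a minimal Python sketch of the arithmetic (taking the 1.2 Gbyte/sec storage limit from the list above):

```python
# Back-of-the-envelope check of the figures above (a sketch; the stage
# pairings follow the reconstruction of the slide's diagram).
EVENT_SIZE_MB = 300      # TPC data volume per event
EVENT_RATE_HZ = 200      # maximum TPC event rate

front_end = EVENT_SIZE_MB * EVENT_RATE_HZ / 1000.0   # Gbyte/sec
print(front_end)          # 60.0 -> matches the 60 Gbyte/sec front-end figure
print(front_end / 1.2)    # ~50x total reduction needed before permanent storage
```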

Page 4: HLT - data compression vs event rejection

HLT tasks

• Online (sub)-event reconstruction

– optimization and monitoring of detector performance

– monitoring of trigger selectivity

– fast check of physics program

• Data rate reduction

– data volume reduction

• regions-of-interest and partial readout

• data compression

– event rate reduction

• (sub)-event reconstruction and event rejection

• p+p program

– pile-up removal

– charged particle jet trigger, etc.

Page 5: HLT - data compression vs event rejection

Data rate reduction

• Volume reduction

– regions-of-interest and partial readout

– data compression

• entropy coder

• vector quantization

• TPC-data modeling

• Rate reduction

– (sub)-event reconstruction and event rejection before event building

Page 6: HLT - data compression vs event rejection

TPC event (only about 1% is shown)

Page 7: HLT - data compression vs event rejection

Regions-of-interest and partial readout

• Example: selection of TPC sector and η-slice based on a TRD track candidate

Page 8: HLT - data compression vs event rejection

Data compression:Entropy coder

Variable Length Coding: short codes for frequent values, long codes for infrequent values

Results: NA49: compressed event size = 72%; ALICE: 65%

(Arne Wiebalck, diploma thesis, Heidelberg)

Probability distribution of 8-bit TPC data
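
To make the idea concrete, here is a minimal Huffman-style variable-length coder in Python. The toy ADC spectrum is an assumption for illustration; the real coder would be driven by the measured distribution of 8-bit TPC values shown on this slide.

```python
# Minimal sketch of the entropy-coder idea: frequent ADC values get short
# codes, infrequent ones long codes. Toy data, not the measured TPC spectrum.
import heapq
from collections import Counter

def huffman_code(freqs):
    """Return {symbol: bitstring} for a {symbol: count} map."""
    heap = [(n, i, [s]) for i, (s, n) in enumerate(freqs.items())]
    heapq.heapify(heap)
    code = {s: "" for s in freqs}
    while len(heap) > 1:
        n1, _, syms1 = heapq.heappop(heap)
        n2, i2, syms2 = heapq.heappop(heap)
        for s in syms1:
            code[s] = "0" + code[s]   # extend codes of the merged subtree
        for s in syms2:
            code[s] = "1" + code[s]
        heapq.heappush(heap, (n1 + n2, i2, syms1 + syms2))
    return code

# Toy ADC spectrum: small values dominate (steeply falling distribution).
adc = [0]*500 + [1]*250 + [2]*120 + [3]*60 + [10]*40 + [50]*20 + [200]*10
code = huffman_code(Counter(adc))
coded_bits = sum(len(code[v]) for v in adc)
print(coded_bits / (8 * len(adc)))   # compressed size as fraction of 8-bit raw
```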

Page 9: HLT - data compression vs event rejection

Data compression:Vector quantization

• Sequence of ADC-values on a pad = vector:

• Vector quantization = transformation of vectors into codebook entries

• Quantization error: distance between the original vector and its nearest codebook entry

Results: NA49: compressed event size = 29%; ALICE: 48%-64% (Arne Wiebalck, diploma thesis, Heidelberg)

(diagram: each pad vector is compared against the codebook entries)
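
A minimal sketch of the comparison step: each pad's ADC sequence is replaced by the index of its nearest codebook entry, and only the index is stored. The 4-entry codebook below is a hypothetical example; in practice the codebook would be trained (e.g. with k-means) on representative TPC data.

```python
# Vector quantization sketch: nearest-codebook-entry lookup per pad vector.
import numpy as np

codebook = np.array([        # assumed codebook of length-8 pad vectors
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 2, 8, 15, 14, 7, 2, 0],
    [1, 5, 20, 40, 38, 18, 5, 1],
    [3, 12, 50, 90, 85, 45, 10, 2],
], dtype=float)

def quantize(vec):
    """Return (codebook index, quantization error) for one pad vector."""
    dists = np.linalg.norm(codebook - vec, axis=1)   # compare to all entries
    i = int(np.argmin(dists))
    return i, dists[i]

pad = np.array([1, 4, 18, 42, 36, 20, 6, 0], dtype=float)
idx, err = quantize(pad)
print(idx, err)   # a 2-bit index replaces eight 8-bit ADC values
```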

Page 10: HLT - data compression vs event rejection

Data compression: TPC-data modeling

• Fast local pattern recognition:

simple local track model (e.g. helix) → local track parameters

• Track and cluster modeling:

analytical cluster model → comparison to raw data → quantization of deviations from track and cluster model

Result: NA49: compressed event size = 7%
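
A minimal sketch of the quantization step: once the track and cluster models predict where each cluster should sit, only the small deviation from the prediction is stored, coarsely quantized. The function names and the 0.05 cm step are assumptions for illustration.

```python
# Store quantized deviations from the model instead of full cluster positions.
import numpy as np

STEP_CM = 0.05                          # assumed residual quantization step

def encode_residuals(measured, predicted):
    """Quantize deviations of measured cluster positions from the model."""
    return np.round((measured - predicted) / STEP_CM).astype(np.int8)

def decode_residuals(q, predicted):
    """Reconstruct positions from model prediction + quantized residuals."""
    return predicted + q * STEP_CM

predicted = np.array([10.00, 10.85, 11.72])   # helix-model predictions (cm)
measured = np.array([10.03, 10.82, 11.74])    # measured cluster centroids (cm)
q = encode_residuals(measured, predicted)
print(q, decode_residuals(q, predicted))      # a few bits replace full floats
```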

Page 11: HLT - data compression vs event rejection

Fast pattern recognition

Essential part of Level-3 system:

– crude complete event reconstruction → monitoring

– redundant local tracklet finder for cluster evaluation → efficient data compression

– selection of (η, φ, pT)-slices → ROI

– high precision tracking for selected track candidates → jets, dielectrons, ...

Page 12: HLT - data compression vs event rejection

Fast pattern recognition

• Sequential approach

– cluster finder, vertex finder and track follower

– STAR code adapted to ALICE TPC

• reconstruction efficiency

• timing results

• Iterative feature extraction

– tracklet finder on raw data and cluster evaluation

– Hough transform

Page 13: HLT - data compression vs event rejection

Fast cluster finder (1)

• timing: 5 ms per padrow
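
The slide's figures are not reproduced here, but the principle of a fast one-dimensional cluster finder can be sketched as follows (an illustration under simple assumptions, not the Level-3 code itself): scan one padrow, group contiguous above-threshold ADC samples, and return a charge-weighted centroid per cluster.

```python
# Toy padrow cluster finder: contiguous above-threshold runs -> centroids.
def find_clusters(adc, threshold=2):
    """Return [(centroid_pad, total_charge), ...] for one padrow."""
    clusters, start = [], None
    for pad, value in enumerate(adc + [0]):   # sentinel closes a trailing run
        if value > threshold and start is None:
            start = pad                       # cluster opens
        elif value <= threshold and start is not None:
            charge = sum(adc[start:pad])
            centroid = sum(p * adc[p] for p in range(start, pad)) / charge
            clusters.append((centroid, charge))
            start = None                      # cluster closes
    return clusters

print(find_clusters([0, 0, 3, 9, 14, 8, 2, 0, 0, 5, 7, 4, 0]))
```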

Page 14: HLT - data compression vs event rejection

Fast cluster finder (2)

Page 15: HLT - data compression vs event rejection

Fast cluster finder (3)

• Efficiency

• Offline efficiency

Page 16: HLT - data compression vs event rejection

Fast vertex finder

• Resolution

• Timing result: 19 msec on ALPHA (667 MHz)

Page 17: HLT - data compression vs event rejection

Fast track finder

• Tracking efficiency

Page 18: HLT - data compression vs event rejection

Fast track finder

• Timing results

Page 19: HLT - data compression vs event rejection

Hough transform (1)

• Data flow

Page 20: HLT - data compression vs event rejection

Hough transform (2)

η-slices

Page 21: HLT - data compression vs event rejection

Hough transform (3)

• Transformation and maxima search
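
A minimal sketch of the transformation and maxima search: each space point (r, φ) in an η-slice votes in a (ψ, κ) parameter space, using the circle-through-vertex relation κ = 2 sin(φ − ψ)/r; peaks in the accumulator are track candidates. The binning and parameter ranges below are illustrative assumptions.

```python
# Hough transform sketch: accumulate votes in (psi, kappa), then find the peak.
import numpy as np

def hough_accumulate(points, n_psi=64, n_kappa=64, kappa_max=0.01):
    """Fill a (psi, kappa) accumulator with votes from TPC space points."""
    psis = np.linspace(-np.pi / 4, np.pi / 4, n_psi)
    acc = np.zeros((n_psi, n_kappa), dtype=int)
    for r, phi in points:
        kappa = 2.0 * np.sin(phi - psis) / r          # one kappa per psi bin
        k_bin = ((kappa + kappa_max) / (2 * kappa_max) * n_kappa).astype(int)
        ok = (k_bin >= 0) & (k_bin < n_kappa)
        acc[np.arange(n_psi)[ok], k_bin[ok]] += 1     # vote
    return acc, psis

# Toy event: 20 points of one track from the vertex (kappa=0.005, psi=0.1).
radii = np.linspace(90.0, 240.0, 20)
points = [(r, 0.1 + np.arcsin(0.005 * r / 2.0)) for r in radii]
acc, psis = hough_accumulate(points)
peak = np.unravel_index(acc.argmax(), acc.shape)      # maxima search
print(peak, acc.max(), psis[peak[0]])                 # peak near psi = 0.1
```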

Page 22: HLT - data compression vs event rejection

Level-3 system architecture

Inputs: TPC sector #1 ... TPC sector #36, TRD, ITS, XYZ

Processing hierarchy:

– local processing (subsector/sector)

– global processing I (2x18 sectors)

– global processing II (detector merging)

– global processing III (event reconstruction)

Outputs: ROI, data compression, event rejection, monitoring, Level-3 trigger, momentum filter

Page 23: HLT - data compression vs event rejection

TPC on-line tracking

Assumptions:

• Bergen fast tracker

• DEC Alpha 667 MHz

• fast cluster finder excluding cluster deconvolution

Note: this cluster finder is suboptimal for the inner sectors, and additional work is required there. To obtain an estimate, the computation requirements were based on the outer pad rows; the deconvolution that may be necessary in the inner padrows could require comparably more CPU cycles.

TPC L3 tracking estimate:

• cluster finder on a pad row of the outer sector: 5 ms

• tracking of all (Monte Carlo) space points for one TPC sector: 600 ms

Note: this data may not include realistic noise; tracking is, to first order, linear in the number of tracks provided there are few overlaps; one ideal processor is assumed below.

• cluster finder on one sector (145 padrows): 725 ms

• process complete sector: 1.325 s

• process complete TPC: 47.7 s

• running at maximum TPC rate (200 Hz), January 2000: 9540 CPUs

• assuming 20% overhead (parallel computation, network transfer, additional inner-sector overhead, sector merging etc.): 11500 CPUs

• Moore's law (60%/a) @ 2006, minus 1 a commissioning → x10.5: 1095 CPUs
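
The CPU counts follow directly from the per-padrow and per-sector timings; a few lines of Python (a sketch of the arithmetic only) reproduce them:

```python
# Reproduce the estimate above. 1.6**5 is the assumed Moore's-law gain of
# 60%/a over the five years 2000 -> 2005 (2006 minus one year commissioning).
sector_s = 5e-3 * 145 + 0.600      # cluster finding + tracking: 1.325 s
tpc_s = sector_s * 36              # full TPC: 47.7 s per event
cpus = tpc_s * 200                 # at 200 Hz: 9540 CPUs (January 2000)
cpus_total = cpus * 1.2            # 20% overhead
moore = 1.6 ** 5                   # ~10.5
print(sector_s, tpc_s, round(cpus_total), round(cpus_total / moore))
# -> ~1.325  ~47.7  11448  1092   (the slide rounds to 11500 and 1095 CPUs)
```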

