Page 1: LHCb Computing Model

LHCb Computing Model

Domenico Galli, Bologna

INFN CSN1

Roma, 31.1.2005

Page 2: LHCb Computing Model

Premise: The Event Rates

Current LHCb computing model and resource estimates are based on the event rates at HLT output following the “re-optimized trigger/DAQ/computing”:

Maximize physics output given available/expected computing resources.

They are summarized in the following table:

Data rate | Events                              | Calibration | Physics
200 Hz    | Exclusive B candidates              | Tagging     | B (core)
600 Hz    | High-mass dimuon candidates         | Tracking    | b → J/ψ X (unbiased)
300 Hz    | D* candidates                       | PID         | Charm (mixing & CPV)
900 Hz    | Inclusive b candidates (e.g. b → μ) | Trigger     | B (data mining)

Page 3: LHCb Computing Model

The LHCb Dataflow

[Dataflow diagram: RAW data from the On-line Farm and MC RAW data are reconstructed at CERN and the Tier-1s, producing rDST; pre-selection analysis at CERN and the Tier-1s uses calibration data and produces DST+RAW and TAG; physics analysis at CERN and the Tier-1s turns TAG and selected DST+RAW into user DST, user TAG and n-tuples; local analysis at the Tier-3s leads to the paper; Monte Carlo production runs at the Tier-2s and on the On-line Farm. Reconstruction, pre-selection and MC production are scheduled jobs; physics and local analysis are chaotic jobs.]

Page 4: LHCb Computing Model


The LHCb Dataflow (II)

Page 5: LHCb Computing Model

Event Parameters

Event size [kB]:

          current  2008
RAW           25     25
rDST         N/A     25
TAG            1      1
DST          100     75
MC DST       500    400

CPU [kSi2k•s/evt]:

                         current  2008
Reconstruction               2.4   2.4
Pre-selection analysis       0.6   0.2
Analysis                     0.3   0.3

rDST (reduced DST): only enough reconstructed data to allow the physics pre-selection algorithms to be run.

Assumed efficiencies [%]:

Scheduled CPU usage    85
Chaotic CPU usage      60
Disk usage             70
MSS usage             100

Page 6: LHCb Computing Model

The On-line Event Filter Farm

5.5 MSi2k;

1800 CPUs (assuming the PASTA forecast for 2006-2007);

40 TB disk.

HLT output: b-exclusive 200 Hz, di-muon 600 Hz, D* 300 Hz, b-inclusive 900 Hz (2 kHz in total).

2 streams sent to the CERN computing centre: RAW (25 kB/evt) at 2 kHz and rDST (25 kB/evt) at 200 Hz, i.e. ~60 MB/s, 2×10^10 evt/a, 500 TB/a of RAW.

1 a = 10^7 s of data taking over a 7-month period.

[Pie chart of the HLT output rate: b-exclusive 200 Hz, di-muon 600 Hz, D* 300 Hz, b-inclusive 900 Hz.]
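As a rough cross-check, the stream rates and event sizes above translate into the quoted yearly volumes; a minimal sketch in Python (the ~60 MB/s on the slide presumably includes some margin over the bare RAW+rDST rate computed here):

```python
# Sketch: Event Filter Farm output streams (rates and sizes from the slide above).
rates_hz = {"b-exclusive": 200, "di-muon": 600, "D*": 300, "b-inclusive": 900}
raw_size_kb = 25        # RAW event size
rdst_size_kb = 25       # rDST event size (b-exclusive, reconstructed online)
live_seconds = 1e7      # "1 a" of data taking

total_rate_hz = sum(rates_hz.values())                      # 2000 Hz
raw_mb_s = total_rate_hz * raw_size_kb / 1e3                # ~50 MB/s of RAW
rdst_mb_s = rates_hz["b-exclusive"] * rdst_size_kb / 1e3    # ~5 MB/s of rDST
events_per_year = total_rate_hz * live_seconds              # 2e10 evt/a
raw_tb_per_year = events_per_year * raw_size_kb / 1e9       # ~500 TB/a

print(f"{total_rate_hz} Hz, ~{raw_mb_s + rdst_mb_s:.0f} MB/s to CERN, "
      f"{events_per_year:.1e} evt/a, {raw_tb_per_year:.0f} TB/a of RAW")
```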

Page 7: LHCb Computing Model

Reconstruction

Evaluate:

Track position and momentum.

Energy of electromagnetic and hadronic showers.

Particle identification (e, γ, π0, π/K, μ).

Make use of:

Calibration and alignment constants (produced from online monitoring and/or offline from a pre-processing of data associated with the sub-detector).

Detector conditions (a subset of the Experimental Control System database).

Reconstruction: 2.4 kSi2k•s/evt; input: RAW data (25 kB/evt) + calibration data; output: rDST data (25 kB/evt).

Page 8: LHCb Computing Model

Reconstruction (II)

Required CPU for 1 pass of the 1-year data set: 1.5 MSi2k•a. Performed twice in a year.

Pass 1: during data taking, over a 7-month period.

b-exclusive events: real-time, by the Event Filter Farm.

b-inclusive, di-muon, D* events: quasi real-time (maximum delay of a few days) by the Tier-1s.

CPU power required for each Tier-1: 0.39 MSi2k.

Pass 2: re-processing, during the winter shut-down, over a 2-month period.

42% by the Event Filter Farm (5.5 MSi2k • 2 months).

52% by Tier-1s and CERN.

CPU power required for each Tier-1: 0.74 MSi2k.

Page 9: LHCb Computing Model

Reconstruction (III)

500 TB/a input RAW. Stored on MSS in 2 copies: one at CERN, the other divided among the Tier-1s:

500 TB/a @ CERN;

500/6 = 83 TB/a @ each Tier-1.

500 TB/a output rDST per pass, i.e. 1000 TB/a, stored on MSS in 1 copy divided among CERN and the Tier-1s:

1000/7 = 143 TB/a @ each of CERN + Tier-1s.

Page 10: LHCb Computing Model

Pre-selection Analysis (aka Stripping)

Evaluate:

4-momentum of measured particle tracks;

Primary and secondary vertices;

Candidates for composite particles;

4-momentum of composite particles.

Apply: cuts based on a specific pre-selection algorithm for each of the ~40 physics channels.

At least 4 output data streams foreseen during the first data taking (b-exclusive, b-inclusive, di-muon and D*).

Pre-selection analysis: 0.2 kSi2k•s/evt; input: rDST (25 kB/evt) + RAW (25 kB/evt); output streams: b-exclusive DST+RAW (100 kB/evt), b-inclusive DST+RAW (100 kB/evt), di-muon rDST+RAW (50 kB/evt), D* rDST+RAW (50 kB/evt), plus TAG.

Output stream | Input fraction | Reduction factor
b-exclusive   | 0.1            | 10
b-inclusive   | 0.45           | 100
di-muon       | 0.3            | 5
D*            | 0.15           | 5
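The per-stream event yields and storage per stripping pass follow directly from these fractions and reduction factors; a minimal sketch in Python, assuming the 2×10^10 events/a of the 2008 data sample (the results reproduce the backup table “Estimate of Pre-selection Analysis CPU & Storage”):

```python
# Sketch: per-stream yield and storage of one stripping pass (2008 assumptions).
events_per_year = 2.0e10
streams = {
    # name: (input fraction, reduction factor, output event size [kB])
    "b-exclusive": (0.10, 10, 100),   # DST+RAW
    "b-inclusive": (0.45, 100, 100),  # DST+RAW
    "di-muon":     (0.30, 5, 50),     # rDST+RAW
    "D*":          (0.15, 5, 50),     # rDST+RAW
}

for name, (frac, reduction, size_kb) in streams.items():
    yield_evt = events_per_year * frac / reduction
    storage_tb = yield_evt * size_kb / 1e9
    print(f"{name:12s} {yield_evt:.1e} evt  {storage_tb:5.0f} TB")
```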

Page 11: LHCb Computing Model

Pre-selection Analysis (II)

Pre-selection cuts are looser with respect to the final analysis and include sidebands to extract background properties.

The events that pass the selection criteria will be fully reconstructed (full DST, 75 kB/evt).

An Event Tag Collection is created for faster reference to selected events; it contains:

a brief summary of each event’s characteristics;

the results of the pre-selection algorithms;

a reference to the actual DST record.

Page 12: LHCb Computing Model

Pre-selection Analysis (III)

Required CPU for 1 pass of the 1-year data set: 0.29 MSi2k•a. Performed 4 times in a year.

Pass 1: during data taking, over a 7-month period. Quasi real-time (maximum delay of a few days) by CERN + Tier-1s. CPU power required for each Tier-1/CERN: 0.08 MSi2k.

Pass 2: after data taking, over a 1-month period. CPU power required for each Tier-1/CERN: 0.59 MSi2k.

Pass 3: after re-processing, during the winter shut-down, over a 2-month period. CPU power provided by the Event Filter Farm: 42% = 0.86 MSi2k. CPU power required for each Tier-1/CERN: 0.17 MSi2k.

Pass 4: before next-year data taking, over a 1-month period. CPU power required for each Tier-1/CERN: 4.1/7 = 0.59 MSi2k.

Page 13: LHCb Computing Model

Pre-selection Analysis (IV)

Input: 2 passes × 500 TB/a rDST.

Output: 4 passes × (119 + 20 = 139) TB/a DST+TAG.

Stored on MSS in 2 copies: one at CERN, the other divided among the Tier-1s:

4 × 139 = 556 TB/a @ CERN;

556/6 = 93 TB/a @ each Tier-1.

Stored on disk in 7 copies: one at CERN, one at each Tier-1. Older versions removed (2 versions kept):

2 × 139 TB/a @ CERN;

2 × 139 TB/a @ each Tier-1.
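A small sketch of this bookkeeping (Python; the 119 TB and 20 TB per pass are the DST and TAG volumes quoted above):

```python
# Sketch: stripping output volumes per year (2008 assumptions).
dst_per_pass_tb = 119          # DST+RAW output of one stripping pass
tag_per_pass_tb = 20           # TAG output of one stripping pass
passes_per_year = 4
n_tier1 = 6

per_pass_tb = dst_per_pass_tb + tag_per_pass_tb      # 139 TB
mss_cern_tb = passes_per_year * per_pass_tb          # 556 TB/a (full copy at CERN)
mss_per_tier1_tb = mss_cern_tb / n_tier1             # ~93 TB/a (second copy shared)
disk_per_site_tb = 2 * per_pass_tb                   # 278 TB (2 latest versions on disk)

print(mss_cern_tb, round(mss_per_tier1_tb), disk_per_site_tb)
```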

Page 14: LHCb Computing Model

Simulation

Simulation studies are usually performed in order to:

measure the performance of the detector and of the event selection as a function of the regions of phase space;

estimate the efficiency of the full reconstruction and analysis of the B decay channels.

Due to the large background rejection, a full simulation of background events is unfeasible. Moreover, it is better to rely on real data (mass sidebands) than on MC samples.

Simulation strategy: concentrate the simulation on what we consider the main-stream signals, in particular B decays and b-inclusive events.

Statistics must be sufficient that the total error is not dominated by the MC statistical error.

Page 15: LHCb Computing Model

Simulation (II)

2•10^9 signal events;

2•10^9 b-inclusive events;

10% of these events will pass the trigger simulation and will be reconstructed and stored on MSS.

6.5 MSi2k•a required (dominates the CPU needs of LHCb).

MC DST size (including “truth” information and relationships) is ~400 kB/evt. TAG size is ~1 kB/evt.

MSS storage: 160 TB/a.

Page 16: LHCb Computing Model

Analysis

Analysis starts from the stripped DST.

Output of stripping is self-contained, i.e. no need to navigate between files.

Analysis further reduces the sample (typically by a factor of 5) to focus on one particular analysis channel.

It produces an n-tuple object or a private stripped DST, used by a single physicist or a small group of collaborators.

Typical analysis jobs run on a ~10^6 event sample.

Some analysis jobs will run on a larger ~10^7 event sample.

Physics analysis: 0.3 kSi2k•s/evt; input: TAG + selected DST+RAW; output: user DST, user TAG and n-tuples for local analysis, leading to the paper.

Page 17: LHCb Computing Model

Analysis (II)

Estimate of analysis requirements, excluding efficiencies:

No. of physicists performing analysis         140 (25%)
No. of analysis jobs per physicist per week     4
Fraction of jobs analyzing 10^6 events         80%
Fraction of jobs analyzing 10^7 events         20%
Event size reduction factor after analysis      5
Number of active n-tuples                       5
2008 CPU needs [MSi2k•a]                      0.80
2008 Disk storage [TB]                         200

Page 18: LHCb Computing Model

Analysis (III)

CPU need in 2008 (including 60% efficiency): 1.3 MSi2k•a.

Due to better access to the RAW data, to past copies of stripped DST and to the availability of MC data, we foresee CERN servicing a larger fraction of the analysis:

CERN: 25%; Tier-1s: 75% (12.5% each).

CPU power required in 2008 for CERN: 1.3 × 0.25 = 0.32 MSi2k•a.

CPU power required in 2008 for each Tier-1: 1.3 × 0.75/6 = 0.16 MSi2k•a.

The CPU need for analysis will grow linearly with the available data in the early years of data taking (e.g. 3.9 MSi2k•a in 2010).

Disk storage need in 2008: ~200 TB (will grow linearly with the available data in the early years of the experiment, e.g. ~600 TB in 2010).
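A minimal sketch of this split (Python; the 1.3 MSi2k•a total, the 25%/75% share and the factor-3 growth to 2010 are the figures above):

```python
# Sketch: 2008 analysis CPU split and early-years growth (figures from the slide).
total_2008 = 1.3                     # MSi2k*a, including 60% chaotic-CPU efficiency
cern_frac, n_tier1 = 0.25, 6

cern_2008 = total_2008 * cern_frac                   # ~0.32 MSi2k*a at CERN
tier1_2008 = total_2008 * (1 - cern_frac) / n_tier1  # ~0.16 MSi2k*a per Tier-1

# Linear growth with the accumulated data: the slide's 3.9 MSi2k*a in 2010
# is three times the 2008 figure.
total_2010 = 3 * total_2008
print(f"2008: CERN {cern_2008:.3f}, per Tier-1 {tier1_2008:.3f}; "
      f"2010 total {total_2010:.1f} MSi2k*a")
```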

Page 19: LHCb Computing Model

Data Location (MSS)

Tier-1s:

INFN-CNAF (Bologna, Italy)

FZK (Karlsruhe, Germany)

IN2P3 (Lyon, France)

NIKHEF (Amsterdam, Netherlands)

PIC (Barcelona, Spain)

RAL (UK)

[Diagram: CERN and the Tier-1s hold on MSS: RAW ×2, rDST, DST ×2 and MC ×2.]

Page 20: LHCb Computing Model

2008

Assumed first year of full data taking:

10^7 seconds @ 2 × 10^32 cm^-2 s^-1, extended over 7 months (April-October). These are “stable running conditions”.

Data sample:

                   b-exclusive  di-muon   D*  b-inclusive  Total
Trigger rate [Hz]          200      600  300          900   2000
Events [×10^9]               2        6    3            9     20

Page 21: LHCb Computing Model

CPU Requirements in 2008

MSi2k•a      CERN  6 Tier-1s  Tier-1  14 Tier-2s  Tier-2  Total
Stripping    0.17       1.03    0.17        0.00    0.00   1.20
Recons.      0.40       2.42    0.40        0.00    0.00   2.83
Monte Carlo  0.00       0.00    0.00        7.65    0.55   7.65
Analysis     0.32       0.97    0.16        0.00    0.00   1.29
Total        0.90       4.42    0.73        7.65    0.55  12.97

Online Farm resources not presented here. CPU efficiencies: production 85%, analysis 60%.

Page 22: LHCb Computing Model

CPU Requirements in 2008 (II)

[Bar chart of the 2008 CPU requirements (kSi2k, 0-25000), split by site: Tier-2s, Tier-1s, CERN.]

Page 23: LHCb Computing Model

CPU Requirements in 2008 (III)

[Bar chart of the 2008 CPU requirements (kSi2k, 0-25000), split by activity: Monte Carlo, analysis, reconstruction, stripping.]

Page 24: LHCb Computing Model

Permanent Storage (MSS) in 2008

[Pie charts: by site, CERN 40% and Tier-1s 60%; by data type, RAW 29%, rDST 29%, Data DST 33%, MC DST 9%.]

TB        CERN  6 Tier-1s  Tier-1  Total
RAW        500        500      83   1000
rDST       143        857     143   1000
Data DST   556        556      93   1112
MC DST     160        160      27    321
Total     1359       2074     346   3433

Page 25: LHCb Computing Model

Fast Storage (Disk) in 2008

[Pie charts: by site, CERN 25%, Tier-1s 74%, Tier-2s 1%; by data type, RAW 4%, rDST 4%, Data DST 54%, MC DST 29%, Analysis 9%.]

TB        CERN  6 Tier-1s  Tier-1  14 Tier-2s  Tier-2  Total
RAW        136          0       0           0     0.0    136
rDST       136          0       0           0     0.0    136
Data DST   256       1534     256           0     0.0   1790
MC DST     229        687     115          23     1.6    939
Analysis    70        210      35           0     0.0    280
Total      826       2432     405          23     1.6   3281

Page 26: LHCb Computing Model

Network Bandwidth

Peak bandwidth needs exceed the average by about a factor of 2.

[MB/s]   CERN  Tier-1  Tier-2
Average    76     143      20
Peak      165     276      20

Page 27: LHCb Computing Model

Network Bandwidth (II)

[Bar chart of the monthly bandwidth (MB/s, 0-500) of the CERN→, Tier-1s→ and Tier-2s→ flows.]

Page 28: LHCb Computing Model

Network Bandwidth (III)

(MB/s)  Months    Tier-2s→  Tier-1s→  CERN→
2008    Jan-Mar         20         0      0
        Apr-Oct         20        39     34
        Nov             20       276     46
        Dec             20       128    165
2009    Jan             20       128    165
        Feb             20       276     46
        Mar             20         0      0
        Apr-Oct         20        39     34
        Nov             20       276     46
        Dec             20       266    188
2010    Jan             20       266    188
        Feb             20       276     46
        Mar             20         0      0
        Apr-Oct         20        78     40
        Nov             20       276     46
        Dec             20       266    188

Page 29: LHCb Computing Model

CPU growth

[MSi2k•a]     2006  2007   2008   2009   2010
CERN T0 + T1  0.27  0.54   0.90   1.25   1.88
Tier-1s       1.33  2.65   4.42   5.55   8.35
Tier-2s       2.29  4.59   7.65   7.65   7.65
Total         3.89  7.78  12.97  14.45  17.87

[Chart: CPU need profile (MSi2k•a) vs year for CERN T0 + T1, Tier-1s and Tier-2s.]

Page 30: LHCb Computing Model

Permanent Storage (MSS) growth

[TB]          2006  2007  2008  2009   2010
CERN T0 + T1   408   816  1359  2858   4566
Tier-1s        622  1244  2074  4286   7066
Tier-2s          -     -     -     -      -
Total         1030  2060  3433  7144  11632

[Chart: MSS need profile (TB) vs year for CERN T0 + T1, Tier-1s and Tier-2s.]

Page 31: LHCb Computing Model

Fast Storage (Disk) growth

[TB]          2006  2007  2008  2009  2010
CERN T0 + T1   248   496   826  1095  1363
Tier-1s        730  1459  2432  2897  3363
Tier-2s          7    14    23    23    23
Total          984  1969  3281  4015  4749

[Chart: disk need profile (TB) vs year for CERN T0 + T1, Tier-1s and Tier-2s.]

Page 32: LHCb Computing Model

Re-optimization Cost Comparison

                      Hoffman (200 Hz)  Now (2000 Hz)
                                  2007           2008
CERN     CPU [MSI2k]               2.0            0.9
         Disk [PB]                 0.3            0.8
         Tape [PB]                 1.2            1.4
Tier-1s  CPU [MSI2k]               8.3            4.4
         Disk [PB]                 1.6            2.4
         Tape [PB]                0.75            2.1
Tier-2s  CPU [MSI2k]                 -            7.6
         Disk [PB]                   -           0.02
Relative cost                      1.0            0.8

“Now” estimate based on the CERN financing model (no internal LAN estimates though); delay in purchasing, PASTA III report, …

Page 33: LHCb Computing Model

Tier-2 in Italy

In the LHCb computing model Monte Carlo production is performed at the Tier-2s.

LHCb-Italy currently has no priority on Tier-2 resources.

We see the following options:

Reserve some Tier-1 resources to perform Monte Carlo production as well.

Build up LHCb Tier-2(s).

Add resources for LHCb to existing Italian Tier-2s.

Page 34: LHCb Computing Model

Tier-1

In the LHCb computing model the Tier-1s are the primary user analysis facility.

We need fast random disk access in the Tier-1s.

We are investigating parallel file systems together with SAN technology as a means to achieve the required I/O.

Page 35: LHCb Computing Model

Testbed for Parallel File Systems @ CNAF

[Testbed layout: 14 file servers (40 GB IDE disks) running GPFS, PVFS and Lustre, connected through a Gigabit switch with 4 trunked Gigabit uplinks to a rack of 36 clients; a network-boot server serves the 14 file servers.]

Page 36: LHCb Computing Model

Parallel File Systems: Write Throughput Comparison

[Chart: write aggregate throughput (MB/s, 0-250) vs number of clients (1, 5, 10, 20, 30) for PVFS-1 POSIX, PVFS-2 POSIX, GPFS POSIX, Lustre POSIX, PVFS-1 NATIVE and PVFS-2 NATIVE.]

Page 37: LHCb Computing Model

Parallel File Systems: Read Throughput Comparison

[Chart: read aggregate throughput (MB/s, 0-400) vs number of clients (1, 5, 10, 20, 30) for PVFS-1 POSIX, PVFS-2 POSIX, GPFS POSIX, Lustre POSIX, PVFS-1 NATIVE and PVFS-2 NATIVE.]

Page 38: LHCb Computing Model

Back-up

Page 39: LHCb Computing Model

Trigger/DAQ/Computing Re-optimization

In the original model only b-exclusive decays were collected (200 Hz).

The idea was to understand the properties of the background through the simulation of large samples of background events.

In the meanwhile, also having in mind the Tevatron experience, we realized that, whenever and as much as possible, we need to extract information from the real data itself:

e.g. study the background from the sidebands of the mass spectrum;

collect unbiased samples of b events, e.g. by triggering on the semileptonic decay of the other B.

The net effect is a reduction of the CPU needed for simulation but an increase in the storage needed, with no overall increase in the cost.

Page 40: LHCb Computing Model

Dimuon Events

Simple and robust trigger:

L1: ~1.8 kHz of high-mass dimuon triggers without IP cuts.

HLT (with offline tracking and muID): ~600 Hz of high-mass dimuon candidates (J/ψ or above; mass within 500 MeV of the J/ψ or B mass, or above the B mass).

Clean and abundant signals: J/ψ, Υ(1S), …, Z mass peaks.

Unique opportunity to understand the tracking (pin down systematics):

Mass and momentum (B field calibration): use dimuons from resonances of known masses.

IP, decay length, proper-time resolution: use J/ψ dimuons, which have a common origin.

Check of trigger biases: the flat acceptance (vs proper time) for all B → J/ψ X channels could be used as a handle to understand the acceptance vs proper time for other channels where IP cuts are applied.

Huge statistics enables studies as a function of many parameters: geometry, kinematics (phase space).

Page 41: LHCb Computing Model

J/ψ signal

Loose offline selection, after L0 and L1-dimuon without IP cut (no HLT yet):

~130 Hz of signal J/ψ, dominated by prompt production.

O(10^9) signal J/ψ per year = O(10^3) × CDF’s statistics.

Possible conceptual use for the calibration of the proper-time resolution (to be studied):

The Bs → Ds h CP/mixing fit is sensitive to ~5% variations in the global scale factor on the proper-time resolution; it would help to know it to ~1%. O(10^5) J/ψ are needed for such precision.

To check event-by-event errors, extract scale factors in “phase space” cells; one can envisage up to 10^4 cells (e.g. 10 bins in 4 variables).

Page 42: LHCb Computing Model

D* events

A dedicated selection can collect an abundant and clean D* → D0(Kπ)π peak without PID requirements.

Such events can be used for PID (K and π) calibration + an additional constraint for the mass scale, etc.

Large statistics again allows studies in bins of phase space.

Page 43: LHCb Computing Model

b → μ events

Straightforward trigger at all levels: require a muon with minimum pT and impact-parameter significance (IPS).

Rely only on one track (robustness!).

No bias on the other b-hadron: a handle to study and understand our other, highly biasing B selections.

Example: set the pT threshold at 3 GeV/c and the IPS threshold at 3:

900 Hz output rate, including 550 Hz of events containing a true b → μ decay.

Page 44: LHCb Computing Model

Estimate of Reconstruction CPU & MSS

Off-line computing requirements for the reconstruction:

                                      b-exclusive  b-inclusive   di-muon        D*       Total
Input fraction                                0.1         0.45       0.3       0.15       1.00
Number of events                         2.0•10^9     9.0•10^9  6.0•10^9   3.0•10^9   2.0•10^10
CPU [MSi2k•a]                                0.15         0.68      0.45       0.23       1.52
Storage per reconstruction pass [TB]           50          225       150         75        500

Page 45: LHCb Computing Model

Estimate of Reconstruction CPU & MSS (II)

Required CPU for 1 pass of the 1-year data set: 1.52 MSi2k•a. Performed twice in a year.

1st pass (during data taking, over a 7-month period):

CPU power required (assuming 85% CPU usage efficiency): (1.52 - 0.15) × 12/7 × 100/85 = 2.8 MSi2k.

CPU power required for each Tier-1: 2.8/7 = 0.39 MSi2k.

2nd pass (re-processing, during the winter shut-down, over a 2-month period):

CPU power required (assuming 85% CPU usage efficiency): 1.52 × 12/2 × 100/85 = 10.7 MSi2k.

CPU power provided by the Event Filter Farm: 5.5 MSi2k.

CPU power to be shared between CERN and the Tier-1s: 5.2 MSi2k.

CPU power required for each Tier-1: 5.2/7 = 0.74 MSi2k.
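A small sketch of the rule being applied here (Python): an annual CPU requirement in MSi2k•a is converted into installed power by scaling for the fraction of the year in which the pass runs and for the assumed CPU efficiency; the 0.15 MSi2k•a subtracted in the 1st pass is the b-exclusive share handled in real time by the Event Filter Farm.

```python
# Sketch: converting an annual CPU need (MSi2k*a) into required installed power (MSi2k).
def required_power(msi2k_a, months, efficiency=0.85):
    """Work to be done within `months`, at the given scheduled-CPU efficiency."""
    return msi2k_a * 12.0 / months / efficiency

recon_per_pass = 1.52      # MSi2k*a for one reconstruction pass
b_exclusive_share = 0.15   # done in real time by the Event Filter Farm (pass 1)
eff_farm_power = 5.5       # MSi2k available from the farm during the shut-down
n_tier1 = 6                # CERN + 6 Tier-1s -> 7 equal shares

pass1 = required_power(recon_per_pass - b_exclusive_share, months=7)  # ~2.8 MSi2k
pass2 = required_power(recon_per_pass, months=2)                      # ~10.7 MSi2k
shared = round(pass2 - eff_farm_power, 1)                             # ~5.2 MSi2k, rounded as on the slide
print(f"pass 1: {pass1:.1f} -> {pass1 / (n_tier1 + 1):.2f} per Tier-1")
print(f"pass 2: {pass2:.1f} -> {shared / (n_tier1 + 1):.2f} per Tier-1")
```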

Page 46: LHCb Computing Model

Estimate of Pre-selection Analysis CPU & Storage

Reduction factors and computing requirements of the stripping stage:

                            b-exclusive  b-inclusive   di-muon        D*       Total
Input fraction                      0.1         0.45       0.3       0.15       1.00
Reduction factor                     10          100         5          5       9.57
Event yield per stripping      2.0•10^8     9.0•10^7  1.2•10^9   6.0•10^8   2.09•10^9
CPU [MSi2k•a]                      0.03         0.06      0.13       0.06       0.29
Storage per stripping [TB]           20            9        60         30        119
TAG [TB]                              2            9         6          3         20

Page 47: LHCb Computing Model

Estimate of Pre-selection Analysis CPU & Storage (II)

Required CPU for 1 pass of the 1-year data set: 0.29 MSi2k•a. Performed 4 times in a year.

1st pass (during data taking, over a 7-month period):

CPU power required (assuming 85% CPU usage efficiency): 0.29 × 12/7 × 100/85 = 0.58 MSi2k.

CPU power required for each Tier-1/CERN: 0.58/7 = 0.08 MSi2k.

2nd pass (after data taking, over a 1-month period):

CPU power required (assuming 85% CPU usage efficiency): 0.29 × 12/1 × 100/85 = 4.1 MSi2k.

CPU power required for each Tier-1/CERN: 4.1/7 = 0.59 MSi2k.

Page 48: LHCb Computing Model

Estimate of Pre-selection Analysis CPU & Storage (III)

3rd pass (after re-processing, during the winter shut-down, over a 2-month period):

CPU power required (assuming 85% CPU usage efficiency): 0.29 × 12/2 × 100/85 = 2.05 MSi2k.

CPU power provided by the Event Filter Farm: 42% = 0.86 MSi2k.

CPU power to be shared between CERN and the Tier-1s: 1.19 MSi2k.

CPU power required for each Tier-1/CERN: 1.19/7 = 0.17 MSi2k.

4th pass (before next-year data taking, over a 1-month period):

CPU power required (assuming 85% CPU usage efficiency): 0.29 × 12/1 × 100/85 = 4.1 MSi2k.

CPU power required for each Tier-1/CERN: 4.1/7 = 0.59 MSi2k.
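The same annual-need-to-installed-power conversion applies to the four stripping passes; a self-contained sketch in Python (same helper as in the reconstruction sketch) that reproduces the per-pass figures above up to rounding:

```python
# Sketch: installed power needed for the four stripping passes (2008 figures).
def required_power(msi2k_a, months, efficiency=0.85):
    return msi2k_a * 12.0 / months / efficiency

strip_per_pass = 0.29   # MSi2k*a for one stripping pass
farm_share = 0.86       # MSi2k provided by the Event Filter Farm during pass 3
n_sites = 7             # CERN + 6 Tier-1s

for label, months, farm in [("pass 1", 7, 0.0), ("pass 2", 1, 0.0),
                            ("pass 3", 2, farm_share), ("pass 4", 1, 0.0)]:
    total = required_power(strip_per_pass, months)
    per_site = (total - farm) / n_sites
    print(f"{label}: {total:.2f} MSi2k total, {per_site:.2f} per Tier-1/CERN")
```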

Page 49: LHCb Computing Model

Estimate of Simulation CPU & MSS

           Application  Nos. of events  CPU time/evt [kSi2k•s]  Total CPU [kSi2k•a]
Signal     Gauss               2•10^9                      50                 3171
           Boole               2•10^9                       1                   63
           Brunel              2•10^8                     2.4                   15
Inclusive  Gauss               2•10^9                      50                 3171
           Boole               2•10^9                       1                   63
           Brunel              2•10^8                     2.4                   15
Total                                                                         6499
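The last column is just events × CPU time per event, converted into kSi2k•a; a sketch in Python, assuming the "a" in the unit is a calendar year of 365 × 86400 s (an assumption that reproduces the quoted numbers):

```python
# Sketch: simulation CPU budget, converting kSi2k*s/evt into kSi2k*a.
SECONDS_PER_YEAR = 365 * 86400   # assumed calendar year (~3.15e7 s)

steps = [
    # (sample, application, number of events, CPU time per event [kSi2k*s])
    ("signal",    "Gauss",  2e9, 50),
    ("signal",    "Boole",  2e9, 1),
    ("signal",    "Brunel", 2e8, 2.4),   # only the ~10% passing the trigger
    ("inclusive", "Gauss",  2e9, 50),
    ("inclusive", "Boole",  2e9, 1),
    ("inclusive", "Brunel", 2e8, 2.4),
]

total = 0.0
for sample, app, n_evt, cpu_per_evt in steps:
    ksi2k_a = n_evt * cpu_per_evt / SECONDS_PER_YEAR
    total += ksi2k_a
    print(f"{sample:9s} {app:7s} {ksi2k_a:6.0f} kSi2k*a")
print(f"total {total:6.0f} kSi2k*a")   # ~6.5 MSi2k*a
```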

Page 50: LHCb Computing Model

Estimate of Simulation CPU & MSS (II)

           Output  Nos. of events  Storage/evt [kB]  Total storage [TB]
Signal     DST            2•10^8               400                80.0
           TAG            2•10^8                 1                 0.2
Inclusive  DST            2•10^8               400                80.0
           TAG            2•10^8                 1                 0.2
Total                                                            160.4

Page 51: LHCb Computing Model

Estimate of Analysis CPU

140 physicists;

4 jobs physicist^-1 week^-1;

52 weeks/a;

2.8×10^6 events/job;

0.3 kSi2k•s/evt;

efficiency = 0.6.

Jobs/a = 140 × 4 × 52 ≈ 30000.

CPU required in 2008: 30000 jobs × 2.8×10^6 evt/job × 0.3 kSi2k•s/evt = 2.5×10^10 kSi2k•s = 2.5×10^10 kSi2k • 3×10^-8 a = 0.80 MSi2k•a.

CPU required in 2008 (including 60% efficiency): 0.80/0.6 = 1.3 MSi2k•a.
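A sketch of the same estimate in Python; the 2.8×10^6 evt/job is the average of the job mix on the "Analysis (II)" slide (80% of jobs on 10^6 events, 20% on 10^7), and the result matches the slide's 0.80 and 1.3 MSi2k•a up to rounding (the slide rounds the job count up to 30000):

```python
# Sketch: 2008 analysis CPU estimate.
SECONDS_PER_YEAR = 3.15e7                # ~3e-8 a per second, as used on the slide

n_physicists = 140
jobs_per_week = 4
weeks_per_year = 52
events_per_job = 0.8 * 1e6 + 0.2 * 1e7   # 2.8e6 evt/job (80%/20% job mix)
cpu_per_event = 0.3                      # kSi2k*s/evt
chaotic_efficiency = 0.6

jobs_per_year = n_physicists * jobs_per_week * weeks_per_year   # ~30000
cpu_ksi2k_s = jobs_per_year * events_per_job * cpu_per_event    # ~2.5e10 kSi2k*s
cpu_msi2k_a = cpu_ksi2k_s / SECONDS_PER_YEAR / 1e3              # ~0.80 MSi2k*a
print(f"{cpu_msi2k_a:.2f} MSi2k*a -> "
      f"{cpu_msi2k_a / chaotic_efficiency:.1f} MSi2k*a at 60% efficiency")
```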

