
Exploiting the Grid to Simulate and Design the LHCb Experiment

K Harrison [1], N Brook [2], G Patrick [3], E van Herwijnen [4], on behalf of the LHCb Grid Group and GridPP

[1] University of Cambridge, [2] University of Bristol, [3] Rutherford Appleton Laboratory, [4] CERN

LHCb Experiment

[Detector diagram: cutaway view of the LHCb spectrometer, with the Vertex detector, RICH-1, Tracker, Coil, Yoke, Shielding, RICH-2, Calorimeters and Muon Detector labelled.]

Weight: 4270 tonnes (magnet: 1500 tonnes)
Dimensions: 18 m x 12 m x 12 m
Number of electronic channels: 1.2 million
Located 100 m underground at the Large Hadron Collider (LHC)
Due to start taking data in 2007

LHC

Proton beams colliding at an energy of 14 TeV
2835 bunches per beam
10^11 protons per bunch
40 MHz crossing rate

[Event display: example of decays of B0 and anti-B0 mesons, shown with a 10 mm scale.]

Grid – A Single Resource

[Diagram: the LHCb computing challenge (petabytes of data storage, many millions of events, many samples, various conditions, many 1000s of computers required, distributed resources, heterogeneous operating systems, a worldwide collaboration) met by the GRID as a unified approach.]

LHCb is a particle-physics experiment that will study subtle differences between matter and antimatter. The design and construction of the experiment is being undertaken by some 500 scientists, from 50 institutes, in 14 countries around the world. The experiment will be located 100 m underground at the Large Hadron Collider (27 km circumference) being built at CERN in Geneva. The decays of more than 10^9 short-lived particles, known as B0 and anti-B0 mesons, will be studied at LHCb each year. To optimise the detector design, and to understand the physics, many millions of particle interactions must be simulated. The first data are expected in early 2007.

Grid technology is being deployed to make use of globally distributed computing resources to satisfy the LHCb requirements. A prototype system, based on the existing software, is already operational. This system deals with job submission and execution, data transfer, bookkeeping, and the monitoring of data quality. DataGrid middleware is being utilised as it becomes available. In this way, LHCb is able both to produce the simulated datasets for the detector studies, and to feed back experience and ideas into the design of the Grid.

Computing centres on the Grid are being integrated into the LHCb production system as they come on line. These currently include CERN, IN2P3 (Lyon), CNAF (Bologna), NIKHEF (Amsterdam) and the EU DataGrid Testbed. The UK participates through the university institutes of Bristol, Cambridge, Edinburgh, Glasgow, Imperial College, Liverpool and Oxford, and through the Rutherford Appleton Laboratory.

Job definition and submission are through a web page. A central web server can submit to individual farms using AFS, or to the DataGrid Testbed using Grid middleware. A Java servlet generates the required job scripts, as well as specifying the random-number seeds and job options.
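A minimal sketch, assuming a standard Java servlet, of how the submission page's parameters might be turned into a job script with a fresh random-number seed; the class name, form fields and script commands are illustrative only, not the actual LHCb production servlet.

// Hypothetical sketch: turns web-form parameters into a simulation job script.
import java.io.IOException;
import java.io.PrintWriter;
import java.util.Random;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class JobScriptServlet extends HttpServlet {

    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Job parameters chosen on the submission web page (assumed form fields).
        String site = request.getParameter("site");          // e.g. "CERN", "IN2P3"
        int nEvents = Integer.parseInt(request.getParameter("events"));
        String options = request.getParameter("jobOptions");

        // Each job gets its own random-number seed so samples are independent.
        long seed = new Random().nextLong();

        response.setContentType("text/plain");
        PrintWriter out = response.getWriter();
        out.println("#!/bin/sh");
        out.println("# Auto-generated simulation job for site " + site);
        out.println("export RANDOM_SEED=" + seed);
        out.println("simulate --events " + nEvents + " --options " + options);
        out.close();
    }
}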

Job submission and monitoring in the LHCb distributed environment are currently performed using PVSS II, a commercial Supervisory Control and Data Acquisition (SCADA) system developed by ETM. The same system has been adopted by the LHC experiments for detector monitoring and control during data taking.

A typical simulation job processes 500 events, and produces a dataset with a size of the order of gigabytes. These data are transferred directly to the CASTOR mass-storage facility at CERN, using the bbftp system. Copies of datasets are stored locally at the larger computing centres, which will also accept data from external sites.
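A hedged sketch of the transfer step: a small Java helper that invokes the bbftp client to copy one output file to CASTOR. The bbftp flags, account name and CASTOR path shown here are assumptions for illustration, not the production configuration.

// Illustrative sketch only: ship one simulation output file to the mass store via bbftp.
import java.io.IOException;

public class DatasetTransfer {

    public static void main(String[] args) throws IOException, InterruptedException {
        String localFile = "sim_500events.dat";                             // hypothetical output file
        String remoteFile = "/castor/cern.ch/lhcb/mc/sim_500events.dat";    // hypothetical CASTOR path

        // Assumed bbftp invocation: "-e" passes the transfer command,
        // "-u" the remote account, last argument is the target host.
        String[] command = {
            "bbftp",
            "-e", "put " + localFile + " " + remoteFile,
            "-u", "lhcbprod",
            "bbftp.cern.ch"
        };

        Process p = new ProcessBuilder(command).inheritIO().start();
        int status = p.waitFor();
        System.out.println("bbftp exit status: " + status);
    }
}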

Bookkeeping is performed using Java classes that interact with a central Oracle database at CERN via a servlet. The central database will also hold information for simulated datasets stored externally to CERN.
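A sketch of the client side of the bookkeeping step, assuming the Java classes talk to the central servlet over HTTP; the servlet URL and parameter names are invented for illustration and are not the real bookkeeping interface.

// Hedged sketch: a client-side Java class posts dataset metadata to a central
// bookkeeping servlet, which in turn writes to the Oracle database at CERN.
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class BookkeepingClient {

    public static void registerDataset(String dataset, int nEvents, String site)
            throws IOException {
        // Assumed servlet endpoint for bookkeeping updates.
        URL url = new URL("http://lhcb-bookkeeping.cern.ch/servlet/update");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);

        String form = "dataset=" + URLEncoder.encode(dataset, "UTF-8")
                    + "&events=" + nEvents
                    + "&site=" + URLEncoder.encode(site, "UTF-8");

        Writer out = new OutputStreamWriter(conn.getOutputStream(), "UTF-8");
        out.write(form);
        out.close();

        System.out.println("Bookkeeping servlet replied: " + conn.getResponseCode());
    }

    public static void main(String[] args) throws IOException {
        registerDataset("sim_500events.dat", 500, "CERN");
    }
}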

[Production-workflow diagram: Submit jobs remotely via Web -> Execute on farm -> Transfer data to mass store -> Check data quality -> Update bookkeeping database -> Analysis.]
