
The U.S. CMS Grid

Lothar A. T. Bauerdick, Fermilab
Project Manager

Joint DOE and NSF Review of U.S. LHC Software and Computing
Lawrence Berkeley National Lab, Jan 14-17, 2003


US CMS S&C Scope and Deliverables

CAS subproject: provide software engineering support for CMS.

UF subproject: provide the S&C environment for doing LHC physics in the U.S.
Develop and build "User Facilities" for CMS physics in the U.S.:
- A Grid of Tier-1 and Tier-2 Regional Centers connecting to the Universities
- A robust infrastructure of computing, storage and networking resources
- An environment to do research in the U.S. and in globally connected communities
- A support infrastructure for physicists and detector builders doing research

This U.S. infrastructure -- together with the U.S. share in developing the framework software -- is the U.S. contribution to the CMS software and computing needs.

Cost objective for FY03 is $4M.


US CMS Tiered System

Tier-1 center at Fermilab provides computing resources and support:
- User support for the "CMS physics community", e.g. software distribution, help desk
- Support for Tier-2 centers and for the Physics Analysis Center at Fermilab, including information services, Grid operation services, etc.

Five Tier-2 centers in the U.S.:
- Together they will provide the same CPU/disk resources as the Tier-1
- Facilitate "involvement of the collaboration" in S&C development
- Prototyping and test-bed effort very successful
- Universities to "bid" to host a Tier-2 center, taking advantage of resources and expertise
- Tier-2 centers to be funded through the NSF program for "empowering Universities"
- Proposal to the NSF for 2003 to 2008 was submitted in Oct 2002

The US CMS system from the beginning spans the Tier-1 and Tier-2 systems.
There is an economy of scale, and we plan for a central support component.

We have already started to make "opportunistic" use of resources that are NOT Tier-2 centers:
- Important for delivering the resources to physics AND for involving the Universities
- e.g. the UW Madison Condor pool, MRI initiatives at several Universities


The US CMS Grid System

The US CMS Grid System of T1 and T2 prototypes and testbeds has a really important function within CMS:
- help develop a truly global and distributed approach to the LHC computing problem
- ensure full participation of the US physics community in the LHC research program

To succeed requires the ability and ambition for leadership ... and strong support to get the necessary resources!

- US CMS has prototyped Tier-1 and Tier-2 centers for CMS production
- US CMS has worked with the Grids and VDT to harden middleware products
- US CMS has integrated the VDT middleware into the CMS production system
- US CMS has deployed the Integration Grid Testbed and used it for real productions
- US CMS will participate in the series of CMS Data Challenges
- US CMS will take part in the LCG "Production Grid" milestone in 2003


From Facilities to a Grid Fabric

We have deployed a system of a Tier-1 center prototype at Fermilab and Tier-2 prototype facilities at Caltech, U.Florida and UCSD.
- Prototype systems operational and fully functional
- US CMS Tier-1/Tier-2 system very successful
- R&D, Grid integration and deployment
  - e.g. high-throughput data transfers Tier-2/Tier-1 and to CERN; data throughput of O(1 TB/day) achieved! (see the back-of-the-envelope check below)
  - Storage management, Grid monitoring, VO management

The Tier-1/Tier-2 distributed User Facility was used very successfully in the large-scale, world-wide production challenge -- part of a 20 TB world-wide effort to produce simulated and reconstructed MC events for HLT studies, which ended on schedule in June 2002.
Large data samples (Objectivity and nTuples) have been made available to the physics community (DAQ TDR).

Using Grid technologies, with the help of Grid projects and Grid middleware developers, we have prepared the CMS data production and data analysis environment to work in a Data Grid environment.
- Grid-enabled MC production system operational
- Intense collaboration with US Grid projects to make middleware fit for CMS...
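As a rough sanity check on the O(1 TB/day) figure quoted above, a minimal sketch (assuming decimal units, 1 TB = 10^12 bytes, and transfers spread evenly over 24 hours) of the sustained bandwidth it implies:

```python
# Back-of-the-envelope check: sustained bandwidth implied by 1 TB/day.
# Assumes decimal units (1 TB = 1e12 bytes) and an even 24-hour transfer window.
bytes_per_day = 1e12
seconds_per_day = 24 * 3600

rate_mb_s = bytes_per_day / seconds_per_day / 1e6   # megabytes per second
rate_mbit_s = rate_mb_s * 8                         # megabits per second

print(f"{rate_mb_s:.1f} MB/s  ~  {rate_mbit_s:.0f} Mbit/s sustained")
# -> about 11.6 MB/s, i.e. on the order of 90-100 Mbit/s sustained
```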


Preparing CMS for the Grid

Making US CMS and CMS fit for working in a Grid environment:
- Production environment and production operations
- Deployment, configuration and management of systems and of the middleware environment at CMS sites
- Monitoring of the Grid fabric and configuration
- Providing information services, setup of servers
- Management of the user base on the Grid, interfacing to local specifics at Universities and labs (VO) -- see the sketch after this list
- Devising a scheme for software distribution and configuration (DAR, packman) of CMS application s/w
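To illustrate the kind of VO-to-local-account mapping referred to above, here is a minimal, hypothetical sketch in the spirit of a Globus grid-mapfile lookup. The file path, DN and account name are invented for illustration; this is not the actual US CMS VO tooling.

```python
# Minimal sketch: map a Grid certificate DN to a local account
# using a grid-mapfile-style lookup.  Path and account names are
# hypothetical; real sites maintain their own mappings.
def load_gridmap(path="/etc/grid-security/grid-mapfile"):
    """Parse lines of the form:  "/O=.../CN=Some User" localaccount"""
    mapping = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            if line.startswith('"'):
                dn, _, rest = line[1:].partition('"')
                tokens = rest.split()
                account = tokens[0] if tokens else None
            else:
                parts = line.split()
                dn, account = " ".join(parts[:-1]), parts[-1]
            if account:
                mapping[dn] = account
    return mapping

def local_account_for(dn, mapping):
    # None means: this DN is not mapped (not a recognized VO member) at this site.
    return mapping.get(dn)

# Example usage (hypothetical DN):
# mapping = load_gridmap()
# print(local_account_for("/O=doesciencegrid.org/OU=People/CN=Jane Physicist", mapping))
```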

In all these areas we have counted on significant contributions from the Grid Projects.
These efforts are therefore being tracked in the project through the US CMS S&C WBS.


R&D → Integration: USCMS Integration Grid Testbed (IGT)

This is the combined USCMS Tier-1/Tier-2 + Condor/VDT team resources:
- Caltech, Fermilab, U Florida, UC San Diego, + UW Madison
- About 230 CPUs (750 MHz equivalent, RedHat Linux 6.1)
  – an additional 80 CPUs at 2.4 GHz running RedHat Linux 7.x
- About 5 TB of local disk space plus Enstore mass storage at FNAL using dCache

Globus and Condor core middleware:
- Using Virtual Data Toolkit (VDT) 1.1.3 (with many fixes to issues discovered in the testbed)
- With this version, bugs have been shaken out of the core middleware products

IGT Grid-wide monitoring tool (MonALISA):
- Physical parameters: CPU load, network usage, disk space, etc. (a toy probe is sketched at the end of this slide)
- Dynamic discovery of monitoring targets and schema
- Interfaces to/from other monitoring packages

Commissioning through large productions:
- 1.5M physics events from the generation stage all the way through analysis Ntuples
- successfully finished in time for Christmas...

Integration Grid Testbed operational as a step toward a production-quality Grid service.
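To make the "physical parameters" monitoring bullet above concrete, here is a minimal, hypothetical probe that samples local CPU load and disk space and prints a record. It is illustrative only and does not use the actual MonALISA agent, APIs or schema (and it relies on Unix-only Python calls):

```python
# Toy monitoring probe: samples the kind of "physical parameters"
# (CPU load, disk space) a Grid-wide monitor collects for each node.
# Illustrative only -- not the MonALISA agent or its real schema.
import os
import socket
import time

def sample_node_status(path="/"):
    load1, load5, load15 = os.getloadavg()        # 1/5/15-minute CPU load averages (Unix)
    st = os.statvfs(path)                         # filesystem statistics for `path` (Unix)
    free_gb = st.f_bavail * st.f_frsize / 1e9     # space available to non-root users
    total_gb = st.f_blocks * st.f_frsize / 1e9
    return {
        "host": socket.gethostname(),
        "time": time.time(),
        "load1": load1,
        "disk_free_gb": round(free_gb, 1),
        "disk_total_gb": round(total_gb, 1),
    }

if __name__ == "__main__":
    # A real agent would publish this record to a monitoring service;
    # here we just print it once.
    print(sample_node_status())
```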


IGT results shown to the CMS Plenary in Dec

Efficiency approaching "traditional" (Spring 2002-type) CPU utilization, with much reduced manpower effort to run production.

LCG getting involved with an IGT installation at CERN, getting ready for the US and Europe to combine forces for the LCG.

5-Dec-2002: "Grids NOW"

[Production plot, Oct 25 onward -- image not reproduced in this transcript]

IGT Results (so far):
- Time to process 1 event: 500 sec @ 750 MHz
- Speedup: average factor of 100 during the current run
- Resources: approximately 230 CPUs @ 750 MHz equivalent
- Sustained efficiency: about 43.5%
- 680K events as of Monday!
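A rough consistency check of these numbers -- a sketch only, since the slide does not state the exact length of the run; roughly 38 days between Oct 25 and the Monday before the 5-Dec plot is assumed here:

```python
# Order-of-magnitude check that the quoted IGT numbers hang together.
# Assumption (not stated on the slide): ~38 days of running, Oct 25 -> early Dec.
cpus = 230                 # ~230 CPUs, 750 MHz equivalent
sec_per_event = 500        # 500 s per event at 750 MHz
efficiency = 0.435         # sustained efficiency ~43.5%
days = 38                  # assumed length of the run

events_per_day = cpus * efficiency * 86400 / sec_per_event
total_events = events_per_day * days

print(f"{events_per_day:,.0f} events/day, ~{total_events/1e3:,.0f}K events in {days} days")
# -> roughly 17,000 events/day and ~660K events, in the same ballpark as
#    the "680K events as of Monday" quoted on the slide.
```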


Grid Efforts Integral Part of US CMS Work

Trillium Grid Projects in the US: PPDG, GriPhyN, iVDGL
- PPDG effort for CMS at Fermilab, UCSD and Caltech, working with US CMS S&C people
- Large influx of expertise and very dedicated effort from U. Wisconsin Madison through the Condor and "Virtual Data Toolkit" (VDT) teams
- We are using VDT for the deployment of Grid middleware and infrastructure -- sponsored by PPDG and iVDGL, now adopted by EDG and LCG
- Investigating use of GriPhyN VDL technology in CMS --- virtual data is "on the map"

US CMS Development Testbed: development and explorative Grid work
- Allows us to explore technologies: MOP, GDMP, VO I/s, integration with EU grids...
- Led by PPDG and GriPhyN staff at U.Florida and Fermilab
- All pT2 sites and Fermilab + Wisconsin involved -- ready to enlarge that effort
- This effort is mostly Grid-sponsored: PPDG, iVDGL
- Direct support from middleware developers: Condor, Globus, EDG, DataTag

Integration Grid Testbed: Grid deployment and integration
- Again using manpower from iVDGL and project-funded Tier-2 operations, the PPDG-, GriPhyN- and iVDGL-sponsored VDT, PPDG-sponsored VO tools, etc.


Agreements with the iVDGL

iVDGL: facilities and operations
- US CMS pT2 centers selection process through US CMS and the PM
- Initial US CMS pT2 funding arrived through a supplemental grant on iVDGL in 2002
- iVDGL funds h/w upgrades and, in total, ~3 FTE at Caltech, U.Florida and UCSD
- Additional US CMS S&C funding of 1 FTE at each pT2 site for operations and management support, out of the NSF Research Program S&C grant

The full functioning of pT2 centers in the US CMS Grid is fundamental to success!

Negotiated an MOU between iVDGL, the iVDGL US CMS pT2 institutions and the S&C project
- Agreement with iVDGL management on the MOU achieved in recent steering meeting(s)
- Now converging towards signature!
- MOU text: see handouts

Grid-sponsored efforts towards US CMS are now explicitly recognized in the new project plan (WBS) and are being tracked by the project managers.
The MOU allows us to formalize this and give it the appropriate recognition by the experiments.

I believe we will likely need some MOU with the VDT.


Connections of USCMS to the LCG

LCG has a sophisticated structure of committees, boards, forums and meetings.
Organizationally, the US LHC projects are not "ex officio" in any of those bodies.

CERN and the LCG have sought representation from "countries", "regional centers", and Grid experts / individuals, but have not yet taken advantage of the project structure of US LHC S&C.

US CMS is part of CMS, and our relationship and collaboration with the LCG is defined through this "being part of" CMS.
- US LHC Projects come in through both the experiments' representatives and the specific US representation
- We will contribute to the LHC Computing Grid working through CMS, under the scientific leadership of CMS
- These contributions are US deliverables towards CMS, and should become the subject of MOUs or similar agreements

Beyond those CMS deliverables, in order for US physicists to acquire and maintain a leadership position in LHC research, there need to be resources specific to US physicists.

Q: direct deliverables from US Grid projects to the LCG? e.g. VDT:
Grid middleware deployment for the LHC through US LHC, or directly from the Grid projects?


Global LCG and US CMS Grid

We expect that the LCG will address many issues related to running a distributed environment and will propose implementations.
- This is expected from the Grid Deployment Board Working Groups

A "cookie cutter" approach will not be a useful first step
- We are not interested in setting up identical environments at a smallish set of regional centers
- Nor in defining a minimal environment down to the last version level, etc.
- With the IGT (and the "EDG CMS stress test") we should be beyond this
- In the US we already have working (sub-) Grids: IGT, the ATLAS Testbed, WorldGrid -- it can be done!
- Note however that a large part of the functionality is either missing, of limited scalability, and/or experiment-specific software

From the start we need to employ a model that allows sub-organizations or sub-Grids to work together
- There will always be site specifics: these should be dealt with locally
  - e.g. constraints through different procurement procedures, DOE-lab security, time zones, etc.
- The whole US CMS project and funding model foresees that the Tier-1 center takes care of much of the US-wide support issues, and assumes that half of the resources come from Tier-2 centers with limited local manpower
- This is cost-effective and a good way to proceed towards the goal of a distributed LHC research environment, and on the way it broadens the base and the buy-in to make sure we are successful
- BTW: US CMS has always de-emphasized the role of the Tier-1 prototype as a provider of "raw power", counting instead on assembling the efforts of a distributed set of sites in the IGT and the production grid


Dealing with the LCG Requirements

We are adapting to work within the LCG approach:
- Grid Use Cases and Scenarios: US participation in the GAG, follow-up of the HEPCAL RTAG
- Working Groups in the Grid Deployment Board: not yet "invited" to work directly in the working groups; we work through the US GDB representative (Vicky White)
- Architects Forum for the application area: proposed and started a sub-group with US participation, refining the blueprints of Grid interfaces

We have to ensure our leadership position for US LHC S&C
- We have to develop a clear understanding of what is workable for the US
- We have to ensure that appropriate priorities are set in the LCG on a flexible distributed environment to support remote physics analysis requirements
- We have to be in a position to propose solutions --- and in some cases alternative solutions --- that would meet the requirements of CMS and US CMS

US CMS has set itself up to be able to learn, prototype and develop, while providing a production environment to cater to CMS, US CMS and LCG demands.


US CMS Approach to R&D, Integration, Deployment

Approach: "Rolling Prototypes" -- evolution of the facility and data systems through prototyping, early roll-out, strong QC/QA & documentation, and tracking of external "practices"
- Test stands for various hardware components (and fabric-related software components) -- this allows us to sample emerging technologies with small risk (WBS 1.1.1)
- Setup of a test(bed) system out of next-generation components -- always keeping a well-understood and functional production system intact (WBS 1.1.2)
- Deployment of a production-quality facility -- comprised of well-defined components with well-defined interfaces that can be upgraded component-wise, with a well-defined mechanism for changing the components to minimize risks (WBS 1.1.3)

This matches the general strategy of "rolling replacements", thereby upgrading facility capacity making use of Moore's law.


US CMS Grid Technology Cycles

Correspondingly, our approach to developing the software systems for the distributed data processing environment adopts "rolling prototyping":
- Analyze current practices in distributed systems processing and in external software, like Grid middleware (WBS 1.3.1, 1.3.2)
- Prototyping of the distributed processing environment (WBS 1.3.3)
- Software support and transitioning, including use of testbeds (WBS 1.3.4)
- Servicing external milestones like data challenges to exercise the new functionality and get feedback (WBS 1.3.5)

The next prototype system to be delivered is the US CMS contribution to the LCG Production Grid (June 2003).
- CMS will run a large Data Challenge on that system to prove the computing systems (including the new object storage solution)

This scheme will allow us to react flexibly to technology developments AND to changing and developing external requirements.

It also requires decisions on a set of widely relevant technologies concerning e.g.
- System architectures, farm configuration and partitioning
- Storage architectures and interfaces
- How to approach information services, configuration management, etc.


Berkeley W/S Nov 2002 -- The Global Picture

[Diagram: "Development of a Science Grid Infrastructure" (L. Robertson) -- operation of a base grid infrastructure (~4-5 core centres including the LCG Tier-1s; information service, catalogues; coordination and operations centre; call centre, user support, training) together with other grid nodes for physics, biology, medicine, ...; end-to-end HEP, bio and other science middleware built on a hardening/reworking of the basic middleware prototyped by current projects; advanced middleware requirements defined by current projects; middleware engineering and middleware R&D (CS); an application/grid interface connecting the LHC, bio and other science applications.]


... and the "missing pieces"

Transition to Production Level Grids (Berkeley list of WG1):
- middleware support
- error recovery
- robustness
- 24x7 Grid fabric operations
- monitoring and system usage optimization
- strategy and policy for resource allocation
- authentication and authorization
- simulation of grid operations
- tools for optimizing distributed systems
- etc.

Also: much needed functionality of a data handling system is still missing! Even basic functionality like:
- global catalogs and location services
- storage management
- high network/end-to-end throughput for Terabyte transfers


ITR: Focus Vision on Enabling Science

What does it take to do LHC science in a global setting?
- A focus on setting up a big distributed computing facility would be too narrow: racks of equipment distributed over <n> T1 centers, batch jobs running in production

Focus on a global environment to enable science communities:
- How can we achieve that US Universities are full players in LHC science?
- What capabilities and services are needed to do analysis 9 time zones from CERN?
- (What are the obstacles for remote scientists in existing experiments?)

We are analyzing a set of scenarios -- "science challenges" as opposed to Grid use cases: exotic physics discovery, data validation and trigger modifications, etc.

We then identify the capabilities needed from the analysis environment and some of the CS and IT needed to enable those capabilities.

This was started in the Berkeley Workshop and in the pre-proposal writing, and is being followed up in a sub-group of the LCG Architecture Forum.


Typical Science Challenge

A physicist at a U.S. university presents a plot at a videoconference of the analysis group she is involved in. The physicist would like to verify the source of all the data points in the plot.
- The detector calibration has changed several times during the year, and she would like to verify that all the data has a consistent calibration
- The code used to create the standard cuts has gone through several revisions; only more recent versions are acceptable
- Data from known bad detector runs must be excluded
- An event is at the edge of a background distribution and the event needs to be visualised



Capabilities highlighted: Metadata, Data Provenance, Data Equivalence, Collaboratory Tools, User Interfaces.
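To make the provenance/metadata side of this challenge concrete, here is a minimal, hypothetical sketch of the kind of catalogue query it implies. The record fields, calibration tag, version threshold and bad-run list are invented for illustration and are not an actual CMS data model:

```python
# Toy provenance check for the "verify every point in the plot" challenge.
# Field names, version scheme and bad-run list are hypothetical.
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    run: int
    calibration_tag: str     # which detector calibration produced it
    cuts_code_version: int   # revision of the standard-cuts code
    events: int

BAD_RUNS = {1123, 1187}              # known bad detector runs (example values)
REQUIRED_CALIB = "calib_2003_v4"     # the single calibration all points must share
MIN_CUTS_VERSION = 7                 # only more recent cut-code revisions accepted

def usable(rec: DatasetRecord) -> bool:
    return (rec.run not in BAD_RUNS
            and rec.calibration_tag == REQUIRED_CALIB
            and rec.cuts_code_version >= MIN_CUTS_VERSION)

catalogue = [
    DatasetRecord(1101, "calib_2003_v4", 8, 52000),
    DatasetRecord(1123, "calib_2003_v4", 8, 47000),   # bad run -> rejected
    DatasetRecord(1140, "calib_2003_v3", 9, 51000),   # stale calibration -> rejected
]

good = [r for r in catalogue if usable(r)]
print(f"{len(good)} of {len(catalogue)} input datasets pass the provenance checks")
```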


Science Challenges

A small group of University physicists is searching for a specific "exotic" physics signal as the LHC event sample increases over the years. Instrumental for this search is a specific detector component that those University groups have been involved in building. Out of their local detector expertise they develop a revolutionary new detector calibration method that indeed significantly increases the discovery reach. They obtain permission to use a local University compute center for Monte Carlo generation of their exotic signal. Producing the required sample and tuning the new algorithm takes many months.

After analyzing 10% of the available LHC dataset of 10 Petabytes with the new method, they indeed find signals suggesting a discovery! The collaboration asks another group of researchers to verify the results and to perform simulations to increase the confidence by a factor of three. There is a major conference in a few weeks -- will they be able to publish in time?

To do so they must:
(a) access the meta-data, share the data and transfer the algorithms used to perform the analysis;
(b) quickly have access to the maximum available physical resources to execute the expanded simulations, stopping other less important calculations if need be;
(c) decide to run their analyses and simulations on non-collaboration physical resources to the extent possible -- depending on cost, effort and other overheads;
(d) completely track all new processing and results;
(e) verify and compare all details of their results;
(f) provide partial results to the eager researchers to allow them to track progress towards a result and/or discovery;
(g) provide complete and up-to-the-minute information to the publication decision committee to allow them to quickly take the necessary decisions.

Capabilities implied: create and manage dynamic temporary private grids; provide complete provenance and meta-data tracking and management for analysis communities; enable community-based data validation and comparison; enable rapid response to new requests; provide usable and complete user interaction and control facilities.


Science Challenges

The data validation group is concerned about the decrease in the efficiency of the experiment for collecting new-physics-signature events after a section of the detector breaks and cannot be repaired until an accelerator shutdown. The collaboration is prepared to take a short downtime of data collection in order to test and deploy a new trigger algorithm to increase this ratio -- where each day of downtime has an enormous overhead cost to the experiment.

The trigger group must develop an appropriate modification to the high-level trigger code, test it on a large sample of simulated events and carefully compare the data filter for each of the 100 triggers in use. During the test period for the new algorithm the detector calibration group must check and optimize the calibration scheme.

To do so they must:
(a) identify and define the true configuration of the hundreds of thousands of components of the detector in the configuration database;
(b) store and subsequently access sufficient information about the previous and this new temporary configuration to allow the data collected under each condition to be correctly analyzed (sketched below);
(c) quickly develop and check a suite of new high-level trigger algorithms integrated with the remainder of the official version of the application code;
(d) quickly have access to the maximum available physical resources to execute the testing;
(e) export this information (which is likely to have a new metadata schema) to other communities who, albeit with less priority, need to adapt and test their analyses, and then to the entire collaboration.

Capabilities implied: evolution and integration of meta-data schema and provenance data; arbitrarily structured meta-data; data equivalency.
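Requirement (b) is essentially an interval-of-validity lookup: each detector configuration is recorded together with the run range for which it applies, so data taken under each condition can later be analyzed with the matching configuration. A minimal sketch, with invented class, tags and run numbers (not the actual CMS configuration database):

```python
# Toy interval-of-validity store for detector configurations.
# Each entry applies from `first_run` until superseded; names/values are invented.
from bisect import bisect_right

class ConfigurationStore:
    def __init__(self):
        self._first_runs = []   # sorted list of starting run numbers
        self._tags = []         # configuration tag valid from that run onward

    def register(self, first_run: int, tag: str) -> None:
        """Record that `tag` is the detector configuration from `first_run` on."""
        i = bisect_right(self._first_runs, first_run)
        self._first_runs.insert(i, first_run)
        self._tags.insert(i, tag)

    def configuration_for(self, run: int) -> str:
        """Return the configuration that was in effect for `run`."""
        i = bisect_right(self._first_runs, run)
        if i == 0:
            raise KeyError(f"no configuration registered for run {run}")
        return self._tags[i - 1]

store = ConfigurationStore()
store.register(1000, "full-detector-v12")
store.register(1500, "sector7-masked-v1")    # temporary configuration after the failure
print(store.configuration_for(1499))          # -> full-detector-v12
print(store.configuration_for(1602))          # -> sector7-masked-v1
```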


The Global Environment

Globally Enabled Analysis Communities (a pre-proposal was submitted to ITR)

Enabling Global Collaboration (a medium-sized ITR proposal)


Goals of the ITR Proposal

Provide individual physicists and groups of scientists capabilities from the desktop that allow them:
- To participate as an equal in one or more "Analysis Communities"
- Full representation in the Global Experiment Enterprise
- To receive on demand whatever resources and information they need to explore their science interest, while respecting the collaboration-wide priorities and needs

Environment for CMS (LHC) Distributed Analysis on the Grid:
- Dynamic Workspaces -- provide the capability for an individual or community to request and receive expanded, contracted or otherwise modified resources, while maintaining the integrity and policies of the Global Enterprise
- Private Grids -- provide the capability for an individual or community to request, control and use a heterogeneous mix of Enterprise-wide and community-specific software, data, meta-data and resources


Physics Analysis in CMS

We must be able to reliably and consistently move resources & information in both directions between the Global Collaboration and the Analysis Communities.
Communities should be able to share among themselves.

The Experiment controls and maintains the global enterprise:
- Hardware: computers, storage (permanent and temporary)
- Software packages: physics, framework, data management, build and distribution mechanisms; base infrastructure (operating systems, compilers, network, grid)
- Event and physics data and datasets
- Schema which define: meta-data, provenance, ancillary information (run, luminosity, trigger, Monte-Carlo parameters, calibration, etc.)
- Organization, policy and practice

Analysis groups -- communities -- consist of 1 to many individuals.

Each community is part of the Enterprise:
- Is assigned or shares the total computation and storage
- Can access and modify software, data, schema (meta-data)
- Is subject to the overall organization and management

Each community has local (private) control of:
- Use of outside resources, e.g. local institution computing centers
- Special versions of software, datasets, schema, compilers
- Organization, policy and practice


http://www-ed.fnal.gov/work/grid/gc_grow.html


This ITR Addresses Key Issues

Enable remote analysis groups and individual physicists:
- reliable and quick validation, trusted by the collaboration
- demonstrate and compare methods and results reliably, and improve the turnaround time to physics publications
- quickly respond to and decide upon resource requests from analysis groups/physicists, minimizing the impact on the rest of the collaboration
- an established infrastructure for evolution and extension over its long lifetime
- lower the intellectual cost barrier for new physicists to contribute
- enable small groups to perform reliable exploratory analyses on their own
- increased potential for individual/small-community analyses and discovery
- analysis communities will be assured they are using a well-defined set of software and data

This looks obvious and clearly required for the success of the LHC Research Program.

This looks daunting and scarily difficult and involved, and is indeed far from what has been achieved in existing experiments.

We do need the intellectual involvement and engagement of CS and IT!


Conclusions on US CMS Grids

The Grid approach to US CMS S&C is technically sound, and enjoys strong support and participation from U.S. Universities and Grid Projects.
- We need large intellectual input and involvement, and significant R&D, to build the system

US CMS is driving US Grid integration and deployment.
US CMS has proven that the US Tier-1/Tier-2 User Facility (Grid) system can indeed work to deliver effort and resources to CMS and US CMS!
- We are on the map for LHC computing and the LCG
- With the funding advised by the funding agencies and project oversight, and with good planning, we will have the manpower and equipment at the lab and universities to participate strongly in the CMS data challenges
- This is a bare-bones plan, at the threshold, and variations could jeopardize these efforts

We need to maintain leadership in the software and computing efforts, to keep the opportunity for U.S. leadership in the emerging LHC physics program.

We have a unique opportunity to propose our ideas to others, and to do our science in global, open, and international collaboration.
That goes beyond the LHC and beyond HEP.

