ALICE-USA Grid Deployment Plans
(By the way, ALICE is an LHC Experiment, TOO!)
Or (We Sometimes Feel Like an “AliEn” in Our Own Home…)
Larry Pinsky, Computing Coordinator, ALICE-USA
ALICE/Pinsky OSG Applications Workshop @ SLAC
ALICE-USA Institutions

1. Creighton University
2. Kent State University
3. Lawrence Berkeley National Laboratory
4. Michigan State University
5. Oak Ridge National Laboratory
6. The Ohio State University
7. The Ohio Supercomputing Center
8. Purdue University
9. University of California, Berkeley
10. University of California, Davis
11. University of California, Los Angeles
12. University of Houston
13. University of Tennessee
14. University of Texas at Austin
15. Vanderbilt University
16. Wayne State University

Already Official Members of ALICE
Major Computing Sites
ALICE Computing Needs
From <http://pcaliweb02.cern.ch/NewAlicePortal/en/Collaboration/Documents/TDR/Computing.html> as posted 25 Feb. 2005
Table 2.6                     T0      Sum T1s   Sum T2s   Total
CPU (MSI2K) [peak]            7.5     13.8      13.7      35
Transient storage (PB)        0.44    7.6       2.5       10.54
Permanent storage (PB/year)   2.3     7.5       0         9.8
Bandwidth in (Gbps)           8       2         0.075
Bandwidth out (Gbps)          6       1.5       0.27
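The "Total" column in Table 2.6 is just the row sum over T0, the T1s, and the T2s. A quick arithmetic check of the transcribed values:

```python
# Consistency check of Table 2.6 (ALICE Computing TDR, Feb. 2005).
# Values transcribed from the slide; Total should equal T0 + sum(T1s) + sum(T2s).
rows = {
    "CPU (MSI2K, peak)":         (7.5, 13.8, 13.7, 35.0),
    "Transient storage (PB)":    (0.44, 7.6, 2.5, 10.54),
    "Permanent storage (PB/yr)": (2.3, 7.5, 0.0, 9.8),
}
for name, (t0, t1s, t2s, total) in rows.items():
    assert abs((t0 + t1s + t2s) - total) < 1e-9, name
print("Table 2.6 row totals are consistent")
```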
ALICE-USA Target
Year                               2008    2009    2010
% of 2010 total                    20      40      100
ALICE-USA sum:
  CPU (MSI2K)                      0.69    1.38    3.44
  Disk (PB)                        0.25    0.51    1.26
  Perm. storage (PB/yr)            0.19    0.38    0.94
  Network (Gbps)                   0.769   1.538   3.845
Each major US site (1/3 of sum):
  CPU (MSI2K)                      0.23    0.46    1.15
  Disk (PB)                        0.08    0.17    0.42
  Perm. storage (PB/yr)            0.06    0.13    0.31
  Network (Gbps)                   0.256   0.513   1.282
One Full External T1 with a Full Share of Supporting T2 Capabilities, Net in the US (Based on 6 External T1s)
Note: OSC is a member of ALICE and has made this commitment now…
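The yearly and per-site figures in the target table follow from simple scaling: each year's ALICE-USA sum is the stated fraction (20%, 40%, 100%) of the 2010 target, and each of the three major US sites carries one third of that sum. A quick check (small differences are rounding in the slide's table):

```python
# 2010 ("100%") ALICE-USA sums, from the target table on the slide.
full = {"CPU (MSI2K)": 3.44, "Disk (PB)": 1.26,
        "Perm. (PB/yr)": 0.94, "Net (Gbps)": 3.845}
fractions = {2008: 0.20, 2009: 0.40, 2010: 1.00}

for year, frac in fractions.items():
    for resource, total in full.items():
        usa_sum = total * frac       # ALICE-USA total for that year
        per_site = usa_sum / 3       # one of the three major US sites
        print(f"{year} {resource}: sum={usa_sum:.3f}, per-site={per_site:.3f}")
```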
ALICE-USA Commitments
OSC is committed now to obtaining NSF funding to acquire this level of support.
LBL (NERSC) and UH are DOE-funded and committed to supplying these resources, contingent upon DOE’s approval of the ALICE-USA EMCAL project.
All three institutions CONTINUE TO SUPPORT THE DATA CHALLENGES…
DOE is currently well into the decision process regarding budgeting the construction of the EMCAL by ALICE-USA for ALICE. Funding to support prototyping has been provided…
ALICE-USA Data Challenge Support
Since 2002, ALICE-USA has provided significant support for the Data Challenges.
Most recently (2004) ALICE-USA supplied ~14% (106 MSI2k-Hours) of the total (755 MSI2k-Hours) CPU and external storage capacity.
For 2005-2007 ALICE-USA intends to supply a similar fraction from existing commitments.
ALICE-USA Grid Middleware
We will support ALICE’s needs with whatever middleware is consistent with them…
…as well as with whatever is consistent with our local needs in the US…
Our institutions are participating in OSG in the US, and some are members of PPDG.
Simplified view of the ALICE Grid with AliEn

ALICE VO – central services:
• Central Task Queue
• Job submission
• File Catalogue
• Configuration
• Accounting
• User authentication

AliEn site services:
• Computing Element (workload management, job monitoring)
• Storage Element (storage volume manager, data transfer)
• Cluster Monitor

Existing site components:
• Local scheduler
• Disk and MSS

(Diagram: ALICE VO – site services integration.)
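The division of labor above can be sketched as a pull model: central services hold the task queue and file catalogue, while each site's services fetch work and register outputs. A toy illustration only; the class and method names are invented here, not the real AliEn API:

```python
# Toy sketch of the AliEn-style pull model (illustrative names, not real APIs).
from collections import deque

class CentralServices:
    """ALICE VO central services: task queue + file catalogue."""
    def __init__(self):
        self.task_queue = deque()        # Central Task Queue
        self.file_catalogue = {}         # logical file name -> storage site

    def submit(self, job):
        self.task_queue.append(job)      # user job submission

class SiteServices:
    """AliEn site services: CE pulls work, SE stores output."""
    def __init__(self, name, central):
        self.name = name
        self.central = central

    def pull_and_run(self):
        # Computing Element asks the central queue for a job
        if not self.central.task_queue:
            return None
        job = self.central.task_queue.popleft()
        output = f"{job}.out"            # pretend a worker node ran it
        # Storage Element keeps the file; the catalogue records where it lives
        self.central.file_catalogue[output] = self.name
        return output

central = CentralServices()
central.submit("sim-001")
site = SiteServices("LBNL-SE", central)
print(site.pull_and_run())               # -> sim-001.out
print(central.file_catalogue)            # -> {'sim-001.out': 'LBNL-SE'}
```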
ALICE VO interaction with various Grids
A user (e.g. the Production Manager) submits jobs to the ALICE Task Queue, and data are registered in the ALICE File Catalogue. Jobs reach the worker nodes (WNs) through a per-site ALICE VO Box in front of each site’s native services:
• LCG site: through the LCG UI/RB to the LCG CE
• ARC site: through the ARC UI/RB to the ARC CE
• AliEn site: directly to the AliEn CE
• OSG site: through the OSG UI/RB to the OSG CE
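The role of the VO Box in the diagram is translation: one central queue, four local submission interfaces. A minimal sketch; the submission command names below are placeholders chosen for illustration, not a statement of what each middleware actually requires:

```python
# Illustrative VO Box dispatch: route a job from the central ALICE Task Queue
# to the local grid flavor's submission interface. Command names are
# placeholders for illustration only.
SUBMIT_CMD = {
    "LCG":   "edg-job-submit",   # via LCG UI/RB (placeholder)
    "ARC":   "ngsub",            # via ARC UI/RB (placeholder)
    "OSG":   "condor_submit",    # via OSG UI/RB (placeholder)
    "AliEn": "alien-submit",     # native AliEn CE (placeholder)
}

def dispatch(job, site_flavor):
    """VO Box on the site: pick the local grid's submission interface."""
    cmd = SUBMIT_CMD[site_flavor]
    return f"{cmd} {job}"        # would hand off to the site CE, then a WN

print(dispatch("pp-simulation.jdl", "OSG"))   # -> condor_submit pp-simulation.jdl
```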
Some Issues

• ALICE software will have to blend with many Grid infrastructures.
• ALICE will use resources spanning many different platforms (e.g. AliEn, PROOF, and AliRoot already run on IA32, IA64, and G5).
• Detailed OS versions cannot be mandated on all the resources that will need to be used.
• The ALICE File Catalogue, Task Queues, and Production Manager will interface directly to the UI/RBs and local services.
• ALICE is evolving toward a “Cloud” model of distributed computing and away from a rigid MONARC model: T1s are distinguished from T2s by local mass-storage capability, not by tasks.
Meeting Next Week @ LBL
There will be a meeting at LBL next Friday, June 10, to discuss ALICE and OSG specifically…
June 1, 2005 - ALICE/Pinsky OSG Applications Workshop @ SLAC
A Joint Grid Project Between Physics Departments at Universities in Texas
Initiated by the High Energy (Particle) Physics Groups…
To Harness Unused Local Computing Resources…
…In Support of HiPCAT

High Performance Computing Across Texas (HiPCAT) is a consortium of Texas institutions that use advanced computational technologies to enhance research, development, and educational activities. These advanced computational technologies include traditional high performance computing (HPC) systems and clusters, in addition to complementary advanced computing technologies including massive data storage systems and scientific visualization resources. The advent of computational grids -- based on high speed networks connecting computing resources and grid 'middleware' running on these resources to integrate them into 'grids' -- has enabled the coordinated, concurrent usage of multiple resources/systems and stimulated new methods of computing and collaboration. HiPCAT institutions support the development, deployment, and utilization of all of these advanced computing technologies to enable Texas researchers to address the most challenging computational problems.
…And TIGRE

The Texas Internet Grid for Research and Education (TIGRE) project goal is to build a computational grid that integrates computing systems, storage systems and databases, visualization laboratories and displays, and even instruments and sensors across Texas. TIGRE will enhance the computational capabilities for Texas researchers in academia, government, and industry by integrating massive computing power. Areas of research that will benefit in particular: biomedicine, energy and the environment, aerospace, materials science, agriculture, and information technology.
Setting Up THEGrid
• THEGrid has set up a Grid infrastructure using existing hardware in Physics Departments on campuses in Texas…
• Initially, a Grid3-like approach was taken using VDT (Going to OSG Soon…)
• Local unused resources were harnessed using Condor…
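Harnessing idle machines with Condor, as above, comes down to describing each batch job in a plain-text submit file. A minimal sketch; the executable and file names here are placeholders, not from the THEGrid setup itself:

```
# job.sub -- minimal Condor submit description (placeholder names)
universe                = vanilla
executable              = analyze_events
arguments               = run2004.dat
output                  = job.$(Cluster).out
error                   = job.$(Cluster).err
log                     = job.log
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
queue
```

Submitting is then `condor_submit job.sub`; Condor matches the job to an idle machine in the pool and returns the output files when it exits.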
Using THEGrid
• Individual Students and Faculty at each participating campus can submit batch jobs!
• Jobs are submitted through a local portal on each campus…
• The middleware distributes the submitted jobs to one of the available locations throughout THEGrid…
• The output from each job is returned to the user…
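The submit-dispatch-return loop above can be sketched as a toy scheduler. The site names and the least-loaded placement policy are illustrative only; the real routing is done by the Condor/VDT middleware:

```python
# Toy sketch of THEGrid's dispatch step: a submitted job goes to whichever
# participating site is least loaded, and the result is keyed to the user.
# Site names and the placement policy are illustrative, not the real setup.
sites = {"UH": 0, "UT-Austin": 0, "Texas Tech": 0}   # queued-job counts

def submit(user, job):
    site = min(sites, key=sites.get)   # pick the least-loaded site
    sites[site] += 1
    return {"user": user, "job": job, "ran_at": site, "output": f"{job}.out"}

r = submit("student1", "mc_gen")
print(r["ran_at"], r["output"])
```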
THEGrid

Texas Tech will run the OSG VOMS server.