OSG at CANS 1
Open Science Grid
Ruth Pordes
Fermilab
http://www.opensciencegrid.org
OSG at CANS 2
What is OSG?
Shared common distributed infrastructure
Supporting access to contributed processing, disk & tape resources
Over production and research networks, and
Open for use by science collaborations
OSG at CANS 3
OSG Snapshot
96 resources across production & integration infrastructures
20 Virtual Organizations + 6 operations; includes 25% non-physics
~20,000 CPUs (from 30 to 4,000)
~6 PB tape
~4 PB shared disk
Snapshot of jobs on OSG:
Sustaining through OSG submissions: 3,000-4,000 simultaneous jobs, ~10K jobs/day, ~50K CPU-hours/day
Peak test jobs of 15K a day
Using production & research networks
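As a back-of-envelope check (not from the slides, just arithmetic on the numbers above), ~50K CPU-hours/day corresponds to roughly 2,000 CPUs kept busy around the clock, and ~10K jobs/day implies an average of about 5 CPU-hours per job:

```python
# Back-of-envelope check of the snapshot numbers (illustrative arithmetic only).
cpu_hours_per_day = 50_000   # ~50K CPU-hours/day sustained
jobs_per_day = 10_000        # ~10K jobs/day

avg_busy_cpus = cpu_hours_per_day / 24                    # CPUs busy on average
avg_cpu_hours_per_job = cpu_hours_per_day / jobs_per_day  # CPU-hours per job

print(f"average CPUs busy:         {avg_busy_cpus:.0f}")          # ~2083
print(f"average CPU-hours per job: {avg_cpu_hours_per_job:.1f}")  # ~5.0
```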
OSG at CANS 4
OSG - a Community Consortium
• DOE Laboratories and DOE, NSF, and other university facilities contributing computing farms and storage resources, infrastructure and user services, and user and research communities.
• Grid technology groups: Condor, Globus, Storage Resource Management, NSF Middleware Initiative.
• Global research collaborations: High Energy Physics, including the Large Hadron Collider; Gravitational Wave Physics (LIGO); Nuclear and Astro Physics; Bioinformatics; Nanotechnology; CS research…
• Partnerships with peers, development and research groups: Enabling Grids for E-sciencE (EGEE), TeraGrid, regional & campus grids (NYSGrid, NWICG, TIGRE, GLOW…)
• Education: I2U2/Quarknet sharing cosmic ray data, Grid schools…
[Timeline, 1999-2009: PPDG (DOE), GriPhyN (NSF) and iVDGL (NSF) combine as Trillium, leading to Grid3 and then OSG (DOE+NSF).]
OSG at CANS 5
OSG sits in the middle of an environment of a Grid-of-Grids, from local to global infrastructures: inter-operating and co-operating campus, regional, community, national and international grids, with Virtual Organizations doing research & education.
OSG at CANS 6
Overlaid by virtual computational environments, from single researchers to large groups, local to worldwide
OSG at CANS 7
OSG Core Activities
• Integration: software, systems and end-to-end environments; production, integration and test infrastructures.
• Operations: common support mechanisms, security protections, troubleshooting.
• Inter-operation: across administrative and technical boundaries.
OSG principles and characteristics:
• Guaranteed and opportunistic access to shared resources.
• Heterogeneous environment.
• Interfacing and federation across campus, regional and national/international grids, preserving local autonomy.
• New services and technologies developed external to OSG.
Each activity includes technical work with collaborators in the US and elsewhere.
OSG at CANS 8
OSG Middleware
[Layered diagram, from infrastructure at the bottom to applications at the top:]
• Existing operating systems, batch systems and utilities.
• Core grid technology distributions: Condor, Globus, MyProxy; shared with TeraGrid and others.
• Virtual Data Toolkit (VDT): core technologies plus software needed by stakeholders; many components shared with EGEE.
• OSG Release Cache: OSG-specific configurations, utilities, etc.
• VO middleware: HEP (data and workflow management, etc.), Biology (portals, databases, etc.), Astrophysics (data replication, etc.).
• User science codes and interfaces.
OSG at CANS 9
What is the VDT?
• A collection of software:
  Grid software: Condor, Globus and lots more.
  Virtual Data System: origin of the name "VDT".
  Utilities: monitoring, authorization, configuration.
  Built for >10 flavors/versions of Linux.
• Automated build and test: integration and regression testing.
[Chart: number of major VDT components, January 2002 to September 2006, growing from a handful to ~45 across the VDT 1.1.x, 1.2.x and 1.3.x series. Milestones: VDT 1.0 (Globus 2.0b, Condor-G 6.3.1); VDT 1.1.3, 1.1.4 & 1.1.5 (pre-SC 2002); VDT 1.1.8 (adopted by LCG); VDT 1.1.11 (Grid2003); VDT 1.2.0; VDT 1.3.0; VDT 1.3.6 (for OSG 0.2); VDT 1.3.9 (for OSG 0.4); VDT 1.3.11 (current release, moving to OSG 0.6.0).]
• An easy installation: push a button, everything just works; quick update processes.
• Responsive to user needs: a process to add new components based on community needs.
• A support infrastructure: front-line software support, triaging between users and software providers for deeper issues.
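As an illustration only (not the actual VDT build-and-test system), the kind of post-install smoke check such automation performs can be sketched in a few lines; the component commands below are assumptions about a typical VDT-style install:

```python
# Hypothetical post-install smoke tests, in the spirit of the VDT's automated
# build-and-test; the component commands are assumptions, not the actual suite.
import subprocess

CHECKS = {
    "condor": ["condor_version"],            # prints the installed Condor version
    "globus": ["globus-version"],            # assumed to print the Globus version
    "myproxy": ["myproxy-logon", "--help"],  # exercises the MyProxy client
}

def run_checks(checks):
    """Run each component's check command and record pass/fail."""
    results = {}
    for component, cmd in checks.items():
        try:
            proc = subprocess.run(cmd, capture_output=True, timeout=60)
            results[component] = (proc.returncode == 0)
        except (OSError, subprocess.TimeoutExpired):
            results[component] = False
    return results

if __name__ == "__main__":
    for component, ok in run_checks(CHECKS).items():
        print(f"{component}: {'OK' if ok else 'FAILED'}")
```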
OSG at CANS 10
Middleware to Support Security
• Identification and authorization based on X.509 extended attribute certificates, in common with Enabling Grids for E-sciencE (EGEE).
• Addresses the needs of groups of researchers for role-based access control and policies.
• Operational auditing across core OSG assets.
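A minimal sketch of the role-based mapping idea, assuming a hypothetical policy table; this is not OSG's actual authorization service, just an illustration of mapping a certificate DN plus VO role onto a local account:

```python
# Illustration only (not OSG's actual authorization service): map a grid
# certificate DN plus a VO group/role attribute onto a local account.
from typing import Optional

# Hypothetical policy table: (VO, role) -> local account
VO_ROLE_MAP = {
    ("cms", "production"): "cmsprod",
    ("cms", None): "cmsuser",
    ("ligo", None): "ligo",
}

BANNED_DNS = {"/DC=org/DC=example/CN=Revoked User"}  # made-up DN

def authorize(dn: str, vo: str, role: Optional[str] = None) -> Optional[str]:
    """Return the local account for this DN/VO/role, or None if denied."""
    if dn in BANNED_DNS:
        return None
    # Prefer the most specific (VO, role) entry, fall back to the VO-wide one.
    return VO_ROLE_MAP.get((vo, role)) or VO_ROLE_MAP.get((vo, None))

print(authorize("/DC=org/DC=example/CN=A Physicist", "cms", "production"))  # cmsprod
print(authorize("/DC=org/DC=example/CN=A Physicist", "cms"))                # cmsuser
```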
OSG at CANS 11
OSG Active in Control and Understanding of Risk
• Security process modelled on NIST management, operational and technical controls.
• Security incidents: when, not if.
• Organizations control their own activities: sites, communities, grids.
• Coordination between operations centers of participating infrastructures.
• End-to-end troubleshooting involves people, software and services from multiple infrastructures & organizations.
OSG at CANS 12
High Energy Physicists Analyze today’s Data Worldwide
1 PB/month corresponds to a sustained average of roughly 3 Gb/s.
[Map: high-impact and production network paths, including the University of Science and Technology of China.]
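The rate quoted above is straightforward arithmetic; a one-petabyte month averages out to about 3 Gb/s sustained:

```python
# One petabyte per month expressed as a sustained bit rate.
petabyte_bits = 1e15 * 8        # 1 PB (decimal) in bits
month_seconds = 30 * 24 * 3600  # ~30-day month

rate_gbps = petabyte_bits / month_seconds / 1e9
print(f"{rate_gbps:.1f} Gb/s")  # ~3.1 Gb/s
```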
OSG at CANS 13
Physics needs in 2008:
• 20-30 Petabytes of tertiary automated tape storage at 12 centers worldwide for physics and other scientific collaborations.
• High availability (365x24x7) and high data access rates (1 GByte/sec) locally and remotely.
• Evolving and scaling smoothly to meet evolving requirements.
• E.g. the CMS computing model: a Tier-0 feeding multiple Tier-1 centers, each serving multiple Tier-2 centers.
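The tier structure can be pictured as a small tree; the sketch below is illustrative only, and apart from CERN and Beijing (named on the next slide) every site name is a placeholder:

```python
# Illustrative tree for the tiered model: the Tier-0 at CERN fans out to
# Tier-1 centers, each of which serves several Tier-2 centers.
cms_tiers = {
    "CERN (Tier-0)": {
        "Tier-1 site A": ["Tier-2 site A1", "Tier-2 site A2"],
        "Tier-1 site B": ["Beijing (Tier-2)", "Tier-2 site B2"],
    }
}

def show(tree, depth=0):
    """Print the hierarchy with indentation showing the fan-out."""
    for name, children in tree.items():
        print("  " * depth + name)
        if isinstance(children, dict):
            show(children, depth + 1)
        else:
            for tier2 in children:
                print("  " * (depth + 1) + tier2)

show(cms_tiers)
```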
OSG at CANS 14
OSG Data Transfer, Storage and Access: GBytes/sec, 365 days a year, for CMS & ATLAS.
Current rates of ~600 MB/sec need to reach ~3x that within a year.
~7 Tier-1s, CERN, plus Tier-2s; Beijing is a Tier-2 in this set.
OSG at CANS 15
Aggressive program of end-to-end network performance
• Complex end-to-end routes.
• Monitoring, configuration, diagnosis.
• Automated redundancy and recovery.
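A toy sketch of the monitoring idea, assuming hypothetical endpoint names and a plain ping probe rather than the dedicated measurement tools a production program would use:

```python
# Toy sketch of path monitoring: ping a few endpoints and flag paths whose
# average round-trip time degrades. Hostnames are placeholders.
import re
import subprocess
from typing import Optional

ENDPOINTS = ["tier1.example.org", "tier2.example.edu"]  # made-up hosts
LATENCY_LIMIT_MS = 200.0

def avg_rtt_ms(host: str) -> Optional[float]:
    """Average RTT from three pings, or None if the host is unreachable."""
    proc = subprocess.run(["ping", "-c", "3", host], capture_output=True, text=True)
    match = re.search(r"= [\d.]+/([\d.]+)/", proc.stdout)  # min/avg/max/...
    return float(match.group(1)) if match else None

for host in ENDPOINTS:
    rtt = avg_rtt_ms(host)
    if rtt is None or rtt > LATENCY_LIMIT_MS:
        print(f"DEGRADED or unreachable: {host} (rtt={rtt})")
    else:
        print(f"OK: {host} rtt={rtt:.1f} ms")
```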
OSG at CANS 16
Submitting Locally, Executing Remotely:
15,000 jobs/day.
27 sites.
Handful of submission points.
+ test jobs at 55K/day.
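One way to picture "submit locally, execute remotely" is a Condor-G style grid-universe submission; the gatekeeper host and executable below are placeholders, and this is a sketch rather than the exact submission setup behind these runs:

```python
# Sketch of local submission to a remote site: write a Condor-G style submit
# description targeting a remote gatekeeper and hand it to condor_submit.
# The gatekeeper host and executable are placeholders.
import subprocess
from pathlib import Path

SUBMIT_TEXT = """\
universe      = grid
grid_resource = gt2 gatekeeper.example.edu/jobmanager-condor
executable    = analyze.sh
output        = job.out
error         = job.err
log           = job.log
queue
"""

def submit(submit_text: str) -> None:
    path = Path("remote_job.sub")
    path.write_text(submit_text)
    # Requires a local Condor-G installation; condor_submit queues the job
    # locally and Condor forwards it to the remote site for execution.
    subprocess.run(["condor_submit", str(path)], check=True)

if __name__ == "__main__":
    submit(SUBMIT_TEXT)
```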
OSG at CANS 17
Applications cross infrastructures, e.g. OSG and TeraGrid
OSG at CANS 18
The OSG Model of Federation
[Diagram: OSG and A(nother) Grid, e.g. NAREGI, each provide a Service-X. An adaptor between OSG's Service-X and AGrid's Service-X presents a single interface to Service-X for a VO or user that acts across grids. The pattern applies to security, data, jobs, operations, information, accounting, …]
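A minimal sketch of the adaptor idea in the diagram, with entirely hypothetical class and method names: the VO programs against one Service-X interface, and an adaptor presents the other grid's native service through that same interface.

```python
# Illustration of the federation pattern: a VO or user codes to one "Service-X"
# interface; an adaptor wraps another grid's differently shaped native service.
from abc import ABC, abstractmethod

class ServiceX(ABC):
    """The interface the VO/user actually programs against."""
    @abstractmethod
    def submit(self, job_description: dict) -> str: ...

class OSGServiceX(ServiceX):
    def submit(self, job_description: dict) -> str:
        return f"osg-job-{hash(str(job_description)) % 1000}"

class AnotherGridNativeService:
    """Stand-in for the other grid's own, differently shaped API."""
    def enqueue(self, payload: str) -> int:
        return abs(hash(payload)) % 1000

class AGridAdaptor(ServiceX):
    """Adaptor between OSG's Service-X interface and the other grid's API."""
    def __init__(self, native: AnotherGridNativeService):
        self.native = native
    def submit(self, job_description: dict) -> str:
        ticket = self.native.enqueue(str(job_description))
        return f"agrid-job-{ticket}"

# The cross-grid VO treats both grids uniformly through ServiceX.
for grid in (OSGServiceX(), AGridAdaptor(AnotherGridNativeService())):
    print(grid.submit({"executable": "analyze.sh"}))
```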
OSG at CANS 19
Before FermiGrid, e.g. at Fermilab: each community (Astrophysics, Particle Physics, Theory, plus a common pool) ran its own resource with its own head node and workers, and users submitted to each resource directly.
FermiGrid: a common gateway and central services now sit in front of the existing resources, so local and guest users submit through one place.
A local grid with an adaptor to the national grid:
• Central campus-wide grid services.
• Enable efficiencies and sharing across internal farms and storage.
• Maintain autonomy of individual resources.
(A minimal gateway sketch follows.)
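The sketch below illustrates the gateway idea under assumed farm names and policies; it is not FermiGrid's actual configuration.

```python
# Minimal sketch of a campus gateway: one entry point dispatches jobs across
# existing internal farms while each farm keeps its own admission policy.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Farm:
    name: str
    free_slots: int
    accepts_guests: bool  # each farm keeps its own sharing policy

@dataclass
class Job:
    owner_community: str  # e.g. "ParticlePhysics", "Theory", or "guest"

def dispatch(job: Job, farms) -> Optional[str]:
    """Prefer the owning community's farm; otherwise fall back to any farm
    whose policy allows sharing and that still has free slots."""
    for farm in farms:
        if farm.name == job.owner_community and farm.free_slots > 0:
            farm.free_slots -= 1
            return farm.name
    for farm in farms:
        if farm.accepts_guests and farm.free_slots > 0:
            farm.free_slots -= 1
            return farm.name
    return None

farms = [Farm("ParticlePhysics", 2, True), Farm("Theory", 0, False),
         Farm("Astrophysics", 1, True)]
print(dispatch(Job("Theory"), farms))  # Theory is full, so a sharing farm is used
print(dispatch(Job("guest"), farms))   # guest lands on a farm that accepts guests
```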
Next step: Campus Infrastructure Days, a new activity of OSG, Internet2 and TeraGrid.
OSG at CANS 20
Interoperation Increasing in Scope
• Information & monitoring
• Storage interfaces
OSG at CANS 21
Summary of OSG today
• Providing core services, software and a distributed facility for an increasing set of research communities.
• Helping Virtual Organizations access resources on many different infrastructures.
• Reaching out to others to collaborate and contribute our experience and efforts.