Page 1: Status of HPSS

1

Status of HPSS

New Features, Requirements, and Installations

Otis Graf
IBM Global Services - Federal

Houston, Texas
October 1999

Page 2: Status of HPSS

2

Topics

• HPSS New Features since Release 3.2

• Next incremental updates

• Feature to be added in 4.2

• Status of HPSS Installations

• Focus on Support Services

• Next Requirements Development Cycle

Page 3: Status of HPSS

3

Summary of HPSS Releases and Patches

Release Date Notes

3.1 Sep 96 Initial production release

3.2 Nov 97 Performance & PFTP improvements

4.1 Dec 98 DFS, HPSS filesets, MPI-IO

4.1.1 Apr 99 SGI movers, non-DCE movers

4.1.1.1 Jul 99 Solaris phase 1, additional devices

4.1.1.2 Nov 99 Solaris phase 2, SFS backup utilities

4.2 Nov 00 Major release, GPFS, Solaris phase 3

Page 4: Status of HPSS

4

• Includes features through 4.1.1 (July 1999)

• File families

– Segregate files onto different groups of tapes.

– The Storage Server uses the family id passed in segment-create calls to select the tape for the new segment.

[Diagram: FileA (no family), FileB (family F1), and FileC (family F2) at hierarchy Level 1; at Level 2, FileB and FileC are written to tapes assigned to families F1 and F2 respectively, while FileA goes to an unassigned tape.]

Features Added Since Rel. 3.2
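The selection rule described above can be sketched in a few lines. This is a hypothetical illustration, not HPSS source: a new segment goes to a tape already assigned to the file's family, or claims an unassigned tape, so files from different families never share a cartridge.

```python
def select_tape(tapes, family_id):
    """Pick a tape for a new segment, honoring file families.

    `tapes` is a list of dicts with 'id' and 'family' keys
    (family None = unassigned cartridge). Illustrative only.
    """
    # Prefer a tape already assigned to this family.
    for tape in tapes:
        if tape["family"] == family_id:
            return tape
    # Otherwise claim an unassigned tape for the family.
    for tape in tapes:
        if tape["family"] is None:
            tape["family"] = family_id
            return tape
    raise RuntimeError("no tape available for family %r" % family_id)
```

Once a cartridge is claimed for a family it stays with that family, which is what keeps the tape groups segregated.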

Page 5: Status of HPSS

5

Features Added Since Rel. 3.2 (Cont.)

• Filesets: A hierarchical collection of files and directories managed as a single unit.

[Diagram: a root fileset containing \usr with bin and home; the home directories goodwinj and teaff are junction points attached to the filesets fs_goodwinj and fs_teaff, each with its own subtree (work, play, games).]
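A toy model (not HPSS code) of how junctions graft filesets into one logical name space: each fileset is an independent tree, and resolving a path crosses a junction into the fileset it points at.

```python
class Fileset:
    """Minimal stand-in for a fileset: a named unit that can have
    junctions mounting other filesets at paths inside it."""
    def __init__(self, name):
        self.name = name
        self.junctions = {}  # mount path within this fileset -> Fileset

    def mount(self, path, fileset):
        self.junctions[path] = fileset

def owning_fileset(root, path):
    """Return the name of the fileset that owns `path`, following the
    longest matching junction prefix, recursively."""
    path = path.strip("/")
    for mount, fs in sorted(root.junctions.items(),
                            key=lambda kv: len(kv[0]), reverse=True):
        if path == mount or path.startswith(mount + "/"):
            return owning_fileset(fs, path[len(mount):])
    return root.name
```

For the tree sketched above, `usr/bin/ls` stays in the root fileset while anything under the `goodwinj` junction belongs to `fs_goodwinj`.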

Page 6: Status of HPSS

6

Features Added Since Rel. 3.2 (Cont.)

• HPSS Filesets

– Managed by name server

– Can be either HPSS-only or HPSS/DFS

• DFS Filesets

– DFS-only

– HPSS/DFS archived

• HPSS & DFS name spaces not kept in synch

• Only way to get data is through DFS

• Performance similar to DFS-only if data is resident

– HPSS/DFS mirrored

• HPSS & DFS name spaces kept in synch

• Data can be seen and altered from either DFS or HPSS

• HPSS high speed data movement can be used

Page 7: Status of HPSS

7

• PFTP Enhancements

– Allows the use of multiple networks and/or multiple systems for child processes in multi-network stripe transfers

• MPI-IO Interface

– Implements a subset of the MPI-2 standard

– Coordinates access to HPSS files from multiple processes

Features Added Since Rel. 3.2 (Cont.)
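The coordination idea behind MPI-IO, in miniature (plain Python, no actual MPI calls): each of N processes is handed a disjoint, rank-determined byte range of a shared file, so parallel access never overlaps.

```python
def rank_extent(file_size, nprocs, rank):
    """Return the (offset, length) byte range that process `rank` of
    `nprocs` handles; the remainder goes to the last rank.
    Illustrative partitioning only, not the MPI-2 file-view API."""
    chunk = file_size // nprocs
    offset = rank * chunk
    length = file_size - offset if rank == nprocs - 1 else chunk
    return offset, length
```

With 3 processes over a 10-byte file, ranks 0..2 get (0, 3), (3, 3), and (6, 4): contiguous, disjoint, and covering the whole file.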

Page 8: Status of HPSS

8

• Scalable Accounting

– Accounting summary records support the following collection options during the accounting period:

• By account index & COS:

– Number of accesses

– Number of files transferred

– Bytes used

• By account index, COS, & storage class:

– Number of files transferred

– Number of accesses

Features Added Since Rel. 3.2 (Cont.)
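A sketch of the two summary-record granularities listed above: raw access events rolled up per (account, COS) and per (account, COS, storage class). The event field names here are illustrative, not the HPSS metadata schema.

```python
from collections import defaultdict

def summarize(events):
    """Aggregate access events into the two summary granularities:
    (account, COS) and (account, COS, storage class)."""
    by_acct_cos = defaultdict(lambda: {"accesses": 0, "files": 0, "bytes": 0})
    by_acct_cos_sc = defaultdict(lambda: {"accesses": 0, "files": 0})
    for ev in events:
        key = (ev["account"], ev["cos"])
        by_acct_cos[key]["accesses"] += 1
        by_acct_cos[key]["files"] += ev["files"]
        by_acct_cos[key]["bytes"] += ev["bytes"]
        key_sc = key + (ev["storage_class"],)
        by_acct_cos_sc[key_sc]["accesses"] += 1
        by_acct_cos_sc[key_sc]["files"] += ev["files"]
    return dict(by_acct_cos), dict(by_acct_cos_sc)
```

Keeping only the rolled-up records per key, rather than one record per access, is what makes the scheme scale with activity.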

Page 9: Status of HPSS

9

• Performance Enhancements

– Support disk partitions > 2 GB

– Bigger virtual volumes (up to 16,384 segments per VV)

– Faster disk file creates and purges

• Support for Shelf Tapes

– Provide utility to identify and move volumes to shelf

– Mark cartridge metadata with shelf descriptor

– Generate operator mount requests via SSM pop-up

• Client API Enhancements

– Manually migrate/purge on a per-file basis

– Lock/unlock to prevent a file from being purged

Features Added Since Rel. 3.2 (Cont.)
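The purge-lock semantics above, as a hypothetical sketch (the names are invented, not the real Client API calls): a file is purged from disk cache only if it has been migrated and is not locked.

```python
class BitfileState:
    """Toy stand-in for a file's cache state."""
    def __init__(self):
        self.on_disk = True
        self.on_tape = False
        self.locked = False

def migrate(f):
    f.on_tape = True  # copy the file to the next hierarchy level

def purge(f):
    """Remove the disk copy; refuse if locked or not yet migrated."""
    if f.locked or not f.on_tape:
        return False
    f.on_disk = False
    return True

def lock(f):
    f.locked = True   # pin the file in disk cache

def unlock(f):
    f.locked = False
```

A site script could migrate and lock a hot file explicitly, keeping it safe on tape while guaranteeing it stays resident on disk.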

Page 10: Status of HPSS

10

• Non-DCE client API (Solaris, IRIX & AIX)

• Non-DCE Mover

• Mover ported to other platforms

– Solaris (DCE)

– SGI IRIX (non-DCE)

• Additional Tape Drive Support

– Ampex DST 314

– IBM 3590E

• Additional Library Support

– ADIC Automated Media Library (AML)

Features Added Since Rel. 3.2 (Cont.)

Page 11: Status of HPSS

11

Number of HPSS Systems*

[Chart: HPSS Installations per Year, 1996-1999, plotting the number of test and production systems (0 to 10) for each year.]

*Notes:

• "System" is a name space instance intended for production.

• 1999 data is year to date.

Page 12: Status of HPSS

12

Listing of HPSS Sites as of October 1999

HPSS Deployment Phases

SI = System Integration

I = Installation

T = Production Readiness Test

P = Production

Name of Organization No. Phase Rel.

Maui High Performance Computing Center (MHPCC) 1 P 3.2

Sandia National Laboratory (SNL) 2 P/P 3.2/3.2

California Institute of Technology (Caltech) 1 P 4.1.1

Fermi National Accelerator Laboratory (FNAL) 1 P 4.1.1

Lawrence Livermore National Laboratory (LLNL) 2 P/P 3.2/3.2

University of Washington (UWA) 1 P 3.2

Los Alamos National Laboratory (LANL) 2 P/P 4.1.1/4.1.1

San Diego Supercomputer Center (SDSC) 1 P 4.1.1

Oak Ridge National Laboratory (ORNL) 1 P 4.1.1

Page 13: Status of HPSS

13

Listing of HPSS Sites (Cont.) as of October 1999

HPSS Deployment Phases

SI = System Integration

I = Installation

T = Production Readiness Test

P = Production

Name of Organization No. Phase Rel.

Lawrence Berkeley Laboratory (NERSC) 3 P/P/P 3.2

NASA Langley Research Center (LaRC) 1 P 3.2

Stanford Linear Accelerator Center (SLAC) 1 P 4.1

European Laboratory for Particle Physics (CERN) 1 P 3.2

CEA Centre de Bruyeres le Chatel (CEA-DAM) 1 P 4.1.1

NOAA National Climatic Data Center (NCDC) 1 P 3.2

University of Maryland (UMCP) 1 P 3.2

Brookhaven National Lab (BNL) 1 P 4.1.1

Page 14: Status of HPSS

14

Listing of HPSS Sites (Cont.) as of October 1999

HPSS Deployment Phases

SI = System Integration

I = Installation

T = Production Readiness Test

P = Production

Name of Organization No. Phase Rel.

Indiana University (IU) 1 P 4.1.1

Institut National de Physique Nucléaire et de Physique des Particules (IN2P3)

1 T 4.1.1

Institute of Physical and Chemical Research (RIKEN) 1 T 4.1.1

Marconi Integrated Systems 1 T 4.1.1

Argonne National Laboratory (ANL) 1 SI 4.1.1

Page 15: Status of HPSS

15

Current Activities

• Next scheduled patch (4.1.1.2 in Nov 99)

– Sun Phase 2

• STK PVR and IBM 3494 PVR

• PFTP Client and Daemon

• MPI-IO

– SFS Backup utilities

– Some manageability improvements

• Follow-on patch (first quarter 00)

– IBM 3590 tape drives on Solaris

• StorageTek RAIT - testing and prototyping activity, no HPSS software change

Page 16: Status of HPSS

16

Current Activities (Cont.)

• Development of Release 4.2

– Planned date is 4th Quarter 00

• Enhancement of On-Line Resources

– Tools repository

– On-line help (FAQ & problem/solution DB)

– On-line documentation (HTML & PDF)

• New Training Classes

– Advanced System Admin Class

– HPSS API programming class

Page 17: Status of HPSS

17

Current Activities (Cont.)

• Enhancement of Customer Support

– Better coordination with IBM Transarc

– Deployment/support procedures

[Diagram: deployment/support process flow]

• Customer Qualification / Systems Engineering: customer profile, questionnaire, customer training, requirements, ops concept, system assurance review

• HPSS Deployment Activities: HPSS installation, acceptance tests, production readiness review

• On-Going Support Activities: problem resolution, system upgrades, change storage resources, change configuration, periodic program reviews

Page 18: Status of HPSS

18

Features in Rel. 4.2

• Multiple Storage Subsystems*

– Comprised of Name Server, Bitfile Server, Migration/Purge Server, & 1-n Storage Servers

– May be used to enhance concurrency

– May be used to partition servers to handle parts of the name space

– Data is assigned to a Storage Subsystem based on name:

• Fileset creation establishes the association with a Name Server

• A junction attaches the fileset to the name space

*Note: HPSS Contract terms are pending for multiple name spaces.

Page 19: Status of HPSS

19

Features in Rel. 4.2 (Cont.)

• Federated Name Space

– Defined as cooperative, distributed HPSS systems linked at the internal server level

– Supports use of all HPSS interfaces across multiple HPSS sites

– Maintains autonomy of individual HPSS installations

– Name spaces linked by junctions which point to filesets

Page 20: Status of HPSS

20

Features in Rel. 4.2 (Cont.)

• Command Line Utilities

– Provide command line interface to SSM to enable management of HPSS from automated scripts

– Command-line program (hpssadm) which can be run interactively or in batch mode

– Allows user to manage servers, devices, storage classes, PVL jobs, and, to a limited extent, volumes

Page 21: Status of HPSS

21

Features in Rel. 4.2 (Cont.)

• Non-DCE Client API Security

– Note: The majority of the non-DCE Client API was delivered in Release 4.1.1

– Support client authorization / authentication by:

• None

• DCE (for platforms with DCE)

• Kerberos

– Provide Non-DCE Client Gateway (NDCG) with the ability to authenticate clients’ identities on a per-connection basis

Page 22: Status of HPSS

22

Features in Rel. 4.2 (Cont.)

• Mass Configuration

– Provide mechanisms to create multiple server or drive/device records in a single request

– Provide mechanism to set persistent default values used in creating new configuration records

– A configurable default Log Policy will be provided, which may reduce the number of specific policies which need to be created

Page 23: Status of HPSS

23

Features in Rel. 4.2 (Cont.)

• Gatekeeper

– Provides optional client interface to allow sites to schedule and monitor use of HPSS file resources

– Supports file create, open, stage, and close requests

– May be associated with one or many Storage Subsystems

– Sites will be able to implement site-specific, user-level scheduling of storage requests. This may include:

• Limit the number of open files per user/host

• Prevent create requests for a user/host

• Collect statistics
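A sketch of the kinds of site policy listed above, with an invented interface (not the real Gatekeeper API): cap open files per (user, host), block creates for specific users, and count requests by operation.

```python
from collections import defaultdict

class SitePolicy:
    """Toy gatekeeper policy: admits or rejects storage requests."""
    def __init__(self, max_open=8, banned_creators=()):
        self.max_open = max_open
        self.banned_creators = set(banned_creators)
        self.open_count = defaultdict(int)   # (user, host) -> open files
        self.stats = defaultdict(int)        # op -> request count

    def request(self, op, user, host):
        """Return True if the request is admitted, False if rejected."""
        self.stats[op] += 1
        key = (user, host)
        if op == "create" and user in self.banned_creators:
            return False
        if op in ("open", "create"):
            if self.open_count[key] >= self.max_open:
                return False                 # per-user/host open-file cap
            self.open_count[key] += 1
        elif op == "close" and self.open_count[key] > 0:
            self.open_count[key] -= 1
        return True
```

Because the policy sits behind a uniform request hook, each site can substitute its own rules without changing the servers that call it.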

Page 24: Status of HPSS

24

Features in Rel. 4.2 (Cont.)

• Account Validation

– Maintain integrity of user account information. Users will only be able to use valid accounts

– Support cross-linked namespaces (remote sites). Individual sites select their own style of accounting

– While still supported, it will no longer be necessary to keep user default account information in the DCE registry

– Sites that do not need accounting or validation of user accounts may continue to run in the current fashion

Page 25: Status of HPSS

25

Features in Rel. 4.2 (Cont.)

• Sun Solaris Phase 3

– Port remaining servers

– Target OS is Solaris 8

• HPSS as a back end for the IBM General Parallel File System (GPFS)

Page 26: Status of HPSS

26

Post Release 4.2 Discussions

• No definitive release plans yet

• Features that were delayed from Rel. 4.2

– Tape import/export

– Multiple distributed movers: allow multiple, possibly distributed movers to access a single device

• Begin new requirements development cycle: 2nd Qtr 00

• Features under discussion

– Improved manageability features

– Steps toward server consolidation and decreasing SFS dependencies

– Improved GPFS/HPSS performance through parallel data paths

– Backup/archive feature (file servers, workstations)

– Critical HEP requirements

Page 27: Status of HPSS

27

Next Requirements Development Cycle

• Process

– Process begins at start of a new release cycle.

– Requirements contributed by customer and IBM.

– Requirements team consists of a representative from each customer site.

– Release contents are prioritized by requirements team.

– Developers are responsible for decomposing requirements to the subsystem level.

Page 28: Status of HPSS

28

Requirements Development Cycle (Cont.)

• Process (Cont.)

– Requirements are costed, and then reprioritized as an iterative process. Customer balloting influences release contents, but is not binding.

– Project metrics used to determine the number of requirements accepted in the next baseline.

– Requirements not making baseline are included in the requirements document, but annotated as future release.

– Formal inspection of requirements is performed.

Page 29: Status of HPSS

29

Requirements Development Cycle (Cont.)

• Process (Cont.)

– Any changes to the approved baseline require Technical and Executive Committee approval.

– Subsystem Requirements documents are published once a baseline is approved.

• Opportunity for the HEP Community

– HEP community develops a common set of requirements

– Provide unified voice on requirements committee

Page 30: Status of HPSS

30

Additional Resources

• HPSS User Forum Presentations

– www5.clearlake.ibm.com “What’s New”

• Site Reports

• Details for current capabilities

• Details of Release 4.2

• Details of DFS and HPSS

• Multi-Platform Capability of HPSS Components

– www5.clearlake.ibm.com “Product Information” > ”Multi-Platform Capability of HPSS”

• Core Servers and Movers

• User Interface Clients

