
TeraGrid Arch Meeting RP Update: ORNL

January 15, 2008 — John W. Cobb
Page 1: TeraGrid Arch Meeting RP Update: ORNL

TeraGrid Arch Meeting RP Update: ORNL

January 15, 2008 (MLK + 80y)

John W. Cobb

Page 2: TeraGrid Arch Meeting RP Update: ORNL

Outline

• SNS Status
• Neutron Science Portal
• Data Storage Experience
• Wider TeraGrid integration with Neutron Science Portal
• NSTG Cluster Operations

• Move the data; support the experiment; connect to CI

Page 3: TeraGrid Arch Meeting RP Update: ORNL

SNS Status

• Ongoing instrument run times (SNS and HFIR)
• New instruments in commissioning, joining the user program (Beamline: Name)
  – 2 Backscattering Spectrometer (BASIS)
  – 3 Spallation Neutrons and Pressure Diffractometer (SNAP)
  – 4A Magnetism Reflectometer (MR)
  – 4B Liquids Reflectometer (LR)
  – 5 Cold Neutron Chopper Spectrometer (CNCS)
  – 18 Wide Angular-Range Chopper Spectrometer (ARCS)

• Instruments in commissioning (shutters opened)
  – 6 Extended Q-Range Small-Angle Neutron Scattering Diffractometer (EQ-SANS)
  – 11A Powder Diffractometer (POWGEN)
  – 17 Fine-Resolution Fermi Chopper Spectrometer (SEQUOIA)

• More in the pipeline (10 more at last count): "shovel ready"

Page 4: TeraGrid Arch Meeting RP Update: ORNL

Neutron Science Portal

• http://neutrons.ornl.gov/portal/ (SNS with NSTG)
• Holiday success: over the break, only 3 files failed to automatically move from data acquisition to the portal archive. "I used the portal and it really works."
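The automatic data-acquisition-to-archive movement can be pictured as a periodic sync pass that reports any files it could not transfer. The following is only an illustrative Python sketch: the `sync_to_archive` helper, the directory layout, and the `.nxs` (NeXus) extension are assumptions, not the portal's actual implementation.

```python
import shutil
from pathlib import Path

def sync_to_archive(acq_dir: Path, archive_dir: Path) -> list:
    """Copy each new data file from the acquisition area to the archive
    directory, returning the list of files that failed to transfer."""
    failures = []
    archive_dir.mkdir(parents=True, exist_ok=True)
    for src in sorted(acq_dir.glob("*.nxs")):  # NeXus files, for illustration
        dest = archive_dir / src.name
        if dest.exists():  # already archived on a previous pass
            continue
        try:
            shutil.copy2(src, dest)
        except OSError:
            failures.append(src)
    return failures
```

A monitoring loop would call this on a schedule and alert on a non-empty return value, which is how a "3 files failed over the break" statistic would surface.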

Page 5: TeraGrid Arch Meeting RP Update: ORNL

Data Storage Experience

Page 6: TeraGrid Arch Meeting RP Update: ORNL

Wider TeraGrid integration with NS Portal

• DOE SBIR with Tech-X Corp (Mark Green): thick-client portal and "virtual file system" for remote replication of SNS data (distributed-storage and disconnected use cases)

• Examining SNS data replication onto archive storage targets; SULI semester student David Speirs is also working in this area

• Jimmy Neutron community account continues in use
• TeraGrid distributed jobs
  – Simulation tab
  – Generalized fitting service (DAKOTA)
  – Distributed data reduction

Page 7: TeraGrid Arch Meeting RP Update: ORNL

NSTG Cluster Operations

• GridFTP
  – We are moving to unstriped GridFTP because of "adverse interactions" between GridFTP, PVFS, and local hardware; the cause is not completely characterized
  – Working to implement, as planned, an enhancement path to support 10 Gb/s transfers, especially SNS<>NSTG<>TG at large
  – Transfers will remain unstriped until then
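On the client side, the striped-vs-unstriped distinction comes down to a single option on the transfer command. Below is a hedged Python sketch that builds a `globus-url-copy` invocation; the `gridftp_cmd` helper and the endpoint URLs are invented for illustration, while `-p` (parallel TCP streams) and `-stripe` (striped transfer) are standard `globus-url-copy` options.

```python
def gridftp_cmd(src: str, dst: str, striped: bool = False, streams: int = 4) -> list:
    """Build a globus-url-copy command line. With striping turned off
    (the current NSTG setting), only parallel TCP streams are used."""
    cmd = ["globus-url-copy", "-vb", "-p", str(streams)]
    if striped:
        cmd.append("-stripe")  # request a striped transfer across data nodes
    cmd += [src, dst]
    return cmd

# Hypothetical endpoints, for illustration only
print(" ".join(gridftp_cmd("gsiftp://host.example.org/scratch/run.nxs",
                           "file:///archive/run.nxs")))
```

Dropping `-stripe` while keeping parallel streams is what "unstriped until then" amounts to in practice: throughput from multiple TCP streams is retained, but the transfer no longer fans out across multiple striped data nodes.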

• ESG: new local ESG node in the TeraGrid enclave at ORNL; undergoing installation now

• Planning to mount wide-area Lustre file systems
• General NSTG cluster usage: "canary in the coal mine"
• Interesting large-job-count experience in December

Page 8: TeraGrid Arch Meeting RP Update: ORNL

StageSub (Data Butler) Update

• See related file under this agenda on TG Wiki

• Idea: For submitted jobs, separate data movement (stage-in/out) from execution

• Dedicated (privileged) job queue for data-transfer jobs to/from scratch

• Utility (users)
  – Avoid wasting allocation during job execution while waiting for data movement, and/or
  – Avoid risking a data purge before the data is stored and/or the job is scheduled

• Utility (centers)
  – Increased job throughput
  – Increased "effective" scratch storage
  – Ability to manage data-storage bandwidth

• How
  – Batch script directives
  – Parser to submit munged scripts to the compute and data queues
  – "Watcher" to coordinate progress between the compute and data queues

• Multiple file-movement tools: standard center tools, archive tools, and wide-area tools, including P2P tools (FreeLoader)
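The directive-plus-parser mechanics might look like the following minimal Python sketch. The `#STAGE` directive syntax and the `split_script` helper are invented for illustration; the actual StageSub directives are described in the related file on the TG Wiki.

```python
# Hypothetical "#STAGE" directives; the real StageSub syntax may differ.
def split_script(script: str):
    """Split a batch script into stage-in / stage-out requests (to be run
    in the data queue) and the remaining compute script (compute queue).
    A separate "watcher" would release the compute job once stage-in
    finishes and trigger stage-out after the compute job exits."""
    stage_in, stage_out, compute = [], [], []
    for line in script.splitlines():
        if line.startswith("#STAGE in "):
            stage_in.append(line.split(None, 2)[2])
        elif line.startswith("#STAGE out "):
            stage_out.append(line.split(None, 2)[2])
        else:
            compute.append(line)
    return stage_in, stage_out, "\n".join(compute)
```

Splitting this way is what delivers the utility claims above: the compute job's allocation clock only runs while it computes, and the data queue can be throttled independently to manage storage bandwidth.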

• Current Deployment (and future deployment)

