Observing Process: Astronomer's Integrated Desktop & Scheduling Block Based Observing
GBT e2e Software Review
May 3, 2005
Amy Shelton ([email protected])
Nicole Radziwill ([email protected])
Ease of Use Project Description from August 2004
BUILD
EDIT
RUN
MONITOR
GBT OT
Astronomer’s Integrated Desktop (Astrid)
Easy Access to Documentation, Help
Console Windows
Quick Look Data Display
Status Screen
* Note that APIs are very telescope-specific, yet are used to abstract the observation into terms not specific to the telescope
SB Executor
uses Configuration API
Observing API
Balancing API
Observation Management Database
(*e2e = Project Database)
Scheduling Blocks & Execution Log File
Data Capture
Annotated Index of Observation
GBT Observing Process
Why do we want an Integrated Desktop?
• Astronomers don't have to learn multiple applications to accomplish the observing task
• Remote observers launch only one application, which manages on-screen real estate well
• Error reports do not require knowledge of which application the error was reported in
Why do we want to transition to a Scheduling Block based system?
• Observing Motivations
– Encourage up-front preparation: maximize throughput of science per observing session
– Enable dynamic scheduling
– Facilitate remote observing
– Balance interactivity needs with the need to optimize for high-frequency weather
– Simplify routine operations such as regression tests and "all sky pointing"
– Characterize observations better so that pipeline processing can be enabled in the future
– Enable observation data mining: we can track what was observed more effectively
– Facilitate the proposal review process: we can use data from past proposals to review current ones
• Technical Motivations
– Enable more efficient troubleshooting: we can track what people did, and what errors occurred, much more easily
– Code is written to accommodate a discrete number of well-defined levels of abstraction
– Distinct categories of usage (astronomers use apps and application components, experts use HLAPIs, programmers use LLAPIs)
How do we move to a Scheduling Block based system?
– Standard Observing Modes, Scheduling Blocks, Observation archiving
Observation Process: Preparation & Execution
Adopted on the GBT as defined by ALMA, but slightly customized for the GBT (e.g. details of ObsProcedure).
* NRAO e2e terminology used throughout presentation.
* Currently, the Observation Process focuses on single-SB execution; the Science Program level is unimplemented.
Observing Process: Data Model
• Ties observation to proposal
• Allows data mining on GBT observations
• Schema relatively unchanged from last year
– ObservedIndex unimplemented until the system is ready for regular use by observers
– ObsTarget unimplemented until Source Catalogs are in place
– Security added
Customized for use with the GBT.
[Entity-relationship diagram; entities and attributes recovered from the figure:]
– ObsProcedure: id, name, script, obsprojectref_id, observer_id, state
– ObsTarget: (attributes not legible)
– ObsProjectRef: id, name, primary_observer
– Observer: id, name
– Queue: id, obsprocedure_id, prev_id, next_id
– ConfigurationCase: name, script
– History: id, obsprocedure_id, operator_id, datetime, version, executed_script, executed_state, log
– Operator: id, name
– ObservedIndex: obstarget_id, frequency, scan_type, history_id, obsprojectref_id
– Security: id, key
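The entities and attributes recovered from the diagram can be sketched as SQLite DDL. All column types and key constraints below are assumptions (the diagram records names only), and the production database may differ:

```python
# Sketch of the Observation Management schema as SQLite DDL, based on the
# entity/attribute names in the data-model diagram. Column types and key
# constraints are assumptions; the production schema may differ.
import sqlite3

DDL = """
CREATE TABLE ObsProjectRef   (id INTEGER PRIMARY KEY, name TEXT,
                              primary_observer TEXT);
CREATE TABLE Observer        (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE ObsProcedure    (id INTEGER PRIMARY KEY, name TEXT, script TEXT,
                              obsprojectref_id INTEGER REFERENCES ObsProjectRef(id),
                              observer_id INTEGER REFERENCES Observer(id),
                              state TEXT);
CREATE TABLE ObsTarget       (id INTEGER PRIMARY KEY);  -- attributes not legible
CREATE TABLE Queue           (id INTEGER PRIMARY KEY,
                              obsprocedure_id INTEGER REFERENCES ObsProcedure(id),
                              prev_id INTEGER,  -- prev/next form a linked list
                              next_id INTEGER); -- ordering the queue
CREATE TABLE ConfigurationCase (name TEXT, script TEXT);
CREATE TABLE History         (id INTEGER PRIMARY KEY,
                              obsprocedure_id INTEGER REFERENCES ObsProcedure(id),
                              operator_id INTEGER, datetime TEXT, version TEXT,
                              executed_script TEXT, executed_state TEXT, log TEXT);
CREATE TABLE Operator        (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE ObservedIndex   (obstarget_id INTEGER, frequency REAL,
                              scan_type TEXT, history_id INTEGER,
                              obsprojectref_id INTEGER);
CREATE TABLE Security        (id INTEGER PRIMARY KEY, key TEXT);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
tables = sorted(r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"))
print(tables)
```

Note the Queue table's prev_id/next_id pair, which suggests explicit linked-list ordering of queued procedures rather than ordering by insertion id.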
High-Level Architecture
[Architecture diagram: GUIs (Applications and Application Components, 0..n) use the HLAPIs; the HLAPIs use the LLAPIs, which drive the Control System. Expert users use the HLAPIs directly; programmers use the LLAPIs. Observed data is written to an EDF; real-time data is streamed.]
Astronomer’s Integrated Desktop (Astrid)
• What is Astrid?
– Astrid is a container from which you use various GBT applications, e.g. Observation Management and Data Display.
– Multiple application instances are supported, e.g. two Data Displays.
• Why is Astrid so important?
– Simplifies observation startup. Users simply type "astrid" at any Linux prompt.
– Reduces the number of applications that observers have to launch to begin observing to ONE; most useful for observing remotely!
– Observers do not have to know the difference between those applications to report issues.
Astrid Architecture
[Architecture diagram: Astrid hosts Application Component Tabs (0..n), which use the HLAPIs; the HLAPIs use the LLAPIs, which drive the Control System. Expert users access the HLAPIs directly. Observed data is written to an EDF; real-time data is streamed.]
• Application Component Tabs – Allow the user to launch multiple applications within a single container, and provide a GUI interface to the HLAPIs for all users.
• HLAPIs are always available to the expert user, bridging the gap between non-interactive SB-based observing and the desire for interactive observing. Useful for special-purpose tasks (e.g. balancing mid-scan) and commissioning.
Astrid Screen Shot
Application Components
Drop-Down Menus
Tool Bar
Application Component Log Window
Python Command Console – Expert User access to HLAPIs, e.g. Balancing and Configuration
Available Astrid Application Components
• Astrid – Launches a separate Astrid session
• DEAP – Data Extraction & Analysis Program provides general data display and manipulation capabilities.
• Python Editor – A text editor that provides Python syntax highlighting.
• Text Editor – A text editor for general use.
• Data Display – Provides real time display of data plus offline data perusal.
• Logview – Used to examine engineering FITS files.
• Observation Management – Edit/Submit/Run Scheduling Blocks on the telescope.
• GBTStatus (In development) – Telescope status information.
Astrid User Help
Observing with Astrid
• Preparation:
1. Write your Scheduling Block(s).
– Observation Management provides an editor environment with syntax highlighting
– Or use your favorite editor (Observation Management provides an import utility)
2. Validate your Scheduling Block.
– This is done automatically when importing Scheduling Blocks into the Observation Management editor and when saving Scheduling Blocks to the database
3. Upload your Scheduling Block to the Observation Management database.
• Observing:
1. Retrieve your saved Scheduling Block from the Observation Management database
2. Submit the Scheduling Block to the Job Queue
3. Promote your Scheduling Block from the Job Queue to the Run Queue
4. Monitor progress
– Observation Management Monitor tab
– Monitor telescope status with the GBTStatus application component
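The submit/promote flow above can be modeled as a pair of FIFO queues. The class and method names below are illustrative stand-ins, not the actual Observation Management API:

```python
# Minimal model of the Job Queue -> Run Queue flow used when observing with
# Astrid. Names are illustrative; the real system persists queues in the
# Observation Management database and executes SBs on the telescope.
from collections import deque

class ObservationManager:
    def __init__(self):
        self.job_queue = deque()   # SBs submitted, awaiting promotion
        self.run_queue = deque()   # SBs promoted, next in line to execute

    def submit(self, sb_name):
        """Step 2: submit a retrieved Scheduling Block to the Job Queue."""
        self.job_queue.append(sb_name)

    def promote(self):
        """Step 3: promote the oldest job to the Run Queue (FIFO)."""
        self.run_queue.append(self.job_queue.popleft())

    def run_next(self):
        """Take the SB at the head of the Run Queue for execution."""
        return self.run_queue.popleft()

om = ObservationManager()
om.submit("pointing_sb")
om.submit("map_sb")
om.promote()
print(om.run_next())       # -> pointing_sb
print(list(om.job_queue))  # -> ['map_sb']
```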
Observing with Astrid
* In the unified environment of Astrid, you can edit SB, submit SB, monitor SB progress, check GBT status, and view Continuum data with the Data Display.
Scan Types & Observing Directives
Observing API
Scan Types
AutoFocus, AutoPeak, AutoPeakFocus, Focus, Nod, Peak, Slew, Track, OffOn, OffOnSameHA, OnOff, OnOffSameHA, Tip, DecLatMap, DecLatMapWithReference, PointMap, PointMapWithReference, RALongMap
Observing Directives
Python-style comments, Annotation, Comment, Configure, Balance, Break, execfile, DefineScan, SetSourceVelocity, SetValues, GetValue
* Documentation available on the GB Wiki at Observing.ScanTypes & Software.ObservingDirectives
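A hedged sketch of what a Scheduling Block built from these scan types and directives might look like. The stub functions below merely record calls so the sketch is self-contained; the real directives execute on the telescope, and the call signatures shown (e.g. OnOff's arguments) are illustrative assumptions, not the documented Observing API:

```python
# Sketch of a Scheduling Block. The stubs stand in for the real Observing
# API; signatures are illustrative only.
executed = []

def Configure(config_file):    # stand-in for the Configure directive
    executed.append(("Configure", config_file))

def Balance():                 # stand-in for the Balance directive
    executed.append(("Balance",))

def Comment(text):             # stand-in for the Comment directive
    executed.append(("Comment", text))

def OnOff(source, scan_time):  # stand-in for the OnOff scan type
    executed.append(("OnOff", source, scan_time))

# --- Scheduling Block body (Python-style comments are themselves allowed) ---
Configure("my_continuum_config.py")  # hypothetical configuration file
Balance()                            # balance the IF system before scanning
Comment("Starting OnOff scans on 3C286")
OnOff("3C286", 60)                   # one on/off pair, 60 s per scan

print([step[0] for step in executed])
```

Because an SB is an ordinary Python script, constructs like loops are available too, though (as noted under Lessons Learned) long SBs and "for" loops are discouraged.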
AutoFocus, AutoPeak & AutoPeakFocus
• Syntax (note: all parameters are optional):
– AutoFocus(source, frequency, flux, radius)
– AutoPeak(source, frequency, flux, radius)
– AutoPeakFocus(source, frequency, flux, radius)
• How does it work?
1. Finds current beam location and receiver
2. Configures using the standard continuum configuration
3. Selects a nearby calibrator (uses measured sky system temperature for Peaks)
4. Balances
5. Sets the source name (comes from Jim Condon's catalog & uses J2000 syntax)
6. Peaks/Focuses
• Parameter Info:
– source is a string specifying the name of a particular source in the pointing catalog to be used for calibration. The default is None.
– frequency is a float specifying the observing frequency in MHz. The default is the rest frequency used by the standard continuum configuration cases.
– flux is a float specifying the minimum acceptable calibration flux in Jy at the observing frequency. The default is 20 times the continuum point-source sensitivity.
– radius is a float. The routine selects the closest calibrator within the radius (in degrees) having the minimum acceptable flux. The default is 10 degrees.
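The default-parameter and calibrator-selection behavior described above can be sketched as follows. The catalog format, sensitivity value, and rest frequency here are illustrative stand-ins for values the real routine reads from Jim Condon's catalog and the continuum configuration, and the real routine also configures, balances, and drives the telescope:

```python
# Simplified sketch of AutoPeakFocus parameter handling; not the real routine.
def AutoPeakFocus(source=None, frequency=None, flux=None, radius=10.0,
                  catalog=(), continuum_sensitivity=0.05, rest_frequency=1400.0):
    """All parameters are optional. `catalog` entries are hypothetical
    (name, flux_jy, separation_deg) tuples; the last three keyword
    arguments stand in for values read from the system configuration."""
    if frequency is None:
        frequency = rest_frequency         # standard continuum rest frequency
    if flux is None:
        flux = 20 * continuum_sensitivity  # 20x point-source sensitivity
    if source is not None:
        return source                      # observer named a calibrator
    # Otherwise: closest calibrator within `radius` deg above the flux cutoff.
    candidates = [(sep, name) for name, f, sep in catalog
                  if f >= flux and sep <= radius]
    return min(candidates)[1] if candidates else None

cat = [("J1230+1223", 5.0, 2.0), ("3C286", 15.0, 8.0)]
print(AutoPeakFocus(catalog=cat))  # -> J1230+1223 (closest above 1 Jy cutoff)
```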
Lessons Learned: Deploying Software on Working Telescope
• Technology Transfer
– Astrid deployment delayed because technology transfer between Software Development staff and Observing Support staff is taking much longer than originally planned
• Staged Deployment
– Program Layer not yet implemented
– Observers' expectations already influenced by previous observing experiences at the GBT
– Abuse at the SB layer:
• Long SBs
• "for" loops
• Training & documentation provided to encourage best practices
Lessons Learned: Deploying Software on Working Telescope
• Up-front preparation
– Major paradigm shift for GBT observers
– Observing strategies can be developed minutes before observing, or even on the fly, but planning minimizes user error and lost telescope time
– Constructing SBs requires forethought:
• Authorized projects must be entered into the database ahead of time
• Observer names must be entered into the database ahead of time and associated with their projects
• SBs need to be validated for observational integrity
– Ultimately will make the most efficient use of telescope time
• Dedicated Forum for Software Development Response
– Captures valuable user feedback
– Visible indicator to Support Staff of Software Development's response to user-initiated suggestions/issues
Lessons Learned: Aligning Development with Organizational Goals
• Remote observing needs require simplified application startup
– Make the most efficient use of bandwidth
– Reduce the number of applications needed to observe
– One of the major issues driving the development of Astrid
– Early tests using VNC very promising
• Importance of offline validation a priori
– To reduce lost telescope time, SBs must be created before the scheduled observation
– Validation currently catches syntactic errors ahead of time
– Plans to expand the Validator to further reduce time lost to non-syntactic user error, e.g. passing an illegal file to Configure
• Interactive observing is still a requirement
– Support commissioning activities, e.g. new receivers
– Support pulsar observations, e.g. balancing outside a SB
– Astrid's Python console provides access to HLAPIs that supplement observing intent captured in SBs, providing on-the-fly access to the telescope
Lessons Learned: Additional Requirements
• Project Data Directory
– Missing mechanism for specifying the project data directory
– One project, many sessions
– Sessions kept in separate directories
– Sessions may have overlapping subscan numbers
• Need for Online / Online (monitor only) / Offline Modes
– One user should have control of the telescope at a time
– One or more users would like to see what is going on, e.g. support staff, operators, collaborators
– Operators have ultimate control
– Users with upcoming observations need a mechanism for validating SBs and submitting them to the database
• Additional Control Mechanisms
– E.g. device manager on/off (cope with legacy/visitor equipment)
– How much direct interaction with the control system should be enabled in the technology?
• Usability Issues and Bugs
– Accumulating user feedback, which will be prioritized and implemented as resource allocations permit
Enabling “Full Functionality” in 2005
Described at http://wiki.gb.nrao.edu/bin/view/Software/ObservingToolsRequests
• Institute Online/Offline Modes – C3
– Provides the appropriate level of access to users at each stage of observing: Preparation, Execution, & Data Reduction
• Improve Responses to Stops, Aborts & Control System Failures – C4
• Moving Sources & Source Catalogs – C5, C6
– Eliminate the requirement for users to set source names and velocities
– Eliminate the need for users to create their own source catalogs unless desired
• Improved SB Submission & FIFO Queueing Model – C6, C7
– SB observing with Astrid currently requires much mouse clicking
– SB Job/Run Queue modifications to support batch or interactive SB submission
– Provide SB management capabilities to users
• Enable Offline Validation of SBs using a full-telescope software simulator bound to production control system software – C8
Recap of Key Issues Remaining
1. Coverage of all GBT Standard Observing Modes (balancing, ephemerides, etc.)
2. Reliability, including enabling smooth recovery from control system failures or user-generated aborts, under all circumstances
3. Enabling Science Program Management
– In high demand! Users are finding creative ways to build SBs now which simulate the functionality they would be provided at the Science Program level:
• Global variables
• Management of long (many-hour) observing sessions
• Existing (JAC) solution now / testbed for ALMA implementation; transition to the ALMA OT & Scheduler when mature
4. Implementing a GUI to build Scheduling Blocks
– Perceived by some as the single most important item to deliver "ease of use"
– Functionality is critically tied to delivery of Science Program management
– Java prototype built last year looks very similar to the ALMA OT, but builds single SBs and does not manage Science Programs
– Should we allocate time to do a gap analysis and complete this? Should we reverse engineer SBs to the ALMA/JCMT GUI? Should we do this in 2006, or is it possible to get the functionality earlier?
5. Clean differentiation between Science and Telescope Domains (Technical Issue)
– SBs are in the Science Domain
– Expert users require more advanced interaction with the control system
– How best to abstract these needs?
Conclusions
• Because of cooperative work between SDD and scientists this year, we can expect a complete transition of GBT observing for all standard observing modes to the Scheduling Block based system by the end of 2005
• The up-front planning required of observers at that point will significantly reduce the burden on scientific staff
• Next year, focus can turn to usability issues and the Science Program level
• The project came very close to collapse in December/January (resources stretched too thin; the system made available in 9/04 needed reliability improvements). Support staff agreed to adopt the SB vision for operational use in early 2005, and have since participated in improving the tools and ensuring readiness for release to visiting observers. This came with some growing pains but has ultimately been productive
• Early adoption of Scheduling Blocks without higher-level infrastructure has already paid off hugely