Application Integration: Capabilities and Process
Goals
• Explain the importance of a single software stack for all BIRN servers
• Describe software integration, deployment and upgrade processes (in random order)
• Discuss possible extensions and improvements
BIRN-CC Software Integration Responsibilities
• System Architecture
• Development and Integration
– System software
– Application software
• Deployment
– Reliability
– Scalability
– Consistency
– Flexibility
BIRN-CC Responsibilities (cont’d)
• Maintenance and Support for a Continually Growing Community
– 24/7 production quality grid
– Monitoring
– Troubleshooting
– Help Desk
– Training
Applications/Servers
• BIRN 1.0
– GPOP
– GComp
– NAS
– SRB
– Oracle
• BIRN 2.0 added
– Nagios
– Condor
– Scientific Applications
• BIRN 3.0 added
– More Sciapps
– Oracle
– Postgres
– Tomcat
– Gridsphere
– HID
– Mediator
– XNAT (in progress)
– Xwiki (in progress)
Sources for BIRN Software
• Source code, RPMs and tarballs from:
– Participant CVS/SVN repositories
– Participant websites
– Open-source community
• Configuration instructions
– Make scripts
– Hand-edit config files
– Website instructions
– BIRN knowledge-base
Currently integrated on GComp
• AFNI
• AIR
• Brains2
• Caret
• Freesurfer
• FSL
• LDDMM
• LONI
• Mipav
• Slicer
If Done by Hand at 25+ BIRN Sites
CHAOS!
The BIRN Software Stack to the Rescue
• Every application/server gets
– Same software version
– Same software configuration
– Same security
• Fewer man-hours spent
– Debugging
– Propagating fixes
• Synergy
RPMs - a Cornerstone in the Path to Sanity
• Create users
• Unpack tarballs
• Modify configuration files
• Adjust firewalls
• Set up databases
• Start/stop services
• Make changes based on “zone”, “site”, etc.
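A minimal sketch of how an RPM scriptlet can automate these steps. The package name, user, port, config file, and zone file below are purely illustrative, not actual BIRN artifacts:
%post
/usr/sbin/useradd -r exampleapp 2>/dev/null || :                    # create users
sed -i 's/^#Port 8080/Port 8080/' /etc/exampleapp.conf              # modify configuration files
/sbin/iptables -I INPUT -p tcp --dport 8080 -j ACCEPT               # adjust firewalls
/sbin/chkconfig --add exampleapp && /sbin/service exampleapp start  # start/stop services
[ -f /etc/birn-zone ] && . /etc/birn-zone || :                      # make changes based on "zone"/"site"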
The RPM Database
• RPM database on each server knows…
– What has been installed
– What functionality is available from where
• Examples
[root@ncmir-gcomp ~]# rpm -qa | grep slicer
slicer-2.5.1-0
[root@ncmir-gcomp ~]# rpm -q --requires slicer
/usr/local/bin/tclsh
libGL.so.1
[root@ncmir-gcomp ~]# rpm -q --whatprovides libGL.so.1
xorg-x11-Mesa-libGL-6.8.2-1.EL.13.25.1
Specific Examples for BIRN Sciapps
• 3DSlicer
– RPM from tarball
• Brains2
– Installs RPM from site
– Creates symbolic link
• Caret
– RPM from .zip file
– Permissions corrected
• LDDMM
– RPM created from tarball
– Environment variables set
• MIPAV
– RPM from spec file and tarball provided by site
– Verification script
[root@ncmir-gcomp ~]# for i in afni air brains2 caret freesurfer fsl lddmm loni mipav slicer ; do rpm -qa | grep -i $i ; done
AFNI-2006_03_21_1314-0
brains2-RedHatEnterpriseWS4-20060327-032445
caret-5.50-1
freesurfer-3.0.2-0
fsl-3.3.5-0
lddmm-1.0.1-3
LONI_Pipeline_Client-1-0
LONI_Pipeline_Server-1-0
mipav-1.59-1
slicer-2.5.1-0
YUM (Yellowdog Updater, Modified)
• Alternative to up2date from RedHat
• Used for automated software updates
• Relies on RPM database
[root@ncmir-gcomp ~]# for i in afni air brains2 caret freesurfer fsl lddmm loni mipav slicer ; do yum list | grep -i $i ; done
AFNI.i386                          2006_03_21_1314-0   installed
brains2-RedHatEnterpriseWS4.i386   20060327-032445     installed
caret.i386                         5.50-1              installed
freesurfer.i386                    3.0.2-0             installed
fsl.i386                           3.3.5-0             installed
lddmm.i386                         1.0.1-4             installed
LONI_Pipeline_Client.i386          1-0                 installed
LONI_Pipeline_Server.i386          1-0                 installed
mipav.i386                         1.59-1              installed
slicer.i386                        2.5.1-0             installed
slicer.i386                        2.6-0               birncentral-dev-
YUM-my features
• Specific packages can be excluded temporarily
• Packages can be “installonly”, never updated
• Repository is same as that used for installation of new servers
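The first two behaviors map onto standard yum options; a hypothetical excerpt from /etc/yum.conf (the package names are illustrative, not BIRN's actual settings):
[main]
# temporarily exclude a specific package from updates
exclude=slicer*
# "installonly" packages are installed side by side and never updated in place
installonlypkgs=kernel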
• Development & Integration
– CVS & SRB for version control facilitate collaborative development
– Automated build and deployment mechanisms for repeatability and scalability
– Flexible development environment to meet changing requirements
– Continually improving infrastructure in terms of repeatability, tracking, monitoring, automation, and scalability
Software Standards
Features of Software Build Process
• “Normal” source code from CVS
• Large source, especially binary, from SRB
• Automatic tagging during build
• Tags match RPM version info
• Updates automatically added to YUM
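The build is automated by the Makefiles described later; the following is only a sketch of the kinds of commands involved, with invented package names, tags, and paths:
[root@build ~]# cvs checkout BIRN/rocks/src/roll/sciapps              # "normal" source and build files from CVS
[root@build ~]# Sget exampleapp-1.0.1-src.tar.gz                      # large/binary sources come from SRB
[root@build ~]# cvs tag exampleapp-1_0_1-3                            # tag matches the RPM version info
[root@build ~]# rpmbuild -ba exampleapp.spec
[root@build ~]# cp /usr/src/redhat/RPMS/i386/exampleapp-1.0.1-3.i386.rpm /path/to/yum/repository/
[root@build ~]# createrepo /path/to/yum/repository                    # the update becomes visible to YUM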
Integrating/Updating via RPM
• Commit RPM to BIRN CVS repository
• Where it goes matters
– Which roll (sciapps, freesurfer, …)
– Which architecture(s) (i386, noarch, ia64, …)
• Basename can’t change
• If integrating, add nodes and graph files
• Example for brains2
– BIRN/rocks/src/roll/sciapps/RPMS/i386/brains2-RedHatEnterpriseWS4-20060327-032445.i386.rpm
– BIRN/rocks/src/roll/sciapps/nodes/brains2.xml
– BIRN/rocks/src/roll/sciapps/graphs/default/sciapps.xml
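A hypothetical command sequence for the brains2 example above, using the paths from this slide (the commit message is illustrative):
[user@dev BIRN]$ cvs add -kb rocks/src/roll/sciapps/RPMS/i386/brains2-RedHatEnterpriseWS4-20060327-032445.i386.rpm
[user@dev BIRN]$ cvs add rocks/src/roll/sciapps/nodes/brains2.xml
[user@dev BIRN]$ cvs commit -m "Integrate brains2 RPM, node and graph files into the sciapps roll"
The -kb flag marks the RPM as binary so CVS does not attempt keyword expansion on it; the existing graphs/default/sciapps.xml is edited and committed the same way.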
Updating/Integrating from CVS/SVN/SRB
• Import/Update/Upload source
– CVS/SVN source repositories
– Large tarballs, especially binaries, into SRB
• Update BIRN CVS files
– Makefile
– Version.mk
– *.spec.in
• Examples
– Slicer for tarball in SRB
– Gridsphere for CVS/SVN
Updating BIRN CVS files
• Create/Update Makefile
– Checks out/Downloads source
– Prepares for build
– Has build rules for the spec.in file, if they are not in the spec.in file
• Create/Update version.mk
– Sets up variables used in building and tagging
– Has version # to track source changes
– Has revision # to track spec.in changes
– Important for YUM updating
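A minimal version.mk sketch along these lines (the variable names and values are illustrative, not the exact BIRN conventions):
NAME    = exampleapp
VERSION = 1.0.1
# the revision/release number is bumped when only the spec.in changes,
# so YUM still sees a newer package and updates it
RELEASE = 3
TARBALL = $(NAME)-$(VERSION).tar.gz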
Updating BIRN CVS files (cont’d)
• Create/Update *.spec.in
– Standard boilerplate/template to start from
– Pre-, install, Post- sections
– Sets up requirements for building and running
– http://fedora.redhat.com/docs/drafts/rpm-guide-en/
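A bare-bones spec.in skeleton of the kind described above (every value is illustrative; the RPM guide linked above covers the full syntax):
Name:      exampleapp
Version:   1.0.1
Release:   3
Summary:   Example scientific application packaged for BIRN
License:   distributable
Group:     Applications/Engineering
BuildRoot: %{_tmppath}/%{name}-root
%description
Illustrative skeleton only; real spec.in files take name/version/release from version.mk.
%install
mkdir -p $RPM_BUILD_ROOT/usr/local/exampleapp
# copy the unpacked application into the build root here
%post
# e.g. fix permissions, set environment variables, start the service
%files
/usr/local/exampleapp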
Software Deployment Paradigm
• Deployment of software at different levels of maturity to independent, isolated areas
– Development
• Supports short-term specialized testing (network issues, database upgrades, etc.)
• Integrate testbed software prior to beta testing
– Staging
• Provides area for complete, end-to-end testing
• Allows demonstration of latest software development efforts
– Prior to production deployment
– Without disruption to production systems
– Production
• Allows reliable, repeatable, consistent infrastructure for collaboration of neuroscience researchers
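One way this isolation can surface on an individual server is through the YUM repository it points at; a hypothetical /etc/yum.repos.d/birn.repo entry for a development box (the repository id and URL are invented, though a 'birncentral-dev-' repository does appear in the yum listing earlier):
[birncentral-dev]
name=BIRN central development repository
baseurl=http://birncentral.example.org/yum/dev/
gpgcheck=0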
BIRN Uses Rocks
• Efficient, Reliable, Repeatable, Scalable Deployment
• Systems can be reinstalled, reconfigured or duplicated quickly and easily
• Consistent software installation and configuration without sacrificing flexibility
• Standardized software delivery process
• Server functionality is organized into rolls
– Reusable
– Generic or specific
• Feedback incorporated in new Rocks releases by the SDSC Rocks team
Kicking Off Server Installation
• Choose rolls
• Partition & format the disk (/export is preserved)
• Enter network information & hostname
• Select operating environment (dev/stage/prod)
• Select timezone
• Enter root password
• Walk away
What the Blazes is a Roll?!
• A roll is a complete, reusable “set” of software (RPMs) needed to fulfill the functionality definition of the roll
• Examples
– CentOS: Provide RedHat O/S
– ganglia or nagios: Monitoring tools
– condor or srb: Generic condor / SRB
– birncondor or birnsrb: BIRN specific modifications
– sciapps: BIRN specific neuroscience tools
Server Functionality Defined by Roll Selection
• BIRN Gridsphere Portal roll list includes:
– CentOS, birn, area51, nagios, etc.
– java (to compile gridsphere code)
– tomcat (portlet container)
– gridsphere (basic)
• gridsphere and gridportlets RPMs
– birnportal (adds BIRN specific functions)
• birnportal, birnportlets, gama RPMs
– srbportlets (adds SRB specific functions)
– postgres (database)
• Contacts
– [email protected]
– [email protected]
– Vicky Rowley, [email protected]
– Kennon Kwok, [email protected]
• THE END