EUROPEAN MIDDLEWARE INITIATIVE
DSA2.4 - CONTINUOUS INTEGRATION AND
CERTIFICATION TESTBEDS
EU DELIVERABLE: D4.4
Document identifier: EMI-DSA2.4-1277550-Integration_Testbeds_v1.0.doc
Date: 31/07/2010
Activity: SA2
Lead Partner: INFN
Document status: Final
Document link: http://cdsweb.cern.ch/record/1277550?ln=en
Abstract:
This document describes the distributed certification testbeds for internal and acceptance certification, and their access and usage requirements.
Copyright notice:
Copyright (c) Members of the EMI Collaboration. 2010.
See http://www.eu-emi.eu/about/Partners/ for details on the copyright holders.
EMI (“European Middleware Initiative”) is a project partially funded by the European Commission. For more information on the project, its partners and contributors please see http://www.eu-emi.eu.
This document is released under the Open Access license. You are permitted to copy and distribute verbatim copies of this document containing this copyright notice, but modifying this document is not allowed. You are permitted to copy this document in whole or in part into other documents if you attach the following reference to the copied elements: "Copyright (C) 2010. Members of the EMI Collaboration. http://www.eu-emi.eu ".
The information contained in this document represents the views of EMI as of the date they are published. EMI does not guarantee that any information contained herein is error-free, or up to date.
EMI MAKES NO WARRANTIES, EXPRESS, IMPLIED, OR STATUTORY, BY PUBLISHING THIS DOCUMENT.
Delivery Slip
From: Danilo N. Dongiovanni (INFN / SA2), 23/09/2010
Reviewed by: Oliver Keeble (CERN / SA1), Morris Riedel (JUELICH / JRA1), Balazs Konya (LU / NA1), 03/12/2010
Approved by: PEB, 03/12/2010
Document Log
Issue | Date | Comment | Author / Partner
0.1 | 28/07/2010 | First draft to the attention of SA2 participants | Danilo N. Dongiovanni
0.2 | 04/08/2010 | Integrated feedback from Cristina Aiftimiei, Jozef Cernak and Tomasz Wolak. Added Executive Summary. | Danilo N. Dongiovanni, Cristina Aiftimiei, Jozef Cernak, Tomasz Wolak
0.3 | 06/09/2010 | Added EMI Testbed twiki automatic update notifications (Sec. 4.1.7). Added use case of debugging information accessibility for tester users (Sec. 4.1.4). Added the notification of new Release Candidate product versions to the EMI Testbed team (Sec. 4.1.5). | Danilo N. Dongiovanni
0.4 | 07/09/2010 | Integrated feedback from Jozef Cernak on the amendment procedure (Sec. 1.5). Added reference on testing guidelines (Sec. 3.3.1). | Danilo N. Dongiovanni, Jozef Cernak
0.5 | 09/09/2010 | Reviewed | Alberto Aimar
0.6 | 16/09/2010 | Changed references. Highlighted open issues in the Executive Summary. Mentioned in Sec. 5 the testing VO created in the EMI testbed. Added the possibility of direct support requests to SA2.6 through Savannah. Added the plan to set up a shift infrastructure for incoming support requests (Sec. 4.1.6). | Danilo N. Dongiovanni
0.7 | 28/09/2010 | Reviewed | Balazs Konya
0.8 | 07/10/2010 | Added some term definitions to the terminology table. Added new section 4.1 to clarify the usage of the EMI Testbed and subset testbed terms throughout the document. Added section reporting the current implementation status of the testbed. Added EMI Testbed operation solutions and practice reference table. Changed Executive Summary. | Danilo N. Dongiovanni
0.9 | 21/10/2010 | Reviewed | Morris Riedel
0.10 | 28/10/2010 | Added paragraph 5.5 defining inputs expected from the JRA1.7 task. | Danilo N. Dongiovanni
0.11 | 11/11/2010 | Reviewed | Oliver Keeble
0.12 | 12/11/2010 | Minor corrections to text or presentation layer. Added integration testing definition. Mentioned EGI staged rollout procedure. Added reference on Release Management process. | Danilo N. Dongiovanni
0.13 | 18/11/2010 | Minor changes. Added reference to EMI products coverage table in section 7.1. | Danilo N. Dongiovanni
0.13 | 03/12/2010 | Final version accepted by all reviewers | Danilo N. Dongiovanni
1.0 | 15/12/2010 | Final version for submission | Alberto Di Meglio

Document Change Record
Issue | Item | Reason for Change
TABLE OF CONTENTS
1. INTRODUCTION
   1.1. PURPOSE
   1.2. DOCUMENT ORGANISATION
   1.3. APPLICATION AREA
   1.4. REFERENCES
   1.5. DOCUMENT AMENDMENT PROCEDURE
   1.6. TERMINOLOGY
2. EXECUTIVE SUMMARY
3. EMI INTEGRATION TESTBED REQUIREMENTS IDENTIFICATION
   3.1. SA2.6 TASK DEFINITION
   3.2. SURVEY OF TESTING RESOURCES AND PROCEDURES CONVERGING INTO EMI
   3.3. REQUIREMENTS IDENTIFICATION
      3.3.1 Envisaged integration testing scenarios
      3.3.2 Effort definition, Procedures, Communication and Documentation needs
4. EMI TESTBED IMPLEMENTATION MODEL
   4.1. EMI TESTBED OR TESTBEDS?
   4.2. EMI INTERNAL INTEGRATION TESTBED SOLUTION
      4.2.1 Hardware Resource dimensioning
      4.2.2 Product Teams members effort contribution
      4.2.3 Testbed monitoring solutions
      4.2.4 Resources access policy
         4.2.4.1 Testing Certificates handling
         4.2.4.2 Usage Monitoring and Concurrent usage Requirements
      4.2.5 Testbed update and evolution procedure
      4.2.6 Activity tracking, communication channels and procedures
      4.2.7 Public documentation
   4.3. LARGE-SCALE TESTBED IMPLEMENTATION PLAN
      4.3.1 Implementation plan
5. OPEN ISSUES
   5.1. COMMON AUTHENTICATION FRAMEWORK ACROSS MIDDLEWARES
   5.2. SOFTWARE PRODUCT VARIABLES TO CONSIDER
   5.3. RESOURCE DISCOVERY AND PUBLISHING OVER DIFFERENT MIDDLEWARES
   5.4. TOOLS AND AUTOMATION
   5.5. INPUTS FROM JRA1
6. EMI TESTBED EVOLUTION AND FUTURE REVISIONS OF THE MODEL
7. EMI TESTBED IMPLEMENTATION STATUS
   7.1. INSTALLED SERVICES INVENTORY
      7.1.1 ARC
      7.1.2 gLite
      7.1.3 UNICORE
      7.1.4 dCache
   7.2. TESTBED OPERATIONS QUICK REFERENCE
8. CONCLUSIONS
1. INTRODUCTION
1.1. PURPOSE
This document describes the implementation strategy for the EMI integration testing testbeds, taking into account integration testing needs, the resources made available by task participants, and envisaged future evolutions.
In addition to the testbed implementation model, documentation, coordination procedures, communication channels and activity tracking tools are defined, both for the EMI internal testbed and for the large-scale testbed operated in collaboration with external partners (the latter at a less detailed level at this initial stage of the project).
1.2. DOCUMENT ORGANISATION
Sections 1 and 2 are the introduction and the executive summary, respectively.
Section 3 focuses on the collection and definition of requirements and targets for the EMI integration testbed. Inputs from the EMI description of work and from testbed customers (collected through surveys and meetings with developer representatives) have been summarized. These inputs have then been matched against the SA2.6 task targets and against the resources and consolidated testing procedures available across the three middlewares (collected through surveys and meetings with ARC, gLite and UNICORE task participants).
Section 4 is the key section, presenting the EMI internal integration testbed implementation plan, including the implementation model for the testbed scenarios, documentation, coordination procedures, communication channels and activity tracking tools. A preliminary model for a large-scale testing testbed is also proposed.
Section 5 defines some key issues that have a strong impact on the testbed implementation and that require input from, or discussion with, other EMI work packages.
Section 6 reports on the evolution and revision of the testbed model.
1.3. APPLICATION AREA
The document covers the EMI software integration testbed implementation, that is, all the resources and coordination effort needed to set up an infrastructure on which integration tests of software products can be performed. Since the implementation, update and management of the test infrastructure strictly depend on how the software product certification and release processes are defined, SA2.6 expects continuous collaboration and interaction with the JRA1 and SA1 activities.
1.4. REFERENCES
R1 EMI DoW
https://twiki.cern.ch/twiki/pub/EMI/EmiDocuments/EMI-Part_B_20100624-PUBLIC.pdf
R2 SA2.6 Survey about testing procedures and testbed status across middleware projects
https://twiki.cern.ch/twiki/bin/view/EMI/SurveyResults
R3 ARC project testing references
http://vls.grid.upjs.sk/testing/
R4 Atlassian Bamboo for the UNICORE project
https://UNICORE-dev.zam.kfa-juelich.de/bamboo/
R5 Yaimgen utility webpage at CERN
https://svnweb.cern.ch/trac/yaimgen/
R6 ETICS project homepage
http://etics.web.cern.ch/etics/
R7 EGEE project pre-production service homepage
http://egee-pre-production-service.web.cern.ch/egee-pre-production-service/
R8 CERN VNode utility
https://twiki.cern.ch/twiki/bin/view/EGEE/VNode
R9 NAGIOS Alarming system project homepage
http://www.nagios.org/
R10 EGEE project Operations Automation Team homepage
https://twiki.cern.ch/twiki/bin/view/EGEE/OAT_EGEE_III
R11 CERN's Savannah bug/task tracking tool instance
https://savannah.cern.ch/
R12 EMI Product Teams list
https://twiki.cern.ch/twiki/bin/view/EMI/EmiProductTeams
R13 EGEE project pre-production service: Pilot Service Testing procedure
https://twiki.cern.ch/twiki/bin/view/EGEE/PilotServices
R14 Jira issue and project tracking tool
http://www.atlassian.com/software/jira/
R15 Twiki automated notification tool
https://twiki.cern.ch/twiki/bin/view/EMI/WebNotify
R16 EMI Certification and Testing Guidelines
https://twiki.cern.ch/twiki/bin/view/EMI/EmiSa2CertTestGuidelines
R17 ARC Project
http://www.knowarc.eu
R18 gLite Project
http://www.glite.eu
R19 UNICORE Community Web page
http://www.UNICORE.eu
R20 dCache Project
http://www.dcache.org/
R21 Global Grid User Support
http://www.ggus.org/
R22 European Grid Initiative
https://www.egi.eu
R23 Release Process Guidelines
https://twiki.cern.ch/twiki/bin/view/EMI/EmiSa2ChangeManagementGuidelines
1.5. DOCUMENT AMENDMENT PROCEDURE
Amendments, comments and suggestions should be sent to [email protected] or to the author of this document. The document can be amended by the Quality Assurance team (SA2) further to any feedback from the other teams. Minor changes, such as spelling corrections, content formatting or minor text reorganisation not affecting the content and meaning of the document, can be applied by SA2 without prior review. Other changes must be peer reviewed and submitted to the PEB for approval.
When the document is modified for any reason, its version number shall be incremented accordingly.
The document version number shall follow the standard EMI conventions for document versioning.
All versions of the document shall be maintained using the document management tools selected by
the EMI project.
1.6. TERMINOLOGY
Work Package (WP): The EMI project is composed of two Networking Work Packages (NA1 and NA2), two Service Work Packages (SA1 and SA2) and one Joint Research Work Package (JRA1).

Product Team (PT): Product Teams are teams of software developers and testers fully responsible for the successful release of a particular software product (or group of tightly coupled related products) compliant with agreed sets of technical specifications and acceptance criteria.

Software Product Validation and Testing: The process responsible for the validation and testing of a new or changed IT service. Service validation and testing ensures that the IT service matches its design specification and will meet the needs of the business. The testing activity performed by PTs involves different classes of tests:
- Unit tests: test the correctness of individual units or of a group of related units.
- Deployment tests: verify that the component can be properly installed and configured on all the supported platforms.
- Functionality tests: cover the majority of the tests done on a component, verifying its correct behaviour (according to specifications).
- Regression tests: verify specific bug fixes.
- Performance tests: verify the performance of a component, which in many cases involves measuring the response time of specific service requests. Performance tests should verify how well the service behaves under nominal workloads.
- Scalability tests: verify that the component behaves according to its specifications when one of the variables that can affect its performance is varied. Load and stress tests are included.
Integration Testing: Integration refers to a setup where one software component inherently uses another component to provide a dedicated functionality. The way a component is integrated into another varies, including the use of communication protocols (e.g. Web services via SOAP) or simply the use of libraries. From the definition above, integration tests are functionality tests involving the interaction among different components, generally developed by different Product Teams, which may be deployed on different hardware instances or as modules on the same instance.
Certification of Software Product: The action performed by a Product Team on a release candidate in order to "certify" that the software has been verified (the software has been built correctly) and validated (the right software has been built). Verification and validation involve testing.
2. EXECUTIVE SUMMARY
The present document describes the EMI continuous integration and certification testbed implementation model. To define an effective implementation strategy, SA2.6 integrated into a homogeneous framework the inputs, ideas and requirements of the main middleware projects, taking into account: i) the EMI Description of Work targets for the task; ii) the integration testbed customer requirements (collected through meetings and surveys with developer representatives); and iii) the task participants' resources, established procedures and experience (collected through meetings and surveys, and grouped per middleware: ARC, gLite and UNICORE). Section 3 therefore identifies the requirements needed to set up suitable EMI integration testbeds from a medium-term perspective, in particular: the envisaged testbed scenarios (testing for minor releases, for major releases, for cross-middleware integration and at large scale); the needed infrastructural supplies (accessible hardware resources, connectivity, certificate handling, etc.); and the inter-project communication and effort coordination needs.
Taking into account both the requirements identified in section 3 and the pre-EMI approaches of the three middlewares (ARC, gLite, UNICORE) to integration testing and testbed implementation, section 4 defines the EMI internal integration testbed implementation plan, including: what SA2.6 actually provides to implement the testbed scenarios previously identified; the public documentation solution implemented (the EMI Testbed web page); the coordination procedures (for testbed installation, modification, configuration or generic support requests); and the communication channels and activity tracking tools (the mailing lists and the Savannah groups and squads defined). A preliminary model for a large-scale testing testbed is also proposed, even though that will be the object of the second quarter of the project. Although the initial implementation of the testbeds treats the four middleware distributions separately, effort was spent on harmonizing testbed access procedures, documentation and resource monitoring.
Open issues to the attention of readers: even though SA2.6 currently has limited knowledge of all possible EMI testbed use cases, section 5 is dedicated to identifying the main issues expected to have a strong impact on the testbed implementation and requiring input from, or discussion with, other EMI work packages. In particular: cross-middleware resource discovery and publication; the variables relevant to hardware dimensioning (releases, platforms, etc.); a common authentication/authorization framework for testbed accessibility; and automated testing evolutions (e.g. post-build testing and on-demand testbed infrastructure creation).
A section is then devoted to the evolution and revision of the testbed model, envisaging three main phases: the current phase of early convergence among middleware distributions; a second phase approaching the first EMI release, when the focus will move towards cross-middleware integration; and a third phase in which external partners, and integration testing for new user communities or Linux distributions, will become more and more important.
A final section reports the current testbed implementation status and a quick-reference how-to table about EMI Testbed operational aspects and implemented solutions.
3. EMI INTEGRATION TESTBED REQUIREMENTS IDENTIFICATION
The EMI SA2.6 task is in charge of providing testbeds for integration testing within the EMI project. The purpose of this section is to identify the requirements and possible scenarios for an integration testing testbed in a project, like EMI, that merges different grid computing middleware distributions.
The first step is to define a common dictionary concerning integration testing and testbeds:

Integration testing: in the present document, SA2.6 calls integration testing the phase, and the related activities, in which individual software modules are combined and tested as a group (a minimal illustration follows this list). SA2.6 expects this phase to come after an earlier certification phase in which software module building, installation, configuration and functionality testing in isolation have been carried out on Product Team hardware resources. Inputs on the definition of integration testing procedures are expected from the EMI task JRA1.7 (Integration and Interoperability) and will be integrated into the document when made available (see section 5 for details).

Integration testing testbed: a platform for testing EMI software products in a working environment. This definition implies a set of hardware resources deploying a variety of operating systems and software products, configured so as to allow inter-communication between services and to grant a realistic, production-like set of testing users concurrent access.

Software product: the unit that is the object of integration testing, interacting with other software in a production-like environment. Testing operations on a software product require its installation and configuration on dedicated hardware resources on which operating systems from the predefined set of supported platforms are deployed. A software product is typically available in several versions associated with a middleware release. A release falls into one of two main categories, minor or major, depending on whether backward compatibility may be broken (allowed only in major releases). For each middleware release SA2.6 assumes two available versions per software product: a Release Candidate (RC) version and a Production version, respectively the version that is going to be tested and the last version that passed the acceptance certification process for that release. From an integration point of view, a software product is then defined by a version tag and the associated release and platform. Other implementation differences not affecting the product's Application Programming Interfaces (APIs) are not taken into account for testbeds. For example, a given service version differing only in the underlying database implementation (e.g. MySQL or Oracle), with no changes at the level implementing integration with other services, can be considered a single software product from an integration testing point of view, implying that just one version will effectively be made available in the testbed.
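As a minimal illustration of this notion of integration testing (a sketch, not EMI test code), the fragment below exercises a hypothetical release-candidate client against a running stub of the service it integrates with, over a communication protocol, rather than testing it in isolation with mocks:

    import http.server
    import json
    import threading
    import urllib.request

    class StubJobService(http.server.BaseHTTPRequestHandler):
        """Stand-in for a 'production version' service the candidate client talks to."""
        def do_GET(self):
            body = json.dumps({"job_id": "42", "state": "DONE"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        def log_message(self, *args):  # keep test output quiet
            pass

    def query_job_state(base_url):
        """The 'release candidate' client code under test (hypothetical)."""
        with urllib.request.urlopen(f"{base_url}/jobs/42") as resp:
            return json.load(resp)["state"]

    def test_client_against_service():
        # Start the stub service on a free local port, run the client against it.
        server = http.server.HTTPServer(("127.0.0.1", 0), StubJobService)
        threading.Thread(target=server.serve_forever, daemon=True).start()
        try:
            assert query_job_state(f"http://127.0.0.1:{server.server_port}") == "DONE"
        finally:
            server.shutdown()

    test_client_against_service()
    print("integration test passed")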
The rest of the section is organized as follows: a first subsection reports the SA2.6 task definition as stated in the EMI DoW [R1]; a second subsection reports the results of a survey conducted among task participants to collect information about the current testing procedures, testbed resources and effort organization of the various middlewares converging into EMI; a final subsection translates the task goals into a list of requirements, infrastructural resources and procedures to be made available.
3.1. SA2.6 TASK DEFINITION
The EMI Description of Work states [R1]:
Task SA2.6: “This task consists in the setup and maintenance of distributed testbeds for the
project continuous integration and testing operations and the coordination and provision of
large-scale testbeds from collaborating resource providers.”
Success Criteria: Availability and reliability metrics of the execution nodes
Partners: INFN, CERN, CESNET, JUELICH, UPJS
The Key Performance Indicators for the SA2 testbeds are shown below.

CODE | KPI | Description | Method to Measure | Estimated Targets
KSA2.1 | SA2 Services Reliability | % uptime dependent only on the SA2 services themselves (individual KPIs for testbeds, repository, etc.) | Participating sites' monitoring tools | 99.00%
KSA2.2 | SA2 Services Availability | Total % uptime including the underlying suppliers (individual KPIs for testbeds, repository, etc.) | Participating sites' monitoring tools | 97.00%
KSA2.3 | Distributed Testbed Size | Number of CPUs available for distributed testing through collaborations with external providers (NGIs, sites, commercial providers, other projects, etc.) | Participating sites' monitoring tools | Year 1: 50 CPUs; Year 2: 200 CPUs; Year 3: 500 CPUs
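To make the two uptime-based KPIs concrete, the sketch below computes them from a hypothetical monthly monitoring record for one server; the record format and field names are assumptions for illustration, not part of the EMI monitoring tooling:

    from dataclasses import dataclass

    @dataclass
    class UptimeRecord:
        """Monthly monitoring summary for one testbed server (hypothetical format)."""
        total_hours: float              # hours in the measurement window
        downtime_hours: float           # total hours the server was unreachable
        supplier_downtime_hours: float  # downtime attributable to underlying suppliers

    def availability(r):
        """KSA2.2: total % uptime, including failures of underlying suppliers."""
        return 100.0 * (r.total_hours - r.downtime_hours) / r.total_hours

    def reliability(r):
        """KSA2.1: % uptime counting only outages caused by the SA2 service itself."""
        own_downtime = r.downtime_hours - r.supplier_downtime_hours
        return 100.0 * (r.total_hours - own_downtime) / r.total_hours

    r = UptimeRecord(total_hours=720.0, downtime_hours=10.0, supplier_downtime_hours=8.0)
    print(f"availability = {availability(r):.2f}%")  # 98.61%, above the 97.00% target
    print(f"reliability  = {reliability(r):.2f}%")   # 99.72%, above the 99.00% target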
3.2. SURVEY OF TESTING RESOURCES AND PROCEDURES CONVERGING INTO EMI
Among its initial activities, the SA2.6 task conducted a survey among task participants, grouped by middleware representation (ARC, gLite, UNICORE), in order to collect information about current testing procedures and practices and the available integration testbeds. The comparative results of the survey are available in [R2]. Convergence between the middlewares on testing procedures is beyond the scope of the SA2.6 task; therefore only the issues relevant for testbed purposes are reported here.
Testing procedures:
Testing procedures and practices: a very heterogeneous scenario was observed across the three middleware distributions converging into the EMI project (ARC, gLite and UNICORE). Links to public documentation on the currently used procedures can be found in the survey results document [R2]. Although the definition of a common testing procedure will be addressed in other EMI Work Packages, from a testbed point of view a common concern is the need for a clear definition of the "Release Candidate" version of a given software product for each release (minor or major), and of the point in the certification process at which integration testing starts.
Automated tools: there is a general request to keep and improve the results achieved so far in defining automated testing tools for the certification process. ARC has a well-defined framework and an automated testing tool [R3]. For gLite the existence of tools depends on the product (DPM and BDII use yaimgen [R5]; VOMS has automated tests in ETICS [R6]). UNICORE uses Atlassian Bamboo for continuous integration [R4]. This topic strongly overlaps with the EMI SA2.4 and JRA1 activities, so future updates of this document are foreseen concerning the automatic creation of the resources needed by automated testing.
A typical integration testing scenario involves a Release Candidate version of a software product tested against the released versions of the other software products. Rarer scenarios, involving a Release Candidate version tested against Release Candidate versions of other products, can also occur (typically when testing for an upcoming major release, when backward compatibility can be broken for several products at the same time).
Apart from some structured approaches in previous gLite experience (the EGEE III European project, the Pre-Production activity, Pilots [R7]), a defined framework is missing for involving user communities in the testing of upcoming versions of software products, where the setup of a dedicated testbed for stress tests or large-scale tests is required. The EGI project [R22] defines a staged-rollout procedure, with pre-deployment in production of released software products at a set of volunteer early-adopter sites, but it is a final YES/NO acceptance test rather than a collaborative testing phase.
Testing effort is an issue. All the middleware projects (ARC, gLite, UNICORE) used to have a tester team carrying out the testing activity. Much will now be left to PT effort, with possible problems in keeping the current standard of testing. Moving to a distributed infrastructure able to support tests of the various middlewares' components will imply additional effort for coordinating concurrent tests (in terms of access to resources) and for testbed maintenance.
Testbed implementation:
Number of software components to deploy, not considering cases of service co-hosting:
◦ ARC: 11 products
◦ gLite: 19 products in Release 3.1 (SL4); 17 products in Release 3.2 (SL5) (without considering cases like the mentioned gLite VOMS MySQL/Oracle database implementations)
◦ UNICORE: 11 products
Platforms: ARC (Fedora, Debian, Red Hat, Ubuntu, Windows and Mac OS X), gLite (SL4, SL5), UNICORE (Linux, Windows, Mac OS X).
Virtual servers are in general an applicable solution for deploying services, but physical servers are required for some services, as well as for performance/stress testing purposes.
Host certificates are required in ARC and gLite; in UNICORE, container certificates are not tied to the host. Special CAs for testing are available (ARC, CERN) to issue dummy certificates for testing purposes; otherwise certificates are generated manually.
Currently available hardware resources among the task partners:
◦ ARC: ~14 virtual nodes + 9 physical servers
◦ gLite: CERN (~15 virtual + 17 physical), INFN (~25 physical + 10 virtual), CESNET (~10). It is still not clear whether this covers actual needs (other sites were involved before).
◦ UNICORE: JUELICH (~10 virtual). Covers actual needs.
◦ dCache: 6 dCache servers + a gLite BDII service
Service usage by users: no policy is currently available in any middleware, either for scheduling usage or for monitoring it.
Server monitoring: available in ARC (Ganglia + GridMonitor and Nagios) and gLite (Nagios); no tools within UNICORE.
Automated tools for on-demand virtual node creation: the VNode portal [R8] is available at CERN. These tools are potentially very interesting since, with the automatic creation and configuration of a pool of virtual servers deploying a set of services "talking to each other", the integration testing stage could become a default post-build operation implemented with batch scripts.
There is no established procedure or communication channel for recruiting user community partners interested in providing resources/effort for testing. Some past experience exists with the gLite Pilot services [R13].
3.3. REQUIREMENTS IDENTIFICATION
On the basis of the terminology introduced at the beginning of section 3, of the task goals described in section 3.1, and of the current testing procedures and testbed implementations across the three middlewares (ARC, gLite, UNICORE), this section translates the task goals into the procedures, activities, and infrastructural and operational resources that SA2.6 must make available to put the EMI integration testbed in place. Collaboration with the other WPs ensures that feedback from SA1 and JRA1 contributes to the solution described below.
3.3.1 Envisaged integration testing scenarios
From the definition of software products reported at the beginning of section 3, and after preliminary discussions concerning testbeds with EMI JRA1/SA1, matching the SA2.6 task goals implies providing testbed resources for the following testing scenarios:

1. EMI Internal Testing scenario A: integration testing within a minor release (no backward compatibility broken), so that a Release Candidate Service Version (RCSV in the following)[1] can be tested VERSUS other Production Version Services (PVS in the following). This implies a distributed testbed of production services available for each middleware stack, with possibly multiple instances of central services. It could also imply cases of RCSV vs. other RCSVs, or RCSV vs. (RCSV + PVS): imagine the case of two interacting products in which a common bug is fixed at the same time; that would imply the RCSVs of both being tested together with all other services at production version. Key performance indicators KSA2.1 and KSA2.2 will apply to this testbed.

[1] Here SA2.6 assumes that for each service a single Release Candidate version per release exists, identified by a patch number or an official build.
2. EMI Internal Testing scenario B: integration testing for a major release (where new features are allowed, or backward compatibility may be broken, for many services). This implies a testbed of RCSVs available for each middleware stack; basically this means providing hardware with a platform installed for Product Teams (PTs) to install the needed RCSVs, allowing them to preview the other PTs' RCSVs. Key performance indicators KSA2.1 and KSA2.2 will apply to this testbed.

3. EMI Internal Testing scenario C: initial cross-middleware (gLite <-> ARC <-> UNICORE) integration testing. This implies a testbed of PVS accessible for all middlewares, which will normally be accomplished by the testbeds defined at points 1 and 2 above, but could also imply specific testbed configuration setups, especially in the initial phase of the middleware convergence process. An issue specific to this scenario is resource discovery and information publishing across middlewares (see section 5.3). Key performance indicators KSA2.1 and KSA2.2 will apply to this testbed.
4. EMI External Testing scenario D: large-scale tests; this will involve voluntary partners providing resources for the tests. Key performance indicator KSA2.3 will apply to this testbed.

Figure 1 below provides a simplified representation of the first three testing scenarios (the third scenario can be treated as a normal major release where products X, Y and Z come from one of the three middlewares or from dCache, respectively).
Figure 1: EMI Integration testing scenarios
As mentioned before, the certification testing of a service in isolation, before integration, is in charge of the PTs, so SA2.6 will focus on making available the testing infrastructure needed by each PT to test the interaction of its products with the other PTs' software products. The definition of the certification process and of the testing guidelines PTs should follow is in charge of other activities: see the certification and testing guidelines [R16] for an overview of the certification process, and the release management guidelines [R23] for a reference on test documentation, coordination and communication. From a testbed point of view, SA2.6 assumes that "testbed available" means:
Available HW resources:
◦ server with OS installed (one among the agreed platforms), network connection and utilities
◦ server host certificate available
◦ server-level monitoring tool (e.g. Nagios, Ganglia) to collect statistics about server availability/reliability
◦ access to the server for PT testers
3.3.2 Effort definition, Procedures, Communication and Documentation needs
Apart from the testbed infrastructure, SA2.6 has to define the coordination, communication and task tracking procedures needed to put the testbed in place and evolve it, following the needs of the EMI project.

Effort: to put in place and maintain the EMI integration testbed, SA2.6 expects activities on:
◦ hardware setup, OS installation, maintenance
◦ software product installation, configuration and update
◦ host certificates, user configuration, service inter-communication configuration
◦ coordination of activities, testbed planning, resource recruitment for the large-scale testbed
◦ identification and setup of automation solutions

Testbed documentation: an inventory of the available servers and the associated logbooks (twiki or similar) should be maintained and made publicly available:
◦ Inventory: a complete list of the available instances, grouped by testbed purpose (according to the scenarios described in section 3.3.1)
◦ Logbooks: a service status page describing the installed SW version and the configuration (including the users who may access it and the other service resources it is connected to). Possibly, historical changes should be documented too.
◦ Contacts: a list of the names and contacts of the people responsible for maintaining the service status

Task tracking and communication:
◦ procedures and defined channels to request a testbed and to involve people in providing HW resources
◦ a task tracking system to assign tasks to people and track results (CERN's Savannah [R11] is a good candidate). In the future, a migration to the Jira tracking tool [R14] will be evaluated, should that tool be widely adopted within EMI.

For large-scale tests, a testbed means a production environment simulation (service interaction, a distributed pool of users, production-like usage of services, definition of use cases):
◦ procedures and defined channels to request a testbed and to involve people outside EMI (EGI, NGIs, etc.) in providing HW resources
◦ as for the internal testbed: server monitoring tools, a public description of the related testbeds, responsible people, task tracking systems
4. EMI TESTBED IMPLEMENTATION MODEL
Moving from the requirements identified in section 3, and in agreement with the SA1 and JRA1 EMI work packages, SA2.6 has adopted the implementation model described in the following paragraphs.
4.1. EMI TESTBED OR TESTBEDS?
To avoid misleading terminology, it is useful to clarify the usage of the terms testbed and testbeds in this document.
The EMI Integration Testbed indicates the whole set of geographically distributed infrastructural (hardware, networking, software) and operational (procedures, tools, documentation, communication tools, support handling, etc.) resources made available for integration testing purposes.
It is then sub-classified into the EMI Internal Integration Testbed and the EMI Large-scale Testbed, to distinguish the infrastructural and operational resources provided by the EMI partners participating in the SA2.6 task from those provided by, or involving, partners external to EMI (e.g. the European Grid Initiative project or the various National Grid Initiatives).
At the same time, from the pool of EMI Integration Testbed resources, several testbeds can be built, depending on the particular subset aggregation needed for each testing scenario. From this point of view, the testbed for a particular test is ultimately defined by the particular view of the resources that the user gets from the user interface or client exploited, depending on the information system configuration, the middleware under test, the user credentials, the authentication/authorization configuration, etc. To give a practical example, if a developer of a job management product needs to test job submission from a User Interface to all middleware compute elements, he or she will need a properly configured User Interface (with the required API/client version) from which a properly configured set of compute elements and data management services can be accessed (published into an accessible information system collecting information from the needed resources, with the appropriate authorization/authentication configuration, etc.). This will in general require the availability of a set of resources with a basic configuration, plus some additional customization effort for specific testing needs.
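As a concrete illustration of such a resource view, on a gLite-style User Interface a tester could list the compute elements that a testbed information system publishes with a GLUE query against a BDII. This is a sketch under assumptions: the hostname is hypothetical, while the BDII port (2170) and LDAP base (o=grid) follow the usual gLite conventions.

    import subprocess

    # Hypothetical top-level BDII of the testbed (not a real EMI endpoint).
    BDII_URL = "ldap://testbed-bdii.example.org:2170"

    def list_compute_elements(bdii_url):
        """Return the GlueCEUniqueID values published by the given BDII (anonymous bind)."""
        result = subprocess.run(
            ["ldapsearch", "-x", "-LLL", "-H", bdii_url, "-b", "o=grid",
             "(objectClass=GlueCE)", "GlueCEUniqueID"],
            capture_output=True, text=True, check=True)
        return [line.split(":", 1)[1].strip()
                for line in result.stdout.splitlines()
                if line.startswith("GlueCEUniqueID:")]

    if __name__ == "__main__":
        for ce in list_compute_elements(BDII_URL):
            print(ce)

Which compute elements this returns depends exactly on the information system configuration and credentials described above: the same physical pool yields different "testbeds" to different testers.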
The objective of the SA2.6 task is therefore to provide both an adequate set of infrastructural resources able to reproduce a production environment, and a set of operational tools and solutions allowing the flexible customization of subset testbeds suited to the particular integration test a Product Team may need to perform.
In this perspective, the classification of testbeds by the testing scenarios A, B, C and D of section 3.3.1, or by middleware, is useful to identify the number and types of services needed and the possible combinations and configurations to be adopted, but it does not imply their formal implementation as separate testbeds.
4.2. EMI INTERNAL INTEGRATION TESTBED SOLUTION
The four testing scenarios described in section 3.3.1 are expected to cover the envisaged integration testing needs of EMI software products. Notice that providing testbeds for earlier certification testing[2] and for product performance testing is not considered in charge of the SA2.6 task.

[2] SA2.6 intends integration testing as the part of the certification process in which the product is tested against other products. Before this phase, certification tests of the product in isolation should be carried out, either as a post-build step or as a proper testing activity carried out by JRA1 or SA1 Product Team members. Further details on the certification process definition are expected from the JRA1.7 task and will be integrated into the testbed model once available.
To implement the internal integration testbed for scenarios A, B and C, the partners in the task will provide:

Hardware resources: dedicated hardware (physical or virtual), an OS from the supported platforms installed, grid service ports accessible, and root-privilege access for software product installation and configuration by Product Team members.

Software product installation and configuration: as mentioned in the previous section, products are differentiated on the basis of release, version (Production or Release Candidate), platform and middleware. Notice that full coverage of the Production versions of EMI products in the testbed is the main goal, while the deployment of Release Candidate versions will be done on a best-effort basis depending on the available HW resources.

Information systems: configured so as to make the testbed grid resources visible to test users. Initially, only intra-middleware information service configuration will be considered, so rather disjoint testbeds will initially be implemented for each middleware (ARC, gLite, UNICORE). Cross-middleware resource discovery and information publishing is therefore an issue to be addressed at the middleware development and harmonization level (more in section 5).

Notice that while there is a commitment to make Production version instances of products available for each release, Release Candidate versions will be made available and maintained upon specific request.

Concerning the distribution of testbed service instances among partners, a Product Team / middleware proximity criterion is adopted in choosing the SA2.6 partner site that deploys each software product. That means that, whenever possible, each site contributing to the testbed will first deploy software products from local Product Teams, or from the middleware with which the site has previous experience. Besides optimizing the exploitation of each partner's experience with a specific product, this approach favours communication with the PTs and results in fewer accesses to testbed servers by people from other sites.
4.2.1 Hardware Resource dimensioning
Input from JRA1 is expected on the complete list of EMI products, the supported platforms and the foreseen releases. This section therefore identifies the relevant variables for testbed hardware resource dimensioning (a rough upper-bound estimate is sketched after this list):
1. Number of software products. Notice that two software product implementations with no differences at the API level are considered equivalent (e.g. gLite VOMS with an Oracle or a MySQL database).
2. Number of supported releases. Notice that two versions (Production and Release Candidate) per supported release will generally be deployed.
3. Number of supported platforms per middleware.
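For a rough feel of the dimensioning problem, the sketch below multiplies these variables to obtain an upper bound on instance counts. The figures are illustrative (taken loosely from the survey in section 3.2), not the actual EMI inventory:

    # Upper-bound estimate: one instance per product x release x version x platform.
    # Illustrative figures only; not the actual EMI testbed inventory.
    products = {"ARC": 11, "gLite": 19, "UNICORE": 11}   # products per middleware
    platforms = {"ARC": 4, "gLite": 2, "UNICORE": 1}     # deployed platforms per middleware
    releases = 1            # supported releases per middleware
    versions = 2            # Production + Release Candidate per release

    total = sum(products[mw] * releases * versions * platforms[mw] for mw in products)
    print(total)  # 88 (ARC) + 76 (gLite) + 22 (UNICORE) = 186 instances

Even under these modest assumptions, the bound exceeds the number of available servers reported below, which is why service co-hosting, virtualization and the best-effort policy for RC deployment matter in practice.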
Concerning currently available resources, SA2.6 can rely on roughly 70 servers deploying services from the ARC, gLite and UNICORE middlewares, plus additional resources made available by the dCache team for data management integration testing purposes. Concerning platforms, SA2.6 has currently deployed a subset of the supported ones per middleware: ARC (CentOS 5.5, SL5, Debian Lenny, Ubuntu Hardy), gLite (SL4, SL5), UNICORE (not dependent on any particular platform, generally deployed on openSUSE 11.2), dCache (SL4, SL5). Although not all supported platforms are permanently deployed, the deployed services cover current integration testing needs.
4.2.2 Product Teams members effort contribution
Testbed implementation implies a considerable effort for software product installation, configuration, debugging and update. Moreover, installing or configuring a Release Candidate product version generally requires expertise on the product. Therefore, in agreement with the JRA1 and SA1 leaders, SA2.6 expects collaboration from the PT members involved in certification concerning:
1. providing effort for service installation on the HW provided by SA2.6;
2. keeping a Service Status Logbook documenting all updates, the configuration, and information useful for the usage of the service by other PTs;
3. providing effort for support and debugging on issues related to the service they are in charge of.
The coordination of PT member interventions on servers is in charge of SA2.6, which will request interventions and coordinate the interventions of different PTs on services that need to be put in connection. As stated in section 4.2.4 below, granting access to PT members for installation or configuration is left to each partner's decision according to its local security policy.
4.2.3 Testbed monitoring solutions
Key performance indicators KSA2.1 and KSA2.2, reported in section 3 and involving availability and reliability metrics for testbed servers, imply automatic monitoring solutions for resources. Each middleware currently has a monitoring solution deployed: ARC (Ganglia, GridMonitor, Nagios), gLite (Nagios), UNICORE (Nagios is going to be installed).
Initially, SA2.6 will provide availability and reliability statistics only for testbed server instances, not for services, given that not all services have availability/reliability metrics defined and tools to measure them. Concerning the tool adopted to monitor instances and produce statistics, all middlewares plan to converge on Nagios [R9] because of two features suitable for the task's purposes:
the evolution of Nagios into grid-monitoring Nagios [R10], which is expected to provide metrics for service monitoring;
a solution for geographical distribution: a second-level Nagios can implement a central instance, republishing and aggregating the data coming from the local sites' Nagios instances.
Initially, the availability and reliability statistics periodically produced by the local sites' Nagios instances will be made available in the testbed public documentation pages described in section 4.2.7.
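The second-level aggregation can be pictured with a small sketch: each site's Nagios periodically exports per-server check statistics, and a central script pools them into testbed-wide figures. The export format and field names below are assumptions for illustration, not an actual Nagios interface:

    import csv, io

    # Hypothetical per-site export: each local Nagios periodically publishes a CSV
    # with one row per monitored server. Field names are assumed for this sketch.
    SITE_EXPORT = """site,host,checks_total,checks_ok
    CERN,vm-ce-01,2880,2871
    INFN,wms-test-02,2880,2755
    JUELICH,unicorex-01,2880,2880
    """

    def aggregate(csv_text):
        """Testbed-wide availability: OK checks over total checks, all sites pooled."""
        rows = list(csv.DictReader(io.StringIO(csv_text)))
        ok = sum(int(r["checks_ok"]) for r in rows)
        total = sum(int(r["checks_total"]) for r in rows)
        return 100.0 * ok / total

    print(f"{aggregate(SITE_EXPORT):.2f}%")  # 98.45%, above KSA2.2's 97.00% target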
4.2.4 Resources access policy
Three types of access to testbed resources are expected:
1. Tester access to perform tests. This type of access will require common grid service user configuration, which may include both personal accounts for PT testers on User Interface services and dummy user accounts to simulate concurrent-usage stress tests on software products.
2. Tester access to collect information useful for test debugging: access to logs, network configuration, etc. This use case does not necessarily require granting root privileges to testers; it could be implemented simply by making the needed files available over HTTPS to users with specific certificates, or via GridFTP whenever possible (a sketch of the HTTPS approach follows this list). These approaches have the advantage of reducing the number of direct accesses to the servers, which would otherwise have to match the security access policies of each local site hosting testbed servers.
3. Root access to a server for service installation and configuration. Since granting root access to remote users poses a security problem, it will be regulated following the local security access policies of each participating site.
Account requests will be regulated according to the communication procedures described in section 4.2.6.
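A minimal sketch of the HTTPS option mentioned in point 2, assuming Python's standard library on the server; all file paths are placeholders, not actual EMI locations. The server exposes a log directory read-only and rejects any client that does not present a certificate issued by the accepted testing CA:

    import http.server
    import ssl
    from functools import partial

    LOG_DIR = "/var/log/testbed-exports"  # hypothetical directory of exported log files

    # Serve LOG_DIR read-only over HTTPS.
    handler = partial(http.server.SimpleHTTPRequestHandler, directory=LOG_DIR)
    httpd = http.server.HTTPServer(("", 8443), handler)

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="/etc/grid-security/hostcert.pem",
                        keyfile="/etc/grid-security/hostkey.pem")
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject clients without an acceptable certificate
    ctx.load_verify_locations("/etc/grid-security/certificates/testing-ca.pem")

    httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
    httpd.serve_forever()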
4.2.4.1 Testing Certificates handling
The handling of testing certificates is a key security issue even in a testing environment. As mentioned in section 3.2, the participating sites and middlewares have different solutions for both host and user certificates, as reported in TAB 4.1.4.1-1 below.

TAB 4.1.4.1-1

Host Certificate Policy
◦ ARC: required by all software products; possibility to disable certificate checks in WS-based services.
◦ gLite: required by most software products.
◦ UNICORE: container certificates, not tied to any particular host; i.e. when accessing UNICORE, only the certificate itself, not the host, is checked.

User Certificate Policy
◦ ARC: grid proxies and gLite VOMS supported.
◦ gLite: grid proxies and gLite VOMS supported.
◦ UNICORE: apart from some interoperability tests, proxies are not a supported feature. SAML VOMS is supported; gLite VOMS is not supported.

Certificates Generation
◦ ARC: Instant CA service for the generation of temporary user and host certificates for testing purposes; manual generation of official certificates with CA signature.
◦ gLite: tools available for fake certificate generation.
◦ UNICORE: manual generation of official certificates with CA signature.
Notice that interoperability testing among the three middleware stacks in the EMI Integration Testbed will rely on convergence on a common authentication framework, which is beyond the scope of the SA2.6 task (see section 5.1).
4.2.4.2 Usage Monitoring and Concurrent usage Requirements
Neither usage monitoring tools nor resource usage scheduling tools are currently available, so SA2.6 is not able to implement policies regulating the concurrent usage of testing resources.
On the other hand, the typical integration testing use case does not imply exclusive usage of resources. If in particular cases exclusive usage is required for testing purposes, an ad-hoc solution will be defined and put in place on request.
4.2.5 Testbed update and evolution procedure
The testing scenarios of section 3.3.1 imply the installation of both Production version and Release Candidate version software products for all supported or upcoming releases.
After the initial setup, the following evolution scenarios are expected:
1. New product versions released: each time the certification process successfully ends in the release of new versions of a product, update or configuration activities will be needed to keep the testbed aligned with the official releases. Only one Production version and one Release Candidate version per software product per release will be deployed at a time.
   a. Production versions: updates on testbed servers will be triggered by release broadcasts or by another chosen automated notification tool. The PT people involved will be contacted to support the update activity. Some optimization can derive from the fact that RC instances are expected to become Production version instances, while old Production version instances can be dismissed to host the new RC versions.
   b. Release Candidate versions: RC version availability will be notified by the responsible PT to the testbed administrators, or automatically triggered by a "Ready for Certification" status flag[3].
2. Configuration change requests: these may involve information system configuration to publish subsets of resources, grid service configuration for inter-communication, and user or Virtual Organization configuration.
3. Special testbed requests: these could be needed for peculiar testing requirements at some point in the EMI project, for example for testing security issues across all products, or new features.
All these requests will be taken in charge by the SA2.6 participants, who will request the support of PT members if needed, as described in the next section.

[3] This will depend on the certification process adopted within the EMI project.
4.2.6 Activity tracking, communication channels and procedures
Both coordination and installation/configuration activities concerning testbeds require clear communication channels and a way to track the effort of the people involved. The solutions adopted are:
Communication:
◦ SA2.6: an [email protected] mailing list was created both for internal task communication and for the reception of testbed requests.
Activity tracking: SA2.6 and PT activities on the testbed will be tracked through Savannah [R11] tasks.
◦ An emi-sa2-testbed Savannah squad has been created to submit requests.
◦ Product Team squads have been created to track PT activities on testbeds, using the contacts in [R12].
Procedures: the following support request cases (from PTs, the SW Area Manager, SA1, JRA1) are expected and are treated as described below:
Requests for configuration support of existing services. These requests may include enabling VOs/users, making services talk to each other, custom BDII setups, etc. Procedure:
◦ Send a request by mail to the SA2.6 mailing list explaining the testing and configuration needs, or open a Savannah task on the testbed squad. The request will then be evaluated and tracked in a Savannah task on the testbed squad.
◦ If needed, the PT members of the services involved in the test will be contacted and their contribution tracked by Savannah tasks.
Requests for new service setups (particular RC service versions):
◦ Send a request by mail to the SA2.6 mailing list, or open a Savannah task on the testbed squad, explaining:
    your testing needs, the type and version of the services you need installed, and the PT producing each service;
    whether you need the service to be included in the permanent EMI testbed.
◦ The request will then be evaluated and tracked in a Savannah task on the testbed squad. If needed, the involved PT members will then be contacted and their contribution tracked by Savannah tasks.
Requests for a specific testbed (in this category: performance tests, security tests, data management tests, etc.):
◦ Send a request by mail to the SA2.6 mailing list, or open a Savannah task on the testbed squad, explaining:
    your testing needs and an estimate of the HW and SW requirements for your test;
    the PTs involved in the setup, and suggestions on possible sites/PTs/NGIs that may help in the setup;
    the period of time you expect the testbed to stay up.
◦ The request will then be evaluated and tracked in a Savannah task on the testbed squad. The involved PT members will then be contacted and their contribution tracked by Savannah tasks.
To ensure effective and prompt support, SA2.6 plans to set up a shift infrastructure for taking charge of incoming support requests. However, since the currently available human resources allow no redundancy in the number of people administering the sites participating in the testbed, and since some sites participate on a volunteer basis, a “best effort” policy will be adopted for the response time to incoming requests.
4.2.7 Public documentation
For the public documentation of activities and resources, the following web pages were put in place:
1. SA2.6 internal activities:
◦ Meetings and actions: https://twiki.cern.ch/twiki/bin/view/EMI/TSA26
◦ Savannah groups: emi-sa2 and emi-sa2-testbed
2. EMI internal testbed documentation, https://twiki.cern.ch/twiki/bin/view/EMI/TestBed, reporting:
◦ Coordination procedures and overview
◦ Testbed inventory, with a list of the provided instances specifying: Middleware Suite, Service Deployed, Platform, Server Hostname, Site Location, reference Product Team, Status Logbook
◦ The Status Logbook field in the previous list is a link to an instance-specific web page describing the hardware details of the instance, the software version installed, configuration information and the history of updates. The maintenance of this page is the responsibility of the people performing installations, configurations or updates (who can be PT members).
Notice that the “WikiUpdates” [R15] functionality is available for anyone who wants to be automatically notified about changes in the testbed status.
4.3. LARGE-SCALE TESTBED IMPLEMENTATION PLAN
Large-scale integration tests will involve volunteer partners, not part of the EMI project, providing resources for testing purposes. Key performance indicator KSA2.3 applies to this testbed; it simply specifies a target number of CPUs to be made available in each EMI project year.
Requirements identification for this type of testbed is complex, since an efficient large-scale testing infrastructure must reflect the specific testing needs (e.g.: how large is large enough for a given user community?). Therefore some general guidelines follow, based both on the actual knowledge of the EMI project status and on previous positive experience in the field coming from the EGEE project [R7], namely the pilot procedures [R13]. Pilot procedures were implemented to involve the user community in new product previews or testing activities using realistic use cases, creating an efficient direct channel of communication between early adopters, users and developers. These activities had the purpose both of letting the user community experience upcoming versions of products and of testing products in conditions strictly reflecting real use cases.
The lessons learned about this type of large-scale testbed setup are:
Testbed types: it is difficult to identify a large-scale testbed prototype, since user communities very often have different needs or use cases, and will generally be interested in testing the impact of new features on their own processing infrastructure.
Coordination: setting up these testbeds requires a great coordination effort, so the best way to optimize the process is to create a specific working team for each product to test, involving user community or site representatives and product experts together with SA2.6 members.
Motivation: a key point for success is the selection of user communities really interested in the outcome of the tests, and therefore willing to support the tests by providing human effort and resources.
4.3.1 Implementation plan
The target for large-scale testbeds from EMI external partners for the first year of the project is to reach 50 CPUs. In the following, SA2.6 defines the first actions to put in place up to project month 6, with the purpose of revising the model once testing requirements are better defined.
External participant recruitment:
◦ EGI and NGIs: a natural choice is to start retrieving external resources from the National Grid Initiatives (NGIs) involved in the European Grid Initiative (EGI) project, which will be the first customers of EMI products and with whom there is previous collaboration experience. Since, as mentioned before, a key point is the selection of motivated partners, SA2.6 will ask the EMI Product Teams and developers’ representatives (SW Area Manager, JRA1, SA1) to provide a list of friendly NGI sites willing to provide some resources and effort for the scale tests needed for each given product. This approach moves from the consideration that NGI sites are not equally interested in testing all products, but have more experience with, or proximity to, some Product Teams or products.
◦ Virtual Organizations: contacting specific institutions which are transversal to NGI sites is also an option. As an example, at INFN-CNAF SA2.6 had a positive experience of large-scale testbed setup exploiting CMS Virtual Organization site contacts, given that for the CMS computing model it was very important to preview and test the whole computing chain on grid resources.
◦ Other partners: the recruitment of other communities which have no relationship with the EMI participants’ background, such as commercial partners or new user communities, will require specific solution planning depending on the EMI product distribution and dissemination channels. As an example, if EMI products are distributed as part of some Linux distribution, SA2.6 envisages possible new user communities willing to try beta versions, etc. This will be the object of future revisions of this document.
Actions and open issues:
◦ Authentication solution: access to grid resources external to EMI implies the institution and maintenance of a “VO” for testing and debugging purposes in EGI or other partners. From preliminary discussions with EGI operations representatives it is still not clear whether such a catch-all VO for developers’ debugging and testing activity will be granted; there is currently more consensus on regional-scope catch-all development VOs. This will be the object of NGI meetings in which SA2.6 will take part in the near future. The fragmented authentication framework across external partners could have consequences on testing/testbed implementation, so a suitable solution will be studied within SA2.6 and proposed as a standard.
◦ KPI KSA2.3: maintaining a stable target of 50 CPUs (year 1) can conflict with the approach of setting up a dedicated testbed for each testing use case. A possible solution is to create a permanent list of partner sites which declare their interest in the testing of some product and dedicate resources to it, so that on average SA2.6 can count on a certain number of CPUs, even though they will not be for general testing purposes but only for specific products.
5. OPEN ISSUES
In this section SA2.6 mentions some key points having an impact on testbed implementation which need work or attention across the EMI work packages in order to reach a commonly agreed solution.
5.1. COMMON AUTHENTICATION FRAMEWORK ACROSS MIDDLEWARES
As mentioned in sections 3 and 4, a common framework for authentication is needed: loosely speaking, a unique method/service for user identity assessment and verification would ensure and simplify the accessibility of all services for the users of all middleware communities. While ARC, gLite and dCache can support gLite VOMS proxies as a common solution, in UNICORE, apart from some interoperability tests, proxies are not a supported feature. Therefore some work on this must be undertaken for future cross-middleware integration tests and subsequent tests across user communities. The harmonization of authentication (i.e. a common authN library) among the mentioned middleware components is part of the development efforts within JRA1, but more discussion is needed once these efforts become available and usable for the EMI Testbed activities.
As an initial step, a testing Virtual Organization, testers.eu-emi.eu, has been created and deployed in the implemented EMI testbed. If it is accepted as the official testing VO, SA2.6 will ask the Release Manager to include it in the configuration and packaging of the software products. This would greatly simplify the deployment procedure, facilitating the site administrators’ work when enabling support for the VO on all their services for testing purposes. As a consequence, the recruitment of volunteer sites for preview and testing activities would also be facilitated. A minimal configuration sketch follows.
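To illustrate what “including the VO in the configuration” amounts to on the gLite side, the following sketch generates a vomses entry for the testing VO. The server host, port and DN are hypothetical placeholders; the real values would be distributed with the official configuration and packaging.

    # Minimal sketch (hypothetical host, port and DN): writing a vomses entry
    # for the testing VO so that client tools can contact its VOMS server.
    VO = "testers.eu-emi.eu"
    HOST = "voms.example.org"                      # hypothetical VOMS server
    PORT = 15000                                   # hypothetical vomses port
    DN = "/DC=org/DC=example/CN=voms.example.org"  # hypothetical server DN

    vomses_line = '"{0}" "{1}" "{2}" "{3}" "{0}"'.format(VO, HOST, PORT, DN)
    with open("testers.eu-emi.eu.vomses", "w") as f:
        f.write(vomses_line + "\n")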
5.2. SOFTWARE PRODUCT VARIABLES TO CONSIDER
As mentioned in section 3, one important variable for dimensioning the testbed hardware resources is the number of supported platforms. At the current time not all platforms will have a permanent testbed deployed at the same time. Once EMI release 1 is available, SA2.6 will revisit the issue.
Another concern at the moment is the deployment of User Interface services and batch systems (LSF, etc.), which are not currently part of any Product Team.
5.3. RESOURCE DISCOVERY AND PUBLISHING OVER DIFFERENT MIDDLEWARES
An important issue for cross-middleware integration is the availability of a common framework for
resource discovery and publishing across middleware distributions.
Current situation:
gLite: based on the BDII architecture, with a top-level BDII which gives a view of all the services in a grid and is configured to read from a set of site BDIIs, each of which contains all the services for one site. At both of these levels one can collect information on any sites/services, provided the information is published by the service itself. gLite services normally have a resource BDII installed on each service node, running information providers to publish information about that service. Potentially the information providers could run on a different node, as long as they can still collect the relevant information. Translators from different implementations of the GLUE schema, e.g. XML, to LDAP can be written. Most gLite services publish basic information using the GLUE 1.2 Service object (and the optional Service Data object, which is just a list of key-value pairs); GLUE 2.0 will be used in the future. For computing and storage services the required information is much more complex, and they have their own dedicated providers supporting GLUE 1 (see the query sketch at the end of this section).
UNICORE: has its own service registry, where UNICORE clients look up services. Even though it would be technically possible to push the UNICORE registry’s contents to a gLite BDII, this will be addressed at the time of moving to a common EMI registry.
ARC: gLite services will not be recognized by the ARC information system (and vice versa), because they use substantially different information models. However, “converters” have been developed, so ARC sites can be seen in some gLite monitoring tools. The classic ARC information system uses LDAP, so gLite resources may be published in ARC GIISes. Additionally, one has to remember that besides classic ARC there are also WS-ARC services, which are not able to register to a BDII-based information system (they register to a cloud of ISIS services). So with one BDII one cannot register all ARC services.
To sum up, SA2.6 currently can achieve some cross-middleware resource discovery, but cannot set up a permanent cross-middleware testbed. Therefore SA2.6 will initially operate three different and separate testbeds, with the addition of a dCache testbed.
There is an agreement among all the considered middlewares to be compatible with the GLUE 2.0 schema in the near future; this is essentially also part of the JRA1 development efforts for many components within the first releases (EMI-1 and EMI-2).
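For concreteness, the sketch below (hypothetical hostname; attribute names from the GLUE 1.x LDAP schema) shows how the gLite-style discovery described above looks in practice: an anonymous LDAP query against a top-level BDII on the usual port 2170 returns the published service endpoints.

    # Minimal sketch: querying a top-level BDII over LDAP for published services.
    # The hostname is a hypothetical placeholder; 2170 is the usual BDII port
    # and o=grid the usual GLUE 1.x base DN.
    from ldap3 import Server, Connection

    server = Server("top-bdii.example.org", port=2170)
    conn = Connection(server, auto_bind=True)  # anonymous bind
    conn.search("o=grid",
                "(objectClass=GlueService)",
                attributes=["GlueServiceType", "GlueServiceEndpoint"])
    for entry in conn.entries:
        print(entry.GlueServiceType, entry.GlueServiceEndpoint)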
5.4. TOOLS AND AUTOMATION
Initially SA2.6 will have a low level of automation in the testbed setup process, basically relying on the solutions available at each participant for automatic virtual machine setup, certificate handling, etc.
Since one of the goals of the build tools is to partly automate the testing operations at the post-build stage, at some point in the EMI project SA2.6 will need to address the issue of automatic testbed creation triggered by post-build testing operations; a sketch of the kind of trigger meant here is given below. An internal coordination agreement is in place between the SA2.6 and SA2.4 tasks. This document will be updated in due time, when further knowledge about the adopted testing approach is available.
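The following minimal sketch (hypothetical names and statuses; not an agreed SA2.4/SA2.6 interface) illustrates the idea of a post-build hook that queues a testbed deployment request when a build reaches a deployable status:

    # Hypothetical sketch: a post-build hook queuing a testbed deployment
    # request for SA2.6 when a product build is flagged as deployable.
    import json
    import queue

    deployment_requests = queue.Queue()

    def on_build_finished(product, version, status):
        if status == "Ready for Certification":
            deployment_requests.put(json.dumps({
                "product": product,
                "version": version,
                "action": "deploy-rc",
            }))

    on_build_finished("gLite-WMS", "3.3.0", "Ready for Certification")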
5.5. INPUTS FROM JRA1
For an efficient planning of testing resource deployment and concurrent usage, the following inputs are expected:
An integration matrix: every EMI release should come with an integration matrix reporting which components must integrate with which others.
A definition of the “integration testing” step as part of the certification process, in order to coordinate concurrent integration testing among different products. Notice that the certification process in EMI is managed by the PTs, marking a process change for most middlewares with respect to past practice, when the certification process used to be centralized. Integration testing is the certification phase where the certification activities of different PTs may interfere with each other (concurrent usage of the same resources, blocking bugs, or missing features in products), so a procedure and a set of rules to coordinate integration certification are required. The problem is mostly related to the testing of RC versions: how they are rejected or modified and, more than that, some method to coordinate the simultaneous testing of multiple product RC versions. As an example, consider 4 products (gLite WMS, gLite BDII, CREAM CE and gLite LB) each having an RC for a new major release EMI.xx deployed in the testbed. The PTs start running tests independently of each other. Then the gLite BDII happens to have an interaction bug with the CREAM CE, promptly communicated to the gLite BDII PT, which starts fixing the problem and then asks for a new RC to be deployed in the testbed. Now imagine that at the same time the gLite WMS PT is also running integration tests against the gLite BDII and the gLite LB, and either wants to finish its test without changes in the gLite BDII version or must restart its integration test with the new version. These situations can be very frequent in integration certification, and SA2.6 will need a common procedure/plan for all PTs, implementing some sort of semaphore visible to all PTs to help them coordinate their integration tests on the testbed.
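A minimal sketch of such a semaphore follows (hypothetical; not an agreed EMI procedure): a PT declares which deployed instances its integration test depends on, and a redeployment of an instance is allowed only while no test holds a dependency on it.

    # Hypothetical sketch of the coordination "semaphore" suggested above.
    class TestbedSemaphore:
        def __init__(self):
            self.holders = {}  # instance name -> set of PTs testing against it

        def start_test(self, pt, instances):
            for inst in instances:
                self.holders.setdefault(inst, set()).add(pt)

        def end_test(self, pt, instances):
            for inst in instances:
                self.holders.get(inst, set()).discard(pt)

        def may_redeploy(self, instance):
            # True only if no PT currently tests against this instance
            return not self.holders.get(instance)

    sem = TestbedSemaphore()
    sem.start_test("WMS-PT", ["gLite-BDII", "gLite-LB"])
    print(sem.may_redeploy("gLite-BDII"))  # False: a test still depends on it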
6. EMI TESTBED EVOLUTION AND FUTURE REVISIONS OF THE MODEL
The integration testbed, and testbeds in general, are quickly evolving topics in a project like EMI that is trying to integrate three different middleware distributions. SA2.6 expects three main testbed phases in the project: a first phase of early convergence among the middleware distributions, currently ongoing, where the main activities are focused on getting to know each other, maintaining the currently supported releases and setting up common coordination and documentation tools; a second phase, approaching the first EMI release, when the focus moves towards cross-middleware integration; and a third phase in which external partners and integration testing for new user communities or Linux distributions become more and more important. In parallel, automating the testbed setup activities will become a central issue.
Therefore SA2.6 expects the presented testbed implementation strategy to receive revisions in order to take into account upcoming requirements and problems which are not accurately predictable at this stage of the project.
7. EMI TESTBED IMPLEMENTATION STATUS
According to the testbed model described in section 4, SA2.6 worked on providing:
HW/SW resources:
◦ Servers with the OS installed (one among the agreed platforms), network connection and utilities. Virtual servers are in general an applicable solution to deploy services, but physical servers are required for some services.
◦ Server host certificates. Notice that host certificates are required in ARC and gLite, while in UNICORE container certificates are not tied to the host. Special CAs for testing are available (ARC, CERN) besides the official CAs.
◦ Server-level monitoring tools (e.g. Nagios, Ganglia, etc.) to collect statistics about server availability/reliability.
◦ Service installation/configuration.
◦ Access to the servers for PT testers.
Other resources:
◦ Documentation
◦ Procedures and coordination
◦ Communication and task tracking tools
As reported in section 5.3, cross-middleware resource publishing/discovery is not fully available at the moment, so separate middleware testbeds were implemented. Nonetheless, effort was spent on harmonizing the testbeds to have common documentation, monitoring and testbed request procedures.
7.1. INSTALLED SERVICES INVENTORY
In the following, a detailed description of the resources provided per middleware is reported. On the documentation page https://twiki.cern.ch/twiki/bin/view/EMI/TSA26 an EMI product coverage table provides a mapping of the EMI release products (as defined in the EMI technical plan) onto the deployed testbed resources.
7.1.1 ARC
ARC [R17] middleware currently deploys 11 products on Fedora, Debian, RedHat, Ubuntu, Windows
and MacOSX platforms.
The following services (multiple instances for some services) were made available:
Product Name | Version | Platform | Partner
GIIS service (ARC LDAP-Infosys) | Release 0.8.2 Production | CentOS5.5 i386 | Kosice
Nagios | Release 3.2.1 Production | CentOS5.5 i386 | Kosice
Instant CA | Release 0.9 Production | CentOS5.5 i386 | Kosice
Classic ARC Grid Monitor | Release 0.8.2 Production | CentOS5.5 i386 | Kosice
WS-ARC Grid Monitor | Release Candidate | CentOS5.5 i386 | Kosice
ARC ISIS service (4 instances) | Release Candidate | CentOS5.5 i386; SLC5.3/x86; Debian Lenny/x86 | Kosice, NIIF
Classic ARC CE | Release 0.8.2 Production | CentOS5.5 i386 | Kosice
CE1 type | Release 1.1 Production | CentOS5.5 i386 | Kosice
A-REX | Release 1.1 Production | SLC5.3/x86; Debian Lenny/x86 | Kosice, NIIF
Bartender service (2 instances) | Release 1.1 Production | Debian Lenny/x86; CentOS5.5/x86_64 | Kosice, NIIF
AHash service (2 instances) | Release 1.1 Production | Debian Lenny/x86 | NIIF
Classic ARC clients | Release 0.8.2 Production | Ubuntu Hardy/i386 | Kosice
WS-ARC clients | Release 1.1 Production | Ubuntu Hardy/i386 | Kosice
ARC data clients | Release 0.8.2 Production | Ubuntu Hardy/i386 | Kosice
Librarian service (3 instances) | Release 1.1 Production | CentOS5.5/x86_64; Debian Lenny/x86 | Kosice, NIIF
Echo service | Release 1.1 Production | SLC5.3/x86; Debian Lenny/x86 | Kosice, NIIF
Shepherd service (2 instances) | Release 1.1 Production | Debian Lenny/x86 | NIIF
7.1.2 gLite
gLite [R18] middleware currently deploys 19 products in Release 3.1 on the SL4 OS and 17 products in Release 3.2 on SL5 (when the service implementation changes without affecting the API, services are counted as one, e.g. the gLite VOMS MySQL/Oracle database implementations).
The following services (multiple instances for some services) were made available:
Product Name | Version | Platform | Partner
gLite WMS (3 instances) | gLite 3.1 Production; RC | SL4 | INFN, CERN
dGas ig_HLR | Production ig48_sl4 | SL4 | INFN
gLite-CREAM | 3.2 Production (LSF); 3.1.24 Production | SL5/x86_64; SLC4.8/x86 | INFN
gLite UI (3 instances) | 3.1 Production; 3.2 Production, RC | SLC4.8; SL5/x86_64 | INFN, CERN
gLite BDII (site; top) | Production 3.1.23 | SLC4.8/x86 | CERN
gLite-PX | Production 3.1.29 | SLC4.8/x86 | CERN
gLite-lcgCE | Production 3.1.40 | SLC4.8/x86 | CERN
gLite WN | Production 3.1.11; 3.1.30; 3.2.7 | SL4.8/x86; SL4.8/x86_64; SL5.5/x86_64 | CERN
gLite-LFC_mysql | Production 3.1.29 | SL4.8/x86_64 | CERN
gLite-VOMS | Production 3.1.27; Production 3.2 | SLC4.8/x86; SL5.5/x86_64 | CERN
gLite-FTS_oracle | Production 3.1.22 | SLC4.8/x86 | CERN
gLite-VOBOX | Production 3.1.42 | SLC4.8/x86 | CERN
gLite-SE_dpm_mysql | Production 3.2.5 | SL5.5/x86_64 | CERN
gLite-DPM_pool | Production 3.1.32 | SLC4.8/x86 | CERN
gLite-SE_dpm_mysql | Production 3.1.35 / disk 3.1.29 | SLC4.8/x86 | CERN
Nagios | Production 3.2.1-1; 3.2.0-1 | SLC5.5/x86_64; SLC4.8/x86 | CERN
gLite-LB (9 instances) | Production 2.1.7-1, RC | SLC5.5/x86_64; SLC4.8/x86 | CESNET
StoRM (INFN grid release) | 3.1.0-0_ig50_sl4 Production | SL4 | INFN
7.1.3 UNICORE
UNICORE [R19] middleware currently deploys 11 products without particular dependencies, making them executable on Linux, Windows and MacOSX platforms; for certification purposes they are generally deployed on openSuSE 11.3.
Product Name | Version | Platform | Partner
Gateway | 6.3.1 Production | openSuSE 11.3 | JUELICH
Registry | 6.3.1 Production | openSuSE 11.3 | JUELICH
UNICORE/X incl. XNJS | 6.3.1 Production | openSuSE 11.3 | JUELICH
OGSA-BES interfaces | 6.3.1 Production | openSuSE 11.3 | JUELICH
HiLA | 2.1 Production | openSuSE 11.3 | JUELICH
XUUDB | 6.3.1 Production | openSuSE 11.3 | JUELICH
Command line Client (UCC) | 6.3.1 Production | openSuSE 11.3 | JUELICH
7.1.4 dCache
dCache [R20] certification resources were also made available for integration testing purposes by the dCache EMI partner. In particular, the resources below can be accessed through the CERN gLite UI instance.
Product Name | Version | Platform | Partner
BDII | | SL4 | DESY
dCache A | | SL5; SL4 32bit PNFS; SL4 64bit PNFS | DESY
dCache B | | SL5; SL4 32bit Chimera; SL4 64bit Chimera | DESY
7.2. TESTBED OPERATIONS QUICK REFERENCE
This section summarizes the implemented solutions to the EMI Testbed operational aspects and gives practical information about testbed administration and usage. The information is provided in tabular form, with each row addressing a specific operational aspect.
Testbed Operations quick reference: solutions in place.
1 - EMI Testbed Monitoring and KPIs
To produce statistics for the previously described key performance indicators KSA2.1 and KSA2.2, each site participating in the testbed infrastructure currently has a monitoring solution deployed: ARC (Ganglia, Grid Monitor, Nagios), gLite (Nagios), UNICORE (Nagios), able to produce statistics on both the availability and the reliability of the monitored instances.
A central portal collecting the status information of all the geographically distributed resources is still missing, since the two-level Nagios architecture, which collects site-level Nagios information in a central Nagios instance, is only available for the Grid Nagios implementation, which currently has no probes for the ARC and UNICORE middlewares.
At the moment only hardware availability/reliability information is collected; a computation sketch follows this row.
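The sketch below derives such statistics from hypothetical probe results; one common convention is shown, not the official KSA2.1/KSA2.2 definition.

    # Minimal sketch: availability and reliability from monitoring probe data.
    # One common convention: reliability excludes scheduled downtime from the
    # denominator, availability does not.
    def availability(up_hours, total_hours):
        return up_hours / total_hours

    def reliability(up_hours, total_hours, scheduled_downtime_hours):
        return up_hours / (total_hours - scheduled_downtime_hours)

    print(availability(700.0, 720.0))       # e.g. one month of probes: ~0.97
    print(reliability(700.0, 720.0, 10.0))  # ~0.99 once downtime is excluded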
2 - EMI Testbed Documentation
Documentation on SA2.6 internal activities: https://twiki.cern.ch/twiki/bin/view/EMI/TSA26
Activity/task tracking: Savannah groups emi-sa2 and emi-sa2-testbed
Documentation on the EMI Testbed: https://twiki.cern.ch/twiki/bin/view/EMI/TestBed, reporting:
◦ Coordination procedures and overview
◦ Table of the deployment status of the EMI components
◦ Testbed inventory, with a list of the provided instances specifying: Middleware Suite, Service Deployed, Platform, Server Hostname, Site Location, reference Product Team, Status Logbook
◦ The Status Logbook field in the previous list is a link to an instance-specific web page describing the hardware details of the instance, the software version installed, configuration information and the history of updates. The maintenance of this page is the responsibility of the people performing installations, configurations or updates (who can be PT members).
Notice that the “WikiUpdates” [R15] functionality is available for anyone who wants to be automatically notified about changes in the Testbed status.
3 - User Support Requests Handling
To handle user support requests and to coordinate and track all the distributed effort, the following solutions were adopted:
1. User support request handling: an emi-support-testbed support unit has been created in GGUS for the reception of testbed support requests. Representatives of all the partners contributing to the testbed are members of the support unit, and agreed on a 2-working-day response time on a best-effort basis. The support unit will be part of the next GGUS release. The adoption of GGUS gives a common framework to handle both the requests coming from EMI developers or users and those coming from external users (e.g. the users of a large-scale testbed involving other projects’ or partners’ contributions). This is the only accepted interface with users.
2. Communication: an [email protected] mailing list was created both for internal task communication and for communication with other work packages or tasks in EMI.
3. Activity tracking: SA2.6 and Product Team activities on the testbed are tracked through Savannah [R11] tasks.
◦ An emi-sa2-testbed Savannah squad has been created to submit requests.
◦ Product Team squads have been created to track PT activities on testbeds.
4 - User Access to Testbed
User Interface service instances: as the default use case we assume testbed users have direct access just to User Interface instances, that is, just to the grid middleware access point services. To request an account on an EMI Testbed User Interface instance, any user with a valid certificate from a trusted Certification Authority should send a user support request following the procedure and tools described at row 3 in this table. Notice that it is also possible to install the set of clients directly on a personal machine (e.g. the usual use case in UNICORE).
Other service instances: root access on other services can be granted on request, depending on the local site security policy (which is generally also subject to national laws about the traceability of access to servers). If access is required for debugging or log exploration purposes, log sharing solutions will be implemented on demand (publishing of logs on a public AFS area, GridFTP downloads, HTTPS access). A proxy-creation sketch for gLite-style access follows this row.
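For gLite-style access, a user would typically create a VOMS proxy for the testing VO introduced in section 5.1 before working on a User Interface. A minimal sketch, assuming the standard voms-proxy-init client is installed and the VO is configured on the UI:

    # Minimal sketch: obtaining a VOMS proxy for the testing VO on a gLite UI.
    # Assumes voms-proxy-init is installed and testers.eu-emi.eu is configured.
    import subprocess

    subprocess.check_call(["voms-proxy-init", "-voms", "testers.eu-emi.eu"])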
5 - Testbed Updates: Roles & Duties of EMI Testbed Administrators and Product Team Members
EMI Testbed administrators (EMI SA2.6 task) roles & duties:
1. Providing coordination effort, procedures, documentation and communication channels for the request, monitoring and maintenance of testbeds within EMI.
2. Providing HW, host certificates and whatever else is needed for service installation.
3. Providing coordination effort, procedures and communication channels for the request, monitoring and maintenance of large-scale testbeds with external partners.
Product Team members’ roles & duties in testbeds:
1. When requested by SA2.6, providing effort from PT people for service installation on the HW provided by SA2.6.
2. Keeping the Service Status Logbook, documenting all updates, configuration and information useful for the usage of the service by other PTs.
3. Providing effort for support and debugging of issues related to the service they are in charge of.
6 - Viewing Testbed Resources: Information Systems
Information system configuration: each middleware has a service for resource discovery and publication (ARC GIIS/ISIS, gLite BDII, UNICORE Registry). A central information system instance was configured for each middleware, publishing the resources in the testbed. Full cross-middleware compatibility among the existing information system services is in the EMI plans, and the EMI Testbed will reflect that integration once it is technically available.
Implications for testbed usage: the set of resources visible to the end users (developers) depends on the configuration of their access point (the information system instance configured in the User Interface instance the user is logged on to). In practice a user can build a custom testbed by selecting the needed resources from the pool of those published in the central information system, or by merging them with other resources published in another information system (e.g. a Product Team internal development testbed); a merge sketch follows this row.
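A minimal sketch of such a merged “custom testbed view” (hypothetical hostnames; GLUE 1.x attributes as in section 5.3), combining the endpoints published by the EMI central information system with those of a PT-internal one:

    # Minimal sketch: merging service endpoints from two LDAP information
    # systems into one custom view. Hostnames are hypothetical placeholders.
    from ldap3 import Server, Connection

    def endpoints(host):
        conn = Connection(Server(host, port=2170), auto_bind=True)
        conn.search("o=grid", "(objectClass=GlueService)",
                    attributes=["GlueServiceEndpoint"])
        return {str(e.GlueServiceEndpoint) for e in conn.entries}

    view = endpoints("emi-top-bdii.example.org") | endpoints("pt-dev-bdii.example.org")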
7 - Testbed Usage in Practice: Typical Use Cases & Handling Procedure
Use case A: developer John needs to test the correct interaction between service X (Release Candidate version) and services Y (production version) and Z (Release Candidate version). Solution: service X is configured to see the resources published in the chosen EMI Testbed central information system instance. Depending on the test performed, John may need some configuration effort on services Y or Z to ensure they can interact with X; John sends a support request to the EMI Testbed group (see row 3 in this table).
Use case B: developer John needs to test the correct interaction between his service X (version X.X.X, installed on some instance of his PT) and services Y (production version) and Z (Release Candidate version). Solution: service X is configured to see the resources published in the chosen EMI Testbed central information system instance. He can also set up a new information system merging the information from both the mentioned central information system and a local information system publishing some development resources, building a custom testbed view. Notice that services Y and Z will not be configured to see resources outside the EMI Testbed.
Use case C: developer John needs to test the correct interaction between his service X (Release Candidate version) and services Y (production version) and Z (Release Candidate version, not currently in the testbed) through a User Interface (e.g. job submission from a UI involving broker, information system, storage element, compute element). Solution: John requests (see row 3 in this table) an account on one of the User Interfaces provided in the testbed, which is configured to see the resources published in the chosen EMI Testbed central information system instance. Depending on the test performed, John may need some configuration effort on services Y or Z to ensure they can interact with X. Moreover, John needs service Z to be installed in the testbed; he sends a support request to the EMI Testbed group (see row 3 in this table).
8 - Testbed Usage in Practice: Typical Support Requests & Handling Procedure
The EMI testbed provides a set of instances deploying EMI products in production with the default configuration. To fully match the PTs’ testing needs, the following support request cases (from PTs, the SW Area Manager, SA1, JRA1) are expected and are treated as described below:
Requests for configuration support of existing services. These requests may include enabling VOs/users, making services talk to each other, custom BDII setups, etc. Procedure:
o Open a GGUS ticket on the EMI testbed support unit explaining your testing and configuration needs. The request will then be evaluated and tracked in a Savannah task on the testbed squad.
o If needed, the PT members of the services involved in the test will be contacted and their contribution tracked by Savannah tasks.
Requests for new service setups (RC service versions):
o Open a GGUS ticket on the EMI testbed support unit explaining:
    your testing needs, the type and version of the services you need installed, and the PT producing each service;
    whether you need the service to be included in the permanent EMI testbed.
o The request will then be evaluated and tracked in a Savannah task on the testbed squad. If needed, the involved PT members will then be contacted and their contribution tracked by Savannah tasks.
Requests for a specific testbed (in this category: performance tests, security tests, data management tests, etc.):
o Open a GGUS ticket on the EMI testbed support unit explaining:
    your testing needs and an estimate of the HW and SW requirements for your test;
    the PTs involved in the setup, and suggestions on possible sites/PTs/NGIs who may help in the setup;
    the period of time you expect the testbed to stay up.
o The request will then be evaluated and tracked in a Savannah task on the testbed squad. The involved PT members will then be contacted and their contribution tracked by Savannah tasks.
Requests for a large-scale testbed setup (requests involving contributions from entities outside EMI):
o Open a GGUS ticket on the EMI testbed support unit explaining:
    your testing needs and an estimate of the HW and SW requirements for your test;
    the PTs involved in the setup;
    suggestions on possible sites/NGIs wishing to contribute to the setup;
    the period of time you expect the testbed to stay up.
o The request will then be evaluated and tracked in a Savannah task on the testbed squad. External entities will be contacted and the feasibility of the testbed setup evaluated.
9 - Testbed Updates: Procedure and Average Time to Deploy New Versions of Services
The EMI Testbed goal is to deploy EMI products, covering the highest possible percentage of the versions of each release. The number of required resources is proportional to: (no. of EMI products) x (no. of supported releases) x (no. of supported platforms) x (production + Release Candidate versions) x (some redundancy in the number of instances per product); a worked example follows this row.
The resulting number of instances is potentially high, so SA2.6 will adopt some priority principles to decide what to deploy first in case of missing HW. In particular:
◦ Versions: production versions have priority over Release Candidate versions.
◦ Supported platforms: SL5 x64/32 first, others on request (UNICORE deploys on openSuSE, having no dependency on the platform).
The deployment process will be triggered by automated notifications from the EMI Release group. A “Release Candidate” status with an associated official build is a prerequisite for deployment.
The average deployment time will depend on the available resources and on the queue of requests; a target of 2 working days on a best-effort basis, as for the other support requests, is assumed.
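A worked example of the instance-count estimate above, with purely illustrative numbers (the real counts depend on the contents of each EMI release):

    # Worked example with illustrative numbers only.
    products = 50        # hypothetical number of EMI products
    releases = 2         # supported releases
    platforms = 2        # e.g. SL5 32/64-bit
    versions = 2         # production + Release Candidate
    redundancy = 1.5     # some instances deployed more than once

    instances = products * releases * platforms * versions * redundancy
    print(instances)     # 600.0 instances in this illustrative case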
8. CONCLUSIONS
The main outcome of this document is the definition of the implementation model of the EMI integration testing infrastructure, together with the operational resources necessary for its daily usage and continuous update. The resources and procedures made available answer the integration testing requirements identified in collaboration with the main users of the testbed, i.e. the EMI Product Teams and the technical development and support work packages. An “on demand” model for implementing the large-scale acceptance testbed has also been defined. Fine-tuning of the model, as well as of its actual implementation, is expected as testbed usage peaks are approached in correspondence with the first EMI release. Major changes deriving from usage experience will then result in updates of the present document.
Among the outcomes of the document it is also useful to mention the identification of the key open issues to be addressed to assure homogeneous usability of the testbed by all the middlewares converging into EMI. Future work will then focus on tracking the solutions to these issues and integrating them into the testbed; among these, the most relevant with respect to direct effort contribution is the adaptation of the testbed infrastructure implementation to the planned automation of the testing process.