Project Acronym Fed4FIRE
Project Title Federation for FIRE
Instrument Large scale integrating project (IP)
Call identifier FP7‐ICT‐2011‐8
Project number 318389
Project website www.fed4fire.eu
D5.5 – Report on second cycle development
Work package WP5
Task T5.1
Due date 15/06/2015
Submission date 09/05/2015 (ToC proposal)
15/06/2015 (Submission for Internal review)
06/07/2015 (Submission of final version)
Deliverable lead Mikhail Smirnov (Fraunhofer)
Version 2.0
Authors Mikhail Smirnov (Fraunhofer)
Florian Schreiner (Fraunhofer)
Thierry Parmentelat (INRIA)
Thierry Rakotoarivelo (NICTA)
Alexander Willner (TUB)
Yahya Al‐Hazmi (TUB)
FP7‐ICT‐318389/FRAUNHOFER/R/PU/D5.5
2 of 67
© Copyright Fraunhofer FOKUS and other members of the Fed4FIRE consortium, 2015
Alaa Alloush (TUB)
Chrysa Papagianni (NTUA)
Aris Leivadeas (NTUA)
Donatos Stavropoulos (UTH)
Aris Dadoukis (UTH)
Brecht Vermeulen (iMinds)
Loïc Baron (UPMC)
Carlos Bermudo (i2CAT)
Javier Garcia Lloreda (Atos)
Reviewers Jan Van Ooteghem (iMinds)
Steve Taylor (IT‐innovation)
Abstract This deliverable reports on the software developments carried out in Fed4FIRE for the common federation tools for experiment lifecycle management during the second development cycle
Keywords Experiment, testbed, resource, service, process, specification
Nature of the deliverable R Report X
P Prototype
D Demonstrator
O Other
Dissemination level PU Public X
PP Restricted to other programme participants (including the Commission)
RE Restricted to a group specified by the consortium (including the Commission)
CO Confidential, only for members of the consortium (including the Commission)
Disclaimer
The information, documentation and figures available in this deliverable, is written by the Fed4FIRE (Federation for FIRE) – project consortium under EC co‐financing contract FP7‐ICT‐318389 and does not necessarily reflect the views of the European Commission. The European Commission is not liable for any use that may be made of the information contained herein.
Executive Summary
This report describes the experiment lifecycle management in the federated testbed environment as it was developed in cycle 2 of the Fed4FIRE project. The structure of the deliverable follows the structure of Work Package 5, except for the description of the Fed4FIRE portal. For the sake of consistency the reservation and SLA plugins as well as the jFed tool are described within their respective tasks, while the portal section references those sections where needed. The introduction is intentionally short because this report is the logical continuation of the cycle 2 specification, where the main design decisions were made.
Two sets of inputs were taken into account in this deliverable: first, the evaluated priorities of the possible cycle 2 developments, including those meeting the requirements of infrastructure owners and of the service community; second, the architectural and sustainability requirements identified in WP2, which were again evaluated with respect to the WP5 goals. The main result of these evaluations was described in [4] as the vision of an experiment lifecycle management service offered to experimenters and supported by service components that are also seen as future services, i.e. components of a meta‐service. The cycle 2 developments reported in this document followed these inputs; a detailed description of a service or service component is therefore presented only where it differs from the one previously specified in [4].
This WP5 meta‐service, “Experiment lifecycle management”, allows accredited users to run their experiments seamlessly on a federation of heterogeneous testbeds. The developments (enhancements) of the service components facilitating this service are, in summary:
◦ Resource description and discovery, which provides a semantic directory of the resources within the federation and their related information, based on a formal representation (ontology);
◦ Resource discovery within an Application Service Directory tool, which provides semantic search and retrieval within a collection of offerings, including high‐level, ready‐to‐use functionality that allows experimenters to ease their interaction with the testbeds;
◦ Extensions of the resource discovery service specifically tailored to the needs of WP6 on monitoring and measurements (Section 3.2);
◦ Resource reservation service, which experimenters can utilize to reserve heterogeneous resources spanning multiple testbeds, based on a multiplicity of selection criteria (time, type, etc.);
◦ Resource provisioning service, which allows accredited users to allocate resources from one or several testbeds to deploy their experiments; it can be direct (the user selects specific resources) or orchestrated (the user defines requirements and the service provides the best fitting resources);
◦ Experiment control, which offers three alternatives (jFed, NEPI or LabWiki) to configure the resources and execute the different steps involved in the experiment deployment; the unified experiment configuration and execution is based on a common protocol for experiment control (FRCP), which must be available in Fed4FIRE testbeds;
◦ User interface and portal service components, which combine the Fed4FIRE portal and experimenter tools that allow the integration of the resources provided by the facilities in a user‐friendly way.
All these different component specifications are summarized in the final chapter.
Acronyms and Abbreviations
AA Authorization and Authentication
AM Aggregate Manager
AMQP Advanced Message Queuing Protocol
API Application Programming Interface
AR Action Result
BPM Business Process Modelling
CAMAD Computer Aided Modeling and Design of Communication Links and Networks
CLI Command Line Interface
CI Configurable Item (also: Continuous Integration [methodology])
CMS Content Management System
CRUD Create‐Read‐Update‐Delete
DAG Directed Acyclic Graph
DCE Direct Code Execution (ns‐3)
DM Data Model
DOM Document Object Model
EC Experiment Controller
FB Federation Board
Fed4FIRE Federation for Future Internet Research and Experimentation Facilities
FCI Federation Computing Interface
FGRE FIRE ‐ GENI Research Experimentation summit
FI, FIRE Future Internet, FI Research and Experimentation
FitSM Federated IT Service Management [methodology]
FRCP Federated Resource Control Protocol
FUSECO Future Seamless Communication [facility]
GENI Global Environment for Networking Innovation
GUI Graphical User Interface
JAXB Java Architecture for XML Binding
KPI Key Performance Indicator
KPI‐M Measured KPI
KPI‐T Target KPI
KPI‐U Uniform KPI
ILS Iterated Local Search
LCA Least Commonly Agreed [policy]
LGPL Lesser General Public License (GNU)
LTE Long Term Evolution [architecture]
MAS Management, Abstraction and Semantics
NDL Network Description Language
NEPI Network Experiment Programming Interface
NITOS Network Implementation Testbed using Open Source platforms
NICTA National ICT Australia
ns‐3 Network Simulator 3
OASIS Organization for the Advancement of Structured Information Standards
OCCI Open Cloud Computing Interface
OCF OFELIA Control Framework
OEDL The OMF Experimentation Description Language
OGF Open Grid Forum
OF OpenFlow
OFELIA OpenFlow in Europe: Linking Infrastructure and Applications
OLA Operational Level Agreement
OM Object Model
OMA Open Mobile Alliance
OMF cOntrol and Management Framework
OML ORBIT Measurement Library
OMN Open MultiNet [Forum]
OMSP OML Measurement Stream Protocol
ORCA Open Resource Control Architecture
OWL Web Ontology Language
PDP Policy Decision Point
PLE PlanetLab Europe [facility]
PI Principal Investigator
PyPElib Python Policy Engine library
RA Resource Adapter
RAML RESTful API Modeling Language
RC Root Cause
RDF Resource Description Framework
RDFS Resource Description Framework Schema
REST Representational State Transfer
RPC Remote Procedure Call
RSpec Resource Specification
SAWSDL Semantic Annotations for WSDL and XML Schema
SC Service component
SFA Slice‐based Federation Architecture
SLA Service Level Agreement
SM Semantic Model
SON Self Organised Network
SPARQL Protocol And RDF Query Language
SSH Secure Shell
TDD Test Driven Development
TOSCA Topology and Orchestration Specification for Cloud Applications
UI User Interface
URL Uniform Resource Locator
VCT Virtual Customer Testbed
VM Virtual Machine
WADL Web Application Description Language
XML eXtensible Markup Language
XMPP eXtensible Messaging and Presence Protocol
YourEPM Your Experiment Process Manager
Table of Contents
List of tables .......................................................................................................................................... 12
List of Figures ......................................................................................................................................... 13
1 Introduction ................................................................................................................................... 15
2 Inputs and Plans to this Deliverable (Task 5.1) ............................................................................. 17
2.1 Setting priorities for cycle 2 plans ......................................................................................... 17
2.2 Architectural requirements ................................................................................................... 17
2.3 Sustainability Requirements .................................................................................................. 20
3 Development of resource description and discovery (Task 5.2) ................................................... 22
3.1 Semantic Resource Directory as a Service ............................................................................ 22
3.2 Monitoring and Measurements ............................................................................................ 28
3.3 Discovery of application services .......................................................................................... 29
3.3.1 What are the Application Services? .............................................................................. 29
3.3.2 Service Directory ........................................................................................................... 30
3.3.3 Authentication and authorization for Application Services .......................................... 31
3.3.4 Application Service Representation .............................................................................. 31
3.3.5 Service Directory API Description .................................................................................. 33
4 Development of resource reservation (Task 5.3) .......................................................................... 34
4.1 Resource Reservation Overview ............................................................................................ 34
4.2 Reservation Broker – Architecture ........................................................................................ 34
4.2.1 Central Reservation Broker Inventory ........................................................................... 35
4.2.2 Mapping Sub module .................................................................................................... 35
4.2.3 Request Splitting ........................................................................................................... 36
4.2.4 Resource Mapping ......................................................................................................... 38
4.3 Reservation Plugin ................................................................................................................. 40
5 Development of resource provisioning (Task 5.4) ........................................................................ 43
5.1 Resource provisioning service ............................................................................................... 44
5.2 SLA Management .................................................................................................................. 46
5.2.1 Showing SLA information of testbeds supporting SLA .................................................. 46
5.2.2 Acceptance of the agreements ..................................................................................... 46
5.2.3 Viewing SLA agreements ............................................................................................... 47
5.2.4 Viewing SLA Evaluation ................................................................................................. 48
6 Development of experiment control (Task 5.5) ............................................................................ 50
6.1 Standalone tool – jFed ........................................................................................................... 50
6.2 NEPI ....................................................................................................................................... 56
6.2.1 Core ............................................................................................................................... 57
6.2.2 Drivers ........................................................................................................................... 57
6.2.3 Documentation and Dissemination ............................................................................... 57
6.3 Labwiki for experiment control ............................................................................................. 57
7 Development of the user Interface / portal (Task 5.6) ................................................................. 61
7.1 Cycle 2 Development of the Portal ....................................................................................... 61
7.1.1 Projects management ................................................................................................... 61
7.1.2 Plugins ........................................................................................................................... 62
8 Conclusions .................................................................................................................................... 63
References ............................................................................................................................................. 64
9 Appendix: “Partitioning Costs Definition” ..................................................................................... 66
List of tables
Table 1 Top Five cycle 2 items (Criticality) ............................................................................................ 17
Table 2 Cycle 2 vs Cycle 1 ...................................................................................................................... 17
Table 3 Evaluation of policy categories ................................................................................................. 20
Table 4 Service Directory as a service ................................................................................................... 30
Table 5 Broker REST interface ............................................................................................................... 35
Table 6 SFA Implementation on different testbeds .............................................................................. 43
Table 7 Resource provisioning service .................................................................................................. 44
List of Figures
Figure 1 Fed4FIRE Value Proposition .................................................................................................... 19
Figure 2 WP5 Service as a hierarchy ...................................................................................................... 20
Figure 3: Resource Centric Information Model within Fed4FIRE Legend: mon ‐ monitoring, prov ‐ provisioning, ctrl ‐ experiment control.................................................................................................. 23
Figure 4: Open‐Multinet Contributors .................................................................................................. 24
Figure 5: Semantic enabled SFA Aggregate Manger (FITeagle) ............................................................ 25
Figure 6: Open‐Multinet translation web GUI ...................................................................................... 26
Figure 7: Life cycle phases and APIs involved in the demonstration .................................................... 26
Figure 8: Demonstration workflow ....................................................................................................... 27
Figure 9: Linked semantic graph in WP5 and WP6 ............................................................................... 27
Figure 10 Integration of the semantic resource directory from the monitoring & measurement point of view ................................................................................................................................................... 29
Figure 11. Application Services under Fed4FIRE Portal ......................................................................... 32
Figure 12. Federation Services under Fed4FIRE Portal ......................................................................... 33
Figure 13: Central Reservation Broker – Architecture .......................................................................... 34
Figure 14 Partitioning of an experimenter request in a testbed federation environment ................... 36
Figure 15 Example of an abstract request and the mapping result produced ...................................... 39
Figure 16 (i) Unbound request (left) and (ii) Bound request: Response from the Reservation Broker (right) ..................................................................................................................................................... 39
Figure 17.Bound request: Response from the Reservation Broker ...................................................... 40
Figure 18 Unbound request .................................................................................................................. 40
Figure 19: Reservation Plugin: Unbound Request for Wireless Resources .......................................... 41
Figure 20: Reservation Plugin: Unbound Request for a Virtual Topology ............................................ 42
Figure 21. Acceptance of an SLA agreement ......................................................................................... 47
Figure 22. SLA agreements visualization table ...................................................................................... 48
Figure 23. SLA details view .................................................................................................................... 48
Figure 24 jFed Interface ......................................................................................................................... 50
Figure 25 jFed: XEN VM Properties ....................................................................................................... 51
Figure 26 jFed: editing interface IP numbers ........................................................................................ 52
Figure 27 jFed: map support ................................................................................................................. 53
Figure 28 jFed: overview of nodes (Part 1) ........................................................................................... 53
Figure 29 jFed: Overview of nodes (Part 2) ........................................................................................... 54
Figure 30 jFed: PLE topology support .................................................................................................... 54
Figure 31 jFed: detailed interface information ..................................................................................... 55
Figure 32 jFed: detailed link information .............................................................................................. 55
Figure 33 jFed: management of images ................................................................................................ 56
Figure 34 LabWiki Overview ..................................................................................................................
Figure 35 LabWiki Interface (Part 1) ...................................................................................................... 59
Figure 36 LabWiki Interface (Part 2) ...................................................................................................... 59
Figure 37: Project and slice creation workflow using Fed4Fire Portal .................................................. 61
Figure 38: Resources view in Fed4Fire Portal ....................................................................................... 62
Figure 39: Reservation of resources using Fed4Fire Portal ................................................................... 62
1 Introduction
In order to support testbed federation consumer growth, which promises to facilitate the critical shift from federated testbeds to federated research, experiment lifecycle management must meet a number of requirements set out in the previous deliverables D5.1 and D5.2, which contain the cycle 1 and cycle 2 specifications of WP5 [1]. Following the phases of specification and development, as well as the overall progress of the project, the cycle 2 specification plans were first prioritized and then aligned with the priorities set by WP2. These plans are reported in section 2 of D5.2, and this report illustrates how the plans were followed.
The rest of the deliverable follows the task structure of WP5. The main contribution in each task can be summarised as follows:
◦ Task 5.1 Overall Design and Planning: all the developments (features, KPIs, adaptations, etc.) within WP5 were steered and aligned with the current architectural model of the federation developed in Task 2.2, as well as with the future directions being specified in Task 2.3 and Task 2.4;
◦ Task 5.2 Resource Description and Discovery: a semantic information model and resource directory to store and provide generic information about heterogeneous resources, including application services and a documentation centre; the extension of this information model to cater for the specific needs of WP6 on measuring and monitoring within the federation;
◦ Task 5.3 Resource Reservation: T5.3 developed a service that experimenters can utilize to reserve heterogeneous resources spanning multiple testbeds, based on a multiplicity of selection criteria;
◦ Task 5.4 Resource Provisioning: SLA management components were developed for the resource provisioning service, as well as a front‐end management tool;
◦ Task 5.5 Experiment Control: the three recent releases of the jFed tool are described, along with release 3.2 of NEPI and the recently adopted LabWiki;
◦ Task 5.6 User interface / portal: combines the Fed4FIRE portal and experimenter tools (e.g. jFed) that allow the integration of the resources provided by the facilities in a user‐friendly way.
Experiment lifecycle management and the workflow tools that enable it are critical components of any testbed federation. WP5 therefore acknowledges the importance of finding a balance between the purely technical aspects influencing its developments and the aspects related to making the final result as sustainable as possible. Inspired by the sustainability activities in T2.3 of the project, and by the clear potential of deploying the FitSM methodology [9] in that context, WP5 first tentatively specified within its service portfolio catalogue one entry per task (a would‐be task service). Each entry was then accompanied by a task‐specific development within cycle 2. Clearly, the detailed specifications and the would‐be service descriptions do not currently match; however, their juxtaposition appears very helpful in projecting the road ahead towards a sustainable, necessarily service‐oriented, federation of heterogeneous testbeds.
The services described in this report would be impossible without the following key architectural components as defined by WP2 for cycle 2 [5]:
Aggregate manager and Aggregate manager directory
CLI for control (e.g. SSH server & client)
Resource controller
XMPP server
Experiment controller/ control server
Scenario editor
Documentation center
Portal
Authority directory
Service directory
Future reservation broker
Stand‐alone tools (jFed, NEPI)
Some of these components were sufficiently described in [1]. The others are further developed in the cycle 2 specification and in the Fed4FIRE documentation centre, which is available on‐line at http://doc.fed4fire.eu/.
2 Inputs and Plans to this Deliverable (Task 5.1)
2.1 Setting priorities for cycle 2 plans
WP5 used the Delphi method [8] to set the priorities of the cycle 2 plans coming from WP2 and from D5.1. This was an expert evaluation of the importance (Im) of each of the 16 collected cycle 2 topics and of the associated problem level (Pr), i.e. how difficult the implementation of the topic seemed. While importance is a well recognised criterion for prioritisation, difficulty may seem a separate consideration not connected to prioritisation. Nevertheless we used both metrics, assuming that their combination gives a good mix of what is really important and what is doable within the duration of the project. All 16 physical attendees of the project’s technical workshop in Berlin (October 2013) took part in the exercise. The Im and Pr evaluations were averaged, and the averaged values were then multiplied to represent the topic criticality (Cr).
Table 1 Top Five cycle 2 items (Criticality)
Priority (requirement #) | Value of criticality (Cr = Im • Pr) | Requirement name (source)
1 (5.4) | Cr = 54,08 | Infrastructure community (D5.1)
2 (5.16) | Cr = 51,50 | FRCP 1 (D5.1)
3 (2.1) | Cr = 47,48 | Testbed Protection (WP2)
4 (5.5) | Cr = 43,14 | Service community (D5.1)
5 (5.14) | Cr = 42,77 | Future reservations (D5.1)
The results for the Top Five most important, most difficult and, finally, most critical cycle 2 items are presented above; Table 1 shows in brackets the source of each requirement within cycle 2. For the details we refer to D5.2 [4].
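The prioritisation scheme described above can be sketched in a few lines of code. The following Python sketch shows how per‐attendee Im and Pr scores are averaged per topic and then multiplied to yield the criticality Cr = Im • Pr used to rank the cycle 2 topics; the topic names and scores below are purely illustrative, not the actual workshop data.

```python
def criticality(scores):
    """Average the per-attendee (Im, Pr) scores for one topic
    and return its criticality Cr = avg(Im) * avg(Pr)."""
    avg_im = sum(im for im, _ in scores) / len(scores)
    avg_pr = sum(pr for _, pr in scores) / len(scores)
    return avg_im * avg_pr

# Illustrative scores from hypothetical attendees (importance, problem level).
topics = {
    "Infrastructure community": [(8.0, 7.0), (7.5, 6.8), (7.2, 7.4)],
    "FRCP":                     [(7.4, 7.0), (7.0, 7.2), (7.1, 6.9)],
    "Testbed Protection":       [(7.0, 6.8), (6.8, 7.0), (6.9, 6.7)],
}

# Rank topics by decreasing criticality, as done for Table 1.
for name in sorted(topics, key=lambda t: criticality(topics[t]), reverse=True):
    print(f"{name}: Cr = {criticality(topics[name]):.2f}")
```

The combination of the two averaged metrics in a single product reflects the assumption stated above: a topic ranks highly only if it is both important and recognised as a substantial implementation challenge.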
2.2 Architectural requirements
Section 4 of D2.4 “Second federation architecture” [5] lists the 17 major differences of the cycle 2 architecture as compared to that of cycle 1. We repeat these differences in Table 2 in order to evaluate their importance from the viewpoint of experiment lifecycle management.
Table 2 Cycle 2 vs Cycle 1
Difference of cycle 2 from cycle 1 [5] | WP5 relevance (Low, Medium, High) | Explanation
Definition of member and slice authority instead of Identity provider | Low | does not change WP5 tools
Definition of federation model and introduction of the concepts outside of the federation | Low | WP5 tools are expandable
Definition of slice and sliver | Low | WP5 tools can work with both
Federation services layer and application services layer instead of the brokers layer | High | follows FitSM
Federator instead of central location(s) | Low | WP5 tools can work with both
Definition of currently three authorities in the Fed4FIRE federation | Low | as above
Rename certificate directory to authority directory | Low | as above
Introduction of documentation service in the federator | Low | as above
Rename testbed directory to aggregate manager directory | Low | as above
Experiment control (WP5): FRCP is the adopted API for this functionality; an FRCP resource controller is foreseen on every resource; security is to be addressed when transferring experiment control from the single‐testbed to the testbed federation domain | Medium | depends on FRCP adoption by testbeds
Facility monitoring (WP6) by exporting the corresponding data as OML streams and collecting them in a central OML server that is queried by the FLS dashboard | Medium | Task 5.2 had to tailor its tools to meet WP6 requirements
Application services layer next to federation services was introduced to facilitate the integration of the testbeds from the applications and services community [WP4] | High | facilitates SLA and OLA adoption
Separation of architectural components into optional and obligatory | Low | WP5 relies on obligatory ones
In experiment control, the PDP component was introduced | Medium | depends on PDP adoption by testbeds
Workflows of the particular parts of the architecture | Highest | facilitates automation of SLA and OLA; facilitates repeatability of experiments
SLA management was introduced | High | depends on SLA engine deployment at testbeds
A detailed setup was introduced for the First Level Support dashboard | Low | not in WP5 scope
The majority of these relevance assessments are rather straightforward and need no further motivation. There is one specific case, though, that might catch the attention of the reader and requires a bit more explanation. This case is the assignment of the highest WP5 relevance to the fact that WP2 introduced workflows of the particular parts of the architecture in the second iteration of its federation architecture. The rationale for this evaluation stems from the original consideration of a Fed4FIRE federation of testbeds as a double‐sided market. This market provides its services to two customer groups, experimenters and testbed providers, as shown in the value proposition in Figure 1, adopted by WP5 from the Task 2.3 work on sustainability.
Figure 1 Fed4FIRE Value Proposition
The service offerings to the experimenter group are regulated by Service Level Agreements (SLAs) and PDP rules, in interaction with experiment control, supported by the facility monitoring service and eventually by services from the application services layer.
The service offerings to the testbed provider group are regulated by Operational Level Agreements (OLAs), which are best implemented and managed as services within the federation services layer. An OLA is the agreement between the federator and the testbeds.
Workflows are equally important for both sides of the Fed4FIRE market, because they formally specify the interactions between components (within particular use cases) and facilitate coherent operation of the market, with proper orchestration of both services and resources. The WP5 service and its components are shown in Figure 2. A workflow is yet another term for a process, and since the Fed4FIRE project is currently adopting the process‐oriented FitSM methodology [9], the importance of workflows in experiment lifecycle management (including incident management) is growing.
Figure 2: WP5 Service as a hierarchy
2.3 Sustainability Requirements
The newly created Task 2.4 within Fed4FIRE specifies the Federation Board and, in particular, policy categories. An initial list of potential policy categories, given in Table 3, is evaluated against the work of WP5 with some initial comments.
Table 3 Evaluation of policy categories

| Policy Category | WP5 relevance (Low/Medium/High) | Explanation |
| Operational model of federation (FitSM model, balance between testbed and experimenter communities) | High | Creates a first instance of OLA |
| Eligibility Requirements for Open Access | Low | WP5 already supports this |
| Eligibility Requirements for Special Access (Terms and Conditions) | Medium | Requires service differentiation |
| Resource Management (reservation and provisioning) | High | Depends on policies that are to be part of OLA |
| Resource Management (description and discovery) | Medium | Depends on policies that are to be part of OLA |
| Services the federation offers | High | Depends on policies that are to be part of SLA |
| Communication and marketing | Low | For further study in T2.3 and T2.4 |
| Future direction for the federation | TBD | For further study in T2.3 and T2.4 |
| Funding for Federator and Federation Board | TBD | For further study in T2.3 and T2.4 |
| Processes for experimenters to get federated resources | TBD | For further study in T2.3 and T2.4 |
Obviously, nearly 50% of the policy categories cannot yet be evaluated at the end of cycle 2 within WP5. Nevertheless, the very fact that the future existence of the federation depends on the selection of the right policies and on their timely and correct management is important for the future work of WP5. This can be explained through the need to deploy the OLA as a placeholder for all federation-wide policies (presumably at Federator level), and subsequently through the need of testbeds and lifecycle management tools to obey the respective policies in a way that does not constrain the work of experimenters. Some components of the WP5 service, such as resource reservation and provisioning, are explicitly policy-dependent (some work on this is already reported in D5.4), while others, such as experiment control, are currently policy-agnostic except for the Open Access policy.
3 Development of resource description and discovery (Task 5.2)
This section provides an update of the corresponding counterpart in Fed4FIRE Deliverable D5.2 [4] for the second project development cycle. It describes the work conducted with respect to the first phase of the experiment lifecycle, resource description, and more specifically semantic resource description and discovery. As such, motivational background and references to related work are not repeated within this deliverable.
We believe that the work done for OMN is relevant for this task, including the work done by NTUA for CAMAD as an application example [3]. The development work reported here continues the work reported at the end of the first development cycle [1] in deliverable D5.3, which was discussed at the GENI-FIRE workshop [2] and in subsequent ontology workshops.
3.1 Semantic Resource Directory as a Service
As mentioned in D5.2, the "semantic resource directory" was not planned as a logical component in the context of Fed4FIRE, i.e. it should NOT be a single centralized federator component, as it was not defined as such in D2.4 [5]. However, it was identified that information is logically linked and exchanged not only in the resource discovery phase across the federated testbeds, but throughout the complete Fed4FIRE architecture, which justifies implementing the "semantic resource directory" as a service for the entire federation. Examples of usage of this service include:
- Within WP3 developments, information about infrastructures is being pushed.
- Within WP4, information about services is being pushed.
- WP6 is developing mechanisms to describe and communicate monitoring information.
- In WP7, the monitoring information is being used for trust and authorization mechanisms.
- Within WP5 several cases exist:
  - T5.3 models information about reservation and availability of resources,
  - T5.4 models information about provisioning resources,
  - T5.5 models information for controlling resources,
  - T5.6 models information to represent this to the user.
Representing this information in a standardized, homogeneous and semantic manner would facilitate the following benefits (mainly involving descriptive languages and therefore without involving much functional code):
- Reuse of implementations and ontologies already developed within the Semantic Web community [6].
- Combination of resource information for handovers between protocols (SFA, FRCP, OMSP) and services (e.g. between the monitoring service and the SLA service).
- Validation of information, by detecting errors in the data early and explaining them in detail to the user or developer.
- Sophisticated expressions of relationships between information, such as equality, to overcome the heterogeneity of models within a federation.
- Complex queries to extract the needed information, discover resources and answer sophisticated questions, such as the connectivity of resources.
Therefore, in the second development cycle, the following main achievements, listed and explained below, can be highlighted as progress toward the goal of providing a homogeneous resource description within federated testbed environments.
First, on a theoretical level, an architecture has been presented in [6] that allows semantic information to be exchanged between different micro-services (both service components and configurable items introduced in Figure 2) in an extensible way. This would allow linking each of the above-mentioned components and items through a semantic graph, even without introducing a centralized directory. In conjunction with this, the Federated Infrastructure Description and Discovery Language (FIDDLE) has been developed (and will be published in a peer-reviewed publication in Cycle 3). This ontology takes into account existing work described in D5.2, such as RDF, INDL, NOVI and NML, and most partners of T5.2 contributed to the discussion. The figure below shows its relationship to the Fed4FIRE work packages from a very high-level perspective. The information model is resource (R) centric, with extensions for provisioning (prov) in WP5, monitoring (mon) in WP6 and potential extensions for other work packages such as WP7.
Figure 3: Resource Centric Information Model within Fed4FIRE Legend: mon ‐ monitoring, prov ‐ provisioning, ctrl ‐ experiment control
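To make the resource-centric linking idea concrete, the following minimal sketch shows how provisioning and monitoring statements about the same resource end up connected in one graph. It uses plain Python tuples rather than a real RDF store, and all URIs and predicate names are invented for illustration; they are not actual FIDDLE vocabulary.

```python
# Minimal triple "store" illustrating how resource (R), provisioning (prov)
# and monitoring (mon) statements can share one graph via common subjects.
# All URIs and predicate names are invented, not actual FIDDLE terms.
triples = [
    ("urn:node-1", "rdf:type", "omn:Resource"),
    ("urn:node-1", "prov:state", "provisioned"),
    ("urn:node-1", "mon:hasMetric", "urn:metric:cpu-load"),
    ("urn:metric:cpu-load", "mon:unit", "percent"),
]

def query(graph, s=None, p=None, o=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [t for t in graph
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Everything known about node-1, across provisioning and monitoring:
facts = query(triples, s="urn:node-1")
```

Because provisioning and monitoring facts share the subject `urn:node-1`, one pattern query collects information that would otherwise live in separate work-package silos.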
As a formal information model, FIDDLE has the potential to act as the basis for the envisioned resource description model. This topic was therefore actively discussed at the two GENI-FIRE workshops in November 2013 and 2014. Additionally, an ontology workshop was held at the University of Amsterdam (UvA) with partners from inside and outside GENI and FIRE. This resulted in a common agreement [7] on the required aspects of such an ontology, and the participants agreed to transform and enhance FIDDLE into an internationally agreed standard (which will be part of Cycle 3). This will be conducted under the umbrella of the established international, project-independent forum "Open-Multinet" (see figure below).
Figure 4: Open‐Multinet Contributors
Second, to bring this theoretical work to the implementation level and to provide proofs of concept, development of a FIDDLE-enabled Aggregate Manager (FITeagle) and a translation mechanism has begun. As shown in the figure below, we introduced a new query parameter to discover/filter resources within the SFA ListResources() method call; as a result, an RDF/XML serialized graph is returned. This further allowed the use of existing Fed4FIRE tools such as jFed. The thus-enhanced FITeagle acts as a middleware component between an experiment control tool (a client, say jFed) and a testbed (a server).
Figure 5: Semantic-enabled SFA Aggregate Manager (FITeagle)
The translation mechanism allows other developers to adopt the conducted work, in particular within other work areas of Fed4FIRE, by translating information back and forth between their formats and the developed model. This includes in particular the translation between the semantic information model and RSpecs in both directions. The web GUI of the developed service is shown below.
Figure 6: Open‐Multinet translation web GUI
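The direction of such a translation can be sketched as follows. The RSpec fragment and the generated predicates are deliberately simplified stand-ins: the real GENI RSpec schemas are far richer, and the `omn:`/`omnattr:` names are invented for illustration, not the actual Open-Multinet translator vocabulary.

```python
import xml.etree.ElementTree as ET

# A much-simplified advertisement RSpec fragment; real RSpecs carry many
# more elements, namespaces and attributes.
rspec = '<rspec><node component_id="urn:publicid:node1" exclusive="true"/></rspec>'

def rspec_to_triples(xml_text):
    """Translate each <node> element of an RSpec into s-p-o triples."""
    result = []
    for node in ET.fromstring(xml_text).iter("node"):
        subject = node.get("component_id")
        result.append((subject, "rdf:type", "omn:Node"))
        for attr, value in node.attrib.items():
            if attr != "component_id":
                result.append((subject, "omnattr:" + attr, value))
    return result

node_triples = rspec_to_triples(rspec)
```

The reverse direction (triples back to RSpec XML) follows the same pattern, which is what makes a round-trip between the two representations feasible.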
Finally, the handover between different protocols has been implemented and demonstrated, namely the handover between provisioning (WP5) and monitoring (WP6), based on the mentioned information model (Figure 3). As depicted in the images below, this included the SFA and OMSP protocols and the first five steps of the experiment lifecycle.
Figure 7: Life cycle phases and APIs involved in the demonstration
Figure 8: Demonstration workflow
As a result, the figure below represents the linked information managed within the provisioning and the monitoring phase, or ‐ in terms of Fed4FIRE ‐ between WP5 and WP6.
Figure 9: Linked semantic graph in WP5 and WP6
Moreover, NTUA proposed a first approach to exploiting semantic annotations for resource mapping for FI experimental facilities (wireless infrastructures) and disseminated the results in a scientific paper accepted at the 2014 IEEE 19th International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD) [3]. Additionally, further information about the extension of tools such as jFed in this regard is presented in D5.3.
In parallel, with respect to the extension and wider usage of the existing XML-based RSpec approach, the following work was conducted. Over 300 different attributes that testbeds want to expose have been collected, including information about L2 RSpec extensions. The ProtoGENI v2 RSpec was extended for wireless devices, and reservation information was introduced and used in NITOS. Finally, NEPI and the MySlice SFA Gateway have been extended to deal with different definitions of RSpecs.
3.2 Monitoring and Measurements
Based on the work presented above, this subsection focuses on T5.2-related extensions for WP6. Monitoring and measurement services covered by the Fed4FIRE monitoring architecture play major roles for multiple stakeholders: experimenters, federation services (e.g. SLA, reputation and reservation) and the FLS monitoring dashboard. These services are described through an ontology-based information model with associated concepts and relations. This model/ontology covers the basic and fundamental measurement and monitoring concepts and relations, such as infrastructure resource monitoring, infrastructure health monitoring (facility monitoring), measurements (passive and active), tools, measurement metrics, measurement data, data units, unit prefixes, protocols, physical and logical locations, queries, etc. All of these are described through a set of classes and properties.
However, an ontology focused on the needs and requirements of a single project such as Fed4FIRE will be limited in scope to the environment of that project. Indeed, the common monitoring tools used in Fed4FIRE are described, as well as the common measurement metrics that give monitoring information about resources (e.g. availability, capacity, CPU and memory utilization, and more). It also covers metrics from heterogeneous domains (wireless, cloud-based, SDN-enabled, etc.). All components of the Fed4FIRE monitoring and measurement architecture (see Figure 8 and Figure 10) and those they interact with (e.g. the SLA management model, the FLS monitoring dashboard, the data broker, etc.) are modelled.
The monitoring ontology is directly linked with all other ontologies that describe resources, components, services, federation, etc., in order to use/share/have the same conceptualization and understanding of things.
With resources monitored and the data stored in any repository/database together with the ids/names of these resources, any component or user can query and find data related to any resource, as long as its unique id/name is used.
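A minimal sketch of this id-keyed lookup, using an in-memory SQLite table as a stand-in for whatever repository a testbed actually uses; the metric names and values are invented for the example.

```python
import sqlite3

# Sketch: measurements stored in any relational store become retrievable
# by any component, as long as the resource's unique id/name is the key.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE measurements (resource_id TEXT, metric TEXT, value REAL)")
db.executemany(
    "INSERT INTO measurements VALUES (?, ?, ?)",
    [("urn:node-1", "cpu_load", 0.42),
     ("urn:node-1", "mem_used", 0.80),
     ("urn:node-2", "cpu_load", 0.10)],
)

def metrics_for(resource_id):
    """All monitoring data for one resource, looked up purely by its id."""
    rows = db.execute(
        "SELECT metric, value FROM measurements WHERE resource_id = ?",
        (resource_id,))
    return dict(rows.fetchall())

node1 = metrics_for("urn:node-1")
```

Any component that knows only the resource id (SLA engine, reputation service, dashboard) can retrieve the same data without knowing how it was produced.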
Figure 10 Integration of the semantic resource directory from the monitoring & measurement point of view
3.3 Discovery of application services
3.3.1 What are the Application Services?
The Application Services layer introduced in WP2 (document D2.4 – Second federation architecture [5]) makes it easier for experimenters to use the resources of the federated infrastructure in their experiments. Given the increasing number of testbeds covered within Fed4FIRE and the diversity of technologies available, Infrastructure Providers are given the capacity to offer high-level, ready-to-use functionality that allows experimenters to access specific data or to deploy different software applications on top of existing testbeds. These functions have been defined as Application Services, and those who offer them as Service Providers.
During cycle 2, only federated testbed providers offer Application Services. There is the option (to be explored in cycle 3) of allowing external Service Providers to offer services using federation members' infrastructures as well.
To the two Application Services first considered at the start of cycle 2, one more, OpenStack on demand, has been added, and two more were proposed for cycle 3. The existing services in cycle 2 are:
- Sensor data gathering and publication through an API from SmartSantander's testbed.
- Automatic deployment of Hadoop clusters on resources of the Virtual Wall.
- OpenStack on demand deployment on the Virtual Wall.
3.3.2 Service Directory
The Service Directory is used by experimenters who are interested not in discovering resources, but rather ready-to-run services. They need to discover the available services and decide whether these fulfil their needs. The directory contains information about Infrastructure Services, offered by the federation itself as defined in D4.2, and Application Services, offered by Service Providers.
Application Services stored in the directory may come from federated and non-federated Service Providers. In this sense, federated means that the Service Provider has a valid account on Fed4FIRE and therefore the corresponding credentials guaranteeing that the Service Provider is already trusted within the federation.
Two types of Service Providers were considered for cycle 2:
- Testbed Providers that act as Service Providers, offering Application Services using their own infrastructure. They are already federated. Good examples are SmartSantander and iMinds with their Hadoop cluster.
- Service Providers that do not own any testbed and use the infrastructure provided by other members of the federation. They can offer Application Services whether they are federated or not, although federation membership should be encouraged.
However, only testbed providers are actually providing Application Services; adding the second type of Service Provider, although still possible, is not going to happen during cycle 2 and does not fit the federation plans for cycle 3, because of the security and trustworthiness issues mentioned in the previous deliverable D5.2.
Therefore, the actual description of the Service Directory as a component of the WP5 meta-service can be found in Table 4.
Table 4 Service Directory as a service

Basic Information
| Service name | Resource discovery (Application Service Directory) |
| General description | Collection of offerings including high-level, ready-to-use functionality that allows experimenters to ease their interaction with the testbeds |
| User of the service | Experimenter |

Service management
| Service Owner | Atos |
| Contact information (internal) | [email protected] |
| Contact information (external) | [email protected] |
| Service status | Previous status (cycle 1): nonexistent. Current status (cycle 2): conception and development, including three services. Future (cycle 3): probably further services will be included; specific SLAs for these applications could be defined and implemented, and security mechanisms will be refined. |
| Service Area/Category | Resource discovery |
| Resource description | – |
| Resource specification | – |
| Service agreements | SLA is n/a |

Detailed makeup
| Core service building blocks | All requests are handled by a core element, which is in charge of querying a database, processing the responses and formatting them correctly to be sent back to the clients. Experimenters can browse applications, and service providers can upload and manage them. |
| Additional building blocks | – |
| Service packages | N/A |
| Dependencies | There is a strong dependency on security mechanisms. |

Technical Risks and Competitors
| Risks | The application services included in this directory should be "guaranteed" by the federation (they must be validated and maintained); otherwise the image of the federation could be damaged ("applications in Fed4FIRE fail", etc.). |
| Competitors | XIFI yellow pages and services |
3.3.3 Authentication and authorization for Application Services
Authentication and authorization mechanisms are an essential aspect of the access to the Service Directory.
During cycle 2, Application Services have been offered by Testbed Providers acting as Service Providers, e.g. SmartSantander and iMinds. As mentioned above, Service Providers without a testbed have not been considered in this cycle. Since experimenters may require certain Quality of Service conditions for Application Services, allowing non-federated Service Providers to use federated testbeds would require mechanisms to guarantee quality assurance, service availability and confidentiality policies across the federation. These are aspects that have not been taken into consideration for cycle 3.
Currently, experimenters access Application Services using the same user certificate for authentication at these services as they use for authentication at the testbeds.
3.3.4 Application Service Representation
Models for application services were defined in D5.2 ("Detailed specifications regarding experiment workflow tools and lifecycle management for the second cycle"). Following that template, the three Application Services available in cycle 2 can be found using the Fed4FIRE Portal, under the tab called "Services".
Figure 11 shows the view an experimenter has of those services. A brief description of the service, the provider and the API endpoint are shown by default in the list. A more detailed description, the API documentation and the type of protocol the service uses can be seen by clicking on the service title, as shown for the "Hadoop-on-demand" service.
Figure 11. Application Services under Fed4FIRE Portal
The experimenter can also see which services are offered by the federation by clicking the "Federation Services" tab. Figure 12 shows this view, where the name, a brief description and the URL of each service are shown.
Figure 12. Federation Services under Fed4FIRE Portal
3.3.5 Service Directory API Description
The Service Directory REST API has been implemented as detailed in the cycle 2 specifications deliverable and has already been deployed.
Regarding authentication, a Basic HTTP authentication scheme has been established, where the Fed4FIRE Portal is currently the only tool with credentials.
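For illustration, a Basic-authentication request to the directory could be constructed as below. The URL and credentials are placeholders, since the real endpoint and the portal's credentials are not public; only the header construction itself follows the standard scheme.

```python
import base64
import urllib.request

# Hypothetical endpoint and credentials -- the real Service Directory URL
# and the Fed4FIRE Portal's credentials are not reproduced here.
SERVICE_DIR_URL = "https://example.org/servicedirectory/services"

def basic_auth_request(url, user, password):
    """Build a GET request carrying an RFC 7617 Basic authentication header."""
    token = base64.b64encode("{}:{}".format(user, password).encode()).decode()
    req = urllib.request.Request(url)
    req.add_header("Authorization", "Basic " + token)
    return req  # not sent here; urllib.request.urlopen(req) would send it

req = basic_auth_request(SERVICE_DIR_URL, "portal", "secret")
```

Because only the portal holds valid credentials, every other tool reaches the directory indirectly through the portal's views shown above.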
For further details on the API see Appendix G on deliverable D5.2 ‐ Detailed specifications regarding experiment workflow tools and lifecycle management for the second cycle [4].
4 Development of resource reservation (Task 5.3)
4.1 Resource Reservation Overview
The Reservation Broker is the overarching service that experimenters may use to reserve heterogeneous resources spanning multiple testbeds in the federation, based on a multiplicity of selection criteria (time, resource type, etc.). Experimenters can benefit greatly from the brokering service, as the broker simplifies the overall process of identifying and reserving suitable resources for their experiments, especially when they need to run experiments that require resources from multiple testbeds. An extended description of the brokering service and its benefits is provided in detail in the predecessors of this document, D5.1 – D5.4.
In Deliverable 5.2 we provided a short discussion of the functionality of the Reservation Broker to be supported by the end of Cycle 2. Adapting the initial NITOS/NICTA Broker implementation, we provided the functional specification of the Reservation Broker, including the broker's architecture, its interactions with other components in the F4F environment, modules complementary to the Reservation Broker (like the Reservation plugin in the F4F portal), considerations and requirements regarding the description of resources with regard to this particular service, and the status of implementation.
As specified in Deliverable 5.2, at the end of cycle 2 the user is able, using the Fed4FIRE portal, to make bound or unbound requests for resources via the Reservation plugin. In the case of an unbound request, the Reservation plugin interacts with the Reservation Broker, which matches and optimizes the timeframe and resource requirements, as set by the experimenter, over one or multiple testbeds in the Fed4FIRE federation. Currently, the creation of federated slices over a set of wireless testbeds (NETMODE and NITOS) and PlanetLab is supported.
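As a rough illustration of the distinction, an unbound request carries constraints rather than concrete resource identifiers; the field names below are invented for the sketch and do not reproduce the actual plugin/Broker message schema.

```python
# Illustrative only: these field names are invented and do not reproduce
# the actual Reservation plugin / Broker message schema.
unbound_request = {
    "slice": "urn:publicid:IDN+fed4fire+slice+demo",
    "timeframe": {"start": "2015-06-20T10:00:00Z", "duration_min": 120},
    "resources": [
        {"type": "wireless_node", "count": 4},    # any suitable testbed
        {"type": "virtual_machine", "count": 2},  # broker picks placement
    ],
}

def is_unbound(request):
    """A request is unbound if no entry names a concrete component id."""
    return all("component_id" not in r for r in request["resources"])
```

A bound request, by contrast, would pin each entry to a specific component id, leaving the broker nothing to optimize.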
4.2 Reservation Broker – Architecture
Figure 13 shows the overall architecture of the Central Reservation Broker. In the 2nd cycle of Fed4FIRE developments, the parts that have been extended or implemented are the Mapping sub-module and the Inventory (DB). The scheduler module of the Central Reservation Broker was extended to support a fully modular mapping sub-module that can be overridden, in order to utilize more sophisticated resource reservation algorithms than the default first-come-first-served one. In addition, the Central Reservation Broker's inventory has been enhanced to better fit the resources that exist in the Fed4FIRE ecosystem.
Figure 13: Central Reservation Broker – Architecture
In the following sections the extensions mentioned above are described thoroughly.
4.2.1 Central Reservation Broker Inventory
The Inventory is a core component of any brokering framework; its main responsibility is to retain and manage a list of all available resources and the reservations related to them. The Fed4FIRE ecosystem involves more than one testbed, containing a plethora of different types of resources. In order to populate the inventory with all those resources, we needed to develop events that retrieve information about the resources provided by each testbed, as well as a way to invoke those events both periodically and on demand.
As also described in deliverable D6.5 [20], Manifold is used to populate the inventory. Manifold is a coarse-grained infrastructure monitoring service and a major component of the Fed4FIRE architecture. For every testbed integrated into the Central Reservation Broker, it is mandatory to implement two events (one for resources and one for reservations) that request an advertisement from the testbed's SFA interface (through Manifold), parse the output and populate the inventory accordingly. At the moment, the integration of three Fed4FIRE testbeds has been completed (PLE, NITOS and NETMODE).
A mechanism that invokes the population events described above has been implemented. It is part of the Central Reservation Broker and is capable of invoking the events both periodically and on demand. Every 30 minutes the inventory is refreshed for every integrated testbed. In addition, a REST API has been implemented that can refresh the inventory of either a specific testbed or all testbeds on demand. More specifically, Table 5 defines how the REST interface can be used to refresh the inventory of the Central Reservation Broker.
Table 5 Broker REST interface

| Description | Path | Method | Body | Result |
| List all available domains. | /domains | GET | – | A list of the integrated domains/testbeds. |
| Refresh all domains. | /domains/refresh | POST | JSON: {"domains": ["*"]} | Refresh inventory related to all domains/testbeds. |
| Refresh specific domains. | /domains/refresh | POST | JSON: {"domains": ["nitos", "netmode"]} | Refresh inventory related to a list of domains/testbeds. |
Third-party tools and services, like the Reservation Plugin described in Section 4.3, can consume this REST API.
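A client of this interface might build the refresh call from Table 5 as follows. The broker's base URL is a placeholder, and the request is only constructed here, not sent.

```python
import json
import urllib.request

BROKER = "https://broker.example.org"  # hypothetical base URL

def refresh_request(domains):
    """Build the POST /domains/refresh call of Table 5 (constructed, not sent)."""
    body = json.dumps({"domains": domains}).encode()
    return urllib.request.Request(
        BROKER + "/domains/refresh",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Refresh every integrated testbed, or only selected ones:
all_req = refresh_request(["*"])
some_req = refresh_request(["nitos", "netmode"])
```

Passing `["*"]` mirrors the "refresh all domains" body from the table, while a list of testbed names targets only those inventories.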
4.2.2 Mapping Sub-module
The Reservation Broker retains information on a pool of heterogeneous resources from different testbeds, providing external users with the opportunity to conduct experiments leveraging the wide range of capabilities these different resources can offer. These are essentially resources with different hardware and software capabilities (e.g. wired/wireless resources, communication protocols, OS used, etc.). As described in Deliverable 5.2, the mapping sub-module is "entrusted with the responsibility of (i) splitting efficiently the request among underlying infrastructures and (ii) mapping the corresponding partial unbound requests to the appropriate substrate resources from selected testbeds in the Fed4FIRE federation"1.
4.2.3 Request Splitting
Adopting the terminology from the Network Virtualization Environment [12], request splitting or request partitioning entails splitting an experimenter's unbound request among a number of underlying testbeds (e.g., Figure 14), with the goal of minimizing an appropriate cost objective.
Figure 14 Partitioning of an experimenter request in a testbed federation environment
Within the Fed4FIRE project, the involved actors are the Experimenters, the Testbed Providers and the Central Reservation Broker. During request partitioning, the Central Reservation Broker therefore seeks and aggregates resources offered by different Testbed Providers in order to provision an Experimenter's unbound request. However, in order to perform partitioning we need to specify:
1. The appropriate cost objective: We define the provisioning cost for a resource using load balancing metrics [13][14]. In particular, the cost of provisioning a resource (virtual machine, wireless node, virtual link, etc.) is associated with the availability of resources of the same type, namely the scarcity of the resource in each testbed, and the average utilization over a time window [13]. The average utilization of requested resources is taken into consideration to avoid including over-utilized testbeds during the allocation of each user request. In the case of links between different testbeds, their cost is assumed to be an order of magnitude higher than links within a single testbed [12][13][15]. Hence it is set according to the highest cost among intra-testbed links of the two endpoint testbeds, augmented by a penalty value. A detailed description of the cost objective and the aforementioned costs is provided in Section 9 (Appendix).

2. An efficient heuristic for request partitioning: The goal of such a heuristic is to provide a quick and efficient splitting of the user request among the available testbeds. In the case of a user's request for a virtual network, the graph splitting problem is NP-hard [12][13]. To address such cases, the ILS meta-heuristic [13] is selected for every request splitting case, adjusted to a testbed federation environment. The results provided in [13] present a tradeoff between the time required to split the request and the partitioning cost, compared with the exact solution for small problem instances.

1 Mapping could also be supported locally by each testbed.
ILS Implementation and Fine Tuning
The ILS metaheuristic applies local search to an initial solution until it finds a local optimum; it then perturbs the solution and restarts local search [15]. The starting point for the search may be a randomly constructed candidate solution, or one returned by a greedy construction heuristic. Local search consists of moving from one solution to another according to well-defined rules concerning a neighbourhood of solutions, the accepted moves within that neighbourhood, and a termination criterion [16][17][18]. Common termination criteria for local search algorithms are reaching a maximum number of iterations, or terminating the search when no improvement has been observed for a number of iterations. Perturbation generates new starting points for the local search by modifying some local optimum solution; it guides the algorithm to escape from local optima and hence to investigate other parts of the search space. The strength of the perturbation plays a significant role in the efficiency of the algorithm, where strength is usually defined as the number of solution components modified. Finally, the solution from which the search is continued is selected according to an appropriately defined acceptance criterion; the new local optimum is usually accepted with a fixed rule, e.g. accept only better solutions. The pseudocode of the ILS algorithm is presented in the following.
Algorithm 1: Iterated Local Search (ILS)
s0 ← GenerateInitialSolution()
s ← LocalSearch(s0)
while stopping criterion is not met do
    s′ ← Perturbation(s)
    s″ ← LocalSearch(s′)
    s ← AcceptanceCriterion(s, s″)
end while
To apply the general-purpose ILS in the Fed4FIRE platform, the above procedures have been fine-tuned as follows:
Generate Initial Solution: A random initial solution is selected, where each requested resource is assigned randomly to a testbed of the federation.
Perturbation: It was observed that the best performance is achieved when at least 80% of the solution components are altered.
Acceptance Criterion: Only better solutions are accepted.
Local Search: An iterated descent algorithm is applied, employing a first-improvement pivoting rule. A simple type of move is used to define a neighbourhood: at each iteration a node is re-assigned to a different testbed; if the overall partitioning cost is improved, the move is accepted, otherwise a different move is made.
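As an illustration, the fine-tuned procedure can be sketched in Python. The six-node/three-testbed instance and the pair-splitting cost function are toy assumptions for illustration only, not the broker's actual objective:

```python
import itertools
import random

def iterated_local_search(initial, local_search, perturb, cost, iters=200):
    """Generic ILS skeleton following Algorithm 1 (better-solution acceptance)."""
    s = local_search(initial())
    for _ in range(iters):              # stopping criterion: iteration budget
        s2 = local_search(perturb(s))
        if cost(s2) < cost(s):          # acceptance criterion: only better solutions
            s = s2
    return s

# --- toy instantiation: assign 6 requested nodes to 3 testbeds --------------
NODES, TESTBEDS = 6, 3

def toy_cost(assign):
    """Hypothetical partitioning cost: count node pairs split across testbeds."""
    return sum(1 for i, j in itertools.combinations(range(NODES), 2)
               if assign[i] != assign[j])

def toy_initial():
    return [random.randrange(TESTBEDS) for _ in range(NODES)]

def toy_perturb(assign):
    """Re-assign at least 80% of the solution components, as tuned above."""
    s = list(assign)
    for i in random.sample(range(NODES), max(1, int(0.8 * NODES))):
        s[i] = random.randrange(TESTBEDS)
    return s

def toy_local_search(assign):
    """Iterated descent with a first-improvement pivoting rule."""
    s, improved = list(assign), True
    while improved:
        improved = False
        for i in range(NODES):
            for t in range(TESTBEDS):
                cand = s[:i] + [t] + s[i + 1:]
                if toy_cost(cand) < toy_cost(s):
                    s, improved = cand, True
                    break               # first improvement: take the move at once
            if improved:
                break
    return s

solution = iterated_local_search(toy_initial, toy_local_search,
                                 toy_perturb, toy_cost)
```

For this toy cost the optimum places all nodes on one testbed; local search alone suffices to reach it, while the perturbation step matters on realistic, constrained instances.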
4.2.4 Resource Mapping
Resource mapping within the context of Fed4FIRE depends highly on (i) the specific characteristics of each testbed (e.g., type of resources, access type, etc.) and (ii) the requested resources and the constraints imposed by the user (e.g. location of resources, OS type, etc.). For example, in the case of an unbound request for two standalone virtual machines, a greedy node mapping algorithm that is also responsible for load balancing can efficiently map requested resources to physical resources, whereas an unbound request for a virtual network topology requires a more sophisticated approach.
The mapping process is usually supported by the Infrastructure Providers, which are generally not willing to disclose information about their substrates. However, many of the testbed providers in Fed4FIRE do not support resource mapping; hence the experimenter must explicitly choose the resources for reservation. Therefore, the mapping sub-module also supports mapping of requested resources to the appropriate substrate resources in the Fed4FIRE federation. According to the questionnaire that was distributed among the Fed4FIRE partners in order to describe the available resources and capabilities of each testbed (Deliverable 5.4), different categories of testbeds have been identified, leading to different mapping algorithms.
Wireless Testbeds
For the wireless testbeds, it is crucial to provide the experimenter with an appropriate subset of communicating physical resources according to the requested topology. To this end, the implemented greedy heuristic aims at minimizing the average distance between communicating nodes. The placement of the first requested wireless node is random among the nodes that satisfy the criteria set by the user (e.g., reservation period).
An illustrative example of an abstract request for three 802.11 nodes in a full mesh topology, issued by an experimenter, is presented in Figure 15, along with the mapping result produced by the algorithm. In Figure 16 (i), the JSON description of the particular request is presented, which is sent to the broker using the REST API of the mapping sub-module. Following the mapping process, the broker responds with the JSON description of the bound request (Figure 16 (ii)). It should be noted that topology information is currently not advertised by wireless testbeds, via RSpec advertisements or other means. Hence the implemented approach maps the requested nodes to a subset of wireless nodes within proximity of each other, to ensure connectivity. Moreover, a coarse assumption is made that exclusive access is allowed only on wireless resources, in order to be able to differentiate among available resources within the federation. However, there is a need to expose the type of the resource (e.g., either using the RSpec hardware_type field or some more abstract description, e.g. 802.11 b/g/n node, VServer, etc.).
Figure 15 Example of an abstract request and the mapping result produced
Figure 16 (i) Unbound request (left) and (ii) Bound request: Response from the Reservation Broker (right)
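The greedy proximity heuristic described above can be sketched as follows. The node names and (x, y) coordinates are hypothetical, since wireless testbeds do not currently advertise topology information; the sketch assumes positions are known to the broker:

```python
import math
import random

def greedy_wireless_mapping(candidates, k, rng=random):
    """Hypothetical sketch of the greedy wireless mapping heuristic.

    candidates -- dict: node name -> (x, y) position, already filtered by the
                  user's criteria (e.g. reservation period)
    k          -- number of wireless nodes requested (full mesh assumed)
    Returns k node names chosen to keep communicating nodes close together.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    pool = dict(candidates)
    first = rng.choice(sorted(pool))        # first placement is random
    chosen = [first]
    del pool[first]
    while len(chosen) < k:
        # pick the candidate minimising the average distance to chosen nodes
        best = min(pool, key=lambda n: sum(dist(pool[n], candidates[c])
                                           for c in chosen) / len(chosen))
        chosen.append(best)
        del pool[best]
    return chosen
```

With a tight cluster of nodes plus one far-away outlier, the heuristic keeps the selection inside the cluster once the first node lands there.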
Wired Testbeds
Regarding the wired (PlanetLab-like) testbeds, which are essentially deployment platforms for distributed systems as overlay networks over the “real” Internet, a node mapping algorithm is used that greedily assigns each requested node to the least loaded physical one that matches the requested criteria (i.e., location, etc.). The load balancing technique takes into consideration the average utilization in terms of CPU and memory over a time window, based on [19]. An illustrative example is provided in the following figure. The JSON description of an unbound request for three virtual machines is presented in Figure 18, and this is sent to the broker using the REST API of the mapping
sub‐module. Following the mapping process, the broker responds with the JSON description of the bound request (Figure 17).
Figure 18 Unbound request
4.3 Reservation Plugin
In order to provide experimenters with the capability of submitting unbound requests for resources, the "Scheduler" or "Reservation Plugin", integrated with the Fed4FIRE portal, is being extended. Through this plugin, an experimenter is able to express and submit unbound requests for abstract topologies on a desired date/time (for exclusively reservable resources), such as "I want two VMs" (instant reservation) or “I want three 802.11b/g nodes from any testbed, on June 15th 2015 from 15:00 to 19:00”.
In order to visualize the requested resources, as well as the possible connections (communication links) between them, the jsPlumb JavaScript toolkit2 has been used, along with the jQuery JavaScript library3.
2 https://jsplumbtoolkit.com/apidoc
Figure 17: Bound request: Response from the Reservation Broker
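A sketch of this greedy least-loaded mapping follows; the load model, the 0.1 per-VM load increment and the request/node dictionaries are illustrative assumptions rather than the actual broker code:

```python
def least_loaded_mapping(requests, nodes, window=5):
    """Hypothetical sketch of the greedy PlanetLab-style node mapping.

    requests -- list of constraint dicts, e.g. [{'location': 'fr'}, ...]
    nodes    -- dict: node name -> {'location': ..., 'cpu': [...], 'mem': [...]}
                where 'cpu'/'mem' hold recent utilisation samples in [0, 1]
    Each request goes to the least loaded matching node, where load is the
    average CPU/memory utilisation over the last `window` samples.
    """
    def load(n):
        cpu = nodes[n]['cpu'][-window:]
        mem = nodes[n]['mem'][-window:]
        return (sum(cpu) / len(cpu) + sum(mem) / len(mem)) / 2

    mapping, assigned = {}, {n: 0.0 for n in nodes}
    for i, req in enumerate(requests):
        matches = [n for n in nodes
                   if all(nodes[n].get(k) == v for k, v in req.items())]
        if not matches:
            raise ValueError('no node matches request %d' % i)
        # greedy choice: monitored load plus load already assigned in this run
        best = min(matches, key=lambda n: load(n) + assigned[n])
        mapping[i] = best
        assigned[best] += 0.1          # assumed per-VM load increment
    return mapping
```

The per-run `assigned` term spreads multiple VMs of one request batch instead of piling them all onto the single least-loaded node.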
The DHTMLX JavaScript library has been adopted to implement the calendar4. A screenshot of the Reservation plugin, for an unbound wireless full mesh topology request (hence no links provided), from March 2nd 2015 20:26 to March 3rd 2015 20:26, is depicted in Figure 19. The request has been successfully submitted to the Reservation Broker and the response (mapping solution) is visible to the experimenter on mouse-over (e.g., Node-1 has been mapped to omf.nitos.node005). The corresponding bound request is sent immediately to the SFA gateway for actual provisioning.
Figure 19: Reservation Plugin: Unbound Request for Wireless Resources
In the same manner, a screenshot of the Reservation plugin for a requested wired topology from any testbed, from March 2nd 2015 20:26 to March 3rd 2015 20:26, is depicted in Figure 20.
3 http://jquery.com/
4 http://dhtmlx.com/docs/products/dhtmlxCalendar/
Figure 20: Reservation Plugin: Unbound Request for a Virtual Topology
5 Development of resource provisioning (Task 5.4)
During cycle 2, the testbeds in the project (both the project’s original testbeds and the new ones incorporated through the Open Calls) continued the deployment of the SFA interfaces to perform the provisioning of resources. As stated in [1], there are many implementations of such interfaces, so it was left to each testbed to choose the most convenient one. During this cycle, some of the testbeds that already had an SFA deployment changed it to a different implementation or upgraded it to a new version. Meanwhile, the testbeds newly incorporated through the Open Calls took advantage of the experience and advice of the original testbeds and deployed the latest versions of the most convenient SFA implementations.
Table 6 shows a summary of the SFA implementation adopted by each testbed during this cycle. In some cases, several APIs are deployed in the same testbed. This is because in some of the testbeds the AMs are able to manage different APIs, or because they are testing a new one and keep the old one available until the migration is done.
Table 6 SFA Implementation on different testbeds
Testbed SFA implementation
PLE V3 SFA Wrap
VirtualWall V3 Emulab
w‐iLab.t V3 Emulab
OFELIA V2 GENI, V3 GENI (testing)
NITOS V2
NetMode V2
BonFIRE V3 GENI (in prog.)
FUSECO V3 GENI
Koren V3
Norbit In progress
PerformLTE V2, V3 (testing) GENI
C‐Lab V3 SFAWrap
UltraFlow V3 Emulab
SmartSantander V2 SFAWrap
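Which AM API version a testbed speaks can be discovered at run time through the SFA/GENI GetVersion call. A minimal sketch follows; the SSL/XML-RPC plumbing and the result-unwrapping convention follow the GENI AM API, while the URL and credential paths are placeholders:

```python
import ssl
import xmlrpc.client

def parse_getversion(info):
    """Extract the primary API version and any alternate versions advertised.
    Newer AM API versions wrap the payload in a 'value' field."""
    value = info.get('value', info)
    return value.get('geni_api'), sorted(value.get('geni_api_versions', {}))

def am_api_versions(am_url, cert_file, key_file):
    """Ask an Aggregate Manager which AM API versions it supports."""
    ctx = ssl.create_default_context()
    ctx.load_cert_chain(cert_file, key_file)   # experimenter certificate + key
    am = xmlrpc.client.ServerProxy(am_url, context=ctx)
    return parse_getversion(am.GetVersion())
```

This is how a client tool could populate a table like Table 6 automatically instead of relying on static configuration.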
Provisioning of [reserved] resources is inherently coupled with SLA management; however, it also has some dependencies on the OLA side of the F4F federation. We start with an attempt to define resource provisioning as a service, which is then followed by the SLA management specification. The remainder of this section first introduces the SLA front-end tool, which is the SLA Graphical User Interface developed as a plugin for the Django-based MySlice Portal. Then, it goes into more detail regarding its different parts: presentation of the SLAs, acceptance of the SLAs and presentation of the SLA evaluations. The explanation is illustrated below by snapshots of the SLA plugin in the Portal.
5.1 Resource provisioning service
Table 7 Resource provisioning service
Basic Information
Service name Resource provisioning
General description Resource provisioning allows the accredited user to instantiate resources from one or several testbeds to deploy his experiment. This can be via a direct request (the user selects specific resources) or via an orchestrated or unbound request (the user defines requirements and the service provides the best-fitting resources).
User of the service (role) The final user of the Resource Provisioning service is the Experimenter, when requesting resources for his experiment.
The Testbed Manager or Administrator indirectly uses the service by configuring the Aggregate Manager and its policies to offer the resources to be provisioned.
The direct Fed4FIRE components that invoke the service are the users’ GUIs (MySlice Portal, jFed) and CLIs (Omni), the reservation broker, policies engines (PDO, SFAWrap, pyPElib), service directory and reputation engine through the appropriate SFA API implementation.
Service access policies Administrator: no constraints. Experimenter: on valid credentials.
Service management
Service Owner Task Leader: i2CAT
Each testbed owns the resource provisioning service for its resources.
Contact information (internal) [email protected]
Contact information (external) https://portal.fed4fire.eu/
Service status Cycle 1: conception
Cycle 2: service development for some testbeds
Cycle 3: integration
Service Area/Category Resource provisioning.
Resource description.
Resource specification
Infrastructure monitoring
Service agreements Best-match search for the orchestrated provisioning. Orchestrated provisioning requires finding the resources that match experimenters’ requests and provisioning them.
Service usage policies Experimenter: per the offered SLAs or other policies defined in the testbed. Administrator: defines the policies.
Detailed makeup
Core service building blocks For the direct provisioning, testbed directory and resource directory. For orchestrated provisioning, resource discovery and reservation broker.
Additional building blocks SFA Registry, SFA AMs
Service packages This service will offer two modes of resource provisioning (direct and orchestrated); each one can be a different package, although with some common functions.
Dependencies Services dependencies: Resource Discovery, Resource specification and Infrastructure monitoring to know the availability of resources to be provisioned.
Software dependencies: SFA implementation (GENI, SFAWrap, AMsoil or any other compliant), user interface (MySlice Portal, plugin for MySlice, jFed, OMNI, etc.), monitoring of available resources.
Dependency policies Number and type of resources provisioned may vary subject to the policies defined in each testbed.
Technology Risks and competitors
Cost to provide (optional) Cost of implementation of SFA API in the testbed.
Funding source (optional) EU
Pricing (optional) Free of charge
Pricing policies Utility based
Value to customer (optional) Cost efficient and configurable monitoring system
Risks The resource provisioning relies on the correct authentication of the users in order to provide resources. If this authentication is bypassed, resources could be provisioned to unauthorized users.
Availability of resources must be monitored correctly, otherwise users can be granted access to resources already provisioned to other experiments.
For the provider, the risk is a loss of revenue since unauthorized
users may consume resources; for the user, the risk is that his experiment is potentially disclosed to unauthorized users.
Competitors There are specific solutions for resource provisioning in other testbed-based projects. Most of those solutions are part of testbeds that will be part of the Fed4FIRE federation, so the potential risk from competitors is reduced as they are assimilated by this solution.
5.2 SLA Management
During cycle 2, the SLA functionality has been developed to provide users with the ability to visualize and accept the SLAs offered by the testbeds when requesting resources, by means of a plugin for the MySlice portal. This plugin requires interaction with two other components for the correct management of the SLAs: the SLA Management Tool and the SLA Collector.
The SLA Management Tool has to be placed in the testbeds that want to offer SLAs for their resources. The SLA Management Tool allows the testbed manager to create, evaluate and eliminate SLAs through an API. During cycle 2, the SLA Management Tool has been improved by integrating it with the monitoring system. With this integration, the SLA Management Tool can retrieve the metrics required for the SLA evaluation from the testbed's local OML database or from the central one of the federation. During cycle 2, this module has been deployed in four testbeds: the ones at iMinds, FUSECO from the Fraunhofer Institute and NETMODE from NTUA.
The SLA Collector acts as a centralized proxy, allowing the Federation Portal to communicate with the SLA Management Tools of the different testbeds to retrieve the information on the SLAs and their violations for a testbed or a given slice. The communication is done through a REST API. The SLA Collector will be integrated with the Reputation Service in the future, so that it will be able to obtain the needed information when required. At the time of writing this document, the SLA Collector is deployed at iMinds’ facilities.
The SLA Plugin allows the user to visualize and choose between SLAs offered by the testbeds which provide this information. This extension for the MySlice portal also allows visualizing the status of the accepted SLAs of a given slice.
5.2.1 Showing SLA information of testbeds supporting SLA
Despite the increase in the number of testbeds that support SLAs during cycle 2, the number is still small. So, instead of modifying the structure returned by the SFA GetVersion call to include information on SLA support, it has been decided to move that functionality to the SLA Collector. Therefore, any client tool of the federation can obtain the list of testbeds that support SLAs by querying the SLA Collector through its API. More details on the API calls can be found in deliverable D7.5, Report on second cycle developments regarding trustworthiness [11].
The SLA plugin on the portal uses an API call to get the testbeds that support SLA. Then, when an experimenter selects a number of resources from any of those testbeds, it shows the corresponding SLA offerings.
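A client-side sketch of this lookup follows; the endpoint path and the response shapes are assumptions (the actual REST calls are specified in D7.5):

```python
import json
from urllib.request import urlopen

def parse_testbeds(body):
    """Extract testbed names; tolerate a bare JSON list or a wrapped object."""
    data = json.loads(body)
    items = data.get('testbeds', data) if isinstance(data, dict) else data
    return [t['name'] if isinstance(t, dict) else t for t in items]

def sla_enabled_testbeds(collector_url):
    """Ask the SLA Collector which testbeds offer SLAs. The '/testbeds'
    path is a placeholder; see D7.5 for the actual API calls."""
    with urlopen(collector_url.rstrip('/') + '/testbeds') as resp:
        return parse_testbeds(resp.read().decode())
```

Keeping the parsing separate from the HTTP call makes the client tolerant of either response shape and easy to test offline.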
Due to the required changes on the SLA architecture defined as part of cycle 3, the information about which testbeds support SLAs has not been placed yet in the Fed4FIRE documentation centre. This will be done once the SLA tools are updated to the architecture of cycle 3.
5.2.2 Acceptance of the agreements
Figure 21 shows the user interface for accepting SLAs on the Fed4FIRE portal. The experimenter can select any number of resources from one or more testbeds. If any of those testbeds supports SLAs, the description of the type of offered SLA will appear under the “SLA offers” tab of the slice view. This SLA will cover the selected resources under that testbed, having one SLA per testbed per reservation.
The “SLA offers” tab indicates which testbeds, of those whose resources have been selected, offer an SLA, allowing the experimenter to visualize the detailed description of the SLA. If the experimenter then agrees with the offered SLA, he/she clicks on the “Accept” button and a green alert box will appear to indicate that the SLA has been accepted.
Afterwards, the experimenter can click on the “Apply” button to perform the actual resource reservation and, once it has been confirmed by the Aggregate Manager (AM) that the reservation has been successfully done in each testbed, the corresponding SLA will be created on each of those testbeds.
Figure 21. Acceptance of an SLA agreement
5.2.3 Viewing SLA agreements
The experimenter will be able to see the accepted SLA agreements under the “SLA status” tab, as shown in Figure 22. This tab shows a table containing the SLA provider, the agreement identifier, the creation date, the status and the evaluation result.
Figure 22. SLA agreements visualization table
The user can then click on the agreement identifier to see the complete details of each SLA. A dialog will appear showing information about: the agreement identifier, the provider, the experimenter that accepted the SLA, the kind of service that is being guaranteed, the testbed to which the SLA applies (in case a provider has several testbeds, e.g. iMinds), the SLA expiration date and the resources that are covered under the SLA.
Figure 23. SLA details view
5.2.4 Viewing SLA Evaluation
The SLA evaluation is shown as well in the previous table under the “SLA status” tab. SLAs can be in three different states:
Provisioned: the SLA has been accepted and created but the evaluation has not started yet.
Evaluating: the SLA is being evaluated.
Finished: the SLA evaluation has finished. At this point, the final evaluation result will be shown under the “Result” column.
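The life cycle above can be summarised as a small state machine; the state names follow the text, while the transition-trigger names are assumptions for illustration:

```python
# Minimal sketch of the SLA life cycle shown in the "SLA status" table.
SLA_TRANSITIONS = {
    ('provisioned', 'start_evaluation'): 'evaluating',
    ('evaluating', 'evaluation_done'):   'finished',
}

class SlaAgreement:
    def __init__(self, agreement_id):
        self.agreement_id = agreement_id
        self.state = 'provisioned'   # accepted and created, not yet evaluated
        self.result = None           # filled in once state == 'finished'

    def advance(self, event, result=None):
        nxt = SLA_TRANSITIONS.get((self.state, event))
        if nxt is None:
            raise ValueError('event %r not valid in state %r'
                             % (event, self.state))
        self.state = nxt
        if nxt == 'finished':
            self.result = result     # e.g. 'fulfilled' or 'violated'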
This table allows ordering by the different columns and provides a search box to filter the SLAs by any field.
The “Result” column on the right will show the final evaluation result of each SLA once it has finished. A green colour indicates that the SLA has been fulfilled, whereas a red colour means that a violation has occurred.
6 Development of experiment control (Task 5.5)
In Fed4FIRE cycle 2, the developments for experiment control followed three alternatives – jFed, NEPI and LabWiki – reported below in this order.
6.1 Standalone tool – jFed
In the second cycle of Fed4FIRE, there were 3 major jFed releases (5.3.0, 5.4.0 and 5.5.0, see http://jfed.iminds.be/release_notes/ ) and some intermediate bugfix releases.
jFed’s current interface is depicted in Figure 24.
Figure 24 jFed Interface
The major new features are listed below:
We now have 100% coverage of the testbeds which are operational according to Fed4FIRE standards:
o BonFIRE (still with the native BonFIRE API; the AM API in BonFIRE is planned for cycle 3)
o Univ. of Bristol VTAM (virtual machines) and OpenFlow
o C-Lab
o ExoGENI NICTA
o FuSeCo
o i2CAT VTAM (virtual machines) and OpenFlow
o Koren OpenFlow
o Netmode
o Nitos
o PlanetLab Europe
o Stanford Optical
o UC3M Optical
o Virtual Wall 1 and 2
o w-iLab.t
Besides these Fed4FIRE testbeds, the following testbeds are also fully supported:
o GENI InstaGENI
o ProtoGENI testbeds (Utah Emulab, Kreonet Emulab)
o GENI ExoGENI (including ExoGENI Univ. of Amsterdam)
The AMv3 API is now the preferred API if a testbed implements both AMv2 and AMv3
Extra functionality for specific testbeds:
o XEN VM properties (see Figure 25):
Figure 25 jFed: XEN VM Properties
o Editing interface IPs (see Figure 26):
Figure 26 jFed: editing interface IP numbers
In jFed 5.4.0, we also changed from Oracle Java 7 to Java 8
The CLI (command line version) has become production quality, so that integration with other frameworks (e.g. the FORGE project) is easier. The difference with existing command line tools like Omni is that the jFed CLI performs the whole experiment life cycle with a single command (whereas e.g. Omni needs several calls and more closely resembles the jFed Probe CLI). See http://doc.ilabt.iminds.be/jfed-documentation/cli.html for the full documentation; a simple example is:
java -jar experimenter-cli.jar create -s slice1 -S myproject --create-slice -p mypem.pem -P mypass --rspec 1node.rspec --expiration-hours 1
Extensive map support was added:
o overview of testbed locations (see Figure 27)
Figure 27 jFed: map support
o Overview of node locations in individual testbeds for PlanetLab, Nitos, Netmode, w-iLab.t and C-Lab; see the tutorials and screenshots at http://doc.fed4fire.eu/tutorials.html. For Nitos and Netmode, for example, this looks as depicted in Figure 28 and Figure 29.
Figure 28 jFed: overview of nodes (Part 1)
Figure 29 jFed: Overview of nodes (Part 2)
Supports future reservations and channel reservations at the Nitos and Netmode testbeds
Supports a more accurate list of available nodes for PlanetLab Europe (Figure 30) by using monitoring information (see http://doc.fed4fire.eu/tutorial_ple.html)
Figure 30 jFed: PLE topology support
Properties of jFed’s Manifest View
In the manifest view, under 'Show info' when you right-click on a link or a node, you now have detailed info about the interfaces, links, IPs and MAC addresses, so it is easier to identify things, as shown in Figure 31 and Figure 32.
Figure 31 jFed: detailed interface information
Figure 32 jFed: detailed link information
o For Emulab/ProtoGENI testbeds (Virtual Wall, w-iLab.t, UC3M and Stanford Optical), it is now also possible to list and manage (delete) the images you have created yourself, and it is easier to select images on other testbeds to import them, as shown in Figure 33:
Figure 33 jFed: management of images
o It is now possible to load manifest RSpecs and convert them to request RSpecs (e.g. if you want to re‐use the same nodes that were automatically selected by the testbed before)
o Improvements on the SSH terminal:
o it is possible to type in the login name yourself
o better automatic SSH terminal selection on Linux
o The stitching workflow has been completely rewritten to first perform the Allocate calls on all relevant AMs, and only then the Provision calls. It is also much more robust now.
o Besides these more obvious changes, in total about 400 tickets covering bugs and improvements were closed for these releases in cycle 2.
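The reworked allocate-then-provision ordering can be sketched as follows; the `ams` client objects and their Allocate/Provision/Delete methods are assumed stand-ins for real AM API calls, not jFed's actual code:

```python
def stitched_provision(ams, rspecs):
    """Sketch of a two-phase stitching workflow: Allocate on every relevant
    AM first, then Provision, so a failed allocation on one testbed can be
    rolled back before anything is provisioned.

    ams    -- dict: AM name -> client object with Allocate/Provision/Delete
    rspecs -- dict: AM name -> request RSpec for that AM
    """
    allocated = []
    try:
        for name, am in ams.items():        # phase 1: allocate everywhere
            am.Allocate(rspecs[name])
            allocated.append(name)
        for name in allocated:              # phase 2: provision everywhere
            ams[name].Provision()
    except Exception:
        for name in allocated:              # roll back partial allocations
            ams[name].Delete()
        raise
```

Separating the two phases is what makes the workflow robust: no testbed starts provisioning until every AM in the stitched path has accepted its allocation.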
For more detailed documentation and overview of features, see http://jfed.iminds.be and http://doc.fed4fire.eu (especially the tutorial section)
6.2 NEPI
As part of cycle 2, NEPI has evolved and is now released as version 3.2:
Initial release date: March 2015
Release process: starting with 3.2, NEPI is moving to a more agile development process, with more frequent releases
Packaging and installation: NEPI is now published through Python’s global software distribution framework, the Python Package Index (pypi for short, the Python equivalent of Ruby’s gems system). As a result, NEPI can now be installed and updated using a
simple invocation of the pip command, which greatly improves availability for users. NEPI’s package description is available5.
Release 3.2 has brought the following enhancements and improvements.
6.2.1 Core
The core of NEPI has proved to be very stable and is now deemed essentially complete and robust as of 3.2. Very few changes have been made to it as part of the 3.2 release.
6.2.2 Drivers
The drivers (NEPI’s notion of a plugin) that allow interaction with the various technologies have likewise seen a rather low volume of changes. In particular, the FRCP driver, which is key to Fed4FIRE since this technology was chosen as the common denominator for all testbeds, can again be considered stable and has seen only bug fixes as part of the 3.2 release. Two areas have been subject to more intense reworking and additions; these are discussed next.
The wireless driver
As part of the feedback from various experiments carried out involving wireless nodes, some improvements have been requested by users, and these have been delivered in version 3.2. Among these is the ability to manage a wireless channel as a logical entity of its own, as opposed to having to issue low-level, driver-dependent commands through more traditional channels (typically issuing iwconfig commands through ssh).
Related (although loosely) to these improvements, the way ssh interaction is carried out has been reworked. This applies in the case of a middle-man ssh proxy (required in the case of OMF-based wireless testbeds), where end nodes have no direct connection to the outside and need to be addressed through the OMF gateway.
The ns‐3 driver
Among the experimental facilities supported by NEPI is ns-3, the network simulator, together with its companion DCE. These allow the creation of hybrid experiments where simulated and emulated components coexist and can interact. Changes have been needed on the NEPI side in order to cope with recent changes in the DCE layer. In the process, the ns-3 driver itself (historically one of the oldest in the NEPI codebase) has undergone an overall rework, in order to benefit from the lessons learned while developing the whole range of other drivers.
6.2.3 Documentation and Dissemination
A particular effort has been devoted to improving the content targeting new users on the NEPI web site, in particular with more examples and tutorials.
6.3 LabWiki for experiment control
Based on the OMF control framework and the OML measurement library, NICTA has for some time been developing a graphical front-end called LabWiki (http://labwiki.mytestbed.net). The idea
5 https://pypi.python.org/pypi/nepi/
behind the tool is to keep a lab book with a history of your experiments: the setup (‘plan’), the script (‘prepare’) of the experiment and the results (‘execute’).
Figure 34 LabWiki Overview
Since the Fed4FIRE-GENI summit (FGRE, July 2014), Fed4FIRE has adopted LabWiki as an experiment control tool. The tool is mostly developed outside of Fed4FIRE (at NICTA and by US GENI partners), but has now become mature enough for us to adopt it for experimenters.
In the screenshots below (Figure 35 and Figure 36), you can easily see the three parts:
On the left, there is a kind of wiki where you can define the setup of the experiment.
In the middle, you define your script for the experiment (based on OEDL, which is used in OMF as the experiment scripting language).
On the right, you can define the parameters for an experiment run; during and after the run, you see graphs and logfiles (see the second screenshot). These graphs can then be copied to the left, to keep notes.
Figure 35 LabWiki Interface (Part 1)
Figure 36 LabWiki Interface (Part 2)
This tool is continuously being developed and is also targeted at class exercises. One of the issues is the scalability of the tool, as it runs on a central server and has to send all OMF commands and collect all measurement information through OML, for all simultaneous users.
The resources (VMs, network, etc.) have to be provisioned outside of the tool, e.g. with the jFed tool (as you can see in the screenshots, you can then easily copy topology setups into the lab book on the left).
A nice aspect of the tool is that the OEDL script can of course also be used with the OMF command line tool, without using LabWiki. So it builds on existing, mature tools instead of developing a completely new one.
Fed4FIRE users can use the tool at http://labwiki.test.atlantis.ugent.be:4000
The tutorial that was used at FGRE 2014 can be found here:
http://groups.geni.net/geni/wiki/GENIFIRE/Labwiki
http://groups.geni.net/geni/wiki/GENIFIRE/Labwiki/Part1/Introduction
http://groups.geni.net/geni/wiki/GENIFIRE/Labwiki/Part2a/Introduction
7 Development of the user interface/portal (Task 5.6)
7.1 Cycle 2 Development of the Portal
7.1.1 Projects management
The Fed4FIRE portal implements a new feature that allows users to create as many slices as they want, without approval, within a project. The administrators’ approval is only necessary to create such a project. According to users’ feedback, e.g. during tutorials, this feature eases the use of the portal.
The workflow for projects management is given in Figure 37:
Figure 37: Project and slice creation workflow using Fed4Fire Portal
For more information about the user interface, please have a look at D5.4.
A user can then reserve resources, using his/her slice credential, from the Resources view, as demonstrated in Figure 38.
Figure 38: Resources view in Fed4Fire Portal
The corresponding workflow for reserving resources using the portal is given in Figure 39.
Figure 39: Reservation of resources using Fed4Fire Portal
7.1.2 Plugins
The Reservation Plugin is described in section 4.3 Reservation Plugin of this deliverable. The SLA Plugin is described in section 3.3.2 SLA Management of this deliverable.
The jFed plugin is described in section 3.4.1 Standalone tool – jFed of this deliverable.
8 Conclusions
This deliverable describes the developments of experiment lifecycle management performed within the 2nd cycle of the Fed4FIRE project. The main highlights are as follows:
Regarding the semantic description of the resources within the federation, the development followed the discussions that took place within the project, within the larger GENI-FIRE community hosted by the Open MultiNet platform, and within the W3C ontology community. Enhancements were made for the discovery of application-level services and in response to the specific needs of WP6 on measurement and monitoring.
During the second cycle of development of the reservation broker, efforts focused on the modifications necessary to create a central instance of the service. The service is now tightly coupled with MySlice/Manifold, and the corresponding data model was therefore adopted in cycle 2. Existing approaches were adapted to deliver the mapping sub-module in the Fed4FIRE environment, and the exploitation of semantic-web technologies (an OWL-based information model) for request partitioning and resource mapping was investigated.
For SLA management, coupled to resource provisioning, a first development of the SLA front-end tool was reported, together with its integration into the MySlice portal.
For the resource control service, the three recent releases of the jFed tool were described, along with the latest release of the NEPI tool. The LabWiki platform, recently adopted by the project, is yet another alternative for experiment control that is now available to Fed4FIRE experimenters.
Finally, the Fed4FIRE portal proved to be a robust service component, which was expanded in cycle 2 with three new capabilities: SLA management, resource reservation and jFed-based experiment control.
References
[1] Fed4FIRE (2014). Report on First Cycle Development (Deliverable 5.3). EU: Fed4FIRE Consortium.
[2] Fed4FIRE (2013). Meeting minutes summary, GENI-FIRE workshop (14–15/10/2013), Leuven, Belgium. Online at http://www.fed4fire.eu/fileadmin/documents/news/2013‐10‐summaryGeniFireWorkshop.pdf
[3] M. Giatili, C. Papagianni, S. Papavassiliou, “Semantic Aware Resource Mapping for Future Internet Experimental Facilities”, IEEE CAMAD, Athens, 2014
[4] Fed4FIRE (2014). Detailed Specifications regarding experiment workflow tools and lifecycle management for the second cycle (Deliverable 5.2). EU: Fed4FIRE Consortium.
[5] Fed4FIRE (2014). Second Federation Architecture (Deliverable 2.4). EU: Fed4FIRE Consortium.
[6] Willner, A., & Magedanz, T. (2014). FIRMA: A Future Internet Resource Management Architecture. In Workshop on Federated Future Internet and Distributed Cloud Testbeds (FIDC). Karlskrona, Sweden: IEEE Xplore Digital Library. doi:10.1109/ITC.2014.6932981
[7] Fed4FIRE (2014) Meeting minutes summary, Amsterdam Ontology workshop (10‐11/11/2014), on‐line at https://docs.google.com/document/d/1pcsEq8asTTpu_tBkVnN3ZgwTr7eCWf6gS‐BSfB9qXgI/edit?pli=1#
[8] Delphi method. (2014, February 24). In Wikipedia, The Free Encyclopedia. Retrieved 12:08, February 28, 2014, from http://en.wikipedia.org/w/index.php?title=Delphi_method&oldid=596934962
[9] FitSM ‐ Standard for lightweight service management in federated IT infrastructures, on‐line resource of EU FedSM project. Retrieved 11:41 February 27, 2014 from http://www.fedsm.eu/fitsm
[10] Documentation of the testbeds federated in Fed4FIRE: http://doc.fed4fire.eu/testbeds.html
[11] S. Taylor, T. Leonard, M. Boniface, G. Androulidakis, L. Baron, M. Ott “D7.5: Detailed specifications regarding trustworthiness for the second cycle.” Deliverable of the FP7 Fed4FIRE project, February 2014.
[12] I. Houidi, W. Louati, W. B. Ameur, D. Zeghlache, “Virtual Network Provisioning across multiple substrate networks”, Elsevier Computer Networks, vol. 55, no. 2, pp. 1011–1023, 2011.
[13] A. Leivadeas, C. Papagianni, S. Papavassiliou, “Efficient Resource Mapping Framework over Networked Clouds via Iterated Local Search based Request Partitioning”, IEEE Trans. on Parallel and Distributed Systems, vol. 24, no. 6, pp. 1077–1086, June 2013.
[14] Y. Xin, I. Baldine, A. Mandal, C. Heermann, J. Chase, A. Yumerefendi, “Embedding Virtual Topologies in Networked Clouds”, CFI ’11, June 13–15, 2011, Seoul, Korea.
[15] F. Zaheer, J. Xiao, R. Boutaba, “Multi-provider service negotiation and contracting in network virtualization”, IEEE Network Operations and Management Symposium (NOMS), pp. 471–478, Osaka, June 2010; M. Gendreau, J.-Y. Potvin (Eds.), “Handbook of Metaheuristics”, 2nd Edition, Springer, 2010.
[16] D. P. Bertsekas, “Network Optimization: Continuous and Discrete Models”, Athena Scientific, 1998.
[17] E. Aarts, J. K. Lenstra (Eds.), “Local Search in Combinatorial Optimization”, Princeton University Press, 2003.
[18] L. Tao, C. Zhao, K. Thulasiraman, M. N. S. Swamy, “Simulated Annealing and Tabu Search Algorithms for Multiway Graph Partitioning”, Journal of Circuits, Systems, and Computers, vol. 2, no. 2, pp. 159–185, 1992.
[19] M. Yu, Y. Yi, J. Rexford, M. Chiang, “Rethinking virtual network embedding: substrate support for path splitting and migration”, ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 17–29, Apr. 2008.
[20] Fed4FIRE (2015). Report on second cycle development regarding measuring and monitoring (Deliverable 6.5). EU: Fed4FIRE Consortium.
9 Appendix: “Partitioning Costs Definition”
A user submits a request query for a set of resources. The query is then delivered to the mapping sub-module of the Reservation Broker, where every requested resource is classified into a Virtual Resource Set (VRS) and characterized by the resource type (e.g. type of node), a number of functional attributes (e.g. OS image), the duration of the experiment (valid from -> valid until) and so on:

$VRS_j = (t_j, A_j, d_j)$

where $t_j$ is the requested resource type, $A_j$ the set of requested functional attributes and $d_j$ the requested time interval.
The type of resource can be a node or a link, depending on the available testbeds involved in Fed4FIRE. The user must also specify the requirements on the functional attributes. Examples of functional attributes are the type of node (e.g. wireless node, virtual machine, switch, sensor), the communication protocol used (e.g. IEEE 802.11a/b/g/n), the virtualization environment (VMware, Xen, etc.), the OS (Windows, Linux, etc.) and the type of reservation (e.g. exclusive or shared).
The mapping sub-module identifies, in every testbed $i \in T$, the set of candidate substrate resources that match the requested attributes of $VRS_j$. This matching set is denoted as follows:

$M_{i,j} = \{\, r \in R_i : t(r) = t_j,\; A_j \subseteq a(r) \,\}, \quad \forall i \in T$

where $R_i$ is the set of substrate resources of testbed $i$, $t(r)$ the type of resource $r$ and $a(r)$ its set of functional attributes.
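The matching step can be illustrated with a minimal sketch. The dictionary-based resource description, the attribute names and the inventory below are purely hypothetical, not part of any Fed4FIRE API:

```python
# Illustrative sketch of the matching step: for one testbed, collect the
# substrate resources whose type and functional attributes satisfy a
# requested VRS. Resource and attribute names are hypothetical.

def matching_set(testbed_resources, vrs_type, vrs_attrs):
    """Return the candidate set M_ij of one testbed for one VRS."""
    return [
        r for r in testbed_resources
        if r["type"] == vrs_type
        and all(r["attrs"].get(k) == v for k, v in vrs_attrs.items())
    ]

# Example inventory for one testbed
resources = [
    {"type": "node", "attrs": {"os": "linux", "virt": "xen"}},
    {"type": "node", "attrs": {"os": "windows", "virt": "vmware"}},
    {"type": "node", "attrs": {"os": "linux", "virt": "xen"}},
]

m = matching_set(resources, "node", {"os": "linux", "virt": "xen"})
print(len(m))  # → 2 candidates match the requested type and attributes
```

In practice the matching is performed against the semantic resource descriptions discussed earlier in this deliverable; the exact-equality test above is only a placeholder for that richer comparison.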
The provisioning cost per testbed for every requested resource specification is determined by the matching set $M_{i,j}$. Specifically, it is based on the scarcity of the resources supporting the requested attributes in each testbed, given by:

$C^{s}_{i,j} = \dfrac{1}{|M_{i,j}|}$
In addition, the utilization $U_{i,j}$ of the resources is also taken into account. The definition of the utilization differs according to the access policy of the testbed. In case of exclusive access, the utilization is defined according to the number of resources that are reserved for the requested time interval:

$U_{i,j} = \dfrac{|Res_{i,j}|}{|M_{i,j}|}$

where $Res_{i,j} \subseteq M_{i,j}$ is the subset of candidate resources already reserved during $d_j$.
In case of shared access, the utilization is defined via the utilization of the non-functional attributes ($b \in B$, where $b$ can be the CPU, memory, disk space, etc.) of the candidate substrate resources over the time interval requested by the user:
$U_{i,j} = \dfrac{1}{|M_{i,j}|} \sum_{r \in M_{i,j}} \max_{b \in B} u_b(r)$

where $u_b(r)$ denotes the utilization of non-functional attribute $b$ of resource $r$ during $d_j$.
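The two utilization definitions above can be sketched as follows. The aggregation for shared access (average over candidates of the maximum per-attribute utilization) is an assumed form, and all input values are illustrative:

```python
# Hedged sketch of the two utilization definitions: exclusive access
# (fraction of candidates already reserved) and shared access
# (aggregated non-functional-attribute utilization). Values are illustrative.

def utilization_exclusive(reserved, candidates):
    """Fraction of candidate resources already reserved in the interval."""
    return reserved / candidates

def utilization_shared(per_resource_attr_utils):
    """Average over candidates of the maximum non-functional-attribute
    utilization (CPU, memory, disk, ...); an assumed aggregation."""
    return sum(max(u.values()) for u in per_resource_attr_utils) / len(
        per_resource_attr_utils)

print(utilization_exclusive(3, 4))  # → 0.75: 3 of 4 candidates reserved
print(utilization_shared([{"cpu": 0.6, "mem": 0.2},
                          {"cpu": 0.1, "mem": 0.3}]))  # ≈ 0.45
```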
Taking all the above into consideration, the partitioning cost per testbed for $VRS_j$ is given by:

$C_{i,j} = \alpha\, C^{s}_{i,j} + (1-\alpha)\, U_{i,j}, \quad \forall i \in T$

where $\alpha \in [0,1]$ weights scarcity against utilization.
In case the solution provided by the mapping sub-module includes nodes that belong to different testbeds, the cost for inter-testbed provisioning between testbeds $i$ and $i'$ is estimated as follows:

$C_{i,i'} = p, \quad \forall i, i' \in T,\; i \neq i'$

where $p$ is a penalty value representing the extra cost of using multiple testbeds. The value of $p$ is set in such a way that $p \gg C_{i,j}, \forall i \in T$. Thus the algorithm is guided to allocate nodes on the same testbed.
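The per-testbed cost and the penalty can be sketched as below, assuming a weighted-sum combination of scarcity and utilization (the weighting factor `alpha`, the multiplier used to derive `p` and all input values are illustrative assumptions, not values fixed by the broker):

```python
# Hedged sketch of the partitioning cost per testbed: scarcity cost
# 1/|M_ij|, a utilization term U_ij, an assumed weighting factor alpha,
# and an inter-testbed penalty p chosen much larger than any intra cost.

def scarcity_cost(matching_set_size):
    """C^s_ij = 1 / |M_ij|; an empty matching set means the testbed
    cannot host the VRS at all."""
    return 1.0 / matching_set_size if matching_set_size else float("inf")

def partitioning_cost(matching_set_size, utilization, alpha=0.5):
    """C_ij = alpha * C^s_ij + (1 - alpha) * U_ij (assumed weighted form)."""
    return alpha * scarcity_cost(matching_set_size) + (1 - alpha) * utilization

# Two candidate testbeds for the same VRS (illustrative figures)
intra_costs = [partitioning_cost(4, 0.25), partitioning_cost(10, 0.8)]

# Penalty p set so that p >> C_ij for every intra-testbed cost
p = 100 * max(intra_costs)
print(intra_costs, p)
```

A scarce testbed with low utilization and an abundant but busy one end up with comparable costs here, which is exactly the trade-off the weighting factor controls.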
The cost definition can be extended to also include the provisioning cost of a link (e.g. for a Cloud Computing or SDN testbed), as described in [4].
Cost Function
Taking the above into consideration, the corresponding objective function with respect to the provisioning cost is defined as:

$\min \; \sum_{i \in T} \sum_{j} C_{i,j}\, x_{i,j} \;+\; \sum_{i \in T} \sum_{i' \in T,\, i' \neq i} C_{i,i'}\, y_{i,i'}$

where the binary variable $x_{i,j} = 1$ when requested resource $VRS_j$ is assigned to testbed $i$. Similarly, the binary variable $y_{i,i'}$ is set to 1 when different resources of the same experiment are allocated on different testbeds $i$ and $i'$.
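Evaluating this objective for a candidate assignment can be sketched as follows. The cost table, the testbed/VRS names and the penalty value are illustrative, and the penalty is applied once per pair of distinct testbeds used, as an assumed reading of the inter-testbed term:

```python
# Sketch of evaluating the objective for a candidate assignment: the sum
# of C_ij over assigned (testbed, VRS) pairs, plus the inter-testbed
# penalty for every pair of distinct testbeds used. Values are illustrative.

def objective(costs, assignment, penalty):
    """costs[(i, j)] = C_ij; assignment[j] = testbed chosen for VRS j."""
    intra = sum(costs[(i, j)] for j, i in assignment.items())
    used = set(assignment.values())
    pairs = len(used) * (len(used) - 1) // 2  # pairs of distinct testbeds
    return intra + penalty * pairs

costs = {("tb1", "vrs1"): 0.2, ("tb1", "vrs2"): 0.3,
         ("tb2", "vrs1"): 0.1, ("tb2", "vrs2"): 0.4}

single = objective(costs, {"vrs1": "tb1", "vrs2": "tb1"}, penalty=10.0)
split = objective(costs, {"vrs1": "tb2", "vrs2": "tb1"}, penalty=10.0)
print(single, split)
```

Even though the split assignment has a lower intra-testbed cost, the penalty makes it far more expensive, illustrating how the objective steers the mapper towards a single testbed.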