Common roadmap of FIRE test facilities – Fourth version

Deliverable/Milestone: D3.7

Date: May 31, 2013

Version: 1.0


Editor: Piet Demeester, iMinds

Deliverable nature: Report (R)

Dissemination level (Confidentiality): Public (PU)

Contractual Delivery Date: 31/05/2013

Actual Delivery Date: 31/05/2013

Suggested Readers: Network researchers, system architects, FIA and FI-PPP projects’ stakeholders, EC

Total number of pages: 89

Keywords: future networks, experimental facilities, testing

Authors:

Alex Gluhak, University of Surrey
Anastasius Gavras, Eurescom
Annika Sällström, LTU
Ingrid Moerman, iMinds
Jacques Magen, Dimes
Jérémie Leguay, Thales
Jerker Wilander, Dimes
José M. Hernández-Muñoz, TID
Leandro Navarro, Universitat Politècnica de Catalunya
Luis Muñoz, University of Cantabria
Martin Potts, Martel
Mauro Campanella, GARR Consortium
Mike Boniface, IT Innovation
Panayotis Antoniadis, UPMC
Piet Demeester, iMinds
Serge Fdida, UPMC
Sophia MacKeith, UPMC
Stefan Bouckaert, iMinds
Stefan Fischer, University of Lübeck
Susanna Avéssta, UPMC
Tim Wauters, iMinds
Timo Lahnalampi, Martel
Timur Friedman, UPMC
Vania Conan, Thales
Jose A. Galache, Universidad de Cantabria


ABSTRACT

Future Internet Research and Experimentation (FIRE) facilities offer open and versatile experimentation testbeds to researchers in the field of Future Internet, at both the service and the network level. This final version of the FIRE Roadmap Report provides an overview of the existing and imminent services of the experimental facilities.

FIRE still benefits from the four already completed Call 2 projects: FEDERICA, OneLab2, PII and WISEBED. TEFIS also finished recently, while most of the Call 5 projects (BonFIRE, OFELIA, SmartSantander) will run for a few more months in 2013 and the CREW project extends until 2015. The three Call 7 projects, CONFINE, EXPERIMEDIA and OpenLab, started in autumn 2011 and offer testing resources ranging from virtual switches to cloud services. Most of these projects intend to continue operation even after their official end; to this end, several of them (or individual testbeds thereof) are also looking towards the Call 8 federation project, Fed4FIRE.

This report provides a uniform description of the FIRE facilities and the federation IP, including a short description of the current operational offerings and their use conditions. This report’s objective is to give the potential experimenter an initial insight into the range of experimentation possibilities before contacting the facility directly.


EXECUTIVE SUMMARY

This final version of the FIRE Roadmap Report updates and extends the previous FIRE roadmap reports on the available FIRE facilities. The information previously provided on the Call 2, Call 5 and Call 7 FIRE projects (BonFIRE, CREW, CONFINE, EXPERIMEDIA, FEDERICA, OFELIA, OneLab, OpenLab, PII, SmartSantander, TEFIS and WISEBED) was updated where needed. In addition, the Call 8 federation IP Fed4FIRE was added to the overview.

Similar to the previous FIRE Roadmap Reports, this report provides an overview of the offering of the projects and the availability of their components. Figure 1 shows the timeline of the different projects included in this overview. The arrows linking one project to another indicate that facilities developed (or further developed) in one project are, at least in part, also made available as part of newer projects.

FIGURE 1. TIMELINE OF THE FIRE PROJECTS INCLUDED IN THIS REPORT.

A brief overview of the FIRE providers and their facilities is found below.

Projects also presented in the previous FIRE Roadmap Reports:

BonFIRE (www.bonfire-project.eu) offers a multi-site cloud testbed with heterogeneous cloud resources, including computing, storage and networking resources, for large-scale testing of applications, services and systems.

CONFINE integrates three existing community networks: Guifi.net (Catalonia, Spain), FunkFeuer (Vienna, Austria) and AWMN (Athens, Greece), each in the range of 500 to 20,000 nodes. Experimentation topics include routing extensions for large and heterogeneous networks, interoperability among routing protocols, and QoS for a variety of real-time (e.g. voice) and non-real-time (e.g. media distribution) services in heterogeneous networks.


CREW (www.crew-project.eu) offers an open federated test platform, which facilitates experimentally-driven research on advanced spectrum sensing, cognitive radio and cognitive networking strategies in view of horizontal and vertical spectrum sharing in licensed and unlicensed bands.

EXPERIMEDIA offers researchers a facility for large-scale Future Media Internet experiments. Experiments will be conducted at testbeds offering live events and real-world communities to accelerate the adoption of the Future Media Internet. The testbeds include Schladming Ski Resort, Multi-Sport High Performance Center of Catalonia, Foundation for the Hellenic World, and 3D Innovation Living Lab.

FEDERICA (www.fp7-federica.eu) is a Europe-wide infrastructure based on computers and physical network resources, both capable of virtualization. The facility can create sets of virtual resources according to the user's specifications for topology and systems. The user has full control of the resources in the assigned slice, which can be used (for example) for Future Internet clean-slate architectures, security and distributed protocols, routing protocols and applications.

OFELIA (www.fp7-ofelia.eu) provides an infrastructure enabling network innovations through the OpenFlow protocol: users can “control their part of the network” rather than simply conducting experiments “over” the network.

OneLab (www.onelab.eu) includes PlanetLab/PlanetLab Europe, NITOS (wireless), ETOMIC (high-precision measurements) and Dimes (large-scale measurements). OneLab allows a user to run an experiment on PlanetLab by requesting resources through OMF controllers in a PlanetLab slice. The PlanetLab wireless extensions allow experiments to span PlanetLab and an OMF-controlled wireless testbed. NITOS consists of nodes based on commercial Wi-Fi cards and Linux-based open-source platforms, which are deployed both indoors and outdoors.

OpenLab interconnects and advances the early prototypes from OneLab2 and PII, as well as drawing upon other initiatives' best work, such as the SFA control framework and OpenFlow switching. OpenLab's contribution to the FIRE portfolio includes: PlanetLab Europe (PLE); the NITOS and w-iLab.t wireless testbeds; two IMS telco testbeds that can connect the public PSTN to IP phone services and can explore merged media distribution; an LTE cellular wireless testbed; the ETOMIC high-precision network measurement testbed; the HEN emulation testbed; and the ns-3 simulation environment.

Panlab Infrastructure Implementation (PII) (www.panlab.net) has elaborated the concept of a Panlab Office. The Panlab Office acts as a testbed resource broker: the brokered testbed resources are specialized ICT resources owned by testbed providers and made available to customers for testing and experimentation.

SmartSantander (www.smartsantander.eu) is building a city-scale experimental research facility in support of typical applications and services for a smart city. The results are expected to influence the definition and specification of Future Internet architecture design from the viewpoints of the Internet of Things and the Internet of Services.

This experimental facility will support horizontal and vertical federation with other experimental facilities and stimulate the development of new applications by users of various types, including advanced experimental research on IoT technologies. The facility will enable a realistic assessment of user acceptability.


TEFIS (www.tefisproject.eu) supports the Internet of Services community by providing testbeds that cover the entire service lifecycle, including user behaviour, scale, performance and SLA compliance. It brings together test facilities for IT-based testing with Living Labs. The combination of testbeds offered by TEFIS allows a broad range of service characteristics to be tested, including functionality, performance, scalability, usability, maintainability, user experience/acceptability, and standards compliance.

WISEBED (www.wisebed.eu) offers a federated wireless sensor network testbed that comprises approximately 1000 nodes distributed over 9 European sites and a library for the platform-independent and efficient implementation of algorithms.

In this final update of the FIRE roadmap, the Fed4FIRE project is added:

Fed4FIRE (www.fed4fire.eu) aims to federate existing testbeds and improve the way in which experimentation facilities are made available to experimenter communities. A wide scope of testbeds is considered: fixed and wireless networks, services and applications, and combinations thereof. A common federation framework is being developed, providing simple, efficient and cost-effective tools and methodologies to support experimenters in running high-quality experiments.


TABLE OF CONTENTS

1. INTRODUCTION 10

2. INDIVIDUAL TESTBED OFFERINGS 11

2.1. BONFIRE 11
  2.1.1. OFFERING 11
  2.1.2. USE CASES 13
  2.1.3. USAGE POLICIES 14
  2.1.4. INTERNATIONAL LIAISON 14
  2.1.5. FUTURE 15
2.2. CONFINE 16
  2.2.1. OFFERING 16
  2.2.2. USE CASES 17
  2.2.3. USAGE POLICIES 18
  2.2.4. INTERNATIONAL LIAISON 18
2.3. CREW 19
  2.3.1. OFFERING 19
  2.3.2. USE CASES 21
  2.3.3. USAGE POLICIES 22
  2.3.4. INTERNATIONAL LIAISON 23
2.4. EXPERIMEDIA 24
  2.4.1. OFFERING 24
  2.4.2. USE CASES 26
  2.4.3. USAGE POLICIES 28
  2.4.4. INTERNATIONAL LIAISON 28
  2.4.5. FUTURE 28
2.5. FEDERICA 30
  2.5.1. OFFERING 30
  2.5.2. USE CASES 31
  2.5.3. USAGE POLICIES 31
  2.5.4. INTERNATIONAL LIAISON 32
2.6. OFELIA 33
  2.6.1. OFFERING 33
  2.6.2. USE CASES 35
  2.6.3. USAGE POLICIES 35
  2.6.4. INTERNATIONAL LIAISON 36
2.7. ONELAB 37


  2.7.1. OFFERING 37
  2.7.2. USE CASES 41
  2.7.3. USAGE POLICIES 41
  2.7.4. INTERNATIONAL LIAISON 43
2.8. OPENLAB 44
  2.8.1. OFFERING 44
  2.8.2. USE CASES 52
  2.8.3. USAGE POLICIES 54
  2.8.4. INTERNATIONAL LIAISON 55
2.9. PII 56
  2.9.1. OFFERING 56
  2.9.2. USE CASES 57
  2.9.3. USAGE POLICIES 58
  2.9.4. INTERNATIONAL LIAISON 59
2.10. SMARTSANTANDER 60
  2.10.1. OFFERING 60
  2.10.2. USE CASES 64
  2.10.3. USAGE POLICIES 68
  2.10.4. INTERNATIONAL LIAISON 68
2.11. TEFIS 70
  2.11.1. OFFERING 70
  2.11.2. USE CASES 74
  2.11.3. USAGE POLICIES 75
  2.11.4. INTERNATIONAL LIAISON 76
2.12. WISEBED 77
  2.12.1. OFFERING 77
  2.12.2. USE CASES 78
  2.12.3. USAGE POLICIES 78
  2.12.4. INTERNATIONAL LIAISON 80
2.13. FED4FIRE 81
  2.13.1. OFFERING 81
  2.13.2. USE CASES 86
  2.13.3. USAGE POLICIES 87
  2.13.4. INTERNATIONAL LIAISON 87

3. LIST OF ABBREVIATIONS 88


FIGURES

Figure 1. Timeline of the FIRE projects included in this report  4
Figure 2. BonFIRE testbed facility  12
Figure 3. BonFIRE timeline  13
Figure 4. CONFINE offering  16
Figure 5. CONFINE timeline  17
Figure 6. The CREW federation  19
Figure 7. EXPERIMEDIA offering  24
Figure 8. FEDERICA physical topology  30
Figure 9. Screenshot of the OFELIA framework showing the current connectivity of OpenFlow switches. TUB and CNIT-Catania are not yet connected to the hub switch at iMinds.  34
Figure 10. Map showing PlanetLab nodes in Europe. Click on the image to see an interactive map on the PlanetLab Europe website  37
Figure 11. OneLab Trust and resource discovery (SFA-based) federation  38
Figure 12. OneLab experiment control (OMF) federation  39
Figure 13. OneLab Measurement system federation  40
Figure 14. Bringing an existing user community to a new testbed  52
Figure 15. Applying a tool from one testbed in another environment  53
Figure 16. Experiment repeatability  53
Figure 17. Cross-testbed experiments  54
Figure 17. Platform high-level architecture and building blocks  61
Figure 18. SmartSantander architecture and building blocks  63
Figure 19. Use cases deployment  65
Figure 20. Developed applications: Augmented Reality (left) and Participatory Sensing (right)  66
Figure 22. TEFIS functional architecture  72
Figure 23. WISEBED architecture  77
Figure 24. WISEBED APIs  78
Figure 25. Performing experiments in WISEBED  79
Figure 26. Overview of the testbeds currently belonging to Fed4FIRE  82

TABLES

Table 1. CREW: access policy overview  22
Table 2. EXPERIMEDIA use scenarios  25
Table 3. EXPERIMEDIA Driving experiments  27
Table 4. Overview of the OFELIA islands  33
Table 5. TEFIS service testing objectives  70


1. INTRODUCTION

FIRE STATION is the main venue for discussion and interaction related to the analysis of the FIRE testbed offering, as well as the definition of guidelines for the testbeds' evolution and potential federation.

The goal of this FIRE roadmap is to provide an overview of the offering of the FIRE projects and facilities, in terms of their capabilities, operation and conditions for usage. Since its first version, this work has been delivered as a stand-alone report dedicated to the FIRE offering.

Several of the projects included in this roadmap were not yet defined when the first version of this roadmap was released. In addition, several projects that listed planned developments in earlier roadmap versions have now finalized their offering and are able to provide concrete information on what is available and how it can be used. Those projects that have already ended or will be ending in the near future are in several cases offering (continued) open access to their facilities.

The following section provides an overview of the offering of the different facilities. Where relevant, links are included to online resources containing more detailed information.


2. INDIVIDUAL TESTBED OFFERINGS

2.1. BONFIRE

2.1.1. OFFERING

2.1.1.1. Description of the facility

BonFIRE offers a multi-site cloud testbed with heterogeneous cloud resources, including computing, storage and networking resources, for the large-scale testing of applications, services and systems.

BonFIRE supports the following features:

• Advanced monitoring tools provide access to performance metrics for the physical and virtualized resources and to specific experiment monitoring results.

• Both real and emulated networking resources are available on interconnected testbeds. Controlled networks are available between selected cloud sites.

• In addition to in-depth user documentation, BonFIRE offers support for experimenters, to facilitate the uptake of the facility and increase the quality of the research.

• Access to BonFIRE resources is through the standard Open Cloud Computing Interface (OCCI). The BonFIRE Portal generates OCCI requests for the user, while the "Restfully" [1] client-side library allows scripting of commands (an illustrative sketch follows after this list).

• In total, 580 cores with 1 TB of RAM and 50 TB of storage are available permanently. On-request access to about 2,300 additional multi-core nodes is also possible.

• Single sign-on, root access to the VMs and support for elasticity come as standard.

[1] https://github.com/crohr/restfully
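To give a feel for what scripted OCCI access can look like, the sketch below sends a single HTTP request with OCCI-style headers from Python. It is an illustration only, not the BonFIRE API: the broker URL, credentials and attribute values are hypothetical placeholders, and experimenters would typically use the BonFIRE Portal or the Ruby-based Restfully library referenced above instead.

```python
# Illustrative sketch only: the endpoint, credentials and attribute values below are
# hypothetical placeholders, not the actual BonFIRE broker API.
import requests

BROKER = "https://broker.example-bonfire.eu"   # hypothetical OCCI endpoint
AUTH = ("experimenter", "secret")              # hypothetical credentials

# OCCI over HTTP typically describes a resource with Category / X-OCCI-Attribute headers.
headers = {
    "Content-Type": "text/occi",
    "Category": 'compute; scheme="http://schemas.ogf.org/occi/infrastructure#"; class="kind"',
    "X-OCCI-Attribute": "occi.compute.cores=2, occi.compute.memory=4.0",
}

# Ask the broker to create a compute resource (a VM) for the experiment.
response = requests.post(f"{BROKER}/compute/", headers=headers, auth=AUTH, timeout=30)
response.raise_for_status()
print("Created resource at:", response.headers.get("Location"))
```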


FIGURE 2. BONFIRE TESTBED FACILITY

2.1.1.2. Main components - the timeline for the availability of the facility components

The BonFIRE facility is being constructed in four major cycles, each time mainly providing additional experiment functionality. The first cycle (M1 in June 2010, until M10) aimed at rapid prototyping, while the second cycle (M15) added functionality based on requirements from internal experiments. The third cycle took input from Open Call experiment requirements and incorporates federation with external facilities (such as FEDERICA; M24). In the last cycle (M34), results from the second Open Call experiments and any open access experiments are taken into account to construct the final version of the facility.

Three facility scenarios are supported: one involving an extended multi-site cloud facility with heterogeneous resources, one involving a controlled emulated network cloud and one involving a federated cloud with complex networking. The first two scenarios were available from the first cycle (M10), while the combination of both these scenarios, as well as the third scenario, was launched at Release 3 (M24).

In the second Open Call, BonFIRE decided to incorporate a commercial cloud provider (Wellness Telecom) to explore how commercial software stacks can be incorporated into the facility and to investigate business models for on-demand resources. In addition, as part of the European Enlarged funding, PSNC joined the consortium, creating an additional BonFIRE site for the controlled networking scenario.

In terms of functionality, additional features for experiment description, experiment lifecycle management, virtual machine image management, monitoring, networking, elasticity, data management, security, heterogeneity and scalability will be provided.


FIGURE 3. BONFIRE TIMELINE

2.1.2. USE CASES

Use Cases for experimentation on the BonFIRE facility focus on large-scale testing of applications, services and systems that require scalable and elastic provisioning of heterogeneous and controllable computing, storage and network resources.

For the first project phase, three experimental Use Cases have been provided by the BonFIRE consortium.

• Experiment Use Case 1: Dynamic Service Landscape Orchestration for Internet of Services

The objective of the experiment is to investigate the requirements arising from the composition of services at the different layers of the future Internet ecosystem (cloud composition and services composition). The end goal is to produce performance, deployment, management and life cycle models for a federated enterprise SOA application deployed within a cloud environment.

• Experiment Use Case 2: QoS-Oriented Service Engineering for Federated Clouds

This experiment aims to discover whether it is possible to describe virtualised hardware resources using high-level, application-style benchmarks such as "dwarfs". The intention is that, by describing hardware in terms closer to the computational work of real applications, building predictive models for application QoS becomes easier than making predictions from raw hardware descriptions such as CPU type and clock speed (a minimal illustration follows after these Use Cases). These models can then be used to compare IaaS providers and make better resourcing decisions, amongst other benefits.

• Experiment Use Case 3: Elasticity Requirement for Cloud based Applications

The target of the experiment is to determine the elasticity requirements for cloud based web applications, helping providers to comfortably adhere to SLA levels without excessive over-provisioning of resources.

These experiments were incorporated in BonFIRE from the beginning so as to drive the facility and validate its first implementation. Based on these and on the experience and expertise of the BonFIRE partners, 27 Use Cases have been captured; from these, 101 functional and non-functional requirements were elicited.
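As a purely illustrative aside on the idea behind Experiment Use Case 2, the sketch below fits a simple linear model that predicts application runtime from "dwarf"-style benchmark scores rather than from raw hardware descriptions. All numbers and benchmark scores are invented for illustration; they are not BonFIRE measurements, nor the modelling approach actually used in the experiment.

```python
# Hypothetical illustration: predict application performance from benchmark scores
# instead of raw hardware specifications. All values are invented.
import numpy as np

# Rows = virtualised hosts; columns = scores on two hypothetical "dwarf" benchmarks.
dwarf_scores = np.array([
    [1.0, 0.8],
    [1.4, 1.1],
    [0.7, 0.6],
    [1.2, 1.3],
])
# Observed runtime (seconds) of the real application on each host.
runtimes = np.array([120.0, 90.0, 160.0, 95.0])

# Fit a simple linear model (least squares): runtime ~ scores * w + bias.
X = np.hstack([dwarf_scores, np.ones((len(runtimes), 1))])
w, *_ = np.linalg.lstsq(X, runtimes, rcond=None)

# Predict the runtime on a new IaaS offering from its benchmark scores alone.
new_host = np.array([1.1, 0.9, 1.0])  # two benchmark scores plus the bias term
print("Predicted runtime (s):", float(new_host @ w))
```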


The development of Use Cases continues on BonFIRE, with additional Use Cases from 9 Open Call experiments.

Representative Use Cases for BonFIRE from the first Open Call experiments originate from various research areas, including thin-client computing, distributed cluster computing, security and dynamic service compositions. These experiments make use of BonFIRE's functionality for controlling network characteristics, federating heterogeneous resources, monitoring infrastructures and supporting elasticity, and aim to assess the influence of such functions on their developed systems. The second Open Call targeted multi-site clouds with controlled networking. Application areas included sensor-based home security, media streaming, cloud-based plagiarism services and P2P networking.

2.1.3. USAGE POLICIES

2.1.3.1. How to connect to the facility

The BonFIRE facility will be made available to registered experimenters from the scientific community and from interested companies, both through Open Calls and Open Access mechanisms.

2.1.3.2. Plans for how to accommodate new requirements for experiments

BonFIRE is committed to developing in line with user requirements. Initial Use Cases and requirements have been identified early on in the project, as a basis for the first prototype release of the facility. For each of the following three releases, additional input from experimenters will be gathered in order to drive future architecture and deployment plans. For the second release, requirements from the internal BonFIRE experiment Use Cases are taken into account. For the third and fourth release, requirements from the Open Call and non-funded experiments are considered as well.

2.1.3.3. Cost of usage

The usage of the infrastructure is free of cost for the embedded experiments and those selected through the two BonFIRE Open Calls. No limitations on the duration and reservation of the resources are set for the permanent infrastructure, while on-request access to additional resources is possible depending on the availability and usage policies of the individual testbed providers. The project recently launched open access for 2013 which allows anyone to apply to use the facilities and run experiments for free. Open Access offers limited resources for a limited period.

The project is working towards developing a sustainable business model and an associated business plan. This will investigate options for charging for usage.

2.1.4. INTERNATIONAL LIAISON

BonFIRE regards federation as a key vehicle for achieving some of its goals. BonFIRE has selected facilities and projects whose functionality could be of use to BonFIRE and analysed the benefits and synergies that could be exploited by both parties. This analysis covers the FEDERICA, PII, OFELIA, OneLab2 and OpenCirrus initiatives.

In particular, federation with FEDERICA is identified as a means to achieve experimentation involving controlled networks. To this end, BonFIRE is in close contact with the FEDERICA consortium; NOVI, whose consortium overlaps with FEDERICA and whose results can provide better access to FEDERICA (see section 2.4); and GN3, which has written to BonFIRE confirming that the involved NRENs "are fully committed to continue support of the FEDERICA infrastructure within the GN3 Project". Release 3 of BonFIRE (M24, May 2012) will include support for federation.

2.1.5. FUTURE

The different classes of use cases are driven by the stakeholders running the experiments. BonFIRE has a broad range of resources, and it is possible to innovate in networking, in cloud management and in how applications are deployed on clouds. Many of the new experiments focus on Software-as-a-Service applications and how they can be deployed in multi-site cloud environments as they move from system testing towards operational deployment. It is clear that there are more potential users in the application space than in other areas. Innovation in network and cloud infrastructure management attracts lower levels of interest from BonFIRE users, although it is possible thanks to the deep control and observability offered by the facility.

BonFIRE will continue until at least 2015 through funding secured by partners in the Fed4FIRE and ECO2Cloud projects. These projects will move the facility in distinct directions. Fed4FIRE will help BonFIRE federate with other facilities to support system-level experiments where clouds are used together with other resources such as sensor networks. ECO2Cloud focuses on GreenIT experiments by instrumenting cloud sites with energy sensors so that green parameters can be incorporated into infrastructure and application management functions.


2.2. CONFINE

2.2.1. OFFERING

2.2.1.1. Description of the facility

The CONFINE project offers a testbed for experimental research that integrates and extends three existing community networks: Guifi.net (Catalonia, Spain), FunkFeuer (Vienna, Austria) and AWMN (Athens, Greece); each is in the range of 500 to 20,000 nodes, with a greater number of links and even more end-users. These networks are extremely dynamic and diverse, and successfully combine different wireless and wired (optical) link technologies, fixed and ad-hoc routing algorithms, and management schemes. They run multiple self-provisioned, experimental and commercial services and applications.

FIGURE 4. CONFINE OFFERING

This testbed provides researchers with a portal to access these emerging community networks, supporting any stakeholder interested in developing and testing experimental systems and technologies for these open and interoperable network infrastructures.

The testbed is a resource for the research community to address the limits and obstacles regarding Internet specifications that are exposed by these edge networks. It supports an integrated and multi-disciplinary effort to address and assess the usefulness and sustainability of community networking as a model for the Future Internet.

2.2.1.2. Main components - the timeline for the availability of the facility components

CONFINE is a four-year project, which started in October 2011 (see the CONFINE timeline in Figure 5). The basic CONFINE testbed has been operational internally since mid-2012 and has been progressively made available for external use since spring 2013. An Open Call was announced in autumn 2012, resulting in 36 applications, from which five new project partners were selected. The second Open Call will be announced in autumn 2013.


FIGURE 5. CONFINE TIMELINE

2.2.2. USE CASES

The CONFINE testbed is targeted towards researchers from academia and industry who want to perform experimentally-driven research on the current obstacles and limitations in community networks, addressing:

• Scale, heterogeneity and limited resources in the infrastructure (links, nodes, hosts) such as routing extensions for large and heterogeneous networks, and interoperability among routing protocols.

• The need for cross-layer interactions and optimizations, such as QoS for a variety of real-time (e.g. voice) and non-real-time services (e.g. media distribution), for heterogeneous networks.

• The definition of global parameters (e.g. QoE through user perceived parameters such as quality, price, reputation, and security) concerning the usability of the infrastructure and user-friendliness as seen from the user perspective.

• Self-management: self-configuration (e.g. adaptive channel and address allocation), self-healing (adaptation to node or link failures), and self-optimization (adaptation to different resource management functions depending on internal or external influences).

• Creation of open data sets for experimentation: generation of different data sets for off-line experimentation or simulation.

• Development of a benchmarking framework to ensure repeatability, reproducibility and verifiability of experiments with stable configurations.

• Best practices: documenting the different experiments performed on the experimental facility.

• Contributions to standardization of different key specifications for community networks.

• Contributions to open-source implementations of reference software components and services for community networks.

• Socio-technical-economic-legal evaluation and sustainability model based on the results of the testbed's provision, usage, and operation.

A more elaborate example is that the testbed could be used for remote benchmarking of ad-hoc routing protocols and applications. There are many events at which researchers and implementers of routing systems and other applications for ad-hoc networks gather together in one place, discuss, build a test network, benchmark the progress of different implementations under realistic test deployments, and make public the results and rankings from these tests. The CONFINE testbed would allow a group of implementers to remotely set up experimental virtual networks built from a selection of devices and links in this testbed, and then perform these tests remotely, more frequently, on a larger scale, and perhaps using a set of test networks that cover several scenarios with different characteristics. The CONFINE testbed would facilitate a faster evolution of these algorithms and implementations, and more robust and interoperable implementations across a larger set of experimental environments. Of course, this would not prevent the participants from meeting periodically to share the fun and excitement of discussing, sharing ideas, coding, and doing other social activities together.

2.2.3. USAGE POLICIES

2.2.3.1. How to connect to the facility

The CONFINE facility will be made available to registered experimenters from the scientific community and from interested companies, both through Open Calls and through non-funded mechanisms.

2.2.3.2. Plans for how to accommodate new experiment requirements

New requirements and functionality will be defined in a demand-driven way from (1) the needs defined by Open Call usage scenarios; (2) gaps identified during operational use by internal/external experimenters; and (3) common needs identified with other FIRE facility projects.

2.2.3.3. Cost of usage

The CONFINE facilities have been open for external experimenters since October 2012. External experimenters joining the CONFINE project as a result of the Open Calls are funded by CONFINE. During years 2 and 3 of the project (October 2012 – September 2014), a selection of external experimenters not joining the CONFINE project will be granted free access to the CONFINE facilities under clear conditions: (1) there are no guarantees on the availability (as the testbed is still under development); (2) feedback on usability of the facilities is required.

From October 2014, full open access will be offered to external experimenters. Terms and conditions (scheme of contribution and access policies) will be applied according to the sustainability business model developed in the CONFINE project (in collaboration with other FIRE projects).

2.2.4. INTERNATIONAL LIAISON

CONFINE aims to collaborate with other community networks within and outside the EU. More liaisons may be established as a result of the Open Calls (October 2012 and October 2013).


2.3. CREW

2.3.1. OFFERING

2.3.1.1. Description of the facility

CREW offers an open federated test platform, which facilitates experimentally-driven research on advanced spectrum sensing, cognitive radio and cognitive networking strategies in view of horizontal and vertical spectrum sharing in licensed and unlicensed bands.

CREW federates a Software-Defined Radio testbed for TV bands at Trinity College Dublin, a heterogeneous ISM wireless testbed at iMinds, a sensor network testbed at TU Berlin, an outdoor heterogeneous ISM/TVWS testbed at the Jožef Stefan Institute, a spectrum sensing platform developed by imec, an LTE-advanced cellular testbed at TU Dresden, a Multi-antenna LTE detection solution developed by Thales, a WinnF transceiver facility (API) developed by Thales, and finally the IRIS software radio framework which enables software defined radios (such as USRPs) to be reconfigured in real-time.

FIGURE 6. THE CREW FEDERATION

The aim of CREW is not to create uniform tools for accessing or operating the different testbeds, nor to enable simultaneous experiments at different locations (this would not be meaningful anyway, as interference is a local phenomenon). However, CREW is enabling advanced experiments by co-locating different devices and tools at a single location, by developing a common data format, by setting up a repository of experiment descriptions/scripts/traces, by developing benchmarking tools and by making available expertise ranging from the development of cognitive chips to the design of protocols for cognitive radios, nodes and networks. Most testbeds that are part of the CREW federation can be operated remotely.

To clarify the most important contributions achieved so far, CREW makes the following federation tools available to interested experimenters:

• A common information portal containing a comprehensive description of the individual testbeds and further providing clear guidelines on the functionalities and possibilities of the federated testbed. The portal is accessible via www.crew-project.eu/portal.

• The CREW repository, containing several types of data that are useful to share with other experimenters, including full experiment descriptions, traces, wireless background environments, processing scripts and performance metrics. The repository is accessible via www.crew-project.eu/repository (an illustrative sketch of such a shared record follows after this list).

• A benchmarking framework for cognitive radio and network experiments, offering automated procedures for experiments and performance evaluation methodologies, enabling fair comparison between subsequent developments or competing cognitive solutions. Benchmarking tools, enforcing the methodologies, are available via http://www.crew-project.eu/portal/crew-benchmarking-tools .

• Advanced cognitive components such as spectrum sensing agents and configurable radio platforms, by linking together software and hardware solutions from multiple partners. As an example, one of the testbeds in Ghent makes available multiple devices and tools such as the imec spectrum sensing agents, the IRIS framework, and USRP devices.
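As an illustration of the kind of data the common data format and the CREW repository are meant to make shareable, the sketch below assembles a hypothetical spectrum-sensing trace record. The field names and values are assumptions made for this example only; they do not represent the actual CREW common data format.

```python
# Hypothetical example of a shareable spectrum-sensing trace record; the schema below
# is an assumption for illustration, not the actual CREW common data format.
import json

trace_record = {
    "experiment_id": "example-ism-sensing-001",   # hypothetical identifier
    "testbed": "heterogeneous ISM test environment (iMinds)",
    "start_time_utc": "2013-05-01T10:00:00Z",
    "frequency_band_mhz": [2400.0, 2483.5],        # ISM band under observation
    "resolution_bandwidth_khz": 200,
    "samples": [                                   # (frequency in MHz, power in dBm)
        [2412.0, -92.5],
        [2437.0, -61.3],
        [2462.0, -88.9],
    ],
}

# Serialising to JSON makes the trace easy to store in a repository and reuse in scripts.
print(json.dumps(trace_record, indent=2))
```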

2.3.1.2. Main components - the timeline for the availability of the facility components

CREW is a five-year project, which started in October 2010 (see the CREW roadmap below). In the first two years of the project, the main components of CREW have been put in place. For experimenters, three components are essential: the (heterogeneous) wireless testbeds and cognitive components that are part of the CREW federation, the CREW portal (through which experimenters find information to get access to the facilities) and the CREW repository, which is a public website listing reusable scripts, experiment descriptions, etc. that can be used as a source of inspiration and as a place to store relevant results. An overview of the main components is provided below (A-C):

A. Testbeds:

• Availability: all are available [2] now and will be available at least until the end of the project (September 2015).

[2] Note that although CREW supports open access (i.e. "everyone" can use the facilities), no guarantees on availability are provided. The decision on whether or not to allow an experimenter, and what the conditions (policies) are, is taken by the individual testbed owners. Of course, official partners accepted as part of the open calls are granted access by default.


• Features: open access, remote access, repeatable experiments, controlled experiments, fair comparison

• Benefits: availability of cognitive hardware, co-located hardware, efficient tools, verifiability, diversity, experimentation methodologies

The following testbeds are available:

• A heterogeneous ISM test environment at iMinds incorporating IEEE 802.11, IEEE 802.15.1, IEEE 802.15.4, USRP software radios;

• A licensed cognitive radio testbed (including TV bands) at TCD that is based on the IRIS reconfigurable radio platform and the USRP;

• A wireless sensor network test environment at TUB incorporating Tmote Sky and eyesIFXv2 sensor nodes, Wi-Spy spectrum analyzers, USRP software radios and BEE2 FPGA platforms

• An LTE cellular test environment at TUD incorporating a complete LTE-equivalent base station infrastructure and SDR mobile user terminals.

• An outdoor heterogeneous ISM/TVWS testbed at JSI.

B. CREW portal: www.crew-project.eu/portal

• Availability: available now and will be available at least until the end of the project (September 2015).

• Features: clear and uniform information, search information, tutorials, general overview, detailed information

• Benefits: one-stop website for all information on the CREW testbeds and tools and their usage policies

C. CREW repository: www.crew-project.eu/repository

• Availability: available now and will be available at least until the end of the project (September 2015).

• Features: reusable components for experiments, examples, benchmarks

• Benefits: comparability, verifiability, openness, sharing of information

2.3.2. USE CASES

Use Cases include, but are not limited to:

• Context awareness for cognitive networking: spectrum sensing in unlicensed (ISM) and licensed bands (TV white spaces, cellular systems)


• Robust cognitive networks: applications that require robust communications through avoiding harmful interference and using frequency agility to improve communication quality

• Horizontal resource sharing in the ISM bands: algorithms, protocols and networking architectures for coexistence of and cooperation between independent heterogeneous network technologies

• Cooperation in heterogeneous networks in licensed bands: new techniques for context awareness in licensed bands (TV white spaces, cellular systems)

• Cognitive systems and cellular networks: the impact of dynamic spectrum access by secondary users on LTE cellular primary systems.

2.3.3. USAGE POLICIES

2.3.3.1. How to connect to the facility

Most testbeds of the CREW facility can be operated remotely over the internet. Anyone can ask for an account on any one of the testbeds. As the individual test facilities are very diverse, they have their own access and usage policies. However, the common CREW portal provides clear guidelines on how to select the most suitable facility and its corresponding policies. The CREW facility providers commit to offering sufficient capacity for external experimenters and to providing the necessary support during experiments. The CREW portal will redirect the experimenter to the different remote login interfaces of the different facilities. Each local facility will further organize its own support.

The decision on whether or not to allow an experimenter, and under what conditions, is taken by the individual testbed owners. As such, getting access is not considered a right but a possibility.

2.3.3.2. Plans for how to accommodate new requirements for experiments

A part of the CREW budget was/is reserved for demand-driven extensions. These demands come mainly from experimenters that join the consortium as part of the open calls, but are also determined by the consortium based on its own experience while using the testbeds.

Two open calls have been launched so far. The partners that joined as a result of the first open call have now completed their tasks. The partners that joined after the second open call are now in the process of experimentation. Details and statistics are available via the CREW website (see http://www.crew-project.eu/opencall ). A third open call has been preliminarily announced and will be officially launched at the Future Network and Mobile Summit in July 2013. More information is available via the CREW website http://www.crew-project.eu/opencall3.

2.3.3.3. Cost of usage

The portal and the repository are freely available to anyone. As mentioned above, the owners of the individual testbeds decide on the terms of access. While in general it is possible to use the testbeds, whether or not to accept an experiment outside the open calls is still decided on a per-experiment and per-experimenter basis by the testbed owner. The following table provides an overview:

TABLE 1. CREW: ACCESS POLICY OVERVIEW

Testbed | Available to? | For free?
Heterogeneous ISM test environment at iMinds | academic use, SME, industry | Best-effort access free for non-commercial research. Information on paying use currently not public.
Licensed cognitive radio testbed at TCD | academic use, SME, industry | Best-effort access free for non-commercial research. Information on paying use currently not public.
Wireless sensor network test environment at TUB | academic use, SME, industry | Best-effort access free for non-commercial research. Information on paying use currently not public.
LTE cellular test environment at TUD | Upon agreement | The terms for use need to be agreed upon individually.
Outdoor heterogeneous ISM/TVWS at JSI | academic use, SME, industry | Best-effort access free for non-commercial research. Information on paying use currently not public.

2.3.4. INTERNATIONAL LIAISON

At the moment of writing, there are three official collaborations between CREW and other projects and initiatives:

• There is an on-going collaboration with a GENI subgroup. The goal of the collaboration is the joint development of a common language for steering and controlling cognitive radio devices. The motivation is to lower the learning threshold for setting up and executing a CR experiment, and to make experimental validation of CR solutions across multiple CR platforms easier.

• There have been discussions with Berkeley on the “connectivity brokerage” concept, which is a modular and scalable methodology and architecture that enables proactive co-existence and collaboration between diverse technologies, making joint optimization of the scarce spectrum resources possible.

• There is a collaboration with the FP7-OpenLab project: the major CREW contribution lies in the overall benchmarking methodology and in providing the CREW spectrum-sensing hardware and distributed spectrum-sensing methodologies, which form part of the quality check during experiments. These spectral measurements are used for characterizing the wireless environment before, during and after wireless experiments in order to detect anomalies due to external influences (e.g. unwanted interference from other wireless devices or equipment not participating in the experiment). If the external influences are above a certain threshold, the affected runs can be discarded during or after the experiment, and the experiment can be (automatically) repeated; a minimal sketch of this logic is given below. In this way the significance and efficiency of wireless experiments can be improved substantially. From the OpenLab side, support is provided to make the CREW benchmarking tools more generic, i.e. compatible with a broader range of wireless testbeds. An immediate consequence of the collaboration is that the benchmarking tools can now work together with the OMF framework.
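The following sketch illustrates the repeat-on-interference logic described above: background spectrum measurements are compared against a threshold, and a run is discarded and repeated when external interference is too high. The threshold value, the measurement source and the experiment stub are hypothetical placeholders, not the CREW or OpenLab tooling.

```python
# Hypothetical sketch of discarding and repeating experiments when external
# interference exceeds a threshold; all values and stubs are placeholders.
import random
import statistics

NOISE_FLOOR_DBM = -95.0
INTERFERENCE_THRESHOLD_DB = 10.0   # assumed margin above the noise floor
MAX_ATTEMPTS = 3

def measure_background_dbm():
    """Stand-in for a distributed spectrum-sensing sweep (random dummy data here)."""
    return [NOISE_FLOOR_DBM + random.uniform(0.0, 15.0) for _ in range(50)]

def run_experiment():
    """Stand-in for the actual wireless experiment under test."""
    return {"throughput_mbps": round(random.uniform(5.0, 20.0), 1)}

for attempt in range(1, MAX_ATTEMPTS + 1):
    background = measure_background_dbm()
    excess_db = statistics.mean(background) - NOISE_FLOOR_DBM
    result = run_experiment()
    if excess_db <= INTERFERENCE_THRESHOLD_DB:
        print(f"Attempt {attempt}: accepted (mean excess {excess_db:.1f} dB):", result)
        break
    print(f"Attempt {attempt}: discarded, interference too high ({excess_db:.1f} dB)")
else:
    print("All attempts exceeded the interference threshold; experiment not validated.")
```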


2.4. EXPERIMEDIA

2.4.1. OFFERING

2.4.1.1. Description of the facility

Offering collective and participative experiences to real-world and online communities is at the heart of the Future Media Internet (FMI) and forms an essential part of entertainment, collaborative working, education, product and service innovation and advertising. Communities include hundreds of professionals, tens of thousands at live public events and millions online. Understanding the complexity and dynamics of such communities and ecosystems is essential for FMI research.

EXPERIMEDIA will develop and operate a unique facility that offers researchers what they need for large-scale FMI experiments. Experiments will be conducted at testbeds offering live events and real-world communities to accelerate the adoption of the FMI. EXPERIMEDIA primarily focuses on three testbeds offering a broad range of user experiences and infrastructures within the ecosystem. The testbeds include:

• Schladming Ski Resort,
• Multi-Sport High Performance Center of Catalonia,
• Foundation for the Hellenic World.

FIGURE 7. EXPERIMEDIA OFFERING

Experiments will explore new forms of social interaction and rich media experiences incorporating both online and real-world interaction. Testbed technologies will include:


• User generated and high quality content management and delivery,
• 3D Internet platform and tools for 3D reconstruction from live events,
• Augmented reality platform,
• Tools for integration and analysis of social networks,
• Access technologies,
• Range of network connectivity options.

Testbed management services offer provisioning, control and monitoring of resources, with a focus on observability through deep instrumentation of system components and the communities that use them, along with orchestration of content flows. A Competence Centre will promote sustainable access to venues for FMI experiments and engagement with the wider community.

2.4.1.2. Main components - the timeline for the availability of the facility components

EXPERIMEDIA is a three-year project, which started in October 2011 and finishes in September 2014. It will adopt an iterative development lifecycle. Each iteration will incrementally provide additional capabilities, initially driven by the needs of scenarios and driving experiments, and later by the needs of experiments funded through the Open Calls. Success will depend on the ability to identify and deliver generic baseline capabilities that cross different experiments and to provide a testbed management framework that can provide access to them. The phases of the project, with their core objectives, are:

Year 1 Connectivity: establish connectivity between testbeds through deployment of V1.0 of the EXPERIMEDIA baseline technologies

Year 2 Expansion: Expand technology baseline focusing on experiment composition patterns for dynamic media content and common user interaction

Year 3 Sustainability: Optimise technology baseline for increased operational efficiency, portability and usability considering localisation extremes

The project will deliver two major software releases that will be deployed at the start of each series of Open Call experiments (October 2012 and October 2013). Minor upgrades are planned (March 2013 and March 2014) to provide specific enhancements based on the requirements of experiments funded through the Open Calls.

The facility requirements are driven by both scenarios and experiments. The scenarios describe the high-level focus of the facility and help identify system requirements, conceptual architecture and baseline technologies for different classes of experiments.

TABLE 2. EXPERIMEDIA USE SCENARIOS

Scenario: Live and augmented public events
Classes of experiments: Investigating the production, management and delivery of user-generated content with augmented reality for participative community experiences at large scale
Facility capabilities provided: Large-scale, user generated content production, management and delivery; mechanisms for synchronizing distributed live events providing a common experience; mechanisms for socially driven content metadata annotation and dissemination; personal, dynamic adaptation of content according to individual and/or group preferences; real-time orchestration allowing for adaptive narratives and content, customizing the user experience; detection and tracking of feature points for mobile devices; mobile platforms enhanced for augmented reality applications; integrated services for evaluations, in terms of technical features, content and user experience

Scenario: Live and 3D
Classes of experiments: Investigating the production, management and delivery of high quality media content (large-scale in terms of size of content), 3D Internet technologies and metadata management for professional collaborative working and experiences.
Facility capabilities provided: Acquisition and synchronisation between camera feeds, audio and metadata, including matching exact frames from different cameras; integrated automatic data collection and management systems; metadata annotation and generation tools based on standards; real-time communications and data exchange including integrated 3D collaborative transmedia; on-the-fly 3D reconstruction of live events "in virtue" including geo-location in indoor enclosed spaces

Scenario: Live and online
Classes of experiments: Exploring the relationship between real-world and online communities, and how such online communities can be used to adapt and provide efficient management of media content and improved personalised social experiences
Facility capabilities provided: Flexible access to different social networking platforms depending on the online community of interest to experiments; monitoring of the social activities (real and online) performed by individuals and communities; analysis of social graphs for extraction of social knowledge and proximity of users to content; knowledge services for media QoS and QoE parameters derived from social behaviours

2.4.2. USE CASES

The experiments are expected to fit into one or more of the scenarios using the baseline technology developed. Experiments are expected to explore Future Internet systems targeting enhanced and contextualised user experience, immersive and interactive media technologies (co-creation and collaboration tools, augmented reality, etc.), content production, distribution and delivery platforms (content objects, dynamic adaptation and aggregation, hybrid delivery/return infrastructures, etc.) and new social interaction models. Experiments will explore business models as well as technical issues. This will include aspects of security (and rights management), legal and ethical issues, complex interactions between consumers and providers and the economics of commercial viability of solutions. EXPERIMEDIA has three driving experiments funded within the project to verify and validate the facility.


TABLE 3. EXPERIMEDIA DRIVING EXPERIMENTS

Driving experiment: EX1 - augmented reality services and UGC at large-scale live events (WP4.1)
Objective and expected impact of FMI technologies investigated: How UGC management and delivery can be combined with augmented reality applications for enhanced experiences, participative co-creation and new forms of social interaction at large-scale live public events.
Testbed: Schladming

Driving experiment: EX2 - content production and delivery for high quality and 3D Internet-based remote sports analysis and training (WP4.2)
Objective and expected impact of FMI technologies investigated: How content production, content delivery and 3D Internet platforms for high quality content can be used for remote sports analysis, training and performance improvement, considering new collaborative working and interaction models between athletes, coaches and other professionals.
Testbed: CAR

Driving experiment: EX3 - shared, real-time, immersive and interactive cultural and educational experiences (WP4.3)
Objective and expected impact of FMI technologies investigated: How 3D virtual reality tools and content management and delivery platforms can facilitate shared, real-time, immersive and interactive experiences that combine virtual worlds (e.g., ancient civilizations, educational themes, etc.) with modern scientific, civil achievements and modern educational curricula.
Testbed: FHW

EXPERIMEDIA’s open call was designed to evenly distribute funding across each Smart Venue (Schladming, CAR and FHW) and to ensure that a broad range of topics was covered within the domain of the Future Media Internet. As such, 170K was allocated to each venue, with experiments expected to be small (70K-100K maximum), resulting in two experiments running at each venue. The experiments were expected to target specific architecture scenarios associated with technology and live events. Six experiments were funded:

• MediaConnect (University of Graz, STI International GmbH) - the experiment makes use of social networks and investigates whether augmented reality and interactive video technologies can improve the visitor experience.

• Digital Schladming (IN2 search interfaces development Ltd) - creates a hyper-local, temporally bound community, augmenting existing social media platforms.

• CONFetti (PSNC) - investigates whether the combination of HD video, stereoscopic 3D and remotely rendered 3D augmented reality models can help a coach who is unable to attend a training session in person to still usefully participate.

• 3D Acrobatic Sport (STT) - the experiment uses small inertial sensors strapped to the athletes' bodies to record their movements in acrobatic sports. The data from these sensors is used to animate a 3D avatar of the athlete, in order to record and review their performance.

• BLUE (Henri Tudor Research Center, University of Peloponnese) - creates the perfect museum visiting experience by measuring a visitor's cognitive style and then recommending a path through the exhibits. It also enables visitors to share their experience through social networks.


• REENACT (University of Vigo) - lets visitors participate in a re-enactment of an ancient Greek battle. Using small tablets and 3D models, it creates an immersive experience for the live battle, which is subsequently replayed in the big theatre. The debriefing stage also encourages the participants to compare their battle with the historic one and thereby increase their understanding.

2.4.3. USAGE POLICIES

2.4.3.1. How to connect to the facility

The EXPERIMEDIA facility will be made available to registered experimenters from the scientific community and from interested companies, both through Open Calls and through non-funded mechanisms.

2.4.3.2. Plans for how to accommodate new experiment requirements

New requirements and functionality are defined in a demand-driven way from (1) the needs defined by Open Call usage scenarios; (2) gaps identified during operational use by internal/external experimenters; and (3) common needs identified with other FIRE facility projects.

2.4.3.3. Cost of usage

EXPERIMEDIA will run internal driving experiments, Open Call experiments and a selection of non-funded experiments that use the facility during the final year. The driving experiments will be used to provide requirements during facility development and to verify and validate the facility capabilities to be used by experiments funded during the Open Calls. There will be two Open Calls (planned for May 2012 and May 2013) and non-funded experiments in the final year. At least 10 experiments (‘driving’, ‘Open Call’ and ‘non-funded’) will be executed, associated with the three motivating architectural scenarios. Open Call partners will join the EXPERIMEDIA consortium, whereas in the case of ‘non-funded’ experiments, contractual obligations, limitations of liabilities and fair use agreements will be developed on a case-by-case basis. From October 2014, access will be offered to experimenters based on terms and conditions according to the sustainability model developed in the EXPERIMEDIA project.

2.4.4. INTERNATIONAL LIAISON

EXPERIMEDIA will establish a European Future Media Internet Competence Centre that brings together all stakeholders and provides a platform to promote and offer access to Smart Venues and global engagement.

2.4.5. FUTURE

EXPERIMEDIA is developing a new approach to investigate the characteristics of Future Media Internet systems and how new products and services can be developed through experiments that combine real-world and online communities. The facility identifies four foundation elements (venue, community, live events and technology) necessary to achieve this goal. The future of EXPERIMEDIA lies in its strategy for each of these elements and its ability to dynamically adapt to changing venues, communities and events. Technologies will be ported to different socio-economic contexts beyond those represented by the current smart venues, and effective mechanisms will be established to engage medium to large user populations. These measures, along with a methodology for business modelling and value impact assessment for target venues, aim to provide a way to recruit and retire real-world locations on demand.


2.5. FEDERICA

2.5.1. OFFERING

2.5.1.1. Description of the facility

FEDERICA is a Europe-wide infrastructure based on computer and network physical resources, both capable of virtualization. The facility can create sets of virtual resources (slices) according to users' specifications for topology and systems. The user has full control of the resources in the assigned slice, which can be used (for example) for many types of research, from Future Internet clean-slate architectures to security and distributed protocols, routing protocols and applications. The infrastructure is European in scale and is open to interconnection with external facilities. Information and a detailed description are available at http://www.fp7-federica.eu

FIGURE 8. FEDERICA PHYSICAL TOPOLOGY

The FEDERICA substrate comprises 4 core PoPs located in CESNET (Czech Republic), DFN (Germany), GARR (Italy), and PSNC (Poland), and 6 non-core PoPs located in GRNET (Greece), i2CAT (Spain), NTUA (Greece), NIIF (Hungarnet), RedIRIS (Spain), and SWITCH (Switzerland).

The core PoPs are interconnected in a full-mesh topology by 1 Gbps dedicated Ethernet circuits provided by GÉANT, terminated on Juniper MX480 routers. Non-core PoPs are attached to core and non-core PoPs in a highly meshed topology, also using 1 Gbps Ethernet circuits and Juniper EX switches.

Each PoP hosts at least one Sun X2200M2 server; in the core PoPs the number of servers is increased to at least two. The VMware ESXi operating system was chosen as the hypervisor.


The facility allows disruptive testing in a wide area environment. All components of the slices are created using virtual resources and can host any type of protocol and application.

2.5.1.2. Main components - the timeline for the availability of the facility components

The FEDERICA project ended in October 2010. However, the substrate is still fully functional and is currently supported by the partners and partially by the GN3 project. On 1 April 2013 the GN3plus project started, which plans to set up a facility offering Testbeds as a Service before Q4 2013. The FEDERICA facility will either continue as a separate project or be integrated into the GN3plus facility.

2.5.2. USE CASES

The FEDERICA consortium has served more than 20 user groups; the paper [P. Szegedi et al., “Enabling Future Internet Research: The FEDERICA Case”, IEEE Communications Magazine, July 2011] details the most interesting ones.

The experiments may be divided into three areas:

• Validation of Virtual Infrastructure features: this group of experiments aims at validating the basic principles of virtualization capable infrastructures in general and particularly the measurement and the configuration of reproducible virtual resources

• Evaluation of multi-layer network architectures, using the capability of connecting external testbeds or virtual slices provided by other facilities to the FEDERICA user slice

• Design of novel data and control protocols: this group of experiments aims at designing and validating novel data and control plane protocols as well as architectures

During the last period, FEDERICA has offered slices to the BonFIRE and CONFINE projects, which interconnect most of their islands.

2.5.3. USAGE POLICIES

2.5.3.1. How to connect to the facility

FEDERICA is open for accepting requests for use. The requests are examined by a User Policy Board which discusses technical feasibility and details with the user. There is currently no API or automated system to connect to. The slice is created initially by the NOC and then handed over to the full control of the user.

2.5.3.2. Plans how to accommodate new requirements for experiments

Until April 2013 there was no plan to modify the facility. Through the collaboration with the NOVI project, there has been an effort to adopt SFA and extend the ProtoGENI RSpecs to represent the FEDERICA facility. This has enabled more automated control and management and better federation with SFA-compatible facilities.
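To give a feel for what an SFA-style resource description looks like, the sketch below builds a minimal request RSpec in the GENI v3 XML format using only the Python standard library. The element names follow the public GENI RSpec schema, while the component manager URN and node identifiers are hypothetical placeholders rather than actual FEDERICA names.

```python
# Minimal sketch: building a GENI v3-style request RSpec with the Python
# standard library. Identifiers below are hypothetical placeholders.
import xml.etree.ElementTree as ET

RSPEC_NS = "http://www.geni.net/resources/rspec/3"
ET.register_namespace("", RSPEC_NS)

rspec = ET.Element("{%s}rspec" % RSPEC_NS, {"type": "request"})

# Request one non-exclusive virtual node from a (hypothetical) aggregate.
node = ET.SubElement(rspec, "{%s}node" % RSPEC_NS, {
    "client_id": "vm0",
    "component_manager_id": "urn:publicid:IDN+example-testbed+authority+cm",
    "exclusive": "false",
})
ET.SubElement(node, "{%s}sliver_type" % RSPEC_NS, {"name": "virtual-machine"})

print(ET.tostring(rspec, encoding="unicode"))
```

In an SFA-based workflow, such a document would be passed to an aggregate manager together with the experimenter's credentials in order to request the corresponding slice.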

2.5.3.3. Cost of usage

Currently the usage is free of charge. The suggested duration of the experiment is three months. The total number of resources to be allocated varies case by case and it is agreed initially with the users.


2.5.4. INTERNATIONAL LIAISON

FEDERICA had various contacts with Future Internet initiatives worldwide, in particular in the US and Japan. A direct collaboration is planned with Internet2 and GENI.


2.6. OFELIA

2.6.1. OFFERING

2.6.1.1. Description of the facility

• OFELIA provides an infrastructure enabling network innovations through the OpenFlow protocol: users can “control their part of the network” rather than simply conducting experiments “over” the network

• Users receive a network slice consisting of

o virtual machines as end-hosts
o a virtual machine to deploy their OpenFlow-capable network controller/application
o parts (slices) of the network nodes that connect to the user’s OpenFlow controller

From the start of the project, five islands (iMinds (formerly known as IBBT), TUB, i2CAT, UnivBristol (initially UEssex), and ETH) were planned to provide heterogeneous network experimentation technologies. These five testbeds are operated by partners with complementary technological strengths and user groups from five countries with strong research communities in networking. The two Open Calls within OFELIA have led to two additional islands (in Trento and Rome) joining the OFELIA testbed.

TABLE 4. OVERVIEW OF THE OFELIA ISLANDS

• Berlin (TUB): partial campus network with OF-switches
• Gent (iMinds): central hub, 100 (200) node emulab instance
• Zürich (ETH): connection to OneLab and GENI
• Barcelona (i2CAT): control framework development & FIBRE
• Bristol (UBristol): optical and L2 (Extreme) switches
• Trento (Create-Net): city-wide test bed with L2 switches, NetFPGA
• Rome/Catania: Information-Centric Networks


FIGURE 9. SCREENSHOT OF THE OFELIA FRAMEWORK SHOWING THE CURRENT CONNECTIVITY OF OPENFLOW SWITCHES. TUB AND CNIT-CATANIA ARE NOT YET CONNECTED TO THE HUB SWITCH AT IMINDS.

2.6.1.2. Main components - the timeline for the availability of the facility components

There are three phases in OFELIA (duration: 37 (originally 36) months from September (originally October) 2010):

• Phase I: OpenFlow controllers and switches in place, first local experiments concluded

• Phase II: Connect islands and extend the OpenFlow experimentation to wireless and optics

• Phase III: Automate resource assignment and provide connections to other FIRE and non-European research facilities

The detailed timeline for Phase I and II was as follows:

Phase I:

• March 2011: islands deployed + closure of 1st round of Open Calls

• June 2011: finalization of 1st round of Open Calls + public offering (= external users can make use of the facility)

Phase II:

• March 2012: islands extended and interconnected + closure of 2nd round of Open Calls.

• June 2012: finalisation of 2nd round of Open Calls

The project is now nearly complete, and it is currently being studied how the facilities can remain operational after its end. Until the end of the project (September 2013), experimenters can make use of the OFELIA islands as a best-effort service.


In this case, for conducting experiments, users receive a network slice consisting of the following components:

• Resources
o Virtual machines as end-hosts deployed on physical servers
o A virtual machine to deploy their OpenFlow-capable network controller/application
o Parts of the OpenFlow switches that connect to the user’s OpenFlow controller (a subset of the overall flowspace)
• Control
o Control of the flowspace defined with the above resources

2.6.2. USE CASES

In generic terms, OFELIA aims at use cases in which researchers need control over the network in terms of routing, traffic engineering, etc. The Open Calls resulted in typical applications such as network virtualization / provider virtualization, content-centric networking, gaming / real-time applications, multi-domain routing and social networks.
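In practice, this control is exercised by an OpenFlow controller application running in the controller virtual machine of the slice. The sketch below shows a minimal OpenFlow 1.0 "hub" application written with the open-source Ryu framework, which simply floods every packet handed to the controller; it is purely illustrative, since OFELIA does not prescribe a particular controller framework (NOX, POX, Ryu or others can be used).

```python
# Illustrative sketch of a minimal OpenFlow 1.0 "hub" controller using the
# open-source Ryu framework; OFELIA does not prescribe a controller framework.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_0


class SimpleHub(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_0.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        # Every packet forwarded to the controller is flooded out of all ports.
        msg = ev.msg
        datapath = msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
        actions = [parser.OFPActionOutput(ofproto.OFPP_FLOOD)]
        out = parser.OFPPacketOut(datapath=datapath,
                                  buffer_id=msg.buffer_id,
                                  in_port=msg.in_port,
                                  actions=actions)
        datapath.send_msg(out)
```

Such an application would typically be started with ryu-manager inside the controller VM, with the OpenFlow switches of the slice pointed at the VM's control address.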

Besides offering a pure lab environment, in two islands student and employee workplaces will be wired to the OFELIA facility, and thus experiments involving real users can be supported.

2.6.3. USAGE POLICIES

2.6.3.1. How to connect to the facility

All steps needed to connect to OFELIA are described at http://www.fp7-ofelia.eu/ofelia-facility-and-islands/how-to-experiment/. In order to get access, an experimenter:

A. registers for an OFELIA account
B. sets up an OpenVPN connection
C. creates a project via a website that can be reached via an island-specific URL (these URLs are available from the above website)
D. performs the experiment

Each of these steps is detailed on the OFELIA project website.
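As an informal illustration of step B, the sketch below starts an OpenVPN client from a configuration file and then checks that an island-specific management host can be reached through the tunnel. The configuration file name, host name and port are hypothetical placeholders; the actual values are provided per island through the OFELIA website.

```python
# Illustrative sketch only: start an OpenVPN tunnel and verify that an
# island management host is reachable through it. All names are placeholders.
import socket
import subprocess
import time

VPN_CONFIG = "ofelia-client.ovpn"            # placeholder: supplied per island
ISLAND_HOST = "expedient.example-island.eu"  # placeholder management host
ISLAND_PORT = 443

# Launch the OpenVPN client in the background (requires root privileges).
vpn = subprocess.Popen(["sudo", "openvpn", "--config", VPN_CONFIG])

try:
    time.sleep(10)  # crude wait for the tunnel to come up
    with socket.create_connection((ISLAND_HOST, ISLAND_PORT), timeout=5):
        print("Island management interface reachable through the VPN.")
except OSError as exc:
    print("Could not reach the island through the VPN:", exc)
finally:
    vpn.terminate()
```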

2.6.3.2. Plans how to accommodate new requirements for experiments

New requirements and functionality were defined in a demand-driven way from (1) the needs defined by Open Call usage scenarios; (2) gaps identified during operational use by internal/external experimenters; and (3) common needs identified with other FIRE facility projects.

2.6.3.3. Cost of usage

From July 2011 until the end of the project (Sept 2013) the facility is open free-of-charge for external users as a best-effort service.


Details on how to connect to the facility are available on the website http://www.fp7-ofelia.eu/ofelia-facility-and-islands/how-to-experiment/. In short, to make use of the OFELIA services:

• Users need to agree with the OFELIA Usage Policy (available from the above link)
• Users are advised to read the OFELIA user manual

For experiments inside a single island, the corresponding island manager decides on how long experiments can last and on which resources are made available. For other experiments, a decision is taken by the OFELIA policy committee, which includes a representative from each island.

2.6.4. INTERNATIONAL LIAISON

Initial communication links have been established to a number of related test facility projects, including:

• EU/Brazil call: via i2CAT (contact: Michael Stanton), starting June 2011

• China/OFELIA cooperation: via EICT (contacts: Prof. Xing Li, Tsinghua University, and Prof. Yan Ma, BUPT); initial contact established during ICT 2010, further talks were planned for the FI week in Budapest, May 2011

• NICT/JGN2+ trials: via NEC VC in January 2011, continued in May 2011


2.7. ONELAB

2.7.1. OFFERING

2.7.1.1. Description of the facility

The OneLab experimental facility offers access to a range of different Future Internet testbeds and research tools. Building on the results of the former EC projects “OneLab” in FP6 and “OneLab2” in FP7, the OneLab offering will continue to be developed through the OpenLab project, which started in September 2011.

OneLab has three types of federation implemented in its experimental resources, each serving a specific function: 1. trust and resource discovery; 2. experiment control, adding wireless resources and their OMF experiment controller; and 3. measurement systems.

1. Trust and resource discovery federation

This federation builds on PlanetLab and its methodologies: through an SFA interface, the PlanetLab Europe user community can access testbeds worldwide. OneLab’s flagship testbed is PlanetLab Europe (http://www.planet-lab.eu/), the European branch of the global PlanetLab system, a geographically distributed platform for testing novel ideas in network overlays, content distribution systems, distributed systems, and peer-to-peer technology. It consists today of over 250 server-class computers (nodes), located at over 140 sites across Europe, and users have access to over 1000 PlanetLab nodes worldwide. Each node runs the PlanetLab operating system, which is based upon the Linux-VServer virtual server architecture. OneLab is a slice-based facility: a researcher obtains a slice across the system that consists of virtual machines on any or all of the nodes.

FIGURE 10. MAP SHOWING PLANETLAB NODES IN EUROPE (AN INTERACTIVE MAP IS AVAILABLE ON THE PLANETLAB EUROPE WEBSITE)

MySlice - PlanetLab Europe slice management interface

A key OneLab component is MySlice (http://myslice.planet-lab.eu/), specially developed for PlanetLab Europe, which gives users both a web interface and a programmable API through which to manage their slices. With a service that starts with slice setup, through experiment run-time, to retrospective analysis of experimental data after an experiment is concluded, MySlice also offers visualization tools for data that has been collected.

SFA (Slice Federation Architecture)

SFA is designed to be entirely decentralised, which is deemed to be the only way to achieve massive scale. It defines a web services interface between federation peers; information and requests are exchanged online. It comprises: (a) naming conventions for referring to all entities in the federation, (b) a trust layer that implements secure/authenticated API calls, and (c) a set of APIs for using the different services that constitute the federation. It makes no assumptions about the nature of the resources on heterogeneous testbeds and so does not impose a model on resource descriptions, instead conveying them as-is, much like a payload is handled in a network packet. Privacy and policy constraints may be implemented in the SFA layer at each peer: non-public data can be hidden and failures can be returned if policies would be violated. Emerging from GENI [geni.net], SFA currently has two interoperable free open-source implementations: the PlanetLab one, which OneLab2 has been involved in implementing, and the ProtoGENI one, which the Emulab platform has started to use extensively.

FIGURE 11. ONELAB TRUST AND RESOURCE DISCOVERY (SFA-BASED) FEDERATION
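As an indication of how an experimenter-side tool interacts with an SFA peer, the sketch below queries the XML-RPC interface of an aggregate manager for its advertised resources, following the GetVersion/ListResources conventions of the SFA/GENI aggregate manager API. The endpoint URL, certificate paths and empty credential list are simplified, hypothetical placeholders rather than OneLab-specific values.

```python
# Illustrative sketch of querying an SFA/GENI aggregate manager over XML-RPC.
# Endpoint, certificate paths and credential handling are placeholders.
import ssl
import xmlrpc.client

AM_URL = "https://am.example-testbed.eu:12346/"   # placeholder aggregate manager
USER_CERT = "user_cert.pem"                       # placeholder client certificate
USER_KEY = "user_key.pem"                         # placeholder private key

context = ssl.create_default_context()
context.load_cert_chain(USER_CERT, USER_KEY)
# Testbed CAs are often private; verification is relaxed here for brevity only.
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE

am = xmlrpc.client.ServerProxy(AM_URL, context=context)

print(am.GetVersion())

credentials = []  # a real call passes a delegated user/slice credential here
options = {"geni_rspec_version": {"type": "GENI", "version": "3"},
           "geni_available": True,
           "geni_compressed": False}
advertisement = am.ListResources(credentials, options)
print(advertisement)
```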

On top of the original PlanetLab Europe testbed, the OneLab experimental facility has been expanded to include the other two types of federation, adding testbeds and measurement techniques. These added heterogeneous features include:

2. Experiment control federation – adding wireless resources and their OMF experiment controller


By enabling a user to request OMF resource controllers in a PlanetLab slice, OneLab allows an OMF user to easily run an experiment on PlanetLab. With the PlanetLab wireless extensions, this experiment can easily span PlanetLab and an OMF wireless testbed. The European wireless testbed joining the OneLab federation is:

NITOS (Network Implementation Testbed using Open Source code) located in Volos, Greece and developed by CERTH, in association with NITLab, the Network Implementation Testbed Laboratory of the Computer and Communication Engineering Department at the University of Thessaly. NITOS consists of nodes based on commercial Wi-Fi cards and Linux-based open-source platforms, which are deployed both inside and outside of the University of Thessaly's campus building. Currently, three kinds of nodes are supported: ORBIT-like, diskless Alix2c2 PCEngines, and GNU/MIMO.

FIGURE 12. ONELAB EXPERIMENT CONTROL (OMF) FEDERATION

3. Measurement systems

OneLab’s measurement system federation builds on TDMI, the TopHat Distributed Measurement Infrastructure. TopHat is a topology information system integrated on every PlanetLab virtual machine. Researchers use the PlanetLab facility for its ability to host applications in realistic conditions over the public best-effort Internet. TopHat has federated with two other measurement systems, DIMES and ETOMIC, in order to gain higher measurement precision and wider coverage, and hence observability of the Internet beyond PlanetLab nodes.

ETOMIC testbed - A high-precision traffic measurement infrastructure

The specially designed nodes of ETOMIC (European Traffic Observatory Measurement Infrastructure) are distributed throughout Europe, and are able to carry out active measurements that are globally synchronized to a high temporal resolution, providing extremely accurate readings (~10 nanoseconds).


DIMES testbed - A distributed measurement infrastructure

Aimed at studying the topology of the Internet, DIMES is based on distributed lightweight software agents, hosted by a community of thousands of volunteers worldwide. DIMES offers the ability to launch topology measurements of the Internet from multiple vantage points, as well as access to a vast database of accumulated measurement data. On a typical day, the DIMES server records 3.5 million measurements from 1300 agents in over 50 countries.

FIGURE 13. ONELAB MEASUREMENT SYSTEM FEDERATION

2.7.1.2. Main components - the timeline for the availability of the facility components

OneLab’s governing consortium was set up by INRIA and UPMC in August 2008 to oversee the development of OneLab’s first testbed, PlanetLab Europe. This partnership was expanded in 2010 to include two other key contributors, HUJI and the University of Pisa, and currently has no defined end-date, showing that these institutions are committed to sustainability and the long-term supervision of OneLab’s facilities. Although two key EC projects backing OneLab have now ended, further public funds have been actively sought and won, safeguarding the future development of the facility offerings and helping to continue the comprehensive support services currently available to users. This ensures that all OneLab offerings remain open to the public Internet research community, involving real-world end-users.


2.7.2. USE CASES

Since the majority of OneLab’s testbed nodes are open to the public Internet, the researcher can experiment with distributed applications in a real-life testing environment. This ability makes the PlanetLab environment, for example, an essential complement to controlled experiments in simulation or emulation environments. Unpredictable real-world traffic loads, routing changes, and failures put applications to the test in ways that a controlled environment cannot. Furthermore, researchers can deploy services that are used by regular Internet end-users worldwide. For example, one content distribution experiment on PlanetLab offers faster web downloading to thousands of end-users in countries across the world. The researchers who deployed this service use it to study application performance and end-user behaviour. A key benefit of OneLab’s offer to researchers is that it allows them to deploy their experiments at a global scale, exposing their applications to geographic and network topological diversity and allowing them to deploy services in proximity to end-users. While the testbeds’ own nodes are scattered essentially across Europe, federation with the global PlanetLab system (http://www.planet-lab.org/) provides European researchers with access to the combined system, which consists of over 1,000 nodes at over 500 sites worldwide.

OneLab Use Cases include the following:

• Network topologies
o Delays
o Available bandwidths

• Routing paradigms for wired and wireless networks
o Real-life routing changes
o Routing and content sharing topologies: minimizing routing traffic or content queries

• Real-world Internet traffic measurements
o Unpredictable traffic loads

• Application failure recovery
• Concurrent multipath transmissions
• System optimization for multi-hop networks

2.7.3. USAGE POLICIES

Users of testbeds within or federated to the OneLab facility are required to adhere to the OneLab Acceptable Use Policy (AUP) or a compatible AUP that governs their behaviour when making use of OneLab resources. This allows OneLab to handle incidents that arise across testbed boundaries, facilitating communication and allowing easy identification of those responsible. However, members of OneLab testbeds remain free to prioritize their own users or even allow them exclusive access to some resources. An additional service offered by OneLab is the management of the AUPs of participating testbeds.

2.7.3.1. How to connect to the facility

Over 140 institutions in Europe, representing over 1500 researchers, have signed user membership contracts with the PLE Consortium. To join as a user member (distinct from partnership), an institution contributes two server-class computers as PlanetLab Europe nodes. These machines are made freely available to all other PlanetLab users via the Internet. Because this openness requires responsible behaviour, member institutions legally commit their researchers to follow an AUP. At present, users of PlanetLab Europe have automatic access to the NITOS wireless testbed (which is freely available even without a resource contribution). The DIMES testbed is open to any user provided that they install the DIMES software, and ETOMIC requires the contribution of a specially designed ETOMIC measurement box.

2.7.3.2. How to perform experiments

The PlanetLab Europe slice management interface is called MySlice (http://myslice.planet-lab.eu/). MySlice gives users both a web interface and a programmable API through which to manage their slices: starting with slice setup, through experiment run-time, to retrospective analysis of experimental data after an experiment is concluded. MySlice also offers visualization tools for data that has been collected.

The researcher can log in to each of these virtual machines via the SSH secure remote login tool, and will then be the root user in a Fedora 12 Linux environment. They can deploy whichever software they prefer on these virtual machines, subject only to a few restrictions based on the shared kernel. The researcher can also configure a delay and drop profile for packets entering and leaving a virtual machine, thereby turning its standard Ethernet link into an emulated wireless or DSL link for their slice. (The system also provides a few nodes that are connected via real Wi-Fi links.)
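To illustrate what working inside a slice looks like in practice, the sketch below uses the Paramiko SSH library to run the same command on a few slice nodes, much as a researcher might do when deploying or checking an experiment. The node names, slice login and key path are hypothetical placeholders; the mechanism itself (SSH login using the slice name as the user and a registered public key) follows standard PlanetLab practice.

```python
# Illustrative sketch: run one command on several PlanetLab nodes over SSH.
# Node names, slice login and key path are hypothetical placeholders.
import os
import paramiko

SLICE_NAME = "upmcple_myexperiment"            # placeholder slice (login) name
KEY_FILE = os.path.expanduser("~/.ssh/id_rsa")  # key registered with the testbed
NODES = ["planetlab1.example.org",              # placeholder node hostnames
         "planetlab2.example.org"]

COMMAND = "uname -a && uptime"

for host in NODES:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=SLICE_NAME, key_filename=KEY_FILE)
    _, stdout, _ = client.exec_command(COMMAND)
    print(host, "->", stdout.read().decode().strip())
    client.close()
```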

The NITOS wireless testbed in Volos offers researchers access using the cOntrol and Management Framework (OMF) open-source software. OMF simplifies the procedure of defining experiments and offers a more centralized way of deploying experiments and retrieving measurements.

2.7.3.3. Support organization for experimenters

UPMC and INRIA established the PlanetLab Europe Consortium in August 2008 as a partnership responsible for running the testbed, and the Consortium was expanded to include HUJI and UNIPI in 2010. CNRS (the French national research agency) funds the chief operations engineer position, UPMC itself provides additional legal and administrative support, and the OneLab NOC (Network Operations Centre) employs a three-person development team at UPMC and INRIA; the NOC also updates the documentation and tutorial videos offered through PlanetLab Europe online channels:

• Dailymotion: http://www.dailymotion.com/PlanetLabEurope
• YouTube: http://www.youtube.com/user/PlanetLabEurope

2.7.3.4. Contract requirements for usage

Once an institution becomes a member of PlanetLab Europe, its researchers can provision private virtual servers on any of over 1000 PlanetLab nodes. These virtual servers can be used to deploy live Internet services that are exposed to the same issues that arise in genuine production environments: variable bandwidth, diverse latencies, realistic robustness, and failure modes.

2.7.3.5. Plans how to accommodate new requirements for experiments

OneLab, through its core partners, is involved in numerous national and EU initiatives, either underway or soon to begin. Any new requirements from users will be developed during these new projects. The FIRE Architecture Board will also contribute to the development of these successors of the OneLab2 project, and in due course the new projects will define methods to attract, collect and meet the evolving requirements of users.

2.7.3.6. Cost of usage

Usage requirements vary depending on the testbed. PlanetLab Europe and ETOMIC require that a member institution contributes some hardware resources in exchange for the testbed access enjoyed by their users. DIMES is available to any user who installs its agent, while the NITOS wireless testbed is freely open to any individual user, provided that they register with OneLab before using its scheduler to reserve a slice on the testbed.

2.7.4. INTERNATIONAL LIAISON

At present most of OneLab’s international liaisons are related to the PlanetLab Europe testbed’s connections with other PlanetLab-based structures worldwide. Federation between the PlanetLab Europe Consortium and the global PlanetLab Consortium is based on a Memorandum of Understanding between participating institutions. The OneLab coordinator, Professor Serge Fdida, sits on the Steering Committee of the global PlanetLab Consortium. On an international level, in addition to the federation between PlanetLab Europe and PlanetLab Central, OneLab has federated with PlanetLab Japan, Private PlanetLab Korea (PPK) and has also expanded the facility to include Chinese partners.


2.8. OPENLAB

2.8.1. OFFERING

2.8.1.1. Description of the facility

OpenLab brings together the essential ingredients for an open, general purpose and sustainable large scale shared experimental facility, providing enhancements to the earlier successful prototypes: OneLab2 and PII, to serve the demands of Future Internet Research and Experimentation. OpenLab delivers control and experimental plane middleware to facilitate the early use of the testbeds by researchers in industry and academia, exploiting our own proven technologies, developed notably in the OneLab and Panlab initiatives, as well as drawing upon other initiatives’ best work, such as the SFA control framework and OpenFlow switching.

OpenLab’s contribution to the FIRE facility portfolio includes: PlanetLab Europe (PLE), with over 150 member institutions across Europe; the NITOS and w-iLab.t wireless testbeds; two IMS telco testbeds that can connect the public PSTN to IP phone services, and can explore merged media distribution; an LTE cellular wireless testbed; the ETOMIC high-precision network measurement testbed; the HEN emulation testbed; and the ns-3 simulation environment. Potential experiments that can be performed over the available infrastructure go beyond what can be tested on the current Internet.

OpenLab extends the OneLab and Panlab facilities with advanced capabilities in the areas of mobility, wireless, monitoring, and domain interconnections, and introduces new technologies such as OpenFlow. These enhancements are transparent to existing users of each facility.

OpenLab’s open and available testbeds are listed below.

NITOS (Network Implementation Testbed using Open-Source code)

NITOS [nitlab.inf.uth.gr/NITlab/index.php/testbed] is an OMF-based wireless testbed in a campus building at UTH in Volos, Greece. It consists of 45 nodes equipped with a mixture of Wi-Fi and GNU-radios, as well as cameras and temperature and humidity sensors. Two programmable robots provide mobility. This publicly available testbed supports experiments across all networking layers. In addition to OMF, the testbed employs locally developed tools: the NITOS scheduler, a resource reservation application, and TLQAP, a topology and connectivity monitoring tool. In OpenLab, under the guidance of COSMOTE, a major ISP in Greece, and with the help of Alcatel-GR, NITOS will be extended to meso-scale (WiMAX/3G/LTE), with a base station and mobile end-user handsets.

w-iLab.t

The w-iLab.t heterogeneous wireless testbed [http://www.iminds.be/en/develop-test/ilab-t/wireless-lab] is composed of two separate deployments: w-iLab.t Zwijnaarde and w-iLab.t Office. Both testbeds are equipped with multiple wireless technologies, including Wi-Fi and sensor nodes.

w-iLab.t Office is deployed across three floors of the iMinds office building in Ghent, Belgium. It contains 200 locations, each equipped to receive multiple wireless sensor nodes and two IEEE 802.11a/b/g WLAN interfaces. Wi-Fi and sensor networks can be configured to operate simultaneously, allowing complex and realistic experiments with heterogeneous nodes and multiple wireless technologies. In addition, shielded boxes accommodate nodes that can be connected over coax cables to RF splitters, RF combiners and computer-controlled variable attenuators, thus allowing fully reproducible wireless experiments with emulated, dynamically changing propagation scenarios. With a hardware control device (called the “environment emulator”) that was designed in-house, unique features of the testbed include the triggering of repeatable digital or analogue I/O events at the sensor nodes, real-time monitoring of power consumption, and battery capacity emulation.

The most recent deployment, w-iLab.t Zwijnaarde, is located in an unmanned utility room (size: 66 m x 22.5 m). As opposed to the w-iLab.t Office, where people work and where operational wireless networks (WLAN, DECT phones) are installed, there is much less external radio interference in the Zwijnaarde deployment. At this location, hardware is hosted at 60 spots. Every spot is equipped with an embedded PC with two Wi-Fi a/b/g/n interfaces and one IEEE 802.15.1 (Bluetooth) interface, a custom iMinds-Rmoni sensor node with an IEEE 802.15.4 interface, and an “environment emulator” board. There are two additional possibilities in the Zwijnaarde deployment: (i) a number of cognitive radio platforms (including USRPs) as well as specialized spectrum scanning engines are available at this location, enabling state-of-the-art research in the field of cognitive radio and cognitive networking; (ii) a deployment with 20 mobile robots will be available at the latest by summer 2013, allowing experiments to be scheduled that include up to 20 mobile nodes (similar to the 60 fixed nodes) operated in the environment.

DOTSEL

The DOTSEL testbed at ETH Zürich is focused on delay-tolerant opportunistic protocols and applications. It is composed of 15 Wi-Fi-equipped Android Nexus One devices that are carried by staff members, and five Wi-Fi a/b/g ad-hoc gateways. With OpenLab, DOTSEL will be enlarged to 25 nodes and 3G capability will be added.

PLE (PlanetLab Europe)

PLE [www.planet-lab.eu] is the European arm of the global PlanetLab system, the world’s largest research networking testbed, which gives users access to Internet-connected Linux virtual machines on over 1000 networked servers located in the United States, Europe, Asia, and elsewhere. Nearly 1000 scientific articles mention the PlanetLab system each year, including papers in such prestigious networking and distributed systems conferences as ACM SIGCOMM, ACM CoNEXT, IEEE INFOCOM, ACM HotNets, USENIX/ACM NSDI, ACM SIGMETRICS, and ACM SIGCOMM IMC. Researchers use PLE for experiments on overlays, distributed systems, peer-to-peer systems, content distribution networks, network security, and network measurements, among many other topics.


Established in 2006 and developed by the OneLab initiative, PLE is today overseen by four OpenLab partners: UPMC, INRIA, HUJI, and UNIPI. UPMC handles testbed operations and INRIA co-leads, along with Princeton University, the development of MyPLC, the free, open-source software that powers PlanetLab. The PlanetLab Europe Consortium has over 150 signed-up member institutions: mostly universities and industrial research laboratories, each of which hosts two servers that it makes available to the global system. These institutions are home to 1720 users. On a typical recent day, 244 users were connected to on-going experiments.

OpenFlow support in PlanetLab

PlanetLab Europe (PLE) supports OpenFlow capabilities through a modified version of Open vSwitch called sliver-ovs. Experimenters are able to create an OpenFlow overlay network by specifying the links between PLE nodes. PlanetLab Europe support must be contacted to obtain a private IP subnet for a slice before setting up the OpenFlow network.

Documentation is available at https://www.planet-lab.eu/doc/guides/user/practices/openflow

OpenFlow enhancements for PLE: https://www.planet-lab.eu/files/OpenLab_Deliverable_D4_3_V1_0.pdf

OpenFlow-capable virtual topologies with Open-vSwitch: https://www.planet-lab.eu/files/wp1-wp4-openflow.pdf

OpenLab extends both the PlanetLab software and the PlanetLab Europe Consortium.

HEN (Heterogeneous Experimental Network)

HEN [mediatools.cs.ucl.ac.uk/nets/hen], built between 2005 and 2010 by University College London, provides 100 server-class machines with between 6 and 14 NICs each, interconnected by a Force10 E1200 switch with 550 1-Gigabit ports and 24 10-Gigabit ports. This infrastructure allows the emulation of rich topologies in a controlled fashion over switched VLANs that connect multiple virtual machines running on each host. The precise control of the topology and the choice of the end-host operating system, that are possible on HEN, are particularly valuable facilities to networking and distributed systems researchers.

Many dozens of researchers actively use HEN: at Stanford University, the University of Lancaster, NYU, the Nokia Research Centre, and NEC Labs Europe, to name a few. UK- and EU-funded projects, including the EPSRC-funded Virtual Routers project, EPSRC-funded ESLEA project, EU FP7-funded Trilogy project, and EU FP7-funded CHANGE project, have all generated the bulk of their experimental results on HEN. Results have been published in prestigious networking and distributed system venues including ACM SIGCOMM, ACM HotNets, USENIX/ACM NSDI, USENIX Security, ACM CCR, ACM CoNEXT, Presto, FDNA, PMECT, ICDCSW, and LSAD.

OpenLab extends HEN to support multi-homed operation and interconnection with other FIRE-supported testbeds, yielding a powerful platform for experimental research on multi-path transport and application protocols, a hot area of interest to today’s networking community.

The Waterford Institute of Technology (WIT) IMS testbed

The TSSG/WIT NGN IMS testbed [ngntestcentre.com] is an Irish nationally-funded initiative serving telecom firms seeking to develop or test NGN services. It provides them with advanced multimedia services, such as conference calling and handling of presence information. The testbed is a carrier-grade NGN platform based on the Ericsson IMS Communications System (ICS). The SIP-based horizontal network architecture includes an Ericsson IMS core and the components for managing sessions, addressing, subscriptions and IMS interworking components with the relevant gateways for enabling connectivity to other networks. The testbed has recently been upgraded with pico/femto cells to allow secure remote access to the test facility. The network also includes support systems for handling provisioning, charging, device configuration and operation and maintenance.

Clients include IP centrex companies, a location-based service provider, and developers of pico/femto cell technology. International customers have conducted testing in the area of IMS security and testbed interconnection using the GSMA Pathfinder service operated by Neustar.

In OpenLab, the WIT testbed will be enhanced by the development and integration of a P2P/NGN QoS reservation mechanism that will allow OpenLab experimenters to test application-level P2P traffic routing algorithms.

The University of Patras IMS testbed - OSIMS

The core of the OSIMS testbed is the IMS system based on the Open Source IMS Core of OpenIMS, found at http://www.openimscore.org/. This enables the creation of Next Generation Network core elements within R&D and IMS-interested parties. OpenIMS enables the development of IMS services and the trial of concepts around core IMS elements that are based upon highly configurable and extendable software. The Open Source IMS Core consists of Call Session Control Functions (CSCFs), the central routing elements for any IMS signaling, and a Home Subscriber Server (HSS) to manage user profiles and associated routing rules. The central components of the Open Source IMS Core project are the Open IMS CSCFs (Proxy, Interrogating, and Serving). But since even basic signaling routing functionality for IMS requires information look-up in an HSS, normal usage of such a core IMS network is not possible without it; therefore a simple HSS is also part of the Open Source IMS Core project.

OSIMS is a unique open and vendor-independent NGN/IMS test environment that can be used as a testbed by academic and industrial institutions for early prototyping of new NGN/IMS-related components, protocols, and applications, as well as for testing and benchmarking of components. Also, the interconnection to other IMS testbeds worldwide is currently in process, in order to allow the experience of IMS concepts and IMS services to be shared with partners.

ETOMIC (European Traffic Observatory Measurement InfrastruCture)

ETOMIC [etomic.org] is a high-precision (10s of nanoseconds) network measurement testbed featuring dozens of Internet-connected nodes globally synchronized with GPS clocks. More than 100 users from 34 institutions run over 100 experiments per month. Created in 2004-5 within the FP6 EVERGROW Integrated Project, it was awarded the Best Testbed Award at TridentCom 2005. In OneLab2, ETOMIC opened interfaces to MySlice (described later) to provide measurements transparently to PlanetLab users.

OpenLab will continue to promote ETOMIC’s experimental plane interoperability.


2.8.1.2. Main components - the timeline for the availability of the facility components

The facility prototypes have emerged mostly from earlier FIRE efforts, notably the OneLab2 and PII projects and the work of iMinds. In addition, OpenLab brings in a number of excellent contributions from elsewhere, such as University College London's HEN testbed, which was developed with UK national funding. All the above listed testbeds are open to experiments.

OpenLab has organised its work on extending the experimental capabilities into two categories: control plane tools, and experimental plane tools. Control plane tools largely work behind the scenes to support basic testbed operations and federation, whereas experimental plane tools are visible to the user and depend upon the control plane in order to function. The distinction between these two planes is similar to the notions of kernel and user space in the operating systems arena.

The tools that OpenLab starts with are each typically specific to a single testbed environment. Indeed, some tools such as the OMF Experiment Controller and PLE's MySlice interface come with existing communities of hundreds of users who are already comfortable working with them. OpenLab extends the coverage of these tools. By requiring a heterogeneous set of tools to function across multiple testbeds, OpenLab puts the emerging interoperability standards to the test.

Other prototypes are individual testbeds, or are tools that are specific to a testbed or type of testbed. These prototypes are grouped into two broad categories, wireless and wired, corresponding to two broad research networking communities.

The development is driven by FIRE users, and progress is benchmarked at scheduled interoperability trials.

Control plane prototypes

Among the most promising candidate global federation architectures are SFA and Teagle.

SFA (Slice Federation Architecture)

SFA is designed to be entirely decentralized, which is deemed to be the only way to achieve massive scale. It comprises: (a) naming conventions for referring to all entities in the federation, (b) a trust layer that implements secure/authenticated API calls, and (c) a set of APIs for using the different services that constitute the federation. It makes no assumptions about the nature of the resources on heterogeneous testbeds and so does not impose a model on resource descriptions, instead conveying them as-is, much like a payload is handled in a network packet. Emerging from GENI [geni.net], SFA currently has two interoperable free open-source implementations: the PlanetLab one, which OneLab2 has been involved in implementing, and the ProtoGENI one, which the Emulab platform has started to use extensively. Each has been rolled out in an embryonic SFA mesh, and there is currently limited connectivity between them.

FITeagle


FITeagle [www.fiteagle.org] provides an extensible framework to seamlessly interconnect testbeds and dynamically aggregate their resources. It further provides its end-users with functionality to transparently control experiments using a wide range of resources from the available testbeds. FITeagle will provide interoperability with other federation frameworks such as SFA. FITeagle is mainly based on the FIRE-Teagle developments and therefore inherits its basic conceptual architecture. FIRE-Teagle was successfully developed, deployed and used in the Panlab projects.

The FITeagle extensions for OpenLab are structured in different phases, each one implementing different features to interconnect FITeagle-managed / compliant testbeds with SFA:

The first phase included the control of an SFA-based testbed by using the FITeagle framework and the related user tool (VCT Tool). The successful deployment of the network measurement tool Packet Tracking on dynamically provisioned, interconnected virtual nodes was demonstrated at the OpenLab General Assembly meeting in Thessaloniki in June 2012. For this phase, Fraunhofer FOKUS FUSECO and TUB experimentation resources were used.

The second phase included the control of FITeagle-based testbeds by using an SFA-related user tool (SFI). The scenario contained the FUSECO IMS and UoP IMS testbeds and was demonstrated in October 2012 at the OpenLab One Year Review meeting in Madrid. The SFA command line client was used to request a list of available resources from all testbeds and to provision and to start OpenIMS resources at Fraunhofer FOKUS.

In the third phase, an SFA graphical user interface (MySlice) will be used to demonstrate an even deeper integration of the FITeagle framework in SFA-based federations at the beginning of 2013, potentially integrating heterogeneous resources from the IMS testbeds with, for example, PlanetLab or LTE-enabled resources.

RADL (Resource Adapter Description Language)

RADL (Resource Adapter Description Language) [trac.panlab.net/trac/wiki/RADL] is a component of the Teagle architecture that provides a textual syntax for describing the Resource Adapter (RA) middleware that provides the link between a user and a testbed resource. RADL decouples the RA from its underlying implementation code and can be used to publish a resource description to a resource repository.

OMF (cOntrol and Management Framework)

OMF [omf.mytestbed.net] is a free open-source testbed control framework that manages over 20 testbeds worldwide, including the NITOS and NADA testbeds in Europe and six WiMAX meso-scale deployments in the US, along with the ORBIT facility at Rutgers University, the largest openly accessible wireless testbed facility in the world, for which OMF was originally developed in 2003. It provides an integrated measurement and instrumentation framework, as well as powerful user tools (see next section) and a portal that supports the entire experiment lifecycle. OMF's Aggregate Manager is currently adopting the SFA APIs.

Experimental plane prototypes

OMF Experiment Controller

OEDL, the OMF Experiment Description Language, is a declarative domain-specific language to define experiments. It describes required resources and how they should be configured and connected. It also defines the orchestration of the experiment itself. Controllability and self-healing are important properties of long-running experiments. OEDL allows for specification either of individual resources or of constraints that permit multiple possible resource assignments, a feature that is becoming increasingly important as the scale of testbeds and the complexity of investigations increase. The OMF Experiment Controller, in production use since 2005 and supporting hundreds of experiments every day, is the free open-source tool that executes OEDL scripts. It has been ported beyond OMF to other control frameworks, such as PlanetLab and Emulab.

NEPI (Network Experimentation Programming Interface)

NEPI [yans.pl.sophia.inria.fr/trac/nepi] allows a researcher to describe a network topology and configuration parameters, to specify traces to be collected, to deploy and monitor experiments, and to collect traces into a central datastore. It can pilot ns-3 simulations, emulations, and real-world PlanetLab experiments, and has the goal to be a general experiment controller. NEPI has a Python API and a simple yet powerful graphical user interface.
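As a concrete illustration of the Python API, the sketch below drives a single PlanetLab node through NEPI's experiment controller. It assumes the ExperimentController interface of a recent NEPI release; the resource type names, the hostname and the slice login are illustrative placeholders, not tied to any particular deployment.

from nepi.execution.ec import ExperimentController

# Create a controller for this experiment run.
ec = ExperimentController(exp_id="ping-demo")

# Describe a PlanetLab node and an application to run on it.
node = ec.register_resource("planetlab::Node")
ec.set(node, "hostname", "planetlab1.example.net")  # hypothetical node
ec.set(node, "username", "my_slice_login")          # hypothetical slice login

app = ec.register_resource("planetlab::Application")
ec.set(app, "command", "ping -c 5 www.example.org")
ec.register_connection(app, node)

# Deploy the described set-up, wait for the application to finish,
# then retrieve its standard output as a trace.
ec.deploy()
ec.wait_finished(app)
print(ec.trace(app, "stdout"))
ec.shutdown()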

MySlice

MySlice [myslice.planet-lab.eu] is a free, open-source tool that helps users manage their testbed resources, from experiment setup through runtime, and retrospectively examine data once the experiment is complete. It is particularly focused on making the measurable characteristics of testbed resources available to users. This interface is currently used by all users of PlanetLab Europe and will soon be used by all PlanetLab users worldwide.

FCI (Federation Computing Interface) and FSDL (Federation Scenario Description Language)

FCI [trac.panlab.net/trac/wiki/FCI] is an Eclipse-based SDK and API for developing applications that access and control an experiment‘s requested resources. The closely related FSDL is a domain specific language for specifying experimentation scenarios. FSDL provides an abstract syntax of a resource broker meta-model. The FSDL‘s concrete syntax is supported by a text editor that implements instances of this meta-model. Syntax highlighting, context assistance, validation errors and warnings are some of the features. FSDL is available for the Eclipse workbench and the specification framework is provided as Eclipse plugins.

ns-3

ns-3 [www.nsnam.org] is a free open-source discrete-event network simulator for Internet systems. Though not an experiment controller, it interacts nicely with experiment controllers to run real experiments in simulated or mixed environments. In addition to the simulation core, ns-3 includes an object framework for simulation configuration and event tracing, a set of 802.11 MAC and PHY models, and TCP/IP network stacks. ns-3 is the first network simulator to transparently and efficiently support the automatic conversion of network packets to and from simulation objects, thereby supporting real-time simulation. In addition, ns-3 integrates a robust and efficient Direct Code Execution (DCE) framework that encompasses arbitrary user space and kernel space protocol implementations written in C or C++ for Linux. It interfaces nicely with the NEPI tool described above, allowing automated setup and deployment of mixed experiments that involve a real-time simulation, a testbed, and a field experiment. A consortium to promote ns-3 was recently founded by INRIA and the University of Washington. More than 80 individuals have contributed ns-3 code, 4000 to 8000 separate users download the latest release each month, and more than 800 people are subscribed to the support mailing list.
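A minimal sketch of what an ns-3 script looks like through the Python bindings is given below (the classic two-node UDP echo example). It assumes the bindings are built and importable as the ns.* modules used by earlier ns-3 releases; the module layout has changed in more recent versions.

import ns.applications
import ns.core
import ns.internet
import ns.network
import ns.point_to_point

# Two nodes connected by a point-to-point link.
nodes = ns.network.NodeContainer()
nodes.Create(2)

p2p = ns.point_to_point.PointToPointHelper()
p2p.SetDeviceAttribute("DataRate", ns.core.StringValue("5Mbps"))
p2p.SetChannelAttribute("Delay", ns.core.StringValue("2ms"))
devices = p2p.Install(nodes)

# Install the TCP/IP stack and assign IPv4 addresses.
stack = ns.internet.InternetStackHelper()
stack.Install(nodes)
address = ns.internet.Ipv4AddressHelper()
address.SetBase(ns.network.Ipv4Address("10.1.1.0"), ns.network.Ipv4Mask("255.255.255.0"))
interfaces = address.Assign(devices)

# UDP echo server on node 1, client on node 0.
server = ns.applications.UdpEchoServerHelper(9)
server_app = server.Install(nodes.Get(1))
server_app.Start(ns.core.Seconds(1.0))
server_app.Stop(ns.core.Seconds(10.0))

client = ns.applications.UdpEchoClientHelper(interfaces.GetAddress(1), 9)
client.SetAttribute("MaxPackets", ns.core.UintegerValue(1))
client.SetAttribute("Interval", ns.core.TimeValue(ns.core.Seconds(1.0)))
client.SetAttribute("PacketSize", ns.core.UintegerValue(1024))
client_app = client.Install(nodes.Get(0))
client_app.Start(ns.core.Seconds(2.0))
client_app.Stop(ns.core.Seconds(10.0))

ns.core.Simulator.Run()
ns.core.Simulator.Destroy()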

nmVO (Network Measurement Virtual Observatory)

ELTE’s nmVO [nm.vo.elte.hu] is a framework to efficiently store and share research data. It provides data collection and archiving as well as easy-to-use analysis tools via both human- and machine-readable interfaces. Users edit and run customized SQL queries on nmVO-integrated databases containing several billion records dating from January 2006. Services available both through a web interface and via web services include: registration, login, schema browsing, batch queries, MyDB, import, and history browsing.

Multi-Hop Packet Tracking

Multi-Hop Packet Tracking [www.fokus.fraunhofer.de/go/track] is a free open-source distributed measurement tool that allows researchers to capture the path of packets and their hop-by-hop transmission quality in terms of loss, delay and jitter. The measurement results are exported using the IPFIX measurement protocol. Users can analyze and visualize the captured results in real time and query the collection point for flow-specific data. The software also supports a synchronized, coordinated sampling approach that makes it possible to adjust the measurement overhead to the available resources. The measurement probe, which is installed on all nodes, is written in C and available for UNIX distributions; an embedded version is also available for OpenWrt. Packet Tracking has been used in multiple testbeds such as FEDERICA, PlanetLab, VINI and G-Lab.

OFP (the OpenFlow Protocol)

OpenFlow is a vendor-independent communications protocol for accessing a switch’s forwarding plane. Switches that support OpenFlow implement, among other features, a programmable flow table that enables easy definition of flow paths across a network of OpenFlow switches. This approach significantly simplifies network management, as a single control element can manage an entire network of switches. Also, the system’s high configurability permits network infrastructure to be partitioned and shared between production and experimental needs. In OpenLab, OFP will be integrated into several testbeds.
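The sketch below illustrates the kind of control OpenFlow enables: a controller module installs a flow entry on each switch that connects. It is written for the free open-source POX controller framework (OpenFlow 1.0) purely as an example; the port numbers and match fields are arbitrary and are not tied to any OpenLab testbed.

# Run with a POX installation, e.g.: ./pox.py <this_module_name>
from pox.core import core
import pox.openflow.libopenflow_01 as of

log = core.getLogger()

def _handle_ConnectionUp(event):
    # When a switch connects, install a flow entry that forwards IPv4 traffic
    # arriving on port 1 out of port 2.
    msg = of.ofp_flow_mod()
    msg.match.in_port = 1
    msg.match.dl_type = 0x0800            # IPv4
    msg.idle_timeout = 60
    msg.actions.append(of.ofp_action_output(port=2))
    event.connection.send(msg)
    log.info("Installed flow on switch %s", event.dpid)

def launch():
    core.openflow.addListenerByName("ConnectionUp", _handle_ConnectionUp)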

IGW (Interconnection Gateways)

In commercial grade networks, interconnection gateways connect networks that are based on different technologies at the data plane by providing media transcoding functions, protocol translation, traffic rate decoupling, and other services. IGWs can also interconnect the data planes of experimental testbeds of differing technologies and/or in different locations. IGW prototypes have been used to establish links between the Panlab testbeds in the PII project and between GENI nodes in the US. In PII, IGWs support multi-homed experiments. In OpenLab, IGWs will interconnect the wired testbeds (a feature that could potentially be extended to the wireless testbeds as well). IGWs will be made controllable by testbed tools, and IGW functionality will be considerably enhanced with the capability of establishing experimentation paths between physical and virtual infrastructure networks.

2.8.2. USE CASES

Since OpenLab's testbeds are open to the public Internet, researchers can experiment with distributed applications in a real-life testing environment. The individual use cases can be read in Sections 2.7 and 2.9 for OneLab2 and PII respectively. Moreover, OpenLab extends these cases to the following four categories, depicted in the figures below: 1) Bringing an existing user community to a new testbed, 2) Applying a tool from one testbed in another environment, 3) Experiment repeatability, and 4) Cross-testbed experiments.

FIGURE 14 BRINGING AN EXISTING USER COMMUNITY TO A NEW TESTBED

FIGURE 15 APPLYING A TOOL FROM ONE TESTBED IN ANOTHER ENVIRONMENT

FIGURE 16 EXPERIMENT REPEATABILITY

FIGURE 17 CROSS-TESTBED EXPERIMENTS

2.8.3. USAGE POLICIES

The usage policies for individual testbed components within OpenLab are governed by each testbed itself. However, the majority of the OpenLab testbeds currently adhere to the PlanetLab Europe federation. Users of testbeds within or federated to the OpenLab facility are required to adhere to an OpenLab Acceptable Use Policy (AUP), or a compatible AUP, that governs their behaviour when making use of OpenLab's resources. This allows OneLab to handle incidents that arise across testbed boundaries, facilitating communication and allowing easy identification of those responsible.

2.8.3.1. How to connect to the facility

Over 150 institutions in Europe, representing over 1700 researchers, have signed user membership contracts with the PLE Consortium (i.e. the core of the OpenLab federation).

2.8.3.2. How to perform experiments

The way in which the OpenLab testbeds are used differs from testbed to testbed. The experimenter can find information on the different testbeds online via the OpenLab website.

2.8.3.3. Support organization for experimenters

OpenLab testbeds are generally associated with research institutes and hence supported and maintained by their hosting organizations. UPMC and INRIA established the PlanetLab Europe Consortium in August 2008 as a partnership responsible for running the testbed federation, and were joined by HUJI and UNIPI in 2011. The Consortium is gradually expanding to include more testbeds. CNRS, the French national research agency, funds the chief operations engineer position; UPMC provides additional legal and administrative support; and the OneLab NOC (Network Operations Centre) employs a three-person development team at UPMC and INRIA, which also updates documentation and tutorial videos.

2.8.3.4. Contract requirements for usage

A member of PlanetLab Europe can, for example, provision private virtual servers on any of over 1000 PlanetLab nodes. These virtual servers can be used to deploy live Internet services that are exposed to the same issues that arise in genuine production environments: variable bandwidth, diverse latencies, realistic robustness, and failure modes.

2.8.3.5. Plans for how to accommodate new experiment requirements

OpenLab organised specific Open Calls to invite and fund new innovative use of the experimental facilities. Both open calls are now closed. Based on the feedback of the experimenters that joined as a result of the first open call, and of those experimenters now using the facilities as a result of the second open call, the facilities are improved where necessary.

2.8.3.6. Cost of usage

Usage requirements vary depending on the testbed. PLE and ETOMIC require that a member institution contribute some hardware resources in exchange for the testbed access.

2.8.4. INTERNATIONAL LIAISON

OpenLab is part of an active international community, through its multi-disciplinary experts on networks experimentation. Strong links with equivalent initiatives and facilities in North-America (GENI at large), Australia, Japan, China, Korea, Thailand and Brazil represent some of the most active relations. Federation between the PLE Consortium and the global PlanetLab Consortium is based on a memorandum of understanding between participating institutions. In addition to the federation between PlanetLab Europe and PlanetLab Central, OneLab has federated with PlanetLab Japan (PLJ), Private PlanetLab Korea (PPK) and has also expanded the facility to include Chinese partners.

2.9. PII

2.9.1. OFFERING

The Panlab Infrastructure Implementation (PII) project and its predecessor, the Specific Support Action Panlab, have elaborated the concept of a Panlab office. The Panlab office is a testbed resource broker, whereby the brokered testbed resources are specialized ICT resources owned by testbed owners and normally made available to customers for testing and experimentation purposes.

The Panlab office arranges transactions between organizations or individuals interested in using testbed resources and the one or more testbed resource owners who own and operate them.

The Panlab office helps organizations owning testbed resources to advertise their service capabilities. The Panlab office then earns a commission when a deal is made and a contract is signed.

The Panlab office develops relationships with testbed owners and testbed operators and negotiates testing and experimentations contracts on behalf of the customers. The purpose of the Panlab office is to provide a single point of contact in which customers can access the entire available testbed resource market globally or regionally. The Panlab office will commonly advertise both managed and best effort testing and experimentation services.

A testbed owner is a special form of a Panlab office customer, called a Panlab partner.

To fulfil the goal of finding customers to lease testbed services, the Panlab office will do the following:

• Find testbed resources suited to the customers' needs and specifications

• Request terms and conditions for lease that may be imposed by the testbed owner

• Request information about the available testbed resources and describe them for successful advertising

• List the testbed owners that offer testbed resources

• Advertise the available testing and experimentation services through listings in the resource broker tool (Teagle), newsletters, promotions and other methods

The primary business objective of the Panlab office is the provisioning of testbed resource brokering services.

The secondary business objective is the deployment and operation of a Network Operation Centre (NOC) for all available resources in the Panlab federation of testbeds.

2.9.1.1. Description of the facility

The facility offers, through Teagle [www.fire-teagle.org], a number of central services to testbed providers and users. Such services include the description, registration, orchestration and provisioning of heterogeneous federated testbed resources across participating provider domains. Currently, different resource types can be handled, such as physical and virtual machines, devices, software, as well as abstract concepts and services. By means of a common information model, resources can be described, allowing for advanced control in a federated environment. The existing Teagle and federation framework prototypes that were developed in the Panlab/PII projects are available as open source under the Apache 2.0 license.

RADL (Resource Adapter Description Language) trac.panlab.net/trac/wiki/RADL is a component of the Teagle architecture that provides a textual syntax for describing the Resource Adapter (RA) middleware that provides the link between a user and a testbed resource. RADL decouples the RA from its underlying implementation code and can be used to publish a resource description to a resource repository.

Beyond the brokering service that the facility offers, a number of resources are available that allow the design and deployment of infrastructures for testing Next Generation Network (NGN) services and applications. These can be (for example) advanced multimedia services, such as conference calling and handling of presence information. Aspects that can be tested include session management, subscriber management, addressing, as well as IP Multimedia Subsystem (IMS) interworking with the relevant gateways for connectivity to other networks. Important business-relevant aspects that can be supported include provisioning, charging, device configuration, operation and management, and roaming across different IMS domains. Recently the facilities have been upgraded to support P2P/NGN QoS reservation mechanisms, allowing for testing application-level P2P traffic routing algorithms.

2.9.1.2. Main components - the timeline for the availability of the facility components

The current facilities are available independently of the Panlab/PII projects, either as autonomous test centres or laboratories at the respective operators' locations, or via the brokering service of the Panlab office. More specifically, the following operators have initially expressed their commitment to offer services via the Panlab office:

• Fraunhofer Fokus – IMS playground and other related capabilities, depending on demand

• TSSG/WIT NGN IMS testbed – including carrier grade IMS Communications Systems

• University of Patras – IMS testbed

• Cosmote – IMS infrastructure and computing resources

• …

2.9.2. USE CASES

A large number of different and quite heterogeneous Use Cases have been implemented in the PII project. The following list presents the Use Cases which can be retrieved in detail via http://www.panlab.net/use-cases.html:

• Testing trans-coding video through dynamic cloud allocation

• Testing Multicast Streaming on Dynamic Networks

• Testing Uncompressed HD Streaming

• EzWeb application over TID SDPLabs: “PII Message Sender”

• Testing end-to-end Self-Management in a Wireless Future Internet Environment

• Testing enhanced Web TV services over mobile phones

• Testing Adaptive admission control and resource allocation algorithms

• Stress testing the Open IMS core

• Testing a VOIP user agent

As can be inferred by the above examples, typical Use Cases that can be directly supported by the PII testbeds are Next Generation Network (NGN) services and applications.

2.9.3. USAGE POLICIES

The typical usage policy is based on a customer contract between the Panlab office and the experimenter or testing customer. The terms and conditions are typically commercial.

Subject to negotiation, terms and conditions other than commercial ones may be agreed, in particular if experiments or tests are embedded in a joint research and development project with the participation of the owners of the testbeds.

Additional usage policies have been defined for owners of testbeds that join the Panlab federation via a contractual relationship with the Panlab office.

2.9.3.1. How to connect to the facility

The Panlab office supports customers to construct and deploy a so called VCT (Virtual Customer Testbed), which is a specific configuration of resources that match customer requirements. By default, a VCT is interconnected over a Virtual Private Network (VPN) that is automatically created to connect the resources that are part of a VCT. The customer can then access the resources in the VCT either via a secure remote access tool (e.g. SSH), or the customer equipment is connected to the VPN in which case the customer equipment becomes inherently part of the VCT.
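As an illustration of the first access mode, the sketch below scripts an SSH session to a resource inside a VCT once the VPN is up. It uses the third-party paramiko library; the private address, user name and key file are hypothetical placeholders rather than actual Panlab conventions.

import paramiko

VCT_HOST = "10.10.0.5"          # hypothetical private address inside the VCT VPN
VCT_USER = "experimenter"       # hypothetical account on the provisioned resource
KEY_FILE = "vct_experiment_key" # hypothetical SSH private key

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(VCT_HOST, username=VCT_USER, key_filename=KEY_FILE)

# Run a simple command on the resource and print its output.
stdin, stdout, stderr = client.exec_command("uname -a")
print(stdout.read().decode())

client.close()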

User access to the System Under Test (SUT) that is deployed by the customer in the VCT depends on the customer requirements and the availability of access networks at the desired locations. Typical commodity commercial access networks are available in most locations.

2.9.3.2. How to perform experiments

This is out of scope of the facility and is left to the experimenter to decide. A number of tools supporting experimentation can be booked as part of the VCT.

The most exciting capability is the Federation Computing Interface (FCI) and the Federation Scenario Description Language (FSDL). The FCI [trac.panlab.net/trac/wiki/FCI] is an Eclipse-based SDK and API for developing applications that access and control an experiment’s requested resources. The closely related FSDL is a domain specific language for specifying experimentation scenarios. FSDL provides an abstract syntax of a resource broker meta-model. The FSDL concrete syntax is supported by a text editor that implements instances of this meta-model. Syntax highlighting, context assistance, validation errors and warnings are some of the features. FSDL is available for the Eclipse workbench and the specification framework is provided as Eclipse plug-ins.

2.9.3.3. Support organization for experimenters.

The Panlab office will offer a development group and a support group.

The development group is responsible for the development of advanced services and enhanced service features that are deployed and operated by the Panlab office. Typically these services are related to the core technology for brokering and remotely managing testbed resources (Teagle, repository, Panlab testbed manager, resource adaptors, and interconnection gateway). New services and features are deployed in agreement with the Eurescom technical support director. The development team comprises Panlab partner organizations that work collaboratively in a virtual-organization type of structure.

The support team is responsible for the operation of the Panlab office servers and the operation and troubleshooting of customer testing sessions. Similarly to the development team, the support team comprises Panlab partner organizations that work collaboratively in a virtual-organization type of structure. One member of the Eurescom technical support team is a member of the Panlab support team.

2.9.3.4. Contract requirements for usage

A contract with commercial terms and conditions must be negotiated and signed. Under certain conditions, other terms and conditions are possible (see the introduction to Section 2.9.3 above).

2.9.3.5. Plans how to accommodate new requirements for experiments

In the case that the existing offering cannot support a certain experiment, the Panlab office foresees a “Call for testing” procedure that allows the Panlab office to procure external resources that can satisfy the requirement. These external resources will be offered under a reselling agreement with the external service provider or testbed owner.

In the case that the requirements are communicated in advance, the Panlab office and its partners can evaluate the possibility to develop the required offering “in-house” and possibly in collaboration with the customer.

2.9.3.6. Cost of usage

The main cost of usage is typically based on the estimated personnel cost necessary to support the construction, deployment and operation of a VCT and the experiment. Costs may vary widely due to the large variation in the complexity of possible experiments; the only meaningful measure is the average personnel cost of an engineer assigned to support an experiment. Normally the cost will be fixed in the contract following the definition of the services that must be provided.

2.9.4. INTERNATIONAL LIAISON

At present the Panlab office maintains a liaison to the Synchromedia consortium and laboratory at École de technologie supérieure at the Université du Québec.

2.10. SMARTSANTANDER

2.10.1. OFFERING

The SmartSantander project aims at the creation of an experimental test facility for the research and experimentation of architectures, key enabling technologies, services and applications for the Internet of Things (IoT) in the context of a city (the city of Santander, located in the north of Spain). The envisioned facility is conceived as an essential instrument to achieve European leadership in key enabling technologies for IoT, and to provide the European research community with a unique platform of its kind, suitable for large-scale experimentation and evaluation of IoT concepts under real-life conditions.

The SmartSantander project provides a twofold exploitation opportunity. On the one hand, the research community benefits from the deployment of a unique infrastructure that allows true field experiments: researchers can reserve the required resources within the whole network for a determined time period in order to run their experiments. On the other hand, different services fitting citizens' requirements will be deployed. Unlike the experimental applications, it will be either the municipal authorities or the corresponding service managers who determine the cluster of nodes supporting each service, as well as the time requirements and availability of that service.

The project foresees the deployment of 20,000 sensors in the cities of Belgrade, Guildford, Lübeck and Santander (12,000 of them in Santander).

2.10.1.1. Description of the facility

This section presents the project architecture at both high and low level. The architecture is intended to address and effectively achieve the twofold approach, service provision and experimentation support, pursued by the SmartSantander project. The implemented architecture relies on a three-tiered network approach: the IoT node tier, the gateway (GW) tier and the testbed server tier.

• The IoT node tier embraces the majority of the devices deployed within the testbed infrastructure. It is composed of diverse heterogeneous devices, including miscellaneous sensor platforms, tailor-made devices for specific services, as well as Radio-Frequency Identification (RFID) and Near Field Communication (NFC) tags. These devices are typically resource-constrained and host a range of sensors, and in some cases actuators. Other devices, such as mobile phones and purpose-built devices with reasonable computing power (e.g. mobile devices in vehicles) that also provide wide-area communication capabilities, behave as IoT nodes in terms of sensing capabilities and as GW nodes in terms of processing and communication capabilities.

• The GW tier links the IoT devices on the edges of the capillary network to the core network infrastructure. IoT nodes are grouped in clusters that depend on a GW device. This node locally gathers and processes the information retrieved by the IoT devices within its cluster. It also manages them (transmission and reception of commands), thus easing the management and scaling of the whole network. The GW tier devices are typically more powerful than IoT nodes in terms of memory and processing capabilities, and also provide faster and more robust communication interfaces. GW devices allow virtualisation of IoT devices, which enables the instantiation of emulated sensors or actuators that behave in all respects like the actual devices.

• The server tier provides more powerful computing platforms with high availability and directly connected to the core network. The servers are used to host IoT data repositories and application servers. Server tier devices receive data from all GW tier nodes. As a final step, the concept of federation is supported by the architecture. Servers managing networks located in different physical locations can connect among themselves to allow the users of the platforms to transparently access IoT nodes that are deployed in different testbeds.

Figure 18 shows the high-level architecture consolidated in the SmartSantander as well as the main functionalities provided and associated with each of the tiers.

FIGURE 18. PLATFORM HIGH-LEVEL ARCHITECTURE AND BUILDING BLOCKS

The architecture distinguishes four subsystems, namely management, experimentation, application support and, a transverse one, the Authentication, Authorization and Accounting (AAA) subsystem. In order to access and interact with these subsystems, four interfaces have been defined, and respectively named Management support interface (MSI), Experimental support interface (ESI), Application support interface (ASI) and Access control interface (ACI). For each of the three levels of the architecture, the corresponding functionalities and services associated with the aforementioned four subsystems are implemented.

The Testbed Management subsystem performs three non-trivial management processes, namely: resource discovery, resource monitoring and testbed reconfiguration. The discovery of resources is an essential feature of an IoT platform as it provides support for resource selection according to the user's criteria (for example, sensed phenomena, sensor locality, measurement frequency, among others). This essentially entails i) the description of the diversity of IoT resources using a uniform IoT Resource Description Model, ii) the generation of these descriptions based on dynamic IoT node registrations, and iii) their lookup based on IoT device attributes, dynamic state and connectivity characteristics. Even under normal operation, the IoT platform is in a constant state of flux: IoT nodes fail, change their point of attachment, join the platform or undergo transitions through a number of operation states. Ensuring the correct execution of the IoT testbed's services in such a dynamic environment, and guaranteeing the testbed's resilience to failures, requires continuous monitoring of the state of its IoT resources. Finally, on the detection of hardware failures, fault-remediation strategies require that the testbed is reconfigured to omit the faulty nodes from future experimentation or service provisioning.

The Experiment Support subsystem provides the mechanisms required to support the different phases of the experiment lifecycle. During the specification phase, which mainly deals with resource selection (i.e. the selection of IoT devices and other testbed resources suitable for the execution of the desired experiment), the user is supported with functionality for exploring the available testbed resources (reserved resources are made available for the duration of the experiment), aiming at the selection of those whose capabilities fulfil the desired properties. Once the nodes selected for a determined experiment have been reserved and scheduled, they can be wirelessly flashed with the corresponding code image. This flashing procedure, carried out through Multihop-Over-The-Air Programming (MOTAP), enables nodes to be flashed as many times and with as many code images as required. Finally, during the execution phase, the experimenter is provided with tools for experiment execution control, experiment monitoring, data collection and logging.

The Application Support subsystem is in charge of providing the functionalities to facilitate the development of services based on the information gathered by the IoT nodes. Besides the storage of the observations and measurements coming from the IoT nodes, its main functionalities relate to the lookup and provision of observations to requesting services by means of a publish/subscribe/notify interaction.
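A purely illustrative sketch of such a publish/subscribe/notify interaction is given below. The endpoint URL, the payload fields and the callback address are hypothetical placeholders; the actual Application Support interface (ASI) and its data model are defined by the project documentation and may differ substantially.

import requests

ASI_URL = "https://api.example-smartcity.org/asi"  # hypothetical ASI endpoint

# Subscribe to temperature observations from a given area; notifications are
# pushed to the experimenter's callback URL when new measurements arrive.
subscription = {
    "phenomenon": "temperature",
    "area": "city-centre",
    "callback": "http://experimenter.example.org/notify",  # hypothetical receiver
}
resp = requests.post(ASI_URL + "/subscriptions", json=subscription, timeout=10)
resp.raise_for_status()
print("Subscription id:", resp.json().get("id"))

# Alternatively, query the latest stored observations directly.
latest = requests.get(ASI_URL + "/observations",
                      params={"phenomenon": "temperature", "limit": 5},
                      timeout=10)
print(latest.json())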

Finally, the AAA subsystem is in charge of the authentication and access control functionalities that are transversally carried out in order to protect all the interaction points that the platform exposes to the outside world. This functionality is only carried out at server level, in order to grant access to authenticated and authorized experimenters, service providers or testbed administrators.

Taking into consideration the aforementioned three-tiered architecture, both IoT nodes and GWs are intended to be experimented upon. In this sense, the next figure shows the different modules in charge of providing the methodology for interacting and experimenting with the deployed nodes.

From the user perspective, three main blocks can be identified: service provision, service experimentation and experimentation at node level. Service provision includes the use cases developed within the SmartSantander project, taking information from the IoT infrastructure and processing it accordingly to offer the corresponding services. Service experimentation refers to the different experiments/services that can be implemented by external users, utilizing the information provided by the IoT infrastructure deployed within the project. Experimentation at node level implies node reservation, scheduling, management and flashing in order to change behaviour and execute different experiments over a group of nodes, i.e., routing protocol, network coding schemes or data-mining techniques.

FIGURE 19. SMARTSANTANDER ARCHITECTURE AND BUILDING BLOCKS

From the platform operation point of view, in order to support the different types of users previously referred, the following main components are identified: Portal Server, Service Provision GW (SPGW), Service-Level Experimentation Manager (SLEM), Ubiquitous Sensor Network (USN) platform and Gateway for Experimentation (GW4EXP).

The Service Provision GW (SPGW) receives the data gathered by the deployed devices, storing them in the USN platform. The Node Manager is also fed with this information in order to monitor the available resources, reporting to the Resource Manager and keeping it up to date accordingly.

The Service-Level Experiment Manager (SLEM) allows the service-level experimenters (i.e. those running experiments using data provided by deployed nodes) to access data collected from the services, stored at the USN component. For service providers (i.e. those providing a service with data retrieved from the deployed nodes), data generated by nodes within the network is directly accessed through the USN.

The Portal Server represents the access point to the SmartSantander facility for node-level experiments, through the SmartSantander Testbed Runtime module, providing access to the platform (SNAA), reservation (RS) of a set of nodes to run the experiment, and actuation (iWSN) on them, both remotely flashing them with the corresponding code image [5] and receiving the data associated with the experiment carried out on them. Finally, the GW4EXP allows the nodes to be accessed both at network management and at experimentation level.

2.10.1.2. Main components - the timeline for the availability of the facility components

The Santander testbed is currently composed of around 3000 IEEE 802.15.4 devices, 200 devices with GPS/GPRS capabilities, and 2000 joint RFID tag/QR code labels deployed both at static locations (streetlamps, facades, bus stops) and on board public vehicles (buses, taxis). According to the blocks previously defined, and from the user perspective, all the aforementioned nodes support a specific service and also provide the data they gather to the different types of users interested in experimenting with them (experimentation at service level). Furthermore, some of these components also allow experimentation at node level, by remotely flashing them with the code image of the experiment to be carried out. In this sense, 1000 standard devices and 100 embedded Linux devices (80 of them mobile nodes) offering an additional native 802.15.4 interface will be available for experimentation. New use cases and their corresponding deployments associated with Phase 3 are currently being defined by the Consortium.

With respect to the activity periods of the deployed nodes, the following behavior classes have been defined:

- Traffic and outdoor parking nodes (450): Fed by non-rechargeable batteries, assuring a lifetime of approximately 2 to 3 years.

- Static environmental and irrigation nodes (1000): Rechargeable-battery-powered devices, offering a theoretically unlimited lifetime (bounded by the life of the corresponding batteries).

- Mobile nodes (150): Nodes powered by the vehicle's battery, including a 2-hour backup battery for when the vehicle engine is off.

- Static gateways (25): Nodes continuously attached to the main power.

In relation with the main features of the components previously described, the following ones can be highlighted:

- A double radio interface in some of the deployed nodes, allowing service provision and experimentation support simultaneously.

- The capacity to remotely flash both static and mobile nodes, allowing the behaviour of the network to be changed in terms of both service provision and experimentation support.

- Use of rechargeable batteries to satisfy the demand in terms of energy consumption, ensuring adequate operation times for the nodes.

From the experimenter point of view, the possibility to experiment over a dense real deployment, as well as being able to do it with a wide spectrum of sensor types, grants him/her a valuable opportunity for testing laboratory-tailored experiments in a real deployment, and to generate added value services, derived from the different data sources offered by the project.

2.10.2. USE CASES

2.10.2.1. Current use cases

The components previously described have been deployed in the city of Santander through the development of different use cases (shown in Figure 20):

FIGURE 20. USE CASES DEPLOYMENT

• Static Environmental Monitoring: Around 1000 IoT devices, installed (mainly in the city centre) at streetlamps and facades, are provided with different sensors which offer measurements of environmental parameters such as temperature, CO, noise and luminosity. All these devices are provided with two independent IEEE 802.15.4 modules: one running the Digimesh protocol (a proprietary routing protocol), intended for service provision (environmental measurements) as well as network management data transmission, whilst the other one (implementing a native 802.15.4 interface) is associated with data retrieval and experimentation purposes.

• Mobile Environmental Monitoring: In order to extend the aforementioned static environmental monitoring use case, apart from measuring parameters at static points, devices located on vehicles continuously monitor a number of environmental parameters associated with determined areas of the city. The modules installed on these vehicles are composed of a local processing unit in charge of sending (through a GPRS interface) the geolocated values retrieved by both the sensor board and the CAN-bus module. The sensor board measures different environmental parameters, such as CO, NO2, O3, particulate matter, temperature and humidity, whilst the CAN-bus module takes the main parameters associated with the vehicle, retrieved from the CAN bus, such as position, altitude, speed, course and odometer readings. Furthermore, an additional 802.15.4 interface is also included in order to carry out experimentation, interacting both with other vehicles and with the aforementioned static devices, the so-called vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. These modules have been installed in about 150 vehicles, including local buses, taxis and other public vehicles.

• Parks and gardens irrigation: Around 50 devices have been deployed in two green zones of the city to monitor irrigation-related parameters, such as soil moisture, temperature and humidity, rainfall (pluviometer), wind (anemometer), solar radiation and pressure, in order to make irrigation as efficient as possible. In terms of processing and communication, these nodes are the same as those deployed for static environmental monitoring, implementing two independent communication interfaces for service provision, network management and experimentation activities.

• Outdoor parking area management: Almost 400 parking sensors (based on ferromagnetic technology), buried under the asphalt, have been installed in the main parking areas of the city centre, in order to detect parking site availability in these zones.

• Guidance to free parking lots: Using the information retrieved by the deployed parking sensors, 10 panels located at the main street intersections have been installed in order to guide drivers towards the available parking lots.

• Traffic Intensity Monitoring: Around 60 devices located at the main entrances of the city of Santander have been deployed to measure main traffic parameters, such as traffic volumes, road occupancy, vehicle speed or queue length.

As previously noted for the components taking part in the deployment, all the described use cases are intended to provide a distinct service, as well as to offer the retrieved data to other users, the so-called experimentation at service level. On the other hand, static and mobile environmental monitoring and parks and gardens irrigation also offer the possibility of carrying out experimentation at node level, offering an additional communication interface. Apart from the referred use cases, two citizen-oriented services have been deployed, including corresponding applications for the Android and iOS operating systems, in order to foster citizens' involvement.

FIGURE 21. DEVELOPED APPLICATIONS: AUGMENTED REALITY (LEFT) AND PARTICIPATORY SENSING (RIGHT)

• Augmented Reality: As shown on the left side of Figure 21, this service includes information about more than 2700 places in the city of Santander, classified in different categories: beaches, parks and gardens, monuments, shops. In order to complement and enrich this service, 2000 RFID tag/QR code labels have been deployed, offering the possibility of "tagging" points of interest in the city, for instance touristic points of interest, shops and public places such as parks and squares. On a small scale, the service provides the opportunity to distribute information in the urban environment as location-based information.

• Participatory Sensing: As shown on the right side of Figure 21, in this scenario users utilize their mobile phones to send physical sensing information anonymously to the SmartSantander platform, e.g. GPS coordinates, compass heading, and environmental data such as noise and temperature. Users can also subscribe to services such as "the pace of the city", where they can get alerts for specific types of events currently occurring in the city. Users can themselves also report the occurrence of such events, which will subsequently be propagated to other users that are subscribed to the corresponding types of events.

It is important to highlight that, in the same way as for the aforementioned use cases, the information retrieved by these two services is made available by the SmartSantander platform, in order to enable other users to experiment with it (experimentation at service level).

Apart from the use cases defined within the city of Santander, other use cases have been deployed in the three other cities involved in the project and previously described: Guildford, Lübeck and Belgrade-Pancevo.

In the city of Guildford, 250 freely programmable sensor nodes have been deployed across all offices of CCSR (Centre for Communication Systems Research) at the University of Surrey, with various sensing modalities (temperature, light, noise, motion, electricity consumption of attached devices, vibration). The IoT nodes consist of 200 TelosB-based platforms and 50 SunSPOTs. 100 embedded Linux servers (GuruPlug Servers), directly connected to an Ethernet backbone, have been deployed and connected to the sensor nodes for their management. The data plane of the testbed is realized via wireless links based on 802.15.4, which can be single- or multi-hop between the IoT nodes and the GW devices.

The Lübeck facility consists of three sensor node hardware types: iSense, TelosB, and Pacemate. The nodes are arranged in clusters, where each cluster has one sensor node of each type. This testbed consists of roughly 300 stationary sensor nodes organized into 100 clusters. There are two different cluster layouts, which differ in the sensor module connected to the iSense node. Half of the iSense sensor nodes are equipped with temperature and light sensors, while the remaining nodes are equipped with a passive infrared sensor and accelerometers. All clusters are connected to a total of 35 netbooks forming the backbone of the testbed, which is connected to the Internet. The sensor nodes are connected to the netbooks via USB, while the netbooks are connected to the Internet over 802.11g Wi-Fi. In addition to the indoor and mobile nodes, UZL has deployed a number of outdoor, solar-powered iSense sensor nodes (approximately 35), featuring a rechargeable battery pack through the iSense Solar Power Harvesting System.

The EkoBus system (60 devices) deployed in the city of Pancevo is made available for experimentation at IoT data level. The system utilizes public transportation vehicles in the city of Pancevo to monitor a set of environmental parameters (CO, CO2, NO2, temperature, humidity) over a large area, as well as to provide additional information for the end-user, such as the location of the buses and estimated arrival times at bus stops. As the system is in commercial use, a replica of the system is made available for experimentation.

2.10.2.2. On the evolution of use cases

With regard to the aforementioned use cases, the next step is to combine the information they retrieve in order to improve the performance of current services, as well as to create other added-value services. Moreover, the scalability offered by the deployed network allows the addition of new devices with additional sensing capabilities, in order to fulfil the requirements of new use cases to be deployed in the city. The implementation of Delay Tolerant Network use cases, aiming at efficient experimentation code downloading and relying on opportunistic network connections, using mobile/WiFi possibilities depending on the environment, is conceived as an important issue to be addressed. It is also important to include the concept of virtualization on top of the most powerful SmartSantander nodes, allowing the execution of more complex experiments, and also virtualization on top of smartphones, exploring the environment that smartphones provide for running experiments. Finally, it is important to pay attention to trust, security and privacy issues from the perspective of the end-users, focusing on current end-user concerns about data exchange and privacy.

2.10.3. USAGE POLICIES

Experimenters becoming involved through the Open Calls mechanism will be provided with support by the consortium itself during the execution of their proposals. Apart from the online material and help made available to external experimenters through the project website, specific resources will be provisioned, according to the model developed within the exploitation and sustainability plan, to guarantee proper support to experimental activities beyond the duration of the project. In general, to be granted access to any part of the facility, the experimenter's entity will have to provide some administrative and legal guarantees related to the confidentiality of the information provided (NDA) and the commitment to a fair usage policy. The precise terms will be defined within the project. Specific contractual conditions apply to those institutions accessing the infrastructure through the Open Calls. These entities have become members of the consortium for a limited time span, and have accepted the conditions of the project's Consortium Agreement and other European regulations related to EC projects. These include privacy issues, restrictions with regard to data management, etc.

Although a large number of requirements have already been identified, the cyclic approach to design, implementation and deployment means that the project can gradually evolve and accommodate new requirements coming from external experimenters.

Furthermore, SmartSantander pro-actively seeks community requirements through surveys and workshops. Examples include the survey on IoT experimentation needs and the subsequent workshop organised at the IoT Week in Barcelona on 8 June 2011, the New Century Cities event held in Zaragoza and the IoT Forum in Bled, both in November 2012. A further survey and workshop are planned for the coming years.

During the next year the use of the platform will be free of charge. Once the project has ended, an appropriate sustainable exploitation model will be established. This model is being conceived taking into consideration the most relevant stakeholders. Both the Regional Government and the City Council have already initiated actions to provide economic support once the project has ended.

Once the SmartSantander project is finished, the Santander testbed will still be accessible via Fed4FIRE. With respect to experimentation requests wishing to run tests on the other testbeds (Guildford, Belgrade, and Lübeck), these proposals will be redirected to the corresponding site helpdesk in order to get them up and running, once the necessary preliminary steps (validation, customization, sanity checks, etc.) have been performed.

2.10.4. INTERNATIONAL LIAISON

The platform developed in SmartSantander is intended to be used in other smart city environments, mainly as a basis for other project consortia. Currently, activities to federate the deployments in Guildford, Belgrade, Lübeck and Santander are being carried out. Furthermore, synergies and complementarities have been identified with Aarhus, Berlin, Birmingham and Trento. The main idea is to experiment on the service plane, aiming at correlating and extrapolating the results obtained on the mentioned sites. This collaborative experimentation in the service plane is facilitated by the interaction brought by the OUTSMART project belonging to the FI-PPP. In this sense, collaboration in the EAR-IT project focuses on the exploitation of noise measurements offered by SmartSantander, whilst others like RADICAL try to combine information retrieved by smart city environments with that coming from social networks. In terms of federation with other testbeds, Fed4FIRE will deliver open and easily accessible facilities to the FIRE experimentation communities, focusing on fixed and wireless infrastructures, services and applications, and combinations thereof.

A number of links are also being maintained with other European smart city initiatives, such as the SenseSmartCity project developed in Skellefteå (Sweden). SmartSantander has become an ENOLL member in the 6th wave, as the IoT Smart Santander Living Lab. Discussions concerning collaboration with US testbeds under the GENI program have already started at previous EC workshops, and first steps are being taken to enhance co-operation with the ORBIT testbed at WINLAB. In this sense, SmartSantander is collaborating with NICTA and Rutgers University to adopt the OML/OMF/SFA architecture on top of the SmartSantander experimentation facility. This collaboration was agreed in the framework of the FIRESTATION architecture board as part of the alignment work between FIRE ICT Call 5 projects.

2.11. TEFIS

2.11.1. OFFERING

TEFIS, a Testbed for Future Internet Services, enables the efficient combination of services from various testing facilities (network resources, Living Lab resources, cloud resources). The testbeds offered by TEFIS allow a broad range of service characteristics to be explored, including functionality, performance, scalability, usability, maintainability, user experience/acceptability, and standards compliance. To this end, TEFIS provides a single access point for experimenters to manage their complete experiment, orchestrated across more than one testbed facility. It supports experimenters in getting the best out of the testbeds' available services and resources, which can be combined to best fit the experimenter's testing needs, including user behaviour, scale, performance and SLA compliance testing and experimentation. The services available from the TEFIS-connected testbeds are used in a wide variety of experiments in the fields of mobile media, e-health, network performance measurement, e-learning, collaborative content distribution in mobile networks and SDN.

The testing service offerings currently available through the TEFIS-connected testbeds include, among others:

1. Large-scale performance testing of Future internet services

2. Mobile service usefulness testing and IMS validation

3. Software capability testing and network efficiency

4. Integration capability testing

5. Software quality and compatibility testing

6. Mobile service quality testing

Table 5 below summarizes the testing objectives and how they relate to the Information Technology Infrastructure Library (ITIL) Service Lifecycle. In supporting experimentation with a view to improving services, all of these requirements need to be catered for.

TABLE 5. TEFIS SERVICE TESTING OBJECTIVES

Objective | ITIL Service Lifecycle Phase(s) | Requirements

Functional | Transition | Validate that all functions are available and work as expected.

Performance | Design; Transition | Validate that the service continues to operate during all predicted loads; validate that configuration recommendations are appropriate.

Scalability | Design; Transition; Operation | Validate that the service continues to operate in the same way at different levels of load; validate that all requirements for deployment scenarios can be met.

Usability | Transition | Validate that users are able to access and use the functions offered.

Maintainability | Transition; Operation | Validate that the service can be deployed and continue to run in the operational environment.

Acceptance | Transition | Validate that users are comfortable with the design and how to use the service.

Standards Compliance | Design | Validate that all appropriate standards or regulations have been met.

Individually, the testbeds made available to experimenters can be expected to provide support for these objectives. However, it is not necessarily the case that all of these objectives will be met by a single test facility. This is a major contribution of TEFIS: managing access to a range of different testbeds which can support all of these objectives within a single experiment and test run, or across multiple experiments and test runs.

2.11.1.1. Main components - the timeline for the availability of the facility components

2.11.1.1.1. Overview of the TEFIS platform

TEFIS provides a single access point through which experimenters are guided and supported during all phases of experiment design and execution on federated test resources. Figure 22 shows the TEFIS infrastructure from the experimenter (top) out to the federated testbeds (bottom). The platform itself is made up of five basic components: the TEFIS portal, the TEFIS middleware embedding the TEFIS core services supporting overall operation and control, the experiment data manager, and a connector interface to communicate with the federated testbeds.

FIGURE 22. TEFIS FUNCTIONAL ARCHITECTURE

The TEFIS portal is the main access point for the user to create an account (the TEFIS Identity Manager Interface), to search for resource (the TEFIS Directory Interface) and to design the workflow for their experiment (the TEFIS Experiment Manager Interface). Each of these is supported by a corresponding component within the TEFIS middleware (the Identity Manager, the Resource Directory and the Experiment Manager, respectively). In addition, the TEFIS Experimental Data Interface allows the experimenter to search any existing experiments to locate those with similar goals or set-up to what they are proposing, as well as to interact with monitoring data and experimental results once the experiment or an individual run completes.

The Core Services manages the whole platform and the execution of the experimenter’s test runs. The Experiment and Workflow Scheduler takes the workflow created by the experimenter and presents it for execution to the appropriate test resource. The Resource Manager interacts with the resources to reserve and initiate them. The Supervision Manager handles all monitoring activities, checking performance within the TEFIS environment itself as well as communicating with the testbeds to retrieve monitoring output from them.

The Connector Interface provides the central control for all communication to and from the testbeds. It is a significant benefit of TEFIS that it can offer not only support to experimenters, but also to testbed providers. Both users (experimenters) and providers (testbeds) are essential participants in the TEFIS platform and vision. It is the Connector Interface that provides the mechanism for these communities to co-operate.

Page 73: Common roadmap of FIRE test facilities – Fourth versionhome.ufam.edu.br/hiramaral/04_SIAPE_FINAL_2016/SIAPE_Biblioteca... · Common roadmap of FIRE . test facilities – Fourth

D3.7 - Common roadmap of FIRE test facilities – 4th version

Version 1.0 – 23/05/2013 Page 73

The Data Services support the other TEFIS components as well as experimenters and testbed providers with all of the data needs associated with an experiment. The TEFIS data filesystem holds all data and information necessary for the satisfactory execution of an experimental test run, as well as providing storage for output data from each workflow step. Through the experimental metadata we have defined, experiments can be tracked and browsed in support of experimenters within the community wanting to find or review related work.
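To make the role of these components more concrete, the following fragment is a purely illustrative Python sketch of how a multi-testbed experiment could be described as a set of dependent workflow steps before being handed to the Experiment and Workflow Scheduler. All field names, step actions and testbed identifiers are hypothetical and do not correspond to the actual TEFIS workflow format.

    # Hypothetical description of a TEFIS-style experiment workflow.
    # Structure, field names and testbed identifiers are illustrative only.
    experiment = {
        "name": "mobile-media-load-test",
        "steps": [
            {"id": "build", "testbed": "ETICS", "depends_on": [],
             "action": "build-and-unit-test"},
            {"id": "deploy", "testbed": "PACA Grid", "depends_on": ["build"],
             "action": "deploy-service", "nodes": 32},
            {"id": "load", "testbed": "PACA Grid", "depends_on": ["deploy"],
             "action": "generate-load", "virtual_users": 5000},
            {"id": "user-panel", "testbed": "BOTNIA", "depends_on": ["deploy"],
             "action": "end-user-evaluation", "participants": 40},
        ],
    }

    # A scheduler would walk the steps in dependency order, ask the Resource
    # Manager to reserve the matching testbed resources, and push all output
    # to the experiment data manager.
    for step in experiment["steps"]:
        print(step["id"], "runs on", step["testbed"],
              "after", step["depends_on"] or "nothing")

In such a setup, the Supervision Manager would monitor each step while it executes and the resulting data would be stored step by step in the TEFIS data filesystem described above.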

2.11.1.1.2. Components offered by the TEFIS connected testbeds and living labs

PACA Grid
Short description: Addresses parallel, distributed, and multi-threaded computing and cloud applications.
Benefits: Users of the PACA Grid facilities may take advantage of a computing infrastructure for large-scale computations, and of a number of tools to automatically deploy and execute the distributed application, monitor the progress of the computation and retrieve the computation results.
Components offered: Each node of the infrastructure is equipped with a dual-processor quad-core AMD Opteron 2356 processing unit, or a quad-processor hexa-core Intel Xeon E4750, for a total of 688 cores. A Windows CCS cluster of 8 computing nodes is also available; each node has a dual-processor quad-core Intel Xeon E5320, for a total of 64 cores. Several software packages are available on the machines, and any library and software that may be required can be installed on the nodes on demand.

ETICS
Short description: Automatic build, test and quality certification for any distributed software, exploiting distributed resources.
Benefits: Using ETICS throughout the development process lifecycle improves the quality of the software and reduces the time spent on build, integration and test activities, offering the possibility to achieve shorter time-to-market and a lower risk to delivery schedules.
Components offered: A pool of around 50 computational nodes supporting all major platforms: RedHat, Debian, CentOS, Ubuntu, Windows Server.

SQS-IMS
Short description: Validation and testing of converged Next Generation Services. Emulated and real IMS networks supporting OMA, SIP, PGM, 3GPP, TISPAN, SS7 and IM standards.
Benefits: The SQS testbed offers both the emulated IMS platform and connection to real environments to allow tasks to be run and tested. The SQS testbed also provides end-to-end validation and testing services for the telecommunications sector.
Components offered: IMS Core, IMS Roaming broker, monitoring and testing tools, a human team for IMS validation.

BOTNIA
Short description: The Botnia Living Lab focuses on the support of human-centric innovation on advanced ICT services for “Extended Capabilities and Mobility”.
Benefits: Added value for facility users offered by Botnia Living Lab: to speed up the innovation process from idea to market launch by end-user involvement; to co-create, tap into and improve innovative ideas and concepts; to investigate and create new business opportunities.
Components offered: More than 5000 end-users, the Form-IT methodology for user involvement, and human expertise in user testing.

KyaTera
Short description: Resources to develop science, technologies, and applications of the future Internet remotely, collaborating via a high-capacity optical network in São Paulo State (Brazil).
Benefits: KyaTera can be seen as two different services: as a high-speed network, or as a network measurement tool. The KyaTera network performance meter will be created to measure the quality of the network, targeting the problem of transmitting multimedia with a predetermined quality of service, i.e., with a pre-determined pattern, measuring the parameters needed in the network so that the desired quality is achieved.
Components offered: The KyaTera fiber-optic network.

PlanetLab
Short description: The facility is a powerful infrastructure for the testing and evaluation of network protocols and distributed systems on a large scale under real conditions.
Components offered: PlanetLab currently consists of 1018 nodes at 487 sites across the globe, together with a number of monitoring capabilities.

PSNC Hybrid Cluster and 16K Visualization Tiled Display (pending testbed)
Short description: The Poznań Supercomputing and Networking Center's Hybrid Cluster is a big GPU cluster remotely available via the QosCosGrid (QCG) middleware and managed locally by the SLURM queuing system.
Benefits: This facility mainly targets scientific communities that need to analyse and visualize large data volumes as well as speed up their scientific computations. Advanced visualization, involving not only static images or offline-generated movies but also human-computer interaction and steering, will allow experimenters a much better analysis and an enhanced understanding of their scientific problems.
Components offered: A 16-tile visualization wall for experimenters to investigate and evaluate new ways of visualizing scientific data in wide-area distributed environments with the use of high-resolution displays.

More details on the facilities and the access policies can be found in the clickable online version of the above table, available via http://www.tefisproject.eu/facilities/experimental-facilities.

2.11.2. USE CASES

The TEFIS platform has examined a number of representative Future Internet scenarios, including eCommerce, multimedia services and eHealth. The scenarios provide a powerful demonstration of the benefits of TEFIS, since each requires multiple test facilities to be managed as a single, complete test entity for the various stages of the specific tests.

The services offered by TEFIS connected testbeds are used today in different experiments in a variety of domains:

1. The e-travel use case: a large-scale SOA application for a huge travel eCommerce platform, accessible both from websites and via Web Services.

2. The e-health application: related to the Distributed Patient Medical Record (PMR), which manages medical information exchange between patient, medical staff (doctor, nurse, etc.) and pharmacy.

3. The Mobile Media experiment: a mobile application over IMS for content sharing.

4. QUEENS: Dynamic Quality User Experience ENabling Mobile Multimedia Services.

5. Quagga: experimenting with an open API for Quagga and cross-layer coordinated networks.

6. Smart Ski resort for collaborative content distribution.

7. TEFPOL: augmented reality collaborative workspace using a Future Internet videoconferencing platform for remote education and learning.

More about the use cases is available online via http://www.tefisproject.eu/about/tefis-use-cases.

2.11.3. USAGE POLICIES

Access to the TEFIS-connected testbeds is provided via the TEFIS portal (www.tefisportal.eu). This portal provides support for:

• Faster experiment development in supporting review and searching of previous and related experiments, as well as supporting experimenters through the various stages of preparation, workflow definition and execution

• Brokerage of test resources; testbed providers are given access to a wider potential user audience through a common and experimenter-supportive interface; and experimenters can submit a single experiment to be run across as many testbeds as are appropriate to satisfy their test requirements

• Multiple test facilities. The TEFIS platform provides access to different facilities offering different services and capabilities, from large computer clusters to highly distributed systems and network simulators. A significant benefit that TEFIS currently provides is the availability of a Living Lab (Botnia) within the current test facilities that may be federated

• Community Support through the facility to be able to share results as well as set-up. TEFIS may also enable the exchange of information and data between experimenters with similar goals and aspirations

The TEFIS portal acts as an intermediary for test services provided by companies or research organisations (test providers) to the users. The portal is open to any user and includes features to plan and run experiments on the testbeds described above. The testbed providers are solely responsible for the services provided, according to each provider's own access policies and terms, available on their own websites. Remuneration or compensation for the use of services provided by the testbed providers is to be determined by each testbed provider and is to be paid by the user to the test provider according to the terms determined by that provider.

2.11.4. INTERNATIONAL LIAISON

One of the testbeds that TEFIS encompasses is the KyaTera facility in Brazil. KyaTera is a high-performance fibre-optic network, which grew out of an original effort to support collaboration between different members of the Brazilian scientific community in developing the science, technologies and applications appropriate to the Future Internet. It provides the capability to evaluate the quality of the network, especially when complex objects, such as multimedia data, need to be transmitted at a specific level of quality. The different parameters affecting Quality of Service can be identified and explored. The benefit for TEFIS users is access to advanced and controlled network capabilities for investigating all aspects of transmission and Quality of Service.

2.12. WISEBED

2.12.1. OFFERING

WISEBED offers a federated wireless sensor network testbed (www.wisebed.eu) that comprises approximately 1000 nodes distributed over 9 European sites and an algorithm library for the platform-independent and efficient implementation of algorithms (www.wiselib.org). In addition, the WISEBED consortium maintains a set of open APIs for testbed management, use, and operation as well as a reference implementation of these APIs. Both are used to run the testbeds of WISEBED and SmartSantander.

2.12.1.1. Description of the facility

The main feature of the WISEBED platform is its virtual nature: it is easy to interconnect all kinds of sensor networks, or even single sensor nodes, into one huge virtual testbed. WISEBED's architecture and main features are shown in Figure 23.

FIGURE 23. WISEBED ARCHITECTURE

One of the main ideas is that it is possible for providers to install their own implementations of the APIs and clients of these APIs. Figure 24 shows the available APIs and their functionality. A formal description is available at the WISEBED website.

FIGURE 24. WISEBED APIS

2.12.1.2. Main components - the timeline for the availability of the facility components

The full functionality is available now; there is no further development plan, since the project has already ended.

2.12.2. USE CASES

The WISEBED consortium has collected more than ten Use Cases giving a good overview of the potential of the platform. The Use Cases range from single student projects, for instance on the evaluation of cryptographic protocols, to industry projects (coalescences GmbH evaluated their implementation of the CoAP protocol) to complete EU projects running a vast set of experiments (in this case the FRONTS project).

2.12.3. USAGE POLICIES

WISEBED is open for any registered user. Registration information is available on www.wisebed.eu. Each individual testbed, as well as the federated testbed, offers a set of WISEBED Web Service APIs, and a number of clients (web-based, command-line driven, etc.) exist. Experiments are conducted by uploading programs to the sensor nodes via the APIs, by receiving performance metrics emitted by the nodes, and by sending messages to the nodes for experimentation control.

2.12.3.1. How to connect to the facility

In order to perform experiments, users first have to obtain an account. This is done by following the instructions on the WISEBED website. Once an account has been set up, experimenters use their account information in order to access the testbed management application. Currently, authentication is done via Shibboleth, though this is left open to the implementer of a testbed (someone who wants to set up their own testbed).

2.12.3.2. How to perform experiments

Experiments are performed following the “algorithm” shown in Figure 25.

FIGURE 25. PERFORMING EXPERIMENTS IN WISEBED

First, the experimenter has to create the software implementation of the protocol/algorithm/application to be experimented with. Afterwards, the experimenter has to select the nodes and networks to be included in the virtual testbed, and then asks for a reservation and is given a secret key (a handle) for it. At the scheduled time for the experiment, the necessary nodes will be reserved and the testbed will be instantiated. Then, the experiment will be executed; during this time, the testbed can even be reconfigured. Finally, when the experiment has ended, the user can collect the resulting data.
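The following Python fragment is a minimal sketch of this lifecycle. The helper functions are hypothetical placeholders standing in for calls to the WISEBED Web Service APIs (authentication, reservation and experiment control) or to one of the existing clients; node URNs, credentials and file names are examples only.

    from datetime import datetime, timedelta

    # Hypothetical wrappers around the WISEBED Web Service APIs.
    def authenticate(username, password):
        return {"user": username, "token": "example-token"}

    def make_reservation(credentials, node_urns, start, end):
        return "secret-reservation-key"          # the handle returned for the reservation

    def flash_nodes(reservation_key, node_urns, image_path):
        print("flashing", len(node_urns), "nodes with", image_path)

    def send_message(reservation_key, node_urn, payload):
        print("sending", payload, "to", node_urn)  # experiment control message

    def collect_output(reservation_key):
        return ["node 0x1234: 42 packets forwarded",
                "node 0x1235: 40 packets forwarded"]

    credentials = authenticate("experimenter", "password")
    nodes = ["urn:wisebed:uzl1:0x1234", "urn:wisebed:uzl1:0x1235"]   # example node URNs
    start = datetime.utcnow() + timedelta(hours=1)
    key = make_reservation(credentials, nodes, start, start + timedelta(minutes=30))

    flash_nodes(key, nodes, "my_protocol.bin")        # upload the experiment image
    send_message(key, nodes[0], b"start-experiment")  # trigger the experiment
    for line in collect_output(key):                  # retrieve the resulting data
        print(line)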

2.12.3.3. Support organization for experimenters.

Currently, all 9 WISEBED partners offer support for the experimenter. Since the project has ended, this is a temporary solution; the partners are in the process of creating a WISEBED foundation which will have the goal of running the infrastructure and further developing the software.

2.12.3.4. Contract requirements for usage

WISEBED usage is still free of charge; however, this will change in the near future. For new FP7 and FP8 projects, the WISEBED partners expect to be contacted first in order to become involved in the application process.

2.12.3.5. Plans how to accommodate new requirements for experiments

Further API and platform enhancements are realized in cooperation with SmartSantander. The process is open for third parties and the reference implementation is open source (http://wisebed.eu/site/run-your-own-testbed/testbed-runtime/).

2.12.3.6. Cost of usage

During the project runtime (until the end of May 2011), the usage of the platform was free of charge for everyone interested, without any limitations in experiment duration or permitted resource usage. In order to keep up the infrastructure and continue developing the WISEBED software, the former consortium currently plans to build a foundation which would be responsible for operating the WISEBED infrastructure, including AAA issues. In the meantime, WISEBED can still be used free of charge.

2.12.4. INTERNATIONAL LIAISON

WISEBED has already been used and extended by non-European partners, namely an Argentinean university, which created a testbed and connected it to WISEBED, and a university from New Zealand, which ran experiments on the platform.

2.13. FED4FIRE

2.13.1. OFFERING

2.13.1.1. Description of the facility

In recent years numerous projects for building FIRE facilities have been launched, each targeting a specific community within the Future Internet ecosystem. The goal of the Fed4FIRE project is to federate these different facilities using a common federation framework. Such a federation can prove to be beneficial in several ways. First of all, it enables innovative experiments that break the boundaries of these domains. It also allows experimenters to more easily find the right resources to translate their ideas into actual experiments, to easily gain access to different nodes on different testbeds, to use the same experimenter tools across the different testbeds etc. This means that the experimenters can focus more on their research tasks than on the practical aspects of experimentation. The federation is also useful from the infrastructure providers’ point of view, since they can reuse common tools developed by the federation, they can reach a larger community of possible experimenters through the federation, etc.

The technical approach to federation adopted by the project is characterized by the following aspects:

• The architecture prefers distributed components. This way, the federation would not be compromised if, in the short or long term, individual testbeds or partners were to discontinue their support of the federation. However, some non-critical central federation-level components are also included in the architecture for convenience purposes. Examples are directories providing introductory information about the provided testbeds and tools, a reservation broker, etc.

• Resource discovery, reservation and provisioning functions rely on the SFA API (a minimal discovery sketch is given after this list). This way, the experimenter can use both the dedicated Fed4FIRE portal and any other SFA-compliant experimenter tool, according to his or her personal preference. Through the definition of a common resource specification (RSpec) based on ontologies, the project aims to make it easier for experimenters to include heterogeneous Future Internet technologies. Support for federation-wide resource reservation is another aspect that will be fostered over the course of the project.

• Experiment control relies on the adoption by all federated testbeds of the novel and open FRCP protocol for resource control in a federated environment. This again allows experimenters to use their preferred tools. At first these will be OMF6 and NEPI, but any other experiment control tool that adopts FRCP should be natively supported by the Fed4FIRE testbeds.

• Monitoring functions are divided into three types: facility monitoring, infrastructure monitoring and experiment measuring. The project allows the usage of any preferred tool for the implementation of such functionality (Zabbix, Nagios and Collectd are just a few examples), but it requires that the monitoring data is exposed in a uniform way: through OML streams.

• Experimenter authentication functions are based on the usage of X.509 certificates. This way, experimenters affiliated with Fed4FIRE testbeds that already provide such X.509 identities to their experimenters can access all Fed4FIRE testbeds using their existing testbed account. For other experimenters the project also provides a dedicated Fed4FIRE X.509 identity provider. In order to allow role-based authorisation (where e.g. a professor can have higher privileges than a master student), Fed4FIRE is defining an appropriate extension to the standard X.509 certificate that enables such role-based authorisation.
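As an illustration of the SFA-based resource discovery mentioned above, the sketch below queries an aggregate manager for its advertisement RSpec over the XML-RPC based AM API, authenticating with the experimenter's X.509 certificate. The endpoint URL, file paths and the empty credential list are placeholders; in practice the credentials are obtained from the experimenter's identity provider, and the exact API version and options depend on the testbed.

    import ssl
    import xmlrpc.client

    # Placeholders: the experimenter's X.509 certificate/key and a testbed AM endpoint.
    CERT = "/home/experimenter/fed4fire_cert.pem"
    KEY = "/home/experimenter/fed4fire_key.pem"
    AM_URL = "https://am.example-testbed.org:12369/am/2.0"

    # Client-authenticated TLS; many testbeds use self-signed server certificates,
    # hence the relaxed server verification in this sketch.
    ctx = ssl.create_default_context()
    ctx.load_cert_chain(CERT, KEY)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    am = xmlrpc.client.ServerProxy(AM_URL, context=ctx)
    print(am.GetVersion())

    # ListResources returns the advertisement RSpec describing available resources.
    credentials = []   # placeholder: normally retrieved from the member authority
    options = {"geni_rspec_version": {"type": "GENI", "version": "3"},
               "geni_available": True}
    print(am.ListResources(credentials, options))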

2.13.1.2. Main components - the timeline for the availability of the facility components

As depicted in Figure 26, there are currently 13 testbeds involved in the Fed4FIRE federation, introducing a diverse set of Future Internet technologies. During the course of the project even more testbeds will join the federation, since two open calls will be launched that aim to allocate budget to selected candidate testbeds for inclusion in the Fed4FIRE project. A first open call was launched in May 2013; a second open call will follow in 2014.

FIGURE 26. OVERVIEW OF THE TESTBEDS CURRENTLY BELONGING TO FED4FIRE

A first category of testbeds are the wired testbeds:

• Virtual Wall (iMinds): The Virtual Wall (http://www.iminds.be/en/develop-test/ilab-t/virtual-wall) is an emulation environment that consists of 100 nodes (dual-processor, dual-core servers) interconnected via a non-blocking 1.5 Tb/s Ethernet switch, and a display wall (20 monitors) for experiment visualization. Each server is connected with 4 or 6 gigabit Ethernet links to the switch. The experimental setup is configurable through Emulab, allowing the creation of any network topology between the nodes through VLANs on the switch. On each of these links, impairments (delay, packet loss, bandwidth limitations) can be configured. The Virtual Wall nodes can be assigned different functionalities, ranging from terminal, server and network node to impairment node. Being an Emulab testbed at its core, the Virtual Wall has displayed a high level of flexibility, since it is also applied in the context of OpenFlow experimentation in its role as an OFELIA island, and in the context of cloud experimentation in its role as a BonFIRE island.

- Status: existing testbed, expected to be available through the federation at the end of the first development cycle of the project (Feb. 2014)

- The testbed will remain available until the end of the project (Sep. 2016)
- Features: create any desired network topology
- Benefits: scale, control

• PlanetLab Europe (UPMC): PLE is the European arm of the global PlanetLab system, the world's largest research networking facility, which gives experimenters access to Internet-connected Linux virtual machines on over 1000 networked servers located in the United States, Europe, Asia, and elsewhere. In all cases these servers are connected directly to the public Internet, without any firewall or proxy server in between. Researchers use PLE for experiments on overlays, distributed systems, peer-to-peer systems, content distribution networks, network security, and network measurements, among many other topics.

- Status: existing testbed, expected to be available through the federation at the end of the first development cycle of the project (Feb. 2014)

- The testbed will remain available until the end of the project (Sep. 2016)
- Features: inclusion of machines connected directly to the Internet and spread across a large geographical area
- Benefits: scale, real-life aspect

A second category of testbeds are the wireless testbeds:

• Norbit (NICTA): The NORBIT testbed is a Wi-Fi testbed located in Sydney, Australia. It belongs to the NICTA group. The testbed consists of 38 nodes equipped with a 1 GHz VIA C3 CPU, 512 MB RAM, 40 GB HD, 2 Gigabit Ethernet interfaces and 2 IEEE 802.11a/b/g/n interfaces. These nodes are installed indoors in an office environment, and are fully managed by OMF. One of the Gb Ethernet interfaces of all the nodes is attached to a configurable OpenFlow switch. This provides an experimental 1Gb wired network, in addition to the experimental wireless networks that the nodes can build.

- Status: existing testbed, expected to be available through the federation at the end of the first development cycle of the project (Feb. 2014)

- The testbed will remain available until the end of the project (Sep. 2016)
- Features: Wi-Fi testbed, indoor and outdoor deployment, OpenFlow experimentation
- Benefits: Real-life aspect of the wireless environment

• w-iLab.t (iMinds): This testbed is intended for Wi-Fi and sensor networking experimentation. It is located in Zwijnaarde, a district of Ghent, and belongs to iMinds. It can be found in an unmanned utility room (size: 66m x 22.5m). There is practically no external radio interference in this deployment. At this location, hardware is hosted at 60 spots. Every spot is equipped with: 1 embedded PC with 2 Wi-Fi a/b/g/n interfaces and 1 IEEE 802.15.1 (Bluetooth) interface; a custom iMinds-Rmoni sensor node with an IEEE 802.15.4 interface; an “environment emulator” board, enabling unique features of the testbed including the triggering of repeatable digital or analog I/O events at the sensor nodes, real-time monitoring of the power consumption, and battery capacity emulation. There are two additional possibilities in the Zwijnaarde deployment: a number of cognitive radio platforms (including USRPs) as well as specialized spectrum scanning engines are available at this location. This enables state-of-the-art research in the field of cognitive radio and cognitive networking. Besides, a deployment with 20 mobile robots is planned for 2013. This will allow experiments to be scheduled that include up to 20 mobile nodes (similar to the 60 fixed nodes) to be operated in the environment.

- Status: existing testbed, expected to be available through the federation at the end of the first development cycle of the project (Feb. 2014)

- The testbed will remain available until the end of the project (Sep. 2016)
- Features: Wi-Fi testbed, indoor deployment, mobile nodes
- Benefits: Limited wireless interference, controlled mobility of mobile nodes

• NITOS (UTH): The NITOS testbed is a Wi-Fi testbed of the University of Thessaly, Volos, Greece. It consists of 50 nodes installed in- and outdoors in an office environment, and 50 additional nodes are currently being installed in an indoor shielded environment (basement). Three different hardware types are installed, with different performance characteristics and 802.11 technologies:

- 30 nodes similar to the Norbit nodes: 1 GHz VIA C3 CPU, 2 IEEE 802.11a/b/g interfaces.
- 20 so-called Commell nodes: Intel Core 2 Duo P8400 2.26 GHz, 2 IEEE 802.11a/b/g interfaces. 16 of these nodes are deployed in a grid; the 4 others are attached to GNU Radio boards and support MIMO features.
- 50 so-called Icarus nodes: Intel Core i7-2600, 1 IEEE 802.11a/b/g interface, 1 IEEE 802.11a/b/g/n interface. These are the nodes that are being installed in the shielded environment.

On each node, one of the two Ethernet interfaces is connected to an OpenFlow switch. This way, the testbed can also be used for OpenFlow experimentation in addition to the typical Wi-Fi experiments.

- Status: existing testbed, expected to be available through the federation at the end of the first development cycle of the project (Feb. 2014)

- The testbed will remain available until the end of the project (Sep. 2016)
- Features: Wi-Fi testbed, real-life and shielded environment, OpenFlow support
- Benefits: Combination of different environments, scale

• Netmode (NTUA): The NETMODE testbed is a Wi-Fi testbed belonging to the National Technical University of Athens (NTUA). It consists of 20 x86-compatible nodes positioned indoors in an office environment. 18 of these nodes consist of an alix3d2 board with two IEEE 802.11a/b/g interfaces, one 100 Mbit Ethernet port, two USB interfaces and a 1 GB flash card storage device. The other 2 nodes are more powerful, applying an Intel Atom CPU and a 250 GB hard drive, and providing two 802.11a/b/g/n interfaces and one Gigabit Ethernet interface.

- Status: existing testbed, expected to be available through the federation at the end of the first development cycle of the project (Feb. 2014)

- The testbed will remain available until the end of the project (Sep. 2016)
- Features: Wi-Fi testbed, office environment
- Benefits: Real-life aspect of the wireless environment

• SmartSantander (UC): The SmartSantander facility associated with Fed4FIRE is the large-scale smart city deployment in the Spanish city of Santander. More detailed information can be found at http://www.smartsantander.eu/index.php/testbeds/item/132-santander-summary. In short, the testbed is currently composed of around 2000 IEEE 802.15.4 devices deployed in a three-tiered architecture: sensor nodes, repeater nodes and gateways. The testbed supports two types of experiments: Internet of Things native experimentation (wireless sensor network experiments) and service provision experiments (applications using real-time real-world generated sensor data). At the moment, it is only planned to include service provision experiments in Fed4FIRE.

- Status: existing testbed, service experiments expected to be available through the federation at the end of the first development cycle of the project (Feb. 2014). Support for IoT experiments through the Fed4FIRE federation remains to be decided.

- The testbed will remain available until the end of the project (Sep. 2016)
- Features: Sensor network, Smart City
- Benefits: Real-life aspect of the wireless environment, scale

• FuSeCo (FOKUS): The FuSeCo (Future Seamless Communication) Playground, located in Berlin, is a pioneering reference facility integrating various state-of-the-art wireless broadband networks. Two of its most important components are the OpenIMS Playground and the 3GPP Evolved Packet Core (EPC) prototype platform. In terms of access network hardware, Radio Access Network infrastructure is provided for 2G, 3G and 4G technologies. To complete this setup, several UEs are made available. The testbed addresses large- and small-scale equipment vendors, network operators, application developers and research groups that want to deploy and extend their components and applications on a trial basis before market introduction, in order to manifest their advance in the international telecom market.

- Status: existing testbed, expected to be available through the federation at the end of the first development cycle of the project (Feb. 2014)

- The testbed will remain available until the end of the project (Sep. 2016)
- Features: Reference implementation of IMS and EPC
- Benefits: Unique type of provided resources

A third category of Fed4FIRE testbeds are optical testbeds:

• OFELIA islands of UBristol and of i2CAT: In this category of testbeds, experiments are all about OpenFlow, on top of optical hardware or packet switches. In a typical experiment some virtual machines are deployed both as data sources and as data consumers, an OpenFlow path is established between them, and an OpenFlow controller is deployed that can dynamically change the path based on certain conditions (a minimal controller sketch is given after this entry). Two separate OFELIA testbeds take part in Fed4FIRE: the islands of UBristol and i2CAT. The former consists of 7 OpenFlow switches and 3 ROADM nodes, the latter of 5 OpenFlow switches and 3 ROADM nodes. Both sites also operate some servers to host virtual machines on.

- Status: existing testbeds, expected to be available through the federation at the end of the first development cycle of the project (Feb. 2014)

- The testbed will remain available until the end of the project (Sep. 2016)
- Features: OpenFlow switching, optical
- Benefits: Unique type of provided resources
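As a concrete illustration of the controller role described above, the following is a minimal OpenFlow 1.0 controller written for the open-source Ryu framework (used here instead of the NOX controller mentioned elsewhere in this document). It simply floods every incoming packet, which is the typical starting point before adding experiment-specific path logic; switch addresses and deployment details are left to the testbed.

    # Minimal Ryu application: flood every packet that reaches the controller.
    # Run with: ryu-manager flood_switch.py (assuming Ryu is installed).
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_0

    class FloodSwitch(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_0.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
        def packet_in_handler(self, ev):
            msg = ev.msg
            dp = msg.datapath
            ofp = dp.ofproto
            parser = dp.ofproto_parser

            # Forward the packet out of all ports except the one it came in on.
            actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]
            data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
            out = parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                      in_port=msg.in_port, actions=actions,
                                      data=data)
            dp.send_msg(out)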

• Koren (NIA): Koren is a high-speed research network interconnecting 6 POPs in Korea (with support for bandwidth on demand), and also connecting to other international research networks. At the POPs, there are some OpenFlow switches and DCN switches, and servers that can host virtual machines. An experiment looks as follows:

- VMs are created that act as data sources and/or sinks in the experiment. These are physically deployed at one or more POPs.
- At every POP of interest, the experimenter can reserve several ports of the OpenFlow and DCN switches. To control how traffic will be forwarded over these ports, the experimenter deploys a new VM on which he installs a controller such as NOX.
- If the experiment makes use of several POPs, the experimenter can reserve dedicated bandwidth between them. Because of the high-speed link to GÉANT, it is also possible to combine this Koren part of the experiment with other components belonging to the Fed4FIRE federation, but bandwidth cannot be guaranteed outside of the Koren network.

- Status: existing testbed, expected to be available through the federation at the end of the first development cycle of the project (Feb. 2014)

- The testbed will remain available until the end of the project (Sep. 2016)
- Features: OpenFlow switching, optical

A fourth and last category of Fed4FIRE testbeds are those related to cloud computing:

• BonFIRE (EPCC island and INRIA island): The focus of BonFIRE is on cloud computing experiments. BonFIRE consists of six different testbeds, of which two are represented through BonFIRE in Fed4FIRE: EPCC and INRIA. In this context, these testbeds are only accessible through the BonFIRE Service APIs, not directly. BonFIRE has been designed around four principles: Observability, Control, Advanced Cloud Features and Ease of Use. In terms of provided hardware, the two islands are characterized as follows:

At the EPCC island, two physical hosts provide 4 AMD Opteron 2.3 GHz processors each, for a total of 96 cores and 256 GB of memory. A separate front-end node offers 24 TB of disk, available to all cores through NFS. In-depth monitoring information at VM and hypervisor level is made available to the experimenters through the Zabbix framework.

Inria's contribution is composed of a permanent set of resources shared between users, and of on-request resources taken from the local Grid'5000 site at the request of BonFIRE users. The permanent resources are provided by 4 physical hosts, each with 2 Intel hexa-core processors. A separate front-end node offers 3.82 TB of disk, available to all cores through NFS. In-depth monitoring information at VM and hypervisor level is made available to the experimenters through the Zabbix framework (a sketch of such a monitoring query is shown below).
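A sketch of how such Zabbix monitoring data could be retrieved programmatically is given below, using the standard Zabbix JSON-RPC API. The URL, credentials and item key are placeholders, and the BonFIRE-specific way of exposing the monitoring endpoint may differ.

    import requests

    ZABBIX_URL = "https://monitor.example-bonfire-site.eu/zabbix/api_jsonrpc.php"  # placeholder

    def zabbix_call(method, params, auth=None):
        # Generic helper for the Zabbix JSON-RPC API.
        payload = {"jsonrpc": "2.0", "method": method, "params": params,
                   "auth": auth, "id": 1}
        response = requests.post(ZABBIX_URL, json=payload, timeout=30)
        response.raise_for_status()
        return response.json()["result"]

    # Authenticate (placeholder account) and fetch the latest CPU load reported
    # for the experiment's virtual machines.
    token = zabbix_call("user.login", {"user": "experimenter", "password": "secret"})
    items = zabbix_call("item.get",
                        {"search": {"key_": "system.cpu.load"},
                         "output": ["hostid", "name", "lastvalue"]},
                        auth=token)
    for item in items:
        print(item["hostid"], item["name"], item["lastvalue"])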

2.13.2. USE CASES

No use cases that require Future Internet technology are excluded. Both single-testbed experiments and multi-testbed experiments are considered. In the former case, the advantage of the federation lies in the fact that even novice experimenters should be able to translate their ideas into experiment designs and executions. In the latter case, the advantage lies in the fact that the federation significantly decreases the burden of implementing such complex experiments (single sign-on, a uniform approach to the different steps of the experiment lifecycle, etc.).

Detailed use cases have been worked out as part of the Fed4FIRE deliverables D3.1 and D4.1, which will become available from http://www.fed4fire.eu/publications/deliverables.html. Example use cases include:

From the services and applications community:
- Smart Stadium: offering real-time relevant contextual data
- Online gaming: using SLA-based management of cloud hosting across multiple data centers
- Public safety communications
- Processing and storage in the cloud

From the infrastructures community:
- Teaching computer science using FIRE facilities
- Testing a networking solution for wireless building automation
- Geographical elasticity in cloud computing
- Benchmarking a service platform for information retrieval
- Mobile cloud services platform

2.13.3. USAGE POLICIES

2.13.3.1. How to connect to the facility

Formal access policies are currently still under discussion in the project. Information for open call participants is made available on the Fed4FIRE website. The open calls for experiments target academia, large industry and SMEs.

2.13.3.2. Plans for how to accommodate new experiment requirements

New requirements and functionality will be defined in a demand-driven way, among other things via the open call mechanism. If these requirements are related to federation aspects, they can be included in the second and third implementation cycles of the project. If new requirements arise related to the actual testbeds, these will not be tackled by the project, as this is out of the Fed4FIRE scope.

2.13.3.3. Cost of usage

No information is available at the time of writing. The project has a sustainability research task, which is investigating a possible formal regulation of this matter. So far, no formal rules have been adopted.

2.13.4. INTERNATIONAL LIAISON

There is a close link with several FIRE projects, since several consortium members of each of them are also members of Fed4FIRE: OpenLab, BonFIRE, OFELIA, SmartSantander, Osiris, FIRESTATION, AmpliFIRE, TEFIS, CREW. There is also ongoing work to liaise with similar international initiatives such as GENI in the US and AKARI in Japan.

3. LIST OF ABBREVIATIONS

3GPP 3rd Generation Partnership Project
AAA Authentication, Authorization and Accounting
ACM Association for Computing Machinery
AP Access Point
API Application Programming Interface
ASI Application Support Interface
AUP Acceptable Use Policy
CAR Multi-Sport High Performance Centre of Catalonia
CoAP Constrained Application Protocol
CPU Central Processing Unit
CR Cognitive Radio
EPSRC Engineering and Physical Sciences Research Council
ESI Experimentation Support Interface
EU European Union
FCI Federation Computing Interface
FHW Foundation Hellenic World
FI-PPP Future Internet Public Private Partnership
FIRE Future Internet Research and Experimentation
FMI Future Media Internet
FPGA Field-programmable gate array
FSDL Federation Scenario Description Language
GENI Global Environment for Network Innovations
GSMA GSM Association
GW Gateway
GW4EXP Gateway for Experimentation
IaaS Infrastructure as a Service
IEEE Institute of Electrical and Electronics Engineers
IMS IP Multimedia Subsystem
IoT Internet of Things
IP Internet Protocol
IRIS Implementing Radio in Software
ISM band Industrial, Scientific and Medical radio frequencies
ITIL Information Technology Infrastructure Library
LTE Long Term Evolution
MOTAP Multihop Over-The-Air-Programming
MSI Management Support Interface
NEPI Network Experimentation Programming Interface
NFC Near Field Communications
NGN Next Generation Network
NOC Network Operations Centre
NREN National Research and Education Network
OCCI Open Cloud Computing Interface
OF OpenFlow
OFP OpenFlow Protocol
OMF cOntrol Management Framework
PII Panlab Infrastructure Implementation
PLE PlanetLab Europe
PLJ PlanetLab Japan
PoP Point of Presence
PPK Private PlanetLab Korea
QoE Quality of Experience
QoS Quality of Service
QR Quick Response
RA Resource Adapter
RADL Resource Adapter Description Language
RAM Random Access Memory
RFID Radio-Frequency Identification
ROADM Reconfigurable optical add-drop multiplexer
SDK Software Development Kit
SDR Software Defined Radio
SFA Slice Federation Architecture
SIP Session Initiation Protocol
SLA Service Level Agreement
SLEM Service-Level Experimentation Manager
SPGW Service Provision GW
SUT System Under Test
TDMI TopHat Distributed Measurement Infrastructure
TVWS TV White Spaces
UGC User Generated Content
US United States
USN Ubiquitous Sensor Network
USRP Universal Software Radio Peripheral
VCT Virtual Customer Testbed
VPN Virtual Private Network
WinnF Wireless Innovation Forum

