SUCRE CloudSource Magazine Issue 1/2013

Page 1: SUCRE CloudSource Magazine Issue 1/2013

CloudSource - Cloud computing & opensource

Issue 1 - April 2013

www.de-clunk.com | [email protected] | Design & layout: Paul Davies

Page 2: SUCRE CloudSource Magazine Issue 1/2013

Editorial Board

Prof. Alex Delis, SUCRE Coordinator, National & Kapodistrian University of Athens, Greece

Dr. Norbert Meyer, Head of the Supercomputing Department at the Poznan Supercomputing Center, Poland

Prof. Dr. Keith Jeffery, President of ERCIM, U.K.

Dr. Yuri Glikman, OCEAN project Coordinator, Fraunhofer Institute, Germany

Dr. Toshiyasu Ichioka, EU-Japan Centre for Industrial Cooperation, manager of the FP7 project JBILAT, Japan

Mrs. Cristy Burne, Scientific Editor and Journalist, Australia

Coordination by Mrs. Eleni Toli, National & Kapodistrian University of Athens, Greece and Giovanna Calabrò, Zephyr s.r.l., Italy

This publication is supported by EC funding under the 7th Framework Programme for Research and Technological Development (FP7). This Magazine has been prepared within the framework of the FP7 SUCRE - SUpporting Cloud Research Exploitation project, funded by the European Commission (contract number 318204). The views expressed are those of the authors and the SUCRE consortium and are, under no circumstances, those of the European Commission and its affiliated organizations and bodies.

The project consortium wishes to thank the Editorial Board for its support in the selection of the articles, the DG CONNECT Unit E.2 – Software & Services, Cloud of the European Commission, and all the authors and projects for their valuable articles and inputs.

OTHER RELATED EVENTS

Euro-Par 2013
The Euro-Par 2013 conference will take place in Aachen, Germany, from August 26th until August 30th, 2013. The conference is jointly organized by the German Research School for Simulation Sciences, Forschungszentrum Jülich, and RWTH Aachen University in the framework of the Jülich Aachen Research Alliance. Further information at http://www.europar2013.org/conference/conference.html

CLOUDZONE 2013
This is the biggest cloud fair in the German-speaking countries. CLOUDZONE developed from the Trendcongress, which has taken place successfully in Karlsruhe since 2008. It will take place for the fifth time at the Karlsruhe Fair Center on 15th-16th May 2013. For further information, and to find out who should attend this event, please visit http://www.cloudzone-karlsruhe.de/

TNC2013 - TERENA Networking Conference
The event will be hosted by SURFnet, the Dutch National Research and Education Network, and held in the picturesque city of Maastricht on 3rd-6th June 2013. For further information and to register, please visit https://tnc2013.terena.org/

ISC Cloud Conference 2013
This key conference will be held at the Marriott Hotel in Heidelberg, Germany on 23rd-24th September 2013. Further information at http://www.isc-events.com/cloud13/

International Conference on Cloud and Green Computing (CGC2013)
This conference will take place in Karlsruhe, Germany from September 30th to October 2nd 2013. Further information at http://socialcloud.aifb.uni-karlsruhe.de/confs/CGC2013/Calls.php

IEEE 6th International Conference on Cloud Computing (CLOUD 2013)
To discuss this emerging enabling technology of the modern services industry, CLOUD 2013 invites you to join the largest academic conference exploring modern services and software science in the field of Services Computing, formally promoted by the IEEE Computer Society since 2003. This event will take place in Santa Clara, CA, United States on 27th June - 2nd July 2013. For further information and to register please visit http://www.thecloudcomputing.org/2013/

The Fourth International Conference on Cloud Computing, GRIDs, and Virtualization (CLOUD COMPUTING 2013)
This key event will take place in Valencia, Spain on May 27th - June 1st 2013. For further information and to register please visit www.iaria.org/


Page 3: SUCRE CloudSource Magazine Issue 1/2013

Table of contents

Editorial Board ... 1
Editorial - Prof. Keith G Jeffery ... 3
Migrating to Cloud as a means to cut budget ... 4
Forests and the cloud: an international model of forest defoliation ... 7
The UberCloud Experiment: paving the way to HPC as a service ... 11
mist.io - touch the clouds - Mobile-friendly multi-cloud management, monitoring and automation ... 14
CC1 system - the solution for private cloud computing ... 17
TERENA Trusted Cloud Drive for Academic Research ... 20
Scaling Software Challenges ... 23
PROSE survey on requirements for hosting open source software projects ... 27
News & Events ... 31


Page 4: SUCRE CloudSource Magazine Issue 1/2013

Editorial

CloudSource Magazine, Issue 1, April 2013

Prof. Keith G Jeffery, President of ERCIM

The second Expert Group Report 'Advances in CLOUDs' was published in December 2012 [1]. It followed the earlier report 'The Future of CLOUD Computing', published in January 2011 [2]. Both provide useful insights into the future of CLOUDs, and the second is clearly an evolutionary development of the first.

The reports cut through the hype and make a realistic assessment of the current state and future prospects. In particular, they identify barriers to the development and take-up of CLOUD computing in Europe (and indeed more widely) and propose research topics to address those barriers.

The major conclusion is that there are opportunities for European businesses: first, in the ICT industry itself, in providing infrastructure-, platform- and service-level offerings; and second, in using CLOUDs to improve business ICT management and use, thus reducing costs and increasing opportunities. The Gartner hype cycle for CLOUDs reached the peak of expectation in 2010; there are now some years in the trough of disillusionment, followed by the slope of enlightenment, before reaching the plateau of productivity. This trough-and-slope timescale means there is a current tendency to invest less effort and interest in CLOUDs, which gives Europe a unique chance to leap ahead: to define the future of CLOUDs and the related market players, for European ICT businesses to develop offerings, and for European business in general to prepare to adopt CLOUD computing.

However, the barriers mentioned above are restraining the possible growth. Although many commercial organisations are considering CLOUD computing or experimenting with it, there is as yet no concerted move for them to adopt CLOUD technology. There are many European ICT SMEs working on CLOUDs and producing good products and services, but they are not yet overcoming the threshold barrier to massive take-up. One problem is that commercial businesses wish to use CLOUD in-house and expand elastically and interoperably onto one or more public CLOUDs (federated as necessary) for peak demand. The major barriers concern: (a) fear and uncertainty about legal aspects, especially the EU directive on the processing of personal data, which precludes outsourcing outside the EU unless the protection is equivalent to that of the EU [3]; (b) lack of confidence in entrusting commercial data to an outsourced service, even within the EU; (c) associated fears over security and privacy; (d) lock-in to particular proprietary solutions by public CLOUD vendors; (e) lack of interoperability across CLOUD environments, which would permit easy elastic expansion from private (in-house) to public CLOUDs on demand when processing peaks are encountered; (f) technology barriers such as reliable multitenancy, easy elasticity, reliability and autonomicity; (g) lack of appropriate systems development environments and programming languages; (h) lack of standards, for example for the description of CLOUD services.

Various EC-funded research projects are addressing some of these issues. The emerging results from those projects should advance the cause of widespread use of CLOUD computing.

[1] http://cordis.europa.eu/fp7/ict/ssai/home_en.html
[2] http://cordis.europa.eu/fp7/ict/ssai/docs/cloud-report-final.pdf
[3] Directive 95/46/EC


Page 5: SUCRE CloudSource Magazine Issue 1/2013

Case study: migrating to cloud as a means to cut budget

Dr. Devendra D. Meshram, RTM Nagpur University, India
Dr. Omprakash M. Ashtankar, Kavikulguru Institute of Technology and Science, India
Urmila D. Meshram, Teaching Assistant, MTech, Japan


Page 6: SUCRE CloudSource Magazine Issue 1/2013

Different organizations approach IT needs in different ways, but some things are the same: we all want cost-effective solutions tailored to our needs. Cloud computing can provide this, reducing the cost of procuring, handling and maintaining services and resources. The option to migrate one’s own servers to the cloud also reduces costs, and results in changes to mindset as well as policy.

This real-time study of an actual company and its IT budget revealed how cost-effective migrating to cloud can be, covering issues of data migration, security, disaster recovery, and basic service provision.

The yearly cost of owning and handling a software application can be as much as four times the cost of its initial purchase, and companies can spend up to 75 percent of their total IT budget just maintaining and running existing systems and infrastructure [1]. Previously, this software-related budget was the main target of cost-reduction strategies.

Scenario of change

Now, however, the scenario has changed: hardware and infrastructure have become fair budgetary targets. As a result, concepts such as virtual desktop infrastructure, remote desktop services, cloud computing, smartphone-compatible services and web meetings are booming.

Many companies offering enterprise resource planning (ERP) software are developing infrastructure-compatible solutions to suit the latest technological challenges, offering ease of access from anywhere and business accessibility using smartphones.

The cost-driven elements of migrating to cloud and procuring cloud services include choosing and managing data types, migration practices, database management systems, targeted landscapes, operating systems, secured hosting, high availability, and disaster recovery.

Case study results: reshuffling the IT budget

After analyzing their IT budget, case study company ABC-IT [2] decided to migrate to the cloud. Their budget prior to migration (Figure 1, LHS) was fairly equally distributed between hardware, software, and operational costs. The majority of their software budget was allocated to software licensing. Their hardware budget was allocated to infrastructure networking, server acquisition and maintenance, data and application hosting, hardware allocation and storage. The operations budget covered support, transport management, user administration, and the consultants and development resources that helped design and build custom systems.


Page 7: SUCRE CloudSource Magazine Issue 1/2013


Further savings are also available: the cost of updating software is drastically reduced; any unusual operational costs can be covered by optional tariffs; and tailoring contracts to offer threshold-level basic services also works to lower costs.

Managing data in-house requires users to manage hardware, operating systems, security and so on; however, when operating from the cloud, the only user burden is the procurement of services.

IT companies are rightly wary about handing over their data management to third-party operators: mistakes could impact their entire business. To deal with this, the introduction of penalties for non-availability of contracted services may increase user confidence in service availability.

Ultimately, in this case study, the transition to cloud computing significantly reduced the IT budget and changed business management thinking.

After migration, their software licensing costs remained the same, but there was a comparative drop in operation and hardware budgets (Figure 1, RHS), substantially reducing their overall cost base.

Staff costs were reduced, since the cloud vendor was responsible for operating and maintaining hardware and software. Server costs were reduced, since data was managed in a single data center. The cost of infrastructure developed for deploying applications in the cloud, including hardware, software, and operational infrastructure, was lower than the cost of deploying these same applications on-premises.

[1] Timothy Chou, "The End of Software," SAMS Publishing, 2005, p. 6.
[2] Company names have been changed.

Transition from a traditional approach

Figure 1: Relative IT budget distribution before (LHS) and after (RHS) migration to cloud. Legend: Hardware, Software, Operations.


Page 8: SUCRE CloudSource Magazine Issue 1/2013

Forests and the cloud: an international model of forest defoliation

Bohdan J. Naumienko, Eurotech Ltd, Poland

Satellites and aerial drones can give us one global perspective. ICT can offer another.

Forests are the lungs of the Earth. Their ongoing health is a global challenge, not only in its geographic scale, but also in terms of the integrated, multidisciplinary research required to meet this challenge.

Our research depends on international data sources and environmental sensors. We need a durable, globally accessible computerized model of forest health.

¹ By 'integral' management we mean more than 'integrated': inter alia, real support for decision-makers' awareness, based on complex, rapid and automated mapping in a GIS environment, from sensors to user; see the following text and, for more information, Naumienko, B.J. (2009) [2].


Page 9: SUCRE CloudSource Magazine Issue 1/2013

What can cloud computing offer?

Let us suppose our cloud computing concept integrates innovative methods for forest monitoring, protection, cultivation and management. We could include data from Unmanned Aerial Vehicles (UAVs), Europe's Global Monitoring for Environment and Security program (GMES), the Group on Earth Observation (GEO), the G20 Global Agricultural Geo-Monitoring Initiative (GEOGLAM), and FOREST INTEGRAL¹ OBSERVATION (FIO), which uses satellite and UAV data to monitor global forestry. We could also lead research in the polar (Norway), north-temperate (Poland), south-temperate (Italy, Turkey) and subtropical (China) zones.

What are the key elements of such a computerized, distributed model of the forest as an ecosystem?

Since branches and trunk are considered transport pipelines only, leaf photosynthesis represents a tree's health. Leaf health can be impacted by three main factors: access to sunlight, the water transport environment, and root pathogens.

We model the sun's energy using the variable S (intensity of light), and indirectly using the variables O and C (the concentrations of O2 and CO2 respectively), as well as T (temperature). The other parts of the model, the roots, trunk and branches, distribute pathogens to the tree's whole foliage. For example, the ash (Fraxinus), infected by either virulent or avirulent strains of Phytophthora, will be related to tree-invasive agent interactions, both compatible with disease development and incompatible through host-specific resistance, as well as to the effectiveness of specific cultivation methods. Generally, host-pathogen interactions should allow us to target specific pathways to enhance disease resistance in forestry.

Any tree is identified by its geographical coordinates g = (i,j) and by a number k of biochemical compounds needed for vegetation, transported from the atmosphere and the roots to the leaves. Thus we will be able to link ground and laboratory research with a specific tree, and not only examine the results of using phosphites in forest cultivation, but also identify candidate genes involved in virulence. What is more, we can search for relations between phosphite treatment on the ground and the quantity D: a generalized defoliation index estimated from the air. This index is one of the fundamental measures of forest health and vitality (see Figure 1).

Calculating an average tree

Forest modelling can be based on the average tree, statistically estimated from many forest representatives. Any tree has three modelled areas:
1. the atmosphere,
2. the tree's foliage, and
3. the tree's roots.

Figure 1: Defoliation index (D): left, D = 0%; middle, D = 20%; and right, D = 70%.
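To illustrate the notation, here is a small sketch (ours, not the authors') that keys trees by their coordinates g = (i, j) and averages a per-tree defoliation index D; the sample values simply mirror Figure 1's examples.

```python
# Illustrative sketch (not from the article): trees keyed by grid
# coordinates g = (i, j), each carrying a defoliation index D in percent.
def average_defoliation(trees):
    """Mean defoliation index D over a stand of trees.

    `trees` maps (i, j) coordinates to a per-tree D estimate (0-100).
    """
    if not trees:
        raise ValueError("no trees in the stand")
    return sum(trees.values()) / len(trees)

# Invented sample data mirroring Figure 1's D = 0%, 20% and 70% examples.
stand = {(0, 0): 0.0, (0, 1): 20.0, (1, 0): 70.0}
print(average_defoliation(stand))  # 30.0
```

The same dictionary keyed by g could equally carry the k biochemical-compound measurements per tree; the averaging step is what produces the "average tree" the article describes.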


Page 10: SUCRE CloudSource Magazine Issue 1/2013

Linking results

When data like the above are presented in static documents, their effect is limited. However, storing results in a cloud-based database would make it easier to analyse and combine data, as well as to update and accumulate knowledge.

For example, if we had modelled tree infection by Phytophthora in the cloud, the results of one model could be used in further studies, comparing tree-invasive pathogens, disease development, host-specific resistance, or cultivation methods across different research efforts, especially when ground, aerial and satellite monitoring are implemented concurrently in different regions of the Earth.

In addition to offering cloud solutions such as software and infrastructure 'as a service,' we must also offer processes: capturing as a service, pre-processing as a service, data sharing as a service, and data delivery and user processing, all as a service.
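The 'as a service' process chain named above (capturing, pre-processing, sharing, delivery and user processing) can be pictured as composable stages. The sketch below is our own illustration with placeholder stage functions, not an API from the article.

```python
from functools import reduce

# Hypothetical illustration of the process chain described in the text:
# capturing -> pre-processing -> sharing -> delivery -> user processing.
# Each stage is a placeholder that just tags the data it has handled.
def capture(source):      return {"raw": source}
def preprocess(data):     return {**data, "cleaned": True}
def share(data):          return {**data, "shared": True}
def deliver(data):        return {**data, "delivered": True}
def user_process(data):   return {**data, "analysed": True}

PIPELINE = [capture, preprocess, share, deliver, user_process]

def run(source):
    """Feed a data source through every 'as a service' stage in order."""
    return reduce(lambda data, stage: stage(data), PIPELINE, source)

result = run("uav-image-042")
print(result["analysed"])  # True
```

The point of the composition is that any stage could be swapped for a remote service call without changing the pipeline's shape.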

Figure 2: Architecture of a potential cloud solution. [Schematic with four layers: Sources (satellite, aerial and UAV imagery), Data Management (GIS server, geodatabase, imagery archives, extraction-transform-loading), Web Applications (web services, maps, workflow, file transfer, GIS portal) and Tools (GIS desktop, mobile and web clients, video exploitation, SDK).]

Figure 3: Cloud solutions viewed as processes-as-a-service. Abbreviations: VE: Video Exploitation; CA: Capturing; PP: Pre-Processing; SH: Sharing; DE: Delivering; UP: User's Processing; SDK: Software Development Kit (e.g. for Microsoft .NET).


Page 11: SUCRE CloudSource Magazine Issue 1/2013

Modelling any ecosystem, including that of a forest, requires a global approach to internet, information and communication technologies. From that global approach, we can also access scaled-down information services, useful for research, education and business.

Using cloud computing in this fashion achieves three important aims: it connects international innovators, it offers improved ecological security, and it leverages the local strength of small and medium enterprises, offering a truly global reach.

References: [1] Oszako, T. (2012) Methods of Forest Healthiness Estimation (in Polish – unpublished). Forestry Research Institute, Warsaw. [2] Naumienko, B.J. (2009) Games, geometries, languages, processes: the foundations of integral education. Far East Journal of Mathematical Education, Vol.3, 1, 41-74.


Page 12: SUCRE CloudSource Magazine Issue 1/2013

Wolfgang Gentzsch, Independent HPC Cloud Consultant, Germany
Burak Yenier, Fiserv, US

There are several million small- and medium-size manufacturers around the world, most of them using ordinary workstations for their daily design and development work. What could they achieve with greater computing power?

The UberCloud Experiment: paving the way to HPC as a service


Page 13: SUCRE CloudSource Magazine Issue 1/2013

Since buying an expensive compute cluster is usually not an option, renting computing power is the next best thing, but business in the cloud still comes with challenges: application complexity, privacy of sensitive data and intellectual property, expensive data transfers, conservative software licensing, performance bottlenecks from virtualization, user-specific system requirements, missing standards, and lack of interoperability between different clouds.

On the other hand, renting remote computing resources comes with extremely attractive benefits: no lengthy procurement and acquisition cycles, ability to rapidly upscale or downsize, opportunity to shift focus from capital expenditure to the more flexible operational expenditure, and business flexibility thanks to on-demand, at-your-fingertip resources.

How can we reduce these barriers and allow businesses to optimize benefits? We tried an uber-experiment to find out.

On board the UberCloud

Since August 2012, more than 400 organizations and individuals from around the world have joined the open, free, humanitarian UberCloud Experiment (http://www.hpcexperiment.com/). Designed to explore the end-to-end process of accessing and using remote computing resources, the UberCloud is now host to 60 international teams aiming to run end-user applications on remote computing resources. At the same time, we're analyzing different HPC clouds and their interoperability, and finding ways to overcome the many roadblocks.

Participants are also encouraged to make use of UberCloud Exhibit (http://www.exhibit.hpcexperiment.com), a directory of professional cloud services for the wider CAE, life sciences, and big data communities.

What's UberCloud's secret?

We offer users a long list of real benefits:

a vendor-neutral service,
no need to hunt for resources in a crowded cloud market,
professional match-making of end-users with suitable service providers,
free, on-demand access to hardware, software, and expertise during the experiment,
a carefully tuned, end-to-end and step-by-step process for accessing remote resources,
the opportunity to learn from the best practice of other participants, and
a no-obligation, risk-free proof-of-concept: no money involved, no sensitive data transferred, no software license concerns, and the option to stay anonymous.

With these benefits, the experiment is leading the way to increasing business agility, competitiveness, and innovation.


Page 14: SUCRE CloudSource Magazine Issue 1/2013

Roadblocks so far - and their resolutions

Several major roadblocks and their resolutions have been reported by our teams during the course of their projects. More details about the lessons learned and recommendations can be found in a recent article in Bio-IT World. Some of the major roadblocks were:

Information security and privacy, even in this experiment setting: guarding raw data, processing models and resulting information.

Lack of easy, self-service registration and administration: still not available with most providers.

Incompatible software licensing models: the software licensing landscape is still difficult to navigate.

High expectations: can lead to disappointing results or even to project failure.

Reliability of resource providers: some teams had to wait for weeks before capacity could be allocated.

Need for interoperable clouds: migration of work to another cloud resource is still a real challenge.

Join us

There are many reasons to join this community experiment: for a start, HPC as a service is the next big thing. But beyond that, HPC is complex, and it's easier to tackle within a community. Barriers to entry are low, you can learn by doing, without risk, and you can see how all this fits into your future research or business direction. You can ask questions or register for the experiment at http://www.hpcexperiment.com/

Screenshot of the result of one of the experiment teams: development of stents for a narrowed artery after balloon angioplasty, to widen the artery and improve blood flow. This experiment was performed in the Cyclone HPC Cloud.


Page 15: SUCRE CloudSource Magazine Issue 1/2013

Markos Gogoulos, Unweb.me Ltd, Greece
Dimitris Moraitis
Mike Muzurakis
Christodoulos Psaltis

As cloud vendors compete by introducing a range of features and pricing models, consumers are increasingly combining a number of cloud offerings to construct the service combination they need. While this allows users to benefit from a wider range of service options, it can also have a negative effect on infrastructure management, introducing different sets of tools and APIs for each cloud. Add to this the need for VM maintenance, monitoring and provisioning, and using multiple clouds can end up a cumbersome and time consuming process.

mist.io - touch the cloudsMobile-friendly multi-cloud management, monitoring and automation


Page 16: SUCRE CloudSource Magazine Issue 1/2013

What’s required then, is a next-generation multi-cloud interface, preferably with mobile-friendly virtual machine (VM) management, monitoring and automation across clouds.

Welcome to mist.io.

mist.io is open source software and a freemium service that helps you manage and monitor VMs across multiple public, private and hybrid clouds using your mobile phone, tablet or laptop.

You can use mist.io to create, reboot, destroy and tag VMs on any supported infrastructure-as-a-service (IaaS) cloud. More importantly, you can send secure shell (ssh) commands using a web interface optimized for touchscreens, allowing you to solve infrastructure issues while on the road. Figure 1 presents mist.io's architecture. Figure 2 shows how its interface has been optimized for touchscreens.

Using a premium mist.io service, you can also configure events that trigger notifications or automated responses. In the future, mist.io will also facilitate the migration of VMs across different clouds.

Mist.io's user interface is an HTML5 application based on jQuery Mobile [1] and Ember.js [2]. The backend is built in Python using the Pyramid web framework [3], which implements a simple REST API that handles the network calls using JSON. Communication with the cloud backends is realized using Apache's Libcloud [4], also implemented in Python.

Mist.io’s architecture is client-driven; most of the tasks are handled by JavaScript, on the browser. The server side requires the API keys to successfully communicate with the cloud providers.
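As an illustration of the pattern described above (one interface in front of many cloud backends, with API keys held server-side), here is a small self-contained sketch. It is our own stand-in, not mist.io code: the driver classes are invented, whereas the real service delegates this role to Apache Libcloud.

```python
# Simplified illustration of the one-interface-for-many-clouds pattern
# that Libcloud gives mist.io. The driver classes here are invented
# stand-ins, not real provider bindings.
class BaseDriver:
    def __init__(self, api_key):
        self.api_key = api_key  # kept on the server side, as in mist.io

    def list_nodes(self):
        raise NotImplementedError

class FakeEC2Driver(BaseDriver):
    def list_nodes(self):
        return [{"name": "web-1", "state": "running"}]

class FakeRackspaceDriver(BaseDriver):
    def list_nodes(self):
        return [{"name": "db-1", "state": "stopped"}]

DRIVERS = {"ec2": FakeEC2Driver, "rackspace": FakeRackspaceDriver}

def all_nodes(accounts):
    """Aggregate node listings across every configured cloud account."""
    nodes = []
    for provider, api_key in accounts:
        driver = DRIVERS[provider](api_key)
        nodes.extend(driver.list_nodes())
    return nodes

print(len(all_nodes([("ec2", "KEY1"), ("rackspace", "KEY2")])))  # 2
```

Because every backend answers the same `list_nodes` call, the browser client only ever talks to one REST API, regardless of how many clouds sit behind it.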

Figure 1: mist.io architecture

Figure 2: mist.io’s touchscreen-optimized interface

mist.io

Inside mist.io


[Figure 1 schematic: browser → REST API → mist.io server → native APIs of Amazon EC2, Rackspace Cloud, Linode and OpenStack.]


Page 17: SUCRE CloudSource Magazine Issue 1/2013

Along with the open-source tool, a premium monitoring service has been developed to provide detailed VM usage statistics and customizable alerts that trigger email and SMS notifications or automated actions (e.g. deploy another VM on high load).

On each VM’s dashboard (Figure 3), mist.io plots metrics for CPU utilization, memory consumption, system’s load average, I/O operations per second (IOPS) and network traffic.

Setting up events – including conditional events – is simple and touchscreen-optimized. Users can customize alert actions to include notification, creation/destruction of machines, and command execution, for example.
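To make the idea concrete, here is a rough sketch of how such user-defined rules could be evaluated. It is our illustration, not mist.io's implementation; the rule format and action names are invented.

```python
import operator

# Hypothetical sketch of evaluating user-defined alert rules against a
# VM's latest metrics. Rule tuples and action names are invented.
OPS = {">": operator.gt, "<": operator.lt,
       ">=": operator.ge, "<=": operator.le}

def triggered_actions(metrics, rules):
    """Return the actions whose rule condition matches the metrics.

    Each rule is (metric_name, op, threshold, action), e.g.
    ("cpu", ">", 90, "deploy-extra-vm").
    """
    actions = []
    for metric, op, threshold, action in rules:
        value = metrics.get(metric)
        if value is not None and OPS[op](value, threshold):
            actions.append(action)
    return actions

rules = [("cpu", ">", 90, "deploy-extra-vm"),
         ("load", ">=", 8.0, "notify-email")]
print(triggered_actions({"cpu": 95, "load": 2.5}, rules))  # ['deploy-extra-vm']
```

A real service would feed each action into a notification or provisioning queue rather than returning it directly, but the condition-to-action mapping is the core of the feature.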

Monitoring service

Mist.io is a mobile-friendly, multi-cloud web-app for managing and monitoring VMs. Mist.io provides a simple interface to handle common VM tasks, execute remote commands via a touchscreen-optimized shell interface, and monitor the VMs’ health status, while triggering actions in response to user-defined events.

In the future we will enhance mist.io to facilitate migration of VMs across different clouds, to help users mitigate vendor lock-in and exploit the best features each cloud has to offer. We also plan to introduce a smarter alerting service using crowdsourced data on how users react under certain circumstances to help mist.io identify common problems and recommend relevant solutions.

Mist.io is open-source and available at https://github.com/mistio/mist.io.

The freemium service is hosted in private beta at https://mist.io. It’s scheduled to go public in May.

[1] jQuery Mobile, http://jquerymobile.com
[2] Ember.js, http://emberjs.com
[3] Pyramid, http://www.pylonsproject.org/
[4] Apache Libcloud, http://libcloud.apache.org

Conclusion

References


Figure 3: mist.io monitoring a VM


Page 18: SUCRE CloudSource Magazine Issue 1/2013

Mariusz Witek and team,Institute of Nuclear Physics PAN, Poland

We began the “Cloud Computing for Science and Economy” project - known as CC1 - at the end of 2009 at the Institute of Nuclear Physics PAN (IFJ PAN), Poland. The project is financed by the European Commission and the Polish Ministry of Science and Education (Innovative Economy, National Cohesion Strategy). In the project’s first phase, we developed a fully functional computing system in the form of a private cloud, and made it available to all IFJ PAN users. In the second phase, we implemented a distributed, centrally managed cloud architecture, enabling the resources of many distributed clusters to be shared.

CC1 system - the solution for private cloud computing


Page 19: SUCRE CloudSource Magazine Issue 1/2013


The result of this work — the CC1 system — provides resources within the infrastructure as a service (IaaS) model. A schematic view of the system is shown in Figure 1. The central element of the system is the cloud manager (CLM), which receives calls from user interfaces (web browser-based interfaces or EC2 interfaces) and passes commands to cluster managers (CMs). A cluster manager runs on each individual cluster, handling all low-level operations required to control virtual machines (VMs).

We used the Python programming language for the top layer. Virtual resources are managed using libvirt, a lower-level virtualization toolkit that can support a number of virtual machine managers. Currently, CC1 uses kernel-based virtual machine (KVM).
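The layering described above can be sketched in a few lines of Python. This is an illustrative model only, not CC1 source code: the class and method names are invented, and the stand-in `handle` method replaces the real libvirt/KVM calls.

```python
# Illustrative sketch of CC1's layering: a cloud manager (CLM) routes
# user commands to per-cluster managers (CMs). Names are invented; the
# real CMs perform the low-level VM operations through libvirt/KVM.
class ClusterManager:
    def __init__(self, name):
        self.name = name
        self.vms = {}

    def handle(self, command, vm_id):
        # Stand-in for the low-level virtualization calls.
        self.vms[vm_id] = "running" if command == "start" else "stopped"
        return (self.name, vm_id, self.vms[vm_id])

class CloudManager:
    def __init__(self, clusters):
        self.clusters = {cm.name: cm for cm in clusters}

    def dispatch(self, cluster, command, vm_id):
        """Pass a user-interface call down to the right cluster manager."""
        return self.clusters[cluster].handle(command, vm_id)

clm = CloudManager([ClusterManager("cluster-a"), ClusterManager("cluster-b")])
print(clm.dispatch("cluster-a", "start", "vm-7"))  # ('cluster-a', 'vm-7', 'running')
```

The useful property of this split is that new clusters can be federated into the cloud simply by registering another cluster manager with the CLM.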

Introducing the CC1 system

Figure 1: The structure of the CC1 system. [Schematic: the interface layer (WWW and EC2 interfaces) calls the CLM (Cloud Manager, backed by its own database), which passes commands to the CMs (Cluster Managers, each with its own database); each cluster's nodes run the VMs, with attached storage.]


Page 20: SUCRE CloudSource Magazine Issue 1/2013

CC1 system - the solution for private cloud computing

The main features of the system include:

- a custom web-based user interface,
- automatic creation of virtual clusters with a preconfigured batch system,
- groups of users with the ability to share resources,
- permanent virtual storage volumes that can be mounted to a VM,
- a distributed structure: a federation of clusters running as a uniform cloud,
- a quota for user resources, and
- a monitoring and accounting system.

Keeping it simple

In developing the CC1 system, we emphasized simplicity: user access, administration and installation are all relatively easy. Self-service access to the system is provided via an intuitive web interface. The administration module contains a rich set of tools for user management and system configuration.

We also developed an automatic installation procedure based on the standard package management system of Linux distributions. This way, the system can be set up quickly and operated without the need for a deep understanding of the underlying technology.

One of CC1's crucial features is the easy creation of VM computing clusters equipped with a preconfigured batch system. This allows users to perform intensive calculations on demand, without time-consuming manual configuration. When calculations are completed, the VM cluster can be destroyed and the resources made accessible to other users.

Putting it into operation

A private cloud based on the CC1 system was installed at IFJ PAN at the beginning of 2012. A number of VM images with various Linux flavours have been made available to users. Currently, about 1000 CPU cores are shared by various research teams at the Institute and their collaborators. We have achieved stable operation and high CPU utilization of above 80%, making us confident the system will continue to be useful in the future.

A private cloud: benefits in action

Multidisciplinary institutes such as IFJ PAN traditionally dedicate a given computer cluster to a single research group, typically resulting in low utilization and inefficient use of resources. A private cloud model, such as the CC1 system, enables various research groups to share computing resources. It can significantly boost the efficiency of infrastructure usage and at the same time reduce maintenance costs.

The private cloud computing model can improve the delivery of services and reduce the cost of IT operations. In addition, it ensures confidential data can be processed on well-protected local infrastructure. The CC1 software is distributed under the Apache License 2.0. See http://cc1.ifj.edu.pl for more details.
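The on-demand cluster workflow described above (create a preconfigured batch cluster, submit jobs, destroy it when done so resources return to the shared pool) can be sketched with a toy model. The class and method names below are hypothetical illustrations, not CC1's actual interface, which is a web UI:

```python
# Illustrative sketch of the CC1-style on-demand cluster lifecycle.
# VirtualCluster is a hypothetical stand-in, NOT CC1's actual API.

class VirtualCluster:
    """A toy model of a VM cluster with a preconfigured batch system."""

    def __init__(self, name: str, nodes: int):
        self.name, self.nodes = name, nodes
        self.jobs = []
        self.destroyed = False

    def submit(self, job: str) -> str:
        # The preconfigured batch system queues the job; no manual setup.
        assert not self.destroyed, "cluster no longer exists"
        self.jobs.append(job)
        return f"{job} queued on {self.nodes} nodes"

    def destroy(self) -> None:
        # Resources are returned to the shared pool for other users.
        self.destroyed = True


cluster = VirtualCluster("analysis", nodes=8)   # created on demand
print(cluster.submit("monte-carlo-run"))
cluster.destroy()                               # free resources when done
```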


by Peter Szegedi, TERENA, The Netherlands

The Trusted Cloud Drive (TCD) project [1] aims to pilot an experimental, high-performance, trusted cloud storage solution for the Research and Education (R&E) community gathered under the Trans-European Research and Education Networking Association (TERENA) [2]. It builds on an open source cloud storage brokering platform [3] that provides federated user access and strong data encryption, supports various storage back-ends and, most importantly, ensures the separation of the storage data from the metadata (such as file attributes and encryption keys), which is kept in a trusted location. TCD can also be considered a storage middleware that maintains trust and privacy within the user domain and acts as a secure relay towards the connected private and/or public provider domains.

TERENA Trusted Cloud Drive for Academic Research


Why the Trusted Cloud Drive?

National Research and Education Networks (NRENs) around the globe - such as Internet2 in the US or SURFnet in the Netherlands - connect universities, university colleges and campuses with high-capacity links, peer with commercial networks at major exchanges, and provide advanced value-added services to the R&E community. They are membership organizations, governed by universities, subsidized by national governments, and operate on a non-profit basis.

Undoubtedly, massive data storage is vital for academic research. Individual researchers and students on campus increasingly use the commercial cloud storage offerings available on the market (e.g., Google Drive, iCloud, Dropbox). However, these public services are not primarily designed for sensitive research data sets. Universities and research institutes are therefore seeking partnerships with private storage solution integrators and application developers (e.g., PowerFolder, SpiderOak, ownCloud) to build and operate their own storage infrastructure on campus, which requires not only capital investment but also operational knowledge and experience. These private storage clusters can provide the desired performance and data privacy but, due to the lack of standards and sometimes proprietary vendor solutions, cannot always interface with each other or with public services.

NRENs are in a good position to deliver high-performance data storage infrastructure as a service, tailored to the R&E community, over their advanced networks at national scale. Moreover, thanks to European and global NREN collaboration, they can also aggregate demand and facilitate the sharing of community-provided storage across TERENA members.

What does the TCD offer?

Trust is the main asset of NRENs, as they are governed by the universities that are also their major clients. The Trusted Cloud Drive service pilot - an initiative of TERENA - builds on this trust relationship and puts the necessary software tools and know-how in NRENs' hands. The open source cloud storage brokering platform incorporated by the TCD pilot can be installed at university locations or hosted by NRENs to aggregate demand and broker storage resources. It also acts as a storage middleware layer that separates the underlying trust domain from the storage back-end providers' domain, thereby maintaining the data privacy of the users.

The user authenticates to TCD with a federated account provided by his/her home institution, so a rich set of identity attributes is available to determine the actual service offering via the platform. On the front end, a native web application or standard WebDAV access can be used for typical disk operations. Acting as middleware, the TCD platform can also be integrated with other, feature-rich storage applications provided by commercial vendors or the community. Beyond the scope of the pilot, TERENA has been discussing potential integration scenarios with, for example, PowerFolder and the ownCloud community. Trusted Cloud Drive performs the encryption and the separation of the metadata from the storage data. The encryption keys and the sensitive metadata are kept in the local metadata store. The encrypted storage data blob can then be exported to the public cloud using various storage back-end APIs, including Amazon S3, OpenStack Swift, Pithos+ (the Greek NREN's cloud) and soon other APIs supported by jclouds.
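The core idea (client-side encryption with the keys and metadata kept in a trusted local store, while only an opaque encrypted blob reaches the public back-end) can be illustrated with a minimal Python sketch. The toy SHA-256 keystream cipher below stands in for TCD's real encryption and is not the scheme the platform actually uses:

```python
# Illustration of TCD-style metadata/data separation: the trusted domain keeps
# the key and file metadata; the storage back-end only ever sees ciphertext.
# The XOR keystream built from SHA-256 is a TOY cipher for illustration only;
# it is NOT the encryption the Trusted Cloud Drive actually performs.
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256-derived keystream (symmetric: applies twice)."""
    out = bytearray()
    for offset in range(0, len(data), 32):
        pad = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        chunk = data[offset:offset + 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

trusted_metadata_store = {}   # stays inside the user's trust domain
public_backend = {}           # e.g. S3 / Swift / Pithos+: sees only the blob

def upload(name: str, plaintext: bytes) -> None:
    key = os.urandom(32)
    trusted_metadata_store[name] = {"key": key, "size": len(plaintext)}
    public_backend[name] = keystream_xor(key, plaintext)   # opaque blob

def download(name: str) -> bytes:
    meta = trusted_metadata_store[name]
    return keystream_xor(meta["key"], public_backend[name])

upload("thesis.txt", b"sensitive research data")
assert download("thesis.txt") == b"sensitive research data"
assert public_backend["thesis.txt"] != b"sensitive research data"
```

The key point is architectural rather than cryptographic: because the key never leaves the trusted metadata store, the public provider can hold the data without being able to read it.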


TERENA TF-Storage Task Force participants discuss the Trusted Cloud Drive pilot in March 2013 in Berlin, Germany.

Progress to date

The TERENA TCD pilot started in May 2012, and the final results, with the list of potential use cases, service delivery scenarios and legal advice, are expected to be published in April 2013. During the open pilot period (the last 9 months), the software platform has been installed and tested at NRENs in Greece, the Czech Republic, Croatia, Poland, Belgium, Portugal, Spain and Brazil. Altogether, 19 NRENs, 8 universities and 3 research labs have expressed their interest in experimenting with TCD in one way or another. TERENA is eager to maintain this community around the open source code in order to ensure the long-term sustainability of the Trusted Cloud Drive. Commercial companies are also interested in exploring potential service integration scenarios that go beyond the scope of the pilot.

1 http://terena.org/clouddrive
2 http://www.terena.org/
3 https://github.com/VirtualCloudDrive/CloudDrive


Alvaro Simón, CESGA, Spain
Carlos Fernández, CESGA
Victor Mendez, PIC
Jordi Guijarro, CESCA
Jesús Bermejo, Telvent

As the digital universe expands, so too does software production, leading innovation in key European non-software industrial sectors such as automotive, aerospace, medical equipment, telecom equipment and consumer electronics [1]. In fact, it is difficult to identify a domain in which innovation does not rely on software.

Managing this explosion of software requires new approaches, addressing more than technical scalability issues.

Scaling Software Challenges


Software and the economy

The Organisation for Economic Co-operation and Development held two conferences focusing on the economic relevance of software; the first in Cáceres, Spain (November 2007), and the second in Tokyo, Japan (October 2008). The resulting study addressed themes such as security, privacy, mobility, interoperability, accessibility and reliability from a user perspective [2]. Identifying the boundaries of the software industry was noted as a continuous challenge.

OSMOSE, OSIRIS and OSAmI-Commons

Several projects have recently tackled the technical challenges of software scaling, including OSMOSE (Open Source Middleware for Open Systems in Europe, 2003–2005) [3], OSIRIS (Open Source Infrastructure for Run-time Integration of Services, 2005–2008) [4] and OSAmI-Commons (Open Source Ambient Intelligence Commons, 2008–2011) [5].

These projects worked on open source modular and dynamic middleware foundations, service bus implementations, federated identity and reusability frameworks. Links between composite and virtualization cloud approaches were also identified, leading to the set-up of the MEGHA Federated Cloud (2010), an Intercloud initiative [6].

The MEGHA Federated Cloud

The MEGHA Working Group promotes and coordinates contributions to cloud computing R&D, education and management made by institutions affiliated with RedIRIS [7] in Spain. MEGHA has established direct links with initiatives such as e-Science [8] and CRUE-TIC [9] in Spain, and internationally with TERENA, the OpenNebula Interoperability Working Group, GÉANT, EGI and OGF.

Figure 1. MEGHA concept validation test bed.


Concept validation

In the first phase (2010–2011), MEGHA validated federated cloud platforms using OCCI [10] to streamline the use of cloud technologies among R&E service centers. Representative infrastructure providers (CESCA, CESGA, PIC), middleware providers (OpenNebula, RedIRIS, OSAmI-Commons) and users (UAB, UOC, UM), together with intermediary identity and broker resources (RedIRIS), joined efforts to demonstrate the viability of this approach.

The results stimulated the development of use cases including e-learning platforms on demand (the Learning Apps project, co-financed with FEDER funds), a distributed HPC platform (e-Science), and virtual labs (VDI) in a hybrid scenario (academic services).

Federating research and academic community clouds must address new technical challenges:

- federated user authentication and authorization mechanisms, and user management between different cloud managers;
- secure VM image distribution and validation among heterogeneous cloud managers;
- a federated cloud accounting system integrating the accounting records of multiple cloud managers and supporting federated cloud governance; and
- monitoring and notification of unpredictable changes in availability and readability status.

Ongoing developments

MEGHA is working on these challenges. For example, the new rOCCI [11] server and OCCI [12] clients tested by the CESGA and PIC teams with OpenNebula 3.8.x are able to use x509 user certificates for authentication.

MEGHA authentication is based on x509 user and robot certificates issued by the Spanish pkIRISGrid CA. This new feature was used by PIC developers to enhance the DIRAC software framework [13], originally developed by LHCb [14]. The new cloud plug-in developed by the PIC and USC teams integrates a cloud broker and user authentication, and supports different cloud managers such as OpenNebula or CloudStack [15]. Currently, MEGHA members are working to enable virtual organizations (VOs), i.e. dynamic sets of users, to share federated cloud resources.

Beyond the technical challenges

Software scaling challenges derive not only from new technologies, but also from business models, methods, processes and tools. To engage with this topic, JCIS held a Conference on Service Science and Engineering, integrating the Scientific and Technical Conference on Web Services and SOA (JSWEB) and the Workshop on Business Processes and Service Engineering (PNIS) [16].

To continue this work, the SCALARE project (SCALing softwARE) [17] will soon start, uniting several European partners to address scaling software challenges across several dimensions.

Conclusion

Digital technologies are transforming our society faster than ever, and the relevance of software in non-software markets is increasing. The approaches presented in this article are only a few of many tackling scaling software challenges.


Cloud federation has been validated as a suitable starting point, although the availability of technical staff could become a bottleneck in the future. Increased efficiency in software development is needed, and more research is required to fully understand the implications of merging the physical and digital worlds.
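One of the federation challenges listed earlier, an accounting system that integrates the records of multiple cloud managers, can be sketched in a few lines. The record fields and manager names below are hypothetical examples invented for illustration; a real deployment would map each manager's native accounting output into such a common schema:

```python
# Sketch of federated cloud accounting: normalize per-manager usage records
# into one view so federation-level governance (quotas, reporting) can work.
# The record format and the sample data are hypothetical illustrations.
from collections import defaultdict

# Native accounting records, as each cloud manager might report them.
opennebula_records = [
    {"user": "alice", "vo": "astro", "cpu_hours": 120.0},
    {"user": "bob",   "vo": "hep",   "cpu_hours": 40.5},
]
cloudstack_records = [
    {"user": "alice", "vo": "astro", "cpu_hours": 30.0},
]

def federate(*sources):
    """Aggregate CPU-hours per virtual organization across all managers."""
    usage = defaultdict(float)
    for records in sources:
        for rec in records:
            usage[rec["vo"]] += rec["cpu_hours"]
    return dict(usage)

total = federate(opennebula_records, cloudstack_records)
print(total)   # {'astro': 150.0, 'hep': 40.5}
```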


References

[1] Digital economy definition [Online]. Available: http://en.wikipedia.org/wiki/Digital_economy
[2] Organisation for Economic Co-operation and Development (OECD) [Online]. Available: http://www.oecd.org/sti/ind/44131881.pdf
[3] Open Source Middleware for Open Systems in Europe (OSMOSE) project [Online]. Available: http://www.itea2.org/project/index/view/?project=46
[4] Open Source Infrastructure for Run-time Integration of Services (OSIRIS) project [Online]. Available: http://www.itea2.org/project/index/view/?project=135
[5] Open Source Ambient Intelligence Commons (OSAmI-Commons) project [Online]. Available: http://www.itea2.org/project/index/view/?project=230
[6] Intercloud definition [Online]. Available: http://en.wikipedia.org/wiki/Intercloud
[7] MEGHA Working Group [Online]. Available: http://wiki.rediris.es/megha/MainPage
[8] Spanish e-Science Network [Online]. Available: http://www.e-ciencia.es/
[9] ICT Commission of the Spanish University Chancellors Conference (CRUE-TIC) [Online]. Available: http://www.crue.org/TIC/
[10] Open Cloud Computing Interface (OCCI) [Online]. Available: http://occi-wg.org/
[11] rOCCI server [Online]. Available: https://github.com/gwdg/rOCCI-server
[12] Thijs Metsch, Andy Edmonds: Open Cloud Computing Interface. OGF.org (2010) [Online]. Available: http://goo.gl/MxX19
[13] Distributed Infrastructure with Remote Agent Control (DIRAC) [Online]. Available: http://diracgrid.org/
[14] Large Hadron Collider beauty (LHCb) [Online]. Available: http://lhcb-comp.web.cern.ch/lhcb-comp/DIRAC/
[15] Méndez, V., Fernández, V., Graciani, R., Casajus, A., Fernández, T., Merino, G., Saborido, J.J.: The integration of CloudStack and OCCI/OpenNebula with DIRAC. Journal of Physics: Conference Series (2013)
[16] IX Conference on Service Science and Engineering (JCIS) [Online]. Available: http://www.kybele.etsii.urjc.es/jcis2013/
[17] Scaling Software (SCALARE) project [Online]. Available: http://scalare.org/about-scalare/
[18] Black Duck KnowledgeBase [Online]. Available: http://www.blackducksoftware.com/products/knowledg-base
[19] EMC Corporation [Online]. Available: http://www.emc.com/leadership/programs/digital-universe.htm
[20] IBM [Online]. Available: http://www-01.ibm.com/software/data/bigdata/

Quick facts on the software and data explosion

The Black Duck KnowledgeBase [18] includes information from 800,000 projects in more than 5,500 sites and 2,200 software licenses.

Information in the world is doubling every two years [19]

2.5 quintillion bytes of data are created every day.

Over 90% of the data in the world today has been created in the last three years [20].
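The doubling claim above compounds quickly; a minimal back-of-envelope check in Python:

```python
# If the world's information doubles every two years [19], volume grows by
# a factor of 2**(years / 2). A single decade therefore means ~32x growth.
def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    return 2 ** (years / doubling_period)

print(growth_factor(10))   # 32.0
```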


Alfredo Matos (CMS)
Miguel Ponce de Leon (TSSG)
Rui Ferreira (IT Aveiro)
João Paulo Barraca (IT Aveiro)

Widespread use of free/libre/open source software (FLOSS) in European funded projects is vital to innovation transfer, but often, such software doesn’t enter general use. Legal issues, lack of business drivers, incomplete documentation and lack of knowledge about FLOSS are some of the most common reasons for this.

PROSE survey on requirements for hosting open source software projects


PROSE

PROSE, an EU-funded project tasked with promoting FLOSS, aims to provide a common cloud platform on which open source projects can be hosted and information shared. But how would such a platform work? And what would be its main requirements and features? PROSE turned to ICT FP7 participants for answers.

Consultation process

Survey questions were divided into four themes (see Figure 1). Most questions were optional, allowing respondents to focus on the areas they considered most relevant. The survey was disseminated through EU ICT projects, resulting in 42 anonymous responses, 25 of which came from respondents identifying themselves as members of universities, research laboratories or companies.

Choosing a platform

Privacy (e.g. private repositories or restricted content) stood out as a key driver when choosing a platform: ICT projects operate a mixed set of closed and open components, where only some results are public. This makes it difficult for such projects to reside entirely inside an open source forge.

Another identified requirement dealt with version control systems (VCS): while Git was clearly preferred, Subversion still gathered significant support (see Figure 2).

The need for software quality metrics was also identified. While SourceForge¹, GitHub² and Ohloh³ partially address this need, their offerings are still far from the kind of metrics we envision. Ideally, metrics should reflect project success. In the context of EU ICT, success can be defined by progression, dissemination and cooperation. Most participants identified the number of downloads and community ratings as good dissemination metrics. Other metrics can be obtained by analyzing activity and component (re)use. Interestingly, few participants found online social networking to be important.

From the point of view of collaboration, mailing lists, forums and wikis remain the most popular solutions.

Figure 1. Goals for each survey question group:

Source Code Hosting (G1): identify the primary tools for source code management, and the main capabilities and limitations for individual projects hosted in the forge.

Collaboration Tools (G2): establish the key collaboration mechanisms and recognize the associated collaboration models that drive software development within EC ICT projects.

Security and Support (G3): determine what technical capabilities the platform must provide in order to adequately support its projects and ensure secure access for its users.

Project Metrics and Statistics (G4): identify what software quality metrics can be built from the multitude of available data that accurately convey a measure of the quality of a project's results.


Figure 2. Importance of different version control software solutions (values < 0.5 express indifference or rejection by respondents):

Git: 31.4% (0.18)
Subversion (SVN): 30.1% (0.17)
Mercurial: 10.0% (0.06)
Bazaar: 7.3% (0.04)
CVS: 6.4% (0.04)
Darcs: 6.2% (0.03)
TFS: 5.5% (0.03)
Visual SourceSafe: 3.2% (0.02)


1 SourceForge: http://sourceforge.net
2 GitHub: http://github.com
3 Ohloh: http://ohloh.net
4 Allura platform: http://incubator.apache.org/allura/
5 Available at the ICT PROSE website: http://www.ict-prose.eu


Allura-based PROSE platform

In light of these survey results, PROSE moved towards selecting the Allura platform⁴ for its cloud offering. Allura addresses most of our requirements, and also offers the possibility of self-hosting, platform customization, support for multiple tools (especially for VCS, supporting both Git and Subversion), and the inclusion of external cutting-edge functionality, such as advanced metrics. Such metrics may be sourced from other ICT projects, thereby increasing collaborative impact.

The choice of Allura will also allow us to develop a close relationship with the (Allura-based) SourceForge ecosystem, with its high Alexa page rank and millions of users: establishing an ICT neighborhood on SourceForge bridges the interests of both communities and further boosts the potential of the PROSE platform.

Following on from this, the PROSE survey⁵ is now in its second phase and will continue to shape our future decisions. Thank you to everyone who participated; we look forward to an ongoing collaboration.


NEWS & EVENTS

Announcing the SUCRE EU-Japan Experts Group

International collaboration on topics of common interest is of utmost importance for the future of cloud computing and open source in Europe. Responding to this concern, the European Commission is already investing in collaborative research with major cloud stakeholders worldwide, particularly with researchers and industries in Japan and South-East Asia.

In this context, the EU-funded SUCRE project set out to support the exploitation of Open Clouds in Europe and to foster an international dialogue on Open Cloud interoperability between Europe and Japan. To meet this twofold, ambitious goal, the project has successfully set up and now operates the SUCRE EU-Japan experts group. The group has engaged, and benefits from, a number of high-profile stakeholders from the academia and industry of both regions. As of mid-January 2013, the group members, together with SUCRE, have initiated a dialogue on Open Cloud interoperability and collaboration opportunities between Europe and Japan. The discussion is carried out both online and offline, through a variety of means such as virtual meetings, collaborative editing tools, mailing lists and a dedicated Facebook group.

The results of these activities will be captured in a final report, to be delivered by SUCRE in March 2014, summarizing all discussions and findings of the SUCRE EU-Japan experts group. Most importantly, the report will include a set of recommendations from the experts on the interoperability of European and Japanese Open Clouds, as well as on future collaboration prospects.

The SUCRE Young Researchers Forum

The SUCRE consortium is pleased to announce the organization of a Young Researchers Forum that aims to bring the achievements and potential of open-source cloud developments to the researchers of tomorrow.

The event, focusing on Cloud Computing and Open Clouds, will take place at the Karlsruhe Research Institute, Germany, from 23-24 September 2013 and, among other goals, aims to offer participants the opportunity to network with other researchers, international experts and practitioners across disciplinary and national boundaries.

As such, the main target audience is junior researchers at a pre-PhD level who are embarking on research programmes in which Cloud Computing and Open Source are significant components. Internationally established lecturers will give talks, and there will be an opportunity for hands-on tasks. This is in line with the goal of establishing and sustaining the missing connection between young people, the researchers and innovators of tomorrow, and experienced cloud practitioners, researchers and policy makers. For further information and to register, please visit the project website at http://www.sucreproject.eu/summer-school and/or contact the organisers, Prof. Rizos Sekallariou ([email protected]) and/or Mrs. Eleni Toli ([email protected]).


Editorial Board

Prof. Alex Delis, SUCRE Coordinator, National & Kapodistrian University of Athens, Greece
Dr. Norbert Meyer, Head of the Supercomputing Department at the Poznan Supercomputing Center, Poland
Prof. Dr. Keith Jeffery, President of ERCIM, U.K.
Dr. Yuri Glikman, OCEAN project Coordinator, Fraunhofer Institute, Germany
Dr. Toshiyasu Ichioka, EU-Japan Centre for Industrial Cooperation, manager of the FP7 project JBILAT, Japan
Mrs. Cristy Burne, Scientific Editor and Journalist, Australia

Coordination by Mrs. Eleni Toli, National & Kapodistrian University of Athens, Greece and Giovanna Calabrò, Zephyr s.r.l., Italy

This publication is supported by EC funding under the 7th Framework Programme for Research and Technological Development (FP7). This Magazine has been prepared within the framework of FP7 SUCRE - SUpporting Cloud Research Exploitation Project, funded by the European Commission (contract number 318204). The views expressed are those of the authors and the SUCRE consortium and are, under no circumstances, those of the European Commission and its affiliated organizations and bodies.

The project consortium wishes to thank the Editorial Board for its support in the selection of the articles, the DG CONNECT Unit E.2 – Software & Services, Cloud of the European Commission, and all the authors and projects for their valuable articles and inputs.

OTHER RELATED EVENTS

The Euro-Par 2013 conference will take place in Aachen, Germany, from August 26th until August 30th, 2013. The conference is jointly organized by the German Research School for Simulation Sciences, Forschungszentrum Jülich, and RWTH Aachen University in the framework of the Jülich Aachen Research Alliance. Further information at http://www.europar2013.org/conference/conference.html

Euro-Par 2013

CLOUDZONE is the biggest cloud fair in the German-speaking countries. It developed from the Trendcongress, which has taken place successfully in Karlsruhe since 2008. It will be held for the fifth time at the Karlsruhe Fair Center on 15th-16th May 2013. For further information, and to find out who should attend this event, please visit http://www.cloudzone-karlsruhe.de/

CLOUDZONE 2013

The event will be hosted by SURFnet, the Dutch National Research and Education Network and held in the picturesque city of Maastricht on 3rd – 6th June 2013. For further information and to register, please visit https://tnc2013.terena.org/

TNC2013 - TERENA Networking Conference

This key conference will be held at the Marriott Hotel in Heidelberg, Germany on 23rd- 24th September 2013. Further information at http://www.isc-events.com/cloud13/

ISC Cloud Conference 2013

This conference will take place in Karlsruhe, Germany from September 30th to October 2nd 2013. Further information at http://socialcloud.aifb.uni-karlsruhe.de/confs/CGC2013/Calls.php

International Conference on Cloud and Green Computing (CGC2013)

To discuss this emerging enabling technology of the modern services industry, CLOUD 2013 invites you to join the largest academic conference exploring modern services and software sciences in the field of Services Computing, formally promoted by the IEEE Computer Society since 2003. The event will take place in Santa Clara, CA, United States on 27th June - 2nd July 2013. For further information and to register, please visit http://www.thecloudcomputing.org/2013/

IEEE 6th International Conference on Cloud Computing

This key event will take place in Valencia, Spain on May 27th - June 1st 2013. For further information and to register please visit www.iaria.org/

The Fourth International Conference on Cloud Computing, GRIDs, and Virtualization (CLOUD COMPUTING 2013)


CloudSource
Cloud computing & open source

Issue 1 - April 2013

Design & layout: Paul Davies
www.de-clunk.com
[email protected]


Recommended