Scientific Programming 19 (2011) 231–251
DOI 10.3233/SPR-2011-0332
IOS Press

GSSIM – A tool for distributed computing experiments

Sławomir Bąk a, Marcin Krystek a, Krzysztof Kurowski a, Ariel Oleksiak a,∗, Wojciech Piątek a

and Jan Węglarz a,b

a Poznań Supercomputing and Networking Center, Poznań, Poland
b Institute of Computing Science, Poznań University of Technology, Poznań, Poland

Abstract. In this paper we present the Grid Scheduling Simulator (GSSIM), a comprehensive and advanced simulation tool for distributed computing problems. Based on a classification of simulator features proposed in the paper, we define the problems that can be simulated using GSSIM and compare it to other simulation tools. We focus on an extension of our previous works including advanced workload generation methods, simulation of a network with advance reservation features, handling specific application performance models and energy efficiency modeling. Some important features of GSSIM are demonstrated by three diverse experiments conducted with the use of the tool. We also present an advanced web tool for the remote management and execution of simulation experiments, which makes GSSIM a comprehensive distributed computing simulator available on the Web.

Keywords: Distributed computing, simulations, resource management, scheduling

1. Introduction

Scheduling algorithms in distributed computing systems have been the subject of intensive research over the last decade. However, research experiments evaluating and comparing these algorithms are often difficult to conduct. This is caused by many problems including, for example, difficulties in obtaining exclusive access to large-scale infrastructures for research purposes or the lack of certain functionalities in real resource management systems, such as advance reservation (AR). Furthermore, the emergence of new computing paradigms such as grids, clouds and multi-core processing has increased the importance of various aspects of distributed computing, for example virtualization and energy efficiency. Therefore, due to the diversity of distributed resource management systems and the significant technical effort needed to establish large-scale computing infrastructures, simulations are commonly used to evaluate candidate algorithms and architectures. However, most simulations have been developed for a specific purpose, so that they cannot be re-used elsewhere. For this reason, several generic simulation tools were introduced, such as SimGrid [6] and GridSim [4]. Some of these tools provide a good basis for the implementation and simulation of a wide range of algorithms. Nevertheless, in most cases developers must implement experiments by themselves using just the basic functionality of the simulators. In consequence, setting up an experiment requires a lot of work and the result is rarely reusable by other researchers.

*Corresponding author: Ariel Oleksiak, Poznań Supercomputing and Networking Center, ul. Noskowskiego 10, 61-704 Poznań, Poland. E-mail: [email protected].

To address these issues, we introduced the Grid Scheduling Simulator (GSSIM), which provides an automated framework for the management of experiments related to resource management in distributed computing environments [17]. GSSIM achieves this through a flexible design of the architecture and interactions between scheduling components, the possibility of plugging scheduling algorithms into the simulated environment, modeling synthetic workloads and adopting real traces in popular formats, configuration of the computing infrastructure topology at both the logical and physical levels, and many other features described in detail further on.

The GSSIM framework is complemented by a portal which enables online access to the simulator via a user-friendly experiment editor, workload generator and experiments repository. The rich web interface allows executing simulation experiments remotely, provides access to workloads, resource descriptions and implementations of algorithms, and enables interactive visualization of results. In this way, GSSIM provides a comprehensive environment enabling researchers to test resource management algorithms and architectures, and to exchange not only workloads but also results of experiments and implementations of algorithms.

1058-9244/11/$27.50 © 2011 – IOS Press and the authors. All rights reserved

The remaining part of the paper is organized as follows. In Section 2 we propose a classification of simulator features and, on this basis, we present GSSIM in comparison with other simulation tools. Section 3 provides the details of the specific features of GSSIM. In particular, it discusses the simulated architecture, introduces the workload management concept, explains how to incorporate a specific application performance model into simulations, describes an extension for flow-based simulation of networks with advance reservations, presents energy-efficiency modeling and demonstrates an advanced web interface for remote management of experiments. Section 4 presents examples of experiments that illustrate the collection, analysis and visualization of results. Final conclusions are given in Section 5.

2. Simulation of distributed computing systems

A wide spectrum of distributed computing problems may need simulations to analyze their properties and possible solutions. Proper modeling of these problems requires a number of features that must be provided by simulation tools. In order to compare simulators with respect to their features and supported distributed computing problems, we summarize the popular scheduling problems common in distributed computing systems in Section 2.1 and introduce a classification of simulation features in Section 2.2. On this basis, available simulators are compared in Section 2.3, while a summary of the problems that can be simulated with the use of GSSIM is presented in Section 2.4.

2.1. Scheduling problems

A large number of scheduling problems have been studied in the literature for many years. Attempts to classify and analyze them can be found in [2,31]. The most important classes are distinguished and listed below.

On-line and off-line scheduling. In the former case, at a certain point in time there is no information about future jobs. The latter case assumes knowledge about all jobs during scheduling.

Clairvoyant and non-clairvoyant scheduling. In most real situations the exact job execution time is unknown a priori. However, sometimes scheduling is based on execution times given by users or estimated on the basis of previous runs (many algorithms in scheduling theory assume knowledge of execution times). These cases are referred to as non-clairvoyant and clairvoyant scheduling, respectively.

Time constraints. If job processing times are known (or estimated), requirements concerning time, or time constraints, can be used. The most common time constraints in the literature include ready times (the earliest job start time), due dates (the preferred job completion time) and deadlines (the latest job completion time).
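As a toy illustration of how such constraints enter a scheduling decision (illustrative Python, not GSSIM code; the job fields and the earliest-deadline-first rule are our own choices for the sketch), consider single-machine scheduling of jobs with ready times and deadlines:

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    processing_time: int
    ready_time: int = 0      # earliest allowed start
    deadline: int = 10**9    # latest allowed completion

def edf_schedule(jobs):
    """Single-machine earliest-deadline-first; returns (name, start, end) triples."""
    t = 0
    schedule = []
    for job in sorted(jobs, key=lambda j: j.deadline):
        start = max(t, job.ready_time)   # respect the ready time
        end = start + job.processing_time
        schedule.append((job.name, start, end))
        t = end
    return schedule

jobs = [Job("a", 3, ready_time=0, deadline=9),
        Job("b", 2, ready_time=1, deadline=4),
        Job("c", 1, ready_time=0, deadline=6)]
print(edf_schedule(jobs))   # [('b', 1, 3), ('c', 3, 4), ('a', 4, 7)]
```

Here job b runs first simply because its deadline is tightest, and in this instance every job still completes before its deadline.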

Workload types. Depending on the distributed computing infrastructure and the scheduling algorithms to be studied, researchers may require diverse structures of workloads. A classification of workloads and their main properties is proposed in Fig. 1 and summarized below.

• Parallelism. Single tasks may be sequential, parallel (running on multiple processors), distributed (running on multiple nodes) or distributed cross-domain (able to run on machines in multiple administrative domains, or sites).

• Number of processors. According to the classification given in [8], tasks can also be divided into rigid, moldable and malleable ones. The first two types use a constant number of CPUs, either specified exactly in advance (rigid tasks) or set at the beginning of task execution (moldable tasks). The number of processors used by a malleable task may change during runtime.

• Preemption. Tasks that allow preemption can be suspended during their execution and resumed later on (possibly on another node). Non-preemptive tasks must run until they finish; otherwise they must be restarted.

• Time dependencies. Tasks may have time dependencies on other tasks. If they have no dependencies they are called independent tasks. Otherwise, they are tasks with precedence constraints, or simply workflows. In the literature various structures of workflows can be found, such as chains, trees, uniconnected activity networks and general (arbitrary) graphs.

• Number of tasks. Workloads may contain single tasks submitted by specific users or allow a single user to submit multiple tasks, e.g., the so-called parameter sweep jobs or workflows.
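The precedence constraints mentioned above can be made concrete with a short sketch. The following illustrative Python (not part of GSSIM; the task names and the dict-based DAG encoding are ours) orders the tasks of a workflow so that every task runs only after all of its predecessors:

```python
from collections import deque

def topo_order(tasks, deps):
    """Order workflow tasks so each task follows all of its predecessors.

    tasks: iterable of task names; deps: dict mapping task -> set of predecessors.
    Raises ValueError if the dependencies contain a cycle (not a valid workflow).
    """
    indeg = {t: len(deps.get(t, ())) for t in tasks}
    succ = {t: [] for t in tasks}
    for t, preds in deps.items():
        for p in preds:
            succ[p].append(t)
    ready = deque(sorted(t for t, d in indeg.items() if d == 0))
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for s in succ[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    if len(order) != len(indeg):
        raise ValueError("cycle detected: not a valid workflow")
    return order

# A small fork-join workflow: prep feeds two parallel stages, merged at the end.
print(topo_order(["prep", "left", "right", "merge"],
                 {"left": {"prep"}, "right": {"prep"}, "merge": {"left", "right"}}))
```

A workflow scheduler dispatches tasks in such an order (or, more generally, runs any task whose predecessors have all completed), which is exactly what distinguishes workflows from sets of independent tasks.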


Fig. 1. Properties of workloads. (Colors are visible in the online version of the article; http://dx.doi.org/10.3233/SPR-2011-0332.)

Quality of service. One of the important aspects of some distributed systems, such as Grids, is the lack of full control over all resources. For this reason, one must take into consideration the impact of local policies and other users on the results of scheduling, and either manage jobs in queues or use advance reservation techniques to deliver the requested quality of service (QoS). In the first case, the start time is not guaranteed a priori and jobs usually wait in queues. In the second case, resources are leased, using advance reservation, for a certain period and users obtain information about the start time of the reservation.
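To make the advance reservation idea concrete, the sketch below (illustrative Python, not GSSIM's reservation interface; the function and its calendar encoding are hypothetical) finds the earliest window of a requested duration on a single resource with existing reservations:

```python
def earliest_slot(reservations, duration, not_before=0):
    """Earliest start >= not_before of a free window of length `duration`,
    given existing (start, end) reservations on a single resource."""
    t = not_before
    for start, end in sorted(reservations):
        if t + duration <= start:   # the window fits before this reservation
            return t
        t = max(t, end)             # otherwise skip past it
    return t                        # fits after the last reservation

booked = [(0, 4), (6, 9)]
print(earliest_slot(booked, 2))     # fits in the 4-6 gap -> 4
print(earliest_slot(booked, 3))     # no gap wide enough -> 9
```

A real AR system would additionally have to confirm or cancel the candidate slot atomically across resources, which is why negotiation protocols such as two-phase commit come into play (see Section 2.4).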

Space-shared and time-shared scheduling. Jobs can be assigned to specific sets of processors and machines, or scheduled and rescheduled in time. The former is called space-shared and the latter time-shared scheduling.

Adaptive scheduling. Scheduling problems can also be classified in terms of their adaptation to changing conditions. Ordering from the least to the most adaptive case, there are algorithms which do not assume rescheduling at all, algorithms which assume rescheduling before runtime (when jobs are pending in a queue or an advance reservation for them is established), and algorithms with runtime rescheduling including suspending, resuming and migrating jobs [19].

Heterogeneity of resources. In addition to the properties of jobs and scheduling systems themselves, various types of resources must be considered to provide a simulator applicable to a large scope of scheduling problems. First of all, resources may be homogeneous or heterogeneous. Homogeneous resources can be modeled as identical processors, i.e. processors of the same speed. Otherwise, for heterogeneous resources, the execution time is usually set as proportional to the different fixed speeds of particular processors (uniform processors). However, more complex dependencies between resource properties and execution time are possible; for instance, processing times of tasks on different processors can be arbitrary (unrelated processors).
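These three classic machine models (identical, uniform and unrelated processors) can be summarized in a few lines of illustrative Python (our own sketch, not a GSSIM interface):

```python
def execution_time(task_len, speeds=None, times=None, machine=0):
    """Task execution time under the three classic machine models.

    identical machines: unit speed everywhere        -> time = task_len
    uniform machines:   fixed speed s_i per machine  -> time = task_len / s_i
    unrelated machines: arbitrary per-machine table  -> time = times[machine]
    """
    if times is not None:            # unrelated: arbitrary per-machine table
        return times[machine]
    if speeds is not None:           # uniform: inversely proportional to speed
        return task_len / speeds[machine]
    return task_len                  # identical: same time everywhere

print(execution_time(12))                               # identical -> 12
print(execution_time(12, speeds=[1, 2, 3], machine=2))  # uniform -> 4.0
print(execution_time(12, times=[7, 30, 5], machine=1))  # unrelated -> 30
```

The uniform case is the one most often used for heterogeneous clusters; the unrelated case subsumes both and is the most general (and the hardest to schedule for).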

Evaluation criteria. A significant number of criteria can be applied to evaluate schedules. Among them there are global criteria such as makespan, mean flow time, resource utilization, etc., and user-specific criteria that can be defined for individual users. Additionally, criteria related to cost, energy consumption, reliability, etc., have recently been gaining importance.
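For instance, given a completed single-machine schedule as (job, start, end) triples, the global criteria named above can be computed directly (illustrative Python; the helper assumes all jobs are released at time 0, so flow time equals completion time):

```python
def evaluate(schedule, machines=1):
    """Global schedule criteria from (job, start, end) triples.

    Flow time is completion minus release; all releases are assumed to be 0
    here, so mean flow time reduces to mean completion time.
    """
    makespan = max(end for _, _, end in schedule)
    mean_flow_time = sum(end for _, _, end in schedule) / len(schedule)
    busy_time = sum(end - start for _, start, end in schedule)
    utilization = busy_time / (machines * makespan)
    return makespan, mean_flow_time, utilization

sched = [("a", 0, 3), ("b", 3, 5), ("c", 5, 6)]
print(evaluate(sched))   # (6, 4.666..., 1.0)
```

User-specific criteria would be computed the same way, but over the subset of jobs belonging to a given user.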

2.2. Simulation features

Due to the complexity and costs of building and operating testbeds, extensive research has been conducted in the area of computer-based simulation tools. As a result, a wide variety of simulation tools has emerged. A comprehensive taxonomy for the design of simulation tools modeling large and distributed systems has been presented in [29]. However, to compare various simulation tools, a more detailed classification of simulation features is needed. In this section we propose a classification that includes a variety of aspects that should be taken into consideration in a simulation of modern distributed computing systems. Most of these properties are essential to simulate the aforementioned scheduling problems. The summary of features of distributed computing simulators is illustrated in Fig. 2 and explained in more detail below.

Fig. 2. Essential features of distributed computing systems simulations. (Colors are visible in the online version of the article; http://dx.doi.org/10.3233/SPR-2011-0332.)

• Simulated architecture. Simulation tools may differ in terms of the architectures of the modeled systems as well as their level of detail. The logical architecture of distributed systems may assume centralized control of resources, a hierarchical topology of schedulers or fully distributed models. In addition, the physical topology may span from computing nodes through racks and clusters up to whole data centers or geographically distributed grids and clouds. The granularity of such topologies may also differ, from coarse-grained models to very fine-grained ones modeling single cores, memory hierarchies and other hardware details. Ideally, simulation tools should provide the user with enough flexibility in defining the system architecture and hierarchy of resources (including both the physical and logical structure).

• Simulated objects. A simulation environment ought to enable, apart from the simulation of resource components, modeling the behavior of other distributed computing entities. One of the most important elements of a simulation is the workload, which may be taken from real systems or generated synthetically. A workload may contain jobs ranging from single sequential jobs, through parallel and distributed jobs, up to whole workflows containing time dependencies between jobs. Workload types and elements are summarized in Fig. 1 and discussed in the previous subsection. More detailed simulations of jobs may require modeling application performance by taking into account the many factors that affect application execution, e.g., the processing unit architecture, application characteristics and input data. Besides complex information about the simulated hardware (concerning architecture, characteristics, state and energy profile) and network (with different network models), a simulation tool may allow performing specific actions on resources, for instance changing their states, or support advance reservation (AR). It is also important to permit experimental studies that involve the management of large amounts of distributed data, as well as to model the dynamic nature of distributed environments by handling different resource events. Modern simulation tools may also support aspects that have gained special attention recently, in particular virtualization and energy-aware techniques.

• Results of the simulation. The goal of each simulation run is to provide a set of results that allow the evaluation of a specific distributed computing system or scheduling method. These results may include: application performance metrics (e.g., execution time, flow time, waiting time, completion time), resource usage, cost, network performance, data management performance, energy consumption and thermal effects caused by computations.

• User interface. The user interface determines how the user interacts with the simulation tool. The interface is essential to achieve highly automated experiment execution and ease of use. First of all, the programming interface may differ between simulators, which may be delivered along with programming libraries or as more structured frameworks. Frameworks usually reduce the complexity of the simulation environment and make modifications more convenient. In addition to the programming interface, simulators may be accompanied by an intuitive graphical user interface that allows simulation parameters to be defined much faster and more easily than with ordinary configuration files. These graphical interfaces may include a user-friendly representation of simulation results to facilitate the analysis of the distributed computing system. A simulation tool may also be complemented by a Web interface in order to allow remote experiment execution and/or provide repository facilities.


2.3. Comparison of simulation tools

Tables 1 and 2 present the results of a comparative analysis of simulation tools based on the classification of simulator features proposed in Section 2.2. Simulation tools were selected for analysis based on their publications, availability and list of features. The contents of the tables are based on the most recent publications concerning the given tools. Moreover, all features were verified by experimental studies and a code analysis of the available simulators (only DGSim was not available for download at the time this survey was compiled).

GridSim [4], developed at the University of Melbourne, is a toolkit that provides means for modeling and simulating the base components that constitute parallel and distributed computing environments (grid users, applications, resources, schedulers and resource brokers) and for studying the involved scheduling algorithms. A flexible and extensible architecture allows modifying component behavior or incorporating new components into the existing infrastructure. Thus, GridSim is commonly adopted by other simulation tools that benefit from its core functionality.

Alea [13], developed at Masaryk University in Brno, is a Grid and cluster scheduling simulator. Alea is based on the GridSim toolkit and extends the original basic functionality by introducing some innovative solutions, e.g., "on the fly" job reading that leads to better scalability. The simulator is able to cope with general resource management problems such as the heterogeneity of jobs and resources, and dynamic runtime changes. It also supports a specific workload format – MetaCentrum [22].

The MaGate [11] simulator is a simulation-based implementation of the MaGate scheduler. Its goal is to provide a set of easy-to-use decentralized grid schedulers (which are able to collaborate with external grid services) and to help researchers study and evaluate different scheduling algorithms/models and workflows within various scenarios. The simulation process is supported by a grid network overlay simulator that provides services such as group communication and resource discovery.

SimGrid [6] is a joint project between the University of Hawaii at Manoa, the LIG Laboratory in Grenoble and the University of Nancy. It aims to provide core functionalities and facilities for the simulation of parallel and distributed applications in heterogeneous distributed environments. In particular, SimGrid provides programming environments to support both researchers who study their algorithms and need to run simulations quickly, and developers who want to build real distributed applications.

OptorSim [1,5] was initiated as part of the European DataGrid project. It is a modular simulation framework that enables users to perform experimental studies of optimisation strategies under different Grid configurations. In particular, OptorSim allows the analysis of various data replication algorithms and their impact on resource usage and job throughput in HEP data grids.

DGSim [12], whose development is led by the Delft University of Technology, aims at system and workload modeling in grid resource management architectures. DGSim focuses on automating experiment setup and management, and on optimizing the overall simulation process. Moreover, it introduces the concepts of a job selection policy and grid evolution, which models static changes in the Grid infrastructure considered in the long-term perspective. DGSim also provides some innovative solutions concerning grid inter-operation, grid dynamics and workload modeling.

The GroudSim [25] toolkit is developed at the University of Innsbruck. It allows simulating both Grid and Cloud computing. The main GroudSim features include the simulation of file transfers, the calculation of costs and background loading. The event module supports resource and network failures as well as recovery events affecting these entities. Simulations can easily be extended with any kind of probability distribution package.

Conclusions. In this section we reviewed several approaches to the simulation of distributed computing systems based on the proposed classification of simulator features. As shown in Table 2, most simulators focus on addressing specific problem areas. Since their general goals differ, they vary in terms of simulated architectures and usually allow modeling and simulating only a subset of distributed computing entities. Although almost all of the aforementioned toolkits are able to provide complete information about the simulated environment during the simulation, there is a lack of tools that allow performing specific actions on resources and in this way affecting their behavior. Moreover, there is very little support for application performance modeling and the incorporation of virtualization techniques. Most simulation tools handle different resource events, mainly concerning resource dynamics or resource failures. One should note that, among the simulators discussed in this paper, simulation of distributed data management is closely related to the network modeling capability and thus these features are either both supported or both absent. Several tools


Table 1
Comparison of simulation tools according to the simulated objects

GSSIM – Workload input: real (SWF, GWF) + synthetic (generator); job model: sequential, parallel, distributed, rigid, moldable, preemptive jobs, workflows; application: performance modeling; hardware: information (architecture, characteristics, state, energy profile) + control (energy management, state) + AR; network: flow model + AR + background traffic generator; data: file transfer; events: resource dynamics, resource and network failures (FTA), recovery, security; virtualization: yes.

GridSim – Workload input: real (SWF, user-defined – primitive); job model: sequential, parallel, distributed, rigid, moldable, preemptive jobs; application: no; hardware: information (architecture, characteristics, state) + AR; network: flow and packet model + background traffic generator; data: data storage + file transfer; events: resource failures (FTA); virtualization: yes (CloudSim – separate tool).

Alea – Workload input: real (SWF, GWF, MetaCentrum); job model: sequential, parallel, rigid, moldable jobs; application: no; hardware: information (architecture, characteristics, state); network: no; data: no; events: resource failures; virtualization: no.

MaGate – Workload input: real (GWF) + synthetic (primitive generator); job model: sequential, parallel, rigid, moldable jobs; application: no; hardware: information (architecture, characteristics); network: no; data: no; events: resource dynamics; virtualization: no.

SimGrid – Workload input: XML format (own); job model: sequential, parallel, distributed jobs, workflows; application: performance modeling; hardware: information (architecture, characteristics, state); network: flow and packet model + background traffic generator; data: file transfer; events: resource dynamics + resource failures; virtualization: no.

OptorSim – Workload input: text format (own); job model: sequential jobs; application: no; hardware: information (architecture, characteristics, state); network: flow model + background traffic generator; data: data storage + file transfer; events: no; virtualization: no.

DGSim – Workload input: real (SWF, GWF) + synthetic (generator); job model: sequential, parallel, distributed, rigid jobs, workflows; application: no; hardware: information (architecture, characteristics, state); network: no; data: no; events: grid dynamics + evolution model; virtualization: no.

GroudSim – Workload input: real (GWF); job model: sequential jobs; application: no; hardware: information (architecture, characteristics, state); network: flow model; data: file transfer; events: resource and network failures, recovery; virtualization: yes.


Table 2
Comparison of simulation tools according to the simulated architecture, user interface and results of the simulation

GSSIM – Physical architecture: user-defined; logical: centralized, decentralized, hierarchical; programming interface: framework (introduced layers, plugins); input: comprehensive GUI – experiment editor; output: text statistics, interactive charts; on-line access: remote experiment management, workload, scheduling algorithms and experiments repository; results: application performance, resource usage, network performance, energy consumption, heat dissipation.

GridSim – Physical: grid/cluster/computing node/CPU; logical: centralized, hierarchical; programming interface: generic library; input: configuration files; output: text statistics, charts; on-line access: no; results: application performance, cost, resource usage.

Alea – Physical: grid/cluster/computing node/CPU; logical: centralized, hierarchical; programming interface: framework (introduced layers, plugins); input: configuration files; output: text statistics, charts; on-line access: no; results: application performance, resource usage.

MaGate – Physical: grid/cluster/computing node/CPU; logical: decentralized; programming interface: framework (plugins); input: configuration files + command line or GUI; output: text statistics, charts; on-line access: no; results: application performance, resource usage.

SimGrid – Physical: cluster/computing node/CPU; logical: centralized, decentralized; programming interface: generic library; input: configuration files + generator; output: text statistics, charts; on-line access: no; results: application performance, cost, resource usage, network performance.

OptorSim – Physical: grid/cluster/CPU; logical: centralized, hierarchical; programming interface: framework; input: configuration files + command line or GUI; output: text statistics, charts; on-line access: no; results: application performance, resource usage, data management performance.

DGSim – Physical: grid/cluster/CPU; logical: centralized, decentralized, hierarchical; programming interface: framework; input: configuration files; output: text statistics, charts; on-line access: workload repository; results: application performance, resource usage.

GroudSim – Physical: grid/cluster/node; logical: centralized; programming interface: framework; input: configuration files; output: text statistics, charts; on-line access: no; results: application performance, cost, resource usage.


accept workloads in various formats and are capable of supporting more sophisticated models than simple sequential jobs. Simulators are also evolving towards user-friendly tools by offering means to visualize the simulation results. Nevertheless, some of them still require input data to be provided manually as plain-text configuration files.

Compared to other tools, GSSIM allows simulating a wide scope of physical and logical architectural patterns. In particular, GSSIM provides means of simulating complex distributed architectures containing models of whole data centers, containers, racks, nodes, etc. While many of the presented tools focus on addressing specific issues, GSSIM supports simulations of a wide variety of entities. It covers most of the objects simulated by other tools and adds some innovative features, such as network advance reservation or the generation of events related to resources, the network and security, which are unique among all the simulators. It also provides means (through the definition of specific plugins) for complex modeling of the expected application performance. As far as the programming interface is concerned, most of the popular available tools (e.g., GridSim and SimGrid) are generic libraries supporting only the core functionalities. They are flexible but require substantial effort to develop experiments. GSSIM provides an easy-to-use framework that facilitates this process through several types of plugins, the grouping of experiments, and workload management. Moreover, GSSIM provides an advanced graphical user interface containing intuitive editors for input data modeling and comprehensive statistics presented in a user-friendly and interactive manner. GSSIM is the only distributed computing simulator enabling remote experiment execution on the Web. GSSIM offers a comprehensive simulation environment that should satisfy not only grid researchers but also cloud, data center and network administrators, as well as application users and developers. In the following subsection we summarize the scenarios and scheduling problems addressed by GSSIM, while the above features, being the main advantages of GSSIM, are described in more detail in Section 3.

2.4. Scenarios and scheduling problems in GSSIM

GSSIM can be used to simulate the vast majority of scheduling problems in distributed computing systems. It enables simulations of many scheduling strategies applied to various types of applications. In particular, it can simulate scheduling of multiple independent jobs at once, various kinds of parallel jobs, and whole workflows. Moreover, GSSIM is able to handle rigid and moldable jobs, as well as preemptive jobs. Through the appropriate generation of workload (with job arrival times) and implementation of scheduling plugins, an analysis of both on-line and off-line strategies is possible. In addition to types of problems related to workload properties, GSSIM allows defining time constraints in workloads and taking them into account in scheduling algorithms. In clairvoyant scheduling, knowledge concerning execution times and possible constraints has a significant impact on the chosen scheduling algorithms and usually requires scheduling with advance reservation in distributed computing infrastructures. As for advance reservation, GSSIM enables simulating various scheduling problems, including both best-effort and QoS-based approaches. To realize the latter case, GSSIM supports negotiations between global schedulers and resource providers that are built on top of local schedulers. This advance reservation mechanism includes a two-phase commit protocol. More details concerning advance reservation and negotiations in GSSIM can be found in [17,24]. Additionally, GSSIM provides the possibility of scheduling based on performance estimations. These estimations can be generated on the basis of processing times included in the workload or using a custom algorithm implemented by a researcher. Furthermore, more complex dependencies between resource parameters and execution time can be modeled by the implementation of time estimation plugins. In this way GSSIM also deals with the heterogeneity of resources. The details of execution time estimation are presented in Section 3.3. At a local level, a developer of a scheduling plugin has unlimited access to queues and running tasks. Therefore, a variety of both space- and time-sharing policies can be applied.
Moreover, a wide range of adaptive scheduling paradigms is supported, including suspending, resuming, and migrating jobs both before and during execution. Hence, for instance, developers can implement algorithms based on backfilling and preemption. For each experiment, detailed results are collected. They contain many basic metrics commonly used when evaluating scheduling algorithms, e.g., makespan, mean job completion time, maximum tardiness, resource utilization, etc. In addition to global criteria, GSSIM supports user-specific criteria which can be defined in the workload for each user separately. In this way, a researcher can study strategies aiming at finding schedules that take the perspectives of individual users into account.

Page 9: GSSIM – A tool for distributed computing experiments, downloads.hindawi.com/journals/sp/2011/925395.pdf · Scientific Programming 19 (2011) 231–251, DOI 10.3233/SPR-2011-0332

S. Bak et al. / GSSIM – A tool for distributed computing experiments 239

GSSIM has been successfully applied in a substantial number of research projects and academic studies. For instance, GSSIM allows grid computing researchers to study the performance of many scheduling algorithms in complex distributed computing infrastructures. In particular, within GSSIM it is possible to set up a wide variety of distributed computing architectures and perform repeatable experiments involving diverse scheduling strategies. They may vary from simple job scheduling policies, through multi-criteria resource management [21], up to complex scheduling problems with QoS requirements, as presented in [24] and Section 4.3. GSSIM can also facilitate the work of queuing system administrators by providing means of evaluating the efficiency of various queue configurations. Different approaches to this problem, containing studies on various queue types, an increasing number of queues, and various job selection policies, can be found in [18] and Section 4.1. As a network simulator, GSSIM has been used to study the problem of resource allocation with network resources for workflow applications, presented in [23]. It has also been adopted as a simulation framework in the European project Federica [7] in order to test different approaches concerning resource allocation in virtual network architectures. Moreover, it can also be exploited by network administrators to find possible bottlenecks in network configurations used for demanding HPC applications. Data center administrators or owners can benefit from the GSSIM energy module, as it provides comprehensive support for energy-aware scheduling experiments. In this way they can investigate how energy consumption depends on the workload type and scheduling policies. Detailed information concerning the energy module is presented in Section 3.5. Recently developed extensions provide new interesting features for both cloud providers and cloud users.
The former are able to optimize their cloud environment by tuning management policies and configuring virtual machines, while the latter can estimate the costs of leasing resources from cloud providers. Finally, distributed application developers have the possibility of tuning their applications to a specific computing infrastructure.

3. Simulator features

The comparison of simulation tools presented in Section 2 showed that GSSIM enables modeling and simulation of a wide spectrum of distributed computing problems. GSSIM addresses these issues with the set of sophisticated features described in the previous section. Due to its modular framework architecture, which corresponds to the real world, a wide scope of modules providing the aforementioned capabilities can be incorporated into the GSSIM environment. Details concerning the GSSIM architecture, including scheduling plugins, are presented in [17]. In this way, a number of extensions are available within GSSIM, including simulated architecture flexibility, advanced workload management, network simulation, application performance modeling, simulation of energy efficiency, and a web GUI enabling remote management and execution of simulation experiments. All these features are described in the following sections.

3.1. Simulated architecture

The main goal of GSSIM is to enable researchers to effectively perform experiments that contain simulations of distributed computing environments. Therefore, it assumes a distributed infrastructure with multiple administrative domains (also called sites in this paper) and scheduling entities. In general, GSSIM models two generic types of scheduling entities: global and local schedulers. A global scheduler is responsible for scheduling jobs to resources that belong to different administrative domains. To this end, it must interact with multiple sites (the most common example of a site is a computing cluster under the control of one of the popular queueing systems, such as PBS, LSF, SGE, etc.), including retrieving information about resources, submitting jobs, or creating reservations, depending on the specific settings and the type of considered scheduling problem. A local scheduler is responsible for managing resources that belong to a single administrative domain (site). It retrieves tasks and reservation requests from global schedulers. GSSIM allows building a hierarchy of local schedulers corresponding to the hierarchy of resource components over which a task may be distributed (e.g., clusters and computing nodes).

Having these two generic entities, GSSIM can be configured to model a large scope of architectural patterns. To this end, users may define, for each global scheduler, to which local schedulers or other global schedulers it can submit tasks and/or reservation requests. An example of defining distributed architectures with multiple workloads and Grid schedulers is presented in Fig. 3. In this example, we assume that there are two Grid schedulers, which manage their separate workloads. These schedulers have access to multiple local schedulers that can subsequently form various hierarchies. However, each Grid scheduler can access a different set of local ones. On the other hand, more than one Grid scheduler can submit tasks to a single local scheduler. This architecture is just an example, so obviously other configurations are possible, for instance, a fully distributed case with a separate workload for each scheduler.

Fig. 3. Example of distributed scheduling architecture in GSSIM; WL – Workload, GS – Grid scheduler, LS – Local scheduler.
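The scheduler hierarchy of Fig. 3 can be sketched as a simple in-memory structure. The sketch below is purely illustrative — the names (GS1, LS1, wl1.swf) and the dictionary layout are hypothetical and do not reflect GSSIM's actual configuration format:

```python
# Hypothetical model of the Fig. 3 setup: two Grid schedulers, each with
# its own workload, submitting to overlapping sets of local schedulers.
topology = {
    "GS1": {"workload": "wl1.swf", "targets": ["LS1", "LS2"]},
    "GS2": {"workload": "wl2.swf", "targets": ["LS2", "LS3"]},
}

def local_schedulers_of(topology, gs):
    # Local schedulers to which a given Grid scheduler may submit tasks.
    return topology[gs]["targets"]

def all_local_schedulers(topology):
    # The union of all reachable local schedulers.
    return sorted({ls for gs in topology.values() for ls in gs["targets"]})
```

Note that LS2 is reachable from both Grid schedulers, reflecting the property that more than one Grid scheduler can submit tasks to a single local scheduler.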

3.2. Workload management

In this section, details concerning the workload structure and management are described.

3.2.1. Workload structure
Experiments performed in GSSIM require a description of the jobs and tasks which will be scheduled during simulation. As a basic description, GSSIM uses files in the Standard Workload Format (SWF) [26] or its extension, the Grid Workload Format (GWF) [10]. In addition to the SWF file, a more detailed description of a job and its tasks can be provided in an additional XML file. Each XML file represents one job, and each job consists of multiple tasks. This form of description provides a scheduler with more detailed information about task requirements, user preferences, and execution time constraints, which are unavailable in SWF/GWF files. An example of such information is a set of parameters related to execution time constraints. They express the user's knowledge about task execution time and the user's requirements about the earliest start and the latest end time of a task – parameters essential in scheduling with advance reservations. Other parameters define dependencies between tasks in order to build workflows.

As mentioned in Section 2.1, jobs may have various shapes and levels of complexity. Thus, scheduling strategies may have a different scope and need different input data. Therefore, GSSIM distinguishes several levels of information about incoming jobs. These levels and the relationships between jobs and tasks are illustrated in Fig. 4.

3.2.2. Workload generator
The main purpose of the workload generator tool is to create synthetic workloads. It generates standard SWF workloads as well as additional parameters in an auxiliary file (in the XML format).

All elements of the workload mentioned in the previous section can be described by a number of attributes, from the number of tasks, task arrival time, and task runtime, through task resource requirements, such as the requested number of CPUs, up to user preferences. The input parameters of the workload generator cover most workload attributes. Each configuration parameter is described by standard statistical attributes, such as mean, minimum, maximum, and standard deviation, and may have a predefined probabilistic distribution, e.g., normal or constant.

Fig. 4. Levels of information about jobs.

One of the objectives of the workload generator is to create synthetic workloads which are similar to real ones [16]. Therefore, it is possible to define dependencies between any two parameters in the configuration file. Exact values can be replaced by a dependency in the form of a mathematical expression, which allows the use of basic operators such as +, −, ∗, /, (, ). For instance, a dependency between the number of CPUs and the memory used by jobs can be defined.

In addition to parameter dependencies, researchers can define parameters which have different probability distributions at different time periods. This feature allows reflecting the natural changes of task parameter distributions at certain times or, in other words, modeling daily cycles. For instance, tasks which are submitted during night hours may be longer than tasks submitted during daytime.

Another feature of the workload generator which allows creating more natural workloads is the use of multiple distributions for different parts of a workload. It is possible to define, for a single parameter, a number of distributions, each describing only some percentage of the generated values. For example, 30% of tasks may require 10 CPUs and the remaining 70% may require 5 CPUs.
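The two generator features above — per-fraction distributions and daily cycles — can be sketched as follows. This is a minimal Python illustration, not the generator's actual implementation; the 30%/70% CPU split and the runtime means are the example values assumed here:

```python
import random

def sample_cpus(rng):
    # Mixture of distributions: 30% of tasks request 10 CPUs,
    # the remaining 70% request 5 CPUs.
    return 10 if rng.random() < 0.30 else 5

def sample_runtime(rng, arrival_hour):
    # Daily cycle: tasks submitted at night (22:00-06:00) are drawn
    # from a distribution with a larger mean than daytime tasks.
    mean = 7200 if arrival_hour >= 22 or arrival_hour < 6 else 1800
    return max(1, int(rng.gauss(mean, 0.25 * mean)))

rng = random.Random(42)  # fixed seed for a repeatable synthetic workload
tasks = [(sample_cpus(rng), sample_runtime(rng, h % 24)) for h in range(1000)]
```

Real generators fit such distributions to observed traces; here the parameters are simply declared up front, mirroring the statistical attributes (mean, deviation, distribution type) described above.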

3.3. Application performance modeling

GSSIM provides means to include specific application performance models during simulations. To this end, an additional plugin and interface are included in the GSSIM framework. Implementation of this plugin allows researchers to introduce specific ways of calculating task execution time.

The following parameters can be applied to specify the execution time of a task:

• Processor type and parameters.
• Available memory.
• Task length (number of CPU instructions needed to complete a task).
• Network parameters.
• Task requirements.
• Input data size.

Based on these parameters, an estimated execution time can be calculated in various ways, depending on the specific applications and scenarios.

The basic plugin available within the GSSIM release implements the most common performance model of a task, i.e., a linear dependency between execution time and resource speed. According to the classification given in Section 2.1, this plugin implements a model with uniform processors. Time is calculated based on the actual task length (measured as a number of CPU operations) and the CPU speed (expressed as a number of operations per second). Let us denote the former parameter as l_i, where i is the number of a given task, and the latter parameter as μ_j, where j is the number of a machine. Additionally, let us assume that n_j is the number of processors of machine j allocated to this task. Then we obtain the actual execution time of task i on machine j, denoted as p_ij, using the following simple formula:

p_ij = l_i / (μ_j · n_j).  (1)

Of course, we can easily imagine more complex dependencies between execution time and the parameters of resources and applications. For instance, the speed-up resulting from parallelization is usually worse than linear. Therefore, instead of a proportional decrease of execution time for larger numbers of processors, one can model it using other functions, for instance of the form:

p_ij = l_i / (μ_j · ln n_j).  (2)

In the case of parallel and distributed applications we should also include information about available network parameters to model speed-up realistically. Other important issues include task memory requirements and machine memory limits.
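A time estimation plugin implementing Eqs (1) and (2) can be sketched as follows. This is a minimal illustration, not GSSIM's actual plugin interface, and the numeric values are assumptions chosen for the example:

```python
import math

def exec_time_linear(length, speed, n_procs):
    # Eq. (1): linear speed-up -- execution time is inversely
    # proportional to the number of allocated processors.
    return length / (speed * n_procs)

def exec_time_sublinear(length, speed, n_procs):
    # Eq. (2): sub-linear speed-up modeled with a logarithmic factor
    # (meaningful for n_procs >= 2, where ln(n_procs) > 0).
    return length / (speed * math.log(n_procs))

# A task of 9e9 operations on 3e9 ops/s processors, 4 of them allocated:
t_lin = exec_time_linear(9e9, 3e9, 4)      # 0.75 s
t_log = exec_time_sublinear(9e9, 3e9, 4)   # ~2.16 s, i.e. slower than linear
```

Comparing the two results illustrates the point above: the logarithmic model penalizes parallelization, so the same task finishes later than the linear model predicts.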

Using the parameters listed in this section, developers can, for instance, take into account the architectures of underlying systems, such as multi-core processors, or virtualization overheads, and their impact on the final performance of applications.

3.4. Flow-based simulation of network using advance reservation

Several simulation tools use a network model based on a packet-level network approach (see Section 2.3). This model assumes splitting the data sent over the network into packets limited by the Maximum Transmission Unit (MTU) size. As all packets are sent over the links, this approach causes a significant overhead when simulating large data transfers. For this reason, the network flow model was introduced [3]. It models data transfers as flows instead of sending large numbers of packets. The applied bandwidth sharing model uses simple min–max bandwidth fair sharing, i.e., each flow that shares a link receives an equal portion of the bandwidth.

GSSIM uses the flow networking concept because, with this fluid view of network traffic, the speed of simulations is largely improved by avoiding the need to packetize large network transfers, as shown in [3]. Furthermore, GSSIM enhances the network flow model with additional functions that provide more complex information about the network topology and with the network advance reservation functionality. These capabilities are relevant for experiments with co-allocation of various types of resources and for the analysis of data management aspects in distributed computing, especially in grids and clouds. The architecture of the flow-based network simulator supporting advance reservations, which was adopted in GSSIM, is illustrated in Fig. 5.

The figure shows an example of a network topology that consists of nodes: routers, sites, and links between them. It also presents two components which were added to GSSIM: the Network Manager and the Path Computing Element. Arrows show interactions between the components. The Network Manager provides basic information about the network, such as the network topology, bandwidth, and latency between nodes. Moreover, it is responsible for handling the network reservation process, including creating, canceling, and modifying reservations. It updates calendars which are associated with every link. A calendar represents the changes of the link bandwidth over a period of time. The second important module is the Path Computing Element (PCE), which supports the Network Manager by providing two network features: PCE finds the shortest path between two nodes by adopting Dijkstra's algorithm, and it calculates the maximum flow between each pair of sites. It returns a list of calendars for each connection between the nodes.

Fig. 5. Managing network advance reservations in GSSIM.


A common use case may look as follows. A scheduler queries the Network Manager about the network topology and then about the bandwidth on a given path. Then it decides whether to send data in the best-effort manner, relying on the information it received, or to send data using a reservation. In the second case it is necessary to create a reservation first. The reservation guarantees constant bandwidth while sending data within the reservation period. A scheduler sends its request to the Network Manager, which allows it to create a reservation on a particular path, or on the shortest path between given nodes, with a requested bandwidth in a given period of time. This results in updating the calendars related to the reserved links. Once a reservation has been created, data can be sent with the proper reservation identifier. A reservation can also be altered or cancelled.

In order to add the advance reservation capability to the network simulation, in addition to the changes of the architecture and the information exchanged among components, enhancements of the flow networking model were needed. In general, the duration of a network flow is the sum of the latency and the time needed to send data through the network. The latter is the ratio of the data size to the available bandwidth. This total time can be calculated using Eqs (3)–(5) [3]. Let LAT(f) be the total latency of a flow f from u to v. Then the total latency is the sum of the latencies of all edges on the path that connects the source u to the destination v, i.e.

LAT(f) = Σ_{(u′,v′)∈f} LAT(u′, v′).  (3)

The minimal bandwidth BW_min(f) is the smallest per-flow bandwidth available on any edge of the path between u and v and is given by the formula:

BW_min(f) = min_{(u′,v′)∈f} BW(u′, v′) / n(u′, v′),  (4)

where n(u′, v′) is the number of active flows over link (u′, v′).

Now, given that SIZE(f) is the number of bytes in flow f, the total duration of network flow f can be calculated as:

T(f) = LAT(f) + SIZE(f) / BW_min(f).  (5)

Taking advance reservations into consideration, Eq. (3) remains unchanged (since the latency is the same), while Eqs (4) and (5) must be transformed as follows:

BW_min(f, r_i) = BW_R((u, v), r_i) / n_{r_i}(u, v),  (6)

T(f, r_i) = LAT(f) + SIZE(f) / BW_min(f, r_i),  (7)

where r_i is a reservation of the network used to transfer the given data. The whole path (u, v) is considered instead of particular links (u′, v′), since within one reservation the same bandwidth is reserved at all links on the path. Thus BW_R((u, v), r_i) is the bandwidth allocated to reservation r_i, while BW_min(f, r_i) is the bandwidth available for flow f using reservation r_i. In this case n_{r_i}(u, v) denotes the number of active flows within a given reservation (multiple data transfers can be performed within a single reservation). The equations above express the bandwidth and duration of a network flow with respect to reservation r_i. If data is transferred in a best-effort manner, i.e., through non-reserved links, the total time is calculated as in (5), while the bandwidth is given by:

BW_min(f) = min_{(u′,v′)∈f} (BW(u′, v′) − BW_R(u′, v′)) / n(u′, v′).  (8)

The use of the flow networking concept with the presented extensions allows GSSIM to provide efficient simulations of the network, including reservation capabilities.
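The flow duration model of Eqs (3)–(7) can be sketched as follows. This is an illustrative Python sketch under the stated min–max sharing model, not GSSIM code; the path, bandwidths, and data sizes are example values:

```python
def flow_duration(edges, size):
    # Eqs (3)-(5): the total latency is the sum over the path's edges;
    # the flow's bandwidth is its smallest fair share on any edge.
    latency = sum(e["lat"] for e in edges)
    bw_min = min(e["bw"] / e["n_flows"] for e in edges)
    return latency + size / bw_min

def flow_duration_reserved(edges, size, bw_reserved, n_flows_in_res):
    # Eqs (6)-(7): a reservation fixes one bandwidth for the whole path,
    # shared only among the flows using the same reservation.
    latency = sum(e["lat"] for e in edges)
    return latency + size / (bw_reserved / n_flows_in_res)

# Two-link path; the second link (1e8 B/s, one active flow) is the bottleneck.
path = [{"lat": 0.01, "bw": 1e9, "n_flows": 4},
        {"lat": 0.02, "bw": 1e8, "n_flows": 1}]
t_best = flow_duration(path, 5e8)                  # 0.03 + 5e8/1e8 = 5.03 s
t_res = flow_duration_reserved(path, 5e8, 2e8, 1)  # 0.03 + 5e8/2e8 = 2.53 s
```

The example shows why a reservation pays off: with a guaranteed 2e8 B/s along the whole path, the 5e8-byte transfer finishes in roughly half the best-effort time, which is capped by the bottleneck link's fair share.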

3.5. Simulation of energy efficiency

GSSIM allows researchers to take the energy consumption issue into account in distributed computing simulations [14]. To introduce energy consumption into a simulation environment, appropriate energy consumption models must be used. The main goal of these models is to emulate the behavior of the real computing resource and the way it consumes energy. Due to its rich functionality and flexible environment description, GSSIM can be used to develop new energy consumption models or to examine energy management strategies. In more detail, GSSIM provides functionality to define the energy efficiency of resources and the dependency of energy consumption on resource load and specific applications, and to manage the power modes of resources. The energy consumption models provided by default in GSSIM can be classified into the following groups, starting from the simplest model up to the more complex ones:


The static approach is based on a static definition of resource power usage. The model calculates the total amount of energy consumed by the computing resource system as the sum of the energy consumed by all its components (processors, disks, power adapters, etc.). In the simplest case, specific power usage values are assigned to computing nodes. More advanced versions of this approach assume a definition of resource states along with their corresponding power usage. Energy states are defined separately for each component of the computing resource system, such as the processor, memory, disk, power adapter, etc. By default, similarly to processor C-States, the basic on/off/sleep/stand-by states are supported. However, the user can define any number of new, resource-specific states. For the processor it is also possible to define frequency (voltage) levels, the so-called P-States, at which the processor can operate with specific power usage levels. This model follows the changes of resource energy states and accumulates the amounts of energy defined for each state.

The resource load model expands the static energy state description and enhances it with real-time resource usage, most often simply the processor load. In this way it enables a dynamic estimation of power usage based on the resource's basic power usage and state (defined by the static resource description) as well as the resource load. For instance, it allows distinguishing the amount of energy used by an idle processor from that used by a processor at full load. In this manner, energy consumption is directly connected with the energy state and describes the average power usage of the resource working in its current state. For processors which can operate at more than one frequency level, such a description must be provided separately for each level.

The application-specific model allows expressing differences in the amount of energy required for executing various types of applications on diverse computing resources. It considers all defined system elements (processors, memory, disk, etc.) which contribute significantly to the total energy consumption. Moreover, it also assumes that each of these components can be utilized in a different way during the experiment and thus has a different impact on the total energy consumption. To this end, specific characteristics of resources and applications are taken into consideration. Various approaches are possible, including making the estimated power usage dependent on defined classes of applications, the ratio between CPU-bound and IO-bound operations, etc. All these dependencies can be modeled in the energy profiles, special plugins used to customize the estimation of energy consumption for specific applications and hardware.
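As a minimal illustration of the resource load model, power can be interpolated between the idle and full-load values of a node and then integrated over time. This linear interpolation is a common simplifying assumption, not GSSIM's actual energy profile code, and the wattages below are example values:

```python
def power_usage(p_idle, p_max, load):
    # Resource load model: power grows linearly with processor load
    # between the node's idle power and its full-load power.
    return p_idle + (p_max - p_idle) * load

def energy_consumed(p_idle, p_max, intervals):
    # Integrate power over (duration_in_seconds, load) intervals -> joules.
    return sum(power_usage(p_idle, p_max, load) * dur
               for dur, load in intervals)

# A node drawing 100 W idle and 250 W at full load:
# one hour idle, then one hour at 60% load.
e = energy_consumed(100.0, 250.0, [(3600, 0.0), (3600, 0.6)])  # 1,044,000 J
```

A per-frequency (P-State) variant would simply keep one (p_idle, p_max) pair per frequency level, as described above for processors with multiple operating points.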

In order to model energy management in GSSIM, a researcher has to perform several steps. First, a resource description has to be prepared; the developer should specify the power usage of resources. Depending on the desired accuracy of the model, the user may provide additional information about the energy states supported by the resources, the amounts of energy consumed in these states, the energy consumption of specific subcomponents, or the energy consumption related to resource load, in order to calculate the total energy consumed by the resource during runtime. If high accuracy and customization to specific applications and hardware are required, the user can define energy profiles that provide means to calculate dynamic energy consumption according to the application and resource models provided. The information provided within the resource description, both the static resource description and the values calculated by the resource energy profile, can be used to perform advanced resource management. GSSIM provides interfaces which allow scheduling plugins to collect detailed information about computing resource components and to change their energy states. The availability of detailed resource usage information, a description of the current resource energy state, and a functional energy management interface enable the implementation of energy-aware scheduling algorithms. Resource energy consumption becomes in this context an additional criterion in the scheduling process, which can use various techniques to decrease energy consumption, e.g., workload consolidation, turning off unused resources, cutting down CPU frequency, and others. After an experiment performed using GSSIM, the energy management process and the efficiency of the used policies can be summarized and analyzed. To this end, detailed data about each resource component state and the energy consumed by it is collected.
To ensure an appropriate level of detail, each change of a resource component's energy state and each value returned by the resource energy profile is logged along with a time stamp and presented in a user-friendly chart form.

3.6. Remote management and execution of simulation experiments

The GSSIM project is accompanied by a web interface and system that provides online access to a comprehensive experiment editor, remote experiment execution, as well as an experiment repository. Performing experiments requires establishing the simulation environment properties first. The GSSIM interface offers an intuitive resource and network topology editor which guides the user through this stage. It supports the drag-and-drop paradigm and therefore relieves the user from the need to prepare the input files manually. Every resource can be easily edited using dedicated forms, which enable defining the appropriate entity parameters. The web GSSIM interface also includes a workload editor and generator tool. It provides all the necessary means to facilitate the whole process and create a workload according to the statistical parameters specified by the user. Visualization of results is possible through interactive graphs that allow users to adjust the chart content to their requirements. Moreover, users can effortlessly customize the chart size using an advanced zooming mechanism, switch between various charts, and quickly view interesting details about jobs and resources. The concept of GSSIM experiment groups enables users to first run multiple options of one experiment at once and afterwards study the impact of specific parameters on the simulation results. An example of the experiment editor window with a generated chart for resource utilization is presented in Fig. 6.

Fig. 6. GSSIM GUI. (Colors are visible in the online version of the article; http://dx.doi.org/10.3233/SPR-2011-0332.)

4. Simulation experiments

GSSIM, due to its flexibility, can be applied to a wide scope of experiments. As mentioned in Section 2.4, these experiments can be conducted by scientists to verify their research hypotheses, by administrators to safely tune the configuration of their production systems, or by resource owners to assist them in making decisions concerning extensions of their computing infrastructure. GSSIM has already been successfully applied to many scheduling problems, e.g., [15,16,20].

To demonstrate the capabilities of the simulator, in this section we present three examples of results of experiments conducted using the GSSIM framework. The first experiment uses best-effort strategies and real traces to investigate the consequences of resource partitioning. In the second experiment we illustrate the use of detailed Gantt diagrams to study and compare two scheduling algorithms. The third experiment contains simulations of algorithms using advance reservation on the basis of a synthetic workload generated by GSSIM.

4.1. Study of partitioning

Proper configuration of resource management systems is a complex task performed by administrators of computing infrastructures. This configuration must ensure efficient management of workloads and high resource utilization. One of the techniques used is partitioning larger pools of resources into smaller parts. Partitioning may require the introduction of two levels of workload management. Nevertheless, the consequences of such a decision should be evaluated a priori to avoid problems with the real system's operation. A simulation environment such as GSSIM is a perfect tool to perform such what-if analysis. In this section we illustrate how GSSIM can be used to study this problem based on real workload traces. We assumed the use of best-effort strategies such as First Come First Serve (FCFS) and the off-line scheduling strategy Largest Size First (LSF). Thus, let us now show the experimental results obtained by FCFS and LSF, evaluated in the light of the following criteria: utilization, waiting time, and flow time, for a smaller fraction of the SDSC SP2 workload (jobs identified from 1 to 1000); see Table 3.

In contrast to FCFS, LSF is an example of an off-line strategy, which is usually invoked periodically in a scheduling system to sort the queue according to job-size parameters. Obviously, the longer the system waits to apply the LSF strategy, the higher the number of jobs collected for further processing. This natural characteristic of any off-line scheduling strategy influences the overall performance of the system, especially all evaluation metrics defined on the basis of flow and waiting times, as no scheduling decisions are made during the off-line period. On the other hand, a set of collected jobs together with their requirements allows a scheduler to take all job and resource characteristics into account simultaneously to optimize its scheduling decisions. There is thus always a trade-off between on-line and off-line configuration parameters, which in practice are adjusted experimentally. Therefore, we decided to invoke LSF with different frequencies in the further simulation tests we conducted, respectively every p = 10, p = 60, p = 600 and p = 3600 s. As presented in Table 3, the FCFS strategy still achieved good results, but it was outperformed by LSF-10 and LSF-60, denoting the configuration of LSF with the off-line scheduling period p = 10 and p = 60 s, respectively. All four strategies FCFS, LSF-10, LSF-60 and LSF-600 reached the same level of utilization U = 0.545; however, the total waiting time was reduced by 2% in the case of LSF-10 and 3% in the case of LSF-60. As expected, much longer invocation periods of the LSF strategy significantly affected the mean flow time and the corresponding total waiting time, which increased up to 81,621 and 73,449 s, respectively. At the same time, the LSF-3600 strategy reduced utilization down to the level of 49%; however, because of the high peak load selected from the SDSC SP2 workload for this experiment, further improvements for this objective would be difficult to achieve.
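The difference between the two queue orderings can be sketched in a few lines. The `Job` fields below are illustrative assumptions of ours, not GSSIM code:

```python
from dataclasses import dataclass

@dataclass
class Job:
    id: int
    arrival: float  # submission time (s)
    size: int       # requested processors

def fcfs_order(queue):
    """On-line FCFS: dispatch jobs strictly in arrival order."""
    return sorted(queue, key=lambda j: j.arrival)

def lsf_order(queue):
    """Off-line LSF: each time it is invoked (every p seconds), the whole
    queue collected so far is re-sorted by job size, largest first."""
    return sorted(queue, key=lambda j: j.size, reverse=True)

queue = [Job(1, 0.0, 16), Job(2, 5.0, 64), Job(3, 9.0, 32)]
print([j.id for j in fcfs_order(queue)])  # [1, 2, 3]
print([j.id for j in lsf_order(queue)])   # [2, 3, 1]
```

The period p does not appear in the sketch itself: it only controls how often `lsf_order` is called, which is exactly the trade-off adjusted experimentally above.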

To perform more comprehensive simulation studies, we introduced hierarchical scheduling structures with two local queues involved. First, we partitioned computing resources into two sets based on additional descriptions and comments to real workloads at [26]. In practice, partitioning techniques are often used by local administrators, giving them the possibility of assigning end-users or resources to various queues depending on the monitored workload and past system behavior. Therefore, for further simulation experiments, all created partitions and attached computing resources were provided to the Grid level over a certain number of local queues. Let us first describe the hierarchical structures we used during the next experimental tests. The number of computing resources reported for SDSC SP2 was 128, and first we created two partitions providing access over two queues to 2 × 64 computing resources. However, we had to modify the original SDSC SP2 workload slightly by reducing the requested computing resources from 128 to 64 for the jobs identified as: 86, 91, 95, 144, 171 (end-user 92) and 208, 742, 746, 750, 920, 967 (end-user 147). To evaluate hierarchical scheduling structures, Random and Load Balancing strategies were used to assign jobs at the Grid level to local queues, while FCFS and LSF-600 strategies were applied to the local queues.
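The two Grid-level allocation strategies can be sketched as follows. The queue names and the load measure (busy processors per partition) are our own illustrative assumptions:

```python
import random

def assign_random(queues, rng=None):
    """Random: pick any local queue with uniform probability."""
    return (rng or random).choice(queues)

def assign_load_balancing(queues, load):
    """Load Balancing: route the job to the queue whose partition
    currently has the least outstanding work."""
    return min(queues, key=lambda q: load[q])

# Busy processors per partition at submission time (hypothetical values)
load = {"partition-A": 96, "partition-B": 40}
print(assign_load_balancing(["partition-A", "partition-B"], load))  # partition-B
```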

As we can observe in Table 4, partitioning the SDSC SP2 system into two parts resulted in a lower utilization of computing resources. As expected, such a hierarchical scheduling structure did not outperform the original structure, where only one queue was used together with 128 computing resources. Even the reduced sizes of a few jobs in the analyzed workload did not help to improve the total waiting time and mean flow time in the hierarchical system with two local queues.

Table 3
Performance of local scheduling strategies: FCFS and LSF applied for the SDSC SP2 workload (jobs 1–1000)

Queues × Resources  Grid-level  Local-level  Utilization  Flow time (s)  Waiting time (s)
1 × 128             FCFS        FCFS         0.545        16,137         7965
1 × 128             FCFS        LSF-10       0.545        15,972         7800
1 × 128             FCFS        LSF-60       0.545        15,896         7724
1 × 128             FCFS        LSF-600      0.545        18,127         9955
1 × 128             FCFS        LSF-3600     0.494        81,621         73,449


Table 4
Performance of two-level hierarchical scheduling structures for the SDSC SP2 workload (jobs 1–1000)

Queues × Resources  Grid-level      Local-level  Utilization  Flow time (s)  Waiting time (s)
2 × 64              Random          FCFS         0.517        41,625         33,366
2 × 64              Random          LSF-600      0.502        42,895         34,636
2 × 64              Load balancing  FCFS         0.530        33,479         25,220
2 × 64              Load balancing  LSF-600      0.532        34,988         26,728

Fig. 7. Comparison of utilization generated for the original single-queue and the two-queue partitioned SDSC SP2 system. (Colors are visible in the online version of the article; http://dx.doi.org/10.3233/SPR-2011-0332.)

However, comparing strategies at the Grid level, one should note that the Load Balancing algorithm significantly outperformed the Random approach with respect to all evaluated criteria. It increased the utilization of resources by 6%, while the flow time and waiting time were reduced by 19 and 24%, respectively. Thus, these results confirmed the importance of the resource allocation phase on a grid. Moreover, the configuration of Load Balancing and LSF-600 procedures turned out to be better than Load Balancing and FCFS with respect to the utilization criterion. Hence, it may be worthwhile to introduce more advanced scheduling, i.e. off-line scheduling, so that a scheduler can simultaneously take more task and resource characteristics into account to better optimize schedules. The resource utilization generated by GSSIM for the different hierarchical configurations used in the simulation experiments (a single queue with FCFS and LSF-600, and two-queue partitioned with Load Balancing and LSF-600 strategies) is presented in Fig. 7. More details of this experiment can be found in [21].

The GSSIM simulation environment allowed us to perform this study without the additional effort required to establish a specific configuration of the real infrastructure. Using GSSIM to simulate production environments may save a lot of work and inconvenience for users of the system.

4.2. Comparison of algorithms

This section contains the results of a comparison of two scheduling policies: on-line (a single task at a time) vs. off-line (scheduling a set of tasks at once on an available resource pool). In the former case a scheduler can allocate resources (a specific time slot) only for the single task at the head of the queue. It takes tasks one by one using the FCFS policy, queries for a slot for the task and, if the query is successful, the task is allocated and executed. The other approach assumes that a scheduler can select a task to execute from a set of tasks in the queue. In this case a scheduler can check particular time slots against the requirements of various tasks. One of the simplest methods that takes advantage of this approach is Graham's algorithm [9], which we adopted for this comparison.
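The essence of this off-line approach can be sketched as a first-fit scan of the queue for each free slot. Slots and tasks are reduced here to a single capacity/size number; this simplification is our own, not the paper's exact model:

```python
def graham_schedule(slot_capacities, queue_sizes):
    """For each free slot (in time order), scan the whole waiting queue
    and start the first task that fits, instead of blocking on the queue
    head as on-line FCFS does."""
    waiting = list(queue_sizes)
    placed = []
    for cap in slot_capacities:
        for task in waiting:
            if task <= cap:  # requirements check against the slot
                placed.append((task, cap))
                waiting.remove(task)
                break
    return placed, waiting

# A 4-processor slot opens before an 8-processor one; FCFS would block
# on the 8-processor task at the head of the queue, Graham does not.
placed, waiting = graham_schedule([4, 8], [8, 2, 4])
print(placed)   # [(2, 4), (8, 8)]
print(waiting)  # [4]
```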

In Fig. 8 one can easily see that, using Graham's algorithm, the scheduler is able to start certain tasks earlier, because for each slot it checks tasks until it finds one that can be allocated to this slot. Two exemplary regions with tasks allocated earlier are marked in the figure. Generally, this advantage of the Graham


Fig. 8. Comparison of schedules obtained by the FCFS (left) and Graham (right) policies. (Colors are visible in the online version of the article; http://dx.doi.org/10.3233/SPR-2011-0332.)

approach leads to better resource utilization and task-related metrics such as mean flow time. Detailed values of these metrics for both approaches, and their dependency on the number of tasks, are available in the statistics generated by GSSIM. It was easy to see that while for a small number of tasks the differences are small (due to the substantial amount of free resources at the beginning), for a greater number of tasks the Graham policy demonstrates its advantage both in terms of mean flow time and resource utilization.

More details of this experiment can be found in [24]. The results presented above are just a simple example of the more detailed analysis which is available in GSSIM. By browsing details of the Gantt charts generated for jobs and/or reservations, one may analyze phenomena that occur in the scheduling process more deeply than on the basis of general statistical values alone. More details on using GSSIM Gantt charts to analyze schedules can be found in [18].

4.3. Configuration of advance reservation

Some resource management systems support the advance reservation functionality, e.g., [27,28]. Nevertheless, advance reservation may significantly deteriorate the efficiency of the whole system. Therefore, a proper configuration of this aspect of the resource management system is very important. One parameter of particular importance is the length of a reservation time slot. This length defines the minimum time period for which resources can be reserved in the system; thus, it determines the granularity of reservations. For instance, Platform LSF [27] allows reservations in 10 min time slots (i.e., fixed-length time slots), while SGE [28] does not impose a minimal reservation length (referred to as variable-length time slots in the sequel). Generally, fixed-length time slots improve the performance of searching and allocation algorithms, as the list of slots does not depend on the number of reservations in the system. Of course, increasing the length of a time slot leads to further performance improvements. On the other hand, long fixed-length slots result in low resource utilization. Therefore, finding an appropriate trade-off between resource utilization and the performance of algorithms is an important issue.
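The effect of slot granularity on a single reservation can be sketched as follows. The rounding scheme is our assumption about how fixed-length slots behave in general, not a description of either system's implementation:

```python
import math

def reserve_fixed(start, duration, slot_len):
    """Fixed-length slots: the reservation must begin on a slot boundary
    and cover a whole number of slots, so the request is rounded up."""
    aligned_start = math.ceil(start / slot_len) * slot_len
    length = math.ceil(duration / slot_len) * slot_len
    return aligned_start, length

def reserve_variable(start, duration):
    """Variable-length slots: the reservation matches the request exactly."""
    return start, duration

# A 250 s request arriving at t = 130 s, with 10 min (600 s) slots:
print(reserve_fixed(130, 250, 600))  # (600, 600) -- 350 s of internal "gap"
print(reserve_variable(130, 250))    # (130, 250)
```

The 350 s of reserved-but-unused time in the fixed-length case is exactly the kind of "gap" that lowers utilization as the slot length grows.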

In order to study how the performance of the particular methods depends on the parameters of the tasks, 16 workloads diverse in terms of task sizes and lengths were prepared. We assumed that neither the earliest start times nor deadlines were defined, i.e. available resources were searched for within the period from the current time to infinity. The resource consisted of 64 identical processors.

We studied the impact of slot lengths on the quality of results, namely, resource utilization, makespan and mean flow time. We compared the variable-length slots approach with two versions of the fixed-length slots approach: with short (10 min) and long (1 h) slot lengths. The resource utilization obtained using these methods is compared in Fig. 9. It is easy to see that the longer the length of a fixed slot, the larger the number of "gaps", which leads to lower resource utilization.

The results presented above are confirmed in Fig. 10, where our expectations concerning resource utilization are confirmed by precise values, complemented by the values of makespan and mean flow time. More details of this experiment can be found in [24].

An administrator looking at the presented resource utilization charts, as well as the performance statistics, may choose an appropriate time slot length for the system in question.


Fig. 9. Resource utilization for the fixed- and variable-length slots approaches. (Colors are visible in the online version of the article; http://dx.doi.org/10.3233/SPR-2011-0332.)

Fig. 10. Resource utilization, makespan and mean flow time for the variable- and fixed-length slots approaches. (Colors are visible in the online version of the article; http://dx.doi.org/10.3233/SPR-2011-0332.)

5. Conclusion and future work

In this paper we presented GSSIM, a simulation framework that addresses the issues relevant to distributed computing experiments. In particular, GSSIM aims at facilitating, automating and accelerating the process of preparation, execution and analysis of experiments. We compared GSSIM with other known simulators based on a classification of distributed computing simulator features proposed in this paper. After an analysis of this comparison we concluded that GSSIM delivers several functionalities which are not, to the


best of our knowledge, provided by any other distributed computing simulation tool available, namely: efficient network modeling including advance reservation capability, the possibility of adding customized application performance models related to both execution time and power usage, advanced modeling of various events including network failures and security issues, and the possibility of managing and executing simulations remotely in the cloud. GSSIM also contains a flexible workload generation tool allowing any number of jobs with sophisticated requirements to be created. Moreover, the obtained experimental results can be analyzed using a fine-grained visualization of schedules and resource utilization. We have demonstrated that GSSIM supports a variety of scheduling problems and scenarios common in distributed computing environments, with specific examples of a few different experiments using queue-based strategies and real traces, as well as testing algorithms with advance reservation on synthetic workloads.

To enable the sharing of workloads, algorithms and results, we have proposed the GSSIM portal [30], from where researchers may download various synthetic workloads, resource descriptions, scheduling plugins and results. Users can even manage and run whole experiments through the GSSIM web interface. This portal complements other known websites related to this area, as it provides a repository of synthetic workloads and scheduling plugins as well as online services for workload generation and the remote execution of experiments.

GSSIM is a highly customizable and extensible framework, which enables a wide range of possible future work. We will therefore focus on the customization of simulations for specific scenarios, systems and applications. For example, we will aim at more detailed modeling of thermal effects caused by application load, and of the virtualization overheads of specific cloud platforms. This will be achieved by the development of new plugins and the preparation of workloads and resource descriptions, which will be shared via the GSSIM portal. In this way the online simulation service will be constantly extended in order to meet the requirements of the distributed computing community.

Acknowledgements

The research presented in this paper was partially supported by a grant from the Polish National Science Centre under award number 5790/B/T02/2010/38.

References

[1] W.H. Bell, D.G. Cameron, A.P. Millar, L. Capozza, K. Stockinger and F. Zini, OptorSim: a grid simulator for studying dynamic data replication strategies, in: Proceedings of IJHPCA, SAGE Publications, London, 2003, pp. 403–416.

[2] J. Błazewicz, K.H. Ecker, E. Pesch, G. Schmidt and J. Weglarz, Handbook of Scheduling: From Theory to Applications, Springer-Verlag, Berlin, 2007.

[3] J. Broberg and R. Buyya, Flow networking in grid simulations, in: Grid Computing: Infrastructure, Services, and Applications, L. Wang, ed., CRC Press, Boca Raton, FL, 2009, pp. 389–404.

[4] R. Buyya and M. Murshed, GridSim: a toolkit for the modeling and simulation of distributed resource management and scheduling for grid computing, The Journal of Concurrency and Computation: Practice and Experience (CCPE) 14 (2002), 1175–1220.

[5] D.G. Cameron, R. Carvajal-Schiaffino, J. Ferguson, P. Millar, C. Nicholson, K. Stockinger and F. Zini, OptorSim: a simulation tool for scheduling and replica optimisation in data grids, in: Proceedings of Computing in High Energy Physics, Interlaken, Switzerland, 2004.

[6] H. Casanova, A. Legrand and M. Quinson, SimGrid: a generic framework for large-scale distributed experimentations, in: Proceedings of the 10th IEEE International Conference on Computer Modelling and Simulation, IEEE Computer Society, Los Alamitos, CA, 2008, pp. 126–131.

[7] Federica, http://www.fp7-federica.eu/.

[8] D.G. Feitelson and L. Rudolph, Toward convergence in job schedulers for parallel supercomputers, in: Job Scheduling Strategies for Parallel Processing, D.G. Feitelson and L. Rudolph, eds, Springer-Verlag, London, 1996, pp. 1–26.

[9] R.L. Graham, E.L. Lawler, J.K. Lenstra and A.H. Rinnooy Kan, Optimization and approximation in deterministic sequencing and scheduling theory: a survey, Annals of Discrete Mathematics 5 (1979), 287–326.

[10] Grid workloads archive, http://gwa.ewi.tudelft.nl/.

[11] Y. Huang, A. Brocco, M. Courant, B. Hirsbrunner and P. Kuonen, MaGate simulator: a simulation environment for a decentralized grid scheduler, in: Proceedings of the 8th International Symposium on Advanced Parallel Processing Technologies, Springer-Verlag, Berlin, 2009, pp. 273–287.

[12] A. Iosup, O.O. Sonmez and D.H.J. Epema, DGSim: comparing grid resource management architectures through trace-based simulation, in: Proceedings of the 14th International Euro-Par Conference on Parallel Processing, Springer-Verlag, Berlin, 2008, pp. 13–25.

[13] D. Klusáček and H. Rudová, Alea 2 – job scheduling simulator, in: Proceedings of the 3rd International ICST Conference on Simulation Tools and Techniques, ICST, Brussels, Belgium, 2010, pp. 1–10.

[14] M. Krystek, K. Kurowski, A. Oleksiak and W. Piatek, Energy-aware simulations with GSSIM, in: Proceedings of the COST Action IC0804 on Energy Efficiency in Large Scale Distributed Systems, COST Office, Toulouse, 2010, pp. 55–58.

[15] M. Krystek, K. Kurowski, A. Oleksiak and K. Rzadca, Comparison of centralized and decentralized scheduling algorithms using GSSIM simulation environment, in: CoreGrid Integration Workshop, Springer-Verlag, New York, 2008, pp. 185–196.


[16] K. Kurowski, J. Nabrzyski, A. Oleksiak and J. Weglarz, A multicriteria approach to two-level hierarchy scheduling in grids, Journal of Scheduling 11 (2008), 371–379.

[17] K. Kurowski, J. Nabrzyski, A. Oleksiak and J. Weglarz, GSSIM – grid scheduling simulator, Computational Methods in Science and Technology 13 (2007), 121–129.

[18] K. Kurowski, A. Oleksiak, W. Piatek and J. Weglarz, Hierarchical scheduling strategies for parallel tasks and advance reservations in grids, Journal of Scheduling (2011), 1–20.

[19] K. Kurowski, B. Ludwiczak, J. Nabrzyski, A. Oleksiak and J. Pukacki, Improving grid level throughput using job migration and rescheduling techniques in GRMS, in: Scientific Programming, IOS Press, Amsterdam, 2004, pp. 263–273.

[20] K. Kurowski, A. Oleksiak and J. Weglarz, Multicriteria, multi-user scheduling in grids with advance reservation, Journal of Scheduling 13 (2010), 493–508.

[21] K. Kurowski, Multicriteria resource management in grid environments with dynamic descriptions of jobs and resources, PhD thesis, Poznan University of Technology, 2009.

[22] MetaCentrum, http://www.fi.muni.cz/~xklusac/index.php?page=meta2009.

[23] M. Mika, G. Waligóra and J. Weglarz, Modelling and solving grid resource allocation problem with network resources for workflow applications, Journal of Scheduling 14 (2011), 291–306.

[24] A. Oleksiak, Multicriteria job scheduling in grids using prediction and advance resource reservation mechanisms, PhD thesis, Poznan University of Technology, 2009.

[25] S. Ostermann, K. Plankensteiner and R. Prodan, Using a new event-based simulation framework for investigating different resource provisioning methods in clouds, Scientific Programming Journal 19(2,3) (2011), 161–178.

[26] Parallel workload archive, http://www.cs.huji.ac.il/labs/parallel/workload/.

[27] Platform LSF, http://www.platform.com/.

[28] SGE, http://www.sun.com/software/sge/.

[29] A. Sulistio, C.S. Yeo and R. Buyya, A taxonomy of computer-based simulation and its mapping to parallel and distributed systems simulation tools, International Journal of Software: Practice and Experience 34 (2004), 653–673.

[30] The Grid Scheduling Simulations Portal, http://www.gssim.org.

[31] J. Weglarz, J. Józefowska, M. Mika and G. Waligóra, Project scheduling with finite or infinite number of activity processing modes: a survey, European Journal of Operational Research 208 (2011), 177–205.
