Elysium Technologies Private Limited Singapore | Madurai | Chennai | Trichy | Coimbatore | Ramnad
Pondicherry | Salem | Erode | Tirunelveli
http://www.elysiumtechnologies.com, [email protected]
ETPL NT-001 Answering “What-If” Deployment and Configuration Questions With WISE: Techniques and Deployment Experience
ETPL NT-002 Complexity Analysis and Algorithm Design for Advance Bandwidth Scheduling in Dedicated Networks
ETPL NT-003 Diffusion Dynamics of Network Technologies With Bounded Rational Users: Aspiration-Based Learning
ETPL NT-004 Delay-Based Network Utility Maximization
ETPL NT-005 A Distributed Control Law for Load Balancing in Content Delivery Networks
ETPL NT-006 Efficient Algorithms for Neighbor Discovery in Wireless Networks
ETPL NT-007 Stochastic Game for Wireless Network Virtualization
ETPL NT-008 ABC: Adaptive Binary Cuttings for Multidimensional Packet Classification
ETPL NT-009 A Utility Maximization Framework for Fair and Efficient Multicasting in Multicarrier Wireless Cellular Networks
ETPL NT-010 Achieving Efficient Flooding by Utilizing Link Correlation in Wireless Sensor Networks
ETPL NT-011 Random Walks and Green's Function on Digraphs: A Framework for Estimating Wireless Transmission Costs
ETPL NT-012 A Flexible Platform for Hardware-Aware Network Experiments and a Case Study on Wireless Network Coding
ETPL NT-013 Exploring the Design Space of Multichannel Peer-to-Peer Live Video Streaming Systems
ETPL NT-014 Secondary Spectrum Trading—Auction-Based Framework for Spectrum Allocation and Profit Sharing
ETPL NT-015 Towards Practical Communication in Byzantine-Resistant DHTs
ETPL NT-016 Semi-Random Backoff: Towards Resource Reservation for Channel Access in Wireless LANs
ETPL NT-017 Entry and Spectrum Sharing Scheme Selection in Femtocell Communications Markets
ETPL NT-018 On Replication Algorithm in P2P VoD
ETPL NT-019 Back-Pressure-Based Packet-by-Packet Adaptive Routing in Communication Networks
ETPL NT-020 Scheduling in a Random Environment: Stability and Asymptotic Optimality
ETPL NT-021 An Empirical Interference Modeling for Link Reliability Assessment in Wireless Networks
ETPL NT-022 On Downlink Capacity of Cellular Data Networks With WLAN/WPAN Relays
ETPL NT-023 Centralized and Distributed Protocols for Tracker-Based Dynamic Swarm Management
ETPL NT-024 Localization of Wireless Sensor Networks in the Wild: Pursuit of Ranging Quality
ETPL NT-025 Control of Wireless Networks With Secrecy
ETPL NT-026 ICTCP: Incast Congestion Control for TCP in Data-Center Networks
ETPL NT-027 Context-Aware Nanoscale Modeling of Multicast Multihop Cellular Networks
ETPL NT-028 Moment-Based Spectral Analysis of Large-Scale Networks Using Local Structural Information
ETPL NT-029 Internet-Scale IPv4 Alias Resolution With MIDAR
ETPL NT-030 Time-Bounded Essential Localization for Wireless Sensor Networks
ETPL NT-031 Stability of FIPP p-Cycles Under Dynamic Traffic in WDM Networks
ETPL NT-032 Cooperative Carrier Signaling: Harmonizing Coexisting WPAN and WLAN Devices
ETPL NT-033 Mobility Increases the Connectivity of Wireless Networks
ETPL NT-034 Topology Control for Effective Interference Cancellation in Multiuser MIMO Networks
ETPL NT-035 Distortion-Aware Scalable Video Streaming to Multinetwork Clients
ETPL NT-036 Combined Optimal Control of Activation and Transmission in Delay-Tolerant Networks
ETPL NT-037 A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks
Modern scientific data management and analysis usually rely on multiple scientists with diverse expertise. In
recent years, such collaborative efforts have often been structured and automated by a data-flow-oriented process
called scientific workflow. However, such workflows may have to be designed and revised among multiple
scientists over a long period of time. Existing workbenches are single-user-oriented and do not support scientific
workflow application development in a collaborative fashion. In this paper, we report our research on the
enabling techniques in the aspects of collaboration, provenance management, and reproducibility. Based on a
scientific collaboration ontology, we propose a service-oriented collaboration model supported by a set of
composable collaboration primitives and patterns. The collaboration protocols are then applied to support
effective concurrency control in the process of collaborative workflow composition. We also report the design
and development of Confucius, a service-oriented collaborative scientific workflow composition tool that
extends an open-source, single-user development environment.
ETPL
SER - 001
Confucius: A Tool Supporting Collaborative Scientific Workflow Composition
Web service composition (WSC) is the task of combining a chain of connected single services together to
create a more complex and value-added composite service. Quality of service (QoS) has been mostly applied
to represent nonfunctional properties of web services and differentiate those with the same functionality. Much
research has been done on QoS-aware service composition, as it significantly affects the quality of a
composite service. However, existing methods are restricted to predefined workflows, which incurs
limitations, including the lack of a guarantee of optimal overall QoS and of completeness in finding a
composite service solution. In this paper, instead of predefining a workflow model
for service composition, we propose a novel planning-based approach that can automatically convert a QoS-
aware composition task to a planning problem with temporal and numerical features. Furthermore, we use
state-of-the-art planners, including an existing one and a self-developed one, to handle complex temporal
planning problems with logical reasoning and numerical optimization. Our approach can find a composite
service graph with the optimal overall QoS value while satisfying multiple global QoS constraints. We
implement a prototype system and conduct extensive experiments on large web service repositories. The
experimental results show that our proposed approach largely outperforms existing ones in terms of solution
quality and is efficient enough for practical deployment.
ETPL
SER - 002
QoS-Aware Dynamic Composition of Web Services Using Numerical Temporal
Planning
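The global QoS evaluation such planners rely on can be sketched as follows. This is an illustrative sketch, not the paper's planner: the attribute names and the standard aggregation rules (response times and costs add along a sequential chain, availabilities multiply) are assumptions.

```python
# Illustrative QoS aggregation for a sequential service composition.
# Attribute names and aggregation rules are common conventions, not
# taken from the paper: response time and cost add up along the chain,
# availability multiplies.

def aggregate_qos(services):
    """Aggregate per-service QoS dicts along a sequential workflow."""
    total = {"response_time": 0.0, "cost": 0.0, "availability": 1.0}
    for s in services:
        total["response_time"] += s["response_time"]
        total["cost"] += s["cost"]
        total["availability"] *= s["availability"]
    return total

def satisfies(qos, constraints):
    """Check global constraints: upper bounds on time/cost, a lower bound on availability."""
    return (qos["response_time"] <= constraints["max_response_time"]
            and qos["cost"] <= constraints["max_cost"]
            and qos["availability"] >= constraints["min_availability"])

chain = [
    {"response_time": 120.0, "cost": 0.02, "availability": 0.99},
    {"response_time": 80.0,  "cost": 0.01, "availability": 0.995},
]
total = aggregate_qos(chain)
print(satisfies(total, {"max_response_time": 250.0,
                        "max_cost": 0.05,
                        "min_availability": 0.98}))  # True: 200ms, 0.03, ~0.985
```

A planner would search over many candidate chains, pruning those for which `satisfies` fails and optimizing the aggregate among the rest.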
Quality of service (QoS) is widely employed for describing nonfunctional characteristics of web services.
Although QoS of web services has been investigated intensively in the field of service computing, there is a
lack of real-world web service QoS data sets for validating various QoS-based techniques and models. To
investigate QoS of real-world web services and to provide reusable research data sets for future research, we
conduct several large-scale evaluations on real-world web services. First, addresses of 21,358 web services are
obtained from the Internet. Then, three large-scale real-world evaluations are conducted. In our evaluations,
more than 30 million real-world web service invocations are conducted on web services in more than 80
countries by users from more than 30 countries. Detailed evaluation results are presented in this paper and
comprehensive web service QoS data sets are publicly released online.
ETPL
SER - 003
Investigating QoS of Real-World Web Services
Advancements in cloud computing enable the easy deployment of numerous services. However, the analysis
of cloud service access platforms from a client perspective shows that maintaining and managing clients
remain a challenge for end users. In this paper, we present the design, implementation, and evaluation of an
asymmetric virtual machine monitor (AVMM), which is an asymmetric partitioning-based bare-metal
approach that achieves near-native performance while supporting a new out-of-operating system mechanism
for value-added services. To achieve these goals, AVMM divides underlying platforms into two asymmetric
partitions: a user partition and a service partition. The user partition runs a commodity user OS, which is
assigned to most of the underlying resources, maintaining end-user experience. The service partition runs a
specialized OS, which consumes only the needed resources for its tasks and provides enhanced features to the
user OS. AVMM considerably reduces virtualization overhead through two approaches: 1) Peripheral devices,
such as graphics equipment, are assigned to be monopolized by a single user OS. 2) Efficient resource
management mechanisms are leveraged to alleviate complicated resource sharing in existing virtualization
technologies. We implement a prototype that supports Windows and Linux systems. Experimental results
show that AVMM is a feasible and efficient approach to client virtualization.
ETPL
SER - 004 A Bare-Metal and Asymmetric Partitioning Approach to Client Virtualization
This paper addresses automatic service composition (ASC) as a means to create new value-added services
dynamically and automatically from existing services in service-oriented architecture and cloud computing
environments. Manually composing services for relatively static applications has been successful, but
automatically composing services requires advances in the semantics of processes and an architectural
framework that can capture all stages of an application's lifecycle. A framework for ASC involves four stages:
planning an execution workflow, discovering services from a registry, selecting the best candidate services,
and executing the selected services. This four-stage architecture is the most widely used to describe ASC, but
it is still abstract and incomplete in terms of scalable goal composition, property transformation for seamless
automatic composition, and integration architecture. We present a workflow orchestration to enable nested
multilevel composition for achieving scalability. We add to the four-stage composition framework a
transformation method for abstract composition properties. A general model for the composition architecture
is described herein and a complete and detailed composition framework is introduced using our model. Our
ASC architecture achieves improved seamlessness and scalability in the integrated framework. The ASC
architecture is analyzed and evaluated to show its efficacy.
ETPL
SER - 006
A Scalable Architecture for Automatic Service Composition
Location-based services (LBS) are widely deployed. When the implementation of an LBS-enabled service
evolves, regression testing can be employed to assure that previously established behaviors have not been
adversely affected. Proper test case prioritization helps reveal service anomalies efficiently so that fixes can be
scheduled earlier to minimize the nuisance to service consumers. A key observation is that locations captured
in the inputs and the expected outputs of test cases are physically correlated by the LBS-enabled service, and
these services heuristically use estimated and imprecise locations for their computations, which makes them
tend to treat locations in close proximity homogeneously. This paper exploits this observation. It
proposes a suite of metrics and instantiates them to derive input-guided and point-of-interest
(POI) aware test case prioritization techniques, which differ in whether the location information in the expected
outputs of test cases is used. It reports a case study on a stateful LBS-enabled service. The case study shows
that the POI-aware techniques can be more effective and more stable than the baseline, which reorders test
cases randomly, and the input-guided techniques. We also find that one of the POI-aware techniques, cdist, is
either the most effective or the second most effective technique among all the studied techniques in our
evaluated aspects, although no technique excels in all studied SOA fault classes.
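One way to picture location-aware prioritization is a farthest-first ordering over the locations captured in test cases. This toy heuristic is only in the spirit of the POI-aware techniques described above; it is not the paper's cdist metric, and the test locations are illustrative.

```python
import math

# Toy location-diversity prioritization: greedily pick the next test case
# whose location is farthest from those already selected, so dissimilar
# locations are exercised early. Illustrative only, not the paper's cdist.

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def prioritize(cases):
    """cases: dict test_id -> (x, y) location; returns an ordering of test ids."""
    remaining = dict(cases)
    order = [min(remaining)]            # deterministic seed: smallest id
    del remaining[order[0]]
    while remaining:
        # farthest-first: maximize the minimum distance to already-selected cases
        nxt = max(remaining,
                  key=lambda t: min(dist(remaining[t], cases[s]) for s in order))
        order.append(nxt)
        del remaining[nxt]
    return order

cases = {"t1": (0, 0), "t2": (0.1, 0), "t3": (5, 5), "t4": (10, 0)}
print(prioritize(cases))  # ['t1', 't4', 't3', 't2']
```

Note how `t2`, nearly co-located with `t1`, is deferred to last: locations in close proximity are expected to be treated homogeneously by the service, so testing them early adds little.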
ETPL
SER - 005
Prioritizing Test Cases for Regression Testing of Location-Based Services: Metrics,
Techniques, and Case Study
Developing robust web services is a difficult task. Field studies show that a large number of web services are
deployed with robustness problems (i.e., presenting unexpected behaviors in the presence of invalid inputs).
Although several techniques for the identification of robustness problems have been proposed in the past, there
is no practical approach to automatically fix those problems. This paper proposes a mechanism that
automatically fixes robustness problems in web services. The approach consists of using robustness testing to
detect robustness issues and then mitigating those issues by applying input verification based on well-defined
parameter domains, including domain dependencies between different parameters. This integrated and fully
automated methodology has been used to improve three different implementations of the TPC-App web
services and several services publicly available on the Internet. Results show that the proposed approach can
easily be used to improve the robustness of web service code.
A Technique for Deploying Robust Web Services
Managing virtualized services efficiently over the cloud is an open challenge. Traditional models of software
development are not appropriate for the cloud computing domain, where software (and other) services are
acquired on demand. In this paper, we describe a new integrated methodology for the life cycle of IT services
delivered on the cloud and demonstrate how it can be used to represent and reason about services and service
requirements and so automate service acquisition and consumption from the cloud. We have divided the IT
service life cycle into five phases of requirements, discovery, negotiation, composition, and consumption. We
detail each phase and describe the ontologies that we have developed to represent the concepts and
relationships for each phase. To show how this life cycle can automate the usage of cloud services, we
describe a cloud storage prototype that we have developed. This methodology complements previous work on
ontologies for service descriptions in that it is focused on supporting negotiation for the particulars of a service
and going beyond simple matchmaking.
ETPL
SER - 008
Automating Cloud Services Life Cycle through Semantic Technologies
We address the problem of usage license administration in federated settings. This problem arises whenever
organizations, such as educational or research groups or institutions, share resources for business and scientific
reasons. In such settings, each user's usage of a licensed resource is typically supported by the user's
organization. License administration involves satisfying legal requirements while applying organizational
strategies for effective resource usage, and carrying out suitable accounting and audit controls. We propose an
approach, Licit, wherein an agent represents each resource sharing site and administers licenses in
collaboration with other agents. We show how to represent a variety of usage licenses formally as executable
policies and provide a simple information model with which each party can specify both the attributes
involved in its licenses and how to resolve them. Our architecture naturally accommodates a variety of site-
specific (i.e., custom) strategies for license administration. Licit has been implemented on a popular open-
source framework for virtual computing and yields performance results indicating its practical feasibility.
ETPL
SER - 007
Licit: Administering Usage Licenses in Federated Environments
Remote data integrity checking is of crucial importance in cloud storage. It enables clients to verify
whether their outsourced data is kept intact without downloading all of it. In some application
scenarios, clients have to store their data on multi-cloud servers. At the same time, the integrity checking
protocol must be efficient in order to save the verifier’s cost. Motivated by these two points, we propose a novel remote
data integrity checking model: ID-DPDP (identity-based distributed provable data possession) in multi-cloud
storage. The formal system model and security model are given. Based on the bilinear pairings, a concrete ID-
DPDP protocol is designed. The proposed ID-DPDP protocol is provably secure under the hardness
assumption of the standard CDH (computational Diffie-Hellman) problem. In addition to the structural
advantage of elimination of certificate management, our ID-DPDP protocol is also efficient and flexible.
Based on the client’s authorization, the proposed ID-DPDP protocol can realize private verification, delegated
verification and public verification.
ETPL
SER - 009
Identity-Based Distributed Provable Data Possession in Multi-Cloud Storage
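To illustrate the "verify without downloading everything" idea, here is a toy challenge-response spot check. It uses HMAC tags rather than the bilinear-pairing tags of ID-DPDP, so it only supports private verification by the key holder, and the block layout and tag format are assumptions for the sketch.

```python
import hashlib
import hmac

# Toy remote-integrity spot check: the client tags each block before
# outsourcing, later challenges a random subset, and verifies the returned
# blocks against the stored tags. A real PDP scheme returns a constant-size
# aggregate proof instead of the blocks themselves.

def tag_blocks(key, blocks):
    """Bind each block to its index so blocks cannot be swapped."""
    return [hmac.new(key, i.to_bytes(4, "big") + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]

def prove(blocks, challenge):
    """Server's response to a challenge (here: the raw challenged blocks)."""
    return {i: blocks[i] for i in challenge}

def verify(key, tags, proof):
    return all(hmac.compare_digest(
                   tags[i],
                   hmac.new(key, i.to_bytes(4, "big") + b, hashlib.sha256).digest())
               for i, b in proof.items())

key = b"\x01" * 32
blocks = [b"block-%d" % i for i in range(8)]
tags = tag_blocks(key, blocks)
challenge = [1, 4, 6]                              # random subset in practice
print(verify(key, tags, prove(blocks, challenge)))  # True
blocks[4] = b"corrupted"
print(verify(key, tags, prove(blocks, challenge)))  # False
```

The pairing-based construction additionally allows delegated and public verification, which this HMAC stand-in cannot provide, since verification requires the secret key.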
In an uncertain and changing environment, a composite service needs to continuously optimize its business
process and service selection through runtime adaptation. To achieve the overall satisfaction of stakeholder
requirements, quality tradeoffs are needed to adapt the composite service in response to the changing
environments. Existing approaches on service selection and composition, however, are mostly based on
quality preferences and business processes decisions made statically at the design time. In this paper, we
propose a requirements-driven self-optimization approach for composite services. It measures the quality of
services (QoS), estimates the earned business value, and tunes the preference ranks through a feedback loop.
The detection of unexpected earned business value triggers the proposed self-optimization process
systematically. At the process level, a preference-based reasoner configures a requirements goal model
according to the tuned preference ranks of QoS requirements, reconfiguring the business process according to
its mappings from the goal configurations. At the service level, selection decisions are optimized by utilizing
the tuned weights of QoS criteria. We used an experimental study to evaluate the proposed approach. Results
indicate that the new approach outperforms both fixed-weighted and floating-weighted service selection
approaches with respect to earned business value and adaptation flexibility.
ETPL
SER - 010
Requirements-Driven Self-Optimization of Composite Services using Feedback
Control
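The service-level step, selection under tunable QoS weights, can be sketched as follows. In the approach above the weights would be adjusted by the feedback loop; here they are plain inputs, and the attribute names are illustrative.

```python
# Minimal sketch of weighted-QoS service selection. The feedback loop
# described above would tune the weights at runtime; this sketch just
# shows how a weight change flips the selection decision.

def score(candidate, weights):
    # "Higher is better" attributes add; "lower is better" ones subtract.
    return (weights["availability"] * candidate["availability"]
            - weights["response_time"] * candidate["response_time"]
            - weights["cost"] * candidate["cost"])

def select(candidates, weights):
    return max(candidates, key=lambda c: score(c, weights))

candidates = [
    {"name": "fast",  "availability": 0.95, "response_time": 0.1, "cost": 0.9},
    {"name": "cheap", "availability": 0.90, "response_time": 0.8, "cost": 0.1},
]
# A cost-sensitive weighting (as if tuned down after poor earned business value):
print(select(candidates, {"availability": 1.0,
                          "response_time": 0.2,
                          "cost": 1.0})["name"])  # cheap
```

Shifting weight from cost back to response time would make `select` prefer the "fast" candidate instead, which is exactly the lever the feedback loop operates on.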
We investigate the problem of minimizing the sum of the queue lengths of all the nodes in a wireless network
with a forest topology. Each packet is destined to one of the roots (sinks) of the forest. We consider a time-
slotted system and a primary (or one-hop) interference model. We characterize the existence of causal sample-
path optimal scheduling policies for this network topology under this interference model. A causal sample-
path optimal scheduling policy is one for which at each time-slot, and for any sample-path traffic arrival
pattern, the sum of the queue lengths of all the nodes in the network is minimum among all policies. We show
that such policies exist in restricted forest structures, and that for any other forest structure, there exists a
traffic arrival pattern for which no causal sample-path optimal policy can exist. Surprisingly, we show that
many forest structures for which such policies exist can be scheduled by converting the structure into an
equivalent linear network and scheduling the equivalent linear network according to the one-hop interference
model. The nonexistence of such policies in many forest structures underscores the inherent limitation of
using sample-path optimality as a performance metric and necessitates the need to study other (relatively)
weaker metrics of delay performance.
ETPL
NW - 011
On Sample-Path Optimal Dynamic Scheduling for Sum-Queue Minimization in
Forest
Selecting a vulnerability detection tool is a key problem that is frequently faced by developers of security-
critical web services. Research and practice show that state-of-the-art tools present low effectiveness in
terms of both vulnerability coverage and false positive rates. The main problem is that such tools are typically
limited in the detection approaches they implement and are designed to be applied in very specific
scenarios. Thus, using the wrong tool may lead to the deployment of services with undetected vulnerabilities.
This paper proposes a benchmarking approach to assess and compare the effectiveness of vulnerability
detection tools in web services environments. This approach was used to define two concrete benchmarks for
SQL Injection vulnerability detection tools. The first is based on a predefined set of web services, and the
second allows the benchmark user to specify the workload that best portrays the specific characteristics of his
environment. The two benchmarks are used to assess and compare several widely used tools, including four
penetration testers, three static code analyzers, and one anomaly detector. Results show that the benchmarks
accurately portray the effectiveness of vulnerability detection tools (in a relative manner) and suggest that the
proposed benchmarking approach can be applied in the field.
ETPL
SER - 012
Assessing and Comparing Vulnerability Detection Tools for Web Services:
Benchmarking Approach and Examples
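A benchmark of this kind ultimately scores each tool against a workload with known (seeded) vulnerabilities. A minimal scoring sketch, with illustrative finding sets, might look like this:

```python
# Score a vulnerability detection tool against a workload whose true
# vulnerability locations are known: precision, recall, and F-measure.
# The identifiers below are illustrative, not from the benchmarks.

def score_tool(reported, actual):
    tp = len(reported & actual)
    precision = tp / len(reported) if reported else 0.0
    recall = tp / len(actual) if actual else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

actual = {"svc1.op2", "svc2.op1", "svc3.op4"}      # seeded SQL injection points
reported = {"svc1.op2", "svc2.op1", "svc2.op3"}    # tool findings (one false positive)
p, r, f = score_tool(reported, actual)
print(round(p, 2), round(r, 2), round(f, 2))       # 0.67 0.67 0.67
```

Ranking tools by such scores, per workload, is what allows the benchmark to portray effectiveness "in a relative manner" as the abstract states.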
The widespread use of web services in forming complex online applications requires service composition to
cope with highly dynamic and heterogeneous environments. Traditional centralized service composition
techniques are not sufficient to address the needs of applications in decentralized environments. In this paper,
a stigmergic-based approach is proposed to model the decentralized service interactions and handle service
composition in highly dynamic open environments. In the proposed approach, web services and resources are
modeled as multiple agents. Stigmergic-based self-organization mechanisms among agents are deployed to
facilitate adapting service composition. In addition, to overcome the limitations of traditional QoS-based
approaches, trust measurements are deployed as a criterion for service selection. To improve the performance
of the proposed stigmergic-based approach under dynamic scale-free environments, we investigate
hybridization with local search operators to consolidate adaptation and introduce diversity schemes to
facilitate continual service adaptation. Extensive experiments show the efficiency of the proposed approach in
dealing with incomplete information and dynamic factors in composing and adapting web services in open
environments. The experimental results also show that the proposed approach achieves better performance
than other traditional approaches.
ETPL
SER - 013
Trustworthy Stigmergic Service Composition and Adaptation in Decentralized
Environments
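The stigmergic mechanism can be pictured as pheromone-weighted selection with evaporation and reinforcement. The sketch below is a minimal illustration with made-up parameter values, not the paper's algorithm.

```python
import random

# Sketch of stigmergic service selection: each candidate service carries a
# pheromone level that is reinforced after successful invocations and
# evaporates over time, so agents gradually converge on trustworthy services.

class Stigmergy:
    def __init__(self, services, evaporation=0.1):
        self.pheromone = {s: 1.0 for s in services}
        self.evaporation = evaporation

    def choose(self, rng):
        """Roulette-wheel selection proportional to pheromone."""
        total = sum(self.pheromone.values())
        r = rng.uniform(0, total)
        for s, p in self.pheromone.items():
            r -= p
            if r <= 0:
                return s
        return s                                   # numerical edge case

    def feedback(self, service, success):
        for s in self.pheromone:                   # evaporation everywhere
            self.pheromone[s] *= (1 - self.evaporation)
        if success:                                # reinforcement on success
            self.pheromone[service] += 1.0

rng = random.Random(7)
st = Stigmergy(["a", "b", "c"])
for _ in range(200):
    s = st.choose(rng)
    st.feedback(s, success=(s == "b"))             # only "b" behaves well
print(max(st.pheromone, key=st.pheromone.get))     # b
```

Evaporation is what lets the system keep adapting: a service that stops behaving well loses its accumulated pheromone and is gradually abandoned.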
Social network platforms have rapidly changed the way that people communicate and interact. They have
enabled the establishment of, and participation in, digital communities as well as the representation,
documentation and exploration of social relationships. We believe that as ‘apps’ become more sophisticated, it
will become easier for users to share their own services, resources and data via social networks. To
substantiate this, we present a Social Compute Cloud where the provisioning of Cloud infrastructure occurs
through “friend” relationships. In a Social Compute Cloud, resource owners offer virtualized containers on
their personal computer(s) or smart device(s) to their social network. However, as users may have complex
preference structures concerning with whom they do or do not wish to share their resources, we investigate,
via simulation, how resources can be effectively allocated within a social community offering resources on a
best effort basis. In the assessment of social resource allocation, we consider welfare, allocation fairness, and
algorithmic runtime. The key findings of this work illustrate how social networks can be leveraged in the
construction of cloud computing infrastructures and how resources can be allocated in the presence of user
sharing preferences.
ETPL
SER - 014
A Social Compute Cloud: Allocating and Sharing Infrastructure Resources via Social
Networks
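A toy allocation respecting bilateral sharing preferences, in the spirit of a Social Compute Cloud: a request is only placed on a host if owner and consumer accept each other. The greedy strategy and the naming convention are assumptions, not the paper's matching algorithms.

```python
# Greedy placement of compute requests onto friends' hosts, honoring
# bilateral sharing preferences. accepts[(owner, consumer)] is True when
# the owner is willing to share with that consumer.

def allocate(requests, hosts, accepts):
    free = dict(hosts)                      # host -> remaining capacity
    placement = {}
    for consumer, demand in requests:
        for host, cap in free.items():
            owner = host.split(":")[0]      # "owner:host-id" naming convention
            if cap >= demand and accepts.get((owner, consumer), False) \
                             and accepts.get((consumer, owner), False):
                placement[consumer] = host
                free[host] = cap - demand
                break
    return placement

hosts = {"alice:pc": 4, "bob:laptop": 2}
accepts = {("alice", "carol"): True, ("carol", "alice"): True,
           ("bob", "carol"): False, ("carol", "bob"): True}
print(allocate([("carol", 2)], hosts, accepts))  # {'carol': 'alice:pc'}
```

The paper's simulations compare richer allocation mechanisms on welfare, fairness, and runtime; this greedy first-fit only illustrates how preference constraints prune the feasible placements.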
Service Level Agreements (SLAs) are typically used to specify rules regarding the consumption of services
that are agreed between the providers of the Service-Based Applications (SBAs) and their consumers. An SLA
includes a list of terms that contain the guarantees that must be fulfilled during the provisioning and
consumption of the services. Since the violation of such guarantees may lead to the application of potential
penalties, it is important to assure that the SBA behaves as expected. In this article, we propose a proactive
approach to test SLA-aware SBAs by means of identifying test requirements, which represent situations that
are relevant to be tested. To address this issue, we define a four-valued logic that allows evaluating both the
individual guarantee terms and their logical relationships. Grounded in this logic, we devise a test criterion
based on the Modified Condition Decision Coverage (MCDC) in order to obtain a cost-effective set of test
requirements from the structure of the SLA. Furthermore, by analyzing the syntax and semantics of the
agreement, we define specific rules to avoid infeasible test requirements. The whole approach has been
automated and applied over an eHealth case study.
ETPL
SER - 015
Coverage-based testing for Service Level Agreements
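For a simple conjunction of guarantee terms, an MCDC-style criterion yields n+1 test requirements: all guarantees fulfilled, plus each guarantee violated alone, so every term is shown to independently affect the SLA outcome. A two-valued enumeration sketch follows (the paper's four-valued logic additionally models terms that cannot yet be evaluated); the term names are illustrative.

```python
# MCDC-style test requirements for an SLA whose guarantee terms are
# combined with AND: each term must independently flip the outcome.
# For n terms this gives n+1 requirements instead of 2**n combinations.

def mcdc_requirements(terms):
    reqs = [dict.fromkeys(terms, True)]     # all guarantees hold -> SLA fulfilled
    for t in terms:
        case = dict.fromkeys(terms, True)
        case[t] = False                     # only this guarantee violated
        reqs.append(case)
    return reqs

for req in mcdc_requirements(["availability>=99%", "latency<=200ms"]):
    print(req)
```

With 10 guarantee terms this is 11 requirements rather than 1024, which is the cost-effectiveness argument behind choosing MCDC over exhaustive coverage.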
Recently, there has been a trend toward developing mobile applications based on service-oriented architecture in
numerous application domains, such as telematics and smart home. Although efforts have been made on
developing composite SOAP services, little emphasis has been put on invoking and composing a combination
of SOAP, non-SOAP, and non-web services into a composite process to execute complex tasks on various
mobile devices. The main challenges are twofold: one is how to invoke and compose heterogeneous web services
with various protocols and content types, including SOAP, RESTful, and OSGi services; and the other is how
to integrate non-web services, like Web contents and mobile applications, into a composite service process. In
this work, we propose an approach to invoking and composing SOAP, non-SOAP, and non-web services with
two key features: an extended BPEL engine bundled with adapters to enable direct invocation and composition
of SOAP, RESTful and OSGi services based on Adapter pattern; and two transformation mechanisms devised
to enable conversion of Web contents and Android activities into OSGi services. In the experimental
evaluations, we demonstrate that the network traffic and turnaround time of our approach are better than those
of traditional approaches.
ETPL
SER - 016
A Framework for Composing SOAP, Non-SOAP and Non-Web Services
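The adapter idea can be sketched as a uniform `invoke()` behind which each protocol hides. In this sketch the RESTful adapter uses `urllib`, while a local stand-in plays the role of an in-process OSGi service; the class and operation names are assumptions, and a real system would also wrap a SOAP client library.

```python
import json
import urllib.request

# Adapter pattern for heterogeneous service invocation: the composition
# engine calls one uniform invoke() and each adapter hides its protocol.

class ServiceAdapter:
    def invoke(self, operation, payload):
        raise NotImplementedError

class RestAdapter(ServiceAdapter):
    """Invokes RESTful JSON services over HTTP."""
    def __init__(self, base_url):
        self.base_url = base_url

    def invoke(self, operation, payload):
        req = urllib.request.Request(
            f"{self.base_url}/{operation}",
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

class LocalAdapter(ServiceAdapter):
    """Stand-in for an in-process (OSGi-style) service registry."""
    def __init__(self, registry):
        self.registry = registry

    def invoke(self, operation, payload):
        return self.registry[operation](payload)

# The engine composes services without knowing which protocol backs each one.
engine = {"local": LocalAdapter({"double": lambda p: {"value": p["value"] * 2}})}
print(engine["local"].invoke("double", {"value": 21}))  # {'value': 42}
```

Because every adapter exposes the same `invoke()` signature, an extended BPEL engine can bind any step of a composite process to any protocol by swapping the adapter, which is the crux of the approach described above.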
Web service recommendation systems can help service users to locate the right service from the large number
of available Web services. Avoiding recommending dishonest or unsatisfactory services is a fundamental
research problem in the design of Web service recommendation systems. Reputation of Web services is a
widely-employed metric that determines whether the service should be recommended to a user. The service
reputation score is usually calculated using feedback ratings provided by users. Although the reputation
measurement of Web service has been studied in the recent literature, existing malicious and subjective user
feedback ratings often lead to a bias that degrades the performance of the service recommendation system. In
this paper, we propose a novel reputation measurement approach for Web service recommendations. We first
detect malicious feedback ratings by adopting the Cumulative Sum Control Chart, and then we reduce the
effect of subjective user feedback preferences employing the Pearson Correlation Coefficient. Moreover, in
order to defend against malicious feedback ratings, we propose a malicious feedback rating prevention scheme
employing Bloom filtering to enhance the recommendation performance. Extensive experiments are conducted
by employing a real feedback rating dataset with 1.5 million Web service invocation records. The
experimental results show that our proposed measurement approach can reduce the deviation of the reputation
measurement and enhance the success ratio of the Web service recommendation.
ETPL
SER - 017
Reputation Measurement and Malicious Feedback Rating Prevention in Web Service
Recommendation Systems
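The first step described above, flagging a burst of abnormally low feedback ratings, can be sketched with a one-sided CUSUM statistic. The target, slack, and threshold values below are illustrative, not taken from the paper.

```python
# One-sided (downward) CUSUM over a stream of feedback ratings: the
# statistic accumulates how far ratings fall below an expected level and
# raises an alarm once it crosses a threshold, flagging a collusive
# low-rating burst while tolerating ordinary rating noise.

def cusum_low(ratings, target, slack=0.5, threshold=3.0):
    """Return indices where the downward CUSUM exceeds the threshold."""
    s, alarms = 0.0, []
    for i, r in enumerate(ratings):
        # accumulate shortfall below (target - slack); never below zero
        s = max(0.0, s + (target - slack) - r)
        if s > threshold:
            alarms.append(i)
    return alarms

honest = [4, 5, 4, 5, 4]
attack = [1, 1, 1, 1, 1]                 # collusive low-rating burst
print(cusum_low(honest + attack, target=4.5))  # [6, 7, 8, 9]
```

The slack term is what keeps isolated low ratings from triggering alarms: only a sustained run of shortfalls can push the statistic over the threshold.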
Many web computing systems run real-time database services whose information changes
continuously and expands incrementally. In this context, web data services play a major role and drive
significant improvements in monitoring and controlling information truthfulness and data propagation.
Currently, web telemedicine database services are of central importance to distributed systems. However, the
increasing complexity and rapid growth of challenging real-world healthcare applications place a heavy
burden on database administrative staff. In this paper, we build integrated web data services that satisfy
fast response times for large-scale telehealth database management systems. Our focus is on database
management, with application scenarios in dynamic telemedicine systems, to increase care admissions and
decrease care difficulties such as distance, travel, and time limitations. We propose a three-fold approach based
on data fragmentation, database web-site clustering, and intelligent data distribution. This approach reduces
the amount of data migrated between web sites during applications’ execution; achieves cost-effective
communications during applications’ processing and improves applications’ response time and throughput.
The proposed approach is validated internally by measuring the impact of using our computing services’
techniques on various performance features like communications cost, response time, and throughput. The
external validation is achieved by comparing the performance of our approach to that of other techniques in
the literature. The results show that our integrated approach significantly improves the performance of web
database systems and outperforms its counterparts.
ETPL
SER - 018
Designing High Performance Web-Based Computing Services to Promote Telemedicine
Database Management System
This paper introduces a testing strategy suitable for testing service-based applications. We describe an architecture that responds to changes in service operations, operation arguments, and service composition. Our proof-of-concept test system performs runtime testing on our model atomic and composite web services using a random testing technique. A novel change identification method was developed to capture changes at the service interface. The test system is able to identify changes that occur in the service operations and operational arguments of a test candidate's service description. Our approach uses a new method to detect changes in a service inventory. Automated reconfiguration is used to support the continuous operation of the testing system while a test candidate changes.
ETPL
SER - 019
Dynamic Test Reconfiguration for Composite Web Services
Software cohesion concerns the degree to which the elements of a module belong together. Cohesive software
is easier to understand, test and maintain. In the context of service-oriented development, cohesion refers to
the degree to which the operations of a service interface belong together. In the state of the art, software
cohesion is improved based on refactoring methods that rely on information extracted from the software implementation. This is a major limitation to using these methods in the case of Web services: Web services do not expose their implementation; all that they export is the Web service interface specification. To deal with this problem, we propose an approach that enables the cohesion-driven decomposition of service interfaces without information on how the services are implemented. Our approach progressively decomposes a given service interface into more cohesive interfaces; the backbone of the approach is a suite of cohesion metrics that rely on information extracted solely from the specification of the service interface. We validate the approach on 22 real-world services provided by Amazon and Yahoo. We assess the effectiveness of the proposed approach with respect to the cohesion improvement and the number of interfaces that result from the decomposition of the examined interfaces. Moreover, we show the usefulness of the
approach in a user study, where developers assessed the quality of the produced interfaces.
ETPL
SER - 020
Cohesion-Driven Decomposition of Service Interfaces Without Access to Source Code
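To illustrate the idea of interface-level cohesion metrics computed without source code, the following sketch scores an interface by the average pairwise token overlap of its operation names; the metric and the sample operation names are illustrative, not the paper's actual metric suite:

```python
import re
from itertools import combinations

def tokens(op_name):
    """Split a camelCase operation name into a set of lowercase tokens."""
    return set(t.lower()
               for t in re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])", op_name))

def interface_cohesion(operations):
    """Average pairwise Jaccard similarity of operation-name token sets."""
    pairs = list(combinations(operations, 2))
    if not pairs:
        return 1.0  # a single-operation interface is trivially cohesive
    sims = []
    for a, b in pairs:
        ta, tb = tokens(a), tokens(b)
        sims.append(len(ta & tb) / len(ta | tb))
    return sum(sims) / len(sims)

cohesive = ["getItem", "putItem", "deleteItem"]       # all about "item"
mixed = ["getItem", "sendEmail", "resizeImage"]       # unrelated concerns
print(interface_cohesion(cohesive) > interface_cohesion(mixed))  # True
```

A decomposition step would then split off the subset of operations whose removal most improves the score of the remaining interface.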
Composite services are widely popular for solving complex problems where the required QoS levels are often
demanding. The composite service that provides the best utility while meeting the QoS requirements has to be
found. This paper proposes a network model where many complementary candidates could be selected for
each service class to improve the benefits, while the conventional model limits the selection to a single service
candidate or service level per service class. The service selection step is NP-hard, because it can be reduced to a multi-constraint knapsack problem. Yet the decision has to be reached rapidly so that it does not increase
the overall workflow time. Large-size networks and problems with high restriction levels (strong QoS
requirements) are the most problematic. Traditional Multiple-Constrained-Shortest-Path (MCSP) heuristics are
improved in this paper using the novel concept “potential feasibility”. When our modified MCSP heuristic
algorithms are compared to the CPLEX solver, one of them demonstrates a significantly smaller average
runtime. Further, it provides solutions within a 2.6% optimality gap on average for small networks, and a 10%
optimality gap on average for large networks, regardless of the restriction level. Our algorithm uses a general
utility function, not derived from the QoS parameters.
ETPL
SER - 021
Network and QoS-based Selection of Complementary Services
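The multi-constraint knapsack flavor of the selection step can be sketched with a simple feasibility-checked greedy pass (a stand-in for the paper's MCSP heuristics; the candidate utilities and QoS budgets below are made up):

```python
def select_services(classes, limits):
    """
    Greedy sketch: for each service class, pick the highest-utility candidate
    whose QoS cost vector still fits within the remaining global budget.
    classes: list of lists of (utility, cost_vector) candidates.
    limits: global QoS budget, one entry per QoS dimension.
    """
    remaining = list(limits)
    chosen = []
    for candidates in classes:
        feasible = [c for c in candidates
                    if all(cost <= rem for cost, rem in zip(c[1], remaining))]
        if not feasible:
            return None  # no feasible composition under this greedy order
        best = max(feasible, key=lambda c: c[0])
        chosen.append(best)
        remaining = [rem - cost for rem, cost in zip(remaining, best[1])]
    return chosen

classes = [
    [(5, (4, 2)), (3, (1, 1))],   # class 1: (utility, (latency, price))
    [(6, (5, 3)), (4, (2, 1))],   # class 2
]
result = select_services(classes, limits=(7, 4))
print([u for u, _ in result])  # [5, 4]
```

Note the greedy choice in class 1 (utility 5) leaves too little budget for the utility-6 candidate in class 2; real MCSP heuristics look ahead (the paper's "potential feasibility") to avoid exactly this trap.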
The convergence of Services Computing and Web 2.0 opens up a large space of opportunities to compose "situational" web applications from Web-delivered services. However, the large number of services and the complexity of composition constraints make manual composition difficult for application developers, who might be non-professional programmers or even end-users. This paper presents a systematic data-driven approach to assisting situational application development. We first propose a technique to extract useful information from multiple sources to abstract service capabilities with a set of tags. This supports intuitive expression of a user's desired composition goals through simple queries, without having to know underlying technical details. A planning technique then explores composition solutions that can achieve the desired goals, possibly revealing new and interesting composition opportunities. A browser-based tool facilitates visual and iterative refinement of composition solutions to finally arrive at satisfactory outputs. A series of experiments demonstrates the efficiency and effectiveness of our approach.
ETPL
SER - 022
Data-Driven Composition for Service-Oriented Situational Applications
This paper presents a novel monitoring architecture aimed at the cloud provider and the cloud consumers. The architecture offers a Monitoring Platform-as-a-Service to each cloud consumer that allows the consumer to customize the monitoring metrics. The cloud provider sees a complete overview of the infrastructure, whereas each cloud consumer automatically sees her own cloud resources and can define other resources or services to be monitored. This is accomplished by means of an adaptive distributed monitoring architecture automatically deployed in the cloud infrastructure. The architecture has been implemented and released under the GPL license to the community as "MonPaaS", open source software for integrating Nagios and OpenStack. An intensive empirical evaluation of performance and scalability has been done using a real deployment of a cloud computing infrastructure in which more than 3,700 VMs have been executed.
ETPL
SER - 023
MonPaaS: An Adaptive Monitoring Platform as a Service for Cloud Computing
Infrastructures and Services
QoS-aware Web service composition intends to maximize the global QoS of a composite service with local
and global QoS constraints while selecting the independent candidate services from different providers. With
the increasing number of candidate services emerging from the Internet, network delays often greatly affect the performance of the composite service, and they are usually difficult to collect beforehand. One remedy is to predict them for the composition. However, network delay prediction for composition raises some new issues, including prediction accuracy, on-demand measurement of new services, and runtime overhead. In this paper, we tackle these critical challenges by taking advantage of the geolocations of candidate services. We first describe a network-aware service composition problem. Then, we present a novel geolocation-based NQoS prediction and reprediction approach for service composition.
Furthermore, a geolocation-based service selection algorithm is presented to make use of our NQoS prediction
approach for the composition. We have conducted extensive experiments on the real-world dataset collected
from PlanetLab. Comparative experimental results demonstrate that our approach improves the prediction
accuracy and predictability of the NQoS and reduces the runtime overheads in predicting the composition.
ETPL
SER - 024
Network-aware QoS prediction for Service Composition Using Geolocation
With explosive growth of social media, social computing becomes a new IT feature. A core functionality of
social computing is social network analysis, which studies dynamics of social connectivity among people,
including how people influence one another and how fast information diffuses in a social network and what
factors stimulate influence diffusion. One of the models for information diffusion is the heat diffusion model.
Although it is simple and captures the basic principle of social influence, it has several limitations. First, the assumption of uniform heat diffusion no longer holds in social networks. Second, the assumption that high-degree nodes are the most influential in all contexts is not realistic. In this paper we propose a probabilistic social influence diffusion model with incentives. Our approach has three features. First, we define an influence diffusion probability for each node instead of a uniform probability. Second, we categorize nodes into two classes: active and inactive. Active nodes have chances to influence inactive nodes, but not vice versa. Third, we utilize a system-defined diffusion threshold to control how influence is propagated. We study how incentives can be utilized to boost
the influence diffusion. Our experiments show the reward-powered model is more effective in influence
diffusion.
ETPL
SER - 025
Probabilistic Diffusion of Social Influence with Incentives
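A minimal simulation of the described model, with per-node influence probabilities, active/inactive classes, and a system-defined diffusion threshold (the graph, probabilities, and threshold below are illustrative):

```python
import random

def diffuse(edges, seeds, influence_prob, threshold, rounds=10, rng=None):
    """
    Probabilistic threshold diffusion: active nodes try to influence inactive
    neighbours; an attempt is even considered only if the target node's
    influence probability (possibly boosted by an incentive) meets the
    system-wide threshold, and then succeeds with that probability.
    """
    rng = rng or random.Random(0)   # seeded for reproducibility
    active = set(seeds)
    for _ in range(rounds):
        newly = set()
        for u, v in edges:
            if u in active and v not in active:
                p = influence_prob.get(v, 0.0)
                if p >= threshold and rng.random() < p:
                    newly.add(v)
        if not newly:
            break                   # diffusion has stabilized
        active |= newly
    return active

edges = [("a", "b"), ("b", "c"), ("c", "d")]
probs = {"b": 0.9, "c": 0.9, "d": 0.1}   # "d" is below the diffusion threshold
active = diffuse(edges, seeds={"a"}, influence_prob=probs, threshold=0.5)
print(sorted(active))  # ['a', 'b', 'c']
```

An incentive scheme would raise selected nodes' probabilities above the threshold, which is how the reward-powered variant boosts diffusion.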
An external web service may evolve without prior notification. In the course of the regression testing of a
workflow-based web service, existing test case prioritization techniques may only verify the latest service
composition using the not-yet-executed test cases, overlooking high-priority test cases that have already been
applied to the service composition before the evolution. In this paper, we propose Preemptive Regression
Testing (PRT), an adaptive testing approach to addressing this challenge. Whenever a change in the coverage of any service artifact is detected, PRT recursively preempts the current session of regression testing and creates a sub-session of the current test session to cover the newly identified coverage changes by adjusting the execution priority of the test cases in the test suite. Then the sub-session resumes execution from the suspended position. PRT terminates only when each test case in the test suite has been executed at least once without any preemption being activated between test case executions. The experimental results confirm that testing workflow-based web services in the face of such changes is very challenging, and that one of the PRT-enriched techniques shows its potential to overcome the challenge.
ETPL
SER - 026
Preemptive Regression Testing of Workflow-based Web Services
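The preempt-and-resume idea can be sketched as follows: pending tests are reordered whenever a coverage change is detected, and execution continues until every test has run. The two callbacks are hypothetical stand-ins for the paper's change-detection instrumentation:

```python
def preemptive_run(tests, run_test, detect_change):
    """
    Sketch of preemptive regression testing.
    run_test(t) executes one test case; detect_change() reports a coverage
    change by returning a new priority key for the pending tests (or None).
    On a change, the remaining tests are reprioritized before continuing.
    """
    executed = []
    pending = list(tests)           # highest priority first
    while pending:
        t = pending.pop(0)
        run_test(t)
        executed.append(t)
        repriority = detect_change()
        if repriority:
            # Preempt: reorder the not-yet-executed tests for the sub-session.
            pending.sort(key=repriority)
    return executed

# One coverage change fires after the first test; it reprioritizes the
# remaining tests by descending name length (an arbitrary illustrative key).
changes = iter([lambda t: -len(t)])
log = []
order = preemptive_run(["a", "bb", "ccc", "dddd"],
                       log.append,
                       lambda: next(changes, None))
print(order)  # ['a', 'dddd', 'ccc', 'bb']
```

The real PRT nests sub-sessions recursively and resumes suspended sessions; this flat version only shows the reprioritize-on-change step.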
Modern data centers use virtualization as a means to increase utilization of increasingly powerful multi-core
servers. Applications often require only a fraction of the resources provided by modern hardware. Multiple
concurrent workloads are therefore required to achieve adequate utilization levels. Current virtualization
solutions allow hardware to be partitioned into Virtual Machines with appropriate isolation on most levels.
However, unmanaged consolidation of resource intensive workloads can still lead to unexpected performance
variance. Measures are required to avoid or reduce performance interference and provide predictable service
levels for all applications. In this paper, we identify and reduce network-related interference effects using
performance models based on the runtime characteristics of virtualized workloads. We increase the
applicability of existing training data by adding network-related performance metrics and benchmarks. Using
the extended set of training data, we predict performance degradation with existing modeling techniques as
well as combinations thereof. Application clustering is used to identify several new network-related
application types with clearly defined performance profiles. Finally, we validate the added value of the
improved models by introducing new scheduling techniques and comparing them to previous efforts. We
demonstrate how the inclusion of network-related parameters in performance models can significantly increase
the performance of consolidated workloads.
ETPL
SER - 027
Network Aware Scheduling for Virtual Machine Workloads with Interference Models
Service-Oriented Computing enables the composition of loosely coupled services provided with varying
Quality of Service (QoS) levels. Selecting a near-optimal set of services for a composition in terms of QoS is
crucial when many functionally equivalent services are available. As the number of distributed services,
especially in the cloud, is rising rapidly, the impact of the network on the QoS keeps increasing. Despite this,
current approaches do not differentiate between the QoS of services themselves and the network. Therefore,
the computed latency differs from the actual latency, resulting in suboptimal QoS. Thus, we propose
a network-aware approach that handles the QoS of services and the QoS of the network independently. First,
we build a network model in order to estimate the network latency between arbitrary services and potential
users. Our selection algorithm then leverages this model to find compositions with a low latency for a given
execution policy. We employ a self-adaptive genetic algorithm which balances the optimization of latency and
other QoS as needed and improves the convergence speed. In our evaluation, we show that
our approach works under realistic network conditions, efficiently computing compositions with much lower
latency and otherwise equivalent QoS compared to current approaches.
ETPL
SER - 028
SanGA: A Self-Adaptive Network-Aware Approach to Service Composition
Data as a Service (DaaS) builds on service-oriented technologies to enable fast access to data resources on
the Web. However, this paradigm raises several new privacy concerns that traditional privacy models do not
handle. In addition, DaaS composition may reveal privacy-sensitive information. In this paper, we propose a
formal privacy model in order to extend DaaS descriptions with privacy capabilities. The privacy model
allows a service to define a privacy policy and a set of privacy requirements. We also propose a privacy-preserving DaaS composition approach that verifies the compatibility between privacy requirements and policies in DaaS composition. We propose a negotiation mechanism that makes it possible to dynamically reconcile the privacy capabilities of services when incompatibilities arise in a composition. We validate the
applicability of our proposal through a prototype implementation and a set of experiments.
ETPL
SER - 029
Privacy-Enhanced Web Service Composition
With the proliferation of Web services, service engineers demand automatic service composition algorithms
that not only synthesize the correct service compositions from thousands of services but also satisfy the quality
requirements of users. This is known as the QoS-aware automatic service composition problem. We observe that current research, which finds only the single optimal service composition result, has several shortcomings. Users are forced to use the optimal result, which is rigid and consequently brings about problems such as overload of "hot services" and a lack of choices for users. To cope with these problems, this paper introduces a top-k query mechanism and proposes a progressive and incremental Key-Path-Based Loose (KPL) algorithm with 100% accuracy. Our QSynth system, which won the performance championship of the Web Service Challenge in 2009 and 2010, is extended to support top-k queries based on the KPL algorithm. Evaluations show that, compared to the state of the art, the KPL algorithm achieves superior scalability and accuracy across a large variety of composition scenarios. Moreover, we generalize a new graph problem based on this work: the top-k DAGs (Directed Acyclic Graphs) problem. Applications of this new graph problem include API recommendation, supply chains, and so on.
ETPL
SER - 031
Top K Query for QoS-Aware Automatic Service Composition
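A brute-force illustration of a top-k query over compositions, keeping the k highest-utility combinations of one service per class (a stand-in for the incremental KPL algorithm, practical only for small candidate sets):

```python
import heapq
from itertools import product

def top_k_compositions(classes, k):
    """
    Enumerate candidate compositions (one service per class) and keep the k
    with the highest total utility.
    classes: list of lists of (service_name, utility) candidates.
    Returns a list of (names, total_utility), best first.
    """
    def utility(combo):
        return sum(u for _, u in combo)
    best = heapq.nlargest(k, product(*classes), key=utility)
    return [([name for name, _ in combo], utility(combo)) for combo in best]

classes = [
    [("s1a", 3), ("s1b", 5)],
    [("s2a", 2), ("s2b", 4)],
]
results = top_k_compositions(classes, k=2)
for names, score in results:
    print(names, score)
```

Returning the k best compositions, rather than only the single optimum, is exactly what spreads load away from "hot services" and gives users choices.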
With data storage and sharing services in the cloud, users can easily modify and share data as a group. To
ensure shared data integrity can be verified publicly, users in the group need to compute signatures on all the
blocks in shared data. Different blocks in shared data are generally signed by different users due
to data modifications performed by different users. For security reasons, once a user is revoked from the
group, the blocks which were previously signed by this revoked user must be re-signed by an existing user.
The straightforward method, which allows an existing user to download the corresponding part
of shared data and re-sign it during user revocation, is inefficient due to the large size of shared data in
the cloud. In this paper, we propose a novel public auditing mechanism for the integrity
of shared data with efficient user revocation in mind. By utilizing the idea of proxy re-signatures, we allow the cloud to re-sign blocks on behalf of existing users during user revocation, so that existing users do not need to download and re-sign blocks by themselves. In addition, a public verifier is always able to audit the integrity of shared data without retrieving the entire data from the cloud, even if some part of the shared data has been re-signed by the cloud. Moreover, our mechanism is able to support batch auditing by verifying multiple auditing tasks simultaneously. Experimental results show that our mechanism can significantly improve the efficiency of user revocation.
ETPL
SER - 030
Panda: Public Auditing for Shared Data with Efficient User Revocation in the
Cloud
Storage clouds use economies of scale to host data for diverse enterprises. However, enterprises differ in the
requirements for their data. In this work, we investigate the problem of resiliency or disaster recovery
(DR) planning in a storage cloud. The resiliency requirements vary greatly between different enterprises and
also between different datasets for the same enterprise. We present in this paper Resilient Storage Cloud Map
(RSCMap), a generic cost-minimizing optimization framework for disaster recovery planning, where the cost
function may be tailored to meet diverse objectives. We present fast algorithms that come up with a minimum
cost DR plan, while meeting all the DR requirements associated with all the datasets hosted on
the storage cloud. Our algorithms have strong theoretical properties: a 2-factor approximation for bandwidth minimization and a fixed-parameter constant approximation for the general cost minimization problem. We
perform a comprehensive experimental evaluation of RSCMap using models for a wide variety of replication
solutions and show that RSCMap outperforms existing resiliency planning approaches.
ETPL
SER - 032
Integrated Resiliency Planning in Storage Clouds
This paper presents a Learning Automata (LA)-based QoS (LAQ) framework capable of addressing some of
the challenges and demands of various cloud applications. The proposed LAQ framework ensures that the
computing resources are used in an efficient manner and are not over- or under-utilized by the consumer
applications. Service provisioning can only be guaranteed by continuously monitoring the resources and quantifying various QoS metrics, so that services can be delivered on an on-demand basis with certain levels of guarantee. The proposed framework helps ensure guarantees on these metrics in order to provide QoS-enabled cloud services. The performance of the proposed system is evaluated with and without LA, and it is shown that the LA-based solution improves the performance of the system in terms of response time and speed-up.
ETPL
SER - 033
Learning Automata-Based QoS Framework for Cloud IaaS
Cloud storage services have become commercially popular due to their overwhelming advantages. To provide
ubiquitous always-on access, a cloud service provider (CSP) maintains multiple replicas for each piece of data
on geographically distributed servers. A key problem of using the replication technique in clouds is that it is
very expensive to achieve strong consistency on a worldwide scale. In this paper, we first present a
novel consistency as a service (CaaS) model, which consists of a large data cloud and multiple
small audit clouds. In the CaaS model, a data cloud is maintained by a CSP, and a group of users that
constitute an audit cloud can verify whether the data cloud provides the promised level of consistency or not.
We propose a two-level auditing architecture, which only requires a loosely synchronized clock in
the audit cloud. Then, we design algorithms to quantify the severity of violations with two metrics: the
commonality of violations, and the staleness of the value of a read. Finally, we devise a
heuristic auditing strategy (HAS) to reveal as many violations as possible. Extensive experiments were
performed using a combination of simulations and real cloud deployments to validate HAS.
ETPL
SER - 034
Consistency as a Service: Auditing Cloud Consistency
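The staleness metric for reads can be illustrated as follows: a read violates consistency when it returns a value older than the latest write visible at its (loosely synchronized) timestamp, and its staleness is the age gap; the data format here is illustrative, not the paper's:

```python
def audit_reads(writes, reads):
    """
    Toy audit metric: for each read, find the latest write (by loosely
    synchronized timestamp) at or before the read; report a violation when
    the read returned an older value, with staleness = read_time minus the
    time of the freshest value it should have seen.
    writes: list of (time, value); reads: list of (time, value_returned).
    Returns a list of (read_time, staleness) violations.
    """
    violations = []
    for r_time, r_value in reads:
        visible = [(t, v) for t, v in writes if t <= r_time]
        if not visible:
            continue                     # nothing written yet: no violation
        latest_t, latest_v = max(visible)
        if r_value != latest_v:
            violations.append((r_time, r_time - latest_t))
    return violations

writes = [(1, "v1"), (5, "v2")]
reads = [(3, "v1"), (6, "v1"), (9, "v2")]
print(audit_reads(writes, reads))  # [(6, 1)]
```

The read at time 6 returned "v1" although "v2" was written at time 5, so it is flagged with staleness 1; aggregating such violations across the audit cloud gives the commonality and staleness metrics.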
As the size of DRAM memory grows in clusters, memory errors become common. Current memory availability strategies mostly focus on memory backup and error recovery. Hardware solutions like mirror memory need costly peripheral equipment, while existing software approaches reduce the expense but are limited by high overhead in practical usage. Moreover, in cloud environments, containers such as LXC can now be used as process- and application-level virtualization to run multiple isolated systems on a single host. In this paper, we present a novel system called Memvisor to provide highly available memory mirroring. It is a software approach achieving flexible multi-granularity memory mirroring based on virtualization and binary translation. Memory areas can be flexibly set to be mirrored or not, from the process level up to all user-mode applications. All memory write instructions are then duplicated, and data written to memory are synchronized to a backup space at the instruction level. If a memory failure happens, Memvisor recovers the data from the backup space. Compared with traditional software approaches, the instruction-level synchronization lowers the probability of data loss and reduces the backup overhead. The results show that Memvisor outperforms state-of-the-art software approaches even in the worst case.
ETPL
SER - 035
Multi-Granularity Memory Mirroring via Binary Translation in Cloud
Environments
This paper presents the design, implementation, and evaluation of TransCom, a virtual disk (Vdisk) based cloud computing platform that supports heterogeneous services of operating systems (OSes) and their applications in enterprise environments. In TransCom, clients store all data and software, including OS and application software, on Vdisks that correspond to disk images located on centralized servers, while computing tasks are carried out by the clients. Users can choose to boot any client with the desired OS, including Windows, and access software and data services from Vdisks as usual, without having to deal with other tasks such as installation, maintenance, and management. By centralizing storage yet distributing computing tasks, TransCom can greatly reduce potential system maintenance and management costs. We have implemented a multi-platform TransCom prototype that supports both Windows and Linux services. An extensive evaluation based on both test-bed and real-usage experiments has demonstrated that TransCom is a feasible, scalable, and efficient solution for real-world use.
ETPL
SER - 036
TransCom: A Virtual Disk-Based Cloud Computing Platform for
Heterogeneous Services
Cloud computing is becoming increasingly important for provision of services and storage of data in the
Internet. However, there are several significant challenges in securing cloud infrastructures against different types of attacks. The focus of this paper is on the security services that a cloud provider can offer as part of its infrastructure to its customers (tenants) to counteract these attacks. Our main contribution is a security architecture that provides a flexible security-as-a-service model that a cloud provider can offer to its tenants and to the customers of its tenants. Our security-as-a-service model, while offering baseline security that protects the provider's own cloud infrastructure, also gives tenants the flexibility to add security functionalities that suit their security requirements. The paper describes the design of
the security architecture and discusses how different types of attacks are counteracted by the proposed
architecture. We have implemented the security architecture and the paper discusses analysis and performance
evaluation results.
ETPL
SER - 037
Security as a Service Model for Cloud Environment
Security concerns are widely seen as an obstacle to the adoption of cloud computing solutions. Information Flow Control (IFC) is a well-understood Mandatory Access Control methodology. The earliest IFC models targeted security in a centralised environment, but decentralised forms of IFC have been designed and implemented, often within academic research projects. As a result, there is potential for decentralised IFC to achieve better cloud security than is available today. In this paper we describe the properties of cloud computing (Platform-as-a-Service clouds in particular) and review a range of IFC models and implementations to identify opportunities for using IFC within a cloud computing context. Since IFC security is linked to the data that it protects, both tenants and providers of cloud services can agree on security policy in a manner that does not require them to understand and rely on the particulars of the cloud software stack in order to effect enforcement.
ETPL
SER - 038
Information Flow Control for Secure Cloud Computing
The hindrances to the adoption of public cloud computing services include service reliability, data security and
privacy, regulation compliant requirements, and so on. To address those concerns, we propose
a hybrid cloud computing model which users may adopt as a viable and cost-saving methodology to make the
best use of public cloud services along with their privately-owned (legacy) data centers. As the core of
this hybrid cloud computing model, an intelligent workload factoring service is designed
for proactive workload management. It enables federation between on- and off-premise infrastructures for
hosting Internet-based applications, and the intelligence lies in the explicit segregation of base workload and
flash crowd workload, the two naturally different components composing the application workload. The core
technology of the intelligent workload factoring service is a fast frequent data item detection algorithm, which
enables factoring incoming requests not only on volume but also on data content, upon a changing application
data popularity. Through analysis and extensive evaluation with real-trace-driven simulations and experiments on a hybrid testbed consisting of a local computing platform and the Amazon Cloud service platform, we show that the proactive workload management technology enables reliable workload prediction in the base workload zone (with simple statistical methods), achieves resource efficiency (e.g., 78% higher server capacity than in the base workload zone), reduces data cache/replication overhead (by up to two orders of magnitude) in the flash crowd workload zone, and reacts fast (with an X^2 speed-up factor) to changing application data popularity upon the arrival of load spikes.
ETPL
SER - 039
Proactive Workload Management in Hybrid Cloud Computing
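The fast frequent-data-item detection at the core of workload factoring can be sketched with the classic Misra-Gries one-pass summary (an assumed stand-in; the paper does not specify this exact algorithm):

```python
def frequent_items(stream, k):
    """
    Misra-Gries summary: one pass over the stream using at most k-1 counters,
    guaranteed to retain every item occurring more than len(stream)/k times
    (reported counts are lower bounds on the true counts).
    """
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # Counters are full: decrement all, dropping any that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

# "hot" is the flash-crowd item; rare items come and go.
stream = ["hot"] * 6 + ["a", "b", "c"] + ["hot"] * 4
summary = frequent_items(stream, k=3)
print("hot" in summary)  # True
```

Requests for items surfaced by such a summary would be factored into the flash-crowd zone (off-premise), while the long tail stays on the base-workload infrastructure.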
Recently, Cloud Computing is attracting great attention due to its provision of configurable computing
resources. MapReduce (MR) is a popular framework for data-intensive distributed computing of batch
jobs. MapReduce suffers from the following drawbacks: 1) it is sequential in its processing of the Map and Reduce phases; 2) being cluster-based, its scalability is relatively limited; 3) it does not support flexible pricing; and 4) it does not support stream data processing. We describe Cloud MapReduce (CMR), which
overcomes these limitations. Our results show that CMR is more efficient and runs faster than other
implementations of the MR framework. In addition to this, we showcase how CMR can be further enhanced
to: 1. Support stream data processing in addition to batch data by parallelizing the Map and Reduce phases
through a pipelining model. 2. Support flexible pricing using Amazon Cloud's spot instances and to deal with
massive machine terminations caused by spot price fluctuations. 3. Improve throughput and speed-up
processing over traditional MR by more than 30% for large data sets. 4. Provide added flexibility and
scalability by leveraging features of the cloud computing model. Click-stream analysis, real-time multimedia processing, time-sensitive analysis, and other stream processing applications can also be supported.
ETPL
SER - 040
An Advanced MapReduce: Cloud MapReduce, Enhancements and Applications
Cloud computing is a style of computing where different capabilities are provided as a service to customers
using Internet technologies. The most common offered services are Infrastructure (IasS), Software (SaaS) and
Platform (PaaS). This work integrates service management into the cloud computing concept and shows how management can be provided as a service in the cloud. Nowadays, services need to adapt their functionalities across heterogeneous environments with different technological and administrative domains. The implied complexity of this situation can be simplified by a service management architecture in the cloud. This paper focuses on this architecture, taking into account specific service management functionalities, like incident management or KPI/SLA management, and provides a complete solution. The proposed architecture is based on a distributed set of agents using semantic-based techniques: a Shared Knowledge Plane, instantiated in the cloud, has been introduced to ensure communication between agents.
ETPL
SER - 041
A Flexible Architecture for Service Management in the Cloud
Nowadays, the rapid growth of cloud computing services is stressing the network communication
infrastructure in terms of resiliency and programmability. This evolution reveals missing blocks in the current
Internet Protocol architecture, in particular for virtual machine mobility management in terms of addressing
and locator-identifier mapping. In this paper, we propose changes to the Locator/Identifier Separation
Protocol (LISP) to close this gap. We define novel control-plane functions and evaluate them exhaustively
in the worldwide public LISP testbed, involving five LISP sites separated by distances ranging from a few hundred
to many thousands of kilometers. Our results show that we can guarantee a service downtime of less than one second upon
live virtual machine migration across American, Asian, and European LISP sites, and
down to 300 ms within Europe, outperforming standard LISP and legacy triangular routing approaches in
terms of service downtime as a function of datacenter-to-datacenter and client-to-datacenter distances.
ETPL
SER - 042
Achieving Sub-Second Downtimes in Large-Scale Virtual Machine Migrations
with LISP
Backup paths are usually pre-installed by network operators to protect against single link failures in backbone
networks that use multi-protocol label switching (MPLS). This paper introduces a new scheme called
Green Backup Paths (GBP) that intelligently exploits these existing backup paths to perform energy-
aware traffic engineering without compromising their primary role of
preventing traffic loss upon single link failures. This is in sharp contrast to most existing schemes, which
tackle energy efficiency and link failure protection separately, resulting in substantially higher operational costs.
GBP works in an online and distributed fashion, where each router periodically monitors its
local traffic conditions and cooperatively determines how to reroute traffic so that the largest number of
physical links can go to sleep for energy saving. Furthermore, our approach maintains quality of service by
restricting long backup paths to failure protection only, thereby avoiding substantially
increased packet delays. GBP was evaluated on the point-of-presence representations of two publicly available
network topologies, GÉANT and Abilene, and their real traffic matrices. GBP achieved
significant energy savings, always within 15% of the theoretical upper bound.
ETPL
SER - 043
Leveraging MPLS Backup Paths for Distributed Energy-Aware Traffic
Engineering
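The local sleep decision described above can be caricatured as a greedy check: a link is a candidate for sleeping only if its load is small and the remaining awake links have enough spare capacity to absorb it. The function below is a hypothetical sketch (the names, the 10% threshold, and the single-link reroute rule are all assumptions for illustration), not the paper's distributed protocol.

```python
def links_to_sleep(link_loads, capacity, threshold=0.1):
    """Greedy local heuristic in the spirit of GBP: a link whose
    load is below `threshold` of capacity is a sleep candidate,
    provided its traffic fits on the remaining awake links."""
    asleep = []
    loads = dict(link_loads)
    # Consider the most lightly loaded links first.
    for link, load in sorted(link_loads.items(), key=lambda kv: kv[1]):
        if load > threshold * capacity:
            continue
        others = [l for l in loads if l != link and l not in asleep]
        spare = sum(capacity - loads[o] for o in others)
        if spare >= load:
            asleep.append(link)
            # Simplified reroute: dump this link's load on one awake link.
            if others:
                loads[others[0]] += load
            loads[link] = 0.0
    return asleep

# With one nearly idle link, only that link is put to sleep:
# links_to_sleep({"a": 0.05, "b": 0.5, "c": 0.9}, 1.0) == ["a"]
```

A real deployment would of course reroute along the pre-installed backup paths and coordinate between routers; the sketch only shows the admission test for sleeping a link.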
The need to autonomically optimize end-user service experience in near real time has been identified in the
literature in recent years. Management systems that monitor end-user service session context exist,
but approaches that estimate end-user service experience from session context do not analyze the compliance
of that experience with user expectations. Approaches that optimize end-user service delivery are not
applicable to arbitrary services; they either optimize specific service types or use general mechanisms that do
not consider service experience. The lack of a holistic model for end-user service management is a barrier to
autonomic end-user service optimization. This paper presents Aesop, an approach to
autonomic optimization of end-user service delivery using semantic-based techniques. Its knowledge base uses
the End-User Service Analysis and Optimization ontology, which models the end-user service management
domain and partitions knowledge that varies over time for efficient access. The Aesop Engine executes an
autonomic loop in near real time, running semantic algorithms to monitor sessions, analyze their
compliance with expectations, and plan and execute optimizations on service delivery networks. The
algorithms are efficient because they operate on small partitioned subsets of the knowledge base, held as
separate self-contained models at run time. An Aesop implementation was evaluated on a home area network
test bed, where the compliance of service sessions with expectations when optimization was active was compared
with the compliance of an identical set of sessions when optimization was inactive. Significant improvements
in compliance levels were observed for high-priority sessions in all experimental scenarios, with compliance
levels more than doubling in some cases.
ETPL
SER - 044
The Aesop Approach for Semantic-Based End-User Service Optimization
Network visibility is a critical part of traffic engineering, network management, and security. The most
popular current solutions, Deep Packet Inspection (DPI) and statistical classification, rely heavily on the
availability of a training set. Besides the cumbersome need to regularly update the signatures, their visibility is
limited to the classes the classifier has been trained for. Unsupervised algorithms have been envisioned as a viable
alternative to automatically identify classes of traffic. However, the accuracy achieved so far does not allow them
to be used for traffic classification in practical scenarios. To address these issues, we propose SeLeCT,
a Self-Learning Classifier for Internet Traffic. It uses unsupervised algorithms along with an adaptive seeding
approach to automatically let classes of traffic emerge and be identified and labeled. Unlike
traditional classifiers, it requires neither a priori knowledge of signatures nor a training set from which to extract
them. Instead, SeLeCT automatically groups flows into pure (or homogeneous) clusters using simple
statistical features. SeLeCT simplifies label assignment (which still relies on some manual intervention) so
that proper class labels can be easily discovered. Furthermore, SeLeCT uses an iterative seeding approach to
boost its ability to cope with new protocols and applications. We evaluate the performance
of SeLeCT using traffic traces collected in different years from various ISPs located on three different continents.
Our experiments show that SeLeCT achieves excellent precision and recall, with overall accuracy close to
98%. Unlike state-of-the-art classifiers, the biggest advantage of SeLeCT is its ability to discover new protocols
and applications in an almost automated fashion.
ETPL
SER - 045
SeLeCT: Self-Learning Classifier for Internet Traffic
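The unsupervised grouping stage, clustering flows into homogeneous groups from simple statistical features, can be sketched with a toy k-means over per-flow feature vectors (e.g. mean packet size, mean inter-arrival time). SeLeCT's actual clustering and iterative seeding are more elaborate; this stand-in only shows the idea of classes emerging without a training set.

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Toy k-means: `points` are per-flow feature tuples; returns
    k clusters of points. A stand-in for SeLeCT's clustering
    stage, not the paper's actual algorithm."""
    rnd = random.Random(seed)
    centroids = rnd.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        # Assign each flow to its nearest centroid.
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        # Recompute each centroid as the mean of its members.
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = tuple(
                    sum(x) / len(members) for x in zip(*members))
    return clusters

flows = [(0, 0), (0, 1), (10, 10), (10, 11)]
groups = kmeans(flows, 2)
# Two well-separated feature groups end up in two pure clusters.
```

Label assignment in SeLeCT then attaches a protocol name to each pure cluster with limited manual intervention, which the sketch does not model.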
Testing network devices in a live environment is desirable because it reflects real operating conditions. However,
defects observed there are hard to reproduce, and network connectivity is broken if the device goes down. For effective defect
reproduction from real traffic, we design a new mechanism that allows the device under test (DUT) to be
taken online/offline automatically and supports multi-port replay for multi-port network devices with an OpenFlow
switch. The defect traces are captured while the DUT is online. When a DUT failure is detected, the DUT is
taken offline, and the defect-triggering traces are replayed to identify the defect. For efficient replay, we keep
only partial payloads in a reduced number of packets in the defect traces, sufficient to trigger the
defects. For defect identification, a reduction based on a binary search algorithm is presented to deal with
defects caused by payload anomalies and by overloading. The downsizing ratios in the cases of payload
anomalies and overloading are up to 98.8% and 96%, respectively. The minimum outage time of the failover
during a DUT failure is obtained when the check interval is 1 second and the number of tolerable
consecutive failures is 2.
ETPL
SER - 046
On-the-Fly Capture and Replay Mechanisms for Multi-Port Network Devices in
Operational Networks
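Assuming the defect-triggering behavior is monotone in the trace prefix (if a prefix of n packets triggers the defect, every longer prefix does too), a binary-search reduction can be sketched as below. The `triggers_defect` callback stands in for replaying a candidate trace against the DUT and is a hypothetical interface, not the paper's mechanism.

```python
def minimal_trigger_prefix(packets, triggers_defect):
    """Binary search for the shortest trace prefix that still
    reproduces the defect. `triggers_defect(trace)` replays the
    candidate trace and reports whether the DUT fails."""
    lo, hi = 1, len(packets)
    while lo < hi:
        mid = (lo + hi) // 2
        if triggers_defect(packets[:mid]):
            hi = mid       # defect still triggers: try shorter
        else:
            lo = mid + 1   # too short: need more packets
    return packets[:lo]

# If packet number 6 is the trigger, the minimal prefix has 7 packets:
trace = minimal_trigger_prefix(list(range(10)), lambda t: 6 in t)
# trace == [0, 1, 2, 3, 4, 5, 6]
```

Each probe costs one replay, so reduction takes O(log n) replays instead of n; a similar bisection over payload bytes handles the payload-anomaly case.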
We consider wireless mesh networks in which the wireless routers' radios have cognitive ability. Cognitive ability
is a cost-efficient way to increase available bandwidth, but it requires an adaptive bandwidth management
mechanism to deal with the dynamics of primary users' activities. In this paper, we investigate
joint channel allocation and routing in cognitive wireless mesh networks, including channel reuse
opportunities, in order to improve network performance. In particular, we propose
an economic framework for adaptation and control of the network resources with the goal of network profit
maximization. The framework is based on the notion of a state-dependent node shadow price,
derived from Markov decision theory. The node shadow prices are used as routing metrics, while their average
values are used to allocate the channels among the different nodes. Simulation results illustrate
the network profit maximization and the effectiveness of the proposed channel allocation scheme integrated
with a channel reuse algorithm.
ETPL
SER - 047
An Economic Framework for Routing and Channel Allocation in Cognitive
Wireless Mesh Networks
In wireless sensor networks, the issue of preserving energy requires utmost attention. One primary way of
conserving energy is judicious deployment of sensor nodes within the network area so that the energy flow
remains balanced throughout the network, preventing the occurrence of energy holes. First, we
analyze network lifetime, identify node density as the parameter with the most significant influence
on network lifetime, and derive the desired parameter values for balanced energy consumption. Then, to meet
the requirement of energy balancing, we propose a probability density function (PDF), derive its
intrinsic characteristics, and show its suitability to model the network architecture considered in this work.
A node deployment algorithm is also developed based on this PDF. The performance of the deployment scheme is
evaluated in terms of coverage-connectivity, energy balance, and network lifetime. In a qualitative analysis, we
show the extent to which our proposed PDF provides the desired node density derived from
the network lifetime analysis. Finally, the scheme is compared with three existing deployment schemes
based on various distributions. Simulation results confirm our scheme's superiority over the existing
schemes in all three performance metrics.
ETPL
SER - 048
Design of a Probability Density Function Targeting Energy-Efficient Node
Deployment in Wireless Sensor Networks
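As an illustration of deploying nodes according to a designed PDF, the sketch below uses inverse-transform sampling from an invented radial density f(r) = 2(R - r)/R^2 on [0, R], which places more nodes near the sink at r = 0. The paper derives its own PDF from the lifetime analysis; this one only shares the qualitative shape (denser toward the sink).

```python
import math
import random

def sample_radius(R, rnd):
    """Inverse-transform sample from the illustrative density
    f(r) = 2(R - r)/R**2.  Its CDF is F(r) = 1 - (1 - r/R)**2,
    so inverting F(r) = u gives r = R * (1 - sqrt(1 - u))."""
    u = rnd.random()
    return R * (1 - math.sqrt(1 - u))

def deploy(n, R, seed=0):
    """Place n nodes in a disc of radius R, denser near the center."""
    rnd = random.Random(seed)
    nodes = []
    for _ in range(n):
        r = sample_radius(R, rnd)
        theta = rnd.uniform(0, 2 * math.pi)
        nodes.append((r * math.cos(theta), r * math.sin(theta)))
    return nodes

positions = deploy(1000, 10)
# All nodes fall inside the disc; the mean distance from the sink is
# E[r] = R/3, i.e. about 3.33 for R = 10, reflecting the center bias.
```

Any PDF with a closed-form, invertible CDF can be plugged in the same way, which is why deriving the PDF analytically (as the paper does) directly yields a deployment algorithm.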
Carrier Ethernet is rapidly being deployed in the metropolitan and core segments of the transport network. One
of the emerging flavors of Carrier Ethernet is the IEEE 802.1Qay PBB-TE, or Provider Backbone Bridging -
Traffic Engineering, standard. PBB-TE relies on the assignment of a network-specific Virtual Local
Area Network (VLAN) tag, called the Backbone VLAN ID (BVID), which is used in conjunction with
a backbone Media Access Control (MAC) address for forwarding: the 12-bit BVID along with the 48-
bit backbone MAC address is used to forward an Ethernet frame. The assignment of BVIDs in a network is
critical, given that there are only 4094 possible values, especially for paths that overlap in
the network graph and are incident at the same destination. While the only way to scale is to reuse BVIDs, reuse
can lead to a complication if the same BVID is allocated to an overlapping path. To the best of our
knowledge, this is the first work to isolate this problem of limited BVID availability, which arises only due
to graph overlap between services. We formulate and solve it as a constrained optimization problem, presenting
both optimal and heuristic algorithms. The optimal approach solves the static
case, while the heuristic solves both the static and dynamic cases of the BVID allocation problem.
Results show that the developed heuristics perform close to the optimal and can be used in commercial
settings for both cases.
ETPL
SER - 049
On the Backbone VLAN Identifier (BVID) Allocation in 802.1Qay Provider
Backbone Bridged — Traffic Engineered Networks
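The core constraint, that two paths sharing a link and ending at the same destination must not reuse a BVID, resembles coloring a conflict graph. The greedy sketch below is a hypothetical first-fit heuristic over that constraint (the path representation and tie-breaking are assumptions), not the paper's algorithm.

```python
def assign_bvids(paths, max_bvids=4094):
    """First-fit BVID assignment. `paths` is a list of
    (destination, frozenset_of_links) tuples.  Two paths conflict
    when they share at least one link AND the same destination;
    conflicting paths must get distinct BVIDs, others may reuse."""
    assignment = {}
    for i, (dest, links) in enumerate(paths):
        used = {assignment[j]
                for j, (d2, l2) in enumerate(paths[:i])
                if d2 == dest and links & l2}
        # Pick the lowest BVID not used by any conflicting path.
        assignment[i] = next(
            b for b in range(1, max_bvids + 1) if b not in used)
    return assignment

paths = [
    ("D", frozenset({("A", "B"), ("B", "D")})),  # overlaps next at B-D
    ("D", frozenset({("C", "B"), ("B", "D")})),
    ("E", frozenset({("A", "B"), ("B", "E")})),  # different destination
]
# The first two paths need distinct BVIDs; the third can reuse BVID 1.
```

First-fit coloring keeps BVID consumption low in the common case, which matters when only 4094 values must cover all services; the paper's optimal formulation addresses the same scarcity exactly.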
Thank You!