
Comput Sci Res Dev
DOI 10.1007/s00450-012-0234-0

SPECIAL ISSUE PAPER

A journey through SMScom: self-managing situational computing

Luciano Baresi · Carlo Ghezzi

© Springer-Verlag Berlin Heidelberg 2012

Abstract This article provides an overall view of the research that has been done in the context of self-managing software within the SMScom project. We start from the motivations that inspired the research, and then we focus on a reference framework that explains its conceptual underpinnings and on the paradigm shift it calls for in the way we currently engineer software. Next we focus on some specific research results achieved at the architecture and verification support level.

Keywords Ubiquitous, pervasive systems · Internet of things · Cyber-physical systems

1 Introduction and motivations

Modern software systems increasingly live in a dynamic and open world [2]. The goals to fulfill and the requirements to meet evolve over time. The environment in which the software is embedded also changes, even in ways that cannot be predicted upfront. Indeed, the physical and the cyber world are increasingly intertwined, through sensors, actuators, and many kinds of devices, to form so-called cyber-physical systems. As a consequence of changes in the environment, the application itself is also required to evolve.

This research has been partially funded by the European Commission, Programme IDEAS-ERC, Project 227977-SMScom.

L. Baresi · C. Ghezzi (✉)
Dipartimento di Elettronica e Informazione, DeepSE Group, Politecnico di Milano, Piazza Leonardo da Vinci, 32, 20133 Milano, Italy
e-mail: [email protected]

L. Baresi
e-mail: [email protected]

Another key feature is that the software we build is increasingly developed by composing parts developed by other parties and run by computing infrastructures that we do not own. Rather, they are developed, evolved, and operated by other, independent parties. Infrastructure as a service is for example provided by cloud environments. Software as a service may instead be provided and administered by service providers.

Software evolution is not a new problem. It has been recognized as a key distinguishing factor of software with respect to other technologies since the early work pioneered by L. Belady and M. Lehman in the 1970s [6, 30]. Presently, however, software evolution manifests itself in radically different forms. First, the intensity of change today is much higher. Second, and most important, it must often be managed by the software itself while it is active and running, rather than being handled off-line through maintenance; that is, the software must be self-managing.

Our research indicates that, in order to achieve self-management, new paradigms are needed to support both design and run time. At design time, we need to anticipate changes and pre-plan strategies to react to them. At run time, we need a semantically rich environment that can detect changes, reason about them, check whether they may lead to requirements violations, and reactively modify the behavior of the running application.

This paper provides an introduction to the goals and to some results of the SMScom research project. The project's ultimate goal is precisely to support the development of dependable software that lives in a continuously evolving world and adapts itself to the situational changes it detects during its entire lifetime, from development time to run time. Rather than addressing specific, detailed technical problems, in this paper we provide a global view of the SMScom project, focusing on the motivations behind it and on the paradigm shift that is required in the way we conceive and run software to support continuous and dynamic evolution.

The paper is structured as follows. Section 2 presents a general framework to understand and reason about evolution and adaptation. In particular, it allows us to articulate the complex interactions that may occur between the software and the environment in which it is embedded, and how dependency on the environment may affect dependability and drive adaptation. Section 3 describes the changes in paradigm required by adaptive software. Sections 4 and 5 zoom into specific issues that concern software architecture and lifelong verification. Section 6 presents final considerations and snapshots of ongoing and future work.

2 Reference framework

In this section we describe a framework that is useful to understand and reason about software change. The framework is rooted in the foundational work on requirements engineering developed by M. Jackson and P. Zave [27, 39]. Jackson and Zave observe that in requirements engineering one needs to carefully distinguish between two main concerns: the world and the machine. The machine is the system to be developed; the world (the environment) is the portion of the real world affected by the machine. The ultimate purpose of the machine is always to be found in the world. The requirements we wish to satisfy must be found by scrutinizing the world and must be expressed in terms of the phenomena that occur in it. Some of these phenomena are shared with the machine: they are either controlled by the world and observed by the machine, or controlled by the machine and observed by the world. The machine is built to achieve the requirements in the real world. Its specification is a prescriptive statement of the relation on shared phenomena that must be enforced by the system to be developed. The machine that implements it must be correct with respect to it.

To understand how a specification should be derived, it is important to gather the relevant domain knowledge; i.e., the software engineer needs to formulate the set of relevant assumptions that have to be made about the environment in which the machine is expected to work, and which affect the achievement of the desired results. Quoting from [39],

"The primary role of domain knowledge is to bridge the gap between requirements and specifications."

If R and S are the prescriptive statements that formalize the requirements and the specification, respectively, and D are the descriptive statements that formalize the domain assumptions, then, assuming that S and D are both satisfied and consistent with each other, it is necessary to prove that

S, D ⊨ R    (1)

i.e., the machine's specification S we are going to devise must entail satisfaction of the requirements R in the context of the domain properties D.

Fig. 1 The Jackson–Zave framework
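As a small illustration of ours (not an example from the project), consider a heating controller: R might require that the room temperature always stay between 18 and 22 degrees; D might state that the heater, when on, raises the temperature by at most one degree per minute and that the room loses heat at a known bounded rate; S might prescribe switching the heater on when the sensed temperature drops below 19 degrees and off when it exceeds 21. Discharging (1) then amounts to showing that every behavior allowed by S, in any environment that behaves as D describes, keeps the temperature within the required range.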

Figure 1 provides a visual sketch of the Jackson–Zave approach. The domain D plays a fundamental role in establishing the requirements. We need to know upfront how the environment in which the software is embedded works, since the software to develop (the machine) can achieve the expected requirements only based on certain assumptions on the behavior of the domain, which are described by D. Should these assumptions be invalidated, the requirements might be violated. That is, statement (1) may not hold because of a change of D. The environment properties captured by D may change for two reasons: either because the domain analysis was initially flawed (i.e., the environment behaves according to different laws than the ones captured by D) or because changes occurred in the environment, which caused the assumptions that were made earlier to become no longer valid.

Software evolution refers to changes that affect the machine, to enable it to respond to changes in the requirements and/or in the environment (we ignore in this paper the fact that the implementation may be incorrect, i.e., that the running software violates its specification S). The management of changes in traditional software is performed off-line, during the maintenance phase. Changes in the requirements mostly fall under the traditional perfective maintenance category, and may be dictated by changes in the business goals of organizations or new demands by users of older versions of an application. Environmental changes may instead affect the assumptions that ensure the satisfaction of the requirements. They may represent organizational assumptions or conditions on the physical context in which the machine is embedded. If these assumptions are invalidated, the software must undergo what is traditionally called adaptive maintenance.

According to the traditional paradigm, in order to undergo a maintenance intervention, software returns to its development stage, where changes are analyzed, prioritized, and scheduled. Changes are then handled by modifying the design and implementation of the application. The evolved system is then verified, typically via some kind of regression testing.

This paradigm does not meet the requirements of many emerging application scenarios, which are subject to continuous changes in the requirements and in the environment, and which require rapid adaptation to such changes. Increasingly, both recognition of and reaction to changes must be managed automatically by the running system, and off-line human intervention must be limited to handling only radical changes. Run-time adaptation must be achieved seamlessly, as the application is running and providing service.

There are already numerous examples of existing systems where these requirements hold. At the business level, information systems are increasingly built through service units and their dynamic integration. Service units may be provided by different providers, crossing enterprise boundaries. The service network is dynamic, since changes in business conditions may require automatic adaptation of the network. At another extreme, pervasive computing systems and cyber-physical systems are also characterized by continuous, dynamic changes, mostly due to contextual changes. A typical context change is due to mobility, which may expose the application to unexpected changes in, for example, the assumptions made on certain features provided by other devices in the physical environment with which the application interacts.

All the examples of systems we refer to provide their functionality by relying on other applications or functional fragments that exist in the external environment. Such applications, which we call services, differ from components in traditional component-based software systems. Services are run autonomously by their own engine, which may be hosted by a remote device. They may be discovered dynamically and then be invoked by other applications to have some task executed. Normally, these services belong to different stakeholders, who have full responsibility over their evolution, deployment, and execution. They may commit to satisfying a certain quality of service, but client applications cannot place full trust in such commitments. Systems built out of services, called service-oriented systems, are therefore characterized by distributed ownership. No single stakeholder is in control of the whole system. Rather, new systems are built out of parts that may evolve dynamically and autonomously. Service-oriented systems are an increasingly important, though not exclusive, class of systems for which dynamic evolution and adaptation are crucial. In this paper we deliberately focus on them.

The Jackson–Zave framework can help us better understand the nature of changes in R and D and the way they can be handled. First, to be more precise in the terminology, we may use the term evolution to indicate changes that require human intervention and off-line modification of the software. The term adaptation may instead be used to indicate changes that the software itself can handle autonomously. Changes in R mostly lead to evolution: they can only be understood and handled by humans. Instead, changes in D may largely be handled by adaptation.

3 Towards a new lifecycle paradigm

We traditionally view the lifetime of software as an alternation between two stages: development and operation. In the development stage the application is initially conceived, specified, modeled, designed, implemented, and verified. Operation instead refers to the stage in which the system is running and offering service. Under operation we also include system administration activities, such as manual performance tuning, database reindexing, and server resetting. The two stages are continuously iterated. As we observed, development is a recurring stage, since applications periodically undergo maintenance activities. The two phases are strictly sequential, and in most cases, before the next version is brought to run time, the previous one is switched off (see Fig. 2). The lifecycle is conceived like this because the software is designed to behave in a certain predefined way within an environment that behaves in a certain manner. Should the required behaviors change because of changes in the goals to be fulfilled, or should the environment's behavior change, the software needs to undergo a development (maintenance) iteration before it is eventually deployed to generate a new runnable version. Furthermore, system administration activities in the operation phase require substantial manual intervention (and would greatly benefit from a self-managed approach).

Fig. 2 Software lifecycle

Fig. 3 Autonomic element

In our context, however, we envisage software systems that can take an active role: they can detect situational changes and react to them autonomously, possibly involving humans in the loop when needed. Systems of this kind are also called autonomic, and their high-level architecture can be described in terms of autonomic elements, schematically represented in Fig. 3. An autonomic element includes a management part (called MAPE cycle) that handles the adaptation through the following steps:

1. Monitor: the data that may indicate the need for an adaptation are collected by a monitoring task.
2. Analyze: the collected data are analyzed to decide on a possible reaction.
3. Plan: the reaction is planned.
4. Execute: the reaction is enacted.

We contend that, in order to found adaptation upon well-defined and mathematically sound grounds, the MAPE cycle at run time must be based on:

– An explicit representation of the formal requirements that need to be satisfied.
– An explicit model of the application and of the environment. As we discussed earlier, we assume that the current application model (and hence the currently running implementation) has been designed according to the domain assumptions captured by the current environment model.
– A learning capability, which can map the monitored physical data onto the higher-level environment model and hence modify it whenever the monitored data suggest different domain assumptions.
– A verification engine that continuously verifies requirements satisfaction after each update of the environment's model. A detected violation can trigger a self-healing adaptation of the application's model. Ideally, the self-healing step at the model level can then drive a (model-driven) adaptation of the running implementation.
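To make these ingredients concrete, the following minimal sketch of ours (all names are hypothetical; the probes, learner, model, verifier, and adapter are assumed to be supplied elsewhere) shows how a MAPE cycle can be wired around an explicit model and a verification engine:

import time

def notify_designer(violations):
    # Placeholder: hand the collected run-time knowledge to a human designer
    # for off-line evolution when self-adaptation is not possible.
    print("off-line evolution needed for:", violations)

class MapeLoop:
    def __init__(self, probes, learner, model, verifier, adapter):
        self.probes = probes      # Monitor: sources of raw environment data
        self.learner = learner    # maps physical data onto domain assumptions
        self.model = model        # explicit model of application + environment
        self.verifier = verifier  # checks requirements against the model
        self.adapter = adapter    # plans changes to the application model

    def run(self, period_s=60.0):
        while True:
            raw = [probe.read() for probe in self.probes]       # Monitor
            changed = self.learner.update(self.model, raw)      # Analyze
            if changed:
                violated = self.verifier.check(self.model)      # re-verify
                if violated:
                    plan = self.adapter.plan(self.model, violated)  # Plan
                    if plan is not None:
                        plan.execute()                          # Execute
                    else:
                        # Radical change: bring humans into the loop.
                        notify_designer(violated)
            time.sleep(period_s)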

Fig. 4 Reference lifecycle for adaptive software

To fully exploit these features, a radical change is necessary in the way we normally conceive and run software, leading to a new lifecycle paradigm. In particular, the traditional, clear-cut boundary between development and execution becomes blurred [3]. Traditionally, software engineering did not care about run time. Rather, it mainly focused on principles and methods to develop high-quality software, possibly controlling and reducing development costs. Once developed, the software simply needed to be run, and this was not perceived as a problematic area.

Adaptive systems instead require that the run-time environment be able to reason about requirements and the domain. Models—traditionally thought of as development-time conceptual tools to support software designers—must be kept alive and running at run time. Verification—another typical development-time activity—is key to supporting detection of the need for adaptation.

The envisaged seamless integration between development and run time can be explained with reference to Fig. 4.

The initial development of the adaptive application follows a model-driven approach: models are initially developed, based on requirements and domain assumptions (1). Models are verified using automated tools, typically based on model checking (2). In deriving the code, external components and services developed by third parties may be incorporated (3). Once developed and deployed in the target environment, the software is executed, and a monitor collects run-time data through suitable probes (4). Learning algorithms are used to transform the collected physical data into higher-level domain information that is retrofitted to the model level (5). This may generate a change in the assumptions that are cast in the current domain model, which is then updated accordingly. This, in turn, re-triggers the verifier. If the need for an adaptation is discovered, the adapter (6) may modify the model (at run time), and this may generate changes in the implementation, and hence a further iteration of the software lifecycle described by the diagram in Fig. 4. Of course, it may well be that the detected changes cannot be handled in a self-adaptive manner, but require more radical human intervention. In this case, the knowledge collected by the run-time environment is passed to a designer to assist off-line evolution.


Fig. 5 Operational description of the specification via an activity diagram

We will provide a concrete example of the proposed approach in the next section. We will then zoom into three main areas to highlight the specific paradigm changes dictated by adaptive systems: requirements, architecture, and lifelong verification.

3.1 An example

Hereafter we illustrate a concrete example in which the approach described earlier is successfully applied. The example, which is taken from [23], refers to a typical e-commerce application that sells merchandise on-line to its users by integrating the following third-party services:

1. Authentication Service. This service manages the identity of users. It provides a Login and a Logout operation through which the system authenticates users.

2. Payment Service. This service provides a safe transactional payment service through which users can pay for the selected merchandise via the CheckOut operation.

3. Shipping Service. This service is in charge of shipping goods to the customer's address. It provides two different operations: NrmShipping and ExpShipping. The former is a standard shipping functionality, while the latter represents a faster and more expensive alternative.

Finally, the system classifies logged users as BigSpender (BS) or SmallSpender (SS), based on their usage profile: if the money spent by a given user in his/her previous orders reaches a specific threshold, the user is considered a BigSpender; otherwise, a SmallSpender.

Figure 5 provides a high-level view of the flow of interaction between users and the e-commerce application, expressed as an activity diagram.

The e-commerce application lives in a potentially turbulent environment, which largely depends on the volatility of several domain assumptions. Changes in a number of key domain assumptions may affect several non-functional requirements that refer to quality attributes, such as reliability, performance, or cost. In the example, we focus in particular on reliability.

Table 1 Domain assumptions on usage profiles

Description                                   Value
P(User is a BS)                               0.35
P(BS chooses express shipping)                0.5
P(SS chooses express shipping)                0.25
P(BS searches again after a buy operation)    0.2
P(SS searches again after a buy operation)    0.15

Table 2 Domain assumptions on external services

Description       Value
P(Login)          0.03
P(Logout)         0.03
P(NrmShipping)    0.05
P(ExpShipping)    0.05
P(CheckOut)       0.1

Reliability requirements can typically be expressed in probabilistic terms: for example, as the probability that certain complex transactions complete successfully. In the case of the e-commerce application, fulfillment of a reliability requirement clearly depends on certain assumptions about the environment, such as the reliability of the third-party services that are integrated into the application and the usage profiles (e.g., the ratio between small and big spenders), which may affect the satisfaction of requirements for the two user categories.

Environment phenomena of these kinds are hard to predict when the system is initially designed. For example, the expected failure rate of services may be stated in the contract that service providers subscribe to in the form of a service-level agreement, but this information is subject to uncertainty and is also very likely to change over time (for example, due to a new release of the service). Likewise, usage profiles are hard to predict upfront, and are also very unstable.

Let us assume that the e-commerce application must satisfy the following reliability requirements:

– R1: "The probability of success is greater than 0.8."
– R2: "The probability of an ExpShipping failure for a user recognized as BigSpender is less than 0.035."
– R3: "The probability of an authentication failure is less than 0.06."

Let us further assume that development-time domain analysis tells us that the expected usage profile can be reasonably described as in Table 1. The notation P(x) denotes the probability of "x". Table 2 instead summarizes the results of domain analysis concerning the external services integrated in the e-commerce application; P(Op) here denotes the probability of failure of service operation Op.

Fig. 6 Example DTMC model

The domain assumptions expressed in Tables 1 and 2 may derive from different sources. For example, reliability properties of third-party services may be published as part of the service-level agreement with service providers. Usage profiles may instead be derived from the previous experience of the designers or from knowledge extracted from previous similar systems.

As we discussed earlier, the software engineer's goal is to derive a specification S that leads to satisfaction of the requirements R, assuming that the environment behaves as described by D. From the activity diagram in Fig. 5, we can derive the state-machine model in Fig. 6. This model can be viewed as a combined formal description of both the application and the domain assumptions.1 The state machine describes the possible sequences of interactive operations, according to the protocol specified by the activity diagram of Fig. 5. Domain assumptions are modeled as probabilities that label the transitions. The model also represents failure and success states for the external services.

Formally, the model in Fig. 6 is a Discrete-Time Markov Chain (DTMC). It contains one state for every operation performed by the system, plus a set of auxiliary states representing potential failures associated with auxiliary operations (e.g., state 5) or specific internal logical states (e.g., state 2). Existing tools support automatic model-to-model transformations, including transformations from activity diagrams to Markovian models [19].
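For reachability properties, the verification step boils down to linear algebra over the DTMC. The following sketch of ours (with toy transition probabilities: it is not the model of Fig. 6) computes the probability of eventually reaching the success state by solving (I − Q)t = b, where Q collects the transitions among transient states and b the one-step probabilities of reaching success:

import numpy as np

# Transient states: 0 = search, 1 = buy, 2 = shipping (toy model, not Fig. 6).
# Q[i, j] = probability of moving from transient state i to transient state j.
Q = np.array([
    [0.00, 0.97, 0.00],   # search -> buy; the remaining 0.03 is a login failure
    [0.15, 0.00, 0.85],   # buy -> search again, or proceed to shipping
    [0.00, 0.00, 0.00],   # shipping is followed only by absorbing states
])
# b[i] = one-step probability of reaching the absorbing success state from i.
b = np.array([0.00, 0.00, 0.95])  # shipping succeeds with probability 0.95

# t[i] = probability of eventually reaching success from transient state i.
t = np.linalg.solve(np.eye(3) - Q, b)
print(round(t[0], 3))  # probability of success from the initial state (~0.917)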

Given the DTMC model in Fig. 6, it is possible to verify requirements R1–R3 by formalizing them as properties expressed in the probabilistic logic PCTL and using a probabilistic model checker, like PRISM [25, 29]. By doing so, we obtain the following results:

– Probability of success = 0.804
– Probability of an ExpShipping failure for a user recognized as BigSpender = 0.031
– Probability of an authentication failure (i.e., Login or Logout failures) = 0.056

1 We do not discuss here how a model like the one in Fig. 6 can be derived from a model like the one in Fig. 5.
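Requirements of this kind map naturally onto PCTL reachability formulae of the form P⋈p[F φ]; R1, for instance, corresponds to a query asking whether the probability of eventually reaching a success state is at least 0.8 (in PRISM-like notation, P>=0.8 [ F "success" ], assuming the model labels successful completion with "success").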

which ensure satisfaction of the requirements.

As we argued earlier, domain assumptions may be inaccurate, or they may become invalid at run time. For example, let us assume that the reliability of the Shipping Service is lower than the value published by the service provider.2 More precisely, let us suppose that the probability of failure of each ExpShipping invocation is in fact slightly higher, equal to 0.067. In this case, by running the model checker again, we obtain that the probability of an express shipping failure for a big spender is 0.042, which is greater than 0.035 and thus violates requirement R2.

If the verification step that leads to identifying a requirements failure must be performed at run time, in order to enable a possible self-adaptation of the running application, it must comply with the real-time constraints dictated by the application. This issue is crucial to support our approach to self-adaptation, which is based on continuous run-time verification. More on this will be discussed in Sect. 5.

4 Software architecture issues

SMScom has thoroughly investigated the problem of properly architecting self-adaptive systems. It has addressed the problem both at the infrastructure and at the application level: there is no final solution—or definitive architectural style—that should be adopted as such, but the different experiments, and also the different viewpoints from which we have studied the problem, have allowed us to distill some "general" requirements and key characteristics that the different types of self-adaptive systems should embed.

2 Reliability parameters may be estimated by monitoring service invocations and run-time failures and by applying Bayesian inference methods, as explained in [15].


SMScom mainly targets highly distributed systems realized through some form of components (or services). All these systems assume—as already introduced in Sect. 2—that the need for adaptation can come both from new requirements and from new (unforeseen) conditions in the environment. This means that the different components must be able to cooperate in a very flexible and dynamic way. Generally speaking, architectural solutions must be characterized by loosely coupled component compositions, based on a variety of dynamic binding mechanisms.

If one thinks of the problem at the infrastructure level, content-based routing and the publish-subscribe architectural style [32] seem to be very promising solutions. The interactions among components are not hard-coded; rather, they are based on the interests (subscriptions) of the parties, which receive messages not because of the identity or role they play in the system, but because of the messages' actual content. A smart implementation of the infrastructure in charge of dispatching the information (through a proper mix of replicated components and distributed responsibilities) allows components to enter and leave the system very easily and lets the information flow through the actual topology of the system.
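A toy content-based broker of ours (a heavily simplified, centralized stand-in for the replicated, distributed infrastructures discussed above) conveys the idea: subscribers declare predicates over message content, and publishers never name their receivers:

from typing import Any, Callable

Message = dict[str, Any]

class Broker:
    def __init__(self):
        self._subscriptions: list[tuple[Callable[[Message], bool],
                                        Callable[[Message], None]]] = []

    def subscribe(self, predicate, handler):
        # Interest is expressed over message *content*, not sender identity.
        self._subscriptions.append((predicate, handler))

    def publish(self, message: Message):
        # The publisher is decoupled from receivers: components can enter and
        # leave the system simply by adding or dropping subscriptions.
        for predicate, handler in self._subscriptions:
            if predicate(message):
                handler(message)

broker = Broker()
broker.subscribe(lambda m: m.get("kind") == "temperature" and m["value"] > 30,
                 lambda m: print("activate cooling:", m))
broker.publish({"kind": "temperature", "value": 32})  # matched by content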

As we mentioned, more and more systems fall within the class of cyber-physical systems, that is, distributed systems whose terminal nodes can be sensors and devices embedded in the environment, also known as the Internet of Things (IoT) [1]. All these "things" must be properly abstracted (e.g., as if they were services [14]) to let the application exploit them, and the middleware must provide means to offer reliable and robust interactions between the environment and the application interacting with it. The IoT is calling for better approaches to the integration between sensors and higher-level software components, and also for suitable programming abstractions [33] to ease the design of things-oriented applications. For example, the adoption of the REST architectural style [7] could provide an interesting means to deal with pervasive environments. Although REST was introduced by observing and analyzing the structure of the Internet, its key concepts can easily be borrowed and exploited to support ubiquitous computing.

In certain application environments, distributed pervasive systems are composed of a very large number of different "agents". Consider, for example, a driving assistance system managing vehicles in an urban area to control traffic conditions. Coordination becomes a problem and raises scalability issues as the number of agents grows. We have studied group-based communication [4] as a possible strategy to mitigate it. Components/agents can be aggregated in groups. Instead of attempting to coordinate a large number of individual components, the problem is reduced to coordinating "groups" of components, which are usually fewer and less dynamic than single components. To do so, we distinguish between inter-group and intra-group coordination. This abstraction provides a clear distinction between application logic and control, simplifies the design of coordination among components, prevents the system from being flooded with coordination messages, and helps designers identify where and how they can include control loops.
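The following fragment of ours (with hypothetical names) sketches the two levels: agents coordinate within their group through an aggregation point, and only one summary per group crosses group boundaries, so inter-group traffic grows with the number of groups rather than with the number of agents:

from dataclasses import dataclass

@dataclass
class Vehicle:
    vehicle_id: str
    speed_kmh: float

class Group:
    def __init__(self, area: str, members: list[Vehicle]):
        self.area = area
        self.members = members  # intra-group: full membership stays local

    def local_summary(self) -> float:
        # Intra-group coordination: aggregate member state (here, mean speed).
        return sum(v.speed_kmh for v in self.members) / len(self.members)

def coordinate(groups: list[Group]) -> dict[str, float]:
    # Inter-group coordination sees one value per group, not one per vehicle.
    return {g.area: g.local_summary() for g in groups}

north = Group("north", [Vehicle("a", 42.0), Vehicle("b", 38.0)])
south = Group("south", [Vehicle("c", 11.0)])
print(coordinate([north, south]))  # {'north': 40.0, 'south': 11.0}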

The implementation of self-adaptive software may be supported at different, complementary levels: at the architectural level (discussed below) or at the programming-language level. In the latter case, context-oriented programming (COP) [26] proposes constructs that support adaptation driven by the recognition of the different contexts that the application may enter dynamically and in which it is supposed to provide (context-aware) services to users. COP provides language-level abstractions to modularize behavioral adaptation concerns and to dynamically activate them at runtime [22]. COP also provides means to dynamically combine different behaviors when all the associated contexts are active at the same time, and to properly modularize the code for each behavior. The article [38] shows that COP languages can empower software engineers committed to the design of context-aware systems with a very powerful approach and tool.
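A minimal, COP-flavored sketch of ours (in plain Python rather than a real COP language such as those discussed in [26, 38]) shows the core idea: behavioral variations are grouped into named layers, activated dynamically, and composed with the base behavior at call time:

from contextlib import contextmanager

active_layers: list[str] = []  # the set of currently active contexts

@contextmanager
def layer(name: str):
    # Dynamically scoped activation: the variation holds only inside the block.
    active_layers.append(name)
    try:
        yield
    finally:
        active_layers.remove(name)

def route(destination: str) -> str:
    plan = f"fastest road route to {destination}"      # base behavior
    if "heavy_traffic" in active_layers:               # variation: replace base
        plan = f"secondary-road route to {destination}"
    if "low_battery" in active_layers:                 # variation: refine result
        plan += ", passing by a charging station"
    return plan

print(route("Piazza Leonardo da Vinci"))      # base behavior
with layer("low_battery"):
    print(route("Piazza Leonardo da Vinci"))  # same call, adapted behavior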

In SMScom we explored different ways of architecting self-adaptive software implementations. As for specification, an interesting approach is based on the concept of dynamic software product line (DSPL) [10, 24]. A product line [9, 35] defines a family of software products that can be viewed as different configurations. In our case, configurations differ in the way they satisfy the same requirements in different environmental contexts, represented during design as variation points in product-line terms. The product line is dynamic in the sense that the different instances are instantiated dynamically at run time. We also investigated three other approaches to enact the dynamism, mostly in the context of dynamic service compositions. In [37] composition is viewed as a planning problem, and the proposed solution is based on planning techniques. In the work on tiles [8], the application results from assembling typed tiles. Finally, we did some work on bio-inspired approaches [11], where the composition process mimics the self-coordination capabilities of certain kinds of insects.
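In DSPL terms, the key mechanism is a variation point whose binding is decided (and re-decided) while the application runs. A minimal sketch of ours, with hypothetical names:

# Two variants satisfying the same requirement ("deliver the data") under
# different environmental contexts; toy stand-ins for product-line variants.
def upload_bulk(items: list) -> str:
    return f"uploaded {len(items)} items uncompressed"

def upload_compressed(items: list) -> str:
    return f"uploaded {len(items)} items compressed"

VARIANTS = {"wifi": upload_bulk, "cellular": upload_compressed}

class UploadVariationPoint:
    # The variation point defers variant selection to run time; re-evaluating
    # the context probe re-binds the variant dynamically.
    def __init__(self, context_probe):
        self.context_probe = context_probe

    def __call__(self, items: list) -> str:
        return VARIANTS[self.context_probe()](items)

network = {"type": "wifi"}
upload = UploadVariationPoint(lambda: network["type"])
print(upload([1, 2, 3]))        # bound to the wifi variant
network["type"] = "cellular"    # context change observed at run time
print(upload([1, 2, 3]))        # re-bound without redeployment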

Besides runtime composition, self-adaptive applications must also cope with the dynamic update of components, ideally in a safe and non-disruptive way. The approach proposed in [31] builds upon the notions of quiescence and tranquility and proposes version consistency of distributed transactions as a safety criterion for dynamic updates. Version consistency ensures that distributed transactions are served as if they were operating on a single coherent version of the system, despite possible updates that may happen meanwhile. A distributed algorithm maintains the dynamic dependences between components at the architectural level and enables low-disruption, version-consistent updates.

The architecture of adaptive applications must be augmented with additional monitoring capabilities to detect changes in the behavior of the environment in which they are embedded. A general approach to monitoring is presented in [5]. The environment we developed in the context of service integration (Dynamo) treats supervision as a cross-cutting concern with respect to the workflow and builds upon well-established open-source technology and standards. Supervision consists of monitoring and possibly reacting. The former synchronously checks the execution to see whether everything proceeds as planned. The latter is only activated if an anomaly is discovered, and attempts to fix the execution and keep things on track. Supervision elements are only blended in at runtime, and different stakeholders can adopt different supervision policies with no impact on the actual business logic.

5 Lifelong verification

Lifelong verification means that software verification has to be performed continuously, both at development time and at run time. At development time, it supports designers in conceiving the right features and developing the most suitable architecture. At run time, it supports assessing whether the currently running system still provides a valid solution after changes are detected in the environment. In both cases, verification must be performed in a way that complies with certain requirements in terms of response time. These requirements may of course differ substantially.

At design time, they must be compatible with the needs for agility that are increasingly demanded of development processes to guarantee continuous product refinement and adaptation. Most agile development methods dismiss formal models and formal verification because they are viewed as adversaries of rapid, requirements-driven development. We claim instead that, by supporting model-driven development and fast verification of high-level models, they can potentially become the best allies of agility. At run time, timing requirements for verification are instead dictated by the changes occurring in the environment and by the speed of the reaction mechanism, which must be able to adapt the solution to external changes before their effect becomes visible in terms of requirements violations.

The question we need to address is then how we can improve the execution time of verification. Under which key assumptions or restrictions can verification become faster? Two main approaches may be followed to address the problem. One consists of trading speed for accuracy: we may, for example, work on simplified models and then obtain less accurate, but hopefully still useful, verification information; or we may work on simplified properties, or both. Another approach may instead be based on the observation that changes are always incremental, i.e., evolution is gradual, and therefore we might try to focus on the effects of changes, instead of thinking of the system as a new one that needs to be re-analyzed from scratch after each evolutionary step.

The problem can be abstractly formulated as follows. Let X be a system model and let R be a set of requirements. According to the framework illustrated in Sect. 2, X typically consists of a combination of interacting software and environment entities, formalized by ⟨S,D⟩, which satisfies the properties expressed by R. A change is a new pair ⟨D′,R′⟩. An incremental verification reuses partial results of the analysis of ⟨S,D⟩ against R to analyze ⟨S,D′⟩ against R′. Since here we restrict our analysis to changes in the environment, we may further ignore requirements changes, i.e., assume R′ = R.

Let us briefly discuss an approach that can make verification incremental. For simplicity, we ignore the problem of detecting changes in D. We simply point out that, if this must be performed autonomously at run time, it is necessary to monitor the environment, collect relevant physical data, and transform them into higher-level information corresponding to domain properties (see, e.g., the approach presented in [15]).

Incremental verification may be based on the assumption that possible changes can often be anticipated at design time, by careful scrutiny of the environment assumptions in the requirements phase. The first approach, which exploits change anticipation, is rooted in the influential design-for-change principles advocated by D. Parnas [34], which led to the notions of modularity, encapsulation, interface abstraction, and contracts. It applies compositional reasoning to a modularized system through an approach commonly called assume-guarantee. The system is viewed as a parallel composition of modules, each of which has to guarantee a certain property that corresponds to the contract it subscribes with the other modules. The key idea in assume-guarantee reasoning [28, 36] is that we show that module M1 guarantees certain properties P1 on the assumption that module M2 delivers certain properties P2, and vice versa for M2, and then claim that the system composed of M1 and M2 (i.e., both running and interacting together) guarantees P1 and P2 unconditionally. Verification methods of this kind are called compositional because they allow one to reason about M1 and M2 separately and deduce properties about the parallel composition M1‖M2 without having to reason about the composed system directly. This direction is perhaps the most widely studied in the literature and leads in general to compositional approaches that can help dominate the complexity of reasoning on a large system.
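To fix notation (our rendering of the standard rule, not a formulation from the project): writing ⟨A⟩ M ⟨G⟩ for "M guarantees G whenever its environment satisfies A", the circular rule concludes ⟨true⟩ M1 ‖ M2 ⟨P1 ∧ P2⟩ from the two premises ⟨P2⟩ M1 ⟨P1⟩ and ⟨P1⟩ M2 ⟨P2⟩. The apparent circularity is sound only under suitable semantic restrictions, typically discharged by induction over execution steps [28, 36].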

Assume-guarantee approaches can be applied to managing verification incrementally. That is, whenever the effect of a change can be encapsulated within the boundary of a module (i.e., that module, after the change that affected it, can be proved to still guarantee its contract property), the other modules are not affected, and their verification does not need to be redone. The success of this approach depends on how well the initial design that defined the module boundaries ensures that anticipated changes do not percolate through a module's interface, affecting its guaranteed property (i.e., its contract). Compositional reasoning fails, however, whenever the consequence of a change is that a module no longer guarantees the specified property.

We have been working on a different (orthogonal) approach, which is also based on a prior identification of changeable factors, but does not depend on modular reasoning. Let us refer again, for example, to the system model described by the DTMC shown in Fig. 6, which is expected to satisfy certain properties expressed in PCTL. Changes that may occur in the world are, in this context, expressible as changes in the probabilities associated with certain transitions: for example, the probability of a shipping failure. A non-incremental approach implies that verification through model checking is applied to the modified model from scratch, and re-applied after any subsequent change. The incremental approach we developed [18] is called parametric: the transitions representing phenomena that can change are labeled by variables, not constants. Constants model phenomena that are fully known before execution and cannot change, whereas variables model changeable phenomena. Evaluation of the required PCTL properties on the model (which contains constant and variable parts) is a partial evaluation, which involves both numeric and symbolic processing. In the case of reachability formulae, the result is a symbolic polynomial formula that represents the residual verification condition corresponding to the partially evaluated PCTL property.3 This verification condition is carried on to runtime to support continuous verification. In fact, as the real values become known at runtime, the evaluation of the verification condition can tell us whether the system behaves as specified or not. The proposed method is extremely efficient at runtime, at the obvious expense of a high pre-runtime complexity, which we are generally willing to pay to achieve runtime efficiency. In particular, the complexity heavily depends on the number of variables; the method is therefore particularly efficient if only a few parameters can change within the whole set of environment parameters.

A main limitation of this approach is that it can deal with only a limited class of variability: in the DTMC case, the structure of the model cannot change, only the transition labels can. In addition, the approach as described works for models and properties for which the design-time step can produce a closed formula to evaluate at runtime. We are currently working on a generalized approach to incremental evaluation that is based on the incremental analysis of the syntactic structure of the artifact to evaluate.

3 For other PCTL formulae, evaluation is performed as a tree-structured traversal computing polynomial forms.
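The division of labor between design time and run time can be illustrated with a toy parametric DTMC of ours (a single retry loop; x is the changeable failure probability of a service invocation, c the constant probability that the user searches again): the symbolic analysis happens once, before deployment, and each run-time check reduces to evaluating the resulting closed formula:

import sympy as sp

x = sp.Symbol('x', positive=True)   # variable: service failure probability
c = sp.Rational(2, 10)              # constant: probability of searching again
p = sp.Symbol('p')                  # unknown: P(eventually reach failure)

# One-step expansion of reachability in the toy chain:
# fail now (x), or succeed, loop back ((1 - x) * c), and fail later.
verification_condition = sp.solve(sp.Eq(p, x + (1 - x) * c * p), p)[0]
print(sp.simplify(verification_condition))  # closed formula in x (design time)

# Run time: evaluating the formula is cheap compared to re-model-checking.
check = sp.lambdify(x, verification_condition)
print(check(0.05))    # failure probability under the design-time assumption
print(check(0.067))   # recomputed instantly when monitoring revises x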

6 Conclusions

The objective of this paper was to give a high-level view of the motivations behind the SMScom project. Ultimately, the project tries to rethink the way software is conceived and run in the presence of continuous change. We presented a conceptual framework in which change can be understood and the notions of evolution and adaptation can be made precise and even formal. We advocated the need for software to be increasingly self-adaptive. Then we discussed the major paradigm shift required by (self-)adaptive software, which leads to a unification of development time and run time. Finally, we zoomed into two areas—architecture and verification—to point to some of the results we have achieved so far.

In conclusion, the picture emerging from this article is both general and partial. For a comprehensive view of the work done in the project, the reader is urged to refer to the project's Web site (http://www.erc-smscom.org/).

As for future work, we intend to invest in a number of directions. We briefly mention three, which continue past research that was not mentioned earlier but nevertheless addresses other key aspects of adaptive software. First, we will continue our research on dynamic discovery of specifications, which addresses the problem of reasoning about newly found or evolving components/services [20, 21]. Second, we will continue work on model-driven development to derive a running application from high-level models, and on architectural changes in response to adaptation requests [12, 13]. Finally, we will investigate how control theory can support the design and implementation of adaptive reactions in software. Preliminary work presented in [16, 17] looks quite promising.

Acknowledgements This research has been funded by the European Commission, Programme IDEAS-ERC, Project 227977-SMScom. We are indebted to all the participants in the SMScom project for making it an extremely exciting research avenue. The project's Web site (http://www.erc-smscom.org) lists all of them and acts as a repository of their contributions. We are also indebted to all the individuals who gave us feedback in conferences, workshops, and technical meetings. Naming all of them would simply be impossible!

References

1. Atzori L, Iera A, Morabito G (2010) The internet of things: a survey. Comput Netw 54(15):2787–2805

2. Baresi L, Di Nitto E, Ghezzi C (2006) Toward open-world software: issues and challenges. Computer 39(10):36–43

3. Baresi L, Ghezzi C (2010) The disappearing boundary between development-time and run-time. In: Proceedings of the FSE/SDP workshop on future of software engineering research, FoSER'10. ACM, New York, pp 17–22

4. Baresi L, Guinea S (2011) A3: self-adaptation capabilities through groups and coordination. In: Proceedings of the 4th annual India software engineering conference. ACM, New York, pp 11–20

5. Baresi L, Guinea S (2011) Self-supervising BPEL processes. IEEE Trans Softw Eng 37(2):247–263

6. Belady LA, Lehman MM (1976) A model of large program development. IBM Syst J 15(3):225–252

7. Caporuscio M, Funaro M, Ghezzi C (2010) Architectural issues of adaptive pervasive systems. In: Graph transformations and model-driven engineering—essays dedicated to Manfred Nagl on the occasion of his 65th birthday. Lecture notes in computer science, vol 5765. Springer, Berlin, pp 492–511

8. Cavallaro L, Di Nitto E, Furia CA, Pradella M (2010) A tile-based approach for self-assembling service compositions. In: 15th IEEE international conference on engineering of complex computer systems. IEEE Comput Soc, Los Alamitos, pp 43–52

9. Clements P, Northrop L (2001) Software product lines: practices and patterns. Addison-Wesley, Boston

10. Cheng BHC, de Lemos R, Giese H, Inverardi P, Magee J (eds) (2009) Software engineering for self-adaptive systems. Lecture notes in computer science, vol 5525. Springer, Berlin. ISBN 978-3-642-02160-2

11. Di Nitto E, Dubois DJ, Mirandola R (2009) On exploiting decentralized bio-inspired self-organization algorithms to develop real systems. In: The 2009 ICSE workshop on software engineering for adaptive and self-managing systems, SEAMS 2009. IEEE Press, New York, pp 68–75

12. Drago M, Ghezzi C, Mirandola R (2011) Towards quality driven exploration of model transformation spaces. In: Whittle J, Clark T, Kühne T (eds) Model driven engineering languages and systems. Lecture notes in computer science, vol 6981. Springer, Berlin, pp 2–16

13. Drago ML, Ghezzi C, Mirandola R (2011) QVTR2: a rational and performance-aware extension to the relations language. In: Proceedings of the 2010 international conference on models in software engineering, MODELS'10. Springer, Berlin, p 328

14. Driscoll D, Mensch A (eds) (2009) Devices profile for web services (DPWS). Technical report, OASIS

15. Epifani I, Ghezzi C, Mirandola R, Tamburrelli G (2009) Model evolution by run-time adaptation. In: Proceedings of the 31st international conference on software engineering. IEEE Comput Soc, Los Alamitos, pp 111–121

16. Filieri A, Ghezzi C, Leva A, Maggio M (2011) Self-adaptive software meets control theory: a preliminary approach supporting reliability requirements. In: 26th IEEE/ACM international conference on automated software engineering (ASE), pp 283–292

17. Filieri A, Ghezzi C, Leva A, Maggio M (2012, to appear) Reliability-driven dynamic binding via feedback control. In: SEAMS'12. doi:10.1109/SEAMS.2012.6224390

18. Filieri A, Ghezzi C, Tamburrelli G (2011) Run-time efficient probabilistic model checking. In: Proceedings of the 33rd international conference on software engineering

19. Gallotti S, Ghezzi C, Mirandola R, Tamburrelli G (2008) Quality prediction of service compositions through probabilistic model checking. In: Quality of software architectures. Models and architectures. Lecture notes in computer science, vol 5281. Springer, Berlin, pp 119–134

20. Ghezzi C, Mocci A, Monga M (2009) Synthesizing intensional behavior models by graph transformation. In: Proceedings of the 31st international conference on software engineering, ICSE'09. IEEE Comput Soc, Los Alamitos, pp 430–440

21. Ghezzi C, Mocci A, Salvaneschi G (2010) Automatic cross validation of multiple specifications: a case study. In: Rosenblum D, Taentzer G (eds) Fundamental approaches to software engineering. Lecture notes in computer science, vol 6013. Springer, Berlin, pp 233–247. doi:10.1007/978-3-642-12029-9

22. Ghezzi C, Pradella M, Salvaneschi G (2010) Programming language support to context-aware adaptation: a case study with Erlang. In: Proceedings of the 2010 ICSE workshop on software engineering for adaptive and self-managing systems, SEAMS'10. ACM, New York, pp 59–68

23. Ghezzi C, Tamburrelli G (2009) Reasoning on non-functional requirements for integrated services. In: Proceedings of the 17th international requirements engineering conference. IEEE Comput Soc, Los Alamitos, pp 69–78

24. Hallsteinsen S, Hinchey M, Park S, Schmid K (2008) Dynamic software product lines. Computer 41(4):93–95

25. Hinton A, Kwiatkowska M, Norman G, Parker D (2006) PRISM: a tool for automatic verification of probabilistic systems. In: Proc 12th international conference on tools and algorithms for the construction and analysis of systems (TACAS'06), vol 3920, pp 441–444

26. Hirschfeld R, Costanza P, Nierstrasz O (2008) Context-oriented programming. J Object Technol 7(3):125–151

27. Jackson M, Zave P (1995) Deriving specifications from requirements: an example. In: ICSE'95: proceedings of the 17th international conference on software engineering. ACM, New York, pp 15–24

28. Jones C (1983) Tentative steps toward a development method for interfering programs. ACM Trans Program Lang Syst. doi:10.1145/69575.69577

29. Kwiatkowska M, Norman G, Parker D (2004) PRISM 2.0: a tool for probabilistic model checking. In: Proceedings of the first international conference on the quantitative evaluation of systems, QEST 2004, pp 322–323

30. Lehman MM, Belady LA (eds) (1985) Program evolution: processes of software change. Academic Press, San Diego

31. Ma X, Baresi L, Ghezzi C, Manna VPL, Lu J (2011) Version-consistent dynamic reconfiguration of component-based distributed systems. In: ESEC/FSE'11: the 19th ACM SIGSOFT symposium on the foundations of software engineering and the 13th European software engineering conference. ACM, New York, pp 245–255

32. Margara A, Cugola G (2011) In: Proceedings of the fifth ACM international conference on distributed event-based systems. ACM, New York, pp 183–194

33. Mottola L, Picco GP (2011) Programming wireless sensor networks: fundamental concepts and state of the art. ACM Comput Surv V(3):1–51

34. Parnas DL (1972) On the criteria to be used in decomposing systems into modules. Commun ACM 15(12):1053–1058

35. Pohl K, Böckle G, van der Linden FJ (2005) Software product line engineering: foundations, principles and techniques. Springer, Heidelberg

36. Rushby J (2002) An overview of formal verification for the time-triggered architecture. In: Damm W, Olderog E (eds) Formal techniques in real-time and fault-tolerant systems. Lecture notes in computer science, vol 2469. Springer, Berlin, pp 83–105

37. Sales Pinto L (2011) A declarative approach to enable flexible and dynamic service compositions. In: Proceedings of the 33rd international conference on software engineering. ACM, New York, pp 1130–1131

38. Salvaneschi G, Ghezzi C, Pradella M (2012) Context-oriented programming: a software engineering perspective. J Syst Softw. doi:10.1016/j.jss.2012.03.024

39. Zave P, Jackson M (1997) Four dark corners of requirements engineering. ACM Trans Softw Eng Methodol 6(1):1–30


Luciano Baresi is associate professor at Politecnico di Milano and the current chair of the IFIP working group (2.14/6.12/8.10) on services-oriented computing. He regularly serves as a program committee member of important conferences related to service-oriented computing, web-based systems, and software engineering. He was program chair of ICSOC'09 (International Conference on Service-Oriented Computing) and SEAMS'12 (Symposium on Software Engineering for Adaptive and Self-Managing Systems), and he will be the chair of ESEC/FSE'13 (Joint European Software Engineering Conference and ACM SIGSOFT Symposium on the Foundations of Software Engineering). Baresi is currently a member of the editorial boards of the Transactions on Autonomous and Adaptive Systems (ACM) and of Service Oriented Computing and Applications (Springer). His research interests touch different aspects of software engineering, specifically those related to engineering service-based applications and mobile and ubiquitous systems. He has co-authored some 120 papers.

Carlo Ghezzi is an ACM Fellow, an IEEE Fellow, a member of the Italian Academy of Sciences, and a recipient of the ACM SIGSOFT Distinguished Service Award. He is the current President of Informatics Europe. He is a regular member of the program committees of flagship conferences in the software engineering field, such as ICSE and ESEC/FSE, for which he also served as Program and General Chair. He was also General Co-Chair of the International Conference on Service Oriented Computing. Ghezzi has been Editor in Chief of the ACM Transactions on Software Engineering and Methodology and is currently an Associate Editor of the Communications of the ACM, IEEE Transactions on Software Engineering, Science of Computer Programming, Computing, and Service Oriented Computing and Applications. Ghezzi's research has mostly focused on different aspects of software engineering. He has co-authored over 200 papers and 8 books. He has coordinated several national and international research projects.

