
From Software Systems to Complex Software Ecosystems: Model- and Constraint-Based Engineering of Ecosystems

Andreas Rausch, Christian Bartelt, Sebastian Herold, Holger Klus, Dirk Niebuhr

Chair of Software Systems Engineering: Department of Informatics

Clausthal University of Technology

38670 Clausthal-Zellerfeld, Germany

{andreas.rausch|christian.bartelt|sebastian.herold|holg.klus|dirk.niebuhr}@tu-clausthal.de

Abstract. Software is not self-supporting: it is executed by hardware and interacts with its environment. So-called software systems are complicated hierarchical systems, carefully engineered by competent engineers. In contrast, complex systems, like biological ecosystems, railway systems, and the Internet itself, have never been developed and tested as a whole by a team of engineers. Nevertheless, those complex systems have the ability to evolve without explicit control by anyone, and they are more robust in dealing with problems at the level of their constituent elements than classically engineered systems. Consequently, in this article we introduce the concept of complex software ecosystems comprising interacting adaptive software systems and human beings. Ecosystems achieve the demanded flexibility and dependability by means of a kind of higher-level regulatory system, through which equilibrium is continuously preserved by an appropriate balance between the self-adaptation and self-control capabilities of the ecosystem's participants. We outline a methodology to support the engineering of ecosystems by integrating a model- and constraint-based engineering approach and applying it during both design time and run time. The open-world semantics of constraints set up a frame for the behavior of the participants and the ecosystem itself. Violations of constraints can be identified during design time, but the constraints also provide a knowledge transfer to run time, where they are additionally monitored and enforced. Thus, we propose an evolutionary engineering approach covering the whole life-cycle of forever-active complex software ecosystems.


Can Complex Software Systems be Engineered?

Software does not stand by itself. It is executed on physical machines and communication channels (hardware) and it interacts with its environment, which includes humans and other software packages. So you might see software as the meat in a hamburger: the bottom of the burger bun is the hardware; the top is the environment.

Based on that analogy, software engineering research might be understood as the research discipline that devotes itself to improving the process for producing perfect hamburger meat. Clearly, that alone is not enough to come up with a better burger. Rather, we have to take the whole system into account, including the hardware executing the software as well as the environment in which the software is embedded. Therefore, we prefer the terms software systems engineering and software systems engineering research.

The continuous increase in the size and functionality of those software systems [3] has now made them among the most complex man-made systems ever devised [4]. As already discussed in [5], everyone would agree that a car, with its millions of components, is an extremely complicated system. The same can be said of the European railway system. Both the car and the railway system are human constructions, but there is clearly a significant difference between them. The complicated car was carefully designed and tested by a team of engineers who placed every component in its place with the utmost precision, and that is why it works. But no one designed the European railway system as a whole, and no one can claim to entirely understand or control it—and yet it works, somehow!

And whilst the car can be improved only through a careful redesign by competent engineers, the European railway system grows and shrinks on its own, without explicit and overriding control by any one specific person. Moreover, the ability of the car to function is highly dependent on the successful operation of every one of its core sub-components, while the efficiency of the European railway system is much more robust to disruptions and failures at the level of each of its constituent elements.


Looking around, one can see many other systems with the same characteristics: communication networks, transportation networks, cities, societies, markets, organisms, insect colonies, ecosystems. These systems have come to be called complex systems, not to be confused with merely very complicated systems such as cars and aircraft carriers. The definition of complex systems in [2], developed by scientists doing research in the area of complexity theory and its descendants, is as follows:

Complex systems are systems that do not have a centralizing authority and are not designed from a known specification, but instead involve disparate stakeholders creating systems that are functional for other purposes and are only brought together in the complex system because the individual "agents" of the system see such cooperation as being beneficial for them. (cited from [2])

The question that we aim to raise and discuss in this paper is as follows: What is it that unites these complex systems, and makes them different from cars and networks? And can something be learned from them that would help us build not only better cars and networks, but also smarter software systems, like safer building infrastructures, more effective disaster response systems, and better planetary probe systems?

From Software Systems to Complex Software Ecosystems

In the developed world, software now pervades all areas of business, industry, and society; just some examples are public administration, management, organization, and production companies. Even day-to-day personal life is no longer conceivable without the use of software, and software-controlled devices can be found in every household.

As already mentioned, the software industry currently faces an inexorable march towards greater complexity—software systems are the most complex man-made systems in existence [4]. The reasons for the steady increase in their complexity are twofold: On the one hand, the set of requirements imposed on software systems (features, depth of functionality, adaptability, and variability, for example) is becoming larger and larger, increasing the extrinsic complexity. On the other hand, the structures of the software systems themselves, e.g., in terms of size, scope, distribution, and networking, are becoming more complex, which leads to an increase in the intrinsic complexity of the system.

The expectations placed upon software systems have been growing, along with their steadily increasing penetration into people's private, social, and professional lives. Users expect:

- A high degree of autonomy, openness, intuitive usability, and timely response to changes both in the software system itself and in the processes for the expected life cycle and demands (flexibility [6]).

- A high degree of reliability of the software system and the surrounding development, operation, and administration processes (dependability [7]).

As an analogy, let us consider the field of classical engineering: A single (even large) building can still be planned, explained, and implemented centrally; however, the planning, design, establishment, and further development of a city need to be performed using very different methods and models.

Similarly, the ever-increasing complexity of software systems and rising user expectations have led to a situation where the classical methods and techniques of software systems engineering have reached their limits. In the long run, the mechanisms required in software systems engineering to develop and to control software systems are also facing a paradigm shift. To respond to this challenge, we use this paper to put forward the proposal that software systems be interpreted as parts of larger complex software ecosystems, thus taking a first step in the direction of the necessary paradigm shift.


Fig. 1. Structure and equilibrium concepts of complex software ecosystems.

Complex software ecosystems are complex adaptive systems of adaptive systems and human beings (see the outer ring in Figure 1)—i.e., complex, compound systems consisting of interacting individual adaptive systems, which are adaptive as a whole, based on engineered adaptability. This means that not every large or complicated system can be considered a complex software ecosystem: The complexity of the interaction between the elements of the complex software ecosystem and the resulting adaptability is one essential characteristic. The different life cycles of the individual adaptive systems must also be taken into consideration.

This is an important difference from the traditional understanding of hierarchical systems: A hierarchical system consists of subsystems whose interactions tend to be globally predictable, controllable, and designable. A complex software ecosystem comprises individual adaptive systems whose behavior and interactions change over time. These changes are usually not centrally planned, but arise from independent processes and decisions within and outside the complex software ecosystem.

Note: Figure 1 is an updated version of the corresponding figure in [1].

In addition, complex software ecosystems are mixed human-machine artifacts: human beings in the complex software ecosystem (see the outer ring in Figure 1) interact with the individual systems, and in this way they become an integral, active part of the complex software ecosystem. Therefore, human requirements, goals, and behavior must be considered when designing a complex software ecosystem, by modeling them as active system components. Humans act as users, administrators, and operators within the ecosystem. The very complex and multifaceted interaction and relationship between people and the individual systems of a complex software ecosystem is a further key characteristic. Only by including this aspect can a holistic approach be taken. The requirements, needs, and expectations of humans in the individual systems of a complex software ecosystem are subject to special dynamics and forms of interaction. Thus, the individual systems need to be able to change continuously to meet the changing demands and adapt to the changing behavior of humans. By the same token, the changing expectations of humans will create new demands on the ecosystem.

In analogy to a biological ecosystem, complex software ecosystems achieve flexibility and dependability by means of a kind of higher-level regulatory system, through which equilibrium is maintained between the forces applied by each of the participating individuals. The equilibrium of an ecosystem is based on the following three concepts, as shown in the inner ring of Figure 1:

- Common objectives: Communities of adaptive systems and human beings form themselves dynamically. An essential feature of these communities is their common and jointly accepted objectives. Individual participants can be members of several communities simultaneously. These communities may change or dissolve over time, and new ones may be created. This is part of the adaptation in the complex software ecosystem.

- Organizing structures: Structures required for organizing and implementing the common objectives of the community form dynamically. These structures define roles, responsibilities, communication channels, and interaction mechanisms in the communities. Like the communities themselves, organizational structures can also change, thus leading to an adaption of the structures in the complex software ecosystem.

- Ensuring constraints: Commonly accepted constraints govern the behavior and interactions of communities and their organizational structures. Control within complex software ecosystems—in the sense of ensuring adherence to these constraints—can be realized by different means. These mechanisms can be explicit, e.g., centralized or federated via dedicated components, or implicit—for example, realized by market mechanisms or by local incentive and preference structures of participants to achieve a specific behavior in the system. Another promising approach to enforcing these constraints is electronic institutions [9].

Can We Engineer Those Complex Software Ecosystems?

A key aspect of complex software ecosystems is the establishment of an equilibrium between the forces applied by the participating individuals. The equilibrium is continuously preserved through an appropriate balance between self-adaptation on the one hand and self-control on the other. When this equilibrium is disturbed, the complex software ecosystem breaks down and is no longer manageable. For a complex software ecosystem to remain active and continuously evolve, we must understand this equilibrium and the mechanisms necessary to achieve and preserve it.

A complex software ecosystem is made up of a set of individual adaptive systems interacting with each other. Apart from these interactions, the individual systems are considered closed systems that can be created with the classical methods of software systems engineering. However, in the process of designing them, adaptivity, evolution, and autonomy must be taken into consideration. The individual systems themselves may consist of subsystems or components, which may be used as sensors, actuators, or the interface to a physical environment.

The compound ecosystem as a whole can no longer be described and controlled by using classical methods, for reasons of complexity. In addition to the complexity caused by the size of the compound ecosystem and its adaptability resulting from the adaptivity of individual systems and their different life cycles, human beings are also considered a part of the complex software ecosystem. The resulting complex software ecosystem can be described and understood only by taking a holistic view. This is a necessary condition for the controllability of the overall ecosystem. However, this holistic approach leads to a very complex ecosystem with a high degree of adaptability, which in turn makes it difficult to control.

This leads us to a dilemma: In order to control the system, we need to treat it holistically, but doing so increases the degree of adaptability, which in turn reduces controllability. To solve this dilemma, we must turn to the notions of self-adaptation and self-control in the complex software ecosystem.

We distinguish three levels of adaption in a complex software ecosystem (see Figure 2). It should be noted that the higher the degree of adaptability, the more the human is involved in this adaption:

- By engineered adaptability we are referring to the property of the individual adaptive systems participating within an ecosystem to reconfigure and reorganize themselves at runtime in order to fulfill context-sensitive tasks in the ecosystem. Adaptation is therefore the pre-planned, engineered capability of the individual adaptive systems and their components to adapt themselves and their interaction with the environment. Here, the focus of the adaptability is primarily on the functional and quality properties of the individual adaptive systems that are part of the ecosystem. Adaptation is often achieved by modifying component configurations—parameters are set, and this alters the functional behavior of system components.

- By emergent adaptability we understand the ability of a complex software ecosystem to provide emergent behavior by modification of the interaction structure between the participating systems and human beings. Complex software ecosystems are open and dynamic systems: new participants may enter the ecosystem, sometimes with an unknown interface, structure, and behavior. Already known participants may change their behavior (e.g., by engineered adaption) or leave the complex software ecosystem. Thus, emergent adaptability is in line with concepts in autonomic computing [8] and organic computing [9], respectively. Hence, emergent adaptability is grounded in a decentralized cooperation of individual systems and human beings within the ecosystem. In contrast to engineered adaptability, emergent adaptability is achieved by the autonomy of swarm organization. Such cooperation is not pre-determined and not explicitly designed. Rather, it follows from the interaction structure of the participants.

- Evolutionary adaptability is the ability of a complex software ecosystem to evolve under changing conditions in the medium to long term, and to sustainably reveal adaptive behavior. It includes the fundamental long-term development of the complex software ecosystem in all its aspects, in particular through change and adaptation of monitoring, configuration, and control mechanisms, including structural and functional aspects. Evolution incorporates the capacity to evolve the individual adaptive systems within the ecosystem as well as the interaction constraints between them. Implementing evolution as a manual, computer-supported, or (partially) automated further development of the complex software ecosystem poses the biggest challenge with respect to long-term control. Evolution will be triggered by sustainable changes in environmental conditions or by fundamental changes in the expectations of users and operators of the complex software ecosystem. It can be driven by human operators and users, but it also needs to be partly or fully automated in some cases. Evolution can mean either that the management, control, and regulatory mechanisms are altered, or that individual components or entire systems are replaced or modified.
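To make the lowest of these levels, engineered adaptability, more concrete, the following sketch shows a component that adapts its functional behavior purely by switching among configurations foreseen at design time. All names (SensorFusion, set_config, the parameter values) are our own illustration, not taken from the paper.

```python
# Minimal sketch of engineered adaptability: a component exposes a
# pre-planned set of configurations, and run-time adaptation means
# selecting among them; unplanned configurations are rejected.

class SensorFusion:
    """An adaptive component with an engineered configuration space."""

    # The configurations the engineers planned for in advance.
    ALLOWED = {
        "low_power":     {"sample_hz": 1,   "smoothing": 0.9},
        "normal":        {"sample_hz": 10,  "smoothing": 0.5},
        "high_accuracy": {"sample_hz": 100, "smoothing": 0.1},
    }

    def __init__(self):
        self.config = dict(self.ALLOWED["normal"])

    def set_config(self, name: str) -> None:
        # Adaptation stays within the pre-planned space.
        if name not in self.ALLOWED:
            raise ValueError(f"unplanned configuration: {name}")
        self.config = dict(self.ALLOWED[name])

    def adapt_to_battery(self, battery_percent: int) -> str:
        """Context-sensitive self-reconfiguration at run time."""
        name = "low_power" if battery_percent < 20 else "normal"
        self.set_config(name)
        return name

fusion = SensorFusion()
mode = fusion.adapt_to_battery(battery_percent=15)  # reconfigures itself
```

Setting parameters like this alters the functional behavior of the component without changing its structure, which is exactly what distinguishes engineered adaptability from the emergent and evolutionary levels.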

Fig. 2. Appropriate balance between self-adaptation and self-control.

Note: Figure 2 is an updated version of the corresponding figure in [1].


These three levels of self-adaption have to be supported by all of the participants in the complex software ecosystem. However, at the same time, care must be taken that the complex software ecosystem as a whole remains under control and thus ensures its superordinate goals and functions. For this to be achieved, the participating adaptive systems and humans also have to support three levels of self-control capabilities (see Figure 2):

- By local constraints we are referring to the individual assumptions and guarantees of the ecosystem's participants—the adaptive systems and human beings. In the case of the adaptive systems, the local constraints are designed as a self-contained part of the systems, for instance the assumption that a specific database is available within the ecosystem and the guarantee that the database will only be accessed read-only. Moreover, local constraints may define restrictions on the engineered adaptivity capabilities of the adaptive system. Human beings naturally bring their own local constraints into an ecosystem, for instance usage profiles or security demands.

- Institutional constraints are, on the one hand, ecosystem-wide constraints that all participants have to obey, for example common traffic rules—stop if a traffic light is red. On the other hand, communities in ecosystems share common objectives and organizational structures (see Figure 1); in order to define and enforce their objectives and structures, institutional constraints can be formulated. Such an institutional constraint could be: all ecosystem participants who wish to use electronic payment have to provide their unique ID.

- By improvement constraints we are describing the constraints guiding the ecosystem's own evolution. Those constraints regulate the process of raising new requirements for the ecosystem or its participants and define how to react to a disturbed equilibrium within the ecosystem. A high number of new requirements, or frequent trouble with the equilibrium, will force further development of individual participants of the ecosystem or of the self-adaption and self-control capabilities of the ecosystem itself.
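One way to picture these three kinds of self-control constraints is as declarative rules tagged with their scope and evaluated against a participant's state. The following sketch is purely illustrative: the names, the example rules, and the state representation are our assumptions, not part of the approach's formalism.

```python
# Sketch: local, institutional, and improvement constraints as tagged,
# declarative rules over a participant's state. All rules are examples
# echoing the text (read-only database, e-payment ID, evolution trigger).

from dataclasses import dataclass
from typing import Callable

@dataclass
class Constraint:
    scope: str                      # "local" | "institutional" | "improvement"
    description: str
    holds: Callable[[dict], bool]   # predicate over a participant's state

constraints = [
    # Local: an individual system's own assumption/guarantee.
    Constraint("local", "database is accessed read-only",
               lambda s: s.get("db_access") == "read-only"),
    # Institutional: an ecosystem-wide rule all participants obey.
    Constraint("institutional", "electronic payment requires a unique ID",
               lambda s: not s.get("uses_epayment") or s.get("unique_id") is not None),
    # Improvement: a rule about the evolution process itself.
    Constraint("improvement", "open change requests stay below threshold",
               lambda s: s.get("open_change_requests", 0) < 100),
]

def violations(state: dict) -> list:
    """Collect the descriptions of all violated constraints."""
    return [c.description for c in constraints if not c.holds(state)]

participant = {"db_access": "read-only", "uses_epayment": True,
               "unique_id": None, "open_change_requests": 3}
print(violations(participant))  # the e-payment rule is violated
```

The uniform representation is the point: the same checking machinery can evaluate all three scopes, while the scope tag tells the ecosystem whether a violation concerns a single system, the whole community, or the evolution process.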

An appropriate balance between these self-control capabilities and the aforementioned self-adaption capabilities of all participants of the ecosystem guarantees permanently established equilibrium states. Thereby, we achieve the goal of providing desirable flexibility, whilst at the same time ensuring dependability.


If the self-adaption capabilities of the ecosystem's participants prevail, there is a risk that the ecosystem evolves in an uncontrolled manner and direction. Consequently, the commonly shared objectives, organizing structures, and ensuring constraints will be lost—the equilibrium is disturbed. Self-control functions will be automatically activated to re-regulate the ecosystem. If, conversely, the self-control functions prevail, we face an ecosystem standstill: the ecosystem is no longer attractive for its participants and might die off. Hence, self-adaption functions will be activated.

The concept of equilibrium in complex software ecosystems enables us to provide mechanisms for control, monitoring, and regulation, and to ensure constraint compliance via electronic institutions. If these constraints are violated, the self-adaptation mechanisms provided by the ecosystem and its participants can re-establish the equilibrium. Based on these mechanisms, equilibrium concepts are defined, and approaches to the detection, prevention, and treatment of disorders in the complex software ecosystem are described and implemented.

Fig. 3. Ensuring equilibrium states in complex software ecosystems.

Note: Figure 3 is an updated version of the corresponding figure in [1].


Combining Closed-World Models and Open-World Constraints towards a Joint Development Approach

Complex software ecosystems contain adaptive software systems and human beings interacting with each other. The constituent parts are independent of each other in terms of functionality and management, are developed evolutionarily, and show adaptive behavior. These properties distinguish complex software ecosystems from traditional software systems and mean that traditional software systems engineering approaches are no longer sufficient.

Traditional software development approaches offer various techniques to support software engineers. One of the most fundamental is the use of models and modeling. Depending on what is considered relevant to a system under development at any given point, various modeling concepts and notations may be used to highlight one or more particular perspectives or views of that system. It is often necessary to convert between different views at an equivalent level of abstraction; this is facilitated by model transformation, e.g., between a structural view and a behavioral view. In other cases, a transformation converts models offering a particular perspective between levels of abstraction, usually from a more abstract to a less abstract view, by adding detail supplied by the transformation rules. These ideas of models, modeling, and model transformation are the basis for a set of software development approaches that are well known as model-driven development (MDD). Model-driven architecture (MDA) is a style of MDD that is well established in research and industry. Four principles underlie the OMG's view of MDA:

- Models expressed in a well-defined notation are a cornerstone to the understanding of systems for enterprise-scale solutions.

- The building of systems can be organized around a set of models by imposing a series of transformations between models, organized into an architectural framework of layers and transformations.

- A formal underpinning for describing models in a set of metamodels facilitates meaningful integration and transformation among models, and is the basis for automation through tools.

- Acceptance and broad adoption of this model-based approach requires industry standards to provide openness to consumers and foster competition among vendors.
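The idea of a transformation that refines an abstract model into a more detailed one by adding rule-supplied detail can be sketched as follows. This is a toy example with invented names, not a real MDA toolchain such as QVT or ATL.

```python
# Toy model-to-model transformation: refine an abstract (platform-
# independent) entity model into a more detailed relational model.
# The added surrogate primary key is detail supplied by the rule,
# not present in the input model.

abstract_model = {
    "entities": [
        {"name": "Customer", "attributes": ["name", "email"]},
        {"name": "Order", "attributes": ["date", "total"]},
    ]
}

def refine_to_tables(model: dict) -> dict:
    """Transformation rule: every entity becomes a table; the rule adds
    a pluralized table name and a surrogate primary key column."""
    tables = []
    for entity in model["entities"]:
        tables.append({
            "table": entity["name"].lower() + "s",
            "columns": ["id (primary key)"] + entity["attributes"],
        })
    return {"tables": tables}

detailed_model = refine_to_tables(abstract_model)
```

However small, the example shows the essential asymmetry: the detailed model contains information (table names, key columns) that no view of the abstract model contains, because the transformation rules carry it.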


Independent of the specific model-based development approach, they all share a common property: at the end of the day, the goal of models and modeling in model-driven development is to drill down to system construction [10]. Consequently, models use the closed-world assumption [11]. A closed-world model directly represents the system under study, meaning that there is a functional relation between the language expressions and the modeled world. Even when modeling is used to create a conceptual model of a domain, the represented knowledge is implicitly viewed as being complete. Note that Reiter [12] distinguishes two kinds of world assumptions: the closed-world assumption (CWA) and the open-world assumption (OWA). These two different assumptions have fundamental implications for modeling practice [13].
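The practical difference between the two assumptions can be shown on a toy knowledge base. The example is ours (not Reiter's): under the CWA, a fact absent from the model is false; under the OWA, it is merely unknown.

```python
# Closed-world vs. open-world assumption over the same set of facts.
# Under CWA, anything not stated is false; under OWA it is unknown.

facts = {("connects", "SystemA", "SystemB")}

def query_cwa(fact) -> bool:
    # Closed world: the model is complete, so absence means falsehood.
    return fact in facts

def query_owa(fact):
    # Open world: absence only means the fact is not known (None).
    return True if fact in facts else None

unknown_fact = ("connects", "SystemA", "SystemC")
print(query_cwa(unknown_fact))  # False: CWA concludes "no connection"
print(query_owa(unknown_fact))  # None: OWA leaves the question open
```

This is why open-world constraints suit an ecosystem whose participants are not all known at design time: a constraint must not treat an unmodeled participant or interaction as nonexistent.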

With the increasing scale and complexity of software systems, the corresponding models need not grow automatically, thanks to the abstraction mechanisms available in modeling. However, there is usually a trade-off between the accuracy of the results and the complexity of the model. The more complex the model, the harder it is to understand, and often the time taken for analysis and transformation increases. The less complex the model, the easier it is to understand and the more efficient it is to evaluate; however, the results may lose their relevance to the real system if too many important details are abstracted away.

Due to scale and complexity, engineers in other disciplines, such as airplane engineering, use a set of different types of models: mechanical models, thermal models, or aviation models. Each model is based on a closed-world assumption, but no closed-world model exists that describes the whole airplane. Such a model would be either too complex to handle or irrelevant due to the required level of abstraction. Although interdependencies between the partial, closed-world models do exist, they are not explicitly modeled. Instead, they are managed through the surrounding engineering process.

The increasing scale and complexity of software systems on the way towards complex software ecosystems will lead us to the same trade-off. A complete closed-world model of a large complex software ecosystem is out of scope. Therefore, we are nowadays faced with a set of closed-world models describing subsystems of the overall complex software ecosystem. Moreover, these subsystems have their own independent life cycles; thus the corresponding models are developed and evolved independently of one another.

Nevertheless, engineers of the overall complex software ecosystem have to take the interdependencies between the models of the subsystems into account. As described in the previous section, software development for complex software ecosystems has to be aware of constraints for ensuring adaptability and controllability. Hence, a constraint-based approach based on an open-world assumption is appropriate.

For that reason, we believe that a joint engineering approach combines the best of the two modeling worlds: a model-based approach based on a closed-world assumption for partial system modeling, and a constraint-based approach based on an open-world assumption for modeling the relevant overall system properties.

A Model- and Constraint-Based Engineering Approach for Complex Software Ecosystems

The approach combines model-based development with constraint-based development as an efficient way to establish and evolve complex software ecosystems.

As already mentioned, a complex software ecosystem consists of a set of individual adaptive systems. For each individual adaptive system, a model or a set of models, based on closed-world semantics, is developed, as illustrated on the right side of Figure 4.

To restrict the individual adaptive behavior of the adaptive systems, local constraints might be added (cf. the middle column in Figure 4). In addition, institutional and improvement constraints, coming from an ecosystem's communities or covering common ecosystem objectives and guidelines, are added, following open-world semantics.

All of these constraints can be used to validate the individual models of the adaptive systems, but also the union of the models of all adaptive systems, i.e., the ecosystem's models. Therefore, constraint violations of individual systems, but also of the interaction between these adaptive systems, can be identified during design time.
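A design-time check of this kind can be sketched as follows: each individual model satisfies the constraints on its own, while the union of the models violates one of them. The models, the merge rule, and both constraints are invented for illustration.

```python
# Sketch: validating individual models and their union against
# open-world constraints at design time. Some violations only become
# visible in the union of models. All names are illustrative.

model_a = {"provides": ["temperature"], "writes_db": False}
model_b = {"provides": ["heating"], "writes_db": True}

def union(models):
    """Merge the subsystem models into one ecosystem-level view."""
    return {
        "provides": [p for m in models for p in m["provides"]],
        "writers": sum(1 for m in models if m["writes_db"]),
    }

def check_single_writer(merged: dict) -> bool:
    # Example institutional constraint: at most one database writer.
    return merged["writers"] <= 1

def check_no_control_loop(merged: dict) -> bool:
    # Example interaction constraint that only the union can violate:
    # sensing and actuating the same quantity must not be combined
    # without an explicit controller.
    return not ({"temperature", "heating"} <= set(merged["provides"]))

merged = union([model_a, model_b])
print(check_single_writer(merged))   # True: each model alone is fine
print(check_no_control_loop(merged)) # False: the interaction violates it
```

The second check is the interesting one: neither closed-world model is wrong in isolation; the violation exists only in the interaction between them, which is exactly what the open-world constraints are there to catch.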


However, as changes in these systems are not centrally planned, but arise from independent processes and decisions, adaptivity cannot be completely controlled during design time. Consequently, we also have to take the run time into account, and here the constraints can be used again: they provide a knowledge transfer between design time and run time, and are additionally monitored and enforced during run time. Therefore, constraints can be used during design time as well as during run time (see Figure 4).

Fig. 4. Model- and constraint-based engineering approach for complex software ecosystems.

Firstly, our concept of constraints has to be defined. Constraints are used to express undesired behavior or situations of complex software ecosystems and their participants, and to define actions which describe how to react to them. The constraints defined in this approach thus set up the frame for the behavior of single systems in a complex software ecosystem and of the ecosystem itself. Such constraints can be defined at the usual levels of development: requirements elicitation and validation, architectural design, and component implementation. Constraints can be used for verification and validation at design time as well as at run time.

A constraint consists of various properties and represents crosscutting concerns on the complex software ecosystem. Thus, a uniform formalism is used by requirements engineers, software architects, and component designers, improving their communication as well as the documentation of the considered system. We further distinguish between different kinds of constraints, such as regulation and validation constraints. Regulation constraints are used to actively interact with subsystems of the complex software ecosystem in order to keep it in a useful state. Validation constraints are applied to passively observe the system and log constraint violations that need to be handled manually by a control instance, e.g. a domain expert.
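The distinction between regulation and validation constraints can be illustrated by a minimal sketch. All names here are invented for illustration; the authors' formalism is not specified in code form.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

# Minimal sketch (illustrative names, not the authors' API): a
# constraint pairs a predicate over observed facts with a reaction.
@dataclass
class Constraint:
    name: str
    kind: str                                    # "regulation" or "validation"
    predicate: Callable[[Dict[str, Any]], bool]  # True means satisfied
    react: Callable[[Dict[str, Any]], None]      # triggered on violation

def check(constraint: Constraint, facts: Dict[str, Any]) -> bool:
    """Evaluate the constraint; delegate to its reaction on violation."""
    if constraint.predicate(facts):
        return True
    constraint.react(facts)
    return False

violation_log = []

# A validation constraint passively logs violations for manual handling;
# a regulation constraint would instead actively reconfigure a subsystem.
response_time = Constraint(
    name="response-under-8-min",
    kind="validation",
    predicate=lambda f: f["response_minutes"] < 8,
    react=lambda f: violation_log.append(("response-under-8-min", f)),
)

check(response_time, {"response_minutes": 12})  # violated -> logged
```

A regulation constraint would differ only in its `react` callable, which would issue reconfiguration commands instead of appending to a log.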

This concept of constraints is now integrated with model-based development. The model- and constraint-based development approach we propose can be thought of as an adaptive but controlled improvement life-cycle for forever-young complex software ecosystems [14, 15]. This leads us to an iterative improvement process triggered by end-user feedback, as illustrated in Figure 4.

In order to become widely accepted and used, a software system needs to fulfill the end-users' needs. To accomplish this task in complex software ecosystems, we use end-user feedback and experience to derive new requirements and corresponding constraints. We suggest gathering end-user feedback in situ, at very low effort and cost, using off-the-shelf mobile devices (Step 1 in Figure 4). Due to the distributed nature of complex software ecosystems, feedback must be forwarded to the responsible addressee to be considered for further development. Analyzing feedback clarifies whether existing subsystems should be changed or whether the feedback demands new requirements or constraints. Moreover, a feedback walkthrough facilitates the identification of problems, new ideas, and affected requirements and constraints by the analyst in charge.

Using the feedback received, parallel lines of development start (Step 2 in Figure 4): In the left column, the relevant systems have to be identified, at first based on feedback. For each identified single system, individual development is started on the basis of the relevant user feedback, using arbitrary model-driven development approaches. This means that abstract models are transformed and refined to create detailed models, finally leading to executable systems or parts of systems. Common model-driven development techniques, including existing modeling languages (e.g. UML, BPMN), model transformations (e.g. ATLAS), and frameworks (e.g. EMF), are used.


In the middle column, our concept of constraints, as introduced above, is used. To derive constraints for the affected requirements, the received feedback that can be assigned to a particular domain concept is used. By assembling constraints in topic groups, the complexity of ecosystems is decreased, and modularized development becomes feasible. Moreover, constraints which apply to different levels of abstraction are aggregated hierarchically. This means refining requirements constraints down to architectural constraints and component-specific constraints, in order to verify development artifacts against them during design time as well as to generate deployable monitoring code for run time.

Moreover, constraint refinement supports readability and consistency checking, since traceability is assured. If the compositionality of constraints is guaranteed, lower-level constraints can be verified to ensure overall compliance with higher-level constraints. The overhead of decomposing constraints to lower levels is compensated when parts of the system are modified, since only the modified part has to be re-verified against its corresponding constraints.
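The hierarchical aggregation described above can be sketched as a simple refinement tree. The structure and names below are assumptions made for illustration, not the authors' concrete data model.

```python
# Illustrative sketch (assumed structure): a requirements-level
# constraint refined into architectural and component-level constraints,
# so that each level can be verified separately while traceability holds.
requirement = {
    "name": "respond-under-8-min", "level": "requirement",
    "refined_by": [
        {"name": "dispatch-under-2-min", "level": "architecture",
         "refined_by": []},
        {"name": "route-under-6-min", "level": "component",
         "refined_by": []},
    ],
}

def flatten(constraint):
    """Collect a constraint and all its refinements for verification."""
    yield constraint["name"], constraint["level"]
    for child in constraint["refined_by"]:
        yield from flatten(child)

levels = {name: level for name, level in flatten(requirement)}
```

Traceability falls out of the tree: each lower-level constraint can be followed back to the requirement it refines, so only the subtree touched by a modification needs re-verification.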

During execution, constraint monitoring is applied as shown in Figure 4, Step 4. The states of the running systems are continually reported to the run time environment. Should the constraints change, or should the monitoring framework detect a violation, the following escalation strategy is applied:

Firstly, the individual system tries to adapt its functionality to the new situation in order to satisfy all the constraints once again.

Secondly, the overall ecosystem tries to re-arrange the interaction between the individual participants in the ecosystem to meet the user's needs. Note that, in certain situations, the ecosystem cannot know what the best option for the user is. Consequently, the user is informed of possible configurations so that he or she can decide which optimization criteria and which configuration best meet his or her needs.

Finally, if the violation cannot be fixed independently, even after the inclusion of the user, the user is prompted to state his or her needs and a problem report is created. The needs of the user are then evaluated and consolidated, and requirements are derived from them. The consolidated feedback and the derivation of the requirements lead to the next improvement stage, starting again with Step 1. Consequently, in the next evolution step of the ecosystem, the problem should be addressed.
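The three-stage escalation strategy can be sketched as follows. The classes are simplified stand-ins invented for illustration; the real participants are full adaptive systems.

```python
# Sketch of the three-stage escalation strategy; the classes are
# hypothetical stand-ins for real ecosystem participants.
class System:
    def __init__(self, can_adapt): self.can_adapt = can_adapt
    def try_self_adapt(self): return self.can_adapt        # stage 1

class Ecosystem:
    def __init__(self, configs): self.configs = configs
    def propose_configurations(self): return self.configs  # stage 2
    def apply(self, config): return "reconfigured:" + config

class User:
    def choose(self, options): return options[0]           # user decides
    def file_problem_report(self): return "problem-report" # stage 3

def handle_violation(system, ecosystem, user):
    if system.try_self_adapt():                  # 1: local adaptation
        return "resolved-locally"
    options = ecosystem.propose_configurations()
    if options:                                  # 2: re-arrange interaction,
        return ecosystem.apply(user.choose(options))  # user picks an option
    return user.file_problem_report()            # 3: feed back into Step 1
```

Each stage only fires when the previous one fails, mirroring the escalation from self-adaptation through ecosystem re-arrangement to a problem report that re-enters the improvement cycle.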

Constraint Satisfaction by Models during Design time

By using the integrated model- and constraint-based engineering approach during design time, the adaptability of ecosystems can be controlled at the level of single, engineered, adaptable systems, as well as at the level of emergently arising and open interaction structures of the single systems within the ecosystem. To understand the above-explained approach regarding the verification of models based on constraints (cf. Step 2 in Fig. 4), we first introduce our view on the relation between (software) models and the constraint base motivated above.

Models of (software) systems consist in large part of notational elements describing elements of the system and their relationships, structural as well as behavioral ones. For example, a class diagram describing the structure of types and their interdependencies prescribes that these classes and interdependencies have to be present in a system conforming to the model. Such models can mathematically be understood as relational structures consisting of a universe of entities and relations between them. Since we assume that a model – or at least the complete set of available models for a system – describes the system completely (due to the Closed World Assumption), the corresponding relational structure is finite.

In order to express that finite structures have desired properties, we can express such properties as logical formulas, e.g. as first-order logic expressions, that are evaluated on finite structures. A finite structure M satisfying a logical formula C is said to be a model of that formula:

M ⊨ C

Hence, the constraints that we have discussed so far can be understood as first-order logic formulas that are evaluated on the finite structure representing the model(s) of software systems: if, and only if, the finite structure is a model for the formula, the constraint is satisfied in the modeled system. This corresponds to the verification part of Step 2 depicted in Figure 4. It can, of course, only be done at design time if the constraints are available at that time.
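The relation M ⊨ C can be made concrete with a small sketch: a model encoded as a finite relational structure and a constraint as a predicate evaluated over it. The component names and the architectural rule are invented for illustration.

```python
# Sketch (assumed encoding): a model as a finite relational structure --
# a universe of entities plus named relations -- and a constraint as a
# formula evaluated over that structure under the CWA.
model = {
    "universe": {"WebUI", "OrderService", "Database"},
    "depends_on": {("WebUI", "OrderService"), ("OrderService", "Database")},
}

def layering_constraint(structure):
    """Only OrderService may depend directly on the Database
    (a hypothetical architectural rule of the illustrative system)."""
    return all(src == "OrderService"
               for (src, dst) in structure["depends_on"]
               if dst == "Database")

holds = layering_constraint(model)  # the structure is a model of the formula
```

Because the structure is finite and assumed complete, the universally quantified formula can be checked exhaustively, which is exactly what makes design-time verification under the CWA decidable here.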

In the case of constraints related to guarantees in the context of the first level – engineered adaptability – constraints are defined in the design process of the system at hand and can be (partly) derived from models (Step 2 in Fig. 4, "Derivation"). In this case, the satisfaction of constraints must be considered under the Closed World Assumption (CWA). For example, constraints expressing the enforcement of a certain system architecture can be derived from architectural models of the system and can be checked for the detailed design. A detailed research approach concerning the foundations of constraint-based software development was published in [16]. Based on this approach, it can be checked and ensured that constraints are satisfied in a single system, meaning that the desired guarantees expressed by the constraints are fulfilled.

Emergence constraints, which define the collective frame for the emergent adaptability of interacting structures within an ecosystem, have to be defined and adjusted continuously between the system designers of the individual autonomous systems, especially when they change in the case of evolutionary adaptability.

However, the model verification by constraints during the design of a single system differs from the verification of autonomous communities because of the openness of an ecosystem. The validity of more general emergence constraints proven at design time is limited in general. For example, even if it has been proven at design time that two time- or mission-critical, independently developed systems each fulfill a global constraint stating that responses are given in less than 8 minutes (e.g., emergency systems), there is no guarantee that a combination of both will do so. In this case, we have to check whether we can make statements by evaluating the formulas under the Open World Assumption (OWA) – statements that can give hints that constraints might be violated. Therefore, the finite structures (models) of the potentially interacting individual systems must be joined into a single finite structure. Likewise, the sets of logical formulas which represent the emergence constraints must be joined into a set of common domain knowledge. To check the validity of potentially arising emergent structures within an ecosystem during design time, the join of the different (software) system descriptions M1,…,Mn must be a model for the joined constraint set C1,…,Cn under OWA:

M1 ∪ … ∪ Mn ⊨ C1 ∪ … ∪ Cn

The theoretical foundations of this kind of verification method are discussed in [17] regarding the verification of joined models during concurrent software development.
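The join of models and constraints can be sketched as follows. The structures, the shared component, and the fan-in limit are all invented for illustration; the point is that a constraint can hold for each model in isolation yet fail on the join.

```python
# Sketch of checking an emergence constraint on joined models: two
# independently developed finite structures are joined into one, and
# the joined structure is checked against the joined constraint set.
def join(m1, m2):
    return {"universe": m1["universe"] | m2["universe"],
            "depends_on": m1["depends_on"] | m2["depends_on"]}

m1 = {"universe": {"A", "B"}, "depends_on": {("A", "B")}}
m2 = {"universe": {"B", "C"}, "depends_on": {("C", "B")}}

# Hypothetical emergence constraint: shared component B must not serve
# more than two dependents (a resource limit invented for illustration).
def fan_in_ok(m):
    return sum(1 for (_, dst) in m["depends_on"] if dst == "B") <= 2

joined = join(m1, m2)
# Each model satisfies the constraint in isolation; only the join shows
# whether the emergent structure still does.
result = fan_in_ok(m1) and fan_in_ok(m2) and fan_in_ok(joined)
```

Adding a third system that also depends on B would make the joined structure violate the constraint even though every individual model still satisfies it, which is precisely the situation the OWA check is meant to surface.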

In addition, joining such (software) models and constraint sets can be a challenging task in the case of different description languages. This is a common situation because of the independent development of the participating systems within the open ecosystem. However, the approach is applicable if the (software) models and constraint sets are harmonized regarding their languages before the finite structures and sets of logical expressions are joined.

Nevertheless, solely static analyses under OWA at design time are not powerful enough to detect constraint violations of emergent structures within an ecosystem. To support a more extensive validation, dynamic analyses at run time are an effective complement.

Controlled Adaptation during Run time

After a system has been designed, verified and developed, it will be deployed and executed within the ecosystem. In order to be useful over time, systems must be able to adapt themselves to changing needs, goals, requirements or environmental conditions as autonomously as possible. We distinguish three levels of adaptability, namely engineered adaptability, emergent adaptability and evolutionary adaptability, as previously mentioned. During run time, various aspects have to be considered in order to enable and control these kinds of adaptability. In order to discuss these aspects, we use an architectural blueprint of a system which is based on the well-known MAPE-K loop [8]. The blueprint is depicted in Figure 5:


Fig. 5 Architectural blueprint for self-controlled self-adaption in complex software ecosystems

By engineered adaptability we understand the ability of a system to react autonomously during run time to pre-planned events in its environment in order to satisfy certain system-specific constraints. Consider an adaptive navigation system as an example. One constraint could be that the system should perform routing using road maps in combination with speech output in case the user is driving a car. Another constraint could be that it should run in silent mode and use city maps in case the user is walking through a town. The system must be able to react to those events automatically during run time in order to satisfy the constraints at any time.

In our model we distinguish between the application, which realizes the actual business logic, like navigation functionality, and the manager, which is responsible for performing all tasks necessary to satisfy the given constraints.


For this purpose, the manager has to monitor the application and the environment constantly. It stores the collected information as facts in the internal knowledge base. Furthermore, it analyzes those facts and checks whether they satisfy the given constraints. If this is not the case, it tries to generate a reconfiguration plan with the objective of creating a system configuration which conforms to the given requirements. This is achieved, among other means, by modifying components or application configurations, e.g. by setting parameters which alter the functional behavior of system components and applications. In [18] we presented an approach which enables the specification of certain system constraints. This approach is able to recognize the violation of those constraints and provides mechanisms to react autonomously by reconfiguring the system in accordance with the given constraints.
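The manager's monitor-analyze-plan-execute cycle over a knowledge base can be sketched as follows, using the navigation example. The class and constraint names are invented here and do not reflect the framework of [18].

```python
# Hypothetical sketch of the manager's MAPE-K cycle: monitored facts go
# into the knowledge base, analysis checks them against constraints,
# and a plan reconfigures the application on violation.
class Manager:
    def __init__(self, constraints):
        self.knowledge = {}             # K: facts about app and environment
        self.constraints = constraints  # name -> predicate over facts

    def monitor(self, facts):
        self.knowledge.update(facts)    # M: collect facts

    def analyze(self):                  # A: find violated constraints
        return [name for name, pred in self.constraints.items()
                if not pred(self.knowledge)]

    def plan(self, violated):           # P: toy planner -- switch to
        return {"output": "silent"} if violated else {}  # silent mode

    def execute(self, plan, application):
        application.update(plan)        # E: apply the reconfiguration

app_config = {"output": "speech"}
mgr = Manager({"silent-when-walking":
               lambda k: not (k.get("walking") and k.get("output") == "speech")})
mgr.monitor({"walking": True, "output": "speech"})
mgr.execute(mgr.plan(mgr.analyze()), app_config)
```

The cycle is deliberately minimal: a real planner would search over component configurations rather than hard-code a single parameter change.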

During run time, situations may occur which either have not been foreseen during system development or which require the involvement of other systems within the ecosystem in order to satisfy the given constraints. In our model, the interaction between an adaptive system and the ecosystem takes place through the observer and the reporter. They are responsible for monitoring the ecosystem and contacting the internal manager if required. The manager again analyzes the monitored information, creates plans, and may adapt the application behavior or contact other systems through the reporter.

Certain events may not only lead to the adaptation of individual systems, but also to structural modifications of the ecosystem, i.e. of the connections between systems. On the one hand, certain requirements of individual adaptive systems may necessitate an adaptation of the overall ecosystem. On the other hand, the ecosystem must be able to integrate new adaptive systems automatically during run time and has to be able to deal with the failure or behavioral change of individual systems. We call this ability emergent adaptability, as previously mentioned. To realize emergent adaptability, each individual adaptive system must be able to handle events like those mentioned before. If, for instance, a system becomes unavailable for some reason, connected systems may react by reconnecting automatically to another one. The same may happen if a new system becomes available within the ecosystem. In this way, an ecosystem emerges spontaneously. Another reason is the behavioral change of connected systems. It may happen that a system is compatible with another system at one time and, due to behavioral changes, e.g. caused by engineered adaptation, incompatible at a later date. Therefore, semantic compatibility has to be checked continuously during run time. In [19] we presented an approach which is based on run-time testing. Here, the constraints are specified as test cases which are executed each time a behavioral change of a connected system is recognized.
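Constraints expressed as executable test cases, re-run on behavioral change, can be sketched as follows. All names are invented for illustration; they do not reflect the test infrastructure of [19].

```python
# Sketch in the spirit of run-time testing: compatibility constraints on
# a connected system are expressed as executable test cases and re-run
# whenever a behavioral change of that system is detected.
def compatibility_tests(service):
    yield "route-exists", lambda: service.route("A", "B") is not None
    yield "cost-symmetric", lambda: service.cost("A", "B") == service.cost("B", "A")

def still_compatible(service):
    """Re-executed on every detected behavioral change of `service`."""
    return all(test() for _, test in compatibility_tests(service))

class RoutingStub:
    """Hypothetical stand-in for a connected routing system."""
    def route(self, a, b): return [a, b]
    def cost(self, a, b): return 5

compatible = still_compatible(RoutingStub())
```

If a behavioral change of the connected system makes any test fail, the caller would treat the connection as semantically incompatible and trigger reconnection to an alternative system.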

In software ecosystems, emergent adaptability cannot be realized by one global authority, due to the ecosystem's complexity and size. It is rather achieved by negotiation between equal peer systems, where each system focuses on satisfying its given local constraints. This in turn may lead to unbalanced ecosystems, and at worst to a collapse of the whole ecosystem. We call the ability to ensure the balance within the ecosystem over time evolutionary adaptability. In our approach, the basic conditions for a balanced ecosystem are specified as constraints. The first step in realizing a balanced ecosystem is to recognize an unbalanced one, i.e. to detect that the constraints are not satisfied anymore. This can be done by the individual adaptive systems themselves or by humans within the ecosystem. If an unbalanced ecosystem is recognized, two kinds of reactions are conceivable. The first one is to adapt the ecosystem, or just parts of it; one possibility is to do this by sending appropriate configuration instructions to the observer modules of individual adaptive systems. If it is not possible to equilibrate the ecosystem automatically, human intervention is required to prevent a collapse of the ecosystem.

Conclusion

In order to be able to answer the question 'Can complex software systems be engineered?', we must be aware that the concept of (software) systems has evolved hugely in the past decade. In the past, complicated but closed systems, such as aircraft or power plants, were hard, but possible, to engineer with classical top-down development methods. Nowadays, not least owing to the dissemination of mobile devices, complex systems additionally have an open and flexible character and face high dependability demands.


These complex software ecosystems cannot be engineered anymore with classical (fully controlled) development methods. In order to engineer this kind of system, we need a fundamentally extended approach which considers system development from a different view, mixing top-down and bottom-up approaches. Therefore, we propose an integrated model- and constraint-based engineering approach for complex software ecosystems, which takes into account closed-world models for the classic engineering of single (software) systems and open-world constraints to control the adaptability of emergent interaction structures within an ecosystem.

The model- and constraint-based engineering approach separates the specification of structure and behavior by system models from the specification of the allowed behavior of the overall complex software ecosystem by constraints. It therefore clearly distinguishes between the definition of the autonomous behavior of system elements and the overall control, allowing both to be developed and to evolve independently. Both closed-world models and open-world constraints are accordingly the basis for the controllability of the engineering process through static analyses at design time and dynamic analyses at run time.

The previously explained view on the engineering of complex software ecosystems opens up a wide spectrum of research topics:

Regarding design time, we require a general approach for modeling and constraint specification that combines open-world as well as closed-world aspects. Due to the evolution of complex software ecosystems, an incremental modeling approach must be taken into consideration. Furthermore, specially tailored methods and techniques for incremental verification and validation at design time must be developed.

To support the controllability of complex software ecosystems at run time, we also require a general approach for constraint verification and validation. In addition, the border between automatic machine decisions and human interaction in the case of constraint violations needs further investigation. A further ambitious research challenge would be to answer the open question: 'How can we establish a systematic feedback loop from the detection of constraint violations on the one hand to the evolving system design on the other hand?' Here, amended requirements must be explored and affected system parts must be detected. Subsequently, a suitable impact analysis to evaluate the consequences of incremental design changes has to be researched.

As explained above, research on the future engineering of complex software systems has created a wide field of open and challenging scientific questions. The concept of complex software ecosystems provides a suitable metaphor, and the comprehensive research frame of the proposed model- and constraint-based engineering defines a scientific agenda for the coming years.

References

[1] Rausch, A., Müller, J. P., Niebuhr, D., Herold, S., & Goltz, U. (2012, June). IT ecosystems: A new paradigm for engineering complex adaptive software systems. In Digital Ecosystems Technologies (DEST), 2012 6th IEEE International Conference on (pp. 1-6). IEEE.

[2] Sheard, S. A. and Mostashari, A. (2009), Principles of complex systems for systems engineering. Syst. Engin., 12: 295–311. doi: 10.1002/sys.20124

[3] L. Northrop, P. Feiler, R. P. Gabriel, J. Goodenough, R. Linger, T. Longstaff, R. Kazman, M. Klein, D. Schmidt, K. Sullivan, and K. Wallnau, “Ultra-Large-Scale Systems – The Software Challenge of the Future,” Software Engineering Institute, Carnegie Mellon, Tech. Rep., June 2006. [Online]. Available: http://www.sei.cmu.edu/uls/downloads.html

[4] A. W. Brown and J. A. McDermid, “The art and science of software architecture,” in ECSA, ser. Lecture Notes in Computer Science, F. Oquendo, Ed., vol. 4758. Springer, 2007, pp. 237–256.

[5] Braha, Dan and Minai, Ali A. and Bar-Yam, Yaneer. “Complex Engineered Systems: A New Paradigm”, Springer Berlin Heidelberg, ISBN 978-3-540-32831-5, 2006. doi 10.1007/3-540-32834-3_1.

[6] Bennett, K., Layzell, P., Budgen, D., Brereton, P., Macaulay, L., & Munro, M.. "Service-based software: the future for flexible software." Software Engineering Conference, 2000. APSEC 2000. Proceedings. Seventh Asia-Pacific. IEEE, 2000.

[7] J. C. Laprie, A. Avizienis, and H. Kopetz, Eds., Dependability: Basic Concepts and Terminology. Secaucus, NJ, USA: Springer-Verlag New York, Inc., 1992.

[8] J. O. Kephart and D. M. Chess, “The vision of autonomic computing,” Computer, vol. 36, no. 1, pp. 41–50, Jan. 2003. [Online]. Available: http://dx.doi.org/10.1109/MC.2003.1160055

[9] C. Müller-Schloer, H. Schmeck, and T. Ungerer, Eds., Organic Computing – A Paradigm Shift for Complex Systems. Springer, 2011.

[10] Natalya F. Noy and Deborah L. McGuinness, Ontology Development 101: A Guide to Creating Your First Ontology. Stanford Knowledge Systems Laboratory Technical Report KSL-01-05 and Stanford Medical Informatics Technical Report SMI-2001-0880, March 2001.

[11] http://subs.emis.de/LNI/Proceedings/Proceedings96/GI-Proceedings-96-3.pdf

[12] Raymond Reiter, "A logic for default reasoning", Artificial Intelligence, Apr. 1980, vol. 13, no. 1-2, pp. 81-132.

[13] https://ub-madoc.bib.uni-mannheim.de/1898/1/TR2008_004.pdf


[14] Mensing, B.; Goltz, U.; Aniculaesei, A.; Herold, S.; Rausch, A.; Gartner, S.; Schneider, K., "Towards integrated rule-driven software development for IT ecosystems," Digital Ecosystems Technologies (DEST), 2012 6th IEEE International Conference on, pp. 1-6, 18-20 June 2012. doi: 10.1109/DEST.2012.6227951

[15] Engels, G., Goedicke, M., Goltz, U., Rausch, A., & Reussner, R. (2009). Design for Future – Legacy-Probleme von morgen vermeidbar?. Informatik-Spektrum, 32(5), 393-397.

[16] S. Herold, "Architectural compliance in component-based systems. Foundations, specification, and checking of architectural rules." Ph.D. dissertation, Clausthal University of Technology, 2011.

[17] C. Bartelt, “Kollaborative Modellierung im Software Engineering”, Dr.-Hut Verlag, 2011, Munich.

[18] H. Klus: Anwendungsarchitektur-konforme Konfiguration selbstorganisierender Softwaresysteme. Technische Universität Clausthal, Institut für Informatik. Dissertation. 2013. To be published.

[19] D. Niebuhr: Dependable Dynamic Adaptive Systems: Approach, Model, and Infrastructure. Clausthal-Zellerfeld, Technische Universität Clausthal, Institut für Informatik. Dissertation. 2010
