DSoS

IST-1999-11585

Dependable Systems of Systems

Demonstration of Support for Architectural Design for Dependable SoSs

Report: Deliverable CSDA2

Report Delivery Date: 30 September 2002

Classification: Public Circulation

Contract Start Date: 1 April 2000 Duration: 36m

Project Co-ordinator: University of Newcastle upon Tyne

Partners: DERA, Malvern – UK; INRIA, Rocquencourt – France; LAAS-CNRS, Toulouse – France; TU Wien – Austria; Universität Ulm – Germany; LRI, Orsay – France

Project funded by the European Community under the “Information Society Technology” Programme (1998-2002)


Demonstration of Support for Architectural Design for Dependable SoSs

Viet Khoi Nguyen, Valérie Issarny
INRIA (Rocquencourt, France)

Overview

CSDA2 demonstrates the prototype implementation of the developer-oriented, architecture-based environment introduced in Chapter 2 of Deliverable IC2. The environment assists developers during the design phase of DSoSs through support for architectural modeling and quality analysis of DSoSs. In the environment, DSoS architectures are specified using an extensible UML-based Architecture Description Language (ADL). Extensibility of the ADL allows customization of the architectural elements’ specification, enabling in particular specialization of the ADL for the quality analysis of DSoSs. In this context, we have concentrated on providing support for the quality analysis of DSoSs at design time, while not requiring extensive knowledge of formal modeling techniques (e.g., process algebra, Markov chains, Petri nets, queuing nets) from developers. As detailed in Chapter 2 of the IC2 Deliverable, the ADL has been customized so as to allow: (i) quantitative analysis of DSoS quality from the standpoint of reliability and performance, and (ii) qualitative analysis of DSoS quality based on model checking.

The prototype implementation of the environment builds upon an existing UML modeling tool. We have more specifically used the Rational Rose tool for the graphical specification of software architectures. In particular, the tool allows the definition of add-ins for the integration of new functionality. We have thus extended the Rational Rose tool with specific add-ins for architecture modeling and for quality analysis. The latter type of add-in allows quality parameters to be specified for the system’s architectural model, and quality models to be generated from the attributed architectural model for processing by existing tools. Prototype usage has further been experimented with using the Travel Agent case study that serves for demonstration. Further detail about the prototype implementation is given in the Appendix.

A key feature of the proposed environment lies in not requiring extensive knowledge of formal methods from the developer, who is only required to model the system’s architecture using the standard UML modeling language and to specify quality parameters associated with the system’s architectural elements. While the specification of quality parameters associated with reliability and performance analysis is quite straightforward for developers, as it amounts to defining the corresponding figures, the parameters associated with model checking still require extensive knowledge of model checking, as they require specifying processes. We are currently investigating ways to make the latter task easier through libraries of behavioral specifications associated with well-known architectural styles and typical properties to be checked. Another area for future work relates to experimenting with the environment prototype on actual case studies other than the TA system.


Appendix

This appendix is organized as follows. Section A1 recalls our approach to architecture modeling using UML. Sections A2 and A3 then detail the prototype implementation of the tool support for quantitative analysis of DSoS quality, regarding reliability and performance, respectively. The prototype implementation of the tool support for qualitative analysis is addressed in Section A4. The functionalities offered by the development environment prototype are all illustrated using the Travel Agent case study, according to the system design presented in Chapter 2 of Deliverable IC2, which further serves to demonstrate prototype usage.

A1. Modeling DSoS Architecture

A1.1. UML Profile for Architecture Description

Being a general-purpose language, UML needs to be extended in order to adapt to specific domains such as enterprise distributed object computing (EDOC), data warehousing, etc. The profile notion is used to represent and structure these domains by either:

• Using UML’s built-in extension mechanisms, also referred to as the lightweight extension mechanism; these mechanisms are Tagged Values, Stereotypes and Constraints, or

• Using the MOF capability for extending the UML metamodel, also referred to as the heavyweight extension mechanism.

We take the first approach for the description of software architectures, defining new stereotypes for the definition of architectural elements. More precisely:

• An architectural Component is defined using the ADL Component stereotype, which extends the UML subsystem element. A component Port is further defined using the UML interface element, and the relationships between a component and its ports (i.e., required or provided ports) are specified through dependency relationships using the ADL Provides and ADL Requires stereotypes, as exemplified in Figure 1.

• An architectural Connector, which specifies the interactions among components, is defined using the ADL Connector stereotype, which extends the UML association element. As the basic semantics of an association is to represent object interconnection, it lacks the semantic information needed to define a connector. It is thus necessary to associate additional information with associations, which is done by adding Tagged Values to the association element, as illustrated in Figure 2, which shows a client-server file system. A minimal sketch of one possible programmatic representation of these elements follows Figure 1.

Figure 1: Relationship between component and port
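To make the stereotype mapping concrete, the sketch below shows one way the ADL elements could be represented programmatically. It is purely illustrative: the class and attribute names are our own, and the prototype itself manipulates these elements inside Rational Rose rather than through such a model.

```python
from dataclasses import dataclass, field

# Illustrative model of the ADL stereotypes layered on UML elements.
# All names here are hypothetical, not part of the prototype.

@dataclass
class Port:                      # maps to a UML interface
    name: str
    operations: list[str] = field(default_factory=list)

@dataclass
class Component:                 # <<ADL Component>>, extends the UML subsystem
    name: str
    provides: list[Port] = field(default_factory=list)   # <<ADL Provides>>
    requires: list[Port] = field(default_factory=list)   # <<ADL Requires>>

@dataclass
class Connector:                 # <<ADL Connector>>, extends the UML association
    name: str
    source: Component
    target: Component
    tagged_values: dict[str, str] = field(default_factory=dict)  # added semantics

# The client-server file system of Figure 2:
file_ops = Port("FileOperation", ["read() : String", "write(data : String)"])
server = Component("ServerComp", provides=[file_ops])
client = Component("ClientComp", requires=[file_ops])
link = Connector("cs", client, server, {"style": "client-server"})
```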


A system architecture is then defined using the UML class diagram, which provides the system’s static structure over which structural constraints associated with the system may be checked.

For behavioral analysis, the system architecture must further be instantiated on a scenario so as to precisely define the interactions among components. The collaboration diagram associated with the specific scenario then specifies the interactions among component and connector instances, structured as prescribed by the architecture of the inspected system. Finally, the deployment diagram associated with the system’s architectural model serves to describe the physical architecture of the system. More precisely, the deployment diagram describes how components are allocated to the nodes on top of which they execute.

A1.2. Modeling the Travel Agent System

The functionality of the Travel Agent (TA) system can be summarized as providing three main services to users: flight reservation, hotel reservation and car reservation. The flight reservation service helps users book a flight given the departure and destination cities; the hotel reservation service is used to book a hotel in a city; and the car reservation service is used to rent a car. The architecture of the TA system is depicted in Figure 3, which, for the sake of readability, does not show the definition of the ports associated with components. The definition of ports together with the corresponding relationships among components is depicted in Figure 4.

As discussed in Section A1.1, the system’s architecture, defined using the UML class diagram, is complemented by the system’s deployment diagram and by the collaboration diagrams associated with the scenarios describing how the system is used. In the remainder, we consider that the TA system is deployed over four nodes, as depicted in Figure 5.

We have further chosen the scenario given in Figure 6, which does not use the GeographicalDatabase component, for the quality analysis of the TA system.

Figure 2: A sample client-server architecture


Figure 3: Architecture of the TA system

Figure 4: Relationship among components (the FlightReservation, HotelReservation and CarReservation components require the query ports provided by the AirCompanies, Hotels and CarCompanies components, respectively)


A2. Reliability Analysis

In the environment, reliability analysis is carried out in two steps:

1. The first step lies in setting reliability parameters, which is carried out on the class, collaboration and deployment diagrams associated with the system’s architectural model (see Section A1.1).

2. The second step lies in generating the system’s reliability model, which is carried out on the collaboration diagram associated with the system.

Section A2.1 introduces the user interface offered by our environment for modeling reliability, using the TA system for illustration. Section A2.2 then presents the implementation of the environment support (i.e., add-in) for reliability analysis, which subdivides into support for setting reliability parameters and for generating reliability models to be processed by the SURE/ASSIST tool.

Figure 5: System nodes (TANode, FlightNode, HotelNode, CarNode)

Figure 6: Scenario used for quality analysis (a customer request to ta : TravelAgentFrontEnd fans out to hr : HotelReservation, fr : FlightReservation and cr : CarReservation, which in turn query hc : Hotels, ac : AirCompanies and cc : CarCompanies)

A2.1. Modeling Reliability

Functionalities added by our environment to the Rational Rose tool are made available through the customized Tools menu, as shown in Figure 7.

Considering the environment’s support for reliability analysis, the user can set the reliability parameters associated with the system under development by first selecting an architectural element and then clicking on the Update Reliability Parameters item of the Tools menu. Before any reliability parameter is set or modified, it is first checked that the selected architectural element belongs to a diagram associated with architecture modeling. More precisely, according to the architecture modeling approach discussed in Section A1, the diagram should be one of the following: (i) a class diagram defining a system architecture and thus consisting of components and connectors; (ii) a collaboration diagram specifying a scenario over instances of architectural elements; (iii) a deployment diagram consisting of a set of nodes. A sketch of this check follows.
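As a minimal illustration of this pre-check (the class and attribute names are hypothetical; the actual add-in queries the Rose model through its extensibility interface):

```python
from dataclasses import dataclass

# The three diagram kinds that carry reliability parameters (Section A1).
ARCHITECTURE_DIAGRAMS = {"class", "collaboration", "deployment"}

@dataclass
class SelectedElement:          # hypothetical stand-in for a Rose model element
    name: str
    diagram_kind: str

def may_carry_reliability_params(element: SelectedElement) -> bool:
    """Reject selections made outside the architecture-modeling diagrams."""
    return element.diagram_kind in ARCHITECTURE_DIAGRAMS

assert may_carry_reliability_params(SelectedElement("TravelAgentFrontEnd", "class"))
assert not may_carry_reliability_params(SelectedElement("Login", "use case"))
```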

Figure 7: Customized Tools menu for quality analysis


In the class diagram, the user sets the failure and fault parameters associated with the system’s component and connector architectural elements (see Chapter 2 of IC2 for more detail). As an example, Figure 8 gives the failure and fault parameters associated with the TravelAgentFrontEnd component of the TA system architecture defined in Figure 3.

In the deployment diagram, the user sets the reliability parameters associated with the nodes of the physical architecture on top of which instances of the system’s components are to be executed. As an illustration, Figure 9 gives the reliability parameters associated with the TANode node of the deployment diagram defined in Figure 5.

Finally, in the collaboration diagram, the user sets the parameters associated with architectural elements in relation to the given scenario. These parameters relate to defining the system’s redundancy, and lie in specifying the nodes on which the instances of the architectural elements are deployed, together with the number of instances per node. As an example, Figure 10 gives the redundancy parameters associated with the ta instance of the TravelAgentFrontEnd component, which is defined in the collaboration diagram given in Figure 6. Note that for instances of architectural elements defined in the collaboration diagram, the user can visualize the failure and fault parameters specified for the corresponding element in the associated class diagram but cannot modify them.
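Summarizing which parameter is set in which diagram, a minimal data-model sketch follows. The deliverable does not enumerate the exact fields, so the attribute names below are plausible placeholders only; the add-in itself stores these values as properties attached to Rose model elements.

```python
from dataclasses import dataclass, field

@dataclass
class ElementReliability:        # class diagram: components and connectors
    failure_rate: float          # placeholder failure parameter
    fault_type: str              # fault parameter, per Laprie's classification [2]

@dataclass
class NodeReliability:           # deployment diagram: physical nodes
    failure_rate: float          # placeholder node reliability parameter

@dataclass
class InstanceRedundancy:        # collaboration diagram: scenario instances
    element: str                                   # e.g., "ta : TravelAgentFrontEnd"
    instances_per_node: dict[str, int] = field(default_factory=dict)

# Figure 10's case, assuming (hypothetically) two replicas of ta on TANode:
ta = InstanceRedundancy("ta : TravelAgentFrontEnd", {"TANode": 2})
```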

Figure 8: Component reliability


Figure 9: Node reliability

Figure 10: Reliability of a component instance



Once all the reliability parameters associated with the system’s architectural model are set, the user can generate the corresponding reliability model by clicking on the Generation of Reliability Model item of the Tools menu. This action generates an input for the NASA ASSIST (Abstract Semi-Markov Specification Interface to the SURE Tool) tool [1]. The output of ASSIST in turn produces the input for the NASA SURE (Semi-Markov Unreliability Range Evaluator) tool, which solves the reliability model numerically and provides the final reliability result.

Note that the provided modeling of reliability at the architectural level is first checked for consistency before the corresponding reliability model is generated: it is checked whether all the reliability parameters are set, and whether the fault type is correct, by combining the reliability parameters and comparing them to the classification of faults [2]. If the fault type is incorrect, an error message is displayed; if the user still wants the reliability analysis to be completed, all such faults are treated as physical faults.

The error message lists the accepted fault types together with the architectural elements for which the specification of reliability parameters is erroneous. As an illustration, Figure 11 gives a report on reliability modeling, listing accepted faults and erroneous elements.
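The two-stage behavior described above (consistency check, then model generation) can be sketched as follows. The accepted fault types and the emitted text are schematic stand-ins: the real add-in checks parameter combinations against Laprie’s classification [2] and emits actual ASSIST syntax following the procedure of Chapter 2 of Deliverable IC2.

```python
# Placeholder set; the real check derives accepted types from Laprie's
# fault classification [2] by combining the reliability parameters.
ACCEPTED_FAULT_TYPES = {"physical", "design", "interaction"}

def check_model(elements: dict) -> list:
    """Collect elements with missing parameters or an unrecognized fault type."""
    errors = []
    for name, params in elements.items():
        if params.get("failure_rate") is None:
            errors.append((name, "reliability parameters not set"))
        elif params.get("fault_type") not in ACCEPTED_FAULT_TYPES:
            errors.append((name, "incorrect fault type (treated as physical)"))
    return errors

def generate_reliability_model(elements: dict, out_path: str) -> None:
    """Emit a schematic, ASSIST-like model file (not the exact syntax)."""
    with open(out_path, "w") as out:
        for name, params in elements.items():
            out.write(f"LAMBDA_{name.upper()} = {params['failure_rate']};\n")
        # Item declarations, initializations, transition rules and the death
        # condition would follow (cf. the print* routines of Figure 14).

elements = {"TravelAgentFrontEnd": {"failure_rate": 1e-4, "fault_type": "physical"}}
if not check_model(elements):
    generate_reliability_model(elements, "ta.assist")
```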

[1] R. Butler. The SURE approach to reliability analysis. IEEE Transactions on Reliability, 41(2), pp. 210-218, 1992.
[2] J.-C. Laprie. Dependable computing and fault tolerance: Concepts and terminology. Proc. of the 15th International Symposium on Fault-Tolerant Computing (FTCS-15), 1985.

Figure 11: List of accepted faults and erroneous elements


A2.2. Implementation of the Reliability Add-in

As stated earlier and further shown on the use case diagram of Figure 12, the functionality of the reliability add-in subdivides into: (i) support for setting reliability parameters and (ii) support for generating a reliability model, which produces an input for the ASSIST tool according to the generation procedure detailed in Chapter 2 of Deliverable IC2. The resulting static structure of the reliability add-in is depicted in Figure 13. The business logic is quite simple: the EventHandler class captures user events and invokes the associated function. For instance, when the user clicks on the Generation of Reliability Model item, the ReliabilityGenerate function is invoked. This function uses routines of the Modules package, which also provides the routines for performance analysis. More specifically, the structure of the Reliability class of the Modules package is given in Figure 14.
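A minimal sketch of this dispatch, with method names taken from Figures 13 and 14 (the body of each handler is hypothetical, and the actual add-in runs inside Rose rather than as Python):

```python
class Reliability:
    """Stand-in for the Reliability class of the Modules package (Figure 14)."""
    def generateReliabilityModel(self):
        print("generating ASSIST model...")    # placeholder behavior

class EventHandler:
    """Captures user events and invokes the associated function (Figure 13)."""
    def __init__(self, reliability: Reliability):
        self.reliability = reliability
        self.menu_actions = {
            "Generation of Reliability Model": self.ReliabilityGenerate,
        }

    def on_menu_click(self, item: str) -> None:
        self.menu_actions[item]()              # user event -> associated function

    def ReliabilityGenerate(self) -> None:
        self.reliability.generateReliabilityModel()

EventHandler(Reliability()).on_menu_click("Generation of Reliability Model")
```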

Figure 12: Functionality of the reliability add-in (the Perform reliability analysis use case includes Set reliability parameter and Generate reliability model)

Figure 13: Static structure of the reliability add-in (the EventHandler class, with UpdateReliabilityParam, UpdatePerformanceParam, ReliabilityGenerate and PerformanceGenerate operations, together with the Modules and Forms packages)


When setting reliability parameters, a form is opened to support the task. This form is defined in the Forms package and allows reliability parameters to be set graphically.

When generating the ASSIST model, specific forms (ErrorForm and FaultAccepted) are used whenever an error occurs. These forms show the erroneous elements and possibly some additional information (for example, FaultAccepted shows the detailed classification of faults from Laprie’s paper [2]) and are thus invoked by routines of the Reliability class of the Modules package. Figure 15 shows the specific forms used for reliability analysis.

Regarding the dynamic structure of the Reliability add-in, Figure 16 specifies the tasks associated with the setting of reliability parameters, while Figure 17 specifies the tasks associated with the generation of reliability models.

Figure 14: Structure of the Reliability class (getFaultType, getModelRel, generateReliabilityModel, checkModel, printItemDeclaration, printItemInitialization, printTransitionRules, printDeadCondition)

Figure 15: Forms used for setting reliability parameters (CommonReliability, NodeDetail, NodeOnModel, ErrorForm, FaultAccepted)

Figure 16: Setting reliability parameters

Figure 17: Generating ASSIST models

A3. Performance Analysis

As for the reliability analysis process, the performance analysis process subdivides into a step for setting the performance parameters of the system at the architectural level, and a step for generating the corresponding performance model of the system.


Section A3.1 introduces the user interface offered by our environment for modeling performance, still using the TA system for illustration. Section A3.2 then presents the implementation of the environment support (i.e., add-in) for performance analysis, which subdivides into support for setting performance parameters and for generating performance models to be processed by the QNAP tool.

A3.1. Modeling Performance

The user sets the performance parameters by clicking on the Update Performance Parameters item of the Tools menu. The system elements for which performance parameters need to be set are: Node, Component, Connector and Message. As for reliability modeling, the setting of performance parameters applies to class, collaboration and deployment diagrams associated with the system’s architectural model, and performance models are generated from the collaboration diagrams associated with the system’s architectural model. Redundancy parameters introduced in reliability modeling are further exploited in performance modeling, as they give the nodes on top of which component instances are deployed together with the number of instances per node.

The parameter associated with the Node element is the service rate of the given node. The service rate is measured as the number of work units performed per time unit. As an illustration, Figure 18 gives the service rate of TANode.

The performance parameters associated with the Component element are set in the class diagram defining the system’s architecture and subdivide into the following:

• Thread Policy, which specifies whether the component is single- or multi-threaded.

• Work Demand, which is defined as the number of work units spent for providing a service, and is associated with a distribution function (constant, uniform, Erlang, etc.).

As an example, Figure 19 gives the performance parameters associated with the FlightReservation component: the component is single-threaded, and its work demand follows a uniform distribution with minimal (resp. maximal) value equal to 10 (resp. 20).
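Encoding the Figure 19 example as data gives a compact reading of these parameters (a sketch; the field names are ours, chosen to mirror the parameters just described):

```python
from dataclasses import dataclass

@dataclass
class WorkDemand:
    distribution: str        # "constant", "uniform", "erlang", ...
    parameters: tuple        # e.g., (min, max) for a uniform distribution

@dataclass
class ComponentPerformance:
    thread_policy: str       # "single" or "multi"
    work_demand: WorkDemand  # work units spent for providing a service

# FlightReservation, as set in Figure 19: single-threaded, uniform in [10, 20].
flight_reservation = ComponentPerformance(
    thread_policy="single",
    work_demand=WorkDemand("uniform", (10, 20)),
)
```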

Figure 18: Node performance

Page 17: DSoS - Newcastle Universityresearch.cs.ncl.ac.uk/cabernet/formal modeling techniques (e.g., process algebra, Markov chains, Petri nets, queuing nets) from developers. As detailed in

17

The performance parameters associated with the Connector element specify the scheduling policy, capacity and delays of the connector, as detailed in Chapter 2 of the IC2 Deliverable and illustrated in Figure 20 for the connector binding the TravelAgentFrontEnd and CarReservation components. Performance parameters associated with connector instances are as for connectors; whether they are set at the connector or at the instance level depends on whether the values apply to all instances or to a specific one.

Finally, the performance parameters associated with the requests (messages) sent among components via connectors need to be set. These parameters are the time to finish a request, and the waiting time before the request is treated. Figure 21 characterizes the message sent between the Customer and TravelAgentFrontEnd components.
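The connector and message parameters can be captured in the same style (again a sketch with hypothetical field names and figures; the exact parameter set is detailed in Chapter 2 of the IC2 Deliverable):

```python
from dataclasses import dataclass

@dataclass
class ConnectorPerformance:   # set on connectors or on specific instances
    scheduling_policy: str    # e.g., "FIFO"; placeholder value domain
    capacity: int             # number of requests the connector can hold
    delay: float              # transmission delay, in time units

@dataclass
class MessagePerformance:     # set on the messages of the collaboration diagram
    time_to_finish: float     # time to finish the request
    waiting_time: float       # waiting time before the request is treated

ta_to_cr = ConnectorPerformance("FIFO", 10, 0.5)      # hypothetical figures
customer_request = MessagePerformance(5.0, 1.0)       # hypothetical figures
```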

Figure 19: Component performance

Figure 20: Connector performance


Once all the performance parameters are set, the user can click on the Generation of Performance Model item of the Tools menu to generate an input to be processed by the QNAP tool [3]. If at least one parameter is not set, an error message is displayed and the user must set the missing parameters for the generation of the performance model to proceed. Once the performance model is generated, the user executes the QNAP tool.
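A sketch of this step, combining the missing-parameter check with a schematic emission (the /STATION/ fragment only gestures at QNAP’s input format and is not the add-in’s exact output):

```python
def missing_parameters(model: dict) -> list:
    """Names of elements for which at least one performance parameter is unset."""
    return [name for name, params in model.items()
            if any(value is None for value in params.values())]

def generate_performance_model(model: dict, out_path: str) -> None:
    """Emit a schematic, QNAP-like station description per node."""
    with open(out_path, "w") as out:
        for name, params in model.items():
            out.write("/STATION/ NAME = %s; SERVICE = EXP(%s);\n"
                      % (name, params["service_rate"]))

nodes = {"TANode": {"service_rate": 100.0}, "FlightNode": {"service_rate": None}}
unset = missing_parameters(nodes)
if unset:
    print("Missing performance parameters for:", ", ".join(unset))  # error message
else:
    generate_performance_model(nodes, "ta.qnap")
```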

A3.2. Implementation of the Performance Add-in

As stated earlier and further shown on the use case diagram of Figure 22, the functionality of the performance add-in subdivides into: (i) support for setting performance parameters and (ii) support for generating a performance model, which produces an input for the SIMULOG QNAP (Queuing Network Analysis Package) tool that uses queuing network models to analyze performance, according to the generation procedure detailed in Chapter 2 of Deliverable IC2.

The static structure of the performance add-in is the same as that of the reliability add-in:

• The EventHandler class captures user events.

• Forms of the Forms package are used for graphically setting performance parameters.

• The Performance class of the Modules package provides routines for verifying model consistency and generating queuing network models; the structure of the Performance class is shown in Figure 23.

[3] www.simulog.com

Figure 21: Message performance

Figure 22: Functionality of the performance add-in (the Perform performance analysis use case includes Set performance parameter and Generate performance model)

The setting of performance parameters requires more forms than for setting reliability parameters as each architectural element (component, connector, node, message) is associated with specific parameters. Thus, a form is introduced for each element, as depicted in Figure 24.

Regarding the dynamic structure of the performance add-in, it adheres to that of the reliability add-in, as shown in Figures 25 and 26, which respectively specify the tasks associated with setting performance parameters (for the specific case of components) and the tasks associated with generating queuing network models.

Figure 23: Structure of the Performance class (printService, printStation, getNodeBaseStations, buildComponent, buildConnector, buildPerStations, generatePerformanceModel)

Figure 24: Forms used for setting performance parameters (PerformanceComp, PerformanceCon, PerformanceNode, PerformanceReq)


Figure 25: Setting performance parameters

Figure 26: Generating QNAP models


A4. Qualitative Analysis

For the qualitative analysis of DSoSs, we use model checking, relying more specifically on the SPIN model checking tool [4], as detailed in Chapter 2 of Deliverable IC2. In this context, the user specifies the behavior of components and connectors in the class diagram specifying the system’s architecture, using the PROMELA (PROtocol MEta Language) modeling language of SPIN. As an illustration, Figure 27 gives the behavioral specification of one of the ports associated with the FlightReservation component, and Figure 28 gives the behavioral specification associated with the connector binding the FlightReservation component with the AirCompanies component through their respective required and provided ports.

Once the behavioral specification of components and connectors is provided, a PROMELA model is generated from the overall architecture description, according to the procedure described in Chapter 2 of IC2. The user may then invoke the SPIN tool on the generated PROMELA file to check the specific properties that the user provides as input.
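To give a flavor of the generation step, the sketch below renders a schematic PROMELA fragment for a connector binding a required port to a provided one. The channel and process naming scheme is invented; the actual generation procedure is the one described in Chapter 2 of IC2.

```python
def emit_connector(required: str, provided: str) -> str:
    """Render a schematic PROMELA connector: one channel per port plus a relay
    process forwarding requests from the required to the provided port."""
    return f"""
chan {required}_out = [1] of {{ byte }};
chan {provided}_in  = [1] of {{ byte }};

proctype Connector_{required}_{provided}() {{
  byte msg;
  do
  :: {required}_out ? msg -> {provided}_in ! msg  /* forward each request */
  od
}}
"""

# The Figure 28 case: FlightReservation's required port bound to AirCompanies.
print(emit_connector("FlightReservation", "AirCompanies"))
```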

Note that unlike the environment’s support for reliability and performance analysis, the support provided by our environment for model checking architectures requires high expertise from developers in the area of model checking in general, and of PROMELA/SPIN in particular. We are currently investigating a method that would alleviate this drawback, relying on a library of specifications associated with common architectural styles.

[4] G. J. Holzmann. The SPIN Model Checker. IEEE Transactions on Software Engineering, 23(5), pp. 279-295, 1997.

Figure 27: Port behavior

Page 22: DSoS - Newcastle Universityresearch.cs.ncl.ac.uk/cabernet/formal modeling techniques (e.g., process algebra, Markov chains, Petri nets, queuing nets) from developers. As detailed in

22

Figure 28: Connector behavior

