
Performance Analysis of Software Architectures

Paola Inverardi

UNIVERSITÀ DEGLI STUDI DELL’AQUILA

Area Informatica, Facoltà di SS.MM.NN.

http://saladin.dm.univaq.it

Joint work with:

• Simonetta Balsamo, Università di Venezia

• Group of students over the years: Mangano, Russo, Aquilani, Andolfi

Goal

quantitative analysis of SA descriptions.

Introduce the ability to measure architectural choices.

Why? and How?

To validate SA design choices with respect to performance indices

To compare alternative SA designs.

Why?

Produce feedback at the design level

HOW?

Introduce quantitative models early in the life cycle

Evaluate performance indices

How?

Add non-functional requirements to maintain the expected performance

Outline of the Talk

• Software Architectures

• Performance Evaluation

• Approaches

• Our recipe

• Conclusions

• References

• Advertising

Software Architectures

• High level system description in terms of subsystems (components) and the way they interact (connectors)

• Static description: Topology

• Dynamic description: Behavior

Topology

[Topology diagram: pipeline of Filter → Adapter (pseudo-filter) → gzip → Filter components]

Behavior

[Message sequence chart among CFu, AD, CFd, and Gzip, repeated n times]

MSC and Finite State Automata

Static and Dynamic Views: (a) FSA, (b) Topology, (c) MSC

Quality Attributes and SA

• qualities discernible by observing the system execution: performance, security, availability, functionality, usability

• qualities not discernible at run time: modifiability, portability, reusability, integrability, testability.

Quality Attributes at run time

• Performance: refers to the responsiveness of the system. It is often a function of how much communication and interaction there is between components of the system. It is clearly an architectural issue. (communications usually take longer than computations)

How to measure Performance

• Arrival rates and distributions of service requests, processing times, queue sizes and latency (the time taken to service a request)

• simulate by building a stochastic queueing model of the system based upon anticipated workload scenarios
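As a rough illustration of what such a simulation involves (our own sketch in Python, not part of the original material), the following estimates the mean response time of a single service center under a Poisson workload:

```python
import random

def simulate_mm1(arrival_rate, service_rate, n_customers=100_000, seed=42):
    """Estimate the mean response time of a single M/M/1 service center."""
    rng = random.Random(seed)
    arrival = 0.0          # arrival time of the current request
    server_free_at = 0.0   # time at which the server next becomes idle
    total_response = 0.0
    for _ in range(n_customers):
        arrival += rng.expovariate(arrival_rate)    # Poisson arrival stream
        start = max(arrival, server_free_at)        # wait if the server is busy
        server_free_at = start + rng.expovariate(service_rate)
        total_response += server_free_at - arrival  # waiting + service time
    return total_response / n_customers

# At utilization 0.8, theory predicts E[R] = 1/(mu - lambda) = 5.0
print(simulate_mm1(arrival_rate=0.8, service_rate=1.0))
```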

Software Architectures Quality Attributes

• Static: can be measured statically (portability, scalability, reusability, …)

• Dynamic: can be measured by observing the SA behavior (performance, availability, …)

Software Architectures and Performance

Quoting from the WOSP 2000 panel introduction on Performance of SA: “the quantitative analysis of a SA allows for the early detection of potential performance problems … Early detection of potential performance problems allows alternative software designs and … … meaning designing a software system and analyzing its performance before the system is implemented …”

Software Architecture

Level of abstraction

Dynamic model

Lack of information

How do we measure?

How do we interpret the measures?

Performance Evaluation

Quantitative analysis of systems, based on both deterministic and stochastic models and methods

Evaluating the performance of a system means making a quantitative analysis to derive a set of (performance) indices, obtained either as mean values or as probabilistic figures.

Probabilistic distribution/mean of response times, waiting times, queue lengths, delays, resource utilization, throughput, …
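For the simplest case, a single M/M/1 center, these indices have standard closed forms; a small sketch (textbook formulas, added for illustration):

```python
def mm1_indices(lam, mu):
    """Closed-form mean indices for an M/M/1 service center (requires lam < mu)."""
    rho = lam / mu        # resource utilization
    n = rho / (1 - rho)   # mean number of customers in the center
    r = 1 / (mu - lam)    # mean response time; Little's law gives n = lam * r
    w = rho / (mu - lam)  # mean waiting time
    return {"utilization": rho, "mean_queue_length": n,
            "mean_response_time": r, "mean_waiting_time": w}

print(mm1_indices(lam=0.8, mu=1.0))
# utilization 0.8, mean queue length 4, response time 5, waiting time 4
```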

PE Models and Techniques

• Models are primarily stochastic and can be solved by either analytic or simulation techniques.

• Analytic techniques can be exact (e.g. numerical), approximate, or based on bounds

• Simulation techniques are more general but expensive

Queueing Network Models

• Service centers
– service time
– buffer space with scheduling policy
– number of servers

• Customers

– Number for closed models, arrival process for open models

• Network topology
– models how service centers are interconnected and how customers move among them
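As a data-structure sketch of these three ingredients (names and shapes are ours, purely illustrative):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ServiceCenter:
    name: str
    service_rate: float             # mu_i
    servers: int = 1                # number of servers
    capacity: Optional[int] = None  # None = infinite buffer; an int = finite B_i

@dataclass
class QueueingNetwork:
    centers: list                                 # the service centers
    routing: dict                                 # P = ||p_ij||: (from, to) -> probability
    arrivals: dict = field(default_factory=dict)  # open model: external arrival rates
    population: Optional[int] = None              # closed model: fixed number of customers

# A small open tandem network:
net = QueueingNetwork(
    centers=[ServiceCenter("c1", 2.0), ServiceCenter("c2", 1.5)],
    routing={("c1", "c2"): 1.0},
    arrivals={"c1": 1.0},
)
```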

Queueing networks with finite capacity queues

• Queueing network models to represent
– sharing of resources with finite capacity queues
– population constraints
– synchronization constraints

• finite capacity of the queue
– n = number of customers in the service center
– B = finite capacity

– blocking dependence, deadlock

Solution methods: exact vs. approximate, simulation

• various blocking types:
– different behaviors of customer arrivals at a full node and of servers' activity
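To make one blocking type concrete, here is a simplified sketch (ours) of Blocking After Service (BAS) in a two-node tandem line with a finite buffer at the second node:

```python
import random

def tandem_bas(lam, mu1, mu2, b2, n=100_000, seed=7):
    """Mean end-to-end response time of a two-node tandem line with BAS blocking.

    Node 2 holds at most b2 customers; a customer finishing at node 1 keeps
    the node-1 server busy until node 2 frees a position (Blocking After Service).
    """
    rng = random.Random(seed)
    arrival = 0.0
    release1 = 0.0         # time node 1 last released a customer
    d2 = [0.0] * (n + 1)   # departure times from node 2 (1-based)
    total = 0.0
    for i in range(1, n + 1):
        arrival += rng.expovariate(lam)
        finish1 = max(arrival, release1) + rng.expovariate(mu1)
        # BAS: customer i may enter node 2 only after customer i - b2 has left it
        release1 = max(finish1, d2[i - b2]) if i > b2 else finish1
        d2[i] = max(release1, d2[i - 1]) + rng.expovariate(mu2)
        total += d2[i] - arrival
    return total / n

print(tandem_bas(lam=0.5, mu1=1.0, mu2=1.0, b2=1))
```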

Analytical solutions for Q.N. with finite capacity queues

• Network model parameters

• M number of nodes

• N number of customers

• µi service rate of node i

• Service time distribution: M, G, PHn, GE

• P = ||pij|| routing matrix

• Bi finite capacity of node i

– Queue-length probability distribution?
• Continuous-time homogeneous Markov chain
– S = (S1, S2, ..., SM) network state
• State space E, transition rate matrix Q
• Steady-state probabilities π(S), obtained by solving πQ = 0 with πe = 1

Other average performance indices can be derived from π and depend on the blocking type

The exact solution soon becomes numerically intractable

Product-form solutions exist in special cases; otherwise, approximate analysis
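A minimal numerical sketch of the exact approach on a toy two-state chain (the growth of the state space E is exactly what makes this intractable for realistic models):

```python
import numpy as np

def steady_state(Q):
    """Solve pi Q = 0 with the probabilities summing to 1, for a CTMC generator Q."""
    m = Q.shape[0]
    # Replace one (redundant) balance equation by the normalization pi . e = 1
    A = np.vstack([Q.T[:-1], np.ones(m)])
    b = np.zeros(m)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Two-state chain: rate 1.0 from state 0 to 1, rate 2.0 from state 1 to 0
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])
print(steady_state(Q))  # [0.6667 0.3333]
```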

Queueing Network Models

“QN modelling is a top-down process. The underlying philosophy is to begin by identifying the principal components of the system and the ways they interact, then supply any details that prove to be necessary.”

(ref. Lazowska et al., Quantitative System Performance, Prentice Hall, http://www.cs.washington.edu/homes/lazowska/qsp/)

QNM creation

• Definition: service centers, their number, classes of customers, and topology

• Parameterization: define the alternatives under study, e.g. by selecting arrival processes and service rates

• Evaluation: obtain a quantitative description of system behavior; computation of performance indices like resource utilization, system throughput and customer response time.

Approaches

• Software Performance: the whole system life cycle is available; the design is used to incrementally produce a QNM of the software system.

• Software Specification: the system behavioral specification is available and modeled by Stochastic Petri Nets or Stochastic Process Algebras

Software Performance

• Performance analysis integrated in the software life cycle
– Assumes a number of software artifacts are managed, from requirements specifications (use cases) to deployment diagrams

• QNM models
– Topology obtained from the information on the physical architecture
– Information on the software components is used to define the model workload

References under SP; a (UML-based) survey in BS01

Software Specification

• Identify a precise software stage: system design specification

• Formal behavioral specification: Stochastic Petri Nets, stochastically timed process algebras
– Behavioral and performance analysis in a single model

References under SS

Our Approach

• No SP: we want to evaluate the performance of the SA description; we do not assume an implementation is available

• No SS: a single model is appealing, but feedback is too difficult; the performance model is too far from the components/connectors description

[Framework diagram: SA Description (CHAM, FSP, UML, WRIGHT, ...) → Behavioral Model (dynamic descriptions, FSM, MSCs, ...) → Algorithm → Performance Model (favorite model: QNM, SPN, SPA, ...) → Performance Evaluation (solution method: symbolic, approximation, simulation, ...) → Results and interpretation, with feedback to the SA description]

• Formal description of SA via CHAM
• Behavioral analysis of the SA
• Algebraic analysis and finite state modeling
• Validation and quantitative analysis based on FSTM
• Global system behavior
• Queueing Network Model
• Feedback at the design level
• Capacity planning and case studies

- S. Balsamo, P. Inverardi, C. Mangano "An Approach to Performance Evaluation of Software Architectures" in IEEE Proc. WOSP'98.

- S. Balsamo, P. Inverardi, C. Mangano, L.Russo "Performance Evaluation of Software Architectures" in IEEE Proc. IWSSD-98.

Brief history of our work in SA and PE 1/2

• Specification of SA via Message Sequence Charts - UML
• Event ordering. Event sequence. Trace of events.
• Communication types, concurrency and non-determinism

• Trace analysis and model structure identification
• Quantitative analysis based on extended QN model
• Scenarios for model parameterization
• Feedback at the design level

- F. Andolfi, F. Aquilani, S. Balsamo, P. Inverardi "Deriving Performance Models of Software Architectures from Message Sequence Charts" in Proc. IEEE WOSP 2000.

- F. Andolfi, F. Aquilani, S. Balsamo, P. Inverardi " On using Queueing Network Models with finite capacity queues for Software Architectures performance prediction” in Proc. QNET’2000.

Brief history of our work in SA and PE 2/2

• Description of SA via LTS, independent of the ADL
• Algorithm to derive the performance model structure
• Add info on the communication types, state annotation
• Identify scenarios for model parameterization
• Performance model based on extended Queueing Network models
• Analytical solution (symbolic) for simple models, approximation or simulation for complex models
• Result interpretation at the software design level

F. Aquilani, S. Balsamo, P. Inverardi "Performance Analysis at the software architecture design level" TR-SAL-32, Technical Report Saladin Project, 2000, to appear in Performance Evaluation.

Framework of performance analysis of SA at the design level

Usual Example: The Multiphase Compiler

Multiphase compiler concurrent architecture

optimized architecture

Software Architecture

Synchronous communication

Queueing Network Model with BAS blocking

Acyclic topology

Solution: approximate analysis

[Figure: concurrent compiler architecture, TEXT → LEXER → PARSER → SEMANTOR → OPTIMIZER → CODE_GEN, exchanging characters, tokens, phrases, corrected phrases, and object code; the corresponding QNM has service centers L, P, S, O, G with finite buffers BS = BP = BO = BG = 1]

Sequential Architecture

Same number of components, strongly sequentialized; no concurrency.

[Figure: the sequential Software Architecture, text → lexer → parser → semantor → optimizer → codegen, and its QNM: one single service center sc1]

Parameterization and Evaluation

• Specify parameters (e.g. arrival rate and mean service time of each center). We keep them symbolic.

• Meaning of the parameters (e.g. service time = execution time of a component, arrival rate = activation rate of concurrent instances of component execution).

• Parameter instantiations identify potential implementation scenarios
– In the compiler example, 3 scenarios playing with the mean service times of the concurrent model
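Keeping the parameters symbolic can be illustrated with a computer-algebra sketch (hypothetical; an M/M/1 response time stands in for the real model's formula):

```python
import sympy as sp

lam, mu = sp.symbols("lambda mu", positive=True)

# Symbolic mean response time of one center; a scenario is then just a
# substitution for the symbolic parameters.
R = 1 / (mu - lam)

scenario = R.subs(mu, 10 * lam)  # scenario: service ten times faster than arrivals
print(sp.simplify(scenario))     # 1/(9*lambda)
```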

How we provide Feedback

Throughput of the two compiler SAs: the concurrent SA performs 5 times better than the sequential SA, in the scenario in which the mean service times of the nodes have the same order of magnitude.

Enrich performance requirements in the subsequent development steps:

– a global performance requirement can be broken into requirements on single components

[Framework diagram, instantiated: SA dynamic description (Labeled Transition System, Message Sequence Charts) → Algorithm (using state annotations and communication types) → Performance Model (QNM) → Performance Evaluation (scenarios, parameterization) → Results and interpretation, feeding back as the choice of SA plus new requirements on components and connectors]

SA specification: Labeled Transition System

⟨S, L, →, s, P⟩

S set of states, L set of labels (communication types)

s initial state, P set of state labels

→ transition relation, → ⊆ (P × L × P)

SA components: communicating concurrent subsystems

SA level: consider interaction activities among components

Parallel composition of communicating components

P: set of states of the SA components and connectors described by the LTS
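Encoded as a data structure, the quintuple might look like this (a sketch; representing P as a state-to-label mapping is our reading of the definition):

```python
from dataclasses import dataclass

@dataclass
class LTS:
    states: set         # S, the set of states
    labels: set         # L, the set of labels (communication types)
    transitions: set    # the transition relation, as (state, label, state) triples
    initial: str        # s, the initial state
    state_labels: dict  # P, mapping each state to its component/connector label

lts = LTS(
    states={"s0", "s1"},
    labels={"sync", "async"},
    transitions={("s0", "sync", "s1"), ("s1", "async", "s0")},
    initial="s0",
    state_labels={"s0": "P_idle", "s1": "P_busy"},
)
```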

Performance Analysis at the SA design level 1/2

Performance Analysis at the SA design level 2/2

First model the maximum level of concurrency (each component as an autonomous server); then (algorithm) derive a simpler structure of the QNM by analyzing the true level of concurrency and the communication types

Algorithm

1. Visit the LTS to derive interaction sets formed by interaction pairs (IP); (p1, p2) denotes a flow of data from p1 to p2

• model connecting elements with buffer

• mark non-deterministic IP

2. Examine the sets of IPs to generate the service centers and the topology of the QNM
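Step 1 can be sketched as a breadth-first visit of the LTS (illustrative only; the buffering of connecting elements and the marking of non-deterministic IPs are elided):

```python
from collections import deque

def interaction_pairs(lts):
    """Visit the LTS and collect interaction pairs (p1, p2, communication type).

    Illustrative reading: a transition from a state labeled p1 to a state
    labeled p2 with label c contributes the pair (p1, p2) with type c,
    i.e. a flow of data from p1 to p2.
    """
    pairs, seen = set(), {lts.initial}
    frontier = deque([lts.initial])
    while frontier:
        state = frontier.popleft()
        for src, label, dst in lts.transitions:
            if src != state:
                continue
            pairs.add((lts.state_labels[src], lts.state_labels[dst], label))
            if dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return pairs

print(interaction_pairs(lts))  # using the LTS from the earlier sketch
```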

• UML as ADL
– a model of all possible system behaviours
– state diagrams for “manageable” processes
– implicit parallel notation for composite processes P1||P2||…||Pn
– no explicit representation due to state explosion

• Sequence diagrams/MSCs to describe components interactions

• MSCs with state information and iteration blocks, components are the object elements

• QNM with blocking, BAS mechanism

SA description: MSCs - From MSCs to QNMs

• It is always possible to synthesize a FSM out of a set of MSCs

• all refer to the same initial system configuration

• representative of major system behaviors

• Each system component appears in (at least) one MSC

• MSCs contain info about the state of components

• Other technical conditions

MSCs requirements

• communication among components, i.e. which components interact

• communication types, i.e. synchronous/asynchronous

• concurrency, i.e. components can proceed concurrently

• non-determinism, i.e. components do proceed nondeterministically

Extracting from MSCs info about

• MSC encoding => from an MSC we derive the trace (a set of regular languages)

• We analyze traces to identify the kind of communications (1to2, 2to1, concurrent, non-deterministic), we build Interaction Pairs to record this information

• We use IP to build the QNM topology

How do we do that?

• I = (P1,P2)s => service center representing a unique service P1 followed by P2, expressing sequentiality (P1 and P2 are not concurrent)

• I = (P1,P2)a => service center with infinite buffer implicitly modelling the communication channel + the transition P1 -> P2 in the QNM

• {(P1,P2)s, (P1,P3)s }ND => multi-customer service center

• synchronous communication among concurrent components => distinct service centers, with a zero-capacity buffer at the receiver component and BAS blocking policy at the sender component
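These four rules could be rendered as a dispatch over interaction-pair kinds (a toy sketch; the record shapes are ours, and BAS handling is reduced to a link annotation):

```python
def qnm_elements(interaction_sets):
    """Map interaction-pair sets to QNM service centers and links.

    Each record is one of:
      ("s",  p1, p2)              synchronous sequential pair (P1,P2)s
      ("a",  p1, p2)              asynchronous pair (P1,P2)a
      ("nd", [(p1,p2), (p1,p3)])  non-deterministic set of sync pairs
      ("sc", p1, p2)              synchronous among concurrent components
    """
    centers, links = set(), []
    for rec in interaction_sets:
        kind = rec[0]
        if kind == "s":    # one service center serving P1 followed by P2
            centers.add(rec[1] + "+" + rec[2])
        elif kind == "a":  # infinite buffer implicitly models the channel
            centers.update(rec[1:])
            links.append((rec[1], rec[2], {"buffer": None}))  # None = infinite
        elif kind == "nd": # one multi-customer service center
            centers.add("ND(" + ",".join(sorted({p for pr in rec[1] for p in pr})) + ")")
        elif kind == "sc": # distinct centers; zero-capacity buffer, BAS blocking
            centers.update(rec[1:])
            links.append((rec[1], rec[2], {"buffer": 0, "blocking": "BAS"}))
    return centers, links

print(qnm_elements([("s", "P1", "P2"), ("sc", "AD", "Gzip")]))
```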

Interaction Pairs and QNM

• Compressing Proxy system

• Purpose: improve the performance of Unix-based World Wide Web browsers over slow networks by an HTTP server that compresses and uncompresses data to and from the network

• Software Architecture

[Figure: the compressing proxy topology, Filter → Adapter (pseudo-filter) → gzip → Filter, and its QNM with service centers AD and GZIP and finite buffers BAD = BGZIP = 1]

Example

Synchronous communication. Queueing Network Model with BAS blocking; exact analysis of the underlying Markov chain.

[Message sequence chart among CFu, AD, CFd, and Gzip, repeated n times, as shown earlier]

{S(CFu,AD) S(AD,Gzip) S(Gzip,AD) S(AD,CFd)}^N

MSC to trace

• S(Px,Py)c1 … S(Pk,Pz)c2 S(Pi,Pj)c3 S(Ps,Pt)c4 …

• S(Px,Py)c1 … S(Pk,Pz)c2 S(Ps,Pt)c4 S(Pi,Pj)c3 …

• Pi and Ps are concurrent: the two traces coincide except for the order of the adjacent interactions S(Pi,Pj)c3 and S(Ps,Pt)c4
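The inference above (two traces equal up to a swap of adjacent interactions imply concurrency of the initiating components) can be sketched as:

```python
def concurrent_pairs(trace_a, trace_b):
    """Infer concurrency from two traces differing only by adjacent swaps.

    Traces are sequences of interactions such as ("Pi", "Pj", "c3"); when two
    otherwise equal traces exchange two adjacent interactions, the initiating
    components of those interactions can proceed concurrently.
    """
    if len(trace_a) != len(trace_b):
        return set()
    found, i = set(), 0
    while i < len(trace_a):
        if trace_a[i] == trace_b[i]:
            i += 1
        elif (i + 1 < len(trace_a)
              and trace_a[i] == trace_b[i + 1]
              and trace_a[i + 1] == trace_b[i]):
            found.add(frozenset({trace_a[i][0], trace_a[i + 1][0]}))
            i += 2
        else:
            return set()  # the traces differ in a non-swap way: no inference
    return found

# The slide's example: S(Pi,Pj)c3 and S(Ps,Pt)c4 appear in both orders
a = [("Pk", "Pz", "c2"), ("Pi", "Pj", "c3"), ("Ps", "Pt", "c4")]
b = [("Pk", "Pz", "c2"), ("Ps", "Pt", "c4"), ("Pi", "Pj", "c3")]
print(concurrent_pairs(a, b))  # {frozenset({'Pi', 'Ps'})}
```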

Trace Analysis

• Derivation of the performance model from the dynamic view of SA

• Finite (incomplete) representation of the SA behavior, i.e. LTS (MSC)

• Analysis of the LTS (MSC) to extract the pieces of information relevant to the PM

• Performance evaluation at the SA level of abstraction
• Feedback on the design process
• Case studies
• Integration of architectural design tools and performance tools

Conclusion

My opinion

• Still an active area of research, with very high industrial interest; for research interest, see the key action of the new IST European program call

• PM models close to SA description. Symbolic evaluation!

• Feedback: Make explicit the extra info to help in refining the design steps

• Experiment!

ROME 22-26 JULY 2002

ISSTA and WOSP Together!

Selected Bibliography

• GENERAL SA
– Shaw, M., Garlan, D., Software Architecture: Perspectives on an Emerging Discipline, Prentice Hall, 1996.
– Bass, L., Clements, P., Kazman, R., Software Architecture in Practice, Addison Wesley, 1998.
– Hofmeister, C., Nord, R., Soni, D., Applied Software Architecture, Addison Wesley, October 1999.
– http://www.sei.cmu.edu

• Survey
– S. Balsamo, M. Simeoni, "On Transforming UML Models into Performance Models", Workshop on Transformations in UML, ETAPS 2001, Genova, Italy, April 7th, 2001.

• Software Specification

– G. Balbo, G. Conte and M. A. Marsan. Performance Models of Multiprocessor Systems. Series in Computer Systems, The MIT Press, (1986).

– R. Pooley and P. King, "Using UML to derive stochastic process algebra models“ Proceedings 15th UK Performance Engineering Workshop, 1999. 

– P. King and R. Pooley, "Derivation of Petri Net Performance Models from UML specifications“ Proceedings 11th Int. Conf. on Tools and techniques for computer Performance Evaluation, Illinois 2000.

– M. Bernardo and R. Gorrieri, "Extended Markovian Process Algebra", in Proc. CONCUR '96, LNCS 1119, Springer-Verlag, (1996) 315-330.

– M. Bernardo, P. Ciancarini and L. Donatiello, "AEMPA: A Process Algebraic Description Language for the Performance Analysis of Software Architectures", in Wosp2000

• Our Approach
– S. Balsamo, P. Inverardi, C. Mangano, "An Approach to Performance Evaluation of Software Architectures", in IEEE Proc. WOSP'98.
– S. Balsamo, P. Inverardi, C. Mangano, L. Russo, "Performance Evaluation of Software Architectures", in IEEE Proc. IWSSD-98.
– F. Andolfi, F. Aquilani, S. Balsamo, P. Inverardi, "Deriving Performance Models of Software Architectures from Message Sequence Charts", in Proc. IEEE WOSP 2000.
– F. Andolfi, F. Aquilani, S. Balsamo, P. Inverardi, "On Using Queueing Network Models with Finite Capacity Queues for Software Architectures Performance Prediction", in Proc. QNET'2000.
– F. Aquilani, S. Balsamo, P. Inverardi, "Performance Analysis at the Software Architecture Design Level", TR-SAL-32, Technical Report Saladin Project, 2000, to appear in Performance Evaluation.

Selected Bibliography

• Software Performance
– H. Gomaa and D. Menasce, "A Method for Design and Performance Modeling of Client-Server Systems", IEEE Transactions on Software Engineering, 2000.
– H. Gomaa and D. Menasce, "Design and Performance Modeling of Component Interconnection Patterns for Distributed Software Architectures", in WOSP 2000.
– V. Cortellessa, R. Mirandola, "Deriving a Queueing Network Based Performance Model from UML Diagrams", in WOSP 2000.
– D. C. Petriu, X. Wang, "From UML Description of High-Level Software Architecture to LQN Performance Models", in AGTIVE'99, LNCS 1779, Springer-Verlag, 2000.
– D. C. Petriu, C. Shousha, A. Jalnapurkar, "Architecture-Based Performance Analysis Applied to a Telecommunication System", IEEE Trans. on Software Engineering, 2000.
– R. Pooley, "Software Engineering and Performance: A Road-map", in The Future of Software Engineering, A. Finkelstein Editor, 22nd ICSE.
– R. Pooley and P. King, "The Unified Modeling Language and Performance Engineering", IEE Proceedings-Software, 146, 1, February 1999.
– C. U. Smith, Performance Engineering of Software Systems, Addison-Wesley, 1990.
– C. U. Smith and L. G. Williams, "Software Performance Engineering: A Case Study Including Performance Comparison with Design Alternatives", IEEE Trans. on Software Engineering, Vol 19, No. 7, 720-741, July 1993.
– C. U. Smith and L. G. Williams, "Performance Evaluation of Software Architectures", in WOSP 1998.
– M. Woodside, C. Hrischuk, B. Selic, S. Bayarov, "A Wideband Approach to Integrating Performance Prediction into a Software Design Environment", in WOSP 1998.
– M. Woodside, "Software Performance Evaluation by Models", in Performance Evaluation (G. Haring, C. Lindemann, M. Reiser Eds.), LNCS 1769, 283-304, 2000.

