Transcript
Page 1

PerfCenter and AutoPerf: Tools and Techniques for Modeling and Measurement of the Performance of Distributed Applications

Varsha Apte, Faculty Member, IIT Bombay

Page 2

Example: WebMail Application (ready to be deployed)

IMAP server

Ad Server

Authentication Server

SMTP Server

Web Server

WAN

User request

Several interacting components

Page 3

Several Usage Scenarios
Example: Login

[Message sequence chart over the Browser, Web, Authentication, IMAP and SMTP servers: the Browser sends User/Password to the Web server; the Web server does Send_to_auth and the Authentication server runs Verify_passwd; the Web server calls list_message of the IMAP server and runs GenerateHtml (the two branches have probabilities 0.2 and 0.8); the interval from request to reply is the Response Time.]

Performance goals during deployment:
• User-perceived measures: response time, request drops (minimize)
• System measures: throughput, resource utilizations (maximize)
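The branch probabilities in the login scenario (0.2 and 0.8) feed directly into an expected-demand calculation. A minimal sketch in Java; the per-branch CPU demands (10 ms and 30 ms) are invented for illustration, since the slide does not give them:

```java
public class ScenarioDemand {
    // Expected resource demand of a branching scenario:
    // E[D] = sum over branches of P(branch) * D(branch)
    static double expectedDemandMs(double[] probs, double[] demandsMs) {
        double e = 0.0;
        for (int i = 0; i < probs.length; i++) {
            e += probs[i] * demandsMs[i];
        }
        return e;
    }

    public static void main(String[] args) {
        // The slide's 0.2 / 0.8 branch split; the 10 ms and 30 ms CPU demands
        // are hypothetical values, not measured numbers from the talk.
        double e = expectedDemandMs(new double[]{0.2, 0.8}, new double[]{10.0, 30.0});
        System.out.println(e + " ms"); // 0.2*10 + 0.8*30 = 26.0 ms
    }
}
```

A modeling tool aggregates many such per-scenario expectations into the service demands of the underlying queuing model.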

Page 4

Deploying the application in a Data Center

What should be the configuration of the Web server? (Number of threads, buffer size,…)

On which machines should the IMAP server be deployed? The Web server?

How will the network affect the performance? (LAN vs WAN)

How many machines? Machine configuration? (how many CPUs, what speed, how many disks?)

Determining host and network architecture

Page 5

Input specifications

Machines and Devices

Software Components

Network Params

Deployments

Scenarios

Parser → Queuing Model
Simulation Tool / Analytical Tool (Ref: MASCOTS 07)

Output Analysis

PerfCenter: Modeling Tool

Built-in functions and constructs help the data-center architect analyze and modify the model

The architect specifies the model; PerfCenter generates the underlying queuing network model and solves it.

PerfCenter code can be downloaded from http://www.cse.iitb.ac.in/perfnet/perfcenter

*

Page 6

Capacity analysis for WebMail

Maximum throughput achieved is 30 requests/sec

[Graph: response time as the number of users increases]
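The shape of such a capacity curve follows standard operational-analysis bounds on closed-system throughput. A sketch (not PerfCenter code); the total demand and think time below are assumed values, chosen to be consistent with the slide's 30 requests/sec ceiling:

```java
public class CapacityBounds {
    // Closed-system throughput bound: X(N) <= min(1/Dmax, N/(D + Z)), where
    // Dmax = largest per-resource service demand, D = total demand, Z = think time.
    static double throughputBound(int users, double dTotal, double dMax, double think) {
        return Math.min(1.0 / dMax, users / (dTotal + think));
    }

    public static void main(String[] args) {
        double dMax = 1.0 / 30.0; // bottleneck demand implied by a 30 req/sec ceiling
        double dTotal = 0.1;      // assumed total service demand (s)
        double think = 3.0;       // assumed think time (s)
        // Linear region: throughput grows as N/(D+Z) ...
        System.out.println(throughputBound(62, dTotal, dMax, think));
        // ... until it saturates near 1/Dmax = 30 req/sec.
        System.out.println(throughputBound(200, dTotal, dMax, think));
    }
}
```

The knee of the measured response-time curve sits where these two bounds cross, which is why the simulated maximum throughput of 30 requests/sec shows up as a plateau.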

Page 7

AutoPerf: a capacity measurement and profiling tool
Focusing on the needs of a performance modeling tool

Page 8

Input requirements for modeling tools:
• Usage scenarios
• Deployment details
• Resource consumption details, e.g. "login transaction takes 20 ms CPU on the Web server"
Usually requires measured data.
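A per-transaction number like "20 ms CPU per login" can be backed out of coarse measurements via the service demand law, D = U/X. A sketch with assumed measurement values:

```java
public class ServiceDemandLaw {
    // Service demand law: D_i = U_i / X
    // (per-request demand at resource i = utilization of i / system throughput)
    static double demandSeconds(double utilization, double throughputPerSec) {
        return utilization / throughputPerSec;
    }

    public static void main(String[] args) {
        // Assumed measurements (not from the talk): the Web-server CPU is 40% busy
        // while the system serves 20 logins/sec, so each login costs
        // 0.40 / 20 = 0.02 s = 20 ms of CPU, matching the slide's example figure.
        System.out.println(demandSeconds(0.40, 20.0) * 1000.0 + " ms");
    }
}
```

This is the kind of derivation a profiling tool such as AutoPerf automates per transaction type and per resource.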

Page 9

Performance measurement of multi-tier systems
Two goals:
• Capacity analysis: maximum number of users supported, transaction rate supported, etc.
• Fine-grained profiling for use in performance models

Page 10

Measurement for Capacity Analysis

Clients running load generators

System Test Environment

Servers running system under test

GenerateRequests

E.g.: Httperf, Flood, Silk Performer, LoadRunner

Page 11

Measurement for capacity analysis: answers provided

Page 12

Given such data, models can “extrapolate” and predict performance at volume usage (e.g. PerfCenter).

Measurement for modeling

[Diagram: clients running load generators send requests over a LAN to a Web Server, App Server 1 and App Server 2; the resource consumption profile annotates per-tier times of 10 ms, 20 ms, 40 ms and 45 ms.]

Page 13

Introducing: AutoPerf
1. Generate load (clients → servers)
2. Collect client statistics; profile servers and collect server statistics
3. Correlate & display

Page 14

AutoPerf
Inputs: deployment information of servers; web transaction workload description
Output: fine-grained server-side resource profiles

Page 15

Future enhancements to PerfCenter/AutoPerf:
• Various features which make the tools more user-friendly
• Capability to model/measure performance of virtualized data centers
• Many other minor features

Skills that need to be learned/liked:
• Java programming (both tools are in Java)
• Discipline required to maintain/improve large software
• Working with quantitative data

Page 16

What is fun about this project?

Working on something that will (should) get used.

New focus on energy and virtualization, both exciting fields

Many, many algorithmic challenges
Running simulation/measurement in efficient ways

Page 17

Work to be done by RA:
• Code maintenance
• Feature enhancement
• Write paper(s) for publication, go to conferences, present them
• Create web pages and user groups, answer questions
• Help in popularizing the tool, demos, etc.
• Pick a challenging problem within this domain as an M.Tech. project, write paper(s), go to conferences!

Page 18

Thank you/Questions

This research was sponsored by MHRD, Intel Corp., Tata Consultancy Services, and an IBM Faculty Award (2007-2009)

PerfCenter code can be downloaded from http://www.cse.iitb.ac.in/perfnet/perfcenter

Page 19

Simulator: Queue Class
All resources (devices, soft servers, and network links) are abstracted as queues.
Discrete event simulator implemented in Java.
Supports both open and closed arrivals.
[Flowchart: on request arrival, get the SoftServer, Device, or NetworkLink; if an instance is free (Y), service the request; if not (N), enqueue it, or drop it if the buffer is full.]
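The "everything is a queue" abstraction can be illustrated with a toy discrete-event loop for one FCFS resource. This is a sketch, not PerfCenter's actual Queue class; it assumes deterministic service times and an unbounded buffer:

```java
import java.util.ArrayDeque;
import java.util.PriorityQueue;

public class MiniQueueSim {
    // Mean response time of a single FCFS server, driven by a discrete-event loop.
    // Each event is {time, kind, jobArrivalTime}; kind 0 = arrival, 1 = departure.
    static double meanResponseTime(double[] arrivals, double serviceTime) {
        PriorityQueue<double[]> events =
                new PriorityQueue<>((a, b) -> Double.compare(a[0], b[0]));
        for (double t : arrivals) events.add(new double[]{t, 0, t});
        ArrayDeque<Double> waiting = new ArrayDeque<>(); // arrival times of queued jobs
        boolean busy = false;
        double sumResp = 0.0;
        int completed = 0;
        while (!events.isEmpty()) {
            double[] ev = events.poll();
            double now = ev[0];
            if (ev[1] == 0) {                 // arrival event
                if (!busy) {                  // server free: start service immediately
                    busy = true;
                    events.add(new double[]{now + serviceTime, 1, ev[2]});
                } else {
                    waiting.add(ev[2]);       // server busy: wait in FCFS order
                }
            } else {                          // departure event
                sumResp += now - ev[2];       // response time = departure - arrival
                completed++;
                if (waiting.isEmpty()) busy = false;
                else events.add(new double[]{now + serviceTime, 1, waiting.poll()});
            }
        }
        return sumResp / completed;
    }

    public static void main(String[] args) {
        // Jobs arriving at t = 0, 1, 2 with 2-time-unit service complete at
        // t = 2, 4, 6: responses 2, 3, 4, so the mean is 3.0.
        System.out.println(meanResponseTime(new double[]{0, 1, 2}, 2.0));
    }
}
```

A full simulator generalizes this loop with stochastic service times, finite buffers (the Drop branch), and one such queue per device, soft server, and network link.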

Page 20

Simulator: Synchronous Calls
[Diagram: a user request flows Server1 → Server2 → Server3 → Server4; each calling thread (Server1-t, Server2-t, Server3-t) remains busy, waiting until its callee returns (the PerfCenter stack).]

Page 21

Simulator Parameters
PerfCenter simulates both open and closed systems.

Open arrivals:
loadparms
  arate 10
end

Closed arrivals:
loadparms
  noofusers 10
  thinktime exp(3)
end

Model parameters:
modelparms
  method simulation
  type closed
  noofrequest 10000
  confint false
  replicationno 1
end

Independent replication method is used for output analysis.
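The independent replications method works by running the simulation several times with different seeds and forming a Student's-t confidence interval over the replication means. A sketch of that output-analysis step (the replication means below are hypothetical; 2.776 is the t critical value for 95% confidence with 4 degrees of freedom):

```java
public class Replications {
    // Confidence interval from n independent replications:
    // mean +/- t * s / sqrt(n), where s is the sample standard deviation.
    static double[] confidenceInterval(double[] repMeans, double tCritical) {
        int n = repMeans.length;
        double mean = 0.0;
        for (double r : repMeans) mean += r;
        mean /= n;
        double var = 0.0;
        for (double r : repMeans) var += (r - mean) * (r - mean);
        var /= (n - 1);                       // sample variance
        double half = tCritical * Math.sqrt(var / n);
        return new double[]{mean - half, mean + half};
    }

    public static void main(String[] args) {
        // Hypothetical mean response times (ms) from 5 replications.
        double[] ci = confidenceInterval(new double[]{10, 12, 11, 13, 14}, 2.776);
        System.out.println("[" + ci[0] + ", " + ci[1] + "]");
    }
}
```

This is why the input language exposes `replicationno` and `confint`: more replications narrow the interval at the cost of longer runs.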

Page 22

Deployments dply3 and dply4

        H1 CPU%   H2 CPU%   H3 CPU%   H4 CPU%   IMAP-host Disk%   H2 Disk%
dply3   77.6      17.7      23.9      NA        44.2              18.7
dply4   53.8      18.4      27.7      NA        47.1              19.5

Page 23

Deployment summary

        H1 CPU%   H2 CPU%   H3 CPU%   H4 CPU%   IMAP-host Disk%   H2 Disk%
dply1   98.1      8.2       NA        NA        41.2              8.8
dply2   67.5      15.9      48.6      75.0      80.6              17.0
dply3   77.6      17.7      23.9      NA        44.2              18.7
dply4   53.8      18.4      27.7      NA        47.1              19.5

Page 24

Simulator: Dynamic Loading of Scheduling Policy

/Queue/SchedulingStartergy/
  FCFS.class
  LCFS.class
  RR.class

host host[2]
  cpu count 1
  cpu schedp fcfs
  cpu buffer 9999
end

host host[2]
  cpu count 1
  cpu schedp rr
  cpu buffer 9999
end
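Loading a policy class named in the input file can be done with plain Java reflection. A self-contained sketch of the idea; the interface and class names here are hypothetical, and PerfCenter's actual package layout may differ:

```java
public class PolicyLoader {
    interface SchedulingPolicy { String name(); }

    public static class FCFS implements SchedulingPolicy {
        public String name() { return "fcfs"; }
    }

    public static class RR implements SchedulingPolicy {
        public String name() { return "rr"; }
    }

    // Map a class name (derived from e.g. "cpu schedp fcfs") to an instance,
    // so new policies can be dropped in without touching the simulator core.
    static SchedulingPolicy load(String className) {
        try {
            return (SchedulingPolicy) Class.forName(className)
                    .getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException("unknown policy class: " + className, e);
        }
    }

    public static void main(String[] args) {
        System.out.println(load("PolicyLoader$FCFS").name()); // prints "fcfs"
    }
}
```

The benefit is the one shown on the slide: switching `schedp fcfs` to `schedp rr` in the host block selects a different `.class` file at run time, with no simulator recompilation.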

Page 25

Using PerfCenter for "what-if" analysis: scaling up the email application to support requests arriving at a rate of 2000 req/sec

Page 26

Step 1:
diskspeedupfactor2 = 20
diskspeedupfactor3 = 80
deploy web H4
set H1:cpu:count 32
set H4:cpu:count 32
set H3:cpu:count 12
set H2:cpu:count 12
cpuspeedupfactor1 = 2
cpuspeedupfactor3 = 4
cpuspeedupfactor4 = 2

Step 2:
host H5
  cpu count 32
  cpu buffer 99999
  cpu schedP fcfs
  cpu speedup 2
end
deploy web H5
set H2:cpu:count 32
set H3:cpu:count 18

Page 27

Summary

                         H1     H2     H3     H4     H5
Step 1  CPU count        32     12     12     32
        CPU util. (%)    88     100    75.8   87.7
        CPU speedup      2      1      4      3
        Disk speedup            20     80
        Disk util. (%)          51.6   46.5
Step 2  CPU count        32     32     18     32     32
        CPU util. (%)    64.5   55.9   57.0   63.1   58.7
        CPU speedup      2      1      4      2      2
        Disk speedup            20     80
        Disk util. (%)          58     52.6

Page 28

Identifying Network Link Capacity

Link utilization:
              256 Kbps   1 Mbps
LAN1 to LAN2  20.1%      5.1%
LAN2 to LAN1  18.7%      4.8%
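The two columns are consistent with a fixed offered load divided by the link capacity (utilization = offered bits/sec ÷ capacity). A sketch of that arithmetic; the ~51.5 Kbps offered load is inferred from the table (20.1% of 256 Kbps), not stated on the slide:

```java
public class LinkUtilization {
    // Link utilization = offered traffic (bits/sec) / link capacity (bits/sec)
    static double utilization(double offeredBps, double capacityBps) {
        return offeredBps / capacityBps;
    }

    public static void main(String[] args) {
        double offered = 51456.0; // inferred LAN1-to-LAN2 load: 20.1% of 256 Kbps
        System.out.println(utilization(offered, 256_000.0));   // 0.201 (20.1%)
        System.out.println(utilization(offered, 1_000_000.0)); // ~0.0515 (~5.1%)
    }
}
```

Because the same offered load yields 20.1% on the 256 Kbps link and about 5.1% on the 1 Mbps link, the model lets the architect pick the cheapest link speed that keeps utilization (and hence queueing delay) acceptable.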

Page 29

Limitations of standard tools
• Do not perform automated capacity analysis:
  - Need the range of load levels to be specified
  - Need the duration of load generation to be specified
  - Need the steps in which to vary the load to be specified
  - Report only the throughput at a given load level, not the maximum achievable throughput and saturation load level
• Should take as input a richer workload description (a CBMG) rather than just the percentage of virtual users requesting each type of transaction
• Do not perform automated fine-grained server-side resource profiling

