Page 1: metrics of performance

Copyright 2004 David J. Lilja

Performance metrics

What is a performance metric?
Characteristics of good metrics
Standard processor and system metrics
Speedup and relative change

Page 2: metrics of performance


What is a performance metric?

Count: of how many times an event occurs
Duration: of a time interval
Size: of some parameter
A value derived from these fundamental measurements

Page 3: metrics of performance


Time-normalized metrics

“Rate” metrics normalize a metric to a common time basis:
(number of events) ÷ (time interval over which the events occurred)
Examples: transactions per second, bytes per second
Also called “throughput”; useful for comparing measurements taken over different time intervals
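For example, a system measured at 5,400 transactions over a 60-second run and another measured at 2,700 transactions over a 30-second run both deliver 90 transactions per second, so the two measurements can be compared directly.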

Page 4: metrics of performance


What makes a “good” metric?

Allows accurate and detailed comparisons
Leads to correct conclusions
Is well understood by everyone
Has a quantitative basis
In short, a good metric helps avoid erroneous conclusions

Page 5: metrics of performance


Good metrics are …

Linear
Fits with our intuition: if the metric increases 2x, performance should increase 2x
Not an absolute requirement, but very appealing
E.g. the dB scale used to measure sound is nonlinear

Page 6: metrics of performance


Good metrics are …

Reliable
If metric A > metric B, then Performance A > Performance B
Seems obvious! However, it can happen that
MIPS(A) > MIPS(B), but Execution time (A) > Execution time (B)
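For example, a machine executing many simple RISC-style instructions can post a higher MIPS rating yet take longer to finish the program than a machine that uses fewer, more complex instructions; the metric ranks the slower machine first.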

Page 7: metrics of performance


Good metrics are …

Repeatable
The same value is measured each time an experiment is performed
Must be deterministic
Seems obvious, but not always true…
E.g. time-sharing changes the measured execution time

Page 8: metrics of performance


Good metrics are …

Easy to use
No one will use a metric that is hard to measure
Hard to measure or derive → less likely to be measured correctly
A wrong value is worse than a bad metric!

Page 9: metrics of performance


Good metrics are …

Consistent
Units and definition are constant across systems
Seems obvious, but often not true… e.g. MIPS, MFLOPS
Inconsistent → impossible to make comparisons

Page 10: metrics of performance


Good metrics are …

Independent
A lot of $$$ is riding on performance results
Pressure on manufacturers to optimize for a particular metric, and to influence the definition of a metric
But a good metric is independent of this pressure

Page 11: metrics of performance


Good metrics are …

Linear -- nice, but not necessary
Reliable -- required
Repeatable -- required
Easy to use -- nice, but not necessary
Consistent -- required
Independent -- required

Page 12: metrics of performance


Clock rate

Faster clock == higher performance?
Is a 2 GHz processor always better than a 1 GHz processor?
But is it a proportional increase? What about architectural differences?
Actual operations performed per cycle
Clocks per instruction (CPI)
Penalty on branches due to pipeline depth
What if the processor is not the bottleneck? Memory and I/O delays
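These effects combine in the standard relation: execution time = (instruction count × CPI) / (clock rate). For example, a 2 GHz processor that averages CPI = 2 finishes no sooner than a 1 GHz processor that achieves CPI = 1 on the same instruction count.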

Page 13: metrics of performance


Clock rate

(Faster clock) ≠ (better performance)
A good first-order metric

Linear ☹
Reliable ☹
Repeatable ☺
Easy to measure ☺
Consistent ☺
Independent ☺

Page 14: metrics of performance


MIPS

Measure of computation “speed”: millions of instructions executed per second
MIPS = n / (Te * 1000000)
n = number of instructions executed
Te = execution time
Physical analog: distance traveled per unit time
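For example, a program that executes n = 600 million instructions in Te = 2 seconds rates 600000000 / (2 * 1000000) = 300 MIPS.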

Page 15: metrics of performance


MIPS

But how much actual computation is done per instruction?
E.g. CISC vs. RISC; clocks per instruction (CPI)
MIPS = Meaningless Indicator of Performance

Linear ☹
Reliable ☹
Repeatable ☺
Easy to measure ☺
Consistent ☹
Independent ☺

Page 16: metrics of performance


MFLOPS

A better definition of “distance traveled”: 1 unit of computation (~distance) ≡ 1 floating-point operation
Millions of floating-point operations per second
MFLOPS = f / (Te * 1000000)
f = number of floating-point operations
Te = execution time
Also GFLOPS, TFLOPS, …
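For example, a code that performs f = 2 billion floating-point operations in Te = 8 seconds rates 2000000000 / (8 * 1000000) = 250 MFLOPS.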

Page 17: metrics of performance


MFLOPS

An integer program rates 0 MFLOPS but may still be doing useful work: sorting, searching, etc.
How do you count a FLOP? E.g. transcendental operations, roots
Too much flexibility in the definition of a FLOP
Not consistent across machines

Linear ☹
Reliable ☹
Repeatable ☺
Easy to measure ☺
Consistent ☹
Independent ☹

Page 18: metrics of performance


SPEC

System Performance Evaluation Cooperative
Computer manufacturers select “representative” programs for the benchmark suite
Standardized methodology:
1. Measure execution times
2. Normalize to a standard basis machine
3. SPECmark = geometric mean of the normalized values
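As an illustration of step 3, here is a minimal C sketch (the ratio values are made up, and geometric_mean is a helper invented for this example): each benchmark's time is first normalized to the basis machine, and the SPECmark-style summary is the geometric mean of those ratios.

    /* build: cc specmark.c -lm */
    #include <math.h>
    #include <stdio.h>

    /* Geometric mean of n normalized ratios, computed via logs
     * to avoid overflow when the ratios are large. */
    double geometric_mean(const double *ratios, int n)
    {
        double log_sum = 0.0;
        for (int i = 0; i < n; i++)
            log_sum += log(ratios[i]);
        return exp(log_sum / n);
    }

    int main(void)
    {
        /* Hypothetical ratios: time on basis machine / time on
         * measured machine, one per benchmark program. */
        double ratios[] = { 8.2, 11.5, 9.7, 14.0 };
        printf("SPECmark ~ %.2f\n", geometric_mean(ratios, 4));
        return 0;
    }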

Page 19: metrics of performance


SPEC

The geometric mean is inappropriate (more later)
A SPEC rating does not correspond to execution times of non-SPEC programs
Subject to tinkering: a committee determines which programs should be part of the suite
Targeted compiler optimizations

Linear ☹
Reliable ☹
Repeatable ☺
Easy to measure ½☺
Consistent ☺
Independent ☹

Page 20: metrics of performance


QUIPS

Developed as part of the HINT benchmark
Instead of effort expended, use the quality of the solution
Quality has a rigorous mathematical definition
QUIPS = QUality Improvements Per Second

Page 21: metrics of performance


QUIPS

HINT benchmark program: find upper and lower rational bounds for

∫₀¹ (1 - x)/(1 + x) dx

Use an interval subdivision technique:
Divide x and y into intervals whose widths are powers of 2
Count the squares completely above/below the curve to obtain the upper (u) and lower (l) bounds
Quality = 1/(u - l)
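The counting scheme can be sketched in C. This is a simplified one-dimensional version, not the real HINT code: because the integrand decreases monotonically on [0, 1], a right-endpoint sum over a power-of-2 subdivision bounds the area from below and a left-endpoint sum bounds it from above, and the gap between the bounds sets the quality.

    #include <stdio.h>

    /* Integrand used by the HINT benchmark. */
    static double f(double x) { return (1.0 - x) / (1.0 + x); }

    /* Quality after subdividing [0,1] into 2^k intervals:
     * upper/lower Riemann sums bound the integral, and
     * quality = 1/(u - l) grows as the bounds tighten. */
    double quality(int k)
    {
        long n = 1L << k;
        double w = 1.0 / n, lower = 0.0, upper = 0.0;
        for (long i = 0; i < n; i++) {
            upper += w * f(i * w);        /* left endpoints: overestimate */
            lower += w * f((i + 1) * w);  /* right endpoints: underestimate */
        }
        return 1.0 / (upper - lower);
    }

    int main(void)
    {
        for (int k = 1; k <= 20; k++)
            printf("k = %2d  quality = %g\n", k, quality(k));
        return 0;
    }

Dividing the quality achieved by the time taken to reach it gives the QUIPS figure.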

Page 22: metrics of performance


QUIPS

[Figure from http://hint.byu.edu/documentation/Gus/HINT/ComputerPerformance.html]


Page 24: metrics of performance


QUIPS

Primary focus on floating-point operations and memory performance
Good for predicting performance on numerical programs
But it ignores I/O performance, multiprogramming, and the instruction cache
The quality definition is fixed

Linear ≈☺
Reliable ≈☺
Repeatable ☺
Easy to measure ☺
Consistent ☺
Independent ☺

Page 25: metrics of performance


Execution time

Ultimately, you are interested in the time required to execute your program
Smallest total execution time == highest performance
Compare times directly, or derive appropriate rates from them
Time is the fundamental metric of performance: if you can’t measure time, you don’t know anything

Page 26: metrics of performance


Execution time

“Stopwatch” measured execution time:

    start_count = read_timer();
    /* portion of program to be measured */
    stop_count = read_timer();
    elapsed_time = (stop_count - start_count) * clock_period;

This measures “wall clock” time, which includes I/O waits, time-sharing, OS overhead, …
“CPU time” includes only the processor time
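The same stopwatch pattern in compilable form, as a minimal sketch (POSIX clock_gettime for wall-clock time, ISO C clock() for CPU time; the loop is just a stand-in for the portion of the program to be measured):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec start, stop;
        clock_t cpu_start, cpu_stop;

        clock_gettime(CLOCK_MONOTONIC, &start);
        cpu_start = clock();

        /* portion of program to be measured */
        volatile double x = 0.0;
        for (long i = 0; i < 10000000L; i++)
            x += 1.0 / (i + 1.0);

        cpu_stop = clock();
        clock_gettime(CLOCK_MONOTONIC, &stop);

        double wall = (stop.tv_sec - start.tv_sec)
                    + (stop.tv_nsec - start.tv_nsec) / 1e9;
        double cpu = (double)(cpu_stop - cpu_start) / CLOCKS_PER_SEC;
        printf("wall clock: %.3f s   CPU: %.3f s\n", wall, cpu);
        return 0;
    }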

Page 27: metrics of performance


Execution time

It is best to report both wall clock and CPU times
Wall clock time includes system noise effects:
Background OS tasks
Virtual-to-physical page mapping
Random cache mapping and replacement
Variable system load
Report both the mean and the variance (more later)

Linear ☺
Reliable ☺
Repeatable ≈☺
Easy to measure ☺
Consistent ☺
Independent ☺

Page 28: metrics of performance


Performance metrics summary

                  Clock  MIPS  MFLOPS  SPEC  QUIPS  TIME
Linear            ☹      ☹     ☹       ☹     ≈☺     ☺
Reliable          ☹      ☹     ☹       ☹     ≈☺     ☺
Repeatable        ☺      ☺     ☺       ☺     ☺      ☺
Easy to measure   ☺      ☺     ☺       ½☺    ☺      ☺
Consistent        ☺      ☹     ☹       ☺     ☺      ☺
Independent       ☺      ☺     ☹       ☹     ☺      ☺

Page 29: metrics of performance


Other metrics

Response time: elapsed time from request to response
Throughput: jobs or operations completed per unit time, e.g. video frames per second
Bandwidth: bits per second
Ad hoc metrics: defined for a specific need

Page 30: metrics of performance


Means vs. ends metrics

Means-based metrics
Measure what was done, whether or not it was useful!
Nop instructions, multiplies by zero, …
Produce unreliable metrics
Ends-based metrics
Measure progress towards a goal
Count only what is actually accomplished

Page 31: metrics of performance


Means vs. ends metrics

[Figure: a spectrum running from means-based to ends-based metrics, in the order Clock rate, MIPS, MFLOPS, SPEC, QUIPS, Execution time]

Page 32: metrics of performance


Speedup

The speedup of System 2 with respect to System 1 is the value S2,1 such that: R2 = S2,1 * R1
R1 = “speed” of System 1
R2 = “speed” of System 2
System 2 is S2,1 times faster than System 1

Page 33: metrics of performance


Speedup

“Speed” is a rate metric: Ri = Di / Ti
Di ~ “distance traveled” by System i
Run the same benchmark program on both systems → D1 = D2 = D
The total work done by each system is defined to be “execution of the same program,” independent of architectural differences

Page 34: metrics of performance


Speedup

S2,1 = R2 / R1 = (D2 / T2) / (D1 / T1) = (D / T2) / (D / T1) = T1 / T2
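For example, if System 1 runs a benchmark in T1 = 90 s and System 2 runs the same benchmark in T2 = 30 s, then S2,1 = 90 / 30 = 3: System 2 is 3 times faster than System 1.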

Page 35: metrics of performance


Speedup

S2,1 > 1 → System 2 is faster than System 1
S2,1 < 1 → System 2 is slower than System 1

Page 36: metrics of performance


Relative change

Performance of System 2 relative to System 1, expressed as a percent change:

Δ2,1 = (R2 - R1) / R1

Page 37: metrics of performance


Relative change

Δ2,1 = (R2 - R1) / R1 = (D / T2 - D / T1) / (D / T1) = (T1 - T2) / T2 = S2,1 - 1
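Continuing the speedup example: Δ2,1 = (90 - 30) / 30 = 2, i.e. System 2 is 200% faster than System 1, which agrees with Δ2,1 = S2,1 - 1 = 3 - 1 = 2.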

Page 38: metrics of performance


Relative change

Δ2,1 > 0 → System 2 is faster than System 1
Δ2,1 < 0 → System 2 is slower than System 1

Page 39: metrics of performance


Important Points

Metrics can be:
Counts
Durations
Sizes
Some combination of the above

Page 40: metrics of performance


Important Points

Good metrics are:
Linear: more “natural”
Reliable: useful for comparison and prediction
Repeatable
Easy to use
Consistent: the definition is the same across different systems
Independent of outside influences

Page 41: metrics of performance


Important Points

[Figure: the same spectrum of metrics, running from not-so-good to better: Clock rate, MIPS, MFLOPS, SPEC, QUIPS, Execution time]

Page 42: metrics of performance


Important Points

Speedup: S2,1 = T1 / T2
Relative change: Δ2,1 = S2,1 - 1

