
Data Mining: Evaluation and Validation

Romi Satria Wahono
romi@romisatriawahono.net
http://romisatriawahono.net
+6281586220090


Romi Satria Wahono
• SD Sompok Semarang (1987)
• SMPN 8 Semarang (1990)
• SMA Taruna Nusantara, Magelang (1993)
• S1, S2 and S3 (on-leave), Department of Computer Sciences, Saitama University, Japan (1994-2004)
• Research Interests: Software Engineering and Intelligent Systems
• Founder of IlmuKomputer.Com
• Researcher at LIPI (2004-2007)
• Founder and CEO of PT Brainmatics Cipta Informatika


Course Outline
1. Introduction to Data Mining
2. The Data Mining Process
3. Evaluation and Validation in Data Mining
4. Data Mining Methods and Algorithms
5. Data Mining Research


Evaluation and Validation in Data Mining


Evaluation and Validation in Data Mining
1. Training and Testing
2. Predicting performance: confidence limits
3. Holdout, cross-validation, bootstrap
4. Comparing schemes: the t-test
5. Predicting probabilities: loss functions
6. Cost-sensitive measures
7. Evaluating numeric prediction
8. The Minimum Description Length principle


Training and Testing


Evaluation: The Key to Success
• How predictive is the model we learned?
• Error on the training data is not a good indicator of performance on future data
  • Otherwise 1-NN would be the optimum classifier!
• Simple solution that can be used if lots of (labeled) data is available: split the data into a training set and a test set (sketched below)
• However, (labeled) data is usually limited, so more sophisticated techniques need to be used
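
As a concrete illustration of this split, here is a minimal holdout sketch in Python; scikit-learn is assumed, and the synthetic dataset and decision-tree learner are placeholders rather than anything from the original slides:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    # Placeholder data: 1000 labeled instances with 10 attributes
    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

    # Hold out one third of the data for testing
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/3, random_state=0)

    clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
    print("training accuracy:", accuracy_score(y_train, clf.predict(X_train)))  # optimistic
    print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))  # honest estimate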


Issues in Evaluation
• Statistical reliability of estimated differences in performance (→ significance tests)
• Choice of performance measure:
  • Number of correct classifications
  • Accuracy of probability estimates
  • Error in numeric predictions
• Costs assigned to different types of errors
  • Many practical applications involve costs


Error Rate
• Natural performance measure for classification problems: error rate
  • Success: the instance’s class is predicted correctly
  • Error: the instance’s class is predicted incorrectly
  • Error rate: proportion of errors made over the whole set of instances
• Resubstitution error: error rate obtained from the training data
• Resubstitution error is (hopelessly) optimistic!


Test Set
• Test set: independent instances that have played no part in the formation of the classifier
  • Assumption: both training data and test data are representative samples of the underlying problem
• Test and training data may differ in nature
  • Example: classifiers built using customer data from two different towns A and B
    • To estimate the performance of the classifier from town A in a completely new town, test it on data from B


Note on Parameter Tuning
• It is important that the test data is not used in any way to create the classifier
• Some learning schemes operate in two stages:
  1. Stage 1: build the basic structure
  2. Stage 2: optimize parameter settings
• The test data can’t be used for parameter tuning!
• The proper procedure uses three sets: training data, validation data, and test data
  • Validation data is used to optimize parameters (see the sketch below)
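
A sketch of the three-set procedure; the 60/20/20 proportions and the max_depth grid are arbitrary illustrative choices:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=1000, random_state=0)

    # 60% training, 20% validation, 20% test
    X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)

    # Tune a parameter on the validation set only
    scores = {d: DecisionTreeClassifier(max_depth=d, random_state=0)
                    .fit(X_train, y_train).score(X_val, y_val) for d in (2, 4, 8, 16)}
    best_depth = max(scores, key=scores.get)

    # The test set is touched exactly once, at the very end
    final = DecisionTreeClassifier(max_depth=best_depth, random_state=0).fit(X_train, y_train)
    print("depth:", best_depth, "test accuracy:", final.score(X_test, y_test))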


Making the Most of the Data
• Once evaluation is complete, all the data can be used to build the final classifier
• Generally, the larger the training data the better the classifier (but returns diminish)
• The larger the test data, the more accurate the error estimate
• Holdout procedure: method of splitting the original data into training and test set
  • Dilemma: ideally both training set and test set should be large!


Predicting Performance


Evaluation of Data Mining Algorithms
1. Estimation:
   • Error: Root Mean Square Error (RMSE), MSE, MAPE, etc.
2. Prediction/Forecasting:
   • Error: Root Mean Square Error (RMSE), MSE, MAPE, etc.
3. Classification:
   • Confusion Matrix: Accuracy
   • ROC Curve: Area Under Curve (AUC)
4. Clustering:
   • Internal Evaluation: Davies–Bouldin index, Dunn index
   • External Evaluation: Rand measure, F-measure, Jaccard index, Fowlkes–Mallows index, Confusion matrix
5. Association:
   • Lift Charts: Lift Ratio
   • Precision and Recall (F-measure)


Predicting Performance
• Assume the estimated error rate is 25%. How close is this to the true error rate?
  • Depends on the amount of test data
• Prediction is just like tossing a (biased!) coin
  • “Head” is a “success”, “tail” is an “error”
• In statistics, a succession of independent events like this is called a Bernoulli process
  • Statistical theory provides us with confidence intervals for the true underlying proportion


Confidence Intervals
• We can say: p lies within a certain specified interval with a certain specified confidence
• Example: S = 750 successes in N = 1000 trials
  • Estimated success rate: 75%
  • How close is this to the true success rate p?
  • Answer: with 80% confidence, p lies in [73.2%, 76.7%]
• Another example: S = 75 and N = 100
  • Estimated success rate: 75%
  • With 80% confidence, p lies in [69.1%, 80.1%]


Mean and Variance
• Mean and variance for a Bernoulli trial: p, p(1 − p)
• Expected success rate f = S/N
• Mean and variance for f: p, p(1 − p)/N
• For large enough N, f follows a Normal distribution
• The c% confidence interval [−z ≤ X ≤ z] for a random variable with 0 mean is given by:
  $\Pr[-z \le X \le z] = c$
• With a symmetric distribution:
  $\Pr[-z \le X \le z] = 1 - 2 \times \Pr[X \ge z]$


Confidence Limits
• Confidence limits for the normal distribution with 0 mean and a variance of 1:

    Pr[X ≥ z]    z
    0.1%         3.09
    0.5%         2.58
    1%           2.33
    5%           1.65
    10%          1.28
    20%          0.84
    40%          0.25

• Thus: $\Pr[-1.65 \le X \le 1.65] = 90\%$
• To use this we have to reduce our random variable f to have 0 mean and unit variance


Transforming f
• Transformed value for f:
  $\frac{f - p}{\sqrt{p(1-p)/N}}$
  (i.e. subtract the mean and divide by the standard deviation)
• Resulting equation:
  $\Pr\left[-z \le \frac{f - p}{\sqrt{p(1-p)/N}} \le z\right] = c$
• Solving for p:
  $p = \left(f + \frac{z^2}{2N} \pm z\sqrt{\frac{f(1-f)}{N} + \frac{z^2}{4N^2}}\right) \Big/ \left(1 + \frac{z^2}{N}\right)$


Examples
• f = 75%, N = 1000, c = 80% (so that z = 1.28): $p \in [0.732, 0.767]$
• f = 75%, N = 100, c = 80% (so that z = 1.28): $p \in [0.691, 0.801]$
• Note that the normal-distribution assumption is only valid for large N (i.e. N > 100)
• f = 75%, N = 10, c = 80% (so that z = 1.28): $p \in [0.549, 0.881]$
  (should be taken with a grain of salt)
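
These intervals can be reproduced numerically from the solved equation above; a small self-contained sketch (only numpy is assumed):

    import numpy as np

    def confidence_interval(f, n, z):
        """Interval for the true success rate p, given observed rate f over n trials."""
        center = f + z**2 / (2 * n)
        spread = z * np.sqrt(f * (1 - f) / n + z**2 / (4 * n**2))
        return (center - spread) / (1 + z**2 / n), (center + spread) / (1 + z**2 / n)

    for n in (1000, 100, 10):
        lo, hi = confidence_interval(0.75, n, z=1.28)  # c = 80% -> z = 1.28
        print(f"N={n:4d}: p in [{lo:.3f}, {hi:.3f}]")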


Cross-Validation


Holdout Estimation
• What to do if the amount of data is limited?
• The holdout method reserves a certain amount for testing and uses the remainder for training
  • Usually: one third for testing, the rest for training
• Problem: the samples might not be representative
  • Example: a class might be missing in the test data
• Advanced version uses stratification
  • Ensures that each class is represented with approximately equal proportions in both subsets (see the sketch below)
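
In scikit-learn, stratified holdout is one argument away; a minimal sketch with a deliberately imbalanced placeholder dataset:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split

    # Placeholder data with roughly a 9:1 class imbalance
    X, y = make_classification(n_samples=1000, weights=[0.9], random_state=0)

    # stratify=y keeps the class proportions approximately equal in both subsets
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=1/3, stratify=y, random_state=0)
    print("minority rate in test set:", y_test.mean())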


Repeated Holdout Method
• The holdout estimate can be made more reliable by repeating the process with different subsamples
  • In each iteration, a certain proportion is randomly selected for training (possibly with stratification)
  • The error rates on the different iterations are averaged to yield an overall error rate
• This is called the repeated holdout method
• Still not optimum: the different test sets overlap
  • Can we prevent overlapping?


Cross-Validation
• Cross-validation avoids overlapping test sets:
  1. First step: split the data into k subsets of equal size
  2. Second step: use each subset in turn for testing, the remainder for training
• This is called k-fold cross-validation
• Often the subsets are stratified before the cross-validation is performed
• The error estimates are averaged to yield an overall error estimate


More on Cross-Validation
• Standard method for evaluation: stratified ten-fold cross-validation
• Why ten?
  • Extensive experiments have shown that this is the best choice to get an accurate estimate
  • There is also some theoretical evidence for this
• Stratification reduces the estimate’s variance
• Even better: repeated stratified cross-validation
  • E.g. ten-fold cross-validation is repeated ten times and the results are averaged (reduces the variance)
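
A sketch of repeated stratified ten-fold cross-validation; the dataset and classifier are again placeholders:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=500, random_state=0)

    # Stratified 10-fold CV, repeated 10 times with different randomizations
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
    scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=cv)
    print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")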


Other Estimates


Leave-One-Out Cross-Validation
• Leave-One-Out: a particular form of cross-validation
  • Set the number of folds to the number of training instances
  • I.e., for n training instances, build the classifier n times
• Makes best use of the data
• Involves no random subsampling
• Very computationally expensive
  • (exception: NN)
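
A minimal leave-one-out sketch; the n model fits are what make the method expensive:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    X, y = make_classification(n_samples=100, random_state=0)  # small placeholder dataset

    # One fold per instance: the classifier is built n times on n-1 instances
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=1), X, y, cv=LeaveOneOut())
    print("leave-one-out error rate:", 1 - scores.mean())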


Leave-One-Out-CV and Stratification
• Disadvantage of Leave-One-Out-CV: stratification is not possible
  • It guarantees a non-stratified sample because there is only one instance in the test set!
• Extreme example: random dataset split equally into two classes
  • The best inducer predicts the majority class
  • 50% accuracy on fresh data
  • The Leave-One-Out-CV estimate is 100% error!


The Bootstrap
• CV uses sampling without replacement
  • The same instance, once selected, cannot be selected again for a particular training/test set
• The bootstrap uses sampling with replacement to form the training set:
  • Sample a dataset of n instances n times with replacement to form a new dataset of n instances
  • Use this data as the training set
  • Use the instances from the original dataset that don’t occur in the new training set for testing


The 0.632 Bootstrap
• Also called the 0.632 bootstrap
  • A particular instance has a probability of 1 − 1/n of not being picked
  • Thus its probability of ending up in the test data is:
    $\left(1 - \frac{1}{n}\right)^n \approx e^{-1} \approx 0.368$
  • This means the training data will contain approximately 63.2% of the instances


Estimating Error with the Bootstrap
• The error estimate on the test data will be very pessimistic
  • Trained on just ~63% of the instances
• Therefore, combine it with the resubstitution error:
  $err = 0.632 \times e_{\text{test instances}} + 0.368 \times e_{\text{training instances}}$
• The resubstitution error gets less weight than the error on the test data
• Repeat the process several times with different replacement samples; average the results (see the sketch below)
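
A sketch of the whole procedure, with resampling, out-of-bag testing, and the 0.632/0.368 combination; the dataset and learner are placeholders:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=200, random_state=0)
    rng = np.random.default_rng(0)

    def bootstrap_632(clf, X, y, n_rounds=50):
        n, errs = len(y), []
        for _ in range(n_rounds):
            train = rng.integers(0, n, size=n)        # sample n times with replacement
            test = np.setdiff1d(np.arange(n), train)  # instances that never got picked
            clf.fit(X[train], y[train])
            e_test = np.mean(clf.predict(X[test]) != y[test])
            e_resub = np.mean(clf.predict(X[train]) != y[train])
            errs.append(0.632 * e_test + 0.368 * e_resub)
        return np.mean(errs)

    print("0.632 bootstrap error:", bootstrap_632(DecisionTreeClassifier(random_state=0), X, y))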


More on the Bootstrap
• Probably the best way of estimating performance for very small datasets
• However, it has some problems
  • Consider the random dataset from above
  • A perfect memorizer will achieve 0% resubstitution error and ~50% error on test data
  • Bootstrap estimate for this classifier:
    $err = 0.632 \times 50\% + 0.368 \times 0\% = 31.6\%$
  • True expected error: 50%


Comparing Data Mining Schemes


Comparing Data Mining Schemes
• Frequent question: which of two learning schemes performs better?
• Note: this is domain dependent!
• Obvious way: compare 10-fold CV estimates
• Generally sufficient in applications (we don’t lose much if the chosen method is not truly better)
• However, what about machine learning research?
  • Need to show convincingly that a particular method works better


Comparing Data Mining Schemes
• Want to show that scheme A is better than scheme B in a particular domain
  • For a given amount of training data
  • On average, across all possible training sets
• Let’s assume we have an infinite amount of data from the domain:
  • Sample infinitely many datasets of the specified size
  • Obtain a cross-validation estimate on each dataset for each scheme
  • Check if the mean accuracy for scheme A is better than the mean accuracy for scheme B


Paired t-test
• In practice we have limited data and a limited number of estimates for computing the mean
• Student’s t-test tells us whether the means of two samples are significantly different
• In our case the samples are cross-validation estimates for different datasets from the domain
• Use a paired t-test because the individual samples are paired
  • The same CV is applied twice

William Gosset (born 1876 in Canterbury; died 1937 in Beaconsfield, England) obtained a post as a chemist in the Guinness brewery in Dublin in 1899. He invented the t-test to handle small samples for quality control in brewing, and wrote under the name “Student”.


Distribution of the Means
• $x_1, x_2, \dots, x_k$ and $y_1, y_2, \dots, y_k$ are the 2k samples for the k different datasets
• $m_x$ and $m_y$ are the means
• With enough samples, the mean of a set of independent samples is normally distributed
• The estimated variances of the means are $\sigma_x^2 / k$ and $\sigma_y^2 / k$
• If $\mu_x$ and $\mu_y$ are the true means, then
  $\frac{m_x - \mu_x}{\sqrt{\sigma_x^2 / k}} \quad \text{and} \quad \frac{m_y - \mu_y}{\sqrt{\sigma_y^2 / k}}$
  are approximately normally distributed with mean 0, variance 1


Student’s Distribution
• With small samples (k < 100) the mean follows Student’s distribution with k − 1 degrees of freedom
• Confidence limits, assuming we have 10 estimates (9 degrees of freedom), compared with the normal distribution:

    Pr[X ≥ z]    z (9 degrees of freedom)    z (normal distribution)
    0.1%         4.30                        3.09
    0.5%         3.25                        2.58
    1%           2.82                        2.33
    5%           1.83                        1.65
    10%          1.38                        1.28
    20%          0.88                        0.84


Distribution of the Differences
• Let $m_d = m_x - m_y$
• The difference of the means ($m_d$) also has a Student’s distribution with k − 1 degrees of freedom
• Let $\sigma_d^2$ be the variance of the difference
• The standardized version of $m_d$ is called the t-statistic:
  $t = \frac{m_d}{\sqrt{\sigma_d^2 / k}}$
• We use t to perform the t-test


Performing the Test
• Fix a significance level
  • If a difference is significant at the α% level, there is a (100 − α)% chance that the true means differ
• Divide the significance level by two because the test is two-tailed
  • I.e. the true difference can be positive or negative
• Look up the value for z that corresponds to α/2
• If t ≤ −z or t ≥ z, then the difference is significant
  • I.e. the null hypothesis (that the difference is zero) can be rejected
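
A minimal sketch with hypothetical per-fold accuracies for schemes A and B; the hand-computed t-statistic matches scipy’s paired test:

    import numpy as np
    from scipy import stats

    # Hypothetical accuracies of two schemes on the same 10 CV folds
    acc_a = np.array([0.81, 0.79, 0.84, 0.80, 0.82, 0.78, 0.83, 0.80, 0.81, 0.79])
    acc_b = np.array([0.78, 0.77, 0.80, 0.79, 0.80, 0.76, 0.79, 0.78, 0.77, 0.78])

    d = acc_a - acc_b
    t = d.mean() / np.sqrt(d.var(ddof=1) / len(d))  # the t-statistic defined above
    print("t =", t)
    print(stats.ttest_rel(acc_a, acc_b))  # same t, plus the two-tailed p-value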


Unpaired Observations
• If the CV estimates are from different datasets, they are no longer paired (or maybe we have k estimates for one scheme and j estimates for the other one)
• Then we have to use an unpaired t-test with min(k, j) − 1 degrees of freedom
• The estimate of the variance of the difference of the means becomes:
  $\frac{\sigma_x^2}{k} + \frac{\sigma_y^2}{j}$


Dependent Estimates
• We assumed that we have enough data to create several datasets of the desired size
• Need to re-use data if that’s not the case
  • E.g. running cross-validations with different randomizations on the same data
• Samples become dependent, so insignificant differences can become significant
• A heuristic test is the corrected resampled t-test:
  • Assume we use the repeated hold-out method, with n1 instances for training and n2 for testing
  • The new test statistic is:
    $t = \frac{m_d}{\sqrt{\left(\frac{1}{k} + \frac{n_2}{n_1}\right) \sigma_d^2}}$


Predicting Probabilities


Predicting Probabilities
• Performance measure so far: success rate
• Also called the 0-1 loss function:
  $\sum_i \begin{cases} 0 & \text{if prediction is correct} \\ 1 & \text{if prediction is incorrect} \end{cases}$
• Most classifiers produce class probabilities
• Depending on the application, we might want to check the accuracy of the probability estimates
• 0-1 loss is not the right thing to use in those cases


Quadratic Loss Function
• $p_1, \dots, p_k$ are probability estimates for an instance
• c is the index of the instance’s actual class
• $a_1, \dots, a_k = 0$, except for $a_c$, which is 1
• Quadratic loss is:
  $\sum_j (p_j - a_j)^2 = \sum_{j \ne c} p_j^2 + (1 - p_c)^2$
• Want to minimize
  $E\left[\sum_j (p_j - a_j)^2\right]$
• Can show that this is minimized when $p_j = p_j^*$, the true probabilities


Informational Loss Function
• The informational loss function is $-\log_2 p_c$, where c is the index of the instance’s actual class
• Number of bits required to communicate the actual class
• Let $p_1^*, \dots, p_k^*$ be the true class probabilities
• Then the expected value for the loss function is:
  $-p_1^* \log_2 p_1 - \dots - p_k^* \log_2 p_k$
• Justification: minimized when $p_j = p_j^*$
• Difficulty: the zero-frequency problem
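
Both loss functions fit in a line or two of numpy; a sketch for a single instance with a hypothetical probability vector:

    import numpy as np

    def quadratic_loss(p, c):
        """p = predicted class probabilities, c = index of the actual class."""
        a = np.zeros_like(p)
        a[c] = 1.0
        return np.sum((p - a) ** 2)

    def informational_loss(p, c):
        """Bits needed to communicate the actual class under estimates p."""
        return -np.log2(p[c])

    p = np.array([0.7, 0.2, 0.1])  # hypothetical probability estimates
    print(quadratic_loss(p, c=0), informational_loss(p, c=0))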


Discussion
• Which loss function to choose? Both encourage honesty
• The quadratic loss function takes into account all class probability estimates for an instance
• The informational loss focuses only on the probability estimate for the actual class
• Quadratic loss is bounded by $1 + \sum_j p_j^2$; it can never exceed 2
• Informational loss can be infinite
• Informational loss is related to the MDL principle [later]


Counting the Cost


Counting the Cost
• In practice, different types of classification errors often incur different costs
• Examples:
  • Terrorist profiling (“not a terrorist” is correct 99.99% of the time)
  • Loan decisions
  • Oil-slick detection
  • Fault diagnosis
  • Promotional mailing


Counting the Cost
• The confusion matrix:

                          Predicted class
                          Yes               No
    Actual class   Yes    True positive     False negative
                   No     False positive    True negative

• There are many other types of cost!
  • E.g.: the cost of collecting training data


Aside: the Kappa Statistic
• Two confusion matrices for a 3-class problem: actual predictor (left) vs. random predictor (right) [matrices omitted]
• Number of successes: sum of the entries on the diagonal (D)
• Kappa statistic:
  $\kappa = \frac{D_{\text{observed}} - D_{\text{random}}}{D_{\text{perfect}} - D_{\text{random}}}$
  measures relative improvement over a random predictor (see the sketch below)
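
A sketch that computes kappa from a confusion matrix; the 3-class matrix below is hypothetical, standing in for the slide’s omitted example:

    import numpy as np

    def kappa(cm):
        """cm[i, j] = count of instances of actual class i predicted as class j."""
        n = cm.sum()
        d_observed = np.trace(cm)
        # Expected diagonal of a random predictor with the same marginals
        d_random = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n
        return (d_observed - d_random) / (n - d_random)  # D_perfect = n

    cm = np.array([[88, 10, 2],
                   [14, 40, 6],
                   [18, 10, 12]])  # hypothetical confusion matrix
    print("kappa =", kappa(cm))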


Classification with Costs
• Two cost matrices [matrices omitted]
• Success rate is replaced by average cost per prediction
  • The cost is given by the appropriate entry in the cost matrix


Cost-Sensitive Classification
• Costs can be taken into account when making predictions
  • Basic idea: only predict the high-cost class when very confident about the prediction
• Given: predicted class probabilities
  • Normally we just predict the most likely class
  • Here, we should make the prediction that minimizes the expected cost
  • Expected cost: dot product of the vector of class probabilities and the appropriate column in the cost matrix
  • Choose the column (class) that minimizes the expected cost, as in the sketch below
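
A minimal sketch of the dot-product rule with a hypothetical 2-class cost matrix:

    import numpy as np

    # cost[i, j] = cost of predicting class j when the actual class is i
    cost = np.array([[0.0, 1.0],    # hypothetical costs:
                     [10.0, 0.0]])  # false negatives are ten times worse

    p = np.array([0.8, 0.2])  # predicted class probabilities for one instance

    expected_cost = p @ cost  # dot product with each column of the cost matrix
    print(expected_cost, "-> predict class", int(np.argmin(expected_cost)))
    # Class 0 is more likely, but the expected costs [2.0, 0.8] favor class 1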


Cost-Sensitive Learning
• So far we haven’t taken costs into account at training time
• Most learning schemes do not perform cost-sensitive learning
  • They generate the same classifier no matter what costs are assigned to the different classes
  • Example: standard decision tree learner
• Simple methods for cost-sensitive learning:
  • Resampling of instances according to costs
  • Weighting of instances according to costs
• Some schemes can take costs into account by varying a parameter, e.g. naïve Bayes


Lift Charts
• In practice, costs are rarely known
• Decisions are usually made by comparing possible scenarios
• Example: promotional mailout to 1,000,000 households
  • Mail to all; 0.1% respond (1000)
  • A data mining tool identifies a subset of the 100,000 most promising; 0.4% of these respond (400)
    • 40% of the responses for 10% of the cost may pay off
  • Identify a subset of the 400,000 most promising; 0.2% respond (800)
• A lift chart allows a visual comparison


Generating a Lift Chart
• Sort instances according to predicted probability of being positive:

    Rank   Predicted probability   Actual class
    1      0.951                   Yes
    2      0.933                   No
    3      0.932                   Yes
    4      0.884                   Yes
    …      …                       …

• The x axis is the sample size; the y axis is the number of true positives
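
A sketch that turns predicted probabilities into the points of a lift chart (Yes = 1, No = 0; the four values come from the table above):

    import numpy as np

    def lift_points(prob, actual):
        """Cumulative true positives when taking the top-k ranked instances."""
        order = np.argsort(prob)[::-1]      # rank by predicted probability, descending
        tp = np.cumsum(actual[order])       # y axis: number of true positives
        size = np.arange(1, len(prob) + 1)  # x axis: sample size
        return size, tp

    prob = np.array([0.884, 0.933, 0.932, 0.951])
    actual = np.array([1, 0, 1, 1])
    print(lift_points(prob, actual))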


A Hypothetical Lift Chart
[chart omitted; it marks 40% of responses for 10% of cost, and 80% of responses for 40% of cost]


ROC Curves
• ROC curves are similar to lift charts
  • “ROC” stands for “receiver operating characteristic”
  • Used in signal detection to show the tradeoff between hit rate and false alarm rate over a noisy channel
• Differences to the lift chart:
  • The y axis shows the percentage of true positives in the sample rather than the absolute number
  • The x axis shows the percentage of false positives in the sample rather than the sample size


A Sample ROC Curve
• Jagged curve: one set of test data
• Smooth curve: use cross-validation


Cross-Validation and ROC Curves
• Simple method of getting an ROC curve using cross-validation:
  • Collect probabilities for instances in the test folds
  • Sort instances according to probabilities
• This method is implemented in WEKA
• However, this is just one possibility
  • Another possibility is to generate an ROC curve for each fold and average them


ROC Curves for Two Schemes
• For a small, focused sample, use method A
• For a larger one, use method B
• In between, choose between A and B with appropriate probabilities


The Convex Hull
• Given two learning schemes, we can achieve any point on the convex hull!
• TP and FP rates for scheme 1: t1 and f1
• TP and FP rates for scheme 2: t2 and f2
• If scheme 1 is used to predict 100 × q % of the cases and scheme 2 for the rest, then:
  • TP rate for the combined scheme: q × t1 + (1 − q) × t2
  • FP rate for the combined scheme: q × f1 + (1 − q) × f2
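
For example, with hypothetical rates t1 = 0.6, f1 = 0.2 for scheme 1 and t2 = 0.9, f2 = 0.5 for scheme 2, using each scheme on half the cases (q = 0.5) gives a combined TP rate of 0.5 × 0.6 + 0.5 × 0.9 = 0.75 and a combined FP rate of 0.5 × 0.2 + 0.5 × 0.5 = 0.35.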


More Measures...
• Percentage of retrieved documents that are relevant: precision = TP/(TP + FP)
• Percentage of relevant documents that are returned: recall = TP/(TP + FN)
• Precision/recall curves have a hyperbolic shape
• Summary measures: average precision at 20%, 50% and 80% recall (three-point average recall)
• F-measure = (2 × recall × precision)/(recall + precision)
• sensitivity × specificity = (TP/(TP + FN)) × (TN/(FP + TN))
• Area under the ROC curve (AUC): the probability that a randomly chosen positive instance is ranked above a randomly chosen negative one
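
All of these measures are available in scikit-learn; a sketch on hypothetical labels and scores:

    import numpy as np
    from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # hypothetical actual classes
    y_prob = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2])
    y_pred = (y_prob >= 0.5).astype(int)  # threshold the probabilities at 0.5

    print("precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
    print("recall:", recall_score(y_true, y_pred))        # TP / (TP + FN)
    print("F-measure:", f1_score(y_true, y_pred))
    print("AUC:", roc_auc_score(y_true, y_prob))          # rank-based, uses the scores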


Summary of Some Measures

    Domain                  Plot                     Axes                   Explanation
    Marketing               Lift chart               TP vs. subset size     subset size = (TP + FP)/(TP + FP + TN + FN)
    Communications          ROC curve                TP rate vs. FP rate    TP rate = TP/(TP + FN); FP rate = FP/(FP + TN)
    Information retrieval   Recall-precision curve   recall vs. precision   recall = TP/(TP + FN); precision = TP/(TP + FP)


Cost Curves
• Cost curves plot expected costs directly
• Example for the case with uniform costs (i.e. error): [plot omitted]


Cost Curves: Example with Costs
• Normalized expected cost:
  $\text{fn} \times p_c[+] + \text{fp} \times (1 - p_c[+])$
• Probability cost function:
  $p_c[+] = \frac{p[+]\,C[+|-]}{p[+]\,C[+|-] + p[-]\,C[-|+]}$
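
A direct transcription of the two formulas into Python; the numbers are hypothetical, and the argument names simply mirror the slide’s notation:

    def probability_cost_plus(p_plus, c_plus_given_minus, c_minus_given_plus):
        """pc[+] = p[+]C[+|-] / (p[+]C[+|-] + p[-]C[-|+])"""
        num = p_plus * c_plus_given_minus
        return num / (num + (1 - p_plus) * c_minus_given_plus)

    def normalized_expected_cost(fn, fp, pc_plus):
        """fn, fp: false negative and false positive rates."""
        return fn * pc_plus + fp * (1 - pc_plus)

    pc = probability_cost_plus(p_plus=0.3, c_plus_given_minus=1.0, c_minus_given_plus=10.0)
    print(normalized_expected_cost(fn=0.10, fp=0.20, pc_plus=pc))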


Evaluating Numeric Prediction


Evaluating Numeric Prediction
• Same strategies: independent test set, cross-validation, significance tests, etc.
• Difference: the error measures
• Actual target values: $a_1, a_2, \dots, a_n$
• Predicted target values: $p_1, p_2, \dots, p_n$
• Most popular measure: the mean-squared error
  $\frac{(p_1 - a_1)^2 + \dots + (p_n - a_n)^2}{n}$
  • Easy to manipulate mathematically


Other Measures
• The root mean-squared error:
  $\sqrt{\frac{(p_1 - a_1)^2 + \dots + (p_n - a_n)^2}{n}}$
• The mean absolute error is less sensitive to outliers than the mean-squared error:
  $\frac{|p_1 - a_1| + \dots + |p_n - a_n|}{n}$
• Sometimes relative error values are more appropriate (e.g. 10% for an error of 50 when predicting 500)


Improvement on the Mean
• How much does the scheme improve on simply predicting the average $\bar{a}$?
• The relative squared error is:
  $\frac{(p_1 - a_1)^2 + \dots + (p_n - a_n)^2}{(\bar{a} - a_1)^2 + \dots + (\bar{a} - a_n)^2}$
• The relative absolute error is:
  $\frac{|p_1 - a_1| + \dots + |p_n - a_n|}{|\bar{a} - a_1| + \dots + |\bar{a} - a_n|}$
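
All of these error measures fit in a few lines of numpy; a sketch with hypothetical predictions and targets:

    import numpy as np

    def numeric_measures(p, a):
        """p = predicted values, a = actual target values."""
        a_bar = a.mean()
        return {
            "MSE": np.mean((p - a) ** 2),
            "RMSE": np.sqrt(np.mean((p - a) ** 2)),
            "MAE": np.mean(np.abs(p - a)),
            "relative squared error": np.sum((p - a) ** 2) / np.sum((a_bar - a) ** 2),
            "relative absolute error": np.sum(np.abs(p - a)) / np.sum(np.abs(a_bar - a)),
            "correlation": np.corrcoef(p, a)[0, 1],
        }

    p = np.array([500.0, 270.0, 320.0, 110.0])  # hypothetical predictions
    a = np.array([450.0, 300.0, 330.0, 100.0])  # actual target values
    print(numeric_measures(p, a))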


Correlation Coefficient
• Measures the statistical correlation between the predicted values and the actual values
• Scale independent, between −1 and +1
• Good performance leads to large values!


Performance Measures for Numeric Prediction
[summary table omitted]


Which measure?
• Best to look at all of them; often it doesn’t matter
• Example:

    Measure                       A       B       C       D
    Root mean-squared error       67.8    91.7    63.3    57.4
    Mean absolute error           41.3    38.5    33.4    29.2
    Root relative squared error   42.2%   57.2%   39.4%   35.8%
    Relative absolute error       43.1%   40.1%   34.8%   30.4%
    Correlation coefficient       0.88    0.88    0.89    0.91

• D is best, C second-best; A and B are arguable


The MDL Principle
• MDL stands for minimum description length
• The description length is defined as: the space required to describe a theory + the space required to describe the theory’s mistakes
• In our case the theory is the classifier and the mistakes are the errors on the training data
• Aim: we seek a classifier with minimal DL
• The MDL principle is a model selection criterion


Model Selection Criteria
• Model selection criteria attempt to find a good compromise between:
  • The complexity of a model
  • Its prediction accuracy on the training data
• Reasoning: a good model is a simple model that achieves high accuracy on the given data
• Also known as Occam’s Razor: the best theory is the smallest one that describes all the facts

William of Ockham, born in the village of Ockham in Surrey (England) about 1285, was the most influential philosopher of the 14th century and a controversial theologian.


Elegance vs. Errors
• Theory 1: a very simple, elegant theory that explains the data almost perfectly
• Theory 2: a significantly more complex theory that reproduces the data without mistakes
• Theory 1 is probably preferable
• Classical example: Kepler’s three laws on planetary motion
  • Less accurate than Copernicus’s latest refinement of the Ptolemaic theory of epicycles


MDL and Compression
• The MDL principle relates to data compression:
  • The best theory is the one that compresses the data the most
  • I.e. to compress a dataset we generate a model and then store the model and its mistakes
• We need to compute (a) the size of the model, and (b) the space needed to encode the errors
• (b) is easy: use the informational loss function
• (a) requires a method to encode the model


MDL and Bayes’s Theorem
• L[T] = “length” of the theory
• L[E|T] = training set encoded with respect to the theory
• Description length = L[T] + L[E|T]
• Bayes’s theorem gives the a posteriori probability of a theory given the data:
  $\Pr[T|E] = \frac{\Pr[E|T]\,\Pr[T]}{\Pr[E]}$
• Equivalent to:
  $-\log \Pr[T|E] = -\log \Pr[E|T] - \log \Pr[T] + \log \Pr[E]$


MDL and MAP
• MAP stands for maximum a posteriori probability
• Finding the MAP theory corresponds to finding the MDL theory
• The difficult bit in applying the MAP principle: determining the prior probability Pr[T] of the theory
• This corresponds to the difficult part in applying the MDL principle: the coding scheme for the theory
• I.e. if we know a priori that a particular theory is more likely, we need fewer bits to encode it


Discussion of the MDL Principle
• Advantage: makes full use of the training data when selecting a model
• Disadvantage 1: an appropriate coding scheme/prior probabilities for theories are crucial
• Disadvantage 2: no guarantee that the MDL theory is the one which minimizes the expected error
• Note: Occam’s Razor is an axiom!
• Epicurus’s principle of multiple explanations: keep all theories that are consistent with the data


MDL and Clustering
• Description length of the theory: the bits needed to encode the clusters
  • E.g. the cluster centers
• Description length of the data given the theory: encode cluster membership and position relative to the cluster
  • E.g. the distance to the cluster center
• Works if the coding scheme uses less code space for small numbers than for large ones
• With nominal attributes, we must communicate probability distributions for each cluster


References
1. Ian H. Witten, Eibe Frank, and Mark A. Hall, Data Mining: Practical Machine Learning Tools and Techniques, 3rd Edition, Elsevier, 2011
2. Daniel T. Larose, Discovering Knowledge in Data: An Introduction to Data Mining, John Wiley & Sons, 2005
3. Florin Gorunescu, Data Mining: Concepts, Models and Techniques, Springer, 2011
4. Jiawei Han and Micheline Kamber, Data Mining: Concepts and Techniques, 2nd Edition, Elsevier, 2006
5. Oded Maimon and Lior Rokach, Data Mining and Knowledge Discovery Handbook, 2nd Edition, Springer, 2010
6. Warren Liao and Evangelos Triantaphyllou (eds.), Recent Advances in Data Mining of Enterprise Data: Algorithms and Applications, World Scientific, 2007

