Page 1

Data Mining: Concepts and Techniques
Classification: Basic Concepts

Page 2

Classification: Basic Concepts

Classification: Basic Concepts
Decision Tree Induction
Rule-Based Classification
Model Evaluation and Selection
Summary

Page 3

Supervised vs. Unsupervised Learning

Supervised learning (classification)
- Supervision: the training data (observations, measurements, etc.) are accompanied by labels indicating the class of the observations
- New data are classified based on the training set

Unsupervised learning (clustering)
- The class labels of the training data are unknown
- Given a set of measurements, observations, etc., the aim is to establish the existence of classes or clusters in the data

Page 4

Prediction Problems: Classification vs. Numeric Prediction

Classification
- predicts categorical class labels (discrete or nominal)
- classifies data (constructs a model) based on the training set and the values (class labels) in a classifying attribute, and uses it to classify new data

Numeric prediction
- models continuous-valued functions, i.e., predicts unknown or missing values

Typical applications
- Credit/loan approval
- Medical diagnosis: is a tumor cancerous or benign?
- Fraud detection: is a transaction fraudulent?
- Web page categorization: which category does a page belong to?

Page 5

Classification: A Two-Step Process

Model construction: describing a set of predetermined classes
- Each tuple/sample is assumed to belong to a predefined class, as determined by the class label attribute
- The set of tuples used for model construction is the training set
- The model is represented as classification rules, decision trees, or mathematical formulae

Model usage: classifying future or unknown objects
- Estimate the accuracy of the model
  - The known label of each test sample is compared with the classified result from the model
  - The accuracy rate is the percentage of test set samples that are correctly classified by the model
  - The test set is independent of the training set (otherwise overfitting occurs)
- If the accuracy is acceptable, use the model to classify data tuples whose class labels are not known
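To make the two-step process concrete, here is a minimal sketch assuming scikit-learn and its iris sample data; the slides prescribe no library, and the 0.9 acceptance threshold is purely illustrative.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)                       # class-labeled tuples
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = DecisionTreeClassifier().fit(X_train, y_train)  # step 1: model construction on the training set
acc = accuracy_score(y_test, model.predict(X_test))     # step 2: estimate accuracy on an independent test set
if acc >= 0.9:                                          # acceptable threshold is application-specific
    predictions = model.predict(X_test[:5])             # use the model to classify new tuples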

Page 6

Figure: The data classification process. (a) Learning: Training data are analyzed by a classification algorithm. Here, the class label attribute is loan_decision, and the learned model or classifier is represented in the form of classification rules. (b) Classification: Test data are used to estimate the accuracy of the classification rules. If the accuracy is considered acceptable, the rules can be applied to the classification of new data tuples.

Page 7

Process (1): Model Construction

Training data:

NAME  RANK            YEARS  TENURED
Mike  Assistant Prof  3      no
Mary  Assistant Prof  7      yes
Bill  Professor       2      yes
Jim   Associate Prof  7      yes
Dave  Assistant Prof  6      no
Anne  Associate Prof  3      no

Classification algorithms produce the classifier (model):

IF rank = 'professor' OR years > 6
THEN tenured = 'yes'

Page 8

Process (2): Using the Model in Prediction

Testing data (fed to the classifier):

NAME     RANK            YEARS  TENURED
Tom      Assistant Prof  2      no
Merlisa  Associate Prof  7      no
George   Professor       5      yes
Joseph   Assistant Prof  7      yes

Unseen data:

(Jeff, Professor, 4)

Tenured?
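As a sketch, the rule learned on the previous page can be written as a plain Python function and applied to the test set and the unseen tuple (the function name and tuple encoding are illustrative, not from the slides):

def tenured(rank, years):
    # learned model: IF rank = 'professor' OR years > 6 THEN tenured = 'yes'
    return "yes" if rank == "Professor" or years > 6 else "no"

testing = [("Tom", "Assistant Prof", 2, "no"), ("Merlisa", "Associate Prof", 7, "no"),
           ("George", "Professor", 5, "yes"), ("Joseph", "Assistant Prof", 7, "yes")]
correct = sum(tenured(rank, years) == label for _, rank, years, label in testing)
print(correct / len(testing))   # estimated accuracy: 0.75 (Merlisa is misclassified)
print(tenured("Professor", 4))  # unseen tuple (Jeff, Professor, 4) -> 'yes'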

Page 9

Classification: Basic Concepts

Classification: Basic Concepts
Decision Tree Induction
Rule-Based Classification
Model Evaluation and Selection
Summary

Page 10

Decision Tree Induction: An Example

Training data set: Buys_computer. The data set follows an example from Quinlan's ID3 (Playing Tennis).

age     income  student  credit_rating  buys_computer
<=30    high    no       fair           no
<=30    high    no       excellent      no
31..40  high    no       fair           yes
>40     medium  no       fair           yes
>40     low     yes      fair           yes
>40     low     yes      excellent      no
31..40  low     yes      excellent      yes
<=30    medium  no       fair           no
<=30    low     yes      fair           yes
>40     medium  yes      fair           yes
<=30    medium  yes      excellent      yes
31..40  medium  no       excellent      yes
31..40  high    yes      fair           yes
>40     medium  no       excellent      no

Resulting tree:

age?
  <=30:   student?
            no  -> no
            yes -> yes
  31..40: yes
  >40:    credit rating?
            excellent -> no
            fair      -> yes
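A minimal sketch of inducing a tree from this table, assuming scikit-learn and an ordinal encoding of the categorical attributes (the slides name no implementation; note that sklearn's CART builds binary splits rather than the multiway splits shown above, and criterion='entropy' selects information gain as in ID3):

from sklearn.tree import DecisionTreeClassifier, export_text

# encoding: age <=30 -> 0, 31..40 -> 1, >40 -> 2; income low 0, medium 1, high 2
# student no 0, yes 1; credit_rating fair 0, excellent 1
X = [[0,2,0,0],[0,2,0,1],[1,2,0,0],[2,1,0,0],[2,0,1,0],[2,0,1,1],[1,0,1,1],
     [0,1,0,0],[0,0,1,0],[2,1,1,0],[0,1,1,1],[1,1,0,1],[1,2,1,0],[2,1,0,1]]
y = ["no","no","yes","yes","yes","no","yes","no","yes","yes","yes","yes","yes","no"]

tree = DecisionTreeClassifier(criterion="entropy").fit(X, y)  # information gain, as in ID3
print(export_text(tree, feature_names=["age", "income", "student", "credit_rating"]))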

Page 11

Algorithm for Decision Tree Induction

Basic algorithm (a greedy algorithm); see the sketch below
- The tree is constructed in a top-down, recursive, divide-and-conquer manner
- At the start, all the training examples are at the root
- Attributes are categorical (continuous-valued attributes are discretized in advance)
- Examples are partitioned recursively based on selected attributes
- Test attributes are selected on the basis of a heuristic or statistical measure (e.g., information gain)

Conditions for stopping partitioning
- All samples for a given node belong to the same class
- There are no remaining attributes for further partitioning; majority voting is employed for classifying the leaf
- There are no samples left
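A compact sketch of this greedy recursion in plain Python (illustrative; attribute selection is left abstract as choose_attribute, which could be any of the measures on the following pages):

from collections import Counter

def induce_tree(examples, attributes, choose_attribute):
    """examples: list of (attribute_dict, class_label) pairs; top-down divide and conquer."""
    labels = [label for _, label in examples]
    if len(set(labels)) == 1:                      # stop: all samples belong to one class
        return labels[0]
    if not attributes:                             # stop: no attributes left -> majority vote
        return Counter(labels).most_common(1)[0][0]
    best = choose_attribute(examples, attributes)  # heuristic, e.g. information gain
    branches = {}
    for value in {row[best] for row, _ in examples}:   # partition recursively on the chosen attribute
        subset = [(row, lab) for row, lab in examples if row[best] == value]
        branches[value] = induce_tree(subset, [a for a in attributes if a != best], choose_attribute)
    return (best, branches)
    # note: branching only on values present in the data sidesteps the "no samples left" case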

Page 12

Figure: Basic algorithm for inducing a decision tree from training tuples.

Page 13

Attribute Selection Measure: Information Gain (ID3/C4.5)

Select the attribute with the highest information gain.

Let p_i be the probability that an arbitrary tuple in D belongs to class C_i, estimated by |C_{i,D}|/|D|.

Expected information (entropy) needed to classify a tuple in D:

Info(D) = -\sum_{i=1}^{m} p_i \log_2(p_i)

Information needed (after using A to split D into v partitions) to classify D:

Info_A(D) = \sum_{j=1}^{v} \frac{|D_j|}{|D|} \times Info(D_j)

Information gained by branching on attribute A:

Gain(A) = Info(D) - Info_A(D)
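These definitions translate directly into code; a small sketch (function and variable names are illustrative):

import math
from collections import Counter

def info(labels):
    """Expected information (entropy) Info(D) for a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain(rows, attr, labels):
    """Gain(A) = Info(D) - Info_A(D); rows are dicts mapping attribute -> value."""
    n = len(labels)
    info_a = 0.0
    for value in {row[attr] for row in rows}:      # one partition D_j per attribute value
        part = [lab for row, lab in zip(rows, labels) if row[attr] == value]
        info_a += len(part) / n * info(part)
    return info(labels) - info_a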

Page 14

Attribute Selection: Information Gain

Class P: buys_computer = "yes" (9 tuples); Class N: buys_computer = "no" (5 tuples). (Training data as on Page 10.)

Info(D) = I(9,5) = -\frac{9}{14}\log_2\frac{9}{14} - \frac{5}{14}\log_2\frac{5}{14} = 0.940

age     p_i  n_i  I(p_i, n_i)
<=30    2    3    0.971
31..40  4    0    0
>40     3    2    0.971

Here the term (5/14) I(2,3) means "age <= 30" has 5 out of 14 samples, with 2 yes's and 3 no's. Hence

Info_age(D) = \frac{5}{14} I(2,3) + \frac{4}{14} I(4,0) + \frac{5}{14} I(3,2) = 0.694

Gain(age) = Info(D) - Info_age(D) = 0.246

Similarly,
Gain(income) = 0.029
Gain(student) = 0.151
Gain(credit_rating) = 0.048
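Using the info/gain sketch from Page 13, these numbers can be checked directly (a usage sketch; small last-digit differences come from the slides rounding Info values before subtracting):

rows = [dict(zip(["age", "income", "student", "credit_rating"], r)) for r in [
    ("<=30", "high", "no", "fair"),        ("<=30", "high", "no", "excellent"),
    ("31..40", "high", "no", "fair"),      (">40", "medium", "no", "fair"),
    (">40", "low", "yes", "fair"),         (">40", "low", "yes", "excellent"),
    ("31..40", "low", "yes", "excellent"), ("<=30", "medium", "no", "fair"),
    ("<=30", "low", "yes", "fair"),        (">40", "medium", "yes", "fair"),
    ("<=30", "medium", "yes", "excellent"),("31..40", "medium", "no", "excellent"),
    ("31..40", "high", "yes", "fair"),     (">40", "medium", "no", "excellent")]]
labels = ["no", "no", "yes", "yes", "yes", "no", "yes",
          "no", "yes", "yes", "yes", "yes", "yes", "no"]

print(f"Info(D) = {info(labels):.3f}")               # 0.940
for a in ["age", "income", "student", "credit_rating"]:
    print(f"Gain({a}) = {gain(rows, a, labels):.3f}")
# age 0.247, income 0.029, student 0.152, credit_rating 0.048
# (the slides' 0.246 and 0.151 result from computing 0.940 - 0.694 etc. on rounded values)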

Page 15: 1 Data Mining: Concepts and Techniques Classification: Basic Concepts.

Figure The attribute age has the highest information gain and therefore becomes the splitting attribute at the root node of the decision tree. Branches are grown for each outcome of age. The tuples are shown partitioned accordingly.

Page 16

Gain Ratio for Attribute Selection (C4.5)

The information gain measure is biased towards attributes with a large number of values.

C4.5 (a successor of ID3) uses gain ratio to overcome the problem (a normalization of information gain):

SplitInfo_A(D) = -\sum_{j=1}^{v} \frac{|D_j|}{|D|} \times \log_2\left(\frac{|D_j|}{|D|}\right)

GainRatio(A) = Gain(A) / SplitInfo_A(D)

Ex.: income splits D into 4 high, 6 medium, and 4 low tuples, so
SplitInfo_income(D) = -\frac{4}{14}\log_2\frac{4}{14} - \frac{6}{14}\log_2\frac{6}{14} - \frac{4}{14}\log_2\frac{4}{14} = 1.557
gain_ratio(income) = 0.029/1.557 = 0.019

The attribute with the maximum gain ratio is selected as the splitting attribute.
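A sketch extending the earlier functions (gain, rows, and labels as defined on Pages 13-14; names illustrative):

import math
from collections import Counter

def split_info(rows, attr):
    """SplitInfo_A(D): entropy of the partition sizes induced by attribute attr."""
    n = len(rows)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(row[attr] for row in rows).values())

def gain_ratio(rows, attr, labels):
    return gain(rows, attr, labels) / split_info(rows, attr)

print(f"{split_info(rows, 'income'):.3f}")          # 1.557
print(f"{gain_ratio(rows, 'income', labels):.3f}")  # 0.019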

Page 17

Gini Index (CART, IBM IntelligentMiner)

If a data set D contains examples from n classes, the gini index gini(D) is defined as

gini(D) = 1 - \sum_{j=1}^{n} p_j^2

where p_j is the relative frequency of class j in D.

If a data set D is split on A into two subsets D1 and D2, the gini index gini_A(D) is defined as

gini_A(D) = \frac{|D_1|}{|D|} gini(D_1) + \frac{|D_2|}{|D|} gini(D_2)

Reduction in impurity:

\Delta gini(A) = gini(D) - gini_A(D)

The attribute that provides the smallest gini_A(D) (or, equivalently, the largest reduction in impurity) is chosen to split the node (this requires enumerating all possible splitting points for each attribute).
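A sketch of these two definitions (illustrative names; the split is represented by the label lists of D1 and D2):

from collections import Counter

def gini(labels):
    """gini(D) = 1 - sum of squared class frequencies."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def gini_split(left_labels, right_labels):
    """gini_A(D) for a binary split of D into D1 and D2."""
    n = len(left_labels) + len(right_labels)
    return (len(left_labels) / n * gini(left_labels)
            + len(right_labels) / n * gini(right_labels))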

Page 18

Computation of Gini Index

Ex.: D has 9 tuples with buys_computer = "yes" and 5 with "no":

gini(D) = 1 - \left(\frac{9}{14}\right)^2 - \left(\frac{5}{14}\right)^2 = 0.459

Suppose the attribute income partitions D into 10 tuples in D1: {low, medium} and 4 in D2: {high}:

gini_{income \in \{low,medium\}}(D) = \frac{10}{14} gini(D_1) + \frac{4}{14} gini(D_2) = 0.443

Gini_{low,high} is 0.458 and Gini_{medium,high} is 0.450. Thus, split on {low, medium} (and {high}), since it has the lowest Gini index.

In CART, all attributes are assumed continuous-valued:
- other tools, e.g., clustering, may be needed to get the possible split values
- the method can be modified for categorical attributes
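Verifying with the gini/gini_split sketch above (rows and labels as defined on Page 14):

print(f"{gini(labels):.3f}")   # 0.459

def income_split(subset_values):
    """Labels of D1 (income in subset_values) and D2 (the rest)."""
    d1 = [lab for row, lab in zip(rows, labels) if row["income"] in subset_values]
    d2 = [lab for row, lab in zip(rows, labels) if row["income"] not in subset_values]
    return d1, d2

for subset in [{"low", "medium"}, {"low", "high"}, {"medium", "high"}]:
    print(sorted(subset), f"{gini_split(*income_split(subset)):.3f}")
# ['low', 'medium'] 0.443   ['high', 'low'] 0.458   ['high', 'medium'] 0.450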

Page 19

Comparing Attribute Selection Measures

The three measures, in general, return good results, but:

Information gain:
- biased towards multivalued attributes

Gain ratio:
- tends to prefer unbalanced splits in which one partition is much smaller than the others

Gini index:
- biased towards multivalued attributes
- has difficulty when the number of classes is large
- tends to favor tests that result in equal-sized partitions and purity in both partitions

Page 20

Other Attribute Selection Measures

CHAID: a popular decision tree algorithm; its measure is based on the χ2 test for independence

C-SEP: performs better than information gain and the gini index in certain cases

G-statistic: has a close approximation to the χ2 distribution

MDL (Minimal Description Length) principle (i.e., the simplest solution is preferred): the best tree is the one that requires the fewest bits to both (1) encode the tree and (2) encode the exceptions to the tree

Multivariate splits (partition based on multiple variable combinations): CART finds multivariate splits based on a linear combination of attributes

Which attribute selection measure is the best? Most give good results; none is significantly superior to the others.

Page 21

Overfitting and Tree Pruning

Overfitting: an induced tree may overfit the training data
- Too many branches; some may reflect anomalies due to noise or outliers
- Poor accuracy for unseen samples

Two approaches to avoid overfitting (see the sketch below)
- Prepruning: halt tree construction early; do not split a node if this would result in the goodness measure falling below a threshold
  - It is difficult to choose an appropriate threshold
- Postpruning: remove branches from a "fully grown" tree to get a sequence of progressively pruned trees
  - Use a set of data different from the training data to decide which is the "best pruned tree"
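A sketch of both ideas in scikit-learn terms (an assumption; the slides name no library): min_impurity_decrease acts as a prepruning threshold, and cost-complexity pruning via ccp_alpha yields a sequence of progressively pruned trees compared on held-out data.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

pre = DecisionTreeClassifier(min_impurity_decrease=0.01).fit(X_tr, y_tr)   # prepruning

path = DecisionTreeClassifier().cost_complexity_pruning_path(X_tr, y_tr)   # postpruning
trees = [DecisionTreeClassifier(ccp_alpha=a).fit(X_tr, y_tr) for a in path.ccp_alphas]
best = max(trees, key=lambda t: t.score(X_val, y_val))   # best pruned tree on held-out data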

Page 22

Classification: Basic Concepts

Classification: Basic Concepts
Decision Tree Induction
Rule-Based Classification
Model Evaluation and Selection
Summary

Page 23

Using IF-THEN Rules for Classification

Represent the knowledge in the form of IF-THEN rules:

R: IF age = youth AND student = yes THEN buys_computer = yes
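One plain way to represent and apply such a rule in code (a sketch; the conditions-plus-conclusion encoding is illustrative):

rule = ({"age": "youth", "student": "yes"},   # IF part: a conjunction of attribute tests
        ("buys_computer", "yes"))             # THEN part: the class prediction

def rule_covers(rule, tuple_):
    conditions, _ = rule
    return all(tuple_.get(attr) == value for attr, value in conditions.items())

print(rule_covers(rule, {"age": "youth", "student": "yes", "income": "medium"}))  # True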

Page 24

Rule Extraction from a Decision Tree

- Rules are easier to understand than large trees
- One rule is created for each path from the root to a leaf
- Each attribute-value pair along a path forms a conjunction; the leaf holds the class prediction
- The rules are mutually exclusive and exhaustive

Example: rule extraction from our buys_computer decision tree (Page 10):

IF age = young AND student = no            THEN buys_computer = no
IF age = young AND student = yes           THEN buys_computer = yes
IF age = mid-age                           THEN buys_computer = yes
IF age = old AND credit_rating = excellent THEN buys_computer = no
IF age = old AND credit_rating = fair      THEN buys_computer = yes
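A sketch of path-to-rule extraction over a nested-dict tree (the dict mirrors the tree on Page 10; names are illustrative):

tree = {"age": {"young":   {"student": {"no": "no", "yes": "yes"}},
                "mid-age": "yes",
                "old":     {"credit_rating": {"excellent": "no", "fair": "yes"}}}}

def extract_rules(node, path=()):
    if not isinstance(node, dict):   # leaf: emit one rule per root-to-leaf path
        conds = " AND ".join(f"{attr} = {value}" for attr, value in path)
        return [f"IF {conds} THEN buys_computer = {node}"]
    (attr, branches), = node.items()
    rules = []
    for value, child in branches.items():
        rules += extract_rules(child, path + ((attr, value),))
    return rules

for r in extract_rules(tree):
    print(r)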

Page 25

Model Evaluation and Selection

Evaluation metrics: How can we measure accuracy? What other metrics should we consider?

Use a test set of class-labeled tuples, rather than the training set, when assessing accuracy.

Page 26

Classifier Evaluation Metrics: Confusion Matrix

Given m classes, an entry CM_{i,j} in a confusion matrix indicates the number of tuples in class i that were labeled by the classifier as class j.

Confusion matrix:

Actual class \ Predicted class   C1                    ¬C1
C1                               True Positives (TP)   False Negatives (FN)
¬C1                              False Positives (FP)  True Negatives (TN)

Example of a confusion matrix:

Actual class \ Predicted class   buys_computer = yes   buys_computer = no   Total
buys_computer = yes              6954                  46                   7000
buys_computer = no               412                   2588                 3000
Total                            7366                  2634                 10000
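A sketch of tallying such a matrix from actual and predicted label lists (pure Python; names illustrative):

from collections import Counter

def confusion_matrix(actual, predicted):
    """CM[(i, j)] = number of tuples of class i labeled by the classifier as class j."""
    return Counter(zip(actual, predicted))

cm = confusion_matrix(["yes", "yes", "no", "no"], ["yes", "no", "no", "yes"])
print(cm[("yes", "yes")], cm[("yes", "no")], cm[("no", "yes")], cm[("no", "no")])  # 1 1 1 1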

Page 27

Classifier Evaluation Metrics: Accuracy, Error Rate, Sensitivity and Specificity

A \ P   C    ¬C
C       TP   FN   P
¬C      FP   TN   N
        P'   N'   All

Classifier accuracy, or recognition rate: the percentage of test set tuples that are correctly classified
  Accuracy = (TP + TN)/All

Error rate: 1 - accuracy, or
  Error rate = (FP + FN)/All

Class imbalance problem:
- One class may be rare, e.g., fraud
- Significant majority of the negative class and minority of the positive class

Sensitivity: true positive recognition rate
  Sensitivity = TP/P
Specificity: true negative recognition rate
  Specificity = TN/N
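These follow mechanically from the confusion matrix counts; a sketch applied to the buys_computer matrix on Page 26 (the function name is illustrative):

def metrics(tp, fn, fp, tn):
    p, n, total = tp + fn, fp + tn, tp + fn + fp + tn
    return {"accuracy": (tp + tn) / total,
            "error_rate": (fp + fn) / total,
            "sensitivity": tp / p,    # true positive recognition rate
            "specificity": tn / n}    # true negative recognition rate

print(metrics(tp=6954, fn=46, fp=412, tn=2588))
# accuracy 0.9542, error_rate 0.0458, sensitivity ~0.9934, specificity ~0.8627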

Page 28

Classifier Evaluation Metrics: Precision and Recall, and F-measures

Precision (exactness): what % of the tuples that the classifier labeled as positive are actually positive?
  Precision = TP/(TP + FP)

Recall (completeness): what % of the positive tuples did the classifier label as positive?
  Recall = TP/(TP + FN)

A perfect score is 1.0.

F measure (F1 or F-score): the harmonic mean of precision and recall:
  F = (2 × precision × recall)/(precision + recall)

F_β: a weighted measure of precision and recall; assigns β times as much weight to recall as to precision:
  F_β = ((1 + β²) × precision × recall)/(β² × precision + recall)

Page 29

Classifier Evaluation Metrics: Example

Actual class \ Predicted class   cancer = yes   cancer = no   Total   Recognition (%)
cancer = yes                     90             210           300     30.00 (sensitivity)
cancer = no                      140            9560          9700    98.56 (specificity)
Total                            230            9770          10000   96.50 (accuracy)

Precision = 90/230 = 39.13%        Recall = 90/300 = 30.00%
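Checking these numbers with a short sketch, using the definitions from Page 28:

tp, fn, fp, tn = 90, 210, 140, 9560
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(f"precision={precision:.4f} recall={recall:.4f} f1={f1:.4f}")
# precision=0.3913 recall=0.3000 f1=0.3396
print(f"accuracy={(tp + tn) / (tp + fn + fp + tn):.4f}")   # accuracy=0.9650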

Page 30: 1 Data Mining: Concepts and Techniques Classification: Basic Concepts.

Issues Affecting Model Selection

Accuracy classifier accuracy: predicting class label

Speed time to construct the model (training time) time to use the model (classification/prediction time)

Robustness: handling noise and missing values Scalability: efficiency in disk-resident databases Interpretability

understanding and insight provided by the model Other measures, e.g., goodness of rules, such as decision tree

size or compactness of classification rules30

Page 31: 1 Data Mining: Concepts and Techniques Classification: Basic Concepts.

Summary (I)

Classification is a form of data analysis that extracts models describing important data classes.

Effective and scalable methods have been developed for decision tree induction, Naive Bayesian classification, rule-based classification, and many other classification methods.

Evaluation metrics include: accuracy, sensitivity, specificity, precision, recall, F measure, and Fß measure.

31

Page 32: 1 Data Mining: Concepts and Techniques Classification: Basic Concepts.

Reference: Books on Classification

E. Alpaydin. Introduction to Machine Learning, 2nd ed., MIT Press, 2011 L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees.

Wadsworth International Group, 1984. C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006. R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification, 2ed. John Wiley, 2001 T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining,

Inference, and Prediction. Springer-Verlag, 2001 H. Liu and H. Motoda (eds.). Feature Extraction, Construction, and Selection: A Data Mining

Perspective. Kluwer Academic, 1998T. M. Mitchell. Machine Learning. McGraw Hill, 1997 S. Marsland. Machine Learning: An Algorithmic Perspective. Chapman and Hall/CRC, 2009. J. R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993 J. W. Shavlik and T. G. Dietterich. Readings in Machine Learning. Morgan Kaufmann, 1990. P. Tan, M. Steinbach, and V. Kumar. Introduction to Data Mining. Addison Wesley, 2005. S. M. Weiss and C. A. Kulikowski. Computer Systems that Learn: Classification and Prediction

Methods from Statistics, Neural Nets, Machine Learning, and Expert Systems. Morgan Kaufman, 1991.

S. M. Weiss and N. Indurkhya. Predictive Data Mining. Morgan Kaufmann, 1997. I. H. Witten and E. Frank. Data Mining: Practical Machine Learning Tools and Techniques, 2ed.

Morgan Kaufmann, 2005.32

Page 33: 1 Data Mining: Concepts and Techniques Classification: Basic Concepts.

Reference: Decision-Trees

M. Ankerst, C. Elsen, M. Ester, and H.-P. Kriegel. Visual classification: An interactive approach to decision tree construction. KDD'99

C. Apte and S. Weiss. Data mining with decision trees and decision rules. Future Generation Computer Systems, 13, 1997

C. E. Brodley and P. E. Utgoff. Multivariate decision trees. Machine Learning, 19:45–77, 1995. P. K. Chan and S. J. Stolfo. Learning arbiter and combiner trees from partitioned data for scaling machine

learning. KDD'95 U. M. Fayyad. Branching on attribute values in decision tree generation. AAAI’94 M. Mehta, R. Agrawal, and J. Rissanen. SLIQ : A fast scalable classifier for data mining. EDBT'96. J. Gehrke, R. Ramakrishnan, and V. Ganti. Rainforest: A framework for fast decision tree construction of large

datasets. VLDB’98. J. Gehrke, V. Gant, R. Ramakrishnan, and W.-Y. Loh, BOAT -- Optimistic Decision Tree Construction. SIGMOD'99. S. K. Murthy, Automatic Construction of Decision Trees from Data: A Multi-Disciplinary Survey, Data Mining and

Knowledge Discovery 2(4): 345-389, 1998 J. R. Quinlan. Induction of decision trees. Machine Learning, 1:81-106, 1986 J. R. Quinlan and R. L. Rivest. Inferring decision trees using the minimum description length principle.

Information and Computation, 80:227–248, Mar. 1989 S. K. Murthy. Automatic construction of decision trees from data: A multi-disciplinary survey. Data Mining and

Knowledge Discovery, 2:345–389, 1998. R. Rastogi and K. Shim. Public: A decision tree classifier that integrates building and pruning. VLDB’98. J. Shafer, R. Agrawal, and M. Mehta. SPRINT : A scalable parallel classifier for data mining. VLDB’96 Y.-S. Shih. Families of splitting criteria for classification trees. Statistics and Computing, 9:309–315, 1999. 33

Page 34: 1 Data Mining: Concepts and Techniques Classification: Basic Concepts.

Reference: Neural Networks

C. M. Bishop, Neural Networks for Pattern Recognition. Oxford University Press, 1995

Y. Chauvin and D. Rumelhart. Backpropagation: Theory, Architectures, and Applications. Lawrence Erlbaum, 1995

J. W. Shavlik, R. J. Mooney, and G. G. Towell. Symbolic and neural learning algorithms: An experimental comparison. Machine Learning, 6:111–144, 1991

S. Haykin. Neural Networks and Learning Machines. Prentice Hall, Saddle River, NJ, 2008

J. Hertz, A. Krogh, and R. G. Palmer. Introduction to the Theory of Neural Computation. Addison Wesley, 1991.

R. Hecht-Nielsen. Neurocomputing. Addison Wesley, 1990 B. D. Ripley. Pattern Recognition and Neural Networks. Cambridge University

Press, 1996

34

Page 35: 1 Data Mining: Concepts and Techniques Classification: Basic Concepts.

Reference: Support Vector Machines

C. J. C. Burges. A Tutorial on Support Vector Machines for Pattern Recognition. Data Mining and Knowledge Discovery, 2(2): 121-168, 1998

N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge Univ. Press, 2000.

H. Drucker, C. J. C. Burges, L. Kaufman, A. Smola, and V. N. Vapnik. Support vector regression machines, NIPS, 1997

J. C. Platt. Fast training of support vector machines using sequential minimal optimization. In B. Schoelkopf, C. J. C. Burges, and A. Smola, editors, Advances in Kernel Methods|Support Vector Learning, pages 185–208. MIT Press, 1998

B. Schl¨okopf, P. L. Bartlett, A. Smola, and R. Williamson. Shrinking the tube: A new support vector regression algorithm. NIPS, 1999.

H. Yu, J. Yang, and J. Han. Classifying large data sets using SVM with hierarchical clusters. KDD'03.

35

Page 36: 1 Data Mining: Concepts and Techniques Classification: Basic Concepts.

Reference: Pattern-Based Classification

H. Cheng, X. Yan, J. Han, and C.-W. Hsu, Discriminative Frequent Pattern Analysis for Effective Classification, ICDE'07

H. Cheng, X. Yan, J. Han, and P. S. Yu, Direct Discriminative Pattern Mining for Effective Classification, ICDE'08

G. Cong, K.-L. Tan, A. K. H. Tung, and X. Xu. Mining top-k covering rule groups for gene expression data. SIGMOD'05

G. Dong and J. Li. Efficient mining of emerging patterns: Discovering trends and differences. KDD'99

H. S. Kim, S. Kim, T. Weninger, J. Han, and T. Abdelzaher. NDPMine: Efficiently mining discriminative numerical features for pattern-based classification. ECMLPKDD'10

W. Li, J. Han, and J. Pei, CMAR: Accurate and Efficient Classification Based on Multiple Class-Association Rules, ICDM'01

B. Liu, W. Hsu, and Y. Ma. Integrating classification and association rule mining. KDD'98

J. Wang and G. Karypis. HARMONY: Efficiently mining the best rules for classification. SDM'05

36

Page 37: 1 Data Mining: Concepts and Techniques Classification: Basic Concepts.

References: Rule Induction

P. Clark and T. Niblett. The CN2 induction algorithm. Machine Learning, 3:261–283, 1989.

W. Cohen. Fast effective rule induction. ICML'95 S. L. Crawford. Extensions to the CART algorithm. Int. J. Man-Machine Studies, 31:197–

217, Aug. 1989 J. R. Quinlan and R. M. Cameron-Jones. FOIL: A midterm report. ECML’93 P. Smyth and R. M. Goodman. An information theoretic approach to rule induction.

IEEE Trans. Knowledge and Data Engineering, 4:301–316, 1992. X. Yin and J. Han. CPAR: Classification based on predictive association rules. SDM'03

37

Page 38: 1 Data Mining: Concepts and Techniques Classification: Basic Concepts.

References: K-NN & Case-Based Reasoning

A. Aamodt and E. Plazas. Case-based reasoning: Foundational issues, methodological variations, and system approaches. AI Comm., 7:39–52, 1994.

T. Cover and P. Hart. Nearest neighbor pattern classification. IEEE Trans. Information Theory, 13:21–27, 1967

B. V. Dasarathy. Nearest Neighbor (NN) Norms: NN Pattern Classication Techniques. IEEE Computer Society Press, 1991

J. L. Kolodner. Case-Based Reasoning. Morgan Kaufmann, 1993 A. Veloso, W. Meira, and M. Zaki. Lazy associative classification.

ICDM'06

38

Page 39: 1 Data Mining: Concepts and Techniques Classification: Basic Concepts.

References: Bayesian Method & Statistical Models

A. J. Dobson. An Introduction to Generalized Linear Models. Chapman & Hall, 1990. D. Heckerman, D. Geiger, and D. M. Chickering. Learning Bayesian networks: The

combination of knowledge and statistical data. Machine Learning, 1995. G. Cooper and E. Herskovits. A Bayesian method for the induction of probabilistic

networks from data. Machine Learning, 9:309–347, 1992 A. Darwiche. Bayesian networks. Comm. ACM, 53:80–90, 2010 A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data

via the EM algorithm. J. Royal Statistical Society, Series B, 39:1–38, 1977 D. Heckerman, D. Geiger, and D. M. Chickering. Learning Bayesian networks: The

combination of knowledge and statistical data. Machine Learning, 20:197–243, 1995 F. V. Jensen. An Introduction to Bayesian Networks. Springer Verlag, 1996. D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. The

MIT Press, 2009 J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kauffman, 1988 S. Russell, J. Binder, D. Koller, and K. Kanazawa. Local learning in probabilistic networks

with hidden variables. IJCAI'95 V. N. Vapnik. Statistical Learning Theory. John Wiley & Sons, 1998.

39

Page 40: 1 Data Mining: Concepts and Techniques Classification: Basic Concepts.

Refs: Semi-Supervised & Multi-Class Learning

O. Chapelle, B. Schoelkopf, and A. Zien. Semi-supervised Learning. MIT Press, 2006

T. G. Dietterich and G. Bakiri. Solving multiclass learning problems via error-correcting output codes. J. Articial Intelligence Research, 2:263–286, 1995

W. Dai, Q. Yang, G. Xue, and Y. Yu. Boosting for transfer learning. ICML’07

S. J. Pan and Q. Yang. A survey on transfer learning. IEEE Trans. on Knowledge and Data Engineering, 22:1345–1359, 2010

B. Settles. Active learning literature survey. In Computer Sciences Technical Report 1648, Univ. Wisconsin-Madison, 2010

X. Zhu. Semi-supervised learning literature survey. CS Tech. Rep. 1530, Univ. Wisconsin-Madison, 2005

40

Page 41: 1 Data Mining: Concepts and Techniques Classification: Basic Concepts.

Refs: Genetic Algorithms & Rough/Fuzzy Sets

D. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, 1989

S. A. Harp, T. Samad, and A. Guha. Designing application-specific neural networks using the genetic algorithm. NIPS, 1990

Z. Michalewicz. Genetic Algorithms + Data Structures = Evolution Programs. Springer Verlag, 1992.

M. Mitchell. An Introduction to Genetic Algorithms. MIT Press, 1996 Z. Pawlak. Rough Sets, Theoretical Aspects of Reasoning about Data. Kluwer

Academic, 1991 S. Pal and A. Skowron, editors, Fuzzy Sets, Rough Sets and Decision Making

Processes. New York, 1998 R. R. Yager and L. A. Zadeh. Fuzzy Sets, Neural Networks and Soft Computing.

Van Nostrand Reinhold, 1994

41

Page 42: 1 Data Mining: Concepts and Techniques Classification: Basic Concepts.

References: Model Evaluation, Ensemble Methods

L. Breiman. Bagging predictors. Machine Learning, 24:123–140, 1996. L. Breiman. Random forests. Machine Learning, 45:5–32, 2001. C. Elkan. The foundations of cost-sensitive learning. IJCAI'01 B. Efron and R. Tibshirani. An Introduction to the Bootstrap. Chapman & Hall, 1993. J. Friedman and E. P. Bogdan. Predictive learning via rule ensembles. Ann. Applied

Statistics, 2:916–954, 2008. T.-S. Lim, W.-Y. Loh, and Y.-S. Shih. A comparison of prediction accuracy, complexity,

and training time of thirty-three old and new classification algorithms. Machine Learning, 2000.

J. Magidson. The Chaid approach to segmentation modeling: Chi-squared automatic interaction detection. In R. P. Bagozzi, editor, Advanced Methods of Marketing Research, Blackwell Business, 1994.

J. R. Quinlan. Bagging, boosting, and c4.5. AAAI'96. G. Seni and J. F. Elder. Ensemble Methods in Data Mining: Improving Accuracy

Through Combining Predictions. Morgan and Claypool, 2010. Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning

and an application to boosting. J. Computer and System Sciences, 199742

