Page 1: Title

CS 782 Machine Learning

4. Inductive Learning from Examples: Decision Tree Learning

Prof. Gheorghe Tecuci
Learning Agents Laboratory, Computer Science Department
George Mason University

Page 2: Overview

The decision tree learning problem

The basic ID3 learning algorithm

Discussion and refinement of the ID3 method

Applicability of decision tree learning

Recommended reading

Exercises

Page 3: The decision tree learning problem

Given:
• the language of instances: feature-value vectors
• the language of generalizations: decision trees
• a set of positive examples (E1, ..., En) of a concept
• a set of negative examples (C1, ..., Cm) of the same concept
• a learning bias: preference for shorter decision trees

Determine:
• a concept description, in the form of a decision tree, which is a generalization of the positive examples that does not cover any of the negative examples.

Page 4: Illustration

Feature-vector representation of the examples (there is a fixed set of attributes, each attribute taking values from a specified set):

height  hair   eyes   class
short   blond  blue   +
tall    blond  brown  -
tall    red    blue   +
short   dark   blue   -
tall    dark   blue   -
tall    blond  blue   +
tall    dark   brown  -
short   blond  brown  -

Decision tree concept:
  hair = dark  -> -
  hair = red   -> +
  hair = blond -> test "eyes":
                    eyes = blue  -> +
                    eyes = brown -> -

For instance, the example (short, blond, blue) follows the branches hair = blond and eyes = blue, and is therefore classified +.

Page 5: What is the logical expression represented by the decision tree?

Decision tree concept:
  hair = dark  -> -
  hair = red   -> +
  hair = blond -> test "eyes":
                    eyes = blue  -> +
                    eyes = brown -> -

What is the concept represented by this decision tree?

It is a disjunction of conjunctions, with one conjunct per path that ends in a + node:

(hair = red) ∨ [(hair = blond) & (eyes = blue)]

Page 6: Feature-value representation

Is the feature-value representation adequate?

If the training set (i.e. the set of positive and negative examples from which the tree is learned) contains a positive example and a negative example that have identical values for each attribute, it is impossible to differentiate between the instances with reference only to the given attributes.

In such a case the attributes are inadequate for the training set and for the induction task.

Page 7: Feature-value representation (cont.)

When could a decision tree be built?

If the attributes are adequate, it is always possible to construct a decision tree that correctly classifies each instance in the training set.

So what is the difficulty in learning a decision tree?

The problem is that there are many such correct decision trees, and the task of induction is to construct a decision tree that correctly classifies not only the instances from the training set but other (unseen) instances as well.

Page 8: Overview

The decision tree learning problem

The basic ID3 learning algorithm

Discussion and refinement of the ID3 method

Applicability of decision tree learning

Recommended reading

Exercises

Page 9: The basic ID3 learning algorithm

• Let C be the set of training examples.

• If all the examples in C are positive, then create a node with label +.

• If all the examples in C are negative, then create a node with label -.

• If there is no attribute left, then create a node with the same label as the majority of examples in C.

• Otherwise:
  - select the best attribute A and create a decision node whose branches correspond to the values v1, v2, ..., vk of A;
  - partition the examples into subsets C1, C2, ..., Ck according to the values of A;
  - apply the algorithm recursively to each set Ci that is not empty;
  - for each Ci that is empty, create a leaf node with the same label as the majority of examples in C.
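A minimal Python sketch of this recursive procedure (illustrative only, not the slides' own code; the attribute-selection function is passed in as a parameter, and the information-gain criterion it should implement is described on the following pages):

from collections import Counter

def id3(examples, attributes, choose):
    # examples: list of (features, label) pairs; features is a dict, label is '+' or '-'
    # choose(examples, attributes): returns the attribute to test (see the following pages)
    labels = [label for _, label in examples]
    if all(l == '+' for l in labels):
        return '+'                                # all examples positive: leaf labeled +
    if all(l == '-' for l in labels):
        return '-'                                # all examples negative: leaf labeled -
    majority = Counter(labels).most_common(1)[0][0]
    if not attributes:
        return majority                           # no attribute left: majority label
    A = choose(examples, attributes)              # select the best attribute A
    rest = [a for a in attributes if a != A]
    branches = {}
    for v in {f[A] for f, _ in examples}:         # partition into C1, ..., Ck by the value of A
        Ci = [(f, l) for f, l in examples if f[A] == v]
        branches[v] = id3(Ci, rest, choose)       # recurse on each (non-empty) Ci
    # branches are created only for values observed in C; a branch for an unseen
    # value would simply receive the majority label, as in the last step above
    return (A, branches)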

Page 10: Feature selection: information theory

Let us consider a set S containing objects from n classes S1, ..., Sn, such that the probability that an object belongs to class Si is pi.

According to information theory, the amount of information needed to identify the class of one particular member of S is:

Ii = -log2 pi

Intuitively, Ii represents the number of (yes/no) questions required to identify the class Si of a given element in S.

The average amount of information needed to identify the class of an element of S is:

- ∑i pi log2 pi
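As a small illustration (the function name is ours, not from the slides), this quantity is easy to compute:

import math

def average_information(probabilities):
    # - sum_i p_i * log2(p_i): the expected number of bits (yes/no questions)
    # needed to identify the class of an element of S
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(average_information([1/8] * 8))   # 8 equally likely classes -> 3.0 bits
print(average_information([3/8, 5/8]))  # 3 positive and 5 negative examples -> about 0.954 bits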

Page 11: Discussion

Consider the following letters: A B C D E F G H

Think of one of them (call it the secret letter).

How many questions need to be asked in order to find the secret letter?

Page 12: Feature selection: the best attribute

Let us suppose that a decision tree is to be built from a training set C consisting of p positive examples and n negative examples.

The average amount of information needed to classify an instance from C is:

I(p, n) = - p/(p+n) · log2(p/(p+n)) - n/(p+n) · log2(n/(p+n))

If attribute A with values {v1, v2, ..., vk} is used for the root of the decision tree, it will partition C into {C1, C2, ..., Ck}, where each Ci contains pi positive examples and ni negative examples.

The expected information required to classify an instance in Ci is I(pi, ni). The expected amount of information required to classify an instance after the value of attribute A is known is therefore:

Ires(A) = ∑_{i=1..k} (pi + ni)/(p + n) · I(pi, ni)

The information gained by branching on A is: gain(A) = I(p, n) - Ires(A)
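As a sketch, these formulas translate directly into Python (for the two-class case used in these slides; the function names are ours):

import math

def I(p, n):
    # average information needed to classify an instance from a set
    # with p positive and n negative examples
    total = p + n
    return -sum((x / total) * math.log2(x / total) for x in (p, n) if x > 0)

def gain(p, n, subsets):
    # subsets: one (pi, ni) pair per value vi of attribute A
    # gain(A) = I(p, n) - sum_i (pi + ni)/(p + n) * I(pi, ni)
    i_res = sum((pi + ni) / (p + n) * I(pi, ni) for pi, ni in subsets)
    return I(p, n) - i_res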

Page 13: Feature selection: the heuristic

The information gained by branching on A is:

gain(A) = I(p, n) - Ires(A)

What would be a good heuristic?

Choose the attribute that leads to the greatest information gain.

Why is this a heuristic and not a guaranteed method?

Hint: What kind of search for the best attribute does ID3 use?

Page 14: Feature selection: the heuristic (cont.)

Why is this a heuristic and not a guaranteed method?

Hint: Think of a situation where a is the best attribute, but the combination of “b and c” would actually be better than any of “a and b”, or “a and c”. That is, knowing b and c you can classify, but knowing only a and b (or only a and c) you cannot.

This shows that the attributes may not be independent. How could we deal with this?

Hint: Consider also combinations of attributes: not only a, b, c, but also ab, bc, ca.

What is a problem with this approach?

Page 15: Illustration of the method

The training examples are those from page 4.

1. Find the attribute that maximizes the information gain:

gain(A) = I(p, n) - Ires(A), with
I(p, n) = - p/(p+n) · log2(p/(p+n)) - n/(p+n) · log2(n/(p+n))
Ires(A) = ∑_{i=1..k} (pi + ni)/(p + n) · I(pi, ni)

I(3+, 5-) = -3/8 log2(3/8) - 5/8 log2(5/8) = 0.954434003

Height: short (1+, 2-), tall (2+, 3-)
Gain(height) = 0.954434003 - 3/8·I(1+, 2-) - 5/8·I(2+, 3-)
             = 0.954434003 - 3/8(-1/3 log2(1/3) - 2/3 log2(2/3)) - 5/8(-2/5 log2(2/5) - 3/5 log2(3/5))
             = 0.003228944

Hair: blond (2+, 2-), red (1+, 0-), dark (0+, 3-)
Gain(hair) = 0.954434003 - 4/8(-2/4 log2(2/4) - 2/4 log2(2/4)) - 1/8·I(1+, 0-) - 3/8·I(0+, 3-)
           = 0.954434003 - 0.5 - 0 - 0 = 0.454434003

Eyes: blue (3+, 2-), brown (0+, 3-)
Gain(eyes) = 0.954434003 - 5/8(-3/5 log2(3/5) - 2/5 log2(2/5)) - 3/8·I(0+, 3-)
           = 0.954434003 - 0.606844122 = 0.347589881

"Hair" is the best attribute.
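These values can be checked with the gain function sketched on page 12:

print(gain(3, 5, [(1, 2), (2, 3)]))          # height: 0.0032...
print(gain(3, 5, [(2, 2), (1, 0), (0, 3)]))  # hair:   0.4544...
print(gain(3, 5, [(3, 2), (0, 3)]))          # eyes:   0.3475...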

Page 16: Illustration of the method (cont.)

2. "Hair" is the best attribute. Build the tree using it, partitioning the training examples by the value of "hair":

hair = dark:  (short, dark, blue) -, (tall, dark, blue) -, (tall, dark, brown) -
hair = red:   (tall, red, blue) +
hair = blond: (short, blond, blue) +, (tall, blond, brown) -, (tall, blond, blue) +, (short, blond, brown) -

Page 17: Illustration of the method (cont.)

3. Select the best attribute for the set of examples in the "blond" branch:

(short, blond, blue): +
(tall, blond, brown): -
(tall, blond, blue): +
(short, blond, brown): -

I(2+, 2-) = -2/4 log2(2/4) - 2/4 log2(2/4) = -log2(1/2) = 1

Height: short (1+, 1-), tall (1+, 1-)
Gain(height) = 1 - 2/4·I(1+, 1-) - 2/4·I(1+, 1-) = 1 - I(1+, 1-) = 1 - 1 = 0

Eyes: blue (2+, 0-), brown (0+, 2-)
Gain(eyes) = 1 - 2/4·I(2+, 0-) - 2/4·I(0+, 2-) = 1 - 0 - 0 = 1

"Eyes" is the best attribute.

Page 18: Illustration of the method (cont.)

4. "Eyes" is the best attribute. Expand the tree using it:

hair = dark:  -   [(short, dark, blue), (tall, dark, blue), (tall, dark, brown)]
hair = red:   +   [(tall, red, blue)]
hair = blond: test "eyes":
                eyes = blue:  +   [(short, blond, blue), (tall, blond, blue)]
                eyes = brown: -   [(tall, blond, brown), (short, blond, brown)]

Page 19: Illustration of the method (cont.)

5. The resulting decision tree:

hair = dark  -> -
hair = red   -> +
hair = blond -> test "eyes":
                  eyes = blue  -> +
                  eyes = brown -> -

What induction hypothesis is made?
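For illustration, the whole run can be reproduced with the id3 and gain sketches from pages 9 and 12 (the selection helper below is ours):

def choose_by_gain(examples, attributes):
    # pick the attribute with the greatest information gain
    def split_counts(A):
        subsets = {}
        for f, label in examples:
            pi, ni = subsets.get(f[A], (0, 0))
            subsets[f[A]] = (pi + (label == '+'), ni + (label == '-'))
        return list(subsets.values())
    p = sum(label == '+' for _, label in examples)
    n = len(examples) - p
    return max(attributes, key=lambda A: gain(p, n, split_counts(A)))

data = [({'height': 'short', 'hair': 'blond', 'eyes': 'blue'},  '+'),
        ({'height': 'tall',  'hair': 'blond', 'eyes': 'brown'}, '-'),
        ({'height': 'tall',  'hair': 'red',   'eyes': 'blue'},  '+'),
        ({'height': 'short', 'hair': 'dark',  'eyes': 'blue'},  '-'),
        ({'height': 'tall',  'hair': 'dark',  'eyes': 'blue'},  '-'),
        ({'height': 'tall',  'hair': 'blond', 'eyes': 'blue'},  '+'),
        ({'height': 'tall',  'hair': 'dark',  'eyes': 'brown'}, '-'),
        ({'height': 'short', 'hair': 'blond', 'eyes': 'brown'}, '-')]

print(id3(data, ['height', 'hair', 'eyes'], choose_by_gain))
# ('hair', {'dark': '-', 'red': '+', 'blond': ('eyes', {'blue': '+', 'brown': '-'})})
# (the order of the branches in the printed dictionary may vary)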

Page 20: Overview

The decision tree learning problem

The basic ID3 learning algorithm

Discussion and refinement of the ID3 method

Applicability of decision tree learning

Recommended reading

Exercises

Page 21: How could we transform a tree into a set of rules?

For the decision tree learned on page 19 (root "hair"; dark -> -, red -> +, blond -> test "eyes"; blue -> +, brown -> -):

IF (hair = red) THEN positive example

IF (hair = blond) & (eyes = blue) THEN positive example

Why should we make such a transformation?

Converting the tree to rules improves understandability.
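A small sketch of this transformation, using the (attribute, branches) representation from the ID3 sketch on page 9 (names are illustrative):

def tree_to_rules(tree, conditions=()):
    # collect one IF-THEN rule per path that ends in a '+' leaf
    if tree == '+':
        return ['IF ' + ' & '.join(conditions) + ' THEN positive example']
    if tree == '-':
        return []
    attribute, branches = tree
    rules = []
    for value, subtree in branches.items():
        rules += tree_to_rules(subtree, conditions + ('(%s = %s)' % (attribute, value),))
    return rules

tree = ('hair', {'dark': '-', 'red': '+',
                 'blond': ('eyes', {'blue': '+', 'brown': '-'})})
for rule in tree_to_rules(tree):
    print(rule)
# IF (hair = red) THEN positive example
# IF (hair = blond) & (eyes = blue) THEN positive example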

Page 22: Learning from noisy data

What errors could be found in an example (such errors are also called noise in the data)?

• errors in the values of attributes (due to measurements or subjective judgments);

• errors in the classification of instances (for instance, a negative example that was recorded as a positive example).

What are the effects of noise?

How to change the ID3 algorithm to deal with noise?

Page 23: How to deal with noise?

What are the effects of noise?

Noise may cause the attributes to become inadequate.

Noise may lead to decision trees of spurious complexity (overfitting).

How to change the ID3 algorithm to deal with noise?

The algorithm must be able to work with inadequate attributes, because noise can cause even the most comprehensive set of attributes to appear inadequate.

The algorithm must be able to decide that testing further attributes will not improve the predictive accuracy of the decision tree. For instance, it should refrain from increasing the complexity of the decision tree to accommodate a single noise-generated special case.

Page 24: How to deal with an inadequate attribute set?

A collection C of instances may contain representatives of both classes, yet further testing of C may be ruled out, either because the attributes are inadequate and unable to distinguish among the instances in C, or because each attribute has been judged to be irrelevant to the class of instances in C (inadequacy due to noise).

In this situation it is necessary to produce a leaf labeled with class information, even though the instances in C are not all of the same class.

What class should be assigned to a leaf node that contains both + and - examples?

Page 25: What class to assign to a leaf node that contains both + and - examples?

Approaches:

1. The notion of class could be generalized from a binary value (0 for negative examples and 1 for positive examples) to a number in the interval [0, 1]. In this case, a class of 0.8 would be interpreted as "belonging to class P with probability 0.8".

2. Opt for the more numerous class, i.e. assign the leaf to class P if p > n, to class N if p < n, and to either if p = n.

The first approach minimizes the sum of the squares of the errors over the objects in C.

The second approach minimizes the sum of the absolute errors over the objects in C. If the aim is to minimize expected error, the second approach might be anticipated to be superior.
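A small illustrative calculation (ours, not on the slide): suppose a leaf contains p = 8 positive and n = 2 negative examples.

With the class value 0.8 (approach 1), the sum of squared errors is 8·(1 - 0.8)² + 2·(0 - 0.8)² = 0.32 + 1.28 = 1.6, smaller than for any other class value (e.g. 2.0 for the value 1).

With the majority class 1 (approach 2), the sum of absolute errors is 8·0 + 2·1 = 2, compared with 8·0.2 + 2·0.8 = 3.2 for the value 0.8.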

Page 26: How to avoid overfitting the data?

A hypothesis overfits the training examples if some other hypothesis that fits the training examples less well actually performs better over the entire distribution of instances.

How to avoid overfitting?

• Stop growing the tree before it overfits;
• Allow the tree to overfit and then prune it.

How to determine the correct size of the tree?

Use a test set of examples to compare the likely errors of the various trees.

Page 27: Rule post-pruning to avoid overfitting the data

Rule post-pruning algorithm:

1. Infer a decision tree.
2. Convert the tree into a set of rules.
3. Prune (generalize) the rules by removing antecedents as long as this improves their accuracy.
4. Sort the rules by their accuracy and use this order in classification.

Compare tree pruning with rule post-pruning.

Rule post-pruning is more general: we can remove an attribute test from near the top of the tree without removing all the tests that follow it.
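A rough sketch of the pruning step (a rule here is a list of (attribute, value) antecedents plus a label, and accuracy is estimated on a held-out set of examples; all names are ours):

def rule_accuracy(antecedents, label, examples):
    # fraction of the examples covered by the rule that actually have the rule's label
    covered = [l for f, l in examples if all(f[a] == v for a, v in antecedents)]
    return sum(l == label for l in covered) / len(covered) if covered else 0.0

def prune_rule(antecedents, label, validation):
    # greedily remove antecedents as long as this improves the estimated accuracy
    antecedents = list(antecedents)
    improved = True
    while improved and antecedents:
        improved = False
        for a in list(antecedents):
            trimmed = [x for x in antecedents if x != a]
            if rule_accuracy(trimmed, label, validation) > rule_accuracy(antecedents, label, validation):
                antecedents, improved = trimmed, True
                break
    return antecedents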

Page 28: How to use continuous attributes?

Transform a continuous attribute into a discrete one.

Give an example of such a transformation.
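One common transformation (an illustration, not from the slide): replace a continuous attribute such as temperature with a Boolean test "temperature > t". Candidate thresholds t are usually taken midway between adjacent sorted values at which the class changes, and the one with the greatest information gain is kept. A minimal sketch of generating the candidates:

def candidate_thresholds(examples, attribute):
    # midpoints between adjacent sorted attribute values where the class label changes
    points = sorted((f[attribute], label) for f, label in examples)
    return [(v1 + v2) / 2
            for (v1, l1), (v2, l2) in zip(points, points[1:]) if l1 != l2]

print(candidate_thresholds(
    [({'temp': 18}, '-'), ({'temp': 22}, '-'), ({'temp': 25}, '+'), ({'temp': 30}, '+')],
    'temp'))   # [23.5] -> candidate Boolean attribute "temp > 23.5"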

Page 29: How to deal with missing attribute values?

Estimate the value from the values of the other examples. How?

• Assign the value that is most common among the training examples at that node.

• Assign a probability to each of the possible values. How does this affect the algorithm? The algorithm must then work with fractional examples, weighted by these probabilities.
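A sketch of the "fractional examples" idea (ours): an example whose value for the split attribute is missing is sent down every branch, with a weight proportional to how often each value occurs among the examples whose value is known:

from collections import Counter

def missing_value_weights(attribute, known_examples):
    # weight of each value of `attribute`, proportional to its frequency among the
    # examples whose value is known; an example with a missing value is then treated
    # as a set of fractional examples carrying these weights
    counts = Counter(f[attribute] for f, _ in known_examples)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

print(missing_value_weights('hair',
      [({'hair': 'blond'}, '+'), ({'hair': 'blond'}, '-'),
       ({'hair': 'dark'}, '-'), ({'hair': 'red'}, '+')]))
# {'blond': 0.5, 'dark': 0.25, 'red': 0.25}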

Page 30: Comparison with the candidate elimination algorithm

Generalization language
ID3 – disjunctions of conjunctions
CE – conjunctions

Use of examples
ID3 – all at the same time (can deal with noise and missing values)
CE – one at a time (can determine the most informative next example)

Search strategy
ID3 – hill climbing (may not find the target concept but only an approximation)
CE – exhaustive search

Bias
ID3 – preference bias (Occam's razor)
CE – representation bias

Page 31: Overview

The decision tree learning problem

The basic ID3 learning algorithm

Discussion and refinement of the ID3 method

Applicability of decision tree learning

Recommended reading

Exercises

Page 32: What problems are appropriate for decision tree learning?

Problems for which:

• Instances can be represented by attribute-value pairs.

• Disjunctive descriptions may be required to represent the learned concept.

• The training data may contain errors.

• The training data may contain missing attribute values.

Page 33: What practical applications could you envision?

Classify:

- Patients by their disease;

- Equipment malfunctions by their cause;

- Loan applicants by their likelihood to default on payments.

Page 34: What are the main features of decision tree learning?

May employ a large number of examples.

Discovers efficient classification trees that are theoretically justified.

Learns disjunctive concepts.

Is limited to attribute-value representations.

Is non-incremental in nature (there are, however, also incremental versions, which are less efficient).

The tree representation is not very understandable.

The method is limited to learning classification rules.

The method was successfully applied to complex real world problems.

Page 35: Overview

The decision tree learning problem

The basic ID3 learning algorithm

Discussion and refinement of the ID3 method

Applicability of decision tree learning

Recommended reading

Exercises

Page 36: Exercise

food        medium      ?         type       class
herbivore   land        harmless  mammal     +   deer (e1)
carnivore   land        harmful   mammal     -   lion (c1)
omnivorous  water       harmless  fish       +   goldfish (e2)
herbivore   amphibious  harmless  amphibian  -   frog (c2)
omnivorous  air         harmless  bird       -   parrot (c3)
carnivore   land        harmful   reptile    +   cobra (e3)
carnivore   land        harmless  reptile    -   lizard (c4)
omnivorous  land        moody     mammal     +   bear (e4)

Build two different decision trees corresponding to the examples and counterexamples in the above table.

Indicate the concept represented by each decision tree.

Apply the ID3 algorithm to build the decision tree corresponding to the examples and counterexamples in the above table.

Page 37: Exercise

Consider the following positive and negative examples of a concept:

shape  size   class
ball   large  +   e1
brick  small  -   c1
cube   large  -   c2
ball   small  +   e2

and the following background knowledge (generalization hierarchies):

any-shape -> ball, cube, brick, star
any-size  -> large, small, medium

a) You will be required to learn this concept by applying two different learning methods: the Induction of Decision Trees method and the Versions Space (candidate elimination) method. Do you expect to learn the same concept with each method, or different concepts? Explain your prediction in detail (you will need to consider various aspects, such as the instance space, the hypothesis space, and the method of learning).

b) Learn the concept represented by the above examples by applying:
- the Induction of Decision Trees method;
- the Versions Space method.

c) Explain the results obtained in b) and compare them with your predictions.

d) What will be the results of learning with the above two methods if only the first three examples are available?

Page 38: Exercise

Consider the following positive and negative examples of a concept:

workstation  software        printer      class
maclc        macwrite        laserwriter  +   e1
sun          frame-maker     laserwriter  +   e2
hp           accounting      laserjet     -   c1
sgi          spreadsheet     laserwriter  -   c2
macII        microsoft-word  proprinter   +   e3

and the following background knowledge (generalization hierarchies):

any-workstation -> sun, hp, sgi, vax, mac;  mac -> macplus, maclc, macII
any-software    -> publishing-sw, spreadsheet, accounting;  publishing-sw -> page-maker, frame-maker, microsoft-word, mac-write
any-printer     -> laserwriter, xerox, proprinter, laserjet, microlaser
op-system       -> unix, vms, mac-os

a) Build two decision trees corresponding to the above examples. Indicate the concept represented by each decision tree. In principle, how many different decision trees could you build?

b) Learn the concept represented by the above examples by applying the Versions Space method. What is the learned concept if only the first four examples are available?

c) Compare and justify the obtained results.

Page 39: Exercise

True or false:
If decision tree D2 is an elaboration of D1 (according to ID3), then D1 is more general than D2.

Page 40: Recommended reading

Mitchell T.M., Machine Learning, Chapter 3: Decision tree learning, pp. 52-80, McGraw Hill, 1997.

Quinlan J.R., Induction of Decision Trees, Machine Learning, 1:81-106, 1986. Also in Shavlik J. and Dietterich T. (eds), Readings in Machine Learning, Morgan Kaufmann, 1990.

Barr A., Cohen P., and Feigenbaum E. (eds), The Handbook of Artificial Intelligence, vol. III, pp. 406-410, Morgan Kaufmann, 1982.

Elwyn Edwards, Information Transmission, Chapter 4: Uncertainty, pp. 28-39, Chapman and Hall, 1964.

