
Induction of Decision Trees (IDT)


Page 1: Induction of Decision Trees (IDT)

Induction of Decision Trees (IDT)

CSE 335/435

Resources:

• Main: Artificial Intelligence: A Modern Approach (Russell and Norvig; Chapter “Learning from Examples”)

• Alternatives:
  – http://www.dmi.unict.it/~apulvirenti/agd/Qui86.pdf
  – http://www.cse.unsw.edu.au/~billw/cs9414/notes/ml/06prop/id3/id3.html
  – http://www.aaai.org/AITopics/html/expert.html (article: Think About It: Artificial Intelligence & Expert Systems)
  – http://www.aaai.org/AITopics/html/trees.html

Page 2: Induction of Decision Trees (IDT)

Learning: The Big Picture

• Two forms of learning:

Supervised: the input and output of the learning component can be perceived

Example: classification tasks

Unsupervised: there is no hint about the correct answers given to the learning component

Example: finding data clusters

Page 3: Induction of Decision Trees (IDT)

Example, Training Sets in IDT

• Each row in the table (i.e., entry for the Boolean function) is an example

• All rows form the training set

• If the classification of the example is “yes”, we say that the example is positive; otherwise we say that the example is negative (this is called Boolean or binary classification)

• The algorithm we are going to present can be easily extended to non-Boolean classification problems

• That is, problems in which there are 3 or more possible classes

• Example of such problems?

Page 4: Induction of Decision Trees (IDT)

Induction of Decision Trees

• Objective: find a concise decision tree that agrees with the examples

• “Concise” instead of “optimal” because finding an optimal tree is NP-complete

• The guiding principle we are going to use is Ockham’s razor: the most likely hypothesis is the simplest one that is consistent with the examples

• So we will perform heuristic approximations to the problem

• Aiming at good solutions, though not necessarily optimal ones

• Sometimes the algorithm does generate optimal solutions (for the simple restaurant example, the algorithm does find an optimal solution)

Page 5: Induction of Decision Trees (IDT)

Example

[Figure: the complete restaurant training-set table (this slide is the one referenced later by the homework as “Slide 5”).]

Page 6: Induction of Decision Trees (IDT)

Example of a Decision Tree

[Figure: a long decision tree built by testing attributes in an arbitrary order (Bar?, Hun, Pat, Fri, Alt, …), ending in yes/no leaves.]

Possible Algorithm:

1. Pick an attribute A randomly

2. Make a child for every possible value of A

3. Repeat 1 for every child until all attributes are exhausted

4. Label the leaves according to the cases

Problem: The resulting tree could be very long

Page 7: Induction of Decision Trees (IDT)

Example of a Decision Tree (II)

[Figure: a decision tree that first tests Patrons? (none → no, some → yes, full → WaitEstimate?), then WaitEstimate? (>60 → no, 30-60 → Alternate?, 10-30 → Hungry?, 0-10 → yes), with further tests on Reservation?, Bar?, Fri/Sat?, Alternate?, and Raining? along the lower branches.]

Nice: The resulting tree is optimal.

Page 8: Induction of Decision Trees (IDT)

Optimality Criterion for Decision Trees

• We want to reduce the average number of questions that are being asked. But how do we measure this for a tree T?

• How about using the height: T is better than T’ if height(T) < height(T’)?

Doesn’t work. It is easy to show a counterexample whereby height(T) = height(T’) but T asks fewer questions on average than T’

• Better: the average path length, APL(T), of the tree T. Let L1, …, Ln be the n leaves of a decision tree T.

APL(T) = (height(L1) + height(L2) + … + height(Ln))/n

• Optimality criterion: A decision tree T is optimal if (1) T has the lowest APL and (2) T is consistent with the input table
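The APL definition above can be read directly as code. Below is a minimal sketch in Python, assuming a hypothetical nested-tuple tree representation that is not part of the lecture:

```python
# A minimal sketch of APL(T), assuming a hypothetical tree representation:
# a leaf is ("leaf", label); an internal node is ("node", attribute, children)
# where children maps each attribute value to a subtree.

def leaf_depths(tree, depth=0):
    """Collect the depth (number of questions asked) of every leaf."""
    if tree[0] == "leaf":
        return [depth]
    _, _, children = tree
    depths = []
    for subtree in children.values():
        depths.extend(leaf_depths(subtree, depth + 1))
    return depths

def apl(tree):
    """Average path length: the mean depth over all leaves of the tree."""
    depths = leaf_depths(tree)
    return sum(depths) / len(depths)

# Example: one leaf at depth 1 and two leaves at depth 2 -> APL = (1+2+2)/3
tree = ("node", "Patrons", {
    "none": ("leaf", "no"),
    "full": ("node", "Hungry", {"no": ("leaf", "no"), "yes": ("leaf", "yes")}),
})
print(apl(tree))  # 1.666...
```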

Homework

Page 9: Induction of Decision Trees (IDT)

Inductive Learning

• An example has the form (x, f(x))

• Inductive task: Given a collection of examples, called the training set, for a function f, return a function h that approximates f (h is called the hypothesis)

[Figure: a set of (x, f(x)) data points fitted by two different candidate hypotheses; the data contain noise/error.]

• There is no way to know which of these two hypotheses is a better approximation of f. A preference of one over the other is called a bias.

Page 10: Induction of Decision Trees (IDT)

Induction

Ex’ple  Bar  Fri  Hun  Pat   Type    Res  wait

x1      no   no   yes  some  french  yes  yes

x4      no   yes  yes  full  thai    no   yes

x5      no   yes  no   full  french  yes  no

x6 … x11  (remaining rows shown in the original table)

• Databases: given a pattern, what are the data that match this pattern? (pattern → data)

• Induction: given data, what is the pattern that matches these data? (data → pattern)

Page 11: Induction of Decision Trees (IDT)

Induction of Decision Trees: A Greedy Algorithm

Algorithm:

1. Initially all examples are in the same group

2. Select the attribute that makes the most difference (i.e., for each of the values of the attribute most of the examples are either positive or negative)

3. Group the examples according to each value for the selected attribute

4. Repeat steps 2–3 within each group (recursive call); a code sketch follows below
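As a concrete reading of these four steps, here is a minimal Python sketch. It assumes each example is a dict of attribute values plus a Boolean “wait” label, that domains maps each attribute to its possible values, and that choose_attribute is whatever “makes the most difference” heuristic one plugs in (the information-gain version comes later in these slides); all of these names are illustrative, not from the lecture.

```python
# A minimal sketch of the greedy algorithm under the assumptions stated above.
from collections import Counter

def majority(examples):
    """Majority classification of a group of examples."""
    return Counter(e["wait"] for e in examples).most_common(1)[0][0]

def build_tree(examples, attributes, domains, choose_attribute, default=None):
    if not examples:                      # no examples observed for this branch
        return ("leaf", default)          # fall back to the parent's majority vote
    labels = {e["wait"] for e in examples}
    if len(labels) == 1:                  # all examples agree: make a leaf
        return ("leaf", labels.pop())
    if not attributes:                    # noise: no attributes left to test
        return ("leaf", majority(examples))
    best = choose_attribute(examples, attributes)                 # step 2
    remaining = [a for a in attributes if a != best]
    children = {}
    for value in domains[best]:                                   # step 3
        group = [e for e in examples if e[best] == value]
        children[value] = build_tree(group, remaining, domains,   # step 4
                                     choose_attribute,
                                     default=majority(examples))
    return ("node", best, children)
```

Note how the sketch also handles the two special cases discussed later (a branch with no examples, and a mixed group with no attributes left) by falling back to a majority vote.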

Page 12: Induction of Decision Trees (IDT)

IDT: Example

Let’s compare two candidate attributes: Patrons and Type. Which is a better attribute?

Patrons?
  – none: x7(-), x11(-)
  – some: x1(+), x3(+), x6(+), x8(+)
  – full: x4(+), x12(+), x2(-), x5(-), x9(-), x10(-)

Type?
  – french: x1(+), x5(-)
  – italian: x6(+), x10(-)
  – thai: x4(+), x8(+), x2(-), x11(-)
  – burger: x3(+), x12(+), x7(-), x9(-)

Page 13: Induction of Decision Trees (IDT)

IDT: Example (cont’d)

We select a best candidate for discerning between x4(+), x12(+), x2(-), x5(-), x9(-), x10(-):

Patrons?
  – none: no
  – some: yes
  – full: Hungry?
      – no: x5(-), x9(-)
      – yes: x4(+), x12(+), x2(-), x10(-)

Page 14: Induction of Decision Trees (IDT)

IDT: Example (cont’d)

By continuing in the same manner we obtain:

Patrons?
  – none: no
  – some: yes
  – full: Hungry?
      – no: no
      – yes: Type?
          – french: yes
          – italian: no
          – thai: Fri/Sat?
              – no: no
              – yes: yes
          – burger: yes

Page 15: Induction of Decision Trees (IDT)

IDT: Some Issues

• Sometimes we arrive at a node with no examples. This means that such a combination has not been observed. We just assign as its value the majority vote of its parent’s examples

• Sometimes we arrive at a node with both positive and negative examples and no attributes left. This means that there is noise in the data. We again assign as its value the majority vote of the examples

Page 16: Induction of Decision Trees (IDT)

How Well Does IDT Work?

This means: how well does h approximate f?

Empirical evidence:

1. Collect a large set of examples

2. Divide it into two sets: the training set and the test set

3. Measure the percentage of examples in the test set that are classified correctly

4. Repeat steps 1–3 for different sizes of randomly selected training sets

Next slide shows the sketch of a resulting graph for the restaurant domain
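A hedged sketch of this evaluation procedure, reusing the hypothetical build_tree sketch from the earlier slide; the split sizes and number of trials are illustrative choices, not values from the lecture:

```python
# Sketch of the train/test evaluation loop used to draw a learning curve.
import random

def classify(tree, example):
    """Walk the tree from the root until a leaf is reached."""
    while tree[0] == "node":
        _, attribute, children = tree
        tree = children[example[attribute]]
    return tree[1]

def accuracy(tree, test_set):
    """Fraction of test examples classified correctly."""
    return sum(classify(tree, e) == e["wait"] for e in test_set) / len(test_set)

def learning_curve(examples, attributes, domains, choose_attribute,
                   sizes=range(10, 100, 10), trials=20):
    """Average test-set accuracy for each training-set size."""
    curve = []
    for size in sizes:
        scores = []
        for _ in range(trials):
            random.shuffle(examples)           # randomly selected training set
            train, test = examples[:size], examples[size:]
            tree = build_tree(train, attributes, domains, choose_attribute)
            scores.append(accuracy(tree, test))
        curve.append((size, sum(scores) / len(scores)))
    return curve
```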

Page 17: Induction of Decision Trees (IDT)

How Well Does IDT Work? (II)

[Figure: learning curve for the restaurant domain — x-axis: training set size (roughly 20 to 100); y-axis: % correct on the test set, rising from about 0.4–0.5 toward 1.]

• As the training set grows the prediction quality improves (for this reason these kinds of curves are called happy curves)

Page 18: Induction of Decision Trees (IDT)

Selection of a Good Attribute: Information Gain Theory

• Suppose that I flip a “fair” coin:
  – What is the probability that it will come up heads? 0.5
  – How much information do you gain when it falls? 1 bit

• Suppose that I flip a “totally unfair” coin (it always comes up heads):
  – What is the probability that it will come up heads? 1
  – How much information do you gain when it falls? 0

Page 19: Induction of Decision Trees (IDT)

Selection of a Good Attribute: Information Gain Theory (II)

• Suppose that I flip a “very unfair” coin (99% of the time it comes up heads):
  – What is the probability that it will come up heads? 0.99
  – How much information do you gain when it falls? A fraction of a bit

• In general, the information provided by an event decreases as the probability of that event increases. Information content of an event e (Shannon and Weaver, 1949):

H(e) = log2(1/p(e))
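A quick numerical check of H(e) = log2(1/p(e)) for the three coins (the function name is ours, not from the lecture):

```python
import math

def information(p):
    """Bits of information gained by observing an event of probability p."""
    return math.log2(1 / p)

print(information(0.5))    # fair coin: 1.0 bit
print(information(1.0))    # totally unfair coin: 0.0 bits
print(information(0.99))   # very unfair coin: ~0.0145 bits (a fraction of a bit)
```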

Page 20: Induction of Decision Trees (IDT)

Let’s Play Twenty Questions

• I am thinking of an animal

• You can ask “yes/no” questions only

• Winning condition:
  – you guess the animal correctly after asking 20 questions or fewer, and
  – you don’t make more than 3 attempts to guess the right animal

Page 21: Induction of Decision Trees (IDT)

What is happening? (Constitutive Rules)

• We are building a binary (two children) decision tree

[Figure: each yes/no question has two branches, so the tree of possible questions doubles at every level.]

# levels:               0    1    2    3

# potential questions:  2^0  2^1  2^2  2^3

# questions asked = log2(# potential questions)
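A quick sanity check of this relation for the twenty-questions setting (the numbers are illustrative):

```python
import math

levels = 20
potential = 2 ** levels        # a 20-level binary tree has 2**20 leaves
print(potential)               # 1048576 distinguishable answers
print(math.log2(potential))    # 20.0 questions needed to tell them apart
```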

Page 22: Induction of Decision Trees (IDT)

Same Principle Operates for Online Version

• Game: http://www.20q.net/

• OK, so how can this be done?

• It uses information gain:

[Figure: the example table from Slide 10, here labeled “Table of movies stored in the system”, next to the decision tree of Slide 7, labeled “Decision Tree”.]

Page 23: Induction of Decision Trees (IDT)

Selection of a Good Attribute: Information Gain Theory (III)

• If the possible answers vi have probabilities p(vi), then the information content of the actual answer is given by:

I(p(v1), p(v2), …, p(vn)) = p(v1)·H(v1) + p(v2)·H(v2) + … + p(vn)·H(vn)
                          = p(v1)·log2(1/p(v1)) + p(v2)·log2(1/p(v2)) + … + p(vn)·log2(1/p(vn))

• Examples:
  – Information content with the fair coin:          I(1/2, 1/2) = 1
  – Information content with the totally unfair one: I(1, 0) = 0
  – Information content with the very unfair one:    I(1/100, 99/100) = 0.08
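A small sketch reproducing these three values (the function name I(...) mirrors the slide’s notation; treating the 0·log2(1/0) term as 0 is a convention we assume):

```python
import math

def I(*probs):
    """Expected information content of an answer with the given probabilities."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

print(I(1/2, 1/2))                   # fair coin: 1.0
print(I(1, 0))                       # totally unfair coin: 0.0
print(round(I(1/100, 99/100), 2))    # very unfair coin: 0.08
```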

Page 24: Induction of Decision Trees (IDT)

Selection of a Good Attribute

• For Decision Trees: suppose that the training set has p positive examples and n negative. The information content expected in a correct answer:

I(p/(p+n),n/(p+n))

• We can now measure how much information is needed after testing an attribute A:

Suppose that A has v values. Thus, if E is the training set, E is partitioned into subsets E1, …, Ev.

Each subset Ei has pi positive and ni negative examples

Page 25: Induction of Decision Trees (IDT)

Selection of a Good Attribute (II)

Each subset Ei has pi positive and ni negative examples. If we go along the i-th branch, the information content of that branch is given by: I(pi/(pi + ni), ni/(pi + ni))

• Probability that an example takes the i-th value of A: P(A, i) = (pi + ni)/(p + n)

• Bits of information needed to classify an example after testing attribute A:

Remainder(A) = P(A, 1)·I(p1/(p1 + n1), n1/(p1 + n1)) + P(A, 2)·I(p2/(p2 + n2), n2/(p2 + n2)) + … + P(A, v)·I(pv/(pv + nv), nv/(pv + nv))

Page 26: Induction of Decision Trees (IDT)

Selection of a Good Attribute (III)

• The information gain of testing an attribute A:

Gain(A) = I(p/(p+n),n/(p+n)) – Remainder(A)

• Example (restaurant):

Gain(Patrons) = ? Gain(Type) = ?
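For illustration only, here is how Remainder(A) and Gain(A) could look in code, reusing the hypothetical I(...) sketch above and the dict-based example representation with a Boolean “wait” label; it deliberately does not plug in the restaurant table, which is the homework below. With this, the choose_attribute heuristic from the earlier greedy sketch can simply pick the attribute with the highest gain.

```python
def remainder(examples, attribute):
    """Expected bits still needed to classify an example after testing attribute."""
    total = len(examples)
    bits = 0.0
    for value in {e[attribute] for e in examples}:
        subset = [e for e in examples if e[attribute] == value]
        p = sum(1 for e in subset if e["wait"])      # positives in this branch
        n = len(subset) - p                          # negatives in this branch
        bits += (len(subset) / total) * I(p / (p + n), n / (p + n))
    return bits

def gain(examples, attribute):
    """Information gain of testing the attribute on this set of examples."""
    p = sum(1 for e in examples if e["wait"])
    n = len(examples) - p
    return I(p / (p + n), n / (p + n)) - remainder(examples, attribute)
```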

Page 27: Induction of Decision Trees (IDT)

Parts I and II Due: Monday!

1. (ALL) Compute Gain(Patrons) and Gain(Type) for the restaurant example (see Slide 5 for the complete table)

2. (CSE 435) What is the complexity of the algorithm shown in Slide 11, assuming that the selection of the attribute in Step 2 is done by the information gain formula of Slide 26?

3. (Optional) Show a counterexample proving that using information gain does not necessarily produce an optimal decision tree (that is, construct a table for which the decision tree produced by the algorithm is not the optimal one)

Homework Part I

Page 28: Induction of Decision Trees (IDT)

Construction of Optimal Decision Trees is NP-Complete

4. (All) See Slide 8. Note: you have to create a table from which one can extract decision trees with the same height but where one has a smaller APL than the other. Show the table and both trees

5. (CSE 435)
  – Formulate the generation of an optimal decision tree as a decision problem
  – Design a (deterministic) algorithm solving the decision problem. Explain the complexity of this algorithm
  – Discuss what makes the decision problem so difficult

Homework Part II