Classification: Basic Concepts and Decision Trees


A programming task

Classification: Definition
Given a collection of records (the training set), where each record contains a set of attributes and one of the attributes is the class, find a model for the class attribute as a function of the values of the other attributes.
Goal: previously unseen records should be assigned a class as accurately as possible.
A test set is used to determine the accuracy of the model. Usually, the given data set is divided into training and test sets, with the training set used to build the model and the test set used to validate it.

Illustrating Classification Task

Examples of Classification Task
- Predicting tumor cells as benign or malignant
- Classifying credit card transactions as legitimate or fraudulent
- Classifying secondary structures of protein as alpha-helix, beta-sheet, or random coil
- Categorizing news stories as finance, weather, entertainment, sports, etc.

Classification Using Distance
Place items in the class to which they are closest. Must determine the distance between an item and a class. Classes can be represented by:
- Centroid: central value
- Medoid: a representative point
- Individual points
Algorithm: KNN

Classification Techniques
- Decision tree based methods
- Rule-based methods
- Memory based reasoning
- Neural networks
- Naïve Bayes and Bayesian belief networks
- Support vector machines

A first example
Database of 20,000 images of handwritten digits, each labeled by a human [28 x 28 greyscale; pixel values 0-255; labels 0-9].

Use these to learn a classifier which will label digit-images automatically

This is perhaps the most well-studied data set in ML: handwritten digits, part of zip codes scanned off envelopes by the post office.

The learning problem
Input space X = {0,1,...,255}^784; output space Y = {0,1,...,9}.
Training set (x1, y1), ..., (xm, ym), with m = 20000; a learning algorithm produces a classifier f: X → Y.
To measure how good f is, use a test set. [Our test set: 100 instances of each digit.]

A possible strategy
Treat each image as a point in 784-dimensional Euclidean space.

To classify a new image: find its nearest neighbor in the database (training set) and return that label. f = the entire training set + a search engine.

K Nearest Neighbor (KNN):
- The training set includes the classes.
- Examine the K items nearest to the item to be classified.
- The new item is placed in the class with the greatest number of close items.
- O(q) for each tuple to be classified, where q is the size of the training set (a sketch follows below).
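As a concrete illustration of the search just described, here is a minimal sketch in Python (assuming NumPy; the array names and the k parameter are illustrative, not from the slides):

import numpy as np

def knn_classify(x, train_X, train_y, k=1):
    # Distance from x to every training point: O(q) work for a training set of size q
    dists = np.linalg.norm(train_X - x, axis=1)
    # Indices of the k closest training points (k = 1 is plain nearest neighbor)
    nearest = np.argsort(dists)[:k]
    # Majority vote among the labels of the k neighbors
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]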

KNN

Nearest neighbor

Image to label: nearest neighbor. Overall error rate = 6% (on the test set). Question: what is the error rate for random guessing?

What does it get wrong?
Who knows, but here's a hypothesis: each digit corresponds to some connected region of R^784. Some of the regions come close to each other, and problems occur at these boundaries; e.g., a random point in this ball has only a 70% chance of being in R2.

Nearest neighbor: pros and cons
Pros:
- Simple
- Flexible
- Excellent performance on a wide range of tasks

Cons:
- Algorithmic: time consuming; with n training points in R^d, the time to label a new point is O(nd).
- Statistical: memorization, not learning! No insight into the domain; we would prefer a compact classifier.

Prototype selection
A possible fix: instead of the entire training set, just keep a representative sample. (Figures: the prototypes' Voronoi cells and the resulting decision boundary.)

How to pick prototypes?

They needn't be actual data points.

Idea 2: one prototype per class: mean of training points

Examples:

Error = 23%.

Postscript: learning models
- Batch learning: training data → learning algorithm → classifier f → test.
- On-line learning: see a new point x, predict its label, see the true label y, update the classifier.

Example of a Decision Tree

Model: decision tree built from the training data (attribute types: categorical, categorical, continuous, class). Splitting attributes:
- Refund: Yes → NO; No → test MarSt.
- MarSt: Married → NO; Single, Divorced → test TaxInc.
- TaxInc: < 80K → NO; > 80K → YES.

Another Example of Decision Tree
A second tree over the same training data splits on MarSt first, then on Refund and TaxInc. There could be more than one tree that fits the same data!

Decision Tree Classification Task

Decision Tree
Apply Model to Test Data
Start from the root of the tree and follow the branches that the test record satisfies: Refund = No → go to the MarSt node; MarSt = Married → reach the leaf labeled NO. Assign Cheat = No to the test record.

Decision Tree Classification Task

Decision Tree
Decision Tree Induction
Many algorithms:
- Hunt's Algorithm (one of the earliest)
- CART
- ID3, C4.5
- SLIQ, SPRINT

General Structure of Hunt's Algorithm
Let Dt be the set of training records that reach a node t. General procedure (a sketch follows below):
- If Dt contains records that all belong to the same class yt, then t is a leaf node labeled yt.
- If Dt is an empty set, then t is a leaf node labeled with the default class yd.
- If Dt contains records that belong to more than one class, use an attribute test to split the data into smaller subsets, and recursively apply the procedure to each subset.

Hunt's Algorithm (applied to the Cheat data)
The tree grows in stages: start with a single leaf "Don't Cheat"; split on Refund (Yes → Don't Cheat, No → Don't Cheat); then split the Refund = No branch on Marital Status (Married → Don't Cheat, Single/Divorced → Cheat); finally split the Single/Divorced branch on Taxable Income (< 80K → Don't Cheat, >= 80K → Cheat).
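A minimal recursive sketch of Hunt's procedure (assuming each record is an (attributes, class) pair; the best_test and partition helpers stand in for the splitting criteria discussed below and are illustrative names):

from collections import Counter

def hunts_algorithm(records, default_class, best_test, partition):
    # Dt is empty: leaf labeled with the default class yd
    if not records:
        return {"label": default_class}
    classes = [c for _, c in records]
    # All records in Dt belong to the same class yt: leaf labeled yt
    if len(set(classes)) == 1:
        return {"label": classes[0]}
    # Otherwise choose an attribute test, split Dt, and recurse on each subset
    test = best_test(records)
    majority = Counter(classes).most_common(1)[0][0]
    children = {outcome: hunts_algorithm(subset, majority, best_test, partition)
                for outcome, subset in partition(records, test).items()}
    return {"test": test, "children": children}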

Tree Induction
Greedy strategy: split the records based on an attribute test that optimizes a certain criterion.

Issues:
- Determine how to split the records: How to specify the attribute test condition? How to determine the best split?
- Determine when to stop splitting.

How to Specify the Test Condition?
Depends on the attribute type (nominal, ordinal, continuous) and on the number of ways to split (2-way split or multi-way split).

Splitting Based on Nominal Attributes
- Multi-way split: use as many partitions as there are distinct values, e.g. CarType: Family | Sports | Luxury.
- Binary split: divides the values into two subsets and requires finding the optimal partitioning, e.g. {Family, Luxury} vs {Sports}, or {Sports, Luxury} vs {Family}.

Splitting Based on Ordinal Attributes
- Multi-way split: use as many partitions as there are distinct values, e.g. Size: Small | Medium | Large.
- Binary split: divides the values into two subsets that respect the order, e.g. {Medium, Large} vs {Small}, or {Small, Medium} vs {Large}. What about the split {Small, Large} vs {Medium}? It does not preserve the order.

Splitting Based on Continuous Attributes
Different ways of handling:
- Discretization to form an ordinal categorical attribute. Static: discretize once at the beginning. Dynamic: ranges can be found by equal-interval bucketing, equal-frequency bucketing (percentiles), or clustering.
- Binary decision: (A < v) or (A ≥ v). Consider all possible splits and find the best cut; this can be more compute intensive.

Tree Induction
Greedy strategy: split the records based on an attribute test that optimizes a certain criterion.

Issues:
- Determine how to split the records: How to specify the attribute test condition? How to determine the best split?
- Determine when to stop splitting.

How to determine the Best Split

Before splitting: 10 records of class 0 and 10 records of class 1. Which test condition is the best?

How to determine the Best Split
Greedy approach: nodes with a homogeneous class distribution are preferred, so we need a measure of node impurity:
- Non-homogeneous (e.g., C0: 5, C1: 5): high degree of impurity.
- Homogeneous (e.g., C0: 9, C1: 1): low degree of impurity.

Measures of Node Impurity
- Gini index
- Entropy
- Misclassification error

How to Find the Best Split

Before splitting, the impurity of the parent node is M0. Candidate split A? produces nodes N1 and N2 (impurities M1 and M2, weighted impurity M12); candidate split B? produces nodes N3 and N4 (impurities M3 and M4, weighted impurity M34). Compare Gain = M0 − M12 vs. M0 − M34 and choose the split with the larger gain.

Measure of Impurity: GINI
Gini index for a given node t:

GINI(t) = 1 − Σj [p(j|t)]²

(NOTE: p(j|t) is the relative frequency of class j at node t.)
- Maximum (1 − 1/nc, with nc classes) when records are equally distributed among all classes, implying the least interesting information.
- Minimum (0.0) when all records belong to one class, implying the most interesting information.

Examples for computing GINI

P(C1) = 0/6 = 0, P(C2) = 6/6 = 1. Gini = 1 − P(C1)² − P(C2)² = 1 − 0 − 1 = 0.
P(C1) = 1/6, P(C2) = 5/6. Gini = 1 − (1/6)² − (5/6)² = 0.278.
P(C1) = 2/6, P(C2) = 4/6. Gini = 1 − (2/6)² − (4/6)² = 0.444.

Splitting Based on GINI
Used in CART, SLIQ, SPRINT. When a node p is split into k partitions (children), the quality of the split is computed as

GINI_split = Σi (ni / n) GINI(i), for i = 1, ..., k,

where ni is the number of records at child i and n is the number of records at node p.
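A small sketch of these two formulas (representing a node by its list of class counts is an assumption made for illustration):

def gini(counts):
    # GINI(t) = 1 - sum_j p(j|t)^2, where counts are the class counts at node t
    n = sum(counts)
    return 0.0 if n == 0 else 1.0 - sum((c / n) ** 2 for c in counts)

def gini_split(children_counts):
    # Quality of a split: sum over children of (n_i / n) * GINI(child i)
    n = sum(sum(c) for c in children_counts)
    return sum(sum(c) / n * gini(c) for c in children_counts)

# Matches the worked examples above: gini([1, 5]) -> 0.278, gini([2, 4]) -> 0.444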

Binary Attributes: Computing the GINI Index
Splits into two partitions. Effect of weighting the partitions: larger and purer partitions are sought.
Example: the parent node has C1 = 6, C2 = 6 (Gini = 0.500). Splitting on B? sends (C1 = 5, C2 = 2) to node N1 and (C1 = 1, C2 = 4) to node N2:
Gini(N1) = 1 − (5/7)² − (2/7)² = 0.408
Gini(N2) = 1 − (1/5)² − (4/5)² = 0.320
Gini(Children) = 7/12 × 0.408 + 5/12 × 0.320 ≈ 0.371

Categorical Attributes: Computing the Gini Index
For each distinct value, gather the counts for each class in the dataset and use the count matrix to make decisions.

Either a multi-way split or a two-way split (find the best partition of the values) can be evaluated this way.

Continuous Attributes: Computing the Gini Index
- Use binary decisions based on one value; the number of possible splitting values equals the number of distinct values.
- Each splitting value v has a count matrix associated with it: the class counts in each of the partitions A < v and A ≥ v.
- Simple method to choose the best v: for each v, scan the database to gather the count matrix and compute its Gini index. Computationally inefficient! Repetition of work.

Continuous Attributes: Computing the Gini Index...
For efficient computation, for each attribute:
- Sort the attribute values.
- Linearly scan these values, each time updating the count matrix and computing the Gini index.
- Choose the split position that has the least Gini index (a sketch follows below).
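A sketch of that linear scan for one attribute (reusing the gini helper above; the parallel values/labels lists are an illustrative representation):

def best_gini_split(values, labels):
    # Sort records by attribute value, then sweep the candidate cut points,
    # updating the left/right class counts incrementally instead of rescanning.
    pairs = sorted(zip(values, labels))
    classes = set(labels)
    left = {c: 0 for c in classes}
    right = {c: labels.count(c) for c in classes}
    n = len(pairs)
    best_gini, best_cut = float("inf"), None
    for i in range(n - 1):
        v, lab = pairs[i]
        left[lab] += 1
        right[lab] -= 1
        if pairs[i + 1][0] == v:
            continue  # a cut can only be placed between distinct values
        cut = (v + pairs[i + 1][0]) / 2
        g = (i + 1) / n * gini(list(left.values())) + (n - i - 1) / n * gini(list(right.values()))
        if g < best_gini:
            best_gini, best_cut = g, cut
    return best_cut, best_gini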

(Figure: the sorted values with the candidate split positions between them; the full table appears at the end of these notes.)

Alternative Splitting Criteria Based on INFO
Entropy at a given node t:

Entropy(t) = − Σj p(j|t) log2 p(j|t)

(NOTE: p(j|t) is the relative frequency of class j at node t.)
- Measures the homogeneity of a node.
- Maximum (log nc) when records are equally distributed among all classes, implying the least information.
- Minimum (0.0) when all records belong to one class, implying the most information.
- Entropy-based computations are similar to the GINI index computations.

Examples for computing Entropy

P(C1) = 0/6 = 0, P(C2) = 6/6 = 1. Entropy = −0 log 0 − 1 log 1 = −0 − 0 = 0.
P(C1) = 1/6, P(C2) = 5/6. Entropy = −(1/6) log2(1/6) − (5/6) log2(5/6) = 0.65.
P(C1) = 2/6, P(C2) = 4/6. Entropy = −(2/6) log2(2/6) − (4/6) log2(4/6) = 0.92.
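The same counts can be checked with a short helper (a sketch; the 0 log 0 = 0 convention is applied explicitly):

import math

def entropy(counts):
    # Entropy(t) = - sum_j p(j|t) log2 p(j|t)
    n = sum(counts)
    return -sum((c / n) * math.log2(c / n) for c in counts if c > 0)

# entropy([0, 6]) -> 0.0, entropy([1, 5]) -> 0.65, entropy([2, 4]) -> 0.92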

Splitting Based on INFO...
Information Gain:

GAIN_split = Entropy(p) − Σi (ni / n) Entropy(i), for i = 1, ..., k,

where the parent node p is split into k partitions and ni is the number of records in partition i.
- Measures the reduction in entropy achieved because of the split. Choose the split that achieves the most reduction (maximizes GAIN).
- Used in ID3 and C4.5.
- Disadvantage: tends to prefer splits that result in a large number of partitions, each being small but pure.

Splitting Based on INFO...
Gain Ratio:

GainRATIO_split = GAIN_split / SplitINFO, where SplitINFO = − Σi (ni / n) log2(ni / n),

with the parent node p split into k partitions and ni the number of records in partition i.
- Adjusts the information gain by the entropy of the partitioning (SplitINFO); higher-entropy partitioning (a large number of small partitions) is penalized!
- Used in C4.5.
- Designed to overcome the disadvantage of information gain.
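A sketch of GAIN_split and GainRATIO_split built on the entropy helper above (same count-list representation):

def information_gain(parent_counts, children_counts):
    # GAIN = Entropy(parent) - sum_i (n_i / n) Entropy(child i)
    n = sum(parent_counts)
    weighted = sum(sum(c) / n * entropy(c) for c in children_counts)
    return entropy(parent_counts) - weighted

def gain_ratio(parent_counts, children_counts):
    # SplitINFO = - sum_i (n_i / n) log2(n_i / n); penalizes many small partitions
    n = sum(parent_counts)
    split_info = -sum(sum(c) / n * math.log2(sum(c) / n)
                      for c in children_counts if sum(c) > 0)
    gain = information_gain(parent_counts, children_counts)
    return gain / split_info if split_info > 0 else 0.0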

Splitting Criteria Based on Classification Error
Classification error at a node t:

Error(t) = 1 − max_j p(j|t)

- Measures the misclassification error made by a node.
- Maximum (1 − 1/nc) when records are equally distributed among all classes, implying the least interesting information.
- Minimum (0.0) when all records belong to one class, implying the most interesting information.

Examples for Computing Error

P(C1) = 0/6 = 0, P(C2) = 6/6 = 1. Error = 1 − max(0, 1) = 1 − 1 = 0.
P(C1) = 1/6, P(C2) = 5/6. Error = 1 − max(1/6, 5/6) = 1 − 5/6 = 1/6.
P(C1) = 2/6, P(C2) = 4/6. Error = 1 − max(2/6, 4/6) = 1 − 4/6 = 1/3.
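For the comparison on the next slide, the three measures can be evaluated side by side for a 2-class node (a sketch reusing gini and entropy from above):

def classification_error(counts):
    # Error(t) = 1 - max_j p(j|t)
    n = sum(counts)
    return 1.0 - max(counts) / n

for c1 in [0, 1, 2, 3]:              # class-1 count out of 6 records
    counts = [c1, 6 - c1]
    print(counts, gini(counts), entropy(counts), classification_error(counts))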

Comparison among Splitting Criteria

For a 2-class problem: (figure comparing Gini, entropy, and misclassification error as functions of the fraction of class-1 records; all three peak when the classes are evenly mixed and vanish when the node is pure).

Misclassification Error vs Gini
Example: the parent node has C1 = 7, C2 = 3 (Gini = 0.42). Splitting on A? sends (C1 = 3, C2 = 0) to node N1 and (C1 = 4, C2 = 3) to node N2:
Gini(N1) = 1 − (3/3)² − (0/3)² = 0
Gini(N2) = 1 − (4/7)² − (3/7)² = 0.489
Gini(Children) = 3/10 × 0 + 7/10 × 0.489 = 0.342
The Gini index improves (0.42 → 0.342), while the misclassification error stays at 3/10 before and after the split.

Tree Induction
Greedy strategy: split the records based on an attribute test that optimizes a certain criterion.

Issues:
- Determine how to split the records: How to specify the attribute test condition? How to determine the best split?
- Determine when to stop splitting.

Stopping Criteria for Tree Induction
- Stop expanding a node when all the records belong to the same class.
- Stop expanding a node when all the records have similar attribute values.
- Early termination (to be discussed later).

Decision Tree Based Classification
Advantages:
- Inexpensive to construct.
- Extremely fast at classifying unknown records.
- Easy to interpret for small-sized trees.
- Accuracy is comparable to other classification techniques for many simple data sets.

Example: C4.5
- Simple depth-first construction.
- Uses information gain.
- Sorts continuous attributes at each node.
- Needs the entire data set to fit in memory; unsuitable for large datasets (would need out-of-core sorting).

You can download the software from: http://www.cse.unsw.edu.au/~quinlan/c4.5r8.tar.gz

Practical Issues of Classification
- Underfitting and overfitting
- Missing values
- Costs of classification

Underfitting and Overfitting (Example)

500 circular and 500 triangular data points.

Circular points: 0.5 ≤ sqrt(x1² + x2²) ≤ 1.
Triangular points: sqrt(x1² + x2²) < 0.5 or sqrt(x1² + x2²) > 1.

Underfitting and Overfitting

Overfitting: as the tree grows, the test error starts to rise even though the training error keeps dropping. Underfitting: when the model is too simple, both training and test errors are large.

Overfitting due to Noise

The decision boundary is distorted by a noise point.

Overfitting due to Insufficient Examples

Lack of data points in the lower half of the diagram makes it difficult to correctly predict the class labels of that region: an insufficient number of training records in the region causes the decision tree to predict the test examples using other training records that are irrelevant to the classification task.

Notes on Overfitting
Overfitting results in decision trees that are more complex than necessary.

Training error no longer provides a good estimate of how well the tree will perform on previously unseen records

We need new ways of estimating errors.

Estimating Generalization Errors
- Re-substitution errors: error on the training set, e(t).
- Generalization errors: error on testing, e'(t).

Methods for estimating generalization errors:
- Optimistic approach: e'(t) = e(t).
- Pessimistic approach: for each leaf node, e'(t) = e(t) + 0.5; total errors e'(T) = e(T) + N × 0.5, where N is the number of leaf nodes. For a tree with 30 leaf nodes and 10 errors on training (out of 1000 instances): training error = 10/1000 = 1%, estimated generalization error = (10 + 30 × 0.5)/1000 = 2.5%.
- Reduced error pruning (REP): uses a validation data set to estimate the generalization error.
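A quick check of the pessimistic estimate (a sketch; the 0.5 penalty per leaf is the value used above):

def pessimistic_error(training_errors, num_leaves, num_instances, penalty=0.5):
    # e'(T) = (e(T) + N * penalty) / number of training instances
    return (training_errors + num_leaves * penalty) / num_instances

# Slide example: 10 training errors, 30 leaves, 1000 instances
print(pessimistic_error(10, 30, 1000))   # 0.025, i.e. 2.5%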

Occam's Razor
Given two models with similar generalization errors, one should prefer the simpler model over the more complex one: for a complex model, there is a greater chance that it was fitted accidentally by errors in the data. Therefore, one should include model complexity when evaluating a model.

Minimum Description Length (MDL)
Cost(Model, Data) = Cost(Data | Model) + Cost(Model)
- Cost is the number of bits needed for encoding; search for the least costly model.
- Cost(Data | Model) encodes the misclassification errors.
- Cost(Model) uses node encoding (number of children) plus splitting-condition encoding.

How to Address Overfitting
Pre-pruning (early stopping rule): stop the algorithm before it grows a fully-grown tree.
Typical stopping conditions for a node:
- Stop if all instances belong to the same class.
- Stop if all the attribute values are the same.
More restrictive conditions:
- Stop if the number of instances is less than some user-specified threshold.
- Stop if the class distribution of the instances is independent of the available features (e.g., using the χ² test).
- Stop if expanding the current node does not improve the impurity measures (e.g., Gini or information gain).

How to Address Overfitting
Post-pruning: grow the decision tree to its entirety, then trim the nodes of the tree in a bottom-up fashion. If the generalization error improves after trimming, replace the sub-tree by a leaf node whose class label is the majority class of the instances in the sub-tree. MDL can be used for post-pruning.

Example of Post-Pruning

At the sub-tree's root: Class = Yes: 20, Class = No: 10, so the training error before splitting is 10/30 and the pessimistic error is (10 + 0.5)/30 = 10.5/30.
The split produces four leaves with (Yes, No) counts (8, 4), (3, 4), (4, 1), (5, 1): the training error after splitting is 9/30 and the pessimistic error after splitting is (9 + 4 × 0.5)/30 = 11/30. Since 11/30 > 10.5/30: PRUNE!
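The prune/don't-prune comparison above can be written as a small check (a sketch; the count layout is illustrative):

def should_prune(node_counts, leaf_counts, penalty=0.5):
    # Pessimistic errors: sub-tree collapsed to one leaf vs. kept as the split's leaves
    before = (sum(node_counts) - max(node_counts)) + penalty
    after = sum(sum(c) - max(c) for c in leaf_counts) + penalty * len(leaf_counts)
    return after >= before   # prune when splitting does not lower the pessimistic error

# Slide example: root (20 Yes, 10 No); leaves (8,4), (3,4), (4,1), (5,1)
print(should_prune([20, 10], [[8, 4], [3, 4], [4, 1], [5, 1]]))   # True -> PRUNE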

Examples of Post-pruning
Optimistic error? Pessimistic error? Reduced error pruning?
Case 1: leaves (C0: 11, C1: 3) and (C0: 2, C1: 4). Case 2: leaves (C0: 14, C1: 3) and (C0: 2, C1: 2).
- Optimistic error: don't prune in either case.
- Pessimistic error: don't prune case 1, prune case 2.
- Reduced error pruning: depends on the validation set.

Handling Missing Attribute Values
Missing values affect decision tree construction in three different ways:
- How the impurity measures are computed.
- How an instance with a missing value is distributed to the child nodes.
- How a test instance with a missing value is classified.

Computing Impurity Measure

Before splitting: Entropy(Parent) = −0.3 log2(0.3) − 0.7 log2(0.7) = 0.8813.
Split on Refund (the record whose Refund value is missing is excluded from these counts):
Entropy(Refund = Yes) = 0
Entropy(Refund = No) = −(2/6) log2(2/6) − (4/6) log2(4/6) = 0.9183
Entropy(Children) = 0.3 × 0 + 0.6 × 0.9183 = 0.551
Gain = 0.9 × (0.8813 − 0.551) = 0.297

Distribute Instances

The probability that Refund = Yes is 3/9 and the probability that Refund = No is 6/9, so the record with the missing value is assigned to the left child (Refund = Yes) with weight 3/9 and to the right child (Refund = No) with weight 6/9.

Classify Instances
Using the decision tree from before, the weighted class counts among the records reaching the MarSt test are:

              Married   Single   Divorced   Total
Class = No       3         1         0        4
Class = Yes     6/9        1         1       2.67
Total          3.67        2         1       6.67

New record: the probability that Marital Status = Married is 3.67/6.67, and the probability that Marital Status = {Single, Divorced} is 3/6.67.

Scalable Decision Tree Induction Methods
- SLIQ (EDBT'96, Mehta et al.): builds an index for each attribute; only the class list and the current attribute list reside in memory.
- SPRINT (VLDB'96, J. Shafer et al.): constructs an attribute-list data structure.
- PUBLIC (VLDB'98, Rastogi & Shim): integrates tree splitting and tree pruning; stops growing the tree earlier.
- RainForest (VLDB'98, Gehrke, Ramakrishnan & Ganti): builds an AVC-list (attribute, value, class label).
- BOAT (PODS'99, Gehrke, Ganti, Ramakrishnan & Loh): uses bootstrapping to create several small samples.

Postscript: learning models
- Batch learning: training data → learning algorithm → classifier f → test.
- On-line learning: see a new point x, predict its label, see the true label y, update the classifier.

Preprocessing step

Generative and Discriminative Classifiers
Training classifiers involves estimating f: X → Y, or P(Y|X).

Discriminative classifiers:
- Assume some functional form for P(Y|X).
- Estimate the parameters of P(Y|X) directly from the training data.

Generative classifiers (also called "informative" by Rubinstein & Hastie):
- Assume some functional form for P(X|Y) and P(Y).
- Estimate the parameters of P(X|Y) and P(Y) directly from the training data.
- Use Bayes rule to calculate P(Y|X = xi).

Bayes Formula
P(Y|X) = P(X|Y) P(Y) / P(X)

Generative Model
(The class generates the observed features: color, size, texture, weight.)

Discriminative Model
(E.g., logistic regression maps the features color, size, texture, weight directly to the class.)

Comparison
Generative models:
- Assume some functional form for P(X|Y), P(Y).
- Estimate the parameters of P(X|Y), P(Y) directly from the training data.
- Use Bayes rule to calculate P(Y|X = x).
Discriminative models:
- Directly assume some functional form for P(Y|X).
- Estimate the parameters of P(Y|X) directly from the training data.

Probability Basics
Prior, conditional and joint probability for random variables:
- Prior probability: P(X).
- Conditional probability: P(X1|X2), P(X2|X1).
- Joint probability: P(X1, X2).
- Relationship: P(X1, X2) = P(X2|X1) P(X1) = P(X1|X2) P(X2).
- Independence: P(X2|X1) = P(X2), P(X1|X2) = P(X1), P(X1, X2) = P(X1) P(X2).
- Bayesian rule: P(C|X) = P(X|C) P(C) / P(X).

Quiz: We have two six-sided dice. When they are rolled, the following events can occur: (A) die 1 lands on side 3, (B) die 2 lands on side 1, and (C) the two dice sum to eight. Answer the following questions:

Probabilistic Classification
Establishing a probabilistic model for classification:
- Discriminative model: a single discriminative probabilistic classifier computes P(C|x) for an input x = (x1, ..., xn).
- Generative model: one generative probabilistic model P(x|C) per class (for class 1, class 2, ..., class L).

MAP classification rule
MAP: Maximum A Posteriori. Assign x to c* if P(C = c* | x) > P(C = c | x) for every c ≠ c*.

Generative classification with the MAP rule
Apply the Bayesian rule to convert the class-conditional likelihoods into posterior probabilities, P(C = ci | x) ∝ P(x | C = ci) P(C = ci) for i = 1, ..., L, and then apply the MAP rule.

Naïve Bayes
Bayes classification: P(C | X) ∝ P(X | C) P(C) = P(x1, ..., xn | C) P(C).
Difficulty: learning the joint probability P(x1, ..., xn | C).

Naïve Bayes classification
Assume that all input attributes are conditionally independent given the class:
P(x1, ..., xn | C) = P(x1 | C) P(x2 | C) ... P(xn | C)
MAP classification rule: for x = (x1, ..., xn), assign the label c* if
[P(x1 | c*) ... P(xn | c*)] P(c*) > [P(x1 | c) ... P(xn | c)] P(c) for all c ≠ c*.
For each class, the previous generative model decomposes into n generative models, one per input attribute.

Naïve Bayes Algorithm (for discrete input attributes)
Learning phase: given a training set S, estimate P(C = ci) and the conditional probabilities P(Xj = xjk | C = ci) for every attribute value and every class; the output is a set of conditional probability tables.
Test phase: given an unknown instance X' = (a1, ..., an), look up the tables and assign the label c* to X' if
[P(a1 | c*) ... P(an | c*)] P(c*) > [P(a1 | c) ... P(an | c)] P(c) for all c ≠ c*.
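A compact sketch of the discrete learning and test phases (the dictionary-of-counts representation and the function names are illustrative); applied to the Play Tennis data below it reproduces the 9/14, 2/9, 3/5, ... estimates:

from collections import Counter, defaultdict

def naive_bayes_learn(examples):
    # examples: list of (attribute_dict, class_label) pairs
    class_counts = Counter(c for _, c in examples)
    value_counts = defaultdict(Counter)          # (class, attribute) -> counts of values
    for attrs, c in examples:
        for a, v in attrs.items():
            value_counts[(c, a)][v] += 1
    priors = {c: n / len(examples) for c, n in class_counts.items()}
    return priors, value_counts, class_counts

def naive_bayes_classify(x, priors, value_counts, class_counts):
    # MAP rule: choose the class maximizing P(c) * prod_j P(x_j | c)
    scores = {}
    for c, prior in priors.items():
        score = prior
        for a, v in x.items():
            score *= value_counts[(c, a)][v] / class_counts[c]
        scores[c] = score
    return max(scores, key=scores.get), scores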

Example: Play Tennis

Learning Phase

Outlook    Play=Yes  Play=No      Temperature  Play=Yes  Play=No
Sunny        2/9       3/5        Hot            2/9       2/5
Overcast     4/9       0/5        Mild           4/9       2/5
Rain         3/9       2/5        Cool           3/9       1/5

Humidity   Play=Yes  Play=No      Wind         Play=Yes  Play=No
High         3/9       4/5        Strong         3/9       3/5
Normal       6/9       1/5        Weak           6/9       2/5

P(Play=Yes) = 9/14, P(Play=No) = 5/14.

Test Phase
Given a new instance x = (Outlook=Sunny, Temperature=Cool, Humidity=High, Wind=Strong), look up the tables:

P(Outlook=Sunny | Play=Yes) = 2/9       P(Outlook=Sunny | Play=No) = 3/5
P(Temperature=Cool | Play=Yes) = 3/9    P(Temperature=Cool | Play=No) = 1/5
P(Humidity=High | Play=Yes) = 3/9       P(Humidity=High | Play=No) = 4/5
P(Wind=Strong | Play=Yes) = 3/9         P(Wind=Strong | Play=No) = 3/5
P(Play=Yes) = 9/14                      P(Play=No) = 5/14

MAP rule:
P(Yes | x) ∝ [P(Sunny|Yes) P(Cool|Yes) P(High|Yes) P(Strong|Yes)] P(Play=Yes) = 0.0053
P(No | x) ∝ [P(Sunny|No) P(Cool|No) P(High|No) P(Strong|No)] P(Play=No) = 0.0206

Since P(Yes | x) < P(No | x), we label x as No.


Relevant Issues
Violation of the independence assumption:
- For many real-world tasks the attributes are not conditionally independent given the class.
- Nevertheless, naïve Bayes works surprisingly well anyway!
Zero conditional probability problem:
- If no training example of class ci contains a particular attribute value, its estimated conditional probability is 0, and during testing the whole product for that class becomes 0.
- As a remedy, the conditional probabilities are estimated with a smoothing correction so that no estimate is exactly zero.
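One common remedy (stated here as an assumption, since the slide's exact formula is not preserved in this transcript) is Laplace-style smoothing of each estimate:

def smoothed_conditional(n_value_and_class, n_class, n_distinct_values, alpha=1.0):
    # Estimate P(Xj = v | C = c) as (n_vc + alpha) / (n_c + alpha * |values of Xj|),
    # so an attribute value never seen with class c still gets a small nonzero probability.
    return (n_value_and_class + alpha) / (n_class + alpha * n_distinct_values)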

Relevant Issues (continued)
Continuous-valued input attributes:
- An attribute may take numberless (continuous) values, so its conditional probability is modeled with the normal distribution: P(Xj = x | C = ci) is a Gaussian with mean μji and standard deviation σji.
- Learning phase: for each class and each continuous attribute, estimate μji and σji from the training data; the output is a set of normal distributions together with P(C = ci).
- Test phase: calculate the conditional probabilities with all the normal distributions, then apply the MAP rule to make a decision.
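For a continuous attribute the table lookup is replaced by a normal density with the per-class mean and standard deviation estimated in the learning phase (a sketch):

import math

def gaussian_likelihood(x, mu, sigma):
    # P(Xj = x | C = c) modeled as a normal density N(x; mu, sigma)
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)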

Conclusions
- Naïve Bayes is based on the independence assumption. Training is very easy and fast, requiring only that each attribute be considered in each class separately; testing is straightforward, just looking up tables or calculating conditional probabilities with normal distributions.
- A popular generative model: performance is competitive with most state-of-the-art classifiers even when the independence assumption is violated, with many successful applications, e.g., spam mail filtering.
- A good candidate for a base learner in ensemble learning; apart from classification, naïve Bayes can do more.

(Diagram from the classification-task slides: Training Set → Learning algorithm (Induction) → Model → (Deduction) → Test Set.)

Training data used in the decision tree examples:

Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes


Test record routed through the tree (Tree Induction algorithm: induction on the training set produces the model, deduction applies it to the test set):

Refund  Marital Status  Taxable Income  Cheat
No      Married         80K             ?



Splitting on Taxable Income: (i) binary split "Taxable Income > 80K?" (Yes/No); (ii) multi-way split into < 10K, [10K, 25K), [25K, 50K), [50K, 80K), > 80K.

Candidate splits for "How to determine the Best Split" (20 records: C0 = 10, C1 = 10):
- Own Car?  Yes: C0 = 6, C1 = 4;  No: C0 = 4, C1 = 6.
- Car Type?  Family: C0 = 1, C1 = 3;  Sports: C0 = 8, C1 = 0;  Luxury: C0 = 1, C1 = 7.
- Student ID?  One record per value (c1, ..., c20), each leaf either C0 = 1, C1 = 0 or C0 = 0, C1 = 1.
Node impurity illustration: non-homogeneous (high impurity) C0 = 5, C1 = 5; homogeneous (low impurity) C0 = 9, C1 = 1.

Count tables for the GINI examples:

C1      0       1       2       3
C2      6       5       4       3
Gini  0.000   0.278   0.444   0.500

Binary split on B? (parent: C1 = 6, C2 = 6, Gini = 0.500): N1 gets C1 = 5, C2 = 2; N2 gets C1 = 1, C2 = 4.

Splits of CarType by Gini:

       {Sports, Luxury} | {Family}     {Sports} | {Family, Luxury}     Family | Sports | Luxury
C1             3        |    1             2     |       2                1   |    2    |   1
C2             2        |    4             1     |       5                4   |    1    |   1
Gini              0.400                      0.419                             0.393


Continuous attribute (Taxable Income) split evaluation: sorted values with their class labels, the candidate split positions, and the Gini index of each split.

Cheat            No   No   No   Yes  Yes  Yes  No   No   No   No
Taxable Income   60   70   75   85   90   95   100  120  125  220

Split position   55     65     72     80     87     92     97     110    122    172    230
Gini            0.420  0.400  0.375  0.343  0.417  0.400  0.300  0.343  0.375  0.400  0.420

The best split is Taxable Income ≤ 97, with Gini = 0.300.

Counts for the Misclassification Error vs Gini example: parent C1 = 7, C2 = 3 (Gini = 0.42); split A? gives N1 with C1 = 3, C2 = 0 and N2 with C1 = 4, C2 = 3.

Tables for the missing-value example:

Training data with a missing value (Tid 10 has Refund = ?):

Tid  Refund  Marital Status  Taxable Income  Class
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   ?       Single          90K             Yes

Count matrix for the Refund split:

             Class=Yes  Class=No
Refund=Yes       0          3
Refund=No        2          4
Refund=?         1          0

After distributing the instance with the missing Refund value:
- Refund = Yes child: Class=Yes 0 + 3/9, Class=No 3.
- Refund = No child:  Class=Yes 2 + 6/9, Class=No 4.

New record to classify:

Tid  Refund  Marital Status  Taxable Income  Class
11   No      ?               85K             ?