Data Science for Health Care Conference, May 15, 2019
Lect. Anuchate Pattanateepapon, D.Eng.
Section for Clinical Epidemiology and Biostatistics
Mahidol University Faculty of Medicine Ramathibodi Hospital © 2019
Decision Tree Algorithm
• Introduction
• Example
• Decision Tree Induction and Principles
  - Entropy
  - Information gain
• Evaluations
• Practice with Python
Outline
2
• A decision tree [1] is a flowchart-like structure in which each internal node represents a "test" on an attribute (e.g., whether a coin flip comes up heads or tails), each branch represents an outcome of the test, and each leaf node represents a class label (the decision taken after evaluating the attributes along the path).
• A decision tree is built with a greedy algorithm that recursively creates nodes and leaf nodes from top to bottom.
Introduction: What is a decision tree?
[Figure: a generic decision tree. The root node holds the full set of records Dt1; an internal node holds a subset such as Dt4; the leaf nodes hold subsets such as Dt5 and Dt6.]
3
• Decision trees are powerful and popular tools for classification and prediction.
• Decision trees represent rules that can be understood by humans and used in knowledge systems such as databases.
• Decision trees can be applied to binary or multi-class problems.
Introduction: Why decision tree?
Binary classes vs. multiple classes:
• Binary classes (e.g., "Does this patient have cancer?" → Cancer / Healthy): each test divides the values into two subsets, so the optimal partitioning must be found.
• Multiple classes (e.g., "Type of tumor?" → Benign / Premalignant / Malignant): a test may use as many partitions as there are distinct values.
• Test outcomes can take different forms, e.g. Yes / No, < a, > b, or an interval [a, b]; a continuous attribute such as Age can be split on a threshold (Age ≤ 18 → Child, Age > 18 → Adult).
4
Goal: Categorization
• Given an event, predict its category. Examples:
  • Is a person fit?
Introduction: A goal of decision tree
5
Introduction: Is a person fit?

[Figure: a decision tree for "Is a person fit?". The root node tests "Age < 30?"; the internal nodes test "Eats a lot of pizza?" and "Exercise in the morning?"; the leaf nodes are Fit or Unfit.]

https://www.xoriant.com/blog/product-engineering/decision-trees-machine-learning-algorithm.html
6
Goal: Categorization
• Given an event, predict its category. Examples:
  • Is a person fit?
  • Does the patient need to be sent to the hospital?
Introduction: A goal of decision tree
7
Introduction: Does the patient need to be sent to the hospital?

[Figure: a decision tree. Root node: "Does the patient have difficulty breathing?" Yes → Send to hospital; No → "Is the patient bleeding?" Yes → Send to hospital; No → Stay at home.]

https://www.samtalksml.net/helping-doctors-validate-decision-trees/
8
Goal: Categorization
• Given an event, predict its category. Examples:
  • Is a person fit?
  • Does the patient need to be sent to the hospital?
  • Which patients are at low or high risk of not surviving at least 30 days, based on the initial 24-hour data?
Introduction: A goal of decision tree
9
Introduction: Who is a low-risk or a high-risk patient?

[Figure: a decision tree. Root node: "Is the minimum systolic blood pressure over the initial 24-hour period > 91?" No → High risk; Yes → "Is age > 62.5?" No → Low risk; Yes → "Is sinus tachycardia present?" Yes → High risk; No → Low risk.]

https://onlinecourses.science.psu.edu/stat508/book/export/html/647
10
Goal: Categorization
• Given an event, predict its category. Examples:
  • Is a person fit?
  • Does the patient need to be sent to the hospital?
  • Which patients are at low or high risk of not surviving at least 30 days, based on the initial 24-hour data?
• Event = a list of samples. Example:
  • Hepatitis B vaccination: which travelers should get a Hepatitis B vaccine?
Introduction: A goal of decision tree
11
Introduction: Which travelers should get a Hepatitis B vaccine?

[Figure: a decision tree for HBV vaccination of international travelers (remark: travelers without evidence of immunization against HBV or previous HBV infection). The tests are: "Traveler is from a low HBV-endemic country?", "Long-term travel is planned and the traveler was born before implementation of EPI in their home country?", and "Travel is to an intermediate-to-high HBV-prevalence country and there are planned HBV exposure risks?". The leaves assign travelers to lists I–IV: HBV vaccination recommended (list I), HBV vaccination not recommended (lists II and III), or consider HBV vaccination (list IV).]

https://www.researchgate.net/figure/Decision-Tree-for-Hepatitis-B-vaccination-for-international-travelers-to-Asia_fig5_306273554
12
• Attribute-value description: each object or case must be expressible in terms of a fixed collection of properties or attributes (e.g., hot, mild, cold).
• Predefined classes (target values): the target function has discrete output values (Boolean or multi-class).
• Sufficient data: enough training cases must be provided to learn the model.

Key requirements
13
Training Data:

Tid | Refund | Marital Status | Taxable Income | Cheat
1   | Yes    | Single         | 125K           | No
2   | No     | Married        | 100K           | No
3   | No     | Single         | 70K            | No
4   | Yes    | Married        | 120K           | No
5   | No     | Divorced       | 95K            | Yes
6   | No     | Married        | 60K            | No
7   | Yes    | Divorced       | 220K           | No
8   | No     | Single         | 85K            | Yes
9   | No     | Married        | 75K            | No
10  | No     | Single         | 90K            | Yes

Model: Decision Tree (splitting attributes):
Refund?  Yes → NO (3 cases)
         No  → Marital Status?  Married → NO (3 cases)
                                Single, Divorced → Taxable Income?  < 80K → NO (1 case)
                                                                    > 80K → YES (3 cases)

Example of a decision tree
14
Another example of a decision tree

Training data: same as above.

Model: Decision Tree:
Marital Status?  Married → NO (4 cases)
                 Single, Divorced → Refund?  Yes → NO (2 cases)
                                             No  → Taxable Income?  < 80K → NO (1 case)
                                                                    > 80K → YES (3 cases)

There could be more than one tree that fits the same data!
15
• Given a set of training samples/cases/objects and their attribute values, try to determine the target attribute value of new examples.
  • Classification
  • Prediction

Introduction: The problem

[Figure: training data (historical records with attributes such as gender and age plus a label, e.g. f, 19, healthy) are fed to a learner, which produces a classifier; the classifier is then applied to testing data to predict the outcome (healthy/cancer).]
16
Model:
Refund?  Yes → NO
         No  → Marital Status?  Married → NO
                                Single, Divorced → Taxable Income?  < 80K → NO
                                                                    > 80K → YES

Unseen sample / testing sample:
Refund | Marital Status | Taxable Income | Cheat
No     | Married        | 80K            | ?

Start from the root of the tree.

Apply Model to Test Data
17
Traverse the tree with the unseen sample:
• Refund = No → follow the No branch.
• Marital Status = Married → follow the Married branch, which leads directly to the leaf node NO.
• Assign Cheat to "No".

Apply Model to Test Data
22
• Many algorithms:
  • Hunt's Algorithm (one of the earliest)
  • CART
  • ID3
  • C4.5
  • SLIQ
  • SPRINT
Decision Tree Induction
23
• Let Dt be the set of training records that reach a node t.
• General procedure:
  • If Dt contains records that all belong to the same class yt, then t is a leaf node labeled as yt.
  • If Dt is an empty set, then t is a leaf node labeled with the default class yd.
  • If Dt contains records that belong to more than one class, use an attribute test to split the data into smaller subsets, and recursively apply the procedure to each subset.

General Structure of Hunt's Algorithm

Tid | Refund | Marital Status | Taxable Income | Cheat
1   | Yes    | Single         | 125K           | No
2   | No     | Married        | 100K           | No
3   | No     | Single         | 70K            | No
4   | Yes    | Married        | 120K           | No
5   | No     | Divorced       | 95K            | Yes
6   | No     | Married        | 60K            | No
7   | Yes    | Divorced       | 220K           | No
8   | No     | Single         | 85K            | Yes
9   | No     | Married        | 75K            | No
10  | No     | Single         | 90K            | Yes
24
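The recursive procedure above can be sketched in a few lines of Python. This is an illustrative sketch only, not any library's exact implementation; choose_split is a hypothetical helper that picks an attribute test (e.g., by Gini or information gain) and returns a name plus a partitioning function, or None when no useful test remains.

from collections import Counter

def hunt(records, labels, choose_split, default_class):
    """Grow a tree top-down following the three cases of Hunt's algorithm."""
    if len(labels) == 0:                      # Dt is empty: leaf labeled with the default class yd
        return {"leaf": default_class}
    if len(set(labels)) == 1:                 # all records belong to one class yt: leaf labeled yt
        return {"leaf": labels[0]}
    majority = Counter(labels).most_common(1)[0][0]
    split = choose_split(records, labels)     # hypothetical helper: pick the best attribute test
    if split is None:                         # no useful test left: majority-class leaf
        return {"leaf": majority}
    test_name, partition = split              # partition(records, labels) -> {branch: (sub_records, sub_labels)}
    children = {branch: hunt(sub_r, sub_y, choose_split, majority)
                for branch, (sub_r, sub_y) in partition(records, labels).items()}
    return {"test": test_name, "children": children}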
Don’t
Cheat
Refund
Don’t
Cheat
Don’t
Cheat
Yes No
Refund
Don’t
Cheat
Yes No
Marital
Status
Don’t
Cheat
Cheat
Single,Divorced
Married
Taxable
Income
Don’t
Cheat
< 80K >= 80K
Refund
Don’t
Cheat
Yes No
Marital
Status
Don’t
CheatCheat
Single,Divorced
Married
General Structure of Hunt’s AlgorithmTid Refund Marital
Status Taxable Income
Cheat
1 Yes Single 125K No 2 No Married 100K No 3 No Single 70K No 4 Yes Married 120K No 5 No Divorced 95K Yes 6 No Married 60K No 7 Yes Divorced 220K No 8 No Single 85K Yes 9 No Married 75K No 10 No Single 90K Yes
25
• Greedy strategy:
  • split the records based on an attribute test that optimizes a certain criterion.
• Issues:
  • Determine how to split the records
    • How to specify the attribute test condition?
    • How to determine the best split?
  • Determine when to stop splitting

Tree Induction
26
• Greedy strategy:
  • split the records based on an attribute test that optimizes a certain criterion.
• Issues:
  • Determine how to split the records
    • How to specify the attribute test condition?
    • How to determine the best split?
  • Determine when to stop splitting

Tree Induction
27
• Depends on attribute types:
  - Nominal
  - Ordinal
  - Continuous
• Depends on number of ways to split:
  - 2-way split
  - Multi-way split

How to Specify Test Condition?
28
• Greedy strategy:
  • split the records based on an attribute test that optimizes a certain criterion.
• Issues:
  • Determine how to split the records
    • How to specify the attribute test condition?
    • How to determine the best split?
  • Determine when to stop splitting

Tree Induction
29
Before splitting: 10 records of class 0, 10 records of class 1.

Three candidate test conditions:
• Own Car?     Yes: C0 = 6, C1 = 4      No: C0 = 4, C1 = 6
• Car Type?    Family: C0 = 1, C1 = 3   Sports: C0 = 8, C1 = 0   Luxury: C0 = 1, C1 = 7
• Student ID?  c1 ... c10: C0 = 1, C1 = 0 each   c11 ... c20: C0 = 0, C1 = 1 each

Which test condition is the best?

How to determine the Best Split
30
• Greedy approach: nodes with a homogeneous class distribution are preferred.
• Need a measure of node impurity:

  C0: 5, C1: 5 — non-homogeneous, high degree of impurity
  C0: 9, C1: 1 — homogeneous, low degree of impurity

How to determine the Best Split
31
• Gini Index
• Entropy
• Misclassification error
Measures of Node Impurity
32
How to Find the Best Split

[Figure: before splitting, the parent node has class counts C0 = N00, C1 = N01 and impurity M0. Candidate split A? produces nodes N1 (C0 = N10, C1 = N11) and N2 (C0 = N20, C1 = N21), with impurities M1 and M2 combining to M12. Candidate split B? produces nodes N3 (C0 = N30, C1 = N31) and N4 (C0 = N40, C1 = N41), with impurities M3 and M4 combining to M34.]

Gain of A = M0 − M12 (GINI of the parent minus GINIsplit of A); gain of B = M0 − M34. Choose the split with the larger gain.
33
• Gini index for a given node t:

  GINI(t) = 1 − Σ_j [p(j | t)]²

  (NOTE: p(j | t) is the relative frequency of class j at node t.)

• Maximum (1 − 1/nc) when records are equally distributed among all classes, implying the least interesting information (nc = number of classes).
• Minimum (0.0) when all records belong to one class, implying the most interesting information.

  C1: 0, C2: 6 → Gini = 0.000
  C1: 2, C2: 4 → Gini = 0.444
  C1: 3, C2: 3 → Gini = 0.500
  C1: 1, C2: 5 → Gini = 0.278

Measure of Impurity: GINI
34
Examples for computing GINI

  GINI(t) = 1 − Σ_j [p(j | t)]²

C1: 0, C2: 6    P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
                Gini = 1 − P(C1)² − P(C2)² = 1 − 0 − 1 = 0

C1: 1, C2: 5    P(C1) = 1/6, P(C2) = 5/6
                Gini = 1 − (1/6)² − (5/6)² = 0.278

C1: 2, C2: 4    P(C1) = 2/6, P(C2) = 4/6
                Gini = 1 − (2/6)² − (4/6)² = 0.444

C1: 3, C2: 3    P(C1) = 3/6, P(C2) = 3/6
                Gini = 1 − (3/6)² − (3/6)² = 0.5
35
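A short Python check of these four node impurities (a minimal sketch; the gini() helper simply implements the formula above, and the expected outputs in the comments are rounded):

def gini(counts):
    """Gini index of a node, given its per-class record counts."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

print(gini([0, 6]))   # 0.000
print(gini([1, 5]))   # 0.278
print(gini([2, 4]))   # 0.444
print(gini([3, 3]))   # 0.500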
• Used in CART, SLIQ, SPRINT.
• When a node p is split into k partitions (children), the quality of the split is computed as

  GINI_split = Σ_{i=1..k} (n_i / n) · GINI(i)

  where n_i = number of records at child i, and n = number of records at node p.

Splitting Based on GINI
36
Binary Attributes: Computing GINI Index

Before splitting: parent node with C0 = 6, C1 = 6 (impurity M0).

Candidate split A?  Yes → Node N1: C0 = 5, C1 = 2    No → Node N2: C0 = 1, C1 = 4
Candidate split B?  Yes → Node N3: C0 = 4, C1 = 2    No → Node N4: C0 = 2, C1 = 4

Gain of A = GINI of M0 − GINIsplit of M12 = ?
Gain of B = GINI of M0 − GINIsplit of M34 = ?
37
Binary Attributes: Computing GINI Index

Splits into two partitions. Effect of weighting partitions: larger and purer partitions are sought.

A?  Yes → Node N1    No → Node N2

Parent: C0 = 6, C1 = 6, Gini = 0.500

      N1   N2
C0     5    1
C1     2    4

Gini(N1) = 1 − (5/7)² − (2/7)² = 0.408
Gini(N2) = 1 − (1/5)² − (4/5)² = 0.320
Gini(children), i.e. Gini_split (Gini(N12)) = 7/12 × 0.408 + 5/12 × 0.320 = 0.371
38
Binary Attributes: Computing GINI Index

Splits into two partitions. Effect of weighting partitions: larger and purer partitions are sought.

B?  Yes → Node N3    No → Node N4

Parent: C0 = 6, C1 = 6, Gini = 0.500

      N3   N4
C0     4    2
C1     2    4

Gini(N3) = 1 − (4/6)² − (2/6)² = 0.444
Gini(N4) = 1 − (2/6)² − (4/6)² = 0.444
Gini(children), i.e. Gini_split (Gini(N34)) = 6/12 × 0.444 + 6/12 × 0.444 = 0.444
39
Binary Attributes: Computing GINI Index

Before splitting: parent node with C0 = 6, C1 = 6, Gini M0 = 0.500.

Split A:  N1 (C0 = 5, C1 = 2), N2 (C0 = 1, C1 = 4)  →  Gini_split (M12) = 0.371
Split B:  N3 (C0 = 4, C1 = 2), N4 (C0 = 2, C1 = 4)  →  Gini_split (M34) = 0.444

Gain of A = 0.500 − 0.371 = 0.129
Gain of B = 0.500 − 0.444 = 0.056
Split A is therefore preferred.
40
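A minimal sketch that reproduces the two gains above; gini_split() implements the weighted GINI_split formula from the earlier slide:

def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def gini_split(children):
    """Weighted Gini of a split; children is a list of per-class count lists."""
    n = sum(sum(child) for child in children)
    return sum(sum(child) / n * gini(child) for child in children)

parent = [6, 6]                                  # C0 = 6, C1 = 6 -> Gini = 0.500
split_a = [[5, 2], [1, 4]]                       # nodes N1 and N2
split_b = [[4, 2], [2, 4]]                       # nodes N3 and N4
print(gini(parent) - gini_split(split_a))        # gain of A, about 0.129
print(gini(parent) - gini_split(split_b))        # gain of B, about 0.056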
Categorical Attributes: Computing Gini Index

Before splitting: 4 records of class 1, 6 records of class 2.

Three candidate test conditions on Car Type:
• Multi-way:  Family (C1 = 1, C2 = 4)   Sport (C1 = 2, C2 = 1)   Luxury (C1 = 1, C2 = 1)
• Two-way:    {Sport, Luxury} (C1 = 3, C2 = 2)   vs   {Family} (C1 = 1, C2 = 4)
• Two-way:    {Sport} (C1 = 2, C2 = 1)   vs   {Family, Luxury} (C1 = 2, C2 = 5)

Which test condition is the best?
41
• For each distinct value, gather counts for each class in the dataset.
• Use the count matrix to make decisions.

Categorical Attributes: Computing Gini Index
Before splitting: 4 records of class 1, 6 records of class 2, Gini = 0.480.

Multi-way split:
CarType   Family   Sports   Luxury
C1           1        2        1
C2           4        1        1
Gini = 0.393

Two-way split (find the best partition of values):
CarType   {Sports, Luxury}   {Family}
C1               3               1
C2               2               4
Gini = 0.400

CarType   {Sports}   {Family, Luxury}
C1            2              2
C2            1              5
Gini = 0.419

Gain (multi-way split) = 0.480 − 0.393 = 0.087
42
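The three Gini values in the count matrices above can be checked with the same helpers (a sketch; gini() and gini_split() are redefined here so the snippet stands alone):

def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def gini_split(children):
    n = sum(sum(child) for child in children)
    return sum(sum(child) / n * gini(child) for child in children)

multi_way = [[1, 4], [2, 1], [1, 1]]             # Family, Sports, Luxury
two_way_a = [[3, 2], [1, 4]]                     # {Sports, Luxury} vs {Family}
two_way_b = [[2, 1], [2, 5]]                     # {Sports} vs {Family, Luxury}
for split in (multi_way, two_way_a, two_way_b):
    print(round(gini_split(split), 3))           # 0.393, 0.4, 0.419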
Categorical Attributes: Computing Gini Index

Before splitting: 10 records of class 0, 10 records of class 1.

Candidate test conditions (as before):
• Own Car?     Yes: C0 = 6, C1 = 4      No: C0 = 4, C1 = 6
• Car Type?    Family: C0 = 1, C1 = 3   Sports: C0 = 8, C1 = 0   Luxury: C0 = 1, C1 = 7
• Student ID?  c1 ... c10: C0 = 1, C1 = 0 each   c11 ... c20: C0 = 0, C1 = 1 each

Which test condition is the best?
43
Continuous Attributes: Computing Gini Index

• Use binary decisions based on one value (e.g., Taxable Income > 80K? Yes / No).
• Several choices for the splitting value:
  • number of possible splitting values = number of distinct values.
• Each splitting value v has a count matrix associated with it:
  • class counts in each of the partitions, A < v and A ≥ v.
• Simple method to choose the best v:
  • for each v, scan the database to gather the count matrix and compute its Gini index;
  • computationally inefficient! Repetition of work.

(Training data as in the earlier example: Tid, Refund, Marital Status, Taxable Income, Cheat.)
44
• For efficient computation, for each attribute:
  • sort the attribute on its values;
  • linearly scan these values, each time updating the count matrix and computing the Gini index;
  • choose the split position that has the least Gini index.

Sorted values (Taxable Income): 60  70  75  85  90  95  100  120  125  220
Cheat:                          No  No  No  Yes Yes Yes No   No   No   No

Split positions:  55    65    72    80    87    92    97    110   122   172   230
Yes (<=, >):      0,3   0,3   0,3   0,3   1,2   2,1   3,0   3,0   3,0   3,0   3,0
No  (<=, >):      0,7   1,6   2,5   3,4   3,4   3,4   3,4   4,3   5,2   6,1   7,0
Gini:             0.420 0.400 0.375 0.343 0.417 0.400 0.300 0.343 0.375 0.400 0.420

The best split is at position 97, which has the smallest Gini index (0.300).

Continuous Attributes: Computing Gini Index
45
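A brute-force Python sketch of this threshold search on the Taxable Income column (for clarity it re-partitions the data at every candidate midpoint; the efficient single-scan version described above would instead update running class counts). The midpoint 97.5 corresponds to the slide's split position 97.

def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

income = [125, 100, 70, 120, 95, 60, 220, 85, 75, 90]                    # Taxable Income (K)
cheat  = ['No', 'No', 'No', 'No', 'Yes', 'No', 'No', 'Yes', 'No', 'Yes']

pairs = sorted(zip(income, cheat))
best = None
for i in range(len(pairs) - 1):
    threshold = (pairs[i][0] + pairs[i + 1][0]) / 2                      # midpoint of adjacent values
    left  = [c for v, c in pairs if v <= threshold]
    right = [c for v, c in pairs if v > threshold]
    weighted = (len(left) * gini([left.count('Yes'), left.count('No')]) +
                len(right) * gini([right.count('Yes'), right.count('No')])) / len(pairs)
    if best is None or weighted < best[1]:
        best = (threshold, weighted)
print(best)                                                              # (97.5, 0.3)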
• Entropy at a given node t:

  Entropy(t) = − Σ_j p(j | t) log2 p(j | t)

  (NOTE: p(j | t) is the relative frequency of class j at node t.)

• Measures the homogeneity of a node.
  • Maximum (log nc) when records are equally distributed among all classes, implying the least information (nc = number of classes).
  • Minimum (0.0) when all records belong to one class, implying the most information.
• Entropy-based computations are similar to the GINI index computations.

Alternative Splitting Criteria based on INFO
46
Examples for computing Entropy

  Entropy(t) = − Σ_j p(j | t) log2 p(j | t)

C1: 0, C2: 6    P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
                Entropy = − 0 log 0 − 1 log 1 = − 0 − 0 = 0

C1: 1, C2: 5    P(C1) = 1/6, P(C2) = 5/6
                Entropy = − (1/6) log2 (1/6) − (5/6) log2 (5/6) = 0.65

C1: 2, C2: 4    P(C1) = 2/6, P(C2) = 4/6
                Entropy = − (2/6) log2 (2/6) − (4/6) log2 (4/6) = 0.92

C1: 3, C2: 3    P(C1) = 3/6, P(C2) = 3/6
                Entropy = − (3/6) log2 (3/6) − (3/6) log2 (3/6) = 1
47
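The same four nodes, checked in Python (a minimal sketch; 0·log 0 is treated as 0, and the comments show rounded values):

from math import log2

def entropy(counts):
    """Entropy of a node, given its per-class record counts."""
    n = sum(counts)
    return -sum((c / n) * log2(c / n) for c in counts if c > 0)

print(entropy([0, 6]))   # 0.0
print(entropy([1, 5]))   # 0.65
print(entropy([2, 4]))   # 0.92
print(entropy([3, 3]))   # 1.0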
• Information gain:

  GAIN_split = Entropy(p) − Σ_{i=1..k} (n_i / n) Entropy(i)

  Parent node p is split into k partitions; n_i is the number of records in partition i.

• Measures the reduction in entropy achieved because of the split. Choose the split that achieves the most reduction (maximizes GAIN).
• Used in ID3 and C4.5.
• Disadvantage: tends to prefer splits that result in a large number of partitions, each being small but pure.

Splitting Based on INFO...
48
• Gain ratio:

  GainRATIO_split = GAIN_split / SplitINFO,   where   SplitINFO = − Σ_{i=1..k} (n_i / n) log (n_i / n)

  Parent node p is split into k partitions; n_i is the number of records in partition i.

• Adjusts information gain by the entropy of the partitioning (SplitINFO). Higher-entropy partitioning (a large number of small partitions) is penalized!
• Used in C4.5.
• Designed to overcome the disadvantage of information gain.

Splitting Based on INFO...
49
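Both criteria can be written as plain functions of class counts (an illustrative sketch only; the parent/children counts at the end are a hypothetical example, not taken from the slides):

from math import log2

def entropy(counts):
    n = sum(counts)
    return -sum((c / n) * log2(c / n) for c in counts if c > 0)

def info_gain(parent, children):
    """GAIN_split = Entropy(parent) - weighted entropy of the children."""
    n = sum(parent)
    return entropy(parent) - sum(sum(ch) / n * entropy(ch) for ch in children)

def gain_ratio(parent, children):
    """GainRATIO_split = GAIN_split / SplitINFO."""
    n = sum(parent)
    split_info = -sum(sum(ch) / n * log2(sum(ch) / n) for ch in children if sum(ch) > 0)
    return info_gain(parent, children) / split_info

parent = [6, 6]                       # hypothetical parent class counts
children = [[5, 2], [1, 4]]           # hypothetical two-way split
print(info_gain(parent, children))
print(gain_ratio(parent, children))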
• Classification error at a node t:

  Error(t) = 1 − max_i P(i | t)

• Measures the misclassification error made by a node.
  • Maximum (1 − 1/nc) when records are equally distributed among all classes, implying the least interesting information.
  • Minimum (0.0) when all records belong to one class, implying the most interesting information.

Splitting Criteria based on Classification Error
50
Examples for Computing Error

  Error(t) = 1 − max_i P(i | t)

C1: 0, C2: 6    P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
                Error = 1 − max(0, 1) = 1 − 1 = 0

C1: 1, C2: 5    P(C1) = 1/6, P(C2) = 5/6
                Error = 1 − max(1/6, 5/6) = 1 − 5/6 = 1/6

C1: 2, C2: 4    P(C1) = 2/6, P(C2) = 4/6
                Error = 1 − max(2/6, 4/6) = 1 − 4/6 = 1/3

C1: 3, C2: 3    P(C1) = 3/6, P(C2) = 3/6
                Error = 1 − max(3/6, 3/6) = 1 − 3/6 = 1/2
51
For a 2-class problem:
Comparison among Splitting Criteria
52
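For the two-class case the three criteria can be plotted against p, the fraction of records in class 1 (a sketch of the kind of comparison figure this slide refers to, using the formulas from the previous slides):

import numpy as np
import matplotlib.pyplot as plt

p = np.linspace(0.001, 0.999, 200)                       # fraction of records in class 1
gini = 2 * p * (1 - p)                                   # 1 - p^2 - (1 - p)^2
entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
error = 1 - np.maximum(p, 1 - p)

plt.plot(p, entropy, label='Entropy')
plt.plot(p, gini, label='Gini')
plt.plot(p, error, label='Misclassification error')
plt.xlabel('p (fraction of records in class 1)')
plt.legend()
plt.show()

All three measures peak when the classes are evenly mixed (p = 0.5) and vanish when the node is pure (p = 0 or 1).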
• Greedy strategy:
  • split the records based on an attribute test that optimizes a certain criterion.
• Issues:
  • Determine how to split the records
    • How to specify the attribute test condition?
    • How to determine the best split?
  • Determine when to stop splitting

Tree Induction
53
• Stop expanding a node when all the records belong to the same class
• Stop expanding a node when all the records have similar attribute values
• Early termination (to be discussed later)
Stopping Criteria for Tree Induction
54
Underfitting and Overfitting
• Underfitting: when the model is too simple, both training and test errors are large.
• Overfitting: when the model is too complex, the training error is small but the test error becomes large.
55
A lack of data points in the lower half of the diagram makes it difficult to correctly predict the class labels in that region.
An insufficient number of training records in the region causes the decision tree to predict the test examples using other training records that are irrelevant to the classification task.

Overfitting due to Insufficient Examples
56
• Overfitting results in decision trees that are more complex than necessary
• Training error no longer provides a good estimate of how well the tree will perform on previously unseen records
• Need new ways for estimating errors
Notes on Overfitting
57
• Pre-pruning (early stopping rule)
  • Stop the algorithm before it becomes a fully grown tree.
  • Typical stopping conditions for a node:
    • stop if all instances belong to the same class;
    • stop if all the attribute values are the same.
  • More restrictive conditions:
    • stop if the number of instances is less than some user-specified threshold;
    • stop if the class distribution of instances is independent of the available features (e.g., using the χ² test);
    • stop if expanding the current node does not improve impurity measures (e.g., Gini or information gain).
How to Address Overfitting
59
• Post-pruning
  • Grow the decision tree to its entirety.
  • Trim the nodes of the decision tree in a bottom-up fashion.
  • If the generalization error improves after trimming, replace the sub-tree with a leaf node.
  • The class label of the leaf node is determined from the majority class of instances in the sub-tree.
How to Address Overfitting
60
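In scikit-learn these two strategies roughly correspond to the constructor limits (pre-pruning) and to cost-complexity pruning via ccp_alpha (post-pruning). The sketch below reuses the make_blobs toy data from the practice section; the particular parameter values and the choice of alpha are arbitrary examples and would normally be tuned, e.g. by cross-validation.

from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_blobs(n_samples=300, centers=4, random_state=0, cluster_std=1.0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Pre-pruning: stop growing the tree early via constructor limits
pre = DecisionTreeClassifier(max_depth=4, min_samples_split=10,
                             min_impurity_decrease=0.01).fit(X_train, y_train)

# Post-pruning: grow fully, then prune by cost complexity (larger ccp_alpha = more pruning)
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_train, y_train)
alpha = path.ccp_alphas[len(path.ccp_alphas) // 2]       # arbitrary middle value, for illustration
post = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha).fit(X_train, y_train)

print('pre-pruned test accuracy :', pre.score(X_test, y_test))
print('post-pruned test accuracy:', post.score(X_test, y_test))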
Example of Post-Pruning
Gini(after split) = 0.4444 > Gini(before split) = 0.0425
https://www.kdnuggets.com/2018/12/guide-decision-trees-machine-learning-data-science.html
61
• Training accuracy
  • How many training instances can be correctly classified based on the available data?
  • It is high when the tree is deep/large, or when there is little conflict in the training instances.
  • However, higher training accuracy does not mean good generalization.
• Testing accuracy
  • Given a number of new instances, how many of them can we correctly classify?
  • Cross-validation
Evaluation
62
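Cross-validation can be done directly with scikit-learn's cross_val_score (a sketch on the same toy data; 5 folds is an arbitrary choice):

from sklearn.datasets import make_blobs
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_blobs(n_samples=300, centers=4, random_state=0, cluster_std=1.0)

# 5-fold cross-validation: fit on 4 folds, score on the held-out fold, repeat 5 times
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5)
print('fold accuracies:', scores)
print('mean accuracy  :', scores.mean())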
Creating a decision tree with python(two-dimensional data, which has four class labels)
https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html
63
Creating a decision tree(two-dimensional data, which has four class labels)
conda install -c anaconda numpy
conda install -c conda-forge matplotlib
conda install -c anaconda seaborn
conda install -c anaconda scikit-learn
64
Creating a decision tree(two-dimensional data, which has four class labels)
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
from sklearn.datasets import make_blobs

X, y = make_blobs(n_samples=300, centers=4, random_state=0, cluster_std=1.0)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='rainbow');
65
Creating a decision tree(two-dimensional data, which has four class labels)
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
from sklearn.datasets import make_blobs

X, y = make_blobs(n_samples=300, centers=4, random_state=0, cluster_std=1.0)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='rainbow');

Import the numpy library (scientific computing) and assign it to the variable np.
Import matplotlib.pyplot and assign it to plt; import seaborn (plot styling) and assign it to sns, with default settings.
The sklearn.datasets package embeds some small toy datasets and can generate isotropic Gaussian blobs for clustering.
make_blobs: number of samples = 300, number of clusters = 4, random state for dataset creation = 0, cluster standard deviation = 1.0.
plt.scatter: x axis = X[:, 0], y axis = X[:, 1], colors by class = y, point size = 50, colormap = 'rainbow'.
66
Creating a decision tree(two-dimensional data, which has four class labels)
67
Creating a decision tree(two-dimensional data, which has four class labels)
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier().fit(X, y)
68
Creating a decision tree(two-dimensional data, which has four class labels)
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier().fit(X, y)

Import DecisionTreeClassifier from sklearn.tree (machine learning).
Create a decision tree classifier by fitting it to the data X and the classes y.
69
Creating a decision tree(two-dimensional data, which has four class labels)
# Plot confusion matrix
from sklearn.metrics import confusion_matrix

y_pred = tree.predict(X)
confusion = confusion_matrix(y, y_pred)
print('Confusion Matrix : \n', confusion)

Import confusion_matrix from sklearn.metrics.
Obtain the predicted y with the predict() method.
Generate the confusion matrix from the training y and the predicted y.
70
Creating a decision tree(two-dimensional data, which has four class labels)
# Plot confusion matrix
# True Positives
TP = confusion[1, 1]
# True Negatives
TN = confusion[0, 0]
# False Positives
FP = confusion[0, 1]
# False Negatives
FN = confusion[1, 0]

Read the true positives, true negatives, false positives, and false negatives from the confusion matrix (these indices correspond to TP/TN/FP/FN for a 2x2, i.e. binary, confusion matrix).
71
Creating a decision tree(two-dimensional data, which has four class labels)
# Plot confusion matrix
result = tree.score(X, y)
print('accuracy from DT method: ', result)
print('accuracy: ', (TP + TN) / float(TP + TN + FP + FN))
print('sensitivity: ', TP / float(TP + FN))
print('specificity: ', TN / float(TN + FP))

Compute the accuracy with tree.score() from the decision tree, then compute accuracy, sensitivity, and specificity from the confusion-matrix counts.
72
Creating a decision tree(two-dimensional data, which has four class labels)
# classification report
from sklearn.metrics import classification_report

target_names = ['class 0', 'class 1', 'class 2', 'class 3']
print(classification_report(y, y_pred, target_names=target_names))

Import classification_report from sklearn.metrics, set the four target names, and generate the classification report.
73
Creating a decision tree(two-dimensional data, which has four class labels)
# test model with new data
X_test, y_test = make_blobs(n_samples=300, centers=4, random_state=0, cluster_std=1.2)

y_predict = tree.predict(X_test)
confusion = confusion_matrix(y_test, y_predict)
print('Confusion Matrix of test data : \n', confusion)

The cluster standard deviation is changed to 1.2.
74
Creating a decision tree(two-dimensional data, which has four class labels)
# classification report of test data
target_names = ['class 0', 'class 1', 'class 2', 'class 3']
print(classification_report(y_test, y_predict, target_names=target_names))
75
Creating a decision tree(two-dimensional data, which has four class labels)
[Figures: scatter plots of the training data and the testing data.]
76
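Plots like the training/testing figures on this slide can be produced by evaluating the fitted tree on a dense grid and drawing the predicted regions. This is one possible way to generate them, not necessarily how the slide's figures were made:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.tree import DecisionTreeClassifier

X, y = make_blobs(n_samples=300, centers=4, random_state=0, cluster_std=1.0)
tree = DecisionTreeClassifier().fit(X, y)

# Evaluate the tree on a grid that covers the feature space
xx, yy = np.meshgrid(np.linspace(X[:, 0].min() - 1, X[:, 0].max() + 1, 300),
                     np.linspace(X[:, 1].min() - 1, X[:, 1].max() + 1, 300))
zz = tree.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

plt.contourf(xx, yy, zz, alpha=0.3, cmap='rainbow')            # predicted decision regions
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='rainbow')       # data points
plt.show()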
Apply Decision Tree with Diabetes Dataset
77
Age = age
Glu = blood glucose concentration (blood glucose level), in mg/dL (milligrams per deciliter)
Hdl = high-density lipoprotein ("good" cholesterol), in mg/dL
Ldl = low-density lipoprotein ("bad" cholesterol), in mg/dL
Bmi = Body Mass Index (BMI), the ratio of body weight to height
Sex = sex
Dm = diabetes mellitus
78
Creating a decision tree(Diabetes Dataset)
# import libraries
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
import pandas
from sklearn.model_selection import train_test_split

Import train_test_split from sklearn.model_selection.
79
Creating a decision tree(Diabetes Dataset)
df = pandas.read_csv('H:/simpredm.csv')
print(df)
df['sex'] = df['sex'].map({'Male': 1, 'Female': 0})
df['dm'] = df['dm'].map({'yes': 1, 'no': 0})
print(df)
X = np.array(df)
print(X)
y_temp = X[:, 6]
X_temp = X[:, 0:6]

Import the simpredm.csv data with pandas.read_csv().
Recode the string values of sex and dm to numbers (needed for the scatter plot).
Convert the csv data to a numpy array.
Assign the temporary y (the dm column) and the temporary X (the first six columns).
80
Creating a decision tree(Diabetes Dataset)
np.random.seed(415)
X_train, X_test, y_train, y_test = train_test_split(X_temp, y_temp, test_size=0.33, random_state=42)

plt.scatter(X_train[:, 0], X_train[:, 3], c=y_train, s=50, cmap='rainbow');

Set the global seed to fix the random number generation.
Split the data into training and testing sets, ratio = 67 : 33, with random state = 42.
Select columns 1 and 4 to plot.
81
Creating a decision tree(Diabetes Dataset)
# Create Decision Tree Model
from sklearn.tree import DecisionTreeClassifier

tree = DecisionTreeClassifier().fit(X_train, y_train)

# Plot confusion matrix
from sklearn.metrics import confusion_matrix

y_pred = tree.predict(X_test)
confusion = confusion_matrix(y_test, y_pred)
print('Confusion Matrix : \n', confusion)
82
Creating a decision tree(Diabetes Dataset)
# True Positives
TP = confusion[1, 1]
# True Negatives
TN = confusion[0, 0]
# False Positives
FP = confusion[0, 1]
# False Negatives
FN = confusion[1, 0]
print('accuracy: ', (TP + TN) / float(TP + TN + FP + FN))
print('sensitivity: ', TP / float(TP + FN))
print('specificity: ', TN / float(TN + FP))
83
Creating a decision tree(Diabetes Dataset)
# classification report
from sklearn.metrics import classification_report

# After the earlier mapping, class 0 = 'no' (Healthy) and class 1 = 'yes' (DM)
target_names = ['Healthy', 'DM']
print(classification_report(y_test, y_pred, target_names=target_names))
84
Creating a decision tree(Diabetes Dataset)
85
Decision Tree Algorithms

Algorithm | Measure | Procedure | Pruning
CART | Gini diversity index | Constructs a binary decision tree | Post-pruning based on a cost-complexity measure
ID3 | Entropy (information gain) | Top-down decision tree construction | Pre-pruning using a single-pass algorithm
C4.5 | Entropy (information gain / gain ratio) | Top-down decision tree construction | Pre-pruning using a single-pass algorithm
SLIQ | Gini index | Decision tree construction in a breadth-first manner | Post-pruning based on the Minimum Description Length principle
SPRINT | Gini index | Decision tree construction in a breadth-first manner | Post-pruning based on the Minimum Description Length principle
86
Training accuracy
• How many training samples could be correctly classified?
• Training accuracy increases when the tree is deep or when there is little conflict in the training samples.
• Higher training accuracy ≠ good generalization.

Testing accuracy
• How many testing samples could be correctly classified?
• Cross-validation
How to evaluate decision trees performance
87
Pros
• Can generate understandable rules.
• Perform classification without much computation.
• Can handle both continuous and categorical variables.
• Give a clear indication of which features/variables are most important for classification or prediction.
Pros and Cons of decision trees
88
Cons
• Do not perform well when used to predict a continuous attribute.
• Do not perform well with many classes and small data.
• Do not perform well with non-rectangular regions.
• Computationally expensive to train a model:
  - feature candidates for node splitting must be sorted before the best split can be found;
  - pruning algorithms can also be expensive, since many candidate sub-trees must be formed and compared.
Pros and Cons of decision trees
89
References
[1] A. Lauren, G. L. Gabriel, and W. Joschua, "CS109 Data Science", School of Engineering and Applied Sciences, Harvard, [online] http://cs109.github.io/2015/, [15 January 2019].
[2] Al Khlil, "Suite of decision tree-based classification algorithms on cancer gene expression data", Egyptian Informatics Journal, Vol. 12, pp. 73–82, 2011.
[3] V. Podgorelec, P. Kokol, B. Stiglic, and I. Rozman, "Decision Trees: An Overview and Their Use in Medicine", Journal of Medical Systems, Vol. 26, pp. 445–463, 2002.
90
Regression Tree
91
Regression Tree
92