
Page 1

Statistisches Data Mining (StDM), Week 11

Oliver Dürr
Institut für Datenanalyse und Prozessdesign, Zürcher Hochschule für Angewandte Wissenschaften
[email protected]
Winterthur, 29 November 2016


Page 2

Multitasking reduces learning efficiency:
•  No laptops during the theory lessons; lid closed or almost closed (sleep mode)

Page 3

Overview of classification (until the end of the semester)

Classifiers
•  K-Nearest-Neighbors (KNN)
•  Logistic Regression
•  Linear Discriminant Analysis
•  Support Vector Machine (SVM)
•  Classification Trees
•  Neural Networks (NN)
•  Deep Neural Networks (e.g. CNN, RNN)
•  …

Evaluation
•  Cross validation
•  Performance measures
•  ROC Analysis / Lift Charts

Combining classifiers
•  Bagging
•  Boosting
•  Random Forest

Theoretical Guidance / General Ideas
•  Bayes Classifier
•  Bias-Variance Trade-off (Overfitting)

Feature Engineering
•  Feature Extraction
•  Feature Selection

Page 4

Decision Trees (Chapter 8.1 in ISLR)

Page 5

Note on ISLR

•  ISLR also covers trees for regression; here we focus on trees for classification.

Page 6

Example of a Decision Tree for classification

Training Data:

Tid | Refund | Marital Status | Taxable Income | Cheat
1   | Yes    | Single         | 125K           | No
2   | No     | Married        | 100K           | No
3   | No     | Single         | 70K            | No
4   | Yes    | Married        | 120K           | No
5   | No     | Divorced       | 95K            | Yes
6   | No     | Married        | 60K            | No
7   | Yes    | Divorced       | 220K           | No
8   | No     | Single         | 85K            | Yes
9   | No     | Married        | 75K            | No
10  | No     | Single         | 90K            | Yes

Model: Decision Tree (splitting attributes: Refund, MarSt, TaxInc)

Refund = Yes                -> NO
Refund = No:
  MarSt = Married           -> NO
  MarSt = Single, Divorced:
    TaxInc < 80K            -> NO
    TaxInc > 80K            -> YES

Source: Tan, Steinbach, Kumar
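As a small illustration (an addition, not on the original slide), this tree can be written directly as a rule in R; predict_cheat is a hypothetical helper name:

predict_cheat <- function(refund, marital_status, taxable_income) {
  # Follow the splits of the tree above from the root to a leaf
  if (refund == "Yes") return("No")
  if (marital_status == "Married") return("No")
  if (taxable_income < 80) return("No")   # taxable income in units of 1000 (K)
  "Yes"
}
predict_cheat("No", "Single", 90)  # "Yes", matching record 10 of the training data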

Page 7

Another Example of a Decision Tree

Training Data: the same table as on Page 6.

MarSt = Married             -> NO
MarSt = Single, Divorced:
  Refund = Yes              -> NO
  Refund = No:
    TaxInc < 80K            -> NO
    TaxInc > 80K            -> YES

There could be more than one tree that fits the same data!

Source: Tan, Steinbach, Kumar

Page 8

Decision Tree Classification Task

[Diagram: the Training Set is fed into a tree induction algorithm, which learns a Model, a decision tree (induction); the Model is then applied to the Test Set (deduction).]

Training Set:
Tid | Attrib1 | Attrib2 | Attrib3 | Class
1   | Yes     | Large   | 125K    | No
2   | No      | Medium  | 100K    | No
3   | No      | Small   | 70K     | No
4   | Yes     | Medium  | 120K    | No
5   | No      | Large   | 95K     | Yes
6   | No      | Medium  | 60K     | No
7   | Yes     | Large   | 220K    | No
8   | No      | Small   | 85K     | Yes
9   | No      | Medium  | 75K     | No
10  | No      | Small   | 90K     | Yes

Test Set:
Tid | Attrib1 | Attrib2 | Attrib3 | Class
11  | No      | Small   | 55K     | ?
12  | Yes     | Medium  | 80K     | ?
13  | Yes     | Large   | 110K    | ?
14  | No      | Small   | 95K     | ?
15  | No      | Large   | 67K     | ?

Source: Tan, Steinbach, Kumar

Page 9

Apply Model to Test Data

[Figure: the decision tree from Page 6, applied to the test record]

Test Data:
Refund | Marital Status | Taxable Income | Cheat
No     | Married        | 80K            | ?

Start from the root of the tree.

Source: Tan, Steinbach, Kumar

Page 10

Apply Model to Test Data

[Figure: the same tree and test record; the traversal is advanced one step further down the tree.]

Source: Tan, Steinbach, Kumar

Page 11

Apply Model to Test Data

[Figure: the same tree and test record; the traversal is advanced one step further down the tree.]

Source: Tan, Steinbach, Kumar

Page 12

Apply Model to Test Data

[Figure: the same tree and test record; the traversal is advanced one step further down the tree.]

Source: Tan, Steinbach, Kumar

Page 13

Apply Model to Test Data

[Figure: the same tree and test record; the traversal is advanced one step further down the tree.]

Source: Tan, Steinbach, Kumar

Page 14

Apply Model to Test Data

[Figure: the same tree and test record; the traversal ends in a leaf.]

Assign Cheat to “No”

Source: Tan, Steinbach, Kumar

Page 15

Decision Tree Classification Task

[Diagram and tables: the same as on Page 8.]

Source: Tan, Steinbach, Kumar

Page 16

Restrict to recursive partitioning by binary splits

•  There are many approaches, e.g. the C4.5 / C5.0 algorithms, SPRINT
•  Here: top-down splitting until no further split is possible (or another stopping criterion is met)

Page 17

How to train a classification tree

[Figure: scatter plot of the data in the (x1, x2) plane, with thresholds at x1 = 0.25 and x2 = 0.4]

•  Draw 3 splits to separate the data.
•  Draw the corresponding tree.

Score: node impurity (next slides)

Page 18

Construction of a classification tree: Minimize the impurity in each node

A parent node p is split into 2 partitions. Maximize the gain over the possible splits:

$\mathrm{GAIN}_{split} = \mathrm{IMPURITY}(p) - \sum_{i=1}^{2} \frac{n_i}{n}\,\mathrm{IMPURITY}(i)$

where n_i is the number of records in partition i.

Possible impurity measures:
- Gini index
- entropy
- misclassification error

Source: Tan, Steinbach, Kumar
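A minimal R sketch (an addition, not from the slides) of this gain computation, assuming two child partitions and the Gini index as the impurity measure:

gini <- function(y) {            # y: vector of class labels in a node
  p <- table(y) / length(y)      # relative class frequencies
  1 - sum(p^2)
}
gain <- function(y, y1, y2) {    # y: parent labels; y1, y2: the two partitions
  n <- length(y)
  gini(y) - (length(y1) / n) * gini(y1) - (length(y2) / n) * gini(y2)
}
gain(c("A","A","A","B","B","B"), c("A","A","A"), c("B","B","B"))  # 0.5 (a perfect split)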

Page 19

Construction of a classification tree: Minimize the impurity in each node

[Figure: two panels, each showing a node “before split” and a “possible split”]

We have nc = 4 different classes: red, green, blue, olive.

Page 20

The three most common impurity measures

p(j | t) is the relative frequency of class j at node t; nc is the number of different classes.

Gini index:
$\mathrm{GINI}(t) = 1 - \sum_{j=1}^{n_c} \big[p(j \mid t)\big]^2$

Entropy:
$\mathrm{Entropy}(t) = -\sum_{j=1}^{n_c} p(j \mid t)\,\log_2 p(j \mid t)$

Classification error:
$\mathrm{Classification\ Error}(t) = 1 - \max_{j \in \{1,\dots,n_c\}} p(j \mid t)$
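A minimal R sketch (an addition, not from the slides) implementing the three measures for a vector of relative class frequencies p(j|t):

gini_index  <- function(p) 1 - sum(p^2)
entropy     <- function(p) -sum(ifelse(p > 0, p * log2(p), 0))  # treat 0*log2(0) as 0
class_error <- function(p) 1 - max(p)
gini_index(c(1/6, 5/6))  # 0.278, as in the example on the next page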

Page 21

Computing the Gini Index for three 2-class examples

Gini index at a given node t (in the case of nc = 2 different classes):
$\mathrm{GINI}(t) = 1 - \sum_{j=1}^{2}\big[p(j \mid t)\big]^2 = 1 - \big(p(\mathrm{class.1} \mid t)^2 + p(\mathrm{class.2} \mid t)^2\big)$

Distribution of class 1 and class 2 in node t:

C1 = 0, C2 = 6:  P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
Gini = 1 - P(C1)^2 - P(C2)^2 = 1 - 0 - 1 = 0

C1 = 1, C2 = 5:  P(C1) = 1/6, P(C2) = 5/6
Gini = 1 - (1/6)^2 - (5/6)^2 = 0.278

C1 = 2, C2 = 4:  P(C1) = 2/6, P(C2) = 4/6
Gini = 1 - (2/6)^2 - (4/6)^2 = 0.444

Source: Tan, Steinbach, Kumar

Page 22

Computing the Entropy for three examples

Entropy at a given node t (in the case of 2 different classes):
$\mathrm{Entropy}(t) = -\big(p(\mathrm{class.1})\,\log_2 p(\mathrm{class.1}) + p(\mathrm{class.2})\,\log_2 p(\mathrm{class.2})\big)$

C1 = 0, C2 = 6:  p(C1) = 0/6 = 0, p(C2) = 6/6 = 1
Entropy = - 0 log2(0) - 1 log2(1) = - 0 - 0 = 0

C1 = 1, C2 = 5:  p(C1) = 1/6, p(C2) = 5/6
Entropy = - (1/6) log2(1/6) - (5/6) log2(5/6) = 0.65

C1 = 2, C2 = 4:  p(C1) = 2/6, p(C2) = 4/6
Entropy = - (2/6) log2(2/6) - (4/6) log2(4/6) = 0.92

Source: Tan, Steinbach, Kumar

Page 23

Computing the misclassification error for three examples

Classification error at a node t (in the case of 2 different classes):
$\mathrm{Classification\ Error}(t) = 1 - \max\big(p(\mathrm{class.1}),\, p(\mathrm{class.2})\big)$

C1 = 0, C2 = 6:  p(C1) = 0/6 = 0, p(C2) = 6/6 = 1
Error = 1 - max(0, 1) = 1 - 1 = 0

C1 = 1, C2 = 5:  p(C1) = 1/6, p(C2) = 5/6
Error = 1 - max(1/6, 5/6) = 1 - 5/6 = 1/6

C1 = 2, C2 = 4:  p(C1) = 2/6, p(C2) = 4/6
Error = 1 - max(2/6, 4/6) = 1 - 4/6 = 1/3

Source: Tan, Steinbach, Kumar
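The nine numbers on these three example pages can be checked with a short, self-contained R sketch (an addition, not from the slides):

gini_index  <- function(p) 1 - sum(p^2)
entropy     <- function(p) -sum(ifelse(p > 0, p * log2(p), 0))
class_error <- function(p) 1 - max(p)
for (counts in list(c(0, 6), c(1, 5), c(2, 4))) {
  p <- counts / sum(counts)   # relative class frequencies in the node
  cat(sprintf("C1=%d C2=%d: Gini=%.3f Entropy=%.3f Error=%.3f\n",
              counts[1], counts[2], gini_index(p), entropy(p), class_error(p)))
}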

Page 24

Compare the three impurity measures for a 2-class problem

[Figure: impurity as a function of p, the proportion of class 1, for the Gini index, entropy, and misclassification error]

Source: Tan, Steinbach, Kumar

Page 25

Using a tree for classification problems

[Figure: the (x1, x2) plane partitioned by the splits x1 < 0.25 and x2 < 0.4, shown next to the corresponding tree with yes/no branches; a new point marked “?” falls into one of the regions. Score: node impurity.]

The model class label is given by the majority of all observation labels in the region.

Page 26

Decision Tree in R

# rpart = recursive partitioning
library(rpart)
d <- rpart(Species ~ ., data = iris)
plot(d, margin = 0.2, uniform = TRUE)
text(d, use.n = TRUE)
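The fitted tree can then be used for prediction; a small usage sketch added here (not on the original slide):

# Predict the class for new observations with the fitted tree
predict(d, newdata = head(iris), type = "class")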

Page 27

Overfitting

Taken from ISLR

Overfitting: the larger the tree, the lower the error on the training set (“low bias”). However, the test error can get worse (“high variance”).

Page 28

Pruning

•  First strategy: stop growing the tree if the gain in the impurity measure is small.
•  Too short-sighted: a seemingly worthless split early on in the tree might be followed by a very good split.
•  A better strategy is to grow a very large tree to the end, and then prune it back in order to obtain a subtree.

•  In rpart, pruning is controlled with a complexity parameter cp.
•  cp is a tuning parameter of the trees which can be optimized (like k in kNN).
•  Later: cross-validation as a way to systematically find the best meta-parameter.
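A minimal sketch (not on the slide) of how cp can be inspected with rpart's built-in cross-validation table:

library(rpart)
fit <- rpart(Species ~ ., data = iris, cp = 0.001)  # grow a large tree first
printcp(fit)                     # table of cross-validated errors per cp value
plotcp(fit)                      # plot to help pick a cp
pruned <- prune(fit, cp = 0.05)  # prune back with the chosen cp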

Page 29

Pruning

library(MASS)    # the crabs data set lives in MASS
library(rpart)
t1 <- rpart(sex ~ ., data = crabs)
plot(t1, margin = 0.2, uniform = TRUE); text(t1, use.n = TRUE)
t2 <- prune(t1, cp = 0.05)
plot(t2); text(t2, use.n = TRUE)
t3 <- prune(t1, cp = 0.1)
plot(t3); text(t3, use.n = TRUE)
t4 <- prune(t1, cp = 0.2)
plot(t4); text(t4, use.n = TRUE)

[Figure: the original tree and the pruned trees for cp = 0.05, 0.1, and 0.2]

Page 30

Trees vs Linear Models

Slide Credit: ISLR

Page 31

Pros and cons of tree models

•  Pros
   –  Interpretability
   –  Robust to outliers in the input variables
   –  Can capture non-linear structures
   –  Can capture local interactions very well
   –  Low bias if appropriate input variables are available and the tree has sufficient depth
•  Cons
   –  High variation (instability of trees)
   –  Tend to overfit
   –  Need big data sets to capture additive structures
   –  Inefficient at capturing linear structures

Page 32

Joining Forces

Page 33

Which supervised learning method is best?

Image: Seni & Elder

Page 34

Bundling improves performance

Image: Seni & Elder

Page 35

Evolving results I: Low-hanging fruit and slowing progress

•  After 3 weeks, at least 40 teams had improved the Netflix classifier

•  Top teams showed about 5% improvement

•  However, improvement slowed:

from http://www.research.att.com/~volinsky/netflix/

Page 36

Details: Gravity

•  Quote: see [home.mit.bme.hu/~gtakacs/download/gravity.pdf]

Page 37

Details: When Gravity and Dinosaurs Unite

•  Quote: “Our common team blends the result of team Gravity and team Dinosaur Planet.”

Page 38

Details: BellKor / KorBell

•  Quote: “Our final solution (RMSE=0.8712) consists of blending 107 individual results.”

Page 39

Evolving results III: Final results

•  The winner was an ensemble of ensembles (including BellKor)
•  Gradient boosted decision trees [http://www.netflixprize.com/assets/GrandPrize2009_BPC_BellKor.pdf]

Page 40

Overview of Ensemble Methods

•  Many instances of the same classifier
   –  Bagging (bootstrapping & aggregating)
      •  Create “new” data sets using the bootstrap
      •  Train one classifier per new data set
      •  Average over all predictions (e.g. take the majority vote)
   –  Bagged Trees
      •  Use bagging with decision trees
   –  Random Forest
      •  Bagged trees with a special trick
   –  Boosting
      •  An iterative procedure that adaptively changes the distribution of the training data by focusing more on previously misclassified records

•  Combining classifiers
   –  Weighted averaging over predictions
   –  Stacking classifiers
      •  Use the output of the classifiers as input for a new classifier (see the sketch below)
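An illustrative sketch of the stacking idea (an addition, not from the slides; the feature split and variable names are arbitrary choices), using two rpart trees on the crabs data and a logistic regression as the meta-classifier:

library(MASS)    # crabs data
library(rpart)
# Two base classifiers on different feature subsets of crabs
c1 <- rpart(sex ~ FL + RW, data = crabs)
c2 <- rpart(sex ~ CL + CW + BD, data = crabs)
# Their predicted class probabilities become inputs of a new (meta) classifier
meta_data <- data.frame(p1  = predict(c1)[, "M"],
                        p2  = predict(c2)[, "M"],
                        sex = crabs$sex)
# (a real stacking setup would use out-of-fold predictions here)
meta <- glm(sex ~ p1 + p2, data = meta_data, family = binomial)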

Page 41

Bagging

Page 42

The idea of the bootstrap

Idea: the sampling distribution lies close enough to the ‘true’ one.

Page 43

Bagging: Bootstrap Aggregating

[Diagram:
Step 1: From the original training data D, create multiple bootstrap data sets D1, D2, …, Dt-1, Dt.
Step 2: Build one classifier per data set: C1, C2, …, Ct-1, Ct.
Step 3: Combine the classifiers into C* by aggregating (mean, majority vote).]

Source: Tan, Steinbach, Kumar
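A minimal R sketch of these three steps (an illustrative addition, assuming rpart trees as base classifiers and majority voting):

library(rpart)
set.seed(1)
B <- 25                                       # number of bootstrap data sets
trees <- lapply(1:B, function(b) {
  boot <- iris[sample(nrow(iris), replace = TRUE), ]  # Step 1: bootstrap data set Db
  rpart(Species ~ ., data = boot)                     # Step 2: classifier Cb
})
# Step 3: combine by majority vote over the B tree predictions
votes    <- sapply(trees, function(tr) as.character(predict(tr, iris, type = "class")))
majority <- apply(votes, 1, function(v) names(which.max(table(v))))
mean(majority == iris$Species)                # accuracy of the bagged ensemble C*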

Page 44

Why does it work?

•  Suppose there are 25 base classifiers
•  Each classifier has an error rate of ε = 0.35
•  Assume the classifiers are independent (that is the hardest assumption)
•  Take the majority vote
•  The majority vote is wrong if 13 or more classifiers are wrong
•  The number of wrong predictions is X ~ Bin(n = 25, p = 0.35)

$P(X \ge 13) = \sum_{i=13}^{25} \binom{25}{i} \varepsilon^{i} (1-\varepsilon)^{25-i} \approx 0.06$

> 1 - pbinom(12, size = 25, prob = 0.35)
[1] 0.06044491

Page 45

Why does it work?

•  25 base classifiers

Ensembles are only better than a single classifier if each base classifier is better than random guessing!
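A small R sketch (an addition) illustrating this: the majority-vote error of 25 independent base classifiers, as a function of the base error rate ε, lies below ε only when ε < 0.5:

eps     <- seq(0, 1, by = 0.05)
ens_err <- 1 - pbinom(12, size = 25, prob = eps)  # P(13 or more of the 25 are wrong)
round(cbind(eps, ens_err), 3)
# ens_err < eps only for eps < 0.5; for eps > 0.5 the ensemble is worse than one classifier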