

  • Ensembles: Combining Multiple Learners for Better Accuracy
    Reading material on ensembles:
    1. http://www.scholarpedia.org/article/Ensemble_learning (optional)
    2. http://en.wikipedia.org/wiki/Ensemble_learning
    3. http://en.wikipedia.org/wiki/Adaboost
    4. Rudin video (optional): http://videolectures.net/mlss05us_rudin_da/
    You do not need to read the textbook, as long as you understand the transparencies!

    Eick: Ensemble Learning. Based on lecture notes for E. Alpaydın (2004), Introduction to Machine Learning, The MIT Press (V1.1).

  • Rationale
    No Free Lunch Theorem: there is no algorithm that is always the most accurate (http://en.wikipedia.org/wiki/No_free_lunch_in_search_and_optimization). Each algorithm makes assumptions that may or may not be valid for the problem at hand.
    Idea: generate a group of base learners which, when combined, has higher accuracy.
    Different learners use different:
    - Algorithms
    - Hyperparameters
    - Representations (modalities)
    - Training sets
    - Subproblems


  • General Idea
    [Figure: starting from the original training data D, Step 1 creates multiple data sets D1, D2, ..., Dt; Step 2 builds one classifier Ci on each Di; Step 3 combines the classifiers C1, ..., Ct into a single ensemble classifier C*.]

  • Fixed Combination Rules
    The combiner is a fixed function of the base-learner outputs (such as voting or averaging); no combiner is trained.
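    A short sketch of some common fixed rules applied to base-learner outputs; the particular set of rules shown (average, median, max, product, majority vote) and the numbers are illustrative assumptions, not taken from the slides:

```python
# Fixed combination rules over per-class probability estimates from L base learners.
import numpy as np

# outputs[j, c] = probability that learner j assigns to class c (3 learners, 2 classes)
outputs = np.array([[0.8, 0.2],
                    [0.6, 0.4],
                    [0.3, 0.7]])

print("average:", outputs.mean(axis=0).argmax())                 # class 0
print("median: ", np.median(outputs, axis=0).argmax())           # class 0
print("max:    ", outputs.max(axis=0).argmax())                  # class 0
print("product:", outputs.prod(axis=0).argmax())                 # class 0
print("vote:   ", np.bincount(outputs.argmax(axis=1)).argmax())  # majority vote -> class 0
```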


  • Why does it work?
    Suppose there are 25 base classifiers, each with error rate $\varepsilon = 0.35$, and assume the classifiers make independent errors. The majority-vote ensemble is wrong only if at least 13 of the 25 base classifiers are wrong, so its error rate is
    $\sum_{i=13}^{25} \binom{25}{i} \varepsilon^{i} (1-\varepsilon)^{25-i} \approx 0.06$,
    far below the 0.35 error rate of a single classifier.
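    A small sketch that reproduces this calculation; the variable names are illustrative, and the figures (25 classifiers, error rate 0.35) are the example from the slide:

```python
# Ensemble error of a simple (unweighted) majority vote over 25 independent base
# classifiers, each with error rate 0.35, as in the example above.
from math import comb

n_classifiers = 25
eps = 0.35  # error rate of each base classifier

# The majority vote is wrong iff at least 13 of the 25 base classifiers are wrong.
ensemble_error = sum(
    comb(n_classifiers, i) * eps**i * (1 - eps)**(n_classifiers - i)
    for i in range(13, n_classifiers + 1)
)
print(f"ensemble error = {ensemble_error:.3f}")  # about 0.06, versus 0.35 for one classifier
```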


  • Why are Ensembles Successful?
    Bayesian perspective: average over all models, weighted by their posteriors:
    $P(C_i \mid x) = \sum_{\text{all models } \mathcal{M}_j} P(C_i \mid x, \mathcal{M}_j)\, P(\mathcal{M}_j)$
    If the $d_j$ are independent and we combine them by averaging, $y = \frac{1}{L}\sum_j d_j$, then
    $E[y] = E[d_j]$ and $\mathrm{Var}(y) = \frac{1}{L}\mathrm{Var}(d_j)$,
    so the bias does not change while the variance decreases by a factor of $L$. If the $d_j$ are dependent, the error increases with positive correlation.
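    A minimal numerical check of the variance argument, assuming L equally noisy, independent predictors of the same target (the noise level and L are made-up for illustration):

```python
# Averaging L independent, equally noisy predictors leaves the expected value unchanged
# but shrinks the variance by roughly a factor of 1/L.
import numpy as np

rng = np.random.default_rng(0)
L, n_trials = 10, 100_000
# d_j: noisy predictions of the same target value 1.0, with (assumed) noise std 0.5
d = 1.0 + 0.5 * rng.standard_normal((n_trials, L))

single = d[:, 0]           # one base learner
ensemble = d.mean(axis=1)  # average of L base learners

print("means:    ", single.mean().round(3), ensemble.mean().round(3))  # both about 1.0
print("variances:", single.var().round(4), ensemble.var().round(4))    # second about 1/L of first
```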


  • What is the Main Challenge for Developing Ensemble Models?
    The main challenge is not to obtain highly accurate base models, but rather to obtain base models that make different kinds of errors. For example, when ensembles are used for classification, high accuracy can be achieved even if the base classifiers are individually weak, as long as different base models misclassify different training examples. The independence of two base classifiers can be assessed in this case by measuring the overlap between the sets A and B of training examples they misclassify, e.g. via |A ∩ B| / |A ∪ B|; more overlap means less independence between the two models.
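    The overlap measure can be computed directly from the two sets of misclassified examples; the helper below is a hypothetical illustration, not code from the slides:

```python
# Overlap between the error sets of two base classifiers: |A ∩ B| / |A ∪ B|
# (Jaccard similarity). Values near 0 indicate the classifiers err on different examples.
def error_overlap(misclassified_a, misclassified_b):
    a, b = set(misclassified_a), set(misclassified_b)
    if not a and not b:
        return 0.0  # neither classifier makes errors; treat overlap as 0
    return len(a & b) / len(a | b)

# Example: classifier A errs on examples {1, 4, 7}, classifier B errs on {4, 9}
print(error_overlap([1, 4, 7], [4, 9]))  # 0.25 -> fairly different error patterns
```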


  • Bagging
    Use bootstrapping to generate L training sets and train one base learner on each (Breiman, 1996). Combine by voting (average or median for regression). Unstable algorithms profit from bagging.
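    A from-scratch sketch of this scheme, using scikit-learn decision trees as an (assumed) unstable base learner and assuming integer class labels; the function names are illustrative:

```python
# Bagging: draw L bootstrap samples, fit one base learner per sample, and combine by
# majority vote over the learners' predictions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_fit(X, y, L=25, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    learners = []
    for _ in range(L):
        idx = rng.integers(0, n, size=n)  # bootstrap sample: n indices drawn with replacement
        learners.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return learners

def bagging_predict(learners, X):
    votes = np.stack([clf.predict(X) for clf in learners])  # shape (L, n_samples)
    # majority vote per sample (assumes integer class labels 0..K-1)
    return np.array([np.bincount(col).argmax() for col in votes.T])
```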


  • Bagging: Sampling with Replacement
    Build one classifier on each bootstrap sample.
    In a bootstrap sample of size n, a given training example is left out with probability $(1 - 1/n)^n$, i.e. it is selected at least once with probability $1 - (1 - 1/n)^n \approx 0.632$ for large n.
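    A quick numerical check of that probability (the sample sizes are arbitrary):

```python
# Probability that a given example appears at least once in a bootstrap sample of
# size n: 1 - (1 - 1/n)^n, which approaches 1 - 1/e ≈ 0.632 as n grows.
import math

for n in (10, 100, 10_000):
    p_selected = 1 - (1 - 1 / n) ** n
    print(n, round(p_selected, 4))
print("limit:", round(1 - 1 / math.e, 4))  # 0.6321
```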


  • Boosting
    An iterative procedure that adaptively changes the distribution of the training data, focusing more on previously misclassified records. Initially, all N records are assigned equal weights; unlike in bagging, the weights may change at the end of each boosting round.


  • Boosting
    Records that are wrongly classified have their weights increased; records that are classified correctly have their weights decreased. A record that is hard to classify (example 4 in the slide's table) therefore gets a larger weight and is more likely to be chosen again in subsequent rounds.


  • Basic AdaBoost Loop
    D1 = initial dataset with equal example weights
    FOR i = 1 TO k DO
        learn a new classifier Ci;
        compute αi (the classifier's importance);
        update the example weights;
        create the new training set Di+1 (using weighted sampling)
    END FOR
    Construct the ensemble, which uses each Ci weighted by αi (i = 1, ..., k).
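    A from-scratch sketch of this loop (a simplified AdaBoost for labels in {-1, +1}); it reweights examples via sample_weight rather than re-sampling, and the decision-stump base learner is an assumed choice, not prescribed by the slides:

```python
# Simplified AdaBoost: fit k weighted weak learners, compute each one's importance
# alpha_i from its weighted error, and increase the weights of misclassified examples.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, k=10):
    n = len(X)
    w = np.full(n, 1.0 / n)              # D1: equal example weights
    classifiers, alphas = [], []
    for _ in range(k):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        eps = np.sum(w * (pred != y))    # weighted error rate (weights sum to 1)
        if eps >= 0.5:                   # no better than chance: revert weights to 1/n
            w = np.full(n, 1.0 / n)
            continue
        alpha = 0.5 * np.log((1 - eps) / max(eps, 1e-12))  # classifier importance
        w *= np.exp(-alpha * y * pred)   # raise weights of misclassified examples
        w /= w.sum()                     # renormalize so the weights sum to 1
        classifiers.append(stump)
        alphas.append(alpha)
    return classifiers, alphas

def adaboost_predict(classifiers, alphas, X):
    # sign of the importance-weighted vote of the base classifiers
    scores = sum(a * clf.predict(X) for a, clf in zip(alphas, classifiers))
    return np.sign(scores)
```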


  • Example: AdaBoost
    Base classifiers: C1, C2, ..., CT
    Error rate of classifier Ci (the example weights $w_j$ add up to 1):
    $\varepsilon_i = \sum_{j=1}^{N} w_j\, \delta\big(C_i(x_j) \neq y_j\big)$
    Importance of a classifier:
    $\alpha_i = \tfrac{1}{2} \ln\!\left(\frac{1 - \varepsilon_i}{\varepsilon_i}\right)$
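    A quick numeric illustration of the importance formula (the error rates are made-up values):

```python
# alpha grows as the weighted error eps shrinks, is 0 at eps = 0.5, and would be
# negative for eps > 0.5 (which AdaBoost avoids by resetting the weights).
import math

for eps in (0.05, 0.25, 0.35, 0.5):
    alpha = 0.5 * math.log((1 - eps) / eps)
    print(f"eps = {eps:.2f}  ->  alpha = {alpha:+.3f}")
# eps = 0.35 gives alpha ≈ +0.309; eps = 0.05 gives alpha ≈ +1.472
```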


  • Example: AdaBoost (continued)
    Weight update after round i ($Z_i$ is a normalization factor so that the weights again sum to 1):
    $w_j^{(i+1)} = \frac{w_j^{(i)}}{Z_i} \times \begin{cases} e^{-\alpha_i} & \text{if } C_i(x_j) = y_j \\ e^{\alpha_i} & \text{if } C_i(x_j) \neq y_j \end{cases}$
    The weight of an example is increased if it is misclassified, and the increase is proportional to the classifier's importance.
    If any intermediate round produces an error rate higher than 50%, the weights are reverted to 1/n and the re-sampling procedure is repeated.
    Classification ($\alpha_j$ is classifier j's importance for the whole dataset):
    $C^*(x) = \arg\max_{y} \sum_{j=1}^{T} \alpha_j\, \delta\big(C_j(x) = y\big)$
    http://en.wikipedia.org/wiki/Adaboost


  • Illustrating AdaBoost
    Video containing an introduction to AdaBoost: http://videolectures.net/mlss05us_rudin_da/
    [Figure, spanning two slides: AdaBoost on a small one-dimensional toy data set of + and - examples. Boosting rounds 1-3 fit the decision stumps B1, B2, and B3; after each round the example weights (e.g. 0.0094 versus 0.4623 in round 1) concentrate on the points the current stump misclassifies, and the stumps receive importances α = 1.9459, 2.9323, and 3.8744. The final row shows the prediction of the overall ensemble, obtained from the α-weighted vote of B1, B2, and B3.]

  • Mixture of Experts
    Voting where the weights are input-dependent (gating):
    $y = \sum_{j=1}^{L} w_j(x)\, d_j(x)$
    (Jacobs et al., 1991). The experts or the gating can be nonlinear.
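    A minimal sketch of this input-dependent voting; the linear experts and softmax gate are illustrative assumptions, not part of the original slides:

```python
# Mixture of experts: a gating function assigns each expert a weight that depends on
# the input x, and the output is the gating-weighted sum of the expert outputs.
import numpy as np

rng = np.random.default_rng(0)
L, d = 3, 5                                # number of experts, input dimension
W_experts = rng.standard_normal((L, d))    # one linear expert per row
W_gate = rng.standard_normal((L, d))       # linear gating scores, one per expert

def mixture_predict(x):
    expert_out = W_experts @ x             # d_j(x), one output per expert
    scores = W_gate @ x
    gate = np.exp(scores - scores.max())   # softmax -> input-dependent weights w_j(x)
    gate /= gate.sum()
    return gate @ expert_out               # y = sum_j w_j(x) * d_j(x)

print(mixture_predict(rng.standard_normal(d)))
```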


  • Stacking
    The combiner f(·) is itself another learner, trained on the outputs of the base learners (Wolpert, 1992).
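    A hedged sketch using scikit-learn's StackingClassifier (available in recent versions); the particular base learners and the logistic-regression combiner are illustrative choices:

```python
# Stacking: the predictions of the base learners become the inputs of a trained
# combiner f(), here a logistic regression fitted on out-of-fold predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
stack = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier(max_depth=3)),
                ("nb", GaussianNB())],
    final_estimator=LogisticRegression(),  # the combiner is itself a learner
    cv=5,                                  # out-of-fold predictions train the combiner
)
stack.fit(X, y)
print(stack.score(X, y))
```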


  • Cascading
    Use d_j only if the preceding learners are not confident.
    Cascade learners in order of complexity, so that simpler learners are consulted first.
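    A minimal sketch of this idea with two already-fitted scikit-learn classifiers; the confidence measure (maximum posterior) and the threshold are illustrative assumptions:

```python
# Cascading: consult a cheap learner first and fall back to a more complex one only
# for the examples on which the cheap learner is not confident enough.
import numpy as np

def cascade_predict(simple_clf, complex_clf, X, threshold=0.9):
    proba = simple_clf.predict_proba(X)           # posterior estimates of the simple learner
    confident = proba.max(axis=1) >= threshold    # where the simple learner is confident
    preds = simple_clf.classes_[proba.argmax(axis=1)]
    if (~confident).any():
        # only the uncertain examples are passed on to the more complex learner
        preds[~confident] = complex_clf.predict(X[~confident])
    return preds
```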


  • Summary: Ensemble Learning
    - Ensemble approaches use multiple models in their decision making. They frequently achieve high accuracy, are less likely to over-fit, and exhibit low variance. They have been used successfully in the Netflix contest and for other tasks. However, some research suggests that they are sensitive to noise (http://www.phillong.info/publications/LS10_potential.pdf).
    - The key to designing ensembles is diversity, not necessarily high accuracy of the base classifiers: members of the ensemble should vary in the examples they misclassify. Therefore, most ensemble approaches, such as AdaBoost, seek to promote diversity among the models they combine.
    - The trained ensemble represents a single hypothesis. This hypothesis, however, is not necessarily contained within the hypothesis space of the models from which it is built; thus, ensembles can be shown to have more flexibility in the functions they can represent. Example: http://www.scholarpedia.org/article/Ensemble_learning
    - Current research on ensembles centers on: more complex ways to combine models, understanding the convergence behavior of ensemble learning algorithms, parameter learning, understanding over-fitting in ensemble learning, characterization of ensemble models, and sensitivity to noise.
