CS60020: Foundations of Algorithm Design and Machine Learning
Sourangshu Bhattacharya


Page 1: CS60020: Foundations of Algorithm Design and Machine Learning

CS60020: Foundations of Algorithm Design and Machine Learning
Sourangshu Bhattacharya

Page 2: CS60020: Foundations of Algorithm Design and Machine Learning

BOOSTING

Page 3: CS60020: Foundations of Algorithm Design and Machine Learning

Boosting

• Train classifiers (e.g. decision trees) in a sequence.
• A new classifier should focus on those cases which were incorrectly classified in the last round.
• Combine the classifiers by letting them vote on the final prediction (like bagging).
• Each classifier is "weak" but the ensemble is "strong."
• AdaBoost is a specific boosting method (a usage sketch follows).
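As a minimal usage sketch (assuming scikit-learn is available; the dataset and parameter values are purely illustrative, and recent scikit-learn versions default to depth-1 decision trees, i.e. stumps, as the base learner):

```python
# Minimal sketch: AdaBoost over decision stumps on a toy dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Default base learner is a depth-1 tree (a stump) in recent scikit-learn versions.
clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```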

Page 4: CS60020: Foundations of Algorithm Design and Machine Learning

Boosting Intuition

• We adaptively weight each data case.

• Data cases which are wrongly classified get high weight (the algorithm will focus on them).

• Each boosting round learns a new (simple) classifier on the weighted dataset.

• These classifiers are weighted to combine them into a single powerful classifier.

• Classifiers that obtain low training error rates get high weight.

• We stop by monitoring a held-out set (cross-validation).

Page 5: CS60020: Foundations of Algorithm Design and Machine Learning

Boosting in a Picture

[Figure: training cases across boosting rounds, split into correctly and incorrectly classified examples; a misclassified example gets a large weight in the next round, and a decision tree with low weighted error gets a strong vote.]

Page 6: CS60020: Foundations of Algorithm Design and Machine Learning

Boosting

• Combining multiple "base" classifiers to come up with a "good" classifier.

• Base classifiers have to be "weak learners", with accuracy > 50%.

• Base classifiers are trained on a weighted training dataset.

• Boosting involves sequentially learning the classifier weights αm and the base classifiers ym(x).

Page 7: CS60020: Foundations of Algorithm Design and Machine Learning

Adaboost

Page 8: CS60020: Foundations of Algorithm Design and Machine Learning

Adaboost (contd..)

Page 9: CS60020: Foundations of Algorithm Design and Machine Learning

And in animation

Original training set: equal weights to all training samples

Taken from "A Tutorial on Boosting" by Yoav Freund and Rob Schapire

Page 10: CS60020: Foundations of Algorithm Design and Machine Learning

AdaBoost example

ROUND 1

ε = error rate of classifier;  α = weight of classifier


Page 11: CS60020: Foundations of Algorithm Design and Machine Learning

AdaBoost example

ROUND 2


Page 12: CS60020: Foundations of Algorithm Design and Machine Learning

AdaBoost example

ROUND 3


Page 13: CS60020: Foundations of Algorithm Design and Machine Learning

AdaBoost example


Page 14: CS60020: Foundations of Algorithm Design and Machine Learning

Adaboost illustration

Page 15: CS60020: Foundations of Algorithm Design and Machine Learning

Adaboost - Observations

• εm: weighted error, εm ∈ [0, 0.5)
• αm ≥ 0
• wi^(m+1) is higher than wi^(m) by a factor (1 − εm)/εm when point i is misclassified (see the sketch below).
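To make these quantities concrete, here is a minimal from-scratch sketch (illustrative, not the lecture's reference code) that computes the weighted error εm, the classifier weight αm = ln((1 − εm)/εm), and multiplies the weights of misclassified points by (1 − εm)/εm, using decision stumps from scikit-learn as weak learners:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, t, M=20):
    """Minimal AdaBoost sketch; labels t are assumed to be in {-1, +1}."""
    n = len(t)
    w = np.full(n, 1.0 / n)                    # data-case weights, initially uniform
    stumps, alphas = [], []
    for m in range(M):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, t, sample_weight=w)
        miss = stump.predict(X) != t
        eps = np.sum(w[miss]) / np.sum(w)      # weighted error eps_m
        if eps <= 0 or eps >= 0.5:
            break                              # weak-learner assumption violated (or perfect fit)
        alpha = np.log((1 - eps) / eps)        # alpha_m >= 0
        w[miss] *= (1 - eps) / eps             # misclassified weights grow by this factor
        w /= w.sum()                           # renormalize
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, np.array(alphas)

def adaboost_predict(stumps, alphas, X):
    # Weighted vote of the weak classifiers.
    votes = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
    return np.sign(votes)
```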

Page 16: CS60020: Foundations of Algorithm Design and Machine Learning

Adaboost - derivation

• Consider the exponential error function

  E = Σn exp( −tn fm(xn) )

• where fm(x) = (1/2) Σ_{l=1..m} αl yl(x) and tn ∈ {−1, +1} are the targets.

• Goal: minimize E w.r.t. αm and ym(x), sequentially.

Page 17: CS60020: Foundations of Algorithm Design and Machine Learning

Adaboost - derivation

• Minimize w.r.t. αm.

• Let Tm be the set of datapoints correctly classified by ym(x), and Mm the set of misclassified ones.

Page 18: CS60020: Foundations of Algorithm Design and Machine Learning

Adaboost - derivation

• Minimizing w.r.t. ym(x) and αm, we get the updates 2(a) and 2(b) of the algorithm.

• Splitting E over Tm and Mm shows that minimizing over ym(x) amounts to minimizing the weighted error εm, and minimizing over αm gives αm = ln((1 − εm)/εm).

• Substituting back into the weight definition, we get the update wn^(m+1) = wn^(m) exp( αm I(ym(xn) ≠ tn) ), as sketched below.
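A sketch of the standard exponential-loss derivation that these bullets summarize (Bishop-style conventions; the 1/2 factors are a normalization choice and may differ from the original slides):

```latex
\begin{align*}
E &= \sum_n w_n^{(m)} \exp\!\Big(-\tfrac{1}{2}\, t_n\, \alpha_m\, y_m(x_n)\Big),
   \qquad w_n^{(m)} = \exp\!\big(-t_n f_{m-1}(x_n)\big) \\
  &= \big(e^{\alpha_m/2} - e^{-\alpha_m/2}\big)\sum_n w_n^{(m)}\, I\big(y_m(x_n)\neq t_n\big)
     \;+\; e^{-\alpha_m/2}\sum_n w_n^{(m)} \\
\frac{\partial E}{\partial \alpha_m} &= 0
   \;\Longrightarrow\;
   \alpha_m = \ln\frac{1-\epsilon_m}{\epsilon_m},
   \qquad
   \epsilon_m = \frac{\sum_n w_n^{(m)}\, I\big(y_m(x_n)\neq t_n\big)}{\sum_n w_n^{(m)}} \\
w_n^{(m+1)} &= w_n^{(m)} \exp\!\Big(-\tfrac{1}{2}\, t_n\, \alpha_m\, y_m(x_n)\Big)
   \;\propto\; w_n^{(m)} \exp\!\big(\alpha_m\, I\big(y_m(x_n)\neq t_n\big)\big)
\end{align*}
```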

Page 19: CS60020: Foundations of Algorithm Design and Machine Learning

Adaboost

Page 20: CS60020: Foundations of Algorithm Design and Machine Learning

Adaboost (contd..)

Page 21: CS60020: Foundations of Algorithm Design and Machine Learning

SUPPORT VECTOR MACHINES

Page 22: CS60020: Foundations of Algorithm Design and Machine Learning

Support vector machines

• Let {x1, ..., xn} be our data set and let yi ∈ {1, −1} be the class label of xi.

[Figure: two classes of points (Class 1 with y = 1, Class 2 with y = −1) separated by a hyperplane with margin m.]

  wT xi + b ≥ 1     for yi = 1
  wT xi + b ≤ −1    for yi = −1

So:  yi (wT xi + b) ≥ 1,   ∀ (xi, yi)

Page 23: CS60020: Foundations of Algorithm Design and Machine Learning

Large-margin Decision Boundary

• The decision boundary should be as far away from the data of both classes as possible; that is, we should maximize the margin, m.

[Figure: the two classes with the separating hyperplane and margin m.]

Page 24: CS60020: Foundations of Algorithm Design and Machine Learning

Finding the Decision Boundary

• The decision boundary should classify all points correctly:  yi (wT xi + b) ≥ 1,  ∀ i

• Since the margin is m = 2/||w||, maximizing the margin is equivalent to minimizing ||w||². The decision boundary can therefore be found by solving the following constrained optimization problem:

  minimize over w, b:   (1/2) ||w||²
  subject to:           yi (wT xi + b) ≥ 1,   ∀ i

• This is a constrained optimization problem. Solving it requires the use of Lagrange multipliers.

Page 25: CS60020: Foundations of Algorithm Design and Machine Learning

KKT Conditions

• Problem:   min_x f(x)   subject to   gi(x) ≥ 0   ∀ i

• Lagrangian:   L(x, λ) = f(x) − Σi λi gi(x)

• Conditions:
  – Stationarity: ∇x L(x, λ) = 0
  – Primal feasibility: gi(x) ≥ 0   ∀ i
  – Dual feasibility: λi ≥ 0
  – Complementary slackness: λi gi(x) = 0

Page 26: CS60020: Foundations of Algorithm Design and Machine Learning

Finding the Decision Boundary

• The Lagrangian is

  L = (1/2) wT w − Σi αi ( yi (wT xi + b) − 1 )

  – αi ≥ 0
  – Note that ||w||² = wT w

Page 27: CS60020: Foundations of Algorithm Design and Machine Learning

The Dual Problem

• Setting the gradient of L w.r.t. w and b to zero, we have

  ∂L/∂wk = 0  ∀ k,     ∂L/∂b = 0

  where  L = (1/2) Σk wk² − Σi αi ( yi ( Σk wk xik + b ) − 1 )

  (n: number of examples, m: dimension of the space; i runs over 1..n, k over 1..m)

Page 28: CS60020: Foundations of Algorithm Design and Machine Learning

The Dual Problem

• If we substitute w = Σi αi yi xi into L, we have

  L = Σi αi − (1/2) Σi Σj αi αj yi yj xiT xj − b Σi αi yi

• Since Σi αi yi = 0, the last term vanishes.

• This is a function of the αi only.

Page 29: CS60020: Foundations of Algorithm Design and Machine Learning

The Dual Problem

• The new objective function is in terms of the αi only.
• It is known as the dual problem: if we know w, we know all αi; if we know all αi, we know w.
• The original problem is known as the primal problem.
• The objective function of the dual problem needs to be maximized (this comes out of the KKT theory).
• The dual problem is therefore:

  max_α  W(α) = Σi αi − (1/2) Σi Σj αi αj yi yj xiT xj
  subject to   αi ≥ 0           (property of the Lagrange multipliers)
               Σi αi yi = 0     (the result of differentiating the original Lagrangian w.r.t. b)

Page 30: CS60020: Foundations of Algorithm Design and Machine Learning

The Dual Problem

• This is a quadratic programming (QP) problem
  – A global maximum of the αi can always be found
• w can be recovered by

  w = Σi αi yi xi

A toy numerical sketch of this QP follows.
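As an illustration of how this QP could be solved numerically (a toy sketch with a generic solver, assuming linearly separable data; real SVM software uses specialized solvers such as SMO), one can hand the dual objective and constraints to scipy.optimize.minimize. Names and tolerances below are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def svm_dual_hard_margin(X, y):
    """Toy hard-margin SVM dual solver; y is assumed to be in {-1, +1}."""
    n = len(y)
    K = (X @ X.T) * np.outer(y, y)                    # y_i y_j x_i^T x_j

    def neg_dual(a):                                  # minimize the negative dual
        return 0.5 * a @ K @ a - a.sum()

    cons = [{"type": "eq", "fun": lambda a: a @ y}]   # sum_i alpha_i y_i = 0
    bounds = [(0, None)] * n                          # alpha_i >= 0
    res = minimize(neg_dual, np.zeros(n), bounds=bounds,
                   constraints=cons, method="SLSQP")
    alpha = res.x
    w = (alpha * y) @ X                               # w = sum_i alpha_i y_i x_i
    sv = alpha > 1e-6                                 # support vectors
    b = np.mean(y[sv] - X[sv] @ w)                    # from y_i (w^T x_i + b) = 1 on SVs
    return w, b, alpha
```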

Page 31: CS60020: Foundations of Algorithm Design and Machine Learning

Characteristics of the Solution

• Many of the αi are zero
  – Complementary slackness: αi ( 1 − yi (wT xi + b) ) = 0
  – Sparse representation: w is a linear combination of a small number of data points

• xi with non-zero αi are called support vectors (SV)
  – The decision boundary is determined only by the SVs
  – Let tj (j = 1, ..., s) be the indices of the s support vectors. We can write

    w = Σ_{j=1..s} αtj ytj xtj
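A quick way to see this sparsity in practice (an illustrative sketch assuming scikit-learn; a linear kernel with a large C approximates the hard margin):

```python
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=100, centers=2, random_state=0)
clf = SVC(kernel="linear", C=1e6).fit(X, y)    # large C ~ hard margin

print("number of support vectors:", len(clf.support_))
print("their alpha_i * y_i:", clf.dual_coef_)  # only the SVs carry non-zero alpha
w = clf.coef_.ravel()                          # w recovered from the SVs
b = clf.intercept_[0]
```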

Page 32: CS60020: Foundations of Algorithm Design and Machine Learning

A Geometrical Interpretation

[Figure: ten data points from Class 1 and Class 2 with the separating hyperplane; only α1 = 0.8, α6 = 1.4 and α8 = 0.6 are non-zero (these points are the support vectors on the margin), while α2 = α3 = α4 = α5 = α7 = α9 = α10 = 0.]

Page 33: CS60020: Foundations of Algorithm Design and Machine Learning

Characteristics of the Solution

• For testing with a new data point z
  – Compute  wT z + b = Σ_{j=1..s} αtj ytj ( xtjT z ) + b  and classify z as class 1 if the sum is positive, and class 2 otherwise
  – Note: w need not be formed explicitly

Page 34: CS60020: Foundations of Algorithm Design and Machine Learning

Non-linearly Separable Problems

• We allow "errors" ξi in classification; they are based on the output of the discriminant function wT x + b
• Σi ξi approximates the number of misclassified samples

[Figure: two overlapping classes (Class 1, Class 2); points on the wrong side of the margin have non-zero slack ξi.]

Page 35: CS60020: Foundations of Algorithm Design and Machine Learning

Soft Margin Hyperplane

• The new conditions become

  yi (wT xi + b) ≥ 1 − ξi,    ξi ≥ 0,    ∀ i

  – ξi are "slack variables" in the optimization
  – Note that ξi = 0 if there is no error for xi
  – Σi ξi is an upper bound on the number of errors

• We want to minimize

  (1/2) ||w||² + C Σi ξi

• C: tradeoff parameter between error and margin

Page 36: CS60020: Foundations of Algorithm Design and Machine Learning

The Optimization Problem

• With αi and μi the (positive) Lagrange multipliers, the Lagrangian is

  L = (1/2) wT w + C Σi ξi − Σi αi ( yi (wT xi + b) − 1 + ξi ) − Σi μi ξi

• Setting the derivatives to zero:

  ∂L/∂wj = wj − Σi αi yi xij = 0    ⇒    w = Σi αi yi xi

  ∂L/∂ξj = C − αj − μj = 0

  ∂L/∂b = Σi αi yi = 0

Page 37: CS60020: Foundations of Algorithm Design and Machine Learning

The Dual Problem

• Substituting w = Σi αi yi xi into the Lagrangian gives

  L = (1/2) Σi Σj αi αj yi yj xiT xj + C Σi ξi
      − Σi αi ( yi ( Σj αj yj xjT xi + b ) − 1 + ξi ) − Σi μi ξi

• With C = αj + μj and Σi αi yi = 0, the terms in b and ξi cancel, leaving

  L = Σi αi − (1/2) Σi Σj αi αj yi yj xiT xj

Page 38: CS60020: Foundations of Algorithm Design and Machine Learning

The Optimization Problem

• The dual of this new constrained optimization problem is

  max_α  W(α) = Σi αi − (1/2) Σi Σj αi αj yi yj xiT xj
  subject to   0 ≤ αi ≤ C,    Σi αi yi = 0

• The new constraint αi ≤ C is derived from C = αj + μj, since μj and αj are positive.

• w is recovered as  w = Σi αi yi xi

• This is very similar to the optimization problem in the linearly separable case, except that there is now an upper bound C on the αi.

• Once again, a QP solver can be used to find the αi.

Page 39: CS60020: Foundations of Algorithm Design and Machine Learning

• The algorithm tries to keep the ξi low while maximizing the margin.

• The algorithm does not minimize the number of errors; instead, it minimizes the total margin violation Σi ξi (a sum of distances from the margin), through the objective

  (1/2) ||w||² + C Σi ξi

• As C increases, the number of errors tends to decrease. In the limit C → ∞, the solution tends to that given by the hard-margin formulation, with 0 training errors.
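To see this tradeoff empirically (an illustrative sketch assuming scikit-learn; the dataset and C values are arbitrary):

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two overlapping blobs, so some slack is unavoidable.
X, y = make_blobs(n_samples=200, centers=2, cluster_std=2.5, random_state=0)

for C in [0.01, 1.0, 100.0]:
    clf = SVC(kernel="linear", C=C).fit(X, y)
    train_err = np.mean(clf.predict(X) != y)
    print(f"C={C}: support vectors={len(clf.support_)}, training error={train_err:.3f}")
```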

Page 40: CS60020: Foundations of Algorithm Design and Machine Learning

Soft margin is more robust to outliers


Page 41: CS60020: Foundations of Algorithm Design and Machine Learning

Extension to Non-linear Decision Boundary

• So far, we have only considered large-margin classifiers with a linear decision boundary

• How to generalize it to become nonlinear?

• Key idea: transform xi to a higher-dimensional space to "make life easier"

  – Input space: the space where the points xi are located

  – Feature space: the space of φ(xi) after transformation

• Why transform?

  – A linear operation in the feature space is equivalent to a non-linear operation in the input space

  – Classification can become easier with a proper transformation. In the XOR problem, for example, adding a new feature x1·x2 makes the problem linearly separable


Page 43: CS60020: Foundations of Algorithm Design and Machine Learning

XOR

  X  Y | XOR
  0  0 |  0
  0  1 |  1
  1  0 |  1
  1  1 |  0

is not linearly separable.

  X  Y  XY | XOR
  0  0  0  |  0
  0  1  0  |  1
  1  0  0  |  1
  1  1  1  |  0

is linearly separable (verified in the snippet below).
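A tiny check of this claim (an illustrative sketch assuming scikit-learn; a linear SVM on the augmented features suffices):

```python
import numpy as np
from sklearn.svm import LinearSVC

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])                                    # XOR labels

X_aug = np.hstack([X, (X[:, 0] * X[:, 1]).reshape(-1, 1)])    # add the feature x1*x2

clf = LinearSVC(C=1000).fit(X_aug, y)                         # a linear classifier now suffices
print("training accuracy:", clf.score(X_aug, y))              # expect 1.0 (augmented XOR is separable)
```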

Page 44: CS60020: Foundations of Algorithm Design and Machine Learning

Find a feature space


Page 45: CS60020: Foundations of Algorithm Design and Machine Learning

Transforming the Data

• Computation in the feature space can be costly because it is high-dimensional
  – The feature space can even be infinite-dimensional!
• The kernel trick comes to the rescue

[Figure: points in the input space mapped by φ(·) into the feature space. Note: in practice the feature space is of higher dimension than the input space.]

Page 46: CS60020: Foundations of Algorithm Design and Machine Learning

The Kernel Trick

• Recall the SVM optimization problem:

  max_α  W(α) = Σi αi − (1/2) Σi Σj αi αj yi yj xiT xj
  subject to   0 ≤ αi ≤ C,    Σi αi yi = 0

• The data points only appear as inner products.
• As long as we can calculate the inner product in the feature space, we do not need the mapping explicitly.
• Many common geometric operations (angles, distances) can be expressed by inner products.
• Define the kernel function K by

  K(xi, xj) = φ(xi)T φ(xj)

Page 47: CS60020: Foundations of Algorithm Design and Machine Learning

An Example for φ(·) and K(·,·)

• Suppose φ(·) is given as follows

• An inner product in the feature space is

• So, if we define the kernel function as follows, there is no need to carry out φ(·) explicitly

• This use of a kernel function to avoid carrying out φ(·) explicitly is known as the kernel trick (illustrated numerically below)
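For concreteness, one illustrative choice (an assumption for this sketch, not necessarily the slide's) is the 2-D degree-2 polynomial kernel K(x, y) = (1 + xT y)², whose explicit feature map is φ(x) = (1, √2 x1, √2 x2, x1², x2², √2 x1 x2). The snippet checks numerically that the kernel equals the inner product of the mapped vectors:

```python
import numpy as np

def phi(x):
    """Explicit feature map for the 2-D degree-2 polynomial kernel (1 + x.y)^2."""
    x1, x2 = x
    return np.array([1.0,
                     np.sqrt(2) * x1, np.sqrt(2) * x2,
                     x1**2, x2**2,
                     np.sqrt(2) * x1 * x2])

def K(x, y):
    return (1.0 + x @ y) ** 2

rng = np.random.default_rng(0)
x, y = rng.normal(size=2), rng.normal(size=2)

# The kernel value equals the inner product in the feature space (up to rounding).
print(K(x, y), phi(x) @ phi(y))
```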

Page 48: CS60020: Foundations of Algorithm Design and Machine Learning

Kernels

• Given a mapping x → φ(x), a kernel is represented as the inner product

  K(x, y) = Σi φi(x) φi(y)

• A kernel must satisfy Mercer's condition:

  ∫∫ g(x) K(x, y) g(y) dx dy ≥ 0   for all (square-integrable) g
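In the finite-sample case, Mercer's condition amounts to the Gram matrix being positive semi-definite; a quick illustrative numerical check with an RBF kernel (assuming NumPy; the data and width are arbitrary):

```python
import numpy as np

def rbf_gram(X, sigma=1.0):
    """Gram matrix of the Gaussian RBF kernel on the rows of X."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T      # squared pairwise distances
    return np.exp(-d2 / (2 * sigma**2))

X = np.random.default_rng(0).normal(size=(50, 3))
G = rbf_gram(X)
eigs = np.linalg.eigvalsh(G)
print("smallest eigenvalue:", eigs.min())             # >= 0 up to numerical round-off
```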

Page 49: CS60020: Foundations of Algorithm Design and Machine Learning

Modification Due to Kernel Function

• Change all inner products to kernel functions
• For training,

  Original:               max_α  Σi αi − (1/2) Σi Σj αi αj yi yj ( xiT xj )
  With kernel function:   max_α  Σi αi − (1/2) Σi Σj αi αj yi yj K(xi, xj)

  (in both cases subject to 0 ≤ αi ≤ C and Σi αi yi = 0)

Page 50: CS60020: Foundations of Algorithm Design and Machine Learning

Modification Due to Kernel Function

• For testing, the new data point z is classified as class 1 if f ≥ 0, and as class 2 if f < 0

  Original:               w = Σ_{j=1..s} αtj ytj xtj,    f = wT z + b = Σ_{j=1..s} αtj ytj ( xtjT z ) + b
  With kernel function:   f = Σ_{j=1..s} αtj ytj K( xtj, z ) + b

Page 51: CS60020: Foundations of Algorithm Design and Machine Learning

More on Kernel Functions

• Since the training of an SVM only requires the values K(xi, xj), there is no restriction on the form of xi and xj
  – xi can be a sequence or a tree, instead of a feature vector

• K(xi, xj) is just a similarity measure comparing xi and xj

• For a test object z, the discriminant function is essentially a weighted sum of the similarities between z and a pre-selected set of objects (the support vectors)
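That weighted-sum view can be reproduced directly from a fitted model (an illustrative sketch assuming scikit-learn; the RBF kernel and its parameters are arbitrary):

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
clf = SVC(kernel="rbf", gamma=1.0, C=1.0).fit(X, y)

Z = X[:5]                                                # some "test" points
K_sv_z = rbf_kernel(clf.support_vectors_, Z, gamma=1.0)  # similarities to the support vectors
f_manual = clf.dual_coef_ @ K_sv_z + clf.intercept_      # weighted sum of similarities
print(np.allclose(f_manual.ravel(), clf.decision_function(Z)))  # True
```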


Page 52: CS60020: Foundations of Algorithm Design and Machine Learning

Kernel Functions

• In the practical use of SVMs, the user specifies the kernel function; the transformation φ(·) is not explicitly stated

• Given a kernel function K(xi, xj), the transformation φ(·) is given by its eigenfunctions (a concept in functional analysis)
  – Eigenfunctions can be difficult to construct explicitly
  – This is why people only specify the kernel function without worrying about the exact transformation

• Another view: a kernel function, being an inner product, is really a similarity measure between the objects

Page 53: CS60020: Foundations of Algorithm Design and Machine Learning

A kernel is associated to a transformation

– Given a kernel, in principle the transformation of the feature space that originates it can be recovered.

– K(x, y) = (xy + 1)² = x²y² + 2xy + 1

  It corresponds to the transformation

  x → ( x², √2 x, 1 )

Page 54: CS60020: Foundations of Algorithm Design and Machine Learning

Examples of Kernel Functions

• Polynomial kernel of degree d:   K(x, y) = ( xT y )^d

• Polynomial kernel up to degree d:   K(x, y) = ( xT y + 1 )^d

• Radial basis function (RBF) kernel with width σ:   K(x, y) = exp( −||x − y||² / (2σ²) )
  – The feature space is infinite-dimensional

• Sigmoid with parameters κ and θ:   K(x, y) = tanh( κ xT y + θ )
  – It does not satisfy the Mercer condition for all κ and θ

(See the mapping to library kernels below.)
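For reference, these kernels correspond (up to the gamma/coef0 parametrization) to scikit-learn's built-in kernel options; an illustrative mapping, with arbitrary parameter values:

```python
from sklearn.svm import SVC

sigma, d, kappa, theta = 1.0, 3, 0.5, 0.0   # illustrative parameter values

svc_poly    = SVC(kernel="poly", degree=d, gamma=1.0, coef0=1.0)   # (x.y + 1)^d
svc_rbf     = SVC(kernel="rbf", gamma=1.0 / (2 * sigma**2))        # exp(-||x-y||^2 / (2 sigma^2))
svc_sigmoid = SVC(kernel="sigmoid", gamma=kappa, coef0=theta)      # tanh(kappa x.y + theta)
```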

Page 55: CS60020: Foundations of Algorithm Design and Machine Learning

Building new kernels

• If k1(x, y) and k2(x, y) are two valid kernels then the following kernels are valid:

  – Linear combination (c1, c2 ≥ 0):   k(x, y) = c1 k1(x, y) + c2 k2(x, y)

  – Exponential:   k(x, y) = exp[ k1(x, y) ]

  – Product:   k(x, y) = k1(x, y) · k2(x, y)

  – Polynomial transformation (Q: polynomial with non-negative coefficients):   k(x, y) = Q[ k1(x, y) ]

  – Function product (f: any function):   k(x, y) = f(x) k1(x, y) f(y)
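A quick numerical sanity check of these closure rules on finite Gram matrices (illustrative only; positive semi-definiteness is checked up to floating-point round-off):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))

K1 = X @ X.T                                                  # linear-kernel Gram matrix
K2 = np.exp(-np.sum((X[:, None] - X[None]) ** 2, axis=-1))    # RBF-style Gram matrix

for name, K in [("c1*K1 + c2*K2", 2.0 * K1 + 0.5 * K2),
                ("exp(K1)", np.exp(K1)),
                ("K1 * K2 (elementwise product)", K1 * K2)]:
    lam_min = np.linalg.eigvalsh(K).min()
    print(f"{name}: smallest eigenvalue = {lam_min:.2e}")      # all >= 0 up to round-off
```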

Page 56: CS60020: Foundations of Algorithm Design and Machine Learning

Polynomial kernel

Ben-Hur et al., PLoS Computational Biology 4 (2008)

Page 57: CS60020: Foundations of Algorithm Design and Machine Learning

Gaussian RBF kernel

Ben-Hur et al., PLoS Computational Biology 4 (2008)