
Page 1: Classification Tamara Berg CSE 595 Words & Pictures

Classification

Tamara Berg, CSE 595 Words & Pictures

Page 2: Classification Tamara Berg CSE 595 Words & Pictures

HW2

• Online after class – due Oct 10, 11:59pm
• Use web text descriptions as a proxy for class labels.
• Train color attribute classifiers on web shopping images.
• Classify test images as to whether they display the attributes.

Page 3: Classification Tamara Berg CSE 595 Words & Pictures

Topic Presentations

• First group starts on Tuesday
• Audience – please read papers!

Page 4: Classification Tamara Berg CSE 595 Words & Pictures

Example: Image classification

apple

pear

tomato

cow

dog

horse

input → desired output

Slide credit: Svetlana Lazebnik

Page 5: Classification Tamara Berg CSE 595 Words & Pictures

Slide from Dan Klein
http://yann.lecun.com/exdb/mnist/index.html

Page 6: Classification Tamara Berg CSE 595 Words & Pictures

Slide from Dan Klein

Page 7: Classification Tamara Berg CSE 595 Words & Pictures

Slide from Dan Klein

Page 8: Classification Tamara Berg CSE 595 Words & Pictures

Slide from Dan Klein

Page 9: Classification Tamara Berg CSE 595 Words & Pictures

Slide from Dan Klein

Page 10: Classification Tamara Berg CSE 595 Words & Pictures

Example: Seismic data

Body wave magnitude

Surface wave magnitude

Nuclear explosions

Earthquakes

Slide credit: Svetlana Lazebnik

Page 11: Classification Tamara Berg CSE 595 Words & Pictures

Slide from Dan Klein

Page 12: Classification Tamara Berg CSE 595 Words & Pictures

The basic classification framework

y = f(x)

(x: input; f: classification function; y: output)

• Learning: given a training set of labeled examples {(x1,y1), …, (xN,yN)}, estimate the parameters of the prediction function f
• Inference: apply f to a never-before-seen test example x and output the predicted value y = f(x)

Slide credit: Svetlana Lazebnik
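As an illustration of this framework (my own sketch, not from the slides), the learning and inference steps map onto fit and predict in scikit-learn; the dataset and the choice of a k-NN classifier here are arbitrary assumptions.

# Minimal sketch of the learning / inference split (illustrative only).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)                        # labeled examples {(x_i, y_i)}
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

f = KNeighborsClassifier(n_neighbors=3)
f.fit(X_train, y_train)           # learning: estimate the parameters of f
y_pred = f.predict(X_test)        # inference: y = f(x) on never-before-seen examples
print((y_pred == y_test).mean())  # accuracy on the held-out test set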

Page 13: Classification Tamara Berg CSE 595 Words & Pictures

Some classification methods

10^6 examples

Nearest neighbor: Shakhnarovich, Viola, Darrell 2003; Berg, Berg, Malik 2005; …

Neural networks: LeCun, Bottou, Bengio, Haffner 1998; Rowley, Baluja, Kanade 1998; …

Support Vector Machines and Kernels: Guyon, Vapnik; Heisele, Serre, Poggio, 2001; …

Conditional Random Fields: McCallum, Freitag, Pereira 2000; Kumar, Hebert 2003; …

Slide credit: Antonio Torralba

Page 14: Classification Tamara Berg CSE 595 Words & Pictures

Example: Training and testing

• Key challenge: generalization to unseen examples

Training set (labels known) Test set (labels unknown)

Slide credit: Svetlana Lazebnik

Page 15: Classification Tamara Berg CSE 595 Words & Pictures

Slide credit: Dan Klein

Page 16: Classification Tamara Berg CSE 595 Words & Pictures

Slide from Min-Yen Kan

Classification by Nearest Neighbor

Word vector document classification – here the vector space is illustrated as having 2 dimensions. How many dimensions would the data actually live in?

Page 17: Classification Tamara Berg CSE 595 Words & Pictures

Slide from Min-Yen Kan

Classification by Nearest Neighbor

Page 18: Classification Tamara Berg CSE 595 Words & Pictures

Classification by Nearest Neighbor

Classify the test document as the class of the document “nearest” to the query document (use vector similarity to find most similar doc)

Slide from Min-Yen Kan

Page 19: Classification Tamara Berg CSE 595 Words & Pictures

Classification by kNN

Classify the test document as the majority class of the k documents “nearest” to the query document.

Slide from Min-Yen Kan

Page 20: Classification Tamara Berg CSE 595 Words & Pictures

Slide from Min-Yen Kan

What are the features? What’s the training data? Testing data? Parameters?

Classification by kNN
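A small sketch of this idea (my own illustration, not from the slides): documents become word-count vectors (one dimension per vocabulary word, which also answers the earlier question about dimensionality), and a test document takes the majority class of its k nearest neighbors under cosine similarity. The toy documents and labels are invented.

# k-nearest-neighbor document classification on word-count vectors (toy sketch).
from collections import Counter
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

train_docs = ["cheap sale discount shoes", "election vote senate bill",
              "discount handbag sale", "senate debate election night"]
train_labels = ["shopping", "politics", "shopping", "politics"]
test_doc = "big sale on shoes tonight"

vec = CountVectorizer()
X = vec.fit_transform(train_docs).toarray().astype(float)  # each doc -> vector, one dim per word
q = vec.transform([test_doc]).toarray().astype(float)[0]

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

k = 3
sims = [cosine(x, q) for x in X]
nearest = np.argsort(sims)[-k:]                             # indices of the k most similar docs
print(Counter(train_labels[i] for i in nearest).most_common(1)[0][0])  # majority class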

Page 21: Classification Tamara Berg CSE 595 Words & Pictures

Decision tree classifier

Example problem: decide whether to wait for a table at a restaurant, based on the following attributes:
1. Alternate: is there an alternative restaurant nearby?
2. Bar: is there a comfortable bar area to wait in?
3. Fri/Sat: is today Friday or Saturday?
4. Hungry: are we hungry?
5. Patrons: number of people in the restaurant (None, Some, Full)
6. Price: price range ($, $$, $$$)
7. Raining: is it raining outside?
8. Reservation: have we made a reservation?
9. Type: kind of restaurant (French, Italian, Thai, Burger)
10. WaitEstimate: estimated waiting time (0-10, 10-30, 30-60, >60)

Slide credit: Svetlana Lazebnik
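A hedged sketch of learning such a tree from data with these attributes (my own illustration; the handful of training rows and their WillWait labels are invented, and scikit-learn's greedy tree induction is only similar in spirit to the tree shown on the next slides):

# Decision tree on categorical restaurant attributes (toy sketch).
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

data = pd.DataFrame({
    "Patrons":      ["Some", "Full",  "None", "Full"],
    "Hungry":       ["Yes",  "Yes",   "No",   "No"],
    "WaitEstimate": ["0-10", "30-60", "0-10", ">60"],
    "WillWait":     ["Yes",  "Yes",   "No",   "No"],   # invented labels
})
X = pd.get_dummies(data.drop(columns="WillWait"))       # one-hot encode categorical attributes
y = data["WillWait"]

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))  # human-readable tree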

Page 22: Classification Tamara Berg CSE 595 Words & Pictures

Decision tree classifier

Slide credit: Svetlana Lazebnik

Page 23: Classification Tamara Berg CSE 595 Words & Pictures

Decision tree classifier

Slide credit: Svetlana Lazebnik

Page 24: Classification Tamara Berg CSE 595 Words & Pictures

Linear classifier

• Find a linear function to separate the classes

f(x) = sgn(w1x1 + w2x2 + … + wDxD) = sgn(w · x)

Slide credit: Svetlana Lazebnik
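In code the decision rule is just the sign of a dot product; a minimal numpy sketch with made-up weights and points (a bias term b is included here, as on the following slides):

# Linear classifier: f(x) = sgn(w . x + b), with arbitrary example values.
import numpy as np

w = np.array([2.0, -1.0])    # weight vector (assumed, for illustration)
b = -0.5                     # bias term
X = np.array([[1.0, 0.5],    # a few 2-D points to classify
              [0.1, 1.2],
              [3.0, 2.0]])

scores = X @ w + b           # w1*x1 + w2*x2 + ... + b for each point
labels = np.sign(scores)     # +1 on one side of the hyperplane, -1 on the other
print(scores, labels)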

Page 25: Classification Tamara Berg CSE 595 Words & Pictures

Discriminant Function

• It can be an arbitrary function of x, such as:

Nearest Neighbor

Decision Tree

Linear Functions

g(x) = wT x + b

Slide credit: Jinwei Gu

Page 26: Classification Tamara Berg CSE 595 Words & Pictures

Linear Discriminant Function

• g(x) is a linear function:

g(x) = wT x + b

x1

x2

wT x + b = 0

wT x + b < 0

wT x + b > 0

A hyper-plane in the feature space

Slide credit: Jinwei Gu

denotes +1

denotes -1

x1

Page 27: Classification Tamara Berg CSE 595 Words & Pictures

• How would you classify these points using a linear discriminant function in order to minimize the error rate?

Linear Discriminant Function

denotes +1

denotes -1

x1

x2

Infinite number of answers!

Slide credit: Jinwei Gu

Page 28: Classification Tamara Berg CSE 595 Words & Pictures

• How would you classify these points using a linear discriminant function in order to minimize the error rate?

Linear Discriminant Function

x1

x2

Infinite number of answers!

denotes +1

denotes -1

Slide credit: Jinwei Gu

Page 29: Classification Tamara Berg CSE 595 Words & Pictures

• How would you classify these points using a linear discriminant function in order to minimize the error rate?

Linear Discriminant Function

x1

x2

Infinite number of answers!

denotes +1

denotes -1

Slide credit: Jinwei Gu

Page 30: Classification Tamara Berg CSE 595 Words & Pictures

x1

x2

• How would you classify these points using a linear discriminant function in order to minimize the error rate?

Linear Discriminant Function

Infinite number of answers!

Which one is the best?

denotes +1

denotes -1

Slide credit: Jinwei Gu

Page 31: Classification Tamara Berg CSE 595 Words & Pictures

Large Margin Linear Classifier

• The linear discriminant function (classifier) with the maximum margin is the best (“safe zone”)
• Margin is defined as the width by which the boundary could be increased before hitting a data point
• Why is it the best? Strong generalization ability

Margin

x1

x2

Linear SVM

Slide credit: Jinwei Gu

Page 32: Classification Tamara Berg CSE 595 Words & Pictures

Large Margin Linear Classifier

x1

x2 Margin

wT x + b = 0

wT x + b = -1

wT x + b = 1

x+

x+

x-

Support Vectors

Slide credit: Jinwei Gu

Page 33: Classification Tamara Berg CSE 595 Words & Pictures

Solving the Optimization Problem

The linear discriminant function is:

g(x) = wT x + b = Σ_{i ∈ SV} αi yi xiT x + b

Notice it relies on a dot product between the test point x and the support vectors xi

Slide credit: Jinwei Gu
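To make the dot-product structure concrete, here is a sketch (not from the slides) that fits a linear SVM with scikit-learn and then recomputes g(x) directly from the learned support vectors, the dual coefficients (which store αi yi), and the bias, checking that it matches the library's decision_function; the blob dataset is an arbitrary choice.

# Recompute g(x) = sum_{i in SV} alpha_i y_i x_i^T x + b from the support vectors (sketch).
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=40, centers=2, random_state=0)
clf = SVC(kernel="linear", C=1.0).fit(X, y)

x_test = X[:5]
# dual_coef_ holds alpha_i * y_i for each support vector
g_manual = clf.dual_coef_ @ (clf.support_vectors_ @ x_test.T) + clf.intercept_
print(np.allclose(g_manual.ravel(), clf.decision_function(x_test)))  # True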

Page 34: Classification Tamara Berg CSE 595 Words & Pictures

Linear separability

Slide credit: Svetlana Lazebnik

Page 35: Classification Tamara Berg CSE 595 Words & Pictures

Non-linear SVMs: Feature Space

General idea: the original input space can be mapped to some higher-dimensional feature space where the training set is separable:

Φ: x → φ(x)

This slide is courtesy of www.iro.umontreal.ca/~pift6080/documents/papers/svm_tutorial.ppt

Page 36: Classification Tamara Berg CSE 595 Words & Pictures

Nonlinear SVMs: The Kernel Trick

With this mapping, our discriminant function is now:

g(x) = wT φ(x) + b = Σ_{i ∈ SV} αi yi φ(xi)T φ(x) + b

No need to know this mapping explicitly, because we only use the dot products of feature vectors, in both training and testing.

A kernel function is defined as a function that corresponds to a dot product of two feature vectors in some expanded feature space:

K(xi, xj) = φ(xi)T φ(xj)

Slide credit: Jinwei Gu

Page 37: Classification Tamara Berg CSE 595 Words & Pictures

Nonlinear SVMs: The Kernel Trick

Examples of commonly-used kernel functions:

Linear kernel: K(xi, xj) = xiT xj

Polynomial kernel: K(xi, xj) = (1 + xiT xj)^p

Gaussian (Radial Basis Function, RBF) kernel: K(xi, xj) = exp(−||xi − xj||² / (2σ²))

Sigmoid kernel: K(xi, xj) = tanh(β0 xiT xj + β1)

Slide credit: Jinwei Gu
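These kernels are straightforward to write down directly; a small numpy sketch (the parameter values p, σ, β0, β1 below are arbitrary choices for illustration):

# Commonly-used kernel functions on two feature vectors xi, xj (sketch).
import numpy as np

def linear_kernel(xi, xj):
    return xi @ xj

def polynomial_kernel(xi, xj, p=3):
    return (1 + xi @ xj) ** p

def rbf_kernel(xi, xj, sigma=1.0):
    return np.exp(-np.linalg.norm(xi - xj) ** 2 / (2 * sigma ** 2))

def sigmoid_kernel(xi, xj, beta0=1.0, beta1=0.0):
    return np.tanh(beta0 * (xi @ xj) + beta1)

xi, xj = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(linear_kernel(xi, xj), polynomial_kernel(xi, xj),
      rbf_kernel(xi, xj), sigmoid_kernel(xi, xj))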

Page 38: Classification Tamara Berg CSE 595 Words & Pictures

Support Vector Machine: Algorithm

• 1. Choose a kernel function

• 2. Choose a value for C

• 3. Solve the quadratic programming problem (many software packages available)

• 4. Construct the discriminant function from the support vectors

Slide credit: Jinwei Gu
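In practice the four steps collapse into a few lines with an off-the-shelf package; a sketch using scikit-learn (the kernel, C, and gamma values here are arbitrary, and the moons dataset is just an example):

# The four SVM steps with an off-the-shelf solver (illustrative sketch).
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

clf = SVC(kernel="rbf", C=10.0, gamma=0.5)  # steps 1-2: choose a kernel and a value for C
clf.fit(X, y)                               # step 3: the quadratic program is solved internally
scores = clf.decision_function(X[:3])       # step 4: discriminant built from the support vectors
print(clf.n_support_, scores)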

Page 39: Classification Tamara Berg CSE 595 Words & Pictures

Some Issues

• Choice of kernel
  – Gaussian or polynomial kernel is the default
  – if ineffective, more elaborate kernels are needed
  – domain experts can give assistance in formulating appropriate similarity measures

• Choice of kernel parameters
  – e.g. σ in the Gaussian kernel
  – σ is the distance between closest points with different classifications
  – in the absence of reliable criteria, applications rely on the use of a validation set or cross-validation to set such parameters (see the sketch below)

This slide is courtesy of www.iro.umontreal.ca/~pift6080/documents/papers/svm_tutorial.ppt
Slide credit: Jinwei Gu
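A sketch of the cross-validation approach mentioned above, using a grid search over C and the RBF width (scikit-learn parameterizes the Gaussian kernel with gamma = 1/(2σ²); the grid values and dataset are arbitrary):

# Setting kernel parameters by cross-validation (sketch).
from sklearn.datasets import make_moons
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.25, random_state=0)

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1, 10]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)  # 5-fold cross-validation
search.fit(X, y)
print(search.best_params_, search.best_score_)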

Page 40: Classification Tamara Berg CSE 595 Words & Pictures

Summary: Support Vector Machine

• 1. Large Margin Classifier
  – Better generalization ability & less over-fitting

• 2. The Kernel Trick
  – Map data points to a higher dimensional space in order to make them linearly separable.
  – Since only the dot product is used, we do not need to represent the mapping explicitly.

Slide credit: Jinwei Gu

Page 41: Classification Tamara Berg CSE 595 Words & Pictures

Boosting

• A simple algorithm for learning robust classifiers
  – Freund & Schapire, 1995
  – Friedman, Hastie, Tibshirani, 1998

• Provides an efficient algorithm for sparse visual feature selection
  – Tieu & Viola, 2000
  – Viola & Jones, 2003

• Easy to implement, doesn’t require external optimization tools.

Slide credit: Antonio Torralba

Page 42: Classification Tamara Berg CSE 595 Words & Pictures

Boosting

• Defines a classifier using an additive model:

H(x) = α1 f1(x) + α2 f2(x) + α3 f3(x) + …

(H: strong classifier; ft: weak classifier; αt: weight; x: features vector)

Slide credit: Antonio Torralba

Page 43: Classification Tamara Berg CSE 595 Words & Pictures

Boosting

• Defines a classifier using an additive model:

H(x) = α1 f1(x) + α2 f2(x) + α3 f3(x) + …

(H: strong classifier; ft: weak classifier from a family of weak classifiers; αt: weight; x: features vector)

• We need to define a family of weak classifiers

Slide credit: Antonio Torralba
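One concrete choice of weak-classifier family is depth-1 decision trees ("stumps"); a sketch of the resulting additive model using scikit-learn's AdaBoost (my choice of data and hyperparameters; the constructor argument is named estimator in recent scikit-learn releases, base_estimator in older ones):

# Boosting: additive model over a family of weak classifiers (decision stumps).
from sklearn.datasets import make_moons
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_moons(n_samples=300, noise=0.25, random_state=0)

weak = DecisionTreeClassifier(max_depth=1)             # the family of weak classifiers
strong = AdaBoostClassifier(estimator=weak, n_estimators=50).fit(X, y)

# The strong classifier is the weighted (alpha_t) combination of the weak classifiers f_t(x).
print(len(strong.estimators_), strong.estimator_weights_[:5], strong.score(X, y))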

Page 44: Classification Tamara Berg CSE 595 Words & Pictures

Adaboost

Slide credit: Antonio Torralba

Page 45: Classification Tamara Berg CSE 595 Words & Pictures

Boosting

• It is a sequential procedure. Each data point xt has:
  – a class label: yt = +1 or −1
  – a weight: wt = 1

Slide credit: Antonio Torralba

Page 46: Classification Tamara Berg CSE 595 Words & Pictures

Toy example

Weak learners from the family of lines: h => p(error) = 0.5, i.e. at chance

Each data point has:
  – a class label: yt = +1 or −1
  – a weight: wt = 1

Slide credit: Antonio Torralba

Page 47: Classification Tamara Berg CSE 595 Words & Pictures

Toy example

This one seems to be the best

Each data point has:
  – a class label: yt = +1 or −1
  – a weight: wt = 1

This is a ‘weak classifier’: it performs slightly better than chance.

Slide credit: Antonio Torralba

Page 48: Classification Tamara Berg CSE 595 Words & Pictures

Toy example

We set a new problem for which the previous weak classifier performs at chance again

Each data point has a class label yt = +1 or −1. We update the weights:

wt ← wt exp{−yt Ht}

Slide credit: Antonio Torralba
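A numpy sketch of this re-weighting step (my own illustration; the labels and predictions are invented, and the weak-classifier weight alpha and the renormalization are the standard AdaBoost details that the slide's shorthand wt ← wt exp{−yt Ht} compresses):

# AdaBoost weight update: w_t <- w_t * exp(-y_t * alpha * h(x_t))  (sketch).
import numpy as np

y = np.array([+1, +1, -1, -1, +1])           # true labels of the data points
h = np.array([+1, -1, -1, +1, +1])           # predictions of the current weak classifier
w = np.full(len(y), 1.0 / len(y))            # current weights (initially uniform)

err = np.sum(w[h != y])                      # weighted error of the weak classifier
alpha = 0.5 * np.log((1 - err) / err)        # its weight in the strong classifier

w = w * np.exp(-alpha * y * h)               # misclassified points get larger weights
w = w / w.sum()                              # renormalize so the weights sum to 1
print(alpha, w)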

Page 49: Classification Tamara Berg CSE 595 Words & Pictures

Toy example

We set a new problem for which the previous weak classifier performs at chance again

We update the weights: wt ← wt exp{−yt Ht}

Slide credit: Antonio Torralba

Page 50: Classification Tamara Berg CSE 595 Words & Pictures

Toy example

We set a new problem for which the previous weak classifier performs at chance again

We update the weights: wt ← wt exp{−yt Ht}

Slide credit: Antonio Torralba

Page 51: Classification Tamara Berg CSE 595 Words & Pictures

Toy example

We set a new problem for which the previous weak classifier performs at chance again

We update the weights: wt ← wt exp{−yt Ht}

Slide credit: Antonio Torralba

Page 52: Classification Tamara Berg CSE 595 Words & Pictures

Toy example

The strong (non-linear) classifier is built as the combination of all the weak (linear) classifiers.

(weak classifiers f1, f2, f3, f4)

Slide credit: Antonio Torralba

Page 53: Classification Tamara Berg CSE 595 Words & Pictures

Adaboost

Slide credit: Antonio Torralba