Machine Learning Overview Tamara Berg Language and Vision

Page 1: Machine Learning Overview Tamara Berg Language and Vision

Machine Learning Overview

Tamara Berg
Language and Vision

Page 2: Machine Learning Overview Tamara Berg Language and Vision

Reminders

• HW1 – due Feb 16

• Discussion leaders for Feb 17/24 should schedule a meeting with me soon

Page 3: Machine Learning Overview Tamara Berg Language and Vision

Types of ML algorithms

• Unsupervised: algorithms operate on unlabeled examples

• Supervised: algorithms operate on labeled examples

• Semi/Partially-supervised: algorithms combine both labeled and unlabeled examples

Page 4: Machine Learning Overview Tamara Berg Language and Vision

Unsupervised Learning


Page 5: Machine Learning Overview Tamara Berg Language and Vision


Page 6: Machine Learning Overview Tamara Berg Language and Vision

K-means clustering

• Want to minimize the sum of squared Euclidean distances between points xi and their nearest cluster centers mk:

D(X, M) = Σ_k Σ_{xi in cluster k} ||xi − mk||^2

Algorithm:
• Randomly initialize K cluster centers
• Iterate until convergence:
  • Assign each data point to the nearest center
  • Recompute each cluster center as the mean of all points assigned to it

source: Svetlana Lazebnik
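A minimal NumPy sketch of the procedure above (the function name, the initialize-from-K-data-points choice, and the convergence test are my own implementation choices, not from the slide):

    import numpy as np

    def kmeans(X, K, n_iters=100, seed=0):
        """Minimize D(X, M) = sum over clusters k and points xi in k of ||xi - mk||^2."""
        rng = np.random.default_rng(seed)
        # Randomly initialize K cluster centers (here: K distinct data points)
        centers = X[rng.choice(len(X), size=K, replace=False)]
        for _ in range(n_iters):
            # Assign each data point to the nearest center
            dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Recompute each cluster center as the mean of the points assigned to it
            new_centers = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                                    else centers[k] for k in range(K)])
            if np.allclose(new_centers, centers):   # converged
                break
            centers = new_centers
        return labels, centers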

Page 7: Machine Learning Overview Tamara Berg Language and Vision

Supervised Learning


Page 8: Machine Learning Overview Tamara Berg Language and Vision

Slide from Dan Klein

Page 9: Machine Learning Overview Tamara Berg Language and Vision

Slide from Dan Klein

Page 10: Machine Learning Overview Tamara Berg Language and Vision

Slide from Dan Klein

Page 11: Machine Learning Overview Tamara Berg Language and Vision

Slide from Dan Klein

Page 12: Machine Learning Overview Tamara Berg Language and Vision

Example: Image classification

input: images (apple, pear, tomato, cow, dog, horse) → desired output: the corresponding category labels

Slide credit: Svetlana Lazebnik

Page 13: Machine Learning Overview Tamara Berg Language and Vision

Slide from Dan Klein (http://yann.lecun.com/exdb/mnist/index.html)

Page 14: Machine Learning Overview Tamara Berg Language and Vision

Example: Seismic data

[Scatter plot: surface wave magnitude vs. body wave magnitude; nuclear explosions and earthquakes form two separable classes]

Slide credit: Svetlana Lazebnik

Page 15: Machine Learning Overview Tamara Berg Language and Vision

Slide from Dan Klein

Page 16: Machine Learning Overview Tamara Berg Language and Vision

The basic classification framework

y = f(x)

• Learning: given a training set of labeled examples {(x1,y1), …, (xN,yN)}, estimate the parameters of the prediction function f

• Inference: apply f to a never before seen test example x and output the predicted value y = f(x)

Here x is the input, f is the classification function, and y is the output.

Slide credit: Svetlana Lazebnik
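As a concrete illustration of this learning/inference split, here is a hedged sketch using scikit-learn; the toy feature vectors, labels, and choice of classifier are invented for illustration and are not part of the slides:

    from sklearn.linear_model import LogisticRegression

    # Learning: estimate the parameters of f from labeled examples {(x1,y1), ..., (xN,yN)}
    X_train = [[0.2, 1.1], [0.9, 0.4], [1.5, 1.8], [2.0, 0.3]]   # toy feature vectors
    y_train = ["apple", "pear", "apple", "pear"]                 # their labels

    f = LogisticRegression().fit(X_train, y_train)

    # Inference: apply f to a never-before-seen test example x and output y = f(x)
    x_test = [[1.0, 1.0]]
    print(f.predict(x_test))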

Page 17: Machine Learning Overview Tamara Berg Language and Vision


Some ML classification methods (10^6 examples)

• Nearest neighbor: Shakhnarovich, Viola, Darrell 2003; Berg, Berg, Malik 2005; …
• Neural networks: LeCun, Bottou, Bengio, Haffner 1998; Rowley, Baluja, Kanade 1998; …
• Support Vector Machines and Kernels: Guyon, Vapnik; Heisele, Serre, Poggio 2001; …
• Conditional Random Fields: McCallum, Freitag, Pereira 2000; Kumar, Hebert 2003; …

Slide credit: Antonio Torralba

Page 18: Machine Learning Overview Tamara Berg Language and Vision

Example: Training and testing

• Key challenge: generalization to unseen examples

Training set (labels known) Test set (labels unknown)

Slide credit: Svetlana Lazebnik

Page 19: Machine Learning Overview Tamara Berg Language and Vision

Slide credit: Dan Klein

Page 20: Machine Learning Overview Tamara Berg Language and Vision

Slide from Min-Yen Kan

Classification by Nearest Neighbor

Word vector document classification – here the vector space is illustrated as having 2 dimensions. How many dimensions would the data actually live in?


Page 21: Machine Learning Overview Tamara Berg Language and Vision

Slide from Min-Yen Kan

Classification by Nearest Neighbor


Page 22: Machine Learning Overview Tamara Berg Language and Vision

Classification by Nearest Neighbor

Classify the test document as the class of the document “nearest” to the query document (use vector similarity to find most similar doc)

Slide from Min-Yen Kan


Page 23: Machine Learning Overview Tamara Berg Language and Vision

Classification by kNN

Classify the test document as the majority class of the k documents “nearest” to the query document.

Slide from Min-Yen Kan

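A small sketch of kNN document classification with cosine similarity over word-count vectors; the similarity measure, the value of k, and the function names are illustrative assumptions rather than details from the slides:

    import numpy as np
    from collections import Counter

    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

    def knn_classify(query_vec, doc_vecs, doc_labels, k=3):
        """Majority class of the k training documents most similar to the query document."""
        sims = [cosine(query_vec, d) for d in doc_vecs]
        nearest = np.argsort(sims)[-k:]                    # indices of the k "nearest" docs
        votes = Counter(doc_labels[i] for i in nearest)
        return votes.most_common(1)[0][0]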

Page 24: Machine Learning Overview Tamara Berg Language and Vision

Slide from Min-Yen Kan

What are the features? What’s the training data? Testing data? Parameters?

Classification by kNN


Page 25: Machine Learning Overview Tamara Berg Language and Vision

Slide from Min-Yen Kan

Page 26: Machine Learning Overview Tamara Berg Language and Vision

Slide from Min-Yen Kan

Page 27: Machine Learning Overview Tamara Berg Language and Vision

Slide from Min-Yen Kan

Page 28: Machine Learning Overview Tamara Berg Language and Vision

Slide from Min-Yen Kan

Page 29: Machine Learning Overview Tamara Berg Language and Vision

Slide from Min-Yen Kan

Page 30: Machine Learning Overview Tamara Berg Language and Vision

Slide from Min-Yen Kan

What are the features? What’s the training data? Testing data? Parameters?

Classification by kNN


Page 31: Machine Learning Overview Tamara Berg Language and Vision


NN for vision

Fast Pose Estimation with Parameter Sensitive Hashing. Shakhnarovich, Viola, Darrell.

Page 32: Machine Learning Overview Tamara Berg Language and Vision

J. Hays and A. Efros, IM2GPS: estimating geographic information from a single image, CVPR 2008

NN for vision

Page 33: Machine Learning Overview Tamara Berg Language and Vision

Decision tree classifier

Example problem: decide whether to wait for a table at a restaurant, based on the following attributes:

1. Alternate: is there an alternative restaurant nearby?
2. Bar: is there a comfortable bar area to wait in?
3. Fri/Sat: is today Friday or Saturday?
4. Hungry: are we hungry?
5. Patrons: number of people in the restaurant (None, Some, Full)
6. Price: price range ($, $$, $$$)
7. Raining: is it raining outside?
8. Reservation: have we made a reservation?
9. Type: kind of restaurant (French, Italian, Thai, Burger)
10. WaitEstimate: estimated waiting time (0-10, 10-30, 30-60, >60)

Slide credit: Svetlana Lazebnik
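A hedged sketch of fitting such a classifier with scikit-learn; the numeric encoding of the attributes and the tiny training set below are invented for illustration:

    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical encoding: [Patrons (0=None, 1=Some, 2=Full), WaitEstimate (minutes),
    #                         Raining (0/1), Reservation (0/1)]
    X = [[1, 10, 0, 1],
         [2, 45, 1, 0],
         [0,  0, 0, 0],
         [2, 20, 0, 1]]
    y = ["wait", "leave", "leave", "wait"]   # made-up outcomes

    tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print(tree.predict([[2, 15, 1, 1]]))     # predicted decision for a new situation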

Page 34: Machine Learning Overview Tamara Berg Language and Vision

Decision tree classifier

Slide credit: Svetlana Lazebnik

Page 35: Machine Learning Overview Tamara Berg Language and Vision

Decision tree classifier

Slide credit: Svetlana Lazebnik

Page 36: Machine Learning Overview Tamara Berg Language and Vision

Linear classifier

• Find a linear function to separate the classes

f(x) = sgn(w1x1 + w2x2 + … + wDxD) = sgn(w · x)

Slide credit: Svetlana Lazebnik
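A minimal sketch of this decision rule in NumPy; the bias term b follows the g(x) = wT x + b form used on the following slides, and the example weights are arbitrary:

    import numpy as np

    def linear_classify(x, w, b=0.0):
        """f(x) = sgn(w . x + b): returns +1 or -1."""
        return 1 if np.dot(w, x) + b > 0 else -1

    w = np.array([0.8, -0.5])                        # example weights
    print(linear_classify(np.array([1.0, 0.2]), w))  # +1
    print(linear_classify(np.array([0.1, 1.5]), w))  # -1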

Page 37: Machine Learning Overview Tamara Berg Language and Vision

Discriminant Function

• It can be an arbitrary function of x, such as:
  • Nearest Neighbor
  • Decision Tree
  • Linear Functions: g(x) = wT x + b

Slide credit: Jinwei Gu

Page 38: Machine Learning Overview Tamara Berg Language and Vision

Linear Discriminant Function

• g(x) is a linear function: g(x) = wT x + b

• The decision boundary wT x + b = 0 is a hyper-plane in the feature space; wT x + b > 0 on the side of the points denoted +1, and wT x + b < 0 on the side denoted -1.

Slide credit: Jinwei Gu

Page 39: Machine Learning Overview Tamara Berg Language and Vision

Linear Discriminant Function

• How would you classify these points (labeled +1 and -1) using a linear discriminant function in order to minimize the error rate?

Infinite number of answers!

Slide credit: Jinwei Gu

Page 40: Machine Learning Overview Tamara Berg Language and Vision

Linear Discriminant Function

• How would you classify these points (labeled +1 and -1) using a linear discriminant function in order to minimize the error rate?

Infinite number of answers!

Slide credit: Jinwei Gu

Page 41: Machine Learning Overview Tamara Berg Language and Vision

Linear Discriminant Function

• How would you classify these points (labeled +1 and -1) using a linear discriminant function in order to minimize the error rate?

Infinite number of answers!

Slide credit: Jinwei Gu

Page 42: Machine Learning Overview Tamara Berg Language and Vision

Linear Discriminant Function

• How would you classify these points (labeled +1 and -1) using a linear discriminant function in order to minimize the error rate?

Infinite number of answers! Which one is the best?

Slide credit: Jinwei Gu

Page 43: Machine Learning Overview Tamara Berg Language and Vision

Large Margin Linear Classifier

• The linear discriminant function (classifier) with the maximum margin is the best.

• Margin is defined as the width that the boundary could be increased by before hitting a data point (the “safe zone”).

• Why is it the best? Strong generalization ability.

Linear SVM

Slide credit: Jinwei Gu

Page 44: Machine Learning Overview Tamara Berg Language and Vision

Large Margin Linear Classifier

[Figure: decision boundary wT x + b = 0 with margin hyperplanes wT x + b = 1 and wT x + b = -1; the points x+ and x- lying on them are the support vectors]

Slide credit: Jinwei Gu
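A short sketch of fitting a (nearly) hard-margin linear SVM with scikit-learn and reading off the boundary and the support vectors; the toy points are invented:

    import numpy as np
    from sklearn.svm import SVC

    X = np.array([[2.0, 2.0], [2.5, 1.5], [3.0, 3.0],     # class +1
                  [0.0, 0.5], [0.5, 0.0], [-0.5, 0.5]])   # class -1
    y = np.array([1, 1, 1, -1, -1, -1])

    svm = SVC(kernel="linear", C=1e6).fit(X, y)   # very large C approximates a hard margin
    print(svm.coef_, svm.intercept_)              # w and b of the boundary wT x + b = 0
    print(svm.support_vectors_)                   # points lying on the margin hyperplanes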

Page 45: Machine Learning Overview Tamara Berg Language and Vision

Boosting

• A simple algorithm for learning robust classifiers
  – Freund & Schapire, 1995
  – Friedman, Hastie, Tibshirani, 1998

• Provides an efficient algorithm for sparse visual feature selection
  – Tieu & Viola, 2000
  – Viola & Jones, 2003

• Easy to implement, doesn’t require external optimization tools.

Slide credit: Antonio Torralba

Page 46: Machine Learning Overview Tamara Berg Language and Vision

Boosting

• Defines a classifier using an additive model:

H(x) = Σ_t α_t h_t(x)

where H(x) is the strong classifier, h_t(x) are the weak classifiers, α_t are their weights, and x is the features vector.

Slide credit: Antonio Torralba

Page 47: Machine Learning Overview Tamara Berg Language and Vision

Boosting

• Defines a classifier using an additive model:

H(x) = Σ_t α_t h_t(x)

where H(x) is the strong classifier, h_t(x) are weak classifiers chosen from a family of weak classifiers, α_t are their weights, and x is the features vector.

• We need to define a family of weak classifiers.

Slide credit: Antonio Torralba
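A compact sketch of this additive model using decision stumps as the family of weak classifiers; the stump search, the uniform 1/n initial weights, and the alpha formula are standard AdaBoost choices I am assuming rather than details taken from the slides:

    import numpy as np

    def stump_predict(X, feature, threshold, sign):
        """A weak classifier from a family of axis-aligned decision stumps."""
        return sign * np.where(X[:, feature] > threshold, 1, -1)

    def adaboost(X, y, n_rounds=10):
        n = len(y)
        w = np.ones(n) / n                     # each data point starts with equal weight
        H = []                                 # list of (alpha, feature, threshold, sign)
        for _ in range(n_rounds):
            best = None
            # pick the stump with the lowest weighted error
            for f in range(X.shape[1]):
                for thr in np.unique(X[:, f]):
                    for s in (+1, -1):
                        err = np.sum(w * (stump_predict(X, f, thr, s) != y))
                        if best is None or err < best[0]:
                            best = (err, f, thr, s)
            err, f, thr, s = best
            alpha = 0.5 * np.log((1 - err) / (err + 1e-12))   # weight of this weak classifier
            h = stump_predict(X, f, thr, s)
            w = w * np.exp(-alpha * y * h)                    # re-weight: emphasize mistakes
            w = w / w.sum()
            H.append((alpha, f, thr, s))
        return H

    def strong_classify(H, X):
        """Strong classifier: sign of the weighted sum of the weak classifiers."""
        total = sum(alpha * stump_predict(X, f, thr, s) for alpha, f, thr, s in H)
        return np.sign(total)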

Page 48: Machine Learning Overview Tamara Berg Language and Vision

Adaboost

Slide credit: Antonio Torralba

Page 49: Machine Learning Overview Tamara Berg Language and Vision

Boosting

• It is a sequential procedure. Each data point xt has a class label yt = +1 or -1 and a weight wt = 1.

Slide credit: Antonio Torralba

Page 50: Machine Learning Overview Tamara Berg Language and Vision

Toy example

Weak learners from the family of lines: h such that p(error) = 0.5, i.e., at chance.

Each data point has a class label yt = +1 or -1 and a weight wt = 1.

Slide credit: Antonio Torralba

Page 51: Machine Learning Overview Tamara Berg Language and Vision

Toy example

This one seems to be the best. This is a ‘weak classifier’: it performs slightly better than chance.

Each data point has a class label yt = +1 or -1 and a weight wt = 1.

Slide credit: Antonio Torralba

Page 52: Machine Learning Overview Tamara Berg Language and Vision

Toy example

Each data point has a class label yt = +1 or -1. We update the weights: wt ← wt exp{-yt Ht}

Slide credit: Antonio Torralba

Page 53: Machine Learning Overview Tamara Berg Language and Vision

Toy example

Each data point has a class label yt = +1 or -1. We update the weights: wt ← wt exp{-yt Ht}

Slide credit: Antonio Torralba

Page 54: Machine Learning Overview Tamara Berg Language and Vision

Toy example

Each data point has a class label yt = +1 or -1. We update the weights: wt ← wt exp{-yt Ht}

Slide credit: Antonio Torralba

Page 55: Machine Learning Overview Tamara Berg Language and Vision

Toy example

Each data point has a class label yt = +1 or -1. We update the weights: wt ← wt exp{-yt Ht}

Slide credit: Antonio Torralba

Page 56: Machine Learning Overview Tamara Berg Language and Vision

Toy example

The strong (non-linear) classifier is built as the combination of all the weak (linear) classifiers f1, f2, f3, f4.

Slide credit: Antonio Torralba

Page 57: Machine Learning Overview Tamara Berg Language and Vision

Adaboost

Slide credit: Antonio Torralba

Page 58: Machine Learning Overview Tamara Berg Language and Vision

Semi-Supervised Learning


Page 60: Machine Learning Overview Tamara Berg Language and Vision


However, for many problems, labeled data can be rare or expensive (you need to pay someone to do it, it may require special testing, …). Unlabeled data is much cheaper.

Slide Credit: Avrim Blum

Page 61: Machine Learning Overview Tamara Berg Language and Vision


However, for many problems, labeled data can be rare or expensive (you need to pay someone to do it, it may require special testing, …). Unlabeled data is much cheaper.

Examples: speech, images, medical outcomes, customer modeling, protein sequences, web pages.

Slide Credit: Avrim Blum

Page 62: Machine Learning Overview Tamara Berg Language and Vision


However, for many problems, labeled data can be rare or expensive (you need to pay someone to do it, it may require special testing, …). Unlabeled data is much cheaper.

[From Jerry Zhu]

Slide Credit: Avrim Blum

Page 63: Machine Learning Overview Tamara Berg Language and Vision


However, for many problems, labeled data can be rare or expensive (you need to pay someone to do it, it may require special testing, …). Unlabeled data is much cheaper.

Can we make use of cheap unlabeled data?

Slide Credit: Avrim Blum

Page 64: Machine Learning Overview Tamara Berg Language and Vision

Semi-Supervised Learning

Can we use unlabeled data to augment a small labeled sample to improve learning?

But unlabeled data is missing the most important info!! But maybe it still has useful regularities that we can use.

Slide Credit: Avrim Blum

Page 65: Machine Learning Overview Tamara Berg Language and Vision


Method 1: EM

Page 66: Machine Learning Overview Tamara Berg Language and Vision

How to use unlabeled data

• One way is to use the EM algorithm (EM: Expectation Maximization).

• The EM algorithm is a popular iterative algorithm for maximum likelihood estimation in problems with missing data.

• The EM algorithm consists of two steps:
  – Expectation step: fill in the missing data.
  – Maximization step: calculate a new maximum a posteriori estimate for the parameters.

Page 67: Machine Learning Overview Tamara Berg Language and Vision

Example Algorithm

1. Train a classifier with only the labeled documents.

2. Use it to probabilistically classify the unlabeled documents.

3. Use ALL the documents to train a new classifier.

4. Iterate steps 2 and 3 to convergence.

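One hedged way to implement steps 1-4, using a naive Bayes text classifier and soft (probability-weighted) labels; MultinomialNB and the weighting scheme are my assumptions, not prescribed by the slide:

    import numpy as np
    from sklearn.naive_bayes import MultinomialNB

    def em_text_classifier(X_lab, y_lab, X_unlab, n_iters=10):
        # X_lab, X_unlab: document-term count matrices (rows = documents)
        clf = MultinomialNB().fit(X_lab, y_lab)          # 1. train on the labeled documents only
        for _ in range(n_iters):                         # 4. iterate to convergence
            probs = clf.predict_proba(X_unlab)           # 2. probabilistically label unlabeled docs
            # 3. retrain on ALL documents: each unlabeled doc appears once per class,
            #    weighted by the predicted probability of that class
            X_all = np.vstack([X_lab, np.repeat(X_unlab, probs.shape[1], axis=0)])
            y_all = np.concatenate([y_lab, np.tile(clf.classes_, len(X_unlab))])
            w_all = np.concatenate([np.ones(len(y_lab)), probs.ravel()])
            clf = MultinomialNB().fit(X_all, y_all, sample_weight=w_all)
        return clf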

Page 68: Machine Learning Overview Tamara Berg Language and Vision


Method 2: Co-Training

Page 69: Machine Learning Overview Tamara Berg Language and Vision

Co-training [Blum & Mitchell ’98]

Many problems have two different sources of info (“features/views”) you can use to determine the label.

E.g., classifying faculty webpages: can use words on the page or words on links pointing to the page. Each example is x = (x1, x2), where x1 is the link info and x2 is the text info.

Slide Credit: Avrim Blum

Page 70: Machine Learning Overview Tamara Berg Language and Vision

Co-training

Idea: Use a small labeled sample to learn initial rules.
– E.g., “my advisor” pointing to a page is a good indicator it is a faculty home page.
– E.g., “I am teaching” on a page is a good indicator it is a faculty home page.

Slide Credit: Avrim Blum

Page 71: Machine Learning Overview Tamara Berg Language and Vision

Co-training

Idea: Use a small labeled sample to learn initial rules.
– E.g., “my advisor” pointing to a page is a good indicator it is a faculty home page.
– E.g., “I am teaching” on a page is a good indicator it is a faculty home page.

Then look for unlabeled examples ⟨x1, x2⟩ where one view is confident and the other is not, and have it label the example for the other.

Train 2 classifiers, one on each type of info, using each to help train the other.

Slide Credit: Avrim Blum
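A sketch of this loop with two naive Bayes classifiers, one per view; GaussianNB, the confidence measure (maximum predicted probability), and the n_add batch size are my own choices for illustration:

    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    def co_train(X1_lab, X2_lab, y_lab, X1_unlab, X2_unlab, n_rounds=10, n_add=5):
        """Each example has two views: x1 (e.g. link info) and x2 (e.g. page text)."""
        L1, L2, yL = list(X1_lab), list(X2_lab), list(y_lab)
        U = list(range(len(X1_unlab)))                 # indices of still-unlabeled examples
        c1 = GaussianNB().fit(L1, yL)                  # classifier trained on view 1
        c2 = GaussianNB().fit(L2, yL)                  # classifier trained on view 2
        for _ in range(n_rounds):
            for clf, X_view in ((c1, X1_unlab), (c2, X2_unlab)):
                if not U:
                    break
                # this view labels the examples it is most confident about, for the other view
                probs = clf.predict_proba([X_view[i] for i in U])
                chosen = np.argsort(probs.max(axis=1))[::-1][:n_add]
                for j in chosen:
                    i = U[j]
                    L1.append(X1_unlab[i]); L2.append(X2_unlab[i])
                    yL.append(clf.classes_[probs[j].argmax()])
                U = [U[j] for j in range(len(U)) if j not in set(chosen)]
            c1 = GaussianNB().fit(L1, yL)              # retrain both classifiers on the grown set
            c2 = GaussianNB().fit(L2, yL)
        return c1, c2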

Page 72: Machine Learning Overview Tamara Berg Language and Vision

Co-training vs. EM

• Co-training splits features, EM does not.

• Co-training incrementally uses the unlabeled data.

• EM probabilistically labels all the data at each round; EM iteratively uses the unlabeled data.


Page 73: Machine Learning Overview Tamara Berg Language and Vision

Generative vs Discriminative

Discriminative version: build a classifier to discriminate between monkeys and non-monkeys.

P(monkey | image)

Page 74: Machine Learning Overview Tamara Berg Language and Vision

Generative vs Discriminative

Generative version: build a model of the joint distribution.

P(image, monkey)

Page 75: Machine Learning Overview Tamara Berg Language and Vision

Generative vs Discriminative

Can use Bayes rule to compute p(monkey|image) if we know p(image,monkey)

Page 76: Machine Learning Overview Tamara Berg Language and Vision

Generative vs Discriminative

Can use Bayes rule to compute p(monkey|image) if we know p(image,monkey)

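Written out, the Bayes-rule computation referred to above is:

    P(\text{monkey} \mid \text{image})
      = \frac{P(\text{image}, \text{monkey})}{P(\text{image})}
      = \frac{P(\text{image}, \text{monkey})}{P(\text{image}, \text{monkey}) + P(\text{image}, \text{non-monkey})}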