Machine learning continued
Image source: https://www.coursera.org/course/ml
More about linear classifiers
• When the data is linearly separable, there may be more than one separator (hyperplane)
x positive: w·x + b ≥ 0
x negative: w·x + b < 0
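To make the decision rule concrete, here is a minimal NumPy sketch; the weight vector and bias are made up for illustration, not learned from data:

```python
import numpy as np

# Hypothetical parameters of a linear classifier (not learned here).
w = np.array([2.0, -1.0])   # weight vector, normal to the hyperplane
b = -0.5                    # bias term

def classify(x):
    """Label x as positive (+1) or negative (-1) from the sign of w.x + b."""
    return 1 if np.dot(w, x) + b >= 0 else -1

print(classify(np.array([1.0, 0.5])))   # w.x + b = 1.0  -> +1
print(classify(np.array([0.0, 2.0])))   # w.x + b = -2.5 -> -1
```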
Which separator is best?
Support vector machines
• Find hyperplane that maximizes the margin between the positive and negative examples
C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998
Support vector machines
• Find hyperplane that maximizes the margin between the positive and negative examples

xi positive (yi = 1): w·xi + b ≥ 1
xi negative (yi = −1): w·xi + b ≤ −1

[Figure: the margin between the two classes, with the support vectors lying on its boundary]
C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998
Distance between point and hyperplane: |w·x + b| / ||w||

For support vectors, w·x + b = ±1
Therefore, the margin is 2 / ||w||
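A quick numeric check of these two formulas, with a made-up hyperplane (w and b chosen purely for illustration):

```python
import numpy as np

w = np.array([3.0, 4.0])    # ||w|| = 5
b = -5.0

def distance(x):
    """Distance from point x to the hyperplane: |w.x + b| / ||w||."""
    return abs(np.dot(w, x) + b) / np.linalg.norm(w)

print(distance(np.array([2.0, 1.0])))   # |6 + 4 - 5| / 5 = 1.0
print(2 / np.linalg.norm(w))            # margin 2/||w|| = 0.4
```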
Finding the maximum margin hyperplane
1. Maximize margin 2 / ||w||
2. Correctly classify all training data:
Quadratic optimization problem:
C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998
min over w, b:  (1/2)||w||²  subject to  yi(w·xi + b) ≥ 1

which combines the two constraints:

xi positive (yi = 1): w·xi + b ≥ 1
xi negative (yi = −1): w·xi + b ≤ −1
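As a sketch of what solving this quadratic program looks like, the snippet below feeds the primal problem to a general-purpose solver (scipy's SLSQP) on a four-point toy set; production SVM packages instead solve the dual QP with specialized algorithms:

```python
import numpy as np
from scipy.optimize import minimize

# Toy linearly separable data: two positives, two negatives.
X = np.array([[2.0, 2.0], [2.5, 3.0], [0.0, 0.0], [-0.5, 1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

def objective(theta):            # theta = [w1, w2, b]
    w = theta[:2]
    return 0.5 * np.dot(w, w)    # (1/2)||w||^2

# One inequality constraint per point: y_i (w.x_i + b) - 1 >= 0
constraints = [{"type": "ineq",
                "fun": lambda t, i=i: y[i] * (np.dot(t[:2], X[i]) + t[2]) - 1.0}
               for i in range(len(X))]

res = minimize(objective, np.zeros(3), method="SLSQP", constraints=constraints)
w, b = res.x[:2], res.x[2]
print("w =", w, "b =", b, "margin =", 2 / np.linalg.norm(w))
```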
Finding the maximum margin hyperplane
• Solution: w = Σi αi yi xi, where the learned weights αi are nonzero only for the support vectors

C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998
Finding the maximum margin hyperplane
• Solution: w = Σi αi yi xi and b = yi – w·xi (for any support vector)
• Classification function (decision boundary):
w·x + b = Σi αi yi xi·x + b
• Notice that it relies on an inner product between the test point x and the support vectors xi
• Solving the optimization problem also involves computing the inner products xi·xj between all pairs of training points

C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998
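This expansion is exactly what trained SVM implementations store. As a sketch, scikit-learn's SVC exposes the αi·yi products (dual_coef_), the support vectors, and b, so the decision value can be reconstructed by hand:

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[2.0, 2.0], [2.5, 3.0], [0.0, 0.0], [-0.5, 1.0]])
y = np.array([1, 1, -1, -1])
clf = SVC(kernel="linear", C=1e6).fit(X, y)   # large C ~ hard margin

# dual_coef_ holds alpha_i * y_i for the support vectors only, so
# the decision value is sum_i (alpha_i y_i) (x_i . x) + b.
x_test = np.array([1.0, 2.0])
manual = clf.dual_coef_ @ (clf.support_vectors_ @ x_test) + clf.intercept_
print(manual, clf.decision_function([x_test]))   # the two should match
```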
Nonlinear SVMs
• Datasets that are linearly separable work out great:
• But what if the dataset is just too hard?
• We can map it to a higher-dimensional space:

[Figure: three 1-D number lines; the hard dataset becomes separable after the mapping x → (x, x²)]

Slide credit: Andrew Moore
Nonlinear SVMs
• General idea: the original input space can always be mapped to some higher-dimensional feature space where the training set is separable: Φ: x → φ(x)

Slide credit: Andrew Moore
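The 1-D example from the previous slide can be reproduced in a few lines; the points and the threshold below are made up for illustration:

```python
import numpy as np

# 1-D labels that no single threshold on x can separate:
# the negatives lie on both sides of the positives.
x = np.array([-3.0, -2.0, -1.0, 1.0, 2.0, 3.0])
y = np.array([-1, 1, 1, 1, 1, -1])

# Lift with phi(x) = (x, x^2): in the lifted space, a horizontal
# line (a threshold on the x^2 coordinate) separates the classes.
phi = np.column_stack([x, x ** 2])
print(np.all((phi[:, 1] < 5.0) == (y == 1)))   # True
```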
Nonlinear SVMs
• The kernel trick: instead of explicitly computing the lifting transformation φ(x), define a kernel function K such that

K(x, y) = φ(x) · φ(y)
(to be valid, the kernel function must satisfy Mercer’s condition)
• This gives a nonlinear decision boundary in the original feature space:
Σi αi yi φ(xi) · φ(x) + b = Σi αi yi K(xi, x) + b
C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998
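A minimal sketch of evaluating this kernelized decision function; the support vectors, labels, dual weights αi, and b below are hypothetical stand-ins for the output of the dual QP:

```python
import numpy as np

def K(a, b, sigma=1.0):
    """Gaussian kernel, matching the form used on the following slides."""
    return np.exp(-np.sum((a - b) ** 2) / sigma ** 2)

sv    = np.array([[0.0, 1.0], [1.0, 0.0], [2.0, 2.0]])  # support vectors
y_sv  = np.array([1, 1, -1])                             # their labels
alpha = np.array([0.7, 0.3, 1.0])                        # made-up duals
b     = 0.1

def decision(x):
    return sum(a * yi * K(xi, x) for a, yi, xi in zip(alpha, y_sv, sv)) + b

print(np.sign(decision(np.array([0.5, 0.5]))))   # predicted class
```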
Nonlinear kernel: Example
• Consider the mapping φ(x) = (x, x²)

φ(x) · φ(y) = (x, x²) · (y, y²) = xy + x²y²
K(x, y) = xy + x²y²

Polynomial kernel: K(x, y) = (c + x·y)ᵈ
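This identity is easy to verify numerically; the two scalars below are arbitrary test inputs:

```python
import numpy as np

def phi(x):
    return np.array([x, x ** 2])     # the explicit lifting

x, y = 1.5, -2.0
print(phi(x) @ phi(y))               # -3.0 + 9.0 = 6.0
print(x * y + x ** 2 * y ** 2)       # same value, without lifting
```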
Gaussian kernel
• Also known as the radial basis function (RBF) kernel:
• The corresponding mapping φ(x) is infinite-dimensional!
• What is the role of parameter σ?
• What if σ is close to zero?
• What if σ is very large?
K(x, y) = exp(−(1/σ²) ||x − y||²)
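A small experiment suggests the answers: with the kernel as written above, σ sets the distance scale over which points "see" each other. (The two points below are arbitrary.)

```python
import numpy as np

def gaussian_kernel(x, y, sigma):
    return np.exp(-np.sum((x - y) ** 2) / sigma ** 2)

x, y = np.array([0.0, 0.0]), np.array([1.0, 1.0])
for sigma in (0.1, 1.0, 10.0):
    print(sigma, gaussian_kernel(x, y, sigma))
# sigma -> 0: K ~ 0 for any two distinct points, so the boundary can
#   wrap around individual examples (risk of overfitting).
# sigma large: K ~ 1 everywhere, giving a very smooth, nearly
#   linear boundary (risk of underfitting).
```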
Gaussian kernel
[Figure: nonlinear decision boundary produced by a Gaussian-kernel SVM, with the support vectors (SV's) marked]
What about multi-class SVMs?
• Unfortunately, there is no “definitive” multi-class SVM formulation
• In practice, we have to obtain a multi-class SVM by combining multiple two-class SVMs (see the sketch below)
• One vs. others
  • Training: learn an SVM for each class vs. the others
  • Testing: apply each SVM to the test example and assign to it the class of the SVM that returns the highest decision value
• One vs. one
  • Training: learn an SVM for each pair of classes
  • Testing: each learned SVM “votes” for a class to assign to the test example
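Both combination schemes are available off the shelf; as a sketch, scikit-learn wraps any two-class SVM this way (iris is used here only as a convenient 3-class dataset):

```python
from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)   # 3 classes

# One vs. others: one binary SVM per class; predict the class whose
# SVM returns the highest decision value.
ovr = OneVsRestClassifier(SVC(kernel="rbf")).fit(X, y)

# One vs. one: one binary SVM per pair of classes; majority vote.
ovo = OneVsOneClassifier(SVC(kernel="rbf")).fit(X, y)

print(ovr.predict(X[:3]), ovo.predict(X[:3]))
```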
SVMs: Pros and cons
• Pros
  • Many publicly available SVM packages: http://www.kernel-machines.org/software
  • Kernel-based framework is very powerful, flexible
  • SVMs work very well in practice, even with very small training sample sizes
• Cons
  • No “direct” multi-class SVM, must combine two-class SVMs
  • Computation, memory (esp. for nonlinear SVMs)
    – During training, must compute the matrix of kernel values for every pair of examples
    – Learning can take a very long time for large-scale problems
Beyond simple classification: Structured prediction
Image → Word
Source: B. Taskar
Structured Prediction
Sentence → Parse tree
Source: B. Taskar
Structured Prediction
Sentence in two languages → Word alignment
Source: B. Taskar
Structured Prediction
Amino-acid sequence → Bond structure
Source: B. Taskar
Structured Prediction
• Many image-based inference tasks can loosely be thought of as “structured prediction”
Source: D. Ramanan
Unsupervised Learning
• Idea: Given only unlabeled data as input, learn some sort of structure
• The objective is often more vague or subjective than in supervised learning
• This is more of an exploratory/descriptive data analysis
Unsupervised Learning
• Clustering
– Discover groups of “similar” data points
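For instance, k-means clustering on synthetic two-blob data (the blobs are made up for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)),    # blob around (0, 0)
               rng.normal(3.0, 0.5, (50, 2))])   # blob around (3, 3)

km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.cluster_centers_)    # should land near the two blob centers
print(km.labels_[:5])         # cluster index assigned to each point
```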
Unsupervised Learning
• Quantization
– Map a continuous input to a discrete (more compact) output
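A minimal sketch of quantization with a hand-picked codebook; in practice the codewords are usually learned, e.g. as k-means centers:

```python
import numpy as np

# Three hypothetical codewords (cluster centers).
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])

def quantize(x):
    """Map a continuous input to the index of its nearest codeword."""
    return int(np.argmin(np.linalg.norm(codebook - x, axis=1)))

print(quantize(np.array([0.9, 1.2])))   # 1: nearest to (1, 1)
```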
Unsupervised Learning
• Dimensionality reduction, manifold learning
– Discover a lower-dimensional surface on which the data lives
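The linear version of this idea is PCA; the sketch below builds 3-D points that really live near a 1-D line and checks that a single component captures nearly all the variance:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
t = rng.uniform(-1.0, 1.0, 100)                    # 1-D latent coordinate
X = np.column_stack([t, 2 * t, -t]) + rng.normal(0, 0.01, (100, 3))

pca = PCA(n_components=1).fit(X)
print(pca.explained_variance_ratio_)   # ~[1.0]: one direction suffices
```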
Unsupervised Learning
• Density estimation
– Find a function that approximates the probability density of the data (i.e., value of the function is high for “typical” points and low for “atypical” points)
– Can be used for anomaly detection
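A sketch with a kernel density estimate (scipy's gaussian_kde) on made-up 1-D data, scoring a typical point and an outlier:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, 500)     # "normal" observations

kde = gaussian_kde(data)
print(kde(np.array([0.0])))   # high density: a typical point
print(kde(np.array([6.0])))   # ~0 density: flag as an anomaly
```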
Semi-supervised learning
• Lots of data is available, but only a small portion is labeled (e.g., since labeling is expensive)
– Why is learning from labeled and unlabeled data better than learning from labeled data alone?
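One common (but not the only) way to exploit the unlabeled data is self-training: fit on the labeled set, pseudo-label the points the model is confident about, and repeat. A minimal sketch, assuming X_lab/y_lab and X_unlab are given:

```python
import numpy as np
from sklearn.svm import SVC

def self_train(X_lab, y_lab, X_unlab, threshold=0.9, rounds=5):
    """Grow the labeled set with confident pseudo-labels."""
    clf = SVC(kernel="rbf", probability=True).fit(X_lab, y_lab)
    for _ in range(rounds):
        if len(X_unlab) == 0:
            break
        proba = clf.predict_proba(X_unlab)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break
        pseudo = clf.classes_[proba[confident].argmax(axis=1)]
        X_lab = np.vstack([X_lab, X_unlab[confident]])
        y_lab = np.concatenate([y_lab, pseudo])
        X_unlab = X_unlab[~confident]
        clf = SVC(kernel="rbf", probability=True).fit(X_lab, y_lab)
    return clf
```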
Active learning
• The learning algorithm can choose its own training examples, or ask a “teacher” for an answer on selected inputs
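A sketch of the most common selection rule, uncertainty sampling: query the unlabeled example nearest the current decision boundary. The seed points and pool below are made up:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_seed = np.array([[0.0, 0.0], [2.0, 2.0]])   # tiny labeled seed set
y_seed = np.array([0, 1])
X_pool = rng.uniform(0.0, 2.0, (20, 2))       # unlabeled pool

clf = SVC(kernel="linear").fit(X_seed, y_seed)

# The pool point with the smallest |decision value| lies closest to
# the boundary, so its label is the most informative one to ask for.
query = int(np.argmin(np.abs(clf.decision_function(X_pool))))
print("ask the teacher to label:", X_pool[query])
```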
S. Vijayanarasimhan and K. Grauman, “Cost-Sensitive Active Visual Category Learning,” 2009
Xinlei Chen, Abhinav Shrivastava and Abhinav Gupta. NEIL: Extracting Visual Knowledge from Web Data. In ICCV 2013