CSSE463: Image Recognition Day 26
This week
Today: Applications of PCA
Weds: k-means lab due.
Sunday night: project plans and prelim work due
Questions?
Principal Components Analysis
Given a set of samples, find the direction(s) of greatest variance.
We’ve done this!
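As a concrete sketch (not from the slides — the 2-D samples here are made up for illustration), the direction of greatest variance falls out of a few lines of NumPy:

```python
import numpy as np

# Made-up 2-D samples with most of their variance along the y = x direction.
rng = np.random.default_rng(0)
t = rng.normal(size=500)
pts = np.column_stack([t + 0.1 * rng.normal(size=500),
                       t + 0.1 * rng.normal(size=500)])

# Covariance of the samples (np.cov subtracts the mean itself).
C = np.cov(pts, rowvar=False)

# Eigenvectors of C are the principal axes; eigenvalues rank their importance.
vals, vecs = np.linalg.eigh(C)        # eigh: C is symmetric
principal = vecs[:, np.argmax(vals)]  # direction of greatest variance

print(principal)  # roughly +/-[0.707, 0.707], i.e. the y = x direction
```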
Example: Spatial moments
Principal axes are eigenvectors of the covariance matrix
Eigenvalues gave relative importance of each dimension
Note that each point can be represented in 2D using the new coordinate system defined by the eigenvectors
The 1D representation obtained by projecting the point onto the principal axis is a reasonably-good approximation
[Figure: scatter plots of samples (height vs. weight; size vs. girth) with principal axes drawn through the points]
$$c = \begin{pmatrix} \sigma_{xx} & \sigma_{xy} \\ \sigma_{yx} & \sigma_{yy} \end{pmatrix}
= \frac{1}{n}\begin{pmatrix} \sum_{i}(x_i-\bar{x})^2 & \sum_{i}(x_i-\bar{x})(y_i-\bar{y}) \\ \sum_{i}(x_i-\bar{x})(y_i-\bar{y}) & \sum_{i}(y_i-\bar{y})^2 \end{pmatrix}$$
Covariance Matrix (using matrix operations)
Place the points in their own column.
Find the mean of each row.
Subtract it.
Multiply N * N^T
You will get a 2x2 matrix, in which each entry is a summation over all n points. You could then divide by n.
$$F = \begin{pmatrix} x_1 & x_2 & x_3 & \cdots & x_n \\ y_1 & y_2 & y_3 & \cdots & y_n \end{pmatrix}, \qquad
N = \begin{pmatrix} x_1-\bar{x} & x_2-\bar{x} & x_3-\bar{x} & \cdots & x_n-\bar{x} \\ y_1-\bar{y} & y_2-\bar{y} & y_3-\bar{y} & \cdots & y_n-\bar{y} \end{pmatrix}$$

$$c = \frac{1}{n} N N^T
= \frac{1}{n}\begin{pmatrix} \sum_{i}(x_i-\bar{x})^2 & \sum_{i}(x_i-\bar{x})(y_i-\bar{y}) \\ \sum_{i}(x_i-\bar{x})(y_i-\bar{y}) & \sum_{i}(y_i-\bar{y})^2 \end{pmatrix}
= \begin{pmatrix} \sigma_{xx} & \sigma_{xy} \\ \sigma_{yx} & \sigma_{yy} \end{pmatrix}$$

Q1
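The recipe on this slide (points as columns, subtract each row's mean, multiply N by its transpose, divide by n) can be sketched in NumPy; the four points below are made up:

```python
import numpy as np

# Each point in its own column: first row holds the x's, second the y's.
F = np.array([[2.0, 4.0, 6.0, 8.0],
              [1.0, 3.0, 3.0, 5.0]])

# Find the mean of each row and subtract it.
N = F - F.mean(axis=1, keepdims=True)

# N @ N.T is a 2x2 matrix in which each entry is a summation over
# all n points; dividing by n gives the covariance matrix.
n = F.shape[1]
C = (N @ N.T) / n

print(C)
# Agrees with NumPy's built-in population covariance:
assert np.allclose(C, np.cov(F, bias=True))
```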
Generic process
The covariance matrix of a set of data gives the ways in which the set varies.
The eigenvectors corresponding to the largest eigenvalues give the directions in which it varies most.
Two applications:
Eigenfaces
Time-elapsed photography
“Eigenfaces”
Question: what are the primary ways in which faces vary?
What happens when we apply PCA?
For each face, create a column vector that contains all the pixels from that face
This is a point in a high-dimensional space (e.g., 65536 for a 256x256 pixel image)
Create a matrix F of all M faces in the training set.
Subtract off the “average face” to get N
Compute the rc x rc covariance matrix C = N*N^T.
M. Turk and A. Pentland, “Eigenfaces for Recognition,” J. Cognitive Neuroscience, 3(1), 1991.
$$F = \begin{pmatrix}
p_{1,1,1} & p_{1,1,2} & p_{1,1,3} & \cdots & p_{1,1,M} \\
p_{1,2,1} & p_{1,2,2} & p_{1,2,3} & \cdots & p_{1,2,M} \\
p_{1,3,1} & p_{1,3,2} & p_{1,3,3} & \cdots & p_{1,3,M} \\
\vdots & & & & \vdots \\
p_{r,c,1} & p_{r,c,2} & p_{r,c,3} & \cdots & p_{r,c,M}
\end{pmatrix}$$

where $p_{i,j,m}$ is pixel $(i,j)$ of face $m$: each column stacks all rc pixels of one face.
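With tiny made-up 8x8 "faces" standing in for real 256x256 images (all sizes here are assumptions for illustration), the construction of F, N, and C sketches out as:

```python
import numpy as np

# Tiny stand-ins for faces: M random 8x8 images (rc = 64 pixels each).
rng = np.random.default_rng(1)
r, c, M = 8, 8, 10
faces = rng.random((M, r, c))

# Each face becomes one column of F: an rc x M matrix.
F = faces.reshape(M, r * c).T             # shape (64, 10)

# Subtract off the "average face" to get N.
avg_face = F.mean(axis=1, keepdims=True)
N = F - avg_face

# rc x rc covariance matrix C = N @ N.T.
C = N @ N.T                               # shape (64, 64)

# Its leading eigenvectors are the "eigenfaces".
vals, vecs = np.linalg.eigh(C)
eigenfaces = vecs[:, ::-1][:, :4]         # top 4, each reshapeable to 8x8

print(eigenfaces.shape)  # (64, 4)
```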
“Eigenfaces”
Question: what are the primary ways in which faces vary?
What happens when we apply PCA?
The eigenvectors are the directions of greatest variability
Note that these are in 65536-D; thus each one forms a face.
This is an “eigenface”
Here are the first 4 from the ORL face dataset.
Q2-3
Image credit: http://upload.wikimedia.org/wikipedia/commons/6/67/Eigenfaces.png; from the ORL face database, AT&T Laboratories Cambridge
Interlude: Projecting points onto lines
We can project each point onto the principal axis. How?
[Figure: the same scatter plots (height vs. weight; size vs. girth), with points projected onto the principal axis]
Interlude: Projecting a point onto a line
Assuming the axis is represented by a unit vector u, we can just take the dot product of the point p and the vector:
u*p = u^T p (which is 1D)
Example: Project (5,2) onto line y=x.
If we want to project onto two vectors, u and v, simultaneously:
Create w = [u v], then compute w^T p, which is 2D.
Result: p is now in terms of u and v.
This generalizes to arbitrary dimensions.
Q4
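The worked example above, project (5,2) onto the line y=x, takes only a few lines of NumPy:

```python
import numpy as np

p = np.array([5.0, 2.0])

# Unit vector along the line y = x.
u = np.array([1.0, 1.0]) / np.sqrt(2)

# 1-D coordinate of p along the axis: the dot product u*p = u^T p.
coord = u @ p
print(coord)            # 7/sqrt(2), about 4.95

# Projecting onto two axes at once: w = [u v], then w^T p is 2-D.
v = np.array([-1.0, 1.0]) / np.sqrt(2)   # unit vector perpendicular to u
w = np.column_stack([u, v])
print(w.T @ p)          # p expressed in terms of u and v
```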
Application: Face detection
If we want to project a point onto two vectors, u and v, simultaneously:
Create w = [u v], then compute w^T p, which is 2D.
Result: p is now in terms of u and v.
This generalizes to arbitrary dimensions.
You can represent a face in terms of its eigenfaces; it’s just a different basis.
The M most important eigenvectors capture most of the variability:
Ignore the rest!
Instead of 65k dimensions, we only have M (~50 in practice)
Call these 50 dimensions “face-space”
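A sketch of the "face-space" idea with made-up data (20 samples in 100-D instead of faces in 65536-D; all sizes are assumptions): keep the top M eigenvectors, and each sample is then described by just M coefficients.

```python
import numpy as np

# Stand-in data: 20 samples in 100-D, one sample per column.
rng = np.random.default_rng(2)
X = rng.random((100, 20))
N = X - X.mean(axis=1, keepdims=True)

# The top M eigenvectors of the covariance matrix span "face-space".
vals, vecs = np.linalg.eigh(N @ N.T)
M = 5
basis = vecs[:, ::-1][:, :M]       # 100 x M, columns orthonormal

# Any sample is represented by just M coefficients in this basis...
coeffs = basis.T @ N[:, 0]         # shape (M,)

# ...and can be approximately reconstructed from them.
approx = basis @ coeffs            # back in 100-D

print(coeffs.shape, approx.shape)
```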
“Eigenfaces”
Question: what are the primary ways in which faces vary?
What happens when we apply PCA?
Keep only the top M eigenfaces for “face space”.
We can project any face onto these eigenvectors.
Thus, any face is a linear combination of the eigenfaces.
Can classify faces in this lower-D space.
There are computational tricks to make the computation feasible
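One such trick, used in the Turk-Pentland paper: eigendecompose the small M x M matrix N^T N instead of the huge rc x rc matrix N N^T. If v is an eigenvector of N^T N with eigenvalue lambda, then N v is an eigenvector of N N^T with the same eigenvalue. A numerical check with made-up sizes:

```python
import numpy as np

rng = np.random.default_rng(3)
rc, M = 1000, 8                    # many pixels, few training faces
N = rng.random((rc, M))
N -= N.mean(axis=1, keepdims=True)

# Trick: solve the small M x M eigenproblem for N^T N...
vals, v = np.linalg.eigh(N.T @ N)  # an 8x8 problem, not 1000x1000

# ...because if (N^T N) v = lam v, then (N N^T)(N v) = lam (N v):
# N v is an eigenvector of the big covariance matrix.
u = N @ v[:, -1]                   # eigenvector for the largest eigenvalue
assert np.allclose((N @ N.T) @ u, vals[-1] * u)
print("trick verified")
```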
Time-elapsed photography
Question: what are the ways that outdoor images vary over time?
Form a matrix in which each column is an image
Find eigs of covariance matrix
N Jacobs, N Roman, R Pless, Consistent Temporal Variations in Many Outdoor Scenes. IEEE Computer Vision and Pattern Recognition, Minneapolis, MN, June 2007.
See example images on Dr. B’s laptop or at the link below.
Time-elapsed photography
Question: what are the ways that outdoor images vary over time?
The mean and top 3 eigenvectors (scaled):
Interpretation?
Q5-6
Time-elapsed photography
Recall that each image in the dataset is a linear combination of the eigenimages.
[Figure: two example frames written as combinations of the mean image and the top principal components:
image1 = mean + 4912*PC1 - 217*PC2 + 393*PC3
image2 = mean - 2472*PC1 + 308*PC2 + 885*PC3]
Time-elapsed photography
Every image’s projection onto the first eigenvector
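Projecting every frame onto the first eigenvector yields one number per time step. A sketch with a synthetic stand-in for the time-lapse (the real images aren't included here, so the "frames" below are a made-up brightness cycle):

```python
import numpy as np

# Synthetic time-lapse: 50 tiny 64-pixel frames whose overall
# brightness varies smoothly over "time" t.
rng = np.random.default_rng(4)
t = np.linspace(0, 2 * np.pi, 50)
frames = np.outer(np.ones(64), np.sin(t)) + 0.05 * rng.normal(size=(64, 50))

# Same pipeline as eigenfaces: subtract the mean image, find eigenvectors.
N = frames - frames.mean(axis=1, keepdims=True)
vals, vecs = np.linalg.eigh(N @ N.T)
pc1 = vecs[:, -1]

# One number per frame: the projection onto the first eigenvector.
series = pc1 @ N                   # shape (50,), tracks the brightness cycle
print(series.shape)
```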
Research idea
Done:
Finding the PCs
Using them to detect latitude and longitude given images from a camera
Yet to do:
Classifying images based on their projection into this space, as was done for eigenfaces