776 Computer Vision
Jared Heinly, Spring 2014
(slides borrowed from Jan-Michael Frahm, Svetlana Lazebnik, and others)
Image alignment
Image from http://graphics.cs.cmu.edu/courses/15-463/2010_fall/
A look into the past
http://blog.flickr.net/en/2010/01/27/a-look-into-the-past/
A look into the past• Leningrad during the blockade
http://komen-dant.livejournal.com/345684.html
Bing streetside images
http://www.bing.com/community/blogs/maps/archive/2010/01/12/new-bing-maps-application-streetside-photos.aspx
Image alignment: Applications
Panorama stitching
Recognition of object instances
Image alignment: Challenges
Small degree of overlap
Occlusion, clutter
Intensity changes
Image alignment
• Two families of approaches:
  o Direct (pixel-based) alignment
    • Search for alignment where most pixels agree
  o Feature-based alignment
    • Search for alignment where extracted features agree
    • Can be verified using pixel-based alignment
2D transformation models
• Similarity (translation, scale, rotation)
• Affine
• Projective (homography)
Let’s start with affine transformations
• Simple fitting procedure (linear least squares)
• Approximates viewpoint changes for roughly planar objects and roughly orthographic cameras
• Can be used to initialize fitting for more complex models
Fitting an affine transformation
• Assume we know the correspondences, how do we get the transformation?
Correspondences: (x_i, y_i) ↔ (x_i', y_i')

    [ x_i' ]   [ m1 m2 ] [ x_i ]   [ t1 ]
    [ y_i' ] = [ m3 m4 ] [ y_i ] + [ t2 ]

Equivalently, as a linear system in the six unknowns:

    [ x_i  y_i  0    0    1  0 ] [ m1 m2 m3 m4 t1 t2 ]^T = [ x_i' ]
    [ 0    0    x_i  y_i  0  1 ]                           [ y_i' ]
Fitting an affine transformation
• Linear system with six unknowns
• Each match gives us two linearly independent equations: need at least three matches to solve for the transformation parameters
    [ x_i  y_i  0    0    1  0 ] [ m1 m2 m3 m4 t1 t2 ]^T = [ x_i' ]
    [ 0    0    x_i  y_i  0  1 ]                           [ y_i' ]
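The linear system on this slide can be solved directly with ordinary least squares. A minimal NumPy sketch (the function name and the example points are illustrative, not from the slides):

```python
import numpy as np

def fit_affine(src, dst):
    """Fit x' = M x + t (M 2x2, t 2-vector) by linear least squares.

    src, dst: (n, 2) arrays of corresponding points, n >= 3.
    Each match contributes two rows of the 2n x 6 design matrix.
    """
    n = src.shape[0]
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src   # x-rows: m1*x + m2*y + t1
    A[0::2, 4] = 1.0
    A[1::2, 2:4] = src   # y-rows: m3*x + m4*y + t2
    A[1::2, 5] = 1.0
    b = dst.reshape(-1)  # [x1', y1', x2', y2', ...]
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params[:4].reshape(2, 2), params[4:]

# Recover a known transform from exact correspondences
M_true = np.array([[1.2, -0.3], [0.4, 0.9]])
t_true = np.array([5.0, -2.0])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 3.0]])
dst = src @ M_true.T + t_true
M, t = fit_affine(src, dst)
```

With three or more non-collinear matches the system is (over)determined and `lstsq` returns the least-squares estimate.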
Fitting a plane projective transformation
• Homography: plane projective transformation (transformation taking a quad to another arbitrary quad)
Homography
• The transformation between two views of a planar surface
• The transformation between images from two cameras that share the same center
Application: Panorama stitching
Source: Hartley & Zisserman
Fitting a homography
• Recall: homogeneous coordinates
  o Converting to homogeneous image coordinates: (x, y) → (x, y, 1)
  o Converting from homogeneous image coordinates: (x, y, w) → (x/w, y/w)
• Equation for homography:

    [ x_i' ]   [ h11 h12 h13 ] [ x_i ]
    [ y_i' ] ≅ [ h21 h22 h23 ] [ y_i ]
    [ 1    ]   [ h31 h32 h33 ] [ 1   ]
Fitting a homography
• Equation for homography: x_i' ≅ H x_i
  o Unknown scale: x_i' and H x_i are scaled versions of the same vector, so their cross product is zero:

    x_i' × H x_i = 0

• Let h_1^T, h_2^T, h_3^T denote the rows of H (each element is a 3-vector); then

    H x_i = [ h_1^T x_i ]
            [ h_2^T x_i ]
            [ h_3^T x_i ]

    x_i' × H x_i = [ y_i' h_3^T x_i − h_2^T x_i      ]
                   [ h_1^T x_i − x_i' h_3^T x_i      ] = 0
                   [ x_i' h_2^T x_i − y_i' h_1^T x_i ]

• Rearranging, with the rows of H formed into a 9×1 column vector:

    [ 0^T          −x_i^T        y_i' x_i^T ] [ h_1 ]
    [ x_i^T         0^T         −x_i' x_i^T ] [ h_2 ] = 0
    [ −y_i' x_i^T   x_i' x_i^T   0^T        ] [ h_3 ]

• 3 equations, only 2 linearly independent
Direct linear transform
• H has 8 degrees of freedom (9 parameters, but scale is arbitrary)
• One match gives us two linearly independent equations
• Four matches needed for a minimal solution (null space of 8x9 matrix)
• More than four: homogeneous least squares
    [ 0^T      −x_1^T     y_1' x_1^T ] [ h_1 ]
    [ x_1^T     0^T      −x_1' x_1^T ] [ h_2 ] = 0
    [   ⋮         ⋮           ⋮      ] [ h_3 ]
    [ 0^T      −x_n^T     y_n' x_n^T ]
    [ x_n^T     0^T      −x_n' x_n^T ]

i.e. A h = 0, where A has size 2n × 9.
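The DLT can be sketched in a few lines of NumPy: stack the two independent rows per match into A and take the right singular vector with the smallest singular value as the null vector of A. The helper name and test data below are illustrative:

```python
import numpy as np

def fit_homography_dlt(src, dst):
    """Estimate H (3x3, up to scale) from n >= 4 matches via DLT:
    stack the two independent rows per match and take the null vector
    of A as the right singular vector with the smallest singular value."""
    rows = []
    for (x, y), (xp, yp) in zip(src, dst):
        X = np.array([x, y, 1.0])
        rows.append(np.concatenate([np.zeros(3), -X, yp * X]))
        rows.append(np.concatenate([X, np.zeros(3), -xp * X]))
    A = np.asarray(rows)            # (2n, 9)
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)     # h with smallest singular value

# Recover a known homography from exact matches
H_true = np.array([[1.0, 0.2, 3.0], [0.1, 1.1, -2.0], [0.001, 0.002, 1.0]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 3.0]])
proj = np.hstack([src, np.ones((5, 1))]) @ H_true.T
dst = proj[:, :2] / proj[:, 2:]
H = fit_homography_dlt(src, dst)
H /= H[2, 2]                        # fix the arbitrary scale
```

With more than four matches this is exactly the homogeneous least-squares solution: the unit vector minimizing ‖A h‖.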
Robust feature-based alignment
• So far, we’ve assumed that we are given a set of “ground-truth” correspondences (x_i, y_i) ↔ (x_i', y_i') between the two images we want to align
• What if we don’t know the correspondences?
Robust feature-based alignment
• Extract features
• Compute putative matches
Alignment as fitting
• Least Squares
• Total Least Squares
Least Squares Line Fitting
• Data: (x_1, y_1), …, (x_n, y_n)
• Line equation: y_i = m x_i + b
• Find (m, b) to minimize

    E = Σ_{i=1}^{n} (y_i − (m x_i + b))²
Problem with “Vertical” Least Squares
• Not rotation-invariant
• Fails completely for vertical lines
Total Least Squares
• Distance between point (x_i, y_i) and line a x + b y = d (with a² + b² = 1): |a x_i + b y_i − d|
• Find (a, b, d) to minimize the sum of squared perpendicular distances

    E = Σ_{i=1}^{n} (a x_i + b y_i − d)²

• Unit normal: N = (a, b)
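Total least squares has a closed form: the unit normal N = (a, b) is the eigenvector of the scatter matrix of the centered points with the smallest eigenvalue, and the line passes through the centroid. A sketch under that formulation (function name and points are illustrative):

```python
import numpy as np

def fit_line_tls(pts):
    """Total least squares line fit: minimize the sum of squared
    perpendicular distances to a*x + b*y = d with a^2 + b^2 = 1.

    The unit normal N = (a, b) is the eigenvector of the scatter matrix
    of the centered points with the smallest eigenvalue; the line passes
    through the centroid, which gives d.
    """
    centroid = pts.mean(axis=0)
    U = pts - centroid
    eigvals, eigvecs = np.linalg.eigh(U.T @ U)  # ascending eigenvalues
    a, b = eigvecs[:, 0]                        # smallest-eigenvalue vector
    d = a * centroid[0] + b * centroid[1]
    return a, b, d

# Collinear points on y = 2x + 1: perpendicular residuals should vanish
pts = np.array([[0.0, 1.0], [1.0, 3.0], [2.0, 5.0], [3.0, 7.0]])
a, b, d = fit_line_tls(pts)
residuals = pts @ np.array([a, b]) - d
```

Unlike “vertical” least squares, this formulation is rotation-invariant and handles vertical lines.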
Least Squares: Robustness to Noise
• Least squares fit to the red points:
Least Squares: Robustness to Noise
• Least squares fit with an outlier:
Problem: squared error heavily penalizes outliers
Robust Estimators
• General approach: find model parameters θ that minimize

    Σ_i ρ(r_i(x_i, θ); σ)

• r_i(x_i, θ) – residual of the ith point w.r.t. model parameters θ
• ρ – robust function with scale parameter σ
• The robust function ρ behaves like squared distance for small values of the residual u but saturates for larger values of u
Choosing the scale
• Just right: the effect of the outlier is minimized
• Too small: the error value is almost the same for every point and the fit is very poor
• Too large: behaves much the same as least squares
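The slides do not commit to a particular ρ; one common choice with exactly the behavior described (quadratic near zero, saturating for large residuals) is the Geman-McClure function, sketched here:

```python
import numpy as np

def rho(u, sigma):
    """Geman-McClure robust function: ~u^2 for |u| << sigma,
    saturating toward 1 for |u| >> sigma."""
    u2 = u ** 2
    return u2 / (sigma ** 2 + u2)

residuals = np.array([0.1, 1.0, 10.0, 100.0])
losses = rho(residuals, sigma=1.0)
# Small residuals contribute ~u^2; gross outliers contribute at most ~1
```

The scale σ sets the crossover: residuals well below σ behave like squared error, residuals well above σ are capped, which is why choosing σ too small or too large recreates the failure modes above.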
RANSAC
• Robust fitting can deal with a few outliers – what if we have very many?
• Random sample consensus (RANSAC): very general framework for model fitting in the presence of outliers
• Outline
  o Choose a small subset of points uniformly at random
  o Fit a model to that subset
  o Find all remaining points that are “close” to the model and reject the rest as outliers
  o Do this many times and choose the best model
M. A. Fischler, R. C. Bolles. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Comm. of the ACM, Vol 24, pp 381-395, 1981.
RANSAC for line fitting example
• Least-squares fit (for comparison)
• RANSAC loop:
  1. Randomly select minimal subset of points
  2. Hypothesize a model
  3. Compute error function
  4. Select points consistent with model
  5. Repeat hypothesize-and-verify loop (an uncontaminated sample gives a good fit)
Source: R. Raguram
RANSAC for line fitting
• Repeat N times:
  o Draw s points uniformly at random
  o Fit line to these s points
  o Find inliers to this line among the remaining points (i.e., points whose distance from the line is less than t)
  o If there are d or more inliers, accept the line and refit using all inliers
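The loop above can be sketched directly in NumPy. The data generation, parameter values, and function name are illustrative assumptions; the refit-on-all-inliers step is left as a comment:

```python
import numpy as np

def ransac_line(pts, n_iters, t, d, seed=0):
    """RANSAC line fit following the procedure above: draw s = 2 points,
    fit the line through them, count inliers within distance t, and keep
    the hypothesis with the most inliers (if it has at least d)."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        p, q = pts[i], pts[j]
        normal = np.array([q[1] - p[1], p[0] - q[0]])  # perpendicular to q - p
        norm = np.linalg.norm(normal)
        if norm == 0:
            continue  # degenerate sample (coincident points)
        normal /= norm
        dist = np.abs((pts - p) @ normal)  # perpendicular distances to the line
        inliers = dist < t
        if inliers.sum() >= d and (best is None or inliers.sum() > best.sum()):
            best = inliers
    return best  # inlier mask; a final least-squares refit would use these

# Synthetic data: 50 points near y = 2x + 1 plus 20 gross outliers
rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 50)
good = np.column_stack([x, 2 * x + 1 + 0.01 * rng.standard_normal(50)])
pts = np.vstack([good, rng.uniform(-5.0, 5.0, size=(20, 2))])
mask = ransac_line(pts, n_iters=200, t=0.05, d=30)
```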
Choosing the parameters
• Initial number of points s
  o Typically minimum number needed to fit the model
• Distance threshold t
  o Choose t so probability for inlier is p (e.g. 0.95)
  o Zero-mean Gaussian noise with std. dev. σ: t² = 3.84σ²
• Number of samples N
  o Choose N so that, with probability p, at least one random sample is free from outliers (e.g. p = 0.99) (outlier ratio: e)
Source: M. Pollefeys
Choosing the parameters
• Probability that all N samples of size s contain at least one outlier: (1 − (1 − e)^s)^N = 1 − p, so

    N = log(1 − p) / log(1 − (1 − e)^s)

• Required N (for p = 0.99) as a function of sample size s and proportion of outliers e:

    s \ e   5%   10%   20%   25%   30%   40%   50%
    2        2     3     5     6     7    11    17
    3        3     4     7     9    11    19    35
    4        3     5     9    13    17    34    72
    5        4     6    12    17    26    57   146
    6        4     7    16    24    37    97   293
    7        4     8    20    33    54   163   588
    8        5     9    26    44    78   272  1177

Source: M. Pollefeys
Adaptively determining the number of samples
• Inlier ratio e is often unknown a priori, so pick worst case, e.g. 50%, and adapt if more inliers are found, e.g. 80% would yield e=0.2
• Adaptive procedure:
  o N = ∞, sample_count = 0
  o While N > sample_count:
    • Choose a sample and count the number of inliers
    • If inlier ratio is highest of any found so far, set e = 1 − (number of inliers)/(total number of points)
    • Recompute N from e: N = log(1 − p) / log(1 − (1 − e)^s)
    • Increment the sample_count by 1
Source: M. Pollefeys
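The adaptive procedure can be sketched as follows; the list of per-hypothesis inlier counts is a purely illustrative stand-in for the sampling step:

```python
import math

def adaptive_sample_count(inlier_counts, total, p=0.99, s=2):
    """Sketch of the adaptive procedure above. `inlier_counts` simulates
    the inlier count of each successive hypothesis (illustrative input);
    N starts at infinity and shrinks as better inlier ratios are found."""
    N = math.inf
    best = 0
    sample_count = 0
    for count in inlier_counts:
        if sample_count >= N:
            break  # enough samples drawn for the current estimate of e
        if count > best:  # highest inlier ratio found so far
            best = count
            e = 1.0 - count / total
            N = math.log(1.0 - p) / math.log(1.0 - (1.0 - e) ** s)
        sample_count += 1
    return sample_count

# A good hypothesis early on (80/100 inliers -> e = 0.2) collapses N to ~4.5
iters = adaptive_sample_count([10, 80, 75, 70, 72, 60], total=100)
```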
RANSAC pros and cons
• Pros
  o Simple and general
  o Applicable to many different problems
  o Often works well in practice
• Cons
  o Lots of parameters to tune
  o Doesn’t work well for low inlier ratios (too many iterations, or can fail completely)
  o Can’t always get a good initialization of the model based on the minimum number of samples
Alignment as fitting
• Previous lectures: fitting a model M to features x_i in one image
  o Find model M that minimizes Σ_i residual(x_i, M)
• Alignment: fitting a model to a transformation between pairs of features (matches) x_i ↔ x_i' in two images
  o Find transformation T that minimizes Σ_i residual(T(x_i), x_i')
Robust feature-based alignment
• Extract features
• Compute putative matches
• Loop:
  o Hypothesize transformation T
  o Verify transformation (search for other matches consistent with T)
RANSAC
• The set of putative matches contains a very high percentage of outliers
• RANSAC loop:
  1. Randomly select a seed group of matches
  2. Compute transformation from seed group
  3. Find inliers to this transformation
  4. If the number of inliers is sufficiently large, re-compute least-squares estimate of transformation on all of the inliers
• Keep the transformation with the largest number of inliers
RANSAC example: Translation
Putative matches
RANSAC example: Translation
Select one match, count inliers
RANSAC example: Translation
Select translation with the most inliers
Summary
• Detect feature points (to be covered later)
• Describe feature points (to be covered later)
• Match feature points (to be covered later)
• RANSAC
  o Transformation model (affine, homography, etc.)
Advice
• Use the minimum model necessary to define the transformation on your data
  o Prefer affine over homography
  o Prefer homography over a 3D transform (not discussed yet)
  o Make the selection based on your data
• Fewer RANSAC iterations
• Fewer degrees of freedom → less ambiguity or sensitivity to noise
Generating putative correspondences
• Need to compare feature descriptors of the local patches surrounding interest points
Feature descriptors
• Simplest descriptor: vector of raw intensity values
• How to compare two such vectors?
  o Sum of squared differences (SSD)

        SSD(u, v) = Σ_i (u_i − v_i)²

    • Not invariant to intensity change
  o Normalized correlation

        ρ(u, v) = Σ_i (u_i − ū)(v_i − v̄) / ( sqrt(Σ_j (u_j − ū)²) · sqrt(Σ_j (v_j − v̄)²) )

    • Invariant to affine intensity change
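Both measures are one-liners in NumPy; the toy patches below are illustrative and show why normalized correlation survives an affine intensity change while SSD does not:

```python
import numpy as np

def ssd(u, v):
    """Sum of squared differences: SSD(u, v) = sum_i (u_i - v_i)^2."""
    return np.sum((u - v) ** 2)

def ncc(u, v):
    """Normalized correlation: compares mean-centered, length-normalized
    vectors, so it is invariant to affine intensity changes v -> a*v + b."""
    du = u - u.mean()
    dv = v - v.mean()
    return np.sum(du * dv) / (np.sqrt(np.sum(du ** 2)) * np.sqrt(np.sum(dv ** 2)))

u = np.array([1.0, 2.0, 3.0, 4.0])
v = 2.0 * u + 5.0   # same patch under an affine intensity change
# SSD is large, but normalized correlation is still exactly 1
```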
Disadvantage of intensity vectors as descriptors
• Small deformations can affect the matching score a lot
Feature descriptors: SIFT
• Descriptor computation:
  o Divide patch into 4x4 sub-patches
  o Compute histogram of gradient orientations (8 reference angles) inside each sub-patch
  o Resulting descriptor: 4x4x8 = 128 dimensions
• Advantage over raw vectors of pixel values
  o Gradients less sensitive to illumination change
  o Pooling of gradients over the sub-patches achieves robustness to small shifts, but still preserves some spatial information
David G. Lowe. "Distinctive image features from scale-invariant keypoints.” IJCV 60 (2), pp. 91-110, 2004.
Feature matching
• Generating putative matches: for each patch in one image, find a short list of patches in the other image that could match it based solely on appearance
Feature space outlier rejection
• How can we tell which putative matches are more reliable?
• Heuristic: compare the distance of the nearest neighbor to that of the second nearest neighbor
  o The ratio of closest distance to second-closest distance will be high for features that are not distinctive
• A threshold of 0.8 provides good separation
David G. Lowe. "Distinctive image features from scale-invariant keypoints.” IJCV 60 (2), pp. 91-110, 2004.
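The ratio test can be sketched with brute-force nearest neighbors; the toy descriptors below are illustrative (a distinctive match passes, an ambiguous one is rejected):

```python
import numpy as np

def ratio_test(desc1, desc2, ratio=0.8):
    """Keep a putative match only if its nearest neighbor in desc2 is
    much closer than the second nearest (Lowe's ratio test)."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:  # distinctive match
            matches.append((i, int(j1)))
    return matches

# Toy descriptors: row 0 of desc1 has a distinctive match, row 1 is
# ambiguous (two near-identical candidates in desc2) and gets rejected
desc1 = np.array([[1.0, 0.0], [0.0, 1.0]])
desc2 = np.array([[1.0, 0.1], [5.0, 5.0], [0.0, 1.05], [0.0, 0.95]])
matches = ratio_test(desc1, desc2)
```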