
Page 1

Robust and large-scale alignment

Image from http://graphics.cs.cmu.edu/courses/15-463/2010_fall/

Page 2

Robust feature-based alignment

• So far, we’ve assumed that we are given a set of “ground-truth” correspondences between the two images we want to align

• What if we don’t know the correspondences?

(x_i, y_i) ↔ (x_i′, y_i′)

Page 3

Robust feature-based alignment

• So far, we’ve assumed that we are given a set of “ground-truth” correspondences between the two images we want to align

• What if we don’t know the correspondences?

?

Page 4

Robust feature-based alignment

Page 5

Robust feature-based alignment

• Extract features

Page 6

Robust feature-based alignment

• Extract features
• Compute putative matches

Page 7

Robust feature-based alignment

• Extract features
• Compute putative matches
• Loop:
  • Hypothesize transformation T

Page 8

Robust feature-based alignment

• Extract features
• Compute putative matches
• Loop:
  • Hypothesize transformation T
  • Verify transformation (search for other matches consistent with T)

Page 9

Robust feature-based alignment

• Extract features
• Compute putative matches
• Loop:
  • Hypothesize transformation T
  • Verify transformation (search for other matches consistent with T)
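This whole pipeline can be prototyped with off-the-shelf components. The following is a minimal sketch using OpenCV (recent versions of opencv-python), assuming two grayscale images are available on disk; the file names, the ratio-test threshold, and the reprojection threshold are illustrative, and a homography is assumed as the transformation model:

```python
import cv2
import numpy as np

img1 = cv2.imread("image1.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
img2 = cv2.imread("image2.png", cv2.IMREAD_GRAYSCALE)

# 1. Extract features (SIFT keypoints + descriptors)
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# 2. Compute putative matches (nearest neighbors in descriptor space + ratio test)
matcher = cv2.BFMatcher()
knn = matcher.knnMatch(des1, des2, k=2)
putative = [m for m, n in knn if m.distance < 0.8 * n.distance]

# 3. Hypothesize-and-verify loop (RANSAC inside findHomography)
src = np.float32([kp1[m.queryIdx].pt for m in putative]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in putative]).reshape(-1, 1, 2)
T, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=5.0)
print("inliers:", int(inlier_mask.sum()), "of", len(putative))
```

The slides that follow unpack each of these three steps in turn.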

Page 10

Generating putative correspondences

?

Page 11

Generating putative correspondences

• Need to compare feature descriptors of local patches surrounding interest points

feature descriptor( )  =?  feature descriptor( )

Page 12

Feature descriptors

• Recall: covariant detectors => invariant descriptors

Detect regions → Normalize regions → Compute appearance descriptors

Page 13

Feature descriptors

• Simplest descriptor: vector of raw intensity values
• How to compare two such vectors?

• Sum of squared differences (SSD)
  – Not invariant to intensity change

  SSD(u, v) = Σ_i (u_i − v_i)²

• Normalized correlation
  – Invariant to affine intensity change

  ρ(u, v) = Σ_i (u_i − ū)(v_i − v̄) / √( Σ_j (u_j − ū)² · Σ_j (v_j − v̄)² )
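A minimal numpy sketch of these two comparisons; the patch below and the affine intensity change applied to it are just an illustration:

```python
import numpy as np

def ssd(u, v):
    """Sum of squared differences between two intensity vectors."""
    u, v = np.asarray(u, float).ravel(), np.asarray(v, float).ravel()
    return np.sum((u - v) ** 2)

def normalized_correlation(u, v):
    """Normalized correlation; invariant to affine intensity changes a*u + b."""
    u, v = np.asarray(u, float).ravel(), np.asarray(v, float).ravel()
    u = u - u.mean()
    v = v - v.mean()
    return np.sum(u * v) / np.sqrt(np.sum(u ** 2) * np.sum(v ** 2))

# Example: a patch and an affinely rescaled (brighter, higher-contrast) copy of it
patch = np.random.rand(16, 16)
brighter = 2.0 * patch + 10.0
print(ssd(patch, brighter))                     # large, despite identical structure
print(normalized_correlation(patch, brighter))  # ~1.0
```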

Page 14

Disadvantage of intensity vectors as descriptors

• Small deformations can affect the matching score a lot

Page 15

Feature descriptors: SIFT

• Descriptor computation:
  • Divide patch into 4x4 sub-patches
  • Compute histogram of gradient orientations (8 reference angles) inside each sub-patch
  • Resulting descriptor: 4x4x8 = 128 dimensions

David G. Lowe. "Distinctive image features from scale-invariant keypoints.” IJCV 60 (2), pp. 91-110, 2004.

Page 16

Feature descriptors: SIFT

• Descriptor computation (a simplified sketch appears after this slide):
  • Divide patch into 4x4 sub-patches
  • Compute histogram of gradient orientations (8 reference angles) inside each sub-patch
  • Resulting descriptor: 4x4x8 = 128 dimensions

• Advantage over raw vectors of pixel values
  • Gradients less sensitive to illumination change
  • Pooling of gradients over the sub-patches achieves robustness to small shifts, but still preserves some spatial information

David G. Lowe. "Distinctive image features from scale-invariant keypoints.” IJCV 60 (2), pp. 91-110, 2004.
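A simplified sketch of this descriptor computation, assuming a 16x16 normalized grayscale patch. It deliberately omits parts of the real SIFT descriptor (Gaussian weighting, trilinear/soft binning, and the clip-and-renormalize step), so it should be read as an illustration of the 4x4x8 pooling structure, not as Lowe's exact algorithm:

```python
import numpy as np

def sift_like_descriptor(patch):
    """4x4 grid of 8-bin gradient-orientation histograms -> 128-D vector."""
    patch = np.asarray(patch, float)
    gy, gx = np.gradient(patch)              # image gradients (rows = y, cols = x)
    mag = np.hypot(gx, gy)                   # gradient magnitude
    ori = np.arctan2(gy, gx)                 # gradient orientation in (-pi, pi]
    bins = ((ori + np.pi) / (2 * np.pi) * 8).astype(int) % 8   # 8 reference angles

    step = patch.shape[0] // 4               # side length of each of the 4x4 sub-patches
    desc = []
    for i in range(4):
        for j in range(4):
            sl = (slice(i * step, (i + 1) * step), slice(j * step, (j + 1) * step))
            # magnitude-weighted histogram of orientations inside this sub-patch
            hist = np.bincount(bins[sl].ravel(), weights=mag[sl].ravel(), minlength=8)
            desc.append(hist)
    desc = np.concatenate(desc)              # 4 * 4 * 8 = 128 dimensions
    return desc / (np.linalg.norm(desc) + 1e-12)

print(sift_like_descriptor(np.random.rand(16, 16)).shape)   # (128,)
```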

Page 17

Feature matching

?

• Generating putative matches: for each patch in one image, find a short list of patches in the other image that could match it based solely on appearance

Page 18

Feature space outlier rejection

• How can we tell which putative matches are more reliable?
• Heuristic: compare the distance of the nearest neighbor to that of the second nearest neighbor (sketched in code below)
  • The ratio of the closest distance to the second-closest distance will be high for features that are not distinctive

David G. Lowe. "Distinctive image features from scale-invariant keypoints.” IJCV 60 (2), pp. 91-110, 2004.

A ratio threshold of 0.8 provides good separation between correct and incorrect matches
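A minimal sketch of this heuristic. It assumes descriptors are stored as rows of two numpy arrays and compared with Euclidean distance; the 0.8 threshold is the value suggested above:

```python
import numpy as np

def ratio_test_matches(desc1, desc2, ratio=0.8):
    """Return (i, j) index pairs that pass Lowe's ratio test."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)   # distance to every descriptor in image 2
        nn, nn2 = np.argsort(dists)[:2]             # nearest and second-nearest neighbors
        if dists[nn] < ratio * dists[nn2]:          # distinctive: NN much closer than 2nd NN
            matches.append((i, nn))
    return matches
```

In practice the exhaustive distance computation would be replaced by an approximate nearest-neighbor structure, which is exactly the scalability issue addressed later in these slides.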

Page 19

Reading

David G. Lowe. "Distinctive image features from scale-invariant keypoints.”

IJCV 60 (2), pp. 91-110, 2004.

Page 20

RANSAC

• The set of putative matches contains a very high percentage of outliers

RANSAC loop:

1. Randomly select a seed group of matches

2. Compute transformation from seed group

3. Find inliers to this transformation

4. If the number of inliers is sufficiently large, re-compute least-squares estimate of transformation on all of the inliers

Keep the transformation with the largest number of inliers
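A minimal sketch of this loop for the simplest transformation, a pure 2D translation (matching the example on the next slides). The putative matches are given as two (N, 2) point arrays; the iteration count and inlier threshold are illustrative:

```python
import numpy as np

def ransac_translation(pts1, pts2, n_iters=100, thresh=3.0):
    """pts1[i] in image 1 is putatively matched to pts2[i] in image 2."""
    best_t, best_inliers = None, np.array([], dtype=int)
    for _ in range(n_iters):
        i = np.random.randint(len(pts1))              # 1. seed group: one match fixes a translation
        t = pts2[i] - pts1[i]                         # 2. hypothesized transformation
        err = np.linalg.norm(pts1 + t - pts2, axis=1)
        inliers = np.flatnonzero(err < thresh)        # 3. matches consistent with t
        if len(inliers) > len(best_inliers):
            best_t, best_inliers = t, inliers
    if len(best_inliers) > 0:                         # 4. least-squares re-estimate on all inliers
        best_t = (pts2[best_inliers] - pts1[best_inliers]).mean(axis=0)
    return best_t, best_inliers
```

For a translation the least-squares re-estimate is just the mean displacement of the inliers; richer models (affine, homography) would substitute the appropriate minimal seed size and least-squares fit.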

Page 21

RANSAC example: Translation

Putative matches

Page 22

RANSAC example: Translation

Select one match, count inliers

Page 23

RANSAC example: Translation

Select one match, count inliers

Page 24

RANSAC example: Translation

Select translation with the most inliers

Page 25

Scalability: Alignment to large databases

• What if we need to align a test image with thousands or millions of images in a model database?
• Efficient putative match generation
  • Approximate descriptor similarity search, inverted indices

(Figure: test image queried against the model database.)

Page 26

Scalability: Alignment to large databases

• What if we need to align a test image with thousands or millions of images in a model database?
• Efficient putative match generation
  • Fast nearest neighbor search, inverted indexes (a simplified sketch follows after this slide)

D. Nistér and H. Stewénius, Scalable Recognition with a Vocabulary Tree, CVPR 2006

(Figure: the test image is matched to database images through a vocabulary tree with inverted index.)
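A much-simplified sketch of the retrieval side of this idea. It uses a flat visual vocabulary instead of a hierarchical tree (the tree's role is to make the descriptor-to-word quantization fast) and plain vote counting instead of the tf-idf weighting used in the vocabulary-tree paper; all names here are illustrative:

```python
import numpy as np
from collections import defaultdict

def quantize(desc, vocab):
    """Assign each descriptor (row of `desc`) to its nearest visual word in `vocab`."""
    d2 = ((desc[:, None, :] - vocab[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

def build_inverted_index(db_descriptors, vocab):
    """db_descriptors: one (n_i, D) descriptor array per model image."""
    index = defaultdict(set)                    # visual word -> ids of images containing it
    for img_id, desc in enumerate(db_descriptors):
        for w in quantize(desc, vocab):
            index[w].add(img_id)
    return index

def retrieve(test_desc, vocab, index, n_images):
    """Score model images by the number of visual words they share with the test image."""
    votes = np.zeros(n_images)
    for w in quantize(test_desc, vocab):
        for img_id in index[w]:
            votes[img_id] += 1
    return np.argsort(-votes)                   # most promising model images first
```

Only the top-ranked model images then need to go through full putative matching and geometric verification.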

Page 27

Descriptor space (Slide credit: D. Nister)

Page 28

Hierarchical partitioning of descriptor space (vocabulary tree). Slide credit: D. Nister

Page 29

Vocabulary tree/inverted index (Slide credit: D. Nister)

Page 30

Populating the vocabulary tree/inverted index (Slide credit: D. Nister)

Model images

Page 31

Populating the vocabulary tree/inverted index (Slide credit: D. Nister)

Model images

Page 32

Populating the vocabulary tree/inverted index (Slide credit: D. Nister)

Model images

Page 33

Populating the vocabulary tree/inverted index (Slide credit: D. Nister)

Model images

Page 34

Looking up a test image (Slide credit: D. Nister)

Test image
Model images

Page 35

Voting for geometric transformations

• Modeling phase: For each model feature, record the 2D location, scale, and orientation of the model (relative to the normalized feature coordinate frame)

(Figure: model image and its feature index.)

Page 36

Voting for geometric transformations

• Test phase: Each match between a test and model feature votes in a 4D Hough space (2D location, scale, orientation) with coarse bins (a rough sketch follows after this slide)
• Hypotheses receiving some minimal number of votes can be subjected to more detailed geometric verification

(Figure: test image features matched against the indexed model.)
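A rough sketch of this voting scheme. It assumes every feature carries (x, y, scale, orientation) and every putative match pairs one test feature with one model feature; the bin widths are illustrative, and Lowe's scheme additionally votes into neighboring bins to reduce quantization effects:

```python
import numpy as np
from collections import Counter

def hough_vote(matches, loc_bin=50.0, scale_bin=2.0, ori_bin=np.pi / 6):
    """matches: list of (test_feat, model_feat) pairs; each feature is (x, y, scale, ori).
    Each match votes for one coarse (tx, ty, log-scale, orientation) bin."""
    votes = Counter()
    for (qx, qy, qs, qo), (mx, my, ms, mo) in matches:
        s = qs / ms                                        # relative scale
        theta = (qo - mo + np.pi) % (2 * np.pi) - np.pi    # relative orientation, wrapped to (-pi, pi]
        c, si = np.cos(theta), np.sin(theta)
        # translation that maps the model feature's location onto the test feature's location
        tx = qx - s * (c * mx - si * my)
        ty = qy - s * (si * mx + c * my)
        key = (int(tx // loc_bin), int(ty // loc_bin),
               int(np.round(np.log(s) / np.log(scale_bin))),
               int(np.round(theta / ori_bin)))
        votes[key] += 1
    return votes.most_common()      # bins with many votes go on to detailed verification
```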

Page 37

Indexing with geometric invariants

• When we don’t have feature descriptors, we can take n-tuples of neighboring features and compute invariant features from their geometric configurations (a sketch follows at the end of this slide)

• Application: searching the sky: http://www.astrometry.net/

(Figure: a quad of neighboring features A, B, C, D.)
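A sketch of one such invariant, loosely in the spirit of the astrometry.net “quad” features: take four neighboring points, use the two most widely separated ones as a reference pair, and describe the other two in the coordinate frame that this pair defines. The resulting 4-number code is unchanged by translation, rotation, and uniform scaling (conventions such as symmetry breaking and the exact frame used by astrometry.net are omitted):

```python
import numpy as np

def quad_code(pts):
    """Similarity-invariant code for 4 points; pts is a (4, 2) array of (x, y) positions."""
    pts = np.asarray(pts, float)
    # choose the most widely separated pair as the reference points A and B
    dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    a, b = np.unravel_index(dist.argmax(), dist.shape)
    c, d = [i for i in range(4) if i not in (a, b)]
    # map A -> 0 and B -> 1+1j with a similarity transform, then read off C and D
    A, B = complex(*pts[a]), complex(*pts[b])
    zc = (complex(*pts[c]) - A) / (B - A) * (1 + 1j)
    zd = (complex(*pts[d]) - A) / (B - A) * (1 + 1j)
    return np.array([zc.real, zc.imag, zd.real, zd.imag])

# Translating, rotating, or uniformly scaling all four points leaves the code unchanged,
# so it can be used directly as a key into an index of the model (or of the sky).
```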