A Statistical Approach to Speed Up Ranking/Re-Ranking
Hong-Ming Chen ([email protected])
Advisor: Professor Shih-Fu Chang
Outline
• Flow chart of the overall work
• The idea of using a statistical approach for re-ranking
  – By feature-location relationships: O(n^2) time complexity
  – By orientation relationships: O(n) time complexity
  – Re-rank accuracy as good as RANSAC
• Experimental result evaluation
Flow Chart 1 – ranking components construction
Dataset: Ukbench [1]
[1] D. Nistér and H. Stewénius. Scalable recognition with a vocabulary tree. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, pages 2161-2168, June 2006.[2] http://www.vlfeat.org/
Code Book
Hierarchical k-means [1][2]
Bag of Word histograms of the database images
Query image
Bag of Word histogram of the query image
Respond top-N result
Flow Chart 2 – re-ranking components construction
Respond top-N result
Re-rank by RANSAC [3]
[3] http://www.csse.uwa.edu.au/~pk/research/matlabfns/, Peter Kovesi, Centre for Exploration Targeting, School of Earth and Environment, The University of Western Australia
Re-rank by proposed statistical approach
Result evaluation
1. Feature Locations Relationship
• SIFT features [4] are:
  – Invariant to translation, rotation, and scaling
  – Partially invariant to local geometric distortion
• For an ideal similar image pair:
  – Only translation, rotation, and scaling differ
  – The ratio of corresponding distance pairs should be constant
[4] David G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, 60, 2 (2004)
[Figure: matched points P1a → P1b and P2a → P2b across Image A and Image B, with dist1 = |P1a P2a| in Image A and dist2 = |P1b P2b| in Image B]

dist1 / dist2 = constant (the scaling factor between the two images)
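The constant-ratio property can be checked numerically. Below is a minimal sketch with toy matched coordinates (not data from the slides), where image B is image A scaled by 2 and translated:

```python
import numpy as np

# Toy matched keypoint locations: pts_a[i] in image A corresponds to
# pts_b[i] in image B; here B is a pure scaling (x2) plus a translation of A.
pts_a = np.array([[10.0, 20.0], [40.0, 25.0], [15.0, 60.0]])
pts_b = pts_a * 2.0 + np.array([5.0, -3.0])

# For every pair of matches, the ratio of the inter-point distance in B to
# the corresponding distance in A equals the (constant) scaling factor.
ratios = []
for i in range(len(pts_a)):
    for j in range(i + 1, len(pts_a)):
        dist_a = np.linalg.norm(pts_a[i] - pts_a[j])
        dist_b = np.linalg.norm(pts_b[i] - pts_b[j])
        ratios.append(dist_b / dist_a)
print(ratios)  # every ratio is 2.0
```

Translation drops out because only point-to-point distances are compared, which is exactly why the ratio isolates the scaling.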
1. Feature Locations Relationship
• For a similar image pair with a viewing-angle difference:
  – Translation, rotation, and scaling, plus local geometric distortion and some wrong feature-point matches
  – The ratio of corresponding distance pairs is only nearly constant:

dist1 / dist2 ≈ constant
Example
ukbench00000 ukbench00001
Mean = 0.85, variance = 0.017, over 554 matched points in total.
The mean reflects the scaling factor; the variance reflects matching error (the smaller, the better).
1. Feature Locations Relationship
• Assumption after observation:
  – A similar image pair gives a ratio distribution with small variance
  – A dissimilar image pair gives a ratio distribution with large variance
Analysis of feature-location relationships
• Relationship between the number of match pairs and the average variance, for similar versus dissimilar image pairs
[Plot: red = dissimilar image pairs, blue = similar image pairs]
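The O(n^2) statistic described above can be sketched as follows; the point sets and the 0.85 scale are illustrative, not data from the experiments:

```python
import itertools
import numpy as np

def scale_ratio_stats(pts_a, pts_b):
    """Mean and variance of the pairwise distance-ratio distribution between
    two matched point sets; O(n^2) in the number of matches. A small variance
    suggests one consistent global scaling, i.e. a similar image pair."""
    ratios = []
    for i, j in itertools.combinations(range(len(pts_a)), 2):
        da = np.linalg.norm(pts_a[i] - pts_a[j])
        if da > 1e-9:  # skip degenerate (coincident) point pairs
            ratios.append(np.linalg.norm(pts_b[i] - pts_b[j]) / da)
    r = np.asarray(ratios)
    return r.mean(), r.var()

rng = np.random.default_rng(0)
pts_a = rng.uniform(0, 100, size=(30, 2))
# Similar pair: image B is image A scaled by 0.85 -> variance near zero.
_, var_similar = scale_ratio_stats(pts_a, 0.85 * pts_a)
# Dissimilar pair: unrelated locations -> large variance.
_, var_dissimilar = scale_ratio_stats(pts_a, rng.uniform(0, 100, size=(30, 2)))
```

The gap between the two variances is what the re-ranking threshold would exploit.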
2. Feature Orientation Relationship
• SIFT features [4] are:
  – Invariant to translation, rotation, and scaling
  – Partially invariant to local geometric distortion
• For similar image pairs:
  – The rotation angle from P1a to P1b should EQUAL the rotation angle from P2a to P2b
[Figure: matched points P1a → P1b and P2a → P2b across Image A and Image B]
Example
ukbench00000 vs. ukbench00001
The orientation histogram is shifted by about π/4 (a rotation of about 50°); the distance is measured by histogram intersection.
2. Feature orientation Relationship
• Assumption after observation:
  – A similar image pair gives a small orientation-histogram distance
  – A dissimilar image pair gives a large orientation-histogram distance
Analysis of feature-orientation relationships
• Relationship between the number of match pairs and the average orientation-intersection difference, for similar versus dissimilar image pairs
[Plot: red = dissimilar image pairs, blue = similar image pairs]
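One way to realize the O(n) orientation criterion is sketched below: histogram the per-match rotation angle (SIFT orientation in B minus in A) and score its concentration via intersection with an ideal single-peak histogram. The bin count of 36 and this particular scoring choice are assumptions for illustration, not values from the slides:

```python
import numpy as np

def rotation_consistency_distance(ori_a, ori_b, bins=36):
    """Histogram the per-match rotation angle, wrapped to [0, 2*pi), and
    return 1 - (largest bin fraction): 0 when every match agrees on a single
    rotation, near 1 when rotations are random. O(n) in the matches."""
    rot = np.mod(np.asarray(ori_b) - np.asarray(ori_a), 2 * np.pi)
    hist, _ = np.histogram(rot, bins=bins, range=(0, 2 * np.pi))
    hist = hist / max(hist.sum(), 1)
    # Intersection of hist with a delta histogram placed at its own peak.
    return 1.0 - hist.max()

rng = np.random.default_rng(1)
ori_a = rng.uniform(0, 2 * np.pi, 300)
# Similar pair: every correct match rotates by ~pi/4 (plus small noise).
d_sim = rotation_consistency_distance(ori_a, ori_a + np.pi / 4 + rng.normal(0, 0.02, 300))
# Dissimilar pair: rotation angles are essentially random.
d_diff = rotation_consistency_distance(ori_a, rng.uniform(0, 2 * np.pi, 300))
```

A peaked rotation histogram yields a small distance, matching the assumption stated above for similar pairs.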
Why zoom in on the small-match-count portion of the diagrams?
Dataset and features discussion
• Ukbench dataset analysis:
  – 2550 classes, 4 images per class
  – Similar image-pair combinations: C(4, 2) × 2550 = 15300 pairs
• A high percentage of similar image pairs have only a small number of match points (with the default ratio-test value of 0.6).
• The re-ranking criterion should therefore perform especially well when only a few match points are available.
Match points #   Accumulated # / %        Match points #   Accumulated # / %
0                602  /  3.9%             6                3278 / 21.4%
1                1190 /  7.8%             7                3555 / 23.2%
2                1733 / 11.3%             8                3812 / 24.9%
3                2236 / 14.6%             9                4046 / 26.4%
4                2613 / 17.1%             10               4297 / 28.1%
5                2982 / 19.5%             20               5934 / 38.8%
Comparison of the two re-ranking approaches

                 Similar image pairs                         Dissimilar image pairs
Match            scaling-distribution   orientation-hist.    scaling-distribution    orientation-hist.
point #          variance               difference           variance                difference
                 mean      var          mean    var          mean       var              mean    var
3                32.236    90377.651    0.602   0.027        498.641    55598860.926     0.610   0.035
4                79.073    1028033.066  0.604   0.029        772.344    266541945.753    0.641   0.030
5                198.830   7229360.856  0.595   0.019        882.084    205772324.251    0.657   0.025
10               27.822    26219.275    0.609   0.011        1937.780   303821731.998    0.685   0.024
overall          18.207    235756.236   0.422   0.032        495.669    92963999.421     0.614   0.030
Comparison of the two re-ranking approaches (continued)

The variance of the scaling-distribution variance is high, even though its mean is quite distinctive.
Comparison of the two re-ranking approaches (continued)

The variance of the orientation-histogram difference is very small (relative to its mean) and stable.
Comparison of the two re-ranking approaches (continued)

Overall, the orientation-histogram difference separates similar from dissimilar image pairs clearly, because the gap between the mean values is large while the variances are quite small.
Comparison of the two re-ranking approaches (continued)

With more than 5 match points, the orientation-histogram difference can roughly separate similar and dissimilar image pairs.
Comparison of the two re-ranking approaches (continued)

With more than 10 match points, the orientation-histogram difference separates similar and dissimilar image pairs clearly.
Experimental results discussion
• 1. The impact of K (number of cluster centers):

                   K=1000   K=4096   K=10000   K=50625   K=100000
Recall = 1 (33%)   0.722    0.758    0.781     0.818     0.808
Recall = 2 (66%)   0.544    0.585    0.614     0.640     0.645
Recall = 3 (100%)  0.360    0.401    0.431     0.459     0.460
• 2. The impact of looking up the code book in different ways:
  – A. Tracing the vocabulary tree [1]: efficient, but the result is not optimal
  – B. Scanning the whole code book: very slow, but guarantees an optimal BoW assignment with respect to the K centers

                   K=1000 (tree)   K=1000 (direct)   K=10000 (tree)   K=10000 (direct)
Recall = 1 (33%)   0.722           0.750             0.781            0.815
Recall = 2 (66%)   0.544           0.575             0.614            0.658
Recall = 3 (100%)  0.360           0.390             0.431            0.470
K=1000, re-rank depth = 20:

                   Ground truth   Rotation   Scale var + rotation   RANSAC   Scale var   Original
Recall = 1 (33%)   0.837          0.782      0.780                  0.773    0.754       0.722
Recall = 2 (66%)   0.664          0.600      0.600                  0.591    0.583       0.544
Recall = 3 (100%)  0.455          0.407      0.404                  0.401    0.398       0.360
K=50625, re-rank depth = 20:

                   Ground truth   Rotation   Scale var + rotation   RANSAC   Scale var   Original
Recall = 1 (33%)   0.921          0.849      0.846                  0.845    0.813       0.818
Recall = 2 (66%)   0.769          0.688      0.685                  0.675    0.665       0.640
Recall = 3 (100%)  0.557          0.502      0.497                  0.493    0.487       0.459
Experimental results – all (re-rank depth = 20)

Distribution       K=1000   K=4096   K=10000   K=50625
Recall = 1 (33%)   0.754    0.780    0.799     0.813
Recall = 2 (66%)   0.583    0.617    0.647     0.665
Recall = 3 (100%)  0.398    0.430    0.456     0.487

Rotation           K=1000   K=4096   K=10000   K=50625
Recall = 1 (33%)   0.782    0.810    0.827     0.849
Recall = 2 (66%)   0.600    0.635    0.665     0.688
Recall = 3 (100%)  0.407    0.441    0.469     0.502

Original           K=1000   K=4096   K=10000   K=50625   K=100000
Recall = 1 (33%)   0.722    0.758    0.781     0.818     0.808
Recall = 2 (66%)   0.544    0.585    0.614     0.640     0.645
Recall = 3 (100%)  0.360    0.401    0.431     0.459     0.460

RANSAC             K=1000   K=4096   K=10000   K=50625
Recall = 1 (33%)   0.773    0.803    0.821     0.845
Recall = 2 (66%)   0.591    0.628    0.656     0.675
Recall = 3 (100%)  0.401    0.435    0.463     0.493

Ground truth       K=1000   K=4096   K=10000   K=50625
Recall = 1 (33%)   0.837    0.869    0.900     0.921
Recall = 2 (66%)   0.664    0.704    0.733     0.769
Recall = 3 (100%)  0.455    0.493    0.526     0.557

Dist + Rotation    K=1000   K=4096   K=10000   K=50625
Recall = 1 (33%)   0.780    0.808    0.826     0.846
Recall = 2 (66%)   0.600    0.633    0.662     0.685
Recall = 3 (100%)  0.404    0.438    0.465     0.497
Time Complexity Analysis
• RANSAC: O(Kn)
  – K: number of random subsets tried
  – n: input data size
  – No upper bound on the time needed to compute the parameters
• Distribution of feature-location distance ratios:
  – O(n^2): the distribution consists of all pairwise distance ratios
  – O(n): when n (the number of match points) is large enough, we can subsample a "reliable enough" number of pairs to form the distribution
• Distance between orientation histograms of matched SIFT features:
  – O(n) to generate the rotation-angle histogram of the matched SIFT features
  – Constant time to compute each rotation angle
  – Only a little overhead on top of finding the match points
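The subsampling idea for bringing the distance-ratio distribution down to O(n) could look like the sketch below; `n_pairs` and all input data are illustrative assumptions, not values from the slides:

```python
import numpy as np

def scale_ratio_variance_sampled(pts_a, pts_b, n_pairs=200, seed=0):
    """Approximate the distance-ratio distribution by sampling a fixed number
    of random match pairs instead of enumerating all O(n^2) combinations, so
    the cost stays linear-ish in the number of matches for large n."""
    rng = np.random.default_rng(seed)
    n = len(pts_a)
    ratios = []
    while len(ratios) < n_pairs:
        i, j = rng.integers(0, n, size=2)
        if i == j:
            continue
        da = np.linalg.norm(pts_a[i] - pts_a[j])
        if da > 1e-9:  # skip degenerate pairs
            ratios.append(np.linalg.norm(pts_b[i] - pts_b[j]) / da)
    return float(np.var(ratios))

rng = np.random.default_rng(1)
pts_a = rng.uniform(0, 100, size=(500, 2))
var_sim = scale_ratio_variance_sampled(pts_a, 0.85 * pts_a)               # near 0
var_dis = scale_ratio_variance_sampled(pts_a, rng.uniform(0, 100, size=(500, 2)))
```

With 500 matches, 200 sampled pairs replace the 124,750 pairs the full O(n^2) distribution would require, while preserving the small-versus-large variance gap.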
Future work
• We have:
  – 1. Scale information
  – 2. Orientation information
  – 3. Translation, which is trivial to find
  – Together: a good initial guess for precise homography estimation?
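Those three quantities can be packed into a similarity transform to seed a precise homography refinement; the sketch below is hypothetical, and all names and inputs are illustrative:

```python
import numpy as np

def similarity_from_stats(scale, theta, translation):
    """Assemble a 3x3 similarity transform from the quantities the re-ranking
    statistics already estimate: scale (e.g. the mean distance ratio), theta
    (e.g. the peak of the rotation-angle histogram), and a translation.
    One possible initial guess for iterative homography estimation."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[scale * c, -scale * s, translation[0]],
                     [scale * s,  scale * c, translation[1]],
                     [0.0,        0.0,       1.0]])

# Example: scale 0.85, rotation pi/4, translation (5, -3) -- toy values.
H0 = similarity_from_stats(0.85, np.pi / 4, (5.0, -3.0))
p = H0 @ np.array([10.0, 20.0, 1.0])  # a point mapped by the initial guess
```

A refinement stage could then optimize the full 8-DoF homography starting from `H0` rather than from scratch.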
• Apply the current approach to quantized SIFT features:
  – Use a code word to represent an interest point, rather than the full 128-dimensional descriptor
  – Move from an exact one-to-one mapping to a many-to-many mapping
  – I have tried to solve this problem, but there are no satisfying results at this stage.
Reference
• [1] D. Nistér and H. Stewénius. Scalable recognition with a vocabulary tree. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, pages 2161–2168, June 2006.
• [2] http://www.vlfeat.org/
• [3] http://www.csse.uwa.edu.au/~pk/research/matlabfns/, Peter Kovesi, Centre for Exploration Targeting, School of Earth and Environment, The University of Western Australia.
• [4] David G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, 60(2), 2004.