
Pose, Illumination and Expression Invariant Pairwise Face-Similarity Measure via Doppelganger List Comparison

Authors: Florian Schroff, Tali Treibitz, David Kriegman, Serge Belongie

Speaker: Xin Liu

Outline

Authors Paper Information Abstract Motivation Methods Experiment Conclusion


Authors(1/4)

Florian Schroff :

Authors(2/4)

Tali Treibitz: Background: Ph.D. student in the Dept. of Electrical Engineering, Technion. Publications: three CVPR papers, one PAMI paper

Authors(3/4)

David J. Kriegman: Background:

Professor of Computer Science & Engineering, UCSD. Adjunct Professor of Computer Science and the Beckman Institute, UIUC. Editor-in-Chief, IEEE Transactions on Pattern Analysis & Machine Intelligence, 2005-2009

Authors(4/4)

Serge J. Belongie: Background:

Professor

Computer Science and Engineering

University of California, San Diego

Outline

Authors Paper Information Abstract Motivation Methods Experiment Conclusion

Paper Information

Venue: ICCV 2011

Related work:

Chunhui Zhu, Fang Wen, Jian Sun. A Rank-Order Distance Based Clustering Algorithm for Face Tagging. CVPR 2011

Lior Wolf, Tal Hassner, Yaniv Taigman. The One-Shot Similarity Kernel. ICCV 2009

N. Kumar, A. C. Berg, P. N. Belhumeur, S. K. Nayar. Attribute and Simile Classifiers for Face Verification. CVPR 2009

Outline

Authors Paper Information Abstract Motivation Methods Experiment Conclusion

Abstract(1/2)

Face recognition approaches have traditionally focused on direct comparisons between aligned images, e.g. using pixel values or local image features. Such comparisons become prohibitively difficult when comparing faces across extreme differences in pose, illumination and expression.

To this end we describe an image of a face by an ordered list of identities from a Library. The order of the list is determined by the similarity of the Library images to the probe image. The lists act as a signature for each face image: similarity between face images is determined via the similarity of the signatures.

Abstract(2/2)

Here the CMU Multi-PIE database, which includes images of 337 individuals in more than 2000 pose, illumination and expression combinations, serves as the Library.

We show improved performance over state of the art face-similarity measures based on local features, such as FPLBP, especially across large pose variations on FacePix and Multi-PIE. On LFW we show improved performance in comparison with measures like SIFT (on fiducials), LBP, FPLBP and Gabor (C1).

Outline

Authors Paper Information Abstract Motivation Methods Experiment Conclusion

Motivation

Learn a new distance metric D’

Outline

Authors Paper Information Abstract Motivation Methods Experiment Conclusion

Methods—Overview

Methods-Assumption

This approach stems from the observation that ranked Doppelganger lists are similar for similar people (even under different imaging conditions).

Methods-Set up Face database

Using Multi-PIE as the Face Library:

Methods-Finding Alike

Calculating the list:

Methods-Compare List

Calculating similarity:
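The two steps above are left as figures on the slides. A minimal numpy sketch of the idea, with illustrative function names and a simple rank-displacement score standing in for the paper's list-comparison measure: rank the Library identities by distance to each probe, then score how similarly two probes order those identities.

```python
import numpy as np

def doppelganger_list(probe_feat, library_feats, library_ids):
    """Rank Library identities by their distance to the probe image.

    probe_feat: (d,) feature vector of the probe face.
    library_feats: (n, d) feature vectors of the Library images.
    library_ids: (n,) identity label of each Library image.
    Returns identities ordered from most to least similar, keeping
    only each identity's best-matching Library image.
    """
    dists = np.linalg.norm(library_feats - probe_feat, axis=1)
    best = {}  # identity -> smallest distance over its Library images
    for ident, d in zip(library_ids, dists):
        if ident not in best or d < best[ident]:
            best[ident] = d
    return [ident for ident, _ in sorted(best.items(), key=lambda kv: kv[1])]

def list_similarity(list_a, list_b, k=None):
    """Compare two Doppelganger lists by negative total rank
    displacement over the top-k entries: identities ranked similarly
    in both lists give a score near zero (the maximum)."""
    k = k or len(list_a)
    rank_b = {ident: r for r, ident in enumerate(list_b)}
    disp = sum(abs(r - rank_b[ident]) for r, ident in enumerate(list_a[:k]))
    return -disp
```

The lists act as signatures, so two probes are compared only through how they order the Library identities, never through their raw pixels or features, which is what buys the pose/illumination/expression invariance.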

Outline

Authors Paper Information Abstract Motivation Methods Experiment Conclusion

Experiment on FacePix(across pose)

Experiment- Verification Across Large Variations of Pose

Experiment- on Multi-PIE

The classification performance using ten-fold cross-validation is 76.6% ± 2.0% (both FPLBP and SSIM on direct image comparison perform near chance). To the best of our knowledge these are the first results reported across all pose, illumination and expression conditions on Multi-PIE.
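A mean ± std figure like this comes from averaging held-out accuracy over folds. The sketch below is a generic illustration of that protocol (not the authors' code): pairs get a similarity score, each fold picks the best score threshold on its training split and applies it to the held-out split.

```python
import numpy as np

def ten_fold_accuracy(scores, labels, n_splits=10, seed=0):
    """Ten-fold verification accuracy for a pairwise similarity score.

    scores: similarity score per face pair; labels: True if the pair is
    a genuine match. Each fold tunes a decision threshold on its
    training pairs, then evaluates on its held-out pairs.
    Returns (mean, std) of held-out accuracy over the folds, in percent.
    """
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    idx = np.random.default_rng(seed).permutation(len(scores))
    folds = np.array_split(idx, n_splits)
    accs = []
    for k in range(n_splits):
        te = folds[k]
        tr = np.concatenate([folds[j] for j in range(n_splits) if j != k])
        # candidate thresholds: the training scores themselves
        cands = np.sort(scores[tr])
        acc_tr = [((scores[tr] > t) == labels[tr]).mean() for t in cands]
        t_best = cands[int(np.argmax(acc_tr))]
        accs.append(((scores[te] > t_best) == labels[te]).mean())
    return 100 * float(np.mean(accs)), 100 * float(np.std(accs))
```

On perfectly separable scores this reports 100% with zero spread; on real data the std term is what produces the "± 2.0%" above.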

Experiment on LFW ( 1/2)

Experimental results on LFW

Experiment on LFW ( 2/2)

Outline

Authors Paper Information Abstract Motivation Methods Experiment Conclusion

Conclusion(1/2)

To the best of our knowledge, we have shown the first verification results for face-similarity measures under truly unconstrained expression, illumination and pose, including full profile, on both Multi-PIE and FacePix.

The advantages of the ranked Doppelganger lists become apparent when the two probe images depict faces in very different poses. Our method does not require explicit training and is able to cope with large pose ranges.

It is straightforward to generalize our method to an even larger variety of imaging conditions, by adding further examples to the Library. No change in our algorithm is required, as its only assumption is that the imaging conditions are represented in the Library.

Conclusion(2/2)

We expect that a great deal of improvement can be achieved by using this powerful comparison method as an additional feature in a complete verification or recognition pipeline, where it can add the robustness that is required for face recognition across large pose ranges. Furthermore, we are currently exploring the use of ranked lists of identities in other classification domains.

Thanks for listening

Xin Liu

Relative Attributes

Authors: Devi Parikh, Kristen Grauman

Speaker: Xin Liu

Outline

Authors Paper Information Abstract Motivation Methods Experiment Conclusion


Authors(1/2)

Devi Parikh : http://ttic.uchicago.edu/~dparikh/ Background: Research Assistant Professor at Toyota Technological Institute at Chicago (TTIC) Publication:

L. Zitnick and D. Parikh. The Role of Image Understanding in Segmentation. CVPR 2012 (to appear)

D. Parikh and L. Zitnick. Exploring Tiny Images: The Roles of Appearance and Contextual Information for Machine and Human Object Recognition. PAMI 2012 (to appear)

Many papers in top conferences and journals.

Authors(2/2)

Kristen Grauman: http://www.cs.utexas.edu/~grauman/ Background: Clare Boothe Luce Assistant Professor

Microsoft Research New Faculty Fellow

Department of Computer Science, University of Texas at Austin. Publications: many CVPR and ICCV papers…

Outline

Authors Paper Information Abstract Motivation Methods Experiment Conclusion

Paper Information

Venue: ICCV 2011 (Oral)

Award: Marr Prize!

Outline

Authors Paper Information Abstract Motivation Methods Experiment Conclusion

Abstract(1/2)

Human-nameable visual “attributes” can benefit various recognition tasks. However, existing techniques restrict these properties to categorical labels (for example, a person is ‘smiling’ or not, a scene is ‘dry’ or not), and thus fail to capture more general semantic relationships.

We propose to model relative attributes. Given training data stating how object/scene categories relate according to different attributes, we learn a ranking function per attribute. The learned ranking functions predict the relative strength of each property in novel images.

Abstract(2/2)

We then build a generative model over the joint space of attribute ranking outputs, and propose a novel form of zero-shot learning in which the supervisor relates the unseen object category to previously seen objects via attributes (for example, ‘bears are furrier than giraffes’).

We further show how the proposed relative attributes enable richer textual descriptions for new images, which in practice are more precise for human interpretation. We demonstrate the approach on datasets of faces and natural scenes, and show its clear advantages over traditional binary attribute prediction for these new tasks.

Outline

Authors Paper Information Abstract Motivation Methods Experiment Conclusion

Motivation

However, for a large variety of attributes, not only is this binary setting restrictive, but it is also unnatural.

Why model relative attributes?

Outline

Authors Paper Information Abstract Motivation Methods Experiment Conclusion

Methods—Formulation(1/3)

Ranking functions:

For each attribute (e.g. 'open'):

Methods—Formulation(2/3)

Objective Function:

Compared to SVM:

Methods—Formulation(3/3)

Margin and support vectors

$w_m^T x + b = 0$

Geometric margin:
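The objective function sketched on these slides is the paper's RankSVM-style formulation: learn $w_m$ so that $w_m^T x_i > w_m^T x_j$ for every training pair where image $i$ shows more of attribute $m$, with a max-margin penalty on violated pairs. A minimal numpy sketch via subgradient descent on the hinge loss (the paper uses a Newton-method ranking solver; the function name and hyperparameters here are illustrative):

```python
import numpy as np

def learn_ranker(X, ordered_pairs, C=1.0, lr=0.05, epochs=300):
    """RankSVM-style linear ranking function r(x) = w^T x.

    Minimizes 0.5*||w||^2 + C * sum_{(i,j)} max(0, 1 - w^T (x_i - x_j)),
    where each pair (i, j) says image i has MORE of the attribute than
    image j, so its score difference should exceed the unit margin.
    """
    X = np.asarray(X, dtype=float)
    diffs = np.array([X[i] - X[j] for i, j in ordered_pairs])
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        viol = diffs @ w < 1.0             # pairs violating the margin
        grad = w - C * diffs[viol].sum(axis=0)
        w -= lr * grad
    return w
```

The learned $w_m$ then scores any novel image by `x @ w`, giving the relative strength of the attribute rather than a binary yes/no.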

Methods- Zero-Shot Learning From Relationships (1/3)

Overview:

Methods- Zero-Shot Learning From Relationships (2/3)

Image representation:

Methods- Zero-Shot Learning From Relationships (3/3)

Generative model:

Methods- Describing Images in Relative Terms (1/2)

How to describe?

Methods- Describing Images in Relative Terms (2/2)

E.g.

Relative (ours):

More natural than tallbuilding

Less natural than forest

More open than tallbuilding

Less open than coast

Has more perspective than tallbuilding

Binary (existing):

Not natural

Not open

Has perspective
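Descriptions like the example above can be produced mechanically: compare the image's rank score on each attribute with the category means and name the nearest category on either side. An illustrative numpy sketch (the function name and the category scores in the usage below are made up, not the authors' code):

```python
import numpy as np

def describe(x_ranks, cat_means, attr_names):
    """Relative textual description of an image.

    For each attribute, find the closest category whose mean rank score
    is below the image's score ("more <attr> than ...") and the closest
    one above it ("less <attr> than ...").
    """
    phrases = []
    for m, attr in enumerate(attr_names):
        s = x_ranks[m]
        below = {c: mu[m] for c, mu in cat_means.items() if mu[m] < s}
        above = {c: mu[m] for c, mu in cat_means.items() if mu[m] > s}
        if below:
            nearest = min(below, key=lambda c: s - below[c])
            phrases.append(f"more {attr} than {nearest}")
        if above:
            nearest = min(above, key=lambda c: above[c] - s)
            phrases.append(f"less {attr} than {nearest}")
    return phrases
```

Pinning the image between two reference categories per attribute is what makes the description more precise than a binary "natural / not natural" label.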

Outline

Authors Paper Information Abstract Motivation Methods Experiment Conclusion

Experiment-Overview(1/2)

OSR and PubFig:

Experiment-Overview(2/2)

Baseline:

Experiment- Relative zero-shot Learning(1/4)

[Figure: zero-shot accuracy of the proposed relative attributes vs. binary attributes and a classifier-score baseline]

How does performance vary with more unseen categories? With no unseen categories this is the classical recognition problem, where binary and relative supervision perform similarly.

Experiment- Relative zero-shot Learning(2/4)

Baseline: supervision can give a unique ordering on all classes.

Experiment- Relative zero-shot Learning(3/4)


Experiment- Relative zero-shot Learning(4/4)

Relative attributes jointly carve out space for unseen category


Experiment-Human study(2/2)

18 subjects

Test cases: 10 OSR, 20 PubFig


Outline

Authors Paper Information Abstract Motivation Methods Experiment Conclusion

Conclusion

We introduced relative attributes, which allow for a richer language of supervision and description than the commonly used categorical (binary) attributes. We presented two novel applications: zero-shot learning based on relationships and describing images relative to other images or categories. Through extensive experiments as well as a human subject study, we clearly demonstrated the advantages of our idea. Future work includes exploring more novel applications of relative attributes, such as guided search or interactive learning, and automatic discovery of relative attributes.

Thanks for listening

Xin Liu
