Recognition of textures and object classes
Introduction
• Invariant local descriptors => robust recognition of specific objects or scenes
• Recognition of textures and object classes => description of intra-class variation, selection of discriminant features
Examples: texture recognition, car detection
Overview
1. Affine-invariant texture recognition (CVPR’03)
2. A two-layer architecture for texture segmentation and recognition (ICCV’03)
3. Feature selection for object class recognition (ICCV’03)
Affine-invariant texture recognition
• Texture recognition under viewpoint changes and non-rigid transformations
• Use of affine-invariant regions
– invariance to viewpoint changes
– spatial selection => more compact representation, reduction of redundancy in texton dictionary
[A sparse texture representation using affine-invariant regions, S. Lazebnik, C. Schmid and J. Ponce, CVPR 2003]
Overview of the approach
Harris detector + Laplace detector => region extraction => spin-image descriptors => spatial selection (clustering each pixel vs. clustering selected pixels only)
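The spin image mentioned in the pipeline is a two-dimensional histogram of a normalized patch, indexed by distance from the patch centre and by pixel intensity. Below is a minimal sketch, assuming a square, affinely normalized grayscale patch; the bin counts and the hard binning are illustrative choices, not the paper's settings.

```python
import numpy as np

def spin_image(patch, n_dist_bins=10, n_int_bins=10):
    """Minimal spin-image sketch: histogram over (distance, intensity)."""
    patch = patch.astype(float)
    patch = (patch - patch.min()) / (patch.max() - patch.min() + 1e-8)

    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    dist = np.hypot(ys - cy, xs - cx)
    dist = dist / dist.max()                 # normalized distance in [0, 1]

    hist, _, _ = np.histogram2d(dist.ravel(), patch.ravel(),
                                bins=[n_dist_bins, n_int_bins],
                                range=[[0, 1], [0, 1]])
    return hist / hist.sum()                 # descriptor sums to one
```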
Signature and EMD
• Hierarchical clustering => signature: S = { (m_1, w_1), …, (m_k, w_k) }
• Earth mover's distance (EMD)
– robust distance, optimizes the flow between distributions
– can match signatures of different size
– not sensitive to the number of clusters
D(S, S') = Σ_{i,j} f_ij d(m_i, m'_j) / Σ_{i,j} f_ij
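The EMD above can be computed by solving the underlying transportation problem with a generic linear-programming solver. The sketch below assumes signatures given as cluster centres and weights and a Euclidean ground distance; it is illustrative, not the implementation behind the results.

```python
import numpy as np
from scipy.optimize import linprog

def emd(m1, w1, m2, w2):
    """Earth mover's distance between signatures (m1, w1) and (m2, w2).

    Centres are (k, d) arrays, weights are length-k arrays; the optimal
    flow f_ij is found with SciPy's generic LP solver.
    """
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    w1, w2 = np.asarray(w1, float), np.asarray(w2, float)
    k1, k2 = len(w1), len(w2)

    # Ground distance d(m_i, m'_j): Euclidean between cluster centres.
    D = np.linalg.norm(m1[:, None, :] - m2[None, :, :], axis=-1)
    c = D.ravel()                                 # objective: sum_ij f_ij d_ij

    # Row sums of the flow bounded by w1, column sums bounded by w2.
    A_ub = np.zeros((k1 + k2, k1 * k2))
    for i in range(k1):
        A_ub[i, i * k2:(i + 1) * k2] = 1.0
    for j in range(k2):
        A_ub[k1 + j, j::k2] = 1.0
    b_ub = np.concatenate([w1, w2])

    # Total flow equals the smaller of the two total weights.
    A_eq = np.ones((1, k1 * k2))
    b_eq = [min(w1.sum(), w2.sum())]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    flow = res.x
    return float(flow @ c / flow.sum())
```

Because only the smaller total weight has to be transported, signatures of different sizes and different total weights can still be matched, which is what makes EMD a partial match.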
Database with viewpoint changes
20 samples of 10 different textures
Results
Spin images vs. Gabor-like filters
A two-layer architecture
• Texture recognition + segmentation
• Classification of individual regions + spatial layout
[A generative architecture for semi-supervised texture recognition, S. Lazebnik, C. Schmid, J. Ponce, ICCV 2003]
A two-layer architecture
Modeling:
1. Distribution of the local descriptors (affine invariants)
• Gaussian mixture model
• estimation with EM, allows incorporating unsegmented images (see the sketch below)
2. Co-occurrence statistics of sub-class labels over affinely adapted neighborhoods

Segmentation + Recognition:
1. Generative model for initial class probabilities
2. Co-occurrence statistics + relaxation to improve labels
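As a rough illustration of the first layer, one could fit one Gaussian mixture per class over its local descriptors and use the normalized per-class likelihoods as initial region labels. The sub-class count, the uniform class prior and the helper names below are assumptions; the semi-supervised EM over unsegmented images described in the paper is not reproduced here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_first_layer(descriptors_per_class, n_subclasses=10):
    """Fit one Gaussian mixture per class over its local descriptors.

    descriptors_per_class: dict mapping class label -> (n, d) array of
    affine-invariant descriptors from that class's training images.
    """
    return {label: GaussianMixture(n_components=n_subclasses,
                                   covariance_type="full").fit(X)
            for label, X in descriptors_per_class.items()}

def initial_class_probabilities(models, descriptors):
    """Per-region class probabilities (uniform prior), used to initialize
    the labels before the co-occurrence / relaxation stage refines them."""
    labels = sorted(models)
    log_lik = np.column_stack([models[c].score_samples(descriptors)
                               for c in labels])      # (n_regions, n_classes)
    log_lik -= log_lik.max(axis=1, keepdims=True)     # numerical stability
    probs = np.exp(log_lik)
    return labels, probs / probs.sum(axis=1, keepdims=True)
```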
Texture Dataset – Training Images
T1 (brick) T2 (carpet) T3 (chair) T4 (floor 1) T5 (floor 2) T6 (marble) T7 (wood)
Effect of relaxation + co-occurrence
Original image
Top: before relaxation (individual regions), bottom: after relaxation (co-occurrence)
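The relaxation referred to above can be read as a generic relaxation-labeling update: each region's class probabilities are re-weighted by how well they co-occur with the current labels of its neighborhood, then renormalized. The sketch below is that generic update under assumed inputs (neighborhoods and a co-occurrence matrix), not the paper's exact procedure.

```python
import numpy as np

def relax_labels(probs, neighbors, cooccurrence, n_iters=10):
    """Generic relaxation-labeling sketch.

    probs:        (n_regions, n_classes) initial class probabilities
    neighbors:    neighbors[i] = indices of regions in the (affinely
                  adapted) neighborhood of region i, assumed given
    cooccurrence: (n_classes, n_classes) co-occurrence statistics r(c, c')
    """
    p = probs.copy()
    for _ in range(n_iters):
        support = np.empty_like(p)
        for i, nbrs in enumerate(neighbors):
            # Contextual support for each class of region i, given the
            # current label probabilities of its neighbors.
            support[i] = cooccurrence @ p[nbrs].sum(axis=0)
        p = p * support
        p /= p.sum(axis=1, keepdims=True) + 1e-12     # renormalize
    return p
```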
Recognition + Segmentation Examples
Animal Dataset – Training Images
• no manual segmentation, weakly supervised
• 10 training images per animal (with background)
• no purely negative images
Recognition + Segmentation Examples
Object class detection
• Description of intra-class variations of object parts
[Selection of scale invariant regions for object class recognition, G. Dorko and C. Schmid, ICCV’03]
• Selection of discriminant features
Outline of the approach
Clustering of descriptors
• Descriptors are labeled as positive/negative
• Hierarchical clustering of the positive/negative set
• Examples of positive clusters
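One way to obtain such clusters is agglomerative (hierarchical) clustering of the descriptors taken from positive training images, for example with SciPy; the cluster count and the average-link criterion below are illustrative assumptions, not values from the slides.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def cluster_positive_descriptors(descriptors, n_clusters=50):
    """Agglomerative clustering of an (n, d) array of descriptors that
    were labeled positive; returns a dict of cluster id -> member rows."""
    Z = linkage(descriptors, method="average", metric="euclidean")
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")
    return {k: descriptors[labels == k] for k in np.unique(labels)}
```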
Classification
• Learn a separate classifier for each cluster
– Classifier: Support Vector Machine
• Select significant classifiers
– Feature selection with likelihood ratio / mutual information
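Both criteria can be sketched from simple counts of how often each cluster classifier fires on positive and negative training images. The counts, the add-one smoothing and the binary-event formulation below are illustrative assumptions rather than the exact statistics used in the paper.

```python
import numpy as np

def rank_classifiers(fires_pos, n_pos, fires_neg, n_neg, method="likelihood"):
    """Rank per-cluster classifiers by how discriminant they are.

    fires_pos[i] / fires_neg[i]: number of positive / negative training
    images on which classifier i fires (assumed, smoothed counts).
    """
    p_f_pos = (fires_pos + 1.0) / (n_pos + 2.0)    # P(fire | positive)
    p_f_neg = (fires_neg + 1.0) / (n_neg + 2.0)    # P(fire | negative)

    if method == "likelihood":
        # Likelihood ratio: large when a part fires mainly on positives.
        score = p_f_pos / p_f_neg
    else:
        # Mutual information between F = "classifier fires" and
        # C = "image is positive", summed over both values of F.
        p_pos = n_pos / float(n_pos + n_neg)
        p_neg = 1.0 - p_pos
        score = np.zeros_like(p_f_pos)
        for pf_p, pf_n in [(p_f_pos, p_f_neg), (1 - p_f_pos, 1 - p_f_neg)]:
            p_f = pf_p * p_pos + pf_n * p_neg                     # P(F = f)
            score += pf_p * p_pos * np.log(pf_p * p_pos / (p_f * p_pos))
            score += pf_n * p_neg * np.log(pf_n * p_neg / (p_f * p_neg))
    return np.argsort(-score)        # cluster indices, most discriminant first
```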
Top 5, 10 and 25 clusters selected by likelihood vs. mutual information
Summary - Approach
• Automatic construction of object part classifiers
– scale and rotation invariant
– no normalization/alignment of the training and test images
• Selection of discriminant features
– interest points, clustering
– feature selection with likelihood or mutual information
• Comparison of two feature selection methods
– likelihood: more discriminant but very specific
– mutual information: discriminant but not too specific
Material
• PowerPoint presentation and papers will be available at
http://www.inrialpes.fr/movi/people/Schmid/cvpr-tutorial03