CRF-based Activity Recognition on Manifolds
Presented by Arshad Jamal and Prakhar Banga
Introduction
• Objective: To explore STIP-based activity analysis by finding a manifold structure for the HoG-HoF descriptor, reducing the dimensionality of the data, and learning an HCRF-based discriminative classifier to classify actions
• Challenges: Huge diversity in the data (view-points, appearance, motion, lighting, etc.)
• Applications: video surveillance, video indexing and retrieval, and human-computer interaction
Related Works
1. Various approaches based on spatio-temporal interest points (STIPs)
2. Generative models, e.g. Bayesian networks that model key pose changes
3. Discriminative models, e.g. a CRF that models the temporal dynamics of silhouette-based features
4. Manifold-based methods: learning and matching of dynamic shape manifolds for human action recognition (Wang and Suter, TIP 2007)
Proposed Approach
Training: Labeled training dataset → STIP detector & descriptor → Manifold learning (LPP) → Learn CRF classifier
Testing: Test video → STIP detector & descriptor → Dimensionality reduction (projection onto the manifold) → Classifier → Action class
Algorithm Details: STIP Detector
• Looks for distinctive neighborhoods in the video
• High image variation in both space and time
• Each neighborhood is described using distributions of gradient and optical flow
The Gaussian-weighted second-moment matrix over space-time is

    μ = g(·; σ, τ) * [ L_x L_x   L_x L_y   L_x L_t
                       L_x L_y   L_y L_y   L_y L_t
                       L_x L_t   L_y L_t   L_t L_t ]

where L_i is the partial derivative of the scale-space representation L along i ∈ {x, y, t}. The response function is

    H = det(μ) − k · trace³(μ)

Any (x, y, t) location in the video is a STIP if H attains a positive local maximum there.
I. Laptev. On Space-Time Interest Points. IJCV, 2005
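The detector above can be sketched with NumPy/SciPy. This is a minimal illustration of the space-time Harris measure, not Laptev's implementation; the scale and sensitivity values (`sigma`, `tau`, `s`, `k`) are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris3d_response(video, sigma=1.5, tau=1.5, s=2.0, k=0.005):
    """Space-time Harris response H = det(mu) - k * trace(mu)**3.

    video: (T, H, W) array. sigma/tau are spatial/temporal smoothing
    scales, s is the integration-scale multiplier, k the sensitivity
    (all values illustrative, not taken from the slides).
    """
    # Scale-space smoothing, then first derivatives L_t, L_y, L_x.
    L = gaussian_filter(video.astype(float), sigma=(tau, sigma, sigma))
    Lt = sobel(L, axis=0)
    Ly = sobel(L, axis=1)
    Lx = sobel(L, axis=2)
    # Entries of the second-moment matrix mu: Gaussian-weighted
    # products of first derivatives.
    g = lambda a: gaussian_filter(a, sigma=(s * tau, s * sigma, s * sigma))
    mxx, mxy, mxt = g(Lx * Lx), g(Lx * Ly), g(Lx * Lt)
    myy, myt, mtt = g(Ly * Ly), g(Ly * Lt), g(Lt * Lt)
    trace = mxx + myy + mtt
    det = (mxx * (myy * mtt - myt * myt)
           - mxy * (mxy * mtt - myt * mxt)
           + mxt * (mxy * myt - myy * mxt))
    return det - k * trace ** 3
```

Positive local maxima of the returned response volume are then taken as STIP locations.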
Algorithm Details: STIP Descriptor
I. Laptev. On Space-Time Interest Points. IJCV, 2005
• A small spatio-temporal neighborhood is extracted around each STIP
• The neighborhood is divided into 3x3x2 tiles, and gradient (HoG) and optical-flow (HoF) histograms are computed per tile
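A sketch of the HoG half of the descriptor, assuming the 72 = 3x3x2 tiles x 4 orientation bins layout implied by the 72+90 dimensions reported later in the slides (the 90-dim HoF part is analogous, using optical flow instead of image gradients):

```python
import numpy as np

def hog_descriptor(patch, nx=3, ny=3, nt=2, nbins=4):
    """72-dim HoG over a spatio-temporal patch split into 3x3x2 tiles.

    patch: (T, H, W) array around a detected STIP. Tile/bin counts are
    assumptions chosen to reproduce the 72-dim layout from the slides.
    """
    gy, gx = np.gradient(patch, axis=(1, 2))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)   # orientation in [0, 2pi)
    T, H, W = patch.shape
    hist = []
    for ti in range(nt):                          # temporal tiles
        for yi in range(ny):                      # vertical tiles
            for xi in range(nx):                  # horizontal tiles
                sl = (slice(ti * T // nt, (ti + 1) * T // nt),
                      slice(yi * H // ny, (yi + 1) * H // ny),
                      slice(xi * W // nx, (xi + 1) * W // nx))
                # Magnitude-weighted orientation histogram per tile.
                h, _ = np.histogram(ang[sl], bins=nbins,
                                    range=(0, 2 * np.pi), weights=mag[sl])
                hist.append(h)
    hist = np.concatenate(hist)
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist
```

Concatenating this with a 5-bin flow histogram per tile (3x3x2x5 = 90) gives the 162-dim HoG-HoF descriptor mentioned under Current Status.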
Dimensionality Reduction
A broad classification:
• Linear methods: cannot capture the inherent non-linearity of the manifold
• Non-linear methods: may not be defined everywhere (no mapping for out-of-sample points); may not preserve neighborhood structure
Locality Preserving Projections (LPP)
• Concentrates on preserving locality rather than minimizing the least-squares error
• A linear method, yet capable of learning the non-linear manifold structure as well as possible
Algorithm Details: HCRF based Classifier
1. HCRF is a discriminative classifier conditioned globally on all the observations
2. Model parameters are found by maximizing the conditional log likelihood on the labelled training data
3. Flexible configuration and connectivity of the hidden variables
Wang, Mori NIPS 2008
[Graphical model: output label y connected to hidden states h1 … hm; each hidden state hj connected to its observation xj]
Algorithm Details: HCRF based Classifier
Given a sequence of STIPs x and its class label y, we wish to model p(y | x):

    p(y | x; θ) = Σ_h p(y, h | x; θ)
                = Σ_h exp( Ψ(y, h, x; θ) ) / Σ_{y', h} exp( Ψ(y', h, x; θ) )

where h = (h1, …, hm) are the hidden states and the potential function decomposes over the graph (V, E) as

    Ψ(y, h, x; θ) = Σ_{j ∈ V} φ(x_j)ᵀ θ(h_j) + Σ_{j ∈ V} θ(y, h_j) + Σ_{(j,k) ∈ E} θ(y, h_j, h_k)

Wang, Mori NIPS 2008
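The HCRF posterior can be sketched by brute-force enumeration over the hidden states of a small chain. This is only an illustration of the probability model; the parameter names (`theta_obs`, `theta_yh`, `theta_edge`) are hypothetical, and real implementations sum over h with belief propagation rather than enumeration.

```python
import numpy as np
from itertools import product

def hcrf_posterior(x, n_labels, n_hidden, theta_obs, theta_yh, theta_edge):
    """Brute-force p(y|x) for a tiny chain HCRF (illustrative only).

    x:          (m, d) observation sequence
    theta_obs:  (n_hidden, d)              -- phi(x_j) . theta(h_j) terms
    theta_yh:   (n_labels, n_hidden)       -- theta(y, h_j) terms
    theta_edge: (n_labels, n_hidden, n_hidden) -- theta(y, h_j, h_k) terms
    """
    m = x.shape[0]
    scores = np.zeros(n_labels)
    for y in range(n_labels):
        z = 0.0
        for h in product(range(n_hidden), repeat=m):  # sum over all h
            # Potential Psi(y, h, x; theta) for this configuration.
            psi = sum(theta_obs[h[j]] @ x[j] + theta_yh[y, h[j]]
                      for j in range(m))
            psi += sum(theta_edge[y, h[j], h[j + 1]] for j in range(m - 1))
            z += np.exp(psi)
        scores[y] = z
    return scores / scores.sum()   # normalize over labels y'
```

Training then maximizes the conditional log-likelihood Σ log p(y_i | x_i; θ) over labeled sequences, typically by gradient ascent.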
Current Status
1. Datasets:
   • KTH: 600 videos of 6 actions performed by 25 actors in 4 scenarios
   • UCF-50 dataset
2. Features: 3D Harris corners as STIPs; 162-dim (72 HoG + 90 HoF) descriptors computed for both datasets
   • Script-based approach to add new datasets
3. Dimensionality reduction using LPP:
   • Learn a low-dimensional manifold using all STIPs obtained from the full dataset
   • Project the data onto the learned manifold
4. HCRF model learned on the training dataset; initial results obtained
Results: STIPs
[Figure: detected STIPs overlaid on sample video frames]
Result: LPP output
[Figure: first 3 dimensions of the LPP embedding, plotted for visualization; ~1.2 lakh (~120,000) STIPs collected from all action classes in the KTH dataset]
To be done…
• Testing the classifier on the different datasets
• Compiling action classification results for the different datasets
• Code debugging
References
• I. Laptev. On Space-Time Interest Points. IJCV, 2005
• I. Laptev, M. Marszalek, C. Schmid, and B. Rozenfeld. Learning realistic human actions from movies. In CVPR, 2008
• F. Lv and R. Nevatia. Single view human action recognition using key pose matching and Viterbi path searching. In CVPR, 2007
• S. Wang, A. Quattoni, L.-P. Morency, D. Demirdjian, and T. Darrell. Hidden Conditional Random Fields for Gesture Recognition. In CVPR, 2006
• L.-P. Morency, A. Quattoni, and T. Darrell. Latent-Dynamic Discriminative Models for Continuous Gesture Recognition. In CVPR, 2007
• A. Quattoni, S. Wang, L.-P. Morency, M. Collins, and T. Darrell. Hidden conditional random fields. PAMI, 2007
• C. Sminchisescu. Selection and context for action recognition. In ICCV, 2009
• J. Sun, X. Wu, S. Yan, L. Cheong, T. Chua, and J. Li. Hierarchical spatio-temporal context modelling for action recognition. In CVPR, 2009
• L. Wang and D. Suter. Learning and Matching of Dynamic Shape Manifolds for Human Action Recognition. IEEE TIP, 2007
Thank You