Playing with features for learning and prediction
Jongmin Kim, Seoul National University
Problem statement
• Predicting outcome of surgery
Predicting outcome of surgery
• Ideal approach
[Diagram: training data of past surgeries → ? → predicting the outcome of a new surgery]
Predicting outcome of surgery
• Initial approach: predicting partial features
• Predict which features?
Predicting outcome of surgery
• 4 surgeries: DHL + RFT + TAL + FDO
• Features to predict (a regression sketch follows below):
– flexion of the knee (min / max)
– dorsiflexion of the ankle (min)
– rotation of the foot (min / max)
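Framed as a learning problem, this amounts to multi-output regression. A minimal sketch, assuming a hypothetical matrix X of pre-surgery gait measurements and a matrix Y with one column per predicted quantity (all names and sizes are illustrative, not the actual dataset):

```python
# Minimal multi-output regression sketch (hypothetical data shapes).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(35, 20))   # 35 patients, 20 pre-surgery gait features
Y = rng.normal(size=(35, 5))    # 5 targets: knee flexion min/max, ankle
                                # dorsiflexion min, foot rotation min/max

# RandomForestRegressor handles multi-output targets directly.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, Y)
print(model.predict(X[:1]).shape)   # (1, 5): one prediction per target
```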
Predicting outcome of surgery
• Are these good features?
• Number of training samples:
– DHL+RFT+TAL: 35
– FDO+DHL+TAL+RFT: 33
Machine learning and features
[Diagram: Data → Feature representation → Learning algorithm]
Features in motion
• Joint position / angle
• Velocity / acceleration
• Distance between body parts
• Contact status
• …
(a sketch of computing such features follows below)
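A small numpy sketch of how such features could be derived from raw 3D joint positions; the joint ordering, frame rate, and array shapes are illustrative assumptions:

```python
# Sketch: deriving motion features from 3D joint positions (placeholder data).
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b formed by segments b->a and b->c, in radians."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))

positions = np.random.rand(120, 3, 3)   # (frames, joints, xyz): hip, knee, ankle
fps = 60.0                              # assumed capture rate

knee_angles = np.array([joint_angle(p[0], p[1], p[2]) for p in positions])
knee_velocity = np.gradient(knee_angles) * fps          # angular velocity (rad/s)
hip_ankle_dist = np.linalg.norm(positions[:, 0] - positions[:, 2], axis=1)
```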
Features in computer vision
• SIFT
• Spin image
• HoG
• RIFT
• Textons
• GLOH
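As one concrete instance, a HoG descriptor can be computed with scikit-image's hog function; the image and parameters below are placeholders:

```python
# Sketch: extracting a HoG descriptor with scikit-image (illustrative parameters).
import numpy as np
from skimage.feature import hog

image = np.random.rand(64, 128)     # placeholder grayscale image
descriptor = hog(image, orientations=9, pixels_per_cell=(8, 8),
                 cells_per_block=(2, 2))
print(descriptor.shape)             # one flat gradient-histogram feature vector
```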
Machine learning and features
Outline
• Feature selection
– Feature ranking
– Subset selection: wrapper, filter, embedded
– Recursive Feature Elimination (a sketch follows after this outline)
– Combination of weak priors (boosting)
– AdaBoost (classification) / joint boosting (classification) / gradient boosting (regression)
• Prediction results with feature selection
• Feature learning?
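A minimal sketch of Recursive Feature Elimination with scikit-learn's RFE, using synthetic data sized like the surgery datasets above (all names are illustrative):

```python
# Sketch: Recursive Feature Elimination with a linear model (scikit-learn).
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(35, 20))       # 35 samples, 20 candidate features
y = X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.1, size=35)

# Repeatedly fit the model and drop the lowest-weighted feature each round.
selector = RFE(SVR(kernel="linear"), n_features_to_select=5, step=1)
selector.fit(X, y)
print(np.flatnonzero(selector.support_))    # indices of retained features
```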
Feature selection
• Alleviates the effect of the curse of dimensionality
• Improves prediction performance
• Faster and more cost-effective predictors
• Provides a better understanding of the data
Subset selection
• Wrapper
• Filter
• Embedded (contrasted with a filter in the sketch below)
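To make the distinction concrete, a sketch contrasting a filter method (univariate scoring, model-agnostic) with an embedded method (L1 sparsity inside the model) in scikit-learn; the data is synthetic:

```python
# Sketch: filter vs. embedded feature subset selection (scikit-learn).
import numpy as np
from sklearn.feature_selection import SelectKBest, SelectFromModel, f_regression
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = X[:, 2] - 2.0 * X[:, 7] + rng.normal(scale=0.1, size=100)

# Filter: score each feature independently of any downstream model.
filt = SelectKBest(f_regression, k=5).fit(X, y)

# Embedded: selection happens inside model training via L1 sparsity.
emb = SelectFromModel(Lasso(alpha=0.05)).fit(X, y)

print(np.flatnonzero(filt.get_support()), np.flatnonzero(emb.get_support()))
```

A wrapper method would instead search over candidate subsets by repeatedly retraining and scoring the predictor itself (e.g. sequential forward selection).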
Feature learning?
• Can we automatically learn a good feature representation?
• Known as: unsupervised feature learning, feature learning, deep learning, representation learning, etc.
• Hand-designed features (by humans):
1. need expert knowledge
2. require time-consuming hand-tuning
• When it is unclear how to hand-design features: automatically learned features (by machine)
Learning Feature Representations
• Key idea:
– Learn statistical structure or correlations of the data from unlabeled data
– The learned representations can then be used as features in supervised and semi-supervised settings
Learning Feature Representations
[Diagram: input (image / features) → Encoder → output features; the Decoder forms a feed-back / generative / top-down path, the Encoder a feed-forward / bottom-up path]
Learning Feature Representations
• e.g. Predictive Sparse Decomposition [Kavukcuoglu et al., ’09]
[Diagram: input patch x → encoder σ(Wx) → sparse features z → decoder Dz, with an L1 sparsity penalty on z; W: encoder filters, σ(.): sigmoid, D: decoder filters]
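A toy numpy sketch of the PSD objective in its commonly cited form ‖x − Dz‖² + λ‖z‖₁ + α‖z − σ(Wx)‖²; the shapes and coefficients are assumptions, not the paper's exact settings:

```python
# Toy sketch of the Predictive Sparse Decomposition objective (assumed form).
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def psd_loss(x, z, D, W, lam=0.1, alpha=1.0):
    recon = np.sum((x - D @ z) ** 2)                   # decoder reconstruction
    sparse = lam * np.sum(np.abs(z))                   # L1 sparsity on the code
    pred = alpha * np.sum((z - sigmoid(W @ x)) ** 2)   # encoder prediction term
    return recon + sparse + pred

rng = np.random.default_rng(0)
x = rng.normal(size=8)          # input patch (flattened)
D = rng.normal(size=(8, 16))    # decoder filters
W = rng.normal(size=(16, 8))    # encoder filters
z = sigmoid(W @ x)              # fast approximate code from the encoder
print(psd_loss(x, z, D, W))
```

During training z, D, and W are optimized alternately; at test time only the cheap encoder pass σ(Wx) is needed.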
Stacked Auto-Encoders
[Diagram: input image → Encoder/Decoder → features → Encoder/Decoder → features → Encoder/Decoder → class label; each layer is trained greedily as an auto-encoder]
[Hinton & Salakhutdinov, Science ’06]
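A compact numpy sketch of the greedy layer-wise idea: each layer is a small tied-weight auto-encoder fit to the previous layer's codes. Sizes, learning rate, and data are placeholders:

```python
# Sketch: greedy layer-wise pretraining of stacked auto-encoders (numpy).
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def train_autoencoder(X, hidden, lr=0.1, epochs=200, seed=0):
    """Fit one tied-weight auto-encoder layer; return its encoder weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(X.shape[1], hidden))
    for _ in range(epochs):
        Z = sigmoid(X @ W)          # encode
        Xh = Z @ W.T                # decode with tied weights
        E = Xh - X                  # reconstruction error
        # Gradient of ||Xh - X||^2 w.r.t. W (encoder and decoder paths).
        dZ = (E @ W) * Z * (1 - Z)
        W -= lr * (X.T @ dZ + E.T @ Z) / len(X)
    return W

X = np.random.rand(100, 32)         # placeholder input data
codes, encoders = X, []
for hidden in (16, 8):              # two stacked layers
    W = train_autoencoder(codes, hidden)
    encoders.append(W)
    codes = sigmoid(codes @ W)      # feed this layer's codes to the next
```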
At Test Time
[Diagram: input image → Encoder → features → Encoder → features → Encoder → class label]
[Hinton & Salakhutdinov, Science ’06]
• Remove decoders
• Use the feed-forward path
• Gives a standard (convolutional) neural network
• Can fine-tune with backprop (a test-time sketch follows below)
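A sketch of this test-time setup: the decoders are dropped, the pretrained encoder weights are composed feed-forward, and a supervised classifier (logistic regression here, as a stand-in) sits on top; the weights below are random placeholders for pretrained ones:

```python
# Sketch: test time keeps only the encoders and adds a classifier on top.
import numpy as np
from sklearn.linear_model import LogisticRegression

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def encode(X, encoders):
    for W in encoders:              # decoders are discarded at test time
        X = sigmoid(X @ W)
    return X

rng = np.random.default_rng(0)
X = rng.random((100, 32))
encoders = [rng.normal(scale=0.1, size=s) for s in [(32, 16), (16, 8)]]
y = (X[:, 0] > 0.5).astype(int)     # placeholder labels

clf = LogisticRegression().fit(encode(X, encoders), y)
# Fine-tuning would instead backpropagate through all encoder weights jointly.
```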
Status & plan
• Understanding the data / survey of learning techniques…
• Plan: finish experiments by the end of November
• Write the paper in December
• Submit to SIGGRAPH in January
• Present in the US in August
• But before all of that…
Deep neural nets vs. boosting
• Deep nets:
– a single highly non-linear system
– a “deep” stack of simpler modules
– all parameters are subject to learning
• Boosting & forests:
– a sequence of “weak” (simple) classifiers that are linearly combined to produce a powerful classifier
– subsequent classifiers do not exploit representations of earlier classifiers; it is a “shallow” linear mixture
– typically, features are not learned (see the sketch below)
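A minimal scikit-learn sketch of this view of boosting, with synthetic data: each stage fits a shallow tree to the current residuals, and the final predictor is their shrunken sum.

```python
# Sketch: boosting as a linear combination of shallow weak learners.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Each stage fits a depth-2 tree to the current residuals; predictions
# are the shrunken sum of all stages: a "shallow" additive model.
model = GradientBoostingRegressor(n_estimators=100, max_depth=2,
                                  learning_rate=0.1).fit(X, y)
print(model.score(X, y))
```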