Feature selection methods: from correlation to causality
Isabelle Guyon [email protected]
NIPS 2008 workshop on kernel learning
Acknowledgements and references
1) Feature Extraction: Foundations and Applications. I. Guyon, S. Gunn, et al. Springer, 2006. http://clopinet.com/fextract-book
2) Causal feature selection. I. Guyon, C. Aliferis, A. Elisseeff. To appear in "Computational Methods of Feature Selection", Huan Liu and Hiroshi Motoda, Eds., Chapman and Hall/CRC Press, 2007. http://clopinet.com/causality
Constantin Aliferis, Alexander Statnikov, André Elisseeff, Jean-Philippe Pellet, Gregory F. Cooper, Peter Spirtes
http://clopinet.com/causality
Introduction
Feature Selection
• Thousands to millions of low-level features: select the most relevant ones to build better, faster, and easier-to-understand learning machines.
[Diagram: data matrix X with m examples (rows) and n features (columns), reduced to n' selected features]
Applications
[Chart: applications plotted by number of examples (10 to 10^5) versus number of variables/features (10 to 10^6): Bioinformatics, Quality control, Machine vision, Customer knowledge, OCR/HWR, Market analysis, Text categorization, System diagnosis]
Nomenclature
• Univariate method: considers one variable (feature) at a time.
• Multivariate method: considers subsets of variables (features) together.
• Filter method: ranks features or feature subsets independently of the predictor (classifier).
• Wrapper method: uses a classifier to assess features or feature subsets.
Univariate Filter Methods
Univariate feature ranking
• Normally distributed classes, equal variance σ² unknown; estimated from the data as σ²within.
• Null hypothesis H0: μ+ = μ−
• T statistic: if H0 is true, t = (μ+ − μ−) / (σwithin √(1/m+ + 1/m−)) follows a Student distribution with m+ + m− − 2 d.f.
[Figure: class-conditional densities P(Xi|Y=−1) and P(Xi|Y=1) along feature xi, with means μ− and μ+]
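A minimal NumPy sketch of this ranking criterion (the function name and interface are illustrative, not from the slides): it computes the pooled within-class variance and the T statistic for every feature at once.

```python
import numpy as np

def t_statistic_ranking(X, y):
    """Rank features by the two-sample T statistic above.
    X: (m, n) data matrix; y: (m,) labels in {-1, +1}."""
    Xp, Xm = X[y == 1], X[y == -1]
    mp, mm = len(Xp), len(Xm)
    mu_p, mu_m = Xp.mean(axis=0), Xm.mean(axis=0)
    # Pooled within-class variance estimate, sigma^2_within.
    var_within = (((Xp - mu_p) ** 2).sum(axis=0)
                  + ((Xm - mu_m) ** 2).sum(axis=0)) / (mp + mm - 2)
    t = (mu_p - mu_m) / np.sqrt(var_within * (1.0 / mp + 1.0 / mm))
    return np.argsort(-np.abs(t))  # most relevant features first
```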
Statistical tests (chap. 2)
(Guyon, Dreyfus, 2006)
• H0: X and Y are independent.
• Relevance index ↔ test statistic.
• P-value ↔ false positive rate FPR = nfp / nirr
• Multiple testing problem: use the Bonferroni correction, pval ← n · pval
• False discovery rate: FDR = nfp / nsc ≤ FPR · n / nsc
• Probe method: FPR ≈ nsp / np (sketched below)
[Figure: null distribution of the relevance index r, with threshold r0 and p-value = tail area beyond r0]
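A sketch of the probe method under simple assumptions (Gaussian probe features and a user-supplied scoring function `score_fn`; both are illustrative): random probes, irrelevant by construction, are appended to the data, and the fraction of probes scoring at least as high as each real feature estimates its false positive rate.

```python
import numpy as np

def probe_fpr(X, y, score_fn, n_probes=500, seed=0):
    """Estimate a per-feature FPR ~ n_sp / n_p with random probes.
    score_fn(X, y) must return one relevance score per column of X,
    with higher scores meaning more relevant."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    probes = rng.standard_normal((m, n_probes))   # irrelevant by construction
    scores = score_fn(np.hstack([X, probes]), y)
    real, fake = scores[:n], np.sort(scores[n:])
    # Fraction of probes scoring >= each real feature's score.
    return 1.0 - np.searchsorted(fake, real, side="left") / n_probes
```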
Univariate Dependence
• Independence:
P(X, Y) = P(X) P(Y)
• Measure of dependence (mutual information):
MI(X, Y) = ∫ P(X,Y) log [ P(X,Y) / (P(X)P(Y)) ] dX dY
= KL( P(X,Y) || P(X)P(Y) )
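A plug-in estimate of this quantity for two discrete variables, as a hedged sketch (histogram counts; continuous variables would need binning or a dedicated estimator):

```python
import numpy as np

def mutual_information(x, y):
    """MI(X, Y) = KL(P(X,Y) || P(X)P(Y)) for discrete x, y, in nats."""
    xs, xi = np.unique(x, return_inverse=True)
    ys, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xs), len(ys)))
    np.add.at(joint, (xi, yi), 1.0)
    joint /= joint.sum()                      # empirical P(X, Y)
    px = joint.sum(axis=1, keepdims=True)     # marginal P(X)
    py = joint.sum(axis=0, keepdims=True)     # marginal P(Y)
    nz = joint > 0                            # 0 log 0 = 0 convention
    return float((joint[nz] * np.log((joint / (px * py))[nz])).sum())
```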
Other criteria (chap. 3)
(Wlodzislaw Duch, 2006)
A choice of feature selection ranking methods depending on the nature of:
• the variables and the target (binary, categorical, continuous)
• the problem (dependencies between variables, linear/non-linear relationships between variables and target)
• the available data (number of examples, number of variables, noise in the data)
• the available tabulated statistics.
Multivariate Methods
Univariate selection may fail
Guyon-Elisseeff, JMLR 2004; Springer 2006
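A toy illustration of this failure mode (the XOR construction is a standard example in this spirit, not the paper's exact figure): each feature alone carries no signal, so any univariate ranking discards both, yet together they determine the target.

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.integers(0, 2, 1000)
x2 = rng.integers(0, 2, 1000)
y = x1 ^ x2                          # target is the XOR of the two features

# Each univariate correlation with y is ~0, so both features rank last ...
print(np.corrcoef(x1, y)[0, 1], np.corrcoef(x2, y)[0, 1])
# ... yet (x1, x2) jointly predict y perfectly: a multivariate method
# (wrapper or embedded) can discover the pair.
```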
Filters, Wrappers, and Embedded methods
• Filter: All features → Filter → Feature subset → Predictor
• Wrapper: All features → multiple feature subsets, each assessed by the Predictor → best subset → Predictor
• Embedded method: All features → Predictor whose training itself produces the Feature subset
Relief
For each example, find the nearest hit (closest example of the same class) and the nearest miss (closest example of the opposite class), at distances Dhit and Dmiss.
Relief = <Dmiss / Dhit>, averaged over all examples.
[Figure: an example with its nearest hit and nearest miss, at distances Dhit and Dmiss]
Kira and Rendell, 1992
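A compact sketch of this criterion (vectorized, Euclidean distances; the ratio form follows the slide rather than the original per-feature weight-update formulation):

```python
import numpy as np

def relief_scores(X, y):
    """Per-feature Relief index <Dmiss / Dhit>; larger = more relevant.
    Builds the full (m, m) distance matrix, so keep m small for demos."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)                        # exclude self-matches
    same = y[:, None] == y[None, :]
    hit = np.where(same, D, np.inf).argmin(axis=1)     # nearest same-class
    miss = np.where(~same, D, np.inf).argmin(axis=1)   # nearest other-class
    d_hit = np.abs(X - X[hit]) + 1e-12                 # per-feature |x - hit|
    d_miss = np.abs(X - X[miss])                       # per-feature |x - miss|
    return (d_miss / d_hit).mean(axis=0)
```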
Wrappers for feature selection
N features, 2^N possible feature subsets!
Kohavi-John, 1997
Search Strategies (chap. 4)
(Juha Reunanen, 2006)
• Exhaustive search.
• Stochastic search (simulated annealing, genetic algorithms).
• Beam search: keep the k best paths at each step (see the sketch after this list).
• Greedy search: forward selection or backward elimination.
• Floating search: alternate forward and backward strategies.
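A concrete sketch of the beam-search strategy (the `score` argument is any wrapper criterion you supply, e.g. validation accuracy of a predictor trained on those features; the interface is illustrative):

```python
def beam_search(n_features, k, depth, score):
    """Keep only the k best-scoring partial subsets (paths) at each step."""
    beam = [()]                                   # start from the empty subset
    for _ in range(depth):
        candidates = {tuple(sorted(s + (j,)))     # extend each kept path
                      for s in beam
                      for j in range(n_features) if j not in s}
        beam = sorted(candidates, key=score, reverse=True)[:k]
    return beam[0]                                # best subset of size `depth`
```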
Forward Selection (wrapper)
[Diagram: forward search — starting from the empty set, choose among n features, then n−1, then n−2, …, down to 1]
Also referred to as SFS: Sequential Forward Selection
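A greedy SFS wrapper in a few lines, assuming a user-supplied `fit_score(X_train, y_train, X_val, y_val)` that trains any predictor and returns its validation score (all names are illustrative):

```python
def forward_selection(X_tr, y_tr, X_va, y_va, fit_score, k):
    """Add, at each step, the feature whose inclusion scores best."""
    selected = []
    remaining = list(range(X_tr.shape[1]))
    while remaining and len(selected) < k:
        best_j = max(remaining,
                     key=lambda j: fit_score(X_tr[:, selected + [j]], y_tr,
                                             X_va[:, selected + [j]], y_va))
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```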
Forward Selection (embedded)
[Diagram: the same forward path (n, n−1, n−2, …, 1 candidate features), traversed by the learning machine itself]
Guided search: we do not consider alternative paths. Typical examples: Gram-Schmidt orthogonalization and tree classifiers.
Backward Elimination (wrapper)
[Diagram: backward search — starting from all n features, eliminate them one at a time: n, n−1, n−2, …, 1]
Also referred to as SBS: Sequential Backward Selection
Backward Elimination (embedded)
[Diagram: the same backward path, traversed by the learning machine itself]
Guided search: we do not consider alternative paths. Typical example: "recursive feature elimination" (RFE-SVM); a minimal sketch follows.
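A sketch of RFE under the assumption of a linear predictor: `fit_weights(X, y)` stands for any trainer returning a weight vector (e.g. a linear SVM); the interface is illustrative, not the original implementation.

```python
import numpy as np

def rfe(X, y, fit_weights, n_keep=1):
    """Recursive feature elimination: retrain, drop the smallest |weight|."""
    active = list(range(X.shape[1]))
    while len(active) > n_keep:
        w = fit_weights(X[:, active], y)       # weights of surviving features
        del active[int(np.argmin(np.abs(w)))]  # eliminate the weakest one
    return active
```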
Scaling Factors
Idea: Transform a discrete space into a continuous space.
• Discrete indicators of feature presence: σi ∈ {0, 1}
• Continuous scaling factors: σi ∈ ℝ
• Example: σ = [σ1, σ2, σ3, σ4]
Now we can do gradient descent!
Formalism (chap. 5)
(Lal, Chapelle, Weston, Elisseeff, 2006)
• Many learning algorithms are cast into a minimization of some regularized functional: an empirical error term plus a regularization term for capacity control, schematically
min over (σ, α) of Σk L( f(σ ∘ xk; α), yk ) + Ω(σ, α)
• This formalism justifies RFE and many other embedded methods.
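A toy instance of this formalism, as a hedged sketch (squared loss on a linear model with an L1 penalty on σ is an illustrative choice, not the chapter's exact functional): gradient descent updates the model weights α and the scaling factors σ jointly; an alternating scheme would simply interleave the two updates.

```python
import numpy as np

def fit_scaling_factors(X, y, lam=0.1, lr=0.01, steps=2000):
    """Minimize empirical error + regularizer over weights alpha and
    scaling factors sigma (assumes standardized X for stability)."""
    m, n = X.shape
    alpha = np.zeros(n)
    sigma = np.ones(n)
    for _ in range(steps):
        r = (X * sigma) @ alpha - y          # residuals on sigma-scaled inputs
        g_alpha = (X * sigma).T @ r / m      # gradient of empirical error w.r.t. alpha
        g_sigma = (X * alpha).T @ r / m + lam * np.sign(sigma)  # + L1 subgradient
        alpha -= lr * g_alpha
        sigma -= lr * g_sigma
    return alpha, sigma                      # features with sigma_i ~ 0 can be pruned
```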
Embedded method
• Embedded methods are a good inspiration for designing new feature selection techniques for your own algorithms:
– Find a functional that represents your prior knowledge about what a good model is.
– Add the weights σ into the functional and make sure it is either differentiable or that you can perform a sensitivity analysis efficiently.
– Optimize alternately with respect to σ and α.
– Use early stopping (validation set) or your own stopping criterion to stop and select the subset of features.
• Embedded methods are therefore not too far from wrapper techniques and can be extended to multiclass, regression, etc.
Causality
What can go wrong?
Guyon-Aliferis-Elisseeff, 2007
[Figure: scatter plot of feature X2 versus feature X1]
What can go wrong?
[Figure: scatter-plot views of features X1 and X2]
What can go wrong?
Guyon-Aliferis-Elisseeff, 2007
[Diagram: candidate features X1 and X2 and their causal relation to the target Y]
Local causal graph
[Figure: causal graph around Lung Cancer, with nodes Smoking, Genetics, Coughing, Attention Disorder, Allergy, Anxiety, Peer Pressure, Yellow Fingers, Car Accident, Born an Even Day, and Fatigue]
What works and why?
Bilevel optimization
1) For each feature subset, train the predictor on the training data.
2) Select the feature subset that performs best on the validation data.
– Repeat and average if you want to reduce variance (cross-validation).
3) Test on the test data.
[Diagram: data matrix of M samples × N variables/features, split row-wise into three blocks of sizes m1 (training), m2 (validation), and m3 (test)]
Split the data into 3 sets: training, validation, and test.
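The whole procedure in one hedged sketch (the split ratios and the `fit_score` trainer are placeholders you supply, not from the slides):

```python
import numpy as np

def bilevel_select(subsets, X, y, fit_score, seed=0):
    """Train on m1 rows, pick the subset best on m2, test once on m3.
    fit_score(X_tr, y_tr, X_ev, y_ev) -> validation/test score."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    m1, m2 = int(0.6 * len(y)), int(0.2 * len(y))
    tr, va, te = idx[:m1], idx[m1:m1 + m2], idx[m1 + m2:]
    best = max(subsets, key=lambda s: fit_score(X[np.ix_(tr, s)], y[tr],
                                                X[np.ix_(va, s)], y[va]))
    return best, fit_score(X[np.ix_(tr, best)], y[tr],
                           X[np.ix_(te, best)], y[te])
```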
Complexity of Feature Selection
Method | Number of subsets tried | Complexity C
Exhaustive search (wrapper) | 2^N | N
Nested subsets / feature ranking | N(N+1)/2 or N | log N

With high probability:
Generalization_error ≤ Validation_error + ε(C/m2)
m2: number of validation examples; N: total number of features; n: feature subset size.
[Figure: error versus feature subset size n]
Try to keep C of the order of m2.
Insensitivity to irrelevant features
Simple univariate predictive model, binary target and features; all relevant features correlate perfectly with the target, all irrelevant features are randomly drawn. With 98% confidence, abs(feat_weight) < w and |Σi wi xi| < v, where:
ng = number of "good" (relevant) features
nb = number of "bad" (irrelevant) features
m = number of training examples.
Conclusion
• Feature selection focuses on uncovering subsets of variables X1, X2, … predictive of the target Y.
• Multivariate feature selection is in principle more powerful than univariate feature selection, but not always in practice.
• Taking a closer look at the type of dependencies in terms of causal relationships may help refine the notion of variable relevance.
• Feature selection without causal discovery may be more harmful than useful.
• Causality can help ML, but ML can also help causality.