
Feature Augmentation via Nonparametrics and Selection

(FANS) in High Dimensional Classification∗

Jianqing Fan, Yang Feng, Jiancheng Jiang and Xin Tong

Abstract

We propose a high dimensional classification method that involves nonparametric

feature augmentation. Knowing that marginal density ratios are building blocks of the

Bayes rule for each feature, we use the ratio estimates to transform the original feature

measurements. Subsequently, we invoke penalized logistic regression, taking as input the

newly transformed or augmented features. This procedure trains models with high local

complexity and a simple global structure, thereby avoiding the curse of dimensionality

while creating a flexible nonlinear decision boundary. The resulting method is called

Feature Augmentation via Nonparametrics and Selection (FANS). We motivate FANS

by generalizing the Naive Bayes model, writing the log ratios of joint densities as a

linear combination of those of marginal densities. It is related to generalized additive

models, but has better interpretability and computability. Risk bounds are developed

for FANS. In numerical analysis, FANS is compared with competing methods, so as to

provide a guideline on its best application domain. Real data analysis demonstrates that

FANS performs very competitively on benchmark email spam and gene expression data

sets. Moreover, FANS is implemented by an extremely fast algorithm through parallel

computing.

Keywords: density estimation, classification, high dimensional space, nonlinear decision

boundary, feature augmentation, feature selection, parallel computing.

∗Jianqing Fan is Frederick L. Moore Professor of Finance, Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ 08544 (Email: [email protected]). Yang Feng is Assistant Professor, Department of Statistics, Columbia University, New York, NY 10027 (Email: [email protected]). Jiancheng Jiang is Associate Professor, Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (Email: [email protected]). Xin Tong is Assistant Professor, Department of Data Sciences and Operations, University of Southern California, Los Angeles, CA (Email: [email protected]). The financial support from National Institutes of Health grants R01-GM072611 and R01GM100474-01 and National Science Foundation grants DMS-1206464 and DMS-1308566 is gratefully acknowledged.


1 Introduction

Classification aims to identify to which category a new observation belongs based on feature

measurements. Common applications include disease classification using high-throughput data

such as microarrays or SNPs, spam detection, and image recognition. Well known classification

methods include Fisher’s linear discriminant analysis (LDA), logistic regression, Naive Bayes,

k-nearest neighbor, neural networks, and many others. All these methods can perform well

in the classical low dimensional settings, in which the number of features is much smaller

than the sample size. However, in many contemporary applications, the number of features

p is large compared to the sample size n. For instance, the dimensionality p in microarray

data is frequently in thousands or beyond, while the sample size n is typically in the order of

tens. Besides computational issues, the central conflict in the high dimensional setup is that the model complexity is not supported by the limited amount of data. In other words, the “variance”

of conventional models is high in such new settings, and even simple models such as LDA need

to be regularized. We refer to Hastie et al. (2009) and Buhlmann and van de Geer (2011) for

overviews of statistical challenges associated with high dimensionality.

In this paper, we propose a classification procedure FANS (Feature Augmentation via Nonparametrics and Selection). Before introducing the algorithm, we first detail its motivation.

Suppose feature measurements and responses are coded by a pair of random variables (X, Y ),

where X ∈ X ⊂ Rp denotes the features and Y ∈ {0, 1} is the binary response. Recall that a

classifier h is a data-dependent mapping from the feature space to the labels. Classifiers are usually constructed to minimize the risk P(h(X) ≠ Y).

Denote by g and f the class conditional densities respectively for class 0 and class 1, i.e.,

(X|Y = 0) ∼ g and (X|Y = 1) ∼ f. It can be shown that the Bayes rule is 1I(r(x) ≥ 1/2), where r(x) = E(Y | X = x). Let π = P(Y = 1); then

$$ r(x) = \frac{\pi f(x)}{\pi f(x) + (1 - \pi) g(x)}. $$

Assume for simplicity that π = 1/2; then the oracle decision boundary is

$$ \{x : f(x)/g(x) = 1\} = \{x : \log f(x) - \log g(x) = 0\}. $$
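To spell out the step from the Bayes rule to this boundary (a routine calculation implied by the display above):

$$ r(x) \ge \tfrac{1}{2} \;\Longleftrightarrow\; \pi f(x) \ge (1-\pi)g(x) \;\Longleftrightarrow\; \frac{f(x)}{g(x)} \ge \frac{1-\pi}{\pi}, $$

and the right-hand side equals 1 when π = 1/2.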

Denote by g_1, · · · , g_p the marginals of g, and by f_1, · · · , f_p those of f. The nonparametric Naive Bayes model assumes that the features are conditionally independent given the class label, i.e.,

$$ \log \frac{f(x)}{g(x)} = \sum_{j=1}^{p} \log \frac{f_j(x_j)}{g_j(x_j)}. \qquad (1.1) $$


Naive Bayes is a simple approach, but it is useful in high-dimensional settings. In a two-class Gaussian model with common covariance matrix, Bickel and Levina (2004) showed that

naively carrying out Fisher’s discriminant rule performs poorly due to diverging spectra. In

addition, the authors argued that the independence rule, which ignores the covariance structure,

performs better than Fisher’s rule in high-dimensional settings. However, correlation among

features is usually an essential characteristic of data, and it can help classification under

suitable models and sample size to feature dimension ratios. Examples in bioinformatics study

can be found in Ackermann and Strimmer (2009). Recently, Fan et al. (2012) showed that

the independence assumption can lead to huge loss in classification power when correlation

prevails, and proposed a Regularized Optimal Affine Discriminant (ROAD). ROAD is a linear

plug-in rule that directly targets the classification error, and it takes advantage of the unregularized pooled sample covariance matrix.

Relaxing the two-class Gaussian assumption in parametric Naive Bayes gives us a general

Naive Bayes formulation such as (1.1). However, this model also fails to capture the correlation, or dependence, among features in general. This consideration motivates us to ask the

following question: what if we add weights in front of each log marginal density ratio and optimize them under a suitable criterion? (In the Naive Bayes model all coefficients are 1, so there is no need for optimization.) More precisely, we would like to learn a decision boundary from

the following set

$$ D = \left\{ x : \beta_0 + \beta_1 \log \frac{f_1(x_1)}{g_1(x_1)} + \cdots + \beta_p \log \frac{f_p(x_p)}{g_p(x_p)} = 0, \ \beta_0, \cdots, \beta_p \in \mathbb{R} \right\}. \qquad (1.2) $$

For univariate problems, the marginal density ratio delivers the best classifier based on only

one feature. Therefore, the marginal density ratios can be regarded as the best transformations of the feature measurements, and (1.2) is an effort towards combining these most powerful

univariate transforms to build more powerful classifiers.

This is in a similar spirit as the sure independence screening in Fan and Lv (2008) where

the best marginal predictors are used as probes for their utilities in the joint model. By

combining these marginal density ratios and optimizing over their coefficients βj’s, we wish

to build a good classifier that takes into account feature dependence. Note that although our target boundary D is not linear in the original features, it is linear in the parameters βj. Therefore, after renaming the transformed variables, any linear classifier can be applied.

For example, we can use logistic regression, one of the most popular linear classification rules.

Other choices, such as SVM (linear kernel), are good alternatives, but we fix logistic regression

for the rest of discussion.


Recall that logistic regression models the log odds ratio by

$$ \log \frac{P(Y = 1 \mid X = x)}{P(Y = 0 \mid X = x)} = \beta_0 + \sum_{j=1}^{p} \beta_j x_j, $$

where the βj’s are estimated by the maximum likelihood approach. We should note that, without explicitly modeling correlations, logistic regression takes into account features’ joint effects and learns a good linear combination of features as the decision boundary. Its performance

is similar to LDA, but both models can only capture decision boundaries linear in original

features.

On the other hand, logistic regression might serve as a building block for the more flexible

FANS. Concretely, if we know the marginal densities fj and gj, and run logistic regression

on the transformed features {log(fj(xj)/gj(xj))}, we create a decision boundary nonlinear in

original features. This use of the transformed features is easily interpretable: one naturally

combines the “most powerful” univariate transforms (building blocks of univariate Bayes rules)

{log(fj(xj)/gj(xj))} rather than the original measurements. In special cases such as the

two-class Gaussian model with common covariance matrix, the transformed features are not

different from the original ones. Some caution should be taken: if fj = gj for some j, i.e.,

the marginal densities for feature j are exactly the same, this feature will not have any

contribution in classification. Deletion like this might lose power, because features having no

marginal contribution on their own might boost classification performance when they are used

jointly with other features. In view of this defect, an alternative method is to augment the

original features with the transformed ones.

Since marginal densities fj and gj are unknown, we need to first estimate them, and then

run a penalized logistic regression (PLR) on the estimated transforms. Such regularization

(penalization) is necessary to reduce model complexity in the high dimensional paradigm.

This two-step classification rule of feature augmentation via nonparametrics and selection

will be called “FANS” for short. Precise algorithmic implementation of FANS is described in

the next section. Numerical results show that our new method excels in many scenarios, in

particular when no linear decision boundary can separate the data well.

To understand where FANS stands compared to Naive Bayes (NB), penalized logistic

regression (PLR), and the regularized optimal affine discriminant (ROAD), we showcase a

simple simulation example. In this example, the choice is between a multivariate Gaussian

distribution and the componentwise mixture of two multivariate Gaussian distributions:

Class 0: N((5 × 1_{10}^T, 0_{p−10}^T)^T, Σ);

Class 1: w ◦ N(0_p, I_p) + (1 − w) ◦ N((6 × 1_{10}^T, 0_{p−10}^T)^T, Σ),

where p = 1000, ◦ denotes the element-wise product, Σ_{ii} = 1 for all i = 1, · · · , p, Σ_{ij} = 0.5 for all i ≠ j, and w = (w_1, · · · , w_p)^T with w_j ∼ Bernoulli(0.5) i.i.d.
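For concreteness, a minimal R sketch of this data-generating mechanism is given below. The variable names are ours, and a fresh mixing vector w is drawn for each class 1 observation, which is one reading of the componentwise mixture above.

library(MASS)  # mvrnorm for multivariate Gaussian draws

p <- 1000; n <- 300                          # n observations per class (varied in Figure 1)
Sigma <- matrix(0.5, p, p); diag(Sigma) <- 1 # equal correlation 0.5, unit variances
mu0 <- c(rep(5, 10), rep(0, p - 10))
mu1 <- c(rep(6, 10), rep(0, p - 10))

X0 <- mvrnorm(n, mu0, Sigma)                 # Class 0: N(mu0, Sigma)

W  <- matrix(rbinom(n * p, 1, 0.5), n, p)    # w_j ~ Bernoulli(0.5), componentwise
X1 <- W * matrix(rnorm(n * p), n, p) +       # Class 1: w o N(0, I_p)
  (1 - W) * mvrnorm(n, mu1, Sigma)           #          + (1 - w) o N(mu1, Sigma)

X <- rbind(X0, X1); y <- c(rep(0, n), rep(1, n))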


Figure 1: The median test errors for Gaussian vs. mixture of Gaussian when the training

data size varies.

[Plot: median test classification error versus log10(sample size), with curves for FANS, ROAD, PLR, and NB.]

The median classification error for 100 repetitions as a function of training sample size n

is rendered in Figure 1. This figure suggests that increasing sample sizes does not help NB

boost performance, because the NB model is severely biased in the presence of significant correlation. It is interesting to compare PLR with ROAD. ROAD is a more efficient approach

when the sample size is small; however, PLR eventually does better when the sample size

becomes large enough. This is not surprising because the underlying true model is not two

class Gaussian with common covariance matrix. So the less “biased” PLR beats ROAD on

large samples. Nevertheless, even if ROAD uses a misspecified model, it still benefits from a

specific model assumption on small samples. Finally, since the oracle decision boundary in

this example is nonlinear, the newly proposed FANS approach performs significantly better

than others when the sample size is reasonably large. The above analysis seems to suggest

that FANS does well as long as we have enough data to construct accurate marginal density

estimates. Note also that ROAD is better than FANS when the training sample size is

extremely small. The best method in practice largely depends on the available data.

A popular extension of logistic regression and a close relative of FANS is additive logistic regression, which belongs to the class of generalized additive models (Hastie and Tibshirani, 1990).


Additive logistic regression allows (smooth) nonparametric feature transformations to appear

in the decision boundary by modeling

$$ \log \frac{P(Y = 1 \mid X = x)}{P(Y = 0 \mid X = x)} = \sum_{j=1}^{p} h_j(x_j), \qquad (1.3) $$

where the hj’s are smooth functions. This additive decision boundary is very general, with FANS and logistic regression as special cases. It works well for small-p-large-n scenarios,

while its penalized versions adapt to high dimensional settings. We will compare FANS with

penalized additive logistic regression in numerical studies. The major drawback of additive logistic regression (the generalized additive model) is the heavy computational complexity (e.g., the backfitting algorithm) involved in searching for the transformation functions hj(·). Moreover, the available algorithms, e.g., the algorithm for penGAM (Meier et al., 2009), fail to give an estimate when the sample size is very small. Compared to FANS, the generalized additive model

uses a factor of Kn more parameters, where Kn is the number of knots in the approximation

of each additive component {h_j(·)}_{j=1}^p. While this reduces possible biases in comparison with FANS, it increases the variance of the estimation and results in a higher computation cost (see Table 2). Moreover, FANS admits a nice interpretation as an optimal combination of optimal

univariate classifiers.

Besides the aforementioned references, there is a huge literature on high dimensional classification. Examples include principal component analysis in Bair et al. (2006) and Zou et al.

(2006), partial least squares in Nguyen and Rocke (2002), Huang (2003) and Boulesteix (2004),

and sliced inverse regression in Li (1991) and Antoniadis et al. (2003). Recently, there has

been a surge of interest for extending the linear discriminant analysis to high-dimensional

settings including Guo et al. (2007), Wu et al. (2009), Clemmensen et al. (2011), Shao et al.

(2011), Cai and Liu (2011), Fan et al. (2012), Mai et al. (2012) and Witten and Tibshirani

(2012).

The rest of the paper is organized as follows. Section 2 introduces an efficient algorithm

for FANS. Section 3 is dedicated to simulation studies and real data analysis. Theoretical

results are presented in Section 4. We conclude with a short discussion in Section 5. Longer

proofs and technical results are relegated to the Appendix.

2 Algorithm

In this section, we detail an efficient algorithm (S1 − S5) for FANS. A variant of FANS

(FANS2), which uses the transformed features to augment the original ones, is also described.


2.1 FANS and its Running Time Bound

S1. Given n pairs of observations D = {(X_i, Y_i), i = 1, · · · , n}, randomly split the data into two parts L times: D_l = (D_{l1}, D_{l2}), l = 1, · · · , L.

S2. On each D_{l1}, l ∈ {1, · · · , L}, apply kernel density estimation and denote the estimates by f = (f_1, · · · , f_p)^T and g = (g_1, · · · , g_p)^T.

S3. Calculate the transformed observations Z_i = Z_{f,g}(X_i), where Z_{ij} = log f_j(X_{ij}) − log g_j(X_{ij}), for each i ∈ D_{l2} and j ∈ {1, . . . , p}.

S4. Fit an L1-penalized logistic regression to the transformed data {(Z_i, Y_i), i ∈ D_{l2}}, using cross validation to choose the penalty parameter. For a new observation x, estimate the transformed features by log f_j(x_j) − log g_j(x_j), j = 1, . . . , p, and plug them into the fitted logistic function to get the predicted probability p_l = P(Y = 1 | X = x).

S5. Repeat (S2)-(S4) for l = 1, · · · , L, use the average predicted probability prob = L^{-1} \sum_{l=1}^{L} p_l as the final prediction, and assign the observation x to class 1 if prob ≥ 1/2, and to class 0 otherwise.
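A minimal R sketch of one split (S2-S4) is given below. It is only an illustration of the steps above, with our own helper names, R's default Gaussian kernel density estimates, and glmnet's cross-validation standing in for the choices described here; the eps floor mirrors the winsorization discussed in Remark 1 below.

library(glmnet)

# One random split of FANS (steps S2-S4); helper names and defaults are ours.
fans_one_split <- function(X, y, split_frac = 0.5, eps = 1e-2) {
  n <- nrow(X); p <- ncol(X)
  idx1 <- sample(n, floor(split_frac * n))   # D_l1: used for density estimation (S2)
  idx2 <- setdiff(seq_len(n), idx1)          # D_l2: used for the penalized fit (S4)

  # S2: marginal kernel density estimates on D_l1, separately per class
  dens_f <- lapply(seq_len(p), function(j) density(X[idx1, j][y[idx1] == 1]))
  dens_g <- lapply(seq_len(p), function(j) density(X[idx1, j][y[idx1] == 0]))

  # S3: transformed features Z_ij = log f_j(X_ij) - log g_j(X_ij)
  transform <- function(Xnew) {
    Z <- sapply(seq_len(p), function(j) {
      fj <- approx(dens_f[[j]]$x, dens_f[[j]]$y, xout = Xnew[, j], rule = 2)$y
      gj <- approx(dens_g[[j]]$x, dens_g[[j]]$y, xout = Xnew[, j], rule = 2)$y
      log(pmax(fj, eps)) - log(pmax(gj, eps)) # winsorize small density values
    })
    matrix(Z, nrow = nrow(Xnew))
  }

  # S4: L1-penalized logistic regression on D_l2, penalty chosen by cross validation
  cvfit <- cv.glmnet(transform(X[idx2, , drop = FALSE]), y[idx2], family = "binomial")

  # Return a predictor of P(Y = 1 | X = x) for new observations (used in S5)
  function(Xnew) as.numeric(predict(cvfit, newx = transform(Xnew),
                                    s = "lambda.min", type = "response"))
}

The averaging in S5 then amounts to calling such predictors for l = 1, · · · , L and thresholding the mean predicted probability at 1/2.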

A few comments on the technical implementation are made as follows.

Remark 1

i). In S2, if an estimated marginal density is less than some threshold ε (say 10^{-2}), we set it to ε. This winsorization increases the stability of the transformations, because the estimated transformations log f_j and log g_j are unstable in regions where the true densities are small.

ii). In S4, we take penalized logistic regression, but any linear classifier can be used. For

example, support vector machine (SVM) with linear kernel is also a good choice.

iii). In S4, the L1 penalty (Tibshirani, 1996) was adopted since our primary interest is the

classification error. We can also apply other penalty functions, such as SCAD (Fan and

Li, 2001), adaptive LASSO (Zou, 2006) and MCP (Zhang, 2010).

iv). In S5, the average predicted probability is taken as the final prediction. An alternative

approach is to make a decision on each random split, and listen to majority votes.

In S1, we split the data multiple times. The rationale behind multiple splitting lies in

the two-step prototype nature of FANS, which uses the first part of the data for marginal


nonparametric density estimates (in S2) and (a transformation of) the second part for penalized logistic regression (in S4). Multiple splitting and prediction averaging not only make

our procedure more robust against arbitrary assignments of data usage, but also make more

efficient use of limited data. We take L = 20 in our numerical studies. This choice reflects

our cluster’s node count. Readers with more computing resources can certainly use a larger L, but our experience is that more splits do not help universally in our

numerical examples. Also, we recommend a balanced assignment by switching the role of data

used for feature transformation and for feature selection, i.e., D_{2l} = (D_{(2l−1),2}, D_{(2l−1),1}) when D_{2l−1} = (D_{(2l−1),1}, D_{(2l−1),2}).

It is straightforward to derive a running time bound for our algorithm. Suppose splitting

has been done. In S2, we need to perform kernel density estimation for each variable, which costs O(n^2 p).^1 The transformations in S3 cost O(np). In S4, we call the R package glmnet

to implement penalized logistic regression, which employs the coordinate decent algorithm

for each penalty level. This step has a computational cost at most O(npT ), where T is the

number of penalty levels, i.e., the number of times the coordinate descent algorithm is run

(see Friedman et al. (2007) for a detailed analysis). The default setting is T = 100, though

we can set it to other constants. Therefore, a running time bound for the whole algorithm is

O(L(n^2 p + np + npT)) = O(Lnp(n + T)).

The above bound does not look particularly interesting. However, smart implementation

of the FANS procedure can fully unleash the potential of our algorithm. Indeed, not only

the L repetitions, but also the marginal density estimates in S2 can be done via parallel

computing. Suppose L is the number of available nodes, and each node has N ≥ n/T CPU cores. This assumption is reasonable because T = 100 by default, N = 8 for

our implementation, and sample sizes n for many applications are less than a multiple of

TN . Under this assumption, the L predicted probabilities calculations can be carried out

simultaneously and the results are combined later in S5. Moreover in S2, the running time

bound becomes O(n^2 p/N). Hence, a bound for the whole algorithm is O(npT),

which is the same as for penalized logistic regression. The exciting message here is that,

by leveraging modern computer architecture, we are able to implement our nonparametric

classification rule FANS within running time at the order of a parametric method. The

computation times for various simulation setups are reported in Table 2, where the first

column reports results when only the L repetitions are parallelized, and the second column reports the improvement when the marginal density estimates in S2 are also parallelized within each node.

^1 Approximate kernel density estimates can be computed faster; see, e.g., Raykar et al. (2010).
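On a single multicore machine, the same structure can be sketched with the base parallel package. This is only an illustration (the actual implementation distributes work across cluster nodes), and fans_one_split, Xtrain, ytrain and Xtest are placeholder names from the sketch above; note that mclapply relies on forking and is not available on Windows.

library(parallel)

L <- 20
# Compute the L split-wise predictors in parallel (one FANS split per core)
preds <- mclapply(seq_len(L),
                  function(l) fans_one_split(Xtrain, ytrain),
                  mc.cores = min(L, detectCores()))

# S5: average the predicted probabilities over splits and threshold at 1/2
prob <- rowMeans(sapply(preds, function(f) f(Xtest)))
yhat <- as.integer(prob >= 0.5)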


2.2 Augmenting Linear Features

As we argued in the introduction, features with no marginal discrimination power make no contribution in FANS. One remedy is to run (in S4) the penalized logistic regression using

both the transformed features and the original features, which amounts to modeling the log

odds ratio by

$$ \beta_0 + \beta_1 \log \frac{f_1(x_1)}{g_1(x_1)} + \cdots + \beta_p \log \frac{f_p(x_p)}{g_p(x_p)} + \beta_{p+1} x_1 + \cdots + \beta_{2p} x_p. $$

This variant of FANS is named FANS2, and it allows features with no marginal power to enter

the model in a linear fashion. FANS2 augments the original features with the log-likelihood

ratios and helps when a linear decision boundary separates data reasonably well.
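In terms of the sketch given for FANS above, FANS2 only changes the design matrix passed to the penalized fit: the transformed features are concatenated with the original measurements. Objects such as transform, idx2, X and y below are the hypothetical names used in that sketch.

# FANS2: augment the transformed features with the original ones before S4
Zaug  <- cbind(transform(X[idx2, , drop = FALSE]), X[idx2, , drop = FALSE])
cvfit <- cv.glmnet(Zaug, y[idx2], family = "binomial")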

3 Numerical Studies

3.1 Simulation

In simulation studies, FANS and FANS2 are compared with competing methods: penalized

logistic regression (PLR, Friedman et al. (2010)), penalized additive logistic regression models

(penGAM, Meier et al. (2009)), support vector machine (SVM), regularized optimal affine

discriminant (ROAD, Fan et al. (2012)), linear discriminant analysis (LDA), Naive Bayes

(NB) and feature annealed independence rule (FAIR, Fan and Fan (2008)).

In all simulation settings, we set p = 1000 and training and testing data sample sizes

of each class to be 300. Five-fold cross-validation is conducted when needed, and we repeat

each setting 50 times (the relatively small number of replications is due to the intensive computation time of penGAM; cf. Table 2). Table 1 summarizes the median test errors for each

method along with the corresponding standard errors. This table omits Fisher’s classifier

(using pseudo inverse for sample covariance matrix), because it gives a test error around 50%,

equivalent to random guessing.

Example 1 We consider two-class Gaussian settings where Σ_{ii} = 1 for all i = 1, · · · , p and Σ_{ij} = ρ^{|i−j|}, with µ_1 = 0_{1000} and µ_2 = (1_{10}^T, 0_{990}^T)^T, in which 1_d is a length-d vector with all entries 1 and 0_d is a length-d vector with all entries 0. Two different correlations, ρ = 0 and ρ = 0.5, are investigated.

This is the classical LDA setting. In view of the linear optimal decision boundary, the

nonparametric transformations in FANS are not necessary. Table 1 indicates some efficiency loss (though not much) due to the more complex model FANS. However, by including the original

features, FANS2 is comparable to the methods (e.g., PLR and ROAD) which learn linear


boundaries of original features. In other words, the price to pay for using the unnecessarily

more complex method FANS (FANS2) is small in terms of the classification error.

An interesting observation is that penGAM, which is based on a more general model class

than FANS and FANS2, performs worse than our new methods. This is also expected as

the complex parameter space considered by penGAM is unnecessary in view of linear optimal

decision boundary. Surprisingly, SVM performs poorly (even worse than NB), especially when

all features are independent.

Example 2 The same settings as Example 1 except the common covariance matrix is an equal

correlation matrix; correlations ρ = 0.5 and ρ = 0.9 are investigated.

As in Example 1, FANS and FANS2 have performance comparable to PLR and

ROAD. Although FAIR works very well in Example 1, where the features are independent (or

nearly independent), it fails badly when there is significant global pairwise correlation. Similar

observations also hold for NB. This example shows that ignoring correlation among features can

lead to significant loss of information and deterioration in the classification error.

Example 3 One class follows a multivariate Gaussian distribution, and the other a mixture

of two multivariate Gaussian distributions. Precisely,

Class 0: N((3 × 1_{10}^T, 0_{p−10}^T)^T, Σ_p);

Class 1: 0.5 × N(0_p, I_p) + 0.5 × N((6 × 1_{10}^T, 0_{p−10}^T)^T, Σ_p),

where Σ_{ii} = 1 and Σ_{ij} = ρ for i ≠ j. Correlations ρ = 0 and ρ = 0.5 are considered.

In this example, Class 0 and Class 1 have the same mean, so the differences are in higher

order moments. Table 1 shows that all methods based on linear boundary perform like random

guessing, because the optimal decision boundary is highly nonlinear. penGAM is comparable

to FANS and FANS2, but SVM cannot capture the oracle decision boundary well even if a

nonlinear kernel is applied.

Example 4 Two classes follow uniform distributions,

Class 0: Unif (A),

Class 1: Unif (B\A),

where A = {x ∈ R^p : ‖x‖_2 ≤ 1} and B = [−1, 1]^p.

Clearly, the oracle decision boundary is {x ∈ R^p : ‖x‖_2 = 1}. Again, FANS and FANS2

capture this simple boundary well while the linear-boundary based methods fail to do so.
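For readers who wish to reproduce this geometry, one possible way to simulate Example 4 in R is sketched below (function names are ours): Class 0 is drawn uniformly on the unit ball via the standard radius transform, and Class 1 by rejection from the cube, which is cheap here because for large p almost no cube draw falls inside the ball.

# Uniform draws on the unit ball A = {x : ||x||_2 <= 1}
runif_ball <- function(n, p) {
  G <- matrix(rnorm(n * p), n, p)
  U <- runif(n)^(1 / p)                    # radius with density proportional to r^(p-1)
  G / sqrt(rowSums(G^2)) * U               # uniform direction times radius
}

# Uniform draws on B \ A with B = [-1, 1]^p, by rejecting cube points inside A
runif_cube_minus_ball <- function(n, p) {
  X <- matrix(runif(n * p, -1, 1), n, p)
  inside <- rowSums(X^2) <= 1
  while (any(inside)) {                    # re-draw the (rare) points landing in A
    X[inside, ] <- matrix(runif(sum(inside) * p, -1, 1), sum(inside), p)
    inside <- rowSums(X^2) <= 1
  }
  X
}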

Computation times (in seconds) for various classification algorithms are reported in Table

2. FANS is extremely fast thanks to parallel computing. While penGAM performs similarly

to FANS in the simulation examples, its computation cost is much higher. The similarity


in performance is due to the abundance of training examples. We will demonstrate with an

email spam classification example that penGAM fails to deliver satisfactory results on small

samples.

Table 1: Median test error (in percentage) for the simulation examples. Standard errors are

in the parentheses.

Ex(ρ) FANS FANS2 ROAD PLR penGAM NB FAIR SVM

1(0) 6.8(1.1) 6.2(1.2) 6.0(1.3) 6.5(1.2) 6.6(1.1) 11.2(1.4) 5.7(1.0) 13.2(1.5)

1(0.5) 16.5(1.7) 16.2(1.8) 16.5(5.3) 15.9(1.7) 16.9(1.6) 20.6(1.7) 17.2(1.6) 22.5(1.8)

2(0.5) 4.2(0.9) 2.0(0.6) 2.0(0.6) 2.5(0.6) 3.7(0.9) 43.5(11.1) 25.3(1.6) 5.3(1.1)

2(0.9) 3.1(1.1) 0.0(0.0) 0.0(0.0) 0.0(0.0) 0.2(1.4) 46.8(8.8) 30.2(1.9) 0.0(0.1)

3(0) 0.0(0.0) 0.0(0.0) 49.6(2.4) 50.0(1.3) 0.0(0.1) 50.4(2.2) 50.2(2.1) 31.8(2.4)

3(0.5) 3.4(0.7) 3.4(0.7) 49.3(2.4) 50.0(1.3) 3.7(0.8) 50.0(2.1) 50.2(2.0) 19.8(2.4)

4 0.0(0.0) 0.0(0.0) 28.2(1.8) 50.0(10.7) 0.0(0.0) 41.0(1.1) 34.6(1.4) 0.0(0.0)

Table 2: Computation time (in seconds) comparison for FANS, SVM, ROAD and penGAM.

The parallel computing technique is applied. Standard errors are in the parentheses.

Ex(ρ) FANS FANS(para) SVM ROAD penGAM

1(0) 12.0(2.6) 3.8(0.2) 59.4(12.8) 99.1(98.2) 243.7(151.8)

1(0.5) 12.7(2.1) 3.5(0.2) 81.3(19.2) 100.7(89.3) 325.8(194.3)

2(0.5) 16.0(3.1) 4.0(0.2) 77.6(18.1) 106.8(90.7) 978.0(685.7)

2(0.9) 22.0(4.6) 4.5(0.3) 75.7(17.8) 98.3(83.9) 3451.1(3040.2)

3(0) 12.1(2.1) 3.4(0.2) 152.1(27.4) 96.3(68.8) 254.6(130.0)

3(0.5) 11.9(2.0) 3.4(0.2) 342.1(58.0) 95.9(74.8) 298.7(167.4)

4 22.4(3.9) 6.6(0.4) 264.3(45.0) 75.1(54.0) 4811.9(3991.7)

3.2 Real Data Analysis

We study two real examples, and compare FANS(FANS2) with competing methods.


3.2.1 Email Spam Classification

First, we investigate a benchmark email spam data set. This data set has been studied by

Hastie et al. (2009) among others to demonstrate the power of additive logistic regression

models. There are a total of n = 4,601 observations with p = 57 numeric attributes. The

attributes are, for instance, the percentage of specific words or characters in an email, the

average and maximum run lengths of upper case letters, and the total number of such letters.

To show suitable application domains of FANS and FANS2, we vary the training proportion,

from 5%, 10%, 20%, · · · , up to 80% of the data, assigning the rest as the test set. Splits are repeated 100 times and we report the median classification errors.
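This evaluation protocol can be written compactly in R. The sketch below uses placeholder objects spam_x, spam_y and a generic fit-and-test routine test_error(), since the individual classifiers are fit as described above.

props <- c(0.05, seq(0.1, 0.8, by = 0.1))
median_errs <- sapply(props, function(pr) {
  errs <- replicate(100, {
    n  <- nrow(spam_x)
    tr <- sample(n, floor(pr * n))              # random training split
    test_error(spam_x[tr, ], spam_y[tr],        # fit on the training part,
               spam_x[-tr, ], spam_y[-tr])      # evaluate on the held-out part
  })
  median(errs)
})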

Figure 2 and Table 3 summarize the results. First, we notice that FANS and FANS2

are very competitive when training sample sizes are small. As the training sample size increases, SVM becomes comparable to FANS2 and slightly better than FANS. In general, these

three methods dominate throughout different training proportions. The more complex model

penGAM failed to yield classifiers when the training data proportion is less than 30%, due to the difficulty of matrix inversion with the spline basis functions. For larger training samples,

penGAM performs better than linear decision rules; however, it is not as competitive as either

FANS or FANS2. Also interestingly, Naive Bayes (NB) is the favored method given the smallest training sample (5%), but its test error remains almost unchanged when the sample size

increases. In other words, NB’s independence assumption allows good training given very few

data points, but it cannot benefit from larger samples due to severe model bias.

Table 3: Median classification error (in percentage) on e-mail spam data when the size of the

training data varies. Standard errors are in the parentheses.

% FANS FANS2 ROAD PLR penGAM LDA NB FAIR SVM

5 11.1(2.6) 10.5(1.1) 13.6(0.9) 13.5(1.7) - 13.6(1.1) 10.5(5.0) 15.6(1.7) 11.2(0.8)

10 8.7(2.4) 8.5(0.9) 11.3(0.8) 10.5(1.1) - 11.3(0.9) 10.7(4.2) 13.5(0.9) 9.4(0.7)

20 8.0(2.1) 7.7(0.7) 10.6(0.6) 9.0(0.8) - 10.3(0.6) 10.7(5.3) 12.4(0.7) 8.1(0.7)

30 7.8(1.7) 7.4(0.5) 10.3(0.4) 8.9(0.6) 9.2(0.6) 10.1(0.5) 10.7(4.0) 11.7(0.4) 7.4(0.6)

40 7.2(2.2) 6.9(0.5) 10.1(0.5) 9.0(0.6) 8.6(0.5) 10.0(0.4) 10.5(5.1) 11.5(0.6) 7.0(0.5)

50 7.4(2.2) 7.0(0.5) 9.9(0.5) 8.5(0.6) 8.3(0.5) 9.9(0.4) 10.7(4.1) 11.8(0.6) 6.9(0.5)

60 7.4(2.2) 6.8(0.5) 9.8(0.6) 9.3(0.6) 7.8(0.6) 9.5(0.5) 10.6(4.8) 11.8(0.7) 6.5(0.6)

70 7.2(1.6) 6.4(0.6) 9.5(0.7) 9.2(0.7) 7.4(0.7) 9.4(0.6) 10.5(4.6) 11.4(0.7) 6.4(0.7)

80 6.9(1.6) 6.3(0.7) 9.4(0.6) 9.3(0.9) 7.4(0.8) 9.2(0.6) 10.4(4.7) 11.4(0.8) 6.3(0.9)


Figure 2: The median test classification error for the spam data set using various proportions

of the data as training sets for different classification methods.

[Plot: median test classification error versus proportion of training data, with curves for PLR, FANS, FANS2, SVM, ROAD, LDA, NB, FAIR, and penGAM.]

Table 4: Classification error and number of selected genes on lung cancer data.

FANS FANS2 ROAD PLR penGAM FAIR NB

Training Error 0 0 1 0 0 0 6

Testing Error 0 0 1 6 2 7 36

No. of selected genes 52 52 52 15 16 54 12533

3.2.2 Lung Cancer Classification

We now evaluate the newly proposed classifiers on a popular gene expression data set “Lung

Cancer” (Gordon et al., 2002), which comes with predetermined, separate training and test

sets. It contains p = 12,533 genes for n_0 = 16 adenocarcinoma (ADCA) and n_1 = 16

mesothelioma training vectors, along with 134 ADCA and 15 mesothelioma test vectors.

Following Dudoit et al. (2002), Fan and Fan (2008), and Fan et al. (2012), we standardized

each sample to zero mean and unit variance. The classification results for FANS, FANS2,

ROAD, penGAM, FAIR and NB are summarized in Table 4. FANS and FANS2 achieve zero test

classification error, while the other methods fail to do so.


4 Theoretical Results

In this section, an oracle inequality regarding the excess risk is derived for FANS. Denote

by f = (f1, · · · , fp)T and g = (g1, · · · , gp)T vectors of marginal densities of each class with

f_0 = (f_{0,1}, · · · , f_{0,p})^T and g_0 = (g_{0,1}, · · · , g_{0,p})^T being the true densities. Let {(X_i, Y_i)}_{i=1}^n be i.i.d. copies of (X, Y), and let the regression function be modeled by

$$ P(Y_1 = 1 \mid X_1) = \frac{1}{1 + \exp(-m(Z_1))}, $$

where Z1 = (Z11, · · · , Z1p)T , each Z1j = Z1j(X1) = log fj(X1j) − log gj(X1j), and m(·) is a

generic function in some function class M including the linear functions. Now, let Q = {q =

(m, f, g)} be the parameter space of interest, with the constraints on m, f and g to be specified

later. The loss function we consider is

ρ(q) = ρ(m,f , g) = ρq(X1, Y1) = −Y1m(Z1) + log(1 + exp[m(Z1)]) .

Let m0 = arg minm∈M Pρ(m,f 0, g0). Then the target parameter is q∗ = (m0,f 0, g0). We use

a working model with mβ(Z1) = βTZ1 to approximate m0. Under this working model, for a

given parameter q = (mβ,f , g), let

$$ \pi_q(X_1) = P(Y_1 = 1 \mid X_1) = \frac{1}{1 + \exp(-\beta^T Z_1)}. $$

With this linear approximation, the loss function is the logistic loss

ρ(q) = ρq(X1, Y1) = −Y1βTZ1 + log(1 + exp[βTZ1]) .

Denote the empirical loss by $P_n\rho(q) = \frac{1}{n}\sum_{i=1}^{n} \rho_q(X_i, Y_i)$ and the expected loss by $P\rho(q) = E\rho_q(X, Y)$. In the following, we take M as linear combinations of the transformed features

so that m0 = mβ0, where

$$ \beta_0 = \arg\min_{\beta \in \mathbb{R}^p} P\rho(m_\beta, f_0, g_0). $$

In other words, q0 = (mβ0,f 0, g0) = q∗. Hence, the excess risk for a parameter q is

$$ E(q) = P[\rho(q) - \rho(q^*)] = P[\rho(q) - \rho(q_0)]. \qquad (4.4) $$

As described in Section 2, the densities f_0 and g_0 are unavailable and must be estimated. Different from the numerical implementation, we now proceed with a modified sampling scheme and procedure that are more friendly for theoretical derivation. Suppose we have labeled samples {X_1^+, · · · , X_{n_1}^+} (used to learn f_0) and {X_1^-, · · · , X_{n_1}^-} (used to learn g_0; the theory carries over for different sample sizes), in addition to an i.i.d. sample {(X_1, Y_1), · · · , (X_n, Y_n)}


(used to select features). Moreover, suppose {(X_1, Y_1), · · · , (X_n, Y_n)} is independent of {X_1^+, · · · , X_{n_1}^+} and {X_1^-, · · · , X_{n_1}^-}. A simple way to comprehend the above theoretical setup is that a sample of size 2n_1 + n has been split into three groups. The notations P and E refer to the random couple (X, Y). We use P^n to denote the probability measure induced by the sample {(X_1, Y_1), · · · , (X_n, Y_n)}, and P^+ and P^- for the probability measures induced by the labeled samples {X_1^+, · · · , X_{n_1}^+} and {X_1^-, · · · , X_{n_1}^-}. The density estimates f = (f_1, · · · , f_p)^T and g = (g_1, · · · , g_p)^T are based on the samples {X_1^+, · · · , X_{n_1}^+} and {X_1^-, · · · , X_{n_1}^-}:

$$ f_j(x) = \frac{1}{n_1 h}\sum_{i=1}^{n_1} K\!\left(\frac{X_{ij}^+ - x}{h}\right) \quad \text{and} \quad g_j(x) = \frac{1}{n_1 h}\sum_{i=1}^{n_1} K\!\left(\frac{X_{ij}^- - x}{h}\right), \quad j = 1, \cdots, p, $$

in which K(·) is a kernel function and h is the bandwidth.
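As a concrete reading of the estimator above, a two-line R version with a Gaussian kernel (the theory allows a general kernel K) and the bandwidth h = (log n_1 / n_1)^{1/5} used in Lemma 2 below is:

# \hat f_j(x) = (n_1 h)^{-1} sum_i K((X_ij^+ - x) / h), with K the Gaussian kernel
kde_at <- function(x, xj, h = (log(length(xj)) / length(xj))^(1 / 5)) {
  sapply(x, function(x0) mean(dnorm((xj - x0) / h)) / h)
}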

Then, with these estimated marginal densities, we have an “oracle estimate” q_1 = (β_1, f, g), where

$$ \beta_1 = \arg\min_{\beta \in \mathbb{R}^p} P\rho(m_\beta, f, g). $$

It is the oracle given the marginal density estimates f and g, and is estimated in FANS by

$$ \hat\beta_1 = \arg\min_{\beta \in \mathbb{R}^p} \big\{ P_n\rho(m_\beta, f, g) + \lambda\|\beta\|_1 \big\}. $$

Let q̂1 = (m_{β̂1}, f, g). Our goal is to control the excess risk E(q̂1), where E is defined by (4.4).

In the following, we introduce technical conditions for this task.

Let Z^0 be the n × p design matrix consisting of transformed covariates based on the true densities f_0 and g_0; that is, Z^0_{ij} = log f_{0,j}(X_{ij}) − log g_{0,j}(X_{ij}) for i = 1, · · · , n and j = 1, · · · , p. In addition, let Z^0 = (Z^0_1, Z^0_2, . . . , Z^0_n)^T. Also, denote by |S| the cardinality of a set S, and let ‖D‖_max = max_{ij} |D_{ij}| for any matrix D with elements D_{ij}.

Assumption 1 (Compatibility Condition) The matrix Z^0 satisfies the compatibility condition with a compatibility constant φ(·) if, for every subset S ⊂ {1, · · · , p}, there exists a constant φ(S) such that for all β ∈ R^p satisfying ‖β_{S^c}‖_1 ≤ 3‖β_S‖_1, it holds that

$$ \|\beta_S\|_1^2 \le \frac{|S|\,\|Z^0\beta\|_2^2}{n\,\phi^2(S)}. $$

A direct application of Corollary 6.8 in Buhlmann and van de Geer (2011) leads to a compatibility condition on the estimated transform matrix Z, in which Z_{ij} = log f_j(X_{ij}) − log g_j(X_{ij}).

Lemma 1 Denote by E = Z − Z^0 the estimation error matrix of Z^0. If the compatibility condition is satisfied for Z^0 with a compatibility constant φ(·), and the following inequalities hold,

$$ \frac{32\|E\|_{\max}|S|}{\phi(S)^2} \le 1, \quad \text{for every } S \subset \{1, \cdots, p\}, \qquad (4.5) $$

then the compatibility condition holds for Z with a new compatibility constant φ_1(·) ≥ φ(·)/√2.


The Compatibility Condition can be interpreted as a condition that bounds the restricted

eigenvalues. The irrepresentable condition (Zhao and Yu, 2006) and the Sparse Riesz Condition (SRC) (Zhang and Huang, 2008) are in similar spirits. Essentially, these conditions

avoid high correlation among subsets where signals are concentrated; such high correlation

may cause difficulty in parameter estimation and risk prediction.

To help theoretical derivation, we introduce two intermediate L0-penalized estimates.

Given the true densities f_0 and g_0, consider a penalized theoretical solution q^*_0 = (m_{β^*_0}, f_0, g_0), where

$$ \beta_0^* = \arg\min_{\beta \in \mathbb{R}^p} \left\{ 3P\rho(m_\beta, f_0, g_0) + 2H\!\left(\frac{4\lambda\sqrt{s_\beta}}{\phi(S_\beta)}\right) \right\}, \qquad (4.6) $$

in which H(·) is a strictly convex function on [0, ∞) with H(0) = 0, s_β = |S_β| is the cardinality of S_β = {j : β_j ≠ 0}, and φ(·) is the compatibility constant for Z^0. Throughout the paper, we consider the specific quadratic function^2 H(v) = v^2/(4c), whose convex conjugate is G(u) = sup_v{uv − H(v)} = cu^2. Then, equation (4.6) defines an L0-penalized oracle:

$$ \beta_0^* = \arg\min_{\beta \in \mathbb{R}^p} \left\{ 3P\rho(m_\beta, f_0, g_0) + \frac{8\lambda^2 s_\beta}{c\,\phi^2(S_\beta)} \right\}. \qquad (4.7) $$

Similarly, with the density estimate vectors f and g, we define an L0-penalized oracle estimate q^*_1 = (m_{β^*_1}, f, g), where

$$ \beta_1^* = \arg\min_{\beta \in \mathbb{R}^p} \left\{ 3P\rho(m_\beta, f, g) + \frac{8\lambda^2 s_\beta}{c\,\phi_1^2(S_\beta)} \right\}. \qquad (4.8) $$

To study the excess risk E(q̂1), we consider its relationship with E(q^*_1) and E(q^*_0).

Assumption 2 (Uniform Margin Condition) There exists η > 0 such that for all (mβ,f , g)

satisfying ‖β − β_0‖_∞ + max_{1≤j≤p} ‖f_j − f_{0,j}‖_∞ + max_{1≤j≤p} ‖g_j − g_{0,j}‖_∞ ≤ 2η, we have

$$ E(m_\beta, f, g) \ge c\|\beta - \beta_0\|_2^2, \qquad (4.9) $$

where c is the positive constant in (4.7).

The uniform margin condition is related to the one defined in Tsybakov (2004) and van de

Geer (2008). It is a type of “identifiability” condition. Basically, near the target parameter

q0 = (mβ0,f 0, g0), the functional value needs to be sufficiently different from the value on

q0 to enable enough separability of parameters. Note that we impose the uniform margin

condition in both the neighborhood of the parametric component β0 and the nonparametric

^2 The following theoretical results can be derived for a generic strictly convex function H(·) along the same lines.


components f 0 and g0, because we need to estimate the densities, in addition to the parametric

part. A related concept in binary classification is called “Margin Assumption”, which was first

introduced in Polonik (1995) for densities.

To study the relationship between E(q̂1) and E(q^*_1), we define

$$ v_n(\beta) = (P_n - P)\rho(m_\beta, f, g) \quad \text{and} \quad W_M = \sup_{\|\beta - \beta_1^*\| \le M} |v_n(\beta) - v_n(\beta_1^*)|. $$

Denote by

$$ 2\varepsilon^* = 3E(m_{\beta_1^*}, f, g) + \frac{8\lambda^2 s_{\beta_1^*}}{c\,\phi_1^2(S_{\beta_1^*})}. $$

Set M^* = ε^*/λ_0 (with λ_0 to be specified in Theorem 1) and

$$ J_1 = \{W_{M^*} \le \lambda_0 M^*\} = \{W_{M^*} \le \varepsilon^*\}. $$

The idea here is to choose λ0 such that the event J1 has high probability.

A few more notations are introduced to facilitate the discussion. Let τ > 0. Denote by ⌊τ⌋ the largest integer strictly less than τ. For any x, x′ ∈ R and any ⌊τ⌋ times continuously differentiable real-valued function u on R, we denote by u_x its Taylor polynomial of degree ⌊τ⌋ at the point x:

$$ u_x(x') = \sum_{|s| \le \lfloor \tau \rfloor} \frac{(x' - x)^s}{s!}\, D^s u(x). $$

For L > 0, the (τ, L, [−1, 1])-Holder class of functions, denoted by Σ(τ, L, [−1, 1]), is the set of functions u : R → R that are ⌊τ⌋ times continuously differentiable and satisfy, for any x, x′ ∈ [−1, 1], the inequality

$$ |u(x') - u_x(x')| \le L|x - x'|^{\tau}. $$

The (τ, L, [−1, 1])-Holder class of densities is defined as

$$ P\Sigma(\tau, L, [-1, 1]) = \left\{ p : p \ge 0, \int p = 1, \ p \in \Sigma(\tau, L, [-1, 1]) \right\}. $$

Assumption 3 Assume that β_1 is in the interior of some compact set C^p. There exists an ε_0 ∈ (0, 1) such that for all β ∈ C^p and f, g ∈ PΣ(2, L, [−1, 1]), ε_0 < π_{(m_β, f, g)}(·) < 1 − ε_0.

Assumption 4 ‖Z0‖max ≤ K for some absolute constant K > 0, and ‖β0‖∞ ≤ C1 for some

absolute constant C1 > 0.

Assumption 5 The penalty level λ is in the range (8λ_0, Lλ_0) for some L > 8. Moreover, the following holds:

$$ \frac{8KL^2(e^{\eta}/\varepsilon_0 + 1)^2\,\lambda_0\, s_{\beta_1^*}}{\eta\,\phi_1^2(S_{\beta_1^*})} \le 1, $$

where η is as in the uniform margin condition.


Assumption 3 is a regularity condition on the probability of the event that the observation

belongs to class 1. Since the FANS estimator is based on the estimated densities, we impose

the constraints in a neighborhood of the oracle estimate β1 (when using f and g). Assumption

4 bounds the maximum absolute entry of the design matrix as well as the maximum absolute

true regression coefficient. Assumption 5 posits a proper range of the penalty parameter λ to

guarantee that the penalized estimator mimics the un-penalized oracle.

Assumption 6 Suppose the feature measurement X has a compact support [−1, 1]p, and

f0,j, g0,j ∈ PΣ(2, L, [−1, 1]) for all j = 1, · · · , p, where PΣ denotes a Holder class of densities.

Assumption 7 Suppose there exists ε_l > 0 such that for all j = 1, · · · , p, ε_l ≤ f_{0,j}, g_{0,j} ≤ ε_l^{-1}. Also, we truncate the estimates f_j and g_j at ε_l and ε_l^{-1}.

Assumption 8

$$ n_1^{7/20 - 3\alpha/4}\,(\log(3p))^{3/4}\,(\log n_1)^{1/10} = o(1), $$

and

$$ n_1^{1/10 - \alpha/2}\,(\log(3p))^{1/2}\,(\log n_1)^{2/5} = o(1), $$

for some constant α > 7/15.

Assumption 6 imposes constraints on the support of X and smoothness condition on the

true densities f 0 and g0, which help control the estimation error incurred by the nonparametric

density estimates. Assumption 7 assumes that the marginal densities and the kernel are strictly

positive on [−1, 1]p. Assumption 8 puts a restriction on the growth of the dimensionality p in

terms of sample size n1.

We now provide a lemma to bound the uniform deviation between fj and f0,j for j =

1, · · · , p.

Lemma 2 Under Assumptions 6-8, taking the bandwidth $h = \left(\frac{\log n_1}{n_1}\right)^{1/5}$, for any δ_1 > 0 there exists N^*_1 such that if n_1 ≥ N^*_1,

$$ P^{+-}\!\left(\max_{1\le j\le p} \|f_j - f_{0,j}\|_\infty \ge m\right) \le \delta_1, \quad \text{and} \quad P^{+-}\!\left(\max_{1\le j\le p} \|g_j - g_{0,j}\|_\infty \ge m\right) \le \delta_1, $$

for $m = C_2\sqrt{\frac{2\log(3p/\delta_1)}{n_1^{1-\alpha}}}$, where C_2 is an absolute constant.

Denote by

$$ J_2 = \left\{ \max_{1\le j\le p} \|f_j - f_{0,j}\|_\infty \le \eta/2,\ \max_{1\le j\le p} \|g_j - g_{0,j}\|_\infty \le \eta/2 \right\}, $$

where η is the constant in the uniform margin condition. It is straightforward from Lemma 2 that

$$ P^{+-}(J_2) \ge 1 - 6p\exp\!\left(-\eta^2 n_1^{1-\alpha}/(4C_2^2)\right). $$

The next lemma can be derived similarly to Lemma 2, so its proof is omitted.


Lemma 3 Under Assumptions 6-8, taking the bandwidth $h = \left(\frac{\log n_1}{n_1}\right)^{1/5}$, for any δ > 0 there exists N^*_2 such that if n_1 ≥ N^*_2,

$$ P^{+-}(\|E\|_{\max} \ge m) \le \delta, $$

where E is the estimation error matrix defined in Lemma 1 and $m = C_3\sqrt{\frac{2\log(3p/\delta)}{n_1^{1-\alpha}}}$.

Corollary 1 Under Assumptions 6-8, take the bandwidth $h = \left(\frac{\log n_1}{n_1}\right)^{1/5}$. On the event

$$ J_3 = \left\{ \|E\|_{\max} \le C_3\sqrt{\frac{2\log(3p/\delta)}{n_1^{1-\alpha}}} \right\} $$

(regarding the labeled samples) with P^{+-}(J_3) > 1 − δ, there exist N^*_2 ∈ N and C_4 > 0 such that if n_1 ≥ N^*_2, |F_{kl}| = |Z_{1k} − Z^0_{1k}| · |Z_{1l} − Z^0_{1l}| ≤ C_4 b_{n_1} uniformly for k, l = 1, · · · , p, where b_{n_1} = 2 log(3p/δ)/n_1^{1−α}.

Denote by

$$ J_4 = \left\{ 32\|E\|_{\max} \max_{S\subset\{1,\ldots,p\}} \frac{|S|}{\phi(S)^2} \le 1 \right\}. $$

On the event J_4, the inequality (4.5) holds, and the compatibility condition is satisfied for Z (by Lemma 1). Moreover, it can be derived from Lemma 3, by taking a specific δ, that

$$ P^{+-}(J_4) \ge 1 - 3p\exp\{-n_1^{1-\alpha}/(2048\,C_3^2 A_p^2)\}, $$

where A_p = max_{S⊂{1,...,p}} |S|/φ(S)^2. Combining Lemma 2 and the uniform margin condition,

we see that for given estimators f and g, the margin condition holds for the estimated transformed matrix Z involved in the FANS estimator β̂1. Following similar lines as in van de Geer (2008) delivers the following theorem, so a formal proof is omitted.

Theorem 1 (Oracle Inequality) In addition to Assumptions 1-8, assume ‖m_{β^*_1} − m_{β_0}‖_∞ ≤ η/2 and E(m_{β^*_1}, f, g)/λ_0 ≤ η/4. Then on the event J_1 ∩ J_2 ∩ J_3 ∩ J_4, we have

$$ E(m_{\hat\beta_1}, f, g) + \lambda\|\hat\beta_1 - \beta_1^*\|_1 \le 6E(m_{\beta_1^*}, f, g) + \frac{16\lambda^2 s_{\beta_1^*}(e^{\eta}/\varepsilon_0 + 1)^2}{c\,\phi_1^2(S_{\beta_1^*})}. $$

In addition, when n_1 ≥ max(N^*_1, N^*_2) and under the normalization condition that ‖Z_{1j}‖_∞ ≤ 1 for all j, it holds that

$$ P(J_1 \cap J_2 \cap J_3 \cap J_4) \ge 1 - \exp(-t) - 6p\exp\{-\eta^2 n_1^{1-\alpha}/(4C_2^2)\} - \delta - 3p\exp\{-n_1^{1-\alpha}/(2048\,C_3^2 A_p^2)\}, $$

for

$$ \lambda_0 := 4\lambda^* + \frac{tK}{3n} + \sqrt{\frac{2t}{n}}\,(1 + 8\lambda^*), $$

where P is the probability with regard to all the samples and

$$ \lambda^* = \sqrt{\frac{2\log(2p)}{n}} + \frac{K\log(2p)}{3n}. $$


Theorem 1 shows that with high probability, the excess risk of the FANS estimator can be

controlled in terms of the excess risk of q∗1 when using the estimated density functions f and

g plus an explicit order term. Next, we will study the excess risk of q∗1.

Assumption 9 Let Z^0_1(β_1) be the subvector of Z^0_1 corresponding to the nonzero components of β_1, and let b_{n_1} = log(3p/δ_1)/n_1^{1−α}. Assume s_{β_1} ≤ a_{n_1} for some deterministic sequence {a_{n_1}}, and a_{n_1} · b_{n_1} = o(1). In addition, 0 < C_5 ≤ λ_{min}(P{Z^0_1(β_1)Z^0_1(β_1)^T}) for some absolute constant C_5.

Assumption 9 allows the number of nonzero elements of β1 to diverge at a slow rate with

n1. Also, it demands a lower bound of the restricted eigenvalue of the sub-matrix of Z0

corresponding to the nonzero components of β1.

Lemma 4 Let Q(β) = Pρ(m_β, f, g) + λ‖β‖_0, and let the minimum signal level be β_1 = min{|β_{1,j}| : j ∈ S_{β_1}}. Under Assumptions 3, 6, 7, 8 and 9, on the event J_3, there exists an N^*_3 such that, if n_1 ≥ N^*_3 and the penalty parameter λ < 0.5C_5ε_0(1 − ε_0)β_1^2, the L0-penalized solution coincides with the unpenalized version; that is, β^*_1 = β_1.

Theorem 2 (Oracle Inequality) In addition to Assumptions 1-9, suppose 4C_1C_4 s^2_{β_0} b_{n_1} ≤ λ_0η, the penalty parameter λ ∈ (8λ_0, min(Lλ_0, 0.5C_5ε_0(1 − ε_0) · min_{j:β_{1,j}≠0}(|β_{1,j}|))), where C_5 is defined in Assumption 9, and ‖m_{β^*_1} − m_{β_0}‖_∞ ≤ η/2. Taking the bandwidth $h = \left(\frac{\log n_1}{n_1}\right)^{1/5}$, on the event J_1 ∩ J_2 ∩ J_3 ∩ J_4 as in Theorem 1, we have

$$ E(m_{\beta_1^*}, f, g) \le C_1 C_4 s^2_{\beta_0} b_{n_1}. $$

Then, in view of Theorem 1, we have

$$ E(m_{\hat\beta_1}, f, g) \le \frac{16\lambda^2 s_{\beta_1^*}(e^{\eta}/\varepsilon_0 + 1)^2}{c\,\phi_1^2(S_{\beta_1^*})} + 6C_1 C_4 s^2_{\beta_0} b_{n_1}. $$

From Theorem 2, it is clear that the excess risk of the FANS estimator naturally decomposes into two parts. One part is due to the nonparametric density estimation, while the other is due to the regularized logistic regression on the estimated transformed covariates. When both the penalty parameter λ and the bandwidth h of the nonparametric density estimates f and g are chosen appropriately, the FANS estimator has a diminishing excess risk with high probability. Note that one can make λ explicit to obtain a bound on the excess risk in terms of the sample sizes n and n_1 and the dimensionality p.


5 Discussion

We propose a new two-step nonlinear rule FANS (and its variant FANS2) to tackle binary

classification problems in high-dimensional settings. FANS first augments the original feature

space by leveraging flexibility of nonparametric estimators, and then achieves feature selection

through regularization (penalization). It combines linearly the best univariate transforms that

essentially augment the original features for classification. Since nonparametric techniques are

only performed on each dimension, we enjoy a flexible decision boundary without suffering

from the curse of dimensionality. An array of simulation and real data examples, supported

by an efficient parallelized algorithm, demonstrate the competitive performance of the new

procedures.

A few extensions are worth further investigation. Beyond a specific procedure, FANS

establishes a general two-step classification framework. For the first step, one can use other

types of marginal density estimators, e.g., local polynomial density estimates. For the second

step, one might rely on other classification algorithms, e.g., the support vector machine, k-nearest neighbors, etc. Searching for the best two-step combination is an important but

difficult task, and we believe that the answer mainly depends on the data structures.

We can further augment the features by adding pairwise bivariate density ratios. These

bivariate densities can be estimated by the bivariate kernel density estimators. Alternatively,

we can restrict our attention to bivariate ratios of features selected by FANS. The latter has

significantly fewer features.

Dimensions of data sets (e.g., SNPs) in many contemporary applications could be in the millions. In such ultra-high dimensional scenarios, directly applying the FANS (FANS2) approach could cause problems due to the high computational complexity and instability of the estimation. It will be beneficial to have a prior step to reduce the dimensionality in the original

data. Notable works towards this effort on the theoretical front include Fan and Lv (2008),

which introduced the sure independence screening (SIS) property to screen out the marginally

unimportant variables. Subsequently, Fan et al. (2011) proposed nonparametric independence

screening (NIS), which is an extension of SIS to the additive models.

6 Appendix

The appendix contains technical proofs and Lemma 5.


Proof of Lemma 2 For any r, m > 0,

$$ \begin{aligned} P^{+-}\!\left(\max_{1\le j\le p} \|f_j - f_{0,j}\|_\infty \ge m\right) &\le e^{-rm}\, E^{+-} \exp\!\left(\max_{1\le j\le p} r\|f_j - f_{0,j}\|_\infty\right) \\ &= e^{-rm}\, E^{+-}\!\left(\max_{1\le j\le p} \exp\big(r\|f_j - f_{0,j}\|_\infty\big)\right) \\ &\le e^{-rm} \sum_{j=1}^{p} E^{+-}\!\left(\exp\big(r\|f_j - f_{0,j}\|_\infty\big)\right). \end{aligned} $$

Since we assumed that all f_j and f_{0,j} are uniformly bounded by ε_l^{-1}, ‖f_j − f_{0,j}‖_∞ is bounded by ε_l^{-1} for all j ∈ {1, · · · , p}. This, coupled with Lemma 1 in Tong (2013), which provides a high probability bound for ‖f_j − f_{0,j}‖_∞, gives rise to the following inequality:

$$ E^{+-}\exp\big(r\|f_j - f_{0,j}\|_\infty\big) \le \exp\!\left(r\sqrt{\frac{\log(n_1/\delta_2)}{n_1 h}}\right) + \exp(r\varepsilon_l^{-1})\cdot\delta_2, $$

where δ_2 plays the role of ε in Lemma 1 of Tong (2013) (taking the constant C = 1 for simplicity). Finding the optimal order for r does not seem feasible, so we plug in r = n_1^{1−α}m and δ_2 = exp(−rε_l^{-1}); then

$$ \begin{aligned} P^{+-}\!\Big(\max_{1\le j\le p} \|f_j - f_{0,j}\|_\infty \ge m\Big) &\le p\exp(-n_1^{1-\alpha}m^2)\left[1 + \exp\!\left( n_1^{1-\alpha}m\sqrt{\frac{\log n_1}{n_1 h}} + \frac{n_1^{1-\alpha}m\,\varepsilon_l^{-1}}{n_1 h} \right)\right] \\ &\le p\exp(-n_1^{1-\alpha}m^2)\left[1 + \exp\!\left( \sqrt{2}\,n_1^{1-\alpha}m\sqrt{\frac{\log n_1}{n_1 h}} + \sqrt{2}\,n_1^{1-\alpha}m\sqrt{\frac{m\,\varepsilon_l^{-1}}{n_1^{\alpha}h}} \right)\right] \\ &\le p\exp(-n_1^{1-\alpha}m^2)\left\{1 + \exp\!\left[ \sqrt{2}\,n_1^{1-\alpha}m\Big(\frac{\log n_1}{n_1}\Big)^{2/5} + \sqrt{2}\,m^{3/2}\varepsilon_l^{-1/2}\, n_1^{11/10-3\alpha/2}(\log n_1)^{1/10} \right]\right\}, \end{aligned} $$

where in the last inequality we have used the bandwidth $h = \left(\frac{\log n_1}{n_1}\right)^{1/5}$.

The results are derived by taking $m = \sqrt{\frac{2\log(3p/\delta_1)}{n_1^{1-\alpha}}}$ (so that δ_1 = 3p exp(−n_1^{1−α}m^2)) and by invoking Assumption 8. Note that we need to introduce α > 0 because the consistency conditions do not hold for α = 0; in fact, we need at least α > 7/15. Under this assumption, there exists a positive integer N^*_1 such that if n_1 ≥ N^*_1,

$$ 1 + \exp\!\left[ 2^{5/4}\,\varepsilon_l^{-1/2}\, n_1^{7/20-3\alpha/4}\,(\log(3p/\delta_1))^{3/4}(\log n_1)^{1/10} + 2\, n_1^{1/10-\alpha/2}\,(\log(3p/\delta_1))^{1/2}(\log n_1)^{2/5} \right] \le 3. $$


Therefore, for n1 ≥ N∗1 ,

P+−(

max1≤j≤p

‖fj − f0,j‖∞ ≥ m

)≤ δ1, for m =

√2 log(3p/δ1)

n1−α1

.
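As a side note on the arithmetic behind the last two displays (a restatement of steps already used above, with no new assumptions): the bandwidth $h = (\log n_1/n_1)^{1/5}$ gives
\[
\frac{\log n_1}{n_1 h} = \Big(\frac{\log n_1}{n_1}\Big)^{4/5}, \qquad
n_1^{1-\alpha}\sqrt{\frac{1}{n_1^{\alpha} h}} = n_1^{11/10-3\alpha/2}(\log n_1)^{-1/10} \le n_1^{11/10-3\alpha/2}(\log n_1)^{1/10} \quad (n_1 \ge 3),
\]
and after substituting $m = \sqrt{2\log(3p/\delta_1)/n_1^{1-\alpha}}$, the $n_1$-exponents of the two terms inside the exponential become $1/10-\alpha/2$ and $7/20-3\alpha/4$. Both are negative precisely when $\alpha > 7/15$, the binding constraint being $7/20 - 3\alpha/4 < 0$, which is where this threshold comes from.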

Lemma 5 For any vector $\theta_0 = (\theta_{0,1},\ldots,\theta_{0,p})^T$, let $S_{\theta_0} = \{j : \theta_{0,j} \ne 0\}$, and let the minimum signal level be $\theta_0 = \min\{|\theta_{0,j}| : j \in S_{\theta_0}\}$. Let $g(\theta_j) = c_j(\theta_j - \theta_{0,j})^2 + \lambda\|\theta_j\|_0$, where $c_j > 0$. If $\lambda \le c_j\theta_0^2$, then $g(\theta_j)$ achieves its unique minimum at $\theta_j = \theta_{0,j}$.

Proof of Lemma 5 For $\theta_{0,j} = 0$, the result is obvious. For $\theta_{0,j} \ne 0$, we have $j \in S_{\theta_0}$ and
\begin{align*}
g(\theta_j) &\ge \lambda\|\theta_j\|_0 I(\theta_j \ne 0) + c_j(\theta_j - \theta_{0,j})^2 I(\theta_j = 0)\\
&= \lambda\|\theta_j\|_0 I(\theta_j \ne 0) + c_j\theta_{0,j}^2 I(\theta_j = 0).
\end{align*}
If $\lambda\|\theta_{0,j}\|_0 \le c_j\theta_0^2$, then
\[
g(\theta_j) \ge \lambda\|\theta_j\|_0 I(\theta_j \ne 0) + \lambda\|\theta_{0,j}\|_0 I(\theta_j = 0) = \lambda\|\theta_{0,j}\|_0.
\]
Since $g(\theta_{0,j}) = \lambda\|\theta_{0,j}\|_0$, the lemma follows.

Proof of Lemma 4 Denote $Q_0(\beta) = P\rho(m_\beta, f, g)$. Then we have $\beta_1 = \arg\min_{\beta\in\mathbb{R}^p} Q_0(\beta)$, so $\nabla Q_0(\beta_1) = 0$ and
\[
\nabla^2 Q_0(\beta) = P\{Z_1 Z_1^T \exp(Z_1^T\beta)(1 + \exp(Z_1^T\beta))^{-2}\} \ge \varepsilon_0(1-\varepsilon_0) P\{Z_1 Z_1^T\} \succ 0.
\]
By Taylor's expansion of $Q_0(\beta)$ at $\beta_1$,
\[
Q(\beta) = Q_0(\beta_1) + 0.5(\beta - \beta_1)^T \nabla^2 Q_0(\bar\beta)(\beta - \beta_1) + \lambda\|\beta\|_0, \qquad (6.10)
\]
where $\bar\beta$ lies between $\beta$ and $\beta_1$. Let $M = P\{Z_1(\beta_1)Z_1(\beta_1)^T\}$, where $Z_1(\beta_1)$ is the subvector of $Z_1$ corresponding to the nonzero components of $\beta_1$, and let $M^0 = P\{Z_1^0(\beta_1)Z_1^0(\beta_1)^T\}$, where $Z_1^0(\beta_1)$ is the subvector of $Z_1^0$ corresponding to the nonzero components of $\beta_1$. Let $F = M - M^0$ (a symmetric matrix). From the uniform deviation result of Lemma 3, with probability $1 - \delta$ regarding the labeled samples, there exists a constant $C_4 > 0$ such that $|F_{kl}| \le C_4 b_{n_1}$ uniformly for $k, l = 1,\cdots, s_{\beta_1}$, where $b_{n_1} = 2\log(3p/\delta)/n_1^{1-\alpha}$.

Hence, $\|F\|_2 \le \|F\|_F \le C_4 s_{\beta_1} b_{n_1} \le C_4 a_{n_1} b_{n_1}$. For any eigenvalue $\lambda(M)$, by the Bauer-Fike inequality (Bhatia, 1997), we have $\min_{1\le k\le s_{\beta_1}} |\lambda(M) - \lambda_k(M^0)| \le \|F\|_2 \le C_4 a_{n_1} b_{n_1}$, where $\lambda_k(A)$ denotes the $k$-th largest eigenvalue of $A$. In addition, in view of Assumption 9, there exists $k \in S_{\beta_1}$ such that
\[
\lambda_{\min}(M) \ge \lambda_k(M^0) - C_4 a_{n_1} b_{n_1} \ge \lambda_{\min}(M^0) - C_4 a_{n_1} b_{n_1} \ge C_5 - C_4 a_{n_1} b_{n_1}.
\]


Since $a_{n_1} b_{n_1} = o(1)$, there exists $N_3^*(\delta)$ such that when $n_1 > N_3^*(\delta)$, we have $\lambda_{\min}(M) > 0$.

Let $\beta_1^{(1)}$ be the subvector of $\beta_1$ consisting of its nonzero components. Then by (6.10) and Lemma 5, for each $j \in S_{\beta_1}$ with $\lambda < 0.5 C_5\varepsilon_0(1-\varepsilon_0)\beta_1^2$, we have
\begin{align*}
Q(\beta) &\ge Q_0(\beta_1) + 0.5(C_5 - C_4 a_{n_1} b_{n_1})\varepsilon_0(1-\varepsilon_0)\|\beta^{(1)} - \beta_1^{(1)}\|^2 + \lambda\|\beta\|_0\\
&\ge Q_0(\beta_1) + \sum_{j\in S_{\beta_1}}\Big\{0.5(C_5 - C_4 a_{n_1} b_{n_1})\varepsilon_0(1-\varepsilon_0)(\beta_j - \beta_{1,j})^2 + \lambda\|\beta_j\|_0\Big\}, \qquad (6.11)
\end{align*}
where $\beta_j$ and $\beta_{1,j}$ are the $j$-th components of $\beta$ and $\beta_1$, respectively. For $n_1 \ge N_3^*(\delta)$,
\[
Q(\beta) \ge Q_0(\beta_1) + \lambda\sum_{j\in S_{\beta_1}}\|\beta_{1,j}\|_0 = Q_0(\beta_1) + \lambda\|\beta_1\|_0.
\]
By (6.10), we have
\[
Q(\beta_1) = Q_0(\beta_1) + \lambda\|\beta_1\|_0.
\]
Therefore, $\beta_1$ is a local minimizer of $Q(\beta)$. It then follows from the convexity of $Q(\beta)$ that $\beta_1$ is the global minimizer $\beta_1^*$ of $Q(\beta)$.
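To make the application of Lemma 5 in the last step explicit (a restatement of the argument above rather than an additional condition, reading $\beta_1$ in the bound on $\lambda$ as the minimum signal level of $\beta_1$ in the sense of Lemma 5): each summand in (6.11) has the form treated in Lemma 5 with
\[
c_j = 0.5\,(C_5 - C_4 a_{n_1} b_{n_1})\,\varepsilon_0(1-\varepsilon_0), \qquad \theta_{0,j} = \beta_{1,j},
\]
and since $a_{n_1} b_{n_1} = o(1)$, the requirement $\lambda \le c_j\theta_0^2$ of Lemma 5 is met for $n_1$ sufficiently large under the stated condition $\lambda < 0.5\, C_5\,\varepsilon_0(1-\varepsilon_0)\beta_1^2$; minimizing each summand at $\beta_j = \beta_{1,j}$ then yields the displayed lower bound $Q_0(\beta_1) + \lambda\|\beta_1\|_0$.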

Proof of Theorem 2 For simplicity, denote by $\rho(m(Z_1), Y_1)$ the loss function $\rho_q(X_1, Y_1) = -Y_1 m(Z_1) + \log(1 + \exp(m(Z_1)))$. Note that
\[
\frac{\partial \rho(m(Z_1), Y_1)}{\partial m(Z_1)} = -Y_1 + \frac{\exp(m(Z_1))}{1 + \exp(m(Z_1))} = -Y_1 + \pi_{m,f_0,g_0}(X_1),
\]
and
\[
\frac{\partial^2 \rho(m(Z_1), Y_1)}{[\partial m(Z_1)]^2} = \frac{\exp(m(Z_1))}{[1 + \exp(m(Z_1))]^2}.
\]
By the second order Taylor expansion, we obtain that
\begin{align*}
\rho(m_\beta(Z_1), Y_1) ={}& \rho(m_{\beta_0}(Z_1^0), Y_1) + [\partial\rho(m_{\beta_0}(Z_1^0), Y_1)/\partial m_\beta(Z_1)]\,(m_\beta(Z_1) - m_{\beta_0}(Z_1^0))\\
&+ \frac{1}{2}\,\frac{\partial^2\rho(m^*, Y_1)}{[\partial m_\beta(Z_1)]^2}\,(m_\beta(Z_1) - m_{\beta_0}(Z_1^0))^2, \qquad (6.12)
\end{align*}
where $m^*$ lies between $m_\beta(Z_1)$ and $m_{\beta_0}(Z_1^0)$. Since
\[
P\left[\frac{\partial\rho(m_{\beta_0}(Z_1^0), Y_1)}{\partial m_\beta(Z_1)}\right] = 0 \qquad (6.13)
\]
and $0 < \partial^2\rho(m^*, Y_1)/[\partial m_\beta(Z_1)]^2 < 1$, taking the expectation we obtain that
\begin{align*}
|P\rho(m_\beta(Z_1), Y_1) - P\rho(m_{\beta_0}(Z_1^0), Y_1)| &< 0.5\, P[(m_\beta(Z_1) - m_{\beta_0}(Z_1^0))^2]\\
&= 0.5\, P[(Z_1^T\beta - (Z_1^0)^T\beta_0)^2].
\end{align*}


Hence, from Corollary 1, on the event $J_3$,
\begin{align*}
|P\rho(m_{\beta_0}(Z_1), Y_1) - P\rho(m_{\beta_0}(Z_1^0), Y_1)| &\le 0.5\,\beta_0^T P[(Z_1 - Z_1^0)(Z_1 - Z_1^0)^T]\beta_0\\
&\le C_1 C_4 s_{\beta_0}^2 b_{n_1},
\end{align*}
where $s_\beta = |S_\beta|$ is the cardinality of $S_\beta = \{j : \beta_j \ne 0\}$. Naturally, $P\rho(m_{\beta_0}(Z_1), Y_1) \le P\rho(m_{\beta_0}(Z_1^0), Y_1) + C_1 C_4 s_{\beta_0}^2 b_{n_1}$.

In addition, by the definition of $\beta_1$, $P\rho(m_{\beta_1}(Z_1), Y_1) = \min_\beta P\rho(m_\beta(Z_1), Y_1)$. As a result, $P\rho(m_{\beta_1}(Z_1), Y_1) \le P\rho(m_{\beta_0}(Z_1), Y_1)$. Thus, we have
\[
P\rho(m_{\beta_1}(Z_1), Y_1) \le P\rho(m_{\beta_0}(Z_1^0), Y_1) + C_1 C_4 s_{\beta_0}^2 b_{n_1}. \qquad (6.14)
\]
In addition, by (6.12) and (6.13), for any $\beta$ we have $P\rho(m_\beta(Z_1), Y_1) \ge P\rho(m_{\beta_0}(Z_1^0), Y_1)$. Then, setting $\beta = \beta_1$ on the left side leads to
\[
P\rho(m_{\beta_1}(Z_1), Y_1) \ge P\rho(m_{\beta_0}(Z_1^0), Y_1). \qquad (6.15)
\]
Combining (6.14) and (6.15) leads to
\[
|P\rho(m_{\beta_1}(Z_1), Y_1) - P\rho(m_{\beta_0}(Z_1^0), Y_1)| \le C_1 C_4 s_{\beta_0}^2 b_{n_1}. \qquad (6.16)
\]
As a result, we have
\[
E(m_{\beta_1}, f, g) \le C_1 C_4 s_{\beta_0}^2 b_{n_1}. \qquad (6.17)
\]
Inequality (6.17) combined with Lemma 4 ($\beta_1^* = \beta_1$) leads to
\[
E(m_{\beta_1^*}, f, g) \le C_1 C_4 s_{\beta_0}^2 b_{n_1}. \qquad (6.18)
\]
Recall the oracle estimator
\[
\beta_1^* = \arg\min_{\beta\in\mathcal{B}}\left\{E(m_\beta, f, g) + \frac{8\lambda s_\beta}{c\,\phi_1^2(S_\beta)}\right\}.
\]
Then by Theorem 1,
\[
E(q_1) = E(m_{\beta_1}, f, g) \le 6 E(m_{\beta_1^*}, f, g) + \frac{16\lambda^2 s_{\beta_1^*}(e^{\eta}/\varepsilon_0 + 1)^2}{c\,\phi^2(S_{\beta_1^*})}. \qquad (6.19)
\]
Therefore, by (6.18) and (6.19),
\[
E(q_1) \le \frac{16\lambda^2 s_{\beta_1^*}(e^{\eta}/\varepsilon_0 + 1)^2}{c\,\phi^2(S_{\beta_1^*})} + C_1 C_4 s_{\beta_0}^2 b_{n_1}.
\]
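Read at a high level (a paraphrase of the bound just displayed, with all constants as above), the excess risk of the FANS fit splits into a term inherited from the oracle inequality of Theorem 1 and a term coming from estimating the marginal densities:
\[
E(q_1) \;\le\; \underbrace{\frac{16\lambda^2 s_{\beta_1^*}(e^{\eta}/\varepsilon_0+1)^2}{c\,\phi^2(S_{\beta_1^*})}}_{\text{oracle inequality of Theorem 1}}
\;+\; \underbrace{C_1 C_4\, s_{\beta_0}^2\, b_{n_1}}_{\text{marginal density estimation}},
\qquad b_{n_1} = \frac{2\log(3p/\delta)}{n_1^{1-\alpha}},
\]
so the second term vanishes as long as $s_{\beta_0}^2\log(p/\delta) = o(n_1^{1-\alpha})$.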


References

Ackermann, M. and Strimmer, K. (2009). A general modular framework for gene set

enrichment analysis. BMC Bioinformatics, 10 1471–2105.

Antoniadis, A., Lambert-Lacroix, S. and Leblanc, F. (2003). Effective dimension

reduction methods for tumor classification using gene expression data. Bioinformatics, 19

563–570.

Bair, E., Hastie, T., Paul, D. and Tibshirani, R. (2006). Prediction by supervised

principal components. J. Amer. Statist. Assoc., 101 119–137.

Bickel, P. J. and Levina, E. (2004). Some theory for Fisher’s linear discriminant function,

’naive Bayes’, and some alternatives when there are many more variables than observations.

Bernoulli, 10 989–1010.

Boulesteix, A.-L. (2004). PLS dimension reduction for classification with microarray data.

Stat. Appl. Genet. Mol. Biol., 3 Art. 33, 32 pp. (electronic).

Buhlmann, P. and van de Geer, S. (2011). Statistics for High-Dimensional Data.

Springer.

Cai, T. and Liu, W. (2011). A direct estimation approach to sparse linear discriminant

analysis. J. Amer. Statist. Assoc., 106 1566–1577.

Clemmensen, L., Hastie, T., Witten, D. and Ersboll, B. (2011). Sparse discriminant analysis. Technometrics, 53 406–413.

Dudoit, S., Fridlyand, J. and Speed, T. P. (2002). Comparison of discrimination meth-

ods for the classification of tumors using gene expression data. J. Amer. Statist. Assoc., 97

77–87.

Fan, J. and Fan, Y. (2008). High-dimensional classification using features annealed inde-

pendence rules. Ann. Statist., 36 2605–2637.

Fan, J., Feng, Y. and Song, R. (2011). Nonparametric independence screening in sparse

ultra-high dimensional additive models. J. Amer. Statist. Assoc., 106 544–557.

Fan, J., Feng, Y. and Tong, X. (2012). A road to classification in high dimensional space:

the regularized optimal affine discriminant. Journal of the Royal Statistical Society. Series

B (Statistical Methodology), 74 745–771.


Fan, J. and Li, R. (2001). Variable selection via nonconcave penalized likelihood and its

oracle properties. J. Amer. Statist. Assoc., 96 1348–1360.

Fan, J. and Lv, J. (2008). Sure independence screening for ultrahigh dimensional feature

space (with discussion). J. Roy. Statist. Soc., Ser. B: Statistical Methodology, 70 849–911.

Friedman, J., Hastie, T., Hofling, H. and Tibshirani, R. (2007). Pathwise coordinate

optimization. Annals of Applied Statistics, 1 302–332.

Friedman, J., Hastie, T. and Tibshirani, R. (2010). Regularization paths for generalized

linear models via coordinate descent. Journal of Statistical Software, 33 1–22.

Gordon, G. J., Jensen, R. V., Hsiao, L.-L., Gullans, S. R., Blumenstock, J. E.,

Ramaswamy, S., Richards, W. G., Sugarbaker, D. J. and Bueno, R. (2002).

Translation of microarray data into clinically relevant cancer diagnostic tests using gene

expression ratios in lung cancer and mesothelioma. Cancer Research, 62 4963–4967.

Guo, Y., Hastie, T. and Tibshirani, R. (2007). Regularized linear discriminant analysis

and its application in microarrays. Biostatistics, 8 86–100.

Hastie, T. and Tibshirani, R. (1990). Generalized Additive Models. Chapman & Hall/CRC.

Hastie, T., Tibshirani, R. and Friedman, J. H. (2009). The Elements of Statistical

Learning: Data Mining, Inference, and Prediction (2nd edition). Springer-Verlag Inc.

Huang, X. and Pan, W. (2003). Linear regression and two-class classification with gene expression data. Bioinformatics, 19 2072–2078.

Li, K.-C. (1991). Sliced inverse regression for dimension reduction. J. Amer. Statist. Assoc.,

86 316–342. With discussion and a rejoinder by the author.

Mai, Q., Zou, H. and Yuan, M. (2012). A direct approach to sparse discriminant analysis

in ultra-high dimensions. Biometrika, 99 29–42.

Meier, L., van de Geer, S. and Buhlmann, P. (2009). High-dimensional additive modeling. Ann. Statist., 37 3779–3821.

Nguyen, D. V. and Rocke, D. M. (2002). Tumor classification by partial least squares using microarray gene expression data. Bioinformatics, 18 39–50.

Polonik, W. (1995). Measuring mass concentrations and estimating density contour clusters-

an excess mass approach. Annals of Statistics, 23 855–881.


Raykar, V., Duraiswami, R. and Zhao, L. (2010). Fast computation of kernel estimators.

Journal of Computational and Graphical Statistics, 19 205–220.

Shao, J., Wang, Y., Deng, X. and Wang, S. (2011). Sparse linear discriminant analysis

by thresholding for high dimensional data. Ann. Statist., 39 1241–1265.

Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. J. Roy. Statist.

Soc., Ser. B, 58 267–288.

Tong, X. (2013). A plug-in approach to anomaly detection. Journal of Machine Learning

Research, 14 3011–3040.

Tsybakov, A. (2004). Optimal aggregation of classifiers in statistical learning. Ann. Statist.,

32 135–166.

van de Geer, S. (2008). High-dimensional generalized linear models and the lasso. Ann.

Statist., 36 614–645.

Witten, D. and Tibshirani, R. (2012). Penalized classification using Fisher's linear discriminant. Journal of the Royal Statistical Society Series B, 73 753–772.

Wu, M. C., Zhang, L., Wang, Z., Christiani, D. C. and Lin, X. (2009). Sparse linear

discriminant analysis for simultaneous testing for the significance of a gene set/pathway

and gene selection. Bioinformatics, 25 1145–1151.

Zhang, C.-H. (2010). Nearly unbiased variable selection under minimax concave penalty.

Ann. Statist., 38 894–942.

Zhang, C.-H. and Huang, J. (2008). The sparsity and bias of the LASSO selection in

high-dimensional linear regression. Ann. Statist., 36 1567–1594.

Zhao, P. and Yu, B. (2006). On model selection consistency of lasso. Journal of Machine

Learning Research, 7 2541–2563.

Zou, H. (2006). The adaptive lasso and its oracle properties. J. Amer. Statist. Assoc., 101

1418–1429.

Zou, H., Hastie, T. and Tibshirani, R. (2006). Sparse principal component analysis. J.

Comput. Graph. Statist., 15 265–286.
