
Distance Shrinkage and Euclidean Embedding via

Regularized Kernel Estimation

Luwan Zhang∗, Grace Wahba† and Ming Yuan‡§

∗‡Morgridge Institute for Research and ∗†‡Department of Statistics

University of Wisconsin-Madison

(August 4, 2015)

∗ Research supported in part by NSF Career Award DMS-1321692 and FRG Award DMS-1265202.
† Research supported in part by NIH Grants EY09946, 1U54AI117924-01 and NSF Grant DMS1308877.
‡ Research supported in part by NSF Career Award DMS-1321692 and FRG Award DMS-1265202, and NIH Grant 1U54AI117924-01.
§ Address for correspondence: Department of Statistics, University of Wisconsin-Madison, 1300 University Avenue, Madison, WI 53706.


Abstract

Although recovering a Euclidean distance matrix from noisy observations is a common problem in practice, how well this can be done remains largely unknown. To fill this void, we study a simple distance matrix estimate based upon the so-called regularized kernel estimate. We show that such an estimate can be characterized as simply applying a constant amount of shrinkage to all observed pairwise distances. This fact allows us to establish risk bounds for the estimate, implying that the true distances can be estimated consistently in an average sense as the number of objects increases. In addition, such a characterization suggests an efficient algorithm to compute the distance matrix estimator, as an alternative to the usual second order cone programming, which is known not to scale well for large problems. Numerical experiments and an application in visualizing the diversity of Vpu protein sequences from a recent HIV-1 study further demonstrate the practical merits of the proposed method.

Key words: Embedding, Euclidean distance matrix, kernel, multidimensional scaling, regularization, shrinkage, trace norm.

1 Introduction

The problem of recovering a Euclidean distance matrix from noisy or imperfect observations of pairwise (dis)similarity scores between a set of objects arises naturally in many different contexts. It allows us to map objects from an arbitrary domain to Euclidean spaces, thereby making them amenable to subsequent statistical analyses, and also provides tools for visualization. Consider, for example, evaluating (dis)similarity between molecular sequences. A standard approach is through sequence alignment, measuring the (dis)similarity between a pair of sequences by their corresponding alignment score (see Durbin et al., 1998). Although encoding invaluable insights into the relationship between sequences, these scores are well known not to correspond directly to a distance metric in the respective sequence space and therefore cannot be employed in kernel based learning methods. Similarly, there are numerous other instances where it is possible to derive similarity or dissimilarity scores for pairs of objects from expert knowledge or other information, which, if successfully converted into positive semi-definite kernels or Euclidean distances, could play an important role in a myriad of statistical and computational analyses (e.g., Scholkopf and Smola, 2002; Szekely, Rizzo and Bakirov, 2007).


A canonical example where this type of problem occurs is multidimensional scaling, which aims to place each object in a low dimensional Euclidean space such that the between-object distances are preserved as well as possible. As such, it also forms the basis for several other more recent approaches to nonlinear dimension reduction and manifold learning. See Scholkopf and Smola (1998), Tenenbaum, De Silva and Langford (2000), Lu et al. (2005), Venna and Kaski (2006), Weinberger et al. (2007), and Chen and Buja (2009, 2013) among others. Despite the popularity of multidimensional scaling, very little is known about the extent to which the distances among the embedded points faithfully reflect the true pairwise distances when these are observed with noise; and it is largely used only as an exploratory tool for initial data analysis.

Another example where it is of interest to reconstruct a Euclidean distance matrix is the determination of molecular structures using nuclear magnetic resonance (NMR, for short) spectroscopy, a technique pioneered by Nobel laureate Kurt Wuthrich (see, e.g., Wuthrich, 1986). As demonstrated by Wuthrich, distances between atoms can be inferred from chemical shifts measured by NMR spectroscopy. These distances obviously need to conform to a three dimensional Euclidean space, yet experimental data on distances are inevitably noisy and, as a result, the observed distances may not translate directly into locations of the atoms in a stable structure. This therefore becomes a problem of recovering a Euclidean distance matrix in 3D from noisy observations of pairwise distances. Similar problems also occur in graph realization and Euclidean representation of graphs, where the goal is to embed the vertex set of a graph in a Euclidean space in such a fashion that the distance between two embedded vertices matches their corresponding edge weight (see, e.g., Pouzet, 1979). While an exact embedding of a graph is typically of very high dimension, it is useful in some applications to instead seek approximate yet low dimensional embeddings (see, e.g., Roy, 2010).

More specifically, let {Oi : i = 1, 2, . . . , n} be a collection of objects from a domain O, which could be the coordinates of atoms in the case of molecular structure determination using NMR spectroscopy, or the vertex set of a graph in the case of graph realization. Let Ω be a subset of {(i, j) : 1 ≤ i, j ≤ n}, and {xij : (i, j) ∈ Ω} be the observed dissimilarity scores between them such that

    xij = dij + εij,   (i, j) ∈ Ω,

where the εij's are measurement errors and D = (dij)1≤i,j≤n is a so-called Euclidean distance matrix, in that there exist points p1, . . . , pn ∈ Rk for some k ∈ N such that

    dij = ∥pi − pj∥²,   1 ≤ i < j ≤ n;   (1)

see, e.g., Dattorro (2013). Here ∥ · ∥ stands for the usual Euclidean norm. Our goal is to estimate the Euclidean distance matrix D from (xij)(i,j)∈Ω. In many applications, all pairwise dissimilarity scores are observable, that is, Ω = {(i, j) : 1 ≤ i < j ≤ n}. In these cases, we can more conveniently write the observed scores in matrix form as X = (xij)1≤i,j≤n, where we adopt the convention that xji = xij and xii = 0. To fix ideas, in the rest of the paper we focus primarily on this setting of complete observations, with the exception of Section 4, where we discuss specifically how the methodology handles the more general situation in a seamless fashion.

In the light of (1), D can be identified with the points pi, which suggests an embedding of the Oi's in Rk. Obviously, if the Oi's can be embedded in the Euclidean space of a particular dimension, then it is also possible to embed them in a higher dimensional Euclidean space. We refer to the smallest k in which such an embedding is possible as the embedding dimension of D, denoted by dim(D). As is clear from the aforementioned examples, oftentimes either the true Euclidean distance matrix D itself is of low embedding dimension, or we are interested in an approximation of D that allows for a low dimensional embedding. Such is the case, for example, in molecular structure determination, where the embedding dimension of the true distance matrix D is necessarily three. Similarly, for multidimensional scaling or graph realization, we are typically interested in mapping objects in two or three dimensions.

Recall that

    dij = p⊤i pi + p⊤j pj − 2p⊤i pj,

which relates D to the so-called kernel (or Gram) matrix K = (p⊤i pj)1≤i,j≤n. Furthermore, it is also clear that the embedding dimension dim(D) equals rank(K). Motivated by this correspondence between a Euclidean distance matrix and a kernel matrix, we consider estimating D by D̂ = (d̂ij)1≤i,j≤n where

    d̂ij = ⟨K̂, (ei − ej)(ei − ej)⊤⟩ = k̂ii + k̂jj − 2k̂ij.   (2)

Here ⟨A, B⟩ = trace(A⊤B), ei is the ith column vector of the identity matrix, and K̂ = (k̂ij)1≤i,j≤n is the so-called regularized kernel estimate; see, e.g., Lu et al. (2005) and Weinberger et al. (2007). More specifically,

    K̂ = argmin_{M⪰0} { ∑_{(i,j)∈Ω} ( xij − ⟨M, (ei − ej)(ei − ej)⊤⟩ )² + λn trace(M) },   (3)


where λn ≥ 0 is a tuning parameter that balances the tradeoff between goodness-of-fit and the preference towards an estimate with smaller trace norm. Hereafter, we write M ⪰ 0 to indicate that a matrix M is positive semi-definite. The trace norm penalty used in defining K̂ encourages low rank of the estimated kernel matrix and hence a low embedding dimension of D̂. See, e.g., Lu et al. (2005), Yuan et al. (2007), Negahban and Wainwright (2011), Rohde and Tsybakov (2011), and Lu, Monteiro and Yuan (2012) among many others for similar uses of this type of penalty. The goal of the current article is to study the operating characteristics and statistical performance of the estimate D̂ defined by (2).

A fundamental difficulty in understanding the behavior of the proposed distance matrix estimate D̂ comes from the simple observation that a kernel is not identifiable given pairwise distances alone, even without noise, as the latter are preserved under translation while the former is not. Therefore, it is not clear what exactly K̂ is estimating, and consequently what the relationship between D̂ and D is. To address this challenge, we introduce the notion of a minimum trace kernel to resolve the ambiguity associated with kernel estimation. This concept allows us to characterize D̂ directly and explicitly as first applying a constant amount of shrinkage to all observed distances, and then projecting the shrunken distances onto a Euclidean distance matrix. Because the distance between a pair of points shrinks when they are projected onto a linear subspace, this characterization offers a geometric explanation for the ability of D̂ to induce low dimensional embeddings. In addition, this direct characterization of D̂ also suggests an efficient way to compute it using a version of Dykstra's alternating projection algorithm, thanks to the special geometric structure of Dn, the set of n × n Euclidean distance matrices; see, e.g., Glunt et al. (1990). The principal advantage of this alternating projection technique is that it obviates the computational expense of semidefinite programming and, more generally, second order cone programming. Furthermore, based on this explicit characterization, we establish statistical risk bounds for the discrepancy D̂ − D and show that the true distances can be recovered consistently on average if D allows for (approximate) low dimensional embeddings.

The rest of the paper is organized as follows. In Section 2, we discuss in detail the shrinkage effect of the estimate D̂ by exploiting the duality between a kernel matrix and a Euclidean distance matrix. Taking advantage of our explicit characterization of D̂ and the geometry of the convex cone of Euclidean distance matrices, Section 3 establishes risk bounds for D̂, and Section 4 describes how D̂ can be computed using an efficient alternating projection algorithm. The merits of D̂ are further illustrated via numerical examples, both simulated and real, in Section 5. All proofs are relegated to the Appendix.


2 Distance Shrinkage

In this section, we show that there is a one-to-one correspondence between a Euclidean distance matrix and a so-called minimum trace kernel, and exploit this duality to explicitly characterize D̂.

2.1 Minimum Trace Kernels

Despite the popularity of the regularized kernel estimate K̂, rather little is known about its statistical performance. This is perhaps in a certain sense inevitable, because a kernel is not identifiable given pairwise distances alone. To resolve this ambiguity, we introduce the concept of a minimum trace kernel, and show that K̂ targets the unique minimum trace kernel associated with the true Euclidean distance matrix.

Recall that any n × n positive semidefinite matrix K can be identified with a set of points p1, . . . , pn ∈ Rk for some k ∈ N such that K = PP⊤, where P = (p1, . . . , pn)⊤. At the same time, these points can also be associated with an n × n Euclidean distance matrix D = (dij)1≤i,j≤n, where

    dij = ∥pi − pj∥²,   1 ≤ i < j ≤ n.

Obviously,

    dij = ⟨K, Bij⟩,   where   Bij = (ei − ej)(ei − ej)⊤.

It is clear that any positive semi-definite matrix M can be a kernel matrix and is therefore translated uniquely into a distance matrix. In other words,

    T (M) = diag(M)1⊤ + 1diag(M)⊤ − 2M = (mii + mjj − 2mij)1≤i,j≤n

is a surjective map from the set Sn of n × n positive semi-definite matrices to Dn. Hereafter, we write 1 for a vector of ones of conformable dimension. The map T , however, is not injective because, geometrically, a translation of the embedding points results in a different kernel matrix while the distance matrix remains unchanged. As a result, it may not be meaningful, in general, to consider reconstruction of a kernel matrix from dissimilarity scores alone.

It turns out that one can easily avoid such an ambiguity by requiring the embeddings to be centered, in that P⊤1 = 0, where 0 is a vector of zeros of conformable dimension. We note that even with the centering, the embedding represented by P for any given Euclidean distance matrix still may not be unique, as distances are invariant to rigid motions. However, their corresponding kernel matrix, as the following result shows, is indeed uniquely defined. Moreover, this kernel matrix can be characterized as having the smallest trace among all kernels that correspond to the same distance matrix, and hence will be referred to as the minimum trace kernel.

Theorem 1. Let D be an n × n distance matrix. Then the preimage of D under T ,

    M(D) = {M ∈ Sn : T (M) = D},

is convex; and −JDJ/2 is the unique solution to the following convex program:

    argmin_{M∈M(D)} trace(M),

where J = I − 11⊤/n. In addition, if p1, . . . , pn ∈ Rn is an embedding of D such that p1 + . . . + pn = 0, then PP⊤ = −JDJ/2, where P = (p1, . . . , pn)⊤.
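For concreteness, the two maps appearing in Theorem 1 are easy to code. The following numpy sketch (our own illustration, not part of the original text) implements T and the minimum trace kernel map R(D) = −JDJ/2, and checks that PP⊤ = −JDJ/2 for a centered embedding.

```python
import numpy as np

def T(M):
    """Map a kernel matrix M to its (squared) Euclidean distance matrix:
    T(M) = diag(M) 1' + 1 diag(M)' - 2 M."""
    d = np.diag(M)
    return d[:, None] + d[None, :] - 2 * M

def R(D):
    """Map a Euclidean distance matrix D to its minimum trace kernel -JDJ/2,
    where J = I - 11'/n is the centering matrix (Theorem 1)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    return -J @ D @ J / 2

# quick check on a random centered embedding
P = np.random.randn(5, 3)
P -= P.mean(axis=0)                   # center the points
D = T(P @ P.T)                        # squared pairwise distances
assert np.allclose(R(D), P @ P.T)     # PP' is the minimum trace kernel
assert np.allclose(T(R(D)), D)
```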

In the light of Theorem 1, T is bijective when restricted to the set of minimum trace kernels,

    K = {M ⪰ 0 : trace(M) ≤ trace(A), ∀A ∈ M(T (M))},

and its inverse is R(M) = −JMJ/2 as a map from distance matrices to kernels with minimum trace. From this viewpoint, the regularized kernel estimate K̂ targets R(D) rather than the original data-generating kernel. In addition, it is clear that

Proposition 2. For any λn > 0, the regularized kernel estimate K̂ as defined in (3) is a minimum trace kernel. In addition, any embedding P̂ of K̂, that is, K̂ = P̂ P̂⊤, is necessarily centered so that P̂⊤1 = 0.

The relationships among the data-generating kernel K, D, R(D), the regularized kernel estimate K̂ as defined by (3), and the distance matrix estimate D̂ as defined by (2) are depicted in Figure 1.

2.2 Distance Shrinkage

We now study the properties of the proposed distance matrix estimate given by (2). Recall that, in the case of complete observations, the regularized kernel estimate K̂ is given by

    K̂ = argmin_{M⪰0} { (1/2)∥X − T (M)∥²F + λn trace(M) },   (4)


Figure 1: Relationships among K, D, R(D), K̂ and D̂: the true distance matrix D is determined by the data-generating kernel K; there is a one-to-one correspondence between D and the minimum trace kernel R(D). Similarly, there is a one-to-one correspondence between D̂ and K̂, which are estimates of D and R(D), respectively.

where ∥ · ∥F stands for the usual matrix Frobenius norm. It turns out that, following Theorem 1, D̂ = T (K̂) admits a more explicit and concise expression.

To this end, observe that the set Dn of n × n Euclidean distance matrices is a closed convex cone (Schoenberg, 1935; Young and Householder, 1938). Let PDn denote the projection onto Dn, in that

    PDn(A) = argmin_{M∈Dn} ∥A − M∥²F

for A ∈ Rn×n. Then

Theorem 3. Let D̂ be defined by (2) with the regularized kernel estimate K̂ given by (4). Then

    D̂ = PDn( X − (λn/(2n)) D0 ),

where D0 is the Euclidean distance matrix whose diagonal elements are zero and off-diagonal entries are one.

Theorem 3 characterizes D̂ as the projection of X − (λn/(2n))D0 onto the set of Euclidean distance matrices. Therefore, it can be computed as soon as we can evaluate the projection onto the closed convex set Dn. As shown in Section 4, this can be done efficiently using an alternating projection algorithm, thanks to the geometric structure of Dn. In addition, the subtraction of (λn/(2n))D0 from X amounts to applying a constant shrinkage to all observed pairwise distances. Geometrically, distance shrinkage can be the result of projecting points in a Euclidean space onto a lower dimensional linear subspace, and therefore encourages low dimensional embeddings. We now look at the specific example of n = 3 to further illustrate this effect.


In the special case of n = 3 points, the projection onto Euclidean distance matrices can be computed analytically. Let

    X = [ 0    x12  x13
          x12  0    x23
          x13  x23  0   ]

be the observed distance matrix. We now determine the embedding dimension of PD3(X − ηD0).

Let

    Q = (1/(3 + √3)) [ 2 + √3      −1          −(1 + √3)
                       −1          2 + √3      −(1 + √3)
                       −(1 + √3)   −(1 + √3)   −(1 + √3) ]

be a 3 × 3 Householder matrix. Then, for a 3 × 3 symmetric hollow matrix X, the 2 × 2 leading principal submatrix of QXQ is

    [ −(1/3)x12 − ((1+√3)/3)x13 + ((1+√3)/(6+3√3))x23      (2/3)x12 − (1/3)x13 − (1/3)x23
      (2/3)x12 − (1/3)x13 − (1/3)x23                       −(1/3)x12 + ((1+√3)/(6+3√3))x13 − ((1+√3)/3)x23 ],

and we leave the other entries of QXQ unspecified. As shown by Hayden and Wells (1988), the minimal embedding dimension of PD3(X) can be determined by the eigenvalues of this principal submatrix.

More specifically, denote by

    D(X) = [ (1/3)x12 + ((1+√3)/3)x13 − ((1+√3)/(6+3√3))x23      −(2/3)x12 + (1/3)x13 + (1/3)x23
             −(2/3)x12 + (1/3)x13 + (1/3)x23                      (1/3)x12 − ((1+√3)/(6+3√3))x13 + ((1+√3)/3)x23 ],

and let

    D(X) = U [ α1  0
               0   α2 ] U⊤

be its eigenvalue decomposition. Write

    ∆x := √( 2[ (x12 − x13)² + (x12 − x23)² + (x13 − x23)² ] ).   (5)

Then it can be calculated that

    α1 = ( (x12 + x13 + x23) + ∆x )/3   and   α2 = ( (x12 + x13 + x23) − ∆x )/3.   (6)

In the light of Theorem 6.1 of Glunt et al. (1990), we have


Proposition 4.

    dim(PD3(X)) = 2   if x12 + x13 + x23 > ∆x,
                  1   if −(1/2)∆x < x12 + x13 + x23 ≤ ∆x,
                  0   otherwise,

where ∆x is given by (5), and dim(PD3(X)) = 0 means PD3(X) = 0.
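Proposition 4 is simple to check numerically. The snippet below (our own illustration, not from the original text) computes the embedding dimension of PD3(X) from the three entries of a symmetric hollow X, which may be the shrunken scores X − ηD0 with possibly negative entries.

```python
import numpy as np

def embedding_dim_3pts(x12, x13, x23):
    """Embedding dimension of P_{D_3}(X) for a 3x3 symmetric hollow X,
    following Proposition 4."""
    s = x12 + x13 + x23
    delta = np.sqrt(2 * ((x12 - x13) ** 2 + (x12 - x23) ** 2 + (x13 - x23) ** 2))
    if s > delta:
        return 2
    if s > -delta / 2:
        return 1
    return 0

# shrinking all three observed distances by eta eventually collapses the
# embedding from dimension 2 to 1 to 0 (cf. Figure 2)
x12, x13, x23 = 4.0, 5.0, 6.0
for eta in (0.0, 4.0, 10.0):
    print(eta, embedding_dim_3pts(x12 - eta, x13 - eta, x23 - eta))
```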

To appreciate the effect of distance shrinkage, consider the case when PD3(X) has a minimum embedding dimension of two. By Proposition 4, this is equivalent to assuming α2 > 0. Observe that

    D(X − ηD0) = D(X) − ηI2.

The eigenvalues of D(X − ηD0) are therefore α1 − η and α2 − η, where α1 ≥ α2 are the eigenvalues of D(X) as given by (6). This indicates that, by applying a sufficient amount of distance shrinkage, we can reduce the minimum embedding dimension, as illustrated in Figure 2.

Figure 2: Effect of distance shrinkage when n = 3.

More specifically,

  • If

        (1/3)(x12 + x13 + x23) − ∆x/3 ≤ η < (1/3)(x12 + x13 + x23) + ∆x/6,

    then the minimum embedding dimension of PD3(X − ηD0) is one.

  • If

        η ≥ (1/3)(x12 + x13 + x23) + ∆x/6,

    then the minimum embedding dimension of PD3(X − ηD0) is zero.


3 Estimation Risk

The previous section provides an explicit characterization of the proposed distance matrix estimate D̂ as a distance shrinkage estimator. We now take advantage of this characterization to establish statistical risk bounds for D̂.

3.1 Estimation Error for Distance Matrix

A natural measure of the quality of a distance matrix estimate D̂ is the averaged squared error over all pairwise distances:

    L(D̂, D) := (2/(n(n − 1))) ∑_{1≤i<j≤n} (d̂ij − dij)².

It is clear that when both D and D̂ are n × n Euclidean distance matrices,

    L(D̂, D) = (1/(n(n − 1))) ∥D̂ − D∥²F.

For convenience, we shall now consider bounding ∥D̂ − D∥²F. Taking advantage of the characterization of D̂ as a projection onto the set of n × n Euclidean distance matrices, we can derive the following oracle inequality.

Theorem 5. Let D̂ be defined by (2). Then for any λn such that λn ≥ 2∥X − D∥,

    ∥D̂ − D∥²F ≤ inf_{M∈Dn} { ∥M − D∥²F + (9/4)λ²n (dim(M) + 1) },

where ∥ · ∥ stands for the matrix spectral norm.

Theorem 5 gives a deterministic upper bound for the error ∥D̂ − D∥²F of D̂ in comparison with that of an arbitrary approximation to D. More specifically, let D̄ be the Euclidean distance matrix with embedding dimension r that is closest to D in Frobenius norm. Then Theorem 5 implies that, with a sufficiently large tuning parameter λn,

    L(D̂, D) ≤ L(D̄, D) + C r λ²n / n²,

for some constant C > 0. In particular, if D itself is of embedding dimension r, then

    L(D̂, D) ≤ C r λ²n / n².


More explicit bounds for the estimation error can be derived from this general result. Consider, for example, the case when the observed pairwise distances are the true distances subject to additive noise:

    xij = dij + εij,   1 ≤ i < j ≤ n,   (7)

where the measurement errors εij are independent with mean E(εij) = 0 and variance var(εij) = σ². Assume that the distributions of the measurement errors have light tails, in that

    E(εij^{2m}) ≤ (c0 m)^m,   ∀ m ∈ N,   (8)

for some constant c0 > 0. Then the spectral norm of X − D satisfies

    ∥X − D∥ = 2σ( √n + Op(n^{−1/6}) ).

See, e.g., Sinai and Soshnikov (1998). Thus,

Corollary 6. Let D̂ be defined by (2). Under the model given by (7) and (8), if λn = 4σ(n^{1/2} + 1), then with probability tending to one as n → ∞,

    ∥D̂ − D∥²F ≤ inf_{M∈Dn} { ∥M − D∥²F + 36nσ²(dim(M) + 1) }.

In particular, if dim(D) = r, then with probability tending to one,

    ∥D̂ − D∥²F ≤ 36nσ²(r + 1).

In other words, under the model given by (7) and (8),

    L(D̂, D) ≤ L(D̄, D) + C r σ² / n,

for some constant C > 0, where, as before, D̄ is the Euclidean distance matrix with embedding dimension r closest to D. In particular, if D itself is of embedding dimension r, then

    L(D̂, D) ≤ C r σ² / n.

3.2 Low Dimensional Approximation

As mentioned before, in some applications the chief goal may not be to recover D itself but rather its embedding in a prescribed dimension. This is true, in particular, for multidimensional scaling and graph realization, where we are often interested in embedding a distance matrix in R² or R³. Following classical multidimensional scaling, a parameter of interest in these cases is

    Dr := argmin_{M∈Dn(r)} ∥J(D − M)J∥²F,

where Dn(r) is the set of all n × n Euclidean distance matrices of embedding dimension at most r. An obvious estimate of Dr can be derived by replacing D with D̂:

    D̂r := argmin_{M∈Dn(r)} ∥J(D̂ − M)J∥²F.   (9)

As in classical multidimensional scaling, the estimate D̂r can be computed more explicitly as follows. Let K̂ be the regularized kernel estimate corresponding to D̂, and let K̂ = UΓU⊤ be its eigenvalue decomposition with Γ = diag(γ1, γ2, . . .) and γ1 ≥ γ2 ≥ . . .. Then D̂r = T (K̂r), where K̂r = U diag(γ1, . . . , γr, 0, . . . , 0)U⊤.
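The truncation step just described is a one-line eigendecomposition. The sketch below (ours, for illustration only) computes D̂r and the associated r-dimensional embedding from a given kernel estimate.

```python
import numpy as np

def low_dim_distance_estimate(K_hat, r):
    """Keep the r leading eigenpairs of the (PSD) kernel estimate K_hat and
    map back to distances: returns D_r = T(K_r) and the r-dimensional
    embedding, as in classical multidimensional scaling."""
    vals, vecs = np.linalg.eigh(K_hat)            # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]        # gamma_1 >= gamma_2 >= ...
    vals_trunc = np.where(np.arange(len(vals)) < r, vals, 0.0)
    K_r = (vecs * vals_trunc) @ vecs.T            # U diag(gamma_1..gamma_r, 0, ...) U'
    d = np.diag(K_r)
    D_r = d[:, None] + d[None, :] - 2 * K_r       # T(K_r)
    P_r = vecs[:, :r] * np.sqrt(np.clip(vals[:r], 0.0, None))
    return D_r, P_r
```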

The risk bounds we derived for D̂ can also be translated into bounds for D̂r. More specifically,

Corollary 7. Let D̂r be defined by (9), where D̂ is given by (2) with λn ≥ 2∥X − D∥. Then there exists a numerical constant C > 0 such that

    ∥J(D̂r − D)J∥²F ≤ C ( min_{M∈Dn(r)} ∥J(D − M)J∥²F + λ²n r ).

In particular, under the model given by (7) and (8), if λn = 4σ(n^{1/2} + 1), then with probability tending to one,

    ∥J(D̂r − D)J∥²F ≤ C ( min_{M∈Dn(r)} ∥J(D − M)J∥²F + nrσ² ).

4 Computation

It is not hard to see that the optimization problem defining the regularized kernel estimate can be formulated as a second order cone program (see, e.g., Lu et al., 2005; Yuan et al., 2007). This class of optimization problems can be readily solved using generic solvers such as SDPT3 (Toh, Todd and Tutuncu, 1999; Tutuncu, Toh and Todd, 2003). Although in principle these problems can be solved in polynomial time, in practice such solvers are known not to scale well to large problems. Instead of starting from the regularized kernel estimate, D̂ can, as shown in Theorem 3, be computed directly as a projection onto the set of Euclidean distance matrices. Taking advantage of this direct characterization and the particular geometric structure of the closed convex cone Dn, we can devise a more efficient algorithm to compute D̂.
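For small n, the defining problem (4) can of course be handed to a generic conic solver directly. The snippet below is a minimal sketch of that baseline using the cvxpy modeling package (our choice of tool; the paper itself refers to SDPT3) and is mainly useful as a reference point before the projection-based algorithm developed next.

```python
import cvxpy as cp
import numpy as np

def regularized_kernel_estimate(X, lam):
    """Direct (non-scalable) formulation of (4): minimize
    0.5 * ||X - T(M)||_F^2 + lam * trace(M) over PSD matrices M,
    where X is the symmetric hollow matrix of observed dissimilarities."""
    n = X.shape[0]
    M = cp.Variable((n, n), PSD=True)
    d = cp.reshape(cp.diag(M), (n, 1))
    ones = np.ones((n, 1))
    TM = d @ ones.T + ones @ d.T - 2 * M            # T(M)
    objective = 0.5 * cp.sum_squares(X - TM) + lam * cp.trace(M)
    cp.Problem(cp.Minimize(objective)).solve()
    return M.value
```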


4.1 Alternating Projection

We shall adopt, in particular, an alternating projection algorithm introduced by Dykstra (1983). Dykstra's algorithm is a refinement of the von Neumann alternating projection algorithm, specifically designed to compute the projection onto the intersection of two closed convex sets by constructing a sequence of projections onto the two sets alternately.

Data: x.
Result: Projection of x onto the intersection of two closed convex sets C1 and C2.
Initialization: x0 = x, p0 = 0, q0 = 0, k = 0;
repeat
    sk ← PC1(xk + pk);
    pk+1 ← xk + pk − sk;
    xk+1 ← PC2(sk + qk);
    qk+1 ← sk + qk − xk+1;
    k ← k + 1;
until a certain convergence criterion is met;
return xk+1.

Algorithm 1: Dykstra's alternating projection algorithm: PC1 and PC2 are the projections onto C1 and C2, respectively.
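Algorithm 1 translates almost verbatim into code. The function below is a minimal numpy sketch (our own; the stopping rule, a relative-change criterion, is our choice) that is reused later with the projections specific to Dn.

```python
import numpy as np

def dykstra(x, proj_C1, proj_C2, max_iter=1000, tol=1e-8):
    """Dykstra's alternating projections (Algorithm 1): project x onto
    C1 ∩ C2 given the two individual projection maps proj_C1 and proj_C2."""
    xk = x.copy()
    p = np.zeros_like(x)
    q = np.zeros_like(x)
    for _ in range(max_iter):
        s = proj_C1(xk + p)
        p = xk + p - s
        x_new = proj_C2(s + q)
        q = s + q - x_new
        converged = np.linalg.norm(x_new - xk) <= tol * max(1.0, np.linalg.norm(xk))
        xk = x_new
        if converged:
            break
    return xk
```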

The main idea of Dykstra's algorithm is illustrated by Figure 3, where the projection of a point onto the intersection of two half-spaces is computed. Alternating projection algorithms, albeit simple, are very powerful and have found numerous applications in practice. It is also known that, under mild regularity conditions, the algorithm converges linearly regardless of the initial point. Interested readers are referred to the monograph by Escalante and Raydan (2011) for further details.

Now consider evaluating D̂, which is the projection of X − ηnD0 onto Dn, where ηn = λn/(2n). Observe that Dn is the intersection of two closed convex cones:

    C1 = {M ∈ Rn×n : JMJ ⪯ 0}   and   C2 = {M ∈ Rn×n : diag(M) = 0}.

Dykstra's alternating projection algorithm can then be readily applied with input X − ηnD0. The use of alternating projection algorithms is motivated by the fact that, although PC1∩C2 is difficult to evaluate directly, the projections onto C1 and C2 have explicit forms and are easy to compute.

Figure 3: Illustration of the alternating projection algorithm (projection of a point onto the intersection of two half-spaces).

More specifically, for any symmetric matrix A ∈ Rn×n, let A11 be the (n − 1) × (n − 1) leading principal submatrix of its Householder transform QAQ, where Q = I − 2vv⊤/(v⊤v) and v = [1, . . . , 1, 1 + √n]⊤. In other words,

    A = Q [ A11  A12
            A21  A22 ] Q.

Let A11 = UΓU⊤ be its eigenvalue decomposition. Then

    PC1(A) = Q [ UΓ−U⊤  A12
                 A21     A22 ] Q,

where Γ− = diag(min{γii, 0}) retains only the nonpositive eigenvalues, so that the transformed leading block satisfies the constraint JMJ ⪯ 0 defining C1. See Hayden and Wells (1988). On the other hand, it is clear that PC2(A) simply replaces all diagonal entries of A with zeros.
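Putting the two projections together with the Dykstra routine sketched after Algorithm 1 gives a complete, if unoptimized, implementation of the distance shrinkage estimator of Theorem 3. Again, this is our own sketch rather than the authors' code.

```python
import numpy as np

def proj_C1(A):
    """Projection onto C1 = {M : JMJ negative semi-definite}, computed via the
    Householder transform Q and an eigendecomposition of the leading block."""
    n = A.shape[0]
    v = np.ones(n)
    v[-1] += np.sqrt(n)
    Q = np.eye(n) - 2.0 * np.outer(v, v) / (v @ v)
    B = Q @ A @ Q
    vals, vecs = np.linalg.eigh(B[:-1, :-1])
    B[:-1, :-1] = (vecs * np.minimum(vals, 0.0)) @ vecs.T   # keep the NSD part
    return Q @ B @ Q

def proj_C2(A):
    """Projection onto C2 = {M : diag(M) = 0}: zero out the diagonal."""
    B = A.copy()
    np.fill_diagonal(B, 0.0)
    return B

def distance_shrinkage_estimate(X, lam, **dykstra_kwargs):
    """Estimator of Theorem 3: shrink all observed distances by lam/(2n) and
    project onto the cone of Euclidean distance matrices using Dykstra's
    algorithm (the dykstra function sketched earlier)."""
    n = X.shape[0]
    D0 = np.ones((n, n)) - np.eye(n)
    return dykstra(X - (lam / (2 * n)) * D0, proj_C1, proj_C2, **dykstra_kwargs)
```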

4.2 Dealing with Missing Data

We have thus far focused on the case where all pairwise distances are observable. Although this is true in many applications, there are also situations where some of the distances may not be available. Missing data can be conveniently handled within our framework through a combination of the alternating projection and EM algorithms.

More specifically, recall that Ω ⊂ {(i, j) : 1 ≤ i < j ≤ n} is the set of entries observed in X. As in the complete data case, we proceed to estimate D by D̂Ω = T (K̂Ω), where

    K̂Ω = argmin_{M⪰0} { ∑_{(i,j)∈Ω} ( xij − ⟨M, (ei − ej)(ei − ej)⊤⟩ )² + λn trace(M) }.

Here we use the superscript Ω to signify the dependence on the set Ω of observed entries. Unlike the case without missing data, D̂Ω in general cannot be characterized as a projection of XΩ = (xij)(i,j)∈Ω onto the set of Euclidean distance matrices. To address this difficulty, we iterate between an E step, where the missing observations are imputed using the current estimate of the pairwise distances, and an M step, where we apply the alternating projection algorithm to the observed distances along with those imputed in the E step.

Data: XΩ = (xij)(i,j)∈Ω, ηn ≥ 0.
Result: D̂Ω.
Initialization: initialize xij for i < j and (i, j) ∉ Ω, and let X = X⊤ = (xij)1≤i,j≤n where xii = 0; k = 0, and X(0) = X;
repeat
    M step – D(k+1) = PDn( X(k) − ηnD0 );
    E step – x(k+1)ij = xij if (i, j) ∈ Ω; 0 if i = j; and d(k+1)ij otherwise;
until a certain convergence criterion is met;
D̂Ω ← D(k+1);
return D̂Ω.

Algorithm 2: EM algorithm to handle missing data.
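Building on the pieces sketched earlier (dykstra, proj_C1, proj_C2), Algorithm 2 amounts to a short imputation loop. The version below is our own sketch, with an arbitrary zero initialization for the unobserved entries.

```python
import numpy as np

def em_distance_shrinkage(X_obs, observed_mask, eta, n_iters=50, **dykstra_kwargs):
    """Sketch of Algorithm 2: alternate the nearest-EDM projection of the current
    completed matrix (M step) with re-imputation of the unobserved entries from
    that projection (E step). observed_mask is a boolean matrix marking Omega
    (symmetric, False on the diagonal); unobserved entries of X_obs are ignored."""
    n = X_obs.shape[0]
    D0 = np.ones((n, n)) - np.eye(n)
    X_k = np.where(observed_mask, X_obs, 0.0)      # initialize missing entries at 0
    D_k = np.zeros_like(X_k)
    for _ in range(n_iters):
        # M step: project the shrunken, completed matrix onto the EDM cone
        D_k = dykstra(X_k - eta * D0, proj_C1, proj_C2, **dykstra_kwargs)
        # E step: keep observed entries, impute the rest from the current estimate
        X_new = np.where(observed_mask, X_obs, D_k)
        np.fill_diagonal(X_new, 0.0)
        if np.allclose(X_new, X_k):
            break
        X_k = X_new
    return D_k
```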

4.3 Tuning

The ability to handle missing data also facilitates the tuning of λn, or equivalently ηn. Clearly, the performance of the proposed method depends on the choice of the tuning parameter. In some cases, we want to embed the data into a Euclidean space of a fixed dimensionality; for example, the atoms of a protein have to live in a three dimensional space. To this end, we can experiment with different values of the tuning parameter and use the one corresponding to the desired embedding dimension. Our experience suggests that this strategy works fairly well in numerical experiments, and the performance of the resulting estimate is also fairly stable over a broad range of tuning parameter choices. In many other situations, however, a more objective choice of the tuning parameter may be desirable. A common strategy to address this is cross-validation, which can be done effectively using the algorithm presented above.

To do cross-validation, we first randomly divide the entries of X into T mutually exclusive subsets Ω(1), Ω(2), . . . , Ω(T), for some fixed T, so that

    Ω(1) ∪ Ω(2) ∪ . . . ∪ Ω(T) = {(i, j) : 1 ≤ i < j ≤ n}.

In particular, the choice of T = 5 or 10 is often advocated in practice (see, e.g., Hastie, Tibshirani and Friedman, 2009). For each t = 1, . . . , T, we can then apply the algorithm given in the previous subsection to compute the distance shrinkage estimate with a given tuning parameter ηn based on the partial observations

    X−Ω(t) := { Xij : 1 ≤ i < j ≤ n, (i, j) ∉ Ω(t) }.

Denote by D̂(t),ηn (t = 1, . . . , T) the resulting estimates. We evaluate the suitability of a tuning parameter ηn by its cross-validation score:

    CV(ηn) = (1/T) ∑_{t=1}^{T} ∑_{(i,j)∈Ω(t)} ( Xij − [D̂(t),ηn]ij )².

The same procedure can be repeated for a sequence of values of ηn, and the value with the smallest cross-validation score is selected. The distance shrinkage estimate based on this choice of the tuning parameter is then computed from all observations to yield the final estimate.
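The cross-validation loop itself is short. The sketch below (our illustration, building on em_distance_shrinkage above) computes CV(ηn) for a single candidate value of the tuning parameter.

```python
import numpy as np

def cv_score(X, eta, n_folds=5, seed=0):
    """T-fold cross-validation score CV(eta): hold out a random fold of entry
    pairs, refit on the remaining pairs with em_distance_shrinkage, and
    accumulate the squared prediction error on the held-out pairs."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    rng.shuffle(pairs)
    score = 0.0
    for fold in np.array_split(np.asarray(pairs), n_folds):
        mask = np.ones((n, n), dtype=bool)
        np.fill_diagonal(mask, False)
        for i, j in fold:
            mask[i, j] = mask[j, i] = False          # hold this pair out
        D_hat = em_distance_shrinkage(X, mask, eta)
        score += sum((X[i, j] - D_hat[i, j]) ** 2 for i, j in fold)
    return score / n_folds
```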

5 Numerical Examples

To illustrate the practical merits of the proposed methods and the efficacy of the algorithm, we conducted several numerical experiments.

5.1 Sequence Variation of Vpu Protein Sequences

The current work was motivated in part by a recent study on the variation of Vpu (HIV-1 virus protein U) protein sequences and their relationship to the preservation of tetherin and CD4 counter-activities (Pickering et al., 2014). Viruses are known for their fast mutation, and an important task is therefore to understand the diversity within a viral population. Of particular interest in this study is a Vpu sequence repertoire derived from actively replicating plasma virus from 14 HIV-1-infected individuals. Following standard MACS criteria, five of these individuals can be classified as long-term nonprogressors, five as rapid progressors, and four as normal progressors, according to how long the progression from seroconversion to AIDS takes. A total of 304 unique amino acid sequences were obtained from this study.

We first performed pairwise alignment between these amino acid sequences using various BLOSUM substitution matrices. The results based on different substitution matrices are fairly similar; to fix ideas, we report here the analysis based on the BLOSUM62 matrix. The pairwise similarity scores {sij : 1 ≤ i ≤ j ≤ n} are converted into dissimilarity scores:

    xij = sii + sjj − 2sij,   ∀ 1 ≤ i < j ≤ n.

As mentioned earlier, X = (xij)1≤i,j≤n is not a Euclidean distance matrix. We first applied classical multidimensional scaling to X. The three dimensional embedding is given in the top left panel of Figure 4. The amino acid sequences derived from the same individual are represented by the same symbol and color. Different colors correspond to the three different classes of disease progression: long-term nonprogressors are represented in red, normal progressors in green, and rapid progressors in purple. For comparison, we also computed D̂ with various choices of the tuning parameter. Similar to the observations made by Lu et al. (2005), the corresponding embeddings are qualitatively similar over a wide range of choices of λn. A typical one is given in the top right panel of Figure 4. It is clear that both embeddings share a lot of similarities; for example, sequences derived from the same individual tend to cluster together. The key difference, however, is that the embedding corresponding to D̂ suggests an outlying sequence. We went back to the original pairwise dissimilarity scores and identified the sequence as derived from a rapid progressor. It is fairly clear from the original scores that this sequence is different from the others: the minimum dissimilarity score from this particular sequence to any other sequence is 245, whereas the largest score between any other pair of sequences is 215. The histograms of the scores between this sequence and the other sequences, and among the other sequences, are given in the bottom panel of Figure 4.

Given these observations, we now consider the analysis with the outlying sequence removed. To gain insight, we consider different choices of λn to visually inspect the Euclidean embeddings given by the proposed distance shrinkage. The embeddings given in Figure 5 correspond to λn equal to 4000, 8000, 12000 and 16000, respectively. These embeddings are qualitatively similar.


Figure 4: Three dimensional embedding for 304 amino acid sequences: the top panels are embeddings from classical multidimensional scaling and distance shrinkage, respectively. The histogram of the pairwise dissimilarity scores is given in the bottom panel; the shaded histogram corresponds to the scores between the outlying sequence and the other sequences.


Figure 5: Euclidean embedding of 303 amino acid sequences via distance shrinkage: the outlying sequence was removed from the original data, and each panel corresponds to a different choice of λn (4000, 8000, 12000 and 16000).


5.2 Simulated Examples

To further compare the proposed distance shrinkage approach with classical multidimensional scaling, we carried out several sets of simulation studies. For illustration purposes, we took the setup of the molecular conformation problem discussed earlier. In particular, we considered the problem of protein folding, the process by which a random coil folds into a physically stable three-dimensional structure with unique characteristics and functions.

We started by extracting existing data on the 3D structure of the channel-forming trans-membrane domain of the Vpu protein from HIV-1 mentioned before. The data, obtained from the protein data bank (symbol: 1PJE), contain the 3D coordinates of a total of n = 91 atoms. The exact Euclidean distance matrix D was then calculated from these coordinates. We note that in this case the embedding dimension is known to be three. We generated observations xij by adding a measurement error εij ∼ N(0, σ²) for 1 ≤ i < j ≤ n. We considered three different values of σ², namely 0.05, 0.25 and 0.5, representing relatively high, medium and low signal-to-noise ratios. For each value of σ², we simulated one hundred datasets and computed for each dataset the Euclidean distance matrix corresponding to classical multidimensional scaling and to distance shrinkage. We evaluated the performance of each method by Kruskal's stress, defined as ∥D̂ − D∥F/∥D∥F. The results are summarized in Table 1.

Signal-to-Noise Ratio   Method               Mean    Standard error
High                    Distance Shrinkage   0.010   2.0e-04
                        Classical MDS        0.078   9.3e-04
Medium                  Distance Shrinkage   0.024   4.8e-04
                        Classical MDS        0.185   2.5e-03
Low                     Distance Shrinkage   0.035   8.4e-04
                        Classical MDS        0.301   3.9e-03

Table 1: Kruskal's stress for 1PJE data with measurement error.

To better appreciate the difference between the two methods, Figure 6 gives the ribbon plot of the protein backbone structure corresponding to the true Euclidean distance matrix and to the estimated ones from a typical simulation run at the different signal-to-noise ratios. It is noteworthy that the improvement of distance shrinkage over classical multidimensional scaling becomes more evident at higher noise levels.

Our theoretical analysis suggests better performance for a larger number of atoms. To further illustrate this effect of n, we repeated the previous experiment for the HIV-1 virus protein U cytoplasmic domain (protein data bank symbol: 2K7Y), consisting of n = 671 atoms. We simulated data in the same fashion as before, and the Kruskal stress, based on one hundred simulated datasets for each value of σ², is reported in Table 2. The performance compares favorably with that for 1PJE.

Signal-to-Noise Ratio   Method               Mean       Standard error
High                    Distance Shrinkage   1.66e-04   2.70e-07
                        Classical MDS        3.2e-03    4.84e-06
Medium                  Distance Shrinkage   8.32e-04   1.48e-06
                        Classical MDS        1.61e-02   2.45e-05
Low                     Distance Shrinkage   1.7e-03    3.05e-06
                        Classical MDS        3.22e-02   5.28e-05

Table 2: Kruskal's stress for 2K7Y data with measurement error.

To demonstrate the efficacy of cross-validation as a tuning method, Figure 7 shows the true Kruskal stress as a function of the tuning parameter λ, along with the five-fold cross-validation scores, for a typical simulated dataset under each of the three levels of signal-to-noise ratio. These plots were generated by computing the distance matrix estimate for a series of values of the tuning parameter. It is clear from these plots that the tuning parameter selected by cross-validation is fairly close to the optimal choice that minimizes the true Kruskal stress.

In the next set of simulations, we assess the effect of missing data on the proposed distance shrinkage estimate. As before, we take the 3D coordinate data from the protein data bank for five different proteins with different numbers of atoms. Pairwise distances were first computed for each protein. To mimic typical NMR experiments, we assume that the larger distances are missing; in particular, we consider cases where the top 50%, 25% or 10% of the distances are unobservable. To the observed distances, independent Gaussian measurement errors with mean 0 and variance 0.5 were added. We ran the proposed distance shrinkage estimate on the simulated data. We experimented with a range of tuning parameter choices, and the performance is fairly similar across them. The results are summarized in the following table. As expected, the method performs better as the amount of missing data is reduced.


(a) Distance Shrinkage, high signal-to-noise ratio. (b) Classical MDS, high signal-to-noise ratio. (c) Distance Shrinkage, medium signal-to-noise ratio. (d) Classical MDS, medium signal-to-noise ratio. (e) Distance Shrinkage, low signal-to-noise ratio. (f) Classical MDS, low signal-to-noise ratio.

Figure 6: Ribbon plot of the 1PJE protein backbone structure: the true structure is represented in gold, whereas the structure corresponding to the estimated Euclidean distance matrix is given in blue. The left panels are for the distance shrinkage estimate, whereas the right panels are for classical multidimensional scaling. Regions where distance shrinkage shows visible improvement are circled in red in the right panels.


Figure 7: Comparison of Kruskal stress and cross-validation scores for simulated 2K7Y data. The right column gives plots of the Kruskal stress as a function of the tuning parameter λ for the different signal-to-noise ratios, and the left column gives plots of the cross-validation scores. In each panel, the minimizing tuning parameter is marked with a grey vertical line.


The distance shrinkage estimate works reasonably well even with 10% of missing data.

Table 3: Effect of Missing Data

PDB ID   # of Atoms   Kruskal's Stress
                      50% Missing   25% Missing   10% Missing
1PTQ     402          .57           .35           .18
1HOE     558          .56           .33           .15
1PHT     811          .56           .34           .17
1AX8     1003         .57           .36           .18

Finally, to further demonstrate the robustness of the approach to non-Gaussian measurement errors, we generated pairwise distance scores between the 671 atoms following Gamma distributions:

    xij ∼ Ga(dij, 1),   ∀ 1 ≤ i < j ≤ 671,

so that both the mean and the variance of xij equal dij, where dij is the true squared distance between the ith and jth atoms. We again applied both classical multidimensional scaling and distance shrinkage to estimate the true distance matrix and reconstruct the 3D folding structure. The result from a typical simulated dataset is given in Figure 8.

Figure 8: Ribbon plot of the 2K7Y protein backbone structure: the true structure, and the structures corresponding to classical multidimensional scaling and the distance shrinkage estimate, are represented in gold, blue and pink, respectively.

References

[1] Chen, L. and Buja, A. (2009), Local multidimensional scaling for nonlinear dimension reduction, graph drawing, and proximity analysis, Journal of the American Statistical Association, 104, 209-219.

[2] Chen, L. and Buja, A. (2013), Stress functions for nonlinear dimension reduction, proximity analysis, and graph drawing, Journal of Machine Learning Research, 14, 1145-1173.

[3] Dattorro, J. (2013), Convex Optimization and Euclidean Distance Geometry, Palo Alto: Meboo.

[4] Durbin, R., Eddy, S., Krogh, A. and Mitchison, G. (1998), Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids, Cambridge: Cambridge University Press.

[5] Dykstra, R. (1983), An algorithm for restricted least squares regression, Journal of the American Statistical Association, 78, 837-842.

[6] Escalante, R. and Raydan, M. (2011), Alternating Projection Methods, Philadelphia: Society for Industrial and Applied Mathematics.

[7] Glunt, W., Hayden, T., Hong, S. and Wells, J. (1990), An alternating projection algorithm for computing the nearest Euclidean distance matrix, SIAM Journal on Matrix Analysis and Applications, 11, 589-600.

[8] Hastie, T., Tibshirani, R. and Friedman, J. (2009), The Elements of Statistical Learning, New York: Springer.

[9] Hayden, T.L. and Wells, J. (1988), Approximation by matrices positive semidefinite on a subspace, Linear Algebra and Its Applications, 109, 115-130.

[10] Lu, F., Keles, S., Wright, S. and Wahba, G. (2005), Framework for kernel regularization with application to protein clustering, Proceedings of the National Academy of Sciences, 102, 12332-12337.

[11] Lu, Z., Monteiro, R. and Yuan, M. (2012), Convex optimization methods for dimension reduction and coefficient estimation in multivariate linear regression, Mathematical Programming, 131, 163-194.

[12] Negahban, S. and Wainwright, M. (2011), Estimation of (near) low-rank matrices with noise and high-dimensional scaling, The Annals of Statistics, 39(2), 1069-1097.

[13] Pickering, S., Hue, S., Kim, E., Reddy, S., Wolinsky, S. and Neil, S. (2014), Preservation of Tetherin and CD4 counter-activities in circulating Vpu alleles despite extensive sequence variation within HIV-1 infected individuals, PLOS Pathogens, 10(1), e1003895.

[14] Pouzet, M. (1979), Note sur le probleme de Ulam, Journal of Combinatorial Theory, Series B, 27(3), 231-236.

[15] Rohde, A. and Tsybakov, A. (2011), Estimation of high-dimensional low-rank matrices, The Annals of Statistics, 39(2), 887-930.

[16] Roy, A. (2010), Minimal Euclidean representations of graphs, Discrete Mathematics, 310(4), 727-733.

[17] Scholkopf, B. and Smola, A. (1998), Nonlinear component analysis as a kernel eigenvalue problem, Neural Computation, 10, 1299-1319.

[18] Scholkopf, B. and Smola, A. (2002), Learning with Kernels, Cambridge: MIT Press.

[19] Schoenberg, I.J. (1935), Remarks to Maurice Frechet's article "Sur la definition axiomatique d'une classe d'espaces distancies vectoriellement applicable sur l'espace de Hilbert", Annals of Mathematics, 38, 724-732.

[20] Sinai, Y. and Soshnikov, A. (1998), A refinement of Wigner's semi-circle law in a neighborhood of the spectrum edge for random symmetric matrices, Functional Analysis and Its Applications, 32(2), 114-131.

[21] Szekely, G.J., Rizzo, M.L. and Bakirov, N.K. (2007), Measuring and testing independence by correlation of distances, The Annals of Statistics, 35, 2769-2794.

[22] Tenenbaum, J., De Silva, V. and Langford, J. (2000), A global geometric framework for nonlinear dimensionality reduction, Science, 290(5500), 2319-2323.

[23] Toh, K.C., Todd, M.J. and Tutuncu, R.H. (1999), SDPT3 – a Matlab software package for semidefinite programming, Optimization Methods and Software, 11, 545-581.

[24] Tutuncu, R.H., Toh, K.C. and Todd, M.J. (2003), Solving semidefinite-quadratic-linear programs using SDPT3, Mathematical Programming, Series B, 95, 189-217.

[25] Venna, J. and Kaski, S. (2006), Local multidimensional scaling, Neural Networks, 19, 889-899.

[26] Wuthrich, K. (1986), NMR of Proteins and Nucleic Acids, New York: John Wiley & Sons.

[27] Young, G. and Householder, A.S. (1938), Discussion of a set of points in terms of their mutual distances, Psychometrika, 3, 19-22.

[28] Yuan, M., Ekici, A., Lu, Z. and Monteiro, R. (2007), Dimension reduction and coefficient estimation in multivariate linear regression, Journal of the Royal Statistical Society: Series B, 69, 329-346.


Appendix – Proofs

Proof of Theorem 1. Denote M0 = −JDJ/2. We first show that M0 ∈ M(D). Note first that J(ei − ej) = ei − ej. Therefore,

    ⟨M0, Bij⟩ = −(1/2)(ei − ej)⊤JDJ(ei − ej) = −(1/2)(ei − ej)⊤D(ei − ej) = dij,

where the last equality follows from the facts that D is symmetric and diag(D) = 0. Together with the fact that M0 ⪰ 0 (Schoenberg, 1935; Young and Householder, 1938), this implies that M0 ∈ M(D).

Next, we show that trace(M0) ≤ trace(M) for any M ∈ M(D). To this end, observe that

    D = T (M) = diag(M)1⊤ + 1diag(M)⊤ − 2M.

Then

    trace(M0) = trace(−JDJ/2)
              = (1/2) trace[ (I − 11⊤/n)(2M − diag(M)1⊤ − 1diag(M)⊤)(I − 11⊤/n) ]
              = (1/2) trace(2M − diag(M)1⊤ − 1diag(M)⊤)
                  − (1/n) 1⊤(2M − diag(M)1⊤ − 1diag(M)⊤)1
                  + (1/(2n²)) trace[ 11⊤(2M − diag(M)1⊤ − 1diag(M)⊤)11⊤ ]
              = −(1/(2n)) 1⊤(2M − diag(M)1⊤ − 1diag(M)⊤)1
              = trace(M) − (1/n) 1⊤M1.

The positive semi-definiteness of M ensures that 1⊤M1 ≥ 0, which implies that M0 has the minimum trace in M(D). We now show that it is also the only one.

Assume to the contrary that there exists an M ∈ M(D) such that M ≠ M0 yet trace(M) = trace(M0). By the previous calculation, we then have 1⊤M1 = 0. Recall that M ⪰ 0. The fact that 1⊤M1 = 0 necessarily implies that 1 ∈ ker(M). As a result, M = JMJ, and

    M − M0 = J(M − M0)J.


On the other hand,

    ⟨M, Bij⟩ = ⟨M0, Bij⟩ = dij,   ∀ i < j.

Therefore,

    ⟨J(M − M0)J, Bij⟩ = ⟨M − M0, Bij⟩ = 0,   ∀ i < j.

It is not hard to see that

    {Bij : i < j} ∪ {eie⊤i : 1 ≤ i ≤ n}

forms a basis of the collection of n × n symmetric matrices. In other words, there exist αij (1 ≤ i ≤ j ≤ n) such that

    M − M0 = ∑_{1≤i<j≤n} αij Bij + ∑_{i=1}^{n} αii eie⊤i.

Recall that 1 ∈ ker(M) ∩ ker(M0). Hence

    (M − M0)1 = [α11, . . . , αnn]⊤ = 0.

In other words,

    M − M0 = ∑_{1≤i<j≤n} αij Bij.

Thus

    ∥M − M0∥²F = ∥J(M − M0)J∥²F = ∑_{1≤i<j≤n} αij ⟨J(M − M0)J, Bij⟩ = 0.

This obviously contradicts the assumption that M ≠ M0.

The second statement follows from the same argument. Note that PP⊤ ∈ M(D). Because the embedding points are centered, we have 1⊤PP⊤1 = 0. The previous argument then implies that PP⊤ = M0.

Proof of Theorem 3. Following Theorem 1, D̂ = T (K̂) can be equivalently expressed as

    D̂ = argmin_{M∈Dn} { (1/2)∥X − M∥²F + λn trace(−(1/2)JMJ) }.   (10)

Recall that J = I − 11⊤/n. Observe that D0 = (n − 1)I − nJ. Therefore, for any M ∈ Dn,

    ∥( X − (λn/(2n))D0 ) − M∥²F
        = ∥X − M∥²F + (λn/n)⟨M, D0⟩ + (terms not involving M)
        = ∥X − M∥²F + (λn/n)⟨M, (n − 1)I − nJ⟩ + (terms not involving M)
        = ∥X − M∥²F − λn⟨M, J⟩ + (terms not involving M),

where the last equality follows from the fact that any distance matrix is hollow, i.e., its diagonal entries are zero, and hence ⟨M, I⟩ = 0. Because J is idempotent,

    ⟨M, J⟩ = ⟨M, J²⟩ = trace(JMJ).

Therefore,

    PDn( X − (λn/(2n))D0 ) = argmin_{M∈Dn} { (1/2)∥X − M∥²F − (λn/2) trace(JMJ) }
                           = argmin_{M∈Dn} { (1/2)∥X − M∥²F + λn trace(−(1/2)JMJ) },

which, in the light of (10), implies the desired statement.

Proof of Theorem 5. By Theorem 3, D̂ = PDn(X − (λn/(2n))D0). Write ηn = λn/(2n) for simplicity. Recall that for any M ∈ Rn×n, its projection onto the closed convex set Dn, PDn(M), can be characterized by the so-called Kolmogorov criterion:

    ⟨A − PDn(M), M − PDn(M)⟩ ≤ 0,   ∀A ∈ Dn.

See, e.g., Escalante and Raydan (2011). In particular, taking M = X − ηnD0 yields

    ⟨A − D̂, D − D̂⟩ ≤ ⟨X − D − ηnD0, D̂ − A⟩.

A classical result in distance geometry by Schoenberg (1935) indicates that a distance matrix is conditionally negative semi-definite on the set

    Xn = {x ∈ Rn : x⊤1 = 0},

that is, x⊤Mx ≤ 0 for any x ∈ Xn; see also Young and Householder (1938). In other words, if M ∈ Dn, then the so-called Schoenberg transform JMJ is negative semi-definite, where, as before, J = I − 11⊤/n.

Let V be the matrix of eigenvectors of JAJ corresponding to its nonzero eigenvalues, and let V⊥ be an orthonormal basis of the orthogonal complement of the linear subspace spanned by 1 and V, so that [1/√n, V, V⊥] forms an orthonormal basis of Rn. For any symmetric matrix M, write

    M = P0M + P1M,

where

    P1M = V⊥V⊤⊥ M V⊥V⊤⊥

and

    P0M = M − P1M = [1/√n, V][1/√n, V]⊤ M [1/√n, V][1/√n, V]⊤
          + V⊥V⊤⊥ M [1/√n, V][1/√n, V]⊤ + [1/√n, V][1/√n, V]⊤ M V⊥V⊤⊥.

Therefore,

    ⟨X − D, D̂ − A⟩ = ⟨P0(X − D), P0(D̂ − A)⟩ + ⟨P1(X − D), P1(D̂ − A)⟩
                   = ⟨P0(X − D), P0(D̂ − A)⟩ + ⟨P1(X − D), P1D̂⟩
                   ≤ ∥P0(X − D)∥ ∥P0(D̂ − A)∥∗ + ∥P1(X − D)∥ ∥P1D̂∥∗,

where in the last inequality we used the fact that for any matrices M1, M2 ∈ Rn×n,

    ⟨M1, M2⟩ ≤ ∥M1∥ ∥M2∥∗,

and ∥ · ∥ and ∥ · ∥∗ denote the matrix spectral and nuclear norms, respectively. It is clear that

    ∥P1(X − D)∥ ≤ ∥X − D∥   and   ∥P0(X − D)∥ ≤ 2∥X − D∥.

Then,

    ⟨X − D, D̂ − A⟩ ≤ ∥X − D∥ ( 2∥P0(D̂ − A)∥∗ + ∥P1D̂∥∗ ).

On the other hand, recall that both D̂ and A are hollow and D0 = (n − 1)I − nJ. Thus,

    ⟨D0, D̂ − A⟩ = n⟨A − D̂, J⟩
                = n trace(J(A − D̂)J)
                = −n trace(V V⊤(D̂ − A)V V⊤) − n trace(P1D̂)
                = −n trace(V V⊤(D̂ − A)V V⊤) + n∥P1D̂∥∗
                ≥ −n∥V V⊤(D̂ − A)V V⊤∥∗ + n∥P1D̂∥∗
                ≥ −n∥P0(D̂ − A)∥∗ + n∥P1D̂∥∗,

where the last equality follows from the fact that P1D̂ is negative semi-definite. Taking nηn ≥ ∥X − D∥ yields

    ⟨X − D − ηnD0, D̂ − A⟩ ≤ 3nηn∥P0(D̂ − A)∥∗.


Note that, by the Cauchy–Schwarz inequality, for any M ∈ Rn×n,

    ∥M∥∗ ≤ √rank(M) ∥M∥F.

Therefore,

    ∥P0(D̂ − A)∥∗ ≤ √(rank(JAJ) + 1) ∥P0(D̂ − A)∥F ≤ √(rank(JAJ) + 1) ∥D̂ − A∥F = √(dim(A) + 1) ∥D̂ − A∥F,

where the last equality follows from the fact that dim(A) = rank(JAJ) for any Euclidean distance matrix A; see, e.g., Schoenberg (1935) and Young and Householder (1938). As a result,

    ⟨A − D̂, D − D̂⟩ ≤ 3nηn √(dim(A) + 1) ∥D̂ − A∥F.

Simple algebraic manipulation shows that

    ⟨A − D̂, D − D̂⟩ = (1/2)( ∥D̂ − D∥²F + ∥D̂ − A∥²F − ∥A − D∥²F ).

Thus,

    ∥D̂ − D∥²F + ∥D̂ − A∥²F ≤ ∥A − D∥²F + 6nηn √(dim(A) + 1) ∥D̂ − A∥F,

which implies that

    ∥D̂ − D∥²F ≤ ∥A − D∥²F + 6nηn √(dim(A) + 1) ∥D̂ − A∥F − ∥D̂ − A∥²F
             = ∥A − D∥²F + 9n²η²n(dim(A) + 1) − ( ∥D̂ − A∥F − 3nηn √(dim(A) + 1) )²
             ≤ ∥A − D∥²F + 9n²η²n(dim(A) + 1).

This completes the proof.

Proof of Corollary 7. Observe first that

    D̂r = argmin_{M∈Dn(r)} ∥J(M − D̂)J∥²F.

Therefore,

    ∥J(D̂r − D)J∥²F ≤ 2∥J(D̂r − D̂)J∥²F + 2∥J(D̂ − D)J∥²F
                   ≤ 2∥J(D̂r − D̂)J∥²F + 2∥D̂ − D∥²F
                   ≤ 4∥J(Dr − D)J∥²F + 4∥J(D − D̂)J∥²F + 2∥D̂ − D∥²F
                   ≤ 4 min_{M∈Dn(r)} ∥J(D − M)J∥²F + 6∥D̂ − D∥²F.

On the other hand, taking M = Dr in Theorem 5 yields

    (1/n²)∥D̂ − D∥²F ≤ (1/n²)∥Dr − D∥²F + 9η²n(r + 1)
                    = (1/n²) min_{M∈Dn(r)} ∥J(D − M)J∥²F + 9η²n(r + 1),

where, as before, ηn = λn/(2n). Therefore,

    (1/n²)∥J(D̂r − D)J∥²F ≤ (10/n²) min_{M∈Dn(r)} ∥J(D − M)J∥²F + 54η²n(r + 1),

which completes the proof.
