Interlacing Families and the Kadison–Singer Problem



Contents

1 Introduction

2 Stable Polynomials
  2.1 Real Rootedness and Real Stability
  2.2 Barrier Argument

3 The Kadison–Singer Problem
  3.1 C*-algebras and States
  3.2 Kadison–Singer Problem
  3.3 Paving Conjecture
  3.4 Proof of the Paving Conjecture

4 Ramanujan Graphs
  4.1 Expander Graphs
  4.2 2-lifts and the Construction of Ramanujan Graphs
  4.3 Matching Polynomials
  4.4 Existence of bipartite Ramanujan graphs of arbitrary degree
  4.5 Generic bound

Bibliography


Chapter 1

Introduction

Given m balls, labelled by the set [m] = {1, . . . , m}, and any collection of m subsets S_1, . . . , S_m ⊂ [m] of these balls, can we colour the balls red and blue in such a way that the numbers of red and blue balls approximately agree in all of the subsets S_k? Ideally, we would colour the balls all half-red, half-blue. But how close can we come to this situation with a discrete colouring?

This would be a typical question from the field of discrepancy theory. An answer to the above question, due to Spencer [16], is that we can find a colouring such that in every S_k the difference between the numbers of red and blue balls is at most 6√m.

A result of a similar kind lies at the core of the sequence of two papers [10, 11] by Marcus, Spielman and Srivastava (see also the review article [12] by the same authors and the blog article [17] by Srivastava) that this essay is about. Instead of partitioning a set of m balls, we want to partition m vectors v_1, . . . , v_m ∈ R^n which are small (∥v_k∥² ≤ ϵ for all k) and decompose the identity matrix 1 in the sense that

\[ v_1 v_1^* + \cdots + v_m v_m^* = \mathbf{1} \]

into two subsets {v_k}_{k∈S_1} ∪ {v_k}_{k∈S_2} = {v_1, . . . , v_m} in such a way that

\[ \sum_{k \in S_j} v_k v_k^* \approx \frac{1}{2}\,\mathbf{1} \]

for j = 1, 2 in some approximate sense. Intuitively, it should be clear that the smallness condition ∥v_k∥² ≤ ϵ is necessary for such a result to hold, since any single vector v_k of relatively big norm makes any partition rather unbalanced. As a special case of the discrepancy result Corollary 2.1, which lies at the core of chapter two, we can prove

\[ \Bigl\| \sum_{k \in S_j} v_k v_k^* \Bigr\| \le \Bigl( \frac{1}{\sqrt{2}} + \sqrt{\epsilon} \Bigr)^2 \tag{1.1} \]

and therefore

\[ -\bigl(\sqrt{2\epsilon} + \epsilon\bigr)\mathbf{1} \;\le\; \sum_{k \in S_j} v_k v_k^* - \frac{1}{2}\,\mathbf{1} \;\le\; \bigl(\sqrt{2\epsilon} + \epsilon\bigr)\mathbf{1} \]

for j = 1, 2, where A ≤ B means that the matrix B − A is positive semidefinite.
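The algebra behind the second display is the expansion (1/√2 + √ϵ)² = 1/2 + √(2ϵ) + ϵ; since the two subsets sum to the identity, applying eq. (1.1) to both S_1 and S_2 then gives the two-sided estimate. This expansion can be checked symbolically (a quick sketch, not part of the essay; sympy assumed available):

```python
# Sanity check (not from the essay): expanding the bound from eq. (1.1)
# shows (1/sqrt(2) + sqrt(eps))^2 = 1/2 + sqrt(2*eps) + eps, which yields
# the two-sided estimate around (1/2)·1.
import sympy as sp

eps = sp.symbols('epsilon', positive=True)
bound = (1 / sp.sqrt(2) + sp.sqrt(eps)) ** 2
assert sp.simplify(bound - (sp.Rational(1, 2) + sp.sqrt(2 * eps) + eps)) == 0
```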

An often successful approach for proving the bound from eq. (1.1) would be to pick the partition randomly, in the sense that we consider independent Bernoulli distributed random variables ϵ_1, . . . , ϵ_m and compute the probability of ∥∑_{k=1}^m ϵ_k v_k v_k^*∥ being small. In the present case, however, usually successful methods like matrix Chernoff bounds or more sophisticated concentration results all only give highly probable bounds on eq. (1.1) growing as O(√(ϵ log n)) with the dimension n (see, for example, [20, 21]). At the expense of replacing high probability with positive probability, Marcus, Spielman and Srivastava managed to improve this bound to be constant in the dimension n. Phrased probabilistically, the main result of the sequence of papers is the following theorem:

Theorem 1.1. Let ϵ > 0, m, n ∈ N, let (Ω, F, P) be a finite probability space and let v_1, . . . , v_m be independent random C^n-vectors such that E∥v_k∥² ≤ ϵ for all k and

\[ \sum_{k=1}^m \mathbb{E}\, v_k v_k^* = \mathbf{1}. \]

Then

\[ \mathbb{P}\biggl( \Bigl\| \sum_{k=1}^m v_k v_k^* \Bigr\| \le \bigl(1 + \sqrt{\epsilon}\bigr)^2 \biggr) > 0. \]

Note that this cannot be proved via the first moment method, since E∥∑_{k=1}^m v_k v_k^*∥ grows with the dimension n. Instead, Marcus, Spielman and Srivastava used the characterization of ∥∑_{k=1}^m v_k v_k^*∥ as the largest root of the characteristic polynomial

\[ x \mapsto P_{\sum_{k=1}^m v_k v_k^*}(x) := \det\Bigl( x\mathbf{1} - \sum_{k=1}^m v_k v_k^* \Bigr) \]

and showed that the largest root of the expected characteristic polynomial E P_{∑_{k=1}^m v_k v_k^*} is at most (1 + √ϵ)². To conclude the proof they used interlacing properties of characteristic polynomials to show that – while the determinant generally behaves very non-linearly – the largest root of E P_{∑_{k=1}^m v_k v_k^*} still is an average of the largest roots of the different realizations of P_{∑_{k=1}^m v_k v_k^*}.

This innovative method enabled Marcus, Spielman and Srivastava to resolve two long-open problems from rather different areas of mathematics – they gave a positive answer to the Kadison–Singer problem and managed to prove the existence of bipartite Ramanujan graphs of arbitrary degree.

The purpose of this essay is to understand the method of interlacing polynomials, as well as how it can be applied to solve the two famous and unrelated problems. The author does, of course, not claim any originality and follows the original papers [10, 11] as well as subsequently published simplifications and alternative accounts [19, 15, 18, 22, 3] closely.

The second chapter of this essay establishes the method of interlacing polynomials, while in the third and fourth chapters the two problems are (briefly) introduced and resolved using results from chapter two.


Chapter 2

Stable Polynomials

The goal of this chapter is to give a self-contained proof of Theorem 1.1. Before starting with the proof, however, we want to prove the following deterministic corollary, already advertised in the introduction on discrepancy.

Corollary 2.1. Let n, m ∈ N and let u_1, . . . , u_m ∈ C^n be vectors with u_1u_1^* + · · · + u_mu_m^* = 1 and ∥u_k∥² ≤ ϵ for all k ∈ [m]. Then for any r ∈ N there exists a partition [m] = S_1 ∪ · · · ∪ S_r such that

\[ \Bigl\| \sum_{k \in S_j} u_k u_k^* \Bigr\| \le \Bigl( \frac{1}{\sqrt{r}} + \sqrt{\epsilon} \Bigr)^2 \]

for all j ∈ [r].

Proof. Take Ω = [r]^m equipped with the uniform distribution and set v_k : Ω → C^n ⊗ C^r to be v_k(ω) := √r · u_k ⊗ e_{ω_k} for k ∈ [m], with e_1, . . . , e_r being the standard basis of C^r. Then E∥v_k∥² = E r∥u_k∥² ≤ rϵ and

\[ \sum_{k=1}^m \mathbb{E}\, v_k v_k^* = r \sum_{k=1}^m (u_k u_k^*) \otimes \bigl( \mathbb{E}_\omega\, e_{\omega_k} e_{\omega_k}^* \bigr) = r \sum_{k=1}^m (u_k u_k^*) \otimes \Bigl( \frac{1}{r}\,\mathbf{1} \Bigr) = \mathbf{1} \]

and thus by Theorem 1.1 for some ω ∈ [r]^m we have, for every j ∈ [r],

\[ \Bigl\| \sum_{k \in [m],\, \omega_k = j} r\, u_k u_k^* \Bigr\| \le \Bigl\| \sum_{k \in [m],\, \omega_k = 1} r\, u_k u_k^* \otimes e_1 e_1^* + \cdots + \sum_{k \in [m],\, \omega_k = r} r\, u_k u_k^* \otimes e_r e_r^* \Bigr\| = \Bigl\| \sum_{k=1}^m v_k(\omega) v_k(\omega)^* \Bigr\| \le \bigl(1 + \sqrt{r\epsilon}\bigr)^2, \]

where the first inequality holds since the middle operator is block diagonal with blocks ∑_{k: ω_k = j} r u_k u_k^*. Dividing by r, this implies

\[ \Bigl\| \sum_{k \in [m],\, \omega_k = j} u_k u_k^* \Bigr\| \le \Bigl( \frac{1}{\sqrt{r}} + \sqrt{\epsilon} \Bigr)^2 \]

for all j ∈ [r].
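The key identity of the tensor construction – that ∑_k E v_k v_k^* = 1 on C^n ⊗ C^r whenever ∑_k u_k u_k^* = 1 on C^n – can be verified numerically. In the sketch below (not from the essay) the vectors u_k are a hypothetical test instance, obtained by normalizing random columns so that they decompose the identity:

```python
# Numerical check (a sketch, not from the essay) of the identity
# Σ_k E v_k v_k^* = 1 for v_k(ω) = sqrt(r)·u_k ⊗ e_{ω_k} with ω_k uniform on [r].
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 2, 6, 3
U = rng.standard_normal((n, m))
S = U @ U.T
w, V = np.linalg.eigh(S)
U = (V @ np.diag(w ** -0.5) @ V.T) @ U   # now the columns u_k satisfy Σ u_k u_k^T = 1

total = np.zeros((n * r, n * r))
for k in range(m):
    uu = np.outer(U[:, k], U[:, k])
    E_ee = np.eye(r) / r                 # E e_{ω_k} e_{ω_k}^T for uniform ω_k
    total += r * np.kron(uu, E_ee)
assert np.allclose(total, np.eye(n * r))
```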

Before going into the details, we will give a rough sketch of the proof strategy for Theorem 1.1. The proof of the probabilistic bound on the operator norm of a matrix utilizes a rather unusual approach. It uses the interpretation of the operator norm of the hermitian matrix ∑_{k=1}^m v_k v_k^* as the largest root of its characteristic polynomial

\[ x \mapsto P_{\sum_{k=1}^m v_k v_k^*}(x) := \det\Bigl( x\mathbf{1} - \sum_{k=1}^m v_k v_k^* \Bigr). \tag{2.1} \]

The proof will proceed in two stages. Firstly, we prove the claimed bound for the expected polynomial:

Proposition 2.2. Under the assumptions of Theorem 1.1 the largest root of the expected characteristic polynomial

\[ x \mapsto \mathbb{E}\, P_{\sum_{k=1}^m v_k v_k^*}(x) = \mathbb{E} \det\Bigl( x\mathbf{1} - \sum_{k=1}^m v_k v_k^* \Bigr) \]

satisfies

\[ \operatorname{maxroot} \mathbb{E}\, P_{\sum_{k=1}^m v_k v_k^*} \le \bigl(1 + \sqrt{\epsilon}\bigr)^2. \]

Secondly, we will relate the maximal root of E P_{∑_{k=1}^m v_k v_k^*} to the maximal roots of the realizations of P_{∑_{k=1}^m v_k v_k^*}:

Proposition 2.3. Let m, n ∈ N, let (Ω, F, P) be a finite probability space and let v_1, . . . , v_m be independent random C^n-vectors. Then the polynomial E P_{∑_{k=1}^m v_k v_k^*} has only real roots and it holds that

\[ \min \operatorname{maxroot} P_{\sum_{k=1}^m v_k v_k^*} \;\le\; \operatorname{maxroot} \mathbb{E}\, P_{\sum_{k=1}^m v_k v_k^*} \;\le\; \max \operatorname{maxroot} P_{\sum_{k=1}^m v_k v_k^*}, \]

where the minimum and maximum are taken over all values the random vectors v_k take with positive probability. In particular,

\[ \Bigl\| \sum_{k=1}^m v_k v_k^* \Bigr\| \le \operatorname{maxroot} \mathbb{E}\, P_{\sum_{k=1}^m v_k v_k^*} \]

with positive probability.

Remark 2.4. Note that this result is rather non-trivial and relies on specific properties of characteristic polynomials. In general, roots of polynomials need not behave well under convex combinations. For example, the average of polynomials with only real roots can have complex roots. Also note that the expected characteristic polynomial is, in general, not a characteristic polynomial.

Clearly both propositions together imply the theorem. We shall begin with the proof of Proposition 2.3 by showing that characteristic polynomials exhibit certain interlacing properties.

2.1 Real Rootedness and Real Stability

We begin by better understanding the characteristic polynomial of sums of rank-one matrices.

Lemma 2.5. The characteristic polynomial of the sum of rank-one matrices ∑_{k=1}^m v_k v_k^* can be written as

\[ \det\Bigl( z\mathbf{1} - \sum_{k=1}^m v_k v_k^* \Bigr) = \mu[v_1 v_1^*, \ldots, v_m v_m^*](z) \]


where

\[ \mu[A_1, \ldots, A_m](z) := \Biggl[ \prod_{k=1}^m \Bigl( 1 - \frac{\partial}{\partial z_k} \Bigr) \Biggr] \det\Bigl( z\mathbf{1} + \sum_{k=1}^m z_k A_k \Bigr) \Bigg|_{z_1 = \cdots = z_m = 0} \tag{2.2} \]

is the so-called mixed characteristic polynomial of positive semidefinite matrices A_1, . . . , A_m.

Proof. Since for any invertible matrix A and vector v we have

\[ \det(A - vv^*) = \det(A)\det(\mathbf{1} - A^{-1}vv^*) = (1 - v^*A^{-1}v)\det A = (1 - \partial/\partial z)(1 + z\, v^*A^{-1}v)\big|_{z=0} \det A = (1 - \partial/\partial z)\det(A + zvv^*)\big|_{z=0}, \]

the same formula holds true for any matrix A by density of invertible matrices. Applying this formula iteratively then proves the lemma.
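Lemma 2.5 can be sanity-checked numerically. Because det(z·1 + ∑ z_k v_k v_k^*) is affine in each z_k (by the rank-one determinant lemma used in the proof), the operator (1 − ∂_k)(·)|_{z_k=0} can be evaluated from just two sample points, giving an inclusion–exclusion formula for μ. A sketch with arbitrary test vectors (not from the essay):

```python
# Numerical check of Lemma 2.5 (a sketch, not from the essay). Because
# f(z, z_1, …, z_m) = det(z·1 + Σ z_k v_k v_k^T) is affine in each z_k,
# (1 − ∂_k)f|_{z_k=0} = 2·f(z_k=0) − f(z_k=1), so μ can be evaluated by
# inclusion–exclusion over {0, 1}^m.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 4
vs = rng.standard_normal((m, n))

def char_poly(z):
    # det(z·1 − Σ v_k v_k^T): the ordinary characteristic polynomial.
    return np.linalg.det(z * np.eye(n) - sum(np.outer(v, v) for v in vs))

def mixed_char(z):
    # μ[v_1v_1^T, …, v_mv_m^T](z) from eq. (2.2) via the two-point trick.
    total = 0.0
    for eps in itertools.product([0, 1], repeat=m):
        weight = np.prod([2 if e == 0 else -1 for e in eps])
        M = z * np.eye(n) + sum(e * np.outer(v, v) for e, v in zip(eps, vs))
        total += weight * np.linalg.det(M)
    return total

for z in (0.5, 1.0, 2.0):
    assert abs(char_poly(z) - mixed_char(z)) < 1e-8
```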

While the determinant of some matrix generally depends in a very non-linear fashion on the matrix, the mixed characteristic polynomial depends affine-linearly on rank-one matrices:

Lemma 2.6 (Affine-linearity of mixed characteristic polynomials). Let λ_1, . . . , λ_l > 0 with λ_1 + · · · + λ_l = 1, let A_1, . . . , A_{m−1} be positive semidefinite matrices and let w_1, . . . , w_l be vectors of the corresponding dimension. Then

\[ \sum_{j=1}^l \lambda_j\, \mu[A_1, \ldots, A_{m-1}, w_j w_j^*] = \mu[A_1, \ldots, A_{m-1}, \lambda_1 w_1 w_1^* + \cdots + \lambda_l w_l w_l^*] \]

and the same holds true in the other entries, since μ is invariant under permutations of the matrices. In particular, if the vectors v_1, . . . , v_m are chosen independently at random and assume only finitely many values, we have that

\[ \mathbb{E} \det\Bigl( z\mathbf{1} - \sum_{k=1}^m v_k v_k^* \Bigr) = \mu[\mathbb{E}\, v_1 v_1^*, \ldots, \mathbb{E}\, v_m v_m^*](z). \tag{2.3} \]

Proof. For invertible matrices A we find

\[ \sum_{j=1}^l \lambda_j \det(A + z w_j w_j^*) = \sum_{j=1}^l \lambda_j \det(A)\bigl( 1 + z \operatorname{Tr}(A^{-1} w_j w_j^*) \bigr) = \det(A)\Biggl[ 1 + z \operatorname{Tr}\Bigl( A^{-1} \sum_{j=1}^l \lambda_j w_j w_j^* \Bigr) \Biggr] = \det\Bigl( A + z \sum_{j=1}^l \lambda_j w_j w_j^* \Bigr) + O(z^2) \]

from the straightforward expansion

\[ \det(\mathbf{1} + zB) = 1 + z \operatorname{Tr} B + O(z^2). \]

The same result for general matrices, and thereby the claim, follows, again, by density of invertible matrices. Applying this formula iteratively using independence also yields eq. (2.3).
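Equation (2.3) can likewise be checked on a small discrete example: average the characteristic polynomial over all joint realizations of two independent two-valued random vectors and compare with the mixed characteristic polynomial of their expectations, computed directly from eq. (2.2). A sketch with arbitrary test data (not from the essay; sympy assumed):

```python
# Symbolic check of eq. (2.3) (a sketch, not from the essay): the vectors
# below are arbitrary; each v_k takes one of two values with probability 1/2.
import itertools
import sympy as sp

z = sp.symbols('z')
supports = [
    [sp.Matrix([1, 0]), sp.Matrix([1, 1])],   # two equally likely values of v_1
    [sp.Matrix([0, 1]), sp.Matrix([2, 1])],   # two equally likely values of v_2
]
n, m = 2, len(supports)

# Left-hand side: E det(z·1 − Σ v_k v_k^T), averaging over all realizations.
lhs = 0
for choice in itertools.product(*supports):
    A = sum((v * v.T for v in choice), sp.zeros(n, n))
    lhs = lhs + sp.Rational(1, 2 ** m) * (z * sp.eye(n) - A).det()

# Right-hand side: μ[E v_1v_1^T, E v_2v_2^T](z) from the definition (2.2).
Es = [sum((v * v.T for v in vals), sp.zeros(n, n)) / len(vals) for vals in supports]
zs = sp.symbols('z1:%d' % (m + 1))
p = (z * sp.eye(n) + sum((zk * E for zk, E in zip(zs, Es)), sp.zeros(n, n))).det()
for zk in zs:
    p = p - sp.diff(p, zk)
rhs = p.subs({zk: 0 for zk in zs})

assert sp.expand(lhs - rhs) == 0
```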

Clearly all realizations of the random characteristic polynomial det(z1 − ∑_{k=1}^m v_k v_k^*) have only real roots. The expected characteristic polynomial is a convex combination of these so-called real rooted polynomials, and can be shown to also have only real roots. It should be emphasized that this is not the case for general polynomials.
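A minimal counterexample for general polynomials (not from the essay): (x − 1)(x − 2) and (x + 1)(x + 2) are both real rooted, but their average x² + 2 has the purely imaginary roots ±i√2.

```python
# Averages of real-rooted polynomials need not be real rooted (a sketch,
# not from the essay); coefficients are listed from highest degree down.
import numpy as np

p = np.array([1.0, -3.0, 2.0])   # (x − 1)(x − 2), real rooted
q = np.array([1.0,  3.0, 2.0])   # (x + 1)(x + 2), real rooted
avg = (p + q) / 2                # x² + 2, roots ±i√2

assert np.all(np.roots(p).imag == 0)
assert np.all(np.roots(q).imag == 0)
assert np.any(np.roots(avg).imag != 0)
```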


Lemma 2.7. Let A_1, . . . , A_m be positive semidefinite matrices of the same dimension. Then

(i) the polynomial

\[ (z, z_1, \ldots, z_m) \mapsto \det\bigl( z\mathbf{1} + z_1 A_1 + \cdots + z_m A_m \bigr) \]

has no zeros with ℑz, ℑz_1, . . . , ℑz_m all positive (such a polynomial is called real stable);

(ii) the polynomial μ[A_1, . . . , A_m] is real rooted.

Proof.

(i) The coefficients of the polynomial are real since the polynomial takes real values for all choices of real variables. Also, for all vectors 0 ≠ x ∈ C^k we have

\[ \Im\bigl\langle x, (z\mathbf{1} + z_1 A_1 + \cdots + z_m A_m)x \bigr\rangle = \bigl\langle x, (\Im z\, \mathbf{1} + \Im z_1\, A_1 + \cdots + \Im z_m\, A_m)x \bigr\rangle \ge \Im z\, \|x\|^2 > 0 \]

in case ℑz, ℑz_1, . . . , ℑz_m > 0, implying that the determinant is non-zero.

(ii) Assume that p is a univariate polynomial with no roots in the upper half plane. Then we can write p(z) = c ∏_{j=1}^l (z − x_j) for some x_1, . . . , x_l with ℑx_j ≤ 0 and some c ∈ C. Then

\[ (1 - \partial/\partial z)\, p(z) = c \Bigl( 1 - \sum_{j=1}^l \frac{1}{z - x_j} \Bigr) \prod_{j=1}^l (z - x_j) \]

also has no roots in the upper half plane; indeed, for ℑz > 0 every summand 1/(z − x_j) has strictly negative imaginary part, so the bracket cannot vanish. Applying this to the m variables of the polynomial from part (i) while fixing the other variables, we find that

\[ \Bigl( 1 - \frac{\partial}{\partial z_1} \Bigr) \cdots \Bigl( 1 - \frac{\partial}{\partial z_m} \Bigr) \det\Bigl( z\mathbf{1} + \sum_{k=1}^m z_k A_k \Bigr) \tag{2.4} \]

is a real stable polynomial. If (z_1, . . . , z_m) ↦ p(z_1, . . . , z_m) is a real stable polynomial, then p(·, . . . , ·, 0) can be locally uniformly approximated by the sequence p(·, . . . , ·, i/n), n ∈ N, and since the latter has no zeros in {z ∈ C | ℑz > 0}^{m−1}, by Hurwitz's Theorem from complex analysis (see, for example, [6, Theorem 2.5]) neither does the former unless it is identically zero. Specialising this to eq. (2.4) shows that the restriction to z_1 = · · · = z_m = 0 produces a real stable – and thus real rooted, since complex roots of univariate polynomials with real coefficients come in pairs of complex conjugates – univariate polynomial, since characteristic polynomials cannot vanish identically.
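Part (ii) can be checked on a small example with matrices of full rank, where the multi-affine shortcut no longer applies and μ is computed by symbolic differentiation straight from eq. (2.2). The two positive definite matrices below are arbitrary test data (a sketch, not from the essay; sympy assumed):

```python
# Check of Lemma 2.7(ii) on arbitrary 2×2 positive definite matrices
# (a sketch, not from the essay): μ[A1, A2] computed from eq. (2.2)
# should be real rooted.
import sympy as sp

z, z1, z2 = sp.symbols('z z1 z2')
A1 = sp.Matrix([[2, 1], [1, 1]])   # positive definite
A2 = sp.Matrix([[1, 0], [0, 3]])   # positive definite
p = (z * sp.eye(2) + z1 * A1 + z2 * A2).det()
for zk in (z1, z2):
    p = p - sp.diff(p, zk)         # apply (1 − ∂_k)
mu = sp.expand(p.subs({z1: 0, z2: 0}))

# For these matrices μ works out to z² − 7z + 7, with two real roots.
assert sp.simplify(mu - (z**2 - 7*z + 7)) == 0
assert all(r.is_real for r in sp.Poly(mu, z).all_roots())
```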

The last ingredient we need for the proof of Proposition 2.3 is a control on the maximal root of convex combinations of our polynomials.

Lemma 2.8. Let p, q be polynomials of the same degree with leading coefficient 1 such that λp + (1 − λ)q is real rooted for all 0 ≤ λ ≤ 1. If maxroot p ≤ maxroot q, then

\[ \operatorname{maxroot} p \le \operatorname{maxroot}\bigl( \lambda p + (1 - \lambda)q \bigr) \le \operatorname{maxroot} q \]

for all 0 ≤ λ ≤ 1.


Proof. Since p, q > 0 on (maxroot q, ∞), the second inequality is immediate. Assume for contradiction that the first inequality fails, i.e., that p_λ := λp + (1 − λ)q has no roots in [maxroot p, maxroot q] for some 0 < λ < 1. Then p_λ > 0 on [maxroot p, maxroot q], which in turn implies that q(maxroot p) > 0 and consequently that q = p_0 has at least two zeros in (maxroot p, maxroot q]. Note that since p_µ(maxroot p) > 0 and p_µ has only real roots for all µ < 1, there has to be some point µ < λ where at least two roots of p_µ from (−∞, maxroot p) jump to (maxroot p, maxroot q]. This contradicts, for example, Hurwitz's Theorem.
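A small numerical illustration of Lemma 2.8 (not from the essay): for p = x(x − 2) and q = (x − 1)(x − 3) every convex combination is real rooted (the discriminant 4(λ² − λ + 1) is always positive), and the largest root of λp + (1 − λ)q indeed stays between maxroot p = 2 and maxroot q = 3.

```python
# Illustration of Lemma 2.8 (a sketch, not from the essay).
import numpy as np

p = np.array([1.0, -2.0, 0.0])   # x(x − 2),     maxroot = 2
q = np.array([1.0, -4.0, 3.0])   # (x − 1)(x − 3), maxroot = 3

for lam in np.linspace(0.0, 1.0, 11):
    roots = np.roots(lam * p + (1 - lam) * q)
    assert np.all(np.abs(roots.imag) < 1e-12)          # real rooted for every λ
    assert 2.0 - 1e-9 <= roots.real.max() <= 3.0 + 1e-9
```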

Proof of Proposition 2.3. It follows from Lemmata 2.5, 2.6 and 2.7 that all convex combinations of the random realizations of the characteristic polynomial are real rooted. Thus, and since characteristic polynomials have leading coefficient 1, we can apply Lemma 2.8 iteratively to conclude the proof.

2.2 Barrier Argument

The goal of this section is to give a proof of Proposition 2.2. According to the preceding section, and after a straightforward translation, we can rewrite the expected characteristic polynomial as

\[ \mathbb{E}\, P_{\sum_{k=1}^m v_k v_k^*}(z) = \Biggl[ \prod_{k=1}^m \Bigl( 1 - \frac{\partial}{\partial z_k} \Bigr) \Biggr] \det\Bigl( \sum_{k=1}^m z_k\, \mathbb{E}\, v_k v_k^* \Bigr) \Bigg|_{z_1 = \cdots = z_m = z}. \]

After defining p(z_1, . . . , z_m) := det(z_1 E v_1v_1^* + · · · + z_m E v_mv_m^*) and introducing the shorthand notation ∂_k := ∂/∂z_k, we have to show that (1 − ∂_1) · · · (1 − ∂_m)p is non-zero on the diagonal {(t, . . . , t) ∈ R^m | t > (1 + √ϵ)²}. In fact we will prove the stronger statement that (1 − ∂_1) · · · (1 − ∂_m)p is non-zero in the orthant {(z_1, . . . , z_m) ∈ R^m | z_k > (1 + √ϵ)² for all k ∈ [m]}; in other words, that ((1 + √ϵ)², . . . , (1 + √ϵ)²) lies above the zeros of (1 − ∂_1) · · · (1 − ∂_m)p.

Proof of Proposition 2.2. We want to bound the zeros of (1 − ∂_1) · · · (1 − ∂_m)p using an iterative argument. Firstly, p has no zeros above (0, . . . , 0), since z_1 E v_1v_1^* + · · · + z_m E v_mv_m^* ≥ (min_k z_k)·1 is non-singular in this orthant. For the next iteration note that in this orthant (1 − ∂_k)p = 0 if and only if ∂_k p / p = ∂_k log p = 1. Clearly

\[ \partial_k \log p(z_1, \ldots, z_m) \Big|_{z_1 = \cdots = z_m = t} = \frac{ \partial_k \det\bigl( z_1\, \mathbb{E}\, v_1 v_1^* + \cdots + z_m\, \mathbb{E}\, v_m v_m^* \bigr) \big|_{z_1 = \cdots = z_m = t} }{ \det\bigl( t\, \mathbb{E}\, v_1 v_1^* + \cdots + t\, \mathbb{E}\, v_m v_m^* \bigr) } = \partial_k \det\Bigl( \mathbf{1} + \frac{z_k - t}{t}\, \mathbb{E}\, v_k v_k^* \Bigr) \Big|_{z_k = t} = \frac{\operatorname{Tr} \mathbb{E}\, v_k v_k^*}{t} \le \frac{\epsilon}{t} \]

and we can choose t > 0 and δ > 0 such that ϵ/t < 1 − 1/δ. This is the basis for the iteration step, as summarized in the following proposition.

Proposition 2.9. Suppose that x ∈ R^m lies above the roots of some real stable polynomial (z_1, . . . , z_m) ↦ q(z_1, . . . , z_m) and

\[ \partial_k \log q(z_1, \ldots, z_m) \big|_{z=x} < 1 - 1/\delta \]

for all k and some δ > 0. Then x also lies above the zeros of (1 − ∂_k)q and

\[ \partial_j \log\bigl[ (1 - \partial_k)\, q(z_1, \ldots, z_m) \bigr] \Big|_{z = (x_1, \ldots, x_{k-1},\, x_k + \delta,\, x_{k+1}, \ldots, x_m)} \le \partial_j \log q(z_1, \ldots, z_m) \big|_{z=x} \]

for all k, j.
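The diagonal computation ∂_k log p|_{(t,…,t)} = Tr(E v_k v_k^*)/t carried out before Proposition 2.9 can be sanity-checked by finite differences. In the sketch below (not from the essay) random PSD matrices A_k, normalized so that ∑_k A_k = 1, stand in for the expectations E v_k v_k^*:

```python
# Finite-difference check (a sketch, not from the essay) of
# ∂_k log det(Σ z_j A_j) |_{z=(t,…,t)} = Tr(A_k)/t whenever Σ A_j = 1.
import numpy as np

rng = np.random.default_rng(2)
n, m, t, h = 3, 5, 2.0, 1e-6
As = [W @ W.T for W in rng.standard_normal((m, n, n))]
S = sum(As)
w, V = np.linalg.eigh(S)
T = V @ np.diag(w ** -0.5) @ V.T
As = [T @ A @ T for A in As]       # normalized so that Σ A_k = 1

def logp(zvals):
    return np.log(np.linalg.det(sum(zk * A for zk, A in zip(zvals, As))))

base = np.full(m, t)
for k in range(m):
    shifted = base.copy()
    shifted[k] += h
    fd = (logp(shifted) - logp(base)) / h      # ≈ ∂_k log p at (t, …, t)
    assert abs(fd - np.trace(As[k]) / t) < 1e-4
```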


Applying this result to our case yields that (t, . . . , t) lies above the roots of (1 − ∂_1)p and that

\[ \partial_j \log\bigl[ (1 - \partial_1)\, p(z_1, \ldots, z_m) \bigr] \Big|_{z = (t + \delta,\, t, \ldots, t)} \le 1 - 1/\delta. \]

Thus we are able to iterate and apply Proposition 2.9 again to the still real stable (see the proof of Lemma 2.7) polynomial (1 − ∂_1)p, showing that (t + δ, t + δ, t, . . . , t) lies above the zeros of (1 − ∂_1)(1 − ∂_2)p. Iterating this we find that (t + δ, . . . , t + δ) lies above the zeros of (1 − ∂_1) · · · (1 − ∂_m)p. In particular, the maximal root of the expected characteristic polynomial E P_{∑_{k=1}^m v_k v_k^*} is at most t + δ. We can now choose t = ϵ + √ϵ and δ = 1 + √ϵ to complete the proof.
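The final choice of parameters can be verified symbolically: t = ϵ + √ϵ and δ = 1 + √ϵ make ϵ/t equal to 1 − 1/δ (the barrier condition is saturated) and give t + δ = (1 + √ϵ)², which is the claimed bound. A quick sketch (not from the essay; sympy assumed):

```python
# Symbolic check (a sketch, not from the essay) of the parameter choice
# t = ε + √ε, δ = 1 + √ε at the end of the proof of Proposition 2.2.
import sympy as sp

eps = sp.symbols('epsilon', positive=True)
t = eps + sp.sqrt(eps)
delta = 1 + sp.sqrt(eps)
assert sp.simplify(eps / t - (1 - 1 / delta)) == 0          # barrier condition
assert sp.simplify(t + delta - (1 + sp.sqrt(eps)) ** 2) == 0  # bound (1 + √ε)²
```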

It remains to prove the non-trivial Proposition 2.9. The proof will follow from the following monotonicity and convexity properties of the so-called barrier functions ∂_k log q.

Lemma 2.10. Suppose x ∈ R^m lies above the zeros of some real stable polynomial (z_1, . . . , z_m) ↦ q(z_1, . . . , z_m). Then the functions

\[ t \mapsto \partial_1 \log q(z_1, \ldots, z_m) \big|_{z = (x_1 + t,\, x_2, \ldots, x_m)} \tag{2.5} \]

and

\[ t \mapsto \partial_1 \log q(z_1, \ldots, z_m) \big|_{z = (x_1,\, x_2 + t,\, x_3, \ldots, x_m)} \tag{2.6} \]

are positive, decreasing and convex for t ≥ 0.

Proof. The univariate polynomial z_1 ↦ q(z_1, . . . , z_m) can be factorized as

\[ q(z_1, \ldots, z_m) = c(z_2, \ldots, z_m) \prod_{j=1}^l \bigl( z_1 - \rho_j(z_2, \ldots, z_m) \bigr) \]

in terms of its real roots ρ_1(z_2, . . . , z_m) ≥ · · · ≥ ρ_l(z_2, . . . , z_m). Then

\[ \partial_1 \log q(z_1, \ldots, z_m) = \sum_{j=1}^l \frac{1}{z_1 - \rho_j(z_2, \ldots, z_m)} > 0 \]

for z ≥ x. Also, obviously ∂_1² log q(z_1, . . . , z_m) < 0 and ∂_1³ log q(z_1, . . . , z_m) > 0 for z ≥ x, proving the monotonicity and convexity of the function from eq. (2.5).

While this first statement was effectively (after freezing the remaining variables) a statement about univariate real rooted polynomials, the second statement is effectively a statement about real stable polynomials in two variables and considerably harder to prove. Let (z_1, z_2) ↦ q′(z_1, z_2) denote the bivariate restriction of q, which can be factorized as q′(z_1, z_2) = c(z_1) ∏_{j=1}^l (z_2 − ρ_j(z_1)) in terms of its real roots ρ_1(z_1) ≥ · · · ≥ ρ_l(z_1). Then

\[ \partial_2 \partial_1 \log q'(z_1, z_2) = \partial_1 \sum_{j=1}^l \bigl( z_2 - \rho_j(z_1) \bigr)^{-1} \]

and

\[ \partial_2^2 \partial_1 \log q'(z_1, z_2) = -\partial_1 \sum_{j=1}^l \bigl( z_2 - \rho_j(z_1) \bigr)^{-2} \]

and the proof would be complete if we knew that z_1 ↦ ρ_j(z_1) was decreasing for all j.


Assume for contradiction that this is not the case, i.e., that there exists a_1 ∈ R such that some root satisfies c := ∂_1 ρ_i(z_1)|_{z_1 = a_1} > 0; then a := (a_1, a_2) with a_2 := ρ_i(a_1) is a root of q′. Then we find

\[ \partial_2 q'(z_1, z_2) \big|_{z=a} = c(a_1) \prod_{j \ne i} \bigl( a_2 - \rho_j(a_1) \bigr) =: c' \]

and

\[ \partial_1 q'(z_1, z_2) \big|_{z=a} = -c(a_1)\, c \prod_{j \ne i} \bigl( a_2 - \rho_j(a_1) \bigr) = -c c', \]

and consequently we can expand q′ around a as

\[ q'(z_1, z_2) = c' \bigl[ -c(z_1 - a_1) + (z_2 - a_2) \bigr] + O\Bigl( \bigl( |z_1 - a_1| + |z_2 - a_2| \bigr)^2 \Bigr). \]

Now define the univariate polynomial

\[ q_\epsilon(x) := \epsilon^{-1} q'(i\epsilon + a_1,\, x\epsilon + a_2) = \epsilon^{-1} c'(-ic\epsilon + x\epsilon) + O(\epsilon) = -icc' + c'x + O(\epsilon), \]

which has a root near x = ic, i.e., in the upper half plane, for small ϵ > 0. But this would imply that q′ has a root (z_1, z_2) with both ℑz_1 > 0 and ℑz_2 > 0, contradicting the real stability.

Now we have all the ingredients to prove the remaining Proposition 2.9, thereby concluding the proof of Proposition 2.2 and consequently also of Theorem 1.1.

Proof of Proposition 2.9. From the monotonicity in all variables (using Lemma 2.10 after an appropriate relabelling) it follows that ∂_k log q < 1 − 1/δ in the orthant above x, and therefore (1 − ∂_k)q cannot have any roots in this area. For the claimed bound note that

\[ \partial_j \log\bigl[ (1 - \partial_k)q \bigr] = \partial_j \log\bigl[ q \cdot (1 - \partial_k \log q) \bigr] = \partial_j \log q - \frac{\partial_j \partial_k \log q}{1 - \partial_k \log q}, \]

where the denominator is, by assumption, at least 1/δ in the orthant above x. Thus

\[ \partial_j \log\bigl[ (1 - \partial_k)\, q(z_1, \ldots, z_m) \bigr] \Big|_{z = (x_1, \ldots, x_{k-1},\, x_k + \delta,\, x_{k+1}, \ldots, x_m)} \le \bigl( \partial_j - \delta\, \partial_k \partial_j \bigr) \log q(z_1, \ldots, z_m) \Big|_{z = (x_1, \ldots, x_{k-1},\, x_k + \delta,\, x_{k+1}, \ldots, x_m)} \le \partial_j \log q(z_1, \ldots, z_m) \big|_{z=x} \]

from convexity in the form f(t + δ) − δ f′(t + δ) ≤ f(t).
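The convexity inequality f(t + δ) − δ f′(t + δ) ≤ f(t) can be illustrated for a model barrier function; below f(t) = 1/t, which is positive, decreasing and convex like the functions of Lemma 2.10 (a sketch, not from the essay; sympy assumed):

```python
# Check (a sketch, not from the essay) that f(t+δ) − δ·f'(t+δ) ≤ f(t)
# for the model barrier f(t) = 1/t: the deficit is exactly δ²/(t(t+δ)²) ≥ 0.
import sympy as sp

t, d = sp.symbols('t delta', positive=True)
f = 1 / t
lhs = f.subs(t, t + d) - d * sp.diff(f, t).subs(t, t + d)
assert sp.simplify(f - lhs - d**2 / (t * (t + d)**2)) == 0
```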


Chapter 3

The Kadison–Singer Problem

3.1 C∗-algebras and States

The Kadison–Singer Problem originated as a conjecture about pure states on the C∗-algebra B(ℓ2) of bounded operators on the Hilbert space ℓ2. We start by giving a short introduction to the notion of abstract C∗-algebras and their states.

A Banach algebra A is a complex Banach space together with a distributive and associative multiplication satisfying λ(AB) = (λA)B = A(λB) and ∥AB∥ ≤ ∥A∥∥B∥ for all A, B ∈ A and λ ∈ C. A C∗-algebra is a Banach algebra A together with a map ∗ : A → A satisfying

• ∗ is an involution, i.e., (A∗)∗ = A for all A ∈ A;

• ∗ is conjugate linear, i.e., (λA)∗ = λA∗ for all A ∈ A, λ ∈ C;

• (AB)∗ = B∗A∗ for all A,B ∈ A;

  • ∥A∗A∥ = ∥A∥² for all A ∈ A.

If A is unital, i.e., there exists a multiplicative identity 1 ∈ A satisfying 1A = A1 = A for all A ∈ A, then this identity automatically satisfies 1 = 1∗ and ∥1∥ = 1. To avoid introducing the concept of approximate identities, we shall only consider unital C∗-algebras. For the discussion of the Kadison–Singer problem this is sufficient, since all discussed C∗-algebras will be unital.

Example 3.1. The bounded linear operators B(H) acting on a Hilbert space H, with composition as the multiplication, form a Banach algebra. Recall that to every operator A ∈ B(H) corresponds its adjoint operator A∗ ∈ B(H), uniquely defined via the Riesz Representation Theorem such that ⟨x, A∗y⟩ = ⟨Ax, y⟩ for all x, y ∈ H. It can be easily checked that the map A ↦ A∗ satisfies all the properties requested above, giving B(H) also the structure of a C∗-algebra. Clearly any closed ∗-subalgebra A ⊂ B(H) (meaning that the subalgebra is norm-closed and stable under the ∗-operation) is also a C∗-algebra.

According to the Gelfand–Naimark Theorem (see, for example, [8]) the above examples are indeed the only examples of C∗-algebras. The theorem states that any C∗-algebra is isometrically ∗-isomorphic (meaning that the isomorphism respects the ∗-operation) to a closed ∗-subalgebra of B(H), the bounded operators on some Hilbert space H. Therefore, from now on, we shall assume that every C∗-algebra is realized as a ∗-closed subalgebra of B(H) for some Hilbert space H.


Remark 3.2. The commutative Gelfand–Naimark Theorem (see, for example, [4, Theorem 2.1.11B]) gives an alternative characterization of commutative C∗-algebras. It states that any unital commutative C∗-algebra is isometrically ∗-isomorphic to C(X) for some compact Hausdorff space X.

A linear functional ϕ : A → C on a C∗-algebra A is called positive if it satisfies ϕ(A∗A) ≥ 0 for all A ∈ A. Alternatively, positive functionals are functionals mapping positive elements of A to non-negative scalars, where A ∈ A is called positive if one of the following equivalent (see [4, Proposition 2.2.10]) conditions is satisfied:

  • A = B∗B for some B ∈ A;
  • A = B² for some B ∈ A with B∗ = B;
  • σ(A) ⊂ [0, ∞), i.e., A − λ1 is non-invertible only for λ ∈ [0, ∞);
  • (in case A ⊂ B(H) for some Hilbert space H) A is positive semidefinite, i.e., ⟨x, Ax⟩ ∈ [0, ∞) for all x ∈ H.

Due to the rich structure of C∗-algebras this purely algebraic property already implies (see [4, Proposition 2.3.11]) the topological property of ϕ being continuous, and that ϕ attains its norm

\[ \|\phi\| = \sup\bigl\{ |\phi(A)| \;\big|\; A \in \mathcal{A},\ \|A\| \le 1 \bigr\} = \phi(\mathbf{1}) \]

on the multiplicative identity 1. If a positive functional ϕ is normalized in the sense ∥ϕ∥ = ϕ(1) = 1, it is called a state. Clearly any convex combination tϕ_1 + (1 − t)ϕ_2 of two states ϕ_1, ϕ_2 is again a positive functional, and is also a state since

\[ \| t\phi_1 + (1 - t)\phi_2 \| = \bigl( t\phi_1 + (1 - t)\phi_2 \bigr)(\mathbf{1}) = t\phi_1(\mathbf{1}) + (1 - t)\phi_2(\mathbf{1}) = 1. \]

A state is called pure if it cannot be written as a convex combination of two other states. The set of states E_A on A is a weak∗-closed subset of the dual A∗. Clearly the set of pure states P_A on A is equal to the set of extreme points of E_A, and it follows from the Krein–Milman Theorem that E_A = conv P_A (for details, see [4, Theorem 2.3.15]).

Example 3.3.

(i) Consider the Hilbert space ℓ_2^n = C^n equipped with the Euclidean inner product and the C∗-algebra M_n := B(ℓ_2^n) of n × n matrices equipped with the operator norm. Then for any positive semidefinite matrix ρ with unit trace Tr ρ = 1, the map ϕ(A) = Tr ρA defines a state on M_n. Moreover, ϕ is pure if and only if Tr ρ² = 1, if and only if ρ is a rank-1 matrix, i.e., there exists some unit vector x ∈ ℓ_2^n such that Tr ρA = ⟨x, Ax⟩.

(ii) By the Riesz–Markov Representation Theorem the states on C(X), for X compact and Hausdorff, are precisely given by maps of the form f ↦ ∫_X f dµ for Borel probability measures µ on X. It can be seen that the pure states are those corresponding to Dirac measures δ_x, x ∈ X. Thus in particular all pure states on C(X) are homomorphisms (in fact, this is an equivalence). Remark 3.2 allows to generalize this result to all unital commutative C∗-algebras.
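Part (i) of the example can be illustrated numerically (a sketch, not from the essay; the vector and matrix below are arbitrary test data):

```python
# Illustration of Example 3.3(i) (a sketch, not from the essay): a rank-one
# density matrix ρ = x x* gives a pure state A ↦ Tr(ρA) = ⟨x, Ax⟩.
import numpy as np

rng = np.random.default_rng(3)
n = 4
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x /= np.linalg.norm(x)                  # unit vector
rho = np.outer(x, x.conj())             # rank-one density matrix

A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
assert np.isclose(np.trace(rho), 1)                       # unit trace
assert np.isclose(np.trace(rho @ rho), 1)                 # purity: Tr ρ² = 1
assert np.isclose(np.trace(rho @ A), x.conj() @ A @ x)    # vector state ⟨x, Ax⟩
```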

3.2 Kadison–Singer Problem

According to Dirac's fundamental vision of quantum mechanics [7]:

  To introduce a representation in practice

  • We look for observables which we would like to have diagonal either because we are interested in their probabilities or for reasons of mathematical simplicity;
  • We must see that they all commute – a necessary condition since diagonal matrices always commute;
  • We then see that they form a complete commuting set, and if not we add some more commuting observables to make them into a complete commuting set;
  • We set up an orthogonal representation with this commuting set diagonal.

  The representation is then completely determined [...] by the observables that are diagonal [...]

In the language of C∗-algebras, Dirac thought of the diagonal operators D(ℓ2) on ℓ2 as a complete commuting set of observables, and of a very special class of pure states on D(ℓ2) – those of the form A ↦ ⟨e_k, Ae_k⟩ corresponding to the orthogonal representation, i.e., the canonical basis (e_k)_{k∈N} of ℓ2. These special, so-called vector, pure states on D(ℓ2) can be uniquely extended to B(ℓ2). But what about the others? That is the Kadison–Singer Problem:

Conjecture 3.4 (Kadison–Singer (KS)). Every pure state on the algebra of diagonal operators D(ℓ2) on ℓ2 can be uniquely extended to a state on the algebra of bounded operators B(ℓ2).

More generally, Dirac suggested that pure states on any maximal commuting subalgebra of B(ℓ2) can be uniquely extended to states on B(ℓ2). Kadison and Singer managed to prove that this is false for any such subalgebra not equal to D(ℓ2), see [9, 5]. Therefore it should not be too surprising that Kadison and Singer actually believed KS to be wrong:

  [...] The results that we have obtained leave the question of uniqueness of extension of the singular pure states of D(ℓ2) open. We incline to the view that such extensions are non-unique [...] (page 397 of [9])

Remark 3.5. Note that the important statement in KS is the uniqueness, since any state ϕ on D(ℓ2) can be trivially extended to ϕ ∘ diag on B(ℓ2), where diag maps any operator to its diagonal part. Therefore the Kadison–Singer conjecture can be easily rephrased as:

Conjecture 3.6 (KS′). Every extension of a pure state ϕ on D(ℓ2) vanishes on operators with zero diagonal.

Also note that the extension is automatically pure. Indeed, if we could write the unique extension ψ as a non-trivial convex combination tψ_1 + (1 − t)ψ_2 of states, the pureness of ϕ would imply that ψ_1|_{D(ℓ2)} = ψ_2|_{D(ℓ2)} = ϕ, contradicting the uniqueness of the extension.

3.3 Paving Conjecture

Instead of proving the Kadison–Singer conjecture directly, we will prove the following Paving Conjecture, which turns out to be equivalent to KS.

Conjecture 3.7 (Paving Conjecture (PC)). For all ϵ > 0 there exists r ∈ N such that for all n ∈ N and T ∈ M_n = B(ℓ_2^n) with diag T = 0 there exist diagonal projections Q_1, . . . , Q_r ∈ D(ℓ_2^n) with Q_1 + · · · + Q_r = 1 satisfying

\[ \| Q_j T Q_j \| \le \epsilon \|T\| \]

for all 1 ≤ j ≤ r.

Since r is independent of n, PC can be shown to be equivalent to the following Paving Conjecture on B(ℓ2). With (e_n)_{n∈N} being the standard basis of ℓ2, we can clearly identify ℓ_2^n with span{e_1, . . . , e_n} ⊂ ℓ2 and have the projections P_n = e_1e_1^* + · · · + e_ne_n^* mapping ℓ2 → ℓ_2^n ⊂ ℓ2.


Conjecture 3.8 (PC′). For all ϵ > 0 there exists r ∈ N such that for all T ∈ B(ℓ2) with diag T = 0 there exist diagonal projections Q_1, . . . , Q_r ∈ D(ℓ2) with Q_1 + · · · + Q_r = 1 satisfying

\[ \| Q_j T Q_j \| \le \epsilon \|T\| \]

for all 1 ≤ j ≤ r.

Proof of the equivalence of PC and PC′. Assume that PC holds and let T ∈ B(ℓ2) be some operator with diag T = 0. Then for T^{(n)} := P_n T P_n there exist diagonal projections Q_1^{(n)}, . . . , Q_r^{(n)} ∈ D(ℓ_2^n) ⊂ D(ℓ2) satisfying Q_1^{(n)} + · · · + Q_r^{(n)} = P_n and ∥Q_j^{(n)} T^{(n)} Q_j^{(n)}∥ ≤ ϵ∥T^{(n)}∥ for all j. The diagonal projections can be canonically identified with {0, 1}^N (with {0, 1} being equipped with the discrete topology), which is compact by Tychonoff's Theorem, and we can therefore find a subsequence such that Q_j^{(n_k)} converges to some projection Q_j ∈ D(ℓ2) for all j as k → ∞; these limits obviously still satisfy Q_1 + · · · + Q_r = 1. Thus

\[ \| Q_j T Q_j \| = \lim_{k \to \infty} \bigl\| Q_j^{(n_k)} T^{(n_k)} Q_j^{(n_k)} \bigr\| \le \lim_{k \to \infty} \epsilon \bigl\| T^{(n_k)} \bigr\| = \epsilon \|T\|. \]

The converse follows easily from the identification ℓ_2^n ⊂ ℓ2.

We are now ready to study the relationship between PC and KS:

Equivalence of PC′ and KS′. Assume that PC′ holds and fix any ϵ > 0. Let ψ be an extension of a pure state ϕ on D(ℓ₂) to B(ℓ₂) and let T ∈ B(ℓ₂) be some operator with diag T = 0. Choose Q₁, …, Q_r to be the projections from PC′ and observe that ψ(Q_j) = ϕ(Q_j) ∈ {0, 1}, since ϕ is a pure state on the commutative C*-algebra D(ℓ₂) and thus a homomorphism (see Example 3.3(ii)), implying ϕ(Q_j) = ϕ(Q_j²) = ϕ(Q_j)². Since 1 = Σ_{1≤j≤r} ϕ(Q_j), it follows that ϕ(Q_j) = δ_{ij} for some 1 ≤ i ≤ r. The Cauchy–Schwarz inequality now implies

|ψ(SQ_j)|² ≤ ψ(SS*) ψ(Q_j*Q_j) = ψ(SS*) ψ(Q_j) = 0

and similarly ψ(Q_jS) = 0 for all j ≠ i and S ∈ B(ℓ₂), implying

|ψ(T)| = |ψ(Q_i T Q_i)| ≤ ϵ ∥T∥.

We have therefore proved KS′, since ψ(T) = 0 by the arbitrariness of ϵ. Since the converse is not needed for the scope of this essay, details of the implication KS′ ⇒ PC′ shall be omitted. The interested reader is referred to [9, Lemma 5].

3.4 Proof of the Paving Conjecture

The study of interlacing families of polynomials allows us to give a positive solution to PC′ and thereby also to KS.

Proposition 3.9 (PC for orthogonal projections (PCP)). For any orthogonal projection P ∈ Mₙ, n ∈ ℕ, and r ∈ ℕ there exist diagonal projections Q₁, …, Q_r ∈ Mₙ with Q₁ + ··· + Q_r = 1 and

∥Q_j P Q_j∥ ≤ (1/√r + √∥diag P∥)²

for all j ∈ [r].


Proof. For u_k := Pe_k, k ∈ [n], we have ∥u_k∥² = ⟨e_k, Pe_k⟩ = P_{k,k} ≤ ∥diag P∥ and

Σ_{k∈[n]} u_k u_k* |_{Ran P} = Σ_{k∈[n]} Pe_ke_k*P |_{Ran P} = 1_{Ran P}.

From Corollary 2.1 we then find a partition [n] = S₁ ∪ ··· ∪ S_r and corresponding diagonal projections Q_j := Σ_{k∈S_j} e_ke_k* which satisfy

∥Q_j P Q_j∥ = ∥(PQ_j)*(PQ_j)∥ = ∥(PQ_j)(PQ_j)*∥ = ∥PQ_jP∥ = ∥Σ_{k∈S_j} Pe_ke_k*P∥ = ∥Σ_{k∈S_j} u_k u_k*∥ ≤ (1/√r + √∥diag P∥)².

This special case of the paving conjecture can be amplified to prove the general case:

Theorem 3.10 (PC holds true). For all r ∈ ℕ and T ∈ Mₙ with diag T = 0 there exist diagonal projections Q₁, …, Q_{r⁴} with Q₁ + ··· + Q_{r⁴} = 1 satisfying

∥Q_j T Q_j∥ ≤ 4(1/r + √(2/r)) ∥T∥

for all 1 ≤ j ≤ r⁴.

Proof. First assume that T is hermitian with ∥T∥ ≤ 1. Then

P := 1/2 ( 1 + T        √(1 − T²)
           √(1 − T²)    1 − T     )

is a hermitian projection with ∥diag P∥ = 1/2, and for any r ∈ ℕ we can apply Proposition 3.9 to find diagonal projections Q₁, …, Q_r ∈ M₂ₙ with Σ_{j∈[r]} Q_j = 1 and ∥Q_j P Q_j∥ ≤ (1/√r + 1/√2)² for all j ∈ [r]. If we decompose

Q_j = ( Q_j⁽¹⁾   0
        0        Q_j⁽²⁾ )

this implies

max{ ∥Q_j⁽¹⁾(1 + T)Q_j⁽¹⁾∥ , ∥Q_j⁽²⁾(1 − T)Q_j⁽²⁾∥ } ≤ ϵ_r := 2(1/√r + 1/√2)².

It follows from the first inequality that 0 ≤ Q_j⁽¹⁾(1 + T)Q_j⁽¹⁾ ≤ ϵ_r Q_j⁽¹⁾, i.e., −Q_j⁽¹⁾ ≤ Q_j⁽¹⁾ T Q_j⁽¹⁾ ≤ (ϵ_r − 1)Q_j⁽¹⁾. The second inequality analogously yields Q_j⁽²⁾ ≥ Q_j⁽²⁾ T Q_j⁽²⁾ ≥ −(ϵ_r − 1)Q_j⁽²⁾, and therefore with Q_{i,j} := Q_i⁽¹⁾ Q_j⁽²⁾ we can conclude

−(ϵ_r − 1) Q_{i,j} ≤ Q_{i,j} T Q_{i,j} ≤ (ϵ_r − 1) Q_{i,j}

and consequently also ∥Q_{i,j} T Q_{i,j}∥ ≤ ϵ_r − 1 for all i, j ∈ [r]. By scaling we can immediately generalize this to ∥Q_{i,j} T Q_{i,j}∥ ≤ (ϵ_r − 1) ∥T∥ for hermitian T of arbitrary norm. Since Σ_{i,j∈[r]} Q_{i,j} = 1 and ϵ_r − 1 becomes arbitrarily small for large r, this already proves the conjecture for hermitian matrices.

Now consider an arbitrary matrix T, which can be decomposed as T = U + iV with U, V hermitian and ∥U∥, ∥V∥ ≤ ∥T∥. From the above argument we find 2r² diagonal projections Q′₁, Q″₁, …, Q′_{r²}, Q″_{r²} paving U and V, respectively. After defining Q_{i,j} := Q′_i Q″_j, i, j ∈ [r²], we finally find

∥Q_{i,j} T Q_{i,j}∥ ≤ ∥Q_{i,j} U Q_{i,j}∥ + ∥Q_{i,j} V Q_{i,j}∥ ≤ 2(ϵ_r − 1) ∥T∥

for the diagonal projections Q_{i,j}, which still decompose the identity: Σ_{i,j∈[r²]} Q_{i,j} = 1.


Chapter 4

Ramanujan Graphs

To an (undirected) graph G = (V, E) with vertex set V and edge set E there corresponds the so-called adjacency matrix A(G) with A(G)_{i,j} = 1 if {i, j} ∈ E and A(G)_{i,j} = 0 otherwise. The characteristic polynomial of the graph is defined to be the characteristic polynomial of its adjacency matrix, i.e.,

P_G(z) := P_{A(G)}(z) := det(z − A(G)).

The spectrum Spec G of G is defined to be the spectrum of its adjacency matrix, i.e., the set of zeros of P_{A(G)}, and similarly, by eigenvalues and eigenvectors of G we just mean eigenvalues and eigenvectors of A(G). The spectrum of a graph is real since adjacency matrices are obviously symmetric. As a convention, we order the eigenvalues by size in such a way that

λ₁(G) ≥ ··· ≥ λ_{|V|}(G).

Example 4.1 (Spectrum of complete (bipartite) graphs).

(i) The complete graph K_{d+1} on d + 1 vertices (which is d-regular) only has the eigenvalues −1 and d. Indeed, let α_k denote the column vector of length k with all entries equal to α and let 1 denote the identity. Then

A(K_{d+1}) = 1_{d+1} · 1_{d+1}ᵀ − 1,

where the rank-one matrix 1_{d+1} · 1_{d+1}ᵀ has eigenvalues d + 1 and 0.

(ii) The spectrum of the complete d-regular bipartite graph K_{d,d} is {−d, 0, d}. Indeed,

A(K_{d,d}) = ( 1_d ; 0_d ) · ( 0_dᵀ  1_dᵀ ) + ( 0_d ; 1_d ) · ( 1_dᵀ  0_dᵀ )

is a rank-two matrix with eigenvectors ( 1_d ; 1_d ) and ( 1_d ; −1_d ) corresponding to the eigenvalues d and −d.
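The two spectra in this example can be confirmed numerically; the following snippet (ours, not from the essay; numpy assumed) does so for d = 4:

```python
import numpy as np

# Adjacency matrices of K_{d+1} and K_{d,d} for d = 4.
d = 4
A_complete = np.ones((d + 1, d + 1)) - np.eye(d + 1)
A_bipartite = np.block([[np.zeros((d, d)), np.ones((d, d))],
                        [np.ones((d, d)), np.zeros((d, d))]])

eig_complete = np.round(np.linalg.eigvalsh(A_complete)).astype(int).tolist()
eig_bipartite = np.round(np.linalg.eigvalsh(A_bipartite)).astype(int).tolist()

print(sorted(set(eig_complete)))    # [-1, 4]
print(sorted(set(eig_bipartite)))   # [-4, 0, 4]
```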

Proposition 4.2 (Spectral properties of d-regular graphs). Let G = (V, E) be a d-regular graph, i.e., a graph all of whose vertices have degree d. Then

(i) SpecG ⊂ [−d, d].

(ii) d = λ1(G) ∈ SpecG. Also, λ2(G) < d if and only ifG is connected.

(iii) SpecG is symmetric about 0 ifG is bipartite.

(iv) A connected graphG is bipartite if and only if −d ∈ SpecG.


Proof.

(i) For |λ| > d the matrix λ − A(G) is strictly diagonally dominant and thereby non-singular.

(ii) The vector of all 1's is an eigenvector corresponding to the eigenvalue d since all rows sum up to d. If f = (f_v)_{v∈V} is an eigenvector corresponding to the eigenvalue d with f_u = max_{v∈V} |f_v| (otherwise −f is also an eigenvector), then f_u = (A(G)f)_u/d = Σ_{{u,v}∈E} f_v/d ≤ Σ_{{u,v}∈E} f_u/d = f_u implies f_v = f_u for all v adjacent to u. Thus we find that for connected graphs the eigenspace corresponding to the eigenvalue d is one-dimensional. Conversely, in disconnected d-regular graphs one can trivially find linearly independent eigenvectors corresponding to the eigenvalue d, one supported on each connected component.

(iii) The adjacency matrix of a bipartite graph takes the form

A(G) = ( 0    A
         Aᵀ   0 ).

If ( v ; w ) is an eigenvector with eigenvalue λ, then

A(G) ( −v ; w ) = ( Aw ; −Aᵀv ) = ( λv ; −λw ) = −λ ( −v ; w )

and therefore −λ is also an eigenvalue.

(iv) One implication follows trivially from the previous ones. Conversely, any eigenvector f corresponding to the eigenvalue −d satisfies f_u = (−1)^{dist(u,v)} f_v for u, v ∈ V (recall that G is assumed to be connected). This clearly induces a bipartition of the graph.

4.1 Expander Graphs

In various areas of mathematics and computer science the question arises how to find sparse d-regular graphs G = (V, E) on a large number of vertices with good connectivity properties. Explicitly, one often asks for graphs with a large expansion coefficient

h(G) := min_{S⊂V, |S|≤|V|/2} e(S, S̄)/|S|,

where e(S, S̄) is the number of edges between S and its complement S̄. For d-regular graphs clearly e(S, S̄) ≤ d|S|, and therefore d is a natural upper bound on the expansion coefficient. We can, however, also find a lower bound in terms of the spectral gap of the graph:

Proposition 4.3. For a d-regular graphG = (V,E) we have h(G) ≥ (d− λ2(G))/2.

Proof. Let S be a minimizer of h(G). The graph Laplacian ∆ := d·1 − A(G) has d − λ₁(G) = 0 as its smallest eigenvalue, with the constant vectors as corresponding eigenvectors. Define a vector f by f_v = −|S̄| for v ∈ S and f_v = |S| for v ∈ S̄, which is obviously orthogonal to any constant vector; thereby it follows, for example from the min-max theorem, that

d − λ₂(G) ≤ ⟨f, ∆f⟩/∥f∥² = Σ_{{u,v}∈E} (f_v − f_u)² / Σ_{v∈V} f_v² = |V|² e(S, S̄) / (|V| |S| |S̄|) ≤ 2 e(S, S̄)/|S| = 2h(G)

since |S̄| ≥ |V|/2.
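For a concrete feel of the bound, one can brute-force h(G) on a small graph. The sketch below (ours, not from the essay; numpy assumed) does this for K₄, where the bound of Proposition 4.3 is in fact attained:

```python
import itertools
import numpy as np

n, d = 4, 3
A = np.ones((n, n)) - np.eye(n)              # adjacency matrix of K_4
lam2 = np.sort(np.linalg.eigvalsh(A))[-2]    # second largest eigenvalue (= -1)

def h(A):
    """Brute-force expansion coefficient over all S with |S| <= |V|/2."""
    n = A.shape[0]
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for S in itertools.combinations(range(n), k):
            Sbar = [v for v in range(n) if v not in S]
            cut = sum(A[u, v] for u in S for v in Sbar)   # e(S, complement)
            best = min(best, cut / len(S))
    return best

print(h(A), round((d - lam2) / 2, 6))   # 2.0 2.0  (equality for K_4)
```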

Remark 4.4. We can also improve the upper bound in terms of the spectral gap. Indeed, it holds that h(G) ≤ √(d(d − λ₂(G))). For details the reader is referred to [2].


This reduces the problem of finding graphs with good expander properties to finding graphs with small non-trivial spectrum {λ₂(G), …, λ_{|V|}(G)}, or {λ₂(G), …, λ_{|V|−1}(G)} for bipartite graphs, respectively. The following theorem due to Alon and Boppana [1] shows that there is a theoretical limit on how small the non-trivial spectrum can be.

Theorem 4.5. Let G = (V, E) be a d-regular graph such that there exist two edges {u₁, u₂}, {v₁, v₂} ∈ E with distance dist({u₁, u₂}, {v₁, v₂}) ≥ 2k + 2. Then

λ₂(G) ≥ 2√(d − 1) − (2√(d − 1) − 1)/(k + 1).

Proof (after Nilli, [14]). The result is trivial for disconnected graphs. Therefore from now on assume G to be connected and note that in this case the eigenspace corresponding to the eigenvalue d is one-dimensional. Define V_i = {v ∈ V | min_{j=1,2} dist(v, v_j) = i} and analogously U_i for 0 ≤ i ≤ k. Clearly ∪V_i and ∪U_i are disjoint; in fact, these vertex sets are not even adjacent. Now define a vector f = (f_v)_{v∈V} such that f_v = (d − 1)^{−i/2} if v ∈ V_i, f_v = α(d − 1)^{−i/2} if v ∈ U_i, and f_v = 0 otherwise, with α < 0 chosen such that ⟨f, 1_{|V|}⟩ = Σ_{v∈V} f_v = 0, i.e., f being orthogonal to the eigenvector corresponding to the eigenvalue d. Thus it follows, for example from the min-max theorem (using that the eigenspace corresponding to the eigenvalue d is one-dimensional), that λ₂(G) ≥ ⟨Af, f⟩/⟨f, f⟩. Clearly

⟨f, f⟩ = Σ_{v∈V} f_v² = Σ_{i=0}^{k} |V_i|/(d − 1)^i + Σ_{i=0}^{k} α²|U_i|/(d − 1)^i =: C_{V,1} + C_{U,1}.

For a bound on ⟨Af, f⟩ it is most convenient to write ∆ = d·1 − A and then compute

⟨∆f, f⟩ = Σ_{{u,v}∈E} (f_u − f_v)² = Σ_{{u,v}∈E, u or v ∈ ∪V_i} (f_u − f_v)² + Σ_{{u,v}∈E, u or v ∈ ∪U_i} (f_u − f_v)² =: C_{V,2} + C_{U,2}.

We can reorder the sum C_{V,2} into terms with u ∈ V_i, v ∈ V_{i+1} and u ∈ V_k, v ∉ ∪V_i to obtain

C_{V,2} = Σ_{i=0}^{k−1} Σ_{u∈V_i} Σ_{{u,v}∈E, v∈V_{i+1}} ((d − 1)^{−i/2} − (d − 1)^{−(i+1)/2})² + Σ_{u∈V_k} Σ_{{u,v}∈E, v∉V_{k−1}} (d − 1)^{−k}
      ≤ Σ_{i=0}^{k−1} |V_i|(d − 1)((d − 1)^{−i/2} − (d − 1)^{−(i+1)/2})² + |V_k|(d − 1)^{−k+1}
      = Σ_{i=0}^{k−1} |V_i|/(d − 1)^i · (d − 2√(d − 1)) + |V_k|/(d − 1)^k · ((d − 2√(d − 1)) + (2√(d − 1) − 1))
      = (d − 2√(d − 1)) C_{V,1} + (2√(d − 1) − 1) |V_k|/(d − 1)^k.

Since |V_i| ≤ (d − 1)|V_{i−1}| for all i, we find |V_k|/(d − 1)^k ≤ C_{V,1}/(k + 1) and consequently C_{V,2}/C_{V,1} ≤ d − 2√(d − 1) + (2√(d − 1) − 1)/(k + 1). The same inequality holds for C_{U,2}/C_{U,1} and we can conclude

⟨Af, f⟩/⟨f, f⟩ = d − ⟨∆f, f⟩/⟨f, f⟩ ≥ 2√(d − 1) − (2√(d − 1) − 1)/(k + 1).


Remark 4.6. If {Gₙ}ₙ∈ℕ is an infinite family of d-regular graphs, the vertex number is unbounded and so is the maximal edge distance. Therefore lim inf_{n→∞} λ₂(Gₙ) ≥ 2√(d − 1).

Graphs whose non-trivial spectrum is optimal in the sense of this theorem are called Ramanujan:

Definition 4.7. A d-regular graph G = (V, E) is called Ramanujan if max{λ₂(G), |λ_{|V|}(G)|} ≤ 2√(d − 1). G is called bipartite Ramanujan if it is bipartite and λ₂(G) ≤ 2√(d − 1).

4.2 2-lifts and the Construction of Ramanujan Graphs

It is an open question whether for all d ∈ ℕ there exist infinite families of Ramanujan graphs. The question is known to have an affirmative answer in the case d = pʳ + 1, with r ∈ ℕ and p prime (see [13]). The theory developed in chapter two allowed Marcus, Spielman and Srivastava to prove the existence of infinite families of bipartite Ramanujan graphs for all d ∈ ℕ:

Theorem 4.8. For all d ≥ 2 and k ∈ ℕ there exists a d-regular bipartite graph G = (V, E) with |V| = d·2^{k+1} and λ₂(G) ≤ 2√(d − 1).

The construction method was first suggested by Bilu and Linial. We start with some Ramanujan graph G = (V, E) and want to construct another Ramanujan graph on the vertex set V ⊔ V = {(v, i) | v ∈ V, i ∈ {1, 2}}. Given an edge signing s: E → {−1, 1}, for each edge {u, v} we add the edges {(u, 1), (v, 1)} and {(u, 2), (v, 2)} to the new edge set E_s if s({u, v}) = 1, and {(u, 1), (v, 2)} and {(u, 2), (v, 1)} otherwise, to obtain the new graph G_s = (V ⊔ V, E_s). The graph G_s with twice as many vertices is called a 2-lift of G. It is evident from the construction that a 2-lift of a d-regular graph remains d-regular and that 2-lifts of bipartite graphs remain bipartite. The spectrum of a 2-lift G_s can also be easily related to the spectrum of G:

Lemma 4.9. Define A_s(G)_{u,v} = s({u, v}) A(G)_{u,v} to be the signed adjacency matrix of G. Then

A(G_s) = 1/2 ( A(G) + A_s(G)   A(G) − A_s(G)
               A(G) − A_s(G)   A(G) + A_s(G) )

and Spec G_s = Spec A(G) ∪ Spec A_s(G).

Proof. Clearly

1/2 (A(G) + A_s(G)) = A((V, s⁻¹{1}))   and   1/2 (A(G) − A_s(G)) = A((V, s⁻¹{−1})).

If f is an eigenvector of A(G) corresponding to the eigenvalue λ, then A(G_s)( f ; f ) = λ( f ; f ), and if f is an eigenvector of A_s(G) corresponding to the eigenvalue λ, then ( f ; −f ) is an eigenvector of A(G_s) corresponding to the same eigenvalue. This produces 2|V| linearly independent eigenvectors, proving the assertion.
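The construction and the spectral identity of Lemma 4.9 are easy to reproduce in code. The following sketch (ours, not from the essay; numpy assumed, and the triangle with its signing is an arbitrary small choice) builds a 2-lift and checks Spec G_s = Spec A(G) ∪ Spec A_s(G):

```python
import numpy as np

def two_lift(A, s):
    """2-lift adjacency matrix from A and a symmetric signing s (entries in {-1, 0, +1})."""
    plus = A * (s > 0)     # edges signed +1: duplicated inside each layer
    minus = A * (s < 0)    # edges signed -1: crossed between the layers
    return np.block([[plus, minus], [minus, plus]])

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)     # triangle K_3
s = np.array([[0, 1, -1], [1, 0, 1], [-1, 1, 0]], dtype=float)   # some signing
As = A * s                                                       # signed adjacency matrix

spec_lift = np.sort(np.linalg.eigvalsh(two_lift(A, s)))
spec_union = np.sort(np.concatenate([np.linalg.eigvalsh(A), np.linalg.eigvalsh(As)]))
assert np.allclose(spec_lift, spec_union)   # Lemma 4.9
print("2-lift spectrum = Spec A(G) union Spec A_s(G)")
```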

From this Lemma we can already outline the strategy for proving Theorem 4.8. We know that K_{d,d} is bipartite Ramanujan since λ₂(K_{d,d}) = 0. Thus if we could find a signing s with λ₁(A_s(K_{d,d})) = maxroot P_{A_s(K_{d,d})} ≤ 2√(d − 1), we would have a bipartite Ramanujan graph G_s on 4d vertices and could hope to continue in an inductive manner. It has been conjectured by Bilu and Linial that this procedure can be used to produce infinite families of d-regular Ramanujan graphs for arbitrary d.


Conjecture 4.10 (BL). Every d-regular graph G = (V, E) has a signing s: E → {−1, 1} with ∥A_s(G)∥ ≤ 2√(d − 1).

As a consequence of the theory developed in chapter two, Marcus, Spielman and Srivastava could give a partial answer to this conjecture and thereby showed that the construction suggested in the previous paragraph yields arbitrarily large bipartite Ramanujan graphs, proving Theorem 4.8.

Theorem 4.11. Every d-regular graph G = (V, E) has a signing s: E → {−1, 1} with λ₁(A_s(G)) = maxroot P_{A_s(G)} ≤ 2√(d − 1).

The idea behind the proof of this theorem is to choose the signing uniformly at random.

Proposition 4.12. Let G = (V, E) be a d-regular graph with d ≥ 2. If we choose the 2^{|E|} signings s: E → {−1, 1} at random, all with probability 2^{−|E|}, then E_s P_{A_s(G)} is a real-rooted polynomial with maxroot E_s P_{A_s(G)} ≤ 2√(d − 1).

Proposition 4.13. For some s ∈ {−1, 1}^E we have maxroot P_{A_s(G)} ≤ maxroot E_s P_{A_s(G)} and therefore also λ₁(A_s(G)) ≤ maxroot E_s P_{A_s(G)}.

4.3 Matching Polynomials

Before giving proofs of these two Propositions, we want to begin by better understanding the expected characteristic polynomial:

Lemma 4.14. Under the assumptions of Proposition 4.12 we have that

E_s P_{A_s(G)}(z) = µ_G(z) := Σ_{k=0}^{⌊|V|/2⌋} z^{|V|−2k} (−1)^k m_k(G),

where m_k(G) is the number of matchings in G with k edges.

Proof. By definition we have

E_s det(z·1 − A_s(G)) = Σ_{σ∈Sym(V)} sgn(σ) E_s Π_{v∈V} (z·1 − A_s(G))_{v,σ(v)}
                      = Σ_{U⊂V} z^{|V\U|} Σ_{σ∈Sym*(U)} sgn(σ) E_s Π_{v∈U} [−χ_E({v, σ(v)}) s({v, σ(v)})],

where Sym and Sym* denote the permutation and derangement groups. The expected product is non-zero if and only if {{v, σ(v)} | v ∈ U} ⊂ E and σ²(v) = v for all v ∈ U (using independence), i.e., for derangements σ consisting only of cycles of length 2. Clearly in this case |U| has to be even, sgn(σ) = (−1)^{|U|/2}, and the expected products are all 1 since E s({v, σ(v)})² = 1. These 2-cyclic derangements are in one-to-one correspondence with the |U|/2-matchings on G(U) and thus

E_s det(z·1 − A_s(G)) = Σ_{U⊂V, |U| even} (−1)^{|U|/2} z^{|V\U|} m_{|U|/2}(G(U)) = µ_G(z).

Thus the expected characteristic polynomial is precisely given by the so-called matching polynomial of the graph. The claimed bound 2√(d − 1) on the maximal root is a special case of the following property of general matching polynomials:
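Lemma 4.14 can also be illustrated numerically (our example, not from the essay; numpy assumed): averaging the characteristic polynomials of A_s over all 2⁴ signings of the 4-cycle reproduces µ(z) = z⁴ − 4z² + 2, since C₄ has m₁ = 4 and m₂ = 2:

```python
import itertools
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]     # the 4-cycle C_4
avg = np.zeros(5)
for signs in itertools.product([-1.0, 1.0], repeat=len(edges)):
    As = np.zeros((4, 4))
    for (u, v), sgn in zip(edges, signs):
        As[u, v] = As[v, u] = sgn
    avg += np.real(np.poly(As)) / 2 ** len(edges)   # char. poly. coefficients of As

print(np.round(avg, 8))   # coefficients of z^4 + 0z^3 - 4z^2 + 0z + 2
```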


Proposition 4.15. Let G = (V, E) be a graph with maximal degree ∆(G). Then µ_G(z) > 0 for z > max{2√(∆(G) − 1), 1}.

Proof. It is evident from the definition of the matching polynomial that

µ_G(z) = µ_{G₁}(z) ··· µ_{G_k}(z),

where G₁ ⊔ ··· ⊔ G_k = G are the connected components of G. It therefore suffices to prove the statement for all connected components separately. In the case that some component G_l only has one vertex, we clearly have µ_{G_l}(z) = z, for which the claim holds. In case some connected component G_l consists of two vertices, we have µ_{G_l}(z) = z² − 1, for which the claim holds as well. From now on we concentrate on connected components G_l with at least three vertices, and therefore ∆(G_l) ≥ 2, and aim to prove µ_{G_l}(z) > 0 for z > 2√(∆(G_l) − 1).

We are aiming to achieve this via induction and therefore claim that we have the recursive relation

µ_G(z) = z µ_{G(V\{v})}(z) − Σ_{{u,v}∈E} µ_{G(V\{u,v})}(z)

for all v ∈ V. Indeed, there are m_k(G(V \ {v})) matchings of size k in G not containing v and Σ_{{u,v}∈E} m_{k−1}(G(V \ {u, v})) matchings of size k containing v.

The induction step basically follows from the following Lemma:

Lemma 4.16. Let H = (U, F) be a graph, v ∈ U a vertex and δ ∈ ℕ some integer such that max{2, ∆(H), deg v + 1} ≤ δ. Then

µ_H(z) / µ_{H(U\{v})}(z) > √(δ − 1)

for all z > 2√(δ − 1).

Proof of Lemma. The proof goes via induction on |U|. If |U| = 1, then the only vertex has degree zero and the quotient is simply given by z. For the induction step note that

µ_H(z) / µ_{H(U\{v})}(z) = z − Σ_{{u,v}∈F} µ_{H(U\{u,v})}(z) / µ_{H(U\{v})}(z) > 2√(δ − 1) − deg v/√(δ − 1) ≥ √(δ − 1),

which follows from the induction hypothesis since ∆(H(U \ {v})) ≤ ∆(H) ≤ δ and deg_{H(U\{v})} u + 1 ≤ deg_H u ≤ ∆(H) ≤ δ for all u with {u, v} ∈ F.

To conclude the proof pick any v ∈ V; then for any u ∈ V with {u, v} ∈ E the Lemma can be applied to H = G(V \ {v}), u and δ = ∆(G), since ∆(H) ≤ ∆(G) and deg_H u + 1 ≤ deg_G u ≤ ∆(G). Thus

µ_G(z) / µ_{G(V\{v})}(z) = z − Σ_{{u,v}∈E} µ_{G(V\{u,v})}(z) / µ_{G(V\{v})}(z) > 2√(∆(G) − 1) − deg v/√(∆(G) − 1) ≥ 2√(∆(G) − 1) − ∆(G)/√(∆(G) − 1) ≥ 0

for z > 2√(∆(G) − 1) (the first inequality is strict since z > 2√(∆(G) − 1) strictly), and the claim follows by induction over |V|.

This proves Proposition 4.12:

Proof of Proposition 4.12. Observe that the matching polynomial is either even or odd, and therefore |µ_G(z)| > 0 for all |z| > 2√(d − 1) by Proposition 4.15.
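The deletion recursion µ_G(z) = z µ_{G(V\{v})}(z) − Σ µ_{G(V\{u,v})}(z) from the proof of Proposition 4.15 translates directly into code. A sketch (ours, not from the essay; numpy assumed), which also checks the root bound on C₄, where ∆ = 2 and hence the bound is 2√(∆ − 1) = 2:

```python
import numpy as np

def matching_poly(vertices, edges):
    """Coefficients (highest degree first) of mu_G via the deletion recursion."""
    if not vertices:
        return np.array([1.0])
    v = next(iter(vertices))
    rest = vertices - {v}
    sub_edges = [e for e in edges if v not in e]
    p = np.polymul([1.0, 0.0], matching_poly(rest, sub_edges))   # z * mu_{G - v}
    for e in edges:
        if v in e:                       # subtract mu_{G - u - v} for each u ~ v
            (u,) = e - {v}
            q = matching_poly(rest - {u}, [f for f in sub_edges if u not in f])
            p = np.polysub(p, np.pad(q, (len(p) - len(q), 0)))
    return p

c4_edges = [frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]]
mu = matching_poly(frozenset(range(4)), c4_edges)
print(mu)                                # [ 1.  0. -4.  0.  2.]
print(max(np.roots(mu).real) <= 2.0)     # True: all roots within 2*sqrt(2 - 1)
```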


4.4 Existence of bipartite Ramanujan graphs of arbitrary degree

We now turn to the proof of Proposition 4.13. Clearly

A_s(G) = −d·1 + Σ_{{u,v}∈E} (e_u + s({u, v})e_v)(e_u + s({u, v})e_v)ᵀ

can be written as a sum of a multiple of the identity and independent rank-one matrices in terms of the standard basis (e_v)_{v∈V}. We now apply Proposition 2.3 to this sum of independent rank-one matrices to conclude the proof:

Proof of Proposition 4.13. From Proposition 2.3 it follows that

λ₁(A_s(G)) = λ₁(A_s(G) + d·1) − d ≤ maxroot E P_{A_s(G)+d·1} − d = maxroot E P_{A_s(G)}(· − d) − d = maxroot E P_{A_s(G)}

with positive probability.
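The rank-one decomposition of A_s(G) above is easy to verify on a small example; the following check (ours, not from the essay; numpy assumed) uses a signed 4-cycle, where d = 2:

```python
import numpy as np

n, d = 4, 2
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]                         # the 4-cycle, d = 2
sign = {e: sgn for e, sgn in zip(edges, [1.0, -1.0, 1.0, 1.0])}  # arbitrary signing
I = np.eye(n)

As = np.zeros((n, n))
for (u, v), sgn in sign.items():
    As[u, v] = As[v, u] = sgn

# A_s(G) = -d*1 + sum over edges of (e_u + s_uv e_v)(e_u + s_uv e_v)^T
total = -d * I + sum(np.outer(I[u] + sign[(u, v)] * I[v],
                              I[u] + sign[(u, v)] * I[v]) for (u, v) in edges)
assert np.allclose(total, As)
print("rank-one decomposition verified")
```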

Now the existence of bipartite Ramanujan graphs of arbitrary degree is almost an immediate consequence:

Proof of Theorems 4.8 and 4.11. Start with K_{d,d}, which is bipartite Ramanujan after Example 4.1. Then after Propositions 4.12 and 4.13 there exists some signing s with λ₁(A_s(K_{d,d})) ≤ 2√(d − 1), and thus the corresponding 2-lift (K_{d,d})_s is still bipartite and satisfies

λ₂((K_{d,d})_s) = max{λ₁(A_s(K_{d,d})), λ₂(K_{d,d})} ≤ 2√(d − 1).

We therefore have found a bipartite Ramanujan graph (K_{d,d})_s on 4d vertices. We can continue inductively to obtain bipartite Ramanujan graphs on d·2^{k+1} vertices for all k ∈ ℕ.
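For a small instance, the signing guaranteed by Propositions 4.12 and 4.13 can simply be found by exhaustive search. The sketch below (ours, not from the essay; numpy assumed) checks all 2⁹ signings of K_{3,3} and confirms that the best one satisfies the Ramanujan bound 2√(d − 1) = 2√2 (it attains λ₁ = 2):

```python
import itertools
import numpy as np

d = 3
edges = [(i, j + d) for i in range(d) for j in range(d)]   # K_{3,3}
bound = 2 * np.sqrt(d - 1)

best = np.inf
for signs in itertools.product([-1.0, 1.0], repeat=len(edges)):
    As = np.zeros((2 * d, 2 * d))
    for (u, v), sgn in zip(edges, signs):
        As[u, v] = As[v, u] = sgn
    best = min(best, np.linalg.eigvalsh(As)[-1])   # lambda_1 of this signing

print(round(best, 6), best <= bound)   # 2.0 True
```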

Finally, it should be remarked why the introduced technique does not prove the existence of non-bipartite Ramanujan graphs of arbitrary degree. Theorem 4.11 also applies to non-bipartite graphs and can easily be generalized to prove the existence of signings with λ₁(A_s(G)) ≤ 2√(d − 1) or λ_{|V|}(A_s(G)) ≥ −2√(d − 1), respectively. The problem lies in the fact that the interlacing argument does not give the existence of a signing which satisfies these two conditions simultaneously.

4.5 Generic bound

It is also interesting to note that, while the bound on the maximal root of the matching polynomial was necessary to prove the precise eigenvalue bound, a more generic bound using Theorem 1.1 gives the same asymptotic behaviour. Indeed,

E_s (1/d) Σ_{{u,v}∈E} (e_u + s({u, v})e_v)(e_u + s({u, v})e_v)ᵀ = 1

and

E_s ∥(e_u + s({u, v})e_v)/√d∥² = 2/d =: ϵ.

Thus Theorem 1.1 gives that

(1/d) Σ_{{u,v}∈E} (e_u + s({u, v})e_v)(e_u + s({u, v})e_v)ᵀ ≤ (1 + √(2/d))² · 1


with positive probability, and therefore also A_s(G) ≤ (√(8d) + 2)·1 with positive probability, which is of the same order as the optimal bound A_s(G) ≤ 2√(d − 1)·1. An interesting consequence of the generic bound applied to the signed adjacency matrices is that the ϵ-dependence in Theorem 1.1 is necessary, since we would otherwise have a contradiction to Theorem 4.5.


Bibliography

[1] Noga Alon. Eigenvalues and expanders. Combinatorica, 6(2):83–96, 1986.

[2] Noga Alon. On the edge-expansion of graphs. Combin. Probab. Comput., 6(2):145–152, 1997.

[3] Petter Brändén. Lecture Notes for Interlacing Families. https://people.kth.se/~pbranden/notes.pdf.

[4] O. Bratteli and D. W. Robinson. Operator Algebras and Quantum Statistical Mechanics 1. C*- and W*-Algebras. Symmetry Groups. Decomposition of States. Springer, 1987.

[5] P. G. Casazza and J. Crandell Tremain. The Kadison–Singer Problem in mathematics and engineering. Proceedings of the National Academy of Sciences, 103:2032–2039, February 2006.

[6] J.B. Conway. Functions of One Complex Variable I. Graduate Texts in Mathematics.Springer New York, 2012.

[7] P. A. M. Dirac. The Principles of Quantum Mechanics. International series of monographs on physics (Oxford, England). 1930.

[8] I. Gelfand and M. Neumark. On the imbedding of normed rings into the ring of operators in Hilbert space. Mat. Sb., Nov. Ser., 12:197–213, 1943.

[9] Richard V. Kadison and I. M. Singer. Extensions of pure states. American Journal of Mathematics, 81(2):383–400, 1959.

[10] A. Marcus, D. A. Spielman, and N. Srivastava. Interlacing Families I: Bipartite Ramanujan Graphs of All Degrees. ArXiv e-prints, April 2013.

[11] A. Marcus, D. A. Spielman, and N. Srivastava. Interlacing Families II: Mixed Characteristic Polynomials and the Kadison–Singer Problem. ArXiv e-prints, June 2013.

[12] A. W. Marcus, D. A. Spielman, and N. Srivastava. Ramanujan Graphs and the Solution of the Kadison–Singer Problem. ArXiv e-prints, August 2014.

[13] M. Morgenstern. Existence and Explicit Constructions of q + 1 Regular Ramanujan Graphs for Every Prime Power q. Journal of Combinatorial Theory, Series B, 62(1):44–62, 1994.

[14] Alon Nilli. On the second eigenvalue of a graph. Discrete Mathematics, 91(2):207–210, 1991.

[15] Stochastic Analysis Seminar, Princeton University. Proof of Kadison–Singer. https://blogs.princeton.edu/sas/2014/10/05/lectures-2-and-3-proof-of-kadison-singer-1/.


[16] Joel Spencer. Six standard deviations suffice. Transactions of the American Mathematical Society, 289(2):679–706, February 1985.

[17] Nikhil Srivastava. Discrepancy, Graphs, and the Kadison–Singer Problem. http://windowsontheory.org/2013/07/11/discrepancy-graphs-and-the-kadison-singer-conjecture-2/, July 2013.

[18] Terence Tao. Real stable polynomials and the Kadison–Singer problem. https://terrytao.wordpress.com/2013/11/04/real-stable-polynomials-and-the-kadison-singer-problem/, November 2013.

[19] D. Timotin. The solution to the Kadison–Singer Problem: yet another presentation. ArXiv e-prints, January 2015.

[20] J. A. Tropp. The random paving property for uniformly bounded matrices. ArXiv Mathematics e-prints, December 2006.

[21] J. A. Tropp. User-friendly tail bounds for sums of random matrices. ArXiv e-prints, April 2010.

[22] A. Valette. Le problème de Kadison–Singer (The Kadison–Singer problem). ArXiv e-prints, September 2014.
