40
Chaos in stochastic 2d Galerkin-Navier-Stokes Jacob Bedrossian * Sam Punshon-Smith July 9, 2021 Abstract We prove that all Galerkin truncations of the 2d stochastic Navier-Stokes equations in vor- ticity form on any rectangular torus subjected to hypoelliptic, additive stochastic forcing are chaotic at sufficiently small viscosity, provided the frequency truncation satisfies N 392. By “chaotic” we mean having a strictly positive Lyapunov exponent, i.e. almost-sure asymptotic exponential growth of the derivative with respect to generic initial conditions. A sufficient con- dition for such results was derived in previous joint work with Alex Blumenthal which reduces the question to the non-degeneracy of a matrix Lie algebra implying H¨ ormander’s condition for the Markov process lifted to the sphere bundle (projective hypoellipticity). The purpose of this work is to reformulate this condition to be more amenable for Galerkin truncations of PDEs and then to verify this condition using a) a reduction to genericity properties of a diag- onal sub-algebra inspired by the root space decomposition of semi-simple Lie algebras and b) computational algebraic geometry executed by Maple 1 in exact rational arithmetic. Note that even though we use a computer assisted proof, the result is valid for all aspect ratios and all sufficiently high dimensional truncations; in fact, certain steps simplify in the formal infinite dimensional limit. Contents 1 Introduction 2 2 Preliminaries: projective hypoellipticity and chaos 6 3 Projective hypoellipticity on complex geometries 12 4 A dinstinctness condition in the diagonal algebra 17 5 Verifying distinctness in the diagonal algebra 22 A Relevant algebraic geometry 30 B A key lemma regarding polynomials on the lattice 33 C Computer code 35 References 39 * Department of Mathematics, University of Maryland, College Park, MD 20742, USA [email protected]. J.B. was supported by National Science Foundation CAREER grant DMS-1552826 and the Simons Foundation (2020 Simons Fellowship) Division of Applied Mathematics, Brown University, Providence, RI 02906, USA [email protected]. This mate- rial was based upon work supported by the National Science Foundation under Award No. DMS-1803481. 1 Maple is a trademark of Waterloo Maple Inc. 1 arXiv:2106.13748v2 [math.PR] 8 Jul 2021

Jacob Bedrossian Sam Punshon-Smith July 9, 2021

  • Upload
    others

  • View
    17

  • Download
    0

Embed Size (px)

Citation preview

Page 1: Jacob Bedrossian Sam Punshon-Smith July 9, 2021

Chaos in stochastic 2d Galerkin-Navier-Stokes

Jacob Bedrossian∗ Sam Punshon-Smith†

July 9, 2021

Abstract

We prove that all Galerkin truncations of the 2d stochastic Navier-Stokes equations in vor-ticity form on any rectangular torus subjected to hypoelliptic, additive stochastic forcing arechaotic at sufficiently small viscosity, provided the frequency truncation satisfies N ≥ 392. By“chaotic” we mean having a strictly positive Lyapunov exponent, i.e. almost-sure asymptoticexponential growth of the derivative with respect to generic initial conditions. A sufficient con-dition for such results was derived in previous joint work with Alex Blumenthal which reducesthe question to the non-degeneracy of a matrix Lie algebra implying Hormander’s conditionfor the Markov process lifted to the sphere bundle (projective hypoellipticity). The purposeof this work is to reformulate this condition to be more amenable for Galerkin truncations ofPDEs and then to verify this condition using a) a reduction to genericity properties of a diag-onal sub-algebra inspired by the root space decomposition of semi-simple Lie algebras and b)computational algebraic geometry executed by Maple1 in exact rational arithmetic. Note thateven though we use a computer assisted proof, the result is valid for all aspect ratios and allsufficiently high dimensional truncations; in fact, certain steps simplify in the formal infinitedimensional limit.

Contents

1 Introduction 2

2 Preliminaries: projective hypoellipticity and chaos 6

3 Projective hypoellipticity on complex geometries 12

4 A dinstinctness condition in the diagonal algebra 17

5 Verifying distinctness in the diagonal algebra 22

A Relevant algebraic geometry 30

B A key lemma regarding polynomials on the lattice 33

C Computer code 35

References 39

∗Department of Mathematics, University of Maryland, College Park, MD 20742, USA [email protected]. J.B.was supported by National Science Foundation CAREER grant DMS-1552826 and the Simons Foundation (2020Simons Fellowship)

†Division of Applied Mathematics, Brown University, Providence, RI 02906, USA [email protected]. This mate-rial was based upon work supported by the National Science Foundation under Award No. DMS-1803481.

1Maple is a trademark of Waterloo Maple Inc.

1

arX

iv:2

106.

1374

8v2

[m

ath.

PR]

8 J

ul 2

021

Page 2: Jacob Bedrossian Sam Punshon-Smith July 9, 2021

1 Introduction

Chaos is fundamental to our understanding of fluids and fluid-like systems in realistic settings andis thought to be an integral aspect of turbulence in these systems [13]. However, there are fewmathematically rigorous results on chaos in fluid models, even for finite dimensional models. Inthis paper we consider Galerkin truncations of the 2d Navier-Stokes equations on a torus of aspectratio r > 0 and subjected to additive stochastic forcing. This system can be written as a memberof the following class of stochastic differential equations (SDEs) on Rd,

dxt = (B(xt, xt)− εAxt) dt+√ε

r∑k=1

ekdWkt , (1.1)

where ekrk=1 are a family of constant vectors and W krk=1 are independent standard Wienerprocesses with respect to a canonical stochastic basis (Ω,F , (Ft),P). Here A is symmetric positivedefinite, and B is bilinear satisfying x · B(x, x) = 0 and divB = 0 as well as the “cancellationproperty” B(ek, ek) = 0 (a more general cancellation property can be taken, see [10]). BesidesGalerkin truncations of the Navier-Stokes equations (see Section 1.1 below), this class includesLorenz-96 [39], and the shell models GOY [26] and SABRA [40], all of which are observed to bechaotic for ε small (see e.g. discussions in [10,33,41] for Lorenz-96, e.g. [17] for GOY and SABRA,and e.g. [13] for Galerkin-Navier-Stokes). The balance of dissipation and forcing (usually called‘fluctuation-dissipation’) is chosen so that there is a non-trivial limit as ε → 0; this balance canalways be taken for small damping/large forcing regimes by a suitable re-scaling of t and x (seeRemark 1.5).

For many physical models, under fairly general conditions on ekrk=1 it is possible to show thatthere is a unique stationary measure µ associated to the Markov process of (xt) solving (1.1)(seeSection 2 below). We denote the stochastic flow of diffeomorphisms defined by the solution map asx 7→ xt =: Φt

ω(x), t ≥ 0. For SDE of the form (1.1), the stationary measures have Gaussian upperbounds (see Section 2 or [11]), and so it is possible2 to define a top Lyapunov exponent via thelimit

λ1 := limt→∞

1

tlog∣∣DxΦt

ω

∣∣ ,which holds for µ × P almost every (x, ω) ∈ Rd × Ω. In particular, the Lyapunov exponent λ1 isdeterministic and well-defined independent of initial condition or random noise path. When λ1 > 0,we say that (2.2) chaotic as it shows an exponential sensitivity of the trajectory to changes in theinitial condition. For deterministic systems, verifying λ1 > 0 is notoriously difficult; see e.g. thediscussions in [10] and [46, 53, 54]. Even in the random case, there are relatively few methods.The methods a la Furstenberg (see e.g. [7, 15, 38, 49, 52]) are powerful when applicable, but arenot quantitative and cannot be used to obtain λ1 > 0 for dissipative systems. For systems witha lot of rigid structure, it is sometimes possible to obtain even asymptotic expansions of λ1 insmall noise limits; see e.g. [3, 6, 8, 9, 31, 42, 44, 47] however, these methods generally require almostcomplete knowledge of the limiting ε → 0 dynamics and it is far from clear how these argumentscould be adapted to more complicated systems such as the Galerkin-Navier-Stokes equations oreven Lorenz-96.

Our recent work with Alex Blumenthal [10] puts forward a new method for obtaining lowerbounds on λ1 for SDEs. Therein, we used the method to prove that the Lorenz-96 model subjectto stochastic forcing is chaotic for all ε sufficiently small; the Lorenz-96 model is commonly used

2This follows by the Kingman subadditive ergodic theorem; see e.g. [35].

2

Page 3: Jacob Bedrossian Sam Punshon-Smith July 9, 2021

in applied mathematics as a test case for numerical or analytical methods for high-dimensional,chaotic systems [12, 33, 41, 43, 45], but no mathematical proof of chaos had previously been foundeven in the stochastic case. More generally, for each SDE in the class (1.1), we formulated asufficient condition for chaos in terms of a certain Lie algebra associated to the nonlinearity. Inparticular, the Lie algebraic condition of [10] implies the quantitative estimate

limε→0

λ1(ε)

ε=∞,

and hence ∃ε0 > 0 such that ∀ε ∈ (0, ε0) there holds λ1 > 0.In this paper, we first provide a convenient reformulation of the Lie algebra condition of [10],

particularly amenable to application in Galerkin approximations of PDEs and other complex-valuedSDEs, using basic concepts from complex geometry. Our main result is to verify this condition forthe Galerkin truncations of the 2d Navier-Stokes equations with frequency cutoff N ≥ 392 on toriiof any aspect ratio (Theorem 1.1), thus proving chaos for all ε sufficiently small.

Inspired by the classical root space decomposition of semi-simple Lie algebras, we reduce theproblem to proving genericity of a diagonal sub-algebra. Using the algebraic structure of thenonlinearity in Fourier space, we further reduce this question to showing that a certain list ofpolynomial systems have only trivial solutions. These are exhaustively verified to be inconsistentusing methods from computational algebraic geometry carried out with Maple [1]. Note that despiteusing a computer assisted proof, our results nevertheless apply in arbitrary frequency truncationand arbitrary aspect ratio and, in a certain sense, well-suited for infinite dimensions. We believethe method put forward in this paper should be applicable to other Galerkin approximations ofPDEs, both real and complex valued, provided the nonlinearity is a finite-degree polynomial.

1.1 2d Galerkin-Navier-Stokes equations

Denote the torus of arbitrary side-length ratio T2r = [0, 2π)× [0, 2π

r ) (periodized) for r > 0. Recallthat the Navier-Stokes equations on T2

r in vorticity form are given by

∂tw + u · ∇w = ε∆ω +√εWt,

where u is the divergence free velocity field satisfying the Bio-Savart law u = ∇⊥(−∆)−1w and Wt

is a white-in time, colored-in-space Gaussian forcing assumed to be diagonalizable with respect tothe Fourier basis. The parameter ε represents the kinematic viscosity; the noise has been scaledwith a matching

√ε so that the dynamics have a non-trivial limit when ε→ 0. For definiteness, we

will assume the forcing is of the form

Wt = 2∑k∈Z2

+

αk(cos(k1x1 + rk2x2))W(k;a)t + βk(sin(k1x1 + rk2x2))W

(k;b)t ,

W (k,a),W (k,b)rk=1 are independent standard Wiener processes with respect to a canonical stochas-tic basis (Ω,F , (Ft),P) and that if αk 6= 0 then βk 6= 0 and vice-versa. Here we are denoting the“upper” lattice

Z2+ :=

(k1, k2) ∈ Z2

0 : k2 > 0∪

(k1, 0) ∈ Z20 : k1 > 0

,

where Z20 := Z2\0. We denote the set of driving modes by

K := k ∈ Z2+ : αk, βk 6= 0.

Upon taking the Fourier transform one can re-write the equations in terms of the complexcoefficient wk = r

(2π)2

´T2re−i(x1k1+rx2k2)w(x) dx for each k ∈ Z2

0 and satisfies the reality constraint

3

Page 4: Jacob Bedrossian Sam Punshon-Smith July 9, 2021

w−k = wk. In Fourier space, the nonlinearity B(w,w) = −u · ∇w takes the form for each ` ∈ Z20

B`(w,w) :=1

|T2r |

ˆT2r

B(w,w)e−i(`1x1+r`2x2)dx =1

2

∑k+j=`

cj,kwjwk, (1.2)

where the sum is over all j, k ∈ Z20 such that j + k = `, the symmetrized coefficient is

cj,k := 〈j⊥, k〉r(

1

|k|2r− 1

|j|2r

),

and we are using the notation

〈j⊥, k〉r = r(j2k1 − j1k2), |k|2r = k21 + r2k2

2.

In what follows cj,k always depends on r but we suppress the dependence for notational simplicity.One way to deal with the reality constraint w−k = wk is to restrict the complex valued wk tothe upper lattice Z2

+ and encode the values in the negative lattice Z2− := −Z2

+ through complexconjugation w−k = wk. In this sense we can think of the vorticity w = (w`) as belonging to the

complex space CZ2+ , and the Navier-Stokes equations is seen to be the following complex-valued

evolution equation on CZ2+

w` = B`(w,w)− ν |`|2r w` +√ν(α`W

(`;a)t + iβ`W

(`;b)t

). (1.3)

The above formulation gives a clear method for finite-dimensional approximation, known as aGalerkin approximation. Define the truncated lattice

Z2+,N =

k ∈ Z2

+ : |k|`∞ ≤ N, |k|`∞ := max|k1|, |k2|,

and now simply restrict the vorticity to the truncated lattice w = (w`) ∈ CZ2+,N , in which case (1.3)

becomes an SDE, with the sum in the non-linearity (1.2) now taken over all j, k ∈ Z2+,N such that

j+k = ` ∈ Z2+,N . We regard the phase space as a real finite-dimensional manifold CZ2

+,N ∼= (R2)Z2+,N

with the real and imaginary coordinates (ak, bk)k∈Z2+,N

defined by wk = ak + ibk giving also the

corresponding basis ∂ak , ∂bkk∈Z2+,N

for the tangent space TwCZ2+,N . One can easily check that this

truncation satisfies all of the hypotheses assumed for (1.1).

1.2 Main results

We will assume a general condition on K that implies there exists a unique stationary measure forall ε > 0 (c.f. [19, 28,48]). Denote the full truncated lattice by Z2

0,N :=k ∈ Z2

0 : |k|`∞ ≤ N

.

Assumption 1. Define the sets Zn ⊂ Z20,N ,

Z0 = K ∪ (−K)

Zn =k ∈ Z2

0,N : k = k1 + k′, k1 ∈ Zn−1, k′ ∈ Z0, ck1,k′ 6= 0.

We say K is hypoelliptic if Z20,N =

⋃Zn.

In [19] it was shown explicitly that for r = 1 the sets K = (0, 1), (1, 1) and K = (1, 0), (1, 1)are both hypoelliptic for all N (note also that if K is hypoelliptic, then so is any K′ such thatK ⊆ K′ ). In the limit N →∞ the set of hypoelliptic forcings is easier to characterize [28], howeverfor a fixed N we are unaware of a simple characterization of all hypoelliptic K due to the presenceof the truncation.

The main theorem of this work is the following.

4

Page 5: Jacob Bedrossian Sam Punshon-Smith July 9, 2021

Theorem 1.1. Consider the 2d Galerkin-Navier-Stokes equations with frequency truncation N ≥392 on T2

r. Suppose that K is hypoelliptic. Then

limε→0

λ1(ε)

ε=∞, (1.4)

and in particular ∀N ≥ 392 and ∀r > 0, ∃ε0 > 0 such that for all ε ∈ (0, ε0), λ1 > 0.

Before we make remarks, let us provide an outline of the remainder of the paper. In Section 2 werecall the definition of projective hypoellipticity which corresponds to Hormander’s condition for theMarkov process (xt) lifted to the sphere bundle in a suitable manner, and we recall our results withAlex Blumenthal [10] which (A) provide a useful sufficient condition for projective hypoellipticityin terms of a matrix Lie algebra based only on the nonlinearity (Proposition 2.6) and (B) show thatprojective hypoellipticity implies Theorem 1.1 (see Section 2.4). In Section 3, we reformulate thesufficient condition for projective hypoellipticity to be more suitable for (1.3) (Proposition 3.11).The remainder of the paper is dedicated to proving this sufficient condition. Section 4 introducesa diagonal sub-algebra h and shows that a certain genericity property of h implies projective hy-poellipticity (Corollary 4.9). Section 5 proves this genericity property (Proposition 5.1). Section5 also contains a more detailed summary of the proof of Theorem 1.1 which puts together all ofthe pieces. Sections 4 and 5 both use computational algebraic geometry and computer assistedproofs performed with Maple [1] to compute Grobner bases for certain polynomial ideals (althoughthe arguments used in Section 5 are significantly more complicated). A review of the algebraicgeometry required is included for the readers’ convenience in Appendix A and the computer codeis included in C. Appendix B contains a simple but crucial technical lemma regarding polynomialideals.

1.3 Remarks

Remark 1.2. We did not attempt to optimize the proof to try and reduce the value of N and webelieve that the result holds for much smaller N as well. However, N ≥ 392 is already enough totreat nearly all modern numerical simulations of the 2d Navier-Stokes equations on T2

r .

Remark 1.3. As might be expected, we currently do not have any quantitative estimates on ε0and λ1 in terms of N at this time.

Remark 1.4. The quantitative estimate of λ1 in terms of ε is almost certainly sub-optimal

Remark 1.5. If one starts with the scaling

dxt = (B(xt, xt)− εAxt) dt+r∑

k=1

ek dW kt , (1.5)

we can relate the stochastic flow of diffeomorphisms Φtω solving the SDE (1.5) with the stochastic

flow Φtω solving (1.1) by Φt

ω(u) =√εΦ√εt

ω (u/√ε) (where ωt = ε−1/4ω√εt a Brownian self-similar

rescaling of the noise path ω so equality of the two flows is interpreted as equality in probabilisticlaw). Thus, the Lyapunov exponent λε1 of the stochastic flow Φt

ω satisfies ε−1λε1 = ε−1λε1, and inparticular λε1 > 0 if and only if λε1 > 0.

Remark 1.6. We believe our methods should extend in an analogous way to Galerkin truncationsof other PDE with polynomial nonlinearities, for example the 3D Navier-Stokes equations, as wellas more general truncations like the Fourier decimation models in e.g. [23].

5

Page 6: Jacob Bedrossian Sam Punshon-Smith July 9, 2021

Remark 1.7. Another important truncation of the Euler non-linearity is the Zeitlin model (see [25,55,56]), which has the added benefit that it preserves the Poisson structure of the Euler equationsfor the co-adjoint orbits on SDiff(T2) (see [4]). In this approximation, instead of a sharp truncationin frequency, the Fourier modes are taken to belong to the periodic lattice Z2

0,mod N := Z20\NZ2

0 forsome N ≥ 1 and the non-linearity is given by

B`(w,w) =1

2

∑`=j+k mod N

sin

(2π〈j⊥, k〉

N

)(1

|k|2− 1

|j|2

)wjwk.

While we expect that our results should still hold for this model, it is important to note thatour methods do not currently apply to this truncation since the multiplier sin(2π〈j⊥, k〉/N) is nota polynomial in the lattice variables j, k and therefore cannot be easily treated by our algebraicgeometry methods.

Remark 1.8. The use of computational algebraic geometry methods to deduce generating proper-ties of matrix Lie algebras is not an entirely new idea. For instance, computations using polynomialideals and Grobner bases feature in [20] as a practical tool for deducing transitivity on Rn for cer-tain matrix algebras related to bilinear control systems. However, in contrast with our work, thetechniques used in [20] depend very strongly on the dimension and do not generalize to infinite orarbitrary dimensional systems like ours.

Remark 1.9. Our computer assisted proof uses Maple’s implementation of the F4 algorithm [21] tocompute the reduced Grobner basis (see [16] or Appendix A) of certain polynomial ideals associatedwith the coefficient cj,k. Computing Grobner bases, particularly for ideals generated by high degreepolynomials with many variables, can be notoriously costly and can be very sensitive to the choiceof variable ordering and associated monomial ordering. It is important to remark that the set upof several of the computations included in Appendix C are incredibly delicate, and often fail toconverge if some of the constraints aren’t included, the variable ordering isn’t chosen correctly, or ifthe choice of saturating polynomial isn’t written in a certain way. Indeed, the principle part of thecalculation takes places on polynomials of degree 19 in 11 independent variables, which is a far toohigh dimensional space in which to do arbitrary computations, even for modern supercomputers.

Remark 1.10. A remarkable feature of our proof is that it holds for all N large enough and forall torus aspect ratios r > 0. Such a conclusion is simply not possible using more direct methods.Specifically, for a given fixed N and fixed r > 0, one can of course attempt to check the matrixalgebra generating properties exhaustively using a more direct method, however computing thisvery quickly becomes extremely expensive for even fairly modest N ≥ 10 and can only be doneusing exact rational arithmetic if r is a rational number.

1.4 Acknowledgements

We would like to give a special mention to Alex Blumenthal for many fruitful discussions and whosework in [10] laid the foundation for this one.

2 Preliminaries: projective hypoellipticity and chaos

In this section we review the concepts of hypoellipticity for Markov semigroups, the projectiveprocess and its hypoellipticity, and the main results of [10] connecting projective hypoellipticity tochaos as well as some convenient characterizations of projective hypoellipticity.

6

Page 7: Jacob Bedrossian Sam Punshon-Smith July 9, 2021

2.1 Hypoellipticity

In this section we briefly recall the notion of hypoellipticity and its relevance to SDEs.Let (M, g) be a smooth, Riemannian manifold and X(M), the space of smooth vector fields over

M . Let [X,Y ] be the Lie bracket of two vector fields X,Y , defined for each f ∈ C∞(M) by

[X,Y ](f) = XY (f)− Y X(f)

where X(f) denotes the directional derivative of f in the direction X. This bracket turns X(M)into an infinite dimensional Lie algebra. Denote the adjoint action ad(X) : X(M) → X(M) byad(X)Y = [X,Y ]. The next condition, introduced in [29], is crucial to our study.

Definition 2.1 (Hormander’s condition). For a given collection F ⊆ X(M) define the Lie algebragenerated by F by

Lie(F) := spanLiem(F) : m ≥ 1, (2.1)

whereLiem(F) := spanad(Xr) . . . ad(X2)X1 : Xi ∈ F , 1 ≤ r ≤ m.

We say that a collection of smooth vector fields F ⊆ X(M) satisfies Hormander’s condition on Mif for each x ∈M we have the following spanning property

Liex(F) := X(x) : X ∈ Lie(F) = TxM.

It is also useful to define a notion of (locally) uniform spanning properties of a collection ofvector fields.

Definition 2.2 (Uniform Hormander). Let F ε ⊂ X(M) be a set of vector fields parameterized byε ∈ (0, 1]. We say F ε satisfies the uniform Hormander condition on M if ∃m ∈ N, such that forany open, bounded set U ⊆ M there exists constants Kn∞n=0, such that for all ε ∈ (0, 1] and allx ∈ U , there is a finite subset Vx ⊂ Liem(F ε) such that ∀ξ ∈ Rd

|ξ| ≤ K0

∑X∈Vx

|X(x) · ξ|∑X∈Vx

||X||Cn(U) ≤ Kn.

An important role will also be played by a certain Lie algebra ideal which is better suited tohypoellipticity for parabolic equations and Markov semigroups.

Definition 2.3 (Parabolic Hormander’s condition). Let X0 ∈ X(M) be a distinguished “drift”vector field and let X ⊆ X(M) be of a collection of “noise” vector fields. We define the zero-timeideal generated by X0 and X as the Lie algebra generated by the sets X and [X , X0] := [X,X0] :X ∈ X, which we denote by

Lie(X0;X ) := Lie(X , [X , X0]).

Correspondingly we say that the vector fields X0,X satisfy the parabolic Hormander condition onM if the vector fields Liex(X0;X ) = TxM . Likewise we say X0,X satisfies the uniform parabolicHormander condition if X , [X , X0] satisfies the uniform Hormander condition as in Definition2.2.

Remark 2.4. The terminology ‘zero-time ideal’ comes from geometric control theory (see e.g. [32])where Lie(X0;X ) plays an important role in obtaining exact controllability of affine control sys-tems. A proof that the definition of the zero-time ideal in geometric control theory and Lie(X0;X )coincide can be found in [Proposition 4.14, [10]], although this fact is likely well-known to expertsin geometric control theory (see e.g. discussion in [20] Chapter 3).

7

Page 8: Jacob Bedrossian Sam Punshon-Smith July 9, 2021

Let (M, g) be a smooth Riemannian manifold and consider a stochastic process xt ∈ M, t ≥ 0defined by the (Stratonovich) SDE

dxt = X0(xt) dt+r∑

k=1

Xk(xt) dW kt , (2.2)

for vector fields Xk ∈ X(M). Define the Markov kernel for any set O ⊂M and x ∈M , Pt(x,O) =P(xt ∈ O |x0 = x) and define the Markov semigroup on

Ptϕ(x) := E (ϕ(xt) |x0 = x) =

ˆMϕ(y)Pt(x, dy)

where ϕ : M → R is bounded and measurable. We also define the adjoint semigroup on probabilitymeasures P(M) for each Borel A ⊂M and µ ∈ P(M)

P∗t µ(A) :=

ˆMPt(y,A)µ(dy).

Under fairly mild conditions on the vector fields Xkrk=0, these Markov semigroups are well definedand solve deterministic PDEs [2, 37]. Recall the definition of stationary measure for an SDE.

Definition 2.5. A measure µ ∈ P(M) is called stationary for a given SDE if P∗t µ = µ.

Hormander’s theorem implies that if X0, X1, ..., Xr satisfies the parabolic Hormander condi-tion, then Pt : L∞ → C∞ (see e.g. [27,29]). This implies that any stationary measure µ is absolutelycontinuous with respect to Lebesgue measure with a smooth density. By the Doob-Khasminskiitheorem, this together with topological irreducibility3 implies the uniqueness of stationary mea-sures.

2.2 Projective Hypoellipticity

Consider the general SDE (2.2) on a Riemannian manifold (M, g). It is well-known that manydynamical properties are encoded in the process

zt = (xt, vt) :=

(Φt(x),

DxΦtv

|DxΦtv|

).

The process (zt) takes values on the unit tangent bundle SM defined by the fibers SxM =Sn−1(TxM) and is called the projective process (as one can just as well consider the process onthe projective bundle PM). One can show that zt solves the lifted version of (2.2) on SM

dzt = X0(zt) dt+r∑

k=1

Xk(zt) dW kt .

Here, for a smooth vector field X on M , define the “lifted” vector field X on SM by

X(x, v) := (X(x), V∇X(x, v)),

where each of the components in the block vector above is determined via the orthogonal splittingT(x,v)SM = TxM ⊕ TvSxM into horizontal and vertical components induced by the Levi-Civita

3It suffices to show that for t > 0, x ∈M , O ⊂M open, then Pt(x,O) > 0, however if only uniqueness is desired,one can get by with much less.

8

Page 9: Jacob Bedrossian Sam Punshon-Smith July 9, 2021

connection ∇ on M and the associated Sasaki metric4 g on SM . The “vertical” component V∇Xwill be referred to as the projective vector field and is defined explicitly by

V∇X(x, v) := ∇X(x)v − 〈v,∇X(x)v〉xv,

where ∇X(x) denotes the total covariant derivative of X, viewed as a linear endomorphism onTxM . That there should be a connection between the hypoellipticity of the projective process andthe Lyapunov exponents is well-documented (see e.g. [7, 18, 47]). Indeed, the sufficient conditionproved in [10] for (1.4) in systems of the form (1.1) is the requirement of uniform hypoellipticity ofthe (zt) process, i.e. projective hypoellipticity, which we explain next.

Here we recall necessary and sufficient conditions on a collection of vector fields F ⊆ X(M) sothat their lifts F = X : X ∈ F ⊆ X(SM) satisfy the Hormander condition on SM . Since thevector fields F may not be volume preserving, it is convenient to define for each X ∈ X(M) andx ∈M the following traceless linear operator on TxM :

MX(x) := ∇X(x)− 1n divX(x) Id ,

which we view as an element of the Lie algebra sl(TxM) of linear endomorphisms A with tr(A) = 0and Lie bracket given by the commutator [A,B] = AB − BA. Since the projective vector fieldV∇X(v) includes a projection orthogonal to v, we always have V∇X = VMX

. For each x ∈ M , animportant role will be played by the following Lie sub-algebra of sl(TxM)

mx(F) := MX(x) : X ∈ Lie(F) , X(x) = 0.

Note that mx(F) is independent of any choice of coordinates (and is in fact independent of thechoice of metric). One can further check that mx(F) is indeed a Lie sub-algebra of sl(TxM).

The spanning properties of the lifted vector fields F on SM can be related to properties of theLie algebra mx(F). An important role is played by the non-trivial fact that the lifting map X 7→ Xsatisfies the identity

[X, Y ] = [X,Y ] ,

and therefore is a Lie algebra isomorphism5 onto the set of lifts

X(M) := X : X ∈ X(M).

The associated implications for projective hypoellipticity of the lifts are conveniently recorded inthe following from [10].

Proposition 2.6 (Proposition 4.2 in [10]). Let F ⊆ X(M) be a collection of smooth vector fieldson M . Their lifts F ⊆ X(M) satisfy the Hormander condition on SM if and only if F satisfies theHormander condition on M and for each x ∈M , mx(F) acts transitively on SxM in the sense thatfor each (x, v) ∈ SM , one has

VA(x) : A ∈ mx(F) = TvSxM.

In particular this implies that F ⊆ X(M) satisfies Hormander’s condition if for each x ∈M

Liex(F) = TxM, and mx(F) = sl(TxM).

Remark 2.7. In general one should expect that “generically” mx(F) = sl(TxM) holds true. Indeed,it well-known in the control theory literature (see [14]) that there is an open and dense set of sln(R)such that any two matrices A,B in that set generate sln(R).

4The Sasaki metric (see [50]) is the unique metric on SM induced from g such that the splitting T(x,v)SM =TxM ⊕ TvSxM induced by the Levi-Civita connection is orthogonal.

5This was observed in [7], but see e.g. [Lemma B.2 [10]] for a complete proof.

9

Page 10: Jacob Bedrossian Sam Punshon-Smith July 9, 2021

2.3 Chaos and Fisher Information

In this section we briefly recall some of the main results of [10] for the readers’ convenience. Forthis we have to define the sum Lyapunov exponent, which describes the asymptotic exponential rateof the volume compression/expansion:

λΣ = limt→∞

1

tlog det(DxΦt

ω).

With some additional mild integrability (see [10, 34] for discussions) the Kingman subadditiveergodic theorem [35,36] implies that a unique stationary measure leads to uniquely defined λ1, λΣ

attained for µ×P a.e. (x, ω).For general SDE of the form (2.2) on a Riemannian manifold (M, g), with Alex Blumenthal, we

provided the following identity connecting a degenerate Fisher information-type quantity with theLyapunov exponent.

Proposition 2.8 (Proposition 1.4, [10]). Assume that the SDE (2.2) defines a global-in-timestochastic flow of C1 diffeomorphisms and that the associated projective process (zt) has a uniquestationary measure ν which is absolutely continuous with smooth density f and which satisfies someadditional mild decay estimates at infinity (see [10] for details). Then,

FI(f) :=r∑

k=1

ˆSM

|X∗kf |2

fdq = nλ1 − 2λΣ,

where n is the dimension of M and dq the Riemannian volume measure on SM , and X∗k denotesthe formal adjoint of Xk as a differential operator with respect to L2(dq).

Remark 2.9. A sharper version of the identity holds on the conditional measures with nλ1−λΣ onthe right-hand side, providing a time-infinitesimal analogue of relative entropy inequalities studiedin e.g. [7, 24,38]; see [10] for details.

In [10] we proved the following crucial uniform Hormander-type lower bound on the Fisher in-formation, connecting regularity in W s,1 of f ε to the Fisher information and therefore the Lyapunovexponents.

Theorem 2.10 (Theorem 1.9, [10]). Consider the SDE (2.2) for vector fields Xε0, ..., X

εr param-

eterized by ε ∈ (0, 1]. Suppose that Xε0, X

ε1, ..., X

εr satisfies the uniform Hormander condition

on SM and suppose that for all ε ∈ (0, 1] there exists a unique stationary measure ν with smoothdensity f ε for the associated projective process (zt). Then ∃s? ∈ (0, 1) such that ∀U ⊂ SM openand bounded, ∃CU > 0 such that ∀ε ∈ (0, 1] there holds

||f ε||2W s?,1 ≤ CU (1 + FI(f ε)) .

Note both s? and CU are independent of ε.

The above two results give a clear path towards estimating Lyapunov exponents from below iflower bounds on the regularity of f ε can be obtained.

2.4 Application to the Galerkin-Navier-Stokes equations

In the context of the SDE in the specific class (1.1), one can prove the following using standardmethods. As discussed in Section 1.1, the Galerkin-Navier-Stokes equations written in Fouriervariables and when phase space is interpreted through the real and imaginary parts as Rd, thefollowing theorem applies.

10

Page 11: Jacob Bedrossian Sam Punshon-Smith July 9, 2021

Theorem 2.11 (See [10] or [11]). Let Xε0(x) = B(x, x)− εAx and consider the class of SDE (1.1).

These SDEs each generate families of global-in-time, smooth stochastic diffeomorphisms Φtω, and if

Xε0, X1, ...Xr satisfies the parabolic Hormander condition, then for all ε > 0, there exists a unique

stationary measure µ with a smooth, density ρ which satisfies a pointwise Gaussian upper bound.Moreover, there exists a top Lyapunov exponent λ1(ε) ∈ R and a sum Lyapunov exponent λΣ(ε)such that the following limit holds µ×P almost-surely

λ1 := limt→∞

1

tlog∣∣DxΦt

ω

∣∣λΣ := lim

t→∞

1

tlog det(DxΦt

ω).

Remark 2.12. In fact, if Xε0, X1, ...Xr satisfies the uniform parabolic Hormander’s condition,

then one can prove the pointwise Gaussian upper bound on ρ uniformly in ε, as well as a uniform-in-ε strictly positive lower bound on all compact sets [11].

The main theorem of this paper is a description of the Lie algebra mx(TxM) for the 2d Galerkin-Navier-Stokes equations (1.3).

Theorem 2.13. Let N ≥ 392, let X0(w) = B(w,w) + ε∆w be the Galerkin Navier-Stokes vector

field over M = CZ2+,N , and let X = ∂ak , ∂bkk∈K ⊆ X(M) where K ⊆ Z2

+,N satisfies Assumption1. Then, ∀w ∈M (in a uniform way)

mw([X , Xε0]) = sl(TwM).

and in particular, from Proposition 2.6, Xε0, X satisfies the uniform-in-ε parabolic Hormander con-

dition on SM .

The majority of the paper is spent proving Theorem 2.13; see Section 5 for a summary of howthe pieces fit together in the proof. Next, we briefly summarize next why Theorem 2.13 impliesTheorem 1.1 from the results of [10].

The following lemma is a consequence of Hormander’s theorem, Doob-Khasminskii’s theorem,and geometric control theory; see [10] for details.

Lemma 2.14 (Theorem A.1, [10]). Let Xε0(x) = B(x, x) − εAx and consider the class of SDE

(1.1) (with the corresponding conditions assumed on B). Suppose that Xε0, X satisfies the parabolic

Hormander condition on SM . Then, ∀ε ∈ (0,∞), there exists a unique stationary measure ν forthe associated projective process with a smooth, strictly positive density f ε with respect to Lebesguemeasure such that f ε log f ε ∈ L1.

In view of the above, Proposition 2.8 gives the following for (1.1) (assuming projective hypoel-lipticity),

FI(f ε) =nλ1

ε− 2trA.

Theorem 2.10 then implies there exists an s ∈ (0, 1) such that for every bounded open set U ⊆ SMwe have

||f ε||2W s?,1(U) .U 1 +λ1

ε.

Therefore, if λ1/ε were to remain bounded, one can show that f εε>0 is precompact in Lp for allp ≥ 1 sufficiently small and so there is a strongly convergent subsequence f εn → f ∈ L1 which is

11

Page 12: Jacob Bedrossian Sam Punshon-Smith July 9, 2021

an absolutely continuous stationary density for the ε = 0 limiting deterministic projective process[Proposition 1.14, [10]].

Projective hypoellipticity played the crucial role in reducing the estimate on the Lyapunovestimate to one of regularity of f ε, and for the estimate (1.4) to whether or not there can exist aninvariant measure with an L1 density for the deterministic ε = 0 projective process. That no suchinvariant density can exist for any model of the form (1.1) was proved in [Proposition 1.15 [10]], andtherefore λ1/ε→∞. The major additional ingredient used in this step is that the ε = 0 JacobianDxΦt grows unboundedly as t → ∞ for a.e. initial condition x, necessitating concentrations inany invariant measures (see [10] for details). This is deduced using the special structure of thenonlinearity and for other models may not be straightforward to verify.

To summarize, to prove Theorem 1.1, it suffices only to verify that Xε0, X1, ..., Xr satisfies

the uniform-in-ε parabolic Hormander condition, i.e. Theorem 2.13. The remainder of the paper isdedicated to the proving of this result, which as detailed in the next section, is a purely algebraicquestion.

Remark 2.15. For the specific case of Navier-Stokes with additive stochastic forcing, the Fisherinformation becomes

FI(f ε) =∑k∈K

ˆ (|∂ak log f ε|2 + |∂bk log f ε|2

)f ε dq.

3 Projective hypoellipticity on complex geometries

3.1 Real vs complex spanning

Treating the phase space CZ2+,N as a real manifold using the real and imaginary parts can be awk-

ward and lead to very cumbersome calculations. Due to the convenience of the Fourier descriptionwhen dealing with Galerkin truncations of PDEs, it makes more sense to find a natural, complexway to view phase space. Specifically, if we have a complex phase space Cn, we should treat it asa complex manifold and complexify the tangent space. First, we review some of the basic conceptsfrom complex geometry for the readers’ convenience (see e.g. [Chapter 1, [30]]) and explain howthe ideas apply to hypoellipticity of stochastic PDEs, providing a cleaner proof of the spanningcondition for Galerkin Navier-Stokes obtained in [19]. Finally, we explain how the ideas extend tothe question of projective hypoellipticity and formulate the sufficient condition which occupies therest of the paper.

Definition 3.1. Given a real vector space V , define its complexification by

V ⊗ C = v1 + iv2 : v1, v2 ∈ V .

We begin by noting the following simple, but crucial equivalence between complex and realspanning of a collection of vectors in a real vector space.

Lemma 3.2. Let V be a real vector space and let V ⊗ C be it’s complexification. For a givencollection of vectors vk ⊂ V , we have

spanvk = V, if and only if spanCvk = V ⊗ C, (3.1)

where spanC denotes the span of a collection of vectors using complex coefficients.

Proof. Real spanning of V implies complexified spanning since one can span the real and imaginaryparts separately. For the converse, suppose that (3.1) holds. This means that for any v ∈ V thereexist ak ⊂ C such that

∑k αkvk = v. Taking the real part of both sides gives

∑k Re(αk)vk = v,

implying that vk spans V .

12

Page 13: Jacob Bedrossian Sam Punshon-Smith July 9, 2021

3.2 Hormanders condition on Cn

Now we turn to the space Cn, where complexification of the tangent space is natural and mostuseful. Let X be a smooth vector field over Cn, where Cn is viewed as a real manifold withreal tangent space TCn spanned by the coordinate vectors ∂ak , ∂bk , corresponding to the real andimaginary parts respectively. Clearly, TCn is isomorphic as a vector space to Cn and therefore wemay view each X as a mapping Cn → Cn with Xk : C → C the kth component of the image ofthat map. In the ∂ak , ∂bk basis we can write X as

X =∑k

Re(Xk)∂ak + Im(Xk)∂bk . (3.2)

In what follows we will complexify the tangent space TCn ⊗ C and define complex basis vectors

∂zk = 12 (∂ak − i∂bk) , ∂zk = 1

2 (∂ak + i∂bk) .

This naturally induces the splitting TCn ⊗ C = T 1,0Cn ⊕ T 0,1Cn, where T 1,0Cn = spanC∂zk,T 0,1Cn = spanC∂zk, known as the holomorphic and anti-holomorphic bundles respectively (see[30]). In this new basis, we see that (3.2) becomes

X =∑k

Xk∂zk +Xk∂zk .

Recall that the Lie bracket [ · , · ] is coordinate independent and does not depend on the choiceof basis and so neither does Lie(F) for some collection F ⊆ X(Cn). Given a collection F ⊂X(Cn), let Lie(F)C be the complexification of Lie(F) (obtained by replacing span with spanC inthe definition (2.1)). We now have the following simple corollary of Lemma 3.1 regarding spanningfor Liex(F)C := X(x) : X ∈ Lie(F)C.

Lemma 3.3. A collection F ⊆ X(Cn) satisfies Hormander’s condition on Cn (as a real manifold)if and only if for each z ∈ Cn

Liez(F)C = T 1,0Cn ⊕ T 0,1Cn.

Remark 3.4. The same proof also applies to any subalgebra of Lie(F), for instance the Lie algebraideal Lie(X0;F) with respect to a distinguished drift vector field X0.

Remark 3.5. Lemma 3.3 means from a practical perspective that in order to check Hormander’scondition for a collection of vector fields on Cn, it is sufficient to take complex linear combinationsand attempt to isolate ∂zk and ∂zk separately in order to span both T 1,0Cn and T 0,1Cn.

3.3 Application: hypoellipticity for the stochastic Navier-Stokes equations

In this section we show how the complexification procedure above allows us to give a cleaner proofof Hormander’s condition for the Navier-Stokes equations with additive stochastic forcing in 2d,first identified in [19] and expanded upon in [28]. Recall from Section 1.1, we can formulate the 2d

stochastic Galerkin-Navier-Stokes equations as an SDE on M = CZ2+,N by

w = Xε0(w) +

√ε∑k∈K

(αkW

(k;a)t ∂ak + βkW

(`;b)t ∂bk

),

where αk, βk ∈ R 6= 0 for k ∈ K ⊆ Z2+,N and Xε

0(w) = B(w,w) + εAw. In ∂w`, ∂w`

coordinates ittakes the form

Xε0(w) =

∑`∈Z2

+,N

(B`(w,w)− ε|`|2w`)∂w`+ (B`(w,w)− ε|`|2w`)∂w`

,

13

Page 14: Jacob Bedrossian Sam Punshon-Smith July 9, 2021

where

B`(w,w) =1

2

∑j+k=`

cj,kwjwk,

with the sum over all j, k ∈ Z20,N such that j + k = `. Due the reality constraint w−` = w` we find

it convenient to index the basis vectors on the full lattice Z20 via

∂w`:=

∂w`

` ∈ Z2+,N

∂w−`` ∈ Z2

−,N,

where Z2− = −Z2

+. Combining this with the reality constraint on B−`(w,w) = B`(w,w), we canwrite X0 in a more succinct notation involving a sum over the full lattice

Xε0(w) =

∑`∈Z2

0

(B`(w,w)− ε|`|2w`)∂w`.

We note that for any w ∈ CZ20,N , satisfying w−` = w`, that ∂w`

as defined above has the propertythat for each `, i ∈ Z2

0,N , ∂w`behave as Wirtinger derivatives, satisfying

∂w`wi = δi=`.

From this, we can easily obtain simple expressions for the brackets

[∂wk, Xε

0(w)] =∑

j∈Z20,N

1Z20,N

(j + k)cj,kwj∂wj+k− ε|k|2∂wk

,

and[∂wk1

, [∂wk2, Xε

0(w)]] = 1Z20,N

(k1 + k2)ck1,k2∂wk1+k2.

Our goal is to prove the following:

Proposition 3.6. Let X = ∂ak , ∂bk : k ∈ K, where K ⊆ Z2+,N satisfies Assumption 1. Then

Lie(Xε0;X )C contains the constant vector fields ∂wk

: k ∈ Z20,N and moreover, it follows from

Lemma 3.3 that Xε0,X satisfies the uniform parabolic Hormander condition on CZ2

+,N viewed as areal manifold.

Proof. Since for a given ` ∈ Z2+,N , ∂w`

and ∂w−`= ∂w`

are complex linear combinations of ∂a`and ∂b` for ` ∈ Z2

+,N it suffices to take brackets with respect to ∂w`for all ` ∈ K ∪ −K ⊆ Z2

0,N .Therefore for k1, k2 ∈ K ∪ −K we have

[∂wk1, [∂wk2

, X0]] = 1Z20,N

(k1 + k2)ck1,k2∂wk1+k2;

as this is independent of ε, it is clear that spanning will imply uniform spanning. If ck1,k2 6= 0 weconclude that ∂wk1+k2

∈ Lie(X0;F)C.It becomes clear we need the following iteration, defining Z0 = K ∪ −K

Zn = `+ j : j ∈ Z0, ` ∈ Zn−1 such that c`,j 6= 0

By Assumption 1, this iteration continues to generate all of Z20,N which implies that ∂wk

k∈Z20,N⊆

Lie(X0;F)C and therefore, since T 0,1CZ20,N ' T 0,1CZ2

+,N ⊕ T 1,0CZ2+,N ⊆ Liez(X0;F)C, the theorem

is proved.

14

Page 15: Jacob Bedrossian Sam Punshon-Smith July 9, 2021

3.4 Projective spanning on Cn

When the manifold is Cn we will also find it useful to complexify the tangent space to showprojective hypoellipticity. Let V be a real vector space and recall that for a given vector space W(real or complex) the space sl(W ) is the Lie algebra of linear endomorphisms H of W with trH = 0(note this is independent of basis) and Lie bracket given by the commutator

[A,B] = AB −BA.

Note that any endomorphism H of V can be trivially extended to an endomorphism of the com-plexification V ⊗ C via H(v1 + iv2) = Hv1 + iHv2, moreover any G ∈ sl(V ⊗ C) can be writtenas G = G1 + iG2, where G1, G2 ∈ sl(V ), so that we have sl(V ⊗ C) = sl(V ) ⊗ C, i.e. sl(V ) is areal form for sl(V ⊗ C). We denote the Lie algebra of endomorphisms generated by any collectionH ⊆ sl(V ) by

Lie(H) = spanad(Hr) . . . ad(H2)H1 : Hi ∈ H, r ∈ N.

Likewise, define Lie(H)C as above with spanC and H extended to sl(V ⊗C). The next result followseasily from Lemma 3.2 and the bilinearity of X,Y 7→ ad(X)Y .

Proposition 3.7. Let V be a real vector space and H ⊆ sl(V ), then Lie(H) = sl(V ) if and only ifLie(H)C = sl(V )⊗ C.

In light of the linearity of the mapping X 7→ MX(z) we have the following property of the Liealgebra of endomorphisms induced by Lie(F)C for some collection F ∈ X(Cn).

mz(F)C := MX(z) : X ∈ Lie(F)C , X(z) = 0.

Corollary 3.8. Let F ⊆ X(Cn), then for each z ∈ Cn we have mz(F) = sl(TzCn) if and onlyif mz(F)C = sl(TzCn) ⊗ C. In particular, the lifts F satisfies Hormander’s condition on SCn ifmz(F)C = sl(TzCn)⊗ C and F satisfies Hormander’s condition on Cn.

Remark 3.9. Corollary 3.8 is useful in the sense that it allows one to work directly with mx(X0;F)Ctherefore consider matrices ∇X(x) in ∂zk , ∂zk coordinates, which often take a much simpler formthan their counterparts in ∂ak , ∂bk coordinates.

3.5 A sufficient condition for projective hypoellipticity for Navier-Stokes

In this section, we consider a sufficient condition for projective hypoellipticity for the Navier-Stokesequation in terms of a real matrix Lie algebra obtained by working in complex coordinates. Thesematrices take on a particularly simple form that allow the problem to be made much more tractablewhich is crucial for the arguments that follow.

Following the set-up of section 3.3, we define the Navier-Stokes vector field on the complexified

tangent space TwCZ2+,N ⊗ C

Xε0(w) :=

∑`∈Z2

0

(B`(w,w)− ε|`|2w`

)∂w`

,

where we recall that w−` = w` and that we have defined for ` ∈ Z20

∂w`:=

∂w`

` ∈ Z2+

∂w−`` ∈ Z2

−.

15

Page 16: Jacob Bedrossian Sam Punshon-Smith July 9, 2021

As in the set up of Proposition 3.6, we assume that we have vector fields ∂wkk∈K∪−K, where

K ⊂ Z20,N generates Z2

0,N in the sense of Assumption 1. By Proposition 3.6, we have thatLie(X0; ∂wk

k∈K)C, contains the constant vector fields ∂wkk∈Z2

0,N, and therefore any vector field

X ∈ Lie(X0; ∂wkk∈K)C can always be shifted by a constant vector field

X = X −X(z)

so that X(z) = 0 and ∇X = ∇X. Additionally, by Corollary 3.8, and the fact that B(w,w) isbilinear and ∆w is linear, this implies that for each k ∈ Z2

0,N , the endomorphism

Hk := ∇[∂wk, Xε

0] = ∂wk∇B,

belongs to mw(X0; ∂wkk∈K)C. Moreover due to the bilinear nature of B(w,w), each Hk is constant

and independent of ε.The following Lemma gives an explicit matrix representation of Hk in ∂wk

coordinates as a|Z2

0,N | = (2N + 1)2− 1 dimensional square matrix indexed over Z20,N . This simple form comes from

the convenient form of the nonlinearity in complex variables (see Section 1.1).

Lemma 3.10. For each k ∈ Z20,N we have the following formula for Hk in ∂w`

coordinates by

(Hk)`,j = cj,kδk+j=`, `, j ∈ Z20,N . (3.3)

Note that in ∂wk coordinates, the matrices Hk are real matrices, and therefore we only

need them to generate an appropriate Lie algebra of real matrices in order for them to span thecomplexified space by Corollary 3.8.

Below we record a sufficient condition for projective spanning in the Galerkin-Navier-Stokessystem in terms of the Lie algebra generated by the Hk matrices.

Proposition 3.11. Let Hk := Hk : k ∈ Z20,N be the matrices defined by (3.3) in ∂wk

coordi-

nates. Then the lifts Xε0, ∂ak , ∂bk : k ∈ K satisfy the uniform parabolic Hormander condition on

SCZ2+,N if

Lie(Hk) = slZ20,N

(R),

where we use the notation slZ20,N

(R) to denote the Lie algebra of real, trace-free matrices indexed

by Z20,N .

Proof. By the above discussion regarding Proposition 3.6, we have that Hk viewed as linearendomorphisms satisfy

Hk ⊂ mw(X0; ∂wkk∈K∪(−K))C ⊆ sl(TwCZ2

+,N )⊗ C.

If the corresponding real matrix Lie algebra Lie(Hk) represented in ∂wkcoordinates is equal

to slZ20,N

(R), then by Proposition 3.7 it is clear that the complexified algebra of endomorphisms

satisfies Lie(Hk)⊗ C = sl(TwCZ2+,N )⊗ C and therefore

mw(X0; ∂wkk∈K∪(−K))C = sl(TwCZ2

+,N )⊗ C.

Moreover, since Hk are constant matrices, this equality is uniform in w. It follows by Corollary

3.8 that Xε0, ∂ak , ∂bk : k ∈ K satisfy the uniform parabolic Hormander condition on SCZ2

+,N .

16

Page 17: Jacob Bedrossian Sam Punshon-Smith July 9, 2021

Remark 3.12. It is important to note that each matrix Hk has a banded structure, with non-zero entries occurring on the band ` − j = k (except when cj,k = 0). This banded structure is aconsequence of the non-local frequency coupling present in non-linearity. Such non-local interactionsprovide a significant challenge when studying the Lie(Hk) and are what make projective spanningfor Navier-Stokes and other PDEs so challenging compared to locally coupled models like Lorenz96 or shell models like GOY or SABRA.

4 A dinstinctness condition in the diagonal algebra

In order to show thatLie(Hk) = slZ2

0,N(R),

for Proposition 3.11, a special role will be played by a certain diagonal subalgebra h. Genericityproperties of elements of this algebra, specifically related to distinctness of certain differences ofdiagonal elements, will play a crucial role in our ability to isolate elementary matrices, which isparticularly challenging for the Navier-Stokes equations due to the non-local frequency interactionsmade explicit by the banded structure of the Hk matrices.

4.1 An illustrative example

Taking a page from the classical root space decomposition of semi-simple Lie algebras, we will makeuse of a strategy that utilizes the fact that elementary matrices are left invariant by adjoint actionwith a diagonal matrix. To fix ideas, we will first consider an idealized situation. Let D be anydiagonal matrix in sln(R) with diagonal entries Dii denoted by Di. It is well known and easilyverifiable that for any elementary matrix Ei,j = δij for the Kronecker delta (i.e. a matrix with aone in the ith row and jth column and zero elsewhere) one has

ad(D)Ei,j = [D, Ei,j ] = (Di − Dj)Ei,j ,

and therefore Ei,j is an eigenvector of the operator ad(D) with eigenvalue Di − Dj . This meansthat if D is suitably generic in the sense that it’s diagonal entries have distinct differences

Di − Dj 6= Di′ − Dj′ when (i, j) 6= (i′, j′),

then the operator ad(D) has simple eigenvalues. Such a distinctness property and associated sim-plicity of the spectrum gives a clear strategy for spanning sets of elementary matrices that generatesln(R) using an approach similar to Krylov subspace methods for generating sets of linearly inde-pendent eigenvectors [5, 51]. Specifically we have the following.

Proposition 4.1. Let H be a matrix in sln(R) whose diagonal entries are zero Hii = 0, and withat least one non-zero element away from the diagonal,

supp(H) := (i, j) : i 6= j ,Hij 6= 0 6= ∅.

Suppose, in addition, that there is a diagonal matrix D whose diagonal entries Di = Dii satisfy

Di − Dj 6= Di′ − Dj′ , for each (i, j), (i′, j′) ∈ supp(H), (i, j) 6= (i′, j′).

Then for N = |supp(H)|,

spanH, ad(D)H, ad(D)2H . . . , ad(D)N−1H,

contains the elementary matrices Ei,j for each i, j ∈ supp(H).

17

Page 18: Jacob Bedrossian Sam Punshon-Smith July 9, 2021

Proof. Write

H =∑Hij 6=0

HijEij

and λij = Di − Dj be the corresponding eigenvalue of ad(D). The Krylov subspace in questionbecomes

span∑

HijEij ,∑

HijλijEij , . . .

∑Hijλ

N−1ij Eij

.

The linear independence of these vectors reduces to the invertibility of the Vandermonde matrix1 λi1j1 . . . λN−1i1j1

......

. . ....

1 λiN jN . . . λN−1iN jN

,

which follows by the assumption that λij 6= λi′j′ for (i, j) 6= (i′, j′). Hence, due to the full rank, theKrylov subspace coincides with span

Eij : Hij 6= 0

.

Remark 4.2. In the context of Lemma 4.1, it is important to note that one does not necessarilyneed H to have all it’s entries non-zero in order to show that Lie(D, H) = sln(R). Indeed, arelatively small number of elementary matrices can easily generate sln(R). For instance it is readilyseen that the elementary matrices

E1,2, E2,3, . . . En−1,n, En,1

are sufficient to generate sln(R).

4.2 The diagonal subalgebra h

The set of matrices Hk defined in (3.3) does not contain any diagonal matrices, however, bycommuting Hk and H−k, we obtain a diagonal algebra which we denote

h := span[Hk, H−k] : k ∈ Z20,N.

Lemma 4.3. For each k ∈ Z20,N , we have

Dk := [Hk, H−k]

is a diagonal matrix with diagonal entries for each i ∈ Z20,N given by

Dki := ci,kci+k,k1Z20,N

(i+ k)− ci,kci−k,k1Z20,N

(i− k),

and therefore h = spanDk is a commutative Lie sub-algebra of slZ20,N

(R).

Remark 4.4. It is important to note that the truncated lattice Z20,N actually makes the form

of Dki more complicated. Depending on the choice of k, the indicator functions 1Z20,N

(i + k) and

1Z20,N

(i − k) have non-trivial regions where they overlap and don’t overlap, leading to significant

complications in proofs that utilize computational algebra. A remarkable fact is that the “infinitedimensional” case obtained by replacing Z2

0,N with the full lattice Z20 actually gives the much cleaner

form

Dki = ci,kci+k,k − ci,kci−k,k = 〈i⊥, k〉2(

1

|k|2− 1

|i|2

)(1

|i− k|2− 1

|i+ k|2

)making it much more amenable to algebraic methods.

18

Page 19: Jacob Bedrossian Sam Punshon-Smith July 9, 2021

We would like to use the diagonal matrices in h to proceed as Section 4.1, however, the situationhere is far more delicate than that presented in Proposition 4.1 due to the fact that Dki has aninversion symmetry Dk−i = −Dki , which fundamentally restricts the possibility of having distinctdifferences. In particular, for any given diagonal matrix D satisfying D−i = −Di, the adjointoperator

ad(D) : slZ20,N

(R)→ slZ20,N

(R)

is incapable of having simple spectrum since the odd symmetry of Di implies that there are alwaystwo dimensional invariant spaces associated to the adjoint operator. Specifically we see that foreach i, j ∈ Z2

0,N , i 6= j

ad(D)Ei,j = (Di − Dj)Ei,j and ad(D)E−j,−i = (Di − Dj)E−j,−i

and therefore the eigenvalue Di −Dj for ad(D)always has multiplicity at least 2 with the invariantspace

spanEi,j , E−j,i.

With this in mind, it is convenient to write Hk as a linear combination of such matrices. Inparticular we can write for each k ∈ Z2

0,N

Hk =1

2

∑i,i−k∈Z2

0,N

(ci−k,kEi,i−k − ci,kEk−i,−i).

Taking into account the sparsity of Hk and the fact that for any diagonal matrix D satisfyingD−i = −Di, ad(D) leaves ci−k,kE

i,i−k− ci,kEk−i,i invariant, suggests that if D satisfies the followingdistinctness property

Di − Di−k 6= Di′ − Di′−k, (4.1)

for each k, i, i′, i − k, i′ − k ∈ Z20,N with i 6= i′ and i 6= k − i′, then a similar procedure to the

one carried out in Proposition 4.1 implies that under the distinctness condition (4.1), if D belongsto Lie(Hk), then Lie(Hk) also contains the following sets of matrices for each k ∈ Z2

0,N and

i, i− k ∈ Z20,N

ci−k,kEi,i−k − ci,kEk−i,−i.

By relabeling indices and eliminating k, this means that we can obtain matrices of the form

M i,j := cj,i−jEi,j − ci,i−jE−j,−i

for each i, j ∈ Z20,N , i− j ∈ Z2

0,N . Similarly, in the distinctness condition (4.1) we can eliminate k,and reduce this to a more symmetric constraint of the form

Dki + Dkj + Dk` + Dkm 6= 0

for i, j, `,m ∈ Z20,N satisfying

i+ j + `+m = 0,

with the constraints

CN := (i, j, `,m) ∈ (Z20,N )4 : (i+ j, `+m) 6= 0, (i+ `, j +m) 6= 0, (i+m, j + `) 6= 0. (4.2)

In general we have the following convenient reformulation of the distinctness condition (4.1).

19

Page 20: Jacob Bedrossian Sam Punshon-Smith July 9, 2021

Definition 4.5 (Distinct). We say a diagonal matrix D ∈ h is distinct if for every (i, j, `,m) ∈ CNwith i+ j + `+m = 0 we have

Di + Dj + D` + Dm 6= 0. (4.3)

Remark 4.6. Note that the constraint set CN defined in (4.2) is fundamental to the symmetry ofthe sum Di + Dj + D` + Dm. Each constraint is necessary in the sense that if any one of them failsthen we automatically have

Di + Dj + D` + Dm = 0

due to the inversion symmetry D−i = −Di.

Under this new definition, we can summarize the above discussion as follows.

Lemma 4.7. Suppose that h contains a distinct diagonal matrix in the sense of Definition 4.5.Then Lie(Hk) contains the matrices M i,j : i, j ∈ Z2

0,N , i− j ∈ Z20,N.

It turns out that this set of matrices M i,j , each one being comprised of linear combination ofpairs of elementary matrices, is sufficient to generate all of slZ2

0,N(R). The proof of this fact is the

content of the following subsection.

Proposition 4.8. The matrices M i,j : i, j ∈ Z20,N , i− j ∈ Z2

0,N generate slZ20,N

(R).

As a simple corollary of this and Lemma 4.7 this reduces Proposition 3.11 to a condition on theexistence of a distinct matrix inside h.

Corollary 4.9. If h contains a distinct matrix, then Lie(Hk) = slZ20,N

(R).

4.2.1 Proof of Proposition 4.8

To show Proposition 4.8 we first assume an algebraic property of the coefficients cj,k, which we willprove in Proposition 4.11 using techniques from computational algebraic geometry. To simplifynotation in what follows, denote

Si := Z20,N ∩ Z2

0,N + i.

Lemma 4.10. Suppose that for each $i, j \in \mathbb{Z}^2_{0,N}$ with $i - j \in \mathbb{Z}^2_{0,1}$, there exist $k, k' \in S_i \cap S_j$ such that
$$d^{k,k'}_{i,j} := c_{i,i-k}c_{k,j-k}c_{k',i-k'}c_{j,j-k'} - c_{k,i-k}c_{j,j-k}c_{i,i-k'}c_{k',j-k'} \neq 0.$$
Then $\{M^{i,j} : i, j \in \mathbb{Z}^2_{0,N},\ i - j \in \mathbb{Z}^2_{0,N}\}$ generates $\mathfrak{sl}_{\mathbb{Z}^2_{0,N}}(\mathbb{R})$.

Proof. If we take commutators of matrices of the form $[M^{i,k}, M^{k,j}]$, where $i - j \in \mathbb{Z}^2_{0,1}$ and $k \in S_i \cap S_j$, we have
$$[M^{i,k}, M^{k,j}] = c_{k,i-k}c_{j,k-j}E_{i,j} - c_{i,k-i}c_{k,j-k}E_{-j,-i}.$$
Therefore, if we pick any two $k, k' \in S_i \cap S_j$, we obtain a $2\times 2$ linear system for $E_{i,j}$ and $E_{-j,-i}$:
$$\begin{aligned} [M^{i,k}, M^{k,j}] &= c_{k,i-k}c_{j,k-j}E_{i,j} - c_{i,k-i}c_{k,j-k}E_{-j,-i} \\ [M^{i,k'}, M^{k',j}] &= c_{k',i-k'}c_{j,k'-j}E_{i,j} - c_{i,k'-i}c_{k',j-k'}E_{-j,-i}. \end{aligned} \tag{4.4}$$

We can write $E_{i,j}$ and $E_{-j,-i}$ as linear combinations of $[M^{i,k}, M^{k,j}]$ and $[M^{i,k'}, M^{k',j}]$ provided that for each $i, j \in \mathbb{Z}^2_{0,N}$ with $i - j \in \mathbb{Z}^2_{0,1}$ we can find $k, k' \in S_i \cap S_j$ such that
$$d^{k,k'}_{i,j} = \det\begin{pmatrix} -c_{k,i-k}c_{j,j-k} & c_{i,i-k}c_{k,j-k} \\ -c_{k',i-k'}c_{j,j-k'} & c_{i,i-k'}c_{k',j-k'} \end{pmatrix} \neq 0.$$


This is true by assumption, and therefore we can solve the linear system (4.4) and obtain all elementary matrices $E_{i,j}$ for $i, j \in \mathbb{Z}^2_{0,N}$ with $i - j \in \mathbb{Z}^2_{0,1}$. One can easily check that all such elementary matrices generate $\mathfrak{sl}_{\mathbb{Z}^2_{0,N}}(\mathbb{R})$ (see Remark 4.2).
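The elimination in this proof is just a symbolic $2\times 2$ linear solve; the following minimal SymPy sketch (ours, with placeholder symbols a, b, c, d standing in for the products of Euler coefficients) makes it explicit.

import sympy as sp

a, b, c, d, X1, X2 = sp.symbols('a b c d X1 X2')
E1, E2 = sp.symbols('E1 E2')   # stand-ins for E_{i,j} and E_{-j,-i}

# X1, X2 play the role of the two commutators in (4.4)
system = [sp.Eq(X1, a*E1 - b*E2), sp.Eq(X2, c*E1 - d*E2)]
(solution,) = sp.linsolve(system, [E1, E2])

print(solution)                                # denominators involve a*d - b*c, i.e. the determinant up to sign
print(sp.Matrix([[a, -b], [c, -d]]).det())     # the 2x2 determinant that must be non-zero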

Note that the property that $d^{k,k'}_{i,j} \neq 0$ is a purely algebraic one. In particular, suppose by contradiction that there exist $i, j \in \mathbb{Z}^2_{0,N}$ with $i - j \in \mathbb{Z}^2_{0,1}$ such that
$$d^{k,k'}_{i,j} = 0, \quad \text{for all } k, k' \in S_i \cap S_j.$$
Then $i, j$ must solve a set of rational equations (with integer coefficients), one for each pair $(k,k') \in (S_i \cap S_j)^2$. Next, we show that this system of rational equations is algebraically inconsistent. We will prove the following proposition using machinery from computational algebraic geometry, which we review in Appendix A. The computations are done using Maple; see Appendix C.1 for the computer code.

Proposition 4.11. For each $i, j \in \mathbb{Z}^2_{0,N}$ with $i - j \in \mathbb{Z}^2_{0,1}$, there exist $k, k' \in S_i \cap S_j$ such that $d^{k,k'}_{i,j} \neq 0$.

Proof. To prove this, we first note that $d(i,j,k,k',r) = d^{k,k'}_{i,j}$ is a purely rational algebraic function of the variables $i = (i_1,i_2)$, $j = (j_1,j_2)$, $k = (k_1,k_2)$, $k' = (k'_1,k'_2)$ and $r$. We denote the numerator by
$$P(i,j,k,k',r) = \mathrm{numer}\big(d(i,j,k,k',r)\big).$$

Suppose by contradiction that there exist $i, j \in \mathbb{Z}^2_{0,N}$ with $i - j \in \mathbb{Z}^2_{0,1}$ such that $d(i,j,k,k',r) = 0$ for all $k, k' \in S_i \cap S_j$. Then we have
$$P(i,j,k,k',r) = 0, \quad \text{for all } k, k' \in S_i \cap S_j.$$

Note that this polynomial is degree 10 in $k, k'$. If $N \geq 8$, then since $i - j \in \mathbb{Z}^2_{0,1}$, $S_i \cap S_j$ always contains $\mathbb{Z}^2_{0,6}$, and Lemma B.1 implies that the collection of polynomials in $i, j, r$ defined by the coefficients of the polynomial in $k_1, k_2, k'_1, k'_2$,
$$\{f_1, \ldots, f_s\} = \mathrm{coeffs}(P, k_1, k_2, k'_1, k'_2),$$
must also vanish (note that the coefficients $f_1, \ldots, f_s$ are polynomials in $(i_1, i_2, j_1, j_2)$ and $r$ with integer coefficients). By extending the variables $i_1, i_2, j_1, j_2$ and $r$ to the algebraically closed field $\mathbb{C}$, we define the polynomial ideal generated by $f_1, \ldots, f_s$,
$$I = \langle f_1, \ldots, f_s\rangle \subseteq \mathbb{C}[i_1, i_2, j_1, j_2, r]$$

(see Appendix A for a review of the relevant algebraic geometry). Next we define the constraint polynomial
$$g(i,j,r) = r^2 |i|_r^2 |j|_r^2 |i-j|_r^2.$$
Note that on $\mathbb{C}^5$, $g \neq 0$ exactly encodes the constraints $i \neq 0$, $j \neq 0$, $i \neq j$, $r \neq 0$. In light of this, our goal is to show that the affine varieties induced by $I$ and $g$ are the same,
$$\mathbf{V}(I) = \mathbf{V}(g),$$
since this implies that the only common zeros of $f_1, \ldots, f_s$ in $\mathbb{C}^5$ are those with $i = 0$, $j = 0$, $i = j$ or $r = 0$. By the strong Nullstellensatz [Ch 4, Theorem 10, [16]] (see also Theorem A.11 below),


this is true if and only if there exists an $n \in \mathbb{Z}_{\geq 0}$ such that $g^n \in I$, or equivalently, by Theorem A.12, if the reduced Gröbner basis of the saturation $I : g^\infty$ for any given monomial ordering is $\{1\}$ (see Appendix A.2 for background on saturation). To compute a saturation with respect to a single polynomial, it suffices to introduce an extra variable $z$ to represent $1/g$ and consider the augmented ideal
$$\widetilde{I} = \langle f_1, \ldots, f_s, gz - 1\rangle \subseteq \mathbb{C}[i_1, i_2, j_1, j_2, r, z].$$
By Theorem A.12, if $\{1\}$ is the reduced Gröbner basis for $\widetilde{I}$, then it is also the reduced Gröbner basis for $I : g^\infty$.

We use Maple [1] to compute the reduced Gröbner basis $G$ for the ideal $\widetilde{I}$. This computation is done in graded reverse lexicographical order (or "grevlex") with the variable ordering
$$i_1 < i_2 < j_1 < j_2 < z < r,$$
using an implementation of the F4 algorithm [22]; see Appendix C. The result is $G = \{1\}$, thereby concluding the proof.
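For readers unfamiliar with this workflow, here is a toy SymPy sketch (ours; the actual verification is the Maple worksheet in Appendix C.1, and the polynomial below is a made-up stand-in, not the paper's $d^{k,k'}_{i,j}$). It mirrors the same steps: extract the coefficients with respect to the auxiliary variable, append the generator $zg - 1$ encoding the constraint $g \neq 0$, and check that the reduced Gröbner basis is $\{1\}$.

import sympy as sp

x, k, z = sp.symbols('x k z')

# Toy stand-in: suppose P(x, k) vanished for all lattice values of k; then every
# coefficient of P as a polynomial in k must vanish (cf. Lemma B.1).
P = x*k**2 + x*(x - 1)*k + (x**2 - x)
coeff_polys = sp.Poly(P, k).coeffs()            # polynomials in x alone

# Constraint polynomial g encodes the region of interest (here simply x != 0);
# the extra generator z*g - 1 is the trick of Theorem A.12.
g = x
G = sp.groebner(coeff_polys + [z*g - 1], x, z, order='grevlex')
print(list(G))   # [1]: no common zero with x != 0, i.e. the system is inconsistent there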

5 Verifying distinctness in the diagonal algebra

So far we have shown that if $\mathfrak{h}$ contains a distinct matrix $D$ in the sense of Definition 4.5, then $\mathrm{Lie}(\{H_k\}) = \mathfrak{sl}_{\mathbb{Z}^2_{0,N}}(\mathbb{R})$, which implies projective spanning by Proposition 3.11.

The goal of this section is to show that $\mathfrak{h}$ does contain many distinct matrices; in fact, they are 'generic' in the sense that they form an open and dense set in $\mathfrak{h}$. Unfortunately, each individual diagonal matrix $\mathrm{D}^k = [H_k, H_{-k}]$ is certainly not distinct, since there are many degeneracies related to each particular $k$. However, we have the benefit of a large number of such diagonal matrices and can take linear combinations of the $\mathrm{D}^k$ to find a distinct matrix. Specifically, taking linear combinations reduces the distinctness condition (4.3) to a much milder condition on the entire collection $\{\mathrm{D}^k\}$.

Indeed, the main result of this section, and the main effort of the proof, is to show the following sufficient condition on the collection $\{\mathrm{D}^k\}$.

Proposition 5.1. For each $(i,j,\ell,m) \in \mathcal{C}_N$ (defined in (4.2)) with $i + j + \ell + m = 0$, there exists a $k \in \mathbb{Z}^2_{0,N}$ such that
$$\mathrm{D}^k_i + \mathrm{D}^k_j + \mathrm{D}^k_\ell + \mathrm{D}^k_m \neq 0.$$

We now show how Proposition 5.1 implies that "most" elements in the span of $\{\mathrm{D}^k\}$ are in fact distinct.

Lemma 5.2. Assume the result of Proposition 5.1 holds. Then there exists an open and dense set of matrices in $\mathfrak{h} = \mathrm{span}\{\mathrm{D}^k : k \in \mathbb{Z}^2_{0,N}\}$ that are distinct in the sense of Definition 4.5.

Proof. For each fixed $(i,j,\ell,m) \in \mathcal{C}_N$, we denote the vector
$$w_{(i,j,\ell,m)} := \big(\mathrm{D}^k_i + \mathrm{D}^k_j + \mathrm{D}^k_\ell + \mathrm{D}^k_m : k \in \mathbb{Z}^2_{0,N}\big) \in \mathbb{R}^{\mathbb{Z}^2_{0,N}},$$
and for each $(i,j,\ell,m) \in \mathcal{C}_N$, let
$$\Gamma_{(i,j,\ell,m)} = \big\{\alpha \in \mathbb{R}^{\mathbb{Z}^2_{0,N}} : \alpha \cdot w_{(i,j,\ell,m)} \neq 0\big\}.$$


By Proposition 5.1, $w_{(i,j,\ell,m)}$ is always a non-zero vector, and hence $\Gamma_{(i,j,\ell,m)}$ is an open, dense set (being the complement of a hyperplane). Since $\mathcal{C}_N$ is a finite set,
$$\Gamma = \bigcap_{(i,j,\ell,m) \in \mathcal{C}_N} \Gamma_{(i,j,\ell,m)}$$
is also open and dense in $\mathbb{R}^{\mathbb{Z}^2_{0,N}}$. It follows that for any $\alpha \in \Gamma$, the linear combination $\sum_k \alpha_k \mathrm{D}^k$ is distinct.
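The genericity argument here is elementary: a vector avoiding finitely many hyperplanes can be found, for instance, by random sampling. A tiny illustrative Python check (ours, with made-up stand-ins for the vectors $w_{(i,j,\ell,m)}$):

import random

# toy stand-ins for the finitely many non-zero vectors w_(i,j,l,m)
W = [(1.0, -2.0, 0.0), (0.0, 1.0, 3.0), (2.0, 2.0, -1.0)]

def avoids_all_hyperplanes(alpha, W):
    # alpha corresponds to a distinct matrix iff alpha . w != 0 for every w
    return all(abs(sum(a*b for a, b in zip(alpha, w))) > 1e-12 for w in W)

random.seed(0)
alpha = [random.uniform(-1.0, 1.0) for _ in range(3)]
print(avoids_all_hyperplanes(alpha, W))   # True with probability one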

We now briefly summarize how Proposition 5.1 completes the proof of the main results of our paper.

Proof of Theorems 2.13 and 1.1. Proposition 5.1 together with Lemma 5.2 implies that $\mathfrak{h}$ contains a distinct matrix in the sense of Definition 4.5. Then Corollary 4.9 and Proposition 3.11 imply projective hypoellipticity for the Navier-Stokes equations, i.e. Theorem 2.13, from which Theorem 1.1 follows by the results of [10]; see Sections 2.3 and 2.4 for more information.

5.1 The simpler “infinite dimensional” case

Before continuing with the proof of Proposition 5.1, which is rather technical in nature due to the presence of the Galerkin truncation, it is very instructive to first see how the proof goes in the "infinite dimensional" case, when the Galerkin truncation is removed and we instead consider the entire lattice $\mathbb{Z}^2_0$. The actual proof is similar in spirit to the one presented below, just repeated 35 times to cover various edge cases. The proof in this section has an accompanying Maple worksheet that does the algebraic computations and computes the reduced Gröbner bases using exact arithmetic; see Appendix C.2.

The full proof of Proposition 5.1 will make use of the algebraic structure of $\mathrm{D}(i,k,r) = \mathrm{D}^k_i$ (recall the $r$ dependence is implicit) as a piecewise-defined rational function on $(\mathbb{Z}^2_{0,N})^2$. The overall goal is to show that for each side length $r \neq 0$ there do not exist any solutions $(i,j,\ell,m) \in \mathcal{C}_N$ with $i + j + \ell + m = 0$ to the set of Diophantine equations
$$\mathrm{D}^k_i + \mathrm{D}^k_j + \mathrm{D}^k_\ell + \mathrm{D}^k_m = 0, \quad \text{for all } k \in \mathbb{Z}^2_{0,N}.$$

As mentioned in Remark 4.4, without the Galerkin cut-off $\mathrm{D}(i,k,r)$ takes a much simpler rational algebraic form that is not piecewise defined on the lattice,
$$\mathrm{D}(i,k,r) = c_{i,k}c_{i+k,k} - c_{i,k}c_{i-k,k} = \langle i^\perp, k\rangle_r^2 \left(\frac{1}{|k|_r^2} - \frac{1}{|i|_r^2}\right)\left(\frac{1}{|i-k|_r^2} - \frac{1}{|i+k|_r^2}\right).$$

In light of this, our strategy is to extend the rational function
$$W(i,j,\ell,m,k,r) := \mathrm{D}(i,k,r) + \mathrm{D}(j,k,r) + \mathrm{D}(\ell,k,r) + \mathrm{D}(m,k,r)$$
in the 11 variables
$$i = (i_1,i_2),\ j = (j_1,j_2),\ m = (m_1,m_2),\ \ell = (\ell_1,\ell_2),\ k = (k_1,k_2),\ \text{and } r,$$
to the algebraically closed field $\mathbb{C}$, and show that such a system of algebraic equations is inconsistent. In particular, the numerator polynomial
$$P(i,j,\ell,m,k,r) = \mathrm{numer}\big(W(i,j,\ell,m,k,r)\big)$$
belongs to $\mathbb{C}[i,j,\ell,m,k,r]$ (we use the obvious shorthand $\mathbb{C}[i,j,\ell,m,k,r] = \mathbb{C}[i_1,i_2,j_1,j_2,\ell_1,\ell_2,m_1,m_2,k_1,k_2,r]$), has integer coefficients, and vanishes whenever $W$ does. In many ways it is this polynomial, and the fact that it has finite order and integer coefficients, that allows for the use of a computer algebra proof that holds for arbitrary Galerkin truncation.

The goal is to understand the common zeros of the collection of polynomials obtained by evaluating $k$ on various subsets of the lattice. In particular, for a given subset $K \subseteq \mathbb{Z}^2_0$, we consider the ideal of polynomials in the remaining 9 variables $(i,j,\ell,m,r) \in \mathbb{C}^9$ generated by evaluating $P$ at each $k \in K$:
$$I_K := \langle P_k : k \in K\rangle \subset \mathbb{C}[i,j,\ell,m,r], \quad \text{where } P_k(i,j,\ell,m,r) = P(i,j,\ell,m,k,r).$$

We also introduce the two polynomials $h_1, h_2$ describing the $i + j + \ell + m = 0$ constraint,
$$h_1 := i_1 + j_1 + \ell_1 + m_1 \quad \text{and} \quad h_2 := i_2 + j_2 + \ell_2 + m_2,$$
as well as the polynomial
$$g(i,j,\ell,m,r) := r^2 |i|_r^2 |j|_r^2 |\ell|_r^2 |m|_r^2 \big(|i+j|_r^2 + |\ell+m|_r^2\big)\big(|i+\ell|_r^2 + |j+m|_r^2\big)\big(|i+m|_r^2 + |j+\ell|_r^2\big) \tag{5.1}$$
(note the choice of the $|\cdot|_r$ norm here, which is fundamental to ensuring our computational implementation converges for arbitrary $r$), whose non-vanishing implies that
$$r \neq 0, \quad i \neq 0, \quad j \neq 0, \quad \ell \neq 0, \quad m \neq 0$$
and
$$(i+j, \ell+m) \neq (0,0), \quad (i+\ell, j+m) \neq (0,0), \quad (i+m, j+\ell) \neq (0,0),$$
and therefore $g \neq 0$ perfectly encodes the constraint set $\mathcal{C}_N$ along with the assumption that $r \neq 0$. The proof will then be complete if we can find a set $K \subseteq \mathbb{Z}^2_0$ so that the affine variety generated by $I_K, h_1, h_2$ is the same as that generated by $g$, namely
$$\mathbf{V}(I_K, h_1, h_2) = \mathbf{V}(g).$$

In order to show this, it will also be useful to freeze $i, j, \ell, m, r$, treat $P$ as a polynomial in $k = (k_1, k_2)$, and regard the coefficients as polynomials in $\mathbb{C}[i,j,\ell,m,r]$. We denote the collection of these polynomials by
$$\{f_1, \ldots, f_s\} := \mathrm{coeffs}(P, k_1, k_2) \subseteq \mathbb{C}[i,j,\ell,m,r].$$
Note that the number of polynomials in $\{f_1, \ldots, f_s\}$ only depends on the order of the polynomial $P$ in $k$ and is independent of any truncation.

By Lemma B.1 and the observation that $P$ is order 19 in $k$, if there exists $k' \in \mathbb{Z}^2_0$ such that
$$\{k \in \mathbb{Z}^2_0 : |k - k'|_{\ell^\infty} \leq 10\} \subseteq K,$$
then the polynomial ideals are equal,
$$I_K = \langle f_1, \ldots, f_s\rangle.$$
Since we can obviously find such a set $K$, our goal reduces to showing that
$$\mathbf{V}(f_1, \ldots, f_s, h_1, h_2) = \mathbf{V}(g).$$


By Theorem A.11, this is equivalent to showing that the saturated ideal $\langle f_1, \ldots, f_s, h_1, h_2\rangle : g^\infty$ satisfies
$$\langle f_1, \ldots, f_s, h_1, h_2\rangle : g^\infty = \mathbb{C}[i,j,\ell,m,r],$$
or, more practically from a computational standpoint, by Theorem A.12, that $\{1\}$ is the reduced Gröbner basis for the augmented ideal
$$\widetilde{I} = \langle f_1, \ldots, f_s, h_1, h_2, zg - 1\rangle \subseteq \mathbb{C}[i,j,\ell,m,r,z]$$
with $z$ added as an extra variable. This can indeed be checked computationally in Maple. Specifically, we compute the reduced Gröbner basis of $\widetilde{I}$ using the graded reverse lexicographical monomial order (or "grevlex") and the variable ordering
$$i_1 < i_2 < j_1 < j_2 < \ell_1 < \ell_2 < m_1 < m_2 < z < r,$$
using an implementation of the F4 algorithm [22] (see Appendices A and C); the computation verifies that $G = \{1\}$, thereby concluding the proof.
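As an illustration of the first two steps of this computation (ours, in SymPy rather than Maple, and under assumed placeholder conventions for the distorted norm $|\cdot|_r$ and skew product $\langle \cdot^\perp, \cdot\rangle_r$, which may differ from the paper's), the following sketch builds the untruncated $\mathrm{D}(i,k,r)$, extracts its integer-coefficient numerator, and collects the coefficient polynomials in $k$; the actual worksheet of Appendix C.2 applies the same steps to the four-term sum $W$ together with $h_1, h_2$ and the saturation generator $zg - 1$ before computing the Gröbner basis.

import sympy as sp

i1, i2, k1, k2, r = sp.symbols('i1 i2 k1 k2 r')

# Assumed placeholder conventions (not taken from the paper):
nrm2 = lambda a1, a2: a1**2 + r**2*a2**2          # stand-in for |.|_r^2
skew = lambda a1, a2, b1, b2: a1*b2 - a2*b1       # stand-in for <.^perp, .>_r

# The untruncated D(i, k, r) displayed in Section 5.1, with these conventions
Dik = skew(i1, i2, k1, k2)**2 \
    * (1/nrm2(k1, k2) - 1/nrm2(i1, i2)) \
    * (1/nrm2(i1 - k1, i2 - k2) - 1/nrm2(i1 + k1, i2 + k2))

P = sp.fraction(sp.together(Dik))[0]       # numer(D): a polynomial with integer coefficients
fs = sp.Poly(P, k1, k2).coeffs()           # coefficient polynomials in (i1, i2, r)
print(sp.Poly(P, k1, k2).total_degree(), len(fs))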

5.2 Treating the Galerkin truncation: Proof of Proposition 5.1

As already mentioned, the Galerkin truncation makes $\mathrm{D}(i,k,r) = \mathrm{D}^k_i$ a piecewise-defined rational function (depending on the choice of $k$), and so great care must be taken to consider all the possible combinations of algebraic forms on different partitions of the lattice in order to carry out an argument similar to the one given above.

Proof of Proposition 5.1. To begin, it is convenient to write $\mathrm{D}^k_i$ in a properly piecewise-defined way. To do this, define the set
$$S_k := \mathbb{Z}^2_{0,N} \cap \big(\mathbb{Z}^2_{0,N} + k\big) \subseteq \mathbb{Z}^2_{0,N},$$

and denote
$$\mathrm{D}_+(i,k,r) := c_{i,k}c_{i+k,k} = \langle i^\perp,k\rangle_r^2\left(\frac{1}{|k|_r^2} - \frac{1}{|i|_r^2}\right)\left(\frac{1}{|k|_r^2} - \frac{1}{|i+k|_r^2}\right),$$
as well as
$$\mathrm{D}_-(i,k,r) := -\mathrm{D}_+(i,-k,r) = \langle i^\perp,k\rangle_r^2\left(\frac{1}{|k|_r^2} - \frac{1}{|i|_r^2}\right)\left(\frac{1}{|i-k|_r^2} - \frac{1}{|k|_r^2}\right),$$
and
$$\overline{\mathrm{D}}(i,k,r) := \mathrm{D}_+(i,k,r) + \mathrm{D}_-(i,k,r) = \langle i^\perp,k\rangle_r^2\left(\frac{1}{|k|_r^2} - \frac{1}{|i|_r^2}\right)\left(\frac{1}{|i-k|_r^2} - \frac{1}{|i+k|_r^2}\right).$$

Then $\mathrm{D}(i,k,r)$ can be written piecewise as
$$\mathrm{D}(i,k,r) = \begin{cases} \mathrm{D}_+(i,k,r) & i \in S_k \setminus S_{-k} \\ \overline{\mathrm{D}}(i,k,r) & i \in S_k \cap S_{-k} \\ \mathrm{D}_-(i,k,r) & i \in S_{-k} \setminus S_k \\ 0 & i \in \mathbb{Z}^2_{0,N} \setminus (S_k \cup S_{-k}). \end{cases}$$

This means that for each fixed $i \in \mathbb{Z}^2_{0,N}$, $\mathrm{D}(i,k,r)$ is obtained by evaluating exactly one of the four exact rational algebraic functions $\{\mathrm{D}_+, \overline{\mathrm{D}}, \mathrm{D}_-, 0\}$; moreover, it is not hard to show that for each such $i \in \mathbb{Z}^2_{0,N}$ there is always a suitably large set $K$ such that $\mathrm{D}(i,k,r)$ takes the same algebraic form for all $k \in K$. Based on this idea, the following key lemma allows us to treat the piecewise rational behavior of the sum
$$W(i,j,\ell,m,k,r) := \mathrm{D}(i,k,r) + \mathrm{D}(j,k,r) + \mathrm{D}(\ell,k,r) + \mathrm{D}(m,k,r)$$
in a purely algebraic fashion, as long as $N$ is large enough.
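For concreteness, here is a small Python sketch (ours, illustrative only) of the piecewise classification just described: it decides, for a lattice site $i$ and frequency $k$, which of the four algebraic forms applies, purely from membership in $S_k$ and $S_{-k}$.

from itertools import product

N = 4
Z2 = {(a, b) for a, b in product(range(-N, N + 1), repeat=2) if (a, b) != (0, 0)}

def S(k):
    # S_k = Z^2_{0,N} intersected with (Z^2_{0,N} + k)
    return Z2 & {(a + k[0], b + k[1]) for (a, b) in Z2}

def regime(i, k):
    # which of the four algebraic forms D(i, k, r) takes
    in_plus, in_minus = i in S(k), i in S((-k[0], -k[1]))
    if in_plus and in_minus:
        return "Dbar"
    if in_plus:
        return "D+"
    if in_minus:
        return "D-"
    return "0"

# arbitrary sample points near the boundary and near the origin
print(regime((N, N), (1, 1)), regime((-N, -N), (1, 1)), regime((1, 1), (1, 1)))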

Lemma 5.3. Let $a \geq 1$ and suppose that $N > 4(9a+8)$. Then for each fixed $(i,j,\ell,m) \in \mathcal{C}_N$, there exist a $k' \in \mathbb{Z}^2_{0,N-a}$ and four rational functions $\mathrm{D}_1, \mathrm{D}_2, \mathrm{D}_3, \mathrm{D}_4$, each taking one of the four possible forms $\{\mathrm{D}_+, \overline{\mathrm{D}}, \mathrm{D}_-, 0\}$, with at least one of the $\mathrm{D}_i \neq 0$, such that for all $k \in \mathbb{Z}^2_0$ with $|k - k'|_{\ell^\infty} \leq a$, $W$ takes the form
$$W(i,j,\ell,m,k,r) = \mathrm{D}_1(i,k,r) + \mathrm{D}_2(j,k,r) + \mathrm{D}_3(\ell,k,r) + \mathrm{D}_4(m,k,r).$$

Before proving Lemma 5.3 (which is done in the following subsection), let us see how to use it to complete the proof of Proposition 5.1. Indeed, with this lemma in hand, the proof now follows along similar lines to that of Section 5.1 above. Fix $(i,j,\ell,m) \in \mathcal{C}_N$ and $r \neq 0$ with $i + j + \ell + m = 0$ and assume by contradiction that
$$W(i,j,\ell,m,k,r) = 0 \quad \text{for all } k \in \mathbb{Z}^2_{0,N}.$$
By Lemma 5.3, let
$$K := \{k \in \mathbb{Z}^2_{0,N} : |k - k'|_{\ell^\infty} \leq a\},$$
so that for all $k \in K$, $W(i,j,\ell,m,k,r)$ is a given fixed rational function, and therefore we can define, as in Section 5.1, the numerator polynomial
$$P(i,j,\ell,m,k,r) = \mathrm{numer}\big(W(i,j,\ell,m,k,r)\big),$$
and observe that by assumption we have
$$(i,j,\ell,m,r) \in \mathbf{V}(I_K, h_1, h_2), \quad \text{where } I_K := \langle P_k : k \in K\rangle.$$
Therefore the proof is complete if we can show that $\mathbf{V}(I_K, h_1, h_2) = \mathbf{V}(g)$, where $g$ is defined by (5.1). By Lemma B.1 and Lemma 5.3, using that $P$ is at most order 19 in $k$, we can choose $a = 10$ so that $2a > 19$, and therefore the polynomials $\{f_1, \ldots, f_s\} = \mathrm{coeffs}(P, k_1, k_2)$ generate the ideal $I_K$. We can now follow the exact same procedure as in Section 5.1 to show that the reduced Gröbner basis of the extended ideal $\widetilde{I} = \langle f_1, \ldots, f_s, h_1, h_2, zg - 1\rangle$ is $\{1\}$. This can be verified computationally for general algebraic forms

$$W(i,j,\ell,m,k,r) = \mathrm{D}_1(i,k,r) + \mathrm{D}_2(j,k,r) + \mathrm{D}_3(\ell,k,r) + \mathrm{D}_4(m,k,r) \tag{5.2}$$
by computing the Gröbner basis of $\widetilde{I}$ for all possible choices of algebraic functions $(\mathrm{D}_1,\mathrm{D}_2,\mathrm{D}_3,\mathrm{D}_4)$ sampled from the set $\{\mathrm{D}_+, \overline{\mathrm{D}}, \mathrm{D}_-, 0\}$ (excluding $(0,0,0,0)$). Due to the symmetry of the sum (5.2) and the constraint $i + j + \ell + m = 0$, up to relabeling $i, j, \ell, m$ we can disregard the order in which we consider $\mathrm{D}_1,\mathrm{D}_2,\mathrm{D}_3,\mathrm{D}_4$, and therefore the total number of possible ideals we have to check is just the number of ways to draw an unordered sample $\{\mathrm{D}_1,\mathrm{D}_2,\mathrm{D}_3,\mathrm{D}_4\}$ of 4 things from the set $\{\mathrm{D}_+, \overline{\mathrm{D}}, \mathrm{D}_-, 0\}$ with replacement (excluding $\{0,0,0,0\}$). This means that there are
$$\binom{4+4-1}{4} - 1 = \binom{7}{4} - 1 = 34$$
different ideals $\widetilde{I}$ to check. This seemingly tedious task is carried out effortlessly by Maple (see Appendix C.3 for the code), showing that the reduced Gröbner basis for $\widetilde{I}$ in every case is $\{1\}$. This concludes the proof of Proposition 5.1 (and hence of Theorem 1.1).
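The count of cases is a two-line check, independent of the Maple worksheet (the labels below are ours):

from itertools import combinations_with_replacement
from math import comb

# the four possible algebraic forms of each summand in (5.2)
forms = ["D+", "Dbar", "D-", "0"]

# unordered samples of size 4 with replacement, excluding the all-zero case
cases = [c for c in combinations_with_replacement(forms, 4) if c != ("0",)*4]
print(len(cases), comb(4 + 4 - 1, 4) - 1)   # both print 34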


5.2.1 Proof of Lemma 5.3

We now turn to the proof of the key Lemma 5.3.

Proof of Lemma 5.3. Fix an integer $a > 0$ and for each $k' \in \mathbb{Z}^2_{0,N-a}$, let $K^{k'} := \{k \in \mathbb{Z}^2_{0,N} : |k - k'|_{\ell^\infty} \leq a\}$ and define the following sets:
$$\mathcal{G}^{k'}_+ := \bigcap_{k \in K^{k'}} S_k \setminus S_{-k}, \quad \mathcal{G}^{k'}_- := \bigcap_{k \in K^{k'}} S_{-k} \setminus S_k, \quad \overline{\mathcal{G}}^{k'} := \bigcap_{k \in K^{k'}} S_k \cap S_{-k}, \quad \mathcal{G}^{k'}_0 := \bigcap_{k \in K^{k'}} \mathbb{Z}^2_{0,N} \setminus (S_k \cup S_{-k}),$$

where $\mathrm{D}(i,k,r)$ takes a specific algebraic form ($\mathrm{D}_+$, $\mathrm{D}_-$, $\overline{\mathrm{D}}$, $0$, respectively) uniformly for $k$ satisfying $|k - k'|_{\ell^\infty} \leq a$. This means we have a "good" set
$$\mathcal{G}^{k'} := \mathcal{G}^{k'}_+ \cup \mathcal{G}^{k'}_- \cup \overline{\mathcal{G}}^{k'} \cup \mathcal{G}^{k'}_0,$$
where for each fixed $i \in \mathcal{G}^{k'}$, $\mathrm{D}(i,k,r)$ takes a consistent algebraic form for all $|k - k'|_{\ell^\infty} \leq a$, and a remaining "bad" set
$$\mathcal{B}^{k'} := \mathbb{Z}^2_{0,N} \setminus \mathcal{G}^{k'},$$
where $\mathrm{D}(i,k,r)$ does not take a consistent algebraic form for all $|k - k'|_{\ell^\infty} \leq a$ (see Figure 1 for an illustration of these sets in the lattice).

[Figure 1: A schematic illustration of the partition of $\mathbb{Z}^2_{0,N}$ into the good set $\mathcal{G}^{k'} := \mathcal{G}^{k'}_+ \cup \mathcal{G}^{k'}_- \cup \overline{\mathcal{G}}^{k'} \cup \mathcal{G}^{k'}_0$ and the bad set $\mathcal{B}^{k'}$ for a given $k'$. The set $\mathcal{G}^{k'}_+$ is the upper-right L-shaped zone (shaded yellow), the set $\mathcal{G}^{k'}_-$ is the lower-left L-shaped zone (shaded blue), and the set $\overline{\mathcal{G}}^{k'}$ is the center square with two punctures (shaded green). The upper-left and lower-right corners (not shaded) are the two pieces of $\mathcal{G}^{k'}_0$. The bad set consists of several pieces (all shaded red): the connection zones between all of the $\mathcal{G}$ sets, as well as the squares $K^{\pm k'}$, shown here embedded in $\overline{\mathcal{G}}^{k'}$.]


For a given $\{i,j,\ell,m\} \subseteq \mathbb{Z}^2_{0,N}$, our goal is to find a $k'$ such that $\{i,j,\ell,m\} \subseteq \mathcal{G}^{k'}$ and such that not all of $i,j,\ell,m$ belong to $\mathcal{G}^{k'}_0$. This is the content of the following lemma.

Lemma 5.4. Suppose $N > 4(9a+8)$. Then for every $\{i,j,\ell,m\} \subseteq \mathbb{Z}^2_{0,N}$ there exists a $k' \in \mathbb{Z}^2_{0,N}$ with $|k'|_{\ell^\infty} \leq \lfloor N/2\rfloor - a$ such that $\{i,j,\ell,m\} \subseteq \mathcal{G}^{k'}$.

Proof. Some of this is best seen by picture. First, we note that the requirement $|k'|_{\ell^\infty} \leq \lfloor N/2\rfloor - a$ is solely so that the sets $K^{\pm k'}$ stay within the inner-most rectangle (this is to avoid keeping track of much more complicated intersections between bad sets). For simplicity we denote, for each $n \geq 1$, the diagonal element $k'_n = (a + 2(n-1)(a+1),\, a + 2(n-1)(a+1))$ and let $\mathcal{B}_n := \mathcal{B}^{k'_n}$. We note that for each $1 \leq n_1 < n_2$, $\mathcal{B}_{n_1}$ always has a non-trivial overlap with $\mathcal{B}_{n_2}$,
$$\mathcal{B}_{n_1} \cap \mathcal{B}_{n_2} \neq \emptyset, \quad \text{if } 1 \leq n_1 < n_2.$$
However, if $N$ is big enough, any triple intersection is empty (see Figure 2 for a depiction of this):
$$\mathcal{B}_{n_1} \cap \mathcal{B}_{n_2} \cap \mathcal{B}_{n_3} = \emptyset, \quad \text{if } 1 \leq n_1 < n_2 < n_3.$$

By "big enough" we mean that $\max_i |k'_{n_i}|_{\ell^\infty} \leq \lfloor N/2\rfloor - a$, so that the sets $K^{k'_{n_i}} \cup K^{-k'_{n_i}}$ are all disjoint and do not overlap with any of the outer bands.

[Figure 2: An illustration of the empty triple intersection $\mathcal{B}_1 \cap \mathcal{B}_2 \cap \mathcal{B}_3 = \emptyset$. The gaps between sets are exaggerated for visual clarity.]

For each $i \in \mathbb{Z}^2_{0,N}$, denote by $\delta_i$ the delta measure on $\mathbb{Z}^2_{0,N}$ concentrated at $i$, defined for each $A \subseteq \mathbb{Z}^2_{0,N}$ by
$$\delta_i(A) = \begin{cases} 1 & i \in A \\ 0 & i \notin A. \end{cases}$$


Likewise, for any four lattice points $\{i,j,\ell,m\} \subseteq \mathbb{Z}^2_{0,N}$, denote the counting measure
$$\gamma_{i,j,\ell,m} := \delta_i + \delta_j + \delta_\ell + \delta_m,$$
which counts how many of the lattice points $i,j,\ell,m$ belong to a given subset of the lattice. Note that for any $A \subseteq \mathbb{Z}^2_{0,N}$, $0 \leq \gamma_{i,j,\ell,m}(A) \leq 4$.

We now work by contradiction and assume that there exist four lattice points $i,j,\ell,m \in \mathbb{Z}^2_{0,N}$ such that for every $k' \in \mathbb{Z}^2_{0,N}$ with $|k'|_{\ell^\infty} \leq \lfloor N/2\rfloor - a$, at least one of the lattice points $i,j,\ell,m$ belongs to $\mathcal{B}^{k'}$. This implies that for each $n \geq 1$ we have
$$\gamma_{i,j,\ell,m}(\mathcal{B}_n) \geq 1.$$

By the inclusion-exclusion principle (and the fact that the only non-trivial intersections are pairwise intersections), we have that for each $\{i,j,\ell,m\} \subseteq \mathbb{Z}^2_{0,N}$ and $M > 1$,
$$\gamma_{i,j,\ell,m}\Big(\bigcup_{1 \leq n \leq M} \mathcal{B}_n\Big) = \sum_{n=1}^M \gamma_{i,j,\ell,m}(\mathcal{B}_n) - \sum_{1 \leq n_1 < n_2 \leq M} \gamma_{i,j,\ell,m}(\mathcal{B}_{n_1} \cap \mathcal{B}_{n_2}) = \sum_{n=1}^M \gamma_{i,j,\ell,m}(\mathcal{B}_n) - \gamma_{i,j,\ell,m}\Big(\bigcup_{1 \leq n_1 < n_2 \leq M} \mathcal{B}_{n_1} \cap \mathcal{B}_{n_2}\Big) \geq M - 4.$$

Choosing $M = 9$ then implies that
$$\gamma_{i,j,\ell,m}\Big(\bigcup_{1 \leq n \leq M} \mathcal{B}_n\Big) \geq 5,$$
which is clearly a contradiction, since $\gamma_{i,j,\ell,m} \leq 4$. Since we had to take $M = 9$, this means that we need
$$2\big(|k'_9|_{\ell^\infty} + a\big) < N,$$
which is the same as requiring that $N > 4(9a+8)$.

To complete the proof of Lemma 5.3, we may assume without loss of generality that not all of $i,j,\ell,m$ belong to $\mathcal{G}^{k'}_0$, since if that were the case we could replace $k'$ with its horizontal reflection $\tilde{k}' = (-k'_1, k'_2)$ and obtain $\{i,j,\ell,m\} \subseteq \mathcal{G}^{\tilde{k}'}_+ \cup \mathcal{G}^{\tilde{k}'}_- \subseteq \mathcal{G}^{\tilde{k}'}$ (see Figure 3 for a visual proof of this). Then it is clear that Lemma 5.4 implies that there are rational functions $\mathrm{D}_1,\mathrm{D}_2,\mathrm{D}_3,\mathrm{D}_4$, with each $\mathrm{D}_i$ belonging to $\{\mathrm{D}_+, \mathrm{D}_-, \overline{\mathrm{D}}, 0\}$ (excluding the case where they are all zero), such that
$$W(i,j,\ell,m,k,r) = \mathrm{D}_1(i,k,r) + \mathrm{D}_2(j,k,r) + \mathrm{D}_3(\ell,k,r) + \mathrm{D}_4(m,k,r).$$


[Figure 3: An illustration of $\mathcal{G}^{k'}_0 \subseteq \mathcal{G}^{\tilde{k}'}_+ \cup \mathcal{G}^{\tilde{k}'}_-$. The reflection $k' = (k'_1, k'_2) \mapsto (-k'_1, k'_2) = \tilde{k}'$ exchanges the upper and lower corners from right to left. Note we have chosen $k'$ sufficiently large for this, and also to ensure that $K^{\pm k'}$ and $K^{\pm \tilde{k}'}$ stay surrounded entirely by $\overline{\mathcal{G}}^{k'}$ and $\overline{\mathcal{G}}^{\tilde{k}'}$, respectively.]

A Relevant algebraic geometry

A.1 Polynomial ideals and Gröbner bases

In this section we review some of the basic concepts from algebraic geometry that are used in the computer-assisted proof of condition (i). We give a brief summary here for the readers' convenience, as the area may be far removed from many readers' expertise. The exposition here is adapted from Cox-Little-O'Shea [16]; see therein for mathematical details and further explanation.

Given a field $K$, we denote by $K[x_1, \ldots, x_n]$ the ring of polynomials in $n$ variables with coefficients in $K$. First, we recall the notion of a polynomial ideal.

Definition A.1. A subset I ⊂ K[x1, ..., xn] is an ideal if

(i) 0 ∈ I.

(ii) If f, g ∈ I then f + g ∈ I.

(iii) If f ∈ I and h ∈ K[x1, ..., xn], then hf ∈ I.

Given polynomials $f_1, \ldots, f_s$ in $K[x_1, \ldots, x_n]$, we define the ideal generated by these polynomials, denoted $\langle f_1, \ldots, f_s\rangle$, as
$$\langle f_1, \ldots, f_s\rangle = \Big\{\sum_{j=1}^s h_j f_j : h_j \in K[x_1,\ldots,x_n]\Big\}.$$

We recall the notion of a variety, which is the set of points in $K^n$ which solve a given system of polynomials.

Definition A.2. Let $f_1, \ldots, f_s \in K[x_1, \ldots, x_n]$. Then we define
$$\mathbf{V}(f_1,\ldots,f_s) = \{(a_1,\ldots,a_n) \in K^n : f_j(a_1,\ldots,a_n) = 0 \ \forall j,\ 1 \leq j \leq s\}.$$


Recall the notion of a variety associated to an ideal, which is the subset of $K^n$ which is simultaneously a zero for all the polynomials in the ideal $I$.

Definition A.3. Given an ideal $I \subset K[x_1, \ldots, x_n]$, we denote by $\mathbf{V}(I)$ the set
$$\mathbf{V}(I) = \{(a_1,\ldots,a_n) \in K^n : f(a_1,\ldots,a_n) = 0 \ \forall f \in I\}.$$
This is an affine variety and, in particular, $\mathbf{V}(\langle f_1,\ldots,f_s\rangle) = \mathbf{V}(f_1,\ldots,f_s)$ (see e.g. [Proposition 9, page 81, [16]]).

We are now ready to state a result on the non-solvability of a given system of polynomials which is equivalent to Hilbert's Nullstellensatz.

Theorem A.4 (Weak Nullstellensatz (see Ch 1, Theorem 1, [16])). Let $I \subset K[x_1, \ldots, x_n]$ be an ideal over an algebraically closed field $K$ satisfying $\mathbf{V}(I) = \emptyset$. Then $I = K[x_1, \ldots, x_n]$.

The weak Nullstellensatz gives the following necessary and sufficient condition for the inconsistency of a given set of polynomial equations (over an algebraically closed field, e.g. $\mathbb{C}$).

Corollary A.5. Over an algebraically closed field, a given system of polynomial equations $f_1 = \cdots = f_s = 0$ does not have a solution if and only if $1 \in \langle f_1, \ldots, f_s\rangle$.

To computationally verify this, we need the concept of a Gröbner basis, which can in some way be considered an extension of a basis for the nullspace of a matrix in linear algebra. For this we need to fix a reasonable (total) ordering on monomials
$$x^\alpha = x_1^{\alpha_1}\cdots x_n^{\alpha_n},$$
where $\alpha \in \mathbb{Z}^n_{\geq 0}$ is a multi-index. A total ordering on monomials $\{x^\alpha : \alpha \in \mathbb{Z}^n_{\geq 0}\}$ is naturally equivalent to a total ordering on the set of multi-indices $\mathbb{Z}^n_{\geq 0}$; see e.g. [Definition 1, page 55, [16]].

There are many reasonable choices of orderings on $\mathbb{Z}^n_{\geq 0}$. A common choice is the lexicographic ordering, where $\alpha > \beta$ if the leftmost non-zero entry in $\alpha - \beta$ is positive. However, many choices are possible and often preferable from a computational standpoint [Chapter 2, Section 2 of [16]].

A monomial ordering allows one to define the leading term $\mathrm{LT}(f)$ of a polynomial $f$, defined to be the largest term $cx^\alpha$ in that polynomial. In a similar manner, for an ideal $I$ we can define $\mathrm{LT}(I)$, the set of leading terms of all non-zero elements of $I$. With a monomial ordering in hand, we can now define the Gröbner basis of an ideal.

Definition A.6 (Gröbner Basis). Fix a monomial order on $K[x_1, \ldots, x_n]$. A finite subset $G = \{g_1, \ldots, g_k\}$ of a non-zero ideal $I \subset K[x_1, \ldots, x_n]$ is called a Gröbner basis if
$$\langle \mathrm{LT}(I)\rangle = \langle \mathrm{LT}(g_1), \ldots, \mathrm{LT}(g_k)\rangle.$$

One can show that every ideal $I \subset K[x_1, \ldots, x_n]$ has a Gröbner basis, and moreover that any Gröbner basis $G = \{g_1, \ldots, g_k\}$ satisfies $\langle g_1, \ldots, g_k\rangle = I$ (see [Corollary 6, page 78, [16]]). We need to put an additional constraint in order to single out a unique, minimal Gröbner basis.

Definition A.7. Given a monomial ordering, a reduced Gröbner basis for a polynomial ideal $I$ is a Gröbner basis $G$ of $I$ such that

(i) The leading coefficient of $p$ is $1$ for all $p \in G$.


(ii) For all $p \in G$, no monomial of $p$ lies in $\langle \mathrm{LT}(G \setminus \{p\})\rangle$.

Theorem A.8 (Theorem 5, page 93, [16]). For a given monomial ordering and a given non-zero ideal $I \subset K[x_1, \ldots, x_n]$ there exists a unique reduced Gröbner basis.

The following theorem now summarizes the sufficient condition we will use to prove inconsistency.

Theorem A.9 (Sufficient condition for inconsistency (see page 179, [16])). Let $f_1, \ldots, f_s$ be a collection of polynomials in $K[x_1, \ldots, x_n]$. If, for a given monomial ordering, $\{1\}$ is the reduced Gröbner basis of the ideal $\langle f_1, \ldots, f_s\rangle$, then $\mathbf{V}(f_1, \ldots, f_s) = \emptyset$ (i.e. $f_1, \ldots, f_s$ are algebraically inconsistent).

There exist many algorithms for computing the reduced Gröbner basis of a given ideal; we will use the Maple [1] implementation of the F4 algorithm [22]. Computing a reduced Gröbner basis can be very computationally intensive, hence it was important that we take many steps to reduce the complexity of the system.
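As a minimal illustration (ours, using SymPy in place of Maple), the reduced Gröbner basis of an inconsistent system collapses to $\{1\}$, while that of a consistent one does not:

import sympy as sp

x, y = sp.symbols('x y')

# An inconsistent system: x*y = 1 forces x != 0, contradicting x = 0,
# so the reduced Groebner basis collapses to [1] (Theorem A.9 / Corollary A.5).
G = sp.groebner([x*y - 1, x], x, y, order='grevlex')
print(list(G))   # [1]

# A consistent system for comparison: its reduced basis is not [1]
print(list(sp.groebner([x*y - 1, x - y], x, y, order='grevlex')))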

A.2 Saturation of an ideal

We are, however, not quite finished. In our situation we do not have a simple system of polynomials; rather, we have a system of polynomials whose solutions are constrained to stay away from a certain affine variety $\mathbf{V}(g) \subseteq K^n$, characterized as the zero set of a certain polynomial $g \in K[x_1, \ldots, x_n]$. The constraint that solutions stay away from $\mathbf{V}(g)$ can then be characterized by the non-vanishing of $g$. Our goal then is to understand when, for a given ideal $I \subseteq K[x_1, \ldots, x_n]$, we have
$$\mathbf{V}(I) = \mathbf{V}(g),$$
which implies that on the constraint set $\{g \neq 0\}$ the ideal $I$ has no common zeros. Algebraically, the idea is, in essence, to mod out $g$ from the ideal $I$. This is commonly done via what is known as saturation of the ideal $I$ by $g$.

Definition A.10 (Saturation). The saturation of an ideal $I$ of $K[x_1, \ldots, x_n]$ by a polynomial $g$ is defined to be the set
$$I : g^\infty := \{f \in K[x_1,\ldots,x_n] : \text{there exists } m \geq 0 \text{ such that } fg^m \in I\}.$$

Our most important application of the saturation $I : g^\infty$ is the following version of the strong Nullstellensatz.

Theorem A.11 (Strong Nullstellensatz (cf. Ch 4, Theorem 10, [16])). Let $K$ be an algebraically closed field and let $I$ be an ideal in $K[x_1, \ldots, x_n]$. Then for any polynomial $g \in K[x_1, \ldots, x_n]$ we have
$$\mathbf{V}(I) = \mathbf{V}(g)$$
if and only if the saturation $I : g^\infty = K[x_1, \ldots, x_n]$. In other words, by the weak Nullstellensatz,
$$\mathbf{V}(I) = \mathbf{V}(g) \quad \text{if and only if} \quad \mathbf{V}(I : g^\infty) = \emptyset.$$

In order to compute the saturation $I : g^\infty$, we make use of the following trick: introduce a new variable $z$ that represents the inverse of $g$. This gives a convenient computational criterion for $\mathbf{V}(I) = \mathbf{V}(g)$.


Theorem A.12 (cf. Chapter 4, Theorem 14, [16]). Let $f_1, \ldots, f_s$ be a basis for an ideal $I \subseteq K[x_1, \ldots, x_n]$, for an algebraically closed field $K$. Let $z$ be a new variable and define the augmented ideal
$$\widetilde{I} = \langle f_1, \ldots, f_s, zg - 1\rangle \subseteq K[x_1, \ldots, x_n, z].$$
Then
$$\mathbf{V}(I : g^\infty) = \emptyset \quad \text{if and only if} \quad \mathbf{V}(\widetilde{I}) = \emptyset.$$
In particular, by the strong Nullstellensatz and Theorem A.9, if the reduced Gröbner basis of $\widetilde{I}$ is $\{1\}$, then
$$\mathbf{V}(I) = \mathbf{V}(g).$$
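A minimal SymPy illustration of Theorem A.12 (ours; the polynomials are toy examples, not those of the paper):

import sympy as sp

x, y, z = sp.symbols('x y z')

# I = <x^2*y, x*y^2> and g = x*y: here V(I) = V(g) = {x = 0} u {y = 0}, and indeed
# (x*y)^2 lies in I, so the saturation I : g^infinity is the whole ring.  The
# augmented ideal <f1, f2, z*g - 1> detects this: its reduced Groebner basis is [1].
f1, f2, g = x**2*y, x*y**2, x*y
print(list(sp.groebner([f1, f2, z*g - 1], x, y, z, order='grevlex')))   # [1]

# In contrast, for I = <x*y> and g = x we have V(I) != V(g), and the basis is not [1]
print(list(sp.groebner([x*y, z*x - 1], x, y, z, order='grevlex')))      # contains y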

B A key lemma regarding polynomials on the lattice

This lemma is an important technical trick that is used repeatedly throughout the paper. For example, it allows us to deduce that if a polynomial vanishes at sufficiently many integer lattice points in some of its variables, then the polynomials making up the coefficients of those variables must all vanish.

Lemma B.1. Let $P(x_1, \ldots, x_n, y_1, \ldots, y_m)$ be a polynomial in $m+n$ complex variables and suppose that it has degree $J$ viewed as a polynomial in the last $m$ variables. Suppose that $K \subset \mathbb{Z}^m$ contains $\mathbb{Z}^m_{0,q}$ with $2q + 2 > J$, or $\mathbb{Z}^m_0 \cap \big(\mathbb{Z}^m_q + i\big)$ with $2q + 1 > J$ for some $i \in \mathbb{Z}^m_0$. Define the coefficients of $P$ viewed as a polynomial in the last $m$ variables,
$$\{f_1, \ldots, f_s\} = \mathrm{coeffs}(P, y_1, \ldots, y_m).$$
Furthermore, for each $y = (y_1, \ldots, y_m)$ define the polynomial in $\mathbb{C}[x_1, \ldots, x_n]$ by
$$P_y(x_1, \ldots, x_n) = P(x_1, \ldots, x_n, y_1, \ldots, y_m).$$
Then the following polynomial ideals are equivalent:
$$\langle f_1, \ldots, f_s\rangle = \langle P_y : y \in K\rangle.$$

Proof. First, consider the case $m = 1$. Any element of the ideal generated by the $P_y$ for $y \in K$ has the form
$$f(x) = \sum_{y \in K} h_y(x) P_y(x) = \sum_{y\in K} h_y(x) \sum_{0 \leq j \leq J} f_j(x) y^j,$$
and hence $\langle P_y : y \in K\rangle \subset \langle f_1, \ldots, f_s\rangle$. To see the other direction, consider any polynomial $f \in \langle f_1, \ldots, f_s\rangle$,
$$f(x) = \sum_{0\leq j \leq J} h_j(x) f_j(x).$$
Note that since there are at least $J+1$ distinct points in $K$, we can write each coefficient $f_j(x) = \sum_{y\in K} c_y P_y(x)$ for some rational coefficients $c_y$ (by inverting the Vandermonde matrix encoding the equations $P_y(x) = \sum_{0\leq j\leq J} y^j f_j(x)$ for all $y \in K$). Therefore, we can write
$$f(x) = \sum_{0\leq j\leq J} h_j(x) f_j(x) = \sum_{0\leq j\leq J}\sum_{y\in K} h_j(x) c_y P_y(x),$$


and therefore $\langle f_1, \ldots, f_s\rangle \subseteq \langle P_y : y \in K\rangle$.

Consider next the case $m > 1$. As in the $m = 1$ case, it is straightforward to check that $\langle P_y : y \in K\rangle \subseteq \langle f_1, \ldots, f_s\rangle$. The other inclusion can be proved analogously using the $m$-dimensional Vandermonde matrix. One may alternatively see it using an iterative argument as in the statement regarding affine varieties. Indeed, for any $(x_1, \ldots, x_n, y_1, \ldots, y_m)$ consider the set of points $y_{m+1}$ such that $y \in K$. Then we have the relationship
$$P(x_1, \ldots, x_n, y_1, \ldots, y_{m+1}) = g_{J+1}(x_1, \ldots, x_n, y_1, \ldots, y_m)\,y_{m+1}^J + \cdots + g_1(x_1, \ldots, x_n, y_1, \ldots, y_m),$$
for suitable coefficient polynomials $g_1, \ldots, g_{J+1}$. These coefficient polynomials can be solved for in terms of $P(x_1, \ldots, x_n, y_1, \ldots, y_{m+1})$ by the usual Vandermonde matrix for the points $y \in K$. By the assumption on the set $K$, this argument can similarly be iterated until all of the $y_j$'s have been eliminated from the coefficients, proving the statement about the polynomial ideals.
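A toy SymPy instance of the Vandermonde step in the case $n = m = 1$ (ours; the polynomial is a made-up example):

import sympy as sp

x, y = sp.symbols('x y')

# Recover the coefficient polynomials f_j(x) of P(x, y) = sum_j f_j(x) y^j
# from the evaluations P(x, y0) at J + 1 integer points y0 (here J = 2).
P = (x**2 - 1)*y**2 + 3*x*y + (x + 2)
points = [0, 1, 2]

V = sp.Matrix([[y0**j for j in range(3)] for y0 in points])     # Vandermonde matrix
evals = sp.Matrix([P.subs(y, y0) for y0 in points])

recovered = (V.inv() * evals).applyfunc(sp.expand)
print(list(recovered))    # [x + 2, 3*x, x**2 - 1]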


C Computer code

In this section we include printouts of the code and output for the various Maple worksheets needed to prove several key results using techniques from algebraic geometry. The worksheets can be provided upon request.

C.1 "Spanning" Proof of Proposition 4.11

First we begin by loading the Groebner package in Maple. Next, we define some functions for the various coefficients: the distorted lattice norm and skew product with distortion parameter $r$, the Euler coefficient, the algebraic expression for $d$, and the saturation polynomial. We then generate the set of polynomials by calculating the numerator and collecting the coefficients in $k, k'$; the polynomial is of degree 10 in these variables. Finally, we define the variable ordering and run the Gröbner basis algorithm with "grevlex" ordering on the collection of polynomials; the computed reduced Gröbner basis is [1], concluding the proof.

C.2 "Infinite Dimensional" Distinctness Proof

First we begin by loading the Groebner package in Maple. Next, we define some functions for the various coefficients: the distorted lattice norm and skew product with distortion parameter $r$, the Euler coefficient, the algebraic expression for $\mathrm{D}$, and the saturation polynomial. With these functions in hand, we generate our set of polynomials associated to $W$ by calculating the numerator and then collecting the coefficients in $k$; the polynomial is of degree 19 in $k$. We define the variable ordering and then run the Gröbner basis algorithm with "grevlex" ordering on the collection of polynomials, together with the extra condition and the saturating polynomial with the extra variable $z$. Since the Gröbner basis is [1], this concludes the proof. QED

C.3 Full Galerkin Truncated Distinctness Proof

First we begin by loading the Groebner package in Maple. Next, we define some functions for the various coefficients: the distorted lattice norm and skew product with distortion parameter $r$, the Euler coefficient, the various algebraic expressions that $\mathrm{D}$ can take, and the saturation polynomial. Now we generate all possible unordered lists of size 4, sampled with replacement from the list of 4 possible functions, removing the all-zero case. By standard combinatorics (bars and stars counting), there should be 34 of these cases to consider; indeed, we find that there are 34 total cases. With all these functions in hand, we generate the set of all possible polynomials associated to $W$ and its various possible algebraic forms by calculating the numerator. This generates a list of polynomials in the variables $(i, j, \ell, m, k, r)$ for each case defined above; each polynomial is of degree at most 19 in $k$. We then collect the coefficients in $k$, which gives a collection of polynomials in $(i, j, \ell, m, r)$. We add to this collection the linear polynomials $h_1, h_2$ and the saturating polynomial augmented with the extra variable $z$; this generates a list of 34 collections of polynomials. We now define the variable ordering and run the Gröbner basis algorithm with "grevlex" ordering on each collection of polynomials, producing a Gröbner basis for each case (this step takes a little time to run, about 12 minutes total depending on CPU). Since each Gröbner basis is [1], this concludes the proof. QED

References

[1] Maple (2020). Maplesoft, a division of Waterloo Maple Inc., Waterloo, Ontario.
[2] L. Arnold, Random dynamical systems, Dynamical Systems, 1995, pp. 1–43.
[3] L. Arnold, G. Papanicolaou, and V. Wihstutz, Asymptotic analysis of the Lyapunov exponent and rotation number of the random oscillator and applications, SIAM Journal on Applied Mathematics 46 (1986), no. 3, 427–450.
[4] V. I. Arnold and B. A. Khesin, Topological methods in hydrodynamics, 1st ed., Applied Mathematical Sciences, Springer-Verlag New York, 1998.
[5] W. E. Arnoldi, The principle of minimized iterations in the solution of the matrix eigenvalue problem, Quarterly of Applied Mathematics 9 (1951), no. 1, 17–29.
[6] E. I. Auslender and G. N. Milstein, Asymptotic expansion of Lyapunov exponent for linear stochastic systems with small noises, Prikl. Mat. i Mekh. 46 (1982), 358–365 (in Russian).
[7] P. H. Baxendale, Lyapunov exponents and relative entropy for a stochastic flow of diffeomorphisms, Probability Theory and Related Fields 81 (1989), no. 4, 521–554.
[8] P. H. Baxendale, Stochastic averaging and asymptotic behavior of the stochastic Duffing–van der Pol equation, Stochastic Processes and their Applications 113 (2004), no. 2, 235–272.
[9] P. H. Baxendale and L. Goukasian, Lyapunov exponents for small random perturbations of Hamiltonian systems, Annals of Probability (2002), 101–134.
[10] J. Bedrossian, A. Blumenthal, and S. Punshon-Smith, A regularity method for lower bounds on the Lyapunov exponent for stochastic differential equations, arXiv preprint arXiv:2007.15827 (2020).
[11] J. Bedrossian and K. Liss, Quantitative spectral gaps and uniform lower bounds in the small noise limit for Markov semigroups generated by hypoelliptic stochastic differential equations, arXiv:2007.13297 (2020).
[12] G. Boffetta, M. Cencini, M. Falcioni, and A. Vulpiani, Predictability: a way to characterize complexity, Physics Reports 356 (2002), no. 6, 367–474.
[13] T. Bohr, M. H. Jensen, G. Paladin, and A. Vulpiani, Dynamical systems approach to turbulence, Cambridge University Press, 2005.
[14] W. M. Boothby and E. N. Wilson, Determination of the transitivity of bilinear systems, SIAM J. Control Optim. 17 (Mar. 1979), no. 2, 212–221.
[15] A. Carverhill, Furstenberg's theorem for nonlinear stochastic systems, Probability Theory and Related Fields 74 (1987), no. 4, 529–534.
[16] D. Cox, J. Little, and D. O'Shea, Ideals, varieties, and algorithms: an introduction to computational algebraic geometry and commutative algebra, Springer Science & Business Media, 2013.
[17] P. D. Ditlevsen, Turbulence and shell models, Cambridge University Press, 2010.
[18] D. Dolgopyat, V. Kaloshin, L. Koralov, et al., Sample path properties of the stochastic flows, The Annals of Probability 32 (2004), no. 1A, 1–27.
[19] W. E and J. C. Mattingly, Ergodicity for the Navier-Stokes equation with degenerate random forcing: finite-dimensional approximation, Commun. Pure Appl. Math. 54 (Nov. 2001), no. 11, 1386–1402.
[20] D. Elliott, Bilinear control systems: matrices in action, Vol. 169, Springer Science & Business Media, 2009.
[21] J.-C. Faugère, A new efficient algorithm for computing Gröbner bases (F4), J. Pure Appl. Algebra 139 (June 1999), no. 1, 61–88.
[22] J.-C. Faugère, A new efficient algorithm for computing Gröbner bases (F4), Journal of Pure and Applied Algebra 139 (1999), no. 1-3, 61–88.
[23] U. Frisch, A. Pomyalov, I. Procaccia, and S. S. Ray, Turbulence in noninteger dimensions by fractal Fourier decimation, Physical Review Letters 108 (2012), no. 7, 074501.
[24] H. Furstenberg, Noncommuting random products, Transactions of the American Mathematical Society 108 (1963), no. 3, 377–428.
[25] I. Gallagher, Mathematical analysis of a structure-preserving approximation of the bidimensional vorticity equation, Numer. Math. 91 (Apr. 2002), no. 2, 223–236.
[26] E. Gledzer, Hydrodynamic-type system admitting two quadratic integrals of motion, Dokl. Akad. Nauk SSSR, 1973, pp. 1046–1048.
[27] M. Hairer, On Malliavin's proof of Hörmander's theorem, Bulletin des Sciences Mathématiques 135 (2011), no. 6-7, 650–666.
[28] M. Hairer and J. C. Mattingly, Ergodicity of the 2D Navier-Stokes equations with degenerate stochastic forcing, Ann. of Math. 164 (2006), no. 3, 993–1032.
[29] L. Hörmander, Hypoelliptic second order differential equations, Acta Mathematica 119 (1967), no. 1, 147–171.
[30] D. Huybrechts, Complex geometry: an introduction, Springer Science & Business Media, 2005.
[31] P. Imkeller and C. Lederer, An explicit description of the Lyapunov exponents of the noisy damped harmonic oscillator, Dynamics and Stability of Systems 14 (1999), no. 4, 385–405.
[32] V. Jurdjevic, Geometric control theory, Cambridge University Press, 1997.
[33] A. Karimi and M. R. Paul, Extensive chaos in the Lorenz-96 model, Chaos: An Interdisciplinary Journal of Nonlinear Science 20 (2010), no. 4, 043105.
[34] Y. Kifer, A note on integrability of C^r-norms of stochastic flows and applications, Stochastic Mechanics and Stochastic Processes, 1988, pp. 125–131.
[35] Y. Kifer, Ergodic theory of random transformations, Vol. 10, Springer Science & Business Media, 2012.
[36] J. F. C. Kingman, Subadditive ergodic theory, The Annals of Probability 1 (1973), no. 6, 883–899.
[37] H. Kunita, Stochastic flows and stochastic differential equations, Vol. 24, Cambridge University Press, 1997.
[38] F. Ledrappier, Positivity of the exponent for stationary sequences of matrices, Lyapunov Exponents, 1986, pp. 56–73.
[39] E. N. Lorenz, Predictability: a problem partly solved, Proc. Seminar on Predictability, 1996.
[40] V. S. L'vov, E. Podivilov, A. Pomyalov, I. Procaccia, and D. Vandembroucq, Improved shell model of turbulence, Physical Review E 58 (1998), no. 2, 1811.
[41] A. J. Majda, Introduction to turbulent dynamical systems in complex systems, Springer, 2016.
[42] N. Moshchuk and R. Khasminskii, Moment Lyapunov exponent and stability index for linear conservative system with small random perturbation, SIAM Journal on Applied Mathematics 58 (1998), no. 1, 245–256.
[43] E. Ott, B. R. Hunt, I. Szunyogh, A. V. Zimin, E. J. Kostelich, M. Corazza, E. Kalnay, D. Patil, and J. A. Yorke, A local ensemble Kalman filter for atmospheric data assimilation, Tellus A: Dynamic Meteorology and Oceanography 56 (2004), no. 5, 415–428.
[44] E. Pardoux and V. Wihstutz, Lyapunov exponent and rotation number of two-dimensional linear stochastic systems with small diffusion, SIAM Journal on Applied Mathematics 48 (1988), no. 2, 442–457.
[45] D. Pazó, I. G. Szendro, J. M. López, and M. A. Rodríguez, Structure of characteristic Lyapunov vectors in spatiotemporal chaos, Physical Review E 78 (2008), no. 1, 016209.
[46] Y. Pesin and V. Climenhaga, Open problems in the theory of non-uniform hyperbolicity, Discrete Contin. Dyn. Syst. 27 (2010), no. 2, 589–607.
[47] M. A. Pinsky and V. Wihstutz, Lyapunov exponents of nilpotent Itô systems, Stochastics: An International Journal of Probability and Stochastic Processes 25 (1988), no. 1, 43–57.
[48] M. Romito, Ergodicity of the finite dimensional approximation of the 3D Navier–Stokes equations forced by a degenerate noise, J. Stat. Phys. 114 (Jan. 2004), no. 1, 155–177.
[49] G. Royer, Croissance exponentielle de produits Markoviens de matrices aléatoires, Annales de l'IHP Probabilités et Statistiques, 1980, pp. 49–62.
[50] S. Sasaki, On the differential geometry of tangent bundles of Riemannian manifolds, II, Tôhoku Math. J. 14 (1962), no. 2, 146–155.
[51] L. N. Trefethen and D. Bau III, Numerical linear algebra, Vol. 50, SIAM, 1997.
[52] A. Virtser, On products of random matrices and operators, Theory of Probability & Its Applications 24 (1980), no. 2, 367–377.
[53] A. Wilkinson, What are Lyapunov exponents, and why are they interesting?, Bulletin of the American Mathematical Society 54 (2017), no. 1, 79–105.
[54] L.-S. Young, Mathematical theory of Lyapunov exponents, Journal of Physics A: Mathematical and Theoretical 46 (2013), no. 25, 254001.
[55] V. Zeitlin, Finite-mode analogs of 2D ideal hydrodynamics: coadjoint orbits and local canonical structure, Physica D 49 (Apr. 1991), no. 3, 353–362.
[56] V. Y. Zeitlin, Algebraization of 2-D ideal fluid hydrodynamical systems and their finite-mode approximations, Advances in Turbulence 3, 1991, pp. 257–260.