ON SYSTEMS OF FIRST ORDER LINEAR PARTIAL
DIFFERENTIAL EQUATIONS WITH Lp COEFFICIENTS
SORIN MARDARE
Abstract. We establish the existence, the uniqueness, and the stability of the
solution to the Cauchy problem associated to a general class of systems of first
order linear partial differential equations under minimal regularity assumptions
on their coefficients.
Contents
1. Introduction 1
2. Preliminaries 6
3. Matrix-valued differential forms 12
4. Stability and uniqueness 24
5. Local regularization of Pfaff systems 27
6. Existence theory 36
6.1. Necessary conditions 36
6.2. Sufficient conditions 38
7. Appendix 47
Acknowledgements 52
References 52
1. Introduction
A general system of first order linear partial differential equations over an open subset Ω ⊂ R^d, d ≥ 2, has the following form:

$$\sum_{j=1}^{m}\sum_{i=1}^{d} c^{ij}_k \frac{\partial y_j}{\partial x_i} = \sum_{j=1}^{m} b^j_k\, y_j + b_k, \tag{1.1}$$

where k varies in the set {1, 2, ..., r} for some r ∈ N, the unknowns being the functions y1, y2, ..., ym : Ω → R.
In this paper we are interested in systems of the simpler form:

$$\frac{\partial y_j}{\partial x_i} = \sum_{k=1}^{\ell} a^k_{ij}\, y_k + b_{ij}, \tag{1.2}$$

the unknowns being the functions y1, y2, ..., yℓ : Ω → R. This system is in fact not very different from the general system, since (at least pointwise) after some algebraic operations and the elimination of the possibly nonessential unknowns, the system (1.1) reduces to a system of type (1.2). Of course, such a reduction is not always possible in the entire domain Ω, since the algebraic operations and the essential unknowns may not be the same at each point of Ω. But even in this case, it is often possible to reduce the general system (1.1) to several systems of type (1.2) by splitting the domain Ω into several parts.

Date: September 14, 2007.
2000 Mathematics Subject Classification. Primary: 35N10; Secondary: 58A17.
Key words and phrases. First order PDE, Pfaffian system, existence, stability.
A simple example of a general system equivalent to a system of type (1.2) is a system (1.1) where r = dm and the matrix (c^{ij}_k) ∈ M^{dm×dm} is invertible at each point of Ω (c^{ij}_k being the element at the ((j − 1)d + i)-th row and k-th column).
To properly address the question of existence and uniqueness of solutions, we study here the Cauchy problem associated with the system (1.2), i.e., the problem of finding functions (y1, ..., yℓ), defined over the connected open set Ω, that satisfy

$$\frac{\partial y_j}{\partial x_i} = \sum_{k=1}^{\ell} a^k_{ij}\, y_k + b_{ij}, \qquad y_i(x_0) = y^0_i, \tag{1.3}$$

for some given point x0 ∈ Ω and initial values (y^0_1, y^0_2, ..., y^0_ℓ) ∈ R^ℓ.

If we wish this Cauchy problem to have a unique solution, it is natural to require that all the partial derivatives of all the unknowns appear exactly once in the system. Therefore the system in (1.3) will henceforth have dℓ equations; that is, it will be understood that the equations of (1.3) are satisfied for all i ∈ {1, ..., d} and j ∈ {1, ..., ℓ}.
If we denote by Y the row matrix (y1 y2 ... yℓ) ∈ M^{1×ℓ}, by Ai the square matrix Ai := (a^k_{ij}) ∈ M^{ℓ×ℓ}, where a^k_{ij} is the element at the k-th row and j-th column, and finally by Bi the row matrix (b_{i1} b_{i2} ... b_{iℓ}) ∈ M^{1×ℓ}, i ∈ {1, ..., d}, then the Cauchy problem (1.3) can be written in the following matrix form:

$$\frac{\partial Y}{\partial x_i} = Y A_i + B_i \quad \text{for all } i \in \{1, 2, \ldots, d\}, \qquad Y(x_0) = Y^0, \tag{1.4}$$

where Y^0 is the row matrix (y^0_1 y^0_2 ... y^0_ℓ) ∈ M^{1×ℓ}.

The classical theory of existence and uniqueness of solutions for this system applies under the assumption that the coefficients Ai and Bi are (at least) of class C^1 over Ω (see Cartan [5], Malliavin [11], Thomas [17]). This theory was subsequently generalised by Hartman and Wintner [9] to systems with coefficients of class C^0 over Ω, and finally by the author, in [12], to systems with coefficients of class L^∞_loc over Ω, and in [13] to systems with coefficients of class L^p_loc, p > 2, in the particular case where the domain Ω is two-dimensional.

Our objective here is to solve the above system in the general case of domains Ω ⊂ R^d of any dimension, the coefficients of (1.4) then being of class L^p_loc with p > d, and the equations being understood in the distributional sense.
As in the theory of first order linear ordinary differential equations, it is easy to see that solving the Cauchy problem (1.4) is equivalent to solving two simpler systems: if the matrix field X : Ω → M^{ℓ×ℓ} satisfies the homogeneous system

$$\frac{\partial X}{\partial x_i} = X A_i \quad \text{for all } i \in \{1, 2, \ldots, d\}, \qquad X(x_0) = I, \tag{1.5}$$

where I is the identity matrix in M^{ℓ×ℓ}, and the matrix field Z : Ω → M^{1×ℓ} satisfies the system

$$\partial_i Z = B_i X^{-1} \quad \text{for all } i \in \{1, 2, \ldots, d\}, \qquad Z(x_0) = Y^0, \tag{1.6}$$

then a straightforward computation shows that the matrix field Y := ZX satisfies the initial system (1.4).
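The computation behind this claim can be written out in one line: using ∂_iX = XA_i and ∂_iZ = B_iX^{-1},

```latex
\partial_i Y = \partial_i(ZX) = (\partial_i Z)X + Z\,\partial_i X
            = B_i X^{-1} X + Z X A_i = Y A_i + B_i,
```

and Y(x_0) = Z(x_0)X(x_0) = Y^0 I = Y^0, so Y indeed satisfies (1.4).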
Therefore our task in this paper is to study the existence and uniqueness of solutions to these last two systems. The second system (1.6) is relatively easy to solve, and its resolution constitutes a generalization of the classical Poincaré theorem (see Theorem 6.5). The first system (1.5) is considerably more difficult to solve, and its resolution occupies most of this paper (the final result is Theorem 6.4). Combined with Lemma 6.1, which establishes the invertibility of the matrix field X, these two theorems yield the following existence and uniqueness result for the system (1.4):
Theorem 1.1. Let Ω be a connected and simply connected open subset of R^d and let Ai ∈ L^p_loc(Ω; M^{ℓ×ℓ}) and Bi ∈ L^p_loc(Ω; M^{1×ℓ}), p > d, be matrix fields that satisfy, in the distributional sense, the compatibility conditions

$$\frac{\partial A_j}{\partial x_i} + A_i A_j = \frac{\partial A_i}{\partial x_j} + A_j A_i \quad \text{for all } i \neq j, \tag{1.7}$$

$$\frac{\partial B_j}{\partial x_i} + B_i A_j = \frac{\partial B_i}{\partial x_j} + B_j A_i \quad \text{for all } i \neq j. \tag{1.8}$$

Then the problem (1.4) has a unique solution in the space W^{1,p}_loc(Ω; M^{1×ℓ}).
Note that the first compatibility condition, which expresses the commutativity of the second order derivatives of the solution X, ensures the existence of solutions to the system (1.5) (see Theorem 6.4), while the second compatibility condition, which expresses the commutativity of the second order derivatives of Z, ensures the existence of solutions to the system (1.6); indeed, the second compatibility condition implies that

$$\frac{\partial (B_i X^{-1})}{\partial x_j} = \frac{\partial (B_j X^{-1})}{\partial x_i} \quad \text{for all } i \neq j,$$

which are precisely the compatibility conditions needed to apply the Poincaré Theorem 6.5 to the system (1.6).
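For a smooth solution, the necessity of the first condition can be seen by differentiating the equations of (1.5) once more:

```latex
\partial_j(\partial_i X) = \partial_j(X A_i) = (\partial_j X)A_i + X\,\partial_j A_i
                         = X\big(A_j A_i + \partial_j A_i\big),
\qquad
\partial_i(\partial_j X) = X\big(A_i A_j + \partial_i A_j\big).
```

Since ∂_{ij}X = ∂_{ji}X and X is invertible, this forces ∂_jA_i + A_jA_i = ∂_iA_j + A_iA_j, which, after renaming the indices, is exactly (1.7).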
We now describe in more detail how we solve the homogeneous system (1.5) in this paper. In fact, we will consider the more general homogeneous system where the unknown is a field with values in any space of matrices, not necessarily square. More specifically, we henceforth study the system

$$\frac{\partial Y}{\partial x_i} = Y A_i \quad \text{for all } i \in \{1, 2, \ldots, d\}, \qquad Y(x_0) = Y^0, \tag{1.9}$$

where Y : Ω → M^{q×ℓ} is the unknown, Ai : Ω → M^{ℓ×ℓ} are matrix fields of class L^p_loc(Ω), p > d, x0 ∈ Ω, and Y^0 ∈ M^{q×ℓ}. Such systems are called Pfaff systems in most of the mathematical literature, and appear notably in differential geometry to describe the covariant derivatives of vector fields or the equations of a moving frame along a differentiable manifold. From now on, we adopt this terminology whenever referring to systems of type (1.9).
We know how to solve a Pfaff system in the classical setting, that is, when Ai ∈ C^1(Ω). The idea is to integrate the system along smooth curves (contained in Ω) joining x0 to the other points x of the set Ω (the system becomes an ordinary differential equation along each curve). Thanks to the compatibility conditions and to the simple connectedness of Ω, one then sees that the value at x of the solution to this ordinary differential equation does not depend on the curve joining x0 to x. This invariance allows us to define unambiguously a solution Y to the Cauchy problem (1.9) by taking Y(x) to be the value at x of the solution to the ordinary differential equation associated with an arbitrarily chosen curve joining x0 to x.
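This integration-along-curves strategy can be illustrated numerically in the scalar case d = 2, for the hypothetical exact choice Ai = ∂a/∂xi with a(x) = sin(x1 x2): integrating the induced ODE along two different curves joining the same endpoints returns the same value, which also agrees with the closed form Y0 e^{a(x)−a(x0)}.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Scalar Pfaff system in d = 2 with exact coefficients A_i = da/dx_i,
# for the illustrative choice a(x) = sin(x1*x2).
a = lambda x: np.sin(x[0] * x[1])
A = lambda x: np.array([x[1] * np.cos(x[0] * x[1]),   # da/dx1
                        x[0] * np.cos(x[0] * x[1])])  # da/dx2

xa, xb = np.array([0.0, 0.0]), np.array([1.0, 1.0])
Y0 = 2.0

def integrate_along(gamma, dgamma):
    # Along t -> gamma(t), the Pfaff system becomes the ODE
    # Y'(t) = Y(t) * sum_i A_i(gamma(t)) * gamma_i'(t).
    rhs = lambda t, Y: Y * (A(gamma(t)) @ dgamma(t))
    sol = solve_ivp(rhs, (0.0, 1.0), [Y0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

# Curve 1: straight segment from xa to xb.
Y_line = integrate_along(lambda t: xa + t * (xb - xa), lambda t: xb - xa)
# Curve 2: a parabolic arc joining the same endpoints.
Y_arc = integrate_along(lambda t: np.array([t, t**2]),
                        lambda t: np.array([1.0, 2.0 * t]))

# Both values agree with the closed-form solution Y0 * exp(a(xb) - a(xa)).
exact = Y0 * np.exp(a(xb) - a(xa))
print(Y_line, Y_arc, exact)
```

The agreement of `Y_line` and `Y_arc` is exactly the path-independence described above; for merely L^p_loc coefficients this device is no longer available, which is the difficulty addressed in this paper.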
Note that this strategy cannot be applied in our setting, i.e., when Ai ∈ L^p_loc(Ω), since the restriction of an L^p_loc-function to a curve is not well defined. Another difficulty in the resolution of our problem is that the compatibility conditions are nonlinear, since the product of matrices is not commutative (as long as the coefficients Ai are true matrix fields, not merely scalar fields). For instance, we cannot reduce the resolution of Pfaff systems with L^p_loc-coefficients to that of Pfaff systems with C^1-coefficients by mollifying their coefficients with a convolution kernel, since the compatibility conditions are not satisfied by the regularized coefficients, and therefore the classical theory cannot be applied to the regularized Pfaff systems. Nevertheless, we do prove in this paper that a Pfaff system with L^p_loc-coefficients can be approximated locally by Pfaff systems whose coefficients are smooth and satisfy the compatibility conditions (1.7) (understood in the distributional sense), by a new method of approximation (devised in the proof of Theorem 5.1).
The paper is organised as follows. The next section and the Appendix prepare the background for all the other sections; we list in particular several theorems concerning Sobolev spaces and a regularity result for the solution to a variational equation that are needed in the subsequent sections. For the convenience of the reader, proofs are provided for those statements that are less known in the literature under the specific assumptions needed in this paper. Section 3 is a preliminary to Section 5 and may be skipped altogether if the main result of Section 5 concerning the regularization of Pfaff systems (stated in Theorem 5.1) is accepted without proof. Section 4 establishes the stability and the uniqueness of the solution to the Cauchy problem (1.9); it relies on Section 2 and is itself needed in Section 6. Finally, Section 6 establishes necessary and sufficient conditions for the existence of solutions to the Cauchy problem (1.9), based on the results of Sections 4 and 5.
We end this section with a few remarks on the regularity of the coefficients. We assume that the coefficients of the Pfaff system belong to a Lebesgue space of type L^p_loc, p ≥ 1, and wish to determine the minimal value of p for which the Cauchy problem (1.9) has solutions.

The first observation is that the compatibility conditions are meaningful in the distributional setting if and only if Ai ∈ L^2_loc(Ω). However, further regularity of the coefficients is needed if we wish the system (1.9) to be well posed in a distributional setting.
To deduce the optimal regularity for the coefficients Ai, we consider the scalar case, where the coefficients Ai and the unknown Y are ordinary functions on Ω. In this case we have an explicit formula for the solution. Indeed, since the product is commutative in the scalar case, the compatibility conditions take the simpler form

$$\frac{\partial A_j}{\partial x_i} = \frac{\partial A_i}{\partial x_j} \quad \text{for all } i \neq j.$$

Then the Poincaré Theorem (see Theorem 6.5) shows that there exists a primitive of (Ai), i.e., a function a ∈ W^{1,p}_loc(Ω) such that

$$\frac{\partial a}{\partial x_i} = A_i \quad \text{for all } i \in \{1, 2, \ldots, d\}.$$

In turn, this implies that the function Y := C e^a satisfies the system (1.9), the constant C being defined as the solution to the equation Y(x_0) = Y^0.
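The scalar formula Y = C e^a can be verified symbolically; the sketch below does so in d = 3 for an arbitrary illustrative primitive a (the specific choice of a is not from the paper).

```python
import sympy as sp

# Scalar case in d = 3: given a primitive a, set A_i = da/dx_i and check
# that Y = C*exp(a) solves dY/dx_i = A_i * Y (illustrative choice of a).
x1, x2, x3, C = sp.symbols('x1 x2 x3 C')
a = x1 * x2 + sp.sin(x3)
A = [sp.diff(a, v) for v in (x1, x2, x3)]
Y = C * sp.exp(a)

# Each residual dY/dx_i - A_i*Y must simplify to zero.
checks = [sp.simplify(sp.diff(Y, v) - Ai * Y) for v, Ai in zip((x1, x2, x3), A)]
print(checks)  # [0, 0, 0]
```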
If p < d, the problem (1.9) is not well posed in the distributional sense. It suffices to consider the example where Ω is the unit ball in R^d and Ai are the partial derivatives of the function a : Ω → R defined by

$$a(x) = |x|^{-\alpha},$$

with 0 < α < d/p − 1. It is easy to verify that Ai ∈ L^p(Ω). If the solution (in the sense of distributions) to the system (1.9) exists, then it must be of the form Y = C e^a for some constant C. Indeed, if Y is a solution to (1.9) in Ω, then it is also a solution in the smaller domain Ω \ {0}. Since the coefficients Ai are of class C^∞ over Ω \ {0}, we must have Y = C e^a in Ω \ {0}, by the uniqueness of the solution to the Cauchy problem (1.9) posed over Ω \ {0} (which falls in the classical setting). But e^a ∉ L^1_loc(Ω), hence Y is not a distribution over Ω if C ≠ 0.

If p = d, we cannot define the initial value for Y in the Cauchy problem, at least not at an arbitrary point. This is true even if we replace the initial condition Y(x_0) = Y^0 with the more general condition

$$\limsup_{r \searrow 0} \frac{1}{|B_r(x_0)|} \int_{B_r(x_0)} Y(x)\, dx = Y^0,$$

which accommodates noncontinuous solutions. In the above formula, B_r(x_0) denotes the ball of radius r centered at x_0 and |B_r(x_0)| denotes its Lebesgue measure.
Consider the following example: Ω is the ball of radius 1/2 centered at the origin and Ai are the partial derivatives of the function a : Ω → R defined by

$$a(x) = \Big( \log \frac{1}{|x|} \Big)^{\alpha}$$

with 0 < α < 1 − 1/d. It is easy to verify that the Ai belong to the space L^d(Ω). For the same reasons as above, a solution to the problem (1.9) must be of the form Y = C e^a for some constant C. But we cannot define an initial condition at the origin, since

$$\limsup_{r \searrow 0} \frac{1}{|B_r(0)|} \int_{B_r(0)} e^{a(x)}\, dx = +\infty.$$

Nor can we allow Y^0 to be infinite in the Cauchy problem (1.9), since we would lose the uniqueness of the solution (only the sign of the constant C would be known, not its value).
If p > d, then the primitive a of the coefficients Ai is continuous over Ω, since it belongs to W^{1,p}_loc(Ω). Therefore the function e^a is continuous over Ω, and so its value at any point is well defined. Moreover, e^a belongs to the space W^{1,p}_loc(Ω), which shows that the Cauchy problem (1.9) is well posed in the case where p > d.

We conclude from the above discussion on the value of p that the minimal regularity assumption that leads to a well posed Cauchy problem (1.9) is Ai ∈ L^p_loc(Ω) with p > d.
Note that, in the case where p < d, even the problem of finding nontrivial solutions to the Pfaff system ∂Y/∂x_i = Y A_i (the initial condition at x_0 being dropped) is not well posed in the distributional sense, since there may be no solution in L^1_loc(Ω).

By contrast, in the case where p = d, chances are that the Pfaff system alone (again without the initial condition at x_0) has nontrivial solutions. At least this is what the scalar case suggests. Indeed, if the Ai are (scalar) functions in L^d_loc(Ω) that satisfy the compatibility conditions ∂A_j/∂x_i = ∂A_i/∂x_j, then their primitive a belongs to the space W^{1,d}_loc(Ω). Hence e^a ∈ L^1_loc(Ω). In fact we even have

$$e^{|a|^{d/(d-1)}} \in L^1_{loc}(\Omega)$$

(see, e.g., Adams [1] or Gilbarg and Trudinger [8]). However, in the general case where the coefficients of the Pfaff system are (true) matrix fields of class L^d_loc(Ω), the question of whether there exists a nontrivial solution to the Pfaff system remains open.
2. Preliminaries
This section gathers the notation and the definitions used in this article, as well as various preliminary results that will be subsequently needed.

The notations M^{q×ℓ} and M^ℓ := M^{ℓ×ℓ} respectively designate the space of all matrices with q rows and ℓ columns and the space of all square matrices of order ℓ. We equip these spaces with the norm

$$|A| := \sum_{i,j} |A_{ij}|$$

and with the inner product

$$A \cdot B := \sum_{i,j} A_{ij} B_{ij},$$

where A = (A_{ij}) and B = (B_{ij}). The notation A^T designates the transpose of the matrix A. Hence, if A ∈ M^{q×ℓ}, then A^T is a matrix in M^{ℓ×q}. If A ∈ M^ℓ, its cofactor matrix Cof A is defined as the matrix whose element at the i-th row and j-th column is (−1)^{i+j} times the determinant of the matrix in M^{ℓ−1} obtained from A by deleting its i-th row and j-th column. We recall the formula

$$A^{-1} = \frac{1}{\det A} (\mathrm{Cof}\, A)^T$$

for an invertible matrix A ∈ M^ℓ.
In this paper, all scalar, vector, or matrix-valued functions are defined on subsets of the Euclidean space R^d, which is equipped with the norm

$$|x| := \Big( \sum_i x_i^2 \Big)^{1/2},$$

where x = (x_1, ..., x_d) denotes a generic point in R^d. The distance between two subsets A, B of R^d is defined by

$$\mathrm{dist}(A, B) = \inf_{a \in A,\ b \in B} |a - b|,$$
the diameter of a subset A ⊂ R^d is defined by

$$D_A := \sup_{x, y \in A} |x - y|,$$

and the Lebesgue measure of a measurable subset A ⊂ R^d is denoted |A|. The closure in R^d of a subset Ω ⊂ R^d is denoted Ω̄, the complement of Ω ⊂ R^d is denoted Ω^c := R^d \ Ω, and an open ball with radius R centered at x ∈ R^d is denoted B_R(x), or B_R if its center is irrelevant in the subsequent analysis. If Ω ⊂ R^d is a domain with Lipschitz boundary, we denote by ν = (ν_i)_{i=1}^d the exterior unit normal to its boundary. In particular, ν is a vector field of class L^∞ on ∂Ω.

The space of distributions over an open set Ω ⊂ R^d is denoted D′(Ω), and partial derivatives of first and second order are denoted ∂_i = ∂/∂x_i and ∂_{ij} = ∂²/∂x_i∂x_j. The usual Sobolev spaces are denoted W^{m,p}(Ω), or H^m(Ω) := W^{m,2}(Ω) if p = 2, and

$$W^{m,p}_{loc}(\Omega) := \{ f \in \mathcal{D}'(\Omega);\ f \in W^{m,p}(U) \text{ for all open sets } U \Subset \Omega \},$$

where the notation U ⋐ Ω means that the closure of U in R^d is a compact subset of Ω. If p > d, the classes of functions in W^{1,p}(Ω) are identified with their continuous representatives, as in the Sobolev embedding theorem (see, e.g., Adams [1]). For matrix-valued and vector-valued function spaces, we shall use the notation W^{m,p}(Ω; M^{q×ℓ}), W^{m,p}(Ω; R^ℓ), etc.
The Lebesgue spaces L^p(Ω; R^d) and L^p(Ω; M^{q×ℓ}) are equipped with the norms

$$\|v\|_{L^p(\Omega)} = \sum_i \|v_i\|_{L^p(\Omega)} \quad \text{and} \quad \|A\|_{L^p(\Omega)} = \sum_{i,j} \|A_{ij}\|_{L^p(\Omega)},$$

and the Sobolev spaces W^{1,p}(Ω; M^{q×ℓ}) and W^{2,p}(Ω; M^{q×ℓ}) are equipped with the norms

$$\|Y\|_{W^{1,p}(\Omega)} = \|Y\|_{L^p(\Omega)} + \sum_i \|\partial_i Y\|_{L^p(\Omega)} \quad \text{and} \quad \|Y\|_{W^{2,p}(\Omega)} = \|Y\|_{W^{1,p}(\Omega)} + \sum_{i,j} \|\partial_{ij} Y\|_{L^p(\Omega)}.$$
The image of a map f : X → Y is defined by

$$\mathrm{Im}\, f = \{ f(x);\ x \in X \}.$$
If X and Y are two topological vector spaces, we denote by Isom(X; Y) the space of isomorphisms between X and Y, i.e.,

$$\mathrm{Isom}(X; Y) := \{ f : X \to Y;\ f \text{ is linear and bijective, and } f \text{ and } f^{-1} \text{ are continuous} \}.$$

Finally, we make the following convention: all the cuboids ω considered in this paper have edges parallel to the coordinate axes, i.e.,

$$\omega = (a_1, b_1) \times (a_2, b_2) \times \cdots \times (a_d, b_d)$$

for some a_i, b_i ∈ R such that a_i < b_i.
The remainder of this section gathers all the preliminary results that will be subsequently needed. Proofs are provided only for those statements that are less common in the literature in the present form. We begin with the classical implicit function theorem (for a proof see, e.g., Schwartz [15, Chap. 3, Sect. 8] or Deimling [7, Theorem 15.1]):
Theorem 2.1 (implicit function theorem). Let there be given three Banach spaces X, Y, and Z, an open subset Ω of the space X × Y containing a point (x_0, y_0), and a mapping f ∈ C^1(Ω; Z) whose derivative with respect to the second variable satisfies

$$D_y f(x_0, y_0) \in \mathrm{Isom}(Y; Z).$$

Then there exist open balls B_δ(x_0) ⊂ X and B_ε(y_0) ⊂ Y and exactly one map g : B_δ(x_0) → B_ε(y_0) such that

$$f(x, g(x)) = f(x_0, y_0) \quad \text{for all } x \in B_\delta(x_0).$$

Furthermore, the map g is of class C^1.
The next theorem establishes the Morrey inequality with an explicit constant in the particular case where the domain is convex. A more general Morrey inequality can be found in Maz'ja [14, Section 1.4.5]. However, the result below is sufficient for our analysis and has an elementary proof (the only prerequisite is the density of C^1(Ω̄) in W^{1,p}(Ω)), which is given here for the sake of completeness.
Theorem 2.2 (Morrey's inequality). Let Ω ⊂ R^d be a bounded open convex set of diameter D_Ω > 0 and let p > d. Then W^{1,p}(Ω) ⊂ C(Ω̄) and

$$|u(x) - u(y)| \le \frac{2 D_\Omega}{(1 - d/p)\, |\Omega|^{1/p}}\, \|\nabla u\|_{L^p(\Omega)} \tag{2.1}$$

for all u ∈ W^{1,p}(Ω) and all x, y ∈ Ω̄.
Proof. We first prove that (2.1) is valid for all u ∈ C^1(Ω̄). For x, z ∈ Ω̄, let φ : [0, 1] → R be defined by φ(t) = u(x + t(z − x)). Then

$$u(z) - u(x) = \varphi(1) - \varphi(0) = \int_0^1 \varphi'(t)\, dt = \int_0^1 \nabla u(x + t(z - x)) \cdot (z - x)\, dt.$$

By integrating with respect to z over Ω and dividing by |Ω|, we obtain

$$\bar u - u(x) = \frac{1}{|\Omega|} \int_\Omega \int_0^1 \nabla u(x + t(z - x)) \cdot (z - x)\, dt\, dz,$$

where $\bar u := \frac{1}{|\Omega|} \int_\Omega u\, dx$.
By using the Cauchy–Schwarz inequality and the fact that |z − x| ≤ D_Ω for all z ∈ Ω̄, we next obtain

$$|\bar u - u(x)| \le \frac{D_\Omega}{|\Omega|} \int_\Omega \int_0^1 |\nabla u(x + t(z - x))|\, dt\, dz.$$

Furthermore, the Fubini formula and the inequality $\sqrt{v \cdot v} \le \sum_{i=1}^d |v_i|$ for all v = (v_i) ∈ R^d imply that

$$|\bar u - u(x)| \le \frac{D_\Omega}{|\Omega|} \int_0^1 \int_\Omega \sum_{i=1}^d |\partial_i u(x + t(z - x))|\, dz\, dt.$$

Now, by making the change of variables y = x + t(z − x), we obtain

$$|\bar u - u(x)| \le \frac{D_\Omega}{|\Omega|} \int_0^1 \Big( \int_{\Omega_t(x)} \sum_{i=1}^d t^{-d} |\partial_i u(y)|\, dy \Big)\, dt,$$

where

$$\Omega_t(x) = (1 - t)x + t\Omega := \{ x + t(z - x)\ ;\ z \in \Omega \}.$$

Note that Ω_t(x) ⊂ Ω, since Ω is convex, and that |Ω_t(x)| = t^d |Ω|.
Using now the Hölder inequality with $\frac{1}{p} + \frac{1}{q} = 1$ in the last integral gives

$$\begin{aligned}
|\bar u - u(x)| &\le \frac{D_\Omega}{|\Omega|} \int_0^1 t^{-d}\, \|\nabla u\|_{L^p(\Omega_t(x))}\, |\Omega_t(x)|^{\frac{1}{q}}\, dt \\
&\le \frac{D_\Omega}{|\Omega|}\, |\Omega|^{\frac{1}{q}} \int_0^1 t^{-d + \frac{d}{q}}\, \|\nabla u\|_{L^p(\Omega_t(x))}\, dt \\
&\le \frac{D_\Omega}{|\Omega|^{\frac{1}{p}}}\, \|\nabla u\|_{L^p(\Omega)} \int_0^1 t^{-\frac{d}{p}}\, dt \\
&= \frac{D_\Omega}{|\Omega|^{\frac{1}{p}} \big( 1 - \frac{d}{p} \big)}\, \|\nabla u\|_{L^p(\Omega)}\,.
\end{aligned}$$

Finally, using the triangle inequality and the continuity of u on Ω̄, we obtain the inequality (2.1) for u ∈ C^1(Ω̄) and x, y ∈ Ω̄.
Now let u ∈ W^{1,p}(Ω) and let (u_n) ⊂ C^1(Ω̄) be a sequence that converges to u in the W^{1,p}-norm. By passing if necessary to a subsequence, we can assume that u_n → u a.e. in Ω. Choose a point x_0 ∈ Ω such that u_n(x_0) → u(x_0). In particular, (u_n(x_0)) is a Cauchy sequence in R. By using inequality (2.1) for the function (u_n − u_m) and y = x_0, we obtain that, for all x ∈ Ω̄,

$$|(u_n - u_m)(x)| \le |(u_n - u_m)(x_0)| + \frac{2 D_\Omega}{\big(1 - \frac{d}{p}\big) |\Omega|^{\frac{1}{p}}}\, \|\nabla (u_n - u_m)\|_{L^p(\Omega)}.$$

Consequently,

$$\|u_n - u_m\|_{L^\infty(\Omega)} \le |(u_n - u_m)(x_0)| + \frac{2 D_\Omega}{\big(1 - \frac{d}{p}\big) |\Omega|^{\frac{1}{p}}}\, \|\nabla (u_n - u_m)\|_{L^p(\Omega)},$$

which shows that (u_n) is a Cauchy sequence with respect to the L^∞(Ω)-norm. Therefore the sequence (u_n) converges in L^∞(Ω) and, by the uniqueness of the limit in the L^p-norm, we must have u_n → u in the L^∞(Ω)-norm. Since u_n ∈ C(Ω̄), we deduce that u ∈ C(Ω̄). The inequality (2.1) is then obtained by passing to the limit in the inequalities of the same type satisfied by u_n. □
An immediate consequence of Theorem 2.2 is the following:

Corollary 2.3. Let Ω ⊂ R^d be a bounded open convex set of diameter D_Ω > 0 and let p > d. Then, for all u ∈ W^{1,p}(Ω) and all x, y ∈ Ω̄,

$$|u(x) - u(y)| \le \frac{2}{\big(1 - \frac{d}{p}\big) |\Omega_0|^{\frac{1}{p}}}\, D_\Omega^{1 - \frac{d}{p}}\, \|\nabla u\|_{L^p(\Omega)}, \tag{2.2}$$

where Ω_0 := (1/D_Ω) Ω (i.e., Ω_0 is a set of diameter 1 homothetic to Ω).

In particular, if Ω = B_R ⊂ R^d is an open ball of radius R, then there exists a constant C_1 > 0 depending only on (d, p) such that

$$|u(x) - u(y)| \le C_1 R^{1 - \frac{d}{p}} \|\nabla u\|_{L^p(B_R)} \tag{2.3}$$

for all u ∈ W^{1,p}(B_R) and all x, y ∈ B̄_R.
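The Morrey bound (2.1) can be sanity-checked numerically on a concrete domain. The sketch below uses the unit disc (d = 2, p = 4), the hypothetical test function u(x) = sin(3x₁) + x₂², and a Monte-Carlo estimate of ‖∇u‖_{L^p} with the Euclidean norm of the gradient (the bound only holds more easily with this choice of vector norm); the sampled oscillation of u stays well below the right-hand side.

```python
import numpy as np

# Monte-Carlo sanity check of the Morrey bound (2.1) on the unit disc
# (d = 2, p = 4), for the sample function u(x) = sin(3*x1) + x2^2.
rng = np.random.default_rng(0)
d, p = 2, 4
D = 2.0                                   # diameter of the unit disc
area = np.pi                              # |Omega|

# uniform samples in the disc (rejection sampling from the square)
pts = rng.uniform(-1, 1, size=(200000, 2))
pts = pts[np.sum(pts**2, axis=1) <= 1.0]

u = lambda x: np.sin(3 * x[:, 0]) + x[:, 1] ** 2
grad = lambda x: np.stack([3 * np.cos(3 * x[:, 0]), 2 * x[:, 1]], axis=1)

# Monte-Carlo estimate of ||grad u||_{L^p(Omega)}
g = np.linalg.norm(grad(pts), axis=1)     # Euclidean norm of the gradient
grad_Lp = (area * np.mean(g ** p)) ** (1 / p)
bound = 2 * D / ((1 - d / p) * area ** (1 / p)) * grad_Lp

# largest sampled oscillation of u over the disc
vals = u(pts)
osc = vals.max() - vals.min()
print(osc, bound)   # the oscillation stays below the Morrey bound
```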
The Sobolev embedding theorem and the corresponding Sobolev inequalities (see, e.g., Gilbarg and Trudinger [8]) will be used extensively in this paper. We will need in particular the following Sobolev inequality with an explicit constant (the dependence on R is obtained by a scaling argument):

Theorem 2.4 (Sobolev inequality). Let ω_R ⊂ R^d be an open cube with edges of length R > 0 and let p > d. There exists a constant C_2 > 0 depending only on (d, p) such that

$$\|u\|_{L^{2p/(p-2)}(\omega_R)} \le C_2 R^{1 - d/p}\, \|\nabla u\|_{L^2(\omega_R)}$$

for all u ∈ H^1(ω_R) such that u = 0 on at least one lateral face of the cube ω_R.
In the remainder of this section, the set ω is an open cuboid in R^d, that is, a set of the form

$$\omega = (a_1, b_1) \times (a_2, b_2) \times \cdots \times (a_d, b_d),$$

and Γ_i denotes the union of its two lateral faces that are perpendicular to the coordinate axis x_i, that is,

$$\Gamma_i := (\partial\omega \cap \{x_i = a_i\}) \cup (\partial\omega \cap \{x_i = b_i\}).$$
The next result is a generalisation of the Poincaré theorem; it can be found in Bourgain, Brezis, and Mironescu [4] for the case of smooth and simply connected sets. The proof in [4] remains valid for cuboids. For the sake of completeness, we reproduce it here:
Theorem 2.5. Let there be given f_1, f_2, ..., f_d ∈ L^p(ω), where p ≥ 1, that satisfy in the distributional sense the compatibility conditions

$$\partial_i f_j = \partial_j f_i \quad \text{in } \omega$$

for all i, j ∈ {1, 2, ..., d}. Then there exists ψ ∈ W^{1,p}(ω), unique up to an additive constant, such that

$$\partial_i \psi = f_i \quad \text{in } \omega$$

for all i ∈ {1, 2, ..., d}.
Proof. It suffices to prove that for any open cuboid ω′ ⋐ ω, there exists a function ψ ∈ W^{1,p}(ω′) such that

$$\partial_i \psi = f_i \quad \text{in } \omega'.$$

Indeed, from this result we can construct a function ψ ∈ W^{1,p}_loc(ω) that satisfies

$$\partial_i \psi = f_i \quad \text{in } \omega,$$

since the set ω can be written as a union of open cuboids, ω = ∪_{n∈N} ω_n, such that ω_n ⊂ ω_{n+1} ⋐ ω for all n ∈ N. Since f_i ∈ L^p(ω) and the domain ω is a cuboid (thus its boundary is Lipschitz-continuous), the function ψ is in fact in W^{1,p}(ω) (see, e.g., Maz'ja [14], Corollary in Section 1.1.11).

Let ω′ ⋐ ω. By taking the convolution of the functions f_i with a sequence of mollifiers, one can find sequences (f^ε_i) ⊂ C^∞(ω̄′) with the following properties:

$$\partial_i f^\varepsilon_j = \partial_j f^\varepsilon_i \quad \text{in } \omega', \qquad f^\varepsilon_i \to f_i \quad \text{in } L^p(\omega') \text{ as } \varepsilon \to 0,$$

for all i, j ∈ {1, 2, ..., d}. Then the classical Poincaré theorem shows that, for each ε, there exists a function ψ^ε ∈ C^∞(ω̄′) whose derivatives are given by

$$\partial_i \psi^\varepsilon = f^\varepsilon_i \quad \text{in } \omega',$$

and that satisfies in addition

$$\int_{\omega'} \psi^\varepsilon(x)\, dx = 0.$$

Since (f^ε_i) are Cauchy sequences in L^p(ω′), the Poincaré–Wirtinger inequality shows that (ψ^ε) is a Cauchy sequence in the space W^{1,p}(ω′). Since this space is complete, there exists a function ψ ∈ W^{1,p}(ω′) such that ψ^ε → ψ in W^{1,p}(ω′) as ε → 0. This function satisfies the conditions of the theorem on ω′.

Finally, the uniqueness result is an obvious consequence of general distribution theory: if ψ, ψ̃ ∈ W^{1,p}(ω) satisfy ∂_iψ = ∂_iψ̃ = f_i in ω, then ∇(ψ − ψ̃) = 0 in ω; hence ψ − ψ̃ is a constant, since the set ω is connected. □
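In the smooth setting, the primitive of Theorem 2.5 can be recovered explicitly by integrating first along the x₁-axis and then in the x₂-direction. The sketch below illustrates this in d = 2 for a sample closed field (the particular f₁, f₂ are illustrative, not from the paper).

```python
import sympy as sp

# Illustration of Theorem 2.5 in d = 2 for a sample closed field:
# f1, f2 satisfy d1 f2 = d2 f1, and a primitive is recovered by
# psi(x) = int_0^{x1} f1(t, 0) dt + int_0^{x2} f2(x1, t) dt.
x1, x2, t = sp.symbols('x1 x2 t')
f1 = 2 * x1 * x2            # = d/dx1 of (x1^2*x2 + x2^3)
f2 = x1**2 + 3 * x2**2      # = d/dx2 of (x1^2*x2 + x2^3)

# compatibility condition of Theorem 2.5
assert sp.simplify(sp.diff(f2, x1) - sp.diff(f1, x2)) == 0

psi = (sp.integrate(f1.subs({x1: t, x2: 0}), (t, 0, x1))
       + sp.integrate(f2.subs({x2: t}), (t, 0, x2)))
print(sp.expand(psi))                        # x1**2*x2 + x2**3
print(sp.simplify(sp.diff(psi, x1) - f1))    # 0
print(sp.simplify(sp.diff(psi, x2) - f2))    # 0
```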
The following theorem gathers the density results for Sobolev spaces that are needed in this paper. We state them for cuboids in R^d even though they hold true for a much larger class of open sets. However, we want to emphasize that in the case of cuboids these density results have an elementary proof based on extensions of functions in Sobolev spaces.

Theorem 2.6. Let p ∈ [1, +∞), m ≥ 1, and i_1, ..., i_k ∈ {1, ..., d}. Then the following assertions hold true:

(i) The space C^∞(ω̄) is dense in W^{m,p}(ω).
(ii) The space

$$\{ u \in C^\infty(\bar\omega)\ ;\ u = 0 \text{ on } \Gamma_{i_1} \cup \cdots \cup \Gamma_{i_k} \}$$

is dense in the space

$$\{ u \in W^{1,p}(\omega)\ ;\ u = 0 \text{ on } \Gamma_{i_1} \cup \cdots \cup \Gamma_{i_k} \}.$$
As a consequence of this theorem, we now establish the following result about traces of functions in Sobolev spaces:

Corollary 2.7. Let u ∈ W^{2,p}(ω), p ≥ 1, be such that u = 0 on Γ_i for some i ∈ {1, ..., d}. Then ∂_j u = 0 on Γ_i for all j ≠ i.
Proof. By Theorem 2.6(i), there exists a sequence (u_n)_{n∈N} ⊂ C^2(ω̄) such that u_n → u in W^{2,p}(ω). Then u_n|_{∂ω} → u|_{∂ω} in W^{1,p}(∂ω). In particular, u_n|_{Γ_i} → u|_{Γ_i} = 0 in W^{1,p}(Γ_i), so that we have on the one hand

$$\partial_j (u_n|_{\Gamma_i}) \to 0 \quad \text{in } L^p(\Gamma_i) \quad \text{for all } j \neq i.$$

On the other hand, ∂_j u_n → ∂_j u in W^{1,p}(ω), hence (∂_j u_n)|_{∂ω} → (∂_j u)|_{∂ω} in L^p(∂ω). In particular, (∂_j u_n)|_{Γ_i} → (∂_j u)|_{Γ_i} in L^p(Γ_i).

But ∂_j(u_n|_{Γ_i}) = (∂_j u_n)|_{Γ_i} for all n and all j ≠ i, since u_n ∈ C^2(ω̄). Therefore, by passing to the limit n → ∞, we obtain

$$(\partial_j u)|_{\Gamma_i} = 0 \quad \text{for all } j \neq i. \qquad \square$$
Remark 2.8. In other words, we have proved that (∂_j u)|_{Γ_i} = ∂_j(u|_{Γ_i}) = 0, thanks to the fact that we have enough regularity for u. While the second equality is obvious and holds even for u ∈ W^{1,p}(ω), the term (∂_j u)|_{Γ_i} is not well defined if u is only of class W^{1,p} over ω.
We end this section with a regularity result for the solution of a variational equation under assumptions that are relevant to our subsequent analysis. The proof is given in the Appendix (see Theorem 7.2 and Corollary 7.4).
Theorem 2.9. Let p, q ≥ 1 be such that 1/p + 1/q = 1 and fix any indices i_1, i_2, ..., i_k ∈ {1, 2, ..., d}. Let

$$V_p(\omega) := \{ u \in W^{1,p}(\omega);\ u = 0 \text{ on } \Gamma_{i_1} \cup \Gamma_{i_2} \cup \cdots \cup \Gamma_{i_k} \},$$
$$V_q(\omega) := \{ v \in W^{1,q}(\omega);\ v = 0 \text{ on } \Gamma_{i_1} \cup \Gamma_{i_2} \cup \cdots \cup \Gamma_{i_k} \},$$

and let f ∈ L^r(ω), where r ∈ (1, +∞) satisfies 1/r + 1/q ≤ 1 + 1/d.

If a function u ∈ V_p(ω) satisfies the variational equation

$$\int_\omega \nabla u \cdot \nabla v\, dx = \int_\omega f v\, dx \quad \text{for all } v \in V_q(\omega),$$

then u ∈ W^{2,r}(ω) and u satisfies the boundary value problem

$$-\Delta u = f \quad \text{in } \omega,$$
$$u = 0 \quad \text{on } \Gamma_{i_1} \cup \Gamma_{i_2} \cup \cdots \cup \Gamma_{i_k},$$
$$\partial_j u = 0 \quad \text{on } \Gamma_j \text{ for all } j \notin \{i_1, i_2, \ldots, i_k\}.$$
3. Matrix-valued differential forms
In this section, as well as in Section 5, we use differential calculus on an open set Ω ⊂ R^d viewed as a (trivial) differential manifold endowed with the Euclidean metric. The differential manifold Ω being parametrized by the identity map, the covariant basis of the tangent space at any point of Ω is the canonical basis of R^d.

In order to simplify the notation, we make the convention that all the indices appearing in this section belong to the set {1, 2, ..., d}. For instance, we will write $\sum_{i_1 < \cdots < i_k}$ instead of $\sum_{1 \le i_1 < \cdots < i_k \le d}$. We will also write $\sum_{i_1 < \cdots < i_k} \big( \sum_{j \neq i_r} \big)$ instead of $\sum_{1 \le i_1 < \cdots < i_k \le d} \big( \sum_{j \notin \{i_1, \ldots, i_k\}} \big)$.
All the differential forms in this paper are matrix-valued: a k-form, k ≥ 1, is defined by

$$\alpha := \sum_{i_1 < \cdots < i_k} A_{i_1 \ldots i_k}\, dx^{i_1} \wedge \cdots \wedge dx^{i_k},$$

where the coefficients are matrix fields A_{i_1…i_k} : Ω → M^{q×ℓ}, and a 0-form is a field φ : Ω → M^{q×ℓ}. Note that a form of degree k > d vanishes identically in Ω. The space of all k-forms is denoted Λ^k(Ω; M^{q×ℓ}).

We say that a k-form is of class W^{m,p} over Ω, and we write α ∈ W^{m,p}(Ω; Λ^k), if all its coefficients A_{i_1…i_k} are in W^{m,p}(Ω; M^{q×ℓ}). We equip the space W^{m,p}(Ω; Λ^k) with the norm

$$\|\alpha\|_{W^{m,p}(\Omega; \Lambda^k)} := \sum_{i_1 < \cdots < i_k} \|A_{i_1 \ldots i_k}\|_{W^{m,p}(\Omega; M^{q\times\ell})}\,.$$

As usual, we use the notation L^p(Ω; Λ^k) instead of W^{0,p}(Ω; Λ^k) and H^m(Ω; Λ^k) instead of W^{m,2}(Ω; Λ^k). When the degree of a form is unambiguously deduced from the context, we will use the short notation W^{m,p}(Ω) instead of W^{m,p}(Ω; Λ^k). Similar definitions and conventions are used for the spaces W^{m,p}_loc(Ω; Λ^k), C^m(Ω; Λ^k), and C^m(Ω̄; Λ^k).

In all that follows, we make the convention that the indices of the coefficients of a matrix-valued form are always written in increasing order, that ε_{i_1…i_r} is the sign of the permutation (i_1, ..., i_r) of the set {1, 2, ..., r}, and that the indices of ε always form a permutation of the set of integers from 1 to the number of indices. The notation $A_{i_1 \ldots \widehat{i_p} \ldots i_k}$ means that the index i_p is omitted.
On the space of all forms, we define several algebraic and differential operators. For the generic forms

$$\alpha := \sum_{i_1 < \cdots < i_k} A_{i_1 \ldots i_k}\, dx^{i_1} \wedge \cdots \wedge dx^{i_k} \in \Lambda^k(\Omega; M^{q\times\ell}),$$
$$\beta := \sum_{i_1 < \cdots < i_k} B_{i_1 \ldots i_k}\, dx^{i_1} \wedge \cdots \wedge dx^{i_k} \in \Lambda^k(\Omega; M^{q\times\ell}),$$
$$\phi := \sum_{i_1 < \cdots < i_h} \Phi_{i_1 \ldots i_h}\, dx^{i_1} \wedge \cdots \wedge dx^{i_h} \in \Lambda^h(\Omega; M^{\ell\times m}),$$

where k + h ≤ d, we define

- the Hodge star operator ∗ : Λ^k(Ω; M^{q×ℓ}) → Λ^{d−k}(Ω; M^{q×ℓ}),

$$\ast\alpha := \sum_{i_{k+1} < \cdots < i_d} A_{i_1 \ldots i_k}\, \varepsilon_{i_1 \ldots i_k i_{k+1} \ldots i_d}\, dx^{i_{k+1}} \wedge \cdots \wedge dx^{i_d},$$

- the exterior product ∧ : Λ^k(Ω; M^{q×ℓ}) × Λ^h(Ω; M^{ℓ×m}) → Λ^{k+h}(Ω; M^{q×m}),

$$\alpha \wedge \phi := \sum_{i_1 < \cdots < i_{k+h}} \Big( \sum_{1 \le j_1 < \cdots < j_k \le k+h} A_{i_{j_1} \ldots i_{j_k}} \Phi_{i_{j_{k+1}} \ldots i_{j_{k+h}}}\, \varepsilon_{j_1 \ldots j_k j_{k+1} \ldots j_{k+h}} \Big)\, dx^{i_1} \wedge \cdots \wedge dx^{i_{k+h}},$$

- the scalar product · : Λ^k(Ω; M^{q×ℓ}) × Λ^k(Ω; M^{q×ℓ}) → Λ^0(Ω; R),

$$\alpha \cdot \beta := \sum_{i_1 < \cdots < i_k} A_{i_1 \ldots i_k} \cdot B_{i_1 \ldots i_k}\,,$$

- the differential, or exterior derivative, d : Λ^k(Ω; M^{q×ℓ}) → Λ^{k+1}(Ω; M^{q×ℓ}), k ≤ d − 1,
$$d\alpha := \sum_{i_1 < \cdots < i_{k+1}} \Big( \sum_{p=1}^{k+1} (-1)^{p-1}\, \partial_{i_p} A_{i_1 \ldots \widehat{i_p} \ldots i_{k+1}} \Big)\, dx^{i_1} \wedge \cdots \wedge dx^{i_{k+1}},$$

- the codifferential δ : Λ^k(Ω; M^{q×ℓ}) → Λ^{k−1}(Ω; M^{q×ℓ}), k ≥ 1,

$$\delta\alpha := \sum_{i_1 < \cdots < i_{k-1}} \Big( \sum_{j \neq i_r} (-1)^p\, \partial_j A_{i_1 \ldots i_{p-1} j i_p \ldots i_{k-1}} \Big)\, dx^{i_1} \wedge \cdots \wedge dx^{i_{k-1}}$$

(for each j, p denotes the position at which j must be inserted so that the indices i_1 < ⋯ < i_{p−1} < j < i_p < ⋯ < i_{k−1} are in increasing order),

- the Laplacian ∆ : Λ^k(Ω; M^{q×ℓ}) → Λ^k(Ω; M^{q×ℓ}),

$$\Delta\alpha := \sum_{i_1 < \cdots < i_k} (\Delta A_{i_1 \ldots i_k})\, dx^{i_1} \wedge \cdots \wedge dx^{i_k},$$

- the gradient ∇ : Λ^k(Ω; M^{q×ℓ}) → (Λ^k(Ω; M^{q×ℓ}))^d,

$$\nabla\alpha := (\partial_i \alpha)_{i=1}^d, \quad \text{where } \partial_i \alpha := \sum_{i_1 < \cdots < i_k} \partial_i A_{i_1 \ldots i_k}\, dx^{i_1} \wedge \cdots \wedge dx^{i_k},$$

- and the scalar product of gradients,

$$\nabla\alpha \cdot \nabla\beta := \sum_{i=1}^d \partial_i \alpha \cdot \partial_i \beta\,.$$
Remarks 3.1. 1. Note that in general

$$\alpha \wedge \phi \neq (-1)^{kh}\, \phi \wedge \alpha$$

(even if α and φ are square matrix-valued forms), in contrast with what happens in the case of scalar-valued forms.

2. With the exception of the scalar product, all the above operators also have intrinsic definitions (not involving coordinates). In differential geometry, it is customary to start with the intrinsic definitions and only then deduce their equivalents in coordinates. For instance, the codifferential can be defined by means of the Hodge star operator and the exterior differential (both of which have intrinsic definitions) by δ := (−1)^{d(k+1)+1} ∗d∗ (see, e.g., Bleecker [3], Taylor [16]). The Laplacian is usually defined by means of d and δ as ∆_{δ,d} := dδ + δd. This sometimes leads to confusion, since in a Euclidean space we have the formula (see, e.g., Jost [10]) dδ + δd = −∆, where ∆ is the operator defined by taking the Laplacian of all the coefficients of the form (as we did in this section). In this paper, we will use exclusively this last definition of the Laplacian (hence the formula −∆ = dδ + δd), thus avoiding confusion with the operator ∆_{δ,d}.

3. The definitions of the operators d and ∧ translate verbatim to any other basis (associated with a chart parametrizing the manifold), while all the other operators translate into more complicated expressions involving the contravariant components of the metric induced by the chart and the associated Christoffel symbols.
From the definitions of the operators $d$, $\delta$, and $\wedge$, it is easy to see that they satisfy the following properties, which will be useful in what follows.
Theorem 3.2. The operators $d$ and $\delta$ satisfy the identities
\[ dd = 0, \qquad \delta\delta = 0, \qquad d\delta + \delta d = -\Delta. \]
If $\alpha$, $\beta$, $\gamma$ are 1-forms whose coefficients are matrix fields, then
\[ (\alpha\wedge\beta)\wedge\gamma = \alpha\wedge(\beta\wedge\gamma), \qquad d(\alpha\wedge\beta) = d\alpha\wedge\beta - \alpha\wedge d\beta. \]
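The identity $d\delta + \delta d = -\Delta$ of Theorem 3.2 can be checked symbolically; the following SymPy sketch (an illustration, not part of the proof) does so for a scalar 1-form on $\mathbb{R}^2$, where the coordinate definitions above give $\delta\alpha = -(\partial_1 A_1 + \partial_2 A_2)$.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
A1 = sp.Function('A1')(x1, x2)   # coefficients of alpha = A1 dx^1 + A2 dx^2
A2 = sp.Function('A2')(x1, x2)

# d(alpha) = (d1 A2 - d2 A1) dx^1 ^ dx^2 ;  delta(alpha) = -(d1 A1 + d2 A2)
C12 = sp.diff(A2, x1) - sp.diff(A1, x2)
delta_alpha = -(sp.diff(A1, x1) + sp.diff(A2, x2))

# Coefficients of (delta d + d delta) alpha:
# delta(d alpha) has coefficients (d2 C12, -d1 C12), d(delta alpha) has (d1, d2) of delta_alpha.
lhs1 = sp.diff(C12, x2) + sp.diff(delta_alpha, x1)   # dx^1-coefficient
lhs2 = -sp.diff(C12, x1) + sp.diff(delta_alpha, x2)  # dx^2-coefficient

lap = lambda u: sp.diff(u, x1, 2) + sp.diff(u, x2, 2)
print(sp.simplify(lhs1 + lap(A1)), sp.simplify(lhs2 + lap(A2)))  # 0 0
```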
If $\alpha$ is a $k$-form, we let
\[ |\alpha| := \sum_{i_1<\dots<i_k} |A_{i_1\dots i_k}| \]
and, if $\alpha \in W^{1,p}(\Omega;\Lambda^k)$, we define the $L^p$-norm of its gradient by letting
\[ \|\nabla\alpha\|_{L^p(\Omega)} := \|\alpha\|_{W^{1,p}(\Omega)} - \|\alpha\|_{L^p(\Omega)} = \sum_{i=1}^{d} \|\partial_i\alpha\|_{L^p(\Omega)}. \]
It is then easy to see that the following inequalities are satisfied:
(3.1)
\[ |\alpha\cdot\beta| \le \sqrt{\alpha\cdot\alpha}\,\sqrt{\beta\cdot\beta} \le |\alpha|\,|\beta|, \qquad |\alpha\wedge\varphi| \le |\alpha|\,|\varphi|, \]
\[ \big\|\,|\alpha|\,\big\|_{L^p(\Omega)} \le \|\alpha\|_{L^p(\Omega;\Lambda^k)}, \qquad \|d\alpha\|_{L^p(\Omega;\Lambda^{k+1})} \le \|\nabla\alpha\|_{L^p(\Omega)}, \qquad \|\delta\alpha\|_{L^p(\Omega;\Lambda^{k-1})} \le \|\nabla\alpha\|_{L^p(\Omega)}. \]
The remainder of this section gathers various theorems about Sobolev spaces and Stokes-type formulas in the framework of matrix-valued forms. All the forms appearing in these theorems are defined over an open cuboid in $\mathbb{R}^d$, that is,
\[ \omega = (a_1,b_1)\times\dots\times(a_d,b_d), \]
and the notation $\Gamma_i$ designates the union of the two lateral faces of $\omega$ that are perpendicular to the $x_i$-axis, i.e.,
\[ \Gamma_i := (\partial\omega \cap \{x_i = a_i\}) \cup (\partial\omega \cap \{x_i = b_i\}). \]
We begin by translating the Poincaré Theorem 2.5 into the framework of matrix-valued forms:

Theorem 3.3. Let $\alpha \in L^p(\omega;\Lambda^1)$ be a 1-form that satisfies $d\alpha = 0$ in the distributional sense. Then there exists $\varphi \in W^{1,p}(\omega;\Lambda^0)$ such that $d\varphi = \alpha$.

If $\varphi \in W^{1,p}(\omega;\Lambda^k)$ for some $p \ge 1$, we say that $\varphi = 0$ on $\partial\omega$ if $i^*(\varphi) = 0$, where $i^*$ is the pull-back of the canonical inclusion $i : \partial\omega \to \omega$. The following lemma characterises this boundary condition on a form in terms of boundary conditions on its coefficients.

Lemma 3.4. Let $\varphi = \sum_{i_1<\dots<i_k} \Phi_{i_1\dots i_k}\,dx^{i_1}\wedge\dots\wedge dx^{i_k}$ be a $k$-form.
(i) If $\varphi \in W^{1,p}(\omega;\Lambda^k)$, $p \ge 1$, then
\[ *\varphi = 0 \ \text{on}\ \partial\omega \iff \Phi_{i_1\dots i_k} = 0 \ \text{on}\ \Gamma_{i_1}\cup\dots\cup\Gamma_{i_k} \ \text{for all}\ i_1<\dots<i_k. \]
(ii) If $\varphi \in W^{2,p}(\omega)$, $p \ge 1$, then
\[ \begin{cases} *\varphi = 0 \ \text{on}\ \partial\omega,\\ *d\varphi = 0 \ \text{on}\ \partial\omega, \end{cases} \iff
\begin{cases} \Phi_{i_1\dots i_k} = 0 \ \text{on}\ \Gamma_{i_1}\cup\dots\cup\Gamma_{i_k},\\ \partial_j\Phi_{i_1\dots i_k} = 0 \ \text{on}\ \Gamma_j \ \text{for all}\ j\notin\{i_1,\dots,i_k\}, \end{cases} \iff
\begin{cases} \Phi_{i_1\dots i_k} = 0 \ \text{on}\ \Gamma_{i_1}\cup\dots\cup\Gamma_{i_k},\\ \partial_\nu\Phi_{i_1\dots i_k} = 0 \ \text{on}\ \partial\omega\setminus\big(\Gamma_{i_1}\cup\dots\cup\Gamma_{i_k}\big), \end{cases} \]
for all $i_1<\dots<i_k$.
16 SORIN MARDARE
Proof. We first recall that, for all h 6= j, i∗(dxh) = dxh and i∗(dxj) = 0 on Γj (seee.g. Barden and Thomas [2]).
(i) This property follows immediately from the definition of the star operator,the linearity of the pull-back i∗, and the formula
i∗(dxik+1 ∧ · · · ∧ dxid) = i∗(dxik+1) ∧ · · · ∧ i∗(dxid) .
(ii) Thanks to the part (i), we have
∗φ = 0 on ∂ω,
∗dφ = 0 on ∂ω,⇐⇒
Φi1...ik= 0 on Γi1 ∪ · · · ∪ Γik
,
k+1∑
p=1
(−1)p−1∂ipΦi1... bip...ik+1
= 0 on Γi1 ∪ · · · ∪ Γik+1.
Fix a (k + 1)-tuple i1 < i2 < · · · < ik+1 and a number h ∈ 1, 2, . . . , k + 1. Byapplying Corollary 2.7 to each of the qℓ components of the matrix fields Φi1... bip...ik+1
,
we obtain that ∂ipΦi1... bip...ik+1
= 0 on Γihfor all p 6= h. Therefore
k+1∑
p=1
(−1)p−1∂ipΦi1... bip...ik+1
= (−1)h−1∂ihΦi1... bih...ik+1
= 0 on Γih.
Hence ∂ihΦi1... bih...ik+1
= 0 on Γihfor all (k + 1)-tuple i1 < i2 < · · · < ik+1 and
all h ∈ 1, 2, . . . , k + 1. Or this is equivalent with ∂jΦi1...ik= 0 on Γj for all
j 6∈ i1, . . . , ik. ¤
For any $1 \le p \le +\infty$, we define the spaces
\[ W^{1,p}_*(\omega;\Lambda^k) := \{\varphi \in W^{1,p}(\omega)\ ;\ *\varphi = 0 \ \text{on}\ \partial\omega\}, \]
\[ W^{2,p}_*(\omega;\Lambda^k) := \{\varphi \in W^{2,p}(\omega)\ ;\ *\varphi = 0 \ \text{and}\ *d\varphi = 0 \ \text{on}\ \partial\omega\}, \]
and we use the notational convention
\[ H^1_*(\omega;\Lambda^k) := W^{1,2}_*(\omega;\Lambda^k). \]

Remark 3.5. With the notation of Theorem 2.9, Lemma 3.4(i) states that $\varphi := \sum_{i_1<\dots<i_k}\Phi_{i_1\dots i_k}\,dx^{i_1}\wedge\dots\wedge dx^{i_k} \in W^{1,p}_*(\omega;\Lambda^k)$ if and only if all the $q\ell$ components of each coordinate $\Phi_{i_1\dots i_k}$ belong to the space $V_p(\omega)$ associated with the indices $i_1,\dots,i_k$.

Note that for $k \ge 1$, the space $H^1_*(\omega;\Lambda^k)$ endowed with the scalar product
\[ (\alpha,\beta) \mapsto \int_\omega \nabla\alpha\cdot\nabla\beta\,dx \]
is a Hilbert space. The norm associated with this scalar product, denoted $\|\cdot\|_{H^1_*(\omega;\Lambda^k)}$, is equivalent to the $L^2$-norm of the gradient, since we have, for all $\alpha \in H^1_*(\omega;\Lambda^k)$,
\[ (3.2)\qquad \|\alpha\|_{H^1_*(\omega;\Lambda^k)} \le \|\nabla\alpha\|_{L^2(\omega;\Lambda^k)} \le \sqrt{dq\ell\binom{d}{k}}\,\|\alpha\|_{H^1_*(\omega;\Lambda^k)}, \]
where $\binom{d}{k} := \frac{d!}{k!(d-k)!}$.

For any $1 \le p < +\infty$, we also define the space $\mathcal{W}_p(\omega;\Lambda^k)$ as the closure of $C^1(\overline\omega;\Lambda^k)$ in the space $\{\varphi \in L^p(\omega)\ ;\ d\varphi \in L^p(\omega)\}$ with respect to the norm
\[ \|\varphi\| := \|\varphi\|_{L^p(\omega)} + \|d\varphi\|_{L^p(\omega)}. \]
Remark 3.6. Using the fact that $\omega$ is a cuboid, one can show that
\[ \mathcal{W}_p(\omega;\Lambda^k) = \{\varphi \in L^p(\omega)\ ;\ d\varphi \in L^p(\omega)\}. \]
Note that, for a general domain $\omega$, the space $\mathcal{W}_p(\omega;\Lambda^k)$ may be strictly contained in the space $\{\varphi \in L^p(\omega)\ ;\ d\varphi \in L^p(\omega)\}$.

As previously agreed for spaces of forms, we use the shorter notation $W^{1,p}_*(\omega)$ and $\mathcal{W}_p(\omega)$ when the degree of the forms in these spaces is unambiguously deduced from the context. Note that, by Theorem 2.6(i), $W^{1,p}_*(\omega) \subset W^{1,p}(\omega) \subset \mathcal{W}_p(\omega)$ and $W^{1,p}(\omega;\Lambda^0) = \mathcal{W}_p(\omega;\Lambda^0)$.

The next theorem establishes an inequality of Sobolev type for forms in the space $H^1_*(\omega;\Lambda^k)$; its proof is deduced from the corresponding inequality for functions (see Theorem 2.4) by applying the latter to each component of each coordinate (which is a matrix field) of any $k$-form $\alpha$ in $H^1_*(\omega;\Lambda^k)$:

Theorem 3.7. Let $p > d$ and $k \ge 1$, and let $\omega \subset \mathbb{R}^d$ be an open cube with edges of length $R > 0$. There exists a constant $C_2 > 0$ depending only on $(d,p)$ such that
\[ \|\alpha\|_{L^{2p/(p-2)}(\omega)} \le C_2 R^{1-d/p}\,\|\nabla\alpha\|_{L^2(\omega)} \]
for all $\alpha \in H^1_*(\omega;\Lambda^k)$.
In the remainder of this section, $k$ is a fixed, but otherwise arbitrary, integer in $\{1,2,\dots,d\}$.

The following two theorems and corollary establish basic integration formulas in the framework of differential forms. We start with an integration by parts formula, whose expression explains in particular why the operator $\delta$ is called the codifferential. Note that this formula holds true in the particular, but important, case where one of the forms involved has compact support in $\Omega$.

Theorem 3.8. Let $q > 1$ and $1 \le p, r < +\infty$ be such that $\frac1p + \frac1q = 1$ and
\[ \frac1r + \frac1q \le 1 + \frac1d \ \text{if}\ q \neq d, \qquad r > 1 \ \text{if}\ q = d. \]
Then, for all $\alpha \in W^{1,r}(\omega;\Lambda^{k-1}) \cup \mathcal{W}_p(\omega;\Lambda^{k-1})$ and all $\beta \in W^{1,q}_*(\omega;\Lambda^k)$,
\[ (3.3)\qquad \int_\omega d\alpha\cdot\beta\,dx = \int_\omega \alpha\cdot\delta\beta\,dx. \]

Proof. Note that the integrals in (3.3) are well defined in the case where $\alpha \in W^{1,r}(\omega;\Lambda^{k-1})$, thanks to the appropriate Sobolev embeddings applied to each component of each coordinate of the two forms.

It suffices to prove formula (3.3) for $\alpha \in C^1(\overline\omega;\Lambda^{k-1})$ and $\beta \in W^{1,q}_*(\omega;\Lambda^k)$, since the space $C^1(\overline\omega;\Lambda^{k-1})$ is dense in both spaces $\mathcal{W}_p(\omega;\Lambda^{k-1})$ and $W^{1,r}(\omega;\Lambda^{k-1})$.
Using the definition of $d\alpha$ and the Stokes formula, we have on the one hand
\begin{align*}
\int_\omega d\alpha\cdot\beta\,dx
&= \int_\omega \sum_{i_1<\dots<i_k} \Big( \sum_{p=1}^{k} (-1)^{p-1}\,\partial_{i_p}A_{i_1\dots\widehat{i_p}\dots i_k} \Big)\cdot B_{i_1\dots i_k}\,dx\\
&= \sum_{i_1<\dots<i_k}\sum_{p=1}^{k} \int_\omega (-1)^{p-1}\,\partial_{i_p}A_{i_1\dots\widehat{i_p}\dots i_k}\cdot B_{i_1\dots i_k}\,dx\\
&= \sum_{i_1<\dots<i_k}\sum_{p=1}^{k} \Big( \int_\omega (-1)^{p}\,A_{i_1\dots\widehat{i_p}\dots i_k}\cdot\partial_{i_p}B_{i_1\dots i_k}\,dx + \int_{\partial\omega} (-1)^{p-1}\,A_{i_1\dots\widehat{i_p}\dots i_k}\cdot B_{i_1\dots i_k}\,\nu_{i_p}\,d\sigma \Big)\\
&= \int_\omega \sum_{i_1<\dots<i_k}\sum_{p=1}^{k} (-1)^{p}\,A_{i_1\dots\widehat{i_p}\dots i_k}\cdot\partial_{i_p}B_{i_1\dots i_k}\,dx + \sum_{i_1<\dots<i_k}\sum_{p=1}^{k} \int_{\partial\omega} (-1)^{p-1}\,A_{i_1\dots\widehat{i_p}\dots i_k}\cdot B_{i_1\dots i_k}\,\nu_{i_p}\,d\sigma.
\end{align*}
On the other hand, the integrand of the first term of the right-hand side satisfies
\[ \sum_{i_1<\dots<i_k}\sum_{p=1}^{k} (-1)^{p}\,A_{i_1\dots\widehat{i_p}\dots i_k}\cdot\partial_{i_p}B_{i_1\dots i_k}
= \sum_{i_1<\dots<i_{k-1}} \Big( A_{i_1\dots i_{k-1}}\cdot \sum_{j\notin\{i_1,\dots,i_{k-1}\}} (-1)^{p}\,\partial_j B_{i_1\dots i_{p-1}\,j\,i_p\dots i_{k-1}} \Big) = \alpha\cdot\delta\beta \]
(in the last sum, $p$ is the unique index that satisfies $i_{p-1} < j < i_p$), and the second term satisfies
\[ \sum_{i_1<\dots<i_k}\sum_{p=1}^{k} \int_{\partial\omega} (-1)^{p-1}\,A_{i_1\dots\widehat{i_p}\dots i_k}\cdot B_{i_1\dots i_k}\,\nu_{i_p}\,d\sigma
= \sum_{i_1<\dots<i_k}\sum_{p=1}^{k} (-1)^{p-1} \int_{\Gamma_{i_p}} A_{i_1\dots\widehat{i_p}\dots i_k}\cdot B_{i_1\dots i_k}\,\nu_{i_p}\,d\sigma = 0, \]
since $\nu_{i_p} = 0$ on $\partial\omega\setminus\Gamma_{i_p}$ and $B_{i_1\dots i_k} = 0$ on $\Gamma_{i_p}$ (by Lemma 3.4(i)). Therefore the equality previously obtained by the Stokes formula shows that formula (3.3) holds for all $\alpha \in C^1(\overline\omega;\Lambda^{k-1})$ and $\beta \in W^{1,q}_*(\omega;\Lambda^k)$. □
Remarks 3.9. 1. Formula (3.3) is well known in Hodge theory in the case of compactly supported forms.
2. Another proof of Theorem 3.8 uses the formula
\[ \int_\omega \alpha\cdot\delta\beta\,dx = \int_\omega d\alpha\cdot\beta\,dx - \int_{\partial\omega} i^*(\alpha)\wedge i^*({*\beta}) \]
for scalar forms (formula (3.3) for matrix-valued forms is then simply obtained by summation), itself a consequence of the Stokes formula on manifolds combined with the following formulas:
\[ ** = (-1)^{k(d-k)}\,\mathrm{Id} \ \text{on}\ \Lambda^k, \qquad \delta = (-1)^{d(k+1)+1}*d\,* \ \text{on}\ \Lambda^k, \qquad \varphi\cdot\eta\;dx^1\wedge\dots\wedge dx^d = \varphi\wedge{*\eta}, \]
where $\mathrm{Id}$ is the identity mapping on $\Lambda^k$ and $i^*$ is the pull-back of the canonical inclusion $i : \partial\omega \to \omega$ (see, e.g., Bleecker [3] for further details).
The next theorem establishes a formula that will be crucial in proving the last theorem of this section, which in turn will be used extensively in the proof of Theorem 5.2.
Theorem 3.10. Let $\alpha \in W^{1,p}_*(\omega;\Lambda^k)$ and $\beta \in W^{1,q}_*(\omega;\Lambda^k)$, where $p, q \ge 1$ and $\frac1p + \frac1q = 1$. Then
\[ (3.4)\qquad \int_\omega \nabla\alpha\cdot\nabla\beta\,dx = \int_\omega (d\alpha\cdot d\beta + \delta\alpha\cdot\delta\beta)\,dx. \]
Proof. Since $\frac1p + \frac1q = 1$, we have either $p < +\infty$ or $q < +\infty$. Suppose $p < +\infty$. In view of Theorem 2.6(ii) and Lemma 3.4(i), we can approximate the form $\alpha$ (with respect to the $W^{1,p}(\omega)$-norm) by a sequence of forms $\alpha_n \in C^2(\overline\omega)$ such that $*\alpha_n = 0$ on $\partial\omega$. Hence it suffices to prove formula (3.4) for forms
\[ \alpha = \sum_{i_1<\dots<i_k} A_{i_1\dots i_k}\,dx^{i_1}\wedge\dots\wedge dx^{i_k} \quad\text{and}\quad \beta = \sum_{i_1<\dots<i_k} B_{i_1\dots i_k}\,dx^{i_1}\wedge\dots\wedge dx^{i_k}, \]
where $A_{i_1\dots i_k} \in C^2(\overline\omega)$, $B_{i_1\dots i_k} \in W^{1,q}(\omega)$, and
\[ (3.5)\qquad A_{i_1\dots i_k} = B_{i_1\dots i_k} = 0 \ \text{on}\ \Gamma_{i_1}\cup\dots\cup\Gamma_{i_k}. \]
From the definition of the operators $d$ and $\delta$, we first have
\begin{align*}
\int_\omega d\alpha\cdot d\beta\,dx
&= \int_\omega \sum_{i_1<\dots<i_{k+1}} \Big( \sum_{p=1}^{k+1} (-1)^{p-1}\,\partial_{i_p}A_{i_1\dots\widehat{i_p}\dots i_{k+1}} \Big)\cdot\Big( \sum_{s=1}^{k+1} (-1)^{s-1}\,\partial_{i_s}B_{i_1\dots\widehat{i_s}\dots i_{k+1}} \Big)\,dx,\\
\int_\omega \delta\alpha\cdot\delta\beta\,dx
&= \int_\omega \sum_{j_1<\dots<j_{k-1}} \Big( \sum_{t\notin\{j_1,\dots,j_{k-1}\}} (-1)^{p'}\,\partial_{t}A_{j_1\dots j_{p'-1}\,t\,j_{p'}\dots j_{k-1}} \Big)\cdot\Big( \sum_{l\notin\{j_1,\dots,j_{k-1}\}} (-1)^{s'}\,\partial_{l}B_{j_1\dots j_{s'-1}\,l\,j_{s'}\dots j_{k-1}} \Big)\,dx.
\end{align*}
By splitting the sums of products appearing in the above two formulas according to whether $p = s$ or not, and respectively whether $t = l$ or not, we obtain
(3.6)
\begin{align*}
\int_\omega d\alpha\cdot d\beta\,dx
&= \int_\omega \sum_{i_1<\dots<i_{k+1}} \Big( \sum_{p=1}^{k+1} \partial_{i_p}A_{i_1\dots\widehat{i_p}\dots i_{k+1}}\cdot\partial_{i_p}B_{i_1\dots\widehat{i_p}\dots i_{k+1}} \Big)\,dx\\
&\quad + \int_\omega \sum_{i_1<\dots<i_{k+1}} \Big( \sum_{p\neq s} (-1)^{p+s}\,\partial_{i_p}A_{i_1\dots\widehat{i_p}\dots i_{k+1}}\cdot\partial_{i_s}B_{i_1\dots\widehat{i_s}\dots i_{k+1}} \Big)\,dx,
\end{align*}
(3.7)
\begin{align*}
\int_\omega \delta\alpha\cdot\delta\beta\,dx
&= \int_\omega \sum_{j_1<\dots<j_{k-1}} \Big( \sum_{t\notin\{j_1,\dots,j_{k-1}\}} \partial_{t}A_{j_1\dots j_{p'-1}\,t\,j_{p'}\dots j_{k-1}}\cdot\partial_{t}B_{j_1\dots j_{p'-1}\,t\,j_{p'}\dots j_{k-1}} \Big)\,dx\\
&\quad + \int_\omega \sum_{j_1<\dots<j_{k-1}} \Big( \sum_{\substack{t,l\notin\{j_1,\dots,j_{k-1}\}\\ t\neq l}} (-1)^{p'+s'}\,\partial_{t}A_{j_1\dots j_{p'-1}\,t\,j_{p'}\dots j_{k-1}}\cdot\partial_{l}B_{j_1\dots j_{s'-1}\,l\,j_{s'}\dots j_{k-1}} \Big)\,dx.
\end{align*}
Note that the sum of the first integrals of the right-hand sides of these last two formulas is exactly $\int_\omega \nabla\alpha\cdot\nabla\beta\,dx$. So, in order to prove the theorem, it remains to prove that the sum of the second integrals of the right-hand sides vanishes.

Let us fix $i_1<\dots<i_{k+1}$ and $p,s\in\{1,\dots,k+1\}$, $p\neq s$. By applying the Stokes theorem twice, we have
\begin{align*}
\int_\omega \partial_{i_p}A_{i_1\dots\widehat{i_p}\dots i_{k+1}}\cdot\partial_{i_s}B_{i_1\dots\widehat{i_s}\dots i_{k+1}}\,dx
&= -\int_\omega \partial_{i_p i_s}A_{i_1\dots\widehat{i_p}\dots i_{k+1}}\cdot B_{i_1\dots\widehat{i_s}\dots i_{k+1}}\,dx + \int_{\partial\omega} \partial_{i_p}A_{i_1\dots\widehat{i_p}\dots i_{k+1}}\cdot B_{i_1\dots\widehat{i_s}\dots i_{k+1}}\,\nu_{i_s}\,d\sigma\\
&= \int_\omega \partial_{i_s}A_{i_1\dots\widehat{i_p}\dots i_{k+1}}\cdot\partial_{i_p}B_{i_1\dots\widehat{i_s}\dots i_{k+1}}\,dx - \int_{\partial\omega} \partial_{i_s}A_{i_1\dots\widehat{i_p}\dots i_{k+1}}\cdot B_{i_1\dots\widehat{i_s}\dots i_{k+1}}\,\nu_{i_p}\,d\sigma\\
&\quad + \int_{\partial\omega} \partial_{i_p}A_{i_1\dots\widehat{i_p}\dots i_{k+1}}\cdot B_{i_1\dots\widehat{i_s}\dots i_{k+1}}\,\nu_{i_s}\,d\sigma.
\end{align*}
But
\[ \int_{\partial\omega} \partial_{i_s}A_{i_1\dots\widehat{i_p}\dots i_{k+1}}\cdot B_{i_1\dots\widehat{i_s}\dots i_{k+1}}\,\nu_{i_p}\,d\sigma = \int_{\Gamma_{i_p}} \partial_{i_s}A_{i_1\dots\widehat{i_p}\dots i_{k+1}}\cdot B_{i_1\dots\widehat{i_s}\dots i_{k+1}}\,\nu_{i_p}\,d\sigma = 0, \]
since $\nu_{i_p} = 0$ on $\partial\omega\setminus\Gamma_{i_p}$ and $B_{i_1\dots\widehat{i_s}\dots i_{k+1}} = 0$ on $\Gamma_{i_p}$, and
\[ \int_{\partial\omega} \partial_{i_p}A_{i_1\dots\widehat{i_p}\dots i_{k+1}}\cdot B_{i_1\dots\widehat{i_s}\dots i_{k+1}}\,\nu_{i_s}\,d\sigma = \int_{\Gamma_{i_s}} \partial_{i_p}A_{i_1\dots\widehat{i_p}\dots i_{k+1}}\cdot B_{i_1\dots\widehat{i_s}\dots i_{k+1}}\,\nu_{i_s}\,d\sigma = 0, \]
since $\nu_{i_s} = 0$ on $\partial\omega\setminus\Gamma_{i_s}$ and $\partial_{i_p}A_{i_1\dots\widehat{i_p}\dots i_{k+1}} = 0$ on $\Gamma_{i_s}$. That $B_{i_1\dots\widehat{i_s}\dots i_{k+1}} = 0$ on $\Gamma_{i_p}$ is a consequence of (3.5), since $i_p \in \{i_1,\dots,i_{k+1}\}\setminus\{i_s\}$ (remember that we have fixed $p\neq s$). For similar reasons, $A_{i_1\dots\widehat{i_p}\dots i_{k+1}} = 0$ on $\Gamma_{i_s}$, which implies that $\partial_j A_{i_1\dots\widehat{i_p}\dots i_{k+1}} = 0$ on $\Gamma_{i_s}$ for all $j\neq i_s$, and in particular for $j = i_p$, as was needed to obtain the second formula.

Summing up the above arguments, we have proved that
\[ \int_\omega \partial_{i_p}A_{i_1\dots\widehat{i_p}\dots i_{k+1}}\cdot\partial_{i_s}B_{i_1\dots\widehat{i_s}\dots i_{k+1}}\,dx = \int_\omega \partial_{i_s}A_{i_1\dots\widehat{i_p}\dots i_{k+1}}\cdot\partial_{i_p}B_{i_1\dots\widehat{i_s}\dots i_{k+1}}\,dx. \]
Now let $t := i_s$, $l := i_p$, and let $j_1<\dots<j_{k-1}$ be such that $\{j_1,\dots,j_{k-1}\} = \{i_1,\dots,i_{k+1}\}\setminus\{i_s,i_p\}$. Using this notation in the right-hand side of the above equation, we deduce that
\[ \int_\omega \partial_{i_p}A_{i_1\dots\widehat{i_p}\dots i_{k+1}}\cdot\partial_{i_s}B_{i_1\dots\widehat{i_s}\dots i_{k+1}}\,dx = \int_\omega \partial_{t}A_{j_1\dots j_{p'-1}\,t\,j_{p'}\dots j_{k-1}}\cdot\partial_{l}B_{j_1\dots j_{s'-1}\,l\,j_{s'}\dots j_{k-1}}\,dx, \]
where $p'$ and $s'$ are the indices for which $j_{p'-1} < t < j_{p'}$ and $j_{s'-1} < l < j_{s'}$.

If $p < s$, then $i_p \in \{i_1,\dots,i_{s-1}\}$ and $i_s \notin \{i_1,\dots,i_p\}$; therefore we have in that case $p' = s-1$ and $s' = p$. Otherwise, if $p > s$, then $i_p > i_s$, and we have in that case $p' = s$ and $s' = p-1$. In either case, we have $p + s = p' + s' + 1$, and the last relation implies that
\[ (-1)^{p+s}\int_\omega \partial_{i_p}A_{i_1\dots\widehat{i_p}\dots i_{k+1}}\cdot\partial_{i_s}B_{i_1\dots\widehat{i_s}\dots i_{k+1}}\,dx = -(-1)^{p'+s'}\int_\omega \partial_{t}A_{j_1\dots j_{p'-1}\,t\,j_{p'}\dots j_{k-1}}\cdot\partial_{l}B_{j_1\dots j_{s'-1}\,l\,j_{s'}\dots j_{k-1}}\,dx. \]
Note that summing with respect to $i_1<\dots<i_{k+1}$ and $p\neq s$ in the left-hand side of this equation corresponds to summing with respect to $j_1<\dots<j_{k-1}$ and $t,l\notin\{j_1,\dots,j_{k-1}\}$, $t\neq l$, in the right-hand side. This observation shows that the sum of the second integrals of the right-hand sides of equations (3.6) and (3.7) vanishes. This completes the proof of the theorem. □
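Identity (3.4) can also be sanity-checked symbolically (an illustration, not part of the paper's argument) on the unit square $\omega = (0,1)^2$ for scalar 1-forms, where, by Lemma 3.4(i), $*\alpha = 0$ on $\partial\omega$ means that $A_i = 0$ on $\Gamma_i$. The following SymPy sketch uses hypothetical polynomial coefficients satisfying these boundary conditions.

```python
import sympy as sp

x, y = sp.symbols('x y')

# Hypothetical 1-forms on the unit square omega = (0,1)^2 whose coefficients
# satisfy *alpha = *beta = 0 on the boundary: A1, B1 vanish on {x = 0, 1}
# and A2, B2 vanish on {y = 0, 1}.
A1, A2 = x*(1 - x)*y, x**2*y*(1 - y)
B1, B2 = x*(1 - x)*(y**2 + 1), (x + 2)*y*(1 - y)

integrate = lambda f: sp.integrate(sp.integrate(f, (x, 0, 1)), (y, 0, 1))

# grad(alpha) . grad(beta) = sum_i (d_i A1 d_i B1 + d_i A2 d_i B2)
grads = sum(sp.diff(A1, v)*sp.diff(B1, v) + sp.diff(A2, v)*sp.diff(B2, v)
            for v in (x, y))

# d(alpha) . d(beta) + delta(alpha) . delta(beta) for 1-forms on R^2
d_term = (sp.diff(A2, x) - sp.diff(A1, y)) * (sp.diff(B2, x) - sp.diff(B1, y))
delta_term = (sp.diff(A1, x) + sp.diff(A2, y)) * (sp.diff(B1, x) + sp.diff(B2, y))

print(sp.simplify(integrate(grads) - integrate(d_term + delta_term)))  # 0
```

Dropping the boundary conditions (e.g., replacing $B_1$ by $1$) makes the two integrals differ, which is consistent with the role the boundary terms play in the proof above.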
Remark 3.11. The proofs of Theorems 3.8 and 3.10 show that these theorems still hold true if one of the forms belongs to $\mathcal{D}(\omega)$ (the space of $C^\infty$-forms with compact support in $\omega$) while the other form belongs to $W^{1,1}_{loc}(\omega)$. Therefore, if $\alpha \in W^{2,1}_{loc}(\omega;\Lambda^k)$, we have
\[ (3.8)\qquad \int_\omega (-\Delta\alpha)\cdot\beta\,dx = \int_\omega \nabla\alpha\cdot\nabla\beta\,dx = \int_\omega (d\alpha\cdot d\beta + \delta\alpha\cdot\delta\beta)\,dx = \int_\omega (\delta d + d\delta)\alpha\cdot\beta\,dx \]
for all $\beta \in \mathcal{D}(\omega;\Lambda^k)$, hence
\[ -\Delta = \delta d + d\delta \quad\text{on } W^{2,1}_{loc}(\omega;\Lambda^k). \]
In this way, we retrieve the equality $-\Delta = \delta d + d\delta$ of Theorem 3.2, which otherwise can be established solely by algebraic calculation (see, e.g., Jost [10]).
The next corollary asserts that the first equality of (3.8) still holds if the two forms belong to spaces of type $W^{m,p}_*(\omega;\Lambda^k)$. The proof given below is based on the identity $-\Delta = \delta d + d\delta$ (derived algebraically) and on Theorems 3.8 and 3.10. A more elementary proof uses Lemma 3.4 to derive the corresponding formula for the coefficients of the two forms (the sought equality is then simply obtained by summation); this second proof is not given here.

Corollary 3.12. Let $1 < r, q < +\infty$ be such that $\frac1r + \frac1q \le 1 + \frac1d$. Then, for all $\alpha \in W^{2,r}_*(\omega;\Lambda^k)$ and all $\beta \in W^{1,q}_*(\omega;\Lambda^k)$,
\[ \int_\omega (-\Delta\alpha)\cdot\beta\,dx = \int_\omega \nabla\alpha\cdot\nabla\beta\,dx. \]

Proof. By using the formula $-\Delta = d\delta + \delta d$ and Theorems 3.8 and 3.10, we deduce that
\begin{align*}
\int_\omega (-\Delta\alpha)\cdot\beta\,dx &= \int_\omega d\delta\alpha\cdot\beta\,dx + \int_\omega \delta d\alpha\cdot\beta\,dx\\
&= \int_\omega \delta\alpha\cdot\delta\beta\,dx + \int_\omega d\alpha\cdot d\beta\,dx = \int_\omega \nabla\alpha\cdot\nabla\beta\,dx.
\end{align*}
The second equality was obtained by applying Theorem 3.8 twice, first to the couple of forms $(\delta\alpha,\beta) \in W^{1,r}(\omega;\Lambda^{k-1})\times W^{1,q}_*(\omega;\Lambda^k)$, then to $(\beta, d\alpha) \in W^{1,q}(\omega;\Lambda^k)\times W^{1,r}_*(\omega;\Lambda^{k+1})$, while the third equality was obtained by applying Theorem 3.10 to $(\alpha,\beta) \in W^{1,p}_*(\omega;\Lambda^k)\times W^{1,q}_*(\omega;\Lambda^k)$. Note that the form $\alpha$ belongs to the space $W^{1,p}_*(\omega;\Lambda^k)$ thanks to the Sobolev embedding $W^{2,r}(\omega) \subset W^{1,p}(\omega)$, where $p$ is defined by $\frac1p + \frac1q = 1$. □
We end this section with the following theorem, which establishes two remarkable properties of the solution to a variational equation posed over spaces of forms:

Theorem 3.13. Let $p, q \ge 1$ be such that $\frac1p + \frac1q = 1$ and let $\alpha \in W^{1,p}_*(\omega;\Lambda^k)$. Let there be given a $k$-form $\varphi \in L^r(\omega;\Lambda^k)$, where $r \in (1,+\infty)$ satisfies $\frac1r + \frac1q \le 1 + \frac1d$. Suppose that the form $\alpha$ satisfies
\[ (3.9)\qquad \int_\omega \nabla\alpha\cdot\nabla\beta\,dx = \int_\omega \varphi\cdot\beta\,dx \quad\text{for all } \beta \in W^{1,q}_*(\omega;\Lambda^k). \]
Then $\alpha \in W^{2,r}_*(\omega;\Lambda^k)$.

Furthermore, if $\varphi \in \mathcal{W}_r(\omega;\Lambda^k)$, then the form $d\alpha$, which belongs to the space $W^{1,r}_*(\omega;\Lambda^{k+1})$, satisfies the variational equation
\[ (3.10)\qquad \int_\omega \nabla(d\alpha)\cdot\nabla\beta\,dx = \int_\omega d\varphi\cdot\beta\,dx \quad\text{for all } \beta \in W^{1,s}_*(\omega;\Lambda^{k+1}), \]
where $s$ is defined by $\frac1r + \frac1s = 1$.
Proof. It is clear that the form $\alpha$ satisfies equation (3.9) if and only if each component of each coordinate $A_{i_1\dots i_k}$ of $\alpha$ satisfies the same type of equation, but where $\beta$ is replaced with a function in $W^{1,q}(\omega)$ that vanishes on $\Gamma_{i_1}\cup\dots\cup\Gamma_{i_k}$ and the scalar product is replaced by the usual product in $\mathbb{R}$. More specifically, if $u$ denotes the component of $A_{i_1\dots i_k}$ at the $i$-th row and $j$-th column, and $f$ denotes the component of $\Phi_{i_1\dots i_k}$ at the $i$-th row and $j$-th column, then $u \in V_p(\omega)$ (see Remark 3.5) and
\[ \int_\omega \nabla u\cdot\nabla v\,dx = \int_\omega f v\,dx \quad\text{for all } v \in V_q(\omega), \]
where the spaces $V_p(\omega)$ and $V_q(\omega)$ are those defined in the statement of Theorem 2.9 for the choice of indices $i_1,\dots,i_k$.

By applying Theorem 2.9 to this equation, we deduce that $\alpha \in W^{2,r}(\omega;\Lambda^k)$ and that
\[ (3.11)\qquad \partial_j A_{i_1\dots i_k} = 0 \ \text{on}\ \Gamma_j \ \text{for all}\ j\notin\{i_1,\dots,i_k\}. \]
Since we know that $A_{i_1\dots i_k} = 0$ on $\Gamma_{i_1}\cup\dots\cup\Gamma_{i_k}$, Lemma 3.4(ii) shows that $*d\alpha = 0$ on $\partial\omega$. Hence $\alpha \in W^{2,r}_*(\omega;\Lambda^k)$.

We now prove that $d\alpha$ satisfies the variational equation (3.10). By the same density arguments as those used in the proof of Theorem 3.10, it suffices to prove that $d\alpha$ satisfies the variational equation (3.10) for all forms
\[ \beta = \sum_{i_1<\dots<i_{k+1}} B_{i_1\dots i_{k+1}}\,dx^{i_1}\wedge\dots\wedge dx^{i_{k+1}} \in C^2(\overline\omega;\Lambda^{k+1}) \]
that satisfy $*\beta = 0$ on $\partial\omega$.

The first observation is that $\delta\beta \in C^1(\overline\omega;\Lambda^k)$ and $*\delta\beta = 0$ on $\partial\omega$. Indeed, for any fixed indices $i_1<\dots<i_k$ and $l \in \{1,\dots,k\}$, we have that $B_{i_1\dots i_{r-1}\,j\,i_r\dots i_k} = 0$ on $\Gamma_{i_l}$ for all $j\notin\{i_1,\dots,i_k\}$. Hence $\partial_t B_{i_1\dots i_{r-1}\,j\,i_r\dots i_k} = 0$ on $\Gamma_{i_l}$ for all $t \neq i_l$, and in particular for all $t = j \notin\{i_1,\dots,i_k\}$. Therefore
\[ \sum_{j\notin\{i_1,\dots,i_k\}} (-1)^{r}\,\partial_j B_{i_1\dots i_{r-1}\,j\,i_r\dots i_k} = 0 \ \text{on}\ \Gamma_{i_l}, \]
which means that $*\delta\beta = 0$ on $\partial\omega$ (see Lemma 3.4(i)). In particular, $\delta\beta \in W^{1,q}_*(\omega;\Lambda^k)\cap W^{1,s}_*(\omega;\Lambda^k)$ and $\beta \in W^{1,s}_*(\omega;\Lambda^{k+1})$.

This observation allows us to apply Theorems 3.8 and 3.10, together with the identities $dd = 0$ and $\delta\delta = 0$, to obtain the following relations:
\begin{align*}
\int_\omega \nabla(d\alpha)\cdot\nabla\beta\,dx &= \int_\omega (dd\alpha\cdot d\beta + \delta d\alpha\cdot\delta\beta)\,dx = \int_\omega \delta d\alpha\cdot\delta\beta\,dx\\
&= \int_\omega d\alpha\cdot d\delta\beta\,dx = \int_\omega (d\alpha\cdot d\delta\beta + \delta\alpha\cdot\delta\delta\beta)\,dx = \int_\omega \nabla\alpha\cdot\nabla(\delta\beta)\,dx.
\end{align*}
But $\delta\beta$ belongs to the space $W^{1,q}_*(\omega;\Lambda^k)$, so it can be used in the variational equation (3.9) satisfied by $\alpha$. Therefore the last equality implies that
\[ \int_\omega \nabla(d\alpha)\cdot\nabla\beta\,dx = \int_\omega \varphi\cdot\delta\beta\,dx. \]
Using again Theorem 3.8 (which can be applied to the last integral of the above equality since $\varphi \in \mathcal{W}_r(\omega;\Lambda^k)$ by assumption), we finally deduce that
\[ \int_\omega \nabla(d\alpha)\cdot\nabla\beta\,dx = \int_\omega d\varphi\cdot\beta\,dx. \]
□
Remark 3.14. Note that the hypotheses of Theorem 3.13 are satisfied if $1 < p \le r < +\infty$, or if $p = 2$ and $1 \le \frac d2 < r < +\infty$. These two special cases occur in Section 5.
4. Stability and uniqueness

Besides their intrinsic interest, the stability and uniqueness results of this section, which were first established in [13], are needed to establish the existence of a solution to the Pfaff system (see Section 6). We first show that small perturbations in the $L^p$-norm of the coefficients of the Pfaff system and of its "initial data" induce small perturbations of its solution (in the Fréchet space $W^{1,p}_{loc}$):

Theorem 4.1. Let $\Omega$ be a connected open subset of $\mathbb{R}^d$, let $p > d$, and let there be given sequences of matrix fields $(A^n_i)_{n\in\mathbb{N}} \subset L^p(\Omega;\mathbb{M}^\ell)$ for $i \in \{1,2,\dots,d\}$, and $(Y^n)_{n\in\mathbb{N}} \subset W^{1,p}_{loc}(\Omega;\mathbb{M}^{q\times\ell})$, that satisfy the Pfaff systems
\[ \partial_i Y^n = Y^n A^n_i \ \text{in}\ \Omega \ \text{for all}\ i \]
in the distributional sense. Fix a point $x_0 \in \Omega$ and assume that there exists a constant $M > 0$ such that
\[ |Y^n(x_0)| + \sum_i \|A^n_i\|_{L^p(\Omega)} \le M \quad\text{for all } n\in\mathbb{N}. \]
Then, for any open set $K \Subset \Omega$, there exists a constant $C > 0$ (depending on $M$) such that
\[ \|Y^n - Y^m\|_{W^{1,p}(K)} \le C\Big( |Y^n(x_0) - Y^m(x_0)| + \sum_i \|A^n_i - A^m_i\|_{L^p(\Omega)} \Big) \quad\text{for all } n,m\in\mathbb{N}. \]
Proof. Without loss of generality, we may assume that $K$ is a connected set containing $x_0$. To see this, it suffices to prove that, for any open set $K \Subset \Omega$, there exists a connected open set $\Omega'$ such that $K \cup \{x_0\} \subset \Omega' \Subset \Omega$.

In order to construct such a set $\Omega'$, we consider a countable covering of $\Omega$ with open balls, i.e.,
\[ \Omega = \bigcup_{k=1}^{+\infty} B_k, \]
such that $\overline{B}_k \subset \Omega$ for all $k$, where $B_k := B_k(x_k)$ is an open ball centered at the point $x_k \in \Omega$. Such a covering exists thanks to the Lindelöf theorem, which states that, in a topological space having a countable basis for its topology, any open covering of a set contains a countable subcovering.

We then construct a sequence of connected sets $\Omega_k \subset \Omega_{k+1} \Subset \Omega$ such that $\Omega = \cup_{k=1}^{+\infty}\Omega_k$ in the following way. We start with $\Omega_1 := B_1$. Then we fix a continuous path $\gamma \subset \Omega$ joining $x_1$ to $x_2$ and a finite covering of $\gamma$ with balls $B_k$ (such a covering always exists since $\gamma$ is compact), say $\gamma \subset \cup_{j\in J} B_j$, where $J$ is a finite subset of $\mathbb{N}$ such that $\gamma\cap B_j \neq \emptyset$ for all $j \in J$. Then we define
\[ \Omega_2 := \Omega_1 \cup \Big( \bigcup_{j\in J} B_j \Big) \cup B_2. \]
By repeating this process, we construct a sequence of connected open sets $\Omega_k$, defined as finite unions of some balls $B_j$, that satisfy $B_k \subset \Omega_k \subset \Omega_{k+1}$. Hence $\overline{\Omega}_k \subset \Omega$ and $\Omega = \cup_{k=1}^{+\infty}\Omega_k$.

It is now clear how to construct the set $\Omega'$: since $\overline{K} \cup \{x_0\}$ is a compact subset of $\Omega$, there exists a number $k_0 \in \mathbb{N}$ such that $K \cup \{x_0\} \subset \Omega_{k_0}$; hence we can choose $\Omega' = \Omega_{k_0}$.
In the remainder of the proof, $K$ is a connected open set containing $x_0$. Let $R > 0$ be fixed such that
\[ R < \operatorname{dist}(K,\Omega^c) \quad\text{and}\quad C_1 R^{1-d/p} \le \frac{1}{2M}, \]
where $C_1$ is the constant appearing in inequality (2.3) of Corollary 2.3.

Fix any open ball $B_R = B_R(x) \Subset \Omega$ with center $x \in K$. We begin by establishing an upper bound for the $L^\infty$-norm of $Y^m$ over $B_R$. Using Morrey's inequality (see Corollary 2.3), we have
\begin{align*}
\|Y^m\|_{L^\infty(B_R)} &\le |Y^m(x)| + C_1 R^{1-d/p}\sum_i \|\partial_i Y^m\|_{L^p(B_R)}\\
&\le |Y^m(x)| + \frac{1}{2M}\sum_i \|Y^m A^m_i\|_{L^p(B_R)} \le |Y^m(x)| + \frac12\,\|Y^m\|_{L^\infty(B_R)},
\end{align*}
from which we deduce that
\[ \|Y^m\|_{L^\infty(B_R)} \le 2|Y^m(x)|. \]
It is clear that any point $x \in K$ can be joined to $x_0$ by a broken line formed by at most $N$ segments with endpoints in $K$ and lengths $< R$, where the number $N$ depends only on $x_0$, $K$, and $R$. Therefore, using the previous inequality several times shows that
\[ (4.1)\qquad \|Y^m\|_{L^\infty(B_R)} \le 2^{N+1}|Y^m(x_0)| \le 2^{N+1}M. \]
We now estimate the $L^\infty$-norm of $(Y^n - Y^m)$ and the $L^p$-norm of $\partial_i(Y^n - Y^m)$ over $B_R$. From the relation
\[ \partial_i(Y^n - Y^m) = (Y^n - Y^m)A^n_i + Y^m(A^n_i - A^m_i), \]
we first obtain
\[ (4.2)\qquad \sum_i \|\partial_i(Y^n - Y^m)\|_{L^p(B_R)} \le M\|Y^n - Y^m\|_{L^\infty(B_R)} + \|Y^m\|_{L^\infty(B_R)}\sum_i \|A^n_i - A^m_i\|_{L^p(B_R)} \le M\|Y^n - Y^m\|_{L^\infty(B_R)} + 2^{N+1}M\sum_i \|A^n_i - A^m_i\|_{L^p(B_R)}. \]
But Morrey's inequality (see Corollary 2.3) and the choice of the radius $R$ show that
\[ (4.3)\qquad \|Y^n - Y^m\|_{L^\infty(B_R)} \le |(Y^n - Y^m)(x)| + \frac{1}{2M}\sum_i \|\partial_i(Y^n - Y^m)\|_{L^p(B_R)}. \]
Therefore, inequality (4.2) yields
\[ (4.4)\qquad \sum_i \|\partial_i(Y^n - Y^m)\|_{L^p(B_R)} \le 2M|(Y^n - Y^m)(x)| + 2^{N+2}M\sum_i \|A^n_i - A^m_i\|_{L^p(B_R)}, \]
which combined with inequality (4.3) next yields
\[ (4.5)\qquad \|Y^n - Y^m\|_{L^\infty(B_R)} \le 2|(Y^n - Y^m)(x)| + 2^{N+1}\sum_i \|A^n_i - A^m_i\|_{L^p(B_R)}. \]

Let now the open set $K \Subset \Omega$ be covered by a finite number of balls of radius $R$ centered at points of $K$, and let the center, say $x$, of any such ball be joined to the given point $x_0$ by a broken line formed by at most $N$ segments with endpoints in $K$ and lengths $< R$ (the number $N$ depends only on $x_0$, $K$, and $R$). Then a recursion argument based on inequality (4.5) shows that
\[ \|Y^n - Y^m\|_{L^\infty(B_R(x))} \le 2^{N+1}|(Y^n - Y^m)(x_0)| + (4^{N+1} - 2^{N+1})\sum_i \|A^n_i - A^m_i\|_{L^p(\Omega)}. \]
Combined with (4.4), this inequality further implies that
\[ \sum_i \|\partial_i(Y^n - Y^m)\|_{L^p(B_R)} \le 2^{N+2}M|(Y^n - Y^m)(x_0)| + 2^{2N+3}M\sum_i \|A^n_i - A^m_i\|_{L^p(\Omega)}. \]
The last two inequalities show that there exists a constant $C > 0$, independent of $n$ and $m$, such that
\[ \|Y^n - Y^m\|_{W^{1,p}(B_R(x))} \le C\Big( |(Y^n - Y^m)(x_0)| + \sum_i \|A^n_i - A^m_i\|_{L^p(\Omega)} \Big). \]
Since this inequality is valid for any ball $B_R(x)$ in the chosen covering of $K$, summing all these inequalities yields the desired inequality. □
An immediate consequence of the stability result of Theorem 4.1 is the following uniqueness result for Pfaff systems:

Theorem 4.2. Let $\Omega$ be a connected open subset of $\mathbb{R}^d$ and let $A_i \in L^p_{loc}(\Omega;\mathbb{M}^\ell)$, $p > d$. Then the Pfaff system
\[ \partial_i Y = Y A_i \ \text{in}\ \Omega \ \text{for all}\ i \]
has at most one solution in $W^{1,p}_{loc}(\Omega;\mathbb{M}^{q\times\ell})$ satisfying $Y(x_0) = Y^0$ for a given $x_0 \in \Omega$ and $Y^0 \in \mathbb{M}^{q\times\ell}$.
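For intuition about Theorems 4.1 and 4.2 (not part of the proofs), here is a minimal numerical illustration under the simplifying assumption of constant, commuting coefficient matrices: in that case $Y(x) = Y^0\exp(x_1A_1 + x_2A_2)$ solves $\partial_i Y = Y A_i$ with $Y(0) = Y^0$, which a finite-difference check confirms. The sketch assumes NumPy and SciPy are available.

```python
import numpy as np
from scipy.linalg import expm

# Constant, commuting coefficients (A2 is a scalar multiple of A1, so they commute).
A1 = np.array([[0.0, 1.0], [1.0, 0.0]])
A2 = 0.5 * A1
Y0 = np.array([[1.0, 2.0], [0.0, 1.0]])

def Y(x1, x2):
    # Candidate solution of the Pfaff system d_i Y = Y A_i with Y(0) = Y0;
    # valid because A1 and A2 commute with the matrix exponent.
    return Y0 @ expm(x1 * A1 + x2 * A2)

# Finite-difference check of d_1 Y = Y A1 at a sample point.
h, x = 1e-6, (0.3, -0.7)
dY1 = (Y(x[0] + h, x[1]) - Y(x[0] - h, x[1])) / (2 * h)
print(np.allclose(dY1, Y(*x) @ A1, atol=1e-5))  # True
```

Uniqueness as in Theorem 4.2 is visible here as well: fixing $Y(0) = Y^0$ pins down the whole field, since any two solutions agreeing at one point coincide.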
5. Local regularization of Pfaff systems

This section is crucial for establishing the existence of solutions to Pfaff systems with coefficients that are only of class $L^p$ (see Section 6). It consists in showing that a Pfaff system with $L^p$-coefficients can be approximated locally by a sequence of Pfaff systems with smooth coefficients. More specifically, we have the following

Theorem 5.1. Let $\Omega$ be an open subset of $\mathbb{R}^d$ and let there be given $d$ matrix fields $A_i \in L^p(\Omega;\mathbb{M}^\ell)$, $p > d$, such that
\[ (5.1)\qquad \partial_j A_i - \partial_i A_j = A_i A_j - A_j A_i \ \text{for all}\ i,j\in\{1,2,\dots,d\} \]
in the space of distributions $\mathcal{D}'(\Omega;\mathbb{M}^\ell)$.

Then there exists $R_0 > 0$, depending only on $p$, $d$, $\ell$, and $\|A_i\|_{L^p(\Omega)}$, with the following property: for any open cube $\omega \Subset \Omega$ whose edges have length $< R_0$, there exist sequences of matrix fields $(A^\varepsilon_i)$ in $C^\infty(\omega;\mathbb{M}^\ell)\cap L^p(\omega;\mathbb{M}^\ell)$ that satisfy the relations
\[ \partial_j A^\varepsilon_i - \partial_i A^\varepsilon_j = A^\varepsilon_i A^\varepsilon_j - A^\varepsilon_j A^\varepsilon_i \ \text{in}\ \omega \ \text{for all}\ i,j, \]
\[ A^\varepsilon_i \to A_i \ \text{in}\ L^p(\omega;\mathbb{M}^\ell) \ \text{as}\ \varepsilon\to0. \]
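To motivate the compatibility condition (5.1) (the precise necessity statement being the object of Section 6.1), note the standard computation for smooth data: if $Y$ solves $\partial_i Y = Y A_i$, then, by equality of mixed second derivatives,

```latex
\partial_j\partial_i Y = \partial_j(Y A_i) = Y A_j A_i + Y\,\partial_j A_i,
\qquad
0 = \partial_j\partial_i Y - \partial_i\partial_j Y
  = Y\big(\partial_j A_i - \partial_i A_j\big) - Y\big(A_i A_j - A_j A_i\big),
```

so that $Y(\partial_j A_i - \partial_i A_j) = Y(A_i A_j - A_j A_i)$; if $Y(x)$ is invertible at each point, this yields (5.1).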
The proof of this theorem is best described in the framework of matrix-valued differential forms, a short presentation of which is given in Section 3. In this setting, Theorem 5.1 takes the following equivalent form:

Theorem 5.2. Let $\Omega$ be an open subset of $\mathbb{R}^d$ and let $\alpha \in L^p(\Omega;\Lambda^1)$, $p > d$, be a 1-form with matrix coefficients that satisfies, in the distributional sense,
\[ (5.2)\qquad d\alpha + \alpha\wedge\alpha = 0 \ \text{in}\ \Omega. \]
Then there exists $R_0 > 0$, depending only on $p$, $d$, $\ell$, and $\|\alpha\|_{L^p(\Omega)}$, with the following property: for any open cube $\omega \Subset \Omega$ whose edges have length $< R_0$, there exists a sequence of 1-forms $(\alpha^\varepsilon)$ in $C^\infty(\omega;\Lambda^1)\cap L^p(\omega;\Lambda^1)$ such that
\[ d\alpha^\varepsilon + \alpha^\varepsilon\wedge\alpha^\varepsilon = 0 \ \text{in}\ \omega, \qquad \alpha^\varepsilon \to \alpha \ \text{in}\ L^p(\omega) \ \text{as}\ \varepsilon\to0. \]
Proof. The proof uses the notions, notation, and results of Section 3, especially Theorem 3.13. The notation $\Lambda^k$ designates the space of all $k$-forms with matrix coefficients in $\mathbb{M}^\ell$, and the spaces $W^{1,p}_*(\omega;\Lambda^k)$, $W^{2,p}_*(\omega;\Lambda^k)$, $H^1_*(\omega;\Lambda^k)$, and $\mathcal{W}_p(\omega;\Lambda^k)$ are those defined in Section 3.

Let $\alpha := A_i\,dx^i$, where $A_i \in L^p(\Omega;\mathbb{M}^\ell)$, be a 1-form that satisfies
\[ d\alpha + \alpha\wedge\alpha = 0 \ \text{in}\ \Omega. \]
Let there be given a cube
\[ \omega := (a_1,b_1)\times(a_2,b_2)\times\dots\times(a_d,b_d) \]
that satisfies the assumptions of the theorem (the length of the edges of $\omega$, hereafter denoted $R$, satisfies $R < R_0$), and let
\[ \Gamma_k := \partial\omega \cap \{x = (x_i)\in\mathbb{R}^d\ ;\ x_k = a_k \ \text{or}\ x_k = b_k\} \]
denote the union of the two lateral faces of $\omega$ that are orthogonal to the $x_k$-axis. The proof, whose general structure is given in the outline below, is broken into six steps.

Outline. Our aim is to find a sequence of smooth matrix-valued 1-forms $\alpha^\varepsilon := A^\varepsilon_i\,dx^i$, defined over $\omega$, such that
\[ d\alpha^\varepsilon + \alpha^\varepsilon\wedge\alpha^\varepsilon = 0 \ \text{in}\ \omega, \qquad \alpha^\varepsilon\to\alpha \ \text{in the}\ L^p(\omega)\text{-norm}. \]
The idea is to decompose the 1-form $\alpha$ into two parts, namely
\[ \alpha = d\varphi + \delta\psi \ \text{in}\ \omega, \]
where $\varphi$ is a 0-form of class $W^{1,p}(\omega)$ and $\psi$ is a closed (i.e., $d\psi = 0$) 2-form of class $W^{2,p/2}(\omega)$, in such a way that $\varphi$ is not subjected to any other constraint while $\psi$ is determined by the relation $d\alpha + \alpha\wedge\alpha = 0$. To ensure that the form $\psi$ determined in this fashion is closed, we impose boundary conditions on $\psi$: for reasons that will be made clear afterwards, we require $\psi$ to satisfy the (mixed) boundary conditions
\[ *\psi = 0 \ \text{and}\ *d\psi = 0 \ \text{on}\ \partial\omega. \]
In coordinates (Lemma 3.4 applies since $\omega$ is a cube), these boundary conditions mean that, for all $i < j$,
\[ \Psi_{ij} = 0 \ \text{on}\ \Gamma_i\cup\Gamma_j, \qquad \partial_\nu\Psi_{ij} = 0 \ \text{on}\ \partial\omega\setminus(\Gamma_i\cup\Gamma_j), \]
where $\psi = \sum_{i<j}\Psi_{ij}\,dx^i\wedge dx^j$. That such a decomposition does exist is proven in step (ii).

The next step consists in choosing a sequence of smooth fields $\varphi^\varepsilon$ that converges to $\varphi$ in the $W^{1,p}(\omega)$-norm and in defining $\psi^\varepsilon$ as a function of $\varphi^\varepsilon$, determined by the relation $d\alpha^\varepsilon + \alpha^\varepsilon\wedge\alpha^\varepsilon = 0$ with $\alpha^\varepsilon = d\varphi^\varepsilon + \delta\psi^\varepsilon$. This is done in steps (iii)-(v). The final step consists in showing that the sequence of 1-forms defined by
\[ \alpha^\varepsilon := d\varphi^\varepsilon + \delta\psi^\varepsilon \]
satisfies the conditions of the theorem.
(i) Preliminary result: $\alpha\wedge\alpha \in \mathcal{W}_{p/2}(\omega;\Lambda^2)$. It is clear that $\alpha\wedge\alpha \in L^{p/2}(\omega;\Lambda^2)$ and, since
\[ d(\alpha\wedge\alpha) = d(-d\alpha) = -dd\alpha = 0, \]
we have in particular $d(\alpha\wedge\alpha) \in L^{p/2}(\omega;\Lambda^2)$.

On the other hand, we know that $\alpha\wedge\alpha = -d\alpha$ in the entire domain $\Omega$, so $d(\alpha\wedge\alpha) = 0$ in $\Omega$. Regularizing the coordinates of $\alpha\wedge\alpha$ by convolution yields a sequence of 2-forms $\gamma_n \in C^\infty(\omega;\Lambda^2)$ such that
\[ \gamma_n \to \alpha\wedge\alpha \ \text{in}\ L^{p/2}(\omega;\Lambda^2) \quad\text{and}\quad d\gamma_n = 0 \ \text{in}\ \omega \]
(the second relation holds since $d(\alpha\wedge\alpha) = 0$ in $\Omega$ and $d$ is a linear operator). In particular, $d\gamma_n \to d(\alpha\wedge\alpha)$ in $L^{p/2}(\omega;\Lambda^2)$. This shows that $\alpha\wedge\alpha \in \mathcal{W}_{p/2}(\omega;\Lambda^2)$.
(ii) Decomposition of the 1-form $\alpha$. We define the 2-form $\psi$ as the unique solution in the space $H^1_*(\omega;\Lambda^2)$ of the variational equation
\[ (5.3)\qquad \int_\omega \nabla\psi\cdot\nabla\beta\,dx = -\int_\omega (\alpha\wedge\alpha)\cdot\beta\,dx \]
for all $\beta \in H^1_*(\omega;\Lambda^2)$. That this equation possesses a unique solution is proved by the Lax-Milgram theorem; we only need to prove that the linear form appearing in the right-hand side of (5.3) is continuous over the Hilbert space $H^1_*(\omega;\Lambda^2)$.

Since the 2-form $\alpha\wedge\alpha$ belongs to the space $L^{p/2}(\omega;\Lambda^2)$, we have on the one hand (see inequalities (3.1))
\[ \Big| \int_\omega (\alpha\wedge\alpha)\cdot\beta\,dx \Big| \le \|\alpha\wedge\alpha\|_{L^{p/2}(\omega)}\,\|\beta\|_{L^{p/(p-2)}(\omega)} \]
for every 2-form $\beta \in L^{p/(p-2)}(\omega;\Lambda^2)$. On the other hand, the Sobolev inequality established in Theorem 3.7, combined with inequality (3.2), shows that, for some constants $C > 0$,
\[ \|\beta\|_{L^{p/(p-2)}(\omega)} \le C\|\beta\|_{L^{2p/(p-2)}(\omega)} \le C\|\nabla\beta\|_{L^2(\omega)} \le C\|\beta\|_{H^1_*(\omega)} \]
for every 2-form $\beta \in H^1_*(\omega;\Lambda^2)$. Combining these last two inequalities shows that the linear form appearing in the right-hand side of (5.3) is indeed continuous over the Hilbert space $H^1_*(\omega;\Lambda^2)$.

We have proved that equation (5.3) has a unique solution $\psi \in H^1_*(\omega;\Lambda^2)$. Since $\alpha\wedge\alpha$ belongs to the space $L^{p/2}(\omega;\Lambda^2)$, the regularity result established in Theorem 3.13 (see also Remark 3.14) shows that
\[ \psi \in W^{2,p/2}(\omega;\Lambda^2) \]
and that it satisfies the boundary value problem
\[ (5.4)\qquad \Delta\psi = \alpha\wedge\alpha \ \text{in}\ \omega, \qquad *\psi = 0 \ \text{and}\ *d\psi = 0 \ \text{on}\ \partial\omega. \]
Hence we have found a form $\psi$ belonging to the space $W^{2,p/2}_*(\omega;\Lambda^2)$ that satisfies the variational equation (5.3). By step (i), we know that $\alpha\wedge\alpha \in \mathcal{W}_{p/2}(\omega;\Lambda^2)$. Therefore the second assertion of Theorem 3.13 shows that $d\psi$ belongs to the space $W^{1,p/2}_*(\omega;\Lambda^3)$ and satisfies the variational equation
\[ \int_\omega \nabla(d\psi)\cdot\nabla\beta\,dx = 0 \]
for all $\beta \in W^{1,p/(p-2)}_*(\omega;\Lambda^3)$. But the regularity result of Theorem 3.13 shows that $d\psi$ belongs to the space $H^1_*(\omega;\Lambda^3)$ (it suffices to take $r = \max\{2,\frac p2\}$ in Theorem 3.13; see also Remark 3.14), which next implies that the variational equation above is also satisfied for all $\beta \in H^1_*(\omega;\Lambda^3)$, since the space
\[ \{\gamma \in C^\infty(\overline\omega;\Lambda^3)\ ;\ *\gamma = 0 \ \text{on}\ \partial\omega\} \subset W^{1,p/(p-2)}_*(\omega;\Lambda^3) \]
is dense in $H^1_*(\omega;\Lambda^3)$ by Theorem 2.6(ii). By taking $\beta = d\psi$ in the variational equation above, we immediately deduce that
\[ d\psi = 0 \ \text{in}\ \omega, \]
which means that the 2-form $\psi$ is closed.

Note that, since $p > d$, $\psi \in W^{1,p}(\omega;\Lambda^2)$ by the Sobolev embedding $W^{2,p/2}(\omega) \subset W^{1,p}(\omega)$. In particular $\delta\psi \in L^p(\omega;\Lambda^1)$, so that $\alpha - \delta\psi \in L^p(\omega;\Lambda^1)$. We now infer from the relation $d\psi = 0$ and from the identity $-\Delta = d\delta + \delta d$ that
\[ d(\alpha - \delta\psi) = d\alpha - d\delta\psi = d\alpha - (d\delta + \delta d)\psi = d\alpha + \Delta\psi = -\alpha\wedge\alpha + \Delta\psi = 0, \]
the last equality by (5.4). Therefore the 1-form $\alpha - \delta\psi$ is closed, which next implies, by the Poincaré Theorem 3.3, that there exists a 0-form $\varphi \in W^{1,p}(\omega;\Lambda^0)$ such that
\[ \alpha - \delta\psi = d\varphi. \]
(iii) Regularization of $\varphi$ and $\psi$. Since the space $C^\infty(\overline\omega)$ is dense in $W^{1,p}(\omega)$, there exists a sequence of 0-forms $\varphi^\varepsilon$ in $C^\infty(\overline\omega;\Lambda^0)$ such that, as $\varepsilon\to0$,
\[ \varphi^\varepsilon \to \varphi \ \text{in the}\ W^{1,p}(\omega)\text{-norm}. \]
Next, for each $\varepsilon$, we define a 2-form $\psi^\varepsilon$ by solving the nonlinear problem (to be compared with (5.4))
\[ (5.5)\qquad -\Delta\psi^\varepsilon + (d\varphi^\varepsilon + \delta\psi^\varepsilon)\wedge(d\varphi^\varepsilon + \delta\psi^\varepsilon) = 0 \ \text{in}\ \omega, \qquad *\psi^\varepsilon = 0 \ \text{and}\ *d\psi^\varepsilon = 0 \ \text{on}\ \partial\omega. \]
That this system has a solution is proven by the implicit function theorem. To this end, define the spaces
\[ E := W^{1,p}(\omega;\Lambda^0), \qquad F := W^{2,p/2}_*(\omega;\Lambda^2), \qquad G := L^{p/2}(\omega;\Lambda^2), \]
and define the mapping $f : E\times F \to G$ by
\[ f(\sigma,\tau) = -\Delta\tau + (d\sigma + \delta\tau)\wedge(d\sigma + \delta\tau). \]
It is clear that this mapping is well defined and of class $C^\infty$ (since its differential is a continuous affine mapping, i.e., a translation of a continuous linear operator). Its derivative with respect to the second variable at $(\varphi,\psi)$ is the linear map $D_\tau f(\varphi,\psi) : F \to G$ defined by
\[ D_\tau f(\varphi,\psi)(\theta) = -\Delta\theta + \alpha\wedge\delta\theta + \delta\theta\wedge\alpha, \]
where (see step (ii)) $\alpha = d\varphi + \delta\psi$.

Now we show that the mapping $D_\tau f(\varphi,\psi)$ is an isomorphism by using several properties of second order linear elliptic equations. For each $\xi \in G$, we first prove that the linear system
\[ (5.6)\qquad -\Delta\theta + \alpha\wedge\delta\theta + \delta\theta\wedge\alpha = \xi \]
has one and only one solution in the space $F$. To this end, we apply the Lax-Milgram lemma to the following variational equation: find $\theta \in H^1_*(\omega;\Lambda^2)$ such that
\[ (5.7)\qquad \int_\omega \nabla\theta\cdot\nabla\beta\,dx + \int_\omega (\alpha\wedge\delta\theta + \delta\theta\wedge\alpha)\cdot\beta\,dx = \int_\omega \xi\cdot\beta\,dx \]
for all $\beta \in H^1_*(\omega;\Lambda^2)$.
ON SYSTEMS OF FIRST ORDER LINEAR PARTIAL DIFFERENTIAL EQUATIONS 31
The assumptions of the Lax-Milgram lemma are satisfied because, on the one hand, the linear form appearing in the right-hand side of (5.7) is continuous over H^1_*(ω; Λ2) (the proof is the same as in step (ii)) and, on the other hand, the bilinear form appearing in the left-hand side of the same equation is continuous and coercive over H^1_*(ω; Λ2), as we now show. First, using successively the Hölder inequality, Theorem 3.7, and the inequalities (3.1) and (3.2), we deduce that, for all θ, β ∈ H^1_*(ω; Λ2),

| ∫_ω ∇θ · ∇β dx + ∫_ω (α ∧ δθ + δθ ∧ α) · β dx |
  ≤ ‖θ‖_{H^1_*(ω)}‖β‖_{H^1_*(ω)} + 2‖α‖_{L^p(ω)}‖δθ‖_{L^2(ω)}‖β‖_{L^{2p/(p−2)}(ω)}
  ≤ ‖θ‖_{H^1_*(ω)}‖β‖_{H^1_*(ω)} + C‖α‖_{L^p(ω)}‖δθ‖_{L^2(ω)}‖∇β‖_{L^2(ω)}
  ≤ C‖θ‖_{H^1_*(ω)}‖β‖_{H^1_*(ω)},

where the last constant C depends on ‖α‖_{L^p(ω)} but is independent of θ and β. Thus the bilinear form is continuous. Using the same ingredients, we have that, for all θ ∈ H^1_*(ω; Λ2),

| ∫_ω (α ∧ δθ + δθ ∧ α) · θ dx |
  ≤ 2‖α‖_{L^p(ω)}‖δθ‖_{L^2(ω)}‖θ‖_{L^{2p/(p−2)}(ω)}
  ≤ CR^{1−d/p}‖α‖_{L^p(Ω)}‖δθ‖_{L^2(ω)}‖∇θ‖_{L^2(ω)}
  ≤ CR^{1−d/p}‖α‖_{L^p(Ω)}‖∇θ‖²_{L^2(ω)}
  ≤ CR^{1−d/p}‖α‖_{L^p(Ω)}‖θ‖²_{H^1_*(ω)},
for some constants C, the last one depending only on p, d and ℓ. It is clear then that for small R, i.e., for those R that are smaller than a certain number R0 depending only on p, d, ℓ and ‖α‖_{L^p(Ω)} (for an explicit value of R0, see Remark 5.4), we have

| ∫_ω (α ∧ δθ + δθ ∧ α) · θ dx | ≤ a‖θ‖²_{H^1_*(ω)},
for some constant a < 1, which shows that the bilinear form associated with (5.7) is coercive over the space H^1_*(ω; Λ2).

We now show by a bootstrap argument that the solution to the system (5.7) belongs to the space W^{2,p/2}_*(ω; Λ2). In turn, this implies that the equation (5.6) is equivalent to the variational equation (5.7), and therefore the equation (5.6) has a unique solution if and only if the equation (5.7) has a unique solution. To prove this equivalence, we first observe that, if the solution θ to the variational equation (5.7) belongs to the space W^{2,p/2}_*(ω; Λ2), then θ is a strong solution to equation (5.6). In the opposite sense, if θ is a solution to (5.6), then θ is a solution to the variational equation (5.7), since

∫_ω (−∆θ) · β dx = ∫_ω ∇θ · ∇β dx

for all β ∈ H^1_*(ω; Λ2) by Corollary 3.12 (the hypotheses of the Corollary are satisfied since 1/2 + 2/p < 1 + 1/d for p > d ≥ 2).

In order to prove that θ does belong to the space W^{2,p/2}_*(ω; Λ2), it suffices (in view of Theorem 3.13 and Remark 3.14) to prove that

ξ − (α ∧ δθ + δθ ∧ α) ∈ L^{p/2}(ω; Λ2).
In fact, since ξ ∈ L^{p/2}(ω; Λ2) and α ∈ L^p(ω; Λ1), it suffices to prove that

δθ ∈ L^p(ω; Λ1).

We first prove by a recursion argument that there exists a number r ≥ r̄ := min{p, dp/(p−d)} such that δθ ∈ L^r(ω; Λ1). To this end, we define recursively a sequence of numbers r0 < r1 < r2 < ... such that δθ ∈ L^{r_k}(ω; Λ1): set r0 = 2 and suppose we have defined r0 < r1 < · · · < r_k < r̄. Then
(α ∧ δθ + δθ ∧ α) − ξ ∈ L^{s_k}(ω; Λ2),

where 1/s_k = 1/r_k + 1/p, which next implies that

θ ∈ W^{2,s_k}(ω; Λ2),

by applying Theorem 3.13 to the equation (5.7). Furthermore, the Sobolev Embedding Theorem (note that s_k < d) shows that

δθ ∈ W^{1,s_k}(ω; Λ1) ⊂ L^{r_{k+1}}(ω; Λ1),

where

1/r_{k+1} = 1/s_k − 1/d = 1/r_k − (1/d − 1/p).
The last relation shows that r_k < r_{k+1} and that after a finite number of iterations we will find r = r_h ≥ r̄.

If r ≥ p, then δθ ∈ L^r(ω; Λ1) ⊂ L^p(ω; Λ1). If dp/(p−d) ≤ r < p, then we deduce by the same argument as above that θ ∈ W^{2,s}(ω; Λ2), where 1/s = 1/r + 1/p ≤ 1/d. Hence s ≥ d and the Sobolev Embedding Theorem shows that

δθ ∈ W^{1,s}(ω; Λ1) ⊂ L^p(ω; Λ1).
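For the record, the recursion above can be solved in closed form (this is merely a restatement of the relations already established, with the convention r0 = 2):

```latex
\frac{1}{r_{k+1}}=\frac{1}{r_k}-\Big(\frac{1}{d}-\frac{1}{p}\Big)
\quad\Longrightarrow\quad
\frac{1}{r_k}=\frac{1}{2}-k\Big(\frac{1}{d}-\frac{1}{p}\Big),
```

so the threshold r̄ is reached after at most ⌈(1/2 − 1/r̄)/(1/d − 1/p)⌉ iterations; in particular, the number of steps depends only on p and d.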
We already proved that the linear mapping Dτf(φ, ψ) is a bijection. Since it is also continuous, the closed graph theorem shows that it is an isomorphism. Consequently, the mapping f satisfies the assumptions of the implicit function theorem (see Theorem 2.1). Therefore there exist open balls Bδ(φ) ⊂ E, Br(ψ) ⊂ F and a mapping g : Bδ(φ) → Br(ψ) of class C1 such that g(φ) = ψ and

f(σ, g(σ)) = 0 for all σ ∈ Bδ(φ).

In particular, for σ = φε there exists ψε := g(φε) such that f(φε, ψε) = 0. Moreover, since g is continuous and since φε → φ in W^{1,p}(ω; Λ0), we have ψε → ψ in W^{2,p/2}(ω; Λ2). Hence

ψε → ψ in W^{1,p}(ω; Λ2)

thanks to the Sobolev embedding W^{2,p/2}(ω) ⊂ W^{1,p}(ω), which is valid because p ≥ d.
(iv) The 2-form ψε belongs to the space W^{2,r}(ω; Λ2) ∩ C∞(ω) for all r ∈ [1, ∞).

We proved in step (iii) that ψε ∈ W^{2,p/2}_*(ω; Λ2) and that it satisfies the boundary value problem (see (5.5)):

−∆ψε + (dφε + δψε) ∧ (dφε + δψε) = 0 in ω,
∗ψε = 0 and ∗dψε = 0 on ∂ω,

where φε ∈ C∞(ω). We now prove that ψε ∈ W^{2,r}(ω; Λ2) for all r ∈ [1, ∞).
By multiplying the equation satisfied by ψε with a test function β ∈ W^{1,p/(p−2)}_*(ω; Λ2), which is possible since the space L^{p/(p−2)}(ω) is the dual of the space L^{p/2}(ω), we first deduce that

(5.8)  −∫_ω ∆ψε · β dx = −∫_ω ((dφε + δψε) ∧ (dφε + δψε)) · β dx,

hence, by Corollary 3.12,

(5.9)  ∫_ω ∇ψε · ∇β dx = −∫_ω ((dφε + δψε) ∧ (dφε + δψε)) · β dx

for all β ∈ W^{1,p/(p−2)}_*(ω; Λ2).
We prove that ψε ∈ W^{2,r}(ω; Λ2) for all r ∈ [1, ∞) by applying several times the regularity result established in Theorem 3.13. To this end, we first prove by a recursion argument that there exists a number p̄ ≥ 2d such that ψε ∈ W^{2,p̄/2}(ω; Λ2).

We know that ψε ∈ W^{2,p/2}(ω; Λ2). If p ≥ 2d, we set p̄ = p. If p < 2d, we define recursively a sequence of numbers p0 < p1 < p2 < ... such that

ψε ∈ W^{2,p_i/2}(ω; Λ2), i = 0, 1, ...,

until we reach a number p_k ≥ 2d. For, let p0 = p and assume that we have already defined the numbers p0 < p1 < ... < p_i < 2d. Since ψε ∈ W^{2,p_i/2}(ω; Λ2), we deduce that

(dφε + δψε) ∧ (dφε + δψε) ∈ L^{p_{i+1}/2}(ω; Λ2),
where

(5.10)  1/p_{i+1} = 2/p_i − 1/d,

by using the Sobolev embedding

W^{1,p_i/2}(ω) ⊂ L^{p_{i+1}}(ω).

Then, by applying Theorem 3.13 (see also Remark 3.14) to the variational equation (5.9), we deduce that

ψε ∈ W^{2,p_{i+1}/2}(ω; Λ2).
Moreover, p_{i+1} > p_i since

1/p_{i+1} = 1/p_i − (1/d − 1/p_i) ≤ 1/p_i − (1/d − 1/p).

This shows that relation (5.10) correctly defines the sequence p_i. But the last inequality also shows that, after a finite number of iterations, we obtain p_{i+1} ≥ 2d. We thus set p̄ = p_{i+1} in the case where p < 2d.
We found a number p̄ ≥ 2d such that ψε ∈ W^{2,p̄/2}(ω; Λ2). In addition, the Sobolev Embedding Theorem shows that

W^{1,p̄/2}(ω) ⊂ L^r(ω) for all r ≥ 1.

Combined with the fact that φε ∈ C∞(ω; Mℓ), this implies that

(dφε + δψε) ∧ (dφε + δψε) ∈ L^r(ω; Λ2) for all r ≥ 1,

so that we can apply once again Theorem 3.13 to the variational equation (5.9) and thus obtain

ψε ∈ W^{2,r}(ω; Λ2) for all r ≥ 1.
We now prove that the 2-forms ψε are of class C∞ in ω by using the interior regularity of the Laplacian combined with a bootstrap argument.

Since φε ∈ C∞(ω; Mℓ) and since ψε ∈ W^{2,r}(ω; Λ2) for all r ≥ 1, we have (see (5.5))

∆ψε = (dφε + δψε) ∧ (dφε + δψε) ∈ W^{1,r/2}(ω; Λ2) for all r > 2.

Then the interior regularity of the Laplacian (see, e.g., Dautray and Lions [6]) shows that

ψε ∈ W^{3,r/2}_loc(ω; Λ2) for all r > 2.

In turn, this implies that

∆ψε = (dφε + δψε) ∧ (dφε + δψε) ∈ W^{2,r/2²}_loc(ω; Λ2) for all r > 2²,

so that we have, again by the interior regularity of the Laplacian,

ψε ∈ W^{4,r/2²}_loc(ω; Λ2) for all r > 2².

By applying this argument recursively, we obtain that

ψε ∈ W^{m,r}_loc(ω; Λ2) for all m ≥ 2, r > 1.

Hence ψε ∈ C∞(ω; Λ2) by the Sobolev Embedding Theorem.
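The interior bootstrap above can be condensed into a single induction statement (equivalent to the chain of inclusions just derived):

```latex
\psi_\varepsilon\in W^{\,m+2,\;r/2^{m}}_{\mathrm{loc}}(\omega;\Lambda^2)
\quad\text{for all } r>2^{m},\ m\ge 1,
```

and since r > 2^m is arbitrary, the exponent r/2^m ranges over all values > 1, which yields ψε ∈ W^{m,r}_loc(ω; Λ2) for all m ≥ 2 and r > 1.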
(v) The 2-form ψε is closed for ε small enough. We may assume that the dimension of Ω satisfies d ≥ 3, since in the case d = 2 we have dψε = 0 simply because dψε is a form of degree > 2.
Let αε := (dφε + δψε). Note that αε → α in L^p(ω; Λ1) since φε → φ in W^{1,p}(ω; Λ0) and ψε → ψ in W^{1,p}(ω; Λ2). In particular,

‖αε‖_{L^p(ω)} ≤ ‖α‖_{L^p(ω)} + 1 ≤ ‖α‖_{L^p(Ω)} + 1

for ε small enough, that is, smaller than a certain value ε0. We will prove that, for all ε < ε0, the 2-forms ψε are closed.
We wish to prove that dψε = 0 by applying the second assertion of Theorem 3.13 to the variational equation (see (5.9))

(5.11)  ∫_ω ∇ψε · ∇β dx = −∫_ω (αε ∧ αε) · β dx

for all β ∈ W^{1,p/(p−2)}_*(ω; Λ2). Since by step (iv),

αε ∧ αε ∈ W^{1,r}(ω; Λ2) ⊂ L^r(ω; Λ2) for all r ≥ 1,

and in particular for r = max{2, p/2}, we obtain by Theorem 3.13 that dψε ∈ W^{1,r}_*(ω; Λ3) ⊂ H^1_*(ω; Λ3) satisfies
(5.12)  ∫_ω ∇(dψε) · ∇β dx = −∫_ω d(αε ∧ αε) · β dx

for all β ∈ H^1_*(ω; Λ3).
On the other hand, we have

d(αε ∧ αε) = (dδψε) ∧ αε − αε ∧ (dδψε).

But the 2-form dδψε satisfies

dδψε = −∆ψε − δ(dψε) = −αε ∧ αε − δ(dψε),

which combined with the previous equation gives

d(αε ∧ αε) = −δ(dψε) ∧ αε + αε ∧ δ(dψε).
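The last identity follows by substituting the expression for dδψε and observing that the cubic terms cancel; spelled out:

```latex
d(\alpha_\varepsilon\wedge\alpha_\varepsilon)
=\bigl(-\alpha_\varepsilon\wedge\alpha_\varepsilon-\delta(d\psi_\varepsilon)\bigr)\wedge\alpha_\varepsilon
-\alpha_\varepsilon\wedge\bigl(-\alpha_\varepsilon\wedge\alpha_\varepsilon-\delta(d\psi_\varepsilon)\bigr)
=-\delta(d\psi_\varepsilon)\wedge\alpha_\varepsilon+\alpha_\varepsilon\wedge\delta(d\psi_\varepsilon),
```

the two terms ±αε ∧ αε ∧ αε cancelling each other.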
Using this expression in the right-hand side of (5.12), we deduce that

(5.13)  ∫_ω ∇(dψε) · ∇β dx = ∫_ω [(δ(dψε) ∧ αε) · β − (αε ∧ δ(dψε)) · β] dx

for all β ∈ H^1_*(ω; Λ3).
In order to prove that dψε = 0, we let β = dψε in the variational equation (5.13) and we obtain the following estimate (we use in particular the inequalities (3.1)):

| ∫_ω ∇(dψε) · ∇(dψε) dx |
  ≤ 2‖αε‖_{L^p(ω)}‖∇(dψε)‖_{L^2(ω)}‖dψε‖_{L^{2p/(p−2)}(ω)}
  ≤ 2(‖α‖_{L^p(Ω)} + 1)‖∇(dψε)‖_{L^2(ω)}‖dψε‖_{L^{2p/(p−2)}(ω)},

which combined with the inequalities (3.2) and with the Sobolev inequality of Theorem 3.7 shows that

‖∇(dψε)‖²_{L^2(ω)} ≤ CR^{1−d/p}(‖α‖_{L^p(Ω)} + 1)‖∇(dψε)‖²_{L^2(ω)},

the constant C depending only on p, d and ℓ. Thus, for R small enough, i.e., for R smaller than a certain number R0 depending only on p, d, ℓ and ‖α‖_{L^p(Ω)} (see Remark 5.4 for an explicit value of R0), the left-hand side of the previous inequality must vanish. Combined with the null boundary conditions satisfied by dψε, this implies that dψε = 0.
(vi) Approximation of the 1-form α. We have already seen that
αε := dφε + δψε → α in Lp(ω; Λ1).
In addition, αε is of class C∞ in ω since φε and ψε are of class C∞(ω). On the other hand, dψε = 0 for ε small enough, as we have seen in the previous step. Therefore
d(dφε + δψε) = dδψε = (dδ + δd)ψε = −∆ψε = −(dφε + δψε) ∧ (dφε + δψε),
from which we deduce that the 1-form αε (with ε small enough) satisfies the relation
dαε + αε ∧ αε = 0.
□
Remark 5.3. The forms αε constructed in the proof of Theorem 5.2 belong to the space C∞(ω; Λ1) ∩ (⋂_{1≤r<+∞} W^{1,r}(ω; Λ1)).
Remark 5.4 (an explicit formula for R0). It is clear that we can choose the number R0 appearing in the statement of Theorems 5.1 and 5.2 as the minimum between the constants (still denoted R0) appearing in the proofs of the above steps (iii) and (v). Following the inequalities therein, we can take

(5.14)  R0 := min{ (C₂ℓ²d²(d−1)‖α‖_{L^p(Ω)})^{p/(d−p)}, (C₂ℓ²d²(d−1)(d−2)(‖α‖_{L^p(Ω)} + 1)³)^{p/(d−p)} },

where C₂ is the constant of Theorem 3.7.
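The shape of the exponent in (5.14) is dictated by the smallness conditions of steps (iii) and (v): each requires an inequality of the form C R^{1−d/p} K < 1, where K stands for the relevant norm factor. Since p > d, one may solve for R:

```latex
C\,R^{\,1-d/p}\,K<1
\iff
R<(C K)^{-\frac{1}{1-d/p}}=(C K)^{\frac{p}{d-p}},
```

which explains why the admissible R0 decreases as ‖α‖_{L^p(Ω)} increases (the exponent p/(d−p) is negative).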
An immediate consequence of Theorem 5.2 is the following

Corollary 5.5. Let Ω be an open subset of R^d and let α ∈ L^p_loc(Ω; Λ1), p > d, be a 1-form with matrix coefficients that satisfies in the distributional sense

dα + α ∧ α = 0 in Ω.

Then there exists a non-increasing function R0 : {Ω′ ⋐ Ω ; Ω′ open} → (0, ∞] (i.e., Ω1 ⊂ Ω2 ⋐ Ω ⇒ R0(Ω1) ≥ R0(Ω2)) with the following property: given any open subset Ω′ ⋐ Ω and any open cube ω ⋐ Ω′ whose edges have lengths < R0(Ω′), there exists a sequence of 1-forms (αε) in C∞(ω; Λ1) ∩ L^p(ω; Λ1) such that

(5.15)  dαε + αε ∧ αε = 0 in ω,
        αε → α in L^p(ω) as ε → 0.

Proof. For any open subset Ω′ ⋐ Ω, the form α|Ω′ satisfies the hypotheses of Theorem 5.2. Hence there exists R0 ∈ (0, ∞] (depending on p, d, ℓ and ‖α‖_{L^p(Ω′)}) such that the form α|Ω′ can be approximated by smooth forms satisfying the conditions (5.15) over any open cube ω ⋐ Ω′ whose edges have lengths < R0.

Among all the constants R0 ∈ (0, ∞] satisfying the above property, we select one such that the mapping Ω′ ↦ R0(Ω′) is non-increasing. To do so, it suffices to define R0 by the formula (5.14). Another possibility is to define R0(Ω′) as the largest admissible value in the statement of Theorem 5.2 (it is easy to see that such a value does exist). □
6. Existence theory
6.1. Necessary conditions. This section provides the necessary conditions that the coefficients Ai must satisfy in order that the Pfaff system

∂iY = Y Ai in Ω ;  Y (x0) = Y 0

have at least one solution for any given matrix Y 0 ∈ M^{q×ℓ}. We begin with a preliminary result:
Lemma 6.1. Let X ∈ W^{1,p}_loc(Ω; Mℓ) be a solution to the Pfaff system

∂iX = XAi in Ω for all i,

where Ω is a connected open subset of R^d, p is a number > d, and Ai ∈ L^p_loc(Ω; Mℓ), i ∈ {1, 2, ..., d}, are matrix fields. Assume that there exists a point x0 ∈ Ω such that the matrix X(x0) is invertible. Then

(i) the matrix X(x) is invertible at each point x ∈ Ω;
(ii) the matrix field X^{−1} : Ω → Mℓ, defined by X^{−1}(x) = (X(x))^{−1} for all x ∈ Ω, belongs to the space W^{1,p}_loc(Ω; Mℓ) and satisfies the system

∂i(X^{−1}) = −AiX^{−1} in Ω for all i.
Proof. (i) Assume on the contrary that there exists x1 ∈ Ω such that the matrix X(x1) is not invertible. Hence [X(x1)]^T is not invertible either, and therefore 0 is one of its eigenvalues. This implies the existence of a vector v ≠ 0 in R^ℓ satisfying

[X(x1)]^T v = 0 in R^ℓ.

Since X satisfies ∂iX = XAi, it follows that the vector field (v^T X) : Ω → M^{1×ℓ}, defined by

(v^T X)(x) = v^T X(x) for all x ∈ Ω,

satisfies the system

∂i(v^T X) = (v^T X)Ai for all i,
(v^T X)(x1) = 0.

By the uniqueness result of Theorem 4.2, the solution of this system must vanish in the entire Ω. In particular, (v^T X)(x0) = 0. But this clearly contradicts the assumption that X(x0) is invertible (remember that v ≠ 0). Hence the first part of the Lemma is established.
(ii) Since the matrix field X is continuous and invertible in a connected open set, its determinant det X : Ω → R, which is a continuous function, must be either positive at every point of Ω or negative at every point of Ω. Without loss of generality, we may assume the former. Let K ⋐ Ω. Then there exists a constant ε > 0, depending on K, such that

det X(x) ≥ ε for all x ∈ K.

Since W^{1,p}_loc(Ω) is an algebra, the function det X and the matrix field Cof X belong respectively to W^{1,p}_loc(Ω) and W^{1,p}_loc(Ω; Mℓ). Together with the inequality above, this implies that the field X^{−1} also belongs to W^{1,p}_loc(Ω; Mℓ), since

X^{−1} = (1/det X)(Cof X)^T.

This allows us to compute the partial derivatives (in the distributional sense) of the relation XX^{−1} = I, where I : Ω → Mℓ is the constant matrix field whose value is the identity matrix. Hence

(∂iX)X^{−1} + X(∂iX^{−1}) = 0,

which next implies that

X(AiX^{−1} + ∂iX^{−1}) = 0 in L^p_loc(Ω).

Multiplying this relation on the left by X^{−1}, we finally obtain that X^{−1} satisfies the system of part (ii) of the Lemma. □
We are now in a position to achieve the objective of this section:

Theorem 6.2. Let Ω be an open and connected subset of R^d, let x0 ∈ Ω, and let Ai ∈ L^p_loc(Ω; Mℓ), where p ≥ d. If for any given matrix Y 0 ∈ M^{q×ℓ}, the system

(6.1)  ∂iY = Y Ai in Ω for all i,
       Y (x0) = Y 0

has at least one solution in W^{1,p}_loc(Ω; M^{q×ℓ}), then its coefficients satisfy in the distributional sense the compatibility conditions

∂jAi − ∂iAj = AiAj − AjAi in Ω,

for all i, j ∈ {1, 2, ..., d}.

Proof. For each k ∈ {1, 2, ..., ℓ}, let Y 0(k) denote the matrix in M^{q×ℓ} whose only nonzero element, equal to 1, is the one situated in the first row and k-th column. By assumption, we know that each system

∂iY = Y Ai,
Y (x0) = Y 0(k),

has at least one solution, say Y (k) ∈ W^{1,p}_loc(Ω; M^{q×ℓ}).
Let X : Ω → Mℓ be the matrix field whose k-th row is the first row of the matrix field Y (k). It is then clear from the above system that the field X, which belongs to the space W^{1,p}_loc(Ω; Mℓ), satisfies the system

∂iX = XAi,
X(x0) = I,

where I : Ω → Mℓ is the identity field. Now, since the second derivatives of X commute, it follows that, for all i, j,

∂j(XAi) = ∂i(XAj),

which in turn implies that, in the distributional sense,

X(AjAi + ∂jAi) = X(AiAj + ∂iAj).
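The passage from the commutation of the second derivatives to the displayed identity uses the product rule together with the equations ∂jX = XAj; spelled out (a routine computation, valid under the stated regularity of X and Ai):

```latex
\partial_j(XA_i)=(\partial_j X)A_i+X\,\partial_j A_i=X\,(A_jA_i+\partial_j A_i),
\qquad
\partial_i(XA_j)=X\,(A_iA_j+\partial_i A_j),
```

and equating the two right-hand sides yields the displayed identity.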
But we have shown in Lemma 6.1 that the matrices X(x) are invertible at any point x ∈ Ω and that X^{−1} ∈ W^{1,p}_loc(Ω; Mℓ). Noting that it makes sense to multiply the above (distributional) equation on the left by X^{−1}, we deduce that

AjAi + ∂jAi = AiAj + ∂iAj

in the space of distributions. The proof is now complete. □
6.2. Sufficient conditions. We study here the existence of solutions to a Pfaff system whose coefficients are matrix fields of class L^p_loc, p > d, in an open set Ω ⊂ R^d. We prove that this system locally has a solution if its coefficients satisfy the necessary compatibility conditions of Theorem 6.2. If in addition the set Ω is simply connected, we prove that the Pfaff system has in fact a global solution of class W^{1,p}_loc(Ω). To do so, we proceed in two steps: first we deduce from the local existence result that the Cauchy problem associated with the Pfaff system has a solution in every small enough ball contained in Ω, then we prove that these local solutions can be glued together to obtain a global solution.

The key tool in the proof of the local existence result is the approximation result for Pfaff systems established in Section 5 (the balls have to be small enough so as to fit in a cube where the coefficients of the Pfaff system can be approximated according to Theorem 5.1):
Theorem 6.3. Let Ω be an open subset of R^d and let there be given d matrix fields Ai ∈ L^p_loc(Ω; Mℓ), p > d, that satisfy the relations (for all i, j)

∂jAi − ∂iAj = AiAj − AjAi

in the space of distributions D′(Ω; Mℓ). Let there be given an open subset Ω′ ⋐ Ω, a matrix Y 0 ∈ M^{q×ℓ}, and an open ball Br = Br(x0) centered at x0 ∈ Ω′ and with radius

r < min( (1/d) dist(x0, (Ω′)^c), R(Ω′) ),

where R(Ω′) := R0(Ω′)/2, R0(Ω′) being the number defined in Corollary 5.5 (hence the function R : {Ω′ ⋐ Ω ; Ω′ open} → (0, +∞] is non-increasing, i.e., if Ω1 ⊂ Ω2 ⋐ Ω, then R(Ω1) ≥ R(Ω2)).

Then the Pfaff system

(6.2)  ∂iY = Y Ai in D′(Br; M^{q×ℓ}) for all i,
       Y (x0) = Y 0

has a solution Y ∈ W^{1,p}(Br; M^{q×ℓ}).
Proof. It is based on an approximation method: the solution of the system (6.2) is found as a limit of solutions to a sequence of Pfaff systems with smooth coefficients.

Choose a number R such that r < R < min((1/d) dist(x0, (Ω′)^c), R(Ω′)). Then B_{dR}(x0) ⋐ Ω′. Since the open cube ω_{2R} centered at x0 with edges of length 2R is contained in B_{dR}(x0), Theorem 5.1 shows that there exist sequences of matrix fields A^n_i ∈ C∞(ω_{2R}; Mℓ) ∩ L^p(ω_{2R}; Mℓ) that satisfy

∂jA^n_i − ∂iA^n_j = A^n_i A^n_j − A^n_j A^n_i in ω_{2R},
A^n_i → Ai in L^p(ω_{2R}; Mℓ) as n → ∞.
Since the coefficients A^n_i are smooth, the classical existence result on Pfaff systems (see, e.g., Thomas [17]) shows that there exists a matrix field Y^n ∈ C∞(ω_{2R}; M^{q×ℓ}) that satisfies

(6.3)  ∂iY^n = Y^n A^n_i in ω_{2R},
       Y^n(x0) = Y 0.

By the stability result of Theorem 4.1, there exists a constant C > 0 such that

‖Y^n − Y^m‖_{W^{1,p}(Br)} ≤ C ∑_i ‖A^n_i − A^m_i‖_{L^p(ω_{2R})},

which means that (Y^n) is a Cauchy sequence in the space W^{1,p}(Br; M^{q×ℓ}). Since this space is complete, there exists a field Y ∈ W^{1,p}(Br; M^{q×ℓ}) such that

Y^n → Y in W^{1,p}(Br; M^{q×ℓ}) as n → ∞.

In addition, the continuous Sobolev embedding W^{1,p}(Br; M^{q×ℓ}) ⊂ C^0(B̄r; M^{q×ℓ}) shows that

Y^n(x0) → Y (x0) in M^{q×ℓ} as n → ∞.

By passing to the limit n → ∞ in the equations of the system (6.3), we deduce that the field Y satisfies the Pfaff system (6.2). □
If, in addition to the assumptions of the previous Theorem, the set Ω is connected and simply connected, then there exists a unique global solution (i.e., a solution defined over the entire set Ω) to the Pfaff system. More specifically, we have the following global existence result:

Theorem 6.4. Let Ω ⊂ R^d be a connected and simply connected open set, let x0 ∈ Ω, and let Y 0 ∈ M^{q×ℓ}. Let there be given d matrix fields Ai ∈ L^p_loc(Ω; Mℓ), p > d, satisfying the relations (for all i, j)

∂jAi − ∂iAj = AiAj − AjAi

in the space of distributions D′(Ω; Mℓ). Then the Pfaff system

(6.4)  ∂iY = Y Ai in D′(Ω; M^{q×ℓ}) for all i,
       Y (x0) = Y 0

has one and only one solution Y ∈ W^{1,p}_loc(Ω; M^{q×ℓ}).
Proof. The uniqueness of the solution to the Pfaff system (6.4) being given by Theorem 4.2, it suffices to establish the existence of a solution to such a system.

Outline. We follow the proof of Theorem 3.1 of [12]. The idea is to define a field Y : Ω → M^{q×ℓ} by gluing together some sequences of local solutions, whose existence is established by Theorem 6.3, along curves starting from the given point x0 (see step (i) below). We shall prove that this definition is unambiguous thanks to the uniqueness result proved in Theorem 4.2 and to the simple connectedness of the set Ω (see steps (ii)-(iv)). Finally, we show in step (v) that the field Y defined in this fashion satisfies the system (6.4).
(i) Definition of a global solution to the Pfaff system (6.4) from local solutions. With any point x ∈ Ω, we associate a path γ ∈ C^0([0, 1]; Ω) joining x0 to x (i.e., γ(0) = x0 and γ(1) = x), a positive number R, and a division ∆ = {t0, t1, t2, ..., tN} of the interval [0, 1] such that

(6.5)  R < min( (1/d) dist(Im γ, Ω^c_γ), R(Ω_γ) ),
       γ(t) ∈ B_R(x_i) for all t ∈ [t_{i−1}, t_{i+1}] and all i ∈ {0, 1, ..., N},

where

0 = t_{−1} = t0 < t1 < t2 < ... < tN = t_{N+1} = 1 and x_i := γ(t_i),

R(Ω_γ) is the number defined in Theorem 6.3,

(6.6)  Ω_γ := {x ∈ R^d ; dist(x, Im γ) < ζ(γ, Ω)},

the number ζ(γ, Ω) being defined by

(6.7)  ζ(γ, Ω) = (1/2) dist(Im γ, Ω^c) if Ω ≠ R^d,
       ζ(γ, Ω) = 1 if Ω = R^d.

Note that Ω_γ ⋐ Ω since dist(Ω_γ, Ω^c) ≥ (1/2) dist(Im γ, Ω^c) in the case Ω ≠ R^d and Ω_γ is bounded in the case Ω = R^d. Therefore Ai ∈ L^p(Ω_γ; Mℓ) and thus the number R(Ω_γ) > 0 is well defined. Note also that dist(Im γ, Ω^c_γ) ≥ ζ(γ, Ω) and R < (1/d) dist(x, Ω^c_γ) for all x ∈ Im γ, inequalities which will be useful in the subsequent analysis.

The triple (γ, R, ∆) being chosen as before, we successively define the fields
Y^i := Y^i(γ, R, ∆) ∈ W^{1,p}(B_i; M^{q×ℓ}), i = 0, 1, 2, ..., N, where B_i := B_R(x_i), as the solutions to the systems

(6.8)  ∂jY^i = Y^i Aj in D′(B_i; M^{q×ℓ}),
       Y^i(x_i) = Y^{i−1}(x_i),

with the convention that Y^{−1}(x0) := Y 0. Note that this definition is correct thanks to Theorems 6.3 and 4.2, which establish the existence and uniqueness of such solutions, the assumption R < min((1/d) dist(x_i, Ω^c_γ), R(Ω_γ)) needed in the first theorem being clearly satisfied.

Finally, we define the matrix field Y by letting

(6.9)  Y (x) := Y^N(x).
We shall prove in steps (ii)-(iv) below that this definition is unambiguous, that is, that it does not depend on the choice of γ, R, and ∆.

(ii) The definition (6.9) of Y (x) does not depend on the division ∆. Let a triple (γ, R, ∆) be given as in step (i), let t∗ ∈ (t_k, t_{k+1}), and let

∆∗ = {t0, t1, ..., t_k, t∗, t_{k+1}, ..., tN}.

With the triple (γ, R, ∆), we associate the functions Y^i, i ∈ {0, 1, 2, ..., N}, solutions to the systems (6.8). In the same way, with the triple (γ, R, ∆∗), we associate the functions Y^0_∗, Y^1_∗, ..., Y^k_∗, Y∗, Y^{k+1}_∗, ..., Y^N_∗. We wish to show that Y^N(x) = Y^N_∗(x).

By the uniqueness of the solution to the system (6.8) with i = 0, 1, 2, ..., k (see Theorem 4.2), we deduce that

Y^i_∗ = Y^i in B_i for all i = 0, 1, 2, ..., k.

Let x∗ := γ(t∗) and B∗ := B_R(x∗). Since Y∗ and Y^k satisfy

∂jY∗ = Y∗ Aj in D′(B∗; M^{q×ℓ}),
∂jY^k = Y^k Aj in D′(B_k; M^{q×ℓ}),
Y∗(x∗) = Y^k_∗(x∗) = Y^k(x∗),

we infer from Theorem 4.2 that Y∗ = Y^k in B∗ ∩ B_k. In particular, Y∗(x_{k+1}) = Y^k(x_{k+1}), which next implies that Y^{k+1}_∗(x_{k+1}) = Y^{k+1}(x_{k+1}). Since Y^{k+1}_∗ and Y^{k+1} satisfy in addition the relations

∂jY^{k+1}_∗ = Y^{k+1}_∗ Aj in D′(B_{k+1}; M^{q×ℓ}),
∂jY^{k+1} = Y^{k+1} Aj in D′(B_{k+1}; M^{q×ℓ}),

we infer again from Theorem 4.2 that

Y^{k+1}_∗ = Y^{k+1} in B_{k+1}.

By the uniqueness of the solution to the system (6.8) with i = k+2, k+3, ..., N, we finally obtain that

Y^i_∗ = Y^i in B_i for all i = k+2, k+3, ..., N.

In particular, Y^N_∗ = Y^N in B_R(x).

Now, let ∆ = {t0, t1, t2, ..., tN} and ∆′ = {t′0, t′1, t′2, ..., t′M} be two divisions of the interval [0, 1] satisfying the conditions of step (i). Let ∆ ∪ ∆′ = {s0, s1, s2, ..., sP}. Beginning with ∆ and "refining" it to ∆ ∪ ∆′, we show by using the previous argument a finite number of times that

Y^P(γ, R, ∆ ∪ ∆′) = Y^N(γ, R, ∆) in B_R(x).

In the same way, but "refining" ∆′ to ∆ ∪ ∆′, we also find that

Y^P(γ, R, ∆ ∪ ∆′) = Y^M(γ, R, ∆′) in B_R(x).

Therefore, Y^N(γ, R, ∆) = Y^M(γ, R, ∆′) in B_R(x). This implies that the definition (6.9) of Y (x) does not depend on the division ∆.
(iii) The definition (6.9) of Y (x) does not depend on the number R. Let Y (x) be defined as in step (i) and let R̃ > R be a number that satisfies the inequality (6.5). For each i ∈ {0, 1, 2, ..., N}, let Ỹ^i ∈ W^{1,p}(B_{R̃}(x_i); M^{q×ℓ}) be the solution to the system

∂jỸ^i = Ỹ^i Aj in D′(B_{R̃}(x_i); M^{q×ℓ}),
Ỹ^i(x_i) = Ỹ^{i−1}(x_i),

where Ỹ^{−1}(x0) := Y 0. We now prove by a recursion argument that Ỹ^i = Y^i in B_R(x_i).

For i = 0 this is a consequence of Theorem 4.2. Assume now that Ỹ^k = Y^k in B_R(x_k) for a fixed k ∈ {0, 1, 2, ..., N−1}. Since x_{k+1} ∈ B_R(x_k) ⊂ B_{R̃}(x_k), it follows that Ỹ^k(x_{k+1}) = Y^k(x_{k+1}). By the uniqueness of the solution to the system (6.8) with i := k+1, this implies that Ỹ^{k+1} = Y^{k+1} in B_R(x_{k+1}). After a finite number of iterations, we obtain that Ỹ^N = Y^N in B_R(x). Hence the definition (6.9) of Y (x) does not depend on the number R.
(iv) The definition (6.9) of Y (x) does not depend on the path γ. Let there be given two paths γ, γ̃ ∈ C^0([0, 1]; Ω) joining x0 to x. Since Ω is simply connected, there exists a homotopy ϕ ∈ C^0([0, 1] × [0, 1]; Ω) such that

ϕ(t, 0) = γ(t), ϕ(t, 1) = γ̃(t),
ϕ(0, s) = x0, ϕ(1, s) = x.

Let a number R > 0 be fixed that satisfies

(6.10)  R < min( (1/d) inf_{s∈[0,1]} dist(Im ϕ(·, s), Ω^c_{ϕ(·,s)}), R(Ω_ϕ) ),

where R(Ω_ϕ) is the number defined in Theorem 6.3 and

Ω_ϕ := ⋃_{s∈[0,1]} Ω_{ϕ(·,s)},

where Ω_{ϕ(·,s)} is given by (6.6) with γ = ϕ(·, s). Note that such a number R exists since, on the one hand, Im ϕ ⋐ Ω and Ω_ϕ ⋐ Ω (the last inclusion being a consequence of the inequality dist(Ω_ϕ, Ω^c) ≥ (1/2) dist(Im ϕ, Ω^c) in the case Ω ≠ R^d and of the fact that Ω_ϕ is bounded in the case Ω = R^d) and, on the other hand, for all s ∈ [0, 1] we have that

dist(Im ϕ(·, s), Ω^c_{ϕ(·,s)}) ≥ (1/2) dist(Im ϕ(·, s), Ω^c) ≥ (1/2) dist(Im ϕ, Ω^c) if Ω ≠ R^d,
dist(Im ϕ(·, s), Ω^c_{ϕ(·,s)}) ≥ 1 if Ω = R^d.

Since ϕ is uniformly continuous over the compact set [0, 1] × [0, 1], there exists an integer N > 0 such that

(6.11)  |ϕ(t, s) − ϕ(t′, s′)| < R

for all (t, s), (t′, s′) ∈ [0, 1] × [0, 1] that satisfy (|t − t′|² + |s − s′|²)^{1/2} ≤ √2/N. Let t_k = s_k := k/N for all k = 0, 1, ..., N. Then one can see that Y (x) can be defined as in step (i) by means of the path γ_k := ϕ(·, s_k), of the number R, and of the division ∆ := {t0, t1, t2, ..., tN}.
In order to prove that the definition (6.9) of Y (x) does not depend on the choice of the path joining x0 to x, it suffices to prove that, for a given k ∈ {1, 2, ..., N}, the definition (6.9) based on the triple (γ_{k−1}, R, ∆) coincides with the definition (6.9) based on the triple (γ_k, R, ∆).
Let x_{k,i} := γ_k(t_i) and B_{k,i} := B_R(x_{k,i}). For each i ∈ {0, 1, 2, ..., N}, let Y^{k,i} ∈ W^{1,p}(B_{k,i}; M^{q×ℓ}) be the solution to the system (see (6.8))

(6.12)  ∂jY^{k,i} = Y^{k,i} Aj in D′(B_{k,i}; M^{q×ℓ}),
        Y^{k,i}(x_{k,i}) = Y^{k,i−1}(x_{k,i}),

where Y^{k,−1}(x_{k,0}) := Y 0. We wish to prove that Y^{k−1,N}(x) = Y^{k,N}(x).

First, notice that Y^{k−1,0} = Y^{k,0} in B_{k−1,0} ∩ B_{k,0} thanks to the uniqueness of the solution to the system (6.12) with i = 0 (see Theorem 4.2). Assume that for a fixed i ∈ {0, 1, 2, ..., N−1} we have Y^{k−1,i} = Y^{k,i} in B_{k−1,i} ∩ B_{k,i}. In particular, Y^{k−1,i}(x_{k−1,i+1}) = Y^{k,i}(x_{k−1,i+1}), which implies on the one hand that

Y^{k−1,i+1}(x_{k−1,i+1}) = Y^{k,i}(x_{k−1,i+1}).

On the other hand, Theorem 4.2 implies that Y^{k,i+1} = Y^{k,i} in the set B_{k,i+1} ∩ B_{k,i}. Since x_{k−1,i+1} belongs to this set, we obtain in particular that

Y^{k,i+1}(x_{k−1,i+1}) = Y^{k,i}(x_{k−1,i+1}).

Combining the two relations above gives

Y^{k−1,i+1}(x_{k−1,i+1}) = Y^{k,i+1}(x_{k−1,i+1}).

We then infer from Theorem 4.2 that Y^{k−1,i+1} = Y^{k,i+1} in B_{k−1,i+1} ∩ B_{k,i+1}. After N iterations, we eventually find that Y^{k−1,N} = Y^{k,N} in B_{k−1,N} ∩ B_{k,N} = B_R(x), which implies in particular that Y^{k−1,N}(x) = Y^{k,N}(x). Therefore, the definition (6.9) of Y (x) does not depend on the choice of the path γ joining x0 to x.
(v) The field Y defined in step (i) satisfies the Pfaff system (6.4). For any given point x̄ ∈ Ω, let γ ∈ C^0([0, 1]; Ω) be a path joining x0 to x̄, let

Ω♯_γ := {x ∈ R^d ; dist(x, Im γ) < (3/2)ζ(γ, Ω)},

let R > 0 be a number that satisfies

R < min( (2/(2d+1))ζ(γ, Ω), R(Ω♯_γ) ),

and let ∆ := {t0, t1, ..., tN} be a division of the interval [0, 1] that satisfies

γ(t) ∈ B_R(x_i) for all t ∈ [t_{i−1}, t_{i+1}] and all i ∈ {0, 1, ..., N},

where 0 = t_{−1} = t0 < t1 < t2 < ... < tN = t_{N+1} = 1 and x_i := γ(t_i). Remember that ζ(γ, Ω) is the number defined by (6.7), R(Ω♯_γ) is the number defined in Theorem 6.3, and note that Ai ∈ L^p(Ω♯_γ; Mℓ) since Ω♯_γ ⋐ Ω for reasons similar to those used in step (i) to prove that Ω_γ ⋐ Ω.

It is clear that Ω_γ ⊂ Ω♯_γ and R < (1/d)ζ(γ, Ω) ≤ (1/d) dist(Im γ, Ω^c_γ), hence that R < min((1/d) dist(Im γ, Ω^c_γ), R(Ω_γ)). This allows us to define the value of Y at the point x̄ by letting Y (x̄) = Y^N(x̄), where Y^N is defined as in step (i) by means of γ, R and ∆. Now, we prove that in fact Y = Y^N over the entire ball B_R(x̄).
Let x be a fixed, but otherwise arbitrary, point in B_R(x̄), and let the path γ̃ be defined by (note that γ̃ is a path joining x0 to x)

γ̃(t) = γ(2t) for all t ∈ [0, 1/2],
γ̃(t) = (2 − 2t)x̄ + (2t − 1)x for all t ∈ (1/2, 1],

and let ∆̃ := {t̃0, t̃1, ..., t̃N, t̃_{N+1}}, where t̃_i = t_i/2 for all i ∈ {0, 1, 2, ..., N} and t̃_{N+1} = 1. Using the inequalities dist(z, Im γ̃) ≤ dist(z, Im γ) ≤ dist(z, Im γ̃) + R, which are valid for all z ∈ R^d, one can prove that Ω_{γ̃} ⊂ Ω♯_γ and that dist(Im γ̃, Ω^c_{γ̃}) ≥ ζ(γ̃, Ω) ≥ ζ(γ, Ω) − R/2, hence

R < min( (1/d) dist(Im γ̃, Ω^c_{γ̃}), R(Ω_{γ̃}) ).
Therefore we can define Y (x) by means of γ̃, R and ∆̃ in the same way that Y (x̄) is defined by means of γ, R and ∆. More specifically, let x̃_i := γ̃(t̃_i). Then

Y (x) := Ỹ^{N+1}(x),

where, for all i = 0, 1, 2, ..., N+1, Ỹ^i = Ỹ^i(γ̃, R, ∆̃) ∈ W^{1,p}(B_R(x̃_i); M^{q×ℓ}) is the solution to the system

(6.13)  ∂jỸ^i = Ỹ^i Aj in D′(B_R(x̃_i); M^{q×ℓ}),
        Ỹ^i(x̃_i) = Ỹ^{i−1}(x̃_i),

where Ỹ^{−1}(x̃_0) := Y 0.

Since x̃_i = x_i for all i ∈ {0, 1, 2, ..., N}, we infer from the uniqueness result of Theorem 4.2 that Ỹ^i = Y^i in B_R(x_i) for all i ∈ {0, 1, 2, ..., N}. Hence Ỹ^N = Y^N in B_R(x_N), which implies in particular that Ỹ^N(x) = Y^N(x). On the other hand, since x = x̃_{N+1}, we have Ỹ^{N+1}(x) = Ỹ^N(x) by the second equation of (6.13). By combining these relations, we finally obtain

Y (x) = Ỹ^{N+1}(x) = Ỹ^N(x) = Y^N(x).

We have proved that the matrix field Y : Ω → M^{q×ℓ} satisfies Y = Y^N in the open ball B_R(x̄). Since Y^N belongs to the space W^{1,p}(B_R(x̄); M^{q×ℓ}) and satisfies the system

∂jY^N = Y^N Aj in D′(B_R(x̄); M^{q×ℓ}),

letting x̄ vary in the set Ω shows that Y belongs to the space W^{1,p}_loc(Ω; M^{q×ℓ}) and satisfies the system

∂jY = Y Aj in D′(Ω; M^{q×ℓ}),
Y (x0) = Y 0.

This completes the proof of the Theorem. □
The proof of Theorem 6.4, which shows how to obtain a global existence result from a local existence result and a uniqueness result, is based on a strategy that can be easily adapted to other systems of partial differential equations. In particular, this strategy can be used to prove the following generalized Poincaré theorem in simply connected sets by using the Poincaré theorem in a cube, the latter being established by Theorem 2.5. Note that this generalized Poincaré theorem was mentioned in the Introduction in relation to the solvability of a nonhomogeneous system of first order partial differential equations of type (1.4) (see Theorem 1.1).
Theorem 6.5. Let Ω be a connected and simply connected open subset of R^d. Let f1, f2, ..., fd ∈ L^p_loc(Ω), where p ≥ 1, satisfy in the distributional sense the compatibility conditions

∂ifj = ∂jfi in Ω

for all i, j ∈ {1, 2, ..., d}. Then there exists ψ ∈ W^{1,p}_loc(Ω), unique up to a constant, such that

∂iψ = fi in Ω

for all i ∈ {1, 2, ..., d}.
We end this section with a regularity result "up to the boundary" for Pfaff systems. As expected, this will require global L^p-regularity for the matrix fields Ai, but also some regularity condition on the domain Ω.

Definition 6.6. We say that Ω satisfies the uniform interior cone condition if there exists a fixed open cone ω such that, for every x ∈ Ω, there exists a cone ω_x congruent with ω, with vertex at x, such that ω_x ⊂ Ω.

Remark 6.7. All bounded open sets with Lipschitz boundary satisfy the uniform interior cone condition, but the class of open sets with this property is strictly larger. For example, the set B_R(x) \ {x} satisfies the uniform interior cone condition. The property also allows for sets with inward cusps.
Theorem 6.8. Let Ω ⊂ R^d be a connected and simply connected bounded open set that satisfies the uniform interior cone condition. Let there be given a point x0 ∈ Ω, a number p > d, and a matrix Y0 ∈ M^{q×ℓ}. Let Ai ∈ L^p(Ω; M^ℓ) be matrix fields that satisfy the relations (for all i, j)

∂jAi − ∂iAj = AiAj − AjAi

in the space of distributions D′(Ω; M^ℓ). Then the Pfaff system

(6.14)    ∂iY = Y Ai in D′(Ω; M^{q×ℓ}) for all i,
          Y(x0) = Y0,

has one and only one solution Y ∈ W^{1,p}(Ω; M^{q×ℓ}).
Proof. In this proof we drop the matrix spaces from notation such as L^p(Ω; M^ℓ), W^{1,p}(Ω; M^{q×ℓ}), etc.
By Theorem 6.4, we know that there exists a unique solution Y ∈ W^{1,p}_loc(Ω) to the problem (6.14). We only have to prove that Y belongs in fact to the space W^{1,p}(Ω).
Let ω be the cone of the uniform interior cone condition satisfied by Ω and let a point y ∈ ω be given once and for all. Let ε := dist(y, ω^c) and note that ε > 0.
Now let a point x ∈ Ω be given and choose a cone ωx with vertex x that is congruent with ω and satisfies ωx ⊂ Ω. Since ωx ⋐ Ω, we have that Y ∈ W^{1,p}(ωx). Let yx ∈ ωx be the point corresponding to y ∈ ω through the congruence of ωx with ω. By the inequality (2.2) of Corollary 2.3, applied to each component of the matrix field Y, there exists a constant C depending only on d, p, and |ω0| (ω0 being the cone of diameter 1 homothetic with ω) such that, for all z ∈ ωx,

|Y(z)| ≤ |Y(yx)| + C Dω^{1−d/p} ∑_{i=1}^d ‖∂iY‖_{L^p(ωx)}
       = |Y(yx)| + C Dω^{1−d/p} ∑_{i=1}^d ‖Y Ai‖_{L^p(ωx)}
       ≤ |Y(yx)| + C Dω^{1−d/p} ∑_{i=1}^d ‖Y‖_{L^∞(ωx)} ‖Ai‖_{L^p(ωx)}
       ≤ |Y(yx)| + C Dω^{1−d/p} ‖Y‖_{L^∞(ωx)} ∑_{i=1}^d ‖Ai‖_{L^p(Ω)},
with the notation of Corollary 2.3. Therefore

(1 − C Dω^{1−d/p} ∑_{i=1}^d ‖Ai‖_{L^p(Ω)}) ‖Y‖_{L^∞(ωx)} ≤ |Y(yx)| ≤ ‖Y‖_{L^∞(Ωε)},

where Ωε := {x ∈ Ω ; dist(x, Ω^c) > ε}. To obtain the last inequality, we used the fact that yx ∈ Ωε (since dist(yx, Ω^c) > dist(yx, ωx^c) = ε) and the inclusion Ωε ⋐ Ω, which implies that Y ∈ L^∞(Ωε).
It is obvious that if Ω satisfies the uniform interior cone condition with a fixed cone ω, then it satisfies the same condition with any smaller cone homothetic with ω. Therefore we can choose a cone ω′, smaller than ω, whose diameter Dω′ satisfies

C Dω′^{1−d/p} ∑_{i=1}^d ‖Ai‖_{L^p(Ω)} ≤ 1/2.
Then we deduce from the previous inequality that, for all x ∈ Ω,

‖Y‖_{L^∞(ω′x)} ≤ 2‖Y‖_{L^∞(Ωε′)},

where ε′ = dist(y′, ω′^c), y′ being the point in ω′ that was fixed once and for all. In particular,

|Y(x)| ≤ 2‖Y‖_{L^∞(Ωε′)}

for all x ∈ Ω, hence Y ∈ L^∞(Ω) and ‖Y‖_{L^∞(Ω)} ≤ 2‖Y‖_{L^∞(Ωε′)}.
Since ∂iY = Y Ai for all i ∈ {1, ..., d} and Ai ∈ L^p(Ω), we obtain ∂iY ∈ L^p(Ω) for all i ∈ {1, ..., d}. Therefore Y ∈ W^{1,p}(Ω). □
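A quick sanity check of Theorem 6.8 is possible in the special case of constant coefficient matrices, where the compatibility relations ∂jAi − ∂iAj = AiAj − AjAi reduce to AiAj = AjAi and the solution is the explicit matrix exponential Y(x) = Y0 exp(∑i (xi − x0i)Ai). The Python sketch below is an illustration under these simplifying assumptions (it is not the construction used in the proof); it verifies ∂iY = Y Ai by central differences.

```python
import numpy as np
from scipy.linalg import expm

# Constant, commuting coefficient matrices: both sides of the compatibility
# relations d_j A_i - d_i A_j = A_i A_j - A_j A_i vanish.
A1 = np.array([[0.0, 1.0], [0.0, 0.0]])
A2 = 2.0 * np.eye(2)          # multiples of the identity commute with everything
Y0 = np.array([[1.0, 2.0], [3.0, 4.0]])
x0 = np.zeros(2)

def Y(x):
    # Explicit solution of d_i Y = Y A_i, Y(x0) = Y0, valid when the A_i commute.
    return Y0 @ expm((x[0] - x0[0]) * A1 + (x[1] - x0[1]) * A2)

x = np.array([0.3, -0.4])
h = 1e-6
for i, Ai in enumerate((A1, A2)):
    e = np.zeros(2)
    e[i] = h
    dY = (Y(x + e) - Y(x - e)) / (2 * h)   # central-difference approximation of d_i Y
    assert np.allclose(dY, Y(x) @ Ai, atol=1e-5)
```

With noncommuting constant matrices the compatibility relations fail and no such Y exists, which is consistent with the necessity part of the existence theory.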
Remark 6.9. Since the domain ω that appears in the statement of Corollary 2.3 only needs to be a convex set, not necessarily a cone, we can replace the "uniform interior cone condition" with an apparently weaker "uniform interior convex set condition", defined as in Definition 6.6 but with the fixed cone replaced by a fixed bounded open convex set ω and with the condition that x is the vertex of ωx replaced by x ∈ ωx. However, a closer look shows that these properties are equivalent, since any bounded open convex set satisfies the uniform interior cone condition.
7. Appendix
For completeness, we give here a simple proof of the regularity result for elliptic equations with mixed boundary conditions used in this paper. Throughout the Appendix, the number d is an integer ≥ 2, given once and for all.
We begin with the following preliminary result about extensions:
Lemma 7.1. Let Ω := (−L, L) × ω and Ω̃ := (−3L, 3L) × ω, where ω is an open set in R^{d−1}, and let Γ0 ⊂ ∂ω be a relatively open subset of the boundary of ω. For any s ≥ 1, define the spaces

X_s(Ω) := {v ∈ W^{1,s}(Ω); v = 0 on (−L, L) × Γ0},
X_s(Ω̃) := {v ∈ W^{1,s}(Ω̃); v = 0 on (−3L, 3L) × Γ0},
Y_s(Ω) := {v ∈ W^{1,s}(Ω); v = 0 on ((−L, L) × Γ0) ∪ ({−L} × ω) ∪ ({L} × ω)},
Y_s(Ω̃) := {v ∈ W^{1,s}(Ω̃); v = 0 on ((−3L, 3L) × Γ0) ∪ ({−3L} × ω) ∪ ({3L} × ω)},
and define the operators S : L^s(Ω) → L^s(Ω̃) and A : L^s(Ω) → L^s(Ω̃) by letting, for all f ∈ L^s(Ω),

(Sf)(x1, x′) = f(x1, x′)         if (x1, x′) ∈ (−L, L) × ω,
(Sf)(x1, x′) = f(2L − x1, x′)    if (x1, x′) ∈ (L, 3L) × ω,
(Sf)(x1, x′) = f(−2L − x1, x′)   if (x1, x′) ∈ (−3L, −L) × ω,

and

(Af)(x1, x′) = f(x1, x′)          if (x1, x′) ∈ (−L, L) × ω,
(Af)(x1, x′) = −f(2L − x1, x′)    if (x1, x′) ∈ (L, 3L) × ω,
(Af)(x1, x′) = −f(−2L − x1, x′)   if (x1, x′) ∈ (−3L, −L) × ω.
Let p, q ≥ 1 be such that 1/p + 1/q = 1 and let f ∈ L^r(Ω), where r ≥ 1 satisfies

1/r + 1/q ≤ 1 + 1/d if q ≠ d,
r > 1 if q = d.
(i) If u ∈ X_p(Ω) satisfies the equation

∫_Ω ∇u · ∇v dx = ∫_Ω f v dx for all v ∈ X_q(Ω),

then Su belongs to the space X_p(Ω̃) and satisfies the equation

∫_{Ω̃} ∇(Su) · ∇v dx = ∫_{Ω̃} (Sf) v dx for all v ∈ X_q(Ω̃).
(ii) If u ∈ Y_p(Ω) satisfies the equation

∫_Ω ∇u · ∇v dx = ∫_Ω f v dx for all v ∈ Y_q(Ω),

then Au belongs to the space Y_p(Ω̃) and satisfies the equation

∫_{Ω̃} ∇(Au) · ∇v dx = ∫_{Ω̃} (Af) v dx for all v ∈ Y_q(Ω̃).
Proof. First note that the integrals appearing in the right-hand sides of the variational equations above are well defined, since the test functions v are of class L^{r/(r−1)} (the dual of L^r) thanks to the Sobolev embedding W^{1,q} ⊂ L^{r/(r−1)}.
(i) It is clear from the definition of partial derivatives in the distributional sense that Su ∈ W^{1,p}(Ω̃), which next implies that Su ∈ X_p(Ω̃). For each v ∈ X_q(Ω̃), let v̄, w̄ : Ω → R be defined by

v̄(x1, x′) = v(2L − x1, x′) for all (x1, x′) ∈ Ω,
w̄(x1, x′) = v(−2L − x1, x′) for all (x1, x′) ∈ Ω.
Then v̄, w̄ ∈ W^{1,q}(Ω) and, by changing variables in the integrals over the sets (L, 3L) × ω and (−3L, −L) × ω, we have

∫_{Ω̃} ∇(Su) · ∇v dx = ∫_Ω ∇u · ∇v dx + ∫_Ω ∇u · ∇v̄ dx + ∫_Ω ∇u · ∇w̄ dx.
Since v + v̄ + w̄ ∈ X_q(Ω), we next have, again by changing variables in the integrals over the sets (L, 3L) × ω and (−3L, −L) × ω, that

∫_{Ω̃} ∇(Su) · ∇v dx = ∫_Ω f (v + v̄ + w̄) dx = ∫_{Ω̃} (Sf) v dx.
(ii) Since u = 0 on {−L, L} × ω, the definition of the partial derivatives of u in the distributional sense shows that Au ∈ W^{1,p}(Ω̃), which next implies that Au ∈ Y_p(Ω̃). For each v ∈ Y_q(Ω̃), let v̄, w̄ : Ω → R be defined by

v̄(x1, x′) = −v(2L − x1, x′) for all (x1, x′) ∈ Ω,
w̄(x1, x′) = −v(−2L − x1, x′) for all (x1, x′) ∈ Ω.
Then v̄, w̄ ∈ W^{1,q}(Ω) and, by using a change of variables, we have

∫_{Ω̃} ∇(Au) · ∇v dx = ∫_Ω ∇u · ∇v dx + ∫_Ω ∇u · ∇v̄ dx + ∫_Ω ∇u · ∇w̄ dx.
Since v + v̄ + w̄ ∈ Y_q(Ω) (in particular, v + v̄ + w̄ vanishes on {−L, L} × ω), we next have, by using a change of variables,

∫_{Ω̃} ∇(Au) · ∇v dx = ∫_Ω f (v + v̄ + w̄) dx = ∫_{Ω̃} (Af) v dx. □
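The even and odd reflections S and A of Lemma 7.1 are easy to visualize in one space dimension. The sketch below is a hypothetical 1D illustration with L = 1, showing the feature the lemma exploits: the odd reflection A extends a function vanishing at ±L without creating jumps, while the even reflection S does the same for a function with vanishing derivative at ±L; in both test cases the extension coincides with a globally smooth function.

```python
import numpy as np

L = 1.0

def extend(f, x, odd=False):
    """Evaluate Sf (even reflection, odd=False) or Af (odd reflection,
    odd=True) on (-3L, 3L) for a function f given on (-L, L)."""
    s = -1.0 if odd else 1.0
    return np.where(np.abs(x) < L, f(x),
                    np.where(x > L, s * f(2 * L - x), s * f(-2 * L - x)))

x = np.linspace(-3 * L + 1e-3, 3 * L - 1e-3, 1001)

# u vanishes at +-L, so the odd reflection A glues continuously; here it even
# reproduces the globally smooth function cos(pi x / 2).
u = lambda t: np.cos(np.pi * t / 2)
assert np.allclose(extend(u, x, odd=True), np.cos(np.pi * x / 2))

# v has vanishing derivative at +-L, so the even reflection S glues in C^1;
# here it reproduces the globally smooth function sin(pi x / 2).
v = lambda t: np.sin(np.pi * t / 2)
assert np.allclose(extend(v, x, odd=False), np.sin(np.pi * x / 2))
```

This is exactly why A is used for the directions carrying a Dirichlet condition and S for those carrying a Neumann condition in the proof of Theorem 7.2 below.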
The remainder of the Appendix is dedicated to the proof of Theorem 2.9. We begin by specifying the notation used henceforth.
If Ω ⊂ R^d is a generic cuboid, i.e.,

Ω := (−a1, a1) × (−a2, a2) × ... × (−ad, ad),

we denote by Γj, where j is any number in the set {1, 2, ..., d}, the union of its two lateral faces that are orthogonal to the xj-axis, namely

Γj := (−a1, a1) × ... × (−a_{j−1}, a_{j−1}) × {−aj, aj} × (−a_{j+1}, a_{j+1}) × ... × (−ad, ad).
Note that the boundary of Ω satisfies

∂Ω = ⋃_{j=1}^d Γj.
For any real number s ≥ 1 and for any fixed subset of indices {i1, i2, ..., ik} ⊂ {1, 2, ..., d}, we define the space

(7.1)    V_s(Ω) := {v ∈ W^{1,s}(Ω); v = 0 on Γ_{i1} ∪ Γ_{i2} ∪ ... ∪ Γ_{ik}}.
Then we have the following regularity result for variational equations:
Theorem 7.2. Let ω be an open cuboid in R^d whose edges are parallel to the coordinate axes and let {i1, i2, ..., ik} ⊂ {1, 2, ..., d}. Let p, q ≥ 1 be such that 1/p + 1/q = 1 and let f ∈ L^r(ω), where r ∈ (1, +∞) satisfies 1/r + 1/q ≤ 1 + 1/d. If u ∈ V_p(ω) satisfies the variational equation

(7.2)    ∫_ω ∇u · ∇v dx = ∫_ω f v dx for all v ∈ V_q(ω),

then u ∈ W^{2,r}(ω).
Proof. Without loss of generality, we may assume that the indices appearing in the definition of V_p(ω) and V_q(ω) are i1 = 1, i2 = 2, ..., ik = k. We apply the previous lemma several times.
First, we define the extensions u1, f1 : ω1 → R of u, f : ω → R, where

ω1 := (−3a1, 3a1) × (−a2, a2) × ... × (−ad, ad),

by letting, for all x′′ := (x2, ..., xd) ∈ (−a2, a2) × ... × (−ad, ad),

u1(x1, x′′) = u(x1, x′′)           if x1 ∈ (−a1, a1),
u1(x1, x′′) = −u(2a1 − x1, x′′)    if x1 ∈ (a1, 3a1),
u1(x1, x′′) = −u(−2a1 − x1, x′′)   if x1 ∈ (−3a1, −a1),

and

f1(x1, x′′) = f(x1, x′′)           if x1 ∈ (−a1, a1),
f1(x1, x′′) = −f(2a1 − x1, x′′)    if x1 ∈ (a1, 3a1),
f1(x1, x′′) = −f(−2a1 − x1, x′′)   if x1 ∈ (−3a1, −a1).
Then Lemma 7.1(ii) shows that u1 belongs to the space V_p(ω1) and satisfies the variational equation

∫_{ω1} ∇u1 · ∇v dx = ∫_{ω1} f1 v dx for all v ∈ V_q(ω1).
Second, we define the extensions u2, f2 : ω2 → R of u1, f1 : ω1 → R, where

ω2 := (−3a1, 3a1) × (−3a2, 3a2) × (−a3, a3) × ... × (−ad, ad),

by letting, for all x1 ∈ (−3a1, 3a1) and x′′ := (x3, ..., xd) ∈ (−a3, a3) × ... × (−ad, ad),

u2(x1, x2, x′′) = u1(x1, x2, x′′)           if x2 ∈ (−a2, a2),
u2(x1, x2, x′′) = −u1(x1, 2a2 − x2, x′′)    if x2 ∈ (a2, 3a2),
u2(x1, x2, x′′) = −u1(x1, −2a2 − x2, x′′)   if x2 ∈ (−3a2, −a2),

and

f2(x1, x2, x′′) = f1(x1, x2, x′′)           if x2 ∈ (−a2, a2),
f2(x1, x2, x′′) = −f1(x1, 2a2 − x2, x′′)    if x2 ∈ (a2, 3a2),
f2(x1, x2, x′′) = −f1(x1, −2a2 − x2, x′′)   if x2 ∈ (−3a2, −a2).
Then Lemma 7.1(ii) shows that u2 belongs to the space V_p(ω2) and satisfies the variational equation

∫_{ω2} ∇u2 · ∇v dx = ∫_{ω2} f2 v dx for all v ∈ V_q(ω2).
Then, after k iterations of this argument, we define the extensions uk, fk : ωk → R of u_{k−1}, f_{k−1} : ω_{k−1} → R, where

ωk := (−3a1, 3a1) × ... × (−3ak, 3ak) × (−a_{k+1}, a_{k+1}) × ... × (−ad, ad),

by letting, for all x′ := (x1, ..., x_{k−1}) ∈ (−3a1, 3a1) × ... × (−3a_{k−1}, 3a_{k−1}) and x′′ := (x_{k+1}, ..., xd) ∈ (−a_{k+1}, a_{k+1}) × ... × (−ad, ad),

uk(x′, xk, x′′) = u_{k−1}(x′, xk, x′′)           if xk ∈ (−ak, ak),
uk(x′, xk, x′′) = −u_{k−1}(x′, 2ak − xk, x′′)    if xk ∈ (ak, 3ak),
uk(x′, xk, x′′) = −u_{k−1}(x′, −2ak − xk, x′′)   if xk ∈ (−3ak, −ak),

and

fk(x′, xk, x′′) = f_{k−1}(x′, xk, x′′)           if xk ∈ (−ak, ak),
fk(x′, xk, x′′) = −f_{k−1}(x′, 2ak − xk, x′′)    if xk ∈ (ak, 3ak),
fk(x′, xk, x′′) = −f_{k−1}(x′, −2ak − xk, x′′)   if xk ∈ (−3ak, −ak).
Then Lemma 7.1(ii) shows that uk belongs to the space V_p(ωk) and satisfies the variational equation

∫_{ωk} ∇uk · ∇v dx = ∫_{ωk} fk v dx for all v ∈ V_q(ωk).
Next, we define the extensions u_{k+1}, f_{k+1} : ω_{k+1} → R of uk, fk : ωk → R, where

ω_{k+1} := (−3a1, 3a1) × ... × (−3a_{k+1}, 3a_{k+1}) × (−a_{k+2}, a_{k+2}) × ... × (−ad, ad),

by letting, for all x′ := (x1, ..., xk) ∈ (−3a1, 3a1) × ... × (−3ak, 3ak) and x′′ := (x_{k+2}, ..., xd) ∈ (−a_{k+2}, a_{k+2}) × ... × (−ad, ad),

u_{k+1}(x′, x_{k+1}, x′′) = uk(x′, x_{k+1}, x′′)           if x_{k+1} ∈ (−a_{k+1}, a_{k+1}),
u_{k+1}(x′, x_{k+1}, x′′) = uk(x′, 2a_{k+1} − x_{k+1}, x′′)  if x_{k+1} ∈ (a_{k+1}, 3a_{k+1}),
u_{k+1}(x′, x_{k+1}, x′′) = uk(x′, −2a_{k+1} − x_{k+1}, x′′) if x_{k+1} ∈ (−3a_{k+1}, −a_{k+1}),

and

f_{k+1}(x′, x_{k+1}, x′′) = fk(x′, x_{k+1}, x′′)           if x_{k+1} ∈ (−a_{k+1}, a_{k+1}),
f_{k+1}(x′, x_{k+1}, x′′) = fk(x′, 2a_{k+1} − x_{k+1}, x′′)  if x_{k+1} ∈ (a_{k+1}, 3a_{k+1}),
f_{k+1}(x′, x_{k+1}, x′′) = fk(x′, −2a_{k+1} − x_{k+1}, x′′) if x_{k+1} ∈ (−3a_{k+1}, −a_{k+1}).
Then Lemma 7.1(i) shows that u_{k+1} belongs to the space V_p(ω_{k+1}) and satisfies the variational equation

∫_{ω_{k+1}} ∇u_{k+1} · ∇v dx = ∫_{ω_{k+1}} f_{k+1} v dx for all v ∈ V_q(ω_{k+1}).
Finally, after (d − k) iterations of this argument, we define the extensions ud, fd : ωd → R of u_{d−1}, f_{d−1} : ω_{d−1} → R, where

ωd := (−3a1, 3a1) × ... × (−3ad, 3ad),

by letting, for all x′ := (x1, ..., x_{d−1}) ∈ (−3a1, 3a1) × ... × (−3a_{d−1}, 3a_{d−1}),

ud(x′, xd) = u_{d−1}(x′, xd)           if xd ∈ (−ad, ad),
ud(x′, xd) = u_{d−1}(x′, 2ad − xd)     if xd ∈ (ad, 3ad),
ud(x′, xd) = u_{d−1}(x′, −2ad − xd)    if xd ∈ (−3ad, −ad),

and

fd(x′, xd) = f_{d−1}(x′, xd)           if xd ∈ (−ad, ad),
fd(x′, xd) = f_{d−1}(x′, 2ad − xd)     if xd ∈ (ad, 3ad),
fd(x′, xd) = f_{d−1}(x′, −2ad − xd)    if xd ∈ (−3ad, −ad).
Then Lemma 7.1(i) shows that ud belongs to the space V_p(ωd) and satisfies the variational equation

∫_{ωd} ∇ud · ∇v dx = ∫_{ωd} fd v dx for all v ∈ V_q(ωd).
This last equation implies that the function ud, which belongs to W^{1,p}(ωd), satisfies

−∆ud = fd in D′(ωd).

Since fd ∈ L^r(ωd), r ∈ (1, +∞), and since ω ⋐ ωd (i.e., the closure of ω is a compact subset of ωd), the interior regularity theory for the Laplacian (see, e.g., Dautray and Lions [6]) then shows that u = ud|_ω ∈ W^{2,r}(ω). The proof is now complete. □
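The iterated extension in this proof is easy to visualize in two dimensions on a separable example: reflect oddly across the faces carrying the Dirichlet condition (the first k directions) and evenly across the remaining faces. The Python sketch below is a hypothetical illustration with d = 2, k = 1 and a1 = a2 = 1; for the chosen u, each 1D reflected extension coincides with a globally smooth function, so the final extension introduces no singularities along the reflection interfaces.

```python
import numpy as np

def reflect(f, t, sign):
    """Reflected extension of a 1D function given on (-1, 1) to (-3, 3):
    sign = -1 gives the odd reflection, sign = +1 the even reflection."""
    return np.where(np.abs(t) < 1, f(t),
                    np.where(t > 1, sign * f(2 - t), sign * f(-2 - t)))

x = np.linspace(-2.99, 2.99, 601)
# u(x, y) = cos(pi x/2) sin(pi y/2) vanishes on the faces x = +-1 and has
# vanishing normal derivative on the faces y = +-1.
ux = reflect(lambda t: np.cos(np.pi * t / 2), x, -1.0)   # odd in x (Dirichlet)
uy = reflect(lambda t: np.sin(np.pi * t / 2), x, +1.0)   # even in y (Neumann)
u_ext = np.outer(ux, uy)                                  # extension u_d on (-3, 3)^2
u_smooth = np.outer(np.cos(np.pi * x / 2), np.sin(np.pi * x / 2))
assert np.allclose(u_ext, u_smooth)   # extension is globally smooth here
```

For a general u the extension is only W^{1,p} across the interfaces, which is exactly what Lemma 7.1 controls.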
Remark 7.3. Under the hypotheses of Theorem 7.2, one can show that the function u also satisfies the inequality

‖u‖_{W^{2,r}(ω)} ≤ C‖f‖_{L^r(ω)}

for some constant C.
An immediate consequence of Theorem 7.2 is that the solution of the variational equation (7.2) satisfies a Laplace equation supplemented with mixed boundary conditions:
Corollary 7.4. Under the assumptions of Theorem 7.2, if u satisfies the variational equation (7.2), then u is a strong solution to the boundary value problem

(7.3)    −∆u = f in ω,
         u = 0 on Γ_{i1} ∪ ... ∪ Γ_{ik},
         ∂ju = 0 on Γj for all j ∉ {i1, ..., ik}.
Proof. We know by Theorem 7.2 that u ∈ W^{2,r}(ω). By letting v ∈ D(ω) and integrating by parts in the variational equation (7.2), we deduce that

−∆u = f,

first in the distributional sense, then a.e. in ω, since f and ∆u belong to the space L^r(ω).
Since u ∈ V_p(ω), we also have

u = 0 on Γ_{i1} ∪ ... ∪ Γ_{ik}.
We now prove that the normal derivative of u on Γj vanishes for all j ∉ {i1, ..., ik}. Note first that this derivative is νj ∂ju and that it is well defined in W^{1−1/r,r}(Γj), since u ∈ W^{2,r}(ω). For all v ∈ C^∞(ω̄) that vanish in a neighborhood of the set ∂ω \ Γj, we have by the Stokes formula that

∫_{Γj} v (∂ju) νj dx1 ... dx_{j−1} dx_{j+1} ... dxd = ∫_ω ∇u · ∇v dx + ∫_ω (∆u) v dx.
Therefore

∫_{Γj} v (∂ju) νj dx1 ... dx_{j−1} dx_{j+1} ... dxd = 0,

since u satisfies the variational equation (7.2) and v ∈ V_q(ω). This shows that

νj ∂ju = 0 on Γj,

first in the distributional sense, but then also in W^{1−1/r,r}(Γj), since νj ∂ju belongs to this space. Thus ∂ju = 0 on Γj, since νj = ±1 on Γj. Therefore u satisfies all the equations of the boundary value problem (7.3). □
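The mixed boundary value problem (7.3) can also be checked numerically. The Python sketch below is an illustrative finite-difference discretization on the unit square (an assumption for the test, not part of the paper): the Dirichlet condition is imposed on the faces x1 = 0, 1 by restricting to interior vertices, the homogeneous Neumann condition on x2 = 0, 1 by a mirror ghost cell, and a manufactured exact solution is recovered to second-order accuracy.

```python
import numpy as np

def solve_mixed_poisson(n=32):
    # -Laplace(u) = f on (0,1)^2 with u = 0 on {x = 0, 1} (Dirichlet)
    # and du/dn = 0 on {y = 0, 1} (Neumann), as in problem (7.3).
    h = 1.0 / n
    x = h * np.arange(1, n)             # interior vertices (Dirichlet in x)
    y = h * (np.arange(n) + 0.5)        # cell centres (Neumann in y)
    Dx = (np.diag(2.0 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
          - np.diag(np.ones(n - 2), -1)) / h**2
    Ny = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
          - np.diag(np.ones(n - 1), -1)) / h**2
    Ny[0, 0] = Ny[-1, -1] = 1.0 / h**2  # mirror ghost cell: zero Neumann flux
    A = np.kron(Dx, np.eye(n)) + np.kron(np.eye(n - 1), Ny)
    X, Y = np.meshgrid(x, y, indexing="ij")
    u_exact = np.sin(np.pi * X) * np.cos(np.pi * Y)   # satisfies both conditions
    f = 2 * np.pi**2 * u_exact                        # f = -Laplace(u_exact)
    u = np.linalg.solve(A, f.ravel()).reshape(n - 1, n)
    return np.max(np.abs(u - u_exact))

print(solve_mixed_poisson())   # maximum error, O(h^2)
```

The manufactured solution u(x, y) = sin(πx)cos(πy) vanishes on x = 0, 1 and has vanishing normal derivative on y = 0, 1, so it lies exactly in the class covered by Corollary 7.4.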
Acknowledgements
This work is part of the French-Romanian Research Project PICS 3450 "Mathématiques et Applications", financed by the CNRS. It was initiated during my visit at the City University of Hong Kong in July 2005 at the invitation of Professor Philippe Ciarlet, and was completed during my stay at the University of Zurich as a postdoctoral fellow in the team of Professor Michel Chipot, under the contracts #20-105155/1 and #20-113287/1 supported by the Swiss National Science Foundation.
References
1. Adams R.A., Sobolev Spaces, Academic Press, 1975.
2. Barden D., Thomas C., An Introduction to Differential Manifolds, Springer-Verlag, Berlin, 1985.
3. Bleecker D., Gauge Theory and Variational Principles, Addison-Wesley Publishing Company, 1981.
4. Bourgain J., Brezis H., Mironescu P., Lifting in Sobolev spaces, J. Anal. Math. 80, 2000, 37-86.
5. Cartan E., Sur la possibilité de plonger un espace riemannien donné dans un espace euclidien, Ann. Soc. Polon. Math. 6, 1927, 1-7.
6. Dautray R., Lions J.-L., Mathematical Analysis and Numerical Methods for Science and Technology, Vol. 1, 4, Springer-Verlag, Berlin, 1990.
7. Deimling K., Nonlinear Functional Analysis, Springer-Verlag, Berlin, 1985.
8. Gilbarg D., Trudinger N.S., Elliptic Partial Differential Equations of Second Order, Springer-Verlag, Berlin, 1983.
9. Hartman P., Wintner A., On the fundamental equations of differential geometry, Amer. J. Math. 72, 1950, 757-774.
10. Jost J., Riemannian Geometry and Geometric Analysis, Springer-Verlag, Berlin Heidelberg, 1995.
11. Malliavin P., Géométrie Différentielle Intrinsèque, Hermann, Paris, 1972.
12. Mardare S., The fundamental theorem of surface theory with little regularity, Journal of Elasticity 73, 2003, 251-290.
13. Mardare S., On Pfaff systems with Lp coefficients and their applications in differential geometry, J. Math. Pures Appl. 84, 2005, 1659-1692.
14. Maz'ja V.G., Sobolev Spaces, Springer-Verlag, Berlin, 1985.
15. Schwartz L., Analyse II: Calcul Différentiel et Équations Différentielles, Hermann, Paris, 1992.
16. Taylor M.E., Partial Differential Equations I. Basic Theory, Springer-Verlag, New York, 1996.
17. Thomas T.Y., Systems of total differential equations defined over simply connected domains, Ann. of Math. 35, 1934, 730-734.
Institut für Mathematik, Universität Zürich, Winterthurerstrasse 190, CH-8057 Zürich, Switzerland
E-mail address: [email protected]