
MATRIX REPRESENTATIONS OF POLYNOMIAL OPERATORS

by P. M. Prenter (*) (Fort Collins, U. S. A.)

SUMMARY. Let H be a separable Hilbert space. Every bounded, n-linear operator L on H^n to H (n = 0, 1, 2, ...) is shown to have a unique matrix representation with respect to each complete orthonormal sequence {φ_k}_{k=1}^∞. Conversely, every operator on H^n to H possessing a matrix representation is proved to be a bounded, n-linear operator. The foregoing conclusions then apply to polynomial operators P on H to H where

P x = L_0 + L_1 x + L_2 x^2 + ... + L_n x^n

and each L_k is a k-linear operator.

1. INTRODUCTION

It is well known that if H is a real separable Hilbert space with a Schauder basis {φ_i : i = 1, 2, 3, ...} and if A is a continuous linear operator on H into H, then A has a matrix representation (a_ij) with respect to {φ_i} given by a_ij = (A φ_j, φ_i), where ( , ) denotes inner product. Furthermore, every linear operator A having a matrix representation (a_ij) must be a continuous linear operator. Thus, solving the equation A x = y in the Hilbert space H, where A is a bounded linear operator, is equivalent to solving the infinite system of linear equations

(1.1) y_i = Σ_{j=1}^∞ a_ij x_j,  i = 1, 2, ...

(*) Sponsored by the Mathematics Research Center, United States Army, Madison, Wisconsin, under Contract No. DA-31-124-ARO-D-462.


in l^2. Such systems of equations and their relation to the finite subsystems

(1.2) y_i = Σ_{j=1}^N a_ij x_j,  i = 1, 2, ..., N

were studied extensively by Hilbert and his followers. The related linear systems (1.1) and (1.2) also arise in the solution of A x = y via projectional or variational methods such as least squares and Galerkin procedures.
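To make the role of the finite subsystems (1.2) concrete, the following sketch (not from the paper; the entries a_ij and data y_i below are a hypothetical, rapidly decaying choice) solves an N × N truncation numerically:

```python
import numpy as np

def truncated_solve(a, y, N):
    """Solve the N x N subsystem y_i = sum_{j=1}^N a(i, j) x_j of (1.2);
    indices are 1-based to match the notation of the text."""
    A = np.array([[a(i, j) for j in range(1, N + 1)] for i in range(1, N + 1)])
    b = np.array([y(i) for i in range(1, N + 1)])
    return np.linalg.solve(A, b)

# Hypothetical entries: a_ij = delta_ij + 2^-(i+j), a small perturbation of
# the identity, with data y_i generated from the exact solution x_j = 2^-j.
a = lambda i, j: (1.0 if i == j else 0.0) + 2.0 ** -(i + j)
x_exact = lambda j: 2.0 ** -j
y = lambda i: sum(a(i, j) * x_exact(j) for j in range(1, 200))  # negligible tail

x8 = truncated_solve(a, y, 8)
# As N grows the truncated solutions converge to the exact coefficients.
```

Because the off-diagonal entries decay geometrically, even the N = 8 truncation recovers the leading coefficients to several digits.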

The study of solutions of infinite systems of equations and their finite subsystems is, of course, not restricted to linear equations in a Hilbert space H. A number of mathematicians have recently been trying to adapt such variational methods to solving nonlinear equations in Hilbert or Banach spaces. The papers of Cesari [2] and his student Locker [6], of Urabe [19, 18], and of E. Hopf [4] are of particular interest. Among the nonlinear operators in a normed linear space the polynomial operators are perhaps the simplest. Such operators, through the vehicle of matrix representations, can give rise to infinite polynomial systems

(1.3) y_i = a_i + Σ_{j_1=1}^∞ a_{i,j_1} x_{j_1} + ... + Σ_{j_1,j_2,...,j_k=1}^∞ a_{i,j_1,j_2,...,j_k} x_{j_1} x_{j_2} ... x_{j_k} + ...

where i = 1, 2, .... Sufficient conditions for the convergence of direct iterative techniques for the solution of (1.3) and of its finite subsystems

(1.4) y_i = a_i + Σ_{j_1=1}^N a_{i,j_1} x_{j_1} + ... + Σ_{j_1,j_2,...,j_k=1}^N a_{i,j_1,j_2,...,j_k} x_{j_1} x_{j_2} ... x_{j_k},  i = 1, 2, ..., N,

and the interrelation of their solutions, are treated by Marcus in his papers [7] and [8] in this journal (1962, 1964). Marcus addressed himself strictly to systems (1.3) and (1.4). A reformulation of Marcus' theory in the language of polynomial operators is outlined in the paper [14], assuming some characterizations of those operators which have matrix representations. The same paper also reviews some other specialized iterative techniques for solving polynomial operator equations due to Rall [15] and [16], and to McFarland [9]. The purpose of the present paper is to prove that each continuous polynomial operator P on a real separable Hilbert space H into H has a matrix representation and that each polynomial operator endowed with a matrix representation is continuous. Such theorems generalize to l^p spaces, 1 ≤ p ≤ ∞, in a manner analogous to the linear theory. Importance attaches to such representations for the same reason importance attaches to the original linear theory of Hilbert.


In particular, matrix representations give rise to polynomial systems (1.3) and (1.4) for which some computational algorithms exist. It is also important to remark that polynomial equations from a normed linear space X to a normed linear space Y encompass a broad spectrum of applied problems, including all linear equations. Among these are the Riccati differential equation, the Navier-Stokes equations, the Chandrasekhar equations of radiative transfer [3], and a wide class of Hammerstein integral equations. We start by defining these simple nonlinear operators.
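Systems of the form (1.4) can, under smallness conditions of the kind Marcus imposes, be solved by direct iteration. Here is a minimal sketch (the coefficients are hypothetical, chosen small enough that the fixed-point map is a contraction), for a purely quadratic system rewritten as x = c + B(x, x):

```python
import numpy as np

def solve_quadratic_system(c, B, tol=1e-12, max_iter=200):
    """Direct iteration x^(m+1) = c + B(x^m, x^m) for the finite quadratic
    system x_i = c_i + sum_{j1,j2} B[i, j1, j2] x_{j1} x_{j2}; it converges
    when the quadratic part is a sufficiently strong contraction near c."""
    x = np.zeros_like(c)
    for _ in range(max_iter):
        x_new = c + np.einsum('ijk,j,k->i', B, x, x)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    raise RuntimeError("direct iteration did not converge")

# Hypothetical data: constant, small quadratic coefficients.
B = 0.05 * np.ones((3, 3, 3))
c = np.array([0.3, -0.2, 0.1])
x = solve_quadratic_system(c, B)
residual = x - (c + np.einsum('ijk,j,k->i', B, x, x))
```

The residual of the computed fixed point is at the level of the stopping tolerance; for large coefficients the same iteration can diverge, which is exactly why sufficient conditions of the Marcus type matter.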

2. POLYNOMIAL OPERATORS

Let X and Y be linear spaces over the field of real (complex) numbers. For each n = 1, 2, ..., let X^n denote the direct product of X with itself n times. That is,

X^n = {(x_1, x_2, ..., x_n): x_i ∈ X, i = 1, 2, ..., n}.

An n-linear operator L on X^n into Y is a function L(x_1, x_2, ..., x_n) which is linear and homogeneous in each of its arguments separately. That is, fixing each coordinate x_i of the n-tuple (x_1, x_2, ..., x_n) except the k-th, one obtains a linear operator in the variable x_k on the space X into the space Y. A 0-linear operator L_0 on X is a constant function on X into Y. We shall identify a 0-linear operator L_0 with its range, so that L_0 x = L_0 for all x ∈ X. In the event L is n-linear, n = 1, 2, ..., and x_1 = x_2 = ... = x_n = x, we adopt the notation

L(x_1, x_2, ..., x_n) = L x^n.

For each k = 0, 1, 2, ..., let L_k be a k-linear operator on X to Y. The operator P on X to Y given by

P x = L_0 + L_1 x + L_2 x^2 + ... + L_n x^n

is called a polynomial operator of degree n on X. Polynomial operators are in general nonlinear and are among the simplest of the nonlinear operators. This nonlinearity is obvious in the case of polynomials of degree zero and is simply illustrated when n = 2 by the quadratic P x = L_2 x^2 through the equation

P(x + y) = L_2(x + y, x + y) = L_2 x^2 + L_2 x y + L_2 y x + L_2 y^2.


Clearly P(x + y) = P x + P y iff L_2 x y = -L_2 y x. Polynomial operators are a direct generalization of ordinary polynomials in n real (complex) variables to a linear space setting. Although a comprehensive theory of these operators is not yet available, some appealing analogies to the theory of ordinary polynomials in n real (complex) variables have been observed. In particular, L. B. Rall [15] has investigated quadratic equations (polynomials of degree 2) and found a quadratic formula in Banach space; the Weierstrass Theorem has been proved [11, 13] for polynomial operators on a separable Hilbert space; and J. R. Phillips [10] has drawn upon the theory of compact, self-adjoint operators on a separable Hilbert space H to obtain eigenfunction expansions for a certain class of self-adjoint quadratic operators (2nd degree) on H. The interested reader is referred to the bibliography for a list of that work done on polynomial operators that is familiar to the author.
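In coordinates the quadratic identity above is easy to check numerically. The sketch below uses a hypothetical bilinear map L_2 on R^2 (not an example from the paper) and verifies that P(x + y) differs from P x + P y by exactly L_2 x y + L_2 y x:

```python
import numpy as np

# A hypothetical bilinear operator L2 on R^2 x R^2 -> R^2, stored as a
# tensor: L2(x, y)_i = sum_{j,k} T[i, j, k] x_j y_k.
T = np.array([[[1.0, 2.0], [0.0, 1.0]],
              [[0.5, 0.0], [1.0, -1.0]]])

def L2(x, y):
    return np.einsum('ijk,j,k->i', T, x, y)

def P(x):                      # the quadratic operator P x = L2 x^2
    return L2(x, x)

x = np.array([1.0, 2.0])
y = np.array([-1.0, 3.0])

# P(x + y) = P x + L2(x, y) + L2(y, x) + P y, so P is additive only
# when L2(x, y) = -L2(y, x) for all x, y.
lhs = P(x + y)
rhs = P(x) + L2(x, y) + L2(y, x) + P(y)
```

Since this particular T is not antisymmetric in its last two indices, P(x + y) visibly differs from P x + P y.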

Let ℒ_n(X, Y) denote the family of n-linear operators from X to Y and let 𝒫_n(X, Y) denote the family of polynomials of degree n from X to Y. Clearly, each of the families ℒ_n(X, Y) and 𝒫_n(X, Y) is a linear space with addition and scalar multiplication defined by:

(2.1) (S + T)(z) = S z + T z,  (a S)(z) = a (S z),

for each z ∈ X^n and each a in the scalar field of X when S, T ∈ ℒ_n(X, Y), and

(2.2) (S + T)(x) = S x + T x,  (a S)(x) = a (S x),

for each x ∈ X when S, T ∈ 𝒫_n(X, Y).

Examples of polynomials abound. Many of the equations of elasticity theory are of this type, the Chandrasekhar equation of radiative transfer [3] is quadratic, and all of the examples worked on in the papers of the Cesari school [2, 6] are quadratics or cubics. We give several examples here and leave the rest to the reader's experience and imagination.

Example 1. In the case of ordinary differential equations, the famous Riccati equation

dy/dt + a(t) y + b(t) y^2 = c(t),  y(0) = c,

is quadratic. Assuming the coefficient functions a, b, c to be continuous, the operator P defined by

P(y) = L_2 y^2 + L_1 y + L_0,

where L_2(y, z) = b(t) y(t) z(t), L_1 y = a(t) y + y', and L_0 = -c(t), may be regarded as a quadratic operator from the space C^1[0, a] into the space C[0, a]. Clearly P(y) = 0 is the Riccati equation above.
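Example 1 can be sampled numerically. In the sketch below the coefficients are the hypothetical choices a(t) = 0, b(t) = 1, c(t) = 0, for which y(t) = 1/(t + 1) solves the Riccati equation; the discretized P(y) should then vanish up to truncation error:

```python
import numpy as np

# Hypothetical coefficients a(t) = 0, b(t) = 1, c(t) = 0: the Riccati
# equation becomes y' + y^2 = 0, solved by y(t) = 1/(t + 1), y(0) = 1.
t = np.linspace(0.0, 1.0, 2001)

def P(y):
    """P(y) = L2 y^2 + L1 y + L0 on the grid, with L2(y, z) = b y z,
    L1 y = a y + y', L0 = -c; y' is a second-order finite difference."""
    a_t, b_t, c_t = 0.0, 1.0, 0.0
    return b_t * y * y + a_t * y + np.gradient(y, t, edge_order=2) - c_t

y = 1.0 / (t + 1.0)
res = P(y)   # vanishes up to discretization error for the exact solution
```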

Example 2. The Hammerstein integral equations provide us with a second example. Let k(s, t_1, t_2, t_3) be a square integrable function of four variables for which

∫_0^1 ∫_0^1 ∫_0^1 ∫_0^1 |k(s, t_1, t_2, t_3)|^2 dt_1 dt_2 dt_3 ds < ∞.

Then

C(x_1, x_2, x_3) = ∫_0^1 ∫_0^1 ∫_0^1 k(s, t_1, t_2, t_3) x_1(t_1) x_2(t_2) x_3(t_3) dt_1 dt_2 dt_3

is a 3-linear operator on (L^2[0, 1])^3 to L^2[0, 1]. Let y ∈ L^2[0, 1] and let λ be a scalar. The equation

(2.3) C x^3 - λ x = y

is a cubic Fredholm integral equation on L^2[0, 1] into L^2[0, 1].
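Discretizing the integrals by quadrature turns C into a finite-dimensional 3-linear operator. Here is a sketch with a hypothetical separable kernel k(s, t_1, t_2, t_3) = s t_1 t_2 t_3, for which C(x_1, x_2, x_3)(s) = s ∏_m (∫_0^1 t x_m(t) dt) in closed form:

```python
import numpy as np

n = 201
t = np.linspace(0.0, 1.0, n)
h = t[1] - t[0]
w = np.full(n, h); w[0] = w[-1] = h / 2.0   # trapezoidal quadrature weights

def C(x1, x2, x3, s):
    """Quadrature evaluation of the 3-linear operator of Example 2 for the
    separable kernel k = s*t1*t2*t3; separability collapses the triple
    integral into three single moments."""
    m1, m2, m3 = (float(w @ (t * x)) for x in (x1, x2, x3))
    return s * m1 * m2 * m3

x = np.ones(n)             # x(t) = 1, each moment is integral_0^1 t dt = 1/2
value = C(x, x, x, s=1.0)  # exact value 1 * (1/2)^3 = 0.125
```

The trapezoidal rule is exact for the linear integrand t · 1, so the quadrature reproduces the closed-form value essentially to machine precision.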

Example 3. The Navier-Stokes equations provide us with a polynomial partial differential equation which is quadratic. Here the problem is to find a velocity field u = u(x, t) and a pressure field p = p(x, t) which, for x = (x_1, x_2, ..., x_n) ∈ A (A a bounded, simply connected region in R^n) and for t > 0, satisfy the differential equations

(2.4) u_t + u · grad u = -grad p + Δu,  div u = 0,

where Δ is the Laplacian, div is the divergence, and u(x, t) satisfies the initial conditions

u(x, 0) = u_0(x),  x ∈ A,

and the boundary condition

u(x, t) = 0,  x ∈ boundary of A.

Letting L_2(u, v) = u · grad v and L_1 u = u_t - Δu, it is apparent that (2.4) is a quadratic equation in u.


3. BOUNDED POLYNOMIAL OPERATORS

In the event X and Y carry topologies, one can speak of continuous n-linear operators and of continuous polynomial operators. To this end let X and Y be normed linear spaces and let X^n carry the product topology. A basic open set U(z) about z ∈ X^n is a direct product Π_{i=1}^n S(ε_i, x_i) of neighborhoods of the coordinates x_i of z = (x_1, x_2, ..., x_n), where S(ε_i, x_i) = {y_i ∈ X: ||x_i - y_i|| < ε_i}. We define an ε-neighborhood U(ε, z) of z ∈ X^n to be the direct product Π_{i=1}^n S(ε, x_i). Clearly z' = (x'_1, x'_2, ..., x'_n) ∈ U(ε, z) iff max {||x_i - x'_i||: i = 1, 2, ..., n} < ε. Furthermore, X^n is a linear space with a metric d where, for each z = (x_1, x_2, ..., x_n) and z' = (x'_1, x'_2, ..., x'_n) in X^n,

z + z' = (x_1 + x'_1, ..., x_n + x'_n),

a z = (a x_1, a x_2, ..., a x_n),

d(z, z') = max {||x_i - x'_i||: i = 1, 2, ..., n}.

The metric d induces a norm || ||_max on X^n through the equation ||z||_max = d(z, θ). Here θ is the identity in X^n (that is, θ = (x_1, x_2, ..., x_n) where x_1 = x_2 = ... = x_n = 0). It can be shown that the topology generated on X^n by the norm || ||_max is identical to the product topology on X^n and that X^n together with this topology is a topological linear space. In the event X is a Banach space, it follows that X^n is also a Banach space with respect to the norm || ||_max.

Let T be an n-linear operator on X^n to Y. T is said to be bounded if there exists a positive constant M such that

||T(x_1, ..., x_n)|| ≤ M ||x_1|| ||x_2|| ... ||x_n||

for all (x_1, x_2, ..., x_n) ∈ X^n. In the event T is 0-linear, we say T is bounded if ||T x|| ≤ M for all x ∈ X. Thus, every 0-linear operator is always bounded. One can then quite easily prove

Theorem 3.1. For each n = 0, 1, 2, ..., let T be an n-linear operator. Then

a) T is bounded iff T is continuous,

b) T is bounded iff T is continuous at θ,

c) T is continuous iff T is continuous at θ.

Page 7: Matrix representations of polynomial operators

MATRIX REPRESENTATIONS OF POLYNOMIAL OPERATORS 1 0 ~

We say that a polynomial operator P

P x = L_0 + L_1 x + L_2 x^2 + ... + L_n x^n

of degree n is bounded iff each of its component k-linear operators L_0, L_1, ..., L_n is bounded. It follows that Theorem 3.1 also applies to polynomial operators P. Clearly unbounded polynomials exist. For example, any unbounded linear operator is an unbounded polynomial of degree one.

If T is a bounded, n-linear operator, we define the norm ||T|| of T through the equation

(3.1) ||T|| = inf {M: ||T(x_1, x_2, ..., x_n)|| ≤ M ||x_1|| ||x_2|| ... ||x_n|| for all (x_1, x_2, ..., x_n) ∈ X^n}.

It is then simple to prove

Theorem 3.2. Let T be a bounded n-linear operator. Then

||T|| = sup {||T(x_1, x_2, ..., x_n)||: ||x_i|| ≤ 1, i = 1, 2, ..., n}
      = sup {||T(x_1, x_2, ..., x_n)||: ||x_i|| = 1, i = 1, 2, ..., n}.
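For a finite-dimensional bilinear operator, Theorem 3.2 suggests a crude numerical estimate of ||T||: sample pairs of unit vectors and take the largest value of ||T(x_1, x_2)||. A sketch (the tensor is hypothetical, and random search yields only a lower bound on the supremum):

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 3, 3))   # hypothetical bilinear operator on R^3

def apply_T(x1, x2):
    # T(x1, x2)_i = sum_{j,k} T[i, j, k] x1_j x2_k
    return np.einsum('ijk,j,k->i', T, x1, x2)

def sampled_sup(samples=5000):
    """Sample sup ||T(x1, x2)|| over ||x1|| = ||x2|| = 1 (cf. Theorem 3.2);
    random search gives a lower bound on the true norm ||T||."""
    best = 0.0
    for _ in range(samples):
        x1 = rng.standard_normal(3); x1 /= np.linalg.norm(x1)
        x2 = rng.standard_normal(3); x2 /= np.linalg.norm(x2)
        best = max(best, float(np.linalg.norm(apply_T(x1, x2))))
    return best

est = sampled_sup()
# By bilinearity any nonzero pair rescales to the unit sphere, so
# ||T(x1, x2)|| <= ||T|| * ||x1|| * ||x2|| with ||T|| estimated from below here.
```

By the Cauchy-Schwarz inequality the estimate can never exceed the Frobenius norm of the tensor, which provides a cheap sanity check.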

Let ℬ_n(X, Y) denote the family of bounded, n-linear operators from X^n to Y together with addition and scalar multiplication defined by (2.1) and (2.2). In the event Y is a Banach space, one can easily show that ℬ_n(X, Y) together with the norm defined by (3.1) is itself a Banach space. An analogous statement holds for the space of bounded polynomial operators.

A function f from a topological space Z into the real numbers R (complex numbers C) is said to be bounded at a point x_0 if there exists an open neighborhood U(x_0) and a positive constant K such that |f(x)| ≤ K for all x ∈ U(x_0). One can easily prove

Theorem 3.3. Let T be an n-linear operator. Then for all n ≥ 1

a) T is bounded at z ∈ X^n iff T is bounded at θ,

b) T is bounded at z ∈ X^n iff T is a bounded operator.

4. MATRIX REPRESENTATIONS OF POLYNOMIALS

Let A be an operator on a separable Hilbert space H and let {φ_i}_{i=1}^∞ be a complete, orthonormal basis for H. Then A is said to have a matrix representation (a_ij) with respect to {φ_i}_{i=1}^∞ if for each x ∈ H

A x = Σ_{i=1}^∞ d_i φ_i


where

d_i = Σ_{j=1}^∞ a_ij x_j

and

x = Σ_{j=1}^∞ x_j φ_j.

It is well known that every continuous, linear operator has a matrix representation and that any linear operator having a matrix representation is continuous.

Now let A be an operator on H^k to H where k ≥ 1. Then A is said to have a matrix representation (a_{j,i_1,i_2,...,i_k}) with respect to {φ_i}_{i=1}^∞ if, for each k elements x, y, ..., z in H,

A(x, y, ..., z) = Σ_{j=1}^∞ d_j φ_j

where

d_j = Σ {a_{j,i_1,i_2,...,i_k} x_{i_k} y_{i_{k-1}} ... z_{i_1} : i_1, i_2, ..., i_k = 1, 2, ...}

and

x = Σ_{i=1}^∞ x_i φ_i,  y = Σ_{i=1}^∞ y_i φ_i,  ...,  z = Σ_{i=1}^∞ z_i φ_i.

Then it is a simple matter to prove:

Theorem 4.1. Let H be a separable Hilbert space with a complete, orthonormal basis {φ_i}_{i=1}^∞. Then

a) every continuous k-linear operator A, k = 1, 2, ..., has a unique matrix representation (a_{j,i_1,i_2,...,i_k}) with respect to {φ_i}_{i=1}^∞, where j, i_1, i_2, ..., i_k = 1, 2, ..., and

b) a_{j,i_1,i_2,...,i_k} = (A φ_{i_k} φ_{i_{k-1}} ... φ_{i_1}, φ_j), where ( , ) denotes inner product.


Proof. Let x^1, x^2, ..., x^k be any k elements of H. Let

x^i_N = Σ_{j=1}^N (x^i, φ_j) φ_j

for each i = 1, 2, ..., k, and let

a_{j,i_1,i_2,...,i_k} = (A φ_{i_k} φ_{i_{k-1}} ... φ_{i_1}, φ_j).

Then

A x^1_N x^2_N ... x^k_N = Σ_{i_1,i_2,...,i_k=1}^N x^1_{i_1} x^2_{i_2} ... x^k_{i_k} A φ_{i_1} φ_{i_2} ... φ_{i_k}

(here x^n_i = (x^n, φ_i)). It follows that

(A x^1_N x^2_N ... x^k_N, φ_j) = Σ_{i_1,i_2,...,i_k=1}^N a_{j,i_k,i_{k-1},...,i_1} x^1_{i_1} x^2_{i_2} ... x^k_{i_k}.

Since A is continuous,

lim_{N→∞} A x^1_N x^2_N ... x^k_N = A x^1 x^2 ... x^k

and

(A x^1 x^2 ... x^k, φ_j) = lim_{N→∞} Σ_{i_1,i_2,...,i_k=1}^N a_{j,i_k,i_{k-1},...,i_1} x^1_{i_1} x^2_{i_2} ... x^k_{i_k} = Σ_{i_1,i_2,...,i_k=1}^∞ a_{j,i_k,i_{k-1},...,i_1} x^1_{i_1} x^2_{i_2} ... x^k_{i_k}

converges for all x^1, x^2, ..., x^k in H. Letting d_j = (A x^1 x^2 ... x^k, φ_j), we see that A has a matrix representation with respect to {φ_i}_{i=1}^∞. Uniqueness follows simply from the uniqueness of (A φ_{i_1} φ_{i_2} ... φ_{i_k}, φ_j). This completes the proof of the theorem.
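In the finite-dimensional model H = l^2(3), part b) of Theorem 4.1 can be checked directly for a bilinear (k = 2) operator: the entries a_{j,i_1,i_2} = (A φ_{i_2} φ_{i_1}, φ_j) reproduce A(x, y) through the double sum defining d_j. A sketch with a hypothetical operator and the standard basis:

```python
import numpy as np

dim = 3
rng = np.random.default_rng(2)
T = rng.standard_normal((dim, dim, dim))   # hypothetical bilinear A on l^2(3)

def A(x, y):
    return np.einsum('ijk,j,k->i', T, x, y)

basis = np.eye(dim)                        # phi_1, ..., phi_3

# a_{j,i1,i2} = (A(phi_{i2}, phi_{i1}), phi_j), as in Theorem 4.1 b) for k = 2.
a = np.empty((dim, dim, dim))
for j in range(dim):
    for i1 in range(dim):
        for i2 in range(dim):
            a[j, i1, i2] = A(basis[i2], basis[i1]) @ basis[j]

x = np.array([1.0, -2.0, 0.5])
y = np.array([0.3, 0.0, 4.0])

# d_j = sum_{i1,i2} a_{j,i1,i2} x_{i2} y_{i1} reconstructs (A(x, y), phi_j).
d = np.einsum('jab,b,a->j', a, x, y)
```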

Every continuous k-linear operator has a matrix representation. Does every infinite matrix (a_{j,i_1,i_2,...,i_k}) represent a continuous k-linear operator on H^k to H, or even a non-continuous k-linear operator on H^k to H? The answer to this question is clearly "no", since counter-examples are easily constructed.

A somewhat more subtle conjecture which does have an affirmative answer asks whether every operator on H^k to H, k ≥ 1, having a matrix representation is a continuous, k-linear operator. Well known proofs of this theorem in the case k = 1 are based either on Landau's Theorem combined with the closed


graph theorem, or on properties of the adjoint of a linear operator, or on convexity arguments. It is a simple matter to adapt the theory for the linear case to the k-linear case to prove

Theorem 4.2. Let H be a complete, separable Hilbert space. Every operator L on H^k to H possessing a matrix representation (a_{j,i_1,i_2,...,i_k}) is a continuous k-linear operator.

We shall prove this theorem through a sequence of lemmas using mathematical induction. First recall that if Σ_{k=1}^∞ a_k b_k converges for all (b_k) ∈ l^p, p ≥ 1, then Landau's Theorem states that (a_k) ∈ l^q where 1/p + 1/q = 1. If L has a matrix representation (a_{j,i_1,i_2,...,i_k}), then for each x = (x^1, x^2, ..., x^k) ∈ H^k

(1) L x = Σ_{j=1}^∞ Ω_j(x) φ_j.

If x^n = Σ_{i=1}^∞ x^n_i φ_i, n = 1, 2, ..., k, then

(2) Ω_j(x) = Σ_{i_1,i_2,...,i_k=1}^∞ a_{j,i_1,i_2,...,i_k} x^1_{i_k} x^2_{i_{k-1}} ... x^k_{i_1}.

It is well known [17] that Theorem 4.2 is true when n = 1. We now make the induction hypothesis that Theorem 4.2 is true when n ≤ k - 1.

With this assumption we are able to prove

Lemma 4.3. Let (a_{j,i_1,i_2,...,i_k}) be a matrix representation of an operator L from H^k to H where L x = Σ_{j=1}^∞ Ω_j(x) φ_j and each Ω_j is given by equation (2). Then each Ω_j is a continuous k-linear functional.

Proof. It is obvious that each Ω_j is k-linear. For fixed j let

c = (c_{i_1}: i_1 = 1, 2, ...)

be given by

c_{i_1} = Σ_{i_2,i_3,...,i_k=1}^∞ a_{j,i_1,i_2,...,i_k} x^1_{i_k} x^2_{i_{k-1}} ... x^{k-1}_{i_2}.

It is clear from Landau's theorem that c ∈ l^2, since Σ_{i_1} c_{i_1} x_{i_1} converges for all (x_{i_1}) ∈ l^2. We now use the induction hypothesis. Since, for each j = 1, 2, ..., the sum Σ_{i_2,...,i_k} a_{j,i_1,i_2,...,i_k} x^1_{i_k} x^2_{i_{k-1}} ... x^{k-1}_{i_2} converges for all (x^1, x^2, ..., x^{k-1}) ∈ H^{k-1}, it


follows that, for each j, (a_{j,i_1,i_2,...,i_k}) is a matrix representation of a continuous (k-1)-linear operator A_j. Furthermore, c = A_j(x^1, x^2, ..., x^{k-1}). Now simply note that

Ω_j(x^1, x^2, ..., x^k) = (c, x^k) = (A_j(x^1, x^2, ..., x^{k-1}), x^k).

Thus, using Schwarz's inequality,

|Ω_j(x^1, x^2, ..., x^k)| = |(A_j(x^1, x^2, ..., x^{k-1}), x^k)| ≤ ||A_j(x^1, x^2, ..., x^{k-1})|| · ||x^k|| ≤ ||A_j|| · ||x^1|| · ||x^2|| ··· ||x^k||.

Thus Ω_j is continuous, as was to be proved.

Lemma 4.4. Let

L x = Σ_{j=1}^∞ Ω_j(x) φ_j.

If each Ω_j is continuous, then L is closed.

Proof. Let (x_n) = (x^1_n, x^2_n, ..., x^k_n) be a sequence in H^k which converges to y = (y^1, y^2, ..., y^k) in H^k. Suppose L x_n converges to z in H. We must prove that L y = z. But (L x_n, φ_j) = Ω_j(x_n), which converges to Ω_j(y) since Ω_j is continuous. Since (L x_n, φ_j) → (z, φ_j), it follows that Ω_j(y) = (z, φ_j) for each j = 1, 2, .... But then

||z - L y||^2 = Σ_{j=1}^∞ |(z - L y, φ_j)|^2 = Σ_j |(z, φ_j) - (L y, φ_j)|^2 = Σ_j |Ω_j(y) - (L y, φ_j)|^2 = 0.

Thus L y = z and L is closed.

Now let L be given as in Lemma 4.4. Observe that if T x = L x where x ∈ H, then it follows from the induction hypothesis that T x ∈ ℬ_{k-1}[H, H], the family of bounded, (k-1)-linear operators from H^{k-1} to H. We then have

Lemma 4.5. Let L be an operator from H^k to H having a matrix representation (a_{j,i_1,i_2,...,i_k}). If L is closed and T: H → ℬ_{k-1}[H, H] is given by T x = L x where x ∈ H, then T is closed.

(Rend. Circ. Matem. Palermo, Serie II, Tomo XXI, Anno 1972)


Proof. Let (x_n) be a sequence in H which converges to x in H and suppose T x_n converges to A in ℬ_{k-1}[H, H]. We must prove that T x = A. For each y = (y^1, y^2, ..., y^{k-1}) in H^{k-1}, (T x_n)(y) converges pointwise to A y. However, L is closed. Thus for all y in H^{k-1}, (x_n, y^1, y^2, ..., y^{k-1}) converges to (x, y^1, y^2, ..., y^{k-1}), (T x_n)(y) = L(x_n, y^1, y^2, ..., y^{k-1}) converges to A y, and thus, by the closedness of L,

L(x, y^1, y^2, ..., y^{k-1}) = (T x)(y) = A y

for all y ∈ H^{k-1}. Thus (A - T x) is a (k-1)-linear operator which vanishes at all y ∈ H^{k-1}. Thus A = T x, which was to be proved.

Lemma 4.6. Let T: H → ℬ_{k-1}[H, H] be linear. If T is closed, then T is continuous.

Proof. The proof is a simple adaptation of a canonical proof for the linear case (see Taylor [17], p. 180). Let graph T = {(x, T x): x ∈ H}. This set is closed in the product topology on H × ℬ_{k-1}[H, H]. Observe that graph T is a subspace when addition on H × ℬ_{k-1}[H, H] is defined by

(x, A) + (y, B) = (x + y, A + B),

where x, y ∈ H and A, B ∈ ℬ_{k-1}[H, H]. Let H ⊕ ℬ_{k-1}[H, H] be the space H × ℬ_{k-1}[H, H] topologized by the norm

||(x, B)|| = ||x|| + ||B||,

where B ∈ ℬ_{k-1}[H, H]. This topology is equivalent to the product topology on H × ℬ_{k-1}[H, H]. Since H and ℬ_{k-1}[H, H] are complete, H ⊕ ℬ_{k-1}[H, H] is complete. Observe that graph T is a closed subspace of a complete space and thus is itself complete. Define a function A from graph T to H by

A(x, T x) = x.

Then

||A(x, T x)|| = ||x|| ≤ ||x|| + ||T x|| = ||(x, T x)||,

so that A is bounded. Furthermore, A is linear since

A((x, T x) + (y, T y)) = A(x + y, T(x + y)) = A(x + y, T x + T y) = x + y = A(x, T x) + A(y, T y).


Thus A is a bounded linear transformation from graph T onto H. Also A is 1-1. Thus A^{-1} exists and is continuous since graph T and H are both complete. It follows that if the sequence (x_n) in H converges to x in H, then A^{-1}(x_n) converges to A^{-1}x. That is, (x_n, T x_n) converges to (x, T x) in H ⊕ ℬ_{k-1}[H, H]. But then T x_n converges to T x since

||(x_n, T x_n) - (x, T x)|| = ||x - x_n|| + ||T x_n - T x||.

Thus T is continuous, as was to be proved.

We can now prove

Theorem 4.2. Let H be a complete, separable Hilbert space. Every operator L on H^k to H possessing a matrix representation (a_{j,i_1,i_2,...,i_k}) is a continuous k-linear operator.

Proof. The theorem is clearly true when n = 1. Assume the theorem is true when n = k - 1 (induction hypothesis) and let n = k. Define T from H to ℬ_{k-1}[H, H] by T x = L x. Invoking Lemmas 4.3 through 4.6, we see that L is closed, which implies T is closed, which implies T is continuous. Let x = (x^1, x^2, ..., x^k) ∈ H^k. Then

||L(x^1, x^2, ..., x^k)|| = ||(T x^1)(x^2, x^3, ..., x^k)|| ≤ ||T x^1|| ||x^2|| ||x^3|| ... ||x^k|| ≤ ||T|| ||x^1|| ||x^2|| ... ||x^k||

and L is continuous. This completes the proof of the theorem.

We have given a proof of Theorem 4.2 by arguments involving closed graph theory. There is a second and somewhat more complicated proof involving n-convexity notions for n-linear functionals. We shall, however, omit this proof.

The preceding theory pertains to n-linear operators where n ≥ 1. When n = 0 we can define a matrix representation of a 0-linear operator. Let L_0 be a constant function on H to H. Then there exists a fixed y = L_0 ∈ H such that L_0 x = y for all x ∈ H. But y = Σ_{i=1}^∞ y_i φ_i. The one way matrix (y_i: i = 1, 2, ...) is said to be a matrix representation of the operator L_0. It is clear that to each one way matrix (c_i)_{i=1}^∞ for which Σ_{i=1}^∞ |c_i|^2 < ∞ there corresponds a 0-linear operator L_0 defined by L_0 x = Σ_{i=1}^∞ c_i φ_i for all x ∈ H.


5. FINAL REMARKS

We have proved that every continuous n-linear operator (and hence every continuous polynomial operator) has a unique matrix representation with respect to each complete, orthonormal system {φ_i}_{i=1}^∞ in a separable Hilbert space, and conversely. However, not every matrix (a_{j,i_1,i_2,...,i_n}), j, i_1, i_2, ..., i_n = 1, 2, ..., where

A(x^1, x^2, ..., x^n) = Σ_{j=1}^∞ d_j φ_j,

d_j = Σ_{i_1,i_2,...,i_n} a_{j,i_1,i_2,...,i_n} x^1_{i_n} x^2_{i_{n-1}} ... x^n_{i_1},

and

x^m = Σ_{j=1}^∞ x^m_j φ_j,  m = 1, 2, ..., n,

corresponds to an n-linear operator on H^n to H. For example, let n = 2 and define a_{j,k,l} = j δ_{kl}, where δ_{kl} is the Kronecker delta. Then for each j the matrix (a_{j,k,l})_{k,l} is j times the infinite identity matrix. If x = φ_1 then, formally,

A x^2 = Σ_{n=1}^∞ n φ_n ∉ H.
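The divergence in this counterexample is visible on truncations: with a_{j,k,l} = j δ_{kl} and x = φ_1, the squared norm of the N-th partial sum of A x^2 is Σ_{j≤N} j^2, which grows without bound. A small sketch:

```python
import numpy as np

def partial_norm_sq(N):
    """Squared norm of the N-th partial sum of A x^2 when a_{j,k,l} = j*delta_{kl}
    and x = phi_1: d_j = j * sum_k x_k^2 = j, so the value is sum_{j<=N} j^2."""
    x = np.zeros(N); x[0] = 1.0                       # coordinates of phi_1
    d = np.array([j * float(np.dot(x, x)) for j in range(1, N + 1)])
    return float(np.sum(d ** 2))

norms = [partial_norm_sq(N) for N in (10, 100, 1000)]
# The partial squared norms grow without bound, so A x^2 fails to lie in l^2.
```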

Necessary conditions upon the terms of the matrix (a_{j,i_1,i_2,...,i_n}) which guarantee that it represents an operator on H^n to H are rather easily come by. However, such conditions are usually not sufficient conditions. In the event n = 1, Schur's Lemma [11] gives a very stringent set of sufficient conditions on (a_{jk}) to guarantee it to be a matrix representation of a linear operator. Can Schur's Lemma


be extended to the cases n ≥ 2? Also, conditions exist which guarantee that the matrix (a_ij) represents a compact linear operator on H to H. Certainly, these conditions can also be extended to n ≥ 2.

All of the foregoing theory reduces to the case H = l^2(n), where n = 1, 2, .... A good part of it should carry over to the spaces H = l^p(n) where p > 1 or p = 1. At least, one can certainly ask the same questions in any separable Banach space.

Acknowledgement. The author wishes to thank Professor J. B. Rosser for

pointing out a serious error in the initial draft of this paper. Any remaining

errors are certainly the sole responsibility of the author.

Fort Collins (Colorado), January 1971.


REFERENCES

[1] Akhiezer N. I. and Glazman I. M., Theory of Linear Operators in Hilbert Space, Vol. 1, Ungar, New York (1961).

[2] Cesari L., Functional analysis and Galerkin's method, Michigan Math. Journal, 11 (1964), 336-384.

[3] Chandrasekhar S., Radiative Transfer, Dover, New York (1960).

[4] Hopf E., Über die Anfangswertaufgabe für die hydrodynamischen Grundgleichungen, Math. Nachrichten, 4 (1951), 213-231.

[5] Kantorovich L. V. and Akilov G. P., Functional Analysis in Normed Spaces, Macmillan (1964).

[6] Locker John, An existence analysis for nonlinear equations in Hilbert space, Transactions A.M.S., 128 (1967), 403-413.

[7] Marcus B., Solutions of infinite polynomial systems by iteration, Rendiconti Circolo Matematico di Palermo, Serie II, vol. 11 (1962), 5-24.

[8] Marcus B., Error bounds for solutions of infinite polynomial systems by iteration, Rendiconti Circolo Matematico di Palermo, Serie II, vol. 13 (1964), 5-10.

[9] McFarland J. E., An iterative solution of the quadratic equation in Banach space, Proceedings of A.M.S., (1958), 824-830.

[10] Phillips John R., Eigenfunction expansions for self-adjoint bilinear operators in Hilbert space, Technical Report 27, Oregon State University, May (1966).

[11] Prenter P. M., A Weierstrass theorem for real, separable Hilbert spaces, MRC Report 860, April (1968) and Journal of Approximation Theory (to appear).

[12] Prenter P. M., Lagrange and Hermite interpolation in Banach spaces, MRC Report 921, October (1968).

[13] Prenter P. M., A Weierstrass theorem for real normed linear spaces, MRC Report 957, January (1969) and Bulletin of A.M.S., 75 (1969), 860-862.

[14] Prenter P. M., On polynomial operators and equations, in Nonlinear Functional Analysis, Proceedings of an advanced symposium, MRC, edited by L. B. Rall, Wiley (1970) (to appear).

[15] Rall L. B., Quadratic equations in Banach spaces, Rendiconti Circolo Matematico di Palermo, Serie II, vol. 10 (1961), 314-332.

[16] Rall L. B., Solutions of abstract polynomial equations by iterative methods, MRC Technical Summary Report 892, August (1968), 1-35.

[17] Taylor A. E., An Introduction to Functional Analysis, Wiley, New York (1958).

[18] Urabe Minoru, Galerkin's procedure for nonlinear periodic systems, Arch. Rational Mech. Anal., 20 (1965), 120-152.

[19] Urabe Minoru, Galerkin's procedure for nonlinear periodic systems and its extension to multipoint boundary value problems for general nonlinear systems, in Numerical Solutions of Nonlinear Differential Equations, Proceedings of an advanced symposium, MRC, edited by Donald Greenspan, Wiley (1966).