
Probab. Th. Rel. Fields 86, 131-154 (1990)

Probability Theory and Related Fields

© Springer-Verlag 1990

On probabilities of large deviations in Banach spaces

V. Bentkus¹ and A. Račkauskas²
¹ Institute of Mathematics and Cybernetics of the Lithuanian Academy of Sciences, Akademijos 4, 232600 Vilnius, Lithuania
² Vilnius University, Naugarduko 24, 232006 Vilnius, Lithuania

Received March 3, 1988; in revised form August 11, 1989

Summary. Let $X, X_1, X_2, \ldots \in B$ denote a sequence of i.i.d. random variables in a real separable Banach space $B$, and let $Y \in B$ denote a Gaussian random variable. Suppose that $EX = EY = 0$ and that the covariances of $X$ and $Y$ coincide. Denote $S_n = n^{-1/2}(X_1 + \cdots + X_n)$. We prove that under appropriate conditions

$$P(\|S_n\| > r) = P(\|Y\| > r)(1 + o(1)) \quad \text{as } n \to \infty$$

and give estimates of the remainder term. Applications to the $\omega^2$-test, the Anderson–Darling test and to empirical characteristic functions are given.

1. Introduction

Let $H$ be a real separable Hilbert space with norm $\|\cdot\|$, and let $X, X_1, X_2, \ldots \in H$ be a sequence of i.i.d. centered random variables with distribution $L(X)$ such that $P(X = 0) < 1$. Let $Y \in H$ be a centered Gaussian random variable such that the covariances of $X$ and $Y$ coincide. Denote $S_n = n^{-1/2}(X_1 + \cdots + X_n)$. Throughout, $\theta \in \mathbb{R}^1$ satisfies the only condition $|\theta| \le 1$.

The following theorem presents an illustration of the more general and exact results given in the paper.

Theorem 1.1. Let $0 < \gamma < 1$. If $\gamma > 1/2$ then suppose in addition that $Ef_k(X) = Ef_k(Y)$ for every $k$-linear continuous form $f_k : H^k \to \mathbb{R}^1$ and every integer $k \ge 3$ such that $k < (2 - \gamma)/(1 - \gamma)$. Then the following five statements are equivalent:

a) There exists $h > 0$ such that

$$E \exp(h\|X\|^{\gamma}) < \infty;$$

b) There exist positive constants $M_i = M_i(\gamma, L(X))$, $i = 1, 2, 3$ such that

$$P(\|S_n\| > r) \le M_1 \exp(-M_2 r^2) \quad \text{when } 0 \le r \le M_3 n^{\gamma/(4-2\gamma)};$$

c) There exist positive constants $M_i = M_i(\gamma, L(X))$, $i = 1, 2$ such that

$$P(\|S_n\| > r) \le M_1 P(\|Y\| > r) \quad \text{when } 0 \le r \le M_2 n^{\gamma/(4-2\gamma)};$$


d) There exist positive constants $M_i = M_i(\gamma, L(X))$, $i = 1, 2$ such that

$$P(\|S_n\| > r) = P(\|Y\| > r)\bigl(1 + \theta M_1 (r + 1) n^{(1-s)/(2s+2)}\bigr)$$

when $r \le M_2 n^{\gamma/(4-2\gamma)}$. Here $s \ge 2$ is the largest integer such that $s < (2 - \gamma)/(1 - \gamma)$;

e) For every function $f : \mathbb{R}^1 \to \mathbb{R}^1$ such that $f(x) \to 0$ as $x \to \infty$

$$P(\|S_n\| > r)/P(\|Y\| > r) \to 1 \quad \text{as } n \to \infty$$

uniformly with respect to $r \le f(n) n^{\gamma/(4-2\gamma)}$.
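For orientation, the exponents appearing in b)–e) can be evaluated in the special case $\gamma = 1/2$ (a routine check, not part of the original text):

```latex
\gamma = \tfrac{1}{2}:\qquad
\frac{2-\gamma}{1-\gamma} = \frac{3/2}{1/2} = 3
\;\Longrightarrow\; s = 2,\qquad
\frac{1-s}{2s+2} = -\frac{1}{6},\qquad
\frac{\gamma}{4-2\gamma} = \frac{1/2}{3} = \frac{1}{6}.
```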

For instance, when $\gamma = 1/2$ Theorem 1.1 implies that the condition $E \exp(h\|X\|^{1/2}) < \infty$ is equivalent to the equality

$$P(\|S_n\| > r) = P(\|Y\| > r)\bigl(1 + \theta M_1 (r + 1) n^{-1/6}\bigr)$$

whenever $r \le M_2 n^{1/6}$.

Theorem 1.1 can be applied to certain statistics. As an example let us consider the $\omega^2$-statistic

$$\omega_n^2 = n \int_{-\infty}^{\infty} (F_n(t) - F(t))^2\, dF(t),$$

where $F$ is a continuous distribution function on $\mathbb{R}^1$ and $F_n$ denotes the empirical distribution function of a random sample of size $n$ related to $F$. Denote

$$U(x) = \lim_{n\to\infty} P(\omega_n^2 > x).$$

Corollary 1.2. For every function $f : \mathbb{R}^1 \to \mathbb{R}^1$ such that $\lim_{x\to\infty} f(x) = 0$

$$P(\omega_n^2 > x)/U(x) \to 1 \quad \text{as } n \to \infty$$

uniformly with respect to $x \le f(n) n^{1/3}$. Moreover, there exist positive constants $M_1$ and $M_2$ such that the equality

$$P(\omega_n^2 > x)/U(x) = 1 + \theta M_1 (x + 1)^{1/2} n^{-1/6}$$

is valid when $x \le M_2 n^{1/3}$.
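For a numerical illustration of $\omega_n^2$, the statistic can be computed from the uniformized sample $u_i = F(t_i)$ via the classical order-statistics identity $\omega_n^2 = \frac{1}{12n} + \sum_{i=1}^{n} \bigl(\frac{2i-1}{2n} - u_{(i)}\bigr)^2$; the sketch below (function name ours, not from the paper) uses this identity:

```python
def cramer_von_mises(u):
    """Compute omega_n^2 = n * int (F_n(t) - F(t))^2 dF(t).

    `u` is the uniformized sample u_i = F(t_i).  The classical identity
        omega_n^2 = 1/(12 n) + sum_i ((2 i - 1)/(2 n) - u_(i))^2
    avoids integrating the empirical distribution function explicitly.
    """
    u = sorted(u)
    n = len(u)
    return 1.0 / (12 * n) + sum(
        ((2 * i - 1) / (2.0 * n) - ui) ** 2
        for i, ui in enumerate(u, start=1)
    )
```

For the "most uniform" sample $u_{(i)} = (2i-1)/(2n)$ the sum vanishes and $\omega_n^2 = 1/(12n)$, the minimal possible value.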

Theorem 1.1 implies large deviations results for linear combinations of order statistics (Bentkus and Zitikis 1990). In Sect. 2 we generalize Theorem 1.1 to the case of a general Banach space. In the case of the space $C[0, 1]$ these generalizations imply large deviations results for empirical characteristic functions (see Theorem 2.7). In the case of the space $L_p$, $p \ge 3$, they yield large deviations results for $\omega_n^p$-statistics with weights (see Theorem 2.8).

Yurinskii (1976) has proved that condition a) with $\gamma = 1$ yields

$$P(\|S_n\| > r) \le M_1 (r + 1) P(\|Y\| > r) \qquad (1.1)$$

when $0 \le r \le M_2 n^{1/6}$. Theorem 1.1 strengthens this result. Osipov (1977) has proved that under appropriate conditions

$$P(\|S_n\| > r)/P(\|Y\| > r) = 1 + o(1) \quad \text{as } n \to \infty \qquad (1.2)$$

when $r = o(n^{1/6})$. Unfortunately, even the boundedness of $X$ is not sufficient to satisfy Osipov's conditions.

One can consider our approach as a renewed and generalized version of that in Bentkus (1986a, b), where it was proved that a) with $\gamma = 1$ implies d) with $s = 2$ when


$r \le M_2 n^{1/6}$. The proofs are based on the application of Taylor's formula, properties of the limiting Gaussian distribution, induction in $n$ and iterations of the estimates. Methods of this kind were developed earlier for the estimation of the convergence speed in the Central Limit Theorem (Bergström 1951; Kuelbs and Kurtz 1974; Butzer et al. 1975; Paulauskas 1976; Zolotarev 1976; Sazonov 1981; Bolthausen 1982; Bentkus and Račkauskas 1982a, b; Haeusler 1984; Götze 1986; etc.). Our proofs differ from those usually used to obtain limit theorems for probabilities of large deviations (Ibragimov and Linnik 1965; Petrov 1975; Rudzkis et al. 1979; etc.) since we do not use the Fourier transform.

Below we give estimates of constants and present a generalization of Theorem 1.1 to the case of general Banach spaces (including the space $C[0, 1]$) and to the case of nonidentically distributed summands. Moreover, we show that in the general case the order $O(n^{(1-s)/(2s+2)})$ of the remainder term in d) of Theorem 1.1 is unimprovable. Earlier, in Bentkus and Račkauskas (1983a, b) and Bentkus (1983), Berry–Esseen type estimates of the convergence speed in the Central Limit Theorem in Banach spaces were obtained in the same generality.

We thank M.A. Lifshits, who has kindly presented the proof of Lemma 3.1.

2. Results

Let $F \subset B$ denote a pair of real separable Banach spaces with norms $|\cdot|_F$ and $|\cdot|_B$ respectively, such that the identical embedding $F \to B$ is linear and continuous (without loss of generality one can assume that $|x|_F \ge |x|_B$ for all $x \in F$).

Let $X_1, \ldots, X_n \in F$ be independent random variables with mean zero. Denote

$$S_n = n^{-1/2}(X_1 + \cdots + X_n).$$

We are going to estimate the probability $P(|S_n|_B > r)$. For this we need notation and some conditions.

Throughout the paper we fix a number $\gamma$, $0 < \gamma < 1$, and the largest integer $s = s(\gamma)$ such that $s < (2 - \gamma)/(1 - \gamma)$.
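Since $s(\gamma)$ controls both the moment condition below and the remainder exponent $(1-s)/(2s+2)$, it may help to tabulate it; a small helper (names ours, for illustration only):

```python
import math

def s_of_gamma(gamma):
    """Largest integer s with s < (2 - gamma)/(1 - gamma), for 0 < gamma < 1."""
    bound = (2.0 - gamma) / (1.0 - gamma)
    # ceil(bound) - 1 is the largest integer strictly below `bound`,
    # also when `bound` itself is an integer (e.g. gamma = 1/2 gives bound = 3).
    return math.ceil(bound) - 1

def remainder_exponent(gamma):
    """Exponent (1 - s)/(2 s + 2) of n in the remainder term."""
    s = s_of_gamma(gamma)
    return (1 - s) / (2 * s + 2)
```

For example, every $0 < \gamma \le 1/2$ gives $s = 2$ and remainder exponent $-1/6$, while $\gamma$ close to 1 makes $s$ (and the number of matching moments required) large.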

Condition A. Random variables $X_1, \ldots, X_n$ are pregaussian in $F$. More exactly, there exists a mean zero Gaussian random variable $Y \in F$ such that the covariances of $X_k$ and $Y$ coincide, i.e.

$$Ef_2(X_k) = Ef_2(Y), \quad 1 \le k \le n,$$

for every bilinear continuous symmetric form $f_2 : F^2 \to \mathbb{R}^1$. Moreover, if $\gamma > 1/2$ then

$$Ef_m(X_k) = Ef_m(Y), \quad 1 \le k \le n, \qquad (2.1)$$

for every $m$-linear continuous symmetric form $f_m : F^m \to \mathbb{R}^1$ and all $m$, $1 \le m \le s$.

The formulations of the results involve certain constants related to the distribution of the Gaussian random variable $Y$. Let us define them. Lemma 3.1 ensures that there exist positive constants $a = a(L(Y))$ and $b_1 = b_1(L(Y))$ such that for all $r \ge 0$, $\varepsilon > 0$

$$P(|Y|_B > r - \varepsilon) \le a \exp(b_1^2 r \varepsilon) P(|Y|_B > r) \qquad (2.2)$$


(note that necessarily $a \ge 1$). Moreover, for each $r_0 > 0$ there exists a positive constant $b_2 = b_2(r_0, L(Y))$ such that for all $r \ge r_0$, $\varepsilon > 0$

$$P(r - \varepsilon < |Y|_B < r + \varepsilon) \le \varepsilon b_2 (b_2 r + 1) P(|Y|_B > r - \varepsilon). \qquad (2.3)$$

Condition B. The distribution function $F(r) = P(|Y|_B < r)$ has a bounded density.

Lemma 3.1 implies that condition B is fulfilled if and only if (2.3) is valid for all $r \ge 0$, $\varepsilon > 0$.

The inclusion map $F \subset B$ is of type 2 (see, e.g., Paulauskas and Račkauskas 1989) if for every sequence of independent identically distributed random variables

$$X, X_1, X_2, \ldots \in F, \quad EX = 0, \quad E|X|_F^2 < \infty,$$

there exists a Gaussian random variable $Y \in B$ such that the distributions $L(S_n)$ converge weakly to $L(Y)$ in the space $B$ (in other words, $X$ satisfies the Central Limit Theorem). The results of the paper can be considered a refinement of the Central Limit Theorem. Therefore we need a more stringent condition on the inclusion map $F \subset B$ than that for the Central Limit Theorem. Let us formulate it. For a function $f : B \to \mathbb{R}^1$ define the derivatives

$$f'(x)h = \lim_{t \to 0} (f(x + th) - f(x))/t, \quad t \in \mathbb{R}^1,$$

$$f''(x)hg = (f'(x)h)'g, \quad f''(x)h^2 = f''(x)hh, \quad \ldots,$$

where $x \in B$ and $h, g \in F$. We say that $f \in D^k(B/F)$ if for all $h_1, \ldots, h_m \in F$, $x \in B$, $m \le k$ there exists the derivative $f^{(m)}(x)h_1 \cdots h_m$ such that the map

$$(h_1, \ldots, h_m) \mapsto f^{(m)}(x)h_1 \cdots h_m : F \times \cdots \times F \to \mathbb{R}^1$$

is $m$-linear symmetric and continuous.

Condition C. There exists a constant $d$ such that for all $r, \varepsilon > 0$ one can find a function $f = f_{r,\varepsilon} \in D^{s+1}(B/F)$ so that

$$0 \le f \le 1, \quad f(x) = 0 \text{ if } |x|_B \le r - \varepsilon, \quad f(x) = 1 \text{ if } |x|_B > r,$$

and for all $x \in B$, $h \in F$, $1 \le k \le s + 1$

$$|f^{(k)}(x)h^k| \le d \varepsilon^{-k} |h|_F^k. \qquad (2.4)$$

Condition C ensures that the inclusion map $F \subset B$ is of type 2. For example, condition C is satisfied if $F = B$ and the norm function $x \mapsto |x|_B : B \setminus \{0\} \to \mathbb{R}^1$ has $s + 1$ Fréchet derivatives bounded on the unit sphere. Therefore the spaces $F = B = L_2$ and $F = B = L_p$, $p \ge s + 1$, satisfy condition C. The pair $F = \mathrm{Lip}_{\alpha}[0, 1]$, $\alpha > 1/2$, and $B = C[0, 1]$ also satisfies condition C (for more information see Bentkus and Račkauskas (1982a) and, e.g., Paulauskas and Račkauskas (1989)). Denote

$$G_k = L(X_k) - L(Y), \quad G = (|G_1| + \cdots + |G_n|)/n,$$

where $|G_k|$ is the variation of the measure $G_k$,

$$v(h) = \int_F |x/h|_F^{s+1} \exp(|x/h|_F^{\gamma})\, G(dx),$$

$$v = \inf(h > 0 : v(h) \le 1).$$

Throughout, $R_n = R_n(\gamma) = n^{\gamma/(4-2\gamma)}$.


Theorem 2.1. Assume that conditions A and C are fulfilled. Then

$$P(|S_n|_B > r) \le M_1 P(|Y|_B > r)$$

when $r \le M_2 R_n$, where

$$M_1 = 6a, \quad M_2 = \min\bigl(M b_1^{-2};\; (b_1^2 v^{\gamma})^{1/(\gamma - 2)}\bigr).$$

The constant $M > 0$ is arbitrary, satisfying

$$21\, a d (Mv)^{s+1} \exp(M^2 b_1^{-2}) \le (s + 1)!\,.$$

Theorem 2.2. Assume that conditions A and C are fulfilled. Then for every $r_0 > 0$

$$P(|S_n|_B > r) = P(|Y|_B > r)\bigl(1 + \theta M_1 (T/M)^{(s+1)/(s+2)}\bigr)$$

when $r_0 \le r \le M_2 R_n$. Here

$$T = b(br + 1) n^{(1-s)/(2s+2)}, \quad b = \max(b_1; b_2),$$

and the constants $M_1$, $M_2$, $M$ are defined in Theorem 2.1 with $b$ instead of $b_1$.

If condition B is fulfilled then the theorem is valid for all $r \le M_2 R_n$, with $b_2$ such that (2.3) holds for all $r \ge 0$, $\varepsilon > 0$.

Theorem 2.3. Suppose that conditions A, B, C are fulfilled. Then there exists a constant $C = C(\gamma) > 0$ such that the equality

$$P(|S_n|_B > r) = P(|Y|_B > r)\bigl(1 + \theta M_1 T (1 + M_1/a)/M\bigr)$$

is valid when $r \le r_n$. Here $M_1 = 12a$, and the constant $M$ is arbitrary, satisfying

$$C a d (Mv)^{s+1}(1 + bv) \exp(2M^2/b^2) \le 1.$$

The constant $b = \max\{b_1; b_2\}$, where $b_2$ is such that (2.3) is fulfilled for all $r \ge 0$, $\varepsilon > 0$. The constants

$$M_2 = 12^{-1} \min\bigl(M b^{-2};\; (bv)^{\gamma/(\gamma-2)}/b\bigr)$$

and

$$r_n = -2b^{-1} \ln n + M_2 R_n, \quad T = b(br + 1) n^{(1-s)/(2s+2)}.$$

In the remaining part of the section we assume that the random variables $X, X_1, X_2, \ldots \in B$ are identically distributed.

In the following theorem we denote $T = (r + 1) n^{(1-s)/(2s+2)}$ and do not indicate the dependence of constants on $L(X)$ and $\gamma$.

Theorem 2.4. Assume that $E|X|_F^{s+1} < \infty$, $P(Y \in F) = 1$ and that conditions A and C are fulfilled. Then the following assertions are equivalent:

a) There exists $h > 0$ such that

$$E \exp(h|X|_B^{\gamma}) < \infty;$$

b) There exist constants $M_1, M_2 > 0$ such that

$$P(|S_n|_B > r) \le M_1 P(|Y|_B > r) \quad \text{when } r \le M_2 R_n;$$

c) There exist constants $M_i > 0$, $i = 1, 2, 3$ such that

$$P(|S_n|_B > r) \le M_1 \exp(-M_2 r^2) \quad \text{when } 0 \le r \le M_3 R_n;$$


d) For each $r_0 > 0$ there exist constants $M_1, M_2 > 0$ such that

$$P(|S_n|_B > r) = P(|Y|_B > r)\bigl(1 + \theta M_1 T^{(s+1)/(s+2)}\bigr)$$

when $r_0 \le r \le M_2 R_n$;

e) For each $r_0 > 0$ and every function $f : \mathbb{R}^1 \to \mathbb{R}^1$ such that $f(x) \to 0$ as $x \to \infty$

$$P(|S_n|_B > r)/P(|Y|_B > r) \to 1 \quad \text{as } n \to \infty$$

uniformly with respect to $r_0 \le r \le f(n) R_n$.

If condition B is fulfilled then one can put $r_0 = 0$ in d) and e). Moreover, then a)–e) are equivalent to

f) There exist constants $M_1, M_2 > 0$ such that

$$P(|S_n|_B > r) = P(|Y|_B > r)(1 + \theta M_1 T) \quad \text{when } r \le M_2 R_n.$$

Theorem 2.5. Assume that $EX = 0$ and that there exist a Gaussian random variable $Y \in B$, $P(Y = 0) \ne 1$, and a constant $M$ such that for every function $f : \mathbb{R}^1 \to \mathbb{R}^1$ with $f(n) \to 0$ as $n \to \infty$ one has

$$P(|S_n|_B > r)/P(|Y|_B > r) \to 1 \quad \text{as } n \to \infty \qquad (2.5)$$

uniformly with respect to $r \ge M$, $r \le f(n) R_n$. Then there exist positive constants $M_i = M_i(L(X), \gamma)$, $1 \le i \le 4$, such that

$$P(|S_n|_B > r) \le M_1 P(|Y|_B > r), \qquad (2.6)$$

$$P(|S_n|_B > r) \le M_2 \exp(-M_3 r^2) \qquad (2.7)$$

when $r \le M_4 R_n$.

If (2.7) is fulfilled then there exists $h = h(L(X), \gamma) > 0$ such that

$$E \exp(h|X|_B^{\gamma}) < \infty. \qquad (2.8)$$

The following theorem shows that the estimates of the remainder term in Theorems 1.1, 2.3, 2.4 depend on $n$ in the proper way.

Theorem 2.6. Assume that $\theta_n \to 0$. Then there exist a pair $F \subset B$ of Banach spaces and random variables $X, Y \in F$, $EX = EY = 0$, such that:

a) The space $F$ is a Hilbert space and the pair $F \subset B$ satisfies condition C;

b) $X$ is bounded (in $F$) and symmetric, $Y$ is Gaussian and the covariances of $X$ and $Y$ coincide. Therefore condition A is fulfilled for $\gamma \le 2/3$ (or $s \le 3$);

c) Condition B is fulfilled. Moreover, the distribution function $G_z(r) = P(|Y + z|_B < r)$ has a bounded density and

$$\sup\bigl(|G_z^{(k)}(r)| : z \in B,\; r \in \mathbb{R}^1\bigr) < \infty$$

for every $k = 1, 2, \ldots$;

d) If $0 < a < b < \infty$ then there exists a constant $C = C(L(X), a, b) > 0$ such that

$$\inf_{a \le r \le b} \frac{|P(|S_n|_B > r) - P(|Y|_B > r)|}{(r + 1)\, P(|Y|_B > r)} \ge C \theta_n n^{(1-s)/(2s+2)}$$

for infinitely many $n$.

Let us discuss the necessity of conditions A, B, C. Pregaussianness of $X_1, \ldots, X_n$ in condition A is a natural requirement since we estimate $P(|S_n|_B > r)$ via the Gaussian probability $P(|Y|_B > r)$. The question when a Banach space


valued random variable is pregaussian is well investigated (see, e.g., Paulauskas and Račkauskas (1989) and the literature cited there). Condition (2.1) of the coincidence of moments in the one-dimensional case $F = B = \mathbb{R}^1$ reduces to the classical one and in this case is necessary in theorems on probabilities of large deviations (see, e.g., Ibragimov and Linnik 1965; Petrov 1972).

The theorems of the paper can be considered a refinement of the Central Limit Theorem. For example, using Bernstein type inequalities (Yurinskii 1976) and the fact that $P(|Y|_B > r)$ rapidly vanishes as $r \to \infty$, one can show that Theorems 2.2 and 2.3 imply convergence rates $O(n^{-1/8})$ and $O(n^{-1/6})$ in the Central Limit Theorem provided that the random variable $X$ is bounded. Talagrand and Rhee (1984) have shown that without condition B one cannot obtain any convergence rate in the Central Limit Theorem for bounded $X$. Therefore without condition B one cannot prove Theorems 2.2 and 2.3. Similarly, the result of Bentkus (1985) implies that without condition C one cannot prove Theorems 2.2 and 2.3. Concerning Theorem 2.1, it remains unknown whether one can weaken or remove condition C.

Now we turn to applications to empirical characteristic functions and certain statistics.

Let $z, z_1, z_2, \ldots \in \mathbb{R}^1$ denote a sequence of i.i.d. random variables. Denote

$$f(t) = E \exp(itz), \quad f_n(t) = n^{-1} \sum_{k=1}^{n} \exp(itz_k),$$

$$S_n(t) = n^{1/2}(f_n(t) - f(t)), \quad 0 \le t \le 1.$$

Let $Y(t)$, $0 \le t \le 1$, be a Gaussian centered random process such that the covariance functions of $S_n(t)$ and $Y(t)$ coincide. Denote $\|Y\| = \sup(|Y(t)| : 0 \le t \le 1)$.
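The empirical quantities $f_n(t)$ and $S_n(t)$ are directly computable from a sample; a minimal sketch (function names ours, not from the paper):

```python
import cmath
import math

def empirical_cf(sample, t):
    """f_n(t) = n^{-1} sum_k exp(i t z_k), the empirical characteristic function."""
    return sum(cmath.exp(1j * t * z) for z in sample) / len(sample)

def normalized_deviation(sample, t, cf):
    """S_n(t) = n^{1/2} (f_n(t) - f(t)) for a known characteristic function cf."""
    return math.sqrt(len(sample)) * (empirical_cf(sample, t) - cf(t))
```

At $t = 0$ every characteristic function equals 1, so $S_n(0) = 0$; and $|f_n(t)| \le 1$ for all $t$.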

Theorem 2.7. Suppose that $E|z|^{\varepsilon + 3/2} < \infty$ for some $\varepsilon > 0$. Then there exist constants $M_i = M_i(L(z)) > 0$, $i = 1, 2$, such that

$$P(\|S_n\| > r) \le M_1 P(\|Y\| > r) \quad \text{when } r \le M_2 n^{1/6}.$$

Moreover, for every function $g : \mathbb{R}^1 \to \mathbb{R}^1$ such that $\lim_{n\to\infty} g(n) = 0$ and every $r_0 > 0$

$$P(\|S_n\| > r)/P(\|Y\| > r) \to 1 \quad \text{as } n \to \infty$$

uniformly with respect to $r_0 \le r \le g(n) n^{1/6}$. If the distribution function $r \mapsto P(\|Y\| > r)$ has a bounded density then

$$P(\|S_n\| > r) = P(\|Y\| > r)\bigl(1 + O((r + 1) n^{-1/6})\bigr)$$

when $0 \le r \le M_2 n^{1/6}$.

Let $q : [0, 1] \to [0, \infty)$ be a measurable non-negative function, $F$ a continuous distribution function, and $1 \le p < \infty$. Define the Cramér–von Mises–Smirnov statistic

t inuous distr ibution function, 1 < p < ~ . Define the C r a m e r - v o n Mises -Smi rnov statistic

$$\omega_n^p(q) = n^{p/2} \int_{-\infty}^{\infty} |F_n(t) - F(t)|^p\, q(F(t))\, dF(t),$$

where $F_n$ denotes the empirical distribution function of a random sample of size $n$ related to $F$. The change of variables $x = F(t)$ reduces $\omega_n^p(q)$ to the statistic

$$\omega_n^p(q) = n^{p/2} \int_0^1 |F_n(t) - t|^p\, q(t)\, dt,$$


where $F_n$ denotes the empirical distribution function of a random sample of size $n$ related to the uniform distribution on $[0, 1]$.

Theorem 2.8. Suppose that $\int_0^1 q(t)\, dt > 0$, $0 < \gamma \le 1/2$, and $p = 2$ or $p \ge 3$. Suppose that

$$\int_0^1 t(1 - t) q(t)\, dt < \infty. \qquad (2.9)$$

Then there exists the limit

$$U(x) = \lim_{n\to\infty} P(\omega_n^p(q) < x), \qquad (2.10)$$

where

$$U(x) = U(x; p, q) = P\left(\int_0^1 |w_t - t w_1|^p\, q(t)\, dt < x\right)$$

and $w_t$, $0 \le t \le 1$, denotes the Wiener process. Moreover, the following three statements are equivalent:

a) There exists h > 0 such that

$$\int_0^1 \exp\left(h \Bigl(\int_0^z t^p q(t)\, dt\Bigr)^{\gamma/p}\right) dz < \infty,$$

$$\int_0^1 \exp\left(h \Bigl(\int_z^1 (1 - t)^p q(t)\, dt\Bigr)^{\gamma/p}\right) dz < \infty;$$

b) There exist constants $M_i = M_i(\gamma, p, q) > 0$, $i = 1, 2$, such that

$$P(\omega_n^p(q) > x) = (1 - U(x))\bigl(1 + \theta M_1 (x^{1/p} + 1) n^{-1/6}\bigr)$$

when $x \le M_2 n^{p\gamma/(4-2\gamma)}$;

c) for each function $f : \mathbb{R}^1 \to \mathbb{R}^1$ such that $f(n) \to 0$ as $n \to \infty$

$$P(\omega_n^p(q) > x)/(1 - U(x)) \to 1 \quad \text{as } n \to \infty$$

uniformly with respect to $x \le f(n) n^{p\gamma/(4-2\gamma)}$.

Remark. Obviously, condition a) implies (2.9). Condition (2.9) is not necessary for (2.10). For instance, one can replace (2.9) by the following weaker condition ($p \ge 2$):

$$\int_0^1 \Bigl(\int_0^z t^p q(t)\, dt\Bigr)^{2/p} dz < \infty, \quad \int_0^1 \Bigl(\int_z^1 (1 - t)^p q(t)\, dt\Bigr)^{2/p} dz < \infty.$$

Remark. When $q(t) \equiv 1$ and $p = 2$ the statistic $\omega_n^p(q)$ reduces to the $\omega^2$-criterion. If $q(t) = [t(1 - t)]^{-1}$ and $p = 2$ then the statistic $\omega_n^p(q)$ coincides with the well-known Anderson–Darling test. In both cases a) is fulfilled with $\gamma = 1/2$ and therefore b) and c) are valid when $x \le M_2 n^{1/3}$ and $x \le f(n) n^{1/3}$, respectively. Let us note also that the weight function $q(t) = |\ln t(1 - t)|^{\delta}/[t(1 - t)]$ satisfies a) when $\delta < -1 + p/\gamma$ and does not satisfy a) when $\delta > -1 + p/\gamma$.



3. An auxiliary result

Lemma 3.1. Let $Y \in B$ be a centered Gaussian random variable such that $P(Y = 0) = 0$. Then there exists a constant $c > 0$ such that for all $\varepsilon > 0$, $r \ge 0$

$$P(|Y|_B > r - \varepsilon) \le c \exp(c r \varepsilon) P(|Y|_B > r). \qquad (3.1)$$

Furthermore, for every $r_0 > 0$ there exists a constant $c = c(L(Y), r_0) > 0$ such that for all $\varepsilon > 0$, $r \ge r_0$

$$P(r - \varepsilon < |Y|_B < r + \varepsilon) \le c \varepsilon (r + 1) P(|Y|_B > r - \varepsilon). \qquad (3.2)$$

The inequality (3.2) is fulfilled for all $r \ge 0$, $\varepsilon > 0$ with $c = c(L(Y))$ if and only if the distribution function $F(r) = P(|Y|_B < r)$ has a bounded density.

Remark. The proof of Lemma 3.1 belongs to M.A. Lifshits. Earlier, Bentkus (1986) introduced inequalities (3.1) and (3.2) as conditions and verified them in certain special cases including the case when $B$ is a Hilbert space.

Proof. Let $Q(x) = (2\pi)^{-1/2} \int_{-\infty}^{x} \exp(-t^2/2)\, dt$ be the standard normal distribution function, and $X = 1 - Q$. It is known (Ehrhard 1982) that if $Z \in \mathbb{R}^k$ is a Gaussian centered random variable then for all convex Borel sets $A, C \subset \mathbb{R}^k$

$$Q^{-1}\{P(Z \in \alpha A + \beta C)\} \ge \alpha Q^{-1}\{P(Z \in A)\} + \beta Q^{-1}\{P(Z \in C)\} \qquad (3.3)$$

when $\alpha, \beta \ge 0$, $\alpha + \beta = 1$. Denote $q(r) = Q^{-1}(1 - P_r) = X^{-1}(P_r)$, where $P_r = P(|Y|_B > r)$. From (3.3) one can easily derive that $q(\alpha r + \beta s) \ge \alpha q(r) + \beta q(s)$. This means that $q$ is concave. Denote by $r_1$ a point such that $P_{r_1} = 1/2$ or, equivalently, $q(r_1) = 0$.

Let us prove (3.1). We first show that during the proof we may assume that

$$\varepsilon < r/4, \quad 2r_1 + 1 < r - \varepsilon, \quad r_1 < r - \varepsilon, \quad P_{r-\varepsilon} < X(1). \qquad (3.4)$$

Indeed, since $P(Y = 0) = 0$ there exist constants $c_1, c_2 > 0$ such that $P_r \ge c_1 \exp(-c_2 r^2)$ for all $r \ge 0$. Therefore if $\varepsilon \ge r/4$ then

$$P_{r-\varepsilon} \le 1 \le c_1^{-1} \exp(c_2 r^2) P_r \le c_1^{-1} \exp(4 c_2 \varepsilon r) P_r,$$

which implies (3.1). Thus we may assume that the first inequality in (3.4) is fulfilled.

If $2r_1 + 1 \ge r - \varepsilon$ then $r \le c_1 = (8r_1 + 4)/3$, since we may assume that $\varepsilon < r/4$. Obviously $\inf\{P_r : 0 \le r \le c_1\} = c_2 > 0$ and therefore $P_{r-\varepsilon} \le 1 \le P_r/c_2$, which implies (3.1). Thus we may assume that the second inequality in (3.4) is fulfilled. The second inequality in (3.4) implies the third one.

If $P_{r-\varepsilon} \ge X(1)$ then $r - \varepsilon \le c < \infty$ since $X(1) > 0$ and $\lim_{r\to\infty} P_r = 0$. Proceeding as in the case of the inequality $2r_1 + 1 \ge r - \varepsilon$ we get (3.1). Therefore we may assume that the fourth inequality in (3.4) is fulfilled.

Put $\alpha = \varepsilon/(r - r_1)$, $\beta = 1 - \alpha$. Since $q$ is concave, $q(\alpha r_1 + \beta r) \ge \alpha q(r_1) + \beta q(r)$, i.e. $q(r - \varepsilon) \ge \beta q(r)$, whence $q(r) \le q(r - \varepsilon)/\beta$, so that $P_r \ge X(q(r - \varepsilon)/\beta)$ and $P_r/P_{r-\varepsilon} \ge X(q(r - \varepsilon)/\beta)/X(q(r - \varepsilon))$. We get (note that $X(z) \asymp z^{-1} \exp(-z^2/2)$ if $z \ge 1$)

$$P_r/P_{r-\varepsilon} \ge c \beta \exp\bigl(-q^2(r - \varepsilon)(\beta^{-2} - 1)/2\bigr). \qquad (3.5)$$


The function $q$ is concave and therefore $q(r - \varepsilon) \le c(r - \varepsilon) \le cr$ when $r - \varepsilon \ge 2r_1 + 1$. Using (3.4) one can verify that $\beta \ge 1/2$ and that $r^2(\beta^{-2} - 1) \le c r \varepsilon$. Thus (3.5) yields (3.1).
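The verification mentioned here is elementary arithmetic; one way to carry it out under (3.4) (our sketch, not in the original) is:

```latex
2r_1 + 1 < r - \varepsilon \;\Rightarrow\; r_1 < \tfrac{r}{2}
\;\Rightarrow\; \alpha = \frac{\varepsilon}{r - r_1} < \frac{2\varepsilon}{r} \le \frac12
\quad (\text{since } \varepsilon < r/4), \qquad \beta = 1 - \alpha > \tfrac12 ;
\qquad
\beta^{-2} - 1 = \frac{(1-\beta)(1+\beta)}{\beta^{2}} \le 8(1-\beta) = 8\alpha < \frac{16\varepsilon}{r},
\quad\text{so}\quad r^{2}(\beta^{-2} - 1) < 16\, r \varepsilon .
```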

Now let us prove (3.2) in the case $r \ge r_0$, $\varepsilon > 0$. Without loss of generality we can assume that

$$\varepsilon < r_0/2, \quad r - \varepsilon > c_1, \qquad (3.6)$$

where the sufficiently large constant $c_1$ is such that $q(z) > 1$ when $z \ge r - \varepsilon$. Indeed, if $\varepsilon \ge r_0/2$ then

$$P(r - \varepsilon < |Y|_B < r + \varepsilon) \le P(|Y|_B > r - \varepsilon),$$

$$1 \le 2\varepsilon/r_0 \le c \varepsilon (r + 1), \quad c = 2/r_0,$$

and (3.2) is fulfilled. Thus we may assume that the first inequality in (3.6) is fulfilled. It is known (Tsirel'son 1975; Ehrhard 1982) that the distribution function $F(r) = P(|Y|_B < r)$ is almost surely differentiable and the density $p(r) = F'(r)$ is bounded on every interval $(\delta, \infty)$, $\delta > 0$. Therefore, if $r - \varepsilon \le c_1$ and $\varepsilon < r_0/2$ then

$$P(r - \varepsilon < |Y|_B < r + \varepsilon) = \int_{r-\varepsilon}^{r+\varepsilon} p(z)\, dz \le c\varepsilon,$$

$$P(|Y|_B > r - \varepsilon) \ge P(|Y|_B > c_1) > 0,$$

which yield (3.2). Thus we may assume that the second inequality in (3.6) is fulfilled. Moreover, we may choose $c_1$ sufficiently large so that $q(c_1) > 1$, since $\lim_{r\to\infty} q(r) = \infty$.

Obviously

$$(2\pi)^{1/2} p(z) = q'(z) \exp(-q^2(z)/2).$$

The function $q$ is concave. Consequently $q'(z) \le c$ when $z \ge r_0/2$. Noting that $X(u) \ge c u^{-1} \exp(-u^2/2)$, $u \ge 1$, we obtain $p(z) \le c q(z) P_z \le c z P_z$. Thus

$$P(r - \varepsilon < |Y|_B < r + \varepsilon) = \int_{r-\varepsilon}^{r+\varepsilon} p(z)\, dz \le c \varepsilon (r + \varepsilon) P(|Y|_B > r - \varepsilon),$$

which yields (3.2) since $\varepsilon < r_0/2$.

If $p(z)$ is bounded then only minor corrections are needed in the above proof of (3.2).

If (3.2) holds for all $r \ge 0$, $\varepsilon > 0$ then obviously $p(r) \le c(r + 1) P(|Y|_B > r)$, and therefore the density $p(r)$ is bounded since, according to the Fernique–Skorohod–Shepp theorem, there exists $c > 0$ such that $E \exp(c|Y|_B^2) < \infty$.

4. Proofs

Let us consider the triangular array

$$(X_{kn} : 1 \le k \le n,\; n = 1, 2, \ldots) \qquad (4.1)$$

of centered, independent in every row, random variables of the space $F$ such that

$$Ef_m(X_{kn}) = Ef_m(Y) \qquad (4.2)$$


for every $m$-linear symmetric continuous form $f_m : F^m \to \mathbb{R}^1$ and all $1 \le m \le s$, $1 \le k \le n$, $n \ge 1$. Denote

$$S_n = n^{-1/2}(X_{1n} + \cdots + X_{nn}),$$

$$H_{kn} = L(X_{kn}) - L(Y),$$

$$H_n = n^{-1}(|H_{1n}| + \cdots + |H_{nn}|),$$

$$v_n(h) = \int_F |x/h|_F^{s+1} \exp(|x/h|_F^{\gamma})\, H_n(dx),$$

$$v_n = \inf(h > 0 : v_n(h) \le 1).$$

Let $Y_1, Y_2, \ldots$ be a sequence of independent copies of the Gaussian random variable $Y \in F$. Denote

$$Z_n = n^{-1/2}(Y_1 + \cdots + Y_n),$$

$$W_{kn} = n^{-1/2}(X_{1n} + \cdots + X_{k-1,n} + Y_{k+1} + \cdots + Y_n),$$

$$\delta_n(t) = \sup P(|S_n|_B > t),$$

where the sup is taken over all arrays (4.1) such that $v_n \le v$, $n = 1, 2, \ldots$.

Lemma 4.1. Suppose that the pair $F \subset B$ satisfies condition C. Then for every $\varepsilon > 0$

$$P(|S_n|_B > r) \le P(|Y|_B > r - \varepsilon) + J d \varepsilon^{-s-1} n^{(1-s)/2}/(s + 1)!, \qquad (4.3)$$

where

$$J = \int_F \delta_{n-1}(t)\, |x|_F^{s+1}\, H_n(dx),$$

$$t = t(x) = r - \varepsilon - n^{-1/2}|x|_B,$$

$$\delta_0(t) = I(|x|_B > r - \varepsilon),$$

and $I(A)$ denotes the indicator function of the event $A$.

Proof. Obviously $P(|S_n|_B > r) \le Ef(S_n)$, where $f = f_{r,\varepsilon}$ is the function from condition C,

$$Ef(S_n) \le Ef(Y) + \Delta, \quad Ef(Y) \le P(|Y|_B > r - \varepsilon),$$

where $\Delta = |Ef(S_n) - Ef(Y)|$. Therefore

$$P(|S_n|_B > r) \le P(|Y|_B > r - \varepsilon) + \Delta.$$

Let us estimate $\Delta$. Clearly $\Delta \le \Delta_1 + \cdots + \Delta_n$, where

$$\Delta_k = \bigl|Ef(W_{kn} + n^{-1/2}X_{kn}) - Ef(W_{kn} + n^{-1/2}Y_k)\bigr| = \Bigl|E \int_F f(W_{kn} + n^{-1/2}x)\, H_{kn}(dx)\Bigr|.$$

Let us expand $f$ into the Taylor series

$$f(y + z) = \sum_{j=0}^{s} f^{(j)}(y) z^j/j! + E(1 - \tau)^s f^{(s+1)}(y + \tau z) z^{s+1}/s!$$


with $y = W_{kn}$, $z = n^{-1/2}x$, where the random variable $\tau$ is uniformly distributed in the interval $[0, 1]$ and is independent of all other variables. Since the moments of $X_{kn}$ and $Y_k$ coincide up to the order $s$ (see (4.2)), the derivative $f^{(s+1)}$ satisfies estimate (2.4), and $f(x) = 0$ if $|x|_B \le r - \varepsilon$, we get

$$\Delta_k \le J_{kn}\, d\, \varepsilon^{-s-1} n^{-(s+1)/2}/(s + 1)!, \quad \text{where} \quad J_{kn} = \int_F P(|W_{kn}|_B > t)\, |x|_F^{s+1}\, |H_{kn}|(dx).$$

Clearly $P(|W_{kn}|_B > t) \le \delta_{n-1}(t)$. Collecting the estimates we complete the proof of the lemma.
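The final bookkeeping can be spelled out; with $\Delta_k$ and $J_{kn}$ bounded as above and $\sum_k |H_{kn}| = n H_n$, one gets (our sketch of the omitted arithmetic):

```latex
\Delta \le \sum_{k=1}^{n} \Delta_k
\le \frac{d\,\varepsilon^{-s-1}\, n^{-(s+1)/2}}{(s+1)!} \sum_{k=1}^{n} J_{kn}
\le \frac{d\,\varepsilon^{-s-1}\, n^{-(s+1)/2}}{(s+1)!}\; n \int_F \delta_{n-1}(t)\,|x|_F^{s+1}\, H_n(dx)
= \frac{J\, d\, \varepsilon^{-s-1}\, n^{(1-s)/2}}{(s+1)!}\,,
```

which is precisely the remainder term in (4.3).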

Proof of Theorem 2.1. The proof is based on Lemma 4.1 and induction in $n$.

During the proof, without loss of generality, we assume that $b_1 = 1$. Indeed, we may replace $Y, X_{kn}, \ldots$ by $b_1 Y, b_1 X_{kn}, \ldots$. Hence inequality (2.2) reduces to

$$P(|Y|_B > r - \varepsilon) \le a \exp(r\varepsilon) P(|Y|_B > r), \qquad (4.4)$$

which implies, when $\varepsilon = r$,

$$P(|Y|_B > r) \ge a^{-1} \exp(-r^2). \qquad (4.5)$$

We recall that according to the definitions

$$M_1 = 6a, \quad M_2 = \min(M; v^{\gamma/(\gamma-2)}), \qquad (4.6)$$

$$21\, a d (Mv)^{s+1} \exp(M^2) \le (s + 1)!\,. \qquad (4.7)$$

Denote

$$r_n = M_2 n^{\gamma/(4-2\gamma)}, \quad \varepsilon = \varepsilon_n = M^{-1} n^{(1-s)/(2s+2)}.$$

The definitions of $s$, $M_2$, $M$, $r_n$ and $\varepsilon = \varepsilon_n$ imply $\varepsilon_n r_n \le 1$ and therefore

$$\varepsilon r \le 1 \quad \text{when } r \le r_n. \qquad (4.8)$$

Inequalities (4.4) and (4.8) imply

$$P(|Y|_B > r - \varepsilon) \le 3a P(|Y|_B > r) \quad \text{when } r \le r_n. \qquad (4.9)$$

We shall prove that

$$\delta_n(r) \le M_1 P(|Y|_B > r) \quad \text{when } r \le r_n. \qquad (4.10)$$

The theorem is an obvious consequence of (4.10). Let us prove (4.10) when $n = 1$. Lemma 4.1 implies

$$P(|S_1|_B > r) \le P(|Y|_B > r - \varepsilon) + I,$$

where

$$I = J d \varepsilon^{-s-1}/(s + 1)!, \quad J = \int_F |x|_F^{s+1}\, H_1(dx).$$

If $h > v_1$ then the definition of $v_1(h)$ implies

$$J = h^{s+1} \int_F |x/h|_F^{s+1}\, H_1(dx) \le h^{s+1} v_1(h).$$


Obviously $v_1(h) \le 1$ when $h > v_1$. Hence $J \le h^{s+1}$ when $h > v_1$ and thus $J \le v_1^{s+1} \le v^{s+1}$ since $v_1 \le v$. Therefore (note that $\varepsilon = \varepsilon_1 = M^{-1}$)

$$I \le d(Mv)^{s+1}/(s + 1)! \quad \text{(use (4.5))}$$

$$\le a d (Mv)^{s+1} \exp(r^2) P(|Y|_B > r)/(s + 1)!$$

(note that $r \le r_1 = M_2 \le M$ and use the definition of $M$)

$$\le P(|Y|_B > r) \le a P(|Y|_B > r) \quad (\text{since } a \ge 1).$$

This estimate of $I$ together with estimate (4.9) of $P(|Y|_B > r - \varepsilon)$ implies $P(|S_1|_B > r) \le 4a P(|Y|_B > r)$, which completes the proof in the case $n = 1$.

Let us prove the theorem when $n \ge 2$. Suppose that (4.10) is already proved for $1, \ldots, n - 1$. Let us prove (4.10) for $n$. Lemma 4.1 implies

$$P(|S_n|_B > r) \le P(|Y|_B > r - \varepsilon) + \Delta, \qquad (4.11)$$

$$\Delta = J d \varepsilon^{-s-1} n^{(1-s)/2}/(s + 1)!,$$

$$J = \int_F \delta_{n-1}(t)\, |x|_F^{s+1}\, H_n(dx),$$

$$t = t(x) = r - \varepsilon - n^{-1/2}|x|_B.$$

A little later we shall show that

$$J \le a v^{s+1}\bigl(3a + 10 M_1 \exp(M^2)\bigr) P(|Y|_B > r) \quad \text{when } r \le r_n. \qquad (4.12)$$

This estimate and the definitions of $\varepsilon = \varepsilon_n$ and $M_1$ imply

$$\Delta \le 3a \bigl[21\, a d (Mv)^{s+1} \exp(M^2)\bigr] P(|Y|_B > r)/(s + 1)!\,.$$
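Indeed, by the choice of $\varepsilon_n$ we have $\varepsilon_n^{-s-1} n^{(1-s)/2} = M^{s+1}$, so that with $M_1 = 6a$ (our sketch of the omitted arithmetic):

```latex
\Delta = J\, d\, \varepsilon^{-s-1} n^{(1-s)/2}/(s+1)!
= J\, d\, M^{s+1}/(s+1)!
\le a d (Mv)^{s+1}\, 3a\bigl(1 + 20 \exp(M^2)\bigr) P(|Y|_B > r)/(s+1)!
\le 3a \bigl[21\, a d (Mv)^{s+1} \exp(M^2)\bigr] P(|Y|_B > r)/(s+1)!\,.
```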

Therefore the definition of $M$ (see (4.7)) yields $\Delta \le 3a P(|Y|_B > r)$, which together with estimate (4.9) of the Gaussian probability in (4.11) completes the proof of (4.10). It remains to prove (4.12). Denote

$$A = \{x : t(x) \le 0\}, \quad C = \{x : 0 < t(x) \le r_{n-1}\}, \quad D = \{x : t(x) > r_{n-1}\}.$$

Obviously $J \le J_A + J_C + J_D$, where

$$J_N = \int_N \delta_{n-1}(t)\, |x|_F^{s+1}\, H_n(dx), \quad N = A, C, D.$$

The estimates

$$J_A \le 3a^2 v^{s+1} P(|Y|_B > r), \qquad (4.13)$$

$$J_C \le 9a M_1 v^{s+1} P(|Y|_B > r), \qquad (4.14)$$

$$J_D \le a M_1 v^{s+1} \exp(M^2) P(|Y|_B > r) \qquad (4.15)$$

clearly imply (4.12). Thus it remains to estimate $J_A$, $J_C$, $J_D$.

Estimation of $J_A$. We shall consider two cases: (i) $r \le \varepsilon$ and (ii) $r > \varepsilon$. (i) Let us show that in this case (i.e. $r \le \varepsilon$)

$$J_A \le v^{s+1}, \quad 1 \le 3a^2 P(|Y|_B > r), \qquad (4.16)$$


which clearly imply (4.13). For each $h > v_n$,

$$J_A \le \int_F |x|_F^{s+1}\, H_n(dx) = h^{s+1} \int_F |x/h|_F^{s+1}\, H_n(dx) \le h^{s+1} v_n(h) \le h^{s+1},$$

since $v_n(h) \le 1$ when $h > v_n$. Thus $J_A \le h^{s+1}$ when $h > v_n$ and therefore $J_A \le v_n^{s+1}$, which implies the first inequality in (4.16) since $v_n \le v$. Using (4.5) we get

$$1 \le a \exp(r^2) P(|Y|_B > r) \quad (\text{since } r \le \varepsilon)$$

$$\le a \exp(\varepsilon r) P(|Y|_B > r) \quad (\text{since } r\varepsilon \le 1, \text{ see (4.8)})$$

$$\le 3a P(|Y|_B > r),$$

which yields the second inequality in (4.16) if we note that $a \ge 1$.

(ii) Let us prove that in this case (i.e. $r > \varepsilon$)

$$1 \le 3a^2 \exp(|x/v|_F^{\gamma}) P(|Y|_B > r) \quad \text{when } t(x) \le 0. \qquad (4.17)$$

Inequality (4.5) implies

$$1 \le a \exp\bigl((r - \varepsilon)^{2-\gamma}(r - \varepsilon)^{\gamma}\bigr) P(|Y|_B > r - \varepsilon)$$

(since $r > \varepsilon$ and $t(x) \le 0$ imply $0 < r - \varepsilon \le n^{-1/2}|x|_B$)

$$\le a \exp\bigl(r^{2-\gamma} n^{-\gamma/2} |x|_B^{\gamma}\bigr) P(|Y|_B > r - \varepsilon),$$

which implies (4.17) if we note that $r \le r_n \Rightarrow r^{2-\gamma} n^{-\gamma/2} \le v^{-\gamma}$ and use (4.9) for the estimation of the Gaussian probability. Estimate (4.17), the definitions of $J_A$, $v_n$ and the inequality $v_n \le v$ imply (4.13).

Estimation of $J_C$. Let us show that

$$\delta_{n-1}(t) \le 3a M_1 \exp(|x/v|_F^{\gamma}) P(|Y|_B > r) \qquad (4.18)$$

when $0 < t = t(x) \le r_{n-1}$. Since $t \le r_{n-1}$ we can use the induction assumption, that is, estimate (4.10) with $n$ replaced by $n - 1$. We get

$$\delta_{n-1}(t) \le M_1 P(|Y|_B > t)$$

(note that $t = t(x) = r - (\varepsilon + n^{-1/2}|x|_B)$ and use (4.4))

$$\le M_1 a \exp(r\varepsilon) \exp(n^{-1/2}|x|_B\, r) P(|Y|_B > r)$$

(note that $\varepsilon r \le 1$ (see (4.8)) and that $t(x) > 0 \Rightarrow |x|_B \le r n^{1/2} \Rightarrow |x|_B^{1-\gamma} \le r^{1-\gamma} n^{(1-\gamma)/2}$)

$$\le 3a M_1 \exp(|x|_B^{\gamma} r^{2-\gamma} n^{-\gamma/2}) P(|Y|_B > r),$$

which implies (4.18) since $r^{2-\gamma} n^{-\gamma/2} \le v^{-\gamma}$ when $r \le r_n$. Inequality (4.18) and the definition of $J_C$ imply (4.14).

Estimation of $J_D$. The inequality

$$\delta_{n-1}(t) \le a M_1 \exp(M^2) P(|Y|_B > r) \quad \text{when } t > r_{n-1} \qquad (4.19)$$

and the definitions of $J_D$, $v_n$ and $v$ clearly imply (4.15). Let us prove (4.19). If $t > r_{n-1}$ then using the induction assumption (i.e. (4.10) with $n$ replaced by $n - 1$) we have

$$\delta_{n-1}(t) \le \delta_{n-1}(r_{n-1}) \le M_1 P(|Y|_B > r_{n-1}) \quad \text{(use (4.4))}$$

$$\le a M_1 \exp\bigl(r_n(r_n - r_{n-1})\bigr) P(|Y|_B > r_n)$$

Large deviations in Banach spaces 145

(note that r < r, and that r, = M2 n~ 0 = 7/(4 - 27), 0 < 0 < 1/2)

< a M 1 exp(M~n~ ~ - (n - 1)~ YI~ > r.), which yields (4.19) since M e < M and n~ ~ - (n - 1) ~ < 1 when n = 1, 2 . . . . , 0 < 0 < 1/2.
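The elementary bound n^θ(n^θ − (n − 1)^θ) ≤ 1 invoked in the last step can be verified directly; the following is a sketch.

```latex
% For n = 1 the left-hand side equals 1. For n >= 2, concavity of
% u \mapsto u^{\theta} gives
n^{\theta} - (n-1)^{\theta} \le \theta\,(n-1)^{\theta-1},
% while n \le 2(n-1) yields n^{\theta} \le 2^{\theta}(n-1)^{\theta}; hence
n^{\theta}\bigl(n^{\theta} - (n-1)^{\theta}\bigr)
  \le \theta\,2^{\theta}\,(n-1)^{2\theta-1}
  \le \theta\,2^{\theta} < 1,
% since 2\theta - 1 < 0 and \theta\,2^{\theta} \le 2^{1/2}/2 < 1 for
% 0 < \theta < 1/2.
```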

Throughout we shall denote

δ_n(t_1, t_2) = sup P(t_1 < |S_n|_B ≤ t_2),

ρ_n(r) = sup |P(|S_n|_B > r) − P(|Y|_B > r)|,

where sup is taken over all arrays (4.1) such that v_n ≤ v for all n = 1, 2, ....

Lemma 4.2. If ε > 0, n ≥ 2, then

|P(|S_n|_B > r) − P(|Y|_B > r)| ≤ P(r − ε < |Y|_B < r + ε) + J d ε^{−s−1} n^{(1−s)/2}/(s + 1)!,

where

J = ∫_F δ_{n−1}(t_1, t_2)|x|_B^{s+1} H_n(dx),

t_1 = r − ε − n^{−1/2}|x|_B,  t_2 = (r + ε + n^{−1/2}|x|_B) exp(1/n).

Proof. It is known (Kuelbs and Kurtz 1974; Paulauskas 1976) that

|P(|S_n|_B > r) − P(|Y|_B > r)| ≤ P(r − ε < |Y|_B < r + ε) + Δ,  (4.20)

where

Δ = max_{i=1,2} |Ef_i(S_n) − Ef_i(Y)|,

the function f_1 satisfies condition C, the function f_2 satisfies estimate (2.4) and

f_2(x) = 0 when |x|_B ≤ r,  f_2(x) = 1 when |x|_B ≥ r + ε,  0 ≤ f_2 ≤ 1.

Condition C ensures that such a function f_2 does exist. For the sake of completeness let us give the proof of (4.20). If, for instance, P(|S_n|_B > r) ≥ P(|Y|_B > r), then

P(|S_n|_B > r) − P(|Y|_B > r) ≤ Ef_1(S_n) − P(|Y|_B > r)

≤ Δ + Ef_1(Y) − P(|Y|_B > r)

≤ Δ + P(r − ε < |Y|_B ≤ r).

Estimate (4.20) allows us to repeat the proof of Lemma 4.1. The only difference is that one should note that f_i(x) ≡ 1, and therefore f_i^{(s+1)}(x) = 0, when |x|_B ≥ r + ε, i = 1, 2.

Proof of Theorem 2.2. The proof is similar to that of Theorem 2.1, but the induction assumption should be replaced by the estimate of Theorem 2.1. Let us outline only the main differences.

Similarly to the proof of Theorem 2.1 we can assume that b = max(b_1, b_2) = 1. Therefore inequalities (2.2) and (2.3) imply

P(|Y|_B > r − ε) ≤ a exp(εr)P(|Y|_B > r)  (4.21)

and, when r ≥ r_0,

P(r − ε < |Y|_B < r + ε) ≤ ε(r + 1)P(|Y|_B > r − ε).  (4.22)

Clearly (4.21) implies (4.5). The definitions of the constants M, M_1, M_2 coincide with those in the proof of Theorem 2.1. Denote

ε = ε_n = (T/M)^{(s+1)/(s+2)}/(r + 1),  r_n = M_2 n^{γ/(4−2γ)},

where T = (r + 1)n^{(1−s)/(2s+2)}. It is easy to verify that

εr ≤ 1,  r^{2−γ} n^{−γ/2} ≤ v^{−γ} when r ≤ r_n.

It is sufficient to prove that

ρ_n(r) ≤ M_1(T/M)^{(s+1)/(s+2)}P(|Y|_B > r)  (4.23)

when r_0 ≤ r ≤ r_n. The theorem is an obvious consequence of (4.23). We shall consider only the case n ≥ 2 since the proof for n = 1 is very similar to that of Theorem 2.1 in the same case. Lemma 4.2 yields

|P(|S_n|_B > r) − P(|Y|_B > r)| ≤ P(r − ε < |Y|_B < r + ε) + I,  (4.24)

where

I = J d ε^{−s−1} n^{(1−s)/2}/(s + 1)!,

J = ∫_F δ_{n−1}(t)|x|_B^{s+1} H_n(dx),  t = r − ε − n^{−1/2}|x|_B.

Let us note that the assertion of Theorem 2.1 remains valid if we replace b_1 by b since b_1 ≤ b. Thus, proceeding similarly to the proof of (4.12) but replacing the induction assumption by the estimate of Theorem 2.1, we get

J ≤ av^{1+s}(3a + 10M_1) exp(M^2)P(|Y|_B > r)  (4.25)

when r_0 ≤ r ≤ r_n. Using (4.21), (4.22) and εr ≤ 1 we obtain

P(r − ε < |Y|_B < r + ε) ≤ 3aε(r + 1)P(|Y|_B > r).

Therefore (4.25), (4.24), M_1 = 6a and the definition of M imply

ρ_n(r) ≤ 3a[ε(r + 1) + (εM)^{−s−1} n^{(1−s)/2}]P(|Y|_B > r).

This inequality yields (4.23) since the definitions of ε = ε_n and T imply

ε(r + 1) = (εM)^{−s−1} n^{(1−s)/2} = (T/M)^{(s+1)/(s+2)}.

Proof of Theorem 2.3. The proof is based on iterations of estimates, beginning from the estimate of Theorem 2.2.

During the proof we assume without loss of generality that b = 1. Therefore (2.2), (2.3) and condition B imply that for all r ≥ 0, ε > 0

P(|Y|_B > r − ε) ≤ a exp(εr)P(|Y|_B > r),  (4.26)

P(r − ε < |Y|_B < r + ε) ≤ ε(r + 1)P(|Y|_B > r − ε),  (4.27)

a exp(r^2)P(|Y|_B > r) ≥ 1.  (4.28)

It is sufficient to prove that there exists a constant C = C(γ) > 0 such that

ρ_n(r) ≤ M_1 T_1(1 + M^{1/n})P(|Y|_B > r) when r ≤ r_n,  (4.29)

where

M_1 = 12a,  T_1 = T/M,  T = (r + 1)n^{(1−s)/(2s+2)},

M_2 = (1/12) min(M, v^{γ/(γ−2)}),  r_n = −2 ln n + M_2 n^{γ/(4−2γ)},

and the constant M is arbitrary, satisfying

C a d (Mv)^{1+s}(1 + v) exp(2M^2) ≤ 1.

The theorem is an obvious consequence of (4.29). Using the notation of (4.29) we can reformulate Theorem 2.2 as follows. If a constant C ≥ 21/(s + 1)! (note that s depends only on γ), then

ρ_n(r) ≤ (M_1/2)T_1^{(s+1)/(s+2)}P(|Y|_B > r)  (4.30)

when r ≤ 12(r_n + 2 ln n). During the proof of (4.29) we may assume that

r ≥ 0,  0 < T_1 ≤ 1,  n ≥ 5.  (4.31)

Indeed, if r < 0 then ρ_n(r) = 0 and (4.29) is obvious. Clearly T_1 > 0. If T_1 ≥ 1 then T_1^{(s+1)/(s+2)} ≤ T_1 and (4.30) implies (4.29). If n ≤ 4 then it is easy to verify (if we note that s ≥ 2 and r + 1 ≥ 1 when r ≥ 0) that

T_1^{(s+1)/(s+2)} ≤ 2T_1(1 + M^{1/n}),

and therefore (4.30) yields (4.29). Denote l = [ln n], where [u] is the integer part of the number u. Clearly

l ≥ 1,  −1 + ln n ≤ l ≤ ln n,  n − l ≥ n/2 when n ≥ 5.  (4.32)

Denote

α_m = 1 − (s + 2)^{−m−1},  ε_m = T_1^{α_m}/(r + 1),

and consider the intervals

I_m = (−∞, −2m + exp(−2m/n)M_2 n^{γ/(4−2γ)}/4),  0 ≤ m ≤ l.

Using induction in m we shall prove that there exists a constant C = C(γ) > 0 such that the inequality

C a d (vM)^{1+s}(1 + M^2)(1 + v) exp(M^2) ≤ 1

implies

ρ_{n−l+m}(r) ≤ 6aT_1^{α_m}P(|Y|_B > r)  (4.33)

when r ∈ I_m, 0 ≤ m ≤ l. Letting m = l in (4.33) and noting that

I_l = (−∞, r_n),  T_1^{α_l−1} ≤ 2(1 + M^{1/n}),  (4.34)

we derive (4.29) (we omit the verification of (4.34) since it is based on (4.31), (4.32) and elementary calculations).

It remains to prove (4.33). The proof is based on induction in m. Let us consider the case m = 0. Estimate (4.30) implies

ρ_{n−l}(r) ≤ (M_1/2)[(r + 1)(n − l)^{(1−s)/(2s+2)}/M]^{(s+1)/(s+2)}P(|Y|_B > r)

when r ≤ 12[r_{n−l} + 2 ln(n − l)]. This estimate and elementary calculations imply (4.33) for m = 0 if we note that n − l ≥ n/2 (see (4.32)).

Let us consider the case m ≥ 1. Suppose that (4.33) is already proven for 0, ..., m − 1. Let us prove (4.33) for m. Letting ε = ε_m in Lemma 4.2 and noting that n/2 ≤ n − l + m ≤ n, we obtain

|P(|S_{n−l+m}|_B > r) − P(|Y|_B > r)| ≤ P(r − ε < |Y|_B < r + ε) + c d M^{1+s} T_1^{α_m} J / T_1^{α_{m−1}},  (4.35)

where c = c(γ) here and in the sequel denotes various positive constants depending only on γ,

J = ∫_F δ(t_1, t_2)|x|_B^{s+1} H(dx),

δ(t_1, t_2) = δ_{n−l+m−1}(t_1, t_2),  H = H_{n−l+m},

t_1 = r − ε − τ/2,  t_2 = (r + ε + τ/2) exp(2/n),

τ = 4|x|_B n^{−1/2},  ε = ε_m.

During the proof of (4.33) we may assume that

ε ≤ 1,  εr ≤ 1,  ε(r + 1) ≤ 1.  (4.36)

Indeed, ε(r + 1) = T_1^{α_m} ≤ 1 since we assume that 0 < T_1 ≤ 1. Inequalities (4.26), (4.27) and (4.36) imply

P(r − ε < |Y|_B < r + ε) ≤ 3aT_1^{α_m}P(|Y|_B > r).  (4.37)

In view of (4.35) and (4.37), for the proof of (4.33) it is sufficient to show that J does not exceed

c a^2 v^{s+1}(1 + v)(1 + M^2)T_1^{α_{m−1}}P(|Y|_B > r)  (4.38)

when r ∈ I_m. Noting that δ(t_1, t_2) ≤ δ(t_1, r) + δ(r, t_2) and splitting the space F = {x: ε ≤ τ/2} ∪ {x: ε > τ/2}, we have J ≤ J_1 + J_2 + J_3 + J_4, where

J_1 = ∫_{ε≤τ/2} δ(r − τ, r)|x|_B^{s+1} H(dx),

J_2 = ∫_{ε>τ/2} δ(r − 2ε, r)|x|_B^{s+1} H(dx),

J_3 = ∫_{ε≤τ/2} δ(r, (r + τ) exp(2/n))|x|_B^{s+1} H(dx),

J_4 = ∫_{ε>τ/2} δ(r, (r + 2ε) exp(2/n))|x|_B^{s+1} H(dx).

Therefore (4.33) reduces to the proof that J_i, 1 ≤ i ≤ 4, do not exceed (4.38).

Estimation of J_1. Obviously J_1 ≤ A_1 + A_2, where

A_1 = ∫_{r≤τ} δ(−∞, r)|x|_B^{s+1} H(dx),

A_2 = ∫_{r>τ} δ(r − τ, r)|x|_B^{s+1} H(dx).

Therefore it is sufficient to show that A_1 and A_2 do not exceed (4.38).


Let us estimate A_1. For the estimation of δ(−∞, r) = δ_{n−l+m−1}(−∞, r) we may apply the induction assumption since r ∈ I_m ⊂ I_{m−1} and

|P(|S_n|_B ≤ r) − P(|Y|_B ≤ r)| = |P(|S_n|_B > r) − P(|Y|_B > r)|.

We get

δ(−∞, r) ≤ P(|Y|_B ≤ r) + caT_1^{α_{m−1}}P(|Y|_B > r).  (4.39)

A little later we shall show that

P(|Y|_B ≤ r) ≤ cavMT_1^{α_{m−1}} exp((|x|_B/v)^γ)P(|Y|_B > r)  (4.40)

when r ≤ τ. It follows from (4.40), (4.39) and the definition of A_1 that A_1 does not exceed (4.38). So let us prove (4.40). Inequality (4.27) implies P(|Y|_B ≤ r) ≤ r when r ≥ 0. When r ≤ τ we have

r ≤ τ(r + 1) ≤ c|x|_B T = c|x|_B MT_1 ≤ c|x|_B MT_1^{α_{m−1}}

since r ≥ 0, 0 < T_1 ≤ 1, 0 < α_{m−1} ≤ 1. The inequality z ≤ c exp(z^γ/2), z ≥ 0, implies

|x|_B ≤ cv exp((|x|_B/v)^γ/2),

and therefore

P(|Y|_B ≤ r) ≤ cvMT_1^{α_{m−1}} exp((|x|_B/v)^γ/2)

when r ≤ τ. This inequality implies (4.40) if we note that

1 ≤ a exp(r^2)P(|Y|_B > r)  (see (4.28))

and that, when r ≤ τ,

r^2 ≤ r^{2−γ}τ^γ = 4^γ r^{2−γ} n^{−γ/2}|x|_B^γ ≤ (|x|_B/v)^γ/2,

since 4^γ r^{2−γ} n^{−γ/2} ≤ v^{−γ}/2 when r ∈ I_m ⊂ I_0. The estimation of A_2 is similar to that of A_1. The only difference is

that the probabilities P(|Y|_B ≤ r) and P(|Y|_B > r) should be replaced by P(r − τ < |Y|_B ≤ r) and P(|Y|_B > r − τ) respectively. But (4.26), (4.27) and r ∈ I_m, r > τ imply

P(r − τ < |Y|_B ≤ r) ≤ τ(r + 1)P(|Y|_B > r − τ),

P(|Y|_B > r − τ) ≤ a exp(rτ)P(|Y|_B > r),

exp(rτ) ≤ exp((|x|_B/v)^γ/2),

which allows us to repeat the evaluation of A_1.

Estimation of J_2. Write

J_2 = ∫_{2ε>r} ... + ∫_{2ε≤r} ....

Similarly to the estimation of J_1 we use the induction assumption. This reduces the problem to the evaluation of the Gaussian probabilities. We have

P(|Y|_B ≤ r) ≤ r(r + 1) ≤ cε(r + 1) = cT_1^{α_m} ≤ cT_1^{α_{m−1}}

when r ≤ 2ε. If r > 2ε then

P(r − 2ε < |Y|_B ≤ r) ≤ 2ε(r + 1)P(|Y|_B > r − 2ε),

P(|Y|_B > r − 2ε) ≤ 9aP(|Y|_B > r),

since rε ≤ 1 and therefore exp(2rε) ≤ 9.

Estimation of J_3. Write

J_3 = B_1 + B_2 := ∫_{τ≤1} ... + ∫_{τ>1} ....

Let us estimate B_1. If τ ≤ 1 and r ∈ I_m then obviously (r + τ) exp(2/n) ∈ I_{m−1} (note that we assume that n ≥ 5). Therefore we can apply the induction assumption and reduce the problem to the evaluation of

P(r < |Y|_B ≤ (r + τ) exp(2/n)).

According to (4.27) this probability does not exceed (note that τ ≤ 1)

c[(r + τ) exp(2/n) − r](r + 1)P(|Y|_B > r).

Obviously

(r + τ) exp(2/n) − r ≤ c(τ + r/n),

r/n ≤ (r + 1)^2/n ≤ T^2 = M^2 T_1^2 ≤ M^2 T_1^{α_{m−1}},

τ(r + 1) ≤ c|x|_B MT_1^{α_{m−1}}.

Collecting the estimates we complete the evaluation of B_1. Let us estimate B_2. Note that

B_2 ≤ ∫_{τ>1} τδ(r, ∞)|x|_B^{s+1} H(dx).

Now we can apply the induction assumption and repeat the evaluation of J_1.

Estimation of J_4. Note that ε ≤ 1, and therefore r ∈ I_m implies (r + 2ε) exp(2/n) ∈ I_{m−1}. Thus we can apply the induction assumption and repeat the evaluation of J_2.

Proof of Theorem 2.5. Let us prove (2.5) ⇒ (2.6). Denote

R_n = n^{γ/(4−2γ)},  a_n(r) = P(|S_n|_B > r)/P(|Y|_B > r).

Clearly inf_{r≤M} P(|Y|_B > r) > 0 since P(Y = 0) ≠ 1. Therefore (2.5) implies that the sequence

sup{a_n(r): r ≤ f(n)R_n}  (4.41)

is bounded for every function f such that f(n) → 0 as n → ∞. It is sufficient to prove that if (2.6) is not fulfilled then (4.41) is violated. So assume that (2.6) fails. Consider the sets

A_k = {n: sup{a_n(r): r ≤ R_n/k} > k},  k = 1, 2, ....

Obviously A_k ⊃ A_{k+1} and every set A_k is infinite. Choose n_k ∈ A_k so that 1 = n_0 < n_1 < n_2 < ... and define f(n) = 1/k if n_{k−1} < n ≤ n_k. Then

sup{a_n(r): r ≤ f(n)R_n} > k when n = n_k,

and therefore (4.41) is violated.

Proof of (2.6) ⇒ (2.7). It is sufficient to note that, according to the well-known Fernique-Skorohod-Shepp theorem, E exp(a|Y|_B^2) < ∞ for some a > 0.


Proof of (2.7) ⇒ (2.8). At first we assume that X is symmetric. Denote r_n = M_4 n^{γ/(4−2γ)}, a_n = n^{1/2} r_n. Employing the Lévy inequality (see, e.g., Araujo and Giné 1980) we get

2P(|S_n|_B > r_n) ≥ P(max_{1≤k≤n} |X_k|_B > a_n) = 1 − (1 − P(|X|_B > a_n))^n ≥ e^{−1}P(|X|_B > a_n).

Therefore (2.7) implies

P(|X|_B > a_n) ≤ 2eM_2 exp(−M_3 r_n^2).

Obviously

E exp(h|X|_B^γ) ≤ exp(h a_1^γ) + I,

where

I = Σ_{n≥1} exp(h a_{n+1}^γ)P(|X|_B > a_n) ≤ 2eM_2 Σ_{n≥1} exp(h a_{n+1}^γ − M_3 r_n^2) < ∞

if we choose h < M_3 M_4^{2−γ}/2.

Now suppose that X is not symmetric. Denote by X, X', X_1, X_1', ... a sequence of i.i.d. random variables and set X_n^s = X_n − X_n', S_n^s = n^{−1/2}(X_1^s + ... + X_n^s). Obviously P(|S_n^s|_B > r) ≤ 2P(|S_n|_B > r/2) and, similarly to the symmetric case, E g(X^s) < ∞, where g(x) = exp(h|x|_B^γ). The function g_C(x) = max(C, g(x)) is convex if we choose a number C = C(h, γ) sufficiently large. Using the Jensen inequality we get

E g_C(X) = E g_C(X − E(X' | X)) ≤ E E(g_C(X − X') | X) = E g_C(X^s),

since EX' = 0. Therefore (2.8) is fulfilled since E g_C(X) < ∞ iff E g(X) < ∞.
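The admissible range of h in the symmetric case rests on a short exponent computation, which can be spelled out as follows (a verification sketch in the notation of the proof).

```latex
% With a_n = n^{1/2} r_n and r_n = M_4 n^{\gamma/(4-2\gamma)}:
%   \gamma/2 + \gamma^{2}/(4-2\gamma) = 2\gamma/(4-2\gamma),
% so the exponents of n in a_n^{\gamma} and r_n^{2} coincide:
a_n^{\gamma} = M_4^{\gamma}\, n^{2\gamma/(4-2\gamma)},
\qquad
r_n^{2} = M_4^{2}\, n^{2\gamma/(4-2\gamma)} .
% Since 2\gamma/(4-2\gamma) = \gamma/(2-\gamma) < 1, we have
% (n+1)^{2\gamma/(4-2\gamma)} \le 2\, n^{2\gamma/(4-2\gamma)} for n \ge 1, hence
h\, a_{n+1}^{\gamma} - M_3 r_n^{2}
  \le -\bigl(M_3 M_4^{2} - 2 h M_4^{\gamma}\bigr)\, n^{2\gamma/(4-2\gamma)},
% and the coefficient M_3 M_4^{2} - 2 h M_4^{\gamma} is positive exactly when
% h < M_3 M_4^{2-\gamma}/2, which makes the series converge.
```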

Proof of Theorem 2.4. Theorem 2.2 implies a) ⇒ d). Obviously d) ⇒ e), c), b). Theorem 2.5 yields e) ⇒ c), e) ⇒ b), b) ⇒ a). Thus assertions a)-e) are equivalent. The proof remains the same when condition B is fulfilled, but Theorem 2.2 should be replaced by Theorem 2.3.

Proof of Theorem 2.6. Theorem 2 in Bentkus (1986a) yields a result very similar to Theorem 2.6. The only difference is that d) in the formulation of Theorem 2.6 should be replaced by

d') If 0 < a < b < ∞ then there exists a constant C = C(L(X), a, b) > 0 such that

inf_{a≤r≤b} |P(|S_n|_B > r) − P(|Y|_B > r)| ≥ C n^{(1−s)/(2s+2)}

for infinitely many n. But d') implies d) since the Fernique-Skorohod-Shepp theorem ensures that

sup_{a≤r≤b} (r + 1)P(|Y|_B > r) < ∞.

Proof o f Theorem 1.1. It is sufficient to take F = B = H in Theorem 2,4 and note that in the Hilbert space conditions B and C are fulfilled (see, e.g. Paulauskas and Ra~kauskas 1989).


Proof of Theorem 2.7. We shall show that Theorem 2.7 is a consequence of Theorem 2.4.

Denote by lip_α, 0 < α ≤ 1, the space of functions φ: [0, 1] → C^1 such that

|φ|_α = |φ(0)| + sup_{s≠t} |φ(t) − φ(s)|/|t − s|^α < ∞

and φ(t) − φ(s) = o(|t − s|^α) as t − s → 0. It is known (Bentkus 1983) that the pair of Banach spaces lip_α ⊂ C[0, 1] satisfies condition C when 1/2 < α ≤ 1. Clearly

|exp(ix) − exp(iy)| ≤ 2^{1−β}|x − y|^β when x, y ∈ R^1,

0 < β ≤ 1. Therefore the random process X(t) = exp(itz) belongs to lip_{δ+1/2}, and E|X|_{δ+1/2}^3 < ∞ when 0 < δ < ε/3 and E|z|^{ε+3/2} < ∞. Moreover, X is bounded in C[0, 1]. It is known (Acosta et al. 1977; Bentkus 1983) that the inclusion map lip_α ⊂ lip_β is of type 2 when α > β. Therefore P(Y ∈ lip_{δ+1/2}) = 1. The coincidence of the covariance functions of X(t) and Y(t) and E|X|_{δ+1/2}^3 < ∞, 0 < δ < ε/3, imply condition A with s = 2 (Bentkus 1983). Therefore the pair F = lip_{δ+1/2} ⊂ B = C[0, 1] and the random variables X, Y satisfy the conditions of Theorem 2.4.

Proof of Theorem 2.8. We shall show that the theorem is a consequence of Theorem 2.4 when F = B = L_p([0, 1]; q).

Let τ ∈ [0, 1] be a random variable uniformly distributed in [0, 1]. Define the random process

X(t) = I(τ ≤ t) − t,  0 ≤ t ≤ 1.

Clearly EX(t) = 0, EX(t)X(s) = min(t, s) − st, i.e. the mean and the covariance function of X(t) coincide with those of w_t − tw_1.
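The covariance identity above can be checked by splitting the integral over τ at the points t and s; below is a minimal sketch (the function name and the grid of test points are our own choices).

```python
# Check E X(t) X(s) = min(t, s) - t*s for X(t) = I(tau <= t) - t,
# tau uniform on [0, 1], by integrating over tau piecewise.
def bridge_cov(t, s):
    t, s = min(t, s), max(t, s)
    # tau in [0, t]: both indicators equal 1, integrand (1 - t)(1 - s)
    # tau in (t, s]: only I(tau <= s) equals 1, integrand (-t)(1 - s)
    # tau in (s, 1]: both indicators equal 0, integrand (-t)(-s)
    return t * (1 - t) * (1 - s) + (s - t) * (-t) * (1 - s) + (1 - s) * t * s

for i in range(1, 10):
    for j in range(1, 10):
        t, s = i / 10, j / 10
        assert abs(bridge_cov(t, s) - (min(t, s) - t * s)) < 1e-12
print("covariance identity verified on the grid")
```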

Let X_1, X_2, ... be independent copies of X. It is easy to verify that

F_n(t) − t = n^{−1/2}S_n(t),  ω_n^p(q) = ||S_n||_{p,q}^p,

where S_n(t) = n^{−1/2}(X_1(t) + ... + X_n(t)) and

||f||_{p,q} = (∫_0^1 |f(t)|^p q(t) dt)^{1/p}

is the norm of the Banach space L_p = L_p([0, 1]; q). Changing the order of integration and noting that [0, 1] = [0, t] ∪ (t, 1], we get

E||X||_{p,q}^p = ∫_0^1 ∫_0^1 |I(τ ≤ t) − t|^p q(t) dt dτ = ∫_0^1 [(1 − t)^p t + t^p(1 − t)]q(t) dt.

Therefore

E||X||_{p,q}^p < ∞ ⟺ ∫_0^1 t(1 − t)q(t) dt < ∞.  (4.42)
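For instance, for p = 2 and q(t) ≡ 1 (the case of Corollary 1.2) the integral can be evaluated in closed form:

```latex
E\|X\|_{2,q}^{2}
  = \int_0^1 \bigl[(1-t)^{2}\,t + t^{2}(1-t)\bigr]\,dt
  = \int_0^1 t(1-t)\,dt
  = \tfrac{1}{6} < \infty ,
```

in agreement with (4.42); note that (1 − t)^2 t + t^2(1 − t) = t(1 − t) = EX(t)^2, the variance of the Brownian bridge at time t.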


Let us prove (2.10). The space L_p, p ≥ 2, is of type 2 (see, e.g., Paulauskas and Račkauskas 1989) and therefore the conditions

EX = 0,  E||X||_{p,q}^2 < ∞

ensure that X satisfies the central limit theorem in L_p. This means that there exists a Gaussian random variable Y ∈ L_p such that EY = 0, the covariances of X and Y coincide, and L(S_n) converges weakly to L(Y). It is known that the distribution function r ↦ P(||Y||_{p,q} ≤ r) is continuous (see, e.g., Paulauskas and Račkauskas 1989). Therefore the well-known facts concerning weak convergence (see, e.g., Billingsley 1968) ensure that (4.42) implies

lim_{n→∞} P(ω_n^p(q) ≤ x) = P(||Y||_{p,q}^p ≤ x),  p ≥ 2,

which completes the proof of (2.10) if we note that the distribution of Y coincides with that of w_t − tw_1. Similarly to (4.42) one can verify that

∃ h > 0:  E exp(h||X||_{p,q}^γ) < ∞

if and only if condition a) in the formulation of Theorem 2.4 is fulfilled. Therefore the theorem follows from Theorem 2.4 if we verify conditions B and C. But it is well known (see, e.g., Paulauskas and Račkauskas 1989) that condition B is fulfilled when p ≥ 1 and that condition C is fulfilled with s = 2 when p = 2 or p > 3. When p = 3 the inequality (2.4) for k = 3 in condition C should be replaced by the somewhat weaker one

|f''(x + h)h^2 − f''(x)h^2| ≤ dε^{−3}||h||_{p,q}^3,

which is still sufficient for all the theorems of the paper if we make certain technical changes in the proofs.

Proof of Corollary 1.2. It is sufficient to put p = 2, q(t) ≡ 1 in Theorem 2.8.
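In the setting of Corollary 1.2 (p = 2, q(t) ≡ 1) the statistic ω_n^2(q) = n∫_0^1 (F_n(t) − t)^2 dt is the classical Cramér-von Mises statistic. The following sketch (sample size and seed are arbitrary choices for the illustration) compares the definition, evaluated by exact piecewise integration of the step function F_n, with its standard closed form.

```python
import random

# omega_n^2 = n * int_0^1 (F_n(t) - t)^2 dt for an ordered uniform sample
# u_(1) <= ... <= u_(n), computed two ways.
random.seed(1)                      # arbitrary seed for the demo
n = 50
u = sorted(random.random() for _ in range(n))

# (a) classical closed form: 1/(12n) + sum_i ((2i - 1)/(2n) - u_(i))^2
closed = 1.0 / (12 * n) + sum(
    ((2 * i - 1) / (2 * n) - u[i - 1]) ** 2 for i in range(1, n + 1)
)

# (b) exact piecewise integration: F_n(t) = k/n on [pts[k], pts[k+1])
pts = [0.0] + u + [1.0]
direct = 0.0
for k in range(n + 1):
    a, b, c = pts[k], pts[k + 1], k / n
    direct += ((c - a) ** 3 - (c - b) ** 3) / 3.0   # int_a^b (c - t)^2 dt
direct *= n

assert abs(closed - direct) < 1e-12
print(round(closed, 6))
```

The agreement of the two computations illustrates the identity behind the classical computational formula for ω_n^2.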

References

Acosta, A. de, Araujo, A. de, Giné, E.: On Poisson measures, Gaussian measures and the central limit theorem in Banach spaces. In: Probabilities in Banach spaces, pp. 1-58. New York, Basel 1977

Araujo, A., Gine, E.: The central limit theorem for real and Banach valued random variables. New York Chichester Brisbane Toronto: Wiley 1980

Bentkus, V.: Estimates of the proximity of sums of independent random elements in the space C[0, 1]. Lith. Math. J. 23, 1-8 (1983)

Bentkus, V.: Lower bounds for the rate of convergence in the central limit theorem in Banach spaces. Lith. Math. J. 25, 312-320 (1985)

Bentkus, V.: On large deviations in Banach spaces (Russian). Teor. Veroyatn. Primen. 31, 710-716 (1986a)

Bentkus, V.: On large deviations in Banach spaces. Theses of the 1st World Congress of the Bernoulli Society, Tashkent, USSR, 1986, p. 757. Moscow: Nauka 1986b

Bentkus, V., Račkauskas, A.: Estimates of the rate of convergence of sums of independent random variables in a Banach space, I. Lith. Math. J. 22, 222-234 (1982a)

Bentkus, V., Račkauskas, A.: Estimates of the rate of convergence of sums of independent random variables in a Banach space, II. Lith. Math. J. 22, 344-353 (1982b)

Bentkus, V., Zitikis, R.: Probabilities of large deviations for L-statistics. Lith. Math. J. (to appear 1990)


Bergström, H.: On asymptotic expansions of probability functions. Skand. Aktuarietidskr. 1-2, 1-24 (1951)

Billingsley, P.: Convergence of probability measures. New York: Wiley 1968

Bolthausen, E.: Exact convergence rates in some martingale central limit theorems. Ann. Probab. 10, 672-688 (1982)

Butzer, P.L., Hahn, L., Westphal, U.: On the rate of approximation in the central limit theorem. J. Approximation Theory 13, 327-340 (1975)

Ehrhard, A.: Sur la densité du maximum d'une fonction aléatoire gaussienne. Séminaire de probabilités XVI, Univ. Strasbourg 1980/81 (Lect. Notes Math., vol. 920, pp. 581-601) Berlin Heidelberg New York: Springer 1982

Götze, F.: On the rate of convergence in the central limit theorem in Banach spaces. Ann. Probab. 14, 922-942 (1986)

Haeusler, E.: A note on the rate of convergence in the martingale central limit theorem. Ann. Probab. 12, 635-639 (1984)

Ibragimov, I.A., Linnik, Yu.V.: Independent and stationary connected variables (Russian). Moscow: Nauka 1965

Kuelbs, J., Kurtz, T.: Berry-Esseen estimates in Hilbert space and application to the law of the iterated logarithm. Ann. Probab. 2, 387-407 (1974)

Osipov, L.V.: On large deviations for sums of independent random vectors (Russian). Abstracts of communications of the second Vilnius conference on probability theory and mathematical statistics, vol. 2, pp. 95-96 (1977)

Paulauskas, V.I.: On the convergence rate in the central limit theorem in certain Banach spaces. Theor. Probab. Appl. 21, 775-791 (1976)

Paulauskas, V.I.: On the approximation of indicator functions by smooth functions in Banach spaces. In: Functional analysis and approximation. Basel: Birkhäuser 1981

Paulauskas, V., Račkauskas, A.: Approximation theory in the central limit theorem. Exact results in Banach spaces. Dordrecht: Kluwer 1989

Petrov, V.V.: Sums of independent random variables. Berlin Heidelberg New York: Springer 1975

Rhee, W.S., Talagrand, M.: Bad rates of convergence for the central limit theorem in Hilbert space. Ann. Probab. 12, 843-850 (1984)

Rudzkis, R., Saulis, L., Statulevičius, V.A.: Large deviations of sums of independent random variables. Lith. Math. J. 19, 118-125 (1979)

Sazonov, V.V.: Normal approximation: some recent advances. (Lect. Notes Math., vol. 879, pp. 1-105) Berlin Heidelberg New York: Springer 1981

Tsirel'son, B.S.: The density of the maximum of a Gaussian process. Theor. Probab. Appl. 20, 847-855 (1975)

Yurinskii, V.V.: Exponential inequalities for sums of random vectors. J. Multivar. Anal. 6, 473-499 (1976)

Zolotarev, V.M.: Approximation of distributions of sums of independent random variables with values in infinite-dimensional spaces. Teor. Veroyatn. Primen. 21, 741-757 (1976)