
Asymptotic Analysis of Random Walks

This book focuses on the asymptotic behaviour of the probabilities of large deviations of the trajectories of random walks with 'heavy-tailed' (in particular, regularly varying, sub- and semiexponential) jump distributions. Large deviation probabilities are of great interest in numerous applied areas, typical examples being ruin probabilities in risk theory, error probabilities in mathematical statistics and buffer-overflow probabilities in queueing theory. The classical large deviation theory, developed for distributions decaying exponentially fast (or even faster) at infinity, mostly uses analytical methods. If the fast decay condition fails, which is the case in many important applied problems, then direct probabilistic methods usually prove to be efficient. This monograph presents a unified and systematic exposition of large deviation theory for heavy-tailed random walks. Most of the results presented in the book appear in a monograph for the first time. Many of them were obtained by the authors.

Professor Alexander Borovkov works at the Sobolev Institute of Mathematics in Novosibirsk.

Professor Konstantin Borovkov is a staff member in the Department of Mathematics and Statistics at the University of Melbourne.


Asymptotic Analysis of Random Walks
Heavy-Tailed Distributions

A.A. BOROVKOV

K.A. BOROVKOV

Translated by

O.B. BOROVKOVA


Cambridge University Press
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo, Delhi

Cambridge University Press, The Edinburgh Building, Cambridge CB2 8RU, UK

Published in the United States of America by Cambridge University Press, New York

www.cambridge.org

© A.A. Borovkov and K.A. Borovkov 2008

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2008

A catalogue record for this publication is available from the British Library

ISBN 978-0-521-88117-3 hardback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.

Printed in the United States of America


Contents

Notation
Introduction

1 Preliminaries
  1.1 Regularly varying functions and their main properties
  1.2 Subexponential distributions
  1.3 Locally subexponential distributions
  1.4 Asymptotic properties of 'functions of distributions'
  1.5 The convergence of distributions of sums of random variables with regularly varying tails to stable laws
  1.6 Functional limit theorems

2 Random walks with jumps having no finite first moment
  2.1 Introduction. The main approach to bounding from above the distribution tails of the maxima of sums of random variables
  2.2 Upper bounds for the distribution of the maximum of sums when α ≤ 1 and the left tail is arbitrary
  2.3 Upper bounds for the distribution of the sum of random variables when the left tail dominates the right tail
  2.4 Upper bounds for the distribution of the maximum of sums when the left tail is substantially heavier than the right tail
  2.5 Lower bounds for the distributions of the sums. Finiteness criteria for the maximum of the sums
  2.6 The asymptotic behaviour of the probabilities P(Sn ≥ x)
  2.7 The asymptotic behaviour of the probabilities P(S̄n ≥ x)

3 Random walks with jumps having finite mean and infinite variance
  3.1 Upper bounds for the distribution of S̄n
  3.2 Upper bounds for the distribution of S̄n(a), a > 0
  3.3 Lower bounds for the distribution of Sn
  3.4 Asymptotics of P(Sn ≥ x) and its refinements
  3.5 Asymptotics of P(S̄n ≥ x) and its refinements
  3.6 The asymptotics of P(S(a) ≥ x) with refinements and the general boundary problem
  3.7 Integro-local theorems on large deviations of Sn for index −α, α ∈ (0, 2)
  3.8 Uniform relative convergence to a stable law
  3.9 Analogues of the law of the iterated logarithm in the case of infinite variance

4 Random walks with jumps having finite variance
  4.1 Upper bounds for the distribution of S̄n
  4.2 Upper bounds for the distribution of S̄n(a), a > 0
  4.3 Lower bounds for the distributions of Sn and Sn(a)
  4.4 Asymptotics of P(Sn ≥ x) and its refinements
  4.5 Asymptotics of P(S̄n ≥ x) and its refinements
  4.6 Asymptotics of P(S(a) ≥ x) and its refinements. The general boundary problem
  4.7 Integro-local theorems for the sums Sn
  4.8 Extension of results on the asymptotics of P(Sn ≥ x) and P(S̄n ≥ x) to wider classes of jump distributions
  4.9 The distribution of the trajectory {Sk} given that Sn ≥ x or S̄n ≥ x

5 Random walks with semiexponential jump distributions
  5.1 Introduction
  5.2 Bounds for the distributions of Sn and S̄n, and their consequences
  5.3 Bounds for the distribution of S̄n(a)
  5.4 Large deviations of the sums Sn
  5.5 Large deviations of the maxima S̄n
  5.6 Large deviations of S̄n(a) when a > 0
  5.7 Large deviations of Sn(−a) when a > 0
  5.8 Integro-local and integral theorems on the whole real line
  5.9 Additivity (subexponentiality) zones for various distribution classes

6 Large deviations on the boundary of and outside the Cramér zone for random walks with jump distributions decaying exponentially fast
  6.1 Introduction. The main method of studying large deviations when Cramér's condition holds. Applicability bounds
  6.2 Integro-local theorems for sums Sn of r.v.'s with distributions from the class ER when the function V(t) is of index from the interval (−1, −3)
  6.3 Integro-local theorems for the sums Sn when the Cramér transform for the summands has a finite variance at the right boundary point
  6.4 The conditional distribution of the trajectory {Sk} given Sn ∈ Δ[x)
  6.5 Asymptotics of the probability of the crossing of a remote boundary by the random walk

7 Asymptotic properties of functions of regularly varying and semiexponential distributions. Asymptotics of the distributions of stopped sums and their maxima. An alternative approach to studying the asymptotics of P(S̄n ≥ x)
  7.1 Functions of regularly varying distributions
  7.2 Functions of semiexponential distributions
  7.3 Functions of distributions interpreted as the distributions of stopped sums. Asymptotics for the maxima of stopped sums
  7.4 Sums stopped at an arbitrary Markov time
  7.5 An alternative approach to studying the asymptotics of P(S̄n ≥ x) for sub- and semiexponential distributions of the summands
  7.6 A Poissonian representation for the supremum S and the time when it was attained

8 On the asymptotics of the first hitting times
  8.1 Introduction
  8.2 A fixed level x
  8.3 A growing level x

9 Integro-local and integral large deviation theorems for sums of random vectors
  9.1 Introduction
  9.2 Integro-local large deviation theorems for sums of independent random vectors with regularly varying distributions
  9.3 Integral theorems

10 Large deviations in trajectory space
  10.1 Introduction
  10.2 One-sided large deviations in trajectory space
  10.3 The general case

11 Large deviations of sums of random variables of two types
  11.1 The formulation of the problem for sums of random variables of two types
  11.2 Asymptotics of P(m, n, x) related to the class of regularly varying distributions
  11.3 Asymptotics of P(m, n, x) related to semiexponential distributions

12 Random walks with non-identically distributed jumps in the triangular array scheme in the case of infinite second moment. Transient phenomena
  12.1 Upper and lower bounds for the distributions of Sn and S̄n
  12.2 Asymptotics of the crossing of an arbitrary remote boundary
  12.3 Asymptotics of the probability of the crossing of an arbitrary remote boundary on an unbounded time interval. Bounds for the first crossing time
  12.4 Convergence in the triangular array scheme of random walks with non-identically distributed jumps to stable processes
  12.5 Transient phenomena

13 Random walks with non-identically distributed jumps in the triangular array scheme in the case of finite variances
  13.1 Upper and lower bounds for the distributions of Sn and S̄n
  13.2 Asymptotics of the probability of the crossing of an arbitrary remote boundary
  13.3 The invariance principle. Transient phenomena

14 Random walks with dependent jumps
  14.1 The classes of random walks with dependent jumps that admit asymptotic analysis
  14.2 Martingales on countable Markov chains. The main results of the asymptotic analysis when the jump variances can be infinite
  14.3 Martingales on countable Markov chains. The main results of the asymptotic analysis in the case of finite variances
  14.4 Arbitrary random walks on countable Markov chains

15 Extension of the results of Chapters 2–5 to continuous-time random processes with independent increments
  15.1 Introduction
  15.2 The first approach, based on using the closeness of the trajectories of processes in discrete and continuous time
  15.3 The construction of a full analogue of the asymptotic analysis from Chapters 2–5 for random processes with independent increments

16 Extension of the results of Chapters 3 and 4 to generalized renewal processes
  16.1 Introduction
  16.2 Large deviation probabilities for S(T) and S̄(T)
  16.3 Asymptotic expansions
  16.4 The crossing of arbitrary boundaries
  16.5 The case of linear boundaries

Bibliographic notes
References
Index


Notation

This list includes only the notation used systematically throughout the book.

Random variables and events

ξ1, ξ2, . . . are independent random variables (r.v.'s), assumed to be identically distributed in Chapters 1–11 and 16 (in which case ξj d= ξ)
ξ(a) = ξ − a
ξ⟨y⟩ is the r.v. ξ 'truncated' at the level y: P(ξ⟨y⟩ < t) = P(ξ < t)/P(ξ < y), t ≤ y
ξ^{(λ)} is an r.v. with distribution P(ξ^{(λ)} ∈ dt) = (e^{λt}/ϕ(λ)) P(ξ ∈ dt) (the Cramér transform)
ξ̄n = max_{k≤n} ξk
Sn = ∑_{j=1}^n ξj
S̄n = max_{k≤n} Sk
S = sup_{k≥0} Sk
S̲n = min_{k≤n} Sk
S̲̄n = max_{k≤n} |Sk| ≡ max{S̄n, −S̲n}
Sn(a) = ∑_{i=1}^n ξi(a) ≡ Sn − an
S̄n(a) = max_{k≤n} Sk(a) = max_{k≤n} (Sk − ak)
S(a) = sup_{k≥0} Sk(a) = sup_{k≥0} (Sk − ak)
Sn^{⟨y⟩} = ∑_{j=1}^n ξj^{⟨y⟩}
Sn^{(λ)} = ∑_{j=1}^n ξj^{(λ)}
τ1, τ2, . . . are independent identically distributed r.v.'s (τj > 0 in Chapter 16)
τ d= τj
tk = ∑_{j=1}^k τj
Gn is one of the events {Sn ≥ x}, {S̄n ≥ x} or {max_{k≤n} (Sk − g(k)) ≥ 0}
GT = {sup_{t≤T} (S(t) − g(t)) ≥ 0} (in Chapters 15 and 16)
1(A) is the indicator of the event A
d=, d≤, d≥ are equality and inequalities between r.v.'s in distribution
⇒ is used to denote convergence of r.v.'s in distribution

Distributions and their characteristics

The notation ζ ⊂= G means that the r.v. ζ has the distribution G
The notation ζn ⊂=⇒ G means that the distributions of the r.v.'s ζn converge weakly to the distribution G (as n → ∞)
Fj is the distribution of ξj (Fj = F in Chapters 1–11 and 16)
F+(t) = F([t, ∞)) ≡ P(ξ ≥ t), Fj,+(t) = Fj([t, ∞))
F−(t) = F((−∞, −t)) ≡ P(ξ < −t), Fj,−(t) = Fj((−∞, −t))
F(t) = F−(t) + F+(t), Fj(t) = Fj,−(t) + Fj,+(t)
F_I(t) = ∫_0^t F(u) du, F^I(t) = ∫_t^∞ F(u) du
V(t), W(t), U(t) are regularly varying functions (r.v.f.'s) (in Chapters 1–4):
  V(t) = t^{−α} L(t), α > 0
  W(t) = t^{−β} L_W(t), β > 0
  U(t) = t^{−γ} L_U(t), γ > 0
L(t), L_W(t), L_U(t), L_Y(t) are slowly varying functions (s.v.f.'s), corresponding to V, W, U, Y
V(t) = e^{−l(t)}, l(t) = t^α L(t), α ∈ (0, 1), L(t) an s.v.f. (in Chapter 5)
l(t) = t^α L(t) is the exponent of a semiexponential distribution
V̄(t) = max{V(t), W(t)}
Fτ is the distribution of τ
G is the distribution of ζ
α, β are the exponents of the right and left regularly varying distribution tails of ξ respectively, or those of their regularly varying majorants (or minorants)
ᾱ = max{α, β}
α̲ = min{α, β}
d = Var ξ = E(ξ − Eξ)²
f(λ) = E e^{iλξ} is the characteristic function (ch.f.) of ξ
g(λ) = E e^{iλζ} is the ch.f. of ζ
ϕ(λ) = E e^{λξ} is the moment generating function of ξ
(α, ρ) are the parameters of the limiting stable law
Fα,ρ is the (standard) stable distribution with parameters (α, ρ)
Fα,ρ,+(t) = Fα,ρ([t, ∞)), Fα,ρ,−(t) = Fα,ρ((−∞, −t)), t > 0
Fα,ρ(t) = Fα,ρ,+(t) + Fα,ρ,−(t), t > 0
Φ is the standard normal distribution
Φ(t) is the standard normal distribution function

Conditions on distributions

[ · , =] ⇔ F+(t) = V(t), t > 0
[ · , <] ⇔ F+(t) ≤ V(t), t > 0
[ · , >] ⇔ F+(t) ≥ V(t), t > 0
[=, · ] ⇔ F−(t) = W(t), t > 0
[<, · ] ⇔ F−(t) ≤ W(t), t > 0
[>, · ] ⇔ F−(t) ≥ W(t), t > 0
[=, =] ⇔ F+(t) = V(t), F−(t) = W(t), t > 0
[<, <] ⇔ F+(t) ≤ V(t), F−(t) ≤ W(t), t > 0
[>, >] ⇔ F+(t) ≥ V(t), F−(t) ≥ W(t), t > 0
[Rα,ρ] means that F(t) = t^{−α} L_F(t), α ∈ (0, 2], where L_F(t) is an s.v.f. and there exists the limit
  lim_{t→∞} F+(t)/F(t) =: ρ+ = (ρ + 1)/2 ∈ [0, 1]
[D(h,q)], h ∈ (0, 2], are conditions on the smoothness of F(t) at infinity; see § 3.4
[D(k,q)], k = 1, 2, . . . , are generalized conditions of the differentiability of F(t) at infinity; see § 4.4
[<] means that Fτ(t) ≤ Vτ(t) := t^{−γ} Lτ(t), where Lτ is an s.v.f.

Scalings

b(n) =
  F^{(−1)}(1/n)   if Eξ² = ∞, α < 2,
  Y^{(−1)}(1/n)   if Eξ² = ∞, α = 2,
  √(nd)           if d = Var ξ < ∞

σ(n) =
  V^{(−1)}(1/n)           if Eξ² = ∞,
  √((α − 2) d n ln n)     if d = Var ξ < ∞

σW(n) = W^{(−1)}(1/n)
σ1(n) = w1^{(−1)}(1/n), where w1(t) = t^{−2} l(t) (in Chapter 5)
σ2(n) = w2^{(−1)}(1/n), where w2(t) = t^{−2} l²(t) (in Chapter 5)

Combined conditions

[Q1]: Eξ² = ∞, [<, <], W(t) ≤ cV(t), x → ∞ and nV(x) → 0
[Q2]: Eξ² = ∞, [<, <], x → ∞ and nV(x/ln x) < c < ∞
[Q] = [Q1] ∪ [Q2]

Distribution classes

L is the class of distributions with asymptotically locally constant tails (or of their distribution tails)
R is the class of distributions with right tails regularly varying at infinity (or the class of their tails); in Chapters 2–4 the condition F+ ∈ R coincides with condition [ · , =]
ER is the class of regularly varying exponentially decaying distributions (or of their tails)
Se is the class of semiexponential distributions (or of their tails); in Chapter 5 the condition F+ ∈ Se coincides with condition [ · , =]
S+ is the class of subexponential distributions on R+ (or of their tails)
S is the class of subexponential distributions (or of their tails)
Sloc is the class of locally subexponential distributions
C is the class of distributions satisfying the Cramér condition
Ms is the class of distributions satisfying the condition E|ξ|^s < ∞
𝒮 is the class of stable distributions
Nα,ρ is the domain of normal attraction to the stable law Fα,ρ

Miscellaneous

∼ is the relation of asymptotic equivalence: A ∼ B ⇔ A/B → 1
≍ is the relation of asymptotic comparability: A ≍ B ⇔ A = O(B), B = O(A)
x+ = max{0, x}
⌊x⌋ denotes the integral part of x
r = x/y ≥ 1
φ = sign λ
Π = Π(x, n) = nV(x), Π(y) = Π(y, n) = nV(y)
Δ[x) = [x, x + Δ), Δ > 0 (in Chapter 9, Δ[x) is a cube in R^d with edge length Δ)
(x, y) = ∑_{i=1}^d x(i) y(i) is the scalar product of the vectors x, y ∈ R^d
|x| = (x, x)^{1/2}
S_{d−1} = {x ∈ R^d : |x| = 1} is the unit sphere in R^d
e(x) = x/|x|, x ∈ R^d


Introduction

This book is concerned with the asymptotic behaviour of the probabilities of rare events related to large deviations of the trajectories of random walks whose jump distributions decay not too fast at infinity and possess some form of 'regular behaviour'. Typically, we will be considering regularly varying, subexponential, semiexponential and other similar distributions. For brevity, all these distributions will be referred to in this Introduction as 'regular'. As the main key words for the contents of the present book one could list the following: random walks, large deviations, slowly decaying and, in particular, regular distributions. To the question why the above-mentioned themes have become the subject of a separate monograph, an answer relating to each of these key concepts can be given.

• Random walks form a classical object of probability theory, the study of which is of tremendous theoretical interest. They constitute a mathematical model of great importance for applications in mathematical statistics (sequential analysis), risk theory, queueing theory and so on.

• Large deviations and rare events are of great interest in all these applied areas, since computing the asymptotics of large deviation probabilities enables one to find, for example, small error probabilities in mathematical statistics, small ruin probabilities in risk theory, small buffer overflow probabilities in queueing theory and so on.

• Slowly decaying and, in particular, regular distributions present, when one is studying large deviation probabilities, an alternative to distributions decaying exponentially fast at infinity (for which Cramér's condition holds; the meaning of the terms we are using here will be explained in more detail in what follows). The first classical results in large deviation theory were obtained for the case of distributions decaying exponentially fast. However, this condition of fast (exponential) decay fails in many applied problems. Thus, for instance, the 'empirical distribution tails' for insurance claim sizes, for the sizes of files sent over the Internet and also for many other data sets usually decay as a power function (see e.g. [1]). However, in the problems of, say, mathematical statistics, the assumption of fast decay of the distributions is often adequate as it reflects the nature of the problem. Therefore, both classes of distributions, regular and fast decaying, are of great interest.

Random walks with fast decaying distributions will be considered in a separate book (a companion volume to the present monograph). The reason for this is that studying random walks with regular distributions requires a completely different approach in comparison with the case of fast decaying distributions, since for regular distributions the large deviation probabilities are mostly formed by contributions from the distribution tails (i.e. on account of the large jumps in the random walk trajectory), while for distributions decaying exponentially fast they are formed by contributions from the 'central parts' of the distributions. In the latter case analytical methods prove to be efficient, and everything is determined by the behaviour of the Laplace transforms of the jump distributions. In the former case, direct probabilistic approaches prove to be more efficient. However, until now the results for regular distributions were of a disconnected character, and referred to different special cases. The writing of the present monograph was undertaken as an attempt to present a unified exposition of the theory on the basis of a common approach; a large number of the results we present are new.

Before surveying the contents of the book, we will make a few further, more detailed, remarks of a general character in order to make the naturalness of the objects of study, and also the logic and structure of the monograph, clearer to the reader.

The application of probability theory as a mathematical discipline is most efficient when one is studying phenomena where a large number of random factors are present. The influence of such factors is, as a rule, additive (or can be reduced to such), especially in cases where the individual contribution of each factor is small. Hence sums of random variables have always been (and will be) among the main objects of research in probability theory. A vast literature is devoted to the study of the main asymptotic laws describing the distributions of sums of large numbers of random summands (see e.g. [130, 152, 186, 223, 121, 122]). The most advanced results in this direction were obtained for sums of independent identically distributed (i.i.d.) random variables (r.v.'s).

Let ξ, ξ1, ξ2, . . . be i.i.d. (possibly vector-valued) r.v.'s. Put S0 := 0 and

    Sn := ∑_{i=1}^n ξi,  n = 1, 2, . . .

The sequence {Sn; n ≥ 1} is called a random walk. The following assertions constitute the fundamental classical limit theorems for random walks.

1. The strong law of large numbers. If there exists a finite expectation Eξ then, as n → ∞,

    Sn/n → Eξ  almost surely (a.s.).        (0.0.1)

One could call the value nEξ the first-order approximation to the sum Sn.


2. The central limit theorem. If Eξ² < ∞ then, as n → ∞,

    ζn := (Sn − nEξ)/√(nd) ⇒ ζ ⊂= Φ,        (0.0.2)

where d = Var ξ = Eξ² − (Eξ)² is the variance of the r.v. ξ, the symbol ⇒ denotes the (weak) convergence of the r.v.'s in distribution and the notation ζ ⊂= F says that the r.v. ζ has the distribution F; in this case, F = Φ is the standard normal distribution (with parameters (0, 1)). One could call the quantity nEξ + ζ√(nd) the second-order approximation to Sn.

Since the limiting distribution Φ in (0.0.2) is continuous, the relation (0.0.2) is equivalent to the following one: for any v ∈ R we have

    P(ζn ≥ v) → P(ζ ≥ v)  as n → ∞,

and, moreover, this convergence is uniform in v. In other words, for deviations of the form x = nEξ + v√(nd),

    P(Sn ≥ x) ∼ P(ζ ≥ (x − nEξ)/√(nd)) = 1 − Φ(v)  as n → ∞        (0.0.3)

uniformly in v ∈ [v1, v2], where −∞ < v1 ≤ v2 < ∞ are fixed numbers and Φ is the standard normal distribution function (here and in what follows, the notation A ∼ B means that A/B → 1 under the indicated passage to the limit).
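The quality of the approximation (0.0.3) is easy to probe numerically. The following is a minimal simulation sketch (ours, not from the book), assuming NumPy and centred exponential jumps with Eξ = 0 and d = 1; all parameter values are illustrative.

```python
# Sketch (not from the book): Monte Carlo check of the normal
# approximation (0.0.3) for x = n*E(xi) + v*sqrt(n*d), with centred
# exponential jumps (E xi = 0, d = Var xi = 1).
from math import erf, sqrt
import numpy as np

rng = np.random.default_rng(0)
n, trials = 100, 200_000
xi = rng.exponential(1.0, size=(trials, n)) - 1.0   # E xi = 0, Var xi = 1
S_n = xi.sum(axis=1)

Phi = lambda v: 0.5 * (1.0 + erf(v / sqrt(2.0)))    # standard normal d.f.
for v in (1.0, 2.0, 3.0):
    x = v * sqrt(n)                                 # deviation x = v*sqrt(nd)
    print(f"v={v}: P(S_n >= x) = {(S_n >= x).mean():.5f}, "
          f"1 - Phi(v) = {1.0 - Phi(v):.5f}")
```

Already at v = 3 the two numbers differ noticeably for n = 100, in line with the caution expressed below about using (0.0.3) for v > 3 and moderate n.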

3. Convergence to stable laws. If the expectation of the r.v. ξ is infinite or does not exist, then the first-order approximation for the sum Sn can only be found when the sum of the right and left tails of the distribution of ξ, i.e. the function

    F(t) := P(ξ ≥ t) + P(ξ < −t),  t > 0,

is regularly varying as t → ∞; it can be represented as

    F(t) = t^{−α} L(t),        (0.0.4)

where α ∈ (0, 1] and L(t) is a slowly varying function (s.v.f.) as t → ∞. The same can be said about the second-order approximation for Sn in the case when E|ξ| < ∞ but Eξ² = ∞. In this case, we have α ∈ [1, 2] in (0.0.4).

For these two cases, we have the following assertion. For simplicity's sake, assume that α < 2, α ≠ 1; we will also assume that Eξ = 0 when the expectation is finite (the 'boundary case' α = 1 is excluded to avoid the necessity of non-trivial centring of the sums Sn when Eξ = ±∞ or the expectation does not exist).

Let F+(t) := P(ξ ≥ t), let (0.0.4) hold and let there exist the limit

    lim_{t→∞} F+(t)/F(t) =: ρ+ ∈ [0, 1]


(if ρ+ = 0 then the right tail of the distribution does not need to be regularly varying). Denote by

    F^{(−1)}(x) := inf{t > 0 : F(t) ≤ x},  x > 0,

the (generalized) inverse function for F, and put

    b(n) := F^{(−1)}(n^{−1}) = n^{1/α} L1(n),

where L1 is also an s.v.f. (see § 1.1). Then, as n → ∞,

    Sn/b(n) ⇒ ζ(α,ρ) ⊂= Fα,ρ,        (0.0.5)

where Fα,ρ is the standard stable law with parameters α and ρ = 2ρ+ − 1.

For completeness of exposition, we will present a formal proof of the above assertion for all α ∈ (0, 2] in § 1.5.

The relation (0.0.5), similarly to (0.0.3), means that, for x = vb(n),

    P(Sn ≥ x) ∼ Fα,ρ,+(v)  as n → ∞        (0.0.6)

uniformly in v ∈ [v1, v2] for fixed v1 ≤ v2 from (0, ∞), where Fα,ρ,+(v) = Fα,ρ([v, ∞)) is the right tail of the distribution Fα,ρ.
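Here is a minimal simulation sketch (ours, not from the book) of (0.0.5)–(0.0.6) for symmetric Pareto-type jumps with α = 1.5, where F(t) = t^{−α} for t ≥ 1, so that ρ+ = 1/2 and one may take b(n) = n^{1/α}; the parameters are illustrative.

```python
# Sketch (not from the book): S_n / b(n) settles into a stable law, as in
# (0.0.5).  Jumps: P(|xi| >= t) = t^{-1.5} for t >= 1, symmetric signs,
# so alpha = 1.5, rho_+ = 1/2 and b(n) = n^{1/alpha}.
import numpy as np

rng = np.random.default_rng(1)
alpha, n, trials = 1.5, 500, 20_000
xi = rng.uniform(size=(trials, n)) ** (-1.0 / alpha)   # |xi| with tail t^{-1.5}
xi *= rng.choice([-1.0, 1.0], size=(trials, n))        # symmetric signs
Z = xi.sum(axis=1) / n ** (1.0 / alpha)                # S_n / b(n)

for v in (2.0, 5.0, 10.0):
    # for large v this is close to v^{-alpha}/2, the stable-tail shape
    print(f"v={v}: P(S_n >= v*b(n)) ~ {(Z >= v).mean():.4f}")
```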

The assertions (0.0.2), (0.0.3) and (0.0.5), (0.0.6) give a satisfactory answer for the behaviour of the probabilities of the form P(Sn ≥ x) for large n only for deviations of the form x = vb(n), where v is moderately large and b(n) is the scaling factor in the respective limit theorem (b(n) = √(nd) in the case Eξ² < ∞, when we also assume that Eξ = 0). For example, in the finite variance case it is recommended to deal very carefully with the normal approximation values given by (0.0.3) for v > 3 and moderately large values of n (say, n ≤ 100). This leads to a natural setting for the problem on the asymptotic behaviour of the probabilities P(Sn ≥ x) for x ≫ b(n), all the more so since, as we have already noted, it is precisely such 'large deviations' that are often of interest in applied problems. Such probabilities are referred to as the probabilities of large deviations of the sums Sn.

So far we have only discussed questions related to the distributions of the partial sums Sn. However, in many applications (in particular, as already mentioned, in mathematical statistics, queueing theory, risk theory and other areas) questions related to the behaviour of the entire trajectory S1, S2, . . . , Sn are of no less importance. Thus, of interest is the problem of computing the probability

    P(g1(k) < Sk < g2(k); k = 1, . . . , n)        (0.0.7)

for two given sequences ('boundaries') g1(k) and g2(k), k = 1, 2, . . . , or the probability of the complementary event that the trajectory {Sk; k = 1, . . . , n} will leave at least once the corridor specified by the functions gi(k). These are the so-called boundary problems for random walks. The simplest is the problem on the limiting distribution of the variables

    S̄n := max_{k≤n} Sk  and  S̄n(a) := max_{k≤n} (Sk − ak)

for a given constant a.

The following is known about the asymptotics of the probability (0.0.7). Let Eξ = 0, Eξ² < ∞ (and, without loss of generality, one can assume that in this case Eξ² = 1), and let the boundaries gi be given by the relations

    gi(k) = √n fi(k/n),  i = 1, 2,        (0.0.8)

where f1 < f2 are some fixed sufficiently regular (e.g. piecewise smooth) functions on [0, 1], f1(0) < 0 < f2(0). Then, by the Kolmogorov–Petrovskii theorem (see e.g. [162]), the probability (0.0.7) converges as n → ∞ to the value P(0, 0) of the solution P(t, z) to the boundary problem for the parabolic equation

    ∂P/∂t + (1/2) ∂²P/∂z² = 0

in the region {(t, z) : 0 < t < 1, f1(t) < z < f2(t)} with boundary conditions

    P(t, fi(t)) = 0,  t ∈ [0, 1],  i = 1, 2,
    P(1, z) = 1,  z ∈ (f1(1), f2(1)).

The above assertion also follows from the so-called invariance principle (also known as the functional central limit theorem). According to this principle, the distribution of the random polygon {ζn(t); t ∈ [0, 1]} with vertices at the points (k/n, Sk/√n), k = 0, 1, . . . , n, in the space C(0, 1) of continuous functions on [0, 1] (endowed with the σ-algebra of Borel sets generated by the uniform distance in C(0, 1)) converges weakly to the distribution of the standard Wiener process {w(t); t ∈ [0, 1]} in the space C(0, 1) as n → ∞. From this fact it follows that the probability (0.0.7) for the boundaries (0.0.8) converges to the quantity

    P(f1(t) < w(t) < f2(t); t ∈ [0, 1]),

which, in turn, is given by the solution to the above-mentioned boundary problem for the parabolic equation at the point (0, 0).
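For a fixed corridor this limit can also be estimated by direct simulation. A minimal sketch (ours, not from the book), taking the constant boundaries f1 ≡ −1, f2 ≡ 1 and two different zero-mean, unit-variance jump laws; by the invariance principle the two estimates should agree in the limit.

```python
# Sketch (not from the book): Monte Carlo estimate of the corridor
# probability (0.0.7) with g_i(k) = sqrt(n) f_i(k/n), f1 = -1, f2 = +1.
# For any jump law with E xi = 0, Var xi = 1 the limit is the same:
# P(|w(t)| < 1 for all t <= 1), roughly 0.37.
import numpy as np

rng = np.random.default_rng(2)
n, trials = 1_000, 20_000

def corridor_prob(sample):
    S = np.cumsum(sample((trials, n)), axis=1)
    return (np.abs(S) < np.sqrt(n)).all(axis=1).mean()

print(corridor_prob(lambda s: rng.choice([-1.0, 1.0], size=s)))     # +-1 walk
print(corridor_prob(lambda s: rng.exponential(1.0, size=s) - 1.0))  # centred exp
```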

A similar 'invariance principle' holds in the case of convergence to stable laws, when Eξ² = ∞. In the case where gi(k) = b(n) fi(k/n), the probability (0.0.7) converges to the value

    P(f1(t) < ζ(t) < f2(t); t ∈ [0, 1]),

where ζ(·) is the corresponding stable process (the increments of the process ζ(·) on disjoint time intervals are independent of each other and have, up to a scaling transform, the distribution Fα,ρ).

Here we encounter the same problem: the above results do not give a satisfactory answer to the question of the behaviour of probabilities of the form (0.0.7) (or the probabilities of the complementary events) in the case where g2(k) and −g1(k) are large in comparison with b(n). We again arrive at the large deviation problem, but now in the context of boundary problems for random walks. One of the simplest examples here is the problem on the asymptotic behaviour of the probability

    P(S̄n(a) ≥ x) → 0

when, say,

    Eξ = 0,  a = 0,  x ≫ b(n),

whereas the case

    Eξ = 0,  a > 0,  x → ∞

provides one with another example of a problem on large deviation probabilities; this, however, does not quite fit the above scheme.

The above-mentioned problems on large deviations, along with a number of other related problems, constitute the main object of study in the present book.

Here we should make the following important remark. Even while studying large deviation probabilities for Sn in the case Eξ = 0, Eξ² < ∞, it turns out that the nature of the asymptotics of P(Sn ≥ x) when x ≫ √(n ln n), and the methods used to find it, strongly depend on the behaviour of the tail P(ξ ≥ t) as t → ∞. If the tail vanishes exponentially fast, i.e. the so-called Cramér condition is met:

    ϕ(λ) := E e^{λξ} < ∞        (0.0.9)

for some λ > 0, then, as we have already noted, the asymptotics in question will be formed, roughly speaking, in equal degrees by contributions from all the jumps in the trajectory of the random walk. In this case, the asymptotics are described by laws that are established mainly via analytical calculations and are determined in a wide zone of deviations by the analytic properties of the moment generating function ϕ(λ).
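To make (0.0.9) concrete, here is a small numerical sketch (ours, not from the book; it assumes SciPy is available), contrasting a light tail, for which ϕ(λ) is finite for small λ > 0, with a regularly varying tail, for which the defining integral diverges for every λ > 0:

```python
# Sketch (not from the book): Cramér's condition (0.0.9) holds for an
# exponential tail but fails for a Pareto tail t^{-3}: the truncated
# integrals E[e^(lam xi); xi <= M] grow without bound as M increases.
import numpy as np
from scipy.integrate import quad

lam = 0.5
# centred exponential jump: density e^{-(t+1)} on t >= -1
phi = quad(lambda t: np.exp(lam * t - (t + 1.0)), -1.0, np.inf)[0]
print(f"exponential jump: phi({lam}) = {phi:.4f}")   # finite: e^-lam/(1-lam)

# Pareto jump: density 3 t^{-4} on t >= 1, truncated at M
for M in (10.0, 50.0, 100.0):
    part = quad(lambda t: np.exp(lam * t) * 3.0 * t ** -4.0, 1.0, M)[0]
    print(f"Pareto jump: E[e^({lam} xi); xi <= {M}] = {part:.3e}")
```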

If Cramér's condition does not hold then, as when studying the conditions for convergence to stable laws in (0.0.5), we have to assume the regular character of the tails F+(t) = P(ξ ≥ t). Such an assumption can be either a condition of the form (0.0.4) or the condition

    P(ξ ≥ t) = exp{−t^α L(t)},  α ∈ (0, 1),        (0.0.10)

where L(t) is an s.v.f. as t → ∞ possessing some smoothness properties. The class of tails of the form (0.0.4) will be called the class of regularly varying tails (distributions), and the class specified by the condition (0.0.10) (under some additional assumptions on the function L) will be called the class of semiexponential tails (distributions).
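Both classes are easy to sample from by inverting the tail. A minimal sketch (ours, not from the book), taking the illustrative tails V(t) = t^{−1.5} for (0.0.4) and, for (0.0.10), the Weibull-type tail exp{−t^{0.5}} (i.e. L ≡ 1):

```python
# Sketch (not from the book): inverse-transform samplers for the two tail
# classes.  Regularly varying (0.0.4): P(xi >= t) = t^{-1.5}, t >= 1.
# Semiexponential (0.0.10) with L == 1: P(xi >= t) = exp(-t^{0.5}), t >= 0.
import numpy as np

rng = np.random.default_rng(3)
N = 1_000_000
reg = rng.uniform(size=N) ** (-1.0 / 1.5)              # tail t^{-1.5}
semi = (-np.log(rng.uniform(size=N))) ** (1.0 / 0.5)   # tail exp(-t^{0.5})

for t in (5.0, 10.0, 20.0):
    print(f"t={t}: regularly varying {np.mean(reg >= t):.2e} "
          f"(exact {t**-1.5:.2e}), semiexponential {np.mean(semi >= t):.2e} "
          f"(exact {np.exp(-t**0.5):.2e})")
```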

In the cases (0.0.4) and (0.0.10), the asymptotics of P(Sn ≥ x) in situations where x grows fast enough as n → ∞ will, as a rule, be governed by a single large jump. The methods of deriving the asymptotics, as well as the asymptotics themselves, turn out to be substantially different from those in the case where Cramér's condition (0.0.9) is met. The methods used here are mostly based on direct probabilistic approaches.

The main objects of study in the present monograph are problems on large deviations for random walks with jumps following regular distributions (in particular, regularly varying, (0.0.4), and semiexponential, (0.0.10), ones).

There is a great deal of literature on these problems. This is especially so for the asymptotics of the distributions P(Sn ≥ x) of sums of r.v.'s (see below and also the bibliographic notes to Chapters 2–5). However, the results obtained in this direction so far have been, for the most part, disconnected and incomplete.

Along with conditions of the form (0.0.4) and (0.0.10), one often encounters in the literature the so-called subexponentiality property of the right distribution tails, which is characterized by the following relation: for independent copies ξ1 and ξ2 of the r.v. ξ one has

    P(ξ1⁺ + ξ2⁺ ≥ t) ∼ 2 P(ξ⁺ ≥ t)  as t → ∞,        (0.0.11)

where x⁺ = max{0, x} is the positive part of x. Distributions from both classes (0.0.4) and (0.0.10) possess this property. Roughly speaking, the classes (0.0.4) and (0.0.10) form a 'regular' part of the class of subexponential distributions. In this connection, it is important to note that the methods of study and the form of the asymptotics of interest for the classes (0.0.4) and (0.0.10) prove in many situations to be substantially different. That is why in this book we will study these classes separately, believing that this approach is methodologically well justified.
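The defining relation (0.0.11) can be checked numerically for a concrete heavy tail. A minimal sketch (ours, not from the book) for a positive Pareto-type jump with P(ξ ≥ t) = t^{−2}, t ≥ 1 (so that ξ⁺ = ξ):

```python
# Sketch (not from the book): the subexponentiality relation (0.0.11) for
# P(xi >= t) = t^{-2}, t >= 1.  The ratio tends to 1, though slowly.
import numpy as np

rng = np.random.default_rng(4)
N = 5_000_000
xi1 = rng.uniform(size=N) ** -0.5          # tail t^{-2}
xi2 = rng.uniform(size=N) ** -0.5          # independent copy
for t in (10.0, 20.0, 50.0):
    ratio = np.mean(xi1 + xi2 >= t) / (2.0 * t ** -2.0)
    print(f"t={t}: P(xi1+xi2 >= t) / (2 P(xi >= t)) ~ {ratio:.3f}")
```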

A short history of the problem. Research in the area of large deviations for random walks with heavy-tailed jumps began in the second half of the twentieth century. At first the main effort was, of course, concentrated on studying the large deviations of the sums Sn of r.v.'s. Here one should first of all mention the papers by C. Heyde [141, 145], S.V. Nagaev [201, 206], A.V. Nagaev [194, 195], D.H. Fuk and S.V. Nagaev [127], L.V. Rozovskii [237, 238] and others. These established the basic principle by which the asymptotics of P(Sn ≥ x) are formed: the main contribution to the probability of interest comes from trajectories that contain one large jump.

Later on, papers began appearing in which this principle was used to find the distribution of the maximum S̄n of partial sums and also to solve more general boundary problems for random walks (I.F. Pinelis [225], V.V. Godovanchuk [131], A.A. Borovkov [40, 42]).

Somewhat aside from this were papers devoted to the probabilities of large deviations of the maximum of a random walk with negative drift. The first general results were obtained by A.A. Borovkov in [42], while more complete versions (for subexponential summands) were established by N. Veraverbeke [275] and D.A. Korshunov [178].

The authors of the present book began a systematic study of large deviations for random walks with regularly distributed jumps in their papers [51] and [63, 66]. Then the papers [52, 64, 54, 59, 60] and some others appeared, in which the derived results were extended to semiexponential and regular exponentially decaying distributions, to multivariate random walks, to the case of non-identically distributed summands and so on. As a result, a whole range of interesting problems arose, unified by the general approach to their solution and by a system of interconnected and rather advanced results that were, as a rule, quite close to unimprovable. As these problems and results were, moreover, of considerable interest for applications, the idea of writing a monograph on all this became quite natural. The same applies to the related monograph, to be devoted to random walks with fast decaying jump distributions.

More detailed bibliographic references will be given within the exposition in each chapter, and also in the bibliographic notes at the end of the book.

Now we will outline the contents of the book. Chapter 1 contains preliminary results and information that will be used in the sequel. In § 1.1 (the first part of a section number stands for the chapter number) we present the basic properties of slowly varying and regularly varying functions, and in § 1.2 the classes of subexponential and semiexponential distributions are introduced. We give conditions characterizing these classes and establish a connection between them. The asymptotic properties of the so-called functions of (subexponential) distributions are studied in § 1.4. The above-mentioned §§ 1.1–1.4 constitute the first part of the chapter.

In the second part of Chapter 1 (§§ 1.5, 1.6) we present known fundamental limit theorems of probability theory (already briefly mentioned above). Section 1.5 contains a proof of the theorem on the convergence in distribution of the normed sums of i.i.d. r.v.'s to a stable law. A special feature of the proof is the use of an explicit form of the scaling sequence, which enables one to characterize the moderate deviation and large deviation zones using the same terms as those to be employed in Chapters 2 and 3. In § 1.6 we present functional limit theorems (the invariance principle) and the law of the iterated logarithm.

Chapters 2–5 are similar in their contents and structure to each other, and it is appropriate to review them as a whole. They are devoted to studying large deviation probabilities for random walks whose jump distributions belong to one of the following four distribution classes:

(1) regularly varying distributions (or distributions admitting regularly varying majorants or minorants) having no finite means (Chapter 2);
(2) distributions of the above type having finite means but infinite variances (Chapter 3);
(3) distributions of the above type having finite variances (Chapter 4);
(4) semiexponential distributions (Chapter 5).

The first sections in all these chapters are devoted to bounding from above the probabilities P(S̄n ≥ x) and P(S̄n(a) ≥ x). The main approach to obtaining these bounds is the same in all four chapters (it is presented in § 2.1), but the results differ depending on the conditions imposed on the majorants of the distribution tails of the r.v. ξ. The same can be said about the lower bounds. The derived two-sided bounds prove to be sharp enough to obtain, under the conditions of Chapters 2–4, in the case when the tails P(ξ ≥ t) = V(t) are regularly varying, the asymptotics

    P(Sn ≥ x) ∼ nV(x),  P(S̄n ≥ x) ∼ nV(x)

either in the widest possible zones of deviations x or in zones quite close to the latter.
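A minimal sketch (ours, not from the book) of the first of these asymptotics, for centred jumps with right tail V(t) = (t + 1.5)^{−3} (so Eξ = 0, Eξ² < ∞, the setting of Chapter 4); all parameter values are illustrative:

```python
# Sketch (not from the book): the 'one large jump' asymptotics
# P(S_n >= x) ~ n V(x) for centred Pareto-type jumps with tail
# P(xi >= t) = (t + m)^{-3}, where m = 1.5 is the mean of the
# uncentred jump U^{-1/3}.
import numpy as np

rng = np.random.default_rng(5)
n, trials, alpha = 50, 1_000_000, 3.0
m = alpha / (alpha - 1.0)             # mean of U^{-1/alpha}, used to centre
S_n = np.zeros(trials)
for _ in range(n):                    # accumulate columns to save memory
    S_n += rng.uniform(size=trials) ** (-1.0 / alpha) - m

for x in (15.0, 30.0, 60.0):
    print(f"x={x}: MC {np.mean(S_n >= x):.2e} vs nV(x) {n*(x+m)**-alpha:.2e}")
```

The agreement improves as x grows, illustrating that the relation holds only beyond a sufficiently wide deviation zone.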

Each of these four chapters contains sections where, using more precise approaches, we establish the asymptotics of the probabilities

    P(Sn ≥ x),  P(S̄n ≥ x),  P(S̄n(a) ≥ x)

and, moreover, asymptotic expansions for them as well (under additional conditions on the tails F+(t)). An exception is Chapter 2, as under its assumptions the problems of deriving asymptotic expansions and studying the asymptotics of P(S̄n(a) ≥ x) are not meaningful.

Furthermore,

• Chapter 2 contains a finiteness criterion for the supremum of cumulative sums (§ 2.5) and the asymptotics of the renewal function (§ 2.6).
• In Chapters 3 and 4 we obtain integro-local theorems on large deviations of Sn (§§ 3.7, 4.7).
• In Chapter 3 we find conditions for uniform relative convergence to a stable law on the entire axis and establish analogues of the law of the iterated logarithm in the case Eξ² = ∞ (§ 3.9).
• In Chapters 3 and 4 we find the asymptotics of the probability P(max_{k≤n} (Sk − g(k)) ≥ 0) of the crossing of an arbitrary boundary {g(k); k ≥ 1} prior to the time n by the random walk (§§ 3.6, 4.6).
• In Chapter 4 we consider the possibility of extending the results on the asymptotic behaviour of P(Sn ≥ x) and P(S̄n ≥ x) to wider classes of jump distributions (§ 4.8), and we describe the limiting behaviour of the trajectory {Sk; k = 1, . . . , n} given that Sn ≥ x or S̄n ≥ x (§ 4.9).

Chapters 2–5 are devoted to 'heavy-tailed' random walks, i.e. to situations when the jump distributions vanish at infinity more slowly than an exponential function, and indeed this is the main focus of the book.

In Chapter 6, however, we present the main approach to studying the large deviation probabilities for 'light-tailed' random walks – this is the case when the jump distributions vanish at infinity exponentially fast or even faster (i.e. they satisfy Cramér's condition (0.0.9) for some λ > 0). This is done for the sake of completeness of exposition, and also to ascertain that, in a number of cases, studying 'light-tailed' random walks can be reduced to the respective problems for heavy-tailed distributions considered in Chapters 2–5.

In § 6.1 we describe the main method for studying large deviation probabilities when Cramér's condition holds (the method is based on the Cramér transform and the integro-local Gnedenko–Stone–Shepp theorems) and also ascertain its applicability bounds.

In §§ 6.2 and 6.3 we study integro-local theorems for sums of r.v.'s with light tails of the form

    P(ξ ≥ t) = e^{−λ₊t} V(t),  0 < λ₊ < ∞,

where V(t) is a function regularly varying as t → ∞. In a number of cases the methods presented in § 6.1 do not work for such distributions, but one can achieve success using the results of Chapters 3 and 4. In § 6.2 we consider the case when the index of the function V(t) belongs to the interval (−1, −3) (in this case one uses the results of Chapter 3); in § 6.3 we take the index of V(t) to be less than −3 (in this case one needs the results of Chapter 4).

In § 6.4 we consider large deviations in more general boundary problems. However, here the exposition has to be restricted to several special types of boundary {g(k)}, as the nature of the boundary-crossing probabilities turns out to be quite complicated and sensitive to the particular form of the boundary.

Chapters 7–16 are devoted to some more specialized aspects of the theory of random walks and also to some generalizations of the results of Chapters 2–5 and their extensions to continuous-time processes.

In Chapter 7 we continue the study of functions of subexponential distributions that we began in § 1.4. Now, for the narrower classes of regularly varying and semiexponential distributions, we obtain wider conditions enabling one to find the desired asymptotics (§§ 7.1 and 7.2). In § 7.3 we apply the obtained results to study the asymptotics of the distributions of stopped sums and their maxima, i.e. the asymptotics of

    P(Sτ ≥ x)  and  P(S̄τ ≥ x),

where the r.v. τ is either independent of {Sk} or is a stopping time for that sequence (§§ 7.3 and 7.4). In § 7.5 we discuss an alternative approach (to that presented in Chapters 3–5) to studying the asymptotics of P(S̄∞ ≥ x) in the case of subexponential distributions of the summands ξk with Eξk < 0. The approach is based on factorization identities and the results of § 1.3. Here we also obtain integro-local theorems and asymptotic expansions in the integral theorem under minimal conditions (in Chapters 3–5 the conditions were excessive, as there we also included the case n < ∞).


Chapter 8 is devoted to a systematic study of the asymptotics of the first hitting time distribution, i.e. of the probabilities P(η+(x) ≥ n) as n → ∞, where η+(x) := min{k ≥ 1 : Sk ≥ x}, and also of similar problems for η−(x) := min{k ≥ 1 : Sk ≤ −x}. We classify the results according to the following three main criteria:

(1) the value of x, distinguishing between the three cases x = 0, fixed x > 0 and x → ∞;
(2) the drift direction (the value of the expectation Eξ, if it exists);
(3) the properties of the distribution of ξ.

In § 8.2 we consider the case of a fixed level x (usually x = 0) and different combinations of criteria (2) and (3). In § 8.3 we study the case when x → ∞ together with n, again with different combinations of criteria (2) and (3).
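The objects of Chapter 8 are easy to simulate. A minimal sketch (ours, not from the book) for a walk with drift Eξ = −0.5 and heavy right tail P(ξ ≥ t) = (t + 2.5)^{−2}, for which η+(x) = ∞ with positive probability:

```python
# Sketch (not from the book): simulating eta_+(x) = min{k >= 1: S_k >= x}
# for a negative-drift walk with heavy right tail; here the event
# {eta_+(x) = infinity} has positive probability.
import numpy as np

rng = np.random.default_rng(6)
x, horizon, trials = 10.0, 5_000, 20_000
hits = []
for _ in range(trials):
    S = np.cumsum(rng.uniform(size=horizon) ** -0.5 - 2.5)   # E xi = -0.5
    k = np.argmax(S >= x)          # first index with S_k >= x (0 if none)
    if S[k] >= x:
        hits.append(k + 1)
# integrated-tail asymptotics for P(sup_k S_k >= x) suggest roughly
# (x + 2.5)^{-1} / 0.5 = 0.16 here
print(f"P(eta_+({x}) <= {horizon}) ~ {len(hits) / trials:.3f}")
print(f"median of the finite hitting times ~ {np.median(hits):.0f}")
```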

In Chapter 9 the results of Chapters 3 and 4 are extended to the multivariate case. Our attention is given mainly to integro-local theorems, i.e. to studying the asymptotics

    P(Sn ∈ Δ[x)),

where Sn = ∑_{j=1}^n ξj is the sum of d-dimensional i.i.d. random vectors and

    Δ[x) := {y ∈ R^d : xi ≤ yi < xi + Δ}

is a cube with edge length Δ and a vertex at the point x = (x1, . . . , xd). The reason is that, in the multivariate case, the language and approach of integro-local theorems prove to be the most natural. Integral theorems are more difficult to prove directly and can easily be derived from the corresponding integro-local theorems.

Another difficulty arising when one is studying the probabilities of large deviations of the sums Sn of 'heavy-tailed' random vectors ξk consists in defining and classifying the very concept of a heavy-tailed multivariate distribution. In § 9.1, examples are given in which the main contribution to the probability for Sn to hit a remote cube Δ[x) comes not from trajectories with one large jump (as in the univariate case) but from those with exactly k large jumps, where k can be any integer between 1 and d > 1. In § 9.2 we concentrate on the 'most regular' jump distributions and establish integro-local theorems for them, both when E|ξ|² = ∞ and when E|ξ|² < ∞; § 9.3 is devoted to integral theorems, which can be obtained using integro-local theorems as well as in a direct way. In the latter case, one has to impose conditions on the asymptotics of the probability that the remote set under consideration will be reached by one large jump.

We then return to univariate random walks. In Chapter 10 such walks are considered as processes, and we study there the probability of large deviations of such processes in their trajectory spaces. In other words, we study the asymptotics of

    P(Sn(·) ∈ xA),

where Sn(t) = S_{⌊nt⌋}, t ∈ [0, 1], and A is a measurable set in the space D(0, 1) of functions without discontinuities of the second type (it is supposed that the set A is bounded away from zero). Under certain conditions on the structure of the set A the desired asymptotics are found for regularly varying jump distributions, both in the case of 'one-sided' sets (§ 10.2) and in the general case (§ 10.3). Here we use the results of Chapter 3 when Eξ² = ∞ and the results of Chapter 4 when Eξ² < ∞.

Chapters 11–14 are devoted to extending the results of Chapters 3 and 4 to random walks of a more general nature, when the jumps ξi are independent but not identically distributed. In Chapter 11 we consider the simplest problem of this kind, the large deviation probabilities of sums of r.v.'s of two different types. In § 11.1 we discuss a motivation for the problem and give examples. As before, we let Sn := ∑_{i=1}^n ξi and, moreover, Tm := ∑_{i=1}^m τi, where the r.v.'s τi are independent of each other and also of {ξk} and are identically distributed. We are interested in the asymptotics of the probabilities

    P(m, n, x) := P(Tm + Sn ≥ x)

as x → ∞. In § 11.2 we study the asymptotics of P(m, n, x) for the case of regularly varying distributions and in § 11.3 for the case of semiexponential distributions.

In Chapters 12 and 13 we consider random walks with arbitrary non-identically distributed jumps ξj in the triangular array scheme, both in the case of an infinite second moment (Chapter 12 contains extensions of the results of Chapter 3) and in the case of a finite second moment (Chapter 13 is a generalization of Chapter 4). The order of exposition in Chapters 12 and 13 is roughly the same as in Chapters 3 and 4. In §§ 12.1 and 13.1 we obtain upper and lower bounds for P(Sn ≥ x) and P(S̄n ≥ x) respectively. The asymptotics of the probability of the crossing of an arbitrary remote boundary are found in §§ 12.2, 12.3 and 13.2. Here we also obtain bounds, uniform in a, for the probabilities P(S̄n(a) ≥ x) and for the distributions of the first crossing time of the level x → ∞.

In § 12.4 we establish theorems on the convergence of random walks to random processes. On the basis of these results, in § 12.5 we study transient phenomena in the problem on the asymptotics of the distribution of S̄n(a) as n → ∞, a → 0. Similar results for random walks with jumps ξi having a finite second moment are established in § 13.3.

The results of Chapters 12 and 13 enable us in Chapter 14 to extend the main assertions of these chapters to the case of dependent jumps. In § 14.1 we give a description of the classes of random walks that admit an asymptotic analysis in the spirit of Chapters 12 and 13. These classes include:

(1) martingales with a common majorant of the jump distributions;
(2) martingales defined on denumerable Markov chains;
(3) martingales defined on arbitrary Markov chains;
(4) arbitrary random walks defined on arbitrary Markov chains.

For arbitrary Markov chains one can obtain essentially the same results as for denumerable ones, but the exposition becomes much more technical. For this reason, and also because in case (1) one can obtain (and in a rather simple way) only bounds for the distributions of interest, we will restrict ourselves in Chapter 14 to considering martingales and arbitrary random walks defined on denumerable Markov chains.

In § 14.2 we obtain upper and lower bounds for, and also the asymptotics of, the probabilities P(Sn ≥ x) and P(S̄n ≥ x) for such walks in the case where the jumps in the walk can have infinite variance. The case of finite variance is considered in § 14.3. In § 14.4 we study arbitrary random walks defined on denumerable Markov chains.

Chapters 15 and 16 are devoted to extending the results of Chapters 2–5 to continuous-time processes. Chapter 15 contains such extensions to processes {S(t)} with independent increments. Two approaches are considered. The first is presented in § 15.2. It is based on using the closeness of the trajectories of the processes with independent increments to random polygons with vertices at the points (kΔ, S(kΔ)) for a small fixed Δ, where the S(kΔ) are clearly the sums of i.i.d. r.v.'s that we studied in Chapters 2–5. The second approach is presented in § 15.3. It consists of applying the same philosophy, based on singling out one large jump (now in the process {S(t)}), as that employed in Chapters 2–5. Using this approach, we can extend to the processes {S(t)} all the results of Chapters 3 and 4, including those for asymptotic expansions. The first approach (that in § 15.2) only allows one to extend the first-order asymptotics results.

Chapter 16 is devoted to the generalized renewal processes

    S(t) := S_{ν(t)} + qt,  t ≥ 0,

where q is a linear drift coefficient,

    ν(t) := ∑_{k=1}^∞ 1(tk ≤ t) = min{k ≥ 1 : tk > t} − 1,

tk := τ1 + · · · + τk, and the r.v.'s τj are independent of each other and of {ξk} and are identically distributed with finite mean aτ := Eτ1. It is assumed that the distribution tails P(ξ ≥ t) = V(t) of the r.v.'s ξ (and in some cases also the distribution tails P(τ ≥ t) = Vτ(t) of the r.v.'s τ) are regularly varying functions or are dominated by such. In § 16.2 we study the probabilities of large deviations of the r.v.'s S(T) and S̄(T) := max_{t≤T} S(t) under the assumption that the mean trend in the process is equal to zero, Eξ + qEτ = 0. Here substantial contributions to the probabilities P(S(T) ≥ x) and P(S̄(T) ≥ x) can come not only from large jumps ξj but also from large renewal intervals τj (especially when q > 0). Accordingly, in some deviation zones, to the natural (and expected) quantity H(T)V(x) (where H(T) := Eν(T)) giving the asymptotics of P(S(T) ≥ x) and P(S̄(T) ≥ x), we may need to add, say, values of the form aτ^{−1}(T − x/q)Vτ(x/q), which can dominate when Vτ(t) ≫ V(t). The asymptotics of the probabilities P(S(T) ≥ x) and P(S̄(T) ≥ x) are studied in § 16.2 in a rather exhaustive way: for values of q having both signs, for different relations between V(t) and Vτ(t) or between x and T, and for all the large deviation zones.
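A generalized renewal process is straightforward to simulate. A minimal sketch (ours, not from the book) with exponential renewal intervals, Pareto-type jumps and the zero-mean-trend normalization Eξ + qEτ = 0; all parameter values are illustrative:

```python
# Sketch (not from the book): one trajectory of S(t) = S_{nu(t)} + q t on
# [0, T] with E xi + q E tau = 0; records S(T) and max_{t<=T} S(t).
import numpy as np

rng = np.random.default_rng(7)
T, q, a_tau = 100.0, -1.0, 1.0     # E tau = a_tau = 1, q = -1, so E xi = 1
t, S, top = 0.0, 0.0, 0.0          # time, current S_{nu(t)}, running max
while True:
    tau = rng.exponential(a_tau)   # renewal interval tau_j
    if t + tau > T:
        break
    t += tau
    S += rng.pareto(3.0) + 0.5     # jump xi_j: Lomax(3) + 0.5, E xi = 1
    top = max(top, S + q * t)      # with q < 0, S(t) peaks just after jumps
print(f"S(T) = {S + q * T:.2f}, max S(t) on [0,T] = {max(top, S + q * T):.2f}")
```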

assumptions on the smoothness of the tails V (t) and Vτ (t). The asymptotics ofthe probability P

(supt�T (S(t)−g(t)) � 0

)of the crossing of an arbitrary remote

boundary g(t) by the process {S(t)} are studied in § 16.4. The case of a linearboundary g(t) is considered in greater detail in § 16.5.

Let us briefly list the main special features of the present book.

1. The traditional range of problems on limit theorems for the sums Sn is considerably extended in the book: we include the so-called boundary problems relating to the crossing of given boundaries by the trajectory of the random walk. In particular, this applies to problems, of widespread application, on the probabilities of large deviations of the maxima S̄n = max_{k≤n} Sk of sums of random variables.

2. The book is the first monograph in which the study of the above-mentioned wide range of problems is carried out in a comprehensive and systematic way and, as a rule, under minimal adequate conditions. It should fill a number of previously existing gaps.

3. In the book, for the first time in a monograph, asymptotic expansions (the asymptotics of second and higher orders) under rather general conditions, close to minimal, are studied for the above-mentioned range of problems. (Asymptotic expansions for P(Sn ≥ x) were also studied in [276] but for a narrow class of distributions.)

4. Along with classical random walks, a comprehensive asymptotic analysis is carried out for generalized renewal processes.

5. For the first time in a monograph, multivariate large deviation problems for jump distributions regularly varying at infinity are touched upon.

6. For the first time, complete results on large deviations for random walks with non-identically distributed jumps in the triangular array scheme are obtained. Transient phenomena are studied for such walks with jumps having an infinite variance.

One may also note that the following are included:

• integro-local theorems for the sums Sn;• a study of the structure of the classes of semiexponential and subexponential

distributions;• analogues of the law of the iterated logarithm for random walks with infinite

jump variance;


• a derivation of the asymptotics of P(S_n ≥ x) and P(S̄_n ≥ x) for random walks with dependent jumps, defined on Markov chains.

The authors are grateful to the ARC Centre of Excellence for Mathematics and Statistics of Complex Systems, the Russian Foundation for Basic Research (grant 05-01-00810) and the international association INTAS (grant 03-51-5018) for their much appreciated support during the writing of the book. The authors are also grateful to S.G. Foss for discussions of some aspects of the book.

The writing of this monograph would have been a much harder task were it not for the constant technical support of T.V. Belyaeva, to whom the authors express their sincere gratitude.

For the reader's attention. We use := for 'is defined by', 'iff' for 'if and only if', and □ for the end of a proof. Parts of the exposition that are, from our viewpoint, of secondary interest are typeset in a small font.

A.A. Borovkov, K.A. Borovkov


1

Preliminaries

1.1 Regularly varying functions and their main properties

Regularly (and, in particular, slowly) varying functions play an important role in the subsequent exposition. In this section we will present those basic properties of the above-mentioned functions that will be used in what follows. We will often assume that the domain of the functions under consideration includes the right half-axis (0, ∞), where the functions are measurable and locally integrable.

1.1.1 General properties

Definition 1.1.1. A positive (Lebesgue) measurable function L(t) is said to be a slowly varying function (s.v.f.) as t → ∞ if, for any fixed v > 0,

    L(vt)/L(t) → 1 as t → ∞.    (1.1.1)

A function V(t) is said to be a regularly varying function (of index −α ∈ R) (r.v.f.) as t → ∞ if it can be represented as

    V(t) = t^{−α} L(t),    (1.1.2)

where L(t) is an s.v.f. as t → ∞.

The definition of an s.v.f. (r.v.f.) as t ↓ 0 is quite similar. In what follows, the term s.v.f. (r.v.f.) will always refer, unless otherwise stipulated, to a function which is slowly (regularly) varying at infinity.

One can easily see that, similarly to (1.1.1), the convergence

    V(vt)/V(t) → v^{−α} as t → ∞    (1.1.3)

for any fixed v > 0 is a characteristic property of regularly varying functions. Thus, an s.v.f. is an r.v.f. of index zero.

Note that r.v.f.'s admit a definition that, at first glance, appears to be more general


than (1.1.3). One can define them as measurable functions such that, for all v > 0 from a set of positive Lebesgue measure, there exists the limit

    lim_{t→∞} V(vt)/V(t) =: g(v).    (1.1.4)

In this case, one necessarily has g(v) ≡ v^{−α} for some α ∈ R and, moreover, (1.1.3) holds for all v > 0 (see e.g. p. 17 of [32]). The fact that the power function appears in the limit becomes natural from the obvious relation

    g(v_1 v_2) = lim_{t→∞} (V(v_1 v_2 t)/V(v_2 t)) × (V(v_2 t)/V(t)) = g(v_1) g(v_2),

which is equivalent to the Cauchy functional equation for h(u) := ln g(e^u):

    h(u_1 + u_2) = h(u_1) + h(u_2).

It is well known that, in 'non-pathological' cases, this equation can only have a linear solution of the form h(u) = cu, which means that g(v) = v^c.

The following functions are typical representatives of the class of s.v.f.'s: the logarithmic function and its powers ln^γ t, γ ∈ R, linear combinations thereof, multiple logarithms, functions with the property that L(t) → L = const ≠ 0 as t → ∞ etc. An example of an oscillating bounded s.v.f. is provided by

    L_0(t) = 2 + sin(ln ln t),  t > 1.    (1.1.5)
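The defining ratio (1.1.1) is easy to probe numerically for (1.1.5). The following sketch is ours and is not part of the original exposition (the sketches in this chapter use Python, and all names in them are ours):

    import math

    def L0(t):
        # the oscillating bounded s.v.f. from (1.1.5)
        return 2 + math.sin(math.log(math.log(t)))

    v = 5.0
    for t in (1e2, 1e4, 1e8, 1e16):
        # slow variation: the ratio tends to 1 although L0 keeps oscillating
        print(t, L0(v * t) / L0(t))

The convergence is slow, the error being of order ln v / ln t, which is typical for s.v.f.'s.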

We will need the following two fundamental properties of s.v.f.’s.

Theorem 1.1.2 (Uniform convergence theorem). If L(t) is an s.v.f. as t → ∞, then the convergence (1.1.1) holds uniformly in v on any interval [v_1, v_2] with 0 < v_1 < v_2 < ∞.

It follows from the assertion of the theorem that the uniform convergence (1.1.1) on an interval [1/M, M] will also take place in the case when, as t → ∞, the quantity M = M(t) increases to infinity slowly enough.

Theorem 1.1.3 (Integral representation). A positive function L(t) is an s.v.f. as t → ∞ iff for some t_0 > 0 one has

    L(t) = c(t) exp{ ∫_{t_0}^{t} (ε(u)/u) du },  t ≥ t_0,    (1.1.6)

where c(t) and ε(t) are measurable functions, with c(t) → c ∈ (0, ∞) and ε(t) → 0 as t → ∞.

For example, for the function L(t) = ln t the representation (1.1.6) holds with c(t) = 1, t_0 = e and ε(t) = (ln t)^{−1}.

Proof of Theorem 1.1.2. Put

    h(x) := ln L(e^x).    (1.1.7)


Then the property (1.1.1) of s.v.f.'s is equivalent to the following: for any u ∈ R, one has the convergence

    h(x + u) − h(x) → 0    (1.1.8)

as x → ∞. To prove the theorem, we have to show that this convergence is uniform in u ∈ [u_1, u_2] for any fixed u_i ∈ R. To do this, it suffices to verify that the convergence (1.1.8) is uniform on the interval [0, 1]. Indeed, from the obvious inequality

    |h(x + u′ + u″) − h(x)| ≤ |h(x + u′ + u″) − h(x + u′)| + |h(x + u′) − h(x)|    (1.1.9)

we have

    |h(x + u) − h(x)| ≤ (u_2 − u_1 + 1) sup_{y∈[0,1]} |h(x + y) − h(x)|,  u ∈ [u_1, u_2].

For a given ε ∈ (0, 1) and any x > 0 put

    I_x := [x, x + 2],  I*_x := {u ∈ I_x : |h(u) − h(x)| ≥ ε/2},
    I*_{0,x} := {u ∈ I_0 : |h(x + u) − h(x)| ≥ ε/2}.

It is clear that the sets I*_x and I*_{0,x} are measurable and differ from each other only by a translation by x, so that μ(I*_x) = μ(I*_{0,x}), where μ is the Lebesgue measure. By virtue of (1.1.8), the indicator function of the set I*_{0,x} converges to zero at any point u ∈ I_0 as x → ∞. Therefore, by the dominated convergence theorem, the integral of this function, which is equal to μ(I*_{0,x}), tends to 0, so that for large enough x_0 one has μ(I*_x) < ε/2 when x ≥ x_0.

Further, for s ∈ [0, 1] the interval I_x ∩ I_{x+s} = [x + s, x + 2] has length 2 − s ≥ 1, so that when x ≥ x_0 the set

    (I_x ∩ I_{x+s}) \ (I*_x ∪ I*_{x+s})

has measure ≥ 1 − ε > 0 and is therefore non-empty. Let y be a point from this set. Then

    |h(x + s) − h(x)| ≤ |h(x + s) − h(y)| + |h(y) − h(x)| < ε/2 + ε/2 = ε

for x ≥ x_0, which proves the required uniformity on [0, 1] and hence on any other fixed interval as well. The theorem is proved.

Proof of Theorem 1.1.3. That the right-hand side of (1.1.6) is an s.v.f. is almost obvious: for any fixed positive v ≠ 1,

    L(vt)/L(t) = (c(vt)/c(t)) exp{ ∫_t^{vt} (ε(u)/u) du },    (1.1.10)

where, as t → ∞, one has c(vt)/c(t) → c/c = 1 and

    ∫_t^{vt} (ε(u)/u) du = o( ∫_t^{vt} du/u ) = o(ln v) = o(1).    (1.1.11)


Now we prove that any s.v.f. admits the representation (1.1.6). In terms of the function (1.1.7), the required representation will be equivalent (after the change of variable t = e^x) to the relation

    h(x) = d(x) + ∫_{x_0}^{x} δ(y) dy,    (1.1.12)

where d(x) = ln c(e^x) → d ∈ R and δ(x) = ε(e^x) → 0 as x → ∞, x_0 = ln t_0. Therefore it suffices to establish the representation (1.1.12) for the function h(x).

First of all note that h(x) (like L(t)) is a locally bounded function. Indeed, by Theorem 1.1.2, for a large enough x_0 and all x ≥ x_0,

    sup_{0≤y≤1} |h(x + y) − h(x)| < 1.

Hence for any x > x_0 we have by virtue of (1.1.9) the bound

    |h(x) − h(x_0)| ≤ x − x_0 + 1.

Further, the local boundedness and measurability of the function h mean that it is locally integrable on [x_0, ∞) and therefore can be represented for x ≥ x_0 as

    h(x) = ∫_{x_0}^{x_0+1} h(y) dy + ∫_0^1 (h(x) − h(x + y)) dy + ∫_{x_0}^{x} (h(y + 1) − h(y)) dy.    (1.1.13)

The first integral in (1.1.13) is a constant that we will denote by d. The second tends to zero as x → ∞ owing to Theorem 1.1.2, so that

    d(x) := d + ∫_0^1 (h(x) − h(x + y)) dy → d,  x → ∞.

As to the third integral in (1.1.13), by the definition of an s.v.f. for its integrand one has

    δ(y) := h(y + 1) − h(y) → 0

as y → ∞, which completes the proof of the representation (1.1.12).

1.1.2 The main asymptotic properties

In this subsection we will obtain several corollaries from Theorems 1.1.2 and 1.1.3.

Theorem 1.1.4.

(i) If L_1 and L_2 are s.v.f.'s then L_1 + L_2, L_1 L_2, L_1^b and L(t) := L_1(at + b), where a ≥ 0 and b ∈ R, are also s.v.f.'s.


(ii) If L is an s.v.f. then for any δ > 0 there exists a t_δ > 0 such that

    t^{−δ} ≤ L(t) ≤ t^{δ} for all t ≥ t_δ.    (1.1.14)

In other words, L(t) = t^{o(1)} as t → ∞.

(iii) If L is an s.v.f. then for any δ > 0 and v_0 > 1 there exists a t_δ > 0 such that, for all v ≥ v_0 and t ≥ t_δ,

    v^{−δ} ≤ L(vt)/L(t) ≤ v^{δ}.    (1.1.15)

(iv) (Karamata's theorem) If α > 1 then, for the r.v.f. V in (1.1.2), one has

    V^I(t) := ∫_t^∞ V(u) du ∼ tV(t)/(α − 1) as t → ∞.    (1.1.16)

If α < 1 then

    V_I(t) := ∫_0^t V(u) du ∼ tV(t)/(1 − α) as t → ∞.    (1.1.17)

If α = 1 then one has the equalities

    V_I(t) = tV(t) L_1(t)    (1.1.18)

and

    V^I(t) = tV(t) L_2(t)  if  ∫_0^∞ V(u) du < ∞,    (1.1.19)

where the L_i(t) → ∞ as t → ∞, i = 1, 2, are s.v.f.'s.

(v) For an r.v.f. V of index −α < 0 put

    σ(t) := V^{(−1)}(1/t) = inf{u : V(u) < 1/t}.

Then σ(t) is an r.v.f. of index 1/α:

    σ(t) = t^{1/α} L_1(t),    (1.1.20)

where L_1 is an s.v.f. If the function L has the property

    L(t L^{1/α}(t)) ∼ L(t)    (1.1.21)

as t → ∞ then

    L_1(t) ∼ L^{1/α}(t^{1/α}).    (1.1.22)

Similar assertions hold for functions that are slowly or regularly varying as t decreases to zero.

Note that from Theorem 1.1.2 and the inequality (1.1.15) we also obtain the following property of s.v.f.'s: for any δ > 0 there exists a t_δ > 0 such that for all t and v satisfying the inequalities t ≥ t_δ, vt ≥ t_δ one has

    (1 − δ) min{v^{δ}, v^{−δ}} ≤ L(vt)/L(t) ≤ (1 + δ) max{v^{δ}, v^{−δ}}.    (1.1.23)
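Karamata's theorem lends itself to a simple numerical sanity check. A sketch (ours), comparing the tail integral V^I(t) with the asymptote tV(t)/(α − 1) of (1.1.16) for the test r.v.f. V(t) = t^{−α} ln t:

    import math
    from scipy.integrate import quad

    alpha = 2.5
    V = lambda u: u**(-alpha) * math.log(u)   # an r.v.f. of index -alpha

    for t in (1e2, 1e4, 1e6):
        tail, _ = quad(V, t, math.inf)        # V^I(t), computed numerically
        karamata = t * V(t) / (alpha - 1)     # the asymptote in (1.1.16)
        print(t, tail / karamata)             # the ratio approaches 1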

Proof. Assertion (i) is evident (just observe that, to prove the last part of (i), one needs Theorem 1.1.2).

(ii) This property follows immediately from the representation (1.1.6) and the bound

    | ∫_{t_0}^{t} (ε(u)/u) du | = | ∫_{t_0}^{ln t} + ∫_{ln t}^{t} | = O( ∫_{t_0}^{ln t} du/u ) + o( ∫_{ln t}^{t} du/u ) = o(ln t)

as t → ∞.

(iii) To prove this property, we notice that, in relation to the expression on the right-hand side of (1.1.10), for any fixed δ > 0 and v_0 > 1 and all sufficiently large t one has

    v^{−δ/2} ≤ v_0^{−δ/2} ≤ c(vt)/c(t) ≤ v_0^{δ/2} ≤ v^{δ/2},  v ≥ v_0,

and

    | ∫_t^{vt} (ε(u)/u) du | ≤ (δ/2) ln v

(by virtue of (1.1.11)). From this (1.1.15) follows.

(iv) Owing to the uniform convergence theorem, one can choose an M = M(t) → ∞ as t → ∞ such that the convergence in (1.1.1) will be uniform in v ∈ [1, M]. Changing variables by putting u = vt, we obtain

    V^I(t) = t^{−α+1} L(t) ∫_1^∞ v^{−α} (L(vt)/L(t)) dv = t^{−α+1} L(t) ( ∫_1^M + ∫_M^∞ ).    (1.1.24)

If α > 1 then, as t → ∞,

    ∫_1^M ∼ ∫_1^M v^{−α} dv → 1/(α − 1),

whereas due to property (iii) one has, for δ = (α − 1)/2, the relation

    ∫_M^∞ < ∫_M^∞ v^{−α+δ} dv = ∫_M^∞ v^{−(α+1)/2} dv → 0.


Together these two relations mean that

    V^I(t) ∼ (t^{−α+1}/(α − 1)) L(t) = tV(t)/(α − 1).

The case α < 1 is considered quite similarly, but taking into account that the convergence in (1.1.1) is uniform in v ∈ [1/M, 1] and also the equality

    ∫_0^1 v^{−α} dv = 1/(1 − α).

If α = 1 then the first integral on the right-hand side of (1.1.24) is

    ∫_1^M ∼ ∫_1^M v^{−1} dv = ln M,

so that if

    ∫_0^∞ V(u) du < ∞    (1.1.25)

then

    V^I(t) ≥ (1 + o(1)) L(t) ln M ≫ L(t)    (1.1.26)

and therefore

    L_2(t) := V^I(t)/(tV(t)) = V^I(t)/L(t) → ∞ as t → ∞.

Now note that, by virtue of property (i), L_2 is an s.v.f. if the function V^I(t) is such. But, for v > 1,

    V^I(t) = V^I(vt) + ∫_t^{vt} V(u) du,

where the integral clearly does not exceed (v − 1)L(t)(1 + o(1)). Owing to (1.1.26) this implies that V^I(vt)/V^I(t) → 1 as t → ∞, which completes the proof of (1.1.19).

That (1.1.18) is true in the subcase when (1.1.25) holds is almost obvious, since

    V_I(t) = tV(t) L_1(t) = L(t) L_1(t) = ∫_0^t V(u) du → ∫_0^∞ V(u) du,

so that, firstly, L_1 is an s.v.f. by virtue of property (i) and, secondly, L_1(t) → ∞ since L(t) → 0 owing to (1.1.26).


Now let α = 1 and ∫_0^∞ V(u) du = ∞. Then, if M = M(t) → ∞ sufficiently slowly, one obtains by the uniform convergence theorem a result similar to (1.1.26) (see also (1.1.24)):

    V_I(t) = ∫_0^1 v^{−1} L(vt) dv ≥ ∫_{1/M}^1 v^{−1} L(vt) dv ∼ L(t) ln M ≫ L(t).

Therefore L_1(t) := V_I(t)/L(t) → ∞ as t → ∞. Further, also by an argument similar to the previous exposition, for v ∈ (0, 1) one has

    V_I(t) = V_I(vt) + ∫_{vt}^{t} V(u) du,

where the last integral does not exceed (1 − v)L(t)(1 + o(1)) ≪ V_I(t), so that V_I(t) (and also, by property (i), L_1(t)) is an s.v.f. This completes the proof of property (iv).

(v) Clearly, by the uniform convergence theorem the quantity σ = σ(t) is a solution to the 'asymptotic equation'

    V(σ) ∼ 1/t as t → ∞    (1.1.27)

(where the symbol ∼ can be replaced by the equality sign provided that the function V is continuous and monotonically decreasing). Representing σ in the form σ = t^{1/α} L_1, L_1 = L_1(t), we obtain an equivalent relation

    L_1^{−α} L(t^{1/α} L_1) ∼ 1,    (1.1.28)

and it is obvious that

    t^{1/α} L_1 → ∞ as t → ∞.    (1.1.29)

Fix an arbitrary v > 0. Substituting vt for t in (1.1.28) and for brevity putting L_2 = L_2(t) := L_1(vt), we get the relation

    L_2^{−α} L(t^{1/α} L_2) ∼ 1,    (1.1.30)

since L(v^{1/α} t^{1/α} L_2) ∼ L(t^{1/α} L_2) owing to (1.1.29) (with L_1 replaced by L_2). Now we will show by contradiction that (1.1.28)–(1.1.30) imply that L_1 ∼ L_2 as t → ∞, where the latter clearly means that L_1 is an s.v.f.

Indeed, the contrary assumption means that there exist a v_0 > 1 and a sequence t_n → ∞ such that

    u_n := L_2(t_n)/L_1(t_n) > v_0,  n = 1, 2, . . .    (1.1.31)

(the possible alternative case can be considered in exactly the same way). Evidently, t*_n := t_n^{1/α} L_1(t_n) → ∞ by virtue of (1.1.29), so that from (1.1.28),


(1.1.29) and property (iii) with δ = α/2 we obtain that

    1 ∼ [L_2^{−α}(t_n) L(t_n^{1/α} L_2(t_n))] / [L_1^{−α}(t_n) L(t_n^{1/α} L_1(t_n))] = u_n^{−α} L(u_n t*_n)/L(t*_n) ≤ u_n^{−α/2} < v_0^{−α/2} < 1.

We get a contradiction.

Note that the above argument proves the uniqueness (up to asymptotic equivalence) of the solution to the equation (1.1.27).

Finally, the relation (1.1.22) can be proved by directly verifying (1.1.27) for σ := t^{1/α} L^{1/α}(t^{1/α}): using (1.1.21), one has

    V(σ) = σ^{−α} L(σ) = L(t^{1/α} L^{1/α}(t^{1/α})) / (t L(t^{1/α})) ∼ L(t^{1/α}) / (t L(t^{1/α})) = 1/t.

The desired assertion now follows owing to the above-mentioned uniqueness of the solution to the asymptotic equation (1.1.27). Theorem 1.1.4 is proved.

1.1.3 The asymptotic properties of the transforms of r.v.f.'s (an Abelian-type theorem)

For an r.v.f. V(t), its Laplace transform

    ψ(λ) := ∫_0^∞ e^{−λt} V(t) dt < ∞

is defined for any λ > 0. The following asymptotic relations hold true for the transform.

Theorem 1.1.5. Let V(t) be an r.v.f. (i.e. it has the form (1.1.2)).

(i) If α ∈ [0, 1) then

    ψ(λ) ∼ (Γ(1 − α)/λ) V(1/λ) as λ ↓ 0.    (1.1.32)

(ii) If α = 1 and ∫_0^∞ V(t) dt = ∞ then

    ψ(λ) ∼ V_I(1/λ) as λ ↓ 0,    (1.1.33)

where V_I(t) = ∫_0^t V(u) du → ∞ is an s.v.f. and, moreover, V_I(t) ≫ L(t) as t → ∞.

(iii) In any case, ψ(λ) ↑ V_I(∞) = ∫_0^∞ V(t) dt ≤ ∞ as λ ↓ 0.

Rewriting the relation (1.1.32), one obtains

    V(t) ∼ ψ(1/t)/(t Γ(1 − α)) as t → ∞.

Relations of this type will also hold true in the case when, instead of the regularity of the function V, we require its monotonicity and then assume that ψ(λ) is an r.v.f. as λ ↓ 0. Assertions of this kind are referred to as Tauberian theorems. In the present book, we will not be using such theorems, so we will not dwell on them here.
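Before turning to the proof, here is a small numerical illustration of (1.1.32) (ours; the choice V(t) = t^{−α} ln(e + t) is an arbitrary test r.v.f., and the convergence is slow, of logarithmic order):

    import math
    from scipy.integrate import quad

    alpha = 0.5
    V = lambda t: t**(-alpha) * math.log(math.e + t)   # an r.v.f. with index -alpha, alpha in [0,1)

    for lam in (1e-1, 1e-3, 1e-5):
        psi, _ = quad(lambda t: math.exp(-lam * t) * V(t), 0, math.inf, limit=200)
        abel = math.gamma(1 - alpha) * V(1 / lam) / lam   # right-hand side of (1.1.32)
        print(lam, psi / abel)                            # the ratio approaches 1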

Proof of Theorem 1.1.5. (i) For any fixed ε > 0 we have

    ψ(λ) = ∫_0^{ε/λ} + ∫_{ε/λ}^∞,    (1.1.34)

where owing to (1.1.17) one has the following relation for the first integral in the case α < 1:

    ∫_0^{ε/λ} e^{−λt} V(t) dt ≤ ∫_0^{ε/λ} V(t) dt ∼ εV(ε/λ)/(λ(1 − α)) as λ ↓ 0.    (1.1.35)

Making the change of variables λt = u, one can rewrite the second integral in (1.1.34) as follows:

    ∫_{ε/λ}^∞ = (V(1/λ)/λ) ∫_ε^∞ e^{−u} u^{−α} (L(u/λ)/L(1/λ)) du = (V(1/λ)/λ) ( ∫_ε^2 + ∫_2^∞ ).    (1.1.36)

Here, as λ ↓ 0, each of the two integrals on the right-hand side converges to the respective integral of e^{−u} u^{−α}: for the former, this follows from the uniform convergence theorem (the convergence L(u/λ)/L(1/λ) → 1 holds uniformly in u ∈ [ε, 2]), whereas for the latter it is a consequence of (1.1.1) and the dominated convergence theorem (since, owing to Theorem 1.1.4(iii), for all sufficiently small λ one has L(u/λ)/L(1/λ) < u for u ≥ 2). Therefore

    ∫_{ε/λ}^∞ ∼ (V(1/λ)/λ) ∫_ε^∞ u^{−α} e^{−u} du.    (1.1.37)

Now observe that, as λ ↓ 0,

    (εV(ε/λ)/λ) / (V(1/λ)/λ) = ε^{1−α} L(ε/λ)/L(1/λ) → ε^{1−α}.

Since ε > 0 can be chosen arbitrarily small, this relation together with (1.1.35) and (1.1.37) completes the proof of (1.1.32).

(ii) Integrating by parts and again making the change of variables λt = u, we obtain for α = 1 and M > 1 that

    ψ(λ) = ∫_0^∞ e^{−λt} dV_I(t) = − ∫_0^∞ V_I(t) de^{−λt} = ∫_0^∞ V_I(u/λ) e^{−u} du = ∫_0^{1/M} + ∫_{1/M}^{M} + ∫_M^∞.    (1.1.38)

By Theorem 1.1.4(iv), V_I(t) ≫ L(t) is an s.v.f. as t → ∞ and hence, for M = M(λ) → ∞ sufficiently slowly as λ ↓ 0, by the uniform convergence theorem the middle integral on the final right-hand side of (1.1.38) is

    V_I(1/λ) ∫_{1/M}^{M} (V_I(u/λ)/V_I(1/λ)) e^{−u} du ∼ V_I(1/λ) ∫_{1/M}^{M} e^{−u} du ∼ V_I(1/λ).

The other two integrals are negligibly small: since V_I(t) is an increasing function, the first does not exceed V_I(1/(λM))/M = o(V_I(1/λ)) and, for the second, by Theorem 1.1.4(iii) we have

    V_I(1/λ) ∫_M^∞ (V_I(u/λ)/V_I(1/λ)) e^{−u} du ≤ V_I(1/λ) ∫_M^∞ u e^{−u} du = o(V_I(1/λ)).

Hence (ii) is proved. Assertion (iii) is obvious.

1.1.4 The subexponential property

An important property of r.v.f.'s is that their regularity character is preserved under convolution. We will confine ourselves here to considering the case of probability distributions whose tails are r.v.f.'s.

Let ξ, ξ_1, ξ_2, . . . be independent identically distributed (i.i.d.) random variables (r.v.'s) with distribution F, and let the right tail of this distribution,

    F_+(t) := F([t, ∞)) = P(ξ ≥ t),  t ∈ R,

be an r.v.f. (as t → ∞) of the form (1.1.2):

    F_+(t) ≡ V(t) = t^{−α} L(t).

We will denote the class of all such distributions with a fixed α ≥ 0 by R(α), and the class of all distributions with regularly varying right tails by R := ⋃_{α≥0} R(α).

It turns out that in this case, as x → ∞,

    P(ξ_1 + ξ_2 ≥ x) = V^{2∗}(x) := V ∗ V(x) = − ∫_{−∞}^{∞} V(x − t) dV(t) ∼ 2V(x) = 2P(ξ ≥ x),    (1.1.39)


and, more generally, for any fixed n > 1,

    V^{n∗}(x) := P(ξ_1 + · · · + ξ_n ≥ x) ∼ nV(x) as x → ∞.    (1.1.40)

In order to prove (1.1.39), introduce the events A = {ξ_1 + ξ_2 ≥ x} and B_i = {ξ_i < x/2}, i = 1, 2. Clearly

    P(A) = P(AB_1) + P(AB_2) − P(AB_1B_2) + P(AB̄_1B̄_2),

where P(AB_1B_2) = 0, P(AB̄_1B̄_2) = P(B̄_1B̄_2) = V^2(x/2) (here and in what follows B̄ denotes the complement of B) and

    P(AB_1) = P(AB_2) = ∫_{−∞}^{x/2} V(x − t) F(dt).

Therefore

    V^{2∗}(x) = 2 ∫_{−∞}^{x/2} V(x − t) F(dt) + V^2(x/2).    (1.1.41)

(We could have obtained the same result by integrating by parts the convolution in (1.1.39).) It remains to observe that V^2(x/2) = o(V(x)) and

    ∫_{−∞}^{x/2} V(x − t) F(dt) = ∫_{−∞}^{−M} + ∫_{−M}^{M} + ∫_M^{x/2},    (1.1.42)

where, as can be easily seen, for any M = M(x) → ∞ as x → ∞ such that M = o(x) one has

    ∫_{−M}^{M} ∼ V(x)  and  ∫_{−∞}^{−M} + ∫_M^{x/2} = o(V(x)),

which proves (1.1.39).

One can establish (1.1.40) in a similar way (we will prove this relation for a more general case in Theorem 1.2.12(iii) below).

The same assertions turn out also to be true for the so-called semiexponential

distributions, i.e. distributions whose right tails have the form

    F_+(t) = e^{−t^α L(t)},  α ∈ (0, 1),    (1.1.43)

where L(t) is an s.v.f. as t → ∞ satisfying a certain smoothness condition (see Definition 1.2.22 below, p. 29).

Considerable attention will be paid in this book to extending the asymptotic relation (1.1.40) to the case when n grows together with x, and also to refining this relation for distributions with both regularly varying and semiexponential tails.
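The 'one big jump' relation (1.1.40) is visible already in a crude simulation. A sketch (ours), for Pareto jumps with V(t) = t^{−α}, t ≥ 1 (numpy's pareto sampler draws Lomax variables, so we shift by 1):

    import numpy as np

    rng = np.random.default_rng(0)
    alpha, n, x, N = 2.0, 3, 50.0, 10**6

    # P(xi >= t) = t**(-alpha) for t >= 1
    xi = rng.pareto(alpha, size=(N, n)) + 1.0
    S = xi.sum(axis=1)

    print((S >= x).mean(), n * x**(-alpha))   # Monte Carlo P(S_n >= x) vs n V(x)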


Such distributions are the main representatives of the class of so-called subexponential distributions, of which the characteristic property is given by the relation (1.1.39). In the next section we will consider the main properties of these distributions.

1.2 Subexponential distributions

1.2.1 The main properties of subexponential distributions

Before giving any formal definitions, we will briefly describe the relationships between the classes of distributions that we are going to introduce and explain why we pay them different amounts of attention in different contexts.

As we have already noted in the Introduction, the main objects of study in this book are random walks in which the jump distributions have tails that are either regularly varying at infinity or semiexponential. In both cases, such distributions are typical representatives of the class of so-called subexponential distributions.

The characteristic property of subexponential distributions is the asymptotic tail additivity of the convolutions of the original distributions, i.e. the fact that for the sums S_n = ξ_1 + · · · + ξ_n of (any) fixed numbers n of i.i.d. r.v.'s ξ_i one has

    P(S_n ≥ x) ∼ nP(ξ_1 ≥ x) as x → ∞    (1.2.1)

(cf. (1.1.40)). Below we will establish a number of sufficient and necessary conditions that, to some extent, characterize the class S of subexponential distributions. These conditions show that the class S is the result of a considerable extension of the union of the class R of distributions with regularly varying tails and the class Se of distributions with semiexponential tails (see Definition 1.2.22 below, p. 29) which is obtained, roughly speaking, through the addition of functions that 'oscillate' between the functions from R or between those from Se. For such 'oscillating' functions the limit theorems to be obtained in Chapters 2–4 will, as a rule, be valid in much narrower zones than for elements of R or Se (see § 4.8). However, these functions usually do not appear in applications, where the assumption of their presence would look rather artificial. Therefore in what follows we will confine ourselves mostly to considering distributions from the classes R and Se. We will devote less attention to other distributions from the class S. Nevertheless, a number of properties of random walks will be established for the whole broad class S.

The difference between the class S, on the one hand, and the classes R and Se, on the other, is that the above-mentioned tail-additivity property (1.2.1) extends much further (in terms of the number n of summands in the sum of random variables) for distributions from R and Se than for arbitrary distributions from S. More precisely, for the classes R and Se, the relation (1.2.1) remains true also in the case when n grows rather fast with x (for more detail, see § 5.9). This allows one to advance much further in studying the asymptotic properties of the distributions of the r.v.'s S_n, S̄_n = max_{k≤n} S_k etc. for the classes R and Se, whereas the zone in which it is natural to do this within the class S is quite narrow (see below).

At the same time, in subsequent chapters we will study distributions from the classes R and Se separately, the reason being that the technical aspects and even the formulations of results for these distribution classes are different in many respects. However, there are problems for which it is natural to conduct studies within the wider class of subexponential distributions. As an example of such a problem, we could mention here that of the asymptotic behaviour of the probability P(S ≥ x) as x → ∞ in the case Eξ_i < 0, where S = sup_{k≥0} S_k (see § 7.5).

Now we will give a formal definition of the class of subexponential distributions and discuss their basic properties and also the relations between this class and other important distribution classes to be considered in the present book.

Let ζ ∈ R be an r.v. with distribution G: G(B) = P(ζ ∈ B) for any Borel set B (recall that in this case we write ζ ⊂= G). By G(t) we denote the complement distribution function corresponding to the distribution of the r.v. ζ:

    G(t) := P(ζ ≥ t),  t ∈ R.

Similarly, to the distribution G_i there corresponds the function G_i(t), and so on. The function G(t) is also referred to as the (right) tail of the distribution G, but normally this term is used only when t > 0. Note that throughout this book, except in §§ 1.2–1.4, for the right distribution tails we will use notation of the form G_+(t) (this should not lead to any confusion).

The convolution of the tails G_1(t) and G_2(t) is the function

    G_1 ∗ G_2(t) := − ∫ G_1(t − y) dG_2(y) = ∫ G_1(t − y) G_2(dy) = P(Z_2 ≥ t),

where Z_2 = ζ_1 + ζ_2 is the sum of independent r.v.'s ζ_i ⊂= G_i, i = 1, 2. Clearly G_1 ∗ G_2(t) = G_2 ∗ G_1(t). By G^{2∗}(t) = G ∗ G(t) we denote the convolution of the tail G(t) with itself, and put G^{(n+1)∗}(t) = G ∗ G^{n∗}(t), n ≥ 2.

Similarly, the convolution G_1 ∗ G_2 of two distributions G_1 and G_2 is the measure G_1 ∗ G_2(B) = ∫ G_1(B − t) G_2(dt), and so on.

Definition 1.2.1. A probability distribution G on [0, ∞) belongs to the class S^+ of subexponential distributions on the positive half-line if

    G^{2∗}(t) ∼ 2G(t) as t → ∞.    (1.2.2)

A probability distribution G on R belongs to the class S of subexponential distributions if the distribution G^+ of the positive part ζ^+ := max{0, ζ} of the r.v. ζ ⊂= G belongs to S^+. An r.v. is said to be subexponential if its distribution is subexponential.


Remark 1.2.2. As clearly we always have

    (G^+)^{2∗}(t) = P(ζ_1^+ + ζ_2^+ ≥ t) ≥ P({ζ_1^+ ≥ t} ∪ {ζ_2^+ ≥ t})
        = P(ζ_1 ≥ t) + P(ζ_2 ≥ t) − P(ζ_1 ≥ t, ζ_2 ≥ t)
        = 2G(t) − G^2(t) = 2G^+(t)(1 + o(1))

as t → ∞, subexponentiality is equivalent to the following property:

    lim sup_{t→∞} (G^+)^{2∗}(t)/G^+(t) ≤ 2.    (1.2.3)

Observe also that, since the relation (1.2.2) only makes sense when G(t) > 0 for all t ∈ R, any subexponential distribution has an unbounded (from the right) support.

Note that in the literature the notation S is normally used for the class of subexponential distributions on [0, ∞), whereas the general case is either ignored or just mentioned in passing as a possible extension obtained as above. This situation can be explained, on the one hand, by the fact that subexponentiality is, by definition, a property of right distribution tails only and, on the other, by a historical tradition: the class of subexponential distributions was originally introduced in the context of the theory of branching processes and then used mostly for modelling insurance claim sizes and service times in queueing theory, where the respective r.v.'s are always positive. In the context of general random walks, such a restrictive assumption is no longer natural.

Moreover, such an approach (i.e. one confined to the class of positive r.v.'s) may well cause confusion, especially when the concept of subexponentiality is encountered for the first time. Thus, it is by no means obvious from the definition that the tail additivity property (1.2.2) holds for subexponential r.v.'s that can assume both negative and positive values; this fact requires a non-trivial proof and will be seen to be a consequence of Theorems 1.2.8 and 1.2.4(vi) (see Theorem 1.2.12(iii) below). Moreover, the fact that (1.2.2) holds for the distribution G of a signed r.v. ζ itself (and not for the distribution G^+ of the r.v. ζ^+) does not imply that G ∈ S (an example illustrating this observation is given in Remark 1.2.10 on p. 19). Therefore, to avoid vagueness and ambiguities, from the very beginning we will be considering the general case of distributions given on the whole real line.

As we have already seen in § 1.1.4, the class R of distributions with regularly varying right tails is a subset of the class S (see (1.1.39)). Below we will see that in a similar way the so-called semiexponential distributions also belong to the class S (Theorem 1.2.21).

One of the main properties of subexponential distributions G is that the corresponding functions G(t) are asymptotically locally constant in the following sense.


Definition 1.2.3. A function G(t) > 0 is said to be (asymptotically) locally constant (l.c.) if, for any fixed v,

    G(t + v)/G(t) → 1 as t → ∞.    (1.2.4)

In the literature, distributions with l.c. tails are often referred to as 'long-tailed distributions'; it would appear, however, that the term 'locally constant' better reflects the meaning of the concept. The class of all distributions G with l.c. tails G(t) will be denoted by L.

For future reference, we will present the main properties of l.c. functions in the form of a separate theorem.

Theorem 1.2.4.

(i) For an l.c. function G(t) the convergence (1.2.4) is uniform in v on any fixed bounded interval.

(ii) A function G(t) is l.c. iff, for some t_0 > 0, it admits the following representation:

    G(t) = c(t) exp{ ∫_{t_0}^{t} ε(u) du },  t ≥ t_0,    (1.2.5)

where the functions c(t) and ε(t) are measurable, with c(t) → c ∈ (0, ∞) and ε(t) → 0 as t → ∞.

(iii) If G_1(t) and G_2(t) are l.c. then G_1(t) + G_2(t), G_1(t)G_2(t), G_1^b(t) and G(t) := G_1(at + b), where a ≥ 0, b ∈ R, are also l.c. functions.

(iv) If G(t) is an l.c. function then, for any ε > 0,

    e^{εt} G(t) → ∞ as t → ∞.

In other words, any l.c. function G(t) can be represented as

    G(t) = e^{−l(t)},  l(t) = o(t) as t → ∞.    (1.2.6)

(v) Let

    G^I(t) := ∫_t^∞ G(u) du < ∞

and let at least one of the two following conditions be met:

(a) G(t) is an l.c. function,
(b) G^I(t) is an l.c. function and G(t) is monotone.

Then

    G(t) = o(G^I(t)) as t → ∞.    (1.2.7)

(vi) If G ∈ L then G^{2∗}(t) ∼ (G^+)^{2∗}(t) as t → ∞.


Remark 1.2.5. It follows from the assertion of part (i) of the theorem that the uniform convergence in (1.2.4) on the interval [−M, M] will also hold in the case when M = M(t) increases unboundedly (together with t) slowly enough.

Remark 1.2.6. The coinage of the term 'subexponential distribution' was apparently due mostly to the fact that the tail of such a distribution decays, as t → ∞, slower than any exponential function e^{−εt} with ε > 0. According to Theorem 1.2.4(iv), this is actually a property of a much wider class L of distributions with l.c. tails (the relationship between the classes S and L is discussed in more detail below, in Theorems 1.2.8, 1.2.17 and 1.2.25).

Proof of Theorem 1.2.4. (i)–(iii) It is evident from Definitions 1.1.1 and 1.2.3 that G(t) is an l.c. function iff L(t) := G(ln t) is an s.v.f. Therefore the assertion of part (i) immediately follows from Theorem 1.1.2 (the uniform convergence theorem for s.v.f.'s), whereas those of parts (ii) and (iii) follow from Theorems 1.1.3 and 1.1.4(i) respectively.

The assertion of part (iv) follows from the integral representation (1.2.5).

(v) If condition (a) is met then, for any M > 0 and all sufficiently large t,

    G^I(t) > ∫_t^{t+M} G(u) du > (1/2) M G(t).

Since M is arbitrary, G^I(t) ≫ G(t). Further, if condition (b) is satisfied then

    G(t)/G^I(t) ≤ (1/G^I(t)) ∫_{t−1}^{t} G(u) du = G^I(t − 1)/G^I(t) − 1 → 0

as t → ∞.

(vi) Let ζ_1 and ζ_2 be independent copies of the r.v. ζ and let Z_2 := ζ_1 + ζ_2, Z_2^{(+)} := ζ_1^+ + ζ_2^+. Clearly ζ_i ≤ ζ_i^+, so that

    G^{2∗}(t) = P(Z_2 ≥ t) ≤ P(Z_2^{(+)} ≥ t) = (G^+)^{2∗}(t).    (1.2.8)

However, for any M > 0,

    G^{2∗}(t) ≥ P(Z_2 ≥ t, ζ_1 > 0, ζ_2 > 0) + Σ_{i=1}^{2} P(Z_2 ≥ t, ζ_i ∈ [−M, 0]),

where the first term on the right-hand side equals P(Z_2^{(+)} ≥ t, ζ_1^+ > 0, ζ_2^+ > 0), and the last two can be estimated as follows: since G ∈ L, for any ε > 0


and for all sufficiently large M and t,

    P(Z_2 ≥ t, ζ_1 ∈ [−M, 0]) ≥ P(ζ_2 ≥ t + M, ζ_1 ∈ [−M, 0])
        = G(t) (G(t + M)/G(t)) [P(ζ_1 ≤ 0) − P(ζ_1 < −M)]
        ≥ (1 − ε) G(t) P(ζ_1^+ = 0)
        = (1 − ε) P(Z_2^{(+)} ≥ t, ζ_1^+ = 0).

Thus we obtain for G^{2∗}(t) the lower bound

    G^{2∗}(t) ≥ P(Z_2^{(+)} ≥ t, ζ_1^+ > 0, ζ_2^+ > 0) + (1 − ε) Σ_{i=1}^{2} P(Z_2^{(+)} ≥ t, ζ_i^+ = 0)
        ≥ (1 − ε) P(Z_2^{(+)} ≥ t) = (1 − ε)(G^+)^{2∗}(t).

Thereby part (vi) is proved, as ε is arbitrarily small.

Later we will also need the following modification of the concept of an l.c. function.

Definition 1.2.7. Let ψ(t) ≥ 1 be a fixed non-decreasing function. A function G(t) > 0 is said to be ψ-asymptotically locally constant (ψ-l.c.) if, for any fixed v,

    G(t + vψ(t))/G(t) → 1 as t → ∞.    (1.2.9)

Clearly, any l.c. function will be ψ-l.c. for ψ ≡ 1 and, moreover, it will also be ψ-l.c. for a suitable (i.e. sufficiently slowly growing) function ψ(t) → ∞.

In what follows, we will be using only monotone ψ-l.c. functions. For them the convergence in (1.2.9), as well as the convergence in (1.2.4), will obviously be uniform in v on any fixed bounded interval. In this case any ψ-l.c. function is l.c., so that the class of l.c. functions is at its broadest and includes all ψ-l.c. functions.

Observe that any r.v.f. is ψ-l.c. for any function ψ(t) = o(t). For example, the function G(t) = e^{−ct^α}, α ∈ (0, 1) (a Weibull distribution tail), is ψ-l.c. for ψ(t) = o(t^{1−α}). However, the exponential function G(t) = e^{−ct} is not ψ-l.c. for any ψ. Accordingly, any ψ-l.c. function decays (or grows) more slowly than the exponential function (cf. Theorem 1.2.4(iv)).
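A numerical look at the Weibull example (ours): with ψ(t) = t^{(1−α)/2} = o(t^{1−α}) the ratio in (1.2.9) tends to 1, while for the borderline choice ψ(t) = t^{1−α} it stabilizes at e^{−cαv} < 1:

    import math

    alpha, c, v = 0.5, 1.0, 1.0
    G = lambda t: math.exp(-c * t**alpha)    # a Weibull distribution tail

    for t in (1e2, 1e4, 1e8):
        psi_ok = t**((1 - alpha) / 2)        # o(t^(1-alpha)): ratio -> 1
        psi_bad = t**(1 - alpha)             # too large: ratio -> exp(-c*alpha*v)
        print(G(t + v * psi_ok) / G(t), G(t + v * psi_bad) / G(t))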

Now we return to our discussion of subexponential distributions. First we consider the relationship between the classes S and L.

Theorem 1.2.8. One has S ⊂ L, and therefore all the assertions of Theorem 1.2.4 hold true for subexponential distributions. This inclusion is strict: not every distribution from the class L is subexponential.

Remark 1.2.9. Some sufficient conditions for a distribution G ∈ L to belong to the class S are given below, in Theorems 1.2.17 and 1.2.25.


Remark 1.2.10. In the case when a distribution G is not concentrated on [0, ∞), the tail additivity condition (1.2.2) alone will be insufficient for the function G(t) to be l.c. (and hence for ensuring the 'subexponential decay' of the distribution tail, cf. Remark 1.2.6). It is this fact that explains the necessity of defining subexponentiality in the general case in terms of the condition (1.2.2) on the distribution G^+ of the r.v. ζ^+. As we will see below (Corollary 1.2.16), the subexponentiality of a distribution G on R is actually equivalent to the combination of the conditions (1.2.2) (on the distribution G itself) and G ∈ L.

The following example shows that for r.v.'s assuming values of both signs, the condition (1.2.2), generally speaking, does not imply subexponential behaviour for G(t).

Example 1.2.11. Let μ > 0 be fixed and let the right tail of a distribution G have the form

    G(t) = e^{−μt} V(t),    (1.2.10)

where V(t) is an r.v.f. converging to zero as t → ∞ and such that

    g(μ) := ∫_{−∞}^{∞} e^{μy} G(dy) < ∞.

We have (cf. (1.1.41), (1.1.42))

    G^{2∗}(t) = 2 ∫_{−∞}^{t/2} G(t − y) G(dy) + G^2(t/2),

where

    ∫_{−∞}^{t/2} G(t − y) G(dy) = e^{−μt} ∫_{−∞}^{t/2} e^{μy} V(t − y) G(dy) = e^{−μt} ( ∫_{−∞}^{−M} + ∫_{−M}^{M} + ∫_M^{t/2} ).

One can easily see that, for M = M(t) → ∞ slowly enough as t → ∞, we get

    ∫_{−M}^{M} e^{μy} V(t − y) G(dy) ∼ g(μ) V(t),   e^{−μt} ( ∫_{−∞}^{−M} + ∫_M^{t/2} ) = o(G(t)),

whereas

    G^2(t/2) = e^{−μt} V^2(t/2) ≤ c e^{−μt} V^2(t) = o(G(t)).

Thus we obtain

    G^{2∗}(t) ∼ 2 g(μ) e^{−μt} V(t) = 2 g(μ) G(t),    (1.2.11)


and it is clear that one can always find a distribution G (with a negative mean) such that g(μ) = 1. Then the relation (1.2.2) given in the definition of subexponentiality will hold true, although G(t) decays exponentially fast and therefore is not an l.c. function.

Nevertheless, observe that the class of distributions that satisfy just the relation (1.2.2) is an extension of the class S, distributions from the former class possessing many properties of distributions from S.

Proof of Theorem 1.2.8. First we will prove that S ⊂ L. Since the definitions of both classes are given in terms of the right distribution tails, one can assume without loss of generality that G ∈ S^+ (or just consider from the very beginning the distribution G^+). For independent (non-negative) r.v.'s ζ_i ⊂= G we have, for t > 0,

    G^{2∗}(t) = P(ζ_1 + ζ_2 ≥ t) = P(ζ_1 ≥ t) + P(ζ_1 + ζ_2 ≥ t, ζ_1 < t)
        = G(t) + ∫_0^t G(t − y) G(dy).    (1.2.12)

Since G(t) is a non-increasing function and G(0) = 1, we obtain for t > v > 0 that

    G^{2∗}(t)/G(t) = 1 + ∫_0^v (G(t − y)/G(t)) G(dy) + ∫_v^t (G(t − y)/G(t)) G(dy)
        ≥ 1 + (1 − G(v)) + (G(t − v)/G(t)) (G(v) − G(t)).

Hence for large enough t (such that G(v) − G(t) > 0)

    1 ≤ G(t − v)/G(t) ≤ (1/(G(v) − G(t))) ( G^{2∗}(t)/G(t) − 2 + G(v) ).

Since G ∈ S^+, the final right-hand side tends to G(v)/G(v) = 1 as t → ∞, and therefore G ∈ L.

To complete the proof of the theorem, it suffices to give an example of a distribution G ∈ L \ S. Since in order to achieve this we will have to refer to a necessary condition for subexponentiality given below in Theorem 1.2.17, it seems natural to present such a construction after this theorem, which we do in Example 1.2.18 (see p. 27). One can also find examples of distributions from L that are not subexponential in [109, 229].

The next theorem states several important properties of subexponential distributions.


Theorem 1.2.12. Let G ∈ S. Then the following assertions hold true.

(i) If G_i(t)/G(t) → c_i as t → ∞, c_i ≥ 0, i = 1, 2, c_1 + c_2 > 0, then

    G_1 ∗ G_2(t) ∼ G_1(t) + G_2(t) ∼ (c_1 + c_2) G(t).

(ii) If G_0(t) ∼ cG(t) as t → ∞, c > 0, then G_0 ∈ S.

(iii) For any fixed n ≥ 2,

    G^{n∗}(t) ∼ nG(t) as t → ∞.    (1.2.13)

(iv) For any ε > 0 there exists a value b = b(ε) < ∞ such that, for all n ≥ 2 and all t, one has

    G^{n∗}(t)/G(t) ≤ b(1 + ε)^n.

Remark 1.2.13. It is clear that the asymptotic relation G_1(t) ∼ G_2(t) as t → ∞ defines an equivalence relation on the set of distributions on R. Theorem 1.2.12(ii) means that the class S is closed with respect to that equivalence. One can easily see that in each equivalence subclass of S under this relation there is always a distribution with an arbitrarily smooth tail G(t).

Indeed, let p(t) be an infinitely many times differentiable probability density on R vanishing outside [0, 1]; for instance, one can take

    p(x) = c exp{−1/[x(1 − x)]},  x ∈ (0, 1);  p(x) = 0,  x ∉ (0, 1).

Now we will 'smooth' the function l(t) := − ln G(t), G ∈ S, by putting

    l_0(t) := ∫ p(t − u) l(u) du,  G_0(t) := e^{−l_0(t)}.    (1.2.14)

It is evident that G_0(t) is infinitely many times differentiable and, since l(t) is non-decreasing and we actually integrate only over [t − 1, t], one has l(t − 1) ≤ l_0(t) ≤ l(t) and hence by Theorem 1.2.8

    1 ≤ G_0(t)/G(t) ≤ G(t − 1)/G(t) → 1 as t → ∞.

Therefore G_0 is equivalent to the original G. A simpler smoothing procedure that leads to a less smooth asymptotically equivalent tail consists in replacing l(t) by its linear interpolation with nodes at the points (k, l(k)), k being an integer.

Thus, up to an additive term o(1), the function l(t) = − ln G(t), G ∈ S, can always be assumed arbitrarily smooth.

The aforesaid is clearly applicable to the class L as well: this class is also closed with respect to the above equivalence, and in each of its equivalence subclasses there are arbitrarily smooth representatives.
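The smoothing (1.2.14) is straightforward to carry out numerically. A sketch (ours), applying it to the Pareto tail G(t) = t^{−2}, i.e. l(t) = 2 ln t, and checking the bracketing l(t − 1) ≤ l_0(t) ≤ l(t):

    import math
    from scipy.integrate import quad

    # the kernel p of Remark 1.2.13, normalized numerically on (0, 1)
    c = 1.0 / quad(lambda x: math.exp(-1.0 / (x * (1 - x))), 0, 1)[0]
    p = lambda x: c * math.exp(-1.0 / (x * (1 - x))) if 0 < x < 1 else 0.0

    l = lambda t: 2.0 * math.log(t)    # l(t) = -ln G(t) for G(t) = t^(-2)
    l0 = lambda t: quad(lambda u: p(t - u) * l(u), t - 1, t)[0]   # (1.2.14)

    t = 50.0
    print(l(t - 1), l0(t), l(t))       # the three values come out increasing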

Remark 1.2.14. Note that from Theorem 1.2.12(ii), (iii) it follows immediately that if G ∈ S then also G^{n∗} ∈ S, n = 2, 3, . . . Moreover, if we denote by G^{n∨} the distribution of the maximum of i.i.d. r.v.'s ζ_1, . . . , ζ_n ⊂= G then, from the obvious relation

    G^{n∨}(t) = 1 − (1 − G(t))^n ∼ nG(t) as t → ∞    (1.2.15)

and Theorem 1.2.12(ii), one obtains that G^{n∨} also belongs to S.

The relations (1.2.15) and (1.2.13) show that, in the case of a subexponential G, the tail G^{n∗}(t) of the distribution of the sum of a fixed number n of i.i.d. r.v.'s ζ_i ⊂= G is asymptotically equivalent (as t → ∞) to the tail G^{n∨}(t) of the maximum of these r.v.'s. This means that 'large' values of this sum are mainly due to the presence of a single 'large' summand ζ_i in it. One can easily see that this property is characteristic of subexponentiality.

Remark 1.2.15. Observe also that the converse of the assertion stated at the beginning of Remark 1.2.14 is true as well: if G^{n∗} ∈ S for some n ≥ 2 then G ∈ S ([112]; see also Proposition A3.18 of [113]). That G^{n∨} ∈ S implies that G ∈ S follows in an obvious way from (1.2.15) and Theorem 1.2.12(ii).

Proof of Theorem 1.2.12. (i) First assume that c_1 c_2 > 0 and that both distributions G_i are concentrated on [0, ∞). Fix an arbitrary ε > 0 and choose values M < N large enough that G_i(M) < ε, i = 1, 2, G(M) < ε and such that for t > N one has

    1 − ε < G(t − M)/G(t) < 1 + ε,  (1 − ε)c_i < G_i(t)/G(t) < (1 + ε)c_i    (1.2.16)

for i = 1, 2 (that the former pair of inequalities can be satisfied is seen from Theorem 1.2.8).

Let ζ ⊂= G and ζ_i ⊂= G_i, i = 1, 2, be independent r.v.'s. Then, for t > 2N, one has the representation

    G_1 ∗ G_2(t) = P_1 + P_2 + P_3 + P_4,    (1.2.17)

where (see Fig. 1.1)

    P_1 := P(ζ_1 ≥ t − ζ_2, ζ_2 ∈ [0, M)),
    P_2 := P(ζ_2 ≥ t − ζ_1, ζ_1 ∈ [0, M)),
    P_3 := P(ζ_2 ≥ t − ζ_1, ζ_1 ∈ [M, t − M)),
    P_4 := P(ζ_2 ≥ M, ζ_1 ≥ t − M).

We will show that the first two terms on the right-hand side of (1.2.17) are asymptotically equivalent to c_1 G(t) and c_2 G(t), respectively, whereas the last two are negligibly small compared with G(t). Indeed, for P_1 we have the obvious two-sided bounds

    (1 − ε)^2 c_1 G(t) < G_1(t)(1 − G_2(M)) = P(ζ_1 ≥ t, ζ_2 ∈ [0, M))
        ≤ P_1 ≤ P(ζ_1 ≥ t − M) = G_1(t − M) ≤ (1 + ε)^2 c_1 G(t)


[Figure: the (ζ_1, ζ_2)-plane with marks at 0, M, t − M, t on both axes and the regions corresponding to P_1, P_2, P_3, P_4.] Fig. 1.1. An illustration of the representation (1.2.17). Although M = o(t), the main contribution to the sum comes from the terms P_1 and P_2.

by virtue of (1.2.16); bounds for the term P_2 can be obtained in a similar way. Further,

    P_4 = P(ζ_2 ≥ M, ζ_1 ≥ t − M) = G_2(M) G_1(t − M) < ε(1 + ε)^2 c_2 G(t).

It remains to estimate P_3 (note that now we will need the condition G ∈ S; so far we have only used the fact that G ∈ L). We have

    P_3 = ∫_{[M, t−M)} G_2(t − y) G_1(dy) ≤ (1 + ε) c_2 ∫_{[M, t−M)} G(t − y) G_1(dy),    (1.2.18)

where it is clear that, owing to (1.2.16), the last integral is equal to

    P(ζ + ζ_1 ≥ t, ζ_1 ∈ [M, t − M))
        ≤ P(ζ ≥ t − M, ζ_1 ∈ [M, t − M)) + P(ζ + ζ_1 ≥ t, ζ ∈ [M, t − M))
        = G(t − M) G_1([M, t − M)) + ∫_{[M, t−M)} G_1(t − y) G(dy)
        ≤ ε(1 + ε) G(t) + (1 + ε) c_1 ∫_{[M, t−M)} G(t − y) G(dy).    (1.2.19)

Next observe that, using an argument similar to that above, one can easily obtain (putting G_1 := G_2 := G) that

    G^{2∗}(t) = (1 + θ_1 ε) 2G(t) + ∫_{[M, t−M)} G(t − y) G(dy) + ε(1 + θ_2 ε) G(t),

where |θ_i| ≤ 1, i = 1, 2. Since G^{2∗}(t) ∼ 2G(t) (as G ∈ S^+), this equality means that the integral on its right-hand side is o(G(t)). Now it follows from (1.2.18), (1.2.19) that P_3 = o(G(t)) also, and hence the required assertion is proved in the case G ∈ S^+.

To extend the desired result to the case of distributions G_i on R, it suffices simply to repeat the argument from the proof of Theorem 1.2.4(vi).

The case when one of the c_i is zero can be reduced to the case already considered, c_1 c_2 > 0. If, say, c_1 = 0, c_2 > 0 then one can introduce the distribution G̃_1 := (G_1 + G)/2, for which clearly G̃_1(t)/G(t) → c̃_1 = 1/2, and therefore, by virtue of the assertion that we have already proved,

    1/2 + c_2 ∼ G̃_1 ∗ G_2(t)/G(t) = (G_1 ∗ G_2(t) + G ∗ G_2(t)) / (2G(t)) = G_1 ∗ G_2(t)/(2G(t)) + (1 + o(1))(1 + c_2)/2

as t → ∞, so that G_1 ∗ G_2(t)/G(t) → c_2 = c_1 + c_2.

(ii) Let G_0^+ be the distribution of the r.v. ζ_0^+, where ζ_0 ⊂= G_0. Since G_0^+(t) = G_0(t) for t > 0, it follows immediately from part (i) with G_1 = G_2 = G_0^+ that

    (G_0^+)^{2∗}(t) ∼ 2G_0^+(t),  i.e. G_0 ∈ S.

(iii) If G ∈ S then, by Theorems 1.2.4(vi) and 1.2.8, one has, as t → ∞,

    G^{2∗}(t) ∼ (G^+)^{2∗}(t) ∼ 2G(t).

The relation (1.2.13) follows straight away from part (i) using an induction argument.

(iv) One has G^{n∗}(t) ≤ (G^+)^{n∗}(t), n ≥ 1 (cf. (1.2.8)). Hence it is clear that it suffices to consider the case G ∈ S^+. Put

    α_n := sup_{t≥0} G^{n∗}(t)/G(t).

Analogously to (1.2.12), for n ≥ 2 one has

    G^{n∗}(t) = G(t) + ∫_0^t G^{(n−1)∗}(t − y) G(dy),


and therefore, for any M > 0,

    α_n ≤ 1 + sup_{0≤t≤M} ∫_0^t (G^{(n−1)∗}(t − y)/G(t)) G(dy)
             + sup_{t>M} ∫_0^t (G^{(n−1)∗}(t − y)/G(t − y)) (G(t − y)/G(t)) G(dy)
        ≤ 1 + 1/G(M) + α_{n−1} sup_{t>M} (G^{2∗}(t) − G(t))/G(t).

Since G ∈ S, for any ε > 0 there exists an M = M(ε) such that

    sup_{t>M} (G^{2∗}(t) − G(t))/G(t) < 1 + ε,

and hence

    α_n ≤ b_0 + α_{n−1}(1 + ε),  b_0 := 1 + 1/G(M),  α_1 = 1.

From here one obtains recursively

    α_n ≤ b_0 + b_0(1 + ε) + α_{n−2}(1 + ε)^2 ≤ · · · ≤ b_0 Σ_{j=0}^{n−1} (1 + ε)^j ≤ (b_0/ε)(1 + ε)^n.

The theorem is proved.

1.2.2 Sufficient conditions for subexponentiality

Now we will turn to a discussion of sufficient conditions for a given distribution G to belong to the class of subexponential distributions. As we already know, S ⊂ L (Theorem 1.2.8), and therefore the easily verified condition G ∈ L is, quite naturally, always present in conditions sufficient for G ∈ S. First we will present a simple assertion that clarifies the subexponentiality condition for signed r.v.'s; it follows from Theorems 1.2.8, 1.2.12(iii) and 1.2.4(vi).

Corollary 1.2.16. A distribution G belongs to S iff G ∈ L and G^{2∗}(t) ∼ 2G(t) as t → ∞.

The next theorem basically paraphrases the laconic definition of subexponentiality in terms of the relative smallness of the probabilities of certain simple events for a pair of independent r.v.'s with a common distribution G.

Theorem 1.2.17.

(i) Let G ∈ S. Then, for any fixed p ∈ (0, 1), as t → ∞,

    G(pt) G((1 − p)t) = o(G(t)),    (1.2.20)


and for any M = M(t) → ∞ such that M ≤ pt ≤ t − M one has

    ∫_M^{pt} G(t − y) G(dy) = o(G(t)).    (1.2.21)

(ii) Conversely, let G ∈ L and, for some p ∈ (0, 1), let relation (1.2.20) hold. Moreover, suppose that for some M = M(t) → ∞ such that

    c_0 := lim sup_{t→∞} G(t − M)/G(t) < ∞    (1.2.22)

one has (1.2.21) with the upper integration limit replaced by max{p, 1 − p}t. Then G ∈ S.

In what follows, it will sometimes be convenient to use an equivalent form of condition (1.2.20) in terms of the function l(t) = − ln G(t):

    l(pt) + l((1 − p)t) − l(t) → ∞ as t → ∞.    (1.2.23)

[Figure: the (ζ_1, ζ_2)-plane with marks at 0, M, pt, (1 − p)t, t − M, t, showing the regions corresponding to ∫_M^{(1−p)t} G(t − y) G(dy), ∫_M^{pt} G(t − y) G(dy) and G(pt) G((1 − p)t).] Fig. 1.2. An illustration relating to the proof of Theorem 1.2.17: for G ∈ S, all three expressions in the plot should be o(G(t)).

Proof. To prove both parts of the theorem, it suffices to consider the case G ∈ S^+.

(i) We will use the representation (1.2.17) (with G_1 = G_2 = G) for G^{2∗}(t). Clearly, when M < pt < t − M, one has

    ∫_M^{pt} G(t − y) G(dy) ≤ P_3,   G(pt) G((1 − p)t) ≤ P_3 + P_4,


where the P_i were defined after (1.2.17) (see also Figs. 1.1, 1.2). The desired statements are now obvious, since we showed in the proof of Theorem 1.2.12(i) that P_3 + P_4 = o(G(t)) when M = M(t) → ∞ as t → ∞.

(ii) One can easily see that, in the representation (1.2.17),

    P_3 + P_4 ≤ ∫_M^{pt} G(t − y) G(dy) + ∫_M^{(1−p)t} G(t − y) G(dy) + G(pt) G((1 − p)t),

so that it suffices to show that P_1 ≡ P_2 ∼ G(t) as t → ∞. By Theorem 1.2.4(i), there exists an M_1 = M_1(t) → ∞ such that G(t − y)/G(t) → 1 uniformly in y ∈ [0, M_1]. It remains to observe that

    P_1 = ∫_{[0, M_1)} G(t − y) G(dy) + ∫_{[M_1, M)} G(t − y) G(dy),

where the first integral is evidently (1 + o(1))G(t) while the second does not exceed c_0 G(t) G([M_1, M)) = o(G(t)).

Now we will use the above theorem to construct an example of a distribution G ∈ L \ S (thus completing the proof of Theorem 1.2.8).

Example 1.2.18. According to Theorem 1.2.4(ii), G ∈ L if for the function l(t) = − ln G(t) one has the representation l(t) = ∫_0^t ε(u) du, where

    ε(u) ≥ 0,  ε(u) → 0 as u → ∞,  ∫_0^∞ ε(u) du = ∞.    (1.2.24)

Define a piecewise constant function ε(t) as follows. Put t_0 := 0, t_k := 2^{k−1}, k = 1, 2, . . . , let ε(t) := 1 for t ∈ [t_0, t_1) and, for k = 1, 2, . . . ,

    ε(t) := t_n^{−1} ∫_0^{t_n} ε(u) du  for t ∈ [t_n, t_{n+1}),  n = 2k − 1,
    ε(t) := 0                           for t ∈ [t_n, t_{n+1}),  n = 2k

(note that t_n^{−1} = (t_{n+1} − t_n)^{−1}).

Clearly, by construction one has

    l(t_{2k}) = ∫_0^{t_{2k−1}} ε(u) du + ∫_{t_{2k−1}}^{t_{2k}} ε(u) du = 2 l(t_{2k−1}),  k = 1, 2, . . . ,    (1.2.25)

and, moreover, all the conditions from (1.2.24) are satisfied.

However, it follows from Theorem 1.2.17(i) that G ∉ S, since condition (1.2.20) (or, equivalently, condition (1.2.23)) is not satisfied for p = 1/2: by virtue of (1.2.25) we have 2l(t) − l(2t) ↛ ∞ as t → ∞.
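A direct computation (ours; it assumes the reconstruction of ε above, with ε := 0 on the even-numbered intervals) makes the failure of (1.2.23) at p = 1/2 visible: 2l(t) − l(2t) vanishes along the sequence t = t_1, t_3, t_5, . . .

    def l(t):
        # l(t) = -ln G(t) for Example 1.2.18 (eps = 0 on even-numbered intervals)
        if t <= 1.0:
            return t                  # eps = 1 on [0, t_1), t_1 = 1
        val, tn, n = 1.0, 1.0, 1      # l(t_1) = 1; current interval is [t_n, 2 t_n)
        while True:
            eps = val / tn if n % 2 == 1 else 0.0
            if t < 2 * tn:
                return val + eps * (t - tn)
            val += eps * tn           # a full odd-numbered interval doubles l
            tn, n = 2 * tn, n + 1

    for k in range(1, 8):
        t = 2.0 ** (2 * k - 2)        # t = t_{2k-1}
        print(2 * l(t) - l(2 * t))    # prints 0.0: (1.2.23) fails at p = 1/2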

Theorem 1.2.17(ii) enables one to obtain a number of more convenient conditions that are sufficient for subexponentiality. First of all observe that, putting p = 1/2 and M = t/2 in this theorem, we immediately establish the following result from [109].

Corollary 1.2.19. If G ∈ L and lim sup_{t→∞} G(t)/G(2t) < ∞ then G ∈ S.

Theorem 1.2.21 below is also a simple corollary of Theorem 1.2.17. It extends the condition sufficient for subexponentiality that we established earlier (the regular variation of G(t) at infinity).

Clearly, any r.v.f. G(t) possesses the property that if b > 0 is fixed then G(bt) ∼ c_b G(t) as t → ∞ (where c_b = b^{−α} when G(t) is of the form (1.1.2)). Following [42], we will now introduce the following class of functions.

Definition 1.2.20. A function G(t) is said to be an upper-power function if it is l.c. and, for any b > 1, there exists a c(b) > 0 such that

    G(bt) > c(b) G(t),  t > 0,    (1.2.26)

where c(b) is bounded away from zero on any interval (1, b_1), b_1 < ∞. In other words, for any p ∈ (0, 1) one has G(pt) < c_p G(t) for some c_p < ∞, where c_p is bounded on any interval (p_1, 1), 0 < p_1 < 1.

One can easily see that if the function G(t) is non-increasing and the condition (1.2.26) is satisfied for some b > 1 then this condition will also be met for any b > 1, and c(b) will be bounded away from zero on any interval (1, b_1), b_1 < ∞. In the literature, the property (1.2.26) is often referred to as dominated variation (see e.g. § 1.4 of [113]).

It is clear that the class of upper-power functions is broader than the class of r.v.f.'s. In particular, it contains functions obtained by multiplying r.v.f.'s by 'slowly oscillating' factors m(t) that are bounded away from zero. It turns out that such functions still belong to S and, moreover, applying the above operation (under the additional condition that the function m(t) is bounded) to the tails of subexponential distributions does not take one outside S.

Theorem 1.2.21.

(i) If G(t) is an upper-power function then G ∈ S.

(ii) Let the tail of a distribution G_0 be of the form G_0(t) = m(t)G(t), where G ∈ S and m(t) is an l.c. function such that for all t

    0 < m_1 ≤ m(t) ≤ m_2 < ∞.    (1.2.27)

Then G_0 ∈ S.

Proof. (i) This assertion follows from Corollary 1.2.19 and the above remarks.

(ii) It suffices to verify the conditions of Theorem 1.2.17(ii) for p = 1/2. Denote by M = M(t) → ∞ (as t → ∞) a function for which the convergence G(t + v)/G(t) → 1 is uniform in v ∈ [−M, M] (see Remark 1.2.5 on p. 17).

Clearly, G_0 ∈ L (cf. Theorem 1.1.4(i)). Therefore it remains only to verify (1.2.20) and (1.2.21) for G_0(t), p = 1/2 and the chosen M = M(t). By virtue of (1.2.27) and the relation (1.2.20) for G ∈ S (which holds owing to Theorem 1.2.17(i)),

    G_0^2(t/2) ≤ m_2^2 G^2(t/2) = o(G(t)) = o(G_0(t)),

so that (1.2.20) also holds for G_0(t). Further, quite similarly to the bounds (1.2.18), (1.2.19) from the proof of Theorem 1.2.12(i), one derives that

    ∫_M^{t/2} G_0(t − y) G_0(dy) = O( ∫_M^{t/2} G(t − y) G(dy) ) + o(G(t)) = o(G(t));

the last equality holds owing to the fact that G ∈ S and to the relation (1.2.21) for G(t) (which holds by Theorem 1.2.17(i)). Since G(t) = O(G_0(t)) by condition (1.2.27), the above bound immediately implies the relation (1.2.21) for G_0(t). The theorem is proved.

the last equality holds owing to the fact that G ∈ S and to the relation (1.2.21)for G(t) (which holds by Theorem 1.2.17(i)). Since G(t) = O(G0(t)) by condi-tion (1.2.27), the above bound immediately implies the relation (1.2.21) for G0(t).The theorem is proved.

1.2.3 Further sufficient conditions for distributions to belong to S. Relationship to the class of semiexponential distributions

Theorem 1.2.17 essentially gives necessary and sufficient conditions for a distribution to belong to S. So far, using the theorem we have only established the sufficient conditions from Theorem 1.2.21, which, as we will see from what follows, are quite narrow. To construct broader (in a certain sense) sufficient conditions, we will introduce the class of so-called semiexponential distributions. This is a rather wide class (in particular, it includes, along with the class R, the lognormal distribution, the Weibull distribution G(t) = e^{−t^α}, 0 < α < 1, and many others).

Definition 1.2.22. A distribution G belongs to the class Se of semiexponential distributions if

(i) G(t) = e^{−l(t)}, where, for some s.v.f. L(t), the following representation holds true:

    l(t) = t^α L(t),  α ∈ [0, 1];  L(t) → 0 as t → ∞ when α = 1;    (1.2.28)

(ii) as t → ∞, for Δ = Δ(t) = o(t) one has

    l(t + Δ) − l(t) = (α + o(1)) (Δ/t) l(t) + o(1).    (1.2.29)

We will denote by Se(α), α ∈ [0, 1], the subclass of the distributions G ∈ Se for which the index of the function l(t) is equal to α.

Condition (ii) could be rewritten in the following equivalent form: for any fixed δ > 0, as t → ∞,

    for α > 0:  l(t + Δ) − l(t) ∼ αΔ l(t)/t,
    for α = 0:  l(t + Δ) − l(t) = o(Δ l(t)/t),
    if (Δ/t) l(t) > δ;    (1.2.30)

    for α ≥ 0:  l(t + Δ) − l(t) = o(1)  if (Δ/t) l(t) → 0.    (1.2.31)
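For the pure Weibull case l(t) = t^α (i.e. L ≡ 1 in (1.2.28)) the increment asymptotics in (1.2.29) can be observed directly. A sketch (ours; the exponent 0.7 in Δ = t^{0.7} is an arbitrary choice with Δ = o(t) and (Δ/t) l(t) → ∞, so regime (1.2.30) applies):

    alpha = 0.4
    l = lambda t: t**alpha                       # the Weibull case of (1.2.28)

    for t in (1e3, 1e6, 1e9):
        delta = t**0.7                           # Delta = o(t)
        increment = l(t + delta) - l(t)
        main_term = alpha * (delta / t) * l(t)   # the main term in (1.2.29)
        print(t, increment / main_term)          # the ratio approaches 1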

As we will see below, in the proof of Theorem 1.2.36, it is this property that, in a sense, plays the central role in determining whether G ∈ Se. The first property in (1.2.28) (i.e. that the representation l(t) = t^α L(t), where L is an s.v.f., holds) is, in many aspects, a consequence of (1.2.29). This observation will be important when we compare the classes S and Se.

Remark 1.2.23. It is seen from Definition 1.2.22 that if G ∈ Se(α) for some α ∈ [0, 1] then the distribution G_1 with tail G_1(t) ∼ G(t) (or, which is the same, with l_1(t) := − ln G_1(t) = l(t) + o(1)) as t → ∞ also belongs to the subclass Se(α). Thus, like the class S (cf. Remark 1.2.13 on p. 21), each subclass Se(α) is closed with respect to the relation of asymptotic equivalence and can be partitioned in a similar way into equivalence subclasses. The last assertion is evidently applicable to the whole class Se as well.

Remark 1.2.24. From the above it follows that the definition of the class Se can be rewritten in the following way:

G ∈ Se if G(t) = e^{−l(t)+o(1)}, where l(t) admits the representation (1.2.28), and for all Δ = o(t) one has

    for α > 0:  l(t + Δ) − l(t) ∼ αΔ l(t)/t,
    for α = 0:  l(t + Δ) − l(t) = o(Δ l(t)/t).    (1.2.32)

In this definition, the function l(t) (and therefore the function L(t)) can, without loss of generality, be assumed to be differentiable, since it can be 'smoothed' without leaving a given equivalence subclass (as in Remark 1.2.13).

If L(t) is differentiable and L′(t) = o(L(t)/t) as t → ∞, then the relation (1.2.32) for all Δ = o(t) follows immediately from the representation (1.2.28).

Now we will turn to the relationship between the class Se (and its subclasses Se(α)) and the distribution classes that we have already considered. First we will show that

    R ⊂ Se(0), but R ≠ Se(0).    (1.2.33)

That is, the class R of distributions with tails regularly varying at infinity proves to be strictly smaller than Se(0). For this and other reasons (in particular, the importance of the class R in applications), in the subsequent exposition we will single out this class and consider it separately from Se.


We now return to (1.2.33). The first relation is next to obvious. Indeed, for G ∈ R one has G(t) = t^{−β} L_G(t), β ≥ 0, where L_G(t) is an s.v.f., so that

    l(t) = − ln G(t) = β ln t − ln L_G(t),

and hence, as one can easily verify, conditions (1.2.28) and (1.2.29) are met for α = 0.

To establish the second relation in (1.2.33), we will construct an example of a distribution G ∈ Se(0) \ R. Put l(t) := L_0(t) ln t, where L_0(t) is the s.v.f. from (1.1.5), which oscillates between the values 1 and 3. Clearly G(t) = e^{−l(t)} is not an r.v.f., whereas l(t) is an s.v.f. by Theorem 1.1.4(i), and it is not hard to verify that condition (1.2.29) holds with α = 0, so that G ∈ Se(0).

Now we will present a number of further assertions characterizing the class S and its connections with Se and its subclasses Se(α).

Theorem 1.2.25. Let G ∈ L and let the function l(t) = − ln G(t) possess the property

    lim sup_{t→∞, u→1} [ l(t + u z(t)) − l(t) ] < 1,  where z(t) := t/l(t).    (1.2.34)

Then G ∈ S.

Remark 1.2.26. The property (1.2.34) can be rewritten in the following form: ifΔ = Δ(t) ∼ z(t) as t → ∞ then, for all sufficiently large t, one has

l(t + Δ) − l(t) � (α + o(1))Δt

l(t) + q (1.2.35)

for 0 � α < 1 and α + q < 1. Under broad conditions on the function l(t), thisrelation will also remain true for Δ = o(z(t)) (see below).

Roughly speaking, the first term on the right-hand side of (1.2.35) bounds therate of the ‘differentiable growth’ of the function l(t), whereas the second boundsthe rate of its ‘singular growth’. The jumps in the function l(t) must be o(t) ast → ∞ owing to the fact that G(t) is an l.c. function.

Remark 1.2.27. The relation (1.2.35) could be rewritten in the following equiv-alent form:

l(t) − l(t − Δ) � (α + o(1))Δt

l(t) + q, Δ = Δ(t) ∼ z(t),

for 0 � α < 1 and α + q < 1. To verify this, one has to observe that z(t) = o(t)and that under condition (1.2.34) we have l(t + Δ) ∼ l(t) for Δ ∈ [0, z(t)].Since the functions l(t) and z(t) = t/l(t) are ‘equally smooth’, we also havez(t + Δ) ∼ z(t) and Δ(t) ∼ Δ(t ) for t := t − Δ(t).

Prior to proving Theorem 1.2.25, we will state an important corollary of thatassertion which establishes a connection between the distribution classes underconsideration.

Page 63: Asymptotic analysis of random walks

32 Preliminaries

Corollary 1.2.28.

(i) If, for some c > 0 and α ∈ [0, 1) and all Δ ∈ [c, z(t)], the functionl(t) = − lnG(t) satisfies the inequality

l(t + Δ) − l(t) � (α + o(1))Δt

l(t) + o(1) as t → ∞ (1.2.36)

then G ∈ S.(ii) For α ∈ [0, 1) one has Se(α) ⊂ S.

Proof of Corollary 1.2.28. (i) One can easily see that inequality (1.2.36) ensuresthat the conditions of Theorem 1.2.25 are met. Indeed, (1.2.36) implies (1.2.35)and therefore also (1.2.34) by virtue of Remark 1.2.26. It remains to demonstratethat G ∈ L. To this end, note that from (1.2.36) with Δ = c it follows thatl(t+c)−l(t) = O(l(t)/t)+o(1), and hence we just have to show that l(t) = o(t).This relation follows immediately from (1.2.36) and the next lemma.

Lemma 1.2.29. If, for some γ0 < 1 and t0 < ∞,

l(t + z(t)) − l(t) � γ0 for t � t0 (1.2.37)

then l(t) = o(t) as t → ∞ (in other words, z(t) → ∞).

Observe that (1.2.34) implies (1.2.37).

Proof. Consider the increasing sequence s0 := t0,

sn+1 := sn + z(sn), n = 0, 1, 2, . . . (1.2.38)

Owing to (1.2.37) and the obvious inequality

x1 + x2

y1 + y2� min

{x1

y1,x2

y2

}for y1, y2 > 0

we obtain

z(sn+1) =sn + z(sn)

l(sn + z(sn))� sn + z(sn)

l(sn) + γ0� min{z(sn), z(sn)/γ0}

= z(sn) � z(sn−1) � · · · � z(s0) = z(t0) > 0. (1.2.39)

Hence sn → ∞ as n → ∞, and there exists a (possibly infinite) limit

z∞ := limn→∞ z(sn) � z(t0) > 0.

Since for t ∈ [sn, sn+1]

z(t) =t

l(t)� sn

l(sn+1)� sn

l(sn) + γ0=

z(sn)1 + γ0/l(sn)

∼ z(sn),

to complete the proof of the lemma we just have to show that z∞ = ∞.Assume the contrary, that z∞ < ∞, so that

z(sn) = (1 − ε1(n))z∞, ε1(n) → 0 as n → ∞. (1.2.40)

Page 64: Asymptotic analysis of random walks

1.2 Subexponential distributions 33

This implies that there exists an infinite subsequence {n′} such that

l(sn′+1) − l(sn′) � (1 − ε2(n′))z(sn′)z∞

, ε2(n′) → 0. (1.2.41)

Indeed, if this were not the case then there would exist a δ > 0 and an n0 < ∞such that, for all n � n0,

l(sn+1) − l(sn) � z(sn)z∞ + δ

and therefore

l(sn) = l(sn0) +n∑

k=n0+1

(l(sk) − l(sk−1)

)� l(sn0) +

1z∞ + δ

n∑k=n0+1

z(sk−1) = l(sn0) +sn − sn0

z∞ + δ,

whence lim infn→∞ z(sn) � z∞ + δ, which would contradict the definitionof z∞.

It remains to notice that from (1.2.40) and (1.2.41) one has

l(sn′ + z(sn′)) − l(sn′) � (1 − ε1(n′))(1 − ε2(n′)) → 1 as n′ → ∞.

This contradicts (1.2.37), therefore z∞ = ∞. The lemma is proved.

We return to the proof of Corollary 1.2.28.

(ii) Since z(t) = o(t) when α < 1, the property (1.2.29) of the elements ofSe(α) with α < 1 clearly implies (1.2.36). Corollary 1.2.28 is proved.

The proof of Theorem 1.2.25 will be preceded by the following lemma.

Lemma 1.2.30. If condition (1.2.37) is satisfied for the function l(t) then thereexist t0 < ∞, γ < 1 and a continuous piecewise differentiable function h(t) suchthat, for all t � t0,

|h(t) − l(t)| � γ, (1.2.42)

h′(t) � γh(t)t

, (1.2.43)

h(t)h(u)

�(

t

u

, t0 � u � t. (1.2.44)

In particular, h(t) � h(t0)(t/t0)γ .

Proof. Suppose that (1.2.37) holds. Since l(t) → ∞, we can assume without lossof generality that

γ0

1 − 1/l(t0)� γ :=

γ0 + 12

.

Page 65: Asymptotic analysis of random walks

34 Preliminaries

Using the sequence (1.2.38) with s0 = t0, denote by h(t) the continuous piece-wise linear function with nodes at the points (sn, l(sn)), n � 0:

h(t) := l(sn) +l(sn+1) − l(sn)

z(sn)(t − sn), t ∈ [sn, sn+1].

Since sn → ∞ as n → ∞ by virtue of (1.2.39), we have thus defined the functionh(t) on the entire half-line [t0,∞) (its definition on the left of the point t0 isinessential for our purposes).

It is evident that, owing to (1.2.37), on the interval [sn, sn+1] one has the in-equalities

l(sn) � l(t) � l(sn+1) � l(sn) + γ0,

l(sn) � h(t) � l(sn+1) � l(sn) + γ0,(1.2.45)

which proves (1.2.42) since γ0 < γ < 1.Further, to obtain (1.2.43) note that for t ∈ [sn, sn+1] we have sn � t − z(sn)

and therefore

h′(t) =l(sn+1) − l(sn)

z(sn)� γ0

z(sn)=

γ0l(sn)sn

� γ0h(t)t − z(sn)

=γ0

1 − z(sn)/t

h(t)t

� γ0

1 − z(sn)/sn

h(t)t

=γ0

1 − 1/l(sn)h(t)

t� γh(t)

t,

since l(sn) � l(t0).Finally, (1.2.44) is almost obvious from (1.2.43): integrating the latter inequal-

ity on the interval [u, t], we get

lnh(t)h(u)

� γ lnt

u, (1.2.46)

which proves (1.2.44). The lemma is proved.

Proof of Theorem 1.2.25. We will make use of Theorem 1.2.17(ii) with p = 1/2and M = z(t). Recall that, under the conditions of the theorem, by virtue ofRemark 1.2.27 one has z(t) ∼ z( t ) for t := t − z(t). Therefore, by (1.2.34),

G(t − M)G(t)

= exp{l(t) − l( t )} = exp{l( t + z(t)) − l( t )} < e

for all sufficiently large t, and so condition (1.2.22) of Theorem 1.2.17(ii) is sat-isfied.

Further, for the function h from Lemma 1.2.30 one has |l(t)−h(t)| < γ, t � t0.

Therefore, if we establish relations (1.2.20), (1.2.21) for the distribution Gh withGh(t) = e−h(t) then they will be valid for the original distribution G as well.So to simplify the exposition one could assume from the very beginning that the

Page 66: Asymptotic analysis of random walks

1.2 Subexponential distributions 35

function l(t) has the properties (1.2.43), (1.2.44). From these properties it followsthat for any v ∈ [0, 1/2] and t � t0 one has

l(t) − l(vt) − l((1 − v)t) � [1 − vγ − (1 − v)γ ]l(t). (1.2.47)

Next, use the inequality

1 − (1 − v)γ � (2γ − 1)vγ , v ∈ [0, 1/2],

of which the left-hand (right-hand) side is a concave (convex) function on the in-dicated interval, the values of these functions coinciding with each other at the endpoints v = 0 and v = 1/2 of the interval. Together with (1.2.47) the inequalityimplies that

l(t) − l(vt) − l((1 − v)t) � −cvγ l(t), c = 2 − 2γ > 0. (1.2.48)

From this we immediately obtain (1.2.23), so that the condition (1.2.20) of Theo-rem 1.2.17 is satisfied (for any p ∈ (0, 1)).

As we have already noted, under the conditions of Theorem 1.2.25 there existt0 < ∞ and γ0 < 1 such that (1.2.37) holds true. As z(t) → ∞ by Lemma 1.2.29(or simply by virtue of the assumption G ∈ L and Theorem 1.2.4(iv)), one hasz(t) � t0 for all sufficiently large t. Denote by {sn;n � 0} the sequence con-structed according to (1.2.38) with initial value

s0 = z(t).

Further, inequality (1.2.48) implies that

t/2∫M

G(t − y)G(dy) = G(t)

t/2∫z(t)

exp{l(t) − l(y) − l(t − y)} dl(y)

� G(t)

t/2∫z(t)

exp{−c(y/t)γ l(y)} dl(y) =: G(t)I(t).

(1.2.49)

Put N := min{n � 1 : sn � t/2} and represent I(t) as a sum of integrals overthe intervals [sn, sn+1). Observe that, according to (1.2.37), the increments ofthe function l(t) on each of these intervals do not exceed γ. Therefore, using theinequalities

sn � s0 + nz(s0) � (n + 1)z(t),

l(sn+1) − l(sn) � sn+1

z(sn+1)− sn+1

z(sn)+ 1 � 1,

which are valid owing to (1.2.38), (1.2.39) and our choice of s0, we obtain from

Page 67: Asymptotic analysis of random walks

36 Preliminaries

the dominated convergence theorem that

I(t) �N−1∑n=0

sn+1∫sn

exp{−c(y/t)γ l(t)} dl(y) �N−1∑n=0

exp{−c(sn/t)γ l(t)}

�∞∑

n=0

exp{−c(n + 1)γ l1−γ(t)} → 0 as t → ∞.

Together with (1.2.49) this implies that condition (1.2.21) of Theorem 1.2.17 issatisfied. Thus, all the conditions of Theorem 1.2.17(ii) are met, and thereforeG ∈ S. The theorem is proved.

We will present one more important consequence of Theorem 1.2.25. First re-call that, by virtue of Remark 1.2.13 (p. 21), for distributions G ∈ S one canalways assume that the function l(t) = − lnG(t) (up to an additive o(1) correc-tion term, t → ∞) is differentiable arbitrarily many times. A similar assertionholds for G ∈ Se as (cf. Remark 1.2.24 on p. 30).

Theorem 1.2.31. Let G(t) = e−l(t)+o(1), where l(t) is continuous and piecewisedifferentiable and

lim supt→∞

z(t)l′(t) < 1, z(t) = t/l(t). (1.2.50)

Then the conditions of Theorem 1.2.25 are satisfied, so that G ∈ S.

Proof. The condition (1.2.50) clearly means that, for some γ < 1 and t0 < ∞,one has l′(t) � γl(t)/t for t � t0. As shown in Lemma 1.2.30, this relationimplies (1.2.46), so that

lnl(t + Δ)

l(t)� γ ln

t + Δt

, t � t0, Δ > 0.

Hence, for Δ = o(t), t → ∞, one has

l(t + Δ)l(t)

�(

1 +Δt

= 1 +Δt

(γ + o(1)).

From here the relation (1.2.36) is immediate. It remains to use Corollary 1.2.28.

Theorem 1.2.31, in its turn, implies the following.

Corollary 1.2.32. For G ∈ S it suffices that

lim supt→∞

t [ln(− lnG(t))]′ < 1. (1.2.51)

Next we will give one more consequence of the above results. It concernsconditions for G ∈ S that are, in a sense, close to the necessary ones and are notrelated to the differentiability properties of the function l(t) = − lnG(t). Thecondition to be presented will be expressed simply in terms of some restrictions

Page 68: Asymptotic analysis of random walks

1.2 Subexponential distributions 37

on the asymptotic behaviour of the increments l(t + Δ) − l(t) for some functionΔ = Δ(t) such that 0 < c � Δ = o(z(t)) as t → ∞.

First note that, for an l(t) = t−αL(t) with a ‘regular’ s.v.f. L(t), the deriva-tive l′(t) is close to αl(t)/t = α/z(t) (cf. Theorem 1.1.4(iv)), so that in thissituation 1/z(t) characterizes the decay rate of the function l′(t) as t → ∞. The-orem 1.2.31 shows that a sufficient condition for G ∈ S is that the ratio of l′(t)and 1/z(t) is bounded away (from below) from unity as t → ∞.

In the general case, we will consider the ratio

r(t,Δ) := z(t)(l(t + Δ) − l(t))/Δ

of the difference analogue (l(t + Δ) − l(t))/Δ of the derivative of l(t) and thefunction 1/z(t).

Theorem 1.2.33.

(i) Let there exist a function Δ = Δ(t) such that

0 < c � Δ(t) = o(z(t)) as t → ∞ (1.2.52)

and

α+(G,Δ) := lim supt→∞

r(t,Δ(t)) < 1. (1.2.53)

Then G ∈ S.(ii) Conversely, if G ∈ S then, for any function Δ = Δ(t) � c > 0, we have

α−(G,Δ) := lim inft→∞ r(t,Δ(t)) � 1. (1.2.54)

Remark 1.2.34. Observe that (1.2.52) contains the condition that z(t) → ∞ ast → ∞ or, equivalently, that l(t) = o(t). The proof of part (ii) of the theoremshows that in this case one always has α−(G,Δ) � 1.

To prove the theorem we will need the following analogue of Lemma 1.2.30.

Lemma 1.2.35. If the conditions of Theorem 1.2.33(i) are satisfied for l(t) thenthere exist t0 < ∞, γ < 1 and a continuous piecewise differentiable function h(t)such that

|h(t) − l(t)| = o(1) as t → ∞, (1.2.55)

h′(t) � γh(t)t

for all t � t0. (1.2.56)

Proof. The proof of the lemma essentially repeats that of Lemma 1.2.30 but withz(t) replaced by Δ(t) when constructing the sequence (1.2.38): now we putsn+1 := sn + Δ(sn). The value s0 is chosen to be such that r(t,Δ(t)) � γ < 1for all t � s0 (its existence is ensured by condition (1.2.53)). It is clear thatsn → ∞ as n → ∞ by virtue of (1.2.52).

Page 69: Asymptotic analysis of random walks

38 Preliminaries

Denote by h(t) a continuous piecewise linear function with nodes (sn, l(sn)),n � 0. Then clearly

l(sn+1) − l(sn) � γΔ(sn)z(sn)

=: rn = o(1) as n → ∞. (1.2.57)

Since both l(t) and h(t) are non-decreasing functions we find, cf. (1.2.45), thatthe deviation of these functions from each other on the interval [sn, sn+1] alsodoes not exceed rn. This proves (1.2.55).

Further, from (1.2.57) and (1.2.55) we obtain that, for any given ε > 0 and forall sufficiently large n, one has

h′(t) =l(sn+1) − l(sn)

Δ(sn)� γ

z(sn)� γ(1 + ε)h(t)

t, t ∈ [sn, sn+1].

Increasing, if necessary, the values of the previously chosen quantities s0 andγ < 1, we arrive at (1.2.56). The lemma is proved.

Proof of Theorem 1.2.33. (i) This assertion is an obvious consequence of Theo-rem 1.2.31 and Lemma 1.2.35.

(ii) Assume that α−(G,Δ) > 1 for some function Δ(t) � c > 0. Then,evidently, sn := sn−1 + Δ(sn−1) → ∞ as n → ∞ and, for a sufficiently larges0, one has r(t,Δ(t)) � 1 for all t � s0. Therefore, for all n � 0 we have

z(sn+1) =sn + Δ(sn)

l(sn + Δ(sn))� sn + Δ(sn)

l(sn) + Δ(sn)/z(sn)= z(sn).

Thus the sequence {z(sn)} is non-increasing, so that l(sn) � sn/z(s0), n � 0.

Since sn → ∞, the above relation implies that l(t) �= o(t), which is inconsistentwith G ∈ S by Theorems 1.2.8 and 1.2.4(iv). The theorem is proved.

It follows from the above assertions that the ‘regular’ part of the class S,i.e. the part consisting of the elements for which the upper and lower limitsα± in (1.2.53) and (1.2.54) coincide with each other, is nothing other than theclass Se of semiexponential distributions. More precisely, denote by S(α) ⊂ Sthe subclass of distributions G ∈ S for which there exists a function Δ = Δ(t)satisfying (1.2.52) and such that α+(G,Δ) = α−(G,Δ) = α ∈ [0, 1]:

S(α) :={G ∈ S :

there is a Δ such that (1.2.52) holdsand α+(G,Δ) = α−(G, Δ) = α

}. (1.2.58)

Recall that Se(α) denotes the subclass of semiexponential distributions with in-dex α ∈ [0, 1] (see Definition 1.2.22 on p. 29).

Theorem 1.2.36. For α ∈ [0, 1),

S(α) = Se(α).

For α = 1 one has S(1) ⊂ Se(1).

Page 70: Asymptotic analysis of random walks

1.2 Subexponential distributions 39

For distributions G ∈ Se(1), to ensure that G ∈ S(1) one needs, gener-ally speaking, additional conditions on the s.v.f. L(t) → 0 in the representa-tion (1.2.28). For more detail, see Remark 1.2.37 below.

Proof. As we observed in Remark 1.2.23 (p. 30), the subclasses Se(α) are closedwith respect to asymptotic equivalence: if G1 ∈ Se(α) and G1(t) ∼ G(t) (or,equivalently, l1(t) := − lnG1(t) = l(t) + o(1)) as t → ∞ then G ∈ Se(α).Hence, to prove the inclusion

S(α) ⊂ Se(α), α ∈ [0, 1], (1.2.59)

it suffices to show that, for any G ∈ S(α), there is an asymptotically equivalentdistribution G1 ∈ Se(α).

Assume that G ∈ S(α). To construct the required function G1(t) we will em-ploy the linear interpolation h(t) from the proof of Lemma 1.2.35 (it is obtainedusing the function Δ(t) from (1.2.58)) and put l1(t) := h(t), G1(t) := e−l1(t).The same argument as in Lemma 1.2.35, together with the condition α+(G,Δ) =α−(G,Δ) = α, shows that there exists the limit

limt→∞ t

(ln l1(t)

)′ = α.

Therefore, (ln l1(t))′ = α/t + ε(t)/t, where ε(t) → 0 as t → ∞, so that

ln l1(t) = α ln t +

t∫1

ε(u)u

du

and hence

l1(t) = tαL(t), where L(t) = exp

{ t∫0

ε(u)u

du

}. (1.2.60)

By virtue of Theorem 1.1.3, the last representation means that L(t) is an s.v.f.Since G ∈ S, by Theorem 1.2.12(ii) we have G1 ∈ S and therefore l1(t) = o(t)(see Theorems 1.1.2(iv) and 1.1.3), so that L(t) = o(1) when α = 1.

Further, it is clear thatt+Δ∫t

ε(u)u

du = o(Δ/t) as Δ = o(t),

so that from (1.2.60) we obtain

l1(t + Δ)l1(t)

=(

1 +Δt

eo(Δ/t).

Therefore

l1(t + Δ) − l1(t) = (α + o(1))Δt

l1(t).

Thus the conditions of Definition 1.2.22 are satisfied for the function G1(t). So

Page 71: Asymptotic analysis of random walks

40 Preliminaries

we have proved that G1 ∈ Se(α) (and hence that G ∈ Se(α)), which estab-lishes (1.2.59).

Now let G ∈ Se(α), α ∈ [0, 1). To prove that G ∈ S(α) it suffices, byvirtue of Theorem 1.2.33(i), to show that there is a function Δ = Δ(t) satisfy-ing (1.2.52), for which there exists the limit

α+(G,Δ) = α−(G,Δ) = α

(note that we consider only the case α < 1). But the existence of such a func-tion Δ(t) directly follows from the relation (1.2.29) in Definition 1.2.22. Therelation implies that there exists a function ε = ε(t) that converges to zero slowlyenough as t → ∞ and has the property that, for Δ(t) := ε(t)t/l(t) = o(z(t)),one has Δ(t) � c > 0 and

l(t + Δ(t)) − l(t) = (α + o(1))Δ(t)

tl(t).

The last relation means that r(t,Δ(t)) → α as t → ∞. The theorem is proved.

Remark 1.2.37. In the boundary case where α+(G,Δ) = 1 in (1.2.53), bothG ∈ S and G /∈ S are possible. Before discussing the conditions for a distri-bution G ∈ Se(1) to belong to the class of subexponential distributions, we willgive an example of a distribution G ∈ Se(1) \ S.

Example 1.2.38. Set l(t) := tL(t), where

L(t) :=

{k−1, t ∈ [22k, 22k+1],

(log2 t − k − 1)−1, t ∈ [22k+1, 22k+2],k = 1, 2, . . .

Clearly, L(t) ∼ 2/ log2 t, L′(t) = 0 for t ∈ (22k, 22k+1) and

L′(t) = − log2 e

t(log2 t − k − 1)2for t ∈ (22k+1, 22k+2),

so that in any case L′(t) = o(L(t)/t). Hence for Δ = Δ(t) = o(t) one hasL(t + Δ) ∼ L(t) and

l(t + Δ) − l(t) = ΔL(t + Δ) + t(L(t + Δ) − L(t))

= (1 + o(1))ΔL(t) + O(tL′(t)Δ) = (1 + o(1))ΔL(t). (1.2.61)

Thus the conditions for G ∈ Se(1) from Definition 1.2.22 are satisfied. However,for t = 22k+1,

2l(t/2) − l(t) = t[L(t/2) − L(t)] = 22k+1[L(22k) − L(22k+1)] = 0,

so that condition (1.2.20) (or equivalently (1.2.23)) for p = 1/2 is not satisfiedand therefore G �∈ S by Theorem 1.2.17(i).

Page 72: Asymptotic analysis of random walks

1.2 Subexponential distributions 41

That a distribution G ∈ Se(1) belongs to the class S can only be proved under theadditional assumption that the derivative L′(t) is regularly varying. Namely, the followingassertion holds true.

Theorem 1.2.39. Assume that G ∈ Se(1) and that the function L(t) in the representationl(t) = − ln G(t) = tL(t) is differentiable for t > t0 for large enough t0. If

ε(t) := −tL′(t) is an s.v.f. (1.2.62)

at infinity and, for some γ ∈ (0, 1),

γε(t)L(−1)(L(t)/γ) + ln ε(t) − ln L(t) → ∞, (1.2.63)

where L(−1)(u) = inf{t : L(t) < u} is the inverse to the function L(t), then G ∈ S.

Example 1.2.40. Consider a distribution G with l(t) = tL(t), where L(t) = ln−β t,β > 0, for t > e. Clearly, in this case ε(t) = β ln−β−1 t and L(−1)(u) = exp{u−1/β},so that for γ = 1/2 the left-hand side of (1.2.63) is equal to

2−1βt2−1/β

ln−β−1 t + ln β − ln ln t → ∞ as t → ∞.

Thus, the conditions of Theorem 1.2.39 are satisfied and therefore G ∈ S for all1 β > 0.

Proof of Theorem 1.2.39. We will make use of Theorem 1.2.17(ii) with p = 1/2. Firstnote that, for t > t0,

L(t) =

∞Zt

ε(u)

udu, (1.2.64)

so that ε(t) = o(L(t)) by Theorem 1.1.4(iv) (for the case α = 1). Therefore, l′(t) =L(t) − ε(t) ∼ L(t), and hence, for M = o(t),

l(t) − l(t − M) ∼tZ

t−M

L(u) du ∼ ML(t).

Since L(t) → 0 as t → ∞, we obtain immediately that G ∈ L (choosing M := c), andalso that condition (1.2.22) is satisfied for

M := 1/L(t) → ∞(note that 1/L(t) = o(t) by Theorem 1.1.4(ii)).

Further, again by Theorem 1.1.4 (part (iv) in the case α = 0 and part (ii)) we have that,as t → 0,

2l

„t

2

«− l(t) = t

»L

„t

2

«− L(t)

–= t

tZt/2

ε(u)

udu

�tZ

t/2

ε(u) du ∼ tε(t) − t

„t

2

«∼ t

2ε(t) → ∞,

since ε(t) is an s.v.f. Thus condition (1.2.23) is also met.

1 It was claimed in [266] that, in the case when G(t) = exp{−t ln−β t}, ‘we can similarly showthat G ∈ S’ iff β > 1 (a remark at the bottom of p. 1001). This assertion appears to be imprecise.

Page 73: Asymptotic analysis of random walks

42 Preliminaries

It remains to verify (1.2.21), namely that the following expression tends to zero:

t/2ZM

G(t)

G(t − y)G(dy) =

t/2ZM

exp{l(t) − l(t − y) − l(y)} dl(y)

=

t/2ZM

exp

jtˆL(t) − L(t − y)

˜− l(y)

»1 − L(t − y)

L(y)

–ffdl(y)

=

NZM

+

t/2ZN

=: I1 + I2,

where we put N = N(t) := L(−1)(L(t)/γ), so that N → ∞ and N = o(t) as t → ∞.Observe that because ε(u) > 0 (as an s.v.f.), by virtue of (1.2.64) the function L(t) willdecrease monotonically and continuously for t > t0 and therefore L(−1)(v) will be aunique solution to the equation L(L(−1)(v)) = v, so that one has L(N) = L(t)/γ.

Without restricting the generality, one can assume that M > t0 and therefore the func-tion L(y) decreases for y � M . This implies that for all sufficiently large t one hasL(y) � L(N) = L(t)/γ for y ∈ [M, N ] and, since L(t − y) ∼ L(t) as t → ∞ wheny ∈ [M, N ], for c := (1 − γ)/2 > 0 we obtain

I1 �NZ

M

exp˘−l(y)[1 − L(t − y)/L(y)]

¯dl(y) �

NZM

e−cl(y)dl(y) <1

ce−cl(M).

Further, for y ∈ [M, t/2] one has t − y � y and hence L(t − y)/L(y) � 1, and sinceε(t) is an s.v.f. we obtain

L(t) − L(t − y) = −t

tZt−y

ε(u)

udu � −

tZt−y

ε(u) du = −(1 + o(1))yε(t).

Therefore, using the monotonicity of L and the equality L(N) = γL(t), we get

I2 �t/2Z

N

exp˘t[L(t) − L(t − y)]

¯dl(y) �

t/2ZN

exp˘−(1 + o(1))yε(t)

¯dl(y)

�t/2Z

N

exp

j− (1 + o(1))ε(t)

L(N)l(y)

ffdl(y) =

l(t/2)Zl(N)

exp

j− (γ + o(1))ε(t)

L(t)u

ffdu

<L(t)

(γ + o(1))ε(t)exp

j− (γ + o(1))ε(t)

L(t)l(N)

ff∼ L(t)

γε(t)e−(1+o(1))ε(t)N = o(1),

owing to (1.2.63). Thus condition (1.2.21) of Theorem 1.2.17(ii) is also satisfied, andhence G ∈ S. The theorem is proved.

Now we will give an example showing that the tails of distributions from Scan have very large fluctuations around the tails of semiexponential distributions(much larger fluctuations than those discussed in Theorem 1.2.21).

Example 1.2.41. For a fixed δ ∈ (0, 1) put tk := 2k, t′k := (1 + δ)tk, k =

Page 74: Asymptotic analysis of random walks

1.2 Subexponential distributions 43

0, 1, . . . , and partition the region [1,∞) into half-open intervals [tk, t′k), [t′k, tk+1),k = 0, 1, . . . Put l(1) := 0 and, for a fixed γ ∈ (0, 1), set

l(t) :={

l(tk) for t ∈ (tk, t′k),l(tk) + tγ − ((1 + δ)tk)γ for t ∈ [t′k, tk+1],

so that l′(t) = 0 on intervals of the first kind and l′(t) = γtγ−1 inside intervalsof the second kind. Clearly

l(tk+1) = l(tk) + 2(k+1)γ − (1 + δ)γ2kγ = l(tk) + 2(k+1)γq1,

where q1 = 1 − ((1 + δ)/2)γ . Hence

l(tk) = q1

k∑j=1

2jγ = q2kγ + O(1) = l0(tk) + O(1) as k → ∞,

where l0(t) := qtγ , q = q1/(1 − 2−γ). Thus the function l(t) ‘oscillates’ aroundl0(t), and hence G(t) := e−l(t) oscillates around G0(t) := e−l0(t), where evi-dently G0 ∈ Se.

Denote by l1(t) a polygon with vertices at the ‘lower nodes’ (t′k, l(t′k)) of thefunction l(t). It is seen from the construction that l1(t) = (1 + δ)−γ l0(t) + O(1)and l1(t) � l(t). Hence

l′(t) � γtγ−1 =γ

t

((1 + δ)γ

ql1(t) + O(1)

)� l(t)

t

γ(1 + δ)γ

q(1 + o(1)).

Since γ < 1, for small enough δ > 0 we also have γ(1 + δ)γ/q < 1, i.e. theconditions of Theorem 1.2.31 are satisfied and so G ∈ S.

Observe also that the fluctuations of the function l(t) around l0(t) = qtγ areunbounded:

lim supt→∞

(l(t) − l0(t)) = lim supt→∞

(l0(t) − l(t)) = ∞.

This corresponds to unbounded ‘relative’ fluctuations of the function G(t) aroundthe tail G0(t) of the semiexponential distribution G0.

This example also shows that functions from S could be constructed in a similarway from ‘pieces’ of functions from the classes Se(α) with different values of α.

Remark 1.2.42. Definition 1.2.1 and the assertions of statements 1.2.4–1.2.36could be extended in an obvious way to the case of finite measures G withG(R) = g �= 1. For them, the subexponentiality property can be stated as

G2∗(t)G(t)

→ 2g as t → ∞.

A few of the assertions in statements 1.2.4–1.2.21 will need minor obvious amend-ment. Thus, the assertion of Theorem 1.2.12(iii) will take the form

Gn∗(t)G(t)

→ ngn−1,

Page 75: Asymptotic analysis of random walks

44 Preliminaries

whereas that of part (iv) of the same theorem will become

Gn∗(t)G(t)

� bgn−1(1 + ε)n.

To prove these assertions, one has to introduce the subexponential distributionsG = g−1G and make use of statements 1.2.4–1.2.36.

1.3 Locally subexponential distributions

Along with subexponentiality one could also consider a more subtle property, thatof local subexponentiality. To simplify the exposition we will confine ourselveshere to dealing with distributions on the positive half-axis. The transition to thegeneral case could be made in exactly the same way as in § 1.2.

1.3.1 Arithmetic distributions

We will start with the simpler discrete case, where one considers distributions (ormeasures) G = {gk; k � 0} on the set of integers. By the convolution of thesequences {gk} and {fk} we mean the sequence

(g ∗ f)k :=k∑

j=0

gjfk−j .

We will denote the convolution of {gk} with itself by {g2∗k } and, furthermore, put

g(n+1)∗k := (g ∗ gn∗)k, n � 2. Clearly gn∗

k = Gn∗({k}).Definition 1.3.1. A sequence {gk � 0; k � 0}, ∑∞

k=0 gk = 1, is said to besubexponential if

limk→∞

gk+1

gk= 1, (1.3.1)

limk→∞

g2∗k

gk→ 2. (1.3.2)

One can easily see that a regularly varying sequence

gk = k−α−1L(k), α > 1,

where L(t) is an s.v.f., will belong (after proper normalization) to the class ofsubexponential sequences, just as a semiexponential sequence gk = e−kαL(k),α ∈ (0, 1) does. Without normalizing, on the right-hand side of (1.3.2) one shouldhave 2g, where g =

∑∞k=0 gk.

For subexponential sequences there exist analogues of statements 1.2.4–1.2.21(with the difference that the property (1.3.1) is now part of the definition andtherefore requires no proof). We will restrict our attention to those assertionswhose proofs require substantial changes compared with the respective argumentsin § 1.2. These assertions are analogues of parts (iii) and (iv) of Theorem 1.2.12.

Page 76: Asymptotic analysis of random walks

1.3 Locally subexponential distributions 45

Theorem 1.3.2. Let {gk; k � 0} be a subexponential sequence. Then:

(i) for any fixed n � 2

limk→∞

gn∗k

gk= n; (1.3.3)

(ii) for any ε > 0 there exists an M < ∞ such that for all n � 1 and k � M

gn∗k

gk<

(1 + ε)n+1.

Proof. Similarly to the argument in the proof of Theorem 1.2.12(iii), we willmake use of induction. Assume that (1.3.3) holds true. Then, for M � k,

g(n+1)∗k

gk=

k∑j=0

gjgn∗k−j

gk=

k−M∑j=0

+k∑

j=k−M+1

. (1.3.4)

The first sum on the final right-hand side can be rewritten as follows:

k−M∑j=0

=k−M∑j=0

gj

gn∗k−j

gk−j

gk−j

gk,

where the ratio gn∗k−j/gk−j can be made arbitrarily close to n for all j � k − M

by choosing M large enough, whereas

k−M∑j=0

gjgk−j

gk=

k∑j=0

−k∑

j=k−M+1

→ 1 + G(M) as k → ∞. (1.3.5)

The latter relation follows from the fact that, by virtue of (1.3.2) and (1.3.1), forany fixed M one has, as k → ∞,

k∑j=0

=g2∗

k

gk→ 2 and

k∑j=k−M+1

gjgk−j

gk∼

k∑j=k−M+1

gk−j → 1 − G(M).

(1.3.6)Since G(M) =

∑∞j=M gj can be made arbitrarily small by choosing M large

enough, the first sum on the final right-hand side of (1.3.4) can be made arbitrarilyclose to n for all sufficiently large k.

The second sum on the final right-hand side of (1.3.4) satisfies the relation

k∑j=k−M+1

gj

gn∗k−j

gk∼

M−1∑j=0

gn∗j (1.3.7)

as k → ∞ and, therefore, can be made arbitrarily close to 1 by choosing M largeenough, since

∑∞j=0 gn∗

j = 1. As the left-hand side of (1.3.4) does not depend

on M , we have proved that g(n+1)∗k /gk → n + 1 as k → ∞.

(ii) Put

αn := supk�M

gn∗k

gk.

Page 77: Asymptotic analysis of random walks

46 Preliminaries

Then

αn+1 = supk�M

k∑j=0

gj

gn∗k−j

gk

� supk�M

k−M∑j=0

gjgn∗k−j

gk−j

gk−j

gk+ sup

k�M

k∑j=k−M+1

gj

gn∗k−j

gk. (1.3.8)

One can easily see from (1.3.5) that, for large enough M , the first supremum inthe second line of (1.3.8) does not exceed

αn supk�M

k−M∑j=0

gjgk−j

gk� αn(1 + ε),

while the second does not exceed 1 + ε owing to (1.3.7). Thus we have obtained

αn+1 � αn(1 + ε) + (1 + ε) for n � 1.

Since α1 = 1, it follows that

αn � ((1 + ε)n+1 − 1)/ε.

The theorem is proved.

Theorem 1.2.17 carries over to the case of sequences in an obvious way. Nei-ther it is difficult to obtain an analogue of Theorem 1.2.21.

An arithmetic distribution G = {gk} with the properties (1.3.1), (1.3.2) couldbe called locally subexponential.

1.3.2 Non-lattice locally subexponential distributions

There are two ways of extending the concept of local subexponentiality to the caseof non-lattice distributions. The first one suggests to consider distributions havingdensities. A density h(t) = G(dt)/dt = −G′(t) will be called subexponential if,for any fixed v, as t → ∞,

h(t + v)h(t)

→ 1, h2∗(t) :=

t∫0

h(u)h(t − u) du ∼ 2h(t). (1.3.9)

By Theorem 1.2.4(i) the convergence in the first relation in (1.3.9) is uniformin v ∈ [0, 1].

For distributions with subexponential densities a complete analogue of Theo-rem 1.3.2 holds true, the proof of which basically coincides with the argumentused in the discrete case.

The second way of extending the concept of local subexponentiality does not

Page 78: Asymptotic analysis of random walks

1.3 Locally subexponential distributions 47

require the existence of a density. For a Δ > 0, denote by Δ[t) the half-openinterval

Δ[t) := [t, t + Δ)

of length Δ, and let Δ1[t) := [t, t + 1).

Definition 1.3.3. A distribution G on [0,∞) is said to be locally subexponential(belonging to the class Sloc) if, for any fixed Δ > 0,

limt→∞

G(Δ[t))G(Δ1[t))

= Δ, (1.3.10)

limt→∞

G2∗(Δ[t))G(Δ[t))

= 2. (1.3.11)

It is not difficult to verify that if, for any fixed Δ > 0 as t → ∞,

G(Δ[t)) ∼ Δt−α−1L(t), α > 1, (1.3.12)

where L(t) is an s.v.f., then G ∈ Sloc (this will also follow from Theorem 1.3.6below). Evidently, the relation (1.3.12) always holds in the case when

G(t) =

∞∫t

u−α−1L(u) du.

It is also clear that if G has a subexponential density then G ∈ Sloc.If, for an arithmetic distribution G = {gk; k � 0}, the sequence {gk} is

subexponential, then we will also write G ∈ Sloc, although the property (1.3.10)will in this case hold only for integer-valued Δ � 1 while (1.3.11) will be truefor any Δ � 1. In what follows, in the case of arithmetic distributions G we willalways assume integer-valued Δ � 1 in (1.3.10), (1.3.11).

The analogues of the main assertions about the properties of distributions fromthe class Sloc that we established in the discrete case have the following form inthe non-lattice case.

Theorem 1.3.4. Let G ∈ Sloc. Then:

(i) for any fixed n � 2 and Δ � 1,

limt→∞

Gn∗(Δ[t))G(Δ[t))

= n; (1.3.13)

(ii) for any ε > 0 there exist an M = M(ε) and a b = b(ε) such that, for alln � 2, t � M and any fixed Δ � 1,

Gn∗(Δ[t))G(Δ[t))

� b(1 + ε)n.

Page 79: Asymptotic analysis of random walks

48 Preliminaries

Proof. The proof of the theorem follows the same line of reasoning as that ofTheorem 1.3.2.

(i) Here we will again use induction. Suppose that (1.3.13) is correct. Then,for M ∈ (0, t),

G(n+1)∗(Δ[t))G(Δ[t))

=

t+Δ∫0

Gn∗(Δ[t − u))G(Δ[t))

G(du) =

t−M∫0

+

t+Δ∫t−M

. (1.3.14)

In the first integral,

t−M∫0

Gn∗(Δ[t − u))G(Δ[t − u))

G(Δ[t − u))G(Δ[t))

G(du), (1.3.15)

the ratio Gn∗(Δ[t − u))/G(Δ[t − u)) can, by the induction hypothesis, be madearbitrarily close to n for all u � t − M by choosing M large enough. Further,

t−M∫0

G(Δ[t − u))G(Δ[t))

G(du) =G2∗(Δ[t))

G(t)−

t+Δ∫t−M

. (1.3.16)

To estimate the integral

t+Δ∫t−M

G(Δ[t − u))G(Δ[t))

G(du) (1.3.17)

we need the following lemma.

Lemma 1.3.5. Let the relation (1.3.10) hold for the distribution G, and let Q(v)be a bounded non-increasing function on R. Then, for any fixed Δ, M > 0,

limt→∞

t+Δ∫t−M

Q(t − u)G(du)

G(Δ[t))=

M∫−Δ

Q(v) dv. (1.3.18)

Proof of Lemma 1.3.5. Consider a finite partition of [t − M, t + Δ) into K half-open intervals Δ1, . . . ,ΔK of the form

Δk = [t − M + (k − 1)δ, t − M + kδ), k = 1, . . . ,K, δ =M + Δ

K.

Then

K∑k=1

Q(M − δ(k − 1))G(Δk)G(Δ[t))

�t+Δ∫

t−M

Q(t − u)G(du)

G(Δ[t))

�K∑

k=1

Q(M − δk)G(Δk)G(Δ[t))

. (1.3.19)

Page 80: Asymptotic analysis of random walks

1.3 Locally subexponential distributions 49

But G(Δk)/G(Δ[t)) → δ/Δ as t → ∞ by virtue of (1.3.10), so the sum on theleft-hand side of (1.3.19) will converge, as t → ∞, to

δ

Δ

K∑k=1

Q(M − δ(k − 1)).

A similar relation holds for the sum in the second line of (1.3.19). Now each ofthese sums can be made, by choosing a large enough K, arbitrarily close to

M∫−Δ

Q(v) dv.

Since the integral on the first right-hand side of (1.3.19) does not depend on K,the lemma is proved.

Now we return to the proof of Theorem 1.3.4. Applying Lemma 1.3.5 to thefunctions Q(t) = G(t) and Q(t) = G(t+Δ), we obtain that the integral (1.3.17)converges as t → ∞ to

⎛⎝ M∫−Δ

G(v)dv −M+Δ∫0

G(v)dv

⎞⎠ =1Δ

⎛⎝ 0∫−Δ

G(v)dv −M+Δ∫M

G(v)dv

⎞⎠= 1 − 1

Δ

M+Δ∫M

G(v)dv = 1 − r(M),

where r(M) � G(M). From this it follows that, by choosing M large enough,one can make the integral (1.3.17) arbitrarily close to 1 as t → ∞, and thereforethe integral on the left-hand side of (1.3.16) will, by virtue of (1.3.11), also be ar-bitrarily close to 1 while the first integral on the second right-hand side of (1.3.14)will be arbitrarily close to n.

Again using Lemma 1.3.5, we obtain in a similar way that for the last integralin (1.3.14) one has

limt→∞

t+Δ∫t−M

Gn∗(Δ[t − u))G(Δ[t))

G(du) = 1 − rn(M),

where rn(M) � Gn∗(M) → 0 as M → ∞. Hence, for large t, the last integralin (1.3.14) can be made, by choosing large enough M , arbitrarily close to 1. Thusthe left-hand side of (1.3.14) converges to n + 1 as t → ∞.

(ii) The proof of this assertion repeats that of Theorem 1.3.2(ii), with the samemodifications related to Lemma 1.3.5 as were made in the proof of part (i). Thetheorem is proved.

A sufficient condition for a distribution to be locally subexponential is con-tained in the following assertion.

Page 81: Asymptotic analysis of random walks

50 Preliminaries

Theorem 1.3.6. Let G ∈ S and, for each fixed Δ > 0, as t → ∞,

G(Δ[t)) ∼ ΔG(t)v(t), (1.3.20)

where v(t) → 0 is an upper-power function.1 Then G ∈ Sloc.

Remark 1.3.7. If G ∈ R and the function G(t) is ‘differentiable at infinity’, i.e.for each fixed Δ > 0, as t → ∞,

G(t) − G(t + Δ) ∼ αΔt−α−1L(t) =αΔG(t)

t,

then the conditions of Theorem 1.3.6 are clearly met.If G ∈ Se and the relation (1.2.29) from Definition 1.2.22 holds for each fixed

Δ and without the additive term o(1) (i.e. l(t + Δ) − l(t) ∼ Δv(t), v(t) =αl(t)/t), then

G(t) − G(t + Δ) = e−l(t)(1 − el(t)−l(t+Δ)

)= e−l(t)

(1 − e−Δv(t)(1+o(1))

) ∼ G(t)Δv(t)

so that again the conditions of Theorem 1.3.6 are satisfied.

Proof of Theorem 1.3.6. That the relation (1.3.10) holds true is obvious from con-dition (1.3.20). Further, let r.v.’s ζi ⊂=G, i = 1, 2, be independent, and letZ2 = ζ1 + ζ2. Then

G2∗(Δ[t)) = P(Z2 ∈ Δ[t)) = 2

t/2∫0

G(Δ[t − y))G(dy) + q,

where by virtue of (1.3.20), (1.2.20) and the fact that v(t) is an upper-powerfunction one has

q = P(ζ1∈Δ[t/2), ζ2 ∈ Δ[t/2), Z2 < t + Δ)

� G2(Δ[t/2)) � cv2(t)G2(t/2) = o(v(t)G(t)).

If M = M(t) → ∞ slowly enough as t → ∞ then, since v(t) and G(t) are l.c.functions, we obtain from (1.3.20) that

M∫0

G(Δ[t − y))G(dy) ∼ Δ

M∫0

v(t − y)G(t − y)G(dy) ∼ Δv(t)G(t).

Moreover, by (1.2.21)

t/2∫M

G(Δ[t − y))G(dy) < cΔv(t)

t/2∫M

G(t − y)G(dy) = o(v(t)G(t)).

1 See Definition 1.2.20 on p. 28.

Page 82: Asymptotic analysis of random walks

1.4 Asymptotic properties of ‘functions of distributions’ 51

Hence

G2∗(Δ[t)) ∼ 2Δv(t)G(t) ∼ 2G(Δ[t)),

which immediately implies (1.3.11). The theorem is proved.

Subexponential and locally subexponential distributions will be used in §§ 4.8,7.5, 7.6 and 8.2.

1.4 Asymptotic properties of ‘functions of distributions’

As before, let G denote the distribution of a r.v. ζ, so that G(B) = P(ζ ∈ B) forany Borel set B, and let G denote the tail of this distribution: G(t) = G([t,∞)).

Let g(λ) := Eeiλζ be the characteristic function (ch.f.) of the distribution G,and let A(w) be a function of the complex variable w. In a number of problems(see e.g. § 7.5), the form of a desired distribution can be obtained in terms ofcertain transforms of that distribution (e.g. its ch.f.) that have the form A(g(λ)).The question is, what can one say about the asymptotics of the tails of the desireddistribution given that the asymptotics of G(t) are known?

It is the distribution that corresponds to the ch.f. A(g(λ)) (or to some othertransform) to which we refer in the heading of this section.

The next theorem answers, to some extent, the question posed.

Theorem 1.4.1. Let the distribution G of the r.v. ζ be subexponential, and let afunction A(w) be analytic in the disk |w| � 1. Then there exists a finite mea-sure A such that the function A(g(λ)) admits a representation of the form

A(ψ(λ)) =∫

eiλxA(dx), Im λ = 0, (1.4.1)

where

A(t) := A([t,∞)) ∼ A′(1)G(t) as t → ∞.

Proof. Since the domain of analyticity is always open, the function A(w) will beanalytic in the region |w| � 1 + δ for some δ > 0 and the following expansionwill hold true:

A(w) =∞∑

k=0

Akwk, where |Ak| < c(1 + δ)−k.

Hence the measure

A :=∞∑

k=0

AkGk∗

is finite, with∫ |A(dx)| �

∑∞k=0 |Ak|. Moreover, the function

A(g(λ)) =∞∑

k=0

Akgk(λ)

Page 83: Asymptotic analysis of random walks

52 Preliminaries

is the Fourier transform of the measure A. This proves (1.4.1).Further,

A(t) =∞∑

k=0

AkGk∗(t),

where, due to the subexponentiality of G, for each k � 1 one has

Gk∗(t)G(t)

→ k as t → ∞.

Moreover, by Theorem 1.2.12(iv), for any ε > 0

Gk∗(t)G(t)

� b(1 + ε)k.

Choosing ε = δ/2, we obtain from the dominated convergence theorem that,as t → ∞,

A(t)G(t)

=∞∑

k=0

AkGk∗(t)G(t)

→∞∑

k=0

Ak = A′(1).

The theorem is proved.

One could also find the proof of (a somewhat more general form of) Theo-rem 1.4.1 of [84].

For locally subexponential distributions, we have the following analogue ofTheorem 1.4.1. As before, let Δ[t) = [t, t + Δ) be a half-open interval oflength Δ > 0.

Theorem 1.4.2. Let G ∈ Sloc and a function A(w) be analytic in the unit disk|w| � 1. Then there exists a finite measure A such that

A(g(λ)) =∫

eiλxA(dx)

and, for any fixed Δ, as t → ∞,

A(Δ[t)) ∼ A′(1)G(Δ[t)). (1.4.2)

Proof. The proof of the theorem is quite similar to that of Theorem 1.4.1. Asbefore, the measure A has the form

A =∞∑

k=0

AkGk∗,

where for the coefficients Ak in the expansion of the function A(w) we have theinequalities Ak � c(1 + δ)k for some δ > 0. Then one has to make use ofTheorem 1.3.4, which states that

Gk∗(Δ[t))G(Δ[t))

→ k as k → ∞

Page 84: Asymptotic analysis of random walks

1.4 Asymptotic properties of ‘functions of distributions’ 53

andGk∗(Δ[t))G(Δ[t))

< b(1 + ε)k for k � 2, t � M.

Setting ε = δ/2, we can use the dominated convergence theorem to obtain (1.4.2).The theorem is proved.

One also has complete analogues of Theorem 1.4.2 in the case when the distri-bution G has a subexponential density and also in the case when the distributionG = {gk; k � 0} is discrete and the sequence {gk} is subexponential. Thus, inthe arithmetic case, we have

Theorem 1.4.3. If the sequence {gk} is subexponential and a function A(w)is analytic in the unit disk |w| � 1 then there exists a finite discrete measureA = {ak; k � 0} such that A(g(w)) =

∑∞k=0 akwk, where g(w) =

∑∞k=0 gkwk

and

ak ∼ A′(1)gk

as k → ∞.

Proof. The proof of the theorem repeats that of Theorem 1.4.2. One just has touse Theorem 1.3.2 instead of Theorem 1.3.4.

There arises the following natural question: is it possible to relax the assump-tion on the analyticity of A(w) in the above Theorems 1.4.1–1.4.3 ? We willconcentrate on Theorems 1.4.1 and 1.4.3. They are the simplest in a series ofresults that enable one to find, given the asymptotics of G(t), the asymptotics ofA(t) for the ‘preimage’ of the function A(g(λ)), where the class of functions Acan be broader than that in Theorems 1.4.1 and 1.4.3 (see e.g. [83, 35, 42]).

We will present here without proof some ‘local theorems’ for arithmetic distri-butions. These results are quite useful in the asymptotic analysis (see e.g. § 8.2).

Theorem 1.4.4. Let {gk = G({k}); k � 0} be a probability distribution on theset of integers, the sequence {gk} be subexponential, g(w) =

∑∞k=0 gnwn and

A(w) be a function analytic on the range of values g(y) on the disk |y| � 1. Thenthere exists a finite measure A = {ak; k � 0} such that

A(g(w)) =∞∑

k=0

akwk (1.4.3)

and, as k → ∞,

ak ∼ A′(1)gk. (1.4.4)

If

A(w) =∞∑

k=0

Akwk,∞∑

k=0

|Ak| < ∞, (1.4.5)

then the measure A can be identified with the measure∑

AkGk∗.

Page 85: Asymptotic analysis of random walks

54 Preliminaries

For the proof of the theorem, see [83].

It is not hard to see that using a change of variables one can extend the assertionof the theorem to the case when, instead of the assumption on subexponentialityof the sequence {gk}, one has

limk→∞

gk+1

gk=

1r, r > 1, lim

k→∞g2∗

k

gk= 2g(r),

and the function A(w) is analytic on the set of values g(y), |y| � r.In [83] one can also find similar assertions for densities and discrete distribu-

tions on the whole axis, under the assumption that conditions (1.3.1) and (1.3.2)hold as k → ±∞. Then the assertion (1.4.4) also holds as k → ±∞.

In the case when gk ∼ ck−α−1 as k → ∞, one can consider, instead of func-tions A(w) that are analytic on the set of values g(y), |y| � 1, functions of theform (1.4.5) as well, where

∞∑k=0

|k|r|Ak| < ∞, r > α + 1.

In this case, the assertion (1.4.4) remains valid (see Theorem 6 of [83]).Next we will present an analogue of Theorem 1.4.4 for an extension of the class

of regularly varying functions. Put

d(bk) := bk − bk+1.

Theorem 1.4.5.

(i) In the notation of Theorem 1.4.4, let the following conditions be satisfied:

(a)∞∑

n=1

1n

∞∑k=n

|d(kgk)| < ∞;

(b) there exist an s.v.f. L(t), α > 0, and a constant c < ∞ such that

L(k)k−α−1 � |gk| � cL(k)k−α−1,gk+1

gk→ 1 as k → ∞.

Further, let A(w) be a function analytic on the range of values g(y), |y| � 1.Then (1.4.3), (1.4.4) hold true.

(ii) Each of the following conditions is sufficient for condition (a) to hold:

(a1) kgk is non-increasing for k � k0, k0 < ∞;(a2) for some ε > 0, one has gk � ck−2(ln k)−2−ε, k � 0.

Proof. For the proof of first part of the theorem see [35] or [42] (Appendix 3).The assertion of the second part of the theorem is obvious, since if (a1) holds

then

|d(kgk)| = kgk − (k + 1)gk+1,∞∑

k=n

|d(kgk)| = ngn,

and condition (a) turns into the relation∑∞

n=1 gn < ∞, which is always true.

Page 86: Asymptotic analysis of random walks

1.4 Asymptotic properties of ‘functions of distributions’ 55

If (a2) is true then

∞∑k=n

|d(kgk)| �∞∑

k=n

kgk < c∞∑

k=n

k−1(ln k)−2−ε ∼ c1(lnn)−1−ε,

and condition (a) will be satisfied since∑∞

n=1 n−1(lnn)−1−ε < ∞.

Theorem 1.4.5, just as Theorem 1.4.4, can be extended to the case of distribu-tions on the whole real line (see [35, 40]).

Note that the assertion of Theorems 1.4.4 and 1.4.5 on the representation (1.4.3)for the function A(g(w)) is a special case of the well-known Wiener–Levy the-orem on the rings (Banach algebras) B of the generating functions of absolutelyconvergent series, which states the following. Let g ∈ B (i.e. g(w) =

∑gkwk,∑ |gk| < ∞) and let A(w) be a function analytic in a simply connected domain

D, situated, generally speaking, on a multi-sheeted Riemann surface. If one canconsider the range of values {g(w); |w| � 1} as a set lying in D then A(g) alsobelongs to B, i.e. one has a representation of the form

A(g(w)) =∞∑

k=1

akwk,

∞∑k=1

|ak| < ∞.

There arises a natural question: to what extent will the assertions of Theo-rems 1.4.2 and 1.4.3 remain true in the case of signed measures? For exam-ple, when studying the asymptotics of the distribution of the first passage time(see [58]), there arises a problem concerning an analogue of Theorem 1.4.4 in thecase when the terms in the sequence {gk} can assume both positive and negativevalues. We will present here an analogue of Theorem 1.4.3, restricting ourselvesto considering regularly varying sequences. In what follows, the relation ak ∼ cbk

with c = 0 will be understood as ak = o(bk) as k → ∞.Let L be a s.v.f., 0 < |c| < ∞, and

gn ∼ cn−αL(n), α > 1; gn := |gn|, g(w) :=∞∑

k=0

gkwk. (1.4.6)

Theorem 1.4.6. Suppose that {gk; k � 0} is a real-valued sequence of theform (1.4.6) and that a function A(w) is analytic in the disk |w| � g(1). ThenA(g(w)) can be represented as the series

A(g(w)) =∞∑

k=0

akwk,∞∑

k=0

|ak| < ∞,

and, as k → ∞,

ak ∼ A′(g(1))gk.

Page 87: Asymptotic analysis of random walks

56 Preliminaries

Proof. As before, let {gn∗k } denote the nth convolution of the sequence {gk} with

itself:

g(n+1)∗k =

k∑j=0

gn∗j gk−j , n � 1,

so that∑

k gn∗k wk = gn(w). It is not hard to see that g2∗

k ∼ 2g(1)gk and then, byinduction, that

gn∗k ∼ ngn−1(1)gk as k → ∞ (1.4.7)

for any fixed n. Further, it is evident that |gn∗k | < gn∗

k where {gn∗k } is the

nth convolution of the sequence {gk := |gk|} with itself, so that∑

gn∗k wk =

gn(w). Clearly, the sequence {gk/g(1)} specifies a subexponential distributionand hence, by virtue of Theorem 1.3.2(ii), for any ε > 0 and all large enough k

and n one has

gn∗k � gkgn−1(1) (1 + ε/2)n

. (1.4.8)

Furthermore, from the relation

A(g(w)) =∑

Angn(w)

we obtain

ak =∞∑

n=0

Angn∗k =

∑n�N

+∑n>N

, (1.4.9)

where, for each fixed N , one has from (1.4.7) that

1gk

∑n�N

→∑n�N

Anngn−1(1) = A′(g(1)) + rN as k → ∞ (1.4.10)

with rN → 0 as N → ∞. For the last sum in (1.4.9) we find by virtue of (1.4.8)that ∣∣∣∣∑

n>N

∣∣∣∣ �∑n>N

|Angn∗k | �

∑n>N

|An| gkgn−1(1)(1 + ε)n/2.

Since |An| � c1[g(1)(1 + ε)]−n for a suitable ε > 0, we see that the sequence|An|gn(1)(1 + ε)n/2 decays exponentially fast as n → ∞. Therefore∣∣∣∣∑

n>N

∣∣∣∣ � c2gk(1 + ε)−N/2. (1.4.11)

Comparing the relations (1.4.9)–(1.4.11) and noting that N can be chosen arbi-trarily large, we obtain the assertion of the theorem.

Page 88: Asymptotic analysis of random walks

1.5 Convergence of distributions of sums of r.v.’s to stable laws 57

1.5 The convergence of distributions of sums of random variables with

regularly varying tails to stable laws

As is well known, in the case Eξ2 < ∞ one has the central limit theorem, whichstates that the distributions of the normalized sums Sn =

∑ni=1 ξi of independent

r.v.’s ξid= ξ converge to the normal law as n → ∞.

If Eξ2 = ∞ then the situation noticeably changes. In this case, the convergenceof the distributions of the appropriately normalized sums Sn to a limiting law willonly take place for r.v.’s with regularly varying distribution tails.

From the proof of the central limit theorem by the method of characteristicfunctions, it is seen that the nature of the limiting distribution for Sn is defined bythe behaviour of the ch.f.

f(λ) := Eeiλξ, λ ∈ R,

of ξ in the vicinity of zero. If Eξ = 0 and Eξ2 = d < ∞ then, as n → ∞,

f

(μ√n

)= 1 +

f′(0)μ√n

+f′′(0)μ2

2n+ o

(1n

)= 1 − dμ2

2n+ o

(1n

). (1.5.1)

It is this relation that defines the asymptotic behaviour of the ch.f. fn(μ/√

n)of Sn/

√n, which leads to the limiting normal law. In the case Eξ2 = ∞ (so

that f′′(0) does not exist) we will use the same method, but, in order to obtain the‘right’ asymptotics of f(μ/b(n)) under a suitable scaling b(n), we will have toimpose regular variation conditions on the ‘two-sided’ tails

F (t) := F((−∞,−t)) + F([t,∞)) = P(ξ �∈ [−t, t)), t > 0.

As before, the functions

F+(t) := F([t,∞)) = P(ξ � t), F−(t) := F((−∞,−t)) = P(ξ < −t)

will be referred to as the right and the left tails of the distribution of ξ, respectively.Assume that the following condition holds for some α ∈ (0, 2] and ρ ∈ [−1, 1]:

[Rα,ρ] The two-sided tail F (t) = F−(t) + F+(t) is an r.v.f. at infinity, i.e. ithas a representation of the form

F (t) = t−αLF (t), α ∈ (0, 2], (1.5.2)

where LF (t) is an s.v.f.; in addition there exists the limit

limt→∞

F+(t)F (t)

=: ρ+ =12(ρ + 1) ∈ [0, 1]. (1.5.3)

If ρ+ > 0 then clearly the right tail F+(t) (just like F (t)) is an r.v.f., i.e. itadmits a representation of the form

F+(t) = V (t) := t−αL(t), α ∈ (0, 2], L(t) ∼ ρ+LF (t)

Page 89: Asymptotic analysis of random walks

58 Preliminaries

(following § 1.1, we use the symbol V to denote an r.v.f.). If ρ+ = 0 then the righttail F+(t) = o(F (t)) need not be assumed to be regularly varying.

It follows from (1.5.3) that there also exists the limit

limt→∞

F−(t)F (t)

=: ρ− = 1 − ρ+.

If ρ− > 0 then, similarly, the left tail F−(t) admits a representation of the form

F−(t) = W (t) := t−αLW (t), α ∈ (0, 2], LW (t) ∼ ρ−LF (t).

If ρ− = 0 then the left tail F−(t) = o(F (t)) is not assumed to be regularlyvarying.

The parameters ρ± are connected to the parameter ρ from condition [Rα,ρ] bythe relations

ρ = ρ+ − ρ− = 2ρ+ − 1.

Evidently, for α < 2 one has Eξ2 = ∞, so that the representation (1.5.1)ceases to hold, and the central limit theorem is inapplicable. In what follows, insituations where Eξ exists and is finite we will always assume, without loss ofgenerality, that

Eξ = 0.

Since F (t) in non-increasing, the (generalized) inverse function F (−1)(u), under-stood as

F (−1)(u) := inf{t > 0 : F (t) < u},

always exists. If F (t) is strictly monotone and continuous then b = F (−1)(u) isthe unique solution of the equation

F (b) = u, u ∈ (0, 1).

Put

ζn :=Sn

b(n),

where the scaling factor b(n) is defined in the case α < 2 by

b(n) := F (−1)(1/n). (1.5.4)

It is obvious that in the case ρ+ > 0 the scaling factor b(n) is connected to thefunction σ(n) = V −1(1/n) introduced in Theorem 1.1.4(v) by σ(ρ−1

+ n) ∼ b(n).For α = 2 we put

b(n) := Y (−1)(1/n), (1.5.5)

Page 90: Asymptotic analysis of random walks

1.5 Convergence of distributions of sums of r.v.’s to stable laws 59

where

Y (t) := 2t−2

t∫0

yF (y) dy = 2t−2

⎛⎝ t∫0

yV (y) dy +

t∫0

yW (y) dy

⎞⎠∼ t−2 E

[ξ2; −t � ξ < t

]=: t−2LY (t) (1.5.6)

and LY is an s.v.f. (see Theorem 1.1.4(iv)). From Theorem 1.1.4(v) it followsalso that if (1.5.2) holds then

b(n) = n1/αLb(n), α � 2,

where Lb is an s.v.f.Recall the notation VI(t) =

∫ t

0V (y) dy, V I(t) =

∫∞t

V (y) dy.

Theorem 1.5.1. Let condition [Rα,ρ] be satisfied. Then the following assertionshold true.

(i) For α ∈ (0, 2), α �= 1, and the scaling factor (1.5.4), we have

ζn ⇒ ζ(α,ρ) as n → ∞, (1.5.7)

where the distribution Fα,ρ of the r.v. ζ(α,ρ) depends only on the parametersα and ρ and has a ch.f. f(α,ρ)(λ) given by

f(α,ρ)(λ) = Eeiλζ(α,ρ)= exp{|λ|αB(α, ρ, φ)}, (1.5.8)

where φ = signλ,

B(α, ρ, φ) = Γ(1 − α)(

iρφ sinαπ

2− cos

απ

2

)(1.5.9)

and for α ∈ (1, 2) we put Γ(1 − α) = Γ(2 − α)/(1 − α).(ii) When α = 1, for the sequence ζn with scaling factor (1.5.4) to converge

to a limiting law the former, generally speaking, needs to be centred. Moreprecisely, we have

ζn − An ⇒ ζ(1,ρ) as n → ∞, (1.5.10)

where

An :=n

b(n)[VI(b(n)) − WI(b(n))

]− ρC, (1.5.11)

C ≈ 0.5772 is the Euler constant and

f(1,ρ)(λ) = Eeiλζ(1,ρ)= exp

{−π|λ|

2− iρλ ln |λ|

}. (1.5.12)

If n[VI(b(n)) − WI(b(n))

]= o(b(n)), then ρ = 0 and one can put

An = 0.

Page 91: Asymptotic analysis of random walks

60 Preliminaries

If Eξ = 0, then

An =n

b(n)[W I(b(n)) − V I(b(n))

]− ρC.

If Eξ = 0, ρ �= 0, then ρAn → −∞ as n → ∞.

(iii) For α = 2 and the scaling factor (1.5.5),

ζn ⇒ ζ(2,ρ) = ζ as n → ∞, f(2,ρ)(λ) = Eeiλζ = e−λ2/2,

so that ζ has the standard normal distribution that is independent of ρ.

Remark 1.5.2. It is not difficult to verify (cf. Lemma 2.2.1 of [286]) that in the‘extreme’ cases ρ = ±1 the ch.f.’s (1.5.8), (1.5.12) of stable distributions withα < 2 admit the following simpler representations:

f(α,1)(λ) = exp{−Γ(1 − α)(−iλ)α

}, α ∈ (0, 2), α �= 1,

f(1,1)(λ) = exp{(−iλ) ln(−iλ)

}; f(α,−1)(λ) = f(α,1)(−λ), α � 2.

Remark 1.5.3. From the representation (1.5.11) for the centring sequence {An}in the case α = 1 it follows that if there exists Eξ = 0 then the boundedness ofthe sequence implies that ρ = 0. The converse assertion, that in the case Eξ = 0the relation ρ = 0 implies the boundedness of {An}, is false.

Indeed, let ξ be an r.v. with Eξ = 0 such that for t � t0 > 0 one has

V (t) =1

2t ln2 t, W (t) = V (t)

[1 +

1L2(t)

], L2(t) := ln ln t.

Then ρ = 0, F (t) ∼ t−1 ln−2 t, b(n) ∼ n ln−2 n and

V I(t) =1

2 ln t, W I(t) = V I(t) +

1 + o(1)L2(t) ln(t)

,

so that

W I(t) − V I(t) ∼ 1L2(t) ln t

.

Therefore

An =(1 + o(1)) ln2 n

L2(b(n)) ln b(n)− ρC ∼ lnn

ln lnn→ ∞ as n → ∞.

Remark 1.5.4. The last assertion of the theorem shows that the limiting distribu-tion can be normal even in the case when ξ has infinite variance.

Remark 1.5.5. If α < 2 then from the properties of s.v.f.’s (Theorem 1.1.4(iv))we have that, as t → ∞,

t∫0

yF (y) dy =

t∫0

y1−αLF (y) dy ∼ 12 − α

t2−αLF (t) =1

2 − αt2F (t).

Page 92: Asymptotic analysis of random walks

1.5 Convergence of distributions of sums of r.v.’s to stable laws 61

Hence for α < 2 one has Y (t) ∼ 2(2 − α)−1F (x),

Y (−1)

(1n

)∼ F (−1)

(2 − α

2n

)∼(

22 − α

)1/α

F (−1)

(1n

)(cf. (1.5.4)). However, when α = 2 and d := Eξ2 < ∞, we have

Y (t) ∼ t−2d, b(n) = Y (−1)

(1n

)∼

√nd.

Thus, the scaling (1.5.5) is ‘transitional’ between the scaling (1.5.4) (up to a con-stant factor (2/(2 − α))1/α) and the standard scaling

√nd in the central limit

theorem in the case Eξ2 < ∞. This also means that the scaling (1.5.5) is ‘univer-sal’ and can be used for all α � 2 (as it is in many texts on probability theory).However, as we will see later on, for α < 2 the scaling (1.5.4) is simper and easierto deal with, and this is why it will be used in the present exposition.

We will present here a proof of Theorem 1.5.1 that essentially uses the explicitform of the scaling sequence b(n) and thereby helps to establish a direct connec-tion between the zones of ‘normal’ deviations (as in Theorem 1.5.1) and largedeviations (as in Chapters 2 and 3).

Recall that Fα,ρ denotes the distribution of ζ(α,ρ). The parameter α assumesvalues from the half-interval (0, 2] and the parameter ρ = ρ+ − ρ− can assumeany value from the closed interval [−1, 1]. The role of the parameters α and ρ willbe clarified later, at the end of this section.

It follows from Theorem 1.5.1 that each law Fα,ρ, 0 < α � 2, −1 � ρ � 1 islimiting for the distributions of suitably normalized sums of i.i.d. r.v.’s. The lawof large numbers implies that the degenerate distribution Ia concentrated at somepoint a is also a limiting one. The totality of all these distributions will be denotedby S0. Further, it is not hard to see that if F ∈ S0 then a distribution obtainedfrom F by scale and shift transformations, i.e. a distribution F{a,b} given, forsome fixed b > 0 and a, by the relation

F{a,b}(B) := F(

B − a

b

), where

B − a

b= {u ∈ R : ub + a ∈ B},

is also limiting (for the distributions of (Sn − an)/bn as n → ∞, with suitable{an} and {bn}).

It turns out that the class S of all distributions obtained by such an extensionof S0 includes all the limiting laws for sums of i.i.d. r.v.’s.

Another characterization of the class S of limiting distributions is possible.

Definition 1.5.6. A distribution F is called stable if, for any a1, a2 and for anyb1 > 0 and b2 > 0, there exist a and b > 0 such that

F{a1,b1} ∗ F{a2,b2} = F{a,b}.

Page 93: Asymptotic analysis of random walks

62 Preliminaries

This definition implies that the convolution of a stable distribution F with itselfproduces the same distribution F, up to scale and shift transformations (or, equiv-alently, for independent r.v.’s ξi ⊂=F one has (ξ1 + ξ2 − a)/b⊂=F for suitable a

and b > 0).In terms of ch.f.’s, stability is stated as follows. For any b1 > 0, b2 > 0 there

exist a and b > 0 such that

f(λb1)f(λb2) = eiλaf(λb), λ ∈ R. (1.5.13)

The class of all stable laws will be denoted by SS . The remarkable fact is thatthe class S of all limiting laws coincides with the class SS of all stable laws.

If, under a suitable scaling,

ζn ⇒ ζ(α,ρ) as n → ∞then one says that the distribution F of the summands ξ belongs to the domain ofattraction of the stable law Fα,ρ.

Theorem 1.5.1 means that if F satisfies condition [Rα,ρ] then F belongs to thedomain of attraction of the distribution Fα,ρ.

One can show that the converse is also true (see e.g. § 5, Chapter XVII of [122]):if F belongs to the domain of attraction of the law Fα,ρ for an α < 2 then condi-tion [Rα,ρ] is satisfied.

Proof of Theorem 1.5.1. We follow the same path as when proving the centrallimit theorem using the relation (1.5.1). We will study the asymptotic propertiesof the ch.f. f(λ) = Eeiλξ in the vicinity of zero (more precisely, the asymptoticsof

f

b(n)

)− 1 → 0

as b(n) → ∞) and show that, under condition [Rα,ρ], for any μ ∈ R one has

n

(f

b(n)

)− 1

)→ ln f(α,ρ)(μ) (1.5.14)

(or some modification of this relation; see (1.5.51) below). From this it will followthat, for ζn = S(n)/b(n),

fζn(μ) → f(α,ρ)(μ) as n → ∞. (1.5.15)

Indeed,

fζn(μ) = fn

b(n)

).

Since f(λ) → 1 as λ → 0, we have

ln fζn(μ) = n ln f

b(n)

)= n ln

[1 +

(f

b(n)

)− 1

)]= n

[f

b(n)

)− 1

]+ Rn,

Page 94: Asymptotic analysis of random walks

1.5 Convergence of distributions of sums of r.v.’s to stable laws 63

where |Rn| � n|f(μ/b(n)) − 1|2 for all sufficiently large n and hence Rn → 0,owing to (1.5.14). From this we see that (1.5.14) implies (1.5.15).

Thus, first we will study the asymptotics of f(λ) as λ → 0 and then estab-lish (1.5.14).

(i) Let α ∈ (0, 1). One has

f(λ) = −∞∫0

eiλtdV (t) −∞∫0

e−iλtdW (t). (1.5.16)

Consider the first integral:

−∞∫0

eiλtdV (t) = V (0) + iλ

∞∫0

eiλtV (t) dt, (1.5.17)

where, after the change of variables |λ|t = y, |λ| = 1/m, we obtain

I+(λ) := iλ

∞∫0

eiλtV (t) dt = iφ

∞∫0

eiφyV (my) dy (1.5.18)

with φ = sign λ (the trivial case λ = 0 is excluded throughout the argument).Assume for the present that ρ+ > 0. Then V (t) is an r.v.f. at infinity and, for

each y, owing to the properties of s.v.f.’s one has

V (my) ∼ y−αV (m) as |λ| → 0.

So it is natural to expect that, as |λ| → 0,

I+(λ) ∼ iφV (m)

∞∫0

eiφyy−αdy = iφV (m)A(α, φ), (1.5.19)

where

A(α, φ) :=

∞∫0

eiφyy−αdy. (1.5.20)

Let us assume that the relation (1.5.19) holds and similarly that

−∞∫0

e−iλtdW (t) = W (0) + I−(λ), (1.5.21)

where

I−(t) := −iλ

∞∫0

e−iλtW (t) dt

∼ −iφW (m)

∞∫0

e−iφyy−αdy = −iφW (m)A(α,−φ). (1.5.22)

Page 95: Asymptotic analysis of random walks

64 Preliminaries

Since V (0) + W (0) = 1, the relations (1.5.16)–(1.5.22) mean that, as λ → 0,

f(λ) = 1 + F (m)iφ [ρ+A(α, φ) − ρ−A(α,−φ)](1 + o(1)). (1.5.23)

One can find a closed-form expression for the integral A(α, φ). Observe thatthe contour integral along the boundary of the positive quadrant in the complexplane (closed as a contour) of the function eizz−α, which is analytic in the quad-rant, is equal to zero. From this it is not hard to obtain that

A(α, φ) = Γ(1 − α)eiφ(1−α)π/2, α > 0. (1.5.24)

(Note also that (1.5.20) is a table integral; its value, (1.5.24), can be found inhandbooks; see e.g. integrals 3.761.4 and 3.761.9 of [134]).

Thus, one has in (1.5.23) that

iφ [ρ+A(α, φ) − ρ−A(α,−φ)]

= iφΓ(1 − α)(

ρ+ cos(1 − α)π

2+ iφρ+ sin

(1 − α)π2

− ρ− cos(1 − α)π

2+ iφρ− sin

(1 − α)π2

)= Γ(1 − α)

(iφ(ρ+ − ρ−) cos

(1 − α)π2

− sin(1 − α)π

2

)= Γ(1 − α)

(iφρ sin

απ

2− cos

απ

2

)= B(α, ρ, φ),

where B(α, ρ, φ) is defined in (1.5.9). Therefore, as λ → 0,

f(λ) − 1 = F (m)B(α, ρ, φ)(1 + o(1)). (1.5.25)

Setting λ := μ/b(n) (so that m = b(n)/|μ|), where b(n) is defined in (1.5.4), andtaking into account that F (b(n)) ∼ 1/n, we get

n

[f

b(n)

)− 1

]= nF

(b(n)|μ|

)B(α, ρ, φ)(1 + o(1))

∼ |μ|αB(α, ρ, φ). (1.5.26)

We have established the validity of (1.5.14) and hence that of assertion (i) of thetheorem in the case α < 1, ρ+ > 0.

If ρ+ = 0 (ρ− = 0) then the above argument remains valid provided we re-place the term V (m)

(W (m)

)with zero or o(W (m))

(o(V (m))

). This follows

from the fact that F+(t)(F−(t)

)admits in this case a regularly varying majorant

V ∗(t) = o(W (t))(W ∗(t) = o(V (t))

). Similar remarks apply to what follows

as well.Thus the theorem is proved in the case α < 1 provided that we justify the

Page 96: Asymptotic analysis of random walks

1.5 Convergence of distributions of sums of r.v.’s to stable laws 65

asymptotic equivalence in (1.5.19). To do that, it suffices to verify that the inte-grals

ε∫0

eiφyV (my) dy and

∞∫M

eiφyV (my) dy (1.5.27)

can be made arbitrarily small compared with V (m) by choosing ε and M appro-priately. Note beforehand that, by virtue of Theorem 1.1.4(iii) (see (1.1.23)), forany δ > 0 there exists a tδ > 0 such that for all v � 1, vt � tδ one has

V (vt)V (t)

� (1 + δ)v−α−δ.

So, for δ < 1 − α, t > tδ,

t∫0

V (u) du � tδ +

t∫tδ

V (u) du = tδ + tV (t)

1∫tδ/t

V (vt)V (t)

dv

� tδ + tV (t)(1 + δ)

1∫0

v−α−δdv = tδ +tV (t)(1 + δ)1 − α − δ

� ctV (t), (1.5.28)

because tV (t) → ∞ as t → ∞. From this we obtain that∣∣∣∣∣∣ε∫

0

eiφyV (my) dy

∣∣∣∣∣∣ � 1m

εm∫0

V (u) du � cεV (εm) ∼ cε1−αV (m).

Since ε1−α → 0 as ε → 0, the required assertion concerning the first integralin (1.5.27) is proved. The second integral in (1.5.27) is equal to

∞∫M

eiφyV (my) dy =1iφ

eiφyV (my)∣∣∣∣∞M

− 1iφ

∞∫M

eiφydV (my)

= − 1iφ

eiφMV (mM) − 1iφ

∞∫mM

eiφu/mdV (u),

so that its absolute value does not exceed

2V (mM) ∼ 2M−αV (m). (1.5.29)

Therefore, by choosing a suitable M , the value of the second integral in (1.5.27)can also be made arbitrarily small compared with V (m). The relation (1.5.19)(and, together with it, the assertion of the theorem in the case α < 1) is proved.

Now let α ∈ (1, 2); hence there exists a finite expectation Eξ that, according

Page 97: Asymptotic analysis of random walks

66 Preliminaries

to our convention, will be assumed to be equal to zero. In this case,

f(λ) − 1 = φ

|λ|∫0

f′(φu) du, φ = sign λ, (1.5.30)

and we have to find the asymptotic behaviour of

f′(λ) = −i

∞∫0

teiλtdV (t) + i

∞∫0

te−iλtdW (t) =: I(1)+ (λ) + I

(1)− (λ) (1.5.31)

as λ → 0. Since tdV (t) = d(tV (t)) − V (t) dt, integrating by parts yields

I(1)+ (λ) = −i

∞∫0

teiλtdV (λ) = −i

∞∫0

eiλtd(tV (t)) + i

∞∫0

eiλtV (t) dt

= −λ

∞∫0

tV (t) eiλtdt + iV I(0) − λ

∞∫0

V I(t) eiλtdt

= iV I(0) − λ

∞∫0

V (t) eiλtdt, (1.5.32)

where by Theorem 1.1.4(iv) both the functions

V I(t) :=

∞∫t

V (u) du ∼ tV (t)α − 1

as t → ∞, V I(0) < ∞,

and

V (t) := tV (t) + V I(t) ∼ αtV (t)α − 1

are regularly varying.Letting, as before, m = 1/|λ|, m → ∞ (cf. (1.5.18), (1.5.19)), we obtain

−λ

∞∫0

V (t) eiλtdt = −φV (m)

∞∫0

V (my) eiφydy

∼ −φ

∞∫0

y−α+1eiφydy = − αV (m)λ(α − 1)

A(α − 1, φ),

I(1)+ (λ) = iV I(0) − αρ+F (m)

λ(α − 1)A(α − 1, φ)(1 + o(1)), (1.5.33)

where the function A(α, φ) defined in (1.5.20) is equal to (1.5.24).

Page 98: Asymptotic analysis of random walks

1.5 Convergence of distributions of sums of r.v.’s to stable laws 67

Similarly,

I(1)− (λ) = i

∞∫0

te−iλtdW (t)

= −λ

∞∫0

tW (t) e−iλtdt − iW I(0) − λ

∞∫0

W I(t) e−iλtdt

= −iW I(0) − λ

∞∫0

W (t) e−iλtdt,

where

W I(t) :=

∞∫t

W (u) du, W (t) := tW (t) + W I(t) ∼ αtW (t)α − 1

and

−λ

∞∫0

W (t) e−iλtdt ∼ − αW (m)λ(α − 1)

A(α − 1,−φ).

Therefore

I(1)− (λ) = iW I(0) − αρ−F (m)

λ(α − 1)A(α − 1,−φ)(1 + o(1)).

Hence, by virtue of (1.5.31), (1.5.33) and the equality V I(0)−W I(0) = Eξ = 0,one has

f′(λ) = − αF (m)λ(α − 1)

[ρ+A(α − 1, φ) + ρ−A(α − 1,−φ)

](1 + o(1)).

Now let us return to the relation (1.5.30). Since

|λ|∫0

u−1 F (u−1) du ∼ α−1F (|λ|−1) = α−1F (m)

(see Theorem 1.1.4(iv)), we obtain, again using (1.5.24) and an argument like that

Page 99: Asymptotic analysis of random walks

68 Preliminaries

employed in the proof for the case α < 1, that

f(λ) − 1 = − 1α − 1

F (m)[ρ+A(α − 1, φ) + ρ−A(α − 1,−φ)

](1 + o(1))

= −Γ(2 − α)α − 1

F (m)[ρ+

(cos

(2 − α)π2

+ iφ sin(2 − α)π

2

)+ ρ−

(cos

(2 − α)π2

− iφ sin(2 − α)π

2

)](1 + o(1))

=Γ(2 − α)

α − 1F (m)

(cos

απ

2− iφρ sin

απ

2

)(1 + o(1))

= F (m)B(α, ρ, φ)(1 + o(1)). (1.5.34)

We arrive once more at the relation (1.5.25) which, by virtue of (1.5.26), impliesthe assertion of the theorem in the case α ∈ (1, 2).

(ii) The calculations in the case α = 1 are somewhat more intricate. We willagain follow the relation (1.5.16), according to which

f(λ) = 1 + I+(λ) + I−(λ). (1.5.35)

Rewrite the relation (1.5.18) for I+(λ) as

I+(t) = iφ

∞∫0

eiφyV (my) dy

= iφ

∞∫0

V (my) cos y dy −∞∫0

V (my) sin y dy. (1.5.36)

Here the first integral on the right-hand side can be represented as the sum of twointegrals,

1∫0

V (my) dy +

∞∫0

g(y)V (my) dy, (1.5.37)

where

g(y) ={

cos y − 1 if y � 1,

cos y if y > 1.(1.5.38)

Note (see e.g. integral 3.782 of [134]) that

−∞∫0

g(y)y−1dy = C ≈ 0.5772 (1.5.39)

is the Euler constant. Since V (ym)/V (m) → y−1 as m → ∞, in a similar way

Page 100: Asymptotic analysis of random walks

1.5 Convergence of distributions of sums of r.v.’s to stable laws 69

to before we obtain for the second integral in (1.5.37) the relation

∞∫0

g(y)V (my) dy ∼ −CV (m). (1.5.40)

Now consider the first integral in (1.5.37),

1∫0

V (my) dy = m−1

m∫0

V (u) du = m−1VI(m), (1.5.41)

where

VI(t) =

t∫0

V (u) du (1.5.42)

can easily be seen to be an s.v.f. in the case α = 1 (see Theorem 1.1.4(iv)) and,moreover, if E|ξ| = ∞ then VI(t) → ∞ as t → ∞, whereas if E|ξ| < ∞ thenVI(t) → VI(∞) < ∞.

Thus, for the first term on the right-hand side of (1.5.36) we have

Im I+(λ) = φ(−CV (m) + m−1VI(m)) + o(V (m)). (1.5.43)

Next we will clarify the character of the dependence of VI(vt) on v when t → ∞.

For any fixed v > 0,

VI(vt) = VI(t) +

vt∫t

V (u) du = VI(t) + tV (t)

v∫1

V (yt)V (t)

dy.

By Theorem 1.1.3 one has

v∫1

V (yt)V (t)

dy ∼v∫

1

dy

y= ln v,

so that

VI(vt) = VI(t) + (1 + o(1)) tV (t) ln v =: AV (v, t) + tV (t) ln v, (1.5.44)

where clearly

AV (v, t) = VI(t) + o(tV (t)) as t → ∞; (1.5.45)

evidently VI(t) � tV (t) by Theorem 1.1.4(iv).Hence, for λ = μ/b(n) (so that m = b(n)/|μ| and therefore V (m) ∼ ρ+|μ|/n)

we obtain from (1.5.43), (1.5.44) (in which one has to put t = b(n), v = 1/|μ|)

Page 101: Asymptotic analysis of random walks

70 Preliminaries

that the following representation is valid as n → ∞:

Im I+(λ) = −Cρ+μ

n+

μ

b(n)

[AV (|μ|−1, b(n)) − ρ+μ

nln |μ|

]+ o(n−1)

b(n)AV (|μ|−1, b(n)) − ρ+μ

n(C + ln |μ|) + o(n−1). (1.5.46)

For the second term on the right-hand side of (1.5.36) we have

Re I+(λ) = −∞∫0

V (my) sin y dy ∼ −V (m)

∞∫0

y−1 sin y dy.

Because sin y ∼ y as y → 0, the last integral converges. Since Γ(γ) ∼ 1/γ asγ → 0, the value of this integral can be found to be (see (1.5.20) and (1.5.24))

limγ→0

Γ(γ) sinγπ

2=

π

2. (1.5.47)

Thus, for λ = μ/b(n),

Re I+(λ) = −π|μ|2n

+ o(n−1). (1.5.48)

In a similar way we can find an asymptotic representation for the integral I−(λ)(see (1.5.16)–(1.5.22)):

I−(λ) = −iφ

∞∫0

W (my)e−iφydy

= −iφ

∞∫0

W (my) cos y dy −∞∫0

W (my) sin y dy. (1.5.49)

Comparing this with (1.5.36) and the subsequent computation of I+(λ), we canimmediately write that, for λ = μ/b(n) (cf. (1.5.46), (1.5.48)),

Im I−(λ) = −−μAW (|μ|−1, b(n))b(n)

+ρ−μ

n(C + ln |μ|) + o

( 1n

),

Re I−(λ) = −π|μ|ρ−2n

+ o(n−1).

(1.5.50)

Thus we obtain from (1.5.35), (1.5.46), (1.5.48) and (1.5.50) that

f

b(n)

)− 1 = −π|μ|

n− iρμ

n(C + ln |μ|)

+iμ

b(n)[AV (|μ|−1, b(n)) − AW (|μ|−1, b(n))

]+ o(n−1).

From (1.5.45) it follows that the second to last term here is equal to

b(n)[VI(b(n)) − WI(b(n))

]+ o(n−1),

Page 102: Asymptotic analysis of random walks

1.5 Convergence of distributions of sums of r.v.’s to stable laws 71

so that finally

f

b(n)

)− 1 = −π|μ|

2n− iρμ

nln |μ| + iμ

An

n+ o(n−1), (1.5.51)

where

An =n

b(n)[VI(b(n)) − WI(b(n))

]− ρC.

Therefore, cf. (1.5.14), (1.5.15), we obtain

fζn−An(μ) = exp

{−iμAn

}f n

b(n)

)= exp

{−iμAn + n ln

[1 +

(f

b(n)

)− 1

)]}= exp

{−iμAn + n

[f

b(n)

)− 1

]+ nO

(∣∣∣∣f( μ

b(n)

)− 1

∣∣∣∣2)}.

When α = 1 the functions VI and WI are slowly varying, by Theorem 1.1.4(iv),so by virtue of (1.5.51) we get

n

∣∣∣∣f( μ

b(n)

)− 1

∣∣∣∣2 � c

(1n

+A2

n

n

)� c1

(1n

+1

b(n)[VI(b(n))2 + WI(b(n))2

])→ 0.

Since clearly

−iμAn + n

(f

b(n)

)− 1

)→ −π|μ|

2− iρμ ln |μ|,

we have

fζn−An(μ) → exp

{−π|μ|

2− iρμ ln |μ|

},

and so the relation (1.5.10) is proved. The assertions about the centring sequence{An} that follow (1.5.10) are obvious when one takes into account (1.5.3) andTheorem 1.1.4(iv).

(iii) It remains to consider the case α = 2. We will follow the representa-tions (1.5.30)–(1.5.32), according to which we have to find the asymptotics (asm = 1/|λ| → ∞) of

f′(λ) = I(1)+ (λ) + I

(1)− (λ), (1.5.52)

where

I(1)+ (λ) = iV I(0) − λ

∞∫0

V (t) eiλtdt = iV I(0) − φ

∞∫0

V (my) eiφydy (1.5.53)

Page 103: Asymptotic analysis of random walks

72 Preliminaries

and, by Theorem 1.1.4(iv),

V I(t) =

∞∫t

V (y) dy ∼ tV (t), V (t) = tV (t) + V I(t) ∼ 2tV (t) (1.5.54)

as t → ∞. Further,∞∫0

V (my) eiφydy =

∞∫0

V (my) cos y dy + φ

∞∫0

V (my) sin y dy. (1.5.55)

Here the second integral on the right-hand side of (1.5.55) is asymptotically equiv-alent (as m → ∞, see (1.5.47)) to

V (m)

∞∫0

y−1 sin y dy =π

2V (m).

The first integral on the right-hand side of (1.5.55) equals

1∫0

V (my) dy +

∞∫0

g(y)V (my) dy,

where the function g(y) was defined in (1.5.38) and

1∫0

V (my) dy =1m

m∫0

V (u) du =1m

VI(m);

VI(t) :=t∫0

V (u) du is an s.v.f. according to (1.5.54). Since

t∫0

uV (u) du =t2V (t)

2− 1

2

t∫0

u2dV (u),

t∫0

V I(u) du = tV I(t) +

t∫0

uV (u) du,

V I(t) ∼ tV (t), we obtain

VI(t) =

t∫0

(uV (u) + V I(u)) du = tV I(t) + t2V (t) −t∫

0

u2dV (u)

= −t∫

0

u2dV (u) + O(t2V (t)), (1.5.56)

Page 104: Asymptotic analysis of random walks

1.5 Convergence of distributions of sums of r.v.’s to stable laws 73

where the last term is negligibly small because, owing to Theorem 1.1.4(iv),

t∫0

uV (u) du � t2V (t).

It is also clear that, as t → ∞,

VI(t) → VI(∞) = E[ξ2; ξ > 0

] ∈ (0,∞].

As a result we obtain, cf. (1.5.40), that

I(1)+ (λ) = iV I(0) − iπ

2V (m) − λVI(m) + φCV (m) + o(V (m))

= iV I(0) − λVI(m)(1 + o(1)),

since VI(t) � tV (t).In the same manner we find that

I(1)− (λ) = −iW I(0) − λWI(m)(1 + o(1)),

where WI is an s.v.f. and is obtained from the function W in the same way as VI

was from V . Since V I(0) = W I(0), we see now that the relation (1.5.52) leadsto

f′(λ) = −λ[VI(m) + WI(m)

](1 + o(1)).

Hence from (1.5.30) we get the representation

f(λ) − 1 = φ

1/m∫0

f′(φu) du = −1/m∫0

u[VI(1/u) + WI(1/u)

]du

∼ − 12m2

[VI(m) + WI(m)

] ∼ − 12m2

E[ξ2; −m � ξ < m

]owing to (1.5.56) and a similar relation for WI . Turning now to the definition ofthe function Y (t) = t−2LY (t) in (1.5.6) and putting

b(n) := Y (−1)(1/n), λ := μ/b(n),

we obtain

n(f(λ) − 1) ∼ −n

2Y

(b(n)|μ|

)∼ −nμ2

2Y (b(n)) → −μ2

2.

The theorem is proved.

With regard to the role of the parameters α and ρ we will note the following.The parameter α characterizes the decay rate of the functions

Fα,ρ,−(t) := Fα,ρ((−∞,−t)) and Fα,ρ,+(t) := Fα,ρ([t,∞))

Page 105: Asymptotic analysis of random walks

74 Preliminaries

as t → ∞. The following assertion follows from the results of Chapters 2 and 3:If condition [Rα,ρ] is met, α �= 1, α < 2, and ρ+ > 0 then

P(

Sn

b(n)� v

)∼ nV (vb(n)) as v → ∞.

Therefore, if v → ∞ slowly enough then, owing to the properties of s.v.f.’s,

P(

Sn

b(n)� v

)∼ v−αnV (b(n)) ∼ ρ+v−αnF (b(n)) ∼ ρ+v−α.

However, by virtue of Theorem 1.5.1, if v → ∞ slowly enough then

P(

Sn

b(n)� v

)∼ Fα,ρ,+(v). (1.5.57)

From this it follows that, for ρ+ > 0,

Fα,ρ,+(v) ∼ ρ+v−α as v → ∞. (1.5.58)

One can easily obtain a similar relation for the left tails as well: for ρ− > 0,

Fα,ρ,−(v) ∼ ρ−v−α as v → ∞. (1.5.59)

Note that, for ξ ⊂=Fα,ρ, the asymptotic relation (1.5.57) turns into an exactequality if in it one replaces b(n) by bn := n1/α:

P(

Sn

bn� v

)= Fα,ρ,+(v). (1.5.60)

This follows from the observation that[f(α,ρ)(λ/bn)

]n coincides with f(α,ρ)(λ)(see (1.5.8)) and therefore the distribution of the scaled sum Sn/bn coincides withthat of ξ.

The parameter ρ assuming values from the closed interval [−1, 1] is a measureof asymmetry of the distribution Fα,ρ. If, for instance, ρ = 1 (ρ− = 0) thenfor α < 1 the distribution Fα,1 will be concentrated on the right half-axis. Thisis seen from the fact that in this case the distribution Fα,1 could be consideredas the limiting law for the scaled sums of i.i.d. r.v.’s ξk � 0 (with F−(0) = 0).Since the distributions of such sums will all be concentrated on the right half-axis, the limiting law must also have this property. Similarly, for ρ = −1, α < 1,the distribution Fα,−1 is concentrated on the left half-axis. In the case ρ = 0(ρ+ = ρ− = 1/2) the ch.f. of the distribution Fα,0 will be real-valued, and thedistribution Fα,0 itself will be symmetric.

As we saw, the ch.f.’s f(α,ρ)(λ) of the stable laws Fα,ρ admit closed-form ex-pressions. It is obvious that they all are absolutely integrable on R, and the sameapplies to the functions λkf(α,ρ)(λ), k � 1. Therefore, all stable distributionshave densities that are infinitely many times differentiable (see e.g. Chapter XVof [122]). As to the explicit formulae for these densities, they are only knownfor a few laws. To them belong, first of all, the normal law F2,ρ and the Cauchydistribution F1,0, with density 2/(π2 + 4t2), −∞ < t < ∞.

Page 106: Asymptotic analysis of random walks

1.6 Functional limit theorems 75

An explicit form for another stable distribution can be obtained from a closed-form expression for the distribution of the maximum of the Wiener process. Thisis the distribution F1/2,1 with parameters 1/2, 1 and density (up to a scale trans-form, cf. (1.5.58))

1√2π t3/2

e−1/2t, t > 0

(this is the density of the first passage time of level 1 by the standard Wienerprocess; see e.g. § 2, Chapter 18 of [49]).

1.6 Functional limit theorems

1.6.1 Preliminary remarks

In this section, we will be interested in obtaining conditions ensuring the conver-gence of random processes ζn(t) generated by the partial sums Sk =

∑kj=1 ξj of

i.i.d. r.v.’s ξj . For example, we could consider

ζn(t) :=S�nt�b(n)

, t ∈ [0, 1], (1.6.1)

where � · denotes the integral part and b(n) is a scaling factor.To state the respective assertions, we will need two functional spaces, the space

C = C(0, 1) of continuous functions g(t) on [0, 1] and the space D = D(0, 1)of functions g(t), t ∈ [0, 1], without discontinuities of the second kind, g(+0) =g(0), g(1 − 0) = g(1), which can be assumed, for definiteness, to be right-continuous: g(t + 0) = g(t), t ∈ [0, 1). We will suppose that the spaces C and D

are endowed with the respective metrics

ρC(g1, g2) := supt∈[0,1]

|g1(t) − g2(t)|

and

ρD(g1, g2) := infλ

{sup

t|g1(t) − g2(λ(t))| + sup

t|t − λ(t)|

},

where the infimum is taken over all continuous monotone functions λ(t) such thatλ(0) = 0, λ(1) = 1 (the Skorokhod metric).

Consider further the measurable functional spaces 〈H,BH〉, where H can beeither C or D and BH is the σ-algebra generated by cylindric sets or the Borelσ-algebra generated by the metric ρH (for H = C or D these two σ-algebrascoincide with each other; see e.g. §§ 1.3 and 3.14 of [28]).

Denote by FH the class of functionals f on D possessing the following prop-erties:

(1) f is BD-measurable;(2) f is continuous in the metric ρH at points belonging to H .

Now consider a sequence of processes {ζn(·); n � 1} defined on 〈D,BD〉.

Page 107: Asymptotic analysis of random walks

76 Preliminaries

We will say that the processes ζn(·) H-converge as n → ∞ to a process ζ(·)given in 〈H,BH〉 if, for any functional f ∈ FH , one has

f(ζn) ⇒ f(ζ)

(the symbol ⇒ denotes, as usual, weak convergence of the respective distribu-tions).

If H = C and

P(ζn(·) ∈ C) = 1, n � 1, (1.6.2)

then the C-convergence turns into the conventional weak convergence of distri-butions in the space 〈C,BC〉. The requirement (1.6.2), however, does not needto be satisfied – as it does, for example, for the process defined in (1.6.1). In thiscase, to meet condition (1.6.2) one should construct a continuous polygon insteadof (1.6.1).

If H = D then the D-convergence is the weak convergence of distributionsin 〈D,BD〉.

Since the class FC is wider than FD, the C-convergence is clearly strongerthan the D-convergence.

1.6.2 Invariance principle

Let the r.v.’s ξj have zero mean and a finite variance Eξ2j = d < ∞.

Denote by w(·) the standard Wiener process, i.e. a process with homogeneousindependent increments such that for t, u � 0 the increment w(t + u)−w(t) hasthe normal distribution with parameters (0, u). Let, as before, ζn(·) be a randompolygon, say, of the form (1.6.1) with b(n) =

√nd.

Theorem 1.6.1. Under the above assumptions, the sequence of the processesζn(·) defined in (1.6.1) C-converges to the process w(·) as n → ∞.

In the case when one takes the ζn(·) to be continuous polygons, this theoremon the weak convergence in 〈C,FC〉 of the distributions of ζn to that of w isknown as the invariance principle. Since the functional f(g) := maxt∈[0,1] g(t)is ρC-continuous and also ρD-continuous, Theorem 1.6.1 implies, in particular,the following assertion on the distribution of Sn := maxk�n Sk (for simplicitywe put d = 1): for any x, as n → ∞,

P(n−1/2Sn � x) = P(f(ζn) � x) → P(

maxt∈[0,1]

w(t) � x

)= 2P(w(1) � x) = 2(1 − Φ(x)), (1.6.3)

where Φ(x) is the standard normal distribution function. One can similarly obtaina closed-form expression for the limiting distribution for n−1/2 maxk�n |Sk|.

It will follow from these relations and the bounds for P(Sn � x) to be obtainedin Chapter 4 that, in the case when Eξ2 < ∞ and E[max{0, ξ}]α < ∞ for some

Page 108: Asymptotic analysis of random walks

1.6 Functional limit theorems 77

α > 2, along with the convergence of distributions (1.6.3) as n → ∞ one alsohas the convergence of the moments

E(

Sn√n

)v

→ E(w(1))v (1.6.4)

for any v < α, where w(t) := maxu�t w(u). Similar relations hold for themaxima maxk�n |Sk/

√n| when E|ξ|α < ∞, α > 2.

The assertion (1.6.4) can be strengthened somewhat. Let F+(t) = P(ξ � t) �V (t), where V (t) is an r.v.f, and let g(t) be a continuous increasing function on[0,∞) such that

−∫

g(t) dV (t) < ∞.

Then, as n → ∞,

Eg

(Sn√

n

)→ Eg(w(1)). (1.6.5)

One can also obtain from Theorem 1.6.1 the following fact extending (1.6.3).Let h+(t) > h−(t) be two functions from D. Then, under the assumptions wehave made,

P(

h−(

k

n

)<

Sk√n

< h+

(k

n

); 1 � k � n

)

− P(h−(t) < w(t) < h+(t); t ∈ [0, 1]

)→ 0 as n → ∞ (1.6.6)

(Kolmogorov’s theorem [170, 172]).When addressing the problem of convergence rates in the invariance principle,

the question how to measure these immediately arises. A natural way here isto study the decay rate (in n) of the Prokhorov distance [230, 74] between thedistributions Pζn

and Pw of the processes ζn(·) and w(·) respectively in C(0, 1).The Prokhorov distance between the distributions P and Q in 〈C,BC〉 is definedas follows:

ρ(P,Q) := inf{ε : ρ(P,Q, ε) � ε},where

ρ(P,Q, ε) := supB∈BC

|P(B) − Q(Bε)|

and Bε is the ε-neighbourhood of B:

Bε :={

g ∈ C : infh∈B

ρC(g, h) < ε}

.

The following assertion holds true for the distance

ρn := ρ(Pζn ,Pw)

(see also the survey and bibliography of [74]).

Page 109: Asymptotic analysis of random walks

78 Preliminaries

Theorem 1.6.2. Let Eξ = 0, Eξ2 = 1.

(i) If E|ξ|α < ∞ for some α > 2 then, as n → ∞,

ρn = o(n−(α−2)/[2(α+1)]

).

(ii) If Eeλ|ξ|β < ∞ for some 0 < β � 1 and λ > 0 then, as n → ∞,

ρn = o(n−1/2(lnn)1/β

).

The rate of the convergence Pζn(B) → Pw(B) for sets B of a special form

(for instance, for the sets appearing in the boundary problems (1.6.6)) admits asharper bound (see [202, 244] and also the survey [44]):

Theorem 1.6.3. If Eξ = 0, Eξ2 = 1, E|ξ|3 < ∞ and the functions h±(t) satisfythe Lipschitz condition that, for an h′ < ∞,

|h±(t + u) − h±(t)| � h′u, 0 � t � t + u � 1,

then the absolute value of the difference on the left-hand side of (1.6.6) does notexceed ch′E|ξ|3/√n.

Recall that, under the conditions of Theorem 1.6.3 on the distribution of ther.v. ξ, the Berry–Esseen theorem states that

supx

∣∣∣∣P( Sn√n

< x

)− Φ(x)

∣∣∣∣ <cE|ξ|3√

n,

where c is an absolute constant (see e.g. § 8.5 of [49]).

1.6.3 A functional limit theorem on convergence to stable laws

Now consider the case where Eξ2 = ∞ and the distribution of ξ satisfies theregularity conditions [Rα,ρ] of § 1.5. Put b(n) = F (−1)(1/n) when α < 2(see (1.5.4)); when α = 2 the function b(n) is defined by the relation (1.5.5).When E|ξ| < ∞ we assume that Eξ = 0. Then, by virtue of Theorem 1.5.1, inthe case α ∈ (0, 2], α �= 1, we have the following convergence for ζn = Sn/b(n)as n → ∞:

ζn ⇒ ζ(α,ρ),

where ζ(α,ρ) is an r.v. following the stable distribution Fα,ρ with ch.f. (1.5.8). Inthe case α = 1 one needs, generally speaking, centring as well:

ζn − An ⇒ ζ(1,ρ)

(see (1.5.10), (1.5.11)).Now consider in 〈D,BD〉 a right-continuous process ζ(α,ρ)(·) with homo-

geneous independent increments such that ζ(α,ρ)(1) has the distribution Fα,ρ.As before, let ζn(·) be defined by (1.6.1).

Page 110: Asymptotic analysis of random walks

1.6 Functional limit theorems 79

Theorem 1.6.4. Under condition [Rα,ρ] (i.e. the condition of Theorem 1.5.1)with α �= 1 the processes ζn(·) D-converge to the process ζ(α,ρ)(·).

Similarly to the preceding subsection, Theorem 1.6.4 implies the convergenceof the distributions of Sn/b(n) and maxk�n |Sk|/b(n) to those of the maxima

ζ(α,ρ)

(1) := maxt�1 ζ(α,ρ)(t) and maxt�1 |ζ(α,ρ)(t)| respectively. The rela-tion (1.6.6) holds, too, if in it one replaces

√n and w(t) by b(n) and ζ(t) re-

spectively.Also, it follows from the bounds of Chapter 3 for P(Sn � x) that for v < α,

as n → ∞,

E(

Sn

b(n)

)v

→ E(ζ(α,ρ)

(1))v

.

A similar relation holds true for the moments E(maxk�n

|Sk|/b(n))v

as well. One

also has an assertion similar to (1.6.5).The proof of Theorem 1.6.4 in the more general case of non-identically dis-

tributed r.v.’s ξi in the triangular array scheme is presented in Chapter 12.

1.6.4 The law of the iterated logarithm

The invariance principle is closely related to the following result, which charac-terizes the a.s. magnitude of oscillations in a random walk with zero drift.

Theorem 1.6.5. Let Eξ = 0, d = Eξ2 � ∞. Then

P(

lim supn→∞

Sn√2n ln lnn

=√

d

)= 1.

Proof. In the case d < ∞ a proof can be found in e.g. [49, 122] and in the cased = ∞ in [261].

When the scaled sums Sn of i.i.d. r.v.’s converge to a non-normal stable law, bythe ‘law of the iterated logarithm’ one means the following assertion, which is ofa somewhat different character.

Theorem 1.6.6. If, as n → ∞, one has ζn ⇒ ζ(α,ρ), α < 2, then

P(

lim supn→∞

|ζn|1/ln ln n = e1/α

)= 1.

In Chapter 3 we will present some extensions of this result.

Page 111: Asymptotic analysis of random walks

2

Random walks with jumps having no finite firstmoment

2.1 Introduction. The main approach to bounding from above the

distribution tails of the maxima of sums of random variables

2.1.1 Introduction

Let ξ, ξ1, ξ2, . . . be i.i.d. r.v.’s with a common distribution F. Put S0 := 0 andconsider the r.v.’s

Sn :=n∑

j=1

ξj , Sn(a) := maxk�n

(Sk − ak), a ∈ R, Sn := Sn(0),

and the events

Bj(v) := {ξj < y + vg(j)}, B(v) :=n⋂

j=1

Bj(v), v � 0, (2.1.1)

where the choice of the function g will depend on the distribution F.One of our main goals will be to obtain bounds for probabilities of the form

P(Sn � x), P(Sn(a) � x) and P(Sn(a) � x; B(v)) (2.1.2)

as x → ∞. Note that the probabilities P(Sn(a) � x; B(v)) will play an impor-tant role when we come to find the exact asymptotics of P(Sn(a) � x).

As for the distribution of ξj , it will be assumed in Chapters 2–4 that its tails

F−(t) = F((−∞,−t]) = P(ξj < −t),

F+(t) = F([t,∞)) = P(ξj � t),t > 0,

are majorated or minorated by r.v.f.’s (at infinity).Majorants (or minorants) for the right tails F+(t) will be denoted by V (t) and

for the left tails F−(t) by W (t). In addition, in the case where V (t) (W (t)) is anr.v.f., we will be using for the respective index and s.v.f. the notation α and L(t)(β and LW (t) respectively):

V (t) = t−αL(t), α > 0, (2.1.3)

W (t) = t−βLW (t), β > 0. (2.1.4)

80

Page 112: Asymptotic analysis of random walks

2.1 Introduction. Main approach to bounding distribution tails 81

Without loss of generality, we will assume that the functions V and W are mono-tone, with V (0) � 1 and W (0) � 1.

In what follows, we will often use the following conditions on the asymptoticbehaviour of the distribution tails under consideration:

[ · , <] F+(t) � V (t), t > 0,

[ · , >] F+(t) � V (t), t > 0,

[<, · ] F−(t) � W (t), t > 0,

[>, · ] F−(t) � W (t), t > 0,

where V (t) and W (t) are of the forms (2.1.3) and (2.1.4) respectively.

When studying the exact asymptotic behaviour of the probabilities P(Sn � x)and P(Sn(a) � x), we will also be using the condition of regular variation of thetails:

[ · , =] F+(t) = V (t), t > 0,

which is the intersection of the conditions [ · , <] and [ · , >] (with a commonfunction V (t)), so that one could write:

[ · , =] = [ · , <] ∩ [ · , >].

Recall that we agreed to denote the class of distributions satisfying condition[ · , =] by R (see p. 11).

In a similar way, for a condition of the form F−(t) = W (t), t > 0, we willuse the notation [=, · ] and will be considering the following intersections ofconditions already introduced:

[=, =] = [ · , =] ∩ [=, · ],[<, <] = [<, · ] ∩ [ · , <],

[>, <] = [>, · ] ∩ [ · , <],

[<, >] = [<, · ] ∩ [ · , >].

Since we will mainly be studying the probabilities of large deviations on thepositive half-axis [0,∞), the main parameter according to which the classificationof different cases will be carried out will be the index α of the function V (t)in (2.1.3). We will often refer to it as the ‘right index’ (more precisely, the indexof the right tail of the distribution of the r.v. ξ, or of its majorant or minorant).

2.1.2 On the main approach to bounding from above the distributions of Sn

In this subsection, we present a general approach to deriving upper bounds forprobabilities of the form (2.1.2). Such bounds, to be obtained in Chapters 2–5,are essentially different from one another, but also have a lot in common. Inparticular, they are derived using the same type of inequalities for truncated r.v.’s.

Page 113: Asymptotic analysis of random walks

82 Random walks with jumps having no finite first moment

The arguments proving these inequalities follow a common scheme (see also [201,40]) which can described as follows.

Suppose for the time being that the r.v. ξ satisfies Cramer’s condition, i.e. thatfor some μ > 0 we have

ϕ(μ) := Eeμξ < ∞.

We will need the following relation, which could be referred to as a Kolmogorov–Doob-type inequality.

Lemma 2.1.1. For all n � 1, x � 0 and μ � 0 one has

P(Sn � x) � e−μx max{1, ϕn(μ)}. (2.1.5)

Proof. Since η(x) := inf{k � 1 : Sk � x} � ∞ is a Markov time the event{η(x) = k} and the r.v. Sn − Sk are independent of each other, so that

ϕn(μ) = EeμSn �n∑

k=1

E[eμSn ; η(x) = k

]�

n∑k=1

E[eμ(x+Sn−Sk); η(x) = k

]= eμx

n∑k=1

ϕn−k(μ)P(η(x) = k

)� eμx min{1, ϕn(μ)}P(Sn � x),

whence one immediately obtains (2.1.5). The lemma is proved.

If ϕ(μ) � 1 for μ � 0 (which is always the case when Eξ � 0 exists) then theright-hand side of (2.1.5) will be equal to e−μxϕn(μ), while the inequality (2.1.5)itself can be obtained as a consequence of the well-known Doob inequality forsubmartingales (see e.g. § 3, Chapter 14 of [49]).

Now return to the case of ‘heavy’ tails, when Cramer’s condition is not satis-fied. Consider ‘truncated’ r.v.’s whose distribution coincides with the conditionallaw of ξ given that ξ < y for some cut-off level y, the choice of which is atour disposal. Namely, introduce i.i.d. r.v.’s ξ

〈y〉j , j = 1, 2, . . . , with distribution

function

P(ξ〈y〉j < t

):= P(ξ < t| ξ < y) =

P(ξ < t)P(ξ < y)

, t � y,

and put

S〈y〉n :=

n∑j=1

ξ〈y〉j , S〈y〉

n := maxk�n

S〈y〉k .

Using the notation B(v) from (2.1.1), we have

P(Sn � x) � P(B(0)

)+ P

(Sn � x;B(0)

)� nF+(y) + P, (2.1.6)

where

P := P(Sn � x; B(0)

)=(P(ξ < y)

)nP(S 〈y〉

n � x). (2.1.7)

Page 114: Asymptotic analysis of random walks

2.1 Introduction. Main approach to bounding distribution tails 83

Applying Lemma 2.1.1 to the r.v.’s ξ〈y〉j we obtain that, for any μ � 0,

P(S 〈y〉n � x) � e−μx

[max

{1,Eeμ ξ〈y〉}]n

.

Since

Eeμξ〈y〉=

R(μ, y)P(ξ < y)

, where R(μ, y) :=

y∫∞

eμtF(dt),

we arrive at the following inequality, which will form the basis of many subse-quent considerations.

The basic inequality. For x, y, μ � 0,

P ≡ P(Sn � x; B(0)

)� e−μx

[max{P(ξ < y), R(μ, y)}]n

� e−μx max{1, Rn(μ, y)}. (2.1.8)

The problem of bounding the probability P , and hence also the probabilityP(Sn � x) by virtue of (2.1.6), reduces therefore to obtaining bounds for theintegral R(μ, y). These bounds will be different, depending on the value of theindex α and the ‘thickness’ of the left tail F−(t).

In Chapters 2–5 we will be choosing the cut-off level y present in (2.1.8) insuch a way that y � x and the ratio

r =x

y� 1

is bounded. Thus the growth rates of x and y as y → ∞ will be the same (up to abounded factor).

In Theorem 1.1.4(v) we introduced the function

σ(n) := V (−1)(1/n), n > 0, (2.1.9)

where V (−1) denotes the generalized inverse function for V :

V (−1)(u) := inf{v : V (v) � u}.

The deviations x = σ(n), like the deviations b(n) ∼ ρ−1/α+ σ(n) used in § 1.5,

play the important role of the ‘standard deviation’ for Sn. As seen from the def-inition, they are characterized by the fact that, under the assumption F+ ≡ V ,events of the form {ξj � σ(n)} on average occur once during a time intervalof length n. As shown in Theorem 1.1.4(v) (see (1.1.20)), under the assumptionthat (2.1.3) holds one has

σ(n) = n1/αLσ(n), where Lσ is an s.v.f. (2.1.10)

Deviations of magnitude x = sσ(n) with s → ∞ are naturally referred to as

Page 115: Asymptotic analysis of random walks

84 Random walks with jumps having no finite first moment

large deviations for Sn (or, more generally, for the whole trajectory S1, . . . , Sn

of the random walk). Clearly, along with the quantity

s := s(x, n) =x

σ(n),

such deviations can be equivalently characterized by the quantity

Π := Π(x, n) = nV (x),

i.e. the mean number of occurrences of the events {ξj � x}, j = 1, . . . , n. Manybounds in the subsequent exposition will be given in terms of this quantity.

The quantities s and Π are connected to each other by the following relations.Suppose that (2.1.3) holds for V . If s is fixed then, by virtue of the properties ofr.v.f.’s, one has for x = sσ(n)

Π = nV(sσ(n)

)→ s−α as n → ∞.

Moreover, by Theorem 1.1.4(ii) we have

s−α−δ < Π < s−α+δ

for any δ > 0 and all sufficiently large s. Similar ‘inverse’ inequalities hold truefor s.

Along with the quantities Π and s, we will sometimes also use the functions

Π(y) := Π(y, n) = nV (y) and s(y) := s(y, n) with y = x/r, r � 1,

so that Π = Π(x) < Π(y) ∼ rαΠ, s(y) = s/r.We stress that it will always be assumed in what follows that

1 � r =x

y� c < ∞, Π(y) < 1 (s(y) > 1). (2.1.11)

In concluding this section we will note that, in the cases where α < 1 or β < 1(under conditions [ · , <] or [>, · ] respectively), the bounds for the probabili-ties (2.1.2) could substantially differ from one another, depending on the relative‘thicknesses’ of the left and right distribution tails of ξ. In this connection, wewill single out the following two possibilities:

(1) α < 1, the tail F−(t), t > 0, is arbitrary;

(2) β < 1, the tail F−(t) is substantially ‘heavier’ than F+(t), t > 0.

The bounds in the first case are essentially bounds for the sums Sn when ther.v.’s ξj are non-negative (F−(0) = 0).

2.2 Upper bounds for the distribution of the maximum of sums when α � 1and the left tail is arbitrary

As we have already noted, the considerations in this and many subsequent sectionswill be based on bounds for the probability P = P(Sn � x; B), (2.1.7), the maintool used for their derivation being the basic inequality (2.1.8).

Page 116: Asymptotic analysis of random walks

2.2 Upper bounds when α � 1 and the left tail is arbitrary 85

Theorem 2.2.1. Let condition [ · , <] be satisfied with α < 1 and let r = x/y � 1be bounded. Then there exists a constant c < ∞ such that for all n � 1 and x > 0one has the following inequality for the probability P defined in (2.1.7):

P � cΠ(y)r, r = x/y, Π(y) = nV (y). (2.2.1)

The constant c in (2.2.1) can be replaced by the expression (e/r)r + ε(Π(y)),where ε(·) is a bounded function such that ε(v) ↓ 0 as v ↓ 0.

The notation ε(v) (with or without indices) will be used in what follows forfunctions converging to zero (either as v ↓ 0 or as v → ∞, depending on thecircumstances).

Remark 2.2.2. Observe that the bound (2.2.1) is of a universal character: up tominor modifications, it is applicable to all the types of random walks with ‘heavytails’ that are discussed in detail in the present monograph (namely, for those withF ∈ R or F ∈ Se). The bound is rather sharp and admits the following graphicalinterpretation. From the subsequent exposition in Chapters 2–5 it will be seenthat, for these classes of random walks, the main contribution to the probabilityof a large deviation Sn � x (or Sn � x) comes from trajectories having a singlelarge jump of the order of magnitude of x, so that the asymptotics of P(Sn � x)have the form

P( n⋃

j=1

{ξj � x})

∼ nV (x).

If, however, none of the jumps ξj exceeds y = x/r, r > 1 (as is the case forour event {Sn � x; B}), then reaching the level x by one jump is impossi-ble – to cross the level, there need to be at least r ‘large’ jumps (assume forsimplicity that r > 1 is an integer). Since the probability of a jump of a sizecomparable with y is of order of V (y), the probability of having r jumps of thatsize among n independent summands will be of order

(nV (y)

)r. This is, up

to a constant factor, the right-hand side of the bound (2.2.1) for the probabilityP(Sn � x, all jumps < y).

For the case α = 1 we will state a separate assertion. Here, along with V (x)and s = x/σ(n), we will also need some additional characteristics. Recall that,according to Theorem 1.1.4(iv), when α = 1,

L1(x) :=1

xV (x)

x∫0

V (u) du =VI(x)xV (x)

→ ∞ as x → ∞

is an s.v.f. For δ > 0 put

Π(δ)(y) := Π(y)L1+δ1 (x) ∼ Π(y)L1+δ

1 (y). (2.2.2)

It is not hard to see that, under broad conditions, L1

(σ(n)

) ∼ L1(n).

Page 117: Asymptotic analysis of random walks

86 Random walks with jumps having no finite first moment

Theorem 2.2.3. Let condition [ · , <] be satisfied with α = 1. Then (2.2.1) holdstrue if, for some δ > 0 and c1 < ∞, one has Π(δ)(y) � c1 or Π(δ)(x) � c1. Forthe last inequality to hold, it suffices that

s = s(x, n) > L1

(σ(n)

)1+γfor a fixed γ > 0.

The constant c in (2.2.1) can be replaced by (e/r)r + ε(Π(δ)(y)

), where ε(·) is a

bounded function such that ε(v) ↓ 0 as v ↓ 0.

The above-stated bounds enable one to obtain the next important result.

Corollary 2.2.4. Let condition [ · , <] be met. Then the following assertions holdtrue.

(i) If α < 1 then there exists a function ε(v) ↓ 0 as v ↓ 0 such that, for alln � 1,

supx: Π�v

P(Sn � x)Π

� 1 + ε(v), Π = nV (x), (2.2.3)

or, equivalently,

supx: s�t

P(Sn � x)Π

� 1 + ε(1/t), s =x

σ(n).

(ii) If α = 1 then, for any fixed δ > 0,

supx: Π(δ)�v

P(Sn � x)Π

� 1 + ε(v). (2.2.4)

From this corollary it follows, in particular, that in the case α � 1 for any δ > 0one has

lim supn→∞

supx>n1/α+δ

P(Sn � x)Π

� 1. (2.2.5)

Proof of Corollary 2.2.4. By (2.1.6), condition [ · , <] and Theorem 2.2.1 one has

P(Sn � x) �(1 + cΠ(y)r−1

)Π(y). (2.2.6)

Next assume without loss of generality that v < 1 and put r := 1+ | ln v|−1/2, sothat

y =x

1 + | ln v|−1/2.

Clearly, the relations Π � v and v ↓ 0 imply that x → ∞ and r ↓ 1, and hencethat L(y)/L(x) → 1. Therefore, there exists a function ε1(v) ↓ 0 as v ↓ 0 suchthat, for Π � v, we have L(y)/L(x) � 1 + ε1(v) and

V (y)V (x)

= rα L(y)L(x)

� (1 + | ln v|−1/2)α(1 + ε1(v)) =: 1 + ε2(v),

Page 118: Asymptotic analysis of random walks

2.2 Upper bounds when α � 1 and the left tail is arbitrary 87

so that

Π(y) � (1 + ε2(v))Π.

Moreover, when Π � v, one has

Πr−1 � (eln v)1/√

| ln v| = e−√

| ln v| =: ε3(v).

Substituting the above inequalities into (2.2.6), we find that

P(Sn � x) �[1 + c(1 + ε2(v))r−1ε3(v)

](1 + ε2(v))Π =: (1 + ε(v))Π,

where evidently ε(v) → 0. Thus (2.2.3) is established.The assertion (2.2.4) can be proved in exactly the same way, using Theo-

rem 2.2.3. The corollary is proved.

Proof of Theorem 2.2.1. Let M := 2α/μ < y (μ → 0 is to be chosen later). Tomake use of the basic inequality (2.1.8) we have to bound

R(μ, y) =

y∫−∞

eμtF(dt) =

0∫−∞

+

M∫0

+

y∫M

=: I1 + I2 + I3, (2.2.7)

where clearly

I1 � F−(0). (2.2.8)

Further, integration by parts yields

b∫a

eμtF(dt) = −F+(t)eμt

∣∣∣∣ba

+ μ

b∫a

F+(t)eμtdt. (2.2.9)

From this and condition [ · , <] we obtain

I2 � F+(0) − e2αF+(M) + μ

M∫0

V (t)eμtdt,

where the integral increases unboundedly as μ → 0, in the case α < 1, but doesnot exceed

e2α

M∫0

V (t) dt =e2αMV (M)

1 − α(1 + o(1)) � c

μV (1/μ)

by Theorem 1.1.4(iv). Hence we conclude that

I2 � F+(0) + cV (1/μ). (2.2.10)

Note that when α > 1 we obtain only I2 � F+(0) + cμ (see (3.1.26) below), sothe bound (2.2.10) will be invalid in this case.

Page 119: Asymptotic analysis of random walks

88 Random walks with jumps having no finite first moment

Further, again from (2.2.9) and [ · , <],

I3 =

y∫M

eμtF(dt) � V (M)e2α + μ

y∫M

V (t)eμtdt =: V (M)e2α + I03 . (2.2.11)

Now we bound the second term on the right-hand side. In what follows, the valueof μ will be chosen so that

λ = μy → ∞ (i.e. y � 1/μ). (2.2.12)

Using the change of variables (y − t)μ =: u, we obtain

I03 = eμyV (y)

(y−M)μ∫0

V (y − u/μ)V (y)

e−udu. (2.2.13)

Consider the integral on the right-hand side of (2.2.13). Since 1/μ � y, the factorin the integrand

ry,μ(u) :=V (y − u/μ)

V (y)

converges to 1 for any fixed u. In order to apply the dominated convergencetheorem to obtain that the integral on the right-hand side of (2.2.13) tends to

∞∫0

e−udu = 1 (2.2.14)

as y → ∞, we have to estimate the growth rate of the function ry,μ(u) as u

increases.By virtue of the properties of r.v.f.’s (see Theorem 1.1.4(iii)), for all small

enough μ (or sufficiently large M ; recall that y − u/μ � M in the integrandin (2.2.13)) one has

ry,μ(u) �(

1 − u

μy

)−3α/2

=: g(u).

Since g(0) = 1 and μy − u � Mμ = 2α, one has in this range the relation(ln g(u)

)′ =3α

2(μy − u)� 3α

4α=

34,

and therefore ln g(u) � 3u/4, so that ry,μ(u) � e3u/4. This means that theintegrand in (2.2.13) is dominated by the exponential e−u/4 and so the use of thedominated convergence theorem is justified. Hence, owing to the convergence ofthe integral in (2.2.13) to the limit (2.2.14), we obtain

I03 ∼ eμyV (y)

∞∫0

e−udu = eλV (y),

Page 120: Asymptotic analysis of random walks

2.2 Upper bounds when α � 1 and the left tail is arbitrary 89

and it is not hard to find a function ε(λ) ↓ 0 as λ ↑ ∞ such that

I03 � eλV (y)(1 + ε(λ)). (2.2.15)

Summarizing (2.2.8)–(2.2.15), we get

R(μ, y) � 1 + cV (1/μ) + eλV (y)(1 + ε(λ)). (2.2.16)

Therefore, recalling that Π(y) = nV (y), one has

Rn(μ, y) � exp{ncV (1/μ) + Π(y)eλ(1 + ε(λ))

}. (2.2.17)

Now choose μ (or λ) to be a value that ‘almost minimizes’ the expression

−μx + Π(y)eλ = −λr + Π(y)eλ.

Namely, put

λ := lnr

Π(y). (2.2.18)

As we have already noted, if s(y) := s(y, n) = y/σ(n) → ∞ (where σ(n) =V (−1)(1/n)) then

Π(y) < s(y)−α+δ → 0.

Note also that, for such a choice of λ (or μ = y−1 ln(r/Π(y))), when Π(y) → 0one has λ = μy ∼ − ln Π(y) → ∞ and hence the assumption y � 1/μ that wemade above in (2.2.12) is correct.

From (2.1.8), (2.2.17) and (2.2.18) it follows that

lnP � −xμ + cnV (1/μ) + Π(y)eλ(1 + ε(λ)), (2.2.19)

where Π(y)eλ = r and, for any δ > 0 and large enough y, owing to Theo-rem 1.1.4(iii) one has

nV (1/μ) � c1Π(y)V (y/| ln Π(y)|)

V (y)� c1Π(y)| ln Π(y)|α+δ → 0

as Π(y) → ∞. Therefore (2.2.19) implies that

lnP � −r lnr

Π(y)+ r + ε1(λ),

where ε1(λ) ↓ 0 as λ → ∞ and one can assume without loss of generality thatln(r/Π(y)) � 1. This completes the proof of the theorem.

Proof of Theorem 2.2.3. In the case α = 1 the scheme of the proof remains thesame but the bounding of I2 will be different. By Theorem 1.1.4(iv),

M∫0

V (t)eμtdt � e2α

M∫0

V (t) dt = e2MV (M)L1(M),

Page 121: Asymptotic analysis of random walks

90 Random walks with jumps having no finite first moment

where L1(M) is an s.v.f., so that instead of (2.2.10) we obtain

I2 � F+(0) + cV (1/μ)L1(1/μ);

for simplicity’s sake, we will now replace cL1(1/μ) by L1(1/μ).The bound of I3 remains unchanged. As a result, in the case α = 1 one gets,

instead of (2.2.17),

Rn(μ, y) � exp{nV (1/μ)L1(1/μ) + Π(y)eλ

(1 + ε(λ)

)}.

The choice of λ = ln(r/Π(y)) (μ = y−1 ln(r/Π(y))) also remains unchanged.Then, instead of (2.2.19), we have

lnP � −xμ + nV (1/μ)L1(1/μ) + r(1 + ε(λ)

),

where

nV (1/μ)L1(1/μ) � c2nV

(y

| ln Π(y)|)

L1

(y

| ln Π(y)|)

� c2Π(y)L1(y)| ln Π(y)|1+δ � L−δ/21 (y) → 0 (2.2.20)

as y → ∞, and

Π(y) < c1L−1−δ1 (y), δ > 0. (2.2.21)

The relation (2.2.21) is clearly equivalent to the inequality Π(δ)(y) < c1. If wehave Π(δ) < c1 then Π(δ)(y) ∼ rαΠ(δ) < c1r

α, and the sufficiency of conditionΠ(δ) < c1 for (2.2.20) is also proved.

Now let

s > L1

(σ(n)

)1/(α−δ).

Then, for any δ′ ∈ (0, δ/4),

Π � nV (sσ(n)) < s−α+δ′< s−δ′

L1(σ(n))(−α+2δ′)/(α−δ)

< L1(x)(−α+2δ′)/(α−δ) < L−1−ε1 (2.2.22)

for some ε > 0. This means that the inequality Π(δ) < 1 holds true. The theoremis proved.

Remark 2.2.5. If the function L(t) is differentiable, L′(t) = o(L(t)/t) andF+(t) = V (t) ≡ t−αL(t) then the bound for I3 in (2.2.11) can be improved:

I3 � α1V (y)μy

for any α1 > α and large enough y. This makes it possible to refine the assertionof Theorem 2.2.1 and obtain the bound

P � c

[Π(y)

| ln Π(y)|]r

.

Page 122: Asymptotic analysis of random walks

2.3 Upper bounds when the left tail dominates 91

2.3 Upper bounds for the distribution of the sum of random variables when

the left tail dominates the right tail

To obtain the desired estimates for the large deviation probabilities, we will needbounds for the probabilities of ‘small deviations’ of the sums of negative r.v.’s.

In this section, it will be convenient for us to mean by σW (n) any fixed func-tion such that σW (n) ∼ W (−1)(1/n), where W (−1) is the function inverse toW (t) = t−βLW (t), LW being an s.v.f. (in particular, one could put σW (n) :=W (−1)(1/n)). If, for instance, W (t) ∼ ct−β then one could take σW (n) =(cn)1/β . As before, by the symbols c, C (with or without indices) we will bedenoting constants that are not necessarily the same when they appear in differentformulae.

Theorem 2.3.1. Let the r.v.’s ξj � 0 satisfy condition [>, · ] with β < 1. Thenthere exist c > 0, C < ∞ and δ > 0 such that for u � Cσβ−1

W (n) and all largeenough n one has

P(

Sn

σW (n)� −u

)� e−cu−δ

. (2.3.1)

If LW (n) is a non-increasing function or LW (n) → const as n → ∞ then onecan put δ = β/(1 − β). In the general case, one can choose any δ < β/(1 − β).

Remark 2.3.2. Let LW (t) ≡ 1 and C � 1, for example. Then, for u =σβ−1

W (n) = n−(1−β)/β , we have from (2.3.1) that

P(

Sn

σW (n)� −u

)≡ P

(Sn > −n

)� e−cn.

Now observe that for any u > 0, including the case u < Cσβ−1W (n), to obtain

for P(Sn/σW (n) > −u) a bound that would be better than e−cn is, generallyspeaking, impossible. Indeed, if p0 = P(ξj = 0) > 0 then P(Sn = 0) = pn

0 , sothat for all u > 0

P(

Sn

σW (n)� −u

)� pn

0 .

Proof of Theorem 2.3.1. For λ � 0 and any z > 0 one has

ϕ(λ)n ≡0∫

−∞eλtP

(Sn ∈ dt

)�

0∫−z

� e−λzP(Sn � −z

),

so that

P(Sn � −z

)� eλz+n ln ϕ(λ). (2.3.2)

Further, clearly

ϕ(λ) =

0∫−∞

eλuP(ξ ∈ du) = 1 − λ

0∫−∞

eλuP(ξ < u) du.

Page 123: Asymptotic analysis of random walks

92 Random walks with jumps having no finite first moment

Since P(ξ < −t) � W (t) = t−βLW (t), we get

ϕ(λ) � 1 − λ

∞∫0

e−λtW (t) dt = 1 −∞∫0

e−vW (v/λ) dv,

where, as λ → ∞,∞∫0

e−vW (v/λ) dv = W (1/λ)

⎛⎝ ∞∫0

e−vv−βdv

⎞⎠ (1 + o(1))

= W (1/λ)Γ(1 − β)(1 + o(1))

(see Theorem 1.1.5) and Γ(·) is the gamma function. Thus

ϕ(λ) � 1 − Γ(1 − β)W (1/λ) (1 + o(1)),

and therefore

lnϕ(λ) � −Γ(1 − β)W (1/λ) (1 + o(1)).

Put Γ := Γ(1 − β) and take

z = uσW (n), λ = (βΓ/u)1/(1−β)σ−1

W (n),

so that λ → 0 as u � σβ−1W (n) and one has

λz + n lnϕ(λ) � u(βΓ/u

)1/(1−β) − nΓW((u/βΓ)1/(1−β)σW (n)

)(1 + o(1)).

First assume that LW (n) is non-increasing or, alternatively, that LW (n) → constas n → ∞. Then, for u/βΓ � 1,

W((u/βΓ)1/(1−β)σW (n)

)�(

u

βΓ

)β/(β−1)

n−1(1 + o(1)). (2.3.3)

From here it follows that

λz + n lnϕ(λ) � −(1 − β)ββ/(1−β)Γ1/(1−β)uβ(β−1)(1 + o(1)), (2.3.4)

and hence, by virtue of (2.3.2),

P(

Sn

σW (n)� −u

)� e−cu−δ(1+o(1)) as δ =

β

1 − β. (2.3.5)

Now choose C sufficiently large that, for u � Cσβ−1W (n), in (2.3.5) one has

1+o(1) � 1/2 for all large enough n. Then (2.3.5) implies that the bound (2.3.1)is proved for

c =12

(1 − β)ββ/(1−β)Γ1/(1−β), δ =β

1 − β.

In the general case, by the properties of s.v.f.’s (Theorem 1.1.4), for any ε > 0and all large enough μσW (n) and n, one has

LW (μσW (n)) � μεLW (σW (n)), W (μσW (n)) � μ−β+εn−1.

Page 124: Asymptotic analysis of random walks

2.3 Upper bounds when the left tail dominates 93

Putting μ := (u/βG)1/(1−β) and repeating the above argument (see (2.3.3),(2.3.4)), we again get (2.3.5) but with δ = (β − ε)/(1 − β) < β/(1 − β). Thetheorem is proved.

Taking into account that, by virtue of (1.5.59),

Fβ,−1,−(t) ∼ t−β as t → ∞,

we obtain the following assertion (see also (1.5.60)).

Corollary 2.3.3. Let F = Fβ,−1 be the stable distribution with parameters β

and ρ = −1. One can assume without loss of generality that W (t) ∼ t−β . Then,putting bn := n1/β , we obtain that bn ∼ σW (n) = n1/β and all the distributionsof the r.v.’s Sn/bn coincide with F and therefore, for u > 0,

P(ξ � −u) � e−cu−δ

, δ =β

1 − β. (2.3.6)

From (2.3.6) it follows that, for any k > 0,

E|ξ|−k < ∞. (2.3.7)

Remark 2.3.4. The exact asymptotics of P(ξ � −u) as u → 0 for ξ ⊂=Fβ,−1

were found in Theorem 2.5.3 of [286]. Comparison with that result shows thatthe bound (2.3.6) is asymptotically tight (up to the value of the constant c).

Now we will obtain bounds for P(Sn � −z) in the case of arbitrary r.v.’sξj ≶ 0, satisfying conditions [>, <] with V (t) = o(W (t)). Recall the notationσ(n) = V (−1)(1/n) (see (2.1.9)).

Theorem 2.3.5. Let condition [>, <] be satisfied and let V (t) = o(W (t)) ast → ∞. Then in the case α < 1 there exists a constant c1 < ∞ such that forz � 0 and all n, one has

P(Sn � −z) � c1nV (σW (n) − z) + exp{−c

(z + σ(n)σW (n)

)−δ}, (2.3.8)

where c and δ are the same as in Theorem 2.3.1.If β < 1, α > 1, α �= 2 then (2.3.8) remains true if one replaces σ(n) on the

right-hand side of (2.3.8) by the quantity an, where

a = E[ξ| ξ � 0] =E[ξ; ξ � 0]P(ξ � 0)

.

The cases of the ‘threshold’ values α = 1 and α = 2 require additional consid-eration. However, even in these cases, inequalities of the form (2.3.8) enable oneto obtain meaningful bounds since the fulfilment of condition [ · , <], say, in thecase α = 1 implies that F+(t) � t−α′

for any α′ < 1 and all large enough t.

Page 125: Asymptotic analysis of random walks

94 Random walks with jumps having no finite first moment

Proof. If ξ � 0 then the desired assertion follows from Theorem 2.3.1. Now let

p := P(ξ < 0) ∈ (0, 1)

and let ξ∓j be independent r.v.’s with distributions

P(ξ−j < t) =P(ξ < t)

p, t � 0, (2.3.9)

P(ξ+j < t) =

P(0 � ξ < t)1 − p

, t � 0, (2.3.10)

respectively. Denote by ν � n the number of negative jumps ξj in the sum Sn.Then

Sn = S−ν + S+

n−ν , (2.3.11)

where, for a fixed ν = m, the sums S−m and S+

n−m

(S∓

k =∑k

j=1ξ∓j

), like

the summands ξ∓j in them, are independent. Furthermore, it is obvious (see e.g.Theorem 10, Chapter 5 of [49]) that

P(|ν − np| > εn) � c1e−qn, (2.3.12)

where q = q(ε) > 0 for ε > 0.Fix ν = m, m being a value from the interval (n(p − ε), n(p + ε)), where

ε � 12 min{p, 1 − p}. Then, for such an m, Theorem 2.3.1 is applicable to S−

m.Now construct a new r.v. S− � 0, which is independent of S+

n−m and followsthe distribution

P(S− � −v) = exp{−c

(v

σW (m)

)−δ}, v � vm := Cσβ

W (m),

(2.3.13)

P(S− = 0) = exp{−c

(vm

σW (m)

)−δ}=: Pm. (2.3.14)

Then it is evident that, by virtue of Theorem 2.3.1, one has S−m

d� S− and there-

fore

P(Sn � −z| ν = m) = P(S+n−m � −z − S−

m)

� P(S+n−m � −z − S−) � Pm + Qm, (2.3.15)

where

Qm := P(S+n−m � −z − S−;S− < −vm).

First consider the case α < 1. There are two possibilities:

(1) vm − z � σ(n),

(2) vm − z > σ(n).

Page 126: Asymptotic analysis of random walks

2.3 Upper bounds when the left tail dominates 95

In the former,

Qm � P(−z − σ(n) � S− < −vm

)+ P

(S+

n−m � −z − S−; S− < −z − σ(n)), (2.3.16)

where, owing to (2.3.14), the first term on the right-hand side does not exceed

exp{−c

(z + σ(n)σW (m)

)−δ}� Pm. (2.3.17)

By virtue of Corollary 2.2.4, the second term on the right-hand side does notexceed the value

c(n − m)E[V (−z − S−); S− < −z − σ(n)

]. (2.3.18)

First consider

E[V (−z − S−); −σW (n) < S− < −z − σ(n)

], (2.3.19)

where one can assume without loss of generality that z < σW (n)/2 (otherwisethe inequality (2.3.8) would not be meaningful). The expectation (2.3.19) can berewritten as

−z−σ(n)∫−σW (n)

V (−z − v)P(S− ∈ dv). (2.3.20)

Here, when u = v/σW (n) increases from −1 to −(z + σ(n))/σW (n), on theone hand the quantity V (−z − v) grows as a power function of v (or of u) fromthe value V (σW (n) − z) to V (σ(n)). On the other hand, as u = v/σW (n)grows, by virtue of (2.3.13) the probability P(S− � v) (and its density as well)decays much faster (‘semiexponentially fast’, as e−cu−δ

) from the value e−c to

exp{−c

(z + σ(n)σW (n)

)−δ}. These simple qualitative considerations make it pos-

sible to omit tedious computations (the reader could reproduce them indepen-dently) and claim that (2.3.19), (2.3.20) will admit an upper bound of the form

cV (σW (n) − z).

The second part of the integral in (2.3.18), which is equal to

E[V (−z − S−); S− � −σW (n)

],

clearly does not exceed V (σW (n) − z).Summarizing the above argument (see (2.3.15)–(2.3.20)) and the fact that both

m and n − m are between c1n and c2n, where 0 < c1 < c2 < 1, we obtain that

Page 127: Asymptotic analysis of random walks

96 Random walks with jumps having no finite first moment

in case (1)

P (Sn � −z| ν = m) � exp{−c

(vm

σW (n)

)−δ}+ exp

{−c

(z + σ(n)σW (n)

)−δ}+ c0nV (σW (n) − z).

(2.3.21)

In case (2), the first term on the right-hand side of (2.3.16) will be absent, whilethe second can be estimated as before. Hence

P (Sn � −z|ν = m) � exp{−c

(vm

σW (n)

)−δ}+c0nV (σW (n)−z). (2.3.22)

Taking into account the inequality (2.3.12) and the fact that the derived bounds(2.3.21), (2.3.22) are uniform in m ∈ [(p − ε) n, (p + ε) n], we obtain

P (Sn � −z) � ce−qn + exp{−c

(vm

σW (n)

)−δ}+ exp

{−c

(z + σ(n)σW (n)

)−δ}+ c0nV (σW (n) − z). (2.3.23)

Finally, we note that(vm

σW (n)

)−δ

= C σ(1−β)δW (n) � c1n

h, h ∈ (0, 1],

and that the minimum value of V (σW (n)−z) (attained at z = 0) has a power termof the form n−α/β . Hence the first two terms on the right-hand side of (2.3.23)are negligibly small compared with the last one. This proves (2.3.8).

Now let β < 1, α ∈ (1, 2). Then, as will be shown in Corollary 3.1.2, thebound

P(S+n � x) � cnV (x − an), a = E(ξ| ξ > 0), (2.3.24)

becomes valid for deviations x exceeding the threshold an+σ(n), where σ(n) =o(n). Hence for any fixed a′ > a one has (possibly with a different c)

P(S+n � x) � cnV (x)

for x � a′n (in the case α < 1, the bound from Corollary 2.2.4 is used in (2.3.18)).Therefore the two alternative possibilities (1), (2) have become the following:

(1) vm − z � a′n,

(2) vm − z > a′n.

The rest of the argument remains valid in this case, and thus the changes neededreduce to replacing σ(n) by a′n. Hence (2.3.8) will still be true (possibly, with asomewhat different c or δ) if one replaces σ(n) by a′n. Finally, notice that (2.3.8)will remain true if further we replace a′n by an and c by c(a/a′)δ .

Page 128: Asymptotic analysis of random walks

2.4 Upper bounds for maximum when left tail is much heavier 97

The case α > 2 can be dealt with in a similar way. Here the bound (2.3.24)becomes valid for deviations x � an +

√(α − 2) n lnn (see § 4.1). Otherwise,

the argument remains unchanged. The theorem is proved.

2.4 Upper bounds for the distribution of the maximum of sums when the

left tail is substantially heavier than the right tail

In this section, it will be assumed that β < α. One could expect that, in thiscase, owing to the domination of the left jump-distribution tail the trajectory ofthe random walk will drift to −∞ with probability one, and therefore the boundsfor the probabilities P(Sn � x) will, starting from some point, be independentof n. If, to bound the probability P(Sn � x), one used inequalities of the form

P(Sn � x) � P(B) + P

with the sets B = B(0) (as in the proof of Corollary 2.2.4) then one wouldnot obtain the desired estimates. So instead we will use the sets B = B(v)from (2.1.1) with v > 0 and

g(j) = j1/γ , γ ∈ (β, α)

in these inequalities.

Theorem 2.4.1. Let condition [>, <] be satisfied, functions V (t) and W (t) be-ing of the form (2.1.3) and (2.1.4) respectively with β < min{1, α}. Then, for asuitable v and all n, as y → ∞,

P (v) := P(Sn � x, B(v)

)� cmin

{nV (y), y−(α−β)

}r−ε (2.4.1)

with r = x/y, where ε = ε(v, γ) > 0 can be made arbitrarily small by choosingv > 0 and γ > β.

Moreover, for any fixed ε > 0,

P(Sn � x) � cV (x) min{n, xβ+ε

}. (2.4.2)

Corollary 2.4.2. Under the conditions of Theorem 2.4.1, S = supn�0 Sn is aproper r.v.

The assertion of the corollary follows in an obvious way from (2.4.2).To prove Theorem 2.4.1, we will need bounds for the probability P = P (0)

from (2.1.7).In the following lemma, to make computations easier, we will assume in addi-

tion that in (2.1.3) and (2.1.4) one has

L(t) = L + o(1), LW (t) = LW + o(1) (2.4.3)

as t → ∞, where L and LW are constants.

Page 129: Asymptotic analysis of random walks

98 Random walks with jumps having no finite first moment

Lemma 2.4.3. Let the conditions of Theorem 2.4.1 be satisfied and let (2.4.3)hold true. Then, for all n,

P � min{c(nV (y))r, c1y

−r(α−β)(ln y)−rβ}, r =

x

y. (2.4.4)

Consequently, for all large enough y one has

P � min{c(nV (y))r, y−r(α−β)

},

where c is the same as in Theorem 2.2.1.

Note that conditions (2.4.3) will be used only to obtain the second part of in-equality (2.4.4). Furthermore, they will not be needed for proving Theorem 2.4.1.

Proof. By virtue of the inequalities (2.1.8), the problem again reduces to that ofbounding the quantity R(μ, y) from (2.2.7), with the same partition of that integralinto the sub-integrals I1, I2, I3. Now let us show that μ → 0 can be chosen sothat R(μ, y) < 1.

Since

I1 = F−(0) − μ

0∫−∞

F−(t)eμtdt � F−(0) −∞∫0

e−uW (u/μ) du

and we have assumed that β < 1, one has that, as μ → 0,

∞∫0

e−uW (u/μ) du ∼ W (1/μ)

∞∫0

e−uu−βdu = Γ(1 − β)W (1/μ),

where Γ(·) is the gamma function, so that

I1 � F−(0) − Γ(1 − β)W (1/μ)(1 + o(1)). (2.4.5)

In the case α < 1, the bounds for the integrals I2 and I3 remain the same asin (2.2.10), (2.2.11) and (2.2.15). Hence for this case

R(μ, y) � 1 − Γ(1 − β)W (1/μ)(1 + o(1))

+ cV (1/μ) + V (y)eμy(1 + ε(μy)), (2.4.6)

where V (1/μ) = o(W (1/μ)

).

If α > 1 then instead of the term cV (1/μ) in (2.4.6) one would have cμ

(see the remark after (2.2.10) and also the proof of Theorem 3.1.1 and the re-lations (3.1.27), (3.1.28) on p. 133). Because μ = o (W (1/μ)), all the subsequentarguments involving (2.4.6) will remain valid.

For α = 1, instead of the term cV (1/μ) in (2.4.6) we have cμ| lnμ|; it is alsoobvious that the relation μ| lnμ| = o (W (1/μ)) is true, and that the subsequentargument will remain valid as well.

Page 130: Asymptotic analysis of random walks

2.4 Upper bounds for maximum when left tail is much heavier 99

Thus one obtains from (2.4.6) the following bounds:

R(μ, y) � 1 − Γ(1 − β)W (1/μ)(1 + o(1)) + V (y)eμy(1 + o(1))

� 1 + V (y)eμy(1 + o(1)). (2.4.7)

We will make use of both these inequalities. First, we turn to the former andchoose μ in such a way that

Γ(1 − β)W (1/μ) = V (y)eμy. (2.4.8)

To make finding the root μ of this equation easier, we will use the simplifyingassumptions (2.4.3). Then for y � 1/μ equation (2.4.8) will take the form

yαμβ = c eμy(1 + o(1)). (2.4.9)

Putting μy =: λ, one can rewrite (2.4.9) as

β lnλ + (α − β) ln y = λ + c1 + o(1).

From here it is seen that we can ‘almost satisfy’ the equation (2.4.9) by setting

λ = (α − β) ln y + β ln ln y + c2.

With such a choice of λ (or μ) and a suitable c2,

R(μ, y) � 1 − Γ(1 − β)LW λβy−β(1 + o(1)) + y−αLeλ(1 + o(1))

= 1 − y−β lnβ[LW Γ(1 − β)(α − β)β(1 + o(1)) − Lec2(1 + o(1))

]< 1.

Therefore, by the basic inequality (2.1.8)

P � e−μx = e−rλ = exp{−r[(α − β) ln y + β ln ln y + c2]

}= c1y

−r(α−β)(ln y)−rβ .

Hence the second part of the inequality (2.4.4) follows.To prove the first part of (2.4.4), consider the second inequality in (2.4.7) and

choose μ in the same way as in the proof of Theorem 2.2.1, i.e. put

μy = lnr

Π(y).

Then, following the computations (2.2.17)–(2.2.19), we similarly obtain

lnP � −xμ + Π(y)eμy(1 + ε(μy)) = −r lnr

Π(y)+ r + o(1).

This yields the inequality (2.2.1) and proves the lemma.

Page 131: Asymptotic analysis of random walks

100 Random walks with jumps having no finite first moment

Proof of Theorem 2.4.1. First, we will estimate

P (v) ≡ P(Sn � x;B(v)). (2.4.10)

For g(x) = x1/γ and A > 1, put

m1 := g(−1)(x) = xγ , mk := xγAk−1,

M0 := 0, Mk :=k∑

j=1

mj ≡ xγAk, Ak :=Ak − 1A − 1

� Ak−1,

xk := x + g(Mk−1) = x(1 + A1/γk−1),

yk := y + vg(Mk) = y(1 + vrA1/γk ),

(2.4.11)

for k = 1, 2, . . . Here one can assume, for simplicity, that the mk are integers.For n � M1, owing to Lemma 2.4.3,

P (v) � P(

Sn � x;n⋂

j=1

{ξj < y1})

� min{c(nV (y1))r1 , y

−r1(α−β)1

},

where r1 = x/y1 = x/y(1 + vrA1/γ) can be made greater than r−ε by choosingv appropriately. This proves (2.4.1).

Now let n > M1. Then

P (v) � P(

Sm1 � x1;m1⋂j=1

{ξj < y1})

+ P(

SM1 � −M1/γ1 ;

M1⋂j=1

{ξj < y1})

+ P(Sn � x, Sm1 < x, SM1 < −M

1/γ1 ; B(v)

). (2.4.12)

Arguing in the same way, we see that for n > M2 the last term in (2.4.12) willnot exceed

P(

Sm2 � x2;m2⋂j=1

{ξj < y2})

+ P(

SM2 � −M1/γ2 ;

M2⋂j=1

{ξj < y2})

+ P(Sn � x, SM2 < x, SM2 < −M

1/γ2 ; B(v)

),

and so on. Thus, to get a bound for P (v), we need to obtain estimates for

N∑k=1

P(

Smk� xk;

mk⋂j=1

{ξj < yk})

(2.4.13)

andN∑

k=1

P(

SMk� −M

1/γk ;

Mk⋂j=1

{ξj < yk})

, (2.4.14)

Page 132: Asymptotic analysis of random walks

2.4 Upper bounds for maximum when left tail is much heavier 101

where N := min{k : Mk � n}. By virtue of Lemma 2.4.3, for large enough y

the former sum does not exceed the quantity∑k

y−rk(α−β)k , (2.4.15)

where

rk =xk

yk=

x(1 + A1/γk−1)

y(1 + vrA1/γk )

� r − ε, ε > 0

for all k and a suitable v = v(r, A, ε). Therefore the sums (2.4.13) and (2.4.15)will not exceed ∑

k

y−(r−ε)(α−β)k . (2.4.16)

But the Ak increase as a geometric progression (see (2.4.11)). The same can besaid about the sequence 1+rvA

1/γk (see the definition of yk in (2.4.11)). Therefore

the sums (2.4.13), (2.4.15) and (2.4.16) will not exceed cy−(r−ε)(α−β).Next we estimate the sum (2.4.14). For brevity, let M

1/γk =: zk. Denote by ν

the number of events {ξj � 0} in Mk independent trials. Then, by (2.3.12),

P(

SMk� −zk;

Mk⋂j=1

{ξj < yk})

=Mk∑i=1

P(

SMk� −zk;

Mk⋂j=1

{ξj < yk}∣∣∣∣ ν = i

)P(ν = i)

=[Mkp2]∑

i=[Mkp1]

+ O(e−qMk

), (2.4.17)

where p1 = F−(0) − ϕ > 0, p2 = F−(0) + ϕ < 1, ϕ > 0 and q = q(ϕ) > 0.Further, let ν = i ∈ [p1Mk, p2Mk] be fixed. Then, cf. (2.3.11),

SMk= S−

i + S+Mk−i,

where S∓k =

∑kj=1 ξ∓j are independent sums of independent r.v.’s ξ∓j with distri-

bution functions (2.3.9) and (2.3.10) respectively. Hence the first factor in the ithterm on the right-hand side of (2.4.17) will not exceed

P(S−i � −2zk) + P

(S+

Mk−i � zk;Mk−i⋂j=1

{ξ+j < yk}

). (2.4.18)

Here the second term, by virtue of Theorem 2.2.1, does not exceed the quantityc[MkV (yk)]r

∗k , where (see (2.4.11))

r∗k =M

1/γk

yk=

xA1/γk

y(1 + vrA

1/γk

) � r

1 + vr> r − ε

Page 133: Asymptotic analysis of random walks

102 Random walks with jumps having no finite first moment

for v < ε/r2. Therefore, the second term in (2.4.18) is bounded by

c[MkV (yk)]r−ε � c1

[xγAk

(yA

1/γk

)−α]r−ε

= c2

(yγ−αA

1−α/γk

)r−ε

uniformly in i. But Ak � Ak−1, A > 1, γ < α. Hence the sum in k of theseterms (see (2.4.14), (2.4.17)) does not exceed c3y

−(r−ε)(α−γ).Now we will obtain a bound for the first term in (2.4.18), putting for brevity

Mk = m, i = mp, p ∈ [p1, p2]. One has the following embedding for the eventin the first term: {

S−mp � −2m1/γ

}⊂

mp⋂j=1

{ξ−j � −2m1/γ

}.

Hence

P(S−

mp � −2m1/γ)

�(

1 − W (2m1/γ)F (0)

)mp

�(1 − cm−β/γ

)mp

< e−cp1m1−β/γ

(2.4.19)

uniformly in i ∈ [mp1, mp2]. Taking into account that m = Mk grows as ageometric progression, that M1 = xγ and that 1 − β/γ > 0, we obtain thatthe contribution of the first terms in (2.4.18) to the sum in (2.4.14) does not ex-ceed e−cyγ−β

.Note that a bound of the form (2.4.19) can be obtained in a different way, using

Theorem 2.3.1.We have proved that, for all large enough y,

P (v) < y−(r−ε)(α−β).

Combining all the above bounds, we arrive at the second part of the inequal-ity (2.4.1). It remains to notice that, for n > M1,

nV (y) > cxγ−α � xβ−α,

and therefore (2.4.1) is proved.To obtain the second assertion of Theorem 2.4.1, we need to estimate

P(B(v)) �n∑

j=1

P(ξj � y + vj1/γ)

�n∑

j=1

V (y + vj1/γ) �n∫

0

V (y + vt1/γ) dt. (2.4.20)

If n � yγ then the integral does not exceed cnV (y). If n � yγ then one shouldrepresent the integral in (2.4.20) as the sum

∫ yγ

0+∫ n

yγ , where the first integral

Page 134: Asymptotic analysis of random walks

2.5 Lower bounds for distributions of sums 103

has already been shown to be bounded by cyγV (y). The last integral, by virtueof Theorem 1.1.4(iv), does not exceed

c

∞∫y

uγ−1V (y + vu) du ∼ c1yγV (y).

Therefore

P(B(v)) � cV (y) min{yγ , n}. (2.4.21)

Now choosing r � 1 + ε in (2.4.1), we obtain for P(Sn � x) the same bound asin (2.4.21):

P(Sn � x) � P(Sn � x; B(v)) + P(B(v)) � cV (x) min{xγ , n}.The theorem is proved.

Remark 2.4.4. It is not difficult to verify that, at the price of some complica-tions in the derivation, the bounds (2.4.1), (2.4.2) could be made more precise.If we put g(j) := j1/β ln−b j and m1 := xβ lnb x then the quantity ε in (2.4.1),(2.4.2) could be replaced by 0 but then a logarithmic factor would appear on theright-hand sides of these inequalities. Indeed, the only place in the proof of Theo-rem 2.4.1 that is sensitive to the approach of the parameter γ to the value β is thebounding of the first term in (2.4.18). But this bound is exponential (see (2.4.19)and what follows). Hence one could achieve a power decay rate in the bound by asuitable choice of b in the new definition of the function g. Bounds for P(B(v))would change in a similar way. Therefore one can actually obtain the bound

P(Sn � x) � cV (x) min{n, xβ lnb1 x

}(2.4.22)

for a suitable b1 > 0.Since in the assertion of Theorem 2.4.1 we have an arbitrary fixed ε > 0 in the

exponent, the simplifying assumptions (2.4.3) do not lead to a loss of generality.One could even assume that

V (t) � t−α′, W (t) � t−β′

,

where α′ < α and β′ > β are close to α and β respectively. In that case theassertions (2.4.1), (2.4.2) would retain their form with ε = ε(β′, α′).

2.5 Lower bounds for the distributions of the sums. Finiteness criteria for

the maximum of the sums

2.5.1 Lower bounds

In §§ 2.1–2.4 we presented upper bounds for the distributions of Sn and Sn. Nowwe will derive lower bounds for the distributions of Sn. These are substantiallysimpler and more general.

Page 135: Asymptotic analysis of random walks

104 Random walks with jumps having no finite first moment

Here we will not need conditions on the existence of regularly varying majo-rants or minorants. Recall that

F+(t) = P(ξ � t), F−(t) = P(ξ < t), F (t) = F−(t) + F+(t).

Theorem 2.5.1. Let {K(n) > 0; n � 1} be an arbitrary sequence, and letQn(u) := P

(Sn/K(n) < −u

). Then, for y = x + uK(n − 1),

P(Sn � x) � nF+(y)(

1 − Qn−1(u) − n − 12

F+(y))

.

Proof. Put Gn := {Sn � x} and Bj := {ξj < y}. Then

P(Sn � x) � P(

Gn;n⋃

j=1

Bj

)

�n∑

j=1

P(GnBj) −∑

i<j�n

P(GnBi Bj)

�n∑

j=1

P(GnBj) − n(n − 1)2

F 2+(y).

Here, for y = x + uK(n − 1),

P(GnBj) =

∞∫y

P(Sn−1 � x − t)F(dt)

� P(Sn−1 � x − y)F+(y) = (1 − Qn−1(u))F+(y).

The theorem is proved.

Now we will present conditions under which one can obtain explicit estimatesfor K(n) and Qn(u) in the case Eξ2

j = ∞.

We will say that condition [ · , ≶] is satisfied if, for some c � 1,

V (t) � F+(t) � cV (t), (2.5.1)

where V (t) is given in (2.1.3). (If c = 1 then [ · , ≶] coincides with [ · , =].)

We will also be using condition [Rα,ρ] from § 1.5: F (t) is an r.v.f. with index−α, α ∈ (0, 2), and, moreover, there exists the limit

limt→∞

F−(t)F (t)

= ρ−, ρ− ∈ [0, 1], ρ = 1 − 2ρ−.

When ρ− = 0, one can assume that condition [<, =] is met.Under condition [Rα,ρ] the scaled sums Sn/b(n), b(n) = F (−1)(1/n), con-

verge in distribution to the stable law Fα,ρ with parameters (α, ρ) (see § 1.5; recall

Page 136: Asymptotic analysis of random walks

2.5 Lower bounds for distributions of sums 105

that we assume that Eξ = 0 when Eξ exists and is finite, and that centring maybe needed in the case α = 1):

P(

Sn

b(n)∈ ·

)⇒ Fα,ρ( · ). (2.5.2)

Theorem 2.5.2.

(i) Let condition [<, · ] hold, β < 1, and let σW (n) = W (−1)(1/n). Then, fory = x + uσW (n − 1),

P(Sn � x) � nF+(y)(

1 − cu−β+δ − n − 12

F+(y))

(2.5.3)

for any fixed δ > 0 and a suitable c < ∞.(ii) Let, in addition, condition [ · , ≶] hold and let W (t) � c1V (t). Then for

x = sσ(n) → ∞ (σ(n) = V (−1)(1/n)) one has

P(Sn � x) � nV (x)(1 − ε(s)), (2.5.4)

where ε(s) ↓ 0 as s ↑ ∞.

Proof. Put K(n) := σW (n) in Theorem 2.5.1. Then, applying Corollary 2.2.4to the sums −Sn, we obtain

Qn(u) = P(−Sn � uσW (n)) � c1nW (uσW (n)) � cu−β+δ (2.5.5)

for any fixed δ > 0 and all u � 1. This proves (2.5.3).Now let, in addition, condition [ · , ≶] be met and W (t) � c1V (t), x = sσ(n).

Then σW (n) � c2σ(n). Hence for u = s1−δ , δ ∈ (0, 1), we have

y = x + uσW (n − 1) = sσ(n) + s1−δσW (n−1) � x(1 + c2s−δ)

and therefore F+(y) � V (x)(1 + ε(s)), where ε(s) ↓ 0 as s ↑ ∞.Choosing δ so that (−β + δ)(1 − δ) � −β/2, and taking into account (2.5.1)

and the inequality nF+(y) < cnV (sσ(n)) ∼ cs−α, we obtain from (2.5.3) therelation (2.5.4). The theorem is proved.

The following corollary for the case of regular tails follows in an obvious wayfrom Theorem 2.5.2.

Corollary 2.5.3. Let condition [Rα,ρ] hold with ρ > −1. Then, for x = sσ(n)and Π = Π(x) = nV (x), we have

infx: s>t

P(Sn � x)Π

� 1 − ε(t), (2.5.6)

where ε(t) ↓ 0 as t ↑ ∞.

Page 137: Asymptotic analysis of random walks

106 Random walks with jumps having no finite first moment

2.5.2 Finiteness criterion for the supremum of partial sums

It is not difficult to derive from Theorem 2.3.5 and the above lower bounds afiniteness criterion for S = supk�0 Sk.

Theorem 2.5.4.

(i) Let condition [>, <] hold with β < 1, and let V (t) = o(W (t)). Then theconvergence of the integral

∞∫1

V (t) dt

tW (t)< ∞ (2.5.7)

implies that S < ∞ a.s.(ii) Let condition [<, >] hold with β < 1. Then the divergence of the integral

∞∫1

V (t) dt

tW (t)= ∞ (2.5.8)

implies that S = ∞ a.s.

(iii) Let condition [=, =] hold with β < 1, and let the limit

limt→∞

V (t)W (t)

� ∞ (2.5.9)

exist. Then the convergence of the integral in (2.5.7) is necessary and suffi-cient for S < ∞ a.s.

The proof of the theorem is given below (p. 107).In the general case (without assumptions [ · , <], [>, · ] on the existence of

regularly varying majorants and minorants), the finiteness criterion for S wasobtained in [119]. It follows from the strong law of large numbers in the casewhen Eξ is undefined, and it has the following form. Put

m−(x) :=

x∫0

F−(t) dt, m+(x) :=

x∫0

F+(t) dt,

I+ :=

∞∫0+

t

m−(t)F(dt), I− :=

0−∫−∞

|t|m+(|t|) F(dt).

Note that the integrands in I± are bounded in the vicinity of zero if ξ can assumevalues of both signs with positive probabilities. The following theorem holds true(an alternative proof thereof using the subadditive Kingman theorem was givenin [102]).

Theorem 2.5.5. If E|ξ| = ∞ then:

(i) max{I+, I−} = ∞;

Page 138: Asymptotic analysis of random walks

2.5 Lower bounds for distributions of sums 107

(ii) limn→∞

Sn

n= ∞ a.s. ⇐⇒ I− < ∞;

(iii) limn→∞

Sn

n= −∞ a.s. ⇐⇒ I+ < ∞;

(iv) lim supn→∞

±Sn

n= ∞ a.s. ⇐⇒ I− = I+ = ∞.

It follows from Theorem 2.5.5 that

S < ∞ a.s. ⇐⇒ I+ < ∞.

Theorem 2.5.4 could be obtained as a consequence of Theorem 2.5.5. However,presenting the proof of Theorem 2.5.5 given in [119], which is based on quitedifferent arguments, would take us too far from the mainstream of our exposition.For this reason, and also to illustrate the precision of the bounds that we haveobtained, we will obtain the assertion of Theorem 2.5.4 in a simpler way – as acorollary to Theorems 2.3.5 and 2.5.2.

The next assertion ensues from Theorem 2.5.4.

Corollary 2.5.6. Let β < 1. If condition [>, <] is satisfied and if we also haveV (t) � cW (t)(ln t)−1−ε, ε > 0 then S < ∞ a.s. If condition [<, >] is met andV (t) � cW (t)(ln t)−1 then S = ∞ a.s.

Observe that condition [>, <] in Theorem 2.5.4(i) means that there exists a

r.v. ξd� ξ with regularly varying tails (i.e. with a distribution satisfying condi-

tion [=, =]) for which (2.5.7) holds true. A similar remark is valid for condi-tion [<, >] in Theorem 2.5.4(ii).

Proof of Theorem 2.5.4. We will need the following auxiliary assertion.

Lemma 2.5.7. The integral (2.5.7) converges iff

∞∑n=1

V (σW (n)) =∞∑

n=1

V (W (−1)(1/n)) < ∞. (2.5.10)

Proof. Clearly, convergence of the series in (2.5.10) is equivalent to convergenceof the integral

∞∫1

V (W (−1)(1/t)) dt.

Using the change of variables W (−1)(1/t) = u (i.e. t = 1/W (u)) and putting,

Page 139: Asymptotic analysis of random walks

108 Random walks with jumps having no finite first moment

for brevity, W (−1)(1) =: w, we see that the above integral is equal to

−∞∫

w

V (u) dW (u)W 2(u)

< c1

∑k

V (2k)W (2k)W 2(2k)

= c1

∑k

V (2k)W (2k)

2k

2k< c2

∞∫1

V (t) dt

tW (t).

Similar converse inequalities are also true. The lemma is proved.

Now we return to the proof of Theorem 2.5.4.

(i) We will make use of the following well-known criterion for the finiteness ofS (see e.g. § 7, Chapter XII of [122] or § 2, Chapter 11 of [49]): S < ∞ a.s. iff

∞∑n=1

P(Sn > 0)n

< ∞(

or∞∑

n=1

P(Sn � 0)n

< ∞)

. (2.5.11)

If condition [>, <] is satisfied with α < 1 and β < 1 then by Theorem 2.3.5with z = 0 one has

P(Sn > 0) � c1nV (σW (n)) + exp{−c

(σ(n)

σW (n)

)−δ}. (2.5.12)

Here, by virtue of Theorem 1.1.4(iii), for any fixed ε > 0 and all large enough n,

V (σW (n)) = V

(σW (n)σ(n)

σ(n))

�(

σW (n)σ(n)

)−α−ε 1n

and therefore

nV (σW (n)) �(

σW (n)σ(n)

)−α−ε

.

At the same time, for any k > 0

exp{−ct−δ} � t−k

as t → ∞. Hence

nV (σW (n)) � exp{−c

(σ(n)

σW (n)

)−δ},

and so by (2.5.12)

P(Sn > 0) � cnV (σW (n)). (2.5.13)

This, together with (2.5.11) and Lemma 2.5.7, proves the first assertion of thetheorem in the case α < 1.

If α � 1, β < 1 then the integral (2.5.7) converges. However, in that casecondition [ · , <] is satisfied for any α ∈ (β, 1) and therefore S < ∞ a.s. byvirtue of the above argument.

Page 140: Asymptotic analysis of random walks

2.5 Lower bounds for distributions of sums 109

(ii) Now we will prove the second assertion of the theorem. To simplify theexposition, assume that at least one of the two alternative inequalities

F+(t) � c1W (t) or F+(t) � c2W (t), 0 < c2, c1 < ∞, (2.5.14)

holds true for all large enough t. If the second inequality holds then condition

[<, >] holds with V (t) = c2W (t). Therefore there exist r.v.’s ξj

d� ξj with the

distribution

P(ξj < −t

)= W (t), P

(ξj � t

)= c2W (t), t � 0

(one can always assume that 1−W (0) = c2W (0)), such that for the sums Sn :=∑nj=1 ξj the following holds true. For some c > 0, the r.v.’s Sn/cσW (n) converge

in distribution (as n → ∞) to the stable law Fβ,ρ with parameters β and ρ > −1,so that Fβ,ρ,+(0) =: q > 0. Therefore

P(Sn � 0) � P(Sn � 0

)→ q > 0.

From here and (2.5.11) it follows that S = ∞ a.s.Now suppose that the first inequality in (2.5.14) holds true. We will make use of

the following lower bound for P(Sn � 0), which is derived from Theorems 2.5.1and 2.5.2 (with x = 0, y = uσW (n)):

P(Sn � 0) � nF+(y)(

1 − Qn−1(u) − n − 12

F+(y))

, (2.5.15)

where

Qn−1(u) = P(

Sn

σW (n)< −u

).

Construct an r.v. ξj := min{ξj , 0} � ξj and putSn :=∑n

j=1 ξj . Then evidently

Qn(u) � P(

Sn

σW (n)< −u

)→ Fβ,−1,−(−u),

where Fβ,−1 is the stable distribution with parameters (β,−1).Further,

n − 12

F+(y) � c1n

2W (uσW (n)) ∼ c1u

−β

2as n → ∞. Hence for large enough fixed u we will have

Qn−1(u) +n − 1

2F+(y) � 1

2,

so that by virtue of (2.5.15)

P(Sn � 0) � n

2V (uσW (n)) ∼ nu−β

2V (σW (n)).

This means that if the series in (2.5.10) diverges (see Lemma 2.5.7) then the seriesin (2.5.11) also diverges and S = ∞ a.s.

Page 141: Asymptotic analysis of random walks

110 Random walks with jumps having no finite first moment

(iii) The third assertion of Theorem 2.5.4 follows from the first two. Indeed,if the integral in (2.5.7) converges then V (t) = o(W (t)) as t → ∞ (the limitin (2.5.9) is equal to zero), and it follows from assertion (i) that S < ∞ a.s.

Now let the integral (2.5.7) diverge. Then it follows from assertion (ii) thatS = ∞ a.s. (owing to (2.5.9), the one of the alternatives in (2.5.14) is alwaystrue). The theorem is proved.

2.6 The asymptotic behaviour of the probabilities P(Sn � x)

As in §§ 2.2–2.4, we will distinguish here between the following two cases:

(1) α < 1, the tail F−(t) does not dominate F+(t), t > 0;

(2) β < 1, the tail F−(t) is ‘much heavier’ than F+(t), t > 0.

First we consider the former possibility.

Theorem 2.6.1. Let condition [Rα,ρ], α < 1, ρ > −1, be satisfied. Then, forx = sσ(n),

sups�t

∣∣∣∣P(Sn � x)nV (x)

− 1∣∣∣∣ � ε(t) ↓ 0,

sups�t

∣∣∣∣P(Sn � x)nV (x)

− 1∣∣∣∣ � ε(t) ↓ 0

as t ↓ 0.

Proof. The relations follow immediately from Corollaries 2.2.4 and 2.5.3.

In § 3.7 we will present integro-local theorems on the asymptotic behaviour ofthe probability P(Sn ∈ [x, x + Δ)), Δ > 0, when W (t) � cV (t).

Now consider the latter possibility (case (2)). Here (and also in many otherconsiderations in the sequel) we will be using the following standard approach,which is related to the bounds for the distributions of sums of truncated r.v.’s thatwe obtained in §§ 2.2 and 2.4. In agreement with our previous notation, we put

Bj = {ξj < y}, B =n⋂

j=1

Bj ,

and let

Gn := {Sn � x}.Then

P(Gn) = P(GnB) + P(GnB),

wheren∑

j=1

P(GnBj) � P(GnB) �n∑

j=1

P(GnBj) −∑

i<j�n

P(GnBiBj).

Page 142: Asymptotic analysis of random walks

2.6 The asymptotic behaviour of the probabilities P(Sn � x) 111

Since

P(GnBiBj) � P(B1)2 = F 2+(y),

we have

P(Gn) = P(GnB) +n∑

j=1

P(GnBj) + O((nF+(y))2). (2.6.1)

Our first task is to estimate the probability P(GnB). The bounds for it that weobtained in §§ 2.1–2.4 prove to be insufficient for our purposes in this section.We will need additional bounds; unfortunately, to derive them, one has to assumethat β < α. Bounds in the case α = β are apparently much harder to obtain.

Theorem 2.6.2. Let condition [>, <] with β < α < 1 be satisfied. Then thefollowing assertions hold true.

(i) For y = n1/γ , any fixed γ > β, ε > 0 such that θ := 1 − β/γ − ε > 0 andall large enough n one has

P(GnB) � n−θxn−1/γ

e−nθ

. (2.6.2)

(ii) For y = x/r, any fixed r � 1, ε > 0 and all small enough values of nV (x)one has

P(GnB) � [nV (y)]re−ny−β−ε

. (2.6.3)

Remark 2.6.3. Note that, for y = n1/γ and all large enough n, it follows fromthe inequality (2.6.2) that

P(GnB) � e−nθ

, 0 < θ < 1 − α/γ, (2.6.4)

and this bound cannot be substantially sharpened when

|x| � nv, v < 1 − β

γ+

1γ∈(

,1β

).

However, if y = x/r and nV (x) → 0 fast enough then (2.6.3) turns into theinequality

P(GnB) �[nV (y)

]r; (2.6.5)

this also cannot be improved.

Corollary 2.6.4. Let condition [>, <] with β < α < 1 be satisfied.

(i) If γ > β and |x| � nv, v ∈ [1/γ, 1/β), then

P(Sn � x) � nV(n1/γ

)+ O

(e−nθ)

, 0 < θ < 1 − β/γ. (2.6.6)

(ii) If nV (x) → 0 then

P(Sn � x) � nV (x)(1 + o(1)). (2.6.7)

Page 143: Asymptotic analysis of random walks

112 Random walks with jumps having no finite first moment

Proof. The proof of Corollary 2.6.4 is next to obvious. One has to use the in-equality

P(Gn) � P(B) + P(GnB) � nV (y) + P(GnB).

Then the bound (2.6.6) follows from (2.6.4) and the inequality (2.6.7) from (2.6.5),provided that we put r := 1 + | lnnV (x)|−1/2 ∼ 1.

Observe that for x = 0 the bound (2.6.6) is weaker than the bound obtainedin Theorem 2.3.5. It is also clear that, for γ < α, the zones of the deviations x

in (2.6.6) and in (2.6.7) do overlap.

Proof of Theorem 2.6.2. Following the standard argument from (2.1.8), we obtain

P(GnB) � e−μxRn(μ, y), (2.6.8)

where

R(μ, y) =

y∫−∞

eμtF(dt).

Further, following (2.4.5), (2.4.6) and taking into account the observation thatV (t) = o(W (t)) as t → ∞, we obtain that, as μ → 0, μy → ∞,

R(μ, y) � 1 − ΓW (1/μ)(1 + o(1)) + V (y)eμy(1 + o(1)), (2.6.9)

where Γ = Γ(1 − β). Hence

P(GnB) � exp{−μx−nΓW (1/μ)(1+o(1))+nV (y)eμy(1+o(1))

}. (2.6.10)

(i) First put

y := n1/γ , μ :=θ

ylnn, θ ∈ (0, (α − β)/γ)

for some γ > β. Then for the terms on the right-hand side of (2.6.10) we willhave

nΓW (1/μ) ∼ nΓ θβW (y/lnn) ∼ n1−β/γL1(n), (2.6.11)

nV (y)eμy = n1−α/γ+θL2(n), (2.6.12)

where L1, L2 are s.v.f.’s and 1 − α/γ + θ < 1 − α/γ + (α − β)/γ = 1 − β/γ.Hence the term (2.6.12) will be negligibly small compared with (2.6.11). Thismeans that, for any fixed ε > 0 and all large enough n,

lnP(GnB) � −θxn−1/γ lnn − n1−β/γ−ε.

This proves (2.6.2).

(ii) Now put

y :=x

r, r � 1, μ := −1

ylnnV (y),

Page 144: Asymptotic analysis of random walks

2.6 The asymptotic behaviour of the probabilities P(Sn � x) 113

where we assume that nV (x) → 0 as x → ∞. Then for the terms in (2.6.10) wehave

nV (y)eμy = 1, nΓW (1/μ) = nΓW(y| lnnV (y)|−1

)> ny−β−ε

for any fixed ε > 0 and all large enough y. This means that, under the aboveconditions,

lnP(GnB) � r lnnV (y) − ny−β−ε,

which is equivalent to (2.6.3). The theorem is proved.

Now we will formulate the main assertion of the present section.

Theorem 2.6.5. Let condition [=, =] with β < α < 1 be satisfied. Then, forx � −n1/γ , γ ∈ (β, α), max{x, n} → ∞, one has

P(Sn � x) = nEV (x − ζσW (n))(1 + o(1)), (2.6.13)

where ζ � 0 is an r.v. following the stable distribution Fβ,−1 with parameters(β,−1) (i.e. the limiting law for Sn/σW (n) as n → ∞). That is,

P(Sn � x) ∼

⎧⎪⎪⎪⎪⎪⎨⎪⎪⎪⎪⎪⎩

nV (σW (n))E(−ζ)−α if x = o(σW (n)) andn → ∞,

nV (σW (n))E(b − ζ)−α if x ∼ bσW (n) and0 < b < ∞, n → ∞,

nV (x) if x � σW (n).

(2.6.14)

Remark 2.6.6. Note that in the case when |x| = o(σW (n)) the asymptotics ofP(Sn � x) do not depend on x.

As was shown in [197], the assertion of the theorem will remain true in thecase α = β, V (t) = o(W (t)). For x � σW (n) the assertion follows from thebounds of §§ 2.2 and 2.5.

From Theorem 2.6.5 one can obtain corollaries describing the asymptotic be-haviour of the renewal function

H(t) =∞∑

n=1

P(Sn � t) (2.6.15)

or, equivalently, of the mean time spent by the trajectory {Sn} above the level t.

Corollary 2.6.7. Let condition [=, =] with β < α < 1 be met, and let t be anarbitrary fixed number. Then the following assertions hold true.

(i) H(t) < ∞ iff∞∑

n=1

nV (σW (n)) < ∞. (2.6.16)

Page 145: Asymptotic analysis of random walks

114 Random walks with jumps having no finite first moment

(ii) If (2.6.16) is true then, as x → ∞,

H(−x) ∼ E(−ζ)−β

W (x).

From part (i) of the corollary it follows, in particular, that these implicationshold true:

{α > 2β} =⇒ {H(t) < ∞} =⇒ {α � 2β}.In the case when ξ � 0 the following, more general, assertion was proved

in [118]:

Let β < 1,

F−,I(t) :=

t∫0

F−(u) du.

Then the relations

F−,I(t) ∼ t1−βLW (t)1 − β

as t → ∞, (2.6.17)

where LW is an s.v.f., and

H(−x) ∼ [Γ(β + 1)Γ(2 − β)

]−1 1x−βLW (x)

as x → ∞ (2.6.18)

are equivalent.

If, instead of (2.6.17), a stronger condition that F−(t) = W (t) ∼ t−βLW (t) issatisfied then (2.6.18) and Theorem 2.6.2 imply that

H(−x) ∼ [Γ(β + 1)Γ(1 − β)

]−1 1W (x)

as x → ∞

and

E(−ζ)−β =[Γ(β + 1)Γ(1 − β)

]−1.

Moreover, for ξ � 0, [118] also contains a local renewal theorem that describesthe asymptotic behaviour of H(−x−Δ)−H(−x), for an arbitrary fixed Δ > 0,as x → ∞ in the case when F−(t) = W (t), 1/2 < β < 1:

H(−x − Δ) − H(−x) ∼ [Γ(β)Γ(1 − β)

]−1 ΔxW (x)

.

In the case when β ∈ (0, 1/2], a similar relation was obtained but only for

lim supx→∞

[H(−x − Δ) − H(−x)

]xW (x).

In the lattice case, these results were extended in [128, 279] to the class of r.v.’s ξ

that can assume values of both signs.

Page 146: Asymptotic analysis of random walks

2.6 The asymptotic behaviour of the probabilities P(Sn � x) 115

Proof of Corollary 2.6.7. (i) The first assertion of the corollary follows in an ob-vious way from the first relation in (2.6.14).

(ii) To prove the second assertion, put nv := v/W (x) and, for fixed ε > 0 andN < ∞, partition the sum H(−x) (see (2.6.15)) into three separate sums:

H(−x) =∑

n�nε

+∑

nε<n<nN

+∑

n�nN

. (2.6.19)

The first sum does not exceed nε. To estimate the terms in the second sum, wewill use the theorem on the convergence of Sn/σW (n) in distribution to the stablelaw Fβ,−1. For brevity, set

Q(t) := Fβ,−1([−t,∞)).

Then, for nε < n < nN ,

P(Sn � −x) = P(

Sn

σW (n)� − x

σW (n)

)∼ Q

(x

σW (n)

)as x → ∞, so that

∑nε<n<nN

P(Sn � −x) ∼∑

nε<n<nN

Q

(x

σW (n)

)∼

nN∫nε

Q

(x

σW (t)

)dt.

(2.6.20)Next we change the variables, setting x/σW (t) = u. Then, for any fixed u, wewill have

σW (t) = x/u, t ∼ W−1(x/u) ∼ W−1(x)u−β .

Since du(W−1(x)u−β) = −βu−β−1W−1(x) du, it is natural to expect that theintegral in (2.6.20) will be asymptotically equivalent to

βW−1(x)

uε∫uN

Q(u)u−β−1du, (2.6.21)

where

uv =x

σW (nv)=

x

σW (vW−1(x))∼ v−1/β.

(Since W (σW (t)) ∼ 1/t, by putting z := σW (W−1(x)) we obtain the relationW (z) ∼ W (x), and therefore z ∼ x.) Hence the integral in (2.6.21) is asymptot-ically equivalent to

ε−1/β∫N−1/β

Q(u)u−β−1du,

and by choosing small enough ε and 1/N it can be made arbitrarily close to the

Page 147: Asymptotic analysis of random walks

116 Random walks with jumps having no finite first moment

quantity

Q :=

∞∫0

Q(u)u−β−1du =1β

∞∫0

u−βdQ(u) =1β

E(−ζ)−β

(that the integrals converge at zero follows from Corollary 2.3.3).To make the transition from the integral in (2.6.20) to (2.6.21) more formal,

one should use integration by parts to compute

uN∫uε

Q(u) d(W−1(x/u)

).

Thus, by choosing appropriate values of ε and N , the sum of the first two termson the right-hand side of (2.6.19) multiplied by W (x) can be made arbitrarilyclose to βQ.

Next we will show that the third term on the right-hand side of the represen-tation (2.6.19) is o(W−1(x)) as N → ∞. For a fixed N and n � nN , we haveσW (n) � σW (NW−1(x)) ∼ N1/βx. Therefore, by Theorem 2.3.5,

P(Sn � −x) � c1nV (σW (n) − x) + exp{−c

(x + σ(n)σW (n)

)−δ}� c2nV (σW (n)) + exp

{−c

(x + σ(n)σW (n)

)−δ}.

Note that the first term on the right-hand side summed over all n � nN convergesto zero as x → ∞ (nN → ∞). So it remains to bound

∑n�nN

exp{−c

(x + σ(n)σW (n)

)−δ}∼

∞∫nN

exp{−c

(x + σ(t)σW (t)

)−δ}dt

= nN

∞∫1

exp{−c

(x + σ(nNu)σW (nNu)

)−δ}du.

Since 1/β − 1/α � 1/2β, by virtue of (2.6.16), for any ε1 ∈ (0, 1 − β/α) andall large enough N we have

σW (nNu) ∼ (Nu)1/βσW (W−1(x)) ∼ (Nu)1/βx,

σ(nNu) ∼ (Nu)1/ασ(W−1(x)) < (Nu)1/α xβ/α+ε1 ,

so that

A(u) :=x + σ(nNu)σW (nNu)

� 2(Nu)−1/β + (Nu)1/α−1/β xβ/α−1+ε1 � (Nu)−1/2β .

Page 148: Asymptotic analysis of random walks

2.6 The asymptotic behaviour of the probabilities P(Sn � x) 117

Hence

nN

∞∫1

e−cA(u)−δ

du � nN

∞∫1

e−c(Nu)δ/2β

du

= W−1(x)

∞∫N

e−cvδ/2β

dv = o(W−1(x))

as N → ∞. The corollary is proved.

Proof of Theorem 2.6.5. Return to the representation (2.6.1) and bound the term

P(GnBj) = P(Sn−1 + ξn � x, ξn � y) = P(Sn−1 � x − y)P(ξn � y)

+ P(ξn � x − Sn−1, Sn−1 < x − y)

= P(Sn−1 � x − y)P(ξn � y)

+ E[V (x − Sn−1); Sn−1 < x − y

]. (2.6.22)

First let |x| � n1/γ , y = n1/γ , γ ∈ (β, α). Then, by virtue of (2.6.6),

P(Sn−1 � x − y)P(ξn � y) � n(V (n1/γ)

)2(1 + o(1)).

The ‘power part’ of this expression is n1−2α/γ , where 1 − 2α/γ < −α/β for γ

close enough to β. So, for such γ,

P(Sn−1 � x − y)P(ξn � y) = o(V (σW (n))

). (2.6.23)

The second term on the right-hand side of (2.6.22) (in which we will replacen − 1 by n for simplicity) can be rewritten as

E[V (x − Sn); Sn < x − y

]= E1 + E2; (2.6.24)

the quantities Ei, i = 1, 2, will be defined and bounded in what follows.For a small fixed ε > 0, put

E1 := E[V (x − Sn); −εσW (n) � Sn < x − y

]. (2.6.25)

It is not difficult to see from Theorem 2.3.5 that

E1 � E[V (x − S∗);−εσW (n) � S∗ � x − y

], (2.6.26)

where S∗ is an r.v. which has the following distribution: at the point x− y < 0, ithas an atom of size

P(S∗ = x − y) = cnV (σW (n))

for a suitable c > 0, and on the interval (−εσW (n), x − y) it has density fn(t)which is the derivative of the second term on the right-hand side of (2.3.8) (withz replaced by −t). This means that

x−y∫−u

fn(t) dt = exp{−c

(u + σ(n)σW (n)

)−δ}− exp

{−c

(y − x + σ(n)

σW (n)

)−δ}

Page 149: Asymptotic analysis of random walks

118 Random walks with jumps having no finite first moment

and

P(x − y > Sn � −u)

� P(S∗ = x − y) +

x−y∫−u

fn(t) dt � P(x − y � S∗ � −u).

From here, using the same argument as that employed while bounding the inte-gral (2.3.20) (the fast decay of fn(t) as t → 0), we obtain that

E1 � V (y)P(S∗ = x − y) +

x−y∫−εσW (n)

V (x − t)fn(t) dt

� V (y)cnV (σW (n)) + V (x + εσW (n)) e−cε−δ

. (2.6.27)

Since x = o(σW (n)), the last term on the right-hand side of (2.6.27) is asymptot-ically equivalent to ε−αe−cε−δ

V (σW (n)) and, by choosing a suitable ε, it can bemade arbitrarily small compared with V (σW (n)). The first term on the right-handside of (2.6.27) is o(V (σW (n))) because nV (n1/γ) → 0 as γ < α.

Now consider

E2 := E[V (x − Sn); Sn < −εσW (n)

]. (2.6.28)

Since x = o(σW (n)), we obtain here that, for Sn = O(σW (n)),

V (x − Sn) ∼(− Sn

σW (n)

)−α

V (σW (n)).

The function (−t)−α is continuous and bounded on (−∞,−ε). Furthermore,taking into account the continuity of the limiting stable distribution Fβ,−1 of ther.v. ζ � 0 at the point −ε we obtain, as a consequence of the weak convergenceSn/σW (n) ⇒ ζ, the following relation:

E2 ∼ E[(−ζ)−α; ζ < −ε

]V (σW (n)). (2.6.29)

Since by Corollary 2.3.3

E[(−ζ)−α; ζ < −ε

]→ E(−ζ)−α < ∞ (2.6.30)

as ε → 0, we finally obtain (see (2.6.1), (2.6.22)–(2.6.29)) that

P(GnBj) ∼ E(−ζ)−αV (σW (n)) ∼ EV (x − ζσW (n)). (2.6.31)

Now it follows from (2.6.1) and Theorem 2.6.2 that

P(Sn � x) = nEV (x − ζσW (n))(1 + o(1))

+ O(e−nθ

)+ O

((nV (n1/γ)

)2), θ > 0, (2.6.32)

where(nV (n1/γ)

)2= o(nV (σW (n))

), so that for the respective exponents we

Page 150: Asymptotic analysis of random walks

2.6 The asymptotic behaviour of the probabilities P(Sn � x) 119

have 2(1 − α/γ) < 1 − α/β for a suitable γ ∈ (β, α). This proves (2.6.13) and(2.6.14) in the case |x| � n1/γ .

Now let x � n1/γ , γ ∈ (β, α) (and therefore nV (x) → 0), y = x/2 (r = 2).Then again we will have (2.6.22), where by virtue of (2.6.7)

P(Sn−1 � x − y)P(ξn � y) � cnV (x)2. (2.6.33)

One now obtains, cf. (2.6.23),

cnV 2(x) = o(

min{V (x), V (σW (n))}). (2.6.34)

Further, instead of (2.6.24) we will now make use of the representation

E[V (x − Sn); Sn < x − y

]= E0 + E1 + E2,

where, by Theorem 2.3.5,

E0 := E[V (x − Sn); 0 � Sn < x − y

]� V (y)P(Sn � 0)

� cnV (x)V (σW (n)) = o(min{V (x), V (σW (n))}

).

When considering the integrals E1 and E2 (defined in (2.6.25) and (2.6.28), re-spectively), we will distinguish between three possibilities:

(1) x = o(σW (n)),(2) x ∼ bσW (n), 0 < b < ∞,(3) σW (n) = o(x).

In the first case all the considerations in (2.6.26)–(2.6.30) remain valid (uponreplacing the quantity x−y by 0 in (2.6.25), (2.6.26)), so that we again obtain therelation (2.6.31) and then also (2.6.32), where the bounding terms, in accordancewith Theorem 2.6.5 and the new value y = x/2, should be replaced by

O((nV (x))2

)= o

(min{nV (x), nV (σW (n))}

).

Now let x ∼ bσW (n), 0 < b < ∞. Since the function (b − t)−α is con-tinuous and bounded on the whole half-line (−∞, 0), it follows from the weakconvergence Sn/σW (n) ⇒ ζ that

E1 + E2 = E[V (x − Sn); Sn � 0

]∼ E(b − ζ)−αV (σW (n)) ∼ EV (x − ζσW (n)).

By virtue of (2.6.33), (2.6.34), the same equivalence will hold for P(GnBj).In case (3), evidently

E0 + E1 + E2 ∼ V (x) ∼ EV (x − ζσW (n)),

and the same equivalence will also hold for P(GnBj). The subsequent transitionto (2.6.32) uses the argument that we have already employed above, the residualterms in (2.6.32) being replaced by O

((nV (x))2

). This completes the proof of

the theorem.

Page 151: Asymptotic analysis of random walks

120 Random walks with jumps having no finite first moment

Remark 2.6.8. In § 3.7 we will also present the so-called integro-local theoremson large deviations of Sn, i.e. assertions on the asymptotic behaviour of the prob-ability P(Sn ∈ [x, x + Δ)) as x → ∞, Δ = o(x).

2.7 The asymptotic behaviour of the probabilities P(Sn � x)

As before, we will again distinguish between the two cases mentioned at the be-ginning of the previous section. In the case

(1) α < 1, the tail F−(t) does not dominate F+(t), t > 0,

the asymptotic behaviour of P(Sn � x) was established in Theorem 2.6.1. Nowwe will consider the second case:

(2) β < 1, the tail F−(t) is ‘much heavier’ than F+(t), t > 0.

Theorem 2.7.1. Let condition [=, =] with β < min{1, α} be satisfied. Then thefollowing assertions hold true.

(i) For all n, as x → ∞,

P(Sn � x) ∼n∑

j=1

EV (x − ζσW (j)), (2.7.1)

where the r.v. ζ � 0, as before, has the stable distribution Fβ,−1 with pa-rameters (β,−1).

(ii) If x � σW (n) then, for all n, as x → ∞,

P(Sn � x) ∼ nV (x). (2.7.2)

(iii) If x → ∞, n → ∞ then when σW (n)/x � c > 0 or when σW (n)/x → 0slowly enough, one has

P(Sn � x) ∼ V (x)W (x)

C(β, α, σW (n)/x

), (2.7.3)

where

C(β, α, t) := βE

⎛⎝|ζ|−β

−tζ∫0

vβ−1(1 + v)−αdv

⎞⎠ , (2.7.4)

so that

P(S � x) ∼ V (x)W (x)

C(β, α,∞), (2.7.5)

where

C(β, α,∞) = βE∣∣ζ−β

∣∣B(β, α − β) (2.7.6)

and B(β, γ) :=∫ 1

0uβ−1(1 − u)γ−1du is the beta function.

Page 152: Asymptotic analysis of random walks

2.7 The asymptotic behaviour of the probabilities P(Sn � x) 121

Remark 2.7.2. As B(β, 0) = ∞, it appears that the asymptotics of P(S � x) inthe case α = β will be different from cV (x)/W (x). Since for β < α, as x → ∞,

∞∫x

V (t) dt

tW (t)∼ 1

α − β

V (x)W (x)

,

it is not unreasonable to conjecture that a general asymptotic representation forP(S � x) in the case β � α will have the form

P(S � x) ∼ c(β, α)

∞∫x

V (t) dt

tW (t).

Remark 2.7.3. Comparison with Theorem 2.6.5 shows that, along with (2.7.1),we can also write

P(Sn � x) ∼n∑

j=1

P(Sj � x)j

. (2.7.7)

It is not difficult to see that this relation will also remain true in the case whenthere exists Eξ1 = −a < 0 and the function V (t) = F+(t) is subexponential,since, in that case, as x → ∞,

P(Sn � x) ∼n∑

j=1

V (x + aj) ∼n∑

j=1

P(Sj � x)j

(for the first relation see [178, 51, 63, 66, 52]). Since the functions V (x), W (x) donot appear in (2.7.7), there arises the question whether the last relation holds underbroader conditions as well. In § 7.6 we will present assertions (Theorem 7.6.1 andits consequences) confirming the conjecture (2.7.7) in the case where n = ∞ andthe distribution F is subexponential.

Set η(x) := min{k : Sk � x}. Then

P(η(x) = n) = P(Sn � x) − P(Sn−1 � x),

and the relation (2.7.7) makes quite plausible the conjecture that

P(η(x) = n) ∼ P(Sn � x)n

(under some additional conditions, this asymptotic relation can be extracted from[63]). However, this last relation, like (2.7.7), cannot be universal. As shownin [37, 68], under broad assumptions, when Cramer’s condition is met the prob-abilities P(η(x) = n) and P(Sn � x) will have the same asymptotics (up to aconstant factor).

Proof of Theorem 2.7.1. The proof, like that of Theorem 2.6.5, will again be basedon a relation of the form (2.6.1) but with different Gn and Bj . Put

Gn := {Sn � x}

Page 153: Asymptotic analysis of random walks

122 Random walks with jumps having no finite first moment

and set

Bj := Bj(v) ={ξj < y + vj1/γ

}, γ ∈ (β, α), B =

n⋂j=1

Bj

(here, as in § 2.4, we assume that β < min{1, α}). Then, under condition [ · , =],we have

P(Gn) = P(GnB) +n∑

j=1

P(GnBj) + O

([ n∑j=1

V (y + vj1/γ)]2)

. (2.7.8)

A bound for P(GnB) is contained in Theorem 2.4.1, which, in particular, in-cludes the following:

Let condition [>, <] with β < min{1, α} be satisfied. Then, for a suitable v

and all n, as y → ∞,

P(GnB) � cx−(α−β)(r−ε), (2.7.9)

P(Sn � x) � cV (x) min{n, xβ+ε

}, (2.7.10)

where r = x/y and ε > 0 is an arbitrarily small fixed number.

A bound for the last term in (2.7.8) follows from the inequalityn∑

j=1

V(y + vj1/γ

)� nV (y),

which holds for all n, and the relations that follow below. which hold as n → ∞and in which we will put v = 1 for simplicity (the value of v can only influencea constant factor that will appear at some stage). Using the change of variablest1/γ = uy, we have

n∑j=1

V(y + j1/γ

)∼

n∫0

V(y + t1/γ

)dt

= γyγ

n1/γy−1∫0

V (y + uy)uγ−1du

∼ γyγV (y)

n1/γy−1∫0

(1 + u)−αuγ−1du

= yγV (y) b(γ − 1,−α, n1/γ/y), (2.7.11)

where

b(γ − 1,−α, t) := γ

t∫0

(1 + u)−αuγ−1du.

Page 154: Asymptotic analysis of random walks

2.7 The asymptotic behaviour of the probabilities P(Sn � x) 123

Since b(γ − 1,−α,∞) < ∞ and b(γ − 1,−α, t) ∼ tγ as t → 0, we obtain forthe last term in (2.7.8) the bound⎡⎣ n∑

j=1

V(y + vj1/γ

)⎤⎦2

� c [V (y) min{n, yγ}]2 . (2.7.12)

Now we bound the main terms in (2.7.8):

P(GnBj) = P(Sn � x; ξj � y + vj1/γ), (2.7.13)

where, for brevity, we will put yj := y + vj1/γ . To evaluate P(GnBj), note that,by virtue of (2.7.10) with y = rx, r = const ∈ (0, 1),

p(j, x) := P(Sj−1 � x, ξj � yj) � cV (x) min{j, yγ}V (yj). (2.7.14)

Further,

P(GnBj) = P(Sn � x, Sj−1 < x, ξj � yj) + O(p(j, x)), (2.7.15)

where

P(Sn � x, Sj−1 < x, ξj � yj)

= P(Sj−1 + ξj + S ∗n−j � x, Sj−1 < x, ξj � yj)

and the sums S ∗n−j are defined in the same way as the Sn−j and are independent

of the r.v.’s ξ1, . . . , ξj . Setting

Zj,n := Sj−1 + S ∗n−j (2.7.16)

and again using equalities of the form (2.7.15), we obtain

P(GnBj) = P(Zj,n + ξj � x, ξj � yj) + O(p(j, x)). (2.7.17)

Now we introduce the events

Cj := {Zj,n � x − y − vj1/γ}and represent {Zj,n + ξj � x, ξj � yj} as a sum of intersections of itself and theevents Cj and Cj . Since

{Zj,n + ξj � x; Cj} ⊂ {ξj � x − Zj,n � y + vj1/γ = yj},then

P(Zj,n + ξj � x, ξj � yj ; Cj) = P(Zj,n + ξj � x; Cj). (2.7.18)

We will consider the right-hand side of this equality (which will form the mainterm in P(GnBj)) later on. For the complement Cj , we have the followingbound:

P (j, x) := P(Zj,n + ξj � x, ξj � yj ; Cj)

� V (yj)P(Zj,n � x − y − vj1/γ), (2.7.19)

Page 155: Asymptotic analysis of random walks

124 Random walks with jumps having no finite first moment

where, by virtue of the bound (2.7.10), Theorem 2.6.5 and relations of the form(x − y)/2 = cx one has

P(Zj,n � x − y − vj1/γ

)� P

(Sj−1 � x − y

2− vj1/γ

)+ P

(Sn−j � x − y

2

)= O

(jV (x − vj1/γ + σW (j)

)+ O

(V (x) min{n, xγ})

= O(jV (x + j1/γ)

)+ O

(V (x) min{n, xγ}). (2.7.20)

Multiplying the right-hand side of the last relation by V (yj) (see (2.7.19)) yieldsan upper bound for P (j, x).

Now we turn to considering the main term in the probability P(GnBj), whichwe singled out in (2.7.18) as

P(Zj,n + ξj � x;Cj) = E[V (x − Zj,n);Zj,n < x − y − vj1/γ

](2.7.21)

and note that it has the same form as the left-hand side of (2.6.24) provided thatthere we replace Sn by Zj,n and y by yj . Furthermore, Zj,n possesses all theproperties of Sj−1 that will allow us to use here the reasoning from the proof ofTheorem 2.6.5 (see (2.6.24)–(2.6.31)). Indeed:

(1) Sj−1 � Zj,n � Sj−1 + S ∗∞, where S ∗

∞ � 0 is a proper r.v.;(2) therefore Zj,n has the same weak convergence property as Sj−1,

Zj,n

σW (j)⇒ ζ as j → ∞;

(3) Zj,n admits basically the same large deviation bounds (see (2.7.10) andTheorem 2.6.5) as were used in (2.6.24)–(2.6.31) (up to arbitrarily smallchanges in the exponents), so that

P(Zj,n � x) � P(Sj−1 > x/2 − j1/γ

)+ P

(Sn > x/2 + j1/γ

)� cjV (x + σW (j)) + cV (x + j1/γ) min

{n, (x + j1/γ)γ

}.

From this it follows that the computation of the integral (2.6.24) can be carriedover, without any substantial changes, to the computation of the integral (2.7.21).The reader could verify this by repeating the argument from the proof of Theo-rem 2.6.5 in the present situation. Therefore one can conclude that (see (2.6.31))

E[V (x − Zj,n);Zj,n < x − y − vj1/γ

] ∼ EV (x − ζσW (j)).

It remains to bound∑n

j=1 p(j, x) and∑n

j=1 P (j, x), where p(j, x) and P (j, x)were defined in (2.7.14) and (2.7.19). To estimate the first sum we observe that,cf. (2.7.11), (2.7.12),

n∑j=1

min{j, yγ}V (yj) � cV (y)[min{n, yγ}]2. (2.7.22)

Page 156: Asymptotic analysis of random walks

2.7 The asymptotic behaviour of the probabilities P(Sn � x) 125

Therefore (see (2.7.14))

p(x) :=n∑

j=1

p(j, x) � c[V (x) min(n, xγ)

]2. (2.7.23)

For P (x) :=∑n

j=1 P (j, x) we obtain in a similar way that, owing to (2.7.20),

P (x) = O([

V (x) min{n, xγ}]2) . (2.7.24)

Note that the order of magnitude of the previous bounds (2.7.12), (2.7.23) doesnot exceed that of P (x). Now taking into account the bound (2.7.9) we obtainthat, as x → ∞,

P(Sn � x) =n∑

j=1

EV (x − ζσW (j))(1 + o(1))

+ O(x−(α−β)(r−ε)

)+ O (P (x)) , (2.7.25)

where a bound for P (x) is given in (2.7.24). The main term here does not exceed(see (2.7.11))

cV (x) min{n, xβ′} (2.7.26)

for any β′ < β. Therefore, owing to (2.7.24) and the fact that xγV (x) → 0as x → ∞, the quotient of P (x) and the main term in (2.7.25) tends to zero.Summarizing, we obtain

P(Sn � x) =n∑

j=1

EV (x − ζσW (j))(1 + o(1)).

This proves (2.7.1). The relation (2.7.2) then follows in an obvious way.Now we will prove (2.7.3). For m < n we have that, as m → ∞,

Im,n :=n∑

j=m

EV (x − ζσW (j)) ∼n∫

m

EV (x − ζσW (u)) du.

The change of variables σW (u) = t (u = 1/W (t)) leads to the integral

−σW (n)∫

σW (m)

EV (x − ζt) dW (t)W 2(t)

. (2.7.27)

Since the increment of W (t) on the interval (t, t(1 + Δ)) for a small fixed Δ hasthe asymptotics −ΔβW (t) as t → ∞, and since the integrand in (2.7.27) doesnot change much over that interval, the above integral is asymptotically equivalentto

β

σW (n)∫σW (m)

EV (x − ζt) dt

tW (t)= β

σW (n)/x∫σW (m)/x

EV (x(1 − ζv)) dv

vW (vx)

Page 157: Asymptotic analysis of random walks

126 Random walks with jumps having no finite first moment

(in other words, we can act as if the function W (t) were differentiable withW ′(t) ∼ −βW (t)/t).

If v = const > 0 or v → 0 slowly enough as x → ∞ then

V (x(1 − ζv)) ∼ (1 − ζv)−αV (x), W (vx) ∼ v−βW (x).

Therefore when σW (m)/x → 0 (or mW (x) → 0) slowly enough, we have

Im,n ∼ βV (x)W (x)

σW (n)/x∫σW (m)/x

vβ−1E(1 − ζv)−αdv ∼ V (x)W (x)

C(β, α, σW (n)/x),

whereas I1,m ∼ mV (x) = o(V (x)/W (x)

). This proves (2.7.3). To prove the

remaining relations (2.7.5), (2.7.6) from the assertion of Theorem 2.7.1 one usesthe equality ∫ ∞

0

vβ−1(1 + v)−αdv = B(β, α − β).

The theorem is proved.

In concluding this chapter we note that a number of sections related to thepresent topics have been included in Chapter 3 because these sections relateequally to both chapters. Among them are:

(1) integro-local theorems on large deviations (§ 3.7);(2) uniform limit theorems for sums of random variables, acting on the whole

axis (§ 3.8);(3) iterated logarithm-type laws (§ 3.9).

Page 158: Asymptotic analysis of random walks

3

Random walks with jumps having finite mean andinfinite variance

In this chapter we will assume that the r.v. ξ has finite expectation and, moreover,that Eξ = 0. Along with the functionals Sn and Sn = maxk�n Sk of the randomwalk {Sk}, which were studied in Chapter 2, we will consider here the functionals

Sn(a) := maxk�n

(Sk − ak)

(in the last chapter, studying Sn(a) would have been meaningless since therethe asymptotics of P

(Sn(a) � x

)were essentially independent of a). We will

also study in this chapter a more general ‘boundary problem’ on the asymptoticbehaviour of the probability

P(maxk�n

(Sk − g(k)) � 0)

for a given ‘boundary’ {g(k)}, mink�n g(k) → ∞. Moreover, we will obtainuniform limit theorems (i.e. limit theorems acting on the whole real line) for Sn

and Sn, as well as analogues of the law of the iterated logarithm.

3.1 Upper bounds for the distribution of Sn

First consider the ‘basic case’ where the majorants V (t) and W (t) for the rightand left tails, respectively, have the indices −α and −β, α ∈ [1, 2), β ∈ (1, 2).

Recall the notation we used earlier (cf. (2.1.1), (2.1.7)),

Bj = {ξj � y}, B = B(0) =n⋂

j=1

Bj , P = P(Sn � x; B),

and also the convention that the ratio r = x/y � 1 remains bounded in all ourformulations.

Theorem 3.1.1. Let the conditions Eξ = 0, [<, <] with α ∈ [1, 2), β ∈ (1, 2)and

W (t) � cV (t) (3.1.1)

127

Page 159: Asymptotic analysis of random walks

128 Random walks with finite mean and infinite variance

be satisfied. Then the inequalities (2.2.1) from Theorem 2.2.1 and (2.2.3) fromCorollary 2.2.4 hold true, i.e. for all n

P � c1Π(y)r, r =x

y, Π(y) = nV (y), (3.1.2)

and

supn,x: Π(x)�v

P(Sn � x)Π(x)

� 1 + ε(v), ε(v) ↓ 0 as v ↓ 0. (3.1.3)

If (3.1.1) does not hold then (3.1.2) remains true for all n and y such that, forsome c2 < ∞,

nW

(y

| lnnV (y)|)

< c2. (3.1.4)

For (3.1.4) to hold it suffices that we have either

nW (y) < c3, | lnnV (y)| < [nW (y)]−1/β−ε (3.1.5)

or

nW

(y

ln y

)< c4, (3.1.6)

for some ε > 0 and c3, c4 < ∞.

Note that, for values of y such that, say, nW (y) > 1, nV (y) > 1, the in-equality (3.1.2) will, as a rule, be trivial. The constant c1 in (3.1.2), providedthat (3.1.1) holds, admits the representation c1 = (e/r)r + ε(Π(y)), ε(v) ↓ 0 asv ↓ 0. In the general case,

c1 =(

e

r

)r

+ ε1(Π(y)) + ε2

(nW

(y

| ln Π(y)|))

,

where εi(v) ↓ 0 as v ↓ 0 (cf. Theorem 2.2.1).If condition [<, <] is met but (3.1.1) is not then an analogue of Corollary 2.2.4

has the following form.

Corollary 3.1.2.

(i) The inequality

supn,x: Π�v, ΠW �1

P(Sn � x)Π

� 1 + ε(v) (3.1.7)

holds true with

Π = nV (x), ΠW = nW

(y

| lnnV (y)|)

, ε(v) ↓ 0 as v ↓ 0.

(ii) Put V (t) := max{V (t),W (t)}, Π := nV (x). Then

supn,x: b�v

P(Sn � x)

Π� 1 + ε(v). (3.1.8)

Page 160: Asymptotic analysis of random walks

3.1 Upper bounds for the distribution of Sn 129

Proof. The proof of the first assertion repeats, with obvious amendments, theargument from the proof of Corollary 2.2.4. Assertion (ii) follows from (3.1.3) ifone replaces both majorants in condition [<, <] by V .

If one puts S n := mink�n Sk then it clearly follows from (3.1.8) that

supn,x: b�v

max{P(Sn � x), P(S n < −x)}Π

� 1 + ε(v). (3.1.9)

Let Sn := maxk�n |Sk| = max{Sn,−S n}. Theorem 3.1.1 implies the fol-lowing result as well.

Corollary 3.1.3. If the condition [<, <] is satisfied then

supn,x: b�v

P(Sn � x)n(V (x) + W (x))

� 1 + ε(v). (3.1.10)

The quantity ε(v) in the relation (3.1.10) has the same meaning as in Theo-rem 3.1.1 and Corollary 3.1.2.

Proof of Corollary 3.1.3. We will make use of the inequality

P(Sn � x) � P(Sn � x) + P(S n � −x).

If c1V (t) � W (t) � c2V (t) for some 0 < c1 < c2 < ∞ then (3.1.10) followsfrom (3.1.3). If V (t) � W (t) then, fixing a δ > 0 and taking as a majorantfor the right tail the function δW (t) > V (t) instead of V (t), we will obtainfrom what has already been said that (3.1.10) holds true with V (x) + W (x) re-placed by W (x)(1 + δ). But since δ > 0 is arbitrary and V (x) = o(W (x))then W (x)(1 + δ) can be replaced in (3.1.10) by W (x) or by V (x) + W (x).This proves (3.1.10). One can argue in the same way when W (t) � V (t). Thecorollary is proved.

Corollary 3.1.4. Under the conditions [<, <] and n � xγ , 1 < γ < min{α, β},the inequalities (3.1.2), (3.1.3) always hold true.

The corollary is obvious since, as y → ∞, we have

yγV (y) → 0, yγW

(y

ln y

)→ 0.

Without loss of generality one can assume that y > e, so that the inequality (3.1.6)and therefore also (3.1.4), (3.1.2) hold true.

Remark 3.1.5. Conditions (3.1.4)–(3.1.6) are essential for the inequalities (3.1.2),(3.1.3), because, when W (y) � V (y) and the left tail is regularly varying, thedeviations y for which nW (y) > 1 will fall (even when nV (y) → 0) into the‘normal deviations zone’, where the distributions of the scaled sums Sn can beapproximated by the limiting stable law.

Page 161: Asymptotic analysis of random walks

130 Random walks with finite mean and infinite variance

Note also that the inequality

y > σW (n) ln σW (n), σW (n) = W (−1)(1/n),

is sufficient for (3.1.6) to be met. Indeed, for such a y one has y/ln y > c1σW (n),

nW

(y

ln y

)< c2nW (σW (n)) � c2.

Now consider the general case, when α and β can each assume any valuefrom [1,∞) and Eξ = 0. In other words, in comparison with Theorem 3.1.1,one also admits now the values

α � 2, β = 1, β � 2.

The formulation of the corresponding assertion will be more cumbersome andwill require the introduction of some new notation. It is important to note thatthe case Eξ2 < ∞ (when necessarily α � 2, β � 2) will be considered in moredetail in Chapter 4, so that the general assertion below will be devoted mostly tothe ‘threshold’ values of α and β, which are equal to 1 and 2 respectively. Thesevalues do not play any significant role in the subsequent exposition.

Put

W :=

∞Z0

tW (t) dt, V :=

∞Z0

tV (t) dt (3.1.11)

and introduce the following unboundedly increasing s.v.f.’s:

L1(t) :=1

tW (t)

∞Zt

W (u) du ≡ W I(t)

tW (t)if β = 1;

L2(t) :=

tZ0

∞Zu

W (v) dv

!du ≡

tZ0

W I(u) du if β = 2, W = ∞;

L3(t) :=1

t2V (t)

tZ0

uV (u)du if α = 2, V = ∞.

Further, put

Wβ(t) :=

8>>><>>>:W (t)Lβ(t) if W = ∞, β = 1, 2,

Γ(2 − β)

β − 1W (t) if β ∈ (1, 2),

Wt−2 if W < ∞, β � 2,

(3.1.12)

and

Vα(t) :=

8>>><>>>:1

2 − αV (t) if α ∈ [1, 2),

V (t)L3(t) if V = ∞, α = 2,

V t−2 if V < ∞, α � 2.

(3.1.13)

Page 162: Asymptotic analysis of random walks

3.1 Upper bounds for the distribution of Sn 131

Along with the product Π = nV (x), the quantity

Π∗ := n

»Vα

„y

| ln Π|«

+ Wβ

„y

| ln Π|«–

(3.1.14)

will also play an important role in the cases we are dealing with here.

Theorem 3.1.6. Let the conditions [<, <] with α � 1, β � 1 and Eξ = 0 be satisfied.If Π∗ < c1 then

P � c[nV (y)]r, r =x

y. (3.1.15)

For the boundedness of nWβ(y/| lnΠ|) it suffices that, for some c < ∞

nW

„y

ln y

«Lβ

„y

ln y

«< c if β = 1, β = 2, W = ∞, (3.1.16)

nW

„y

ln y

«< c if β ∈ (1, 2), (3.1.17)

y > cn1/2 ln(n + 1) if W < ∞, β � 2. (3.1.18)

For the boundedness of nVα(y/| ln Π|) it suffices that

Π < c if α < 2, (3.1.19)

ΠL1+ε3 (y) < c, ε > 0 if α = 2, V = ∞, (3.1.20)

y > cn1/2 ln(n + 1) if V < ∞, α � 2. (3.1.21)

All the above conditions are satisfied if

y > n1/θ+ε (3.1.22)

for some ε > 0, where θ = min{α, β, 2}.

The following analogue of Corollary 3.1.2 holds true.

Corollary 3.1.7. If the conditions [<, <] with α � 1, β � 1 and Eξ = 0 are met then

supn,x: Π∗�v

P(Sn � x)

Π� 1 + ε(v),

where ε(v) ↓ 0 as v ↓ 0.

Proof. The proof of the corollary repeats, with some obvious changes, the proof of Corol-lary 2.2.4.

There are also exist analogues of the inequalities (3.1.8)–(3.1.10).

Example 3.1.8. Consider now an example with the threshold value β = 1, where, forsome t0 > 0,

W (t) =1

t ln2 tfor t > t0.

Here

W1(t) = W (t)L1(t) =1

t

∞Zt

W (u) du =1

t ln t,

Page 163: Asymptotic analysis of random walks

132 Random walks with finite mean and infinite variance

and the quantity nW1(y/ln y) that appears in (3.1.16) will be given by

nW1

„y

ln y

«=

n ln y

y ln(y/ln y)< c1 for y > cn.

Since in the case of finite Eξ one always has nV (n) → 0 (i.e. Π → 0 for x � cn)as n → ∞, the assertion (3.1.15) will hold in this example for all y > cn.

Proof. The proof of Theorem 3.1.1 mostly follows the argument used to proveTheorem 2.2.1. To use the basic inequality (2.1.8), we again have to obtain abound for R(μ, y) in (2.2.7), where, under the present circumstances,

I1 =

0∫−∞

eμt F(dt) = F−(0) + μ

0∫−∞

tF(dt) +

0∫−∞

(eμt − 1 − μt)F(dt).

Here0∫

−∞(eμt − 1 − μt)F(dt) = μ

∞∫0

(1 − e−μt)F−(t) dt.

Since 1 − e−μt � 0 for μ, t > 0, the last integral does not exceed

μ

∞∫0

W (t)(1 − e−μt) dt = μ2

∞∫0

W I(t)e−μt dt,

where, by virtue of Theorem 1.1.4(iv), for β > 1

W I(t) =

∞∫t

W (u) du ∼ tW (t)β − 1

as t → ∞, (3.1.23)

and therefore, by Theorem 1.1.5, as μ → 0,

μ2

∞∫0

W I(t)e−μt dt ∼ μW I

(1μ

) ∞∫0

e−uu−β+1 du ∼ W (1/μ)β − 1

Γ(2 − β).

Thus,

I1 � F−(0) + μ

0∫−∞

tF(dt) +W (1/μ)β − 1

Γ(2 − β)(1 + o(1)). (3.1.24)

With the same choice of M = 2α/μ as in the proof of Theorem 2.2.1, we havethe following bound for the integral I2:

I2 ≡M∫0

eμt F(dt) � F+(0) + μ

∞∫0

tF(dt) +

M∫0

(eμt − 1 − μt)F(dt),

Page 164: Asymptotic analysis of random walks

3.1 Upper bounds for the distribution of Sn 133

where, by the inequality eμt − 1 � μt(e2α − 1)/2α which holds for t � M ,

M∫0

(eμt − 1 − μt)F(dt) � μ

M∫0

(eμt − 1)V (t) dt � μ2

2α(e2α − 1)V ∗(M),

V ∗(t) : =

t∫0

uV (u) du ∼ V (t)t2

2 − α. (3.1.25)

Therefore

I2 � F+(0) + μ

∞∫0

tF(dt) +2α(e2α − 1)

2 − αV (2α/μ), (3.1.26)

and hence

I1 + I2 � 1 + c1W (1/μ) + c2V (1/μ). (3.1.27)

For I3, which is given by (2.2.11), (2.2.13), one can obtain the same bound asin Theorem 2.2.1. Thus, under the conditions of the present section, we obtain

R(μ, y) � 1 + c1W (1/μ) + c2V (1/μ) + V (y)eμy(1 + ε(λ)), (3.1.28)

where λ = μy (cf. (2.2.16)). The choice of the ‘optimal’

μ =1y

lnr

Π(y)

is also analogous to that in Theorem 2.2.1 (see (2.2.18)). Therefore, cf. (2.2.19),we obtain

lnP � −xμ + c1nW (1/μ) + c2nV (1/μ) + eλΠ(y)(1 + ε(λ))

� −r lnr

Π(y)+ r + ε1(Π(y)) + c3nW

(y

| ln Π(y)|)

, (3.1.29)

where ε1(v) ↓ 0 as v ↓ 0. If (3.1.1) holds then the last term on the right-hand sideof (3.1.29) could be omitted (one can include it in ε1(Π(y))). If (3.1.4) holds then

lnP � −r lnr

Π(y)+ c.

This proves (3.1.2), and hence also (3.1.3).Now we will verify that conditions (3.1.5), (3.1.6) are sufficient for (3.1.4).

Let (3.1.5) be satisfied. Then, for any ε > 0 and all large enough y,

nW

(y

| lnnV (y)|)

� nW (y)| lnnV (y)|β+ε/2

� [nW (y)]1−(β+ε/2)/(β+ε) = [nW (y)]ε/[2(β+ε)] < c.

If (3.1.6) is met then, owing to the relation

| lnnV (y)| = − lnn − lnV (y) < − lnV (y) < (α + 1) ln y, (3.1.30)

Page 165: Asymptotic analysis of random walks

134 Random walks with finite mean and infinite variance

which holds for large enough y, we will have

nW

(y

| lnnV (y)|)

� c1nW

(y

ln y

)< c2.

The inequality (3.1.3) is proved in the same way as Corollary 2.2.4. The theo-rem is proved.

Proof of Theorem 3.1.6. First consider the case β = 1. Here, owing to Theorem 1.1.4(iv),instead of (3.1.23) we have

W I(t) = tW (t)L1(t), (3.1.31)

where L1(t) → ∞ as t → ∞ and is an s.v.f. So instead of (3.1.24) we will have

I1 � F−(0) + μ

0Z−∞

tF(dt) + W (1/μ)L1(1/μ). (3.1.32)

If β � 2 then (3.1.23) remains valid and, as μ → 0,

μ2

∞Z0

W I(t)e−μtdt ∼j

W (1/μ)L1(1/μ) if W = ∞,μ2W if W < ∞,

(3.1.33)

where W � ∞ is defined in (3.1.11); if W = ∞ then

L2(t) :=

tZ0

W I(u) du → ∞, t → ∞, (3.1.34)

is an s.v.f. (see Theorem 1.1.4(iv)). Now we make use of the notation Wβ(t) from (3.1.12).Then (3.1.32) will hold in all the cases considered above, provided that we replace theterm W (1/μ)L1(1/μ) on the right-hand side of (3.1.32) by Wβ(1/μ); in the case β > 2,in accordance with (3.1.12) we put this term equal to W2(1/μ) = Wμ2.

We have a similar situation with the parameter α. In the case α � 2, instead of (3.1.25)we will have

V ∗(t) =

tZ0

uV (u) du =

jt2V (t)L3(t) if V = ∞,V if V < ∞,

where L3(t) is an unboundedly increasing s.v.f. Using the notation Vα(t) introducedin (3.1.13), it can seen that, for all α � 1, we will have instead of (3.1.26), (3.1.27) theinequalities

I2 � F+(0) + μ

∞Z0

tF(dt) + Vα(1/μ), I1 + I2 � 1 + Wβ(1/μ) + Vα(1/μ).

For Π(y) = nV (y) and large enough y, we will have (cf. (2.2.19))

ln P � −xμ + 2n [Wβ(1/μ) + Vα(1/μ)] + Π(y)eλ(1 + ε(λ)).

Again putting

μ =1

yln

r

Π(y),

Page 166: Asymptotic analysis of random walks

3.1 Upper bounds for the distribution of Sn 135

we will obtain the previous result provided that (see (3.1.14))

Π∗ = nˆWβ(1/μ) + Vα(1/μ)

˜� c

as y → ∞. That is, for (3.1.15) to hold we need

nWβ

„y

| ln Π(y)|«

< c, nVα

„y

| ln Π(y)|«

< c. (3.1.35)

Now we turn to conditions sufficient for (3.1.35). The cases α ∈ [1, 2), β ∈ (1, 2) havealready been dealt with in Theorem 3.1.1.

In the cases β = 1, 2 and W = ∞, we have

nWβ

„y

| ln Π(y)|«

= nW

„y

| ln Π(y)|«

„y

| ln Π(y)|«

.

Since | ln Π(y)| < (α+1) ln y for sufficiently large y (see (3.1.30)), the first of the requiredinequalities (3.1.35) will hold if

nW

„y

ln y

«Lβ

„y

ln y

«< c1.

If W < ∞ then the first of the inequalities (3.1.35) will hold if

y−2Wn| ln Π(y)|2 < c.

It suffices to take y > n1/2 ln(n + 1). Then by (3.1.30)

n| ln Π(y)|2y2

<n(α + 1)2 ln2 y

y2< (α + 1)2

n ln2(n ln2(n + 1))

2n ln2 n� c1

for all n � 1. The sufficiency of conditions (3.1.16)–(3.1.18) is proved.The case α � 2 is dealt with in a similar way. When α = 2 and V < ∞, it again

suffices to take y > n1/2 ln(n + 1). If α = 2, V = ∞ then (cf. (2.2.20))

nV2

„y

| ln Π(y)|«

� c1Π(y)L3(y)| ln Π(y)|2+δ < L−δ/23 (y) → 0

when Π(y) < L−2−δ3 (y). As was shown in (2.2.21) and (2.2.22), the last inequality holds

when

s(y, n) =y

σ(n)> L1+ε

3 (σ(n))

or when

Π(ε) ≡ ΠL1+ε3 (x) � 1.

That conditions (3.1.19)–(3.1.21) are sufficient is proved.The sufficiency of condition (3.1.22) can be verified in an obvious way. The theorem is

proved.

In concluding this section, we will obtain bounds for the moments of the r.v.’s

Sn = maxk�n

|Sk| = max{Sn, |S n|}, where S n = mink�n

Sk.

Recall the notation V (t) = max{V (t),W (t)}, Π = nV (x), and put

σ(n) := V (−1)(1/n).

Page 167: Asymptotic analysis of random walks

136 Random walks with finite mean and infinite variance

The functions V (t), σ(n) are obviously regularly varying together with V (t)and W (t). It is also clear that, owing to (3.1.10),

supx: b�v

P(Sn � x)

Π� 2 + ε(v), ε(v) ↓ 0 as v ↓ 0. (3.1.36)

Corollary 3.1.9. Let condition [<, <] be satisfied, and let g(t) be an r.v.f. ofindex γ ∈ (0,min{α, β}). Then

Eg(Sn) � cg(σ(n)). (3.1.37)

Proof. We can assume without loss of generality that g is a differentiable function(otherwise instead of g(t) we could consider the function g(t) =

∫ t+1

tg(u)du,

which is asymptotically equivalent to it). Therefore, owing to the convergenceg(t)V (t) → 0 as t → ∞ we have

Eg(Sn) =

∞∫0

g(t)P(Sn ∈ dt) � c1 +

∞∫0

g′(t)P(Sn � t) dt. (3.1.38)

Further, the function V (t) is an r.v.f. (together with V (t) and W (t)), and hence itadmits a representation of the form V (t) = t−ρL(t), where ρ = min{α, β} andL is an s.v.f. Now consider, along with V (t), the differentiable function

V (t) := ρ

∞∫t

u−ρ−1L(u)du ∼ t−ρL(t) = V (t).

Clearly σ(n) := V (−1)(1/n) ∼ σ(n) and, along with (3.1.36), we also have

P(Sn > x)

nV (x)� c2 for x � σ(n) = σ(n)(1 + o(1)). (3.1.39)

Hence, by virtue of Theorem 1.1.4(iv) and (3.1.39), for the integral on the finalright-hand side of (3.1.38) we obtain

eσ(n)∫0

g′(t)P(Sn � t) dt �eσ(n)∫0

g′(t) dt � g(σ(n)

)and

∞∫eσ(n)

g′(t)P(Sn � t) dt � c2n

∞∫eσ(n)

g′(t)V (t) dt

= c2n

∞∫eσ(n)

g(t)tρ−1L(t) dt − g(σ(n)

)V(σ(n)

)]

∼ c3ng(σ(n)

)V(σ(n)

) ∼ c3g(σ(n)

) ∼ c3g(σ(n)

).

Together with (3.1.38) this establishes (3.1.37). The corollary is proved.

Page 168: Asymptotic analysis of random walks

3.2 Upper bounds for the distribution of Sn(a), a > 0 137

3.2 Upper bounds for the distribution of Sn(a), a > 0

In this section we will derive upper bounds for the probabilities

P(Sn(a) � x) and P(Sn(a) � x;B(v)),

where, as before, Eξ = 0,

Sn(a) = maxk�n

(Sk − ak), a > 0, B(v) =n⋂

j=1

Bj(v).

Here we will put g(j) = aj, so that

Bj(v) = {ξj < y + vaj}, v > 0.

Clearly, Sn(a) is nothing other than the value of the maximum S′n in the case

where the summands ξ′j := ξj − a have a negative mean −a. This interpretationof Sn(a) does not exclude situations with small values of a. To make formulationsfor such situations precise, we will introduce the triangular array scheme, wherethe distribution of ξ depends on n (the approach used when studying so-calledtransitional phenomena; see Chapter 12). Note, however, that the value of n inthe forthcoming assertions can be fixed or infinite. So it is often more convenientto assume that the distribution of ξ (and therefore also the tails V (t) = Va(t)and W (t) = Wa(t)) will depend on some varying parameter that, when a → 0,could be identified with a. With regard to Va and Wa we will assume that a ratherstringent condition on the character of their dependence on a is met. Namely, wewill assume that the following uniformity condition is satisfied.

[U] There exist tails V0, W0, belonging to the same classes as those to whichthe tails V and W belonged (we retain the same exponents −α and −β), suchthat

supa

∣∣∣∣Va(t)V0(t)

− 1∣∣∣∣→ 0, sup

a

∣∣∣∣Wa(t)W0(t)

− 1∣∣∣∣→ 0 as t → ∞.

This condition will be substantially broadened in Chapter 12.It is not difficult to see that all the bounds obtained in this chapter so far will

remain, under condition [U], true for the triangular array scheme as well. Thereason is that all the arguments and asymptotic analyses that we used for func-tionals of V = Va, W = Wa can be replaced, introducing a small error ε, by thesame reasoning for the same functionals of the fixed functions V0(t), W0(t) fort � tε with a suitable tε.

In what follows, we will still be considering the case of centred r.v.’s ξ, sothat Eξ = 0 while the tails V and W can, generally speaking, depend on theparameter a → 0 or another parameter, provided that condition [U] is satisfied.The index a indicating this dependence will be omitted for brevity.

Below we will need bounds for the probabilities that the crossing of the bound-ary x + ak occurred after a given time. The first crossing time is given by the

Page 169: Asymptotic analysis of random walks

138 Random walks with finite mean and infinite variance

r.v.

η(x) := mink

{k : Sk − ak � x}.

Put

m := min{n, x/a}, r = x/y.

To make our notation for conditions on the range of values of n and x morecompact, in what follows we will be using condition [Q], which is satisfied whenat least one of the following two conditions is met:

[Q1]

W (t) � cV (t), x → ∞ and nV (x) → 0,

or

[Q2]

x → ∞ and nV( x

lnx

)< c, where V (t) = max{V (t),W (t)},

so that

[Q] = [Q1] ∪ [Q2].

Theorem 3.2.1.

(i) Let conditions [<, <] and [U] with α ∈ (1, 2) and also condition [Q] with n

replaced by m be satisfied. Then, for v � 1/4r, uniformly in a ∈ (0, a0) forany fixed a0 > 0,

P(Sn(a) � x; B(v)) � c[mV (x)]r0 , (3.2.1)

where r0 = r/(1 + vr) � 4r/5, r > 5/4.If a � a0 for a fixed a0 > 0 and mV (x) → 0 then, without loss of

generality, one can assume that condition [Q1] is met with n replaced in itby m.

The inequality (3.2.1) remains true for any a without assuming [Q], pro-vided that V (x) on the right-hand side of (3.2.1) is replaced by V (x).

(ii) Under the conditions of part (i),

P(Sn(a) � x) � cmV (x). (3.2.2)

If, under the conditions of part (i), condition [Q] is met with n = n1 :=�x/a then, for bounded values of t and all large enough x,

P(∞ > η(x) � xt

a

)� cxV (x)

at1−α, (3.2.3)

where the constant c can be found explicitly.If t → ∞ together with x then the inequality (3.2.3) remains true if the

exponent 1 − α is replaced in it by 1 − α + ε for any fixed ε > 0.

Page 170: Asymptotic analysis of random walks

3.2 Upper bounds for the distribution of Sn(a), a > 0 139

The assertions stated in part (i) after (3.2.1) hold true for the inequali-ties (3.2.2), (3.2.3) as well.

Note that if a � a0 = const > 0, β > 1 then condition [Q2] with n replacedby m is always satisfied.

Proof. For n � x/a the assertion follows from Theorem 3.1.1 and Corollary 3.1.4.Now let n > x/a. Without loss of generality, we can assume that n1 := x/a isan integer. Then, for

n0 := 0, nk := 2k−1n1 = 2k−1x/a, k = 1, 2, . . . ,

we have

P(Sn(a) � x; B(v)) �∞∑

k=0

pk, pk := P(η(x) ∈ (nk, nk+1]; B(v)

).

(3.2.4)Put

xk := ank = x2k−1, yk := y + avnk = y + xv2k−1.

For any k such that nk � n, we have

B(v) ⊂nk⋂j=1

{ξj < y + vank} =nk⋂j=1

{ξj < yk} =: B(k)

and therefore, by Theorem 3.1.1 with n1V (x) = xa−1 V (x) � 1,

p0 = P(Sn1(a) � x; B(v)) � P(Sn1 � x;B(1)) � c[n1V (x)

]r0,

where

r0 :=x1

y1≡ r

1 + vr.

Similarly, for k � 1 (that condition [Q] is met for nk = n12k−1, xk = x2k−1

follows from the fact that it is satisfied for n1 and x1 = x), we obtain

pk = P(η(x) ∈ (nk, nk+1]; B(v)

)� P

(Snk

� ank

2; B(k)

)+ P

(Snk

<ank

2

)P(

Snk� x +

ank

2;

nk⋂j=1

{ξj < yk+1})

= P(Snk

� x2k−2; B(k))

+ P(

Snk� x(1 + 2k−2);

nk⋂j=1

{ξj � yk+1})

� c1

[nkV (x2k−2)

]rk + c2

[nkV (x(1 + 2k−2))

]r′k , (3.2.5)

Page 171: Asymptotic analysis of random walks

140 Random walks with finite mean and infinite variance

where

rk :=x2k−2

yk=

r2k−2

1 + vr2k−1↑ 1

2v, (3.2.6)

r′k :=x(1 + 2k−2)

yk+1=

r(1 + 2k−2)1 + vr2k

↑ 14v

(3.2.7)

as k → ∞, v � 1/4r. Note that here

rk � r0 =r

1 + vr, r′k � r′1 =

3r

2(1 + 2vr)� r, k � 1, v � 1

4r.

So min{rk, r′k} = r0 = r/(1 + vr) � 4r/5 for v � 1/4r. Summarizing, weobtain that, for 0 < ε < α − 1, v � 1/4r and all k, we have

pk � c

[x2k

aV (x2k)

]r0

� c1

[xa

V (x)2−(α−1−ε)k]r0

.

Hence from (3.2.4)

P(Sn(a) � x; B(v)

)�

∞∑k=0

pk � c2

[xV (x)

a

]r0

, r0 > 1, r >54.

This proves the inequality (3.2.1).Now we will show that when a � a0 > 0 one can assume, without loss of

generality, that the condition W (t) � cV (t) is satisfied. Indeed, for h > 0introduce

(h)ξj := max{−h, ξj} + ah, ah := E(ξj + h; ξj � −h) < 0,

which are centred versions of the r.v.’s ξj ‘truncated’ at the level −h, h > 0,and endow the notation Sn(a), corresponding to the r.v.’s (h)ξj , with a left super-script (h). Then, clearly,

Sn(a) � (h)Sn(a + ah),

where ah → 0 as h → ∞ and all the conditions of the form [<, <], W (t) <

cV (t) are satisfied for (h)ξj . This gives us the required bound for P(Sn(a) � x)with a ‘slightly diminished’ value of a.

The last assertion of part (i) of the theorem follows in an obvious way from theabove argument and Corollary 3.1.2.

The proof of part (ii) of the theorem follows the same avenue. Using an argu-ment similar to that above, we find that, for n � x/a, k � 1,

P(η(x) ∈ (nk, nk+1]

)� P

(Snk

� ank

2

)+ P

(Snk

<ank

2

)P(Snk

� x +ank

2

)� P

(Snk

� x2k−2)

+ P(Snk

� x(1 + 2k−2))

� cnkV (x2k);

Page 172: Asymptotic analysis of random walks

3.3 Lower bounds for the distribution of Sn 141

from this it follows that

P(∞ > η(x) � nk) = P(∞ > η(x) � x2k−1

a

)� c1

x2k−1

aV (x2k).

For a fixed t, this yields

P(∞ > η(x) � xt

a

)� c2

xV (x)a

t1−α.

If t → ∞ then, owing to the properties of r.v.f.’s, one can replace t1−α in thisinequality by t1−α+ε for any fixed ε > 0. The theorem is proved.

3.3 Lower bounds for the distribution of Sn

Lower bounds for the distributions of the sums Sn are based, as in § 2.5, on thegeneral result of Theorem 2.5.1 and are similar to the bounds from Theorem 2.5.2.Recall that

V (t) = max{V (t),W (t)}, σ(n) = V (−1)(1/n), α = min{α, β}.Theorem 3.3.1. Let condition [<, <] with β ∈ (1, 2) be satisfied and let Eξ = 0.Then, for

y = x + uσ(n − 1)

and any fixed δ > 0, we have

P(Sn � x) � nF+(y)(

1 − cu−α+δ − n − 12

F+(y))

. (3.3.1)

If, moreover, condition [ · , ≶] is satisfied (see (2.5.1)), x � σ(n) then

P(Sn � x) � nV (x)(1 + o(1)). (3.3.2)

Proof. By virtue of Theorem 2.5.1,

P(Sn � x) � nF+(y)(

1 − Qn−1(u) − n − 12

F+(y))

, (3.3.3)

where y = x + uK(n − 1) and Qn(u) = P(Sn/K(n) < −u

). Corollary 3.1.2

implies that

P(Sn < −x) � nV (x)[1 + o(nV (x))

]. (3.3.4)

Put K(n) := σ(n). Then, for any fixed δ > 0 and all large enough t,

Qn(u) � cnV(uV (−1)(1/n)

)� cu−α+δ.

This proves (3.3.1).If x = sσ(n), s → ∞, u → ∞, u = o(s) then x ∼ y. Therefore, under

condition [ · , ≶], we obtain nF+(y) � nV (x)(1 + o(1)), nF+(y) → 0. Thisestablishes (3.3.2). The theorem is proved.

Page 173: Asymptotic analysis of random walks

142 Random walks with finite mean and infinite variance

One can also give uniform versions of the inequalities stated in Theorem 3.3.1.For instance, the following assertion holds true. Recall that condition [Rα,ρ] (see§§ 1.5 and 2.5) means that F (t) is an r.v.f. at infinity with index −α, α ∈ (0, 2),and there exists the limit

limt→∞

F+(t)F (t)

= ρ+.

Corollary 3.3.2. Let condition [Rα,ρ] be satisifed, α ∈ (1, 2), ρ+ > 0. Then, forx = sσ(n), Π = Π(x) = nV (x),

infx: s�t

P(Sn � x)Π

� 1 − ε(t), (3.3.5)

ε(t) ↓ 0 as t ↑ ∞.

The corollary follows in an obvious way from Theorem 3.3.1.Lower bounds for P(Sn(a) � x) are contained in Theorem 4.3.3.

3.4 Asymptotics of P(Sn � x) and its refinements

Some assertions on the first-order asymptotics of P(Sn � x) follow immediatelyfrom the upper and lower bounds presented above.

Theorem 3.4.1. Let Eξ = 0, α ∈ (1, 2) and condition [<, =] be satisfied.If W (t) � cV (t) then there exists a function ε(t) ↓ 0 as t ↑ ∞ such that, forx = sσ(n),

supx: s�t

∣∣∣∣P(Sn � x)Π

− 1∣∣∣∣ � ε(t), (3.4.1)

supx: s�t

∣∣∣∣P(Sn � x)Π

− 1∣∣∣∣ � ε(t). (3.4.2)

If W (t) � cV (t) but nW (x/lnx) < c then the asymptotic equivalence rela-tions

P(Sn � x) ∼ P(Sn � x) ∼ nV (x) as x → ∞

remain valid.

In the latter case it is more difficult to determine a parameter for which unifor-mity would hold. There is no doubt, however, that uniform analogues of (3.4.1),(3.4.2) will remain true if nW (x/lnx) < v ↓ 0.

Proof. The proof follows in an obvious way from Theorems 3.1.1 and 3.3.1. Onejust has to note that each of the two pairs of relations W (t) � cV (t), x � σ(n)and W (t) � V (t), nW (x/lnx) < c implies that x � σ(n) = V (−1)(1/n).

Page 174: Asymptotic analysis of random walks

3.4 Asymptotics of P(Sn � x) and its refinements 143

Below we will extend the above assertions to more general situations (this couldalready be partially done with the help of the bounds from §§ 3.2 and 3.3). More-over, we will also obtain refinements of these assertions, i.e. estimates of the rateof convergence of the difference

P(Sn � x)nV (x)

− 1

to zero and asymptotic expansions for P(Sn � x). Later on, analogous resultswill be obtained for P(Sn � x) and P(Sn(a) � x) as well. The need for deriv-ing second- and higher-order asymptotics arises from the fact that the first-orderasymptotics (the main terms of the asymptotics) often provide not very preciseapproximations to the probabilities under consideration (to some extent, this willbe illustrated by numerical examples below). The very fact that the same functionnV (x) appears in Theorem 3.4.1 as an approximation to the substantially differentprobabilities P(Sn � x) < P(Sn � x) already indicates that the quality of thisapproximation cannot be very good. To this one should add that the same approx-imation is valid for the probability P(ξn � x) as well, where ξn = maxj�n ξj .Indeed,

P(ξn � x) = 1 − (1 − F+(x))n

= nF+(x) − n(n − 1)2

F+(x)2 + O((nF+(x))2

),

so that if F+(x) = V (x) then

P(ξn � x) = nV (x) + O((nV (x))2

), (3.4.3)

and we obtain the same first-order approximation as in (3.4.1), (3.4.2).We will see below that, although nV (x) is a lower-quality approximation to the

probabilities of the form P(Sn � x) and P(Sn � x) than it is to P(ξn � x), itcan be substantially improved with the help of even the first term of the asymptoticexpansions. It is of interest to observe that, in the case α < 2, sometimes theapproximation nV (x) to P(Sn � x) (but not to P(Sn � x)) is, in a certainsense, as good as it is to P(ξn � x) (see Remark 3.4.5).

To obtain refinements of the limit theorems, we will need additional smooth-ness conditions on the function V (t). In the conditions below, q(t) denotes afunction such that q(t) → 0 as t → ∞.

[Dh,q] Condition [ · , =] is satisfied and, in addition, there exists a functionL1(t) → α as t → ∞ such that, as t → ∞ and Δ → 0,

V (t(1 + Δ)) =

{V (t)[1 + O(|Δ|h) + o(q(t))] if h � 1,

V (t)[1 − L1(t)Δ + O(|Δ|h) + o(q(t))] if 1 < h � 2.

(3.4.4)

For h = 1 we will need a stronger condition.

Page 175: Asymptotic analysis of random walks

144 Random walks with finite mean and infinite variance

[D(1,q)] Condition [ · , =] is satisfied and, in addition, there exists a functionL1(t) → α as t → ∞ such that, as t → ∞ and Δ → 0,

V (t(1 + Δ)) = V (t)[1 − L1(t)Δ + o(Δ) + o(q(t))]. (3.4.5)

For example, under condition [ · , =] the distribution tail of the first positivesum in the walk {Sk} will always satisfy condition [D(1,q)] with q(t) = t−1

(see § 7.5). This is also true for the Cramer transforms of the distribution weconsider in Chapter 6.

Condition [D(1,q)] means that the tail V can be represented as a sum of twocomponents: one is ‘differentiable at infinity’ (it satisfies condition [D(1,0)])while the other can be arbitrary, the only restriction being that its absolute valueshould admit a majorant of the order o(q(t)). Similar remarks hold true for otherconditions [D( · , · )] as well.

Furthermore, along with conditions [Dh,q] and [D(1,q)] one can consider con-ditions [Dh,O(q)] and [D(1,O(q))], which are formulated as follows.

[Dh,O(q)], [D(1,O(q))] Conditions [Dh,q], [D(1,q)] respectively are satisfiedwhen the remainder term o(q(t)) in them is replaced by O(q(t)).

In the lattice case, when the r.v. ξ is integer-valued with the greatest commondivisor of its values equal to 1, V (t) will be a step function. In this case, oneshould assume that t and Δt in (3.4.4) and (3.4.5) are also integer-valued.

If the function L(t) in the representation V (t) = t−αL(t) is differentiable, andL′(t) = o(L(t)/t) then in (3.4.4) with h > 1 and in (3.4.5) we can identify thefunction −L1(t) with tV ′(t)/V (t) ∼ −α, where V ′(t) is the derivative of V (t),and put q(t) ≡ 0.

As it will be seen from the proofs below, the accuracy of the asymptotic rep-resentations for P(Sn � x) and P(Sn � x) will depend on the sensitivity ofthe tails V (t) to relatively small variations in t. In this sense, conditions [Dh,q]reflect the nature of things.

We will also see (Remark 3.4.8) that, in the case where q(t) = o(t−h), condi-tion [Dh,q] can be replaced by [Dh,0].

Remark 3.4.2. It is not hard to see the following.

(1) If the functions Vi(t) satisfy conditions [Dh,qi], i = 1, 2, then V (t) :=

V1(t) + V2(t) satisfies [Dh,q] with

q(t) = q1(t)V1,0(t) + q2(t)V2,0(t), Vi,0(t) =Vi(t)

V1(t) + V2(t).

In the case h > 1, one should take, in the representation (3.4.4) for V (t),L1(t) = L1,1(t)V1,0(t) + L1,2(t)V2,0(t), where L1,i(t) is the function fromthe representation (3.4.4), corresponding to Vi(t), i = 1, 2.

(2) If V1(t) satisfies condition [Dh,q1 ] and q(t) < cq1(t)V1(t) then V (t) :=V1(t) + q(t) also satisfies [Dh,q1 ].

Page 176: Asymptotic analysis of random walks

3.4 Asymptotics of P(Sn � x) and its refinements 145

Remark 3.4.3. Clearly, if conditions [<, <] and [Q] = [Q1] ∪ [Q2] (introducedon p. 138) are met then, by virtue of Theorem 3.1.1, we have

P(Sn � x) � nV (x)(1 + o(1))

and, moreover, if [Q2] is satisfied,

P(S n < −x) � nW (x)(1 + o(1)),

where S n = mink�n Sk.Furthermore, it is not difficult to see that condition [Q] implies the convergence

nV (x) → 0 and therefore that the conditions of Corollary 3.1.2 are satisfied,which, in turn, implies (3.1.9).

Also, note that the condition nV (x) → 0 as x → ∞ always holds for devia-tions x > cn, since nV (n) → 0 as n → ∞ when E|ξ| < ∞.

Theorem 3.4.4. Let the conditions Eξ = 0, [<, =] with α ∈ (1, 2) and [Q] besatisfied. Then the following assertions hold true.

(i) One has

P(Sn � x) = nV (x)(1 + o(1)).

(ii) If condition [Dh,q] is satisfied then

P(Sn � x) = nV (x)(1 + o(q(x)) + rn,x

), (3.4.6)

where

rn,x =

{O(nV (x)

)if h > α := min{α, β},

O((σ(n)/x)h

)if h < α,

(3.4.7)

and the bounds O(·) are uniform in n and x, which satisfy nV (x) � ε foran arbitrary fixed ε > 0.

(iii) If condition [D(1,q)] is met then

P(Sn � x) = nV (x)(1 + o(q(x)) + O

(σ(n)/x

)). (3.4.8)

The remainder term O(σ(n)/x) is uniform in the same zones for n, x as in

part (ii).(iv) If conditions [Dh,O(q)] and [D(1,O(q))] are satisfied instead of conditions

[Dh,q] and [D(1,q)] respectively, then in (3.4.6) and (3.4.8) the remainderterm o(q(x)) must be replaced by O(q(x)).

In the lattice case, all the assertions of the theorem remain true for integer-valued x.

The so-called integro-local theorems on the asymptotics of P(Sn ∈ [x, x+Δ)),Δ > 0, as x → ∞, will be obtained in § 3.7 and Chapter 9.

Page 177: Asymptotic analysis of random walks

146 Random walks with finite mean and infinite variance

Remark 3.4.5. The assertion of Theorem 3.4.4 concerning the uniformity of O(·)means that if, say, the conditions h > α, nV (x) � ε are met then rn,x � cnV (x).One should note that σ(n)/x → 0 iff nV (x) → 0. More precisely, if x = sσ(n)then nV (x) ∼ s−α for a fixed s, and nV (x) � s−α+ε = o(s−1) as s → ∞ forany ε, 0 < ε < α − 1. Therefore,

nV (x) = o

(σ(n)

x

)as s → ∞ (3.4.9)

(or as nV (x) → 0). Similarly,

nV (x) = o

(σ(n)

x

)as nV (x) → 0. (3.4.10)

It is also clear that (σ(n)

x

)h

� nV (x) for h > α,(σ(n)

x

)h

� nV (x) for h < α.

(3.4.11)

It will follow from the proof of the theorem that the bound (3.4.7) can be writtenin the universal form

rn,x = O

((σ(n)

x

)h

+ n

x∫bσ(n)

uh−1V (u) du,

),

which coincides with (3.4.7) when h �= α.

Remark 3.4.6. If W (t) � cV (t) and h > α then, by virtue of Theorem 3.4.4,for P(Sn � x) one has an approximation nV (x)[1 + O(nV (x))] of the sameform as the approximation (3.4.3) for P(ξn � x). This situation is different fromthat in the case α > 2, where, under certain conditions, we have P(Sn � x) =nV (x)

[1 + cnx−2 + · · · ] (see Corollary 4.4.5).

Proof of Theorem 3.4.4. Put Gn := {Sn � x} and r = x/y = 2. It follows fromTheorem 3.1.1 (see (3.1.2)) that

P(Sn � x) = P(Gn) = P(GnB) + P(GnB)

= P(GnB) + O((nV (x))2

), (3.4.12)

where for the first term on the right-hand side we haven∑

j=1

P(GnBj) � P(GnB) �n∑

j=1

P(GnBj) −∑

i<j�n

P(GnBiBj)

=n∑

j=1

P(GnBj) + O((nV (x))2

). (3.4.13)

Page 178: Asymptotic analysis of random walks

3.4 Asymptotics of P(Sn � x) and its refinements 147

Therefore

P(Gn) =n∑

j=1

P(GnBj) + O((nV (x))2

). (3.4.14)

Further,

P(GnBj) = P(GnBn) = P(Sn � x, ξn � y)

= P(Sn−1 + ξn � x, ξn � y) = P(Sn−1 � x − y, ξn � y)

+ P(Sn−1 + ξn � x, Sn−1 < x − y).

Since the r.v.’s ξn and Sn−1 are independent of each other, the first term on theright-hand side does not exceed cnV 2(x), owing to (3.1.9). The second term canbe rewritten as

E[V (x − Sn−1); Sn−1 < x/2] = P1 + P2, (3.4.15)

where

P1 := E[V (x − Sn−1); Sn−1 < −x/2],

P2 := E[V (x − Sn−1); −x/2 � Sn−1 < x/2].

As we have already noted, condition [Q] implies the convergence nV (x) → 0.Using (3.1.9) to bound the left tails, we obtain

P1 � V (3x/2)P(Sn−1 < −x/2)

� V (3x/2)cnV (x/2) = O(nV (x)V (x)

). (3.4.16)

(i) By virtue of (3.4.12)–(3.4.16), to prove this part of the theorem it suffices toshow that P2 = V (x)(1 + o(1)). It follows from condition [Q] and (3.1.9) that ifε → 0 slowly enough as x → ∞ then for M = εx one has

P(|Sn−1| > M) → 0. (3.4.17)

Hence

P2 = E[V (x − Sn−1); |Sn−1| < M ] + o(V (x)). (3.4.18)

But V (x − S) ∼ V (x) for |S| < M , so that

P2 = V (x)(1 + o(1)).

Assertion (i) is proved.

(ii) If the smoothness condition [Dh,q] is satisfied then one can obtain more pre-cise results for P(Sn � x). In this case, for h > 1 we have, by virtue of (3.4.4),that

P2 = V (x)E[1 + L1(x)

Sn−1

x+ o(q(x)) + O

(∣∣∣∣Sn−1

x

∣∣∣∣h); |Sn−1| <x

2

].

(3.4.19)

Page 179: Asymptotic analysis of random walks

148 Random walks with finite mean and infinite variance

Set

T kn := E[Sk

n−1; |Sn−1| � x/2], (3.4.20)

Ihn := E[|Sn−1|h; |Sn| < x/2]. (3.4.21)

Since ESn−1 = 0, we have

P2 = V (x)[1 + o(q(x)) − T 0

n + O(x−1|T 1n | + x−hIh

n)], (3.4.22)

and now the main problem consists of bounding T kn and Ih

n .First we show that, for k = 0, 1 and h > α,

|T kn | � cxknV (x). (3.4.23)

The inequality T 0n ≡ P(|Sn−1| � x/2) � cnV (x) is contained in (3.1.9). For

k = 1, integrating by parts and again using (3.1.9), we obtain

|Tn| � E(|Sn−1|; |Sn−1| � x

2

)= −

∞∫x/2

u dP(|Sn−1| � u)

=x

2P(|Sn−1| >

x

2

)+

∞∫x/2

P(|Sn−1| � u) du

� cxnV (x) + cn

∞∫x/2

V (u) du � c1xnV (x) (3.4.24)

by virtue of Theorem 1.1.4(iv). This proves (3.4.23).Similarly,

Ihn �

x/2∫0

huh−1P(|Sn−1| � u) du =

bσ(n)∫0

+

x/2∫bσ(n)

. (3.4.25)

Here the first integral on the right-hand side does not exceed

h

bσ(n)∫0

uh−1du = σh(n).

Further, by (3.1.9) and Theorem 1.1.4(iv), the second integral on the right-handside admits the bound

hn

x/2∫bσ(n)

uh−1V (u) du �

⎧⎪⎪⎪⎪⎪⎪⎪⎨⎪⎪⎪⎪⎪⎪⎪⎩

hn

x∫1

uh−1V (u) du ∼ cxhnV (x), h > α,

hn

∞∫bσ(n)

uh−1V (u) du ∼ cσh(n), h < α

Page 180: Asymptotic analysis of random walks

3.5 Asymptotics of P(Sn � x) and its refinements 149

(recall that V (σ(n)) ∼ 1/n).Therefore, owing to (3.4.11),

Ihn �

{cxhnV (x) if h > α,

cσh(n) if h < α.

Together with the relations (3.4.14)–(3.4.16), (3.4.18), (3.4.22) and (3.4.23), thisproves (3.4.6), (3.4.7).

(iii) Now assume that condition [D(1,q)] is satisfied. Then, instead of (3.4.19),one has

P2 = V (x)E[1 − L1(x)

Sn−1

x+ o(q(x)) + o

( |Sn−1|x

); |Sn−1| <

x

2

].

Using the same argument as above, we obtain

P2 = V (x)[1 + o(q(x)) + o

(σ(n)

x

)].

(iv) The last assertion of Theorem 3.4.4 follows from the previous consider-ations in an obvious way, since replacing the term o(q(t)) in conditions (3.4.4),(3.4.5) by O(q(t)) does not cause any changes in the proof apart from replac-ing the bounds o(q(x)) in the final answers (3.4.6), (3.4.8) by O(q(x)). Theo-rem 3.4.4 is proved.

Remark 3.4.7. Since E|Sn|h � c > 0 for n � 1, adding the quantity O(x−h) toO(|Sn−1/x|h) in (3.4.19) will not affect the estimation of the terms on the right-hand side of that relation. Therefore, conditions [Dh,q] with q(t) � ct−h and[Dh,0] lead to the same results.

Remark 3.4.8. As we have already said, the bound o(q(x)) ‘passes’ from con-dition [Dh,q] to (3.4.19) and (3.4.6) without any change and has no influence onthe rest of the argument in the proof. The same applies to the proofs of othertheorems. Hence, in what follows, in the proofs related to condition [Dh,q] wecan assume that in fact condition [Dh,0] is met and then just add the term o(q(x))to the right-hand sides of the respective relations, from the very beginning to thefinal result.

3.5 Asymptotics of P(Sn � x) and its refinements

Some assertions on the first-order asymptotics of P(Sn � x) follow from thebounds of §§ 3.2 and 3.3 and were given in Theorem 3.4.1. In § 3.4 we also pre-sented a few remarks on the precision of the first-order approximations and statedthe smoothness conditions [Dh,q] on the tails V (t) that are necessary for derivinghigher order approximations.

The following theorem is the main result of the present section.

Page 181: Asymptotic analysis of random walks

150 Random walks with finite mean and infinite variance

Theorem 3.5.1. Let conditions [<, =] and [Q] with α ∈ (1, 2) be satisfied. Thenthe following assertions hold true.

(i) One has

P(Sn � x) = nV (x)(1 + o(1)). (3.5.1)

(ii) If condition [Dh,q] is met for some h ∈ (1, 2] then

P(Sn � x) = nV (x)[1 +

L1(x)nx

n−1∑j=1

ESj + o(q(x)) + rn,x

], (3.5.2)

where the form of rn,x is given in Theorem 3.4.4 and the convergence isuniform in n and x, satisfying the condtion nV (x) � ε for an arbitraryfixed ε > 0.

If condition [D(1,q)] is satisfied then (3.5.2) holds with rn,x = o(σ(n)/x).(iii) If condition [Dh,q] is satisfied for some h � 1 then

P(Sn � x) = nV (x)[1 + o(q(x)) + O

(σ(n)

x

)h]. (3.5.3)

(iv) If conditions [Dh,O(q)] and [D(1,O(q))] are satisfied instead of conditions[Dh,q] and [D(1,q)] respectively, then the remainder term o(q(x)) in (3.5.2)and (3.5.3) should be replaced by O(q(x)).

In the lattice case, when the r.v. ξ is integer-valued with the greatest commondivisor of its values equal to 1, all the assertions of the theorem remain true forinteger-valued x.

Note also that Remarks 3.4.5 and 3.4.6 concerning Theorem 3.4.4 are also ap-plicable to Theorem 3.5.1.

Corollary 3.5.2. If conditions [D(1,q)] and [Rα,ρ] with α ∈ (1, 2) and ρ > −1are satisfied then, as nV (x) → 0, one has

P(Sn � x) = nV (x)[1 +

α2Eζ(1)α + 1

b(n)x

+ o(q(x)) + o

(σ(n)

x

)], (3.5.4)

where ζ(1) := supu�1 ζ(u) is the supremum of the values ζ(u) of the stableprocess limiting for S�nu�/b(n). The bounds o(·) are uniform in n and x suchthat nV (x) � ε for an arbitrary ε → 0.

Recall that

b(n) = F (−1)(1/n), σ(n) ∼ F (−1)(1/ρ+n) = b(ρ+n) ∼ ρ1/α+ b(n),

where ρ+ = (ρ + 1)/2, and that clearly σ(n)/x → 0 iff nV (x) � ε → 0.

Page 182: Asymptotic analysis of random walks

3.5 Asymptotics of P(Sn � x) and its refinements 151

Proof of Theorem 3.5.1. (i) Here we will put Gn := {Sn � x}. The same ar-gument as we used at the beginning of the proof of Theorem 3.4.4 leads to ananalogue of (3.4.14) of the form

P(Sn � x) = P(Gn) =n∑

j=1

P(GnBj) + O((nV (x))2

). (3.5.5)

Again choosing r ≡ x/y = 2 we have

P(GnBj) = P(Sn � x, ξj � y)

= P(Sn � x, ξj � y, Sj−1 � x) + P(Sn � x, ξj � y, Sj−1 < x)

= P(Sn � x, ξj � y, Sj−1 < x) + O(jV 2(x)), (3.5.6)

where the last bound follows from Theorem 3.1.1. Next we observe that{Sn � x, ξj � y, Sj−1 < x

}={Sj + S

(j)n−j � x, ξj � y, Sj−1 < x

},

where S(j)n−j := max0�m�n−j(Sm+j − Sj). Setting

Zj,n := Sj−1 + S(j)n−j (3.5.7)

and noting that

Sj−1 � Zj,n

d� Sn−1, (3.5.8)

we obtain, again using Theorem 3.1.1, that

P(GnBj) = P(Sj + S

(j)n−j � x, ξj � y, Sj−1 < x

)+ O

(jV 2(x)

)= P

(Sj + S

(j)n−j � x, ξj � y

)+ O

(P(ξj � y, Sj−1 � x)

)+ O(jV 2(x))

= P(ξj + Zj,n � x, ξj � y, Zj,n < x/2)

+ P(ξj + Zj,n � x, ξj � y, Zj,n � x/2) + O(jV 2(x))

= P(ξj + Zj,n � x, Zj,n < x/2) + O(nV 2(x)). (3.5.9)

Here the last equality holds owing to the relation

{ξj + Zj,n � x, ξj � y, Zj,n < x/2} = {ξj + Zj,n � x, Zj,n < x/2}

and the bound

P(ξj � y, Zj,n � x/2) � P(ξj � y)P(Sn−1 � x/2) = O(nV 2(x)).

Page 183: Asymptotic analysis of random walks

152 Random walks with finite mean and infinite variance

Further, since

P(ξj + Zj,n � x, Zj,n < x/2)

= E[E[I(ξj � x − Zj,n)|Sj−1, ξj+1, . . . , ξn]; Zj,n < x/2

]= E[V (x − Zj,n); Zj,n < x/2], (3.5.10)

we obtain from (3.5.5) and (3.5.9) that

P(Gn) =n∑

j=1

E[V (x − Zj,n); Zj,n < x/2] + O((nV (x))2

). (3.5.11)

As in (3.4.15), we split the summands E[V (x−Zj,n); Zj,n < x/2] in (3.5.11)into two terms Ej,1 and Ej,2, where, using (3.1.9) to estimate the left tails, wehave

Ej,1 := E[V (x − Zj,n); Zj,n � −x/2] = O(nV (x)V (x)

). (3.5.12)

The remainder of the proof of part (i) is completely parallel to the correspondingpart of the proof of Theorem 3.4.4(i).

(ii) Here we just have to evaluate the term

Ej,2 := E[V (x − Zj,n); |Zj,n| < x/2]. (3.5.13)

Using ESj = 0 and condition [Dh,q] with 1 < h � 2, we can write (recallthat L1(x) ∼ α and that, according to Remark 3.4.8, in these proofs we can usecondition [Dh,0] instead of [Dh,q]):

Ej,2 = V (x)E[1 + L1(x)

Zj,n

x+ O

(∣∣∣∣Zj,n

x

∣∣∣∣h); |Zj,n| <x

2

]= V (x)

{1 − P

(|Zj,n| >

x

2

)+ L1(x)

ES(j)n−j

x

− L1(x)E[Zj,n

x; |Zj,n| � x

2

]+ O

(E[∣∣∣∣Zj,n

x

∣∣∣∣h; |Zj,n| <x

2

])}.

(3.5.14)

Setting, for k = 0, 1,

T kj,n := E

[Zk

j,n; |Zj,n| � x/2], (3.5.15)

Ihj,n := E

[|Zj,n|h; |Zj,n| < x/2], (3.5.16)

one can rewrite the representation (3.5.14) for Ej,2 as

Ej,2 = V (x){

1−T 0j,n+L1(x)

ES(j)n−j

x−L1(x)

T 1j,n

x+O

(x−hIh

j,n

)}. (3.5.17)

From (3.5.8) we have |Zj,n|d� Sn, so that the bounding of Ih

j,n completely repeats

Page 184: Asymptotic analysis of random walks

3.5 Asymptotics of P(Sn � x) and its refinements 153

that of Ihn in § 3.4. The bounds for T k

j,n = O(xknV (x)) are also obtained in asimilar way to those for T k

n in § 3.4.When conditions [D(1,q)] and [Q] are met, the proof of (3.5.2) with a remainder

term rn,x = O(σ(n)/x) is carried out similarly to the argument in the proof ofTheorem 3.4.4(iii).

Part (ii) of Theorem 3.5.1 is proved.

(iii), (iv) The proofs of the last two parts of the theorem are also parallel tothe corresponding arguments from the proof of Theorem 3.4.4. Theorem 3.5.1 isproved.

Proof of Corollary 3.5.2. The representation (3.5.4) follows from (3.5.2). First,observe that L1(x) → α and that, under the conditions of Corollary 3.5.2, asj → ∞, the distributions of Sj/b(j) converge weakly to the stable law Fα,ρ

(see § 1.5). Further, the r.v.’s |Sj |/b(j), j � 1, are uniformly integrable. Thisfollows from Theorem 3.1.1:

P(∣∣∣∣ Sj

b(j)

∣∣∣∣ > v

)� cjV (vb(j)) � cjv−α+δV (b(j)) � cv−α+δ (3.5.18)

for any δ > 0 and all sufficiently large v.Therefore, ESj/b(j) → Eζ(1) as j → ∞ (see also [146]), where ζ(1) =

maxu�1 ζ(u) and ζ(u) is the stable process that is limiting for S�nu�/b(n). Sinceb(j) = j1/αLb(j), Lb(t) being an s.v.f. at infinity, we have

ESj =ESj

b(j)b(j) =

(Eζ(1) + o(1)

)j1/αLb(j) = j1/αL0(j),

where L0 is also an s.v.f. Hence, by Theorem 1.1.4(iv)

n−1∑j=1

ESj ∼ Eζ(1)n−1∑j=1

j1/αL0(j) ∼ Eζ(1)

n∫1

x1/αL0(x) dx

∼ Eζ(1)1/α + 1

n1/α+1L0(n) ∼ αEζ(1)α + 1

nb(n). (3.5.19)

Corollary 3.5.2 is proved.

Example 3.5.3. We will illustrate Theorem 3.5.1 by a numerical example. Con-sider symmetrically distributed r.v.’s ξj with

F+(t) = P(ξ1 � t) =

⎧⎪⎨⎪⎩0.5 for 0 � t <

49,

427

t−3/2 for t � 49.

Obviously, conditions [=, =] (with α = β = 3/2), Eξ1 = 0 and [Dh,0] (withh = 2) are satisfied, and L1(t) = 3/2.

Let us take n = 10 and demonstrate how much the relation (3.5.2) can improvethe first-order approximation (3.5.1) for P(Sn � x).

Page 185: Asymptotic analysis of random walks

154 Random walks with finite mean and infinite variance

Fig. 3.1. Illustrating Theorem 3.5.1: a comparison of the approximations (3.5.1) (lowersmooth curve) and (3.5.2) (upper smooth curve) with P(Sn � x).

Figure 3.1 depicts plots of the crude approximation nV (x) (the main term inthe approximation (3.5.1)), the second-order approximation (the right-hand sideof (3.5.2) without remainders) and a Monte Carlo estimator of the tail P(Sn � x)(to obtain this, we simulated 105 trajectories {Sk; k � n}; the values ESj werealso estimated from simulations).

3.6 The asymptotics of P(S(a) � x) with refinements and the general

boundary problem

Now we consider the asymptotic behaviour of the probabilities P(Sn(a) � x) asx → ∞ for the maxima Sn(a) = maxk�n(Sk − ak) when Eξ = 0, a > 0.

Theorem 3.6.1. Let conditions [<, =] with α ∈ (1, 2) and [Q] with n replacedby m = min{n, x/a} be satisfied. Then the following assertions hold true.

(i) For an arbitrary fixed a0 < ∞, uniformly in n and a ∈ (0, a0) we have

P(Sn(a) � x) =

⎛⎝ n∑j=1

V (x + ja)

⎞⎠ (1 + o(1)). (3.6.1)

(ii) Under condition [D1,q], for an arbitrary fixed a1 > 0, uniformly in n anda > a1 we have

P(Sn(a) � x) =n∑

j=1

V (x+ja)[1+o(q(x+ja))

]+O

(mσ(m)x−1V (x)

),

(3.6.2)where m := min{n, x} and

V (t) = max{V (t),W (t)}, σ(n) = V (−1)(1/n) = max{σ(n), σW (n)}.

Page 186: Asymptotic analysis of random walks

3.6 Asymptotics of P(S(a) � x) and the boundary problem 155

Remark 3.6.2. It will be seen from the proof of Theorem 3.6.1 that if a � a1 > 0,then (3.6.1) will also hold when condition [<, =] is replaced by [ · , =]. Indeed,the only place in the proof of part (i) of the theorem where we use [<, =] is in thederivation of the bound (3.6.19). In the case a � a1 > 0 one could use, instead ofthat relation, the bound Ej,1 = o(jV (x + aj)), which is due to the law of largenumbers.

Note that in Theorem 3.6.1, in contrast with the results of §§ 3.4 and 3.5, thereare no upper restrictions on n (for example, of the form nV (x) → 0), and there-fore one could put n = ∞. So we obtain the following results for S(a) := S∞(a).

Corollary 3.6.3. Let condition [<, =] with α ∈ (1, 2) and W (t) < c/(t ln t) besatisfied.

(i) As x → ∞,

P(S(a) � x) =( ∞∑

j=1

V (x + ja))

(1 + o(1))

=(

1a

∞∫x

V (u) du

)(1 + o(1)) =

xV (x)a(α − 1)

(1 + o(1)).

(3.6.3)

(ii) If, moreover, condition [Dh,q] holds with h = 1 then, as x → ∞,

P(S(a) � x) =∞∑

j=1

V (x+ja)[1+o(q(x+ja))

]+O

(σ(x)V (x)

). (3.6.4)

Observe that in the last assertion we have σ(x) = o(x). A more accurateasymptotic representation for P(S(a) � x) will be found in § 7.5.

Now we will consider to the case of general boundaries. For a given boundaryg = {g(k)}, we set

Gn :={

maxk�n

(Sk − g(k)) � 0}

.

Further, let Gx,n be the class of boundaries {g(k) = g(k, x); k = 0, 1, . . . , n},such that

min1�k�n

g(k) � cx

for a fixed c > 0. It is obvious that, in the case a � 0, the boundary g(k) = x+ak

belongs to Gx,∞ and that G∞ = {S(a) � x}.From the applications viewpoint, the most interesting boundaries are of the

following two types:

(1) g(k) = x + gk, k = 0, 1, . . . , where the sequence {gk; k � 0} does notdepend on x, infk�0 gk > −∞;

Page 187: Asymptotic analysis of random walks

156 Random walks with finite mean and infinite variance

(2) g(k) = xf(k/n), where usually one considers only the values k � n andthe function f(t) is given on [0, 1], does not depend on x or n and is suchthat inft∈[0,1] f(t) > 0.

Evidently, boundaries of both types belong to the class Gx,n for any n.

Theorem 3.6.4. Let conditions [<, =] with α ∈ (1, 2) and [Q] be satisfied.Then, for g ∈ Gx,n, the following two assertions hold true.

(i) As x → ∞, one has

P(Gn) =( n∑

j=1

V (g∗(j)))

(1 + o(1)) + O(n2V (x)V (x)

), (3.6.5)

where

g∗(j) := minj�k�n

g(k).

(ii) If condition [Dh,q] with h = 1 is met then, as x → ∞,

P(Gn) =( n∑

j=1

V (g∗(j)))(

1 + o(q(x)) + O(σ(n)/x)). (3.6.6)

The bounds O(·) in (i), (ii) are uniform in g ∈ Gx,n for n and x satisfyingnV (x) � ε for an arbitrary fixed ε > 0.

Remark 3.6.5. In Theorems 3.6.1 and 3.6.4 one can consider the case h > 1 also,but this would not lead to substantially stronger results.

As was the case in Theorems 3.4.4 and 3.5.1, conditions of the form [Dh,q] canbe replaced by [Dh,O(q)] with o(q(x)) in (3.6.6) changed to O(q(x)).

If, say, g(k) = x + gk, where the sequence {gk} depends neither on x noron n, gk ↑ ∞ as k ↑ ∞ sufficiently fast, then we can obtain from Theorem 3.6.4assertions for the probability P(G∞), which are similar to (3.6.3) and (3.6.4).For example, under the conditions [<, =], W (t) < c1V (t), one would have, asx → ∞,

P(

supk�0

(Sk − g(k)) � 0)

=( ∞∑

k=1

V (x + gk))

(1 + o(1)). (3.6.7)

In the case where the sequence {gk} is dominated by a linear sequence, (3.6.7)follows from Theorems 3.2.1 and 3.6.4.

Corollary 3.6.6. Let the following conditions be met: [<, =] with α ∈ (1, 2),W (t) < c1V (t), gk ↑, gk � gk for some g > 0 and

∑∞k=1 V (x+ gk) > cxV (x).

Then (3.6.7) holds true.

Page 188: Asymptotic analysis of random walks

3.6 Asymptotics of P(S(a) � x) and the boundary problem 157

Proof. Note that condition [Q] is always satisfied when n = cx. Therefore,by virtue of Theorem 3.6.4, to prove the corollary it suffices to verify that theprobability that the boundary {x + gk} will be crossed after a time n = tx,where t → ∞ together with x (and slowly enough), is negligibly small comparedwith xV (x). But this follows immediately from Theorem 3.2.1(ii) (see (3.2.3)).The corollary is proved.

It is not difficult to obtain an analogue of Corollary 3.6.6 for regularly varyingsequences {gk}. For simplicity, we will restrict ourselves to the case gk = gkγ ,γ ∈ (0, 1).

Theorem 3.6.7. Let the following conditions be met: [<, =] with α ∈ (1, 2),gk = gkγ , γ ∈ (1/α, 1), α = min{α, β}, g > 0. Then, as x → ∞,

P(supk�0

(Sk − gk) � x)∼

∞∑k=1

V (x + gk)

∼ x1/γV (x)

∞∫0

(1 + guγ)−αdu. (3.6.8)

Proof. For n = xα−ε, ε > 0, condition [Q] is met and so by Theorem 3.6.4

P(

supk�n

(Sk − gk) � x

)∼

n∑k=1

V (x + gk),

where, after the change of variables t = ux1/γ , we obtain

n∑k=1

V (x + gkγ) ∼n∫

0

V (x + gtγ) dt = x1/γ

nx−1/γ∫0

V (x(1 + guγ)) du

∼ x1/γV (x)

xα−ε−1/γ∫0

(1 + guγ)−αdu.

Since α − ε − 1/γ > 0 for ε < α − 1/γ, it may be seen that the probabilityP(supk�n(Sk − gk) � x

)is asymptotically equivalent to the right-hand side

of (3.6.8). It remains to show that the probability that the boundary {x + gkγ}will be crossed after a time n = xα−ε is o(x1/γV (x)). This can be done inexactly the same way as in the proof of Theorem 3.2.1(ii). One just has to putnk = x1/γ2(k−1)/γ , k = 1, 2, . . . The theorem is proved.

We can draw a number of conclusions about the distribution of Sn(−a) fromthe above results. It is well known that, for a > 0, in the normal deviations zonethe distribution of Sn(−a) is close to that of an + Sn (see [142]). This proximitytakes place in the large deviations zone as well. More precisely, Theorem 3.6.4implies the following result.

Page 189: Asymptotic analysis of random walks

158 Random walks with finite mean and infinite variance

Corollary 3.6.8. Let the following conditions be met: [<, =], α ∈ (1, 2), [Q],the function g(k), k � n, is non-increasing, g(n) = cx. Then

P(maxk�n

(Sk − g(k)) � 0)

= nV (g(n))(1 + o(1)),

where the remainder term o(1) is uniform in x and n such that nV (x) � ε → 0.

In particular, for g(k) = a(n − k) + x with a fixed a > 0, one has

P(maxk�n

(Sk + ak) − an � x)

= nV (x)(1 + o(1)) (3.6.9)

uniformly in x and n such that nV (x) � ε → 0.

Corollary 3.6.8 follows from Theorem 3.6.4(i) and the obvious fact that, fornon-increasing {g(k)}, one has g∗(j) = g(n), j � n. The assertion (3.6.9) couldalso be obtained from the inequalities

Sn � maxk�n

(Sk + ak) − an � Sn

and Theorems 3.4.4 and 3.5.1.It is also possible to obtain refinements of the representation (3.6.9) using an ar-

gument similar to those employed in Theorems 3.4.4 and 3.5.1. We will illustratethis in more detail in §§ 4.6 and 5.7.

To prove Theorems 3.6.1 and 3.6.4 we will use Theorem 3.2.1.

Proof of Theorem 3.6.1. Let Gn := {Sn(a) � x},

Bj(v) := {ξj < y + jva}, B(v) :=n⋂

j=1

Bj(v), m := min{n, x/a}.

It will be convenient for us to state a number of intermediate results in the formof lemmata.

Lemma 3.6.9. Let conditions [<, <] with α ∈ (1, 2) and [Q] with n replacedby m = min{n, x/a} be satisfied. For all n and v � min{1/4r, (r − 2)/2r},r > 5/4, one has

P(Gn) =n∑

j=1

P(GnBj(v)) + O((mV (x))2

). (3.6.10)

Proof. Clearly

P(Gn) = P(GnB(v)) + P(GnB(v)). (3.6.11)

Page 190: Asymptotic analysis of random walks

3.6 Asymptotics of P(S(a) � x) and the boundary problem 159

As in (3.4.13), for the last term we can write

n∑j=1

P(GnBj(v)) � P(GnB(v))

�n∑

j=1

P(GnBj(v)) −∑

i<j�n

P(GnBi(v)Bj(v)

)=

n∑j=1

P(GnBj(v)) + O((mV (x))2

), (3.6.12)

where the form of the remainder follows from the bounds∑i<j�n

P(GnBi(v)Bj(v)

)�( n∑

j=1

P(Bj(v)))2

and

n∑j=1

P(Bj(v)) �n∑

j=1

V (y + jva) ∼n∫

0

V (y + vat) dt

� min

⎧⎨⎩nV (y),1va

∞∫y

V (u) du

⎫⎬⎭ � cmV (x). (3.6.13)

Applying Theorem 3.2.1(i) with v � min{1/4r, (r − 2)/2r} (which ensuresthat r0 � 2) completes the proof of the lemma.

Next we will obtain a representation for the summands in (3.6.10). Put

S(j)n−j(a) := max{0, ξj+1 − a, . . . , Sn − Sj − (n − j)a},Zj,n(a) := Sj−1 + S

(j)n−j(a).

Lemma 3.6.10. Let δ ∈ (0, 1) and a0 ∈ (0,∞) be fixed. Then, uniformly in a ∈(0, a0),

P(GnBj(v)) = E[(x + aj − Zj,n(a)); Zj,n � δ(x + aj)

]+ O

(V (x+aj)

[min{j, x/a}V (x)

+ min{n, x/a + j}V (x + aj)])

. (3.6.14)

Moreover, for z � σ(j),

P(Zj,n(a) � z) � c[j + min{n, z/a}]V (z),

P(|Zj,n(a)| � z) � c[min{n, z/a}V (z) + jV (z)

].

(3.6.15)

Page 191: Asymptotic analysis of random walks

160 Random walks with finite mean and infinite variance

Proof. First we will obtain bounds for the distribution of Zj,n(a). For z � σ(j),we find from Corollary 3.1.2 and Theorem 3.2.1(ii) that

P(Zj,n(a) � z) � P(Sj−1 � z/2) + P(Sn−j(a) � z/2)

� c(jV (z) + mV (z)

).

Further, since

|Zj,n(a)| � |Sj−1| + S(j)n−j(a),

for z � σ(j) we have

P(|Zj,n(a)| � z) � P(|Sj−1| � z/2) + P(Sn(a) � z/2)

� c[jV (z) + min{n, z/a}V (z)

].

The inequalities (3.6.15) are proved. Now we will establish (3.6.14). Clearly

P(GnBj(v)) = P(Sn(a) � x, ξj � y + jva)

= P(Sn(a) � x, ξj > y + jva, Sj−1(a) < x)

+ P(Sn(a) � x, ξj � y + jva, Sj−1(a) � x)

= P(Sn(a) � x, ξj � y + jva, Sj−1(a) < x) + ρn,j,x, (3.6.16)

where, by Theorem 3.2.1(ii),

ρn,j,x � cV (y + jva) min{j, x/a}V (x). (3.6.17)

Further,

P(Sn(a) � x, ξj � y + jva, Sj−1(a) < x

)= P

(Sj − aj + S

(j)n−j(a) � x, ξj � y + jva, Sj−1(a) < x

)= P

(ξj + Zj,n(a) � x + aj, ξj � y + jva

)+ O

(min{j, x/a}V (x)V (y + jva)

)= P

(ξj + Zj,n(a) � x + aj, ξj � y + jva, Zj,n(a) < x + aj − y − jva

)+ O

(min{j, x/a}V (x)V (x + aj) + min{n, x/a + j}V 2(x + aj)

).

(3.6.18)

To obtain the last relation, we used (3.6.15) and the fact that, for the chosen valuesof r and v,

c1(x + aj) � y + jva � c2(x + aj),

c3(x + aj) � x + aj − y − jva � c4(x + aj).

The event {ξj � y + jva} under the probability symbol on the right-hand sideof (3.6.18) is redundant. Therefore, owing to the independence of the r.v.’s ξj

Page 192: Asymptotic analysis of random walks

3.6 Asymptotics of P(S(a) � x) and the boundary problem 161

and Zj,n(a), the probability on the final right-hand side of (3.6.18) can be repre-sented (up to an additive term O

(min{n, x/a + j}V 2(x + aj)

)) as

E[V (x + aj − Zj,n(a)); Zj,n(a) � δ(x + aj)

].

Collecting bounds from (3.6.17) and (3.6.15), we obtain (3.6.14). Lemma 3.6.10is proved.

We now return to the proof of Theorem 3.6.1. Let

x(j) := δ(x + aj).

Represent the expectation in (3.6.14) as a sum Ej,1 + Ej,2, where

Ej,1 := E[V (x + ja − Zj,n(a)); Zj,n(a) < −x(j)

],

Ej,2 := E[V (x + ja − Zj,n(a)); |Zj,n(a)| � x(j)

],

and note that, since E|ξ| < ∞, the deviations x + aj of Sj have the property thatmaxj�0 jV (x + aj) → 0 as x → ∞, so that one can use the inequality (3.1.9)for them. Hence, owing to Zj,n(a) � Sj−1, we obtain

Ej,1 � cV (x + aj)P(Sj−1 < −x(j)) < c1jV (x + aj)W (x + aj). (3.6.19)

Moreover, by virtue of (3.6.15), for δ → 0 slowly enough we have

Ej,2 = V (x + aj)(1 + o(1)).

So, for m = min{n, x/a} we getn∑

j=1

Ej,1 < c1W (x)n∑

j=1

jV (x + aj) ∼ cW (x)mV (x) = o(mV (x))

(cf. (3.6.13)) andn∑

j=1

Ej,2 =( n∑

j=1

V (x + aj))

(1 + o(1)),

where

c1mV (x) <

n∑j=1

V (x + aj) ∼ 1a

x+an∫x

V (u) du < c2mV (x).

Summing the remainder terms in (3.6.14) over j yields a term O(m2V 2(x)

)=

o(mV (x)). Hence from (3.6.11) and Lemmata 3.6.9 and 3.6.10 we obtain thefirst assertion of Theorem 3.6.1.

(ii) Using condition [Dh,0] with h = 1 (see Remark 3.4.8), one can write

Ej,2 = E[V (x + aj − Zj,n(a)); |Zj,n(a)| � x(j)

]= V (x + ja)E

[1 + O

(∣∣∣∣Zj,n(a)x + ja

∣∣∣∣); |Zj,n(a)| � x(j)]. (3.6.20)

Page 193: Asymptotic analysis of random walks

162 Random walks with finite mean and infinite variance

(Note that, had we assumed that condition [Dh,0] is met with 1 < h � 2 thenon the right-hand side of (3.6.20) a term proportional to ES

(j)n−j(a)/(x + aj)

would also appear, but the latter expression tends to infinity as n − j → ∞ sinceES∞(a) = ∞ in the case Eξ2 = ∞.)

Now introduce the quantities

T 0j,n := P(|Zj,n(a)| � x(j)),

I1j,n := E[Zj,n(a); |Zj,n(a)| � x(j)],

(cf. (3.5.15), (3.5.16) in the proof of Theorem 3.5.1), so that the expectation in(3.6.20) can be written as

1 − T 0j,n + O

((x + aj)−1I1

j,n

).

We need to obtain an upper bound for this expression. A bound for T 0j,n is part

of (3.6.15). Using that result, we obtain

n∑j=1

V (x + aj)T 0j,n � c

n∫1

min{n, x/a + t}V (x + at)V (x + at) dt

� c1m2V (x)V (x).

Now consider the bounds related to I1j,n. For m(t) := min{n, t/a},

I1,+j,n := −

x(j)∫0

t dP(Zj,n � t) �x(j)∫0

P(Zj,n � t) dt

�x(j)∫0

P(Sj−1 � t/2) dt +

x(j)∫0

P(Sn−j(a) � t/2) dt

� 2E(Sj−1; Sj−1 � 0) + c

x(j)∫0

m(t)V (t) dt,

where, for mj = min{n, x + aj}, owing to Theorem 1.1.4(iv) we have

x(j)∫0

m(t)V (t) dt � cmin{x2(j)V (x(j)), n2V (n)

}= cm2

jV (mj)

(recall that in part (ii) of Theorem 3.6.1 we assumed that a � a1 > 0).

Page 194: Asymptotic analysis of random walks

3.6 Asymptotics of P(S(a) � x) and the boundary problem 163

Similarly,

I1,−j,n :=

0∫−x(j)

|t| dP(Zj,n < t) �0∫

−x(j)

P(Zj,n < t) dt

�0∫

−x(j)

P(Sj−1 < u) du � E(|Sj−1|; Sj−1 < 0).

Therefore

I1j,n = I1,+

j,n + I1,−j,n � 2E|Sj−1| + cm2

jV (mj).

Here, to simplify our argument, we will assume that condition [=, =] is met. Aremark on how to obtain the necessary bound in the case where only condition [Q]is satisfied, can be found at the end of the proof.

As before, let σ(n) = max{σ(n), σW (n)}. If condition [=, =] is satisfiedthen Sn/b(n) converge in distribution to an r.v. ζ following a stable distributionand, moreover, E|Sn|/b(n) → E|ζ| < ∞ (cf. (3.5.18)) and the quantities b(n)and σ(n) are of the same order of magnitude. Hence E|Sj−1| < cσ(j) and

n∑j=1

V (x + aj)I1j,n

x + aj

� c1

n∑j=1

V (x + aj)σ(j)x + aj

+ c2

n∑j=1

V (x + aj)m2jV (mj)

x + aj. (3.6.21)

The first sum on the right-hand side does not exceed

c3V (x)

xmin

{nσ(n), xσ(x)

}= c3

V (x)x

mσ(m),

while the second is bounded by

c4V (x)

xmin

{n3V (n), x3V (x)

}= c4

V (x)x

m3V (m).

Collecting the above bounds and taking into account that mV (x) = o(σ(m)/x),we obtain

P(Gn) =n∑

j=1

V (x + aj)

+ O

(m2V (x)V (x) +

V (x)x

mσ(m) +V (x)

xm3V (m)

)=

n∑j=1

V (x + aj) + O(mσ(m)x−1V (x)

).

This proves (3.6.2). If condition [=, =] is not satisfied then one should use,

Page 195: Asymptotic analysis of random walks

164 Random walks with finite mean and infinite variance

instead of (3.6.21), the bounds for the right and left distribution tails of Sj−1

from (3.1.9) and Theorem 3.4.4.Theorem 3.6.1 is proved.

Proof of Theorem 3.6.4. Now let

Gn :={

maxk�n

(Sk − g(k)) � 0}

, Bj := {ξj � y}, B :=n⋂

j=1

Bj .

Since mink�n g(k) = cx and, without loss of generality, we can assume thatc = 1, we have

P(GnB) � P ≡ P(Sn � x; B),

where P � c(nV (y))r by Theorem 3.1.1. Therefore

P(Gn) = P(GnB) + O((nV (y))r

).

Moreover, cf. (3.4.14) and (3.5.5), we have

P(GnB) =n∑

j=1

P(GnBj) + O((nV (x))2

). (3.6.22)

Taking r = 2, we obtain from this that

P(Gn) =n∑

j=1

P(Gn; ξj � y) + O((nV (y))2

). (3.6.23)

Next consider P(Gn; ξj � y). We will use an argument quite similar to thatemployed in the proofs of Theorems 3.4.4 and 3.5.1. First, using the independenceof ξj and Sj−1, we can infer from Theorem 3.1.1 that

P(Gn; ξj � y) = P(Gn; ξj � y, Sj−1 < x)

+ P(Gn; ξj � y, Sj−1 � x)

= P(Gn; ξj � y, Sj−1 < x) + O(jV 2(x)

). (3.6.24)

Note that if the event Gn occurs and Sj−1 < x then maxj�k�n(Sk − g(k)) � 0.Introduce the r.v.’s

Mj,n := max0�k�n−j

(Sk+j − Sj − g(k + j)) + g∗(j)

= max0�k�n−j

(Sk+j − g(k + j)) + g∗(j) − Sj . (3.6.25)

Again using the bound

P(ξj � y, Sj−1 � x) = O(jV 2(x)

)

Page 196: Asymptotic analysis of random walks

3.6 Asymptotics of P(S(a) � x) and the boundary problem 165

and setting Gj,n :={maxj�k�n(Sk − g(k)) � 0

}, we obtain from (3.6.24) that

P(Gn; ξj � y)

= P(Gj,n; ξj � y, Sj−1 < x

)+ O

(jV 2(x)

)= P

(Gj,n; ξj � y

)+ O

(jV 2(x)

)= P

(Sj + Mj,n � g∗(j), ξj � y, Sj−1 + Mj,n < x/2

)+ O

(jV 2(x)

)+ P

(Sj + Mj,n � g∗(j), ξj � y, Sj−1 + Mj,n � x/2

). (3.6.26)

Now put

Zj,n := Sj−1 + Mj,n, j = 1, . . . , n. (3.6.27)

Clearly, Mj,n � S(j)n−j and therefore

Zj,n � Sj−1 + S(j)n−j

d� Sn−1. (3.6.28)

However, there exists a kj ∈ {0, 1, . . . , n − j} such that

g(kj + j) − g∗(j) = 0 and Mj,n � min0�k�n−j

(Sk+j − Sj).

Hence

Zj,n

d� S n−1 = min

k�n−1Sk. (3.6.29)

Since, from the above discussion,

P(ξj � y, Zj,n � x/2) = O(nV 2(x)

)and the events {Sj + Mj,n � g∗(j)} and {ξj < x/2, Zj,n < x/2} are mutuallyexclusive (recall that y = x/2, g∗(j) � x), we obtain from (3.6.26) that

P(Gn; ξj � y) = P(ξj + Zj,n � g∗(j), Zj,n < x/2

)+ O

(nV 2(x)

).

The probability on the right-hand side can be rewritten as

P(ξj + Zj,n � g∗(j), Zj,n < x/2

)= E

[E[I(ξj � g∗(j) − Zj,n)|Sj−1, ξj+1, . . . , ξn]; Zj,n < x/2

]= E

[V (g∗(j) − Zj,n); Zj,n < x/2

], (3.6.30)

so that from (3.6.23) we have

P(Gn) =n∑

j=1

E[V (g∗(j) − Zj,n); Zj,n < x/2

]+ O

(nV 2(x)

). (3.6.31)

As in the proofs of the previous theorems, we split the expectation on the right-hand side of (3.6.30) into two parts, Ej,1 and Ej,2:

Ej,1 := E[V (g∗(j) − Zj,n); Zj,n < −x/2] = O(nV (x)V (x)

)(3.6.32)

Page 197: Asymptotic analysis of random walks

166 Random walks with finite mean and infinite variance

(by virtue of (3.6.29) and (3.1.9)) and

Ej,2 := E[V (g∗(j) − Zj,n); |Zj,n| � x/2]. (3.6.33)

Now

P(|Zj,n| � x/2) = 1 − P(Zj,n > x/2) − P(Zj,n < −x/2),

and, for any fixed v, the probabilities P(Zj,n > xv) and P(Zj,n < −xv) can beestimated using the inequalities (3.6.28), (3.6.29) and (3.1.9):

P(|Zj,n| � xv) � P(Sn−1 � xv) = O(nV (x)).

Thus E2,j = V (g∗(j))(1 + o(1)), which, together with (3.6.31) and (3.6.32),proves the first assertion of Theorem 3.6.4.

To prove the second, consider in more detail the term (3.6.33) which, by virtueof conditions [D1,0] (see Remark 3.4.8), can be rewritten as

E2,j = E[V (g∗(j) − Zj,n); |Zj,n| � x/2

]= V (g∗(j))E

[1 + O

(∣∣∣∣ Zj,n

g∗(j)

∣∣∣∣); |Zj,n| � x/2]. (3.6.34)

(If condition [Dh,0] with h > 1 is met then an additional term EMj,n will appearon the right-hand side but, in the case of a general boundary {g(k)}, computingthis term is rather difficult.)

The computation of the expectation on the right-hand side of (3.6.34) using theinequalities (3.6.28), (3.6.29) can be carried out in exactly the same way as inTheorem 3.5.1. Theorem 3.6.4 is proved.

3.7 Integro-local theorems on large deviations of Sn for index −α, α ∈ (0, 2)

Let Δ[x) := [x, x + Δ) be a half-open interval of length Δ > 0 with left endpoint x. The present section deals with the asymptotics of the probabilities

P(Sn ∈ Δ[x)) (3.7.1)

as x → ∞ for various Δ = o(x). It is natural to term the corresponding assertionsintegro-local theorems, retaining the term ‘local theorem’ for assertions relatingto the density of Sn (when it exists) and to the probabilities P(Sn = k) in thelattice case.

Integro-local theorems are of independent interest but can also be very use-ful for finding the asymptotic behaviour, as x → ∞, of integrals of the formE(f(Sn); Sn � x) for broad classes of functions f . In Chapter 6 we will usethem to describe the large deviation probabilities for random walks with regularexponentially decaying jump distributions.

In the multivariate case, integro-local theorems are the most convenient andnatural type of assertion on the asymptotics of large deviation probabilities forsums of random vectors (see Chapter 9).

Page 198: Asymptotic analysis of random walks

3.7 Integro-local theorems on large deviations of Sn 167

In what follows, we will be using the smoothness condition [D(1,q)] in a formsomewhat different from the previous one (the interpretations of the function q

and the increment Δ being different from those in (3.4.5)). This is merely forconvenience of exposition. Thus we write

[D(1,q)] As t → ∞, for Δ ∈ [Δ1,Δ2], Δ1 � Δ2 = o(t), one has

V (t)−V (t+Δ) = V1(t)[Δ(1+o(1))+o(q(t))

], V1(t) = αV (t)/t, (3.7.2)

where the term o(q(t)) does not depend on Δ and q(t) is an r.v.f. of the formq(t) = t−γqLq(t), γq � −1, Lq(t) is an s.v.f.

Here the remainder o(1) is assumed to be uniform in Δ ∈ [Δ1,Δ2] in thefollowing sense: for any fixed function δu ↓ 0 as u ↑ ∞, there exists a func-tion εu ↓ 0 as u ↑ ∞ such that o(1) in (3.7.2) can be replaced by ε(t, Δ) � εu

for Δ2 � δuu and all t � u.

If, in the important special case Δ = Δ1 = Δ2 = const (Δ is an arbitraryfixed number), we have

V (t) − V (t + Δ) = V1(t)(Δ + o(1))

as t → ∞ then clearly condition [D(1,q)] is satisfied with q(t) ≡ 1 (here theassumption on the uniformity of o(1) disappears).

The relation (3.7.2) follows from (3.4.5): we just substitute the quantities q(t)/t

and Δ/t for q(t) and Δ respectively in (3.4.5) (with V1(t) = L1(t)V (t)/t).In particular, when Δ � q(t) we obtain from (3.7.2) the representation

V (t) − V (t + Δ) = V1(t)Δ(1 + o(1)).

Now let the following condition be satisfied.

[D] The s.v.f. L(t) is differentiable for t � t0 > 0 and, moreover, L′(t) =o(L(t)/t

)as t → ∞.

Under condition [D], the function V (t) is also differentiable for t � t0 and onecan identify the function V1(t) in (3.7.2) with the derivative V1(t) = −V ′(t) (fort � t0) and put q(t) ≡ 0.

In the lattice (arithmetic) case, when ξ is integer-valued with lattice span equalto 1, condition [D(1,q)] stipulates that (3.7.2) holds for Δ = 1.

Observe also that the desired asymptotics for (3.7.1) could be obtained fromTheorem 3.4.4 but only when Δ � σ(n).

Now recall the notation

α = min{α, β}.In what follows, we will consider the following two alternatives:

(1) α < 1, x > n1/γ ; (3.7.3)

(2) α ∈ (1, 2), Eξ = 0, x > n1/γ , (3.7.4)

Page 199: Asymptotic analysis of random walks

168 Random walks with finite mean and infinite variance

where γ < α is an arbitrary fixed number.Now we can state the main assertion.

Theorem 3.7.1. Assume that conditions [<, =] and [D(1,q)] in the form (3.7.2)are satisfied. Then, in the cases (3.7.3), (3.7.4), for Δ ∈ [Δ1,Δ2], Δ2 = o(x),Δ1 � max{cq(x), x−γ0} for an arbitrary fixed γ0 � −1, the following relationholds true. As N → ∞,

P(Sn ∈ Δ[x)) = nV1(x)Δ(1 + o(1)), V1(x) =αV (x)

x, (3.7.5)

where the term o(1) in (3.7.5) is uniform in x, n and Δ ∈ [Δ1,Δ2] such that

x � max{N, n1/γ}, Δ1 � max{cq(x), x−γ0}, Δ2 � xεN

for a fixed function εN ↓ 0 as N ↑ ∞.In the lattice (arithmetic) case, assertion (3.7.5) holds true for integer-valued

x and Δ � max{1, q(x), x−γ0}.

By the uniformity of o(1) in Theorem 3.7.1 we understand the existence of afunction δ(N) ↓ 0 as N ↑ ∞ (depending on εN ) such that the term o(1) in (3.7.5)can be replaced by a function δ(x, n, Δ), with |δ(x, n, Δ)| � δ(N). If we hadassumed in the theorem that n → ∞ then, when constructing the uniformitydomain, we could have replaced the inequality x � max{N,n1/γ} by x � n1/γ .This would have ensured the required convergence x → ∞. The parameter N

was added to cover the case when n remains bounded as x → ∞.If q(x) → 0 as x → ∞, γ0 > 0 then Δx := max{q(x), x−γ0} → 0, and

Theorem 3.7.1 implies an ‘almost local theorem’: for Δ = Δx and x → ∞,n � xγ ,

P(Sn ∈ Δ[x))Δ

∼ nV1(x). (3.7.6)

When q(t) = t−γq , one can put γ0 := γq and then the range of Δ will be of theform Δ � x−γ0 .

If Δ → 0 at a yet faster rate then the relation (3.7.6) does not need to be true(as the distributions of ξ and Sn do not necessarily have densities).

However, if condition [D] is satisfied then the density V ′(t) ∼ −αV (t)/t willexist for t � t0 and a stronger assertion will hold true.

Theorem 3.7.2. Let conditions [<, =], [D] and [Q] be met. Then the distributionof Sn can be represented as a sum of two measures,

P(Sn ∈ ·) = Pn,1(·) + Pn,2(·),where the measure Pn,1, for any fixed r > 1 and all large enough x, has theproperty

Pn,1([x,∞)) < (nV (x))r.

Page 200: Asymptotic analysis of random walks

3.7 Integro-local theorems on large deviations of Sn 169

The measure Pn,2 is absolutely continuous w.r.t. the Lebesgue measure, with thedensity

Pn,2(dx)dx

= −nV ′(x)(1 + o(1))

as x → ∞. The term o(1) is uniform in x and n such that nV (x) < εx (orσ(n)/x < εx) for any fixed function εx → 0 as x → ∞.

Note that Pn,2([x,∞)) ∼ nV (x) and therefore that the distribution of Sn canbe written in the form

P(Sn � x) = Pn,2([x,∞))[1 + O

((nV (x))r

)]for any fixed r > 0, x → ∞, nV (x) < εx → 0.

Proof. The proof of Theorem 3.7.1 follows the scheme of that of Theorem 3.4.4.For y < x, put

Gn := {Sn ∈ Δ[x)}, Bj = {ξj < y}, B =n⋂

j=1

Bj . (3.7.7)

Then

P(Gn) = P(GnB) + P(GnB), (3.7.8)

wheren∑

j=1

P(GnBj) � P(GnB) �n∑

j=1

P(GnBj) −∑

i<j�n

P(GnBiBj). (3.7.9)

We will split the proof into three steps: (1) bounding P(GnB), (2) boundingP(GnBiBj), i �= j, and (3) evaluating P(GnBj).

(1) Bounding P(GnB). We will make use of the crude inequality

P(GnB) � P(S(n) � x; B). (3.7.10)

By Theorems 2.2.1 and 3.1.1, for x > max{N,n1/γ}, N → ∞, we have

P(GnB) < c[nV (y)]r, r =x

y. (3.7.11)

Choose r so that (nV (x)

)r � nV1(x)Δ (3.7.12)

for x � max{N,n1/γ}, Δ � max{x−γ0 , q(x)} and γ < α (for such x and n,condition [Q] is always met). If n → ∞ then one can assume that N < n1/γ � x.Putting n = xγ and comparing the powers of x on both sides of (3.7.12), we seethat (3.7.12) will hold provided that

r − 1 >1 + γ0

α − γ. (3.7.13)

Page 201: Asymptotic analysis of random walks

170 Random walks with finite mean and infinite variance

For n < xγ the inequality (3.7.12) will hold true all the more.Hence for r that satisfy the inequality (3.7.13) we have

P(GnB) = o(nV1(x)Δ). (3.7.14)

(2) Bounding P(GnBiBj). It suffices to bound P(GnBn−1Bn). Let

δ :=1r

<12, Hk := {v : v < (1 − kδ)x + Δ}, k = 1, 2.

Then

P(GnBn−1Bn) =∫

H2

P(Sn−2 ∈ dz)

×∫

H1

P(z + ξ ∈ dv, ξ � δx)P(v + ξ ∈ Δ[x), ξ � δx).

(3.7.15)

Since in the region H1 we have x − v > δx − Δ, by condition [D(1,q)] the lastfactor on the right-hand side of (3.7.15) has the form ΔV1(x − v)(1 + o(1)) �cΔV1(x) as x → ∞. Hence, for large enough x, the integral over H1 in (3.7.15)will not exceed the quantity

cΔV1(x)P(z + ξ ∈ H1, ξ � δx) � cΔV1(x)V (δx).

It is obvious that the integral over H2 in (3.7.15) admits the same upper bound.This implies that∑

i<j�n

P(GnBiBj) � c1Δn2V1(x)V (x) = o(ΔnV1(x)). (3.7.16)

(3) Evaluating P(GnBj). Owing to (3.7.8), (3.7.12) and (3.7.16), the sum-mands P(GnBj) will determine the main term in the asymptotics of P(GnB)and P(Gn).

By virtue of condition [D(1,q)],

P(GnBn) =∫

H1

P(Sn−1 ∈ dz)P(ξ ∈ Δ[x − z), ξ � δx)

�∫

H1

P(Sn−1 ∈ dz)P(ξ ∈ Δ[x − z)

)=∫

H1

P(Sn−1 ∈ dz) V1(x − z)[(1 + o(1))Δ + o(q(x − z))

].

(3.7.17)

As before, set

σ(n) := max{V (−1)(1/n), W (−1)(1/n)

},

Page 202: Asymptotic analysis of random walks

3.7 Integro-local theorems on large deviations of Sn 171

where V (−1) are W (−1) are functions inverse to V and W respectively. Thenwe obtain from Corollaries 2.2.4 and 3.1.2 (see also (3.1.9)) that P(|Sn−1| >

Mσ(n)) → 0 as M → ∞. Therefore

E[V1(x − Sn−1); |Sn−1| < Mσ(n)

] ∼ V1(x)

when M → ∞, Mσ(n) = o(n1/γ), x � n1/γ , γ < α (in this case x � Mσ(n)).Moreover, as M → ∞, one obviously has that

E[V1(x − Sn−1); Sn−1 ∈ (−∞,−Mσ(n))

]= o(V1(x)),

E[V1(x − Sn−1); Sn−1 ∈ (Mσ(n), (1 − δ)x + Δ)

]= o(V1(x)).

The above discussion implies that∫H1

P(Sn−1 ∈ dz)V1(x−z) = E[V1(x−Sn−1); Sn−1 � x(1−δ)+Δ

] ∼ V1(x).

Similarly, ∫H1

P(Sn−1 ∈ dz)V1(x − z)q(x − z) < cV1(x)q(x).

Hence, for Δ � q(x), by virtue of (3.7.17) we obtain

P(GnBn) � ΔV1(x)(1 + o(1)).

Similarly, using (3.7.17) one finds that

P(GnBn) �(1−δ)x∫−∞

P(Sn−1 ∈ dz)P(ξ ∈ Δ[x − z)

) ∼ ΔV1(x). (3.7.18)

From (3.7.17) and (3.7.18) we obtain

P(GnBn) = ΔV1(x)(1 + o(1)).

Together with (3.7.8)–(3.7.12) and (3.7.16) this yields

P(Gn) = ΔnV1(x)(1 + o(1)).

The required uniformity of the bound o(1) is obvious from the above argument.The theorem is proved.

Remark 3.7.3. To bound P(GnB) (see step (1) of the proof of Theorem 3.7.1),instead of the crude inequality (3.7.10) one could use more precise approaches,which would yield stronger results. For more detail, see Remark 4.7.3 below.

Proof of Theorem 3.7.2. Using the set B defined in (3.7.7), put

Pn,1(A) := P(Sn ∈ A; B), Pn,2(A) := P(Sn ∈ A; B).

Page 203: Asymptotic analysis of random walks

172 Random walks with finite mean and infinite variance

The desired bound for Pn,1([x,∞)) coincides with the bound

P(GnB) � c[nV (x)

]rfrom the proof of Theorem 3.7.1 (see (3.7.11)).

The evaluation of the density Pn,2 is done in the same way as that of the prob-ability P(GnB) in Theorem 3.7.1. One has (cf. (3.7.17))

P(Sn ∈ dx; Bn)dx

=

(1−δ)x∫−∞

P(Sn−1 ∈ dz)P(ξ ∈ dx − z)

dx

= −(1−δ)x∫−∞

P(Sn−1 ∈ dz)V ′(x − z)

= −E[V ′(x − Sn−1); Sn−1 � (1 − δ)x

].

The subsequent computation of the last integral does not differ from the respectiveargument in the proofs of Theorems 3.4.4, 3.7.1 etc.

The estimation of P(Sn ∈ dx; Bn−1Bn)/dx can be dealt with in a similar way.The theorem is proved.

Remark 3.7.4. If n → ∞ and the order of magnitude of the deviations is fixed,say, by

x ∼ A(n) := n1/γLA(n), (3.7.19)

where γ < α and LA is an s.v.f., then one can suggest another approach to prov-ing integro-local theorems. With this approach, condition [D(1,q)] (see (3.4.5))can be substantially weakened to the following condition:

[DA] We assume that n → ∞ and condition [Rα,ρ] with ρ > −1 is satisfied.Moreover, for Δn = εnb(n), where εn is a sequence converging to zero, we have

V (x) − V (x + Δn) = ΔnV1(x)(1 + o(1)), V1(x) = αV (x)/x. (3.7.20)

Condition (3.7.20) can be expressed in terms of the deviations x → ∞ only.Resolving the relation x ∼ A(n) in (3.7.19) for n, we obtain n ∼ A(−1)(x),Δn = εnb(A(−1)(x)). Taking εn to be an s.v.f., we arrive at the equality Δn =xγ/αLΔ(x), where LΔ is an s.v.f. Since 0 < γ/α < 1, one has, as x → ∞,

xγ/αLΔ(x) = o(x), xγ/αLΔ(x) → ∞,

and in this sense condition [DA] is much weaker than condition [D(1,0)], becauseit requires regular variation of the increments V (x) − V (x + Δ) only for verylarge Δ values.

If condition [DA] is satisfied then, for any fixed Δ > 0, the relation (3.7.5)holds true. In the arithmetic case, one should put Δ = 1.

The scheme of the proof of this assertion remains the same: as before, one

Page 204: Asymptotic analysis of random walks

3.8 Uniform relative convergence to a stable law 173

should use (3.7.14) and then observe that the principal contribution to the mainterm P(GnBj) = P(Sn ∈ Δ[x), ξn � y) comes from the integral

J :=

∞∫y

P(ξ ∈ dt)P(Sn−1 ∈ Δ[x − t); |Sn−1| < Nnb(n)

),

where Nn is an unboundedly increasing sequence. It is evident that

J ∼x+Nnb(n)∫

x−Nnb(n)

P(ξ ∈ dt)P(Sn−1 ∈ Δ[x − t)

)

∼∑k∈In

tk+1∫tk

P(ξ ∈ dt + x)P(Sn−1 ∈ Δ[−t)

),

where tk := kΔn and In := [−Nn/εn, Nn/εn]. If the ξi are non-lattice then, bythe Stone–Shepp theorem (see [258, 259] and also § 8.4 of [32]),

P(Sn−1 ∈ Δ[−t)) ∼ Δb(n)

f(−tk/b(n)), t ∈ [tk, tk+1),

where f is the density of the stable distribution Fα,ρ. Therefore, the principal partof the integral in question is asymptotically equivalent to

Δb(n)

∑k∈In

P(ξ ∈ [x + tk, x + tk+1)

)f(−tk/b(n))

∼ Δ∑k∈In

Δn

b(n)V1(x)f(−tk/b(n))

= ΔV1(x)∑k∈In

εnf(−tk/b(n)) ∼ ΔV1(x).

Estimation of the P(GnBiBj) can be dealt with in a similar way (cf. (3.7.16)).

3.8 Uniform relative convergence to a stable law

The assertions of this and the next sections are consequences of the results of§§ 2.2–2.5 and §§ 3.1, 3.3. They will be valid for values of the parameter α fromthe interval (0, 2).

Denote by Nα,ρ the domain of normal attraction to the stable law Fα,ρ, i.e. theclass of distributions F for which condition [Rα,ρ] holds with L(t) → L = constas t → ∞. For distributions F ∈ Nα,ρ, the inverse function F (−1) has a simpleexplicit asymptotics:

F (−1)(1/n) = b(n) ∼ (Ln)1/α,

V (−1)(1/n) = σ(n) ∼ ρ1/α+ b(n) ∼ (Lnρ+)1/α.

(3.8.1)

Page 205: Asymptotic analysis of random walks

174 Random walks with finite mean and infinite variance

It is obvious that the stable distribution Fα,ρ itself also belongs to Nα,ρ, and that,for any F ∈ Nα,ρ with ρ > −1 and all v � v0 > 0,

nV (vb(n)) ∼ nρ+F (vb(n)) ∼ ρ+v−α as n → ∞. (3.8.2)

The property (3.8.2) enables one to obtain the following assertion on uniformrelative convergence to a stable law.

Theorem 3.8.1. Let condition [Rα,ρ], ρ > −1, α ∈ (0, 2), be satisfied and letEξ = 0 provided that E|ξ| < ∞. In this case, F ∈ Nα,ρ iff

supt�0

∣∣∣∣P(Sn/b(n) � t)Fα,ρ,+(t)

− 1∣∣∣∣→ 0 (3.8.3)

as n → ∞, where Fα,ρ,+(t) = Fα,ρ([t,∞)).

The assertion of the theorem means that for F ∈ Nα,ρ the large deviationproblem for P(Sn � x) is, in a sense, non-existent: the limiting law Fα,ρ(t)gives a good approximation

P(Sn � x) ∼ Fα,ρ,+(x/b(n))

uniformly in all x � 0. In the central limit theorem on convergence to the normallaw, this is only possible when the ξj are normally distributed.

An assertion of the form (3.8.3) (with a convergence rate estimate) also followsfrom the results of [24] but under the much stronger condition that the pseudo-moments of the orders γ > α are finite:∫

|t|γ |F − Fα,ρ|(dt) < ∞,

which necessarily entails a high rate of convergence of F (t) − Fα,ρ(t) to zeroas |t| → ∞.

Proof of Theorem 3.8.1. Sufficiency. Let F ∈ Nα,ρ. Theorems 2.6.1 and 3.4.1imply (see (3.4.1)) that, for any sequence t = tn → ∞ and for x = sσ(n),

sups�t

∣∣∣∣P(Sn � x)nV (x)

− 1∣∣∣∣→ 0, (3.8.4)

or, equivalently, that

sups�t

∣∣∣∣P(Sn � sb(n))nV (sb(n))

− 1∣∣∣∣→ 0,

where, by virtue of (3.8.2),

nV (sb(n)) ∼ ρ+s−α ∼ Fα,ρ,+(s).

So (3.8.4) can also be written as

sups�t

∣∣∣∣P(Sn � sb(n))Fα,ρ,+(s)

− 1∣∣∣∣→ 0 (3.8.5)

Page 206: Asymptotic analysis of random walks

3.8 Uniform relative convergence to a stable law 175

as n → ∞, t = tn → ∞. However, it follows from Theorem 1.5.1 and thecontinuity of Fα,ρ that, for any fixed t > 0,

sup0�s�t

∣∣∣∣P(Sn � sb(n))Fα,ρ,+(s)

− 1∣∣∣∣→ 0. (3.8.6)

This means that there exists a sequence tn → ∞ increasing sufficiently slowlythat (3.8.6) remains valid with t replaced by tn. Together with (3.8.5), thisproves (3.8.3).

Necessity. It follows from (3.8.3), (3.8.4) that

nV (tb(n)) ∼ ρ+t−α, nF (tb(n)) ∼ t−α.

Hence

F (tb) ∼ t−αF (b), L(tb) ∼ L(b) (3.8.7)

for any sequences t and b tending to infinity. But this is only possible whenL(b) → L = const as b → ∞. Assuming the contrary – for instance, thatL(b) → ∞ as b → ∞ – one can find a sequence b′ such that b′/b → ∞ and

L(b′) > L2(b). (3.8.8)

Setting t := b′/b in (3.8.7) we obtain L(b′) ∼ L(b), which contradicts (3.8.8).The theorem is proved.

An assertion similar to Theorem 3.8.1 can also be obtained for the distributionof Sn. First we observe that it follows from the ‘invariance principle’ for the caseof convergence to stable laws (see § 1.6) that, as n → ∞,

Sn

b(n)⇒ ζ(1), (3.8.9)

where ζ(u) is a stable process corresponding to the distribution Fα,ρ (for this pro-cess, ζ(1)⊂=Fα,ρ), ζ(t) = supu�t ζ(u). Denote by Hα,ρ the distribution of ζ(1).Then, using an argument like the one above, it follows from Theorems 3.4.1, 2.6.1and 3.8.1 and the relations (3.8.2) and Sn � Sn that

Hα,ρ,+(t) := Hα,ρ([t,∞)) ∼ t−α as → ∞. (3.8.10)

Note that the convergence (3.8.9) can also be derived from the results of [141](that paper also gives an explicit expression for Hα,ρ).

Theorem 3.8.2. Under the conditions of Theorem 3.8.1, F ∈ Nα,ρ iff

supt>0

∣∣∣∣P(Sn � tb(n))Hα,ρ,+(t)

− 1∣∣∣∣→ 0

as n → ∞.

Proof. The proof of Theorem 3.8.2 repeats that of Theorem 3.8.1. One just hasto replace Sn by Sn and Fα,ρ by Hα,ρ everywhere.

Page 207: Asymptotic analysis of random walks

176 Random walks with finite mean and infinite variance

3.9 Analogues of the law of the iterated logarithm in the case of infinite

variance

The upper and lower bounds for the distributions of Sn and Sn obtained in thischapter also enable one to establish assertions of the law of the iterated logarithmtype for the sequence {Sn} in the case Eξ2

j = ∞.

Theorem 3.9.1.

(i) Let condition [ · , <], α < 1, or the conditions [<, <], α ∈ (1, 2), Eξj = 0,W (t) � c1V (t) be satisfied. Then, for any ε > 0,

lim supn→∞

Sn

σ(n)(ln n)1/α+ε< 1 a.s. (3.9.1)

(ii) Let the conditions [<, ≶], α < 1, W (t) � c1V (t) or the conditions [Rα,ρ],α ∈ (1, 2), ρ > −1, Eξ = 0 be satisfied. Then, for any ε > 0,

lim supn→∞

Sn

σ(n)(lnn)1/α−ε> 1 a.s. (3.9.2)

It is not hard to see that, since ε > 0 is arbitrary, the relations (3.9.1) and (3.9.2)are respectively equivalent to the assertions

lim supn→∞

Sn

σ(n)(lnn)1/α+ε= 0 a.s.,

lim supn→∞

Sn

σ(n)(lnn)1/α−ε= ∞ a.s..

Let ln+ t := ln max{1, t}. Theorem 3.9.1 implies the following.

Corollary 3.9.2. Let the conditions [<, ≶], α < 1, W (t) � c1V (t) or the con-ditions [Rα,ρ], α ∈ (1, 2), ρ > −1, Eξ = 0 be satisfied. Then

lim supn→∞

ln+ Sn − lnσ(n)ln lnn

=1α

a.s. (3.9.3)

The relation (3.9.3) can also be written as

lim supn→∞

(Sn

σ(n)

)1/ln ln n

= e1/α a.s. (3.9.4)

Observe that if, for the s.v.f. L(t) from the representation V (t) = t−αL(t), onehas | lnL(t)| � ln ln t as t → ∞ then the function Lσ(n) from the representationσ(n) = n1/αLσ(n) will have the same property, and in (3.9.1)–(3.9.4) one canreplace σ(n) by n1/α.

The formulation (3.9.3) justifies, to some extent, the use of the term ‘the law ofthe iterated logarithm’, since it contains the scaling factor ln lnn; in the assertionsfor the sums Sn themselves (not for ln+ Sn) it is absent. There is a large numberof papers devoted to analogues of the law of the iterated logarithm in the infinitevariance case (see e.g. the bibliographies in [190, 164]). However, in order to

Page 208: Asymptotic analysis of random walks

3.9 Analogues of the law of the iterated logarithm 177

obtain (3.9.4), in all of them rather strong conditions on the ξj are imposed – forinstance, that their distribution belongs to the normal domain of attraction of astable law: F ∈ Nα,ρ. Theorems 3.9.1–3.9.4 extend these results.

Proof of Theorem 3.9.1. If one follows the classical way of proving the law ofthe iterated logarithm using the Borel–Cantelli lemma then the problem reducesto the following (see e.g. Chapter 19 of [49]): to demonstrate (3.9.1), one has toshow that ∑

k

P(Snk� xk) < ∞, (3.9.5)

where nk := �Ak , xk := σ(nk)(lnnk)1/α+ε and A > 1 is an arbitrary fixednumber. To prove (3.9.2), one has to establish that∑

k

P(Snk− Snk−1 � yk) = ∞

or, equivalently, that ∑k

P(Smk� yk) = ∞, (3.9.6)

where mk := nk − nk−1 = �nk(1 − A−1) + i, i assumes the values 0 or 1 andyk := σ(nk)(lnnk)1/α−ε.

First we prove (3.9.5) and (3.9.1). By virtue of Corollaries 2.2.4 and 3.1.2, forx � σ(n) we have

P(Sn � x) � cnV (x).

Putting x := σ(n)(ln n)1/α+ε, we obtain by Theorem 1.1.4(iii) that, for any fixedδ > 0,

P(Sn � x) � c(lnn)−(1/α+ε)(α−δ)

as n → ∞. For δ := α2ε/3 and small enough ε we have

(1/α + ε)(α − δ) > 1 + αε/2, P(Snk� xk) � c1k

−(1+αε/2).

This means that the series (3.9.5) converges, and hence (3.9.1) holds true.Now we prove (3.9.6) and (3.9.2). By virtue of Theorem 2.5.2 and Corol-

lary 3.3.2, for x > σ(n) and m := �n(1 − A−1) we have

P(Sm � x) � cnV (x).

Setting x := σ(n)(lnn)1/α−ε, one obtains

P(Sm � x) � (lnn)−(1/α−ε)(α+δ),

where (1/α − ε)(α + δ) < 1 − αε/2 for δ := α2ε/2. This yields

P(Smk� yk) � c1k

−(1−αε/2),

which means that the series (3.9.6) diverges and hence that (3.9.2) is true. Thetheorem is proved.

Page 209: Asymptotic analysis of random walks

178 Random walks with finite mean and infinite variance

It follows from Theorem 3.9.1 that, provided that the conditions [<, ≶], α < 1and W (t) � cV (t) are satisfied, if for the random walk {Sk} there exists an‘exact’ upper boundary, i.e. a function ψ(n) such that

lim supn→∞

Sn

ψ(n)= 1 a.s.,

then this boundary is of the form σ(n)(lnn)1/α+o(1). If, however, W (t) � V (t)as t → ∞ then, under assumptions that are otherwise the same, one can only find‘upper bounds’ for the upper boundary and the form of these bounds (when theyexist) will, generally speaking, be different.

First consider the case α < 1.

Theorem 3.9.3. Let the following conditions be satisfied: [>, <] with α < 1and W (t) � V (t)(ln t)γ for some γ > 1 and all large enough t. Then, for anyfixed ε > 0,

lim supn→∞

Sn(lnn)ε

σW (n)< −1 a.s. (3.9.7)

or, equivalently,

lim supn→∞

Sn(lnn)ε

σW (n)= −∞ a.s.

Here one should note the fact that the upper bound proves to be negative and,moreover, for any ε > 0,

lim supn→∞

(Sn + σW (n)(lnn)−ε

)< 0,

supn�0

(Sn + σW (n)(lnn)−ε

)< ∞ a.s.

Proof of Theorem 3.9.3. We will follow the same line of reasoning, based onthe Borel–Cantelli lemma, as was used to prove Theorem 3.9.1(i). Similarlyto (3.9.5), (3.9.6), it suffices to verify that, with probability 1, there will occuronly finitely many events

Ck :={

maxnk−1�n�nk

Sn � −xk

}, k � 1,

where nk := �Ak , xk := σW (nk)(lnn)−ε and A > 1 is an arbitrary fixed num-ber. To this end, it suffices, in turn, to show that the series

∑k P(Ck) converges.

Now

P(Ck) � Pk,1 + Pk,2,

where

Pk,1 := P(Snk−1 � −2xk), Pk,2 = P(Smk� xk)

and mk := nk − nk−1 ∼ nk(1−A−1). Here by Theorem 2.3.5 (see (2.3.8)) one

Page 210: Asymptotic analysis of random walks

3.9 Analogues of the law of the iterated logarithm 179

has

Pk,1 � c1nk−1V (σW (nk−1) − 2xk)

+ exp{−c

(2xk + σ(nk−1)

σW (nk−1)

)−δ}(3.9.8)

for δ > 0, c1 < ∞, 0 < c < ∞. Since xk+1 = o(σW (nk)) as k → ∞, we obtain(replacing, for convenience, the index k − 1 by k) that

nkV (σW (nk) − 2xk+1) ∼ nkV (σW (nk)) � cnkW (σW (nk))(ln nk)−γ

∼ c(lnnk)−γ ∼ ck−γ lnA.

Now consider the second term on the right-hand side of (3.9.8). It is not hardto see that, for any fixed ε1 > 0 and all large enough n,

σ(n) = V (−1)(1/n) = inf {t : V (t) � 1/n}� inf

{t : W (t)(ln t)−γ � 1/n

}< σW (n)(lnn)−γ/β+ε1 .

Therefore(2xk+1 + σ(nk)

σW (nk)

)−δ

�(c1(lnnk)−ε + (lnnk)−γ/β+ε1

)−δ

� c2

(k−ε + k−γ/β+ε1

)−δ � c3kv

with v := min{εδ, (γ/β − ε1)δ

}> 0 provided that ε1 < γ/β. Hence the second

summand in (3.9.8) decays no slower than e−c3kv � k−γ . From this it followsthat Pk,1 � ck−γ and ∑

k

Pk,1 < ∞.

Now consider the terms Pk,2. By virtue of Theorem 2.4.1 (see (2.4.2)), forany ε2 > 0,

Pk,2 � c1mkV (xk) � c2nkV (xk) = c2nkV(σW (nk)(lnnk)−ε

)� c2(lnnk)αε+ε2nkW (σW (nk))(ln σW (nk))−γ ∼ c3k

αε−γ+ε2 .

Next we observe that it will suffice to prove (3.9.7) for any ε ∈ (0, (γ − 1)/3α). Ifit holds for any ε from this interval then it will certainly hold for ε � (γ − 1)/3α.Now, if ε ∈ (0, (γ − 1)/3α) then, for ε2 < (γ − 1)/3,

αε − γ + ε2 � (γ − 1)/3 − γ + ε2 � −1 − (γ − 1)/3 < −1.

Hence∑

k Pk,2 < ∞. The theorem is proved.

If, in the assumptions of Theorem 3.9.3, we were to require in addition thatcondition [Rα,ρ] (with ρ = −1) is met then the distribution of Sn/σW (n) wouldconverge to the respective stable law and so, for any fixed ε > 0, infinitely manyevents An = {Sn � −εσW (n)} would occur. Therefore, if there exists an upper

Page 211: Asymptotic analysis of random walks

180 Random walks with finite mean and infinite variance

boundary {ψ(n)} for {Sn} then, by Theorem 3.9.3, it will have to be of theform ψ(n) = −σW (n)ψ1(n), where ψ1(n) → 0 and | lnψ1(n)| = o(ln lnn) asn → ∞.

Now consider the problem on the upper boundary of the random walk in thecase Eξ = 0, [<, <], β > 1, W (t) � V (t). As in Theorem 3.9.3, the resulthere will be different from the assertion of Theorem 3.9.1, where we assumedthat W (t) � c1V (t).

Theorem 3.9.4. Let the following conditions be satisfied: [<, <] with β > 1,Eξ = 0 and V (t) < cW (t) for some c < ∞. Then, for any fixed v > 0,

lim supn→∞

Sn

σW (n) ln n< v a.s. (3.9.9)

or, equivalently,

lim supn→∞

Sn

σW (n) ln n= 0 a.s. (3.9.10)

Proof. Similarly to (3.9.5), (3.9.6), it suffices to verify that, with probability 1,there will occur only finitely many events

Ck ={

maxnk−1�n�nk

Sn � xk

}, k � 1,

where nk := �Ak , xk := vσW (nk) ln nk and A > 1 is an arbitrary fixed number.We will use Theorem 3.1.1 in the case where condition (3.1.6) is satisfied. Firstnote that, for x := vσW (n) ln n, the latter condition is met. Indeed,

nW

(x

lnx

)∼ nW

(vσW (n) ln n

lnσW (n)

)∼ nv−βββW (σW (n)) ∼ v−βββ = const < ∞.

Therefore, by virtue of Theorem 3.1.1, for any ε > 0 and all large enough n,

P(Ck) � P(Snk� xk) < c1nkV (xnk

) < c2nkW (vσW (nk) ln nk)

� c3nkW (σW (nk))(ln nk)−β+ε � c4k−β+ε.

Hence, for ε < (β − 1)/2 one has

P(Ck) � c4k−1−(β−1)/2,

∑k

P(Ck) < ∞.

That (3.9.10) and (3.9.9) are equivalent to one another follows from the fact that,in the case Eξ = 0, one has lim supn→∞ Sn = ∞ (see e.g. Chapter 10 of [49]).The theorem is proved.

Remark 3.9.5. It can seen from the proof of Theorem 3.9.4 that the factor lnn

multiplying σW (n) in (3.9.9) appears there not from the upper bound for P(Ck)but rather from the condition (3.1.6) for the applicability of Theorem 3.1.1. This

Page 212: Asymptotic analysis of random walks

3.9 Analogues of the law of the iterated logarithm 181

observation, together with a comparison with Theorem 3.9.1 (the conditions ofTheorems 3.9.1 and 3.9.4 have, for β > 1, a non-empty intersection), leads toa natural conjecture that the above-mentioned factor lnn could be replaced by aslower growing function (say, by (lnn)γ with γ < 1/β).

Remark 3.9.6. It is obvious that Theorems 3.9.1–3.9.4 also enable one to con-struct bounds for lower boundaries for {Sk} (by considering the reflected randomwalk {−Sk}).

Page 213: Asymptotic analysis of random walks

4

Random walks with jumps having finite variance

In this chapter, we will assume that

Eξ = 0, d := Eξ2 < ∞.

As before, the main objects of study will be the probabilities of large deviationsof Sn, Sn, Sn(a) and also the general boundary problem on the asymptotics of

P(

maxk�n

(Sk − g(k)) � 0),

where the boundary{g(k)

}is such that mink�n g(k) =: x → ∞.

4.1 Upper bounds for the distribution of Sn

On the one hand, by the central limit theorem, as n → ∞,

P(Sn � x) ∼ 1 − Φ(x/

√nd)

uniformly in x ∈ (0, Nn√

n), where Nn → ∞ slowly enough. On the other hand,as we saw in § 1.1.4, when condition [ · , =], V ∈ R is met and x → ∞ one has

P(Sn � x) ∼ nV (x) = nP(ξ � x) (4.1.1)

for any fixed n (and hence also for an n growing sufficiently slowly). These twoasymptotics ‘interlock’ with each other in the following way. If, in addition to theabove-mentioned conditions, it is also assumed that E(ξ2; |ξ| > t) = o(1/ ln t)as t → ∞ and that x >

√n, then

P(Sn � x) ∼ 1 − Φ(x/

√nd)

+ nV (x), n → ∞ (4.1.2)

182

Page 214: Asymptotic analysis of random walks

4.1 Upper bounds for the distribution of Sn 183

(Corollary 7 of [237]; see also [206]1). The representation (4.1.2) will also bediscussed below in § 4.7.2 of the present chapter.

In what follows, we will need a value σ(n) that will characterize the range ofdeviations of Sn where the asymptotics P(Sn � x) change from the ‘normal’asymptotics, 1 − Φ

(x/

√nd), to the asymptotics nV (x) describing P(Sn � x)

for large enough x. We will denote this value, as before, by σ(n). It is definedas the deviation for which the asymptotics e−x2/2nd(1+o(1)) and nV (x) ‘almostcoincide’. It is easily seen that one can put

σ(n) =√

(α − 2)nd lnn, (4.1.3)

σ(n) being the principal part of the solution for x of the equation

− x2

2nd= lnnV (x) = lnn − α lnx + o(lnx).

Recall that, under the assumptions of the preceding chapters, the deviations x atwhich the approximation by a stable law was replaced with an approximation bythe quantity nV (x) were of the form F (−1)(1/n), which had a power factor n1/α,

α � 2.

Remark 4.1.1. To avoid confusion over the definition of σ(n) for n = 1 (we donot exclude the value n = 1), we just put σ(1) := 1.

It will be assumed throughout the present section that

d = 1, x >√

n,

so that nV (x) → 0 as x → ∞. As before, we will use the notation

Bj = Bj(0) = {ξj < y}, B =n⋂

j=1

Bj

(see (2.1.1)) with y = x/r, where r � 1 is fixed.

Theorem 4.1.2. Let the conditions [ · , <], α > 2, Eξ = 0 and Eξ2 = 1 besatisfied. Then the following assertions hold true.

(i) For any fixed h > 1, s0 > 0, for x = sσ(n), s � s0 and all small enoughΠ = nV (x), one has

P ≡ P(Sn � x; B) � er

(Π(y)

r

)r−θ

, (4.1.4)

1 In [206], the representation (4.1.2) was given under the additional assumption that E|ξ|2+δ < ∞,δ > 0 (Theorem 1.9), with a reference to [194]. The latter paper in fact dealt only with thecase x � n1/2 ln n (then (4.1.2) turns into (4.1.1)) but without additional moment conditions.According to [191], the result presented in [206] was obtained in A.V. Nagaev’s Dr. Sci. thesis(On large deviations for sums of independent random variables, Institute of Mathematics of theAcademy of Sciences of the UzSSR, Tashkent, 1970). In [225] the representation (4.1.2) wasobtained under the assumption that F (t) = O(t−α) as t → ∞.

Page 215: Asymptotic analysis of random walks

184 Random walks with jumps having finite variance

where

Π(y) = nV (y), θ =hr2

4s2

(1 + b

ln s

lnn

), b =

α − 2. (4.1.5)

(ii) For any fixed h > 1, τ > 0, for x = sσ(n), s2 < (h − τ)/2 and all largeenough n, we have

P � e−x2/2nh. (4.1.6)

Corollary 4.1.3.

(i) If x = sσ(n), s → ∞ then, for any ε > 0 and all small enough Π = nV (x),

P � Πr−ε. (4.1.7)

(ii) If s2 > c lnn then

P � c1Πr. (4.1.8)

(iii) If s is fixed and r = 2s2/h � 1 then, as n → ∞,

P � cΠs2/h+o(1).

In particular, for s2 > 2h and all large enough n,

P � cΠ2. (4.1.9)

Corollary 4.1.4.

(i) If s → ∞ then, for any δ > 0 and all small enough nV (x),

P(Sn � x) � nV (x)(1 + δ). (4.1.10)

(ii) If s2 � h + τ for a fixed τ > 0 then, for all small enough nV (x),

P(Sn � x) � cnV (x). (4.1.11)

(iii) For any fixed h > 1, τ > 0, for s2 < (h − τ)/2 and all large enough n,

P(Sn � x) � e−x2/2nh. (4.1.12)

Remark 4.1.5. It is not hard to verify that, as in Corollaries 2.2.4 and 3.1.2, thereexists a function ε(t) ↓ 0 as t ↑ ∞ such that, along with (4.1.10), one has

supx: s�t

P(Sn � x)nV (x)

� 1 + ε(t), x = sσ(n).

Proof of Corollary 4.1.3. Since θ → 0 as s → ∞, assertion (i) follows in an ob-vious way from (4.1.4). We now prove the second assertion. Since y = sσ(n)/r,for any δ > 0 one has

T :=r

Π(y)=

r

nV (y)< c1s

α+δn(α+δ)/2−1.

Page 216: Asymptotic analysis of random walks

4.1 Upper bounds for the distribution of Sn 185

From this we obtain

lnT θ � hr2

4s2

(1 + b

ln s

lnn

)[ln c1 + (α + δ) ln s +

(α + δ

2− 1

)lnn

].

Clearly, for s2 > c lnn the right-hand side of this inequality is bounded. Togetherwith (4.1.4), this proves (4.1.8).

If s ≡ x/σ(n) is fixed and nV (x) → 0 then necessarily n → ∞. Therefore,

r − θ = r − hr2

4s2

(1 + b

ln s

lnn

)= ψ(r, s) + o(1),

where the function

ψ(r, s) := r − hr2

4s2

achieves at the point r0 = 2s2/h its maximum in r, which is equal to

ψ(r0, s) =s2

h. (4.1.13)

Hence

r0 − θ � s2

h+ o(1). (4.1.14)

In particular, for s2 > 2h and all large enough n, we obtain

r0 − θ > 2. (4.1.15)

This establishes (4.1.9). Corollary 4.1.3 is proved.

Proof of Corollary 4.1.4. The proof, as well as parallel arguments from Chap-ters 2 and 3 (cf. Corollaries 2.2.4 and 3.1.2), is based on the inequality (2.1.6):

P(Sn � x) � nV (y) + P. (4.1.16)

Assertion (i) follows from (4.1.7) by setting r := 1 + 2ε and using standardarguments from calculus.

We now prove (ii). If s → ∞, the assertion follows from (i). If s is boundedthen, setting r := r0 and assuming without loss of generality that s2 > h + τ (fora suitable h > 1 and a τ that is somewhat smaller than that in condition (ii)), weobtain from (4.1.4), (4.1.13) and (4.1.14) that

P(Sn � x) � nV (y) + c(nV (y))1+τ/2 � 2nV (x/r0) ∼ 2rα0 nV (x).

Assertion (iii) follows from inequalities (4.1.6) and (4.1.16); these imply that

P(Sn � x) � nV (x) + e−x2/2nh, (4.1.17)

where, for s2 < (h − τ)/2 and n → ∞,

e−x2/2nh > exp{− (h − τ)

2(α − 2)n lnn

2nh

}> n−(α−2)/4 � nV (

√n) � nV (x) (4.1.18)

Page 217: Asymptotic analysis of random walks

186 Random walks with jumps having finite variance

(recall that x >√

n). Hence the second term on the right-hand side of (4.1.17)dominates. Slightly changing h (if necessary), we obtain (4.1.12). Corollary 4.1.4is proved.

Proof of Theorem 4.1.2. (i) We will follow the same line of reasoning as in theproofs of Theorems 2.2.1 and 3.1.1. The main elements will again be the basicinequality (2.1.8) and bounds for R(μ, y). In the present context, however, thepartition of R(μ, y) into ‘subintegrals’ will differ from (2.2.7). Put M(v) := v/μ,so that M := 2α/μ = M(2α), and write

R(μ, y) = I1 + I2,

where, for a fixed ε > 0,

I1 :=

M(ε)∫−∞

eμt F(dt) =

M(ε)∫−∞

(1 + μt +

μ2t2

2eμθ(t)

)F(dt) (4.1.19)

with 0 � θ(t)/t � 1. Now∫M(ε)

−∞ F(dt) � 1,

M(ε)∫−∞

tF(dt) = −∞∫

M(ε)

tF(dt) � 0 (4.1.20)

andM(ε)∫−∞

t2eμθ(t) F(dt) � eε

M(ε)∫−∞

t2 F(dt) � eε =: h. (4.1.21)

Therefore

I1 � 1 +μ2h

2. (4.1.22)

Next we will bound

I2 := −y∫

M(ε)

eμtdF+(t) � V (M(ε))eε + μ

y∫M(ε)

V (t)eμtdt. (4.1.23)

First consider, for M(ε) < M < y, the integral

I2,1 := μ

M∫M(ε)

V (t)eμtdt =

2α∫ε

V (v/μ)evdv.

As μ → 0,

V (v/μ)ev ∼ V (1/μ)g(v), (4.1.24)

where the function g(v) := v−αev is convex on (0,∞). Hence

I2,1 � 2α − ε

2V (1/μ)

(g(ε) + g(2α)

)(1 + o(1)) � cV (1/μ). (4.1.25)

Page 218: Asymptotic analysis of random walks

4.1 Upper bounds for the distribution of Sn 187

The integral

I2,2 := μ

y∫M

V (t)eμtdt

can be dealt with in the same way as I3 in (2.2.11)–(2.2.15), which yields

I2,2 � V (y)eμy(1 + ε(λ)), λ := μy, (4.1.26)

ε(λ) ↓ 0 as λ ↑ ∞. Summing up (4.1.22)–(4.1.25), we obtain

R(μ, y) � 1 + μ2h/2 + cV (1/μ) + V (y)eμy(1 + ε(λ)), (4.1.27)

and so

Rn(μ, y) � exp{

nμ2h

2+ cnV

(1μ

)+ nV (y)eμy(1 + ε(λ))

}. (4.1.28)

Now we will choose

μ :=1y

lnT, T :=r

nV (y),

so that λ = lnT (cf. (2.2.18)). Then (4.1.28) will become

Rn(μ, y) � exp{

nμ2h

2+ cnV

(1μ

)+ r(1 + ε(λ))

}, (4.1.29)

where, as before, by Theorem 1.1.4(iii) for any δ > 0,

nV

(1μ

)∼ nV

( y

lnT

)∼ cnV

(y

| lnnV (y)|)

� cnV (y)| lnnV (y)|α+δ → 0 (4.1.30)

since nV (y) → 0. Therefore from (2.1.8) we obtain (cf. (2.2.19))

lnP � −r lnT + r +nh

2y2ln2 T + ε1(T )

=(−r +

nh

2y2lnT

)lnT + r + ε1(T ), (4.1.31)

where ε1(T ) ↓ 0 as T ↑ ∞. For x = sσ(n), σ(n) =√

(α − 2)n lnn andnV (x) → 0, we have

lnT = − lnnV (x) + O(1)

= − lnn + α ln s +α

2lnn + O

(ln lnn + | lnL(sσ(n))|)

=α − 2

2lnn

(1 + b

ln s

lnn

)(1 + o(1)), (4.1.32)

Page 219: Asymptotic analysis of random walks

188 Random walks with jumps having finite variance

where b = 2α/(α − 2) (the term o(1) appears in the last equality owing to ourassumption that n + s → ∞). Hence

nh

2y2lnT =

hr2

4s2

(1 + b

ln s

lnn

)(1 + o(1)),

so that, by virtue of (4.1.31),

lnP � r −[r − h′r2

4s2

(1 + b

ln s

lnn

)]lnT

for any h′ > h > 1 and all small enough nV (x). This proves the first assertionof Theorem 4.1.2.

(ii) Since we are assuming everywhere that x >√

n, we have

s = x/σ(n) > 1/√

(α − 2) lnn.

Hence it suffices to prove (4.1.6) for values of s that satisfy

n−γ < s2 <h − τ

2

for an arbitrarily small fixed γ > 0. This corresponds to the following range ofvalues for x2:

cn1−γ lnn < x2 <12(h − τ)(α − 2)n lnn. (4.1.33)

In this case, as will be shown below, the principal contribution to the exponent onthe right-hand side of (4.1.28) will come from the quadratic term nμ2h/2, and sowe will put

μ :=x

nh

(this is the value minimizing −μx + nμ2h/2). Then, for y = x (so that r = 1,

λ = x2/nh), we obtain from (2.1.8) and (4.1.28) that

lnP � −μx +nμ2h

2+ cnV

(1μ

)+ nV (y)eμy(1 + ε(λ))

= − x2

2nh+ cnV

(nh

x

)+ nV (x)ex2/nh(1 + ε(λ)). (4.1.34)

Here x2/2nh > 1/2h, whereas the last two terms on the right-hand side arenegligibly small as n → ∞. Indeed, owing to the second inequality in (4.1.33),

nV

(nh

x

)� cnV

(√n

lnn

)→ 0 as n → ∞.

Further, using the first inequality in (4.1.33), we find that

nV (x) � n(2−α)/2+γ′,

Page 220: Asymptotic analysis of random walks

4.1 Upper bounds for the distribution of Sn 189

where the choice of γ′ is at our disposal. Moreover, by virtue of (4.1.33),

x2

nh� 1

2h(h − τ)(α − 2) lnn =

(α − 2

2− τ(α − 2)

2h

)lnn.

Hence, for γ′ < τ(α − 2)/2h,

nV (x)ex2/nh � n−τ(α−2)/2h+γ′ → 0

as n → ∞.Thus

lnP � − x2

2nh+ o(1),

where the term o(1) in the last relation can be removed by slightly changing thevalue of h > 1. (Formally, we have proved that, for h > 1 and all large enough n,the inequality (4.1.6) holds with h on its right-hand side replaced by h′ > h,where one could take, for instance, h′ = h + (h − 1)/2. Since the value h′ > 1can also be made arbitrarily close to 1, by choosing a suitable h, the assertion thatwe have obtained is equivalent to that in the formulation of Theorem 4.1.2.) Thisproves (4.1.6).

The theorem is proved.

Comparing the assertions of Theorem 4.1.2 and Corollaries 4.1.3, 4.1.4, we seethat, roughly speaking, the range of possible values of s splits into two subranges:s < 1/2 and s > 1/2. In each subrange one can obtain quite satisfactory and, ina certain sense, unimprovable bounds for the probabilities P and P(Sn � x).

Now consider the ‘threshold’ case α = 2, Eξ2 < ∞. In this situation, finding theasymptotics of the solution σ(n) (for x) of the equation

− x2

2n= ln n + ln V (x) ≡ ln n − α ln x + ln L(x) (4.1.35)

is rather difficult, because it depends now on the s.v.f. L. Because of this, to describe theprobabilities of deviations that are close to normal (i.e. they lie in the region where thebound (4.1.6) is valid), we will need two conditions. The first one assumes straight awaythat the function e−x2/nh dominates nV (x), i.e. it is supposed that x is such that

nV (x)ex2/nh → 0. (4.1.36)

The second condition stipulates that

V (t) = O

„1

t2 ln t

«as t → ∞, (4.1.37)

which, in essence, leads to no loss of generality in the threshold case α = 2, Eξ2 < ∞.In the present context, we will consider, for the case α = 2, deviations of the form

x = s√

n ln n,

where the parameter s will clearly have a meaning somewhat different from that it previ-ously had. As before,

r =x

y, Π(y) = nV (y), Π = nV (x).

Page 221: Asymptotic analysis of random walks

190 Random walks with jumps having finite variance

Theorem 4.1.6. Let the conditions [ · , <], α = 2, Eξ = 0, Eξ2 = 1 and x = s√

n ln nbe satisfied. Then the following assertions hold true.

(i) For any fixed s0 > 0 and all s � s0, one has

P � er

„Π(y)

r

«r+θ

, θ = o(s−2) + O

„ln s

s2 ln n

«(4.1.38)

as nV (x) → 0.(ii) In addition, let conditions (4.1.36), (4.1.37) be met. Then, for any fixed h > 1 and all

large enough n, one has (4.1.6).

Corollary 4.1.7.

(i) If s � s0, nV (x) → 0 then

P � cΠr+o(1). (4.1.39)

(ii) If s2 > c1 ln n then

P � cΠr. (4.1.40)

Corollary 4.1.8.

(i) If s � s0, nV (x) → 0 then

P(Sn � x) � nV (x)(1 + o(1)). (4.1.41)

(ii) Let conditions (4.1.36), (4.1.37) be satisfied. Then, for any fixed h > 1 and all largeenough n, one has (4.1.12).

Proof of Theorem 4.1.6. (i) For s � s0, the reasoning in the proof of the first part ofTheorem 4.1.2 remains unchanged, up to relations (4.1.28), (4.1.29). However, in contrastwith (4.1.32), for x = s

√n ln n we now have

ln T ≡ ln r − ln nV (y) = − ln r − ln nV (x) + o(1),

where

ln nV`s√

n ln n´

= −2 ln s− ln ln n + ln L`s√

n ln n´

= −2 ln s + o(ln n) + o(ln s).

Henceln T = 2 ln s − ln r + o(ln n) + o(ln s), (4.1.42)

and therefore

nh

2y2ln T =

hr2

2s2 ln n

`2 ln s − ln r + o(ln n) + o(ln s)

´= o(s−2) + O

„ln s

s2 ln n

«,

so that (4.1.31) becomes

ln P � r −»r + o(s−2) + O

„ln s

s2 ln n

«–ln T + o(1).

The first assertion of the theorem is proved.(ii) Now we will prove the second assertion in the case when conditions (4.1.36), (4.1.37)

are met. Again letting μ := x/nh, we obtain the relation (4.1.34), in which, as in Theo-rem 4.1.2 as well, we need to bound the last two terms. From condition (4.1.36) it neces-sarily follows that s → 0 as nV (x) → 0 and so, by virtue of (4.1.37),

nV

„nh

x

«= o

„nV

„rn

ln n

««= o(1)

Page 222: Asymptotic analysis of random walks

4.2 Upper bounds for the distribution of Sn(a), a > 0 191

as n → ∞.That the second term from (4.1.34) to be bounded converges to zero follows directly

from (4.1.36). Theorem 4.1.6 is proved.

Proof. The proofs of Corollaries 4.1.7 and 4.1.8 are also similar to the previous proofs.Their assertions follow from Theorem 4.1.6 and the fact that if nV (x) → 0, s � s0,then n + s → ∞. This implies that θ = o(1). Moreover, owing to (4.1.42),

θ ln Π = O(θ ln T ) = o(1),

provided that s2 > c ln n. The relations (4.1.39), (4.1.40) are proved.To obtain inequality (4.1.41), one should use relations (4.1.16) and (4.1.39), in the latter

setting r := 1 + ε, where ε tends to zero so slowly that

P � cΠ1+ε/2, Πε/2 → 0.

Corollaries 4.1.7 and 4.1.8 are proved.

In conclusion, we state a consequence of Theorems 4.1.2 and 4.1.6 that is concernedwith bounds for

P(bSn � x), bSn := maxk�n

|Sk| = max˘Sn, |Sn|

¯, Sn := min

k�nSk.

Let, as before, bV (t) := max{V (t), W (t)}, bα := max{α, β}.

Corollary 4.1.9. Let the following conditions be satisfied: [<, <], Eξ2 < ∞ and

x >p

(bα − 2 + ε)n ln n

for some ε > 0. Then, for all small enough nV (x),

P(bSn � x) � cnbV (x).

This assertion follows from Corollary 4.1.4(ii) and Corollary 4.1.8(i), applied to Sn

and Sn.

4.2 Upper bounds for the distribution of Sn(a), a > 0

This section does not differ much from § 3.2. As in the latter, we will put

B(v) =n⋂

j=1

Bj(v), Bj(v) = {ξj < y + vaj}, v > 0,

η(x) = min{k : Sk − ak � x}.We will not exclude the case a → 0 as x → ∞ and will consider here thetriangular array scheme, assuming that the uniformity condition [U] from § 3.2(see p. 137) is satisfied.

Theorem 4.2.1.

(i) Assume that the conditions [ · , <], α > 2, Eξ = 0, Eξ2 = d < ∞ and [U]

Page 223: Asymptotic analysis of random walks

192 Random walks with jumps having finite variance

are satisfied. Then, for v � 1/4r, a ∈ (0, a0) for an arbitrary fixed a0 < ∞,

all n and all x such that x → ∞, x > c| ln a|/a, we have

P(Sn(a) � x; B(v)) � c1[mV (x)]r∗ , (4.2.1)

P(Sn(a) � x) � c1mV (x), (4.2.2)

where the constants c, c1 are defined below in the proof of the theorem and

m := min{n, x/a}, r∗ :=r

2(1 + vr), r ≡ x

y>

52.

For any bounded or sufficiently slow-growing values t, we have

P(∞ > η(x) � xt

a

)� c2xV (x)

at1−α. (4.2.3)

If, together with x, t tends to infinity at an arbitrary rate then the inequality(4.2.3) will remain true provided that one replaces in it the exponent 1 − α

by 1 − α + ε with an arbitrary fixed ε > 0.(ii) Let the conditions of part (i) of the theorem be met, with the following amend-

ments for the range of x and n: now we assume that

x <c| ln a|

a, n � n1 =

x

a. (4.2.4)

Then, for any fixed h > 1 and all large enough n1,

P(Sn(a) � x) � c1e−xa/2hd. (4.2.5)

If x = o(| ln a|/a

)then, for any bounded or sufficiently slow-growing t,

P(∞ > η(x) � xt

a

)� c1e

−γxa, (4.2.6)

where the constants c, c1, γ are defined below in the proof of the theorem. Inparticular, for xa � c2 > 0,

P(∞ > η(x) � xt

a

)� c1e

−γ′t, γ′ = γc2.

The theorem implies the following result.

Corollary 4.2.2. Let the conditions [ · , <], α > 2, Eξ = 0, Eξ2 < ∞ and [U]be satisfied. Then there exist constants c and γ > 0 such that, for any x andn � n1 := x/a,

P(Sn(a) � x) � cmax{

e−γxa,x

aV (x)

}.

Proof of Theorem 4.2.1. The proof of part (i) basically repeats the argument usedto prove Theorem 3.2.1. One just has to observe that, to prove (4.2.1), one shouldnow use Theorems 4.1.2 and 4.1.6 for n � c0x/a with some c0 < ∞.

First note that if the value s ≡ x√

(α − 2)n lnn in the conditions of Theo-rem 4.1.2 is such that θ � r/2 (for the definition of θ see (4.1.5)) then all the

Page 224: Asymptotic analysis of random walks

4.2 Upper bounds for the distribution of Sn(a), a > 0 193

bounds in the proof of Theorem 3.2.1 will remain valid provided that in them wereplace rk and r′k by rk/2 and r′k/2 respectively. Hence the inequality (4.2.1) willhold for r∗ = r0/2 (cf. the assertion of Theorem 3.2.1, p. 138).

It remains to discover for which x and a, in the case n � cox/a, one hasθ � r/2 or, equivalently,

s2 ≡ c1x2

n lnn> c2, (4.2.7)

where c1 and c2 are determined by the values of α and r.The inequality (4.2.7) will hold if

x >c| ln a|

a

(or a >

c′ lnx

x

).

Indeed, in this case

n lnn � c0x

a

[ln

x

a

](1 + o(1))

� c0x2

c′ lnx

[ln(

x2

lnx

)](1 + o(1)) � c0x

2

c′(1 + o(1)). (4.2.8)

From this it follows that s2 > c1c′/co � c2 for c′ � c2c0/c1, and so (4.2.7) holds

true. This proves (4.2.1).Similarly, to prove (4.2.2) we use Corollaries 4.1.4 and 4.1.8, in which one

requires that the condition x2/n lnn > c3, with c3 an explicitly known con-stant, is met. We are again in a situation where we have to determine under whatconditions the relation (4.2.7) holds. The only difference is in the values of theconstants. Otherwise, the proof of Theorem 3.2.1(i) remains unchanged.

(ii) Now we prove the second assertion. Using an argument similar to thatinvolving (4.2.7), (4.2.8), one can verify that the inequality s2 < (h − τ)/2(see Theorem 4.1.2(ii)) will hold for n � n1 provided that x < c| ln a|/a (or a >

(c′ lnx)/x). We will again follow the argument in the proof of Theorem 3.2.1but will obtain different bounds, since we will now be using Theorem 4.1.2(ii),see (4.1.6). For example, instead of (3.2.5) we will obtain that, for a fixed k,nk = n12k−1 and n1 = x/a, the inequality

pk = P(η(x) ∈ (nk, nk+1]; B(v))

� exp{−x222(k−2)

2nkhd

}+ exp

{−x2(1 + 2k−2)2

2nkhd

}� 2 exp

{−xa2k−2

4hd

}(4.2.9)

will hold if the pairs (nk, x2k−2) and (nk, x(1 + 2k−2)) (see (3.2.5)), togetherwith the pair (n1, x), still satisfy the conditions of Theorem 4.1.2(ii). As k grows,starting from some point these conditions will no longer be met and then one willhave to make use of the assertion of Theorem 4.1.2(i). However, when summing

Page 225: Asymptotic analysis of random walks

194 Random walks with jumps having finite variance

the pk, the main contribution to the sum will be due to the first few summands,since, for n = n1 = x/a,

exp{− x2

2nhd

}= exp

{− xa

2hd

}� x

aV (x)

for x < c| ln a|/a and a suitable c. It also follows from these relations that

P(B(v)) �n∑

j=1

V (y + avj) � cx

aV (x) = o(e−xa/2hd).

The above argument implies (4.2.5).If x = o (| ln a|/a) then, for each fixed k, the bound (4.2.9) will be valid,

and the right-hand side of (4.2.9) will dominate xV (x)/a. Therefore, by virtueof (4.2.9),

P(∞ > η(x) � n12k−1) � c exp{−xa2k−2

4h

}.

This implies (4.2.6). The theorem is proved.

Note that analogues of Corollary 3.6.6 and Theorem 3.6.7 (under the conditions[ · , <] and γ > 1/2) remain valid in the present set-up.

As can be seen from the proof, one can make a more precise statement regard-ing the admissible growth rate for t in (4.2.6). If, for instance, x = v/a, a → 0,v = const then one can take t < c| ln a| with a suitable c.

4.3 Lower bounds for the distributions of Sn and Sn(a)

First we consider lower bounds for P(Sn � x).

Theorem 4.3.1. Let Eξ = 0, Eξ2 = d < ∞. Then, for y = x + u√

(n − 1)d,

u > 0,

P(Sn � x) � nF+(y)(

1 − u−2 − n − 12

F+(y))

. (4.3.1)

Proof. The assertion will follow from Theorem 2.5.1 if we put K(n) :=√

nd

and make use of the Chebyshev inequality, which implies that

Qn(u) ≡ P(Sn/K(n) < −u

)� u−2.

The following results are obvious consequences of Theorem 4.3.1.

Corollary 4.3.2.

(i) If x → ∞, x � √n then, as u → ∞,

P(Sn � x) � nF+(y)(1 + o(1)).

Page 226: Asymptotic analysis of random walks

4.3 Lower bounds for the distributions of Sn and Sn(a) 195

(ii) If, moreover, condition [ · , >] is satisfied then

P(Sn � x) � nV (x)(1 + o(1)).

Thus, a lower bound for P(Sn � x) with right-hand side nF+(y) holds underweaker conditions on x than the upper bounds (the latter require x � √

n lnn)and in the absence of a regular domination condition of the form [ · , <]. Somelower bounds for P(Sn � x) were obtained in [205], while lower bounds forP(|Sn| � x) were established in [208].

Next we will obtain lower bounds for P(Sn(a) � x). The case Eξ2 = ∞ willnot be excluded. Suppose that

Eξ = 0, bγ := E|ξ|γ < ∞ for some γ ∈ (1, 2]. (4.3.2)

Set

Zγ(x, t) :=n∑

j=1

F+

(x + aj + (j − 1)1/γtb

),

where, as usual, a > 0. Clearly, Zγ(x, t) → 0 as x → ∞.

Theorem 4.3.3. For all n, x and t > 0,

P(Sn(a) � x) � Zγ(x, t)[1 − (1 + ε(t))t−γ − Zγ(x, t)

], (4.3.3)

where ε(t) ≡ 0 for γ = 2, and ε(t) → 0 as t → ∞, γ < 2.

Let

Iγ(x, t) :=

n+1∫1

F+

(x + au + (u − 1)1/γtb

)du � Zγ(x, t).

Given that (4.3.3) holds, it is not hard to find values t0 > 1, z0 = z0(t0) > 0 suchthat, for Zγ(x, t) < z0, t > t0,

P(Sn(a) � x) � Iγ(x, t)[1 − (1 + ε(t))t−γ − Iγ(x, t)

]. (4.3.4)

Indeed, observe that for t > t0 > 1 and c := supt(1+ ε(t)), the function g(z) :=z(1 − ct−γ − z) increases monotonically on [0, z0], where z0 = z0(t0) > 0.Hence, for Zγ(x, t) < z0, the right-hand side of (4.3.3) exceeds

Iγ(x, t)[1 − (1 + ε(t))t−γ − Iγ(x, t)

].

It is not difficult to give explicit values for t0, z0. For instance, when γ = 2,one can put t0 := 2, z0 := 3/8.

Corollary 4.3.4. Assume that the conditions (4.3.2) and [ · , <] are satisfied.Then, as x → ∞,

P(Sn(a) � x) �(

1a

x+an∫x

V (u) du

)(1 + o(1)). (4.3.5)

Page 227: Asymptotic analysis of random walks

196 Random walks with jumps having finite variance

Proof of Corollary 4.3.4. Put t := lnx in (4.3.4). Then, as x → ∞,

Iγ(x, t) �n+1∫1

V(x + au + (u − 1)1/γb lnx

)du =

1 + o(1)a

x+an∫x

V (u) du.

Since Iγ(x, t) → 0 as x → ∞, the bound (4.3.5) follows from (4.3.4).

Proof of Theorem 4.3.3. Let

Gn := {Sn(a) � x}, Bj := {ξj < x + aj + (j − 1)1/γtb}.Using the same argument as in the proof of Theorem 2.5.1, we obtain

P(Gn) �n∑

j=1

P(Gn Bj) −( n∑

j=1

P(Bj))2

, (4.3.6)

where, evidently,n∑

j=1

P(Bj) = Zγ(x, t). (4.3.7)

Further,

P(GnBj) > P(Sj−1 � −(j − 1)1/γtb; Bj

)= F+

(x + aj + (j − 1)1/γtb

)[1 − P(Sj−1 < −(j − 1)1/γtb)

].

(4.3.8)

If γ = 2 then, by the Chebyshev inequality,

P(Sj−1 < −(j − 1)1/2tb

)� t−2.

If γ < 2 then condition [<, <] with α = β = γ, V (t) = W (t) = bγt−γ isfulfilled and therefore, by virtue of Corollary 3.1.2,

P(Sj−1 < −(j − 1)1/γtb

)� (1 + ε(t))(j − 1)V

((j − 1)1/γtb

)= (1 + ε(t))t−γ ,

where ε(t) → 0 as t → ∞. It follows from (4.3.8) and the above discussion that

n∑j=1

P(GnBj) � Zγ(x, t)[1 − (1 + ε(t))t−γ

]and hence, by virtue of (4.3.6) and (4.3.7),

P(Gn) � Zγ(x, t)[1 − (1 + ε(t)

)t−γ − Zγ(x, t)

].

The theorem is proved.

Page 228: Asymptotic analysis of random walks

4.4 Asymptotics of P(Sn � x) and its refinements 197

4.4 Asymptotics of P(Sn � x) and its refinements

First-order asymptotics for the probabilities P(Sn � x) and P(Sn � x) in thecase s = x/σ(n) → ∞ (recall that σ(n) =

√(α − 2)n lnn ) immediately follow

from Corollary 4.1.4 (see also Remark 4.1.5) and Corollary 4.3.2(ii). Namely, wehave the following results.

Theorem 4.4.1. Let the conditions [ · , =], α > 2, Eξ = 0 and Eξ2 < ∞ besatisfied. Then there exists a function ε(t) ↓ 0 as t ↑ ∞ such that

supx: s�t

∣∣∣∣P(Sn � x)nV (x)

− 1∣∣∣∣ � ε(t), (4.4.1)

supx: s�t

∣∣∣∣P(Sn � x)nV (x)

− 1∣∣∣∣ � ε(t). (4.4.2)

Remark 4.4.2. It will be seen from the proof of Theorem 4.4.4 below that theprobabilities P(Sn � x) and P(Sn � x) are uniformly asymptotically equivalentto nV (x) for s > c > 1 provided that n → ∞.

Note also that the relation

limn→∞

P(Sn � x)nV (x)

= 1

for s > c > 1 (i.e. for x > cσ(n), c > 1) also follows from the uniform represen-tation (4.1.2). It appears that a similar representation

P(Sn � x) ∼ 2[1 − Φ

(x/

√nd)]

+ nV (x), x >√

n, n → ∞, (4.4.3)

holds for Sn (with the same consequences for P(Sn � x)/nV (x)); it was estab-lished in [225] under the additional condition that F (t) = O(t−α) as t → ∞.

To obtain more precise results, one needs additional smoothness conditionson the function V (t). First we will formulate ‘first-order’ and ‘second-order’smoothness conditions. It will be assumed in the remaining part of this chapterthat condition [ · , =] is satisfied. As in § 3.4, let q(t) be a function vanishingas t → ∞ (for instance, an r.v.f. of index −γ, γ � 0). The conditions below aresimilar to conditions [D(h,q)] from § 3.4 (see p. 144).

[D(1,q)] The following representation is valid:

V (t(1 + Δ)) − V (t) = V (t)[− L1(t)Δ(1 + ε(Δ, t)) + o(q(t))

],

where L1(t) = α + o(1) and ε(Δ, t) → 0 as |Δ| → 0, t → ∞.

As we observed in § 3.4, the distribution tail of the first positive sum in a ran-dom walk will always satisfy condition [D(1,q)] with q(t) = t−1, provided thatcondition [ · , =] is met (see § 7.5).

If the function L(t) in the representation V (t) = t−αL(t) is continuously dif-ferentiable for all t � t0, starting from some t0 > 0, and if

L′(t) = o

(L(t)

t

)as t → ∞,

Page 229: Asymptotic analysis of random walks

198 Random walks with jumps having finite variance

then condition [D(1,0)] is satisfied. Indeed, in this case,

V ′(t) = −V (t)t

L1(t), L1(t) = α − L′(t)tL(t)

= α + o(1)

as t → ∞. Integrating V ′(u) from t to t + Δt yields

V (t + Δt) − V (t) = V ′(t)Δt(1 + ε(Δ, t))

= −V (t)L1(t)Δ(1 + ε(Δ, t)), (4.4.4)

where ε(Δ, t) → 0 as |Δ| → 0, t → ∞, and hence condition [D(1,q)] is metfor q(t) ≡ 0.

The next, stronger, smoothness condition [D(2,q)] has a similar form:

[D(2,q)] For some t0 > 0, the function V (t) is continuously differentiablefor t � t0,

L1(t) :=V ′(t)tV (t)

= −α +tL′(t)L(t)

is an s.v.f. and, as t → ∞,

V (t(1 + Δ)) − V (t) = V (t)[− L1(t)Δ + L2(t)Δ2(1 + ε(Δ, t)) + o(q(t))

],

where L2(t) = α(α + 1) + o(1) and ε(Δ, t) → 0 as |Δ| → 0, t → ∞.

It is evident that if

V ′′(t) = α(α + 1)V (t)t2

(1 + o(1)) (4.4.5)

exists then tL′(t)/L(t) → 0 as t → ∞, and so condition [D(2,q)] is satisfied forq(t) ≡ 0. The reason is that in this case

V (t + Δt) − V (t) =

t+Δt∫t

(V ′(t) +

v∫t

V ′′(u) du

)dv

= V ′(t)Δt + V ′′(t)Δ2t2

2(1 + o(1))

= V (t)(−L1(t)Δ +

α(α + 1)2

Δ2(1 + ε(Δ, t)))

, (4.4.6)

where ε(Δ, t) → 0 as |Δ| → 0, t → ∞, and, by virtue of (4.4.5),

L1(t) = −V ′(t)tV (t)

= α + o(1).

Clearly, condition [D(1,q)] (like condition [D(2,q)]) is not a differentiability con-dition in the usual sense, as it specifies the values of the increments V (t(1+Δ))−V (t) only for Δ > q(t). In the case q(t) = 0 this condition could be referred toas ‘differentiability at infinity’.

Now we will formulate a general smoothness condition [D(k,q)], k � 1.

Page 230: Asymptotic analysis of random walks

4.4 Asymptotics of P(Sn � x) and its refinements 199

[D(k,q)] For some t0 > 0, the function V (t) is k − 1 times continuouslydifferentiable for t � t0, and

V (t(1 + Δ)) − V (t) = V (t)[ k−1∑

j=1

(−1)jLj(t)Δj

j!

+ (−1)kLk(t)Δk

k!(1 + ε(Δ, t)) + o(q(t))

],

(4.4.7)

where ε(Δ, t) → 0 as |Δ| → 0, t → ∞, and |ε(Δ, t)| < c for t � t0 and|Δ| < 1 − δ, where δ > 0 is a fixed arbitrary number. The s.v.f.’s Lj are defined,for j � k − 1, by the equalities

Lj(t) = (−1)j tj

V (t)dj

dtjV (t). (4.4.8)

The functions Lj with j � k have the property that, as t → ∞,

Lj(t) = α(α + 1) · · · (α + j − 1)(1 + o(1)). (4.4.9)

It can be seen that the the expansion in (4.4.7) differs from an expansion forthe power function t−α only by the presence of slowly varying factors that areasymptotically equivalent to 1. However, replacing these factors by unity is, ofcourse, impossible, since that could lead to the introduction of errors whose ordersof magnitude would exceed those of the subsequent terms of the expansion.

Similarly to the above discussion, we can observe that if the kth derivative

V (k)(t) ≡ dk

dtkV (t) = α(α + 1) . . . (α + k − 1)

V (t)tk

(1 + o(1)) (4.4.10)

exists then condition [D(k,0)] is met. Also, it is not hard to see that condi-tion [D(k,0)] implies [D(j,0)] for j < k.

One could give the following sufficient conditions for (4.4.10): if the kth deriva-tive V (k)(t) exists for t � t0 and is monotone on that half-line then (4.4.10) holdsand so [D(k,0)] is satisfied.

Indeed, in this case, for some t1 � t0 the function V (k)(t) will have thesame sign on the whole interval (t1,∞), and therefore the (k − 1)th deriva-tive V (k−1)(t) will be monotone on this interval. Continuing this kind of rea-soning, we conclude that all the derivatives V (k−2)(t), . . . , V ′(t) will be mono-tone ‘at infinity’. It is known, however (see e.g. Theorem 1.7.2 of [32]), that themonotonicity of V ′(t) and regular variation of V (t) imply that

V ′(t) ∼ −αt−α−1L(t), t → ∞.

From this and the monotonicity of V ′′(t) it follows, in turn, that

V ′′(t) ∼ (−1)2α(α + 1)t−α−2L(t), t → ∞,

Page 231: Asymptotic analysis of random walks

200 Random walks with jumps having finite variance

and so on, up to the relation (4.4.10).

Remark 4.4.3. (i) As in § 3.4, one can also consider, along with [D(k,q)], con-ditions [D(k,O(q))], that differ from [D(k,q)] in that in them the remainder termo(q(t)) in (4.4.7) is replaced by O(q(t)).

(2) As in § 3.4, one could also consider conditions [D(h,q)] with a fractionalvalue of h and a remainder term of order |Δ|h.

Comments on how the assertions in the forthcoming theorems would change ifwe switched to the conditions mentioned in Remark 4.4.3 will be presented afterthe respective theorems.

Now we state the main assertion of this section. Recall that we are assumingeverywhere that Eξ = 0, d = Eξ2 < ∞ and that V (t) = max{V (t),W (t)}.

Theorem 4.4.4. Let the following conditions be satisfied: [ · , =], α > 2, andthat, for some k � 1, [D(k,q)] holds and E|ξ|k < ∞ (the latter is only requiredwhen k > 2). Then, as x → ∞, we have

P(Sn � x) = nV (x)[1+

k∑j=2

Lj(x)j!xj

ESjn−1+o(nk/2x−k)+o(q(x))

](4.4.11)

uniformly in n � cx2/ lnx for some c > 0.

Since

ES2n−1 = (n − 1)d,

ES3n−1 = (n − 1)Eξ3,

ES4n−1 =

12

(n − 1)(n − 2)d2 + O(n)

and

L2(x) = α(α + 1) + o(1),L4(x)

4!=(

α + 34

)+ o(1),

we obtain the following.

Corollary 4.4.5. Let the conditions of Theorem 4.4.4 be met. Then the followingassertions hold as x → ∞.

(i) If k = 2 then

P(Sn � x) = nV (x)[1 +

α(α + 1)(n − 1)d2x2

(1 + o(1))]. (4.4.12)

(ii) If k = 4 then

P(Sn � x) = nV (x)[1 +

(n − 1)L2(x)d2x2

− (n − 1)L3(x)Eξ3

3!x3

+(n − 1)(n − 2)

2x4

(α + 3

4

)(1 + o(1))

].

Page 232: Asymptotic analysis of random walks

4.4 Asymptotics of P(Sn � x) and its refinements 201

Both relations hold uniformly in n � cx2/lnx for some c > 0.

Remark 4.4.6. In the lattice case, when ξ is an integer-valued r.v. and the greatestcommon divisor of its possible values is equal to 1, it is not hard to construct dif-ference analogues of conditions [D(k,q)] and then to obtain a complete analogueof Theorem 4.4.4 for integer-valued x.

Remark 4.4.7. For the special case where the principal part of V (t) is a linearcombination of negative powers of t, a number of refinements of the relationP(Sn � x) ∼ nV (x) were derived in [276] under some additional restrictiveassumptions and conditions, which are sometimes irrelevant. It will be seen easilyfrom the proofs below that all the main assertions of the present section could betransferred, in a standard way, to the case where V (t) is a linear combination ofterms of the form t−α(i)L(i)(t) satisfying the respective smoothness condition.

See also Remark 3.4.2.

Remark 4.4.8. The terms ESjn−1 in (4.4.11) are clearly polynomials in

√n of

orders not exceeding j. Observe, however, that the asymptotic representation(4.4.11) is, generally speaking, not a ‘pure form’ asymptotic expansion in powersof (

√n/x), which is the case in (4.4.12), since the Lj(x) could be quite compli-

cated s.v.f.’s. If L(t) ≡ L = const for t � t0 then

Lj(x)j!

=(

α + j − 1j

),

and (4.4.11) becomes an asymptotic expansion in powers of (√

n/x).

Proof of Theorem 4.4.4. Let Gn := {Sn � x} and, as before, put

Bj = {ξj < y}, B =n⋂

j=1

Bj , r =x

y> 2.

Then, similarly to § 3.4, by virtue of Corollary 4.1.3 we will again have repre-sentations of the form (3.4.12), (3.4.13), which yield, for n � cx2/lnx and asuitable c, that

P(Gn) =n∑

j=1

P(GnBj) + O((nV (x))2

). (4.4.13)

So the main problem consists in finding asymptotic representations for

P(GnBj) = P(GnBn) = P(Sn−1 + ξn � x, ξn � y)

= P(Sn−1 � x − y, ξn � y)

+ P(Sn−1 < x − y, Sn−1 + ξn � x). (4.4.14)

Page 233: Asymptotic analysis of random walks

202 Random walks with jumps having finite variance

Here, owing to Corollary 4.1.4,

P(Sn−1 � x − y, ξn � y) = P(Sn−1 � x − y)P(ξn � y)

� cnV 2(x). (4.4.15)

Further, for y = δx, δ = 1/r < 1/2,

P(ξn � x − Sn−1, Sn−1 < x − y) = E[V (x − Sn−1); Sn−1 < (1 − δ)x

]= E1 + E2, (4.4.16)

where

E1 := E[V (x − Sn−1); Sn−1 � −(1 − δ)x

],

E2 := E[V (x − Sn−1); |Sn−1| < (1 − δ)x

].

First, assume that condition [D(k,0)] holds. Setting t := x and Δ := −Sn−1/x

in (4.4.7), we obtain

E2 = V (x)E[1 +

k∑j=1

Lj(x)j!

(Sn−1

x

)j

+Lk(x)

k!

(Sn−1

x

)k

ε

(Sn−1

x, x

);∣∣∣∣Sn−1

x

∣∣∣∣< 1 − δ

]. (4.4.17)

Next we will find out by how much the truncated moments of Sn on the right-hand side of (4.4.17) differ from the complete moments.

Lemma 4.4.9. Let Eξ = 0, Eξ2 = 1 and E|ξ|b < ∞ for some b � 2. Then thesequence

{(Sn/

√n )b; n � 1

}is uniformly integrable.

Proof. First note that{|Sn/

√n |b; n � 1

}is uniformly integrable. This follows

from the fact that, under the condition E|ξ|b < ∞, one has convergence of mo-ments in the central limit theorem: as n → ∞,

E∣∣∣∣ Sn√

n

∣∣∣∣b → ∫|t|b dΦ(t),

where Φ(t) is the standard normal distribution function (see e.g. § 5.10, Chap-ter IV of [224]).

To complete the proof, it suffices to use Kolmogorov’s inequality:

P(

Sn√n

� t

)� P

(∣∣∣∣ Sn√n

∣∣∣∣ � t −√

2)

(see e.g. (3.13) in Chapter III of [224] or § 2, Chapter 10 of [49]).

Page 234: Asymptotic analysis of random walks

4.4 Asymptotics of P(Sn � x) and its refinements 203

By Lemma 4.4.9, for j � k,

E(|Sj

n|; |Sn| � (1 − δ)x)

� nk/2

((1 − δ)x)k−jE[∣∣∣∣ Sn√

n

∣∣∣∣k;∣∣∣∣ Sn√

n

∣∣∣∣ � (1 − δ)x√n

]= o

(nk/2xj−k

),

and therefore replacing the truncated moments by the complete moments in therelation (4.4.17) would introduce an error of order o(nk/2x−k).

Further, since, as x → ∞, we have

ε(Δ, x) � c for |Δ| < 1 − δ,

ε(Δ, x) → 0 for |Δ| → 0,

we obtain, again owing to uniform integrability, that

E[(

Sn−1

x

)k

ε

(Sn−1

x, x

);∣∣∣∣Sn−1

x

∣∣∣∣< 1 − δ

]= o

(nk/2x−k

).

Therefore (4.4.17) yields

E2 = V (x)(

1 +k∑

j=1

Lj(x)j!xj

ESjn−1 + o

(nk/2x−k

)). (4.4.18)

It remains to bound the integral E1 in (4.4.16). Applying Corollary 4.1.4 (orCorollary 4.1.8) to bound the probabilities of negative deviations of Sn−1, wehave

E1 � V (x)P(Sn−1 � −(1 − δ)x

)� cV (x)nW (x). (4.4.19)

Summarizing, we obtain from (4.4.13)–(4.4.19) the assertion of the theoremunder condition [D(k,0)]. If q(t) �≡ 0 then, as in § 3.4, the term

o(q(x))P(∣∣∣∣Sn−1

x

∣∣∣∣ < 1 − δ

)= o(q(x))

is added to the right-hand side of (4.4.17), owing to condition [D(k,q)], andthis term will pass, unchanged through all the calculations, to the representa-tion (4.4.11). The theorem is proved.

The proof of the assertion in Remark 4.4.2 is a simplified version of that ofTheorem 4.4.4. We leave it to the reader.

All the remarks from § 3.4 on how the assertion of Theorem 4.4.4 changeswhen one replaces condition [D(k,q)] by [D(k,O(q))] remain valid in the presentsection. One can also obtain a modification of the assertion of the theorem in thecase where condition [D(h,q)] holds for a fractional value of h (see Remark 4.4.3).

Page 235: Asymptotic analysis of random walks

204 Random walks with jumps having finite variance

4.5 Asymptotics of P(Sn � x) and its refinements

As was the case for P(Sn � x), the first-order asymptotics for P(Sn � x) followdirectly from Corollaries 4.1.4 and 4.3.2 and is contained in Theorem 4.4.1.

Now we will present refinements of that theorem. As in the previous section,we will need the smoothness conditions [D(k,q)].

Theorem 4.5.1. Let the following conditions be satisfied: [ · , =], α > 2, andthat, for some k � 1, [D(k,q)] holds and E|ξk| < ∞ (the latter is only requiredwhen k > 2). Then, as x → ∞,

P(Sn � x) = nV (x){

1 +L1(x)nx

n−1∑j=1

ESj

+1n

k∑i=2

Li(x)i!xi

[n−1∑j=1

ESin−j +

i∑l=2

(i

l

)ESl

j−1ESi−ln−j

]

+ o(nk/2x−k

)+ o

(q(x)

)}(4.5.1)

uniformly in n � cx2/ lnx for some c > 0.

Remarks 4.4.3 and 4.4.7, which relate to Theorem 4.4.4, remain valid for theabove theorem as well.

To obtain simpler asymptotic representations from this theorem, note that byKolmogorov’s inequality (or by virtue of Corollary 4.1.4) the r.v.’s n−1/2Sn areuniformly integrable. Therefore it follows from the invariance principle that

n−1/2Sn ⇒√

d w(1), En−1/2Sn →√

dEw(1) =

√2d

π

as n → ∞, where d = Eξ2, {w(t); t � 0} is the standard Wiener process,w(t) := maxu�t w(u), P(w(1) > t) = 2(1−Φ(t)), and Φ is the standard normaldistribution function. Since, moreover, L1(x) → α, we obtain the following.

Corollary 4.5.2. If conditions [ · , =], α > 2, and [D(1,q)] are satisfied then, asn → ∞,

P(Sn � x) = nV (x)[1 +

23/2α√

nd

3√

πx(1 + o(1)) + o(q(x))

](4.5.2)

uniformly in x from the zone n � cx2/ lnx.

One could compute higher-order terms of the asymptotic expansion (4.5.1) ina similar way.

Observe also that one could easily derive from Theorem 4.5.1 asymptotic ex-pansions for the joint distribution of Sn and Sn. Namely, for any fixed u > 1, onthe one hand, we clearly have

P(Sn � x, Sn < ux) = P(Sn � x) − P(Sn � ux).

Page 236: Asymptotic analysis of random walks

4.5 Asymptotics of P(Sn � x) and its refinements 205

On the other hand, for a fixed u ∈ (0, 1),

P(Sn < x, Sn � ux)

= P(Sn � ux) − P(Sn � x) + P(Sn � x, Sn < ux), (4.5.3)

where, under condition [=, =], the last term is negligibly small (the order ofmagnitude being n2V (x)W (x)), since the event {Sn � x, Sn < ux} requires,roughly speaking, two large jumps (in opposite directions) in the random walk.More precisely, letting η(x) := min{k : Sk � x}, we have

P(Sn � x, Sn < ux) �n∑

k=1

P(η(x) = k)P(Sn−k < −(1 − u)x

)� c1nW

((1 − u)x

)P(Sn � x) � c2n

2W (x)V (x).

Under conditions [=, =] and [D(k,q)] on the tails V (t) and W (t), the requiredasymptotic expansions follow immediately from the above relation and (4.5.3).

Proof of Theorem 4.5.1. According to the above remarks (see Remark 3.4.8 andthe end of the proof of Theorem 4.4.4), without loss of generality we can as-sume from the very beginning that condition [D(k,0)] is met and then just add aremainder term o(q(x)) to the final result.

Our proof will repeat, in many aspects, the arguments in the proofs of The-orems 4.4.4 and 3.5.1. Let Gn := {Sn � x}. As in (4.4.13) and (3.5.5), forn � cx2/lnx with a suitable c and r = x/y > 2 we have

P(Gn) =n∑

j=1

P(GnBj) + O((nV (x))2

). (4.5.4)

Furthermore, as in (3.5.6) and (3.5.9), for y = δx, δ = 1/r < 1/2 and

S(j)n−j = max

0�m�n−j(Sm+j − Sj),

we have

P(GnBj) = P(Sn � x, ξj � y)

= P(Sn � x, ξj � y, Sj−1 < x) + O(jV 2(x))

= P(Sj + S

(j)n−j � x, ξj � y

)+ O(jV 2(x)). (4.5.5)

As before, set

Zj,n := Sj−1 + S(j)n−j .

Page 237: Asymptotic analysis of random walks

206 Random walks with jumps having finite variance

Then

P(Sj + S(j)n−j � x, ξj � y)

= P(ξj + Zj,n � x, ξj � y)

= P(ξj + Zj,n � x, ξj � δx; Zj,n < (1 − δ)x

)+ P

(ξj + Zj,n � x, ξj � δx, Zj,n � (1 − δ)x

).

The event {ξj � δx} is clearly redundant in the last term. Moreover, noting

that ξj and Zj,n are independent and that Zj,n

d� Sn−1, we see that the last term

does not exceed

P(ξj � δx)P(Sn−1 � (1 − δ)x

)= O(nV 2(x)).

Therefore

P(GnBj) = P(ξj + Zj,n � x, Zj,n < (1 − δ)x

)+ O(nV 2(x))

= E[V (x − Zj,n); Zj,n < (1 − δ)x

]+O(nV 2(x)),

so that

P(Gn) =n∑

j=1

E[V (x − Zj,n); Zj,n < (1 − δ)x

]+ O

((nV (x))2

). (4.5.6)

Using reasoning similar to that in previous sections (cf. (3.5.12), (3.5.13) and(4.4.16)), we have

E[V (x − Zj,n); Zj,n < (1 − δ)x

]= Ej,1 + Ej,2,

where

Ej,1 := E[V (x − Zj,n); Zj,n � −(1 − δ)x

]= O

(nV (x)W (x)

).

Now consider

Ej,2 := E[V (x − Zj,n); |Zj,n| < (1 − δ)x

]and use the inequalities

Sj−1 � Zj,n

d� Sn−1 (4.5.7)

and condition [D(k,0)] with t = x, Δ = −Zj,n/x. We obtain

Ej,2 = V (x)E[1 +

k∑i=1

Li(x)i!

(Zj,n

x

)i

+Lk(x)

k!

(Zj,n

x

)k

ε

(Zj,n

x, x

);∣∣∣∣Zj,n

x

∣∣∣∣ < 1 − δ

]. (4.5.8)

Next note that, by virtue of Lemma 4.4.9 and inequalities (4.5.7), the family ofr.v.’s {

(Zj,n/√

n)k; 1 � j � n < ∞}

Page 238: Asymptotic analysis of random walks

4.5 Asymptotics of P(Sn � x) and its refinements 207

is uniformly integrable. Hence here, as in (4.4.17) and (4.4.18), we can replacethe truncated moments of Zj,n on the right-hand side of (4.5.8) by the com-plete moments, and then discard the last term. This will introduce an error oforder o

(nk/2x−k

). Thus we obtain

Ej,2 = V (x)(

1 +k∑

i=1

Li(x)i!xi

EZij,n + o

(nk/2x−k

)).

It remains to note that

EZij,n = E

(Sj−1 + S

(j)n−j

)i =i∑

l=0

(i

l

)ESl

j−1ESi−ln−j

and, in particular,

EZj,n = ESn−j .

Summing up in (4.5.6) the derived expressions for Ej,1 + Ej,2, we obtain (4.5.1)(for q(t) ≡ 0). The theorem is proved.

Example 4.5.3. We will illustrate the results of Corollary 4.4.5(i) and Corol-lary 4.5.2 by a numerical example. Consider symmetric r.v.’s ξj with

F+(t) = P(ξ � t) =

⎧⎪⎪⎪⎨⎪⎪⎪⎩0.5 for 0 � t <

1√3

,

16√

3t−3 for t � 1√

3.

So the conditions [=, =] (with α = β = 3), Eξ = 0 and Eξ2 = 1 are satisfied.Condition [D(k,0)] is clearly satisfied for any k � 1. Let us take n = 10 andcompare our results with the ‘crude approximation’ in a reasonable range of x

values.Figure 4.1 depicts the plots of the ‘crude approximation’ nV (x), the principal

part of the approximation (4.4.12) and a Monte Carlo estimator of P(Sn � x);we simulated 106 trajectories {Sk; k � n}.

Figure 4.2 displays the same ‘crude approximation’ nV (x), the principal partof the approximation from Corollary 4.5.2 and an estimator of P(Sn � x) (alsoobtained using simulation).

We see that the approximation precision increases substantially in both cases.The fit of the plots on Fig. 4.1 is much better, for the simple reason that theapproximation (4.4.12) used there is in fact an asymptotic expansion with two‘correction terms’ (the coefficient of n1/2/x equals zero, and the first non-trivialcorrection term is of order n/x2), whereas (4.5.2) is an expansion with only onesuch term.

Page 239: Asymptotic analysis of random walks

208 Random walks with jumps having finite variance

Fig. 4.1. Illustration of Corollary 4.4.5(i), comparing the approximations nV (x) (lowersmooth line) and (4.4.12) (upper smooth line) for P(Sn � x) (see Example 4.5.3).

Fig. 4.2. Illustration of Corollary 4.5.2, comparing the approximations nV (x) (lowersmooth line) and (4.5.2) (upper smooth line) for P(Sn � x) (see Example 4.5.3).

4.6 Asymptotics of P(S(a) � x) and its refinements. The general boundary

problem

Now we will turn to more general problems on the crossing of an arbitrary bound-ary {g(j)} by the trajectory {Sj}. The boundary types which are most oftenencountered in applications were described in § 3.6.

Page 240: Asymptotic analysis of random walks

4.6 Asymptotics of P(S(a) � x) and the general boundary problem 209

4.6.1 Asymptotics of P(S(a) � x)

First we turn our attention to the important special case where g(j) = x + aj fora fixed a > 0 and set, as before,

Sn(a) = maxj�n

(Sj − aj),

assuming that Eξ = 0, Eξ2 = d < ∞.

Theorem 4.6.1. Let condition [<, =], α > 2, be met. Then the following asser-tions hold true.

(i) Uniformly in n = 1, 2, . . . ,

P(Sn(a) � x) =( n∑

j=1

V (x + ja))

(1 + o(1)), x → ∞.

(ii) If, for some k � 1, the conditions [D(k,q)] and E|ξ|k < ∞ hold (the latteris only required when k > 2), and q(t) is an r.v.f. then, as x → ∞,

P(Sn(a) � x) =n∑

j=1

V (x + aj){

1 +k∑

i=1

Li(x + aj)i!(x + aj)i

×[ES i

n−j(a) +i∑

l=2

(i

l

)ESl

j−1ESi−ln−j(a)

]}+ O

(mV (x)(mV (x) + xV (x))

)+ o

(xV (x)q(x)

),

(4.6.1)

where m = min{n, x}.(iii) If we additionally require in part (ii) that E(ξ+)k+1 < ∞ then we have

ES k(a) < ∞, where S(a) := S∞(a), and the representation (4.6.1) holdsfor n = ∞. In this case,

P(S(a) � x) =∞∑

j=1

V (x + aj){

1 +k∑

i=1

Li(x + aj)i!(x + aj)i

×[ES i(a) +

i∑l=2

(i

l

)ESl

j−1ESi−l(a)]}

+ O(x2V (x)V (x)

)+ o(xV (x)q(x)). (4.6.2)

Remark 4.6.2. As will be seen in the proof of the theorem, the assertion ofpart (ii) is obtained under the assumption that α < k + 1. If E(ξ+)k+1 < ∞then this can be improved with the help of Lemma 4.6.6 (see below).

Remarks 4.4.3 and 4.4.7, which relate to Theorem 4.4.4, remain valid in thepresent subsection.

Page 241: Asymptotic analysis of random walks

210 Random walks with jumps having finite variance

Corollary 4.6.3. If the conditions [<, =], α > 2, [D(2,0)] and E(ξ+)3 < ∞ aresatisfied then

P(S(a) � x)

=∞∑

j=1

V (x + aj){

1 +L1(x + aj)

x + ajES(a)

+jL2(x + aj)d2(x + aj)2

(ES(a)2 + (j − 1)d

)}+ O

(x2V (x)V (x)

)=

∞∑j=1

V (x + aj){

1 +α

x + ajES(a) +

α(α + 1)jd2(x + aj)2

}+ o(V (x)). (4.6.3)

Observing that, owing to [D(2,0)] and the properties of r.v.f.’s, one has

∞∑j=1

V (x + aj) =1a

∞∫x

V (u)du − 12

V (x) + o(V (x)),

∞∑j=1

V (x + aj)x + aj

∼ 1a

∞∫x

V (u)u

du ∼ V (x)aα

,

∞∑j=1

jV (x + aj)(x + aj)2

∼ 1a2

∞∫0

V (x + u)u(x + u)2

du

=1a2

[ ∞∫x

V (t)t

dt − x

∞∫x

V (t)t2

dt

]

∼ 1a2

[V (x)

α− V (x)

α + 1

]=

V (x)a2α(α + 1)

as x → ∞, we obtain from (4.6.3) the following ‘integral’ representation:

P(S(a) � x) =1a

∞∫x

V (u) du

+(

d

2a2+

ES(a)a

− 12

)V (x) + o(V (x)). (4.6.4)

It is of interest to note that the functions L1(x), L2(x) appearing in condi-tion [D(2,0)], as well as the third moment of ξ+, are not present in the repre-sentation (4.6.4). That the conditions [D(2,0)] and E(ξ+)3 < ∞ are not neededfor (4.6.4) will be confirmed in Theorem 7.5.8, where the assertion (4.6.4) is ob-tained without the above-mentioned conditions, see p. 361. (One should note that,in the notation of Chapter 7, the function V (x) = F+(x) in Theorem 7.5.8 is thetail of the r.v. ξ − a, so that the expression a−1

∫∞x

F+(u) du from § 7.5 is the

Page 242: Asymptotic analysis of random walks

4.6 Asymptotics of P(S(a) � x) and the general boundary problem 211

same as

1a

∞∫x

V (u + a) du =1a

∞∫x

V (u) du − V (x) + o(V (x))

in our current notation. This is why the term −V (x)/2 in (4.6.4) is replaced by+V (x)/2 in Theorem 7.5.8.) The above means that the methods used to proveTheorem 4.6.1 are not quite adequate in the case n = ∞, i.e. when we are study-ing the asymptotics of P(S(a) � x).

Proof of Theorem 4.6.1. (i) The proof of the theorem will consist of several stages,which we will formulate as lemmata. The first is an analogue of Lemma 3.6.9.Set m := min{x, n},

Gn := {Sn(a) � x}, Bj(v) := {ξj < y + vj}, B(v) :=n⋂

j=1

Bj(v).

Lemma 4.6.4. Let condition [ · , <], α > 2, hold. Then, for

v � min{a/2r, (r − 2)/2r}, r > 2,

and for all n one has, as x → ∞,

P(Sn(a) � x) =n∑

j=1

P(GnBj(v)) + O((mV (x))2

). (4.6.5)

Proof. The proof the lemma repeats that of Lemma 3.6.9. The only difference isthat now, instead of using Theorem 3.2.1, one should use Theorem 4.2.1.

The subsequent proof of Theorem 4.6.1 involves arguments quite similar tothose in the proofs of Theorems 3.6.1 and 4.5.1. As in § 3.2, put

S(j)n−j(a) := max

{0, ξj+1 − a, . . . , Sn − Sj − (n − j)a

},

Zj,n(a) := Sj−1 + S(j)n−j(a).

First we will obtain representations for the summands in (4.6.5).

Lemma 4.6.5. Assume that condition [<, <], α > 2, is satisfied. Then, forδ ∈ (0, 1),

P(GnBj(v))

= E[V (x + aj − Zj,n(a)); Zj,n(a) < δ(x + aj)

]+ O

(V (x + aj)

[min{j, x}V (x) + min{n, x + aj}V (x + aj)

])(4.6.6)

and, for z � c√

j ln j, a suitable c and any j � n,

P(Zj,n(a) � z) � c1

[j + min{n, z}]V (z), (4.6.7)

P(|Zj,n(a)| � z

)� c1

[min{n, z}V (z) + jV (z)

]. (4.6.8)

Page 243: Asymptotic analysis of random walks

212 Random walks with jumps having finite variance

Proof. First we derive bounds for the distribution tails of Zj,n(a) and |Zj,n(a)|.For z > c

√j ln j, we find from Corollary 4.1.4 and Theorem 4.2.1 that

P(Zj,n(a) � z) � P(Sj−1 � z/2

)+ P

(Sn−j(a) � z/2

)� c1

[jV (z) + min{n, z}V (z)

]� c2

[j + min{n, z}]V (z). (4.6.9)

Further, since

|Zj,n(a)| d� |Sj−1| + S

(j)n−j(a), (4.6.10)

we have, for z > c√

j ln j,

P(|Zj,n(a)| � z

)� P

(|Sj−1| � z/2)

+ P(Sn(a) � z/2

)� c2

[jV (z) + min{n, z}V (z)

]. (4.6.11)

Next we obtain a representation for

P(GnBj(v)) = P(Sn(a) � x, ξj � y + jv

)= P

(Sn(a) � x, ξj � y + jv, Sj−1(a) < x

)+ ρn,j,x, (4.6.12)

where, by Theorem 4.2.1,

ρn,j,x � cV (y + jv) min{j, x}V (x). (4.6.13)

By virtue of (4.6.7), we have, similarly to § 3.6 (cf. (3.6.18)), that

P(Sn(a) � x, ξj � y + jv, Sj−1(a) < x

)= P

(ξj + Zj,n(a) � x + aj, ξj � y + jv

)+ O

(min{j, x}V (x)V (y + jv)

)= P

(ξj + Zj,n(a) � x + aj, ξj � y + jv, Zj,n(a) < x + aj − y − jv

)+ O

(min{j, x}V (x)V (x + aj)

)+ O

(min{n, x + aj}V 2(x + aj)

).

Here we have used the fact that, for the chosen values of r and v, one always has

c1(x + aj) � x − y + j(a − v) � c2(x + aj),

c3(x + aj) � y + jv � c4(x + aj).

As in § 3.6, this yields (4.6.6). Lemma 4.6.5 is proved.

As before, we will split the expectation in (4.6.6) into two parts:

Ej,1 := E[V (x + aj − Zj,n(a)); Zj,n(a) � −x(j)

],

Ej,2 := E[V (x + aj − Zj,n(a)); |Zj,n(a)| < x(j)

],

where x(j) = δ(x + aj) and

Ej,1 � c1V (x + aj)P(Sj−1 � −x(j))

� cjV (x + aj)W (x + aj). (4.6.14)

Page 244: Asymptotic analysis of random walks

4.6 Asymptotics of P(S(a) � x) and the general boundary problem 213

Owing to (4.6.8), as x → ∞,

P(|Zj,n(a)| � (x + aj)ε

)→ 0

uniformly in j for any fixed ε > 0. This means that

Ej,2 = V (x + aj)(1 + o(1)).

By virtue of (4.6.5) and (4.6.12)–(4.6.14), this proves the first part of the theorem.

(ii) Now let condition [D(k,q)] be satisfied and q(t) be an r.v.f. Then we havethe following expansion, similar to (4.4.17):

Ej,2 = V (x + aj)E[1 +

k∑i=1

Li(x + aj)i!

(Zj,n(a)x + aj

)i

+Lk(x)

k!

(Zj,n(a)x + aj

)k

ε

(Zj,n(a)x + aj

, x + aj

)+ o(q(x + aj));

∣∣∣∣Zj,n(a)x + aj

∣∣∣∣ < δ

]. (4.6.15)

In order to pass to ‘complete’ expectations, we will have to bound the quantities

T ij,n := E

[|Zij,n(a)|; |Zj,n(a)| � x(j)

], i � k − 1;

Ikj,n := E

[|Zkj,n(a)|; |Zj,n(a)| < x(j)

],

Ik,εj,n := E

[|Zk

j,n(a)| ε(

Zj,n(a)x + aj

, x + aj

); |Zj,n(a)| < x(j)

].

Let mj := min{n, x + aj}.

Lemma 4.6.6.

Ikj,n �

{cjk/2 if E(ξ+)k+1 < ∞,

c[jk/2 + mk+1

j V (mj)]

if α < k + 1;(4.6.16)

Ik,εj,n = o(Ik

j,n), (4.6.17)

T ij,n � c(x + aj)i

[jV (x + aj) + mjV (x + aj)

], i � k. (4.6.18)

Proof. As is well known, if E(ξ+)k+1 < ∞ then ES k(a) < ∞ (this couldalso be derived easily from Theorem 4.2.1). Therefore, if E(ξ+)k+1 < ∞ thenESn(a)k � ES(a)k < c < ∞ and, since E|Sn|k � cknk/2, we obtain

E|Zj,n(a)|k � cjk/2.

The first inequality in (4.6.16) is proved.Now consider the case where α < k + 1 and therefore E(ξ+)k+1 = ∞. Then,

Page 245: Asymptotic analysis of random walks

214 Random walks with jumps having finite variance

using the bound (4.2.2), we have

E[|Zj,n(a)|k; |Zj,n(a)| < x(j)

]� c1E

[|Sj−1|k + (S (j)n−j(a))k; |Sj−1| � x(j), S

(j)n−j(a) � 2x(j)

]+ xk(j)P

(Sj−1 < −x(j)

)� c2j

k/2 + c2

2x(j)∫0

zk−1 min{n, z}V (z) dz.

When 2x(j) � n, the integral on the right-hand side does not exceed

2x(j)∫0

zkV (z) dz � c3x(j)k+1V (x + aj).

If 2x(j) > n it does not exceed

c3

(nk+1V (n) + n

2x(j)∫n

zk−1V (z) dz

)� c4n

k+1V (n).

Thus the integral is bounded from above by cmk+1j V (mj). This proves the sec-

ond inequality in (4.6.16).Since the function ε

(z/(x + aj), x + aj

)is bounded and tends to zero when

z = o(x + aj), and since in the subsequent argument the upper integration limitx(j) = δ(x+aj) can be chosen with an arbitrarily small fixed δ > 0, the relation(4.6.17) is also proved.

It remains to consider T ij,n. It follows from the inequality (4.6.10) that

T ij,n � c

{E[|Si

j−1|; |Sj−1| � x(j)2

]+ (x + aj)i P

(|Sj−1| � x(j)

2

)+ E

[S i

n−j(a); Sn−j(a) � x(j)2

]+ (x + aj)i P

(Sn−j(a) � x(j)

2

)}, (4.6.19)

where, by virtue of Corollary 4.1.4 and Theorem 4.2.1(i),

E[|Si

j−1|; |Sj−1| � x(j)2

]< c1j(x + aj)i V (x + aj),

E[S i

n−j(a); Sn−j(a) � x(j)2

]< c1 min{n, x + aj}(x + aj)i V (x + aj).

This, together with (4.6.19), implies (4.6.18). The lemma is proved.

Now we return to (4.6.5) and start gathering up the bounds that we have ob-tained. First we consider the sum of the remainder terms that appear in (4.6.6). It

Page 246: Asymptotic analysis of random walks

4.6 Asymptotics of P(S(a) � x) and the general boundary problem 215

is not hard to see thatn∑

j=1

V (x + aj)[min{j, x}V (x) + min{n, x + aj}V (x + aj)

]� cm2V 2(x),

so that

P(Sn(a) � x) =n∑

j=1

(Ej,1 + Ej,2) + O(m2V 2(x)

),

where, by virtue of (4.6.14),n∑

j=1

Ej,1 < cm2V (x)W (x).

To compute∑n

j=1 Ej,2, we have to use (4.6.15) and Lemma 4.6.6, which impliesthat

n∑j=1

V (x + aj)(x + aj)i

T ij,n � c

n∑j=1

[jV (x + aj)V (x + aj) + mjV

2(x + aj)]

� c1

[m2V (x)V (x) + mxV 2(x)

].

Thus, for α < k + 1,

P(Sn(a) � x) =n∑

j=1

V (x + aj)

(1 +

k∑i=1

Li(x + aj)i!(x + aj)i

EZij,n(a)

)

+ o

⎡⎣ n∑j=1

V (x + aj)(x + aj)k

(jk/2 + mk+1

j V (mj))⎤⎦

+ O(m2V (x)V (x)

)+ O

(mxV 2(x)

)+ o

(xV (x)q(x)

),

wheren∑

j=1

V (x + aj)(x + aj)k

jk/2 � cmk/2+1x−kV (x),

n∑j=1

V (x + aj)(x + aj)k

mk+1j V (mj) � cmk+2V (m)x−kV (x)

= cmk+1V (m)xk+1V (x)

mxV 2(x) � cmxV 2(x),

since the power factor of the function tk+1V (t) has the exponent k + 1 − α > 0,

while m � x. The second assertion of Theorem 4.6.1 is proved. In the casewhen E(ξ+)k+1 < ∞, the bounds could be improved owing to Lemma 4.6.6.

(iii) The third assertion of the theorem is an obvious consequence of the second.The theorem is proved.

Page 247: Asymptotic analysis of random walks

216 Random walks with jumps having finite variance

Proof of Corollary 4.6.3. The first equality in (4.6.3) is obvious from (4.6.2). Toproof the second, it suffices to observe that

L1(x) ∼ α, L2(x) ∼ α(α + 1), x2V (x) = o(1).

4.6.2 The general boundary problem

Now we will turn to the general boundary problem on the asymptotic behaviourof the probability

P(

maxk�n

(Sk − g(k)) � 0),

where {g(k)} is a boundary from the class Gx,n, for which mink�n g(k) = cx,c > 0 (cf. p. 155). The following analogue of Theorem 3.6.4 holds true.

Theorem 4.6.7. Assume that the conditions [<, =] and Eξ2 < ∞ are satisfied.Then, as x → ∞,

P(maxk�n

(Sk − g(k)) � 0)

=

[n∑

j=1

V (g∗(j))

](1 + o(1)) + O

(n2V (x)V (x)

),

where g∗(j) = minj�k�n g(k) and the remainder terms are uniform in g ∈ Gx,n

in the zone of n and x such that n < c1x2/lnx for a suitable c1 > 0.

Proof. The proof of Theorem 4.6.7 basically repeats the argument proving Theo-rem 3.6.4(i), with a few obvious changes since now Eξ2 < ∞. So we will omitit.

We also have a complete analogue of Corollary 3.6.6 for the case under con-sideration, and, in particular, an assertion on the asymptotics of the probabilitiesP(Sn(−a) − an � x

)for a > 0, where

Sn(−a) := maxk�n

(Sk + ak).

Under condition [D(k,q)], one could carry out a more complete analysis of thedistribution of Sn(−a)−an. We will confine ourselves to the case when condition[D(2,0)] is met.

Theorem 4.6.8. Let conditions [<, =], α > 2, and [D(2,0)] be satisfied. Then,for a fixed a > 0, as x → ∞,

P(Sn(−a) − an � x) = nV (x)[1 +

L1(x)xn

n∑j=1

Eζj

+α(α − 1)n

2x2(1 + o(1)) + O

(n3/2x−3

)],

Page 248: Asymptotic analysis of random walks

4.7 Integro-local theorems for the sums Sn 217

where ζj := −mink�j(Sk +ak) and the remainder terms are uniform in the zonen � cx2/lnx for some c > 0.

If n � N → ∞, then the above representation takes the form

nV (x)[1 +

αEζ

x(1 + o(1)) +

α(α − 1)n2x2

(1 + o(1)) + O(n3/2x−3

)],

where ζ = ζ∞, Eζ < ∞ and the remainder terms are uniform in the zone n ∈[N, cx2/lnx], c > 0.

Proof. The proof of Theorem 4.6.8 follows the scheme of the proof of Theo-rems 4.4.4, refg4p5t1 and 4.6.1 and uses the relations

Sn � Sn(−a) − an � Sn, Sn(−a) − an = Sn + ζ′n,

where ζ ′nd= ζn. A detailed scheme of the proof is given in § 5.7. We will not

present it here to avoid repetition.

It is seen from the theorem that the first terms in the expansions for the distri-butions of Sn and Sn(−a) − an coincide when x = o(n).

4.7 Integro-local theorems for the sums Sn

4.7.1 Integro-local theorems on the large deviations of Sn

In this section, as in § 3.7, we will study the asymptotic behaviour of the proba-bilities P

(Sn ∈ Δ[x)

), where

Δ[x) := [x, x + Δ),

x → ∞, Δ = o(x). For remarks on the formulation of this problem, see § 3.7.Integro-local theorems will be used in Chapters 6–8. In the multivariate case,integro-local theorems provide the most natural form for assertions on the asymp-totics of large deviation probabilities (see Chapter 9).

We will need condition [D(1,q)] in the form (3.7.2):

[D(1,q)] As t → ∞, for Δ ∈ [Δ1,Δ2], Δ1 � Δ2 = o(t), one has

V (t) − V (t + Δ) = V1(t)[Δ(1 + o(1)) + o(q(t))

], V1(t) = αV (t)/t,

(4.7.1)where the term o(q(t)) does not depend on Δ, and q(t) = t−γqLq(t), γq � −1, isan r.v.f. The remainder term o(1) is assumed here to be uniform in Δ in the samesense as in (3.7.2).

If, in the important special case where Δ = Δ1 = Δ2 = const (Δ is anarbitrary fixed number), one has

V (t) − V (t + Δ) = V1(t)(Δ + o(1))

as t → ∞ then clearly condition [D(1,q)] will hold with q(t) ≡ 1 (here theassumption on the uniformity of o(1) disappears).

Page 249: Asymptotic analysis of random walks

218 Random walks with jumps having finite variance

In the lattice case, when the lattice span is equal to 1, we assume that Δ � 1and that Δ and t in (4.7.1) are integer-valued.

For remarks on the form (4.7.1) of condition [D(1,q)], which is somewhat dif-ferent from that used in § 4.4, see § 3.7 (the differences are only related to theconvenience of exposition). Condition (4.7.1) will always hold with q(t) ≡ 0provided that L(t) is differentiable, L′(t) = o(L′(t)/t). Then one can identifyV1(t) with −V ′(t).

Theorem 4.7.1. Let Eξ = 0, Eξ2 < ∞ and let conditions [ · , =] with α > 2and [D(1,q)] be satisfied. Then, for x � N

√n lnn, Δ � cq(x), as N → ∞,

P(Sn ∈ Δ[x)) = ΔnV1(x)(1 + o(1)), V1(x) =αV (x)

x, (4.7.2)

where the remainder term o(1) in (4.7.2) is uniform in x, n and Δ such that

x � N√

n lnn, max{x−γ0 , q(x)} � Δ � xεN

for any fixed γ0 � −1 and any fixed function εN ↓ 0 as N ↑ ∞.

By the uniformity of o(1) in (4.7.2) we understand the existence of a functionδ(N) ↓ ∞ as N ↑ ∞ (depending on εN ) such that the term o(1) in (4.7.2) couldbe replaced by a function δ(x, n, Δ) with |δ(x, n,Δ)| � δ(N).

In the lattice case, the assertion (4.7.2) remains valid for integer-valued x andΔ � max{1, cq(x)}.

All the remarks following Theorem 3.7.1 remain valid here.For Δ = const, x � n1/(b−1), where b is such that E|ξ|b < ∞, the assertions

of Theorem 4.7.1 can be derived from Theorem 4.4.4.Now we will require that, in addition, the following condition be satisfied:

[D] For some t0 > 0 and all t � t0, the function V (t) (L(t)) is differentiable,

V ′(t) ∼ −αV (t)t

(L′(t) = o

(L(t)

t

))as t → ∞.

Then the next analogue of Theorem 3.7.2 will hold true.

Theorem 4.7.2. Let the conditions [ · , =], α > 2, [D] and x � √n lnn be met.

Then the distribution of Sn can be represented as a sum of two measures:

P(Sn ∈ ·) = Pn,1(·) + Pn,2(·),where the measure Pn,1 has, for any fixed r and all large enough x, the property

Pn,1

([x,∞)

)� [nV (x)]r.

The measure Pn,2 has density

Pn,2(dx)dx

= −nV ′(x)(1 + o(1)),

Page 250: Asymptotic analysis of random walks

4.7 Integro-local theorems for the sums Sn 219

where the reminder o(1) is uniform in x and n such that√

n lnn/x < εx for anyfixed function εx → 0 as x → ∞.

Proof. The proof of Theorem 4.7.1 follows the scheme of the proof of Theo-rem 3.7.1. For y < x, set

Gn := {Sn ∈ Δ[x)}, Bj := {ξj < y}, B =n⋂

j=1

Bj . (4.7.3)

Then

P(Gn) = P(GnB) + P(GnB), (4.7.4)

wheren∑

j=1

P(GnBj) � P(GnB) �n∑

j=1

P(GnBj) −∑

i<j�n

P(GnBiBj). (4.7.5)

As in the proof of Theorem 3.7.1, we will split the argument into three stages:bounding P(GnB), bounding P(GnBiBj), i �= j, and evaluating P(GnBj).

(1) Bounding P(GnB). We will use the crude inequality

P(GnB) � P(Sn � x; B) (4.7.6)

and Theorem 4.1.2. Owing to the latter, for x = ry, a fixed r > 2, any δ > 0 andx � N

√n lnn, N → ∞, one has

P(Sn � x; B) � (nV (y))r−δ (4.7.7)

(see Corollary 4.1.3). Now choose r such that

(nV (x))r−δ � nΔV1(x) (4.7.8)

for x � √n, Δ � cq(x). Setting n = x2 and comparing the powers of x

on the right- and left-hand sides of (4.7.8), we obtain, taking into account thecondition Δ � x−γ0 , that for (4.7.8) to hold it suffices that r is chosen in such away that

(2 − α)(r − δ) < 1 − α − γ0.

For α > 2, this is equivalent to

r >α − 1 + γ0

α − 2.

For such an r, owing to (4.7.6)–(4.7.8) we will have

P(GnB) = o(nΔV1(x)

). (4.7.9)

(2) Bounding P(GnBiBj) is done in the same way as in the proof of Theo-rem 3.7.1 and results in the inequality (3.7.16).

Page 251: Asymptotic analysis of random walks

220 Random walks with jumps having finite variance

(3) Evaluating P(GnBj). This is based, as in the proof of Theorem 3.7.1, onthe representation (3.7.17), which yields

P(GnBn) � ΔE[V1(x − Sn−1); Sn−1 < (1 − δ)x + Δ

]= ΔV1(x)(1 + o(1)). (4.7.10)

The last relation holds for x � √n since, due to the Chebyshev inequality, one

has E[V1(x−Sn−1); |Sn−1| � M

√n] ∼ V1(x) as M → ∞ and M

√n = o(x).

Moreover, the following obvious bounds are valid:

E[V1(x − Sn−1); Sn−1 ∈ (M

√n, (1 − δ)x + Δ)

]= o(V1(x))

and

E[V1(x − Sn−1); Sn−1 ∈ (−∞,−M

√n )]

= o(V1(x))

as M → ∞.Similarly, by virtue of (3.7.17) one finds that

P(GnBn) �(1−δ)x∫−∞

P(Sn−1 ∈ dz)P(ξ ∈ Δ[x − z)

) ∼ ΔV1(x). (4.7.11)

From (4.7.10) and (4.7.11) we obtain

P(GnBn) = ΔV1(x)(1 + o(1)).

Together with (4.7.4), (4.7.9) and (3.7.16), this gives

P(Gn) = ΔnV1(x)(1 + o(1)).

The required uniformity of the term o(1) follows in an obvious way from theabove argument. The theorem is proved.

Remark 4.7.3. To bound P(GnB) (see stage (1) of the proof of Theorem 4.7.1)one could use more refined approaches instead of the crude inequality (4.7.6),which would lead to stronger results. Note that P(GnB) = P(B)P(Gn|B),where

P(Gn|B) = P(S〈y〉n ∈ Δ[x)), S〈y〉

n =n∑

j=1

ξ〈y〉j

and the ξ〈y〉j are ‘truncated’ (at the level y) r.v.’s, following the distribution

P(ξ〈y〉 < t) =P(ξ < t)P(ξ < y)

, t � y,

so that (cf. § 2.1)

ϕy(λ) := Eeλξ〈y〉=

R(λ, y)P(ξ < y)

, R(λ, y) = E(eλξ; ξ < y).

Since the r.v. ξ〈y〉 is bounded from above, we have ϕy(λ) < ∞ for λ > 0, so

Page 252: Asymptotic analysis of random walks

4.7 Integro-local theorems for the sums Sn 221

that one can perform a Cramer transform on the distribution of ξ〈y〉 and introducer.v.’s ξ′j with the distribution

P(ξ′ ∈ dt) =eλtP(ξ〈y〉 ∈ dt)

ϕy(λ). (4.7.12)

Then (see e.g. § 8, Chapter 8 of [49] or § 6.1 of the present text)

P(S〈y〉n ∈ dt) = e−λtϕn

y (λ)P(S′n ∈ dt),

where S′n =

∑nj=1 ξ′j . Since P(B) =

(P(ξ < y)

)n, we have

P(S〈y〉n ∈ Δ[x)) � e−λxϕn

y (λ)P(S′n ∈ Δ[x))

and, therefore,

P(GnB) � e−λxRn(λ, y)P(S′n ∈ Δ[x)). (4.7.13)

Now the product e−λxRn(λ, y) in (4.7.13) is exactly the quantity which is nor-mally used to bound P(S〈y〉

n � x) and which was the main object of our studiesin § 4.1. In particular, we established there that, for

λ =1y

lnr

nV (y), x → ∞, (4.7.14)

and any δ > 0, one has

e−λxRn(λ, y) �[nV (x)

]−r+δ,

so that

P(GnB) �[nV (x)

]−r+δP(S′

n ∈ Δ[x)). (4.7.15)

This inequality is more precise than (4.7.6), (4.7.7), because of the presence of thefactor P(S′

n ∈ Δ[x)) on the right-hand side of (4.7.15). Indeed, for this factor wehave the following bounds, owing to the well-known results for the concentrationfunction: for all n � 1,

P(S′n ∈ Δ[x)) � c√

n×{

(Δ + 1) for all Δ,

Q(Δ) for Δ � 1,

where Q(Δ) := supt P(ξ′ ∈ Δ[t)) � 1 is the concentration function of the r.v. ξ′

(see e.g. Lemma 9 and Theorem 11 in Chapter 3 of [224]). This means that, forany δ > 0, Δ > 0, as x → ∞ we have

P(GnB) �[nV (x)

]−r+δ Δ + 1√n

. (4.7.16)

It is not hard to see that if ξ has a bounded density V1(x) and V1(x) = O(V (x))as x → ∞ then the Cramer transform (4.7.12) for the value of λ given in (4.7.14)

Page 253: Asymptotic analysis of random walks

222 Random walks with jumps having finite variance

will result in a distribution of ξ′ that also possesses a bounded density, and there-fore Q(Δ) � cΔ. In this case we will also have

P(GnB) �[nV (x)

]−r+δ Δ√n

(4.7.17)

for any Δ � 1. The inequality (4.7.16) enables one to obtain the bound (4.7.9)for smaller values of r, and the inequality (4.7.17) enables one to obtain (4.7.9)for any Δ � 1 (not necessarily for Δ � q(x)).

Proof. The proof of Theorem 4.7.2 is similar to that of Theorem 3.7.2 and hencewill be omitted.

Remark 4.7.4. If n → ∞ and the order of magnitude of the deviations is fixedas follows: x ∼ A(n) := n1/γLA(n), γ < 2, where LA is an s.v.f., then ananalogue of Remark 3.7.4 is valid under the conditions of the present chapter. Thisremark concerns the possibility of an alternative approach to obtaining integro-local theorems when condition [D(1,q)] is substantially relaxed and replaced bycondition (3.7.20) for x ∼ A(n), Δn = εn

√n, εn = o(1) as n → ∞. The latter

is equivalent to the condition

V (x) − V (x + Δ) = ΔV1(x)(1 + o(1))

for Δ = xγ/2LΔ(x), where LΔ is an s.v.f., x → ∞. The scheme of the proofof the integro-local theorem remains the same as in Remark 3.7.4. One just letsb(n) :=

√nd, takes f to be the standard normal density and uses the Gnedenko–

Stone–Shepp theorem [130, 254, 258, 259].

In conclusion note that, under the conditions of this section, one could alsoobtain asymptotic expansions in the integro-local theorem, as in § 4.4.

4.7.2 Integro-local theorems valid on the whole real line

An integral theorem for the sums Sn of r.v.’s with regularly varying distributions,which is valid on the whole real line, is given by (4.1.2). Now we will present asimilar integro-local representation, i.e. a representation for

P(Sn ∈ Δ[x)

),

that will be valid on the whole half-line x � 0. The complexity of the proofsof the respective results, which were obtained in the recent paper [192], is some-what beyond the level of the present monograph. So we will present these resultswithout proofs, the latter can be found in [192].

As before, we will consider two distribution types, non-lattice and arithmetic.For these distributions we will need an additional smoothness condition [D1],which is a simplified version of condition [D(1,1)], see (4.7.1):

Page 254: Asymptotic analysis of random walks

4.7 Integro-local theorems for the sums Sn 223

[D1] In the non-lattice case, for any fixed Δ > 0, as t → ∞,

V (t) − V (t + Δ) ∼ ΔαV (t)t

.

In the arithmetic case, for integer-valued k → ∞,

V (k) − V (k + 1) ∼ αV (k)k

.

As we have already pointed out, the function V1(t) := αV (t)/t in condi-tion [D1] plays the role of the derivative of −V (t), and is asymptotically equiv-alent to the latter, when the derivative exists and behaves regularly enough atinfinity.

We will also need the condition that

E(|ξ|2; ξ < −t

)= o

(1

ln t

)as t → ∞, (4.7.18)

which controls the decay rate of the left tail F−(t) as t → ∞.First we consider the non-lattice case, where we can assume without loss of

generality that Eξ = 0, d = Var ξ = 1.

Theorem 4.7.5. Let ξ be a non-lattice r.v.,

Eξ = 0, Var ξ = 1, (4.7.19)

and let the conditions [ · , =] with α > 2, [D1] and (4.7.18) be satisfied. Then,for any fixed Δ > 0,

P(Sn ∈ Δ[x)

) ∼ Δ√2πn

e−x2/2n + ΔnαV (x)

x(4.7.20)

uniformly in x � √n. In particular, for any fixed ε > 0,

P(Sn ∈ Δ[x)

)∼

⎧⎪⎪⎨⎪⎪⎩Δ√2πn

e−x2/2n if√

n � x � (1 − ε)√

(α − 2)n lnn,

ΔnαV (x)

xif x � (1 + ε)

√(α − 2)n lnn.

(4.7.21)

If x � (1 − ε)√

(α − 2)n lnn then condition [D1] is redundant.

Now consider the arithmetic case. Here condition (4.7.19) does restrict gener-ality, and we are dealing with arbitrary values of Eξ and d = Var ξ.

Theorem 4.7.6. Let ξ be an arithmetic r.v., and let the conditions [ · , =] forα > 2, [D1] and (4.7.18) be satisfied. Then

P(Sn − �nEξ = x

) ∼ 1√2πnd

e−x2/2dn + nαV (x)

x

Page 255: Asymptotic analysis of random walks

224 Random walks with jumps having finite variance

uniformly in x � √n. In particular, for any fixed ε > 0,

P(Sn − �nEξ = x

)∼

⎧⎪⎪⎨⎪⎪⎩1√

2πnde−x2/2nd, if x � (1 − ε)

√d(α − 2)n lnn,

nαV (x)

x, if x � (1 + ε)

√d(α − 2)n lnn.

If x � (1 − ε)√

d(α − 2)n lnn then condition [D1] is redundant.

Note that generally speaking Theorems 4.7.5 and 4.7.6 do not imply the integraltheorem (4.1.2) because in that theorem condition [D1] was not assumed.

It is clear that Theorems 4.7.5 and 4.7.6 could be combined into one assertionabout the asymptotics of

P(Sn − nEξ ∈ Δ[x)

),

without the assumption (4.7.19), where Δ is an arbitrary fixed positive number inthe non-lattice case and Δ = 1 in the arithmetic case.

Observe also that the assertions of Theorems 4.7.5 and 4.7.6 for deviationsx � √

n lnn and x = O(√

n)

have already been proved in Theorem 4.7.1 andthe Gnedenko–Stone–Shepp theorem (see [130, 258, 254]) respectively. For aproof for the remaining zone

√n � x = O

(√n lnn

), see [192].

4.8 Extension of results on the asymptotics of P(Sn � x) and P(Sn � x) to

wider classes of jump distributions

The main assumption under which we established in Chapters 3 and 4 the asymp-totics

P(Sn � x) ∼ nP(ξ � x), P(Sn � x) ∼ nP(ξ � x) (4.8.1)

in the respective deviation zones was that the distribution F of the summands ξi

is regularly varying at infinity, i.e. that condition [ · , =] is satisfied. Now we willshow that the asymptotics (4.8.1) remain valid for a wider class of distributionsas well, but possibly for narrower deviation zones.

First assume that Eξ2 < ∞. Recall that the definitions of ψ-locally constant(ψ-l.c.) and upper-power functions can be found in Chapter 1 (see pp. 18, 28).

Theorem 4.8.1. Let the following conditions be satisfied:

(1) [ · , <], α > 2, and Eξ2 < ∞;

(2) the function F+(t) is both upper-power and ψ-l.c.

Let, moreover, x → ∞ and

n <x2

2c lnx, n < c1ψ

2(x), nV 2(x) = o(F+(x)) (4.8.2)

for some c > α − 2, c1 < ∞. Then (4.8.1) holds true.

Page 256: Asymptotic analysis of random walks

4.8 Extension to wider classes of jump distributions 225

Remark 4.8.2. By virtue of Theorem 1.2.21(i), any distribution F that satisfiescondition (2) of Theorem 4.8.1 is subexponential and hence, for any fixed n, onehas P(Sn � x) ∼ nF+(x) as x → ∞. As we saw in Chapter 1, subexponen-tial distributions do not need to behave ‘regularly’ at infinity; for instance, theycan fluctuate between two regularly varying (or subexponential) functions (seeTheorem 1.2.21(ii) and Example 1.2.41).

Remark 4.8.3. The last condition in (4.8.2) is always satisfied provided thatF+(t) > ct−α−ε for some ε < α − 2. Indeed, in that case, for n < x2 andany ε′ ∈ (0, α − 2 − ε), by Theorem 1.1.4(i) we have

nV 2(x) = nx−2αL2(x) = o(x−2α+2+ε′)

= o(F+(x)),

since −2α + 2 + ε′ < −α − ε.

Remark 4.8.4. It is easily seen that Theorem 4.8.1 together with Theorem 4.8.6below, which covers the case Eξ2 = ∞, includes the following as special cases:(a) situations when the distribution F is of ‘extended regular variation’, i.e. when,for some 0 < α1 � α2 < ∞ and any b > 1,

b−α2 � lim infx→∞

F+(bx)F+(x)

� lim supx→∞

F+(bx)F+(x)

� b−α1 ; (4.8.3)

(b) the somewhat more general case of distributions of ‘intermediate regular vari-ation’ (introduced in [89] and sometimes also referred to as distributions with‘consistently varying tails’), i.e. distributions with the property

limb↓1

lim infx→∞

F+(bx)F+(x)

= 1. (4.8.4)

Under the assumption that the r.v. ξ = ξ′ − Eξ′ was formed by centring anon-negative r.v. ξ′ � 0, the first of the asymptotic relations (4.8.1) was obtainedin these (a) and (b) in [90] and [210], respectively. Note that in [210] condi-tion [ · , <] was not assumed for any α > 1 but the large deviations theorem wasestablished there only in the zone x � δn, where δ > 0 is fixed.

Remark 4.8.5. Since a ψ-l.c. function F+ with ψ(t) = t is an s.v.f., one canassume in condition (2) of Theorem 4.8.1 that necessarily ψ(t) = o(t).

Proof of Theorem 4.8.1. First consider P(Sn � x). As condition [ · , <] is satis-fied, the bounds of § 4.1 hold true. Therefore, as in our previous arguments, weobtain, using the method of ‘truncated’ r.v.’s, that, for a fixed δ ∈ (0, 1),

P(Sn � x) = nP(Sn � x, ξn � δx) + O((nV (x))2

)

Page 257: Asymptotic analysis of random walks

226 Random walks with jumps having finite variance

(cf. (4.4.13)), where, for M ∈ (0, (1 − δ)x/√

n),

P(Sn � x, ξn � δx)

= P(Sn−1 + ξn � x, ξn � δx)

= P(Sn−1 � (1 − δ)x

)F+(δx) +

(1−δ)x∫−∞

P(Sn−1 ∈ dt)F+(x − t)

= o(F+(x)

)+ E

[F+(x − Sn−1); |Sn−1| < M

√n]

+ E[F+(x − Sn−1); Sn−1 ∈ (−∞,−M

√n] ∪ [M

√n, (1 − δ)x ]

].

(4.8.5)

Since M√

n <√

c1Mψ(x) and from condition (2) one has F+(x − t) ∼ F+(x)as x → ∞ when |t| <

√c1Mψ(x) and M → ∞ slowly enough, for such an M

we have

E[F+(x − Sn−1); |Sn−1| < M

√x]

= F+(x)(1 + o(1)).

The last term on the right-hand side of (4.8.5) is clearly o(F+(x)), as F+(x) is anupper-power function and P

(|Sn−1| � M√

n)→ 0. Therefore

P(Sn � x, ξn � δx) = F+(x)(1 + o(1)).

Since (nV (x))2 = o(nF+(x)), the first relation in (4.8.1) is proved.Now we turn to the distribution of Sn. Since P(Sn � x) � P(Sn � x), it

suffices to verify that P(Sn � x) � nF+(x)(1 + o(1)).Again using our previous arguments (see (4.5.4)), we have

P(Sn � x) =n∑

k=1

P(Sn � x, ξk � δx) + O((nV (x))2

). (4.8.6)

Here

P(Sn � x, ξk � δx) = Pk + O(nV (x)F+(x)

),

where

Pk := P(Sk−1 <

δx

2, ξk � δx, Sn � x

)= P

(Sk−1 <

δx

2, ξk � δx, Sk � x

)+

x∫−∞

P(Sk−1 <

δx

2, ξk � δx, Sk ∈ dt, S

(k)n−k � x − t

), (4.8.7)

S(k)n−k = max0�m�n−k(Sm+k − Sk) d= Sn−k is independent of {S1, . . . , Sk}.

Page 258: Asymptotic analysis of random walks

4.8 Extension to wider classes of jump distributions 227

Now the first term on the final right-hand side of (4.8.7) does not exceed

P(Sk−1 <

δx

2, Sk � x

)=

δx/2∫−∞

P(Sk−1 ∈ dt)F+(x − t)

= E[F+(x − Sk−1); Sk−1 � δx

2

]= E

[F+(x − Sk−1); |Sk−1| < M

√n]

+ E[F+(x − Sk−1); Sk−1 ∈ (−∞,−M

√n ] ∪ [M√

n, δx/2]]

.

In exactly the same way as in (4.8.5), we can verify that the right-hand side of thelast equality has the form F+(x)(1 + o(1)).

Next consider the last term on the right-hand side of (4.8.7). It does not exceed

δx/2∫−δx/2

P(Sk−1 ∈ du)

x−u∫δx

P(ξ ∈ dv)P(Sn−k � x − (u + v)) + o(F+(x)

).

(4.8.8)Fix a number u, |u| < δx/2. For simplicity, let u = 0. Then, since F+ is a ψ-l.c.function, we have

x∫x−Mψ(x)

P(ξ ∈ dv)P(Sn−k � x − v) � P(ξ ∈ (x − Mψ(x), x]

)= F+

(x − Mψ(x)

)− F+(x)

= o(F+(x)

). (4.8.9)

Now put Z(v) := P(Sn−k � x − v) and consider

x−Mψ(x)∫δx

P(ξ ∈ dv)Z(v) = −F+(v)Z(v)∣∣∣∣ x−Mψ(x)

δx

+

x−Mψ(x)∫δx

F+(v) dZ(v)

� F+(δx)Z(δx) + F+(δx)Z(x − Mψ(x)

).

(4.8.10)

Since

Z(x − Mψ(x)) = P(Sn−k � Mψ(x)

)� P

(Sn−k > cM

√n)→ 0

as M → ∞, the right-hand side of (4.8.10) is o(F+(x)). If we now consider,instead of (4.8.9), (4.8.10), the expression⎛⎜⎝ x−u−Mψ(x)∫

δx

+

x−u∫x−u−Mψ(x)

⎞⎟⎠P(ξ ∈ dv)P(Sn−k � x − u − v)

Page 259: Asymptotic analysis of random walks

228 Random walks with jumps having finite variance

for any fixed u, |u| < δx/2 then all the computations will remain valid (withobvious minor changes, which, however, do not affect the final bound o(F+(x))).This means that the bound o(F+(x)) remains true for the integrals in (4.8.7),(4.8.8) as well. Comparing (4.8.6), (4.8.7) with the derived bounds, we obtainthat P(Sn � x) = nF+(x)(1 + o(1)).

The theorem is proved.

The case Eξ2 = ∞, E|ξ| < ∞, Eξ = 0 is dealt with in exactly the same way.

Theorem 4.8.6. Let the following conditions be satisfied:

(1) [<, <] for α ∈ (1, 2), W (t) � cV (t);

(2) F+(t) is both upper-power and ψ-l.c. with ψ(t) = σ(t) = V (−1)(1/t).

Then relation (4.8.1) holds true provided that

x � σ(n), n <c

V (ψ(x)), c < ∞, nV 2(x) = o(F+(x)). (4.8.11)

Proof. The proof is similar to that of Theorem 4.8.1. One just has to replacein (4.8.5) the integration region {Sn−1 < M

√n } by {Sn−1 < Mσ(n)}, where,

by virtue of (4.8.11), we have σ(n) < c1ψ(x), and then to use the fact thatP(Sn−1 � Mσ(n)) → 0 as M → ∞.

Similar changes should be made in the second part of the proof, which dealswith Sn. The theorem is proved.

Remark 4.8.7. If condition W (t) < cV (t) does not hold, β ∈ (1, 2), then theassertion of Theorem 4.8.6 will remain true provided that we replace x � σ(n)in (4.8.11) by x � σ∗(n), where σ∗ = σ∗(n) is such that nW (σ∗/lnσ∗) < c.Indeed, for such values of x, all the bounds for P(Sn � x) that we used in theproof of Theorem 4.8.6 will remain valid, owing to Theorem 3.1.1.

4.9 The distribution of the trajectory {Sk} given that Sn � x or Sn � x

As stated in the Introduction, the trajectories of the random walk {Sk; k =1, . . . , n} that give the biggest contribution to the asymptotics of the probabil-ities P(Sn � x) and P(Sn � x) (such trajectories could also be called the‘shortest’ or ‘most typical’), are of a completely different character in the Cramercase (0.0.9) as compared with the case when the tails P(ξ � t) are regularly vary-ing. In the Cramer case these trajectories are close to the straight path connectingthe points (0, 0) and (n, x). Owing to the results obtained above, it is natural toexpect that the scaled trajectory {S�nt�; t ∈ [0, 1]} of the walk will be close toa step function with a single jump at a random time that is uniformly distributedover (0, 1). More precisely, we have the following result.

Set

η(x) := min{k : Sk � x}, χ(x) := Sη(x) − x,

Page 260: Asymptotic analysis of random walks

4.9 Distribution of {Sk} given that Sn � x or Sn � x 229

and denote by E(t) the the following random step process on [0, 1]:

E(t) :={

0 for t � ω,1 for t > ω,

where ω is an r.v. with the uniform distribution on [0, 1].

Theorem 4.9.1. Let the conditions [ · , =], α > 2, and Eξ2 < ∞ be met. Then,for x � √

n lnn, i � n and a fixed z > 0,

P(η(x) = i, χ(x) � zx|Sn � x

) ∼ P(η(x) = i, χ(x) � zx|Sn � x

)∼ (1 + z)−α

n. (4.9.1)

The conditional distribution of the process {x−1S�nt�; t ∈ [0, 1]} given Sn � x

(or Sn � x), converges weakly in the Skorokhod space D(0, 1), as n → ∞, to thedistribution of the process {ζαE(t); t ∈ [0, 1]}. Here the r.v. ζα and the process{E(t)} are independent of each other and P(ζα � y) = y−α, y � 1.

Analogous assertions will hold true under the conditions of Chapter 3 whenα ∈ (1, 2), x � σ(n).

Proof. Put

Gi−1 :={Si−1 < x

}, Yi−1 := V (x + zx − Si−1)1

(Gi−1

)and let ε = ε(x) → 0 as x → ∞ be such that εx � √

n lnn. Then, for i � n,

P(η(x) = i, χ(x) � zx

)= P

(Gi−1; Si−1 + ξi � x + zx

)= EYi−1 = E1 + E2 + E3 + E4,

where

E1 := E[Yi−1; |Si−1| < εx

] ∼ V (x + zx),

E2 := E[Yi−1; Si−1 � −εx

]= o

(V (x)

),

E3 := E[Yi−1; Si−1 ∈ [εx, x/2]

]= o

(V (x)

),

E4 := E[Yi−1; Si−1 ∈ (x/2, x)

]� V (zx)P

(Si−1 > x/2

)� cnV (x)V (zx) = o

(V (x)

).

Thus we obtain that, for i � n,

P(η(x) = i, χ(x) � zx

) ∼ V (x + zx) ∼ (1 + z)−αV (x). (4.9.2)

Moreover, because

P(η(x) = i, χ(x) � zx, Sn < x

)� P

(η(x) = i, χ(x) � zx

)P(Sn−i < −zx)

and the last factor tends to zero, we also have

P(η(x) = i, χ(x) � zx, Sn � x

) ∼ (1 + z)−αV (x). (4.9.3)

Page 261: Asymptotic analysis of random walks

230 Random walks with jumps having finite variance

Since

P(Sn � x) ∼ P(Sn � x) ∼ nV (x) (4.9.4)

by Theorem 4.4.1 the relations (4.9.2) and (4.9.3) immediately imply (4.9.1).Furthermore, it is evident that the probabilities

P(maxj�i

|Sj−1| < εx, ξi � x, maxj�n−i

|S(i)j | < εx

)have the same asymptotics as (4.9.2). From this and (4.9.4) one can easily de-rive the convergence of the conditional distributions of {x−1S�nt�; t ∈ [0, 1]} inD(0, 1): it suffices to prove the convergence of the finite-dimensional distribu-tions (which is obvious) and verify the compactness (tightness) conditions. Thelatter can be done in a standard way (see e.g. Chapter 3 of [28]), and we will leavethis to the reader.

The following assertion could be obtained in a similar way, using the integro-local theorems of § 4.7. As before, let ξn = maxk�n ξk.

Theorem 4.9.2. Let the conditions [ · , =], α > 2, Eξ2 < ∞, [D(1,q)] withq(t) ≡ 1 and x � √

n lnn be satisfied, and let Δ > 0 be an arbitrary fixednumber. Then, as n → ∞,

P(η(x) = i |Sn ∈ Δ[x)

) ∼ 1n

.

The conditional distribution of the process {x−1S�nt�; t ∈ [0, 1]} given the eventSn ∈ Δ[x) converges weakly in D(0, 1), as n → ∞, to the distribution of theprocess {E(t); t ∈ [0, 1]}.

The limiting distribution for the conditional laws of (ξn − x)/√

n given theevent Sn ∈ Δ[x) coincides with the limiting distribution for −Sn/

√n (which is

clearly normal).

The last assertion follows in an obvious way from the relation

Si−1 + ξn + Sn − Si ∈ Δ[x),

which clearly holds on the event {ωn = i} ∩ {Sn ∈ Δ[x)}, where we put ωn :=min{k � 1 : ξk = ξn}, and from the fact that the limiting distributions for

Sn−1 + δn√n

, |δn| � Δ andSn√

n

are the same.Along with the assertions of Theorems 4.9.1 and 4.9.2 providing first-order ap-

proximations to the conditional distributions of {x−1S�nt�}, one can also obtaina second-order approximation. What happens here is that, roughly speaking, un-der the condition Sn ∈ Δ[x) the processes {S�nt�} approach (in distribution) theprocess (

x − w(1)√

n)E(t) +

√nw(t),

Page 262: Asymptotic analysis of random walks

4.9 Distribution of {Sk} given that Sn � x or Sn � x 231

where w(t) is the standard Wiener process (we assume that Eξ2 = 1), which isindependent of E(t).

We will state the above claim in a more precise form. Let

Eωn(t) :=

{0 for t � ωn,1 for t > ωn.

Theorem 4.9.3. Let the conditions of Theorem 4.9.2 be met. Then, as n → ∞,the conditional distribution of the process

1√n

(S�nt� − xE(t)

), t ∈ [0, 1],

given Sn ∈ Δ[x), converges weakly in D(0, 1) to the distribution of the process{w(t) − w(1)E(t)}, where the processes {w(t)} and {E(t)} are independent ofeach other.

Proof. To prove the convergence of the finite-dimensional distributions we willfollow the arguments in §§ 4.4 and 4.7. Consider the trajectory {S∗

�nt�} that isobtained from {S�nt�} by replacing in the latter its maximum jump ξn (whichwill be unique with probability tending to 1) by zero, so that ξn ∈ x−S∗

n + Δ[0)(given Sn ∈ Δ[x)). It can be seen from the proofs of the theorems of §§ 4.4and 4.7 that, for each i, the conditional distribution of the trajectory {S∗

�nt�}given ξi � x/r ‘almost coincides’ with the distribution of the sequence of cu-mulative sums of independent r.v.’s (distributed as ξ), one of which (the ith one)is replaced by zero. As in §§ 4.4 and 4.7, we can represent the principal part of theprobability P(Gn) of the desired event Gn as the sum

∑nj=1 P(GnBj), and then

it is not hard to derive from the above observations that, for each t, the sequence

1√n

(S�nt� − ξnEωn

(t))

=1√n

(S�nt� − (x− S∗

n + δn)Eωn(t)), |δn| � Δ,

given Sn ∈ Δ[x), will converge in distribution as n → ∞ to the same r.v. as thesequence S∗

�nt�/√

n.In other words, the ‘conditional process’

1√n

(S�nt� − xEωn

(t))

will converge to the same limiting process as

1√n

(S∗�nt� − S∗

nEωn(t)),

i.e. to the process {w(t) − w(1)E(t)}. Compactness (tightness) conditions areverified in the standard way.

One could also obtain second-order approximations to the conditional distribu-tions of the process {S�nt�} given Sn � x.

In the case Eξ2 = ∞ it is not difficult to obtain, using the same arguments,

Page 263: Asymptotic analysis of random walks

232 Random walks with jumps having finite variance

complete analogues of Theorems 4.9.1–4.9.3 under condition [<, =] with α < 2and W (t) < cV (t). For example, the following analogue of Theorem 4.9.1 holdstrue. Let γ ∈ (0, α) be an arbitrary fixed number.

Theorem 4.9.4. Let conditions [<, =] with α < 2 and W (t) < cV (t) and[D(1,q)] with q(t) ≡ 1 be satisfied. Then, for x > n1/γ and any fixed Δ > 0,

as n → ∞,

P(η(x) = i |Sn ∈ Δ[x)

) ∼ 1n

.

The conditional distribution of the process {x−1S�nt�; t ∈ [0, 1]} given the eventSn ∈ Δ[x) converges weakly in D(0, 1), as n → ∞, to the distribution of theprocess {E(t); t ∈ [0, 1]}.

If, moreover, condition [Rα,ρ] holds then the limit of the conditional laws of(ξn − x)/b(n), b(n) = F (−1)(1/n) given Sn ∈ Δ[x) coincides with the limitingdistribution of −Sn/b(n) (i.e. with the stable distribution Fα,−ρ; for simplicitywe assume that α �= 1).

In the statement of the analogue of Theorem 4.9.3 in the case Eξ2 = ∞, oneshould replace the Wiener process {w(t)} by the stable process {ζ(α,ρ)(t)} andthe scaling sequence

√n by b(n).

Page 264: Asymptotic analysis of random walks

5

Random walks with semiexponential jumpdistributions

5.1 Introduction

In Chapters 5 and 6 we will be studying random walks with jumps for whichthe distributions decay at infinity in a regular manner but faster than any powerfunction. For such walks also, one can obtain a rather complete description ofthe asymptotics of the large deviation probabilities using methods close to thosedeveloped in Chapters 2–4.

Two distribution classes will be considered. This chapter is devoted to studyingrandom walks with semiexponential jump distributions, which were introducedin Definition 1.2.22 (p. 29). In Chapter 6, however, we will study exponentiallydecaying distributions of the form

P(ξ � t) = V0(t)e−λ+t, λ+ > 0,

where V0(t) is an r.v.f. such that∫∞1

tV0(t)dt < ∞. In this case, the left deriva-tive ϕ′(λ+) of the function ϕ(λ) = Eeλξ is finite, and so the existing analyticmethods for studying the asymptotics of, say, the probabilities P(Sn � x) andP(Sn � x) will not work for deviations x such that x/n > ϕ′(λ+)/ϕ(λ+)(see [37]). This means that one has to look for some ‘mixed’ approaches thatwould use the results of Chapters 2–4 (for more detail, see Chapter 6).

So, the present chapter deals with large deviation problems for the class Se ofsemiexponential distributions, which have the form

P(ξ � t) = V (t) = e−l(t), (5.1.1)

where

(1) the function l(t) admits the representation

l(t) = tαL(t), 0 < α < 1, L(t) is an s.v.f. at infinity, (5.1.2)

(2) as t → ∞, for Δ = o(t) one has

l(t + Δ) − l(t) =αΔl(t)

t(1 + o(1)) + o(1). (5.1.3)

233

Page 265: Asymptotic analysis of random walks

234 Random walks with semiexponential jump distributions

In other words, (2) means that

l(t + Δ) − l(t) ∼ αΔl(t)t

(5.1.4)

if Δ = o(t) and Δl(t)/t > ε for a fixed ε > 0, and that

l(t + Δ) − l(t) = o(1) (5.1.5)

if Δl(t)/t → 0.

Along with the notation F ∈ Se, which indicates that the distribution F issemiexponential, we will sometimes use the equivalent notation V ∈ Se (orF+ ∈ Se), where V (or F+) is the right tail of F. This is quite natural, asconditions (5.1.3)–(5.1.5) refer to the right distribution tails only.

Note that, in the present chapter, we will exclude the extreme cases α = 0and α = 1 (cf. Definition 1.2.22).

Some properties of semiexponential distributions were studied in § 1.2. Recallthat if the function L(t) is differentiable and

L′(t) = o(L(t)/t

)as t → ∞ then l′(t) ∼ αl(t)/t, the property (5.1.3) always holds and (5.1.4) istrue for all Δ = o(t), so that the second remainder term o(1) in (5.1.3) is absent.

If the distribution of ζ satisfies Cramer’s condition, ϕ(λ) = Eeλζ < ∞ forsome λ > 0, then, under rather broad conditions, the distribution of ξ = ζ2 willbe semiexponential. Assume, for example, that, in the representation

P(ζ � t) = e−λ+t+h(t), λ+ = sup{λ : ϕ(λ) < ∞} > 0, (5.1.6)

the function h(t) = o(t) is differentiable for t � t0 > 0 and h′(t) → 0 as t → ∞.Then

P(ξ � t) = e−λ+√

t+h(√

t),

so that the function l(t) = λ+

√t − h(

√t ) has the property that

l(t + Δ) − l(t) =Δ

2√

t

(1 + o(1)

)as t → ∞, Δ = o(t), and hence the relations (5.1.2), (5.1.3) hold for α = 1/2.

It is obvious that the same could also be said about the distribution of the sum

χ2 := ζ21 + · · · + ζ2

k ,

where the r.v.’s ζjd= ζ are independent.

In this chapter, by conditions [ · , =] and [ · , <] we will understand, respec-tively, the relation (5.1.1) and the inequality P(ξ � t) � V (t), where the functionV (t) = e−l(t) satisfies (5.1.2) and (5.1.3).

For semiexponential distributions, the description of the asymptotics of largedeviation probabilities differs substantially from that presented in Chapter 2–4

Page 266: Asymptotic analysis of random walks

5.1 Introduction 235

for the case of regularly varying distributions. The difference is, first of all, inthe presence of a rather extensive ‘intermediate’ large deviation zone, where theasymptotics of the probabilities in question (say, P(Sn � x)) will, roughly speak-ing, be intermediate between the so-called Cramer approximation (which is validfor ‘moderately large deviations’) and an approximation of the form nV (x) dueto the maximum jump.

It will be assumed throughout Chapter 5 that the following conditions hold true,along with the semiexponentiality of P(ξ � t):

Eξ = 0, Eξ2 = 1, E|ξ|b < ∞ for a b =⌊

11 − α

⌋+ 2. (5.1.7)

The main objects of study will be the functionals

Sn =n∑

k=1

ξk, Sn(a) = maxk�n

(Sk − ak), Sn = Sn(0). (5.1.8)

Now we will briefly review what is known about the probabilities of large de-viations of the functionals (5.1.8) for distributions of the form (5.1.1), (5.1.2) anddistributions close to this form, in the zone of ‘moderately large deviations’ wherethe Cramer approximation is valid. Problems on ‘moderately large deviations’of the sums Sn in the case of the above-mentioned distributions were studiedin [152, 201, 220, 221, 280, 281, 247, 212, 206, 238] and a number of other pa-pers, where, in particular, the following was proved. Let (5.1.7) be true and, forsome function h(t) that is close, in some sense, to an r.v.f. of index α ∈ (0, 1)(conditions on the function h(t) vary between publications), let

E(eh(ξ); ξ � 0

)< ∞. (5.1.9)

(In papers [152, 201, 220, 221, 280, 281, 247], the condition

E(eh(|ξ|)) < ∞ (5.1.10)

was used.) Further, let σ1(n) be the solution to the equation x2 = nh(x). Then,for the probability P(Sn � x), we have the following Cramer approximation,which holds uniformly in x � σ1(n):

P(Sn � x) =[1 − Φ

(x√n

)]e−nΛ0

κ(x/n)(1 + o(1)). (5.1.11)

Here Φ(t) is the standard normal distribution function,

Λ0κ

(x

n

):= Λκ

(x

n

)− x2

2n2,

Λκ(x/n) is the ‘truncated Cramer series’, i.e. a truncated series expansion corre-sponding to the ‘deviation function’ of ξ (which is formally defined only when

Page 267: Asymptotic analysis of random walks

236 Random walks with semiexponential jump distributions

Cramer’s condition is satisfied),

Λκ(t) :=κ∑

j=2

vj tj

j!, κ :=

⌊1

1 − α

⌋+ 1, (5.1.12)

where v2 = 1, v3 = γ3, v4 = γ4 − 3γ23 and so on (see e.g. [152, 201, 220])

and the γj are semi-invariants of the distribution of ξ, so that γ2 = 1. It followsfrom (5.1.11) that, in particular,

P(Sn � x) ∼√

n√2π x

e−n Λκ(x/n) (5.1.13)

for x � σ1(n), x � √n → ∞, and that

P(Sn � x) ∼ 1 − Φ(

x√n

)(5.1.14)

for x = o(n2/3). Recall that an ∼ bn means that an/bn → 1 as n → ∞.If F ∈ Se then (5.1.9) will be satisfied for the function h(t) = l(t) − ln t

for t � 1 because, in that case, for α ∈ (0, 1) one has

E(eh(ξ); ξ � 1

)=

∞∫1

dl(t)t

< ∞.

This means that, for semiexponentially distributed summands, the approxima-tions (5.1.11)–(5.1.14) hold in the deviation zone x � σ1(n), where the functionσ1(n) is clearly of the form

σ1(n) = n1/(2−α)L1(n), L1 is an s.v.f.

(see Theorem 1.1.4(v)).Similar results for Sn were established in [2]: under condition (5.1.9) one has

P(Sn � x) = 2[1 − Φ

(x√n

)]e−nΛ0

κ(x/n)(1 + o(1)) (5.1.15)

uniformly in x � σ1(n); that paper also contains a representation of the form(5.1.11) for P(Sn(−a) − an � x), a > 0.

Beyond the deviation zone x � σ1(n) = n1/(2−α)L1(n) one can identifytwo further zones, σ1(n) < x � σ2(n) and x > σ2(n), in which the asymp-totics of P(Sn � x) will be different. In the deviation zone x � σ2(n) =n1/(2−2α)L2(n) (the asymptotics of σ2(n) will be made more precise later on),the ‘maximum jump principle’ is valid, i.e. the principal contribution to large de-viations of Sn comes from ξn = maxj�n ξj , so that

P(Sn � x) ∼ P(ξn � x) = nV (x)(1 + o(1)

). (5.1.16)

The asymptotic representation (5.1.16) for x � n1/(2−2α) was obtained in [238,196, 227, 51]. Concerning results on the large deviations of Sn for distributions

Page 268: Asymptotic analysis of random walks

5.1 Introduction 237

satisfying (5.1.10), see also [206, 191] (these references also contain more com-plete bibliographies). The asymptotics of P(Sn � x) in cases when they co-incide with those for P(Sn � x) were established in [227]. In [238] theoremson the asymptotics of P(Sn � x) valid on the whole real line were considered.In particular, that paper gives the form of P(Sn � x) in the intermediate zonex ∈ (σ1(n), σ2(n)

)but it does this under conditions whose meaning is quite dif-

ficult to comprehend. The intermediate deviation zone σ1(n) < x < σ2(n) wasalso considered in [195]. The paper deals with the rather special case when thedistribution F has density

f(t) ∼ e−|t|α as |t| → ∞;

the asymptotics of P(Sn � x) were found there for α > 1/2 in the form of re-cursive relations, whence one cannot extract, in the general case, the asymptoticsof P(Sn � x) in explicit form.

Asymptotic representations for P(Sn � x) in the intermediate deviation zoneσ1(n) < x < σ2(n) and also in the zone x � σ2(n) were studied in [52].The following asymptotic representation for P

(Sn(a) � x

), a > 0, was estab-

lished in [178] (see also [275]): for all so-called strongly subexponential distribu-tions V (t), as x → ∞, for all n one has

P(Sn(a) � x) ∼ 1a

x+an∫x

V (u) du. (5.1.17)

D.A. Korshunov has communicated to us that the sufficient conditions from [178]for a distribution to be strongly subexponential will be satisfied for laws from theclass Se.

The content of the present chapter is, in many aspects, similar to that of Chap-ters 2–4. We will be using the same approaches as in the above chapters but in amodified form. Owing to more severe technical difficulties, the class of problemswill be narrowed: we will not deal with the probability that a trajectory will crossa given arbitrary boundary, although in principle there are no obstacles to derivingthis.

In § 5.8 we present integro-local and integral theorems for Sn that are valid onthe whole real line. They cover all deviation zones including boundary regions.In particular, they improve the integral theorems for Sn established in the previ-ous sections and the above-mentioned literature. The complexity of the techniqueused to prove these results, which were presented in the recent paper [73], is,however, somewhat beyond the level of the other material in the present mono-graph. This is why the respective theorems are presented in § 5.8 without proofs;the latter can be found in [73].

Page 269: Asymptotic analysis of random walks

238 Random walks with semiexponential jump distributions

5.2 Bounds for the distributions of Sn and Sn, and their consequences

5.2.1 Upper bounds for P(Sn � x)

Similarly to Chapter 4, introduce a function σ1 = σ1(n) that characterizes thezone of deviations x where the ‘normal’ asymptotics e−x2/2n and the ‘maximumjump’ asymptotics nV (x) will be approximately the same. More precisely, wewill define such deviations x as solutions to the equation

x2

2n= − lnnV (x). (5.2.1)

As we are only interested in the asymptotics of σ1(n), this is the same as thesolution to the equation

x2

2n= − lnV (x) = l(x).

It will somewhat be more convenient for us to consider instead the equation

x2 = nl(x), (5.2.2)

of which the solution differs from that of the original equation by a boundedfactor. It is not hard to see that, under the conditions of the present chapter, thefunction σ1 = σ1(n) will have the form

σ1(n) = n1/(2−α)L1(n), (5.2.3)

where L1(n) is an s.v.f. (see Theorem 1.1.4(v); note that σ1(n) denoted in § 5.1 aformally different but quite close quantity, see (5.4.13) on p. 253).

The deviation zone x � σ1(n) will be referred to as the Cramer’s zone; thezone σ1(n) < x � σ2(n), where σ2(n) is to be defined below, will be calledintermediate and the zone x > σ2(n) will be called the zone of validity of themaximum jump principle (the asymptotics of P(Sn � x) in this zone coincidewith those of P(ξn � x) ∼ nV (x)).

Put

w1(t) := −t−2 lnV (t) = t−2l(t) = tα−2L(t). (5.2.4)

One can assume, without loss of generality, that the function w1(t) is decreasing.Then the equation (5.2.2) can be rewritten as w1(x) = 1/n, and σ1(n) is simplythe value of w

(−1)1 , the function inverse to w1, at the point 1/n:

σ1(n) = w(−1)1 (1/n). (5.2.5)

It is not difficult to see that, if L satisfies the condition

L(tL1/(2−α)(t)

) ∼ L(t) as t → ∞, (5.2.6)

then w(−1)1 (u) has the form

w(−1)1 (u) ∼ u1/(α−2)L1/(2−α)

(u1/(α−2)

), (5.2.7)

Page 270: Asymptotic analysis of random walks

5.2 Bounds for the distributions of Sn and Sn 239

so that L1(n) ∼ L1/(2−α)(n1/(2−α)

)(see Theorem 1.1.4(v)).

Note that although condition (5.2.6) is quite broad it is not always satisfied. Forexample, it fails for the s.v.f. L(t) = exp{ln t/ ln ln t}.

Since the boundary σ1(n) of the Cramer deviation zone depends on n, it couldbe equivalently characterized by the inequalities

n � 1w1(x)

in the Cramer zone,

n <1

w1(x)in the intermediate zone.

Thus, deviations could be characterized both by the quantity

s1 :=x

σ1(n)

(cf. Chapter 4; s1 � 1 for the Cramer zone) and the quantity

π1 := π1(x) = nw1(x) = nxα−2L(x) (5.2.8)

(π1 � 1 for the Cramer zone); observe that, for a fixed s1,

π1(x) = nw1(s1σ(n)) ∼ nsα−21 w1(σ(n)) ∼ sα−2

1

as n → ∞. In some cases, it will be more convenient for us to use the charac-teriztic π1 (in which the argument x will often be omitted; when the argument ofπ1(·) is different from x, it will always be included).

As before, let

Bj = {ξj < y}, B =n⋂

j=1

Bj , P = P(Sn � x; B).

Theorem 5.2.1. Let condition [ · , <] with V ∈ Se be satisfied. Then there existsa constant c < ∞ (its explicit form could be obtained from the proof) such that,for any fixed h > 1, all n and all sufficiently large y,

P � c[nV (y)]r−π1(y)h/2, r =x

y, (5.2.9)

where π1 is defined in (5.2.8).If, for fixed h > 1 and ε > 0, one has π1h � 1 + ε then, for y = x and all

large enough n,

P � e−x2/2nh. (5.2.10)

If the deviation y is characterized by the relation y = sσ1(n) for a fixed s > 0then (5.2.9) holds true with π1(y) replaced by sα−2(1+o(1)). If y = x, s2−α < h

then the relation (5.2.10) is true.

Now we will give a few consequences of Theorem 5.2.1.Along with the function w1(t) (see (5.2.4)), we introduce the function

w2(t) := w1(t)l(t) = t−2l2(t) = t2α−2L2(t), (5.2.11)

Page 271: Asymptotic analysis of random walks

240 Random walks with semiexponential jump distributions

which will be assumed, like w1(t), to be monotonically decreasing, so that theinverse function w

(−1)2 (·) is defined for it. Set

σ2(n) := w(−1)2 (1/n) = n1/(2−2α)L2(n), (5.2.12)

where L2 is an s.v.f. that could be found explicitly under the additional assump-tion (5.2.6) (as was the case for L1).

Further, let r0 be the minimum solution to the equation

1 = r − π1h

2r2−α,

which always exists when π1h < 2α−1. Obviously,

r0 − 1 ∼ π1h

2as π1 → 0.

Here and in what follows, h > 1 is, as before, an arbitrary fixed number, whichcan be chosen to be arbitrarily close to 1.

Corollary 5.2.2.

(i) If π1h < 2α−1 then

P(Sn � x) � cnV (x/r0). (5.2.13)

If π1l(x) � c or, equivalently, x � c2σ2(n) then

P(Sn � x) � c1nV (x). (5.2.14)

(ii) If π1h � 2α−1 then

P(Sn � x) < cnV (x)1/2π1h. (5.2.15)

(iii) Let h > 1, ε > 0 be any fixed numbers. If π1h � 1 + ε then, for all largeenough n, one has

P(Sn � x) � e−x2/2nh = V (x)1/2π1h. (5.2.16)

Remark 5.2.3. As in Remark 4.1.5 (see also Corollaries 2.2.4 and 3.1.2), it isnot difficult to verify that there exists a function ε(t) ↓ 0 as t ↑ ∞ such that, forx = s2σ2(n),

supx: s2�t

P(Sn � x)nV (x)

� 1 + ε(t).

Proof of Theorem 5.2.1. The scheme of the proof remains the same as before.The main tool is again the basic inequality (2.1.8), in which one has to bound theintegral R(μ, y). The integral I1 (see (4.1.19)) from the representation R(μ, y) =I1+I2 admits the same upper bound as that obtained in the proof of Theorem 4.1.2with M(ε) = ε/μ. Set eε = h. Then (see (4.1.22))

I1 � 1 +μ2h

2. (5.2.17)

Page 272: Asymptotic analysis of random walks

5.2 Bounds for the distributions of Sn and Sn 241

An upper bound for

I2 =

y∫M(ε)

eμt F(dt) � V (M(ε))h + μ

y∫M(ε)

V (t)eμtdt (5.2.18)

(see (4.1.23)) will now be somewhat different. We will represent the last term asthe sum

μ

M∫M(ε)

ef(t)dt + μ

y∫M

ef(t)dt =: I2,1 + I2,2, f(t) := −l(t) + μt, (5.2.19)

where the quantity M will be chosen below, and study the properties of the func-tion f .

First assume for simplicity that the function L from the representation (5.1.2)is differentiable and, moreover, that

L′(t) = o(L(t)/t), l′(t) is decreasing. (5.2.20)

In this case,

l′(t) =αl(t)

t

(1 +

L′(t)tαL(t)

)∼ αl(t)

tas t → ∞.

Then the minimum of f(t) is attained at t0 = λ(μ), where λ(·) = (l′)(−1)(·) isthe function inverse to l′(·), so that

l′(λ(μ)) ≡ μ, λ(μ) = μ1/(α−1)L∗(μ), L∗(μ) is an s.v.f. as μ → 0.

Put

μ := vl′(y), (5.2.21)

where v > 1 is to be chosen later (more precisely, it will be convenient for usto choose μ, thereby specifying v as well). It is clear that λ(μ) < y for v > 1.Observe that, for v ≈ 1/α > 1, the value

f(y) = −l(y) + vl′(y)y ≈ l(y)(vα − 1)

could be made small, and so ef(y) becomes comparable with 1.Note also that

y ≡ λ(μ/v) ∼ v1/(1−α)λ(μ), v > 1. (5.2.22)

In the following argument we will keep in mind that v � 1 + ε, ε > 0. Let

M := γλ(μ),

where γ is some point from the interval (1, α1/(α−1)) (for example, its midpoint).Then, on the one hand, for t � M we have

f ′(t) � f ′(M) ∼ −γα−1l′(λ(μ)) + μ

∼ μ(1 − γα−1) = cμ, c > 0. (5.2.23)

Page 273: Asymptotic analysis of random walks

242 Random walks with semiexponential jump distributions

On the other hand , for brevity setting λ := λ(μ), one has

f(M) ∼ −l(γλ) + μγλ ∼ − l′(γλ)γλ

α+ μγλ

∼(

1 − γα−1

α

)μγλ, (5.2.24)

where γα−1 > α. Since by Theorem 1.1.4(ii)

μλ(μ) > μα/(α−1)+δ, l(M(ε)

)= l(ε/μ) > μ−α+δ

for any δ > 0 and all small enough μ, we see that the quantity I2,1 from (5.2.19)can be bounded, owing to the above-mentioned properties of the function f , asfollows:

I2,1 � μMe−μ−α+δ

= μγλ(μ)e−μ−α+δ

= o(μ2) (5.2.25)

as μ → 0. It is obvious that the term V (M(ε))h in (5.2.18) admits a bound of thesame kind.

To evaluate I2,2 in the case when y > M (i.e. when v1/(1−α) > γ), we will usethe inequality (5.2.23), which means that when computing the integral it sufficesto consider its part over a neighbourhood of the point y only. For t = y − u,u = o(y), we have, owing to (5.2.21), that

f(t) − f(y) = l(y) − l(y − u) − μu

= l′(y)u(1 + o(1)) − μu ∼ μu

(1v− 1

).

So, for U = o(y), U � μ−1,

μ

y∫y−U

ef(t)dt ∼ μef(y)

U∫0

eμu(1/v−1)du � ef(y) v

v − 1.

The integral∫ y−U

Mcan be evaluated in a similar way, with the result o

(ef(y)

).

Therefore,

I2,2 � vef(y)

v − 1(1 + o(1)). (5.2.26)

Now we can complete the bounding of the integral R(μ, y) in the basic in-equality (2.1.8). Collecting together (5.2.17), (5.2.18), (5.2.25) and (5.2.26), weobtain

R(μ, y) � 1 +μ2h

2(1 + o(1)) +

vef(y)

v − 1(1 + o(1)),

and hence

Rn(μ, y) � exp{

nhμ2

2(1 + o(1)) +

vn

v − 1ef(y)(1 + o(1))

}. (5.2.27)

Page 274: Asymptotic analysis of random walks

5.2 Bounds for the distributions of Sn and Sn 243

Put

μ :=1y

lnT, T :=r(1 − α)nV (y)

≡ c

nV (y)

and observe that (5.2.9) becomes trivial for π1(y) = nw1(y) > 2r/h (the right-hand side of this inequality will then be unboundedly increasing). For deviations y

such that π1(y) � 2r/h, one has n � c1y2−α+ε for any ε > 0 and therefore

μ =1y

lnT ∼ −1y

lnnV (y) ∼ l(y)y

∼ l′(y)α

.

This means that we have v ≈ 1/α > 1 in (5.2.21) and that all the assumptionsmade about μ and v hold true. Further, as in § 4.1, we find that

lnP � −μx +nhμ2

2(1 + o(1)) +

nV (y)1 − α

eμy(1 + o(1)), (5.2.28)

where, by virtue of our choice of μ, we have

nV (y)1 − α

eμy = r, −μx +nhμ2

2= (−r + ρ) ln T,

ρ :=nh lnT

2y2= − nh

2y2(lnnV (y) + ln c), c = r(1 − α).

Here (see (5.2.4))

− lnV (y)y2

= l(y)y−2 = w1(y) =π1(y)

n. (5.2.29)

Therefore, assuming for simplicity that cn � 1, we see that ρ � π1(y)h/2 andhence

P � c1[nV (y)]r−π1(y)h/2 (5.2.30)

(if cn < 1 then one should add o(1) to the exponent in (5.2.30) and then removethat summand by slightly increasing h). This proves the first part of the theorem.

Now we will consider the Cramer deviation zone, where π1 = nw1(x) can belarge. Here we put

y := x, μ :=x

nh,

so that

μ =xw1(x)

π1h=

l(x)xπ1h

∼ l′(x)απ1h

, v ∼ 1απ1h

(see (5.2.21)), and the condition v > 1, which we assumed after (5.2.21), couldfail for large π1. If v > γ1−α (or, equivalently, x = y > M ; see (5.2.22))then all the bounds for R(μ, y) obtained above will still hold and one will againobtain (5.2.27). If, however, v � γ1−α then I2,2 will disappear from the aboveconsiderations and likewise the last term in the exponent on the right-hand sideof (5.2.27) will disappear, too. In this case, we will immediately obtain the secondassertion of the theorem.

Page 275: Asymptotic analysis of random walks

244 Random walks with semiexponential jump distributions

Thus it only remains to consider the case (απ1h)−1 > γ1−α > 1, and for thiscase to bound the last term in the exponent in (5.2.27). For μ = y/nh = x/nh,the logarithm of that term is equal to

H = μy + lnnV (y) + O(1)

=x2

n

(1h− w1(y)n

)+ lnn + O(1) =

x2

n

(1h− π1

)+ lnn + O(1).

If π1h � 1 + ε and x2 � n lnn then H → −∞ as n → ∞. Now observe thatif n1/2 < x < n1/2+ε for a small enough ε > 0 then

π1 = nw1(x) → ∞,x2π1

n= x2w1(x) � lnn → ∞

and hence again H → −∞. Therefore, the last summand on the right-hand sideof (5.2.28) is negligibly small, and

lnP � − x2

2nh(1 + o(1)) + o(1);

the last term, o(1), could be removed by slightly increasing the value of h. Thetheorem is proved under the assumption (5.2.20).

If (5.2.20) does not hold then one should use the representation (5.1.3), whichimplies that the increments l(t+Δ)−l(t) (and it was the behaviour of these incre-ments that was important in the above analysis) behave in exactly the same way asunder the assumption (5.2.20), up to error terms o(1) that change neither the qual-itative conclusions nor the bounds for the integrals. The value l′(y) used in theabove considerations should be replaced by l(1)(y) := αl(y)/y (cf. (5.1.3)). Thenall the assertions in (5.2.21)–(5.2.30) will remain true. The theorem is proved.

Proof of Corollary 5.2.2. We have

P(Sn � x) � nV (y) + P � nV (y) + c[nV (y)]r−π1(y)h/2. (5.2.31)

Our aim is to choose y (or r = x/y) as close to the optimal value as possible.Observe that, for r comparable with 1, one has, as x → ∞,

π1(y) = π1(x/r) ∼ r2−απ1, l(y) ∼ r−αl(x).

Moreover, recall that we are considering deviations x >√

n, which correspondsto the situation lnn < 2 ln x � l(x). Therefore the factors n will play a sec-ondary role in (5.2.31), and so we can only consider the behaviour of the fac-tors V (y) (to the respective powers).

The logarithm of the second term on the right-hand side of (5.2.31) has theform (recall that π1 = π1(x))

ln[V (y)

]r−π1(y)h/2 = −l(x)(

r1−α − hπ1

2r2−2α

)(1 + o(1)), (5.2.32)

Page 276: Asymptotic analysis of random walks

5.2 Bounds for the distributions of Sn and Sn 245

where the right-hand side attains its minimum value,

−l(x)1

2π1h

(1 + o(1)

),

in the vicinity of the point r := (π1h)1/(α−1). For r = r,

lnV (y) = −l(x)(π1h)−α/(α−1)(1 + o(1)). (5.2.33)

Therefore, if r −α = (π1h)−α/(α−1) � (2π1h)−1 (or, equivalently, π1h � 2α−1)then the logarithm of the second term on the right-hand side of (5.2.31) will be atleast as large as (5.2.33), and we could choose r0 = r as the desired optimal valueof r. Moreover, since the power of n in this term is equal to (2π1h)−1 � 2−α < 1,we obtain from (5.2.31) that

P(Sn � x) � cnV (x)(1+o(1))/2π1h,

where the factor 1 + o(1) could be replaced by 1 on slightly changing the valueof h. This proves (5.2.15).

If π1h < 2α−1 then one should take r0 to be the value for which both terms onthe right-hand side of (5.2.31) are roughly equal, i.e. r0 is chosen as the minimumsolution to the equation

1 = r − π1h

2r2−α,

so that

r0 = 1 +π1h

2+ (2 − α)

(π1h

2

)2

+ · · ·

and r0 − 1 ∼ π1h/2 as π1 → 0. In this case,

P(Sn � x) � cnV (x/r0).

This proves (5.2.13). The inequality (5.2.14) is a consequence of (5.2.13). Indeed,setting 1/r0 = 1 − θ we obtain that θ = 1

2π1h(1 + o(1)) as π1 → 0,

V (x/r0) = exp{−l(x(1 − θ))

}= exp

{−l(x) + αθl(x)(1 + o(1))}

= exp{−l(x) +

α

2π1hl(x)(1 + o(1))

}. (5.2.34)

This implies (5.2.14).Finally, the assertion (5.2.16) follows from the first inequality in (5.2.31) with

x = y, the bound (5.2.10) and the fact that, for π1h � 1+ ε and x >√

n, one has

exp{− x2

2nh

}= exp

{− l(x)

2π1h

}� V (x) exp

{l(x)

1 + 2ε

2 + 2ε

}� nV (x).

The corollary is proved.

Page 277: Asymptotic analysis of random walks

246 Random walks with semiexponential jump distributions

Remark 5.2.4. Unfortunately, Theorem 5.2.1 does not enable one to obtain in-equalities of the form

P(Sn � x) � nV (x)(1 + o(1))

for x � σ2(n) (π1l(x) → 0 as x → ∞). Indeed, we will have the asymptoticequivalence V (y) ∼ V (x) in the main inequality (5.2.31) only if r ≡ x/y is ofthe form r = 1 + θ, where θl(x) → 0 as x → ∞ (cf. (5.2.34)). But, for such a θ,the second term on the second right-hand side of (5.2.31) will be asymptoticallyequivalent to cnV (x), so that the whole of this right-hand side will be asymptoti-cally equivalent to (1 + c)nV (x), c > 0. Bounds of that form have already beenobtained in (5.2.14), under the weaker assumption π1l(x) < c.

5.2.2 Lower bounds for P(Sn � x)

Lower bounds for P(Sn � x) will follow from Theorem 4.3.1, the assertion ofwhich is not related to the condition of regular variation of the tails P(ξ � t). Inparticular, the theorem implies that, for y = x + u

√n, as u → ∞,

P(Sn � x) � nF+(y)(1 + o(1)).

If condition [ · , =], V ∈ Se, is met then

V (y) = V (x + u√

n) = e−l(x+u√

n ),

where

l(x + u√

n ) − l(x) =αu

√nl(x)x

(1 + o(1)

)+ o(1) = o(1),

provided that n � x2/l2(x) = 1/w2(x) or, equivalently, that x � σ2(n)and u → ∞ slowly enough. Therefore, in the specified range of x-values, onehas V (y) ∼ V (x).

We have proved the following assertion.

Corollary 5.2.5. Let condition [ · , =], V ∈ Se, be satisfied. Then there exists afunction ε(t) ↓ 0 as t ↑ ∞ such that, for x = s2σ2(n),

P(Sn � x)nV (x)

� 1 − ε(s2).

In view of the absence of a similar opposite inequality (see Remark 5.2.4), onecannot derive from here the exact asymptotics

P(Sn � x) ∼ nV (x), P(Sn � x) ∼ nV (x)

in the zone x � σ2(n). These asymptotics will be obtained below, in Theo-rems 5.4.1 and 5.5.1.

Page 278: Asymptotic analysis of random walks

5.3 Bounds for the distribution of Sn(a) 247

5.3 Bounds for the distribution of Sn(a)

We will begin with upper bounds. As in Chapters 3 and 4, the main elements ofthese bounds are inequalities for

P (a, v) := P(Sn(a) � x;B(v)),

where

B(v) =n⋂

j=1

Bj(v), Bj(v) = {ξj < y + vj}.

Put

z := z(x) =x

αl(x)= o(x). (5.3.1)

Note that the value z(x) is an increment of x such that, for a fixed t, one hasl(x + zt) − l(x) = t(1 + o(1)) or, equivalently,

V (x + zt) ∼ e−tV (x). (5.3.2)

In situations where the argument of the function z(·) is different from x, it will beindicated explicitly.

Theorem 5.3.1. Suppose that condition [ · , <] with V ∈ Se is satisfied, and thatδ ∈ (0, 1), ε ∈ (0, 1), a > 0 are fixed. Then, for y � εx,

P (a, v) � cmin{zr+1(y), nr}V r(y), r =x

y. (5.3.3)

Note that the bound (5.3.3) in not unimprovable; the function zr+1(y) couldbe replaced by zr(y). This is a result of the use of the crude inequalities (5.3.8)in the proof of the theorem. Deriving an exact bound requires additional effort.However, the inequality (5.3.3) will prove to be sufficient for finding the exactasymptotics of P(Sn(a) � x).

Owing to the above-mentioned deficiency of the inequality (5.3.3), one cannotderive from it the following assertion, the proof of which will be constructed in adifferent way.

Theorem 5.3.2. Let condition [ · , <] with V ∈ Se be satisfied, and a > 0 be afixed number. Then

P(Sn(a) � x) � cmV (x), m := min{z(x), n}. (5.3.4)

To prove Theorem 5.3.1, we will need an auxiliary assertion. Set

S(l, r) :=n∑

j=1

jlV r(y + vj). (5.3.5)

Lemma 5.3.3. If V ∈ Se then

S(l, r) � cΓ(l + 1) min{

Al+1,nl+1

Γ(l + 2)

}V r(y), (5.3.6)

Page 279: Asymptotic analysis of random walks

248 Random walks with semiexponential jump distributions

where A = z(y)/rv and the constant c can be chosen arbitrarily close to 1 forlarge enough n.

Proof. Clearly S(l, r) � c I(l, r), where

I(l, r) :=

n∫0

tlV r(y + vt) dt =1

vl+1

nv∫0

ulV r(y + u) du.

For u � nv = o(y) we have

V r(y + u) = V r(y) exp{− ur

z(y)(1 + o(1))

}.

SinceA∫

0

tle−tdt � min{

Γ(l + 1),Al+1

l + 1

}we obtain

nv∫0

ulV r(y + u) du � V r(y)(

z(y)r

)l+1nvr/z(y)∫

0

tle−t(1+o(1))dt

� cV r(y)(

z(y)r

)l+1

min{

Γ(l + 1),(

nvr

z(y)

)l+1 1l + 1

}.

This bound, proving (5.3.6), will obviously remain valid for arbitrary nv as well.The lemma is proved.

Proof of Theorem 5.3.1. For n � z(y) we have

y1 := y + vn � y + vz(y) ∼ y, π1(y) = nw1(y) � z(y)y−2l(y) ∼ 1αy

,

and thereforex

y1− π1(y)h

2� x

y(1 + vz(y)/y)− h + o(1)

2αy

� r − rvz(y)y

+ O(1/y) = r + O(1/l(y)).

Hence, by Theorem 5.2.1,

P (a, v) � P(

Sn � x;n⋂

j=1

{ξj < y1 + vn})

� c[nV (y1)

]x/y1−π1(y)h/2 � c1

[nV (y1)

]r.

Now let n be arbitrary. First we bound the probability

P(Sn − an � x; B(v)

)� P

(Sn � x + an;

n⋂j=1

{ξj < y + vn})

.

Page 280: Asymptotic analysis of random walks

5.3 Bounds for the distribution of Sn(a) 249

We will use Theorem 5.2.1, there taking x to be x1 = x + an and y and r to bey1 = y + vn and r1 = x1/y1 respectively, so that

r1 � rx + an

x + a(1 − δ)nfor v � a(1 − δ)

r. (5.3.7)

By virtue of Theorem 5.2.1,

P(Sn − an � x; B(v)) � c[nV (y1)

]r1−hπ1(y1)/2,

where π1(y1) = nw1(y1) = nyα−21 L(y1) = o

(min{1, n/x}) for y � εx and

x → ∞. However,

r1 � r(1 + f(n/x)

),

where, owing to (5.3.7),

f(t) :=1 + at

1 + at(1 − δ)− 1 =

atδ

1 + at(1 − δ)� cmin{1, t}.

Hence, for all large enough x,

r1 − hπ1(y1)2

� r, P(Sn − an � x; B(v)) � c[nV (y + vn)

]r.

This leads to the bounds

P (a, v) �n∑

j=1

P(Sj − aj � x; B(v))

� cn∑

j=1

jrV r(y + vj) � c1

[min{z(y), n}]r+1

V r(y). (5.3.8)

The last inequality uses Lemma 5.3.3. Theorem 5.3.1 is proved.

Proof of Theorem 5.3.2. For n � z, the assertion of the theorem follows fromCorollary 5.2.2. Indeed, in this case

π1l(x) � zl2(x)x−2 ∼ l(x)αx

→ 0,

and therefore the conditions of the second assertion of Corollary 5.2.2(i) are sat-isfied. Hence

P(Sn(a) � x) � P(Sn � x) � cnV (x).

For n � z, we will make use of Corollary 7.5.4 below (see also [275]), whichstates that

P(S(a) � x) =

(1a

∞∫0

V (x + t) dt

)(1 + o(1)).

Therefore (see the proof of Lemma 5.3.3)

P(Sn(a) � x) � P(S(a) � x) � czV (x).

Page 281: Asymptotic analysis of random walks

250 Random walks with semiexponential jump distributions

The theorem is proved.

The assertion of Theorem 5.3.2 can also be obtained as a consequence of theresults of [178], where the relation (5.1.17) was established for so-called stronglysubexponential distributions. Sufficient conditions for a distribution to belong tothe class of strongly subexponential distributions will be met for any V ∈ Se.

As in § 5.2, lower bounds will follow from those of § 4.3.

5.4 Large deviations of the sums Sn

In this section we will obtain the first-order approximation for P(Sn � x) to-gether with a more detailed description of the asymptotics of this probability.

5.4.1 Preliminary considerations

We will introduce an additional smoothness condition on the r.v.f. l(t) = tαL(t)from (5.1.1), which will be needed for deriving asymptotic expansions. This con-dition is similar to condition [D(2,q)] from § 4.4.

[D] The s.v.f. L(t) is continuously differentiable for t � t0 and some t0 > 0,and, as t → ∞, one has L′(t) = o(L(t)/t) and, for Δ = o(t),

l(t + Δ) − l(t) = l′(t)Δ +α(α − 1)

2l(t)t2

Δ2(1 + o(1)

)+ o

(q(t)

), (5.4.1)

where q(t) � 1 is a non-increasing non-negative function.

Condition [D] with q(t) ≡ 0 will be met provided there exists

l′′(t) = α(α − 1)l(t)t2(1 + o(1)

). (5.4.2)

In this case,

l(t + Δ) − l(t) =

t+Δ∫t

(l′(t) +

v∫t

l′′(u)du

)dv

= l′(t)Δ +α(α − 1)

2l(t)t2

Δ2(1 + o(1)

). (5.4.3)

If a function l1(t) has a second derivative with the property (5.4.2) and if

l(t) = l1(t) + o(q(t)

), (5.4.4)

then condition (5.4.1) will be satisfied but with l(t) replaced on its right-handside by l1(t) and with the function q(t) from (5.4.4). In fact, it is this, more gen-eral, form of condition [D] that is actually required for refining the asymptoticsof P(Sn � x). However, the form of condition [D] presented above appearspreferable as it simplifies the exposition.

Page 282: Asymptotic analysis of random walks

5.4 Large deviations of the sums Sn 251

Condition [D] with q(t) ≡ 0 (see (5.4.2)) could be referred to as second-orderdifferentiability at infinity.

In the lattice (arithmetic) case, t and Δ are assumed to be integer-valued,whereas condition [D] takes this form:

[D] The s.v.f. L(t) is such that L(t + 1) − L(t) = o(L(t)/t

)as t → ∞ and

for Δ = o(t), Δ � 2, one has

l(t + Δ) − l(t) =[l(t + 1) − l(t)

]Δ +

α(α − 1)2

l(t)t2

Δ2(1 + o(1)) + o(q(t)

),

where q(t) � 1 is a non-increasing non-negative function.

All sufficiently smooth r.v.f.’s l(t) = tαL(t) will satisfy (5.4.2), and hencecondition [D] as well. For example, for L(t) = (ln t)γ we have

L′(t) =γ

t(ln t)γ−1 = o(L(t)/t), l′′(t) = α(α − 1) tα−2(ln t)γ(1 + o(1)).

Now consider conditions of another type. They will ensure that approximationsof the form (5.1.11), (5.1.15) hold true. Had the relations (5.1.9) or (5.1.10) beensatisfied, one would not need these new conditions. But the property V ∈ Se

(see (5.1.1)–(5.1.3)) does not imply (5.1.9) and even less does it imply the ex-cessive (concerning left tails) condition (5.1.10). There is little doubt, however,that (5.1.11) holds (possibly in a somewhat smaller deviation zone; see belowfor more details) only when conditions (5.1.1)–(5.1.3) are satisfied. Since such aresult, to the best of our knowledge, has not been obtained yet we can assume,along with (5.1.1)–(5.1.3), only what we need, namely, that the Cramer approx-imation (5.1.11) holds true. (For an approximation of the form (5.1.16), we usethe term maximum jump approximation).

In order to state the main assertions, we will discuss in some detail the char-acteristics of the Cramer deviation zone, the so-called intermediate zone and thezone where the maximum jump approximation holds true (the ‘extreme zone’).First consider the boundary of the Cramer deviation zone. Recall that we definedit as the value x = σ1 = σ1(n), for which the logarithmic Cramer approximation

lnP(Sn � x) ∼ − x2

2 n(5.4.5)

is of the same order of magnitude as the logarithmic maximum jump approxima-tion

lnP(Sn � x) ∼ lnn V (x).

In other words, we consider the equation

x2

2n= − lnnV (x),

or, equivalently from the viewpoint of the asymptotics of σ1(n), the equation

x2

2n= l(x). (5.4.6)

Page 283: Asymptotic analysis of random walks

252 Random walks with semiexponential jump distributions

It will, however, be convenient for us to amend further this equation for x = σ1

by removing the coefficient 1/2 from its left-hand side, which results in (5.2.2).Using the function

w1(t) = t−2l(t),

which we introduced in (5.2.4), we obtain a solution to the equation (5.2.2) in theform

σ1(n) = w(−1)1 (1/n), (5.4.7)

where w(−1)1 is the function inverse to w1. For simplicity, we assume that the

functions l(t) and w1(t) are continuous and monotone for t � t0 and some t0 > 0,so that one could write

w1(σ1(n)) =1n

for n >1

w1(t0)

(the transition to the general case merely complicates the notation somewhat).Clearly, the solution to (5.4.6) is equal to σ1(2n); it differs from σ1(n) only by aconstant factor, which is close to 21/(2−α) (see also (5.4.9) below). If the functionL(t) has the property

L(tL1/(2−α)(t)

) ∼ L(t) as t → ∞ (5.4.8)

(for example, all the powers (ln t)γ , γ > 0, and all functions varying even moreslowly will possess this property) then it is not hard to see that we will have (5.2.7)or, equivalently,

σ1(n) = n1/(2−α) L1(n), L1(n) ∼ L1/(2−α)(n1/(2−α)

). (5.4.9)

We will assign deviations x = s1σ1(n) with s1 � 1 to the Cramer deviationzone. When s1 → ∞, we will assign them either to the intermediate or to theextreme zones. One could equivalently characterize deviations using the quantity

π1 = π1(x) = nw1(x). (5.4.10)

It is obvious that if s1 = x/σ1(n) is fixed then

π1(x) = nw1(sσ1(n)) ∼ sα−21 , (5.4.11)

so that the deviations x belong to the Cramer zone if π1(x) > 1.Now observe that, under the conditions [ · , <] with V ∈ Se, Eξ = 0 and

E|ξ|b<∞ for b � κ + 1, κ = �1/(1 − α) + 1, we have the following ‘Cramerapproximation’ property:

[CA] The uniform approximation (5.1.11) holds in the zone

0 < x � σ+(n) := σ1(n)(1 − ε(n)), (5.4.12)

where ε(n) → 0 as n → ∞.

Indeed, if condition [ · , <] with V ∈ Se is met then, as we have already noted

Page 284: Asymptotic analysis of random walks

5.4 Large deviations of the sums Sn 253

in § 5.1, (5.1.9) holds for the function h(t) = l(t) − ln t. It is not difficult to seethat the function σ∗(n), which is the solution to the equation x2 = nh(x) andwhich specifies the zone where (5.1.11) takes place, has the form

σ∗(n) = σ1(n)(1 − ε(n)), (5.4.13)

where σ1(n) is defined in (5.4.7), ε(n) > 0 and, moreover, owing to the repre-sentation (5.1.3) we have

ε(n) ∼ lnn

(2 − α)2nα

2 − α↓ 0

as n → ∞, as required. It remains to observe that in the papers that established(5.1.11) (Lemma 1b of [238] and Theorem 2.1 of [206]), it was also assumed thatthe function h in (5.1.9) satisfies certain conditions that are met if V ∈ Se.

Thus, when V ∈ Se and (5.4.12) holds, the Cramer approximation [CA] al-ways holds true.

Note that for negative deviations one has

P(Sn < t) = Φ(

t√n

)(1 + o(1)), t < 0, (5.4.14)

uniformly in the zone

t � σ−(n) := −(1 − ε)√

(b − 2)n lnn, ε > 0. (5.4.15)

If condition (5.1.10) is met then clearly an approximation of the form (5.1.11)will also hold for P(Sn < −t).

Likewise, when studying the asymptotics of P(Sn � x) we will be using thefollowing property.

[CA] The uniform approximation (5.1.15) holds in the zone 0 < x � σ+(n) =σ1(n)(1 − ε(n)), where ε(n) → 0 as n → ∞.

It is not hard to establish, using an argument similar to that above and the resultsof [2], that [CA] always holds if the conditions [ · , <] with V ∈ Se and (5.1.10)are satisfied. As was the case with [CA], the expected result is that for [CA] tohold it suffices that condition [ · , <] with V ∈ Se is met.

Now we return to characterizing the deviation zones. Recall that in § 5.2 weintroduced the function

w2(t) = w1(t) l(t) = t−2l2(t) = t2α−2L2(t) (5.4.16)

(see (5.2.11)) and put

σ2(n) = w(−1)2 (1/n) = n1/(2−2α)L2(n). (5.4.17)

Under condition (5.4.8) the s.v.f. L2, as well as L1, can be found explicitly.It will be seen from the assertions to be presented below that the deviations

x = s2σ2(n) � σ1(n) will belong to the intermediate zone when s2 → 0 and tothe maximum jump approximation zone when s2 → ∞.

Page 285: Asymptotic analysis of random walks

254 Random walks with semiexponential jump distributions

One could equivalently characterize deviations x � σ1(n) using the quantity

π2(x) = π2 = nw2(x), (5.4.18)

assigning deviations to the intermediate zone if π2 → ∞.To state the main assertion, we will need some additional preliminary consid-

erations.Introduce the function

g2(t) := l(x − t) +t2

2n. (5.4.19)

Assume that the function L(t) (or, equivalently, the function l(t)) is continuouslydifferentiable for t � t0 > 0 and that l′(t) ∼ αl(t)/t as t → ∞. Then g2(t) willbe differentiable for t � x − t0.

If the deviations x belong to the intermediate zone, i.e. (see (5.4.10), (5.4.11))

π1 = nw1(x) = nx−2l(x) → 0, (5.4.20)

then, for any fixed ε > 0, the function g2(t) will attain its minimum on the in-terval (0, εx) at a point t∗ > 0, t∗ = o(x). Indeed, on the one hand, g′2(t) < 0for t � 0. On the other hand, for large enough x,

g′2(εx) = −l′(x(1 − ε)) +εx

n= −αxα−1(1 − ε)α−1L(x)(1 + o(1)) +

εx

n

=x

n

[ε − α(1 − ε)α−1π1(1 + o(1))

]> 0 (5.4.21)

by virtue of (5.4.20), which implies that t∗ = o(x).In what follows, along with l(x) an important role will be played by the func-

tion

z = z(x) :=1

l′(x)∼ x

αl(x)=

x1−α

αL(x). (5.4.22)

In terms of z, the property (5.4.20) can be rewritten as

n � xz. (5.4.23)

To find an approximation to t∗, observe that

d

dtl(x − t)

∣∣∣∣t=t∗

= −l′(x)(1 + o(1)),

and hence

g′2(t∗) = 0 = −l′(x)(1 + o(1)) +

t∗

n.

From here we find that

t∗ = n l′(x)(1 + o(1)) =n

z(1 + o(1)) = o(n). (5.4.24)

Page 286: Asymptotic analysis of random walks

5.4 Large deviations of the sums Sn 255

Now note that if t = o(n) then for the function Λκ from (5.1.12) we have

nΛκ

(t

n

)=

t2

2n(1 + o(1)), Λ′

κ

(t

n

)=

t

n(1 + o(1)). (5.4.25)

Therefore, all the above discussion (and, in particular, the relation (5.4.24)) willremain true if, in the definition (5.4.19) of g2(t), we substitute nΛκ(t/n) for thefunction t2/2n, i.e. if by t∗ we understand the point where the function

gκ(t) = gκ(t, x, n) := l(x − t) + nΛκ

(t

n

)(5.4.26)

attains its minimum value. Put

M = M(x, n) := mint

gκ(t, x, n). (5.4.27)

Clearly M � l(x) and, moreover, owing to (5.4.24) and (5.4.25), one has

M = l(x − t∗) + nΛκ

(t∗

n

)= l(x) − n

2(l′(x))2(1 + o(1))

= l(x) − nα2

2

(l(x)x

)2

(1 + o(1)) = l(x) − α2

2nw2(x)(1 + o(1))

= l(x)(

1 − α2

2nw1(x)(1 + o(1))

). (5.4.28)

Hence, if π2 = nw2(x) → 0 (i.e. the deviations belong to the maximum jumpapproximation zone) then

M = l(x) + o(1). (5.4.29)

If the deviations are in the intermediate zone (i.e. π1 = nw1(x) → 0) then

M = l(x)(1 + o(1)). (5.4.30)

If condition [D] holds with q(t) ≡ 0 (or (5.4.2) holds true) and κ = 2 thenwe can find more precise expressions for t∗ and M . In this case, for t = o(x),π1(x) = nw1(x) → 0 we have

g′2(t) = −l′(x − t) +t

n= −l′(x) +

t

n

[1 + α(α − 1)π1(x)(1 + o(1))

].

So the solution t∗ to the equation g′2(t) = 0 will have the form

t∗ = nl′(x)[1 − α(α − 1)π1(x)(1 + o(1))

].

Page 287: Asymptotic analysis of random walks

256 Random walks with semiexponential jump distributions

From this we find that

M = l(x − t∗) +(t∗)2

2n

= l(x) − t∗l′(x) +α(α − 1)

2l(x)x2

(t∗)2(1 + o(1)) +(t∗)2

2n

= l(x) − n(l′(x)

)2[1 − α(α − 1)π1(x)(1 + o(1))]

+α(α − 1)

2π1(x)n

(l′(x)

)2(1 + o(1))

+n(l′(x)

)22

− n(l′(x)

)2α(α − 1)π1(x)(1 + o(1)).

Putting, for brevity,

π∗2(x) := n

(l′(x)

)2 ∼ α2π2(x),

one obtains

M = l(x) − π∗2(x)2

+α(α − 1)

2π1(x)π∗

2(x)(1 + o(1))

= l(x) − π∗2(x)2

+α3(α − 1)

2π1(x)π2(x)(1 + o(1)). (5.4.31)

The case κ = 3 can be considered in a similar way but the resulting expressionswill be somewhat different.

5.4.2 Limit theorems for large deviations of Sn

Now we can state the main assertion.

Theorem 5.4.1. Let the conditions (5.1.7) and [ · , =] with V ∈ Se be satisfied.Then the following assertions hold true.

(i) If condition [D] holds with q(t) ≡ 1 then in the intermediate and extremedeviation zones one has

P(Sn � x) = ne−M (1 + ε(x, n)), (5.4.32)

where ε(x, n) = o(1) uniformly (see Remark 5.4.4 below) in the range ofvalues x, n such that

n → ∞, s1 =x

σ1(n)→ ∞.

The functions M = M(x, n), σ1(n) and σ2(n) are defined and describedin (5.4.27)–(5.4.31), (5.4.7), (5.4.17). The condition s1 → ∞ can be re-placed by π1 → 0.

(ii) Let n → ∞ and s2 = x/σ2(n) → ∞ (or, equivalently, π2 = nw2(x) → 0).Then, uniformly in x and n from that range,

P(Sn � x) = nV (x)(1 + o(1)). (5.4.33)

Page 288: Asymptotic analysis of random walks

5.4 Large deviations of the sums Sn 257

(iii) If condition [D] holds and s2 = x/σ2(n) → ∞ (or, equivalently, n = o(z2),where z = 1/l′(x) ∼ x/αl(x)), then

P(Sn � x) = n V (x)[1 +

n − 12z2

+(α − 1)(n − 1)

2xz(1 + o(1))

+ O((√

n/z)3)

+ o(q(x)

)], (5.4.34)

where the remainder terms are uniform in the range of x and n specified inpart (ii).

Remark 5.4.2. It will be seen below from the proof of the theorem that thevalue ne−M from (5.4.32), which gives the asymptotics of P(Sn � x) in theintermediate zone, is simply the main part of the convolution of the Cramer ap-proximation and the extreme zone approximation nV (x) (see part (ii) of the the-orem).

Remark 5.4.3. In the lattice case, we have a complete analogue of the assertions(5.4.32) and (5.4.34) for the values of x on the lattice.

Remark 5.4.4. The uniformity in part (i) of the theorem is understood in thefollowing sense. For any sequences n′ → ∞, s′1 → ∞ one can construct an ε′ =ε′(n′, s′1) → 0 such that in (5.4.32) one has ε(x, n) � ε′ for all n � n′, s1 � s′1.The uniformity of the remainder terms in the assertion (5.4.34) is to be understoodin a similar way.

Remark 5.4.5. It can be seen from (5.4.28) that

e−M = V (x)eα2π2(1+o(1))/2,

and hence in the case π2 → ∞ (i.e in the intermediate deviation zone) wehave e−M � V (x), so that the asymptotics (5.4.32) and nV (x) are quite dif-ferent. It follows from (5.4.30), however, that the ‘crude asymptotics’ (i.e. theasymptotics of lnP(Sn � x)) will coincide in the intermediate and extreme zones(when π1 → 0, see (5.4.28), (5.4.30)); then

lnP(Sn � x) = (1 + o(1)) lnnV (x) = (1 + o(1)) lnV (x).

Remark 5.4.6. When b > 3 one can obtain more complete asymptotic expansionsin part (ii) of the theorem.

Using Theorem 5.4.1, one can find, for example, the asymptotics of the dis-tribution tail of χ2

n := ζ21 + · · · + ζ2

n, where the i.i.d. r.v.’s ζi satisfy Cramer’scondition (see (5.1.6)).

To prove the theorem, we will need the following auxiliary assertion.

Lemma 5.4.7. Let the conditions (5.1.7) and [ · , =] with V ∈ Se be satisfied,

Page 289: Asymptotic analysis of random walks

258 Random walks with semiexponential jump distributions

and an r.v. ξ be independent of Sn. Then, for n → ∞ and x � σ+,

P(ξ + Sn � x, σ− � Sn < σ+) =1 + o(1)√

2πn

σ+∫σ−

V (x − t)e−nΛκ(t/n)dt,

(5.4.35)where σ± were defined in (5.4.12), (5.4.15).

Proof. First note that the function

Qn(x) :=[1 − Φ

(x√n

)]e−nΛκ(x/n)+x2/2n

on the right-hand side of (5.1.11) is differentiable and that

qn(x) := −Q′n(x) =

1√2πn

e−nΛκ(x/n)(1 + o(1))

as n → ∞ uniformly in [σ−, σ+]. Moreover, if points v1 < v2 from [σ−, σ+] aresuch that the integral

v2∫v1

qn(t) dt

is comparable with Qn(v1) (i.e. has the same order of magnitude) then we obtainfrom (5.1.11) that

P(Sn ∈ [v1, v2)

)=

v2∫v1

qn(t) dt (1 + o(1)).

For a given Δ > 0, partition the segment [σ−, σ+] into semi-open intervals ofthe form Δk := [uk−1, uk), where the uk are defined as solutions to the equations

Qn(uk) =12

e−Δk, k � 0,

Φ(

uk√n

)=

12

eΔk, k < 0,

and assume for simplicity that

N+ := − 1Δ

ln 2Qn(σ+), N− :=1Δ

ln 2Φ(

σ−√n

)< 0

are integers, so that u0 = 0, uN− = σ− and uN+ = σ+. (To make N+ integer-valued, it suffices to slightly decrease, if necessary, the value of σ+; a similarremark applies to N−.)

Since, for a fixed Δ > 0, the integral∫Δk

qn(t) dt = Qn(uk−1) − Qn(uk) = Qn(uk−1)(1 − e−Δ), 0 < k � N+,

Page 290: Asymptotic analysis of random walks

5.4 Large deviations of the sums Sn 259

is comparable with Qn(uk−1) it follows from the previous argument that

P(Sn ∈ Δk) =∫Δk

qn(t) dt (1 + o(1)). (5.4.36)

It is clear that the representation (5.4.36) remains valid for negative k > N− aswell.

Further, note that for large k one has |Δk| := uk−uk−1 = o(uk). This followsfrom the fact that, for uk � √

n,

k = − 1Δ

ln 2Qn(uk) ∼ u2k

2nΔ, uk ∼

√2nΔk.

As |Δk| = o(uk), we obtain (slightly decreasing the value of σ+ if needed) that(5.4.36) holds for the ‘boundary’ interval ΔN++1 = [uN+ , uN++1) as well. Thesame remark applies to ΔN− .

Now we can proceed to prove (5.4.35). Rewrite the probability on the left-handside of (5.4.35) as

p :=

σ+∫σ−

V (x − t)P(Sn ∈ dt) =N+∑

k=N−+1

∫Δk

V (x − t)P(Sn ∈ dt)

�N+∑

k=N−+1

V (x − uk)P(Sn ∈ Δk)

=N+∑

k=N−+1

V (x − uk)∫

Δk

qn(v) dv (1 + o(1)),

where the last equality holds by (5.4.36). By the definition of the quantities uk, theintegrals

∫Δk

over the intervals Δk of the function qn(v) will have the followingproperties:∫

Δk

= eΔ

∫Δk+1

for k � 1,

∫Δ0

=∫Δ1

,

∫Δk

= e−Δ

∫Δk+1

for k < 0.

Therefore, continuing the above chain of inequalities, we obtain

p � eΔ

N+∑k=N−+1

V (x − uk)∫

Δk+1

qn(v) dv (1 + o(1))

� eΔ

N++1∑k=N−+2

∫Δk

V (x − v)qn(v) dv (1 + o(1))

� eΔ

uN++1∫uN−

V (x − t)qn(t) dt (1 + o(1)).

Page 291: Asymptotic analysis of random walks

260 Random walks with semiexponential jump distributions

On the right-hand side we have the integral∫ σ++|ΔN++1|

σ−V (x−t)qn(t) dt, where,

as we observed above, |ΔN++1| = o(uN+) = o(σ+). The asymptotics of exactlythe same integral but for limits σ− and σ+, will be studied below (see (5.4.49)–(5.4.54)). From these computations it will follow in an obvious way that fordeviations x � σ+ the asymptotics in question are determined by an integralover a subinterval of (σ−, σ+) that is ‘far’ from the endpoints σ± and so will notdepend on small relative variations in the upper integration limit. This means that,as n → ∞ and x � σ+,

σ++|ΔN+ |∫σ−

∼σ+∫

σ−

and p � eΔ(1 + o(1))

σ+∫σ−

.

In exactly the same way one finds that

p � e−Δ(1 + o(1))

σ+∫σ−

.

Since Δ > 0 is arbitrary and p does not depend on Δ, it follows that

p = (1 + o(1))

σ+∫σ−

.

The lemma is proved.

Proof of Theorem 5.4.1. (i) Let

Gn := {Sn � x}, Bj := {ξj < y}, B =n⋂

j=1

Bj .

Then

P(Sn � x) = P(GnB) + P(GnB), (5.4.37)

where P = P(GnB) was bounded in Theorem 5.2.1:

P � c[nV (y)

]r−π1(y)h/2, r =

x

y, π1(y) = nw1(y). (5.4.38)

Since by assumption π1 = π1(x) → 0, we see that for y = δx, where δ ∈ (0, 1)is fixed, one has π1(y) → 0 and

V (y)r−π1(y)h/2 = V (y)r+o(1) = exp{−l(δx)(1/δ + o(1))

}= exp

{−l(x)(δα−1 + o(1))}

� V (x)1+γ′(5.4.39)

for any fixed γ′ � δα−1 − 1 > 0 and all large enough x. For x � √n the same

bound will clearly hold for P as well.

Page 292: Asymptotic analysis of random walks

5.4 Large deviations of the sums Sn 261

For the second term on the right-hand side of (5.4.37), we haven∑

j=1

P(GnBj) � P(GnB) �n∑

j=1

P(GnBj) −∑

i<j�n

P(GnBiBj)

�n∑

j=1

P(GnBj) − n(n − 1)2

[P(ξ1 � y)

]2, (5.4.40)

so that for y = δx, δ ∈ (0, 1), x � √n the following holds:

P(GnB) =n∑

j=1

P(GnBj) + O((nV (y))2

).

Therefore

P(Gn) =n∑

j=1

P(GnBj) + O(V 1+γ(x)

)(5.4.41)

for some γ ∈ (0, min{1, γ′}). Thus the main problem is now to evaluate theterms

P(GnBj) = P(GnBn) = P(Sn−1 + ξn � x, ξn � y)

= P(ξn � y, Sn−1 � x − y)

+ P(Sn−1 < x − y, Sn−1 + ξn � x). (5.4.42)

Here the first term in the sum is equal to

P(1) := V (y)P(Sn−1 � x − y).

From Corollary 5.2.2(i) we obtain that, for y = δx,

P(1) � cnV (δx)V(x(1 − δ)

(1 − 1

2π1(x(1 − δ))h

)). (5.4.43)

The following evaluations are insensitive to the asymptotics of L(x), so, for sim-plicity, we will put for the present L(x) ≡ 1. Then we obtain from (5.4.43) that

P(1) � cn exp{−(δx)α −

[x(1 − δ)

(1 − 1

2π1(x(1 − δ))h

)]α}

= cn exp{−xα[δα + (1 − δ)α + O(π1)]

}= cn exp

{−xα[1 + γ(δ) + O(π1)]}, (5.4.44)

where the function γ(δ) := δα + (1− δ)α − 1 is concave on [0, 1] and symmetricwith respect to the point δ = 1/2, γ(0) = 0, γ(1/2) = 21−α − 1 and γ′(0) = ∞.Hence γ(δ) > 0 for δ ∈ (0, 1), and, for any δ ∈ (0, 1/2), one has

γ(δ) > 2δ(21−α − 1).

In the general case, for an arbitrary s.v.f. L, we will have l(x) instead of xα on theright-hand side of (5.4.44), and the quantity γ(δ) will acquire a factor (1 + o(1)).

Page 293: Asymptotic analysis of random walks

262 Random walks with semiexponential jump distributions

As a result, the right-hand side of (5.4.44) admits an upper bound cn(V (x))1+γ ,γ > 2γ(δ)/3, which yields for x � √

n

nP(1) � (V (x))1+γ , γ >γ(δ)

2> 0. (5.4.45)

Now consider the second term on the right-hand side of (5.4.42):

P(2) := P(Sn−1 < x − y, Sn−1 + ξn � x)

= E[V (x − Sn−1); Sn−1 < x − y

]= E1 + E2 + E3, (5.4.46)

where, for σ− = −c√

n lnn (see (5.4.15) and also (5.4.12)),

E1 := E[V (x − Sn−1); Sn−1 < σ−

],

E2 := E[V (x − Sn−1); σ− < Sn−1 � σ+

], (5.4.47)

E3 := E[V (x − Sn−1); σ+ < Sn−1 � x(1 − δ)

].

We will begin by bounding E1. Since |σ−| � √n, by the central limit theorem

we have P(Sn < σ−) → 0 and therefore

E1 � V (x − σ−)P(Sn < σ−) = o(V (x)). (5.4.48)

Next consider E2. By Lemma 5.4.7 (to simplify the notation, we will replace n−1by n; this will change nothing in the asymptotics of E2),

E2 =1 + o(1)√

2πn

σ+∫σ−

V (x − t) e−nΛκ(t/n)dt =1 + o(1)√

2πn

σ+∫σ−

e−gκ(t)dt. (5.4.49)

We have already discussed the properties of the function

gκ(t) = l(x − t) + nΛκ(t/n)

(see pp. 254–256). For t = o(x), t = o(n) we have, by virtue of condition [D],that

l(x − t) = l(x − t∗) − (t − t∗)l′(x − t∗)

+α(α − 1)

2l(x − t∗)(x − t∗)2

(t − t∗)2(1 + o(1)) + o(1).

Here the last term, o(1), on the right-hand side could be omitted (i.e. one canassume that [D] holds with q ≡ 0), since eo(1) ∼ 1 and hence that term does notaffect the first-order asymptotics dealt with in part (i) of the theorem. Further,

t2

2n=

(t∗)2

2n+

t∗(t − t∗)n

+(t − t∗)2

2n.

It follows that, when π1 → 0 (i.e. when x−2l(x) = o(1/n)),

gκ(t) = gκ(t∗) +(t − t∗)2

n(1 + o(1)). (5.4.50)

Page 294: Asymptotic analysis of random walks

5.4 Large deviations of the sums Sn 263

Now we show that the minimum point t∗, together with its√

n-neighbourhood,lies inside the integration interval [σ−, σ+]. If s1 → ∞ so slowly that

L(s1 σ1(n)) ∼ L(σ1(n))

then, by virtue of (5.4.24),

t∗ ∼ αnl(x)x

= αnxw1(x) = αns1σ1(n)w1(s1σ1(n))

∼ αsα−11 σ1(n) = o(σ1(n)). (5.4.51)

If s1 → ∞ at a faster rate then it is even more obvious that

t∗ = o(σ1(n)) (5.4.52)

holds. Moreover, since σ+ ∼ σ1 we have

t∗ = o(σ+(n)). (5.4.53)

As√

n = o(σ+(n)), along with (5.4.53) the following also holds:

t∗ + c√

n � σ+.

Finally, it is evident that t∗ > 0 and√

n � |σ−|.The above, together with (5.4.49) and (5.4.50), enables one to conclude that

E2 = e−gκ(t∗)(1 + o(1)) = e−M (1 + o(1)). (5.4.54)

It remains to bound the quantity

E3 = E[V (x − Sn−1); σ+ � Sn−1 < x(1 − δ)

]= −

x(1−δ)∫σ+

V (x − u) dP(Sn−1 � u)

= − V (x − u)P(Sn−1 � u)∣∣∣∣x(1−δ)

σ+

+

x(1−δ)∫σ+

P(Sn−1 > u)V (x − u) l′(x − u) du.

Page 295: Asymptotic analysis of random walks

264 Random walks with semiexponential jump distributions

By virtue of Corollary 5.2.2 and the fact that l′(x) → 0 as x → ∞, we have

E3 � V (x − σ+)P(Sn−1 � σ+)

+

x(1−δ)∫σ+

V (x − u)l′(x − u)cnV

(u

(1 − π1(u)h

2

))du

� cnV (x − σ+)V(

σ+

(1 − π1(σ+)h

2

))

+ cn

⎡⎢⎣ x(1−δ)∫σ+

V (x − u)V(

u

(1 − π1(u)h

2

))du

⎤⎥⎦ o(1), (5.4.55)

where π1(u) � π1(σ+) ∼ π1(σ1) = 1. We have already estimated products ofthe form

V (x − u)V(

u

(1 − π1(u)h

2

)),

which are present in (5.4.55). This was done in (5.4.43) and (5.4.44), but for thecase where u is comparable with x whereas in (5.4.55) one could have u = o(x).In the latter case, for h � 4/3 and u � σ+,

V (x − u)V(

u

(1 − π1(u) h

2

))= exp

{−l(x − u) − l

(u

(1 − π1(u) h

2

))}� exp

{−l(x) +

αul(x)x

(1 + o(1)) − l

(u

(1 − h

2

))(1 + o(1))

}� exp

{−l(x) +

αul(x)x

(1 + o(1)) − 3−αl(u)(1 + o(1))}

,

where l(x)/x = o(l(u)/u) for u = o(x). Therefore, for such a u → ∞ one has

V (x − u)V(

u

(1 − π1(u) h

2

))� V (x)V (u)γ1

for some fixed γ1 > 0. From here and (5.4.55) it follows that, for large enough n,

E3 � V (x)V (σ1)γ (5.4.56)

for some γ ∈ (0, 1).Collecting together the relations (5.4.48), (5.4.54) and (5.4.56) and noticing

that V (x) � e−M , we obtain

P(2) = e−M (1 + o(1)).

This, together with (5.4.41) and (5.4.45), proves the first assertion of the theorem.

(ii) In this case, the bound for P(1) remains true, and the only difference will

Page 296: Asymptotic analysis of random walks

5.4 Large deviations of the sums Sn 265

be in how we evaluate P(2). Instead of (5.4.46), (5.4.47), one now needs to useanother partition of the integration range Sn−1 < x − y.

To simplify the exposition, we first assume that the function l(t) is differen-tiable and that

l′(t) ∼ αl(t)/t, t → ∞. (5.4.57)

Further, for z = 1/l′(x) ∼ x/αl(x), set

P(2) = E′1 + E′

2 + E′3,

where

E′1 := E

[V (x − Sn−1); Sn−1 < −z

],

E′2 := E

[V (x − Sn−1); |Sn−1| � z

], (5.4.58)

E′3 := E

[V (x − Sn−1); z < Sn−1 < x(1 − δ)

].

The quantity E′1 can be bounded using Chebyshev’s inequality:

E′1 � V (x + z)P(Sn−1 < −z) = V (x) o

(nb/2z−b

). (5.4.59)

Evaluating E′2 is quite trivial: since V (x+ v) = V (x) (1+ o(1)) for v = o(z),

0 < c1 < V (x+v)/V (x) < c2 < ∞ for |v| < z and z � √n by the assumptions

of the theorem, we have from the central limit theorem that

E′2 = V (x)(1 + o(1)). (5.4.60)

In order to bound E′3 we need to distinguish between the following two cases:

z � σ1 and z > σ1. In the second case, by virtue of Corollary 5.2.2(i) we find,cf. (5.4.55), that

E′3 = E

[V (x − Sn−1); z � Sn < x(1 − δ)

]� V (x − z)P(Sn−1 � z)

+

x(1−δ)∫z

V (x − u)l′(x − u)cnV

(u

(1 − π1(u)h

2

))du, (5.4.61)

where π1(u)h/2 � h/2 for u � z � σ1. Repeating the argument that fol-lows (5.4.55), we find that, for z � u � x(1 − δ),

V (x − u)V(

u

(1 − π1(u)h

2

))� V (x)V (y)γ1 , γ1 > 0,

and therefore

E′3 � V (x)V (z)γ , 0 < γ < 1. (5.4.62)

If z � σ1 then we split the integral E′3 into two parts,

E3,1 := E[V (x − Sn−1); z � Sn−1 < σ1

]

Page 297: Asymptotic analysis of random walks

266 Random walks with semiexponential jump distributions

and

E3,2 := E[V (x − Sn−1); σ1 � Sn−1 < x(1 − δ)

]. (5.4.63)

The integral E3,2 coincides with E′3 in the preceding considerations and therefore

admits an upper bound V (x)V (σ1)γ � V (x)V (z)γ , γ > 0.For E3,1 we obtain in a way similar to that used in our previous analysis, that

E3,1 = E[V (x − Sn−1); Sn−1 ∈ [z, σ1)

]= −

σ1∫z

V (x − u) dP(Sn−1 � u)

� V (x − z)P(Sn−1 � z) +

σ1∫z

P(Sn−1 � u)V (x − u)l′(x − u) du

� V (x − z)e−z2/2nh +

σ1∫z

V (x − u)l′(x − u)e−u2/2nhdu, (5.4.64)

where the last inequality follows from Corollary 5.2.2(ii). For u � σ1 = o(x) wehave

l(x − u) = l(x) − α l(x) u

x

(1 + o(1)

).

Observe that, for u � z ∼ x/αl(x) and π2(x) → 0, one has

nl(x)xu

� cn

(l(x)x

)2

= cπ2(x) → 0.

Hence for u � z and large enough x we obtain the inequality

V (x − u)eu2/2nh � V (x)e−u2/3nh,

so that E3,1 � V (x)e−z2/3nh and therefore

E′3 � V (x)

(V γ(z) + e−z2/3nh

). (5.4.65)

Since z2/n ∼ 1/α2π2(x) → ∞, collecting up the above bounds leads to therelation (5.4.33).

In the general case, when (5.4.57) does not necessarily hold, one should putz := x/αl(x) and then replace the integrals containing l′(x − u) du by sumsof integrals with respect to dul(x − u) over intervals of length Δ0z for a smallfixed Δ0 > 0; on these intervals the increments of the function l behave in thesame way as when (5.4.57) holds true (see (5.1.3)).

(iii) It remains to establish the asymptotic expansion (5.4.34). To this end, oneneeds to do a more detailed analysis of the integral E′

2. All the other boundsremain the same.

Recall that z � √n in the deviation zone under consideration. We have

E′2 = E

[e−l(x−Sn−1); |Sn−1| � z

], (5.4.66)

Page 298: Asymptotic analysis of random walks

5.4 Large deviations of the sums Sn 267

where, for S = o(x), owing to condition [D] (see (5.4.1)) we have

l(x − S) = l(x) − l′(x)S +α(α − 1)

2l(x)x2

S2(1 + o(1)) + o(q(x)

)or, equivalently,

l(x) − l(x − S) =S

z+

(1 − α)2

S2

xz(1 + o(1)) + o

(q(x)

). (5.4.67)

Clearly, for |S| � z,

|l(x − S) − l(x)| � c < ∞,

and the Taylor expansion of the function el(x)−l(x−S) in the powers of the differ-ence (l(x) − l(x − S)) yields

el(x)−l(x−S) = 1 +S

z+

(1 − α)2

S2

xz(1 + o(1)) + o

(q(x)

)+

S2

2z2+ O

( |S|3z3

),

(5.4.68)where the remainders o(1) and O(·) are uniform in |S| � z. Substituting thesum Sn−1 for S in (5.4.68) and using (5.4.67) and the fact that b = �1/(1−α) +2 � 3 and therefore E|ξ|3 < ∞ and E|Sn|3 = O(n3/2), we obtain

E′2 = V (x)

{E[1 +

Sn−1

z+

(1 − α)2

S2n−1

xz(1 + o(1))

+ o(q(x)

)+

S2n−1

2z2; |Sn−1| � z

]+ O(n3/2z−3)

}.

(5.4.69)

Next note that, for k � b,

Tk := E[|Sn−1|k; |Sn−1| > z

]= E

[ |Sn−1|b|Sn−1|b−k

; |Sn−1| > z

]� zk−bE|Sn−1|b = O

(zk−bnb/2

). (5.4.70)

Returning to (5.4.69) and observing that ESn−1 = 0, ES2n−1 = n − 1 and

nz−b = o(n3/2z−3), we have

E′2 = V (x)

[1 +

n − 12z2

+(1 − α)

2(n − 1)

xz(1 + o(1)) + o

(q(x)

)+ O

(n3/2

z3

)].

Now taking into account the bounds (5.4.59) and (5.4.65) for E′1 and E′

3 respec-tively, we obtain (5.4.34). The uniformity of the bounds claimed in the statementof Theorem 5.4.1 can be verified in an obvious way since, for all the terms o(1)and O(·), one can give explicit bounds in the form of functions of x and n. Thetheorem is proved.

Page 299: Asymptotic analysis of random walks

268 Random walks with semiexponential jump distributions

5.5 Large deviations of the maxima Sn

As noted, the asymptotics of P(Sn � x) in the zone of moderately large devia-tions x � σ+(n) ∼ σ1(n) (σ1 is defined in (5.4.7)) were studied in [2], where itwas established that under condition (5.1.10) one has the representation (5.1.15).In the present section, as in § 4, we will deal with deviations x � σ1(n).

We observe that condition (5.1.10) is somewhat excessive for (5.1.15) (cf. thediscussion in the previous section), and so we will simply assume that property[CA] (see p. 253) is satisfied. Recall that the expected result here is that condi-tion (5.1.9) (or [ · , <] with V ∈ Se) will imply [CA].

Theorem 5.5.1.

(i) Let the conditions (5.1.7), [ · , =] with V ∈ Se, [D] with q = 1 and [CA]be satisfied. Then, in the intermediate deviation zone, one has

P(Sn � x) = 2ne−M (1 + ε(x, n)), (5.5.1)

where ε(x, n) = o(1) uniformly (see Remark 5.4.4) for values of x and n

such that

n → ∞, s1 =x

σ1(n)→ ∞, s2 =

x

σ2(n)→ 0. (5.5.2)

The quantities M = M(x, n), σ1(n), σ2(n) are defined in (5.4.27)–(5.4.31),(5.4.7) and (5.4.17). The conditions s1 → ∞, s2 → 0 can be replacedby π1 → 0, π2 → ∞.

(ii) Let n → ∞ and s2 → ∞ (or, equivalently, π2 → 0). Then, uniformly in x

and n from that range,

P(Sn � x) = nV (x)(1 + o(1)). (5.5.3)

(iii) If condition [D] is satisfied and s2 → ∞ (or, equivalently, n = o(z2), wherez = 1/l′(x) ∼ x/αl(x)) then

P(Sn � x) = nV (x){

1 +1zn

n−1∑i=1

ESi

+1

2z2

[(n − 1)(n − 2)

2n+

1n

n−1∑j=1

ES 2j

]

×[1 +

(1 − α)zx

(1 + o(1))]

+ o(q(x)

)+ O

((√

n/z)3)}

, (5.5.4)

where the remainder terms are uniform in the range of values of x and n

stated in part (ii).

Page 300: Asymptotic analysis of random walks

5.5 Large deviations of the maxima Sn 269

Observe that here, owing to the invariance principle and the uniform integra-bility of Sn/

√n and

(Sn/

√n)2 (see Lemma 4.4.9 on p. 202), we have

ESj√

j∼√

, ES 2

j

j∼ 1, j → ∞. (5.5.5)

Hence Theorem 5.5.1 immediately implies the following.

Corollary 5.5.2. Under the conditions of Theorem 5.5.1(iii), as n → ∞,

P(Sn � x) = nV (x)[1 +

23/2

3√

π

√n

z(1 + o(1))

]. (5.5.6)

(Compare Corollary 4.5.2, where a similar (but not identical) representationwas obtained for V ∈ R.)

Remark 5.5.3. Remarks 5.4.2–5.4.6 following Theorem 5.4.1 remain valid in thissetup as well. Moreover, as already observed, condition [CA] in Theorem 5.5.1is likely to be superfluous.

Proof of Theorem 5.5.1. (i) Set Gn := {Sn � x}. Then, cf. (5.4.37),

P(Sn � x) = P(Gn B) + P(Gn B),

where B has the same meaning as before and P = P(Gn B) can be boundedwith the help of Corollary 5.2.2 in exactly the same way as in § 5.4 (see (5.4.38)and (5.4.39)):

P � cV 1+γ(x), γ > 0.

The relation (5.4.41) also remains valid, so that the main problem here consists inevaluating, for y = δx, the probabilities

P(GnBj) = P(GnBj{Sj−1 � x − y})

+ P(GnBj{Sj−1 � x − y}) =: P(1) + P(2). (5.5.7)

Owing to Corollary 5.2.2(i) the first term, P(1), satisfies

P(1) � cnV (δx) V((1 − δ)x

(1 − 1

2π1((1 − δ)x)h

)). (5.5.8)

We have already bounded such expressions in § 5.4 (see (5.4.43)–(5.4.45)):

nP(1) � V 1+γ(x), γ > 0. (5.5.9)

For the second term on the right-hand side of (5.5.7), we have

P(2) = P(ξj + Zj,n � x, ξj � y, Sj−1 < x − y

), (5.5.10)

where

Zj,n := Sj−1 + S(j)n−j , j = 0, . . . , n − 1,

Page 301: Asymptotic analysis of random walks

270 Random walks with semiexponential jump distributions

and S(j)n−j

d= Sn−j is independent of ξ1, . . . , ξj . Since

{Sj−1 < x − y} = {Sj−1 < x − y} ∪ {Sj−2 � x − y, Sj−1 < x − y},

one can again use the bounds (5.5.8), (5.5.9) to write

P(GnBj) = P(ξj � x − Zj,n, ξj � y, Sj−1 < x − y

)+ O

(V 1+γ(x)

)= E

[V (x − Zj,n); Sj−1 + S

(j)n−j � x − y

]+ O

(V 1+γ(x)

).

(5.5.11)

We have obtained the same expectation as in (5.4.46) but with Sn−1 replacedby Zj,n. If we again make use of the representation

E[V (x − Zj,n); Zj,n < x − y

]=: E1,j + E2,j + E3,j (5.5.12)

of the expectation as a sum of three integrals of the form (5.4.47) then the evalua-tions of the first and third integrals will simply repeat our computations from theprevious section, because

(1) Sj−1 � Zj,n

d� Sn−1 and

(2) there, for bounding the distribution of Sn−1 we actually used bounds forSn−1 (see Corollary 5.2.2).

Now consider

E2,j = E[V (x − Zj,n); σ− � Zj,n < σ+

], (5.5.13)

where σ± are the same as in (5.4.47) (i.e. they are given by (5.4.12) and (5.4.15)).To evaluate this integral, we need to know the asymptotics of P(Zj,n � t).

Lemma 5.5.4. Suppose that δn < j � (1 − δ) n and√

n � t < δ σ+ for somefixed δ ∈ (0, 1/2). Then

P(Zj,n � t) ∼ 2P(Sn−1 � t). (5.5.14)

Proof. Clearly,

P(Zj,n � t) =

t∫−∞

P(Sj−1 ∈ du)P(Sn−j � t − u) + P(Sj−1 � t).

Split the integral into two parts:

t∫−∞

=

σ−∫−∞

+

t∫σ−

=: I1 + I2,

where, by virtue of Corollary 5.2.2(iii),

I1 � P(Sj−1 < σ−)P(Sn−j � t − σ−) � e−t2/2(n−j)h. (5.5.15)

Page 302: Asymptotic analysis of random walks

5.5 Large deviations of the maxima Sn 271

Since |σ−| = o(σ+) and therefore

t − u � t − σ− < δσ+(n) (1 + o(1)) < σ+(δn) � σ+(n − j), (5.5.16)

the probability P(Sn−j � t − u) in the integrand of I2 can be estimated usingthe uniform approximation (5.1.15). Hence

I2 = 2

t∫σ−

P(Sj−1 ∈ du

)[1 − Φ

(t − u√n − j

)]

× exp{−(n − j)Λκ

(t − u√n − j

)+

(t − u)2

2(n − j)

}(1 + o(1))

= 2

t∫σ−

P(Sj−1 ∈ du)P(Sn−j � t − u)(1 + o(1)).

Bounding such integral but for limits −∞ and σ− (which again leads to thebound (5.5.15)) and observing that an integral of this kind from t to ∞ does notexceed P

(Sj−1 � t

), we obtain

I2 = 2

∞∫−∞

P(Sj−1 ∈ du)P(Sn−j � t − u)(1 + o(1))

+ O

(exp

{− t2

2(n − j)h

})− 2θ P(Sj−1 � t), 0 < θ � 1.

Therefore, using Corollary 5.2.2(iii) to bound P(Sj−1 � t), we have

P(Zj,n � t

)= 2P(Sn−1 � t)(1 + o(1))

+ O

(exp

{− t2

2(n − j)h

}+ exp

{− t2

2jh

}). (5.5.17)

As the value of h can be arbitrarily close to 1, we can always choose it in such away that ((1 − δ)h)−1 > 1 + δ/2. Then

t2

2(n − j)h− t2

2n>

δt2

4n.

The same inequality will hold for t2/2jh− t2/2n. Since t � √n, the remainder

term in (5.5.17) will be o(P(Sn−1 � t)

)and hence

P(Zj,n � t) = 2P(Sn−1 � t)(1 + o(1)). (5.5.18)

The lemma is proved.

We return to the evaluation of E2,j in (5.5.13). First we compute

E′2,j := E

[V (x − Zj,n); v

√n � Zj,n < δσ+

],

Page 303: Asymptotic analysis of random walks

272 Random walks with semiexponential jump distributions

where v → ∞ slowly enough. This problem is quite similar to that of calcu-lating E2, see (5.4.47), (5.4.49), because, according to (5.5.17), the asymptoticsof P(Zj,n � t) in the zone

√n � t < δσ+ coincide (up to the factor 2(1+o(1)))

with that of the probability P(Sn−1 � t), which determines the integral E2

in (5.4.47) and (5.4.49). Therefore, cf. Lemma 5.4.7, we have

E′2,j =

2(1 + o(1))√2πn

δσ+∫v√

n

V (x − t) e−nΛκ(t/n)dt. (5.5.19)

The evaluation of this integral repeats our calculations in (5.4.49)–(5.4.54), since(5.4.50) is still true and t∗ + c

√n � σ+. Moreover,

t∗ ∼ αnl(x)x

= α√

nπ2 � √n.

Consequently we obtain, as in (5.4.54), the following asymptotics for (5.5.19):

2e−M (1 + o(1)). (5.5.20)

The difference between E2,j and E′2,j can be estimated without difficulty using

the inequality

Sj−1 � Zj,n

d� Sn−1 (5.5.21)

and calculations which we have already done above, and so this part of the proofis left to the reader. The result is that for δn � j � (1− δ)n the integral E2,j alsohas the asymptotics (5.5.20).

Furthermore, owing to (5.5.21), for all j one has

E2,j � 2e−M (1 + o(1)).

Summing up the above results, we obtain

P(Gn) = (1 + o(1))n−1∑j=0

E[V (x − Zj,n); σ− � Zj,n < σ+

]+ o

(V (x)

)= (1 + o(1))

[2(n − 2δn) e−M + rn

],

where 0 � rn � 4δne−M (1+o(1)). Since δ > 0 can be chosen arbitrarily small,this implies the first assertion of Theorem 5.5.1.

(ii) Next we prove the asymptotics (5.5.3). As in the proof of Theorem 5.4.1(ii),we will need to split the integral

E = E[V (x − Zj,n); Zj,n < x − y

](5.5.22)

into three parts, in a way that differs from the representation (5.5.12). We write

Page 304: Asymptotic analysis of random walks

5.5 Large deviations of the maxima Sn 273

E = E′1,j + E′

2,j + E′3,j , cf. (5.4.58), where

E′1,j := E

[V (x − Zj,n); Zj,n < −z

],

E′2,j := E

[V (x − Zj,n); |Zj,n| � z

], (5.5.23)

E′3,j := E

[V (x − Zj,n); z < Zj,n < (1 − δ)x

].

Owing to (5.5.21), the quantities E′1,j , E′

3,j can be bounded using the same argu-ment as in the proof of Theorem 5.4.1 (see (5.4.59), (5.4.62) and (5.4.65)).

Since for s2 = x/σ2(n) → ∞ one has z � √n, and the main mass of the

distribution of Zj,n is concentrated in a√

n-neighbourhood of 0, we obtain asbefore (cf. (5.4.60)) that

E′2,j = V (x)(1 + o(1)).

This proves (5.5.3).

(iii) Now we will prove the asymptotic expansion (5.5.4). Again, the argumentis quite similar to previous calculations. Using (5.4.67) and (5.4.68), we find that

E′2,j = V (x)E

[1 +

Zj,n

z+

(1 − α)Z2j,n

2xz(1 + o(1)) +

Z2j,n

2z2

+ o(q(x)

)+ O

( |Zj,n|3z3

); |Zj,n| � z

]. (5.5.24)

Here we will need results that are somewhat more detailed than (5.4.70). SinceZj,n � Sj−1, the quantities

T−k,j := E

(|Zj,n|k; Zj,n < −z), k = 0, 1, 2,

I− := E(|Zj,n|3; Zj,n ∈ (−z, 0)

)can be bounded as before:

T−k,j � cnzk−b

b − k, I− = O

(n3/2

).

Further, since z � √n (again compare our previous calculations, see (5.4.70))

one has

T+k,j := E

(Zk

j,n; Zj,n > z)

� E(S k

n−1; Sn−1 > z)

= O(zk−bnb/2

),

I+ := E(Z3

j,n; Zj,n ∈ (0, z))

� ES 3n−1 � cn3/2.

Taking into account that

EZj,n = ESn−j−1, EZ2j,n = j − 1 + ES 2

n−j ,

Page 305: Asymptotic analysis of random walks

274 Random walks with semiexponential jump distributions

we finally obtain

E′2,j = V (x)

[1 +

ESn−j

z+

12z2

(j − 1 + ES 2

n−j

)(1 +

z

x(1 + o(1))

)+ o

(q(x)

)+ O

((√

n/z)3)]

.

Combining this with the previously derived estimates, we arrive at (5.5.4). Thetheorem is proved.

5.6 Large deviations of Sn(a) when a > 0

As already stated, the following asymptotic representation (a first-order approxi-mation) was proved in [178] (see also [275]) for the functional

Sn(a) = maxk�n

(Sk − ak)

in the case a > 0: for all n, as x → ∞,

P(Sn(a) � x) ∼ 1a

x+an∫x

V (u) du. (5.6.1)

This representation holds true for all so-called strongly subexponential distribu-tions F. It can be shown that the class Se of semiexponential distributions isembedded in the class of strongly subexponential distributions (for sufficient con-ditions for a distribution to belong to the latter, see [178]). In the present section,we will supplement (5.6.1) with asymptotic expansions for P(Sn(a) � x) thathold as x → ∞ uniformly in n = 1, 2, . . .

As before, let

z = z(x) =1

l′(x).

The argument of the function z(·) will only be shown when it is different from x.

Theorem 5.6.1. Let conditions (5.1.7) and [ · , =] with V ∈ Se be satisfied.Moreover, assume that a > 0 is fixed, condition [D] holds and xj = x + aj,

zj = z(xj), m = min{n, z}. Then, as x → ∞, uniformly in all n,

P(Sn(a) � x)

=n∑

j=1

V (xj)[1 +

ESn−j(a)zj

+1

2z2j

(j − 1 + ES 2

n−j(a))(

1 +(1 − α) zj

xj(1 + o(1))

)]+ V (x)

[O

(m5/2

z3

)+ o(mq(x))

]. (5.6.2)

Page 306: Asymptotic analysis of random walks

5.6 Large deviations of Sn(a) when a > 0 275

In particular,

P(Sn(a) � x) =z

aV (x)

(1 − e−an/z

)(1 + o(1)). (5.6.3)

Remark 5.6.2. Condition [D] is not needed for (5.6.3).

Remark 5.6.3. To simplify the right-hand side of (5.6.2), one could replace thesums by integrals (see Lemma 5.6.4 below) and use the relations

ESn−j(a) → ES(a) as n − j → ∞; xj ∼ x, zj ∼ z for j = o(x).

This, however, would introduce new errors due to the above approximations (seeCorollary 5.6.5 below).

The sums in (5.6.2) and the respective integrals can be computed using thefollowing lemma and its refinements.

For a fixed v > 0, put

S(k, r) :=n−1∑j=0

j kV r(x + vj). (5.6.4)

Then clearly

S(k, r) = I(k, r)(1 + εn) � cI(k, r), (5.6.5)

where εn → 0 as n → ∞ and

I(k, r) =

n∫0

tkV r(x + vt) dt =1

vk+1

nv∫0

ukV r(x + u) du. (5.6.6)

Lemma 5.6.4. Let V ∈ Se. Then the following assertions hold true.

(i)

S(0, 1) =z

vV (x)

(1 − e−nv/z

)(1 + o(1)),

S(1, 1) =z2

v2V (x)

(1 − e−nv/z − nv

ze−nv/z

)(1 + o(1)).

(5.6.7)

(ii) Under condition [D],

S(0, 1) =1v

x+(n−1/2)v∫x−1/2

V (u) du[1 + O(z−2) + o(q(x))

]

=1v

x+nv∫x

V (u) du(1 + O(z−1)

)(5.6.8)

Page 307: Asymptotic analysis of random walks

276 Random walks with semiexponential jump distributions

and, setting A := rvn/z,

I(k, r) =( z

rv

)k+1

V r(x)

rvn/z∫0

tke−t dt[1 + O

( z

x

)+ o(q(x))

]

= k!( z

rv

)k+1

V r(x)(

1 − e−Ak∑

j=0

Aj

j!

)[1 + O

( z

x

)+ o(q(x))

].

(5.6.9)

In particular,

I(0, 1) =z

vV (x)

(1 − e−vn/z

) [1 + O

( z

x

)+ o(q(x))

]. (5.6.10)

Proof. (i) For n � zxγ , α > γ > 0, and u � vn = o(x) one has V (x + u) =V (x)e−u/z(1+o(1))+o(1) owing to (5.1.3), and therefore

S(1, 1) =n−1∑j=0

jV (x + vj) = V (x)n−1∑j=0

je−vj/z(1+o(1))+o(1)

∼ V (x)

n∫0

te−vt/zdt = V (x)(z

v

)2nv/z∫0

ue−udu

= V (x)(z

v

)2 (1 − e−nv/z − nv

ze−nv/z

). (5.6.11)

For n > zxγ this result will clearly remain true, since∞∫

zxγ

uV (x + u) du = z2V (x) O(e−xγ)

.

The calculation of S(0, 1) is even simpler.

(ii) Now assume that [D] is satisfied. Then, for |u − xj | < c,

V (u) = V (xj) + (u− xj)V ′(xj) + O((u− xj)2V (xj)z−2

j

)+ o(V (xj)q(xj)

),

where

V ′(xj) ∼ −V (xj)zj

.

Therefore

V (xj) =1v

xj+v/2∫xj−v/2

V (u) du[1 + O

(z−2j

)+ o(q(xj))

], (5.6.12)

S(0, 1) =1v

x+v(n−1/2)∫x−v/2

V (u) du[1 + O(z−2) + o(q(x))

]. (5.6.13)

Page 308: Asymptotic analysis of random walks

5.6 Large deviations of Sn(a) when a > 0 277

This proves the first equality in (5.6.8).Now we prove the second equality in (5.6.8) and also (5.6.10). For n � z xγ ,

α > γ > 0, and u � n = o(x) one has

V (x + u) = V (x)e−u/z

[1 + O

(u2

xz

)+ o(q(x))

]. (5.6.14)

Making the change of variables u/z = t, we obtain

vI(0, 1) =

nv∫0

V (x + u) du = zV (x)

nv/z∫0

e−tdt

(1 + O

( t2z

x

)+ o(q(x))

)

= zV (x)(1 − e−nv/z

)[1 + O

( z

x

)+ o(q(x))

].

For n > zxγ this result will clearly remain true because

∞∫zxγ

V (x + u) du = zV (x)O(e−xγ )

.

This proves (5.6.10). Since∫ y

y−v/2V (u) du ∼ (v/2)V (y) as y → ∞ and

V (x) = O

(1z

∞∫x

V (u) du

),

the previous relation proves the second equality in (5.6.8) as well.In a similar way, we have

I(k, r) =

n∫0

tkV r(x + vt) dt

=zk+1

vk+1V r(x)

vn/z∫0

tke−trdt[1 + O

( z

x

)+ o(q(x))

].

SinceA∫

0

uke−udu = k![1 − e−A

k∑j=0

Aj

j!

],

the previous relation implies (5.6.9). The lemma is proved.

Note that, using the inequality

A∫0

tke−tdt � min{

k!,Ak+1

k + 1

},

Page 309: Asymptotic analysis of random walks

278 Random walks with semiexponential jump distributions

one obtains from Lemma 5.6.4 the bound

S(k, r) � k!V r(x) min{( z

rv

)k+1

,nk+1

(k + 1)!

}(1 + εn), (5.6.15)

where εn → 0 as n → ∞. In particular,

S(0, 1) � V (x) min{z

v, n}

(1 + εn). (5.6.16)

From Theorem 5.6.1 and Lemma 5.6.4 one can easily derive the following re-sult.

Corollary 5.6.5. As x → ∞, n → ∞, under condition [D] one has:

(i)

P(Sn(a) � x) =1a

x+(n−1/2)a∫x−a/2

V (u)du

[1 +

ES(a)z

+ o

(1z

)+ o(q(x))

];

(ii)

P(Sn(a) � x) =z

aV (x)

(1 − e−na/z

)[1 +

ES(a)z

+ o(q(x))]

+1

2a2V (x)

(1 − e−na/z − na

ze−na/z

)+ o(V (x)).

Assertion (i) follows from (5.6.2) (taking into account only the first two termsof the expansion) and (5.6.8), and (ii) follows from (5.6.7) and (5.6.2) (taking intoaccount the first three terms).

Proof of Theorem 5.6.1. Here we will put

Gn := {Sn(a) � x}, Bj(v) := {ξj < y + jv}, B(v) =n⋂

j=1

Bj(v).

As usual, we have

P(Gn) = P(GnB(v)) + P(GnB(v)).

A bound for the first term on the right-hand side was obtained in Theorem 5.3.1:for v � (1 − δ)/r with δ ∈ (0, 1) a fixed number,

P(Gn B(v)) � cmin{zr+1(y), nr}V r(y), (5.6.17)

where, as before, z(y) = 1/l′(y) ∼ y/αl(y) and r = x/y � 1. For the secondterm one has (cf. (5.4.40))

P(GnB(v)) =n∑

j=1

P(GnBj(v)) + O

([ n∑j=1

V (y + vj)]2)

,

Page 310: Asymptotic analysis of random walks

5.6 Large deviations of Sn(a) when a > 0 279

where, by virtue of Lemma 5.6.4 (see (5.6.16)),

n∑j=1

V (y + vj) � cmin{z(y), n}V (y).

So, for y = δx, [ n∑j=1

V (y + vj)]2

� cm2V 2(δx) < V 1+γ(x)

for sufficiently large x, where γ ∈ (0, 2δα − 1) can be chosen arbitrarily closeto 2δα − 1 > 0 when δ > 2−1/α.

Further, by (5.6.17) the probability P(GnB(v)) also admits an upper boundV 1+γ(x) with γ ∈ (0, δα−1 − 1). Summarizing, the above yields

P(Gn) =n∑

j=1

P(GnBj(v)) + O(V 1+γ(x)

), γ > 0. (5.6.18)

Now consider (again in a similar way to that used in the preceding exposition) therepresentation

P(GnBj(v)) = P(GnBj(v)

{Sj−1(a) � x − y

})+ P

(Gn Bj(v)

{Sj−1(a) < x − y

})=: P1,j + P2,j .

(5.6.19)

By Theorem 5.3.2 the first term on the right-hand side satisfies

P1,j � cmin{j, z(x − y)}V (x − y)V (y + jv)

� cV (yj)V (xj − yj) min{j, z(xj − yj)},where xj := x + jv and yj := y + jv. Again setting for simplicity L(x) ≡ 1 (asin (5.4.44)) we obtain

V (yj)V (xj − yj) = exp{−(ρα + (1 − ρ)α)xαj },

where ρ = ρj = yj/xj � δ, δ < 1/2. Furthermore, if j � x/v then we will alsohave ρ � (1 + δ)/2 < 1.

Recall that we studied the function γ(ρ) = ρα + (1 − ρ)α − 1 > 0 on p. 261(see (5.4.44) and (5.4.45)). As in those calculations, we obtain the inequalities

V (yj)V (xj − yj) � V 1+γ(xj), P1,j � V 1+γ(xj), γ > 0 for j � x

v.

Hence, by virtue of Lemma 5.6.4 (see (5.6.15)), for n � x/v,

n∑j=1

P1,j � S(0, 1 + γ) � cmin{z, n}V 1+γ(x). (5.6.20)

Page 311: Asymptotic analysis of random walks

280 Random walks with semiexponential jump distributions

However, if j � n0 = x/v then yj � yn0 = (1 + δ)x and

P1,j � cz((1 − δ)x)V (yj),

so thatn∑

j=n0

P1,j � cz2V ((1 + δ)x) � V 1+γ(x), γ > 0,

and therefore the inequality (5.6.20) holds for all n.Now consider the second term on the right-hand side of (5.6.19). For

Sj(a) := Sj − aj, Zj,n(a) := Sj−1 + S(j)n−j(a),

where S(j)n−j(a) d= Sn−j(a) is independent of ξ1, . . . , ξj , we obtain (cf. (5.5.10),

(5.5.11))

P2,j = P(Sn(a) � x, ξj � yj , Sj−1(a) < x − y

)= P

(Sj−1 + ξj − aj + S

(j)n−j(a) � x, ξj � yj , Sj−1(a) < x − y

)= P

(ξj � x − Zj,n(a) + aj, ξj � yj , Sj−1(a) < x − y

)+ O

(V 1+γ(x)

)= E

[V (x − Zj,n(a) + aj); Zj,n < x − yj + aj

]+ O

(V 1+γ(x)

).

(5.6.21)

As before, let

xj = x + ja, zj = z(xj). (5.6.22)

Then the principal part of (5.6.21) can be written in a form similar to (5.5.22),(5.5.23):

E(j) = E[V (xj − Zj,n(a)); Zj,n(a) < xj − yj

]= E1,j + E2,j + E3,j ,

where

E1,j := E[V (xj − Zj,n(a)); Zj,n(a) < −zj

],

E2,j := E[V (xj − Zj,n(a)); |Zj,n(a)| � zj

], (5.6.23)

E3,j := E[V (xj − Zj,n(a)); zj < Zj,n(a)

< xj − yj = (1 − δ) xj + (aδ − v)j].

To evaluate these expectations, we will need bounds for the distribution of Zj,n(a).Clearly

Sj−1 � Zj,n(a)d� Sj−1 + ζ, (5.6.24)

where ζd= S(a) is independent of Sj−1, and

P(ζ � x) � V (x) := czV (x) (5.6.25)

Page 312: Asymptotic analysis of random walks

5.6 Large deviations of Sn(a) when a > 0 281

(see Theorem 5.3.2 or (5.6.3) and Lemma 5.6.4). By virtue of the first inequalityin (5.6.24), the term E1,j admits the same bound as before (see (5.4.59)):

E1,j � V (xj)O(j3/2z−3

j

)(5.6.26)

(since b � 3, one can use Chebyshev’s inequality with exponent 3).In order to estimate E3,j , we will need bounds for the distribution of Sj + ζ

(see (5.6.24)). In what follows, by π1 = π1(x) we will understand the value (5.2.8)with n = j, i.e.

π1 = π1(x) = jl(x)x−2.

Recall that the relations π1(x) � 1 and x � σ1(j) are equivalent to each other.

Lemma 5.6.6. We have

Pj(x) := P(Sj + ζ � x) �{

c e−x2/2hj for x � σ1(j),

V ((1 − Δ)x) for x � σ1(j),(5.6.27)

where Δ = 12 π1(x)h(1 + o(1)) provided that π1(x) → 0, and Δ � Δ0 < 1 for

π1 � 1.

Corollary 5.6.7.

Pj(x) �{

c V (x)1/2π1h for π1 � 1,

V (x)(1−Δ)α

for π1 � 1,

where Δ has the properties described in Lemma 5.6.6.

Proof. The assertions of the corollary follow from the relations

e−x2/(2hj) = e−l(x)/(2π1h) = V (x)1/(2π1h)

and V ((1 − Δ)x) = V (x)(1−Δ)α(1+o(1)).

Proof of Lemma 5.6.6. First consider the case x � σ1(j) (i.e. π1 � 1). UsingCorollary 5.2.2(ii) and (5.6.25) and integrating by parts, we find that

Pj(x) =

∞∫0

P(ζ ∈ dt)P(Sj � x − t)

� P(ζ � x) +

x∫0

e−(x−t)2/(2jh)P(ζ ∈ dt)

= e−x2/(2jh) +

x∫0

e−(x−t)2/(2jh) x − t

jhP(ζ � t) dt

� e−x2/(2jh) + c

x∫0

e−(x−t)2/(2jh) x − t

jhz(t) e−l(t)dt (5.6.28)

Page 313: Asymptotic analysis of random walks

282 Random walks with semiexponential jump distributions

(in this calculation we have assumed for simplicity that the inequality (5.2.16) inCorollary 5.2.2 holds for all x � σ1(j). The changes one should make for x suchthat l(x) � 2 ln j are obvious, and are left to the reader).

In the integrand of the last integral in (5.6.28) one has

x − t

jhz(t) � x2

jhl(x)=

1π1(x)h

� 1h

,

− (x − t)2

2jh= − x2

2jh+ p(t), p(t) :=

xt

jh− t2

2jh.

After the change of variables t = ux, u ∈ (0, 1), the function p(t) − l(t) willhave the form

p(t) − l(t) =x2

jh

(u − u2

2

)−uαl(x)(1 + o(1))

= l(x)[u − u2/2π1(x)h

uα(1 + o(1))]

� l(x)h

[u − u2

2− huα(1 + o(1))

]� − l(x)

h(h − 1)uα(1 + o(1)) ∼ −h − 1

hl(t)

as t → ∞. Since for x � σ1(j) one has j/x → ∞ as x → ∞, the part of the lastintegral in (5.6.28) over the interval (j/x, x) does not exceed

e−x2/(2jh)

x∫j/x

e−(1−1/h)l(t)(1+o(1))dt = o(e−x2/(2jh)

).

Because p(t) � 1/h for t ∈ (0, j/x), the remaining part of that integral will notexceed

ce−x2/(2jh)

j/x∫0

e1/h−l(t)dt � c1e−x2/(2jh).

This proves the first inequality of the lemma.Now let x � σ1(j) (π1 � 1). Then, for δ ∈ (0, 1), we have by virtue of (5.6.25)

and Corollary 5.2.2(i) (assuming for simplicity that the inequality (5.2.13) holds

Page 314: Asymptotic analysis of random walks

5.6 Large deviations of Sn(a) when a > 0 283

for all x � σ1(j)) that

Pj(x) � P(Sj � x) +

x∫−∞

P(Sj ∈ dt) V (x − t)

� cjV

((1 − π1h

2

)x

)+ V (x) +

σ1(j)∫0

e−t2/(2jh)V (x − t)t

jhdt

+ cj

x∫σ1(j)

V

((1 − π1(t)h

2

)t

)V (x − t)l′(t) dt. (5.6.29)

Here the first two terms on the right-hand side can clearly be written, as x → ∞,in the form V ((1 − Δ)x), where Δ � π1h/2 and h ∈ (1, 2) can still be chosenarbitrarily close to 1 (this new value of h differs from that in (5.6.29)).

The third term (we will denote it by I3) is an analogue of the integral E2

in (5.4.49) (for n = j), where the function gκ(t) is replaced now by g2(t) =l(x − t) + t2/2jh and the power factor 1/

√j is replaced by tz(x − t)/j. If

π1 → 0 (s1 = x/σ1(j) → ∞) then, using an argument quite similar to thatin (5.4.50)–(5.4.54), we obtain for the third term the value

ct∗z(x − t∗)j−1/2e−g2(t∗),

where, as in (5.4.24) and (5.4.51), for the maximum point t∗ of the integrand onehas t∗ ∼ αjl(x)h/x = απ1hx. But

g2(t∗) = l(x − t∗) +(t∗)2

2jh

= l(x) − 12

α2hj l2(x)x2

(1 + o(1))

= l(x)[1 − 1

2α2π1h(1 + o(1))

].

Summarizing, we have obtained that the third term on the right-hand side of(5.6.29) also admits a bound of the form

I3 � V ((1 − Δ)x), Δ � π1h

2= o(1) as π1 → 0. (5.6.30)

If, however, π1 � π0 > 0 then x � π1/(α−2)0 σ1(j). In this case, we consider

the following two alternatives: t∗ � σ1/2 and t∗ > σ1/2. In the former, one hasI3 � σ1V (σ1/2); in the latter,

I3 � σ1e−σ2

1/8jh = σ1e−l(σ1)/8π1h = σ1V

1/8π1h(σ1).

Since σ1 � xπ1/(2−α)0 , in both cases

I3 � cxV γ1(x) � V γ(x)

Page 315: Asymptotic analysis of random walks

284 Random walks with semiexponential jump distributions

for some fixed γ > 0 and large enough j (or x). Thus, the first relation in (5.6.30)holds for all π1 � 1 and Δ � Δ0 < 1.

It remains to bound the last integral in (5.6.29). We have already estimatedthe part

∫ (1−δ)x

σ1of this integral (see the bound for E3 in (5.4.55), (5.4.56)); this

gives an upper bound V (x)V γ(σ1). Estimating the remaining part∫ x

(1−δ)xof the

integral is also not difficult. One can easily see that it is equivalent to

cjV

(x

(1− π1 h

2

))l′(x)

∞∫0

V (u) du = V ((1−Δ)x), Δ =π1h

2(1 + o(1)),

where Δ = o(1) as π1 → 0 and Δ � Δ0 < 1 for π1 � 1.The lemma is proved.

Now we can continue evaluating the terms in (5.6.23). If we assume for sim-plicity that v � aδ then the form of E3,j in (5.6.23) is almost the same as thatof E′

3 in (5.4.61) (or E3,j in (5.5.23)):

E3,j � E[V (xj − Zj,n(a)); zj � Zj,n(a) < (1 − δ)xj

],

where xj = x+aj, zj = z(xj). However, here we cannot directly use the resultsof § 5.4 (see (5.4.65)), since in (5.4.65) we employed the condition π2 → 0.To bound E3,j we will consider, as in § 5.4, the two alternatives, zj � σ1(j)and zj > σ1(j). If zj > σ1(j) then we can basically repeat the computationsfrom (5.4.61), (5.4.62); this yields

E3,j � V (xj)V (zj)γ , γ > 0. (5.6.31)

If zj � σ1(j) then E3,j , like E′3 in (5.4.63), should be split into two parts:

E3,j,1 := E[V (xj − Zj,n(a)); zj � Zj,n(a) < σ1(j)

]and

E3,j,2 := E[V(xj − Zj,n(a)

); σ1(j) � Zj,n(a) < (1 − δ)xj

].

Since V (zj) � V (σ1(j)), the integral E3,j,2 can clearly be bounded by the right-hand side of (5.6.31). So we only have to bound E3,j,1. Using an argument similarto that in § 5.4 (cf. (5.4.64)), we obtain from Lemma 5.6.6 that

E3,j,1 � V (xj − zj) e−z2j /(2jh) +

σ1(j)∫zj

V (xj − u) l′(xj − u) e−u2/(2jh) du.

Since σ1(j) = o(xj), we have

l(xj − u) = l(xj) − u

zj(1 + o(1)), u ∈ [zj , σ1(j)],

Page 316: Asymptotic analysis of random walks

5.6 Large deviations of Sn(a) when a > 0 285

and the integrand in the last integral will not exceed

V (xj) exp{

u

zj(1 + o(1)) − u2

2jh

}. (5.6.32)

First let j � xθ, where 1 − α < θ < min{1, 2 − 2α}. Then the ratio of theterms in the exponent in (5.6.32) is of order

uzj

j�

z2j

j� (x + j)2

xθl2(x + j)∼ x2−θ

l2(x)→ ∞,

since 2 − θ > 2α. This means that the second term in the exponent in (5.6.32)will dominate, and so the value of (5.6.32), for large enough x, will not exceed

V (xj) e−z2j /3jh.

Thus, for j � xθ we have an inequality similar to (5.4.65):

E3,j � V (xj)(V γ(zj) + e−z2

j /3jh), γ > 0. (5.6.33)

If j > xθ then for E3,j,1 we will use the obvious bound

E3,j,1 � V (xj − σ1(j)). (5.6.34)

To estimate E3,j,2, one should use the second inequality in Lemma 5.6.6, whichimplies that, for v � σ1(j),

P(Zj,n(a) � v) � Pj(v) � V ((1 − Δ)v), Δ = o(1) as π1(j) → 0.

Using this inequality, one can bound E3,j,2 in exactly the same way as in ourestimation of the integral E3 in (5.4.55) and (5.4.56), which yields the boundE3,j,2 � V (xj)V γ(σ1(j)) for some γ > 0. Therefore, by virtue of (5.6.34) wecan use the crude inequality

E3,j � V (xj − σ1(j)). (5.6.35)

Now we evaluate E2,1 in (5.6.23). Again making use of expansions of theform (5.4.3), (5.4.67), (5.4.68), we obtain, cf. (5.4.69):

E2,j = V (xj)E[1 +

Zj,n(a)zj

+Z2

j,n(a)2z2

j

+(1 − α)Z2

j,n(a)2xjzj

(1 + o(1)) + o(q(xj)

); |Zj,n(a)| � zj

]= V (xj)

[1 +

EZj,n(a)zj

+EZ2

j,n(a)2z2

j

(1 +

(1 − α) zj

xj(1 + o(1))

)+ Rn,j

], (5.6.36)

Page 317: Asymptotic analysis of random walks

286 Random walks with semiexponential jump distributions

where

|Rn,j | � R(0)j + R

(1)j + R

(2)j + R

(3)j + o

(q(xj)

),

R(k)j � c

zkj

E[|Zj,n(a)|k; |Zj,n(a)| > zj

], k = 0, 1, 2, (5.6.37)

R(3)j � z−3

j E[|Zj,n(a)|3; |Zj,n(a)| � zj

].

Since the r.v. ζ has finite moments of all orders we see that, owing to the inequality(5.6.24), the moments of Zj,n(a) over the set {|Zj,n(a)| > zj} admit the samebounds as the moments of Sj over the set {|Sj | > zj}. Hence one can bound thequantities R

(k)j , k = 0, 1, 2, in exactly the same way as Tk in (5.4.70):

R(k)j � j3/2z−3

j , k = 0, 1, 2; (5.6.38)

for k = 3 we have the same bound,

R(3)j � z−3

j E|Zj,n(a)|3 � cj3/2z−3j . (5.6.39)

Hence the total remainder term in the sum∑

j E2,j , by virtue of (5.6.36)–(5.6.39)and Lemma 5.6.4 (it is obvious that (5.6.5) and the first relation in (5.6.9) remaintrue for fractional k � 0 as well), will not exceed

n∑j=1

V (xj)|Rn,j | � c

[n∑

j=1

j3/2

z3j

V (xj) + o

( n∑j=1

q(xj)V (xj))]

� c1V (x)[m2

z3+

m5/2

z3+ o

(mq(x)

)]� c2V (x)

[m5/2

z3+ o

(mq(x)

)], (5.6.40)

m = min{z, n}. To obtain the last two relations in (5.6.40), we used the mono-tonicity of q(t) and the inequalities

n∑j=1

q(xj)V (xj) � q(x)n∑

j=1

V (xj) � cmq(x)V (x).

Since, furthermore,

EZj,n(a) = ESn−j(a), EZ2j,n(a) = j − 1 + ES 2

n−j(a),

the sum∑n

j=1 E2,j has exactly the same form as the right-hand side of (5.6.2).To complete the proof of the theorem, it suffices to show that the contribu-

tions of all the remaining terms in the representation for P(Gn), which we obtainfrom (5.6.18), (5.6.19), (5.6.21) and (5.6.23), will be ‘absorbed’ by the remainderterms on the right-hand side of (5.6.2).

For the sum∑n

j=1 P1,j , this follows from (5.6.20). That the remainder termO(V 1+γ(x)) from (5.6.21) is negligible is obvious. Further, comparison of the

Page 318: Asymptotic analysis of random walks

5.7 Large deviations of Sn(−a) when a > 0 287

bounds (5.6.26) and (5.6.38) shows that the total contribution of the terms E1,j

does not exceed the right-hand side of the inequality (5.6.40), which boundsthe total remainder term for

∑nj=1 E2,j . Similarly, a comparison of the bounds

(5.6.33) and (5.6.38) shows that the same assertion is true for the total contribu-tion from the terms E3,j , j � mθ := min{n, xθ}. Finally, the contribution of theterms E3,j with j > mθ is bounded, by virtue of (5.6.35) and Lemma 5.6.4, bythe sum ∑

j>mθ

E3,j �∑

j>mθ

V (x + aj − σ1(j)) �∑

j>mθ

V (x + aj/2)

� czV (x + axθ/2) � V (x) e−xψ

, ψ > 0.

Here the last inequality follows from the relation

l

(x +

axθ

2

)= l(x)

[1 +

αaxθ−1

2(1 + o(1))

],

where, since θ > 1 − α, one has

αaxθ−1

2l(x) � xγ′

, γ′ > 0.

Theorem 5.6.1 is proved.

Observe that the monotonicity of q(t), assumed in condition [D], was usedonly in (5.6.40). We have not needed it in the theorems of §§ 5.4 and 5.5.

5.7 Large deviations of Sn(−a) when a > 0

In this section, we will study the asymptotics of the probability

P(Sn(−a) − an � x) as x → ∞, (5.7.1)

where

Sn(−a) = maxk�n

(Sk + ak), a > 0, Eξ = 0.

Possible approaches to solving this problem were mentioned in § 3.6. One ap-proach is based on the observation that for a > 0 the r.v. Sn(−a) will, in a certainsense, ‘almost coincide’ with Sn + an. More precisely,

Sn � Sn(−a) − an = Sn + ζn, (5.7.2)

where, for ξi(−a) = ξi + a and Sn(−a) = Sn + an, the r.v.

ζn := max{0,−ξn(−a), −ξn(−a) − ξn−1(−a), . . . , −Sn(−a)

}(5.7.3)

has the same distribution as

maxk�n

(−Sk(−a)) = −mink�n

Sk(−a)

Page 319: Asymptotic analysis of random walks

288 Random walks with semiexponential jump distributions

and is dominated in distribution by the proper r.v.

ζ := supk�0

(−Sk(−a)) = − infk�0

(Sk + ak); (5.7.4)

moreover,

Eζb−1 < ∞ if E|ξj |b < ∞. (5.7.5)

However, evaluating the large deviation probabilities in question on the basisof (5.7.2) is difficult and inconvenient for at least two reasons:

(1) the r.v.’s ζn and Sn are dependent;(2) bounds for probabilities of large deviations of ζn require additional conditions

on the left distribution tails of ξi, and these conditions prove to be superfluousfor the problems we consider here.

Another approach uses the cruder inequalities

Sn � Sn(−a) − an � Sn. (5.7.6)

These inequalities and results already derived give the ‘correct’ asymptotics of(5.7.1) in the zone x � σ2(n) and an ‘almost correct’ asymptotics (up to a fac-tor 2) in the intermediate zone σ1(n) � x � σ2(n).

In this section we will use the same approach as in §§ 5.2–5.4. We will as-sume that condition [D] is satisfied, and, for the intermediate deviation zone, thefollowing condition also (cf. conditions [CA], [CA]).

[CA∗] The uniform approximation given by the right-hand side of the rela-tion (5.1.11) holds for P(Sn(−a) − an � x) in the zone

0 < x < σ+(n) = σ1(n) (1 − ε(n)),

where ε(n) → 0 as n → ∞.

It was established in [2] that condition [CA∗] will hold provided that (5.1.10)is satisfied. As before, it is likely that, under conditions (5.1.1), (5.1.2) and forlarge enough b > 2 (see (5.1.7)), condition [CA∗] will be redundant.

Following the same exposition path as in §§ 5.2–5.4, it will be noticed that,owing to inequality (5.7.2), the exposition undergoes no serious changes. So wewill present below only a sketch of the proof of the following main assertion.

Theorem 5.7.1. Let the conditions (5.1.7) and [ · , =] with V ∈ Se be satisfied.Then the following statements hold true.

(i) The assertions of Theorem 5.4.1(i), (ii) remain true if one substitutes in themSn(−a) − an for Sn and condition [CA∗] for [CA].

Page 320: Asymptotic analysis of random walks

5.7 Large deviations of Sn(−a) when a > 0 289

(ii) Let condition [D] hold and s2 = x/σ2(n) → ∞ (i.e. π2 = nw2(x) → 0).Then, uniformly in x and n from that range,

P(Sn(−a) − an � x)

= nV (x)[1 +

1zn

n∑j=1

Eζj +n − 12z2

+(1 − α)(n − 1)

2xz(1 + o(1)) + o

(√n

z2

)+ O

(n3/2

z3

)+ o(q(x))

]= nV (x)

[1 +

z+

n

2z2+

(1 − α)n2xz

(1 + o(1))

+ O

(n3/2

z3

)+ o(z−1 + q(x))

], (5.7.7)

where ζj and ζ were defined in (5.7.3), (5.7.4).

Here one also has analogues of Remarks 5.4.3–5.4.6, which followed Theo-rem 5.4.1. Comparison with Theorem 5.4.1 shows that the asymptotic expansionsfor the probabilities P(Sn � x) and P

(Sn(−an) − an � x

)differ even in the

first term, provided that z � n (x � n1/(1−α)).

Proof. The argument proving the first part of the theorem does not deviate muchfrom the corresponding argument in § 5.4. As in the previous situation, the maincontribution to the asymptotics of P(Gn), Gn := {Sn(−a) − an � x}, comesfrom the summands (cf. (5.4.41), (5.4.42), (5.5.7))

P(Gn Bj {Sj−1(−a) − a(j − 1) < x − y}

), Bj = {ξj � y},

which, up to the respective higher-order correction terms (owing to (5.7.2), (5.7.6),all the bounds we used before remain valid), could be written as

P(Sj−1 + ξj + S

(j)n−j(−a) − a(n − j) � x,

Sj−1 + S(j)n−j(−a) − a(n − j) < x − y

)= P

(ξj + Zj,n � x, Zj,n < x − y

)= E

[V (x − Zj,n); Zj,n < x − y

],

(5.7.8)

where

Zj,n := Sj−1 + S(j)n−j(−a) − a(n − j)

and the r.v. S(j)n−j(−a) d= Sn−j(−a) is independent of ξ1, . . . , ξj . By an argument

similar to that in § 5.5 (see Lemma 5.5.4), we can verify that, in the deviation zonex < δσ+, one has for Zj,n the same approximations, of the form (5.1.11), as thosethat hold for Sn (all the evaluations remain true, owing to (5.7.6)). Therefore thecomputation of the asymptotics of P(Gn) leads to the same result as in Theo-rem 5.4.1. This proves the first assertion of the theorem.

Page 321: Asymptotic analysis of random walks

290 Random walks with semiexponential jump distributions

When deriving asymptotic expansions for probability (5.7.1), there will besome changes compared with the proof of Theorem 5.4.1. We will have to use therepresentation

Zj,nd= Sn−1 + ζ∗n−j ,

where the ζ∗n−jd= ζn−j are defined, in the appropriate way, by the last n − j

summands in the sum Sn−1. Further, one should use, as before, the decomposi-tions (5.4.67) and (5.4.68). We obtain, cf. (5.4.69), (5.5.24), (5.6.36),

P(Gn) = nV (x)[1 +

1zn

n∑j=1

Eζn−j

+1

2z2n

n∑j=1

EZ2j,n

(1 +

(1 − α)zx

(1 + o(1)))

+ Rn(x)],

(5.7.9)

where |Rn(x)| � cn3/2z−3. Next, we use the relations

EZj,n = E(Sn−1 + ζ∗n−j) = Eζn−j → Eζ as n − j → ∞;

EZ2j,n = (n − 1) + 2ESn−1ζ

∗n−j + Eζ2

n−j .

Here ESn−1ζ∗n−j = o(

√n) because Sn−1/

√n and ζ∗n−j are asymptotically in-

dependent and, for all n,

Eζ2n � Eζ2 < ∞ (5.7.10)

owing to (5.7.5) since b � 3. Therefore

EZ2j,n = n + o

(√n). (5.7.11)

Substituting this in (5.7.9), we obtain the desired assertion. The theorem is proved.

5.8 Integro-local and integral theorems on the whole real line

In this section, we present integro-local theorems on the asymptotic behaviourof P(Sn ∈ Δ[x)) and integral theorems on the asymptotics of P(Sn � x),x → ∞, that are valid on the whole real line, i.e. in all deviation zones, includingthe boundary zones. In particular, these results improve the integral theoremsfor Sn, both those obtained in the preceding sections and also obtained in theliterature cited in § 5.1.

The complexity of the proofs of the above-mentioned results, which were ob-tained in the recent paper [73], renders them somewhat beyond the level of thepresent monograph. So we will present the theorems below without proofs; thesecan be found in [73].

Page 322: Asymptotic analysis of random walks

5.8 Integro-local and integral theorems on the whole line 291

When studying the asymptotics of P(Sn � x) in § 5.4, we found that, in thecase s1 = x/σ1(n) → ∞, an important role is played by the function

M(x, n) = min0�t�x

M(x, t, n), M(x, t, n) = l(x − t) + nΛκ

(t

n

)(see Theorem 5.4.1 and (5.4.27), (5.1.12)), whose properties were studied in§ 5.4.1. To study the asymptotics of P(Sn � x) in the transitional zone x =s1σ1(n) for a fixed s1 ∈ (0, ∞), we will need the function

Mδ(x, n) := min0�t�(1−δ)x

M(x, t, n) with δ = δα :=1 − α

2 − α,

so that M(x, n) = M0(x, n). It turns out, however, that, as s1 → ∞ the asymp-totic properties of Mδ and M0 coincide and therefore do not depend on δ. So, byfunction M in the discussion in § 5.4 one can understand the function Mδ , and ev-erywhere in what follows we can assume that δ = δα and denote the function Mδ

with δ = δα by M . This will lead to no confusion.The asymptotics of M(x, n) in the zone x = s1σ1(n) with a fixed s1 are

‘transitional’ from the asymptotics of l(x) to that of x2/2n; these asymptoticscorrespond respectively to the cases s1 → ∞ and s1 → 0. In this connection, itis of interest to consider the ratio limits (recall that δ = δα)

G1(s1) := limn→∞

M(x, n)l(x)

and G2(s1) := limn→∞

2nM(x, n)x2

, (5.8.1)

where x = s1σ1(n). Since for t � √n one has

Λκ

(t

n

)n ∼ t2

2n, (5.8.2)

we obtain for x = s1σ1(n) and n → ∞ that, putting t := px for p ∈ [0, 1 − δ]and using (5.2.3) and the relation l(σ1(n)) ∼ σ2

1(n)/n, one has

M(x, n) ∼ min0�p�1−δ

((1 − p)αl(x) +

p2x2

2n

)= l(x) min

0�p�1−δ

[(1 − p)α +

p2s21σ

21(n)

2nl(s1σ1(n))

]∼ l(x) min

0�p�1−δ

((1 − p)α +

p2s2−α1

2

)and so

M(x, n) ∼ x2

2nmin

0�p�1−δ

(2sα−2

1 (1 − p)α + p2).

From here it follows that the limits in (5.8.1) are equal to the values of

Gi(s) = min0�p�1−δ

Hi(s, p), i = 1, 2, (5.8.3)

Page 323: Asymptotic analysis of random walks

292 Random walks with semiexponential jump distributions

at s = s1, where

H1(s, p) := (1 − p)α +12

p2s2−α, H2(s, p) := 2sα−2(1 − p)α + p2.

Now we will study the properties of the functions Gi(s). Denote by p(s) themaximum value p ∈ [0, 1 − δ], at which the minimum in (5.8.3) is attained, sothat Gi(s) = Hi(s, p(s)) for i = 1, 2 (clearly, p(s) does not depend on i). Thequantity

s0 :=2 − α

(2 − 2α)(1−α)/(2−α)(5.8.4)

will play an important role in what follows.

Lemma 5.8.1. If Var ξ = 1 then the functions G1(s), G2(s) and p(s) have thefollowing properties.

(i) The functions G1(s) and G2(s) are related to each other by

G1(s) =s2−α

2G2(s);

the function G2(s) is decreasing, while G1(s) is increasing for s > 0. More-over,

G2(s0) = 1, G2(s) → ∞ as s → 0, G1(s) ↑ 1 as s → ∞.

(5.8.5)(ii) The function p(s) is continuous and positive for s � 0 and decreasing

for s � s0, and

p(s0) =α

2 − α, p(s) ∼ α

s2−αas s → ∞.

If we waived the condition d ≡ Var ξ = 1 then, for t � √n, instead of (5.8.2)

we would have

Λκ

(t

n

)n ∼ t2

2nd,

and instead of (5.8.1) we would obtain, for x = s1d1/(2−α)σ1(n) and any fixed

s1 > 0, the relation

M(x, n) ∼ G1(s1)l(x) ∼ x2

2ndG2(s1).

Equally obvious changes consequent on replacing Var ξ = 1 by Var ξ = d couldbe made in what follows as well. So, from here on, we will assume in the presentsection, without loss of generality, that Var ξ = 1 unless stipulated otherwise.

Now we will state the main result on crude (logarithmic) asymptotics in theintegral theorem setup.

Page 324: Asymptotic analysis of random walks

5.8 Integro-local and integral theorems on the whole line 293

Theorem 5.8.2. Let F ∈ Se. Then, for x � √n, x = s1σ1(n),

lnP(Sn � x) ∼

⎧⎪⎨⎪⎩−x2

2nif x � √

n, s1 � s0,

−G1(s1)l(x) if s1 � s0,(5.8.6)

where G1(s1)l(x) ∼ G2(s1)x2/2n for each fixed s1 > 0.

It follows from Theorem 5.8.2 that the ‘point’ x = s0σ1(n) separates theCramer deviation zone from the ‘non-Cramer’ zone.

One can also show that there exists a common representation for the right-handside of (5.8.6) in the form −M0(x, n), since

M0(x, n) ∼

⎧⎪⎨⎪⎩x2

2nif x � √

n, s1 � s0,

G1(s1)l(x) if s1 � s0,

i.e. for −M0(x, n) one has the same representation as for the left-hand sideof (5.8.6).

Next we will state an ‘exact’ integral theorem. Put

c1(s) :=(1

2H ′′

2 (s, p(s)))−1/2

for s � s0,

where s0 is defined in (5.8.4) and H ′′2 (s, p) is the second derivative of the function

H2(s, p) with respect to p, and let

c1(s) := c1(s0) for s � s0.

By Lemma 5.8.1 the function c1(s) is continuous and positive for all s � 0, andc1(s) → 1 (H ′′

2 (s, p(s)) → 2) as s → ∞.

Theorem 5.8.3. Let F ∈ Se and E|ξ|κ < ∞, where

κ =

⎧⎪⎪⎨⎪⎪⎩⌊

11 − α

⌋+ 2 if

11 − α

is a non-integer,

11 − α

+ 1 if1

1 − αis an integer.

Then, uniformly in x = s1σ1(n) → ∞, x � √n, we have

P(Sn � x) ∼ Φ(−x/

√n)e−Λ0

κ(x/n)n + nc1(s1)e−M(x,n), (5.8.7)

where

Φ(−x/

√n)e−Λ0

κ(x/n)n ∼√

n

x√

2πe−Λκ(x/n)n

for x � √n and c1(s1) → 1 as s1 → ∞. In particular, for any fixed ε > 0,

P(Sn � x) ∼⎧⎨⎩ Φ

(−x/√

n)e−Λ0

κ(x/n)n if x � √n, s1 � s0 − ε,

nc1(s1) e−M(x,n) if s1 � s0 + ε.

Page 325: Asymptotic analysis of random walks

294 Random walks with semiexponential jump distributions

In the ‘extreme’ deviation zone x = s2σ2(n) → ∞, s2 � c = const > 0, onehas

P(Sn � x) ∼ nc2(s2)e−l(x), (5.8.8)

where

c2(s) := exp{

α2

2s2−2α

}→ 1 as s → ∞.

Theorem 5.8.3 is an analogue of the results of [238], which hold on the wholereal line (the zone of normal deviations x <

√n, n → ∞, is covered by the

central limit theorem), but it has a simpler form. It is also an analogue of therepresentation (4.1.2), which is uniform over the range x � √

n and holds forregularly varying tails F+(t).

Now we will turn to integro-local theorems. Here we will need the followingadditional condition.

[D1] For non-lattice distributions, for any fixed Δ > 0, as t → ∞,

l(t + Δ) − l(t) ∼ Δαl(t)t

;

for arithmetic distributions, for integer-values k → ∞,

l(k + 1) − l(k) ∼ αl(k)k

.

The function αl(t)/t in condition [D1] plays the role of the derivative l′(t) andis asymptotically equivalent to it, provided that the latter exists and is sufficientlyregular.

Observe that, in the zone of deviations that are close to normal, this condi-tion will not be required (see Theorems 5.8.4 and 5.8.5 below) while in the largedeviation zone it could be relaxed (cf. § 4.8).

For x = s1σ1(n), put

m(x) := c(m)(s1)αl(x)x

,

where c(m)(s1) := (1 − p(s1))α−1 for s1 � s0, and c(m)(s1) := c(m)(s0)for s1 � s0. By Lemma 5.8.1(ii),

p(s1) → 0, c(m)(s1) → 1, m(x) ∼ αl(x)x

as s1 → ∞. Furthermore, we can show that, for any fixed Δ > 0, in the zones1 � s0 one has

M(x + Δ, n) − M(x, n) ∼ Δm(x),

so that the function m(x) plays the role of the derivative of M(x, n) with respectto x.

First we will state an integro-local theorem in the non-lattice case.

Page 326: Asymptotic analysis of random walks

5.8 Integro-local and integral theorems on the whole line 295

Theorem 5.8.4. Assume that F ∈ Se and that the conditions E|ξ|κ < ∞ and[D1] are met, where κ is defined in Theorem 5.8.3. Then, for any fixed Δ > 0,uniformly in x = s1σ1(n) → ∞, x � √

n,

P(Sn ∈ Δ[x)

) ∼ Δ√2πn

e−Λκ(x/n)n + Δnm(x)c1(s1) e−M(x,n), (5.8.9)

where one has m(x)c1(s1) ∼ αl(x)/x as s1 → ∞. In particular, for anyfixed ε > 0,

P(Sn ∈ Δ[x)

) ∼⎧⎪⎪⎨⎪⎪⎩

Δ√2πn

e−Λκ(x/n)n if x � √n, s1 � s0 − ε,

Δnm(x)c1(s1) e−M(x,n) if s1 � s0 + ε.

In the ‘extreme’ deviation zone x = s2σ2(n) → ∞, s2 � c = const > 0, wehave

P(Sn ∈ Δ[x)) ∼ Δnαl(x)x

c2(s2)e−l(x), (5.8.10)

where the ci(si), i = 1, 2, are defined in Theorem 5.8.3. If x = O(σ2(n)) thencondition [D1] is superfluous.

Now consider the arithmetic case. We cannot assume here without losing gen-erality that a = Eξ = 0, d = Var ξ = 1. Hence the following theorem is statedfor arbitrary a and d.

Theorem 5.8.5. Suppose that F ∈ Se and that the conditions E|ξ|κ < ∞and [D1] are satisfied, where κ is defined in Theorem 5.8.3. Then, uniformlyin integer-valued x = s1b

2/(2−α)σ1(n) → ∞, x � √n, the following holds:

P(Sn − �an = x

)∼ 1√2πnd

e−Λκ(x/n)n + nm(x)c1(s1)e−M(x,n),

where m(x)c1(s1) ∼ αl(x)/x as s1 → ∞. In particular, for any fixed ε > 0,

P(Sn − �an = x

) ∼⎧⎪⎪⎨⎪⎪⎩

1√2πnd

e−Λκ(x/n)n if x � √n, s1 � s0 − ε,

nm(x)c1(s1) e−M(x,n) if s1 � s0 + ε.

In the ‘extreme’ deviation zone x = s2σ2(n) → ∞, s2 � c = const > 0, forinteger-valued x one has

P(Sn − �an = x

) ∼ nαl(x)x

c2(s2)e−l(x), (5.8.11)

where ci(si), i = 1, 2, are defined in Theorem 5.8.3. If x = O(σ2(n)) thencondition [D1] is superfluous.

Note that, in the deviation zone x � σ2(n) Theorem 5.8.3 will, generallyspeaking, not follow from Theorems 5.8.4 and 5.8.5 (indeed, Theorem 5.8.3 does

Page 327: Asymptotic analysis of random walks

296 Random walks with semiexponential jump distributions

not contain condition [D1]). Theorems 5.8.4 and 5.8.5 are analogues of Theo-rems 4.7.5 and 4.7.6.

Further, observe that the case when n does not grow is not excluded in Theo-rems 5.8.2–5.8.5. In this case s2 := x/σ2(n) → ∞, c2(s2) → 1 as x → ∞, andthe assertions (5.8.8), (5.8.10) and (5.8.11), with c2(s2) replaced on their right-hand sides by 1, can also be obtained from the known properties of subexponentialand locally subexponential distributions (see e.g. §§ 1.2 and 1.3).

5.9 Additivity (subexponentiality) zones for various distribution classes

We have seen that, under the conditions of Chapters 2–5, the following subexpo-nentiality property holds for the sums Sn of i.i.d. r.v.’s ξ1, . . . , ξn, Eξ = 0. Forany fixed n, as x → ∞,

P(Sn � x) ∼ nV (x), V (x) = P(ξ � x). (5.9.1)

This property could also be referred to as tail additivity with respect to the additionof random variables. This term is more justified in the case of non-identicallydistributed independent summands ξ1, . . . , ξn. In this case, instead of (5.9.1) onehas (see Chapters 12 and 13)

P(Sn � x) ∼n∑

j=1

Vj(x),

where Vj(x) = P(ξj � x).It follows from the results of Chapters 2–5 that the relation (5.9.1) remains valid

also for all n � n(x), where n(x) → ∞ as x → ∞. If n(x) is an ‘unimprovable’value with this property then it could be called the boundary of the additivity(subexponentiality) zone. When considering the whole class S of subexponentialdistributions, one can only say about the boundary n(x) of the additivity zone fora distribution F ∈ S that it ought to be a sufficiently slow growing function of x

(that depends on F). On the one hand, the more regular is the behaviour of F atinfinity, the greater the boundary n(x). On the other hand, there exists a trivialupper bound

n(x) <1

V (x)(5.9.2)

dictated by the fact that nV (x) approximates a probability and therefore one musthave nV (x) � 1. It will be seen in what follows that, in a number of cases, thistrivial bound is essentially attained.

For the distribution classes R(α) and Se(α), which we considered in Chap-ters 1–5 (see pp. 11, 29), the boundaries of the additivity zones can be foundexplicitly. It turns out that we always have

n(x) = xγLn(x), (5.9.3)

Page 328: Asymptotic analysis of random walks

5.9 Additivity zones for distribution classes 297

where Ln(x) is an s.v.f. and γ > 0. More precisely, the following assertions holdtrue.

(i) If F ∈ R(α) and condition [<, =] with α ∈ (0, 2) is satisfied then, undersuitable conditions on the majorant W of the left distribution tail (for example,W (t) < cV (t)), one has

n(x) =xαε(x)L(x)

, (5.9.4)

where ε(x) is an arbitrary s.v.f. such that ε(x) → 0 as x → ∞ and L(x) is thes.v.f. from the representation V (x) = x−αL(x), so that n(x) = o

(V −1(x)

).

This means that the trivial bound (5.9.2) is essentially attained in this case. Theadditivity boundary n(x) cannot assume values comparable with V −1(x), sincefor n ∼ cV −1(x) the distribution of Sn/V (−1)(1/n) is approximated by a stablelaw. (If condition [Rα,ρ] holds for ρ > −1 then the scaling sequence F (−1)(1/n)from § 1.5 will differ from V (−1)(1/n) just by a constant factor; if n � V −1(x)then P(Sn � x) ∼ Fα,ρ,+(0).) Thus, for F ∈ R(α) with α ∈ (0, 2), in therepresentation (5.9.3) one has

γ = γ(F) = α, Ln(x) =ε(x)L(x)

.

(ii) If F ∈ R(α) for α > 2 and Eξ2 < ∞ then, from the discussion inChapter 4,

n(x) � x2

c lnx(5.9.5)

for any c > α − 2.If n assumes values in the vicinity of x2/(α − 2) lnx then for the asymptotics

of P(Sn � x) we will have a ‘mixed’ approximation, given by the sum of nV (x)and 1 − Φ(x/

√nd ), where Φ is the standard normal distribution function and

d = Eξ21 (see (4.1.2)). For n > x2/c1 lnx, c1 < α−2 (in particular, for n ∼ cx2),

we will have the normal approximation. If n � x2 then P(Sn � x) → 1/2.Hence, in the case F ∈ R(α), α > 2, in the representation (5.9.3) one can let

γ = γ(F) = 2, Ln(x) =1 + ε(x)(α − 2)

lnx, (5.9.6)

where ε(x) → 0 as x → ∞.

(iii) If F ∈ Se(α), α ∈ (0, 1) then, by virtue of the results of the presentchapter (see Theorem 5.4.1(ii)),

n(x) =x2−2αε(x)

L2(x),

where ε(x) is an arbitrary s.v.f. vanishing at infinity and L(x) is the s.v.f. fromthe representation

P(ξ � t) = e−l(t), l(t) = tαL(t), α ∈ (0, 1).

Page 329: Asymptotic analysis of random walks

298 Random walks with semiexponential jump distributions

If n is comparable with or greater than x2−2αL−2(x) but satisfies the relationn � x2−α/L(x) then there is another approximation for P(Sn � x) (see The-orem 5.4.1(i)). For n � x2−α/L(x), we have first the ‘Cramer approximation’and then the normal one. If n � x2 then P(Sn � x) → 1/2.

Thus, for F ∈ Se(α) in (5.9.3) we have

γ = γ(F) = 2 − 2α, Ln(x) =ε(x)

L2(x).

Summarizing the above, we can now plot the dependence of the main param-eter γ = γ(F), which characterizes the additivity boundary n(x), on the param-eter α that specifies the classes R(α) and Se(α) containing the distribution F(see Fig. 5.1).

2

γ(F)

�2 α

F ∈ R(α)

��

��

��

���

2����

1�

� 3� �

�1

γ(F)

F ∈ Se(α)α

������

� 3��4�

5�1�

Fig. 5.1. The plots show the values of γ in the representation (5.9.3) for which one oranother approximation holds: ©1 , additivity (subexponentiality) zone; ©2 , approximationby a stable law; ©3 , normal approximation; ©4 , Cramer’s approximation; ©5 , intermediateapproximation.

For exponentially decaying distributions the tail additivity property, generallyspeaking, does not hold. For example, if P(ξ � t) ∼ ce−μt as t → ∞, μ > 0then

P(S2 � t) ∼ c1te−μt � 2P(ξ � t).

For distributions with a negative mean, however, the additivity property be-comes possible. As was shown in § 1.2 (Example 1.2.11 on p. 19), if

P(ξ � t) = e−μtV (t),

where V (t) is an integrable r.v.f. then ϕ(μ) = Eeμξ < ∞ and

P(S2 � t) ∼ 2ϕ(μ)P(ξ � t).

If the distribution F is such that ϕ(μ) = 1 (which is only possible in the caseϕ′(0) = Eξ < 0) then it will have the additivity (subexponentiality) property.

Using Cramer transforms of the distributions of the r.v.’s ξ and Sn, the problem

Page 330: Asymptotic analysis of random walks

5.9 Additivity zones for distribution classes 299

on the asymptotics of P(Sn � x) can be reduced to the respective problem forsums of r.v.’s with distributions from R (for this reduction, we do not need theequality ϕ(μ) = 1; see § 6.2).

Page 331: Asymptotic analysis of random walks

6

Large deviations on the boundary of and outsidethe Cramer zone for random walks with jump

distributions decaying exponentially fast

6.1 Introduction. The main method of studying large deviations when

Cramer’s condition holds. Applicability bounds

In this chapter, in contrast with the rest of the book, we will assume that thedistribution of ξ satisfies Cramer’s condition, i.e.

ϕ(λ) := Eeλξ < ∞,

for some λ > 0. In this case, methods for studying the probabilities of largedeviations of Sn are quite well developed and go back to Cramer’s paper [95](see also [233, 16, 219, 259, 120, 49] etc.). Somewhat later, these methods wereextended in [37, 44, 69, 70] to enable the study of Sn and also the solution ofa number of other problems related to the crossing of given boundaries by thetrajectory of a random walk.

The basis of the modern approach to studying the probabilities of large devia-tions of Sn, including integro-local theorems, consists of the following two mainelements:

(1) the Cramer transform and the reduction of the problem to integro-local theo-rems in the normal deviation zone;

(2) the Gnedenko–Stone–Shepp integro-local theorems [130, 254, 258, 259] onthe distribution of Sn.

We will briefly describe the essence of the approach. As before, let

Δ[x) := [x, x + Δ)

be a half-open interval of length Δ > 0 with left endpoint x. We will studythe asymptotics of P

(Sn ∈ Δ[x)

)as n → ∞. (Note that earlier we assumed

that x → ∞ but often made no such assumption with regard to n. Now theunbounded growth of x will follow from that of n.)

We say that an r.v. ξ(λ) follows the Cramer transform of the distribution (or the

300

Page 332: Asymptotic analysis of random walks

6.1 Introduction. The main method under Cramer’s condition 301

conjugate distribution1) of ξ if

P(ξ(λ) ∈ dt) =eλtP(ξ ∈ dt)

ϕ(λ), (6.1.1)

so that

Eeμξ(λ)=

ϕ(λ + μ)ϕ(λ)

.

The Cramer transform of the distribution of Sn has the form

eλtP(Sn ∈ dt)ϕn(λ)

. (6.1.2)

It is clear that the moment generating function of this distribution is equal to(ϕ(λ + μ)/ϕ(λ)

)n. So we obtain the following remarkable fact: the transform(6.1.2) corresponds to the distribution of the sum

S(λ)n := ξ

(λ)1 + · · · + ξ(λ)

n

of i.i.d. r.v.’s ξ(λ)1 , . . . , ξ

(λ)n following the same distribution as ξ(λ). From this it

follows that

P(Sn ∈ dt) = ϕn(λ) e−λtP(S(λ)n ∈ dt),

and therefore

P(Sn ∈ Δ[x)

)= ϕn(λ) e−λx

x+Δ∫x

e(x−t)λP(S(λ)n ∈ dt)

= ϕn(λ) e−λx

Δ∫0

e−λuP(S(λ)

n − x ∈ du). (6.1.3)

Since e−λu → 1 uniformly in u ∈ [0,Δ] as Δ → 0, we have, for such Δ’s, that

P(Sn ∈ Δ[x)

) ∼ ϕn(λ) e−λxP(S(λ)

n − x ∈ Δ[0)); (6.1.4)

hence, knowing the asymptotics of P(S

(λ)n −x ∈ Δ[0)

)would mean that we also

know the desired asymptotics of P(Sn ∈ Δ[x)

).

Now note that Eξ(λ) = ϕ′(λ)/ϕ(λ) =(lnϕ(λ)

)′. Clearly, Eξ(λ) is an in-creasing function of λ because

(lnϕ(λ)

)′′> 0. This function increases from the

value

θ− := inf{

ϕ′(λ)ϕ(λ)

: λ ∈ (λ−, λ+)}

to the value

θ+ := sup{

ϕ′(λ)ϕ(λ)

: λ ∈ (λ−, λ+)}

,

1 In actuarial and finance-related literature, this distribution is often referred to as the Esscher trans-formed or exponentially tilted distribution; see e.g. p. 31 of [113].

Page 333: Asymptotic analysis of random walks

302 Random walks with exponentially decaying distributions

where

λ− := inf{λ : ϕ(λ) < ∞}, λ+ := sup{λ : ϕ(λ) < ∞}.Therefore, if

θ :=x

n∈ (θ−, θ+) (6.1.5)

then we can choose a λ = λ(θ) such that

Eξ(λ) =ϕ′(λ)ϕ(λ)

= θ. (6.1.6)

In this case we have ES(λ)n − x = nθ − x = 0, which means that the probability

P(S

(λ)n − θn ∈ Δ[0)

)on the right-hand side of (6.1.4) will refer to the normal

deviation zone.Observe also that

Eξ = ϕ′(0) ∈ [θ−, θ+], λ(θ±) = λ±.

Now consider the lattice case, for which the distribution of ξ is concentrated onthe set {a + kh, k = . . . ,−1, 0, 1, . . .} and h is the maximum value possessingthis property. Without loss of generality, we can put h = 1. Moreover, whenstudying the distribution of Sn, we can set a = 0 (sometimes it is more convenientto set Eξ = 0 but in such a case, generally speaking, a �= 0). When h = 1and a = 0, the distribution of ξ is said to be arithmetic. In the arithmetic case,analogues of the relations (6.1.3), (6.1.4) have a somewhat simpler form: forinteger-valued x,

P(Sn = x) = ϕn(λ) e−λxP(S(λ)n − θn = 0), (6.1.7)

where, as before, E(ξ(λ) − θ) = 0 for λ = λ(θ), θ ∈ (θ−, θ+).Thus, to study the asymptotics of P

(Sn ∈ Δ[x)

), we should turn to integro-

local theorems for the sums

Zn := ζ1 + · · · + ζn

of i.i.d. r.v.’s ζ1, ζ2, . . . in the normal deviation zone in the non-lattice case and tolocal theorems for these sums in the lattice case.

Let F be a stable distribution with a density f .In the lattice case, we have Gnedenko’s theorem [130], as follows.

Theorem 6.1.1. Assume that ζ has a lattice distribution with span h = 1 and anarbitrary a. The relation

limn→∞ sup

k

∣∣∣∣bnP(Zn = na + k) − f

(na + k − an

bn

)∣∣∣∣ = 0 (6.1.8)

holds, for suitable an, bn, iff the distribution of ζ belongs to the domain of attrac-tion of the stable law F. Moreover, here the sequence (an, bn) is the same as thatensuring convergence of the distribution of (Zn − an)/bn to the law F.

Page 334: Asymptotic analysis of random walks

6.1 Introduction. The main method under Cramer’s condition 303

Proof. The proof of Theorem 6.1.1 can be found in §§ 49, 50 of [130]; see alsoTheorem 4.2.1 of [152] and Theorem 8.4.1 of [32].

In the non-lattice case, the following Stone–Shepp integro-local theorem holdstrue.

Theorem 6.1.2. Suppose that ζ is a non-lattice r.v. Then a necessary and suffi-cient condition for the existence of an, bn such that

limn→∞ sup

Δ∈[Δ1,Δ2]

supx

∣∣∣∣bnP(Zn ∈ Δ[x)

)− Δf

(x − an

bn

)∣∣∣∣ = 0 (6.1.9)

for any fixed 0 < Δ1 < Δ2 < ∞ is that the distribution of ζ belongs tothe domain of attraction of the stable distribution F. Moreover, here the se-quence (an, bn) is the same as that ensuring convergence of the distributionof (Zn − an)/bn to the law F.

Proof. The proof of this assertion can be found in [254, 258, 259, 120, 32].

Remark 6.1.3. The assertions of Theorems 6.1.1 and 6.1.2 can be presented ina unified form, using instead of (6.1.8) the common assertion (6.1.9), where inthe lattice case the variable x assumes values an + k, k = . . . ,−1, 0, 1, . . . , andΔ � 1 is integer-valued.

Remark 6.1.4. If the Cramer condition on the characteristic function,

[C] p := lim sup|t|→∞∣∣g(t)

∣∣ < 1, where g(t) := Eeitζ ,

is satisfied then the assertion of Theorem 6.1.2 can be made stronger: the relation(6.1.9) will hold uniformly in Δ ∈ [e−p1n,Δ2] for any fixed p1 < p and Δ2 < ∞(see [259]).

Note that we want to apply Theorems 6.1.1 and 6.1.2 in the representations(6.1.4), (6.1.7) to the sums S

(λ)n − θn with λ = λ(θ), where the value θ = x/n

and hence that of λ(θ) will depend, generally speaking, on n (as, for instance, isthe case when x = θ0n + bnα, α ∈ (0, 1), θ0 = const). This means that wewill have to deal with the triangular array scheme and so will need versions ofTheorems 6.1.1 and 6.1.2 that are uniform in λ. Such versions were establishedin [72], and in part in [259]. In the general situation, conditions for the uniformityare rather cumbersome but in the special case when ζ = ξ(λ(θ)) − θ, θ ∈ [θ−, θ+]and θ− < θ− < θ+ < θ+, one needs no additional conditions and Theorems 6.1.1and 6.1.2 take more concrete forms. In particular, we have the following result.

Theorem 6.1.5. If the r.v. ξ is non-lattice and ζ = ξ(λ(θ)) − θ then uniformlyin θ = x/n ∈ [θ−, θ+] one has the relation (6.1.9), where

f(t) =1√2π

e−t2/2

Page 335: Asymptotic analysis of random walks

304 Random walks with exponentially decaying distributions

is the density of the standard normal distribution, an = 0 and b2n = nd(θ)

with d(θ) := Var ξ(λ(θ)). In particular,

limn→∞ sup

Δ∈[Δ1,Δ2]

∣∣∣∣√nd(θ)P(S(λ(θ))

n − θn ∈ Δ[0))− Δ√

∣∣∣∣ = 0. (6.1.10)

One should also note that the Cramer transform (6.1.1) converts the distributionof ξ into distributions that are absolutely continuous with respect to the originalone and also with respect to each other.

If θ− > −∞ and ϕ′′(λ(θ−))

< ∞ (Var ξ(λ(θ−)) < ∞) then the uniformityin θ = x/n in Theorems 6.1.1 and 6.1.2 will hold on the interval [θ−, θ+]. Simi-larly, for θ+ < ∞ and ϕ′′(λ(θ+)

)< ∞ one will have uniformity in θ ∈ [θ−, θ+].

Thus, under the conditions of Theorem 6.1.5, we have

P(S(λ(θ))

n − θn ∈ Δ[0)) ∼ Δ√

2πnd(θ), n → ∞, (6.1.11)

uniformly in Δ ∈ [Δ1,Δ2].If the distribution of ξ satisfies condition [C] then the interval [Δ1,Δ2] can be

replaced by [e−p1n,Δ2] for some p1 > 0.The assertion of Theorem 6.1.5 follows from the uniform versions of Theo-

rems 6.1.1 and 6.1.2 in an obvious way.If the conditions that ϕ′′(λ(θ±)

)are finite do not hold then, for θ = x/n ↑ θ+

(θ ↓ θ− > −∞), one will need versions of Theorems 6.1.1 and 6.1.2 with a non-normal stable density f (see § 6.2). Coresponding changes will have to be madein assertions (6.1.10), (6.1.11) as well.

Further, note that assertions (6.1.9), (6.1.11), being true in the non-lattice casefor any fixed Δ, will also hold for Δ = Δn → 0 slowly enough. Therefore,returning to (6.1.4), we obtain from Theorem 6.1.5 the following statement. Againlet [θ−, θ+] be any interval, located inside (θ−, θ+).

Theorem 6.1.6. In the non-lattice case, uniformly in θ = x/n ∈ [θ−, θ+], asn → ∞ and Δ = Δn → 0 slowly enough, one has, for λ = λ(θ),

P(Sn ∈ Δ[x)

) ∼ ϕn(λ) e−λx Δ√2πnd(θ)

. (6.1.12)

In the lattice case, for values of x of the form an + k, one has

P(Sn = x) ∼ ϕn(λ) e−λx 1√2πnd(θ)

. (6.1.13)

The uniformity in θ could be extended to the entire intervals [θ−, θ+] and[θ−, θ+] if θ− > −∞, ϕ′′(λ−

)< ∞ and θ+ < ∞, ϕ′′(λ+

)< ∞ respectively.

The case θ+ < ∞, ϕ′′(λ+

)= ∞ and θ ∼ θ+ will be considered in § 6.2.

Note that the product ϕn(λ(θ)

)e−λ(θ)x in (6.1.12) and (6.1.13) can be written

as [ϕ(λ(θ)

)e−λ(θ)θ

]n = e−nΛ(θ),

Page 336: Asymptotic analysis of random walks

6.1 Introduction. The main method under Cramer’s condition 305

where

Λ(θ) = λ(θ)θ − lnϕ(λ(θ)

)is the large deviation rate function (a.k.a. the deviation function or Legendretransform of lnϕ(λ)) given by

Λ(θ) := supλ

(λθ − lnϕ(λ)

).

In this form, the deviation function appears in a natural way when finding theasymptotics of P

(Sn ∈ Δ[x)

)using an alternative approach employing the inver-

sion formula and the saddle-point method. The properties of the deviation func-tion as a probabilistic characteristic are known well enough (see e.g. [38, 49, 67]).In particular, it is known that the function Λ(θ) is non-negative, lower semi-continuous, convex and analytic on the interval (θ−, θ+) and that

Λ(Eξ) = 0, Λ′(θ) = λ(θ) > 0 for θ > Eξ.

In terms of the function Λ(θ), the relation (6.1.12) takes the form

P(Sn ∈ Δ[x)

) ∼ Δe−nΛ(θ)√2πnd(θ)

.

From here (or from (6.1.12)) one can easily obtain an integral theorem as well.Since for θ < θ+, v/n → 0 one has

Λ(θ +

v

n

)= Λ(θ) +

v

nλ(θ) + o

( v

n

),

we see that, for any fixed v, as n → ∞,

P(Sn ∈ Δ[x + v)

) ∼ Δe−nΛ(θ)−vλ(θ)√2πnd(θ)

. (6.1.14)

This implies the following.

Corollary 6.1.7. Let θ = x/n > Eξ. Then in the non-lattice case, uniformlyin θ = x/n ∈ [θ−, θ+], for any fixed Δ � ∞ one has

P(Sn ∈ Δ[x)

) ∼ e−nΛ(θ)√2πnd(θ)

Δ∫0

e−vλ(θ)dv =(1 − e−λ(θ)Δ)e−nΛ(θ)

λ(θ)√

2πnd(θ).

(6.1.15)In particular, for Δ = ∞,

P(Sn � x) ∼ e−nΛ(θ)

λ(θ)√

2πnd(θ). (6.1.16)

In the lattice case, we similarly obtain from (6.1.13) that, for values of x of theform an + k, one has

P(Sn � x) ∼ e−nΛ(θ)

(1 − e−λ(θ))√

2πnd(θ). (6.1.17)

Page 337: Asymptotic analysis of random walks

306 Random walks with exponentially decaying distributions

The general integro-local theorem (6.1.15), (6.1.13) was obtained, in a some-what different form, in [259].

The asymptotic relations (6.1.16), (6.1.17) show, in particular, the degree ofprecision of the following crude bound, which is obtained directly from Cheby-shev’s inequality. For x � Eξ � 0,

P(Sn � x) � infλ>0

e−λxEeλSn = infλ>0

e−λx+n ln ϕ(λ) = e−nΛ(θ). (6.1.18)

We emphasize that, when studying one-sided deviations, one of the main con-ditions in the basic Theorem 6.1.6 is the relation

θ =x

n� θ+ < θ+.

This inequality, describing the so-called Cramer zone of right-sided deviations,sets applicability bounds for methods based on the Cramer transform and alsoon all the other analytic methods leading to asymptotics of the form (6.1.12)–(6.1.17). If θ+ = ∞ then there are no such bounds. If θ+ < ∞, λ+ = ∞ thennecessarily

P(ξ � θ+) = 1,

i.e. the r.v. ξ is bounded. This case is scarcely interesting from the large deviationsviewpoint, and we will not consider it here.

There remains the little-studied (for deviations x > θ+n) possibility

0 < λ+ < ∞, θ+ =ϕ′(λ+)ϕ(λ+)

< ∞, (6.1.19)

where necessarily one has ϕ(λ+) < ∞, ϕ′(λ+) < ∞. It follows from theserelations that the function V (t) from the representation

P(ξ � t) = e−λ+tV (t), λ+ > 0, (6.1.20)

has the property that tV (t) is integrable:

∞∫0

tV (t) dt < ∞. (6.1.21)

Indeed, the condition ϕ′(λ+) < ∞ implies that

E(ξeλ+ξ; ξ > 0) < ∞, (6.1.22)

and therefore, as t → ∞,

V (t) = eλ+tP(ξ � t) � t−1E(ξeλ+ξ; ξ � t

)= o(t−1).

Page 338: Asymptotic analysis of random walks

6.1 Introduction. The main method under Cramer’s condition 307

From (6.1.22) and the last relation, integrating by parts, we obtain

∞ > E(ξeλ+ξ; ξ > 0) > E

(ξeλ+ξ; 0 < ξ < N)

= −N∫

0

teλ+td(e−λ+tV (t)

)= λ+

N∫0

tV (t) dt −N∫

0

t dV (t)

� λ+

N∫0

tV (t) dt − NV (N) → λ+

∞∫0

tV (t) dt as N → ∞,

which establishes (6.1.21).Moreover, if 1/V (t) is an upper-power function (see p. 28) then

V (t) � 2c

t

t∫t/2

V (u) du � 4c

t2

t∫t/2

uV (u)du = o(t−2)

owing to (6.1.21). It is not difficult to construct an example showing that withoutadditional assumptions on the behaviour of V (t) the bound V (t) = o(t−1) isessentially unimprovable.

It turns out that one can find the asymptotics of the probabilities of large devia-tions x > θ+n in the case (6.1.19), (6.1.20) only under the additional assumptionof regular variation of the function V ; this is quite similar to what we saw inChapters 2–5 and in contrast with the ‘Cramer theory’ presented at the beginningof the present section. Moreover, the asymptotics of P(Sn � x) are determinedin this case not only by the function V (t) but also by the values of λ+, ϕ(λ+)and ϕ′(λ+), which depend on the entire distribution of ξ. (We did not have thisphenomenon in the situations considered in Chapters 2–5; in particular, in theCramer deviation zone there was no dependence on the asymptotics of the non-exponential factor V (t)). Thus, in this sense, we will be dealing with ‘transi-tional’ asymptotics, whose place is between the asymptotics of Chapters 2–5 andthe asymptotics (6.1.15)–(6.1.17) in the Cramer deviation zone.

In what follows, we will consider the class ER of distributions (or, equiva-lently, functions of the form (6.1.20)) with the property that the function V (t)in (6.1.20) belongs to the class R of regularly varying functions. The class ERwill be referred to as the class of regularly varying exponentially decaying distri-butions. We will distinguish between two subclasses of ER, for which one has,respectively,

∞∫0

t2V (t) dt = ∞ (6.1.23)

and∞∫0

t2V (t) dt < ∞. (6.1.24)

Page 339: Asymptotic analysis of random walks

308 Random walks with exponentially decaying distributions

For studying these one will have to use the results of Chapter 3 and Chapter 4,respectively. Functions V ∈ R of indices from the intervals (−2,−3) corre-spond to the subclass (6.1.23), (6.1.21) whereas functions of indices less than −3correspond to the subclass (6.1.24).

One can also obtain results, similar to (but, unfortunately, technically morecomplicated than) those to be derived below in §§ 6.2 and 6.3, for the class ESe ofdistributions with the property that the function V in (6.1.20) is from the class Se

of semiexponential functions. In this case one should make use of integro-localtheorems.

As mentioned in the Introduction, a companion volume to the present text willbe devoted to a more detailed study of random walks with distributions fast de-caying at infinity.

6.2 Integro-local theorems for sums Sn of r.v.’s with distributions from the

class ER when the function V (t) is of index from the interval (−1,−3)

In the previous section, we used integro-local theorems in the normal deviationzone and the Cramer transform to obtain theorems of the same type in the wholeCramer deviation zone. Now we will turn to integro-local theorems on the bound-ary and outside the Cramer zone, where the approaches presented in § 6.1 do notwork.

In this section, we will consider the possibility (6.1.23), i.e. the case when thedistribution of ξ has the form

P(ξ � t) = e−λ+tV (t), λ+ > 0, (6.2.1)

where

V (t) = t−α−1L(t), (6.2.2)

α ∈ (0, 2) and L is an s.v.f. (For reasons that will become clear later on it isconvenient in the section to denote the index of the r.v.f. V (t) by −α − 1.)

Formally, the case α ∈ (0, 1) is not related to the group of distributions withλ+ < ∞ and θ+ < ∞, which was specified by (6.1.19), since, for such indices,

∞∫1

tV (t) dt =

∞∫1

t−αV (t) dt = ∞

and therefore ϕ′(λ+) = ∞, θ+ = ϕ′(λ+)/ϕ(λ+) = ∞. This means that we canuse the approaches discussed in § 6.1 to study the probabilities P(Sn � x) withx ∼ cn for any fixed c. It turns out, however, that the approaches presented in§ 6.2.2 below (first and foremost, this refers to the use of the Cramer transformat the fixed point λ+) enable one to study the asymptotics of P(Sn ∈ Δ[x))when x � n also (such deviations x could be called super-large; under Cramer’s

Page 340: Asymptotic analysis of random walks

6.2 Integro-local theorems: the index of V is from (−1,−3) 309

condition, along with normal deviations O(√

n ), one usually distinguishes be-tween moderately large deviations, when

√n � x = o(n), and the ‘usual’ large

deviations x ∼ cn).

6.2.1 Large deviations on the boundary of the Cramer zone: case α ∈ (1, 2)

Now we return to the case α ∈ (1, 2) and put

y := x − nθ+. (6.2.3)

We will assume in this subsection that y = o(n) as n → ∞ (in fact, the rangeof the values of y to be considered will be somewhat narrower). This meansthat θ = x/n remains in the vicinity of the point θ+, i.e. next to the right-handboundary of the Cramer zone (θ−, θ+).

Applying the Cramer transform (6.1.1) at the fixed point λ+, we obtain that,as Δ → 0 (cf. (6.1.4)),

P(Sn ∈ Δ[x)

) ∼ ϕn(λ+)e−λ+xP(S(λ+)

n − θ+n ∈ Δ[y)). (6.2.4)

Now we will find out what kind of distribution will describe the jumps

ζ := ξ(λ+) − θ+. (6.2.5)

We have

P(ζ ∈ dt) =eλ+(t+θ+)

ϕ(λ+)P(ξ ∈ dt + θ+) =

λ+V (t + θ+) dt − dV (t + θ+)ϕ(λ+)

.

(6.2.6)From this and Theorem 1.1.4, one derives that

Vζ(t) := P(ζ � t) =λ+tV (t)αϕ(λ+)

(1 + o(1))

=λ+t−αL(t)αϕ(λ+)

(1 + o(1)) = t−αLζ(t), (6.2.7)

where

Lζ(t) ∼ λ+L(t)αϕ(λ+)

is clearly also an s.v.f.Let ζ1, ζ2, . . . be independent copies of the r.v. ζ. Since

Eζ = Eξ(λ+) − θ+ = 0

we obtain, by virtue of the results of § 1.5, that, as n → ∞, the distribution of thescaled sums Zn/bn, where

Zn =n∑

i=1

ζi, bn = V(−1)ζ (1/n) ∼

(λ+

αϕ(λ+)

)1/α

V (−1)(1/n), (6.2.8)

Page 341: Asymptotic analysis of random walks

310 Random walks with exponentially decaying distributions

will converge to the stable law Fα,1 with parameters (α, 1) (the left distribu-tion tail of ζ decays exponentially fast). Hence, owing to Theorem 6.1.2 onehas (6.1.9), where an = 0, bn = V

(−1)ζ (1/n) and f = fα,1 is the density of the

stable law Fα,1. This means that if Δ = Δn → 0 slowly enough then

P(Zn ∈ Δ[y)

)=

Δbn

fα,1

(y

bn

)+ o

(Δbn

).

In other words, when |y| = O(bn) (i.e. for n > c/Vζ(y) or when nVζ(y) decaysslowly enough) one has

P(Zn ∈ Δ[y)

) ∼ Δbn

fα,1

(y

bn

).

Returning to (6.2.4) and noting that

ϕn(λ+)e−λ+x = e−nΛ(θ+)−λ+y,

we obtain the following assertion.

Theorem 6.2.1. Assume that conditions (6.2.1), (6.2.2) with α ∈ (1, 2) are satis-fied. Then, in the non-lattice case, for y = x−θ+n = o(n) and for Δ = Δn → 0slowly enough as n → ∞, one has the representation

P(Sn ∈ Δ[x)

)=

Δe−nΛ(θ+)−λ+y

bn

[fα,1

(y

bn

)+ o(1)

], (6.2.9)

where the bn are defined in (6.2.8).In the arithmetic case, for integer-valued x,

P(Sn = x) =e−nΛ(θ+)−λ+y

bn

[fα,1

(y

bn

)+ o(1)

]. (6.2.10)

In both cases the remainder terms are uniform in x (in y) in the zone wherenVζ(y) � εn, εn → 0 slowly enough as n → ∞.

Remark 6.2.2. The function Λ(θ) is linear for θ � θ+:

Λ(θ) = Λ(θ+) + (θ − θ+)λ+, θ � θ+. (6.2.11)

Hence for y > 0 the argument of the exponent in (6.2.9), (6.2.10) can also bewritten in the form −nΛ(θ).

Corollary 6.2.3. In the non-lattice case, under conditions (6.2.1), (6.2.2), for anysequence εn → 0 slowly enough as n → ∞, one has

P(Sn ∈ Δ[x)

)=

e−nΛ(θ+)−λ+y

λ+bn

(1 − e−λ+Δ

)[fα,1

(y

bn

)+ o(1)

]uniformly in y, |y| < εnn and Δ � εn.

Page 342: Asymptotic analysis of random walks

6.2 Integro-local theorems: the index of V is from (−1,−3) 311

Proof. The assertion of the corollary follows from Theorem 6.2.1 owing to thecontinuity of the function fα,1 and the fact that, as n → ∞,

Δ∫0

e−λ+vfα,1

(y + v

bn

)dv =

1 − e−λ+Δ

λ+

[fα,1

(y

bn

)+ o(1)

]uniformly in y and Δ.

Thus, Theorem 6.2.1 and Corollary 6.2.3 describe the asymptotics of the prob-abilities P

(Sn ∈ Δ[x)

)when |y| = O

(V

(−1)ζ (1/n)

). We will need another

approach when y � V(−1)ζ (1/n).

6.2.2 Large deviations and super-large deviations outside the Cramer zone:

case α ∈ (0, 2)

In this subsection, we will assume that x and n are such that

y = x − nθ+ → ∞, y � σ(n) := V(−1)ζ (1/n),

where the tail Vζ(t) of the r.v. ζ = ξ(λ+) − θ+ is given in (6.2.7). Observe thatthe left tail of ζ decays at infinity exponentially fast, so that condition [<, =]with Wζ(t) = o(Vζ(t)), σ(n) = σ(n) always holds for ζ. Furthermore, underthe conditions of the present subsection, deviations y � σ(n) for Zn belong tothe large deviation zone and so the approach used in the previous subsection is nolonger applicable. Instead, to approximate P

(Zn ∈ Δ[y)

)we can now use the

integro-local theorems of § 3.7. This results in the following assertion.Recall that, in the case under consideration, we have θ = x/n > θ+, Eξ(λ+) =

θ+ and

λ(θ) = λ+, Λ(θ) = θλ+ − lnϕ(λ+) = Λ(θ+) + λ+y/n.

Theorem 6.2.4. Let conditions (6.2.1), (6.2.2) with α ∈ (0, 2) be met. Then, inthe non-lattice case, for Δ = Δy → 0 slowly enough as y → ∞, we have therepresentation

P(Sn ∈ Δ[x)

)= Δe−nΛ(θ) nλ+V (y)

ϕ(λ+)(1 + o(1)) (6.2.12)

as N → ∞, where the remainder term o(1) is uniform in y and n such thaty � max{N,n1/γ} for some fixed γ < α.

In the arithmetic case, the representation (6.2.12) holds for Δ = 1 and integer-valued x → ∞, provided that we replace the factor λ+ on the right-hand sideof (6.2.12) by 1 − e−λ+ .

Remark 6.2.5. Note that in this theorem, in contrast with Theorem 6.1.6 andCorollary 6.1.7, generally speaking, we did not assume that n → ∞.

Page 343: Asymptotic analysis of random walks

312 Random walks with exponentially decaying distributions

Proof of Theorem 6.2.4. It follows from (6.2.6) that, for any Δ = o(t),

P(ξ(λ+) − θ+ ∈ Δ[t)

)= ϕ−1(λ+)

[V (t + θ+) − V (t + θ+ + Δ) + λ+

t+θ++Δ∫t+θ+

V (u) du

]

= ϕ−1(λ+)[V (t + θ+) − V (t + θ+ + Δ) + λ+ΔV (t)(1 + o(1))

].

Since for Δ = o(t) one has

V (t + θ+) − V (t + θ+ + Δ) = o(V (t)

),

we obtain that the distribution of ζ = ξ(λ+) − θ+ has the property that

P(ζ ∈ Δ[t)

)=

λ+V (t)ϕ(λ+)

[(1 + o(1))Δ + o(1)

]. (6.2.13)

In other words, the tail Vζ(t) := P(ζ � t) (see (6.2.7)) satisfies condition [D(1,q)]from § 3.7 in the form (3.7.2):

Vζ(t) − Vζ(t + Δ) = V1(t)[(1 + o(1))Δ + o

(q(t)

)]with

q(t) ≡ 1, V1(t) =λ+V (t)ϕ(λ+)

, Vζ(t) ∼ tλ+V (t)αϕ(λ+)

. (6.2.14)

Hence we can use Theorem 3.7.1. Applying the theorem with γ0 = 0, we obtainthat, for any Δ � c, Δ = o(y), one has

P(S(λ+)

n − θ+n ∈ Δ[y))

=nλ+V (y)ϕ(λ+)

(1 + o(1))Δ (6.2.15)

as N → ∞, where the remainder term o(1) is uniform in y, n and Δ satisfyingthe inequalities y � max{N,n1/γ} for some fixed γ < α and c � Δ � yεN foran arbitrary function εN ↓ 0 as N ↑ ∞. Since c is arbitrary, the relation (6.2.15)will still be true for Δ = Δy → 0 slowly enough. It only remains to use (6.2.4).

The demonstration in the lattice case is quite similar. The theorem is proved.

Corollary 6.2.6. Assume that (6.2.1), (6.2.2) hold true. In the non-lattice case,for any Δ � Δy , where Δy → 0 slowly enough as y → ∞, one has

P(Sn ∈ Δ[x)

)=

e−nΛ(θ)nV (y)ϕ(λ+)

(1 − e−λ+Δ

)(1 + o(1)). (6.2.16)

In particular, when Δ = ∞ we obtain

P(Sn � x) =e−nΛ(θ)nV (y)

ϕ(λ+)(1 + o(1)) (6.2.17)

Page 344: Asymptotic analysis of random walks

6.2 Integro-local theorems: the index of V is from (−1,−3) 313

as N → ∞, where the remainders o(1) are uniform in y � max{N,n1/γ} forany fixed γ < α, Δ � Δy .

In the arithmetic case, the relation (6.2.17) holds for integer-valued x → ∞.

Proof. The proof of Corollary 6.2.6 is almost obvious from Theorem 6.2.4. In-deed, in the non-lattice case, for small Δ one has (6.2.16) and, moreover, for anyfixed v > 0,

nΛ(θ +

v

n

)= nΛ(θ) + λ+v.

Therefore, for small Δ we have

P(Sn ∈ Δ[x + v)

)=

Δnλ+V (y + v)ϕ(λ+)

e−nΛ(θ)e−λ+v(1 + o(1)),

and so for arbitrary Δ � Δy

P(Sn ∈ Δ[x)

)=

nλ+e−nΛ(θ)

ϕ(λ+)

Δ∫0

V (y + v)e−λ+vdv(1 + o(1)

)= ϕ−1(λ+)nV (y)e−nΛ(θ)

(1 − e−λ+Δ

)(1 + o(1)), (6.2.18)

where the required uniformity of o(1) follows from Theorem 6.2.4 and obviousuniform bounds for the integral in (6.2.18). This proves (6.2.16) and (6.2.17). Inthe lattice case, the assertion (6.2.17) follows directly from Theorem 6.2.4. Thecorollary is proved.

6.2.3 Super-large deviations in the case α ∈ (0, 1)

As already noted, in the case α ∈ (0, 1) one has θ+ = ∞ and therefore theprobabilities of deviations of the form cn for any fixed c can be obtained withinthe framework of § 6.1. It turns out that, using the same approaches as in §§ 6.2.1and 6.2.2, in the case α ∈ (0, 1) one can also find the asymptotics of P(Sn � x)for x � n.

For Δ → 0 one has (cf. (6.1.4), (6.2.4))

P(Sn ∈ Δ[x)

) ∼ ϕn(λ+)e−λ+xP(S(λ+)

n ∈ Δ[y)), (6.2.19)

where ζ := ξ(λ+) follows the distribution

P(ζ ∈ dt) =eλ+t

ϕ(λ+)P(ξ ∈ dt) =

λ+V (t) dt − dV (t)ϕ(λ+)

,

so that

Vζ(t) := P(ζ � t) =λ+tV (t)αϕ(λ+)

(1 + o(1)) = t−αLζ(t), Lζ(t) ∼ λ+L(t)αϕ(λ+)

.

Page 345: Asymptotic analysis of random walks

314 Random walks with exponentially decaying distributions

As in § 6.2.1, it follows from this that the scaled sums Zn/bn (see (6.2.8)) con-verge in distribution to the stable law Fα,1 with parameters (α, 1). Therefore, byvirtue of Theorem 6.1.2, for Δ = Δn → 0 slowly enough,

P(Zn ∈ Δ[x)

)=

Δbn

fα,1

(x

bn

)+ o

(Δbn

),

where fα,1 is the density of the distribution Fα,1 (recall that in the case of con-vergence to a stable law with α < 1, no centring of the sums Zn is needed).Returning to (6.2.19), we obtain, as in § 6.2.1, the following assertion.

Theorem 6.2.7. Let conditions (6.2.1), (6.2.2) with α ∈ (0, 1) be satisfied and letx � n. Then, in the non-lattice case, for Δ = Δn → 0 slowly enough as n → ∞one has

P(Sn ∈ Δ[x)

)=

Δbn

ϕn(λ+)e−λ+x

[fα,1

(x

bn

)+ o(1)

],

where the bn were defined in (6.2.8).In the arithmetic case, for integer-valued x,

P(Sn = x) =1bn

ϕn(λ+)e−λ+x

[fα,1

(x

bn

)+ o(1)

].

Corollary 6.2.8. In the non-lattice case, as n → ∞,

P(Sn � x) =ϕn(λ+)e−λ+x

λ+bn

[fα,1

(x

bn

)+ o(1)

].

The above assertions could be made uniform in the same way as in § 6.2.1. Theygive exact asymptotics of the desired probabilities only when x is comparablewith bn ∼ n1/αLb(n), where Lb is an s.v.f. If x grows at a faster rate then thefollowing analogue of Theorem 6.2.4 should be used.

Let γ < α be an arbitrary fixed number.

Theorem 6.2.9. Let conditions (6.2.1), (6.2.2) with α ∈ (0, 1) be satisfied. Then,as N → ∞, for x � max{N,n1/γ} and Δ = Δx → 0 slowly enough, in thenon-lattice case one has

P(Sn ∈ Δ[x)

)= Δϕn−1(λ+)e−λ+xnλ+V (x)(1 + o(1)). (6.2.20)

In the arithmetic case, the assertion (6.2.20) remains true for integer-valued x

and Δ = 1, provided that we replace the factor λ+ on the right-hand sideof (6.2.20) by 1 − e−λ+ .

Corollary 6.2.10. Under conditions (6.2.1), (6.2.2), for α ∈ (0, 1), N → ∞ andx � max{N,n1/γ} one has

P(Sn � x) = ϕn−1(λ+)e−λ+xnV (x)(1 + o(1)).

We will omit the proofs of Theorem 6.2.9 and Corollary 6.2.10 as they are com-pletely analogous to the proofs of Theorem 6.2.4 and Corollary 6.2.6 respectively.

Page 346: Asymptotic analysis of random walks

6.3 Integro-local theorems: the finite variance case 315

6.3 Integro-local theorems for the sums Sn when the Cramer transform for

the summands has a finite variance at the right boundary point

The structure of this section is similar to that of § 6.2. First we will considerdeviations

y = x − nθ+ = O(√

n)

under the assumption that ϕ′′(λ+) < ∞ (here and elsewhere in the present chap-ter, by the derivatives at the end points λ± we understand the respective one-sidedderivatives) and then, in the second part of the section, turn to deviations

y �√

n lnn

under conditions (6.2.1), (6.2.2), α > 2. Note that in the first part of the sectionthe assumptions (6.2.1), (6.2.2) will not be needed.

6.3.1 Large deviations on the boundary of the Cramer zone

We will assume in this subsection that ϕ′′(λ+) < ∞ (under the assumptionsof § 6.2 this condition was not met). Approaches to studying the asymptoticsof P(Sn ∈ Δ[x)) in this case are close to those used in § 6.1 to prove Theo-rem 6.1.6. The difference is that here we will use the Cramer transform at thefixed point λ+ (as in § 6.2). For the r.v. ζ = ξ(λ+) − θ+ we have

Eeλξ(λ+)=

ϕ(λ + λ+)ϕ(λ+)

, Eζ = 0,

b := Eζ2 =ϕ′′(λ+)ϕ(λ+)

−(

ϕ′(λ+)ϕ(λ+)

)2

< ∞.

(6.3.1)

Theorem 6.3.1. Let ϕ′′(λ+) < ∞. Then, in the non-lattice case, for y = x −θ+n = o(n) and Δ = Δn → 0 slowly enough as n → ∞, one has

P(Sn ∈ Δ[x)

)=

Δe−nΛ(θ+)−λ+y

√2πnb

[e−y2/2bn + o(1)

], (6.3.2)

where the remainder o(1) is uniform in x (in y).In the arithmetic case, for integer-valued x one has

P(Sn = x) =e−nΛ(θ+)−λ+y

√2πnb

[e−y2/2bn + o(1)

].

Recall that, for θ � θ+,

λ(θ) = λ+, e−nΛ(θ) = e−nΛ(θ+)−λ+y = ϕn(λ+)e−λ+x.

The assertion of Theorem 6.3.1 extends, in the case ϕ′′(λ+) < ∞, the applica-bility zone of Theorem 6.1.6 up to the boundary values θ = x/n ∼ θ+. We alsohave the following analogue of Corollary 6.1.7.

Page 347: Asymptotic analysis of random walks

316 Random walks with exponentially decaying distributions

Corollary 6.3.2. In the non-lattice case, for any Δ � Δn, where Δn → 0 slowlyenough, one has

P(Sn ∈ Δ[x)

)=

e−nΛ(θ+)−λ+y−y2/2bn

λ+

√2πnb

(1 − e−λ+Δ

)(1 + o(1)) (6.3.3)

uniformly in y, |y| � c√

n.In particular, for Δ = ∞,

P(Sn � x) ∼ e−nΛ(θ+)−λ+y−y2/2bn

λ+

√2πnb

. (6.3.4)

In the arithmetic case, for integer-valued x one has

P(Sn � x) ∼ e−nΛ(θ+)−λ+y−y2/2bn

(1 − e−λ+)√

2πnb.

Corollary 6.3.2 follows from Theorem 6.3.1 in an obvious way.

Proof of Theorem 6.3.1. For the distribution of ζ = ξ(λ+) − θ+ we have rela-tions (6.3.1), which imply that b = Var ζ < ∞. Hence the distribution of thenormalized sums Zn = ζ1+· · ·+ζn converges to the normal law, and the integro-local theorem 6.1.2 holds true in the non-lattice case:

P(Zn ∈ Δ[y)

) ∼ 1√2πnb

(Δe−y2/2nb + o(1)

)(6.3.5)

uniformly in Δ ∈ [Δ1,Δ2]. It only remains to make use of (6.2.4).In the arithmetic case, the proof is similar. The theorem is proved.

6.3.2 Large deviations outside the Cramer zone

We will assume in this section that y = x− nθ+ � √n lnn. Such deviations are

large for Zn, and so the relations (6.3.5) are not meaningful for them. However,we can now use the integro-local theorems from § 4.7. As a result, we can provethe following assertion.

Theorem 6.3.3. Let conditions (6.2.1), (6.2.2) with α > 2 be satisfied, and let

y = x − nθ+ �√

n lnn.

Then, in the non-lattice case with Δ = Δy → 0 slowly enough as y → ∞, wehave

P(Sn ∈ Δ[x)

)= Δe−nΛ(θ) nλ+V (y)

ϕ(λ+)(1 + o(1)) (6.3.6)

as N → ∞, where the term o(1) is uniform in y and n such that y � N√

n lnn.In the arithmetic case, the assertion (6.3.6) holds true for integer-valued x,

Δ = 1, if we replace the factor λ+ on the right-hand side of (6.3.6) by 1− e−λ+ .

Remark 6.2.5 following Theorem 6.2.4 is valid for the above theorem as well.

Page 348: Asymptotic analysis of random walks

6.3 Integro-local theorems: the finite variance case 317

Proof. The proof of Theorem 6.3.3 essentially repeats that of Theorem 6.2.4.Under the new conditions, we still have the representation (6.2.13), and hencecondition [D(1,q)] of § 3.7 holds in the form (3.7.2), where q and V1 are givenin (6.2.14). Therefore the conditions of Theorem 4.7.1 are met and so, by virtueof that theorem, for Δ = Δy → 0 slowly enough as y → ∞, one has

P(Zn ∈ Δ[y)

)= ΔnV1(y)

(1 + o(1)

), V1(t) =

λ+V (t)ϕ(λ+)

,

as N → ∞, where the term o(1) is uniform in y and n such that y � N√

n lnn.It remains to make use of (6.2.4).

The proof in the lattice case is completely analogous. The theorem is proved.

Theorem 6.3.3 implies the following.

Corollary 6.3.4. Under the conditions of Theorem 6.3.3, for any Δ � Δy ,

P(Sn ∈ Δ[x)

)=

e−nΛ(θ)nV (y)ϕ(λ+)

(1 − e−λ+Δ

)(1 + o(1)).

In particular,

P(Sn � x) =e−nΛ(θ)nV (y)

ϕ(λ+)(1 + o(1))

as N → ∞, where the remainder o(1) is uniform in y � N√

n lnn. The assertionremains true in the lattice case as well.

Assertions that are close to the theorems and corollaries of §§ 6.2 and 6.3 wereobtained for narrower deviation zones in [27] (Lemmata 2, 3).

Remark 6.3.5. Observe that, under the assumptions of the present chapter, theintegro-local theorems could be obtained from the corresponding integral theo-rems (provided that the latter are available). Indeed, for y = x − θ+n > 0,

e−nΛ(θ) = e−nΛ(θ+)−λ+y,

and therefore P(Sn � x) decays in x exponentially fast (in the presence of other,not so quickly varying, factors). Hence, for any fixed Δ > 0, the asymptoticsof the probability P

(Sn ∈ Δ[x)

)will differ from that for P(Sn � x) just by

a constant factor (1 − e−λ+Δ). So the headings of §§ 6.2 and 6.3 reflect themethodological side of the subject rather than its essence, since the assertionsin these sections were obtained with the help of the integro-local theorems ofChapters 3 and 4.

Theorems 6.2.4 and 6.3.3 show that the integro-local theorems of §§ 3.7 and4.7, together with the Cramer transform, are effective tools for studying largedeviation probabilities outside the Cramer zone.

Page 349: Asymptotic analysis of random walks

318 Random walks with exponentially decaying distributions

In concluding this section we observe that, under the conditions assumed init, one can also obtain asymptotic expansions in the integro-local and integraltheorems, in a way similar to that used in § 4.4 (see also [64]).

6.4 The conditional distribution of the trajectory {Sk} given Sn ∈ Δ[x)

The present section is an analogue of § 4.9. For F ∈ ER, the conditional distribu-tion of the trajectory {Sk; 1 � k � n} given Sn � x and that given Sn ∈ Δ[x)will not differ much from each other. So, for convenience of exposition, we willrestrict ourselves to conditioning on the events {Sn ∈ Δ[x)} only.

First observe that, in the Cramer deviation zone (when θ := x/n < θ+), incontrast with the results of § 4.9, for x � √

n → ∞ the process {x−1S�nt�;t ∈ [0, 1]} given Sn ∈ Δ[x) converges in distribution to the deterministic pro-cess {ζ1(t) ≡ t; t ∈ [0, 1]}. This is the ‘first-order approximation’ in trajectoryspace. The ‘second-order approximation’ states that the (conditional) process{n−1/2(S�nt� − xt); t ∈ [0, 1]} given Sn ∈ Δ[x) converges in distribution to theBrownian bridge process {w(t) − tw(1)}, where {w(t)} is the standard Wienerprocess (see [45, 47]). In the present section, we will show that, outside theCramer deviation zone, when y = x − θ+n → ∞, we have for the conditionaldistributions of the sums {Sk} of r.v.’s with distributions from the class ER a pic-ture that is intermediate between two absolutely different behaviour types, thatdescribed above and that established in § 4.9.

As in § 4.9, let E(·) denote a step process on [0, 1]:

E(t) :={

0 for t � ω,1 for t > ω,

where ω is an r.v. uniformly distributed over [0, 1].

Theorem 6.4.1. Let conditions (6.2.1), (6.2.2) be satisfied, Δ > 0 be an arbitraryfixed number, n → ∞ and one of the following two conditions hold true:

(1) α ∈ (1, 2), y = x − θ+n � n1/γ , where γ < α is some fixed number;

(2) α > 2, y = x − θ+n � √n lnn.

Then the conditional distribution of the process{y−1(S�nt� − θ+nt); t ∈ [0, 1]

}given the event Sn ∈ Δ[x) will converge weakly in D(0, 1) to the distributionof E(t).

Proof. We will present only a sketch of the proof. It is not difficult to give a moredetailed and formal argument, using [45, 47] and Chapters 3 and 4.

The Cramer transform of the distribution of the sequence S1, . . . , Sn at thepoint λ+ has the property that the distribution of the process {S�nt�} in the zone

of deviations of order x coincides with the distribution of {S(λ+)�nt� − θ+�nt } (up

Page 350: Asymptotic analysis of random walks

6.5 The probability of the crossing of a remote boundary 319

to a factor ϕn(λ+)e−λ+x, cf. (6.2.4), which then disappears when we switch tothe conditional distribution given Sn ∈ Δ[x)). The conditional distribution ofthe process {S(λ+)

�nt� − θ+�nt } given Sn ∈ Δ[x) coincides with the conditionaldistribution of

Z�nt� := S(λ+)�nt� − θ+�nt

given Zn ∈ Δ[y), where y = x−θ+n. Now, the r.v.’s ζ = ξ(λ+)−θ+ have a reg-ularly varying right tail Vζ and an exponentially fast decaying left tail. Hence wecan make use of Theorems 4.9.2 and 4.9.4 to describe the conditional distributionof Z�nt� given Zn ∈ Δ[y). Therefore, we will obtain the conditional behaviourof the original trajectory S�nt� by taking the sum of the trajectory θ+t and thetrajectory described in Theorems 4.9.2 and 4.9.4. This completes the proof of thetheorem.

The assertion of Theorem 6.4.1 means that the trajectory n−1S�nt� given theevent Sn ∈ Δ[x) can be represented as

n−1S�nt� = θ+t + yn−1E(t) + rn(t),

where rn = O(n−1b(n)) and

b(n) =

{V

(−1)ζ (1/n) when α ∈ (1, 2),√n when α > 2.

As in § 4.9, one could also obtain here an approximation of higher order by deter-mining the behaviour of the remainder rn. Namely, as in Theorem 4.9.3, one canshow that conditioning on Sn ∈ Δ[x) results in the weak convergence

1b(n)

[S�nt� − θ+nt − yEωn

(t)]⇒ ζ(t) − ζ(1)E(t),

where the ωn have the same meaning as in Theorem 4.9.3, but refer to the r.v.’s ζi,

and ζ(t) is the corresponding stable process; {ζ(t)} and {E(t)} are independent.

6.5 Asymptotics of the probability of the crossing of a remote boundary by

the random walk

6.5.1 Introduction

As in Chapters 3 and 4, we will study here the asymptotics of the probabil-ity P(Gn) of the crossing of a remote boundary g(·) = {g(k); k = 1, . . . , n}by the trajectory {Sk; k = 1, . . . , n}:

Gn :={

maxk�n

(Sk − g(k)) � 0}

.

Without losing generality, we will assume in this section that Eξ = 0.

We have a situation similar to the one that we had when studying the asymp-totics of P(Sn � x). When the boundary g(·) lies, roughly speaking, in the

Page 351: Asymptotic analysis of random walks

320 Random walks with exponentially decaying distributions

Cramer deviation zone, the asymptotics of P(Gn) have been studied fairly thor-oughly (see e.g. [36, 37, 38, 44]). If, for instance, g(n) = x, g(k) = ∞ fork < n,

θ =x

n< θ+ ≡ ϕ′(λ+)

ϕ(λ+)and λ+ = sup{λ : ϕ(λ) < ∞}

then for P(Gn) = P(Sn � x) we obtain the asymptotics (6.1.16), (6.1.17).The asymptotic behaviour of P(Sn � x) and also that of the joint distributionP(Sn � x, Sn < x − y) for such x values was studied in detail in [33, 34, 37](for lattice r.v.’s ξ, and also when the distribution of ξ has an absolutely continuouscomponent).

In the general case, first assume for simplicity that the boundary g(·) has theform

g(k) = xf(k/n), k = 1, . . . , n, inf0�t�1

f(t) > 0, (6.5.1)

where f(t), t ∈ [0, 1], is a fixed piecewise continuous function. As shown in [36],a determining role for the behaviour of the probabilities P(Gn) is played by thelevel lines pθ(t), t ∈ [0, 1], pθ(1) = θ, which are given implicitly as solutions tothe equation

tΛ(pθ(t)/t) = Λ(θ), t ∈ [0, 1], (6.5.2)

for fixed values of the parameter θ. The relation (6.5.2) is obtained by equating theexpressions kΛ(pθ(k/n)n/k) and nΛ(θ). Owing to the well-known logarithmicasymptotics [80]

− lnP(Sk � y) ∼ kΛ(y/k) as k → ∞, (6.5.3)

the above values are the principal terms of the asymptotics for the quantities

− lnP(Sk � pθ(k/n)n

)and − lnP(Sn � θn)

respectively (cf. (6.1.16), (6.1.17)). Therefore, any two points on a given levelline pθ(t), t ∈ [0, 1], have the property that, for a suitably scaled random walk (forwhich both the time and the space coordinates are ‘compressed’ n times each), theprobabilities that these points will be reached (in the respective times) are roughlythe same.

As was shown in [36], the functions pθ(·) are concave and θt � pθ(t) � θ

for t ∈ [0, 1]. Moreover, in the general case, on the left of the point

tθ := min{

1,Λ(θ)Λ(θ+)

}(6.5.4)

the function pθ is linear with a slope coefficient, whose value does not dependon θ:

pθ(t) = λ−1+ Λ(θ) + g+t, t ∈ [0, tθ], g+ := λ−1

+ lnϕ(λ+), (6.5.5)

Page 352: Asymptotic analysis of random walks

6.5 The probability of the crossing of a remote boundary 321

whereas on the right of tθ this function is analytic with a derivative continuousat tθ. The deviation function Λ(θ) increases for θ > 0, so that for θ � θ+ onehas tθ = 1 owing to (6.5.4), and hence (6.5.5) holds for all t ∈ [0, 1]. This alsoimplies that pθ(t) is an increasing function of θ for θ > 0. Moreover, it is obviousfrom (6.5.2) and (6.5.4) that

t ∈ [0, tθ) iff pθ(t)/t > θ+. (6.5.6)

Now let θ∗ > 0 = Eξ be the minimum value of θ, for which a curve fromthe family {pθ(·); θ > 0} will ‘touch’ the scaled boundary xn−1f(·); cf. (6.5.1).Then, using the well-known logarithmic asymptotics (6.5.3) and the crude bounds

maxk�n

P(Sk � xf(k/n)

)� P(Gn) �

n∑k=1

P(Sk � xf(k/n)

)� n max

k�nP(Sk � xf(k/n)

)(or the results of [36]), one can easily verify that, as n → ∞,

lnP(Gn) ∼ −nΛ(θ∗). (6.5.7)

In other words, the probability that the random walk will cross the boundary(6.5.1) has the same logarithmic asymptotics as the probability that the randomwalk will be above the point g(k) = xf(k/n) = npθ∗(k/n) at the time k :=�nt∗ , where

t∗ := inf{t > 0 : pθ∗(t) = xn−1f(t)}is the point where the level lines ‘touch’ the scaled boundary for the first time.

In the case when t∗ lies in the ‘regularity interval’ (tθ∗ , 1) (i.e. when we arewithin the Cramer deviation zone), the exact asymptotics of P(Gn) were obtainedin [36] under rather broad assumptions on f .

In the present chapter, we are dealing with an alternative situation,

t∗ ∈ (0, tθ∗), (6.5.8)

and assuming, as in §§ 6.2 and 6.3, that

λ+ < ∞, ϕ(λ+) < ∞. (6.5.9)

In this case, the nature of the asymptotics of P(Gn) will be quite different fromthose in the case when the boundary g(·) lies within the Cramer deviation zone.Obtaining the asymptotics of P(Gn) in the case (6.5.8) is more difficult than inthe Cramer case t∗ ∈ (tθ∗ , 1); moreover, in the present situation there exists nocommon (universal) law for P(Gn).

As we have already noted, most often one encounters in applications bound-aries that belong to one of the following two main types:

(1) g(k) = x + gk, k � 1, where the sequence {gk} depends neither on x noron n;

Page 353: Asymptotic analysis of random walks

322 Random walks with exponentially decaying distributions

(2) g(k) = xf(k/n), where f(t) > 0 is a function on [0, 1] that depends neitheron x nor on n.

In this section, we will somewhat change the form in which we represent theboundaries {g(k)} and will write

g(k) = x + gk,x,n, k = 1, 2 . . . ;

this includes representations (1) and (2) as special cases. In addition, in each ofthe typical cases to be dealt with below we will impose on {gk,x,n} a few specificconditions.

Recall that

g+ =1

λ+lnϕ(α+) > 0 (6.5.10)

(because ϕ′(0) = Eξ = 0 and ϕ(λ) is a convex function). The above-mentioned‘typical cases’ of boundaries refer to situations where the level lines first touchthe boundary at the left endpoint, at the right endpoint, or at a middle point, re-spectively. They are characterized by the following sets of conditions [A � ].

[A0] For some fixed k0 � 1 and γ > 0,

gk,x,n � (g+ + γ)k for k > k0 (6.5.11)

for all large enough x and n, and there exists a fixed number a (independent of x

and n) such that, for any fixed k � 1,

gk,x,n → ak as x, n → ∞. (6.5.12)

Note that, owing to (6.5.11), a � g+ + γ necessarily holds and also that condi-tion (6.5.12) actually stipulates convergence on an initial segment of the boundary{gk,x,n; k = 1, . . . , n} (the relation (6.5.12) holds uniformly in k � N, wherethe value N = N(x, n) → ∞ grows slowly enough as x, n → ∞, so that for,say, k >

√n the values of gk,x,n could be quite far from ak). Condition (6.5.12)

could be referred to as asymptotic local linearity of the boundary at zero.An example of a boundary for which condition [A0] is met is given by a bound-

ary of the form (6.5.1) with x ∼ cn, f(0) = 1 and f(t) − f(0) � c−1(g+ + γ)tand such that the function f(t) has a finite right derivative f ′(0) > c−1(g+ + γ)at t = 0 (in this case a = cf ′(0)).

The next condition has a similar form but refers to the terminal segment of theboundary {gk,x,n; k = 1, . . . , n}.

[An] For some fixed k0 � 1 and γ > 0,

gn−k,x,n � −(g+ − γ)k for k > k0

Page 354: Asymptotic analysis of random walks

6.5 The probability of the crossing of a remote boundary 323

for all large enough x and n, gn,x,n = 0 and there exists a fixed value b (indepen-dent of x and n) such that, for any fixed k � 1,

gn−k,x,n → −bk as x, n → ∞. (6.5.13)

Remarks similar to those we made regarding condition [A0] apply to [An] aswell (in particular, that b � g+ − γ).

The case when the level lines first touch the boundary inside the temporal in-terval is described by the following combination of the above conditions.

[Am] There exists a sequence m = m(n) → ∞ such that n − m → ∞,

gm,x,n = 0 and the sequences

{gm+k,x,n; k � 1} and {gm−k,x,n; k � 1}possess the properties stated in conditions [A0] and [An] respectively.

Observe that, in the above-listed cases, the level lines pθ∗(t) ‘touch’ the bound-ary n−1g(�nt ) at a non-zero angle. To get an idea of what happens in the caseof a smooth contact of these lines, we will consider the following condition:

[A0,n] g(k) = x + g+k, k = 1, . . . , n.

The analysis of the cases [A � ] below will show how diverse the asymptoticbehaviour of P(Gn) can be. Including in our considerations the general case of‘smooth contact’ of the level lines with the boundary would lead to an even widervariety of different types of asymptotic behaviour and to much more complicatedproofs. At the same time, as we will see, under conditions [A � ] the case of anarbitrary boundary can often be reduced to that of a linear boundary.

6.5.2 Boundaries under condition [A0]

In this case, the boundary g(·) increases so fast that the most likely scenario forits crossing is that the random walk exceeds the boundary at the very beginningof the time interval. Hence the asymptotics of P(Gn) will be the same as thatof P(G∞), the probability that the boundary g(·) will ever be crossed.

Prior to stating the main result observe that, when a > g+, for the random walk

Sk(a) := Sk − ak, k = 0, 1, . . . ,

generated by the r.v.’s ξj(a) := ξj − a, we have Eξj(a) := ξj − a < −g+ < 0(see (6.5.10)) and therefore

S(a) = supk�0

Sk(a) < ∞ a.s. (6.5.14)

Hence the first ladder epoch

ηa := η(0+, a) = inf{k � 1 : Sk(a) > 0}

Page 355: Asymptotic analysis of random walks

324 Random walks with exponentially decaying distributions

is an improper r.v., P(ηa < ∞) < 1.

Since any r.v.f. is an upper-power function (see Definition 1.2.20, p. 28), thefollowing assertion holds true for a distribution class that is wider than ER.

Theorem 6.5.1. Assume that the following conditions are met:

F+(t) = e−λ+tV (t), ϕ(λ+) < ∞, V (t) is upper-power. (6.5.15)

If condition [A0] holds then, as x → ∞ and n → ∞,

P(Gn) ∼ c(a)F+(x), (6.5.16)

where

c(a) :=1 − P(ηa < ∞)(

eλ+a − e−λ+g+)(

1 − E[exp{λ+Sηa(a)}; ηa < ∞]) . (6.5.17)

We will need two lemmata to prove the theorem. The first follows immediatelyfrom Theorem 12, Chapter 4 of [42].

Lemma 6.5.2. If (6.5.15) holds and

ϕ(a)(λ+) := Eeλ+ξ(a) < 1

then, as x → ∞,

P(S(a) � x) ∼ c(a)F+(x),

where c(a) is given in (6.5.17).

Lemma 6.5.3. Assume that (6.5.15) holds true. Then, for any a > g+, we haveϕ(a)(λ+) = e−λ+(a−g+) < 1 and, for any fixed N � 1, as x → ∞,

P(SN (a) � x) ∼ NϕN−1(a) (λ+)F+(x) (6.5.18)

and

P(

supk>N

Sk(a) � x

)� cNϕN

(a)(λ+)F+(x)(1 + o(1)). (6.5.19)

Proof of Lemma 6.5.3. From the definition of g+ = λ−1+ lnϕ(λ+) we have

ϕ(a)(λ+) = e−λ+aϕ(λ+) = e−λ+(a−g+) < 1 for a > g+.

Further, repeating the argument from Example 1.2.11 (p. 19), we can establishthat

P(S2(a) � x) ∼ 2ϕ(a)(λ+)F+(x) as x → ∞(see (1.2.11)). The demonstration of (6.5.18) is completed by induction, in exactlythe same way as in the above-mentioned computation in Example 1.2.11.

Now we will prove (6.5.19). Clearly

supk>N

Sk(a) d= SN+1(a) + S,

Page 356: Asymptotic analysis of random walks

6.5 The probability of the crossing of a remote boundary 325

where Sd= S(a) is an r.v. independent of SN+1(a). Owing to Lemma 6.5.2 and

the fact that V (t) is an l.c. function, we have

P(S � t) � cP(ξ � t) � c1P(ξ − a � t), t � 0,

whereas for t < 0 the above inequality is obvious. Therefore, by virtue of (6.5.18)

P(

supk>N

Sk(a) � x)

=∫

P(SN+1(a) ∈ dt)P(S � x − t)

� c1

∫P(SN+1(a) ∈ dt)P(ξ − a � x − t)

= c1P(SN+2(a) � x)

� c2(N + 2)ϕN+1(a) (λ+)F+(x)(1 + o(1)).

The lemma is proved.

Proof of Theorem 6.5.1. Fix an N � k0 and put

ε = εx,n(N) := maxk�N

|gk,x,n − ak|.

Clearly, for n � N ,

P(S(a) � x + ε

)− P(

supk>N

Sk(a) � x + ε)

� P(

supk�N

Sk(a) � x + ε)

� P(Gn) � P(

supk�N

Sk(a) � x − ε)

+ P(

supk>N

Sk(g+ + γ) � x)

� P(S(a) � x − ε) + cNe−λ+γNF+(x)(1 + o(1)),

owing to (6.5.19) and the obvious equality ϕ(g++γ)(λ+) = e−λ+γ . Again usingLemmata 6.5.2 and 6.5.3, we obtain, after dividing the left- and right-hand sidesof the previous relation by c(a)F+(x), the inequality

F+(x + ε)F+(x)

(1 + o(1)) − cNe−λ+γN

� P(Gn)c(a)F+(x)

� F+(x − ε)F+(x)

(1 + o(1)) + cNe−λ+γN ,

since a − g+ � γ. By choosing a large enough N one can make the termNe−λ+γN arbitrarily small.

To complete the proof of the theorem, it remains for us to notice that

F+(x ± ε)F+(x)

= e∓λ+ε V (x ± ε)V (x)

→ 1,

since, for any fixed N , one has that ε → 0 as x, n → ∞ owing to (6.5.12), andsince V (x) is an l.c. function.

Page 357: Asymptotic analysis of random walks

326 Random walks with exponentially decaying distributions

6.5.3 Boundaries under condition [An]

It follows from what we said in § 6.5.1 that, in the case [An], the ‘most likely’crossing time of the boundary g(·) (given that event Gn has occurred) will beclose to the right-end point n. Recall that [An] implies the inequality b � g+−γ.

Let

τk := −ξ(λ+)k + b, Tk := τ1 + · · · + τk. (6.5.20)

Clearly,

Eτ1 = b − θ+ < g+ − θ+ =1

λ+(lnϕ(λ+) − λ+θ+) = −Λ(θ+)

λ+< 0.

(6.5.21)Therefore

T := supk�0

Tk < ∞ a.s., (6.5.22)

and we have from (6.1.1) that

ϕτ (λ) := Eeλτ1 = eλb ϕ(λ+ − λ)ϕ(λ+)

and hence

ϕτ (λ+) = e−λ+(g+−b) � e−λ+γ < 1. (6.5.23)

Now put y := x − θ+n ≡ (θ − θ+)n.

Theorem 6.5.4. For α > 1, let

F+(t) = e−λ+tV (t), V (t) = t−α−1L(t), L(t) be an s.v.f., (6.5.24)

let condition [An] be met, n → ∞ and s > 0 be fixed. Then, for α ∈ (1, 2),uniformly in y � n1/α′

for any fixed α′ < α, one has

P(Gn; Sn � x − s)

= e−nΛ(θ) nV (y)ϕ(λ+)

(E[eλ+T ; T < s

]+ eλ+s P(T � s)

)(1 + o(1)) (6.5.25)

and

P(Gn) = e−nΛ(θ) nV (y)ϕ(λ+)

Eeλ+T (1 + o(1)), Eeλ+T < ∞. (6.5.26)

In the case α > 2, these relations hold uniformly in y � Nn

√n lnn for any fixed

sequence Nn → ∞.

Proof. To simplify the argument, we will confine ourselves to considering thespecial case of a linear boundary

g(k) = x − b(n − k), k = 1, . . . , n, b � g+ − γ. (6.5.27)

Page 358: Asymptotic analysis of random walks

6.5 The probability of the crossing of a remote boundary 327

The changes that need to be made in the proof in the case of general (asymptot-ically locally linear) boundaries satisfying condition [An] amount to adding anargument similar to that used in the proof of Theorem 6.5.1.

First we will show that, if the random walk does cross the linear boundary(6.5.27) during the time interval [0, n] then, with a high probability, this will occurat the very end of the interval. For j � n (the choice of j will be made later),introduce the event

Gn,j :={

maxn−j�i�n

(Si − g(i)) � 0}

.

On the one hand, it is obvious that

P(Gn) � P(Gn,j) � P(Sn � x).

On the other hand, for the event difference G′n,j := Gn \ Gn,j , we have from

Corollaries 6.2.6 and 6.3.4 the following bound:

P(G′n,j) �

n−j∑k=1

P(Sk � g(k)) � cnV (y)n−j∑k=1

e−kΛ(g(k)/k)

= cnV (y)e−nΛ(θ)

n−j∑k=1

exp{

nΛ(θ) − kΛ(

g(k)k

)}. (6.5.28)

Since both the argument values, θ and g(k)/k > θ+, of the function Λ in (6.5.28)lie, in the case under consideration, in the interval where this function is linear(see (6.2.11)), we conclude that the argument of the exponential on the right-handside of (6.5.28) is equal to

n

[λ+

x

n− lnϕ(λ+)

]− k

[λ+

x − b(n − k)k

− lnϕ(λ+)]

= λ+(b − g+)(n − k) � −λ+γ(n − k).

Therefore

P(G′n,j) � cnV (y)e−nΛ(θ)

n−j∑k=1

e−λ+γ(n−k) � c1e−λ+γj P(Sn � x),

by virtue of Corollaries 6.2.6 and 6.3.4. By choosing a large enough j, this prob-ability could be made arbitrarily small relative to P(Sn � x) and hence alsorelative to P(Gn) and P(Gn; Sn � x− s) � cP(Sn � x− s). This means thatwe only have to evaluate the probabilities P(Gn,j) and P(Gn,j ; Sn � x − s).

Next we observe that, for any s > 0, one has

P(Gn,j ; Sn � x − s) = P(Sn � x) +

0∫−s

P(Gn,j ; Sn ∈ x + du). (6.5.29)

In the range of x values with which we are dealing, one can make the Cramer

Page 359: Asymptotic analysis of random walks

328 Random walks with exponentially decaying distributions

transform according to (6.1.1) and switch to the random walk {Zi}, which wasintroduced in (6.2.5) and (6.2.8). Then, for u < 0, putting

sk := x1 + · · · + xk, k = 1, 2, . . . ,

and

A :={

(x1, . . . , xn) : maxn−j�k�n

(sk − g(k)) > 0, sn ∈ x + du}

,

we obtain the representation

P(Gn,j ;Sn ∈ x + du)

=∫A

P(ξ1 ∈ dx1, . . . , ξn ∈ dxn)

= ϕn(λ+)∫A

e−λ+snP(ξ(λ+)1 ∈ dx1, . . . , ξ

(λ+)n ∈ dxn)

= e−nΛ(θ)e−λ+uP(Hn,j |Zn = y + u)P(Zn ∈ y + du), (6.5.30)

where the event

Hn,j :={

maxn−j�i�n

(Zi − h(i)) � 0}

means that the random walk {Zi} crossed the boundary

h(i) := g(i) − θ+i = y + (θ+ − b)(n − i) =: y + hn,i, i = 1, . . . , n,

(6.5.31)during the time interval [n − j, n]. Note that the slope coefficient of this linearboundary is negative (see (6.5.21)).

Thus the integral in (6.5.29) takes the form

e−nΛ(θ)

0∫−s

e−λ+uP(Hn,j |Zn = y + u)P(Zn ∈ y + du). (6.5.32)

Since we have not assumed that V (t) is absolutely continuous (even for largeenough t), we will now have to approximate the last integral by a finite sum. Fixa Δ > 0 such that k := s/Δ is an integer, and put ym := y −mΔ, m = 1, 2, . . .

It is clear that the relative error of approximating the integral in (6.5.32) by thesum

Σ :=k∑

m=1

eλ+mΔP(Hn,j |Zn ∈ Δ[ym))P(Zn ∈ Δ[ym)) (6.5.33)

does not exceed

maxm�k

(eλ+mΔ − eλ+(m−1)Δ

)� λ+eλ+sΔ

and so could be made uniformly arbitrarily small by choosing a small enough Δ.

Page 360: Asymptotic analysis of random walks

6.5 The probability of the crossing of a remote boundary 329

Now consider the conditional probabilities

P(Hn,j |Zn ∈ Δ[ym))

= P(

maxn−j�i�n

(Zi − y − hn,i

)� 0

∣∣∣Zn ∈ Δ[ym))

� P(

maxn−j�i�n

(Zi − Zn − (m − 1)Δ − hn,i

)� 0

∣∣∣Zn ∈ Δ[ym))

= P(maxi�j

T(n)i � (m − 1)Δ

∣∣∣Zn ∈ Δ[ym)), (6.5.34)

where T(n)i is the time-reversed random walk with jumps τ

(n)i := τn−i+1:

T(n)i := τ

(n)1 + · · · + τ

(n)i = Tn − Tn−i, i = 0, 1, . . . , n.

Observe that the event under the last probability sign is expressed in terms ofthe r.v.’s ζn−j+1, ζn−j+2, . . . , ζn only (see (6.2.5), (6.2.8)), whereas their jointconditional distribution given Zn ∈ Δ[ym) will be close to the unconditionaldistribution of ζ1, . . . , ζj . Indeed, following the remark made at the beginningof § 4 in [45], for any fixed bounded Borel set B we have

P(ζn ∈ B|Zn ∈ Δ[ym))

=∫B

P(ζn ∈ du|Zn ∈ Δ[ym))

=∫B

P(ζn ∈ du)P(Zn−1 ∈ Δ[ym − u))

P(Zn ∈ Δ[ym))→ P(ζ1 ∈ B),

because the ratio of the probabilities in the last integrand converges to unity, owingto Corollaries 6.2.6 and 6.3.4. One can show in the same way that for any fixed j

the joint conditional distribution of the random vector (ζn−j+1, ζn−j+2, . . . , ζn)given Zn ∈ Δ[ym) tends to the unconditional distribution of (ζ1, . . . , ζj). There-fore

P(Hn,j |Zn ∈ Δ[ym)) � P(maxi�j

Ti � (m − 1)Δ)(1 + o(1)). (6.5.35)

We can establish in a similar way that

P(Hn,j |Zn ∈ Δ[ym)) � P(maxi�j

Ti � mΔ)(1 + o(1)). (6.5.36)

Note that

P(maxi�j

Ti � u)

= P(T � u) + εj , (6.5.37)

where εj = εj(u) → 0 as j → ∞ uniformly in u ∈ [0, s], since T < ∞ a.s.Now combining (6.5.35) and (6.5.37) and using Theorems 6.2.4 and 6.3.3, we

obtain for the sum (6.5.33) the upper bound

Σ � nλ+V (y)ϕ(λ+)

k∑m=1

Δeλ+mΔ[P(T � (m − 1)Δ) + εj

](1 + o(1)).

Page 361: Asymptotic analysis of random walks

330 Random walks with exponentially decaying distributions

Now, fixing an arbitrarily small δ > 0, one can always choose j large enough thatεj < δ and Δ small enough that the relative error of approximating the integralin (6.5.32) by the sum (6.5.33) and the relative error of the approximation

k∑m=1

Δeλ+mΔP(T � (m − 1)Δ) ≈s∫

0

eλ+uP(T � u) du

will each not exceed δ. Together with a similar argument for the lower bound(using (6.5.36) instead of (6.5.35)), this shows that, for j = j(n) → ∞ slowlyenough as n → ∞, the integral in (6.5.32) is asymptotically equivalent to

nV (y)ϕ(λ+)

s∫0

λ+eλ+uP(T � u) du

=nV (y)ϕ(λ+)

(E[eλ+T ; T < s

]+ eλ+sP(T � s) − 1

)(we have used integration by parts). Thereby the asymptotic behaviour of thesecond term on the right-hand side of (6.5.29) is established. Next we observethat the asymptotic behaviour of the first term on the right-hand side of (6.5.29)has already been established in Corollaries 6.2.6 and 6.3.4. Taking into accountthe remark we made (at the beginning of the proof of the theorem) concerning thepossibility of replacing Gn with Gn,j , the sum of these two asymptotics will givethe desired representation (6.5.25).

To prove (6.5.26), note that

P(Gn,j) = P(Gn,j ; Sn � x − s) + P(Gn,j; Sn < x − s),

where

P(Gn,j ; Sn < x − s) � P(Gn,j)P(mink�j

(Sk − b(k)) < −s)

and, for a fixed j, the second factor on the right-hand side of this inequality couldbe made arbitrarily small by choosing a large enough s. Again turning to theremark about replacing Gn with Gn,j , we conclude that the desired result willfollow immediately from (6.5.25) once we have shown that

Eeλ+T < ∞, (6.5.38)

since in this case

E[eλ+T ; T < s

]+ eλ+sP(T � s) → Eeλ+T < ∞ as s → ∞.

To prove (6.5.38), observe that from (6.5.23) and Chebyshev’s inequality wehave

P(T � t) �∑k�0

P(Tk � t) � e−λ+t∑k�0

ϕkτ (λ+) =

e−λ+t

1 − e−λ+γ,

Page 362: Asymptotic analysis of random walks

6.5 The probability of the crossing of a remote boundary 331

and hence EeλT < ∞ for any λ ∈ [0, λ+). By the monotone convergence theo-rem, it remains to show that limλ↑λ+ EeλT < ∞. Now, the last relation followsfrom the factorization identity (see e.g. § 18 of [42] or § 3, Chapter 11 of [49])

EeλT = w(λ)/(1 − ϕτ (λ)), (6.5.39)

where w(λ) is a function that is analytic in the half-plane Re λ > 0 and continuousfor Re λ � 0. Indeed, we see from (6.5.23) that the right-hand side of (6.5.39)is analytic for Re λ ∈ (0, λ+) and continuous in the closed strip Re λ ∈ [0, λ+].Thus we have proved (6.5.38).

The uniform smallness of the remainder terms in (6.5.25), (6.5.26) follows fromthe uniform smallness in the assertions of Theorems 6.2.4 and 6.3.3 and theircorollaries. Theorem 6.5.4 is proved.

6.5.4 Boundaries under condition [Am]

In this case, the ‘most likely’ time of the crossing of the boundary g(·) by therandom walk is in the vicinity of m.

Theorem 6.5.5. Let conditions (6.5.24) and [Am] be satisfied, n → ∞, and let

θ := x/m, y := x − θ+m ≡ (θ − θ+)m.

Then, for α ∈ (1, 2), uniformly in y � n1/α′for any fixed α′ < α,

P(Gn) = e−mΛ(θ) mV (y)ϕ(λ+)

Eeλ+ max{T , S(a)}(1 + o(1)),

where T and S(a) are independent copies of the r.v.’s defined in (6.5.22) and(6.5.14) respectively.

In the case α > 2, the above representation for the probability P(Gn) holdsuniformly in y � Nn

√n lnn for any fixed sequence Nn → ∞.

Proof. Using an argument similar to (6.5.30) but transforming the distributionsof the first m r.v.’s ξi only, we obtain for u < 0 and

A :={

maxk�n

(sk − g(k)) � 0, sm ∈ x + du}

the representation

P(Gn; Sm ∈ x + du)

= ϕm(λ+)∫A

e−λ+smP(ξ(λ+)1 ∈ dx1, . . . , ξ

(λ+)m ∈ dxm,

ξm+1 ∈ dxm+1, . . . ξn ∈ dxn

)= e−mΛ(θ)e−λ+uP(Hm ∪ Gm,n,u|Zm = y + u)P(Zm ∈ y + du),

(6.5.40)

Page 363: Asymptotic analysis of random walks

332 Random walks with exponentially decaying distributions

where the events

Hm :={

maxk�m

(Zk − h(k)) � 0}

, h(k) := g(k) − θ+k, k = 1, . . . ,m,

and

Gm,n,u :={

maxk�n−m

[(Sm+k − Sm) − (g(m + k) − x)

]� −u

}are independent.

It is not difficult to see that, for a fixed Zm equal to y + u ≡ x − θ+m ≡g(m) − θ+m, the occurrence of the event Hm is equivalent to the occurrence ofthe event {

maxk�m

[(Tm − Tm−k) + g(m) − g(m − k) − bk

]� −u

}.

Using the argument from the proof of Theorem 6.5.4 we obtain that, for a fixedZm = y + u,

P(Hm|Zm) → P(T � −u) as n → ∞(here we again have to consider events of the form Gn,j and Hn,j in order toapproximate integrals by finite sums etc.). Thus the right-hand side of (6.5.40)takes the form

e−mΛ(θ)e−λ+u[P(max{T , S(a)} � −u

)+ o(1)

]P(Zm ∈ y + du).

The rest of the proof follows the argument establishing Theorem 6.5.4.

6.5.5 Boundaries under condition [A0,n]

The case [A0,n] adjoins, in a certain sense, both [A0] and [An]. Recall that herewe are considering a boundary of the form

g(k) = g(0) + g+k, k � 1, (6.5.41)

and that for ξj(g+) := ξj − g+ we have ϕ(g+)(λ+) ≡ Eeλ+ξ1(g+) = 1.As was the case for [A0], in the case [A0,n] we can establish the asymptotics

of P(Gn) for a distribution class that is wider than ER.We will use the r.v.’s Sk(a) and ηa, which were introduced before Theorem 6.5.1

(see p. 323).

Theorem 6.5.6. Let F+(t) = e−λ+tV (t), where the function V (t) satisfies con-dition (6.1.21),

∞∫0

tV (t) dt < ∞.

If x → ∞, n → ∞ in such a way thatx

n+ g+ < θ+ − γ

Page 364: Asymptotic analysis of random walks

6.5 The probability of the crossing of a remote boundary 333

for a fixed γ > 0 then, for the boundary (6.5.41), we have

P(Gn) ∼ c0e−λ+x, (6.5.42)

where

c0 :=1 − P(ηg+ < ∞)

λ+E[Sηg+

(g+) exp{λ+Sηg+(g+)}; ηg+ < ∞] . (6.5.43)

In the case when x/n + g+ > θ+ − γ, the asymptotics of P(Gn) can dependon n.

Proof. We have

P(Gn) � P(S(g+) � x) � P(Gn) + P(supk>n

Sk(g+) � x).

The probability in the middle part of this relation was evaluated in Theorem 11,Chapter 4 of [40] and has the following form.

Lemma 6.5.7. Under the conditions of Theorem 6.5.5, we have ϕ′g+

(λ+) < 1and, as x → ∞,

P(Gn) ∼ c0e−λ+x,

where c0 is defined in (6.5.43).

To complete the proof of Theorem 6.5.6, it remains to demonstrate that, asx → ∞, n → ∞,

P(supk>n

Sk(g+) � x)

= o(e−λ+x

).

As before, let θ∗ = θ∗(x, n) be the minimum value of the parameter θ, forwhich a curve from the family of level lines {pθ(·); θ > 0} will ‘touch’ theboundary n−1g(�nt ) (although the level lines were introduced on p. 320 whenconsidering a boundary of the form xf(k/n), the concept does not depend onthe specific form of boundary and retains its meaning in the general case). Sincein the case under consideration λ+ < ∞ and θ+ < ∞ (cf. (6.1.19)) and henceΛ(θ+) < ∞, by virtue of (6.5.4) we have tθ > 0 for any fixed θ > 0. There-fore, there is a straight-line segment with a slope coefficient g+ at the beginningof any level line pθ(·). As the boundary (6.5.41) itself has a constant slope coef-ficient g+, and the level lines are concave, it is clear that the ‘contact’ of pθ∗(t)and n−1g(�nt ) takes place over an entire interval at the initial part of the bound-ary and, in particular, for t = 0 one has

x

n=

g(0)n

= pθ∗(0) =Λ(θ∗(x, n))

λ+. (6.5.44)

Further, from the property pθ(1) = θ, the concavity of the level lines and (6.5.5),it follows owing to the assumptions of the theorem that

θ∗ = pθ∗(1) � x

n+ g+ � θ+ − γ.

Page 365: Asymptotic analysis of random walks

334 Random walks with exponentially decaying distributions

Therefore, Λ(θ∗)/Λ(θ+) � 1 − c1, c1 > 0, so that by virtue of (6.5.4) wehave tθ∗ � 1 − c2, c2 > 0. From this it is easily seen that, for some c3 > 0and all k � n,

x

k+ g+ � θ∗(x, k) + c3

and hence

Λ(x/k + g+) � Λ(θ∗(x, k) + c3) > Λ(θ∗(x, k)) + c4, c4 > 0.

So, owing to inequality (6.1.18) and to (6.5.44), we have

P(supk>n

Sk(g+) � x)

�∑k>n

P(Sk � x + g+k)

�∑k>n

e−kΛ(x/k+g+) �∑k>n

e−kΛ(θ∗(x,k))−c4k

= e−λ+x∑k>n

e−c4k = O(e−λ+x−c4k

)= o

(e−λ+x

),

as required. The theorem is proved.

Page 366: Asymptotic analysis of random walks

7

Asymptotic properties of functions of regularlyvarying and semiexponential distributions.

Asymptotics of the distributions of stopped sumsand their maxima. An alternative approach to

studying the asymptotics of P(Sn � x)

The present chapter continues § 1.4, where we studied the asymptotic propertiesof functions of subexponential distributions. More precisely, assuming that ζ isan r.v. with a subexponential distribution and that g(λ) = Eeiλζ is its ch.f., westudied there asymptotic properties of the preimage A(x) corresponding to thefunction A(g(λ)

), where A(w) is a given function of a complex variable:

A(x) := A([x,∞)

),

∫eiλxA(dx) = A(g(λ)

).

We refer to the measure A as a function of the distribution of the r.v. ζ.In this chapter, we will identify ζ with the original r.v. ξ considered in Chap-

ters 1–6. (In Chapters 6 and 8 and in a number of other places as well, ζ is actu-ally identified with r.v.’s other than ξ, and this is why this notation was introducedin § 1.4.) Thus, in this chapter,

g(λ) = f(λ) ≡ Eeiλξ,

where ξ has a distribution F,

F+(t) = 1 − F([t,∞)) = V (t),

V is defined in (1.1.2) in the case when F ∈ R and in (5.1.1)–(5.1.3) (p. 233; seealso (1.2.28) and (1.2.29)) for F ∈ Se.

In the case when the function A(w) is analytic in the unit disk |w| < 1, includ-ing the boundary |w| = 1, and the distribution F is subexponential the asymp-totics of the preimage A(x) were studied in § 1.4. If, however, we assume that thedistribution F belongs to R or Se then one can broaden the conditions on A, andthe conditions we have to impose will now depend on the value of a := Eξ.

7.1 Functions of regularly varying distributions

In this section, we will assume only that

A(w) =∞∑

k=0

akwk,

335

Page 367: Asymptotic analysis of random walks

336 Asymptotic properties of functions of distributions

where the series∑∞

k=0 ak is absolutely convergent, and that there exists A′(1) =∑∞k=0 kak > 0. In some cases, we will also assume sufficiently fast (but not

exponentially fast) decay of the sequence

T (n) :=∞∑

k=n

ak → 0, n → ∞,

where T (n) > 0 for all large enough n. Furthermore, for a non-integer t we willput T (t) := T (�t ).

As in § 1.4, the main object of study in this section will be the asymptotics ofthe preimage A(x), which clearly admits a representation of the form

A(x) =∞∑

n=0

anP(Sn � x). (7.1.1)

Theorem 7.1.1. Let F ∈ R, x → ∞.

(i) If a = Eξ < 0 then, without any additional conditions,

A(x) ∼ A′(1)V (x). (7.1.2)

(ii) Let a = 0, α > 2 and Eξ2 < ∞ (we will assume without loss of generalitythat Eξ2 = 1). Then the following assertions hold true.

(ii1) If

T (x2) = o(V (x)

)(7.1.3)

then we have (7.1.2).(ii2) If

T ∈ R (7.1.4)

then

A(x) ∼ A′(1)V (x) + BT (x2), (7.1.5)

where

B = 2γ−1π−1/2Γ(γ + 1/2) (7.1.6)

and −γ < −1 is the index of the r.v.f. T .

(iii) Let a = 0, α ∈ (1, 2) and the condition [<, =] with W ∈ R be met.Further, let at least one of the following two conditions be satisfied:

(iii1)

W (t) � cV (t); (7.1.7)

(iii2)

W (t) � c1V (t), T

(1

W (x/ lnx)

)= o

(V (x)

). (7.1.8)

Then (7.1.2) holds true.

Page 368: Asymptotic analysis of random walks

7.1 Functions of regularly varying distributions 337

(iv) Let a > 0 and one of the following two conditions be satisfied:

(iv1)

T (x) = o(V (x)

); (7.1.9)

(iv2)

T ∈ R. (7.1.10)

Then, in the former case, (7.1.2) holds true. In the latter case,

A(x) ∼ A′(1)V (x) + T (x/a). (7.1.11)

The condition F ∈ R of Theorem 7.1.1 could be broadened by replacing R bywider distribution classes, described in § 4.8.

Proof. (i) First let a = Eξ < 0. Then, uniformly in n, as x → ∞,

P(Sn � x) = P(Sn − an � x − an) ∼ nV (x − an), (7.1.12)

so that for any k one has

P(Sk � x)V (x)

→ k as x → ∞,P(Sk � x)

kV (x)< c.

Therefore, by virtue of (7.1.1) and the dominated convergence theorem,

A(x)V (x)

∼∞∑

k=0

kak = A′(1).

This proves (7.1.2).

(ii) Now let a = 0, Eξ2 = 1. In this case, as x → ∞,

P(Sn � x) = nV (x)(1 + o(1)

)(7.1.13)

uniformly in n � n1 = n1(x) = �c1x2/lnx for some c1 ∈ (0, 1/[2(α − 2)]

)(see Remark 4.4.2 and Theorem 4.4.1). Represent the preimage A(x) from (7.1.1)as

A(x) =∞∑

n=0

anP(Sn � x) = Σ1 + Σ2 + Σ3, (7.1.14)

where

Σ1 :=∑

n<n1

, Σ2 :=∑

n1�n<n2

, Σ3 :=∑

n�n2

,

n2 = n2(x) = �c2(1+ ε)x2/lnx , c2 = 1/[2(α− 2)] and ε > 0. Then it followsfrom (7.1.13) that

Σ1 =∑

n<n1

nanV (x)(1 + o(1)

) ∼ A′(1)V (x).

Before we start evaluating the sums Σ2 and Σ3, observe that the arguments inthe subsequent proof of the theorem in the cases (ii1), (ii2) (see (7.1.3), (7.1.4))

Page 369: Asymptotic analysis of random walks

338 Asymptotic properties of functions of distributions

are completely analogous to each other, and that the proof in the former case caneasily be derived from that in the latter case. So we will consider only the secondsituation, T ∈ R. It follows from the uniform representation

P(Sn � x) = nV (x)(1 + o(1)) +(

1 − Φ(

x√n

))(1 + o(1)), (7.1.15)

valid for x >√

n (see p. 182), that we have

P(Sn � x) ∼ 1 − Φ(

x√n

)uniformly in n � n2. Hence, using the Abel transform, we obtain

Σ3 =∑

n�n2

anP(Sn � x) ∼∑

n�n2

an

(1 − Φ

(x√n

))

= T (n2)(

1 − Φ(

x√n2

))+∑

n�n2

T (n)f(x, n), (7.1.16)

where

f(x, n) := Φ(

x√n

)− Φ

(x√

n + 1

)=

1√2π

x/√

n∫x/

√n+1

e−t2/2dt.

Therefore, since T (t) = t−γLT (t) is an r.v.f. and x/√

n2 → ∞, we have

∑n�n2

T (n)f(x, n) =1√2π

∑n�n2

x/√

n∫x/

√n+1

T (n) e−t2/2dt

∼ 1√2π

x/√

n2∫0

T (x2/t2) e−t2/2dt

∼ 1√2π

∞∫0

t2γ

x2γLT (x2/t2) e−t2/2dt

=T (x2)√

∞∫0

LT (x2/t2)LT (x2)

t2γe−t2/2dt.

It can easily be seen from Theorem 1.1.4 that the last integral is asymptoticallyequivalent to

∞∫0

t2γe−t2/2dt = 2γ−1/2

∞∫0

uγ−1/2e−udu = 2γ−1/2Γ(γ + 1/2),

so that the second term on the right-hand side of (7.1.16) is

2γ−1π−1/2Γ(γ + 1/2)T (x2)(1 + o(1)).

Page 370: Asymptotic analysis of random walks

7.1 Functions of regularly varying distributions 339

Further, for the first term on the right-hand side of (7.1.16) we have

T (n2)(

1 − Φ(

x√n2

))∼ c3T

(x2

lnx

)(lnx)−1/2e−c4 ln x = o

(T (x2)

).

Therefore the third sum Σ3 =∑

n�n2is asymptotically equivalent to BT (x2)

(see (7.1.6)). The bound

Σ2 =n2∑n1

anP(Sn � x) = o(V (x)

)+ o

(T (x2)

)for the remaining sum Σ2 follows from previous calculations since, in the intervaln ∈ [n1, n2], one has

P(Sn � x) < 2[nV (x) + 1 − Φ

(x√n

)]owing to (7.1.15).

We observe that the right-hand sides of the above estimates for the sums andintegrals are not sensitive to constant factors in the definition of the boundariesn1 = n1(x), n2 = n2(x). This proves the second assertion of the theorem.

(iii) Now let a = 0, α ∈ (1, 2). First assume that (7.1.8) is true. Split theseries (7.1.1) representing A(x) into two parts:

A(x) = Σ1 + Σ2, where Σ1 =∑

n�n(x)

, n(x) :=1

W (x/lnx).

Then, by Theorem 3.4.4(i), P(Sn � x) ∼ nV (x) uniformly in n � n(x), andtherefore

Σ1 =∑

n�n(x)

anP(Sn � x) ∼ A′(1)V (x).

Owing to (7.1.8) the second sum does not exceed T (n(x)) = o(V (x)

). This

proves the asymptotics (7.1.2).

If condition (7.1.7) is satisfied then, by virtue of Theorem 3.4.4(i) the equiva-lence P(Sn � x) ∼ nV (x) holds for n � n(x) := 1/V (x). Since T

(n(x)

)=

T (1/V (x)) = o(V (x)

), we again obtain (7.1.2), in a similar way to our previous

argument.

(iv) Now let a > 0. Then, for n � n(x) := xa−1(1 − ε), ε > 0, we have

P(Sn � x) ∼ nV (x − an).

Assume that condition (7.1.9) is met. As

V (x − an)V (x)

< c for n < n(x),V (x − an)

V (x)→ 1 for n = o(x),

we obtain, in a similar way to our previous calculations, that

A(x) = A′(1)V (x)(1 + o(1)) + Σ2, (7.1.17)

Page 371: Asymptotic analysis of random walks

340 Asymptotic properties of functions of distributions

where |Σ2| � |T (n(x))| ∼ c1|T (x)| = o(V (x)

).

Now let T ∈ R. By the law of large numbers Sn/n → a as n → ∞, so that wehave P(Sn � x) → 1 for n � xa−1(1 + ε) and hence (assuming for definitenessthat T (xa−1) > 0)

T (xa−1(1 + ε))(1 + o(1)) � Σ2 � T (xa−1(1 − ε)).

As ε is arbitrarily small, this implies that Σ2 ∼ T (xa−1) and therefore, togetherwith (7.1.17), proves the relation (7.1.11).

The theorem is proved.

Remark 7.1.2. In the third part of the theorem, the asymptotics of A(x) remainunknown for a rather broad class of cases, for which neither (7.1.7) nor (7.1.8)hold; for example, in the case when W (t) � V (t), T (1/W (t)) is comparablewith or greater than V (t). It is not difficult to determine the form of the asymp-totics, but we could not give a rigorous proof thereof owing to the absence ofestimates for P(Sn � x) in the ‘intermediate’ zone

n1(x) :=1

W (x/ lnx)< n � 1

N(x)W (x)=: n2(x)

(see Theorem 3.4.4), where N(x) → ∞ slowly enough that one has the uniformasymptotic equivalence

P(

Sn

b(n)� x

b(n)

)∼ Fβ,−1,+

(x

b(n)

)for n > n2.

Here b(n) = W (−1)(1/n) and Fβ,−1 is the stable law with parameters (β,−1).To obtain the asymptotics of A(x) in the case when

W (t) � V (t), T ∈ R,

we need to split the series (7.1.1) representing A(x) into three parts, cf. (7.1.14),over the ranges n � n1(x), n1(x) < n � n2(x) and n > n2(x) respectively.As before, we can then derive that Σ1 ∼ A′(1)V (x). Further, setting for brevityn2(x) =: n2 and using summation by parts, we obtain

Σ3 =∑

n>n2

anP(

Sn

b(n)� x

b(n)

)∼ T (n2)Fβ,−1,+

(x

b(n2)

)+∑

n>n2

T (n)f(x, n), (7.1.18)

where

f(x, n) = Fβ,−1,+

(x

b(n)

)− Fβ,−1,+

(x

b(n + 1)

)=

x/b(n)∫x/b(n+1)

f(u) du,

Page 372: Asymptotic analysis of random walks

7.2 Functions of semiexponential distributions 341

f being the density of Fβ,−1. Hence, setting T1(t) := T (1/W (x)), we see thatthe second term on the right-hand side of (7.1.18) is asymptotically equivalent to

x/b(n2)∫0

T1(xu−1)f(u) du = T1(x)

x/b(n2)∫0

T1(xu−1)T1(x)

f(u) du

∼ T1(x)

∞∫0

uβγf(u) du,

where −γ is the index of the r.v.f. T ∈ R.It remains to evaluate the ‘intermediate’ sum Σ2. It is possible to obtain the

desired bound

Σ2 = o(V (x) + T (1/W (x))

),

but this would require rather cumbersome calculations, which we will not presenthere. They would lead to the asymptotics

A(x) ∼ A′(1)V (x) + BT(1/W (x)

), B :=

∞∫0

tβγf(t) dt.

Remark 7.1.3. If we assume that P(ξ ∈ Δ[x)) ∼ ΔαV (x)/x as x → ∞,Δ[x) = [x, x + Δ), and that the integro-local theorems for P(Sn ∈ Δ[x)) holdtrue (see §§ 3.7, 4.7 and 9.2) then we can derive, in a way similar to our previouscalculations, the asymptotics of A

(Δ[x)

)= A(x) − A(x + Δ) as x → ∞.

7.2 Functions of semiexponential distributions

In this section, we use the notation of § 7.1 but assume that F+ = V ∈ Se.

Theorem 7.2.1. Let V ∈ Se, α ∈ (0, 1). Suppose that the series∑∞

n=0 nan =A′(1) > 0 converges.

(i) If a := Eξ < 0 then, as x → ∞,

A(x) ∼ A′(1)V (x). (7.2.1)

(ii) Let a = 0, Eξ2 < ∞. If

|an| < exp{−nα/(2−α)L1(n)

}(7.2.2)

for a suitable s.v.f. L1 (see the proof below) then (7.2.1) holds true.(iii) Let a > 0 and, for some ε > 0,

T(xa−1 (1 − ε)

)= o

(V (x)

). (7.2.3)

Then (7.2.1) holds true.

Page 373: Asymptotic analysis of random walks

342 Asymptotic properties of functions of distributions

As in Theorem 7.1.1, the condition V ∈ Se can be relaxed in the above as-sertion. Moreover, in parts (ii) and (iii) of the theorem one can obtain asymp-totics of A(x), depending on the function T , in the case when T ∈ Se andT (t2) � cV (t) for a = 0, and also when T (t/a) � cV (t) for a > 0.

Proof. (i) The first assertion of the theorem is proved in exactly the same way asin Theorem 7.1.1.

(ii) To prove the second assertion, we will make use of the following results(see Theorem 5.4.1(ii) and Corollary 5.2.2):

P(Sn � x) ∼ nV (x) for n = o

(x2

l2(x)

)(7.2.4)

and

P(Sn � x) < cn exp{−l(x)

[1 − bnl(x)

x2(1 + o(1))

]}for n < c1

x2

l(x),

(7.2.5)where the constants b and c1 are known. First put n1(x) := x2/l2(x), n2(x) :=c1x

2/l(x) > n1(x) and split the series (7.1.1) into three parts as in (7.1.14) with

Σ1 =∑

n<n1(x)

, Σ2 =∑

n1(x)�n<n2(x)

, Σ3 =∑

n�n2(x)

. (7.2.6)

In the first sum, we have the relation (7.2.4) for n = o(n1(x)

)and the inequality

P(Sn � x) < cnV (x) for n � n1(x). Therefore, by an argument similar tobefore, Σ1 ∼ A′(1)V (x).

Owing to (7.2.5), for large enough x the sum Σ2 will not exceed

cV (x)n2(x)∑n1(x)

nan exp{

2bn

n1(x)

}.

By condition (7.2.2), the absolute value of the latter sum is less than or equal to

n2(x)∑n1(x)

n exp{−nα/(2−α)L1(n) +

2bn

n1(x)

}, (7.2.7)

where

2b

n1(x)− n(2α−2)/(2−α)L1(n) <

2b

n1(x)− n2(x)(2α−2)/(2−α)L1(n). (7.2.8)

The power factor in the last two terms is the same, x2α−2. Hence one canalways choose an s.v.f. L1(n) such that the difference in (7.2.8) will be lessthan −3(lnn)/n1(x). In this case, the sum (7.2.7) will not exceed∑

n�n1(x)

n exp{−3n lnn

n1(x)

}�

∑n�n1(x)

n−2 ∼ 1n1(x)

= o(1).

Page 374: Asymptotic analysis of random walks

7.2 Functions of semiexponential distributions 343

Thus, the second sum in (7.2.6) is o(V (x)

).

The sum Σ3 in (7.2.6) is bounded by∑n�n2(x)

exp{−nα/(2−α)L1(n)

}� exp

{−n2(x)α/(2−α)L2

(n2(x)

)},

where L2 is an s.v.f. and the power-function part of n2(x)α/(2−α) is equal to xα.Therefore, choosing a suitable L1(n) (or L2(n)), one can always obtain

n2(x)α/(2−α)L2

(n2(x)

)� l(x).

This means that the third sum in (7.2.6) is also o(V (x)

).

(iii) Finally, consider the case a > 0. As in (7.1.14) and (7.2.6), we split theseries representing A(x) into three parts, setting n1(x) := z/a in (7.2.6) (where,as before, z = z(x) = x/αl(x)) and n2(x) = xa−1(1 − ε) for a fixed ε > 0.Since

V (x − an)V (x)

→ 1 for n = o(z),V (x − an)

V (x)< c for n < z,

we have here that, owing to Theorem 5.4.1 and Corollary 5.2.2,

P(Sn � x) = P(Sn − an � x − an) ∼ nV (x) for n = o(z),

P(Sn � x) < cnV (x) for n < z

(the inequality n � x2/l2(x) clearly holds in these cases). Therefore, in (7.2.6),

Σ1 ≡∑

n<z/a

= V (x)A′(1)(1 + o(1)

).

In the sum Σ2 from (7.2.6), we have to deal with summands of the formanP(Sn − an � x′), where the deviations x′ := x − an > εx satisfy theconditions

n < xa−1(1 − ε) � c1(x′)2/l(x′),

and hence for them (7.2.5) holds. Moreover, owing to (7.2.3) one can assumethat |an| < V (an). Therefore

|an|P(Sn � x) � cn exp{−l(an) − l(x′)

[1 − 2bnl(x′)

(x′)2

]}. (7.2.9)

Further, we will split this sum Σ2 in turn into two sub-sums, one over the rangean ∈ [z, εx] and the other over an ∈ [εx, x(1− ε)]. In the first range, for small ε

one has

l(x′) = l(x − an) = l(x) − an

z(1 + o(1)),

Page 375: Asymptotic analysis of random walks

344 Asymptotic properties of functions of distributions

so that, for any δ ∈ (0, α) and all sufficiently large x,

l(x′) + l(an) � l(x)[1 − αan

x(1 + o(1)) +

(an

x

)α−δ]

� l(x)[1 +

12

(an

x

)α−δ]

. (7.2.10)

Hence for an � z one obtains

l(x′) + l(an) − l(x) � 12αα−δ

l(x)1−α+δ. (7.2.11)

Since for n < εx and a suitable δ > 0 we have for the ‘remainder’ term in (7.2.9)the bound

nl(x′)(x′)2

= o

((n

x

)α−δ)

,

it follows from (7.2.10) that a bound of the form (7.2.11) will remain true for

l(an) + l(x′)[1 − 2bnl(x′)

(x′)2

]− l(x).

From this and (7.2.9) we obtain

|an|P(Sn � x) � cnV (x) exp{−cl(x)1−α+δ

}.

This clearly shows that the sub-sum over the range an ∈ [z, εx] admits thebound o

(V (x)

).

For the sub-sum corresponding to an ∈ [εx, x(1 − ε)], we can use the repre-sentation obtained in Chapter 5 (see (5.4.44)):

l(an) + l(x − an) − l(x) = l(x)[1 + γ

(an

x

)(1 + o(1))

],

where the function γ(v) = vα+(1−v)α−1 � 0 is concave and symmetric aroundthe point v = 1/2 and γ(v) � γ(ε) > 0 for v ∈ [ε, 1 − ε], ε < 1/2. From this,we can again easily derive that the sub-sum over the range an ∈ [εx, x(1 − ε)]is o(V (x)

). Therefore the same bound holds for the whole sum Σ2.

It remains to bound the third sum Σ3 in (7.2.6). This sum is taken over therange an � x(1 − ε) and so, owing to (7.2.3), is also o

(V (x)

).

The theorem is proved.

7.3 Functions of distributions interpreted as the distributions of stopped

sums. Asymptotics for the maxima of stopped sums

One could interpret the assertions of the previous sections and § 1.4 as follows.Let us depart from the representation (7.1.1). Given that

ak � 0, A(1) =∞∑

k=0

ak < ∞,

Page 376: Asymptotic analysis of random walks

7.3 Distributions of stopped sums 345

we can assume without loss of generality that A(1) = 1 and consider A(f(λ))

asthe ch.f. of an r.v. S for which

EeiλS = A(f(λ))

and which can be represented as

S = Sτ = ξ1 + · · · + ξτ , (7.3.1)

where the r.v. τ is independent of {ξi}, P(τ = k) = ak. In this case, A′(1) = Eτ

and our Theorem 1.4.1 implies, for instance, the following result (see also e.g.Theorem A3.20 of [113]).

Corollary 7.3.1. Let F ∈ S and E(1 + δ)τ < ∞ for some δ > 0. Then

P(Sτ � x) ∼ EτV (x), x → ∞. (7.3.2)

One can similarly reformulate, in terms of the distribution of τ and the tailsT (n) = P(τ � n), the assertions of Theorems 7.1.1 and 7.2.1.

In regard to the above probabilistic interpretation of Theorems 7.1.1 and 7.2.1,there arise the following two natural problems.

(1) To study the asymptotics of P(Sτ � x), Sn = maxk�n Sk.(2) To establish under what conditions the relation (7.3.2) will remain valid when

τ is an arbitrary stopping (Markov) time, not necessarily independent of {ξi}(i.e. an r.v. defined on a common probability space together with {ξi} andsuch that, for any n � 0, the event {τ � n} belongs to the σ-algebraσ(ξ1, . . . , ξn) generated by ξ1, . . . , ξn).

This and the next sections are devoted to solving the above problems.In the present section we will concentrate on analogues of Theorems 7.1.1 and

7.2.1 for Sτ in the case when τ is independent of {ξi}. Since the asymptoticsof P(Sn � x) and P(Sn � x) are close to each other in many deviation zones,the assertions below do not differ much from those of Theorems 7.1.1 and 7.2.1.

Theorem 7.3.2. Assume that F ∈ R, x → ∞ and that an integer-valued r.v. τ

with Eτ < ∞ is independent of {ξi}.

(i) If a := Eξ < 0 then

P(Sτ � x) ∼ EτV (x). (7.3.3)

(ii) If a = 0, α > 2, Eξ2 = 1 and at least one of the conditions (7.1.3), (7.1.4)holds true then

P(Sτ � x) ∼ EτV (x) + 2BT (x2), (7.3.4)

where B is defined in (7.1.6).(iii) Let a = 0, F satisfy condition [<, =] with α ∈ (1, 2), and let at least one of

the conditions (7.1.7) and (7.1.8) be met. Then (7.3.3) holds true.

Page 377: Asymptotic analysis of random walks

346 Asymptotic properties of functions of distributions

(iv) Let a > 0 and one of the conditions (7.1.9) and (7.1.10) be met. Then, in theformer case (7.3.3) holds true. In the latter case,

P(Sτ � x) ∼ EτV (x) + T (x/a). (7.3.5)

As was the case with Theorem 7.1.1, the condition F ∈ R could be broadenedby replacing R by the wider distribution classes described in § 4.8.

Proof of Theorem 7.3.2. We can essentially repeat the argument used to proveTheorem 7.1.1. Instead of (7.1.1), we need to start with the relation

P(Sτ � x) =∞∑

n=0

anP(Sn � x), (7.3.6)

where for the probabilities P(Sn � x) we have, as a rule, the same bounds asfor P(Sn � x) (see Chapters 3 and 4). More precisely, the following hold.

(i) The proof of the first assertion repeats verbatim the argument from § 7.1 forthe case a < 0.

(ii) The proof of the second assertion differs from that for Theorem 7.1.1 intwo respects. These refer to the evaluation of analogues of the sums Σ2 and Σ3.In the sum Σ3 =

∑n�n2(x) anP(Sn � x) one should take the summation

limit n2(x) := x2/N(x), where N(x) → ∞ slowly enough that, uniformlyin n � n2(x),

P(Sn � x) ∼ 2[1 − Φ

(x√n

)].

The subsequent evaluation of the sum Σ3 differs from the calculations in § 7.1(see (7.1.16) etc.) only by the presence of the factor 2, and therefore we willeventually obtain Σ3 ∼ 2BT (x2).

To bound Σ2 =∑n2(x)

n1(x) anP(Sn � x), with the same value of the summationlimit n1(x) = �c1x

2/ lnx , we use inequalities from Corollary 4.1.4. This yieldsΣ2 = o

(V (x)

)+ o

(T (x2)

)and completes the proof of (7.3.4).

(iii), (iv) There are no substantial changes in the proofs of the third and fourthparts of the theorem compared with those § 7.1, since the same bounds are validfor P(Sn � x) and P(Sn � x) in the respective calculations, and Sn/n → a

as n → ∞.The theorem is proved.

Theorem 7.3.3. Assume that F ∈ Se, α ∈ (0, 1) and that an integer-valued r.v.τ with Eτ < ∞ is independent of {ξi}.

(i) If a := Eξ < 0 then

P(Sτ � x) ∼ EτV (x), x → ∞. (7.3.7)

Page 378: Asymptotic analysis of random walks

7.4 Sums stopped at an arbitrary Markov time 347

(ii) Let a = 0, Eξ2 < ∞. If

an < exp{−nα/(2−α)L1(n)

}for a suitable s.v.f. L1 then (7.3.7) holds true.

(iii) Let a > 0 and, for some ε > 0,

T(xa−1(1 − ε)

)= o

(V (x)

).

Then (7.3.7) holds true.

Proof. The proof of Theorem 7.3.3 does not differ from that of Theorem 7.2.1,because, under the conditions of the theorem, we have the same estimates forP(Sn � x) and P(Sn � x).

The remarks that we made following Theorems 7.1.1 and 7.2.1 remain validfor Theorems 7.3.2 and 7.3.3 (in Remark 7.1.2, instead of Fβ,−1 we now need thedistribution of the maximum of a stable process {ζ(1); t ∈ [0, 1]} with stationaryindependent increments, for which ζ(1)⊂=Fβ,−1).

7.4 Sums stopped at an arbitrary Markov time

In this section we will concentrate on the second class of problems mentionedin § 7.3. They deal with the conditions under which the asymptotic laws estab-lished in §§ 7.1–7.3 remain valid in the case when τ is an arbitrary stopping time.It turns out that in this case the asymptotics of P(Sτ � x) are, in a certain sense,more accessible that those of P(Sτ � x).

7.4.1 Asymptotics of P(Sτ � x)

We introduce the class S∗ of distributions F with a finite mean a = Eξ that havethe property

t∫0

F+(u)F+(t − u) du ∼ 2Eξ+ F+(t) as t → ∞, (7.4.1)

where ξ+ = max{0, ξ}. It is not difficult to verify that R and Se are subclassesof S∗. It is also known that if F ∈ S∗ then F ∈ S and a distribution with the tailF I(t) :=

∫∞t

F+(u) du will also belong to S (see [166]). The following rathergeneral assertion was obtained in [126] (this paper also contains a comprehensivebibliography on related results).

Theorem 7.4.1. Let F ∈ S∗, a < 0 and τ be an arbitrary stopping time. Then

P(Sτ � x)F+(x)

→ Eτ as x → ∞. (7.4.2)

Page 379: Asymptotic analysis of random walks

348 Asymptotic properties of functions of distributions

For the case τ = min{k : Sk < 0} this assertion was proved in [8]. Similarresults for the asymptotics of the trajectory maxima over cycles of an ergodicHarris Markov chain were obtained in [56].

Now let a be arbitrary.

Corollary 7.4.2. Let F ∈ S∗ and let there exist a function h(x) such that, asx → ∞, one has h(x) → ∞, h(x) = o(x) and

F+(x − h(x))F+(x)

→ 1, P(τ > h(x)

)= o

(F+(x)

). (7.4.3)

Then (7.4.2) holds true.

The first relation in (7.4.3) means that F+(t) is an h-l.c. function (see p. 18).The second relation clearly implies that Eτ < ∞.

Proof. Choose a number b > a = Eξ and introduce i.i.d. r.v.’s ξi(b)d= ξ − b,

so that Eξi(b) = a − b < 0. Then, on the one hand, for Sn(b) :=∑n

i=1 ξi(b),Sn(b) := maxk�n Sk(b), we will have

P(Sτ � x) � P(Sτ (b) + bτ � x)

� P(bτ > bh(x)

)+ P(Sτ (b) � x − bh(x)

)= o

(F+(x)

)+ Eτ F+

(x − bh(x) + b

)(1 + o(1)) ∼ Eτ F+(x),

owing to Theorem 7.4.1. On the other hand, clearly ξ > ξ(b), Sτ � Sτ (b) andtherefore

P(Sτ � x) � P(Sτ (b) � x) ∼ EτF+(x + b) ∼ EτF+(x),

since F+ is l.c. The corollary is proved.

Corollary 7.4.3.

(i) Let F ∈ R and, in the case a � 0, in addition let

P(τ > x) = o(F+(x)

). (7.4.4)

Then (7.4.2) holds true.(ii) Let F ∈ Se and, in the case a � 0, in addition let

P(τ > z(x))

= o(e−l(x)

), (7.4.5)

where z(x) = x/αl(x). Then (7.4.2) holds true.

Theorems 7.1.1 and 7.2.1 show that conditions (7.4.4), (7.4.5) are essential for(7.4.2) to hold.

Proof. (i) If (7.4.4) holds then clearly there exists a function h(x) = o(x) suchthat P

(τ > h(x)

)= o

(F+(x)

). This means that conditions (7.4.3) of Corol-

lary 7.4.2 will be met.

Page 380: Asymptotic analysis of random walks

7.4 Sums stopped at an arbitrary Markov time 349

(ii) If F ∈ Se and (7.4.5) holds true then, similarly, there exists a functionh(x) = o

(z(x)

)such that P(τ > h(x)

)= o

(F+(x)

), and hence (7.4.3) is true.

The corollary is proved.

7.4.2 On the asymptotics of P(Sτ � x)

Now we turn to the asymptotic behaviour of Sτ . Unfortunately, under the condi-tions of Theorem 7.4.1, it is impossible to obtain a result of the form

P(Sτ � x) ∼ Eτ F+(x) as x → ∞. (7.4.6)

This is demonstrated by the following simple example. Let a = Eξ < 0 andτ = η− := min{k � 1 : Sk � 0}. Then, clearly, Sτ � 0 and P(Sτ � x) = 0for all x > 0, so that (7.4.6) cannot hold under any conditions. Thus, for (7.4.6)to hold true, we must narrow down the class of stopping times. The followingassertion was obtained in [136] (on p. 43 of this paper the authors state that itcould easily be extended to the case F ∈ R).

Theorem 7.4.4. Let F+(x) ∼ x−α and F−(x) = O(x−β) as x → ∞, whereα, β > 0. Further, let τ be an arbitrary stopping time such that

P(τ > n) = o(n−r) as n → ∞, (7.4.7)

where r > max{1, α/β} and

r �{

α/2 if Eξ = 0,

α if Eξ �= 0.

Then P(Sτ � x) ∼ Eτ x−α as x → ∞.

Bounds for P(Sτ � x) obtained under very broad conditions can be foundin [76, 46].

In connection with the above example for τ = η−, observe that condition(7.4.7) is not satisfied in that situation (indeed, by virtue of Theorem 8.2.3 wehave P(η− > n) > cn−α), so that condition (7.4.7) is essential for the assertionof Theorem 7.4.4: it restricts the class of stopping times under consideration. An-other restriction of this class was considered in § 7.1; there we assumed that τ wasindependent of {ξi}.

Now we will consider another restriction. Suppose we are given two sequences(boundaries), {g+(k)} and {g−(k)}, such that g+(k) > −g−(k) for k � 1. Wewill call τ a boundary stopping time if

τ := inf{k � 1 : Sk � g+(k) or Sk � −g−(k)

}. (7.4.8)

Observe that the differences g±(k) ∓ ak, a = Eξ, cannot grow too fast ask → ∞, as then τ might be an improper r.v., owing to the law of the iteratedlogarithm. For example, in the case Eξ = 0, Eξ2 < ∞ it is natural to consideronly functions satisfying g±(k) < c

√k ln k.

Page 381: Asymptotic analysis of random walks

350 Asymptotic properties of functions of distributions

The values of g−(k) in the definition (7.4.8) may all be infinite. In this case, wewill have a boundary stopping time corresponding to a single boundary g(k) =g+(k). In accordance with the above observation, we will assume in this case that

g(k) � c + k, k � 1. (7.4.9)

Note that if g(k) is non-decreasing then, clearly, Sτ ≡ Sτ and so all the asser-tions of the previous subsection become applicable.

First we consider the case Eξ = 0.

Theorem 7.4.5. Let Eξ = 0, the distribution of ξ satisfy condition [<, =] withV = F+ ∈ R, τ be a boundary stopping time and let conditions (7.4.4), (7.4.9)be met. Then (7.4.6) holds true.

Proof. Since Sτ � Sτ , Corollary 7.4.3 implies that it suffices to verify that

P(Sτ � x) � Eτ F+(x)(1 + o(1)). (7.4.10)

Let N = N(x) be a function such that, as x → ∞,

N → ∞, N = o(x),(V (x) + W (x)

)N2 → 0. (7.4.11)

Fix an arbitrary ε > 0 and introduce the events

Ak := {ξk � (1 + ε)x}, k = 1, 2, . . .

One can easily see that, for the indicator function of the event in question, wehave I(Sτ � x) � I1 − I2, where

I1 :=min{τ,N}∑

k=1

I(Ak;Sτ � x

),

I2 :=min{τ,N}∑

ν=1

(ν − 1)I(ν events Ak occurred prior to the time min{τ,N}).

Here

EI1 = EN∑

k=1

I(Ak; Sτ � x, τ � k

), EI2 � 1

2(NV (x)

)2(1 + o(1)).

As τ is a boundary stopping time and g(k) < c + k, N = o(x), we see that,for k � N ,

Ak ∩ {Sk−1 > −εx/2, τ � k} ⊂ Ak ∩ {Sτ � x, τ � k}.

Page 382: Asymptotic analysis of random walks

7.4 Sums stopped at an arbitrary Markov time 351

Therefore

EI1 �N∑

k=1

P(Sk−1 � −εx

2, τ � k

)P(Ak)

= F+

((1 + ε)x

) N∑k=1

[P(τ � k) − P

(τ � k, Sk−1 < −εx

2

)]

� F+

((1 + ε)x

) [Eτ(1 + o(1)) −

N∑k=1

P(Sk−1 < −εx

2

)]

� F+

((1 + ε)x

) [Eτ(1 + o(1)) −

N∑k=1

(k − 1)W(εx

2

)(1 + o(1))

]= F+

((1 + ε)x

)[Eτ + o(1) + O

(N2W (x)

)]= EτF+(x)

((1 + ε)x

)+ o

(F+(x)

).

Since ε is arbitrary, this implies (7.4.10). The theorem is proved.

Transition to the case a �= 0 under the conditions of Theorem 7.4.5 can be donein a way similar to that used in Corollaries 7.4.2 and 7.4.3.

Now we return to the general case (7.4.8) where τ is a stopping time specifiedby two boundaries. Here the sign of Eξ does not play any important role, whereasconditions on the distribution F can be relaxed.

Set

θ±(x) := sup{k : g±(k) � x

},

where we let θ±(x) = ∞ if sup g±(k) � x (the θ±(x) are generalized inversefunctions for g±(k)). It is evident that θ±(x) → ∞ as x → ∞; moreover, θ±(x)may be equal to ∞ for finite values of x.

Lemma 7.4.6. Assume that Eτ < ∞ and that h(x) is an arbitrary function withthe properties

h(x) <x

2, h(x) → ∞ as x → ∞.

(i) The following lower bound holds true:

P(Sτ � x) � F+(x + h(x))(Eτ − δ(x)

), (7.4.12)

where δ(x) → 0 as x → ∞.If

g±(k) � g < ∞ (7.4.13)

then, for x > g,

P(Sτ � x) � F+(x + g)Eτ. (7.4.14)

Page 383: Asymptotic analysis of random walks

352 Asymptotic properties of functions of distributions

(ii) The following upper bound holds true:

P(Sτ � x) � F+(x − h(x))Eτ + F+(x/2)δ(x) +∑

k�θ+(x/2)

P(τ > k).

(7.4.15)If

g+(k) � g+ < ∞then, for x � g+,

P(Sτ � x) � F+(x − g+)Eτ. (7.4.16)

Proof. (i) Let θ(x) := min{θ−(h(x)), θ+(x)

},

Ck :={Sj ∈ (−g−(j), g+(j)), j � k

} ≡ {τ > k}.Is is clear that θ(x) → ∞ as x → ∞. We have

P(Sτ � x) =∞∑

k=0

P(τ = k + 1, Sk+1 � x)

�θ(x)−1∑

k=0

P(Ck; Sk + ξk+1 � x), (7.4.17)

where the inequality holds owing to the fact that g+(k + 1) � x for k < θ(x).Since for such k holds g−(k) � h(x), we obtain

P(Sk + ξk+1 � x|Ck

)� F+(x + h(x)), (7.4.18)

so that

P(Sτ � x) � F+(x + h(x))∑

k<θ(x)

P(τ > k).

This implies (7.4.12). In the case (7.4.13), using the equality in (7.4.17) andnoting that the left-hand side of (7.4.18) is not less than F+(x + g) we obtain

P(Sτ � x) � F+(x + g)∞∑

k=0

P(τ > k) = EτF+(x + g).

(ii) Using the equality in (7.4.17) we can split the sum on its right-hand sideinto three parts as follows:

P(Sτ � x) =∑

k<θ+(h(x))

+θ+(x/2)−1∑k=θ+(h(x))

+∑

k�θ+(x/2)

.

In the first sum (over k < θ+(h(x))) one has g+(k + 1) � h(x) < x,

P(τ = k + 1, Sk+1 � x) = P(Sk + ξk+1 � x |Ck

)P(Ck)

� F+(x − h(x))P(Ck).

Page 384: Asymptotic analysis of random walks

7.4 Sums stopped at an arbitrary Markov time 353

For the terms in the second sum one has g+(k + 1) � x/2,

P(τ = k + 1, Sk+1 � x) = P(Sk + ξk+1 � x|Ck

)P(Ck)

� F+(x/2)P(Ck).

Therefore

P(Sτ � x) � F+(x − h(x))∑

k<θ+(h(x))

P(τ > k)

+ F+(x/2)θ+(x/2)∑

k=θ+(h(x))

P(τ > k) +∑

k�θ+(x/2)

P(τ > k).

Since θ+(h(x)) → ∞ as x → ∞ and Eτ < ∞ we have

δ(x) :=∑

k�θ+(h(x))

P(τ > k) → 0

as x → ∞. Hence

P(Sτ � x) � F+(x − h(x))Eτ + F+(x/2)δ(x) +∑

k�θ+(x/2)

P(τ > k).

This proves (7.4.15).If g+(k) � g+ then the left-hand side in (7.4.18) does not exceed F+(x − g+)

for x � g+, and so (7.4.17) implies (7.4.16). The lemma is proved.

Theorem 7.4.7. Let τ be a boundary stopping time (7.4.8) with Eτ < ∞ and letF+ be an l.c. function.

(i) If (7.4.13) is true then we have (7.4.6).(ii) Assume that F+ has the property F+(x/2) < cF+(x) and, moreover,∑

k�θ+(x/2)

P(τ > k) = o(F+(x)

)as x → ∞. (7.4.19)

Then (7.4.6) holds true.

It is evident that functions F+ ∈ R satisfy the conditions of the theorem.

Proof. The first assertion of the theorem is obvious from (7.4.14) and (7.4.16).To obtain the second assertion, observe that for an l.c. F+(x) there always existsa function h(x) tending to infinity slowly enough that F+(x − h(x)) ∼ F+(x)as x → ∞. Therefore it remains for us to make use of (7.4.12) and 7.4.15. Thetheorem is proved.

The most difficult condition to verify in Theorem 7.4.7 is (7.4.19). It can besimplified for special boundaries g+(k). For example, suppose that g+(k) = kγ ,γ ∈ (0, 1). Then θ+(x) ∼ x1/γ and in the case F+(x) = x−αL(x), L being ans.v.f., condition (7.4.19) will be satisfied provided that

P(τ > k) < kθ, θ < −αγ − 1.

Page 385: Asymptotic analysis of random walks

354 Asymptotic properties of functions of distributions

Indeed, in this case the sum on the left-hand side in (7.4.19) does not exceedcx(θ+1)/γ , where (θ + 1)/γ < −α.

If Eξ2 < ∞ and g+(k) + g−(k) � cnγ for γ < 1/2 and all k � n then it isnot difficult to obtain the bound

P(τ > n) � (1 − p)n1−2γ

,

where p > 0 is the minimum probability that the random walk will reach a bound-ary during the time interval [jn2γ , (j+1)n2γ ] having started from inside the strip.

Remark 7.4.8. Recall that the l.c.-property used in Theorem 7.4.7, generallyspeaking, does not imply that F is subexponential (see Theorem 1.2.8). ThusTheorem 7.4.7 provides us with an example of a situation where the relationP(Sn � x) ∼ nF+(x) (as x → ∞) may not hold for each fixed n, but thereexists a stopping time τ such that (7.4.6) holds true.

7.5 An alternative approach to studying the asymptotics of P(Sn � x) for

sub- and semiexponential distributions of the summands

As we stated in § 1.4, there is a class of large deviation problems for random walksthat are analyzed more naturally using not the techniques being developed in themain part of this monograph for regularly varying and semiexponential distribu-tions but, rather, asymptotic analysis within a wider class of subexponential distri-butions. For example, one such problem is that on the asymptotics of P(S � x),S = supk�0 Sk, in the case Eξ < 0 (see also problems from Chapter 8). For suchproblems, analytic approaches different from those used in Chapters 2–5 and insubsequent chapters prove to be the most effective. These approaches are based onfactorization identities and employ the asymptotic analysis of subexponential dis-tributions presented in §§ 1.2–1.4 and also in the present chapter. For upper-powerdistributions the asymptotics of P(S � x) were obtained in that way in [39, 42].For the class of subexponential distributions, these results were established, usingthe same approaches, in [275]. The present section is devoted to presenting theabove-mentioned approaches. It lies somewhat outside the mainstream expositionin Chapters 2–5.

7.5.1 An integral theorem on the first order asymptotics of P(S � x)

As before, let ξi be independent and identically distributed, Eξ < 0,

S = supk�0

Sk, Sk =n∑

i=1

ξi and F+(x) = P(ξ � x).

Set

F I+(t) :=

∞∫t

F+(u) du.

Page 386: Asymptotic analysis of random walks

7.5 An alternative approach to studying P(Sn � x) 355

A main aim of this subsection is to prove the following assertion (see also [275]).

Theorem 7.5.1. If the function F I+(t) is subexponential (F I

+ ∈ S) then

P(S � x) ∼ − 1Eξ

F I+(x) as x → ∞. (7.5.1)

Theorem 7.5.1 undoubtedly differs from the theorems in Chapters 2–5 on thedistribution of S in the cases of regularly varying and semiexponential distri-butions F, in that it is more general and complete. Finding the asymptoticsof P(Sn � x) (or P(Sn(a) � x) if Eξ = 0, a > 0) for subexponential dis-tributions in the case of a finite growing n, however, requires much more effortand additional conditions (see e.g. [178]). If we also take into account that oneneeds to refine the asymptotics and find distributions of other functionals of therandom walk {Sk} then the separate analysis of regularly varying and semiexpo-nential distributions becomes justified.

Note also that the condition F I+ ∈ S is not only sufficient but also necessary

for the asymptotics (7.5.1); see [177].First we will state a factorization identity to be used in what follows. Let

η+ := inf{k � 1 : Sk > 0}, η− := inf{k � 1 : Sk � 0}. (7.5.2)

On the events {η± < ∞} we can define r.v.’s

χ± := Sη± ,

which are respectively the first positive and the first non-positive sums. It is clearthat in the case Eξ < 0 we have

P(η− < ∞) = 1, P(η+ < ∞) = P(S > 0) =: p < 1.

Further, set

f(λ) := Eeiλξ

and introduce an r.v. χ with distribution U, given by the following relation:

P(χ < t) := P(χ+ < t| η+ < ∞) ≡ 1p

P(χ+ < t, η+ < ∞), t > 0.

Lemma 7.5.2. If Eξ < 0 then, for Im λ = 0,

1 − f(λ) =(1 − pEeiλχ+

)(1 − Eeiλχ−

), (7.5.3)

EeiλS =1 − p

1 − pEeiλχ. (7.5.4)

Proofs of these assertions can be found, for instance, in [42, 49, 122]. Thelatter identity can easily be derived directly from the representation S =

∑νi=1 χi,

where the r.v. ν does not depend on {χi} (the χid= χ are independent) and is the

number of ‘upper ladder points’ in the sequence {Sk}, P(ν = k) = (1 − p)pk,

k = 0, 2, . . .

Page 387: Asymptotic analysis of random walks

356 Asymptotic properties of functions of distributions

Proof of Theorem 7.5.1. We will split the proof into two steps. First we will findthe asymptotics of the tail

U(x) := P(χ � x).

Lemma 7.5.3. If Eξ < 0 and

F+(x) = o(F I

+(x))

(7.5.5)

as x → ∞ then

U(x) ∼ F I+(x)a−p

, (7.5.6)

where a− := −Eχ− < ∞.

Proof. The proof of the lemma follows a scheme used in [39, 42]. Let

δ(t) :={

0 for t � 0,

1 for t > 0,F0(t) := δ(t) − P(ξ < t),

so that F0(t) → 0 as t → ±∞. Then 1−f(λ) and 1/(1 − Eeiλχ−) can be written,respectively, as

1 − f(λ) =∫

eiλxdF0(x),1

1 − Eeiλχ−= −

0∫−∞

eiλxdH(−x),

where H(x) is the renewal function for the r.v. −χ− � 0. Differentiating theidentity (7.5.3) at λ = 0 we obtain Eξ = (1 − p)Eχ−, so that

a− := −Eχ− = − Eξ

1 − p> 0 (7.5.7)

is finite and therefore H(x) ∼ x/a− as x → ∞. Hence the identity (7.5.3) canbe rewritten as

−(∫

eiλxdF0(x))( 0∫

−∞eiλxdH(−x)

)= 1 − pEeiλχ,

which implies that, for x > 0,

pU(x) =

∞∫x

H(t − x)F(dt). (7.5.8)

The renewal function H(t) has the following property (see e.g. § 1, Chapter 9of [49]): for any ε > 0 there exists an N < ∞ such that

t

a−� H(t) <

t + N

a− − ε. (7.5.9)

Page 388: Asymptotic analysis of random walks

7.5 An alternative approach to studying P(Sn � x) 357

Therefore

1a−

∞∫x

(t − x)F(dt) � pU(x) <1

a− − ε

∞∫x

(t − x + N)F(dt),

where∞∫

x

(t − x)F(dt) = −∞∫

x

(t − x) dF+(t) =

∞∫x

F+(t) dt ≡ F I+(x).

From this we find that

1a−

� pU(x)F I

+(x)� 1

a− − ε

(1 +

F+(x)NF I

+(x)

).

Since the right-hand side converges to 1/(a− − ε) as x → ∞, and ε > 0 isarbitrary, the lemma is proved.

Now we can proceed to the second stage of the proof of the theorem. LetF I

+ ∈ S. Then, according to Theorem 1.2.4(v) and Theorem 1.2.8, we have

F+(x) = o(F I

+(x)),

and so (7.5.5) is proved.Further, it follows from (7.5.4) that

EeiλS = A(g(λ)), (7.5.10)

where g(λ) := Eeiλχ and A(z) = (1 − p)/(1 − pz). The function A(z) is ana-lytic in the disk |z| < 1/p, and so we can make use of Theorem 1.4.1, accordingto which

P(S � x) ∼ A′(1)P(χ � x) =p

1 − pU(x) ∼ F I

+(x)a−(1 − p)

due to (7.5.6). It remains to use identity (7.5.7).The theorem is proved.

Corollary 7.5.4. If the tail F+(x) = P(ξ � x) is regularly varying or semiex-ponential then the conditions of Theorem 7.5.1 are satisfied and therefore (7.5.1)holds true.

Proof. The assertion is nearly obvious: if the function F+ is regularly varyingthen F I

+ is also regularly varying, by virtue of Theorem 1.1.4(iv), and hence issubexponential. The same claim holds for semiexponential tails F+ (see § 1.3).

The fact that

U(x) ∼ 1a−p

∞∫x

F+(u) du

Page 389: Asymptotic analysis of random walks

358 Asymptotic properties of functions of distributions

and hence that for an l.c. F+(u) we have

U(Δ[x)

)= U(x) − U(x + Δ) ∼ Δ

a−pF+(x), (7.5.11)

suggests that the function U(x) will, under broad conditions, be locally subex-ponential (see Definition 1.3.3, p. 47). This means that one could sharpen the‘integral’ theorem 7.5.1 and obtain, along with the latter, an ‘integro-local’ theo-rem on the asymptotics of P(S ∈ Δ[x)) as well.

7.5.2 An integro-local theorem on the asymptotics of P(S � x)

The objective of this subsection is to give a proof of the following refinement ofTheorem 7.5.1.

Theorem 7.5.5. Let the distribution of ξ be non-lattice, Eξ < 0, F+ be an l.c.function and F I

+ ∈ S. Further, assume that, as x → ∞,

F I+(x) ∼ F+(x)

v(x), (7.5.12)

where v(x) → 0 is an upper-power function (see Definition 1.2.20, p. 28). Then,for any fixed Δ > 0,

P(S ∈ Δ[x)

) ∼ − ΔEξ

F+(x). (7.5.13)

If the distribution of ξ is arithmetic then, under the same conditions, for integer-valued x → ∞,

P(S = x) ∼ F+(x)Eξ

. (7.5.14)

Corollary 7.5.6. Let Eξ < 0. If F ∈ R or F ∈ Se, α ∈ (0, 1) then (7.5.13)((7.5.14) in the arithmetic case) holds true.

This result could be strengthened by replacing the conditions F+ ∈ R andF+ ∈ Se by the weaker conditions of Theorems 1.2.25, 1.2.31 and 1.2.33. In thecase when Eξ2 < ∞, Corollary 7.5.6 also follows from Theorem 7.5.8 below.

An assertion close to the claims of Theorem 7.5.5 and Corollary 7.5.6 was ob-tained in [26, 11]. In [11], to ensure the validity of the relations (7.5.13), (7.5.14)the authors used the condition that F belongs to the distribution class S∗, whichis characterized by the relation (7.4.1).

Proof of Corollary 7.5.6. If F ∈ R then the assertion of the corollary is almostobvious. In this case F+(t) = V (t) := t−αL(t) is an l.c. function, and, byTheorem 1.1.4(iv),

F I+(x) =

∞∫x

u−αL(u) du ∼ x−α+1L(x)α − 1

=F+(x)v(x)

,

Page 390: Asymptotic analysis of random walks

7.5 An alternative approach to studying P(Sn � x) 359

where v(x) = (α − 1)/x. The conditions of Theorem 7.5.5 are satisfied.

Now let F ∈ Se. Then F+(t) = e−l(t),

l(x + u) − l(x) ∼ αul(x)x

as x → ∞, u = o(x),ul(x)

x→ ∞. (7.5.15)

Set v(x) := αl(x)/x and then choose N = N(x) such that N = o(x) andNv(x) → ∞. Then, by virtue of (7.5.15),

x+N∫x

e−l(t)dt = e−l(x)

N∫0

e−(l(x+u)−l(x))du

= e−l(x)

N∫0

e−uv(x)(1+o(1))du

=e−l(x)

v(x)

Nv(x)∫0

e−y(1+o(1))dy ∼ F+(x)v(x)

.

Repeating the same argument for

x+N+N(x+N)∫x+N

e−l(t)dt

and so on, we obtain

F I+(x) ∼ F+(x)

v(x).

This proves (7.5.12) and also that F I+ ∈ Se and therefore that F I

+ ∈ S. Theconditions of Theorem 7.5.5 are again satisfied.

The corollary is proved.

Proof of Theorem 7.5.5. The scheme of this proof is roughly the same as in [42,275]. It is based on two elements: the well-known facts about factorization iden-tities, and Theorems 1.3.6 and 1.4.2 (1.4.3 in the discrete case). The first elementcan be presented in the form of the following assertion, most of whose parts arewell known. To formulate it, we will need the notation introduced after the state-ment of Theorem 7.5.1 (see p. 355) and also some factorization identities.

Lemma 7.5.7. Let F+ be an l.c. function, Eξ < 0 and let x → ∞. Then thefollowing assertions hold true.

(i) Along with (7.5.3), (7.5.4), we have

U(x) ∼ F I+(x)a−p

, a− = −Eχ−, (7.5.16)

U(Δ[x)

) ∼ ΔF+(x)a−p

(7.5.17)

Page 391: Asymptotic analysis of random walks

360 Asymptotic properties of functions of distributions

for any fixed Δ > 0.In the arithmetic case, the quantity Δ in (7.5.17) has to be integer-valued.

(ii) If Eξ2 < ∞ then, for non-lattice ξ’s,

U(x) =F I

+(x)a−p

+b

pF+(x) + o

(F+(x)

), (7.5.18)

where b := a(2)/2a2− and a(2) := Eχ2

− < ∞.In the arithmetic case (7.5.18) remains true, but with a somewhat different

coefficient of F+(x) (a(2) is replaced by a(2) + a−).

Proof. (i) Since, in the case of an l.c. F+, the relation (7.5.5) follows from Theo-rem 1.2.4(v), the assertion (7.5.16) was obtained in Lemma 7.5.3.

Now we prove (7.5.17) (formally, (7.5.16) does not imply (7.5.17)). From therepresentation (7.5.8) and the local renewal theorem (see e.g. (10) in § 4, Chapter 9of [49]), we obtain that for an l.c. F+ one has

pU(Δ[x)

)=

x+Δ∫x

H(t − x)F(dt) +

∞∫x+Δ

[H(t − x) − H(t − x − Δ)

]F(dt)

= o(F+(x)

)+

Δa−

F+(x + Δ)(1 + o(1))

=ΔF+(x)

a−+ o

(F+(x)

).

This proves (7.5.17).

(ii) If Eξ2 < ∞ then, differentiating the identity (7.5.3) at λ = 0, we obtainEχ2

− < ∞. Therefore

H(t) =t

a−+ b + ε(t),

where ε(t) → 0 as t → ∞ (see e.g. Appendix 1 of [42] or [120, 49]). Hence,owing to (7.5.8),

U(x) =1p

∞∫x

H(t − x)F(dt) =F I

+(x)a−p

+bF+(x)

p+ ε1(x),

where

ε1(x) :=1p

∞∫0

ε(v)F(x + dv) = o(F+(x)

),

when F+ is an l.c. function. The lemma is proved.

Now we are in position to prove the theorem. We will restrict ourselves to thenon-lattice case and will make use of Theorem 1.3.6, where we take G to be thedistribution U. That conditions (1.3.20) are satisfied follows from Lemma 7.5.7

Page 392: Asymptotic analysis of random walks

7.5 An alternative approach to studying P(Sn � x) 361

and (7.5.12), and therefore U ∈ Sloc, owing to the above-mentioned theorem. Itremains to use the representation (7.5.4), which has the form

EeiλS = A(g(λ)), (7.5.19)

where

g(λ) = Eeiλχ, χ⊂=U, A(w) =1 − p

1 − pw.

The function A(w) is analytic in the disk |w| < 1/p and so, by Theorem 1.4.2,

P(S ∈ Δ[x)

) ∼ A′(1)U(Δ[x)

).

From this, Lemma 7.5.7 and the relation (7.5.7) we derive (7.5.13).The theorem is proved.

7.5.3 A refinement of the integral theorem for the maxima of sums

In this subsection we will obtain another refinement of Theorem 7.5.1, whichcontains the next term in the asymptotic expansion for P(S � x). Such a re-finement in the case F ∈ R was established in Corollary 3 of [63], but this wasunder additional moment and smoothness conditions on F+. As will follow fromTheorem 7.5.8 below, these additional conditions prove to be superfluous.

For M/G/1 queueing systems, i.e. in the case ξ = ζ − τ , where the r.v.’sζ � 0 and τ � 0 are independent and τ follows the exponential distribution, arefinement of the first-order asymptotics for the distribution of S (or, equivalently,for the limiting waiting-time distribution in M/G/1 systems) was obtained, underthe assumption that ζ has a heavy-tailed density of a special form, in [267].

Theorem 7.5.8. Let F ∈ R or F ∈ Se, α ∈ (0, 1), Eξ < 0, Eξ2 < ∞ and thedistribution of ξ be non-lattice. Then

P(S � x) = −F I+(x)Eξ

+ cF+(x) + o(F+(x)

), (7.5.20)

where

c =b

1 − p− 2a+p

Eξ(1 − p), b =

Eχ2−

2(Eχ−)2, p = P(η+ < ∞), a+ = Eχ.

In the arithmetic case, this representation remains true for integer-valued x

and somewhat different values of c (see Lemma 7.5.7).One also has1

c =Eξ2

2(Eξ)2− ES

Eξ.

1 The calculation of the constant c to be found in [63] contains an error: in the corresponding expan-sion, the terms that correspond to the second derivatives F ′′

+ and so account for the dependence ofc on the variance of ξ were not taken into account.

Page 393: Asymptotic analysis of random walks

362 Asymptotic properties of functions of distributions

For remarks on why the coefficient c of F+(x) in (7.5.20) and the correspond-ing coefficient in the representation (4.6.4), which was obtained under more re-strictive conditions, are different, see the end of Corollary 4.6.3 (p. 210).

Remark 7.5.9. Observe that the asymptotic expansion (7.5.20) was obtained forthe classes R and Se only. The question whether (7.5.20) holds for the entireclass S remains open.

Remark 7.5.10. In the case Eξ2 < ∞, Theorem 7.5.8 clearly implies Corol-lary 7.5.6. As in Corollary 7.5.6, the conditions F ∈ R and F ∈ Se can berelaxed. For example, instead of F ∈ R it suffices to assume that

(1) F+ has the property that, for any fixed v,

F+(x + v lnx)F+(x)

→ 1 as x → ∞;

(2) F+ is an upper-power function;(3) F+ has a regularly varying majorant V such that x−1V (x) = o

(F+(x)

)as x → ∞.

The scheme of the proof of Theorem 7.5.8 is basically the same as for The-orem 7.5.5: it is based on factorization identities, Lemma 7.5.7(ii) and directcalculations, relating the distribution of S to the distributions of the sums

Zn :=n∑

i=1

ζi, ζi := χi − a+,

where the r.v.’s χid= χ are independent. We will denote the distribution of the r.v.

ζ = χ − a+ by G.At the last stage of the proof we will need an additional proposition, refining the

asymptotics of P(Zn � x). It is an insignificant modification (and simplification)of Theorem 3 of [63] in the case G ∈ R, and of Theorem 2.1 of [52] in the caseG ∈ Se. In the case G ∈ R (G+(t) = t−αGLG(t)), consider the followingsmoothness condition1, which was introduced in § 4.7 (see p. 217):

[D(1,q)] As t → ∞, Δ → 0,

G+

(t(1 + Δ)

)− G+(t) = −G+(t)[ΔαG(1 + o(1)) + o(q(t))

], (7.5.21)

where q(t) → 0.

As observed in Remark 3.4.8 and in the proofs of Theorems 4.4.4 and 4.5.1, theremainder term o

(q(t)

)can be directly transferred to the final answer (cf. (7.5.23),

(7.5.25) below), after which one can simply assume that condition [D(1,0)] is met.

1 This is a relaxed version of conditions [D1] of [63]: the latter did not have the term o(q(t)) on theright-hand side of (7.5.21). It almost coincides with condition [D(1,q)] from [66], which containedO(q(t)) instead of o

`q(t)

´.

Page 394: Asymptotic analysis of random walks

7.5 An alternative approach to studying P(Sn � x) 363

In the case G ∈ Se, condition [D(1,q)] will have the following form. As usual,put z(t) := t/l(t).

[D(1,q)] As t → ∞, Δ = o(l−1(t)) (or Δt = o(z(t))),

G+

(t(1 + Δ)

)− G+(t) = −G+(t)[ΔαGl(t)(1 + o(1)) + o

(q(t)

)]. (7.5.22)

(This condition is a relaxed version of condition [D1] from [52].)

In the case G ∈ R, αG < 2, we will need the inverse function

σ(n) := G(−1)+ (1/n).

The above-mentioned refinement of the asymptotics of P(Zn � x) is containedin the following assertion.

Theorem 7.5.11. Let Eζ = 0.

(i) If Eζ2 < ∞, G ∈ R, αG < 2 and condition [D(1,q)] holds then

P(Zn � x) = nG(x)[1 + o(q(x)) + o(

√nx−1)

](7.5.23)

uniformly in n < cx2/ lnx for some c > 0.(ii) If G ∈ R, αG ∈ (1, 2), G−(t) � cG+(t) and condition [D(1,q)] is met then

P(Zn � x) = nG+(x)[1 + o(q(x)) + o(σ(n)x−1)

](7.5.24)

uniformly in n < ε/G+(x), ε > 0.(iii) If G ∈ Se, αG ∈ (0, 1), Eζ2 < ∞ and (7.5.22) holds then

P(Zn � x) = nG+(x)[1 + o(q(x)) + o(

√nz−1)

], (7.5.25)

where z = z(x) = x/l(x), uniformly in n = o(z2).

Proof. (i) When G ∈ R, Eζ2 < ∞, the assertion (7.5.23) is a direct consequenceof Theorem 4.4.4 for k = 1.

(ii) The assertion (7.5.24) is a direct consequence of Theorem 3.4.4(iii).

(iii) The case G ∈ Se can be dealt with in the same way. All the calculationsfrom the proof of Theorem 5.4.1(iii) remain valid here except for the estimate ofthe quantity E′

2, which was introduced in (5.4.58):

E′2 = E

[G+(x − Zn−1); |Zn−1| � z

],

and which gives the principal part of the desired asymptotics. More precisely,owing to (5.4.58), (5.4.59) and (5.4.65), one has for z � √

n the representation

P(Zn � x) = nE′2 + nG+(x) o

(nz−2

)for z � √

n. (7.5.26)

Since |G+(x − vz) − G+(x)| < cG+(x) for |v| � 1, the part of the integral E′2

over the set εz < |Zn−1| � z, where ε tends to zero slowly enough, can bebounded, owing to Chebyshev’s inequality, by

G+(x)O(P(εz � |Zn−1| � z)

)= G+(x)o(nz−2).

Page 395: Asymptotic analysis of random walks

364 Asymptotic properties of functions of distributions

Therefore we just need to evaluate

E[G+(x − Zn−1); |Zn−1| � εz

],

where ε → 0. For this term, owing to [D(1,q)] and the relations EZn−1 = 0,E|Zn−1| = O(

√n ), we have, again assuming that ε → 0 slowly enough,

E[G+(x − Zn−1); |Zn−1| � εx

]= G+(x)E

[1 + αGZn−1z

−1 + o(|Zn−1|z−1

)+ o(q(x)); |Zn−1| � εx

]= G+(x)

[1 + o(

√nz−1) + o(q(x)) + O

(P(|Zn−1| > εx

)+ E

(|Zn−1|z−1; |Zn−1| > εz)]

= G+(x)[1 + o(

√nz−1) + o(q(x))

].

Together with (7.5.26), this proves (7.5.25). Theorem 7.5.11 is proved.

Proof of Theorem 7.5.8. It follows from the identity (7.5.4) that

P(S � x) = (1 − p)∞∑

n=0

pnP(Zn � x − a+n), (7.5.27)

where Zn =∑n

i=1(χi − a+), a+ = Eχ and the χi are independent and followthe same distribution as χ. Hence, owing to Lemma 7.5.7(ii), the probabilityP(χ � x) = U(x) has the form (7.5.18) and therefore

G+(t) ≡ P(χ − a+ � t) = U(t + a+)

=F I

+(t + a+)a−p

+bF+(t)

p+ o

(F+(t)

). (7.5.28)

Now let us verify that G+(t) satisfies condition [D(1,q)]. First let F ∈ R,Eζ2<∞. Then, as t → ∞, Δ → 0,

G+(t(1 + Δ)) − G+(t) = − 1a−p

t(1+Δ)+a+∫t+a+

F+(u) du + o(F+(t)

)= − Δt

a−pF+(t)(1 + o(1)) + o

(F+(t)

). (7.5.29)

Since by Lemma 7.5.7(i)

G+(t) ∼ F I+(t)a−p

∼ tF+(t)a−p(α − 1)

,

we obtain

G+(t(1 + Δ)) − G+(t) = −G(t)[Δ(α − 1)(1 + o(1)) + o(t−1)

],

which means that condition [D(1,q)] is met for q(t) = t−1 and an α-parametervalue αG equal to α − 1 and corresponding to the index of G+.

Page 396: Asymptotic analysis of random walks

7.5 An alternative approach to studying P(Sn � x) 365

Therefore, by (7.5.28) and Theorem 7.5.11 we have, uniformly in n � √x, as

x → ∞,

P(Zn � x − a+n)

= nG+(x − a+n)(

1 + o

(√n

x

))= n

[F I

+(x − a+(n − 1))a−p

+bF+(x)

p+ o

(F+(x)

)](1 + o

(√n

x

))= n

[F I

+(x)a−p

+a+(n − 1)F+(x)

a−p(1 + o(1))

+bF+(x)

p+ o

(F+(x)

)](1 + o

(√n

x

)). (7.5.30)

Now we return to the identity (7.5.27) and split the sum on its right-hand sideinto two parts, a sum over n <

√x and a sum over n � √

x. Then the second partwill not exceed cp

√x, so that

P(S � x) = O(p√

x)

+F I

+(x)a−(1 − p)

+2a+F+(x)pa−(1 − p)2

+bF+(x)1 − p

+ o(F+(x)

).

(7.5.31)This proves (7.5.20).

Now consider the case F ∈ R, Eζ2 = ∞. (Since ζ � −a+, the left tail G−(t)of the distribution ζ disappears for t < −a+ and therefore αG = α − 1 � 2.)Owing to (7.5.29) and Lemma 7.5.7(ii), condition [D(1,q)] is again satisfied forq(t) = t−1. We find from (7.5.24) that (cf. (7.5.30))

P(Zn � x − a+n) = nG(x − a+n)(

1 + o

(σ(n)

x

)).

This again yields (7.5.31) and therefore (7.5.20).Now assume that F ∈ Se. As before, we will verify that G satisfies [D(1,q)]

(see (7.5.22)). Since here

G+(t) ∼ F I+(t)a−p

∼ z(t)F+(t)αa−p

, z(t) =t

l(t),

it follows from (7.5.29) with Δt = o(z(t)

)that l(t(1 + Δ)) − l(t) = o(1),

G+(t(1 + Δ)) − G+(t) = − Δt

a−pF+(t)(1 + o(1)) + o

(F+(t)

)= −G+(t)

[Δαl(t)(1 + o(1)) + o(z−1(t))

].

Thus condition [D(1,q)] is satisfied for q(t) = z−1(t). Hence, by virtue of Theo-rem 7.5.11(iii), we again find (cf. (7.5.30)) that, for z = z(x),

P(Zn � x − a+n) = nG(x − a+n)(1 + o(

√nz−1)

)uniformly in n = o

(min{x, z2}). As before, we turn to (7.5.27) and split the

Page 397: Asymptotic analysis of random walks

366 Asymptotic properties of functions of distributions

sum in this representation into three parts: for n1 = �εz , n2 = �εx , whereε = ε(x) → 0 slowly enough (so that ε > z−1/2), we write

∞∑n=0

=n1∑

n=0

+n2∑

n=n1+1

+∞∑

n=n2+1

=: Σ1 + Σ2 + Σ3.

Then

Σ3 = O(e−cεx

)= O

(e−cεl(x)z

)= O

(e−cl(x)

√z)

= o(F+(x)

). (7.5.32)

In the sum Σ1 we have n = o(z). For such an n, one clearly has l(x − a+n) =l(x) + o(1) and so (cf. (7.5.30)) we obtain

P(Zn � x − a+n) = n

[F I

+(x)a−p

+a+(n − 1)F+(x)

a−p+

bF+(x)p

+ o(F+(x)

)](1 + o

(√n

z

)),

so that, as in (7.5.31),

(1 − p)Σ1 =F I

+(x)a−(1 − p)

+a+F+(x)p

2a−(1 − p)2+

bF+(x)(1 − p)

+ o(F+(x)

). (7.5.33)

In the sum Σ2, we have n � εx, π1 ≡ nl(x)x−2 � ε/z → 0 and therefore,owing to Corollary 5.2.2(i), for a c1 < ∞,

P(Zn � x − a+n) � c1nG+

(x − a+n

r0

), r0 = 1 +

π1h

2(1 + o(1)).

But, for n = o(x) and all small enough π1,

l

(x − a+n

r0

)> l(x − a+n)(1 − cπ1)

= l(x) − αa+nz−1 + o(max{1, nz−1})+ O

(nz−2

)> l(x) − c2nz−1

for a c2 > 0. Hence

(1 − p)Σ2 � c3

∞∑n1+1

exp{−l(x) + c2

n

z+ n ln p + lnn

}� c3 exp

{−l(x) +

εz

2ln p

}� c3 exp

{−l(x) +

√z

2ln p

}= o

(F+(x)

).

From this and (7.5.32), (7.5.33) we obtain (7.5.20). The theorem is proved.

The coefficient c in (7.5.20) also admits a somewhat different representation.Since b = a(2)/2a2

− and, owing to factorization identities,

a− = − Eξ

1 − p, a+ =

(1 − p)p

ES, Eξ2 = a(2)(1 − p) − 2Eξ ES,

Page 398: Asymptotic analysis of random walks

7.6 A Poissonian representation for (θ, S) 367

we have

b =(Eξ2 + 2Eξ ES)(1 − p)

2(Eξ)2,

c =Eξ2 + 2Eξ ES

2(Eξ)2− 2ES

Eξ=

Eξ2

2(Eξ)2− ES

Eξ.

7.6 A Poissonian representation for the supremum S and the time when it

was attained

We will complete this chapter with a remark that is somewhat outside the main-stream exposition of the book and applies to arbitrary random walks (those with-out any conditions on the distribution F), for which S = supk�0 Sk < ∞ a.s.

Let (τ, Z), (τ1, Z1), (τ2, Z2), . . . be a sequence of i.i.d. random vectors withdistribution

P(τ = j, Z � t) =P(Sj � t)

Dj, j � 1, t > 0, (7.6.1)

where

D =∞∑

j=1

P(Sj � 0)j

< ∞.

The latter condition is equivalent to the relation S < ∞ a.s. (see e.g. § 7, Chap-ter XII of [122] or § 2, Chapter 11 of [49]). Here we make no additional assump-tions on the distribution of ξj .

Denote by θ the time when the supremum value S is first attained in the randomwalk {Sk}:

θ := min{k � 0 : Sk = S

},

and by ν an r.v. which is independent of {(τj , Zj); j � 1} and has the Poissondistribution with parameter D.

Theorem 7.6.1. If D < ∞ then the following representation holds true:

(θ, S) d= (τ1, Z1) + · · · + (τν , Zν), (7.6.2)

where the right-hand side is equal to (0, 0) when ν = 0. In particular,

Sd=

ν∑j=1

Zj ,

where

P(Z � t) =: U(t) =1D

∞∑j=1

P(Sj � t)j

, t > 0.

Page 399: Asymptotic analysis of random walks

368 Asymptotic properties of functions of distributions

If the distribution U(t) is subexponential then, as x → ∞, for a suitable func-tion n(x) → ∞ we have

P(Z1 + · · · + Zn � x) ∼ nU(x)

for all n � n(x). In this case,

P(S � x) ∼∞∑

k=0

P(ν = k)kU(x) = EνU(x) =∞∑

j=1

P(Sj � x

)j

.

The required subexponentiality property U(t) will apparently follow from thesubexponentiality of F. In any case, U(t) has this property under the conditionsof Theorem 2.7.1 in the case E|ξj | = ∞ (see also (2.7.7)). This is also true whenthe conditions Eξj = −a < 0 and [ · , =], V ∈ R, are satisfied. Indeed, in thiscase, for all n as x → ∞,

P(Sn � x) = P(Sn + an � x + an) ∼ nF+(x + an),

so that by Theorem 1.1.4(iv)

U(x) =1D

∞∑n=1

P(Sn � x)n

∼ 1D

∞∑n=1

F+(x + an)

∼ 1aD

∞∫x

F+(t) dt ∼ xF+(x)aD(α − 1)

. (7.6.3)

The distribution U(t) will be subexponential for semiexponential distributionsF as well; this can be verified in the same way as (7.6.3) (see Chapter 5).

Proof of Theorem 7.6.1. The assertion of the theorem immediately follows fromCorollary 6, § 15 of [42], which establishes the infinite divisibility of the distribu-tion of (θ, S) via the representation

EωθeiλS = exp

⎧⎨⎩∞∑

k=1

∞∫0

(ωkeiλt − 1

)dP(Sk < t)k

⎫⎬⎭= exp

{ ∞∑k=1

ωk

kE(eiλSk ;Sk � 0) − D

}(see formula (10) in § 15 of [42]). Now, the integral representation

EωP

j�ν τj eiλP

j�ν Zj

for the sum on the right-hand side of (7.6.2) has exactly this form. The theoremis proved.

Results similar to Theorem 7.6.1 can also be obtained for the distribution of(θ∗, S), where θ∗ := max{k � 0 : Sk = S} (see [42]).

Page 400: Asymptotic analysis of random walks

8

On the asymptotics of the first hitting times

8.1 Introduction

For x � 0, let

η+(x) := inf{k � 1 : Sk > x

}, η−(x) := inf

{k � 1 : Sk � −x

},

where we put η+(x) = ∞ (η−(x) = ∞) if Sk � x (Sk > −x) for all k = 1, 2, . . .

The objective of the present chapter is to find the asymptotics of and bounds forthe probabilities P(η−(x) = n) and P(η+(x) = n) or for their integral analoguesP(n < η−(x) < ∞) and P(n < η+(x) < ∞) as n → ∞. The level x � 0 canbe either fixed or growing with n. Special attention will be paid to the randomvariables η± = η±(0). Since {η+(x) > n} = {Sn � x}, the asymptoticsof P

(η+(x) > n

) → 0 can be considered as the asymptotics of the probabilitiesof small deviations of Sn.

In the study of the above-mentioned asymptotics, the determining role is playedby the ‘drift’ in the random walk, which we will characterize using the quantities

D = D+ :=∞∑

k=1

P(Sk > 0)k

and D− :=∞∑

k=1

P(Sk � 0)k

, (8.1.1)

where clearly D+ + D− = ∞.Let p := P(η+ = ∞). It is well known (see e.g. [122, 49]) that

{D+ < ∞, D− = ∞} ⇐⇒ {η− < ∞ a.s., p > 0}⇐⇒ {S = −∞, S < ∞ a.s.} (8.1.2)

and

{D+ = ∞, D− = ∞} ⇐⇒ {η− < ∞, η+ < ∞ a.s.}⇐⇒ {S = −∞, S = ∞ a.s.},

where S = supk�0 Sk and S = infk�0 Sk. Also, it is obvious that a relationsymmetric to (8.1.2) holds in the case {D− < ∞, D+ = ∞}. Taking into accountthis symmetry, we can confine ourselves to considering the case

A0 = {D− = ∞, D+ = ∞}

369

Page 401: Asymptotic analysis of random walks

370 On the asymptotics of the first hitting times

and only one of the two possibilities

A− = {D− = ∞, D+ < ∞} or A+ = {D− < ∞, D+ = ∞}.If Eξ = a exists then

A0 = {a = 0}, A− = {a < 0}, A+ = {a > 0}.In what follows, the classification of the results will be made according to the

following three main criteria:

(1) the value of x (we will distinguish between the cases x = 0, a fixed valueof x > 0 and x → ∞);

(2) the direction of the drift (one of the possibilities A0, A±);(3) the character of the distribution of ξ.

Accordingly, the present chapter has the following structure. In § 8.2 we studythe case of a fixed level x, mostly when x = 0: §§ 8.2.1–8.2.3 are devoted tothe case A−, x = 0, for various distributions classes for ξ; § 8.2.4 deals withthe case A0. The relationship between the distributions of η±(x) and η± for afixed x > 0 is discussed in § 8.2.5.

In § 8.3 we consider the case x → ∞ together with n; §§ 8.3.1–8.3.3 deal withvarious distribution classes for the law of ξ.

The asymptotics of the distributions of η±(x) do not always depend on the ex-act asymptotic behaviour of the distribution tails of ξ. But sometimes, in the casesof regularly varying or exponentially fast decaying tails F+, the desired asymp-totics are closely related to each other (cf. Chapter 6). Therefore exponentiallyfast decaying distributions are also considered in the present chapter.

A survey of results, which is close to the exposition of this chapter, was pre-sented in [193].

8.2 A fixed level x

8.2.1 The case A− with x = 0. Introduction

Assuming that a = Eξ < 0 exists, we will concentrate here on studying theasymptotics of the probabilities

P(η− > n) and P(η+ = n) as n → ∞, (8.2.1)

and also on the closely related asymptotics of the probability P(θ = n) for thetime

θ := min{n � 0 : Sn = S}when the random walk first attains its maximum value S. This relation has asimple form. Since

P(θ = n) = P(Sn−1 < Sn = Sn)P(maxk�n

(Sk − Sn) = 0)

Page 402: Asymptotic analysis of random walks

8.2 A fixed level x 371

(the first factor on the right-hand side is equal to 1 for n = 0) and from the dualityprinciple for random walks (see e.g. § 2, Chapter XII of [122]) we have

P(Sn−1 < Sn = Sn

)= P(η− > n),

it follows that

P(θ = n) = pP(η− > n) (8.2.2)

for p = P(S = 0) = P(η+ = ∞) = 1/Eη− (with regard to the last twoequalities, see Theorem 8.2.1(i) below or [122, 49, 42]).

We will consider the following distribution classes, which we have alreadydealt with, and their extensions:

• the class R of distributions with regularly varying right tails

F+(t) = V (t) = t−αL(t), α � 0, (8.2.3)

where L(t) is an s.v.f., and• the class Se of semiexponential distributions with tails

F+(t) = V (t) = e−l(t), l(t) = tαL(t), α ∈ (0, 1), (8.2.4)

where L(t) is an s.v.f. such that, for Δ = o(t), t → ∞, and any fixed ε > 0,⎧⎪⎪⎨⎪⎪⎩l(t + Δ) − l(t) ∼ αΔl(t)

tif

αΔl(t)t

> ε;

l(t + Δ) − l(t) → 0 ifαΔl(t)

t→ 0

(8.2.5)

(see also (5.1.1)–(5.1.5), p. 233).

According to Remark 1.2.24, a sufficient condition for (8.2.5) is that the func-tion L(t) is differentiable for all large enough t and that L′(t) = o(L(t)/t),t → ∞. Another sufficient condition for (8.2.5) is that l(t) = l1(t) + o(1),t → ∞, where l1(t) satisfies (8.2.5).

Some properties of distributions from the classes R and Se were studied inChapter 1. In particular, we saw there that distributions from these classes aresubexponential (see § 1.1.4 and Theorem 1.2.36 respectively).

It turns out that if we somewhat extend the classes R and Se by allowing thefunctions F+ to ‘oscillate slowly’ (see below) then distributions from the newclasses will still possess the properties of the distributions F ∈ R and F ∈ Se

that are required for our purposes in this chapter. So we will deal with suchextensions as well.

An alternative to R and Se is

• the class C of exponentially fast decaying distributions, i.e. distributions thatsatisfy the right-sided Cramer condition:

λ+ := sup{λ : ϕ(λ) < ∞} > 0, where ϕ(λ) := Eeλξ.

Page 403: Asymptotic analysis of random walks

372 On the asymptotics of the first hitting times

Let λ0 be the point at which the minimum

minλ

ϕ(λ) =: ϕ

occurs. We will distinguish between the two possibilities:

(1) λ0 � λ+, ϕ′(λ0) = 0,

(2) λ0 = λ+, ϕ′(λ+) < 0.(8.2.6)

It is evident that in case (1) we always have ϕ′(λ0) = 0 if λ0 < λ+ and Eξ < 0.Now we will turn to previously known results. As usual, by the symbol c, with

or without indices, we will be denoting a constant, not necessarily with the samemeaning if it appears in different formulae.

In the case of bounded lattice-valued ξ’s, the complete asymptotic analysis ofP(η±(x) = n) for all x and n → ∞ was presented in [33, 34].

For an extended version of the class R, the asymptotics P(θ = n) ∼ cV (−an)and hence that of P(η− > n) were obtained in Chapter 4 of [42]. It was alsoestablished there that, for the class C in the case λ0 < λ+, one has

P(θ = n) ∼ cϕn

n3/2

(∼ pP(η− > n) owing to (8.2.2)). (8.2.7)

Comprehensive results on the asymptotics of the probabilities P(η−(x) > n)and P(n < η+(x) < ∞) in the case of a fixed x � 0 for the classes R and C wereobtained in [116, 32, 104, 27, 193]. In this connection, one should note that someresults from Theorems 8.2.3, 8.2.12, 8.2.14 and 8.2.16 below are already known.Nevertheless, we present them here for completeness of exposition.

Necessary and sufficient conditions for the finiteness of Eηγ−, γ > 0, and some

relevant problems were studied in [143, 154, 138, 160]. It was found, in particular,that Eηγ

− < ∞, γ > 0, iff E(ξ+)γ < ∞ where ξ+ = max{0, ξ} (see e.g. [143]).Unimprovable bounds for P(η− > n) were given in § 43 of [50].

Because of the established relationship (8.2.2) between the distribution of ther.v.’s θ and η−, we will concentrate on the distributions of η± in what follows.

We will begin with results that clarify the relationship between the distributionsof η− and η+, which is of independent interest. Introduce an r.v. ζ having agenerating function

H(z) := Ezζ =1 − Ezη−

(1 − z)Eη−

(Eη− < ∞ owing to our assumption that Eξ < 0), so that

P(ζ = k) =P(η− > k)

Eη−, k = 0, 1, . . . ,

and put p(z) := E(zη+ | η+ < ∞),

p := P(η+ = ∞) = P(S = 0), q := 1 − p = P(η+ < ∞).

Page 404: Asymptotic analysis of random walks

8.2 A fixed level x 373

Moreover, let η be an r.v. with distribution

P(η = k) = P(η+ = k| η+ < ∞)

(and generating function p(z)); let η1, η2, . . . be independent copies of η,

T0 := 0, Tk := η1 + · · · + ηk, k � 1,

and ν be an r.v. with the geometric distribution P(ν = k) = pqk, k � 0, that isindependent of {ηj}.

Theorem 8.2.1. The following assertions hold true.

(i)

D =∞∑

k=1

P(Sk > 0)k

= − ln p, p =1

Eη−. (8.2.8)

(ii)

H(z) =1 − q

1 − qp(z), (8.2.9)

and therefore the distribution of η+ completely defines that of η−, and viceversa.

(iii) For any n � 0,

P(η− > n) = pP(Tν = n) > P(η+ = n). (8.2.10)

(iv) If the sequence{P(η+ = n)/q;n � 1

}is subexponential then, as n → ∞,

P(η− > n) ∼ 1p2

P(η+ = n). (8.2.11)

If the sequence

bn :=P(Sn > 0)

nD, n � 1, (8.2.12)

is subexponential then, as n → ∞,

P(η− > n) ∼ eD

nP(Sn > 0),

P(η+ = n) ∼ e−D

nP(Sn > 0).

(8.2.13)

(v) The r.v. ζ has an infinitely divisible distribution and admits the representation

ζd= ω1 + · · · + ων , (8.2.14)

where ω1, ω2, . . . are independent copies of an r.v. ω with distribution

P(ω = k) = P(Sk > 0)/kD, k = 1, 2, . . . ,

and the r.v. ν is independent of {ωi} and has the Poisson distribution withparameter D.

Page 405: Asymptotic analysis of random walks

374 On the asymptotics of the first hitting times

It follows from (8.2.14) that

P(η− > n) = Eη−P(ω1 + · · · + ων = n). (8.2.15)

Remark 8.2.2. The assertion of the theorem that the distributions of the r.v.’s η±determine each other looks somewhat surprising, as a similar assertion for the firstpositive and first non-positive sums χ+ := Sη+ and χ− := Sη− would be wrong.Indeed, if F−(t) = ce−ht for t � 0, c < 1, h > 0, then for any F+ we wouldhave P(χ− < −t) = e−ht, t > 0, whereas

1 − E(eiλχ+ ; η+ < ∞) =(1 − f(λ)) (iλ + h)

iλ, f(λ) := Eeiλξ.

Therefore, the distributions P(χ+ > x, η+ < ∞) will be different for differenttails F+(t), t > 0.

Proof. The proof of Theorem 8.2.1 generally follows the path used in § 21 of [42]to study the asymptotics of P(θ = n). It is based on the factorization identities

1 − Ezη− = exp{−

∞∑n=1

zn

nP(Sn � 0)

}, (8.2.16)

1 − E(zη+ ; η+ < ∞) = exp

{−

∞∑n=1

zn

nP(Sn > 0)

}, (8.2.17)

|z| � 1 (see e.g. [122, 49, 42]). Since

11 − z

= e− ln(1−z) = exp{ ∞∑

n=1

zn

n

},

the identities (8.2.16), (8.2.17) can be rewritten as

∞∑n=0

znP(η− > n) =1 − Ezη−

1 − z= exp

{ ∞∑n=1

zn

nP(Sn > 0)

}, (8.2.18)

∞∑n=1

znP(η+ = n) = 1 − exp{−

∞∑n=1

zn

nP(Sn > 0)

}. (8.2.19)

We now consider the theorem parts in turn.

(i) Letting z = 1 in (8.2.18), (8.2.19), we obtain D = − ln p, p = 1/Eη−.

(ii) The assertion (8.2.9) follows from comparing (8.2.18) with (8.2.19).

(iii) The relation (8.2.9) is clearly equivalent to the equality

1

1 −∞∑

n=1

znP(η+ = n)

=∞∑

n=0

znP(η− > n),

Page 406: Asymptotic analysis of random walks

8.2 A fixed level x 375

of which the left-hand side is equal to∞∑

k=0

qk∞∑

n=0

znP(Tk = n) = pEzTν .

This immediately implies (8.2.10).

(iv) Consider the entire functions

A−(v) := evD and A+(v) := 1 − e−vD.

For |z| � 1, the relations (8.2.18), (8.2.19) can be rewritten in the form∞∑

n=0

znP(η− > n) = A−(b(z)

), (8.2.20)

∞∑n=1

znP(η+ = n) = A+

(b(z)

), (8.2.21)

where

b(z) :=1D

∞∑n=1

znP(Sn > 0)n

≡∞∑

n=1

znbn

and the bn were defined in (8.2.12). Since {bn} is subexponential by assumption,it only remains to make use of the known theorems (see § 1.4 or e.g. [83]) onfunctions of distributions (specified by the functions A± in (8.2.20), (8.2.21)).

As A± are entire functions and A′−(1) = DeD, A′

+(1) = De−D, we obtainby virtue of (8.2.20), (8.2.21) and Theorem 1.4.3 that

P(η− > n) ∼ bnA′−(1) = eD P(Sn > 0)

n= Eη−

P(Sn > 0)n

,

P(η+ = n) ∼ bnA′+(1) = e−D P(Sn > 0)

n= P(η+ = ∞)

P(Sn > 0)n

.

This proves (8.2.13). The relation (8.2.11) can be proved in exactly the same way,taking into account the fact that the function

A(v) :=1 − q

1 − qv

is analytic in the disk |v| < 1/q, A′(1) = q/(1 − q), and that H(z) = A(p(z))

owing to (8.2.9). Therefore, as n → ∞,

P(η− > n)Eη−

∼ q

1 − q

P(η+ = n)q

,

which (taking account of (8.2.8)) is equivalent to (8.2.11).

(v) The assertion (8.2.14) follows from the observation that

H(z) = exp{ ∞∑

n=1

znP(Sn > 0)n

− D

}= exp

{D(Q(z) − 1)

}, (8.2.22)

Page 407: Asymptotic analysis of random walks

376 On the asymptotics of the first hitting times

where

Q(z) =∞∑

n=1

znP(Sn > 0)nD

.

It is clear that the right-hand side of (8.2.22) is the generating function of therandom sum ω1 + · · · + ων . The theorem is proved.

Now we will turn to ‘explicit’ asymptotic representations for the distributionsof the r.v.’s η± in terms of the distribution F.

8.2.2 The case Eξ < 0 for x = 0 when F belongs to the classes R or Se or to

their extensions

Theorem 8.2.3. Let a = Eξ < 0 and either F ∈ R or F ∈ Se, where we assumein the latter case that α < 1/2. Then, as n → ∞,

P(η− > n) ∼ eDV (|a|n), (8.2.23)

P(η+ = n) ∼ e−DV (|a|n), (8.2.24)

where D is defined in (8.1.1), (8.2.8) and

eD = Eη−, e−D = P(S = 0) = P(η+ = ∞). (8.2.25)

The same assertion holds for the ‘dual’ pair of r.v.’s

η0+ := inf{k � 1 : Sk � 0}, η0

− := inf{k : Sk < 0}.Theorem 8.2.4. Assume that the conditions of Theorem 8.2.3 are satisfied. Then,as n → ∞,

P(η0− > n) ∼ eD0

V (|a|n),

P(η0+ = n) ∼ e−D0

V (|a|n),

where

D0 :=∞∑

k=1

P(Sk � 0)k

, eD0= Eη0

− =1

P(η0+ = ∞)

.

Remark 8.2.5. Both assertions of Theorem 8.2.3 have a simple ‘physical’ in-terpretation. First note that, under the conditions of the theorem, the probabilitythat during time m a large jump of size x ∼ cn or bigger will occur is approx-imately equal to mV (x). Further, the rare event {η− > n} where n is largeoccurs, roughly speaking, when there is a large jump of size |a|n in a time in-terval of length η− (with mean Eη−); after that, the random walk will need, onaverage, a time n to reach the negative half-axis. Therefore it is natural to expectthat P(η− > n) ∼ Eη−V (|a|n).

The result P(Sη− > x) ∼ Eη−V (x) of [8] admits a similar interpretation.The rare event {η+ = n} occurs when first (prior to time n) the trajectory stays

Page 408: Asymptotic analysis of random walks

8.2 A fixed level x 377

above zero (the probability of this is close to P(S = 0); at time n the trajectorywill be in a ‘neighbourhood’ of the point an) and then, at time n, there occursa large jump of size � −an. These considerations explain, to some extent, theasymptotics (8.2.24).

Proof of Theorem 8.2.3. The assertions (8.2.23) and (8.2.24) are simple conse-quences of Theorem 8.2.1(iv) (see (8.2.12), (8.2.13)). We simply have to find theasymptotics of {bn} as n → ∞ and verify that this sequence is subexponential.

The probability P(Sn > 0) can be written in the form P(S0n > |a|n), where

S0k = Sk − ak, ES0

k = 0. Now if F ∈ R then

P(S0n > |a|n) ∼ nV (|a|n) (8.2.26)

(see e.g. §§ 3.4 and 4.4). The same relation holds if F ∈ Se (that is, condi-tions (8.2.4), (8.2.5) are satisfied) and α < 1/2 (see § 5.4; for α � 1/2 thedeviations x = |a|n do not belong to the region where (8.2.26) holds true). Thus,under the conditions of Theorem 8.2.3,

bn ∼ V (|a|n)D

,

and so the sequence {bn} is subexponential (see p. 44). The theorem is proved.

It follows from Theorem 8.2.3 that the relation (8.2.11) holds as n → ∞ andthat, for any increasing function Q(t), we have, putting QI(t) :=

∫ t

0Q(u) du, the

following equivalences:

EQ(η−) < ∞ ⇐⇒ E[Q(ξ/|a|); ξ > 0] < ∞,

E[Q(η+); η+ < ∞] < ∞ ⇐⇒ E[QI(ξ/|a|); ξ > 0] < ∞.

Theorem 8.2.4 can be proved in exactly the same way, using the ‘dual’ identities

1 − Ezη0− = exp

{−

∞∑k=1

zk

kP(Sk < 0)

},

1 − E(zη0

+ ; η0+ < ∞) = exp

{−∑ zk

kP(Sk � 0)

}.

The duality seen in Theorems 8.2.3 and 8.2.4 is present everywhere in the re-mainder of the chapter as well. Because of its obviousness, its further descriptionwill be omitted in what follows.

One can broaden the conditions under which the relations (8.2.23), (8.2.24)remain true.

First we consider the case of finite variance.

Theorem 8.2.6. Let Eξ2 < ∞.

Page 409: Asymptotic analysis of random walks

378 On the asymptotics of the first hitting times

(i) Suppose that, instead of the condition F ∈ R, the following conditions holdin Theorem 8.2.3:

(i1) [ · , <], V ∈ R, and nV 2(n) = o(F+(|a|n)

);

(i2) for any fixed h ∈ (0, 1) and some c < ∞, for all large enough t onehas F+(ht) < cF+(t);

(i3) for any fixed M < ∞ one has F+(t + u) ∼ F+(t) as t → ∞ for|u| < M

√t.

Then the relations (8.2.23), (8.2.24) hold true with V replaced by F+.(ii) Suppose that, instead of the condition F ∈ Se, the following conditions hold

in Theorem 8.2.3:

(ii1) [ · , <], V ∈ Se, α < 1/2;(ii2) for any fixed γ ∈ (α, 1) and some c < ∞, for all large enough t one

has v(th)/v(t) > chγ , where v(t) := − lnF+(t);(ii3) for any fixed M < ∞ one has v(t + u) − v(t) = o(1) as t → ∞,

|u| � M√

t.

Then the relations(8.2.23), (8.2.24) hold true with V replaced by F+.

Proof. We can again use the argument employed to prove Theorem 8.2.3. Onejust has to verify that the new, broader, conditions still imply (8.2.26) and thesubexponentiality of the sequence {bn}.

(i) Let conditions (i1)–(i3) be satisfied. Using ‘truncation’ (at the level h|a|n)to analyze the asymptotics of P(S0

n > an) (see Chapters 3 and 4), one obtainsfrom (i1)–(i3) that, for h ∈ (0, 1),

P(S0n > |a|n) = nP(S0

n > |a|n, ξn − a > h|a|n) + O((nV (h|a|n))2

),

where, for M → ∞ slowly enough, one has, by virtue of Chebyshev’s inequalityand (i2), (i3), that

P(S0n > |a|n, ξn − a > h|a|n)

= P(S0n−1 + ξn − a > |a|n, ξn − a > h|a|n)

= P(S0

n−1 > (1 − h)|a|n)F+(|a|(hn − 1))

+

(1−h)|a|n∫−∞

P(S0n−1 ∈ dt) F+(|a|(n − 1) − t) (8.2.27)

= o(V (n)) + E[F+(|a|(n − 1) − S0

n−1); S0n−1 < M

√n]

+ E[F+(|a|(n − 1) − S0

n−1); M√

n � S0n−1 < (1 − h)|a|n]

= F+(|a|n) + o(F+(n)

).

This proves the relation P(S0

n > |a|n) ∼ nF+(|a|n).

Page 410: Asymptotic analysis of random walks

8.2 A fixed level x 379

The subexponentiality of {bn}, bn ∼ D−1F+(|a|n), follows from the relations(supposing for simplicity that n is even)

n∑k=0

bkbn−k = 2n/2−1∑k=0

bkbn−k + b2n/2,

wheren/2−1∑k=0

=�M√

n�∑k=0

+n/2−1∑

k=�M√n�+1

and, owing to (i2), (i3),

�M√n�∑

k=0

bkbn−k ∼ bn,

n/2−1∑k=�M√

n�+1

bkbn−k < cF+(|a|n)n/2−1∑

k=�M√n�+1

bk = o(bn).

(ii) The case when conditions (ii1)–(ii3) are met is dealt with in a similar man-ner, using the results of Chapter 5.

Theorem 8.2.6 is proved.

The case Eξ2 = ∞ can be considered in exactly the same way. For simplicitylet α ∈ (1, 2) in (8.2.3), and let

F−(t) ≡ P(ξ < −t) � cF+(t). (8.2.28)

Theorem 8.2.7. Under the above conditions, the first assertion of Theorem 8.2.6remains true in the case (8.2.28), provided that we replace the range |u| < M

√t

in (i3) by |u| < Mσ(t), σ(t) = V (−1)(1/t).

If the condition P(ξ < −t) � cF+(t) does not hold then Theorem 8.2.7 stillremains true, but with a different function σ(t) (see Chapter 3).

Proof. The proof of Theorem 8.2.7 completely repeats that of Theorem 8.2.6.One just replaces in (8.2.27) the deviation range {S0

n−1 < M√

n} by the range{S0

n−1 < Mσ(n)} and then uses bounds from Chapter 3.

One could also consider the cases α = 1, α = 2.Now we will derive bounds for the probabilities (8.2.1). Since, owing to The-

orem 8.2.1(iii), one has P(η+ = n) < P(η− > n), we will obtain bounds onlyfor P(η− > n).

Theorem 8.2.8. Assume that there exists a distribution F which satisfies the con-ditions of Theorem 8.2.6 and is such that F+(t) � V (t) := F (t) for all t (in

other words, one has ξd� ξ, P(ξ � t) = V (t), ξ satisfies the conditions of

Theorem 8.2.3, if Eξ = a < 0). Then, for any ε > 0 and all large enough n,

P(η− > n) � V(|a|n(1 − ε)

)Eη− . (8.2.29)

Page 411: Asymptotic analysis of random walks

380 On the asymptotics of the first hitting times

If F ∈ Se, α � 1/2 then we still have (8.2.29) (in the case F ∈ Se, mul-tiplying V

(|a|n(1 − ε))

by a constant factor makes no difference since a smallvariation in ε can change V

(|a|n(1 − ε))

by more than a constant factor).

Proof. The assertion (8.2.29) is next to obvious, since η−d� η−, where η− is

defined as η− but for i.i.d. r.v.’s ξ1, ξ2, . . .⊂= F. Therefore, by virtue of Theo-rem 8.2.3,

P(η− > n) � P(η− > n) ∼ V (|a|n)Eη−, n → ∞. (8.2.30)

Now we will redefine the distribution of ξ as follows,

P(ξ � t) ={

F+(t) for t � M,

V (t) for t > V (−1)(F+(M)),

and then, for a given ε > 0, choose a value M such that Eξ =: a < a + ε,Eη− < Eη− + ε. This, together with (8.2.30), implies (8.2.29).

To prove the second assertion, we can again make use of the above argument,except that there is no relation of the form (8.2.26) for the sums Sn =

∑ni=1 ξi.

If V ∈ Se, α � 1/2 then, from the results of Chapter 5, it follows only that, forany ε > 0 and all large enough n,

P(Sn − an > |a|n) � cn

(V (|a|n)

)1−ε = cne−l(|ba|n)(1−ε) (8.2.31)

(see Corollary 5.2.2, p. 240). Since l(tn) ∼ tαl(n) for a fixed t > 0, we see thatwhen a < a+ε one can bound the right-hand side of (8.2.31) (slightly changing ε

if needed) by

ne−l(|a|n(1−ε)) = nV(|a|n(1 − ε)

).

Owing to (8.2.18), the probabilities P(η− > n) � P(η− > n) do not exceed thecoefficients in the Taylor expansion of

exp{ ∞∑

k=0

zk

kP(Sk > 0)

},

where1k

P(Sk > 0) � V(|a|k(1 − ε)

)for large enough k. Therefore these probabilities will also not exceed the coeffi-cients in the expansion of

exp{

B

∞∑k=0

zkbk

},

where bk ∼ B−1V (|a|k(1 − ε)),∑∞

k=1 bk = 1, and

B :=M∑

k=1

P(Sk > 0) +∑k>M

V(|a|k(1 − ε)

).

Page 412: Asymptotic analysis of random walks

8.2 A fixed level x 381

Now set A (v) := eBv , so that A′(1) = BeB . As the sequence {bk} is subexpo-nential, we obtain from Theorem 1.4.3 that

P(η− > n) < V(|a|n(1 − ε)

)BeB

(1 + o(1)

).

Since the value of V(|a|n(1− ε)

)can be diminished BeB times (or by any other

fixed factor) by slightly changing ε, we have obtained the second assertion of thetheorem. The theorem is proved.

Corollary 8.2.9. If

E := E(el(ξ); ξ > 0

)< ∞,

where l(t) = tαL(t) is a non-decreasing function, α ∈ (0, 1), and L(t) is an s.v.f.then, for any ε > 0,

Eel(|a|(1−ε)η−) < ∞, E(el(|a|(1−ε)η+); η+ < ∞) < ∞.

This assertion follows directly from Theorem 8.2.8 and Chebyshev’s inequality

P(ξ � x) � Ee−l(x).

Remark 8.2.10. In the case V ∈ Se the assertion (8.2.29) admits a simpler proof,based on the crude inequalities (for, say, α < 1/2)

P(η− > n) � P(Sn > 0) � P(Sn > 0)(1 + o(1))

� nV (|a|n) � V(|a|n(1 − ε)

).

Remark 8.2.11. In the assertion (8.2.29) of Theorem 8.2.8 one can assume thatε = ε(n) → 0 as n → ∞. The rate of convergence ε(n) → 0 can be obtainedfrom the bounds for ε in (8.2.31), which are found in Corollary 5.2.2, and fromestimates for the distance between a and a.

8.2.3 The case Eξ < 0 for x = 0 when F belongs to the class CFirst we consider the possibility (1) in (8.2.6) for the case F ∈ C. Recall thatϕ(λ) = Eeλξ, λ0 > 0 is the point where the function ϕ(λ) attains its minimumvalue ϕ := minλ ϕ(λ) and that λ+ = sup{λ : ϕ(λ) < ∞}.

In the case under consideration, one has λ0 � λ+, ϕ′(λ0) = 0, which includesthe following two substantially different subcases:

(1a) λ0 < λ+;(1b) λ0 = λ+, ϕ′(λ+) = 0.

In case (1b), the fact that both quantities ϕ(λ+) and ϕ′(λ+) are finite impliesthat (see (6.1.20), (6.1.21))

F+(t) = e−λ+tV (t),

∞∫0

tV (t)dt < ∞. (8.2.32)

Page 413: Asymptotic analysis of random walks

382 On the asymptotics of the first hitting times

In this situation, we will need the following condition:

[B] Either ϕ′′(λ+) < ∞, or ϕ′′(λ+) = ∞ and

V I(t) =

∞∫t

V (u) du ∈ R. (8.2.33)

The relation (8.2.33) means that

V I(t) = t−αL0(t), (8.2.34)

where α ∈ [1, 2] and L0 is an s.v.f.Now return to the general case λ0 � λ+ and consider the distribution, conju-

gate to F (its Cramer transform):

F(λ0)(dt) :=eλ0t

ϕF(dt). (8.2.35)

Let the ξ(λ0)i be independent r.v.’s with distribution F(λ0). Put

S(λ0)n :=

n∑i=1

ξ(λ0)i .

Then clearly Eξ(λ0)i = 0. If λ0 < λ+ then d := E

(ξ(λ0)i

)2< ∞ and therefore

the distribution of S(λ0)n /

√nd converges weakly to the normal law as n → ∞.

If λ0 = λ+ then (cf. (6.2.6), (6.2.7), p. 309)

V0(t) := P(ξ(λ0)1 � t

)=

∞∫t

eλ0u

ϕF(du)

=1ϕ

∞∫t

(λ0V (u) du − dV (u)

) ∼ λ0

ϕV I(t) ∈ R.

(8.2.36)

This means (see § 1.5) that, provided that the conditions λ0 = λ+ and [B] aresatisfied, the distribution of ξ

(λ0)i belongs to the domain of attraction of the stable

law Fα,1 with parameters (α, 1), α ∈ [1, 2] (note that the left tail of the distribu-tion F(λ0) decays exponentially fast; for α = 2, the limiting law Fα,1 is normal).The distribution Fα,1 has a continuous density f .

Put

Dϕ :=∞∑

k=1

P(Sk > 0)kϕk

, σn :=

{ √nd if d < ∞,

V(−1)0 (1/n) if d = ∞,

where V(−1)0 := inf{v : V0(v) � u} is the (generalized) inverse of the func-

tion V0.Recall that a distribution F is said to be non-lattice if its support is not part

Page 414: Asymptotic analysis of random walks

8.2 A fixed level x 383

of a set of the form {b + hk; k = 0,±1,±2, . . .}, 0 � b � h, where h can beassumed, without loss of generality, to be equal to 1. A distribution F is calledarithmetic if the r.v. ξ ⊂=F is integer-valued with lattice span equal to 1.

Theorem 8.2.12. Assume that F ∈ C, λ0 � λ+, ϕ′(λ0) = 0 and, in the caseλ0 = λ+, that the conditions (8.2.32) and [B] are met. Then, in the non-latticecase,

P(η− > n) ∼ eDϕf(0)ϕn

λ0nσn, (8.2.37)

P(η+ = n) ∼ e−Dϕf(0)ϕn

λ0nσn, (8.2.38)

where f(0) is the value of the density of the limiting stable law Fα,1 at 0.If the r.v. ξ has an arithmetic distribution then the factor 1/λ0 on the right-hand

sides of (8.2.37), (8.2.38) should be replaced by e−λ0/(1 − e−λ0).

Proof. The distributions of Sn and S(λ0)n have a similar relation to that in (8.2.35):

P(Sn ∈ dt) = ϕne−λ0tP(S(λ0)

n ∈ dt)

(see e.g. (6.1.3)), so that

P(Sn > 0) = ϕn

∞∫0

e−λ0tP(S(λ0)

n ∈ dt)

= ϕnλ0

∞∫0

e−λ0uP(S(λ0)

n ∈ (0, u])du. (8.2.39)

Repeating the argument from the proofs of Theorems 6.2.1 and 6.3.1, we obtain

P(Sn > 0) ∼ λ0ϕnf(0)σn

∞∫0

ue−λ0udu =ϕnf(0)λ0σn

.

This result also follows from Corollaries 6.2.3 and 6.3.2.Further, we will again make use of the factorization identities (8.2.18), (8.2.19),

where, having made the change of variables s = zϕ, we will proceed as in theproof of Theorem 8.2.1, using in (8.2.20), (8.2.21) the function

bϕ(s) := b

(s

ϕ

)=

∞∑k=1

bϕ,ksk

instead of b(z). The sequence

bϕ,k =P(Sk > 0)

Dϕkϕk∼ f(0)

λ0kσkDϕ(8.2.40)

Page 415: Asymptotic analysis of random walks

384 On the asymptotics of the first hitting times

is subexponential. Therefore, again applying results from § 1.4, we obtain

P(η− > n)ϕ−n ∼ eDϕf(0)

λ0kσkDϕ.

In the arithmetic case, we use the local theorem∑k

∣∣∣∣P(S(λ0)n = k) − 1

σnf

(k

σn

)∣∣∣∣→ 0

as n → ∞ (see Theorem 4.2.2 of [152]), which implies that

P(Sn > 0) = ϕn∞∑

k=1

e−λ0kP(S(λ0)

n = k) ∼ ϕne−λ0f(0)

(1 − e−λ0)σn.

Again, the same result follows from Corollaries 6.2.3 and 6.3.2.The proof of the second assertion of the theorem is completely analogous.The theorem is proved.

Remark 8.2.13. Note that, in the cases λ0 < λ+ or λ0 = λ+, ϕ′′(λ+) < ∞,one has f(0) = 1/

√2π, σn =

√nd and d = ϕ′′(λ0)/ϕ in the relations (8.2.37),

(8.2.38).Observe also that one could consider the lattice non-arithmetic case as well,

when ξ assumes the values b + hk, k = 0,±1, . . . , 0 < |b| < h. In this case thecoefficient of (nσn)−1 on the right-hand sides of (8.2.37), (8.2.38) will dependon n, oscillating within fixed limits.

Now consider possibility (2) in (8.2.6): λ0 = λ+, ϕ′(λ+) < 0. Then we stillhave (8.2.32). Suppose that V ∈ R, i.e.

V (t) = t−α−1L(t), α > 1, (8.2.41)

where L is an s.v.f.

Theorem 8.2.14. Assume that F ∈ C, λ0 = λ+ and ϕ′(λ+) < 0 and also thatconditions (8.2.32), (8.2.41) are satisfied. Then

P(η− > n) ∼ eDϕϕn−1V (a+n),

P(η+ = n) ∼ e−Dϕϕn−1V (a+n),

where a+ = −Eξ(λ0) = −ϕ′(λ+)/ϕ > 0.

Proof. Under the conditions of Theorem 8.2.14 one still has the equality (8.2.39)for P(Sn > 0), in which λ0 can be replaced by λ+, so that the problem againreduces to finding the asymptotics of

P(S(λ0)

n ∈ (0, u])

= P(S(λ0)

n + a+n ∈ (a+n, a+n + u]).

Page 416: Asymptotic analysis of random walks

8.2 A fixed level x 385

We have

P(ξ(λ0)1 ∈ (t, t + u]

)=

t+u∫t

eλ+v

ϕF(dv)

=

t+u∫t

λ+V (v)ϕ

dv −t+u∫t

dV (v)ϕ

∼ λ+u

ϕV (t).

Again repeating the argument from the proofs of Theorems 6.2.1 and 6.3.1 weobtain that, for any fixed u > 0,

P(S(λ0)

n ∈ (0, u]) ∼ λ+u

ϕαnV (a+n). (8.2.42)

This assertion also follows from Corollaries 6.2.3 and 6.3.2.As before, from (8.2.42) we find that, by virtue of (8.2.39),

P(Sn > 0)ϕ−n ∼ λ+

∞∫0

e−λ+uu duλ+nV (a+n)

ϕα=

nV (a+n)ϕα

.

Thus, using the notation (8.2.40) we have

bϕ,k =P(Sk > 0)ϕ−k

Dϕk∼ V (a+k)

ϕαDϕ.

As in Theorem 8.2.12 the sequence {bϕ,k} is subexponential, and hence the re-mainder of the previous proof will remain valid.

The theorem is proved.

It appears that it would not be very difficult to extend the assertion of Theo-rem 8.2.14 to the case V ∈ Se, α < 1/2.

Observe that the assumption (8.2.41), as may be seen from the relation (8.2.42),corresponds to the non-lattice case. The lattice case can be considered in exactlythe same way, but this will require a change in the form of condition (8.2.41).Note also that the local theorems 2.1 and 3.1 of [54] for the lattice case wereobtained earlier in [105].

8.2.4 The case A0 for x = 0

First note that we have here the following analogue of Theorem 8.2.1.

Theorem 8.2.15. For the case A0 we have that

1 − Ezη−

1 − z= exp

{ ∞∑k=1

zk

kP(Sk > 0)

}=

11 − Ezη+

,

Eη± = ∞ and that the distribution of η− is uniquely determined by that of η+

and vice versa.

Page 417: Asymptotic analysis of random walks

386 On the asymptotics of the first hitting times

Proof. The proof of the theorem follows in an obvious way from (8.2.17), (8.2.18)and the fact that P(η+ < ∞) = 1.

Now, for γ ∈ (0, 1) put

Δn := P(Sn � 0) − γ, D(z) := exp{−

∞∑k=1

zkΔk

k

}. (8.2.43)

Theorem 8.2.16. The following assertions hold true for the case A0.

(i) The relation

P(η− > n) ∼ n−γL(n)Γ(1 − γ)

as n → ∞, γ ∈ (0, 1), L is an s.v.f.,

(8.2.44)holds iff

n−1n∑

k=1

P(Sk � 0) → γ (8.2.45)

or, equivalently, iff

P(Sn � 0) → γ. (8.2.46)

(ii) If (8.2.44) or (8.2.45) holds then necessarily L(n) ∼ D(1 − n−1) and

P(η−(x) > n)P(η− > n)

→ r(x), n → ∞,

for any fixed x � 0, where the function r(x) is given in explicit form.(iii) If Eξ = 0, Eξ2 < ∞ then γ = 1/2 and∑ |Δn|

n< ∞. (8.2.47)

This means that the function L(n) in the relation (8.2.44) can be replacedby D(1), 0 < D(1) < ∞.

Proof. The proof of Theorem 8.2.16 can be found in [32], pp. 381–2 (see [32] fora more detailed bibliography; the equivalence of conditions (8.2.45) and (8.2.46)was established in [106]).

It is evident that an assertion symmetric to (8.2.44) holds for η+, η+(x) (withγ and Δn replaced by 1 − γ and −Δn respectively):

P(η+ > n) ∼ nγ−1L−1(n)Γ(γ)

.

If Eξ2 = ∞ (assuming that if Eξ is finite then Eξ = 0) then, under the wellknown regularity conditions on the distribution tails of ξ (see p. 57 and Theo-rem 1.5.1), we have, as n → ∞,

P(Sn � 0) → Fα,ρ((−∞, 0]) = Fα,ρ,−(0) ≡ γ,

Page 418: Asymptotic analysis of random walks

8.2 A fixed level x 387

where Fα,ρ is a stable law with parameters α ∈ (0, 2], ρ ∈ [−1, 1].For symmetric ξ’s one has γ = 1/2,

P(Sn � 0) − 12

=12

P(Sn = 0) <c√n

,

so that the series (8.2.47) is convergent and

D(z) = exp{−1

2

∞∑n=1

zn

nP(Sn = 0)

}.

It may be seen from the above that here, in contrast with the case A−, theinfluence of the distribution tails of ξ on the asymptotics (8.2.1) is less significant.

There is a vast literature on the rate of convergence of

F (n)(t) := P(Sn/σn < t)

(for a suitable scaling sequence σn) to Fα,ρ,−(t). In particular, one can findin it conditions that are sufficient for convergence of the series (8.2.47) in thecase Eξ2 = ∞. For illustration purposes, we will present here just one of theresults known to us :

Let νr :=∫ |xr(F(dx) − Fα,ρ(dx))| < ∞ for some r > α, and assume that∫

x(F(dx) − Fα,ρ(dx)) = 0 in the case α � 1. Then

supt

∣∣F (n)(t) − Fα,ρ,−(t)∣∣ � cνrn

1−r/α

(see [216, 217]).

The convergence (8.2.47) clearly occurs in such a case.Now we will obtain a refinement of Theorem 8.2.16.

Theorem 8.2.17.

(i) If in the case A0 we have that

Δn ∼ cn−γ as n → ∞, 0 � c < ∞, γ ∈ (0, 1), (8.2.48)

then

P(η− = n) ∼ γn−γ−1

Γ(1 − γ)D(1), 0 < D(1) < ∞. (8.2.49)

(ii) If Eξ = 0, E|ξ|3 < ∞ and the distribution ξ is either lattice or satisfies thecondition

lim sup|λ|→∞

|f(λ)| < 1,

then γ = 1/2 and the relations (8.2.48), (8.2.49) hold true.

Similar relations hold for P(η+ = n).

Page 419: Asymptotic analysis of random walks

388 On the asymptotics of the first hitting times

Concerning the asymptotics of P(η+ = n) under weaker conditions, see re-marks after Theorem 8.2.18 below.

Proof of Theorem 8.2.17. Owing to (8.2.16) and (8.2.43), we have

Ezη− = 1 − exp{−γ

∞∑k=1

zk

k−

∞∑k=1

zkΔk

k

}= 1 − (1 − z)γD(z). (8.2.50)

The asymptotics of the coefficients ak in the expansion of the function

a(z) := −(1 − z)γ =∞∑

k=0

akzk

are well known:

an ∼ γn−1−γ

Γ(1 − γ). (8.2.51)

To find the asymptotics of the coefficients dk in the expansion

D(z) =∞∑

k=0

dkzk,

we will make use of Theorem 1.4.6 (p. 55) with A(w) = ew, gk = −Δk/k toclaim that

dn ∼ A′(g(1))gn = −D(1)n

Δn ∼ cD(1)n−γ−1, n → ∞. (8.2.52)

Now, from (8.2.50) with ε = ε(n) → 0 slowly enough as n → ∞, we have(assuming for simplicity that εn is integer-valued)

P(η− = n) =n∑

k=0

akdn−k =εn∑

k=0

+(1−ε)n∑k=εn+1

+n∑

k=(1−ε)n+1

,

where, by virtue of the equality a(1) = 0,εn∑

k=0

=εn∑

k=0

akdn(1 + o(1)) = o(dn).

Further,n∑

k=(1−ε)n+1

=n∑

k=(1−ε)n+1

an(1 + o(1)) dn−k ∼ D(1)an,

and so from (8.2.51), (8.2.52) we obtain that

(1−ε)n∑k=εn+1

� n[(εn)−1−γ

]2 = cε−2−2γn−1−2γ = o(an).

This proves (8.2.49).

Page 420: Asymptotic analysis of random walks

8.2 A fixed level x 389

The second assertion of the theorem follows from the observation that, underthe conditions of part (ii), one has

Δn = P(Sn � 0) − 12

=cEξ3

√n

+ o

(1√n

)(see e.g. Theorem 21 in Chapter V of [224]), and so (8.2.48) holds with γ = 1/2.

The theorem is proved.

8.2.5 A fixed level x > 0

The asymptotics of the distributions of r.v.’s η±(x) for x > 0 differ from therespective asymptotics for η± by factors that depend only on x. Namely, thefollowing assertion was established in [116, 32, 104, 27, 193].

Theorem 8.2.18. Let the conditions of one of Theorems 8.2.3 (for the class R),8.2.12, 8.2.14, 8.2.16 ((8.2.45)) be satisfied. Then, for the cases A− and A0, asn → ∞, for any fixed x � 0 we have

P(η−(x) > n)P(η− > n)

→ r−(x),

P(n < η+(x) < ∞)P(n < η+ < ∞)

→ r+(x),(8.2.53)

where the functions r±(x) are given in explicit form.

In the case Eξ = 0, d = Eξ2 < ∞ the following local theorem was obtainedin [3]: if the distribution of ξ is arithmetic then there exists a function U(x) suchthat for any fixed x � 0 we have

limn→∞n3/2P(η+(x) = n) = U(x), (8.2.54)

where U(x) ∼ x/√

2πd as x → ∞. If, in addition, E|ξ|3 < ∞ then (8.2.54)holds true for non-lattice distributions as well [193].1 In the case of boundedlattice-valued r.v.’s ξ, asymptotic expansions for P(η±(x) = n) were derivedin [33, 34].

In the next section, we will find the asymptotics of the functions r±(x) from(8.2.53) as x → ∞. In the present subsection we will restrict ourselves to givinga simple proof of Theorem 8.2.18 and the explicit form of the functions r±(x) intwo cases, when Eξ = 0, Eξ2 < ∞ and when Eξ = 0 and condition [Rα,ρ],ρ ∈ (−1, 1), (see p. 57) is met. For the case Eξ < 0 we will prove the first of theassertions (8.2.53).

1 A more general result claiming that Eξ = 0, Eξ2 < ∞ would suffice for (8.2.54) was publishedin [117]. This theorem, however, proved to be incorrect since its conditions contain no restrictionson the structure of F (as communicated to us by A.A. Mogulskii). A paper by A.A. Mogulskii witha correct version of this result was submitted to Siberian Advances in Mathematics.

Page 421: Asymptotic analysis of random walks

390 On the asymptotics of the first hitting times

Proof of Theorem 8.2.18. Let Eξ � 0 (we will assume that Eξ2 < ∞ in theEξ = 0), let η1, η2, . . . and χ1, χ2, . . . be independent r.v.’s that are distributedrespectively as η− and χ−, and let

Tk :=k∑

i=1

ηi, Hk :=k∑

i=1

χi, τ(x) := min{k : Hk � −x}.

Then it is well known (see e.g. § 17 of [42]) that

E|χ−| < ∞, η−(x) = Tτ(x), Eτ(x) < ∞.

Denote by Fn the σ-algebra generated by (η1, χ1), (η2, χ2), . . . , (ηn, χn). Thenobviously {τ(x) � n} ∈ Fn, so that τ(x) will be a stopping time. By virtueof Theorems 8.2.3 and 8.2.16(i) (in the cases under consideration, when Eξ = 0and either Eξ2 < ∞ or condition [Rα,ρ], ρ ∈ (−1, 1), is satisfied, clearly thelimit (8.2.45) will always exist), the distribution tail of ηj is an r.v.f. Since ηi > 0one has maxk�n Hk = Hn and so one can make use of Corollary 7.4.2. It isevident that P(τ(x) > n) decays exponentially fast as n → ∞, and hence theconditions of that theorem are met and therefore

P(η−(x) > n

)= P(Tτ(x) > n) ∼ Eτ(x)P(η− > n), (8.2.55)

where

Eτ(x) =∞∑

k=1

P(Hk � −x) = r−(x).

The first assertion in (8.2.53) is proved. The second assertion in the case Eξ = 0,Eξ2 < ∞ is proved in exactly the same way.

If F ∈ C, Eξ < 0 then the function r−(x) will be of a more complex nature.Since by the renewal theorem

Eτ(x) ∼ x

E|χ−| as x → ∞,

it is natural to expect from (8.2.55) and (8.2.23) that, under the conditions ofTheorem 8.2.3, for Eξ = a < 0 and large x,

P(η−(x) > n

) ∼ xEη−E|χ−| V

(|a|n) =x

|a| V(|a|n). (8.2.56)

For x = o(n), this fact (in its more precise formulation) will be proved in the nextsection.

Observe also that relations similar to (8.2.55), (8.2.56), could also be obtainedfor the distribution tails of χ−(x) = Sη−(x) + x, x � 0. Indeed, using thefactorization identity (see e.g. formula (1) in Chapter 4 of [42])

1 − Eeiλχ− = Eη−EeiλS(1 − f(λ)

), f(λ) = Eeiλξ,

Page 422: Asymptotic analysis of random walks

8.3 A growing level x 391

one can easily demonstrate that, in the case of an l.c. left tail W (t) = P(ξ < −t),we have, as t → ∞,

P(χ− < −t) ∼ Eη−W (t)

(for more detail, see e.g. § 22 of [42]). This corresponds to the above reasoningand the representation

χ− = ξ1 + · · · + ξη− .

A similar representation holds for χ−(x), which makes it natural to expect that,as t → ∞,

P(χ−(x) < −t) ∼ Eη−(x)W (t).

8.3 A growing level x

In this section, it will be convenient for us to change the direction of the driftwhen it is non-zero, so that now the main case is A+ = {a > 0} and the mainobject of study will be the time η+(x) of the first crossing of the level x → ∞.Accordingly, the regular variation condition will now be imposed on the left tail:

F−(t) = P(ξ < −t) = W (t) ≡ t−βLW (t), (8.3.1)

where β > 1 and LW (t) is an s.v.f. at infinity (this is condition [=, · ] as used inChapters 3 and 4).

In the present section, we will deal only with the tail classes R and C and alsowith distributions from the class Ms, for which E|ξ|s < ∞. It is clear thatthe intersection of the classes Ms and R is non-empty. Since {η+(x) > n} ={Sn � x}, we can also view our problem as that on ‘small deviations’ of themaximum Sn, if x does not grow fast enough.

8.3.1 The distribution class CFor distributions from the class C, a comprehensive study of the asymptoticsof P

(η+(x) > n

)= P(Sn � x) for arbitrary a = Eξ was made in [33, 34, 37].

The method used in these papers consists in finding double transforms (in x andin n) of P

(η+(x) > n

)in terms of solutions to Wiener–Hopf-type equations (or,

equivalently, in terms of the factorization components; these components werefound in [33, 34] in explicit form) and then using asymptotic inversion of thesetransforms. For this method to work, one has to assume that the following condi-tion is met.

[Ca] The distribution F of ξ has an absolutely continuous component.

The same approach (in its simpler form) is also applicable in the case ξ islattice-valued (see [33, 34]).

Along with condition [Ca], we will also need Cramer’s condition [C0] on thech.f. f of the distribution F:

Page 423: Asymptotic analysis of random walks

392 On the asymptotics of the first hitting times

[C0] lim sup|λ|→∞

|f(λ)| < 1.

Clearly, [Ca] ⊂ [C0].To formulate the corresponding results we will need some notation. As before,

let θ = x/n,

ϕ(λ) = Eeλξ, Λ(θ) = supλ

{λθ − lnϕ(λ)

}(8.3.2)

and

λ+ = sup{λ : ϕ(λ) < ∞}, λ− = inf{λ : ϕ(λ) < ∞},

θ− = limλ↓λ−

ϕ′(λ)ϕ(λ)

, θ+ = limλ↑λ+

ϕ′(λ)ϕ(λ)

.

The deviation function Λ(θ) is analytic in (θ−, θ+), and the supremum in (8.3.2)is attained at the point λ(θ) = Λ′(θ) (see e.g. § 8, Chapter 8 of [49]). The nextassertion follows from the results of [37].

Theorem 8.3.1. Let F ∈ C, condition [Ca] be satisfied and n → ∞.

(i) If θ− < 0 < θ+ and x → ∞, θ =x

n= o(1) then

P(η(x) = n) ∼ cx

n3/2e−nΛ(θ),

where c is a known constant admitting a closed-form expression in terms ofthe factorization components of the function 1 − zf(λ).

(ii) If 0 < θ1 � θ � θ2 < θ+ then

P(η(x) = n) ∼ c(θ)√n

e−nΛ(θ),

where c(θ) is a known analytic function, also admitting a closed-form ex-pression in terms of the above factorization components.

From this theorem one can easily obtain asymptotic representations for

P(η(x) > n), P(S � x), P(η(x) > n| η(x) < ∞)

and so on. Note also that there are no conditions on the sign of a = Eξ in thestatement of the theorem.

As we have already observed, the method of proof of Theorem 8.3.1 is analyticand is based on the idea of factorization and inverting double transforms. It wasdiscovered relatively recently that this approach is not the only possible one forsolving the problems under consideration. Using direct probabilistic methods, itwas shown in [45, 47] that, in a number of cases, the asymptotics of the prob-abilities of large (and small) deviations of Sn can be found from the respectiveasymptotics for Sn. In particular, the following result holds true.

Page 424: Asymptotic analysis of random walks

8.3 A growing level x 393

Theorem 8.3.2. Let a distribution F ∈ C be either lattice or satisfy condi-tion [C0] and let θ = x/n satisfy the inequalities θ < a, 0 < θ1 � θ � θ2 < θ+.Then, as n → ∞,

P(η(x) > n) ∼ c(θ)P(Sn � x), (8.3.3)

where the function c(θ) is defined below (see 8.3.5).

In fact, the relation (8.3.3) and the form of the function c(θ) are simple corol-laries of the following general fact. Consider an r.v. ξ(λ(θ)) that is the Cramertransform of ξ at the point λ(θ),

P(ξ(λ(θ)) ∈ dt

)=

eλ(θ)t

ϕ(λ(θ))P(ξ ∈ dt), Eξ(λ(θ)) = θ.

Put S(λ(θ))n :=

∑ni=1 ξ

(λ(θ))i , where the r.v.’s ξ

(λ(θ))i are independent copies of

the r.v. ξ(λ(θ)). For simplicity assume that the distribution of Sn has a den-sity and that x/n → θ = const. Then, for any fixed k, the conditional distri-bution of the random vector (ξn, ξn−1, . . . , ξn−k) given Sn ∈ x − dy, wherey = o(x) (for instance, y is fixed), converges as n → ∞ to the distributionof(ξ(λ(θ))1 , ξ

(λ(θ))2 , . . . , ξ

(λ(θ))k

)(see [45, 47]). From here it is not difficult to see

that

P(Sn � x|Sn ∈ x − dy

)→ P(T (λ(θ)) � y

),

where T (λ(θ)) := supk�0

(−S(λ(θ))k

)< ∞ a.s. Now we use the total probability

formula for P(Sn � x) (conditioning on the value of Sn) and take into accountthe facts that, for θ ∈ (θ−, θ+), θ < a,

P(Sn � x) ∼√

Λ′′(θ)λ(θ)

√2πn

e−nΛ(θ) (8.3.4)

(cf. (6.1.16); observe that Λ′′(θ) = 1/d(θ)) and

P(Sn ∈ x − dy)P(Sn � x)

∼ λ(θ)eλ(θ)ydy.

Thus we arrive at (8.3.3) with

c(θ) = λ(θ)

∞∫0

eλ(θ)y P(T (λ(θ)) � y) dy < ∞ (8.3.5)

(here λ(θ) < 0). The above assumption that x/n → θ = const does not restrictthe generality, since all the assertions that we have used hold uniformly in θ.

Moreover, it is not difficult to derive from (8.3.3) and (8.3.5) the asymptoticsof P

(η+(x) = n

).

Page 425: Asymptotic analysis of random walks

394 On the asymptotics of the first hitting times

8.3.2 The distribution class Ms

For distributions from the class Ms, in the absence of any regularity assump-tions on the function (8.3.1) one can obtain the asymptotics of the probabil-ity P(η+(x) > n) only in case A0 = {a = 0}.

It will be assumed everywhere in what follows that, in the lattice case, the valueof the level x → ∞ belongs to the respective lattice.

Theorem 8.3.3. Let F ∈ M3, a = 0, Eξ2 = 1, x → ∞ and x = o(√

n). Then

P(η+(x) > n) ∼ x

√2

πn. (8.3.6)

This assertion immediately follows from the convergence rate estimate

supx

∣∣∣∣P(Sn � x)− 2

[Φ(

x√n

)− 1

2

]∣∣∣∣ <c√n

, c < ∞,

from [203] (see also [204, 202]), which holds when E|ξ|3 < ∞. Here Φ is thestandard normal distribution function. Since Φ(u)−1/2 ∼ u/

√2π as u → 0, we

obtain (8.3.6).

If the distribution of ξ is lattice or condition [C0] is satisfied then, provided thatF ∈ Ms for s > 3, one can also study asymptotic expansions for P

(Sn � x

)as

x → ∞ (see [204, 37, 175, 176]); it is also possible to obtain similar expansionsfor

P(η+(x) = n) ∼ x√2πn3/2

(8.3.7)

(i.e. the local limit theorems for η+(x)).As far as we know, the problem when F ∈ Ms with a > 0 and x → ∞ remains

open. The next subsection is devoted to studying the problem for distributionsfrom the class R.

8.3.3 Asymptotics of P(η+(x) > n) under the conditions x → ∞, a > 0 and

[=, · ] with W ∈ RAlong with the conditions listed in the heading of this subsection, in the caseβ ∈ (1, 2) (see (8.3.1)) we might in addition need condition [ · , <],

P(ξ � t) � V (t), V ∈ R, (8.3.8)

i.e. that the majorant V has the form V (t) = t−αL(t), α > 1, where L is an s.v.f.Put

z := an − x.

Theorem 8.3.4. Let a > 0, x → ∞, x < an, and let condition [=, · ] withW ∈ R be satisfied. Then the following assertions hold true.

Page 426: Asymptotic analysis of random walks

8.3 A growing level x 395

(i) If Eξ2 < ∞, β > 2 and z � √n lnn then

P(η+(x) > n) ∼ x

aW (z). (8.3.9)

(ii) The relation (8.3.9) remains true when β ∈ (1, 2), condition (8.3.8) is metand n and z are such that

nW (z) → 0, nV( z

ln z

)→ 0. (8.3.10)

The meaning of the assertion (8.3.9) is rather simple: the principal contributionto the probability P(Sn � x) comes from trajectories that have a negative jump� x − an at one of the first x/a steps (prior to the crossing of the boundary x bythe ‘drift line’ a(k) = ESk = ak).

It follows from Theorem 8.3.4 and the relation P(Sn � x) ∼ nW (z) (seeChapters 3 and 4) that

P(Sn � x) ∼ x

anP(Sn � x),

so that we will still have the asymptotics (8.3.3), but with a very simple func-tion c(θ) = θ/a instead of a rather complex factor c(θ) (the case θ → 0 is notexcluded).

Proof of Theorem 8.3.4. We will split the argument into three steps.

(1) A lower bound. Let

Gn := {Sn � x}, Bj := {ξj < −z(1 + ε)}, v :=x

a(1 − ε),

where ε > 0 is a fixed small number. Then, assuming for simplicity that v is aninteger, we will have (cf. (2.6.1))

P(Gn) � P(

Gn ∩( v⋃

j=1

Bj

))=

v∑j=1

P(GnBj) + O((xW (z))2

).

It is obvious that, for j � v, P(Gn|Bj) → 1 as x → ∞ owing to the invarianceprinciple. As xW (z) � anW (z) → 0, we have

P(Gn) �v∑

j=1

P(Bj) (1 + o(1)) + O((xW (z))2

) ∼ x

a(1 − ε) W (z(1 + ε)).

Since ε > 0 is arbitrary, we finally obtain

P(Gn) � x

aW (z) (1 + o(1)). (8.3.11)

(2) An upper bound for P(Gn) for truncated summands. Set

Cj :={

ξj > −z

r

}, r > 1, C :=

n⋂j=1

Cj .

Page 427: Asymptotic analysis of random walks

396 On the asymptotics of the first hitting times

Then

P(Gn) = P(GnC) + P(GnC), (8.3.12)

where C is the complement of C,

rP(GnC) �n∑

j=1

P(GnCj), (8.3.13)

P(GnC) � P(Sn − an � −z;C). (8.3.14)

Lemma 8.3.5. If Eξ2 < ∞, β > 2 and x � √n lnn then

P(GnC) � (nW (z))r+o(1).

If β ∈ (1, 2) and conditions (8.3.8), (8.3.10) are met then

P(GnC) � c(nW (z)

)r.

The assertion of the lemma follows from the bound (8.3.14) and Theorem 4.1.2(or Corollary 4.1.3) and Theorem 3.1.1 (or Corollary 3.1.2). If we choose anr > β/(β − 1) then clearly(

nW (z))r = o

(xW (z)

), P(GnC) = o

(xW (z)

). (8.3.15)

(3) An upper bound for (8.3.13). We have

P(GnCj) = P(Sj−1 � x, ξj � −z

r, Sj−1 + ξj + S

(j)n−j � x

),

where the r.v. S(j)n−j

d= Sn−j is independent of ξ1, . . . , ξj . Since ξj does not

depend on ξ1, . . . , ξj−1 or S(j)n−j , one has

P(GnCj) � E[W(−x + S

(j)n−j + Sj−1

); S

(j)n−j + Sj−1 > x +

z

r

]� EW

(max

{z

r, −x + Sn−1

})= EW

(max

{z

r, z + S0

n−1

})∼ W (z)E

[max

{1r, 1 +

S0n

z

}]−β

,

where S0n = Sn − an. Since S0

n/zp−→ 0 under the respective conditions (for

z � √n lnn in part (i) of the theorem and under condition (8.3.10) in part (ii)),

we obtain that

P(GnCj) � W (z)(1 + o(1)).

Moreover,

P(GnCj) � W(z

r

)P(Sj−1 � x) = W

(z

r

)P(η+(x) � j).

Page 428: Asymptotic analysis of random walks

8.3 A growing level x 397

Hencen∑

j=1

P(GnCj) =∑

j�(1+ε)x/a

+∑

j>(1+ε)x/a

� x

a(1 + ε)W (z) + W

(z

r

) ∑j>(1+ε)x/a

P(η(x) � j

).

(8.3.16)

Here W (z/r) � cW (z), and we find from the strong law of large numbers andthe renewal theorem Eη+(x) =

∑P(η+(x) > j) ∼ x/a that the second term on

the right-hand side of (8.3.16) is o(x)W (z). Thereforen∑

j=1

P(GnCj) � x

aW (z)(1 + o(1)).

Comparing this inequality with (8.3.12)–(8.3.15), we obtain the same bound forthe probability P(Gn). This, together with the lower bound (8.3.11), completesthe proof of the theorem.

Corollary 8.3.6. Assume that the conditions of Theorem 8.3.4 are satisfied andEξ2 < ∞. Then, for a fixed x � 0, there exist constants c± = c±(x) such that,for all sufficiently large n, we have

c− < P(η+(x) > n)/W (n) < c+. (8.3.17)

Proof. First assume that there exists a subsequence n → ∞ such that

P(η+(x) > n

)� r(n)W (n), (8.3.18)

where r(n) → ∞, r(n) = o(n). Set y :=√

r(n). Then, by Theorem 8.3.4(i),for all sufficiently large n we have x < y and

P(η+(x) > n

)< P

(η+(y) > n

) ∼ √r(n)a

W (an),

which contradicts to (8.3.18). This proves the second inequality in (8.3.17).The first inequality in (8.3.17) follows from the relations

P(η+(x) > n) � P(ξ1 < −2an)P(Sn−1 � x + 2an

)∼ W (2an)P

(Sn−1 � x + 2an

),

where the second factor on the final right-hand side tends to 1 as n → ∞.The corollary is proved.

Page 429: Asymptotic analysis of random walks

9

Integro-local and integral large deviation theoremsfor sums of random vectors

9.1 Introduction

Let ξ1, ξ2, . . . be independent d-dimensional random vectors, d � 1, having thesame distribution as ξ ⊂=F, let Sn =

∑ni=1 ξi and let Δ[x) be a cube in Rd with

edge length Δ > 0 and a vertex at the point x = (x(1), . . . , x(d)),

Δ[x) :={y ∈ R

d : x(i) � y(i) < x(i) + Δ, i = 1, . . . , d}.

Let (x, y) =∑d

i=1 x(i)y(i) be the standard scalar product.The main objective of this chapter is to study the asymptotics of

P(Sn ∈ Δ[x)

)(9.1.1)

for

t := |x| =√

(x, x) � σ(n), Δ ∈ [Δ1,Δ2], t−γ � Δ1 < Δ2 = o(t),

where γ > −1 and the sequence σ(n) determines the zone of large deviations(see below; σ(n) =

√n lnn in the case Eξ = 0, E|ξ|2 < ∞).

In the case when the Cramer condition holds,

Ee(λ,ξ) < C < ∞ (9.1.2)

in the neighbourhood of a point λ0 �= 0, a comprehensive study of the asymptoticsof P

(Sn ∈ Δ[x)

)as (λ0, x) → ∞ was presented in [69, 70]. The method of

studying the asymptotics of (9.1.1) in Cramer’s case (9.1.2), which is based onusing the Cramer transform and was described in § 6.1 in the univariate setting,remains fully applicable in the multivariate case as well.

In this chapter we will study the asymptotics of (9.1.1) in the case condi-tion (9.1.2) does not hold. In the univariate case, the asymptotics of (9.1.1) werefound in §§ 3.7 and 4.7 (for regularly varying tails in the cases Eξ2 = ∞ andEξ2 < ∞ respectively).

The problem becomes more complicated in the multivariate case d > 1, as thereare no ‘collective’ limit theorems on the asymptotics of (9.1.1) in that situation;while in the Cramer case the asymptotics of these probabilities were determined

398

Page 430: Asymptotic analysis of random walks

9.1 Introduction 399

by the properties of the analytic function Ee(λ,ξ) (see e.g. [69, 70, 49, 224]), inthe case when (9.1.2) is not satisfied these asymptotics will depend strongly onthe ‘configuration’ of the distribution F (more precisely, on the configuration ofcertain level lines). The examples below illustrate this statement.

Example 9.1.1. Let ξ =(ξ(1), . . . , ξ(d)

), where the r.v.’s ξ(i) are independent

and have distribution tails

V (i)(u) := P(ξ(i) � u) = u−αiLi(u),

the Li being s.v.f.’s at infinity such that the functions Vi satisfy condition [D(1,0)](see pp. 167, 217). Then, in the case E|ξ|2 < ∞ and mini x(i) � √

n lnn, weclearly have from Theorem 4.7.1 that

P(Sn ∈ Δ[x)

)= Δdnd

d∏i=1

V(i)1 (x(i))(1 + o(1)), (9.1.3)

where V(i)1 is the function V1 from (4.7.1) corresponding to the coordinate ξ(i).

Example 9.1.2. The character of the asymptotics of (9.1.1) is similar, in a sense,to (9.1.3) in the case when ξ = (0, . . . , 0, ξ(i), 0, . . . , 0) with probability pi > 0,i = 1, . . . , d,

∑di=1 pi = 1 and the r.v.’s ξ(i) again satisfy condition [D(1,0)]. In

this case, the components of the vector ξ are strongly dependent. By the law oflarge numbers (or by the central limit theorem for the multinomial scheme),

P(Sn ∈ Δ[x)

) ∼ Δd∑

Pni=n

pn11 · · · pnd

d

n1! · · ·nd!n!

d∏i=1

niV(i)1 (x(i))

∼ Δdndd∏

i=1

piV(i)1 (x(i)). (9.1.4)

In these two examples, the asymptotics of P(ξ ∈ Δ[x)

)as |x| → ∞ are

‘star-shaped’: for equal αi = α, say, and for the same values of t = |x|, thevalues of P

(ξ ∈ Δ[x)

)along the coordinate axes (more precisely, when x(i) �

maxj �=i x(j) for some i) are much larger than, for instance, those on the ‘diagonal’x(1) = · · · = x(d).

Example 9.1.3. If, however, P(ξ ∈ Δ[x)

) ∼ ΔdV1(x), where V1(x) has a form

V1(x) = v(t)g(e(x)

), t = |x|, e(x) := x/t, (9.1.5)

g being a continuous positive function on the unit sphere Sd−1 in Rd then, as wewill see below, under broad assumptions on the r.v.f. v the asymptotics of (9.1.1)will have the form

P(Sn ∈ Δ[x)

) ∼ ΔdnV1(x) (9.1.6)

and therefore will be of a substantially different character compared with (9.1.3),(9.1.4).

Page 431: Asymptotic analysis of random walks

400 Large deviation theorems for sums of random vectors

Example 9.1.4. Combining Examples 9.1.1 and 9.1.3, i.e. considering randomvectors ξ with independent subvectors satisfying conditions (9.1.5), one can ob-tain for P

(Sn ∈ Δ[x)

)asymptotics of the form

Δdnkk∏

i=1

V(i)1 (x)

for any k, 1 � k � d, where k will be determined, roughly speaking, by theminimum number of large ‘regular’ jumps required to move from 0 to Δ[x).

One could give other examples illustrating the fact that, even for distributions Fpossessing the property of ‘directional tail’ regular variation, the asymptotics ofthe large deviation probabilities will essentially depend on the ‘configuration’ ofthe distribution. Moreover, in some cases, the very setting-up of the large devia-tion problem proves to be difficult (a possible general setup will be illustrated byTheorem 9.2.4 below). It is seemingly due to this reason that there is only a rathersmall number of papers on large deviations in the multivariate ‘non-Cramerian’case, most of the publications known to us being devoted mainly to the ‘isotropic’situation, when the rate of decay of the distribution in different directions is givenby the same r.v.f.

First of all, observe that the convergence of the scaled sums Sn to a non-Gaussian limiting law takes place iff P(|ξ| � t) is an r.v.f.,

P(|ξ| � t) = t−αL(t), L(t) is an s.v.f., (9.1.7)

where α ∈ (0, 2), and the distribution

St(B) := P(e(ξ) ∈ B| |ξ| � t

)on the unit sphere Sd−1 converges weakly to a limiting distribution S:

St ⇒ S as t → ∞ (9.1.8)

(Theorem 4.2 of [243]; see also Corollary 6.20 of [5]; further references, as wellas a detailed discussion of the problem on the convergence of Sn under operatorscaling, can be found in [189]).

In the special case when F has a bounded density f that admits a representationof the form

f(x) = t−α−d[h(e(x)) + θ(x)ω(t)

], t = |x| > 1, (9.1.9)

where h(e) is a continuous function on Sd−1, |θ(x)| � 1 and ω(t) = o(1) as

t → ∞, the large deviation problem was considered in [283, 200]. In [283], alocal limit theorem for the density fn of Sn was established for large deviationsalong ‘non-singular directions’, i.e. directions satisfying

e(x) ∈ {e ∈ Sd−1 : h(e) � δ}

for a fixed δ > 0. The theorem states that, uniformly in such directions, one

Page 432: Asymptotic analysis of random walks

9.1 Introduction 401

has fn(x) ∼ nf(x) in the zone t � n1/α. This result was complementedin [200] by an analysis of the asymptotics of fn(x) along ‘singular directions’(i.e. for e ∈ S

d−1 such that h(e) = 0) for an even narrower distribution class (inparticular, it was assumed that there are only finitely many such singular direc-tions and that the density f(x) decays along such directions as a power functionof order −β−d, β > α). The main result of [200] shows that the principal contri-bution to the probabilities of large deviations along singular directions comes notfrom trajectories with a single large jump (which was the case in the univariateproblem, and also for non-singular directions when d > 1) but from those withtwo large jumps. In § 9.2.2 below we will investigate this phenomenon undersubstantially broader assumptions, when the principal contribution comes fromtrajectories with k � 2 jumps.

In a more general case, when only condition (9.1.7) is met (with α > 0), anintegral-type large deviation theorem describing the behaviour of the probabilitiesP(Sn ∈ xA) when the set A ⊂ Rd is bounded away from 0 and has a ‘regular’boundary was obtained in [151] together with its functional version.

As we have already said, the above-mentioned difficulties related to the distri-bution configuration are absent in the univariate case, and the large deviation prob-lem itself is quite well studied in that case, the known results including asymptoticexpansions and large deviation theorems in the space of trajectories (see Chap-ters 2–8; a more complete bibliography can be found in these chapters and in therespective bibliographic notes at the end of the book).

We emphasize once again that in the multivariate case we encounter a situationwhere, for a distribution with ‘heavy’ tails, the principal contribution to the prob-ability that a set in the large deviation zone will be reached is due not to one largejump (as in the univariate case) but to several large jumps. In Examples 9.1.1 and9.1.2, say, in order to hit a cube Δ[x) located at the diagonal x(1) = · · · = x(d)

the random walk will need d large jumps. All other trajectories will be far lesslikely to reach that cube.

In the present chapter, we will concentrate on the ‘most regular’ distributiontypes (for example, those of the form (9.1.5)) and also on distributions that aremore general versions of the laws in Examples 9.1.1 and 9.1.2. Moreover, wewould like to point out that, in the multivariate case, the language of integro-localtheorems is the most natural and convenient one, because:

(1) describing the asymptotics of the probability that a ‘small’ remote cubewill be hit is much easier than finding such asymptotics for an arbitraryremote set;

(2) in this case, the large deviation problem admits a complete (in a certainsense) solution, since knowing such ‘local’ probabilities for small cubesallows one to easily obtain the ‘integral’ probability of hitting a givenarbitrary set;

(3) in integro-local theorems, one needs to impose essentially no additional

Page 433: Asymptotic analysis of random walks

402 Large deviation theorems for sums of random vectors

conditions on the distribution F, in contrast with integral theorems; inessence, we can obtain local limit theorems without assuming the exis-tence of a density.

Moreover, integro-local theorems on the asymptotics of probabilities (9.1.1) areof substantial independent interest: for a broad class of functions f , they proveto be quite useful for evaluating integrals of the form E

[f(Sn); Sn ∈ tA

]as

t → ∞, where A is a ‘solid’ set that is bounded away from the origin.The usefulness of integro-local theorems was demonstrated in Chapters 6–8

(e.g. in §§ 6.2 and 6.3, where such theorems helped us to study the asymptotics oflarge deviation probabilities for distributions from the class ER).

In § 9.2, we will study the asymptotics of (9.1.1) under the assumption of theregular behaviour of P

(ξ ∈ Δ[x)

)as x escapes to infinity within a given cone,

when the principal contribution to the probability P(Sn ∈ [x)

)comes from tra-

jectories containing a single large jump. Assuming that Eξ = 0, we considerboth the case E|ξ|2 < ∞ and the case E|ξ|2 = ∞. We will also deal witha more general approach to the problem, in which regularity conditions are im-posed on P

(Sj ∈ Δ[x); B

(j)), for a fixed j and events B

(j)that mean that all

the jumps ξ(1), . . . , ξ(j) are large (for a more precise formulation, see (9.2.11) be-low). In this case, it may happen that the principal contribution to the probabilityP(Sn ∈ Δ[x)

)belongs to trajectories of a ‘zigzag’ shape, which contain several

large jumps.Integral large deviation theorems (i.e. theorems on the asymptotics of the prob-

ability P(Sn ∈ tA

)as t → ∞) are obtained in § 9.3 as corollaries of the integro-

local theorems of § 9.2. We also consider an alternative approach to proving inte-gral theorems that is not related to integro-local theorems.

Of course, the asymptotics of (9.1.1) can be found when ξ has a regularly vary-ing density. In this case, one can obtain an asymptotic representation for thedensity of Sn (cf. Theorem 8 in [63]).

9.2 Integro-local large deviation theorems for sums of independent random

vectors with regularly varying distributions

9.2.1 The case when the main contribution to the probability that Δ[x) will

be hit is due to trajectories containing one large jump

We will begin with a more precise formulation (compared with that in (9.1.5)) ofthe conditions under which we will be studying the asymptotics of (9.1.1).

Consider the asymptotics of P(ξ ∈ Δ[x)

)in a cone, i.e. for such x with

t = |x| → ∞ that e(x) = x/t ∈ Ω, where the subset Ω ⊂ Sd−1 of the unit

sphere, which characterizes the cone, is assumed to be open. An analogue [DΩ]of condition [D(1,0)] (see §§ 3.7 and 4.7) has here the following form.

Page 434: Asymptotic analysis of random walks

9.2 Random vectors with regularly varying distributions 403

Introduce the half-space Π(e) := {v ∈ Rd : (v, e) � 0} and put

Π(Ω) :=⋃e∈Ω

Π(e).

It is evident that if Ω contains a hemisphere then Π(Ω) coincides with the wholespace R

d.

[DΩ] There exists a sector Ω ⊂ Sd−1 and functions Δ1 = Δ1(t) � t−γ forsome γ > −1, Δ2 = Δ2(t) = o(t) such that, for any Δ ∈ [Δ1,Δ2], e(x) ∈ Ω,one has

P(ξ ∈ Δ[x)

)= ΔdV1(x)(1 + o(1)) as t → ∞, (9.2.1)

where

V1(x) = v(t)g(e(x)

), t = |x|, e(x) =

x

t, v(t) = t−α−dL(t),

L(t) is an s.v.f. at infinity and g(e) is a continuous function on Ω that satisfies onthis set the inequalities 0 < g1 < g(e) < g2 < ∞.

Moreover, for any constant c1 > 0 and all y ∈ Π(Ω), |y| � c1t, we have

P(ξ ∈ Δ[y)

)� c2Δdv(t). (9.2.2)

The remainder term o(1) in (9.2.1) is assumed to be uniform in the followingsense: there exists a function εu ↓ 0 as u ↑ ∞ such that the o(1) term in (9.2.1)can be replaced by ε(x,Δ) � εu for all t � u → ∞, e(x) ∈ Ω, Δ ∈ [Δ1,Δ2].

Clearly, under condition [DΩ] one has

P(|ξ| ∈ Δ[t), e(ξ) ∈ Ω

) ∼ c(Ω)td−1v(t)Δ, (9.2.3)

so that the left-hand side of (9.2.3) follows the same asymptotics as the proba-bility P

(ξ ∈ Δ[t)

)in the univariate case under condition [D(1,0)] (see §§ 3.7

and 4.7).Note that condition (9.2.2) is essential for the claim of our theorem below. It

does not allow the ‘most plausible’ of the trajectories hitting the cube Δ[x) tocontain two (or more) large jumps (i.e. to reach the set Δ[x) along a ‘zigzag’trajectory, as was the case in Example 9.1.1). Condition (9.2.2) can be weakened,depending on the deviation zone.

Denote by ∂Ω the boundary of the set Ω in Sd−1, by Uε(∂Ω) the ε-neighbour-

hood of ∂Ω in Sd−1 and by Ωε = Ω \ Uε(∂Ω) the ‘ε-interior’ of Ω.

First we consider the case of finite variance.

Theorem 9.2.1. Let Eξ = 0, E|ξ|2 < ∞, α > 2 and condition [DΩ] be satisfied.Then, for t = |x| � √

n lnn and e(x) ∈ Ωε for some ε > 0,

P(Sn ∈ Δ[x)

)= ΔdnV1(x)(1 + o(1)), (9.2.4)

where the remainder term o(1) is uniform in the following sense. For any se-quence s(n) → ∞, there exists a sequence R(n) → 0 such that the o(1) term

Page 435: Asymptotic analysis of random walks

404 Large deviation theorems for sums of random vectors

in (9.2.4) can be replaced by R such that |R| � R(n) for all t > s(n)√

n lnn,e(x) ∈ Ωε and Δ ∈ [Δ1,Δ2].

Proof. We will follow the scheme of the proofs of Theorems 3.7.1 and 4.7.1.Using notation similar to before, we put

Gn :={Sn ∈ Δ[x)

}, Bj :=

{(ξj , e(x)) < t/r

}, r > 1, B :=

n⋂j=1

Bj ,

and again make use of the relations (3.7.8) and (3.7.9) ((4.7.4) and (4.7.5)).

(1) Bounding P(GnB). It is not hard to see that, under the conditions [DΩ]and e(x) ∈ Ω, as t → ∞,

P((ξ, e(x)) � t

)� V0(t), where V0(t) ∼ ctdv(t) = ct−αL(t). (9.2.5)

Hence, cf. §§ 3.7 and 4.7, we obtain from Corollary 4.1.3 that for any δ > 0,as t → ∞,

P(GnB) �(nV0(t)

)r−δ,

where we choose r > 2 and δ in such a way that, for t � √n and Δ � t−γ , we

have (nV0(t)

)r−δ � nΔdv(t).

Using an argument like that following (4.7.8) (p. 219), one can easily see that itsuffices to choose r − δ in such a way that

r − δ > 1 +(1 + γ)dα − 2

.

Therefore, for such an r and δ,

P(GnB) = o(nΔdV1(x)

). (9.2.6)

(2) Bounding P(GnBn−1Bn). Let

Ak :={v ∈ R

d : (v, e(x)) < t(1 − k/r) + Δ√

d}, k = 1, 2. (9.2.7)

Then GnBn−1Bn ⊂ {Sn−2 ∈ A2} and

P(GnBn−1Bn) =∫

z∈A2

P(Sn−2 ∈ dz)∫

v∈A1

P(z + ξ ∈ dv, (ξ, e(x)) � t/r

)× P

(v + ξ ∈ Δ[x), (ξ, e(x)) � t/r

).

(9.2.8)

Since for v ∈ A1 one has x−v ∈ Π(Ω), |x−v| > t/r−Δ√

d, we see from (9.2.2)(with y = x−v) that the last factor in (9.2.8) does not exceed cΔdv(t). Therefore(see also (9.2.5)), the integral over the range A1 in (9.2.8) is less than

cΔdv(t)P((ξ, e(x)) � t/r

)� c1Δdv(t)V0(t).

Page 436: Asymptotic analysis of random walks

9.2 Random vectors with regularly varying distributions 405

It is evident that the same bound holds for the integral over A2. Hence∑i�=j

P(GnBiBj) � c1Δdn2v(t)V0(t) = o(nΔdV1(x)

). (9.2.9)

(3) Evaluating P(GnBn). Since GnBn ⊂ {Sn−1 ∈ A1} by the definition ofthe set A1, we obtain that, by virtue of condition [DΩ] (recall that x − z ∈ Π(Ω)for z1 ∈ A1),

P(GnBn) =∫A1

P(Sn−1 ∈ dz)P(ξ ∈ Δ[x − z), (ξ, e(x)) � t/r

)� E

[P(ξ ∈ Δ[x − Sn−1)|Sn−1

);Sn−1 ∈ A1

]� ΔdE

[V1

(x − Sn−1

); |Sn−1| < εt

](1 + o(1))

+ cΔdv(t)P(Sn−1 ∈ A1, |Sn−1| � εt

).

Here e(x − z) ∈ Ω for |z| < εt, P(|Sn−1| � εt

)→ 0 for t � √n. Hence

P(GnBn) � ΔdV1(x)(1 + o(1)).

Setting A0 :={v ∈ Rd : (v, e(x)) < t(1 − 1/r) − Δ

√d}

, we find in a similarway that

P(GnBn) �∫A0

P(Sn−1 ∈ dz

)P(ξ ∈ Δ[x − z)

)� ΔdE

[V1

(x − Sn−1

); |Sn−1| < εt

](1 + o(1))

= ΔdV1(x)(1 + o(1)).

This means that

P(GnBn) = ΔdV1(x)(1 + o(1)).

Together with (9.2.6), (9.2.9) and the relations (4.7.4), (4.7.5), this implies

P(Gn) = nΔdV1(x)(1 + o(1)).

As before, the required uniformity of the bounds follows from the argumentused in the proof. The theorem is proved.

Remark 9.2.2. Observe that if we had used more precise bounds for integrals inthe proof of the above theorem then condition (9.2.2) could have been relaxed tothe following:

P(ξ ∈ Δ[x − z)

)� cΔdv(t)

|z|2n

for e(x) ∈ Ωε, z ∈ A1, |z| � εt, t � √n (cf. condition (9.2.12) below).

Page 437: Asymptotic analysis of random walks

406 Large deviation theorems for sums of random vectors

Now consider the case of infinite variance and assume that condition [DΩ] ofthe present subsection holds for α < 2. For such an α, one should complementcondition [DΩ] by adding the following requirement.

For any e ∈ Sd−1 we have

P((ξ, e) � t

)� W (t), (9.2.10)

where W (t) = t−αW LW (t) is an r.v.f. of index −αW � −α.

Theorem 9.2.3. Let condition [DΩ], complemented in the above-mentioned way,be satisfied. Then, in each of the following two cases,

(1) α < 1, t = |x| > n1/θ, θ < α,

(2) α ∈ (1, 2), there exists Eξ = 0, αW > 1 and t > n1/θ , θ < αW ,

we will have the relation (9.2.4) for all x such that e(x) ∈ Ωε, t = |x| → ∞ andall Δ ∈ [Δ1,Δ2].

The statements of Theorems 3.7.1 and 9.2.1 concerning the uniformity of theremainder term o(1) in t � t∗ → ∞, t > n1/θ , e(x) ∈ Ωε, Δ ∈ [Δ1,Δ2],remain true in this case.

Proof. The proof of Theorem 9.2.3 is quite similar to that of Theorems 3.7.1(p. 169) and 9.2.1, and so is left to the reader. We just note that, at the third stageof the argument (see the proofs of Theorems 3.7.1, 9.2.1), it will be necessary touse the relation

P(|Sn−1| > MσW (n)

)→ 0 as M → ∞,

where σW (n) = W (−1)(1/n), which follows from Corollaries 2.2.4 and 3.1.2.

9.2.2 The case when the main contribution to the probability that Δ[x) will

be hit is due to ‘zigzag’ trajectories containing several large jumps

We return to the case E|ξ|2 < ∞ and consider some more general conditionsthat would cover Examples 9.1.1–9.1.4 and allow situations when the most likelytrajectories to reach Δ[x) are of ‘zigzag’ shape. For simplicity we can restrict our-selves to the case when the cone from Theorem 9.2.1 coincides with the positiveorthant, i.e.

Ω ={e = (e(1), . . . , e(d)) ∈ S

d−1 : e(i) � 0}.

As before, we use the notation

t = |x|, Bj ={(ξj , e(x)) � t/r

}, B(k) =

k⋂j=1

Bj .

In this subsection, we will be using the following conditions:

Page 438: Asymptotic analysis of random walks

9.2 Random vectors with regularly varying distributions 407

For a fixed j and the values of Δ specified in condition [DΩ], we have

P(Sj ∈ Δ[x); B(j)

)= Δdv(j)(t)g(j)

(e(x)

)(1 + o(1)) (9.2.11)

for t = |x| → ∞, e(x) ∈ Ωε, ε > 0, where v(j) is an r.v.f. of the form

v(j)(t) = t−α(j)−dL(j)(t), α(j) > 2,

L(j) is an s.v.f., the g(j)(e) > 0 are continuous functions on Ωε and the remainderterm o(1) in (9.2.11) is uniform in the same sense as in condition [DΩ]. Moreover,for e(x) ∈ Ωε, z ∈ A1 (see (9.2.7)), |z| � εt, t � √

n, we have

P(Sj ∈ Δ[x − z); B(j)

)� cΔdv(j)(t)

|z|2n

. (9.2.12)

We will also need the following condition.

[D∗Ω] Let there exist a k, possessing the following properties:

(1) for all j � k, conditions (9.2.11), (9.2.12) are satisfied;(2) x is such that v(j)(t) = o

(v(k)(t)nk−j

)as t = |x| → ∞ for j < k;

(3) P(Sn ∈ Δ[x); B(k+1)

)= o

(Δdv(k)(t)nk

)where

B(k+1) =⋃

1�i1<···<ik+1�n

Bi1 · · ·Bik+1

means that k + 1 or more events of the form Bi have occurred.

The following condition is sufficient for (3) to hold:

(31) x is such that, as t = |x| → ∞, one has the relation (9.2.12) for j = k+1,

and

nv(k+1)(t) = o(v(k)(t)

). (9.2.13)

The sufficiency of this condition will be demonstrated after the proof of Theo-rem 9.2.4.

Observe that conditions (2) and (31) are of the same type and have a rathersimple probabilistic interpretation: they mean that, for deviations x, the productnjv(j)(t), t = |x|, attains its maximum value when j = k and that the valueis much greater than that of njv(j)(t) for j < k and for j = k + 1. As wewill see below in Theorem 9.2.4, it is the product nkv(k)(t) that determines theasymptotics of the probabilities P

(Sn ∈ Δ[x)

), and the value of k (found by

comparing the quantities njv(j)(t)) gives the number of large jumps (with sizesof order t) required for the most probable trajectories {Si} to reach the set Δ[x).The factor nk gives (up to a factor j!) the asymptotics of the number of allocationsof k large jumps to n places.

Now we will explain why condition [D∗Ω] holds in Examples 9.1.1–9.1.3.

Page 439: Asymptotic analysis of random walks

408 Large deviation theorems for sums of random vectors

In Example 9.1.1 (the case of independent components ξ(i)), for directionse(x) ∈ Ωε, j � k = d and large enough r, we have

P(Sj ∈ Δ[x); B(j)

) ∼ P(Sj ∈ Δ[x)

) ∼ (jΔ)dd∏

i=1

V(i)1 (x(i)), (9.2.14)

so that one can put

v(j)(t) =d∏

i=1

V(i)1 (t) for all j � d. (9.2.15)

(Recall that, in the sector e(x) ∈ Ωε with ε > 0, the value of t = |x| and those ofall the components x(i) are of the same order of magnitude, as t → ∞.)

The relations (9.2.14) can be clarified as follows. For instance, in the casej = 2 the main contribution to the probability P

(S2 ∈ Δ[x); B(2)

)is from the

‘trajectories’ {S1, S2} for which the two large jumps (whose sizes are comparablewith t = |x|) occur in different directions, each of which belongs to one of twodisjoint groups of coordinates: one direction is from a group (i1, . . . , is), 1 �s � d − 1, and the other is from the complementary group of d − s coordinates.Hence the probability P

(S

(il)2 ∈ Δ[x(il)); B(2)

), l = 1, . . . , s, will in fact be

asymptotically equivalent to the probability of the event{S

(il)2 ∈ Δ[x(il))

}when

there is only one large jump (we denote the latter event by B(il)(1)), whereas

P(S

(il)2 ∈ Δ[x(il)); B(il)(1)

) ∼ P(S

(il)2 ∈ Δ[x(il))

) ∼ 2ΔV(il)1

(x(il)

)(see § 4.7). This proves (9.2.14) when j = 2. The case j > 2 can be dealt with inexactly the same way (although the argument is more tedious).

For j = d + 1, the relation (9.2.14) is no longer correct. In this case,

P(Sd+1 ∈ Δ[x); B(d+1)

) � Δdv(d)(t)td∑

i=1

V(i)1 (t) (9.2.16)

(we write a � b if a = O(b), b = O(a)). For example, when d = 2 theprincipal contribution to P

(S3 ∈ Δ[x); B(3)

)comes from trajectories for which

either S(1)3 contains two large jumps and S

(2)3 just one (this comprises the event

B(1)(2)B(2)(1)) or the other way around (the event B(1)(1)B(2)(2)). Now for

Page 440: Asymptotic analysis of random walks

9.2 Random vectors with regularly varying distributions 409

some c1, c2, 0 < c1 < c2 < 1,

P(S

(1)3 ∈ Δ[x(1)); B(1)(2)B(2)(1)

)∼

c2x(1)∫−∞

P(S

(1)2 ∈ du, one large jump in S

(1)2

)P(ξ(1)3 ∈ Δ[x(1)−u)

)

∼ 2Δ

c2x(1)∫c1x(1)

V(1)1 (u)V (1)

1

(x(1) − u

)du

∼ cΔx(1)[V

(1)1 (x(1))

]2.

Moreover, for the second component we clearly have

P(S

(2)3 ∈ Δ[x(2)); B(2)(1)

) ∼ 3ΔV(2)1

(x(2)

).

Since analogous relations hold true for the event B(1)(1)B(2)(2), this proves therelation (9.2.16). The case d > 2 is dealt with in a similar (but more cumbersome)way.

Since

tV(i)1 (t) ∼ αiVi(t), n max

iVi(t) → 0 for t � √

n,

the above relations prove (9.2.11) for j � k = d, and they also prove that condi-tion (2) and relation (9.2.13) hold for Example 9.1.1.

Now we prove (9.2.12). For j � d, under additional conditions of the formP(ξ(i) ∈ Δ[y(i))

)� cΔV

(i)1 (|y(i)|) for y(i) < 0, we have

P(Sj ∈ Δ[x − z); B(j)

)� c(j)Δd

d∏i=1

V(i)1 (t), t = |x|, (9.2.17)

for all x ∈ Ωε and z ∈ A1, |z| � εt. This proves (9.2.12) for j � d.For j = d + 1 we obtain, cf. (9.2.16), that for e(x) ∈ Ωε and z ∈ A1, |z| > εt,

P(Sj ∈ Δ[x − z)

) � Δdv(d)(t)td∑

i=1

V(i)1 (t).

Since nt maxi V(i)1 (t) → 0 for t � √

n, we arrive at (9.2.14).One can verify in a similar way that condition [D∗

Ω] is satisfied in Exam-ples 9.1.2 (with k = d ) and 9.1.3 (with k = 1). It is not difficult to see thatin Example 9.1.2 we have

P(Sj ∈ Δ[x); B(j)

)= 0 for j < d, e(x) ∈ Ωε and t = |x| > 0.

It is clear that condition [D∗Ω] will be satisfied for a much wider class of distri-

butions F. For instance, this applies to distributions similar to those from Exam-ple 9.1.1 when there is a weak enough dependence between the components of ξ,

Page 441: Asymptotic analysis of random walks

410 Large deviation theorems for sums of random vectors

and also to distributions for which

P((ξ, x) � u

) ∼ h(e(x)

)u−f(e(x)),

where the functions h(·) and f(·) satisfy some additional conditions, and so on.

Theorem 9.2.4. Let Eξ = 0, E|ξ|2 < ∞ and condition [D∗Ω] be satisfied. Then,

for t = |x| � √n lnn and e(x) ∈ Ωε for some ε > 0, we have

P(Sn ∈ Δ[x)

)= Δdnkv(k)(t)g(k)

(e(x)

)(1 + o(1)),

where the term o(1) is uniform in the same sense as in Theorem 9.2.1.

Proof. The scheme of the argument is somewhat different in this case. We willagain make use of the equality (4.7.4) but will apply the latter representation tothe probability P(GnB):

P(GnB) =n∑

i=1

P(GnB

(1)

i

)+

∑1�i<j�n

P(GnB

(2)

ij

)+ · · ·

+∑

1�i1<i2<···<ik�1

P(GnB

(k)

i1,...,ik

)+ P

(GnB(k+1)

), (9.2.18)

where B(j)

i1,...,ijis the event that j events Bi1 , . . . , Bij of the form Bm occurred

during a time n and B(k+1) is the event that k +1 or more events of the form Bm

occurred.

(1) Bounding P(GnB). By integrating over the cubes Δ[x) we find from(9.2.11) that

P(Bj) ∼ cv(1)(t)td = ct−α(1)L(1)(t) =: v(t).

Hence, again using Corollary 4.1.3 we obtain that, for any δ > 0 and largeenough t,

P(GnB) �(nv(t)

)r−δ.

Now choose r − δ in such a way that(nv(t)

)r−δ = o(Δdnkv(k)(t)

).

To this end, for t � n1/2 it suffices to take

r − δ > 1 +γ + α(k) − 2k

α(1) − 2

(cf. the bound for P(GnB) in the proof of Theorem 9.2.1). Thus

P(GnB) = o(Δdnkv(k)(t)

). (9.2.19)

Page 442: Asymptotic analysis of random walks

9.2 Random vectors with regularly varying distributions 411

(2) Bounding P(GnB(j)

i1,...,ij), j � k. We have

P(GnB

(j)

i1,...,ij

)=∫A1

P(

Sn−j ∈ dz;n−j⋂i=1

Bi

)P(Sj ∈ Δ[x − z); B(j)

),

(9.2.20)where A1 is defined by (9.2.7). Clearly, x− z ∈Π(Ω) for z ∈ A1 and, moreover,|x − z| > t/r − Δ

√d ∼ t/r. Therefore we can apply condition (9.2.12) to the

second factor in the integrand in (9.2.20). Note that this factor is asymptoticallyequivalent to the right-hand side of (9.2.11) for |z| < ε1t as ε1 → 0.

In the first factor,

P

(n−j⋂i=1

Bi

)=(1 − P(B1)

)n−j �(1 − c1v(1)(t)

)n → 1,

P(∣∣Sn−j

∣∣ < ε1t)→ 1, P

(∣∣Sn−j

∣∣ < ε1t,

n−j⋂i=1

Bi

)→ 1

assuming that ε1 → 0 slowly enough that ε1t �√

n.Now we will represent the integral in (9.2.20) as the sum∫

|z|<εt

+∫

A1∩{|z|�εt}

.

Then, owing to [D∗Ω] and the previous discussion, the first integral will be equiv-

alent to Δdv(j)(t)g(j)

(e(x)

). By virtue of (9.2.12), the second integral will not

exceed

cΔdv(j)(t)E[

1n

∣∣Sn−j

∣∣2; Sn−j ∈ A1, |Sn−j | � ε1t

]= o

(Δdv(j)(t)

).

Therefore

P(GnB

(j)i1,...,ij

)= Δdv(j)(t)g(j)

(e(x)

)(1 + o(1)). (9.2.21)

(3) The asymptotics of P(Gn). Now we can return to (4.7.4) and (9.2.18).Collecting together the estimates (9.2.19) and (9.2.21), we obtain from [D∗

Ω] that

P(Gn) = o(Δdnkv(k)(t)

)+ Δd

k∑j=1

v(j)(t)g(j)

(e(x)

)nj(1 + o(1))

= Δdnkv(k)(t)g(k)

(e(x)

)(1 + o(1)).

The theorem is proved.

Now we will demonstrate that condition (9.2.13) is sufficient for part (3) ofcondition [D∗

Ω].

Page 443: Asymptotic analysis of random walks

412 Large deviation theorems for sums of random vectors

For j = k + 1, let conditions (9.2.12) and (9.2.13) be satisfied. Then

P(GnB(k+1)

)�

∑1�i1<···<ik+1�n

P(GnBi1 · · ·Bik+1)

� nk+1P(GnB(k+1)), (9.2.22)

where, cf. (9.2.20),

P(GnB(k+1)

)=∫A1

P(Sn−k−1 ∈ dz

)P(Sk+1 ∈ Δ[x − z); B(k+1)

).

Since x− z ∈ Π(Ω) for z ∈ A1 and |x− z| > r−1t(1+o(1)), we see that, owingto (9.2.12) for j = k +1, the second factor in the integrand on the right-hand sidewill not exceed

cΔdv(k+1)(t)|z|2n

,

and so the integral itself is O(Δdv(k+1)(t)

). Therefore, by virtue of (9.2.22)

and (9.2.13),

P(GnB(k+1)

)= O

(nk+1Δdv(k+1)(t)

)= o

(nkΔdv(k)(t)

).

The sufficiency of condition (9.2.13) is proved.

9.3 Integral theorems

9.3.1 Integral theorems via the corresponding integro-local theorems

The integro-local theorems that we proved in the previous sections allow oneto obtain easily the asymptotics of the probability that the sum Sn of randomvectors hits an arbitrary remote set. For example, one can find the asymptotics ofprobabilities of the form

P(Sn ∈ tA

)→ 0, t → ∞,

where A is a fixed solid set, bounded away from 0.Let, for the sake of definiteness, Eξ = 0, E|ξ|2 < ∞ and A be a set in Rd that,

together with the vector ξ, has the following properties:

(1) The inequalities

inf{|v| : v ∈ A

}> 0, sup

v∈Asup

{ε > 0 : Uε({v}) ⊂ A

}> 0 (9.3.1)

hold true, i.e. the set A is bounded away from the origin and is solid (contains anopen subset).

(2) Condition [DΩ] from § 9.2 is satisfied for some α > 2, γ > −1, ε > 0 and

Ω(A) :={e(v) : v ∈ A

} ⊂ Ωε, (9.3.2)

where Ωε = Ω \ Uε(∂Ω) is the ε-interior of Ω.

Page 444: Asymptotic analysis of random walks

9.3 Integral theorems 413

In the present context, the function g(e) does not need to be everywhere positiveon Ω(A); here, the relation (9.2.1) has to be written in the form P

(ξ ∈ Δ[x)

)=

ΔdV1(x) + o(Δdv(t)

), t = |x|, and it should be assumed that

∫Ω(A)

g(e) de > 0.

(3) For any fixed M > 0, the Lebesgue measure μ of the intersection of theε-neighbourhood Uε(∂A) of the boundary ∂A of A with the ball UM ({0}) tendsto 0 as ε → 0:

μ(U ε(∂A) ∩ UM

({0}))→ 0. (9.3.3)

It is obvious that the property (9.3.3) will hold for all sets A with smooth enoughboundaries.

Theorem 9.3.1. Under the above conditions (1)–(3), as t → ∞ and t � √n lnn,

one has

P(Sn ∈ tA

)= nt−αL(t)

∫A

|v|−α−dg(e(v)

)dv (1 + o(1)), (9.3.4)

where the functions L and g are defined in condition [DΩ] (see p. 403).

The nature of the large deviation probabilities for the sums Sn clearly remainsthe same as before: the right-hand side of (9.3.4) can be written as

nP(ξ ∈ tA) (1 + o(1)), (9.3.5)

so that the main contribution to the probability of the event{Sn ∈ tA

}comes

from the n events{ξj ∈ tA

}, j = 1, . . . , n.

With the help of Theorem 9.2.3 the reader will encounter no difficulties inproving a similar assertion in the case E|ξ|2 = ∞.

As we have already said, Theorem 9.3.1 demonstrates the following usefulproperty of integro-local theorems. When calculating the asymptotics of the prob-abilities P

(Sn ∈ tA

), these theorems enable one to proceed as if Sn had a density

and we knew its asymptotics, although condition [DΩ] contains no assumptionson the existence of densities.

Proof of Theorem 9.3.1. Let ZdΔ be a lattice in R

d with a span Δ = o(t). Denoteby U b := U b({0}) the ball of radius b > 0 whose centre is at the origin, and letA(t,M) be the set of all points z ∈ Zd

Δ for which Δ[z) ⊂ tA ∩ U tM and A(t,M)

be the minimal set of points z ∈ ZdΔ such that

tA ∩ U tM ⊂⋃

z∈A(t,M)

Δ[z).

Then, clearly, ∑z∈A(t,M)

P(Sn ∈ Δ[z)

)� P

(Sn ∈ tA ∩ U tM

)�

∑z∈A(t,M)

P(Sn ∈ Δ[z)

). (9.3.6)

Page 445: Asymptotic analysis of random walks

414 Large deviation theorems for sums of random vectors

It is evident that, owing to conditions (9.3.1)–(9.3.3) and Theorem 9.2.1, eachsum in (9.3.6) is asymptotically equivalent to

n∑

z∈tA∩ZdΔ∩UtM

Δd|z|−α−dL(|z|)g(e(z)

)∼ n

∫u∈tA∩UtM

|u|−α−dL(|u|)g(e(u)

)du

∼ nt−αL(t)∫

v∈A∩UM

|v|−α−dg(e(v)

)dv.

It only remains to notice that, for UtM

:= Rd \ U tM ,

P(Sn ∈ tA ∩ U

tM)

= o(nt−αL(t)

)as M → ∞, which can be verified either using relations of the form (9.2.4) orusing bounds for the probability that Sn hits a half-space of the form

Π(e, tM) ={v ∈ R

d : (v, e) � tM}, e ∈ Ω.

The theorem is proved.

9.3.2 A direct approach to integral theorems

The relations (9.3.4), (9.3.5) indicate that a somewhat different approach to prov-ing integral large deviation theorems in R

d, d > 1, may also be possible. Insteadof condition [DΩ] one could postulate those properties of the set A and vector ξ

that turn out to be essential for the asymptotics (9.3.5) to hold true. Namely, inplace of a rather restrictive condition [DΩ], we will assume that condition [A], tobe stated below, is satisfied. For simplicity we will first restrict ourselves to thecase when Eξ = 0, E|ξ|2 < ∞ and the set A can be separated from the origin bya hyperplane, i.e. there exist a b > 0 and a vector e ∈ S

d−1 such that

A ⊂ Π(e, b) ={v ∈ R

d : (v, e) � b}. (9.3.7)

(A remark concerning the transition to the general case, when A is bounded awayfrom the origin by a ball U b for some b > 0, will be made after the proof ofTheorem 9.3.2.)

Condition [A] has the following form:

[A] (1) We have the relation

P (tA) := P(ξ ∈ tA) ∼ V (t)F (A), (9.3.8)

where V (t) = t−αL(t) (α > 2 when E|ξ|2 < ∞), L is an s.v.f. at infinity andF (A) is a functional defined on a suitable class of sets. This functional, the set A

and the vector ξ are such that

P (z + tA) ∼ P (tA) for |z| = o(t), t → ∞. (9.3.9)

Page 446: Asymptotic analysis of random walks

9.3 Integral theorems 415

The property (9.3.9) simply expresses the continuity of the functional F : we haveF (v + A) ∼ F (A) as |v| → 0. In Theorem 9.3.1 the role of the functional F isplayed by the integral on the right-hand side of (9.3.4).

(2) Condition (9.3.7) is met and, for the corresponding e ∈ Sd−1,

P((ξ, e) � t

)� cV (t) (9.3.10)

for some c < ∞.

Theorem 9.3.2. Let condition [A] be satisfied. Then, for t � √n lnn,

P(Sn ∈ tA

) ∼ nV (t)F (A) ∼ nP(ξ ∈ tA). (9.3.11)

Proof. The proof of this result follows our previous approaches. Set

Gn :={Sn ∈ tA

}, Bj :=

{(ξj , e) < ρt

}for ρ < b, B =

n⋂j=1

Bj ,

and again make use of the representations (4.7.4), (4.7.5).

(1) Bounding P(GnB). Since Gn ⊂ Gn,e :={(

Sn, e)

� bt}

, we have theinequality P(GnB) � P(Gn,eB), where, owing to the bounds from § 4.1, forany δ > 0 and as t → ∞,

P(Gn,eB) � c[nV (t)

]r−δ, r :=

b

ρ> 1.

Hence

P(GnB) = o(nV (t)

). (9.3.12)

(2) The bound P(GnBn−1Bn) � P(Bn−1Bn) < c(V (ρt)

)2 is obvious here,so that ∑

i�=j

P(GnBiBj) � c(nV (t)

)2 = o(nV (t)

). (9.3.13)

(3) Evaluating P(GnBn). We have

P(GnBn) =∫

P(Sn−1 ∈ dz

)P(z + ξ ∈ tA, (ξ, e) � ρt

)=

∫|z|<M

√n

+∫

|z|�M√

n

. (9.3.14)

Here, in the first integral on the right-hand side, for ρ < b,

P(z + ξ ∈ tA, (ξ, e) � ρt

)= P(z + ξ ∈ tA) ∼ P(ξ ∈ tA)

owing to condition (9.3.9), so that when M → ∞ slowly enough,∫|z|<M

√n

∼ P(ξ ∈ tA).

Page 447: Asymptotic analysis of random walks

416 Large deviation theorems for sums of random vectors

In the second integral on the right-hand side of (9.3.14),

P(z + ξ ∈ tA, (ξ, e) � ρt

)� P

((ξ, e) � ρt

)� cV (t),

so that, as M → ∞, ∫|z|>M

√n

= o(V (t)

).

The above means that P(GnBn) ∼ P(ξ ∈ tA

). Together with (9.3.12), (9.3.13),

this establishes (9.3.11). The theorem is proved.

The transition to the case A is bounded away from the origin by a ball of ra-dius b > 0 can be made by, for example, partitioning the set A into finitely manysubsets A1, . . . , AK (one could, say, take the intersections of A with each ofthe K = 2d coordinate orthants), in such a way that each of the subsets would liein a subspace Π(ek, bk) for suitable ek ∈ Sd−1 and bk > 0, k = 1, . . . ,K. Afterthat, the above argument is applied to each of the subsets A1, . . . , AK . Insteadof (9.3.10) one should now, of course, assume that, for all k = 1, . . . ,K andsome ck < ∞,

P((ξ, ek) � t

)< ckV (t).

The transition to the case E|ξ|2 = ∞ can be made in a way similar to that usedin Theorem 9.2.3 (see also § 3.7).

Page 448: Asymptotic analysis of random walks

10

Large deviations in trajectory space

10.1 Introduction

We return now to the one-dimensional random walks studied in Chapters 2–8.The most general of the problems considered in those chapters was that on thecrossing of a given remote boundary by the trajectory of the walk. This is one ofthe simplest problems on large deviations in the space of trajectories.

Now we will consider a more general setting of this problem. Let D(0, 1)be the space of functions f(·) on [0, 1] without discontinuities of the secondkind, endowed with the uniform norm ‖f‖ = supt∈[0,1] |f(t)| and the respec-tive σ-algebra B of Borel (or cylindrical) sets. For definiteness, we will assumethat the elements of f ∈ D(0, 1) are right-continuous. Define a random pro-cess {Sn(t)} in D(0, 1) by letting

Sn(t) := S�nt�, t ∈ [0, 1],

where, as before, Sk =∑k

i=1 ξi, the r.v.’s ξi are independent and have the samedistribution as ξ and �nt is the integral part of nt. Then the probability

P(Sn(·) ∈ xA

)(10.1.1)

is defined for any A ∈ B. If Eξ = 0 and the set A is bounded away from zero (fora more precise definition, see below) then, provided that x → ∞ fast enough, theprobability (10.1.1) will tend to zero, and so there arises the problem of studyingthe asymptotics of this probability. This problem is referred to as the problem onprobabilities of large deviations in the trajectory space. As we have already said,the simplest problems of this kind were dealt with in Chapters 2–8, where, for agiven function g, we considered a set A

A = Ag ={

f : supt∈[0,1]

(f(t) − g(t)

)� 0

}.

In § 10.2 we will consider a problem on so-called one-sided large deviations intrajectory space under the assumption that condition [ · , =] of Chapters 3 and 4is met. In § 10.3, the results obtained in the previous section will be extended tothe general case.

417

Page 449: Asymptotic analysis of random walks

418 Large deviations in trajectory space

10.2 One-sided large deviations in trajectory space

To formulate the main assertions, we will need some notation and notions. Intro-duce the step functions

ev,t = ev,t(u) :={

0 for u < t,v for u � t,

t ∈ [0, 1], v ∈ R,

and the number set

A(t) := {v : ev,t ∈ A}, (10.2.1)

which could be called the section of the set A ∈ B at the point t. Define themeasure μA(t) of the set A(t) by

μA(t) := α

∫A(t)

u−α−1du, (10.2.2)

where −α is the index of the r.v.f. V (t) in condition [ · , =] .

We will need the following conditions on the set A.

[A0] The set A is bounded away from zero, i.e.

inff∈A

‖f‖ > 0.

In this case, one can assume without loss of generality that

inff∈A

‖f‖ = 1. (10.2.3)

The set A will be said to be one-sided if the following condition is met:

[A0+]

inff∈A

supu∈[0,1]

f(u) > 0.

It is evident that condition [A0+] implies [A0] and, moreover, one can assumewithout loss of generality that

inff∈A

supu∈[0,1]

f(u) = 1, A(t) ⊂ [1,∞). (10.2.4)

The next condition is related to the structure of the sets A(t).

[A1] For each t ∈ [0, 1], the set A(t) is a union of finitely many intervals(segments, half-intervals). The dependence of the boundaries of these intervalson t is such that the function μA(t) is Riemann integrable.

Thus, under condition [A1] we can define the measure

μ(A) :=

1∫0

μA(t) dt. (10.2.5)

Page 450: Asymptotic analysis of random walks

10.2 One-sided large deviations in trajectory space 419

Remarks on a possible relaxation of condition [A1] can be found after Theo-rem 10.2.3.

Further, we will need a continuity condition. Take an arbitrary f ∈ D(0, 1) andadd to it a jump of a variable size at a point t, i.e. consider the family of functions

fv,t(u) := f(u) + ev,t(u).

Those values of v for which fv,t ∈ A form the set

Af (t) := {v ∈ R : fv,t ∈ A}. (10.2.6)

Our continuity condition (for the measure μA(t)) consists in the following.

[A2] For each t ∈ [0, 1], the set Af (t) is a finite union of intervals (segments,half-intervals) and, as ‖f‖ → 0, the set Af (t) converges ‘in measure’ to A(t):uniformly in t ∈ [0, 1],

α

∫Af (t)

u−α−1du → μA(t). (10.2.7)

Example 10.2.1. The boundary problems considered in Chapters 3–5. In theseproblems, we had

A ={

f ∈ D(0, 1) : supt∈[0,1]

(f(t) − g(t)

)� 0

}for a given function g. If inft∈[0,1] g(t) > 0 and g ∈ D(0, 1) then conditions[A0]–[A2] are satisfied, and the sets A(t) and Af (t) are half-intervals of theform [v,∞):

A(t) = [g∗(t),∞), g∗(t) = infu∈(t,1]

g(u),

μA(t) = α

∞∫g∗(t)

u−α−1du = g−α∗ (t), μ(A) =

1∫0

g−α∗ (t) dt.

Example 10.2.2. The set

A ={

f ∈ D(0, 1) :

1∫0

f(u)du � b

}, b > 0,

satisfies all the conditions [A0]–[A2]. In this situation, the sets A(t) and Af (t)are, as was the case in Example 10.2.1, half-intervals of the form [v,∞): we haveA(t) = [b/(1 − t),∞) and

μA(t) = α

∞∫b/(1−t)

u−α−1du =(

1 − t

b

, μ(A) =

1∫0

(u

b

du =b−α

α + 1.

Page 451: Asymptotic analysis of random walks

420 Large deviations in trajectory space

Now we can formulate the main assertion. First consider one-sided sets A andthe case of finite variance Eξ2 < ∞.

Theorem 10.2.3. Let Eξ = 0, Eξ2 < ∞ and assume that conditions [ · , =] withV ∈ R and α > 2, [A0+], [A1] and [A2] are satisfied. Then, as n → ∞,

P(Sn(·) ∈ xA

)= μ(A)nV (x)(1 + o(1)), (10.2.8)

where the measure μ(A) is defined in (10.2.1), (10.2.2) and (10.2.5) and the re-mainder term o(1) is uniform in the zone n < cx2/lnx for a suitable c.

Regarding the value of c, see Remark 4.4.2.

Remark 10.2.4. The assertion of the theorem is substantially different from thoseon the asymptotics of P

(Sn(·) ∈ xA

)in the ‘Cramer case’, when Eeλξ < ∞ for

some λ > 0. The foremost difference is that, generally, in the Cramer case onecan find only the asymptotics of lnP

(Sn(·) ∈ xA

). In addition, the nature of that

asymptotics proves to be completely different from (10.2.8) (see e.g. [38] and alsoChapter 6).

Proof of Theorem 10.2.3. The scheme of the proof is the same as that used inChapters 2–5. For

Gn := {Sn(·) ∈ xA}, Bj = {ξj < y}, B =n⋂

j=1

Bj ,

we have, for x = ry, r > 2,

P(Gn) = P(GnB) + P(GnB), (10.2.9)

P(GnB) =n∑

j=1

P(GnBj) + O((nV (x))2

), (10.2.10)

where, owing to the obvious inclusion Gn ⊂ {Sn � x} (see (10.2.4)) and Theo-rem 4.1.2,

P(GnB) = o((nV (x))2

).

As before, it remains to evaluate P(GnBj).Let

S〈j〉n (t) := Sn(t) − eξj ,j/n(t), j � n, (10.2.11)

so that the process S〈j〉n (t) is obtained from Sn(t) by removing the jump ξj at the

point j/n. For ε > 0, put

Cj :={

maxt∈[0,1]

|S〈j〉n (t)| < εx

}. (10.2.12)

It is obvious that ξj and {S〈j〉n (t)} are independent of each other and that, for

Page 452: Asymptotic analysis of random walks

10.2 One-sided large deviations in trajectory space 421

ε = ε(x) → 0 slowly enough (but in such a way that εx � √n), we have from

the Kolmogorov inequality that

maxj�n

P(Cj) → 0.

This implies that, for such an ε,

P(GnBj) = P(GnBjCj) + o(V (x)

). (10.2.13)

Now let F〈j〉n be the σ-algebra generated by ξ1, . . . , ξj−1, ξj+1, . . . , ξn. Then

P(GnBjCj) = E[P(GnBj |F〈j〉

n ); Cj

]. (10.2.14)

Furthermore, observe that x−1Sn(·) ∈ Uε(eξj ,j/n) on the event Cj , where Uε(f)denotes the ε-neighbourhood of f in the uniform norm. Next we note also that,for fixed σ-algebra F

〈j〉n , the event GnBj in (10.2.14) is equivalent to the event

Bj ∩ {ξj ∈ xAf (j/n)} with f(·) = x−1S〈j〉n (·), so that Af (t) does not depend

on ξj .By condition [A2], on the set Cj one has

P(Bj ∩ {ξj ∈ xAf (j/n)}|F〈j〉

n

)= P

(ξj � y, x−1ξj ∈ A(j/n)

)(1 + o(1)) (10.2.15)

and

P(ξj ∈ xA(j/n)

)= V (x)μA(j/n)(1 + o(1)). (10.2.16)

The event {ξj � y} on the right hand-side of (10.2.15) is ‘redundant’ owingto (10.2.4), whereas the equality in (10.2.16) holds because the set A(j/n) is aunion of a finite number of intervals and V (ux) ∼ u−αV (x) as x → ∞.

Since P(Cj) → 1, we obtain from (10.2.14)–(10.2.16) that

P(GnBjCj) = V (x)μA(j/n)(1 + o(1)). (10.2.17)

Combining this relation with (10.2.9), (10.2.10) and (10.2.13) and using the uni-formity in condition [A2] and (10.2.15), (10.2.16), we obtain

P(Gn) = V (x)n∑

j=1

μA(j/n)(1 + o(1)) = μ(A)nV (x)(1 + o(1)).

The uniformity of o(1) in (10.2.8) follows from that of the bounds from § 4.1. Thetheorem is proved.

Remark 10.2.5. An assertion close to Theorem 10.2.3 was obtained in [225] inthe case when ξ has a regularly varying density

−F ′+(t) = αt−α−1L(t), L is an s.v.f. at infinity.

In this case, conditions [A1] and [A2] could be somewhat relaxed with regard tothe assumption that A(t) and Af (t) are unions of finite collections of intervals.

Page 453: Asymptotic analysis of random walks

422 Large deviations in trajectory space

The same paper [225] contains a ‘transient’ assertion for deviations x � √n as

well, where an approximation for P(Gn) includes also the term P(w(·) ∈ xA

),

w(·) being the standard Wiener process.

An assertion on the asymptotics of the probability P(S(·) ∈ xA) in the caseEξ = 0, Eξ2 = ∞ has a similar form, as follows.

Theorem 10.2.6. Let Eξ = 0, Eξ2 = ∞ and let conditions [<, =] with W, V ∈R, [A0+], [A1] and [A2] be satisfied. If W (t) < cV (t) then the relation (10.2.8)holds for x � σ(n) ≡ V (−1)(1/n).

If condition W (t) < cV (t) is not satisfied then (10.2.8) will still be true butonly for values of n and x such that nW (x/ lnx) < c.

The assertions on the uniformity of the remainder term o(1) in (10.2.8) in therespective range of n-values and an analogue of the above remark on relaxingconditions [A1] and [A2] remain valid in this case as well. Under more stringentconditions on the tail F+(t), such a relaxation was given in paper [131], whichcontains an assertion close to Theorem 10.2.6.

Proof. The proof of Theorem 10.2.6 repeats, up to obvious changes, the proof ofTheorem 10.2.3. Now, to obtain desired bounds for the probabilities P(GnB),P(Sn � εx) and P

(mink�n Sk < −εx

), one just needs the corresponding re-

sults of Chapter 3. Therefore a more detailed exposition of the proof will beomitted.

10.3 The general case

We will begin with the so-called two-sided boundary problem, where, for givenfunctions g+(t) > 0, g−(t) < 0 possessing the properties

inft∈[0,1]

g+(t) = 1, supt∈[0,1]

g−(t) = −g, g > 0, (10.3.1)

we want to find the asymptotic behaviour of the probability of the event Gn com-plementary to

Gn :={

xg−(k

n

)< Sk < xg+

(k

n

); k = 1, . . . , n

}.

Clearly Gn = G+n ∪ G−

n , where

G+n =

{maxk�n

(Sk − xg+

(k

n

))� 0

}, G−

n ={

mink�n

(Sk − xg−

(k

n

))� 0

}.

Hence

P(Gn) = P(G+n ) + P(G−

n ) − P(G+n G−

n ). (10.3.2)

Page 454: Asymptotic analysis of random walks

10.3 The general case 423

The asymptotics of P(G±n ) were found (under the appropriate conditions) in

Chapters 3 and 4 (they are of the form c+nV (x) and c−nW (x) respectively).Now we will show that

P(G+n G−

n ) = o(P(Gn)

). (10.3.3)

To avoid repeating very similar formulations in the cases Eξ2 < ∞, Eξ = 0and Eξ2 = ∞, Eξ = 0 (cf. e.g. Theorems 10.2.3 and 10.2.6), we will introduce a‘united’ condition [Q∗], which ensures that the approximations

P(Sn � x) ∼ P(Sn � x) ∼ nV (x), (10.3.4)

P(S n � −x) ∼ P(Sn � −x) ∼ nW (x), (10.3.5)

where S n = mink�n Sk, hold true (not necessarily both at the same time) or thatthe respective upper bounds cnV (x) and cnW (x) for the probabilities in questionwill be valid.

In the case Eξ2 < ∞, Eξ = 0, condition [Q∗] means that x > c√

n lnn,

n → ∞, and that one of the following three alternative sets of conditions:

(1) [=, =], W (x) ∼ c1V (x), c1 > 0, α > 2, c >√

α − 2; (10.3.6)

(2) [<, =], W (x) = o(V (x)

), α > 2, c >

√β − 2; (10.3.7)

(3) [=, <], V (x) = o(W (x)

), β > 2, c >

√α − 2. (10.3.8)

It follows from Remark 4.4.2 that under conditions (10.3.6) we will have therelations (10.3.4), (10.3.5). Under condition (10.3.7), we will have (10.3.4) andthe bound P(S n � −x) < c2nW (x) (see Remark 4.4.2 and Corollary 4.1.4). Asimilar (symmetric) assertion holds under condition (10.3.8).

In the case Eξ2 = ∞, Eξ = 0, condition [Q∗] means that x → ∞ and thatone of the following three alternative sets of conditions is satisfied:

(1) [=, =], α ∈ (1, 2), W (x) ∼ c1V (x), c1 > 0, nV (x) → 0; (10.3.9)

(2) [<, =], α ∈ (1, 2), W (x) = o(V (x)

), nV

( x

lnx

)→ 0; (10.3.10)

(3) [=, <], β ∈ (1, 2), V (x) = o(W (x)

), nW

( x

lnx

)→ 0. (10.3.11)

It follows from Theorem 3.4.1 that under conditions (10.3.9) we will have theasymptotics (10.3.4), (10.3.5). If (10.3.10) holds true then we will have (10.3.4)and the bound P(S n � −x) < c2nW (x). A similar (symmetric) situation takesplace when (10.3.11) is satisfied.

We have the following bound for P(G+n G−

n ).

Lemma 10.3.1. Let condition [Q∗] be satisfied. Then

P(G+n G−

n ) � cn2[V (x)W

(x(1 + g)

)+ W (gx)V

(x(1 + g)

)]� c1n

2V (x)W (x), (10.3.12)

Page 455: Asymptotic analysis of random walks

424 Large deviations in trajectory space

where g is defined in (10.3.1).

Proof. Let

η± := min{

k > 0 : ±Sk � ±xg±(k

n

)}.

Since {η± > k} ∈ Fk := σ(ξ1, . . . , ξk), we have

P(G+n G−

n ) �n∑

k=1

P(η+ = k, η− > k)P(S n−k � −x(1 + g)

)+

n∑k=1

P(η− = k, η+ > k)P(Sn−k � x(1 + g)

). (10.3.13)

By virtue of condition [Q∗],

P(Sn−k � x(1 + g)

)� c(n − k)V

(x(1 + g)

)� cnV

(x(1 + g)

),

P(Sn−k � −x(1 + g)

)� c(n − k)W

(x(1 + g)

)� cnW

(x(1 + g)

).

Hence (10.3.13) implies the inequality

P(G+n G−

n ) � cnW(x(1 + g)

)P(Sn � x) + cnV

(x(1 + g)

)P(S n � −gx).

Again using bounds from Chapters 3 and ,4, we obtain (10.3.12). The lemma isproved.

Lemma 10.3.1 and the relation (10.3.2) imply the following result.

Theorem 10.3.2. Let condition [Q∗] be satisfied. Then

P(Gn) =(P(G+

n ) + P(G−n ))(1 + o(1)). (10.3.14)

The asymptotics of P(G±n ) were described in Theorems 3.6.4 and 4.6.7.

Now we can obtain in a similar way extensions of Theorems 10.2.3 and 10.2.6to the general case of ‘two-sided’ deviations. We will again assume that ourconditions [A0]–[A2]are satisfied.

In the case of ‘two-sided’ deviations, the sets A(t) and Af (t) from conditions[A1] and [A2] could contain intervals from the negative half-line (−∞, 0) aswell. Put

A+(t) := A(t) ∩ (0,∞), A−(t) := A(t) ∩ (−∞, 0).

Then, under conditions [A1] and [A2], there will exist measures μA±(t) on thereal line and measures μ±(A) on D(0, 1) such that

μ±(A) :=

1∫0

μA±(t)dt.

Page 456: Asymptotic analysis of random walks

10.3 The general case 425

We define the sets A±f (t) in a similar way and assume in condition [A2] that,

as ‖f‖ → 0,

α

∫A±

f (t)

u−α−1du → μA±(t)

uniformly in t ∈ [0, 1].As in the previous section (see Examples 10.2.1 and 10.2.2), one can easily ver-

ify that the above-mentioned conditions are satisfied in the two-boundary problemstated at the beginning of this section and also in the problem on the asymptoticsof the probabilities

P

(∣∣∣∣∣1∫

0

Sn(u)du

∣∣∣∣∣ � x

)or P

( 1∫0

∣∣Sn(u)∣∣du � x

).

Now we can formulate the following assertion.

Theorem 10.3.3. Let conditions [Q∗], [A1] and [A2] be satisfied. Then

P(Sn(·) ∈ xA

)= n

[μ+(A)V (x) + μ−(A)W (x)

](1 + o(1)). (10.3.15)

Proof. Since, owing to (10.2.3),

Gn ={Sn(·) ∈ xA

} ⊂ {Sn � x} ∪ {S n � −x},we can write Gn = G+

n ∪ G−n , where

G+n := Gn ∩ {Sn � x} and G−

n := Gn ∩ {S n � −x}.Then, by virtue of Lemma 10.3.1,

P(G+n G−

n ) � P(Sn � x, S n � −x) � cn2(V (x)W (x)

).

Owing to (10.3.2), it remains to find the asymptotics of P(G±n ). Assume, for

simplicity and definiteness, that condition [=, =] is satisfied.It follows from the standard arguments from the previous chapters (e.g. from

the argument in the proof of Theorem 3.5.1 on the asymptotics of P(Hn), whereHn = {Sn � x}) that, for Bj = {ξj < y} with y = x/r, r > 1 fixed,

P(G+n ) = P(GnHn) =

n∑j=1

P(GnHnBj) + o(nV (x)

),

where, cf. (10.2.13),

P(GnHnBj) = P(GnHnBjCj) + o(V (x)

)= P(GnBjCj) + o

(V (x)

);

the Cj were defined in (10.2.12).Thus,

P(G+n ) =

n∑j=1

P(GnBjCj) + o(nV (x)

);

Page 457: Asymptotic analysis of random walks

426 Large deviations in trajectory space

we note that the asymptotics of P(GnBjCj) was established in the proofs ofTheorems 10.2.3 and 10.2.6. Combining this with similar results for P(G−

n ), weobtain the assertion of the theorem.

If condition [<, =] with W (x) = o(V (x)

)holds then finding the exact asymp-

totics of P(G−n ) is replaced by establishing the bound P(G−

n ) < cnW (x), andin this case, the assertion (10.3.15) will remain true. The case [=, <] withV (x) = o

(W (x)

)can be dealt with in a similar way.

The theorem is proved.

Page 458: Asymptotic analysis of random walks

11

Large deviations of sums of random variables oftwo types

Chapters 11–13 of the book deal with random walks with non-identically dis-tributed jumps. In the present chapter, we will consider a rather special problem(compared with the material of Chapters 12 and 13) on the asymptotics of thedistributions of sums of r.v.’s of two types. The results of this chapter will be usedin Chapter 13. We will start with a discussion of motivations and applications ofthe problem.

11.1 The formulation of the problem for sums of random variables

of two types

Let ξ1, ξ2, . . . and τ1, τ2, . . . be two independent sequences of independent r.v.’s,which are independent copies of the r.v.’s ξ and τ with distributions Fξ and Fτ

respectively, and let

E|ξ| < ∞, E|τ | < ∞.

Put

Sn :=n∑

i=1

ξi, Tm :=m∑

i=1

τi.

The aim is to study the asymptotics of the probabilities

P (m,n, x) := P(Tm + Sn � x

)→ 0 (11.1.1)

as x → ∞. Without losing generality, we may assume that m � n. The num-bers m and n can be either fixed or unboundedly increasing. If n → ∞ (m → ∞)then, again without loss of generality, it may be assumed that

Eξ = 0 (Eτ = 0).

Studying the asymptotics of P (m,n, x) is of interest in a range of problems. Forexample, papers [13, 124] (which also contain discussions of motivations andapplications to queueing theory) deal with the asymptotics of

P(η∗(τ) > n

)→ 0

427

Page 459: Asymptotic analysis of random walks

428 Large deviations of sums of random variables of two types

as n → ∞, where the r.v. τ � 0, η∗(t) = min{n � 1 : S∗n > t} is the first

crossing time of the level t by the random walk S∗n =

∑ni=1 ξ∗i , n = 1, 2, . . . and

the r.v.’s ξ∗i � 0 are i.i.d. and independent of τ . Evidently

{η∗(τ) > n} = {S∗n � τ}.

If we let ξi := a − ξ∗i , where a = Eξ∗i , then {η∗(τ) > n} = {Sn + τ � an} sothat

P(η∗(τ) > n) = P (1, n, an). (11.1.2)

Hence we have arrived at problem (11.1.1) under the following special assump-tions:

(1) m = 1, x = an;(2) ξ � a, τ � 0.

Observe that in [13, 124] it was assumed that the distribution Fτ is close todistributions from the class Se.

Analysing the asymptotics of P (m,n, x) for m = 1 could be of interest in anumber of other problems, unrelated to (11.1.2). Thus to some extent, one canreduce to such an analysis the problem on the probabilities of large deviations ofsemi-Markov processes defined on a Markov chain {Y (n)} with a positive atom,say, at a point y0. Let X(n) be the value of the semi-Markov process at the timeof the nth visit of the chain to y0. Then

X(n) = X(1) + ξ1 + · · · + ξn,

where ξk is the increment of the semi-Markov process on the kth of the cyclesformed by the visits of {Y (j)} to y0. In this case the distribution of τ := X(1)is, generally speaking, different from that of ξ1, when Y (0) �= y0.

Another example is provided by a renewal process with a time of the first re-newal τ > 0 and time intervals ξ1, ξ2, . . . between subsequent renewals (ξi � 0).As is well known, in the case of a stationary process one has here

P(τ � t) =1Eξ

∞∫t

P(ξ � u) du. (11.1.3)

Studying the large deviations of the time of the nth renewal, we will arrive atproblem (11.1.1) with m = 1.

Many results on the asymptotics of P (m,n, x) for a fixed m can be easilyextended to the case when m = ν is a Markov time for the sequence {τi}, and oneis interested in the asymptotics of P (ν, n, x) or P (ν, n−ν, x). Such probabilitiescan arise, for example, in the theory of semi-Markov processes.

In what follows, we will consider problem (11.1.1) in its general form. Asbefore, we will deal with the two classes of ‘right-sided distribution tails’:

Page 460: Asymptotic analysis of random walks

11.2 Asymptotics of P (m,n, x) for regularly varying distributions 429

(1) the class R of distributions with regularly varying tails,

F ∈ R ⇔ F+(t) = V (t) = t−αL(t), α > 1, (11.1.4)

where L(t) is an s.v.f. at infinity;(2) the class Se of distributions with semiexponential tails,

F ∈ Se ⇔ F+(t) = V (t) = e−l(t), l(t) = tαL(t), α ∈ (0, 1),(11.1.5)

where L(t) is an s.v.f. at infinity possessing certain smoothness properties(see Definition 1.2.22 or (5.1.3)).

Also, we will sometimes write V ∈ R (V ∈ Se) in the cases when V (t) is theright tail of a distribution from the respective class.

For these classes, we will study the asymptotics of P (m,n, x) both in the caseof fixed m and when m → ∞, m � n. When dealing with the class of semiex-ponential distributions, it will be assumed that n → ∞. Because of the technicaldifficulties, we will consider not all possible growth rates of m.

11.2 Asymptotics of P (m,n, x) related to the class of regularly varying

distributions

For the characteristics of the distributions of the r.v.’s τ and ξ we will use, as a rule,the same symbols, if necessary endowing them with subscripts τ and ξ, respec-tively. Recall that we assumed without losing generality that m � n. If n → ∞(m → ∞) then we also assume that Eξ = 0 (Eτ = 0).

The relations a(x, m, n) ∼ b(x,m, n) and a(x, m, n) � b(x,m, n) respec-tively mean that

a(x,m, n)b(x, m, n)

→ 1 anda(x,m, n)b(x, m, n)

→ ∞

as x → ∞ (the dependence of the ranges for m and n on x will be specified inthe respective assertions). For an r.v. ζ, its distribution will be denoted by Fζ , sothat Fζ,±(t) will stand for the tails of that distribution:

Fζ,+(t) = P(ζ � t), Fζ,−(t) = P(ζ < −t).

Condition [ · , · ]ζ will denote condition [ · , · ], which we introduced in § 2.1,but for the distribution Fζ . If, say, condition [<, <]ζ is satisfied then Vζ(t) andWζ(t) will denote the regularly varying majorants for Fζ,+(t) and Fζ,−(t) respec-tively. If condition [=, =]ζ is met then Vζ(t) and Wζ(t) will denote the regularlyvarying tails themselves: Fζ,+(t) = Vζ(t) = t−αζ Lζ(t), Fζ,−(t) = Wζ(t) =t−βζ LWζ

(t).

Theorem 11.2.1. Let x → ∞ and at least one of the following three sets ofconditions hold (the first two sets are symmetric with respect to τ and ξ):

Page 461: Asymptotic analysis of random walks

430 Large deviations of sums of random variables of two types

(1) [ · , =]τ , Eτ2 < ∞, ατ > 2, [ · , <]ξ , nVξ(x) = o(mVτ (x));

(2) [ · , =]ξ , Eξ2 < ∞, αξ > 2, [ · , <]τ , mVτ (x) = o(nVξ(x));

(3) [ · , =]τ , Eτ2 < ∞, ατ > 2, [ · , <]ξ , Eξ2 < ∞, αξ > 2.

Then, for x �√n ln(n + 1),

P (m,n, x) ∼ mVτ (x) + nVξ(x). (11.2.1)

Now let the following set of conditions be met:

(1a) [<, =]τ , ατ ∈ (1, 2); [ · , <]ξ , αξ > 2, nVξ(x) = o(mVτ (x)).

Then (11.2.1) will hold true provided that the quantities x and m satisfy the rela-tions

mVτ

( x

lnx

)→ 0, mWτ

( x

lnx

)→ 0. (11.2.2)

If αξ ∈ (1, 2) in conditions (1a) then, in order to have (11.2.1), we require inaddition that condition [<, <]ξ must be met and

nWξ

( x

lnx

)→ 0.

Analogues (2a), (3a) of the sets of conditions (2), (3) in the cases specified byconditions [<, =]ξ , [ · , <]τ and [<, =]ξ, [<, =]τ , αξ ∈ (1, 2), ατ ∈ (1, 2),respectively are formulated in a similar way. Under these conditions, the rela-tion (11.2.1) will hold true.

Proof. Let G := Gn,m = {Tm + Sn � x}. The proof is based on the represen-tation

P(G) = P(G; Sn <

x

2

)+ P

(G; Tm <

x

2

)+ P

(Tm � x

2, Sn � x

2

),

(11.2.3)

where the first two terms on the right-hand side are evaluated in the same way,owing to their symmetry. If condition (1) is met then, for δ ∈ (0, 1/2), one has

P(G; Sn � x

2

)= E

[FTm,+(x − Sn); |Sn| � δx

]+ R1 + R2, (11.2.4)

where

R1 � FSn,+(δx)FTm,+

(x

2

), R2 � FSn,−(δx)FTm,+((1 + δ)x).

Here, by virtue of the results of § 4.1, for x � √n ln(n + 1) we have the in-

equalities

FSn,+(δx) � cnVξ(x), FTm,+

(x

2

)� cmVτ (x),

so that

R1 � cmnVτ (x)Vξ(x).

Page 462: Asymptotic analysis of random walks

11.2 Asymptotics of P (m,n, x) for regularly varying distributions 431

It is obvious that the last term in (11.2.3) admits the same upper bound. For R2,

we have

R2 = o(mVτ (x)).

It remains to evaluate the first term on the right-hand side of (11.2.4). Choosinga small enough δ > 0, we can make its ratio to mVτ (x) arbitrarily close to 1.Therefore

P(G; Sn <

x

2

)∼ mVτ (x).

In exactly the same way one can show that

P(G; Tm <

x

2

)� cnVξ(x).

Combining the above results and taking into account the conditions (1), we obtainthat

P(G) ∼ mVτ (x),

which proves (11.2.1).The proof of (11.2.1) under conditions (2) and (3) is similar to the above argu-

ment.If ατ ∈ (1, 2) then the asymptotic relations used in the above proofs will

be valid provided that the relations (11.2.2) hold true. That we have condi-tions (11.2.2) enables one to claim that, under conditions (1a) with Eτ2 = ∞,as m → ∞ (see § 3.1),

P(Tm < −δx) � cmWτ (x) = o(1), P(Tm � x) ∼ mVτ (x).

Now repeating the rest of the above argument, we arrive at (11.2.1). The caseαξ ∈ (1, 2) can be dealt with in the same way. The theorem is proved.

In problem (11.1.2) we have ξi � a, and so those of conditions (1) of Theo-rem 11.2.1 that concern the r.v. ξ are always satisfied.

In the problem on a stationary renewal process, when [ · , =]ξ holds true, wesee that conditions (3) and (3a) of Theorem 11.2.1 are satisfied, so that

P(τ + Sn � x) ∼(

n +x

(α − 1)Eξ

)Vξ(x).

Extending the results of Theorem 11.2.1 to the case when m = ν is a randomstopping time for {τj} does not require any additional considerations providedthat we impose the conditions of the theorem directly on the sum τ∗ := Tν andstudy the probabilities P (1, n, x) constructed for the r.v.’s τ∗ and {ξj}. If theseconditions are imposed on τj and ν then conditions (2) will be satisfied providedthat Vτ (x) + Fν,+(x) = o(nVξ(x)) since, owing to the results of [76], one hasP(Tν � x) � c(Vτ (x) + Fν,+(x)). To state conditions (1) and (3) in terms

Page 463: Asymptotic analysis of random walks

432 Large deviations of sums of random variables of two types

of τj and ν, we need to know the nature of ν in more detail. If, for instance,condition [ · , =]τ is met, Fν ∈ R and {τj} and ν are independent then

P(Tν � x) =∞∑

k=1

P(ν = k)P(Tk � x) ∼ EνP(τ � x) + P(ν � x

),

and so reformulating conditions (1) and (3) in terms of τj and ν will cause nodifficulties.

11.3 Asymptotics of P (m,n, x) related to semiexponential distributions

In this section, it will be assumed that n → ∞ whereas m can be either fixed orgrowing with n. However, we will not cover all growth rates for m → ∞, sincedealing with some of them requires serious technical difficulties to be overcome.

Regarding ξi, we will assume that

Eξi = 0, Eξ2i = 1, E|ξi|b < ∞, b > 2. (11.3.1)

As in the previous section, functions V (t) of the form (11.1.5) but with sub-scripts τ or ξ will play the role of the tails or their majorants for the distributionsof τ and ξ.

As in Chapter 5, a condition of the form [ · , =]ζ or [ · , <]ζ will now meanrespectively that the distribution of ζ is semiexponential, Fζ,+ = Vζ ∈ Se, orthat the right tail Fζ,+ of the distribution admits a majorant Vζ ∈ Se and so on.The relation V ∈ Se means that the function l(t) = tαL(t) in (11.1.5) has theproperty (see (5.1.3)) that, for Δ = o(t), t → ∞,

l(t + Δ) − l(t) =αΔl(t)

t(1 + o(1)) + o(1). (11.3.2)

We will also need a stronger condition, [D], introduced in § 5.4 (see 5.4.1)),with q(t) = 0. This condition requires that, for Δ = o(t), t → ∞,

l(t + Δ) − l(t) = l′(t)Δ +α (α − 1)

2l(t)t2

Δ2(1 + o(1)). (11.3.3)

Note that condition [D] is somewhat excessive for the assertions below andcould be relaxed (cf. [13, 124, 51]).

If the function l(t) is defined just for integral t (the lattice case) then we assumethat a modification of condition [D] presented in § 5.4 (see p. 251) is satisfied withq(t) = 0.

In what follows, we will need the basic facts and results presented in §§ 5.1–5.3.As in § 11.2, the notation introduced in these sections will be endowed with sub-scripts ξ and τ , respectively. For example, if condition [ · , =]τ (with Fτ ∈ Se)or condition [ · , <]τ is satisfied then lτ , ατ will denote the respective character-istics l and α, which are present in (11.1.5) and which correspond to the r.v. τ . Wewill also endow any further notation with subscripts τ and ξ when it is necessary

Page 464: Asymptotic analysis of random walks

11.3 Asymptotics of P (m,n, x) for semiexponential distributions 433

to indicate to which distribution it corresponds. For instance, we will denote byg(τ,ξ)κ the functions gκ in (5.4.26) with l(x − t) replaced by lτ (x − t), while the

deviation function Λκ = Λ(ξ)κ and the parameters κ = κξ = �1/(1 − α) + 1,

α = αξ, still correspond to ξ. Let

M (τ,ξ) = M (τ,ξ)(x, n) := mint

g(τ,ξ)κ (t, x, n) (11.3.4)

(cf. (5.4.27)). Then, cf. (5.4.28), we have

M (τ,ξ) = lτ (x)(

1 − α2τ

2nw

(τ)1 (x)(1 + o(1))

)= lτ (x) − α2

τ

2nw

(τ)2 (x)(1 + o(1)), (11.3.5)

where w(τ)i (x) = liτ (x)x−2, i = 1, 2, so that corollaries similar to (5.4.29) and

(5.4.30) will hold true. The functions σ(τ)i (n) = w

(τ)(−1)i (1/n) for i = 1, 2

are introduced in a similar way. Thus, according to the above convention, inTheorem 5.4.1 one has M = M (ξ,ξ), wi = w

(ξ)i and σi = σ

(ξ)i .

Theorem 11.3.1. Let m be fixed, n → ∞.

(i) Let condition [ · , =]ξ be satisfied (so that the Cramer approximation (5.1.11)holds true) and, for some t0 > 0,

Fτ,+(t) � cVξ(t), t � t0. (11.3.6)

Then, for all x � 0,

P (m,n, x) ∼ P(Sn � x). (11.3.7)

(ii) Let conditions [ · , =]τ and [ · , <]ξ be met and

(1 + θ)lτ (t) � l(t) (11.3.8)

for t � t0 and some θ > 0, t0 > 0, where l corresponds to Vξ and lτ satisfiescondition [D]. Then, for s

(τ)1 = x/σ

(τ)1 (n) → ∞,

P (m,n, x) ∼ me−M(τ,ξ)(x,n), (11.3.9)

where M (τ,ξ) = M (τ,ξ)(x, n) is defined in (11.3.4), (11.3.5).If s

(τ)2 = x/σ

(τ)2 (n) → ∞ then

P (m,n, x) ∼ mVτ (x). (11.3.10)

(iii) Let conditions [ · , =]τ and [ · , =]ξ be met, s(τ)1 → ∞, s

(ξ)1 → ∞ and lτ

and lξ satisfy [D]. Then

P (m,n, x) ∼ me−M(τ,ξ)(x,n) + ne−M(ξ,ξ)(x,n). (11.3.11)

Here, as in (11.3.9), (11.3.10), one has

P (m,n, x) ∼ mVτ (x) + nVξ(x) (11.3.12)

Page 465: Asymptotic analysis of random walks

434 Large deviations of sums of random variables of two types

provided that s(τ)2 → ∞, s

(ξ)2 → ∞.

Remark 11.3.2. If n is fixed then the assertion (11.3.7) will be satisfied under theconditions of Theorem 11.3.1(i) once we have Fτ,+(t) = o(Vξ(t)) as t → ∞. Thecondition that Fτ,+(t) � cVξ(t) for t � t0 can be relaxed in the case Fτ ∈ Se tothe condition that lτ (t) � lξ(t)−v ln t for a suitable v, which is chosen dependingon the range of the deviations x. It is not difficult to see from Theorem 5.4.1 thata more substantial relaxation of this condition is impossible.

Remark 11.3.3. Condition (11.3.8) can be relaxed to the condition that l(t) �lτ (t) + γ(t), where γ(t) = o(lτ (t)) but γ(t) → ∞ fast enough.

In problem (11.1.2) one has ξ � a, and so the conditions of part (ii) of thetheorem are met. Note also that problem (11.1.2) was considered in [13, 124]under broader conditions on the function l. As we have already observed, inTheorem 11.3.1 condition [D] could be relaxed.

In the problem on a stationary renewal process (see (11.1.3)), if Fξ ∈ Se thenthe conditions of part (iii) of the theorem are satisfied and

Vτ (x) ∼ x1−α

αEξVξ(x).

Proof of Theorem 11.3.1. (i) First suppose that m = 1 and that c � 1 in (11.3.6).Put G := {τ + Sn � x}. Then

P (1, n, x) = P(G) = P(G; τ > t0) + P(G; τ � t0).

The r.v.’s τ and ξn+1 can be given on a common probability space by settingτ := F

(−1)τ,+ (ω), ξn+1 := F

(−1)ξ,+ (ω), where ω is an r.v. that is uniformly distributed

over [0, 1] and independent of Sn. Then ξn+1 � τ for ω � ω0 = Fτ,+(t0) and{ω < ω0} ⊆ {τ > t0}. Therefore

P(G) � P(Sn+1 � x; ω � ω0) + P(Sn + t0 � x; ω > ω0)

= P(Sn+1 � x) + P(Sn + t0 � x)P(ω > ω0)

− P(Sn+1 � x; ω > ω0). (11.3.13)

Further, for an arbitrary fixed numerical sequence ωn ↑ 1 as n → ∞, putNn := −F

(−1)ξ,+ (ωn). Then, clearly, ξn+1 � −Nn for ω � ωn and

P(Sn+1 � x; ω > ω0) � P(Sn+1 � x; ωn � ω > ω0)

� P(Sn − Nn � x; ωn � ω > ω0)

= P(Sn � x + Nn)P(τ ∈ (ω0, ωn]). (11.3.14)

Now observe that the asymptotics of P(Sn � x) as n → ∞, x � 0 will notchange if we replace n by n + 1, and x by x + y, where

|y| = o(nγ), γ <1 − α

2 − α<

12.

Page 466: Asymptotic analysis of random walks

11.3 Asymptotics of P (m,n, x) for semiexponential distributions 435

That is,

P(Sn+1 � x + y) ∼ P(Sn � x). (11.3.15)

This follows from the approximation (5.1.11) and Theorem 5.4.1. (On the junc-tion of the Cramer and intermediate deviation zones one could use either a refine-ment of Theorem 5.4.1 or theorems from [238], which are valid on the whole realline). Therefore, choosing ωn ↑ 1 in such a way that Nn = o(nγ), we obtain

P(τ ∈ (ω0, ωn]) → P(ω > ω0) =: p,

and hence, owing to (11.3.13)–(11.3.15),

P(G) � P(Sn � x)(1 + o(1)) + P(Sn � x)p(1 + o(1))

− P(Sn � x)p(1 + o(1))

= P(Sn � x)(1 + o(1)).

At the same time, for the same reasons, for any sequence Nn → ∞ such thatNn = o(nγ) we have

P(G) � P(G; τ > −Nn)

� P(Sn � x + Nn)P(τ > −Nn)

= P(Sn � x)(1 + o(1)).

This proves (11.3.7) in the case when m = 1 and c � 1 in (11.3.6).When c > 1, we put k := �c + 1. Since Fξ ∈ S owing to Theorem 1.2.36, we

have from Theorem 1.2.12(iii) that FSk,+(t) ∼ kFξ,+(t) as t → ∞, and one canassume without loss of generality that

Fτ,+(t) � FSk,+(t), t � t0.

Therefore, in a similar way to the previous argument, one can define the r.v.’s τ

and S(n)k := ξn+1 + · · · + ξn+k on a common probability space by setting τ :=

F(−1)τ,+ (ω), S

(n)k := F

(−1)

S(n)k ,+

(ω), where ω is an r.v. that is uniformly distributed

over [0, 1] and independent of Sn, and this will ensure that S(n)k � τ for ω � ω0.

The subsequent argument remains basically unchanged from the case when c � 1(one simply replaces Sn+1 by Sn+k).

If m > 1, one should repeat the above argument m times.

(ii) For simplicity we will again assume first that m = 1. Then, cf. (11.2.3), forany fixed δ > 0 we have

P (1, n, x) = P(G) = P(1) + P(2) + P(3), (11.3.16)

where

P(1) := P(Sn � (1 − δ)x

)P(τ > δx),

P(2) := P(G; Sn < (1 − δ)x

),

P(3) := P(G; τ � δx).

Page 467: Asymptotic analysis of random walks

436 Large deviations of sums of random variables of two types

We will begin by bounding P(1). One can assume without loss of generality thatl(t) = (1+θ)lτ (t) for t > t0. Then, along with s

(τ)1 → ∞ we will have s1 → ∞,

where s1 corresponds to the tail Vξ , so that π1 ≡ nx−2l(x) → 0 and, by virtue ofCorollary 5.2.2(i),

P(Sn � x) � nVξ(x)1+o(1). (11.3.17)

Hence, for large enough x,

P(1) � nVτ (δx) exp{−l((1 − δ)x)(1 + o(1))

}� n exp

{−lτ (δx) − lτ ((1 − δ)x)}.

Here

lτ (δx) + lτ ((1 − δ)x) ∼ lτ (x)ψ(δ),

where ψ(δ) = δατ + (1 − δ)ατ > 1 for δ ∈ (0, 1). Therefore

lτ (δx) + lτ ((1 − δ)x) � lτ (x)(1 + ψ1), ψ1 > 0 for δ > 0

(see also (5.4.43)–(5.4.54)), so that for s(τ)1 → ∞ and all large enough x,

P(1) � Vτ (x)1+ψ, ψ > 0. (11.3.18)

Further,

P(2) = E[Vτ (x − Sn); Sn � (1 − δ)x

].

Expressions of this kind were studied in (5.4.46)–(5.4.56). It follows from thoseconsiderations that if s

(τ)1 → ∞ and condition [D] is met for lτ then

P(2) ∼ e−M(τ,ξ)(x,n). (11.3.19)

It remains to bound

P(3) =

δx∫−∞

P(Sn � x − t)P(τ ∈ dt).

Owing to (11.3.17), as s1 → ∞,

P(3) � n

δx∫−∞

e−l(x−t)(1+o(1)) P(τ ∈ dt)

� n exp{−l((1 − δ)x)(1 + o(1))

}= n exp

{−l(x)(1 − δ)α(1 + o(1))}

� n exp{−lτ (x)(1 − δ)α(1 + θ)(1 + o(1))

},

where the last inequality follows from (11.3.8). Hence, for small enough δ wehave

P(3) � e−lτ (x)(1+θ/2) (11.3.20)

Page 468: Asymptotic analysis of random walks

11.3 Asymptotics of P (m,n, x) for semiexponential distributions 437

for all large enough x. Since M (τ,ξ)(x, n) = lτ (x)(1 + o(1)), we derive from(11.3.18)–(11.3.20) that (11.3.9) holds true for m = 1.

When considering an arbitrary m > 1, one should note that since Vτ is subex-ponential we have

P(Tm � x) = e−lτ (x)+ln m+o(1). (11.3.21)

If s(τ)2 → ∞ and condition [D] holds for lτ then one can see from the calculations

in (5.4.58)–(5.4.65) that

M (τ,ξ)(x, n) = lτ (x) + o(1),

and hence (11.3.10) holds true. The second assertion of the theorem is proved.

(iii) Now we will prove the third assertion. Taking into account the assertionsof parts (i), (ii) of the theorem and somewhat simplifying the problem, we willconfine ourselves to considering the case

lτ (t) � lξ(t) � lτ (t)(1 + θ), t � t0, (11.3.22)

for some θ > 0, t0 > 0, which is complementary to the case (11.3.8) and in-equality lτ (t) � lξ(t) in part (i) of the theorem. We will follow the argument inthe proof of part (ii) and the representations (11.3.16). Nothing changes in thebounding of P(1) and the evaluation of P(2). It is not hard to see that, by virtue ofTheorem 5.4.1,

P(3) ∼ ne−M(ξ,ξ)(x,n) ∼ P(Sn � x).

The transition to the case of arbitrary m > 1 and also to the situation wheres(τ)2 → ∞, s

(ξ)2 → ∞ can be made in a similar way to the previous argument.

The theorem is proved.

If m → ∞, n → ∞ then studying the asymptotics of P (m,n, x) could proveto be more difficult. We will note here the following two cases for which thisproblem turns out to be simple:

(1) Fτ ∈ Se, Fξ ∈ Se, s(τ)2 → ∞, s

(ξ)2 → ∞;

(2) r = m/n is a fixed rational number, n → ∞.

In the first case, as we have already observed,

M (τ,ξ)(x, n) = lτ (x) + o(1), M (ξ,ξ)(x, n) = lξ(x) + o(1)

(here we have assumed for simplicity that Eτ2 = Eξ2 = 1; if Eτ2 �= Eξ2 thenone should consider somewhat different functions). Hence the minimum

mint

(M (τ,τ)(x − t,m) + nΛ(ξ)

κξ(t/n)

), (11.3.23)

which we will need to evaluate when calculating an integral for

P(Tm + Sn � x, Sn � (1 − δ)x

)

Page 469: Asymptotic analysis of random walks

438 Large deviations of sums of random variables of two types

(cf. P(2) in (11.3.16)) and when using approximations of Theorem 5.4.1 for thedistribution of Tm, will be attained at t∗ = o(x), as will the minimum in (5.4.26),(5.4.27). Therefore the value (11.3.23) will again have the form

M (τ,ξ)(x, n) + o(1) = lτ (x) + o(1).

From this, using the representation (11.3.16) and computing integrals, which aresimilar to those we dealt with in § 5.4, we obtain the following assertion.

Theorem 11.3.4. Let one of the following three conditions be satisfied:

(1) [ · , <]τ , [ · , =]ξ, mVτ (x) = o(nVξ(x));

(2) [ · , =]τ , [ · , <]ξ, nVξ(x) = o(mVτ (x));

(3) [ · , =]τ , [ · , =]ξ.

Then, as s(τ)2 → ∞, s

(ξ)2 → ∞,

P (m,n, x) ∼ mVτ (x) + nVξ(x). (11.3.24)

Now consider the second case mentioned on p. 437, when m/n = r = m1/n1,n → ∞, m1 and n1 being fixed integers. In other words, m = m1k, n = n1k,k → ∞. Clearly,

Tm + Sn = Hk, (11.3.25)

where Hk :=∑k

i=1 ηi and the r.v.’s ηi are independent and distributed as

η = Tm1 + Sn1 . (11.3.26)

Here, under conditions (1)–(3) of Theorem 11.3.4 we have Fη ∈ Se, and so itremains to make use of Theorem 5.4.1 to obtain an approximation for

P (m,n, x) = P(Hk � x).

Page 470: Asymptotic analysis of random walks

12

Random walks with non-identically distributedjumps in the triangular array scheme in the case of

infinite second moment. Transient phenomena

Chapters 12 and 13 deal with the general problem on random walks with in-dependent non-identically distributed jumps in the triangular array scheme. Weconsider the cases of both finite and infinite variances for jumps with regularlyvarying distributions (analogues of Chapters 3 and 4). Introducing the triangulararray scheme enables us, in particular, to study so-called transient phenomena,when the drift in the random walk vanishes in the limit. In applications to queue-ing theory, this corresponds to a heavy traffic situation.

The transition to the case of random walks with non-identically distributedjumps extends the applicability of the results obtained in Chapters 2–5 (see theIntroduction). In particular, this enables one to evaluate risks in insurance prob-lems, where one deals with claims of different types that are time-inhomogeneous,to study the reliability of queueing systems with customers of different types andso on.

Almost all the main results of Chapters 2–5 can be extended to random walkswith independent non-identically distributed jumps in the triangular array scheme– of course, under some additional conditions ensuring the uniformity of the reg-ular variation in the jump distributions.

In this chapter, we will consider random walks whose jumps have zero means,infinite variance and ‘averaged distributions’ that are close to being regularlyvarying (or which admit regularly varying majorants or minorants).

12.1 Upper and lower bounds for the distributions of Sn and Sn

Let ξ1, ξ2, . . . be independent r.v.’s following the distributions F1,F2, . . . respec-tively. The distributions Fj may also depend on some parameter. As in theclassical triangular array scheme, this parameter may be the number of sum-mands n, so that Fj = F(n)

j will depend on both j and n. In the case whenthe sequence ξ1, ξ2, . . . is infinite, it would be natural to consider other parame-ters as well (for example, this is the case when one studies transient phenomena,see § 12.5). For brevity we will omit in what follows the superscript (n) indicatingthat we are dealing with the triangular array scheme.

439

Page 471: Asymptotic analysis of random walks

440 Non-identically distributed jumps with infinite second moments

As before, let

Sn =n∑

i=1

ξi, Sn = maxk�n

Sk.

Next we will go through all the main stages of deriving the asymptotics of theprobabilities P(Sn � x) and P(Sn � x) presented in Chapter 3, but will do thatnow under new, more general, conditions.

12.1.1 Extensions of Lemma 2.1.1 and of the main inequality (2.1.8)

The extension of Lemma 2.1.1 for P(Sn � x) has the following form. Supposefor a moment that Cramer’s condition is satisfied:

ϕi(μ) := Eeμξi < ∞, i = 1, . . . , n, (12.1.1)

for some μ > 0.

Lemma 12.1.1. For all n � 1, x � 0, μ � 0

P(Sn � x) � e−μx maxk�n

k∏i=1

ϕi(μ). (12.1.2)

This inequality can also be written as

P(

maxk�n

Sk � x)

� e−μx maxk�n

EeμSk .

Proof. The proof follows the argument from § 2.1. Put η(x) := inf{k : Sk � x}.Then, since for k � n the event {η(x) = k} and the r.v.’s Sn−Sk are independent,we have

n∏i=1

ϕi(μ) = EeμSn �n∑

k=1

E(eμSn ; η(x) = k

)�

n∑k=1

E(eμ(x+Sn−Sk); η(x) = k

)= eμx

n∑k=1

n∏i=k+1

ϕi(μ)P(η(x) = k)

� eμx minj�n

n∏i=j+1

ϕi(μ)n∑

k=1

P(η(x) = k)

= eμx P(Sn � x) minj�n

n∏i=j+1

ϕi(μ).

Page 472: Asymptotic analysis of random walks

12.1 Upper and lower bounds for the distributions of Sn and Sn 441

Observing thatn∏

i=1

ϕi(μ)

mink�n

n∏i=k+1

ϕi(μ)= max

k�n

k∏i=1

ϕi(μ),

completes the proof of the lemma.

If the the Cramer condition (12.1.1) is not satisfied then we will make use of‘truncated’ (at level y) versions ξ

(y)i of the r.v.’s ξi, which have the distribution

functions

P(ξ(y)i < t

)= P(ξi < t | ξi < y) =

P(ξi < t)P(ξi < y)

, t � y.

As before, let

Bi = {ξi < y}, B =n⋂

i=1

Bi.

Our aim is to bound the probability

P := P(Sn � x; B).

Repeating the argument from § 2.1 and using 12.1.1, we obtain the following ana-logue of the basic inequality (2.1.8):

P(Sn � x; B) � e−μx maxk�n

k∏i=1

Ri(μ, y), (12.1.3)

where

Ri(μ, y) :=

y∫−∞

eμtFi(dt).

12.1.2 Upper bounds for the distribution of the maximum of sums

To obtain the desired bounds, we will need, as in Chapter 3, conditions on the ex-istence of regularly varying majorants but this time for the averaged distributions

F :=1n

n∑j=1

Fj

with tails

F±(t) =1n

n∑j=1

Fj,±(t), Fj,−(t) = Fj((−∞, t)), Fj,+(t) = Fj([t,∞))

(the conditions will be of a form uniform in the parameter of the triangular arrayscheme). Since we are studying in this section the distributions of Sn and Sn withfinite n, the parameter of the scheme can be identified with n.

Page 473: Asymptotic analysis of random walks

442 Non-identically distributed jumps with infinite second moments

The conditions below could be referred to as uniform averaged regular varia-tion conditions. They relate to the properties of regularly varying functions pre-sented in Theorems 1.1.2 and 1.1.4.

Following our previous conventions, we will say that the averaged distribu-tion F satisfies condition [<, <]U ([<, =]U) if

F−(t) � W (t), F+(t) � V (t) (12.1.4)(F−(t) � W (t), F+(t) = V (t)

),

where V (t), W (t) may depend on the triangular array scheme parameter n andwill possess the properties of an r.v.f. in the sense of conditions [U1], [U2] below.In situations where it is important to stress the dependence of the distribution Fand the majorants V, W on n, we will write F(n), V (n), W (n) respectively.

The above-mentioned condition on the averaged right tails has the followingform. Let α > 0 be fixed.

[U1] For any given δ > 0 there exists a tδ < ∞ such that, for all n, t � tδand v ∈ [1/2, 2], ∣∣∣∣V (n)(vt)

V (n)(t)− v−α

∣∣∣∣ � δ. (12.1.5)

The second condition is stated in terms of uniform inequalities.

[U2] For any given δ > 0 there exists a tδ < ∞ such that, for all n, v �v0 < 1 and tv � tδ ,

v−α+δ � V (n)(vt)V (n)(t)

� v−α−δ. (12.1.6)

Denote by [U] the condition that both conditions [U1], [U2] are satisfied for acommon α ∈ (1, 2) that does not depend on n:

[U] = [U1] ∩ [U2].

Without loss of generality, we will identify the values of tδ in conditions [U1]and [U2].

Observe that it follows from condition [U] that for any given δ > 0 there existsa tδ < ∞ such that, for all t and v satisfying the inequalities t � tδ , tv � tδ , wehave (cf. (1.1.23))

(1 − δ) min{vδ, v−δ} � vαV (n)(vt)V (n)(t)

� (1 + δ) max{vδ, v−δ}. (12.1.7)

It is not difficult to see that the converse is also true: (12.1.7) implies [U].Condition [U] on the tails W has exactly the same form but with α replaced

by β.We will say that the averaged distribution F satisfies condition [U] if the func-

tions V and W corresponding to F satisfy [U1] and [U2].

Page 474: Asymptotic analysis of random walks

12.1 Upper and lower bounds for the distributions of Sn and Sn 443

Remark 12.1.2. When studying the large deviations of Sn and Sn in the casen → ∞, condition [U] can be somewhat relaxed and replaced by the followingcondition:

[U∞] For any given δ > 0 there exist nδ , tδ < ∞ such that (12.1.7) holdsfor all n � nδ , t � tδ, tv � tδ.

A condition sufficient for [U∞] is condition [UR], which will be presentedin § 12.4 below (see p. 464). The latter condition requires that, for some fixed (i.e.independent of n) r.v.f. G(t) and ρ+ ∈ (0, 1),

F (n)(t) =H(n)

nG(t)(1 + o(1)), F

(n)+ (t) = ρ+F (n)(t)(1 + o(1))

(12.1.8)as t → ∞, n → ∞, where H(n) → ∞, H(n) � n.

A simple sufficient condition for the existence of the right regularly varyingmajorants for the averaged distributions is the boundedness of the averaged one-sided moments: for an α � 1,

1n

n∑j=1

E(ξαj ; ξj � 0) < cα < ∞.

In this case, clearly, for t > 0,

F+(t) � V (t) ≡ cαt−α,

and condition [U] will be satisfied. A similar remark is valid for the left tails.If each distribution Fj satisfies condition [<, <]U uniformly in n and j with

common α and β values (the s.v.f.’s Lj in the representations Vj = t−αLj(t) canbe different, however) then this condition will be met for the averaged distribu-tion F as well. For example, consider condition [U2] on the tails Vj . If, for each n

and j, we have for v � v0 > 1, t � tδ

Vj(vt) � Vj(t)v−α+δ

then, obviously, the same inequality will hold for the sums as well:

V (vt) � V (t)v−α+δ.

Now if each distribution Fj satisfies [<, <]U but the value α = αj does de-pend on j, 1 < α∗ � αj � α∗ < 2 then it remains unclear whether con-dition [<, <]U will hold for the averaged distribution F as well. It turns out,however, that when [<, <]U holds for each Fj (uniformly in j, n), one can alsoobtain the desired bounds.

In this connection, we will distinguish between the following two forms ofcondition [<, <]U:

(1) condition [<, <]U on the average, when relations (12.1.5), (12.1.6) holdtrue;

Page 475: Asymptotic analysis of random walks

444 Non-identically distributed jumps with infinite second moments

(2) the individual conditions [<, <]U in which the relations (12.1.5), (12.1.6)hold for each Vj uniformly in n and j, the exponent α in (12.1.5), (12.1.6)being replaced by αj .

The same properties are assumed for the tails Wj .In addition to condition [<, <]U (in one or another form), to obtain the neces-

sary asymptotics in the desired simple form we will also need some further con-ditions to prevent too rapid ‘thinning’ (degeneration) of the tails F−(t) and F+(t)(or W (t), V (t)) in the triangular array scheme as n → ∞ (say, too rapid conver-gence V (t1) → 0 or W (t2) → 0 for some fixed t1 or t2 respectively as n → ∞).The degree of ‘thinning’ of the averaged distribution F under conditions (12.1.8)is described by the ratio H(n)/n, which may tend to zero. In the case of too rapid‘thinning’, a substantial role in bounding the probabilities P(Sn � x) may beplayed by the ‘central parts’ of the distributions F rather than their tails.

We will need the following numerical characteristics:

JVj (t) :=

t∫0

uVj(u) du, JV (t) :=1n

n∑j=1

JVj (t),

JWj (t) :=

t∫0

∞∫u

Wj(v) dv =

∞∫0

min{u, t}Wj(u) du,

JW (t) :=1n

n∑j=1

JWj (t), J(t) := JV (t) + JW (t).

The condition ensuring not too rapid ‘thinning’ of the tails (or their non-degene-racy) as n → ∞ has the following form.

[N] For some δ ∈ (0, min{β − 1, 2 − α}) and for all n and x such thatx → ∞, nV (x) < 1, we have

V (x) � cJ(tδ)x−2, (12.1.9)

where tδ is from conditions [U].

Since for α, β > 1 and a fixed tδ < ∞ the quantity J(tδ) is bounded, itis obvious that we always have (12.1.9) provided that V (x) � cx−2 in therange nV (x) < 1 (recall that V (x) = V (n)(x) depends on n).

Another condition sufficient for (12.1.9) is the inequality

V (tδ) � c1J(tδ), (12.1.10)

where tδ corresponds to a δ < min{β − 1, 2 − α}. (Recall that tδ specifies therange of t where the regular variation properties hold with precision characterisedby δ.) Indeed, in this case, as x → ∞,

V (x) = V (tδ)V (x)V (tδ)

� c1J(tδ)(

x

)−α+δ

,

Page 476: Asymptotic analysis of random walks

12.1 Upper and lower bounds for the distributions of Sn and Sn 445

so that

x2V (x)J−1(tδ) � c1tα−δδ x2−α+δ → ∞.

Consider the following example.

Example 12.1.3. Let ζi be i.i.d. r.v.’s following a common symmetric distributionwith majorant V(ζ) for the right tail. Put

ξi :={

ζi with probability δ(n),0 with probability 1 − δ(n),

where δ(n) → 0 as n → ∞. Here V (t) = δ(n)V(ζ)(t), condition [<, <]U ismet, J(tδ) = cδ(n) → 0,

V (tδ) = δ(n)V(ζ)(tδ) = c−1J(tδ)V(ζ)(tδ) = c1J(tδ)

and the sufficient condition (12.1.10) is satisfied. Thus, if the ‘thinning’ occursowing to the concentration of the probability mass at zero, this does not preventthe desired conditions [U] and [N] from being satisfied.

However, if

ξi :={

ζi with probability δ(n),ζiI(|ζi| < 1

)with probability 1 − δ(n)

then the situation will be quite different. For simplicity, let tδ = 1 for a suitable δ,and δ(n) = n−γ , γ ∈ (0, 1), V(ζ)(x) = Lx−α for x > 1. It is clear thatcondition (12.1.10) is not satisfied since V (1) → 0, J(1) > c > 0 as n → ∞.Condition (12.1.9) will be met if

δ(n)V(ζ)(x) � cx−2,

or, equivalently, if

nγxα−2 < c.

This relation will follow from the inequality nV (x) = n1−γLx−α < 1 only ifαγ/(1 − γ) < 2 − α or, equivalently, if γ < (2 − α)/2. When γ > (2 − α)/2then condition [N] will not be satisfied, and the principal contribution to the prob-abilities of large deviations of Sn and Sn may well come from the central partsof the distributions of the ξi (or the ζi) rather than from their tails.

Now we can formulate the main assertion. Recall that by condition [<, <]Uon the distribution F we understand the existence of majorants V and W forthe tails of Fi, which satisfy condition [U], and that P = P(Sn � x;B) andΠ(x) = nV (x).

Theorem 12.1.4. Let Eξj = 0, j = 1, . . . , n, and let the averaged distribution Fsatisfy conditions [<, <]U and [N] with α ∈ [1, 2), β ∈ (1, 2), and let r =x/y � 1 be fixed. Then the following assertions hold true.

Page 477: Asymptotic analysis of random walks

446 Non-identically distributed jumps with infinite second moments

(i) If W (t) � c1V (t) then, for all n,

P � cΠ(x)r, (12.1.11)

supn,x: Π(x)�v

P(Sn � x)Π(x)

� 1 + ε(v), (12.1.12)

where ε(v) ↓ 0 as v ↓ 0.(ii) If W (t) � c1V (t) for all large enough t then the above inequalities will

remain true for all n and x such that

nW( x

lnx

)< c2 < ∞. (12.1.13)

(iii) The assertions of parts (i), (ii) of the theorem remain true if, instead of stip-ulating condition [<, <]U on the averaged distribution F, we assume thateach distribution Fj , j = 1, . . . , n, satisfies condition [<, <]U with α = αj ,β = βj uniformly in j and n, where

1 < α∗ = minj�n

αj � maxj�n

αj = α∗ < 2,

1 < β∗ = minj�n

βj � maxj�n

βj = β∗ < 2.

In all the assertions where it is assumed that n → ∞, we can replace condi-tion [U] by [U∞].

Put

V (t) := max{V (t),W (t)}, Π(x) = nV (x), S n = mink�n

Sk.

If in condition [<, <]U we replace both majorants by V (t) then (12.1.12) willimply

Corollary 12.1.5. Let the preliminary conditions of Theorem 12.1.4 (those statedprior to part (i)) be satisfied. Then

supn,x: b�v

max{P(Sn � x), P(Sn < −x)

}Π(x)

� 1 + ε(v), (12.1.14)

where ε(v) ↓ 0 as v ↓ 0.

Let Sn := maxk�n |Sk|, F (t) := V (t) + W (t). Theorem 12.1.4 also impliesthe following analogue of Corollary 3.1.3.

Corollary 12.1.6. If the preliminary conditions of Theorem 12.1.4 (those statedprior to part (i)) are met then

supn,x: b�v

P(Sn � x)

nF (x)� 1 + ε(v). (12.1.15)

Page 478: Asymptotic analysis of random walks

12.1 Upper and lower bounds for the distributions of Sn and Sn 447

Remark 12.1.7. The assertions of Theorem 12.1.4 and its corollaries can also bestated in somewhat different terms. The uniformity condition [U] makes it nat-ural to introduce classes F of distributions that satisfy this condition. A class Fwill be characterized by the function tδ in conditions [U] and also the constant c

in condition [N]. In these terms, the assertion of Theorem 12.1.4 states that themain inequalities (12.1.11), (12.1.12) of the theorem hold uniformly in all distri-butions {Fi} from the class F ; moreover, the function ε(·) in (12.1.12), (12.1.14)is specified by the characteristics of the class F only.

Note that we could have avoided introducing condition [N]. In that case, how-ever, the quantity J(tδ) for some δ such that α ± δ ∈ (1, 2), β ± δ ∈ (1, 2)(tδ is from condition [U]) would appear in the bounds for the probabilities P

and P(Sn � x) and so would make the formulations more complicated. Nev-ertheless, we will take this path in Chapter 13 (see Theorem 13.1.2), where thequantity χ that characterizes the ‘thinning’ rate of the tails Vi(t) is present in theformulations of the main assertions.

Observe also that the majorants V (t), W (t) in conditions [N] can be identifiedon the segment [0, tδ] with the tails F+(t) and F−(t) respectively.

Remark 12.1.8. As we have already noted, when studying the probabilities oflarge deviations of Sn and Sn as n → ∞, one can use the relaxed version [U∞]of conditions [U] described in Remark 12.1.2. In this case (12.1.11) will hold forall large enough n but instead of (12.1.12) one should write

supx: Π(x)�v

P(Sn � x)Π(x)

� 1 + ε(v, n),

where ε(v, n) → 0 as v → 0, n → ∞. The same observation applies to Corollar-ies 12.1.5 and 12.1.6.

In paper [127] upper bounds were obtained for the distribution of Sn in termsof ‘truncated’ moments, without any assumptions being made on the existenceof regularly varying majorants (such majorants always exist in the case of finitemoments but their decay rate will not be the true one). This makes the boundsfrom [127] more general, in a sense, but also substantially more cumbersome.One cannot derive from them bounds for Sn of the form (12.1.11), (12.1.12).

Proof of Theorem 12.1.4. The proof mostly repeats that of Theorem 3.1.1, butthe arguments need to be modified for the triangular array scheme under condi-tion [U]. We will consider in detail only those steps where the fact that the ξj arenow non-identically distributed requires substantial amendments.

As before, the proof is based on the basic inequality (12.1.3), which reducesthe problem to bounding the product

maxk�n

k∏j=1

Rj(μ, y), Rj(μ, y) =

y∫−∞

eμt Fj(dt).

Page 479: Asymptotic analysis of random walks

448 Non-identically distributed jumps with infinite second moments

Since ln(1 + v) � v for v > −1, we have

Rj(μ, y) = eln[1+(Rj(μ,y)−1)] � eRj(μ,y)−1,

so thatk∏

j=1

Rj(μ, y) � exp{ k∑

j=1

(Rj(μ, y) − 1)}

. (12.1.16)

Now we will bound the right-hand side of this inequality for k = n. Letting

R(μ, y) :=1n

n∑j=1

Rj(μ, y),

we have

R(μ, y) =

y∫−∞

eμt F(dt) = I1 + I2 + I3,

where

I1 :=

0∫−∞

, I2 :=

M∫0

, I3 :=

y∫M

, M = M(2α) =2α

μ.

(12.1.17)One can bound the integrals I1, I2 in exactly the same way as in § 3.1 but this

time using condition [U]. As in § 3.1, we obtain

I1 = F−(0) + μ

0∫−∞

tF(dt) + μ

∞∫0

(1 − e−μt)F−(t) dt,

where the last integral does not exceed

μ

∞∫0

W (t)(1 − e−μt) dt = μ2

∞∫0

W I(t)e−μtdt, W I(t) =

∞∫t

W (u) du.

By virtue of conditions [U] and the dominated convergence theorem, we obtainthat, as t → ∞,

W I(t) = t

∞∫1

W (tv) dv ∼ tW (t)

∞∫1

v−βdv =tW (t)β − 1

uniformly in n.Now let δ < β − 1. Then, for tδ from condition [U2] and such that

W I(t) � (1 + δ)tW (t)β − 1

for t � tδ,

Page 480: Asymptotic analysis of random walks

12.1 Upper and lower bounds for the distributions of Sn and Sn 449

one has the inequality

μ2

∞∫0

W I(t)e−μtdt � μ2

tδ∫0

W I(t) dt +(1 + δ)μ2

β − 1

∞∫tδ

tW (t)e−μtdt,

where, owing to condition [U], as μ → 0,

μ2

∞∫tδ

tW (t)e−μtdt = W (1/μ)

∞∫μtδ

uW (u/μ)W (1/μ)

e−udu

∼ W (1/μ)

∞∫0

u1−βe−udu = W (1/μ)Γ(2 − β).

Thus, for μ → 0,

I1 � F−(0) + μ

0∫−∞

tF(dt) + JW (tδ)μ2

+W (1/μ)β − 1

Γ(2 − β)(1 + o(1)

), (12.1.18)

where

JW (t) =

t∫0

⎛⎝ ∞∫u

W (v) dv

⎞⎠ du =

∞∫0

min{u, t}W (u) du.

Further, for M = 2α/μ,

I2 =

M∫0

eμt F(dt) � F+(0) + μ

∞∫0

tF(dt) +

M∫0

(eμt − 1 − μt)F(dt),

where, using an argument similar to that for (3.1.26) (see p. 133), we find that thelast integral does not exceed

12α

μ2(e2α − 1)V ∗(M), V ∗(t) :=

t∫0

uV (u) du.

Next, for δ such that α + δ < 2,

t−2V ∗(t) = t−2

t∫0

uV (u) du = t−2

tδ∫0

uV (u) du +

1∫tδ/t

vV (tv) dv

� JV (tδ)t−2 + (1 + δ)V (t)

1∫0

v1−α−δdv,

Page 481: Asymptotic analysis of random walks

450 Non-identically distributed jumps with infinite second moments

where JV (t) :=∫ t

0uV (u) du. Hence

I2 � F+(0) + μ

∞∫0

tF(dt) + JV (tδ) + cV (1/μ),

which yields, together with (12.1.18), that

I1 + I2 � 1 + J(tδ)μ2 + cF (1/μ), (12.1.19)

where J(t) = JV (t) + JW (t), F (t) = V (t) + W (t).Now we turn to bounding the integral

I3 =

y∫M

eμtF(dt) � V (M)e2α + μ

y∫M

V (t)eμtdt

(cf. (2.2.9), (2.2.11)). Since V (M)e2α � cV (1/μ), the main task will be toevaluate the integral

I03 := μ

y∫M

V (t)eμtdt = eμyV (y)

(y−M)μ∫0

V (y − u/μ)V (y)

e−udu. (12.1.20)

Repeating the corresponding arguments from § 2.2 (see (2.2.13)–(2.2.15)) and us-ing condition [U], we conclude that there exists a function ε(λ) → 0 as λ =μy → ∞ such that

I03 � eμyV (y)(1 + ε(λ)). (12.1.21)

As the final result, we obtain the inequality (in which it is convenient to multi-ply both sides by n)

n(R(μ, y) − 1) � nJ(tδ)μ2 + cnF (1/μ) + neμyV (y)(1 + ε(λ)), (12.1.22)

where ε(λ) → 0 as λ → ∞.Now observe that, bounding the integrals Rj(μ, y) for each of the r.v.’s ξj and

again using a partition Rj(μ, y) = I1,j +I2,j +I3,j of the form (12.1.17), we willsimilarly obtain

I1,j = Fj,−(0) + μE(ξj ; ξj < 0) + μ

0∫−∞

(1 − eμt)Fj(dt),

I2,j � Fj,+(0) + μE(ξj ; ξj � 0) +

M∫0

(eμt − 1 − μt)Fj(dt),

I3,j =

y∫M

eμt Fj(dt)

Page 482: Asymptotic analysis of random walks

12.1 Upper and lower bounds for the distributions of Sn and Sn 451

and

Rj(μ, y) − 1 � μ

0∫−∞

(1 − eμt)Fj(dt)

+

M∫0

(eμt − 1 − μt)Fj(dt) +

y∫M

eμt Fj(dt). (12.1.23)

In the inequalities (12.1.23) the right-hand sides are non-negative, and hence thesums

k∑j=1

(Rj(μ, y) − 1

), k � n,

will not exceed the sum of the right-hand sides of (12.1.23) over all j � n.Now we have already shown that the last sum is bounded by the right-hand sideof (12.1.22). It is evident that

maxk�n

k∑j=1

(Rj(μ, y) − 1

)admits the same upper bound. Therefore (see (12.1.16)),

maxk�n

k∏j=1

Rj(μ, y) � exp{

maxk�n

k∑j=1

(Rj(μ, y) − 1)}

� exp{nJ(tδ)μ2 + cnF (1/μ) + nV (y)eμy(1 + ε(λ))

}.

(12.1.24)

As in § 3.1, let

μ :=1y

lnr

Π(y), Π(y) = nV (y).

Then, cf. calculations on p. 133, we obtain

lnP � − r lnr

Π(y)+ r + ε1

(Π(y)

)+ cnW

(y

| ln Π(y)|)

+ nJ(tδ)y−2 ln2 r

Π(y), (12.1.25)

where ε1(v) → 0 as v → 0.If Π(y) → 0 then Π(y) ln2 Π(y) also tends to zero. Hence the last term on the

right-hand side of (12.1.25) will vanish when Π(y) → 0, provided that

J(tδ)y−2V −1(y) < c, (12.1.26)

where we recall that the quantity tδ from condition [U] corresponds to δ such thatβ − δ > 1, α + δ < 2.

Thus, the last term in (12.1.25) is bounded provided that (12.1.26) holds true

Page 483: Asymptotic analysis of random walks

452 Non-identically distributed jumps with infinite second moments

and Π(x) � 1. It will converge to zero if (12.1.26) holds true and Π(x) → 0.This means that, when condition [N] holds, the last term on the right-hand sideof (12.1.25) can be included in the term ε1

(Π(y)

). We have obtained for lnP

an inequality that coincides with (3.1.29) (where V and W have now a newmeaning). The subsequent argument repeats verbatim that in the proof of The-orem 3.1.1, and hence will be omitted.

The theorem is proved.

12.1.3 Lower bounds for the distributions of the sums Sn

In the present subsection we will establish analogues of the lower bounds fromTheorems 2.5.1 and 3.3.1.

Set

S〈j〉n := Sn − ξj .

Theorem 12.1.9. Let K(n) > 0 be an arbitrary sequence, and let

Q〈j〉n (t) := P

(S〈j〉n

K(n)< −t

).

Then, for y = x + tK(n),

P(Sn � x) �n∑

j=1

Fj,+(y)(1 − Q〈j〉

n (t))− 1

2(nF+(y)

)2.

Proof. Put

Gn = {Sn � x}, Bj = {ξj < y}.Then

P(Sn � x) � P(

Gn;n⋃

j=1

Bj

)�

n∑j=1

P(GnBj) −∑

i<j�n

P(GnBiBj)

�n∑

j=1

P(GnBj) − 12

( n∑j=1

Fj,+(y))2

.

For y = x + tK(n) we have

P(GnBj) =

∞∫y

Fj(du)P(S〈j〉n � x − u)

� P(S〈j〉n � x − y)Fj,+(y) = Fj,+(y)

(1 − Q〈j〉

n (t)). (12.1.27)

The theorem is proved.

Recall the notations

V (t) = max{V (t), W (t)}, σ(n) = V (−1)(1/n).

Page 484: Asymptotic analysis of random walks

12.1 Upper and lower bounds for the distributions of Sn and Sn 453

Theorem 12.1.10. Let the averaged distribution F satisfy the conditions [<, <]U,[N] with α ∈ [1, 2), β ∈ (1, 2), and let y = x + tσ(n). Then, for t → ∞,

P(Sn � x) � nF+(y)(1 + o(1)). (12.1.28)

If, in addition, the conditions [<, =]U and x � σ(n) are met then

P(Sn � x) � nV (x)(1 + o(1)

). (12.1.29)

Proof. We will make use of Theorem 12.1.9, puting K(n) := σ(n). Then

Q〈j〉n (t) = P(S〈j〉

n < −x), x := tσ(n).

Owing to condition [U1], for any fixed t one has

nV (x) = nV(tσ(n)

) ∼ t−α, α = min{α, β}.The right-hand side of the last relation tends to zero as t → ∞, and hence Corol-lary 12.1.5 will be applicable for large enough t, which implies that

Q(j)n (t) � cnV (x) � c1t

−α → 0.

Since, moreover, nF+(y) → 0 as t → ∞ we obtain (12.1.28).The inequality (12.1.29) evidently follows from (12.1.28) since, when t → ∞,

t = o(x/σ(n)) and the conditions of the theorem are met, we have x ∼ y,F+(y) = V (y) ∼ V (x). The theorem is proved.

Example 12.1.11. Let ξi = ciζi, where the ζi are i.i.d. r.v.’s following a com-mon distribution Fζ , Eζi = 0, Fζ,+(t) � V(ζ)(t) = Lt−α, α ∈ (1, 2), andFζ,−(t) � V(ζ)(t) for t � t1, for some t1 < ∞. Then Vi(t) = V(ζ)(t/ci),Wi(t) � V(ζ)(t/ci) for t � cit1.

If c∗ := mini�n ci � c− > 0 and c∗ := maxi�n ci < c+ < ∞, where c± donot depend on n then conditions [U] and [N] will clearly be satisfied.

Now let ci ↓ 0 as i → ∞. Then ξip−→ 0. Condition [U] will evidently be met

(Vi and Wi are just power functions for t � cit1 → 0 as i → ∞).Further, for simplicity letting t1 = 1, ci � 1, we obtain

JVi (1) =

1∫0

uVi(u) du = c2i

t/ci∫0

vV(ζ)(v) dv

� c2i

(12

+ L

t/ci∫1

v1−αdv

)� c2

i

2+

Lcαi

2 − α� bcα

i , b :=12

+L

2 − α.

Hence, for t � 1,

Vi(t) = Lt−αcαi � L

bJV

i (1)t−2,

which proves the inequality (12.1.9) of condition [N]. Thus, conditions [U] and

Page 485: Asymptotic analysis of random walks

454 Non-identically distributed jumps with infinite second moments

[N] are satisfied, and so the assertions of Theorems 12.1.4 and 12.1.10 and thoseof their corollaries will hold true for the sequence ξi = ciζi.

If ci ↑ ∞ as i → ∞ then conditions [U] and [N] are, generally speaking, notsatisfied. In this case, however, one can reduce the problem to the previous one,to a certain extent. Introduce new independent r.v.’s

ξ∗i :=ξn−i+1

cn=

cn−i+1

cnζn−i+1, i = 1, . . . , n,

so that we are again dealing with a representation of the form ξ∗i = c∗i ζi with de-creasing coefficients c∗i = cn−i+1/cn, i = 1, . . . , n, but this time in a ‘triangulararray scheme’, since the c∗i depend on n. Note that here

S∗n =

n∑i=1

ξ∗i =Sn

cn, P(Sn � x) = P(S∗

n � x∗) for x∗ =x

cn.

12.2 Asymptotics of the crossing of an arbitrary remote boundary

For the probabilities of the crossing of linear horizontal boundaries and for thedistribution of Sn, one can obtain the desired asymptotics from the bounds of theprevious section.

The next result follows from Theorems 12.1.4 and 12.1.10.

Theorem 12.2.1. Let Eξj = 0 and the averaged distribution F satisfy conditions[<, =]U and [N] with α ∈ (1, 2). Then the following assertions hold true.

(i) If W (t) < cV (t) and x � σ(n) = V (−1)(1/n) (i.e. nV (x) → 0) then

P(Sn � x) = nV (x)(1 + o(1)), (12.2.1)

P(Sn � x) = nV (x)(1 + o(1)). (12.2.2)

(ii) If W (t) > cV (t) as t → ∞ then the relations (12.2.1), (12.2.2) remain truefor n and x such that

nW( x

lnx

)< c.

The assertion of the theorem could be stated in a uniform version, as in The-orem 3.4.1. Under condition [UR] with H(n) = n (see p. 464), the relationP(Sn � x) ∼ nV (x) was obtained for x > cn in [218].

The assertion of Theorem 12.2.1 can be extended to the case of arbitrary bound-aries. For that case, however, imposing conditions on the averaged distributionsproves to be insufficient.

To make the exposition more compact, we will introduce, as in Chapter 3, thefollowing conditions (cf. p. 138):

[Q] At least one of two following two conditions is met:

[Q1] W (t) � cV (t), x → ∞ and nV (x) → 0;

[Q2] x → ∞ and nV (x/lnx) < c, where V (t) = max{V (t),W (t)}.

Page 486: Asymptotic analysis of random walks

12.2 Asymptotics of the crossing of an arbitrary remote boundary 455

Condition [Q] for the averaged distributions was assumed to be satisfied inTheorem 12.2.1(i),(ii).

In this section, we will consider boundaries {g(k)} of the same form as in § 3.6and will assume that

mink�n

g(k) = cx, c > 0, (12.2.3)

where c does not depend on n (or on any other triangular array scheme parameter).The class of all boundaries satisfying (12.2.3) will be denoted by Gx,n. As in § 3.6,we will study the asymptotics of the probability P(Gn) of the event

Gn ={

maxk�n

(Sk − g(k)

)� 0

}.

Theorem 12.2.2. Let conditions [<, =]U be satisfied for the distributions Fj

uniformly in j and n (with common α ∈ (1, 2) and β). Moreover, let x, n and theaveraged distribution F satisfy conditions [Q] and [N]. Then

P(Gn) =

[n∑

j=1

Vj

(g∗(j)

)](1 + o(1)) + O

(n2V (x)V (x)

), (12.2.4)

where g∗(j) = mink�j g(k) and the terms o(·) and O(·) are uniform over theclass Gx,n of boundaries and the class F of distributions {Fj} satisfying condi-tions [U], [Q] and [N].

It follows from relation (12.2.4) that if condition [Q] is met and

maxk�n

g(k) < c1x (12.2.5)

then

P(Gn) ∼n∑

j=1

Vj

(g∗(j)

). (12.2.6)

Proof. The proof of Theorem 12.2.2 is quite similar to that of Theorem 3.6.4.Let, as before,

Bj = {ξj < y}, B =n⋂

j=1

Bj , P = P(Sn � x; B),

and assume without loss of generality that c = 1 in (12.2.3). Since the averageddistributions also satisfy condition [<, =]U, it follows from Theorem 12.1.4 that

P(GnB) � P � c(nV (x)

)r.

Hence, for r = 2,

P(Gn) = P(GnB) + O((nV (x))2

).

Page 487: Asymptotic analysis of random walks

456 Non-identically distributed jumps with infinite second moments

Therefore, as in (3.6.23), we obtain

P(Gn) =n∑

j=1

P(Gn; ξj � y) + O((nV (x))2

).

The analysis of the terms P(Gn; ξj � y) under new conditions also repeats thatfrom § 3.6, with some insignificant obvious amendments that can be summarizedas follows. Set V(j)(x) :=

∑jk=1 Vk(x). Then, instead of (3.6.24) we will have

the relation

P(Gn; ξj � y) = P(Gn; ξj � y, Sj−1 � x) + O(V(j)(x)Vj(x)

).

With the same definitions of the quantities

Mj,n := max0�k�n−j

(Sk+j − g(k + j)

)+ g∗(j) − Sj

as in (3.6.25), we will have, instead of (3.6.26), the relation

P(Gn; ξj � y) = P(Sj + Mj,n � g∗(j), ξj � y, Zj,n < x/2

)+ P

(Sj + Mj,n � g∗(j), ξj � y, Zj,n � x/2

)+ O

(Vj(x)V(j)(x)

),

where

S n = mink�n

Sk � Zj,n := Sj−1 + Mj,n

d� Sn−1.

Hence

P(ξj � y, Zj,n � x/2

)= O

(Vj(x)nV (x)

),

and we obtain, cf. calculations in § 3.6, that

P(Gn; ξj � y) = P(ξj + Zj,n � g∗(j), Zj,n < x/2

)+ O

(Vj(x)nV (x)

),

(12.2.7)where the principal part on the right-hand side of (12.2.7) can be written as

E[Vj

(g∗(j) − Zj,n

); Zj,n < x/2

]= Vj

(g∗(j)

)(1 + o(1)) + O

(Vj(x)nV (x)

)(cf. (3.6.30)–(3.6.33)). By virtue of the theorems of the previous section, allthe terms o(·) and O(·) that appear in the above argument are uniform over theclass Gx,n of boundaries and the class F of distributions {Fj} satisfying condi-tions [U], [Q] and [N] (see Remark 12.1.7). Collecting together the above results,we obtain the assertion of the theorem.

The theorem is proved.

Page 488: Asymptotic analysis of random walks

12.3 Crossing of a boundary on an unbounded time interval 457

12.3 Asymptotics of the probability of the crossing of an arbitrary remote

boundary on an unbounded time interval. Bounds for the first

crossing time

In this section we will deal with boundaries {g(k)} of the form

g(k) = x + gk, k = 1, 2, . . . , (12.3.1)

where the infinite sequence {gk; k � 1} lies between two linear functions:

c1ak − p1x � gk � c2ak + p2x, p1 < 1, k = 1, 2, . . . (12.3.2)

We assume that the constants 0 < c1 � c2 < ∞ and pi > 0, i = 1, 2, donot depend on the parameter of the triangular array scheme, while the parametera ∈ (0, a0), a0 = const < ∞, can tend to 0. Recall that in our triangular arrayscheme both the distributions {Fj} and the boundary {g(k)} can depend on theparameter of the scheme, so that, generally speaking, the sequence {gk} and alsothe value of a will depend on that parameter. In particular, if the parameter of thescheme coincides with n then we have a = a(n), whereas the constants ci and pi

in (12.3.2) will not depend on n. The parameter of the scheme can, of course, bedifferent: thus, it could be the parameter a → 0, which we identify with the value−Eξk = a → 0 in the problem on transient phenomena for the distribution of Sn

when Eξk < 0.If the boundary is such that

gn � bx, (12.3.3)

then the first term on the right-hand side of assertion (12.2.4) of Theorem 12.2.2will be minorated by the function cnV (x) and hence will dominate under con-dition [Q]. Therefore, under the above conditions, it will follow from Theo-rem 12.2.2 that

P(maxk�n

(Sk − gk) � x)∼

n∑j=1

Vj(x + gj,n), (12.3.4)

where gj,n := minj�k�n gk.Now, the inequality (12.3.3) means, together with (12.3.2), that

c1an � (b + p1)x.

This presents one more (in addition to condition [Q]) upper constraint on n. Thequestion of what will happen for n > (b + p1)x/c1a and, in particular, in thecase n = ∞, remains open. To answer this question, we will need, as in § 3.6,bounds for the first crossing time of the boundary {g(k)} that are similar to theresults of § 3.2. For this, we will have to impose some homogenous domination

Page 489: Asymptotic analysis of random walks

458 Non-identically distributed jumps with infinite second moments

conditions on the tails {Vj}. Let

V(1)(x) :=1n1

n1∑j=1

Vj(x), W(1)(x) :=1n1

n1∑j=1

Wj(x), n1 :=x

a,

(12.3.5)where we assume for simplicity that n1 is an integer. The above-mentioned ho-mogenous condition has the following form:

[H] For nk = 2k−1n1, all k = 1, 2, . . . and t > 0,

c(1)V(1)(t) � 1nk

2nk∑j=nk

Vj(t) � c(2)V(1)(t),

c(1)W(1)(t) � 1nk

2nk∑j=nk

Wj(t) � c(2)W(1)(t).

Let

Sn(a) := maxk�n

(Sk − ak), η(x, a) := inf{k : Sk − ak � x},

η(x, a) = ∞ on the event {S∞(a) < x}, and

Bj(v) := {ξj < y + vaj}, v > 0, B(v) :=n⋂

j=1

Bj(v).

The following theorem bounds probabilities of the form P(Sn(a) � x; B(v)

),

P(Sn(a) � x

)and P(∞ > η(x, a) � xt/a). For n � n1 ≡ x/a, these bounds

can be easily obtained from Theorem 12.1.4, since

{Sn � 2x} ⊂ {Sn(a) � x} ⊂ {Sn � x}.So it will be assumed below that n � n1.

Theorem 12.3.1.

(i) Let the averaged distribution

F(1) :=1n1

n1∑j=1

Fj

with n1 = x/a satisfy conditions [<, <]U, [Q] and [N] with n replaced byn1 in the last two and with α ∈ (1, 2). Furthermore, let condition [H] bemet. Then, for v � 1/4r, r = x/y, one has the inequality

P(Sn(a) � x; B(v)

)� c

[n1V(1)(x)

]r0, (12.3.6)

where r0 = r/(1 + vr) � 4r/5 and r > 5/4.If a � a1 = const > 0 then one can assume without loss of generality

that condition [Q1] is satisfied with n replaced in it by n1.

Page 490: Asymptotic analysis of random walks

12.3 Crossing of a boundary on an unbounded time interval 459

The inequality (12.3.6), with V(1)(x) on its right-hand side replaced by

V(1)(x) := max{V(1)(x), W(1)(x)

},

holds true for any value of a without the assumption that condition [Q] issatisfied.

(ii) Under the conditions of part (i) we have

P(Sn(a) � x

)� cn1V(1)(x), (12.3.7)

and, for values of t that are bounded or grow slowly enough,

P(∞ > η(x, a) � xt

a

)� cn1V(1)(x)t1−α. (12.3.8)

If t → ∞ together with x, and we assume nothing about its growth rate thenthe inequality (12.3.8) remains true provided that the exponent 1 − α in it isreplaced by 1 − α + ε for any fixed ε > 0.

The assertions stated in part (i) after the inequality (12.3.6) hold true forthe inequalities (12.3.7), (12.3.8) as well.

All the above inequalities are uniform in a and over the classes of distri-butions {Fj} and boundaries {g(k)} that satisfy the uniformity conditionsfrom part (i). In other words, the constants c in (12.3.6)–(12.3.8) dependonly on the parameters that characterize the classes of {Fj} and {g(k)}.

Proof. The proof of Theorem 12.3.1 completely repeats that of Theorem 3.2.1 butunder more general conditions. One just needs to use condition [H]. In the proofof the theorem, this property is used when bounding the maxima of sums on theintervals [nk, 2nk]. References to Theorem 3.1.1 in the proof of Theorem 3.2.1should be replaced by references to Theorem 12.1.4 and its corollaries. Note that,owing to condition [H], conditions [U], [Q] and [N] of Theorem 12.1.4 willbe satisfied on the intervals [0, nk], [nk, 2nk] with nk = n12k−1, k = 1, 2, . . .

Consider, for instance, condition [N] on the interval [0, nk], i.e. for n = nk. Byvirtue of [H] and condition [N] on [0, n1], we have

JV (tδ) =

tδ∫0

uV (u) du � c1

tδ∫0

uV(1)(u) du � c2V(1)(tδ) � c3V (tδ).

A similar inequality holds for JW . This demonstrates that condition [N] holdson [0, nk].

The theorem is proved.

Corollary 12.3.2. Let the conditions of Theorem 12.3.1(i) except for [Q] be met.Then the inequalities (12.3.7), (12.3.8) will hold true provided that we replaceV(1) on their right-hand sides by V(1).

The assertion will follow in an obvious way from Theorem 12.3.1(i) if oneuses a common majorant V(1) for both tails. Then condition [Q] will always be

Page 491: Asymptotic analysis of random walks

460 Non-identically distributed jumps with infinite second moments

satisfied (one does not need to assume that n1V1(x) is small; if it were not thenthe inequalities in question would become meaningless).

Let

ηg(x) := min{k : Sk − gk � x}.Corollary 12.3.3. Let the conditions of Theorem 12.3.1(i) and conditions (12.3.1),(12.3.2) be met. Then, for values of t that are bounded or grow slowly enough,

P(∞ > ηg(x) � xt

a

)�

cxV(1)(x)a

t1−α. (12.3.9)

If t → ∞ together with x and we assume nothing about its growth rate then theinequality (12.3.9) remains true provided that the exponent 1−α in it is replacedby 1 − α + ε for any fixed ε > 0.

Proof. The assertions of the corollary follow from Theorem 12.3.1 and the in-equality (see (12.3.2))

P(∞ > ηg(x) � xt

a

)� P

(∞ > η

(x(1 − p1), c1a

)� xt

a

).

It only remains to notice that if we replace x and a in the bound (12.3.8) byx(1 − p1) and c1a respectively then the right-hand side of the bound will changeby an asymptotically constant factor.

Put

g∗k := minj�k

gj = gk,∞

and observe that g∗k ↑, g∗k � gk,n � gk and (12.3.2) implies that

c1ak − p1x � g∗k � c2ak + p2x. (12.3.10)

From Theorem 12.2.2 and Corollary 12.3.2 we obtain the following assertion,where the scheme parameter is identified with a.

Theorem 12.3.4. Let the distributions Fj satisfy condition [<, =]U with com-mon values of α ∈ (1, 2) and β uniformly in j and n. Moreover, let conditions(12.3.1), (12.3.2) and [H], [Q] be satisfied for the averaged distribution F(1)

(with n = n1 = x/a) and let

x

aV(1)(x) � δ(x) → 0 as x → ∞. (12.3.11)

Then

P(supk�0

(Sk − gk) � x)

=

( ∞∑j=1

Vj(x + g∗j )

)(1 + o(1)), (12.3.12)

where the term o(·) is uniform in a and over the classes of the distributions {Fj}and boundaries {g(k)} satisfying the conditions of the theorem.

Page 492: Asymptotic analysis of random walks

12.3 Crossing of a boundary on an unbounded time interval 461

Proof. For an arbitrary fixed t � 1 and

T :=1c1

(c2t + p1 + p2) > t,

put

n :=xt

a≡ n1t, N :=

xT

a≡ n1T.

It follows from condition (12.3.2) that, for j � n, one has

gj,N = gj,∞ = g∗j . (12.3.13)

Further,

P(supk�0

(Sk − gk) � x)

= P(ηg(x) � N) + P(∞ > ηg(x) > N),

(12.3.14)where, by virtue of Theorem 12.2.2, we have approximation (12.2.4) for the firstterm on the right-hand side of (12.3.14). This implies (see also (12.3.13) and [H])that

P(ηg(x) � N) =

(n∑

j=1

Vj(x + g∗j ) +N∑

j=n+1

Vj(x + gj,N )

)(1 + o(1))

+ O(N2V(1)(x)V(1)(x)

), (12.3.15)

where the remainder term O(·) is o(xV(1)(x)/a

)owing to condition (12.3.11).

Since gj,N � g∗j we obtain from (12.3.14), (12.3.15) and Corollary 12.3.3 that

P(supk�0

(Sk − gk) � x)

�( ∞∑

j=1

Vj(x + g∗j )

)(1 + o(1))

+ o(x

aV(1)(x)

)+

cxV(1)(x)a

T 1−α. (12.3.16)

It also follows from (12.3.14) and (12.3.15) that

P(supk�0

(Sk − gk) � x)

�(

n∑j=1

Vj(x + g∗j )

)(1 + o(1)) + o

(x

aV(1)(x)

).

(12.3.17)Therefore, in order to prove (12.3.12) it suffices to verify that

∞∑j=1

Vj(x + g∗j ) > cx

aV(1)(x), (12.3.18)

∞∑j=n+1

Vj(x + g∗j ) < cx

aV(1)(x)t1−α, (12.3.19)

i.e. that the sum on the left-hand side of (12.3.19) can be made, by choosing asuitable t, arbitrarily small compared with the left-hand side of (12.3.18), uni-formly over the above-mentioned classes of boundaries and distributions. Since

Page 493: Asymptotic analysis of random walks

462 Non-identically distributed jumps with infinite second moments

by choosing a suitable t (or, equivalently, T ) the last term on the right-hand sideof (12.3.16) can also be made arbitrarily small relative to

∑∞j=1 Vj(x+g∗j ), while

the second to last term possesses this property by virtue of (12.3.18), we seethat (12.3.12) follows from (12.3.16), (12.3.17).

We now prove (12.3.18). This relation follows from the properties of the se-quence g∗j and the inequality (we use here the right-hand relation in (12.3.10))

∞∑j=1

Vj(x + g∗j ) >

n1∑j=1

Vj(x + g∗n1) �

n1∑j=1

Vj

((1 + c2 + p2)x

)� cx

aV(1)(x).

Next we will prove (12.3.19). Owing to (12.3.10) and [H],∞∑

j=n+1

Vj(x + g∗j ) �∞∑

j=n+1

Vj

((1 − p1)x + c1aj

)=

2n∑j=n+1

+4n∑

j=2n+1

+ · · ·

�c(2)xt

a

∞∑k=0

2kV(1)

((1 − p1)x + 2kc1tx

).

Now by condition [U] the function V(1)(x) behaves like an r.v.f., so that

∞∑k=0

2kV(1)

((1 − p1)x + 2kc1tx

) ∼ cV(1)

((1 − p1 + c1t)x

).

(To obtain a more formal proof of this relation, consider, for n1 = x/a, the sum

1n1

n1∑j=1

∞∑k=0

2kVj

((1 − p1)x + 2kc1tx

)and for each j make use of condition [U]).

From these relations we find that∞∑

j=n+1

Vj(x + g∗j ) �cxtV(1)(x)

a(1 − p1 + c1t)α.

This proves (12.3.19).The required assertions, and hence Theorem 12.3.4, are proved.

If we strengthen somewhat the homogeneity conditions for the tails and bound-aries then we can obtain a simpler form for the right-hand side of (12.3.12). In-troduce a new condition

[HΔ] Let condition [H] be met and, for any fixed Δ > 0 and

nΔ :=⌊xΔ

a

⌋= �n1Δ ,

Page 494: Asymptotic analysis of random walks

12.3 Crossing of a boundary on an unbounded time interval 463

let there exist a number g > 0 and majorants Vj , Wj for Fj , j � 1, such that,as x → ∞, the following relations hold uniformly in k:

1nΔ

(k+1)nΔ∑j=knΔ+1

Vj(x) ∼ V(1)(x), (12.3.20)

1nΔ

(k+1)nΔ∑j=knΔ+1

Wj(x) � cW(1)(x), (12.3.21)

1nΔ

(g(k+1)nΔ − gknΔ

) ∼ ga. (12.3.22)

Corollary 12.3.5. Suppose that the averaged distribution

F(1) =1n1

n1∑j=1

Fj

satisfies conditions [<, =]U, α ∈ (1, 2), [N] and [Q] with n = n1. Moreover,let condition [HΔ] be met and, for definiteness, g1 = o(x). Then

P(supk�0

(Sk − gk) � x)∼ 1

ga

∞∫x

V(1)(u) du ∼ xV(1)(x)ga(α − 1)

. (12.3.23)

Note that if the boundary {gk} increases faster than linearly then condition(12.3.2) will not be satisfied and the asymptotics of the left-hand side of (12.3.23)could be different (cf. Theorem 3.6.7).

Proof of Corollary 12.3.5. We will begin by observing that the relations (12.3.22)and g1 = o(x) also hold for the sequence {g∗j }. Further, owing to (12.3.21),(12.3.22), we obtain that, as x → ∞,

∞∑j=1

Vj(x + g∗j ) =∞∑

k=0

(k+1)nΔ∑j=knΔ+1

Vj(x + g∗j )

�∞∑

k=0

(k+1)nΔ∑j=knΔ+1

Vj(x + g∗1 + nΔkga) ∼ nΔ

∞∑k=0

V(1)(x + g∗1 + nΔkga)

=xΔa

∞∑k=0

V(1)

((1 + o(1) + Δkg)x

) ∼ xV(1)(x)a

∞∑k=0

Δ(1 + o(1) + Δkg)α

=xV(1)(x)

ga

( ∞∫0

dv

(1 + v)α

)(1 + ε(Δ)

)=

xV(1)(x)ga(α − 1)

(1 + ε(Δ)

), (12.3.24)

where ε(Δ) → 0 as Δ → 0. A similar converse asymptotic inequality holds as

Page 495: Asymptotic analysis of random walks

464 Non-identically distributed jumps with infinite second moments

well. As the left-hand side of (12.3.24) does not depend on Δ, we infer that

∞∑j=1

Vj(x + g∗j ) ∼ xV(1)(x)ga(α − 1)

. (12.3.25)

Since the conditions of Theorem 12.3.4 are satisfied, the relations (12.3.25) and(12.3.12) imply (12.3.23). The corollary is proved.

Corollary 12.3.5 implies, in turn, the following.

Corollary 12.3.6. Let ξ1, ξ2, . . . be i.i.d. r.v.’s and let condition [<, =]U andconditions [N], [Q] with n = n1 = x/a be satisfied. Moreover, assume thatxV (x)/a → 0 as x → ∞. Then

P(S(a) � x

) ∼ xV (x)a(α − 1)

.

It is clear that the assertions of Corollaries 12.3.5 and 12.3.6 could be stated ina uniform version, as in Theorem 12.3.4.

12.4 Convergence in the triangular array scheme of random walks with

non-identically distributed jumps to stable processes

12.4.1 A theorem on convergence to a stable law

In this subsection we will establish the convergence of the distributions of Sn/b(n)under a suitable scaling b(n) to a stable law under certain (not the most general)conditions; this will be used in the next section when studying transient phenom-ena. These conditions could be referred to as ‘uniform regular variation’ andformulated as follows (recall our convention that when we want to emphasize thefact that the averaged distribution F depends on n we write F (n), F

(n)+ ).

[UR] There exist an increasing r.v.f.

H(n) = nhLh(n) → ∞ as n → ∞(Lh is an s.v.f.) that satisfies H(n) � n and a fixed r.v.f. G(t) = t−αL(t) (whereα ∈ (1, 2) and L is an s.v.f. that does not depend on n) such that the followingtwo conditions, (1) and (2), are met.

(1)

limt→∞, n→∞

nF (n)(t)H(n)G(t)

= 1. (12.4.1)

In other words, for any given δ > 0 there exist nδ, tδ < ∞ for which

supn�nδ, t�tδ

∣∣∣∣ nF (n)(t)H(n)G(t)

− 1∣∣∣∣ < δ.

Page 496: Asymptotic analysis of random walks

12.4 Convergence of random walks to stable processes 465

(2) There exists the limit

limt→∞, n→∞

nF(n)+ (t)

H(n)G(t)= ρ+ ∈ [0, 1]. (12.4.2)

Observe that, for ρ+ > 0, condition [UR] implies condition [U] of § 12.1.Indeed, by virtue of the properties of r.v.f.’s, for any given [c1, c2] and δ > 0 thereexists a tδ < ∞ such that ∣∣∣∣vαG(vt)

G(t)− 1

∣∣∣∣ � δ (12.4.3)

for all t � tδ , v ∈ [c1, c2], and such that

(1 − δ) min{vδ, v−δ} <vαG(vt)

G(t)< (1 + δ) max{vδ, v−δ} (12.4.4)

for all t � tδ and vt � tδ (see Theorem 1.1.2 and (1.1.23)). Therefore, from(12.4.1)–(12.4.3) with ρ+ > 0,

F(n)+ (tv)vα

F+(t)∼ G(tv)vα

G(t)∼ 1,

which, together with the uniformity of these relations, means that [U1] is satisfied.Condition [U2] is verified in a similar way, with the help of (12.4.1)–(12.4.4).

Condition [UR] also implies the non-degeneracy condition [N] of § 12.1 butonly when H(n) grows fast enough. Indeed, as we have already noted, a sufficientcondition for (12.1.9) has the form F+(x) � cx−2 in the range nF+(x) < 1.Under our present conditions,

F+(x) ∼ H(n)ρ+

nG(x) = ρ+x−2 H(n)

nx2G(x). (12.4.5)

Let

b(n) := G(−1)

(1

H(n)

)= nh/αLb(n), Lb is an s.v.f. (12.4.6)

Then, on the boundary of the range nF+(x) < 1, the deviations x will havethe form x = vb(n). (For such an x we have nF+(x) ∼ H(n)ρ+G

(vb(n)

) ∼ρ+v−α.) Therefore, for the factor following x−2 in (12.4.5) one has at the pointx = vb(n) that

H(n)n

x2G(x) ∼ v2b2(n)v−α

n→ ∞ (12.4.7)

for b2(n) � n (which is always true for h > α/2). Owing to (12.4.5), this meansthat F+(x)�x−2 for x > vb(n), as was to be demonstrated.

We also need an upper restriction on the tails Fj . Namely, we will assume thatthe following condition of ‘negligibility’ of the summands ξj/b(n) is met.

Page 497: Asymptotic analysis of random walks

466 Non-identically distributed jumps with infinite second moments

[S] For any ε > 0, as n → ∞,

maxj�n

P(|ξj | > εb(n)

)→ 0

(or, equivalently, maxj�n Fj

(εb(n)

)→ 0).

Now we can formulate our theorem on convergence to a stable law. As in § 1.5,put

ζn :=Sn

b(n),

where b(n) is defined in (12.4.6).

Theorem 12.4.1. Assume that Eξj = 0, that the averaged tails F (·) satisfy con-dition [UR] with α ∈ (1, 2), and that condition [S] is satisfied. Then, as n → ∞and b(n) � √

n, we have

ζn ⇒ ζ(α,ρ), ρ = 2ρ+ − 1,

where ζ(α,ρ) follows the stable distribution Fα,ρ with ch.f. f(α,ρ)(·) defined inTheorem 1.5.1 (see (1.5.8), (1.5.9)).

Remark 12.4.2. If b(n) ∼ c√

n as n → ∞ then, as will be seen from the proof,under all other conditions of the theorem, the (infinitely divisible) distributionthat is limiting for ζn will contain, along with F(α,ρ), a normal component. Ifb(n) = o

(√n)

then the limiting distribution for the scaled sums Sn will benormal (under the scaling

√n ).

Similar assertions undoubtedly hold in the cases α ∈ (0, 1), α = 1, α = 2 aswell but we will not dwell on them in the present chapter.

Observe that all the conditions of the theorem except [S] refer to the averageddistributions, in the same way as in the Lindeberg condition in the central limittheorem for the case of finite variances.

Example 12.4.3. An analogue of Example 12.1.3. Let ζi be i.i.d. r.v.’s that followa common symmetric distribution having fixed regularly varying tails Gζ±(t),Gζ(t) = Gζ+(t) + Gζ−(t) with exponent α ∈ (1, 2), and let

ξ ={

ζi with probability δ(n),0 with probability 1 − δ(n),

where δ(n) → 0 as n → ∞. It is evident that here F (t) = δ(n)Gζ(t) and thatall the conditions of Theorem 12.4.1 will be satisfied provided that the functionH(n) = nδ(n) → ∞ as n → ∞ is such that b(n) � √

n.

Example 12.4.4. An analogue of Example 12.1.11 under the condition [=, =]with Wζ(t) = cV(ζ)(t) for t � t1. Here ξi = ciζi, Vi(t) = V(ζ)(t/ci) and

Page 498: Asymptotic analysis of random walks

12.4 Convergence of random walks to stable processes 467

Wi(t) = cV(ζ)(t/ci) for t � t1, where V(ζ)(t) = Lt−α for t � t0 > 0. Thismeans that, for t � t0 maxi�n ci,

nF+(t) =n∑

i=1

V(ζ)(t/ci) = V(ζ)(t)n∑

i=1

cαi , nF−(t) = cV(ζ)(t)

n∑i=1

cαi .

Put H(n) :=∑n

i=1 cαi . Condition [UR] will clearly be satisfied provided that

H(n) → ∞ as n → ∞. For simplicity let ci = i−γ , γ > 0. Then for γ < 1/α,as n → ∞,

H(n) ∼ nhLh with h = 1 − αγ, Lh = (1 − αγ)−1,

and, furthermore, G(t) = (1 + c)V(ζ)(t),

b(n) = G(−1)

(1

H(n)

)∼ bn1/α−γ , b =

((1 + c)Lh

1 − αγ

)1/α

.

Since b(n) → ∞ as n → ∞, the above relations imply that condition [S] willbe met. One can easily verify that the second condition in [UR] holds withρ+ = 1/(1 + c). Thus, the r.v.’s ξi in Example 12.1.11 with ci = i−γ andγ < 1/α − 1/2 satisfy the conditions of Theorem 12.4.1. The case when ci ↑as i → ∞ (see Example 12.1.11) can be dealt with in a similar way.

Proof of Theorem 12.4.1. There are two ways to prove the convergence of the dis-tributions of ζn. The first consists in modifying the argument in the proof of The-orem 1.5.1. Owing to the negligibility of the summands ξj/b(n) (condition [S]),the problem reduces to proving the convergence

n∑j=1

[fj

b(n)

)− 1

]→ ln f(α,ρ)(λ),

where fj is the ch.f. of ξj ; observe that

n∑j=1

[fj(μ) − 1

]= n

(∫eiμt F(dt) − 1

).

That is, the problem reduces to studying the asymptotics of∫

eiμtnF(dt) or,which is essentially the same, the asymptotics of

∫eiμtG(dt) as μ → 0 (where

G is the distribution with tails ρ±G(t) and G(t) is the function from condi-tion [UR]), which is what we did in the proof of the above-mentioned Theo-rem 1.5.1.

We will use the second way, which is based on a general theorem on conver-gence to infinitely divisible laws (see e.g. § 19 of [130]). If we fix the limitingdistribution in this theorem to be Fα,ρ, α ∈ (1, 2) then the theorem will take thefollowing form.

Page 499: Asymptotic analysis of random walks

468 Non-identically distributed jumps with infinite second moments

Let Eξj = 0 and condition [S] be satisfied. Then, in order to have the conver-gence

ζn ⇒ ζ(α,ρ) as n → ∞,

it is necessary and sufficient that the following two conditions are met:

(1) for any fixed t > 0,

nF±(tb(n)

)→ ρ±t−α, ρ = 2ρ+ − 1 = 1 − 2ρ−, (12.4.8)

(2)

limε→0

lim supn→∞

n

∫|t|<ε

t2 F(dt b(n)

)= 0. (12.4.9)

To prove Theorem 12.4.1, it suffices to verify that condition [UR] impliesconditions (12.4.8), (12.4.9). That (12.4.8) holds is next to obvious: for eachfixed t, as n → ∞,

nF±(tb(n)

)∼ H(n)ρ±G(tb(n)

) ∼ ρ±t−αH(n)G(b(n)

) ∼ ρ±t−α

(if ρ± = 0 then nF±(tb(n)

)→ 0).To verify (12.4.9), first consider

I+ := n

ε∫0

t2 F(dt b(n)

)= − n

b2(n)

εb(n)∫0

u2dF+(u) � 2n

b2(n)

εb(n)∫0

uF+(u) du.

Let ρ+ > 0 and t1, n1 be such that, owing to condition [UR], we have theinequality F+(u) < 2ρ+H(n)n−1G(u) for u � t1, n � n1. Then, for n � n1,

I+ � 2n

b2(n)t212

+4H(n)ρ+

b2(n)

εb(n)∫t1

uG(u) du. (12.4.10)

By virtue of Theorem 1.1.4(iv), the second term on the right-hand side of (12.4.10)is asymptotically equivalent, as n → ∞ (b(n)→∞), to

4H(n)ρ+(εb(n))2G(εb(n))b2(n)(2 − α)

∼ 4ε2−αρ+

2 − αH(n)G

(b(n)

) ∼ 4ε2−αρ+

2 − α.

The first term on the right-hand side of (12.4.10) tends to zero when b(n) � √n.

This implies that

limε→0

lim supn→∞

I+ = 0.

The quantity I− can be introduced and bounded in a similar way (if ρ± = 0 thenlim supn→∞ I± = 0).

Condition (12.4.3) and, hence, Theorem 12.4.1 are proved.

Observe that if I+ + I− had a positive limit (with b(n) ∼ c√

n, see(12.4.10))then the limiting distribution for Sn would have a normal component.

Page 500: Asymptotic analysis of random walks

12.4 Convergence of random walks to stable processes 469

12.4.2 Convergence to stable processes

Denote by ζ(α,ρ)(·) a homogeneous stable process with independent incrementson [0, 1] such that ζ(α,ρ)(1) follows the distribution Fα,ρ, and put

ζn(t) :=S�nt�b(n)

, t ∈ [0, 1],

where {b(n)} is a suitable scaling sequence. Let D(0, 1) be the space of functionson [0, 1] without discontinuities of the second kind, endowed with the Skorokhodmetric

ρD(f1, f2) = infλ

supt

{|f2(t) − f1(λ(t))| + |λ(t) − t|},where the infimum is taken over all continuous increasing functions λ(·) on [0, 1]such that λ(0) = 0, λ(1) = 1 (see also p. 75).

In this subsection we will obtain conditions ensuring the weak convergence,as n → ∞, of the distributions of the processes ζn(·) in the space D(0, 1) to thelaw of ζ(α,ρ)(·).

Fix an arbitrary Δ ∈ (0, 1) and consider the totality of the r.v.’s ξk+1, . . . ,

ξk+�nΔ�. For these r.v.’s, introduce the averaged distribution

F(k,Δ) :=1

�nΔ k+�nΔ�∑j=k+1

Fj (12.4.11)

and the averaged tails

F(k,Δ),±(t) :=1

�nΔ k+�nΔ�∑j=k+1

Fj,±(t), F(k,Δ)(t) := F(k,Δ),+(t) + F(k,Δ),−(t).

To obtain the convergence of the processes ζn(·) to a homogeneous stable pro-cess, we will need homogeneous uniform regular variation conditions, which arebased on condition [UR].

[HUR] There exist a fixed r.v.f. G(t) = t−αL(t), where α ∈ (1, 2) and L isan s.v.f., and an r.v.f. H(n) = nhLh(n) � n, where h ∈ (0, 1] and Lh is an s.v.f.,such that, for any fixed Δ > 0, we have the limits

limt→∞, n→∞

nF(k,Δ)(t)H(n)G(t)

= 1, limt→∞, n→∞

nF(k,Δ),+(t)H(n)G(t)

= ρ+,

where convergence is uniform in k � n(1 − Δ).

As in (12.4.6), put

b(n) := G(−1)

(1

H(n)

)= nh/αLb(n).

Page 501: Asymptotic analysis of random walks

470 Non-identically distributed jumps with infinite second moments

Theorem 12.4.5. Let Eξj = 0 and the conditions [HUR] and b(n) � √n be

satisfied. Then, as n → ∞,

ζn(·) ⇒ ζ(α,ρ)(·), (12.4.12)

i.e. the distributions of the processes ζn(·) converge weakly in D(0, 1) to the dis-tribution of the process ζα,ρ(·).Remark 12.4.6. As in Remark 12.4.2, we note that for b(n) ∼ c

√n the limiting

process for ζn(·) will have a Wiener component. For b(n) = o(√

n), it will

coincide with the Wiener process (under the scaling√

n ).

The conditions of Theorem 12.4.5 will clearly be satisfied if there exists a se-quence n0 = o(n) as n → ∞ such that the averaged distributions

F(k)(t) :=1n0

k+n0∑j=k+1

Fj(t) (12.4.13)

satisfy condition [HUR] uniformly in both k and n (under the usual assumptionthat Eξj = 0).

Proof of Theorem 12.4.5. To demonstrate the convergence (12.4.12), we need toverify the following two conditions (see e.g. § 15 of [28]).

(1) The finite-dimensional distributions of ζn(·) converge to the respectivedistributions of ζ(α,ρ)(·).

(2) The compactness (tightness) condition is satisfied for the family of distri-butions of the processes ζn(·) in the space D(0, 1).

That the convergence of the finite-dimensional distributions holds is obviousfrom Theorem 12.4.1 and conditions of Theorem 12.4.5, since the latter meanthat the F(k,Δ) satisfy all the conditions of Theorem 12.4.1 with common α

and ρ values. Indeed, condition [HUR] clearly implies that condition [UR]of Theorem 12.4.1 is met. The negligibility condition [S] also follows from con-dition [HUR]: for j ∈ (k, k + �nΔ ) and all large enough n,

P(|ξj | > εb(n)

)� nΔF(k,Δ)

(εb(n)

)� 2ΔH(n)G

(εb(n)

)� 3Δε−α.

Since Δ can be chosen arbitrarily small, whereas the left-hand side of the aboverelation does not depend on Δ, we necessarily have

maxj�n

P(|ξj | > εb(n)

)→ 0 as n → ∞.

For the compactness condition to be satisfied it suffices that, for t1 < t < t2,

lim supn→∞

P(|ζn(t)−ζn(t1)| � v, |ζn(t2)−ζn(t)| � v

)� cv−2γ |t2− t1|1+δ

(12.4.14)

for some c < ∞, γ > 0, δ > 0 (see e.g. Theorem 15.6 of [28]). Now, the

Page 502: Asymptotic analysis of random walks

12.5 Transient phenomena 471

conditions of Theorem 12.4.5 imply that the conditions of Corollary 12.1.5 aremet. This allows us to conclude that, for Δ = t−u > 0, m := �nt −�nu ∼ nΔand S′

m := S�nt� − S�nu�, we have as n → ∞P(|ζn(t) − ζn(u)| � v

)= P

(|S�nt� − S�nu�| � vb(n))

= P(|S′

m| � vb(n))

� cmF(vb(n)

)∼ cΔn

H(n)n

G(vb(n)

)∼ cΔv−αH(n)G

(b(n)

) ∼ cΔv−α. (12.4.15)

Since the events under the probability symbol in (12.4.14) are independent we findthat, owing to (12.4.15), for all large enough n this probability will not exceed

2c2v−2α(t2 − t1)2.

This means that inequality (12.4.14) holds true. The theorem is proved.

Remark 12.4.7. The asymptotic inequality (12.4.15) (and hence the proof ofcompactness in (12.4.14) as well) can also be obtained, for a fixed large v, withthe help of Theorem 12.4.1, using the limiting stable distribution to approxi-mate P

(S′

m � vb(n)).

Remark 12.4.8. It is evident that if we consider, for a fixed T > 0, a se-quence �nT of independent r.v.’s ξ1, . . . , ξ�nT� that satisfy the conditions of The-orem 12.4.5 (uniformly in k � nT − nΔ) then the weak convergence (12.4.12)will also hold for the processes ζn(·) in the space D(0, T ).

12.5 Transient phenomena

12.5.1 Introduction

In boundary-crossing problems for random walks, by transient phenomena oneusually understands the following. Let ζ, ζ1, ζ2, . . . be i.i.d. r.v.’s and let

Zn :=n∑

i=1

ζi, Z := supk�0

Zk.

It is well known that if Eζ = −a < 0 then Z is a proper r.v. and that Z = ∞a.s. if a � 0. We call ‘transient’ the phenomena observed in the behaviour of thedistribution of Z (or of that of the maximum of finitely many cumulative sums)as a ↓ 0. It is natural to expect that the r.v. Z will grow in probability in this case.The problem is to find out how fast this growth is and to determine the respectivelimiting distribution.

To make the formulation of the problem more precise, we introduce a trian-gular array scheme, in which the distribution of ζ depends on some varying pa-rameter that can be identified in our case with a, and then consider the situation

Page 503: Asymptotic analysis of random walks

472 Non-identically distributed jumps with infinite second moments

when a → 0. Sometimes it is more convenient to consider the traditional trian-gular array scheme, i.e. sequences ζ1,n, ζ2,n, . . . for which the distribution of ζk,n

depends on the series number n. In this case, a = a(n) depends on n in such away that a(n) → 0 as n → ∞. However, in problems dealing with the distribu-tion of Z, introducing a triangular array scheme with parameter n may prove tobe artificial, as this parameter could be absent in the formulation of the problem.Moreover, since everywhere in what follows the scaling factors will all be func-tions of a (rather than of n), the first approach to introducing a triangular arrayscheme (i.e. using parameter a) often appears to be preferable.

We will assume in what follows that we are considering a triangular arrayscheme in which the distribution of ζ = ζ(a) depends on a but for brevity wewill omit the superscript (a) indicating this dependence.

Further, it will be convenient for us to deal with centred r.v.’s ξi = ζi + a, sothat Eξi = 0. Then the desired functional Z of the sequence of partial sums Zk

will have the form

Z = S(a) = supk�n

(Sk − ak), Sk =n∑

i=1

ξi.

We will also consider the maxima of finitely many partial sums:

Sn(a) = maxk�n

(Sk − ak).

Studying transient phenomena was stimulated, to a great extent, by applica-tions. In queueing problems, transient phenomena arise when a queueing systemworks under ‘heavy traffic’ conditions, i.e. when the intensity of the arrival flowapproaches the maximum possible service rate. In this case, for example, for thesimplest single channel queueing system, in which the queue length (or the wait-ing time) at time n can be described by Sn(a), we will have a situation where a

is small.It is known that if Eξ2 =: da → d < ∞ as a → 0 then, for n = Ta−2, there

exists a limiting distribution for aSn(a): for any z � 0,

lima→0

P(aSn(a) � z) = P

(maxt�T

(√

dw(t) − t) � z), (12.5.1)

where w(·) is the standard Wiener process [165, 231]. In particular, when T = ∞the right-hand side of (12.5.1) becomes e−2z/d. For more detail, see e.g. § 25of [42].

In the case Eξ2 = ∞, all we know is a number of results concerning the limit-ing distribution for Z, which have been obtained for some special types of distri-bution of ζ; see [93, 78, 94]. These results do not explain the nature of the scalingfunctions and limiting distributions in the general case. The analytical methodsemployed in the above-mentioned papers are related to factorization identitiesand have certain limitations. This makes the problems under consideration, intheir general formulation, hard to address using such methods.

Page 504: Asymptotic analysis of random walks

12.5 Transient phenomena 473

12.5.2 The main limit theorems

In this subsection, in accordance with the general aim of the chapter we will dealwith a more general problem formulation, where the ξi are non-identically dis-tributed independent r.v.’s in the triangular array scheme and

maxi

Eξ2i = ∞.

Under these conditions, finding the limiting distributions is possible only whenthe distributions of the partial sums Sn converge (after suitable scaling) to a stablelaw. More precisely, we will use Theorems 12.4.1 and 12.4.5. This means that weneed to assume that the conditions of these theorems are satisfied. The main oneis condition [HUR] of § 12.4. To simplify the formulations and proofs we willrestrict ourselves to considering the most important special case, when H(n) = n

(in which ‘degeneration’ of the tails is impossible). Clarifications concerningwhat the results and their proofs would look like in a more general setup are givenafter the proof of Theorem 12.5.2 below.

In the case H(n) = n, condition [HUR] means that there exists an r.v.f.G(t) = t−αL(t), where α ∈ (1, 2) and L is an s.v.f., such that for any fixed Δ > 0one has

limt→∞, n→∞

F(k,Δ)(t)G(t)

= 1, limt→∞, n→∞

F(k,Δ),+(t)G(t)

= ρ+ (12.5.2)

and the convergence is uniform in k (F(k,Δ) was defined in (12.4.11)).We will take a to be the parameter of the triangular array scheme. This means

that the increasing n values of will be considered as functions n = n(a) → ∞as a → 0. These functions will be specified below, under conditions (12.5.2).

Set D(t) := tG(t),

d(v) := D(−1)(v) = inf{u : D(u) < v

}, n1 := n1(a) =

d(a)a

.

(12.5.3)

Theorem 12.5.1. Assume that Eξi = 0 and that conditions (12.5.2) are metwith n replaced in them by n1 = d(a)/a. Further, let a → 0 and n = n(a) besuch that there exists the limit

lima→0

n(a)n1(a)

= T < ∞.

Then we have the convergence in distribution

Sn(a)d(a)

⇒ Z(α,ρ)(T ) := supu�T

(ζ(α,ρ)(u) − u

). (12.5.4)

One can show that the distribution of Z(α,ρ)(T ) is continuous. This, togetherwith Theorem 12.5.1, means that, for all v,

P(

Sn(a)d(a)

� v

)→ P

(Z(α,ρ)(T ) � v

)as a → 0. (12.5.5)

Page 505: Asymptotic analysis of random walks

474 Non-identically distributed jumps with infinite second moments

Proof of Theorem 12.5.1. As before, let b(n) = G−1(1/n). First note that

d(a) = b(n1). (12.5.6)

This follows from the equalities (we assume for simplicity that the function G(t)is continuous)

a ≡ D(d(a)

)= d(a)G

(d(a)

), G

(d(a)

)=

a

d(a)=

1n1

,

so that d(a) = G−1(1/n1) ≡ b(n1). Hence

Sn(a)d(a)

= maxk�n

(Sk

b(n1)− ak

b(n1)

)= max

k�n

(ζn1

(k

n1

)− kθ

n1

), (12.5.7)

where, by (12.5.6),

θ :=n1a

b(n1)=

d(a)b(n1)

= 1.

Further, the functional

fT (ζ) := supu�T

(ζ(u) − u

)is continuous in the Skorokhod metric and has the property that

fT (ζn1) = maxk�n

(ζn1(k/n1) − k/n1

)(assuming for simplicity that n = Tn1) and so, by virtue of (12.5.7),

Sn(a)d(a)

= fT (ζn1).

It remains to make use of Theorem 12.4.5 and Remark 12.4.8. The theorem isproved.

Now consider the case n � n1 and, in particular, the situation when n = ∞.Put

Z(α,ρ) := Z(α,ρ)(∞).

Theorem 12.5.2. Let the conditions of Theorem (12.5.1) be met, and let T = ∞.Then, as a → 0,

S(a)d(a)

⇒ Z(α,ρ). (12.5.8)

Proof. As before, let

η(x, a) = inf{k : Sk − ak � x}.Put, for brevity,

ζ(n, a) :=Sn(a)d(a)

.

Page 506: Asymptotic analysis of random walks

12.5 Transient phenomena 475

Fix a large number T1 and set nT1 = n1T1. Then, for any fixed v > 0,

P(ζ(∞, a) � v

)= P

(ζ(nT1 , a) � v

)+ P

(∞ > η(vd(a), a

)> nT1

), (12.5.9)

where, by virtue of Theorem 12.5.1, the first term on the right-hand side convergesto P

(Z(α,ρ)(T1) � v

)as a → 0.

Further, it is not difficult to see that the conditions of the theorem imply thatalso satisfied are the conditions of Corollary 12.3.2 (i.e. conditions [<, <]U, [H]and [N]; that the first two are met is obvious whereas, regarding condition [N],see (12.4.7) and the comments following condition [UR] of § 12.4.1). Therefore,for the second term on the right-hand side of (12.5.9), we obtain from Corol-lary 12.3.2 and (12.3.8) with x = vd(a) that

P(∞ > η(vd(a), a) > nT1

)= P

(∞ > η(x, a) >

T1x

av

)� cn1V(1)(x)

(T1/v

)1−α

� c1n1G(x)(T1/v

)1−α, (12.5.10)

where V(1) corresponds to the averaged distribution

F(1) =1n1

n1∑j=1

Fj

(one could also consider the averaged distributions F(T1) = n−1T1

∑nT1j=1 Fj). Now

observe that, for the last expression in (12.5.10),

n1G(x) =d(a)a

G(vd(a)

)=

D(vd(a)

)va

∼ v−α

aD(d(a)

) ∼ v−α,

and hence, uniformly in a,

P(η(vd(a), a) > nT1

)� c1

vT 1−α

1 ,

where the right-hand side can be made arbitrarily small by choosing a suitable T1.It follows from the above that

lim supa→0

P(ζ(∞, a) � v

)� P

(Z(α,ρ)(T1) � v

)+ R(v, T1)

� P(Z(α,ρ) � v

)+ R(v, T1),

lim infa→0

P(ζ(∞, a) � v

)� P

(Z(α,ρ)(T1) � v

)= P

(Z(α,ρ) � v

)− r(v, T1),

(12.5.11)

where R(v, T1) � c1v−1T 1−α

1 , and that

r(v, T1) := P(Z(α,ρ) � v

)− P(Z(α,ρ)(T1) � v

) ↓ 0 as T1 → ∞.

Therefore, the right-hand sides in (12.5.11) differ from P(Z(α,ρ) � v

)only by

Page 507: Asymptotic analysis of random walks

476 Non-identically distributed jumps with infinite second moments

summands that can be made arbitrarily small by choosing a suitable T1. But theleft-hand sides in (12.5.11) do not depend on T1. Therefore there exists the limit

lima→∞P

(ζ(∞, a) � v

)= P

(Z(α,ρ) � v

).

The theorem is proved.

Now we will turn to conditions [HUR] in their more general form (i.e. in the caseH(n) ≡ n). For a function H(n) = nhLh(n), where Lh(·) is an s.v.f., the considerationsbecome much more tedious, beginning with the construction of the scaling functions d(a)and n1(a). So, for simplicity we will put Lh(n) ≡ Lh = const (in this case, b(n) =

G(−1)(1/H(n)) = nh/αLb(n) � √n ).

We based our choice of the scaling functions d(a) and n1(a) in (12.5.3) on the relationan = b(n), which states that the value of the drift an and that of the standard devia-tion b(n) at time n are ‘comparable’ with each other. It followed from this relation that

nG(an) = 1, anG(an) = a, an = D(−1)(a) = d(a), n1 = n1(a) =d(a)

a.

Now such ‘comparability’ relations lead to the equalities

H(n)G(an) = 1, H(an)G(an) = ah.

Therefore, putting

D(v) := H(v)G(v), d(u) := D(−1)(u), (12.5.12)

we obtain

an = d(ah), n1 = n1(a) =d(ah)

a. (12.5.13)

Repeating the argument from the proof of Theorem 12.5.1 we find that

Sn(a)

d(a)⇒ Z(α,ρ)(T ), where T = lim

a→0

n

n1(a)

and the functions d(·) and n1(a) are defined in (12.5.12), (12.5.13). This follows from thefact that the main equality (12.5.6) in the proof of the theorem now becomes

d(ah) = b(n1),

which is a direct consequence of the initial ‘comparability’ relation an = b(n) with n =n1(a) = d(ah)/a inserted into it.

The rest of the argument remains unchanged.

12.5.3 On the explicit form of the limiting distributions

It is scarcely possible to find closed-form expressions in the general case for theproper limiting distributions in Theorems 12.5.1 and 12.5.2 (i.e. the distributionsof Z(α,ρ)(T ); the probabilities of large deviations of Sn(a)/d(a) were studiedin Theorem 12.3.1 and Corollary 12.3.2). In some special cases, however, onecan do this for T = ∞. One of these is the case when F+(t) = o(F−(t)) or,equivalently, when ρ = −1. Then one has G(t) ∼ F−(t), and so it is natural toreplace the parameter α by the parameter β that describes the behaviour of theleft tail F−(t) as t → ∞.

Page 508: Asymptotic analysis of random walks

12.5 Transient phenomena 477

Theorem 12.5.3. For β ∈ (1, 2),

P(Z(β,−1) � v

)= e−bv, where b :=

(β − 1

Γ(2 − β)

)1/(β−1)

.

If we use the convention Γ(1 − β) = Γ(2 − β)/(1 − β) for β ∈ (1, 2) then b

can also be written as b =(−Γ(1− β)

)−1(β−1). See also Theorem 12.5.6 below.

Proof. It is clear that when finding the limiting distribution for Z(β,−1) one canassume without loss of generality that the ξi are i.i.d. r.v.’s and that F−(t) is anr.v.f. of index −β.

Further, in the case when F+(t) = o(F−(t)) as t → ∞, one has ρ = −1and hence the distributions of the process ζ(β,−1)(·) and the r.v. Z(β,−1) will notchange if we replace the tail F+(t) by any other tail F ∗

+(t) with the property thatF ∗

+(t) = o(F−(t)) as t → ∞; it is only required that condition [HUR] is stillsatisfied. This will be the case if we put F ∗

+(t) := qe−γt for t > 0, γ > 0,

q ∈ (0, 1). In other words, we can assume without loss of generality (from theviewpoint of finding the distribution of Z(β,−1)) that

F+(t) ≡ P(ξj � t) = qe−γt,

where q < 1, g = q/a+ and a+ = E(ξ+) =∫∞0

uF(du), so that we still haveEξj = 0.

Now recall that in the case of exponential right tails the distribution of S(a) isknown explicitly. Namely, in this case (see e.g. § 20 of [40] or § 5, Chapter 11,of [49])

EeλS(a) = p +(1 − p)λ1

λ1 − λ, p = P(S(a) = 0), (12.5.14)

so that the distribution of S(a) is also exponential:

P(S(a) � x

)= (1 − p)e−λ1x, x > 0. (12.5.15)

Here λ1 > 0 is the solution to the equation ψ(λ) = 1, where

ψ(λ) := Eeλ(ξ−a) = e−aλ

∞∫−∞

eλt F(dt),

and it is clear that p → 0 as a → 0. To find λ1, an asymptotic representationfor ψ(λ) as λ → 0 is needed.

Lemma 12.5.4. As λ → 0,

ψ(λ) = 1 − λa + b1F−(1/λ)(1 + o(1)), b1 := b1−β =Γ(2 − β)

β − 1,

(12.5.16)and

λ−11 ∼ D(−1)(a/b1) ∼ bd(a) as a → 0. (12.5.17)

Page 509: Asymptotic analysis of random walks

478 Non-identically distributed jumps with infinite second moments

Proof. We have eaλψ(λ) = I1 + I2, where

I1 :=

0∫−∞

eλt F(dt) = F−(0) + λ

0∫−∞

tF(dt) +

0∫−∞

(eλt − 1 − λt)F(dt).

Here∫ 0

−∞ tF(dt) = −E(ξ+) = −a+,

0∫−∞

(eλt − 1 − λt)F(dt) = λ

∞∫0

(1 − e−λt)F−(t) dt = λ2

∞∫0

e−λtF I−(t) dt,

where, by virtue of Theorem 1.1.4(iv), as t → ∞,

F I−(t) =

∞∫t

F−(u) du ∼ tF−(t)β − 1

.

Therefore, as λ → 0 (λt = u), it follows from Theorem 1.1.5(i) that

λ2

∞∫0

e−λtF I−(t) dt ∼ λ2 1

λΓ(1 − (β − 1))F I

−(1/λ) ∼ Γ(2 − β)β − 1

F−(1/λ).

Thus

I1 = F−(0) − λa+ +Γ(2 − β)

β − 1F−(1/λ)(1 + o(1)).

Also, it is obvious that since E(ξ+)2 < ∞ we have, as λ → 0,

I2 :=

∞∫0

eλt F(dt) = F+(0) + λa+ + O(λ2), e−aλ = 1 − aλ + O(λ2).

This proves (12.5.16).Further, the desired value λ1 solves the equation

λa = b1F−(1/λ)(1 + o(1)),

or, equivalently,

λa = b1G(1/λ)(1 + o(1)) or D(1/λ) ∼ a/b1,

since in our case F−(t) ∼ G(t) as t → ∞. From this we find that

λ−11 ∼ d(a/b1) ∼ b

−1/(β−1)1 d(a).

The lemma is proved.

Returning to (12.5.15) and putting x = vd(a), where v > 0 is fixed, we obtainthat

P(

S(a)d(a)

� v

)∼ e−bv, P

(Z(β,−1) � v

)= e−bv.

The theorem is proved.

Page 510: Asymptotic analysis of random walks

12.5 Transient phenomena 479

The second case, for which a closed-form expression for the distribution ofZ(α,ρ) can be found, is the case when F−(t) = o(F+(t)), i.e. when ρ = 1. Thistime, however, one can only find explicitly the ch.f. of Z(α,1).

Theorem 12.5.5. For α ∈ (1, 2) we have

EeiλZ(α,1)=(

1 − iφ|λ|α−1

α − 1A(α − 1, φ)

)−1

, φ = signλ,

where

A(γ, φ) =

∞∫0

eiφvv−γdv = Γ(1 − γ)eiφ(1−γ)π/2

(see (1.5.20), (1.5.24) on p. 64).

The following relationship between the ch.f.’s of Z(α,1) and ζ(α−1,1) is easilyestablished:

EeiλZ(α,1)=(

1 − ln f(α−1,1)(λ)α − 1

)−1

.

Proof of Theorem 12.5.5. Using essentially the same argument as in the proof ofTheorem 12.5.3, one can assume without loss of generality, when one is con-cerned with finding the distribution of Z(α,1), that the ξj are i.i.d. and that

F−(t) ≡ P(ξj < −t) = qe−γt for t � −1,

where γ = −qa−1 > 0 and a− := Emin{0, ξ} are fixed.Then we still have Eξj = 0 and, for a ∈ (0, 1), the first negative sum χ− in the

random walk {Sk − ak; k � 1} will be exponentially distributed:

P(χ− < −t) = e−γt, t � 0, Eeiλχ− =γ

γ + iλ.

Using factorization identities (see e.g. relation (1) from § 18 of [40] or § 5, Chap-ter 11 of [49]), we obtain

EeiλS(a) =p

1 − f(a)(λ)(1 − Eeiλχ−

)=

piλ

(1 − f(a)(λ))(γ + iλ),

where

f(a)(λ) := Eeiλ(ξ−a), p = P(S(a) = 0

).

Clearly, as λ → 0, we have

1 − f(a)(λ) = 1 − e−iλaf(λ) = 1 − (1 − iλa + O(λ2))(

(f(λ) − 1) + 1)

= iλa + O(λ2) − (f(λ) − 1)(1 + o(1)). (12.5.18)

Page 511: Asymptotic analysis of random walks

480 Non-identically distributed jumps with infinite second moments

We have already found the asymptotic behaviour of f(λ) − 1 as λ → 0. Itfollows from (1.5.34) and (12.5.18) that, as m = 1/|λ| → ∞,

1 − f(a)(λ) = iλa +F+(m)α − 1

A(α − 1, φ)(1 + o(1))

= iλa

[1 − iF+(m)

λa(α − 1)A(α − 1, φ)(1 + o(1))

].

Therefore

EeiλS(a) =p

a

[1 − iF+(m)

λa(α − 1)A(α − 1, φ)(1 + o(1))

]−1

(γ + iλ)−1.

Setting λ := μ/d(a) for a fixed μ and a → 0, we find that

EeiμS(a)/d(a)

=p

[1 − iφD

(d(a)/|μ|)

a(α − 1)A(α − 1, φ)(1 + o(1))

]−1(1 +

d(a)γ

)−1

,

where a−1D(d(a)/|μ|) ∼ |μ|α−1 and φ = signμ. Since S(a)/d(a) ⇒ Z(α,1),

this implies that necessarily p/aγ → 1 and

EeiμS(a)/d(a) →(

1 − iφ|μ|α−1

α − 1A(α − 1, φ)

)−1

.

The theorem is proved.

Observe that the exponential character of the distribution of Z(β,−1) can beestablished (without computing the parameter b) in a much easier way than inTheorem 12.5.3, using the fact that there are no positive jumps in the processζ(β,ρ)(u)−u when ρ = −1. The desired exponentiality will follow from the nextassertion.

Theorem 12.5.6. Let {ζ(t); t � 0} be an arbitrary homogenous process withindependent increments such that Z = supt�0 ζ(t) < ∞ a.s. For the r.v. Z tohave the exponential distribution

P(Z > x) = e−bx, x � 0, b > 0, (12.5.19)

it is necessary and sufficient that the trajectories ζ(·) are continuous from above(do not have positive jumps).

Proof. Put η(x) := inf{t � 0 : ζ(t) > x

}if Z > x and η(x) := ∞ if Z � x.

The r.v. χ(x) := ζ(η(x)

)−x is defined on the event {Z > x}. For the definitenesslet the trajectories of the process ζ(·) be right-continuous. Then χ(x) � 0 and,moreover, χ(x) ≡ 0 iff the process ζ(t) is continuous from above.

For an arbitrary y ∈ (0, x), we have

P (x) := P(Z > x) = E[P(Z > x|Fη(y)

); η(y) < ∞], (12.5.20)

Page 512: Asymptotic analysis of random walks

12.5 Transient phenomena 481

where Fη(y) is the σ-algebra generated by the trajectory of the process ζ(·) on thetime interval [0, η(y)]. Owing to the strong Markov property of the process ζ(·),on the event {η(y) < ∞} one has

P(Z > x|Fη(y)

)= P

(Z ′ > x − y − χ(y)|χ(y)

), (12.5.21)

where Z ′ := supt�0 ζ′(t) and ζ ′(·) is an independent copy of the process ζ(·).Therefore, if the process ζ(·) is continuous from above then

P(Z > x|Fη(y)

)= P(Z ′ > x − y) = P (x − y),

which, together with (12.5.20), implies that

P (x) = P (x − y)P(η(y) < ∞) = P (y)P (x − y).

Solutions to this equation in the class of decreasing functions have the form

P (x) = e−bx, b ≥ 0.

(see e.g. § 6, Chapter XVII of [121]). Since P(Z > x) → 0 as x → ∞, wehave b > 0 and hence (12.5.19) is true.

Now let (12.5.19) hold true. Assume that the process ζ(·) has positive jumpsand so P(χ(y) > 0) > 0. Then, by virtue of (12.5.19)–(12.5.21),

P(Z′ > x − y − χ(y)) > P (x − y), P (x) > P (y)P (x − y).

This contradicts (12.5.19). The theorem is proved.

Page 513: Asymptotic analysis of random walks

13

Random walks with non-identically distributedjumps in the triangular array scheme in the case of

finite variances

In the present chapter we will extend the main results of Chapter 4 to the caseof random walks with independent non-identically distributed jumps ξi in thetriangular array scheme when Eξ2

i < ∞.

13.1 Upper and lower bounds for the distributions of Sn and Sn

Let ξ1, ξ2, . . . be independent r.v.’s, following respective distributions F1,F2, . . .

The distributions Fj can also depend on some parameter. As in the classicaltriangular array scheme this could be the number of summands n, so that theFj = F(n)

j depend both on j and n.All the remarks that we made in Chapter 12 concerning the use of other param-

eters in the scheme apply to the present situation as well.As before, let

Sn =n∑

i=1

ξi, Sn = maxk�n

Sk.

In what follows, we will go through all the main stages of bounding and evaluatingthe probabilities P(Sn � x) and P(Sn � x) presented in Chapter 4, but nowunder new, more general, conditions.

13.1.1 Upper bounds for the distributions of Sn. The first version of the

conditions

First of all, an extension of the basic inequality (2.1.8) to the case of non-identi-cally distributed jumps has the same form as in § 12.1 (see Lemma 12.1.1 andinequality (12.1.3) on p. 441).

Further, as in § 12.1 we will introduce a regular variation condition, but thistime for the right averaged tails

F+(t) =1n

n∑j=1

Fj,+(t), where Fj,+(t) = Fj([t,∞)).

482

Page 514: Asymptotic analysis of random walks

13.1 Upper and lower bounds for the distributions of Sn and Sn 483

The conditions [U] and [ · , <]U that we are about to introduce are similar toconditions [U] and [<, <]U from § 12.1.

In the present chapter, we say that condition [ · , <]U is satisfied if the averagedtail F+(t) = F

(n)+ (t) admits a majorant V (t) = V (n)(t) possessing the property

[U] = [U1] ∩ [U2] for a fixed α > 2, where conditions [U1] and [U2] have thefollowing form.

[U1] For any given δ > 0 there exists a tδ such that, for all n, t � tδ andv ∈ [1/2, 2], ∣∣∣∣V (vt)

V (t)− v−α

∣∣∣∣ � δ. (13.1.1)

[U2] For any given δ > 0 there exists a tδ such that, for all n, t and v forwhich t � tδ , tv � tδ , one has

v−α+δ � V (vt)V (t)

� v−α−δ. (13.1.2)

One can assume, without loss of generality, that the tδ’s in [U1] and [U2] areone and the same function.

An equivalent form of condition [U] was given in (12.1.7) (see p. 442).

Remark 13.1.1. While studying the probabilities of large deviations of Sn andSn in the case n → ∞, condition [U] can be somewhat relaxed and replaced bythe following condition.

[U∞] For any given δ > 0 there exist nδ and tδ such that the uniform in-equalities (13.1.1), (13.1.2) hold for all n � nδ , t � tδ and tv � tδ .

A simple sufficient condition for the existence of regularly varying right majo-rants of index α for the averaged distributions is the boundedness of the averagedone-sided moments of order α:

1n

n∑j=1

E(ξαj ; ξj � 0) < cα < ∞.

In this case, clearly, for t > 0,

F+(t) � V (t) ≡ cαt−α,

and so condition [U] will be met.Along with condition [U] on the averaged majorant one could consider, as

in § 12.1, an alternative version of this condition, in which condition [ · , <] andthe relations (13.1.1), (13.1.2) hold uniformly for each majorant V1, . . . , Vn, withrespective exponents α1, . . . , αn:

2 < α∗ � mini�n

αi � maxi�n

αi � α∗, (13.1.3)

Page 515: Asymptotic analysis of random walks

484 Non-identically distributed jumps with finite variances

where α∗ and α∗ do not depend on n. If α∗ = α∗ = α then the above ‘individual’version of condition [ · , <]U will imply the ‘averaged’ version (13.1.1), (13.1.2)(see § 12.1). If the αj are different then it is no longer clear whether the averagedcondition [ · , <]U will be met. In this connection one can consider, as in § 12.1,two versions of condition [ · , <]: (1) the averaged version and (2) the individualone.

In contrast with § 12.1, in the formulation of the main theorem of this sectionwe will not use condition [N] preventing too rapid ‘thinning’ of the distributiontails (this condition will be introduced later) but, rather, will use an alternativeapproach, introducing the quantity

J(n, δ) := tα+δδ nV (tδ), (13.1.4)

where tδ is from condition [U]. The quantity J(n, δ) will be part of the boundsto be derived.

Now we can state the main assertion of the present section. Put

di := Eξ2i , D = Dn :=

n∑i=1

di, Bj = {ξj < y}, B =n⋂

j=1

Bj

and let y = x/r, r � 1,

ρ = ρ(n, δ) :=J(n, δ)

Dn, σ(n) =

√(α − 2)D lnD, x = sσ(n),

(13.1.5)where δ > 0 will be chosen later on. Without loss of generality, we will assumethat D � 1 and, as was the case in § 4.1, that x �

√D.

In the individual version of condition [ · , <]U, the parameter α in (13.1.4),(13.1.5) should be replaced by α∗, and one should put V (t) := n−1

∑nj=1 Vj(t).

Theorem 13.1.2. Let Eξj = 0 and let the averaged distribution F satisfy condi-tion [ · , <]U. Then the following assertions hold true.

(i) For any fixed h > 1, for s � 1 and all small enough nV (x), one has

P ≡ P(Sn � x; B) � er

(nV (y)

r

)r−θ

, (13.1.6)

where

θ :=hr2

4s2

(1 + χ + b

ln s

lnD

), χ := − 2

α − 2ln ρ

lnD, b :=

α − 2,

and the value δ, which determines the quantity ρ = ρ(n, δ), depends on thechosen h > 1 and will be specified in the proof.

If D → ∞, ln ρ = o(lnD) then one can put

θ :=hr2

4s2

(1 + b

ln s

lnD

). (13.1.7)

Page 516: Asymptotic analysis of random walks

13.1 Upper and lower bounds for the distributions of Sn and Sn 485

(ii) Let D = Dn → ∞ as n → ∞. Then, for any fixed h > 1, τ > 0, for

x = sσ(n), s2 < (h − τ)/2

and all large enough n, one has

P � e−x2/2Dh. (13.1.8)

(iii) If, instead of the averaged condition [ · , <]U we require that the ‘individ-ual’ version of this condition is met, in which each distribution Fj separatelysatisfies condition [ · , <]U uniformly in j and n and the relations (13.1.3)hold true, then the assertions of parts (i), (ii) of the theorem remain true forσ(n) =

√(α∗ − 2)D lnD. In this case, one should replace α in the first as-

sertion by α∗ = maxi�n αi. In the second assertion the inequality (13.1.8)will hold for

x = sσ(n), s � (h − τ)(α∗ − 2)2(α∗ − 2)

, α∗ = mini�n

αi.

In all the assertions in which it is assumed that n → ∞, condition [U] can bereplaced by [U∞].

We see that, in the case where ln ρ = o(lnD), the first assertion of Theo-rem 13.1.2 essentially repeats the second assertion of Theorem 4.1.2, the num-ber n in the representation for θ being replaced by D.

A similar remark applies to the second assertion.

Now we will turn our attention to the quantity χ (or, equivalently, to the value−ln ρ/lnD), which is present in (13.1.6). Owing to Chebyshev’s inequality, wehave

nV (t) � Dnt−2, J(n, δ) � tα+δ−2δ Dn, ρ =

J(n, δ)Dn

� tα−2+δδ ≡ c1.

(13.1.9)It is also clear that the inequality (13.1.6) remains true if in it we replace χ bymax{0, χ} (this can only make the inequality cruder), so that one can assumewithout loss of generality that χ � 0 (ρ � 1).

The most ‘dangerous’ values for the quality of the bounds are the small valuesof ρ.

Lower bounds for ρ can be obtained provided that the following condition issatisfied.

[N] For some δ < min{1, α − 2} and γ � 0,

J(n, δ) = tα+δδ nV (tδ) � cD−γ

n . (13.1.10)

We note straight away that if condition [N] is met for some δ > 0 then it willalso be satisfied for any other fixed δ1 < δ, e.g. for δ1 = δ/2. Indeed, since therelation V (tδ1) � V (tδ)(tδ1/tδ)−α−δ holds owing to [U2], we have

J(n, δ1) = tα+δ1δ1

nV (tδ1) � tα+δδ nV (tδ)tδ1−δ

δ1= J(n, δ)t−δ1

δ1� c1D

−γn ,

Page 517: Asymptotic analysis of random walks

486 Non-identically distributed jumps with finite variances

where c1 = ct−δ1δ1

. If tδ does not increase too rapidly as δ → 0 (for instance, iftδ = eo(1/δ)) then the constant c1 � ceo(1) will be essentially the same as c.

If condition [N] is satisfied then

J(n, δ) � cD−γn , ln ρ � ln c − (1 + γ) ln Dn.

Hence

0 � − ln ρ

lnDn� 1 + γ − ln c

lnDn.

This means that under condition [N] the quantity χ = χ(n, δ) � 0 admits, forany fixed δ > 0, an upper bound which is independent of n, and therefore θ → 0as s → ∞.

Let condition [ · , <]U be satisfied (in the averaged or the ‘individual’ version).Then Theorem 13.1.2 and the above imply the following results.

Corollary 13.1.3. If x = sσ(n) and condition [N] is met then, under the condi-tions of Theorem 13.1.2 (i), (iii), for any ε > 0 and all large enough s,

P <(nV (x)

)r−ε. (13.1.11)

Corollary 13.1.4. If x = sσ(n) and condition [N] is met then, under the condi-tions of Theorem 13.1.2 (i), (iii), for any ε > 0 and all large enough s,

P(Sn � x) � nV (x)(1 + ε). (13.1.12)

Assume that Dn → ∞ as n → ∞. Then, for any fixed h > 1 and τ > 0,

for x = sσ(n), s2 < (h − τ)/2 and all large enough n, we have

P(Sn � x) � e−x2/2Dh.

Corollary 13.1.4 is proved in the same way as Corollary 4.1.4.

Remark 13.1.5. As we have already noted, if one studies the probabilities of largedeviations of Sn and Sn as n → ∞ then one could use a relaxed version [U∞] ofcondition [U] (see Remark 13.1.1). In this case, (13.1.11) and (13.1.12) will holdtrue for large enough n.

In paper [127] upper bounds for the distribution of Sn were obtained in terms of‘truncated’ moments, without making any assumptions on the existence of regu-larly varying majorants (such majorants always exist in the case of finite momentsbut their decay rate will not be the true one). This makes the bounds from [127]more general in a sense but also substantially more cumbersome. One cannotderive from them bounds for Sn of the form (13.1.11), (13.1.12).

Some further bounds for the distribution of Sn were obtained in papers [240,226, 197, 206].

Now we consider an example illustrating the assertions of Theorem 13.1.2 andCorollaries 13.1.3 and 13.1.4 (it is an analogue of Example 12.1.11 on p. 453).

Page 518: Asymptotic analysis of random walks

13.1 Upper and lower bounds for the distributions of Sn and Sn 487

Example 13.1.6. Let ξi = ciζi, where the ζi are i.i.d. r.v.’s, satisfying the condi-tions

Eζi = 0, Eζ2i = 1, P(ζ � t) = V(ζ)(t) := Lt−α for t � t0 > 0.

Then Vi(t) = V(ζ)(t/ci) for t � cit0.If c∗ = infi�1 ci > 0 and c∗ = supi�1 ci < ∞ then

Dn =n∑

i=1

c2i ∼ c′(n)n, J(n, δ) ∼ c′′(n)n, ρ ∼

c′′(n)

c′(n)

and ln ρ = o(lnD) as n → ∞; here c′(n) � c2∗ > 0 and c′′(n) � c < ∞ are

bounded sequences, so that in (13.1.6) one can use the representation (13.1.7).Now let ci ↓ 0 as i → ∞. In our case, in condition [U] and in the formulations

that follow it, one can put αi = α∗ = α∗ = α, δ = 0 and tδ = t0. Assume forsimplicity that ci = i−γ , γ > 0. Then, for γ < 1/α,

Dn ∼ 11 − 2γ

n1−2γ ,

J(n, δ) = tα0

n∑i=1

V(ζ)(t0iγ) = L

n∑i=1

i−αγ ∼ Ln1−αγ

1 − αγ.

From this it follows that ρ ∼ cn−γ(α−2) as n → ∞ and that

− ln ρ

lnD∼ γ(α − 2)

1 − 2γ, χ ∼ 2γ

1 − 2γ.

Thus, in this case we have the relations (13.1.6), (13.1.11) and (13.1.12), in whichone can put

θ =hr2

4s2

(1 +

1 − 2γ+ b

ln s

lnD

).

If ci ↑ ∞ as i → ∞ then condition [U] is not satisfied. However, as inExample 12.1.11, the problem can then be reduced, in a sense, to the previousone. Introduce new independent r.v.’s

ξ∗i :=ξn−i+1

cn=

cn−i+1

cnζn−i+1, i = 1, . . . , n,

so that again we have a representation of the form ξ∗i = c∗i ζi with decreasingcoefficients c∗i = cn−i+1/cn but now in a ‘triangular array scheme’ since the c∗idepend on n. We have

S∗n =

n∑i=1

ξ∗i =Sn

cn, P(Sn � x) = P(S∗

n � x∗) for x∗ =x

cn.

Proof of Theorem 13.1.2. The proof follows the scheme used in that of Theo-rem 4.1.2. We will go into detail only where the fact that the ξi are non-identicallydistributed requires us to amend the argument.

Page 519: Asymptotic analysis of random walks

488 Non-identically distributed jumps with finite variances

(i) As before, the proof of such an assertion will be based on the main in-equality (12.1.3) (see p. 441), which reduces the problem to bounding the sums(see (12.1.16))

R(μ, y) =1n

n∑j=1

Rj(μ, y), Ri(μ, y) =

y∫−∞

eμt Fi(dt),

so that

R(μ, y) =

y∫−∞

eμt F(dt).

Exactly these integrals were estimated in the proofs of Theorems 3.1.1 and 4.1.2.We again put M(v) = v/μ. Then, in complete analogy with (4.1.19)–(4.1.21)(see p. 186), we obtain

R(μ, y) = I1 + I2,

where

I1 =

M(ε)∫−∞

eμt F(dt) � 1 +μ2hD

2n, h = eε.

Next we will bound

I2 = −y∫

M(ε)

eμtdF+(t) � V (M(ε))eε + μ

y∫M(ε)

V (t)eμtdt.

First consider for M(ε) < M := M(2α) < y the integral

I2,1 := μ

M∫M(ε)

V (t)eμtdt.

Owing to condition [U] the asymptotic equivalence relation (4.1.24) will be uni-form in n, and therefore we have an analogue of the inequality (4.1.25):

I2,1 � cV (1/μ).

The integral

I2,2 := μ

y∫M

V (t)eμtdt

is bounded using the same considerations as those employed in § 2.2, see (2.2.11)–(2.2.15) (p. 88), and in § 12.1 when estimating I3. We have

I2,2 � V (M)e2α + μ

y∫M

V (t)eμtdt ≡ V (M)e2α + I03 ,

Page 520: Asymptotic analysis of random walks

13.1 Upper and lower bounds for the distributions of Sn and Sn 489

where I03 = μ

∫ y

MV (t)eμtdt. Next we derive that (cf. (12.1.20), (12.1.21))

I03 � eμyV (y)

(1 + ε(λ)

),

where ε(λ) → 0 as λ = μy → ∞.As a result, we obtain

n(R(μ, y) − 1

)� μ2hD

2+ cnV

(1μ

)+ nV (y)eμy

(1 + ε(λ)

)(cf. (4.1.27)). Since for k � n one has

k∑i=1

di � D,k∑

i=1

Vi(t) � nV (t),

inequalities of the same form (with the same right-hand sides) will also hold forthe quantities

k∑i=1

(Ri(μ, y) − 1

), max

k�n

k∑i=1

(Ri(μ, y) − 1

).

Therefore, as in (12.1.24), we obtain

maxk�n

k∏i=1

Ri(μ, y) � exp{

maxk�n

k∑i=1

(Ri(μ, y) − 1

)}� exp

{μ2hD

2+ cnV

(1μ

)+ nV (y)eμy

(1 + ε(λ)

)}.

(13.1.13)

Next we put, as in § 4.1,

μ :=1y

lnT, T :=r

nV (y).

Then λ = lnT → ∞ as nV (y) → 0, and (cf. (4.1.29))

maxk�n

k∏i=1

Ri(μ, y) � exp{

μ2hD

2+ cnV

(1μ

)+ r

(1 + ε(λ)

)}. (13.1.14)

Here, by virtue of [U], one has, uniformly in n,

V

(1μ

)= V

(y

lnT

)∼ cV

(y

| lnnV (y)|)

� cV (y)∣∣lnnV (y)

∣∣α+δ, δ > 0,

so that

nV

(1μ

)� cnV (y)

∣∣lnnV (y)∣∣α+δ → 0 as nV (y) → 0.

Page 521: Asymptotic analysis of random walks

490 Non-identically distributed jumps with finite variances

Hence (cf. (4.1.31))

lnP � −r lnT + r +hD

2y2ln2 T + ε1(T )

= −(

r − hD

2y2lnT

)lnT + r + ε1(T ), (13.1.15)

where ε1(T ) → 0 as T → ∞.Owing to [U1], we have lnT = − lnnV (x)+O(1). Next we will bound nV (x)

from below. It follows from [U2] that for x > tδ one has

nV (x) � J(n, δ)x−α−δ. (13.1.16)

Hence, putting J := J(n, δ), ρ := J/D and α′ := α + δ, we obtain that, forx = sσ(n) → ∞, σ(n) =

√(α − 2)D lnD,

nV (x) � Jx−α′= Js−α′

(α − 2)−α′/2D−α′/2(lnD)−α′/2

= cD(2−α′)/2s−α′(lnD)−α′/2ρ. (13.1.17)

Therefore

lnT = − lnnV (x) + O(1)

� α′ − 22

lnD + α′ ln s − ln ρ +α′

2ln lnD + O(1) (13.1.18)

=α′ − 2

2lnD

(1 +

2α′

α′ − 2ln s

lnD− 2

α′ − 2ln ρ

lnD

)(1 + o(1)).

Since s � 1, ρ � 1 and

2α′

α′ − 2� 2α

α − 2,

1α′ − 2

� 1α − 2

,

we obtain

lnT � α′ − 22

lnD

(1 + b

ln s

lnD− 2

α − 2ln ρ

lnD

)(1 + o(1)).

Why the relative remainder term in (13.1.18) has the form o(1) can be explainedas follows. The convergence x → ∞, which is necessary for nV (x) → 0, meansthat either s → ∞ or D → ∞. If s → ∞, D < c then the value in the largeparentheses in (13.1.18) increases unboundedly. Hence the terms ln lnD + O(1)translate into a factor o(1) times the expression in the large parentheses. Alterna-tively, if D → ∞ then also lnD → ∞, and we again obtain o(1) for the samereason.

If ln ρ = o(lnD) (which is only possible when D → ∞) then the representa-tion (13.1.18) will coincide with (4.1.32) provided that in the latter we replace n

by D.

Page 522: Asymptotic analysis of random walks

13.1 Upper and lower bounds for the distributions of Sn and Sn 491

As in § 4.1, (13.1.18) implies that

hD

2y2lnT � hr2

4s2

(1 +

δ

α − 2

)(1 + b

ln s

lnD− 2

α − 2ln ρ

lnD

)(1 + o(1)).

(13.1.19)Since δ can be chosen arbitrarily small we obtain that, owing to (13.1.15), one hasthe following bound with a new value of h, somewhat greater than that in (13.1.19):

lnP � r −[r − hr2

4s2

(1 + b

ln s

lnD+ χ

)]lnT.

This proves the first assertion of the theorem.

(ii) Now we will prove the second part of the theorem. We will make useof (13.1.13), where we put

μ :=x

Dh.

Then, for y = x (r = 1),

lnP � −μx +μ2hD

2+ cnV

(1μ

)+ nV (y)eμy(1 + o(1))

= − x2

2Dh+ cnV

(Dh

x

)+ ex2/DhnV (x)(1 + o(1)). (13.1.20)

Here, for s2 � c, we have from [U2] that

nV

(Dh

x

)� c1nV

(√D

lnD

),

where, for t � tδ,

V (t) � V (tδ)(

t

)−α+δ

.

Hence for δ < 2 − α we find from (13.1.4) and (13.1.9) that, for α′′ := α − δ,

nV

(Dh

x

)� c2D

−α′′/2J � c3D1−α′′/2 → 0 as D → ∞. (13.1.21)

Consider the last term in (13.1.20). When

(α′′ − 2)(h − τ)2(α − 2)

� s2 � 1(α − 2) lnD

(13.1.22)

(recall that x �√

D), we find in a similar way, using (13.1.9), that

nV (x) � c1x−α′′

J(n, δ) < c2s−α′′

D1−α′′/2 � c2D1−α′′/2(lnD)α′′/2

and

x2

Dh� (h − τ)(α′′ − 2) lnD

2h=(

α′′ − 22

− τ(α′′ − 2)2h

)lnD. (13.1.23)

Page 523: Asymptotic analysis of random walks

492 Non-identically distributed jumps with finite variances

Therefore

nV (x)ex2/Dh � D−τ(α′′−2)/2h(lnD)α′′/2 → 0 (13.1.24)

as D → ∞. Thus,

lnP � − x2

2Dh+ o(1). (13.1.25)

The term o(1) could be removed from (13.1.25) by slightly changing h > 1. Sinceby choosing a suitable δ one can make the ratio (α′′ − 2)/(α − 2) in (13.1.22)arbitrarily close to 1, we can replace the upper bound for s2 in (13.1.22) by s2 �(h − τ)/2, again slightly changing, if necessary, either h or τ (see the end of theproof of Theorem 4.1.2 for a remark concerning this observation).

This completes the proof of the second assertion.

(iii) To prove the third assertion, which uses ‘individual’ conditions [U] for thedistributions Fj , one bounds the quantities Rj(μ, y) separately for each j. Thewhole argument that we used above to derive ‘averaged’ bounds will remain valid,as in this case the uniformity condition [U] will be satisfied for ‘individual’ distri-butions. Further, the relations (13.1.16), (13.1.18) will also remain true providedthat in them we replace α by α∗ = maxj�n αj and put α′ = α∗ + δ. In the proofof the second assertion, one replaces α by α∗ = minj�n αj and lets α′′ = α∗−δ.Thus the inequalities (13.1.23), (13.1.24) will hold true when

1(α∗ − 2) lnD

� s2 � (h − τ)(α∗ − 2)2(α∗ − 2)

.

The theorem is proved.

13.1.2 Upper bounds for the distributions of Sn. The second version of the

conditions

Along with condition [ · , <]U one can consider a similar but somewhat simplerversion thereof, which is analogous to condition [UR] of Chapter 12. In this wayone can get rid of the additional and not quite convenient condition [N], whichwas present in Corollaries 13.1.3 and 13.1.4

We will say that condition [ · , <]UR is satisfied if F+(t) admits a majorant V (t)such that

[UR]

limn→∞, t→∞

nV (t)H(n)V0(t)

= 1, (13.1.26)

where H(n) � n is a non-decreasing function, H(1) = 1 and V0(t) is an r.v.f.that is fixed (independent of n).

If we study the asymptotics of P(Sn � x) when the value n remains fixedas x → ∞ then condition [ · , <]UR means that the regularly varying majorantV (t) = V0(t) is fixed.

Page 524: Asymptotic analysis of random walks

13.1 Upper and lower bounds for the distributions of Sn and Sn 493

It is not difficult to see that condition [UR] implies [U].The relation (13.1.26) clearly means that, for a given δ > 0, there exist an nδ

and a tδ such that

nV (t) =(1 + δ(n, t)

)H(n)V0(t),

where |δ(n, t)| � δ for n � nδ , t � tδ .

Theorem 13.1.7. Let Eξj = 0 and let the averaged distribution F satisfy condi-tion [ · , <]UR. Then the following assertions hold true.

(i) The assertion of Theorem 13.1.2(i) remains true, provided that there we putρ := min{1,H(n)/Dn}, so that ρ � D−1

n , χ � 2/(α − 2).(ii) Let n → ∞, D = Dn → ∞. Then, for

x = sσ(n) �√

D, s � s0, H(n) � Dα/2−s20(α−2) (13.1.27)

and all large enough n, we have (13.1.8).

Note that in Example 13.1.6 with ci = i−γ , γ � 0, one has

Dn ∼ n1−2γ

1 − 2γ, H(n) ∼ n1−αγ � 1

for γ � 1/α, so that the last inequality in (13.1.27) is equivalent to

1 − αγ <α

2(1 − 2γ) − s2

0(α − 2)(1 − 2γ)

that is,

s20 < (2(1 − 2γ))−1.

Proof of Theorem 13.1.7. (i) The proof of the first assertion of the theorem re-peats that of Theorem 13.1.2(i) up to the relation (13.1.14). Further, for any fixedδ > 0 we have

nV

(1μ

)∼ H(n)V0

(1μ

)∼ cH(n)V0

(y

ln(H(n)V0(y))

)� cH(n)V0(y)

∣∣ ln(H(n)V0(y))∣∣α+δ → 0

as nV (y) = H(n)V0(y) → 0. Hence the relation (13.1.15) remains true.As in § 13.1.1, we will now bound nV (x) from below. By virtue of (13.1.26),

we have, for n � nδ, x � tδ,

nV (x) � (1 − δ)H(n)V0(x).

Setting α′ := α + δ, x = sσ(n) and σ(n) :=√

(α − 2)D lnD, we obtain,cf. (13.1.17), that

nV (x) � cH(n)s−α∗D−α∗/2(lnD)α∗/2 = cs−α∗

D(2−α∗)/2(lnD)α∗/2ρ,

where ρ := H(n)/Dn. This implies (13.1.18) but with a new value of ρ, with

Page 525: Asymptotic analysis of random walks

494 Non-identically distributed jumps with finite variances

regard to which one can again assume that ρ � 1 (if a value ρ > 1 is replacedby ρ = 1 then the bounds (13.1.18) and (13.1.6) can only become cruder). Theentire subsequent argument in the proof of Theorem 13.1.2(i) remains valid. Sinceρ � 1/Dn, we have χ � 2/(α − 2). In the case of bounded n, the argumentbecomes even simpler.

(ii) The proof of the second assertion also differs little from the argumentdemonstrating Theorem 13.1.2. All the calculations up to (13.1.21) remain thesame (except that the reference to condition [U2] should be replaced by a refer-ence to condition [UR]). Instead of (13.1.21) we will have by virtue of [UR]that, for α′′ = α − δ, n → ∞ and s � s0,

nV

(Dh

x

)= H(n)V0

(Dh

x

)(1 + o(1)) � cH(n)

(D

lnD

)−α′′/2

→ 0

for small enough δ > 0, owing to (13.1.27). Further, for the last term in (13.1.20),we find, for

((α + 2) lnD)−1 � s2 � s20

(recall that x �√

D), that

nV (x) � cH(n)x−α′′ � cH(n)D−α′′/2,x2

Dh� s2(α − 2)

hlnD.

Therefore

nV (x)ex2/Dh � cH(n)Ds20(α−2)/h−α′′/2 → 0

as n → ∞, when H(n) � Dα/2−s20(α−2) and δ is small enough. Together

with (13.1.21), this demonstrates (13.1.25) and hence the second assertion of thetheorem as well.

The theorem is proved.

It is not hard to see that the assertions of Corollaries 13.1.3 and 13.1.4 willremain true provided that in them we replace conditions [ · , <]U and [N] bycondition [ · , <]UR.

13.1.3 Lower bounds for the distributions of the sums Sn

The lower bounds are based on the main inequality of Theorem 12.1.9, whichstates that

P(Sn � x) �n∑

j=1

Fj,+(y)(1 − Q〈j〉

n (u))− 1

2(nF+(y)

)2,

where

Q〈j〉n (u) = P

(S〈j〉n

K(n)< −u

), S〈j〉

n = Sn − ξj,

Page 526: Asymptotic analysis of random walks

13.2 Probability of the crossing of a remote boundary 495

K(n) > 0 is an arbitrary sequence and y = x + uK(n). Let

D〈j〉n := Dn − dj , Dn = max

j�nD〈j〉

n � Dn.

Observe that since Dn = Dn − minj�n dj we always have Dn � Dn(1 − 1/n)for n � 2, so that Dn = Dn(1 + o(1)) as n → ∞.

Set K(n) := Dn. Then, by Chebyshev’s inequality,

Q〈j〉n (u) = P

(S〈j〉n

Dn

< −u

)< P

(S〈j〉n

D〈j〉n

< −u

)� u−2

and we obtain the following assertion.

Theorem 13.1.8. Let Eξj = 0, Eξ2j = dj < ∞. Then, for y = x + uDn,

P(Sn � x) � nF+(y)(1 − u−2 − 1

2nF+(y)

).

The form of this assertion almost coincides with that of Theorem 4.3.1.

Corollary 13.1.9. For x2 � Dn, u → ∞,

P(Sn � x) � nF+(y)(1 + o(1)).

Proof of Corollary 13.1.9. If x2 � Dn then

nF+(y) =n∑

j=1

Fj,+(x) �n∑

j=1

dj

x2=

Dn

x2→ 0.

Letting u → ∞, we obtain the desired assertion from Theorem 13.1.8.

Corollary 13.1.10. Let x2 � Dn and let the averaged distribution F satisfyconditions [ · , >] and [U1] or conditions [ · , >] and [UR]. Then

P(Sn � x) � nV (x)(1 + o(1)).

Proof. The proof of the corollary is obvious. It follows from Corollary 13.1.9,because if the conditions [U1] (or [UR]), x2 � Dn and u = o(x/

√Dn) are

satisfied then we have x ∼ y and so

nV (y) ∼ nV (x).

13.2 Asymptotics of the probability of the crossing of an arbitrary remote

boundary

Under somewhat excessive conditions, the desired asymptotics for the distribu-tions of Sn and Sn can be obtained from the bounds derived in § 13.1. As in our

Page 527: Asymptotic analysis of random walks

496 Non-identically distributed jumps with finite variances

previous exposition, we will say that the averaged distribution

F =1n

n∑j=1

Fj

satisfies condition [ · , =]U ([ · , =]UR) if it satisfies condition [ · , =] with thefunction V satisfying condition [U] ([UR]) of § 13.1. Recall also that condi-tion [N] from § 13.1 has the following form:

[N] For some δ < min{1, α − 2} and γ � 0,

J(n, δ) = tα+δδ nV (tδ) � cD−γ ,

where tδ is from condition [U].

Theorem 13.2.1. Let Eξj = 0 and the averaged distribution F satisfy the condi-tions [ · , =]U, [N] and x � √

Dn lnDn. Then

P(Sn � x) = nV (x)(1 + o(1)),

P(Sn � x) = nV (x)(1 + o(1)).(13.2.1)

The assertion remains true if we replace conditions [ · , =]U and [N] by condi-tion [ · , =]UR.

Under condition [UR] with H(n) = n (see p. 492), the asymptotic relationP(Sn � x) ∼ nV (x) for x > cn was established in [218].

The assertion of Theorem 13.2.1 can be sharpened. Namely, one can find anexplicit value of c such that (13.2.1) holds true for x = s

√(α − 2)D lnD, s � c.

To achieve this, one would need to do a more detailed analysis, similar to that inthe proof of Theorem 4.4.4 (cf. Remark 4.4.2 on p. 197).

The assertion of the theorem could be stated in a uniform version, like that ofTheorem 4.4.1. This follows from the fact that all the upper and lower boundsobtained above are explicit and uniform.

Moreover, Theorem 13.2.1 can be extended to the case of arbitrary boundaries.Then, however, it is more natural to use ‘individual’ conditions (see § 13.1), whereeach tail Fj,+ satisfies condition [ · , =]U.

As before, let Gx,n be the class of boundaries {g(k)} for which mink�n g(k) =cx, c > 0. The following analogue of Theorem 4.6.7 for the probability of theevent

Gn ={

maxk�n

(Sk − g(k)

)� 0

}holds true.

Theorem 13.2.2. Let Eξj = 0 and the distributions Fj satisfy condition [ · , =]Uuniformly in j and n with a common α = αj , j = 1, . . . , n. Moreover, let the

Page 528: Asymptotic analysis of random walks

13.2 Probability of the crossing of a remote boundary 497

averaged distribution F satisfy condition [N]. Then there exists a c1 < ∞ suchthat, for x > c1

√D lnD, x → ∞,

P(Gn) =

[n∑

j=1

Vj

(g∗(j)

)](1 + o(1)) + O

(n2V 2(x)

), (13.2.2)

where g∗(j) = mink�j g(k) and the reminder terms o(·) and O(·) are uniformover the class Gx,n of boundaries and the class F of distributions {Fj} that satisfyconditions [ · , =]U and [N].

The above assertion remains true if conditions [ · , =]U and [N] are replacedby [ · , =]UR.

Recall that when n → ∞ condition [U] can be replaced by a weaker condition,[U∞] (see Remark 13.1.1).

It follows from (13.2.2) that when

maxk�n

g(k) < c1x (13.2.3)

we have

P(Gn) ∼n∑

j=1

Vj

(g∗(j)

). (13.2.4)

Proof. The proof of Theorem 13.2.2 is similar to those of Theorems 3.6.4 (seep. 156) and 12.2.1, its argument repeating almost verbatim the respective argu-ments for those theorems (up to obvious amendments due to the fact that nowdj = Eξ2

j ). Therefore it will be omitted.

We could also obtain here analogues of all main assertions of § 12.3 on theprobability of the crossing of an arbitrary boundary on an unbounded time inter-val. We will briefly discuss them. Consider a class of boundaries {g(k)} of theform

g(k) = x + gk, k = 1, 2, . . . , (13.2.5)

which are defined on the whole axis and lie between two linear functions:

c1ak − p1x � gk � c2ak + p2x, p1 < 1, k = 1, 2, . . . , (13.2.6)

where the constants 0 < c1 � c2 < ∞ and pi > 0, i = 1, 2, do not dependon the parameter of the triangular array scheme and the variable a ∈ (0, a0),a0 = const > 0, can tend to zero. We are interested in deriving asymptoticrepresentations for P(Gn) that

(1) are uniform in a as a → 0, and

(2) hold for n, growing faster than cx/a, in particular, for n = ∞.

Page 529: Asymptotic analysis of random walks

498 Non-identically distributed jumps with finite variances

Put n1 := x/a and introduce, as in § 12.3, averaged majorants

V(1)(x) :=1n1

n1∑j=1

Vj(x)

(we assume for simplicity that x/a is an integer). We will need the homogeneitycondition

[H] For nk = 2k−1n1 and all k = 1, 2, . . . and t > 0,

c(1)V(1)(t) � 1nk

2nk∑j=nk

Fj,+(t) � c(2)V(1)(t),

c(1)

n1Dn1 � 1

nkDnk

�c(2)

n1Dn1 .

The second line of inequalities basically means that the values Dn grow almostlinearly with n.

As before, let

Sn(a) = maxk�n

(Sk − ak), η(x, a) = inf{k : Sk − ak � x},

Bj(v) = {ξj < y + vaj}, v > 0, B(v) =n⋂

j=1

Bj(v).

The following theorem bounds probabilities of the form P(Sn(a) � x; B(v)

),

P(Sn(a) � x

)and P

(∞ > η(x, a) � xt/a)

for n � n1 = x/a. (For n � n1

such bounds can be obtained from Theorem 13.2.2.) In what follows, by condi-tion [U] we understand condition [U∞] (see Remark 13.1.1).

Theorem 13.2.3.

(i) Let the averaged distribution

F(1) =1n1

n1∑j=1

Fj , n1 = x/a, (13.2.7)

satisfy conditions [ · , <]U and [N] (with n replaced in the latter by n1).Moreover, let condition [H] be met. Then, for

n � n1, x >c| ln a|

a, v � 1

4r, r =

x

y>

52,

we have the inequalities

P(Sn(a) � x; B(v)

)� c1

[n1V(1)(x)

]r1, (13.2.8)

P(Sn(a) � x

)� c1n1V(1)(x), (13.2.9)

where r1 = r/[2(1 + vr)] and the constants c and c1 are defined in the proof.

Page 530: Asymptotic analysis of random walks

13.2 Probability of the crossing of a remote boundary 499

Moreover, for any fixed or slowly enough growing t,

P(∞ > η(x, a) � xt

a

)� c2n1V(1)(x)t1−α. (13.2.10)

If t → ∞ together with x at an arbitrary rate then the inequality (13.2.10)will remain true provided that we replace the exponent 1−α on its right-handside by 1 − α + ε, where ε > 0 is an arbitrary fixed number.

(ii) Let the conditions of part (i) be satisfied when the range of x is

x <c| ln a|

a,

and assume additionally that Dn → ∞ as n → ∞. Then, for any fixedh > 1 and all large enough n,

P(Sn(a) � x

)� c1e

−xa/2dh, d :=Dn1

n1.

If x = o(| ln a|/a) then, for any fixed or slowly enough growing t,

P(∞ > η(x, a) � xt

a

)� c1e

−γt, (13.2.11)

where the constants γ and c1 can be found explicitly.

The assertion of the theorem remains true if conditions [ · , =]U and [N] arereplaced by [ · , =]UR.

It follows from the theorem that there exist constants γ > 0 and c < ∞ suchthat, for n � n1 = x/a and all x, one has

P(Sn(a) � x

)� cmax

{e−γxa/d,

x

aV (x)

}. (13.2.12)

Proof. The proof of the theorem repeats those of Theorems 4.2.1 and 12.3.1. Theonly difference is that now for n � n1 we use Theorem 13.1.2 (instead of The-orem 4.1.2, as was the case in § 4.2). In comparison with Theorem 12.3.1, theexponent of the product n1V(1)(x) in (13.2.8) is different (the r0 in (12.3.6) is re-placed by r1 = r0/2 in (13.2.8)). This is due to the fact that, as in Theorem 4.2.1,we can only use inequality (13.1.6) of Theorem 13.1.2 with the exponent r/2 onits right-hand side if we ensure that θ � r/2. This last inequality will hold if

s2 = cx2

n1 lnn1> c3 (13.2.13)

with a suitable constant c3. As in (4.2.8), one can verify that (13.2.13) will be trueprovided that

x >c4| ln a|

a

(or a >

c4 lnx

x

)for a suitable c4. The rest of the argument does not differ from that used to proveTheorems 4.2.1 and 12.3.1.

The theorem is proved.

Page 531: Asymptotic analysis of random walks

500 Non-identically distributed jumps with finite variances

Now we return to considering boundaries of the more general form (13.2.5).Let

ηg(x) := min{k : Sk − gk � x}.Corollary 13.2.4. Let the conditions of Theorem 13.2.3 (i) and conditions (13.2.5),(13.2.6) be satisfied. Then, for t bounded or growing slowly enough,

P(∞ > ηg(x) � xt

a

)�

cxV(1)(x)a

t1−α. (13.2.14)

If t → ∞ at an arbitrary rate then the inequality (13.2.14) will remain trueprovided that we replace the exponent 1 − α on its right-hand side by 1 − α + ε,

where ε > 0 is an arbitrary fixed number.

Proof. The assertion of the corollary follows from Theorem 13.2.3 and the in-equality (see (13.2.6))

P(∞ > ηg(x) � xt

a

)� P

(∞ > η

((1 − p1)x, c1a

)� xt

a

).

Observe that a similar corollary can be obtained under the conditions of Theo-rem 13.2.3(ii).

Now put

g∗j := mink�j

gk

and identify the parameter of the triangular array scheme with a. It is not hardto see that {g∗j } is non-decreasing and, together with {gj}, satisfies the inequali-ties (13.2.6) (cf. (12.3.10)).

Theorem 13.2.5. Let Eξj = 0 and let the distributions Fj satisfy condition[ · , =]U uniformly in j and a, with a common α = αj , j = 1, 2, . . . More-over, let conditions [H], (13.2.5) and (13.2.6) be met and the averaged distribu-tion F(1) (see (13.2.7)) with n = n1 = x/a satisfy condition [N]. Further, letx > c| ln a|/a for a suitable c (see the proof of Theorem 13.2.3). Then

P(

supk�0

(Sk − gk) � x)

=

( ∞∑j=1

Vj(x + g∗j )

)(1 + o(1)), (13.2.15)

where the remainder term o(·) is uniform in a and also over the classes of distri-butions {Fj} and boundaries {gk} that satisfy the conditions of the theorem.

The assertion of the theorem remains true if we replace conditions [ · , =]U and[N] by [ · , =]UR.

Proof. The proof of the theorem repeats, with obvious amendments, the proof ofTheorem 12.3.4.

Page 532: Asymptotic analysis of random walks

13.2 Probability of the crossing of a remote boundary 501

If we make the homogeneity conditions for the tails and boundaries somewhatstronger then it is possible to obtain a simpler representation for the right-handside of (13.2.15). Consider the following condition.

[HΔ] Let condition [H] hold and, for any fixed Δ > 0 and

nΔ :=⌊

xΔa

⌋= �n1Δ ,

let there exist a g > 0 and majorants Vj such that uniformly in k one has

1nΔ

(k+1)nΔ∑j=knΔ+1

Vj(x) ∼ V(1)(x),

1nΔ

(D(k+1)nΔ − DknΔ

) ∼ Dn1

n1,

1nΔ

(g(k+1)nΔ − gknΔ

) ∼ ga

as n → ∞.

Corollary 13.2.6. Let the averaged distribution F(1) (see (13.2.7)) satisfy con-ditions [ · , =]U and [N] with n = n1. Moreover, assume that x > c| ln a|/a,

condition [HΔ] is met and, for definiteness, g1 = o(x). Then

P(supk�0

(Sk − gk) � x)∼ 1

ga

∞∫x

V(1)(u)du ∼ xV(1)(x)ga(α − 1)

.

The above assertion remains true if conditions [ · , =]U and [N] are replacedby [ · , =]UR.

Proof. The proof of the corollary repeats that of Corollary 12.3.5.

Corollary 13.2.6 implies the following.

Corollary 13.2.7. Let ξ1, ξ2, . . . be i.i.d. r.v.’s (Fj = F, Vj = V , dj = d < ∞)and let condition [ · , =]U and condition [N] with n = n1 = x/a be satisfied.Then, if x > c| ln a|/a (see Theorem 13.2.3) we have

P(S(a) � x

) ∼ xV (x)a(α − 1)

.

The assertion remains true if conditions [ · , =]U and [N] are replaced by condi-tion [ · , =]UR.

It is clear that the assertions of Corollaries 13.2.6 and 13.2.7 can be stated in auniform version, as in Theorem 13.2.5.

Page 533: Asymptotic analysis of random walks

502 Non-identically distributed jumps with finite variances

13.3 The invariance principle. Transient phenomena

There is no need for us to go into much detail in this section, since the invarianceprinciple and transient phenomena have already been well covered in the existingliterature (the single reservation being that transient phenomena have been studiedfor i.i.d. jumps only). We will review here the main results (for completenessof exposition) and give brief explanations in those cases where the results areextended.

13.3.1 The invariance principle

As before, let ξ1, ξ2, . . . , ξn be independent r.v.’s in the triangular array scheme,

Eξj = 0, dj = Eξ2j < ∞, Dn =

n∑j=1

dj .

In this case, the convergence of the distribution of Sn/√

Dn to the limiting normallaw is determined by the Lindeberg condition:

[L] For any fixed τ > 0, as n → ∞,

1Dn

n∑j=1

E[ξ2j ; |ξj | > τ

√Dn

]→ 0

(see e.g. § 4, Chapter 8 of [49] or § 4, Chapter VIII of [122]).

To ensure that the processes

ζn(t) :=S�nt�√

Dn

, t ∈ [0, 1], (13.3.1)

converge to the standard Wiener process w(·), one also needs the homogeneityconditions

[HDΔ] For any fixed Δ > 0, nΔ = �Δn and all k � 1/Δ,

1nΔ

(D(k+1)nΔ − DknΔ

) ∼ Dn

nas n → ∞.

Let C(0, T ) be the space of continuous functions on [0, T ], endowed with theuniform metric. The trajectories of ζn(·) can be considered as elements of thespace D(0, 1) of functions without discontinuities of the second kind.

Theorem 13.3.1. Let conditions [L] and [HDΔ] be satisfied. Then, for any measur-

able functional f on D(0, 1) that is continuous in the uniform metric at the pointsof the space C(0, 1), we have the weak convergence of distributions as n → ∞:

f(ζn) ⇒ f(w),

where w is the standard Wiener process.

Page 534: Asymptotic analysis of random walks

13.3 The invariance principle. Transient phenomena 503

Along with the process (13.3.1), one often considers a continuous process ζn(·)defined as a polygon with nodes at the points(

k

n,

Sk√Dn

), k = 1, . . . , n.

In this case, the assertion of Theorem 13.3.1 can be formulated as the weak con-vergence of the distributions of ζn(·) and w(·) in the metric space C(0, 1) en-dowed with the σ-algebra of Borel sets (which coincides with the σ-algebra gen-erated by cylinders).

Proof. The proof of Theorem 13.3.1 follows the standard path: one needs to es-tablish the convergence of finite-dimensional distributions and the compactnessof the family of distributions of ζn(·) (see e.g. [28, 44]).

It will be convenient for us to rewrite the assertion of Theorem 13.3.1 in asomewhat different form. Let the totality of the independent r.v.’s ξ1, ξ2, . . . , ξn

be extended (or shortened) to the sequence ξ1, ξ2, . . . , ξ�nT�, T > 0, and let thisnew sequence also satisfy conditions [L] and [HD

Δ] (in condition [L] we sum upthe first �nT r.v.’s, and in condition [HD

Δ] the value k varies from 1 to T/Δ).The process ζn(·) (see (13.3.1)) will now be defined on the segment [0, T ].

Corollary 13.3.2. If the totality of independent r.v.’s ξ1, . . . , ξ�nT� satisfies con-ditions [L] and [HD

Δ] then, for any measurable functional f on D(0, T ) that iscontinuous in the uniform metric at the points of C(0, T ), we have the weak con-vergence of distributions as n → ∞:

f(ζn) ⇒ f(w),

where w is the standard Wiener process on [0, T ].

13.3.2 Transient phenomena

The essence of transient phenomena was described in sufficient detail in § 12.5.Here there is no real difference. The main result for i.i.d. summands ξj has alreadybeen given in § 12.5 (see (12.5.1)): for any z � 0,

lima→0

P(aSn(a) � z

)= P

(maxt�T

(√dw(t) − t

)� z

), (13.3.2)

where w is the standard Wiener process, n = Ta−2 and d = lima→0 Eξ2j (see

e.g. § 25, Chapter 4 of [42] or [165, 231]). For n = ∞ (T = ∞) the right-handside of (13.3.2) is equal to e−2z2/d.

Below we will extend these results to the case of non-identically distributedindependent jumps ξj in the triangular array scheme. As before, let F(1) be theaveraged distribution

F(1) =1n1

n1∑j=1

Fj with n1 = �a−2 .

Page 535: Asymptotic analysis of random walks

504 Non-identically distributed jumps with finite variances

Theorem 13.3.3. Let condition [L] be met for n = n1T for any fixed T, and

Dn

n→ d as n → ∞. (13.3.3)

Further, let the averaged distribution F(1) satisfy conditions [ · , <]U and [N](with n replaced in them by n1 = �a−2 ). If condition [H] of § 13.2 is met then(13.3.2) holds true for n = �Ta−2 . In particular, for n = ∞,

lima→0

P(aS(a) � z

)= e−2z2/d. (13.3.4)

The assertion of the theorem remains true if conditions [ · , =]U and [N] arereplaced by [ · , =]UR.

Note that the condition that, for some α > 2,

1n1

n1∑j=1

E|ξj |α < c < ∞ (13.3.5)

implies conditions [<, <]UR and [L]. The condition (13.3.5) with |ξj |α replacedby (ξ+

j )α (v+ = max{0, v}) implies [ · , <]UR and a ‘right-sided’ Lindebergcondition.

Proof of Theorem 13.3.3. The argument repeats that used in the proof of Theo-rem 12.5.1 but now it will employ Theorem 13.2.3 and Corollary 13.3.2. Since(13.3.3) implies that [HD

Δ] holds for n = n1T , we see that the conditions ofCorollary 13.3.2 are met for n = n1T . Similarly, the representation (12.5.7) with

ζn1(t) =S�n1t�√

Dn1

, n1 = �a−2 ,

could be written as

aSn(a) = a√

Dn1 maxk�n

(Sk√Dn1

− ak√Dn1

)= a

√Dn1 max

k�n

(ζn1

(k

n1

)− kθ

n1

),

where a√

Dn1 ∼ √d and θ = an1/

√Dn1 ∼ 1/

√d. For T < ∞, the functional

fT (ζ) := supu�T

(ζ(u) − u/

√d)

is continuous in the uniform metric and has the property that

√dfT (ζn1) =

√d max

k�n

(ζn1

(k

n1

)− k√

dn1

)= aSn(a)(1 + o(1)) + o(1).

Hence (13.3.2) holds true by virtue of Corollary 13.3.2. If n = ∞ (T = ∞)then one should make use of Theorem 13.2.3 (more precisely, of the inequal-ity (13.2.11)) in the same way as we used Theorem 12.3.1 to prove Theorem 12.5.2in the case T = ∞.

Page 536: Asymptotic analysis of random walks

13.3 The invariance principle. Transient phenomena 505

To find the closed-form expression (13.3.4) for the limiting distribution foraS(a), one could use, as in Theorem 12.5.2, the ‘invariance principle’ (13.3.2)proved in the first part of the theorem. That result states that, under the con-ditions of Theorem 13.3.3, the limiting law for aS(a) does not depend on thedistributions Fj . Therefore, when finding the desired limiting distribution we canassume that the r.v.’s ξj are identically distributed (Fj = F(1)) and have the righttail V (t) = P(ξ1 � t) = qe−λ+t, where q < 1, qλ−1

+ = Eξ+1 . In this case, the

distribution of S(a) is well known to be exponential also (see e.g. § 20 of [42];cf. (12.5.14)):

EeλS(a) = p +(1 − p)λ1

λ1 − λ,

so that

P(S(a) � v

)= (1 − p)e−λ1v for v > 0, p = P

(S(a) = 0

),

where λ1 is a solution to the equation ψ(λ) = 1, ψ(λ) = Eeλ(ξ−a). Next we notethat in our case, as λ → 0,

ψ(λ) = 1 − λa +dλ2

2(1 + o(1)).

Therefore λ1 ∼ 2a/d as a → 0. Hence

P(

S(a) � z

a

)= exp

{−z

a

2a

d

}(1 + o(1)) = e−2z/d(1 + o(1)).

The theorem is proved.

Page 537: Asymptotic analysis of random walks

14

Random walks with dependent jumps

14.1 The classes of random walks with dependent jumps that admit

asymptotic analysis

The results of Chapters 12 and 13 for random walks with independent non-identi-cally distributed jumps can be used for the asymptotic analysis of some types ofrandom walks with dependent jumps. Since everywhere in Chapters 12 and 13 weused the condition Eξj = 0, it is natural to attempt to extend the results of thesechapters to martingales, in the first place. The martingale property, however, doesnot play a decisive role. Another broad class of random walks for which one couldstudy large deviations in a similar way are ‘arbitrary’ walks (not necessarily mar-tingales) defined on Markov chains. The following four basic classes of randomwalks, which are rather close to each other, admit asymptotic analysis of the sametype as in Chapters 12 and 13.

1. Martingales with a common majorant for the jump distributions. Leta sequence of r.v.’s ξ1, ξ2, . . . be given on a basic probability space (Ω,F,P)endowed with a family of increasing σ-algebras F1 ⊆ F2 ⊆ · · · ⊆ F such that ξn

is Fn-measurable, n � 1. The stochastic sequence

{Sn,Fn; n � 1}, Sn :=n∑

j=1

ξj ,

forms a martingale if

E|ξn| < ∞ and E(ξn+1|Fn) = 0, n � 1. (14.1.1)

Denote by

Fj(B,ω) := P(ξj ∈ B|Fj−1)

the conditional distribution of ξj given Fj−1 and by

Fj,+(t, ω) := Fj([t,∞), ω), Fj,−(t, ω) := Fj((−∞,−t), ω)

506

Page 538: Asymptotic analysis of random walks

14.1 Classes of random walks with dependent jumps 507

its tails. Assume that there exist regularly varying majorants V (t), W (t) such thata.s.

Fj,+(t, ω) � V (t), Fj,−(t, ω) � W (t), t > 0. (14.1.2)

It may be seen from the analysis in § 12.1 that when deriving upper bounds forthe distributions P(Sn � x), Sn = maxk�n Sk, we have not used the distribu-tions Fj themselves but rather the majorants for their tails. In the present case,we have such majorants in the uniform version (14.1.2). This enables one to ob-tain the required bounds for the probabilities of large deviations of Sn (of theform nV (x)) for the class of martingales under consideration.

2. Martingales defined on countable Markov chains. Another class of mar-tingales, which will be discussed in more detail in §§ 14.2 and 14.3 below andfor which one can derive more advanced and precise results, is a modification ofthe above general model. This class contains martingales defined on countableMarkov chains.

Let X = {Xk; k � 1} be a time-homogeneous ergodic Markov chain witha countable state space. We will assume that all the states are essential. Onecan identify, without loss of generality, the state space X of the chain with theset N = {1, 2, . . .}. Ergodicity of the chain means that, for any fixed i, j ∈ X ,

there exist the limits

limn→∞P(Xn = j|X1 = i) = πj > 0,

∞∑j=1

πj = 1. (14.1.3)

Further, on each of the states j ∈ X of the chain let an r.v. ξ(j) be given whosedistribution F(j) depends on j, but in such a way that, for any j,

Eξ(j) = 0. (14.1.4)

Consider an array of independent r.v.’s {ξk(j); k, j � 1}, which is independentof the chain X and in which the ξk(j) are distributed as ξ(j), and form the sums

Sn :=n∑

k=1

ξk(Xk). (14.1.5)

If we denote by Fn the σ-algebra generated by{X1, . . . , Xn; ξ1(X1), . . . , ξn(Xn)

},

then the stochastic sequence {Sn,Fn; n � 1} will be a martingale. Under suit-able assumptions on the distributions of ξ(j), which will be stated in §§ 14.2 and14.3, one can extend to such random walks all the main results of the asymp-totic analysis of Chapters 12 and 13, without great effort and without using anyadditional constructions.

The reason for this is that for any fixed trajectory

X(n) := {X1, . . . , Xn},

Page 539: Asymptotic analysis of random walks

508 Random walks with dependent jumps

the sequence {ξ1(X1), . . . , ξn(Xn)} will consist of independent non-identicallydistributed r.v.’s whose distribution satisfies the conditions of Chapters 12 and 13.Then averaging the derived asymptotic results over the set Xn

1 of ‘ergodic’ tra-jectories (P(X(n) ∈ Xn

1 ) → 1 as n → ∞), we will obtain a sufficiently simpleexplicit answer. The probability of trajectories that are not from X n

1 will be neg-ligibly small, and such trajectories will not affect the form of the desired asymp-totics.

3. Martingales defined on arbitrary Markov chains. The above randomwalk model could be extended to the case of a Markov chain with an arbitrarystate space X . Assume that for each z ∈ X one is given a distribution F(z) on thereal line such that

∫tF(z)(dt) = 0. This time, we will make use of a somewhat

different, more ‘economical’, way of defining the random walk. Let Q(z)(v) bethe v-quantile of F(z), i.e. Q(z)(v) is the generalized inverse of the distributionfunction F(z)((−∞, t)):

Q(z)(v) = inf{t : F(z)((−∞, t)) > v

}.

Further, let ω1, ω2, . . . be a sequence of i.i.d. r.v.’s that are uniformly distributedover [0, 1] and independent of X . Then we put

ξk(z) := Q(z)(ωk)

and, as before, define the random walk {Sn} by (14.1.5). It is obviuos thatEξk(z) = 0 for any z ∈ X and that the random walk (14.1.5) is again a mar-tingale, which is now defined on an arbitrary Markov chain.

Provided appropriate restrictions are imposed on the distributions F(z), all theconsiderations from §§ 14.2 and 14.3 below which are devoted to the case ofcountable chains will remain valid for these more general walks. We single outthe countable chains for the sole reason of simplifying the exposition and avoidingtechnical difficulties concerning measurability, integrability, convergence etc.

4. Random walks defined on Markov chains. This term will be used forthe processes described in items 2 and 3 above, in the case when the martingaleproperty (i.e. the property that Eξ(z) ≡ 0) is not assumed. We will assume onlythat

a(z) := Eξ(z)

is finite and, for the reasons mentioned at the end of item 3, will confine ourselvesto considering walks defined on countable Markov chains.

The possibility of studying such more general processes is due to the fact that,in the decomposition

Sn = An + S0n, An :=

n∑k=1

a(Xk), S0n := Sn − An,

the sequence {S0n} forms a martingale since Eξk(z) − a(z) = 0 whereas, as n

Page 540: Asymptotic analysis of random walks

14.2 Martingales on countable Markov chains: infinite variances 509

increases, the sequence {An} will behave, with probability close to 1, almost asa linear function aπn, aπ :=

∑j πja(j). This allows one to use the results of

§§ 14.2 and 14.3 on the crossing of arbitrary fixed boundaries.In the case when ξj(Xj) = f(Xj), where f is a given function on X and

{Xk} forms a Harris Markov chain (having a positive recurrent state z0), theasymptotics of P(Sn � x) in terms of regularly varying distribution tails of thesums of quantities f(Xk) taken over a cycle formed by consecutive visits to z0

was studied in [187].

14.2 Martingales on countable Markov chains. The main results of the

asymptotic analysis when the jump variances can be infinite

In this section, we will consider in more detail random walks formed by the mar-tingales (14.1.4), (14.1.5) and defined on ergodic Markov chains with countablymany states.

Let F(j) be the distribution of the jump ξ(j) (in the walk {Sk}) correspondingto state j. Put

F(j),+(t) := F(j)([t,∞)), F(j),−(t) := F(j)((−∞, t)).

Consider the following conditions.

[<, <]U([<, =]U

)There exist regularly varying majorants V(j), W(j), V ∗

and W ∗ such that, for t > 0,

F(j),+(t) � V(j)(t) � V ∗(t), F(j),−(t) � W(j)(t) � W ∗(t)(14.2.1)(

F(j),+j(t) = V(j)(t) � V ∗(t), F(j),−(−t) � W(j)(t) � W ∗(t)),

(14.2.2)

where V(j), W(j) satisfy condition [U] of uniformity in j from § 12.1 with expo-nents αj � 1, βj > 1.

Further, let π = (π1, π2, . . .) be the stationary distribution of the chain X . Set

Vπ(t) :=∞∑

j=1

πjV(j)(t), Wπ(t) :=∞∑

j=1

πjW(j)(t), (14.2.3)

and assume that, for some p > 0,

Vπ(t) � pV ∗(t), Wπ(t) � pW ∗(t). (14.2.4)

Condition (14.2.4) is essentially non-restrictive. For example, let there exist a‘heaviest’ tail

V(j∗)(t) = maxj

V(j)(t).

Page 541: Asymptotic analysis of random walks

510 Random walks with dependent jumps

Then, clearly, one can put V ∗(t) := V(j∗)(t) and the first relation in (14.2.4) willalways hold with p = πj∗ .

It is not hard to see that, under the above conditions, the averaged majorantsVπ , Wπ will also be r.v.f.’s. We will assume that the respective exponents areα ∈ [1, 2) and β ∈ (1, 2), so that the ‘heaviest’ of the tails V(j) and W(j) haveinfinite second moments.

Consider the random walk {Sn : n � 1} defined in (14.1.5) when the jumpsξk(z), z ∈ X , are given by the quantiles Q(z)(ωk); the r.v.’s ω1, ω2, . . . are inde-pendent of each other and of X and are uniformly distributed over [0, 1]. As be-fore, let Sn = maxk�n Sk. We have the following analogue of Theorem 12.1.4.

Theorem 14.2.1. Assume that the random walk (14.1.5), (14.1.4) satisfies theconditions X1 � N < ∞ and [<, <]U (see (14.2.1)–(14.2.4)). Then the follow-ing assertions hold true.

(i) If W ∗(t) < cV ∗(t) then

supx: nV ∗(x)�v

P(Sn � x)nVπ(x)

� 1 + ε(v, n), (14.2.5)

supn,x: nV ∗(x)�v

P(Sn � x)nV ∗(x)

� 1 + ε(v), (14.2.6)

where ε(v, n) → 0 as v → 0, n → ∞ and ε(v) → 0 as v → 0.(ii) If W ∗(t) > cV ∗(t) then the inequalities (14.2.5), (14.2.6) remain true for

all n and x such that

nW ∗( x

lnx

)� c1 < ∞. (14.2.7)

There will also be analogues of Corollaries 12.1.5 and 12.1.6.Note that, unlike (14.2.6), the inequality (14.2.5) will prove to be asymptoti-

cally exact under broad conditions. In this inequality, however, it is assumed thatn → ∞ (in contrast with (14.2.6) and the assertion of Theorem 12.1.4).

Proof of Theorem 14.2.1. If we fix the trajectory X(n) = {X1, . . . , Xn} then thesequence ξ1(X1), . . . , ξn(Xn) will consist of independent r.v.’s, with distributionsF(X1), . . . ,F(Xn) respectively, that satisfy the ‘individual’ conditions [<, <]Uof § 12.1. For a fixed j, the frequency of occurrence of the event {Xk = j} in thetrajectory X(n) will, for large values of n, be close to nπj . Hence the majorant ofthe averaged tail for the sequence ξ1(X1), . . . , ξn(Xn) will, in a sense, be closeto

Vπ(t) =∞∑

j=1

πjV(j)(t).

More precisely, for a given ε > 0 we can split the set of trajectories X(n) into

Page 542: Asymptotic analysis of random walks

14.2 Martingales on countable Markov chains: infinite variances 511

two parts X n1 and X n

2 , where Xn1 is defined as the collection of (z1, . . . , zn) such

that

1n

n∑k=1

V(zk)(t) � Vπ(t)(1 + ε). (14.2.8)

The set Xn2 comprises all the other trajectories. By virtue of the ergodic theorem

and (14.2.1)–(14.2.4), for any fixed ε > 0 we clearly have P(X(n) ∈ Xn1 ) → 1

as n → ∞. Therefore, this relation will still be true when ε = εn → 0 slowlyenough as n → ∞.

Denote by FXn the σ-algebra generated by the trajectories X(n). Then

P(Sn � x) = E[P(Sn � x

∣∣FXn

); X(n) ∈ Xn

1

]+ E

[P(Sn � x

∣∣FXn

); X(n) ∈ X n

2

]. (14.2.9)

In each term on the right-hand side, to compute the probability P(Sn � x|FXn )

we can use Theorem 12.1.4, of which all the conditions are satisfied. For thefirst term, it is obvious that condition [<, <]U is met. Condition [N] for themajorant Vπ(t)(1 + εn) of the averaged distribution holds because Vπ is a fixedr.v.f., independent of n, whereas the factor 1+εn tends to 1 as n → ∞ and affectsnothing. (In this case, conditions [<, <] and [UR] of Chapter 12 will also be metfor the averaged distribution.)

For the second term (for the set Xn2 ), one should use the majorants V ∗, W ∗,

which also clearly satisfy conditions [U] and [N] since they do not depend on n.Owing to the above, under the conditions of the first assertion of the theorem wehave

supn,x: nVπ(x)�v

P(Sn � x|FXn )

nVπ(x)(1 + εn)� 1 + ε(v)

on the set {X(n) ∈ X n1 }, and

supn,x: nV ∗(x)�v

P(Sn � x|FXn )

nV ∗(x)� 1 + ε(v)

on the set {X(n) ∈ X n2 }, where ε(v) → 0 as v → 0. Hence, by virtue of (14.2.4)

and (14.2.9),

supx: nVπ(x)�v

P(Sn � x)nVπ(x)

�[P(X(n) ∈ X n

1 ) +1p

P(X(n) ∈ Xn2 )](

1 + ε1(v, n)), (14.2.10)

where ε1(v, n) → 0 as v → 0, n → ∞. This proves (14.2.5). The inequal-ity (14.2.6) and the second assertion of the theorem are proved in eactly the sameway.

Page 543: Asymptotic analysis of random walks

512 Random walks with dependent jumps

Next we will obtain lower bounds for the probabilities P(Sn � x). Set V (t) :=max

{V ∗(t),W ∗(t)

}, σ(n) := V (−1)(1/n) and

Fπ,+(t) :=∞∑

k=1

πjF(j),+(t).

Theorem 14.2.2. Let the conditions X1 � N , [<, <]U, y = x + uσ(n) andu → ∞ be satisfied. Then, as n → ∞,

P(Sn � x) � nFπ,+(y)(1 + o(1)). (14.2.11)

If, in addition, x � σ(n) and condition [<, =]U is met then

P(Sn � x) � nVπ(x)(1 + o(1)). (14.2.12)

Proof. We can follow the same path as in the proof of Theorem 14.2.1. Take Xn1

to be the set of trajectories (z1, . . . , zn), for which (cf. (14.2.8))∣∣∣∣∣ 1nVπ(t)

n∑k=1

V(zk)(t) − 1

∣∣∣∣∣ < εn,

∣∣∣∣∣ 1nFπ,+(t)

n∑k=1

F(zk),+(t) − 1

∣∣∣∣∣ < εn,

where εn → 0 as n → ∞ slowly enough that P(X(n) ∈ X n1 ) → 1. Then

P(Sn � x) � E[P(Sn � x

∣∣FXn

); X(n) ∈ X n

1

],

where, owing to Theorem 12.3.1, one has

P(Sn � x

∣∣FXn

)� nFπ,+(y)(1 + o(1))

on the set {X(n) ∈ X n1 }, so that

P(Sn � x) � nFπ,+(y)P(X(n) ∈ Xn1 )(1 + o(1))

= nFπ,+(y)(1 + o(1)).

This proves (14.2.11).If x � σ(n) and condition [<, =]U is met then, for u = o(x/σ(n)), u → ∞,

we obtain y ∼ x, Fπ,+(y) ∼ Vπ(x), which means that (14.2.12) holds true.The theorem is proved.

As in § 12.2, it will be convenient for us to introduce condition [Q] that statesthat at least one of the following two conditions is met:

[Q1] W ∗(t) � cV ∗(t), x → ∞ and nV ∗(x) → 0;

[Q2] x → ∞ and nV( x

lnx

)< c, where V (t) := max

{V ∗(t),W ∗(t)

}.

The next result follows in an obvious way from Theorems 14.2.1 and 14.2.2.

Page 544: Asymptotic analysis of random walks

14.2 Martingales on countable Markov chains: infinite variances 513

Theorem 14.2.3. Let the random walk {Sn} (see (14.1.5)) satisfy the conditionsX1 � N and [<, =]U. Then, under the conditions n → ∞, x � σ(n) and [Q],we have

P(Sn � x) = nVπ(x)(1 + o(1)),

P(Sn � x) = nVπ(x)(1 + o(1)).(14.2.13)

It is seen that the main results of §§ 12.1 and 12.2 can be extended withoutmuch effort to martingales defined on Markov chains. We could also extend theother results of Chapter 12 in exactly the same way. One just needs to keep inmind the following observations.

(1) While considering the problem on the crossing of an arbitrary boundary{g(k)} (an analogue of Theorem 12.2.2) we will have for X1 � N and {g∗(k)}that grow not too rapidly that, by virtue of the ergodic theorem,

E

[E

(n∑

k=1

V(Xk)

(g∗(k)

)∣∣∣∣∣FXn

)]∼

n∑k=1

(g∗(k)

). (14.2.14)

(2) While proving convergence to a stable law, one should make use of thefact that, on a suitable set Xn

1 with P(X(n) ∈ Xn1 ) → 1, the averaged distri-

bution n−1∑n

j=1 F(Xj) will be close to Fπ. We take the function G = Gπ tobe given by Gπ(t) = Fπ,+(t) + Fπ,−(t) and assume its regular variation. Thencondition [Rα,ρ] will have the form

limt→∞

Fπ,+(t)Gπ(t)

= ρ+.

(3) While proving the convergence of finite-dimensional distributions in the in-variance principle (an analogue of Theorem 12.4.5), the set Xn

1 should be some-what narrowed so that the event {X(n) ∈ X n

1 } would also imply the event{X�nt1� � N, . . . , X�ntk� � N

}.

This is needed to prove that the joint distribution of

S�nt1�b(n)

, . . . ,S�ntk�b(n)

converges to the respective finite-dimensional distribution of the stable process.

(4) The proofs of all the limit theorems are based on the fact that, for an initialchain state X1 � N , the distributions, averaged over long time intervals, willhave tails close to Fπ,+, Fπ,−.

(5) While studying transient phenomena, to avoid making the exposition toocomplicated one should avoid introducing the triangular array scheme. We couldassume that all the tails V(j), W(j) and Vπ, Wπ are fixed and then, in the analysisof the sequences Sn(a) = maxk�n(Sk − ak), suppose that only the parameter a

is changing (a → 0).

Page 545: Asymptotic analysis of random walks

514 Random walks with dependent jumps

14.3 Martingales on countable Markov chains. The main results of the

asymptotic analysis in the case of finite variances

As in the previous section, we will deal here with a random walk {Sn} definedon a countable Markov chain X (see (14.1.4), (14.1.5)). Consider the followingconditions:

[ · , <]U([ · , =]U

)We have

d(j) := Var ξ(j) < c < ∞,

and there exist regularly varying majorants V(j), V ∗, such that

F(j),+(t) � V(j)(t) � V ∗(t)(F(j),+(t) = V(j)(t) � V ∗(t)

)and V(j) satisfies the uniformity condition [U] of § 13.1 (see p. 483) with exponentα(j) > 2.

As before, we will also assume that (cf. (14.2.4))

Vπ(t) =∞∑

j=1

πjV(j)(t) � pV ∗(t) (14.3.1)

for some p < 1 (see the remark following (14.2.4)).

Recall that FXn is the σ-algebra generated by X(n). For any X1 � N , we have

ES2n = E

( n∑k=1

ξk(Xk))2

= E[E( n∑

k,m=1

ξk(Xk)ξm(Xm)∣∣∣FX

n

)]

= En∑

k=1

d(Xk) ∼ n

∞∑j=1

πjd(j) =: ndπ.

(14.3.2)

Further, put σ(n) :=√

n lnn, x = sσ(n) and assume, as in § 4.1, that x � √n

(i.e. s2 � (lnn)−1).

Theorem 14.3.1. Let the conditions X1 � N and [ · , <]U be satisfied. Then thefollowing assertions hold true.

(i) For s � 1,

P(Sn � x) � nVπ(x)(1 + ε(s)), (14.3.3)

where ε(s) → 0 as s → ∞.(ii) If s < c then for any fixed h > 1 and all large enough n,

P(Sn � x) � e−x2/2hndπ . (14.3.4)

One can give a closed-form expression for the constant c.

Page 546: Asymptotic analysis of random walks

14.3 Martingales on countable Markov chains: finite variances 515

Proof. The proof is based on Theorem 13.1.2 and the approach used in the proofof Theorem 14.2.1. We will again make use of the relation (14.2.9). For thefirst assertion of the theorem, we will define the set X n

1 in the same way as inthe proof of Theorem 14.2.1: for (z1, . . . , zn) ∈ Xn

1 , one has (14.2.8). Then,repeating the argument from the proof of Theorem 14.2.1, we obtain (14.3.3) byvirtue of Corollary 13.1.4.

For the second assertion, the set X n1 will be defined as a collection of trajecto-

ries (z1, . . . , zn) for which

1n

n∑k=1

d(zk) � dπ(1 + εn),

where εn → 0 so slowly that P(X(n) ∈ X n1 ) → 1 as n → ∞. Again repeat-

ing the considerations with separate calculations of P(Sn � x

∣∣FXn

)on the sets

{X(n) ∈ Xn1 } and {X(n) ∈ X n

2 }, and using condition d(j) � c and Corol-lary 13.1.4, we arrive at (14.3.4).

The theorem is proved.

Next we will obtain lower bounds.

Theorem 14.3.2. Let conditions X1 � N and [ · , =]U be satisfied. Then, asn → ∞, for x � √

n we have

P(Sn � x) � nVπ(x)(1 + o(1)).

Proof. Choose the set Xn1 in (14.2.9) so that for X(n) ∈ X n

1 one has∣∣∣∣∣ 1nn∑

k=1

V(Xj)(t) − Vπ(t)

∣∣∣∣∣ � εnVπ(t),

where εn → 0 slowly enough that P(X(n) ∈ X n1 ) → 1 as n → ∞. Then

P(Sn � x) � E[P(Sn � x

∣∣FXn

); X(n) ∈ X n

1

],

where, for y = x + u√

n, the conditional probability on the right-hand side will,by virtue of Theorem 13.1.8, be greater than

nFπ,+(y)(1 − εn)[1 − cu−2 − 1

2nFπ,+(y)(1 + εn)

].

If x � √n, u → ∞, u = o(x/

√n) then y ∼ x, nFπ,+(y) < cnx−2 = o(1) and

P(Sn � x) � nVπ(x)(1 − ε′n)P(X(n) ∈ Xn1 ) = nVπ(x)(1 − ε′′n),

where ε′n, ε′′n → 0 as n → ∞. The theorem is proved.

Theorems 14.3.1, 14.3.2 imply the following result.

Page 547: Asymptotic analysis of random walks

516 Random walks with dependent jumps

Theorem 14.3.3. Let the conditions X1 � N and [ · , =]U be satisfied. Then,as n → ∞, for x � √

n lnn,

P(Sn � x) = nVπ(x)(1 + o(1)),

P(Sn � x) = nVπ(x)(1 + o(1)).

As in the previous section, we see that all the main results of §§ 13.1 and 13.2can be extended without great effort to martingales defined on Markov chains.We could also extend in exactly the same way all the other results of Chapter 13.In this connection, one should bear in mind the observations made at the end ofthe previous section.

Remark 14.3.4. The condition V(j)(t) � V ∗(t) � p−1Vπ(t) is, in a sense, ex-cessive. The tails V(j)(t) could get ‘thicker’ as j increases, at a rate depending onhow fast the πj vanish as j → ∞. In this case, of course, the series

∑πjV(j)(t)

should converge and represent an r.v.f. It is of interest to determine the charac-ter of the former dependence. For example, it would be interesting to find outwhether the assertions of Theorems 14.3.1–14.3.3 remain true when

πj � cj−γ , γ > 1, V(j)(t) � min{1, jβV1(t)}, β < γ − 1.

14.4 Arbitrary random walks on countable Markov chains

In this section, we will deal with the random walks described in §§ 14.2 and 14.3but without assuming the martingale property. That is, we will consider randomwalks for which the function

a(z) := Eξ(z)

does not need to be identically equal to zero.

14.4.1 The case of infinite variances

First we consider the case where the random walk (14.1.5) satisfies conditions(14.2.1)–(14.2.4) with α ∈ [1, 2), β ∈ (1, 2) and the condition

aπ :=∑

πja(j) = 0 (14.4.1)

(when studying the distribution of Sn the last condition does not amount to arestriction of generality). Put

Sn = An + S0n, An :=

n∑k=1

a(Xk), S0n := Sn − An. (14.4.2)

The sequence {S0n} forms a martingale and could be studied as in § 14.2. It re-

mains to estimate the sequence {An}. The most natural way of doing this is to

Page 548: Asymptotic analysis of random walks

14.4 Arbitrary random walks on countable Markov chains 517

split the trajectory of the chain X into cycles by repeated visits to the initial stateX1, which is assumed to be fixed. Let

τ := min{k > 1 : Xk = X1} − 1 and ζ := Aτ − A1

be the length of such a cycle and the increment of the sequence {An} on onecycle respectively. It is not hard to see that Eζ = aπEτ = aπ/πX1 and therefore,by (14.4.1),

Eζ = 0. (14.4.3)

To simplify the subsequent considerations, we will assume that the absolute valueof the function a(z) is bounded; without loss of generality, one can stipulate that

|a(z)| < 1. (14.4.4)

Then clearly |ζ| � τ and ζ � τ , where

ζ := Aτ − A1 = maxk�τ

Ak − A1.

This means that the crucial part of what follows will be obtaining bounds for thetail P(τ > t).

Upper bounds for P(Sn � x)

Assume that there exists an r.v.f. Vτ (t) such that

P(τ > t) � Vτ (t) = t−γLτ (t), γ > 1. (14.4.5)

For finite Markov chains, the probability P(τ > t) decays exponentially fastas t → ∞, and so condition (14.4.5) is satisfied for any γ > 0. If the chain iscountable and the increments ϑ(x) := X2 −X1 of the chain, given X1 = z, havethe following properties: there exists an r.v. ϑ such that

(1) ϑ(z)d� ϑ for large enough z,

(2) Eϑ < 0,(3) P(ϑ > t) � Vϑ(t), where Vϑ(t) is an r.v.f.,

then it is not hard to demonstrate that

P(τ > t) � cVϑ(t)

(see [50]; the easiest way to show this is to use Theorem 3, § 43 of [50]).Now we can state an extension of Theorem 14.2.1 to the case a(z) �≡ 0.

Theorem 14.4.1. Assume that the random walk (14.1.5) satisfies the conditionsX1 � N < ∞ and [<, <]U (see (14.2.1)–(14.2.4)) and that the relations (14.4.1),(14.4.4) and (14.4.5) hold true. Moreover, let

Vτ (t) = o(V ∗(t)

)as t → ∞. (14.4.6)

Page 549: Asymptotic analysis of random walks

518 Random walks with dependent jumps

Then the assertions (14.2.5)–(14.2.7) of Theorem 14.2.1 remain true for all n

and x such that, for any fixed v > 0,

e−vn � nV ∗(x), nV ∗(x) → 0. (14.4.7)

Observe that condition (14.4.6) necessarily implies the inequality γ � α. Con-dition (14.4.7) means that the deviations x are ‘exponentially’ bounded fromabove.

Proof of Theorem 14.4.1. Owing to the representation (14.4.2), we have

Sn � An + S0n, where An := max

k�nAk, S 0

n := maxk�n

S0k ,

so that, for any δ ∈ (0, 1),

P(Sn � x) � P(An � δx) + P(S 0

n � (1 − δ)x).

It suffices to show that

P(An � x) = o(nV ∗(x)

), (14.4.8)

since if (14.4.8) holds and δ = δ(x) → 0 slowly enough as x → ∞, we will alsohave

P(An � δx) = o(nV ∗(x)

)(see Theorem 14.2.1). Let τ1, τ2, . . . be independent copies of the r.v. τ and letaτ := Eτ . Set Tk :=

∑kj=1 τj and denote by

ητ (n) := min{k : Tk > n}the minimum number of cycles ‘covering’ the time interval [1, n]. It follows fromLemma 16.2.6 that, for n+ := (n + un)/aτ , u � 1/2, one has

P(ητ (n) > n+) �{

e−cu2n if Eτ2 < ∞,

e−cuh(u)n if (14.4.5) holds for a γ ∈ (1, 2),(14.4.9)

where h(u) = uz1/(γ−1)Lh(u) is the inverse of u(λ) := λ−1Vτ (λ−1).Further,

P(An � x) � P(ητ (n) > n+) + P(An � x; ητ (n) � n+

), (14.4.10)

where, for any fixed u > 0, owing to (14.4.7) one has P(ητ (n) > n+) � nV ∗(x)when nV ∗(x) → 0 (the right-hand side of (14.4.9) decays ‘exponentially fast’).For the second term on the right-hand side of (14.4.10) we have the inequality

P(An � x; ητ (n) � n+

)� P

(maxk�n+

ζk � x/2)

+ P(1 + Zn+ � x/2

),

(14.4.11)

Page 550: Asymptotic analysis of random walks

14.4 Arbitrary random walks on countable Markov chains 519

where the ζk are independent copies of the r.v. ζ and

Zn :=n∑

k=1

ζk, Zn := maxk�n

Zk.

Clearly,

P(

maxk�n+

ζk � x/2)

� n+Vτ (x/2) = o(nV ∗(x)

),

and, using (14.4.3)–(14.4.6), we obtain

P(Zn+ � x/2 − 1) � n+Vτ (x/2)(1 + o(1)) = o(nV ∗(x)

)as nV ∗(x) → 0. This means that (14.4.8) holds true.

The theorem is proved.

Lower bounds for P(Sn � x)

As above, by δ > 0 we will understand a function δ = δ(x) → 0 slowly enoughas x → ∞. The choice of δ will be made in the proof of Theorem 14.4.2 below.

Theorem 14.4.2. Assume that the conditions of Theorem 14.2.2 are met for therandom walk (14.1.5) and, moreover, that (14.4.1) and (14.4.4)–(14.4.7) holdtrue. Then, for y = (1 + δ)x + uσ(n), n → ∞, u → ∞, we have

P(Sn � x) � nFπ,+(y)(1 + o(1)) + o(nV ∗(x)

). (14.4.12)

If, in addition, x � σ(n) and condition [<, =]U is satisfied then

P(Sn � x) � nVπ(x)(1 + o(1)). (14.4.13)

Proof. Owing to the representation (14.4.2),

P(Sn � x) � P(S0n � (1 + δ)x, An � −δx)

� P(S0n � (1 + δ)x) − P(An < −δx), (14.4.14)

where An := mink�n Ak. An upper bound for P(An < −x) can be obtainedin exactly the same way as the bound for P(An � δx) above (see (14.4.9)–(14.4.11)). Therefore we obtain (cf. (14.4.8))

P(An < −x) = o(nV ∗(x)

).

Moreover, if δ(x) → 0 slowly enough then, clearly, we will also have

P(An < −δx) = o(nV ∗(x)

).

A lower bound for the first term on the right-hand side of (14.4.14) can be obtainedfrom Theorem 14.2.2. This proves (14.4.12). The assertion (14.4.13) followsfrom (14.4.12) in the same way as (14.2.12) followed from (14.2.11).

The theorem is proved.

Page 551: Asymptotic analysis of random walks

520 Random walks with dependent jumps

Exact asymptotics

The following extension of Theorem 14.2.3 to the case a(z) �≡ 0 holds true.

Theorem 14.4.3. Assume that the random walk (14.1.5) satisfies the conditionsX1 � N and [<, =] and, moreover, that the relations (14.4.1), (14.4.4)–(14.4.7)hold true. If n → ∞ and the conditions x � σ(n) and [Q] are met then (14.2.13)holds true.

Proof. The proof follows in an obvious way from Theorems 14.4.1 and 14.4.2.

Remark 14.4.4. If Vτ (t) � V ∗(t) as t → ∞ then it may happen that the de-cisive role in determining the asymptotics of P(Sn � x) will be played by the‘deterministic’ walk {An} on the Markov chain X , i.e. the asymptotics will bedetermined by that of P(An � x), which can also be studied with the help ofcycles.

In the case aπ �= 0, one should use the asymptotic linearity Ak ∼ aπk (withprobability close to 1) and the asymptotics (14.2.14) for the probability of thecrossing of an arbitrary boundary by a martingale walk.

14.4.2 The case of finite variances

Let all the assumptions made at the beginning of § 14.3 hold true. As at the endof the previous subsection, assume that a(z) �≡ 0 and that conditions (14.4.1) and(14.4.4) are satisfied. Then the relation (14.3.2) should be replaced by

E(S0n)2 ∼ ndπ, dπ :=

∞∑j=1

πjd(j).

Concerning the cycle length τ, we will assume that Eτ2 < ∞ and that (14.4.5)holds for a γ � α > 2. Then, repeating the argument of the previous subsectionand using Theorem 14.3.1, we will obtain, in the notation of § 14.3, the followingassertion.

Theorem 14.4.5. Let the conditions X1 � N , [<, <]U, (14.4.1), (14.4.4) and(14.4.5) be satisfied. Moreover, let

Vτ (t) � V ∗(t) as t → ∞ (14.4.15)

e−vn � nV ∗(x) as nV ∗(x) → 0, for any fixed v > 0. (14.4.16)

Then, for s � 1,

P(Sn � x) � nVπ(x)(1 + ε(s)),

where ε(s) → 0 as s → ∞.

Page 552: Asymptotic analysis of random walks

14.4 Arbitrary random walks on countable Markov chains 521

Concerning the lower bounds, the following extension of Theorem 14.3.2 tothe case when a(z) �≡ 0 holds true.

Theorem 14.4.6. Assume that the following conditions are satisfied: X1 � N ,[ · , =]U, (14.4.1), (14.4.4), (14.4.5), (14.4.15) and (14.4.16). Then, for x � √

n,

P(Sn � x) � nVπ(x)(1 + o(1)).

Proof. The proof of the theorem repeats the argument of § 14.4.1 concerning thederivation of the lower bounds.

Theorems 14.4.5 and 14.4.6 imply the following result.

Theorem 14.4.7. Let the conditions of Theorem 14.4.6 be met. Then, as n → ∞,

for x � √n lnn one has

P(Sn � x) = nVπ(x)(1 + o(1)),

P(Sn � x) = nVπ(x)(1 + o(1)).

Remark 14.4.4 remains applicable to the cases aπ �= 0 and Vτ (t) � V ∗(t).

Page 553: Asymptotic analysis of random walks

15

Extension of the results of Chapters 2–5 tocontinuous-time random processes with

independent increments

15.1 Introduction

The objective of this and the next chapter is to extend the main results of Chap-ters 2–5 to the following two classes of continuous-time processes:

(1) processes with independent increments;(2) compound (generalized) renewal processes.

The two classes clearly have a substantial intersection: the collection of com-pound Poisson processes.

For the first class, we will consider two approaches to studying large deviationprobabilities.

The first approach uses rather crude estimates of the closeness of the trajec-tories of discrete- and continuous-time processes to reduce the problem to thealready available results of Chapters 2–5 for discrete-time processes. This ap-proach, however, does not enable one to extend to the continuous-time case anyresults related to asymptotic expansions.

The second approach consists in constructing, in the continuous-time case, acomplete analogue of the whole procedure used to derive the desired asymptoticsin Chapters 2–5. In this way, one can obtain analogues of all the results fromthose chapters.

The distribution of any right-continuous process with homogeneous indepen-dent increments {S(t)} is completely specified by the ch.f.

EeiλS(t) = etψ(λ),

where ψ(λ) admits a Levy–Khinchin representation, which we write in the fol-lowing form:

ψ(λ) = iλq − λ2d

2

+∫

|u|<1

(eiλu − 1 − iλu)G[0](du) +∫

|u|�1

(eiλu − 1)G(du). (15.1.1)

522

Page 554: Asymptotic analysis of random walks

15.1 Introduction 523

As in the case of the standard representation for ψ(λ) (using a single integral, seee.g. [129, 122]), the measures G[0], G will be referred to as spectral measures.For these measures, we have

G[0]({0}) = G[0](R \ (−1, 1)) = 0,

∫u2G[0](du) < ∞,

Θ := G(R) = G(R \ (−1, 1)) < ∞.

The constant d is the variance of the Wiener component of the process.There are two ways of describing ‘local’ properties of the distributions of the

process {S(t)}, either via the distributions F of the increments

ξd= ξ1 := S(1)⊂=F

(we will use this approach in § 15.2) or via the tails G±(t) of the spectral mea-sure G (to be used in § 15.3). There is a direct relationship between the distribu-tion of ξ and the tails of the spectral measure, which is perhaps easiest to graspusing the following argument.

The function ψ(λ) is decomposed in (15.1.1) into four summands, which cor-respond to the representation

ξ = ξ(1) + ξ(2) + ξ(3) + ξ(4), (15.1.2)

where the ξ(i) are independent r.v.’s: ξ(1) ≡ q, ξ(2) has the normal distributionwith parameters (0, d), ξ(3) corresponds to the third term on the right-hand sideof (15.1.1) and has an entire ch.f., so that the tails of ξ(3) decay faster than anyexponential function. The above means that the sum ξ(1) + ξ(2) + ξ(3) of the firstthree terms on the right-hand side of (15.1.2) follows a distribution whose tailsdecay at infinity faster than any exponential function.

The term ξ(4) in (15.1.2) corresponds to the fourth term on the right-hand sideof (15.1.1) and has a ch.f. eψ(4)(λ), with

ψ(4)(λ) =∫

(eiλu − 1)G(du) = Θ∫

(eiλu − 1)G[1](du), (15.1.3)

where Θ = G(R\(−1, 1)

)= G(R) and the probability measure G[1] := Θ−1G

is concentrated on R \ (−1, 1).

The ch.f. (15.1.3) corresponds to a compound Poisson process with jumps following the distribution G[1] and jump intensity Θ, so that the distribution F(4) of ξ^{(4)} will have the tail

F(4),+(t) = e^{−Θ} ∑_{k=1}^{∞} (Θ^k/k!) G^{k∗}_{[1],+}(t),    (15.1.4)

where G^{k∗}_{[1],+} is the tail of the kth convolution G^{k∗}_{[1]} of the distribution G[1] with itself. Since for subexponential distributions G[1] we have G^{k∗}_{[1],+}(t) ∼ k G[1],+(t)


as t → ∞, we obtain from (15.1.4) that, in this case,

F(4),+(t) ∼ G[1],+(t) e^{−Θ} ∑_{k=1}^{∞} kΘ^k/k! = Θ G[1],+(t) = G+(t),    (15.1.5)

so that the tails F+(t) ∼ F(4),+(t) and G+(t) are equivalent.
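The middle equality in (15.1.5) is just the elementary identity

e^{−Θ} ∑_{k⩾1} k Θ^k/k! = Θ e^{−Θ} ∑_{k⩾1} Θ^{k−1}/(k − 1)! = Θ e^{−Θ} e^{Θ} = Θ.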

Now, as we have already said, the tails of ξ^{(1)} + ξ^{(2)} + ξ^{(3)} decay faster than any exponential function and therefore, when considering measures F or G with ‘heavy’ (l.c. or subexponential) tails, they do not affect the tails of F or G in any way (see Chapter 11). Hence, in the case of ‘heavy’ tails (of G or F), we have

F+(t) ∼ F(4),+(t),

and so for subexponential G[1] (see (15.1.5))

F+(t) ∼ ΘG[1],+(t) = G+(t).

Thus, we have proved the following assertion (see also Theorem A3.22 of [113] and § 8.2.7 of [32]).

Theorem 15.1.1. If the distribution G[1] is subexponential then so is F, and F+(t) ∼ G+(t) as t → ∞.

A converse assertion, stating that F ∈ S implies G[1] ∈ S, so that again F+(t) ∼ G+(t) holds, is true as well ([112]; see the bibliography in [32]). Similar assertions hold true for the classes R (see [110]) and Se.
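As a numerical illustration (ours, not part of the original argument), the tail equivalence in Theorem 15.1.1 can be probed by direct simulation of ξ^{(4)}. The sketch below assumes Pareto jumps with G[1],+(t) = t^{−α}, t ⩾ 1 (a subexponential distribution), and compares the empirical tail of ξ^{(4)} with Θ G[1],+(t); all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, Theta = 1.5, 2.0       # Pareto tail index and jump intensity (illustrative)
n_samples = 10**6

# xi4 = sum of a Poisson(Theta) number of i.i.d. Pareto(alpha) jumps on [1, inf).
counts = rng.poisson(Theta, size=n_samples)
jumps = rng.pareto(alpha, size=counts.sum()) + 1.0   # numpy's pareto is Pareto - 1
owner = np.repeat(np.arange(n_samples), counts)
xi4 = np.bincount(owner, weights=jumps, minlength=n_samples)

for t in [10.0, 30.0, 100.0]:
    empirical = (xi4 >= t).mean()        # F_{(4),+}(t)
    predicted = Theta * t**(-alpha)      # Theta * G_{[1],+}(t), cf. (15.1.5)
    print(f"t={t}: empirical {empirical:.2e}, predicted {predicted:.2e}")
```

The ratio of the two printed columns should approach 1 as t grows, in accordance with (15.1.5); at moderate t the multiple-jump terms of (15.1.4) are still visible.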

Now let the tail F+(t) (and hence also F(4),+(t)) satisfy condition [ · , <]. Then it follows from the inequality

Θ e^{−Θ} G[1],+(t) ⩽ F(4),+(t)

that G[1] also satisfies condition [ · , <]. And, vice versa, if G[1] satisfies [ · , <] with a subexponential majorant VG then, for any fixed ε > 0 and all large enough t,

F+(t) ⩽ (Θ + ε)VG(t),

and so F also satisfies condition [ · , <] (with majorant (Θ + ε)VG). It is evident that the above observations apply equally to the left tails.

Thus, conditions in terms of the tails of ξ can be considered as conditions on the tails G±(t), and vice versa.

It also follows from what was said above about the role of the terms in the sum ξ^{(1)} + ξ^{(2)} + ξ^{(3)} that, in all considerations concerning ‘heavy’ (l.c. or subexponential) tails, one can, without loss of generality, put G[0] ≡ 0 in (15.1.1): this will not affect the behaviour of the tails of F or G.


15.2 The first approach, based on using the closeness of the trajectories of processes in discrete and continuous time

Let {S(u); u ∈ [0, T]} be a separable process with homogeneous independent increments, S(0) = 0. One can assume, without loss of generality, that the trajectories of S(·) belong to the space D(0, T) of functions without discontinuities of the second kind (say, right-continuous). We will show that all the theorems of Chapters 2–5 concerning the asymptotics of large deviation probabilities in boundary-crossing problems (except for asymptotic expansions) will remain true in the continuous-time case under the respective conditions on the increments of the process.

We will need an auxiliary assertion. Let

ξk := S(k) − S(k − 1),    k = 1, 2, . . .

If T is an integer (in most problems this can be assumed without loss of generality) then, in the notation of the previous chapters, we have S(T) = ST. This means that, under the respective conditions on the tails of ξ, all the assertions concerning the asymptotics of P(Sn ⩾ x) carry over to the probabilities P(S(T) ⩾ x) with T = n. As we saw in § 15.1, for processes with independent increments, the regular variation or semiexponentiality of the tail F+(t) = P(ξ ⩾ t) is equivalent to the corresponding regularity property of the spectral function G+(t) in the Lévy–Khinchin representation for the ch.f. of ξ. From this and a representation of the form (15.1.5) for the tail of S(u), it also follows that if F ∈ S or G[1] ∈ S then, for any fixed u,

P(S(u) ⩾ t) ∼ u F+(t)  as t → ∞.    (15.2.1)

Similar assertions hold for the negative tails as well. Moreover, we always have

S(u) →p 0  as u → 0.

This means that the above-noted equivalence of the asymptotics of P(S(T) ⩾ x) and P(ST ⩾ x) (for integer-valued T → ∞) will also hold for non-integer T, because T = ⌊T⌋ + {T} and S(T) = Sn + S({T}), where the terms on the right-hand side are independent and n = ⌊T⌋. (As usual, ⌊T⌋ and {T} denote respectively the integral and fractional parts of T.) Under these conditions we can use the results of Chapter 11 on the asymptotics of large deviation probabilities for sums of r.v.’s of two types (for classes R and Se).

As before, let σ(T) = V^{(−1)}(1/T) and σW(T) = W^{(−1)}(1/T). We have obtained the following assertion.

Theorem 15.2.1.

(i) Let the conditions [Rα,ρ], α ∈ (0, 1), ρ > −1 and TV(x) → 0 be satisfied for the distribution F (or G[1]). Then

P(S(T) ⩾ x) = TV(x)(1 + o(1)).    (15.2.2)


(ii) Let α ∈ (1, 2), ES(1) = 0 and the conditions [<, =] and W(t) ⩽ cV(t) for some c < ∞, TV(x) → 0 be satisfied. Then (15.2.2) holds true.

If the condition W(t) ⩽ cV(t) does not hold then (15.2.2) will still be true provided that

T W(x/|ln TV(x)|) → 0.

(iii) If α > 2, ES(1) = 0, ES²(1) < ∞ and x ≫ √(T ln T) then (15.2.2) holds true.

(iv) If the distribution of S(1) (or the spectral function of the process) is semiexponential then the assertions of Theorem 5.5.1 (i), (ii) remain true for S(T) (provided that in them we replace n by T).

In the above assertions, the remainder terms o(1) are uniform in the same sense as in the theorems of Chapters 2–5.

One can also obtain in a similar way analogues of other assertions from Chapters 2–5 for continuous-time processes with independent increments. To illustrate this statement, we will demonstrate that the assertions of Chapters 2–5 concerning boundary-crossing problems will also remain true in the present case.

As before, consider the class Gx,T of boundaries g(t) such that

inf_{t⩽T} g(t) = x.    (15.2.3)

We will narrow this set to a smaller class Gx,T,ε by adding the following requirement to condition (15.2.3):

For some s.v.f. ε = ε(x) → 0 as x → ∞,

g(k − 1) < g(k)(1 + ε),    inf_{t∈[k−1,k]} g(t) > g(k)(1 − ε),    k = 1, . . . , ⌊T⌋.

As before (see e.g. § 6.5), we can concentrate on boundaries g(·) of one of the following two types:

(1) g(t) = x + g_t, where g_t does not depend on x;
(2) g(t) = x f(t/T), t ⩽ T, where f(·) depends neither on x nor on T and inf_{u∈[0,1]} f(u) = 1.

In the first case g(·) ∈ Gx,T,ε for all non-decreasing functions g_t that grow more slowly than any exponential function. In the second, g(·) ∈ Gx,T,ε provided that f is continuous.

Theorem 15.2.2. The following assertions hold true.

(i) Let the conditions [=, =] with W, V ∈ R and β < min{1, α} be satisfied for the distribution F (or G[1]). Then, for all T, as x → ∞,

P(S̄(T) ⩾ x) ∼ ∫_0^T E V(x − ζσW(u)) du,    (15.2.4)

where, as before, σW(u) = W^{(−1)}(1/u) and ζ follows the stable law Fβ,−1. All the corollaries (2.7.2), (2.7.3), (2.7.5) of (2.7.1) remain true. In particular, for S = S(∞),

P(S ⩾ x) ∼ (V(x)/W(x)) C(β, α, ∞),

where the function C is defined in (2.7.4).

(ii) Let α ∈ (1, 2), Eξ = 0 and the conditions [Rα,ρ], ρ > −1 and TV(x) → 0 be satisfied. Then

P(S̄(T) ⩾ x) = TV(x)(1 + o(1)).

(iii) Let α ∈ (1, 2), Eξ = 0 and conditions [<, =] (W, V ∈ R), W(t) ⩽ cV(t) and TV(x) → 0 be satisfied. Then, for a g ∈ Gx,T,ε and the event

GT := { sup_{u⩽T} (S(u) − g(u)) ⩾ 0 },    (15.2.5)

we have

P(GT) = (1 + o(1)) ∫_0^T V(g∗(u)) du,    (15.2.6)

where g∗(u) = inf_{u⩽v⩽T} g(v). If the condition W(t) ⩽ cV(t) is not met then (15.2.6) will still be true provided that

T W(x/|ln TV(x)|) → 0.

(iv) Let α > 2, Eξ = 0, Eξ² < ∞ and the conditions [ · , =] (V ∈ R) and g ∈ Gx,T,ε be satisfied. Then, as x → ∞,

P(GT) = (1 + o(1)) ∫_0^T V(g∗(u)) du + o(T²V²(x))

when T ⩽ cx²/ln x for a suitable c > 0.

(v) If the distribution of ξ (or the spectral function) is semiexponential and the conditions of Theorem 5.5.1 are satisfied (with n replaced in them by T) then the assertion of Theorem 5.5.1(ii) (provided that in it we replace n by T and P(Sn ⩾ x) by P(S(T) ⩾ x)) will also hold true for P(S̄(T) ⩾ x).


(vi) Let α > 1, Eξ = 0 and condition [ · , =] (V ∈ R) be satisfied. Put

S(T, a) := sup_{t⩽T} (S(t) − at).

Then, as x → ∞, we have for any T the relation

P(S(T, a) ⩾ x) = (1 + o(1)) (1/a) ∫_x^{x+aT} V(u) du.

In the semiexponential case, the assertion of Theorem 5.6.1(i) (with obvious changes in notation) remains true.

In all the assertions of the theorem, the remainder terms o(1) are uniform in the same sense as in the theorems of Chapters 2–5.

As was the case for random walks in discrete time, one can remove any restrictions on the growth of T from the assertions of parts (iii)–(v) (as in part (vi) of the theorem), provided that g(t) increases fast enough (for example, if g(t) > √(t ln t) in the case Eξ² < ∞).

To prove Theorem 15.2.2 and to be able to reduce our problems to the corresponding ones for discrete-time random walks, we will need two auxiliary assertions. The first one is an analogue of the well-known Kolmogorov inequality, and refers to the distribution of S̄(T) = max_{u⩽T} S(u).

Let

σ(T) :=
  W^{(−1)}(1/T)  if α ∈ (0, 1),
  max{ W^{(−1)}(1/T), V^{(−1)}(1/T) }  if α ∈ (1, 2), Eξ = 0,
  √T  if Eξ² < ∞, Eξ = 0.

Lemma 15.2.3. Assume that either the conditions [<, <] with α ∈ (0, 1) ∪ (1, 2) and Eξ = 0 for α > 1, or the conditions Eξ² < ∞ and Eξ = 0, are satisfied for the distribution F (or G[1]). Then, for any ε ∈ (0, 1), there exists an M = M(ε) such that, for all T, x ⩾ σ(T),

P(S̄(T) ⩾ x) ⩽ (1 − ε)^{−1} P(S(T) ⩾ x − Mσ(T)).    (15.2.7)

If T is fixed (for example, T = 1) then the conditions [<, <] and Eξ = 0 become superfluous, and in (15.2.7) one can put σ(T) = 1.

A similar assertion holds in the semiexponential case as well.

Remark 15.2.4. The second assertion of the lemma, together with the fact that S̄(T) ⩾ S(T), implies that the distribution tails of S(1) = ξ and S̄(1) behave in the same way: they simultaneously satisfy conditions [ · , <] or [ · , =] with a common function V.


Remark 15.2.5. Note that the assertion of Theorem 15.2.2 that concerns the asymptotics of P(S̄(T) ⩾ x) follows immediately from Lemma 15.2.3 and requires no additional considerations. Indeed, assuming for simplicity that T is an integer, we have

P(S̄(T) ⩾ x) ⩾ P(ST ⩾ x) ∼ TV(x),

P(S̄(T) ⩾ x) ⩽ (1 − ε)^{−1} P(ST ⩾ x − Mσ(T)) ∼ TV(x)

as ε → 0, Mσ(T) = o(x).

Proof of Lemma 15.2.3. Let

η(x) := inf{ t > 0 : S(t) ⩾ x }.

Then, owing to the strong Markov property of the process {S(t)}, we have

P(S(T) ⩾ x − Mσ(T)) ⩾ E[ P(S(T) ⩾ x − Mσ(T) | η(x)); η(x) ⩽ T ]
⩾ E[ P(S(T − η(x)) ⩾ −Mσ(T) | η(x)); η(x) ⩽ T ].    (15.2.8)

It follows from the bounds of Chapters 2 and 3 (see Corollaries 2.2.4 and 3.1.2) in the case α < 2 and from the Chebyshev inequality in the case Eξ² < ∞ that, under the conditions stated in the lemma,

P(S(T − u) < −Mσ(T)) ⩽ P(S(T − u) < −Mσ(T − u)) → 0

as M → ∞. Hence, for any given ε > 0, there exists an M = M(ε) such that P(S(T − u) > −Mσ(T)) ⩾ 1 − ε for all u ∈ [0, T]. Owing to (15.2.8) this means that

P(S(T) ⩾ x − Mσ(T)) ⩾ (1 − ε) P(η(x) ⩽ T).

The last inequality is obviously equivalent to the assertion of the lemma, since {η(x) ⩽ T} = {S̄(T) ⩾ x}. The proof in the case of a fixed T proceeds in exactly the same way and is based on the observation that, for any fixed u,

P(S(u) ⩾ −M) → 1

as M → ∞. The lemma is proved.

Lemma 15.2.6. Let the conditions of Lemma 15.2.3 be met. Then, for all large enough z,

P(S̄(1) ⩾ z, S(1) < z/2) ⩽ 2V(z/2)W(z/2).

Proof. Set A := {S̄(1) ⩾ z, S(1) < z/2}. Again making use of the strong Markov property of the process {S(t)}, we obtain from Lemma 15.2.3 that, for


all large enough z,

P(A) = E[ P(A | η(z)); η(z) < 1 ]
⩽ E[ P(S(1 − η(z)) < −z/2 | η(z)); η(z) < 1 ]
⩽ W(z/2) P(η(z) < 1) ⩽ (3/2) W(z/2) P(S(1) > z/2) ⩽ 2 W(z/2) V(z/2),

where we have used the almost obvious relation (see (15.2.1))

sup_{u⩽1} P(S(u) < −x) ⩽ (1 + o(1)) P(S(1) < −x)

as x → ∞. The lemma is proved.

Proof of Theorem 15.2.2. The proofs of the assertions of the theorem are all quite similar to each other. First we will consider the asymptotics of P(GT), where the event GT is defined in (15.2.5). For simplicity, let T be an integer (for a remark on how to pass to the case of a general T, see the end of the proof below).

Set

G^∂_{T,g} := { max_{k⩽T} (Sk − g(k)) ⩾ 0 }

and consider the boundary

gε(k) := (1 − 4ε)g(k),

where ε = ε(x) → 0 is the s.v.f. at infinity that appeared in the definition of the class Gx,T,ε, so that εg(k) → ∞. Then

P(GT) ⩾ P(G^∂_{T,g}),    (15.2.9)

P(GT) ⩽ P(G^∂_{T,gε}) + P(GT \ G^∂_{T,gε}).    (15.2.10)

We already know the asymptotics of the right-hand side of (15.2.9) and that of the first term on the right-hand side of (15.2.10): they coincide with the asymptotics of P(G^∂_{T,g}), which we studied in Chapters 2–5 and found to be given by

∑_{k=1}^{T} V(g∗(k)) (1 + o(1)).

We will show that the second term on the right-hand side of (15.2.10) is negligibly small compared with the first.

We have

GT \ G^∂_{T,gε} ⊂ ⋃_{k=1}^{T} Hk,

where

Hk = { sup_{u∈[k−1,k)} [S(u) − g(u)] ⩾ 0, S(k − 1) < gε(k − 1), S(k) < gε(k) },


so that

P(GT \ G^∂_{T,gε}) ⩽ ∑_{k=1}^{T} P(Hk).    (15.2.11)

The value of the probability P(Hk) can only increase if we ‘move’ the initial value S(k − 1) of the process {S(u)} on the time interval [k − 1, k] to the point gε(k − 1) = (1 − 4ε)g(k − 1). Then, setting

Sk(u) := S(k − 1 + u) − S(k − 1),    S̄k(1) := sup_{u∈[0,1]} Sk(u),

we will obtain

P(Hk) ⩽ P( sup_{u∈[0,1]} [gε(k − 1) + Sk(u) − g(k − 1 + u)] ⩾ 0, Sk(1) < gε(k) − gε(k − 1) ).    (15.2.12)

Here, for any u ∈ [0, 1], we have by virtue of the condition g(·) ∈ Gx,T,ε that

gε(k − 1) + Sk(u) − g(k − 1 + u)
< S̄k(1) + gε(k − 1) − (1 − ε)g(k)
< S̄k(1) + (1 + ε)(1 − 4ε)g(k) − (1 − ε)g(k)
< S̄k(1) − 2εg(k)

and

Sk(1) − gε(k) + gε(k − 1)
> Sk(1) − (1 − 4ε)g(k) + (1 − ε)(1 − 4ε)g(k)
> Sk(1) − εg(k).

Hence the event under the probability symbol on the right-hand side of (15.2.12) implies the event

{ S̄k(1) − 2εg(k) ⩾ 0, Sk(1) − εg(k) < 0 } = { S̄k(1) ⩾ 2εg(k), Sk(1) < εg(k) }.

Applying Lemma 15.2.6 with z = 2εg(k), we obtain

P(Hk) ⩽ 2 V(εg(k)) W(εg(k)).

Here, for any δ > 0 and all large enough x,

V(εg(k)) W(εg(k)) ⩽ ε^{−α−β−δ} V(g(k)) W(g(k)) ⩽ ε^{−α−β−δ} V(g(k)) W(x),

where, evidently,

ε^{−α−β−δ} W(x) → 0


since ε → 0 is an s.v.f. Therefore

P(Hk) = o(V(g(k))) = o(V(g∗(k)))

and, by virtue of (15.2.10) and (15.2.11),

P(GT) = (1 + o(1)) ∑_{k=1}^{T} V(g∗(k)) ∼ ∫_0^T V(g∗(u)) du.

Thus the first four assertions of Theorem 15.2.2 are proved.

Assertion (v) can be proved in exactly the same way.

For part (vi), if T is not an integer then, in the above considerations, one should put n = ⌊T⌋, v = {T} and add the term

P( sup_{u∈[n,T]} [S(u) − g(u)] ⩾ 0, S(n) < (1 − 4ε)g(n), S(T) < (1 − 4ε)g(T) )

to the sum ∑_{k=1}^{⌊T⌋} P(Hk). This term can then be dealt with in exactly the same way.

The theorem is proved.

15.3 The construction of a full analogue of the asymptotic analysis from Chapters 2–5 for random processes with independent increments

As we have already noted in § 15.1, an arbitrary continuous-time process {S(t)} with homogeneous independent increments can be represented, in accordance with (15.1.1) and (15.1.2), as a sum of two independent processes:

S(t) = S[0](t) + S[1](t),    (15.3.1)

where S[0](t), like ξ^{(1)} + ξ^{(2)} + ξ^{(3)}, has an entire ch.f., while {S[1](t)} is a compound Poisson process with jumps ζj, |ζj| ⩾ 1. In what follows we will, in turn, represent the component {S[1](t)} as a sum of two independent processes {S<y(t)} and {S⩾y(t)}, whose jumps ζ respectively satisfy the inequalities ζ < y and ζ ⩾ y, y > 1:

S[1](t) = S<y(t) + S⩾y(t).

The compound Poisson process {S⩾y(t)} has the rate

Θy := G([y, ∞)) ≡ G+(y)

and the jump distribution Θy^{−1} G(du), u ⩾ y. Since the presence of the component S[0](t) in (15.3.1) has no effect on the tails of the distributions of interest (it satisfies the Cramér condition), we can get rid of it where we need to (i.e. we can assume the original process {S(t)} to be a compound Poisson process, with a ch.f. of the form (15.1.3)).

Denote by Jy,1 the event that the process {S⩾y(t)} has exactly one jump on the


time interval [0, T] and by Jy,2 the event that the process has at least two jumps on that interval. Then

P(Jy,1) = ΘyT e^{−ΘyT} ∼ G+(y)T

provided that G+(y)T → 0 and, likewise,

P(Jy,2) = 1 − e^{−ΘyT} − ΘyT e^{−ΘyT} ∼ (ΘyT)²/2.    (15.3.2)
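Both relations displayed above follow from the Taylor expansion of the Poisson probabilities for small λ := ΘyT:

λ e^{−λ} = λ + O(λ²),    1 − e^{−λ} − λ e^{−λ} = 1 − (1 + λ)(1 − λ + λ²/2 + O(λ³)) = λ²/2 + O(λ³).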

15.3.1 Upper bounds

Now we will turn to the general scheme for studying the asymptotics of large deviation probabilities in Chapters 2–5. Its first stage consisted in deriving bounds for the distribution of the maxima of partial sums. In the present case, we require bounds for

S̄(T) := sup_{t⩽T} S(t).

They will be based on the following analogue of Lemma 2.1.1. Assume for a moment that ϕ(μ) := E e^{μS(1)} < ∞ for some μ > 0.

Lemma 15.3.1. For any x > 0, μ ⩾ 0, T > 0,

P(S̄(T) ⩾ x) ⩽ e^{−μx} max{ 1, ϕ^T(μ) }.    (15.3.3)

Proof. The proof is similar to that in the discrete-time case. As before, putting η(x) := min{ t > 0 : S(t) ⩾ x }, we have

ϕ^T(μ) = E e^{μS(T)} ⩾ ∫_0^T P(η(x) ∈ dt) E e^{μ(x + S(T−t))}
= e^{μx} ∫_0^T P(η(x) ∈ dt) ϕ^{T−t}(μ) ⩾ e^{μx} P(S̄(T) ⩾ x) min{ 1, ϕ^T(μ) }.

This implies (15.3.3). The lemma is proved.

Applying the lemma to the process {S<y(t)} (whose jumps are bounded from above) and assuming for simplicity that {S(t)} is a compound Poisson process (according to the above convention), we will obtain

P(S̄<y(T) ⩾ x) ⩽ e^{−μx} max{ 1, e^{T r(μ,y)} },

where

r(μ, y) := ∫_{−∞}^{y} (e^{μu} − 1) G(du) = ∫_{−∞}^{y} e^{μu} G(du) − 1 + G+(y)    (15.3.4)


(cf. (2.1.8)). Our task is to bound

RG(μ, y) − 1 := ∫_{−∞}^{y} e^{μu} G(du) − 1.

This coincides with the problem of evaluating the asymptotics of

R(μ, y) − 1 = ∫_{−∞}^{y} e^{μu} F(du) − 1,

which we considered in Chapters 2–5 (see (2.2.7) on p. 87 etc.). We saw that in all the upper bounds for R(μ, y) − 1 in Chapters 2–5 under conditions [ · , <] or [<, <], the right-hand sides included the summands

cV(1/μ) + e^{μy}V(y)    (15.3.5)

(see (2.2.11) and (2.4.6)). Further, observe that the quantity r(μ, y), which we are aiming to bound, differs from RG(μ, y) − 1 only by the term G+(y) (see (15.3.4)). The value of μ chosen to optimize the bounds for R(μ, y) in (2.2.11) and (2.4.6) was such that 1/μ ⩽ y. Hence, under condition [ · , <], the additional term G+(y) ∼ F+(y) in (15.3.4) is negligibly small compared with each summand in (15.3.5).

Therefore, all the upper bounds derived in Chapters 2–5 for R(μ, y) can be carried over to the quantity r(μ, y) in (15.3.4). So we have proved the following assertion.

Lemma 15.3.2. All the upper bounds obtained for P = P(S̄n ⩾ x; B(0)) in Chapters 2–5 also hold true for P(S̄<y(T) ⩾ x) (with an obvious change of parameter from n to T). Moreover, conditions [ · , <], [<, <] and [ · , =] could be imposed both on the tails G+(y) of the spectral measure G and on the tails F+(y) ∼ G+(y) of the distribution of ξ = S(1) (this applies equally to the negative tails).

Note that, if we derived the upper bounds while retaining the term S[0](t) in (15.3.1) then, since its ch.f. is analytic, the only difference would be that on the right-hand side of the bounds we would have (along with the terms (15.3.5)) a summand of the form cμ². In the case α < 2, this will be negligibly small (as μ → 0) compared with the other terms, whereas in the case Eξ² < ∞, α ⩾ 2, it will join a summand of the same form.

Further, for the events

Jy,0 := { S⩾y(·) has no jumps in [0, T] }  and  GT := { S̄(T) ⩾ x }

we have the relations P(J̄y,0) = 1 − e^{−G+(y)T} ⩽ G+(y)T (J̄y,0 being the complement of Jy,0) and

P(S̄(T) ⩾ x) ⩽ P(S̄(T) ⩾ x; Jy,0) + P(J̄y,0) ⩽ P(S̄<y(T) ⩾ x) + G+(y)T.


The last inequality, together with the results of Chapters 2–5 and Lemma 15.3.2, implies the next theorem.

Theorem 15.3.3. For the probabilities P(S̄(T) ⩾ x), all the upper bounds derived in Chapters 2–5 for P(S̄n ⩾ x) (see Theorems 2.2.1, 3.1.1 and 4.1.2 and their corollaries) hold true with n replaced by T and under conditions [<, <], [ · , =] etc. (in the above-listed theorems) imposed on the tails of the distributions G[1] (or F).

Similarly, the inequalities for P(Sn(a) ⩾ x) and P(Sn(a) ⩾ x; B(v)) of Chapters 2–5 can be carried over to the case of continuous-time processes; this leads to corresponding inequalities for P(S(T, a) ⩾ x) and P(S<y,v(T, a) ⩾ x), where

S(T, a) := sup_{t⩽T} (S(t) − at),    S<y,v(T, a) := sup_{t⩽T} (S<y,v(t) − at)

and {S<y,v(t)} is a time-inhomogeneous process with independent increments and a ‘local spectral measure’ at time t that is equal to G(du) for u < y + vg(t) and is equal to zero for all other u values (the process S<y,v(·) is obtained from S(·) as follows: if a jump of size ⩾ y + vg(t) occurs in the process S(·) at time t then it will be discarded). Here g(t) ≡ t in the case Eξ = 0.

In a similar way, we introduce a time-inhomogeneous compound Poisson process S⩾y,v(·), ‘complementary’ to S<y,v(·):

S(t) = S<y,v(t) + S⩾y,v(t),

where the processes on the right-hand side are independent of each other. The process S⩾y,v(·) has a local spectral measure at time t that is equal to G(du) for u ⩾ y + vg(t) and equal to zero for all other u values. Inequalities for P(S<y,v(T, a) ⩾ x) are established in exactly the same way as in Chapters 2–5, but now we do that based on Lemma 15.3.2 and Theorem 15.3.3. Inequalities for P(S(T, a) ⩾ x) follow from the relations

P(S(T, a) ⩾ x) ⩽ P(S(T, a) ⩾ x; Jy,v,0) + P(J̄y,v,0),    (15.3.6)

where

Jy,v,0 := { S⩾y,v(·) has no jumps on [0, T] }

and

P(J̄y,v,0) ⩽ ∫_0^T G+(y + vg(t)) dt.


Returning to (15.3.6), we obtain the inequality

P(S(T, a) ⩾ x) ⩽ P(S<y,v(T, a) ⩾ x) + ∫_0^T G+(y + vg(t)) dt.

We have established the following result.

Theorem 15.3.4. For P(S(T, a) ⩾ x) and P(S<y,v(T, a) ⩾ x), all the bounds obtained for the probabilities P(Sn(a) ⩾ x) and P(Sn(a) ⩾ x; B(v)) in Chapters 2–4 (see Theorems 2.2.1, 3.2.1 and 4.2.1 and their corollaries) hold true with n replaced in them by T and with conditions [<, <], [ · , =] etc. (in the above-listed theorems) imposed on the tails of the distributions G[1] (or F).

15.3.2 A lower bound

The main lower bound for P(S(T) ⩾ x) will have the following form (an analogue of Theorem 2.5.1).

Theorem 15.3.5. Let K(t) > 0, t ⩾ 0, be an arbitrary function and let QT(u) := P(S(T)/K(T) < −u). Then, for y = x + uK(T),

P(S(T) ⩾ x) ⩾ Π(T, y) [ 1 − QT(u) − Π(T, y) ],    (15.3.7)

where Π(T, y) := 1 − e^{−TG+(y)} ∼ TG+(y), provided that TG+(y) → 0. (It is also clear that Π(T, y) ⩽ TG+(y).)

Proof. Since

P(S⩾y(T) ⩾ y) ⩾ P(J̄y,0) = Π(T, y)

and, for any two events A and B,

P(AB) ⩾ 1 − P(Ā) − P(B̄),

we have

P(S(T) ⩾ x) = P(S<y(T) + S⩾y(T) ⩾ x)
⩾ P(S<y(T) ⩾ x − y; S⩾y(T) ⩾ y)
= P(S<y(T) ⩾ x − y) P(S⩾y(T) ⩾ y)
⩾ P(S<y(T) ⩾ x − y; Jy,0) Π(T, y)
= P(S(T) ⩾ x − y; Jy,0) Π(T, y)
⩾ [ 1 − P(S(T) < −uK(T)) − Π(T, y) ] Π(T, y).

The theorem is proved.

It follows from the last theorem that

P(S(T) ⩾ x) ⩾ TG+(x)(1 + o(1))    (15.3.8)

provided that the following two conditions are met:

(1) K(T) is such that, as u → ∞,

P(S(T) ⩽ −uK(T)) → 0;    (15.3.9)

(2) x ≫ K(T), TG+(x) → 0.

Since the function K(T) = √T satisfies condition (15.3.9) in the case Eξ = 0, Eξ² < ∞, we obtain the next assertion.

Corollary 15.3.6. If Eξ = 0, Eξ² < ∞ then (15.3.8) holds true for x ≫ √T.

As we have already observed, Theorem 15.3.5 is a complete analogue of Theorem 2.5.1. The same applies to the proof of the theorem and to its corollaries. To avoid almost verbatim repetitions, carrying over the corollaries of Theorem 2.5.1 to the case of continuous-time processes {S(t)} is left to the reader.

As in Chapters 2 and 3, these corollaries imply the following results.

1. Theorems on the uniform convergence of the distributions of the scaled value of S(T) to a stable law (see Theorems 3.8.1 and 3.8.2).

As in the above-mentioned theorems, let Nα,ρ be the domain of normal attraction of the stable law Fα,ρ, i.e. the class of distributions F (or G[1]) that satisfy condition [Rα,ρ] and have the property that, in the representation F+(t) = V(t) = t^{−α}L(t), one has L(t) → L = const as t → ∞.

Theorem 15.3.7. Let F (or G[1]) satisfy [Rα,ρ], ρ > −1, α ∈ (0, 1) ∪ (1, 2) and Eξ = 0 when α > 1. In this case, the inclusion F ∈ Nα,ρ (or G[1] ∈ Nα,ρ) is equivalent to each of the following two relations: as T → ∞,

sup_{t⩾0} | P(S(T)/b(T) ⩾ t) / Fα,ρ,+(t) − 1 | → 0,

sup_{t⩾0} | P(S̄(T)/b(T) ⩾ t) / Hα,ρ,+(t) − 1 | → 0,

where b(T) = ρ+^{−1/α} V^{(−1)}(1/T), ρ+ = (ρ + 1)/2 and Hα,ρ,+ is the right distribution tail of sup_{u⩽1} ζ(u), {ζ(t)} being a stable process with homogeneous independent increments, corresponding to Fα,ρ.

2. Analogues of the law of the iterated logarithm for the processes {S(t)} in the infinite variance case.

Theorem 15.3.8. Let F (or G[1]) satisfy the conditions [<, ≶] with α < 1 and W(t) ⩽ c1V(t), or the conditions [Rα,ρ], α ∈ (1, 2), ρ > −1 and Eξ = 0. Then

lim sup_{T→∞} (ln⁺ S(T) − ln σ(T)) / ln ln T = 1/α.    (15.3.10)


15.3.3 Asymptotic analysis in the boundary-crossing problems

Along with the upper and lower bounds discussed above, the results of the more detailed asymptotic analysis from Chapters 2–5, including asymptotic expansions, will also remain valid for continuous-time processes {S(t)} with homogeneous independent increments. Reproducing all the results and their proofs (in their somewhat modified form) would take too much space and would contain many almost verbatim repetitions of the exposition from Chapters 2–5. Therefore we will restrict ourselves to presenting, for illustration purposes, assertions concerning the asymptotic analysis of P(S̄(T) ⩾ x) only in the case Eξ = 0, Eξ² < ∞.

The form of the asymptotics of the probability P(S̄(T) ⩾ x) (its principal part) follows immediately from the upper and lower bounds we obtained above (cf. e.g. Theorem 4.4.1).

Theorem 15.3.9. Let the distribution F (or G[1]) satisfy the conditions [ · , =] (V ∈ R) with α > 2 and Eξ² < ∞, x = sσ(T), σ(T) = √((α − 2)T ln T). Then there exists a function ε(u) ↓ 0 as u ↑ ∞ such that

sup_{x: s⩾u} | P(S̄(T) ⩾ x)/(TG+(x)) − 1 | ⩽ ε(u).

Now we will turn to refinements of this theorem. The following result is an analogue of Theorem 4.5.1.

Theorem 15.3.10. Let condition [ · , =] (V ∈ R) with α > 2 be met and let, for some k ⩾ 1, the distribution F (or G[1]) satisfy conditions [D(k,q)] from § 4.4 and E|ξ^k| < ∞ (the latter is only required for k > 2). Then there exists a c < ∞ such that, as x → ∞,

P(S̄(T) ⩾ x) = TG+(x) { 1 + (L1(x)/(Tx)) ∫_0^T E S̄(t) dt
  + (1/T) ∑_{i=2}^{k} (Li(x)/(i! x^i)) ∫_0^T [ E S̄^i(T − t) + ∑_{l=2}^{i} \binom{i}{l} E S^l(t) S̄^{i−l}(T − t) ] dt
  + o(T^{k/2} x^{−k}) + o(q(x)) }

uniformly in T ⩽ cx²/ln x, where the Li(t) are the s.v.f.’s from the decomposition (4.4.7) for the function V(t) = G+(t).

Theorem 15.3.10 implies the next result (cf. § 4.5).


Corollary 15.3.11. Let the conditions of Theorem 15.3.10 with k = 1 be satisfied, and Eξ² = 1. Then, as T → ∞, uniformly in x ⩾ c√(T ln T),

P(S̄(T) ⩾ x) = TG+(x) [ 1 + (2^{3/2} α √T)/(3√π x) (1 + o(1)) + o(q(x)) ].

Proof of Theorem 15.3.10. As we have already said, the scheme of the proof here is the same as in the discrete-time case. The main change is that summation is now replaced by integration. For GT = { S̄(T) ⩾ x }, y = x/r, we have

P(GT) = P(GT Jy,0) + P(GT Jy,1) + P(GT Jy,2),

where as before the Jy,k, k = 0, 1, 2, are the events that, on the time interval [0, T], the trajectory of {S(t)} has respectively 0, 1 or at least 2 jumps of size ζ ⩾ y. By virtue of Theorem 15.3.4 and the relation (15.3.2), for x ⩾ c√(T ln T), r > 2, we have

P(GT Jy,0) = O((TV(x))²),    P(Jy,2) = O((TV(x))²).

It remains to consider the term P(GT Jy,1). Denote by J(dt, dv) the event that on the interval [0, T] the trajectory {S(t)} has just one jump of size ζ ⩾ y and, moreover, that ζ ∈ (v, v + dv), the time of the jump belonging to (t, t + dt). It is clear that, for v ⩾ y > 1,

P(J(dt, dv)) = e^{−tG+(y)} dt G(dv) e^{−(T−t)G+(y)} = e^{−TG+(y)} dt G(dv)

and that

P(GT Jy,1) = ∫_0^T ∫_y^∞ P(GT J(dt, dv)).    (15.3.11)

Further, observe that, as the process {S(t)} is stochastically continuous, the distributions of S(t − 0) and S(t), and also those of S̄(t − 0) and S̄(t), will coincide. Next note that { S̄<y(t − 0) ⩾ εx } and J(dt, dv) are independent events. Therefore

P(S̄(t − 0) ⩾ εx; J(dt, dv)) = P(S̄<y(t − 0) ⩾ εx; J(dt, dv))
= P(S̄<y(t − 0) ⩾ εx) P(J(dt, dv)) ⩽ P(S̄(t − 0) ⩾ εx) P(J(dt, dv)).

Hence, for any fixed ε > 0,

∫_0^T ∫_y^∞ P(S̄(t − 0) ⩾ εx; J(dt, dv))
⩽ c ∫_0^T ∫_y^∞ t G+(x) e^{−TG+(y)} G(dv) dt ⩽ c1 (TG+(x))².    (15.3.12)


The following bounds are obtained in a similar way:

∫_0^T ∫_y^∞ P(S(t) ⩽ −εx; J(dt, dv)) ⩽ c T² G+(x) W(x) = o(TG+(x) T^{k/2} x^{−k}),    (15.3.13)

∫_0^T ∫_y^∞ P(S̄^{(t)}(T − t) ⩾ εx; J(dt, dv)) ⩽ c (TG+(x))²,    (15.3.14)

where

S̄^{(t)}(z) := sup_{v⩽z} [ S(v + t) − S(t) ].

The above means that, under the probability symbol on the right-hand side of the representation (15.3.11), we can add or remove the events

{ S̄(t − 0) < εx },  { |S(t − 0)| < εx },  { S̄^{(t)}(T − t) < εx },

the errors thus introduced being bounded by the right-hand sides of the relations (15.3.12)–(15.3.14). Therefore, putting

εT,x := TG+(x) T^{k/2} x^{−k},    Zt,T := S(t − 0) + S̄^{(t)}(T − t),

we will have, owing to (15.3.11), the following representation:

P(GT Jy,1) = ∫_0^T ∫_y^∞ P(GT; S̄(t − 0) < εx; J(dt, dv)) + o(εT,x)
= e^{−TG+(y)} ∫_0^T ∫_y^∞ dt G(dv) P(S̄(t − 0) < εx, v + Zt,T ⩾ x) + o(εT,x)
= ∫_0^T ∫_y^∞ dt G(dv) P(|S(t − 0)| < εx, S̄^{(t)}(T − t) < εx, v + Zt,T ⩾ x) + o(εT,x)
= ∫_0^T E[ G+(x − Zt,T); |S(t − 0)| < εx, S̄^{(t)}(T − t) < εx ] dt + o(εT,x).

We have come to the problem of evaluating

E[ G+(x − S(t − 0) − S̄^{(t)}(T − t)); |S(t − 0)| < εx, S̄^{(t)}(T − t) < εx ],    (15.3.15)

which is identical to the problem of evaluating the terms in (4.5.6) on p. 206. The subsequent argument does not differ from that in the proof of Theorem 4.5.1. The theorem is proved.

We will also state a theorem on asymptotic expansions for the probabilities P(S(T, a) ⩾ x), where S(T, a) = sup_{t⩽T} (S(t) − at) (an analogue of Theorem 4.6.1(ii)).

Theorem 15.3.12. Let condition [ · , =] (V ∈ R), α > 2, be met and, for some k ⩾ 1, let the distribution F (or G[1]) satisfy conditions [D(k,q)] from § 4.4 with an r.v.f. q(t) and E|ξ^k| < ∞ (the latter is only required when k > 2). Then, as x → ∞, uniformly in all T,

P(S(T, a) ⩾ x) = ∫_0^T G+(x + au) { 1 + ∑_{i=1}^{k} (Li(x + au)/(i!(x + au)^i))
  × [ E S^i(T − u, a) + ∑_{l=2}^{i} \binom{i}{l} E S^l(u) E S^{i−l}(T − u, a) ] } du
  + O( m G+(x)(m G(x) + x G(x)) ) + o( x G+(x) q(x) ),

where m := min{T, x}, G(t) := max{G+(t), W(t)} and W(t) is the r.v.f. dominating G−(t).

All the remarks following Theorem 4.6.1 and an analogue of Corollary 4.6.3 will remain true in the present situation. The proofs of these assertions are similar to the arguments presented in § 4.6 (up to obvious changes, illustrated by the proof of Theorem 15.3.10).

Note, however, that for finding the asymptotics of P(S(∞, a) ⩾ x) it would seem to be more natural to use the approaches presented in § 7.5, where we studied the asymptotics of P(S(a) ⩾ x) for random walks under conditions broader than those in Chapters 3–5.

As we have observed already in §§ 15.1 and 15.2, when studying large deviation probabilities for processes with independent increments with ‘heavy’ tails, the component {S[0](t)} of the process (see (15.3.1)) that corresponds to the first three terms in the representation (15.1.2) does not affect the asymptotics of the desired distributions, because it does not influence the behaviour of the tails of the measures F and G at infinity. This allows one, in particular, to reduce the problem on the first-order asymptotics for P(S(∞, a) ⩾ x) to the corresponding problem for random walks in discrete time considered in Chapters 2–5. This is due to the fact that for compound Poisson processes (corresponding to the fourth term in (15.1.2)) such a reduction is quite simple.

Indeed, let {S(t)} be a compound Poisson process with jump intensity Θ and jump distribution P(ζj ∈ dz) = GΘ(dz). The jump epochs tk in the process


{S(t)} have the form tk = ∑_{j=1}^{k} τj, where the r.v.’s τj are independent and P(τj ⩾ t) = e^{−Θt}, t > 0. Clearly,

S(∞, a) ≡ sup_{t⩾0} (S(t) − at) = sup_{k⩾0} (S(tk) − atk) = S∗ := sup_{k⩾0} S∗k,

where S∗k := ∑_{j=1}^{k} ξ∗j, ξ∗j := ζj − aτj and the r.v.’s τj, ζj are independent. If

a∗ := Eξ∗i = Eζ − aΘ^{−1} < 0

then the first- and second-order asymptotics of the probability P(S(∞, a) ⩾ x) = P(S∗ ⩾ x) can be found with the help of Theorems 7.5.1 and 7.5.8, where we should replace ξ by ξ∗ and V(t) by

V∗(t) := P(ξ∗ ⩾ t) = Θ ∫_0^∞ e^{−Θu} GΘ,+(t + au) du ∼ GΘ,+(t)

(the last relation holds when GΘ,+(t) := GΘ([t, ∞)) is an l.c. function), so that

P(S(∞, a) ⩾ x) ∼ (1/|a∗|) ∫_x^∞ GΘ,+(u) du = (1/|E S(1) − a|) ∫_x^∞ G+(u) du.

That the last answer has this particular integral form is in no way related to the assumption that {S(t)} is a compound Poisson process; it will clearly remain true for more general processes with independent increments. In a somewhat formal way, the same assertion could be obtained by, for example, using approaches from § 15.2.
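To illustrate the reduction numerically (our sketch, not part of the original text), one can simulate the random walk S∗k = ∑_{j⩽k} (ζj − aτj) directly and compare the empirical tail of its supremum with the integral formula above. The example below assumes Pareto(α) jumps ζj and Exp(Θ) inter-jump times τj, and truncates the supremum at a finite horizon n; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
Theta, a, alpha = 1.0, 2.0, 2.5       # jump rate, drift, Pareto index (illustrative)
n, n_paths = 1000, 4000               # truncation horizon, number of paths

zeta = rng.pareto(alpha, size=(n_paths, n)) + 1.0      # Pareto(alpha) jumps on [1, inf)
tau = rng.exponential(1.0 / Theta, size=(n_paths, n))  # Exp(Theta) inter-jump times
S_sup = np.maximum(np.cumsum(zeta - a * tau, axis=1).max(axis=1), 0.0)

a_star = alpha / (alpha - 1.0) - a / Theta             # E zeta - a/Theta, negative here
for x in [5.0, 10.0, 20.0]:
    empirical = (S_sup >= x).mean()
    # (1/|a*|) * int_x^inf u^{-alpha} du, since G_{Theta,+}(u) = u^{-alpha} for u >= 1:
    predicted = x**(1.0 - alpha) / ((alpha - 1.0) * abs(a_star))
    print(f"x={x}: empirical {empirical:.3e}, predicted {predicted:.3e}")
```

The agreement should improve as x grows; note that the drift a∗ must be negative for S∗ to be finite.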

The above reduction to discrete-time random walks (when studying the distribution of S(∞, a)) will be carried out in a more general case (for generalized renewal processes) in § 16.1.

16

Extension of the results of Chapters 3 and 4 to generalized renewal processes

16.1 Introduction

In the present chapter, we will continue carrying over the results of Chapters 3 and 4 to continuous-time random processes. Here we will deal with generalized renewal processes (GRPs), which are defined as follows.

Let τ, τ1, τ2, . . . be a sequence of positive i.i.d. r.v.’s with a finite mean aτ := Eτ. Put t0 := 0,

tk := τ1 + · · · + τk,    k = 1, 2, . . . ,

and let

ν(t) := ∑_{k=1}^{∞} 1(tk ⩽ t),    t ⩾ 0,

be the (right-continuous) renewal process generated by that sequence. Recall the obvious relation

{ν(t) < k} = {tk > t},

which will often be used in what follows, and note that ν(t) + 1 is the first hitting time of the level t by the random walk {tk, k ⩾ 0}:

ν(t) + 1 = min{ k ⩾ 1 : tk > t }.

Further, suppose that

S0 = 0,    Sk = ξ1 + · · · + ξk,    k ⩾ 1,

is a random walk generated by a sequence of i.i.d. r.v.’s ξ, ξ1, ξ2, . . . , which is independent of {τi; i ⩾ 1}.

Definition 16.1.1. A continuous-time process

S(t) := Sν(t) + qt,    t ⩾ 0,

where q is a real number, is called a generalized renewal process with linear drift q.
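For orientation, Definition 16.1.1 translates directly into a few lines of simulation code. The following Python sketch (ours; the distributions of τ and ξ are placeholders) computes S(T) = Sν(T) + qT along one path; note that with Eτ = 1, Eξ = −1 and q = 1 the mean trend a = aξ + qaτ is zero, matching the centring assumption (16.1.16) adopted later in this section.

```python
import numpy as np

def grp_value(T, q, rng):
    """One sample of S(T) = S_{nu(T)} + q*T for a generalized renewal process."""
    t, s = 0.0, 0.0
    while True:
        t += rng.exponential(1.0)     # tau_k: placeholder renewal interval
        if t > T:                     # first k with t_k > T, so nu(T) jumps occurred
            break
        s += rng.normal(-1.0, 1.0)    # xi_k: placeholder jump
    return s + q * T

rng = np.random.default_rng(2)
print([grp_value(100.0, 1.0, rng) for _ in range(10)])
```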


Note that in the previous chapter, which was devoted to processes with independent increments, we have already considered an important special case of GRPs, that of compound Poisson processes.

Our main aim will be to approximate the probabilities of events of the form

GT := { sup_{t⩽T} (S(t) − g(t)) ⩾ 0 }    (16.1.1)

for sufficiently ‘high’ boundaries {g(t); t ∈ [0, T]} (assuming, without loss of generality, that T ⩾ 1; the case T → 0 is trivial). The most important special cases of such events GT are, for sufficiently large x, the events

{S(T) ⩾ x}  and  {S̄(T) ⩾ x},    (16.1.2)

where S̄(T) := sup_{t⩽T} S(t). Regarding the distributions F and Fτ of the r.v.’s ξ and τ respectively, we will assume satisfied a number of conditions, which are mostly expressed in terms of the asymptotic behaviour at infinity of their tails

F+(t) = P(ξ ⩾ t),    F−(t) = P(ξ < −t),    Fτ(t) := P(τ ⩾ t),    t > 0.

As in our previous exposition, the conditions F+ ∈ R and Fτ ∈ R respectively mean that

F+(t) = V(t) := t^{−α}L(t),    α > 1,  L is an s.v.f.,    (16.1.3)

Fτ(t) = Vτ(t) := t^{−γ}Lτ(t),    γ > 1,  Lτ is an s.v.f.    (16.1.4)

The problem on the asymptotic behaviour of the probabilities P(GT) is a generalization of similar problems for random walks {Sk} with regularly varying tails, which were studied in detail in Chapters 3 and 4. The asymptotic behaviour of the large deviation probabilities for GRPs has been studied less extensively, although such processes, like random walks, are widely used in applications. One of the most important applications is the famous Sparre Andersen model in risk theory [257, 268, 9].

Throughout the chapter we will be using the notation

aξ := Eξ,    aτ := Eτ,    H(t) := Eν(t),    t ⩾ 0,

so that H(t) is the renewal function for the sequence {tk; k ⩾ 0}. In [188], an analogue of the uniform representation (4.1.2) was given for GRPs under the additional assumptions that, apart from condition (16.1.3) on the distribution tail of aτξ − aξτ, one also has E|ξ|^{2+δ} < ∞ and Eτ^{2+δ} < ∞ for some δ > 0 and the distribution Fτ has a bounded density. In [169], in the case q = 0 the authors established the relation

P(S(T) − ES(T) ⩾ x) ∼ H(T)V(x)  as T → ∞,  x ⩾ δT,    (16.1.5)

for any fixed δ > 0, under the assumption that the distribution tail of the r.v. ξ ⩾ 0 satisfies the condition of ‘extended regular variation’ (see § 4.8) and that, for the


process {ν(t)} (which in [169] can be of a more general form than a renewal process), the following condition holds: for some ε > 0 and c > 0,

∑_{k>(1+ε)T/aτ} e^{ck} P(ν(T) > k) → 0  as T → ∞.    (16.1.6)

The paper [169] also contains sufficient conditions for (16.1.6) to hold for renewal processes: Eτ² < ∞ and Fτ(t) ⩾ e^{−bt}, t > 0, for some b > 0 (Lemma 2.3 of [169]). It was shown in [262] that condition (16.1.6) is always met for renewal processes with aτ < ∞ without any additional assumptions on the distribution Fτ. Observe also that the last fact immediately follows from inequality (16.2.13) below, in the proof of Lemma 16.2.8(i).

In this chapter, we will not only extend the zone of x values for which the relation (16.1.5) holds true (and this will be done for arbitrary q) but also establish the exact asymptotic behaviour of the probabilities P(GT) for a much wider class of events GT of the form (16.1.1).

First of all, note that the problem on the asymptotics of P(S̄(∞) ⩾ x) as x → ∞ can be reduced to that on the asymptotics of P(S ⩾ x) for the maxima S = sup_{k⩾0} Sk of ordinary random walks, considered in Chapters 3 and 4. This follows from the observation that, for q ⩽ 0, one has

S̄(∞) = sup_{k⩾0} (Sk + qtk) =: Z    (16.1.7)

and, for q > 0,

S̄(∞) = sup_{k⩾1} (Sk−1 + qtk) = qτ1 + sup_{k⩾1} [Sk−1 + q(tk − τ1)] d= qτ + Z,    (16.1.8)

where the r.v.’s τ and Z are independent and Z = sup_{k⩾0} Zk for the random walk Zk := ∑_{j=1}^{k} ζj generated by i.i.d. r.v.’s ζj := ξj + qτj. These representations imply the following result.

Theorem 16.1.2. Let a := Eζ = aξ + qaτ < 0. Then the following assertions hold true.

I. If q ⩽ 0 and F+ ∈ R then

P(S̄(∞) ⩾ x) ∼ (1/|a|) ∫_x^∞ F+(t) dt ∼ xV(x)/((α − 1)|a|).    (16.1.9)

II. If q > 0 and one of the following three conditions is met:

(i) F+ ∈ R and Fτ(t) = o(V(t)) as t → ∞,
(ii) F+ ∈ R and Fτ ∈ R,
(iii) Fτ ∈ R and F+(t) = o(Vτ(t)) as t → ∞,

then

P(S̄(∞) ⩾ x) ∼ (1/|a|) ∫_x^∞ (F+(t) + Fτ(t/q)) dt.    (16.1.10)

Note that, in cases II(i) and II(iii), the second and the first summands respectively in the integrand in (16.1.10) become negligibly small and so can be omitted. Observe also that the first relation in (16.1.9) was obtained in [115].

We will need the following well-known assertion, whose first part is a consequence of Theorem 12, Chapter 4 of [42] (see also Corollary 3.6.3), while the second follows from the main theorem of [178].

Theorem 16.1.3. If F+ ∈ R and aξ < 0 then, as x → ∞,

P(S ⩾ x) ∼ (1/|aξ|) ∫_x^∞ V(t) dt ∼ xV(x)/((α − 1)|aξ|)    (16.1.11)

and, moreover,

P(S̄n ⩾ x) ∼ (1/|aξ|) ∫_x^{x+n|aξ|} V(t) dt ∼ [ xV(x) − (x + n|aξ|)V(x + n|aξ|) ] / ((α − 1)|aξ|)    (16.1.12)

uniformly in n ⩾ 1.
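When L ≡ const, the second relation in (16.1.11) is an elementary calculation (the general case being Theorem 1.1.4(iv)):

∫_x^∞ V(t) dt = L ∫_x^∞ t^{−α} dt = L x^{1−α}/(α − 1) = xV(x)/(α − 1),    α > 1.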

Note that the first asymptotic relation in (16.1.11) was established in [42] in a more general case (later on, it was shown in [275, 115] that a sufficient condition for this relation is that F+^I is the tail of a subexponential distribution; the necessity of this condition was proved in [177]), while the first relation in (16.1.12) was obtained in [178] for so-called strongly subexponential distributions. The second relations in each of the formulae (16.1.11), (16.1.12) are consequences of (16.1.3) and Theorem 1.1.4(iv).

Proof of Theorem 16.1.2. I. In the case q ⩽ 0 one has (16.1.7), the tail Fζ,+ of the distribution Fζ of ζ := ξ + qτ being asymptotically equivalent to the tail F+ = V ∈ R:

Fζ,+(t) = ∫_0^∞ V(t − qu) dFτ(u) = V(t) [ ∫_0^M (V(t − qu)/V(t)) dFτ(u) + θFτ(M) ],    0 < θ < 1,

where, for M = M(t) increasing to infinity slowly enough as t → ∞, the expression in the square brackets converges to 1 by the uniform convergence


theorem for r.v.f.’s (Theorem 1.1.2). Hence (16.1.9) follows immediately from (16.1.11) with aξ replaced in it by a.

II. When q > 0 one has the representation (16.1.8). In case (i) it is easily seen, cf. (1.1.39)–(1.1.42), that

Fζ,+(t) = P(ξ + qτ ⩾ t) ∼ F+(t),    (16.1.13)

so that, by virtue of (16.1.11),

P(Z ⩾ x) ∼ (1/|a|) ∫_x^∞ Fζ,+(t) dt.    (16.1.14)

Therefore the tail of Z is also an r.v.f. and, moreover, Fτ(t) = o(P(Z ⩾ t)). Hence, cf. (16.1.13), we obtain that

P(qτ + Z ⩾ x) ∼ P(Z ⩾ x),

which coincides in this case with (16.1.10) by virtue of (16.1.13), (16.1.14) (see also Chapter 11).

In case (ii), again using calculations similar to those in (1.1.39)–(1.1.42), we have

Fζ,+(t) ∼ V(t) + Vτ(t/q),

so that Fζ,+ ∈ R and therefore, as in case (i),

P(Z ⩾ x) ∼ (1/|a|) ∫_x^∞ Fζ,+(t) dt ∼ c x Fζ,+(x)

and P(qτ + Z ⩾ x) ∼ P(Z ⩾ x).

Case (iii) is considered similarly to (i), (ii). The theorem is proved.

For T < ∞ such a simple reduction of the problem on the asymptotic behaviour of P(S̄(T) ⩾ x) to the respective results for random walks is impossible, and so we will devote a special section (§ 16.5) to studying the asymptotics of P(GT) for linear boundaries in the whole spectrum of deviations.

A possible way of doing the asymptotic analysis of the probabilities of the form P(GT) (in the first place for the events (16.1.2)) for a GRP is to use the decomposition

P(GT) = ∑_{n=0}^{∞} P(GT | ν(T) = n) P(ν(T) = n).    (16.1.15)

If, on the set {ν(T) = n}, the conditional probability P(GT | ν(t), t ⩽ T) does not depend on the behaviour of the trajectory of the renewal process {ν(t)} inside the interval [0, T], i.e.

P(GT | ν(T) = n) = P(GT | {ν(t); t ⩽ T}, ν(T) = n)

(which holds, for example, for the event GT = {S(T) ⩾ x}, when one has


P(GT | ν(T) = n) = P(Sn + qT ⩾ x)) then we will say that partial factorization takes place, since in this case the problem reduces to studying the processes {Sn} and {ν(t)} separately. It then turns out that the asymptotic behaviour of P(GT) (including some asymptotic expansions) can be derived from the known results for random walks presented in Chapters 3 and 4 and some bounds for renewal processes.

In the general case, however, one may need to employ another approach, which directly follows the basic scheme for studying random walks with regularly varying jump distribution tails developed in Chapters 3 and 4. Namely, along with ‘truncated versions’ of the jumps ξi, we will now truncate the renewal-interval lengths τi. The main contribution to the probability of the event GT will again be due to trajectories containing exactly one large jump, but now the latter can comprise not only a large ξi but also a large τi (when q > 0).

The asymptotic behaviour of the probabilities of the events (16.1.2) is mostly determined by simple relations between the mean values aξ and aτ, the linear drift coefficient q and the quantities x and T. Since the mean number of jumps per time unit is equal to 1/aτ, the rate of the mean trend in the process {S(t)} clearly equals a/aτ, where a := aξ + qaτ is the mean trend of the GRP per renewal interval. Therefore the event {S(T) ⩾ x}, say, will be a large deviation when x is much greater than aT/aτ.

To simplify the exposition, we will assume here and in what follows that the mean trend of the process S(t) per renewal interval is equal to zero:

a := aξ + qaτ = 0.    (16.1.16)

It is clear that, when considering boundary-crossing problems, such an assumption does not restrict generality. Indeed, let {S(t)} be a general GRP with a linear drift. For this process, the event GT will clearly coincide with the event G^0_T = {sup_{t⩽T} (S0(t) − g0(t)) > 0} of the same type, but for the ‘centred’ process S0(t) := S(t) − (a/aτ)t with zero mean trend and for the boundary g0(t) := g(t) − (a/aτ)t. In particular, the event {S̄(T) ⩾ x} will become, for the new process {S0(t)}, the event of the crossing of a linear boundary of the form x − (a/aτ)t.

Next we will discuss briefly how the nature of the asymptotics of P(S(T) ⩾ x) (the simplest of the probabilities under consideration) depends on the value of aξ, under the assumption (16.1.16).

First let aξ ⩾ 0 (and therefore q ⩽ 0 owing to (16.1.16)). Clearly, starting from the ‘vicinity of zero’ at time t < T and moving along a straight line with slope coefficient q, at time T we will be even further below the (high) level x. This means that the occurrence of the event {S(T) ⩾ x}, whether in the presence or absence of long renewal intervals (it is during these intervals that the process S(t) moves along straight lines with slope coefficient q ⩽ 0), will be very unlikely in the absence of large jumps ξi. In this case, the asymptotics of large deviation probabilities will be similar to those established for ordinary random walks: in


the respective zone of x values and under the assumption that F+ ∈ R, one has

P(S(T) ⩾ x) ∼ H(T)V(x),    x → ∞    (16.1.17)

(recall that H(T) is simply the mean number of jumps in the process {S(t)} during the time interval [0, T], so that (16.1.17) is a natural generalization of the asymptotic relation

P(Sn ⩾ x) ∼ nV(x)    (16.1.18)

for random walks). If T → ∞ then, by the renewal theorem, the relation (16.1.17) clearly becomes

P(S(T) ⩾ x) ∼ (T/aτ) V(x)    (16.1.19)

for an appropriate range of values x → ∞.

If aξ < 0 (so that q > 0 owing to (16.1.16)) then we have to distinguish between the two cases qT ⩽ x and qT > x. In the first case, for the event {S(T) ⩾ x} to occur a large jump ξj is necessary (with a dominating probability). Moreover, it turns out that, while the relations (16.1.17)–(16.1.19) still hold true when x − qT > δT (δ > 0 is fixed), in the ‘threshold’ situation when o(T) = x − qT → ∞ the presence of a long renewal interval could make it much easier for the level x to be reached (it would then suffice for the ‘large’ jump ξj to have a substantially smaller value). This could be reflected by the appearance of an additional term on the right-hand side of (16.1.19).

In the second case, one has in any case to take into account the possibility of the occurrence of the event {S(T) ⩾ x} due to a very long renewal interval τk. For this to happen, such an interval should appear close to the starting point of the trajectory of S(t). Indeed, owing to the zero mean trend in the process, the value of S(tk−1) will be ‘moderate’ in the absence of large jumps ξj, j < k. Therefore, if tk−1 is close enough to T then, starting at the point with coordinates (tk−1, S(tk−1)) and moving along a straight line with slope coefficient q, it will already be impossible to reach the level x by the time T. Roughly speaking, the occurrence of the event {S(T) ⩾ x} due to a large τk is only possible when q(T − aτ k) > x, i.e. when

k < (T − x/q)/aτ.

Then, for the event {S(T) ⩾ x} to occur, it will suffice for the large jump at the beginning of the trajectory to have the property τk > x/q since, both before and after that long renewal interval, the process {S(t)} will not deviate far from the mean trend line, i.e. it will be moving roughly ‘horizontally’. Assuming the regularity of the tail Fτ(t) of the distribution of τ, we arrive at the conclusion that, in such a case, one could expect to have asymptotic results of the form

P(S(T) ⩾ x) ∼ (T/aτ) V(x) + ((T − x/q)/aτ) Fτ(x/q).    (16.1.20)


The corresponding results are established in Theorem 16.2.1 below, with the help of the representation (16.1.15). They include delicate ‘transient’ phenomena that occur when x ∼ qT.

Moreover, it turns out that one can also obtain the first few terms of the asymptotic expansion for the probability P(S(T) ⩾ x) (Theorem 16.3.1). The derivation of more complete expansions is substantially harder than solving the same problem for ordinary random walks, owing to the more complex structure of the process {S(t)}.

Arguments similar to those above also hold in the case of more general boundaries. In particular, the relations (16.1.17)–(16.1.20) remain true (under the respective conditions) for the probabilities P(S̄(T) ⩾ x) (Theorem 16.2.3).

For boundaries of a general form, we will restrict ourselves to considering cases where the occurrence of the event GT due to long renewal intervals is impossible, as otherwise the asymptotic analysis would become rather tedious. (An exceptional case is that of boundaries for which the right endpoint is the lowest, i.e. g(T) = min_{0⩽t⩽T} g(t). For such boundaries we will show in Corollary 16.4.4 that all the asymptotic results obtained in this chapter for the probabilities P(S(T) ⩾ x) remain valid for the probabilities P(GT).) Namely, we consider classes of boundaries {g(t); t ∈ [0, T]} satisfying the conditions x ⩽ inf_{t⩾0} g(t) ⩽ Kx (K > 1 is fixed) under the additional assumptions that x ⩾ δT when q ⩽ 0, and inf_{t⩽T} (g(t) − qt) ⩾ δT when q > 0. It will be shown that, cf. the results in §§ 3.6 and 4.6 on the asymptotics of the boundary-crossing probabilities for a random walk, the main term in the probability P(GT), as x → ∞, has the form

∫_0^T V(g∗(t)) dH(t),

where g∗(t) := inf_{t⩽s⩽T} g(s) (Theorem 16.4.1). If, moreover, T → ∞ then the above integral is asymptotically equivalent to

(1/aτ) ∫_0^T V(g∗(t)) dt.

For linear boundaries g(t) = x + gt, we will obtain more complete results in § 16.5. For a finite large T, we can find the asymptotics of P(GT) including the case when the ‘deterministic drift’ line qt can cross the boundary x + gt. As in our analysis of the distributions of S(T) and S̄(T), we also study the ‘threshold case’, when x + gT and qT are relatively close to each other. Note that the case of linear boundaries with T = ∞ is covered by Theorem 16.1.2.

In conclusion, we note that studying large deviation probabilities for S(T) in the case when the components of the vectors (τi, ξi) are dependent is also possible, but only for some special joint distributions of (τi, ξi) (those that are regularly


varying at infinity). The representation

P(S(T) ⩾ x) = ∑_{n=1}^{∞} ∫_0^T P(tn ∈ dt, Sn + qT ⩾ x) P(τn+1 > T − t)    (16.1.21)

reduces the problem to one of analysing the joint distribution of the vector of sums (tn, Sn). Such an analysis, enabling one to find the asymptotics of (16.1.21), was given in Chapter 9 for some basic types of joint distributions of (τi, ξi) that display regular variation at infinity.

(16.1.21)reduces the problem to one of analysing the joint distribution of the vector of sums(Tn, Sn). Such an analysis, enabling one to find the asymptotics of (16.1.21), wasgiven in Chapter 9 for some basic types of joint distributions of (τi, ξi) that displayregular variation at infinity.

16.2 Large deviation probabilities for S(T) and S̄(T)

We now introduce the conditions [QT] and [ΦT], which are respectively similar to condition [Q] of Chapter 3 and the conditions that we often used in Chapter 4.

[QT] One has F+ ∈ R for α ∈ (1, 2), and at least one of the following two conditions holds true:

(i) F−(t) ⩽ cV(t), t > 0, and TV(x) → 0;
(ii) F−(t) ⩽ W(t), t > 0, W ∈ R, x → ∞ and

T [ V(x/ln x) + W(x/ln x) ] < c.

Clearly, [QT] is simply condition [Q] with n replaced in it by T; see p. 138.

[ΦT] One has F+ ∈ R for α > 2, d := Var ξ < ∞ and x → ∞; moreover, for some c > 1, we have x > c√((α − 2) aτ^{−1} d T ln T).

Furthermore, we will also need a condition of the form [ · , <] on the distribution Fτ. It will be convenient for us to denote it by [<]:

[<] For some γ > 1, one has Fτ(t) ⩽ Vτ(t) := t^{−γ}Lτ(t), where Lτ is an s.v.f.

It is evident that, by the Chebyshev inequality, for condition [<] to hold for some Lτ(t) = o(1) as t → ∞ it suffices that Eτ^γ < ∞.

Theorem 16.2.1. Let either condition [QT] or condition [ΦT] be satisfied, and let δ > 0 be an arbitrary fixed number. Suppose also that (16.1.16) holds, i.e. that the mean trend in the process {S(t); t ⩾ 0} is equal to zero. Then the following assertions hold true.

I. The case q ⩽ 0

(i) As x → ∞, uniformly in T values that satisfy x ⩾ δT, one has

P(S(T) ⩾ x) ∼ H(T)V(x).    (16.2.1)

If T → ∞ then the term H(T) on the right-hand side can be replaced by T/aτ.


(ii) If condition [<] is met for a γ ∈ (1, 2) then (16.2.1) holds as x → ∞ uniformly in the range of T values satisfying, along with [QT] or [ΦT], the condition x ⩾ T^{1/γ}Lz(T) for a suitable s.v.f. Lz (a way of choosing the function Lz(t) is indicated in Lemma 16.2.8(ii) below).

(iii) If α ∈ (1, 2) and Fτ(t) = o(V(t)) as t → ∞, or if α > 2 and Eτ² < ∞, then (16.2.1) holds without any additional assumptions on x and T (apart from those in conditions [QT] and [ΦT]).

II. The case q > 0

(i) The relation (16.2.1) holds as x → ∞ uniformly in T values that satisfy

x∗ := x − qT ⩾ δT.

(ii) Let Fτ ∈ R. If

T → ∞,  x∗ → ∞  and  x∗ = o(T),

then, for α ∈ (1, 2),

P(S(T) ⩾ x) ∼ (T/aτ) V(x)

and, for α > 2,

P(S(T) ⩾ x) ∼ (T/aτ) V(x) + x∗² V(x∗) Vτ(T) / (q² aτ² (α − 1)(α − 2)).

(iii) If Fτ(t) = o(V(t)) as t → ∞ then the assertions of parts I(i), I(ii) hold without any additional assumptions on x and T (apart from those in conditions [QT] and [ΦT] and in the above-mentioned parts of the theorem).

(iv) Let condition Fτ ∈ R hold for a γ ∈ (1, 2) ∪ (2, ∞) (see (16.1.4)) and let

x∗ = x − qT → −∞,    x ⩾ T^{1/γ}Lz(T)

for a suitable s.v.f. Lz (a way of choosing the function Lz(t) is indicated in Lemma 16.2.8(ii); for γ > 2 the latter inequality always holds owing to conditions [QT] or [ΦT]). Then

P(S(T) ⩾ x) ∼ (1/aτ) [ TV(x) + (T − x/q) Vτ(x/q) ]  as T → ∞.

Remark 16.2.2. If the conditions α > 2 and [<] with Lτ = const are met in part I(ii) of the theorem then Lz(t) = c ln^{1−1/γ}(t), so that the relation (16.2.1) will hold as x → ∞ uniformly in T values that satisfy x ⩾ cT^{1/γ} ln^{1−1/γ} T for a large enough c.


A few comments regarding the second half of Theorem 16.2.1 are in order. As already stated in § 16.1, case II (the presence of a positive linear drift qt) is more complicated than case I. In case II(i) of the theorem, the level x is much higher than the point qT, in whose neighbourhood we could find ourselves at time T owing to the linear drift over a very long renewal interval. In this case the asymptotics of the probability P(S(T) ⩾ x) will remain of the same form as in case I: roughly speaking, to cross the level x one still needs the presence of a large jump ξj during the time interval [0, T].

In case II(ii) the level x is still above qT but the difference between the two values is relatively small: x∗ = x − qT = o(T). In this transient situation, the form of the asymptotics of the probability P(S(T) ⩾ x) is determined by the ‘thickness’ of the right tail of the distribution of ξj. For a ‘thick’ tail (when α ∈ (1, 2)) the asymptotics remain the same and do not depend on the distribution of the renewal interval length (of which we require that Fτ ∈ R). When the distribution tail of ξj is ‘thinner’ (α > 2), the asymptotics acquire an additional term which depends on the distribution of τk and which may or may not dominate. Its presence is caused by the following (relatively probable) possibility: one renewal interval, starting at the very beginning of the time interval [0, T], proves to be very long (and covers the time point t = T), the sum of the jumps ξj ‘accumulated’ during the initial time interval (prior to the start of that renewal interval) being large enough for the process to negotiate the ‘gap’ between qT and x. This, in its turn, is also a ‘large deviation’ and is due to a single ‘moderately large’ jump ξj exceeding the value x∗ (hence the factor V(x∗)). The presence of the factor x∗² can be explained by the fact that, roughly speaking, the number of such jumps, at the beginning of the trajectory of the process, that would have to correspond to these large values of τk and ξj is of order x∗, the sequences {τk} and {ξj} being independent of each other. Therefore the probability of the required combination of events will be of the order of magnitude of the product of x∗V(x∗) and x∗Fτ(T).

In case II(iv) the level x is already substantially lower than qT (owing to the condition x − qT → −∞), and hence the probability that the process will be above that level increases by a contribution corresponding to the presence of a single very long τk at the beginning of the interval [0, T]. The form of this additional term can be explained as follows. Roughly speaking, in the absence of large deviations in the random walk {Sn}, for the process S(t) to exceed the level x by the time T it suffices that one of the first (roughly) (T − x/q)/aτ renewal intervals is long (> x/q). In this case, the trajectory of S(t) will oscillate about zero until the start of the long renewal interval, and then during that interval it will move along a straight line with slope coefficient q > 0 (and this will take the trajectory above the level x). After that (if t is still less than T), the trajectory will again oscillate at an approximately constant level (and therefore will still be above the level x by the time T).

The very narrow transient case x∗ = O(1) (when the values of qT and x are

Page 585: Asymptotic analysis of random walks

554 Extension to generalized renewal processes

almost the same) proves to be quite difficult. This case is not considered in thepresent exposition and is not covered by Theorem 16.2.1.

As was the case for the ordinary random walks, the first-order asymptotics ofthe probabilities P(S(T ) � x) of large deviations of the maximum of the processturn out to be of the same form as those for P(S(T ) � x). This is due to thesame reason: if (say, due to a large jump ξj) the process {S(t)} crosses a highlevel x somewhere inside the interval [0, T ] then, with a high probability, it willstay in a ‘neighbourhood’ of the point S(tj) until the end of the interval [0, T ](recall that our process has a zero mean trend). Hence the events {S(T ) � x}and {S(T ) � x} prove to be almost equivalent. The process can cross the level x

during an interval of its linear growth, but even then the above remains true aswell.

Theorem 16.2.3. All the assertions of Theorem 16.2.1 remain true, under therespective assumptions, for the probability P(S(T ) � x).

As we saw in § 4.8 in the case of random walks {Sn}, the asymptotics

P(Sn � x) ∼ nF+(x), P(Sn � x) ∼ nF+(x) (16.2.2)

(valid in the respective deviation zones) extend to a wider distribution class than R(possibly for narrower deviation zones). Using the partial factorization approach,which is based on the relations (16.2.2), one can show that a similar situationoccurs for the GRPs as well. We will give here just the following simple corollary(further results can be derived in a similar way, using Theorem 4.8.6 and boundsfrom Lemmata 16.2.6–16.2.8).

Theorem 16.2.4. Suppose that the distribution of ξ satisfies the conditions ofTheorem 4.8.1 and the relations (4.8.2) hold true (with n replaced in them by T ),and let Eτ2 < ∞. Assume also that (16.1.16) is met, and that δ > 0 is anarbitrary fixed number. Then, uniformly in T such that x � (δ + max{0, q})T,

one has

P(S(T ) � x) ∼ P(S(T ) � x) ∼ H(T )F+(x).

If T → ∞ then the term H(T ) on the right-hand side can be replaced by T/aτ .

To prove the theorems we will need a few auxiliary results on the deviations ofthe renewal process ν(t) from its mean value H(t) ∼ t/aτ . We will state theseresults as separate lemmata.

First we will note the following.

Remark 16.2.5. To simplify the computations, we will be assuming in all theproofs of the present chapter that

aτ = 1. (16.2.3)

Clearly, this does not lead to any loss of generality. One can easily see how to

Page 586: Asymptotic analysis of random walks

16.2 Large deviation probabilities for S(T ) and S(T ) 555

make the transition from the results obtained under this assumption to the generalcase. One just has to scale the time by a factor aτ , i.e. to make the followingchanges in the derived results: replace T by T/aτ and Fτ (·) by Fτ (aτ ·) (and,correspondingly, H(·) by H(aτ ·) and so on, so that, for example, the value H(T )will remain unchanged). Further, q should be replaced by qaτ , and g(·) by g(aτ ·)(so that, say, in the case of a linear boundary g(t) = x + gt one replaces, in therelations obtained in the case (16.2.3), the coefficient g by gaτ ). Precisely thiswill be done when the results of our theorems are stated (convention (16.2.3) isnot used therein).

Denote by

t0k := tk − k, k = 0, 1, 2, . . . ,

the centred random walk generated by the sequence {τi}, and by Λθ(v) the devi-ation function for the r.v. θj := 1 − τj :

Λθ(v) := supλ

{vλ − lnϕθ(λ)}, ϕθ(λ) := Eeλθ = eλEe−λτ . (16.2.4)

Lemma 16.2.6.

(i) For all z � 0 and n � 1,

P(mink�n

t0k � −z

)� e−nΛθ(z/n). (16.2.5)

In particular,

(ii) if Eτ2 < ∞ then Λθ(v) � cv2 for v > 0 and some c > 0. Therefore, forall z � 0, n � 1,

P(mink�n

t0k � −z

)� e−cz2/n. (16.2.6)

(iii) Assume that condition [<] holds for a γ ∈ (1, 2). Then u(λ) := λ−1Vτ (λ−1)is a regularly varying function converging to zero as λ → 0 . The generalizedinverse of u(λ),

h(t) = u(−1)(t) := sup{λ : u(λ) � t}, (16.2.7)

has the form h(t) = t1/(γ−1)Lh(t), where Lh(t) is an s.v.f. as t → 0.

For n � 1 and z > 0, one has

P(mink�n

t0k � −z

)� e−czh(z/n). (16.2.8)

Proof. (i) The inequality (16.2.5) follows immediately from Lemma 2.1.1 (it isobtained by minimizing the right-hand side of (2.1.5) with respect to μ, see p. 82).

(ii) If Eτ2 < ∞ then there exists Λ′′θ (0) = Var τ . Moreover, as is well known,

the function Λθ(v) is convex (being the Legendre transform of a convex function,see e.g. § 8, Chapter 8 of [49]). Hence Λθ(v) � cv2 for v ∈ [0, 1] (Λθ(v) = ∞for v > 1). This proves (16.2.6).

Page 587: Asymptotic analysis of random walks

556 Extension to generalized renewal processes

(iii) Here the asymptotics of Λθ(v) as v → 0 will differ from that in the caseEτ2 < ∞. Integrating by parts yields the representation

Ee−λτ = 1 − λ

∞∫0

Fτ (t)e−λtdt = 1 − λ + λ2

∞∫0

F Iτ (t)e−λtdt,

where, by virtue of condition [<] and Theorem 1.1.4(iv),

F Iτ (t) :=

∞∫t

Fτ (u)du �∞∫t

Vτ (u)du =1 + ε(t)γ − 1

tVτ (t), ε(t) = o(1)

as t → ∞. Therefore, as λ → 0,

∞∫0

F Iτ (t)e−λtdt � 1

γ − 1

∞∫0

(1 + ε(t))tVτ (t)e−λtdt

=1 + o(1)γ − 1

∞∫0

tVτ (t)e−λtdt ∼ Γ(2 − γ)Vτ (1/λ)(γ − 1)λ2

by Theorem 1.1.5. Thus, for c > 0 large enough, we get

Ee−λτ � 1 − λ + cVτ (1/λ) � e−λ+cVτ (1/λ), λ ∈ [0, 1],

and therefore

ϕθ(λ) = eλEe−λτ � ecVτ (1/λ), λ ∈ [0, 1].

Again using Lemma 2.1.1, we obtain

P(mink�n

t0k � −z

)� e−λz+cnVτ (1/λ). (16.2.9)

The assertions of the theorem regarding the regular variation at zero of the func-tion u(λ) = λ−1Vτ (λ−1) and its inverse h(t) = u(−1)(t) are obvious. Further,for λ = h(z/cnγ) the exponent on the right-hand side of (16.2.9) equals

−λz + cnVτ (1/λ) = (−z + cnu(λ))λ

�(−z +

cnz

cnγ

)h

(z

cnγ

)= −z(γ − 1)

γh

(z

cnγ

)� c1zh

( z

n

)for z/n � 1, since h(t) is an r.v.f. as t → 0. This establishes (16.2.8) in thecase z � n. For z > n, the inequality (16.2.8) is trivial. Lemma 16.2.6 isproved.

The next result follows from Corollary 3.1.2 and Remark 4.1.5.

Page 588: Asymptotic analysis of random walks

16.2 Large deviation probabilities for S(T ) and S(T ) 557

Lemma 16.2.7. If condition [<] holds for a γ ∈ (1, 2)∪ (2,∞) then there existsa function ε(v) ↓ 0 as v ↓ 0 such that

supx,n: nVτ (x)�v

P(maxk�n t0k � x)

nVτ (x)� 1 + ε(v).

When γ = 2, this relation holds under the addition restriction that x � c√

n lnn,

c > 0 is fixed.

In the following lemma, V (t) is an arbitrary r.v.f.

Lemma 16.2.8.

(i) Let n+ := T + z0 and z0 = εx, where ε = ε(x) > 0, and let x � δT foran arbitrary fixed δ > 0. Then, for any k � 0 and m � 0, uniformly in thezone x � δT one has the relation∑

n>n+

nkP(ν(T ) = n) = o((TV (x))m) as x → ∞ (16.2.10)

provided that ε = ε(x) → 0 slowly enough as x → ∞.(ii) Let condition [<] be met with a γ ∈ (1, 2) and let n+ = T + z0, where z0 is

a solution to the asymptotic equation

z0h(z0/T ) ∼ c+ lnT (16.2.11)

for a sufficiently large c+; the function h(t) is defined in (16.2.7). Thenz0 = T 1/γLz(T ), where Lz is an s.v.f., and for any k � 0 and m � 0 therelation (16.2.10) holds uniformly in the zone x � cT , where c > 0 is anarbitrary fixed number.

Observe that, if condition [<] is satisfied with Lτ ≡ const then

z0 = cT 1/γ ln1−1/γ T.

Proof of Lemma 16.2.8. (i) Changing variables and integrating by parts yields thebound∑

n>n+

nkP(ν(T ) = n) �∞∫

n+

ukP(ν(T ) ∈ du)

= (T + z0)kP(ν(T ) > T + z0)

+ k

∞∫z0

(T + z)k−1P(ν(T ) > T + z) dz. (16.2.12)

Assuming for simplicity that T + z is an integer, we note that

{ν(T ) > T + z} ⊆ {tT+z < T} ={ ∑

j�T+z

θj > z

}, θj = 1 − τj .

Page 589: Asymptotic analysis of random walks

558 Extension to generalized renewal processes

Therefore, using the deviation function (16.2.4), we get from the Chebyshev in-equality the following bound. For z � z0,

P(ν(T ) > T + z) � P(T+z∑

j=1

θj > z

)� exp

{−(T + z)Λθ

( z

T + z

)}� exp{−(T + z)Θ(εδ)}, (16.2.13)

where Θ(u) := Λθ (u/(1 + u)) > 0 for u > 0 and we have used the fact thatthe function Θ(u) is increasing, u := z/T � z0/T = εx/T � εδ. Hence theright-hand side of (16.2.12) does not exceed

(T + εx)ke−(T+εx)Θ(εδ) + k

∞∫εx

(T + z)k−1e−(T+z)Θ(εδ)dz = o((TV (x))m),

(16.2.14)when ε → 0 slowly enough.

(ii) In this case the inequality in the first line of (16.2.13) and Lemma 16.2.6(iii)give the bound

P(ν(T ) > T + z) � exp{−czh

(z

T + z

)}. (16.2.15)

Since one can assume that z0 < T , we see that the right-hand side of (16.2.12) isbounded from above by

c1Tke−c2z0h(z0/T ) + c1T

k−1

T∫z0

e−c2z0h(z0/T )dz + c1

∞∫T

zk−1e−c3zdz

� 2c1Tk exp

{−c2c+

2lnT

}+ c4e

−c3T/2 = o(T−c5)

for any given c5, once c+ is large enough. As one has x � cT , the lemma isproved.

Proof of Theorem 16.2.1. As stated earlier (on p. 548), when considering the event{Sn � x}, one could make use of partial factorization. Letting

S0n := Sn − aξn ≡ Sn + qn

(the last relation holds by virtue of (16.2.3) and (16.1.16)), rewrite (16.1.15) as

P(S(T ) � x) =∑n�0

P(S0

n � x − q(T − n))P(ν(T ) = n)

=∑

n<n−

+∑

n−�n�n+

+∑

n>n+

, (16.2.16)

where the values n± are chosen according to the situation. It will be convenient

Page 590: Asymptotic analysis of random walks

16.2 Large deviation probabilities for S(T ) and S(T ) 559

to estimate these three sums separately, and, depending on the situation, to do thisin different ways. The contribution of the last sum will always be negligibly smalland the middle sum will reduce to the main term of the form H(T )V (x), whilethe first sum will contribute substantially only in those cases when the presenceof a long renewal interval can ‘help’ the trajectory of the process S(t) to exceedthe level x by the time T .

I. The case q � 0.

(i) In this part we put n± := T ± εx, where ε = ε(x) → 0 as x → ∞ slowlyenough that, uniformly in the specified zone of the T -values, one has

P(ν(T ) �∈ [n−, n+]

)= o(1) as x → ∞ (16.2.17)

(since x � δT , this is always possible owing to the law of large numbers forrenewal processes). As

x − q(T − n) � x � δT > δn for n < n−,

in this case by Corollaries 3.1.2 and 4.1.4 one has

P(S0

n � x − q(T − n))

= O(nV (x)) = O(TV (x)), n < n−, (16.2.18)

and hence we obtain from (16.2.17) that∑n<n−

= o(TV (x)).

Further, by Lemma 16.2.8(i) the following relation holds uniformly in the zonex � δT : ∑

n>n+

� P(ν(T ) > n+) = o(TV (x)), (16.2.19)

so that it remains only to estimate the middle sum in the second line of (16.2.16).Since without loss of generality we can assume that |qε| < 1/2 and εδ � 1, wehave for n � n+ that

x − q(T − n) � (1 + qε)x � x

2� δ

4

(T +

x

δ

)� δ

4n+ � δ

4n.

Therefore, by Theorems 3.4.1 and 4.4.1,∑n−�n�n+

= (1 + o(1))∑

n−�n�n+

nV (x − q(T − n))P(ν(T ) = n)

= (1 + o(1))V (x)∑

n−�n�n+

nP(ν(T ) = n)

= (1 + o(1))V (x)∑n>0

nP(ν(T ) = n) + o(TV (x)) ∼ H(T )V (x)

(16.2.20)

using the bounds from Lemma 16.2.8(i) and from (16.2.17). If T → ∞ thenH(T ) ∼ T by the renewal theorem. Part I(i) of the theorem is proved.

Page 591: Asymptotic analysis of random walks

560 Extension to generalized renewal processes

(ii) It follows from I(i) that here we could restrict ourselves to considering thecase x � cT (so that automatically T → ∞). Put n± = T ± εz0 (withoutloss of generality, one can assume for simplicity that n± are integers), whereε = ε(x) → 0 slowly enough as x → ∞ and z0 = T 1/γLz(T ) is defined inLemma 16.2.8(ii), and then again turn to the representation (16.2.16).

For n < n− we have x − q(T − n) � x, and (16.2.18) will hold by virtue ofthe conditions imposed on x. If we show that

TVτ (z0) → 0 (16.2.21)

then, when ε tends to zero slowly enough, we will also have TVτ (εz0) → 0 andtherefore, by Lemma 16.2.7,

P(ν(T ) < n−) = P(t0T−εz0

> εz0

)= O(TVτ (εz0)) = o(1). (16.2.22)

Thus for the first sum on the right-hand side of (16.2.16) we have∑n<n−

= O(TV (x)P(ν(T ) < n−)

)= o(TV (x)).

Now we will verify (16.2.21). To this end, consider a sequence of i.i.d. r.v.’s {τj}such that

P(τj � t) = Vτ (t), t � t0 > 0.

On the one hand, as for the original r.v.’s τj , we have the bound (16.2.8) for thesums t0

j :=∑k

j=1(τj −Eτj). Therefore, using an argument similar to that in theproof of Lemma 16.2.8(ii), for any given c > 0 one has

P(t0T � −z0

)= o(T−c), T → ∞. (16.2.23)

On the other hand, putting bτ (T ) := V(−1)τ (1/T ) one can easily see that, by

virtue of Theorem 1.5.1, the distribution of t0T /bτ (T ) converges as T → ∞ to

the stable law Fγ,1 with parameters γ and 1. Since the support of that law is thewhole real axis when γ > 1, this, together with (16.2.23), means that z0 � bτ (T )and hence (16.2.21) holds true.

Further, to prove (16.2.19) in the case under consideration we put m+ := T+z0

and write ∑n>n+

=∑

n+<n�m+

+∑

n>m+

. (16.2.24)

Owing to Corollary 3.1.2 and Lemma 16.2.6(iii), the first sum on the right-hand

Page 592: Asymptotic analysis of random walks

16.2 Large deviation probabilities for S(T ) and S(T ) 561

side does not exceed∑n+<n�m+

P(S0

n � x − |q|z0

)P(ν(T ) = n)

�∑

n+<n�m+

P(S0

n � cx)P(ν(T ) = n)

= O(TV (x)P(t0

T+εz0< −εz0)

)= O

(TV (x) exp{−c1εz0h(εz0/T )})

= O(TV (x) exp{−c2ε

1−γ lnT}) = o(TV (x)) (16.2.25)

(we have also used (16.2.11) and the fact that h(εz0/T ) ∼ ε−γh(z0/T ) providedthat ε → 0 slowly enough, which holds since h is an r.v.f.), while the second sumis o(TV (x)) owing to Lemma 16.2.8(ii).

Finally, for n ∈ [n−, n+],

x − q(T − n) ∼ x

in the specified zones of x values, and therefore one can evaluate the middlesum in the second line of (16.2.16) in the same way as in part I(i) but this timeusing (16.2.22) and Lemma 16.2.8(ii) to replace the sum

∑n−<n�n+

by∑

n>0.

(iii) First consider the case where α ∈ (1, 2) and Fτ (t) = o(V (t)) as t → ∞.We can again assume that x < cT (the case x � δT was dealt with in part I(i))and take n± = (1 ± ε)T , ε = ε(x) → 0 as x → ∞. It is obvious that thedistribution tail of τ is dominated by an r.v.f. Vτ (t) = o(V (t)) as t → ∞ andalso that if ε → 0 slowly enough then condition [QT ] will still hold when wereplace x by εT and the function V by Vτ . Hence from Lemma 16.2.7 we obtain

P(ν(T ) � n+) = P(tn+ � T ) = P(t0n+

� −εT )

= O(TVτ (εT )) = o(TV (x)),

P(ν(T ) < n−) = P(tn− > T ) = P(t0n− > εT )

= O(TVτ (εT )) = o(TV (x))

(when ε tends to 0 slowly enough), so that

P(ν(T ) �∈ [n−, n+)

)= o(TV (x)).

Moreover, for n ∈ [n−, n+) we have

P(S0

n � x − q(T − n)) ∼ nV (x).

Now the desired assertion can easily be obtained, using computations similar tothose in (16.2.20). We will only note that, in order to replace the sum∑

n−�n�n+

nP(ν(T ) = n) by∑n>0

nP(ν(T ) = n),

Page 593: Asymptotic analysis of random walks

562 Extension to generalized renewal processes

we will have to prove the required smallness of∑n>n+

nP(ν(T ) = n) = o(T ) (16.2.26)

in a somewhat different way. Namely, since as T → ∞ one has the conver-gence ν(T )/T → 1 a.s. by the law of large numbers for renewal processes, andsince Eν(T )/T = H(T )/T → 1 by the renewal theorem, we conclude that ther.v.’s ν(T )/T � 0 are uniformly integrable. Therefore, when ε tends to 0 slowlyenough, we get

E(

ν(T )T

;ν(T )

T> 1 + ε

)→ 0 as T → ∞,

which is equivalent to (16.2.26).When α > 2 and Eτ2 < ∞, put n± = T ± ε

√T lnT and let ε = ε(x) tend

to 0 slowly enough. Then again the relation (16.2.18) clearly holds for n < n−.Combining this with the observation that

P(ν(T ) �∈ [n−, n+]) = P(|ν(T ) − T | > ε

√T lnT

)→ 0 (16.2.27)

by the central limit theorem for renewal processes (see e.g. § 5, Chapter 9 of [49]),we have

∑n<n− = o(TV (x)). The relation

∑n>n+

= o(TV (x)) is establishedin the same way as in the proof of part I(ii), but this time we choose m+ =c1

√T lnT , where c1 is large enough, and use Lemma 16.2.6(i). Finally, for

n ∈ [n−, n+] we again have

x − q(T − n) ∼ x

when x > c√

T lnT (condition [ΦT ]), and the proof is completed in the sameway as in the previous parts of the theorem.

II. The case q > 0.

(i) If x − qT � δT then all the arguments from the proof of part I(i) remainvalid without any significant changes.

(ii) Turning to the representation (16.2.16) with n± = (1 ± ε)T (assuming forsimplicity that T and εT are integers; similar assumptions will be tacitly made inwhat follows as well), one can easily verify that, as in the proof of part I(i), onehas

∑n>n+

= o(TV (x)) (note that x ∼ qT in the case under consideration) and∑n−�n�n+

= (1 + o(1))TV (x).Thus it remains to consider∑

n<n−

= P(S0

ν(T ) � x∗ + qν(T ), ν(T ) < εT)

+ P(S0

ν(T ) � x∗ + qν(T ), εT � ν(T ) < n−). (16.2.28)

Page 594: Asymptotic analysis of random walks

16.2 Large deviation probabilities for S(T ) and S(T ) 563

Clearly, the last probability does not exceed

P(

maxn�n−

S0n � qεT

)P(ν(T ) < n−) � cTV (εT )P(t0

(1−ε)T > εT )

� cTV (εT ) T (εT )−γ

= o(TV (T )) (16.2.29)

when ε tends to 0 slowly enough (the first two relations in (16.2.29) follow fromCorollaries 3.1.2 and 4.1.4 and Lemma 16.2.7 respectively).

To estimate the first term on the right-hand side of (16.2.28), fix a small enoughδ > 0 and introduce events C(k), k � 0, of which the meaning is that for exactly k

of the r.v.’s τj , j � εT , one has τj � δT whereas for all the rest τj < δT . Clearly,the probability we require is equal to

P(S(T ) � x, ν(T ) < εT )

=∑

0�k��1/δ�+1

P(S(T ) � x, ν(T ) < εT ; C(k)

). (16.2.30)

We will show that the main contribution to the sum of probabilities on the right-hand side comes from the second summand (k = 1), which corresponds tothe presence of exactly one long renewal interval τj (covering most of the seg-ment [0, T ]). It is this large τj that will be responsible for ‘driving’ the trajectoryof the process S(t) = Sν(t)+qt along a straight line, with slope coefficient q > 0,beyond the level x = qT + x∗ by time T . The total contribution of all the otherterms (k �= 1) will be negligibly small relative to TV (x), and we already knowthat P(S(T ) � x

)� (1 + o(1))TV (x).

We will start with the case k = 0. By Theorems 3.1.1 and 4.1.2, for any givenb > 0 we have

P(S(T ) � x, ν(T ) < εT ;C(0)

)� P

(t0εT > (1 − ε)T, τi < δT, i � εT

)= O

((εTVτ (T ))m

)= O(T−b) (16.2.31)

provided that δ < 1/m, where m is large enough. Therefore this probability iso(TV (x)).

The main contribution to the sum (16.2.30), as we have already said, is fromthe term with k = 1. This case requires a detailed analysis. Consider the events

C(1)j := {τj � δT, τi < δT, i � εT, i �= j}, 1 � j � εT, (16.2.32)

so that C(1) =⋃

j�εT C(1)j . We are interested in the second term in (16.2.30),

Page 595: Asymptotic analysis of random walks

564 Extension to generalized renewal processes

namely,∑j<εT

P(S0

ν(T ) � x∗ + qν(T ), ν(T ) < εT ; C(1)j

)

= O

⎛⎝∑j<εT

P(ν(T ) < j, τi < δT, i � j)

⎞⎠+∑

j<εT

P(S0j � x∗ + qj)P

(ν(T ) = j; C

(1)j+1

)+∑

j<εT

∑j�n<εT

P(S0

n � x∗ + qn)P(ν(T ) = n; C

(1)j

). (16.2.33)

The first term on the right-hand side of (16.2.33) corresponds to the situation whenthe first overshoot tn+1 > T occurs for a ‘small’ n (< εT ) and all ‘small’ τi,i � n + 1 (not exceeding δT ). As we know, the probability of such an eventis very small. The second term corresponds to the case when the first overshoottj+1 > T occurs due to a very long last renewal interval τj+1, all the previous τi

being less than δT , i � j. This situation, as one could expect, will be the mostprobable and will give the main contribution to the probability we require. Finally,the last term on the right-hand side of (16.2.33) corresponds to the following case:the first overshoot tn+1 > T occurs on a short renewal interval τn+1 < δT , butone of the previous renewal intervals τj was long (τj � δT ) while all the restwere short (τi < δT for i � n, i �= j). Since, as we know, the sum of theseshort τi will be small with a high probability, for this event to occur we will needthe value of τj to hit a relatively small neighbourhood of the point T , which isunlikely when the distribution of τj is regular. Now we will formally estimate allthe terms on the right-hand side of (16.2.33).

For any fixed b > 0, the first term on the right-hand side of (16.2.33) will notexceed

c∑

j<εT

P(t0j > T − j, τi < δT, i � j

)� c1

∑j<εT

(jVτ (T ))m � c2(εT )m+1Vτ (T )m = o(T−b) (16.2.34)

by virtue of Theorems 3.1.1 and 4.1.2, when δ < 1/m and m is large enough.Therefore this term is negligible.

The last term in (16.2.33) is also negligible. Indeed, one can easily see that

supn

P(S0

n � x∗ + qn)

� P(sup

n(S0

n − qn) � x∗)� x∗V (x∗) (16.2.35)

by Theorem 16.1.3, the maximum being attained at a value n � x∗ (we writea � b if a = O(b), b = O(a)), and also that, for n > j > x∗, one has

P(S0n � x∗ + qn) ∼ nV (x∗ + qn) � cjV (x∗ + qj).

Page 596: Asymptotic analysis of random walks

16.2 Large deviation probabilities for S(T ) and S(T ) 565

Therefore, the above-mentioned term does not exceed

c∑j�x∗

x∗V (x∗)P(j � ν(T ) < εT ; C(1)j )

+ c∑

x∗<j<εT

jV (x∗ + qj)P(j � ν(T ) < εT ; C(1)j ). (16.2.36)

Now note that {j � ν(T ) < εT} = {tj � T < tεT }, so that in the case whentj−1+(tεT −tj) < 2εT, for this event to occur one requires (1−2ε)T � τj < T .Hence in each sum in (16.2.36) we can give the following upper bound for theprobabilities:

P(j � ν(T ) < εT ; C

(1)j

)� P

(tj−1 + (tεT − tj) � 2εT ; C

(1)j

)+ P

(tj−1 + (tεT − tj) < 2εT, (1 − 2ε)T � τj < T

)� P

(t0εT−1 � εT + 1, ti < δT, i < εT

)+ P

((1 − 2ε)T � τj < T

)= o(T−b) + Vτ ((1 − 2ε)T ) − Vτ (T ) = o(Vτ (T ))

similarly to (16.2.34) and according to the assumption on the regular behaviourof the tail Vτ (t). Therefore the entire sum (16.2.36) is negligibly small relative to

Vτ (T )

⎛⎝x2∗V (x∗) +

∑x∗<j<εT

jV (x∗ + qj)

⎞⎠ . (16.2.37)

Here by Theorem 1.1.4(iv) we get for α > 2

∑x∗<j<εT

jV (x∗ + qj) � c

∞∫x∗

t−α+1L(x∗ + qt) dt

∼ c1x−α+2∗ L(x∗) = c1x

2∗V (x∗),

and for α ∈ (1, 2)

∑x∗<j<εT

jV (x∗ + qj) � c

εT∫1

t−α+1L(x∗ + qt) dt ∼ c1(εT )2V (εT ).

(16.2.38)Using a bound similar to (16.2.29) we find that the corresponding contributionto the sum (16.2.37) will be o(TV (T )). Thus we have shown that the last sumin (16.2.33) is

o(x2∗V (x∗)Vτ (T ) + TV (T )

).

Now consider the middle sum on the right-hand side of (16.2.33). As above,

Page 597: Asymptotic analysis of random walks

566 Extension to generalized renewal processes

one can easily show that for j < εT and for any fixed b > 0 one has

P(ν(T ) = j; C

(1)j+1

)= P

(tj < 2εT tj+1 � T ; C

(1)j+1

)+ o(T−b)

= (1 + o(1))Vτ (T )P(τ1 � δT, . . . , τεT � δT ) + o(T−b)

∼ Vτ (T ), (16.2.39)

since

P(τ1 < δT )εT = (1 − Vτ (δT ))εT = exp{−εTVτ (δT )(1 + o(1))} → 1.

Hence the above-mentioned middle sum is

(1 + o(1))Vτ (T )∑

j<εT

jV (x∗ + qj).

It follows from (16.2.38) that for α ∈ (1, 2) this expression is o(TV (T )) (whenε → 0 slowly enough). For α > 2 one has

∑j<εT

jV (x∗ + qj) ∼ 1q

εT∫0

((x∗ + qt)V (x∗ + qt) − x∗V (x∗ + qt)

)dt

= q−2

x∗+qεT∫x∗

(uV (u) − x∗V (u)

)du ∼

∼ q−2

(x2∗V (x∗)α − 2

− x2∗V (x∗)α − 1

)=

x2∗V (x∗)

q2(α − 1)(α − 2)

by Theorem 1.1.4(iv), when ε → 0 slowly enough (so that x∗ = o(εT )). Thuswe have shown that, under our assumptions, one has

P(S(T ) � x, ν(T ) < εT ; C(1))

= (1 + o(1))x2∗V (x∗)Vτ (T )

q2(α − 1)(α − 2)+ o(TV (T )), (16.2.40)

where, clearly, for α ∈ (1, 2) the first term on the right-hand side is o(TV (T )),so that its ‘power component’ has the form

x2−α∗ T−γ = (x∗/T )2−αT 2−γ−α = o(T−(γ−1) × T 1−α), γ > 1.

Now consider the term with k = 2 in the sum (16.2.30) (it corresponds to thepresence of two large values among the τj , j � εT ). In this case, the randomwalk {tn} can first cross the level T in the following alternative ways:

(1) by the first of the two ‘large’ jumps τj � δT , or(2) by the second of these jumps, or(3) by one of the ‘small’ jumps τj < δT .

Page 598: Asymptotic analysis of random walks

16.2 Large deviation probabilities for S(T ) and S(T ) 567

In case (1) it can be shown as before that the probability of the event{S(T ) � x, τ1 < δT, . . . , τν(T ) < δT, τν(T )+1 � δT, ν(T ) < εT

}will be of the order of the value (16.2.40) that we found in the case k = 1.Since the event C(2) also includes another large jump τn with n > ν(T ) + 1, wewill obtain, owing to the mutual independence of the r.v.’s, a value of the orderof magnitude of the quantity (16.2.40) multiplied by TVτ (T ). In a similar wayone can show that the contribution corresponding to case (2), will also be small.In case (3), the probability that the random walk {tn} will exceed the level T inthe absence of large jumps τj > δT , j � ν(T ) < εT , will be very small (byTheorems 3.1.1 and 4.1.2). In the same way one can consider the case k � 3.Since the total number of probabilities in the sum (16.2.30) corresponding to thatcase is bounded, their total contribution will be small relative to (16.2.40). Thisobservation completes the proof of part II(ii) of the theorem.

(iii) In this case, the only difference from the argument proving part I of thetheorem is that we have to give another bound for

∑n<n− . Since the relation

Fτ (t) = o(V (t)) implies that the conditions of Lemma 16.2.7 are met for somer.v.f. Vτ (t) = o

(V (t)

), we have for x � δT , say, that∑

n<n−

� P(ν(T ) < n−) � P(maxk�T

t0k > εx

)= O(TVτ (εx)) = o(TV (x)), (16.2.41)

when ε → 0 slowly enough. The case x < δT can be considered in a way similarto that used to prove part I(iii).

(iv) In this case, one can estimate the sum∑

n�n− for n− = T−εx in the sameway as above, which leads to the expression (1 + o(1))TV (x) (since T → ∞ inthe case under consideration, we have H(T ) ∼ T ). However, the sum

∑n<n−

can now contribute substantially to the probability of the event {S(T ) � x},owing to the possibility of having a very long renewal interval τj .

It will be convenient for us to consider separately two alternative cases for thevalues T and x.

• The case x∗ < −δx, where δ > 0 is fixed

First we obtain an upper bound for∑

n<n− . To this end, put Y := maxk�T S0k

and note that, under the conditions on x stated in the theorem, one has the asymp-totic relation

P(Y > x/2) = O(TV (x)) (16.2.42)

by virtue of Corollaries 3.1.2 and 4.1.4. Since P(ν(T ) < n−) = o(1) owing to(16.2.22), by virtue of the previous bound and the independence of the sequences

Page 599: Asymptotic analysis of random walks

568 Extension to generalized renewal processes

{τj} and {ξj} we have∑n<n−

= P(S0

ν(T ) + q(T − ν(T )) � x, ν(T ) < n−, Y � x/2)

+ o(TV (x)),

and it remains to estimate the probability on the right-hand side of this equality.Denoting it by P1, we get

P1 � P(

Y + q(T − ν(T )) > x, Y � x

2

)= P

(ν(T ) < T − x − Y

q, Y � x

2

)= P

(t0T−(x−Y )/q >

x − Y

q, Y � x

2

)

=

x/2∫0

P(t0T−(x−z)/q >

x − z

q

)P(Y ∈ dz)

∼x/2∫0

(T − x − z

q

)Vτ

(x − z

q

)P(Y ∈ dz)

using the asymptotic relation from Theorem 3.4.1 for distribution tails of the sumst0n (condition [Q] holds for the r.v.’s τ for x � T 1/γLz(T ) owing to (16.2.21)).

Since, when ε tends to 0 slowly enough, the probability P(Y > εx) vanishes byCorollaries 3.1.2 and 4.1.4, we obtain that

∫ x/2

0∼ ∫ εx

0and hence

P1 � (T − x/q) Vτ (x/q)(1 + o(1)). (16.2.43)

At the same time, as P(mink�T S0

k � −εx)

= o(1) when ε tends to 0 slowlyenough (for α > 2 this follows from the Kolmogorov inequality and for α ∈ (1, 2)it follows from the bounds of Corollary 3.1.3), we obtain∑

n<n−

� P(S0

ν(T ) + q(T − ν(T )) � x, ν(T ) < n−, mink�T

S0k > −εx

)� P

(−εx + q(T − ν(T )) > x, ν(T ) < n−)P(mink�T

S0k > −εx

)∼ P

(ν(T ) < T − (1 − ε)x/q

)= P

(t0T−(1−ε)x/q > (1 − ε)x/q

)∼ (T − x/q)Vτ (x/q),

where the last relation follows from Theorems 3.4.1 and 4.4.1 (for γ ∈ (1, 2) oneshould use (16.2.21)) and from the fact that T − (1 − ε)x/q → ∞ in the caseunder consideration when −x∗ = q(T − x/q) > δx.

Thus, ∑n<n−

∼ (T − x/q)Vτ (x/q),

Page 600: Asymptotic analysis of random walks

16.2 Large deviation probabilities for S(T ) and S(T ) 569

which completes our analysis of the case x∗ < −δx.

• The case x∗ = o(x).

Note that T ∼ x/q in this case, and put

nT := T − x/q, nT+ := (1 + ε)nT ,

where ε tends to 0 slowly enough as x → ∞. As in the first case, we simply haveto estimate the sum∑

n<n−

=∑

n<nT+

+∑

nT+�n<nT +εT

+∑

nT +εT�n<n−

. (16.2.44)

We begin with the second sum on the right-hand side. Observing that n − nT �nε/(1 + ε) for n � nT+ and also that

{ν(T ) < z} = {ν(T ) < �z�} = {t�z� > T},where �z� = min{k � z : k ∈ Z}, we obtain∑

nT+�n<nT +εT

=∑

nT+�n<nT +εT

P(S0

n � q(n − nT ))P(ν(T ) = n)

�∑

nT+�n<nT +εT

P(S0

n � qnε/(1 + ε))P(ν(T ) = n)

� c

nT +εT∫nT+

zV (εz) dzP(ν(T ) < z)

= c

nT +εT∫nT+

zV (εz) dzP(t0�z� > T − �z�)

= czV (εz)P(t0�z� > T − �z�)∣∣∣∣nT +εT

nT+

− c

nT +εT∫nT+

P(t0�z� > T − �z�) dz(zV (εz)) (16.2.45)

by integrating by parts. Since we have T − z � x/q − εT ∼ x/q � z in theintegration interval, by Theorems 3.4.1 and 4.4.1 one has the relation

P(t0�z� > T − �z�) ∼ zVτ (T − z) ∼ zVτ (x/q) = O(zVτ (T ))

for z ∈ [nT+, nT + εT ]. Hence the first term in the last line of (16.2.45) is

O

(z2V (εz)Vτ (T )

∣∣∣∣nT +εT

nT+

)= O

((εT )2V (ε2T )Vτ (T ) + n2

T V (nT )Vτ (T ))

= o(TV (T ) + nT Vτ (T )

)(16.2.46)

Page 601: Asymptotic analysis of random walks

570 Extension to generalized renewal processes

(recall that Vτ (T ) = T−γLτ (T ) with γ > 1 and ε → 0 slowly enough). Thesecond term in the last line of (16.2.45) is of the order of

Vτ (T )

nT +εT∫nT+

z dz(zV (εz))

= Vτ (T )

⎛⎝z2V (εz)∣∣∣∣nT +εT

nT+

−nT +εT∫nT+

zV (εz) dz

⎞⎠ ,

and since, by Theorem 1.1.4(iv),

b∫a

zV (εz) dz = O(a2V (εa) − b2V (εb)

), aε, bε → ∞,

this term is also bounded by the right-hand side of (16.2.46).Further, the last sum in (16.2.44) can be estimated as follows:∑nT +εT�n<n−

P(S0

n � q(n − nT ))P(ν(T ) = n)

�∑

nT +εT�n<n−

P(S0

n � qεT)P(ν(T ) = n)

= O(TV (εT )P(ν(T ) < n−)

)= O

(TV (εT )P(t0

n− > εx))

= O(T 2V (εT )Vτ (εT )

)= o(TV (T )), (16.2.47)

cf. (16.2.46) (to get the second to last relation we used (16.2.21)).To estimate the first sum

∑n<nT+

on the right-hand side of (16.2.44), intro-duce the event

D :={

maxk�nT+

|S0k | � εnT

}and note that, by the law of large numbers, the probability P(Dc) tends to 0when ε vanishes slowly enough as T → ∞. Therefore

P(ν(T ) < nT+; D

)= P

(t0nT+

> (1 + ε)x/q − εT)P(D)

= O(nT Vτ (x)P(D)

)= o(nT Vτ (x))

by virtue of independence of the sequences {τk} and {ξj}. Hence∑n<nT+

= P(S0

ν(T ) + q(T − ν(T )) � x, ν(T ) < nT+; D)

+ o(nT Vτ (T )),

and it remains to estimate the probability on the right-hand side of the aboverelation, which we will denote by P2. Let z± = (x ± εnT )/q and note that, onthe one hand,

P2 � P(εnT + q(T − ν(T )) > x

)= P(ν(T ) < T − z−).

Page 602: Asymptotic analysis of random walks

16.2 Large deviation probabilities for S(T ) and S(T ) 571

On the other hand, since T − z+ = (1 − ε/q)nT < nT+, we have

P2 � P(−εnT + q(T − ν(T )) > x, ν(T ) < nT+; D

)= P(ν(T ) < T − z+)P(D) ∼ P(ν(T ) < T − z+).

Now we use the fact that z± ∼ x/q → ∞ and T − z± = nT (1 ∓ ε/q) ∼T − x/q = −x∗ → ∞, while T − z± = o(x) = o(z±), and therefore

P(ν(T ) < T − z±) = P(t0T−z± > z±)

∼ (T − z±) Vτ (z±) ∼ (T − x/q) Vτ (x/q)

by Theorems 3.4.1 and 4.4.1. This relation, together with (16.2.44)–(16.2.47),immediately implies that∑

n<n−

= (1 + o(1))P2 + o(TV (x) + nT Vτ (x))

= (1 + o(1))(T − x/q) Vτ (x/q) + o(TV (x)),

which completes the proof of Theorem 16.2.1.

Proof of Theorem 16.2.3. It is easiest to get the desired result by using the obvi-ous relation

P(S(T ) � x

)� P(S(T ) � x) (16.2.48)

and then establishing an upper bound of the form

P(S(T ) � x

)� (1 + o(1))P(S(T ) � x). (16.2.49)

This relation holds, roughly speaking, because, given the event {S(T ) � x}, itis rather unlikely that the quantity S(T ) will be noticeably lower than the (high)level x: such a ‘dive’ of the trajectory {S(t)} after it has achieved the level x

would mean the presence of yet another (independent) large deviation on the timeinterval [0, T ].

To simplify the exposition, we will again assume that aτ = 1. First considercase I(i) in Theorem 16.2.1 (q � 0, x � δT ). Clearly, if we put

ηx := inf{n > 0 : S(tn) � x},then, by virtue of the asymptotics of P(S(T ) � x) found in Theorem 16.2.1, I(i),

P(S(T ) � x)

= P(S(T ) � x, S(T ) � (1 − ε)x

)+ P

(S(T ) � x, S(T ) < (1 − ε)x

)� P

(S(T ) � (1 − ε)x

)+∑n�1

P(ηx = n, tn < T ; S(T ) − S(tn) < −εx

)� (1 + o(1))P(S(T ) � x) + P

(inft�T

S(t) < −εx)P(S(T ) � x

),

Page 603: Asymptotic analysis of random walks

572 Extension to generalized renewal processes

where we also made use of the independence of the segments {S(t); t � tn} and{S(tn + t)−S(tn); t � 0} of the trajectory of the process. The desired assertionnow follows from the convergence

P(

inft�T

S(t) < −εx)

� P(

minn�2T

S(tn) − |q| maxn�2T+1

τn < −εx)

+ P(ν(T ) > 2T )

� P(

minn�2T

S(tn) < −εx

2

)+ P

(max

n�2T+1τn >

εx

2|q|)

+ P(ν(T ) > 2T ) → 0

as T → ∞, which holds owing to the law of large numbers and the obviousrelation

P(

maxn�2T+1

τn >εx

2|q|)

= O(TVτ (εx)) = o(1),

when ε tends to 0 slowly enough.

Cases I(ii), I(iii) and II(iii) are dealt with in almost the same way. In case II(i),we have q > 0, and the crossing of the level x is possible not only by a jump butalso on a linear growth interval. The number of the renewal interval on which theprocess {S(t)} first crosses the level x is equal to

ηx := inf{n > 0 : max{S(tn), S(tn) − ξn)} > x

}.

Next we have to make use of the inequality

P(ηx = n, tn < T, S(T ) − max{S(tn), S(tn) − ξn} < −εx

)� P

(ηx = n, tn < T, inf

t�TS(t) − |ξ| < −εx

),

where ξ is independent of {S(t)}. All the subsequent calculations have to bemodified accordingly.

In case II(ii), when x∗ = x− qT = o(T ), such a simple approach proves to beinsufficient. We will again begin with inequality (16.2.48), but to derive (16.2.49)we have to repeat all the steps in the proof of Theorem 16.2.1, this time just toderive an upper bound only for P(S(T ) � x). The modifications that have to bemade on the way are relatively minor and do not cause any particular difficulties.

Thus, instead of the bounds for the probability

P(S(T ) � x, εT � ν(T ) � n−

)= P

(S0

ν(T ) � x∗ + qν(T ), εT � ν(T ) � n−)

in (16.2.28), (16.2.29) (with n− = (1 − ε)T ), we could use the following obser-vations to bound a similar expression for S(T ). Put T ′ := (1 − ε/4)T (againassuming for simplicity that both T and T ′ are integers). Then:

Page 604: Asymptotic analysis of random walks

16.2 Large deviation probabilities for S(T ) and S(T ) 573

(1) by Lemma 16.2.6(i) one has

P(ν(T ) − ν(T ′) > εT/2

)� P

(ν(εT/4) � εT/2

)= P

(tεT/2 � εT/4

)= P

(t0εT/2 � −εT/4

)� e−cεT ;

(2) one has the following inclusion:{S(T ′) � x, ν(T ) � n−

}={Sν(t) � x∗ + q(T − t) for some t � T ′, ν(T ) � n−

}⊂{

maxn�n−

Sn � qεT/4, ν(T ) � n−}

;

(3) if ε tends to 0 slowly enough then x − qT ′ � 0 and one has{max

t∈[T ′,T ]S(t) � x, εT � ν(T ) � n−, ν(T ) − ν(T ′) � εT/2

}⊂{

S0ν(t) � x − qt + qν(t) for some t ∈ [T ′, T ],

εT � ν(T ) � n−, ν(T ′) � εT/2}

⊆{

maxn�n−

S0n � qεT/2; ν(T ) � n−

}.

From (1)–(3) it clearly follows that

P(S(T ) � x, εT � ν(T ) � n−

)� P

(ν(T ) − ν(T ′) > εT/2

)+ P

(S(T ′) � x, ν(T ) � n−

)+ P

(max

t∈[T ′,T ]S(t) � x, εT � ν(T ) � n−, ν(T ) − ν(T ′) � εT/2

)� e−cεT + P

(maxn�n−

Sn � qεT/4)P(ν(T ) � n−)

+ P(

maxn�n−

S0n � qεT/2; ν(T ) � n−

)� e−cεT + 2P

(maxn�n−

Sn � qεT/4)P(ν(T ) � n−) = o(TV (T ))

(16.2.50)

by virtue of (16.2.29), when ε vanishes slowly enough.The main contribution to an analogue of the right-hand side of (16.2.33) for

S(T ) will also be from the middle sum, which can be estimated as follows: since

{S(T ) � x} ={

supt�T

(Sν(t) + qt) � x}⊂{

maxj�ν(T )

Sj � x∗}

,

Page 605: Asymptotic analysis of random walks

574 Extension to generalized renewal processes

we have ∑j<εT

P(S(T ) � x; ν(T ) = j, C

(1)j+1

)�∑

j<εT

P(maxn�j

(S0n − qn) � x∗

)P(ν(T ) = j; C

(1)j+1

)� (1 + o(1))Vτ (T )

∑j<εT

jV (x∗ + qj) (16.2.51)

by (16.1.12) and (16.2.39).After that, the argument proceeds as in the proof of Theorem 16.2.1 in case II(ii).

In case II(iv) one can modify the argument from the proof of Theorem 16.2.1 in asimilar way.

Proof. The proof of Theorem 16.2.4 in the case q � 0 follows the argument inthe proof of Theorem 16.2.1, case I(i). There exists a function ψ1(t) such thatψ1(t)/ψ(t) → ∞ as t → ∞ and, moreover, the ψ-l.c. function F+(t) is ψ1-l.c.as well (cf. the remark made after Definition 1.2.7, p.18).

Let n± := T ± ψ1(x). Since ψ(x) > c√

T (see (4.8.2)) and Eτ 2 < ∞by the conditions of the theorem, we see that (16.2.17) will hold true owing tothe Chebyshev inequality (or the central limit theorem for renewal processes).Further, (16.2.18) and the first equality in (16.2.20) (with V replaced in themby F+) hold true owing to the choice of n± and the fact that F+ is a ψ1-l.c.function. The last equality in (16.2.20) follows from (16.2.17) and the observationthat, cf. (16.2.12), we have the following bound, using Lemma 16.2.6(ii):∑

n>n+

nP(ν(T ) = n)

� (T + ψ1(x))P(ν(T ) > T + ψ1(x)) +

∞∫ψ1(x)

P(ν(T ) > T + z) dz

� (T + ψ1(x)) exp{− c1ψ

21(x)

T + ψ1(x)

}+

∞∫ψ1(x)

exp{− c1z

2

T + z

}dz = o(T ).

Therefore P(Sn � x) ∼ H(T )F+(x) in the case under consideration.The modifications that one needs to make in the proof in the case q < 0 and

also when considering the asymptotics of P(Sn � x) are equally elementary.The theorem is proved.

16.3 Asymptotic expansions

In this section we will state and proof an assertion establishing for the probabil-ities P(S(T ) � x) asymptotic expansions similar to those obtained in § 4.4 for

Page 606: Asymptotic analysis of random walks

16.3 Asymptotic expansions 575

random walks. Our approach will consist in using partial factorization and re-lations of the form (16.2.16) in conjunction with the above-mentioned results forrandom walks. As in § 4.4, we will need additional conditions of the form [D(k,q)]on the distribution of the random jumps ξj (see p. 199).

Moreover, we will also need additional conditions on the distribution Fτ . Firstof all, since the asymptotic formulae for P(S(T ) � x) will include the varianceof ν(T ), the minimum moment condition on τ will be

aτ,2 := Eτ2 < ∞.

In the case when qT > x and thus the event {S(T ) � x} could occur becauseof the presence of large intervals between the jumps in the process, we will alsorequire the tail Fτ (t) to be regular.

Since we will only be considering events of the form {S(T ) � x}, we canagain assume without loss of generality that the mean trend in the process is equalto zero (assumption (16.1.16)). Recall that H(t) = Eν(t) denotes the renewalfunction for the process {ν(t)}, and put Hm(t) := Eνm(t), m � 2.

Theorem 16.3.1. Suppose that the distribution of ξ is non-lattice, conditions(16.1.3) hold for α > 2, d = Var ξ < ∞ and the right tail of the distributionof ξ satisfies condition [D(2,0)] and also that aτ,2 < ∞ and the mean trend in theprocess

{S(t)

}is equal to zero. Let δ > 0 be an arbitrary fixed number.

I. In the case q � 0 the following assertions hold true.

(i) Uniformly in T , satisfying x � δT, as x → ∞,

P(S(T ) � x)

= V (x)

{H(T ) +

qaτL1(x)x

[(T

aτ+ 1

)H(T ) − H2(T )

]

+α(α + 1)

2x2

[(H2(T ) − H(T ))d + q2a2

τ

((T

aτ+ 1

)2

H(T )

− 2(

T

aτ+ 1

)H2(T ) + H3(T )

)]+ o((1 + T 2)/x2)

}.

(16.3.1)

(ii) If T → ∞ then, uniformly in x � δT ,

P(S(T ) � x) = V (x)

{T

aτ+(

aτ,2

2a2τ

− 1)− 3αqT

x

(aτ,2

2a2τ

− 1)

+α(α + 1)T 2

2x2

(d

a2τ

+ q2

(aτ,2

a2τ

− 1))

+ o(1)

}.

(16.3.2)

Page 607: Asymptotic analysis of random walks

576 Extension to generalized renewal processes

If Fτ is a lattice distribution then the relation (16.3.2) holds for T

values from the lattice.

II. In the case q < 0, the following assertions hold true.

(i) The relations (16.3.1) and (16.3.2) remain true uniformly in the zonex � (q + δ)T .

(ii) If Fτ ∈ R with exponent γ > 2 (see (16.1.4)) then, for x � δT ,T − x/q → ∞, the relation (16.3.2) holds with an additional term onthe right-hand side of the form

a−1τ

(T − x

q

)Vτ

(x

q

)(1 + o(1)), T → ∞. (16.3.3)

If, moreover, E|ξ|3 < ∞ and condition [D(1,0)] holds for the distribu-tion tail of τ then the remainder term o(1) in (16.3.3) can be replacedby o((T − x/q)−1/2).

Remark 16.3.2. Repeating the argument from the proof of Theorem 16.3.1, onecan easily verify that, under condition [D(1,0)] on the right distribution tail of ξ,the error in the approximation of the probabilities of the form P(Sn � x) provesto be too large and so ‘masks’ the effects introduced by the randomness of thejump epochs in the process {S(t)}. Therefore, non-trivial asymptotic expansionsfor P(S(T ) � x) can be obtained only by imposing condition [D(2,0)] on V (t) =P(ξ � t) and assuming that E(ξ2 + τ2) < ∞. However, imposing additionalconditions (say, [D(3,0)] on the right distribution tail of ξ and/or Eτ3 < ∞) doesnot lead to any substantial improvement of the results.

Proof of Theorem 16.3.1. Cases I and II(i). We will again make use of the decom-position (16.2.16) with n± = T ± εx, where ε → 0 slowly enough as x → ∞(with the natural convention that n− = 0 for T − εx � 0). We will also assume,as usual, that aτ = 1 (so that aξ = −q by (16.1.16)) and that d = 1.

To estimate the sum∑

n<n− first note that, for n � n+, one has

x − q(T − n) �

⎧⎨⎩ (1 − |q|ε)x for q � 0,

x − qT � δx

q + δfor q > 0, x − qT � δT.

(16.3.4)

Hence, using Lemma 16.2.7 (with γ = 2) to bound the probability P(t0n− > εx)

in the following formula, we obtain∑n<n−

� maxn�n−

P(S0

n > cx)P(ν(T ) < n−)

= O(TV (x)P

(t0n− > εx

))= O

(TV (x) n−Vτ (εx)

)= O

(T 2V (x)x−2ε−2Lτ (εx)

)= o

(T 2V (x)x−2

)(16.3.5)

when ε vanishes slowly enough.

Page 608: Asymptotic analysis of random walks

16.3 Asymptotic expansions 577

Moreover, by Lemma 16.2.6(ii) the last sum in (16.2.16) can be bounded asfollows:∑

n>n+

� P(ν(T ) > n+) � exp{− cε2x2

T + εx

}= o(T 2V (x)x−2), (16.3.6)

and so it remains to consider the middle sum∑

n−�n�n+.

Using the obvious inequality P(ξ − aξ > t) = V (t + aξ) = V (t − q) andour observation (16.3.4), we have from condition [D(2,0)] for V (t) and Corol-lary 4.4.5(i) (p.200) the following relation for all n− � n � n+ and the valuesof x specified in the conditions of the theorem:

P(S0

n � x − q(T − n))

= nV

(x

[1 − q

x(T − n + 1)

])[1 +

α(α + 1)(n − 1)d2(x − q(T − n))2

(1 + o(1))]

= nV (x)

[1 + L1(x)

q(T − n + 1)x

+α(α + 1)

2(1 + o(1))

(q(T − n + 1)

x

)2]

×[1 +

α(α + 1)(n − 1)d2(x − q(T − n))2

(1 + o(1))]

= nV (x)

{1 + L1(x)

q(T − n + 1)x

+α(α + 1)

2

[(q(T − n + 1)

x

)2

+n − 1x2

d

]× (1 + o(1)) + O

((|T − n| + 1)nx−3

)}(16.3.7)

(note that on the right-hand side of the first line we have nV (x(1 + Δ)), whereΔ := −q(T − n + 1)/x is indeed o(1) for n− � n � n+, x → ∞; we haveused this to substitute the representation for V (x(1+Δ)) from condition [D(2,0)],taking into account (4.4.9)).

If we substitute the expressions for the respective probabilities into the sum∑n−�n�n+

from (16.2.16) and replace that sum by a sum over all n � 0, theerror introduced thereby will be of the order of

V (x)

( ∑n<n−

+∑

n>n+

)(n +

n|T − n|x

+n2 + n(T − n)2

x2

). (16.3.8)

Here the first sum, by virtue of Lemma 16.2.7 (with γ = 2), is

O

((T +

T 2

x+

T 3

x2

)P(ν(T ) < n−)

)= O

(T P(t0

T−εx > εx))

= o(T 2x−2)

Page 609: Asymptotic analysis of random walks

578 Extension to generalized renewal processes

(cf. (16.3.5)), while the second sum is

O

( ∑n>n+

(n +

n2

x+

n3

x2

)P(ν(T ) = n)

)= o(T 2x−2)

by Lemma 16.2.8(i), so that that the total error (16.3.8) is o(T 2V (x)x−2).Thus we obtain from (16.2.16) and (16.3.5)–(16.3.8) that

P(S(T ) � x)

= V (x){Eν(T ) +

qL1(x)x

E(ν(T )(T − ν(T ) + 1)

)+

α(α + 1)2x2

[q2E

(ν(T )(T − ν(T ) + 1)2

)+ (Eν2(T ) − Eν(T ))d

](1 + o(1))

+ O(x−3E(ν2(T )(|T − ν(T )| + 1))

)}+ o(T 2V (x)x−2).

(16.3.9)

It remains to estimate the remainder term O(·) (this will complete the proof of therepresentation (16.3.1) in cases I(i) and II(i)) and also the coefficients expressedin terms of the expectations in the case when T → ∞. We will start with the latterproblem and recall the well-known fact that, for aτ = 1 and aτ,2 < ∞,

Eν(T ) = T + (aτ,2/2 − 1) + o(1), (16.3.10)

Var ν(T ) = (aτ,2 − 1)T + o(T ) (16.3.11)

(see e.g. § 12, Chapter XIII of [121] and § 4, Chapter XI of [122]; in the latticecase it is assumed that T belongs to the lattice). Hence

E[ν(T )(T − ν(T ) + 1)

]= −Var ν(T ) − (Eν(T ))2 + (T + 1)Eν(T )

= 3(1 − aτ,2/2)T + o(T ), (16.3.12)

Eν2(T ) = T 2(1 + o(1)).

Now we estimate the moment E[ν(T )(T − ν(T ) + 1)2]. By the law of largenumbers and the central limit theorem for renewal processes (see e.g. § 5, Chap-ter 9 of [49]), as T → ∞, we have

ν(T )T

→ 1 a.s., ζT :=T − ν(T ) + 1√

T⊂=⇒N(0, aτ,2 − 1), (16.3.13)

where the last relation means that the distributions of ζT converge weakly to theindicated normal law. Note that this weak convergence takes place together withthe convergence of the first and second moments of ζT (cf. (16.3.10)–(16.3.11)),

Page 610: Asymptotic analysis of random walks

16.3 Asymptotic expansions 579

so that, in particular, the r.v.’s ζ2T are uniformly integrable. Now write

T−2E[ν(T )(T − ν(T ) + 1)2

]= E

[ν(T )T

ζ2T ;∣∣∣∣ν(T )

T− 1

∣∣∣∣ � ε

]+ E

[ν(T )

Tζ2T ;

ν(T )T

< 1 − ε

]+ E

[ν(T )

Tζ2T ;

ν(T )T

> 1 + ε

]=: E1 + E2 + E3

and note that, owing to the above,

E1 ∼ Eζ2T ∼ T−1Var ν(T ) → aτ,2 − 1,

E2 < E[ζ2T ; ν(T ) < (1 − ε)T

]= o(1),

E3 < T−2E[ν(T )3; ν(T ) > (1 + ε)T

]= o(1)

as T → ∞ (assuming that ε = ε(T ) tends to 0 slowly enough). Here the estimatefor E2 follows from the uniform integrability of ζ2

T and the law of large numbers(owing to which P(ν(T ) < (1 − ε)T ) = o(1)), while the last relation followsfrom the inequalities (16.2.12) and (16.2.13) in the proof of Lemma 16.2.8.

Thereby we have proved that

E[ν(T )(T − ν(T ) + 1)2

]= (aτ,2 − 1)T 2(1 + o(1)), T → ∞. (16.3.14)

Finally, for the remainder term O(·) in (16.3.9) we have, from the relations(16.3.10)–(16.3.11), the Cauchy–Bunyakovskii inequality and the almost obviousrelation Eν4(T ) ∼ T 4 as T → ∞ (cf. (16.3.13) and Lemma 16.2.8), the bound

x−3E[ν2(T )(|T − ν(T )| + 1)

]� x−3

[Eν2(T ) +

(Eν4(T )E(T − ν(T ))2

)1/2]

= O(x−3

(1 + T 5/2

))= o

((1 + T 2

)x−2

). (16.3.15)

Now the assertions of parts I(i), (ii) and II(i) of Theorem 16.3.1 can be obtainedby substituting the estimates (16.3.10)–(16.3.15) into the representation (16.3.9).One should just note that, for x � δT , we have T k/xk = O(1), x−1 = O(T−1)and T 2V (x)x−2 = O(V (x)) and that

1x

� 12T 1/2

(1x

+T 2

x2

).

Case II(ii). In this case we clearly have to consider also the possibility thatthe event {S(T ) � x} occurs as a result of the presence of a very long renewalinterval on [0, T ]. Using the representation (16.2.16) together with the results forlarge deviation probabilities in the random walk {Sn} and for the moments of therenewal process is no longer sufficient. Now we will have to introduce truncatedversions of not only the random jumps ξj but also of the renewal intervals τj .

Page 611: Asymptotic analysis of random walks

580 Extension to generalized renewal processes

Let n± := (1 ± ε)T (note that x � T in this part of the theorem) and

B :=n+⋂j=1

{ξj < y}, C :=n−⋂j=1

{τj < y},

where y � x is such that r := x/y > max{1, x/T}. It is evident that

P(B) = O(TV (T )), P(C) = O(TVτ (T )). (16.3.16)

By virtue of Lemma 16.2.8(i), it suffices to estimate the probability of the event

D := {S(T ) � x; ν(T ) � n+}.It is obvious that

P(D) = P(DB C) + P(DBC) + P(DB) =: P1 + P2 + P3, (16.3.17)

where

P1 � P(B)P(C) = O(T 2V (T ) Vτ (T )

)(16.3.18)

and

P2 = P(DBC; ν(T ) < n−

)+ P

(DBC; ν(T ) � n−

). (16.3.19)

By Corollary 4.1.3, the first term on the right-hand side of the last equality doesnot exceed

P(B)P(ν(T ) < n−; C) � n+V (y)P

(t0n− > εT ; C

)= O

(TV (T ) (TVτ (εT ))r

)= O

(T 2V (T )Vτ (T )

),

when ε vanishes slowly enough, while the second term does not exceed

P(DB; ν(T ) � n−

)− P(DB C; ν(T ) � n−

)= P

(S(T ) � x, ν(T ) ∈ [n−, n+]; B

)+ O

(T 2V (T )Vτ (T )

)by virtue of the bound in (16.3.18). The first term on the right-hand side coincides,up to O(T 2V 2(T )), with P(S(T ) � x, ν(T ) ∈ [n−, n+]) (we again make useof Corollary 4.1.3), and it can be estimated in exactly the same way as in theproof of part I of the theorem; this leads to the expression on the right-hand sideof (16.3.2).

It remains to consider the term P3 in (16.3.17). When for the distribution of τ

we assume only condition Fτ ∈ R, the required assertion can be derived in ex-actly the same way as in the proof of Theorem 16.2.1, II(iv). Now we turn to thecase where we also impose condition [D(1,0)] on the function Vτ from (16.1.4)and require that E|ξ|3 < ∞.

Introduce the notation

nT := T − x/q, nT± := (1 ± ε)nT ;

Page 612: Asymptotic analysis of random walks

16.3 Asymptotic expansions 581

as usual, we will assume for simplicity that all these quantities (as well as T

itself) are integers.We have

P3 = P(S(T ) � x, ν(T ) � n+; B

)=∑

n�n+

P(S0

n � q(n − nT ); B)P(ν(T ) = n)

= P(ν(T ) < nT ) −∑

n�n+

[1(n < nT ) − P

(S0

n � q(n − nT ); B)]

× P(ν(T ) = n)

= P(t0nT

> x/q) −∑

n<nT

P(S0

n < q(n − nT ); B)P(ν(T ) = n)

+∑

nT �n�n+

P(S0

n � q(n − nT ); B)P(ν(T ) = n). (16.3.20)

By the central limit theorem, the two sums in the last lines are, roughly speaking,the integrals of the left and right tails of an (almost) normal distribution withrespect to the distribution of ν(T ), i.e. a measure that, owing to the regularity ofthe distribution of τ , will be close to a multiple of the Lebesgue measure in theregion where the values of the integrands are noticeably large. Therefore, as wewill show next, owing to the symmetry of the normal distribution the contributionsof these sums effectively cancel each other out and so, with high precision, themain term will be given by

P(t0nT

> x/q)

= nT Vτ (x/q)(1 + o(n1/2T /x)). (16.3.21)

This follows from Theorem 4.4.4 (for k = 1) and the fact that, by condition[D(1,0)] on the distribution tail of τ 0 = τ − 1, one has the representation

Vτ0(t) := P(τ0 � t) = Vτ (t + 1) = Vτ (t)(1 + O(1/t)). (16.3.22)

So, we turn to the sums on the right-hand side of (16.3.20). Replacing them bythe sums ∑

nT−�n<nT

and∑

nT �n<nT+

respectively leads to the introduction of errors E1 and E2, for which we have thefollowing bounds. The uniform integrability of the sequence {(S0

n)2/n} impliesthat

E[(S0

n)2; |S0n| > qεnT

]ε2nT

� 1ε2

E[(S0

n)2

n;|S0

n|√n

> qε√

nT

]= o(1)

uniformly in n < nT− when ε vanishes slowly enough. Using this relation and

Page 613: Asymptotic analysis of random walks

582 Extension to generalized renewal processes

the Chebyshev inequality, one obtains

E1 �∑

n<nT−

P(S0

n < −qεnT

)P(ν(T ) = n)

�∑

n<nT−

E[(S0

n)2; |S0n| > qεnT

]q2ε2n2

T

P(ν(T ) = n)

= o

(1

nTP(ν(T ) < nT−)

)= o

(1

nTP(t0nT− >

x

q+ εnT

))= o(Vτ (T )).

For the second error, we have

E2 =∑

nT+�n�n+

P(S0

n > q(n − nT ); B)P(ν(T ) = n)

=∑

nT+�n<nT +εT

+∑

nT +εT�n<n−

+∑

n−�n�n+

. (16.3.23)

Since n−nT � x/q−εT � cx for n � n−, the last sum above is estimated, fromCorollary 4.1.3, as O(T 2V 2(T )) = o(V (T )). According to (16.2.47), the mid-dle sum on the right-hand side of (16.3.23) is O(T 2V (εT )Vτ (εT )) = o(V (T )),because Vτ (t) = t−γLτ (t), γ > 2, and ε → 0 slowly enough. Finally, the firstsum, as shown in (16.2.45), (16.2.46), is

O((εT )2V

(ε2T

)Vτ (T ) + n2

T V (nT ) Vτ (T ))

= o(V (T ) + Vτ (T )).

Turning back to (16.3.20), after the change in the summation limits we see that,to complete the proof of the theorem in case II(ii), it remains to show that

J1 − J2 :=∑

nT−�n<nT

P(S0

n � q(n − nT ); B)P(ν(T ) = n)

−∑

nT �n�nT+

P(S0

n > q(n − nT ); B)P(ν(T ) = n)

= o(n

1/2T Vτ (T )

). (16.3.24)

First note that here we can remove the event B from the expression under theprobability symbol; the error introduced thereby will certainly be negligible. In-deed, since for x � δT one has

T − nT+ = (1 + ε)x/q − εT � ((1 + ε)δ/q − ε)T > δ1T, δ1 > 0,

one obtains from (16.3.16) that∑nT−�n�nT+

P(· · · ; B

)P(ν(T ) = n) � P

(B; ν(T ) < nT+

)= P(B)P

(t0nT+

> T − nT+

)= O

(T 2Vτ (T ) V (T )

)= o(Vτ (T )).

Further, put

z(u) := q(u − nT )/√

ud, d = Var ξ,

Page 614: Asymptotic analysis of random walks

16.3 Asymptotic expansions 583

and observe that, by the Berry–Esseen theorem (see e.g. Appendix 5 of [49]), thefollowing asymptotic representations hold true uniformly in n ∈ [nT−, nT+]:

P(S0

n � q(n − nT ))

= Φ(z(n)) + O(n−1/2T

),

P(S0

n > q(n − nT ))

= Φ(−z(n)) + O(n−1/2T

),

where Φ denotes the standard normal distribution function. Substituting theserepresentations into the sums in (16.3.24) (now without the event B under theprobability symbols), we see that, by Theorem 4.4.4 (in the case k = 1) and inview of (16.3.22), the contribution of the terms O(n−1/2

T ) to these sums does notexceed

cn−1/2T

[P(ν(T ) < nT+) − P(ν(T ) < nT−)

]= cn

−1/2T

[P(t0nT+

> x/q − εnT

)− P(t0nT− > x/q + εnT

)]= cn

−1/2T

[nT+Vτ (x/q − εnT )

(1 + o

(n−1/2T

))− nT−Vτ (x/q + εnT )

(1 + o

(n−1/2T

))].

Here the terms with o(n−1/2T ) will yield a term of order

O(n−1/2T × nT Vτ (x) × o

(n−1/2T

))= o(Vτ (T )),

while, owing to condition [D(1,0)] on the distribution tail Vτ (t) of the r.v. τ andin view of (16.3.22), the remaining main terms in the sum are equal to

cn−1/2T

[nT

(Vτ (x/q − εnT ) − Vτ (x/q + εnT )

)+ εnT

(Vτ (x/q − εnT ) + Vτ (x/q + εnT )

)]= c(1 + ε)n1/2

T

[Vτ

(x

q

(1 − qε

nT

x

))− Vτ

(x

q

(1 + qε

nT

x

))]+ o

(n

1/2T Vτ (T )

)= 2γ(c + o(1))qε

n3/2T

xVτ

(x

q

)+ o

(n

1/2T Vτ (T )

)= o

(n

1/2T Vτ (T )

).

Therefore, up to negligibly small terms (i.e. terms of order not exceeding theerror order stated in part II(ii) of the theorem), the sums in (16.3.24) are equalrespectively to

J1 =∑

nT−�n<nT

Φ(z(n))P(ν(T ) = n),

J2 =∑

nT �n<nT+

Φ(−z(n))P(ν(T ) = n).

Page 615: Asymptotic analysis of random walks

584 Extension to generalized renewal processes

Integrating by parts and changing variables by putting w = nT + s, we obtain

J1 =

nT∫nT−

Φ(z(w)) dw[P(ν(T ) < w) − P(ν(T ) < nT )]

= −Φ(z(nT−)) [P(ν(T ) < nT−) − P(ν(T ) < nT )]

−nT∫

nT−

[P(ν(T ) < w) − P(ν(T ) < nT )] dwΦ(z(w))

= O(Φ(−qεn

1/2T d−1/2

)nT Vτ (T )

)−

0∫−εnT

[P(t�nT +s� > T ) − P(tnT> T )] dsΦ

(qs√

(nT + s)d

).

Denoting the last integral by J ′1 and assuming for simplicity that nT + s is an

integer, we note that, by Theorem 4.4.4 (for k = 1) the integrand in J ′1 is equal,

by virtue of (16.3.22), to

[· · · ] = P(t0nT +s >

x

q− s

)− P

(t0nT

>x

q

)= (nT + s)Vτ

(x

q− s

)(1 + o

(n−1/2T

))− nT Vτ

(x

q

)(1 + o

(n−1/2T

)).

It is clear that integrating the remainder terms will yield o(n

1/2T Vτ (T )

), whereas

the sum of the main terms is, by condition [D(1,0)], equal to

nT

[Vτ

(x

q− s

)− Vτ

(x

q

)]+ sVτ

(x

q− s

)= nT Vτ

(x

q

)γqs

x(1 + o(1)) + sVτ

(x

q

)(1 + o(1))

= s

(1 +

γqnT

x

)Vτ

(x

q

)(1 + o(1)), −εnT � s < 0.

Therefore, up to the term O(Φ(−qεn

−1/2T d−1/2)nT Vτ (T )

)= o(n1/2

T Vτ (T )),one has

J1 = −(

1 +γqnT

x

)Vτ

(x

q

) 0∫−εnT

(1 + o(1))s dsΦ(

qs√(nT + s)d

)

= −(

1 +γqnT

x

)Vτ

(x

q

)√nT d

q

0∫−qεnT /

√nT−d

(1 + o(1))v dΦ(v)

=1q

√nT d

(1 +

γqnT

x

)Vτ

(x

q

)(1 + o(1)).

Page 616: Asymptotic analysis of random walks

16.4 The crossing of arbitrary boundaries 585

In a similar way we can establish that

J2 =1q

√nT d

(1 +

γqnT

x

)Vτ

(x

q

)(1 + o(1)) + o

(n

1/2T Vτ (T )

),

so that J1 − J2 = o(n

1/2T Vτ (T )

). Theorem 16.3.1 is proved.

16.4 The crossing of arbitrary boundaries

In this section we consider the problem of the crossing of an arbitrary boundary{g(t); t ∈ [0, T ]} by the trajectory of the process {S(t)}, when inft�T g(t) tendsto infinity fast enough, so that the event

GT :={

supt�T

(S(t) − g(t)) � 0}

belongs to the large deviation zone in which we are interested. We will againassume, without loss of generality, that the mean trend in the process {S(t)} isequal to zero (i.e. (16.1.16) holds true). Note that we have already considered thespecial case of a flat boundary g(t) ≡ x in Theorem 16.2.3.

As everywhere in this chapter, an important factor in this situation is the possi-bility that the event GT may occur as a result of a very long renewal interval. Toavoid cumbersome computations, we will exclude this possibility in the presentsection. For q � 0 it simply does not exist, while in the case q > 0 we will requirethe ‘gap’ between the boundary g(t) and the trajectory of the deterministic lineardrift qt, which appears in the definition of the process {S(t)}, to be wide enough:

inf0�t�T

(g(t) − qt) > δT (16.4.1)

for a fixed δ > 0. In this case, in the absence of large jumps ξj and/or τj , theprocess S(t) = Sν(t) + qt moves along the line of its mean trend (with a slopeequal to zero), i.e. it stays in an εT -neighbourhood of its initial value, which iszero. The presence of a single large τj means that, over the respective renewalinterval, the value of S(t) will increase linearly at the rate q > 0 while outsidethe interval it will just oscillate about the mean trend line. Therefore the above-mentioned condition (16.4.1) makes the probability that the process will cross theboundary g(t) during that long renewal interval very small. The presence of twoor more large jumps τj and/or ξj will, as usual, be very unlikely.

Therefore, as in the random walk case, the process {S(t)} can actually cross ahigh boundary as a result of a single large jump ξj . At the time t of the jump itsuffices if, instead of crossing the boundary g(t) itself, the process just exceedsthe level

g∗(t) := inft�s�T

g(s),

since afterwards its trajectory will again move along the (horizontal) mean trend

Page 617: Asymptotic analysis of random walks

586 Extension to generalized renewal processes

line and, at some time, will rise above the boundary, which by that time will havealready ‘dropped’ down to the level g∗(t).

As with the random walks, in the case of a general boundary we can only givethe main term of the asymptotics. Owing to the ‘averaging’ of the jump epochsin the process {S(t)}, the answer will have the form of an integral with respectto the renewal measure dH(t) (see (16.4.2) below). As T → ∞, the renewaltheorem allows one to replace dH(t) by a−1

τ dt.Now we will state the main result of the section. Denote by Gx,K the class of

all measurable boundaries g(t) such that

x � inft�0

g(t) � Kx, x > 0, K ∈ (1,∞).

Theorem 16.4.1. Let either condition [QT ] or condition [ΦT ] hold for the dis-tribution of ξj , and let δ > 0 and K > 1 be arbitrary fixed numbers. Assume thatthe mean trend of the process {S(t)} is equal to zero.

I. In the case q � 0, the relation

P(GT ) = (1 + o(1))

T∫0

V (g∗(t)) dH(t), x → ∞, (16.4.2)

holds uniformly in g ∈ G(x,K), x � δT. If, moreover, condition [<] withγ ∈ (1, 2) holds for the distribution of τ then the relation (16.4.2) holdsas x → ∞ uniformly in the range of values of T , satisfying (along with [QT ]or [ΦT ]) the condition x � T 1/γLz(T ) with a suitable s.v.f. Lz (a way ofchoosing the function Lz(t) is indicated in Lemma 16.2.8(ii)).

II. In the case q > 0 the following assertions hold true.

(i) The relation (16.4.2) holds uniformly in g ∈ G(x,K) for

x∗ := inft�T

(g(t) − qt) � δT.

(ii) If Fτ (t) = o(V (t)) as t → ∞ then all the assertions stated in part I ofthe theorem remain true.

Remark 16.4.2. It is not difficult to see that, as T → ∞, one has the relation

T∫0

V (g∗(t)) dH(t) ∼ 1aτ

T∫0

V (g∗(t)) dt (� TV (x)) (16.4.3)

uniformly in g ∈ G(x,K), so that in this case the integral on the right-hand sideof (16.4.2) can be replaced by the expression on the right-hand side of (16.4.3).

Indeed, assuming for simplicity that the functions H(t) and V (g∗(t)) have nocommon jump points, and integrating twice by parts (recall that g∗(t) is non-decreasing function, so that V (g∗(t)) is a function of bounded variation), we ob-tain by the renewal theorem that the integral on the left-hand side of (16.4.3)

Page 618: Asymptotic analysis of random walks

16.4 The crossing of arbitrary boundaries 587

equals

V (g∗(T ))H(T ) −T∫

0

H(t) dV (g∗(t))

= V (g∗(T ))T

aτ+ o(TV (x)) − 1

T∫0

t dV (g∗(t))

−T∫

0

(H(t) − T

)dV (g∗(t))

=1aτ

T∫0

V (g∗(t)) dt + o(TV (x)) −T∫

0

(H(t) − T

)dV (g∗(t)).

One can easily see that the last integral is o(TV (x)) as well, since by the renewaltheorem H(t) − t/aτ = o(t) as t → ∞, so that∣∣∣∣∣∣

T∫0

(H(t) − T

)dV (g∗(t))

∣∣∣∣∣∣ �

∣∣∣∣∣∣εT∫0

∣∣∣∣∣∣+∣∣∣∣∣∣

T∫εT

∣∣∣∣∣∣� cεTV (g∗(0)) + o(TV (g∗(εT ))) = o(TV (x))

when ε vanishes slowly enough.

The following corollary follows immediately from Theorem 16.4.1 and the def-inition of an r.v.f.

Corollary 16.4.3. Let f(s), s ∈ [0, 1], be a given measurable function takingvalues in an interval [c1, c2] ⊂ (0,∞). Then, for the boundary

g(t) := xf(t/T ), 0 � t � T,

under the respective conditions on x from Theorem 16.4.1 and, in the case q > 0,

under the condition that

x∗ = inf0�s�1

(xf(s) − qsT ) � δT,

we have

P(GT ) ∼ TV (x)aτ

1∫0

f−α∗ (s) ds as T → ∞,

where f∗(s) := infs�u�1 f(u), 0 � s � 1.

Page 619: Asymptotic analysis of random walks

588 Extension to generalized renewal processes

Indeed, one just has to observe that

V (g∗(t)) =V (xf∗(t/T ))

V (x)V (x) = f−α

∗ (t/T )L(xf∗(t/T ))

L(x)V (x)

= (1 + o(1))f−α∗ (t/T )V (x)

uniformly on [0, T ] by virtue of Theorem 1.1.2.In contrast with Theorems 16.2.1 and 16.2.3, we do not consider here the case

when q > 0 and x∗ → ∞ but x∗ = o(T ). As we saw in those theorems, in thiscase there appears the possibility that the boundary will be crossed owing to acombination of two rare events: first, at the very beginning of the interval [0, T ],the process exceeds a linear boundary of the form x∗ + qt and then, during a verylong renewal interval, the process is taken above the boundary g(t) by the deter-ministic positive linear drift. It is clear that considering such a possibility in thecase of an arbitrary boundary will be much more difficult than in the situation ofTheorems 16.2.1 and 16.2.3. Therefore, in the present exposition we will restrictourselves to analysing the above situation in the important special case of a linearboundary; this will be done in the next section.

Another exceptional case is when the terminal point g(T ) of the boundary isthe ‘lowest’ one, i.e. the function g(t) attains there its minimal value on [0, T ]:

g∗(t) = g(T ) =: x for all t ∈ [0, T ]. (16.4.4)

Then from the evident relations

P(S(T ) � x) � P(GT ) � P(S(T ) � x)

and Theorems 16.2.1 and 16.2.3 we immediately obtain the following result.

Corollary 16.4.4. Under the assumptions of Theorem 16.2.1, all the asymptoticrepresentations established in that theorem for P(S(T ) � x) will remain true forP(GT ) uniformly over all the boundaries satisfying (16.4.4).

Proof of Theorem 16.4.1. As usual, we assume that aτ = 1, so that aξ = −q dueto (16.1.16).

Put t′j := tj for j � ν(T ), t′ν(T )+1 := T , and let τ ′j+1 := t′j+1−t′j , j � ν(T ).

For j � ν(T ), introduce the variables

g(j) := inftj�t<t′j+1

g(t), g+(j) := g(j) + |q|τ ′j+1,

g−(j) := inftj�t<t′j+1

(g(t) − q(t − tj)

).

It is not hard to see that, for the events

G :=⋃

j�ν(T )

{Sj + qtj � g(j)}, G± :=⋃

j�ν(T )

{Sj + qtj � g±(j)}

(16.4.5)

Page 620: Asymptotic analysis of random walks

16.4 The crossing of arbitrary boundaries 589

one has the following inclusions:

G+ ⊆ GT ⊆ G for q � 0, (16.4.6)

G ⊆ GT ⊆ G− for q > 0. (16.4.7)

These are the main relations that will be used for proving the theorem. Further-more, we will need the following observation: for the events

D :={

supt�T

|ν(t) − t| � εx}

, D+ :={

supt�T

(ν(t) − t) � εx}

, (16.4.8)

where ε = ε(x) vanishes slowly enough as x → ∞, for x � δT one has

P(D) → 0, P(D+) = o(TV (x)), T → ∞. (16.4.9)

The former relation holds by the law of large numbers for renewal processes (seee.g. § 5, Chapter 9 of [49] or Chapter 2 of [138]), while the latter follows from therelations{

supt�T

(ν(t) − t) > εx} ⊂

⋃k�εx+T

{k − tk > εx} ={

mink�εx+T

t0k < −εx

}and Lemma 16.2.6(i). Indeed, for n := εx + T we have

z := εx � bε(εx + T ) ≡ bεn with bε :=εδ

1 + εδ.

Since ε → 0 arbitrary slowly, both bε � z/n and Λθ(bε) (see (16.2.4)) will alsohave the same property. Therefore it follows from (16.2.5) and the inequalityΛθ(z) � Λθ(bε) that

P(

mink�εx+T

t0k < −εx

)� exp{−(εx + T )Λθ(bε)} = o(TV (x))

if ε tends to 0 slowly enough.

I. The case q � 0. From (16.4.6) we conclude that

P(GT ) � P(G+) � P(G+D) = P

( ⋃j�ν(T )

{S0

j � g+(j) − q(tj − j)}; D

)

= P

( ⋃j�ν(T )

{S0

j � (1 + o(1))g+(j)}; D

), (16.4.10)

since |tj − j| � εx, j � ν(T ), on the event D, whereas g+(j) � x for anyboundary g ∈ G(x,K).

Now put

g+∗ (j) := min

j�k�ν(T )g+(k)

and note that one has the following double-sided inequality on the event D:

g∗(tj) � g+∗ (j) � g∗(tj) + 2|q|εx, j � ν(T ).

Page 621: Asymptotic analysis of random walks

590 Extension to generalized renewal processes

Using this observation and the fact that V((1 + o(1))t

)= (1 + o(1))V (t) as

t → ∞, we see that by Theorems 3.6.4 and 4.6.7 the conditional probability ofthe event

⋃j�ν(T ){· · · } appearing in the last line of (16.4.10), given t1, t2, . . . ,

is equal on the event D to

(1 + o(1))∑

j�ν(T )

V((1 + o(1)

)g+∗ (j))

= (1 + o(1))∑

j�ν(T )

V (g+∗ (j))

= (1 + o(1))∑

j�ν(T )

V (g∗(tj)) � ν(T )V (x). (16.4.11)

To find the unconditional probability of the event of interest, it only remains tofind the expectation of the expression above. To this end, first note that, for anybounded measurable function h(t), t ∈ (0,∞),

E∑

0<j�ν(T )

h(tj) = E

T∫0

h(t) dν(t) =

T∫0

h(t) dH(t), T < ∞.

Hence

E[(1 + o(1))

∑j�ν(T )

V (g∗(tj))]∼

T∫0

V (g∗(t)) dH(t) � cTV (x) (16.4.12)

(using g(t) � Kx). Moreover, since the r.v.’s ν(T )/T are uniformly integrable(see e.g. p. 56 of [138] or our argument following (16.2.26)) and one has (16.4.9),we have

E[(1 + o(1))

∑j�ν(T )

V (g∗(tj)); D

]� cV (x)E(ν(T ); D) = o(TV (x)). (16.4.13)

From this and (16.4.10)–(16.4.12) we conclude that

P(GT ) � (1 + o(1))

T∫0

V (g∗(t)) dH(t). (16.4.14)

At the same time, from (16.4.6) it also follows that

P(GT ) � P(G) = P(GD) + P(GDD+) + O(P(D+)). (16.4.15)

Repeating the argument used to estimate the probability P(G+D), we obtain

P(GD) = (1 + o(1))

T∫0

V (g∗(t)) dH(t). (16.4.16)

Page 622: Asymptotic analysis of random walks

16.4 The crossing of arbitrary boundaries 591

Further, on the event D+, for sufficiently large x we have

−q(tj − j) = |q|t0j � −|q|εx � −1

2g(j), j � ν(T ),

so that

P(GDD+) = P

( ⋃j�ν(T )

{S0

j � g(j) − q(tj − j)};DD+

)

� P

( ⋃j�ν(T )

{S0

j � 12g(j)

};DD+

)� cV (x)E(ν(T ); D) = o(TV (x)), (16.4.17)

cf. (16.4.13). From (16.4.15)–(16.4.17) and (16.4.9) we obtain

P(GT ) � (1 + o(1))

T∫0

V (g∗(t)) dH(t),

which, together with (16.4.14), establishes (16.4.2) in the case under considera-tion.

If condition [<] is satisfied for a γ ∈ (1, 2) then one considers the events D

and D+ with the quantity εx replaced in them by εz0, where z0 = T 1/γLz(T ).As in the proof of Theorem 16.2.1, I(iii) one can verify that in this case the rela-tion (16.4.9) will hold as well (assuming, without loss of generality, that x � cT ).The remaining part of the proof is carried out in the same way as before, butwith εx replaced by εz0.

II. The case q > 0. (i) Using the left-hand side of (16.4.7) and repeating theargument proving (16.4.14), we obtain

P(GT ) � P(G) � P(GD) � (1 + o(1))

T∫0

V (g∗(t)) dH(t).

On the other hand, from the right-hand side of (16.4.7) we find that

P(GT ) � P(G−) = P(G−D) + P(G−D). (16.4.18)

Since on the event D one has

|tj − j| � εx, τ ′j+1 � 2εx, j � ν(T ),

we obtain

G−D = D⋃

j�ν(T )

{S0

j � inftj�t<t′j+1

[g(t) − q(t − tj)] − q(tj − j)}

⊂ D⋃

j�ν(T )

{S0

j � g(j) − 3εx}

= D⋃

j�ν(T )

{S0

j � (1 + o(1))g(j)},

(16.4.19)

Page 623: Asymptotic analysis of random walks

592 Extension to generalized renewal processes

so that using an argument similar to that leading from (16.4.10) to (16.4.14), onefinds that

P(G−D) � (1 + o(1))

T∫0

V (g∗(t)) dH(t).

To bound the last probability in (16.4.18), note that

g(t) − qt � cx, t ∈ [0, T ] (16.4.20)

with c = 12 min{δ/q, 1}. Indeed, if T � x/2q then by the conditions of the

theorem one has g(t) − qt � δT � δx/2q, while if T < x/2q then g(t) − qt �x − qT > x/2. Therefore

g−(j) − q(tj − j) = inftj�t<t′j+1

(g(t) − qt

)+ qj � cx (16.4.21)

by virtue of (16.4.20), and hence

P(G−D) � cV (x)E(ν(T ); D) = o(TV (x)),

cf. (16.4.13). This completes the proof of part II(i) of the theorem.

(ii) It is not hard to see that the only complication one encounters when consid-ering the case q > 0 is in estimating the probability P(G−Dc) in (16.4.18). As-suming that Fτ (t) = o(V (t)), one can easily see that P(D) = o(TV (x)) (cf. theproof of Theorem 2.1, I(iii); one just has to replace |ν(T )−T | by supt�T |ν(t)−t|,while all the estimates used will remain true). This establishes the required asser-tion.

16.5 The case of linear boundaries

In this section we will consider a more special problem on the asymptotic be-haviour of the probability of the event GT introduced in (16.1.1), in the case oflinear boundaries of the form {g(t) = x + gt; 0 � t � T} as x → ∞. Thisbehaviour will depend on the relationship between the parameters of the problem,and in some particular cases it will immediately follow from the results derivedabove. Therefore we will give a general picture of what happens in that specialcase and will formulate new results only in the form of several separate theorems.As before, we assume that the zero mean trend condition (16.1.16) is met.

First of all note that the case T = O(1) is covered by Theorem 16.4.1 on thecrossing of arbitrary boundaries. In this case the conditions of Theorem 16.4.1reduce to the requirement that the distribution of ξj satisfies (1.3), while whenα ∈ (1, 2) the requirement is that the condition W (t) � cV (t), t > 0, holdsfor the majorant of the left tail and when α > 2 it is that E ξ2

j < ∞. Therelation (16.4.2) established in Theorem 16.4.1 now takes the form

P(GT ) ∼ H(T )V (x), x → ∞.

Page 624: Asymptotic analysis of random walks

16.5 The case of linear boundaries 593

Further, the case g � 0 is covered by Corollary 16.4.4, which holds for bound-aries satisfying (16.4.4). Clearly, in that case it suffices to consider the situationwhen T < ∞ (since by virtue of (16.1.16) and the law of large numbers one hasP(GT ) = 1 when T = ∞). Then, according to Corollary 16.4.4,

P(GT ) ∼ P(S(T ) � x + gT ), (16.5.1)

provided that the conditions of Theorem 16.2.1 are satisfied (with x replaced inthem by x0 := x + gT ), in which case the right-hand side of (16.5.1) can bereplaced by the representations established in the above-mentioned theorem forthe probability P(S(T ) � x0).

Thus, it remains only to consider the situation of an increasing linear boundaryg(t) = x + gt, g > 0, when either T → ∞ or T = ∞. The latter case isevidently covered by Theorem 16.1.2. Indeed, for the GRP S′(t) := S(t) − gt

with a negative mean trend (a′ = aξ + (q − g)aτ = −gaτ by (16.1.16)) we have

G∞ ={S′(∞) � x

},

so that the probability P(G∞) will be asymptotically equivalent, under the re-spective conditions, to the right-hand sides of the relations (16.1.9), (16.1.10)with |a| replaced in them by gaτ . Note that the condition q � 0 (q > 0) ofTheorem 16.1.2 will translate then into r := g − q � 0 (r < 0 respectively).

Now we turn to the case of a (finite) T → ∞. As one can expect from earlierresults in the present chapter, the asymptotic behaviour of the probability P(GT )will depend, generally speaking, on whether the condition g � q is met (thecondition excludes the possibility that the boundary will be crossed in any wayother than by one of the jumps ξj).

• The case r = g − q � 0

Write

P(GT ) = P(G∞) − P(G∞GT ). (16.5.2)

The behaviour of the first probability on the right-hand side of (16.5.2) was con-sidered in Theorem 16.1.2. On the event G∞GT the crossing of the boundaryg(t) = x + gt occurs (and, owing to the assumption r � 0, it occurs at one ofthe jumps ξj) at a time t ∈ (T,∞), and, as we know, with a very high probabilitythis will be a ‘large’ jump. Since a combination of two or more ‘large’ jumps isquite unlikely, on this event the trajectory {S(t); t � T} will, with high probabil-ity, only ‘moderately’ deviate from the (horizontal) mean trend line. Therefore,one can expect that the relation S(T ) = o(x + gT ) will hold true and that hencethe probability of that the boundary x + gt, t > T , is crossed will be close tothe probability of the crossing of the boundary (x + gT ) + gt, t > 0, by theoriginal process {S(t); t � 0}, i.e. the probability will be close to the quantity(x + gT )V (x + gT )/((α− 1)gaτ ) by virtue of Theorem 16.1.2. More precisely,the following theorem holds true.

Page 625: Asymptotic analysis of random walks

594 Extension to generalized renewal processes

Theorem 16.5.1. Let g > 0, r = g−q � 0, let either condition [QT ] or condition[ΦT ] be satisfied for the distribution of ξj and let the mean trend in the process{S(t)} be equal to zero. Then, as x → ∞ and T → ∞,

P(GT ) ∼ 1(α − 1)gaτ

(xV (x) − (x + gT )V (x + gT )

). (16.5.3)

Proof. If T = O(x) then we are in the situation of Theorem 16.4.1 (I or II(i)),and moreover x∗ ≡ inft�T (g(t) − qt) = x � δT for some fixed δ > 0, g∗(t) =g(t) = x+ gt and T → ∞. Therefore, by virtue of the above-mentioned theoremand (16.4.3), one has

P(GT ) ∼ 1aτ

T∫0

V (x + gt) dt

∼ 1(α − 1)gaτ

[xV (x) − (x + gT )V (x + gT )

]by Theorem 1.1.4(iv).

Now if x = o(T ) then the desired assertion will follow from (16.5.2). Indeed,as we have already noted, by Theorem 16.1.2 one has

P(G∞) ∼ xV (x)(α − 1)gaτ

. (16.5.4)

Further, putting x0 := (x + gT )/2 we conclude that, on the event {S(T ) � x0}the crossing of the boundary x + gt in the time interval (T,∞) is only possible ata time t � tν(T )+1, and, moreover, that

S(tν(T )+1) � S(T ) + (tν(T )+1 − T )q + ξν(T )+1

� x0 + (tν(T )+1 − T )g + ξν(T )+1.

Therefore the aforementioned crossing also implies that, for some t > 0,

S(t + tν(T )+1) − S(tν(T )+1) � x + (tν(T )+1 + t)g

− (x0 + (tν(T )+1 − T )g + ξν(T )+1

)= x0 − ξν(T )+1 + gt.

Hence

P(G∞GT ) � P(supt>T

[S(t) − g(t)

]� 0, S(T ) � x0

)+ P(S(T ) > x0)

� P(supt>0

[S(t + tν(T )+1) − S(tν(T )+1)

− (x0 − ξν(T )+1 + gt)]

� 0)

+ O(TV (x0))

� P(ξ � x0/2) + P(supt>0

[S(t) − (x0/2 + gt)

]� 0

)+ O(TV (x0))

= O(V (x0) + x0V (x0) + TV (x0)) = o(xV (x))

Page 626: Asymptotic analysis of random walks

16.5 The case of linear boundaries 595

(here we used Theorem 16.2.1 to bound P(S(T ) > x0), Theorem 16.1.2 to boundthe last probability in the formula, and also the obvious relation (x0+T )V (x0) =O(TV (T )) = o(xV (x))). So we obtain from (16.5.2) that P(GT ) ∼ P(G∞),where the right-hand side was given in (16.5.4). Since (x + gT )V (x + gT ) =o(xV (x)) in the case under consideration, the assertion (16.5.3) remains true.

• The case r = g − q < 0

We have

x∗ ≡ inft�T

(g(t) − qt) = (x + gT ) − qT = x + rT.

If x∗ � δT for a fixed δ > 0 then we are in the situation of Theorem 16.4.1, andhence (16.5.3) will hold true (cf. Theorem 16.5.1).

Now suppose that x∗ → ∞, but x∗ = o(T ). Along with the events D and D+

from (16.4.8), introduce the event

D− :={

inft�T

(ν(t) − t) � −εT}

(note that x � T in the case under consideration, so that in the definition of theevents D and D± one can replace x with T , and this we have done). Now write

P(GT ) = P(GT D) + P(GT D), (16.5.5)

where, using the events G and G± introduced in (16.4.5) and arguing as in theproof of Theorem 16.4.1, II, we obtain that

(1 + o(1))

T∫0

V (g∗(t)) dH(t) � P(GD) � P(GT D)

� P(G−D) � (1 + o(1))

T∫0

V (g∗(t)) dH(t).

In our case of a linear boundary g(t) = x + gt with g > 0, one clearly hasg∗(t) = g(t), so that the asymptotic behaviour of the integral (and therefore thatof the probability P(GT D)) will be given by the right-hand side of (16.5.3).

To estimate the second probability on the right-hand side of (16.5.5), observethat D = D−D+ and therefore D = D−D+ + D+. Hence

P(GT D

)= P

(GT D−D+

)+ P

(GT D+

).

The last term does not exceed P(D+) = o(TV (x)), according to (16.4.9), so thatwe just have to bound the probability

P(GT D−D+) = P(

supt�T

(Sν(t) + qt − (x + gt)

)� 0;D−D+

)= P

(supt�T

[S0

ν(t) − (x + rt + qν(t))]

� 0; D−D+

)(16.5.6)

Page 627: Asymptotic analysis of random walks

596 Extension to generalized renewal processes

(assuming, as usual, for simplicity that aτ = 1). Here it will be convenient for usto estimate supt�(1−ε)T and sup(1−ε)T<t�T separately.

Since r < 0 and q > 0, on the event D+ one clearly has the inequality

supt�(1−ε)T

[S0

ν(t) − (x + rt + qν(t))]

� supt�(1−ε)T

(S0

ν(t) − (x + rt))

� supt�(1−ε)T

(S0

ν(t) − x∗ + rεT)

� maxn�T

S0n − |r|εT.

Therefore,

P(

supt�(1−ε)T

[S0

ν(t) − (x + rt + qν(t))]

� 0;D−D+

)� P

(maxn�T

S0n � |r|εT ;D−D+

)� P

(maxn�T

S0n � |r|εT

)P(D−) = o(TV (x)),

cf. (16.2.29) (and using (16.4.9) again).At the same time, on the event {ν((1 − ε)T ) � εT}D+ we have the bound

sup(1−ε)T<t�T

[S0

ν(t) − (x + rt + qν(t))]

� sup(1−ε)T<t�T

[S0ν(t) − x∗ − εqT ] � max

n�(1+ε)TS0

n − εqT

so that we obtain, again in a way similar to (16.2.29), that

P(

sup(1−ε)T<t�T

[S0

ν(t) − (x + rt + qν(t))]

� 0,

ν((1 − ε)T ) � εT ; D−D+

)= o(TV (x)). (16.5.7)

Summing up the above, we see that

P(GT DD+) = P(

sup(1−ε)T<t�T

[S0

ν(t) − (x + rt + qν(t))]

� 0,

ν((1 − ε)T ) < εT ; D−D+

)+ o(TV (x)).

Now observe that, in the expression under the probability symbol on the right-hand side of the last relation, one can:

(1) remove D− since {ν((1 − ε)T ) < εT} ⊂ D−;(2) replace the event {ν((1 − ε)T ) < εT} by {ν(T ) < 3εT}, since the

Page 628: Asymptotic analysis of random walks

16.5 The case of linear boundaries 597

symmetric difference between these events is the sum

{ν((1 − ε)T ) < εT, ν(T ) � 3εT}+ {ν((1 − ε)T ) � εT, ν(T ) < 3εT}, (16.5.8)

where, by Lemma 16.2.6(i),

P(ν((1 − ε)T ) < εT, ν(T ) � 3εT

)= P(ν(εT ) � 2εT )

= P(t02εT � −εT

)� e−2εTΛθ(1/2) = o(TV (x))

when ε vanishes slowly enough, and the contribution of the second eventin the sum (16.5.8) to the probability of interest is o(TV (x)) by (16.5.7);

(3) remove D+ since P(D+) = o(TV (x)) according to (16.4.9).

Thus, we simply have to estimate

P1 := P(

sup(1−ε)T<t�T

[S0ν(t) − (x + rt + qν(t))] � 0, ν(T ) < 3εT

).

We will represent the probability as a sum of the form (16.2.30), but with theevents C(k) defined now in a somewhat different way: for exactly k of the r.v.’sτj , j � 3εT , one has the inequality τj � δT , whereas for the rest one hasτj < δT . In exactly the same way as in the proof of Theorem 16.2.1, II(ii), weverify that the contribution to this sum of the terms with k �= 1 is negligibly small.The case k = 1 is also considered in a way similar to that used in the argumentin the above-mentioned proof, but instead of (16.2.33) we use here the followingrepresentation (for a small enough δ > 0):

P1 = O

( ∑j<3εT

P(ν(T ) < j, τi < δT, i � j)

)

+∑

j<3εT

P(

sup(1−ε)T<t�T

[S0ν(t) − (x + rt + qν(t))] � 0, ν(T ) = j;C(1)

j+1

)

+∑

j<3εT

∑j�n<3εT

P(

sup(1−ε)T<t�T

[S0ν(t) − (x + rt + qν(t))] � 0,

ν(T ) = n; C(1)j

)(16.5.9)

(the events C(1)j are now defined as in (16.2.32) but with ε replaced by 3ε). Under

the assumption that condition [<] is met, the first term on the right-hand sideof (16.5.9) is negligibly small owing to (16.2.34). Observing that the probabilities

Page 629: Asymptotic analysis of random walks

598 Extension to generalized renewal processes

in the last sum in (16.5.9) do not exceed

P(maxk�n

(S0

k − (x∗ + qk))

� 0)P(ν(T ) = n; C

(1)j

)� P

(sup

n[S0

n − qn] � x∗)P(ν(T ) = n; C

(1)j

),

we use (16.2.35)–(16.2.38) to obtain the bound

o(x2∗V (x∗)Vτ (T ) + TV (T )

)for the whole sum.

Now we turn to the middle term in (16.5.9). As for (16.2.39), one can neglectthe event {tj � 4εT}C

(1)j+1, so that this term will become (up to negligibly small

terms, which we discard)

∑j<3εT

P(

sup(1−ε)T<t�T

[S0ν(t) − (x + rt + qν(t))] � 0,

tj < 4εT, tj+1 � T ; C(1)j+1

).

On the event⋃

j<3εT {tj < 4εT, tj+1 � T} the crossing of the boundary priorto the time tν(T ) can clearly only occur with a probability of order O(εTV (x)) =o(TV (x)). Therefore, under the assumption that Fτ ∈ R, the above sum (againup to discarded negligibly small terms) is equal to∑

j<3εT

P(S0

j � x∗ + qj)P(tj < 4εT, tj+1 � T ; C

(1)j+1

)= (1 + o(1))Vτ (T )

∑j<3εT

jV (x∗ + qj)

(again cf. (16.2.39); we have used relations of the form (16.1.18) for the ran-dom walk {S0

j }). Therefore we obtain (cf. the derivation of (16.2.40)) that P1 =o(TV (T )) for α ∈ (1, 2), whereas for α > 2 we have

P1 = (1 + o(1))x2∗V (x∗)Vτ (T )

q2(α − 1)(α − 2)+ o(TV (T )),

which completes the proof of the following theorem.

Theorem 16.5.2. Let g > 0, r = g − q < 0, x∗ = x + rT → ∞ but x∗ = o(T ),and let Fτ ∈ R, the mean trend of the process {S(t)} being equal to zero.

(i) If the distribution of the jumps ξj satisfies condition [QT ] then

P(GT ) ∼ 1(α − 1)gaτ

(xV (x) − (x + gT )V (x + gT )

).

Page 630: Asymptotic analysis of random walks

16.5 The case of linear boundaries 599

(ii) If condition [ΦT ] holds for the distribution of ξj then

P(GT ) ∼ 1(α − 1)gaτ

(xV (x) − (x + gT )V (x + gT )

)+

x2∗V (x∗)Vτ (T )

q2a2τ (α − 1)(α − 2)

.

Now assume that x∗ = x + rT → −∞. We will restrict ourselves to thecase when x∗ < −δT and x � δT for a fixed δ > 0. As before, starting withthe representation (16.5.5) we verify that the probability P(GT D) asymptoticallybehaves as the right-hand side of (16.5.3):

P(GT D) ∼ 1(α − 1)gaτ

[xV (x) − (x + gT )V (x + gT )

], (16.5.10)

which means that it simply remains to evaluate (16.5.6). If Fτ (t) = o(V (t)) ast → ∞ then

P(GT D−D+) � P(D−D+)

� P(

maxk�(1+ε)T

t0k > εT

)= o(TV (T )), (16.5.11)

cf. (16.2.41), so that we just obtain (16.5.3).In the general case, put

Y := maxk�(1+ε)T

S0k

and observe that P(Y > x/2) = O(TV (x)), cf. (16.2.42). Hence

P(D−{Y > x/2}) = o(TV (x))

by virtue of (16.4.9) as D− ⊂ D. Therefore

P(GT D−D+) = P(GT D−D+{Y � x/2})+ o(TV (x)). (16.5.12)

Further, the event

GT D+ ={

supt�T

[S0

ν(t) − (x + rt + qν(t))]

� 0}

D+

implies that

ν(t) <|r|tq

− x − Y

qfor some t � T, (16.5.13)

or, equivalently, that

t0(1−g/q)t−(x−Y )/q >

gt

q− x − Y

qfor some t � T.

Setting

kz :=(1 − g

q

)T − x − z

q,

Page 631: Asymptotic analysis of random walks

600 Extension to generalized renewal processes

we see that the previous relation means that, for some k � kY , one has

t0k >

g

q − gk + xY , xz :=

x − z

q − g.

From this we see that if Fτ ∈ R then by (16.1.12) (applied to the r.v.’s ξ′j :=τ0j − g/(q − g) with Eξ′j = −g/(q − g) < 0) we have for the probability on the

right-hand side of (16.5.12) (denote it by P2) the bound

P2 �x/2∫0

P(

maxk�kz

(t0k − gk

q − g

)> xz

)P(Y ∈ dz)

∼ q − g

(γ − 1)g

x/2∫0

(xzVτ (xz) −

(xz +

gkz

q − g

)Vτ

(xz +

gkz

q − g

))P(Y ∈ dz).

Since P(Y > εx) = o(1) when ε tends to 0 sufficiently slowly, we obtain,cf. (16.2.43), the bound

P2 � (1 + o(1))q − g

(γ − 1)g

[x

q − gVτ

( x

q − g

)− x + gT

qVτ

(x + gT

q

)].

At the same time,

P2 � P(GT D−D+; min

k�(1+ε)TS0

k > −εx)

� P(

supt�T

[−εx − (x + rt + qν(t))]

> 0; D−D+

)× P

(min

k�(1+ε)TS0

k > −εx

)(16.5.14)

= (1 + o(1))P(

inft�T

(ν(t) − |r|t

q

)< − (1 + ε)x

q

)+ o(TV (x)),

since we have the following:

(1) P(mink�(1+ε)T S0

k � −εx)

= o(1) when ε → 0 slowly enough;(2) the presence of the event D− under the probability symbol in the middle

line of (16.5.14) is superfluous, as{supt�T

[−εx − (x + rt + qν(t))]

> 0}

⊂ D−;

(3) removing the event D+ from under the above-mentioned probability sym-bol will introduce, owing to (16.4.9), an error of order o(TV (x)).

The asymptotic behaviour of the probability on the right-hand side of (16.5.14)can be derived similarly to the way in which we evaluated the probability of the

Page 632: Asymptotic analysis of random walks

16.5 The case of linear boundaries 601

event (16.5.13); this leads to the inequality

P2 � (1 + o(1))q − g

(γ − 1)g

[x

q − gVτ

( x

q − g

)− x + gT

qVτ

(x + gT

q

)]+ o(TV (x)).

Therefore

P(GT D−D+) = o(TV (x)) + (1 + o(1))q − g

(γ − 1)g

×[

x

q − gVτ

( x

q − g

)− x + gT

qVτ

(x + gT

q

)].

Together with (16.5.5), (16.5.10) and (16.5.11), the above relation establishes thefollowing result.

Theorem 16.5.3. Let g > 0, r = g − q < 0, x∗ = x + rT � −δT andx � δT for a fixed δ > 0, T → ∞. Assume that the distribution of the jumps ξj

satisfies either condition [QT ] or condition [ΦT ] and that the mean trend in theprocess {S(t)} is equal to zero. Then the following assertions hold true.

(i) If Fτ (t) = o(V (t)) as t → ∞ then the relation (16.5.3) holds true for theprobability P(GT ).

(ii) If Fτ ∈ R, γ > 1 (see (16.1.4)) then

P(GT ) ∼ 1(α − 1)gaτ

[xV (x) − (x + gT )V (x + gT )

]+

q − g

(γ − 1)gaτ

[x

q − gVτ

( x

q − g

)− x + gT

qVτ

(x + gT

q

)].

Page 633: Asymptotic analysis of random walks

Bibliographic notes

Chapter 1

The concept of slow variation first appeared in [158] (where only continuous functionswere considered). Somewhat earlier, the close concept of a very slowly oscillating se-quence (v.s.o.s., in German sehr langsam oszillirende Folge) had been introduced in [249,250] when studying various types of convergence in the context of extending the conditionsof applicability of Tauberian theorems. By definition, a v.s.o.s. {sn} is characterized bythe property that, for any subsequence of indices k = k(n) such that k/n → u ∈ (0,∞)as n → ∞, one has sk − sn → 0 (i.e. lk/ln → 1 for ln := esn ). In [250] analoguesof Theorems 1.1.2 and 1.1.4(ii) for v.s.o.s.’s were obtained, and it was noted that the veryidea of a v.s.o.s. can be easily extended to functions of a continuous variable.

The assertions of Theorems 1.1.2 and 1.1.3 were established in [158] for continuousfunctions; for arbitrary measurable functions they were first obtained in [174]. The proofsof these theorems presented in this book are close to those in [32] (see ibid. for otherversions of the proofs and further historical comments). According to [251] the traditionalnotation L for s.v.f.’s goes back to [158], although the notation xαL(x) for functions dis-playing regular variation type behaviour can be seen already in [250]. The publication ofthe books [130] and [122] played an important role in introducing r.v.f.’s into probabilitytheory. For a more detailed account of the properties of s.v.f.’s and r.v.f.’s see [32, 251].

The class of subexponential distributions was introduced in [81]. This paper also con-tained the proof of the first part of Theorem 1.2.8. Our proof of Theorem 1.2.12(iii) mostlyfollows the exposition in [15] (the term ‘subexponential distribution’ apparently appearedin that monograph). According to [32], subexponential distributions on the whole realline were first considered in [137]. Theorem 1.2.21(i) was obtained in [133, 166], Theo-rem 1.2.21(ii) was established in [166].

In conjunction with the relations l′(t) → 0 and tl′(t) → ∞ as t → ∞, the sufficientcondition for a distribution G to belong to the class S (and simultaneously to a narrowerclass S∗) from Corollary 1.2.32 was obtained in [166] (as a corollary of Theorem 3 in thatpaper). Along with a number of other conditions sufficient for G ∈ S , it was presentedin [133] (see ibid. for a more complete bibliography); see also [81, 266, 229, 114]. Surveysof the main properties of ‘heavy-tailed’ distributions can be found in [113, 9, 252, 235].

A proof of Theorem 1.3.2 can be found in [83]. A more general case was consideredin [242]. A proof of Theorem 1.4.1 (in a somewhat more general form) can be foundin [84].

Locally subexponential distributions were studied in [10]; some results of that paper areclose to the results of §§ 1.3 and 1.4. For instance, Corollary 2 and Assertion 4 are close toour Theorem 1.3.4, while Theorem 2 is close to our Theorem 1.4.2. Some results presentedin §§ 1.2–1.4 were established in [55].

The general theory of the convergence of sums of independent r.v.’s was presentedin [130] (see also [122]; for results in the multivariate case, historic comments and further

602

Page 634: Asymptotic analysis of random walks

Bibliographic notes 603

bibliography, see e.g. [5, 189]). The role of the contribution of the maximum summandsto the sum Sn in the case of convergence to a non-Gaussian stable distribution was studiedin [97, 7] (see also [182, 100, 98]).

The monographs [286, 86, 273, 245, 153] are concerned with stable distributions andprocesses; see also [256, 211, 25, 246].

The invariance principle (Theorem 1.6.1) was established in [107] (and in [230] for non-identically distributed summands ξj ; for more details, see e.g. [28, 74]). The functionallimit theorem on convergence to stable processes was obtained in [255]; conditions for theconvergence of arbitrary functionals of the processes of partial sums were found in [41, 43,75].

The law of the iterated logarithm was first obtained in the case of the uniform discretedistribution of ξ in [161]. Then this result was extended in [171] to the case of independentnon-identically distributed bounded r.v.’s. In the case of i.i.d. r.v.’s with finite variance thelaw of the iterated logarithm was established in [140], and the converse result was obtainedin [261]. The assertion of Theorem 1.6.6 was first obtained in [147] in the case when Fbelongs to the domain of normal attraction of a stable law Fα,ρ, α < 2, |ρ| < 1; it wasthen generalized in [274]. Detailed surveys of results related to the law of the iteratedlogarithm can be found in our § 3.9, in § 5.5.4 of [260] and in [31].

Chapters 2 and 3

Theorem 2.5.5 was established in [119]. An alternative proof of this theorem, using thesubadditive Kingman theorem, was given in [102].

The assertion of Theorem 2.6.5 was obtained in [197].In [118] a local renewal theorem for ξ � 0 was obtained in the case when condi-

tion [ · , =] holds with 1/2 < α < 1 (for α ∈ (0, 1/2] the paper gave an upper boundonly). In the lattice case, these results were extended in [128, 279] to the case of r.v.’s ξassuming both negative and positive values.

Upper bounds for the distributions of Sn in terms of truncated moments and withoutassuming the existence of majorants were obtained in [201, 127] (and also in [206]).

The asymptotics of P(Sn � x) ∼ nV (x) for x � cn were established in [207]. Someresults of Chapters 2 and 3 were obtained in [51, 54, 57, 66].

An analogue of the uniform representation (4.1.2) for x > n1/α in the case when Fbelongs to the domain of attraction of a stable law with exponent α was obtained in [238,239].

An analogue of the law of the iterated logarithm (the assertion of Corollary 3.9.2) wasfirst obtained in [82] for the case of symmetric stable summands (it was noted in that paperthat, as had been pointed out by V. Strassen, this assertion immediately follows from theresults of [163] and also that, with the help of the results of [96], it could be extended to thecase of distributions F that belong to a special subset of the domain of normal attractionof Fα,ρ, α < 2). For distributions F from the entire domain of attraction of a stablelaw Fα,ρ, α < 2, |ρ| < 1, this law was obtained in [147], while some refinements andextensions to it were established in [190]. See also the bibliography in [190, 164] anddetailed surveys of results, related to the law of the iterated logarithm, presented in § 5.5.4of [260] and in [31]. Some assertions from § 3.9 were obtained in [51].

Chapter 4

For bibliographies concerning the asymptotic equivalence

P(Sn � x) ∼ nV (x), P(Sn � x) ∼ nV (x)

under condition [ · , =] with α > 2, see e.g. [225, 237, 191, 206].The uniform representation (4.1.2) for x >

√n under the minimal additional condition

Page 635: Asymptotic analysis of random walks

604 Bibliographic notes

that E(ξ2; |ξ| > t) = o(1/ ln t), t → ∞, was established in Corollary 7 of [237]. In [206]the representation (4.1.2) was presented in Theorem 1.9 under the additional conditionthat E|ξ|2+δ < ∞, δ > 0 with a reference to [194]; the latter, in fact, only dealt withthe zone x � n1/2 ln n (when (4.1.2) degenerates into (4.1.1)), but under no additionalmoment conditions. According to [191], the result presented in [206] was obtained inthe doctoral thesis of A.V. Nagaev (On large deviations for sums of independent randomvariables, Institute of Mathematics of the Academy of Sciences of the UzSSR, Tashkent,1970). In [225] the representation (4.1.2) was obtained under the assumption that F (t) =O(t−α) as t → ∞ (Corollary 2).

Under the additional assumption that F (t) = O(t−α) as t → ∞, the representa-tion (4.4.3) was established for x/

√n → ∞ in Corollary 1 of [225].

Upper bounds for the distributions of Sn in terms of truncated moments and withoutassuming the existence of majorants were obtained in [201, 127] (and also in [206]). Theseinequalities are, in a sense, more general but also much more cumbersome. One cannotderive from them the assertions of Corollary 4.1.4 (even for the sums Sn). Some lowerbounds for P(Sn � x) were found in [205]. Lower bounds for P(|Sn| � x) wereobtained in [208].

In [194] the following integral and integro-local theorems for the distribution of Sn werepresented. Let F ∈ R, α > 2, x/ ln x >

√n and n → ∞. Then P(Sn > x) ∼ nV (x).

If, moreover, P(ξ ∈ Δ[t)) ∼ αΔV (t)/t as t → ∞ and 0 < c � Δ = o(t) thenP(Sn ∈ Δ[x)) ∼ nαΔV (x)/x.

Under the assumption that the r.v. ξ = ξ′−Eξ′ was obtained by centring a non-negativer.v. ξ′ � 0, the first of the asymptotic relations (4.8.2) was obtained in [90] for distribu-tions with tails of extended regular variation (i.e. in the case when (4.8.3) holds true) andin [210] for distributions with ‘intermediate regular variation’ of the tails (i.e. in the casewhen (4.8.4) holds; see also [89, 91, 156, 248]), but only in the zone x � δn, where δ > 0is fixed.

The assertion of Theorem 4.9.1 in a narrower case (in particular, under the assumptionthat x = cn) was obtained in Theorem 3.1 of [108].

The limiting behaviour (as x → ∞) of the conditional distribution„Sη(x)−1(a)

N(x),

η(x)

N(x),

jS�η(x)t�(a)

η(x), t ∈ [0, 1]

ff,Sη(x)(a) − x

N(x)

«, N(x) :=

F I+(x)

F+(x),

under the condition that η(x) := inf{k : Sk − ak � x} < ∞ was studied in [12]in the case when Eξ = 0, a > 0 and either F ∈ R for α > 1 or F belongs to themaximum domain of attraction of the Gumbel distribution (the latter condition is known tobe equivalent to the property that, for any u > 0,

limt→∞

F+(t + N(t)u)

F+(t)= e−u,

see e.g. Proposition 3.1 of [12] or Theorem 3.3.27 of [113]; an alternative characterizationof this class was given by Theorem 3.3.26 of [113]). The paper also dealt with the asymp-totics of the conditional distribution of ξη(x) and some others; in particular, it was shownthere that, in the case when F ∈ R, α > 1,

limx→∞

P`x−1ξη(x) > u

˛η(x) < ∞´ =

`1 + α(1 − u−1)

´u−α, u > 0.

Some results of Chapter 4 were established in [51, 54, 63].

Chapter 5

Problems on ‘moderately large’ deviations of the sums Sn for semiexponential and re-lated distributions were studied in [152, 201, 220, 221, 280, 281, 247, 212, 206, 238] andsome other papers, where, in particular, the validity of the Cramer approximation (5.1.11)

Page 636: Asymptotic analysis of random walks

Bibliographic notes 605

for P(Sn � x) was established under the condition that E`eh(ξ); ξ � 0

´< ∞ (or

Eeh(ξ) < ∞) for some function h(t) that is close, in a sense, to an r.v.f. of index α ∈ (0, 1)

(conditions on the function h(t) vary from paper to paper). Similar results for P(Sn � x)were obtained in [2].

The asymptotic representation P(Sn � x) ∼ nV (x) for x � n1/(2−2α) was obtainedin [238, 196, 227, 51]. For results concerning large deviations of Sn in the case of dis-tributions satisfying the condition Eeh(|ξ|) < ∞, see also [206, 191] (one can find theremore complete bibliographies as well). In [227] the asymptotics of P(Sn � x) were es-tablished in cases where they coincide with the asymptotics of P(Sn � x). Theorems onthe asymptotics of P(Sn � x) that would be valid on the whole real line were consid-ered in [238]. In particular, the paper gives the form of P(Sn � x) in the intermediatezone x ∈ `σ1(n), σ2(n)

´, but it does that under conditions of which the meaning is quite

difficult to comprehend. The intermediate deviation zone σ1(n) < x < σ2(n) was alsoconsidered in [195]. The paper deals with the rather special case when the distribution Fhas density

f(t) ∼ e−|t|α as |t| → ∞;

the asymptotics of P(Sn � x) were found there for α > 1/2 in the form of recursiverelations, from which one cannot extract, in the general case, a closed-form expression forthe asymptotics of P(Sn � x).

Close results on the first-order asymptotics of P(Sn � x) were obtained in [205] (theyare also discussed in [206]) for the class of distributions with P(ξ � t) = e−l(t), wherethe function l(t) is thrice differentiable and its derivatives satisfy a number of conditions.

Asymptotic representations for P(Sn � x) in the intermediate deviation zone σ1(n) <x < σ2(n) and also in the zone x � σ2(n) were studied in [52]. The asymptotic repre-sentation (5.1.17) for P

`Sn(a) � x

´, a > 0, which holds, as x → ∞, for all n and all

so-called strongly subexponential distributions, was established in [178] (see also [275]);sufficient conditions from [178] for a distribution to be strongly subexponential are satis-fied for F ∈ Se.

A number of results of Chapter 5 were obtained in [51, 52].

Chapter 6

In the Cramer case, methods for studying the probabilities of large deviations of Sn arequite well developed and go back to [95] (see also [233, 16, 219, 259, 120, 49] etc.).Somewhat later, these methods were extended in [37, 44, 69, 70] to enable one to study Sn

and also to solve a number of other problems related to the crossing of given boundariesby the trajectory of a random walk.

Theorem 6.1.1 was established by B.V. Gnedenko (see §§ 49 and 50 of [130]; see alsoTheorem 4.2.1 of [152] and Theorem 8.4.1 of [32]); the multivariate case was consideredin [243]. Sufficiency in Theorem 6.1.2 was established in [259] (cf. [120]; the finite vari-ance case was studied earlier in [254]) and the proof of the necessary part was presented in§ 8.4 of [32]. Uniform versions of Theorems 6.1.1 and 6.1.2 were established in [72] and,to some extent, in [259].

The properties of the deviation function Λ (which is sometimes also referred to as theChernoff function [80] or rate function) are discussed in [38, 49, 67, 101]. The mono-graph [101] is devoted to the analysis of the crude (logarithmic) asymptotics of large devi-ation probabilities.

In the case when the distribution of ξ has a density of the form e−λ+ttγ−1L(t), L(t)being an s.v.f. at infinity, γ > 0, a number of results on the large deviations of Sn (includ-ing (6.1.15)) were obtained in [241].

Some assertions close to the theorems and corollaries of §§ 6.2 and 6.3, were obtainedfor narrower deviation zones in [27] (Lemmata 2 and 3).

Page 637: Asymptotic analysis of random walks

606 Bibliographic notes

A detailed investigation of the ‘lower sub-zone’ in the boundary case θ ≡ x/n = θ+

is presented in [198], where the values of x up to which the ‘classical’ exact asymptoticexpansions for P(Sn � x) remain correct were found. The paper also contains an integralanalogue of Theorem 6.2.1. The multivariate case was considered in [284] in a very specialsituation. Some assertions of Theorems 6.2.1 and 6.3.1 can be obtained from [259].

The main results of § 6.5 were obtained in [64].

Chapter 7

Corollaries 1 and 2 of [272] contain conditions, sufficient for (7.3.2), that are close to theconditions of Theorems 7.1.1 and 7.2.1.

The assertion of Theorem 7.4.1 was obtained in [126] (see ibid. for a more completebibliography on related results). In the special case when τ is the number of the firstpositive sum, it was established in [8].

A necessary and sufficient condition for (7.4.6) in the case F ∈ R was obtained in [135].This condition has the form

limn→∞

lim supt→∞

ˆP(Sτ � t, τ � n) − P(Sn � t, τ � n)

˜/P(ξ � t) = 0.

The assertion of Theorem 7.4.4 was obtained in [136].For upper-power distributions, the asymptotics of P(S � x) from the assertion of

Theorem 7.5.1 was obtained in [39, 42]. For the class of subexponential distributions,Theorem 7.5.1 was established in [275] (see also [213, 282]; special cases were consideredin [278, 269]). That the condition F I

+ ∈ S is necessary for the asymptotics (7.5.1) wasproved in [177]; in the special case when ξ = ζ − τ, where the r.v.’s τ, ζ � 0 are indepen-dent, τ has an exponential distribution (which corresponds to M/G/1 queueing systems),this result was obtained in [213, 115]. The case Eξ = −∞ was studied in [103].

Assertions close to Theorem 7.5.5 and Corollary 7.5.6 were obtained in [26, 11]. Thecondition used in [11] to ensure that (7.5.13), (7.5.14) hold true is that F+ belongs to thedistribution class S∗, which is characterized by the property (7.4.1).

The assertion of Theorem 7.5.8 in the case F ∈ R, under additional moment conditionsand assumptions concerning the smoothness of F+, was established in Corollary 3 of [63].For queueing systems of the type M/G/1 (see above), under the assumption that ζ hasa heavy-tailed density of a special form, such a refinement for the distribution of S (orfor the limiting distribution for the waiting time in M/G/1) was obtained in [267]. Seealso [18, 19, 20].

Theorem 7.5.11 is a slight modification (and simplification) of Theorem 3 of [63] in thecase G ∈ R and of Theorem 2.1 of [52] in the case G ∈ Se.

Chapter 8

A complete asymptotic analysis (including the derivation of asymptotic expansions) ofP(η±(x) = n) for all x and n → ∞ for bounded lattice-valued ξ was given in [33, 34].For an extension of the class R, the asymptotics P(θ = n) ∼ cV (−an), and hencethe asymptotics of P(η− > n) as well, was found in Chapter 4 of [42]. Comprehensiveresults on the asymptotics of P(η−(x) > n) and P(n < η+(x) < ∞) in the case of afixed x � 0 for the classes R and C were obtained in [116, 32, 104, 27, 193]. Necessaryand sufficient conditions for the finiteness of Eηγ

−, γ > 0, and some relevant problemswere studied in [143, 154, 138, 160]. Unimprovable bounds for P(η− > n) were givenin § 43 of [50]. Note also that the local theorems 2.1 and 3.1 of [54] in the lattice case wereestablished earlier, in [105].

For a proof of Theorem 8.2.16 and bibliographic comments on it, see [32], pp. 381–2;that conditions (8.2.45) and (8.2.46) are equivalent was established in [106]. The assertionof Theorem 8.2.17(ii) was established in [39].

Page 638: Asymptotic analysis of random walks

Bibliographic notes 607

Theorem 8.2.18 was established in [116, 32, 104, 27, 193]. In the case when Eξ = 0,Eξ2 < ∞ and the distribution F is arithmetic, local limit theorems for the asymptotics ofP`η±(x) = n

´as n → ∞ were obtained in [3]. In the non-lattice case, it was shown

in [193] that (8.2.54) holds true under the additional condition that E|ξ|3 < ∞. A moregeneral result claiming that Eξ2 < ∞ would suffice for (8.2.54) was published in [117].However, as communicated to us by A.A. Mogulskii, this result is incorrect as its conditionscontain no restrictions on the structure of F (a paper by A.A. Mogulskii with a correctversion of this theorem was submitted to Siberian Adv. Math). In the case of boundedlattice-valued r.v.’s ξ, asymptotic expansions for P(η±(x) = n) were derived in [33, 34].

For distributions F ∈ C, a comprehensive study of the asymptotics of the probabili-ties P

`η+(x) > n

´= P(Sn � x) for arbitrary a = Eξ, under the assumption that F

has an absolutely continuous component, was made in [33, 34, 37]. The assertion of Theo-rem 8.3.1 follows from the results of [37], while that of Theorem 8.3.2 follows from boundsin [45, 47]. The assertion of Theorem 8.3.3 follows immediately from the convergence rateestimate obtained in [203] (see also [204, 202]). An asymptotic expansion for P

`Sn � x

´as x → ∞ in the cases when Eξ = 0, E|ξ|s < ∞, s > 3, and the distribution of ξis either lattice or satisfies the Cramer condition lim sup|λ|→∞ |f(λ)| < 1, was studiedin [204, 37, 175, 176].

The exposition in Chapter 8 is close to [58]. A survey of results close to those presentedin Chapter 8 is contained in [193].

Chapter 9

Necessary and sufficient conditions for partial sums Sn of i.i.d. random vectors in Rd to

converge to a non-Gaussian stable distribution were found in [243]; further references,as well as a detailed discussion of the problem on the convergence of Sn under operatorscaling, can be found in [189]). The concept of regular variation in the multivariate setupwas discussed in [23].

In the case when the Cramer condition Ee(λ,ξ) < C < ∞ holds in a neighbourhood ofa point λ0 = 0, a comprehensive study of the asymptotic behaviour of P

`Sn ∈ Δ[x)

´as

(λ0, x) → ∞ was presented in [69, 70].In the special case when F has a bounded density f(x), which admits a representa-

tion of the form (9.1.9), the large deviation problem was considered in [283, 200]; seealso [199]. In [283], a local limit theorem for the density fn(x) of Sn was established forlarge deviations along ‘non-singular directions’. This result was complemented in [200]by an analysis of the asymptotics of fn(x) along ‘singular directions’ for an even nar-rower distribution class (in particular, it was assumed that there are only finitely many suchdirections, and that the density f(x) decays along such directions as a power function oforder −β−d, β > α). The main result of [200] shows that the principal contribution to theprobabilities of large deviations along singular directions comes not from trajectories witha single large jump (which is the case in the univariate problem and also for non-singulardirections when d > 1) but from those with two large jumps.

The case when ξ follows a lattice distribution (with probabilities regularly varying atinfinity) was studied in [285].

In a more general case, when only conditions (9.1.7) (with α > 0) and (9.1.8) aremet, an integral-type large deviation theorem, describing the behaviour of the probabilitiesP(Sn ∈ xA) when the set A ⊂ R

d is bounded away from 0 and has a ‘regular’ boundary,was obtained in [151] together with its functional version.

The main results presented in Chapter 9 were obtained in [54].

Page 639: Asymptotic analysis of random walks

608 Bibliographic notes

Chapter 10

An assertion close to Theorem 10.2.3 was obtained in [225] in the case when ξ has a densityregularly varying at infinity. In this case, conditions [A1] and [A2] could be somewhatrelaxed where it is assumed that the sets A(t) and Af (t) are unions of finite collections ofintervals. The same paper [225] contains a ‘transient’ assertion for deviations x � √

n,where an approximation for P(Gn) includes the term P

`w(·) ∈ xA

´, where w(·) is the

standard Wiener process.An assertion close to Theorem 10.2.6 was obtained in [131] under more stringent con-

ditions on F+(t) and somewhat relaxed versions of conditions [A1] and [A2].In [150] a theorem was established, which, in the case of Markov processes in R

d withweakly dependent increments and distributions regularly varying at infinity, enables one tomake a transition from the large deviation results for one-dimensional distributions of theprocess to those for the trajectories of the process. This result was used in [151] to obtaina functional version of the large deviation theorem in the case when one has (9.1.7) (foran α > 0) and (9.1.8).

Chapter 11

The problem on the asymptotics of P (1, n, an) as n → ∞, when ξ � a < ∞, τ � 0and the distribution of τ is close to semiexponential, was studied in [13, 124] (see ibid. formotivations and applications to queueing theory).

Chapter 12

In [127] upper bounds for the distribution of Sn in terms of ‘truncated’ moments wereobtained without any assumptions on the existence of regularly varying majorants (suchmajorants always exist in the case of finite moments, but their decay rate will not be thetrue one). This makes bounds from [127] in a sense more general than those presentedin § 12.1 but also substantially more cumbersome. One cannot derive from them boundsfor Sn of the form (12.1.11), (12.1.12).

In the case Eξ2 → d < ∞, the limiting relation (12.5.1) in the problem on transientphenomena was obtained in [165, 231] (see also [42]). Only some partial results for somespecial distributions of ξ were known in the case Eξ2 = ∞ (see [93, 78, 94]).

Under condition [UR] with h(n) = n, the relation P(Sn � x) ∼ nV (x) for x > cnwas obtained in [218].

The main results presented in Chapter 12 were obtained in [59, 61].

Chapter 13

In [127] upper bounds for the distribution of Sn in terms of ‘truncated’ moments wereobtained without any assumptions on the existence of regularly varying majorants (suchmajorants always exist in the case of finite moments, but their decay rate will not be the trueone). This makes bounds from [127] more general in a sense, but also substantially morecumbersome. One cannot derive from them bounds for Sn of the form (13.1.11), (13.1.12).

Inequalities for P(|Sn| � x) close to our lower bounds for P(Sn � x) were obtainedin [208].

The crude inequality P(Sn � x) � 12

Pnj=1 Fj,+(2x) for x � 2

√Dn was obtained

in [206]. Some bounds for the distribution of Sn were also established in [240, 226].Under condition [UR] with H(n) = n, the relation P(Sn � x) ∼ nV (x) for x > cn

was obtained in [218].The main result (13.3.2) for transient phenomena in the i.i.d. case with Eξ2 → d < ∞

was established in [165, 231] (see also § 25, Chapter 4 of [42]).

Page 640: Asymptotic analysis of random walks

Bibliographic notes 609

The main results presented in Chapter 13 were obtained in [60].

Chapter 14

In the case when ξj(Xj) = f(Xj), where f is a given function on X and {Xk} forms aHarris Markov chain (having a positive recurrent state z0), the asymptotics of P(Sn � x)in terms of regularly varying distribution tails of the sums of quantities f(Xk) taken overa cycle (formed by consecutive visits to a fixed positive recurrent atom), were studiedin [187].

The asymptotics of P(S(a) � x) in the case when the Sn are partial sums of a sequenceof moving averages of i.i.d. r.v.’s, satisfying conditions of a subexponential type, wereestablished in [179].

The asymptotics of the distribution of the maximum of a random walk, defined on afinite Markov chain, were studied in [6]. An analogue of Theorem 7.5.1 for processes with‘modulated increments’ (i.e. processes the distributions of whose increments depend onthe value of an unobservable regenerative process) both in discrete and in continuous time,was obtained in [125, 123]. See also [155, 14, 4, 139].

For sequences {ξj} of r.v.'s satisfying a mixing condition and such that F ∈ R, results on the large deviations of Sn were obtained in [99]; for autoregressive processes with random coefficients (i.e. when ξn = Anξn−1 + Yn, where the sequences of positive i.i.d. r.v.'s {An}, {Yn} are independent, Yn ⊂= FY ∈ R), results on the large deviations of Sn were obtained in [173, 263].
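As an aside, the autoregressive model just mentioned is easy to experiment with numerically. The following minimal Monte Carlo sketch (in Python; an illustration with assumed parameter choices, uniform coefficients An and Pareto-type innovations Yn, rather than a construction from [173, 263]) estimates P(Sn ≥ x) for the partial sums Sn = ξ1 + · · · + ξn by direct simulation:

import numpy as np

rng = np.random.default_rng(0)

def estimate_tail(n=50, x=500.0, trials=100_000, alpha=1.5):
    """Monte Carlo estimate of P(S_n >= x) for S_n = xi_1 + ... + xi_n,
    where xi_k = A_k * xi_{k-1} + Y_k with A_k uniform on (0.1, 0.9) and
    Y_k = 1 + Pareto(alpha), so that P(Y_k > t) ~ t^{-alpha} (F_Y in R)."""
    xi = np.zeros(trials)   # current value xi_k for each simulated path
    s = np.zeros(trials)    # running partial sum S_k for each path
    for _ in range(n):
        a = rng.uniform(0.1, 0.9, size=trials)    # positive random coefficients A_k
        y = 1.0 + rng.pareto(alpha, size=trials)  # regularly varying innovations Y_k
        xi = a * xi + y                           # the autoregressive recursion
        s += xi
    return (s >= x).mean()

print(estimate_tail())

For α < 2 the innovations have infinite variance, and the empirical tail of Sn obtained this way decays like a power function, in line with the heavy-tailed large deviation results cited above.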

Chapter 15

The properties of processes with independent increments are described in the monographs [256, 245, 25, 246, 273]. Theorem 15.1.1 (including the converse assertion) was proved in [112] (see also Theorem A3.22 of [113] and § 8.2.7 of [32]; more precisely, it was shown in [112] that the following assertions are equivalent: (1) F ∈ S, (2) G[1] ∈ S, (3) F+(t) ∼ G[1],+(t) as t → ∞).

A similar assertion for the class R was obtained in [110].

The asymptotics of the tails of subadditive functionals of processes with independent increments in the case when G[1] ∈ R were studied in [79].

The asymptotics of P(S(∞) ≥ x) and some other characteristics of a process with independent increments in the case when the spectral measure has a subexponential right tail were studied in [168].

Chapter 16

The assertion of Theorem 16.1.2, I was proved in [115]. The first assertion of Theorem 16.1.3 follows from Theorem 12 in Chapter 4 of [42], where it was established in a more general case (later on, it was shown in [275, 115] that a sufficient condition for this assertion is that F I+ ∈ S; that this is a necessary condition as well was proved in [177], see also the remarks on Theorem 7.5.1 above), while the second assertion follows from the main theorem of [178] (where it was proved for strongly subexponential distributions).

An analogue of the uniform representation (4.1.2) for generalized renewal processes, under additional moment assumptions and the condition that the distribution Fτ has a bounded density, was presented in [188]. In the case q = 0 and under the assumptions that the distribution tail of the r.v. ξ ≥ 0 is of 'extended regular variation' (see § 4.8) and that the process {ν(t)} satisfies condition (16.1.6) for some ε > 0 and c > 0, the relation (16.1.5) for any fixed δ > 0 was established in [169] (the process {ν(t)} was assumed there to be of a more general form than a renewal process). This latter paper also contains sufficient conditions for (16.1.6) to hold for a renewal process: Eτ² < ∞ and Fτ(t) ≥ 1 − e−bt, t > 0, for some b > 0 (Lemma 2.3 of [169]). In [262] it was shown that condition (16.1.6) is always met for renewal processes with aτ < ∞, without any additional assumptions regarding Fτ. Note that the last fact follows also from the inequality (16.2.13).

The main results presented in Chapter 16 were obtained in [65].


References

[1] Adler, R., Feldman, R. and Taqqu, M.S., eds. A Practical Guide to Heavy Tails: Statistical Techniques for Analysing Heavy-tailed Distributions (Birkhäuser, Boston, 1998).
[2] Aleskjavicene, A.K. On the probabilities of large deviations for the maximum of sums of independent random variables. I, II. Theory Probab. Appl., 24 (1979), 16–33, 322–337.
[3] Alili, L. and Doney, R.A. Wiener–Hopf factorization revisited and some applications. Stochastics and Stoch. Reports, 66 (1999), 87–102.
[4] Alsmeyer, G. and Sgibnev, M. On the tail behaviour of the supremum of a random walk defined on a Markov chain. Yokohama Math. J., 46 (1999), 139–159.
[5] Araujo, A. and Giné, E. The Central Limit Theorem for Real and Banach Valued Random Variables (Wiley, New York, 1980).
[6] Arndt, K. Asymptotic properties of the distribution of the supremum of a random walk on a Markov chain. Theory Probab. Appl., 25 (1980), 309–324.
[7] Arov, D.Z. and Bobrov, A.A. The extreme members of a sample and their role in the sum of independent variables. Theory Probab. Appl., 5 (1960), 377–396.
[8] Asmussen, S. Subexponential asymptotics for stochastic processes: extremal behaviour, stationary distributions and first passage probabilities. Ann. Appl. Probab., 8 (1998), 354–374.
[9] Asmussen, S. Ruin Probabilities (World Scientific, Singapore, 2000).
[10] Asmussen, S., Foss, S. and Korshunov, D. Asymptotics for sums of random variables with local subexponential behaviour. J. Theoretical Probab., 16 (2003), 489–518.
[11] Asmussen, S., Kalashnikov, V., Konstantinides, D., Klüppelberg, C. and Tsitsiashvili, G. A local limit theorem for random walk maxima with heavy tails. Statist. Probab. Lett., 56 (2002), 399–404.
[12] Asmussen, S. and Klüppelberg, C. Large deviations results for subexponential tails, with applications to insurance risk. Stoch. Proc. Appl., 64 (1996), 103–125.
[13] Asmussen, S., Klüppelberg, C. and Sigman, K. Sampling at subexponential times, with queueing applications. Stoch. Proc. Appl., 79 (1999), 265–286.
[14] Asmussen, S. and Møller, J.R. Tail asymptotics for M/G/1 type queueing processes with subexponential increments. Queueing Systems, 33 (1999), 153–176.
[15] Athreya, K.B. and Ney, P.E. Branching Processes (Springer, Berlin, 1972).
[16] Bahadur, R. and Ranga Rao, R. On deviations of the sample mean. Ann. Math. Statist., 31 (1960), 1015–1027.
[17] Baltrunas, A. On the asymptotics of one-sided large deviation probabilities. Lithuanian Math. J., 35 (1995), 11–17.
[18] Baltrunas, A. Second-order asymptotics for the ruin probability in the case of very large claims. Siberian Math. J., 40 (1999), 1226–1235.
[19] Baltrunas, A. Second order behaviour of ruin probabilities. Scandinavian Actuar. J., (1999), 120–133.
[20] Baltrunas, A. The rate of convergence in the precise large deviation theorem. Probab. Math. Statist., 22 (2002), 343–354.
[21] Baltrunas, A. Second-order tail behavior of the busy period distribution of certain GI/G/1 queues. Lithuanian Math. J., 42 (2002), 243–254.
[22] Baltrunas, A., Daley, D.J. and Klüppelberg, C. Tail behaviour of the busy period of a GI/GI/1 queue with subexponential service times. Stochastic Process. Appl., 111 (2004), 237–258.
[23] Basrak, B., Davis, R.A. and Mikosch, T. A characterization of multivariate regular variation. Ann. Appl. Probab., 12 (2002), 908–920.
[24] Bentkus, V. and Bloznelis, M. Nonuniform estimate of the rate of convergence in the CLT with stable limit distribution. Lithuanian Math. J., 29 (1989), 8–17.
[25] Bertoin, J. Lévy Processes (Cambridge University Press, Cambridge, 1996).
[26] Bertoin, J. and Doney, R.A. On the local behaviour of ladder height distributions. J. Appl. Probab., 31 (1994), 816–821.
[27] Bertoin, J. and Doney, R.A. Some asymptotic results for transient random walks. Adv. Appl. Probab., 28 (1996), 207–226.
[28] Billingsley, P. Convergence of Probability Measures (Wiley, New York, 1968).
[29] Bingham, N.H. Maxima of sums of random variables and suprema of stable processes. Z. Wahrscheinlichkeitstheorie und verw. Geb., 26 (1973), 273–296.
[30] Bingham, N.H. Limit theorems in fluctuation theory. Adv. Appl. Probab., 5 (1973), 554–569.
[31] Bingham, N.H. Variants of the law of the iterated logarithm. Bull. London Math. Soc., 18 (1986), 433–467.
[32] Bingham, N.H., Goldie, C.M. and Teugels, J.L. Regular Variation (Cambridge University Press, Cambridge, 1987).
[33] Borovkov, A.A. Limit theorems on the distributions of maxima of sums of bounded lattice random variables. I. Theory Probab. Appl., 5 (1960), 125–155.
[34] Borovkov, A.A. Limit theorems on the distributions of maxima of sums of bounded lattice random variables. II. Theory Probab. Appl., 5 (1960), 341–355.
[35] Borovkov, A.A. Remarks on Wiener's and Blackwell's theorems. Theory Probab. Appl., 9 (1964), 303–312.
[36] Borovkov, A.A. Analysis of large deviations in boundary-value problems with arbitrary boundaries. I, II. Siberian Math. J., 5 (1964), 253–289, 750–767. (In Russian.)
[37] Borovkov, A.A. New limit theorems in boundary problems for sums of independent random variables. Select. Transl. Math. Stat. Probab., 5 (1965), 315–372. (Original publication in Russian: Siberian Math. J., 3 (1962), 645–694.)
[38] Borovkov, A.A. Boundary-value problems for random walks and large deviations in function spaces. Theory Probab. Appl., 12 (1967), 575–595.
[39] Borovkov, A.A. Factorization identities and properties of the distribution of the supremum of sequential sums. Theory Probab. Appl., 15 (1970), 359–402.
[40] Borovkov, A.A. Notes on inequalities for sums of independent variables. Theory Probab. Appl., 17 (1972), 556–557.
[41] Borovkov, A.A. The convergence of distributions of functionals of stochastic processes. Russian Math. Surveys, 27(1) (1972), 1–42.
[42] Borovkov, A.A. Stochastic Processes in Queueing Theory (Springer, New York, 1976).
[43] Borovkov, A.A. Convergence of measures and random processes. Russian Math. Surveys, 31(2) (1976), 1–69.
[44] Borovkov, A.A. Boundary problems, the invariance principle and large deviations. Russian Math. Surveys, 38(4) (1983), 259–290.
[45] Borovkov, A.A. On the Cramér transform, large deviations in boundary value problems, and the conditional invariance principle. Siberian Math. J., 36 (1995), 417–434.
[46] Borovkov, A.A. Unimprovable exponential bounds for distributions of sums of a random number of random variables. Theory Probab. Appl., 40 (1995), 230–237.
[47] Borovkov, A.A. On the limit conditional distributions connected with large deviations. Siberian Math. J., 37 (1996), 635–646.
[48] Borovkov, A.A. Limit theorems for time and place of the first boundary passage by a multidimensional random walk. Dokl. Math., 55 (1997), 254–256.
[49] Borovkov, A.A. Probability Theory (Gordon and Breach, Amsterdam, 1998).
[50] Borovkov, A.A. Ergodicity and Stability of Stochastic Processes (Wiley, Chichester, 1998).
[51] Borovkov, A.A. Estimates for the distribution of sums and maxima of sums of random variables when the Cramér condition is not satisfied. Siberian Math. J., 41 (2000), 811–848.
[52] Borovkov, A.A. Probabilities of large deviations for random walks with semiexponential distributions. Siberian Math. J., 41 (2000), 1061–1093.
[53] Borovkov, A.A. Large deviations of sums of random variables of two types. Siberian Adv. Math., 4 (2002), 1–24.
[54] Borovkov, A.A. Integro-local and integral limit theorems on the large deviations of sums of random vectors: regular distributions. Siberian Math. J., 43 (2002), 402–417.
[55] Borovkov, A.A. On subexponential distributions and asymptotics of the distribution of the maximum of sequential sums. Siberian Math. J., 43 (2002), 995–1022, 1253–1264.
[56] Borovkov, A.A. Asymptotics of crossing probability of a boundary by the trajectory of a Markov chain. Heavy tails of jumps. Theory Probab. Appl., 47 (2003), 584–608.
[57] Borovkov, A.A. Large deviations probabilities for random walks in the absence of finite expectations of jumps. Probab. Theory Relat. Fields, 125 (2003), 421–446.
[58] Borovkov, A.A. On the asymptotic behavior of the distributions of first-passage times. I, II. Math. Notes, 75 (2004), 23–37, 322–330.
[59] Borovkov, A.A. Large deviations for random walks with nonidentically distributed jumps having infinite variance. Siberian Math. J., 46 (2005), 35–55.
[60] Borovkov, A.A. Asymptotic analysis for random walks with nonidentically distributed jumps having finite variance. Siberian Math. J., 46 (2005), 1020–1038.
[61] Borovkov, A.A. Transient phenomena for random walks with nonidentically distributed jumps with infinite variances. Theory Probab. Appl., 50 (2005), 199–213.
[62] Borovkov, A.A. and Borovkov, K.A. Probabilities of large deviations for random walks with a regular distribution of jumps. Dokl. Math., 61 (2000), 162–164.
[63] Borovkov, A.A. and Borovkov, K.A. On probabilities of large deviations for random walks. I. Regularly varying distribution tails. Theory Probab. Appl., 46 (2001), 193–213.
[64] Borovkov, A.A. and Borovkov, K.A. On probabilities of large deviations for random walks. II. Regular exponentially decaying distributions. Theory Probab. Appl., 49 (2005), 189–206.
[65] Borovkov, A.A. and Borovkov, K.A. Large deviation probabilities for generalized renewal processes with regularly varying jump distributions. Siberian Adv. Math., 15 (2006), 1–65.
[66] Borovkov, A.A. and Boxma, O.J. Large deviation probabilities for random walks with heavy tails. Siberian Adv. Math., 13 (2003), 1–31.
[67] Borovkov, A.A. and Mogulskii, A.A. Large Deviations and the Testing of Statistical Hypotheses (Proceedings of the Institute of Mathematics, vol. 19. Nauka, Novosibirsk, 1992).
[68] Borovkov, A.A. and Mogulskii, A.A. The second rate function and the asymptotic problems of renewal and hitting the boundary for multidimensional random walks. Siberian Math. J., 37 (1996), 647–682.
[69] Borovkov, A.A. and Mogulskii, A.A. Integro-local limit theorems including large deviations for sums of random vectors. I. Theory Probab. Appl., 43 (1999), 1–12.
[70] Borovkov, A.A. and Mogulskii, A.A. Integro-local limit theorems including large deviations for sums of random vectors. II. Theory Probab. Appl., 45 (2001), 3–22.
[71] Borovkov, A.A. and Mogulskii, A.A. Limit theorems in the boundary hitting problem for a multidimensional random walk. Siberian Math. J., 42 (2001), 245–270.
[72] Borovkov, A.A. and Mogulskii, A.A. Integro-local theorems for sums of independent random vectors in the series scheme. Math. Notes, 79 (2006), 468–482.
[73] Borovkov, A.A. and Mogulskii, A.A. Integro-local and integral theorems for sums of random variables with semiexponential distributions. Siberian Math. J., 47 (2006), 990–1026.
[74] Borovkov, A.A., Mogulskii, A.A. and Sakhanenko, A.I. Limit Theorems for Random Processes (Current Problems in Mathematics. Fundamental Directions 82. Vsesoyuz. Inst. Nauchn. i Tekhn. Inform. (VINITI), Moscow, 1995). (In Russian.)
[75] Borovkov, A.A. and Pecerski, E.A. Weak convergence of measures and random processes. Z. Wahrscheinlichkeitstheorie verw. Geb., 28 (1973), 5–22.
[76] Borovkov, A.A. and Utev, S.A. Estimates for distributions of sums stopped at Markov time. Theory Probab. Appl., 38 (1993), 214–225.
[77] Borovkov, K.A. A note on differentiable mappings. Ann. Probab., 13 (1985), 1018–1021.
[78] Boxma, O.J. and Cohen, J.W. Heavy-traffic analysis for the GI/G/1 queue with heavy-tailed distributions. Queueing Systems, 33 (1999), 177–204.
[79] Braverman, M., Mikosch, T. and Samorodnitsky, G. Tail probabilities of subadditive functionals of Lévy processes. Ann. Appl. Probab., 12 (2002), 69–100.
[80] Chernoff, H. A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations. Ann. Math. Statist., 23 (1952), 493–507.
[81] Chistyakov, V.P. A theorem on sums of independent positive random variables and its application to branching random processes. Theory Probab. Appl., 9 (1964), 640–648.
[82] Chover, J. A law of the iterated logarithm for stable summands. Proc. Amer. Math. Soc., 17 (1966), 441–443.
[83] Chover, J., Ney, P.E. and Wainger, S. Functions of probability measures. J. Analyse Math., 26 (1973), 255–302.
[84] Chover, J., Ney, P.E. and Wainger, S. Degeneracy properties of subcritical branching processes. Ann. Probab., 1 (1973), 663–673.
[85] Chow, Y.S. and Lai, T.L. Some one-sided theorems on the tail distribution of sample sums with applications to the last time and largest excess of boundary crossing. Trans. Amer. Math. Soc., 208 (1975), 51–72.
[86] Christoph, G. and Wolf, W. Convergence Theorems with a Stable Limit Law (Akademie-Verlag, Berlin, 1992).
[87] Cline, D.B.H. Convolution tails, product tails and domains of attraction. Probab. Theory Relat. Fields, 72 (1986), 529–557.
[88] Cline, D.B.H. Convolution of distributions with exponential and subexponential tails. J. Austral. Math. Soc. Ser. A, 43 (1987), 347–365.
[89] Cline, D.B.H. Intermediate regular and Π variation. Proc. London Math. Soc., 68 (1994), 594–616.
[90] Cline, D.B.H. and Hsing, T. Large deviations probabilities for sums and maxima of random variables with heavy or subexponential tails. Texas A&M University preprint (1991).
[91] Cline, D.B.H. and Samorodnitsky, G. Subexponentiality of the product of independent random variables. Stoch. Proc. Appl., 49 (1994), 75–98.
[92] Cohen, J.W. Some results on regular variation for distributions in queueing and fluctuation theory. J. Appl. Probab., 10 (1973), 343–353.
[93] Cohen, J.W. A heavy-traffic theorem for the GI/G/1 queue with Pareto-type service time distributions. J. Appl. Math. Stochastic Anal., 11 (1998), 247–254.
[94] Cohen, J.W. Random walk with a heavy-tailed jump distribution. Queueing Systems, 40 (2002), 35–73.
[95] Cramér, H. Sur un nouveau théorème-limite de la théorie des probabilités. Actualités Sci. Indust., 736 (1938), 5–23.
[96] Cramér, H. On asymptotic expansions for sums of independent random variables with a limiting stable distribution. Sankhya A, 25 (1963), 12–24.
[97] Darling, D.A. The influence of the maximum term in the addition of independent random variables. Trans. Amer. Math. Soc., 73 (1952), 95–107.
[98] Davis, R.A. Stable limits for partial sums of dependent random variables. Ann. Probab., 11 (1983), 262–269.
[99] Davis, R.A. and Hsing, T. Point processes and partial sum convergence for weakly dependent random variables with infinite variance. Ann. Probab., 23 (1995), 879–917.
[100] Davydov, Yu.A. and Nagaev, A.V. On the role played by extreme summands when a sum of independent and identically distributed random vectors is asymptotically α-stable. J. Appl. Probab., 41 (2004), 437–454.
[101] Dembo, A. and Zeitouni, O. Large Deviation Techniques and Applications (Jones and Bartlett, London, 1993).
[102] Denisov, D.F. and Foss, S.G. On transience conditions for Markov chains and random walks. Siberian Math. J., 44 (2003), 44–57.
[103] Denisov, D., Foss, S. and Korshunov, D. Tail asymptotics for the supremum of a random walk when the mean is not finite. Queueing Systems, 46 (2004), 15–33.
[104] Doney, R.A. On the asymptotic behaviour of first passage times for a transient random walk. Probab. Theory Relat. Fields, 81 (1989), 239–246.
[105] Doney, R.A. A large deviation local limit theorem. Math. Proc. Cambridge Phil. Soc., 105 (1989), 575–577.
[106] Doney, R.A. Spitzer's condition and ladder variables in random walks. Probab. Theory Relat. Fields, 101 (1995), 577–580.
[107] Donsker, M.D. An invariance principle for certain probability limit theorems. Mem. Amer. Math. Soc., 6 (1951), 1–12.
[108] Durrett, R. Conditional limit theorems for random walks with negative drift. Z. Wahrscheinlichkeitstheorie verw. Geb., 52 (1980), 277–287.
[109] Embrechts, P. and Goldie, C.M. On closure and factorization theorems for subexponential and related distributions. J. Austral. Math. Soc. Ser. A, 29 (1980), 243–256.
[110] Embrechts, P. and Goldie, C.M. Comparing the tail of an infinitely divisible distribution with integrals of its Lévy measure. Ann. Probab., 9 (1981), 468–481.
[111] Embrechts, P. and Goldie, C.M. On convolution tails. Stoch. Proc. Appl., 13 (1982), 263–278.
[112] Embrechts, P., Goldie, C.M. and Veraverbeke, N. Subexponentiality and infinite divisibility. Z. Wahrscheinlichkeitstheorie verw. Geb., 49 (1979), 335–347.
[113] Embrechts, P., Klüppelberg, C. and Mikosch, T. Modelling Extremal Events (Springer, New York, 1997).
[114] Embrechts, P. and Omey, E. A property of long-tailed distributions. J. Appl. Prob., 21 (1984), 80–87.
[115] Embrechts, P. and Veraverbeke, N. Estimates for the probability of ruin with special emphasis on the possibility of large claims. Insurance: Math. Econom., 1 (1982), 55–72.
[116] Emery, D.J. Limiting behaviour of the distribution of the maxima of partial sums of certain random walks. J. Appl. Probab., 9 (1972), 572–579.
[117] Eppel, N.S. A local limit theorem for first passage time. Siberian Math. J., 20(1) (1979), 130–138.
[118] Erickson, K.B. Strong renewal theorems with infinite mean. Trans. Amer. Math. Soc., 151 (1970), 263–291.
[119] Erickson, K.B. The strong law of large numbers when the mean is undefined. Trans. Amer. Math. Soc., 185 (1973), 371–381.
[120] Feller, W. On regular variation and local limit theorems, in Proc. Fifth Berkeley Symp. Math. Stat. Prob. II(1), ed. Neyman, J. (University of California Press, Berkeley, 1967), pp. 373–388.
[121] Feller, W. An Introduction to Probability Theory and its Applications I, 3rd edn (Wiley, New York, 1968).
[122] Feller, W. An Introduction to Probability Theory and its Applications II, 2nd edn (Wiley, New York, 1971).
[123] Foss, S., Konstantopoulos, T. and Zachary, S. The principle of a single big jump: discrete and continuous time modulated random walks with heavy-tailed increments. arxiv.org/pdf/math.PR/0509605 (2005).
[124] Foss, S. and Korshunov, D. Sampling at a random time with a heavy-tailed distribution. Markov Proc. Related Fields, 6 (2000), 543–568.
[125] Foss, S.G. and Zachary, S. Asymptotics for the maximum of a modulated random walk with heavy-tailed increments, in Analytic Methods in Applied Probability. Amer. Math. Soc. Transl. Ser. 2, 207 (2002), 37–52.
[126] Foss, S.G. and Zachary, S. The maximum on a random time interval of a random walk with long-tailed increments and negative drift. Ann. Appl. Probab., 13 (2003), 37–53.
[127] Fuk, D.H. and Nagaev, S.V. Probability inequalities for sums of independent random variables. Theory Probab. Appl., 16 (1971), 643–660. (Also ibid., 21 (1976), 896.)
[128] Garsia, A. and Lamperti, J. A discrete renewal theorem with infinite mean. Comment. Math. Helv., 36 (1962), 221–234.
[129] Gikhman, I.I. and Skorokhod, A.V. Introduction to the Theory of Random Processes (Dover, Mineola, 1996). (Translated from the 1965 Russian original.)
[130] Gnedenko, B.V. and Kolmogorov, A.N. Limit Distributions for Sums of Independent Random Variables (Addison-Wesley, Reading, 1954). (Translated from the 1949 Russian original.)
[131] Godovanchuk, V.V. Probabilities of large deviations for sums of independent random variables attracted to a stable law. Theory Probab. Appl., 23 (1978), 602–608.
[132] Goldie, C.M. Subexponential distributions and dominated-variation tails. J. Appl. Prob., 15 (1978), 440–442.
[133] Goldie, C.M. and Klüppelberg, C. Subexponential distributions, in A Practical Guide to Heavy Tails: Statistical Techniques for Analysing Heavy-tailed Distributions, ed. Adler, R. et al. (Birkhäuser, Boston, 1998), pp. 435–454.
[134] Gradshtein, I.S. and Ryzhik, I.M. Table of Integrals, Series, and Products (Academic Press, New York, 1980).
[135] Greenwood, P. Asymptotics of randomly stopped sequences with independent increments. Ann. Probab., 1 (1973), 317–321.
[136] Greenwood, P. and Monroe, I. Random stopping preserves regular variation of process distributions. Ann. Probab., 5 (1977), 42–51.
[137] Grübel, R. Asymptotic analysis in probability theory using Banach-algebra techniques. Habilitationsschrift, Universität Essen (1984).
[138] Gut, A. Stopped Random Walks (Springer, Berlin, 1988).
[139] Hansen, N.R. and Jensen, A.T. The extremal behaviour over regenerative cycles for Markov additive processes with heavy tails. Stoch. Proc. Appl., 115 (2005), 579–591.
[140] Hartman, P. and Wintner, A. On the law of the iterated logarithm. Amer. J. Math., 63 (1941), 169–176.
[141] Heyde, C.C. A contribution to the theory of large deviations for sums of independent random variables. Z. Wahrscheinlichkeitstheorie verw. Geb., 7 (1967), 303–308.
[142] Heyde, C.C. A limit theorem for random walks with drift. J. Appl. Probab., 4 (1967), 144–150.
[143] Heyde, C.C. Asymptotic renewal results for a natural generalization of classical renewal theory. J. Roy. Statist. Soc. Ser. B, 29 (1967), 141–150.
[144] Heyde, C.C. On large deviation problems for sums of random variables which are not attracted to the normal law. Ann. Math. Statist., 38 (1967), 1575–1578.
[145] Heyde, C.C. On large deviation probabilities in the case of attraction to a non-normal stable law. Sankhya A, 30 (1968), 253–258.
[146] Heyde, C.C. On the maximum of sums of random variables and the supremum functional for stable processes. J. Appl. Probab., 6 (1969), 419–429.
[147] Heyde, C.C. A note concerning behaviour of iterated logarithm type. Proc. Amer. Math. Soc., 23 (1969), 85–90.
[148] Heyman, D.P. and Lakshman, T.V. Source models for VBR broadcast-video traffic. IEEE/ACM Trans. Netw., 4 (1996), 40–48.
[149] Höglund, T. A unified formulation of the central limit theorem for small and large deviations from the mean. Z. Wahrscheinlichkeitstheorie verw. Geb., 49 (1979), 105–117.
[150] Hult, H. and Lindskog, F. Extremal behaviour for regularly varying stochastic processes. Stoch. Proc. Appl., 115 (2005), 249–274.
[151] Hult, H., Lindskog, F., Mikosch, T. and Samorodnitsky, G. Functional large deviations for multivariate regularly varying random walks. Ann. Appl. Probab., 15 (2005), 2651–2680.
[152] Ibragimov, I.A. and Linnik, Yu.V. Independent and Stationary Sequences of Random Variables (Wolters-Noordhoff, Groningen, 1971). (Translated from the 1965 Russian original.)
[153] Janicki, A. and Weron, A. Simulation and Chaotic Behavior of α-stable Stochastic Processes (Marcel Dekker, New York, 1994).
[154] Janson, S. Moments for first-passage and last-exit times, the minimum, and related quantities for random walks with positive drift. Adv. Appl. Probab., 18 (1986), 865–879.
[155] Jelenkovic, P.R. and Lazar, A.A. Subexponential asymptotics of a Markov-modulated random walk with queueing applications. J. Appl. Probab., 35 (1998), 325–347.
[156] Jelenkovic, P.R. and Lazar, A.A. Asymptotic results for multiplexing subexponential on-off processes. Adv. Appl. Probab., 31 (1999), 394–421.
[157] Jelenkovic, P.R., Lazar, A.A. and Semret, N. The effect of multiple time scales and subexponentiality of MPEG video streams on queueing behavior. IEEE J. Sel. Areas Commun., 15 (1997), 1052–1071.
[158] Karamata, J. Sur un mode de croissance régulière des fonctions. Mathematica (Cluj), 4 (1930), 38–53.
[159] Karamata, J. Sur un mode de croissance régulière. Théorèmes fondamentaux. Bull. Soc. Math. France, 61 (1933), 55–62.
[160] Kesten, H. and Maller, R.A. Two renewal theorems for general random walks tending to infinity. Probab. Theory Relat. Fields, 106 (1996), 1–38.
[161] Khintchine, A.Ya. Über einen Satz der Wahrscheinlichkeitsrechnung. Fund. Math., 6 (1924), 9–20.
[162] Khintchine, A.Ya. Asymptotische Gesetze der Wahrscheinlichkeitsrechnung (Springer, Berlin, 1933). (In German. Russian translation published by ONTI NKTP, 1936.)
[163] Khintchine, A.Ya. Two theorems on stochastic processes with stable increment distributions. Matem. Sbornik, 3(45) (1938), 577–584. (In Russian.)
[164] Khokhlov, Yu.S. The law of the iterated logarithm for random vectors with an operator-stable limit law. Vestnik Moskov. Univ. Ser. XV Vychisl. Mat. Kibernet., (1995), 62–69. (In Russian.)
[165] Kingman, J.F.C. On queues in heavy traffic. J. Roy. Statist. Soc. Ser. B, 24 (1962), 383–392.
[166] Klüppelberg, C. Subexponential distributions and integrated tails. J. Appl. Probab., 25 (1988), 132–141.
[167] Klüppelberg, C. Subexponential distributions and characterizations of related classes. Probab. Theory Relat. Fields, 82 (1989), 259–269.
[168] Klüppelberg, C., Kyprianou, A.E. and Maller, R.A. Ruin probabilities and overshoots for general Lévy insurance risk processes. Ann. Appl. Probab., 14 (2004), 1766–1801.
[169] Klüppelberg, C. and Mikosch, T. Large deviations of heavy-tailed random sums with applications in insurance and finance. J. Appl. Probab., 34 (1997), 293–308.
[170] Kolmogoroff, A.N. Über die Grenzwertsätze der Wahrscheinlichkeitsrechnung. Math. Annalen, 101 (1929), 120–126.
[171] Kolmogoroff, A.N. Über das Gesetz des iterierten Logarithmus. Math. Annalen, 101 (1929), 126–135.
[172] Kolmogoroff, A.N. Über die analytischen Methoden in der Wahrscheinlichkeitsrechnung. Math. Annalen, 104 (1931), 415–458.
[173] Konstantinides, D. and Mikosch, T. Large deviations and ruin probabilities for solutions to stochastic recurrence equations with heavy-tailed innovations. Ann. Probab., 33 (2005), 1992–2035.
[174] Korevaar, J., van Aardenne-Ehrenfest, T. and de Bruijn, N.G. A note on slowly oscillating functions. Nieuw Arch. Wiskunde (2), 23 (1949), 77–86.
[175] Korolyuk, V.S. On the asymptotic distribution of the maximal deviations. Dokl. Akad. Nauk SSSR, 142 (1962), 522–525. (In Russian.)
[176] Korolyuk, V.S. Asymptotic analysis of distributions of maximum deviations in a lattice random walk. Theory Probab. Appl., 7 (1962), 383–401.
[177] Korshunov, D.A. On distribution tail of the maximum of a random walk. Stoch. Proc. Appl., 72 (1997), 97–103.
[178] Korshunov, D.A. Large deviations probabilities for maxima of sums of independent random variables with negative mean and subexponential distribution. Theory Probab. Appl., 46 (2001), 355–366.
[179] Korshunov, D.A., Shlegel, S. and Shmidt, F. Asymptotic analysis of random walks with dependent heavy-tailed increments. Siberian Math. J., 44 (2003), 833–844.
[180] Landau, E. Darstellung und Begründung einiger neuerer Ergebnisse der Funktionentheorie, 2nd edn (Springer, Berlin, 1929).
[181] Leland, W.E., Taqqu, M.S., Willinger, W. and Wilson, D.V. On the self-similar nature of Ethernet traffic, in Proc. SIGCOMM '93 (1993), 183–193.
[182] LePage, R., Woodroofe, M. and Zinn, J. Convergence to a stable distribution via order statistics. Ann. Probab., 9 (1981), 624–632.
[183] Linnik, Yu.V. On the probability of large deviations for the sums of independent variables, in Proc. Fourth Berkeley Symp. Math. Stat. Prob. 2 (University of California Press, Berkeley, 1961), pp. 289–306.
[184] Linnik, Yu.V. Limit theorems for sums of independent variables taking into account large deviations. I, II. Theory Probab. Appl., 6 (1961), 113–148, 345–360.
[185] Linnik, Yu.V. Limit theorems for sums of independent variables taking into account large deviations. III. Theory Probab. Appl., 7 (1962), 115–129.
[186] Loève, M. Probability Theory, 4th edn (Springer, New York, 1977).
[187] Malinovskii, V.K. Limit theorems for Harris Markov chains. II. Theory Probab. Appl., 34 (1989), 252–265.
[188] Malinovskii, V.K. Limit theorems for stopped random sequences II: Large deviations. Theory Probab. Appl., 41 (1996), 70–90.
[189] Meerschaert, M.M. and Scheffler, H.-P. Limit Distributions for Sums of Independent Random Vectors: Heavy Tails in Theory and Practice (Wiley, New York, 2001).
[190] Mikosch, T. The law of the iterated logarithm for independent random variables outside the domain of partial attraction of the normal law. Vestnik Leningradskogo Univ. Mat. Mech. Astronom., 3 (1984), 35–39. (In Russian.)
[191] Mikosch, T. and Nagaev, A.V. Large deviations of heavy-tailed sums with applications in insurance. Extremes, 1 (1998), 81–110.
[192] Mogulskii, A.A. Integro-local theorem for sums of random variables with regularly varying distributions valid on the whole real line. Siberian Math. J. (In print.)
[193] Mogulskii, A.A. and Rogozin, B.A. A local theorem for the first hitting time of a fixed level by a random walk. Siberian Adv. Math., 15 (2005), 1–27.
[194] Nagaev, A.V. Limit theorems that take into account large deviations when Cramér's condition is violated. Izv. Akad. Nauk UzSSR, Ser. Fiz.-Mat. Nauk, 13 (1969), 17–22. (In Russian.)
[195] Nagaev, A.V. Integral limit theorems taking large deviations into account when Cramér's condition does not hold. I, II. Theory Probab. Appl., 14 (1969), 51–64, 193–208.
[196] Nagaev, A.V. On a property of sums of independent random variables. Theory Probab. Appl., 22 (1977), 326–338.
[197] Nagaev, A.V. On the asymmetric problem of large deviations when the limit law is stable. Theory Probab. Appl., 28 (1983), 670–680.
[198] Nagaev, A.V. Cramér large deviations when the extreme conjugate distribution is heavy-tailed. Theory Probab. Appl., 43 (1998), 405–421.
[199] Nagaev, A.V. and Zaigraev, A. Multidimensional limit theorems allowing large deviations for densities of regular variation. J. Multivariate Anal., 67 (1998), 385–397.
[200] Nagaev, A.V. and Zaigraev, A. New large deviation local theorems for sums of independent and identically distributed random vectors when the limit is α-stable. Bernoulli, 11 (2005), 665–688.
[201] Nagaev, S.V. Some limit theorems for large deviations. Theory Probab. Appl., 10 (1965), 214–235.
[202] Nagaev, S.V. On the speed of convergence in a boundary problem. I, II. Theory Probab. Appl., 15 (1970), 163–186, 403–429.
[203] Nagaev, S.V. On the speed of convergence of the distribution of maximum sums of independent random variables. Theory Probab. Appl., 15 (1970), 309–314.
[204] Nagaev, S.V. Asymptotic expansions for the maximum of sums of independent random variables. Theory Probab. Appl., 15 (1970), 514–515.
[205] Nagaev, S.V. Large deviations for sums of independent random variables, in Trans. Sixth Prague Conf. on Information Theory, Random Proc. and Statistical Decision Functions (Academia, Prague, 1973), pp. 657–674.
[206] Nagaev, S.V. Large deviations of sums of independent random variables. Ann. Probab., 7 (1979), 745–789.
[207] Nagaev, S.V. On the asymptotic behavior of one-sided large deviation probabilities. Theory Probab. Appl., 26 (1981), 362–366.
[208] Nagaev, S.V. Probabilities of large deviations in Banach spaces. Math. Notes, 34 (1983), 638–640.
[209] Nagaev, S.V. and Pinelis, I.F. Some estimates for large deviations and their application to the strong law of large numbers. Siberian Math. J., 15 (1974), 153–158.
[210] Ng, K.W., Tang, Q., Yan, J.-A. and Yang, H. Precise large deviations for sums of random variables with consistently varying tails. J. Appl. Prob., 41 (2004), 93–107.
[211] Nikias, C.L. and Shao, M. Signal Processing with Alpha-stable Distributions and Applications (Wiley, New York, 1995).
[212] Osipov, L.V. On probabilities of large deviations for sums of independent random variables. Theory Probab. Appl., 17 (1972), 309–331.
[213] Pakes, A. On the tails of waiting-time distributions. J. Appl. Probab., 12 (1975), 555–564.
[214] Pakshirajan, R.P. and Vasudeva, R. A law of the iterated logarithm for stable summands. Trans. Amer. Math. Soc., 232 (1977), 33–42.
[215] Park, K. and Willinger, W., eds. Self-similar Network Traffic and Performance Evaluation (Wiley, New York, 2000).
[216] Paulauskas, V.I. Estimates of the remainder term in limit theorems in the case of stable limit law. Lithuanian Math. J., 14 (1974), 127–146.
[217] Paulauskas, V.I. Uniform and nonuniform estimates of the remainder term in a limit theorem with a stable limit law. Lithuanian Math. J., 14 (1974), 661–672.
[218] Paulauskas, V. and Skuchaite, A. Some asymptotic results for one-sided large deviation probabilities. Lithuanian Math. J., 43 (2003), 318–326.
[219] Petrov, V.V. Generalization of Cramér's limit theorem. Uspehi Matem. Nauk, 9 (1954), 195–202. (In Russian.)
[220] Petrov, V.V. Limit theorems for large deviations when Cramér's condition is violated. Vestnik Leningrad Univ. Math., 19 (1963), 49–68. (In Russian.)
[221] Petrov, V.V. Limit theorems for large deviations violating Cramér's condition. II. Vestnik Leningrad. Univ. Ser. Mat. Meh. Astronom., 19 (1964), 58–75. (In Russian.)
[222] Petrov, V.V. On the probabilities of large deviations for sums of independent random variables. Theory Probab. Appl., 10 (1965), 287–298.
[223] Petrov, V.V. Sums of Independent Random Variables (Springer, Berlin, 1975). (Translated from the 1972 Russian original.)
[224] Petrov, V.V. Limit Theorems of Probability Theory: Sequences of Independent Random Variables (Clarendon Press, Oxford University Press, New York, 1987). (Translated from the 1987 Russian original.)
[225] Pinelis, I.F. A problem on large deviations in a space of trajectories. Theory Probab. Appl., 26 (1981), 69–84.
[226] Pinelis, I.F. On certain inequalities for large deviations. Theory Probab. Appl., 26 (1981), 419–420.
[227] Pinelis, I.F. Asymptotic equivalence of the probabilities of large deviations for sums and maxima of independent random variables. Trudy Inst. Mat., 5 (1985), 144–173. (In Russian.)
[228] Pinelis, I. Exact asymptotics for large deviation probabilities, with applications, in Modelling Uncertainty. Internat. Ser. Oper. Res. Management Sci. 46 (Kluwer, Boston, 2002), pp. 57–93.
[229] Pitman, E.J.G. Subexponential distribution functions. J. Austral. Math. Soc. Ser. A, 29 (1980), 337–347.
[230] Prokhorov, Yu.V. Convergence of random processes and limit theorems in probability theory. Theory Probab. Appl., 1 (1956), 157–214.
[231] Prokhorov, Yu.V. Transition phenomena in queueing processes. Litovsk. Math. Sb., 3 (1963), 199–206. (In Russian.)
[232] Resnick, S.I. Extreme Values, Regular Variation, and Point Processes (Springer, New York, 1987).
[233] Richter, V. Multi-dimensional local limit theorems for large deviations. Theory Probab. Appl., 3 (1958), 100–106.
[234] Rogozin, B.A. On the constant in the definition of subexponential distributions. Theory Probab. Appl., 44 (2001), 409–412.
[235] Rolski, T., Schmidli, H., Schmidt, V. and Teugels, J. Stochastic Processes for Insurance and Finance (Wiley, New York, 1999).
[236] Rozovskii, L.V. An estimate for probabilities of large deviations. Math. Notes, 42 (1987), 590–597.
[237] Rozovskii, L.V. Probabilities of large deviations of sums of independent random variables with common distribution function in the domain of attraction of the normal law. Theory Probab. Appl., 34 (1989), 625–644.
[238] Rozovskii, L.V. Probabilities of large deviations on the whole axis. Theory Probab. Appl., 38 (1994), 53–79.
[239] Rozovskii, L.V. Probabilities of large deviations for sums of independent random variables with a common distribution function from the domain of attraction of an asymmetric stable law. Theory Probab. Appl., 42 (1998), 454–482.
[240] Rozovskii, L.V. A lower bound for the probabilities of large deviations of the sum of independent random variables with finite variances. J. Math. Sci., 109 (2002), 2192–2209.
[241] Rozovskii, L.V. Superlarge deviations of a sum of independent random variables having a common absolutely continuous distribution under the Cramér condition. Theory Probab. Appl., 48 (2003), 108–130.
[242] Rudin, W. Limits of ratios of tails of measures. Ann. Probab., 1 (1973), 982–994.
[243] Rvacheva, E.L. On domains of attraction of multi-dimensional distributions. Select. Transl. Math. Statist. Probab., 2 (1962), 183–205. (Original publication in Russian: L'vov. Gos. Univ. Uc. Zap. Ser. Meh.-Mat., 3 (1954), 5–44.)
[244] Sahanenko, A.I. On the speed of convergence in a boundary problem. Theory Probab. Appl., 19 (1974), 399–403.
[245] Samorodnitsky, G. and Taqqu, M. Stable Non-Gaussian Random Processes (Chapman & Hall, New York, 1994).
[246] Sato, K. Lévy Processes and Infinitely Divisible Distributions (Cambridge University Press, Cambridge, 1999).
[247] Saulis, L. and Statulevicius, V.A. Limit Theorems for Large Deviations (Kluwer, Dordrecht, 1991). (Translated and revised from the 1989 Russian original.)
[248] Schlegel, S. Ruin probabilities in perturbed risk models. Insurance Math. Econom., 22 (1998), 93–104.
[249] Schmidt, R. Über das Borelsche Summierungsverfahren. Schriften der Königsberger Gelehrten Gesellschaft, 1 (1925), 202–256.
[250] Schmidt, R. Über divergente Folgen und lineare Mittelbildungen. Math. Zeitschrift, 22 (1925), 89–152.
[251] Seneta, E. Regularly Varying Functions (Springer, Berlin, 1976).
[252] Sigman, K. A primer on heavy-tailed distributions. Queueing Systems, 33 (1999), 261–275.
[253] Sgibnev, M.S. Banach algebras of functions with the same asymptotic behavior at infinity. Siberian Math. J., 22 (1981), 467–473.
[254] Shepp, L.A. A local limit theorem. Ann. Math. Statist., 35 (1964), 419–423.
[255] Skorokhod, A.V. Limit theorems for stochastic processes with independent increments. Theory Probab. Appl., 2 (1957), 138–171.
[256] Skorokhod, A.V. Random Processes with Independent Increments (Kluwer, Dordrecht, 1991). (Translated and revised from the 1964 Russian original.)
[257] Sparre Andersen, E. On the collective theory of risk in the case of contagion between the claims, in Trans. XVth Internat. Congress of Actuaries II (New York, 1957), pp. 219–229.
[258] Stone, C. A local limit theorem for nonlattice multi-dimensional distribution functions. Ann. Math. Statist., 36 (1965), 546–551.
[259] Stone, C. On local and ratio limit theorems, in Proc. Fifth Berkeley Symp. Math. Stat. Prob. II(2), ed. Neyman, J. (University of California Press, Berkeley, 1967), pp. 217–224.
[260] Stout, W. Almost Sure Convergence (Academic Press, New York, 1974).
[261] Strassen, V. A converse to the law of the iterated logarithm. Z. Wahrscheinlichkeitstheorie verw. Geb., 4 (1966), 265–268.
[262] Tang, Q., Su, Ch., Jiang, T. and Zhang, J. Large deviations for heavy-tailed random sums in compound renewal model. Statist. Probab. Lett., 52 (2001), 91–100.
[263] Tang, Q. and Tsitsiashvili, G. Precise estimates for the ruin probability in finite horizon in a discrete-time model with heavy-tailed insurance and financial risks. Stoch. Proc. Appl., 108 (2004), 299–325.
[264] Tang, Q. and Yang, J. A sharp inequality for the tail probabilities of sums of i.i.d. r.v.'s with dominatedly varying tails. Sci. China A, 45 (2002), 1006–1011.
[265] Teugels, J.L. The sub-exponential class of probability distributions. Theory Probab. Appl., 19 (1974), 821–822.
[266] Teugels, J.L. The class of subexponential distributions. Ann. Probab., 3 (1975), 1000–1011.
[267] Teugels, J.L. and Willekens, E. Asymptotic expansions for waiting time probabilities in an M/G/1 queue with long tailed service time. Queueing Systems Theory Appl., 10 (1992), 295–311.
[268] Thorin, O. Some remarks on the ruin problem in case the epochs of claims form a renewal process. Skand. Aktuarietidskr., (1970), 29–50.
[269] Thorin, O. and Wikstad, N. Calculation of ruin probabilities when the claim distribution is lognormal. Astin Bulletin, 9 (1976), 231–246.
[270] Tkacuk, S.G. Local limit theorems, allowing for large deviations, in the case of stable limit laws. Izv. Akad. Nauk UzSSR Ser. Fiz.-Mat. Nauk, 17 (1973), 30–33. (In Russian.)
[271] Tkacuk, S.G. A theorem on large deviations in R^s in the case of a stable limit law, in Random Processes and Statistical Inference, 4 (Fan, Tashkent, 1974), pp. 178–184.
[272] Shneer, V.V. Estimates for the distributions of the sums of subexponential random variables. Siberian Math. J., 45 (2004), 1143–1158.
[273] Uchaikin, V.V. and Zolotarev, V.M. Chance and Stability (VSP Press, Utrecht, 1999).
[274] Vasudeva, R. Chover's law of the iterated logarithm and weak convergence. Acta Math. Hungar., 44 (1984), 215–221.
[275] Veraverbeke, N. Asymptotic behaviour of Wiener–Hopf factors of a random walk. Stoch. Proc. Appl., 5 (1977), 27–37.
[276] Vinogradov, V. Refined Large Deviation Limit Theorems (Longman, Harlow, 1994).
[277] Vinogradov, V.V. and Godovanchuk, V.V. Large deviations of sums of independent random variables without several maximal summands. Theory Probab. Appl., 34 (1989), 512–515.
[278] von Bahr, B. Asymptotic ruin probabilities when exponential moments do not exist. Scand. Actuarial J., (1975), 6–10.
[279] Williamson, J.A. Random walks and Riesz kernels. Pacific J. Math., 25 (1968), 393–415.
[280] Wolf, W. On probabilities of large deviations in the case in which Cramér's condition is violated. Math. Nachr., 70 (1975), 197–215. (In Russian.)
[281] Wolf, W. Asymptotische Entwicklungen für Wahrscheinlichkeiten grosser Abweichungen. Z. Wahrscheinlichkeitstheorie verw. Geb., 40 (1977), 239–256.
[282] Zachary, S. A note on Veraverbeke's theorem. Queueing Systems, 46 (2004), 9–14.
[283] Zaigraev, A. Multivariate large deviations with stable limit laws. Probab. Math. Statist., 19 (1999), 323–335.
[284] Zaigraev, A.Yu. and Nagaev, A.V. Abelian theorems, limit properties of conjugate distributions, and large deviations for sums of independent random vectors. Theory Probab. Appl., 48 (2004), 664–680.
[285] Zaigraev, A.Yu., Nagaev, A.V. and Jakubowski, A. Probabilities of large deviations of the sums of lattice random vectors when the original distribution has heavy tails. Discrete Math. Appl., 7 (1997), 313–326.
[286] Zolotarev, V.M. One-dimensional Stable Distributions (American Mathematical Society, Providence RI, 1986). (Translated from the 1983 Russian original.)


Index

Abelian type theorem, 9
arithmetic distribution, 44, 302
boundary stopping time, 349
condition
  Cramér's, xx, 234, 303, 371, 398
  Lindeberg's, 502
conjugate distribution, 301, 382
convolution, 14
  of sequences, 44
Cramér
  approximation, 235, 251–253, 298, 433
  condition, xx, 234, 303, 371, 398
  deviation zone, 252
  transform, 301, 382, 393
density, subexponential, 46
deviation function, 305, 321, 392, 555
deviations
  moderately large, 309
  normal, 309
  super-large, 308
distribution
  arithmetic, 302
  classes, see below
  conjugate, 301, 382
  exponentially tilted, 301
  function of, 335
  locally subexponential, 46, 47, 358
  regularly varying exponentially decaying, 307
  semiexponential, 29, 233
  stable, 61
  strongly subexponential, 237, 250, 274, 546
  subexponential, 14
  tail, 11, 14, 57
domain of attraction, 62
Esscher transform, 301
exponentially tilted distribution, 301
extreme deviation zone, 252
function
  of distribution, 335
  deviation, 305, 321, 392, 555
  generalized inverse, 58, 83, 508
  locally constant (l.c.), 16, 348, 358–360
  regularly varying (r.v.f.), 1
  slowly varying (s.v.f.), 1
  upper-power, 28, 224, 307, 358, 362
  ψ-locally constant (ψ-l.c.), 18, 224
generalized
  inverse function, 58, 83, 508
  renewal process, 543
inequality, Kolmogorov–Doob type, 82
integral representation theorem, 2
intermediate deviation zone, 252
invariance principle, 76, 395
iterated logarithm, law of the, 79
Kolmogorov–Doob type inequality, 82
large deviation rate function, 305, 321, 392
law of the iterated logarithm, 79
level lines, 320
Lindeberg condition, 502
locally constant function, 16, 348, 358–360
locally subexponential distribution, 46, 47, 358
Markov time, 345, 428
martingale, 506
moderately large deviations, 309
normal deviations, 309
partial factorization, 548, 554
process
  generalized renewal, 543
  renewal, 543
  stable, 78, 469
  Wiener, 75, 76, 472
random walk, 14
  defined on a Markov chain, 508
regularly varying exponentially decaying distribution, 307
regularly varying function (r.v.f.), 1
renewal process, 543
semiexponential distribution, 29, 233
sequence
  subexponential, 44
  convolution of, 44
Skorokhod metric, 75
slowly varying function (s.v.f.), 1
stable
  distribution, 61
  process, 78, 469
stopping time, 345, 428
  boundary, 349
strongly subexponential distribution, 237, 250, 274, 546
subexponential
  density, 46
  distribution, 14
  sequence, 44
super-large deviations, 308
tail, two-sided, 57
Tauberian theorem, 10
theorem
  Abelian type, 9
  integral representation, 2
  Tauberian, 10
  uniform convergence, 2
  Wiener–Lévy, 55
time, stopping, 345, 428
transform
  Cramér, 301, 382, 393
  Esscher, 301
  Laplace, 9
transient phenomena, 439, 471, 503
uniform convergence theorem, 2
upper-power function, 28, 224, 307, 358, 362
Wiener process, 75, 76, 472, 502, 503
Wiener–Lévy theorem, 55
ψ-(asymptotically) locally constant (ψ-l.c.) function, 18, 224

Boundary classes

Gx,K, 586
Gx,n, 155, 455
Gx,T,ε, 526
Gx,T, 526

Conditions and properties

[<], 551
[ · , <] ([ · , >], [ · , =]), 81, 234
[<, · ] ([>, · ], [=, · ]), 81
[<, <] ([>, <], [<, >], [=, =]), 81
[ · , ≶], 104
[ · , <]U, 483, 514
[ · , =]U, 514
[<, <]U ([<, =]U), 509
[ · , <]UR, 492
[A], 414
[A0+], 418
[A0,n], 323
[A0], 322, 418
[A1], 418
[A2], 419
[Am], 323
[An], 322
[B], 382
[C], 303
[C0], 392
[Ca], 391
[CA], 252
[CA], 253
[CA∗], 288
[D], 167, 218, 250
[D1], 223, 294
[DA], 172
[D(1,O(q))], 144
[D(1,q)], 144, 167, 197, 217, 362, 363
[D(2,q)], 198
[Dh,q], 143
[D(k,q)], 199
[DΩ], 403, 406
[D∗Ω], 407
[H], 458, 498
[HΔ], 462, 501
[HDΔ], 502
[HUR], 469
[N], 444, 485
[Q], 138, 454, 512
[QT], 551
[Q∗], 423
[Rα,ρ], 57
[S], 466
[U], 137, 442, 483
[U1], [U2], 442, 483
[U∞], 443, 483
[UR], 464, 492
[ΦT], 551

Distribution classes

C, 371
ER, 307
ESe, 308
L, 16
Ms, 391
R, 11, 15, 371
R(α), 11
S, 13, 14
S(α), 38
S+, 14
Sloc, 47
Se, 371
Se(α), 29
S∗, 347, 358