
J. Korean Math. Soc. 57 (2020), No. 6, pp. 1485–1508

https://doi.org/10.4134/JKMS.j190756

pISSN: 0304-9914 / eISSN: 2234-3008

COMPLETE f-MOMENT CONVERGENCE FOR EXTENDED NEGATIVELY DEPENDENT RANDOM VARIABLES UNDER SUB-LINEAR EXPECTATIONS

Chao Lu, Rui Wang, Xuejun Wang, and Yi Wu

Abstract. In this paper, we investigate the complete f-moment convergence for extended negatively dependent (END, for short) random variables under sub-linear expectations. We extend some results on complete f-moment convergence from the classical probability space to the sub-linear expectation space. As applications, we present some corollaries on complete moment convergence for END random variables under sub-linear expectations.

1. Introduction

It is well known that complete convergence and complete moment convergence play important roles in many applications, such as probability limit theory and mathematical statistics, especially in the strong law of large numbers and the strong convergence rate for partial sums of random variables. The concept of complete convergence was introduced by Hsu and Robbins [10] as follows. A sequence {X_n, n ≥ 1} of random variables is said to converge completely to the constant θ, if for any ε > 0,

∑_{n=1}^∞ P(|X_n − θ| > ε) < ∞.

By the Borel–Cantelli lemma, this implies that X_n → θ almost surely, and the converse is true if {X_n, n ≥ 1} are independent random variables.
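As a small numeric illustration (not part of the original paper), take tail probabilities P(|X_n − θ| > ε) = 1/n², a summable sequence, so that {X_n} converges completely to θ; the hypothetical Python sketch below checks that the partial sums of the series stay bounded by π²/6:

```python
import math

# Toy check of complete convergence: if the tail probabilities
# P(|X_n - theta| > eps) = 1/n^2 are summable, the series in the
# definition above is finite, so X_n converges completely to theta.
def tail_prob(n: int) -> float:
    """Hypothetical tail probability P(|X_n - theta| > eps)."""
    return 1.0 / n**2

partials = [sum(tail_prob(k) for k in range(1, n + 1)) for n in (10, 100, 10000)]
# The partial sums increase but stay below the limit pi^2/6 ~ 1.6449.
assert partials[0] < partials[1] < partials[2] < math.pi**2 / 6
```

Any summable tail sequence would serve equally well; 1/n² is chosen only because its limit π²/6 is known in closed form.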

Chow [7] introduced a more general concept on the basis of complete convergence: complete moment convergence, which is defined as follows.

Received November 12, 2019; Revised January 7, 2020; Accepted June 2, 2020.

2010 Mathematics Subject Classification. 60F15.
Key words and phrases. END random variables, complete convergence, complete moment convergence, complete f-moment convergence, sub-linear expectations.
Supported by the National Natural Science Foundation of China (11671012, 11871072, 11701004, 11701005), the Natural Science Foundation of Anhui Province (1808085QA03, 1908085QA01, 1908085QA07), the Provincial Natural Science Research Project of Anhui Colleges (KJ2019A0001, KJ2019A0003) and the Project on Reserve Candidates for Academic and Technical Leaders of Anhui Province (2017H123).

©2020 Korean Mathematical Society


Definition 1.1. Let {X_n, n ≥ 1} be a sequence of random variables and a_n > 0, b_n > 0, q > 0. If for any ε > 0,

∑_{n=1}^∞ a_n E{b_n^{−1}|X_n| − ε}_+^q < ∞,

where a_+ = max{0, a}, then {X_n, n ≥ 1} is said to be complete q-th moment convergent.

Many scholars have studied the complete moment convergence for sums or weighted sums of independent or dependent random variables. Li [13] researched precise asymptotics in complete moment convergence of moving-average processes; Liang et al. [14] studied complete moment and integral convergence for sums of negatively associated (NA, for short) random variables; Wu et al. [22] obtained the complete moment convergence for weighted sums of weakly dependent random variables and gave an application in nonparametric regression models; Yan [26] established the complete convergence and complete moment convergence for the maximum of weighted sums of extended negatively dependent (END, for short) random variables; Chen and Sung [3] studied the complete convergence and complete moment convergence for weighted sums of ρ∗-mixing random variables.

Recently, Wu et al. [23] introduced the concept of complete f-moment convergence, which is stronger than complete moment convergence, as follows.

Definition 1.2. Let {S_n, n ≥ 1} be a sequence of random variables, {a_n, n ≥ 1} be a sequence of positive constants and f: R^+ → R^+ be a nondecreasing function with f(0) = 0. Then {S_n, n ≥ 1} is said to converge f-moment completely, if

∑_{n=1}^∞ a_n E[f({|S_n| − ε}_+)] < ∞ for any ε > 0.
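To make the positive-part notation {x}_+ = max{0, x} concrete, the following hypothetical Python sketch computes one summand E[f({|S| − ε}_+)] of the series in Definition 1.2 exactly, for a discrete random variable S and f(t) = t² (all distributional values are invented for illustration):

```python
# One summand of the series in Definition 1.2: E[f({|S| - eps}_+)]
# with f(t) = t^2, computed exactly for a discrete random variable S.
def pos_part(x: float) -> float:
    """{x}_+ = max{0, x}."""
    return max(0.0, x)

f = lambda t: t**2      # nondecreasing on R^+ with f(0) = 0
eps = 0.5
dist = {-2.0: 0.25, 0.2: 0.5, 1.5: 0.25}   # hypothetical law of S

moment = sum(p * f(pos_part(abs(s) - eps)) for s, p in dist.items())
# Only atoms with |s| > eps contribute: 0.25*1.5**2 + 0.25*1.0**2 = 0.8125.
assert abs(moment - 0.8125) < 1e-12
```

The atom at 0.2 contributes nothing because |0.2| − 0.5 < 0, which is exactly the truncation that the positive part performs.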

Since Wu et al. [23] introduced the concept of complete f-moment convergence, many authors have devoted themselves to studying probability limit theories for complete f-moment convergence and have obtained many interesting results. For example, Wu et al. [23] established some results on complete f-moment convergence for sums of arrays of rowwise END random variables under classical linear expectations; Lu et al. [16] studied complete f-moment convergence for widely orthant dependent (WOD, for short) random variables and gave its application in nonparametric models, and so on.

But in practice, many uncertain phenomena do not satisfy the assumption of additivity of the expectation. In this context, Peng [17–20] introduced the notion of sub-linear expectations to extend the classical linear expectations, and established the general theoretical framework of the sub-linear expectation space. This paper aims to obtain the complete f-moment convergence for sums of arrays of rowwise END random variables in the sub-linear expectation space.


In the following, we will introduce some basic knowledge about sub-linear expectations and END random variables. Let (Ω, F) be a measurable space, and let M be the set of all probability measures on (Ω, F). Every nonempty subset P ⊂ M defines an upper probability V(A) := sup_{P∈P} P(A) and a lower probability v(A) := inf_{P∈P} P(A) for any A ∈ F, and the pair (V, v) satisfies the following properties:

(1) V(∅) = v(∅) = 0, V(Ω) = v(Ω) = 1;
(2) Monotonicity: V(A) ≤ V(B) and v(A) ≤ v(B) for any A, B ∈ F with A ⊆ B;
(3) Conjugacy: V(A) + v(A^c) = 1 for any A ∈ F;
(4) Continuity from below: if A_n, A ∈ F and A_n ↑ A, then V(A_n) ↑ V(A);
(5) Continuity from above: if A_n, A ∈ F and A_n ↓ A, then v(A_n) ↓ v(A).

The corresponding pair of upper expectation E[·] and lower expectation ε[·] generated by P are given as follows: for each F-measurable real random variable X such that E_P(X) exists for each P ∈ P,

(1.1) E[X] := sup_{P∈P} E_P(X), ε[X] := inf_{P∈P} E_P(X).

It is easy to check that ε[X] = −E[−X] and ε[X] ≤ E[X]. The triple (Ω, F, E) is called a sub-linear expectation space, and E[X] is a sub-linear expectation on (Ω, F); that is, for all F-measurable real random variables X and Y, E[·] has the following properties:

(a) Monotonicity: if X ≥ Y, then E[X] ≥ E[Y];
(b) Constant preserving: E[c] = c;
(c) Sub-additivity: E[X + Y] ≤ E[X] + E[Y];
(d) Positive homogeneity: E[λX] = λE[X], ∀λ ≥ 0.

From the definition, it is easily shown that E[X + c] = E[X] + c and E[X − Y] ≥ E[X] − E[Y] for all X, Y ∈ F with E[Y] finite. Furthermore, if E[|X|] is finite, then ε[X] and E[X] are both finite. Next, we define the Choquet integrals/expectations (C_V, C_v) by

(1.2) C_V[X] = ∫_0^∞ V(X > x)dx + ∫_{−∞}^0 (V(X > x) − 1)dx,

with V being replaced by V and v, respectively. In this paper, we will use V = V.
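As a hedged illustration of (1.1) (not from the paper), the Python sketch below generates upper and lower expectations from a finite family P of two probability measures on a three-point space and checks the conjugacy ε[X] = −E[−X] and the sub-additivity property (c); all numbers are hypothetical:

```python
# Upper and lower expectations (1.1) generated by a finite family P of
# probability measures on a three-point sample space {0, 1, 2}.
# All measures and payoffs below are hypothetical.
P = [
    {0: 0.2, 1: 0.3, 2: 0.5},
    {0: 0.5, 1: 0.4, 2: 0.1},
]

def upper(X):
    """E[X] = sup over P of E_P(X)."""
    return max(sum(p[w] * X[w] for w in p) for p in P)

def lower(X):
    """epsilon[X] = inf over P of E_P(X)."""
    return min(sum(p[w] * X[w] for w in p) for p in P)

X = {0: -1.0, 1: 0.0, 2: 2.0}
Y = {0: 3.0, 1: -2.0, 2: 1.0}
neg_X = {w: -v for w, v in X.items()}
X_plus_Y = {w: X[w] + Y[w] for w in X}

assert abs(lower(X) + upper(neg_X)) < 1e-12            # epsilon[X] = -E[-X]
assert lower(X) <= upper(X)                            # epsilon[X] <= E[X]
assert upper(X_plus_Y) <= upper(X) + upper(Y) + 1e-12  # sub-additivity (c)
```

Sub-additivity holds because each E_P is linear, so E_P(X + Y) = E_P(X) + E_P(Y) ≤ sup E_P(X) + sup E_P(Y), and taking the supremum over P on the left preserves the bound.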

From (1.1) and (1.2), we have E[|X|] ≤ C_V[|X|] for any X ∈ F.

Because the sub-linear expectation provides a very flexible framework for modeling sub-linear probability problems, limit theorems under sub-linear expectations have received more and more attention, and some interesting results have been established. Peng [17] researched the monotonic limit theorem of BSDEs and obtained some results on the nonlinear decomposition theorem of Doob–Meyer type; Peng [18] studied the G-expectation, G-Brownian motion and the related stochastic calculus of Itô type; Chen and Hu [5] obtained a law of the iterated logarithm under sub-linear expectations; Chen [4] studied the strong law of large numbers under sub-linear expectations; Hu [11] proved a strong law of large numbers under sub-linear expectations based on a general moment condition; Zhang [27–29] established exponential inequalities, Rosenthal's inequalities and laws of large numbers, respectively; Wu and Jiang [21] gave suitable conditions for the strong law of large numbers and obtained Chover's law of the iterated logarithm under sub-linear expectations; Wu et al. [24] studied the asymptotic approximation of inverse moments under sub-linear expectations; Filinkov and Elliott [9] studied non-linear expectations in spaces of Colombeau generalized functions; Xu and Zhang [25] obtained three series theorems for independent random variables under sub-linear expectations and gave some applications; and so on.

Inspired by Definition 1.2, we introduce the definition of complete f-moment convergence under sub-linear expectations as follows.

Definition 1.3. Let {S_n, n ≥ 1} be a sequence of random variables, {a_n, n ≥ 1} be a sequence of positive constants and f: R^+ → R^+ be a nondecreasing function with f(0) = 0. Then {S_n, n ≥ 1} is said to converge f-moment completely under sub-linear expectations, if for any ε > 0,

∑_{n=1}^∞ a_n C_V[f({|S_n| − ε}_+)] < ∞.

In this work, we will study the complete f-moment convergence for END random variables under sub-linear expectations. We first recall the concept of END random variables under sub-linear expectations.

Definition 1.4. A sequence of random variables {X_k, k ≥ 1} is said to be END in (Ω, F, E), if for every sequence of nonnegative functions f_k(x) with the same monotonicity on R and E[f_k(X_k)] < ∞ for any k ≥ 1, there exists a positive constant M ≥ 1 such that

(1.3) E[∏_{k=1}^n f_k(X_k)] ≤ M ∏_{k=1}^n E[f_k(X_k)]

for any n ≥ 1.

An array {X_nk, 1 ≤ k ≤ n, n ≥ 1} of random variables is said to be rowwise END in (Ω, F, E), if each row {X_nk, 1 ≤ k ≤ n} of the array is END in (Ω, F, E) with the same constant M in (1.3) for any n ≥ 1.
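As a minimal sanity check of (1.3) (an illustration, not the paper's method): when the family P consists of a single measure and X_1, X_2 are independent under it, the expectation of a product of monotone transforms factorizes, so the END inequality holds with M = 1. A hypothetical Python sketch:

```python
import math
from itertools import product

# Sanity check of (1.3): under a single linear expectation, independent
# X_1, X_2 give E[f_1(X_1) f_2(X_2)] = E[f_1(X_1)] E[f_2(X_2)], i.e. the
# END inequality with M = 1.  The laws and functions are hypothetical.
law1 = {0.0: 0.5, 1.0: 0.5}        # law of X_1
law2 = {-1.0: 0.3, 2.0: 0.7}       # law of X_2, independent of X_1
f1 = lambda x: math.exp(x)         # nonnegative, increasing
f2 = lambda x: math.exp(0.5 * x)   # nonnegative, increasing

lhs = sum(p * q * f1(x) * f2(y)
          for (x, p), (y, q) in product(law1.items(), law2.items()))
rhs = (sum(p * f1(x) for x, p in law1.items())
       * sum(q * f2(y) for y, q in law2.items()))
assert abs(lhs - rhs) < 1e-12      # (1.3) holds with M = 1
```

The point of Definition 1.4 is that END relaxes this exact factorization to an inequality up to a uniform constant M, for all monotone nonnegative transforms simultaneously.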

Remark 1.1. In Definition 1.4, if M is replaced by g(n), where g(n) is a finite real number, then we get the definition of widely negatively dependent (WND, for short) random variables in (Ω, F, E), which can be found in Lin and Feng [15].


The definition of stochastic domination below will be used throughout the paper. We first introduce the definition of stochastic domination in the classical probability space.

Definition 1.5. An array {X_nk, 1 ≤ k ≤ n, n ≥ 1} of random variables is said to be stochastically dominated by a random variable X, if there exists a positive constant C such that

(1.4) P(|X_nk| > x) ≤ CP(|X| > x) for any x ≥ 0, n ≥ 1 and 1 ≤ k ≤ n.

Inspired by Definition 1.5, we introduce the notion of stochastic domination in (Ω, F, E).

Remark 1.2. Taking sup_{P∈P} on both sides of (1.4) in Definition 1.5, we can get

(1.5) V(|X_nk| > x) ≤ CV(|X| > x) for any x ≥ 0, n ≥ 1 and 1 ≤ k ≤ n.

Zhang [29] provides the definition of countable sub-additivity as follows.

Definition 1.6. A sub-linear expectation E is said to be countably sub-additive if it satisfies

E[X] ≤ ∑_{n=1}^∞ E[X_n]

whenever X ≤ ∑_{n=1}^∞ X_n, X, X_n ∈ F and X ≥ 0, X_n ≥ 0, n = 1, 2, . . ..

Remark 1.3. If E is defined by (1.1), then it is automatically countably sub-additive, because each probability measure P is countably sub-additive. Thus, E defined by (1.1) satisfies the condition of countable sub-additivity.

The last definition can be found in Denis et al. [8].

Definition 1.7. A set D is a polar set if V(D) = 0, and a property holds quasi-surely (q.s., for short) if it holds outside a polar set.

The organization of the paper is as follows. Some lemmas are stated in Section 2. Main results and their proofs are provided in Section 3.

Throughout this paper, C denotes a positive constant not depending on n, which may differ in various places. Let I(A) be the indicator function of the set A and {k_n, n ≥ 1} be a sequence of positive integers. We denote x_+ = max{0, x}, and ⌊x⌋ is the integer part of x.

2. Preliminary lemmas

In this section, we provide some lemmas needed to prove our main results. The first one is a basic property of END random variables. The proof is similar to that of Lin and Feng [15].


Lemma 2.1. Suppose that {X_k, k ≥ 1} is a sequence of END random variables in (Ω, F, E), and {f_k(·), k ≥ 1} is a sequence of measurable functions with the same monotonicity. Then {f_k(X_k), k ≥ 1} is also a sequence of END random variables in (Ω, F, E).

The following lemma shows that some important inequalities in the classical probability space still hold in (Ω, F, E). The proof can be found in Chen et al. [6].

Lemma 2.2. Let X and Y be real measurable random variables in (Ω, F, E).

(1) Hölder's inequality: for p, q > 1 with 1/p + 1/q = 1, we have E[|XY|] ≤ (E[|X|^p])^{1/p}(E[|Y|^q])^{1/q}.

(2) Jensen's inequality: let f(x) be a convex function on R and suppose that E[X] and E[f(X)] exist; then E[f(X)] ≥ f(E[X]).

(3) Chebyshev's inequality: let f(x) > 0 be a nondecreasing function on R; then for any x > 0, V(X ≥ x) ≤ E[f(X)]/f(x).
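Lemma 2.2(3) can be checked numerically: since the pointwise bound I(X ≥ x) ≤ f(X)/f(x) holds under every P ∈ P, it survives taking suprema. The hypothetical Python sketch below verifies the inequality for a two-measure family and f(t) = e^t (all numbers invented for illustration):

```python
import math

# Numeric check of Lemma 2.2(3): V(X >= x) <= E[f(X)]/f(x) for a
# positive nondecreasing f, where V and E are generated by a (purely
# hypothetical) two-measure family on {0, 1, 2}.
P = [
    {0: 0.6, 1: 0.3, 2: 0.1},
    {0: 0.1, 1: 0.2, 2: 0.7},
]
X = {0: -1.0, 1: 0.5, 2: 3.0}
f = math.exp                       # f > 0 and nondecreasing on R

def upper_prob(event):
    """V(A) = sup over P of P(A)."""
    return max(sum(p[w] for w in p if event(w)) for p in P)

def upper_exp(Z):
    """E[Z] = sup over P of E_P(Z)."""
    return max(sum(p[w] * Z[w] for w in p) for p in P)

fX = {w: f(v) for w, v in X.items()}
for x in (0.25, 1.0, 2.5):
    assert upper_prob(lambda w: X[w] >= x) <= upper_exp(fX) / f(x) + 1e-12
```

This is exactly the mechanism used repeatedly in Section 3, with f(t) = t for the truncated-moment bounds and f(t) = e^{ht} in the exponential inequality.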

The next one is an exponential inequality for the upper probability V. The proof is similar to that of Theorem 3.1 in Zhang [27], so the details are omitted.

Lemma 2.3. Let {X_k, k ≥ 1} be a sequence of END random variables in (Ω, F, E) with E[X_k] ≤ 0 and E[X_k^2] < ∞ for each k ≥ 1. Let B_n = ∑_{k=1}^n E[X_k^2] for each n ≥ 1. Then for any x > 0 and y > 0,

(2.1) V(∑_{k=1}^n X_k > x) ≤ V(max_{1≤k≤n} |X_k| > y) + M exp{x/y − (x/y)ln(1 + xy/B_n)}.

By the definition and properties of sub-linear expectations, we can get the following lemma.

Lemma 2.4. Let X and Y be two random variables in (Ω, F, E). Then |E[X] − E[Y]| ≤ E[|X − Y|].

The following one is a basic property for stochastic domination under sub-linear expectations.

Lemma 2.5. Let {X_nk, n ≥ 1, k ≥ 1} be an array of random variables which is stochastically dominated by a random variable X in (Ω, F, E). For any a > 0 and b > 0, the following two statements hold:

(2.2) E[|X_nk|^a I(|X_nk| ≤ b)] ≤ C_1{E[|X|^a I(|X| ≤ b)] + b^a V(|X| > b)},
(2.3) E[|X_nk|^a I(|X_nk| > b)] ≤ C_2 E[|X|^a I(|X| > b)],

where C_1 and C_2 are positive constants. Consequently, E[|X_nk|^a] ≤ CE[|X|^a], where C is a positive constant.

Proof. According to Lemma 1 of Adler and Rosalsky [1] and Lemma 3 of Adler et al. [2], we can get

(2.4) E_P[|X_nk|^a I(|X_nk| ≤ b)] ≤ C_1[E_P[|X|^a I(|X| ≤ b)] + b^a P(|X| > b)],
(2.5) E_P[|X_nk|^a I(|X_nk| > b)] ≤ C_2 E_P[|X|^a I(|X| > b)].

Taking sup_{P∈P} on both sides of (2.4) and (2.5), we get (2.2) and (2.3), respectively.

The next lemma is important to prove Lemma 2.7.

Lemma 2.6. Suppose that {X_k, k ≥ 1} is a sequence of END random variables in (Ω, F, E) and sup_{k≥1} |X_k| ≤ y q.s. for some constant 0 < y < ∞. Then for all x > 0, n ≥ 1 and 0 < p ≤ 2,

(2.6) V(∑_{k=1}^n (X_k − E[X_k]) > x) ≤ M exp{−(x/(2y))ln(1 + y^{p−1}x/∑_{k=1}^n E[|X_k − E[X_k]|^p]) + x/(2^{p−1}y)}.

Proof. The proof of (2.6) is similar to that of Lemma 2.3 in Hu et al. [12]. For all n ∈ N, we denote S_n = ∑_{k=1}^n X_k and g_n(x) = x − E[X_n], x ∈ R. Then by Lemma 2.1, {g_n(X_n), n ∈ N} is also a sequence of END random variables, since g_n(x), n ∈ N, are all increasing functions. By Definition 1.4, taking f_k(x) = e^{tx}, where t > 0, we have

(2.7) E[∏_{k=1}^n e^{tX_k}] ≤ M ∏_{k=1}^n E[e^{tX_k}] for any n ≥ 1.

By Lemma 2.2(3) and (2.7), for any n ≥ 1, x > 0 and arbitrary h > 0, we have

V(∑_{k=1}^n (X_k − E[X_k]) > x) ≤ e^{−hx} E[e^{h∑_{k=1}^n (X_k − E[X_k])}] = e^{−hx} E[∏_{k=1}^n e^{h(X_k − E[X_k])}] ≤ M e^{−hx} ∏_{k=1}^n E[e^{h(X_k − E[X_k])}].


Then by the inequalities u ≤ e^{u−1} and e^u − 1 − u ≤ 2(cosh u − 1) for u ∈ R, we have

(2.8) V(∑_{k=1}^n (X_k − E[X_k]) > x)
≤ M e^{−hx} ∏_{k=1}^n E[e^{h(X_k − E[X_k])}]
≤ M e^{−hx} ∏_{k=1}^n exp{E[e^{h(X_k − E[X_k])}] − 1}
= M e^{−hx} ∏_{k=1}^n exp{E[e^{h(X_k − E[X_k])} − 1] − E[h(X_k − E[X_k])]}
≤ M e^{−hx} ∏_{k=1}^n exp{E[e^{h(X_k − E[X_k])} − 1 − h(X_k − E[X_k])]}
≤ M e^{−hx} ∏_{k=1}^n exp{2E[cosh(h(X_k − E[X_k])) − 1]}
= M exp{−hx + 2∑_{k=1}^n E[cosh(h(X_k − E[X_k])) − 1]}.

Define the function l(u) by

l(u) = (cosh u − 1)|u|^{−p} if u ≠ 0; l(0) = 1/2 if p = 2; l(0) = 0 if p ≠ 2.

Thus, l(u) is continuous, even, and increasing on the positive semi-axis. Since sup_{k≥1} |X_k| ≤ y q.s., we have |X_k − E[X_k]| ≤ 2y q.s., and hence l(h(X_k − E[X_k])) = l(h|X_k − E[X_k]|) ≤ l(2hy) q.s.. That is,

cosh(h(X_k − E[X_k])) − 1 ≤ (cosh(2hy) − 1)(2y)^{−p}|X_k − E[X_k]|^p q.s., k ∈ N.

We apply this inequality to (2.8) to obtain

V(∑_{k=1}^n (X_k − E[X_k]) > x)
≤ M exp{−hx + 2∑_{k=1}^n E[cosh(h(X_k − E[X_k])) − 1]}
≤ M exp{−hx + 2∑_{k=1}^n (cosh(2hy) − 1)(2y)^{−p} E[|X_k − E[X_k]|^p]}
≤ M exp{−hx + 2∑_{k=1}^n (exp(2hy) − 1)(2y)^{−p} E[|X_k − E[X_k]|^p]}, x > 0, n ∈ N.

The rest of the proof is similar to that of Hu et al. [12], so the details are omitted. This completes the proof of the lemma.

With Lemma 2.6 in hand, we can get the following result, whose proof is similar to that of Lin and Feng [15]; hence, we omit the details.

Lemma 2.7. Let {X_nk, 1 ≤ k ≤ k_n, n ≥ 1} be an array of rowwise END random variables in (Ω, F, E), and {a_n, n ≥ 1} be a sequence of positive constants. Suppose that the following two conditions hold:

(i) for any θ > 0,

(2.9) ∑_{n=1}^∞ a_n ∑_{k=1}^{k_n} V(|X_nk| > θ) < ∞;

(ii) there exist some constants η > 0, 0 < p ≤ 2 and δ > 0 such that

(2.10) ∑_{n=1}^∞ a_n (∑_{k=1}^{k_n} E[|X_nk I(|X_nk| ≤ δ) − E[X_nk I(|X_nk| ≤ δ)]|^p])^η < ∞.

Then for any ε > 0,

(2.11) ∑_{n=1}^∞ a_n V(∑_{k=1}^{k_n} (X_nk − E[X_nk I(|X_nk| ≤ δ)]) > ε) < ∞.

3. Main results

With preliminaries accounted for, we can provide the main results as follows.

Theorem 3.1. Let {X_nk, 1 ≤ k ≤ k_n, n ≥ 1} be an array of rowwise END random variables in (Ω, F, E), {a_n, n ≥ 1} be a sequence of positive constants, f: R^+ → R^+ be an increasing function with f(0) = 0 and η ≥ 1 be a constant. Suppose that the following conditions hold:

(a) ∑_{n=1}^∞ a_n ∑_{k=1}^{k_n} C_V[f(8η|X_nk|I(|X_nk| > θ))] < ∞ for any θ > 0;

(b) there exist some constants 0 < p ≤ 2 and δ > 0 such that

∑_{n=1}^∞ a_n (∑_{k=1}^{k_n} E[|X_nk I(|X_nk| ≤ δ) − E[X_nk I(|X_nk| ≤ δ)]|^p])^η < ∞;

(c) ∑_{k=1}^{k_n} E[|X_nk|I(|X_nk| > δ/(16η))] → 0 as n → ∞;

(d) let g: R^+ → R^+ be the inverse function of f, that is, g(f(t)) = t for t ≥ 0, and let s(t) = max_{δ≤x≤g(t)} x/f(x); assume that the constants η and δ and the function f satisfy

∫_{f(δ)}^∞ g^{−η}(t)s(t)dt < ∞.


Then the sequence ∑_{k=1}^{k_n} (X_nk − E[X_nk I(|X_nk| ≤ δ)]) converges f-moment completely in (Ω, F, E), that is, for any ε > 0,

(3.1) ∑_{n=1}^∞ a_n C_V[f({∑_{k=1}^{k_n} (X_nk − E[X_nk I(|X_nk| ≤ δ)]) − ε}_+)] < ∞.

Proof. For n ≥ 1, we denote S_n = ∑_{k=1}^{k_n} (X_nk − E[X_nk I(|X_nk| ≤ δ)]). It can be checked that for any ε > 0,

∑_{n=1}^∞ a_n C_V[f({S_n − ε}_+)] = ∑_{n=1}^∞ a_n ∫_0^∞ V(f({S_n − ε}_+) > t)dt
≤ ∑_{n=1}^∞ a_n ∫_0^∞ V(S_n > ε + g(t))dt
= ∑_{n=1}^∞ a_n ∫_0^{f(δ)} V(S_n > ε + g(t))dt + ∑_{n=1}^∞ a_n ∫_{f(δ)}^∞ V(S_n > ε + g(t))dt
:= I_1 + I_2.

Noting that the function f is increasing and η ≥ 1, we can get by condition (a) that

(3.2) ∑_{n=1}^∞ a_n ∑_{k=1}^{k_n} C_V[f(|X_nk|I(|X_nk| > θ))] < ∞ for any θ > 0.

By Lemma 2.2(3) and (3.2), we have

(3.3) ∑_{n=1}^∞ a_n ∑_{k=1}^{k_n} V(|X_nk| > θ) ≤ ∑_{n=1}^∞ a_n ∑_{k=1}^{k_n} V(f(|X_nk|I(|X_nk| > θ)) > f(θ))
≤ (1/f(θ)) ∑_{n=1}^∞ a_n ∑_{k=1}^{k_n} E[f(|X_nk|I(|X_nk| > θ))]
≤ (1/f(θ)) ∑_{n=1}^∞ a_n ∑_{k=1}^{k_n} C_V[f(|X_nk|I(|X_nk| > θ))] < ∞.

Hence, the conditions of Lemma 2.7 are satisfied, and we obtain by Lemma 2.7 that

(3.4) I_1 ≤ f(δ) ∑_{n=1}^∞ a_n V(S_n > ε) < ∞.


Next, we prove that I_2 < ∞. Obviously,

I_2 ≤ ∑_{n=1}^∞ a_n ∫_{f(δ)}^∞ V(S_n > g(t), ⋃_{k=1}^{k_n} {|X_nk| > g(t)})dt + ∑_{n=1}^∞ a_n ∫_{f(δ)}^∞ V(S_n > g(t), ⋂_{k=1}^{k_n} {|X_nk| ≤ g(t)})dt
≤ ∑_{n=1}^∞ a_n ∫_{f(δ)}^∞ V(⋃_{k=1}^{k_n} {|X_nk| > g(t)})dt
+ ∑_{n=1}^∞ a_n ∫_{f(δ)}^∞ V(∑_{k=1}^{k_n} (X_nk I(|X_nk| ≤ g(t)) − E[X_nk I(|X_nk| ≤ δ)]) > g(t))dt
:= I_3 + I_4.

For I_3, it follows from (3.2) that

I_3 ≤ ∑_{n=1}^∞ a_n ∑_{k=1}^{k_n} ∫_{f(δ)}^∞ V(|X_nk| > g(t))dt
≤ ∑_{n=1}^∞ a_n ∑_{k=1}^{k_n} ∫_0^∞ V(f(|X_nk|I(|X_nk| > δ)) > t)dt
= ∑_{n=1}^∞ a_n ∑_{k=1}^{k_n} C_V[f(|X_nk|I(|X_nk| > δ))] < ∞.

To prove I_2 < ∞, we only need to show I_4 < ∞. For fixed n ≥ 1, 1 ≤ k ≤ k_n and t ≥ f(δ), denote

Y_nk = −g(t)I(X_nk < −g(t)) + X_nk I(|X_nk| ≤ g(t)) + g(t)I(X_nk > g(t)),
Z_nk = g(t)I(X_nk < −g(t)) − g(t)I(X_nk > g(t)).

It is easy to see that {Y_nk − E[Y_nk], 1 ≤ k ≤ k_n, n ≥ 1} is still an array of rowwise END random variables by Lemma 2.1. By condition (c), we have

max_{t≥f(δ)} (1/g(t)) ∑_{k=1}^{k_n} E[|X_nk|I(δ < |X_nk| ≤ g(t))] ≤ δ^{−1} ∑_{k=1}^{k_n} E[|X_nk|I(|X_nk| > δ)] → 0 as n → ∞.

Thus, for all n large enough and t ≥ f(δ), we have that

∑_{k=1}^{k_n} E[|X_nk|I(δ < |X_nk| ≤ g(t))] < g(t)/2,


which together with Lemma 2.4 yields that

(3.5) ∑_{k=1}^{k_n} (E[X_nk I(|X_nk| ≤ g(t))] − E[X_nk I(|X_nk| ≤ δ)])
≤ |∑_{k=1}^{k_n} (E[X_nk I(|X_nk| ≤ g(t))] − E[X_nk I(|X_nk| ≤ δ)])|
≤ ∑_{k=1}^{k_n} E[|X_nk|I(δ < |X_nk| ≤ g(t))] < g(t)/2.

Hence, we have for all n large enough that

(3.6) V(∑_{k=1}^{k_n} (X_nk I(|X_nk| ≤ g(t)) − E[X_nk I(|X_nk| ≤ δ)]) > g(t))
= V(∑_{k=1}^{k_n} (X_nk I(|X_nk| ≤ g(t)) − E[X_nk I(|X_nk| ≤ g(t))]) + ∑_{k=1}^{k_n} (E[X_nk I(|X_nk| ≤ g(t))] − E[X_nk I(|X_nk| ≤ δ)]) > g(t))
≤ V(∑_{k=1}^{k_n} (X_nk I(|X_nk| ≤ g(t)) − E[X_nk I(|X_nk| ≤ g(t))]) > g(t)/2)
= V(∑_{k=1}^{k_n} (Y_nk + Z_nk − E[Y_nk − (−Z_nk)]) > g(t)/2)
≤ V(∑_{k=1}^{k_n} (Y_nk − E[Y_nk]) > g(t)/4) + V(∑_{k=1}^{k_n} (Z_nk + E[−Z_nk]) > g(t)/4),

which implies that

(3.7) I_4 ≤ C ∑_{n=1}^∞ a_n ∫_{f(δ)}^∞ V(∑_{k=1}^{k_n} (Z_nk + E[−Z_nk]) > g(t)/4)dt
+ C ∑_{n=1}^∞ a_n ∫_{f(δ)}^∞ V(∑_{k=1}^{k_n} (Y_nk − E[Y_nk]) > g(t)/4)dt
:= I_5 + I_6.


For I_5, noting that |Z_nk| = g(t)I(|X_nk| > g(t)), by Lemma 2.2(3) and (3.2), we get

(3.8) I_5 = C ∑_{n=1}^∞ a_n ∫_{f(δ)}^∞ V(∑_{k=1}^{k_n} (Z_nk + E[−Z_nk]) > g(t)/4)dt
≤ C ∑_{n=1}^∞ a_n ∫_{f(δ)}^∞ V(|∑_{k=1}^{k_n} (Z_nk + E[−Z_nk])| > g(t)/4)dt
≤ C ∑_{n=1}^∞ a_n ∑_{k=1}^{k_n} ∫_{f(δ)}^∞ (1/g(t)) E[|Z_nk|]dt
= C ∑_{n=1}^∞ a_n ∑_{k=1}^{k_n} ∫_{f(δ)}^∞ V(|X_nk| > g(t))dt
≤ C ∑_{n=1}^∞ a_n ∑_{k=1}^{k_n} C_V[f(|X_nk|I(|X_nk| > δ))] < ∞.

For I_6, we have E[Y_nk − E[Y_nk]] = 0. Applying Lemma 2.3 with x = g(t)/4 and y = g(t)/(4η), where η satisfies condition (d), we obtain

(3.9) I_6 = C ∑_{n=1}^∞ a_n ∫_{f(δ)}^∞ V(∑_{k=1}^{k_n} (Y_nk − E[Y_nk]) > g(t)/4)dt
≤ C ∑_{n=1}^∞ a_n ∫_{f(δ)}^∞ V(max_{1≤k≤k_n} |Y_nk − E[Y_nk]| ≥ g(t)/(4η))dt
+ C ∑_{n=1}^∞ a_n ∫_{f(δ)}^∞ exp{η − η ln(1 + g^2(t)/(16ηB_n))}dt
:= I_7 + I_8,

where B_n = ∑_{k=1}^{k_n} E[(Y_nk − E[Y_nk])^2].

By condition (c), we get

∑_{k=1}^{k_n} V(|X_nk| > δ/(16η)) ≤ C ∑_{k=1}^{k_n} E[|X_nk|I(|X_nk| > δ/(16η))] → 0 as n → ∞.

Hence, for all n large enough, ∑_{k=1}^{k_n} V(|X_nk| > δ/(16η)) ≤ 1/(32η), which implies that

max_{t≥f(δ)} max_{1≤k≤k_n} (1/g(t))|E[Y_nk]|
≤ max_{t≥f(δ)} max_{1≤k≤k_n} {(1/g(t)) E[|X_nk|I(|X_nk| ≤ δ/(16η))] + (1/g(t)) E[|X_nk|I(δ/(16η) < |X_nk| ≤ g(t))] + V(|X_nk| > g(t))}
≤ δ^{−1} · δ/(16η) + 2 ∑_{k=1}^{k_n} V(|X_nk| > δ/(16η)) ≤ 1/(16η) + 2 · 1/(32η) = 1/(8η).

Therefore, by condition (a) and |Y_nk| ≤ |X_nk|, we have

(3.10) I_7 ≤ C ∑_{n=1}^∞ a_n ∫_{f(δ)}^∞ V(max_{1≤k≤k_n} |Y_nk| ≥ g(t)/(8η))dt
≤ C ∑_{n=1}^∞ a_n ∫_{f(δ)}^∞ V(max_{1≤k≤k_n} |X_nk| ≥ g(t)/(8η))dt
≤ C ∑_{n=1}^∞ a_n ∑_{k=1}^{k_n} ∫_{f(δ)}^∞ V(|X_nk| ≥ g(t)/(8η))dt
≤ C ∑_{n=1}^∞ a_n ∑_{k=1}^{k_n} C_V[f(8η|X_nk|I(|X_nk| > δ/(8η)))] < ∞.

Next, we will show I_8 < ∞. By Lemma 2.2(2), Lemma 2.4 and the C_r-inequality, we have

E[|Y_nk − E[Y_nk]|^2]
≤ CE[|X_nk I(|X_nk| ≤ g(t)) − E[X_nk I(|X_nk| ≤ g(t))]|^2
+ |E[X_nk I(|X_nk| ≤ g(t))] − E[−g(t)I(X_nk < −g(t)) + X_nk I(|X_nk| ≤ g(t)) + g(t)I(X_nk > g(t))]|^2
+ |−g(t)I(X_nk < −g(t)) + g(t)I(X_nk > g(t))|^2]
≤ CE[|X_nk I(|X_nk| ≤ g(t)) − E[X_nk I(|X_nk| ≤ g(t))]|^2] + Cg^2(t)V(|X_nk| > g(t)),

and

E[|X_nk I(|X_nk| ≤ g(t)) − E[X_nk I(|X_nk| ≤ g(t))]|^2]
= E[|X_nk I(|X_nk| ≤ δ) + X_nk I(δ < |X_nk| ≤ g(t)) − E[X_nk I(|X_nk| ≤ δ)] + E[X_nk I(|X_nk| ≤ δ)] − E[X_nk I(|X_nk| ≤ g(t))]|^2]
≤ CE[|X_nk I(|X_nk| ≤ δ) − E[X_nk I(|X_nk| ≤ δ)]|^2 + |E[X_nk I(|X_nk| ≤ δ)] − E[X_nk I(|X_nk| ≤ g(t))]|^2 + |X_nk I(δ < |X_nk| ≤ g(t))|^2]
≤ CE[|X_nk I(|X_nk| ≤ δ) − E[X_nk I(|X_nk| ≤ δ)]|^2] + CE[X_nk^2 I(δ < |X_nk| ≤ g(t))].

Thus, by the C_r-inequality, we have

(3.11) I_8 ≤ C ∑_{n=1}^∞ a_n ∫_{f(δ)}^∞ B_n^η g^{−2η}(t)dt
= C ∑_{n=1}^∞ a_n ∫_{f(δ)}^∞ g^{−2η}(t) (∑_{k=1}^{k_n} E[|Y_nk − E[Y_nk]|^2])^η dt
≤ C ∑_{n=1}^∞ a_n ∫_{f(δ)}^∞ g^{−2η}(t) (∑_{k=1}^{k_n} E[|X_nk I(|X_nk| ≤ δ) − E[X_nk I(|X_nk| ≤ δ)]|^2])^η dt
+ C ∑_{n=1}^∞ a_n ∫_{f(δ)}^∞ g^{−2η}(t) (∑_{k=1}^{k_n} E[X_nk^2 I(δ < |X_nk| ≤ g(t))])^η dt
+ C ∑_{n=1}^∞ a_n ∫_{f(δ)}^∞ (∑_{k=1}^{k_n} V(|X_nk| > g(t)))^η dt
:= I_9 + I_10 + I_11.

It is easily seen that 0 < g^{−η}(t) ≤ g^{−η}(f(δ)) = δ^{−η}, since g(t) is increasing and η ≥ 1. Observing that s(t) is nondecreasing, we have s(t) ≥ δ/f(δ). Thus, g^{−η}(t) ≤ δ^{−η−1}f(δ)s(t) and g^{−2η}(t) ≤ Cs(t)g^{−η}(t). To estimate I_9, noting that |X_nk I(|X_nk| ≤ δ) − E[X_nk I(|X_nk| ≤ δ)]| ≤ 2δ and 0 < p ≤ 2, we have by conditions (b) and (d) that

(3.12) I_9 ≤ C(2δ)^{(2−p)η} ∑_{n=1}^∞ a_n (∑_{k=1}^{k_n} E[|X_nk I(|X_nk| ≤ δ) − E[X_nk I(|X_nk| ≤ δ)]|^p])^η × ∫_{f(δ)}^∞ g^{−2η}(t)dt < ∞.


We next prove I_10 < ∞. It is easy to check that

I_10 ≤ C ∑_{n=1}^∞ a_n ∫_{f(δ)}^∞ g^{−2η}(t) (g(t) ∑_{k=1}^{k_n} E[|X_nk|I(δ < |X_nk| ≤ g(t))])^η dt.

It follows from condition (c) that for all n large enough,

∑_{k=1}^{k_n} E[|X_nk|I(δ < |X_nk| ≤ g(t))] ≤ ∑_{k=1}^{k_n} E[|X_nk|I(|X_nk| > δ)] ≤ ∑_{k=1}^{k_n} E[|X_nk|I(|X_nk| > δ/(16η))] < 1.

Noting that η ≥ 1, we have that for all n large enough,

(∑_{k=1}^{k_n} E[|X_nk|I(δ < |X_nk| ≤ g(t))])^η ≤ ∑_{k=1}^{k_n} E[|X_nk|I(δ < |X_nk| ≤ g(t))].

Therefore, we have by condition (d) and (3.2) that

(3.13) I_10 ≤ C ∑_{n=1}^∞ a_n ∫_{f(δ)}^∞ g^{−η}(t) ∑_{k=1}^{k_n} E[(|X_nk|I(δ < |X_nk| ≤ g(t)) / f(|X_nk|I(δ < |X_nk| ≤ g(t)))) × f(|X_nk|I(δ < |X_nk| ≤ g(t)))]dt
≤ C ∑_{n=1}^∞ a_n ∫_{f(δ)}^∞ g^{−η}(t)s(t) ∑_{k=1}^{k_n} E[f(|X_nk|I(δ < |X_nk| ≤ g(t)))]dt
≤ C ∑_{n=1}^∞ a_n ∑_{k=1}^{k_n} C_V[f(|X_nk|I(|X_nk| > δ))] ∫_{f(δ)}^∞ g^{−η}(t)s(t)dt < ∞.

At last, we only need to show I_11 < ∞. For t ≥ f(δ), it follows from Lemma 2.2(3) and condition (c) that

∑_{k=1}^{k_n} V(|X_nk| > g(t)) ≤ ∑_{k=1}^{k_n} V(|X_nk| > δ) ≤ δ^{−1} ∑_{k=1}^{k_n} E[|X_nk|I(|X_nk| > δ)] → 0 as n → ∞,

which implies that for all n large enough, ∑_{k=1}^{k_n} V(|X_nk| > g(t)) < 1. Hence, for all n large enough,

(∑_{k=1}^{k_n} V(|X_nk| > g(t)))^η ≤ ∑_{k=1}^{k_n} V(|X_nk| > g(t)).


Thus, we have by (3.2) again that

(3.14) I_11 ≤ C ∑_{n=1}^∞ a_n ∑_{k=1}^{k_n} ∫_{f(δ)}^∞ V(|X_nk| > g(t))dt ≤ C ∑_{n=1}^∞ a_n ∑_{k=1}^{k_n} C_V[f(|X_nk|I(|X_nk| > δ))] < ∞.

We obtain I_6 < ∞ from (3.9)–(3.14), and thus I_4 < ∞. This completes the proof of the theorem.

Obviously, the function f(t) = t^q, where t ≥ 0 and q > 0, satisfies the conditions of Theorem 3.1. So we can get the following corollary.

Corollary 3.1. Let q > 0, {X_nk, 1 ≤ k ≤ k_n, n ≥ 1} be an array of rowwise END random variables in (Ω, F, E), and {a_n, n ≥ 1} be a sequence of positive constants. Suppose that the following conditions hold:

(a) ∑_{n=1}^∞ a_n ∑_{k=1}^{k_n} C_V[|X_nk|^q I(|X_nk| > θ)] < ∞ for any θ > 0;

(b) there exist some constants η > max{1, q}, 0 < p ≤ 2 and δ > 0 such that

∑_{n=1}^∞ a_n (∑_{k=1}^{k_n} E[|X_nk I(|X_nk| ≤ δ) − E[X_nk I(|X_nk| ≤ δ)]|^p])^η < ∞;

(c) ∑_{k=1}^{k_n} E[|X_nk|I(|X_nk| > δ/(16η))] → 0 as n → ∞.

Then for any ε > 0,

(3.15) ∑_{n=1}^∞ a_n C_V[{∑_{k=1}^{k_n} (X_nk − E[X_nk I(|X_nk| ≤ δ)]) − ε}_+^q] < ∞.

Proof. By Theorem 3.1, we only need to verify condition (d). Since $f(t) = t^q$ with $q > 0$, we get $g(t) = t^{1/q}$.

When $q > 1$, noting that $\eta > \max\{1,q\}$, we know $\eta > q > 1$. Hence we have
\[
s(t) = \max_{\delta \le x \le g(t)} \frac{x}{x^q} = \max_{\delta \le x \le g(t)} x^{1-q} = \delta^{1-q},
\]
which yields that
\[
\int_{f(\delta)}^{\infty} g^{-\eta}(t)s(t)\,dt = \delta^{1-q}\int_{f(\delta)}^{\infty} t^{-\eta/q}\,dt < \infty.
\]
When $q = 1$, we have $s(t) = 1$, and we can get the same result as in the case $q > 1$. When $q < 1$, we know $\eta > 1 > q$, and thus
\[
s(t) = \max_{\delta \le x \le g(t)} \frac{x}{x^q} = \max_{\delta \le x \le g(t)} x^{1-q} = t^{\frac{1-q}{q}},
\]
which implies that
\[
\int_{f(\delta)}^{\infty} g^{-\eta}(t)s(t)\,dt = \int_{f(\delta)}^{\infty} t^{\frac{1-\eta}{q}-1}\,dt < \infty.
\]


From the statements above, condition (d) in Theorem 3.1 has been verified. This completes the proof of the corollary.
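For the case $q < 1$, the convergence of the last integral rests on a one-line exponent computation, expanded here for convenience (this only unpacks a step already contained in the proof above):

```latex
g^{-\eta}(t)\,s(t) = t^{-\eta/q}\cdot t^{\frac{1-q}{q}}
= t^{\frac{1-q-\eta}{q}} = t^{\frac{1-\eta}{q}-1},
\qquad\text{and}\qquad \frac{1-\eta}{q}-1 < -1 \iff \eta > 1,
```

so $\int_{f(\delta)}^{\infty} t^{\frac{1-\eta}{q}-1}\,dt < \infty$ precisely because $\eta > 1$; in the case $q > 1$, the same role is played by $\eta/q > 1$, i.e., $\eta > q$.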

By Corollary 3.1, we can get the following corollary.

Corollary 3.2. Let $q > 0$, let $\{X_{nk}, 1 \le k \le k_n, n \ge 1\}$ be an array of rowwise END random variables in $(\Omega,\mathcal{F},\mathbb{E})$, and let $\{a_n, n \ge 1\}$ be a sequence of positive constants. Suppose that the following conditions hold:

(a) $\sum_{n=1}^{\infty} a_n \sum_{k=1}^{k_n} C_{\mathbb{V}}[|X_{nk}|^q I(|X_{nk}|>\theta)] < \infty$ for any $\theta > 0$;

(b) there exist constants $\eta > \max\{1,q\}$, $0 < p \le 2$ and $\delta > 0$ such that
\[
\sum_{n=1}^{\infty} a_n \Big(\sum_{k=1}^{k_n} \mathbb{E}\big[\big|X_{nk}I(|X_{nk}|\le\delta) - \mathbb{E}[X_{nk}I(|X_{nk}|\le\delta)]\big|^p\big]\Big)^{\eta} < \infty;
\]

(c) $\sum_{k=1}^{k_n} \mathbb{E}\big[|X_{nk}|I\big(|X_{nk}| > \frac{\delta}{16\eta}\big)\big] \to 0$ as $n\to\infty$;

(d) there exists a constant $\delta_1 \ge \frac{\delta}{16\eta}$ such that
\[
\sum_{k=1}^{k_n}\mathbb{E}[X_{nk}I(|X_{nk}|\le\delta_1)] \to 0 \ \text{as}\ n\to\infty.
\]

Then for any $\varepsilon > 0$,
\[
\sum_{n=1}^{\infty} a_n C_{\mathbb{V}}\bigg[\bigg(\sum_{k=1}^{k_n} X_{nk} - \varepsilon\bigg)_{+}^{q}\bigg] < \infty. \quad (3.16)
\]

Proof. By Corollary 3.1, we can get (3.15). Thus, to prove (3.16), it remains to show that
\[
\sum_{k=1}^{k_n}\mathbb{E}[X_{nk}I(|X_{nk}|\le\delta)] \to 0 \ \text{as}\ n\to\infty. \quad (3.17)
\]
If $\delta_1 \ge \delta$, then we have by Lemma 2.4 and conditions (c) and (d) that
\[
\begin{aligned}
\bigg|\sum_{k=1}^{k_n}\mathbb{E}[X_{nk}I(|X_{nk}|\le\delta)]\bigg|
&= \bigg|\sum_{k=1}^{k_n}\big(\mathbb{E}[X_{nk}I(|X_{nk}|\le\delta_1)] + \mathbb{E}[X_{nk}I(|X_{nk}|\le\delta)] - \mathbb{E}[X_{nk}I(|X_{nk}|\le\delta_1)]\big)\bigg|\\
&\le \bigg|\sum_{k=1}^{k_n}\mathbb{E}[X_{nk}I(|X_{nk}|\le\delta_1)]\bigg| + \sum_{k=1}^{k_n}\mathbb{E}\big[|X_{nk}|I(\delta<|X_{nk}|\le\delta_1)\big]\\
&\le \bigg|\sum_{k=1}^{k_n}\mathbb{E}[X_{nk}I(|X_{nk}|\le\delta_1)]\bigg| + \sum_{k=1}^{k_n}\mathbb{E}\Big[|X_{nk}|I\Big(|X_{nk}|>\frac{\delta}{16\eta}\Big)\Big] \to 0 \ \text{as}\ n\to\infty.
\end{aligned}
\]


If $\frac{\delta}{16\eta} \le \delta_1 < \delta$, then we can obtain by Lemma 2.4 and conditions (c) and (d) again that
\[
\begin{aligned}
\bigg|\sum_{k=1}^{k_n}\mathbb{E}[X_{nk}I(|X_{nk}|\le\delta)]\bigg|
&= \bigg|\sum_{k=1}^{k_n}\big(\mathbb{E}[X_{nk}I(|X_{nk}|\le\delta_1)] + \mathbb{E}[X_{nk}I(|X_{nk}|\le\delta)] - \mathbb{E}[X_{nk}I(|X_{nk}|\le\delta_1)]\big)\bigg|\\
&\le \bigg|\sum_{k=1}^{k_n}\mathbb{E}[X_{nk}I(|X_{nk}|\le\delta_1)]\bigg| + \sum_{k=1}^{k_n}\mathbb{E}\big[|X_{nk}|I(\delta_1<|X_{nk}|\le\delta)\big]\\
&\le \bigg|\sum_{k=1}^{k_n}\mathbb{E}[X_{nk}I(|X_{nk}|\le\delta_1)]\bigg| + \sum_{k=1}^{k_n}\mathbb{E}\Big[|X_{nk}|I\Big(|X_{nk}|>\frac{\delta}{16\eta}\Big)\Big] \to 0 \ \text{as}\ n\to\infty.
\end{aligned}
\]

Thus, (3.17) holds. Hence, for any $\varepsilon > 0$, we have that for all $n$ large enough,
\[
-\frac{\varepsilon}{2} < \sum_{k=1}^{k_n}\mathbb{E}[X_{nk}I(|X_{nk}|\le\delta)] < \frac{\varepsilon}{2}, \quad (3.18)
\]

which together with (3.15) yields that
\[
\sum_{n=1}^{\infty} a_n C_{\mathbb{V}}\bigg[\bigg(\sum_{k=1}^{k_n} X_{nk} - \varepsilon\bigg)_{+}^{q}\bigg]
\le \sum_{n=1}^{\infty} a_n C_{\mathbb{V}}\bigg[\bigg(\sum_{k=1}^{k_n}\big(X_{nk} - \mathbb{E}[X_{nk}I(|X_{nk}|\le\delta)]\big) - \frac{\varepsilon}{2}\bigg)_{+}^{q}\bigg] < \infty.
\]
This completes the proof of the corollary.

Taking $q = 1$ in Corollary 3.2, we get the following corollary.

Corollary 3.3. Let $\{X_{nk}, 1 \le k \le k_n, n \ge 1\}$ be an array of rowwise END random variables in $(\Omega,\mathcal{F},\mathbb{E})$, and let $\{a_n, n \ge 1\}$ be a sequence of positive constants. Suppose that the following conditions hold:

(a) $\sum_{n=1}^{\infty} a_n \sum_{k=1}^{k_n} C_{\mathbb{V}}[|X_{nk}|I(|X_{nk}|>\theta)] < \infty$ for any $\theta > 0$;

(b) there exist constants $\eta > 1$, $0 < p \le 2$ and $\delta > 0$ such that
\[
\sum_{n=1}^{\infty} a_n \Big(\sum_{k=1}^{k_n} \mathbb{E}\big[\big|X_{nk}I(|X_{nk}|\le\delta) - \mathbb{E}[X_{nk}I(|X_{nk}|\le\delta)]\big|^p\big]\Big)^{\eta} < \infty;
\]

(c) $\sum_{k=1}^{k_n} \mathbb{E}\big[|X_{nk}|I\big(|X_{nk}| > \frac{\delta}{16\eta}\big)\big] \to 0$ as $n\to\infty$;

(d) there exists a constant $\delta_1 \ge \frac{\delta}{16\eta}$ such that
\[
\sum_{k=1}^{k_n}\mathbb{E}[X_{nk}I(|X_{nk}|\le\delta_1)] \to 0 \ \text{as}\ n\to\infty.
\]

Then for any $\varepsilon > 0$,
\[
\sum_{n=1}^{\infty} a_n C_{\mathbb{V}}\bigg[\bigg(\sum_{k=1}^{k_n} X_{nk} - \varepsilon\bigg)_{+}\bigg] < \infty, \quad (3.19)
\]
and thus,
\[
\sum_{n=1}^{\infty} a_n \mathbb{V}\bigg(\sum_{k=1}^{k_n} X_{nk} > \varepsilon\bigg) < \infty. \quad (3.20)
\]

Proof. We can get (3.19) by Corollary 3.2 immediately. So we only need to prove (3.20). It follows from (3.19) that
\[
\begin{aligned}
\infty &> \sum_{n=1}^{\infty} a_n C_{\mathbb{V}}\bigg[\bigg(\sum_{k=1}^{k_n} X_{nk} - \varepsilon\bigg)_{+}\bigg]
= \sum_{n=1}^{\infty} a_n \int_0^{\infty}\mathbb{V}\bigg(\sum_{k=1}^{k_n} X_{nk} - \varepsilon > x\bigg)dx\\
&\ge C\sum_{n=1}^{\infty} a_n \int_0^{\varepsilon}\mathbb{V}\bigg(\sum_{k=1}^{k_n} X_{nk} - \varepsilon > x\bigg)dx
\ge C\sum_{n=1}^{\infty} a_n \mathbb{V}\bigg(\sum_{k=1}^{k_n} X_{nk} > 2\varepsilon\bigg),
\end{aligned}
\]
which implies (3.20). The proof is completed.
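In the classical special case where $\mathbb{V}$ is an ordinary probability measure $P$ (so $C_{\mathbb{V}}[Y] = \int_0^{\infty} P(Y>x)\,dx$ for $Y \ge 0$), the inequality chain above can be sanity-checked numerically. The sketch below uses an empirical distribution; the standard-normal stand-in for the row sum and all variable names are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 1.0

# Stand-in for the row sum S = sum_{k<=kn} X_nk (assumption: standard normal).
S = rng.normal(size=100_000)

# C_V[(S - eps)_+] = int_0^inf P(S - eps > x) dx, approximated by a
# Riemann sum over the empirical tail probability.
xs = np.linspace(0.0, 6.0, 601)
dx = xs[1] - xs[0]
tail = np.array([(S - eps > x).mean() for x in xs])
cv_plus = float(tail.sum() * dx)

# The proof's lower bound: restrict the integral to [0, eps], then bound
# the integrand below by its value at the right endpoint x = eps,
# where {S - eps > eps} = {S > 2 eps}.
restricted = float(tail[xs <= eps].sum() * dx)   # int_0^eps P(S - eps > x) dx
lower = eps * float((S > 2 * eps).mean())        # eps * P(S > 2 eps)

assert cv_plus >= restricted >= lower > 0
```

Summing the convergent series $\sum_n a_n C_{\mathbb{V}}[(\cdot - \varepsilon)_+]$ term by term then dominates $\sum_n a_n \mathbb{V}(\cdot > 2\varepsilon)$, which is exactly how (3.20) follows from (3.19).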

Remark 3.1. Lin and Feng [15] considered the complete convergence for arrays of rowwise independent and identically distributed (i.i.d., for short) random variables (see the definition in Chen [4]) as an example. Furthermore, they drew a conclusion about the strong law of large numbers (SLLN, for short) from (3.20). In this paper, we can also get (3.20). Thus, similarly to Lin and Feng [15], we can also obtain the SLLN for arrays of rowwise i.i.d. random variables.

With Corollary 3.3 in hand, we can get the following corollary.

Corollary 3.4. Let $\{X_{nk}, 1 \le k \le k_n, n \ge 1\}$ be an array of rowwise END random variables in $(\Omega,\mathcal{F},\mathbb{E})$ which is stochastically dominated by a random variable $X$ with $C_{\mathbb{V}}[|X|^p] < \infty$ for some $1 \le p \le 2$, and let $\{c_{nk}, 1 \le k \le k_n, n \ge 1\}$ be an array of constants satisfying
\[
\sum_{k=1}^{k_n} |c_{nk}|^p = O(n^{-t}) \ \text{for some}\ t > 0. \quad (3.21)
\]
If there exists some constant $\delta_1 > 0$ such that
\[
\sum_{k=1}^{k_n}\mathbb{E}\big[(c_{nk}X_{nk})I(|c_{nk}X_{nk}|\le\delta_1)\big] \to 0 \ \text{as}\ n\to\infty, \quad (3.22)
\]
then for any $\varepsilon > 0$ and $r < t+1$,
\[
\sum_{n=1}^{\infty} n^{r-2} C_{\mathbb{V}}\bigg[\bigg(\sum_{k=1}^{k_n} c_{nk}X_{nk} - \varepsilon\bigg)_{+}\bigg] < \infty, \quad (3.23)
\]
and thus,
\[
\sum_{n=1}^{\infty} n^{r-2}\mathbb{V}\bigg(\sum_{k=1}^{k_n} c_{nk}X_{nk} > \varepsilon\bigg) < \infty. \quad (3.24)
\]

Proof. Without loss of generality, we can assume that $c_{nk} > 0$ for each $1 \le k \le k_n$ and $n \ge 1$; otherwise, we can use $c_{nk}^{+}$ and $c_{nk}^{-}$ instead of $c_{nk}$, respectively. Thus, it follows from Lemma 2.1 that the array $\{c_{nk}X_{nk}, 1 \le k \le k_n, n \ge 1\}$ is still rowwise END. Moreover, we take $\eta > 1$ with $16\eta\delta_1 \ge \delta$, which is equivalent to $\delta_1 \ge \delta/(16\eta)$. Let $a_n = n^{r-2}$ and replace $X_{nk}$ by $c_{nk}X_{nk}$ in the assumptions of Corollary 3.3. So we only need to verify the conditions of Corollary 3.3.

For condition (a), noting that $p \ge 1$ and $r-2-t < -1$, by (3.21), Remark 1.2 and Lemma 2.5, we have
\[
\begin{aligned}
\sum_{n=1}^{\infty} n^{r-2}\sum_{k=1}^{k_n} C_{\mathbb{V}}\big[|c_{nk}X_{nk}|I(|c_{nk}X_{nk}|>\theta)\big]
&= \sum_{n=1}^{\infty} n^{r-2}\sum_{k=1}^{k_n}\int_{\theta^p}^{\infty}\frac{1}{p}x^{\frac{1}{p}-1}\mathbb{V}\big(|c_{nk}X_{nk}|^p > x\big)dx\\
&\le C\sum_{n=1}^{\infty} n^{r-2}\sum_{k=1}^{k_n} c_{nk}^p\int_0^{\infty}\mathbb{V}\Big(|X_{nk}|^p > \frac{x}{c_{nk}^p}\Big)d\Big(\frac{x}{c_{nk}^p}\Big)\\
&\le C\sum_{n=1}^{\infty} n^{r-2-t}\, C_{\mathbb{V}}[|X|^p] < \infty.
\end{aligned}
\]

For condition (b), noting that $\eta > 1 > \frac{r-1}{t}$, we get $r-2-t\eta < -1$. We obtain by $1 \le p \le 2$, $r-2-t\eta < -1$, Lemma 2.2(2) and (3), and Lemma 2.5 that
\[
\begin{aligned}
&\sum_{n=1}^{\infty} n^{r-2}\Big(\sum_{k=1}^{k_n}\mathbb{E}\big[\big|(c_{nk}X_{nk})I(|c_{nk}X_{nk}|\le\delta) - \mathbb{E}[(c_{nk}X_{nk})I(|c_{nk}X_{nk}|\le\delta)]\big|^p\big]\Big)^{\eta}\\
&\le \sum_{n=1}^{\infty} n^{r-2}\Big(\sum_{k=1}^{k_n}\mathbb{E}[|c_{nk}X_{nk}|^p] + \sum_{k=1}^{k_n}\big(\mathbb{E}[|c_{nk}X_{nk}|]\big)^p\Big)^{\eta}\\
&\le C\sum_{n=1}^{\infty} n^{r-2}\Big(\sum_{k=1}^{k_n}\mathbb{E}[|c_{nk}X_{nk}|^p]\Big)^{\eta}
\le C\big(\mathbb{E}[|X|^p]\big)^{\eta}\sum_{n=1}^{\infty} n^{r-2-t\eta} < \infty.
\end{aligned}
\]

For condition (c), it follows from Lemma 2.5 and $1 \le p \le 2$ that
\[
\begin{aligned}
\sum_{k=1}^{k_n}\mathbb{E}\Big[|c_{nk}X_{nk}|I\Big(|c_{nk}X_{nk}|>\frac{\delta}{16\eta}\Big)\Big]
&\le \Big(\frac{\delta}{16\eta}\Big)^{1-p}\sum_{k=1}^{k_n}\mathbb{E}\Big[|c_{nk}X_{nk}|^p I\Big(|c_{nk}X_{nk}|>\frac{\delta}{16\eta}\Big)\Big]\\
&\le C\,\mathbb{E}[|X|^p]\,n^{-t} \to 0 \ \text{as}\ n\to\infty.
\end{aligned}
\]
Noting that $\delta_1 \ge \frac{\delta}{16\eta}$, we can get condition (d) by (3.22).

All conditions of Corollary 3.3 are satisfied. Thus, we can get (3.23), and (3.24) follows from (3.23) immediately. The proof is completed.
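The exponent bookkeeping used in conditions (a) and (b) can be collected in one display (a restatement of steps in the proof, under the standing assumptions $r < t+1$ and $\eta > 1$):

```latex
r < t+1 \;\Longrightarrow\; r-2-t < -1,
\qquad
\eta > 1 > \frac{r-1}{t} \;\Longrightarrow\; t\eta > r-1 \;\Longrightarrow\; r-2-t\eta < -1,
```

so both $\sum_n n^{r-2-t}$ and $\sum_n n^{r-2-t\eta}$ converge as $p$-series; note that $\frac{r-1}{t} < 1$ is exactly the hypothesis $r < t+1$.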

References

[1] A. Adler and A. Rosalsky, Some general strong laws for weighted sums of stochastically dominated random variables, Stochastic Anal. Appl. 5 (1987), no. 1, 1–16. https://doi.org/10.1080/07362998708809104
[2] A. Adler, A. Rosalsky, and R. L. Taylor, Strong laws of large numbers for weighted sums of random elements in normed linear spaces, Internat. J. Math. Math. Sci. 12 (1989), no. 3, 507–529. https://doi.org/10.1155/S0161171289000657
[3] P. Y. Chen and S. H. Sung, On complete convergence and complete moment convergence for weighted sums of ρ∗-mixing random variables, J. Inequal. Appl. 2018 (2018), Paper No. 121, 16 pp. https://doi.org/10.1186/s13660-018-1710-2
[4] Z. J. Chen, Strong laws of large numbers for sub-linear expectations, Sci. China Math. 59 (2016), no. 5, 945–954. https://doi.org/10.1007/s11425-015-5095-0
[5] Z. J. Chen and F. Hu, A law of the iterated logarithm under sublinear expectations, Int. J. Financ. Eng. 1 (2014), no. 2, 1450015, 23 pp. https://doi.org/10.1142/s2345768614500159
[6] Z. J. Chen, P. Y. Wu, and B. M. Li, A strong law of large numbers for non-additive probabilities, Internat. J. Approx. Reason. 54 (2013), no. 3, 365–377. https://doi.org/10.1016/j.ijar.2012.06.002
[7] Y. S. Chow, On the rate of moment convergence of sample sums and extremes, Bull. Inst. Math. Acad. Sinica 16 (1988), no. 3, 177–201.
[8] L. Denis, M. S. Hu, and S. G. Peng, Function spaces and capacity related to a sublinear expectation: application to G-Brownian motion paths, Potential Anal. 34 (2011), no. 2, 139–161. https://doi.org/10.1007/s11118-010-9185-x
[9] A. Filinkov and R. J. Elliott, Non-linear expectations in spaces of Colombeau generalized functions, Stoch. Anal. Appl. 37 (2019), no. 4, 509–521. https://doi.org/10.1080/07362994.2019.1585265
[10] P. L. Hsu and H. Robbins, Complete convergence and the law of large numbers, Proc. Nat. Acad. Sci. U.S.A. 33 (1947), 25–31. https://doi.org/10.1073/pnas.33.2.25
[11] C. Hu, A strong law of large numbers for sub-linear expectation under a general moment condition, Statist. Probab. Lett. 119 (2016), 248–258. https://doi.org/10.1016/j.spl.2016.08.015
[12] T. C. Hu, A. Rosalsky, and K. L. Wang, Complete convergence theorems for extended negatively dependent random variables, Sankhya A 77 (2015), no. 1, 1–29.
[13] Y.-X. Li, Precise asymptotics in complete moment convergence of moving-average processes, Statist. Probab. Lett. 76 (2006), no. 13, 1305–1315. https://doi.org/10.1016/j.spl.2006.04.001
[14] H. Y. Liang, D. L. Li, and A. Rosalsky, Complete moment and integral convergence for sums of negatively associated random variables, Acta Math. Sin. (Engl. Ser.) 26 (2010), no. 3, 419–432. https://doi.org/10.1007/s10114-010-8177-5
[15] Y. W. Lin and X. W. Feng, Complete convergence and strong law of large numbers for arrays of random variables under sub-linear expectations, to appear in Comm. Statist. Theory Methods. https://doi.org/10.1080/03610926.2019.1625924
[16] C. Lu, Z. Chen, and X. J. Wang, Complete f-moment convergence for widely orthant dependent random variables and its application in nonparametric models, Acta Math. Sin. (Engl. Ser.) 35 (2019), no. 12, 1917–1936. https://doi.org/10.1007/s10114-019-8315-7
[17] S. G. Peng, Monotonic limit theorem of BSDE and nonlinear decomposition theorem of Doob-Meyer's type, Probab. Theory Related Fields 113 (1999), no. 4, 473–499. https://doi.org/10.1007/s004400050214
[18] S. G. Peng, G-expectation, G-Brownian motion and related stochastic calculus of Ito type, in Stochastic Analysis and Applications, 541–567, Abel Symp., 2, Springer, Berlin, 2007. https://doi.org/10.1007/978-3-540-70847-6-25
[19] S. G. Peng, Multi-dimensional G-Brownian motion and related stochastic calculus under G-expectation, Stochastic Process. Appl. 118 (2008), no. 12, 2223–2253. https://doi.org/10.1016/j.spa.2007.10.015
[20] S. G. Peng, Nonlinear Expectations and Stochastic Calculus under Uncertainty, Probability Theory and Stochastic Modelling, 95, Springer, Berlin, 2019. https://doi.org/10.1007/978-3-662-59903-7
[21] Q. Y. Wu and Y. Y. Jiang, Strong law of large numbers and Chover's law of the iterated logarithm under sub-linear expectations, J. Math. Anal. Appl. 460 (2018), no. 1, 252–270.
[22] Y. Wu, X. J. Wang, and S. H. Hu, Complete moment convergence for weighted sums of weakly dependent random variables and its application in nonparametric regression model, Statist. Probab. Lett. 127 (2017), 56–66. https://doi.org/10.1016/j.spl.2017.03.027
[23] Y. Wu, X. J. Wang, T. C. Hu, and A. Volodin, Complete f-moment convergence for extended negatively dependent random variables, Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM 113 (2019), no. 2, 333–351. https://doi.org/10.1007/s13398-017-0480-x
[24] Y. Wu, X. J. Wang, and L. X. Zhang, On the asymptotic approximation of inverse moment under sub-linear expectations, J. Math. Anal. Appl. 468 (2018), no. 1, 182–196. https://doi.org/10.1016/j.jmaa.2018.08.010
[25] J. P. Xu and L. X. Zhang, Three series theorem for independent random variables under sub-linear expectations with applications, Acta Math. Sin. (Engl. Ser.) 35 (2019), no. 2, 172–184. https://doi.org/10.1007/s10114-018-7508-9
[26] J. G. Yan, Complete convergence and complete moment convergence for maximal weighted sums of extended negatively dependent random variables, Acta Math. Sin. (Engl. Ser.) 34 (2018), no. 10, 1501–1516. https://doi.org/10.1007/s10114-018-7133-7
[27] L. X. Zhang, Strong limit theorems for extended independent and extended negatively dependent random variables under non-linear expectation, 2016. arXiv preprint arXiv:1608.00710.
[28] L. X. Zhang, Exponential inequalities under the sub-linear expectations with applications to laws of the iterated logarithm, Sci. China Math. 59 (2016), no. 12, 2503–2526. https://doi.org/10.1007/s11425-016-0079-1
[29] L. X. Zhang, Rosenthal's inequalities for independent and negatively dependent random variables under sub-linear expectations with applications, Sci. China Math. 59 (2016), no. 4, 751–768. https://doi.org/10.1007/s11425-015-5105-2

Chao Lu
School of Mathematical Sciences
Anhui University
Hefei 230601, P. R. China

Rui Wang
School of Mathematical Sciences
Anhui University
Hefei 230601, P. R. China

Xuejun Wang
School of Mathematical Sciences
Anhui University
Hefei 230601, P. R. China
Email address: [email protected]

Yi Wu
School of Mathematical Sciences
Anhui University
Hefei 230601, P. R. China