
The modified bootstrap error process for Kaplan–Meier quantiles


Statistics & Probability Letters 58 (2002) 31–39


Paul Janssen^a, Jan Swanepoel^b, Noël Veraverbeke^{a,∗}

^a Limburgs Universitair Centrum, Universitaire Campus, 3590 Diepenbeek, Belgium
^b Potchefstroom University for CHE, 2520 Potchefstroom, South Africa

Received July 2001; received in revised form February 2002

Abstract

We consider a modification of the classical bootstrap procedure for censored observations by choosing a resample size m which is possibly different from the original sample size n. In the situation of quantile estimation we establish weak convergence of the bootstrap error process and show that modified bootstrapping leads to improved consistency rates for the maximum error. © 2002 Elsevier Science B.V. All rights reserved.

MSC: Primary: 62G09; Secondary: 62G20

Keywords: Berry–Esseen bound; Bootstrap consistency rates; Kaplan–Meier estimator; Modified bootstrap; Quantiles; Weak convergence

1. Introduction

Let X_1, …, X_n be nonnegative independent and identically distributed (i.i.d.) random variables with continuous distribution function F. These variables of interest are subject to random right censoring by another sequence of i.i.d. nonnegative random variables Y_1, …, Y_n with continuous distribution function G. The observations are the pairs (T_1, δ_1), …, (T_n, δ_n), where for i = 1, …, n, T_i = min(X_i, Y_i) and δ_i = I(X_i ≤ Y_i), with I(·) denoting the indicator function. In the general right random censorship model it is assumed that X_i and Y_i are independent for each i. Consequently, the T_i are i.i.d. with continuous distribution function H satisfying 1 − H = (1 − F)(1 − G).

∗ Corresponding author. E-mail address: [email protected] (N. Veraverbeke).

0167-7152/02/$ - see front matter © 2002 Elsevier Science B.V. All rights reserved.
PII: S0167-7152(02)00100-1


The Kaplan and Meier (1958) or product-limit estimator for F, based on the observations (T_i, δ_i) (i = 1, …, n), is given by

F_n(t) = 1 − ∏_{T_(i) ≤ t} ((n − i)/(n − i + 1))^{δ_(i)} I(T_(n) ≥ t),

where T_(1) < T_(2) < ··· < T_(n) are the ordered T_i and δ_(1), …, δ_(n) are the corresponding δ_i. Breslow and Crowley (1974) were the first to consider weak convergence of the process {n^{1/2}(F_n(t) − F(t)), 0 ≤ t ≤ T}, where T < τ_H = min(τ_F, τ_G) (throughout we use the notation τ_L = inf{t: L(t) = 1} for the right endpoint of the support of a distribution function L). Let D[0, T] denote the space of right-continuous functions with left-hand limits, endowed with the Skorokhod topology. The result of Breslow and Crowley (1974) says that, if T < τ_H, then, as n → ∞,

n^{1/2}(F_n(·) − F(·)) ⇒ W^0(·) in D[0, T],

where W^0 is a Gaussian process with mean function 0 and covariance function σ(s, t) = (1 − F(s))(1 − F(t)) d(min(s, t)), with

d(t) = ∫_0^t dF(y) / [(1 − F(y))(1 − H(y−))].   (1.1)
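For concreteness, the product-limit estimator above can be computed directly from the pairs (T_i, δ_i). The following sketch (function name and data hypothetical, not from the paper) assumes no ties among the T_i, as in the continuous case considered here:

```python
import numpy as np

def kaplan_meier(T, delta, t):
    """Product-limit estimate F_n(t) from censored pairs (T_i, delta_i).

    Implements F_n(t) = 1 - prod_{T_(i) <= t} ((n-i)/(n-i+1))^{delta_(i)},
    with F_n(t) = 1 for t > T_(n); assumes no ties among the T_i.
    """
    T = np.asarray(T, dtype=float)
    delta = np.asarray(delta)
    n = len(T)
    order = np.argsort(T)
    T_ord, d_ord = T[order], delta[order]
    if t > T_ord[-1]:
        return 1.0                    # F_n(t) = 1 beyond the largest observation
    surv = 1.0                        # running value of 1 - F_n(t)
    for i in range(1, n + 1):         # i is the rank of T_(i)
        if T_ord[i - 1] > t:
            break
        # censored observations (delta = 0) leave the product unchanged
        surv *= ((n - i) / (n - i + 1)) ** d_ord[i - 1]
    return 1.0 - surv
```

With no censoring (all δ_i = 1) this reduces to the empirical distribution function.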

Burke et al. (1988) and also Major and Rejtő (1988) proved a strong approximation result for the Kaplan–Meier process with a.s. rate O(n^{−1/2} log n), exactly the same as in the case of no censoring. Their result says that for T < τ_H and n → ∞,

sup_{0 ≤ t ≤ T} |n^{1/2}(F_n(t) − F(t)) − W^0_n(t)| = O(n^{−1/2} log n)  a.s.,   (1.2)

where W^0_n(t) is a sequence of Gaussian processes with the same distribution as W^0 defined above.

For 0 < p < 1, we denote the pth quantile of F by ξ_p = F^{−1}(p) = inf{t: F(t) ≥ p}. A simple estimator for ξ_p is the pth quantile of the Kaplan–Meier estimator: ξ_n(p) = F_n^{−1}(p) = inf{t: F_n(t) ≥ p}. Weak convergence of the quantile process {n^{1/2}(ξ_n(p) − ξ_p), 0 < p ≤ p_0}, where 0 < p_0 < min(1, F(τ_G)), is well known (Sander (1975); see e.g. Shorack and Wellner (1986)). The limiting process is given by {−W^0(ξ_p)/f(ξ_p), 0 < p ≤ p_0}, with W^0 as before and f = F′. In particular, if F is differentiable at ξ_p with F′(ξ_p) = f(ξ_p) > 0, then as n → ∞,

n^{1/2}(ξ_n(p) − ξ_p) →_d N(0, σ^2(ξ_p)/f^2(ξ_p)),   (1.3)

where

σ^2(t) = σ(t, t) = (1 − F(t))^2 d(t).   (1.4)

In this paper, we study the weak convergence in D(ℝ) of the modified bootstrap error process

Z_n(t) = P*(m^{1/2}(ξ*_{n,m}(p) − ξ_n(p)) ≤ t) − P(n^{1/2}(ξ_n(p) − ξ_p) ≤ t)  (t ∈ ℝ).   (1.5)

Here P* denotes the conditional probability, given the original observations (T_1, δ_1), …, (T_n, δ_n). The modified bootstrap quantile estimator ξ*_{n,m}(p) is defined as follows. Let (T*_1, δ*_1), …, (T*_m, δ*_m) be a sample of size m ≤ n, drawn with replacement from {(T_1, δ_1), …, (T_n, δ_n)}. With F*_{n,m} the corresponding Kaplan–Meier estimator, we define the modified bootstrap quantile as

ξ*_{n,m}(p) = F*_{n,m}^{−1}(p) = inf{t: F*_{n,m}(t) ≥ p}.

From our weak convergence result, we get information on the consistency rate of P*(m^{1/2}(ξ*_{n,m}(p) − ξ_n(p)) ≤ t) as an estimator for P(n^{1/2}(ξ_n(p) − ξ_p) ≤ t).

The above resampling procedure, which is a modification of Efron's (1981) original nonparametric bootstrap for censored data, is called the modified bootstrap or m out of n bootstrap (Swanepoel, 1986; Bickel et al., 1997). The modified bootstrap has been used in situations where the classical bootstrap fails, but recent papers also show superior convergence rates in situations where Efron's bootstrap is valid (see e.g. Janssen et al. (2001)).
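The m out of n resampling scheme just described can be sketched in a few lines. The code below (hypothetical names; a simple sorted-array inversion for the quantile) resamples the pairs (T_i, δ_i) jointly, as in Efron's scheme, but with resample size m ≤ n:

```python
import numpy as np

def km_cdf(T, delta):
    """Sorted times T_(i) and the product-limit F_n evaluated there."""
    order = np.argsort(T)
    T_ord = np.asarray(T, dtype=float)[order]
    d_ord = np.asarray(delta)[order]
    n = len(T_ord)
    factors = [((n - i) / (n - i + 1)) ** d
               for i, d in zip(range(1, n + 1), d_ord)]
    return T_ord, 1.0 - np.cumprod(factors)

def km_quantile(T, delta, p):
    """xi_n(p) = inf{t : F_n(t) >= p}."""
    t_sorted, F = km_cdf(T, delta)
    idx = np.searchsorted(F, p)            # first index with F_n >= p
    return t_sorted[min(idx, len(t_sorted) - 1)]

def m_out_of_n_bootstrap(T, delta, p, m, B, rng):
    """B modified bootstrap quantiles xi*_{n,m}(p), resample size m <= n."""
    T = np.asarray(T, dtype=float)
    delta = np.asarray(delta)
    n = len(T)
    reps = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=m)   # draw m pairs with replacement
        reps[b] = km_quantile(T[idx], delta[idx], p)
    return reps
```

The classical bootstrap is recovered as the special case m = n; the remark in Section 3 motivates smaller choices such as m proportional to n^{2/3}.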

Our result also generalizes a result of Falk and Reiss (1989) to the censored data situation. The method of proof is completely different. Their method relies on properties of order statistics, while ours uses the strong approximation result for F_n and Berry–Esseen inequalities.

2. A Berry–Esseen inequality for the (modified) bootstrapped Kaplan–Meier estimator

In this section, we prove a lemma which will be used in the proof of the main theorem in Section 3, and which is of independent interest.

Lemma. There exists an absolute positive constant C such that for all ξ < T_(n) and for all n sufficiently large,

sup_{x ∈ ℝ} |P*( (m^{1/2}/σ_n(ξ)) (F*_{n,m}(ξ) − F_n(ξ)) ≤ x ) − Φ(x)| ≤ C [1 + d_n^{−1/2}(ξ)] [1 − H_n(ξ)]^{−2} m^{−1/2},

where

H_n(t) = (1/n) ∑_{i=1}^n I(T_i ≤ t),

σ_n^2(t) = (1 − F_n(t))^2 d_n(t),

d_n(t) = ∫_0^t dF_n(y) / ([1 − F_n(y)][1 − H_n(y−)])   (t < T_(n)).
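The empirical quantities H_n, d_n and σ_n^2 in the lemma are simple plug-in functionals of the data; d_n reduces to a finite sum because dF_n puts mass only at uncensored observations. A sketch (hypothetical function name, assuming no ties):

```python
import numpy as np

def lemma_quantities(T, delta, t):
    """H_n(t), d_n(t) and sigma_n^2(t) = (1 - F_n(t))^2 d_n(t), for t < T_(n)."""
    T = np.asarray(T, dtype=float)
    delta = np.asarray(delta)
    n = len(T)
    order = np.argsort(T)
    T_ord, d_ord = T[order], delta[order]
    H_n = np.mean(T_ord <= t)
    surv = 1.0                 # 1 - F_n just before the current observation
    d_n = 0.0
    for i in range(n):         # i-th smallest observation, 0-based
        if T_ord[i] > t:
            break
        surv_new = surv * ((n - i - 1) / (n - i)) ** d_ord[i]
        jump = surv - surv_new                     # mass dF_n at T_(i+1)
        d_n += jump / (surv_new * (1.0 - i / n))   # 1 - H_n(T_(i+1)-) = 1 - i/n
        surv = surv_new
    sigma2_n = surv ** 2 * d_n                     # (1 - F_n(t))^2 d_n(t)
    return H_n, d_n, sigma2_n
```

For uncensored data this reproduces the binomial variance: with T = (1, 2, 3, 4), all δ_i = 1 and t = 2.5 one gets σ_n^2(t) = F_n(t)(1 − F_n(t)) = 0.25.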

Proof. The possibility of ties in the bootstrap sample prevents us from applying directly Theorem 3 in Chang and Rao (1989). We therefore consider a slightly smoothed version of the empirical distribution function, following an idea of Chen and Lo (1996).

Instead of assigning probability 1/n to each observation T_1, …, T_n, we assign probability 1/n to the intervals [T_1 − η_n, T_1 + η_n], …, [T_n − η_n, T_n + η_n], where η_n < (1/2) min_{1 ≤ i < j ≤ n} {|T_i − T_j|, |T_i − ξ|, |T_j − ξ|}. Drawing m times with replacement from this density and assigning the corresponding δ's leads to a smoothed bootstrap sample (T̃*_1, δ̃*_1), …, (T̃*_m, δ̃*_m). We will put a tilde sign (˜) on all functions of (T̃*_1, δ̃*_1), …, (T̃*_m, δ̃*_m). For example: H̃(t) = P(T̃*_1 ≤ t), H̃^u(t) = P(T̃*_1 ≤ t, δ̃*_1 = 1), F̃(t) = 1 − exp(−∫_0^t dH̃^u(s)/(1 − H̃(s))), etc. Applying the result of Chang and Rao (1989), we obtain the following Berry–Esseen inequality for the distribution function of the Kaplan–Meier estimator F̃*_{n,m}(ξ) of the smoothed bootstrap sample. Since ξ < τ_H, there exists an absolute positive constant C such that

sup_{x ∈ ℝ} |P( (m^{1/2}/σ̃(ξ)) (F̃*_{n,m}(ξ) − F̃(ξ)) ≤ x ) − Φ(x)| ≤ C [1 + d̃^{−1/2}(ξ)] [1 − H̃(ξ)]^{−2} m^{−1/2}.   (2.1)

But for this construction (which depends on the fixed point ξ) we have, as explained in Chen and Lo (1996), that F̃*_{n,m}(ξ) = F*_{n,m}(ξ), F̃(ξ) = F_n(ξ) and

sup_{t ≥ 0} |H̃(t) − H_n(t)| ≤ 1/n  a.s.   (2.2)

Property (2.2) entails that the factor [1 − H̃(ξ)]^{−2} in (2.1) can be replaced by [1 − H_n(ξ)]^{−2}, by changing the constant C. The same holds for the factor 1 + d̃^{−1/2}(ξ) since d̃(ξ) = d_n(ξ) + O(n^{−1}) a.s. Finally, the replacement of σ̃(ξ) in (2.1) by σ_n(ξ) can be performed since it is easily seen that σ̃^2(ξ) − σ_n^2(ξ) = O_P(n^{−1/2}) = O_P(m^{−1/2}) (see also Chen and Lo, 1996). This finishes the proof of the lemma.

3. Weak convergence of the bootstrap error process

With φ = Φ′, the density of the standard normal distribution, we define the norming constant

c_t = σ(ξ_p) √(1 − G(ξ_p)) / [√(f(ξ_p)) φ(tf(ξ_p)/σ(ξ_p))].

Theorem. Let 0 < p < min(1, F(τ_G)). Assume that f(ξ_p) > 0 and f is Lipschitz of order 1 in a neighborhood I_p of ξ_p. If m → ∞ as n → ∞ with m/n → c, 0 ≤ c < ∞, and n^{1/2} m^{−3/4} → 0, then, for all t ∈ ℝ,

c_t n^{1/2} m^{−1/4} Z_n(t)

converges weakly as n → ∞ to a process W(t), where

W(t) = W_1(t) for t ≥ 0 and W(t) = W_2(−t) for t < 0,

with {W_1(t), t ≥ 0} and {W_2(t), t ≥ 0} independent standard Wiener processes such that W_1(0) = W_2(0) = 0 a.s.

Remark. For the bootstrap error process {Z_n(t)} defined by (1.5), we conclude from the theorem and the continuous mapping theorem that, for any finite constant t_0 > 0,

sup_{|t| ≤ t_0} |Z_n(t)| = O_P(n^{−1/4})

when m = n (the classical or naive bootstrap), and

sup_{|t| ≤ t_0} |Z_n(t)| = O_P(n^{−1/3}/ε_n^{1/4})

if m = C n^{2/3}/ε_n (the modified or m out of n bootstrap), for some finite constant C > 0 and any sequence {ε_n} of positive numbers such that ε_n → 0 as n → ∞.
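As a purely numerical illustration of the remark (the constant C = 1 and the choice ε_n = 1/log n here are hypothetical), the two maximum-error rates can be compared directly:

```python
import math

# Naive bootstrap (m = n):               sup |Z_n| = O_P(n^{-1/4}).
# Modified bootstrap (m = C n^{2/3}/eps): sup |Z_n| = O_P(n^{-1/3}/eps_n^{1/4}).
for n in (10**2, 10**4, 10**6):
    eps_n = 1.0 / math.log(n)              # one admissible null sequence
    naive = n ** (-0.25)
    modified = n ** (-1.0 / 3.0) / eps_n ** 0.25
    print(f"n = {n:>7}: naive ~ {naive:.4g}, modified ~ {modified:.4g}")
```

Since n^{−1/3}/ε_n^{1/4} divided by n^{−1/4} equals n^{−1/12} ε_n^{−1/4}, which tends to 0 for this slowly vanishing ε_n, the modified bootstrap attains the better rate for large n.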

Proof. Let t_0 be an arbitrary positive finite constant. We have, for t ∈ ℝ,

P*(m^{1/2}(ξ*_{n,m}(p) − ξ_n(p)) ≤ t) = P*(F*_{n,m}(ξ_n(p) + tm^{−1/2}) ≥ p)
 = P*(m^{1/2}(F*_{n,m}(ξ_n(p) + tm^{−1/2}) − F_n(ξ_n(p) + tm^{−1/2})) ≥ m^{1/2}(p − F_n(ξ_n(p) + tm^{−1/2})))
 =: P*(W*_{n,m} ≥ −D_{n,m}).

Hence, with σ_n given in the Lemma in Section 2,

Z_n(t) = {Φ(D_{n,m}/σ_n(ξ_n(p) + tm^{−1/2})) − Φ(tf(ξ_p)/σ(ξ_p))}
 − {P(n^{1/2}(ξ_n(p) − ξ_p) ≤ t) − Φ(tf(ξ_p)/σ(ξ_p))}
 + {P*(W*_{n,m} ≥ −D_{n,m}) − Φ(D_{n,m}/σ_n(ξ_n(p) + tm^{−1/2}))}.   (3.1)

The second term on the right-hand side of (3.1) is O(n^{−1/2}), uniformly for t ∈ ℝ, by the Berry–Esseen theorem for Kaplan–Meier quantiles due to Chang and Rao (1989) (see also Theorem 2(a) in Janssen and Veraverbeke (1992)). Note that their condition that F″ has to be bounded in I_p can be replaced by the condition that f = F′ is Lipschitz of order 1 in I_p.

To show that the third term in (3.1) is O_P(m^{−1/2}) uniformly for |t| ≤ t_0, we use a Berry–Esseen inequality for the distribution function of the modified bootstrapped Kaplan–Meier estimator F*_{n,m}(t). This inequality is stated and proved in the lemma in Section 2. Application of the inequality in our situation requires conditioning on the event {ξ_n(p) + t_0 m^{−1/2} < T_(n)}. We obtain, for some c > 0 and with C, d_n and H_n given in the lemma in Section 2, that

P( sup_{|t| ≤ t_0} |P*(W*_{n,m} ≥ −D_{n,m}) − Φ(D_{n,m}/σ_n(ξ_n(p) + tm^{−1/2}))| > c m^{−1/2} )
 ≤ P(C [1 + d_n^{−1/2}(ξ_n(p) − t_0 m^{−1/2})] [1 − H_n(ξ_n(p) + t_0 m^{−1/2})]^{−2} > c)
 + P(ξ_n(p) + t_0 m^{−1/2} ≥ T_(n)).   (3.2)

Now, for any ε > 0,

P(ξ_n(p) + t_0 m^{−1/2} ≥ T_(n)) ≤ H^n(ξ_p + t_0 m^{−1/2} + ε) + P(|ξ_n(p) − ξ_p| > ε)

and both terms on the right-hand side tend to 0 since ξ_n(p) →_P ξ_p and H(ξ_p + ε) < 1 for ε sufficiently small. Also the first term on the right-hand side of (3.2) can be made arbitrarily small by an appropriate choice of c, since d_n(t) →_P d(t) uniformly for t ∈ ℝ (see Hall and Wellner, 1980), H_n(t) →_P H(t) uniformly for t ∈ ℝ and H(ξ_p) < 1.

Hence, uniformly for |t| ≤ t_0,

Z_n(t) = Z̄_n(t) + O_P(m^{−1/2}),   (3.3)

where (using the mean-value theorem)

Z̄_n(t) = Φ(D_{n,m}/σ_n(ξ_n(p) + tm^{−1/2})) − Φ(tf(ξ_p)/σ(ξ_p))
 = Z^0_n(t) + (φ′(θ_{n,t})/2) [Z^0_n(t)/φ(tf(ξ_p)/σ(ξ_p))]^2
 =: Z^0_n(t) + R_n(t),   (3.4)

Z^0_n(t) = [σ(ξ_p)/σ_n(ξ_n(p) + tm^{−1/2})] φ(tf(ξ_p)/σ(ξ_p)) (1/σ(ξ_p)) (D_{n,m} − tf(ξ_p))
 + φ(tf(ξ_p)/σ(ξ_p)) [1/σ_n(ξ_n(p) + tm^{−1/2}) − 1/σ(ξ_p)] tf(ξ_p),   (3.5)

for some θ_{n,t} lying between tf(ξ_p)/σ(ξ_p) and D_{n,m}/σ_n(ξ_n(p) + tm^{−1/2}). Note that there exists a positive finite constant C such that

sup_{|t| ≤ t_0} |R_n(t)| ≤ C {sup_{|t| ≤ t_0} |Z^0_n(t)|}^2.

Below we will prove that, for every t_0 > 0,

{c_t n^{1/2} m^{−1/4} Z^0_n(t), |t| ≤ t_0} ⇒ {W(t), |t| ≤ t_0} in D[−t_0, t_0].   (3.6)

From the continuous mapping theorem and (3.6) we conclude that

sup_{|t| ≤ t_0} |R_n(t)| = O_P(n^{−1} m^{1/2}).   (3.7)

From the definitions of σ^2(·) in (1.4) and σ_n^2(·) in Section 2, and the fact that sup_{0 ≤ t ≤ T} |F_n(t) − F(t)| and sup_{0 ≤ t ≤ T} |H_n(t) − H(t)| are O_P(n^{−1/2}), some easy calculations show that

sup_{|t| ≤ t_0} |σ(ξ_p)/σ_n(ξ_n(p) + tm^{−1/2}) − 1| = O_P(m^{−1/2})   (3.8)

and

sup_{|t| ≤ t_0} |φ(tf(ξ_p)/σ(ξ_p)) [1/σ_n(ξ_n(p) + tm^{−1/2}) − 1/σ(ξ_p)] tf(ξ_p)| = O_P(m^{−1/2}).   (3.9)


Now, to prove the weak convergence result stated in (3.6), it follows from (3.5), (3.8), (3.9) and Slutsky's theorem that it suffices to show that (3.6) holds with Z^0_n(t) replaced by

φ(tf(ξ_p)/σ(ξ_p)) (1/σ(ξ_p)) (D_{n,m} − tf(ξ_p)).

Using the fact that p = F_n(ξ_n(p)) + O(n^{−1}) a.s. (see Aly et al. (1985, p. 194)), we can write

D_{n,m} − tf(ξ_p) = W_n(t) + O(m^{1/2} n^{−1})  a.s.,

where

W_n(t) = m^{1/2}(F_n(ξ_n(p) + tm^{−1/2}) − F_n(ξ_n(p))) − tf(ξ_p).

Hence, to prove (3.6) it now suffices to show that

{c_t n^{1/2} m^{−1/4} φ(tf(ξ_p)/σ(ξ_p)) (1/σ(ξ_p)) W_n(t), |t| ≤ t_0} ⇒ {W(t), |t| ≤ t_0}   (3.10)

in D[−t_0, t_0]. Note that the proof of the theorem will then follow from (3.3)–(3.7) and the conditions imposed on m.

In order to prove (3.10), we first introduce the following two-dimensional process:

Y_n(s, t) = c_t n^{1/2} m^{−1/4} φ(tf(ξ_p)/σ(ξ_p)) (1/σ(ξ_p)) {m^{1/2}[F_n(ξ_p + (s + t)m^{−1/2}) − F_n(ξ_p + sm^{−1/2})] − tf(ξ_p)}

and note that

Y_n(m^{1/2}(ξ_n(p) − ξ_p), t) = c_t n^{1/2} m^{−1/4} φ(tf(ξ_p)/σ(ξ_p)) (1/σ(ξ_p)) W_n(t).   (3.11)

We now proceed to show that, for any finite constants s_0 > 0 and t_0 > 0,

{Y_n(s, t), |s| ≤ s_0, |t| ≤ t_0} ⇒ {W(t), |s| ≤ s_0, |t| ≤ t_0} in D([−s_0, s_0] × [−t_0, t_0]).   (3.12)

We first use the strong approximation result (see (1.2)): since ξ_p + (t + s)m^{−1/2} < T < τ_H for m large, we have, uniformly for |s| ≤ s_0 and |t| ≤ t_0, that

Y_n(s, t) = c_t m^{1/4} φ(tf(ξ_p)/σ(ξ_p)) (1/σ(ξ_p)) {W^0_n(ξ_p + (s + t)m^{−1/2}) − W^0_n(ξ_p + sm^{−1/2})} + o_P(1),

by using the conditions imposed on m and f. It follows from Hall and Wellner (1980) that the process W^0_n(t) can be represented as

W^0_n(t) = B_n(K(t)) (1 − F(t))/(1 − K(t)),

where B_n(t) is a sequence of Brownian bridge processes on [0, 1], K(t) = d(t)/(1 + d(t)) and d(t) is given in (1.1). Since (1 − F(ξ_p + (s + t)m^{−1/2}))/(1 − K(ξ_p + (s + t)m^{−1/2})) and (1 − F(ξ_p + sm^{−1/2}))/(1 − K(ξ_p + sm^{−1/2})) only differ from (1 − F(ξ_p))/(1 − K(ξ_p)) by a term which is O(m^{−1/2}), uniformly for |s| ≤ s_0 and |t| ≤ t_0, we have that

W^0_n(ξ_p + (s + t)m^{−1/2}) − W^0_n(ξ_p + sm^{−1/2})
 = [(1 − F(ξ_p))/(1 − K(ξ_p))] {B_n(K(ξ_p + (s + t)m^{−1/2})) − B_n(K(ξ_p + sm^{−1/2}))} + O(m^{−1/2}).

Using the representation B_n(t) =_d W(t) − tW(1) and the properties W(0) = 0, W(a + ·) − W(a) =_d W(·) and W(a·) =_d |a|^{1/2} W(·) for any finite constant a, it follows that, for t > 0 (the case t < 0 is dealt with similarly):

B_n(K(ξ_p + (s + t)m^{−1/2})) − B_n(K(ξ_p + sm^{−1/2}))
 =_d W(K(ξ_p + (s + t)m^{−1/2})) − W(K(ξ_p + sm^{−1/2})) + O_P(m^{−1/2})
 =_d W_1(tm^{−1/2} k(ξ_p) + o_P(m^{−1/2})) − W_1(o_P(m^{−1/2})) + O_P(m^{−1/2})
 =_d m^{−1/4} (k(ξ_p))^{1/2} (W_1(t + o_P(1)) − W_1(o_P(1))) + O_P(m^{−1/2})
 =_d m^{−1/4} (k(ξ_p))^{1/2} W_1(t) + O_P(m^{−1/2}),

where k(t) = K′(t) = d′(t)/(1 + d(t))^2 > 0. Simple algebra now gives that Y_n(s, t) =_d W_1(t) + o_P(1), and the proof of (3.12) is completed.

For ease of notation, write, for |t| ≤ t_0,

θ_{n,m} = m^{1/2}(ξ_n(p) − ξ_p),  Y^0_n(t) = Y_n(θ_{n,m}, t).

It will now be shown that

{Y^0_n(t), |t| ≤ t_0} ⇒ {W(t), |t| ≤ t_0} in D[−t_0, t_0].   (3.13)

From Theorem 2.1 of Billingsley (1968) it follows that proving (3.13) is equivalent to showing that, for all bounded, uniformly continuous real-valued functions g,

E{g(Y^0_n)} → E{g(W)} as n → ∞.   (3.14)

For brevity, write

A_n = {|θ_{n,m}| ≤ s_0},  Y^+_n(t) = sup_{|s| ≤ s_0} g(Y_n(s, t)),  Y^−_n(t) = inf_{|s| ≤ s_0} g(Y_n(s, t)).

Since m/n → c, it follows from (1.3) and (1.4) that P(A_n) → P(c^{1/2}|Z| ≤ s_0) as n → ∞, where Z is a normal random variable with E(Z) = 0 and E(Z^2) = σ^2(ξ_p)/f^2(ξ_p).

Also, from (3.12) and the continuous mapping theorem, we obtain that

{Y^+_n(t), |t| ≤ t_0} ⇒ {g(W(t)), |t| ≤ t_0} in D[−t_0, t_0]   (3.15)

and

{Y^−_n(t), |t| ≤ t_0} ⇒ {g(W(t)), |t| ≤ t_0} in D[−t_0, t_0].   (3.16)

Since sup_x |g(x)| ≤ C, for some finite constant C > 0, we deduce from (3.15), (3.16) and Theorem 5.4 of Billingsley (1968) that E{Y^+_n} → E{g(W)} and E{Y^−_n} → E{g(W)}.

Applying these facts it easily follows, by letting n → ∞, that

E{g(W)} − 2C P(c^{1/2}|Z| > s_0) ≤ lim inf_{n→∞} E{g(Y^0_n)} ≤ lim sup_{n→∞} E{g(Y^0_n)} ≤ E{g(W)} + 2C P(c^{1/2}|Z| > s_0).   (3.17)

Since s_0 > 0 is arbitrary, (3.14) follows from (3.17) by letting s_0 → ∞. The proof of the theorem now follows from (3.10), (3.11), (3.13) and (3.14).

Acknowledgements

This research is supported by Project BIL 00/28, Bilateral and Scientific Technological Cooperation, Ministry of the Flemish Community. Also partial support by the Belgian IUAP/PAI network "Statistical Techniques and Modeling for Complex Substantive Questions with Complex Data" is acknowledged.

References

Aly, E.-E.A.A., Cs*orgo, M., HorvUath, L., 1985. Strong approximations of the quantile process of the product-limit estimator.J. Multivariate Anal. 16, 185–210.

Billingsley, P., 1968. Convergence of Probability Measures. Wiley, New York.Breslow, N., Crowley, N., 1974. A large sample study of the life table and product limit estimates under random censorship.

Ann. Statist. 2, 437–453.Burke, M.D., Cs*orgo, S., HorvUath, L., 1988. A correction to and an improvement of strong approximation of some

biometric estimates under random censorship. Probab. Theory Related Fields 79, 51–57.Chang, M.N., Rao, P.V., 1989. Berry–Esseen bound for the Kaplan–Meier estimator. Commun. Statist. Theor. Meth. 18,

4647–4664.Chen, K., Lo, S.-H., 1996. On bootstrap accuracy with censored data. Ann. Statist. 24, 569–595.Efron, B., 1981. Censored data and the bootstrap. J. Amer. Statist. Assoc. 76, 312–319.Falk, M., Reiss, R.D., 1989. Weak convergence of smoothed and nonsmoothed bootstrap quantile estimators. Ann. Probab.

17, 362–371.Hall, W.J., Wellner, J.A., 1980. Con�dence bands for a survival curve from censored data. Biometrika 67, 113–143.Janssen, P., Veraverbeke, N., 1992. The accuracy of normal approximations in censoring models. J. Nonparametric Statist.

1, 205–217.Janssen, P., Swanepoel, J., Veraverbeke, N., 2001. Modi�ed bootstrap consistency rates for U -quantiles. Statist. Probab.

Lett. 54, 261–268.Kaplan, E.L., Meier, P., 1958. Nonparametric estimation from incomplete observations. J. Amer. Statist. Assoc. 53,

457–481.Major, P., Rejto, J., 1988. Strong embedding of the estimator of the distribution function under random censorship. Ann.

Statist. 16, 1113–1132.Sander, J.M., 1975. The weak convergence of quantiles of the product-limit estimator. Technical Report 5, Division of

Biostatistics, Stanford University.Shorack, G.R., Wellner, J.A., 1986. Empirical Processes with Applications to Statistics. Wiley, New York.