
Stat Papers, DOI 10.1007/s00362-013-0557-3

REGULAR ARTICLE

Asymptotic results for hybrids of empirical and partial sums processes

Sergio Alvarez-Andrade · Salim Bouzebda

Received: 12 September 2012 / Revised: 21 August 2013
© Springer-Verlag Berlin Heidelberg 2013

Abstract The present paper is devoted to the study of the hybrids of empirical and partial sums processes. In the first part, we present a synthesis of results related to these processes and their connection with the empirical and compound process. We obtain new results on the precise asymptotics in the law of the logarithm related to complete convergence and a.s. convergence, under some mild conditions, for the hybrids of empirical and partial sums processes. Finally, the weighted bootstrap processes and general hybrid processes are also discussed.

Keywords Empirical processes · Partial sums · Gaussian process · Strong approximations · Weighted bootstrap processes · Precise asymptotics · Complete moment convergence · Baum–Katz · Convergence rates

Mathematics Subject Classification Primary: 62G30 · 60F17

1 Introduction

The hybrid process is defined as

$$A^*(t, n) = \sum_{1 \le i \le n} H(X_i)\,\mathbf{1}\{X_i \le t\}\,\varepsilon_i, \quad \text{for } -\infty < t < \infty,\ n \ge 1, \qquad (1)$$

S. Alvarez-Andrade · S. Bouzebda (B)
Laboratoire de Mathématiques Appliquées de Compiègne, Université de Technologie de Compiègne, B.P. 529, 60205 Compiègne cedex, France
e-mail: [email protected]

S. Alvarez-Andrade
e-mail: [email protected]


where 1{A} denotes the indicator function of the set A, the sequences of random variables {Xi : 1 ≤ i < ∞} and {εi : 1 ≤ i < ∞} and the function H(·) are assumed to satisfy the following conditions:

(H1) The sequences {Xi : 1 ≤ i < ∞} and {εi : 1 ≤ i < ∞} are independent.
(H2) {Xi : 1 ≤ i < ∞} are independent, identically distributed [i.i.d.] random variables with common distribution function F(·).
(H3) {εi : 1 ≤ i < ∞} are i.i.d. random variables with E[ε1] = μ and E[ε1²] = 1; without loss of generality, we consider μ = 1 when μ ≠ 0.
(H4) The function H(·) has bounded variation on the real line.
(H5) ε1 has a finite moment generating function in a neighborhood of 0.

Next we replace (H5) by the weaker condition:

(H6) E[|ε1|^r] < ∞, for some r > 2.

Let Q(y) = inf{x : F(x) ≥ y}, for y ∈ (0, 1), denote the quantile function (generalized inverse) of F(·). There are i.i.d. random variables {Yi : 1 ≤ i < ∞}, uniform on [0, 1], such that Xi = Q(Yi) (cf. Shorack and Wellner 1986, p. 3, Horváth 2000, and the references therein). Thus, without loss of generality, we can write

$$A^*(t, n) = \sum_{1 \le i \le n} H(Q(Y_i))\,\mathbf{1}\{Q(Y_i) \le t\}\,\varepsilon_i = \sum_{1 \le i \le n} V(Y_i)\,\mathbf{1}\{Y_i \le F(t)\}\,\varepsilon_i, \quad \text{for } -\infty < t < \infty,$$

where

$$V(t) = H(Q(t)), \quad \text{for } 0 \le t \le 1.$$

By (H4) we can assume without loss of generality that

$$\sup_{0 \le t \le 1} |V(t)| := \|V\|_\infty < 1. \qquad (2)$$

This means that it is enough to consider approximations for

$$A(t, n) = \sum_{1 \le i \le n} V(Y_i)\,\mathbf{1}\{Y_i \le t\}\,\varepsilon_i, \quad \text{for } 0 \le t \le 1. \qquad (3)$$
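To fix ideas, here is a minimal Python sketch (an illustration, not part of the original analysis) that simulates one trajectory of the normalized process n^{-1/2}A(·, n) from (3); the cosine weight V and the standard normal choice for the εi are arbitrary assumptions compatible with (2) and (H3) with μ = 0.

```python
import numpy as np

rng = np.random.default_rng(0)

def hybrid_process(n, t_grid, V, rng):
    """One trajectory of t -> n^{-1/2} A(t, n) from (3):
    A(t, n) = sum_{i<=n} V(Y_i) 1{Y_i <= t} eps_i,
    with Y_i ~ Uniform(0, 1) and eps_i i.i.d. (here N(0, 1), so mu = 0)."""
    Y = rng.uniform(size=n)
    eps = rng.normal(size=n)
    # indicator matrix: rows = grid points t, columns = observations i
    ind = (Y[None, :] <= t_grid[:, None])
    A = ind @ (V(Y) * eps)                       # A(t, n) for every t in the grid
    return A / np.sqrt(n)

t_grid = np.linspace(0.0, 1.0, 201)
V = lambda u: 0.5 * np.cos(2 * np.pi * u)        # bounded variation, ||V||_inf < 1
path = hybrid_process(10_000, t_grid, V, rng)
print(path[-1])                                  # value of n^{-1/2} A(1, n)
```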

The hybrid process and related processes have been the subject of intense investigation for many years, and this has led to the development of a large variety of methods and statistical applications. In a series of papers, Diebolt (1995), Diebolt et al. (1997) and Diebolt and Zuber (1999) approximated the process {A*(t, n) : −∞ < t < ∞; n ≥ 1} by a sequence of Gaussian processes, leading to the introduction of new methods for testing goodness-of-fit in regression models. In the particular case when H(·) is the identity function, Maumy (2002) considered the related compound empirical process, and Haeusler and Mason (1999) defined the randomly weighted empirical process associated with {A(t, n) : −∞ < t < ∞; n ≥ 1}. Horváth (2000) obtained probability inequalities and almost sure rates for the approximations of the hybrids of empirical and partial sums processes. These results are used in Horváth et al. (2000) to approximate the weighted bootstrap process of an empirical process by a sequence of approximating Brownian bridges achieving the best rates, which was applied to detect a possible change in the distribution of independent observations. Burke (2010) generalized the one-dimensional approximations of Horváth (2000) and Horváth et al. (2000) to the multidimensional case and applied the results to change-point detection in general nonparametric models and to the multivariate distribution function. Finally, making use of the process {A*(t, n) : −∞ < t < ∞; n ≥ 1}, Alvarez-Andrade (2010) introduced and investigated a change-point detection procedure based on variance change.

The rest of this paper is organized as follows. In Sect. 2.1, we give some more detailed arguments explaining the interest of this process and we recall some useful results. In Sect. 2.2, we present some asymptotic results concerning the hybrids of empirical processes, and asymptotics for the related compound empirical process. In Sect. 3, we state the main results of the present work concerning the moment convergence rates of

$$\lim_{\varepsilon \downarrow 0}\, \varepsilon^{1/s} \sum_{n=1}^{\infty} g'(n)\, \mathbb{E}\Big\{ \big\| n^{-1/2} A(\cdot, n) \big\| - \varepsilon g^s(n) \Big\}_{+}, \qquad (4)$$

where g(·) is nonnegative, increasing and differentiable on the interval [0,∞), {x}_+ = max{x, 0}, and ‖ · ‖ will denote ‖ · ‖∞ or ‖ · ‖2, with
$$\|f\|_2 = \left( \int_0^1 f^2(t)\,dt \right)^{1/2}.$$

Therefore we extend the results of Zhang and Yang (2008) and Zang and Huang (2011) to the hybrid process instead of the empirical process. Section 4 is devoted to the general notion of hybrid process constructed by exchangeably weighting the sample. All mathematical developments are given in Sect. 5.

2 Some useful results

Before we present our results in detail, we shall extend our overview of the literature on the hybrid process {A*(t, n) : −∞ < t < ∞; n ≥ 1}.

2.1 Motivations

• Under the assumptions (H1)–(H4) and (H5) or (H6), Diebolt (1995) obtained an approximation of {n^{−1/2}A*(t, n) : −∞ < t < ∞; n ≥ 1} involving a random time change that was used to derive the limiting laws of a family of nonparametric tests for the regression function m(·) in the nonlinear regression model

$$Y_i = m(x_i) - H(x_i)\,\varepsilon_i, \quad 1 \le i \le n,$$

where x1, . . . , xn are given realizations of the i.i.d. random variables {Xi : 1 ≤ i ≤ n}, and the i.i.d. random variables {εi : 1 ≤ i ≤ n} satisfy condition (H5) or (H6). Notice that the function H(·) and the distribution functions of X1 and ε1 are unknown in this framework.

• Diebolt et al. (1997) introduced methods for testing the goodness-of-fit of linear or nonlinear parametric autoregression models of order one defined by

$$X_{t+1} = m(X_t; \theta) + H(X_t)\,\varepsilon_{t+1}, \qquad (5)$$

where H(·) is a measurable function and the εt are Gt-martingale differences with conditional variance 1, with Gt = σ(X1, ε1, . . . , Xt, εt), for t ≥ 1. Similar to the preceding paper, the distribution functions are unknown. Their procedure is based on a measure of the deviation between a weighted process of residuals given by

$$B(x, n) = n^{-1/2} \sum_{t=1}^{n} \big( X_{t+1} - m(X_t; \theta_n) \big)\,\mathbf{1}\{X_t \le x\}, \quad \text{for } -\infty < x < \infty,$$

and a parametric estimate of the cumulated conditional mean function, under the null hypothesis. The following representation of {B(x, n) : −∞ < x < ∞; n ≥ 1} in terms of hybrid processes plays an important role in their study:

$$B(x, n) = \widetilde{B}(x, n) - n^{-1/2} \sum_{t=1}^{n} \big( m(X_t; \theta_n) - m(X_t; \theta_0) \big)\,\mathbf{1}\{X_t \le x\},$$

where

$$\widetilde{B}(x, n) = n^{-1/2} \sum_{t=1}^{n} H(X_t)\,\mathbf{1}\{X_t \le x\}\,\varepsilon_{t+1}.$$

Diebolt et al. (1997) showed that {n^{−1/2}B(x, n) : −∞ < x < ∞, n ≥ 1} converges weakly to a time-transformed Wiener process, which was used to derive some asymptotic properties of the statistics, namely the Kolmogorov–Smirnov-type statistic
$$S_n = \sup_{-\infty < x < \infty} \big| B(x, n) \big|,$$
and the Cramér–von Mises-type statistic
$$S_n = \int_{-\infty}^{\infty} B^2(x, n)\, w(F_n(x))\, dF_n(x),$$


where w(·) is a weight function and Fn(·) denotes the empirical distribution function defined in (8) below. These results were extended by Diebolt and Zuber (1999) in order to study goodness-of-fit tests for parametric, possibly nonlinear, heteroscedastic regression models. The interested reader may refer to Koul and Stute (1999) for more general results, in a very abstract context, which are applied to testing a parametric autoregressive form.

• The case H(x) = 1. The compound empirical process is defined, for each integer n ≥ 1, by

$$\alpha_{n,c}(t) = n^{1/2} \big[ U_{n,c}(t) - \mathbb{E}[U_{n,c}(t)] \big] = n^{-1/2} \sum_{i=1}^{n} \big\{ \varepsilon_i \mathbf{1}\{Y_i \le t\} - \mathbb{E}[\varepsilon_1 \mathbf{1}\{Y_1 \le t\}] \big\}, \quad \text{for } 0 \le t \le 1, \qquad (6)$$

where

$$U_{n,c}(t) = \frac{1}{n} \sum_{i=1}^{n} \varepsilon_i \mathbf{1}\{Y_i \le t\}, \quad \text{for } 0 \le t \le 1.$$

Here, the subscript “c” is used with the meaning of “compound”. Maumy (2002) showed that the order of the best Gaussian approximation for {αn,c(t) : 0 ≤ t ≤ 1; n ≥ 1} is n^{−1/2} log n and that it is obtained under (H5).

Remark 1 If the random variables εi = 1 almost surely, for all i ≥ 1, then the compound empirical process {αn,c(t) : 0 ≤ t ≤ 1; n ≥ 1} reduces to the classical uniform empirical process.
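The following small sketch (an assumption-laden illustration: the uniform Yi, the grid and the sample size are chosen freely) checks Remark 1 numerically: when εi ≡ 1, the compound empirical process (6) coincides with the uniform empirical process √n(En(t) − t).

```python
import numpy as np

rng = np.random.default_rng(1)
n, t_grid = 5_000, np.linspace(0.0, 1.0, 101)
Y = rng.uniform(size=n)

def alpha_nc(eps):
    """Compound empirical process (6): n^{-1/2} sum_i (eps_i 1{Y_i<=t} - E[eps_1 1{Y_1<=t}]),
    where E[eps_1 1{Y_1<=t}] = mu * t for uniform Y_i (here mu = 1, as in (H3))."""
    mu = 1.0
    ind = (Y[None, :] <= t_grid[:, None])
    return (ind @ eps - n * mu * t_grid) / np.sqrt(n)

# eps_i = 1 a.s.: alpha_{n,c} reduces to the uniform empirical process sqrt(n)(E_n(t) - t)
alpha_c = alpha_nc(np.ones(n))
E_n = (Y[None, :] <= t_grid[:, None]).mean(axis=1)
alpha_n = np.sqrt(n) * (E_n - t_grid)
print(np.max(np.abs(alpha_c - alpha_n)))   # 0.0 up to floating point error
```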

• Mason and Newton (1992) and Haeusler and Mason (1999) introduced and derived the weighted approximation for the randomly weighted uniform empirical process defined by

$$X(n, t) = n^{1/2} \left( \sum_{1 \le i \le n} \xi_{i,n}\,\mathbf{1}\{Y_i \le t\} - \frac{1}{n} \sum_{1 \le i \le n} \mathbf{1}\{Y_i \le t\} \right), \quad \text{for } 0 \le t \le 1,$$

where the triangular array of random variables ξi,n, for 1 ≤ i ≤ n, n ≥ 1, is such that, for each fixed n ≥ 1, ξi,n, for 1 ≤ i ≤ n, is assumed to be independent of Y1, . . . , Yn.

• This line of research on the process {A*(t, n) : −∞ < t < ∞; n ≥ 1} found its “final results”, in terms of Gaussian approximations, in the works of Horváth (2000), for the one-dimensional case, and Burke (2010) and (Bouzebda 2012, Section 4), for the multidimensional case.

2.2 The strong approximation results

Below, we introduce notations and definitions regarding some Gaussian processes, which play a central role in the strong approximation theory. By a Brownian bridge {B(t) : 0 ≤ t ≤ 1}, it is meant a centered Gaussian process with continuous sample paths and covariance function

E(B(s)B(t)) = s ∧ t − st, for 0 ≤ s, t ≤ 1.

A Kiefer process {K(y, t) : y ≥ 0, 0 ≤ t ≤ 1} is a centered Gaussian process, with continuous sample paths, and covariance function

E(K (x, s)K (y, t)) = {x ∧ y}{s ∧ t − st}, for x, y ≥ 0 and 0 ≤ s, t ≤ 1,

satisfying also some distributional identities

$$\{K(t, v) : 0 \le v \le 1\} \stackrel{d}{=} \big\{ \sqrt{t}\,B(v) : 0 \le v \le 1 \big\}, \quad \text{for } t \ge 0,$$

and

$$\{K(t, v) : t \ge 0\} \stackrel{d}{=} \big\{ \sqrt{v(1-v)}\,W(t) : t \ge 0 \big\}, \quad \text{for } 0 \le v \le 1,$$

where “$\stackrel{d}{=}$” denotes equality in distribution and {W(t), t ≥ 0} is a standard Wiener process, i.e., a centered Gaussian process with continuous sample paths and covariance function

E(W (x)W (y)) = x ∧ y, for x, y ≥ 0.

The interested reader may refer to Csörgo and Révész (1981) for further details on the Gaussian processes mentioned above.
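As an illustration of these definitions, a short simulation sketch is given below (it is not taken from the cited references): it builds a discretized Brownian bridge as B(t) = W(t) − tW(1) and a Kiefer process at integer first argument as a cumulative sum of independent bridges, a construction consistent with the covariance {x ∧ y}{s ∧ t − st} stated above.

```python
import numpy as np

rng = np.random.default_rng(2)
grid = np.linspace(0.0, 1.0, 501)            # time grid on [0, 1]
dt = grid[1] - grid[0]

def wiener(rng):
    """Discretized standard Wiener process on the grid (W(0) = 0)."""
    steps = rng.normal(scale=np.sqrt(dt), size=len(grid) - 1)
    return np.concatenate([[0.0], np.cumsum(steps)])

def brownian_bridge(rng):
    """Brownian bridge via B(t) = W(t) - t W(1)."""
    W = wiener(rng)
    return W - grid * W[-1]

def kiefer(n, rng):
    """Kiefer process K(m, .), m = 1..n, as cumulative sums of independent bridges;
    its covariance is (m ^ m')(s ^ t - s t), matching the definition above."""
    bridges = np.array([brownian_bridge(rng) for _ in range(n)])
    return np.cumsum(bridges, axis=0)        # row m-1 holds K(m, .)

K = kiefer(20, rng)
print(K.shape)                                # (20, 501)
```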

According to Diebolt (1995), the time-transformation of the limiting Wiener process will be given by
$$G_n(t) = \int_{-\infty}^{t} H^2(s)\, dF_n(s), \qquad (7)$$

where

$$F_n(t) = \frac{1}{n} \sum_{1 \le i \le n} \mathbf{1}\{X_i \le t\}, \quad \text{for } -\infty < t < \infty, \qquad (8)$$

denotes the empirical distribution function. Later, Horváth (2000) showed that the random time change given in (7) can be replaced with a non-random time change, say


$$G(t) = \int_{-\infty}^{t} H^2(s)\, dF(s), \qquad (9)$$

where F(·) is the common distribution function of the {Xi : 1 ≤ i < ∞}, without reducing the rates of the approximations given in Diebolt (1995). He also provided the almost sure approximation of the two-parameter process {A*(t, n), −∞ < t < ∞; n ≥ 1} by a two-parameter Wiener process.
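For concreteness, the empirical time change (7) is just a sample average. The sketch below computes it under assumed choices of F (standard normal) and H (a bounded sine weight), neither of which comes from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=10_000)                  # sample from F (standard normal, an assumption)
H = lambda x: np.sin(x)                      # a bounded-variation weight function (assumption)

def G_n(t, X, H):
    """Empirical time change (7): integral of H^2 dF_n over (-inf, t],
    i.e. (1/n) * sum_i H(X_i)^2 1{X_i <= t}."""
    return np.mean(H(X) ** 2 * (X <= t))

print(G_n(0.5, X, H))                        # close to G(0.5) = int_{-inf}^{0.5} sin(x)^2 dF(x)
```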

2.2.1 Approximations for the process A∗(·, n)

In this subsection we recall some theorems of Horváth (2000), where the time change is given by (9). For further discussion we refer to that reference.

Theorem A
1. Assume that the conditions (H1)–(H4) and (H5) are satisfied. Then we can define a sequence of Wiener processes {W_n(x), 0 ≤ x < ∞} such that
$$P\left( \sup_{-\infty \le t \le \infty} \Big| n^{-1/2} A^*(t, n) - W_n(G(t)) \Big| > \frac{c_1 \log n + x}{\sqrt{n}} \right) \le c_2 \exp(-c_3 x)$$
for all x > 0, where c1, c2 and c3 depend only on H(·), F(·) and the distribution of ε1.
2. Assume that the conditions (H1)–(H4) and (H6) are satisfied. Then we can define a sequence of Wiener processes {W*_n(x), 0 ≤ x < ∞} such that
$$P\left( \sup_{-\infty < t < \infty} \Big| n^{-1/2} A^*(t, n) - W_n^*(G(t)) \Big| > \frac{x}{\sqrt{n}} \right) \le a(x)\, n\, x^{-r}$$
for all x > 0, where a(x) → 0 as x → ∞.

Let g(·) be a Lipschitz functional of order one and W(·) be a Wiener process. We assume that g(W(·)) has a bounded density. Under conditions (H1)–(H4) and (H5), we have that
$$\sup_{-\infty < x < \infty} \Big| P\big( g\big( n^{-1/2} A^*(\cdot, n) \big) \le x \big) - P\big( g\big( W(G(\cdot)) \big) \le x \big) \Big| = O\left( \frac{\log n}{\sqrt{n}} \right).$$

If conditions (H1)–(H4) and (H6) are satisfied, then
$$\sup_{-\infty < x < \infty} \Big| P\big( g\big( n^{-1/2} A^*(\cdot, n) \big) \le x \big) - P\big( g\big( W(G(\cdot)) \big) \le x \big) \Big| = o\left( n^{-(r-2)/(2(r+1))} \right).$$

Remark 2 The first example of a Lipschitz functional of order one is
$$g(\phi) = \|\phi\|_\infty = \sup_{t \in [0,1]} |\phi(t)|.$$


In this case, we mention that the distribution function
$$\psi(s) = P\{ \|W\|_\infty \le s \}, \quad \text{for } s \ge 0,$$
has a well-known analytical expression (see e.g. Shorack and Wellner 1986, Eq. (7), p. 34). The second example is
$$g(\phi) = \left( \int_0^1 \phi^2(s)\, ds \right)^{1/2}.$$
Note that the distribution of $\left( \int_0^1 W^2(s)\, ds \right)^{1/2}$ has a known analytical expression. More precisely, the analytical expression can be deduced from the Karhunen–Loève expansion for the Wiener process over the interval [0, 1] (see, for instance, Adler 1990, pp. 66–79, and Shorack and Wellner 1986).
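Since the analytical expressions are not reproduced here, the following Monte Carlo sketch (discretization and sample sizes are arbitrary choices, not from the paper) approximates the two functionals of Remark 2 for the Wiener process on [0, 1], namely ψ(s) = P{‖W‖∞ ≤ s} and the distribution of (∫₀¹ W²(s) ds)^{1/2}.

```python
import numpy as np

rng = np.random.default_rng(4)
m, reps = 1_000, 20_000                         # grid size and Monte Carlo replications
dt = 1.0 / m

# simulate `reps` discretized Wiener paths on [0, 1]
W = np.cumsum(rng.normal(scale=np.sqrt(dt), size=(reps, m)), axis=1)

sup_norm = np.max(np.abs(W), axis=1)            # g(W) = ||W||_inf
l2_norm = np.sqrt(np.sum(W ** 2, axis=1) * dt)  # g(W) = (int_0^1 W^2)^(1/2)

s = 1.0
print("psi(1.0) ~", np.mean(sup_norm <= s))     # Monte Carlo estimate of P{||W||_inf <= 1}
print("P(||W||_2 <= 1) ~", np.mean(l2_norm <= s))
```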

2.2.2 Asymptotic results associated with the compound empirical process

In the case of the compound empirical process {αn,c(t) : 0 ≤ t ≤ 1; n ≥ 1} given by (6), corresponding to the special case H(x) = 1, Maumy (2002) considered the compound empirical process as a linear combination of a two-parameter process and a Kiefer process and established almost sure rates of convergence. Assume that the hypotheses (H1)–(H4) and (H5) hold. Then, by using Theorem 2.2 of Horváth (2000), we have, almost surely, as n → ∞,

$$\sup_{0 \le x \le 1} \left| \sum_{i=1}^{n} \big[ \varepsilon_i \mathbf{1}\{Y_i \le x\} - x \big] - W(x, n) - K(x, n) \right| = O\left( n^{1/4} (\log n)^{1/2} \right),$$

where the process {W(x, y) : x ≥ 0, y ≥ 0} is a two-parameter Wiener process and the process {K(x, y) : x ≥ 0, y ≥ 0} is a Kiefer process.

3 Main results

In the sequel, without loss of generality, we will consider the hybrid process {A(t, n), 0 ≤ t ≤ 1, n ≥ 1} given in (3). In this case, Gn(·) and G(·) become, respectively, for each integer n ≥ 1,

$$J_n(t) = \int_0^t V^2(s)\, dE_n(s), \quad \text{for } 0 \le t \le 1,$$


and

$$J(t) = \int_0^t V^2(s)\, ds, \quad \text{for } 0 \le t \le 1,$$

where

$$E_n(t) = \frac{1}{n} \sum_{1 \le i \le n} \mathbf{1}\{Y_i \le t\}, \quad \text{for } 0 \le t \le 1, \qquad (10)$$

denotes the uniform empirical distribution function related to the i.i.d. sequence {Yi, 1 ≤ i < ∞} with uniform distribution on [0, 1]. The uniform empirical process is defined, for each n ≥ 1, by

$$\alpha_n(t) = n^{1/2} \big( E_n(t) - t \big), \quad \text{for } 0 \le t \le 1.$$

Zhang and Yang (2008) investigated the uniform empirical process {αn(t) : 0 ≤ t ≤ 1, n ≥ 1} and obtained the precise asymptotics in the Baum–Katz–Davis law of large numbers given by Gut and Spataru (2000a) and Gut and Spataru (2000b) for a sequence of i.i.d. random variables. For further details we refer to Li et al. (2007), Gut and Steinebach (2012), Gut and Stadtmüller (2012), Meng (2012), Zang (2012), Gut and Steinebach (2013a,b) and the references therein. The legendary paper by Hsu and Robbins (1947) introducing the concept of “complete convergence” is to be cited here. The last mentioned reference generated a series of papers, in particular Baum and Katz (1965)'s seminal work, which provided necessary and sufficient conditions for the convergence of the series

$$\sum_{n=1}^{\infty} n^{r/p - 2}\, P\left( \left| \sum_{i=1}^{n} X_i \right| \ge \varepsilon \right),$$

for suitable values of r and p. One result, among others, of Zhang and Yang (2008) reads as follows: for 1 ≤ p < 2 and r > p, we have, for gn(ε) = εn^{1/p},

$$\lim_{\varepsilon \to 0}\, \varepsilon^{2(r-p)/(2-p)} \sum_{n=1}^{\infty} n^{r/p - 2}\, \mathbb{E}\big\{ \|\alpha_n\|_\infty - g_n(\varepsilon) \big\}_{+} = \frac{p}{r - p}\, \mathbb{E}\Big[ \|B\|_\infty^{2(r-p)/(2-p)} \Big]. \qquad (11)$$

In the case of the uniform empirical process, Zang and Huang (2011), in Theorem 2.1, established results on infinite series such as

$$\lim_{\varepsilon \downarrow 0}\, \varepsilon^{1/s} \sum_{n=1}^{\infty} g'(n)\, \mathbb{E}\big\{ \|E_n\| - \varepsilon g^s(n) \big\}_{+} = \frac{\Gamma\!\left( \frac{s+1}{2s} \right)}{2^{\frac{s+1}{2s}}} \sum_{k=1}^{\infty} (-1)^{k+1} k^{-\frac{s+1}{s}}, \qquad (12)$$


where
$$\Gamma(p) := \int_0^{\infty} y^{p-1} \exp(-y)\, dy$$

is the Gamma function. Some interesting particular cases of the function g(·) are the following:
$$g(x) = (\log \log x)^{b+1}, \qquad g(x) = (\log x)^{b+1}, \quad \text{for } b > -1, \qquad g(x) = x^{\frac{r}{p} - 1}, \quad \text{for } 0 < p < r < 2.$$
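The right-hand side of (12) (which also appears as the limit in Theorem 3 below) is an explicit constant. The following sketch evaluates it numerically for a given s > 0 by truncating the alternating series; only the Python standard library is used, and the truncation length is an arbitrary choice.

```python
import math

def constant_12(s, terms=200_000):
    """Numerical value of Gamma((s+1)/(2s)) / 2**((s+1)/(2s)) * sum_{k>=1} (-1)**(k+1) k**(-(s+1)/s)."""
    a = (s + 1.0) / (2.0 * s)
    series = sum((-1) ** (k + 1) * k ** (-(s + 1.0) / s) for k in range(1, terms + 1))
    return math.gamma(a) / 2 ** a * series

print(constant_12(1.0))   # s = 1: Gamma(1)/2 * (1 - 2^{-2} + 3^{-2} - ...) = pi^2/24 ~ 0.4112
```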

Our main purpose is to extend the results of Zang and Huang (2011) for the uniform empirical process to the normalized hybrid of empirical and partial sums processes given in (3), that is {n^{−1/2}A(t, n), t ∈ [0, 1]; n ≥ 1}, and to characterize the following limit:

$$\lim_{\varepsilon \downarrow 0}\, \varepsilon^{1/s} \sum_{n=1}^{\infty} g'(n)\, \mathbb{E}\Big\{ \big\| n^{-1/2} A(\cdot, n) \big\| - \varepsilon g^s(n) \Big\}_{+}, \qquad (13)$$

where ‖ · ‖ denotes ‖ · ‖∞ or ‖ · ‖2.

Remark 3 The choice of the functionals ‖·‖∞ and ‖·‖2 is motivated by Remark 2 and by the fact that the functional ‖W‖∞ (resp. ‖W‖2) appears in the Kolmogorov–Smirnov (resp. Cramér–von Mises) goodness-of-fit test for distribution functions.

In complete analogy to Theorem 2.1 in Zang and Huang (2011), we have the following theorem, where we also consider the case ‖ · ‖ = ‖ · ‖2.

Theorem 1 We assume that the conditions (H1)–(H2)–(H3, with μ = 0)–(H4) and (H6) hold and that the function g(·) is differentiable on the interval [0,∞), nonnegative and strictly increasing to ∞. Suppose that the derivative g′(·) of g(·) is monotone. If g′(·) is monotone nonincreasing, we assume that
$$\lim_{x \to \infty} \frac{g'(x + y)}{g'(x)} = 1.$$

Then, for s > 0, we have

$$\lim_{\varepsilon \to 0}\, \varepsilon^{1/s} \sum_{n=1}^{\infty} g'(n)\, \mathbb{E}\Big\{ \big\| n^{-1/2} A(\cdot, n) \big\|_\infty - \varepsilon g^s(n) \Big\}_{+} = \frac{\mathbb{E}\big[ \|W(J)\|_\infty^{1/s} \big]}{s}.$$

In the case of ‖ · ‖ = ‖ · ‖2 jointly with J (t) = t, for t ∈ (0, 1) and s > 0, we have,

$$\lim_{\varepsilon \to 0}\, \varepsilon^{1/s} \sum_{n=1}^{\infty} g'(n)\, \mathbb{E}\Big\{ \big\| n^{-1/2} A(\cdot, n) \big\|_2 - \varepsilon g^s(n) \Big\}_{+} = \frac{\mathbb{E}\big[ \|W\|_2^{1/s} \big]}{s}.$$


In the same spirit as the preceding theorem, we extend Theorem 2.2 in Zang and Huang (2011) to the hybrid process and also consider the case ‖ · ‖ = ‖ · ‖2.

Theorem 2 We assume that the conditions (H1)–(H2)–(H3, with μ = 0)–(H4) and (H6) hold and that the function g(·) is differentiable on the interval [0,∞) and strictly increasing to ∞ with nonnegative derivative g′(·). Suppose that the function g′(·)/g(·) is monotone. If g′(·)/g(·) is monotone nondecreasing, we assume that
$$\lim_{x \to \infty} \frac{g'(x + 1)\, g(x)}{g(x + 1)\, g'(x)} = 1.$$

Then, for s > 0, we have

$$\lim_{\varepsilon \to 0} \frac{1}{-\log \varepsilon} \sum_{n=1}^{\infty} \frac{g'(n)}{g(n)}\, \mathbb{E}\Big\{ \big\| n^{-1/2} A(\cdot, n) \big\| - \varepsilon g^s(n) \Big\}_{+} = \frac{\mathbb{E}\big[ \|W\| \big]}{s},$$

where ‖ · ‖ denotes ‖ · ‖∞ or ‖ · ‖2.

3.1 The weighted bootstrap processes

According to Horváth et al. (2000), we define the smooth bootstrap of the uniform empirical process {αn(t) : 0 ≤ t ≤ 1; n ≥ 1} by

$$\beta(t, n) = \frac{1}{\sqrt{n}} \sum_{1 \le i \le n} (\varepsilon_i - \bar{\varepsilon}_n)\,\mathbf{1}\{Y_i \le t\}, \quad \text{for } 0 \le t \le 1, \qquad (14)$$

where

$$\bar{\varepsilon}_n = \frac{1}{n} \sum_{1 \le i \le n} \varepsilon_i.$$
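A minimal simulation sketch of the smooth bootstrap process (14) is given below; the Gaussian εi with mean 1 and variance 1 are an assumed choice consistent with (H3), and the grid and sample size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

def smooth_bootstrap(n, t_grid, rng):
    """One trajectory of the smooth bootstrap process (14):
    beta(t, n) = n^{-1/2} sum_i (eps_i - mean(eps)) 1{Y_i <= t}."""
    Y = rng.uniform(size=n)
    eps = rng.normal(loc=1.0, scale=1.0, size=n)   # E[eps] = 1, Var = 1 (assumption, cf. (H3))
    centered = eps - eps.mean()
    ind = (Y[None, :] <= t_grid[:, None])
    return (ind @ centered) / np.sqrt(n)

t_grid = np.linspace(0.0, 1.0, 101)
beta = smooth_bootstrap(5_000, t_grid, rng)
print(np.max(np.abs(beta)))                        # ||beta(., n)||_inf for this trajectory
```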

It is worth noticing that the bootstrap technique, which is a form of resampling procedure for statistical inference, was introduced in Efron (1979)'s seminal paper. In a variety of statistical problems, the bootstrap provides a simple method for circumventing technical difficulties due to intractable distribution theory, and it has become a powerful tool for setting confidence intervals and critical values of tests for composite hypotheses; we may refer, e.g., to Rothe (1989), Cepar and Radalj (1990), Li (2000), Tanizaki et al. (2006) and Jouini (2010).

We state in the following theorem an analogue of Theorem 2.1 of Zang and Huang (2011) for the smooth bootstrap process {β(t, n) : 0 ≤ t ≤ 1; n ≥ 1}.

Theorem 3 We assume that the conditions (H1)–(H2)–(H3, with μ = 0)–(H4) and (H6) hold and that the function g(·) is differentiable on the interval [0,∞), nonnegative and strictly increasing to ∞. Suppose that the derivative g′(·) of g(·) is monotone. If g′(·) is monotone nonincreasing, we assume that


$$\lim_{x \to \infty} \frac{g'(x + y)}{g'(x)} = 1.$$

Then, for s > 0, we have

$$\lim_{\varepsilon \to 0}\, \varepsilon^{1/s} \sum_{n=1}^{\infty} g'(n)\, \mathbb{E}\big\{ \|\beta(\cdot, n)\|_\infty - \varepsilon g^s(n) \big\}_{+} = \frac{\Gamma\!\left( \frac{s+1}{2s} \right)}{2^{\frac{s+1}{2s}}} \sum_{k=1}^{\infty} (-1)^{k+1} k^{-\frac{s+1}{s}}.$$

We state in the following theorem an analogue of Zang and Huang (2011, Theorem 2.2) for the smooth bootstrap process {β(t, n) : 0 ≤ t ≤ 1, n ≥ 1}.

Theorem 4 We assume that the conditions (H1)–(H2)–(H3, with μ = 0)–(H4) and (H6) hold and that the function g(·) is differentiable on the interval [0,∞) and strictly increasing to ∞ with nonnegative derivative g′(·). Suppose that the function g′(·)/g(·) is monotone. If g′(·)/g(·) is monotone nondecreasing, we assume that

$$\lim_{x \to \infty} \frac{g'(x + 1)\, g(x)}{g(x + 1)\, g'(x)} = 1.$$

Then, for s > 0, we have

$$\lim_{\varepsilon \to 0} \frac{1}{-\log \varepsilon} \sum_{n=1}^{\infty} \frac{g'(n)}{g(n)}\, \mathbb{E}\big\{ \|\beta(\cdot, n)\|_\infty - \varepsilon g^s(n) \big\}_{+} = \frac{\log 2}{s} \left( \frac{\pi}{2} \right)^{1/2}.$$

The proofs of Theorems 3 and 4 use the following facts:

$$P(\|B\| \ge x) = \sum_{-\infty < k < \infty} (-1)^k \exp(-2k^2 x^2), \qquad (15)$$

$$\mathbb{E}(\|B\|^r) = \frac{\Gamma\!\left( \frac{r}{2} \right)}{2^{\frac{r}{2}}} \sum_{1 \le k < \infty} (-1)^{k+1} k^{-r} < \infty, \quad \text{for } r > 0, \qquad (16)$$

and the result due to Horváth et al. (2000), given by

$$\sup_{-\infty < x < \infty} \big| P(\|B\| \ge x) - P(\|\beta(\cdot, n)\| \ge x) \big| = O\left( n^{-1/2} (\log n)^{1/2} \right). \qquad (17)$$

The rest of the proof follows the same lines as those of Theorems 1 and 2 and will therefore be omitted.

4 General hybrid process

Let ξn = (ξn1, . . . , ξnn)⊤ be an exchangeable vector of nonnegative weights which sum to 1, and let
$$\xi = \{\xi_{ni},\ i = 1, 2, \ldots, n;\ n = 1, 2, \ldots\}$$
be the corresponding triangular array defined on the probability space (Z, E, P_W). The bootstrap weights ξni are assumed to belong to the class of exchangeable bootstrap weights introduced by Mason and Newton (1992) as well as by Præstgaard and Wellner (1993). The interested reader may refer to Billingsley (1968), Aldous (1985) and Kallenberg (2002) for excellent general coverage of the theory of exchangeability. The general hybrid process is defined as

$$\mathbb{A}(t, n) = \sum_{1 \le i \le n} H(X_i)\,\mathbf{1}\{X_i \le t\}\,\xi_{ni}, \quad \text{for } -\infty < t < \infty, \qquad (18)$$

where, for each n ≥ 1, the sequences of random variables {Xi : 1 ≤ i ≤ n} and {ξni : 1 ≤ i ≤ n} are independent. Typically, the weights are assumed to satisfy the following conditions (a small simulation sketch illustrating them is given after the list).

(W1) The vector ξn = (ξn1, . . . , ξnn)⊤ is exchangeable for all n = 1, 2, . . ., i.e., for any permutation π = (π1, . . . , πn) of (1, . . . , n), the joint distribution of π(ξn) = (ξnπ1, . . . , ξnπn)⊤ is the same as that of ξn;
(W2) ξni ≥ 0 for all n, i and $\sum_{i=1}^{n} \xi_{ni} = 1$ for all n;
(W3) for some positive constant c > 0,
$$n \sum_{i=1}^{n} (\xi_{ni} - 1/n)^2 \xrightarrow{P_\xi} c > 0, \quad \text{as } n \to \infty;$$
(W4)
$$\max_{1 \le i \le n} n\, \xi_{ni}^2 \xrightarrow{P_\xi} 0, \quad \text{as } n \to \infty.$$
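As announced above, here is a small sketch illustrating the weight conditions. Multinomial (Efron-type) weights ξni = Mi/n with M ~ Multinomial(n; 1/n, …, 1/n) are a standard example of exchangeable bootstrap weights: they are nonnegative and sum to 1, and the statistics appearing in (W3) and (W4) can be monitored numerically. For this particular choice the constant c in (W3) is expected to equal 1; this value is an assumption of the illustration, not a statement taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

def multinomial_weights(n, rng):
    """Efron-type exchangeable bootstrap weights: xi_{ni} = M_i / n,
    M ~ Multinomial(n; 1/n, ..., 1/n). Nonnegative and summing to 1, so (W1)-(W2) hold."""
    M = rng.multinomial(n, np.full(n, 1.0 / n))
    return M / n

for n in (100, 1_000, 10_000):
    xi = multinomial_weights(n, rng)
    w3 = n * np.sum((xi - 1.0 / n) ** 2)    # statistic in (W3); settles near a constant (c = 1 here)
    w4 = np.max(n * xi ** 2)                # statistic in (W4); tends to 0
    print(n, round(w3, 3), round(w4, 4))
```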

We have the following theorem.

Theorem 5 Let ξ be a triangular array of bootstrap weights satisfying assumptions (W1)–(W4). Assume that the function H(·) fulfills condition (H4). Then, along almost all sample sequences, given X1, . . . , Xn,
$$n^{1/2}\,\mathbb{A}(t, n) \ \text{converges weakly to} \ c^{1/2}\, W(G),$$
where W(·) is a Wiener process and G(·) is defined in (9).

5 Proofs

This section is devoted to the proofs of our results. The previously defined notation continues to be used below.


5.1 Proof of Theorem 1

Lemmas 1–4 below are oriented towards proving Theorem 1.

Lemma 1 Assume that the conditions of Theorem 1 hold. For s > 0, we have

$$\lim_{\varepsilon \to 0}\, \varepsilon^{1/s} \sum_{n=1}^{\infty} g'(n)\, \mathbb{E}\big\{ \|W(J)\|_\infty - \varepsilon g^s(n) \big\}_{+} = \frac{\mathbb{E}\big[ \|W(J)\|_\infty^{1/s} \big]}{s}$$

and

$$\lim_{\varepsilon \to 0}\, \varepsilon^{1/s} \sum_{n=1}^{\infty} g'(n)\, \mathbb{E}\big\{ \|W\|_2 - \varepsilon g^s(n) \big\}_{+} = \frac{\mathbb{E}\big[ \|W\|_2^{1/s} \big]}{s}.$$

Proof of Lemma 1 Below, we write $Z \stackrel{d}{=} N(\mu, \sigma^2)$ whenever the r.v. Z follows a normal law with expectation μ and variance σ². Let Ψ(x) = P(Z ≥ x), where $Z \stackrel{d}{=} N(0, 1)$. Set J = J(1). Recall that (see, e.g., Mörters and Peres 2010, Theorem 2.21)
$$P(\|W\|_\infty \ge x) = 2\, P(W(1) \ge x).$$
Notice that the random variable ‖W(J)‖∞ has the same distribution as that of J^{1/2}‖W‖∞. Now, using similar arguments to those of Gut and Spataru (2000b), we readily infer that

$$2\varepsilon^{1/s} \int_1^{\infty} g'(x)\, \Psi\!\left( \frac{\varepsilon g^s(x)}{\sqrt{J}} \right) dx \le \sum_{n=1}^{\infty} \varepsilon^{1/s} g'(n)\, P\big( \|W(J)\|_\infty \ge \varepsilon g^s(n) \big) \le 2\varepsilon^{1/s} \int_0^{\infty} g'(x)\, \Psi\!\left( \frac{\varepsilon g^s(x)}{\sqrt{J}} \right) dx. \qquad (19)$$

By the change of variable $y = \varepsilon g^s(x)/\sqrt{J}$, the statement (19) can be rewritten as follows:
$$\frac{2}{s} \big( \sqrt{J} \big)^{1/s} \int_{\varepsilon g^s(1)/\sqrt{J}}^{\infty} y^{1/s - 1}\, \Psi(y)\, dy \le \varepsilon^{1/s} \sum_{n=1}^{\infty} g'(n)\, P\big( \|W(J)\|_\infty \ge \varepsilon g^s(n) \big) \le \frac{2}{s} \big( \sqrt{J} \big)^{1/s} \int_{\varepsilon g^s(0)/\sqrt{J}}^{\infty} y^{1/s - 1}\, \Psi(y)\, dy.$$

The conclusion follows, for the sup-norm ‖ · ‖∞ case, by making use of the following argument:
$$\lim_{\varepsilon \downarrow 0} \frac{2}{s} \big( \sqrt{J} \big)^{1/s} \int_{\varepsilon g^s(0)/\sqrt{J}}^{\infty} y^{1/s - 1}\, \Psi(y)\, dy = \frac{\mathbb{E}\big[ \|W(J)\|_\infty^{1/s} \big]}{s}.$$

In the case of ‖ · ‖2, a similar argument shows likewise that, for 0 < δ < 1,
$$2 \int_1^{\infty} \varepsilon^{1/s} g'(x)\, \Phi(\varepsilon g^s(x))\, dx \le \sum_{n=1}^{\infty} \varepsilon^{1/s} g'(n)\, P\big( \|W\|_2 \ge \varepsilon g^s(n) \big) \le 2 \int_{\delta}^{\infty} \varepsilon^{1/s} g'(x)\, \Phi(\varepsilon g^s(x))\, dx. \qquad (20)$$

The rest of the proof follows the same lines as that of the supremum norm ‖ · ‖∞ and will therefore be omitted. □

Lemma 2 Assume that the conditions of Theorem 1 hold. For some positive constant M > 0, we have

$$\lim_{\varepsilon \to 0}\, \varepsilon^{1/s} \sum_{n \le a(\varepsilon)} g'(n)\, \Big| \mathbb{E}\Big\{ \big\| n^{-1/2} A(\cdot, n) \big\| - \varepsilon g^s(n) \Big\}_{+} - \mathbb{E}\big\{ \|W(J)\| - \varepsilon g^s(n) \big\}_{+} \Big| = 0,$$

where ‖ · ‖ denotes ‖ · ‖∞ or ‖ · ‖2 and a(ε) is chosen in such a way that
$$g(a(\varepsilon)) = M \varepsilon^{-1/s}.$$

Proof of Lemma 2 We first consider the case of ‖ · ‖ = ‖ · ‖∞. Making use of Theorem A, we see that
$$\Delta_n = \sup_{-\infty < x < \infty} \Big| P\big( \|W(J)\| \ge x \big) - P\Big( \big\| n^{-1/2} A(\cdot, n) \big\| \ge x \Big) \Big| = O\left( n^{-1/2} (\log n)^{1/2} \right). \qquad (21)$$

Using the same arguments as those used in the proof of Zang and Huang (2011), we obtain, in turn, that
$$\begin{aligned}
&\varepsilon^{1/s} \sum_{n \le a(\varepsilon)} g'(n)\, \Big| \mathbb{E}\Big\{ \big\| n^{-1/2} A(\cdot, n) \big\| - \varepsilon g^s(n) \Big\}_{+} - \mathbb{E}\big\{ \|W(J)\| - \varepsilon g^s(n) \big\}_{+} \Big| \\
&\quad = \varepsilon^{1/s} \sum_{n \le a(\varepsilon)} g'(n)\, \left| \int_0^{\infty} P\Big( \big\| n^{-1/2} A(\cdot, n) \big\| \ge x + \varepsilon g^s(n) \Big)\, dx - \int_0^{\infty} P\big( \|W(J)\| \ge x + \varepsilon g^s(n) \big)\, dx \right| \\
&\quad \le \varepsilon^{1/s} \sum_{n \le a(\varepsilon)} g'(n) \int_0^{\infty} \Big| P\Big( \big\| n^{-1/2} A(\cdot, n) \big\| \ge x + \varepsilon g^s(n) \Big) - P\big( \|W(J)\| \ge x + \varepsilon g^s(n) \big) \Big|\, dx \\
&\quad = \varepsilon^{1/s} \sum_{n \le a(\varepsilon)} g'(n)\, (\Delta_1 + \Delta_2),
\end{aligned}$$

where

$$\Delta_1 = \int_0^{\Delta_n^{-1/4}} \Big| P\Big( \big\| n^{-1/2} A(\cdot, n) \big\| \ge x + \varepsilon g^s(n) \Big) - P\big( \|W(J)\| \ge x + \varepsilon g^s(n) \big) \Big|\, dx,$$

$$\Delta_2 = \int_{\Delta_n^{-1/4}}^{\infty} \Big| P\Big( \big\| n^{-1/2} A(\cdot, n) \big\| \ge x + \varepsilon g^s(n) \Big) - P\big( \|W(J)\| \ge x + \varepsilon g^s(n) \big) \Big|\, dx.$$

It follows from (21) that
$$\Delta_1 \le \Delta_n^{3/4} \to 0, \quad \text{as } n \to \infty.$$
We next evaluate the second term Δ2. Making use of the triangle inequality, we obtain readily that

$$\begin{aligned}
\Delta_2 &\le \int_{\Delta_n^{-1/4}}^{\infty} \Big| P\Big( \big\| n^{-1/2} A(\cdot, n) \big\| \ge x + \varepsilon g^s(n) \Big) \Big| + \varepsilon^{1/s} \Big| P\big( \|W(J)\| \ge x + \varepsilon g^s(n) \big) \Big|\, dx \\
&\le \int_{\Delta_n^{-1/4}}^{\infty} \Big| P\Big( \big\| n^{-1/2} A(\cdot, n) \big\| \ge x + \varepsilon g^s(n) \Big) \Big|\, dx + \int_{\Delta_n^{-1/4}}^{\infty} \frac{2J}{\sqrt{2\pi}\, x}\, e^{-x^2/(2J^2)}\, dx.
\end{aligned} \qquad (22)$$


Let us now recall some useful arguments given in Horváth (2000). We set, for 0 ≤ t ≤ 1,
$$T(t, i, j) = \sum_{i < k \le j} V(Y_k)\,\mathbf{1}\{Y_k \le t\}\,\varepsilon_k, \quad \text{for } 0 \le i < j < \infty. \qquad (23)$$
Note that
$$T(t, 0, n_k) = A(t, n_k). \qquad (24)$$

For 0 = n_0 < n_1 < n_2 < · · ·, the processes {T(t, n_k, n_{k+1}) : 0 ≤ t ≤ 1} are independent with
$$\mathbb{E}\big[ T(t, n_i, n_{i+1}) \big] = 0, \qquad \operatorname{Var}\big( T(t, n_i, n_{i+1}) \big) = (n_{i+1} - n_i)\, J(t),$$
and
$$\mathbb{E}\big[ |T(t, n_i, n_{i+1})|^r \big] = (n_{i+1} - n_i)\, \mathbb{E}\big[ |\varepsilon_i|^r \big] \int_0^t V^r(s)\, ds.$$

Moreover, by using the Fuk–Nagaev inequality (see Petrov 1995, p. 78), for some positive constants C_r and C'_r depending on r and on n_k < n ≤ n_{k+1}, which may have different values at each appearance throughout the sequel, we have
$$P\big( T(t, n_k, n) > x \big) \le C_r\, (n - n_k)\, \mathbb{E}\big[ |\varepsilon_i|^r \big] \int_0^t V^r(s)\, ds\; x^{-r} + \exp\Big\{ -C'_r\, x^2 \big( (n - n_k)\, J(t) \big)^{-1} \Big\}.$$

In the case of the ‖ · ‖∞, we readily infer that
$$P\big( \|T(\cdot, 0, n_{k+1})\|_\infty > x \big) \le C_r\, n_{k+1}\, x^{-r} + \exp\Big\{ -C'_r\, x^2\, (n_{k+1})^{-1} \Big\}. \qquad (25)$$

We evaluate the first term in the right-hand side of (22). It follows that
$$\begin{aligned}
\int_{\Delta_n^{-1/4}}^{\infty} P\Big( \big\| n^{-1/2} A(\cdot, n) \big\| \ge x + \varepsilon g^s(n) \Big)\, dx
&= \int_{\Delta_n^{-1/4}}^{\infty} P\big( \|A(\cdot, n)\| \ge \sqrt{n}\,(x + \varepsilon g^s(n)) \big)\, dx \\
&\le \int_{\Delta_{n_k}^{-1/4}}^{\infty} P\big( \|T(\cdot, 0, n_{k+1})\| \ge \sqrt{n_k}\,(x + \varepsilon g^s(n_k)) \big)\, dx \\
&= \int_{\Delta_{n_k}^{-1/4}}^{\infty} \frac{C_2}{(x + \varepsilon g^s(n))^2}\, dx + \int_{\Delta_{n_k}^{-1/4}}^{\infty} \exp\big\{ -C'_2\, (x + \varepsilon g^s(n))^2 \big\}\, dx \\
&\le \int_{\Delta_{n_k}^{-1/4}}^{\infty} \frac{C_2}{x^2}\, dx + \int_{\Delta_{n_k}^{-1/4}}^{\infty} \exp\big\{ -C'_2\, x^2 \big\}\, dx \to 0, \quad n \to \infty.
\end{aligned}$$

Note that the second term in the right-hand side of (22) also tends to 0 as n → ∞. The rest of the proof is very similar to that of Zang and Huang (2011, p. 1945), and will therefore be omitted. For the case of ‖ · ‖2, we first observe the following elementary inequality:

$$\big| \|f_1\|_2 - \|f_2\|_2 \big| \le \|f_1 - f_2\|_\infty.$$

Making use of Proposition 5 of Diebolt (1995), we conclude readily that the statement (21) holds with the formal replacement of ‖ · ‖∞ by ‖ · ‖2. By combining (21) and the Abel criterion in a similar way as in Gut and Spataru (2003), we conclude the proof of Lemma 2. □

Lemma 3 For any s > 0, uniformly for ε > 0, we have

$$\lim_{M \to \infty}\, \varepsilon^{1/s} \sum_{n > a(\varepsilon)} g'(n)\, \mathbb{E}\big\{ \|W(J)\| - \varepsilon g^s(n) \big\}_{+} = 0,$$

where ‖ · ‖ denotes ‖ · ‖∞ or ‖ · ‖2 with J(t) ≡ t, and a(ε) is such that, for M > 1,
$$g(a(\varepsilon)) = M \varepsilon^{-1/s}.$$

Proof of Lemma 3 Recall the following useful results:
$$\sup_{0 \le t \le 1} |W(t)| \ge \left\{ \int_0^1 W^2(t)\, dt \right\}^{1/2}, \qquad (26)$$

$$P(\|W\|_\infty \ge x) = 2\, P(W(1) \ge x) \le \frac{2}{\sqrt{2\pi}\, x} \exp\left\{ -\frac{x^2}{2} \right\}, \qquad (27)$$

$$J^{1/2}\, \|W\|_\infty \stackrel{d}{=} \|W(J)\|_\infty. \qquad (28)$$

By combining Eqs. (26)–(28) with arguments similar to those used in the proof of Proposition 2.3 of Zhang and Yang (2008), we readily conclude the proof of Lemma 3. □


Lemma 4 For any s > 0, uniformly for ε > 0, we have

$$\lim_{M \to \infty}\, \varepsilon^{1/s} \sum_{n > a(\varepsilon)} g'(n)\, \mathbb{E}\Big\{ \big\| n^{-1/2} A(\cdot, n) \big\| - \varepsilon g^s(n) \Big\}_{+} = 0,$$

where ‖ · ‖ can be taken as ‖ · ‖∞ or ‖ · ‖2, with J(t) ≡ t in this case, and a(ε) is chosen in such a way that, for M > 1,
$$g(a(\varepsilon)) = M \varepsilon^{-1/s}.$$

Proof of Lemma 4 Recall from the proof of Lemma 2 the definitions of T(t, i, j) and A(t, n_k) given in (23) and (24), respectively, and that, for 0 = n_0 < n_1 < n_2 < · · ·, the processes {T(t, n_k, n_{k+1}) : 0 ≤ t ≤ 1} are independent. Consider first the case of the supremum norm ‖ · ‖∞. From (25), we readily infer that

$$\varepsilon^{1/s} g'(n)\, P\big( \|T(\cdot, 0, n_{k+1})\|_\infty > x \big) \le \varepsilon^{1/s} g'(n)\, C_r\, n_{k+1}\, x^{-r} + \varepsilon^{1/s} g'(n) \exp\Big\{ -C_r\, x^2\, (n_{k+1})^{-1} \Big\}.$$

We conclude the proof in this case by observing that
$$I_1 = \lim_{M \to \infty}\, \varepsilon^{1/s} \sum_{n_k > a(\varepsilon)} \int_{\varepsilon g^s(n_k)\sqrt{n_k}}^{\infty} g'(n)\, n_{k+1}\, C_2\, x^{-2}\, dx \to 0,$$

$$I_2 = \lim_{M \to \infty}\, \varepsilon^{1/s} \sum_{n_k > a(\varepsilon)} \int_{\varepsilon g^s(n_k)\sqrt{n_k}}^{\infty} g'(n) \exp\Big\{ -C'_2\, x^2\, (n_{k+1})^{-1} \Big\}\, dx \to 0.$$

Next consider the case of ‖ · ‖ = ‖ · ‖2. We first evaluate the following probability

$$P\big( \|A(\cdot, n_k)\|_2 > x \big) = P\left( \int_0^1 T^2(t, 0, n_k)\, dt > x \right).$$

Making use of the Bienaymé–Chebyshev inequality, we readily obtain
$$P\big( \|A(\cdot, n_k)\|_2 > x \big) \le \frac{\mathbb{E}\, \|A(\cdot, n_k)\|_2^2}{x^2}. \qquad (29)$$

Next, replace x by $\varepsilon g^s(n_k)\sqrt{n_k}$ in (29); by integration, we readily obtain the result for the ‖ · ‖2 case, as sought. □

The proof of our Theorem 1 is a direct consequence of the auxiliary Lemmas 1–4.


5.2 Proof of Theorem 2

The proof of Theorem 2 will be based on the following lemmas combined with the triangle inequality.

Lemma 5 For s > 0, we have

$$\lim_{\varepsilon \to 0} \frac{1}{-\log \varepsilon} \sum_{n=1}^{\infty} \frac{g'(n)}{g(n)}\, \mathbb{E}\big\{ \|W\| - \varepsilon g^s(n) \big\}_{+} = \frac{\mathbb{E}\big[ \|W\| \big]}{s}.$$

Proof of Lemma 5 We readily obtain, through a change of variable as in Zang (2012, equation (4.1)), for arbitrary δ > 0,

$$\begin{aligned}
\lim_{\varepsilon \to 0} \frac{1}{-\log \varepsilon} \int_{\delta}^{\infty} \frac{g'(x)}{g(x)} \int_{\varepsilon g^s(x)}^{\infty} P(\|W\| \ge t)\, dt\, dx
&= \lim_{\varepsilon \to 0} \frac{1}{-\log \varepsilon} \int_{g(\delta)}^{\infty} \frac{1}{y} \int_{\varepsilon y^s}^{\infty} P(\|W\| \ge t)\, dt\, dy \\
&= \lim_{\varepsilon \to 0} \frac{1}{-s \log \varepsilon} \int_{\varepsilon g^s(\delta)}^{\infty} \frac{1}{x} \int_{x}^{\infty} P(\|W\| \ge t)\, dt\, dx \\
&= \lim_{\varepsilon \to 0} \frac{1}{-s \log \varepsilon} \int_{\varepsilon g^s(\delta)}^{\infty} P(\|W\| \ge t) \int_{\varepsilon g^s(\delta)}^{t} \frac{1}{x}\, dx\, dt \\
&= \lim_{\varepsilon \to 0} \frac{1}{s} \int_{\varepsilon g^s(\delta)}^{\infty} P(\|W\| \ge t)\, dt \\
&= \frac{1}{s} \int_0^{\infty} P(\|W\| \ge t)\, dt.
\end{aligned}$$

This suffices for the proof of Lemma 5. □

The proofs of the following lemmas are very similar to those of Lemmas 2–4 and will therefore be omitted. In the sequel, we set g(B(ε)) = ε^{−r}, for r > 1/s and ε > 0.

Lemma 6 For s > 0, we have

$$\lim_{\varepsilon \to 0} \frac{1}{-\log \varepsilon} \sum_{n=1}^{B(\varepsilon)} \frac{g'(n)}{g(n)}\, \Big| \mathbb{E}\big\{ \big\| n^{-1/2} A(\cdot, n) \big\| - \varepsilon g^s(n) \big\}_{+} - \mathbb{E}\big\{ \|W\| - \varepsilon g^s(n) \big\}_{+} \Big| = 0.$$


Lemma 7 For s > 0, we have

$$\lim_{\varepsilon \to 0} \frac{1}{-\log \varepsilon} \sum_{n > B(\varepsilon)} \frac{g'(n)}{g(n)}\, \mathbb{E}\big\{ \|W\| - \varepsilon g^s(n) \big\}_{+} = 0.$$

Lemma 8 For s > 0, we have

$$\lim_{\varepsilon \to 0} \frac{1}{-\log \varepsilon} \sum_{n > B(\varepsilon)} \frac{g'(n)}{g(n)}\, \mathbb{E}\Big\{ \big\| n^{-1/2} A(\cdot, n) \big\| - \varepsilon g^s(n) \Big\}_{+} = 0.$$

5.3 Proof of Theorem 5

According to Mason and Newton (1992), for integers n ≥ 1, we set

$$W_n(t) = \sum_{1 \le i \le nt} \xi_{ni}, \quad \text{for } 0 \le t \le 1, \qquad (30)$$

where the empty sum is defined to be 0. The assumptions on the weights allow us to conclude (see Billingsley 1968, Theorem 24.2) that the process {n^{1/2}W_n(t) : 0 ≤ t ≤ 1; n ≥ 1} converges weakly to {c^{1/2}W(t) : 0 ≤ t ≤ 1}. Set

$$\mathcal{A}(t, n) = \sum_{1 \le i \le n} \mathbf{1}\{X_i \le t\}\,\xi_{ni}, \quad \text{for } -\infty < t < \infty. \qquad (31)$$

Using the same arguments as those in Mason and Newton (1992, p. 1620), we readily infer, conditioned on X_1, . . . , X_n, that
$$n^{1/2}\,\mathcal{A}(t, n) \stackrel{d}{=} n^{1/2}\, W_n(F_n(t)),$$

and, almost surely along X1, X2, . . ., as n → ∞

$$\sup_{-\infty \le t \le \infty} \big| n^{1/2}\,\mathcal{A}(t, n) - c^{1/2}\, W(F(t)) \big| \to 0.$$

Since H(·) is of bounded total variation, by integrating by parts we readily obtain that

$$\mathbb{A}(t, n) = \int_{D_t} H(x)\, d\mathcal{A}(x, n),$$

and

$$W(G(t)) = \int_{D_t} H(x)\, dW(x),$$


where

Dt = {x ∈ R : x ≤ t}.

This suffices for the proof of Theorem 5. □

Acknowledgments The authors are grateful to the Editor-in-Chief, an Associate Editor and the two anonymous referees for thorough proofreading and numerous comments which led to a considerable improvement of the presentation.

References

Adler RJ (1990) An introduction to continuity, extrema, and related topics for general Gaussian processes. Institute of Mathematical Statistics Lecture Notes–Monograph Series, 12. Institute of Mathematical Statistics, Hayward, CA
Aldous DJ (1985) Exchangeability and related topics. In: École d'été de probabilités de Saint-Flour, XIII–1983, vol 1117 of Lecture Notes in Math., pp 1–198. Springer, Berlin
Alvarez-Andrade S (2010) Variance estimation and change point detection for the hybrids of empirical and partial-sum processes. Rev Roumaine Math Pures Appl 55(2):79–91
Baum LE, Katz M (1965) Convergence rates in the law of large numbers. Trans Am Math Soc 120:108–123
Billingsley P (1968) Convergence of probability measures. Wiley, New York
Bouzebda S (2012) On the strong approximation of bootstrapped empirical copula processes with applications. Math Methods Stat 21(3):153–188
Burke MD (2010) Approximations for a multivariate hybrid process with applications to change-point detection. Math Methods Stat 19(2):121–135
Cepar D, Radalj Z (1990) Some asymptotic behaviour of the bootstrap estimates on a finite sample. Stat Hefte 31(1):41–46
Csörgo M, Révész P (1981) Strong approximations in probability and statistics. Probability and Mathematical Statistics. Academic Press Inc. (Harcourt Brace Jovanovich Publishers), New York
Diebolt J (1995) A nonparametric test for the regression function: asymptotic theory. J Stat Plan Inference 44(1):1–17
Diebolt J, Zuber J (1999) Goodness-of-fit tests for nonlinear heteroscedastic regression models. Stat Probab Lett 42(1):53–60
Diebolt J, Laïb N, Ngatchou Wandji J (1997) Limiting distribution of weighted processes of residuals. Application to parametric nonlinear autoregressive models. C R Acad Sci Paris Sér I Math 325(5):535–540
Efron B (1979) Bootstrap methods: another look at the jackknife. Ann Stat 7(1):1–26
Gut A, Spataru A (2000a) Precise asymptotics in the Baum–Katz and Davis laws of large numbers. J Math Anal Appl 248(1):233–246
Gut A, Spataru A (2000b) Precise asymptotics in the law of the iterated logarithm. Ann Probab 28(4):1870–1883
Gut A, Spataru A (2003) Precise asymptotics in some strong limit theorems for multidimensionally indexed random variables. J Multivar Anal 86(2):398–422
Gut A, Stadtmüller U (2012) On the Hsu–Robbins–Erdős–Spitzer–Baum–Katz theorem for random fields. J Math Anal Appl 387(1):447–463
Gut A, Steinebach J (2012) Convergence rates in precise asymptotics. J Math Anal Appl 390(1):1–14
Gut A, Steinebach J (2013a) Precise asymptotics—a general approach. Acta Math Hung 138(4):365–385
Gut A, Steinebach J (2013b) Convergence rates in precise asymptotics II. Ann Univ Sci Budapest Sect Comput 39:95–110
Haeusler E, Mason DM (1999) Weighted approximations to continuous time martingales with applications. Scand J Stat 26(2):281–295
Horváth L (2000) Approximations for hybrids of empirical and partial sums processes. J Stat Plan Inference 88(1):1–18
Horváth L, Kokoszka P, Steinebach J (2000) Approximations for weighted bootstrap processes with an application. Stat Probab Lett 48(1):59–70
Hsu PL, Robbins H (1947) Complete convergence and the law of large numbers. Proc Natl Acad Sci USA 33:25–31
Jouini J (2010) Bootstrap methods for single structural change tests: power versus corrected size and empirical illustration. Stat Papers 51(1):85–109
Kallenberg O (2002) Foundations of modern probability. Probability and its Applications (New York), 2nd edn. Springer, New York
Koul HL, Stute W (1999) Nonparametric model checks for time series. Ann Stat 27(1):204–236
Li H (2000) The power of bootstrap based tests for parameters in cointegrating regressions. Stat Papers 41(2):197–210
Li DL, Zhang FX, Rosalsky A (2007) A supplement to the Baum–Katz–Spitzer complete convergence theorem. Acta Math Sin (Engl Ser) 23(3):557–562
Mason DM, Newton MA (1992) A rank statistics approach to the consistency of a general bootstrap. Ann Stat 20(3):1611–1624
Maumy M (2002) Etude du processus empirique composé. Thesis, Université Pierre et Marie Curie—Paris VI
Meng Y-J (2012) General laws of precise asymptotics for sums of random variables. J Korean Math Soc 49(4):795–804
Mörters P, Peres Y (2010) Brownian motion. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge
Petrov VV (1995) Limit theorems of probability theory: sequences of independent random variables, vol 4 of Oxford Studies in Probability. The Clarendon Press, Oxford University Press, New York. Oxford Science Publications
Præstgaard J, Wellner JA (1993) Exchangeably weighted bootstraps of the general empirical process. Ann Probab 21(4):2053–2086
Rothe G (1989) Bootstrap for generalized linear models. Stat Hefte 30(1):17–26
Shorack GR, Wellner JA (1986) Empirical processes with applications to statistics. Wiley Series in Probability and Mathematical Statistics. Wiley, New York
Tanizaki H, Hamori S, Matsubayashi Y (2006) On least-squares bias in the AR(p) models: bias correction using the bootstrap methods. Stat Papers 47(1):109–124
Zang QP (2012) Precise asymptotics for complete moment convergence of U-statistics of i.i.d. random variables. Appl Math Lett 25(2):120–127
Zang QP, Huang W (2011) A general law of moment convergence rates for uniform empirical process. Acta Math Sin (Engl Ser) 27(10):1941–1948
Zhang Y, Yang X-Y (2008) Precise asymptotics in the law of the iterated logarithm and the complete convergence for uniform empirical process. Stat Probab Lett 78(9):1051–1055
