
INFERENCE UNDER RANDOM LIMIT BOOTSTRAP MEASURES

Giuseppe Cavaliere
Department of Economics, University of Bologna
Department of Economics, University of Exeter

Iliyan Georgiev
Department of Economics, University of Bologna
Nova School of Business and Economics, Universidade Nova de Lisboa

First draft: July 2018; Revised: December 2019; This version: June 2, 2020

Asymptotic bootstrap validity is usually understood as consistency of the distribution of a bootstrap statistic, conditional on the data, for the unconditional limit distribution of a statistic of interest. From this perspective, randomness of the limit bootstrap measure is regarded as a failure of the bootstrap. We show that such limiting randomness does not necessarily invalidate bootstrap inference if validity is understood as control over the frequency of correct inferences in large samples. We first establish sufficient conditions for asymptotic bootstrap validity in cases where the unconditional limit distribution of a statistic can be obtained by averaging a (random) limiting bootstrap distribution. Further, we provide results ensuring the asymptotic validity of the bootstrap as a tool for conditional inference, the leading case being that where a bootstrap distribution estimates consistently a conditional (and thus, random) limit distribution of a statistic. We apply our framework to several inference problems in econometrics, including linear models with possibly non-stationary regressors, CUSUM statistics, conditional Kolmogorov-Smirnov specification tests and tests for constancy of parameters in dynamic econometric models.

Keywords: Bootstrap, random measures, weak convergence in distribution, asymptotic inference.

Giuseppe Cavaliere: [email protected]
Iliyan Georgiev: [email protected]

We are grateful to the Co-Editor and three anonymous referees for the many constructive comments and suggestions on earlier versions of the paper. We have also benefited from discussions and feedback from Matias Cattaneo, Graham Elliott, Paola Giuliano, Michael Jansson, Søren Johansen, Ye Lu, Adam McCloskey, Marcelo Moreira, Rasmus Søndergaard Pedersen, Anders Rahbek, Mervyn Silvapulle, Michael Wolf, seminar participants at Boston U, Cambridge, ETH, Exeter, FGV São Paulo, Genève, Hitotsubashi, Kyoto, LSE, Melbourne, Oxford, PUC Rio, Queen's, QUT, SMU, Tilburg, UCL, UPF, Verona, Vienna, as well as participants to the 7th Italian Congress of Econometrics and Empirical Economics (ICEEE), the 28th (EC)² Conference, the 14th SETA meeting, the NBER/NSF 2018 Time Series Conference and the IXth Zaragoza Workshop in Time Series Econometrics. We also thank Edoardo Zanelli and Lorenzo Zannin for research assistance. Financial support from the University of Bologna, ALMA IDEA 2017 grants, and from the Danish Council for Independent Research (DSF Grant 7015-00028), is gratefully acknowledged.


1 Introduction

Consider a data sample D_n of size n and a statistic τ_n := τ_n(D_n), say a test statistic or a parameter estimator, possibly normalized. Interest is in a distributional approximation of τ_n. Let a bootstrap procedure generate a bootstrap analogue τ*_n of τ_n, i.e., computed on a bootstrap sample. Assume that τ_n converges in distribution to a non-degenerate random variable [rv], say τ. In classic bootstrap inference, asymptotic bootstrap validity is understood and established as convergence in probability (or almost surely) of the cumulative distribution function [cdf] of the bootstrap statistic τ*_n conditional on the data D_n, say F*_n, to the unconditional cdf of τ, say F. This convergence, along with continuity of F, implies by Polya's theorem that sup_{x∈R} |F*_n(x) − F(x)| → 0, in probability (or almost surely).

In many applications, however, the bootstrap statistic τ*_n may possess, conditionally on the data, a random limit distribution. Cases of random bootstrap limit distributions appear in various areas of econometrics and statistics; for instance, they are documented for infinite-variance processes (Athreya, 1987; Aue, Berkes and Horváth, 2008), time series with unit roots (Basawa et al., 1991; Cavaliere, Nielsen and Rahbek, 2015), parameters on the boundary of the parameter space (Andrews, 2000), block resampling methods under fixed-b asymptotics (Lahiri, 2001; Shao and Politis, 2013), cube-root consistent estimators (Sen, Banerjee and Woodroofe, 2010; Cattaneo, Jansson and Nagasawa, 2020), and Hodges-LeCam superefficient estimators (Beran, 1997). In most of these cases, the occurrence of a random limit distribution for the bootstrap statistic τ*_n given the data (in contrast to a non-random limit of the unconditional distribution of the original statistic τ_n) is taken as evidence of bootstrap failure.¹

In this paper we show that randomness in the limit distribution of a bootstrap statistic need not invalidate bootstrap inference. On the contrary, although the bootstrap no longer estimates the limiting unconditional distribution of the statistic of interest, it may still deliver hypothesis tests (or confidence intervals) with the desired null rejection probability (or coverage probability) when the sample size diverges. This happens because asymptotic control over the frequency of wrong inferences can be guaranteed by the asymptotic distributional uniformity of the bootstrap p-values, which in its turn can occur without the convergence in probability (or almost surely) of the bootstrap cdf F*_n of τ*_n to the asymptotic cdf F of τ.

Therefore, in cases where the limit bootstrap distribution is random, our analysis focuses on the asymptotics of the bootstrap p-value p*_n := F*_n(τ_n). We define "(unconditional) bootstrap validity" as the fact that

P(p*_n ≤ q) → q   (1.1)

for all q ∈ (0,1). Interest in this property is not new in the literature (see, e.g., Hansen, 1996, and Lockhart, 2012, among others). When the limit bootstrap distribution is

¹For instance, Athreya (1987) writes: "If the bootstrap were to be successful here, then F*_n [our notation] should converge to F [our notation] in distribution. However, this is not the case. There is a random limit." (p. 725) and "constructing confidence intervals on the basis of a Monte Carlo simulation of the bootstrap could lead to misleading results" (p. 728). Similar considerations appear in, inter alia, Basawa et al. (1991), Lahiri (2001), Aue et al. (2008).


random, however, the existing research provides mostly counterexamples to property (1.1) (e.g., Shao and Politis, 2013), whereas general sufficient conditions for bootstrap validity in the sense of (1.1) have not been studied.

Our first set of results provides such sufficient conditions. Classic results for bootstrap validity when the limit bootstrap measure is not random can be obtained as special cases. The main requirement in our results is that the unconditional limit distribution of τ_n should be an average of the random limit distribution of τ*_n given the data.

It is often the case that bootstrap validity can be addressed through the lens of a conditioning argument. In this regard, our second set of results concerns the possibility that, for a sequence of random elements X_n, the bootstrap p-value is uniformly distributed in large samples conditionally on X_n:

P(p*_n ≤ q | X_n) →_p q   (1.2)

for all q ∈ (0,1). This property, which we call "bootstrap validity conditional on X_n", implies unconditional validity in the sense of (1.1). Moreover, conditional bootstrap validity given X_n implies that the bootstrap replicates asymptotically the property of conditional tests and confidence intervals to have, conditionally on X_n, constant null rejection probability and coverage probability, respectively (for further roles of conditioning in inference, such as the relevance of the drawn inferences and information recovery, see Reid, 1995, and the references therein). The leading case where we show (1.2) to hold (under regularity conditions that will be discussed in the paper) is that where the (random) limit of the conditional distribution of τ_n given X_n matches the (random) limit distribution of the bootstrap statistic. The idea of comparing the limit bootstrap distribution with the limit of a conditional distribution of a statistic of interest was put forward by LePage and Podgórski (1996), but was not recast in terms of bootstrap p-values as in eq. (3.2). The use of a conditioning argument to establish (1.1) and (1.2) can be found in Cavaliere, Georgiev and Taylor (2013) and Georgiev et al. (2019), respectively. Their results follow as special cases of those provided in this paper.

When dealing with random limit distributions, the usual convergence concept employed to establish bootstrap validity, i.e. weak convergence in probability, can only be used in some very special cases. Instead, our formal discussion makes extensive use of the probabilistic concept of weak convergence of random measures; see, e.g., Kallenberg (2017, Ch. 4). To our knowledge, in the bootstrap context this concept has so far been mostly used to obtain negative results of lack of validity for specific bootstrap procedures (see above), rather than positive validity results, as we do here. As an ingredient of our analysis, we also present some novel results on the weak convergence of conditional expectations.

To provide motivation for the practical relevance of our results, we initially illustrate them by using a simple linear model with either stationary or non-stationary regressors, and later we analyze three well-known cases in the econometric literature where the bootstrap features a random limit distribution. The first is a standard CUSUM-type test of the i.i.d. property for a random sequence with infinite variance. This is a case where the limit distribution of the CUSUM statistic depends on unknown nuisance parameters (e.g., the tail index) and bootstrap or permutation tests fail to estimate this distribution


consistently. We argue that a simple bootstrap based on permutations, albeit having a random limit distribution and hence being invalid in the usual sense, provides exact conditional inference and hence is also unconditionally valid in the sense of (1.1).

The second application considers a Kolmogorov-Smirnov-type test for correct specification of the conditional distribution of a response variable given a vector of covariates. Andrews (1997) considers a parametric bootstrap implementation where the covariates are kept fixed across bootstrap samples. While in the independent case the limit of the bootstrap distribution is non-random, this is not the case in general. Using our theory we discuss conditions for validity of the bootstrap within this framework.

Finally, we consider the well-known and widely applied case of bootstrap implementations of "sup F" tests for parameter constancy in regression models where the regressors could be non-stationary, with the latter including the case of regressors subject to (possibly random) structural change. As in Hansen (2000), see also Hall (1992, p. 170), in the resampling process forming the bootstrap sample, it appears natural to take the design matrix as fixed across the bootstrap repetitions. Under a set of assumptions proposed by Hansen (2000), we argue that the fixed-regressor bootstrap "sup F" statistic has a random limit distribution, thus invalidating previous claims in the literature that the bootstrap is consistent for the unconditional limit distribution of the original "sup F" test statistic. We then provide conditions under which the fixed-regressor bootstrap is valid, unconditionally and conditionally on the chosen set of regressors.

Structure of the paper

The paper is organized as follows. In Section 2 we outline the central concepts and ideas using a simple linear regression model. The main theoretical results are presented in Section 3. Section 4 contains the three applications of the theory, whereas Section 5 concludes. The paper has two Appendices. In Appendix A we collect some results on weak convergence in distribution which are useful to prove the main theorems and develop the applications. Appendix B contains the proofs of the main theorems. Additional material and proofs are given in the accompanying supplement (Cavaliere and Georgiev, 2020). Sections, equations, etc., numbered S.x can be found there.

Notation and definitions

We use the following notation throughout. The spaces of càdlàg functions [0,1] → R^n, [0,1] → R^{m×n} and R → R (all equipped with the respective Skorokhod J1 topologies; see Kallenberg, 1997, Appendix A2) are denoted by D_n, D_{m×n} and D(R), respectively; for the first one, when n = 1 the subscript is suppressed. Integrals are over [0,1] unless otherwise stated, Φ is the standard Gaussian cdf, U(0,1) is the uniform distribution on [0,1] and I{·} is the indicator function. If F is a (random) cdf, F^{-1} stands for the right-continuous generalized inverse, i.e., F^{-1}(u) := sup{v ∈ R : F(v) ≤ u}, u ∈ R. Unless otherwise specified, limits are for n → ∞.
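For an empirical cdf, this right-continuous generalized inverse has a simple closed form: for u ∈ (0,1) it equals the smallest order statistic x_(k) with k/n > u. A minimal Python sketch (the helper names are ours, purely illustrative):

```python
import numpy as np

def ecdf(sample):
    """Empirical cdf: F(v) = (# observations <= v) / n (right-continuous)."""
    s = np.sort(sample)
    return lambda v: np.searchsorted(s, v, side="right") / len(s)

def gen_inverse(sample, u):
    """Right-continuous generalized inverse F^{-1}(u) := sup{v : F(v) <= u}.
    For u in (0, 1) this is the smallest order statistic x_(k) with k/n > u."""
    s = np.sort(sample)
    n = len(s)
    k = int(np.floor(u * n)) + 1      # smallest integer k with k/n > u
    return s[min(k, n) - 1]

x = np.array([3.0, 1.0, 2.0, 4.0])
F = ecdf(x)
```

The set {v : F(v) ≤ u} is the half-line to the left of the first jump that takes F above u, so its supremum is the location of that jump.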

Polish (i.e., complete and separable metric) spaces are always equipped with their Borel σ-algebras. Throughout, we assume that all the considered random elements are Polish-space valued. For random elements of a Polish space, the existence of regular conditional distributions is guaranteed and we assume without loss of generality that


conditional probabilities are regular (Kallenberg, 1997, Theorem 5.3). Equality of conditional distributions is understood in the almost sure [a.s.] sense and, for random cdf's as random elements of D(R), equalities are up to indistinguishability.

Let C_b(S) be the set of all continuous and bounded real-valued functions on a metric space S. For random elements Z, Z_n (n ∈ N) of a metric space S_Z, we employ the usual notation Z_n →_w Z for the property that the distribution of Z_n weakly converges to the distribution of Z, defined by the convergence E{g(Z_n)} → E{g(Z)} for all g ∈ C_b(S_Z). For random elements (Z, X), (Z_n, X_n) of the metric spaces S_Z × S and S_Z × S_n (n ∈ N), defined on a common probability space, we denote by Z_n|X_n →_{w,p} Z|X (resp. Z_n|X_n →_{w,a.s.} Z|X) the fact that E{g(Z_n)|X_n} → E{g(Z)|X} in probability (resp. a.s.) for all g ∈ C_b(S_Z). In the special case where E{g(Z_n)|X_n} → E{g(Z)} in probability (resp. a.s.) for all g ∈ C_b(S_Z), we write Z_n|X_n →_{w,p} Z (resp. Z_n|X_n →_{w,a.s.} Z). In such a case the weak limit (in probability or a.s.) of the random conditional distribution Z_n|X_n is the non-random distribution of Z, thus reducing our definition to the one of weak convergence in probability (resp. a.s.) usually employed in the bootstrap literature.

In order to deal with random limit measures, we need a further convergence concept. For (Z, X), (Z_n, X_n) (n ∈ N) defined on possibly different probability spaces, we denote by Z_n|X_n →_{w,w} Z|X the fact that E{g(Z_n)|X_n} →_w E{g(Z)|X} for all g ∈ C_b(S_Z), and label it "weak convergence in distribution". It coincides with the probabilistic concept of weak convergence of random measures (here, of the random conditional distributions Z_n|X_n; see Kallenberg, 2017, Ch. 4). Whenever Z_n and Z are rv's and the conditional distribution of Z given X is diffuse (non-atomic), this is equivalent to the weak convergence P(Z_n ≤ ·|X_n) →_w P(Z ≤ ·|X) of the random cdf's as random elements of D(R) (see Kallenberg, 2017, Theorem 4.20). Finally, on probability spaces where both the data D_n and the auxiliary variates used in the construction of the bootstrap data are defined, we use Z_n →*_{w,p} Z|X (resp. →*_{w,a.s.}, →*_{w,w}) interchangeably with Z_n|D_n →_{w,p} Z|X (resp. →_{w,a.s.}, →_{w,w}), and write P*(·) for P(·|D_n).

2 A linear regression example

In this section we provide an overview of the main results established in the sections to follow, and of the concepts employed, by using a simple linear regression model. Further applications will be given in Section 4. We observe that even for this basic model bootstrap statistics may have a random limit distribution. Then, we show that convergence of the bootstrap statistic to a random limit may imply bootstrap validity in the unconditional sense of eq. (1.1). Finally, we illustrate the possibility that bootstrap inference may have a conditional interpretation.

2.1 Model, bootstrap and random limit bootstrap measures

Assume that the data are given by D_n := {y_t, x_t}_{t=1}^n and consider the linear model

y_t = βx_t + ε_t  (t = 1, 2, ..., n)   (2.1)

where x_t, y_t are scalar rv's and ε_t are unobservable zero-mean errors with ω_ε := Var(ε_t) ∈ (0,∞), t = 1, ..., n. Assume that M_n := Σ_{t=1}^n x_t² > 0 a.s. for all n; further assumptions


will be introduced gradually. Interest is in inference on β based on T_n := β̂ − β, with β̂ the OLS estimator of β; for instance, a confidence interval or a test of a null hypothesis of the form H_0 : β = 0.

The classic (parametric) fixed-design bootstrap, see e.g. Hall (1992), entails generating a bootstrap sample {y*_t, x_t}_{t=1}^n as

y*_t = β̂x_t + ω̂_ε^{1/2} ε*_t  (t = 1, 2, ..., n)   (2.2)

where {ε*_t}_{t=1}^n are i.i.d. N(0,1), independent of the original data, and ω̂_ε is an estimator of ω_ε, e.g., the residual variance n^{-1} Σ_{t=1}^n (y_t − β̂x_t)². The OLS estimator of β from the bootstrap sample is denoted by β̂* and, conditionally on the original data, T*_n := β̂* − β̂ ~ N(0, ω̂_ε M_n^{-1}). As is standard, the distribution of T_n is approximated by the distribution of T*_n conditional on the data. With F*_n denoting the cdf of T*_n under P*, the bootstrap p-value is given by p*_n := F*_n(T_n).
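As an illustration (ours, not the paper's), the scheme in (2.2) and the p-value p*_n = F*_n(T_n) can be sketched in Python, with F*_n approximated by Monte Carlo over the bootstrap draws; the DGP, the sample size and the number of bootstrap draws are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pvalue(y, x, beta0, n_boot=999):
    """Fixed-design Gaussian bootstrap for T_n = beta_hat - beta0 in y_t = beta*x_t + eps_t.
    Returns the Monte Carlo approximation of p*_n = F*_n(T_n)."""
    Mn = x @ x
    beta_hat = (x @ y) / Mn
    omega_hat = np.mean((y - beta_hat * x) ** 2)   # residual variance estimate
    t_n = beta_hat - beta0
    # bootstrap samples y*_t = beta_hat * x_t + omega_hat^{1/2} eps*_t, regressor held fixed
    eps_star = rng.standard_normal((n_boot, len(y)))
    y_star = beta_hat * x + np.sqrt(omega_hat) * eps_star
    t_star = (y_star @ x) / Mn - beta_hat          # T*_n = beta_hat* - beta_hat
    return np.mean(t_star <= t_n)

n = 200
x = rng.standard_normal(n)
y = 1.0 * x + rng.standard_normal(n)               # true beta = 1
p = bootstrap_pvalue(y, x, beta0=1.0)
```

Conditionally on the data, T*_n is exactly N(0, ω̂_ε M_n^{-1}), so here F*_n is also available in closed form, F*_n(u) = Φ(ω̂_ε^{-1/2} M_n^{1/2} u); the Monte Carlo version is shown because it carries over to bootstrap schemes without a closed-form conditional distribution.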

Remark 2.1 A special case where the ensuing bootstrap inference is exact in finite samples, such that p*_n is uniformly distributed for finite n, obtains when the original ε_t's are N(0, ω_ε), independent of X_n := {x_t}_{t=1}^n, and ω_ε is known to the econometrician (hence ω̂_ε = ω_ε). Then the conditional distribution of T*_n given the data D_n and the distribution of the original statistic T_n conditional on the regressor X_n (equivalently, on the ancillary statistic M_n) are a.s. equal to each other and to the conditional distribution N(0, ω_ε M_n^{-1})|M_n. Put differently,

F*_n(u) := P(T*_n ≤ u|D_n) = P(T_n ≤ u|X_n) = Φ(ω_ε^{-1/2} M_n^{1/2} u), u ∈ R.

Then, as ω_ε^{-1/2} M_n^{1/2} T_n|M_n ~ N(0,1), it is straightforward that in this special case bootstrap inference is exact: p*_n = F*_n(T_n) = Φ(ω_ε^{-1/2} M_n^{1/2} T_n) =_d Φ(N(0,1)) ~ U(0,1), and that this result also holds conditionally on M_n: p*_n|M_n ~ U(0,1). □

Although bootstrap inference is not exact in general, it may still be asymptotically valid. To show this, we distinguish between the cases of a stationary and a non-stationary regressor x_t. It is the second case that anticipates the main results of the paper. We assume ω̂_ε →_p ω_ε throughout.

2.1.1 Classic bootstrap validity when the regressor is stationary

Suppose initially that {x_t}_{t∈N} is weakly stationary and n^{-1} M_n →_p M := E x_1² > 0. Define τ_n := n^{1/2}(β̂ − β) and τ*_n := n^{1/2}(β̂* − β̂); the bootstrap p-values based on (τ_n, τ*_n) and (T_n, T*_n) are identical. The distribution of the bootstrap statistic τ*_n conditional on the original data D_n satisfies

P*(τ*_n ≤ u) = Φ(n^{-1/2} ω̂_ε^{-1/2} M_n^{1/2} u) →_p Φ(ω_ε^{-1/2} M^{1/2} u), u ∈ R.   (2.3)

Hence, τ*_n →*_{w,p} τ ~ N(0, ω_ε M^{-1}) and the limit distribution is non-random. If the initial assumptions are strengthened such that a central limit theorem [CLT] holds for {x_t ε_t}_{t∈N}, that is, n^{-1/2} Σ_{t=1}^n x_t ε_t →_w N(0, ω_ε M), then it further holds that τ_n →_w τ ~ N(0, ω_ε M^{-1}). Hence, the bootstrap distribution of τ*_n consistently estimates the unconditional limit distribution of τ_n in the usual sense that sup_{u∈R} |P*(τ*_n ≤ u) − P(τ ≤ u)| →_p 0, by Polya's theorem. As the limit cdf is continuous, the p-value p*_n associated with (τ_n, τ*_n) is asymptotically uniformly distributed and (1.1) holds.
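In this stationary case the conditional bootstrap cdf in (2.3) is available in closed form, so the sup-distance to the limit cdf can be evaluated directly. A small numerical sketch (our own check, with ω_ε = M = 1 so that the limit cdf is standard normal):

```python
import numpy as np
from math import erf, sqrt

def Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

rng = np.random.default_rng(1)

def sup_distance(n, grid):
    """sup_u |P*(tau*_n <= u) - P(tau <= u)| for one simulated sample,
    using (2.3): P*(tau*_n <= u) = Phi(n^{-1/2} omega_hat^{-1/2} M_n^{1/2} u)."""
    x = rng.standard_normal(n)              # weakly stationary regressor, M = E x^2 = 1
    y = x + rng.standard_normal(n)          # beta = 1, omega_eps = 1
    beta_hat = (x @ y) / (x @ x)
    omega_hat = np.mean((y - beta_hat * x) ** 2)
    Mn = x @ x
    F_star = [Phi(sqrt(Mn / (n * omega_hat)) * u) for u in grid]
    F_lim = [Phi(u) for u in grid]          # limit cdf of tau: N(0, 1) since omega_eps = M = 1
    return max(abs(a - b) for a, b in zip(F_star, F_lim))

grid = np.linspace(-3.0, 3.0, 61)
d = sup_distance(5000, grid)                # shrinks as n grows, consistent with Polya's theorem
```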


2.1.2 Random limit bootstrap measures when the regressor is non-stationary

Suppose now that {x_t}_{t∈N} is such that, for some constant κ, n^{-κ} M_n →_w M, with M > 0 a.s. having a non-degenerate distribution. A well-known special case is that where x_t is a finite-variance random walk and κ = 2. Redefine τ_n := n^{κ/2}(β̂ − β) and τ*_n := n^{κ/2}(β̂* − β̂); bootstrap p-values remain unchanged. Now the bootstrap distribution of τ*_n, conditional on the data, remains random in the limit. Specifically, by the continuous mapping theorem [CMT],

P*(τ*_n ≤ u) = Φ(n^{-κ/2} ω̂_ε^{-1/2} M_n^{1/2} u) →_w Φ(ω_ε^{-1/2} M^{1/2} u), u ∈ R,   (2.4)

which is a random cdf. In terms of weak convergence in distribution, this amounts to

τ*_n →*_{w,w} N(0, ω_ε M^{-1})|M.   (2.5)

As a result, with τ*_n and M generally defined on different probability spaces, weak convergence in probability of τ*_n does not occur. Moreover, whatever the (unconditional) limit distribution of τ_n is, provided that it exists, P(τ_n ≤ u), u ∈ R, will tend to a deterministic cdf. Therefore, the bootstrap cannot estimate consistently the limit distribution of τ_n and it cannot hold that sup_{u∈R} |P*(τ*_n ≤ u) − P(τ ≤ u)| →_p 0. Nevertheless, bootstrap inference need not become meaningless, as it may even be exact (see Remark 2.1). We proceed, therefore, to identify in what sense bootstrap inference could remain meaningful.
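To visualize the randomness in (2.4)-(2.5), one can simulate the conditional bootstrap cdf at a fixed point u across independent data realizations: unlike in the stationary case, its dispersion does not shrink as n grows. A sketch under the random-walk special case (κ = 2, error variance set to 1 for clarity; our own illustration):

```python
import numpy as np
from math import erf, sqrt

def Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

rng = np.random.default_rng(2)

def boot_cdf_at(u, n):
    """P*(tau*_n <= u) = Phi(n^{-1} M_n^{1/2} u) for a random-walk regressor (kappa = 2),
    taking omega_hat = omega_eps = 1."""
    x = np.cumsum(rng.standard_normal(n))   # x_t = sum_{s<=t} eta_s: finite-variance random walk
    Mn = x @ x
    return Phi(sqrt(Mn) / n * u)

n = 2000
draws = np.array([boot_cdf_at(1.0, n) for _ in range(500)])
spread = draws.std()                        # stays bounded away from 0: the limit cdf is random
```

Each draw approximates one realization of the random limit cdf Φ(M^{1/2} u) with M = ∫B_η²; the persistent spread across realizations is exactly the limiting randomness discussed in the text.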

2.2 Bootstrap validity

Within the framework of the linear regression model, we discuss two concepts of bootstrap validity in the case of a random limit bootstrap measure. These are employed to interpret the bootstrap as a tool for unconditional or conditional inference.

2.2.1 Unconditional bootstrap validity

Under the assumption in Section 2.1.2, consider the random-walk special case, where x_t := Σ_{s=1}^t η_s with e_t := (ε_t, η_t)' forming a stationary, ergodic and conditionally homoskedastic martingale difference sequence [mds] with p.d. variance matrix Ω := diag{ω_ε, ω_η}.² Then, for β ≠ 0 eq. (2.1) is an instance of a cointegration regression. It holds that (n^{-1/2} Σ_{t=1}^{⌊n·⌋} e_t', n^{-1} Σ_{t=1}^n x_{t-1} ε_t) →_w (B_ε, B_η, ∫B_η dB_ε) in D_2 × R, where (B_ε, B_η)' is a bivariate Brownian motion with covariance matrix Ω; see Theorem 2.4 of Chan and Wei (1988). Moreover, n^{-2} M_n →_w M := ∫B_η² by the CMT, jointly with the convergence to a stochastic integral above, so that the assumption in Section 2.1.2 holds with κ = 2 and

τ_n := n(β̂ − β) →_w (∫B_η²)^{-1} ∫B_η dB_ε ~ N(0, ω_ε M^{-1}),   (2.6)

²Non-diagonal Ω could be handled by augmenting the estimated regression with Δx_t, leading to no qualitative differences from the case of diagonal Ω.


the limit being (by independence of B_η and B_ε) a variance mixture of normals, with mixing variable M^{-1} and cdf u ↦ ∫ Φ(ω_ε^{-1/2} M^{1/2} u) dP(M).

A comparison between the limit distributions of τ*_n and τ_n, resp. in (2.5) and (2.6), shows that the bootstrap mimics a component of the mixture limit distribution of τ_n, since the limit distribution of τ_n can be recovered by integrating over M the conditional limit distribution of τ*_n given the data. This turns out to be sufficient for unconditional bootstrap validity in the sense of eq. (1.1). A direct argument is as follows: the bootstrap p-value p*_n := P*(τ*_n ≤ τ_n) satisfies, by the CMT,

p*_n = Φ(ω̂_ε^{-1/2} M_n^{1/2} (β̂ − β)) →_w Φ((ω_ε ∫B_η²)^{-1/2} ∫B_η dB_ε) =_d Φ(N(0,1)) ~ U(0,1).   (2.7)

Thus, when inference on β is based on the distribution of τ*_n conditional on the data, the large-sample frequency of wrong inferences can be controlled.
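The closed form in (2.7) makes it easy to probe (1.1) by simulation: across many samples from the cointegration DGP, the bootstrap p-values should be approximately U(0,1) even though the limit bootstrap measure is random. A Monte Carlo sketch (sample size, replication count and Gaussian shocks are our choices):

```python
import numpy as np
from math import erf, sqrt

def Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

rng = np.random.default_rng(3)

def one_pvalue(n, beta=1.0):
    """p*_n = Phi(omega_hat^{-1/2} M_n^{1/2} (beta_hat - beta)), as in eq. (2.7)."""
    eta = rng.standard_normal(n)
    eps = rng.standard_normal(n)            # shocks independent of eta: diagonal Omega
    x = np.cumsum(eta)                      # random-walk regressor
    y = beta * x + eps
    Mn = x @ x
    beta_hat = (x @ y) / Mn
    omega_hat = np.mean((y - beta_hat * x) ** 2)
    return Phi(sqrt(Mn / omega_hat) * (beta_hat - beta))

pvals = np.array([one_pvalue(400) for _ in range(2000)])
freq = np.mean(pvals <= 0.10)               # should be close to the nominal level 0.10
```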

2.2.2 Conditional bootstrap validity

In the case of unconditional bootstrap validity, it may be possible to find an interpretation of bootstrap inference as also valid in the sense of (1.2), i.e. conditionally on some X_n defined on the probability space of the original data D_n (for instance, but not necessarily, the regressor X_n := {x_t}_{t=1}^n).

In the linear regression case considered here, conditional bootstrap validity with respect to the regressor X_n can be obtained under a tightening of our previous assumptions such that the invariance principle n^{-1/2} Σ_{t=1}^{⌊n·⌋} e_t →_w (B_ε, B_η)' holds conditionally (on X_n for finite n and on B_η in the limit, in the sense of weak convergence in distribution). A sufficient condition for the conditional invariance principle is that, additionally to the assumptions on e_t in Section 2.2.1, ε_t is an mds with respect to G_t = σ({ε_s}_{s=-∞}^t ∪ {η_s}_{s∈Z}), and that n^{-1} Σ_{t=1}^n E(ε_t²|{η_s}_{s∈Z}) → ω_ε a.s. (see the proof of Theorem 2 in Rubshtein, 1996). Then, by using Theorem 3 of Georgiev et al. (2019), it follows that

τ_n|X_n →_{w,w} N(0, ω_ε M^{-1})|M,

which compared to (2.5) shows that the distribution of τ*_n conditional on the data estimates consistently the random limit distribution of τ_n conditional on the regressor X_n. This fact is stated more precisely in Remark 3.9, where it is concluded that p*_n|X_n →_{w,p} U(0,1), i.e., the bootstrap is valid conditionally on the regressor.

2.2.3 A numerical illustration

The result in Section 2.2.2 implies that unconditional bootstrap validity can sometimes be established by means of a conditioning argument; for example, by showing validity conditional on the regressor X_n. To illustrate, in Figure 1, panels (i) and (ii), we summarize for two different data generating processes [DGPs] the cdf's of p*_n|X_n across M = 1,000 independent realizations of X_n for samples of size n = 10 (upper panels) and n = 1,000 (lower panels). Specifically, the DGP used for panel (i) is based on i.i.d. shocks, while the one for (ii) features ARCH-type shocks (details are reported in Section S.5).


Figure 1: Fan chart of the simulated cdf's (conditional on X_n) of the bootstrap p-values for the three DGPs (i)-(iii) and n = 10 (upper panels), 1000 (lower panels).

In both cases, the conditions of Section 2.2.2 are satisfied. For both DGPs, the conditional cdf's of p*_n given X_n are, as expected, close to the 45° line, which corresponds to the implied asymptotic U(0,1) distribution. Unconditional validity follows accordingly.

Nevertheless, unconditional validity may also hold without validity conditional on an apparently "natural" conditioning variable X_n, like the regressor in a fixed-regressor bootstrap design. For instance, suppose that for the DGP in Sections 2.2.1 and 2.2.2 it holds that η_t = ξ_t(1 + I{ε_t < 0}), with {ε_t} and {ξ_t} two independent i.i.d. sequences of zero-mean, unit-variance rv's. Since η_t is informative about the sign of ε_t, the ε_t's conditionally on their own past and the regressor X_n do not form an mds. It is shown in Section S.3, eq. (S.9), that this endogeneity, not replicated in the bootstrap world, induces the original statistic τ_n to satisfy

τ_n|X_n →_{w,w} M^{-1/2}(ω_{ε|η}^{1/2} ζ_1 + (1 − ω_{ε|η})^{1/2} ζ_2) | (M, ζ_2),   (2.8)

where ω_{ε|η} := E{Var(ε_s|η_s)} ∈ (0,1), and M, ζ_1, ζ_2 are jointly independent with ζ_i ~ N(0,1), i = 1, 2. The limit in (2.8) contains more randomness (through ζ_2) than the bootstrap limit in eq. (2.5), thus resulting in a random limit for the distribution of the bootstrap p-value p*_n conditional on X_n; see panel (iii) of Figure 1, where for this DGP the cdf's of p*_n|X_n are reported for 1,000 realizations of X_n. These cdf's display substantial dispersion around the 45° line, and this feature does not vanish as n increases. However, and in agreement with the earlier discussion, their unconditional average (plotted in black) is very close to the 45° line, showing indeed unconditional validity of the bootstrap. This follows because e_t := (ε_t, η_t)' is a zero-mean i.i.d. sequence with a diagonal covariance matrix and p*_n →_w U(0,1) as derived in Section 2.2.1.

Remark 2.2 Although not valid conditionally on the regressor X_n, in the previous example the bootstrap may be valid conditionally on a non-trivial function of the regressor. See, in particular, Section 3.3 and Remark 3.10 therein. □
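A Monte Carlo sketch of the endogenous DGP above may help fix ideas. The implementation below is our own (Gaussian ε_t and ν_t, a t-type statistic for β = 0, and a fixed-regressor wild bootstrap); it is an illustration of the mechanism, not the paper's exact simulation design:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_dgp_iii(n):
    # eps_t and nu_t: independent i.i.d. zero-mean, unit-variance sequences;
    # eta_t = nu_t * (1 + 1{eps_t < 0}) is informative about the sign of eps_t.
    eps = rng.standard_normal(n)
    nu = rng.standard_normal(n)
    eta = nu * (1.0 + (eps < 0))
    x = np.concatenate(([0.0], np.cumsum(eta)[:-1]))  # random-walk regressor
    y = eps                                           # beta = 0 under the null
    return y, x

def t_stat(y, x):
    # t-type statistic for beta = 0 in the regression of y on x
    Mn = np.sum(x**2)
    b = np.sum(x * y) / Mn
    resid = y - b * x
    return b * np.sqrt(Mn / np.mean(resid**2))

def fixed_regressor_wild_p_value(y, x, B=199):
    # wild bootstrap keeping x fixed: y* = resid * w*, with w* i.i.d. N(0,1)
    tau = abs(t_stat(y, x))
    b = np.sum(x * y) / np.sum(x**2)
    resid = y - b * x
    taus = np.array([abs(t_stat(resid * rng.standard_normal(len(y)), x))
                     for _ in range(B)])
    return np.mean(taus >= tau)

y, x = simulate_dgp_iii(500)
p = fixed_regressor_wild_p_value(y, x)
print(p)
```

Repeating this over many samples and plotting, for each realization of the regressor, the conditional cdf of p would reproduce the dispersion pattern of panel (iii) of Figure 1.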


3 Main results

We provide general conditions for bootstrap validity in cases where a bootstrap statistic, conditionally on the data, possesses a random limit distribution. Before all else, we formally distinguish between two concepts of bootstrap validity.

3.1 Definitions

The following definition employs the bootstrap p-value as a summary indicator of the accuracy of bootstrap inferences (see also Remarks 3.2 and 3.3 below). The original and the bootstrap statistics are denoted by τ_n and τ*_n, respectively.

Definition 1 Let τ_n := τ_n(D_n) and τ*_n := τ*_n(D_n, W*_n), n ∈ ℕ, where D_n denotes the data whereas W*_n are auxiliary variates defined jointly with D_n on a possibly extended probability space. Let p*_n := P(τ*_n ≤ τ_n | D_n) be the bootstrap p-value.

We say that the bootstrap based on τ_n and τ*_n is valid unconditionally if p*_n is asymptotically U(0,1) distributed:

P(p*_n ≤ q) → q for all q ∈ (0,1),   (3.1)

where P(·) denotes probability w.r.t. the distribution of D_n.

Let further X_n be a random element defined on the probability space of D_n and W*_n. We say that the bootstrap based on τ_n and τ*_n is valid conditionally on X_n if p*_n is asymptotically U(0,1) distributed conditionally on X_n:

P(p*_n ≤ q | X_n) →^p q for all q ∈ (0,1),   (3.2)

where P(·|X_n) is determined up to a.s. equivalence by the distribution of (D_n, X_n).

Remark 3.1 Bootstrap validity conditionally on some X_n implies unconditional validity, by the dominated convergence theorem. In applications, therefore, the discussion of conditional validity may represent an intermediate step in assessing unconditional validity.

Remark 3.2 The validity properties in Definition 1 ensure correct asymptotic null rejection probability, unconditionally or conditionally on some X_n, for bootstrap hypothesis tests which reject the null when the bootstrap p-value p*_n does not exceed a chosen nominal level, say α ∈ (0,1). If P(τ*_n ≤ ·|D_n) converges weakly in D(ℝ) to a sample-path continuous random cdf, then correct asymptotic null rejection probability is ensured also for bootstrap tests rejecting the null hypothesis when ~p*_n := P(τ*_n ≥ τ_n|D_n) ≤ α (for an application, see Section 4.3).

Remark 3.3 Validity as in Definition 1 also has implications for the properties of bootstrap (percentile) confidence sets. Suppose, for instance, that T_n is an estimator of a (scalar) population parameter, whose true value is denoted by θ_0, and assume for simplicity that τ_n is of the form τ_n = σ(n)(T_n − θ_0), where σ(n) is a normalizing factor such that τ_n has a non-degenerate limiting distribution (see Horowitz, 2001, p. 3174). Its bootstrap analogue is denoted by τ*_n, and we assume that the bootstrap is valid in the sense of (3.1). Interest is in constructing a right-sided confidence interval for θ_0, with (asymptotic) coverage 1−α ∈ (0,1), using a simple bootstrap percentile method. With F*_n(x) := P(τ*_n ≤ x|D_n), let q*_n(1−α) := inf{x ∈ ℝ : F*_n(x) ≥ 1−α} be the (1−α) quantile of the bootstrap distribution F*_n. Then it is straightforward to show that, if F*_n converges weakly to a sample-path continuous random cdf, then

P(τ_n ≤ q*_n(1−α)) = P(p*_n ≤ 1−α) + o(1) → 1−α.

This implies that a confidence interval of the form [T_n − σ(n)^{−1} q*_n(1−α), +∞) has (unconditional) asymptotic coverage probability 1−α. If the bootstrap is valid conditionally on some X_n, as in (3.2), then the (asymptotic) coverage is 1−α also conditionally on this X_n. □
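In code, a right-sided percentile interval of this kind can be sketched as follows; the numbers (T_n, σ(n) = √n, standard-normal bootstrap draws) are hypothetical placeholders of ours, used only to illustrate the construction:

```python
import numpy as np

rng = np.random.default_rng(1)

def right_sided_percentile_lower_bound(Tn, boot_stats, scale, alpha=0.05):
    """Lower endpoint of [Tn - scale**-1 * q*(1-alpha), +inf), where
    q*(1-alpha) approximates inf{x : F*_n(x) >= 1-alpha} from bootstrap draws."""
    q = np.quantile(boot_stats, 1.0 - alpha)
    return Tn - q / scale

# hypothetical values: Tn estimates theta_0; tau*_n draws taken as N(0,1)
n = 400
Tn = 1.23
boot_stats = rng.standard_normal(999)   # B = 999 draws of tau*_n given the data
lower = right_sided_percentile_lower_bound(Tn, boot_stats, scale=np.sqrt(n))
print(lower)
```

The interval [lower, +∞) then has asymptotic coverage 1−α whenever (3.1) holds and F*_n has a sample-path continuous weak limit.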

Our main results make extensive use of joint weak convergence in distribution. Should the notation not be self-explanatory, we refer to Appendix A for the formal definitions.

3.2 Unconditional bootstrap validity

The unconditional validity results in this section have in common the requirement, explicit or implicit, that the unconditional limit distribution of τ_n should be an average of the random limit distribution of τ*_n given the data. Applications of Theorem 3.1 do not require a conditional analysis of τ_n, in contrast to applications of Theorem 3.2.

Theorem 3.1 Let there exist a rv τ and a random element X, both defined on the same probability space, such that (τ_n, F*_n) →^w (τ, F) in ℝ × D(ℝ) for F*_n(u) := P(τ*_n ≤ u|D_n) and F(u) := P(τ ≤ u|X), u ∈ ℝ. If the (possibly) random cdf F is sample-path continuous, then the bootstrap based on τ_n and τ*_n is valid unconditionally.

A trivial special case of Theorem 3.1 is obtained for independent τ and X. In this case the bootstrap cdf of τ*_n estimates consistently the limiting unconditional cdf of τ_n and the bootstrap is valid in the usual sense.

In contrast, when the limit of the bootstrap cdf F*_n is random (and even if it is continuous), the separate convergence facts τ_n →^w τ and F*_n(·) →^w F(·) = P(τ ≤ ·|X) (or τ*_n →^{w*}_w τ|X) are not sufficient for unconditional bootstrap validity. Some remarks on the joint convergence (τ_n, F*_n) →^w (τ, F) are, hence, in order. A further strategy for proving it is outlined in Section 4.2.

Remark 3.4 An important special case of Theorem 3.1 involves stable convergence of the original statistic τ_n (see Häusler and Luschgy, 2015, p. 33, for a definition). With the notation of Theorem 3.1, let the data D_n and the random element X be defined on the same probability space, whereas the rv τ is defined on an extension of this probability space. Assume that τ_n → τ σ(X)-stably and F*_n →^p F := P(τ ≤ ·|X). Then (τ_n, F*_n) →^w (τ, F) by Theorem 3.7(b) of Häusler and Luschgy (2015). For instance, in the statistical literature on integrated volatility, a result of the form τ_n → τ stably is contained in Theorem 3.1 of Jacod et al. (2009) for τ_n defined as a t-type statistic for integrated volatility, whereas the corresponding F*_n →^p F result is established in Theorem 3.1 of Hounyo, Gonçalves and Meddahi (2017) for a combined wild and blocks-of-blocks bootstrap introduced in the latter paper.

Remark 3.5 More generally, if τ*_n →^{w*}_w τ*|X and (τ*_n, τ_n, X_n) →^w (τ*, τ, X) for some D_n-measurable X_n (n ∈ ℕ), where the conditional distributions τ*|X and τ|X are equal a.s. and have a sample-path continuous conditional cdf F, then (τ_n, F*_n) →^w (τ, F); see Appendix B for additional details. □

Alternatively, unconditional bootstrap validity could be established by means of an auxiliary conditional analysis of the original statistic τ_n. In the next theorem the conditioning sequence X_n is chosen such that the bootstrap statistic τ*_n depends on the data D_n approximately through X_n (condition (†)). Then, the main requirement for bootstrap validity is that the limit bootstrap distribution should be a conditional average of the limit distribution of τ_n given X_n.

Theorem 3.2 With the notation of Definition 1, let X_n be D_n-measurable (n ∈ ℕ). Let it hold that

(P(τ_n ≤ ·|X_n), P(τ*_n ≤ ·|D_n)) →^w (F, F*)   (3.3)

in D(ℝ) × D(ℝ), where F and F* are sample-path continuous random cdf's, and let

(†) there exist random elements X′, X′_n such that F* is X′-measurable, X′_n are X_n-measurable and X′_n →^w X′ jointly with (3.3).

Then, if E{F(·)|F*} = F*(·), the bootstrap based on τ_n and τ*_n is valid unconditionally.

Remark 3.6 Under condition (†) of Theorem 3.2, P(τ*_n ≤ ·|X_n) and P(τ*_n ≤ ·|D_n) are both close to P(τ*_n ≤ ·|X′_n), and in this sense τ*_n depends on the data D_n approximately through X_n. Condition (†) is trivially satisfied in the case F = F* with the choice X′_n = P(τ_n ≤ ·|X_n). It is also satisfied with X′_n = P(~τ*_n ≤ ·|X_n) if ~τ*_n is some measurable transformation of X_n and W*_n such that τ*_n = ~τ*_n + o_p(1) w.r.t. the probability measure on the space where D_n and W*_n are jointly defined; see Appendix B. □

Convergence (3.3) could be deduced from the weak convergence of the conditional distributions of τ_n and τ*_n, as in the next corollary.

Corollary 3.1 Let D_n and X_n (n ∈ ℕ) be as in Theorem 3.2. Let the rv τ and the random elements X, X′ be defined on a single probability space and

(τ_n|X_n, τ*_n|D_n) →^w_w (τ|X, τ|X′)   (3.4)

in the sense of eq. (A.1). Let further F(u) := P(τ ≤ u|X) and F*(u) := P(τ ≤ u|X′), u ∈ ℝ, define sample-path continuous random cdf's. Then convergence (3.3) holds. Moreover, the bootstrap based on τ_n and τ*_n is valid unconditionally provided that one of the following extra conditions holds:

(a) X′ = X;

(b) X = (X′, X″) and X′_n →^w X′ jointly with (3.4) for some X_n-measurable random elements X′_n.

Remark 3.7 An instance of (3.3) where F and F* are not a.s. equal is provided by DGP (iii) of Section 2.2.3. There (3.4) holds with τ := M^{−1/2}(ω_{ε|η}^{1/2} ξ_1 + (1 − ω_{ε|η})^{1/2} ξ_2) and X = (X′, X″) = (M, (1 − ω_{ε|η})^{1/2} ξ_2). Moreover, (3.4) is joint with the convergence X′_n →^w X′ for X′_n = n^{−2}M_n (see Appendix B). Hence, Corollary 3.1(b) implies that the bootstrap is unconditionally valid, as was already concluded in Section 2.2.1. □


3.3 Conditional bootstrap validity

Theorem 3.3 below states the asymptotic behavior of the bootstrap p-value conditional on an X_n chosen to satisfy condition (†) of Theorem 3.2. It also characterizes the cases where the bootstrap is valid conditionally on such an X_n. Should validity conditional on such an X_n fail, Corollary 3.2(b) provides a result for validity conditional on a transformation of it.

Theorem 3.3 Under the conditions of Theorem 3.2, the bootstrap p-value p*_n satisfies

P(p*_n ≤ q|X_n) →^w F(F*^{−1}(q))   (3.5)

for almost all q ∈ (0,1), and the bootstrap based on τ_n and τ*_n is valid conditionally on X_n if and only if F = F*, in which case

sup_{u∈ℝ} |P(τ_n ≤ u|X_n) − P(τ*_n ≤ u|D_n)| →^p 0.   (3.6)

Remark 3.8 Convergence (3.6) means that the bootstrap distribution of τ*_n consistently estimates the limit of the conditional distribution of τ_n given X_n. Although under condition (†) the proximity of P(τ_n ≤ ·|X_n) and P(τ*_n ≤ ·|D_n) is necessary for bootstrap validity conditional on X_n, no such proximity is necessary for conditional validity in the general case. In fact, validity conditional on some X_n implies validity conditional on any measurable transformation X′_n of X_n, and an analogue of (3.6) with X′_n in place of X_n cannot generally hold for all such transformations unless F* is non-random. This is similar to what happens with unconditional bootstrap validity which, according to Theorem 3.1, may occur even if P(τ_n ≤ ·) and P(τ*_n ≤ ·|D_n) are not close to each other. □

A corollary in terms of weak convergence in distribution is given next.

Corollary 3.2 Let D_n, X_n (n ∈ ℕ), τ, F, F* be as in Corollary 3.1. Let (3.4) hold and F, F* be sample-path continuous random cdf's. Then:

(a) If X′ = X, the bootstrap based on τ_n and τ*_n is valid conditionally on X_n and (3.6) holds.

(b) If X = (X′, X″), (X′_n, X″_n) →^w (X′, X″) jointly with (3.4) for some X_n-measurable random elements (X′_n, X″_n), and X″_n|X′_n →^w_w X″|X′, then the bootstrap is valid conditionally on X′_n and (3.6) holds with X_n replaced by X′_n.

Remark 3.9 Consider the linear regression example under the extra assumptions of Section 2.2.2 and set τ = (∫B_η²)^{−1} ∫B_η dB_ε, X = M. It then follows (by using Theorem 3 of Georgiev et al., 2019) that condition (3.4) holds in the form

(τ_n|X_n, τ*_n|D_n) →^w_w (τ|B_η, τ|B_η) = (1,1)′ N(0, ω_ε M^{−1}) | M a.s.,   (3.7)

where X_n := {x_t}_{t=1}^n; equivalently, (3.3) holds with F = F* = Φ(ω_ε^{−1/2} M^{1/2}(·)). Hence, the bootstrap is consistent for the limit distribution of τ_n conditional on the regressor and, by Corollary 3.2(a), the bootstrap is valid conditionally on the regressor.


Remark 3.10 For DGP (iii) of Section 2.2.3, (3.4) holds with τ and X = (X′, X″) given in Remark 3.7. Moreover, (3.4) is joint with the convergence (X′_n, X″_n) →^w (X′, X″) for X′_n = n^{−2}M_n and X″_n = M_n^{−1/2} Σ_{t=1}^n x_t E(ε_t|η_t) (see Appendix B). By Corollary 3.2(b), the bootstrap would be valid conditionally on M_n if it additionally held that X″_n|M_n →^w_w (1 − ω_{ε|η})^{1/2} ξ_2 | M = N(0, 1 − ω_{ε|η}) a.s. □

Strategies for checking the convergence in (3.4) are outlined in Sections 4.2 and 4.3.

4 Applications

4.1 A permutation CUSUM test under infinite variance

Consider a standard CUSUM test for the null hypothesis (say, H_0) that {ε_t}_{t=1}^n is a sequence of i.i.d. random variables. The test statistic is of the form

τ_n := ρ_n^{−1} max_{t=1,…,n} | Σ_{i=1}^t (ε_i − ε̄_n) |,  ε̄_n := n^{−1} Σ_{t=1}^n ε_t,

where ρ_n is a permutation-invariant normalization sequence. Standard choices are ρ_n² = Σ_{t=1}^n (ε_t − ε̄_n)² in the case where Eε_t² < ∞, and ρ_n = max_{t=1,…,n} |ε_t| when Eε_t² = ∞. If ε_t is in the domain of attraction of a strictly α-stable law with α ∈ (0,2), such that Eε_t² = ∞, the asymptotic distribution of τ_n depends on unknown parameters (e.g., the characteristic exponent α), which makes the test difficult to apply (see also Politis, Romano and Wolf, 1999, and the references therein). To overcome this problem, Aue et al. (2008) consider a permutation analogue of τ_n, defined as

τ*_n := ρ_n^{−1} max_{t=1,…,n} | Σ_{i=1}^t (ε_{π(i)} − ε̄_n) |,

where π is a (uniformly distributed) random permutation of {1, 2, …, n}, independent of the data.³ In terms of Definition 1, the data is D_n := {ε_t}_{t=1}^n and the auxiliary "bootstrap" variate is W*_n := π. With X_n := {ε_{(t)}}_{t=1}^n denoting the vector of order statistics of {ε_t}_{t=1}^n, there exists a random permutation ϖ of {1, …, n} (under H_0, uniformly distributed conditionally on X_n) for which it holds that ε_t = ε_{(ϖ(t))} (t = 1, …, n), whereas the "bootstrap" sample is {ε_{π(t)}}_{t=1}^n. The results in Aue et al. (2008, Corollary 2.1, Theorem 2.4) imply that, if H_0 holds and ε_t is in the domain of attraction of a strictly α-stable law with α ∈ (0,2), then τ_n →^w τ_α(S) and τ*_n →^{w*}_w τ_α(S)|S for a certain random function τ_α(·) and S = (S_1, S_2)′, with S_i = {S_ij}_{j=1}^∞ (i = 1, 2) being partial sums of sequences of i.i.d. standard exponential rv's, and with the function τ_α(·) independent of S.⁴

Aue et al. (2008) do not report the fact that inference is not invalidated by the failure of the permutation procedure to estimate consistently the distribution of τ_α(S).

³The normalization of τ_n is only of theoretical importance, for obtaining non-degenerate limit distributions. In practice, any bootstrap procedure comparing τ_n to the quantiles of τ*_n is invariant to the choice of ρ_n and can be implemented by setting ρ_n = 1.

⁴To avoid centering terms, Aue et al. (2008) additionally assume that the location parameter of the limit stable law is zero when α ∈ [1,2). Moreover, although they provide conditional convergence results only for the finite-dimensional distributions of the CUSUM process, these could be strengthened to conditional functional convergence as in Proposition 1 of LePage et al. (1997) in order to obtain the conditional convergence of τ*_n.


In fact, the situation is similar to that of Remark 2.1, as the conditional distributions τ_n|X_n and τ*_n|D_n coincide a.s. under H_0. As a consequence, under H_0 the permutation test implements exact⁵ finite-sample inference conditional on X_n and, additionally, the distribution of τ*_n given the data estimates consistently the limit of the conditional distribution τ_n|X_n, in the sense of joint weak convergence in distribution (see eq. (A.1)):

(τ_n|X_n, τ*_n|D_n)′ →^w_w (τ_α(S)|S, τ_α(S)|S).   (4.1)
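A minimal sketch of the permutation CUSUM procedure may be useful. The choices below are ours for illustration: ρ_n = 1 (as footnote 3 permits), heavy-tailed t(1.5) errors as an example of an infinite-variance distribution, and a finite number B of random permutations in place of the full permutation distribution:

```python
import numpy as np

rng = np.random.default_rng(2)

def cusum_stat(e):
    # max_t |sum_{i<=t} (e_i - mean(e))|, with normalization rho_n = 1
    return np.max(np.abs(np.cumsum(e - e.mean())))

def permutation_p_value(e, B=499):
    tau = cusum_stat(e)
    taus = np.array([cusum_stat(rng.permutation(e)) for _ in range(B)])
    # finite-B permutation p-value (exact conditional inference as B grows)
    return (1 + np.sum(taus >= tau)) / (1 + B)

# i.i.d. errors with infinite variance (in the domain of attraction of a stable law)
e = rng.standard_t(df=1.5, size=300)
p = permutation_p_value(e)
print(p)
```

Because the permuted samples share the order statistics X_n with the original sample, the procedure is conditionally exact under H_0 for any error distribution, even though the limit distribution τ_α(S)|S is random.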

CUSUM tests can also be applied to residuals from an estimated model in order to test for correct model specification or stability of the parameters (see, e.g., Ploberger and Krämer, 1992). Consider thus the case where {ε_t}_{t=1}^n are the disturbances in a statistical model (e.g., the regression model of Section 2), and we observe residuals ε̂_t obtained upon estimation of the model using a sample D_n not containing the unobservable {ε_t}_{t=1}^n. The residual-based CUSUM statistic is τ̂_n := ρ̂_n^{−1} max_{t=1,…,n} | Σ_{i=1}^t (ε̂_i − ε̄̂_n) |, where ρ̂_n and ε̄̂_n are the analogues of ρ_n and ε̄_n computed from ε̂_t instead of ε_t. The bootstrap statistic could be defined as τ̂*_n := ρ̂_n^{−1} max_{t=1,…,n} | Σ_{i=1}^t (ε̂_{π(i)} − ε̄̂_n) |. If τ̂_n − τ_n →^p 0 and (τ̂*_n − τ*_n)|D_n →^{w*}_p 0 under H_0 (e.g., due to consistent parameter estimation), then also (τ̂_n − τ_n)|X_n →^{w}_p 0, such that the (Lévy) distances between the pairs of conditional distributions τ̂_n|X_n and τ_n|X_n on the one hand, and τ̂*_n|D_n and τ*_n|D_n on the other hand, converge in probability to zero. Hence, in view of (4.1), and under the conjecture that P(τ_α(S) ≤ ·|S) defines a sample-path continuous cdf, the residual-based permutation procedure is consistent in the sense that

(τ̂_n|X_n, τ̂*_n|D_n) →^w_w (τ_α(S)|S, τ_α(S)|S)   (4.2)

for X_n := {ε_{(t)}}_{t=1}^n again. It follows that: (i) the permutation residual-based test is valid conditionally on X_n, by Corollary 3.2(a) with condition (3.4) taking the form (4.2); (ii) this test is valid unconditionally, as a result of either the validity conditional on X_n, or Corollary 3.1.

4.2 A parametric bootstrap goodness-of-fit test

The parametric bootstrap is a standard technique for approximating a conditional distribution of goodness-of-fit test statistics (Andrews, 1997; Lockhart, 2012). When these statistics are discussed in the i.i.d. finite-variance setting, the limit of the bootstrap distribution is non-random. However, returning to relation (2.1), there exist relevant settings where a random limit of the normalized M_n implies that parametrically bootstrapped goodness-of-fit test statistics have random limit distributions.

4.2.1 Set up and a random limit bootstrap measure

Let the null hypothesis of interest, say H_0, be that the standardized errors ω_ε^{−1/2} ε_t in (2.1) have a certain known density f with mean 0 and variance 1. For expositional ease we assume that ω_ε = 1 and is known to the econometrician. Then the Kolmogorov–Smirnov statistic based on OLS residuals ε̂_t is

⁵By "exact" we mean inference with respect to the true finite-sample (conditional) distribution of the test statistic.


τ_n := n^{1/2} sup_{s∈ℝ} | n^{−1} Σ_{t=1}^n I_{ε̂_t ≤ s} − ∫_{−∞}^s f |.

A (parametric) bootstrap counterpart, τ*_n, of τ_n could be constructed under H_0 by (i) drawing {ε*_t}_{t=1}^n as i.i.d. from f, independent of the data; (ii) regressing them on x_t, thus obtaining an estimator β̂* and associated residuals ε̂*_t; and (iii) calculating τ*_n as

τ*_n := n^{1/2} sup_{s∈ℝ} | n^{−1} Σ_{t=1}^n I_{ε̂*_t ≤ s} − ∫_{−∞}^s f |.
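Steps (i)–(iii) can be sketched in code. The following minimal implementation is ours (scalar regressor without intercept, f = φ, and the Kolmogorov–Smirnov supremum evaluated at the order statistics, where it is attained); it illustrates the construction, not the paper's simulation design:

```python
import numpy as np
from math import erf

rng = np.random.default_rng(3)

def norm_cdf(z):
    # standard normal cdf Phi, vectorized via math.erf
    return 0.5 * (1.0 + np.vectorize(erf)(np.asarray(z, float) / np.sqrt(2.0)))

def ks_stat(resid):
    # n^{1/2} * sup_s |empirical cdf of residuals - Phi(s)|
    n = len(resid)
    u = norm_cdf(np.sort(resid))
    i = np.arange(1, n + 1)
    return np.sqrt(n) * np.max(np.maximum(i / n - u, u - (i - 1) / n))

def ols_resid(y, x):
    b = np.sum(x * y) / np.sum(x**2)
    return y - b * x

def parametric_bootstrap_p_value(y, x, B=199):
    tau = ks_stat(ols_resid(y, x))
    taus = np.empty(B)
    for j in range(B):
        eps_star = rng.standard_normal(len(y))  # (i) draw eps* i.i.d. from f = phi
        resid_star = ols_resid(eps_star, x)     # (ii) regress on x_t, keep residuals
        taus[j] = ks_stat(resid_star)           # (iii) bootstrap KS statistic
    return np.mean(taus >= tau)

n = 200
x = np.cumsum(rng.standard_normal(n))  # nonstationary regressor
y = 0.5 * x + rng.standard_normal(n)   # H0 true: standardized errors are N(0,1)
p = parametric_bootstrap_p_value(y, x)
print(p)
```

With a nonstationary regressor, as here, the conditional distribution of the bootstrap statistic has the random limit derived below.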

To see that the distribution of the bootstrap statistic τ*_n conditional on the data D_n := {x_t, y_t}_{t=1}^n may have a random limit, consider the Gaussian case, f = φ. Under the assumptions of Johansen and Nielsen (2016, Sec. 4.1–4.2), it holds (ibidem) that τ*_n = τ̃*_n + o_p(1) under the product probability on the product probability space where the data and {ε*_t} are jointly defined, with

τ̃*_n := sup_{s∈[0,1]} | n^{−1/2} Σ_{t=1}^n (I_{ε*_t ≤ q(s)} − s) + φ(q(s)) β̂* n^{−1/2} Σ_{t=1}^n x_t |,   (4.3)

where q(s) = Φ^{−1}(s) is the s-th quantile of Φ. The expansion of τ*_n holds also conditionally on the data, i.e., τ*_n − τ̃*_n →^{w*}_p 0, since convergence in probability to a constant is preserved upon such conditioning. Hence, if τ̃*_n|D_n converges to a random limit, so does τ*_n|D_n, with the same limit. Assume that X_n := n^{−δ/2} x_{⌊n·⌋} →^w U in D for some δ > 0 and that M := ∫U² > 0 a.s. (e.g., U = B_η if x_t = Σ_{s=1}^{t−1} η_s with {η_t} introduced in Section 2.2). Then (M_n, ζ_n) := (Σ_{t=1}^n x_t², Σ_{t=1}^n x_t) satisfies (n^{−δ−1} M_n, n^{−δ/2−1} ζ_n) →^w (M, ζ), ζ := ∫U. Furthermore, if W*_n(s) := n^{−1} Σ_{t=1}^n (I_{ε*_t ≤ q(s)} − s), s ∈ [0,1], is the bootstrap empirical process in probability scale, then W*_n and M_n^{1/2} β̂* are individually independent of the data (the second being conditionally standard Gaussian), but not jointly independent of the data, because

Cov*(n^{1/2} W*_n(s), M_n^{1/2} β̂*) = (n^{−δ−1} M_n)^{−1/2} n^{−δ/2−1} ζ_n γ(s) →^w M^{−1/2} ζ γ(s),

s ∈ [0,1], where γ(·) := E*[ε*_1 I_{ε*_1 ≤ q(·)}] = −φ(q(·)) is a trimmed mean function, with Cov*(·) and E*(·) calculated under P*. It is shown in Section S.4 that, more strongly,

(n^{1/2} W*_n, n^{(δ+1)/2} β̂*, n^{−δ/2−1} ζ_n) →^{w*}_w (W, M^{−1/2} b, ζ) | (M, ζ)   (4.4)

on D × ℝ², where (W, b) is a pair of a standard Brownian bridge and a standard Gaussian rv, individually independent of U (and thus of M, ζ), but with Gaussian joint conditional (on U) distributions having covariance Cov(W(s), b|U) = M^{−1/2} ζ γ(s), s ∈ [0,1]. Combining the expansion of τ*_n, (4.3) and (4.4) with an extended CMT (Theorem A.1 in Appendix A) yields

τ*_n →^{w*}_w { sup_{s∈[0,1]} |W(s) + φ(q(s)) M^{−1/2} b ζ| } | (M, ζ) = τ | (M, ζ) a.s.,   (4.5)

where τ := sup_{s∈[0,1]} |W̃(s)| for a process W̃ which, conditionally on U (and thus on M, ζ), is a zero-mean Gaussian process with W̃(0) = W̃(1) = 0 a.s. and conditional covariance function K(s, v) = s(1−v) − M^{−1} ζ² γ(s) γ(v) for 0 ≤ s ≤ v ≤ 1. In summary, the limit bootstrap distribution is random because the latter conditional covariance is random whenever M or ζ are.


4.2.2 Bootstrap validity

We now discuss in what sense τ*_n can provide a distributional approximation of τ_n and whether the bootstrap can be valid in the sense of Definition 1.

Unconditional validity. Under H_0 that ε_t ~ i.i.d. N(0,1), the bootstrap can be shown to be unconditionally valid using Theorem 3.1. Specifically, under H_0, the assumptions and results of Johansen and Nielsen (2016, Sec. 4.1–4.2) guarantee that τ_n has the expansion τ_n = τ̃_n + o_p(1), with τ̃_n := sup_{s∈[0,1]} | n^{−1/2} Σ_{t=1}^n (I_{ε_t ≤ q(s)} − s) + φ(q(s))(β̂ − β) n^{−1/2} Σ_{t=1}^n x_t | defined similarly to τ̃*_n. Assume that β̂ is asymptotically mixed Gaussian, such that jointly with n^{−δ/2} x_{⌊n·⌋} →^w U it holds that

( n^{−1/2} Σ_{t=1}^n (I_{ε_t ≤ q(s)} − s), n^{(δ+1)/2}(β̂ − β), n^{−δ/2−1} ζ_n ) →^w (W, M^{−1/2} b, ζ);

then τ_n = τ̃_n + o_p(1) →^w τ = sup_{s∈[0,1]} |W̃(s)|. Thus, the unconditional limit of τ_n obtains by averaging (over M, ζ) the conditional limit of τ*_n. This is the main prerequisite for establishing unconditional bootstrap validity via Theorem 3.1. More precisely, it is proved in Section S.4 that

(τ_n, F*_n) →^w (τ, F),  F*_n(·) := P*(τ*_n ≤ ·),  F(·) := P(τ ≤ ·|M, ζ).   (4.6)

As F is sample-path continuous (e.g., by Proposition 3.2 of Linde, 1989, applied conditionally on M, ζ), Theorem 3.1 with X := (M, ζ) guarantees unconditional validity.

Remark 4.1 We outline here our approach to the proof of (4.6), which is of interest also in other applications. The main ingredients are (a) the convergence (τ_n, X_n) →^w (τ, U); (b) the fact that its strong version (τ_n, X_n) →^{a.s.} (τ, U) can be shown to imply τ*_n →^{w*}_p τ|U; and (c) the fact that the conditional distribution τ|U, which is a.s. equal to τ|(M, ζ), is diffuse. The proof proceeds in two steps: (i) prove that (τ_n, X_n) →^w (τ, U); (ii) consider, by extended Skorokhod coupling (Corollary 5.12 of Kallenberg, 1997), a representation of D_n and (τ, U) such that, with an abuse of notation, (τ_n, X_n) →^{a.s.} (τ, U) and, on a product extension of the Skorokhod-representation space, prove that τ*_n →^{w*}_p τ|U. The latter conditional assertion, due to the product structure of the probability space, can be proved as a collection of unconditional assertions by fixing the outcomes in the factor-space of the data. As F is sample-path continuous, (τ_n, F*_n) →^p (τ, F) on the Skorokhod-representation space, whereas (τ_n, F*_n) →^w (τ, F) on a general probability space. In other applications, the idea of a similar proof would be to choose X_n as D_n-measurable random elements such that τ*_n depends on the data essentially through X_n. □

Conditional validity. As τ_n = τ̃_n + o_p(1) under H_0, with τ̃_n related to (M_n, ζ_n) through the same functional form as τ̃*_n, it is possible for τ_n|X_n to have the same random limit distribution under H_0 as τ*_n given the data, i.e., τ_n|X_n →^w_w τ|(M, ζ). For instance, this occurs if {ε_t} is an i.i.d. sequence independent of X_n, by the same argument as for τ̃*_n. Conditional validity can then be established through Corollary 3.2 by using the following fact.


Remark 4.2 The convergence in (3.4), required in Corollary 3.2, follows from the separate convergence facts τ_n|X_n →^w_w τ|X, τ*_n →^{w*}_w τ*|X′ and (τ_n, τ*_n, φ_n(X_n), ψ_n(D_n)) →^w (τ, τ*, X, X′) for some measurable functions φ_n, ψ_n, provided that the conditional distributions τ|X′ and τ*|X′ are equal a.s.; see Section S.4. □

Let φ_n(X_n) := ψ_n(X_n) := (n^{−δ−1} M_n, n^{−δ/2−1} ζ_n) and X := X′ := (M, ζ). By Remark 4.2, the convergence τ_n|X_n →^w_w τ|X′, eq. (4.5) and the convergence (τ_n, τ*_n, φ_n(X_n)) →^w (τ, τ*, X′) with the distributions τ*|X′ and τ|X′ equal a.s. (shown in the proof of (4.6), see Section S.4) are sufficient for eq. (3.4) to hold in the form

(τ_n|X_n, τ*_n|D_n) →^w_w (τ|X′, τ|X′).

As the random cdf F of the conditional distribution τ|X′ is sample-path continuous, the bootstrap is valid conditionally on X_n by Corollary 3.2(a).

4.3 Bootstrap tests of parameter constancy

4.3.1 General set up

Here we apply the results of Section 3 to the classic problem of parameter constancy testing in regression models (see Andrews, 1993, and the references therein). Specifically, we deal with bootstrap implementations when the moments of the regressors may be unstable over time; see, e.g., Hansen (2000) and Zhang and Wu (2012).

Consider a linear regression model for y_nt ∈ ℝ given x_nt ∈ ℝ^m, in triangular array notation:

y_nt = β′_t x_nt + ε_nt  (t = 1, 2, …, n).   (4.7)

The null hypothesis of parameter constancy is H_0: β_t = β_1 (t = 2, …, n), which is tested here against the alternative H_1: β_t = β_1 + δ I_{t ≤ n*} (t = 2, …, n), where n* := ⌊r* n⌋ and δ ≠ 0 respectively denote the timing and the magnitude of the possible break,⁶ both assumed unknown to the econometrician. The so-called break fraction r* belongs to a known closed interval [r̲, r̄] ⊂ (0,1). In order to test H_0 against H_1, it is customary to consider the "sup F" (or "sup Wald") test (Andrews, 1993), based on the statistic F_n := max_{r∈[r̲, r̄]} F_{⌊nr⌋}, where F_{⌊nr⌋} is the usual F statistic for testing the auxiliary null hypothesis that δ = 0 in the regression

y_nt = β′x_nt + δ′x_nt I_{t ≤ ⌊nr⌋} + ε_nt.

We make the following assumption, allowing for non-stationarity in the regressors (see also Hansen, 2000, Assumptions 1 and 2).

Assumption H. The following conditions on {x_nt, ε_nt} hold:

(i) (mda) ε_nt is a martingale difference array with respect to the current value of x_nt and the lagged values of (x_nt, ε_nt);

(ii) (wlln) ε²_nt satisfies the law of large numbers n^{−1} Σ_{t=1}^{⌊nr⌋} ε²_nt →^p r(Eε²_nt) = rσ² > 0, for all r ∈ (0,1];

⁶We suppress the possible dependence of β_t = β_nt on n with no risk of ambiguity.


(iii) (non-stationarity) (M_n, V_n, N_n) →^w (M, V, N) in D_{m×m} × D_{m×m} × D_m for

(M_n, V_n, N_n) := ( (1/n) Σ_{t=1}^{⌊n·⌋} x_nt x′_nt, (1/(nσ²)) Σ_{t=1}^{⌊n·⌋} x_nt x′_nt ε²_nt, (1/(n^{1/2}σ)) Σ_{t=1}^{⌊n·⌋} x_nt ε_nt ),

and where M and V are a.s. continuous and (except at 0) strictly positive-definite-valued processes, whereas N, conditionally on {V, M}, is a zero-mean Gaussian process with covariance kernel E{N(r_1) N(r_2)′} = V(r_1) (0 ≤ r_1 ≤ r_2 ≤ 1).

Remark 4.3 A special case of Assumption H is obtained when the regressors satisfy the weak convergence x_{n⌊n·⌋} →^w U in D_m, such that M = ∫_0^· UU′. Under extra conditions (e.g., if sup_n sup_{t=1,…,n} E|E(ε²_nt − σ²|F_{n,t−i})| → 0 as i → ∞ for some filtrations F_{n,t}, n ∈ ℕ, to which {ε²_nt} is adapted), also V = ∫_0^· UU′ (see Theorem A.1 of Cavaliere and Taylor, 2009). □

The null asymptotic distribution of F_n under Assumption H is provided in Hansen (2000, Theorem 2):

F_n →^w sup_{r∈[r̲, r̄]} { Ñ(r)′ M̃(r)^{−1} Ñ(r) }   (4.8)

with Ñ(u) := N(u) − M(u)M(1)^{−1}N(1) and M̃(r) := M(r) − M(r)M(1)^{−1}M(r). In the case of (asymptotically) stationary regressors, F_n converges to the supremum of a squared tied-down Bessel process; see Andrews (1993). In the general case, however, since the asymptotic distribution in (4.8) depends on the joint distribution of the limiting processes M, N, V, which is unspecified under Assumption H, asymptotic inference based on (4.8) is infeasible. Simulation methods such as the bootstrap can therefore be appealing devices for computing p-values associated with F_n.

4.3.2 Bootstrap test and random limit bootstrap distribution

Following Hansen (2000), we consider here a fixed-regressor wild bootstrap intended to accommodate possible conditional heteroskedasticity of ε_nt. It is based on the residuals ẽ_nt from the OLS regression of y_nt on x_nt and x_nt I_{t ≤ ⌊r̃n⌋}, where r̃ := argmax_{r∈[r̲, r̄]} F_{⌊nr⌋} is the estimated break fraction for the original sample. The bootstrap statistic is F*_n := max_{r∈[r̲, r̄]} F*_{⌊nr⌋}, where F*_{⌊nr⌋} is the F statistic for δ* = 0 in the auxiliary regression

y*_t = β*′x_nt + δ*′x_nt I_{t ≤ ⌊nr⌋} + error*_nt,   (4.9)

with bootstrap data y*_t := ẽ_nt w*_t for an i.i.d. N(0,1) sequence of bootstrap multipliers w*_t independent of the data (as in Hansen, 2000, we set without loss of generality β = 0 in the bootstrap sample). The weak limit of F*_n given the data is stated next.

Theorem 4.1 Under Assumption H and under H_0 it holds that, with M̃, Ñ as in (4.8),

F*_n →^{w*}_w sup_{r∈[r̲, r̄]} { Ñ(r)′ M̃(r)^{−1} Ñ(r) } | (M, V).   (4.10)

Remark 4.4 Theorem 4.1 establishes that, in general, the limit distribution of the fixed-regressor bootstrap statistic is random. In particular, it is distinct from the limit in eq. (4.8) and, as a result, the bootstrap does not estimate consistently the unconditional limit distribution of the statistic F_n under H_0 (contrary to the claim in Theorem 6 of Hansen, 2000). To illustrate the limiting randomness, consider the case M = V with a scalar regressor x_nt ∈ ℝ. By a change of variable (as in Theorem 3 of Hansen, 2000), convergence (4.10) reduces to

F*_n →^{w*}_w sup_{u∈I(M, r̲, r̄)} { W(u)² / (u(1−u)) } | M  for  I(M, r̲, r̄) := [ M(r̲)/M(1), M(r̄)/M(1) ],

where W is a standard Brownian bridge on [0,1], independent of M. As the maximization interval I(M, r̲, r̄) depends on M, so does the supremum itself. □
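A minimal sketch of the fixed-regressor wild bootstrap sup-F test just described may be useful. The choices below are ours for illustration (scalar regressor, no intercept, homoskedastic F statistics, and a modest number of bootstrap draws), not the exact design of Hansen (2000):

```python
import numpy as np

rng = np.random.default_rng(4)

def ssr(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    return e @ e

def sup_f(y, x, r_lo=0.15, r_hi=0.85):
    # sup-F statistic over candidate break dates k = floor(nr), r in [r_lo, r_hi]
    n = len(y)
    ssr_restricted = ssr(y, x.reshape(-1, 1))
    best, arg = -np.inf, None
    for k in range(int(r_lo * n), int(r_hi * n) + 1):
        z = np.where(np.arange(n) < k, x, 0.0)  # x_t * 1{t <= floor(nr)}
        ssr_u = ssr(y, np.column_stack([x, z]))
        F = (ssr_restricted - ssr_u) / (ssr_u / (n - 2))
        if F > best:
            best, arg = F, k
    return best, arg

def wild_bootstrap_p_value(y, x, B=99):
    Fn, k_hat = sup_f(y, x)
    # residuals from the regression including the estimated break date k_hat
    n = len(y)
    z = np.where(np.arange(n) < k_hat, x, 0.0)
    X = np.column_stack([x, z])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e_tilde = y - X @ beta
    Fs = np.empty(B)
    for j in range(B):
        y_star = e_tilde * rng.standard_normal(n)  # fixed-regressor wild bootstrap data
        Fs[j], _ = sup_f(y_star, x)
    return np.mean(Fs >= Fn)

n = 120
x = np.cumsum(rng.standard_normal(n)) / np.sqrt(n)  # regressor with unstable moments
y = 1.0 * x + rng.standard_normal(n)                # H0: constant parameters
p = wild_bootstrap_p_value(y, x)
print(p)
```

Even though the bootstrap limit in (4.10) is random, Theorem 4.2 below implies that the p-value computed this way is asymptotically U(0,1) under Assumption H.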

4.3.3 Bootstrap validity

Although under Assumption H the bootstrap does not replicate the asymptotic (un-conditional) distribution in (4.8), unconditional bootstrap validity can be establishedunder no further assumptions than Assumption H, by using the results in Section 3.2.In contrast, if interest is in achieving bootstrap validity conditional on the regressorsXn := fxntgnt=1, as it may appear natural when the regressors are kept �xed acrossbootstrap samples, further conditions are required; e.g., the following Assumption C.

Assumption C. Assumption H holds and, jointly with the convergence in Assumption H(iii), it holds that $(M_n, V_n, N_n)|X_n \xrightarrow{w}_w (M, V, N)\,|\,(M, V)$ as random measures on $D^{m\times m}\times D^{m\times m}\times D^{m}$.

Remark 4.5 Assumption C is stronger than Assumption H due to the fact that, differently from the bootstrap variates $w^*_t$, the errors $\{\varepsilon_{nt}\}$ need not be independent of $\{x_{nt}\}$. The third DGP of Section 2.2.3 could be used to construct an example, with $x_{nt} := n^{-1/2}x_t$ and $\varepsilon_{nt} := \varepsilon_t$, where Assumption H(iii) holds but Assumption C does not.

Remark 4.6 The meaning of "jointly" in Assumption C is given in eq. (A.2). By Lemma A.1(b), the convergence in Assumption C will be joint with that in Assumption H(iii) if $n^{-1}\bar\sigma^{-2}\sum_{t=1}^{\lfloor n\cdot\rfloor} x_{nt}x_{nt}'(\varepsilon_{nt}^2 - E(\varepsilon_{nt}^2|X_n)) = o_p(1)$ in $D^{m\times m}$, such that the process $n^{-1}\bar\sigma^{-2}\sum_{t=1}^{\lfloor n\cdot\rfloor} x_{nt}x_{nt}'\varepsilon_{nt}^2$ is asymptotically equivalent to an $X_n$-measurable process. □

The results on the validity of the bootstrap parameter constancy tests are summarized in the following theorem.

Theorem 4.2 Let the parameter constancy hypothesis $H_0$ hold for model (4.7). Then, under Assumption H, the bootstrap based on $\tau_n = F_n$ and $\tau^*_n = F^*_n$ is unconditionally valid. If Assumption C holds, then the bootstrap based on $F_n$ and $F^*_n$ is valid also conditionally on $X_n$.

Theorem 4.2 under Assumption H is proved along the lines of Remark 4.1. Under Assumption C the proof could be recast in terms of the following general strategy to check condition (3.4) of Corollary 3.2(a), with $\phi_n(X_n) := \psi_n(D_n) := (M_n, V_n)$ and $X := X' := (M, V)$.

Remark 4.7 With the notation of Remark 4.2, convergence (3.4) follows from $\tau_n|X_n \xrightarrow{w}_w \tau|X$ and $(\tau_n, \phi_n(X_n), \psi_n(D_n)) \xrightarrow{w} (\tau, X, X')$ together with the implication (when it holds) from $\psi_n(D_n) \xrightarrow{a.s.} X'$ to $\tau^*_n \xrightarrow{w^*}_p \tau^*|X'$, provided that the conditional distributions $\tau|X'$ and $\tau^*|X'$ are a.s. equal. The convergence $\tau_n|X_n \xrightarrow{w}_w \tau|X$ is the new ingredient compared to Remark 4.1. An implementation strategy is: (i) prove that $\tau_n|X_n \xrightarrow{w}_w \tau|X$ and $(\tau_n, \phi_n(X_n), \psi_n(D_n)) \xrightarrow{w} (\tau, X, X')$; (ii) consider a Skorokhod representation of $D_n$ and $(\tau, X, X')$ such that, maintaining the notation, $(\tau_n, \phi_n(X_n), \psi_n(D_n)) \xrightarrow{a.s.} (\tau, X, X')$ and, as a result, $\tau_n|X_n \xrightarrow{w}_w \tau|X$ strengthens to $\tau_n|X_n \xrightarrow{w}_p \tau|X$ (see Lemma A.1 in Appendix A); (iii) redefine the bootstrap variates $W^*_n$ on a product extension of the Skorokhod-representation space and prove there that $\tau^*_n \xrightarrow{w^*}_p \tau^*|X'$. Then (3.4) holds on a general probability space. Notice also that if $\phi_n(X_n) = (X'_n, X''_n)$ and $\psi_n(D_n) = X'_n$, then the convergence $(X'_n, X''_n) \xrightarrow{w} (X', X'')$ in Corollary 3.2(b) is joint with (3.4). □
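As a concrete illustration, the fixed-regressor wild bootstrap described around eq. (4.9) can be sketched for a scalar regressor as follows. This is a minimal sketch under our own simplifying choices (no intercept, a 15%-85% grid of break dates, OLS residuals from the null regression as $\tilde e_{nt}$); it is not the authors' code.

```python
import numpy as np

def sup_F(y, x, r_lo=0.15, r_hi=0.85):
    """sup-F statistic over break fractions r for delta = 0 in
    y_t = beta*x_t + delta*x_t*1{t <= floor(r*n)} + error."""
    n = len(y)
    ssr0 = np.sum((y - x * (x @ y) / (x @ x)) ** 2)      # restricted SSR (no break)
    stats = []
    for k in range(int(r_lo * n), int(r_hi * n) + 1):
        z = np.where(np.arange(n) < k, x, 0.0)           # x_t * 1{t <= k}
        Z = np.column_stack([x, z])
        ssr1 = np.sum((y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]) ** 2)
        stats.append((ssr0 - ssr1) / (ssr1 / (n - 2)))   # F for one restriction
    return max(stats)

def fixed_regressor_bootstrap_pvalue(y, x, B=199, seed=0):
    """Wild bootstrap keeping the regressors fixed: y*_t = e~_t w*_t."""
    rng = np.random.default_rng(seed)
    F_obs = sup_F(y, x)
    e_tilde = y - x * (x @ y) / (x @ x)                  # null-model residuals
    F_star = np.empty(B)
    for b in range(B):
        y_star = e_tilde * rng.standard_normal(len(y))   # N(0,1) multipliers
        F_star[b] = sup_F(y_star, x)                     # same x across draws
    return np.mean(F_star >= F_obs)

rng = np.random.default_rng(42)
n = 200
x = np.cumsum(rng.standard_normal(n))    # e.g. an I(1) regressor
y = 0.5 * x + rng.standard_normal(n)     # H0: constant coefficient
print("bootstrap p-value:", fixed_regressor_bootstrap_pvalue(y, x))
```

Holding $x$ fixed across bootstrap draws is what makes the limit bootstrap measure conditional on (functionals of) the regressors, hence random in the limit.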

5 Conclusions

When the distribution of a bootstrap statistic conditional on the data is random in the limit, the bootstrap fails to estimate consistently the asymptotic distribution of the original statistic. Renormalization of the statistic of interest cannot always be used as a way to eliminate the limiting bootstrap randomness (e.g., it cannot be used in any of the applications in Section 4). Nevertheless, we have shown that if bootstrap validity is defined as (large sample) control over the frequency of correct inferences, then randomness of the limit bootstrap distribution does not imply invalidity of the bootstrap, even without renormalizing the original statistic. A bootstrap scheme, therefore, need not be discarded for the sole reason of giving rise to a random limit bootstrap measure.

For the asymptotic validity of bootstrap inference, in an unconditional or a conditional sense, we have established sufficient conditions and strategies to verify these conditions in specific applications. The conditions differ mainly in their demands on the dependence structure of the data, and are more restrictive for conditional validity to hold.

We have provided three applications to well-known econometric inference problems which feature randomness of the limit bootstrap distribution. As usual, alternative bootstrap schemes giving rise to non-random bootstrap measures could also be put forward, and in practice the choice of a bootstrap scheme would have to be made on a case-by-case basis. For instance, in the CUSUM application of Section 4.1, the m out of n bootstrap could be consistent for the unconditional asymptotic distribution of the statistic of interest; however, differently from the permutation test of Aue et al. (2008), which gives rise to a random limit measure, it would not be exact in finite samples. For the structural break test of Section 4.3, the use of the fixed-regressor bootstrap (and, consequently, the randomness of the limit bootstrap measure) cannot be avoided, given the level of generality assumed on the regressors and on the error terms. However, if the regressors were known to be I(1), then a recursive bootstrap (with unit roots imposed) could in principle be implemented and it would mimic the unconditional limit distribution of the statistic of interest, rather than a conditional limit. Which of the two bootstraps would be preferable in terms of size and power requires additional investigation.

Among the further applications that could be analyzed using our approach are bootstrap inference in weakly or partially identified models, inference in time series models with time-varying (stochastic) volatility, inference after model selection, and the bootstrap in high-dimensional models. In addition, the methods we propose could be useful in problems involving nuisance parameters that are not consistently estimable under the null hypothesis but where sufficient statistics are available (with the bootstrap being potentially valid conditionally on such statistics). In these problems it could be of further interest to study conservative inferential procedures satisfying a weaker condition than (3.1), e.g., $\liminf P(p^*_n \ge q) \ge 1-q$ for all $q\in(0,1)$ in the case of tests rejecting for small values of $p^*_n$.

An important issue not analyzed in the paper is whether the bootstrap can deliver refinements over standard asymptotics in cases where the limit bootstrap measure is random. We have seen in Sections 2 and 4.1 that bootstrap inference in such cases could be exact or close to exact. This seems to suggest that a potential for refinements exists. Moreover, there is also a potential for the bootstrap to inherit the finite-sample refinements offered by conditional asymptotic expansions (in line with Barndorff-Nielsen's $p^*$-formula; see Barndorff-Nielsen and Cox, 1994, Sec. 6.2), as has been established for some bootstrap procedures (DiCiccio and Young, 2008) in the special case of correctly specified parametric models. The study of such questions requires mathematical tools different from those employed here and is left for further research.
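The central message, that a random limit bootstrap measure need not invalidate (and may even be compatible with exact) inference, can be checked in a toy Monte Carlo. The sketch below is our construction in the spirit of the infinite-variance example of Section 2 and of Cavaliere, Georgiev and Taylor (2013): with Rademacher multipliers and symmetric errors the scheme is a sign-randomization test, so its finite-sample size is controlled even though the conditional bootstrap law has a random limit.

```python
import numpy as np

def wild_bootstrap_pvalue(eps, B=199, rng=None):
    """Wild (sign-flip) bootstrap p-value for H0: location = 0, two-sided.
    Conditionally on the data, the bootstrap law of |sum(eps * w)| is a
    random measure; with infinite-variance eps its limit is random too."""
    rng = np.random.default_rng() if rng is None else rng
    t_obs = abs(np.sum(eps))
    w = rng.integers(0, 2, size=(B, len(eps))) * 2 - 1   # Rademacher multipliers
    t_star = np.abs((eps[None, :] * w).sum(axis=1))
    return (1 + np.sum(t_star >= t_obs)) / (B + 1)       # randomization p-value

def rejection_frequency(n=100, M=300, level=0.10, seed=0):
    """Empirical size under H0 with symmetric, infinite-variance errors."""
    rng = np.random.default_rng(seed)
    rej = sum(wild_bootstrap_pvalue(rng.standard_cauchy(n), rng=rng) <= level
              for _ in range(M))
    return rej / M

print("empirical size at 10% level:", rejection_frequency())
```

By symmetry of the Cauchy errors, sign flipping is a valid randomization device, so the empirical rejection frequency stays near the nominal 10% level despite the random limit of the conditional bootstrap distribution.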

References

Andrews, D.W.K. (1993): Tests for parameter instability and structural change with unknown change point, Econometrica 61, 821–856.

—— (1997): A conditional Kolmogorov test, Econometrica 65, 1097–1128.

—— (2000): Inconsistency of the bootstrap when a parameter is on the boundary of the parameter space, Econometrica 68, 399–405.

Athreya, K.B. (1987): Bootstrap of the mean in the infinite variance case, The Annals of Statistics 15, 724–731.

Aue, A., I. Berkes and L. Horváth (2008): Selection from a stable box, Bernoulli 14, 125–139.

Barndorff-Nielsen, O.E. and D.R. Cox (1994): Inference and Asymptotics, Chapman & Hall.

Basawa, I.V., A.K. Mallik, W.P. McCormick, J.H. Reeves, and R.L. Taylor (1991): Bootstrapping unstable first-order autoregressive processes, The Annals of Statistics 19, 1098–1101.

Beran, R. (1997): Diagnosing bootstrap success, Annals of the Institute of Statistical Mathematics 49, 1–24.

Billingsley, P. (1968): Convergence of Probability Measures, John Wiley: New York.

Cattaneo, M., M. Jansson and K. Nagasawa (2020): Bootstrap-based inference for cube root asymptotics, Econometrica, forthcoming.

Cavaliere, G. and I. Georgiev (2020): Inference under random limit bootstrap measures: supplemental material.

Cavaliere, G., I. Georgiev and A.M.R. Taylor (2013): Wild bootstrap of the mean in the infinite variance case, Econometric Reviews 32, 204–219.

Cavaliere, G., H.B. Nielsen and A. Rahbek (2015): Bootstrap testing of hypotheses on co-integration relations in vector autoregressive models, Econometrica 83, 813–831.

Cavaliere, G. and A.M.R. Taylor (2009): Heteroskedastic time series with a unit root, Econometric Theory 25, 1228–1276.

Chan, N.H. and C.Z. Wei (1988): Limiting distributions of least squares estimates of unstable autoregressive processes, The Annals of Statistics 16, 367–401.

Crimaldi, I. and L. Pratelli (2005): Convergence results for conditional expectations, Bernoulli 11, 737–745.

DasGupta, A. (2008): Asymptotic Theory of Statistics and Probability, Springer-Verlag: Berlin.

DiCiccio, T. and G.A. Young (2008): Conditional properties of unconditional parametric bootstrap procedures for inference in exponential families, Biometrika 95, 747–758.

Georgiev, I., D. Harvey, S. Leybourne and A.M.R. Taylor (2019): A bootstrap stationarity test for predictive regression invalidity, Journal of Business & Economic Statistics 37, 528–541.

Hall, P. (1992): The Bootstrap and Edgeworth Expansion, Springer-Verlag: Berlin.

Hansen, B.E. (2000): Testing for structural change in conditional models, Journal of Econometrics 97, 93–115.

Häusler, E. and H. Luschgy (2015): Stable Convergence and Stable Limit Theorems, Springer-Verlag: Berlin.

Horowitz, J.L. (2001): The bootstrap, in Heckman, J.J. and E. Leamer (eds.), Handbook of Econometrics 5, chapter 52, Elsevier: Amsterdam.

Hounyo, U., S. Gonçalves and N. Meddahi (2017): Bootstrapping pre-averaged realized volatility under market microstructure noise, Econometric Theory 33, 791–838.

Jacod, J., Y. Li, P. Mykland, M. Podolskij and M. Vetter (2009): Microstructure noise in the continuous case: the pre-averaging approach, Stochastic Processes and Their Applications 119, 2249–2276.

Johansen, S. and B. Nielsen (2016): Analysis of the Forward Search using some new results for martingales and empirical processes, Bernoulli 22, 1131–1183.

Kallenberg, O. (1997): Foundations of Modern Probability, Springer-Verlag: Berlin.

—— (2017): Random Measures: Theory and Applications, Springer-Verlag: Berlin.

Knight, K. (1989): On the bootstrap of the sample mean in the infinite variance case, The Annals of Statistics 17, 1168–1175.

Lahiri, S.N. (1996): Effects of block lengths on the validity of block resampling methods, Probability Theory & Related Fields 121, 73–97.

LePage, R. and K. Podgórsky (1996): Resampling permutations in regression without second moments, Journal of Multivariate Analysis 57, 119–141.

LePage, R., K. Podgórsky and M. Ryznar (1997): Strong and conditional invariance principles for samples attracted to stable laws, Probability Theory and Related Fields 108, 281–298.

Linde, W. (1989): Gaussian measure of translated balls in a Banach space, Theory of Probability and its Applications 34, 349–359.

Lockhart, R. (2012): Conditional limit laws for goodness-of-fit tests, Bernoulli 18, 857–882.

Politis, D., J. Romano and M. Wolf (1999): Subsampling, Springer: Berlin.

Ploberger, W. and W. Krämer (1992): The CUSUM test with OLS residuals, Econometrica 60, 271–285.

Reid, N. (1995): The roles of conditioning in inference, Statistical Science 10, 138–199.

Rubshtein, B. (1996): A central limit theorem for conditional distributions, in Bergelson, V., P. March and J. Rosenblatt (eds.), Convergence in Ergodic Theory and Probability, De Gruyter: Berlin.

Sen, B., M. Banerjee and M. Woodroofe (2010): Inconsistency of bootstrap: the Grenander estimator, The Annals of Statistics 38, 1953–1977.

Shao, X. and D.N. Politis (2013): Fixed b subsampling and the block bootstrap: improved confidence sets based on p-value calibration, Journal of the Royal Statistical Society B 75, 161–184.

Sweeting, T.J. (1989): On conditional weak convergence, Journal of Theoretical Probability 2, 461–474.

Zhang, T. and W. Wu (2012): Inference of time-varying regression models, The Annals of Statistics 40, 1376–1402.


Appendices

A Weak convergence in distribution

In this section we establish some properties of weak convergence in distribution for random elements of Polish spaces. They are useful in applications, in order to verify the high-level conditions of our main theorems, as well as to prove these very theorems. Recall the convention that, throughout, Polish spaces are equipped with their Borel sets. Finite k-tuples of random elements defined on the same probability space are considered as random elements of a product space with the product topology and $\sigma$-algebra.

Let $(Z_n, X_n)$ and $(Z, X)$ be random elements such that $Z_n = (Z'_n, Z''_n)$ and $Z = (Z', Z'')$ are $S'_Z \times S''_Z$-valued, whereas $X_n = (X'_n, X''_n)$ and $X = (X', X'')$ are resp. $S' \times S''$-valued and $S'_X \times S''_X$-valued ($n \in \mathbb{N}$). We say that $Z'_n|X'_n \xrightarrow{w}_w Z'|X'$ and $Z''_n|X''_n \xrightarrow{w}_w Z''|X''$ jointly (denoted by $(Z'_n|X'_n, Z''_n|X''_n) \xrightarrow{w}_w (Z'|X', Z''|X'')$) if

$$ \big(E\{h'(Z'_n)|X'_n\},\, E\{h''(Z''_n)|X''_n\}\big) \xrightarrow{w} \big(E\{h'(Z')|X'\},\, E\{h''(Z'')|X''\}\big) \qquad (A.1) $$

for all $h' \in C_b(S'_Z)$ and $h'' \in C_b(S''_Z)$. Even for $X'_n = X''_n$, this property is weaker than the convergence $(Z'_n, Z''_n)|X'_n \xrightarrow{w}_w (Z', Z'')|X$ defined by $E\{g(Z'_n, Z''_n)|X'_n\} \xrightarrow{w} E\{g(Z', Z'')|X\}$ for all $g \in C_b(S'_Z \times S''_Z)$. We notice that for $Z'_n = X'_n$, (A.1) reduces to

$$ \big(Z'_n,\, E\{h''(Z''_n)|X''_n\}\big) \xrightarrow{w} \big(Z',\, E\{h''(Z'')|X''\}\big) \qquad (A.2) $$

for all $h'' \in C_b(S''_Z)$ and in this case we write $(Z'_n, (Z''_n|X''_n)) \xrightarrow{w}_w (Z', (Z''|X''))$ (see Corollary S.1 in Section S.2).

The first lemma given here is divided in two parts. In the first part, we provide conditions for strengthening weak convergence in distribution to weak convergence in probability. The second part, in its simplest form, provides conditions such that the two convergence facts $(Z_n, X_n) \xrightarrow{w} (Z, X)$ and $Z_n|X_n \xrightarrow{w}_w Z|X$ imply the joint convergence $((Z_n|X_n), Z_n, X_n) \xrightarrow{w}_w ((Z|X), Z, X)$.

Lemma A.1 Let $S_Z, S'_Z, S_X$ and $S'_X$ be Polish spaces. Consider the random elements $Z_n, Z$ ($S_Z$-valued), $Z'_n, Z'$ ($S'_Z$-valued), $X_n$ ($S_X$-valued) and $X'_n, X$ ($S'_X$-valued) for $n \in \mathbb{N}$. Assume that $X'_n$ are $X_n$-measurable and $Z_n|X_n \xrightarrow{w}_w Z|X$.

(a) If all the considered random elements are defined on the same probability space, $(Z_n, X'_n) \xrightarrow{w} (Z, X)$ and $X'_n \xrightarrow{p} X$, then $Z_n|X_n \xrightarrow{w}_p Z|X$.

(b) If $(Z_n, X'_n, Z'_n) \xrightarrow{w} (Z, X, Z')$, then the joint convergence $((Z_n|X_n), Z_n, X'_n, Z'_n) \xrightarrow{w}_w ((Z|X), Z, X, Z')$ holds in the sense that, for all $h \in C_b(S_Z)$,

$$ (E\{h(Z_n)|X_n\}, Z_n, X'_n, Z'_n) \xrightarrow{w} (E\{h(Z)|X\}, Z, X, Z'). \qquad (A.3) $$

Notice that, by choosing $Z'_n = Z' = 1$, a corollary of Lemma A.1(b) not involving $Z'_n$ and $Z'$ is obtained. It states that $Z_n|X_n \xrightarrow{w}_w Z|X$ and $(Z_n, X'_n) \xrightarrow{w} (Z, X)$ together imply the joint convergence $((Z_n|X_n), Z_n, X'_n) \xrightarrow{w}_w ((Z|X), Z, X)$, provided that $X'_n$ are $X_n$-measurable.


By means of eq. (A.1) we defined joint weak convergence in distribution and denoted it by $(Z'_n|X'_n, Z''_n|X''_n) \xrightarrow{w}_w (Z'|X', Z''|X'')$. We now extend it to

$$ ((Z'_n|X'_n), (Z''_n|X''_n), Z'''_n) \xrightarrow{w}_w ((Z'|X'), (Z''|X''), Z'''), \qquad (A.4) $$

defined to mean that

$$ (E\{h'(Z'_n)|X'_n\}, E\{h''(Z''_n)|X''_n\}, Z'''_n) \xrightarrow{w} (E\{h'(Z')|X'\}, E\{h''(Z'')|X''\}, Z''') \qquad (A.5) $$

for all $h' \in C_b(S'_Z)$ and $h'' \in C_b(S''_Z)$. The natural equivalence of $((Z'_n|X'_n), (Z'_n|X'_n), Z'''_n) \xrightarrow{w}_w ((Z'|X'), (Z'|X'), Z''')$ and $((Z'_n|X'_n), Z'''_n) \xrightarrow{w}_w ((Z'|X'), Z''')$ holds under separability of the space $S'''$ where $Z'''_n, Z'''$ take values (see Remark S.1).

In Lemma A.2(b) below we relate (A.4) to the joint weak convergence of the respective conditional cdf's in the case of rv's $Z'_n, Z''_n, Z'$ and $Z''$. Before that, in Lemma A.2(a) we show how joint weak convergence can be strengthened to a.s. weak convergence on a special probability space. For a single convergence $Z'_n|X'_n \xrightarrow{w}_w Z'|X'$, part (a) implies that there exists a Skorokhod representation $(\tilde Z'_n, \tilde X'_n) \overset{d}{=} (Z'_n, X'_n)$, $(\tilde Z', \tilde X') \overset{d}{=} (Z', X')$ such that $\tilde Z'_n|\tilde X'_n \xrightarrow{w}_{a.s.} \tilde Z'|\tilde X'$.

Lemma A.2 Let $(Z'_n, Z''_n, Z'''_n, X'_n, X''_n)$ and $(Z', Z'', Z''', X', X'')$ be random elements of the same Polish product space, defined on possibly different probability spaces ($n \in \mathbb{N}$).

(a) If (A.4)-(A.5) hold, then there exist a probability space $(\tilde\Omega, \tilde{\mathcal{F}}, \tilde P)$ and random elements $(\tilde X'_n, \tilde X''_n, \tilde Z'_n, \tilde Z''_n, \tilde Z'''_n) \overset{d}{=} (X'_n, X''_n, Z'_n, Z''_n, Z'''_n)$, $(\tilde X', \tilde X'', \tilde Z', \tilde Z'', \tilde Z''') \overset{d}{=} (X', X'', Z', Z'', Z''')$ defined on this space such that $\tilde Z'_n|\tilde X'_n \xrightarrow{w}_{a.s.} \tilde Z'|\tilde X'$, $\tilde Z''_n|\tilde X''_n \xrightarrow{w}_{a.s.} \tilde Z''|\tilde X''$ and $\tilde Z'''_n \xrightarrow{a.s.} \tilde Z'''$.

(b) Let $Z', Z''$ be rv's and $Z'''$ be $S'''$-valued. If the conditional distributions $Z'|X'$ and $Z''|X''$ are diffuse, then (A.4)-(A.5) is equivalent to the weak convergence of the associated random cdf's:

$$ \big(P(Z'_n \le \cdot\,|X'_n),\, P(Z''_n \le \cdot\,|X''_n),\, Z'''_n\big) \xrightarrow{w} \big(P(Z' \le \cdot\,|X'),\, P(Z'' \le \cdot\,|X''),\, Z'''\big) \qquad (A.6) $$

as random elements of $D(\mathbb{R}) \times D(\mathbb{R}) \times S'''$.

The definition of the convergence $Z_n|X_n \xrightarrow{w}_w Z|X$ implies that $h(Z_n)|X_n \xrightarrow{w}_w h(Z)|X$ for any continuous $h : S_Z \to S'_Z$ between Polish spaces. A generalization for functions $h$ with a negligible set of discontinuities is provided in the following CMT (for weak convergence a.s. and weak convergence in probability, see Theorem 10 of Sweeting, 1989).

Theorem A.1 Let $S_Z, S'_Z, S_X$ and $S'_X$ be Polish spaces and the random elements $Z_n, Z$ be $S_Z$-valued, $X_n$ be $S_X$-valued and $X$ be $S'_X$-valued. If $Z_n|X_n \xrightarrow{w}_w Z|X$ and $h : S_Z \to S'_Z$ has its set of discontinuity points $D_h$ with $P(Z \in D_h|X) = 0$ a.s., then $h(Z_n)|X_n \xrightarrow{w}_w h(Z)|X$.

Next, we prove in Theorem A.2 a weak convergence result for iterated conditional expectations. The theorem provides conditions under which the convergence $E(z_n|X_n) \xrightarrow{w} E(z|X', X'')$ implies, upon iteration of the expectations, that $E\{E(z_n|X_n)|X'_n\} \xrightarrow{w} E\{E(z|X', X'')|X'\}$ for rv's $z_n, z$ and for $X_n$-measurable $X'_n$. In terms of weak convergence in distribution, the result allows passing from $Z_n|X_n \xrightarrow{w}_w Z|(X', X'')$ to $Z_n|X'_n \xrightarrow{w}_w Z|X'$. We need, however, a more elaborate version for joint weak convergence.

Theorem A.2 For $n \in \mathbb{N}$, let $z_n$ be integrable rv's, $X_n$, $Y_n$ and $(X'_n, X''_n)$ be random elements of Polish spaces (say, $S_X$, $S_Y$ and $S'_X$), defined on the probability spaces $(\Omega_n, \mathcal{F}_n, P_n)$ and such that $(X'_n, X''_n)$ are $X_n$-measurable ($n \in \mathbb{N}$). Let also $z$ be an integrable rv and $Y, (X', X'')$ be random elements of the Polish spaces $S_Y$, $S'_X$ defined on a probability space $(\Omega, \mathcal{F}, P)$. If

$$ (E(z_n|X_n), X'_n, X''_n, Y_n) \xrightarrow{w} (E(z|X', X''), X', X'', Y) \qquad (A.7) $$

and $X''_n|X'_n \xrightarrow{w}_w X''|X'$, then $E(z_n|X'_n) \xrightarrow{w} E(z|X')$ jointly with (A.7).

Moreover, let $Z_n$, $Z$ be random elements of a Polish space $S_Z$ defined resp. on $(\Omega_n, \mathcal{F}_n, P_n)$ and $(\Omega, \mathcal{F}, P)$. If

$$ ((Z_n|X_n), X'_n, X''_n, Y_n) \xrightarrow{w}_w ((Z|(X', X'')), X', X'', Y) \qquad (A.8) $$

and $X''_n|X'_n \xrightarrow{w}_w X''|X'$, then

$$ ((Z_n|X'_n), (Z_n|X_n), X'_n, X''_n, Y_n) \xrightarrow{w}_w ((Z|X'), (Z|(X', X'')), X', X'', Y). \qquad (A.9) $$

Remark A.1 A special case with $X''_n = X'' = 1$ and $Y_n = Y = 1$ is that where $(E(z_n|X_n), X'_n) \xrightarrow{w} (E(z|X'), X')$ such that $X''_n|X'_n \xrightarrow{w}_w X''|X'$ is trivial, and hence, $E(z_n|X'_n) \xrightarrow{w} E(z|X')$ if $X'_n$ are $X_n$-measurable. In terms of conditional distributions, the joint convergence $((Z_n|X_n), X'_n) \xrightarrow{w} ((Z|X'), X')$ implies that $Z_n|X'_n \xrightarrow{w}_w Z|X'$ (or more strongly, $((Z_n|X'_n), (Z_n|X_n), X'_n) \xrightarrow{w} ((Z|X'), (Z|X'), X')$) for $X_n$-measurable $X'_n$. Another special case, clarifying the importance of the uniform integrability requirement, is $z_n = X_n = X''_n$, $z = X''$ and $X'_n = X' = Y_n = Y = 1$, where Theorem A.2 reduces to the fact that $z_n \xrightarrow{w} z$ implies $Ez_n \to Ez$ for uniformly integrable rv's $z_n, z$. □

Remark A.2 Theorem A.2 can be applied to the bootstrap p-value. Let (A.7) hold for $z_n = p^*_n$ and $Y_n = Y = 1$, and let $G^*$ be the conditional cdf of $p^*|(X', X'')$. If $E(G^*|X')$ equals pointwise the cdf of the $U(0,1)$ distribution, then the convergence $p^*_n|X'_n \xrightarrow{w}_w p^*|X'$ implied by Theorem A.2 under the condition $X''_n|X'_n \xrightarrow{w}_w X''|X'$ becomes $p^*_n|X'_n \xrightarrow{w}_p U(0,1)$. □
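A stylized numerical check of this mechanism (our toy construction, not an example from the paper): take $X' \sim N(0,1)$, $\tau = X' + Z$ with $Z \sim N(0,1)$ independent of $X'$, and the oracle bootstrap cdf $F^*(u) = \Phi(u - X')$. Then $p^* = 1 - F^*(\tau) = 1 - \Phi(Z)$, so $E(G^*|X')$ is the uniform cdf and $p^*$ is $U(0,1)$ even conditionally on $X'$.

```python
import numpy as np
from math import erf, sqrt

def Phi(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

rng = np.random.default_rng(0)
M = 20000
Xp = rng.standard_normal(M)            # the conditioning variable X'
Z = rng.standard_normal(M)
tau = Xp + Z                           # tau | X' ~ N(X', 1)
# bootstrap p-value: p* = 1 - F*(tau) with F*(u) = Phi(u - X')
p_star = np.array([1.0 - Phi(t - x) for t, x in zip(tau, Xp)])

print("mean of p*:", p_star.mean())                    # ~ 0.5: uniform
print("mean of p* given X'>0:", p_star[Xp > 0].mean()) # ~ 0.5: uniform given X'
```

Here uniformity holds exactly conditionally on $X'$, illustrating the conclusion $p^*_n|X'_n \xrightarrow{w}_p U(0,1)$ in an idealized limit setting.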

We conclude the section with a result that is used for establishing the joint convergence of original and bootstrap quantities as an implication of a marginal and a conditional convergence.

Lemma A.3 Let $(\Omega\times\Omega^*, \mathcal{F}\otimes\mathcal{F}^*, P\times P^*)$ be a product probability space. Let $D_n : \Omega \to S_D$, $W^*_n : \Omega^* \to S_W$, $X : \Omega \to S_X$ and $Z^*_n : \Omega\times\Omega^* \to S_Z$ ($n \in \mathbb{N}$) be random elements of the Polish spaces $S_D$, $S_W$, $S_X = S'_X \times S''_X$ and $S_Z$. Assume further that $X_n$ are $D_n$-measurable random elements of $S_X$ and $Z^*_n$ are $(D_n, W^*_n)$-measurable random elements of $S_Z$ ($n \in \mathbb{N}$). If $X_n \xrightarrow{p} X = (X', X'')$ and $Z^*_n|D_n \xrightarrow{w}_p Z^*|X'$, then $(Z^*_n, X_n) \xrightarrow{w} (Z, X)$ and $(Z^*_n, X_n)|D_n \xrightarrow{w}_p (Z, X)|X$ on $S_Z \times S_X$, where $Z$ is a random element of $S_Z$ such that the conditional distributions $Z^*|X'$, $Z|X'$ and $Z|X$ are equal a.s.


The existence of a random element $Z$ with the specified properties, possibly on an extension of the original probability space, is ensured by Lemma 5.9 of Kallenberg (1997). A well-known special case of Lemma A.3 is that where $X_n = (1, X''_n)$, $X = (1, X'')$, $X''_n \xrightarrow{p} X''$ and $Z^*_n|X''_n \xrightarrow{w}_p X''$ (such that $D_n = X''_n$). Then $(Z^*_n, X''_n) \xrightarrow{w} (Z, X'')$ with $Z \overset{d}{=} Z|X'' \overset{d}{=} X''$, reducing to the condition that $X''$ and $Z$ are independent and $Z$ is distributed like $X''$ (DasGupta, 2008, p. 475).

B Proofs of the main results

Proof of Theorem 3.1. The random element $(\tau, F)$ of $\mathbb{R}\times D(\mathbb{R})$ is a measurable function of $(\tau, X)$ determined up to indistinguishability by the joint distribution of $(\tau, X)$. By extended Skorokhod coupling (Corollary 5.12 of Kallenberg, 1997), we can regard the data and $(\tau, X)$ as defined on a special probability space where $(\tau_n, F^*_n) \to (\tau, F)$ a.s. in $\mathbb{R}\times D(\mathbb{R})$ and $F(\cdot) = P(\tau \le \cdot\,|X)$ still holds. We can also replace the redefined $F$ by a sample-path continuous random cdf that is indistinguishable from it (and maintain the notation $F$).

Since $F$ is sample-path continuous and $F^*_n, F$ are (random) cdf's, $F^*_n \xrightarrow{a.s.} F$ in $D(\mathbb{R})$ implies that $\sup_{u\in\mathbb{R}}|F^*_n(u) - F(u)| \xrightarrow{a.s.} 0$. Therefore, $F^*_n(\tau_n) - F(\tau_n) \xrightarrow{a.s.} 0$. Since $\tau_n \xrightarrow{a.s.} \tau$ and $F$ is sample-path continuous, it holds further that $F(\tau_n) \xrightarrow{a.s.} F(\tau)$, so also $F^*_n(\tau_n) \xrightarrow{a.s.} F(\tau)$ on the special probability space. Hence, in general, $F^*_n(\tau_n) \xrightarrow{w} F(\tau)$.

Finally, we notice that $F(\tau) \sim U(0,1)$. In fact, still by the continuity of $F$ and by the choice of $F^{-1}$ as the right-continuous inverse, the equality of events $\{F(u) \le q\} = \{u \le F^{-1}(q)\}$, $q \in (0,1)$, holds and implies that

$$ P\big(F(u)|_{u=\tau} \le q\,\big|\,X\big) = P\big(\tau \le F^{-1}(q)\,\big|\,X\big) = F\big(F^{-1}(q)\big) = q, $$

as asserted, the second equality because $F^{-1}(q)$ is $X$-measurable and the last by the continuity of $F$. □

Details of Remark 3.5. If $\tau^*_n \xrightarrow{w^*}_w \tau^*|X$ and $(\tau^*_n, \tau_n, X_n) \xrightarrow{w} (\tau^*, \tau, X)$ with $D_n$-measurable $X_n$ ($n \in \mathbb{N}$), then the joint convergence $((\tau^*_n|D_n), \tau_n, X_n) \xrightarrow{w}_w ((\tau^*|X), \tau, X)$ follows by Lemma A.1(b). If the conditional distributions $\tau^*|X$ and $\tau|X$ are equal a.s. and $F$ is sample-path continuous, then $(\tau_n, F^*_n) \xrightarrow{w} (\tau, F)$ by Lemma A.2(b).

Proof of Theorem 3.2. The result (as well as Corollary 3.1) follows from Theorem 3.3, which is proved below in an independent manner. Specifically, as the conditions of Theorem 3.3 are satisfied, it holds that $P(p^*_n \le q|X_n) \xrightarrow{w} F(F^{*-1}(q))$. Let $g(x) = \min\{x, 1\}\mathbb{I}_{\{x\ge 0\}}$. By the definition of weak convergence,

$$ P(p^*_n \le q) = E\{g(P(p^*_n \le q|X_n))\} \to E\{g(F(F^{*-1}(q)))\} = E\{F(F^{*-1}(q))\} = E\{E[F(F^{*-1}(q))|F^*]\} = E\{F^*(F^{*-1}(q))\} = q, $$

using for the penultimate equality the $F^*$-measurability of $F^{*-1}(q)$ and the relation $E(F(\xi)|F^*) = F^*(\xi)$ for $F^*$-measurable rv's $\xi$. Thus, $P(p^*_n \le q) \to q$ for almost all $q \in (0,1)$, which proves that $p^*_n \xrightarrow{w} U(0,1)$. □


Details of Remark 3.6. We justify an assertion of Remark 3.6 regarding condition (†). Let $\tilde\tau^*_n$ be a measurable transformation of $X_n$ and $W^*_n$ such that the expansion $\tau^*_n = \tilde\tau^*_n + o_p(1)$ holds w.r.t. the probability measure on the space where $D_n$ and $W^*_n$ are jointly defined. Then it holds that $(\tau^*_n - \tilde\tau^*_n)|D_n \xrightarrow{w}_p 0$ because convergence in probability to zero is preserved upon conditioning. As the conditional distributions $\tilde\tau^*_n|D_n$ and $\tilde\tau^*_n|X_n$ are equal a.s., it follows that the Lévy distance between $F^*_n(\cdot) := P(\tau^*_n \le \cdot\,|D_n)$ and $X'_n := P(\tilde\tau^*_n \le \cdot\,|X_n)$ is $o_p(1)$, and since the weak limit $F^*$ of $F^*_n$ is sample-path continuous, $F^*_n = X'_n + o_p(1)$ in the uniform distance. Thus, also $X'_n \xrightarrow{w} F^*$, such that condition (†) is satisfied with $X'_n = P(\tilde\tau^*_n \le \cdot\,|X_n) \in D(\mathbb{R})$ and $X = F^*$. □

Proof of Corollary 3.1. Convergence (3.4) implies condition (3.3) with the specified $F$, $F^*$ by Lemma A.2, since the limit random measures are diffuse. Part (a) follows from Theorem 3.2 with $F = F^*$ (see Remark 3.6), and part (b) from Theorem 3.2 with $F^*(u) = E(F(u)|X')$, $u \in \mathbb{R}$. □

Proof of Theorem 3.3. Introduce $F_n(\cdot) := P(\tau_n \le \cdot\,|X_n)$, $F^*_n(\cdot) := P(\tau^*_n \le \cdot\,|D_n)$ and $\tilde F^*_n(\cdot) := P(\tau^*_n \le \cdot\,|X_n)$ as random elements of $D(\mathbb{R})$. On the probability space of $X'$ (where $X'$ is as in Theorem 3.2), possibly upon extending it, define $\tau^* := F^{*-1}(\eta)$ for a rv $\eta \sim U(0,1)$ which is independent of $X'$. Then the convergence $(F^*_n, X'_n) \xrightarrow{w} (F^*, X')$, where $F^*$ is $X'$-measurable and sample-path continuous, implies that $((\tau^*_n|D_n), X'_n) \xrightarrow{w}_w ((\tau^*|X'), X')$ by Lemma A.2(b). Since $X'_n$ is $D_n$-measurable, by Theorem A.2 (see also Remark A.1) it follows that $(\tau^*_n|D_n, \tau^*_n|X'_n) \xrightarrow{w}_w (\tau^*|X', \tau^*|X')$. Since the conditional cdf $F^*$ of $\tau^*|X'$ is sample-path continuous, for $r_n(\cdot) := F^*_n(\cdot) - P(\tau^*_n \le \cdot\,|X'_n)$ it follows that $\sup_{x\in\mathbb{R}}|r_n(x)| \xrightarrow{p} 0$, by using Lemma A.2(b). Then the $D_n$-measurability of $X_n$, the $X_n$-measurability of $X'_n$ and Jensen's inequality yield

$$ \big|\tilde F^*_n(u) - P(\tau^*_n \le u|X'_n)\big| = \big|E\{r_n(u)|X_n\}\big| \le E\{|r_n(u)|\,|X_n\} \le E\Big\{\sup_{x\in\mathbb{R}}|r_n(x)|\,\Big|X_n\Big\} $$

for every $u \in \mathbb{R}$, and further,

$$ \sup_{\mathbb{R}}\big|\tilde F^*_n - P(\tau^*_n \le \cdot\,|X'_n)\big| \le E\Big\{\sup_{x\in\mathbb{R}}|r_n(x)|\,\Big|X_n\Big\} \xrightarrow{p} 0 $$

because the $o_p(1)$ property of $\sup_{x\in\mathbb{R}}|r_n(x)|$ is preserved upon conditioning and because $\sup_{x\in\mathbb{R}}|r_n(x)|$ is bounded. Therefore, $F^*_n = P(\tau^*_n \le \cdot\,|X'_n) + r_n = \tilde F^*_n + o_p(1)$ uniformly. Then the convergence $(F_n, F^*_n) \xrightarrow{w} (F, F^*)$ in $D(\mathbb{R})^{2}$ extends to $(F_n, F^*_n, \tilde F^*_n) \xrightarrow{w} (F, F^*, F^*)$ in $D(\mathbb{R})^{3}$.

Fix a $q \in (0,1)$ at which $F^{*-1}$ is a.s. continuous; such $q$ are all but countably many because $F^{*-1}$ is càdlàg. Here $F^{*-1}$ stands for the right-continuous generalized inverse of $F^*$, and similarly for other cdf's. It follows from the CMT that $(F_n, F^{*-1}_n(q), \tilde F^{*-1}_n(q)) \xrightarrow{w} (F, F^{*-1}(q), F^{*-1}(q))$ in $D(\mathbb{R})\times\mathbb{R}^2$. Hence, $F^{*-1}_n(q) = \tilde F^{*-1}_n(q) + o_p(1)$ such that $P(|F^{*-1}_n(q) - \tilde F^{*-1}_n(q)| < \delta) \to 1$ for all $\delta > 0$. With $I_{n,\delta} := \mathbb{I}_{\{|F^{*-1}_n(q) - \tilde F^{*-1}_n(q)| < \delta\}} = 1 + o_p(1)$, it holds that

$$ \big|P(\tau_n \le F^{*-1}_n(q)|X_n) - P(\tau_n \le \tilde F^{*-1}_n(q) + \delta|X_n)\big| \le I_{n,\delta}\big|P(\tau_n \le \tilde F^{*-1}_n(q) + \delta|X_n) - P(\tau_n \le \tilde F^{*-1}_n(q) - \delta|X_n)\big| + (1 - I_{n,\delta}) $$

$$ = I_{n,\delta}\big|F_n(\tilde F^{*-1}_n(q) + \delta) - F_n(\tilde F^{*-1}_n(q) - \delta)\big| + (1 - I_{n,\delta}), $$

the equality because $\tilde F^{*-1}_n(q) \pm \delta$ are $X_n$-measurable. Using the continuity of $F$ and the CMT, we conclude that the upper bound in the previous display converges weakly to $|F(F^{*-1}(q) + \delta) - F(F^{*-1}(q) - \delta)|$, which in its turn converges in probability to zero as $\delta \to 0^+$, again by the continuity of $F$. Therefore,

$$ \lim_{\delta\to 0^+}\limsup_{n\to\infty} P\Big(\big|P(\tau_n \le F^{*-1}_n(q)|X_n) - P(\tau_n \le \tilde F^{*-1}_n(q) + \delta|X_n)\big| > \epsilon\Big) = 0 $$

for every $\epsilon > 0$. On the other hand, as was already used, by the $X_n$-measurability of $\tilde F^{*-1}_n(q) + \delta$ and the CMT,

$$ P(\tau_n \le \tilde F^{*-1}_n(q) + \delta|X_n) = F_n(\tilde F^{*-1}_n(q) + \delta) \xrightarrow[n\to\infty]{w} F(F^{*-1}(q) + \delta) \xrightarrow[\delta\to 0^+]{w} F(F^{*-1}(q)). $$

Theorem 4.2 of Billingsley (1968) thus yields $P(\tau_n \le F^{*-1}_n(q)|X_n) \xrightarrow{w} F(F^{*-1}(q))$.

The proof of (3.5) is concluded by noting that $P(p^*_n \le q|X_n)$ differs from $P(\tau_n \le F^{*-1}_n(q)|X_n)$ by no more than the largest jump of $F^*_n$, which tends in probability to zero because the weak limit of $F^*_n$ is continuous.

Asymptotic validity of the bootstrap conditional on $X_n$ requires that $F(F^{*-1}(q)) = q$ for almost all $q \in (0,1)$, which by the continuity of $F$ and $F^*$ reduces to $F = F^*$. □

Proof of Corollary 3.2. Part (a) follows from Theorem 3.3 with $F = F^*$ and Polya's theorem. Regarding part (b), $((\tau_n|X_n), (\tau^*_n|D_n), X'_n, X''_n) \xrightarrow{w}_w ((\tau|(X', X'')), (\tau|X'), X', X'')$ and $X''_n|X'_n \xrightarrow{w}_w X''|X'$ imply, by Theorem A.2 with $Y_n = E\{g(\tau^*_n)|D_n\}$, $Y = E\{g(\tau)|X'\}$ and an arbitrary $g \in C_b(\mathbb{R})$, that $(\tau_n|X'_n, \tau^*_n|D_n) \xrightarrow{w}_w (\tau|X', \tau|X')$. As the conditional distribution $\tau|X'$ is diffuse, the proof is completed as in part (a). □

Details of Remark 3.10. With $(X'_n, X''_n)$ and $(X', X'')$ as in Remark 3.10, and with the notation of Section S.3, we argue next that the weak convergence of $\tau_n|X_n$, $\tau^*_n|D_n$ and $(X'_n, X''_n)$ is joint. Consider a Skorokhod representation of $D_n$ and $(M, \xi_1, \xi_2)$ on a probability space where convergence (S.8) is strengthened to $(\tau_n, X'_n, (X'_n)^{1/2}X''_n)|X_n \xrightarrow{w}_{a.s.} (\tau, M, (1-\omega_{\varepsilon|\eta})M^{1/2}\xi_2)\,|\,(M, \xi_2)$ (by Lemma A.2(a)), and $\hat\omega_\varepsilon \xrightarrow{a.s.} \omega_\varepsilon$. Thus, on this space, $\tau_n|X_n \xrightarrow{w}_{a.s.} \tau|(M, \xi_2)$, $(X'_n, X''_n) \xrightarrow{a.s.} (M, (1-\omega_{\varepsilon|\eta})\xi_2)$, and by (2.4), also $P^*(\tau^*_n \le u) \xrightarrow{a.s.} \Phi(\omega_\varepsilon^{-1/2}M^{1/2}u)$, $u \in \mathbb{R}$, such that $\tau^*_n|D_n \xrightarrow{w}_{a.s.} \tau|M$. It follows that on a general probability space $((\tau_n|X_n), (\tau^*_n|D_n), X'_n, X''_n) \xrightarrow{w}_w ((\tau|(M, \xi_2)), (\tau|M), M, (1-\omega_{\varepsilon|\eta})\xi_2)$. □
