Testing for multivariate heteroscedasticity
Journal of Statistical Computation and Simulation
Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/gscs20

To cite this article: H. E. T. Holgersson & Ghazi Shukur (2004) Testing for multivariate heteroscedasticity, Journal of Statistical Computation and Simulation, 74:12, 879-896, DOI: 10.1080/00949650410001646979

To link to this article: http://dx.doi.org/10.1080/00949650410001646979


Journal of Statistical Computation and Simulation, Vol. 74, No. 12, December 2004, pp. 879–896

TESTING FOR MULTIVARIATE HETEROSCEDASTICITY

H. E. T. HOLGERSSONa,∗ and GHAZI SHUKURa,b

aDepartment of Economics and Statistics, Jönköping International Business School, P.O. Box 1026, SE-551 11 Jönköping, Sweden; bDepartment of Economics and Statistics, Jönköping International Business School, and Centre for Labour Market Policy Research, Växjö University, Sweden

(Revised 12 April 2002; In final form 3 November 2003)

In this article, we propose a testing technique for multivariate heteroscedasticity, which is expressed as a test of linear restrictions in a multivariate regression model. Four test statistics with known asymptotic null distributions are suggested, namely the Wald, Lagrange multiplier (LM), likelihood ratio (LR) and the multivariate Rao F-test. The critical values for the statistics are determined by their asymptotic null distributions, but bootstrapped critical values are also used. The size, power and robustness of the tests are examined in a Monte Carlo experiment. Our main finding is that all the tests attain their nominal sizes asymptotically, but some of them have superior small sample properties. These are the F, LM and bootstrapped versions of the Wald and LR tests.

Keywords: Heteroscedasticity; Hypothesis test; Multivariate analysis; Bootstrap

1 INTRODUCTION

In the last few decades, a variety of methods has been proposed for testing for heteroscedasticity among the error terms in linear regression models. The assumption of homoscedasticity means that the disturbance variance should be constant (or homoscedastic) at each observation. Tests against heteroscedasticity are frequently used in many branches of applied statistics, such as quality control, biometry and econometrics, and there exists a fair number of heteroscedasticity tests, all of which have their pros and cons. The commonly applied White (1980) test uses a regression of squared residuals on all products and cross-products of the explanatory variables. This is not feasible in studies with small or moderate sample sizes, especially when the number of explanatory variables is large and causes a considerable reduction in the degrees of freedom. For similar reasons, one cannot use the Goldfeld and Quandt (1965) test for heteroscedasticity, since it is based on dividing the sample into two (possibly more) different groups, one corresponding to large values of the data and the other corresponding to small values. Another large sample test is the likelihood ratio (LR) test (also known as Bartlett's test), which involves dividing the error terms into k groups and estimating the error variances in each group. This test and the Goldfeld–Quandt test require a natural division of the data to be made, i.e. different regimes or different groups.

On the other hand, the White test and other tests, such as Ramsey's RESET test (Ramsey, 1969), the Glejser (1969) test and the Breusch–Pagan (BP) test (Breusch and Pagan, 1979), all

∗ Corresponding author. E-mail: [email protected]

ISSN 0094-9655 print; ISSN 1563-5163 online © 2004 Taylor & Francis Ltd. DOI: 10.1080/00949650410001646979


have some implicit assumption regarding the form of heteroscedasticity, in the sense that the error variance could be a function of some unknown variable(s); hence, these tests use different proxies for that unknown function. The BP test implies finding reasonable explanatory variables, an unspecified function of which models the possible heteroscedasticity. Following Bickel (1978), a plausible approach is to regress the squared residuals from the restricted model on powers of the predictions from the same model. This test is very general in that it covers a wide range of heteroscedastic situations.

The above tests are, however, only strictly applicable in a single equation environment. Many models are expressed in terms of multivariate models (sometimes referred to as systems of equations), due to the fact that the different marginal models are connected to each other. Treating each equation separately, and performing a succession of single equation misspecification tests, may lead to the problem of mass significance and to a reduction of the validity of the conclusions. The analysis of systems of equations, and in particular of allocation models, has been addressed by Bewley (1986), who among other things investigated traditional tests of parameter restrictions. In general, misspecification testing is quite uncommon in multivariate models, which may partly be due to the lack of availability of a standard methodology. A few exceptions are to be found, however. Edgerton et al. (1996) use systemwise testing extensively in their analysis of the demand for food in the Nordic countries, while Huang et al. (1993) and Shukur (1997) develop strategies for testing multiple system hypotheses. Edgerton and Shukur (1999) and Shukur and Edgerton (2000) have used Monte Carlo methods to investigate the properties of tests for autocorrelation and omitted variables, respectively. Doornik (1996) examined certain properties of a test for multivariate heteroscedasticity suggested by Kelejian (1982). This test relies on the assumption that the variance is a function of a known, observable variable. This assumption is in most situations an unavailable luxury. In reality, one usually has to guess a proxy variable for the unknown (and sometimes unobservable) variable that explains the variance.

The purpose of this study is to present another classification of heteroscedasticity that is more general than in the previous cases. Since heteroscedasticity testing is a vast area of statistical methodology, we have confined ourselves to a brief description of the test methods; further details are found in the cited references. We will, however, discuss more thoroughly the problems associated with systemwise testing for heteroscedasticity, since this topic is often only briefly mentioned, if at all, in standard textbooks. In this article, based on the BP test combined with Bickel's (1978) approach, we propose a systemwise test for heteroscedasticity. We use Monte Carlo methods to analyse the size and power of various generalisations of our test in systems ranging from one to five equations, under conditions where the error terms are both normally and non-normally distributed.

The article is arranged as follows. In Section 2, we present the model we analyse and give a formal definition of some heteroscedasticity tests. In Section 3, we discuss possible choices of the functional form of heteroscedasticity. In Section 4, we show how our null hypothesis may be expressed as a test of a general linear hypothesis. In Section 5, we show how this hypothesis may be tested, while Section 6 presents the design of our Monte Carlo experiment. In Sections 7 and 8, we describe the results concerning the size of the various tests, while power is analysed in Section 9. The conclusions of the article are presented in the final section.

2 MODEL AND HYPOTHESIS SPECIFICATION

The model of main concern in this article is the following linear model

Y = Xβ + ε, (1)


where Y is an n × P matrix of observations on P components, X is a fixed n × k observation matrix, β is a k × P matrix of parameters and ε is an n × P matrix of unobservable disturbance terms. In particular, the first column of X is a unit vector.

Sometimes we write Eq. (1) as

Vec(Y) = (I ⊗ X)Vec(β) + Vec(ε), (2)

where Vec is the operator stacking the columns of a matrix into one elongated column vector and ⊗ denotes the Kronecker product. Our primary assumptions on Eq. (1) are

(i) plim (X′X/n)⁻¹ = Q, a fixed finite (k × k) matrix;
(ii) E[εij ε(i−h)j] = 0 for h ≠ 0, i.e. zero autocorrelation;
(iii) E[εij⁶] < ∞, for all i, j.
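As a sanity check on the notation, the equivalence of Eqs. (1) and (2) can be verified numerically; the following sketch (illustrative only, with arbitrary simulated dimensions) confirms that Vec(Y) = (I ⊗ X)Vec(β) + Vec(ε) reproduces Y = Xβ + ε column by column:

```python
# Numerical check that the stacked form, Eq. (2), matches the matrix form, Eq. (1).
import numpy as np

rng = np.random.default_rng(0)
n, k, P = 50, 3, 2                      # observations, regressors, equations
X = np.hstack([np.ones((n, 1)),         # first column of X is a unit vector
               rng.normal(size=(n, k - 1))])
beta = rng.normal(size=(k, P))
eps = rng.normal(size=(n, P))
Y = X @ beta + eps                      # Eq. (1)

vec = lambda M: M.reshape(-1, order="F")  # Vec stacks columns
lhs = vec(Y)
rhs = np.kron(np.eye(P), X) @ vec(beta) + vec(eps)  # Eq. (2)
assert np.allclose(lhs, rhs)
```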

This article concerns the use of diagnostic tests for making inferences about the structure of the covariance matrix Ω, where

Ω := E[Vec(ε)Vec(ε)′] =

[ Σ11  Σ12  · · ·  Σ1P
  Σ21  Σ22  · · ·  Σ2P
   ⋮              ⋱
  ΣP1  ΣP2  · · ·  ΣPP ] ,   (3)

where Σjk := E[εj ε′k] and εj denotes the jth column of ε.

There are many relevant hypotheses concerning possible covariance structures. For example, it is sometimes assumed that the covariance matrix is equal over all marginal models, i.e. that Σ11 = · · · = ΣPP (see Bilodeau and Brenner, 1999, for a test of this hypothesis). In this article, however, we will restrict ourselves to a hypothesis of particular importance, namely H0: Σjk = σjk I, j, k ∈ {1, 2, . . . , P}, versus HA: Σjk = Σjk,A, where the subscript A refers to some known alternative. The null hypothesis states that all block matrices Σjk of Eq. (3) are proportional to the identity matrix, i.e. they differ only by a scalar. Clearly, when the null is true, the covariance matrix of Eq. (2) simply reduces to Ω = Σ(P×P) ⊗ I(n×n), where Σ = {σjk}, and hence the estimate of Vec(β) reduces to the ordinary least squares (OLS) estimate (see, e.g., Srivastava and Giles, 1987). It is readily seen that, without further restrictions, this hypothesis will be rather complicated to test in large systems. Therefore, we will focus on the diagonal block matrices, i.e. Σjj. Our reduced null hypothesis then becomes

H0: Σjj = σjj I, j ∈ {1, 2, . . . , P}, versus HA: Σjj = Σjj,A, j ∈ {1, 2, . . . , P},   (4)

with no constraints on Σjk for j ≠ k. It seems unlikely that, given that all Σjj = σjj I, there would be off-diagonal covariance matrices such that Σjk ≠ σjk I. Hence this simplification of the hypothesis is not very restrictive. If the complete hypothesis H0: Σjk = σjk I has to be tested, this can be done by a slight modification of the reduced test. We will discuss this matter later on in Section 3.

3 FUNCTIONAL FORM OF HETEROSCEDASTICITY

Before we consider the actual test we will discuss the structures of the alternative block matrices Σjj,A. If our modelling procedure is to be feasible, we must put some restrictions


on the alternative covariance matrix. A common assumption in single equation models is that V(εi) = h(Zi), where h(·) is some bounded function of Zi for some fixed observable Zi. An important property of h(·) in the context of testing for parametric heteroscedasticity is that it contains a parametric restriction which yields V[εi] = σ² > 0 when the null is true. An example where this does not hold is V[εi] = γE[εi]², since the variance then vanishes for γ = 0. A typical parameterisation that avoids this problem is h(γ, Zi) = α + γZi. The choice of Z, however, is not obvious. Further, things become more complicated if we have a system of equations (i.e. if P > 1). Following Kelejian (1982) we may write

E[εj ε′k] = Σjk = diag(Zγ),   j, k = 1, 2, . . . , P,   (5)

where γ is a matrix of constants. If Z simply is a known observable matrix (e.g. if Z = [1 2 · · · n] or a vector of numbers representing different regimes), then Eq. (5) imposes no serious problems. Furthermore, as we work in a systemwise environment, the dependent variables in Eq. (1) are assumed to be correlated. It may then be reasonable to believe that the variances between the models also affect each other. This leads us to consider a parameterisation where Zil = h(µil), Zil being the lth element of Zi and µil := E(Yil). Now, in order to impose systemwise heteroscedasticity we will express the variance as follows:

V[εij] = αj + h(γ1j, µi1) + · · · + h(γPj, µiP),   j = 1, . . . , P.   (6)

Clearly, we want h(γlj, µil) to be positive. Two reasonable choices that fulfil this criterion are h(γlj, µil) = γlj µil² and h(γlj, µil) = γlj|µil|. However, note that in some models the dependent variables are always positive, in which case h(γlj, µil) = γlj µil may be used. This setting has been used by Shukur (2002) and Edgerton et al. (1996). In this article, we will limit ourselves to the case h(γlj, µil) = γlj µil², though all our proposed tests (to follow) may readily be altered to other specifications of h(γlj, µil). Our parameterisation of the second moment of Eq. (2) then becomes

H0 ∪ HA: E[εi•2] = ZiΓ = [1  µi•2] [α′ γ′]′ = α + µi•2 γ,   (7)

where εi•2 = (εi1² · · · εiP²), α = (α1 · · · αP), γ = (γ·1 · · · γ·P) with γ·j = (γ1j · · · γPj)′, µi•2 = (µi1² · · · µiP²) and µij = E[Yij]. Note that, in what follows, •2 always refers to elementwise squares.

This parameterisation becomes somewhat heavy in large systems, since it will contain P + P² parameters. A possible restriction on Eq. (7) is to constrain the cross equation parameters to be zero, i.e. γij = 0 for all i ≠ j, or to use the even simpler parameterisation γij = 0 for i ≠ j, γ11 = · · · = γPP. Such restrictions may be useful, for instance, in very small samples. Note that if one is not interested in the qualitative question of whether the variables are heteroscedastic or not, but rather considers the heteroscedasticity test as a pre-test for choosing between the OLS and the feasible generalized least squares (FGLS) estimator, then another parameterisation may perhaps be preferable. For example, one may adopt the Kelejian (1982) extension of Eq. (7), H0 ∪ HA: Σjk = diag[Zγ], a parameterisation in which the off-diagonal covariance matrices Σjk are also regarded. Details on estimation of the parameters in this model are available in Kelejian (1982), Doornik (1996) and Godfrey and Wickens (1982) and will not be reproduced here.
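The parameterisation of Eq. (7) can be illustrated numerically; the following sketch (hypothetical values, not from the paper) builds the variance surface V[εij] = αj + Σl γlj µil² from the stacked coefficient matrix Γ = [α′ γ′]′:

```python
# Illustration of Eqs. (6)-(7): V[eps_ij] = alpha_j + sum_l gamma_lj * mu_il^2,
# written compactly as Z_i Gamma with Z_i = [1, mu_i1^2, ..., mu_iP^2].
import numpy as np

rng = np.random.default_rng(1)
n, P = 6, 2
mu = rng.normal(size=(n, P))                 # mu_ij = E[Y_ij]
alpha = np.array([1.0, 1.0])                 # intercepts alpha_j
gamma = np.array([[0.1, 0.05],               # gamma[l, j] = gamma_lj
                  [0.05, 0.1]])
Z = np.hstack([np.ones((n, 1)), mu**2])      # Z_i = [1  mu_i^{.2}]
Gamma = np.vstack([alpha, gamma])            # (P+1) x P stacked parameters
var = Z @ Gamma                              # V[eps_ij] for every i, j, Eq. (7)
assert np.allclose(var, alpha + mu**2 @ gamma)
# Under H0 every gamma_lj = 0 and the variance is constant (= alpha_j).
```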


4 OUTLINE OF HYPOTHESIS TESTS

From Eq. (7) we have E[εij²] = αj + γ1j µi1² + · · · + γPj µiP², where µij = E[Yij]. By adding εij² to both sides of this expression we get E[εij²] + εij² = αj + γ1j µi1² + · · · + γPj µiP² + εij², or equivalently,

εij² = αj + γ1j µi1² + · · · + γPj µiP² + εij² − E[εij²].

Putting δij := εij² − E[εij²], we may write this model in matrix form as

εi•2 = ZiΓ + δi,   (8)

where εi•2 = [εi1² · · · εiP²], Zi = [1 Zi1 · · · ZiP] = [1 µi1² · · · µiP²] and δi = [δi1 · · · δiP] is an additive error term with covariance matrix V[δi] = Ωi. In our application ε is unobservable, and so is Z; feasible counterparts are introduced in Section 5. Now, let Γ = [Γ1 · · · ΓP] with Γj = [αj γ1j · · · γPj]′, where {γ1j, . . . , γPj}, j = 1, . . . , P, are our parameters of interest. Our null hypothesis expressed by Eqs. (4) and (7) is thus determined by the parameters {γ1j, . . . , γPj}, j = 1, . . . , P, in Eq. (8). The usual practice of performing tests separately for all P models, and then intuitively combining the results, can be misleading due to mass significance and dependent test statistics. A simple and intuitively appealing solution to this problem is to apply systemwise tests. One convenient way to do this is to test the linear hypothesis H0: RΓ = r in the regression model (8). Our null hypothesis of homoscedasticity may then be expressed as

H0: Γ = [(α1 0 · · · 0)′ · · · (αP 0 · · · 0)′]((P+1)×P),   (9)

or equivalently,

H0: R(P×(P+1)) Γ((P+1)×P) = 0(P×P),   (10)

where R is a matrix of ones and zeros. In other words, our test of systemwise heteroscedasticity is a test of a linear hypothesis in a multivariate regression model. Now, let δH0(ε•2(n×P)) denote the restricted OLS residuals of Eq. (8) under the constraint RΓ = 0, and let δH0∪HA(ε•2(n×P)) be the unrestricted residuals (i.e. with no constraint), where ε•2 denotes the dependent variable of Eq. (8). Then define

ΩR := δ′H0(ε•2(n×P)) δH0(ε•2(n×P))   and   ΩU := δ′H0∪HA(ε•2(n×P)) δH0∪HA(ε•2(n×P))

as the restricted and unrestricted estimators of Ω. Then, following Judge et al. (1984), the Wald, Lagrange multiplier (LM) and LR statistics for testing H0: RΓ = 0 are given by

θWald = n[h(ΩR ΩU⁻¹) − P],   (11)

θLM = n[P − h(ΩU ΩR⁻¹)],   (12)

θLR = n ln(|ΩR|/|ΩU|),   (13)

where |·| and h(·) are the determinant and trace operators, respectively. Following Judge et al. (1984), the null distribution of our statistics θWald, θLM, θLR for testing H0: RΓ = 0 is, asymptotically, χ²(P²), where P is the number of restrictions per equation imposed by H0, and


equals the number of equations in our case. Another approximation is that given by Rao (1973), namely

θF = (q/P²)[(|ΩR|/|ΩU|)^(1/s) − 1],   (14)

where q = hs − r, s = √((P⁴ − 4)/(2P² − 5)), h = n − P − 3/2 and r = (P²/2) − 1. Under the null hypothesis, θF is asymptotically distributed as F(P², q).

It is well known that the asymptotic properties of statistics of the kind in Eqs. (11)–(14) become less and less accurate in small samples as the number of equations grows; see, for example, Laitinen (1978). This effect is expected to be particularly serious in our case, as the number of restrictions to be tested is P². One possibility to improve the small sample properties is to use resampling techniques. A particularly interesting procedure is the so-called residual bootstrap. This method has proved useful for improving the critical values in small samples; see Horowitz (1994) and Mantalos and Shukur (1998). We will discuss this technique in detail in the next section.
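Given the two residual cross-product matrices ΩR and ΩU, the four statistics are simple matrix functions; the following numpy sketch (an illustration under the formulas as stated above, not the authors' implementation — the function name and interface are ours) computes Eqs. (11)–(14):

```python
# Sketch of the Wald, LM, LR and Rao F statistics of Eqs. (11)-(14), given
# restricted/unrestricted residual cross-product matrices Omega_R, Omega_U.
import numpy as np

def system_statistics(Omega_R, Omega_U, n, P):
    """Return (Wald, LM, LR, F) for the P-equation system with n observations."""
    wald = n * (np.trace(Omega_R @ np.linalg.inv(Omega_U)) - P)         # Eq. (11)
    lm = n * (P - np.trace(Omega_U @ np.linalg.inv(Omega_R)))           # Eq. (12)
    lr = n * np.log(np.linalg.det(Omega_R) / np.linalg.det(Omega_U))    # Eq. (13)
    # Rao's F approximation, Eq. (14); note s = 1 when P = 1.
    s = np.sqrt((P**4 - 4) / (2 * P**2 - 5))
    h = n - P - 1.5
    r = P**2 / 2 - 1
    q = h * s - r
    ratio = np.linalg.det(Omega_R) / np.linalg.det(Omega_U)
    f = (q / P**2) * (ratio**(1 / s) - 1)
    return wald, lm, lr, f
```

When the restriction does not bind (ΩR = ΩU), all four statistics are zero, as expected under H0.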

5 FEASIBLE TESTS FOR PARAMETRIC HETEROSCEDASTICITY

Returning to our model (8), i.e. εi•2 = ZiΓ + δi, treated in the previous section, we see immediately that the tests of Eqs. (11)–(14) are not feasible, since both εi and Zi are unobservable. We therefore replace the unobservable variables with observable proxy variables in the following way. Consider the jth equation of Eq. (8), i.e.

εij² = αj + γ1j µi1² + · · · + γPj µiP² + δij.   (15)

The most obvious choice of an observable counterpart of εij is the OLS residual ε̂ij := Yij − Ŷij. It is readily seen that ε̂ij = εij − xi(X′X)⁻¹X′εj, and it may be shown that xi(X′X)⁻¹X′εj vanishes at the rate o(n⁻¹ᐟ⁴) and that ε̂ij² = εij² − o(n⁻¹ᐟ⁴) (Appendix A.1). Further, E[Yij] may be replaced by Ŷij, since Ŷij = E[Yij] + xi(X′X)⁻¹X′εj, or Ŷij = µij + o(n⁻¹ᐟ⁴). In other words,

Ŷi•2 Γ = µi•2 Γ − o(n⁻¹ᐟ⁴)Γ = µi•2 Γ + o(n⁻¹ᐟ⁴).   (16)

Therefore we get from Eqs. (15) and (16) the identity

ε̂i•2 = ẐiΓ + δi + o(n⁻¹ᐟ⁴),   (17)

where Ẑij = Ŷij² rather than Zij = E[Yij²]. The regression model (17) is then an operational version of Eq. (8), at the cost of having an additional error term with non-zero mean, though vanishing at o(n⁻¹ᐟ⁴). In particular, under normality of ε we have limn→∞ V[ε̂i•2] = 2σ•2 (see Appendix A.2). Hence, if the primary regression (1) has covariances among the marginal models, then so does the secondary regression, i.e. Eq. (17). Proceeding exactly as in Section 4, though with Eq. (8) replaced by Eq. (17), we define our feasible test statistics as

θ̂Wald = n[h(Ω̂R Ω̂U⁻¹) − P],   (18)


θ̂LM = n[P − h(Ω̂U Ω̂R⁻¹)],   (19)

θ̂LR = n ln(|Ω̂R|/|Ω̂U|),   (20)

θ̂F = (q/P²)[(|Ω̂R|/|Ω̂U|)^(1/s) − 1],   (21)

where Ω̂R and Ω̂U are the restricted and unrestricted residual covariance matrices of Eq. (17). Our statistics [Eqs. (18)–(21)] are all simple functions of Ω̂R and Ω̂U, and it follows directly from Slutsky's theorem that (Ω̂R − ΩR) →P 0 and (Ω̂U − ΩU) →P 0 suffice for θ̂Wald, θ̂LM, θ̂LR and θ̂F to have the same limiting distributions as θWald, θLM, θLR and θF.

Our proposed LM test of Eq. (19) has some interesting analogies. For the one-dimensional case, Eq. (19) reduces to n times the uncentred R² from the regression ε̂i² = ẐiΓ + δi. This nR² is equivalent to n times the uncentred R² from the regression (ε̂i²/σ̂² − 1) = ẐiΓ + δi, as noted by Davidson and MacKinnon (1993), which in turn is known as the Koenker and Bassett (1982) (KB) test. The KB test is commonly known to be robust against non-normality, as opposed to the BP test, which is not. Hence, our LM test of Eq. (19) may be viewed as a multivariate extension of the KB test.
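For the one-equation case, the nR² form of the statistic can be sketched as follows (simulated data, illustrative only; the KB-style regressand ε̂i²/σ̂² − 1 is used so that the uncentred R² is meaningful):

```python
# One-dimensional sketch of the LM test in its nR^2 (Koenker-Bassett) form:
# regress the standardised squared OLS residuals on the proxy [1, Yhat^2].
import numpy as np

rng = np.random.default_rng(2)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([2.0, 1.0]) + rng.normal(size=n)     # homoscedastic: H0 true
b = np.linalg.lstsq(X, y, rcond=None)[0]              # OLS fit of Eq. (1)
yhat = X @ b
e2 = (y - yhat) ** 2                                  # squared OLS residuals
w = e2 / e2.mean() - 1.0                              # (eps^2/sigma^2 - 1)
Z = np.column_stack([np.ones(n), yhat ** 2])          # proxy Z_i = [1, Yhat_i^2]
g = np.linalg.lstsq(Z, w, rcond=None)[0]              # secondary regression (17)
fitted = Z @ g
R2u = (fitted @ fitted) / (w @ w)                     # uncentred R^2
theta_LM = n * R2u                                    # approx. chi^2(1) under H0
```

Under H0, theta_LM should be small relative to the χ²(1) critical value; large values indicate variance that rises with the squared fitted values.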

As mentioned previously, tests of multivariate restrictions usually have true sizes that seriously under- or overestimate the nominal sizes in small or moderate samples. Hence we shall consider bootstrapped versions of the tests proposed above. Consider our regression model (17), and let Γ̂ and δ̂i be its OLS regression parameters and residuals, respectively. The residual bootstrap technique for testing heteroscedasticity is then given by the following algorithm.

BOOTSTRAP ALGORITHM

(i) Calculate the OLS residuals δ̂(n×P) of Eq. (17).
(ii) Let δ̃i* = [δ̃i1* · · · δ̃iP*] denote an observation resampled with replacement from δ̂. Further, let δ̄* = [δ̄1* · · · δ̄P*], where δ̄j* = Σⁿi=1 δ̃ij*/n. The centred bootstrapped residuals are then defined as δi* = δ̃i* − δ̄*. Next, define εi•2* = ẐiΓ̂0 + δi*, where Γ̂0 is the OLS estimate of Γ under the null. Then {ε•2*, Ẑ} is a residual bootstrap version of {ε̂•2, Ẑ}.
(iii) Calculate the restricted and unrestricted residuals from each bootstrap sample {εb•2*, Ẑb} and calculate the test statistic θb*.
(iv) Calculate the achieved significance level by

pBoot = (1 + #{θb* ≥ θObs, b = 1, . . . , B}) / (B + 1).

Details of the residual bootstrap are given in Freedman (1981). Preliminary simulation results (omitted here) indicate that our Wald and LR tests, i.e. Eqs. (18) and (20), have rather bad small sample properties with regard to the size of the test. In contrast, the F and the LM tests show fairly good size properties. Hence, there is no reason for bootstrapping the F and LM tests, and we only include bootstrapped versions of the Wald and LR tests in our simulation. These will be denoted WaldBoot and LRBoot, respectively.
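Steps (ii)–(iv) of the algorithm can be sketched generically as follows (a hypothetical helper, not the authors' SAS/IML code; `stat_fn` stands for any of the statistics in Eqs. (18)–(21) evaluated on a bootstrap sample):

```python
# Residual bootstrap p-value for a generic system test statistic:
# resample centred residuals, rebuild the dependent variable under H0,
# and compare the observed statistic with the bootstrap distribution.
import numpy as np

def residual_bootstrap_pvalue(stat_fn, Z, Gamma0_hat, resid, theta_obs, B=999, seed=0):
    """stat_fn maps an (n x P) dependent-variable matrix to a scalar statistic."""
    rng = np.random.default_rng(seed)
    n = resid.shape[0]
    count = 0
    for _ in range(B):
        idx = rng.integers(0, n, size=n)      # step (ii): resample with replacement
        d = resid[idx]
        d = d - d.mean(axis=0)                # centre the bootstrap residuals
        e2_star = Z @ Gamma0_hat + d          # dependent variable under H0
        if stat_fn(e2_star) >= theta_obs:     # step (iii): recompute the statistic
            count += 1
    return (1 + count) / (B + 1)              # step (iv): achieved significance level
```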


6 THE MONTE CARLO DESIGN

The finite sample properties of our tests treated in Section 5 are unknown. It is therefore important to examine whether the actual behaviour of these tests is adequately approximated by asymptotic theory. In the absence of exact results, it is necessary to investigate the finite sample performance of the statistics by means of simulation experiments. When investigating the properties of a classical test procedure, three aspects are of prime importance. First, we wish to see if the actual size of the test is close to the nominal size (used to decide the critical region for the rejection of the null hypothesis). Given that the actual size is a reasonable approximation to the nominal size, we then wish to investigate the power of the test and the robustness of the test to violations of imposed assumptions (such as non-normality). In general, when comparing different tests we will therefore prefer those whose (i) actual size lies close to the nominal size and, given that (i) holds, that (ii) have the greatest power and (iii) are least sensitive to violations of the assumptions, with respect to maintained size and power. Other relevant criteria, such as which test has the soundest theoretical basis, or which test is the simplest to perform, cannot be judged quantitatively. Therefore, we leave these aspects to be judged by the reader.

In a Monte Carlo study we calculate the estimated size by observing how many times the null is rejected in repeated samples under conditions where the null is true. However, this estimated size is associated with a source of uncertainty due to the finite number of replications. Let us say that we define the true size of a test at a nominal size of 5% to be 'reasonable', or not severely biased, when it lies between 4% and 6%. Clearly then, we need the confidence limits of our simulations to be at least as narrow as π ± 1%. It may be shown that, at an actual size π = 0.05, and using 10,000 replications, π ± 0.005 gives an approximate 95% confidence interval for π. Hence, if the estimated size of any of our tests falls outside the interval 0.04–0.06, for π = 0.05, we conclude that the actual size of the test systematically differs from the nominal size. In other words, an estimate outside the above-mentioned range might be viewed as being inconsistent with the assumption that the corresponding finite sample value equals its asymptotically achieved value. The calculations reported here were performed using the SAS/IML program package.
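The quoted simulation precision follows from a one-line binomial calculation (a back-of-envelope check, not part of the paper):

```python
# Precision of an estimated rejection frequency with R = 10,000 replications:
# the 95% interval for the estimate is pi +/- 1.96 * sqrt(pi * (1 - pi) / R).
import math

pi, R = 0.05, 10_000
half_width = 1.96 * math.sqrt(pi * (1 - pi) / R)
print(round(half_width, 4))   # ~0.0043, consistent with the +/- 0.005 bound above
```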

6.1 Factors that are Held Constant in the Monte Carlo Experiment

As the null distributions of our test statistics [Eqs. (18)–(21)] rely on the normality assumption, it will be of great interest to examine their robustness to non-normality. Therefore, we shall consider non-normal as well as normal distributions of the disturbances. We will make use of one multivariate skew distribution, one symmetric heavy-tailed distribution and one normal distribution. The normally distributed variate is defined by

ε′N := L(P×P)η(P×n), (22)

where ηij iid∼ N(0, 1) and L is the Cholesky root of a covariance matrix Σ, i.e. Σ = LL′. Our multivariate skew variable is defined as follows. Let Gamma(λ, C) denote a gamma distribution with location parameter λ and scale parameter C. The skew distribution is then defined by

ε′K1 := L(P×P)η(P×n),   (23)

where ηij iid∼ (Gamma(1, 9) − 9)/√9. The kurtosis of a gamma variate is given by β2 = 3 + 6/C and the skewness is given by 2/√C (see Johnson et al., 1994). Hence the (marginal) skewness of


εK1 is 2/3 and the kurtosis is 3 + 2/3, which is a moderate skewness and kurtosis compared to a normal variate. Our symmetric distribution is defined by

ε′K2 := L(P×P)η(P×n),   (24)

where ηij = πij Zij Uij, with

p(πij = 1) = 0.5,   p(πij = −1) = 0.5,   Zij iid∼ [Gamma(1, 0.25)]^0.5.

The kurtosis of Eq. (24) is given by

β2 = 9Γ(C + 4τ)Γ(C) / (5[Γ(C + 2τ)]²)

(Johnson, 1987). For our choice of parameters we get β2 = 9, which is three times that of a normal variate. Note that all three variates have their first two moments identical, namely 0 and 1. The εK1 and εK2 distributions will be used in the experiment in order to examine the robustness of the heteroscedasticity tests to non-normal disturbances. Details of factors varying in the experiment are displayed in Table I below.
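The three disturbance generators can be sketched as follows. The gamma parameterisations are assumptions on our part: "Gamma(1, 9)" is read as shape 9 (centred and scaled to mean 0, variance 1), "Gamma(1, .25)" as shape 0.25, and Uij is taken to be uniform on (0, √3); these readings are chosen so that each ηij has mean 0, variance 1, and the stated skewness/kurtosis:

```python
# Illustrative generators for the disturbances of Eqs. (22)-(24); the exact
# gamma/uniform parameterisations are assumptions matched to the stated moments.
import numpy as np

rng = np.random.default_rng(3)
P, n = 2, 100_000
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
L = np.linalg.cholesky(Sigma)                      # Sigma = L L'

eta_N = rng.normal(size=(P, n))                                  # Eq. (22): normal
eta_K1 = (rng.gamma(shape=9.0, size=(P, n)) - 9.0) / 3.0         # Eq. (23): skew 2/3
sign = rng.choice([-1.0, 1.0], size=(P, n))                      # random sign pi_ij
Zg = np.sqrt(rng.gamma(shape=0.25, size=(P, n)))                 # Z_ij = Gamma^0.5
U = rng.uniform(0.0, np.sqrt(3.0), size=(P, n))                  # assumed U_ij
eta_K2 = 2.0 * sign * Zg * U                                     # Eq. (24): kurtosis 9
                                                                 # (factor 2 gives var 1)
eps_N, eps_K1, eps_K2 = (L @ e for e in (eta_N, eta_K1, eta_K2))
```

The Cholesky factor L then induces the same cross-equation correlation in all three cases, so only the marginal shape differs between designs.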

6.2 Factors that Vary in the Monte Carlo Experiment

Several factors are expected to affect the size and power properties of our heteroscedasticity tests. Since we rely on asymptotic properties in several of our models in Section 5, the number of observations is one such prime factor. We have therefore investigated samples typical of small, medium, large and very large sizes, i.e. sample sizes of between 30 and 1000 observations. Another main interest of this article lies in the analysis of systemwise tests, and thus the number of equations to be estimated is also of central importance. As the number of equations grows the computation time becomes longer, and we chose a system with five equations as our largest model when considering the size and power properties of the tests. Medium and small size models are represented by two equations and one equation, respectively.

As our parameterisation of heteroscedasticity depends on the expectation of the response variable, i.e. V(εij) ∝ E[Yij]², the test will depend on the regression parameters, which therefore need to be specified explicitly. The extent of heteroscedasticity depends both on the

TABLE I Factors that Vary for Different Models – Size and Power Calculations.

Factor                         Symbol   Value
Number of observations         N        30, 40, 60, 100, 200, 500, 1000
Distribution of disturbances   ε        εK1, εK2, εN
Number of equations            P        1, 2, 5


888 H. E. T. HOLGERSSON AND G. SHUKUR

regression parameters β and the parameter Γ of Eq. (17). In our experiment we have used the following settings for the five-, two- and one-dimensional systems:

βP=5 = [ 2 2 2 2 2
         1 2 3 1 2
         2 3 1 2 3
         3 1 2 3 1 ],   βP=2 = [ 2 2
                                 1 2
                                 2 3
                                 3 1 ],   βP=1 = [ 2
                                                   1
                                                   2
                                                   3 ].

Γ5 = [ 1    1    1    1    1
       0.1  0.05 0.03 0.02 0.01
       0.05 0.1  0.05 0.03 0.02
       0.03 0.05 0.1  0.05 0.03
       0.02 0.03 0.05 0.1  0.05
       0.01 0.02 0.03 0.05 0.1  ],   Γ2 = [ 1    1
                                            0.1  0.05
                                            0.05 0.1  ],   Γ1 = [ 1
                                                                  0.1 ].

At this point it should be stressed that these parameters have no natural meaning in terms of power, because the power of the tests will be affected by (a) the shape of X, (b) the value of β and (c) the value of Γ. For example, holding both β and Γ fixed while increasing the spread in X, or holding Γ and X fixed while increasing β, would result in increasing power. The main point of our power simulation is to be able to distinguish between different powers among our four proposed tests for a specific group of heteroscedasticity parameters, and to establish consistency of the tests. Details of the factors varied in the experiment are displayed in Table II below.
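For illustration, disturbances with the variance structure V(εij) ∝ E[Yij]² can be generated as follows. The helper name and the exact mapping from Γ to the variances (an intercept row plus weights on the squared expected responses) are our reading of Eq. (17), which is not reproduced in this excerpt, so treat this as an assumption-laden sketch rather than the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(7)


def make_hetero_errors(X, beta, gamma, rng=rng):
    """Draw normal disturbances whose variance grows with E[Y_ij]^2.

    Assumed (hypothetical) variance law, our reading of Eq. (17):
        sigma2_ij = gamma[0, j] + sum_l gamma[1 + l, j] * E[Y_il]**2,
    i.e. the first row of gamma is an intercept and the remaining rows
    weight the squared expected responses of each equation.
    """
    ey = X @ beta                                    # n x P expected responses
    z = np.column_stack([np.ones(len(X)), ey ** 2])  # n x (1 + P) variance design
    sigma2 = z @ gamma                               # n x P variances
    return rng.standard_normal(ey.shape) * np.sqrt(sigma2)


# the paper's one-equation setting: a constant plus three U[0, 1] regressors
n = 200
X = np.column_stack([np.ones(n), rng.uniform(0.0, 1.0, (n, 3))])
beta1 = np.array([[2.0], [1.0], [2.0], [3.0]])   # beta_{P=1}
gamma1 = np.array([[1.0], [0.1]])                # Gamma_1
eps = make_hetero_errors(X, beta1, gamma1)
```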

7 ANALYSIS OF THE SIZE

In this section, we present the most interesting results of our Monte Carlo experiment concerning the size of the heteroscedasticity tests proposed in this study (some further results regarding the robustness of the tests are available from the authors). We analyse the size of the tests in systems ranging from one to five equations. The size has been estimated by calculating the rejection frequencies in 10,000 replications. In this study, as mentioned earlier, we calculated the estimated size by observing how many times the null is rejected in repeated samples under conditions where the null is true. By varying factors like those described in the previous section, we can obtain a succession of estimated sizes under different conditions. In

TABLE II Factors that are Held Constant in the Monte Carlo Experiment.

Factor                                Value
Properties of X in repeated samples   Fixed
Structure of the error terms          White noise
Number of regressors                  3
Distribution of fixed* regressors     U[0, 1]
Regression parameters                 β
Level of heteroscedasticity           Γ
Number of resamples                   B = 99

*The reader may wonder why the regressors, which are assumed fixed, seem to be stochastic. The reason is that a considerable improvement in precision can be obtained by drawing separate samples at each replication (Edgerton, 1996). Hence we save computer time by this approach.
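The bootstrapped variants referred to below (WaldBoot and LRBoot) can be sketched with a residual bootstrap in the spirit of Freedman (1981), using the B = 99 resamples from Table II. Resampling centred residuals imposes the homoscedastic null. The function names and the Glejser-flavoured demonstration statistic are our own illustrative choices, not the paper's exact statistics.

```python
import numpy as np

rng = np.random.default_rng(11)


def bootstrap_pvalue(stat_fn, y, X, B=99, rng=rng):
    """Residual-bootstrap p-value under the homoscedastic null.

    Following Freedman (1981): fit by OLS, resample the centred residuals
    with replacement (which imposes homoscedasticity), rebuild y*, and
    recompute the statistic B times.  B = 99 matches Table II.  `stat_fn`
    is any scalar heteroscedasticity statistic; the paper's Wald and LR
    statistics would slot in here.
    """
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta_hat
    resid = resid - resid.mean()
    t_obs = stat_fn(y, X)
    t_boot = np.empty(B)
    for b in range(B):
        y_star = X @ beta_hat + rng.choice(resid, size=len(y), replace=True)
        t_boot[b] = stat_fn(y_star, X)
    return (1 + np.sum(t_boot >= t_obs)) / (B + 1)   # usual +1 correction


def demo_stat(y, X):
    """Hypothetical statistic: n times the squared correlation between
    absolute residuals and fitted values, Glejser-flavoured."""
    fit = X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return len(y) * np.corrcoef(np.abs(y - fit), fit)[0, 1] ** 2


n = 100
X = np.column_stack([np.ones(n), rng.uniform(0.0, 1.0, (n, 3))])
y = X @ np.array([2.0, 1.0, 2.0, 3.0]) + rng.standard_normal(n)
p_value = bootstrap_pvalue(demo_stat, y, X)
```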


TABLE III Size for 1, 2 and 5 Equations, Respectively, for Normally Distributed Errors.

N      LM     LR     W      F      WaldBoot  LRBoot

P = 1
30     0.052  0.061  0.069  0.050  0.052     0.050
40     0.051  0.057  0.063  0.049  0.051     0.050
60     0.050  0.053  0.057  0.048  0.050     0.051
100    0.050  0.052  0.054  0.049  0.049     0.049
200    0.051  0.052  0.053  0.050  0.050     0.050
500    0.050  0.050  0.050  0.049  0.050     0.050
1000   0.049  0.049  0.050  0.049  0.049     0.051

P = 2
30     0.044  0.077  0.113  0.047  0.050     0.050
40     0.045  0.069  0.093  0.047  0.051     0.050
60     0.047  0.061  0.077  0.048  0.050     0.050
100    0.047  0.056  0.064  0.047  0.049     0.050
200    0.049  0.053  0.058  0.049  0.050     0.050
500    0.051  0.052  0.054  0.050  0.050     0.050
1000   0.050  0.051  0.051  0.049  0.049     0.050

P = 5
30     0.035  0.249  0.527  0.058  0.047     0.048
40     0.041  0.178  0.374  0.058  0.050     0.049
60     0.045  0.123  0.233  0.055  0.050     0.050
100    0.048  0.088  0.140  0.054  0.050     0.050
200    0.050  0.067  0.089  0.053  0.050     0.050
500    0.051  0.057  0.065  0.052  0.050     0.050
1000   0.051  0.054  0.057  0.051  0.050     0.051

Note: The italic numbers indicate bad performance as defined earlier, i.e. when the results lie outside the ±1% interval for actual size.

general, the closer an estimated size is to the nominal size, the better we consider a test to be. To show the main effects of the factors discussed earlier on the performances of the tests, we display the estimated sizes in our tables.
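The size estimation described above amounts to a simple rejection-frequency loop. The sketch below uses a generic auxiliary-regression LM statistic (n·R² from regressing squared OLS residuals on the regressors, Breusch–Pagan/Koenker style) purely as a stand-in for the paper's tests. With 10,000 replications the Monte Carlo standard error at the 5% level is about √(0.05·0.95/10,000) ≈ 0.0022, comfortably inside the ±1% band used in the tables.

```python
import numpy as np

rng = np.random.default_rng(3)


def lm_stat(y, X):
    """n * R^2 from regressing squared OLS residuals on the regressors,
    a Breusch-Pagan/Koenker-style stand-in for the paper's LM statistic."""
    e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    u = e ** 2
    fit = X @ np.linalg.lstsq(X, u, rcond=None)[0]
    r2 = 1.0 - np.sum((u - fit) ** 2) / np.sum((u - u.mean()) ** 2)
    return len(y) * r2


def empirical_size(n=100, reps=2000, rng=rng):
    """Rejection frequency under a true homoscedastic null, 5% level."""
    X = np.column_stack([np.ones(n), rng.uniform(0.0, 1.0, (n, 3))])
    beta = np.array([2.0, 1.0, 2.0, 3.0])
    crit = 7.8147   # 0.95 quantile of chi-squared with 3 d.f.
    rej = sum(lm_stat(X @ beta + rng.standard_normal(n), X) >= crit
              for _ in range(reps))
    return rej / reps
```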

7.1 Size Properties for Normally Distributed Errors

In this sub-section, in addition to our four tests for heteroscedasticity, we present bootstrap results for the Wald and LR tests, since these tests were shown to behave badly in our preliminary investigation.

In Table III, we present the estimated size of our proposed heteroscedasticity tests in systems with one, two and five equations when the errors are normally distributed. The main finding is that all six tests behave well in the one-equation case, except for the W test, which overestimates the size in small samples. This result carries over to the two-equation case, with even worse results for the W test and bad performance for the LR test in small and medium size samples. In large systems, when the number of equations is equal to five, the W and LR tests perform badly in the sense that they overestimate the nominal size in small, medium and rather large samples, while the LM test tends to underestimate it when the sample size is equal to 30 observations. The F test shows the best performance in almost all situations. The results also show that, when using the bootstrap technique, the WaldBoot and LRBoot tests perform satisfactorily, rejecting around 5% of the time, as they should, in all situations.

7.2 Size Properties for Symmetric Non-normal Errors

In Table IV, we present results for the size properties for symmetric non-normal errors. The table shows the robustness of our tests to a symmetric but heavy-tailed error distribution.


TABLE IV Size for 1, 2 and 5 Equations, Respectively, for K2 Distributed Errors.

N      LM     LR     W      F      WaldBoot  LRBoot

P = 1
30     0.055  0.064  0.073  0.053  0.056     0.055
40     0.052  0.059  0.065  0.051  0.053     0.055
60     0.051  0.055  0.059  0.050  0.052     0.053
100    0.050  0.052  0.055  0.050  0.052     0.051
200    0.050  0.051  0.052  0.050  0.051     0.050
500    0.048  0.049  0.049  0.048  0.049     0.051
1000   0.050  0.050  0.050  0.050  0.051     0.049

P = 2
30     0.048  0.082  0.119  0.051  0.057     0.057
40     0.048  0.071  0.098  0.050  0.055     0.055
60     0.045  0.060  0.076  0.046  0.052     0.053
100    0.047  0.055  0.063  0.047  0.051     0.051
200    0.048  0.052  0.056  0.048  0.051     0.050
500    0.047  0.049  0.050  0.047  0.049     0.041
1000   0.048  0.049  0.050  0.047  0.048     0.050

P = 5
30     0.056  0.300  0.566  0.090  0.068     0.066
40     0.060  0.216  0.406  0.083  0.064     0.063
60     0.062  0.148  0.258  0.076  0.060     0.060
100    0.062  0.105  0.158  0.069  0.058     0.057
200    0.057  0.077  0.098  0.060  0.053     0.053
500    0.054  0.061  0.068  0.055  0.051     0.052
1000   0.053  0.056  0.059  0.053  0.051     0.050

Note: The italic numbers indicate bad performance as defined earlier, i.e. when the results lie outside the ±1% interval for actual size.

In general, the results in the one-equation case are rather similar to those when the errors were normally distributed, with the exception that the LR test now overrejects in small samples; i.e. the properties of the tests seem to be only slightly affected by the non-normality. However, in systems with two equations the W and LR tests overestimate the nominal size in small and medium size samples. This effect becomes even more accentuated in the case of five equations for almost all the tests; even the WaldBoot and LRBoot tend to overreject, but only in small samples. Note that all tests seem to attain their nominal sizes asymptotically. This last result stands in stark contrast to the frequently used Breusch and Pagan (1979) test, which is well known to be sensitive to non-normal kurtosis in the sense that its type-one error approaches 100% as the sample size increases.

7.3 Size Properties for Skew Non-normal Errors

In Table V, we present our results on the robustness to the skewed K1 distribution. The main finding is that the K1 distribution causes an over-rejection among the tests. All tests behave fairly well in the one-equation case, but with some over-rejection in small samples. Moving to the case of two equations, we see that the LR and W tests overestimate the size in small and medium size samples, and that the WaldBoot and LRBoot also overreject, but only in small samples. Results from the case of five equations show that the LR, W and F tests overreject in small, medium and rather large samples, especially the LR and W tests, which reject around 30% and 50% of the time, respectively, under the null hypothesis. Note that, when looking at the results for the LM test, it seems that, in small samples, this test performs better in the cases of two and five equations than in the case of one equation, which is rather remarkable. The reason can be stated as follows: it is noticeable that the K1 distribution of errors


TABLE V Size for 1, 2 and 5 Equations, Respectively, for K1 Distributed Errors.

N      LM     LR     W      F      WaldBoot  LRBoot

P = 1
30     0.065  0.075  0.084  0.063  0.064     0.065
40     0.062  0.069  0.075  0.060  0.061     0.062
60     0.060  0.064  0.068  0.059  0.060     0.059
100    0.056  0.059  0.061  0.056  0.056     0.055
200    0.052  0.054  0.055  0.052  0.052     0.053
500    0.052  0.053  0.053  0.052  0.052     0.051
1000   0.050  0.051  0.051  0.050  0.050     0.050

P = 2
30     0.057  0.095  0.133  0.058  0.065     0.065
40     0.056  0.083  0.111  0.057  0.063     0.062
60     0.054  0.070  0.086  0.056  0.059     0.059
100    0.055  0.064  0.073  0.055  0.058     0.055
200    0.054  0.059  0.063  0.052  0.056     0.053
500    0.052  0.054  0.055  0.051  0.053     0.051
1000   0.052  0.052  0.053  0.050  0.051     0.050

P = 5
30     0.043  0.279  0.556  0.071  0.057     0.058
40     0.050  0.203  0.402  0.069  0.058     0.058
60     0.056  0.141  0.256  0.067  0.058     0.057
100    0.057  0.100  0.154  0.064  0.056     0.057
200    0.055  0.074  0.097  0.058  0.054     0.054
500    0.054  0.061  0.069  0.055  0.053     0.052
1000   0.052  0.056  0.059  0.053  0.052     0.051

Note: The italic numbers indicate bad performance as defined earlier, i.e. when the results lie outside the ±1% interval for actual size.

causes a slight overrejection of the size, especially in small samples. The LM test generally tends to underreject the size, especially in small samples and large systems of equations, even in those situations where the errors are either normally or non-normally distributed. Hence, an overrejection of a test that often tends to underreject the size will make it seem not to suffer severe bias.

When comparing these results with those of the previous subsection, we find that the effect of the skewed distribution is noticeable in small samples and large systems, but disappears in large samples and small systems, becoming almost identical to the results from the symmetric non-normal error case. All tests converge to the nominal size as the sample size increases, and the WaldBoot and LRBoot converge faster than the others.

8 ANALYSIS OF THE POWER

In this section, we present the most interesting results regarding the power properties of the tests (some further robustness results are available from the authors). We analyse the power of six versions of our heteroscedasticity test using sample sizes of 30, 40, 60, 100, 200, 500 and 1000 observations. The power functions were estimated by calculating rejection frequencies from 100,000 replications using the parameterisation of heteroscedasticity presented in Section 6. One could, of course, calculate and present size-corrected power functions that give more correct information about the power of the tests. However, there is one drawback in using this method, namely that the reader gets a good idea about the real power but a misleading idea about the performance of the size (when corrected). For this reason, we decided to use the


TABLE VI Power for 1, 2 and 5 Equations, Respectively, for Normally Distributed Errors.

N      LM      LR      W       F      WaldBoot  LRBoot

P = 1
30     0.111   0.124   0.136×  0.107  0.109     0.110
40     0.143   0.154   0.163×  0.140  0.139     0.138
60     0.214   0.223   0.231   0.212  0.209     0.210
100    0.380   0.387   0.393   0.378  0.370     0.371
200    0.723   0.726   0.729   0.722  0.708     0.707
500    0.991   0.991   0.991   0.991  0.989     0.988
1000   1       1       1       1      1         1

P = 2
30     0.114   0.168×  0.220×  0.121  0.119     0.119
40     0.148   0.191×  0.233×  0.152  0.151     0.151
60     0.226   0.262×  0.296×  0.228  0.227     0.225
100    0.397   0.424   0.448×  0.399  0.394     0.395
200    0.768   0.779   0.788×  0.768  0.758     0.759
500    0.998   0.998   0.998   0.997  0.997     0.997
1000   1       1       1       1      1         1

P = 5
30     0.086×  0.405×  0.677×  0.139  0.114     0.111
40     0.129   0.373×  0.599×  0.173  0.150     0.148
60     0.220   0.407×  0.573×  0.260  0.238     0.228
100    0.429   0.561×  0.669×  0.464  0.442     0.432
200    0.854   0.892×  0.920×  0.868  0.856     0.847
500    1       1       1×      1      1         1
1000   1       1       1       1      1         1

Note: Cells marked with an × sign indicate that the true size is too bad for the power to be meaningful.

TABLE VII Power for 1, 2 and 5 Equations, Respectively, for K2 Distributed Errors.

N      LM      LR      W       F       WaldBoot  LRBoot

P = 1
30     0.094   0.106×  0.116×  0.091   0.094     0.095
40     0.103   0.112   0.121×  0.101   0.104     0.105
60     0.125   0.132   0.138   0.123   0.125     0.126
100    0.174   0.179   0.184   0.173   0.172     0.176
200    0.298   0.301   0.304   0.297   0.292     0.294
500    0.621   0.623   0.624   0.621   0.608     0.609
1000   0.896   0.896   0.896   0.896   0.886     0.887

P = 2
30     0.090   0.139×  0.188×  0.096   0.103     0.102
40     0.103   0.142×  0.178×  0.107   0.115     0.113
60     0.126   0.154×  0.181×  0.128   0.136     0.137
100    0.184   0.203   0.222×  0.184   0.191     0.192
200    0.337   0.349   0.362   0.336   0.337     0.340
500    0.728   0.732   0.737   0.726   0.719     0.716
1000   0.967   0.968   0.097   0.967   0.963     0.963

P = 5
30     0.086×  0.380×  0.644×  0.136×  0.105×    0.105×
40     0.105×  0.314×  0.521×  0.142×  0.113×    0.113×
60     0.135×  0.271×  0.412×  0.161×  0.133×    0.133×
100    0.192×  0.282×  0.374×  0.211×  0.183     0.183
200    0.374   0.432×  0.489×  0.388×  0.354     0.354
500    0.853   0.869×  0.883×  0.858   0.838     0.838
1000   0.998   0.998   0.999   0.998   0.998     0.998

Note: Cells marked with an × sign indicate that the true size is too bad for the power to be meaningful.


TABLE VIII Power for 1, 2 and 5 Equations, Respectively, for K1 Distributed Errors.

N      LM      LR      W       F       WaldBoot  LRBoot

P = 1
30     0.184×  0.201×  0.217×  0.179×  0.179×    0.178×
40     0.225×  0.239×  0.253×  0.221×  0.219×    0.219×
60     0.305×  0.316×  0.326×  0.301   0.297×    0.295
100    0.447   0.455   0.462×  0.445   0.438     0.440
200    0.725   0.728   0.731   0.724   0.711     0.710
500    0.979   0.979   0.980   0.979   0.975     0.975
1000   1       1       1       1      1         1

P = 2
30     0.149   0.214×  0.275×  0.156   0.160×    0.163×
40     0.190   0.242×  0.291×  0.195   0.199×    0.200×
60     0.272   0.313×  0.351×  0.276   0.278     0.278
100    0.433   0.461×  0.487×  0.434   0.434     0.436
200    0.752   0.763   0.773×  0.752   0.745     0.746
500    0.993   0.993   0.993   0.993   0.991     0.991
1000   1       1       1       1      1         1

P = 5
30     0.097   0.426×  0.694×  0.155×  0.128     0.128
40     0.144   0.394×  0.614×  0.192×  0.165     0.160
60     0.230   0.411×  0.571×  0.268×  0.240     0.236
100    0.407   0.532×  0.637×  0.438×  0.408     0.401
200    0.794   0.841×  0.876×  0.809   0.787     0.777
500    0.999   1×      1×      1       0.999     0.999
1000   1       1       1       1      1         1

Note: Cells marked with an × sign indicate that the true size is too bad for the power to be meaningful.

rejection rates at nominal significance levels and leave the reader to make inferential statements regarding the performances of both the size and the power. In Tables VI–VIII, we present results for all the tests proposed and investigated in this study.

As shown in the previous section, the LM, F, WaldBoot and LRBoot tests behave better than the others, while the W and LR tests overreject the size. In this context it is important to mention that, even if a correctly given size is not sufficient to ensure good power performance, it is a prerequisite. We will therefore merely discuss the results for the LM, F, WaldBoot and LRBoot tests in this section. Power functions for the other tests are only meaningful in large samples and small systems.

In Tables VI–VIII, we present the power results of our tests when the errors are normally, symmetric non-normally and skew non-normally distributed. The power results satisfy the expected property of increasing with the sample size to reach their maximum value of one. Looking at these tables, we cannot find any noticeable differences between the results for the different distributions, except that when the errors are both skewed and non-normal, the power tends to be higher than in the other situations.
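The power entries are produced by the same rejection-frequency loop as the size entries, but with heteroscedastic data. A minimal single-equation sketch, assuming the variance law σi² = 1 + 0.1·E[Yi]² that Γ1 suggests and reusing an auxiliary-regression LM statistic as a stand-in for the paper's tests, shows the expected behaviour: rejection frequencies climb towards one as n grows.

```python
import numpy as np

rng = np.random.default_rng(5)


def lm_stat(y, X):
    """n * R^2 from the auxiliary regression of squared OLS residuals on X;
    an illustrative stand-in for the paper's LM statistic."""
    e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    u = e ** 2
    fit = X @ np.linalg.lstsq(X, u, rcond=None)[0]
    return len(y) * (1.0 - np.sum((u - fit) ** 2) / np.sum((u - u.mean()) ** 2))


def empirical_power(n, reps=400, rng=rng):
    """Rejection frequency when V(eps_i) = 1 + 0.1 * E[Y_i]^2 (our reading
    of the Gamma_1 setting), at the nominal 5% level."""
    beta = np.array([2.0, 1.0, 2.0, 3.0])
    X = np.column_stack([np.ones(n), rng.uniform(0.0, 1.0, (n, 3))])
    ey = X @ beta
    sd = np.sqrt(1.0 + 0.1 * ey ** 2)   # heteroscedastic standard deviations
    crit = 7.8147                        # chi-squared(3), 0.95 quantile
    rej = sum(lm_stat(ey + sd * rng.standard_normal(n), X) >= crit
              for _ in range(reps))
    return rej / reps
```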

9 CONCLUSIONS

In this article we proposed a testing technique for multivariate heteroscedasticity, expressed as a test of linear restrictions in a (multivariate) model. The test is applicable in a wide class of linear models such as random vectors or multiple regressions. Existing tests for multivariate


heteroscedasticity are, in our opinion, either too complicated to perform and interpret or oversimplified, in the sense that they rely on unrealistic assumptions.

Our test is, to some extent, easy to apply and interpret. It is also (asymptotically) robust to non-normality, as opposed to many other tests. We therefore believe that our test can provide a useful supplement to existing heteroscedasticity tests. Some relevant properties of the test have been examined in a Monte Carlo experiment. We have studied the size and power properties when the error terms follow a normal distribution, a symmetric non-normal distribution and a skew non-normal distribution. This combination should cover a wide range of departures from normality among the errors that are likely to affect the performance of the test.

A large number of models were investigated regarding the size of the test, where the distributions of the error terms, the number of equations and the number of observations have been varied. For each model we have performed 100,000 replications and studied six different versions of the test. The power properties have been investigated using 100,000 replications per model, where, in addition to the properties mentioned above, we imposed a specific heteroscedasticity parameterisation.

When the errors follow a normal distribution, the analysis reveals that almost all the tests perform satisfactorily regarding the size, especially in the case of single-equation tests. The W and LR tests perform badly when the number of equations increases and the number of observations decreases. The results also indicate that the effect of the non-normal distributions is noticeable in small samples and large systems, but disappears in large samples and small systems, becoming almost identical to the results from the symmetric non-normal error case. All tests, however, converge to the nominal size when the sample size increases, with the WaldBoot and LRBoot converging faster than the others.

With regard to the power of the tests, we find that the sample size plays an important role in the sense that the power functions approach unity when the sample size increases. The simulation results do not indicate that the system size has any noticeable effect on the power properties. The power becomes a little higher, especially in the case of skewed non-normal errors.

Acknowledgements

The authors are grateful to the Swedish Research Council (Vetenskapsrådet) and the Royal Swedish Academy of Sciences for financial support.

References

Bewley, R. (1986). Allocation Models, Specification, Estimation and Applications. Ballinger Publishing Company, Cambridge, Massachusetts.
Bickel, P. J. (1978). Tests for heteroscedasticity, nonlinearity and nonadditivity. Ann. Stat., 6, 266–291.
Bilodeau, M. and Brenner, D. (1999). Theory of Multivariate Statistics. Springer-Verlag, New York.
Breusch, T. S. and Pagan, A. R. (1979). A simple test for heteroscedasticity and random coefficient variation. Econometrica, 47, 1287–1294.
Davidson, R. and MacKinnon, J. G. (1993). Estimation and Inference in Econometrics.
Doornik, J. A. (1996). Testing error autocorrelation and heteroscedasticity (Unpublished Manuscript).
Edgerton, D. E. (1996). Should stochastic or non-stochastic exogenous variables be used in Monte Carlo experiments? Econ. Lett., 53, 153–159.
Edgerton, D. and Shukur, G. (1999). Testing autocorrelation in a system perspective. Economet. Rev., 18(4), 343–386.
Edgerton, D., Assarsson, B. et al. (1996). The Econometrics of Demand Systems. Kluwer Academic Press, Dordrecht.
Freedman, D. A. (1981). Bootstrapping regression models. Ann. Stat., 9(6), 1218–1228.
Glejser, H. (1969). A new test for heteroscedasticity. J. Am. Stat. Assoc., 64, 316–323.
Godfrey, L. G. and Wickens, M. R. (1982). Tests of misspecification using locally equivalent alternative models. In: Chow, G. C. and Corsi, P. (Eds.), Evaluating the Reliability of Macroeconomic Models. Wiley, London.
Goldfeld, S. M. and Quandt, R. E. (1965). Some tests for homoscedasticity. J. Am. Stat. Assoc., 60, 539–547.
Horowitz, J. L. (1994). Bootstrap-based critical values for the information matrix test. J. Econometrics, 61, 395–411.


Huang, H., McGuirk, A. and Driscoll, P. (1993). Misspecification testing and structural change in the demand for meat. Presented at the AAA meeting in Orlando.
Johnson, M. E. (1987). Multivariate Statistical Simulation. Wiley, New York.
Johnson, N. J., Kotz, S. et al. (1994). Continuous Univariate Distributions, Vol. 1. Wiley, New York.
Judge, G. G., Griffiths, W. E. et al. (1984). The Theory and Practice of Econometrics. Wiley, New York.
Kelejian, H. H. (1982). An extension of a standard test for heteroscedasticity. J. Econometrics, 20, 325–333.
Koenker, R. and Basset, G. (1982). Robust tests for heteroscedasticity based on regression quantiles. Econometrica, 50(1), 43–62.
Laitinen, K. (1978). Why is demand homogeneity so often rejected? Econ. Lett., 1, 231–233.
Mantalos, P. and Shukur, G. (1998). Size and power of the error correction model cointegration test. A bootstrap approach. Oxford B Econ. Stat., 60(2), 249–255.
Ramsey, J. B. (1969). Tests for specification errors in classical linear least-squares regression analysis. J. Roy. Stat. Soc. B, 31, 350–371.
Rao, C. R. (1973). Linear Statistical Inference. Wiley, New York.
Shukur, G. (1997). Some aspects of statistical inference in systems of equations. PhD thesis, Lund University.
Shukur, G. (2002). Dynamic specification and misspecification in systems of demand equations. Appl. Econ., 34(6), 709–725.
Shukur, G. and Edgerton, D. (2000). The small sample properties of the RESET test as applied to systems of equations. J. Statist. Comput. Simul. (to appear).
Srivastava, V. K. and Giles, D. A. (1987). Seemingly Unrelated Regression Equations Models. Marcel Dekker.
White, H. (1980). A heteroscedastic-consistent covariance matrix and a direct test for heteroscedasticity. Econometrica, 48, 421–428.

APPENDIX A.1

LEMMA  xi(X′X/n)⁻¹(X′εj/n) = o(n⁻¹ᐟ⁴).

Proof  By assumption (i) (p. 4) we have lim_{n→∞}(X′X/n)⁻¹ = Q, where Q = O(1). Now, under the null hypothesis, V[X′εj/n] = X′σ²jj I X/n² = σ²jj X′X/n². Since lim_{n→∞}(X′X/n)⁻¹ = Q and σ²jj < ∞, we have σ²jj(X′X/n)⁻¹ = O(1) and hence V[n¹ᐟ⁴ X′εj/n] = o(1). Therefore (X′εj/n) converges in mean square to 0 at the rate n⁻¹ᐟ⁴. Since xi = O(1) and (X′X/n)⁻¹ = O(1), it follows that xi(X′X/n)⁻¹(X′εj/n) = o(n⁻¹ᐟ⁴).

COROLLARY 1  Ŷij = xi β̂j = xi(X′X)⁻¹X′Yj = xi βj + xi(X′X)⁻¹X′εj. Hence Ŷij = E[Yij] + o(n⁻¹ᐟ⁴).

COROLLARY 2  ε̂ij = Yij − Ŷij = xi βj + εij − xi βj − xi(X′X)⁻¹X′εj = εij − o(n⁻¹ᐟ⁴). Hence ε̂²ij = ε²ij − 2εij o(n⁻¹ᐟ⁴) + o(n⁻¹ᐟ⁴)² = ε²ij − o(n⁻¹ᐟ⁴) + o(n⁻¹ᐟ²) = ε²ij − o(n⁻¹ᐟ⁴).

APPENDIX A.2

The asymptotic variance of Eq. (17) may be readily found. Consider a two-dimensional variate ε̂i = εi + o(n⁻¹ᐟ⁴) such that εi ∼ N(0, σ²), where

σ² = V[εi] = [ σ1²    σ1σ2ρ
               σ1σ2ρ  σ2²   ].

The variance of the squared jth marginal variate, ε̂²ij say, then becomes

V[ε̂²ij] = V[ε²ij] + o(n⁻¹ᐟ⁴) = E[ε⁴ij] − E[ε²ij]² + o(n⁻¹ᐟ⁴) = 2σj⁴ + o(n⁻¹ᐟ⁴),


the last equality following from the normality assumption. Similarly,

cov[ε̂²ij, ε̂²il] = E[ε²ij ε²il] − E[ε²ij]E[ε²il] + o(n⁻¹ᐟ⁴)
               = (σ1²σ2² + 2σ1²σ2²ρ²) − σ1²σ2² + o(n⁻¹ᐟ⁴) = 2σ1²σ2²ρ² + o(n⁻¹ᐟ⁴).

Hence

lim_{n→∞} V[ε̂i•²] = 2 [ σ1⁴       σ1²σ2²ρ²
                        σ1²σ2²ρ²  σ2⁴      ].
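The two limiting moments derived above, V[ε²ij] = 2σj⁴ and cov[ε²i1, ε²i2] = 2σ1²σ2²ρ², are easy to confirm by simulating the bivariate normal vector directly. The parameter values below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(13)

# Arbitrary illustrative parameters for the bivariate normal errors.
s1, s2, rho = 1.5, 0.8, 0.6
cov = np.array([[s1 ** 2, s1 * s2 * rho],
                [s1 * s2 * rho, s2 ** 2]])

# Simulate, square elementwise, and compare the sample covariance matrix
# of the squared errors with the limiting matrix derived above.
e = rng.multivariate_normal([0.0, 0.0], cov, size=1_000_000)
emp = np.cov(e ** 2, rowvar=False)
theory = 2.0 * np.array([[s1 ** 4, (s1 * s2 * rho) ** 2],
                         [(s1 * s2 * rho) ** 2, s2 ** 4]])
```

With a million draws the sample moments agree with the limit to within a few percent.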
