8/10/2019 ecmc9
1/57
9 Regression with Time Series
9.1 Some Basic Concepts
Static Models
(1) y_t = β_0 + β_1 x_t + u_t,

t = 1, 2, ..., T, where T is the number of observations in the time series. The relation between y and x is contemporaneous.

Example: Static Phillips Curve:

inflation_t = β_0 + β_1 unemployment_t + u_t.
Finite Distributed Lag Model (FDL)
In FDL models earlier values of one or more explanatory variables affect the current value of y.

(2) y_t = α_0 + δ_0 x_t + δ_1 x_{t-1} + δ_2 x_{t-2} + u_t

is an FDL of order two.

Multipliers

Multipliers indicate the impact of a unit change in x on y.

Impact Multiplier: Indicates the immediate effect of a one-unit change in x on y. In (2), δ_0 is the impact multiplier.
To see this, suppose x_t is constant, say c, before time point t, increases by one unit to c + 1 at time point t, and returns back to c at t + 1. That is,

x_{t-2} = c, x_{t-1} = c, x_t = c + 1, x_{t+1} = c, x_{t+2} = c, ...

Suppose for the sake of simplicity that the error term is zero. Then

y_{t-1} = α_0 + δ_0 c + δ_1 c + δ_2 c
y_t     = α_0 + δ_0 (c + 1) + δ_1 c + δ_2 c
y_{t+1} = α_0 + δ_0 c + δ_1 (c + 1) + δ_2 c
y_{t+2} = α_0 + δ_0 c + δ_1 c + δ_2 (c + 1)
y_{t+3} = α_0 + δ_0 c + δ_1 c + δ_2 c,

from which we find y_t - y_{t-1} = δ_0,

which is the immediate change in y_t.
In the next period, t + 1, the change is

y_{t+1} - y_{t-1} = δ_1,

after that

y_{t+2} - y_{t-1} = δ_2,

after which the series returns to its initial level, y_{t+3} = y_{t-1}. The series {δ_0, δ_1, δ_2} is called the lag distribution, which summarizes the dynamic effect that a temporary increase in x has on y.
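The temporary-increase calculation above can be sketched numerically. This is a minimal numpy illustration; the intercept α_0, the lag coefficients δ_j, the baseline level c, and the blip period t0 are all assumed values, not from the text:

```python
import numpy as np

alpha0 = 0.5
deltas = np.array([0.6, 0.3, 0.1])   # hypothetical delta_0, delta_1, delta_2
c, t0 = 2.0, 5                        # assumed baseline level and blip period

x = np.full(10, c)
x[t0] = c + 1.0                       # temporary one-unit increase at t0

# y_t = alpha0 + delta_0 x_t + delta_1 x_{t-1} + delta_2 x_{t-2}, zero error term
y = np.array([alpha0 + deltas @ x[t - 2:t + 1][::-1] for t in range(2, len(x))])

baseline = alpha0 + c * deltas.sum()  # level of y when x has been constant at c
response = y - baseline               # traces out the lag distribution {0.6, 0.3, 0.1}
```

The `response` vector is zero before the blip, equals δ_0, δ_1, δ_2 in the three periods starting at t0, and returns to zero afterwards, exactly as in the derivation above.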
Lag Distribution: A graph of δ_j as a function of j. Summarizes the distribution of the effects of a one-unit change in x on y as a function of j, j = 0, 1, ... .

In particular, if we standardize the initial value of y at y_{t-1} = 0, the lag distribution traces out the subsequent values of y due to a one-unit, temporary change in x.
Interim multiplier of order J:

(3) δ(J) = Σ_{j=0}^{J} δ_j.

Indicates the cumulative effect up to lag J of a unit change in x on y. In (2), e.g., δ(1) = δ_0 + δ_1.

Total Multiplier: (Long-Run Multiplier)

Indicates the total (long-run) change in y as a response to a unit change in x:

(4) δ = Σ_{j=0}^{∞} δ_j.
Example 9.1: Suppose that in annual data

int_t = 1.6 + 0.48 inf_t - 0.15 inf_{t-1} + 0.32 inf_{t-2} + u_t,

where int is an interest rate and inf is an inflation rate.

Impact and long-run multipliers?
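The answer follows directly from definitions (3) and (4) applied to the estimated lag coefficients; a quick arithmetic check using only the numbers in the example:

```python
# Lag coefficients delta_0, delta_1, delta_2 from Example 9.1
deltas = [0.48, -0.15, 0.32]

impact = deltas[0]            # impact multiplier: delta_0
interim_1 = sum(deltas[:2])   # interim multiplier of order 1, eq. (3)
long_run = sum(deltas)        # total (long-run) multiplier, eq. (4)
```

The impact multiplier is 0.48 and the long-run multiplier is 0.48 - 0.15 + 0.32 = 0.65.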
Assumptions
Regarding the Classical Assumptions, we need to account for the dependencies in the time dimension.

Assumption (4) E[u_i | x_i] = 0 is replaced by

(5) E[u_t | x_s] = 0 for all t, s = 1, ..., T.

In such a case we say that x is strictly exogenous. Explanatory variables that are strictly exogenous cannot react to what happened in the past. The assumption of strict exogeneity implies unbiasedness of the OLS estimates.

A weaker assumption is contemporaneous exogeneity:

(6) E[u_t | x_t] = 0.

It implies only consistency of the OLS estimates.
Assumption
(7) Cov[u_s, u_t] = 0 for all s ≠ t

is the assumption of no serial correlation or no autocorrelation.

Lack of serial correlation is required for the standard errors and the usual t- and F-statistics to be valid.

Unfortunately, this assumption is often violated in economic time series.
9.2 Trends and Seasonality
Economic time series have a common tendency to grow over time. Some series contain a time trend.

Usually two or more series are trending over time for reasons related to some unobserved common factors. As a consequence, correlation between the series may be due for the most part to the trend.
Linear time trend:

(8) y_t = α_0 + α_1 t + e_t.

(9) E[y_t] = α_0 + α_1 t.

α_1 > 0: upward trend,
α_1 < 0: downward trend.
Exponential trend:

If the growth rate of an economy is β_1, that is,

(10) (dy(t)/dt) / y(t) = β_1,

then

(11) y(t) = y(0) e^{β_1 t}.

Thus a constant growth rate leads to the exponential trend model (c.f. continuously compounded interest rate).

Typical such series are GDP, manufacturing production, and the CPI.

The exponential trend is modeled in practice as

(12) log(y_t) = β_0 + β_1 t + e_t,

t = 1, 2, ..., where β_1 is the growth rate.
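Fitting the log-linear trend model (12) by ordinary least squares can be sketched as follows. The intercept, the 2% growth rate, and the noise level in the simulated series are assumed values for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
T, beta0, beta1 = 200, 1.0, 0.02     # assumed intercept and 2% growth per period
t = np.arange(1, T + 1)

# Simulate log(y_t) = beta0 + beta1*t + e_t, as in model (12)
log_y = beta0 + beta1 * t + rng.normal(0.0, 0.05, T)

# OLS of log(y) on a constant and the time trend
X = np.column_stack([np.ones(T), t])
b_hat, *_ = np.linalg.lstsq(X, log_y, rcond=None)
```

The estimated slope `b_hat[1]` recovers the growth rate β_1 (about 0.02, i.e., 2% per period).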
Trending Variables in Regression
Common growth over time of the series in a regression model may cause spurious regression relationships. Adding a time trend to the regression usually eliminates this problem.

Example 9.3: Housing Investment and Prices:

Regressing annual observations on housing investment per capita invpc on a housing price index price in constant elasticity form yields

(13a) log(invpc) = -0.550 + 1.241 log(price).

The standard error on the slope coefficient of log(price) is 0.382, so it is statistically significant. However, both invpc and price have upward trends. Adding a time trend yields

(13b) log(invpc) = -0.913 - 0.381 log(price) + 0.0098 t

with a standard error of 0.679 on the price elasticity and 0.0035 on time. The time trend is statistically significant and it implies an approximate 1% increase in invpc per year. The estimated price elasticity is now negative and not statistically different from zero.
Example 9.4: Puerto Rican Employment and the U.S. Minimum Wage.

prepop = Puerto Rican employment/population ratio
mincov = (average minimum wage/average wage)*avgcov,

where avgcov is the proportion of workers covered by the minimum wage law.

mincov measures the importance of the minimum wage relative to the average wage.

Sample period 1950-1987.
The specified model is

(14a) log(prepop_t) = β_0 + β_1 log(mincov_t) + β_2 log(usgnp_t) + u_t.
Dependent Variable: LOG(PREPOP)
Method: Least Squares
Sample: 1950 1987
Included observations: 38
===========================================================
Variable       Coefficient  Std. Error  t-Statistic  Prob.
-----------------------------------------------------------
C                -1.054       0.765       -1.378     0.177
LOG(MINCOV)      -0.154       0.065       -2.380     0.023
LOG(USGNP)       -0.012       0.089       -0.138     0.891
===========================================================
R-squared            0.660   Mean dependent var     -0.944
Adjusted R-squared   0.641   S.D. dependent var      0.093
S.E. of regression   0.056   Akaike info criterion  -2.862
Sum squared resid    0.109   Schwarz criterion      -2.733
Log likelihood      57.376   F-statistic            34.043
Durbin-Watson stat   0.340   Prob(F-statistic)       0.000
===========================================================
The elasticity estimate of mincov is -0.154 and is statistically significant. This suggests that a higher minimum wage lowers the employment rate (as expected). The US GNP is not statistically significant.
Example 9.4: Puerto Rican Employment:
Adding a trend to (14a):

(14b) log(prepop_t) = β_0 + β_1 log(mincov_t) + β_2 log(usgnp_t) + β_3 t + u_t

produces the estimation results:
Dependent Variable: LOG(PREPOP)
Method: Least Squares
Sample: 1950 1987
Included observations: 38
===========================================================
Variable       Coefficient  Std. Error  t-Statistic  Prob.
-----------------------------------------------------------
C                -8.729       1.300       -6.712     0.0000
LOG(MINCOV)      -0.169       0.044       -3.813     0.0006
LOG(USGNP)        1.057       0.177        5.986     0.0000
@TREND           -0.032       0.005       -6.442     0.0000
===========================================================
R-squared            0.847   Mean dependent var     -0.944
Adjusted R-squared   0.834   S.D. dependent var      0.093
S.E. of regression   0.038   Akaike info criterion  -3.607
Sum squared resid    0.049   Schwarz criterion      -3.435
Log likelihood      72.532   F-statistic            62.784
Durbin-Watson stat   0.908   Prob(F-statistic)       0.000
===========================================================
Seasonality
Monthly or quarterly series often include seasonality, which shows up as regular cycles in the series.

A common way to account for the seasonality is to include a set of seasonal dummy variables in the model.

For example, with monthly data:

(15) y_t = β_0 + δ_1 feb_t + δ_2 mar_t + ... + δ_11 dec_t + β_1 x_{t1} + ... + β_k x_{tk} + u_t,

where feb_t, ..., dec_t are dummy variables. January is the base month.
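Constructing the eleven monthly dummies of (15), with January as the base month, might look like this. A small numpy sketch; the sample length of three years is an assumed value:

```python
import numpy as np

T = 36                                  # three years of monthly observations (assumed)
month = np.arange(T) % 12 + 1           # 1 = January, ..., 12 = December

# Dummy matrix with January as the base month: columns feb, mar, ..., dec
D = np.zeros((T, 11))
for m in range(2, 13):
    D[:, m - 2] = (month == m).astype(float)
```

Each January row is all zeros (the base month absorbed by the intercept); every other row has exactly one 1 in the column of its month.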
9.3 Time Series Models
Stationarity:
A stochastic process {y_t : t = 1, 2, ...} is (covariance) stationary if

(i) E[y_t] = μ for all t,
(ii) Var[y_t] = σ² < ∞ for all t,
(iii) Cov[y_t, y_{t+h}] = γ_h for all t, i.e., the covariance depends only on the lag length h, not on time.

Example: A process with a time trend is not stationary, because its mean changes through time.

Establishing stationarity can be very difficult. However, we often must assume it, since nothing can be learnt from time series regressions when the relationship between y_t and x_t is allowed to change arbitrarily out of sample.
Weakly Dependent Series
y_t is weakly dependent if y_t and y_{t+h} are almost independent as h → ∞.

Covariance stationary sequences are said to be asymptotically uncorrelated if Cov[x_t, x_{t+h}] → 0 as h → ∞. (An intuitive characterization of weak dependence.)

Weak dependence replaces the notion of random sampling in implying that the law of large numbers (LLN) and the central limit theorem (CLT) hold.
Basic Time Series Processes
White Noise: A series y_t is (weak) white noise (WN) if

(i) E[y_t] = μ for all t,
(ii) Var[y_t] = σ² < ∞ for all t,
(iii) Cov[y_s, y_t] = 0 for all s ≠ t.

Remark 9.1: (i) Usually μ = 0. (ii) A WN process is stationary.
Random Walk (RW): y_t is a random walk process if

(13) y_t = y_{t-1} + e_t,

where e_t ~ WN(0, σ_e²).

If y_t is a RW and assuming y_0 = 0, it can easily be shown that E[y_t] = 0,

(14) Var[y_t] = t σ_e²,

and

(15) Corr[y_t, y_{t+h}] = √(t / (t + h)).

Remark 9.2: A RW is a nonstationary process.

Random walk with drift:

(16) y_t = μ + y_{t-1} + e_t,  e_t ~ WN(0, σ_e²).
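The nonstationarity of the random walk, Var[y_t] = tσ_e² from (14), can be checked by simulation. The path count, horizon, and σ_e below are assumed values:

```python
import numpy as np

rng = np.random.default_rng(42)
n_paths, T, sigma = 20000, 100, 1.0      # assumed simulation settings

e = rng.normal(0.0, sigma, (n_paths, T))
y = e.cumsum(axis=1)                     # y_t = y_{t-1} + e_t with y_0 = 0

m = y[:, -1].mean()                      # should be near E[y_T] = 0
v = y[:, -1].var()                       # should be near Var[y_T] = T * sigma^2 = 100
```

The cross-sectional variance at the final date is close to T·σ_e² = 100, growing without bound in T, which is exactly why the RW is nonstationary.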
9.4 Serial Correlation and Heteroscedasticity in Time Series Regression

Assumption 3, Cov[u_t, u_{t+h}] = 0, is violated if the error terms are correlated.

This problem is called the autocorrelation problem.

Consequences of error term autocorrelation in OLS:

(i) OLS is no longer BLUE.
(ii) Standard errors are (downwards) biased (t-statistics etc. become invalid), and the situation does not improve as n → ∞.

However,
(iii) OLS estimators are still unbiased for strictly exogenous regressors.
(iv) OLS estimators are still consistent for stationary, weakly dependent data with contemporaneously exogenous regressors.
Testing for Serial Correlation
(22) y_t = β_0 + β_1 x_{t1} + ... + β_k x_{tk} + u_t.

AR(1) errors:

(23) u_t = ρ u_{t-1} + e_t,  e_t ~ WN(0, σ_e²).

A typical procedure to test for first-order autocorrelation is to obtain the OLS residuals û_t by estimating (22), fit an AR(1) to the û_t series, and use the resulting t-statistic to test H_0: ρ = 0.

An alternative is to use the traditional Durbin-Watson (DW) test:

(24) DW = Σ_{t=2}^{T} (û_t - û_{t-1})² / Σ_{t=1}^{T} û_t².

It can be shown that

(25) DW ≈ 2(1 - ρ̂).
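The DW statistic (24) and the approximation (25) can be sketched directly. The AR(1) coefficient ρ = 0.5 used to generate the residuals is an assumed value for illustration:

```python
import numpy as np

def durbin_watson(u):
    """Durbin-Watson statistic of eq. (24) computed from a residual series u."""
    return np.sum(np.diff(u) ** 2) / np.sum(u ** 2)

# Simulated AR(1) residuals u_t = rho*u_{t-1} + e_t with assumed rho = 0.5
rng = np.random.default_rng(3)
T, rho = 5000, 0.5
u = np.zeros(T)
for t in range(1, T):
    u[t] = rho * u[t - 1] + rng.normal()

dw = durbin_watson(u)                            # near 2*(1 - 0.5) = 1
rho_hat = np.corrcoef(u[1:], u[:-1])[0, 1]       # first-order autocorrelation
```

With positive autocorrelation DW falls below 2, and the identity DW ≈ 2(1 - ρ̂) holds up to edge terms of order 1/T.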
Ljung-Box test
A general test for serial correlation is the Ljung-Box Q-statistic,

(26) Q = T(T + 2) Σ_{j=1}^{k} ρ̂_j² / (T - j).

If the null hypothesis

(27) H_0: ρ_1 = ρ_2 = ... = ρ_k = 0

is true, Q has an asymptotic χ²_k distribution.
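The Q-statistic (26) is straightforward to compute from the sample autocorrelations; a minimal sketch:

```python
import numpy as np

def ljung_box(u, k):
    """Ljung-Box Q-statistic of eq. (26) for lags 1..k."""
    T = len(u)
    d = u - u.mean()
    denom = np.sum(d ** 2)
    q = 0.0
    for j in range(1, k + 1):
        rho_j = np.sum(d[j:] * d[:-j]) / denom   # sample autocorrelation at lag j
        q += rho_j ** 2 / (T - j)
    return T * (T + 2) * q

rng = np.random.default_rng(7)
q_wn = ljung_box(rng.normal(size=1000), k=10)    # white noise: Q ~ chi2(10)
q_rw = ljung_box(np.cumsum(rng.normal(size=1000)), k=10)  # random walk: huge Q
```

For white noise Q stays near its χ²_10 mean of 10; for a strongly autocorrelated series such as a random walk it explodes, so (27) is rejected.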
Serial Correlation-Robust Inference with OLS
The idea is to find robust standard errors for the OLS estimates.

Again, using matrix notation considerably simplifies the exposition. Let

(28) y = Xβ + u,

where

(29) Cov[u] = Ω_u.

Again, write

(30) β̂ = β + (X'X)⁻¹ X'u.

Then

(31) Cov[β̂] = (X'X)⁻¹ Σ (X'X)⁻¹.

The problem is how to estimate Σ = X'Ω_u X.
Newey and West (1987) suggest an estimator

(32) Σ̂ = [T/(T - k)] { Σ_{t=1}^{T} û_t² x_t x_t' + Σ_{v=1}^{q} [1 - v/(q + 1)] Σ_{t=v+1}^{T} (x_t û_t û_{t-v} x_{t-v}' + x_{t-v} û_{t-v} û_t x_t') },

which is robust both against heteroscedasticity and autocorrelation. The truncation lag q is determined as a function of the number of observations.
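A sketch of the estimator (32) plugged into (31). The simulated regression and the truncation lag q = 4 are assumptions for illustration:

```python
import numpy as np

def newey_west_cov(X, u, q):
    """HAC covariance of OLS beta-hat: (X'X)^-1 S (X'X)^-1 with S as in eq. (32)."""
    T, k = X.shape
    Xu = X * u[:, None]                  # row t is u_t * x_t'
    S = Xu.T @ Xu                        # sum_t u_t^2 x_t x_t'
    for v in range(1, q + 1):
        w = 1.0 - v / (q + 1.0)          # Bartlett weights 1 - v/(q+1)
        G = Xu[v:].T @ Xu[:-v]           # sum_t x_t u_t u_{t-v} x_{t-v}'
        S += w * (G + G.T)
    S *= T / (T - k)                     # degrees-of-freedom correction
    XtX_inv = np.linalg.inv(X.T @ X)
    return XtX_inv @ S @ XtX_inv

# Illustration on simulated data (all values assumed)
rng = np.random.default_rng(5)
T = 500
X = np.column_stack([np.ones(T), rng.normal(size=T)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=T)
beta = np.linalg.lstsq(X, y, rcond=None)[0]
u = y - X @ beta
V = newey_west_cov(X, u, q=4)            # HAC covariance; sqrt(diag) gives std. errors
```

The Bartlett weights guarantee a positive semidefinite Σ̂, so the resulting standard errors are always real.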
Example 9.5: Puerto Rican Wage:
Correlogram of Residuals
Sample: 1950 1987
Included observations: 38

Lag     AC     PAC   Q-Stat   Prob
  1   0.443   0.443   8.0658  0.005
  2   0.153  -0.054   9.0537  0.011
  3   0.085   0.047   9.3684  0.025
  4   0.065   0.020   9.5595  0.049
  5  -0.143  -0.228  10.500   0.062
  6  -0.167  -0.020  11.821   0.066
  7  -0.243  -0.189  14.707   0.040
  8  -0.270  -0.115  18.397   0.018
  9  -0.288  -0.120  22.740   0.007
 10  -0.081   0.112  23.098   0.010
 11  -0.117  -0.153  23.865   0.013
 12  -0.063   0.019  24.099   0.020
 13  -0.008  -0.035  24.104   0.030
 14  -0.035  -0.188  24.179   0.044
 15  -0.070  -0.050  24.503   0.057
 16   0.068   0.027  24.819   0.073
The residuals are obviously autocorrelated. Using the above autocorrelation-robust standard errors yields the following results.
Dependent Variable: LOG(PREPOP)
Method: Least Squares
Sample: 1950 1987
Included observations: 38
Newey-West HAC Standard Errors & Covariance (lag truncation=3)
==========================================================
Variable       Coefficient  Std. Error  t-Statistic  Prob.
----------------------------------------------------------
C               -8.728657    1.589053   -5.492994   0.0000
LOG(MINCOV)     -0.168695    0.038469   -4.385230   0.0001
LOG(USGNP)       1.057351    0.219516    4.816736   0.0000
@TREND          -0.032354    0.006605   -4.898514   0.0000
==========================================================
R-squared            0.847089   Mean dep. var       -0.944074
Adjusted R-squared   0.833597   S.D. dep. var        0.092978
S.E. of regression   0.037928   Akaike info crit.   -3.606957
Sum squared resid    0.048910   Schwarz criterion   -3.434580
Log likelihood      72.53218    F-statistic         62.78374
Durbin-Watson stat   0.907538   Prob(F-statistic)    0.000000
==========================================================
We observe that particularly the standard error of log(usgnp) increases compared to Example 9.4.
Estimating the residual autocorrelation as an AR process

An alternative to robustifying the standard errors with the Newey-West procedure (32) is to explicitly model the error term as an autoregressive process, as is done in equation (23), and estimate it.
Example 9.6: Estimating the Puerto Rican example with AR(1) errors yields:
=================================================================
Dependent Variable: LOG(PREPOP)
Method: Least Squares
Sample (adjusted): 1951 1987
Included observations: 37 after adjustments
Convergence achieved after 14 iterations
=================================================================
Variable       Coefficient  Std. Error  t-Statistic  Prob.
-----------------------------------------------------------------
C               -5.256865    1.492220   -3.522850   0.0013
LOG(MINCOV)     -0.090233    0.048103   -1.875824   0.0698
LOG(USGNP)       0.588807    0.207160    2.842278   0.0077
@TREND          -0.019385    0.006549   -2.959877   0.0058
AR(1)            0.701498    0.123959    5.659112   0.0000
=================================================================
R-squared            0.907075   Mean dependent var   -0.949183
Adjusted R-squared   0.895459   S.D. dependent var    0.088687
S.E. of regression   0.028675   Akaike info criterion -4.140503
Sum squared resid    0.026312   Schwarz criterion    -3.922811
Log likelihood      81.59930    F-statistic          78.09082
Durbin-Watson stat   1.468632   Prob(F-statistic)     0.000000
=================================================================
Residual autocorrelations:
Correlogram of Residuals
Sample: 1951 1987
Included observations: 37
Q-statistic probabilities adjusted for 1 ARMA term(s)

Lag     AC     PAC   Q-Stat   Prob
  1   0.187   0.187   1.4071
  2   0.078   0.044   1.6551  0.198
  3   0.097   0.078   2.0557  0.358
  4   0.085   0.053   2.3723  0.499
  5  -0.273  -0.320   5.7274  0.220
  6  -0.016   0.088   5.7395  0.332
  7  -0.013  -0.005   5.7475  0.452
  8  -0.122  -0.096   6.4908  0.484
  9  -0.226  -0.156   9.1276  0.332
 10   0.061   0.064   9.3267  0.408
The results indicate no additional autocorrelation in
the residuals.
Residual Autocorrelation and Common Factor

The lag operator L is defined as L y_t = y_{t-1} (generally, L^p y_t = y_{t-p}).

Thus, equation (23) can be written as

(33) (1 - ρL) u_t = e_t

or

(34) u_t = e_t / (1 - ρL).

Using this in (22), we can write (for simplicity, assume k = 1 and denote x_t = x_{t1})

(35) y_t = β_0 + β_1 x_t + e_t / (1 - ρL),

or

(36) (1 - ρL) y_t = (1 - ρL) β_0 + β_1 (1 - ρL) x_t + e_t.
This implies that the dynamics of y_t and x_t share the factor 1 - ρL in common, called a common factor.

Thus, autocorrelation in u_t is equivalent to the presence of a common factor in the regression implied by (22) and (23).

This can be tested by estimating the unrestricted regression

(37) y_t = α_0 + α_1 y_{t-1} + α_2 x_t + α_3 x_{t-1} + e_t

and testing whether it satisfies the restrictions implied by (36), which can be written as

(38) y_t = (1 - ρ) β_0 + ρ y_{t-1} + β_1 x_t - ρ β_1 x_{t-1} + e_t.
That is, whether

(39) α_3 = -α_1 α_2.

If this hypothesis is rejected, the issue is a wrong dynamic specification of the model, not autocorrelation in the residuals.
Example 9.7: Puerto Rican example. The unrestricted model estimates are:
Dependent Variable: LOG(PREPOP)
Method: Least Squares
Sample (adjusted): 1951 1987
Included observations: 37 after adjustments
===========================================================
Variable          Coefficient  Std. Error  t-Statistic  Prob.
-----------------------------------------------------------
C                  -5.612596    1.547354   -3.627223   0.0011
LOG(PREPOP(-1))     0.535366    0.124932    4.285246   0.0002
LOG(MINCOV)        -0.142230    0.047224   -3.011794   0.0052
LOG(MINCOV(-1))     0.033409    0.046831    0.713386   0.4811
LOG(USGNP)          0.561893    0.188654    2.978437   0.0057
LOG(USGNP(-1))      0.137353    0.226785    0.605651   0.5493
@TREND             -0.019768    0.005975   -3.308752   0.0024
===========================================================
R-squared            0.928815   Mean dependent var    -0.949183
Adj R-squared        0.914578   S.D. dependent var     0.088687
S.E. of reg          0.025921   Akaike info criterion -4.298904
Sum squared resid    0.020156   Schwarz criterion     -3.994136
Log likelihood      86.52972    Hannan-Quinn criter.  -4.191459
F-statistic         65.23934    Durbin-Watson stat     1.634219
Prob(F-statistic)    0.000000
===========================================================
Testing the restriction implied by AR(1) residuals by imposing restrictions in EViews (View > Coefficient Tests > Wald-Coefficient Restrictions ...) gives:
Wald Test:
Equation: EX97
=================================================
Test Statistic     Value       df    Probability
-------------------------------------------------
F-statistic      4.436898   (2, 30)    0.0205
Chi-square       8.873795     2        0.0118
=================================================

Null Hypothesis Summary:
=================================================
Normalized Restriction (= 0)    Value      Std. Err.
-------------------------------------------------
C(2)*C(3) + C(4)              -0.042736   0.033279
C(2)*C(5) + C(6)               0.438171   0.150355
=================================================
The null hypothesis is rejected, which implies that, rather than the error term being autocorrelated, the model should be specified as

y_t = β_0 + α_1 y_{t-1} + β_1 x_{t,1} + β_{11} x_{t-1,1} + β_2 x_{t,2} + β_{21} x_{t-1,2} + e_t,

where y = log(prepop), x_1 = log(mincov), x_2 = log(usgnp), and x_3 = trend.

Finally, because the coefficient estimates of log(mincov_{t-1}) and log(usgnp_{t-1}) are not statistically significant, they can be dropped from the model, such that the final model becomes:
Dependent Variable: LOG(PREPOP)
Method: Least Squares
Included observations: 37 after adjustments
==========================================================
Variable          Coefficient  Std. Error  t-Statistic  Prob.
----------------------------------------------------------
C                  -5.104888    1.188371   -4.295701   0.0002
LOG(PREPOP(-1))     0.558090    0.103817    5.375713   0.0000
LOG(MINCOV)        -0.106787    0.031282   -3.413647   0.0018
LOG(USGNP)          0.630332    0.154822    4.071322   0.0003
@TREND             -0.017463    0.004849   -3.601696   0.0011
==========================================================
R-squared            0.926226   Mean dependent var    -0.949183
Adj R-squared        0.917004   S.D. dependent var     0.088687
S.E. of reg          0.025550   Akaike info criter.   -4.371286
Sum squared resid    0.020889   Schwarz criterion     -4.153594
Log likelihood      85.86878    Hannan-Quinn criter.  -4.294539
F-statistic        100.4388     Durbin-Watson stat     1.468436
Prob(F-statistic)    0.000000
==========================================================
Autocorrelation has also disappeared from the residuals:
Sample: 1951 1987
Included observations: 37
==============================================================
Lag     AC     PAC   Q-Stat   Prob
--------------------------------------------------------------
  1   0.203   0.203   1.6465  0.199
  2   0.079   0.039   1.9008  0.387
  3   0.092   0.072   2.2586  0.521
  4   0.018  -0.017   2.2724  0.686
  5  -0.346  -0.372   7.6823  0.175
  6  -0.111   0.020   8.2560  0.220
  7  -0.067  -0.010   8.4747  0.293
  8  -0.124  -0.052   9.2354  0.323
  9  -0.143  -0.091  10.2920  0.327
 10   0.156   0.110  11.5860  0.314
==============================================================
Autoregressive Conditional Heteroscedasticity (ARCH)

Temporal volatility clustering is typical for speculative series like stock returns, interest rates, currencies, etc.
Example 9.7: The following time series plot shows Microsoft's weekly (log) returns, r_t = 100 log(P_t / P_{t-1}), for the sample period from January 1990 through October 2006.
[Figure 9.1: Microsoft weekly returns for the sample period January 1990 through November 2006.]
[Figure 9.2: SP500 weekly returns for the sample period January 1990 through November 2006.]
There is probably some volatility clustering present in
both return series.
Engle (1982) suggested modeling the clustering volatility with ARCH models.

A time series u_t follows an ARCH(1) process (AutoRegressive Conditional Heteroscedasticity) if

(40) E[u_t] = 0,

(41) Cov[u_t, u_s] = 0 for all t ≠ s,

and

(42) h_t = Var[u_t | u_{t-1}] = α_0 + α_1 u_{t-1}²,

where α_0 > 0 and 0 ≤ α_1 < 1.

We say that u_t follows an ARCH(1) process and denote u_t ~ ARCH(1).
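An ARCH(1) series as defined by (40)-(42) can be simulated recursively by drawing u_t = √h_t · z_t with standard normal z_t; the parameter values α_0 = 0.5 and α_1 = 0.4 are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
T, a0, a1 = 5000, 0.5, 0.4            # assumed sample size and ARCH parameters

u = np.zeros(T)
h = np.zeros(T)
h[0] = a0 / (1 - a1)                  # start at the unconditional variance
u[0] = np.sqrt(h[0]) * rng.normal()
for t in range(1, T):
    h[t] = a0 + a1 * u[t - 1] ** 2    # conditional variance, eq. (42)
    u[t] = np.sqrt(h[t]) * rng.normal()

z = u / np.sqrt(h)                    # standardized series: should be WN(0,1)
```

The simulated u_t shows volatility clustering (large |u_{t-1}| inflates h_t), yet its standardized version z_t has unit variance and the unconditional variance of u_t is α_0/(1 - α_1).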
An important property of an ARCH process is that if u_t ~ ARCH, then

(43) z_t = u_t / √h_t ~ WN(0, 1).

That is, the standardized variable z_t is white noise (E[z_t] = 0, Var[z_t] = 1, and Cov[z_t, z_{t+k}] = 0 for all k ≠ 0).

Furthermore,

(44) Var[z_t | u_{t-1}] = Var[u_t / √h_t | u_{t-1}] = (1/h_t) Var[u_t | u_{t-1}] = 1.

That is, z_t has constant conditional variance.

This implies that there should not remain any volatility clustering in z_t.

This can be checked by investigating the autocorrelations of the squared z_t series, z_t², for example with the Ljung-Box Q-statistic defined in (26).
It may be noted further that if z_t ~ N(0, 1), then

(45) u_t | u_{t-1} ~ N(0, h_t).

Remark 9.4: Using (43) we can always write

(46) u_t = √h_t z_t,

where the z_t are independent N(0, 1) random variables.

The parameters of the ARCH process are estimated by the method of maximum likelihood (ML).
Example 9.8: Consider the SP500 returns. With EViews we can estimate the model by selecting Quick > Estimate Equation..., specifying rm c in the model box, selecting Estimation Method: ARCH - Autoregressive Conditional Heteroscedasticity, and setting the ARCH option equal to 1 and the GARCH option equal to 0.

Note that specifying rm c in the model box implies that we estimate u_t = r_{m,t} - E[r_{m,t}].

These specifications yield the following results:
=================================================================
Dependent Variable: RM
Method: ML - ARCH (Marquardt) - Normal distribution
Sample (adjusted): 1/08/1990 10/30/2006
Included observations: 877 after adjustments
Convergence achieved after 9 iterations
Variance backcast: ON
GARCH = C(2) + C(3)*RESID(-1)^2
=================================================================
               Coefficient  Std. Error  z-Statistic  Prob.
-----------------------------------------------------------------
C                0.189986    0.062065    3.061069   0.0022
=================================================================
Variance Equation
=================================================================
C                3.237585    0.135488   23.89579    0.0000
RESID(-1)^2      0.247032    0.039617    6.235534   0.0000
=================================================================
R-squared           -0.000294   Mean dependent var     0.154412
Adjusted R-squared  -0.002583   S.D. dependent var     2.077382
S.E. of regression   2.080063   Akaike info criterion  4.240795
Sum squared resid 3781.503      Schwarz criterion      4.257134
Log likelihood   -1856.588      Durbin-Watson stat     2.149700
=================================================================
Thus the estimated ARCH(1) model is

(47) ĥ_t = 3.238 + 0.247 û_{t-1}²,
           (0.135)  (0.0396)

where û_t = r_{m,t} - μ̂ with μ = E[r_{m,t}], estimated from the sample period as μ̂ = 0.190 (i.e., the average weekly return has been approximately 0.19%, or 9.9% per year).
Check next the autocorrelation of the squared standardized residuals ẑ_t = û_t / √ĥ_t.

Correlogram of Standardized Residuals Squared
Sample: 1/02/1990 10/24/2006
Included observations: 877

Lag     AC     PAC   Q-Stat   Prob
  1  -0.006  -0.006   0.0274  0.869
  2   0.028   0.028   0.7023  0.704
  3   0.098   0.098   9.1175  0.028
  4   0.083   0.084  15.178   0.004
  5   0.044   0.041  16.884   0.005
  6   0.104   0.093  26.502   0.000
  7   0.096   0.084  34.603   0.000
  8   0.037   0.023  35.787   0.000
  9   0.036   0.010  36.945   0.000
 10   0.065   0.033  40.685   0.000
It seems that some volatility clustering is still left, especially due to the longer lags. The p-values from the third-order lag onwards are statistically significant. Thus the simple ARCH(1) does not seem to fully capture the volatility clustering in the series. We can try to improve the model.
GARCH
An important extension to the ARCH model is due to Bollerslev (1986) and is called GARCH, Generalized ARCH. GARCH(1,1) is of the form

(48) h_t = α_0 + α_1 u_{t-1}² + β h_{t-1},

where it is assumed that α_0 > 0 and 0 < α_1 + β < 1.

Remark 9.5: If β > 0, then α_1 must also be > 0.

The GARCH term h_{t-1} essentially accumulates the historical volatility and indicates the persistence of the volatility.
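The recursion (48) can be run as a simple filter for given parameter values; the values below are assumed for illustration (chosen so that α_1 + β is close to one, i.e., highly persistent volatility):

```python
import numpy as np

def garch_filter(u, a0, a1, b):
    """Conditional variance recursion h_t = a0 + a1*u_{t-1}^2 + b*h_{t-1}, eq. (48)."""
    h = np.empty(len(u))
    h[0] = a0 / (1.0 - a1 - b)        # start at the unconditional variance
    for t in range(1, len(u)):
        h[t] = a0 + a1 * u[t - 1] ** 2 + b * h[t - 1]
    return h

# With zero shocks, h_t decays geometrically (rate b) toward a0/(1 - b)
h = garch_filter(np.zeros(1000), a0=0.02, a1=0.06, b=0.93)
```

Starting the filter at the unconditional variance α_0/(1 - α_1 - β) is a common convention; any other start value is forgotten at the geometric rate α_1 + β.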
Example 9.9: GARCH(1,1) model for the SP500 returns:
Dependent Variable: RM
Method: ML - ARCH (Marquardt) - Normal distribution
Sample (adjusted): 1/02/1990 10/24/2006
Included observations: 877 after adjustments
Convergence achieved after 17 iterations
Variance backcast: ON
GARCH = C(2) + C(3)*RESID(-1)^2 + C(4)*GARCH(-1)
==============================================================
               Coefficient  Std. Error  z-Statistic  Prob.
--------------------------------------------------------------
C                0.197385    0.058686    3.363410   0.0008
==============================================================
Variance Equation
==============================================================
C                0.023269    0.013163    1.767719   0.0771
RESID(-1)^2      0.057345    0.012168    4.712861   0.0000
GARCH(-1)        0.937023    0.012920   72.52709    0.0000
==============================================================
R-squared           -0.000428   Mean dependent var     0.154412
Adjusted R-squared  -0.003866   S.D. dependent var     2.077382
S.E. of regression   2.081394   Akaike info criterion  4.119464
Sum squared resid 3782.013      Schwarz criterion      4.141250
Log likelihood   -1802.385      Durbin-Watson stat     2.149410
==============================================================
The autocorrelations of the squared standardized residuals indicate that some possible first-order autocorrelation still remains in the series. Nevertheless, the estimate of the first-order autocorrelation, though statistically significant (p-value 0.020) at the 5% level, is small in magnitude (0.078). Thus, we can conclude that the GARCH(1,1) fits the data pretty well.

Correlogram of Standardized Residuals Squared
Sample: 1/02/1990 10/24/2006
Included observations: 877

Lag     AC     PAC   Q-Stat   Prob
  1   0.078   0.078   5.3952  0.020
  2  -0.007  -0.014   5.4446  0.066
  3   0.041   0.043   6.9264  0.074
  4  -0.007  -0.014   6.9687  0.138
  5  -0.017  -0.015   7.2345  0.204
  6  -0.011  -0.011   7.3509  0.290
  7  -0.026  -0.024   7.9459  0.337
  8  -0.040  -0.035   9.3347  0.315
  9  -0.017  -0.011   9.5916  0.385
 10  -0.009  -0.006   9.6665  0.470
There are several other extensions of the basic ARCH model (see the EViews manual or help, or, for a comprehensive presentation, Taylor, Stephen J. (2005). Asset Price Dynamics, Volatility, and Prediction. Princeton University Press).
ARCH in regression
Consider the simple regression

(49) y_t = β_0 + β_1 x_t + u_t.

If the error term u_t follows a GARCH process, then accounting for it and estimating the regression parameters with the method of maximum likelihood rather than OLS yields more accurate estimates.

Particularly, if the ARCH effect is strong, OLS may lead to highly unstable estimates of β_0 and β_1 (usually, however, OLS works pretty well).
Example 9.11: Consider next the market model of Microsoft weekly returns on SP500. Estimating the market model

(50) r_t = β_0 + β_1 r_{m,t} + u_t

with OLS yields the estimation results:
===================================================================
Dependent Variable: R
Method: Least Squares
Sample (adjusted): 1/08/1990 10/30/2006
Included observations: 877 after adjustments
===================================================================
Variable       Coefficient  Std. Error  t-Statistic  Prob.
-------------------------------------------------------------------
C                0.272492    0.128328    2.123402   0.0340
RM               1.169974    0.061639   18.98110    0.0000
===================================================================
R-squared            0.291660   Mean dependent var     0.453150
Adjusted R-squared   0.290850   S.D. dependent var     4.500432
S.E. of regression   3.789860   Akaike info criterion  5.504813
Sum squared resid 12567.66      Schwarz criterion      5.515706
Log likelihood   -2411.861      F-statistic          360.2821
Durbin-Watson stat   2.013695   Prob(F-statistic)      0.000000
===================================================================
ML with a GARCH(1,1) error specification yields the following results:
================================================================
Dependent Variable: R
Method: ML - ARCH (Marquardt) - Normal distribution
Sample (adjusted): 1/02/1990 10/24/2006
Included observations: 877 after adjustments
Convergence achieved after 20 iterations
Variance backcast: ON
GARCH = C(3) + C(4)*RESID(-1)^2 + C(5)*GARCH(-1)
================================================================
               Coefficient  Std. Error  z-Statistic  Prob.
----------------------------------------------------------------
C                0.262170    0.117688    2.227670   0.0259
RM               1.138309    0.068997   16.49795    0.0000
================================================================
Variance Equation
================================================================
C                0.153567    0.058060    2.644976   0.0082
RESID(-1)^2      0.038894    0.009150    4.250488   0.0000
GARCH(-1)        0.949674    0.012409   76.52862    0.0000
================================================================
R-squared            0.291435   Mean dependent var     0.453150
Adjusted R-squared   0.288184   S.D. dependent var     4.500432
S.E. of regression   3.796977   Akaike info criterion  5.415744
Sum squared resid 12571.65      Schwarz criterion      5.442976
Log likelihood   -2369.804      F-statistic           89.66397
Durbin-Watson stat   2.010866   Prob(F-statistic)      0.000000
================================================================
The results show that in terms of standard errors there
are no material differences.
End of the notes.