
Autocorrelation and Heteroskedasticity

Having considered the multivariate CLRM, we now want to consider cases when our

assumptions are broken. In such cases, OLS estimates are no longer BLUE and we must look to

alternative models to correct for lost efficiency. However, before proceeding, it is helpful to

understand two types of data sets that exist. First, we can collect data that is generated at the

same time from different sources. Examples may be stock returns, labor supply curves or

baseball player salaries. Such data is called cross sectional. It takes a cross section of a data set

at a point in time, similar to how the balance sheet accounts for the value of assets at a point in

time. The second type of data is called time series data. This type of data takes a given data

generating process and observes it through time. Examples include GDP, unemployment and

perhaps school attendance. Time series data is used to consider data over time, such as the

income statement accounting for profit over time. These two types of data sets are prominent in

econometrics and call on alternative estimation techniques to deal with the problems they create.

Both of these types of data can lead to inefficient estimators, which casts doubt on our estimated variances and, consequently, our t-statistics. In sum, this causes serious difficulties.

Heteroskedasticity

You may recall from your introductory statistics course that random variables are

assumed to be independently and identically distributed. When our error terms are identically

distributed, it implies they have the same variance for all observations. This is known as

homoskedasticity. If they are not, it causes serious problems for our estimates and must be

corrected if we are to obtain reliable estimates. Heteroskedasticity is a deviation from the identically distributed assumption: the variances are not the same for each observation.

The consequences of heteroskedasticity are serious. While parameter estimates remain unbiased, they are no longer efficient, i.e., no longer BLUE. Since the estimated variance-covariance matrix of the errors is invalid, so are the t-statistics. Fortunately, we can correct for heteroskedasticity by selecting an alternative estimator that correctly weights the errors and retains


the BLUE properties. This estimator is known as the generalized least squares (GLS) estimator.

The GLS estimator transforms the errors such that the errors become homoskedastic. However,

transformation of the errors only makes the variance-covariance matrix efficient. It does not

change the meaning of the coefficients.

In the CLRM, the following conditions held:

$$E[e_t] = 0, \qquad Var[e_t] = \sigma^2 I, \qquad Cov[e_t, e_s] = 0$$

That is, the assumptions implied $E[\varepsilon\varepsilon^T] = \sigma^2 I$. Under heteroskedasticity, each observation instead carries its own variance, for example (with five observations):

$$E[\varepsilon\varepsilon^T] = \begin{bmatrix} \sigma_1^2 & 0 & 0 & 0 & 0 \\ 0 & \sigma_2^2 & 0 & 0 & 0 \\ 0 & 0 & \sigma_3^2 & 0 & 0 \\ 0 & 0 & 0 & \sigma_4^2 & 0 \\ 0 & 0 & 0 & 0 & \sigma_5^2 \end{bmatrix}$$

Our objective is to transform our model so that it has constant variances. For example, take

$$y_t = \alpha + \beta_1 x_{1t} + \dots + \beta_K x_{Kt} + \varepsilon_t, \qquad Var(\varepsilon_t) = \sigma_t^2\sigma^2$$

We transform the model by pre-multiplying by $1/\sigma_t$:

$$\frac{y_t}{\sigma_t} = \frac{\alpha}{\sigma_t} + \beta_1\frac{x_{1t}}{\sigma_t} + \dots + \beta_K\frac{x_{Kt}}{\sigma_t} + \frac{\varepsilon_t}{\sigma_t}$$

Consider the variance of the transformed errors:

$$Var\!\left(\frac{\varepsilon_t}{\sigma_t}\right) = \frac{Var(\varepsilon_t)}{\sigma_t^2} = \frac{\sigma_t^2\sigma^2}{\sigma_t^2} = \sigma^2$$


Hence, the simple transformation produces constant variance, i.e., homoskedasticity.
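This weighting is exactly weighted least squares. A minimal sketch in Python (the notes use Shazam; this statsmodels analogue, with made-up data, is an illustration rather than the notes' own code):

import numpy as np
import statsmodels.api as sm

# Hypothetical data in which the error standard deviation grows with x.
rng = np.random.default_rng(0)
x = rng.uniform(1, 20, 200)
sigma_t = 0.5 * x                          # each observation's error std. dev.
y = 2.0 + 3.0 * x + rng.normal(0.0, sigma_t)

# Dividing every term by sigma_t is equivalent to WLS with weights 1/sigma_t**2;
# the transformed errors then have constant variance.
X = sm.add_constant(x)
wls = sm.WLS(y, X, weights=1.0 / sigma_t**2).fit()
print(wls.params)  # close to (2, 3), now with efficient standard errors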

A convenient matrix demonstration of heteroskedasticity and autocorrelation follows. Recall that for the least squares estimator, the variance of the coefficients is:

$$V(\hat\beta) = E[(\hat\beta - \beta)(\hat\beta - \beta)^T] = (X^TX)^{-1}X^T E[\varepsilon\varepsilon^T] X(X^TX)^{-1} = \sigma^2(X^TX)^{-1}X^T\Omega X(X^TX)^{-1}$$

Note: this is a large variance, i.e., inefficient. When OLS is BLUE, $E[\varepsilon\varepsilon^T] = \sigma^2 I$ and the expression collapses to $V(\hat\beta) = \sigma^2(X^TX)^{-1}$. We know that a consequence of heteroskedasticity and autocorrelation is that OLS is no longer BLUE. We can instead model the error covariance as

$$E[\varepsilon\varepsilon^T] = \sigma^2\Omega$$

Ω weights the errors to correct for heteroskedasticity and autocorrelation. GLS produces the BLUE if we transform the original data so that the transformed errors satisfy $E[\varepsilon^*\varepsilon^{*T}] = \sigma^2 I$. To do so, we rely on some regularity conditions and a basic matrix algebra theorem that states there exists a matrix T such that

$$T\Omega T^T = I \;\Rightarrow\; \Omega = T^{-1}(T^T)^{-1} = (T^TT)^{-1}, \quad \text{so} \quad \Omega^{-1} = T^TT$$


Now, we can transform our OLS model by pre-multiplying by T:

$$y = X\beta + \varepsilon \;\Rightarrow\; Ty = TX\beta + T\varepsilon$$

Let $y^* = Ty$, $X^* = TX$, and $\varepsilon^* = T\varepsilon$, so the transformed model is $y^* = X^*\beta + \varepsilon^*$. Generalized least squares applies OLS to the transformed model:

$$\hat\beta_{GLS} = (X^{*T}X^*)^{-1}X^{*T}y^* = (X^TT^TTX)^{-1}X^TT^TTy = (X^T\Omega^{-1}X)^{-1}X^T\Omega^{-1}y$$

We can now see that the GLS coefficient estimates are efficient:

$$V(\hat\beta_{GLS}) = E[(\hat\beta_{GLS} - \beta)(\hat\beta_{GLS} - \beta)^T] = (X^{*T}X^*)^{-1}X^{*T}E[\varepsilon^*\varepsilon^{*T}]X^*(X^{*T}X^*)^{-1} = \sigma^2(X^T\Omega^{-1}X)^{-1}$$

This transformed GLS variance is BLUE if assumptions A2, A4, and A5 are satisfied. A proposed matrix T for heteroskedasticity is

$$T = \begin{bmatrix} 1/\sigma_1 & 0 & 0 & 0 & 0 \\ 0 & 1/\sigma_2 & 0 & 0 & 0 \\ 0 & 0 & 1/\sigma_3 & 0 & 0 \\ 0 & 0 & 0 & 1/\sigma_4 & 0 \\ 0 & 0 & 0 & 0 & 1/\sigma_5 \end{bmatrix}$$
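A numerical sketch of the matrix version, assuming Ω is known (numpy only; the data are invented for illustration):

import numpy as np

rng = np.random.default_rng(1)
n = 100
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])
beta = np.array([1.0, 2.0])

# Known heteroskedastic covariance: Omega is diagonal, so T = diag(1/sigma_t).
sigma_t = 0.1 + 0.2 * X[:, 1]
y = X @ beta + rng.normal(0.0, sigma_t)

# GLS: beta_hat = (X' Omega^{-1} X)^{-1} X' Omega^{-1} y
Omega_inv = np.diag(1.0 / sigma_t**2)
beta_gls = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ y)
print(beta_gls)  # close to (1, 2)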


Of course, this is the same transformation as our scalar counterpart. And like our scalar

counterpart, the matrix T will transform OLS into GLS.

Detecting Heteroskedasticity

There are multiple tests to determine whether a data set is heteroskedastic. Since they produce similar results, we'll focus primarily on one, the White test. The procedure for the White test follows (a sketch in Python appears after the steps):

1. Regress y on x1, x2 . . . xn.

2. Save and square the residuals et.

3. Regress et2 on the independent variables, their squares, and their cross products.

4. Calculate the test statistic NR2~χ2(1).

5. If NR2 is greater than the χ2 critical value, reject the null hypothesis of equal variances across observations. If NR2 is less than the χ2 critical value, we fail to reject the null hypothesis, i.e., the errors are homoskedastic.
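A sketch of the same procedure in Python using statsmodels' het_white, on invented earnings data (the names earn and we mirror the example below, but the numbers are made up):

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white

rng = np.random.default_rng(2)
we = rng.uniform(1, 20, 20)
earn = 16000 + 1800 * we + rng.normal(0.0, 400 * we)  # error variance grows with we

X = sm.add_constant(we)
resid = sm.OLS(earn, X).fit().resid

# het_white regresses the squared residuals on the regressors, their squares,
# and cross products, and returns the LM statistic N*R^2 with its p-value.
lm_stat, lm_pvalue, f_stat, f_pvalue = het_white(resid, X)
print(lm_stat, lm_pvalue)  # a small p-value rejects homoskedasticity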

Example

Recall the simple linear regression model; however, I've tweaked the data a little to accentuate heteroskedasticity. We will run four regressions, each using a different transformation of the dependent and/or independent variables. The first model regresses earnings on work experience. We then perform a White test to determine if we have heteroskedasticity.

Model 1:

TYPE COMMAND:_read (4) earn we
...SAMPLE RANGE IS NOW SET TO: 1 20

TYPE COMMAND:_ OLS earn we/resid=e

REQUIRED MEMORY IS PAR= 2 CURRENT PAR= 500
OLS ESTIMATION
20 OBSERVATIONS
DEPENDENT VARIABLE = EARN


...NOTE..SAMPLE RANGE SET TO: 1, 20

R-SQUARE = 0.4904    R-SQUARE ADJUSTED = 0.4621
VARIANCE OF THE ESTIMATE-SIGMA**2 = 0.59787E+08
STANDARD ERROR OF THE ESTIMATE-SIGMA = 7732.2
SUM OF SQUARED ERRORS-SSE = 0.10762E+10
MEAN OF DEPENDENT VARIABLE = 25584.
LOG OF THE LIKELIHOOD FUNCTION = -206.388

VARIABLE   ESTIMATED    STANDARD   T-RATIO           PARTIAL  STANDARDIZED  ELASTICITY
 NAME      COEFFICIENT  ERROR      18 DF    P-VALUE  CORR.    COEFFICIENT   AT MEANS
WE         1801.1       432.7      4.162    0.001     0.700    0.7003        0.3590
CONSTANT   16398.       2803.      5.849    0.000     0.809    0.0000        0.6410

We now plot the errors on work experience to see if we have heteroskedasticity. Since the model is y = α + βx + ε, the errors are ε = y - α - βx.

[Figure: 'OLS errors' plotted against work experience.]

Notice how, as work experience increases, the errors increase. This suggests heteroskedasticity may be a problem. So, a White test is performed to determine if remedial measures are necessary. The White test requires that we square our residuals (resid=e) and regress this new variable on the independent and squared independent variables.

TYPE COMMAND:_g we2=we**2
TYPE COMMAND



:_g e2=e**2
TYPE COMMAND:_ols e2 we we2

REQUIRED MEMORY IS PAR= 3 CURRENT PAR= 500
OLS ESTIMATION
20 OBSERVATIONS
DEPENDENT VARIABLE = E2
...NOTE..SAMPLE RANGE SET TO: 1, 20

R-SQUARE = 0.2526    R-SQUARE ADJUSTED = 0.1647
VARIANCE OF THE ESTIMATE-SIGMA**2 = 0.79272E+16
STANDARD ERROR OF THE ESTIMATE-SIGMA = 0.89035E+08
SUM OF SQUARED ERRORS-SSE = 0.13476E+18
MEAN OF DEPENDENT VARIABLE = 0.53809E+08
LOG OF THE LIKELIHOOD FUNCTION = -392.844

VARIABLE   ESTIMATED     STANDARD    T-RATIO            PARTIAL  STANDARDIZED  ELASTICITY
 NAME      COEFFICIENT   ERROR       17 DF     P-VALUE  CORR.    COEFFICIENT   AT MEANS
WE         0.34010E+08   0.1742E+08   1.952    0.068     0.428    1.4312        3.2235
WE2        -0.17997E+07  0.1222E+07  -1.473    0.159    -0.336   -1.0798       -1.4039
CONSTANT   -0.44102E+08  0.4677E+08  -0.9429   0.359    -0.223    0.0000       -0.8196

We take the R2 from this auxiliary equation and multiply it by 20. This is the test statistic NR2. Our null hypothesis is that the error variances are constant across all observations of work experience, i.e., σ1² = σ2² = . . . = σ20² (homoskedastic), versus the alternative hypothesis that the variances are not all equal.

The test statistic is NR2 = 20(.2526) = 5.052. This exceeds the χ2 critical value with one degree of freedom, 3.84. Hence, our earnings model is heteroskedastic. Theory tells us the estimated variance-covariance matrix is not valid: the work experience coefficient is not BLUE, and our test statistics are not valid. However, the estimates are still unbiased. To improve our t-statistics, we must use an alternative estimation technique. There is an alternative variance-covariance matrix, called the White variance-covariance matrix, that corrects for the heteroskedasticity. However, this estimation procedure does nothing to our coefficients. It is invoked by including the hetcov option in Shazam.
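A statsmodels analogue of the hetcov option is a robust covariance choice at fit time (continuing the earlier sketch; this is an illustration, not Shazam's own computation):

import statsmodels.api as sm

# White's heteroskedasticity-consistent covariance matrix: same coefficients,
# corrected standard errors and t-statistics.
robust = sm.OLS(earn, X).fit(cov_type="HC0")
print(robust.params)   # unchanged from plain OLS
print(robust.bse)      # White standard errors
print(robust.tvalues)  # corrected t-statistics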

Model 2:

TYPE COMMAND:_ols earn we/hetcov resid=e1

REQUIRED MEMORY IS PAR= 3 CURRENT PAR= 500
OLS ESTIMATION
20 OBSERVATIONS
DEPENDENT VARIABLE = EARN


...NOTE..SAMPLE RANGE SET TO: 1, 20

USING HETEROSKEDASTICITY-CONSISTENT COVARIANCE MATRIX

R-SQUARE = 0.4904    R-SQUARE ADJUSTED = 0.4621
VARIANCE OF THE ESTIMATE-SIGMA**2 = 0.59787E+08
STANDARD ERROR OF THE ESTIMATE-SIGMA = 7732.2
SUM OF SQUARED ERRORS-SSE = 0.10762E+10
MEAN OF DEPENDENT VARIABLE = 25584.
LOG OF THE LIKELIHOOD FUNCTION = -206.388

VARIABLE   ESTIMATED    STANDARD   T-RATIO           PARTIAL  STANDARDIZED  ELASTICITY
 NAME      COEFFICIENT  ERROR      18 DF    P-VALUE  CORR.    COEFFICIENT   AT MEANS
WE         1801.1       420.7      4.281    0.000     0.710    0.7003        0.3590
CONSTANT   16398.       1577.      10.40    0.000     0.926    0.0000        0.6410

Notice how the t-statistics have changed. These are correct statistics, so we can conclude that work experience is significant in explaining earnings. Let's look at the errors plotted on work experience to determine if the White variance-covariance matrix corrected for heteroskedasticity.

[Figure: 'White Errors do not Correct Hetroskedasticity' — errors plotted against work experience.]

While White's variance-covariance matrix produces reliable t-statistics, it does not correct for heteroskedasticity. This is demonstrated by a White test.

TYPE COMMAND:_g e12=e1**2
TYPE COMMAND:_ols e12 we we2

REQUIRED MEMORY IS PAR= 3 CURRENT PAR= 500



OLS ESTIMATION
20 OBSERVATIONS
DEPENDENT VARIABLE = E12
...NOTE..SAMPLE RANGE SET TO: 1, 20

R-SQUARE = 0.2526    R-SQUARE ADJUSTED = 0.1647
VARIANCE OF THE ESTIMATE-SIGMA**2 = 0.79272E+16
STANDARD ERROR OF THE ESTIMATE-SIGMA = 0.89035E+08
SUM OF SQUARED ERRORS-SSE = 0.13476E+18
MEAN OF DEPENDENT VARIABLE = 0.53809E+08
LOG OF THE LIKELIHOOD FUNCTION = -392.844

VARIABLE   ESTIMATED     STANDARD    T-RATIO            PARTIAL  STANDARDIZED  ELASTICITY
 NAME      COEFFICIENT   ERROR       17 DF     P-VALUE  CORR.    COEFFICIENT   AT MEANS
WE         0.34010E+08   0.1742E+08   1.952    0.068     0.428    1.4312        3.2235
WE2        -0.17997E+07  0.1222E+07  -1.473    0.159    -0.336   -1.0798       -1.4039
CONSTANT   -0.44102E+08  0.4677E+08  -0.9429   0.359    -0.223    0.0000       -0.8196

Given the same R2 and coefficients, it is clear that the White variance-covariance matrix fails to correct for heteroskedasticity. At this point, we return to our earlier transformation discussion. If we transform the dependent variable by some monotonic (order-preserving) transformation, we may be able to correct for heteroskedasticity. We will consider three transformations. In the first model, we divide earnings by work experience. In the second model, we transform earnings by taking its natural log. The third model transforms both the dependent and independent variables by taking natural logs.

Model 3

The transformation proceeds as follows:

1) Transform the model by dividing through by work experience:

$$E = \alpha + \beta\,WE + \varepsilon \;\Rightarrow\; \frac{E}{WE} = \frac{\alpha}{WE} + \beta + \frac{\varepsilon}{WE}$$

2) Regress the transformed dependent variable on the transformed independent variable:

$$\text{Let } E' = \frac{E}{WE}, \quad \alpha' = \beta, \quad \beta' = \alpha, \quad WE' = \frac{1}{WE}, \quad \varepsilon' = \frac{\varepsilon}{WE}$$

$$\text{Regress } E' = \alpha' + \beta'\,WE' + \varepsilon'$$

TYPE COMMAND:_


REQUIRED MEMORY IS PAR= 2 CURRENT PAR= 500
OLS ESTIMATION
20 OBSERVATIONS
DEPENDENT VARIABLE = EP
...NOTE..SAMPLE RANGE SET TO: 1, 20

USING HETEROSKEDASTICITY-CONSISTENT COVARIANCE MATRIX

R-SQUARE = 0.8606    R-SQUARE ADJUSTED = 0.8529
VARIANCE OF THE ESTIMATE-SIGMA**2 = 0.41717E+07
STANDARD ERROR OF THE ESTIMATE-SIGMA = 2042.5
SUM OF SQUARED ERRORS-SSE = 0.75090E+08
MEAN OF DEPENDENT VARIABLE = 8189.9
LOG OF THE LIKELIHOOD FUNCTION = -179.763

VARIABLE   ESTIMATED    STANDARD   T-RATIO           PARTIAL  STANDARDIZED  ELASTICITY
 NAME      COEFFICIENT  ERROR      18 DF    P-VALUE  CORR.    COEFFICIENT   AT MEANS
WEP        15300.       1853.      8.256    0.000     0.889    0.9277        0.7348
CONSTANT   2172.0       578.2      3.757    0.001     0.663    0.0000        0.2652

Hence, the estimated model is

$$\hat E' = 2172 + 15300\,WE'$$

3) Transform the model back to the original variables. Transforming the model to its original form implies

$$\hat E = 15300 + 2172\,WE$$

Note the change in the coefficients from the original OLS model. If this model reduces or eliminates heteroskedasticity, we believe the transformed model is a better estimate. To determine whether the transformation reduced or eliminated heteroskedasticity, we run a White test.

TYPE COMMAND:_

REQUIRED MEMORY IS PAR= 3 CURRENT PAR= 500
OLS ESTIMATION
20 OBSERVATIONS
DEPENDENT VARIABLE = E2
...NOTE..SAMPLE RANGE SET TO: 1, 20

R-SQUARE = 0.2185    R-SQUARE ADJUSTED = 0.1266
VARIANCE OF THE ESTIMATE-SIGMA**2 = 0.27546E+14
STANDARD ERROR OF THE ESTIMATE-SIGMA = 0.52485E+07
SUM OF SQUARED ERRORS-SSE = 0.46829E+15
MEAN OF DEPENDENT VARIABLE = 0.37545E+07
LOG OF THE LIKELIHOOD FUNCTION = -336.223


VARIABLE   ESTIMATED     STANDARD    T-RATIO            PARTIAL  STANDARDIZED  ELASTICITY
 NAME      COEFFICIENT   ERROR       17 DF     P-VALUE  CORR.    COEFFICIENT   AT MEANS
WE         -0.16152E+07  0.1027E+07  -1.573    0.134    -0.356   -1.1790       -2.1940
WE2        77446.        0.7203E+05   1.075    0.297     0.252    0.8060        0.8658
CONSTANT   0.87413E+07   0.2757E+07   3.170    0.006     0.610    0.0000        2.3282

The White test statistic is 20(.2185) = 4.37. Therefore, we still reject the hypothesis that the errors are identically distributed. However, we have eliminated some heteroskedasticity. This is seen by plotting the transformed model's errors on work experience.

[Figure: 'Transformed Model's Errors' plotted against work experience.]

Model 4

Since our first transformation didn't work, let's consider a second transformation. This second model uses the natural log of earnings to deal with heteroskedasticity. This is a monotonic transformation of earnings that changes the range while the domain of work experience remains unchanged.

TYPE COMMAND:_g le=log(earn)
TYPE COMMAND:_ols le we/resid=e2

REQUIRED MEMORY IS PAR= 3 CURRENT PAR= 500
OLS ESTIMATION
20 OBSERVATIONS
DEPENDENT VARIABLE = LE
...NOTE..SAMPLE RANGE SET TO: 1, 20



R-SQUARE = 0.3750    R-SQUARE ADJUSTED = 0.3403
VARIANCE OF THE ESTIMATE-SIGMA**2 = 0.10566
STANDARD ERROR OF THE ESTIMATE-SIGMA = 0.32505
SUM OF SQUARED ERRORS-SSE = 1.9019
MEAN OF DEPENDENT VARIABLE = 10.073
LOG OF THE LIKELIHOOD FUNCTION = -4.84989

VARIABLE   ESTIMATED     STANDARD    T-RATIO           PARTIAL  STANDARDIZED  ELASTICITY
 NAME      COEFFICIENT   ERROR       18 DF    P-VALUE  CORR.    COEFFICIENT   AT MEANS
WE         0.59784E-01   0.1819E-01  3.286    0.004     0.612    0.6124        0.0303
CONSTANT   9.7677        0.1179      82.88    0.000     0.999    0.0000        0.9697

Notice how the constant and slope coefficient are considerably different from the previous

models. The interpretation for the work experience coefficient is the percentage change in

earnings with a one unit change in work experience. Let’s look at our transformed model’s

errors.

[Figure: 'Transformed Errors' — the log model's errors plotted against work experience.]

The errors of the transformed model follow the same pattern as the non-transformed errors. However, when you look at the axis, you recognize that the errors are small. The White test demonstrates that the transformation does not account for all heteroskedasticity: NR2 = 20(.2017) = 4.034. But this result may be premature. Independent and squared-independent regressors are only one set of possible auxiliary regressors. When alternative regressors are used in the White test, heteroskedasticity is no longer a problem. These alternative regressors are accessed using the diagnos/het command after the OLS regression. Let's look at the White test:

TYPE COMMAND:_g e22=e2**2
TYPE COMMAND:_ols e22 we we2

REQUIRED MEMORY IS PAR= 3 CURRENT PAR= 500
OLS ESTIMATION
20 OBSERVATIONS
DEPENDENT VARIABLE = E22
...NOTE..SAMPLE RANGE SET TO: 1, 20

R-SQUARE = 0.2017    R-SQUARE ADJUSTED = 0.1078
VARIANCE OF THE ESTIMATE-SIGMA**2 = 0.20554E-01
STANDARD ERROR OF THE ESTIMATE-SIGMA = 0.14337
SUM OF SQUARED ERRORS-SSE = 0.34942
MEAN OF DEPENDENT VARIABLE = 0.95094E-01
LOG OF THE LIKELIHOOD FUNCTION = 12.0933

VARIABLE   ESTIMATED     STANDARD    T-RATIO            PARTIAL  STANDARDIZED  ELASTICITY
 NAME      COEFFICIENT   ERROR       17 DF     P-VALUE  CORR.    COEFFICIENT   AT MEANS
WE         0.51254E-01   0.2805E-01   1.827    0.085     0.405    1.3843        2.7488
WE2        -0.28939E-02  0.1968E-02  -1.471    0.160    -0.336   -1.1144       -1.2774
CONSTANT   -0.44828E-01  0.7532E-01  -0.5952   0.560    -0.143    0.0000       -0.4714

Model 5

Our final transformation involves transforming both the dependent and independent

variables with natural logs. Page 52 demonstrates that this transformation means the coefficient

of work experience is an elasticity.

REQUIRED MEMORY IS PAR= 2 CURRENT PAR= 500
OLS ESTIMATION
20 OBSERVATIONS
DEPENDENT VARIABLE = LE
...NOTE..SAMPLE RANGE SET TO: 1, 20

USING HETEROSKEDASTICITY-CONSISTENT COVARIANCE MATRIX

R-SQUARE = 0.3690    R-SQUARE ADJUSTED = 0.3340
VARIANCE OF THE ESTIMATE-SIGMA**2 = 0.10667
STANDARD ERROR OF THE ESTIMATE-SIGMA = 0.32660
SUM OF SQUARED ERRORS-SSE = 1.9200
MEAN OF DEPENDENT VARIABLE = 10.073
LOG OF THE LIKELIHOOD FUNCTION = -4.94492

VARIABLE   ESTIMATED    STANDARD    T-RATIO           PARTIAL  STANDARDIZED  ELASTICITY
 NAME      COEFFICIENT  ERROR       18 DF    P-VALUE  CORR.    COEFFICIENT   AT MEANS
LWE        0.27346      0.8505E-01  3.215    0.005     0.604    0.6075        0.0349
CONSTANT   9.7213       0.8662E-01  112.2    0.000     0.999    0.0000        0.9651


Notice the work experience coefficient of .2735. This is an elasticity: a one percent change in work experience leads to a .273 percent change in annual earnings. We once again run a White test to determine if this final transformation has dealt with heteroskedasticity.

TYPE COMMAND:_

REQUIRED MEMORY IS PAR= 3 CURRENT PAR= 500
OLS ESTIMATION
20 OBSERVATIONS
DEPENDENT VARIABLE = E2
...NOTE..SAMPLE RANGE SET TO: 1, 20

R-SQUARE = 0.1886    R-SQUARE ADJUSTED = 0.0931
VARIANCE OF THE ESTIMATE-SIGMA**2 = 0.21374E-01
STANDARD ERROR OF THE ESTIMATE-SIGMA = 0.14620
SUM OF SQUARED ERRORS-SSE = 0.36336
MEAN OF DEPENDENT VARIABLE = 0.96002E-01
LOG OF THE LIKELIHOOD FUNCTION = 11.7021

VARIABLE   ESTIMATED     STANDARD    T-RATIO              PARTIAL  STANDARDIZED  ELASTICITY
 NAME      COEFFICIENT   ERROR       17 DF       P-VALUE  CORR.    COEFFICIENT   AT MEANS
LWE        -0.57872E-02  0.1417      -0.4085E-01 0.968    -0.010   -0.0335       -0.0774
LWE2       0.30623E-01   0.5386E-01   0.5686     0.577     0.137    0.4665        0.7658
CONSTANT   0.29920E-01   0.7445E-01   0.4019     0.693     0.097    0.0000        0.3117

The test statistic is 20(.1886) = 3.772, which is less than the critical value of 3.84. Hence, the natural log transformation of both the dependent and independent variables reduces heteroskedasticity such that we fail to reject the null hypothesis that our errors are homoskedastically distributed. However, this model is not necessarily BLUE, as there may exist a more efficient estimator. Let's plot the errors on work experience.

[Figure: 'Transformed Errors' — the log-log model's errors plotted against work experience.]


This confirms our failure to reject the null hypothesis that the errors in the log-log model are distributed homoskedastically.

In conclusion, heteroskedasticity introduces several problems for the OLS estimator. While OLS estimates remain unbiased, they are no longer efficient. Hence, the standard errors and t-statistics are no longer reliable. To detect whether a data set has heteroskedasticity, multiple tests are available. We have relied exclusively on the White test, yet others are used. All corrections for heteroskedasticity attempt to eliminate it by transforming the variables. While alternative transformations reduce heteroskedasticity, generalized least squares is BLUE when the appropriate transformation matrix is identified.

Autocorrelation

Autocorrelation is when the influence of errors spills over from one observation to the next. It can also be a sign of incorrect functional form. Autocorrelation is a violation of assumption A3 and shows up in time series data. It has a similar effect on estimated coefficients as heteroskedasticity in that it leaves them unbiased, but they are no longer efficient. Hence, t-statistics are unreliable. When autocorrelation exists, estimators are no longer BLUE. Our correction for autocorrelation is the same as the solution for heteroskedasticity, i.e., generalized least squares.

We first want to determine the expected value of εt when the errors follow a first-order process, where ρ is the correlation between εt and εt-1. Predictably, the expected value is zero:

$$\varepsilon_t = \rho\varepsilon_{t-1} + v_t$$

$$E[\varepsilon_t] = \rho E[\varepsilon_{t-1}] + E[v_t] = \rho E[\varepsilon_{t-1}]$$

Solve for $E[\varepsilon_t]$: $(1-\rho)E[\varepsilon_t] = 0$, so $E[\varepsilon_t] = 0$.

Now observe the variance of εt:

$$V(\varepsilon_t) = V(\rho\varepsilon_{t-1} + v_t) = \rho^2 V(\varepsilon_{t-1}) + V(v_t) + 2\rho\,cov(\varepsilon_{t-1}, v_t)$$

Since $cov(\varepsilon_{t-1}, v_t) = 0$, we have $\sigma_\varepsilon^2 = \rho^2\sigma_\varepsilon^2 + \sigma_v^2$, and therefore

$$\sigma_\varepsilon^2 = \frac{\sigma_v^2}{1 - \rho^2}$$


We derive the correlation between the error terms εt and εt-1. We first derive the covariance between them:

$$Cov(\varepsilon_t, \varepsilon_{t-1}) = E[(\varepsilon_t - E\varepsilon_t)(\varepsilon_{t-1} - E\varepsilon_{t-1})] = E[\varepsilon_t\varepsilon_{t-1}] = E[(\rho\varepsilon_{t-1} + v_t)\varepsilon_{t-1}] = \rho E[\varepsilon_{t-1}^2] + E[v_t\varepsilon_{t-1}]$$

However, $E[v_t\varepsilon_{t-1}] = 0$ by assumption, so

$$Cov(\varepsilon_t, \varepsilon_{t-1}) = \rho\sigma_\varepsilon^2$$

Recall from page 13 that our population correlation equals

$$\rho_{xy} = \frac{\sum_{i=1}^{N}(x_i - \bar x)(y_i - \bar y)}{\sqrt{\sum_{i=1}^{N}(x_i - \bar x)^2 \sum_{i=1}^{N}(y_i - \bar y)^2}} = \frac{\sigma_{xy}}{\sigma_x\sigma_y}$$

We can now use this correlation definition to determine the correlation between εt and εt-1:

$$Corr(\varepsilon_t, \varepsilon_{t-1}) = \frac{Cov(\varepsilon_t, \varepsilon_{t-1})}{SD(\varepsilon_t)\,SD(\varepsilon_{t-1})} = \frac{\rho\sigma_\varepsilon^2}{\sigma_\varepsilon^2} = \rho$$

The statistic ρ describes the correlation between errors that are one period apart. Since $-1 \le \rho \le 1$, it can be shown that the relationship between errors k periods apart is modeled as

$$Cov(\varepsilon_t, \varepsilon_{t-k}) = \rho^k\sigma_\varepsilon^2 \;\Rightarrow\; Corr(\varepsilon_t, \varepsilon_{t-k}) = \rho^k$$

Hence, there is some correlation between errors no matter how far apart they are. However, since $-1 \le \rho \le 1$, this correlation declines in absolute value as the errors move further apart. Still, the errors are autocorrelated if ρ≠0.


There are multiple types of autocorrelation; however, here we will only deal with first-order autocorrelation. First-order autocorrelation exists when the errors in one period are directly correlated with the errors in the ensuing period, and can be modeled as

$$y_t = \alpha + \beta_1 x_{1t} + \dots + \beta_k x_{kt} + \varepsilon_t, \qquad \varepsilon_t = \rho\varepsilon_{t-1} + v_t, \quad 0 \le \rho \le 1$$

where vt is assumed to be independently distributed; but, as you can see, εt is not independently distributed from εt-1: they are correlated by the factor ρ.

Replace the subscript t by t-1 and multiply by ρ:

$$\rho y_{t-1} = \rho\alpha + \rho\beta_1 x_{1,t-1} + \dots + \rho\beta_k x_{k,t-1} + \rho\varepsilon_{t-1}$$

Subtract the second equation from the first:

$$y_t - \rho y_{t-1} = \alpha(1-\rho) + \beta_1(x_{1t} - \rho x_{1,t-1}) + \dots + \beta_k(x_{kt} - \rho x_{k,t-1}) + (\varepsilon_t - \rho\varepsilon_{t-1})$$

We can now return to our GLS model to correct for autocorrelation. That is, we seek to transform our model by some factor T such that our error terms are no longer dependent on other error terms; as before, $y^* = Ty$, $X^* = TX$, $\varepsilon^* = T\varepsilon$, and $\hat\beta_{GLS} = (X^T\Omega^{-1}X)^{-1}X^T\Omega^{-1}y$.

A matrix T that is used to correct for autocorrelation is



$$T = \begin{bmatrix} \sqrt{1-\rho^2} & 0 & 0 & \cdots & 0 & 0 \\ -\rho & 1 & 0 & \cdots & 0 & 0 \\ 0 & -\rho & 1 & \cdots & 0 & 0 \\ \vdots & & \ddots & \ddots & & \vdots \\ 0 & 0 & 0 & \cdots & -\rho & 1 \end{bmatrix}$$

Notice that the elements just below the diagonal are the negative correlations between successive error terms. We can transform this model such that

$$y_t^* = y_t - \rho y_{t-1}, \qquad x_{it}^* = x_{it} - \rho x_{i,t-1}$$

Hence, our transformed model in matrix form is

$$y^* = Ty = \begin{bmatrix} \sqrt{1-\rho^2}\,y_1 \\ y_2 - \rho y_1 \\ y_3 - \rho y_2 \\ \vdots \\ y_n - \rho y_{n-1} \end{bmatrix} \qquad \text{and} \qquad X^* = TX = \begin{bmatrix} \sqrt{1-\rho^2} & \sqrt{1-\rho^2}\,x_{1,1} & \cdots & \sqrt{1-\rho^2}\,x_{k,1} \\ 1-\rho & x_{1,2} - \rho x_{1,1} & \cdots & x_{k,2} - \rho x_{k,1} \\ \vdots & \vdots & & \vdots \\ 1-\rho & x_{1,n} - \rho x_{1,n-1} & \cdots & x_{k,n} - \rho x_{k,n-1} \end{bmatrix}$$

The GLS solution is to apply OLS to the transformed model

$$y^* = X^*\beta + \varepsilon^*$$
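A small numpy sketch (with an invented ρ and n) that checks this T whitens the AR(1) error covariance, i.e., TΩT^T = I:

import numpy as np

rho, n = 0.7, 6
T = np.eye(n)
T[0, 0] = np.sqrt(1 - rho**2)   # special first row for the first observation
for t in range(1, n):
    T[t, t - 1] = -rho          # quasi-differencing: row t gives e_t - rho*e_{t-1}

# AR(1) covariance in units of sigma_v^2: Omega[i, j] = rho^|i-j| / (1 - rho^2)
i = np.arange(n)
Omega = rho ** np.abs(i[:, None] - i[None, :]) / (1 - rho**2)

print(np.allclose(T @ Omega @ T.T, np.eye(n)))  # True: transformed errors are white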

Detecting Autocorrelation: the Durbin-Watson Statistic



The Durbin-Watson (DW) test is nearly the exclusive test for autocorrelation. The intuition behind the DW is to test whether successive residuals are close to each other, which suggests autocorrelation may be present. The DW lies between 0 and 4, with DW=2 indicating

autocorrelation is not present. Hence, our hypothesis concerning autocorrelation is

H0: Corr(ε t, εt-1)=0

H1: Corr(ε t, εt-1)≠0

The formal definition of DW is

$$DW = \frac{\sum_{t=2}^{T}(\hat\varepsilon_t - \hat\varepsilon_{t-1})^2}{\sum_{t=1}^{T}\hat\varepsilon_t^2} \approx 2(1 - \hat\rho)$$

Therefore, when ρ=0 (no autocorrelation), DW=2. The critical value with which we compare the DW was adjusted by Durbin and Watson and is presented in PR on page 610. Notice in the table that two limits are given: dl for the DW critical value's lower limit and du for its upper limit. The following is a table you will find helpful when testing for autocorrelation with the DW:

DW Value            Test Conclusion
4-dl < DW < 4       Reject null; negative autocorrelation
4-du < DW < 4-dl    Test inconclusive
2 < DW < 4-du       Fail to reject: no autocorrelation
du < DW < 2         Fail to reject: no autocorrelation
dl < DW < du        Test inconclusive
0 < DW < dl         Reject null; positive autocorrelation

It's useful to graph the distribution with rejection, inconclusive, and non-rejection areas.
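A sketch of the statistic in Python (resid stands for the residual array of any fitted time-series model; the function is statsmodels' durbin_watson):

from statsmodels.stats.stattools import durbin_watson

dw = durbin_watson(resid)   # resid: residuals from a fitted model
rho_hat = 1 - dw / 2        # since DW is approximately 2(1 - rho_hat)
print(dw, rho_hat)          # compare dw against the dl/du bounds in the table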


A final note regarding the DW is that it deteriorates in the presence of lagged dependent variables. Durbin accounted for this with his h test.

Examples

We now use the tools presented in this section to assess time series data. The series we use is perhaps the best-known time series, United States GDP. We use annual data from 1959-1994, and GDP estimates are regressed on time.

TYPE COMMAND:_
...SAMPLE RANGE IS NOW SET TO: 1 36

TYPE COMMAND:_OLS GDP year/ resid=e exactdw

REQUIRED MEMORY IS PAR= 13 CURRENT PAR= 500
OLS ESTIMATION
36 OBSERVATIONS
DEPENDENT VARIABLE = GDP
...NOTE..SAMPLE RANGE SET TO: 1, 36

DURBIN-WATSON STATISTIC = 0.60702

DURBIN-WATSON P-VALUE = 0.000000

R-SQUARE = 0.9922    R-SQUARE ADJUSTED = 0.9919
VARIANCE OF THE ESTIMATE-SIGMA**2 = 14294.
STANDARD ERROR OF THE ESTIMATE-SIGMA = 119.56
SUM OF SQUARED ERRORS-SSE = 0.48599E+06
MEAN OF DEPENDENT VARIABLE = 4269.5
LOG OF THE LIKELIHOOD FUNCTION = -222.269

VARIABLE   ESTIMATED     STANDARD   T-RATIO           PARTIAL  STANDARDIZED  ELASTICITY
 NAME      COEFFICIENT   ERROR      34 DF    P-VALUE  CORR.    COEFFICIENT   AT MEANS
YEAR       125.78        1.918      65.57    0.000     0.996    0.9961        58.2261
CONSTANT   -0.24433E+06  3791.      -64.45   0.000    -0.996    0.0000       -57.2261

The exactdw option on the OLS command tells the computer to print an exact estimate of the DW, along with the probability that the errors are uncorrelated. Here that probability is zero, so we conclude the GDP data are autocorrelated. To review the DW test with critical values dl=1.41 and du=1.52: according to our table, we reject the hypothesis of no autocorrelation and conclude that positive autocorrelation exists. Let's look at the plotted OLS errors.

[Figure: 'OLS Autocorrelated Errors' — OLS errors plotted against year.]

In a graph of errors on the independent variable, autocorrelation appears as positive errors followed by positive errors and negative errors followed by negative errors. This is clearly the case above. From 1959 to around 1970, positive errors are followed by other positive errors. Between 1970 and 1985, negative errors are followed by negative errors. After 1985, the pattern of positive errors followed by positive errors reemerges. Negative autocorrelation is present when there are radical shifts between errors: for example, the error following a positive error is negative, and the error following a negative error is positive. Both forms of autocorrelation make OLS inefficient and invalidate the t-statistics.

The correction for autocorrelation is GLS. Shazam has a nice built-in command that produces GLS estimates that are BLUE: the auto command.
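A rough statsmodels analogue of the auto command is GLSAR, which alternates between estimating ρ from the residuals and re-fitting on the transformed data, a Cochrane-Orcutt-type procedure (gdp and year are assumed to be arrays holding the data; a sketch, not Shazam's exact algorithm):

import statsmodels.api as sm

X = sm.add_constant(year)
model = sm.GLSAR(gdp, X, rho=1)           # rho=1: one autoregressive lag, AR(1)
results = model.iterative_fit(maxiter=10)
print(model.rho)                          # estimated rho, cf. Shazam's 0.69089
print(results.params)                     # GLS slope and intercept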

TYPE COMMAND:_Auto GDP year/max resid=e1

REQUIRED MEMORY IS PAR= 4 CURRENT PAR= 500

DEPENDENT VARIABLE = GDP
..NOTE..R-SQUARE,ANOVA,RESIDUALS DONE ON ORIGINAL VARS

LEAST SQUARES ESTIMATION 36 OBSERVATIONS
BY COCHRANE-ORCUTT TYPE PROCEDURE WITH CONVERGENCE = 0.00100

ITERATION  RHO       LOG L.F.   SSE
 1         0.00000   -222.269   0.48599E+06
 2         0.68187   -211.170   0.25781E+06
 3         0.69041   -211.166   0.25758E+06
 4         0.69089   -211.166   0.25757E+06

LOG L.F. = -211.166 AT RHO = 0.69089

          ASYMPTOTIC  ASYMPTOTIC  ASYMPTOTIC
ESTIMATE  VARIANCE    ST.ERROR    T-RATIO
RHO 0.69089  0.01452  0.12049     5.73390

R-SQUARE = 0.9958    R-SQUARE ADJUSTED = 0.9957
VARIANCE OF THE ESTIMATE-SIGMA**2 = 7575.5
STANDARD ERROR OF THE ESTIMATE-SIGMA = 87.038
SUM OF SQUARED ERRORS-SSE = 0.25757E+06
MEAN OF DEPENDENT VARIABLE = 4269.5
LOG OF THE LIKELIHOOD FUNCTION = -211.166

MODEL SELECTION TESTS - SEE JUDGE ET.AL. (1985, P.242)
 AKAIKE (1969) FINAL PREDICTION ERROR - FPE = 7996.4
 (FPE ALSO KNOWN AS AMEMIYA PREDICTION CRITERION - PC)
 AKAIKE (1973) INFORMATION CRITERION - LOG AIC = 8.9866
 SCHWARZ (1978) CRITERION - LOG SC = 9.0746
MODEL SELECTION TESTS - SEE RAMANATHAN (1992, P.167)
 CRAVEN-WAHBA (1979) GENERALIZED CROSS VALIDATION - GCV = 8021.2
 HANNAN AND QUINN (1979) CRITERION - HQ = 8244.8
 RICE (1984) CRITERION - RICE = 8049.0
 SHIBATA (1981) CRITERION - SHIBATA = 7949.6
 SCHWARTZ (1978) CRITERION - SC = 8730.8
 AKAIKE (1974) INFORMATION CRITERION - AIC = 7995.5

ANALYSIS OF VARIANCE - FROM MEAN
            SS           DF   MS
REGRESSION  0.61688E+08   1.  0.61688E+08
ERROR       0.25757E+06  34.  7575.5
TOTAL       0.61946E+08  35.  0.17699E+07

ANALYSIS OF VARIANCE - FROM ZERO
            SS           DF   MS
REGRESSION  0.71793E+09   2.  0.35896E+09
ERROR       0.25757E+06  34.  7575.5
TOTAL       0.71819E+09  36.  0.19950E+08

VARIABLE   ESTIMATED     STANDARD   T-RATIO           PARTIAL  STANDARDIZED  ELASTICITY
 NAME      COEFFICIENT   ERROR      34 DF    P-VALUE  CORR.    COEFFICIENT   AT MEANS
YEAR       125.69        3.794      33.13    0.000     0.985    0.9954        58.1860
CONSTANT   -0.24414E+06  7500.      -32.55   0.000    -0.984    0.0000       -57.1824

VARIANCE-COVARIANCE MATRIX OF COEFFICIENTS
YEAR      14.397
CONSTANT  -28456.    0.56245E+08
          YEAR       CONSTANT

CORRELATION MATRIX OF COEFFICIENTS
YEAR      1.0000
CONSTANT  -0.99998   1.0000
          YEAR       CONSTANT

DURBIN-WATSON = 1.4443    VON NEUMANN RATIO = 1.4855    RHO = 0.23705
RESIDUAL SUM = -87.763    RESIDUAL VARIANCE = 7802.1
SUM OF ABSOLUTE ERRORS = 2351.1
R-SQUARE BETWEEN OBSERVED AND PREDICTED = 0.9957
RUNS TEST: 15 RUNS, 22 POSITIVE, 14 NEGATIVE, NORMAL STATISTIC = -1.1085
DURBIN H STATISTIC (ASYMPTOTIC NORMAL) = 2.0586 MODIFIED FOR AUTO ORDER=1
COEFFICIENT OF SKEWNESS = -1.0163 WITH STANDARD DEVIATION OF 0.3925
COEFFICIENT OF EXCESS KURTOSIS = 1.2358 WITH STANDARD DEVIATION OF 0.7681

GOODNESS OF FIT TEST FOR NORMALITY OF RESIDUALS - 6 GROUPS
OBSERVED  2.0  4.0   8.0  17.0  5.0  0.0
EXPECTED  0.8  4.9  12.3  12.3  4.9  0.8
CHI-SQUARE = 5.9837 WITH 2 DEGREES OF FREEDOM

As is demonstrated by the DW statistic, our test result is inconclusive. Let's see how the auto AR(1) command dealt with the model's errors.

[Figure: 'Auto Errors on Year' — the AR(1) model's errors plotted against year.]


It appears that the auto command has done a good job reducing the autocorrelated errors. However, our DW test with the auto command suggests that the test is inconclusive. To improve on these estimates, we want to see how the errors are correlated. A useful tool for observing the correlation between errors in different periods is the autocorrelation function (ACF), plotted with a correlogram. The idea behind the ACF is to plot the correlations between errors k periods apart and see which lags have significant correlation. Those lags with significant correlations must be accounted for. Let's look at the OLS autocorrelation function.

OLS ACF and PACF

AUTOCORRELATION FUNCTION OF THE SERIES (1-B) (1-B) E

 1  0.66  . + RRRRRRRRRRRRRRRRRRRRRRR .
 2  0.26  . + RRRRRRRRRR + .
 3  0.04  . + RR + .
 4  0.01  . + R + .
 5  0.02  . + RR + .
 6  -.03  . + RR + .
 7  -.05  . + RRR + .
 8  -.05  . + RRR + .
 9  -.09  . + RRRR + .
10  -.10  . + RRRRR + .
11  -.11  . + RRRRR + .
12  -.14  . + RRRRRR + .

PARTIAL AUTOCORRELATION FUNCTION OF THE SERIES (1-B) (1-B) E

 1  0.66  . + RRRRRRRRRRRRRRRRRRRRRRR .
 2  -.30  . +RRRRRRRRRRR + .
 3  0.04  . + RR + .
 4  0.07  . + RRR + .
 5  -.02  . + RR + .
 6  -.12  . + RRRRR + .
 7  0.09  . + RRRR + .
 8  -.05  . + RRR + .
 9  -.11  . + RRRRR + .
10  0.02  . + RR + .
11  -.04  . + RR + .
12  -.14  . + RRRRRR + .

The OLS ACF indicates that there is significant correlation between the errors εt and εt-1. However, recall the transformation matrix T: it seeks to account for correlation between periods t and t-1, t and t-2, . . ., t and t-n. So we also consult the partial autocorrelation function (PACF), which measures the correlation between periods t and t-k after controlling for the intervening lags. The OLS PACF indicates that we should not only consider how the errors are correlated with last period's, but also with those two periods ago. So, we go back to the AUTO command and specify a T matrix that accounts for the correlation with both the last and the second-to-last period's errors. (A Python sketch of computing the ACF and PACF follows.)
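A sketch of the correlogram calculation in Python (resid is the residual series; lags whose PACF spikes outside roughly ±2/√N point to the AR order to specify):

from statsmodels.tsa.stattools import acf, pacf

print(acf(resid, nlags=12))   # autocorrelations at lags 0..12
print(pacf(resid, nlags=12))  # partial autocorrelations at lags 0..12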

TYPE COMMAND:_Auto GDP year/ order=2 max

REQUIRED MEMORY IS PAR= 4 CURRENT PAR= 500

DEPENDENT VARIABLE = GDP
..NOTE..R-SQUARE,ANOVA,RESIDUALS DONE ON ORIGINAL VARS

LEAST SQUARES SECOND-ORDER AUTOCORRELATION
BY COCHRANE-ORCUTT TYPE PROCEDURE WITH CONVERGENCE = 0.001000

36 OBSERVATIONS

ITERATION  RHO1      RHO2       SSE         SSE/N       LOG.L.F.
 1         0.00000   0.00000    485987.23   13499.645   -222.26932
 2         0.95230   -0.37132   222433.35   6178.7043   -208.67883
 3         0.95551   -0.37035   222393.47   6177.5965   -208.67859
 4         0.95560   -0.37031   222392.51   6177.5698   -208.67860

                ASYMPTOTIC  ASYMPTOTIC  ASYMPTOTIC
      ESTIMATE  VARIANCE    ST.ERROR    T-RATIO    AUTOCORRELATION
RHO1  0.95560   0.02397     0.15482     6.17244    0.69736
RHO2  -0.37031  0.02397     0.15482    -2.39192    0.29609
COVARIANCE -0.01671

COMPLEX ROOTS - AUTOREGRESSIVE PROCESS DISPLAYS PSEUDO PERIODIC BEHAVIOUR WITH DAMPED SINE WAVE

R-SQUARE = 0.9964    R-SQUARE ADJUSTED = 0.9963
VARIANCE OF THE ESTIMATE-SIGMA**2 = 6541.0
STANDARD ERROR OF THE ESTIMATE-SIGMA = 80.876
SUM OF SQUARED ERRORS-SSE = 0.22239E+06
MEAN OF DEPENDENT VARIABLE = 4269.5
LOG OF THE LIKELIHOOD FUNCTION = -208.679

MODEL SELECTION TESTS - SEE JUDGE ET.AL. (1985, P.242)
 AKAIKE (1969) FINAL PREDICTION ERROR - FPE = 6904.3
 (FPE ALSO KNOWN AS AMEMIYA PREDICTION CRITERION - PC)
 AKAIKE (1973) INFORMATION CRITERION - LOG AIC = 8.8398
 SCHWARZ (1978) CRITERION - LOG SC = 8.9278
MODEL SELECTION TESTS - SEE RAMANATHAN (1992, P.167)
 CRAVEN-WAHBA (1979) GENERALIZED CROSS VALIDATION - GCV = 6925.7
 HANNAN AND QUINN (1979) CRITERION - HQ = 7118.8
 RICE (1984) CRITERION - RICE = 6949.8
 SHIBATA (1981) CRITERION - SHIBATA = 6864.0
 SCHWARTZ (1978) CRITERION - SC = 7538.4
 AKAIKE (1974) INFORMATION CRITERION - AIC = 6903.6

ANALYSIS OF VARIANCE - FROM MEAN
            SS           DF   MS
REGRESSION  0.61724E+08   1.  0.61724E+08
ERROR       0.22239E+06  34.  6541.0
TOTAL       0.61946E+08  35.  0.17699E+07

ANALYSIS OF VARIANCE - FROM ZERO
            SS           DF   MS
REGRESSION  0.12807E+09   2.  0.64037E+08
ERROR       0.22239E+06  34.  6541.0
TOTAL       0.12830E+09  36.  0.35638E+07

VARIABLE   ESTIMATED     STANDARD   T-RATIO           PARTIAL  STANDARDIZED  ELASTICITY
 NAME      COEFFICIENT   ERROR      34 DF    P-VALUE  CORR.    COEFFICIENT   AT MEANS
YEAR       125.83        2.998      41.98    0.000     0.990    0.9965        58.2499
CONSTANT   -0.24442E+06  5925.      -41.25   0.000    -0.990    0.0000       -57.2480

VARIANCE-COVARIANCE MATRIX OF COEFFICIENTS
YEAR      8.9853
CONSTANT  -17759.    0.35102E+08
          YEAR       CONSTANT

CORRELATION MATRIX OF COEFFICIENTS
YEAR      1.0000
CONSTANT  -0.99999   1.0000

DURBIN-WATSON = 1.8599    VON NEUMANN RATIO = 1.9131    RHO = 0.00792
RESIDUAL SUM = -22.386    RESIDUAL VARIANCE = 6852.2
SUM OF ABSOLUTE ERRORS = 2101.6
R-SQUARE BETWEEN OBSERVED AND PREDICTED = 0.9962
RUNS TEST: 19 RUNS, 21 POSITIVE, 15 NEGATIVE, NORMAL STATISTIC = 0.1741
COEFFICIENT OF SKEWNESS = -1.1796 WITH STANDARD DEVIATION OF 0.3925
COEFFICIENT OF EXCESS KURTOSIS = 2.3433 WITH STANDARD DEVIATION OF 0.7681

GOODNESS OF FIT TEST FOR NORMALITY OF RESIDUALS - 6 GROUPS
OBSERVED  1.0  5.0   9.0  16.0  5.0  0.0
EXPECTED  0.8  4.9  12.3  12.3  4.9  0.8
CHI-SQUARE = 2.8661 WITH 2 DEGREES OF FREEDOM

The revised AUTO’s DW (1.8599) suggests we have eliminated autocorrelation. The revised

errors are close to random.

This plot of errors on year is close to a random pattern. Hence, the GLS that accounts for second

degree autocorrelation produces good estimates. We can also plot the ACF and PACF for the

revised AUTO model to determine if it accounted for the second-degree autocorrelation.

AR(2) AUTO

0 0 0AUTOCORRELATION FUNCTION OF THE SERIES (1-B) (1-B ) E1

1 0.01 . + R + . 2 -.03 . + RR + . 3 0.02 . + RR + . 4 -.04 . + RR + . 5 0.21 . + RRRRRRRR + . 6 -.05 . + RRR + . 7 0.00 . + R + .

Revised Auto's Errors

-300-250-200-150-100

-500

50100150200

1950 1960 1970 1980 1990 2000

Year

Err

ors

Series1

Page 28: Autocorrelation and Hetroskedasticity - University of …general.utpb.edu/fac/carson_s/Econometrics/Econometri… ·  · 2001-07-06Autocorrelation and Hetroskedasticity ... The consequences

28

8 0.07 . + RRR + . 9 -.08 . + RRRR + .10 -.09 . + RRRR + .11 0.05 . + RRR + .12 -.10 . + RRRR + .

0 0 0PARTIAL AUTOCORRELATION FUNCTION OF THE SERIES (1-B) (1-B ) E1

1 0.01 . + R + . 2 -.03 . + RR + . 3 0.02 . + RR + . 4 -.04 . + RR + . 5 0.21 . + RRRRRRRR + . 6 -.06 . + RRR + . 7 0.02 . + RR + . 8 0.05 . + RRR + . 9 -.07 . + RRR + .10 -.13 . + RRRRRR + .11 0.08 . + RRRR + .12 -.11 . + RRRRR + .

The new model’s ACFs and PACFs demonstrate that the GLS model that accounts for two lagged

period’s errors accounts for all the autocorrelation in the GDP model. To verify this, the new

AUTO model’s DW should fail to reject no autocorrelation. This is true since 2>DW>du where

the new DW=1.8599. Isn’t this stuff neat?

Stationarity and Nonstationarity

Now that we have considered how to account for autocorrelation, there remains an issue

related to time that is critical to our understanding of time-series. In any time-series, there are

three characteristics that must be accounted for: trend, cycle and seasonality. The most important

as it relates to our study of time series is trend. GDP numbers are reported as seasonally adjusted,

so that means all we have to account for are cycle and trend. How do we account for the positive trend? How appropriate is it to use the model y = α + βx + ε? In regression analysis, we assume the trend is deterministic. In reality, there is nothing that says the trend must be a deterministic trend line. What's to stop it from being stochastic, where the trend is not linear but bounces around? It turns out that how we model the trend is critical to the reliability of our modeling.

Throughout our discussion we have assumed that $Var(\varepsilon_t) = \sigma_\varepsilon^2$ and $Cov(\varepsilon_t, \varepsilon_{t-1}) = \rho\sigma_\varepsilon^2$ do not depend on t. When these assumptions are satisfied, we say the time series is stationary. In other words, the time series does not depend on time and has a finite (bounded) variance. However, many time series are not stationary, and the above assumptions will not hold. When they fail to hold, time series are frequently de-trended before further analysis is done. There are two means to de-trend:

1. Estimate y = α + βx + ε and de-trend by ε = y - α - βx. (stationary process)

2. Successive differencing. (non-stationary process)

The first de-trending technique simply uses the regression model to estimate variations around trend. In other words,

$$y_t = f(t) + \varepsilon_t$$

where f(t) is the trend and ε is the variation around the trend. Note that our normal equations hold: Σe = 0 and Σex = 0. This is what OLS assumes. However, assuming yt is stationary when it is not creates problems.

The second technique treats a non-stationary process. A non-stationary process has a stochastic trend, whereas stationary processes are fixed in time. The simplest example of a non-stationary series is the random walk:

$$y_t = y_{t-1} + \varepsilon_t$$

To deal with a non-stationary trend, we difference. Differencing means we take the difference between successive values to eliminate the trend:

$$y_t = y_{t-1} + \varepsilon_t \;\Rightarrow\; y_t - y_{t-1} = \varepsilon_t, \quad \text{or} \quad \Delta y_t = \varepsilon_t$$

Just a note on the random walk: about 15 years ago there was a famous book entitled A Random Walk Down Wall Street. Part of its subject matter dealt with exactly our random walk hypothesis. What the random walk says is that the change in the variable is random. It is as likely to be up as


down. If stocks follow a random walk, they are said to follow a martingale series, implying the stock market is efficient.

A second non-stationary process is the random walk with drift:

$$y_t = d + y_{t-1} + \varepsilon_t \;\Rightarrow\; \Delta y_t = d + \varepsilon_t$$

where d is the drift. In this case, the first difference is stationary with mean d and variance σ2.

The generalization of the random walk with drift is

$$y_t = y_0 + dt + \sum_{j=1}^{t}\varepsilon_j$$

This has the same form as a stationary trend process, except the disturbance, Σεj, is not stationary: it has variance tσ2, which increases with time. Therefore, OLS deteriorates when we use it with a non-stationary process because Σe = 0 and Σex = 0 fail to hold. OLS is no longer consistent (the estimates do not settle down around an expected value), nor is it efficient. However, both the stationary and non-stationary models follow a linear trend. Hence, we must know whether the random variable yt is stationary or non-stationary before we use OLS on a time series. Perhaps a graph will help.

[Figure: a stationary (mean-reverting) series beside a non-stationary series with a stochastic trend.]
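In place of the missing graph, a small simulation makes the same point (numpy only; invented data):

import numpy as np

rng = np.random.default_rng(3)
eps = rng.normal(0.0, 1.0, 500)

y = np.cumsum(eps)        # random walk: y_t = y_{t-1} + eps_t, non-stationary
dy = np.diff(y)           # first difference: delta y_t = eps_t, stationary

print(y.var(), dy.var())  # the level's variance is far larger and grows with t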


It is important to know whether a series is stationary or non-stationary because a stationary process reverts to its mean while a non-stationary process does not. This has important implications for macroeconomics: if a series is not trend-reverting, that bears on what causes the business cycle. Some macroeconomists have argued that the GDP series is non-stationary and non-mean-reverting because there are technology shocks to trend that cause cycles. Others accept that GDP follows a stochastic trend while rejecting the notion that cycles are attributable to technology shocks.

Now that we understand the difference between a stationary and non-stationary process, we will determine how to distinguish between them. Two economists, Nelson and Plosser, use a test developed by Dickey and Fuller to distinguish between the two types of series. Following their methodology, we estimate the equation

$$y_t = \alpha + \rho y_{t-1} + d\,t + \varepsilon_t$$

which belongs to the non-stationary class if ρ=1 and d=0, and to the stationary class if ρ<1.
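In modern software this regression underlies the (augmented) Dickey-Fuller test; a sketch with statsmodels, where y is the series of interest:

from statsmodels.tsa.stattools import adfuller

# regression="ct" includes a constant and trend, matching the equation above.
# The null hypothesis is a unit root (rho = 1, non-stationary); a small
# p-value favors the stationary class.
adf_stat, pvalue, usedlag, nobs, crit, icbest = adfuller(y, regression="ct")
print(adf_stat, pvalue, crit)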

