Page 1: MIT15_450F10_lec07

The Basics MLE AR and VAR Model Selection GMM QMLE

Parameter Estimation

Leonid Kogan

MIT, Sloan

15.450, Fall 2010

© Leonid Kogan (MIT, Sloan), Parameter Estimation, 15.450, Fall 2010

Page 2: MIT15_450F10_lec07


Outline

1 The Basics

2 MLE

3 AR and VAR

4 Model Selection

5 GMM

6 QMLE


Page 4: MIT15_450F10_lec07


Statistics Review: Parameter Estimation

Sample of observations X = (x₁, ..., x_T) with joint distribution p(X, θ₀).

Estimator θ̂ is a function of the sample: θ̂(X).

Estimator is consistent if

    plim_{T→∞} θ̂ = θ₀

Estimator is unbiased if

    E[θ̂] = θ₀

An α confidence interval for the i-th coordinate of the parameter vector, θ_{0,i}, is a stochastic interval (θ̂_i^L, θ̂_i^R) such that

    Prob[ (θ̂_i^L, θ̂_i^R) covers θ_{0,i} ] = α


Page 5: MIT15_450F10_lec07


Probability Review: LLN and CLT

Law of Large Numbers (LLN): if x_t are IID random variables and E[x_t] = µ, then

    plim_{T→∞} (1/T) Σ_{t=1}^T x_t = µ

plim is the limit in probability: plim_{n→∞} x_n = y means that for any δ > 0, Prob[|x_n − y| > δ] → 0.

Central Limit Theorem (CLT): if x_t are IID random vectors with mean vector µ and var-cov matrix Ω, then

    (1/√T) Σ_{t=1}^T (x_t − µ) ⇒ N(0, Ω)

"⇒" denotes convergence in distribution: x_n ⇒ y means that the corresponding cumulative distribution functions F_{x_n}(·) and F_y(·) satisfy

    F_{x_n}(z) → F_y(z) for all z ∈ R at which F_y is continuous

Page 6: MIT15_450F10_lec07


Example

We observe a sample of IID observations x_t, t = 1, ..., T from a Normal distribution N(µ, 1).

We want to estimate the mean µ.

A commonly used estimator is the sample mean:

    µ̂ = Ê[x_t] ≡ (1/T) Σ_{t=1}^T x_t

This estimator is consistent by the LLN: plim_{T→∞} µ̂ = µ.

How do we derive consistent estimators in more complex situations?
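The consistency of the sample mean is easy to see numerically. A minimal sketch (not part of the original notes), assuming numpy is available; the true mean and the sample sizes are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 2.0  # true mean of the N(mu, 1) distribution (made-up value)

errors = []
for T in (10, 1_000, 100_000):
    x = rng.normal(loc=mu, scale=1.0, size=T)
    mu_hat = x.mean()              # mu_hat = (1/T) * sum_t x_t
    errors.append(abs(mu_hat - mu))
    print(f"T = {T:>7}: mu_hat = {mu_hat:.4f}")
```

For this draw the estimation error at T = 100,000 is on the order of 1/√T, which is the rate the CLT predicts.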


Page 7: MIT15_450F10_lec07


Approaches to Estimation

If the probability law p(X, θ₀) is fully known, we can estimate θ₀ by Maximum Likelihood (MLE). This is the preferred method: it offers the best asymptotic precision.

If the law p(X , θ0) is not fully known, but we know some features of the distribution, e.g., the first two moments, we can still estimate the parameters by the quasi-MLE method.

Alternatively, if we only know a few moments of the distribution, but not the entire pdf p(X , θ0), we can estimate parameters by the Generalized Method of Moments (GMM).

QMLE and GMM are less precise (efficient) than MLE, but they are more robust, since they do not require full knowledge of the distribution.


Page 9: MIT15_450F10_lec07


Math Review: Jensen’s Inequality

Jensen’s inequality states that if f(x) is a concave function, w_n ≥ 0, n = 1, ..., N, and Σ_{n=1}^N w_n = 1, then

    Σ_{n=1}^N w_n f(x_n) ≤ f( Σ_{n=1}^N w_n x_n )

for any x_n, n = 1, ..., N. This result extends to the continuous case:

    ∫ w(x) f(x) dx ≤ f( ∫ w(x) x dx ),  if ∫ w(x) dx = 1 and w(x) ≥ 0

Example: if x is a random variable (e.g., an asset return) and f a concave function (e.g., a utility function), then

    E[f(x)] ≤ f(E[x])  (risk aversion)
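A quick numerical spot check of the weighted form of Jensen's inequality, with f = ln as the concave function; the points and weights below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

f = np.log                            # a concave function
x = rng.uniform(0.5, 5.0, size=100)   # arbitrary positive points x_n
w = rng.uniform(size=100)
w /= w.sum()                          # weights: w_n >= 0, sum to 1

lhs = np.sum(w * f(x))   # sum_n w_n f(x_n)
rhs = f(np.sum(w * x))   # f(sum_n w_n x_n)
print(lhs <= rhs)        # Jensen: True for concave f
```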


Page 10: MIT15_450F10_lec07


Maximum Likelihood Estimator (MLE)

IID observations x_t, t = 1, ..., T with density p(x, θ₀). Maximum likelihood estimation is based on the fact that for any alternative parameter value θ̃,

    E[ln p(x, θ̃)] ≤ E[ln p(x, θ₀)],  where E[·] = ∫ (·) p(x, θ₀) dx

To see this, use Jensen's inequality and the equality ∫ p(x, θ̃) dx = 1:

    E[ ln (p(x_t, θ̃) / p(x_t, θ₀)) ] ≤ ln E[ p(x_t, θ̃) / p(x_t, θ₀) ] = ln ∫ (p(x, θ̃) / p(x, θ₀)) p(x, θ₀) dx = ln ∫ p(x, θ̃) dx = 0

Estimate parameters using the sample analog of the above inequality:

    θ̂ = arg max_θ (1/T) Σ_{t=1}^T ln p(x_t, θ) = arg max_θ (1/T) ln p(X, θ)

Page 11: MIT15_450F10_lec07


Maximum Likelihood Estimator (MLE)

Define the likelihood function

    L(θ) = p(X, θ)

The likelihood function treats the model parameters θ as variables and the observations X as fixed. We will work with the log of the likelihood, ℒ(θ) = ln L(θ), and will often drop the "log" and simply call ℒ the likelihood. For IID observations,

    (1/T) ℒ(θ) = (1/T) ln Π_{t=1}^T p(x_t, θ) = (1/T) Σ_{t=1}^T ln p(x_t, θ)

and therefore θ can be estimated by maximizing the (log-)likelihood:

    θ̂_MLE = arg max_θ ℒ(θ)


Page 12: MIT15_450F10_lec07


Example: MLE for Gaussian Distribution

IID Gaussian observations, mean µ, variance σ².

The log-likelihood for the sample x₁, ..., x_T is

    ℒ(θ) = ln Π_{t=1}^T p(x_t, θ) = Σ_{t=1}^T ln p(x_t, θ) = Σ_{t=1}^T [ −ln √(2πσ²) − (x_t − µ)² / (2σ²) ]

MLE: θ̂ = arg max_θ ℒ(θ).

Optimality conditions:

    Σ_{t=1}^T (x_t − µ̂) / σ̂² = 0,   −T/σ̂ + Σ_{t=1}^T (x_t − µ̂)² / σ̂³ = 0

These are identical to the GMM conditions derived in the GMM section below:

    Ê[x_t − µ̂] = 0,   Ê[(x_t − µ̂)² − σ̂²] = 0
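The closed-form Gaussian MLE (sample mean, and average squared deviation with a 1/T divisor) can be checked against the optimality conditions above. A sketch with made-up parameters, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(loc=1.5, scale=2.0, size=50_000)  # made-up mu and sigma
T = x.size

# Closed-form MLE: sample mean and (1/T)-variance
mu_hat = x.mean()
sigma2_hat = np.mean((x - mu_hat) ** 2)
sigma_hat = np.sqrt(sigma2_hat)

# Both first-order conditions should be (numerically) zero at the MLE
foc_mu = np.sum(x - mu_hat) / sigma2_hat
foc_sigma = -T / sigma_hat + np.sum((x - mu_hat) ** 2) / sigma_hat ** 3

print(mu_hat, sigma2_hat)  # close to the true values 1.5 and 4.0
```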


Page 13: MIT15_450F10_lec07


Example: Exponential Distribution

Suppose we have T independent observations from the exponential distribution

    p(x_t, λ) = λ exp(−λ x_t)

Likelihood function:

    ℒ(λ) = Σ_{t=1}^T (−λ x_t + ln λ)

First-order condition

    −Σ_{t=1}^T x_t + T/λ̂ = 0

implies

    λ̂ = [ (1/T) Σ_{t=1}^T x_t ]^{−1}
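A sketch of the exponential MLE with a made-up rate parameter; the block also checks that λ̂ beats nearby values of λ in log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(3)
lam_true = 2.5  # made-up rate parameter
x = rng.exponential(scale=1.0 / lam_true, size=200_000)
T = x.size

def loglik(lam):
    # L(lambda) = sum_t (-lambda * x_t + ln(lambda))
    return np.sum(-lam * x) + T * np.log(lam)

# Closed-form MLE: inverse of the sample mean
lam_hat = 1.0 / x.mean()

# lam_hat should beat nearby values of lambda
assert loglik(lam_hat) > loglik(0.9 * lam_hat)
assert loglik(lam_hat) > loglik(1.1 * lam_hat)
print(lam_hat)  # close to 2.5
```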


Page 14: MIT15_450F10_lec07


MLE for Dependent Observations

The MLE approach works even if observations are dependent, provided the dependence dies out quickly enough. Consider a time series x_t, x_{t+1}, ... and assume that the distribution of x_{t+1} depends only on L lags: x_t, ..., x_{t+1−L}. The log-likelihood conditional on the first L observations is

    ℒ(θ) = Σ_{t=L}^{T−1} ln p(x_{t+1} | x_t, ..., x_{t+1−L}; θ)

The true parameter θ₀ maximizes the conditional expectation of ln p(x_{t+1} | x_t, ..., x_{t+1−L}; θ), so maximizing the (conditional) likelihood delivers a consistent estimator when T is large and x_t is stationary:

    θ̂_MLE = arg max_θ ℒ(θ)


Page 16: MIT15_450F10_lec07


MLE for AR(p) Time Series

AR(p) (AutoRegressive) time series model with IID Gaussian errors:

    x_{t+1} = a₀ + a₁ x_t + ... + a_p x_{t+1−p} + ε_{t+1},  ε_{t+1} ∼ N(0, σ²)

Conditional on (x_t, ..., x_{t+1−p}), x_{t+1} is Gaussian with mean a₀ + a₁ x_t + ... + a_p x_{t+1−p} and variance σ². Construct the likelihood:

    ℒ(θ) = Σ_{t=p}^{T−1} [ −ln √(2πσ²) − (x_{t+1} − a₀ − a₁ x_t − ... − a_p x_{t+1−p})² / (2σ²) ]

MLE estimates of (a₀, a₁, ..., a_p) are the same as OLS:

    max ℒ(θ)  ⇔  min_a Σ_{t=p}^{T−1} (x_{t+1} − a₀ − a₁ x_t − ... − a_p x_{t+1−p})²
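The MLE-equals-OLS result can be illustrated by simulating an AR(2) and recovering its coefficients by least squares. The coefficients below are made up; numpy is assumed:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate an AR(2): x_{t+1} = a0 + a1 x_t + a2 x_{t-1} + eps (made-up, stationary coefficients)
a0, a1, a2, sigma = 0.5, 0.6, -0.2, 1.0
T = 100_000
x = np.zeros(T)
eps = rng.normal(scale=sigma, size=T)
for t in range(2, T):
    x[t] = a0 + a1 * x[t - 1] + a2 * x[t - 2] + eps[t]

# MLE of the AR coefficients = OLS: regress x_{t+1} on (1, x_t, x_{t-1})
y = x[2:]
Z = np.column_stack([np.ones(T - 2), x[1:-1], x[:-2]])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
print(coef)  # close to (0.5, 0.6, -0.2)
```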


Page 17: MIT15_450F10_lec07


MLE for VAR(p) Time Series

VAR(p) (Vector AutoRegressive) time series model with IID Gaussian errors:

    x_{t+1} = a₀ + A₁ x_t + ... + A_p x_{t+1−p} + ε_{t+1},  ε_{t+1} ∼ N(0, Σ)

where x_t and a₀ are N-dimensional vectors, the A_n are N × N matrices, and the ε_t are N-dimensional vectors of shocks.

Conditional on (x_t, ..., x_{t+1−p}), x_{t+1} is Gaussian with mean a₀ + A₁ x_t + ... + A_p x_{t+1−p} and var-cov matrix Σ.

Construct the likelihood:

    ℒ(θ) = Σ_{t=p}^{T−1} [ −(1/2) ln((2π)^N |Σ|) − (1/2) ε′_{t+1} Σ⁻¹ ε_{t+1} ]

c� Leonid Kogan ( MIT, Sloan ) Parameter Estimation 15.450, Fall 2010 17 / 40

Page 18: MIT15_450F10_lec07


MLE for VAR(p) Time Series

Parameter estimation:

    max_{a₀,A₁,...,A_p,Σ} ℒ(θ)  ⇔  min_{a₀,A₁,...,A_p,Σ} Σ_{t=p}^{T−1} [ (1/2) ln((2π)^N |Σ|) + (1/2) ε′_{t+1} Σ⁻¹ ε_{t+1} ]

Optimality conditions for a₀, A₁, ..., A_p:

    Σ_t x_{t−i} ε′_{t+1} = 0, i = 0, 1, ..., p − 1,   Σ_t ε_{t+1} = 0

where ε_{t+1} = x_{t+1} − (a₀ + A₁ x_t + ... + A_p x_{t+1−p}).

VAR coefficients can be estimated by OLS, equation by equation.

Standard errors can also be computed for each equation separately.
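A sketch of equation-by-equation OLS for a bivariate VAR(1), with made-up coefficients; each column of the coefficient matrix below is one equation's regression:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate a bivariate VAR(1): x_{t+1} = a0 + A1 x_t + eps_{t+1} (made-up, stable A1)
a0 = np.array([0.1, -0.2])
A1 = np.array([[0.5, 0.1],
               [0.0, 0.3]])
T = 200_000
x = np.zeros((T, 2))
eps = rng.normal(size=(T, 2))
for t in range(1, T):
    x[t] = a0 + A1 @ x[t - 1] + eps[t]

# Equation-by-equation OLS: regress each component of x_{t+1} on (1, x_t)
Z = np.column_stack([np.ones(T - 1), x[:-1]])   # regressors (1, x_t)
B, *_ = np.linalg.lstsq(Z, x[1:], rcond=None)   # B is 3 x 2: one column per equation
a0_hat, A1_hat = B[0], B[1:].T
print(a0_hat, A1_hat, sep="\n")
```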


Page 20: MIT15_450F10_lec07


MLE and Model Selection

In practice, we often do not know the exact model.

In some situations, MLE can be adapted to perform model selection.

Suppose we are considering several alternative models, one of which is the correct model.

If the sample is large enough, we can identify the correct model by comparing maximized likelihoods and penalizing them for the number of parameters they use.

Various forms of penalties have been proposed, defining various information criteria.


Page 21: MIT15_450F10_lec07


VAR(p) Model Selection

To build a VAR(p) model, we must decide on the order p.

Without theoretical guidance, use an information criterion.

Consider the two most popular information criteria: the Akaike (AIC) and the Bayesian (BIC).

Each criterion chooses p to maximize the log likelihood subject to a penalty for model flexibility (free parameters). Various criteria differ in the form of penalty.


Page 22: MIT15_450F10_lec07


AIC and BIC

Start by specifying the maximum possible order p̄. Make sure that p̄ grows with the sample size, but not too fast:

    lim_{T→∞} p̄ = ∞,   lim_{T→∞} p̄/T = 0

For example, one can choose p̄ = (1/4)(ln T)².

Find the optimal VAR order p̂ as

    p̂ = arg max_{0 ≤ p ≤ p̄} (2/T) ℒ(θ̂; p) − penalty(p)

where

    penalty(p) = (2/T) p N²        (AIC)
    penalty(p) = (ln T / T) p N²   (BIC)

In larger samples, BIC selects lower-order models than AIC.
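The criterion above can be sketched for a univariate AR (N = 1). The simulated AR(1) and its coefficients are made up; the log-likelihood is the concentrated Gaussian one, computed on a common sample of size T − p̄ so that different orders are comparable:

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulate an AR(1): x_{t+1} = a0 + a1 x_t + eps (true order is 1; made-up coefficients)
a0, a1 = 0.2, 0.5
T = 2_000
x = np.zeros(T)
eps = rng.normal(size=T)
for t in range(1, T):
    x[t] = a0 + a1 * x[t - 1] + eps[t]

p_max = 7          # maximum order considered; N = 1, so penalty uses p * 1**2
n = T - p_max      # common estimation sample across all candidate orders

def criterion(p, penalty):
    """(2/n) * maximized Gaussian log-likelihood of an AR(p), minus penalty(p)."""
    y = x[p_max:]
    Z = np.column_stack([np.ones(n)] + [x[p_max - i:T - i] for i in range(1, p + 1)])
    resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    s2 = np.mean(resid ** 2)
    loglik = -0.5 * n * (np.log(2 * np.pi * s2) + 1)  # sigma^2 concentrated out
    return (2 / n) * loglik - penalty(p)

aic = [criterion(p, lambda q: 2 * q / n) for p in range(p_max + 1)]
bic = [criterion(p, lambda q: np.log(n) * q / n) for p in range(p_max + 1)]
p_aic = int(np.argmax(aic))
p_bic = int(np.argmax(bic))
print("AIC picks p =", p_aic)
print("BIC picks p =", p_bic)
```

Because the BIC penalty per parameter, ln(n)/n, exceeds the AIC penalty 2/n, the BIC choice can never exceed the AIC choice, matching the remark above.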


Page 23: MIT15_450F10_lec07


Example: AR(p) Model of Real GDP Growth

Model quarterly seasonally adjusted GDP growth (annualized rates).

Want to select and estimate an AR(p) model.

[Figure: Real GDP growth, quarterly, annualized (%), plotted against year, 1945–2005. Source: U.S. Department of Commerce, Bureau of Economic Analysis, National Income and Product Accounts.]


Page 24: MIT15_450F10_lec07


Example: AR(p) Model of GDP Growth

Set p̄ = 7.

[Figure: log-likelihood minus the AIC penalty, plotted against p = 0, ..., 7.]

AIC dictates p̂ = 5. AR coefficients â₁, ..., â₅:

    0.3185, 0.1409, −0.0759, −0.0600, −0.0904


Page 25: MIT15_450F10_lec07


Example: AR(p) Model of GDP Growth

Set p̄ = 7.

[Figure: log-likelihood minus the BIC penalty, plotted against p = 0, ..., 7.]

BIC dictates p̂ = 1. AR coefficient â₁ = 0.3611.


Page 27: MIT15_450F10_lec07


IID Observations

A sample of independent and identically distributed (IID) observations drawn from the distribution family with density φ(x; θ₀):

    X = (x₁, ..., x_t, ..., x_T)

Want to estimate the N-dimensional parameter vector θ₀. Consider a vector of functions f_j(x, θ) ("moments"), dim(f) = N. Suppose we know that

    E[f₁(x_t, θ)] = ··· = E[f_N(x_t, θ)] = 0,  if θ = θ₀

    (Identification)  Σ_{j=1}^N (E[f_j(x_t, θ)])² > 0,  if θ ≠ θ₀

The GMM estimator θ̂ of the unknown parameter θ₀ is defined by

    Ê[f(x_t, θ̂)] ≡ (1/T) Σ_{t=1}^T f(x_t, θ̂) = 0


Page 28: MIT15_450F10_lec07


Example: Mean-Variance

Suppose we have a sample from a distribution with mean µ₀ and variance σ₀².

To estimate the parameter vector θ₀ = (µ₀, σ₀)′, σ₀ ≥ 0, choose the functions f_j(x, θ), j = 1, 2:

    f₁(x_t, θ) = x_t − µ
    f₂(x_t, θ) = (x_t − µ)² − σ²

Easy to see that E[f(x, θ₀)] = 0. If θ ≠ θ₀, then E[f(x, θ)] ≠ 0 (verify).

Parameter estimates:

    Ê[x_t] − µ̂ = 0  ⇒  µ̂ = Ê[x_t]
    Ê[(x_t − µ̂)² − σ̂²] = 0  ⇒  σ̂² = Ê[(x_t − µ̂)²]
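A sketch of these two moment conditions in code; the gamma distribution and its parameters are an arbitrary choice (any distribution with a mean and a variance works):

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.gamma(shape=2.0, scale=1.5, size=100_000)  # made-up distribution

# GMM estimates solve the sample moment conditions:
#   E_hat[x_t - mu]             = 0  =>  mu_hat
#   E_hat[(x_t - mu)^2 - s2]    = 0  =>  s2_hat
mu_hat = x.mean()
s2_hat = np.mean((x - mu_hat) ** 2)

# Both sample moments vanish at the estimates
f1 = np.mean(x - mu_hat)
f2 = np.mean((x - mu_hat) ** 2 - s2_hat)
print(mu_hat, s2_hat)  # gamma(2, 1.5): mean 3.0, variance 4.5
```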


Page 29: MIT15_450F10_lec07


GMM and MLE

First-order conditions for MLE can be used as moments in GMM estimation. The optimality conditions for maximizing ℒ(θ) = Σ_{t=1}^T ln p(x_t, θ) are

    Σ_{t=1}^T ∂ ln p(x_t, θ) / ∂θ = 0

If we set f = ∂ ln p(x, θ)/∂θ (the score vector), then MLE reduces to GMM with the moment vector f.


Page 30: MIT15_450F10_lec07


Example: Interest Rate Model

Interest rate model:

    r_{t+1} = a₀ + a₁ r_t + ε_{t+1},  E[ε_{t+1} | r_t] = 0,  E[ε²_{t+1} | r_t] = b₀ + b₁ r_t

Derive moment conditions for GMM.

Note that for any function g(r_t),

    E[g(r_t) ε_{t+1}] = E[ E[g(r_t) ε_{t+1} | r_t] ] = E[ g(r_t) E[ε_{t+1} | r_t] ] = 0

Using g(r_t) = 1 and g(r_t) = r_t:

    E[(1, r_t)′ (r_{t+1} − a₀ − a₁ r_t)] = 0
    E[(1, r_t)′ ((r_{t+1} − a₀ − a₁ r_t)² − b₀ − b₁ r_t)] = 0


Page 31: MIT15_450F10_lec07


Example: Interest Rate Model

GMM using the moment conditions

    E[(1, r_t)′ (r_{t+1} − a₀ − a₁ r_t)] = 0
    E[(1, r_t)′ ((r_{t+1} − a₀ − a₁ r_t)² − b₀ − b₁ r_t)] = 0

(a₀, a₁) can be estimated from the first pair of moment conditions alone. This is equivalent to OLS and ignores the information in the second-moment condition.


Page 33: MIT15_450F10_lec07


MLE and QMLE

Maximum likelihood estimates are optimal: they have the smallest asymptotic variance.

When we know the distribution function p(X , θ) precisely, MLE is the most efficient approach.

MLE is often a convenient way to figure out which moment conditions to impose.

Even if the model p(X , θ) is misspecified, MLE approach may still be valid as long as the implied moment conditions are valid.

With an incorrect model q(X , θ), MLE is a special case of GMM. GMM results apply.

The approach of using an incorrect (typically Gaussian) likelihood function for estimation is called quasi-MLE (QMLE).


Page 34: MIT15_450F10_lec07


Example: QMLE for AR(p) Time Series

AR(p) time series model with IID non-Gaussian errors:

    x_{t+1} = a₀ + a₁ x_t + ... + a_p x_{t+1−p} + ε_{t+1},  E[ε_{t+1} | x_t, ..., x_{t+1−p}] = 0

Pretend the errors are Gaussian to construct ℒ(θ):

    ℒ(θ) = Σ_{t=p}^{T−1} [ −ln √(2πσ²) − (x_{t+1} − a₀ − a₁ x_t − ... − a_p x_{t+1−p})² / (2σ²) ]

Optimality conditions:

    Σ_t x_{t−i} ε_{t+1} = 0, i = 0, ..., p − 1,   Σ_t ε_{t+1} = 0

These are valid moment conditions (verify). GMM justifies QMLE.


Page 35: MIT15_450F10_lec07


Example: Interest Rate Model

Interest rate model:

    r_{t+1} = a₀ + a₁ r_t + ε_{t+1},  E[ε_{t+1} | r_t] = 0,  E[ε²_{t+1} | r_t] = b₀ + b₁ r_t

GMM using the moment conditions

    E[(1, r_t)′ (r_{t+1} − a₀ − a₁ r_t)] = 0
    E[(1, r_t)′ ((r_{t+1} − a₀ − a₁ r_t)² − b₀ − b₁ r_t)] = 0

(a₀, a₁) can be estimated from the first pair of moment conditions alone; this is equivalent to OLS and ignores the second-moment information.


Page 36: MIT15_450F10_lec07


Example: Interest Rate Model

QMLE: treat ε_t as Gaussian, N(0, b₀ + b₁ r_{t−1}).

Construct ℒ(θ):

    ℒ(θ) = Σ_{t=1}^{T−1} [ −ln √(2π(b₀ + b₁ r_t)) − (r_{t+1} − a₀ − a₁ r_t)² / (2(b₀ + b₁ r_t)) ]

(a₀, a₁) can no longer be estimated separately from (b₀, b₁).

Optimality conditions for (a₀, a₁):

    Σ_{t=1}^{T−1} (r_{t+1} − a₀ − a₁ r_t)(1, r_t)′ / (b₀ + b₁ r_t) = 0

This is no longer OLS but GLS: residuals with high variance are down-weighted, producing more precise estimates of (a₀, a₁).
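The QMLE step can be sketched as a feasible GLS: estimate (a₀, a₁) by OLS, fit the variance equation to squared residuals, then re-estimate with weights 1/(b̂₀ + b̂₁ r_t). All parameter values below are made up; numpy is assumed:

```python
import numpy as np

rng = np.random.default_rng(8)

# Simulate: r_{t+1} = a0 + a1 r_t + eps_{t+1}, Var(eps_{t+1} | r_t) = b0 + b1 r_t
a0, a1, b0, b1 = 2.0, 0.5, 0.1, 0.05  # made-up values; keep r_t well above 0
T = 200_000
r = np.zeros(T)
r[0] = 4.0
z = rng.normal(size=T)
for t in range(T - 1):
    var_t = max(b0 + b1 * r[t], 1e-8)  # guard against a negative variance
    r[t + 1] = a0 + a1 * r[t] + np.sqrt(var_t) * z[t]

# Step 1: OLS for (a0, a1), ignoring heteroskedasticity
Z = np.column_stack([np.ones(T - 1), r[:-1]])
y = r[1:]
a_ols, *_ = np.linalg.lstsq(Z, y, rcond=None)
resid = y - Z @ a_ols

# Step 2: fit the conditional-variance equation to squared residuals
b_hat, *_ = np.linalg.lstsq(Z, resid ** 2, rcond=None)

# Step 3: GLS, down-weighting observations with high conditional variance
w = 1.0 / (b_hat[0] + b_hat[1] * r[:-1])
Zw = Z * w[:, None]
a_gls = np.linalg.solve(Z.T @ Zw, Zw.T @ y)
print("OLS:", a_ols, " GLS:", a_gls)
```

With heteroskedastic errors, the GLS step typically tightens the standard errors on (a₀, a₁) relative to plain OLS, which is the point of the slide.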


Page 37: MIT15_450F10_lec07


Example: Interest Rate Model

3-Month Treasury Bill: secondary market rate, monthly.

Scatter plot of interest rate changes vs lagged interest rate values.

Higher volatility of rate changes at higher rate levels.

[Figure: scatter plot of interest rate changes (r_{t+1} − r_t, %) against the lagged interest rate (r_t, %), for rates between 0 and 20%. Source: Federal Reserve Bank of St. Louis.]


Page 38: MIT15_450F10_lec07


Discussion

QMLE approach helps specify moments in GMM.

Do not use blindly, verify that the moment conditions are valid.


Page 39: MIT15_450F10_lec07


Key Points

Parameter estimators, consistency.

Likelihood function, maximum likelihood parameter estimation.

Identification of parameters by GMM.

QMLE. Verify the validity of QMLE by interpreting the resulting moment conditions in the GMM framework.


Page 40: MIT15_450F10_lec07


Readings

Tsay, 2005, Sections 1.2.4, 2.4.2, 8.2.4.

Cochrane, 2005, Sections 11.1, 14.1, 14.2.

Campbell, Lo, MacKinlay, 1997, Section A.2, A.4.


Page 41: MIT15_450F10_lec07

MIT OpenCourseWare
http://ocw.mit.edu

15.450 Analytics of Finance, Fall 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.