Durbin-Levinson recursive method

A recursive method for computing the prediction coefficients $\varphi_{n,j}$ is useful because
- it avoids inverting large matrices;
- when new data are acquired, one can update predictions instead of starting again from scratch;
- the procedure is a method for computing important theoretical quantities.

Idea
$$\hat X_{n+1} = P_{L(X_1,\dots,X_n)} X_{n+1} = P_{L(X_2,\dots,X_n)} X_{n+1} + a\left(X_1 - P_{L(X_2,\dots,X_n)} X_1\right)$$
Note that $X_1 - P_{L(X_2,\dots,X_n)} X_1$ is orthogonal to the previous term.

8 October 2014
Durbin-Levinson, 2

$$\hat X_{n+1} = P_{L(X_1,\dots,X_n)} X_{n+1} = P_{L(X_2,\dots,X_n)} X_{n+1} + a\left(X_1 - P_{L(X_2,\dots,X_n)} X_1\right)$$

Check the orthogonality condition to find $a$. For $i > 1$:
$$\langle \hat X_{n+1} - X_{n+1}, X_i \rangle = \langle P_{L(X_2,\dots,X_n)} X_{n+1} - X_{n+1}, X_i \rangle + a\,\langle X_1 - P_{L(X_2,\dots,X_n)} X_1, X_i \rangle = 0 + 0,$$
the last step coming from the definitions of the projections ($i = 2, \dots, n$).
Durbin-Levinson, 3

$$\hat X_{n+1} = P_{L(X_1,\dots,X_n)} X_{n+1} = P_{L(X_2,\dots,X_n)} X_{n+1} + a\left(X_1 - P_{L(X_2,\dots,X_n)} X_1\right)$$

Check the orthogonality condition with $i = 1$:
$$0 = \langle \hat X_{n+1} - X_{n+1},\; X_1 - P_{L(X_2,\dots,X_n)} X_1 \rangle$$
$$= \langle P_{L(X_2,\dots,X_n)} X_{n+1} - X_{n+1},\; X_1 - P_{L(X_2,\dots,X_n)} X_1 \rangle + a\,\|X_1 - P_{L(X_2,\dots,X_n)} X_1\|^2$$
$$= -\langle X_{n+1},\; X_1 - P_{L(X_2,\dots,X_n)} X_1 \rangle + a\,\|X_1 - P_{L(X_2,\dots,X_n)} X_1\|^2$$
$$\implies a = \frac{\langle X_{n+1},\; X_1 - P_{L(X_2,\dots,X_n)} X_1 \rangle}{\|X_1 - P_{L(X_2,\dots,X_n)} X_1\|^2}$$
Durbin-Levinson. 4

We tried
$$\hat X_{n+1} = P_{L(X_1,\dots,X_n)} X_{n+1} = P_{L(X_2,\dots,X_n)} X_{n+1} + a\left(X_1 - P_{L(X_2,\dots,X_n)} X_1\right)$$
and found
$$a = \frac{\langle X_{n+1},\; X_1 - P_{L(X_2,\dots,X_n)} X_1 \rangle}{\|X_1 - P_{L(X_2,\dots,X_n)} X_1\|^2} = \langle X_{n+1},\; X_1 - P_{L(X_2,\dots,X_n)} X_1 \rangle\, v_{n-1}^{-1}$$
with
$$v_{n-1} = E(|\hat X_n - X_n|^2) = \|X_n - P_{L(X_1,\dots,X_{n-1})} X_n\|^2 = \|X_1 - P_{L(X_2,\dots,X_n)} X_1\|^2.$$

We write $\hat X_{n+1} = \varphi_{n,1} X_n + \dots + \varphi_{n,n} X_1 = \sum_{j=1}^{n} \varphi_{n,j} X_{n+1-j}$,
so that $P_{L(X_2,\dots,X_n)} X_{n+1} = \sum_{j=1}^{n-1} \varphi_{n-1,j} X_{n+1-j}$,
and substituting we get a recursion.
Durbin-Levinson algorithm. 5

$$\hat X_{n+1} = \sum_{j=1}^{n} \varphi_{n,j} X_{n+1-j} = P_{L(X_2,\dots,X_n)} X_{n+1} + a\left(X_1 - P_{L(X_2,\dots,X_n)} X_1\right)$$

Hence
$$\varphi_{n,n} = a = \langle X_{n+1},\; X_1 - P_{L(X_2,\dots,X_n)} X_1 \rangle\, v_{n-1}^{-1} = \left[\gamma(n) - \sum_{j=1}^{n-1} \varphi_{n-1,j}\,\gamma(n-j)\right] v_{n-1}^{-1}.$$
Durbin-Levinson algorithm. 6

Then from
$$\sum_{j=1}^{n} \varphi_{n,j} X_{n+1-j} = \sum_{j=1}^{n-1} \varphi_{n-1,j} X_{n+1-j} + a\left(X_1 - \sum_{j=1}^{n-1} \varphi_{n-1,j} X_{j+1}\right)$$
$$= \sum_{j=1}^{n-1} \varphi_{n-1,j} X_{n+1-j} + a\left(X_1 - \sum_{k=1}^{n-1} \varphi_{n-1,n-k} X_{n+1-k}\right)$$
one sees
$$\varphi_{n,j} = \varphi_{n-1,j} - a\,\varphi_{n-1,n-j} = \varphi_{n-1,j} - \varphi_{n,n}\,\varphi_{n-1,n-j}, \qquad j = 1,\dots,n-1.$$
We also need a recursive procedure for $v_n$.
Durbin-Levinson algorithm. 7

$$v_n = E(|\hat X_{n+1} - X_{n+1}|^2) = \gamma(0) - \sum_{j=1}^{n} \varphi_{n,j}\,\gamma(j)$$
$$= \gamma(0) - \varphi_{n,n}\,\gamma(n) - \sum_{j=1}^{n-1} \left(\varphi_{n-1,j} - \varphi_{n,n}\,\varphi_{n-1,n-j}\right)\gamma(j)$$
$$= \gamma(0) - \sum_{j=1}^{n-1} \varphi_{n-1,j}\,\gamma(j) - \varphi_{n,n}\left(\gamma(n) - \sum_{j=1}^{n-1} \varphi_{n-1,j}\,\gamma(n-j)\right)$$
$$= v_{n-1} - \varphi_{n,n}\,\varphi_{n,n}\,v_{n-1} = v_{n-1}\left(1 - \varphi_{n,n}^2\right).$$

The term in parentheses equals $\varphi_{n,n} v_{n-1}$ by the definition of $\varphi_{n,n}$.
The final formula $v_n = \left(1 - \varphi_{n,n}^2\right) v_{n-1}$ shows that $\varphi_{n,n}$ determines the decrease of the prediction error as $n$ increases.
Durbin-Levinson algorithm. Summary

$$v_0 = E(|X_1 - \hat X_1|^2) = E(|X_1|^2) = \gamma(0) \qquad \text{(here } \hat X_1 = 0 \text{, the process having mean zero)}$$
$$\varphi_{1,1} = \frac{\gamma(1)}{v_0} = \rho(1)$$
$$v_1 = \left(1 - \varphi_{1,1}^2\right) v_0 = \gamma(0)\left(1 - \rho(1)^2\right)$$
$$\vdots$$
$$\varphi_{n,n} = \left[\gamma(n) - \sum_{j=1}^{n-1} \varphi_{n-1,j}\,\gamma(n-j)\right] v_{n-1}^{-1}$$
$$\varphi_{n,j} = \varphi_{n-1,j} - \varphi_{n,n}\,\varphi_{n-1,n-j}, \qquad j = 1,\dots,n-1$$
$$v_n = \left(1 - \varphi_{n,n}^2\right) v_{n-1}$$
$$\vdots$$

One could divide everything by $\gamma(0)$ and work with the ACF instead of the ACVF.
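The summary translates directly into code. A minimal sketch in Python (the function name `durbin_levinson` is illustrative, not a library function; the input is the ACVF $\gamma(0),\dots,\gamma(N)$):

```python
import numpy as np

def durbin_levinson(gamma):
    """Durbin-Levinson recursion on autocovariances gamma(0), ..., gamma(N).

    Returns phi, v where phi[n] = (phi_{n,1}, ..., phi_{n,n}) are the
    one-step prediction coefficients and v[n] is the prediction error v_n.
    """
    gamma = np.asarray(gamma, dtype=float)
    phi = [np.zeros(0)]            # no coefficients at step 0
    v = [gamma[0]]                 # v_0 = gamma(0)
    for n in range(1, len(gamma)):
        prev = phi[-1]
        # phi_{n,n} = [gamma(n) - sum_{j<n} phi_{n-1,j} gamma(n-j)] / v_{n-1}
        a = (gamma[n] - prev @ gamma[n - 1:0:-1]) / v[-1]
        # phi_{n,j} = phi_{n-1,j} - phi_{n,n} phi_{n-1,n-j}, j = 1, ..., n-1
        phi.append(np.concatenate([prev - a * prev[::-1], [a]]))
        # v_n = (1 - phi_{n,n}^2) v_{n-1}
        v.append(v[-1] * (1.0 - a * a))
    return phi, v

# Example input: AR(1) with phi = 0.5, sigma^2 = 1, so gamma(h) = 0.5^h / 0.75
gamma = [0.5 ** h / 0.75 for h in range(6)]
phi, v = durbin_levinson(gamma)
```

For this AR(1) input, `phi[n]` should be close to $(0.5, 0, \dots, 0)$ and `v[1]` close to $\sigma^2 = 1$, matching the AR(1) example below.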
Durbin-Levinson algorithm for AR(1)

$X_t$ stationary with $X_t = \phi X_{t-1} + Z_t$, $Z_t \sim WN(0, \sigma^2)$ and $E(X_s Z_t) = 0$ if $s < t$
$$\implies \gamma(h) = \frac{\sigma^2 \phi^{|h|}}{1 - \phi^2}.$$

$$v_0 = \frac{\sigma^2}{1 - \phi^2}, \qquad \varphi_{1,1} = \phi, \qquad v_1 = \sigma^2,$$
$$\varphi_{2,2} = \left(\frac{\sigma^2 \phi^2}{1 - \phi^2} - \phi\,\frac{\sigma^2 \phi}{1 - \phi^2}\right) v_1^{-1} = 0, \qquad \varphi_{2,1} = \varphi_{1,1}, \qquad v_2 = v_1,$$
$$\varphi_{n,1} = \phi, \qquad \varphi_{n,j} = 0 \text{ for } j > 1, \qquad v_n = v_1 = \sigma^2.$$
Durbin-Levinson algorithm for MA(1)

$X_t = Z_t - \vartheta Z_{t-1}$, $Z_t \sim WN(0, \sigma^2)$, $\gamma(0) = \sigma^2(1 + \vartheta^2)$, $\gamma(1) = -\sigma^2 \vartheta$.

$$v_0 = \sigma^2(1 + \vartheta^2) \qquad \varphi_{1,1} = -\frac{\vartheta}{1 + \vartheta^2}$$
$$v_1 = \frac{\sigma^2(1 + \vartheta^2 + \vartheta^4)}{1 + \vartheta^2} \qquad \varphi_{2,2} = -\frac{\vartheta^2}{1 + \vartheta^2 + \vartheta^4} \qquad \dots$$
$$v_2 = \frac{\sigma^2(1 + \vartheta^2 + \vartheta^4 + \vartheta^6)}{1 + \vartheta^2 + \vartheta^4} \qquad \dots$$

Remarks: the computations are long and tedious. $v_n$ converges (slowly) towards $\sigma^2$ (the white-noise variance) if $|\vartheta| < 1$.
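The convergence of $v_n$ to $\sigma^2$ can be checked numerically by running the Durbin-Levinson recursion on the MA(1) autocovariances (a sketch; $\vartheta = 0.5$ and $\sigma^2 = 1$ are arbitrary illustrative values):

```python
theta, sigma2 = 0.5, 1.0
N = 50
# MA(1) autocovariances: gamma(0), gamma(1), and 0 at all larger lags
gamma = [sigma2 * (1 + theta ** 2), -sigma2 * theta] + [0.0] * (N - 1)

phi, v = [], gamma[0]                      # v_0 = gamma(0)
for n in range(1, N + 1):
    # phi_{n,n} = [gamma(n) - sum_{j<n} phi_{n-1,j} gamma(n-j)] / v_{n-1}
    a = (gamma[n] - sum(phi[j] * gamma[n - 1 - j] for j in range(n - 1))) / v
    phi = [phi[j] - a * phi[n - 2 - j] for j in range(n - 1)] + [a]
    v *= 1 - a * a                         # v_n = (1 - phi_{n,n}^2) v_{n-1}
print(v)  # approaches sigma2 = 1 as n grows
```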
Durbin-Levinson for sinusoidal wave

$X_t = B \cos(\omega t) + C \sin(\omega t)$, with $\omega \in \mathbb{R}$,
$E(B) = E(C) = E(BC) = 0$, $V(B) = V(C) = \sigma^2$.
Then $\gamma(h) = \sigma^2 \cos(\omega h)$.

$$v_0 = \sigma^2 \qquad \varphi_{1,1} = \cos(\omega)$$
$$v_1 = \sigma^2(1 - \cos^2(\omega)) = \sigma^2 \sin^2(\omega) \qquad \varphi_{2,2} = \frac{\cos(2\omega) - \cos^2(\omega)}{\sin^2(\omega)} = -1$$
$$v_2 = 0$$
$$\implies X_{n+1} = P_{L(X_n, X_{n-1})} X_{n+1}.$$
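Two steps of the recursion confirm numerically that two observations predict the next value exactly (a sketch; $\sigma^2 = 2$ and $\omega = 0.7$ are arbitrary illustrative values):

```python
import math

sigma2, w = 2.0, 0.7
gamma = lambda h: sigma2 * math.cos(w * h)   # ACVF of the sinusoidal wave

v0 = gamma(0)
phi11 = gamma(1) / v0                        # = cos(w)
v1 = v0 * (1 - phi11 ** 2)                   # = sigma2 * sin(w)^2
phi22 = (gamma(2) - phi11 * gamma(1)) / v1   # = -1
v2 = v1 * (1 - phi22 ** 2)                   # = 0: exact two-step prediction
```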
Partial auto-correlation

For a stationary process $\{X_t\}$, the partial auto-correlation $\alpha(h)$ represents the correlation between $X_t$ and $X_{t+h}$, after removing the effect of the intermediate values.

Definition: $\alpha(1) = \rho(X_t, X_{t+1}) = \rho(1)$ and, for $h > 1$,
$$\alpha(h) = \rho\left(X_t - P_{L(X_{t+1},\dots,X_{t+h-1})} X_t,\; X_{t+h} - P_{L(X_{t+1},\dots,X_{t+h-1})} X_{t+h}\right).$$

$$\alpha(h) = \frac{E\left((X_t - P_{L(X_{t+1},\dots,X_{t+h-1})} X_t)(X_{t+h} - P_{L(X_{t+1},\dots,X_{t+h-1})} X_{t+h})\right)}{V\left(X_t - P_{L(X_{t+1},\dots,X_{t+h-1})} X_t\right)}$$
$$= \frac{\langle X_1 - P_{L(X_2,\dots,X_h)} X_1,\; X_{h+1} - P_{L(X_2,\dots,X_h)} X_{h+1} \rangle}{\|X_1 - P_{L(X_2,\dots,X_h)} X_1\|^2}$$
$$= \frac{\langle X_1,\; X_{h+1} - P_{L(X_2,\dots,X_h)} X_{h+1} \rangle}{\|X_1 - P_{L(X_2,\dots,X_h)} X_1\|^2} = \varphi_{h,h}.$$

The Durbin-Levinson algorithm is a method to compute $\alpha(\cdot)$.
Recall in fact the Durbin-Levinson algorithm. 5

$$\hat X_{n+1} = \sum_{j=1}^{n} \varphi_{n,j} X_{n+1-j} = P_{L(X_2,\dots,X_n)} X_{n+1} + a\left(X_1 - P_{L(X_2,\dots,X_n)} X_1\right)$$

Hence
$$\varphi_{n,n} = a = \langle X_{n+1},\; X_1 - P_{L(X_2,\dots,X_n)} X_1 \rangle\, v_{n-1}^{-1} = \left[\gamma(n) - \sum_{j=1}^{n-1} \varphi_{n-1,j}\,\gamma(n-j)\right] v_{n-1}^{-1}.$$
Examples of PACF

- $\{X_t\}$ AR(1) $\implies \alpha(1) = \phi$, $\alpha(h) = 0$ for $h > 1$ (seen before).
- $\{X_t\}$ AR(p), i.e. a stationary process such that
  $$X_t = \sum_{k=1}^{p} \phi_k X_{t-k} + Z_t, \qquad \{Z_t\} \sim WN(0, \sigma^2).$$
  If $t \ge p$, $P_{L(X_1,\dots,X_t)} X_{t+1} = \sum_{k=1}^{p} \phi_k X_{t+1-k}$ (check).
  Then $\varphi_{p,p} = \alpha(p) = \phi_p$ and $\varphi_{h,h} = 0$ if $h > p$, i.e. $\alpha(h) = 0$ for $h > p$.
- $\{X_t\}$ MA(1) $\implies \alpha(h) = -\vartheta^h / (1 + \vartheta^2 + \dots + \vartheta^{2h})$ (long computation).

The PACF of an AR process has finite support, while the PACF of an MA process is always non-zero. This is the opposite of the behaviour of the ACF.

Sample PACF. Apply the Durbin-Levinson algorithm to $\hat\gamma(\cdot)$.
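Since $\alpha(h) = \varphi_{h,h}$, the MA(1) closed form can be checked against the Durbin-Levinson recursion (a sketch; $\vartheta = 0.6$, $\sigma^2 = 1$ are arbitrary illustrative values):

```python
theta, H = 0.6, 8
gamma = [1 + theta ** 2, -theta] + [0.0] * (H - 1)   # MA(1) ACVF, sigma^2 = 1

alpha = []                                # alpha(h) = phi_{h,h}
phi, v = [], gamma[0]
for n in range(1, H + 1):
    a = (gamma[n] - sum(phi[j] * gamma[n - 1 - j] for j in range(n - 1))) / v
    phi = [phi[j] - a * phi[n - 2 - j] for j in range(n - 1)] + [a]
    v *= 1 - a * a
    alpha.append(a)

# closed form: alpha(h) = -theta^h / (1 + theta^2 + ... + theta^(2h))
closed = [-theta ** h / sum(theta ** (2 * k) for k in range(h + 1))
          for h in range(1, H + 1)]
# alpha[h-1] and closed[h-1] agree for h = 1, ..., H
```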
Sample ACF and PACF

[Figure: sample ACF (lags 0-15) and sample PACF (lags 1-15) of the overshort data.]
Sample ACF of Huron: AR(1) fit

[Figure: ACF of detrended Huron data, lags 0-15.]
Sample ACF of Huron: AR(1) fit

[Figure: ACF of detrended Huron data, lags 0-15, with the theoretical ACF of an AR(1) with $\phi = 0.79$ added.]
Sample ACF of Huron: AR(1) fit

[Figure: ACF of detrended Huron data, lags 0-15, with confidence intervals added, assuming $\phi = 0.79$ (different from the book).]
Sample ACF and PACF of Huron data

[Figure: sample ACF (lags 0-15) and sample PACF (lags 1-15) of the Huron data.]

The PACF suggests the use of an AR(2) model.
The innovations algorithm. Basis

Another recursive algorithm (the 'innovations algorithm') works better in some cases. It will be important in the estimation of ARMA processes.

Let $\hat X_{n+1} = P_{L(X_1,\dots,X_n)} X_{n+1} \in L(X_1,\dots,X_n)$. We wish to write
$$\hat X_{n+1} = \sum_{j=1}^{n} \vartheta_{n,j} \left(X_{n+1-j} - \hat X_{n+1-j}\right).$$

$\{X_{n+1-j} - \hat X_{n+1-j}\}_{j=1,\dots,n}$ is an orthogonal basis of $L(X_1,\dots,X_n)$. In fact $X_{k+1} - \hat X_{k+1}$ is by definition orthogonal to $L(X_1,\dots,X_k)$, hence to $X_j - \hat X_j$ for all $j = 1,\dots,k$.

($X_{k+1} - \hat X_{k+1}$ is named the innovation, as it could not be predicted before.)
The innovations algorithm. Steps

The orthogonality condition reads: for $j = 1,\dots,n$
$$\langle X_{n+1},\; X_{n+1-j} - \hat X_{n+1-j} \rangle = \langle \hat X_{n+1},\; X_{n+1-j} - \hat X_{n+1-j} \rangle = \vartheta_{n,j} \|X_{n+1-j} - \hat X_{n+1-j}\|^2 = \vartheta_{n,j}\, v_{n-j}. \quad (1)$$

Take $j = n$. Then
$$\vartheta_{n,n}\, v_0 = \langle X_{n+1},\; X_1 - \hat X_1 \rangle = \langle X_{n+1}, X_1 \rangle = \gamma(n).$$

For $j < n$,
$$\vartheta_{n,j}\, v_{n-j} = \langle X_{n+1},\; X_{n+1-j} - \hat X_{n+1-j} \rangle = \gamma(j) - \sum_{k=1}^{n-j} \vartheta_{n-j,k} \langle X_{n+1},\; X_{n+1-j-k} - \hat X_{n+1-j-k} \rangle.$$

Now insert (1) in the rightmost term.
The innovations algorithm. Steps (cont.)

$$\langle X_{n+1},\; X_{n+1-j} - \hat X_{n+1-j} \rangle = \vartheta_{n,j}\, v_{n-j}. \quad (1)$$

From (1),
$$\vartheta_{n,j}\, v_{n-j} = \gamma(j) - \sum_{k=1}^{n-j} \vartheta_{n-j,k} \langle X_{n+1},\; X_{n+1-j-k} - \hat X_{n+1-j-k} \rangle = \gamma(j) - \sum_{k=1}^{n-j} \vartheta_{n-j,k}\,\vartheta_{n,j+k}\,v_{n-j-k}.$$

Hence, in order to compute $\vartheta_{n,j}$, we need $\vartheta_{n-j,k}$ (as $j \ge 1$, this value has already been obtained) and $\vartheta_{n,j+k}$, i.e. $\vartheta_{n,l}$ with $l > j$. At step $n$, one can then compute $\vartheta_{n,n}$ (first formula), then $\vartheta_{n,n-1}$, down to $\vartheta_{n,1}$.

We still need a recursive formula for $v_n$.
The innovations algorithm. Summary

$$v_n = \|X_{n+1} - \hat X_{n+1}\|^2 = \|X_{n+1}\|^2 + \|\hat X_{n+1}\|^2 - 2\langle X_{n+1}, \hat X_{n+1} \rangle$$
$$= \|X_{n+1}\|^2 + \|\hat X_{n+1}\|^2 - 2\langle X_{n+1} - \hat X_{n+1}, \hat X_{n+1} \rangle - 2\langle \hat X_{n+1}, \hat X_{n+1} \rangle$$
$$= \|X_{n+1}\|^2 - \|\hat X_{n+1}\|^2$$
as $X_{n+1} - \hat X_{n+1}$ is orthogonal to $L(X_1,\dots,X_n)$, hence to $\hat X_{n+1}$.

$$\|X_{n+1}\|^2 = \gamma(0), \qquad \|\hat X_{n+1}\|^2 = \sum_{j=1}^{n} \vartheta_{n,j}^2\, v_{n-j}.$$

The algorithm starts with $v_0 = \gamma(0)$. Then, for each $n$,
$$\vartheta_{n,n} = \gamma(n)/v_0,$$
$$\vartheta_{n,j} = \left[\gamma(j) - \sum_{k=1}^{n-j} \vartheta_{n-j,k}\,\vartheta_{n,j+k}\,v_{n-j-k}\right] / v_{n-j}, \qquad j = n-1, \dots, 1,$$
$$v_n = \gamma(0) - \sum_{j=1}^{n} \vartheta_{n,j}^2\, v_{n-j}.$$
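The summary can be implemented directly; note that the inner loop runs $j$ from $n$ down to $1$ so that the $\vartheta_{n,l}$ with $l > j$ are already available. A minimal Python sketch (the function name `innovations` is illustrative):

```python
import numpy as np

def innovations(gamma):
    """Innovations algorithm on autocovariances gamma(0), ..., gamma(N).

    Returns theta, v where theta[n] = (theta_{n,1}, ..., theta_{n,n})
    and v[n] is the prediction error v_n.
    """
    gamma = np.asarray(gamma, dtype=float)
    theta = [np.zeros(0)]
    v = [gamma[0]]                          # v_0 = gamma(0)
    for n in range(1, len(gamma)):
        th = np.zeros(n)                    # th[j-1] = theta_{n,j}
        for j in range(n, 0, -1):           # theta_{n,n} first, down to theta_{n,1}
            s = sum(theta[n - j][k - 1] * th[j + k - 1] * v[n - j - k]
                    for k in range(1, n - j + 1))
            th[j - 1] = (gamma[j] - s) / v[n - j]
        theta.append(th)
        # v_n = gamma(0) - sum_j theta_{n,j}^2 v_{n-j}
        v.append(gamma[0] - sum(th[j - 1] ** 2 * v[n - j] for j in range(1, n + 1)))
    return theta, v

# Example input: MA(1) with theta = 0.4, sigma^2 = 1
g = [1 + 0.4 ** 2, -0.4] + [0.0] * 18
th, v = innovations(g)
```

For this MA(1) input, only $\vartheta_{n,1}$ is non-zero and $v_n$ decreases towards $\sigma^2$, as in the MA(1) example below.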
Innovations algorithm applied to MA(1)

It is easy to see that $\vartheta_{n,j} = 0$ for $n > 1$ and $j > 1$. In fact
$$\vartheta_{n,j} = \left[\gamma(j) - \sum_{k=1}^{n-j} \vartheta_{n-j,k}\,\vartheta_{n,j+k}\,v_{n-j-k}\right] / v_{n-j}.$$

Then
$$\vartheta_{n,1} = \frac{\gamma(1)}{v_{n-1}} \qquad \text{and} \qquad v_n = \gamma(0) - \vartheta_{n,1}^2\, v_{n-1} = \gamma(0) - \frac{\gamma^2(1)}{v_{n-1}}.$$
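The scalar recursion for $v_n$ can be iterated directly; for $|\vartheta| < 1$ it converges to $\sigma^2$ (a sketch; $\vartheta = 0.5$, $\sigma^2 = 1$ are arbitrary illustrative values):

```python
theta, sigma2 = 0.5, 1.0
g0, g1 = sigma2 * (1 + theta ** 2), -sigma2 * theta   # gamma(0), gamma(1)

v = g0                        # v_0 = gamma(0)
for _ in range(40):
    v = g0 - g1 ** 2 / v      # v_n = gamma(0) - gamma(1)^2 / v_{n-1}
# v is now very close to sigma2
```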
Projection on infinite past

We can consider projections based on knowledge of all the past:
$$\mathcal{M}_t = \overline{\mathrm{sp}}(X_s)_{s \le t},$$
i.e. the smallest closed subspace containing all the finite linear combinations of $X_s$, $s \le t$, i.e. the limits (in $L^2$) of finite linear combinations of the $X_s$.

An example. MA(1): $X_t = Z_t - \vartheta Z_{t-1}$. Show that, if $|\vartheta| < 1$,
$$-\sum_{j=1}^{\infty} \vartheta^j X_{t+1-j} = P_{\mathcal{M}_t} X_{t+1}.$$
1. the series converges;
2. $X_{t+1} + \sum_{j=1}^{\infty} \vartheta^j X_{t+1-j}$ is orthogonal to $X_{t-i}$, $i \ge 0$.

What could $P_{\mathcal{M}_t} X_{t+1}$ be if $|\vartheta| > 1$?
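The identity can be checked on a simulated path: $X_{t+1}$ minus the (truncated) projection should equal the innovation $Z_{t+1}$ up to an error of order $\vartheta^J$. A sketch, with $\vartheta = 0.5$ and the truncation level $J = 60$ chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n = 0.5, 500
z = rng.standard_normal(n)
x = z.copy()
x[1:] -= theta * z[:-1]          # X_t = Z_t - theta * Z_{t-1}

t, J = n - 2, 60                 # predict x[t+1] from the (truncated) past
pred = -sum(theta ** j * x[t + 1 - j] for j in range(1, J + 1))
err = x[t + 1] - pred            # should be (almost exactly) the innovation z[t+1]
```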