Proof of $I_n(\omega_k)=\sum_{|h|<n}\hat\gamma(h)e^{-ih\omega_k}$.

First of all, if $k \neq 0$,
\[
\sum_{t,s=1}^{n} e^{-i(s-t)\omega_k}
= \sum_{t=1}^{n} e^{it\omega_k} \sum_{s=1}^{n} e^{-is\omega_k}
= \sum_{t=1}^{n} e^{i(t-1)\omega_k}\,\frac{1-e^{-in\omega_k}}{1-e^{-i\omega_k}}
= \sum_{t=1}^{n} e^{i(t-1)\omega_k}\,\frac{1-e^{-i2\pi k}}{1-e^{-i\omega_k}} = 0.
\]
Then
\[
I_n(\omega_k) = \frac{1}{n}\sum_{t,s=1}^{n} x_s x_t e^{-i(s-t)\omega_k}
= \frac{1}{n}\sum_{t,s=1}^{n} (x_s-\bar x)(x_t-\bar x)e^{-i(s-t)\omega_k}
\]
(the two double sums agree because of the identity above). Substituting $h = s-t$,
\[
= \frac{1}{n}\sum_{t=1}^{n} (x_t-\bar x)\sum_{h=1-t}^{n-t}(x_{t+h}-\bar x)e^{-ih\omega_k},
\]
and, exchanging the order of the two sums,
\[
= \sum_{h=-(n-1)}^{-1} e^{-ih\omega_k}\,\frac{1}{n}\sum_{t=1-h}^{n}(x_t-\bar x)(x_{t+h}-\bar x)
+ \sum_{h=0}^{n-1} e^{-ih\omega_k}\,\frac{1}{n}\sum_{t=1}^{n-h}(x_t-\bar x)(x_{t+h}-\bar x)
\]
\[
= \sum_{h=-(n-1)}^{-1} e^{-ih\omega_k}\,\hat\gamma(h) + \sum_{h=0}^{n-1} e^{-ih\omega_k}\,\hat\gamma(h)
= \sum_{|h|<n} \hat\gamma(h)\,e^{-ih\omega_k}.
\]

21 October 2013
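The identity just proved can be checked numerically. The sketch below (function names and normalizations are my own, not from the slides) computes the periodogram of a short series at a nonzero Fourier frequency and compares it with the sum of sample autocovariances:

```python
import numpy as np

def periodogram(x, omega):
    """I_n(omega) = (1/n) |sum_t x_t e^{-i t omega}|^2."""
    n = len(x)
    t = np.arange(1, n + 1)
    return abs(np.sum(x * np.exp(-1j * t * omega))) ** 2 / n

def sample_acvf(x, h):
    """gamma_hat(h) = (1/n) sum_t (x_t - xbar)(x_{t+|h|} - xbar)."""
    n, h = len(x), abs(h)
    xc = x - x.mean()
    return np.sum(xc[: n - h] * xc[h:]) / n

rng = np.random.default_rng(0)
x = rng.normal(size=64)
n = len(x)
k = 5
omega_k = 2 * np.pi * k / n          # a nonzero Fourier frequency

lhs = periodogram(x, omega_k)
rhs = sum(sample_acvf(x, h) * np.exp(-1j * h * omega_k)
          for h in range(-(n - 1), n)).real

assert abs(lhs - rhs) < 1e-10        # the identity holds exactly at omega_k != 0
```

At $\omega_k \neq 0$ the agreement is exact (up to rounding), precisely because the mean-correction terms vanish, as the first computation in the proof shows.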
Estimation of the spectral density

As
\[
f(\lambda) = \frac{1}{2\pi}\sum_{h=-\infty}^{\infty}\gamma(h)e^{-ih\lambda}
\quad\text{while}\quad
I_n(\omega_k) = \sum_{|h|<n}\hat\gamma(h)e^{-ih\omega_k},
\]
$I_n(\omega_k)$ appears to be a natural estimator of $2\pi f(\omega_k)$. Extend $I_n$ to all frequencies:

Definition
For $0 < \lambda \leq \pi$, $I_n(\lambda) = I_n(g(n,\lambda))$, where $g(n,\lambda)$ is the multiple of $2\pi/n$ closest to $\lambda$ (i.e. $g(n,\lambda) = 2\pi k/n$ if $(k-\tfrac12) < \lambda n/(2\pi) \leq (k+\tfrac12)$).

Proposition
Let $\{X_t\}$ be a stationary process with mean $\mu$ and absolutely summable ACVF $\gamma(\cdot)$. Then
1. $E(I_n(0)) - n\mu^2 \to 2\pi f(0)$ as $n\to\infty$;
2. $E(I_n(\lambda)) \to 2\pi f(\lambda)$ as $n\to\infty$ if $\lambda \neq 0$.

Proof by the dominated convergence theorem.
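The rounding rule defining $g(n,\lambda)$ can be written down directly. A minimal sketch (names are hypothetical, not from the slides) of $g$ and of the extended periodogram built on it:

```python
import math
import numpy as np

def g(n, lam):
    """Nearest Fourier frequency: g(n, lam) = 2*pi*k/n
    with (k - 1/2) < lam*n/(2*pi) <= (k + 1/2)."""
    y = lam * n / (2 * math.pi)
    k = math.ceil(y - 0.5)   # half-integers round down, matching the inequalities
    return 2 * math.pi * k / n

def I_n(x, lam):
    """Extended periodogram: evaluate at the Fourier frequency closest to lam."""
    n = len(x)
    omega = g(n, lam)
    t = np.arange(1, n + 1)
    return abs(np.sum(x * np.exp(-1j * t * omega))) ** 2 / n

# g maps a Fourier frequency to itself, and nearby values to it
assert abs(g(100, 2 * math.pi * 7 / 100) - 2 * math.pi * 7 / 100) < 1e-12

x = np.random.default_rng(0).normal(size=100)
val = I_n(x, 0.3)            # evaluated at g(100, 0.3) = 2*pi*5/100
```

Using `math.ceil(y - 0.5)` rather than ordinary rounding reproduces the strict/weak inequality pair in the definition when $\lambda n/(2\pi)$ falls exactly on a half-integer.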
Asymptotic distribution of $I_n$

Previous result: the $I_n(\lambda)$ are asymptotically unbiased estimators of $2\pi f(\lambda)$. Other properties?

Theorem
Let $\{X_t\}$ be a stationary process such that $X_t = \sum_{j=-\infty}^{+\infty}\psi_j Z_{t-j}$, where $\{Z_t\}\sim\mathrm{IID}(0,\sigma^2)$ and $\sum_{j=-\infty}^{+\infty}|\psi_j| < \infty$. Assume $f_X(\lambda) > 0$ for $\lambda \in [-\pi,\pi]$, and let $I_n(\lambda)$ be the periodogram of $(X_1,\ldots,X_n)$.
Then, given $0 < \lambda_1 < \lambda_2 < \cdots < \lambda_m < \pi$, $(I_n(\lambda_1),\ldots,I_n(\lambda_m))$ converges in distribution to a vector of independent exponentially distributed random variables with means $(2\pi f_X(\lambda_1),\ldots,2\pi f_X(\lambda_m))$.
Proof in a special case: $\{X_t\} \sim \mathrm{IID\ N}(0,\sigma^2)$

First, note that, for $n$ large enough, $I_n(\lambda_j) = \alpha_n^2(\omega_k) + \beta_n^2(\omega_k)$ for some $k > 0$, where $\alpha_n(\omega_k) = \langle X_n, C_k\rangle$ and $\beta_n(\omega_k) = \langle X_n, S_k\rangle$, with
\[
X_n = \begin{pmatrix} X_1 \\ \vdots \\ X_n \end{pmatrix},\qquad
C_k = \sqrt{\frac{2}{n}}\begin{pmatrix} \cos(\omega_k) \\ \vdots \\ \cos(n\omega_k) \end{pmatrix},\qquad
S_k = \sqrt{\frac{2}{n}}\begin{pmatrix} \sin(\omega_k) \\ \vdots \\ \sin(n\omega_k) \end{pmatrix}.
\]
$C_k$ and $S_k$ are orthonormal vectors (use trigonometric identities). Further, if $v, w \in \mathbb{R}^n$ with $\langle v,w\rangle = 0$ and $\{X_t\} \sim \mathrm{WN}(0,\sigma^2)$, then $V = \sum_{t=1}^n v_t X_t$ and $W = \sum_{t=1}^n w_t X_t$ are uncorrelated. In fact
\[
E(VW) = \sum_{t,s=1}^{n} v_t w_s E(X_t X_s) = \sum_{t=1}^{n} v_t w_t E(X_t^2) = \sigma^2\sum_{t=1}^{n} v_t w_t = 0.
\]
Hence $\alpha_n(\omega_k)$ and $\beta_n(\omega_k)$ are uncorrelated; being jointly normal when the $X_t$ are normal, they are then also independent. $I_n(\lambda_j)$ is then distributed as a multiple of a $\chi^2(2)$, i.e. exponentially.
Analogously, $I_n(\lambda_j)$ and $I_n(\lambda_l)$ are independent for $n$ large enough.
Idea of how to extend the proof

If $\{X_t\} \sim \mathrm{IID}(0,\sigma^2)$ (not necessarily normal), the central limit theorem shows that $\alpha_n = \sum_{t=1}^{n}\cos(\omega_k t)X_t$ and $\beta_n = \sum_{t=1}^{n}\sin(\omega_k t)X_t$ are asymptotically normal. Then the computation of the covariance matrix yields the result.

If $X_t = \sum_{j=-\infty}^{+\infty}\psi_j Z_{t-j}$, one first shows (recall the formula for the spectral density)
\[
I_{n,X}(\lambda) = \left|\Psi\!\left(e^{-ig(n,\lambda)}\right)\right|^2 I_{n,Z}(\lambda) + R_n(g(n,\lambda))
\quad\text{with}\quad \sup_\lambda E|R_n(\cdot)| \xrightarrow{\,n\to\infty\,} 0
\]
($\max E(R_n^2) = O(n^{-1})$ if $E(Z_t^4) < \infty$ and $\sum_{j=-\infty}^{\infty}|j|^{1/2}|\psi_j| < \infty$).
The result then follows relatively easily from the previous one.
Variance of the estimator

Remember that, if $U \sim \mathrm{Exp}(\lambda)$, then $V(U) = (E(U))^2$. We may then suspect that $V(I_n(\lambda)) \to 4\pi^2 f^2(\lambda)$. Indeed one can prove
\[
\mathrm{Cov}(I_n(\lambda_i), I_n(\lambda_j)) =
\begin{cases}
8\pi^2 f^2(\lambda_i) + O(n^{-1/2}) & \text{if } \lambda_i = \lambda_j = 0 \text{ or } \pi,\\[2pt]
4\pi^2 f^2(\lambda) + O(n^{-1/2}) & \text{if } \lambda_i = \lambda_j = \lambda \neq 0, \pi,\\[2pt]
O(n^{-1}) & \text{if } \lambda_i \neq \lambda_j.
\end{cases}
\]
$I_n(\lambda)$ is therefore not a consistent estimator: $P(|I_n(\lambda) - 2\pi f(\lambda)| > c)$ does not converge to 0 as $n \to \infty$.
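The lack of consistency shows up directly in simulation: the variance of a periodogram ordinate does not shrink as $n$ grows. A small sketch for white noise, where $4\pi^2 f^2(\lambda) = \sigma^4 = 1$ at every $n$ (sample sizes and replication count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
reps = 3000

def pg_at(x, omega):
    """Periodogram ordinate I_n(omega) of the series x."""
    t = np.arange(1, len(x) + 1)
    return abs(np.sum(x * np.exp(-1j * t * omega))) ** 2 / len(x)

variances = {}
for n in (64, 256, 1024):
    k = n // 4                         # the frequency pi/2 for every n
    omega = 2 * np.pi * k / n
    vals = np.array([pg_at(rng.normal(size=n), omega) for _ in range(reps)])
    variances[n] = vals.var()

# All three values hover around 1 = 4*pi^2 f^2(pi/2): no decrease with n
print(variances)
```

Increasing $n$ refines the frequency grid but adds no averaging at any single frequency, which is exactly what the discrete spectral average estimators of the next slide repair.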
Discrete spectral average estimators

To remedy the lack of consistency of the periodogram, the idea is to exploit the fact that $I_n(\lambda_i)$ and $I_n(\lambda_j)$ are approximately independent for $\lambda_i \neq \lambda_j$: by averaging a large number of values $I_n(\lambda_j)$, with all the $\lambda_j$ close to $\lambda$, one may obtain a better estimator.

Definition
A discrete spectral average estimator has the form
\[
\hat f(\lambda) = \frac{1}{2\pi}\sum_{k=-m_n}^{m_n} W_n(k)\, I_n\!\left(g(n,\lambda) + \frac{2\pi k}{n}\right),
\]
where $\lim_{n\to\infty} m_n = +\infty$ while $\lim_{n\to\infty} m_n/n = 0$ (e.g. $m_n = \sqrt{n}$), and the weights satisfy $W_n(k) \geq 0$, $W_n(k) = W_n(-k)$,
\[
\sum_{k=-m_n}^{m_n} W_n(k) = 1, \qquad \sum_{k=-m_n}^{m_n} W_n^2(k) \xrightarrow{\,n\to\infty\,} 0.
\]
Example: $W_n(k) = (2m_n+1)^{-1}$ for $|k| \leq m_n$, $W_n(k) = 0$ otherwise.
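The uniform-weight example from the definition can be sketched in a few lines. This is a hypothetical minimal implementation (names and the choice $m_n = \sqrt n$ are illustrative), tried on white noise, whose spectral density is the constant $1/(2\pi) \approx 0.159$:

```python
import numpy as np

def spectral_average(x, lam, m):
    """Discrete spectral average estimate of f(lam) with the uniform
    weights W_n(k) = 1/(2m+1) for |k| <= m from the example."""
    n = len(x)
    t = np.arange(1, n + 1)
    j0 = round(lam * n / (2 * np.pi))      # index of g(n, lam)
    total = 0.0
    for k in range(-m, m + 1):
        omega = 2 * np.pi * (j0 + k) / n
        total += abs(np.sum(x * np.exp(-1j * t * omega))) ** 2 / n
    return total / (2 * m + 1) / (2 * np.pi)

rng = np.random.default_rng(3)
x = rng.normal(size=2048)                  # white noise: f(lam) = 1/(2*pi)
est = spectral_average(x, 1.0, m=int(np.sqrt(len(x))))
print(est)                                 # should land near 0.159
```

Averaging $2m+1 = 91$ roughly independent exponential ordinates cuts the standard deviation by a factor of about $\sqrt{91}$ compared with the raw periodogram at the same frequency.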
Consistency of discrete spectral average estimators

Theorem
Let $\{X_t\}$ be a stationary process such that $X_t = \sum_{j=-\infty}^{+\infty}\psi_j Z_{t-j}$, where $\{Z_t\} \sim \mathrm{IID}(0,\sigma^2)$, $E(Z_t^4) < \infty$ and $\sum_{j=-\infty}^{+\infty}|j|^{1/2}|\psi_j| < \infty$. Let $\hat f(\cdot)$ be a discrete spectral average estimator. Then, for $\lambda, \omega \in [0,\pi]$,
1. $\lim_{n\to\infty} E(\hat f(\lambda)) = f(\lambda)$;
2. \[
\lim_{n\to\infty}\left(\sum_{k=-m_n}^{m_n} W_n^2(k)\right)^{-1}\mathrm{Cov}(\hat f(\lambda), \hat f(\omega)) =
\begin{cases}
2f^2(0) & \text{if } \lambda = \omega = 0,\\
f^2(\lambda) & \text{if } \lambda = \omega \neq 0,\\
0 & \text{if } \lambda \neq \omega.
\end{cases}
\]

Because of the assumption on $\sum_{k=-m_n}^{m_n} W_n^2(k)$, $\hat f$ is a consistent estimator.
Remarks on spectral estimators

- In practice one has a finite $n$, and has to choose a finite $m$ and appropriate weights $W$ for the spectral average estimator.
- Using a relatively large $m$ and roughly equal weights will produce a smooth estimate with low variance, but possibly with a large bias, as the estimate of $f(\lambda)$ will depend on values far away from $\lambda$. On the other hand, a narrow band around $\lambda$ will produce an estimator with a large variance. It is advisable to experiment with different weights.
- The previous theorem concerns processes $X_t$ with zero mean. In general, one will apply the estimators to $Y_t = X_t - \bar x$. The only difference between $f_X$ and $f_Y$ occurs at frequency 0. It is then usual to estimate $f(0)$ through
\[
\hat f(0) = \frac{1}{2\pi}\left(I_n(\omega_1) + 2\sum_{k=1}^{m} W_n(k)\, I_n(\omega_{k+1})\right).
\]
Application on sunspots data

[Figure: monthly sunspot numbers from 1750 to about 1950; counts range from 0 to roughly 250.]
Spectral density estimates of sunspot data

[Figure: four panels for the yearly sunspot numbers 1770–1869 — the raw spectral density estimate, the smoothed estimates with span 3 and span 5 (all plotted against frequency), and the estimate plotted against period (0–50 years).]
Spectral density of simulated AR(1) process ($\varphi = 0.7$)

[Figure: four panels — sample ACF of the simulated AR(1) series; raw spectrum together with the theoretical density; smoothed spectra with span 3 and span 5, all plotted against frequency.]
Spectral density of simulated AR(1) process ($\varphi = -0.7$)

[Figure: four panels — sample ACF of the simulated series; raw spectrum; smoothed spectra with span 3 and span 5, all plotted against frequency.]
Other related topics

Lag window estimators of the spectral density:
\[
\hat f_L(\lambda) = \frac{1}{2\pi}\sum_{|h|\leq r} w\!\left(\frac{h}{r}\right)\hat\gamma(h)\,e^{-ih\lambda},
\]
where $w(0) = 1$, $|w(x)| \leq 1$, and $w(x) = 0$ for all $|x| > 1$ (several choices of $w(\cdot)$ are used: Bartlett, Daniell, ...). Remember that
\[
I_n(\omega_k) = \sum_{|h|<n}\hat\gamma(h)e^{-ih\omega_k}.
\]
It is possible to show that
\[
\hat f_L(\lambda) = \frac{1}{2\pi}\int_{-\pi}^{\pi} W(x)\, I_n(\lambda + x)\, dx
\approx \frac{1}{2\pi}\sum_{|k|\leq [n/2]} W(\omega_k)\, I_n(g(n,\lambda) + \omega_k)\,\frac{2\pi}{n},
\]
where $W(x) = \frac{1}{2\pi}\sum_{|h|\leq r} w(h/r)e^{-ihx}$ and $I_n(\cdot)$ denotes the extended periodogram.
Lag window estimators are thus not very different from discrete spectral average estimators.
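A lag window estimator can be sketched directly from the formula, here with the Bartlett window $w(u) = 1 - |u|$, one of the choices named above (names and the truncation point $r$ are illustrative). On white noise the target is the constant $f(\lambda) = 1/(2\pi)$:

```python
import numpy as np

def sample_acvf(x, h):
    """gamma_hat(h) = (1/n) sum_t (x_t - xbar)(x_{t+|h|} - xbar)."""
    n, h = len(x), abs(h)
    xc = x - x.mean()
    return np.sum(xc[: n - h] * xc[h:]) / n

def lag_window_estimate(x, lam, r):
    """f_L(lam) = (1/2pi) sum_{|h|<=r} w(h/r) gamma_hat(h) e^{-ih lam},
    with the Bartlett window w(u) = 1 - |u|."""
    h = np.arange(-r, r + 1)
    w = 1.0 - np.abs(h) / r
    gam = np.array([sample_acvf(x, hh) for hh in h])
    return float(np.real(np.sum(w * gam * np.exp(-1j * h * lam))) / (2 * np.pi))

rng = np.random.default_rng(4)
x = rng.normal(size=4096)          # white noise: f(lam) = 1/(2*pi) for all lam
est = lag_window_estimate(x, 1.0, r=30)
print(est)                          # should land near 0.159
```

Down-weighting $\hat\gamma(h)$ for $|h|$ close to $r$ (and discarding it beyond) plays the same variance-reducing role as the weights $W_n(k)$ of the discrete spectral average estimators.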
Other related topics

- Confidence intervals for $f(\lambda)$: one method is based on the approximation
\[
\frac{\nu\, \hat f(\omega_k)}{f(\omega_k)} \sim \chi^2(\nu)
\quad\text{with}\quad
\nu = \frac{2}{\sum_{k=-m}^{m} W_n^2(k)}.
\]
- Testing for periodicities in a time series. For instance:
  $H_0$: $X_t = \mu + Z_t$ with $\{Z_t\} \sim \mathrm{IID\ N}(0,\sigma^2)$;
  $H_1$: $X_t = \mu + A\cos(\omega t) + B\sin(\omega t) + Z_t$ with $\{Z_t\} \sim \mathrm{IID\ N}(0,\sigma^2)$, $(A,B) \neq (0,0)$.
Testing for periodicities, 2

Under $H_0$ ($X_t = \mu + Z_t$) with $\omega = \omega_k$, $I_n(\omega_k) = \frac12\|P_{\mathcal{L}(C_k,S_k)}X_n\|^2$.
Hence $2I_n(\omega_k) \sim \sigma^2\chi^2(2)$ and is independent of
\[
\|X_n - P_{\mathcal{L}(C_k,S_k)}X_n\|^2 = \sum_{t=1}^{n} X_t^2 - I_n(0) - 2I_n(\omega_k) \sim \sigma^2\chi^2(n-3).
\]
An F-test based on the statistic
\[
\frac{(n-3)\, I_n(\omega_k)}{\sum_{t=1}^{n} X_t^2 - I_n(0) - 2I_n(\omega_k)}
\]
can then be used to test $H_0$ against $H_1$.

The idea can be extended to more complex situations. Fisher's test (see TSTM) tests $H_0$ against
$H_1$: $X_t = \mu + Z_t + f(t)$ with $f$ a periodic function.