Kyushu University Institutional Repository

Likelihood Estimation of Stable Levy Processes from Discrete Data
Masuda, Hiroki (Faculty of Mathematics, Kyushu University)
http://hdl.handle.net/2324/3391
Publication information: MHF Preprint Series 2006-18, 2006-04-10, Faculty of Mathematics, Kyushu University

MHF Preprint Series, Kyushu University
21st Century COE Program "Development of Dynamic Mathematics with High Functionality"
MHF 2006-18 (Received April 10, 2006)
Faculty of Mathematics, Kyushu University, Fukuoka, Japan
Likelihood Estimation of Stable Levy Processes from Discrete Data

Hiroki Masuda
Graduate School of Mathematics, Kyushu University, 6-10-1 Hakozaki, Higashi-ku, Fukuoka 812-8581, Japan
Email: [email protected]
Abstract

We study likelihood inference for real-valued non-Gaussian stable Levy processes $X = (X_t)_{t\in\mathbb{R}_+}$ based on sampled data $(X_{ih_n})_{i=0}^n$, where $h_n \downarrow 0$, focusing on cases of either symmetric or completely skewed (one-sided) Levy density. First, local asymptotic normality with an always degenerate Fisher information matrix is obtained, so that maximum likelihood estimation is inappropriate for joint estimation of all the parameters involved. Second, supposing that either the index or the scale parameter is known, we obtain the uniform asymptotic normality of the maximum likelihood estimates and their asymptotic efficiency, where the resulting optimal convergence rates reveal that, as opposed to the Gaussian case, $nh_n \to \infty$ is not necessary for consistent estimation of any of the parameters.

Keywords: discrete sampling; efficiency; maximum likelihood estimation; stable Levy process.
1 Introduction
The purpose of this study is to investigate likelihood-based parametric estimation for discretely observed non-Gaussian stable Levy processes whose Levy measures are either symmetric or completely skewed (one-sided). Our main results are presented in Section 3: first, we prove local asymptotic normality (LAN) with an always degenerate Fisher information matrix; second, the uniform asymptotic normality and the asymptotic efficiency of the maximum likelihood estimators (MLE) are obtained when either the index or the scale parameter is supposed to be known. Recall that uniform asymptotic normality is theoretically important for constructing asymptotic confidence intervals. The resulting phenomena turn out to differ considerably from the Wiener case (see below). The proofs of the results are given in Section 4.
Our precise model setup will be described in Section 2; before turning to it, let us recall some well-known facts and make some remarks on our problem.
Consider the real-valued process $X = (X_t)_{t\in\mathbb{R}_+}$ given by

    X_t = \gamma t + \sigma w_t,  (1)

where $\gamma\in\mathbb{R}$, $\sigma > 0$, and $w$ is a standard Wiener process. If we can obtain a continuous record $(X_t)_{0\le t\le T}$, then on account of the quadratic-variation character we can identify $\sigma$ without error for each $T > 0$: any two statistical experiments corresponding to different $\sigma$ are mutually singular (entirely separated). In this case we may therefore suppose that $\sigma > 0$ is known, and only estimating $\gamma$ makes sense. The MLE of $\gamma$ is asymptotically normal and efficient with optimal rate $\sqrt{T}$ and asymptotic variance $\sigma^2$. On the other hand, if we have only discrete observations $(X_{ih_n})_{i=1}^n$ for some $h_n > 0$ possibly depending on the sample size $n$, then estimation of $\sigma$ makes sense too, and in fact we have the following.
(i) If $\lim_{n\to\infty} nh_n \in [0,\infty)$, then the MLE of $\sigma$ is asymptotically normal and efficient with optimal rate $\sqrt{n}$ and asymptotic variance $\sigma^2/2$, regardless of the behavior of $h_n\downarrow 0$, while there exists no consistent estimator of $\gamma$; more precisely, the sequence of observed information associated with $\gamma$ is stochastically bounded without norming.

(ii) If $nh_n\to\infty$ and $h_n\to 0$ as $n\to\infty$, then the MLE of $(\gamma,\sigma)$ is asymptotically normal and efficient with optimal rate $\mathrm{diag}(\sqrt{nh_n}, \sqrt{n})$ and asymptotic covariance matrix $\mathrm{diag}(\sigma^2, \sigma^2/2)$.
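As a concrete illustration of case (ii), the Gaussian MLEs have closed forms, and a short simulation recovers both parameters at the stated rates. The sketch below is ours, not part of the original paper; the parameter values and the choice $h_n = n^{-1/2}$ are illustrative assumptions permitted by (ii).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative values (assumptions, not from the paper): gamma = 0.5, sigma = 2.
gamma, sigma = 0.5, 2.0
n = 200_000
h = n ** -0.5                  # h_n -> 0 while n*h_n = sqrt(n) -> infinity (case (ii))

# iid Gaussian increments N(gamma*h, sigma^2*h) of X_t = gamma*t + sigma*w_t
dx = gamma * h + sigma * np.sqrt(h) * rng.standard_normal(n)

gamma_hat = dx.sum() / (n * h)                               # MLE of gamma, rate sqrt(n*h_n)
sigma_hat = np.sqrt(np.mean((dx - gamma_hat * h) ** 2) / h)  # MLE of sigma, rate sqrt(n)

print(gamma_hat, sigma_hat)
```

With $nh_n = \sqrt{n}$ the drift estimate is far noisier than the scale estimate, in line with the rates $\sqrt{nh_n}$ versus $\sqrt{n}$.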
Importantly, much more general diffusions give rise to analogous phenomena; see Gobet [3, 4] for efficiency issues, and Yoshida [14] and Kessler [9] for optimal estimation procedures.
Now consider the question "what about cases where X is instead a non-Gaussian stable Levy process?", to which we shall give a reply in this article. We know that the continuously observed case makes no sense for joint estimation of all the parameters involved: indeed, applying Kabanov et al. [8, Theorem 15] we can easily check that, for every $T > 0$, $P^T_\theta$ and $P^T_{\theta'}$ are mutually absolutely continuous if and only if $\theta = \theta'$, where $P^T_\theta$ denotes the image measure of $(X_t)_{0\le t\le T}$ associated with $\theta$, whose components consist of the index, scale, and location. Historically, (possibly skewed) non-Gaussian stable distributions have been rather typical models for independent and identically distributed (iid) data whose distributions seem to possess regularly varying tails, and they have been deeply investigated by many statisticians as well as probabilists; standard references for basic facts about stable distributions can be found in the extensive bibliography of Nolan [10]. Despite the lack of explicit expressions for the densities except in a few particular cases, one can rely on numerical procedures when attempting parametric inference: in particular, for the implementation of maximum likelihood estimation based on iid samples, one should consult Nolan's paper cited above.
When X is sampled at equally spaced time points, the continuous-time background reduces to the usual iid-sample case, since a Levy process has independent and stationary increments. However, when we try to accommodate asymptotically high-frequency data, such as intraday minute-by-minute records, into the model, it is often useful and reasonable to consider sampling intervals decreasing as the sample size increases, as in the case where we observe $(X_{i/n})_{i=1}^n$; precise asymptotic results may then be obtained, and this is the scope of our present study. When the distribution of observed data seems to possess Paretian tails, stable Levy processes may then serve as building blocks for modelling them.
We end this section by mentioning a few existing results. Woerner [13] proved the LAN property with rate $\sqrt{n}$ for the scale parameter of symmetric stable Levy processes X based on discrete data $(X_{ih_n})_{i=1}^n$, where either $h_n = h > 0$ or $h_n\to 0$. Ait-Sahalia and Jacod [1] studied asymptotic behaviors of the Fisher information of Levy processes with a symmetric-stable convolution factor sampled at $i/n$, $i\le n$, and Ait-Sahalia and Jacod [2] then exhibited an explicit construction of a rate-efficient estimator for the scale parameter.
2 Objective
For a random variable $\xi$ we denote its law by $\mathcal{L}(\xi)$. Let $\gamma\in\mathbb{R}$ and $\sigma > 0$ be constants. We shall deal with the following two cases.
Case A. (Stable Levy process with drift and symmetric Levy density)
For $\alpha\in(0,2)$, let $S^S_\alpha(\sigma)$ denote the $\alpha$-stable distribution on the real line with the characteristic function

    u \mapsto \exp(-\sigma^\alpha|u|^\alpha), \quad u\in\mathbb{R}.

Then let $X = (X_t)_{t\in\mathbb{R}_+}$ with $X_0 = 0$ a.s. be a Levy process such that $\mathcal{L}(X_1-\gamma) = S^S_\alpha(\sigma)$.
Case B. (Stable subordinator with drift)
For $\alpha\in(0,1)$, let $S^+_\alpha(\sigma)$ denote the $\alpha$-stable distribution on the positive half line with the characteristic function

    u \mapsto \exp\Big\{-\sigma^\alpha|u|^\alpha\Big(1 - i\tan\frac{\pi\alpha}{2}\cdot\mathrm{sign}\,u\Big)\Big\}, \quad u\in\mathbb{R}.

Then let $X = (X_t)_{t\in\mathbb{R}_+}$ with $X_0 = 0$ a.s. be a Levy process such that $\mathcal{L}(X_1-\gamma) = S^+_\alpha(\sigma)$.
In other words, we shall not consider skewed cases with $\beta\in(-1,0)\cup(0,1)$, where $\beta$ stands for the skewness parameter; see Nolan [10, Section 1.1]. This is equivalent to excluding cases where $\mathcal{L}(X_t-\gamma t)$ admits an everywhere positive and skewed density for each $t > 0$. By taking the negative of X in Case B, stable Levy processes with only negative jumps can be treated as well. Note that for every $t > 0$

    \mathcal{L}(X_t - \gamma t) = \begin{cases} S^S_\alpha(\sigma t^{1/\alpha}) & \text{in Case A}, \\ S^+_\alpha(\sigma t^{1/\alpha}) & \text{in Case B}. \end{cases}
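The scaling identity above can be checked numerically. The sketch below is our own illustration, not from the paper: it draws from $S^S_\alpha(1)$ by the Chambers-Mallows-Stuck method and compares the empirical characteristic function of $\sigma t^{1/\alpha}$-scaled draws with $\exp(-\sigma^\alpha t)$ at $u = 1$; the values of $\alpha$, $\sigma$, $t$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sym_stable(alpha, size, rng):
    """Draw from S^S_alpha(1), i.e. cf u -> exp(-|u|^alpha), via Chambers-Mallows-Stuck."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * U) / np.cos(U) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * U) / W) ** ((1.0 - alpha) / alpha))

alpha, sigma, t = 1.5, 0.8, 0.25   # illustrative values (assumptions)
# In Case A, X_t - gamma*t has law S^S_alpha(sigma * t^(1/alpha)).
x = sigma * t ** (1.0 / alpha) * sym_stable(alpha, 400_000, rng)

# Empirical cf at u = 1 versus exp(-(sigma * t^(1/alpha))^alpha) = exp(-sigma^alpha * t).
err = abs(np.exp(1j * x).mean() - np.exp(-sigma ** alpha * t))
print(err)
```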
Now we describe our statistical model. Suppose we have equally spaced discrete data $X_{h_n}, X_{2h_n}, \dots, X_{nh_n}$, where $(h_n)_{n\in\mathbb{N}}$ is a non-random bounded positive sequence fulfilling

    \lim_{n\to\infty} h_n = 0 \quad\text{and}\quad \liminf_{n\to\infty} nh_n > 0.  (2)

If $h_n \asymp n^{-a}$ for example, then necessarily $a\in(0,1]$; the symbol $a_n \asymp b_n$ means that there exists a constant $c > 0$ such that $c^{-1} \le a_n/b_n \le c$ for every $n$ large enough. Let us emphasize that no restriction on the rate of decrease of $h_n$ other than (2) will be imposed in the sequel. Thus we are led to the statistical model $(P^n_\theta)_{\theta\in\Theta}$, the image measure of $(X_{ih_n})_{i=1}^n$, which depends on the unknown parameter

    \theta = (\alpha, \gamma, \sigma) \in \Theta \subset \mathbb{R}^3.

The parameter space $\Theta$ is supposed to be a bounded domain whose closure is contained in

    \begin{cases} \{(\alpha,\gamma,\sigma) : \alpha\in(0,2),\ \gamma\in\mathbb{R},\ \sigma>0\} & \text{in Case A}, \\ \{(\alpha,\gamma,\sigma) : \alpha\in(0,1),\ \gamma\in\mathbb{R},\ \sigma>0\} & \text{in Case B}. \end{cases}
We implicitly suppose that there exists a true value $\theta\in\Theta$ generating the observed data. With the above setup, the first goal of this article is to establish a LAN property for $\theta\in\Theta$ with an always degenerate Fisher information matrix (Section 3.1). This entails a negative conclusion: joint estimation of $(\alpha,\gamma,\sigma)$ based on the maximum likelihood estimator is out of place. Nevertheless, this difficulty does not arise if we suppose either $\alpha$ or $\sigma$ is known (see (6) below). As the second goal, we provide uniform asymptotic normality of the maximum likelihood estimates (MLE) of either $(\alpha,\gamma)$ or $(\gamma,\sigma)$ (Section 3.2). The optimal rates for estimating $\alpha$, $\gamma$, and $\sigma$ will turn out to be $\sqrt{n}\log(1/h_n)$, $\sqrt{n}h_n^{1-1/\alpha}$, and $\sqrt{n}$, respectively, implying that the answer to the question "which estimate converges fastest?" changes according to the true value of $\alpha$; see (10) below. It is also seen that, as opposed to the Wiener case, we need not impose $nh_n\to\infty$ for consistent estimation of any component of $\theta$; that is to say, the observed information over any (non-empty) bounded time interval is rich enough. It is the $1/\alpha$-selfsimilarity of stable Levy processes, which plays a key role in our study as in [13] and [1], that induces these inherent phenomena; recall that a Levy process is selfsimilar if and only if it is stable.
Remark 2.1. The parameter $\theta$ determines the generating triplet $(\gamma, 0, g_{\alpha,\sigma}(z)\,dz)$ of the process X, where, letting

    c(\alpha,\sigma) = \sigma^\alpha\Big\{\frac{1}{\alpha}\,\Gamma(1-\alpha)\cos\frac{\alpha\pi}{2}\Big\}^{-1},

the Levy density $g_{\alpha,\sigma}$ is given by

    g_{\alpha,\sigma}(z) = \begin{cases} 2^{-1}c(\alpha,\sigma)|z|^{-\alpha-1}1_{\{z\ne 0\}} & \text{in Case A}, \\ c(\alpha,\sigma)z^{-\alpha-1}1_{\{z>0\}} & \text{in Case B}. \end{cases}  (3)

Formula (3) is readily seen by invoking Sato [11, Lemma 14.11]. Note that $(\alpha,\sigma)\mapsto c(\alpha,\sigma)$ is continuous on $(0,2)\times(0,\infty)$; in particular $\lim_{\alpha\to 1}c(\alpha,\sigma) = 2\sigma/\pi$ and $\lim_{\alpha\downarrow 0}c(\alpha,\sigma) = \lim_{\alpha\uparrow 2}c(\alpha,\sigma) = 0$ for every $\sigma > 0$. It is important to keep in mind that any distribution related to X, possibly infinite-dimensional such as $\mathcal{L}(\inf_{t\le 1}X_t)$, is determined by $\theta$.
3 Statement of results

Under (2), without loss of generality we may suppose that $h_n\in(0,1)$, and hence $\log(1/h_n) > 0$, by taking the sample size $n$ large enough. All asymptotic notation refers to $n\to\infty$ unless otherwise stated.
Consider Case A, and denote by $x\mapsto\phi_\alpha(x;\sigma)$ the density of $S^S_\alpha(\sigma)$; in particular, $\phi_\alpha(x) := \phi_\alpha(x;1)$. By the scaling property we have $\phi_\alpha(x;a\sigma) = a^{-1}\phi_\alpha(a^{-1}x;\sigma)$ for every $a > 0$ and $x\in\mathbb{R}$. Noticing that $\mathcal{L}(X_{ih_n}-X_{(i-1)h_n}-\gamma h_n) = S^S_\alpha(\sigma h_n^{1/\alpha})$ for each $i\le n$, we see that the log-likelihood function of $(X_{ih_n})_{i=1}^n$, say $\ell_n(\theta)$, is computed as

    \ell_n(\theta) = \sum_{i=1}^n \log\phi_\alpha(X_{ih_n}-X_{(i-1)h_n}-\gamma h_n;\ \sigma h_n^{1/\alpha})
                   = \sum_{i=1}^n \log\{\sigma^{-1}h_n^{-1/\alpha}\phi_\alpha(Y_{ni})\}
                   = \sum_{i=1}^n \{-\log\sigma + \alpha^{-1}\log(1/h_n) + \log\phi_\alpha(Y_{ni})\},

where $(Y_{ni})_{i=1}^n$, defined by

    Y_{ni} = \sigma^{-1}h_n^{-1/\alpha}(X_{ih_n}-X_{(i-1)h_n}-\gamma h_n), \quad i\le n,  (4)

forms an iid triangular array with common law $S^S_\alpha(1)$ independent of $n$. Exactly the same remains true for Case B; just replace "$S^S_\alpha(\sigma)$" with "$S^+_\alpha(\sigma)$".
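Although $\phi_\alpha$ has no closed form in general, the log-likelihood above is computable numerically. The following sketch is ours, under stated truncation and grid assumptions: it evaluates $\phi_\alpha$ by Fourier inversion, $\phi_\alpha(y) = \pi^{-1}\int_0^\infty\cos(uy)e^{-u^\alpha}\,du$, and plugs it into $\ell_n(\theta)$.

```python
import numpy as np

# Quadrature grid for (1/pi) * int_0^inf cos(u*y) exp(-u**alpha) du;
# truncation at u = 60 is an assumption adequate for alpha not too close to 0.
_u = np.linspace(0.0, 60.0, 20_001)
_du = _u[1] - _u[0]

def phi_sym(alpha, y):
    """Standard symmetric stable density phi_alpha(y) by numerical Fourier inversion."""
    y = np.atleast_1d(np.asarray(y, dtype=float))
    f = np.cos(np.outer(y, _u)) * np.exp(-_u ** alpha)
    # trapezoid rule along the u-axis
    return (f.sum(axis=-1) - 0.5 * (f[:, 0] + f[:, -1])) * _du / np.pi

def loglik(increments, alpha, gamma, sigma, h):
    """l_n(theta) = sum_i { -log(sigma) + (1/alpha) log(1/h) + log phi_alpha(Y_ni) }."""
    yni = (increments - gamma * h) / (sigma * h ** (1.0 / alpha))
    return np.sum(-np.log(sigma) + np.log(1.0 / h) / alpha + np.log(phi_sym(alpha, yni)))

# Sanity check: alpha = 1 gives the Cauchy density 1/(pi*(1 + y^2)).
print(phi_sym(1.0, 0.0)[0], 1.0 / np.pi)
```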
3.1 LAN property with degenerate Fisher information

Recall that the experiment $(P^n_\theta)_{n\in\mathbb{N}}$ is said to satisfy LAN at $\theta\in\Theta$ with rate $A_n(\theta)$ if there exist a random vector $\Delta_n(\theta)$ and a deterministic matrix $I(\theta)$ such that, for any bounded vector sequence $(u_n)\subset\mathbb{R}^3$ fulfilling $u_n\to u$, we have

    \log\frac{dP^n_{\theta+\{A_n(\theta)\}^{-1}u_n}}{dP^n_\theta} = u^\top\Delta_n(\theta) - \frac{1}{2}u^\top I(\theta)u + o_{P^n_\theta}(1),

where $A_n(\theta)\in\mathbb{R}^{3\otimes 3}$ is invertible for each $n$, and $\Delta_n(\theta)\in\mathbb{R}^3$ converges weakly along the $(P^n_\theta)$-sequence to a centred normal variable with covariance matrix $I(\theta)$, which corresponds to the Fisher information matrix.
Theorem 3.1. For both Cases A and B, the experiment $(P^n_\theta)$ satisfies LAN at any $\theta = (\alpha,\gamma,\sigma)\in\Theta$ with rate

    A_n(\alpha) := \mathrm{diag}\{\sqrt{n}\log(1/h_n),\ \sqrt{n}h_n^{1-1/\alpha},\ \sqrt{n}\}  (5)

and with the always degenerate Fisher information matrix

    I(\theta) = \begin{pmatrix} H_\alpha/\alpha^4 & J_\alpha/(\sigma\alpha^2) & H_\alpha/(\sigma\alpha^2) \\ J_\alpha/(\sigma\alpha^2) & M_\alpha/\sigma^2 & J_\alpha/\sigma^2 \\ H_\alpha/(\sigma\alpha^2) & J_\alpha/\sigma^2 & H_\alpha/\sigma^2 \end{pmatrix}.  (6)

Here

    H_\alpha = \int\frac{\{\phi_\alpha(y)+y\partial\phi_\alpha(y)\}^2}{\phi_\alpha(y)}\,dy, \quad
    M_\alpha = \int\frac{\{\partial\phi_\alpha(y)\}^2}{\phi_\alpha(y)}\,dy, \quad
    J_\alpha = \int\frac{\partial\phi_\alpha(y)\{\phi_\alpha(y)+y\partial\phi_\alpha(y)\}}{\phi_\alpha(y)}\,dy,  (7)

all being finite; in particular, $J_\alpha$ equals zero in Case A, while it is positive in Case B.
Remark 3.2. Under (2) it is clear that $\{A_n(\alpha)\}^{-1}\to 0$ whenever $\alpha\in(0,2)$; this is not necessarily the case if we drop the second condition in (2).

Remark 3.3. Theorem 3.1 says that, in contrast to the Wiener case, maximum likelihood estimation of $\theta$ based on discrete sampling still leads to a degenerate Fisher information matrix. On the other hand, $I(\theta)$ does not depend on $\gamma$, just as in the Wiener case.
3.2 Uniform asymptotic normality when either α or σ is known

The form of (6) says that we can proceed to efficiency considerations if either $\alpha$ or $\sigma$ is known.

We need some notation. In the sequel, $\Rightarrow_u$ indicates weak convergence along the $(P^n_\theta)$-sequence holding uniformly over $\Theta^-$ (the closure of $\Theta$): precisely, for random vectors $\zeta_n(\theta)$ and $\zeta(\theta)$ with distributions $P_n$ and $P$ depending on $\theta\in\Theta$, we write $\zeta_n\Rightarrow_u\zeta(\theta)$ if

    \sup_{\theta\in\Theta}\Big|\int f(y)P_n(dy) - \int f(y)P(dy)\Big| \to 0

for every continuous bounded function $f$. Analogous notation will be used for other modes of convergence (including ordinary convergence of nonrandom sequences). Write $\theta' = (\alpha,\gamma)$ and $\theta'' = (\gamma,\sigma)$, and let $\Theta'$ and $\Theta''$ respectively denote the corresponding admissible parameter spaces induced from $\Theta$. Write $\ell_n(\theta')$ (resp. $\ell_n(\theta'')$) instead of $\ell_n(\theta)$ when $\sigma$ (resp. $\alpha$) is known, and let

    D_{1n}(\alpha) = \mathrm{diag}\{\sqrt{n}\log(1/h_n),\ \sqrt{n}h_n^{1-1/\alpha}\}, \quad D_{2n}(\alpha) = \mathrm{diag}\{\sqrt{n}h_n^{1-1/\alpha},\ \sqrt{n}\}.

Putting $I(\theta) = [I^{kl}(\theta)]_{k,l=1}^3$, we write

    I_1(\theta') = \begin{pmatrix} I^{11}(\theta) & I^{12}(\theta) \\ I^{21}(\theta) & I^{22}(\theta) \end{pmatrix}, \quad
    I_2(\theta'') = \begin{pmatrix} I^{22}(\theta) & I^{23}(\theta) \\ I^{32}(\theta) & I^{33}(\theta) \end{pmatrix}.

Finally, let $\hat\theta'_n$ (resp. $\hat\theta''_n$) denote a maximizer of $\ell_n(\theta')$ over $\Theta'^-$ (resp. of $\ell_n(\theta'')$ over $\Theta''^-$) when $\sigma$ (resp. $\alpha$) is known. Now we can state our second result.
Theorem 3.4. Let $Z$ stand for a two-dimensional standard normal variable. For both Cases A and B, we have the following.

(a) Suppose $\sigma$ is known, while $\theta'$ is unknown. Then there exists a local maximum $\hat\theta'_n$ of $\ell_n(\theta')$ with probability tending to 1, for which

    D_{1n}(\alpha)(\hat\theta'_n - \theta') \Rightarrow_u \{I_1(\theta')\}^{-1/2}Z.  (8)

The estimate is asymptotically efficient.

(b) Suppose $\alpha$ is known, while $\theta''$ is unknown. Then there exists a local maximum $\hat\theta''_n$ of $\ell_n(\theta'')$ with probability tending to 1, for which

    D_{2n}(\alpha)(\hat\theta''_n - \theta'') \Rightarrow_u \{I_2(\theta'')\}^{-1/2}Z.  (9)

The estimate is asymptotically efficient.
Remark 3.5. To see that both $I_1(\theta')$ and $I_2(\theta'')$ are positive definite, it suffices to show that $I^{kk}(\theta) > 0$ for $k = 1, 2$ and that $I^{kk}(\theta)I^{ll}(\theta) - \{I^{kl}(\theta)\}^2 > 0$ for $(k,l) = (1,2), (2,3)$. The former is obvious, and the latter is a direct consequence of the fact $J_\alpha^2 < H_\alpha M_\alpha$ coming from Schwarz's inequality (the equality $J_\alpha^2 = H_\alpha M_\alpha$ cannot be attained).
Remark 3.6. Clearly $\{D_{jn}(\alpha)\}^{-1}\to_u 0$, $j = 1, 2$, under the model setup described in Section 2. Since the components of each $D_{jn}(\alpha)$ are different, it may be informative to mention the relative magnitudes of the optimal rates. In Case A they are summarized as follows:

    \hat\sigma_n \prec \hat\alpha_n \prec \hat\gamma_n \quad \text{if } \alpha\in(0,1),
    \hat\sigma_n \sim \hat\gamma_n \prec \hat\alpha_n \quad \text{if } \alpha = 1,
    \hat\gamma_n \prec \hat\sigma_n \prec \hat\alpha_n \quad \text{if } \alpha\in(1,2).  (10)

Here, $\hat\alpha_n \prec \hat\gamma_n$ (resp. $\hat\alpha_n \sim \hat\gamma_n$) means that any rate-efficient estimate of $\gamma$ converges faster than (resp. at the same rate as) any rate-efficient estimate of $\alpha$. In (10), ignore $\hat\sigma_n$ (resp. $\hat\alpha_n$) in case (a) (resp. (b)). Of course, the order is always $\hat\sigma_n \prec \hat\alpha_n \prec \hat\gamma_n$ in Case B.
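The orderings in (10) are elementary to check numerically. The sketch below is our illustration; the choice $h_n = n^{-a}$ is an assumption permitted by (2).

```python
import numpy as np

def rates(alpha, n, a):
    """Optimal rates for (alpha, gamma, sigma) when h_n = n**(-a):
    sqrt(n)*log(1/h_n), sqrt(n)*h_n**(1 - 1/alpha), and sqrt(n)."""
    h = float(n) ** (-a)
    return {"alpha": np.sqrt(n) * np.log(1.0 / h),
            "gamma": np.sqrt(n) * h ** (1.0 - 1.0 / alpha),
            "sigma": np.sqrt(n)}

n, a = 10 ** 6, 0.5
r_small = rates(0.7, n, a)   # alpha in (0,1): expect sigma < alpha < gamma
r_large = rates(1.5, n, a)   # alpha in (1,2): expect gamma < sigma < alpha
print(r_small, r_large)
```

For $\alpha < 1$ the factor $h_n^{1-1/\alpha}$ diverges, so the drift rate dominates; for $\alpha > 1$ it vanishes, and the drift rate becomes the slowest.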
Remark 3.7. It follows from (8) (resp. (9)) that $\hat\alpha_n$ and $\hat\gamma_n$ (resp. $\hat\gamma_n$ and $\hat\sigma_n$) are asymptotically independent in Case A; in particular, just as in the Wiener case, the estimates of the drift and scale parameters are asymptotically independent (case (b)). In contrast, this is not so in Case B, where the estimates are asymptotically correlated (since $J_\alpha > 0$).
4 Proofs
The proofs for Cases A and B are almost the same; hence, for conciseness, we first complete the proof for Case A in Sections 4.1 and 4.2, and then turn to Case B in Section 4.3, where only the changes from the proof for Case A are mentioned.
4.1 Proof of Theorem 3.1 for Case A

It is well known that $(\alpha, y)\mapsto\phi_\alpha(y)$ is everywhere positive and of class $C^\infty$; by means of the Fourier inversion formula, for any $k, k'\in\mathbb{N}\cup\{0\}$ there exist constants $c_i = c_i(\alpha, k, k') > 0$ such that

    |\partial^k\partial_\alpha^{k'}\phi_\alpha(y)| \le c_0\int e^{-|u|^\alpha}|u|^{c_1}\{1 + (\log|u|)^{c_2}\}\,du,

so that $|\partial^k\partial_\alpha^{k'}\phi_\alpha(y)| < \infty$. It also follows from the series expansion of the density (e.g., Sato [11, Remark 14.18]) that, for any $k, k'\in\mathbb{N}\cup\{0\}$,

    |\partial^k\partial_\alpha^{k'}\phi_\alpha(y)| \asymp (\log|y|)^{k'}|y|^{-\alpha-1-k}  (11)

as $|y|\to\infty$. Therefore:

    the quantities $H_\alpha$, $M_\alpha$, and $J_\alpha$ of (7), as well as $\int\{\partial_\alpha\phi_\alpha(y)\}^2/\phi_\alpha(y)\,dy$, are finite.  (12)

We denote by $\to_{a.s.}$ almost sure convergence under $P^n_\theta$, and introduce the notation

    \psi_n(x;\theta) := \sigma^{-1}h_n^{-1/\alpha}\phi_\alpha(\sigma^{-1}h_n^{-1/\alpha}(x - \gamma h_n)),
    g_{ni}(\theta) := \partial_\theta\log\psi_n(X_{ih_n}-X_{(i-1)h_n};\theta),
    \theta_n := \theta + \{A_n(\alpha)\}^{-1}u_n,

so that $\psi_n(\cdot;\theta)$ is the common density of the increments and $\psi_n(X_{ih_n}-X_{(i-1)h_n};\theta) = \sigma^{-1}h_n^{-1/\alpha}\phi_\alpha(Y_{ni})$.
To complete the proof it suffices to verify Lemmas 4.1 to 4.3 below (a law of large numbers, the Lindeberg condition, and $L^2(P)$-differentiability, respectively); see, e.g., Greenwood and Shiryaev [5, Sections 6, 7 and 8] for details.

Lemma 4.1. $C_{1n}(\theta) := \{A_n(\alpha)\}^{-1}\sum_{i=1}^n g_{ni}(\theta)\{g_{ni}(\theta)\}^\top\{A_n(\alpha)\}^{-1} \to_{a.s.} I(\theta)$.

Lemma 4.2. $C_{2n}(\theta) := \sum_{i=1}^n E^n_\theta\big[\{u_n^\top\{A_n(\alpha)\}^{-1}g_{ni}(\theta)\}^2\,1_{\{|u_n^\top\{A_n(\alpha)\}^{-1}g_{ni}(\theta)|\ge\varepsilon\}}\big] \to 0$ for every $\varepsilon > 0$.

Lemma 4.3. $C_{3n}(\theta) := n\int\big\{\sqrt{\psi_n(y;\theta_n)} - \sqrt{\psi_n(y;\theta)} - (\theta_n-\theta)^\top\partial_\theta\sqrt{\psi_n(y;\theta)}\big\}^2\,dy \to 0$.
4.1.1 Proof of Lemma 4.1
Put $g_{ni}(\theta) = [g_{ni;k}(\theta)]_{k=1}^3$. Direct partial differentiation yields

    g_{ni;1}(\theta) = -\alpha^{-2}\log(1/h_n)F_1(Y_{ni}) + F_2(Y_{ni}),  (13)
    g_{ni;2}(\theta) = -\sigma^{-1}h_n^{1-1/\alpha}F_3(Y_{ni}),  (14)
    g_{ni;3}(\theta) = -\sigma^{-1}F_1(Y_{ni}),  (15)

where

    F_1(y) = \frac{\phi_\alpha(y)+y\partial\phi_\alpha(y)}{\phi_\alpha(y)}, \quad
    F_2(y) = \frac{\partial_\alpha\phi_\alpha(y)}{\phi_\alpha(y)}, \quad
    F_3(y) = \frac{\partial\phi_\alpha(y)}{\phi_\alpha(y)}.
Put $A_n(\alpha) = \mathrm{diag}\{A_{1n}(\alpha), A_{2n}(\alpha), A_{3n}(\alpha)\}$ and $C_{1n}(\theta) = [C_{1n;kl}(\theta)]_{k,l=1}^3$. Substituting (13) to (15) into the expression ($C_{1n}(\theta)$ being symmetric)

    C_{1n;kl}(\theta) = \sum_{i=1}^n\{A_{kn}(\alpha)A_{ln}(\alpha)\}^{-1}g_{ni;k}(\theta)g_{ni;l}(\theta), \quad 1\le k, l\le 3,

we get, $P^n_\theta$-a.s.,

    C_{1n;11}(\theta) = \alpha^{-4}\frac{1}{n}\sum_{i=1}^n F_1(Y_{ni})^2 + O(\{\log(1/h_n)\}^{-1})\frac{1}{n}\sum_{i=1}^n F_1(Y_{ni})F_2(Y_{ni}),
    C_{1n;22}(\theta) = \sigma^{-2}\frac{1}{n}\sum_{i=1}^n F_3(Y_{ni})^2,
    C_{1n;33}(\theta) = \sigma^{-2}\frac{1}{n}\sum_{i=1}^n F_1(Y_{ni})^2,
    C_{1n;12}(\theta) = (\alpha^2\sigma)^{-1}\frac{1}{n}\sum_{i=1}^n F_1(Y_{ni})F_3(Y_{ni}) + O(\{\log(1/h_n)\}^{-1})\frac{1}{n}\sum_{i=1}^n F_2(Y_{ni})F_3(Y_{ni}),
    C_{1n;13}(\theta) = (\alpha^2\sigma)^{-1}\frac{1}{n}\sum_{i=1}^n F_1(Y_{ni})^2 + O(\{\log(1/h_n)\}^{-1})\frac{1}{n}\sum_{i=1}^n F_1(Y_{ni})F_2(Y_{ni}),
    C_{1n;23}(\theta) = \sigma^{-2}\frac{1}{n}\sum_{i=1}^n F_1(Y_{ni})F_3(Y_{ni}).
Since $(Y_{ni})_{i=1}^n$ is an iid array with common law $S^S_\alpha(1)$ not depending on $n$ (recall (4)), it follows from the strong law of large numbers that

    \frac{1}{n}\sum_{i=1}^n F_k(Y_{ni})F_l(Y_{ni}) \to_{a.s.} \int F_k(y)F_l(y)\phi_\alpha(y)\,dy

for $k, l\in\{1,2,3\}$, where the finiteness of the limit is ensured by means of Schwarz's inequality and (12). In particular, we have

    \int F_1(y)F_3(y)\phi_\alpha(y)\,dy = 0,  (16)

since $y\mapsto y\{\partial\phi_\alpha(y)\}^2/\phi_\alpha(y)$ is odd. Thus we get Lemma 4.1.
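The vanishing in (16) rests only on oddness of the integrand $F_1F_3\phi_\alpha = \partial\phi_\alpha + y(\partial\phi_\alpha)^2/\phi_\alpha$. A quick numerical check (ours; $\phi_\alpha$ and $\partial\phi_\alpha$ obtained by Fourier inversion on a truncated grid, with illustrative $\alpha$) confirms the antisymmetry.

```python
import numpy as np

alpha = 1.5                        # illustrative index (assumption)
u = np.linspace(0.0, 60.0, 60_001)
du = u[1] - u[0]
wt = np.exp(-u ** alpha)

def phi(y):    # phi_alpha(y) = (1/pi) int_0^inf cos(u*y) e^{-u^alpha} du
    f = np.cos(u * y) * wt
    return (f.sum() - 0.5 * (f[0] + f[-1])) * du / np.pi

def dphi(y):   # d/dy phi_alpha(y) = -(1/pi) int_0^inf u sin(u*y) e^{-u^alpha} du
    f = -u * np.sin(u * y) * wt
    return (f.sum() - 0.5 * (f[0] + f[-1])) * du / np.pi

def integrand(y):   # F1(y) * F3(y) * phi_alpha(y)
    return dphi(y) + y * dphi(y) ** 2 / phi(y)

odd_defect = max(abs(integrand(y) + integrand(-y)) for y in (0.3, 1.1, 2.5))
print(odd_defect)
```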
4.1.2 Proof of Lemma 4.2

Fix any constants $\varepsilon, \delta > 0$, and write $u_n = (u_{kn})_{k=1}^3$. In the sequel the notation $a_n \lesssim b_n$ indicates that there exists a constant $c_0 > 0$ such that $a_n \le c_0 b_n$ for every sufficiently large $n$. Then, using the Lyapunov-type estimate, we have

    C_{2n}(\theta) \lesssim n E^n_\theta\Big[\Big|\sum_{k=1}^3 u_{kn}A_{kn}(\alpha)^{-1}g_{n1;k}(\theta)\Big|^{2+\delta}\Big] \lesssim n\sum_{k=1}^3\{A_{kn}(\alpha)\}^{-(2+\delta)}E^n_\theta[|g_{n1;k}(\theta)|^{2+\delta}].  (17)

At the same time, from the expressions (13) to (15) it is easy to see that

    E^n_\theta[|g_{n1;1}(\theta)|^{2+\delta}] \lesssim \{\log(1/h_n)\}^{2+\delta} + 1,
    E^n_\theta[|g_{n1;2}(\theta)|^{2+\delta}] \lesssim h_n^{(1-1/\alpha)(2+\delta)},
    E^n_\theta[|g_{n1;3}(\theta)|^{2+\delta}] \lesssim 1,

the finiteness being guaranteed by (11); combined with (17), this yields

    C_{2n}(\theta) \lesssim O\big(n^{-\delta/2}[\{\log(1/h_n)\}^{-(2+\delta)} + 1]\big) = o(1),

completing the proof of Lemma 4.2.
4.1.3 Proof of Lemma 4.3

For simplicity, let $\partial_k$ stand for partial differentiation with respect to the $k$th component of $\theta$. First, using the standard notation for multi-indices, we estimate $C_{3n}(\theta)$ as

    C_{3n}(\theta) \lesssim n\int\Big\{\sum_{|r|=2}\frac{1}{r!}(u_n^\top A_n(\alpha)^{-1})^r\int_0^1(1-v)\,\partial_\theta^r\sqrt{\psi_n(y;\theta+v(\theta_n-\theta))}\,dv\Big\}^2 dy
                   \lesssim n\sum_{k,l=1}^3\iint_{\mathbb{R}\times[0,1]}\{A_{kn}(\alpha)A_{ln}(\alpha)\}^{-2}\big[\partial_k\partial_l\sqrt{\psi_n(y;\theta+v(\theta_n-\theta))}\big]^2\,dv\,dy.  (18)
Below we fix a small $\varepsilon > 0$ for which $B_3(\theta;\varepsilon) := \{\rho = (\rho_k)_{k=1}^3\in\Theta : |\theta-\rho| < \varepsilon\}\subset\Theta$. Note that for any $\rho = (\rho_k)_{k=1}^3$

    \big[\partial_l\partial_k\sqrt{\psi_n(y;\rho)}\big]^2
    = \Big[\frac{1}{2}\sqrt{\psi_n(y;\rho)}\big\{\partial_l\partial_k\log\psi_n(y;\rho) + (\partial_l\log\psi_n(y;\rho))(\partial_k\log\psi_n(y;\rho))\big\}
        - \frac{1}{4}\sqrt{\psi_n(y;\rho)}(\partial_k\log\psi_n(y;\rho))(\partial_l\log\psi_n(y;\rho))\Big]^2
    \lesssim \psi_n(y;\rho)\big\{(\partial_l\partial_k\log\psi_n(y;\rho))^2 + (\partial_l\log\psi_n(y;\rho))^2(\partial_k\log\psi_n(y;\rho))^2\big\}.  (19)
On the other hand, we have the following:

    \partial_\alpha^2\log\psi_n(y;\theta) = \frac{1}{\alpha^4}[\log(1/h_n)]^2\Big\{\frac{y\partial\phi_\alpha(y)}{\phi_\alpha(y)} + y^2\,\frac{\phi_\alpha(y)\partial^2\phi_\alpha(y) - (\partial\phi_\alpha(y))^2}{\phi_\alpha(y)^2}\Big\}
      + \frac{2}{\alpha^3}\log(1/h_n)\Big\{1 + \frac{y\partial\phi_\alpha(y)}{\phi_\alpha(y)} - \alpha y\,\frac{\phi_\alpha(y)\partial\partial_\alpha\phi_\alpha(y) - (\partial\phi_\alpha(y))(\partial_\alpha\phi_\alpha(y))}{\phi_\alpha(y)^2}\Big\}
      + \frac{\phi_\alpha(y)\partial_\alpha^2\phi_\alpha(y) - (\partial_\alpha\phi_\alpha(y))^2}{\phi_\alpha(y)^2},  (20)

    \partial_\gamma^2\log\psi_n(y;\theta) = \frac{1}{\sigma^2}h_n^{2(1-1/\alpha)}\,\frac{\phi_\alpha(y)\partial^2\phi_\alpha(y) - (\partial\phi_\alpha(y))^2}{\phi_\alpha(y)^2},  (21)

    \partial_\sigma^2\log\psi_n(y;\theta) = \frac{1}{\sigma^2}\Big\{1 + \frac{2y\partial\phi_\alpha(y)}{\phi_\alpha(y)} + y^2\,\frac{\phi_\alpha(y)\partial^2\phi_\alpha(y) - (\partial\phi_\alpha(y))^2}{\phi_\alpha(y)^2}\Big\},  (22)

    \partial_\gamma\partial_\alpha\log\psi_n(y;\theta) = \frac{1}{\sigma\alpha^2}h_n^{1-1/\alpha}\log(1/h_n)\Big\{\frac{\partial\phi_\alpha(y)}{\phi_\alpha(y)} + y\,\frac{\phi_\alpha(y)\partial^2\phi_\alpha(y) - (\partial\phi_\alpha(y))^2}{\phi_\alpha(y)^2}\Big\}
      - \frac{1}{\sigma}h_n^{1-1/\alpha}\,\frac{\phi_\alpha(y)\partial\partial_\alpha\phi_\alpha(y) - (\partial\phi_\alpha(y))(\partial_\alpha\phi_\alpha(y))}{\phi_\alpha(y)^2},  (23)

    \partial_\sigma\partial_\alpha\log\psi_n(y;\theta) = \frac{1}{\sigma\alpha^2}\log(1/h_n)\Big\{\frac{y\partial\phi_\alpha(y)}{\phi_\alpha(y)} + y^2\,\frac{\phi_\alpha(y)\partial^2\phi_\alpha(y) - (\partial\phi_\alpha(y))^2}{\phi_\alpha(y)^2}\Big\}
      - \sigma^{-1}y\Big\{\frac{\partial\partial_\alpha\phi_\alpha(y)}{\phi_\alpha(y)} - \frac{\partial\phi_\alpha(y)\,\partial_\alpha\phi_\alpha(y)}{\phi_\alpha(y)^2}\Big\},  (24)

    \partial_\gamma\partial_\sigma\log\psi_n(y;\theta) = \frac{1}{\sigma^2}h_n^{1-1/\alpha}\Big\{\frac{\partial\phi_\alpha(y)}{\phi_\alpha(y)} + y\,\frac{\phi_\alpha(y)\partial^2\phi_\alpha(y) - (\partial\phi_\alpha(y))^2}{\phi_\alpha(y)^2}\Big\}.  (25)
On account of (11) it is easy to see that the above quantities are $\phi_\alpha(y)\,dy$-integrable. Letting $n$ be so large that $\theta_n\in B_3(\theta;\varepsilon)$, and then piecing together the displays (18) to (25), we see that

    C_{3n}(\theta) \lesssim n\sum_{k,l=1}^3\int\Big[\sup_{\rho\in B_3(\theta;\varepsilon)}\{A_{kn}(\rho_1)A_{ln}(\rho_1)\}^{-2}\psi_n(y;\rho)\big\{(\partial_l\partial_k\log\psi_n(y;\rho))^2 + (\partial_l\log\psi_n(y;\rho))^2(\partial_k\log\psi_n(y;\rho))^2\big\}\Big]dy
                   \lesssim O(n^{-1}) = o(1),

as desired.
4.2 Proof of Theorem 3.4 for Case A

In both cases (a) and (b), the asymptotic efficiency follows directly from Theorem 3.1 and Hajek's minimax theorem [6]. We shall only prove (a), since the proof of (b) is similar.

Denote by $I_{1n}(\theta')$ the observed information matrix associated with $\theta'$:

    I_{1n}(\theta') = [I^{kl}_{1n}(\theta')]_{k,l=1}^2 := \begin{pmatrix} -\partial_\alpha^2\ell_n(\theta') & -\partial_\gamma\partial_\alpha\ell_n(\theta') \\ \text{sym.} & -\partial_\gamma^2\ell_n(\theta') \end{pmatrix}.
For $\theta'_1, \theta'_2\in\Theta'$, we also define

    I_{1n}(\theta'_1,\theta'_2) = [I^{kl}_{1n}(\theta'_1,\theta'_2)]_{k,l=1}^2 := \begin{pmatrix} -\partial_\alpha^2\ell_n(\theta'_1) & -\partial_\gamma\partial_\alpha\ell_n(\theta'_1) \\ -\partial_\gamma\partial_\alpha\ell_n(\theta'_2) & -\partial_\gamma^2\ell_n(\theta'_2) \end{pmatrix}.

Let $\Theta'_\alpha$ stand for the $\alpha$-coordinate space of $\Theta'$. We shall consistently denote by $\to_{a.s.}$ the $P^n_{\theta'}$-a.s. convergence. According to Sweeting [12, Theorems 1 and 2], the proof is achieved by verifying the following Lemmas 4.4 to 4.6.
Lemma 4.4. $|D_{1n}(\alpha)^{-1}I_{1n}(\theta')D_{1n}(\alpha)^{-1} - I_1(\theta')| \to^{a.s.}_u 0$.

Lemma 4.5. $\sup^*|D_{1n}(\alpha)^{-1}D_{1n}(\alpha') - I_2| \to_u 0$ for every $c > 0$, where $\sup^*$ is taken over the set $\{\alpha'\in\Theta'^-_\alpha : \sqrt{n}\log(1/h_n)|\alpha'-\alpha|\le c\}$.

Lemma 4.6. $\sup^{**}|D_{1n}(\alpha)^{-1}\{I_{1n}(\theta'_1,\theta'_2) - I_{1n}(\theta')\}D_{1n}(\alpha)^{-1}| \to^{a.s.}_u 0$ for every $c > 0$, where $\sup^{**}$ is taken over the set $\{\theta'_k\in\Theta'^-,\ k = 1, 2 : |D_{1n}(\alpha)(\theta'_k-\theta')|\le c\}$.

Remark 4.7. Concerning Lemmas 4.4 and 4.6, Sweeting's original conditions require only the weaker $\Rightarrow_u$ and $\to^p_u$ (convergence in $P^n_{\theta'}$-probability) rather than $\to^{a.s.}_u$; see C1 and C2(ii) of [12]. The primary reason why we prove the stronger uniform $P^n_{\theta'}$-a.s. convergence is that the derivation then becomes much easier in our framework. When one attempts to prove the uniform convergence in $P^n_{\theta'}$-probability, the techniques of Ibragimov and Has'minskii [7, Theorems I.7 and I.20] are often employed; however, the modulus-of-continuity condition on the random field (i.e., condition (1) of Theorem I.20 of [7]) does not seem to be fulfilled in our model.
4.2.1 Proof of Lemma 4.4

We prepare a simple version of the uniform strong law of large numbers.

Proposition 4.8. Let $U\subset\mathbb{R}^p$ be compact, and let $\{(\psi_n(u))_{u\in U}\}_{n\in\mathbb{N}}$ be a sequence of real-valued random fields defined on some probability space. Suppose that $u\mapsto\psi_n(u)$ is continuous a.s. for every $n\in\mathbb{N}$, and that $\psi_n(u)\to 0$ a.s. for every $u\in U$. Then $\sup_{u\in U}|\psi_n(u)|\to 0$ a.s.

Proof. The stated regularity conditions remain true with $\psi_n(u)$ replaced by $-\psi_n(u)$; hence it suffices to prove $\limsup_{n\to\infty}\sup_{u\in U}\psi_n(u)\le 0$ a.s.

Fix any $\varepsilon > 0$. Since $u\mapsto\psi_n(u)$ is uniformly continuous, there exists a constant $\delta(\varepsilon) > 0$ such that for every large $n\in\mathbb{N}$

    \sup_{u_1,u_2 : |u_1-u_2|<\delta(\varepsilon)}|\psi_n(u_1) - \psi_n(u_2)| < \varepsilon.  (26)

For this $\delta(\varepsilon)$ we can find a finite $\delta(\varepsilon)$-net $(v_j)_{j=1}^{M_\varepsilon}$ of $U$. Next fix any $u\in U$, and take $j(u)\le M_\varepsilon$ for which $u\in B_p(v_{j(u)};\delta(\varepsilon))$. Then we have $\psi_n(u)\le\psi_n(v_{j(u)}) + \varepsilon$ a.s. for every large $n\in\mathbb{N}$, so that on account of (26) we have

    \sup_{u\in U}\psi_n(u) \le \sup_{u\in U}\psi_n(v_{j(u)}) + \varepsilon \le \max_{j\le M_\varepsilon}\psi_n(v_j) + \varepsilon \quad\text{a.s.}  (27)

(the net $(v_j)_{j=1}^{M_\varepsilon}$ can be taken independently of $u$). Since $M_\varepsilon$ is finite, we get the claim on taking the limit in (27) together with the assumed $u$-pointwise convergence.

Now, recalling the expressions (20), (21), and (23), we can conclude the proof by applying Proposition 4.8 with $U$ and $\psi_n(u)$ replaced by $\Theta'^-$ and the components of

    G_n(\theta') := D_{1n}(\alpha)^{-1}I_{1n}(\theta')D_{1n}(\alpha)^{-1} - I_1(\theta'),

respectively. As a matter of fact, by using (11) and elementary integration by parts (to derive $\int y\partial^2\phi_\alpha(y)\,dy = 0$, and so on) we can prove that $G_n(\theta')\to_{a.s.} 0$ for every $\theta'\in\Theta'$. Building on the continuity of $\theta'\mapsto G_n(\theta')$, Proposition 4.8 ends the proof of Lemma 4.4.
4.2.2 Proof of Lemma 4.5

We have $|D_{1n}(\alpha)^{-1}D_{1n}(\alpha') - I_2| = |h_n^{1/\alpha-1/\alpha'} - 1|$. Observe that

    \sup{}^*\big|\log h_n^{1/\alpha-1/\alpha'}\big| \le \sup{}^*\Big|\frac{\alpha'-\alpha}{\alpha\alpha'}\Big|\log(1/h_n) \le \frac{c}{\alpha\sqrt{n}}\sup{}^*\frac{1}{\alpha'} \lesssim \frac{1}{\sqrt{n}} \to_u 0,

so that $\sup^* h_n^{1/\alpha-1/\alpha'}\to_u 1$. Hence the claim follows.
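The uniformity claimed in Lemma 4.5 can also be seen numerically. In the sketch below (ours; the choice $h_n = n^{-0.7}$ and the values of $\alpha$ and $c$ are illustrative assumptions), the worst-case factor $|h_n^{1/\alpha-1/\alpha'} - 1|$ over the $\sup^*$ set shrinks like $1/\sqrt{n}$.

```python
import numpy as np

alpha, c = 1.4, 5.0            # illustrative values (assumptions)
worst = []
for n in (10 ** 3, 10 ** 5, 10 ** 7):
    h = float(n) ** -0.7
    da = c / (np.sqrt(n) * np.log(1.0 / h))   # radius of the sup* neighbourhood of alpha
    worst.append(max(abs(h ** (1.0 / alpha - 1.0 / (alpha + s * da)) - 1.0)
                     for s in (-1.0, 1.0)))
print(worst)
```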
4.2.3 Proof of Lemma 4.6

First we note that

    \sup_{\theta'\in\Theta'}\sup{}^{**}\big|D_{1n}(\alpha)^{-1}\{I_{1n}(\theta'_1,\theta'_2) - I_{1n}(\theta')\}D_{1n}(\alpha)^{-1}\big|
    \le \sup_{\theta'\in\Theta'}\sup{}^{**}\big|n^{-1}\{\log(1/h_n)\}^{-2}\{I^{11}_{1n}(\theta') - I^{11}_{1n}(\theta'_1)\}\big|
      + \sup_{\theta'\in\Theta'}\sup{}^{**}\big|n^{-1}h_n^{-2(1-1/\alpha)}\{I^{22}_{1n}(\theta') - I^{22}_{1n}(\theta'_2)\}\big|
      + 2\sup_{\theta'\in\Theta'}\sup{}^{**}\big|n^{-1}\{\log(1/h_n)\}^{-1}h_n^{-(1-1/\alpha)}\{I^{12}_{1n}(\theta') - I^{12}_{1n}(\theta'_1)\}\big|
    = H_{1n} + H_{2n} + H_{3n}, \text{ say.}
For convenience we denote by $P$ the underlying probability measure (defined on the Skorokhod space), that is, the law of the parametric family of X associated with all admissible $\theta'\in\Theta'$. The proof is achieved by proving $H_{kn}\to 0$ $P$-a.s. for all $k$, and to this end we shall again utilize Proposition 4.8, partly combined with the fact $\sup^{**}h_n^{1/\alpha-1/\alpha_1}\to_u 1$, which was seen in Section 4.2.2. We shall only show $H_{1n}\to_{a.s.} 0$, since the others can be shown in a similar manner.
From Lemma 4.4 we have

    \frac{1}{n[\log(1/h_n)]^2}I^{11}_{1n}(\theta') = \frac{1}{n}\sum_{i=1}^n\frac{1}{\alpha^4}\frac{\{Y_{ni}\partial\phi_\alpha(Y_{ni})\}^2}{\phi_\alpha(Y_{ni})^2} + o(1) =: J_n(\theta') + o(1),

$P^n_{\theta'}$-a.s. uniformly in $\theta'\in\Theta'$, where we applied Proposition 4.8 to the $o(1)$ term. Hence, applying Taylor's formula around $\theta'$ to the summand of $J_n(\theta')$ and then taking the definition of $\sup^{**}$ into account, we see that, $P$-a.s.,

    H_{1n} \le \sup_{\theta'\in\Theta'}\sup{}^{**}|J_n(\theta') - J_n(\theta'_1)| + o(1)
           \lesssim \frac{1}{\sqrt{n}}\Big\{\sup_{\theta'\in\Theta'}\big|[\log(1/h_n)]^{-1}\partial_\alpha J_n(\theta')\big| + \sup_{\theta'\in\Theta'}\big|h_n^{-(1-1/\alpha)}\partial_\gamma J_n(\theta')\big|\cdot\sup_{\theta'\in\Theta'}\sup{}^*\big|h_n^{1/\alpha-1/\alpha_1}\big|\Big\} + o(1)
           \lesssim \frac{1}{\sqrt{n}}\Big\{\sup_{\theta'\in\Theta'}\big|[\log(1/h_n)]^{-1}\partial_\alpha J_n(\theta')\big| + \sup_{\theta'\in\Theta'}\big|h_n^{-(1-1/\alpha)}\partial_\gamma J_n(\theta')\big|\Big\} + o(1).  (28)
At the same time, partial differentiation yields

    [\log(1/h_n)]^{-1}\partial_\alpha J_n(\theta')
    = 2[\log(1/h_n)]^{-1}\frac{1}{n}\sum_{i=1}^n\Big\{\frac{Y_{ni}\partial\phi_\alpha(Y_{ni})}{\alpha^2\phi_\alpha(Y_{ni})}\Big\}^2\Big\{\frac{\partial_\alpha\partial\phi_\alpha(Y_{ni})}{\partial\phi_\alpha(Y_{ni})} - \frac{\partial_\alpha\phi_\alpha(Y_{ni})}{\phi_\alpha(Y_{ni})} - \frac{2}{\alpha}\Big\}
      - \frac{2}{\alpha^2}\frac{1}{n}\sum_{i=1}^n\Big\{\frac{Y_{ni}\partial\phi_\alpha(Y_{ni})}{\alpha^2\phi_\alpha(Y_{ni})}\Big\}^2\Big[1 + Y_{ni}\Big\{\frac{\partial^2\phi_\alpha(Y_{ni})}{\partial\phi_\alpha(Y_{ni})} - \frac{\partial\phi_\alpha(Y_{ni})}{\phi_\alpha(Y_{ni})}\Big\}\Big],

    h_n^{-(1-1/\alpha)}\partial_\gamma J_n(\theta')
    = -\frac{2}{\sigma}\frac{1}{n}\sum_{i=1}^n\Big\{\frac{Y_{ni}\partial\phi_\alpha(Y_{ni})}{\alpha^2\phi_\alpha(Y_{ni})}\Big\}^2\Big\{\frac{1}{Y_{ni}} + \frac{\partial^2\phi_\alpha(Y_{ni})}{\partial\phi_\alpha(Y_{ni})} - \frac{\partial\phi_\alpha(Y_{ni})}{\phi_\alpha(Y_{ni})}\Big\},

from which, on account of (11) and Proposition 4.8 once again, it follows that

    [\log(1/h_n)]^{-1}\partial_\alpha J_n(\theta') \to^{a.s.}_u \frac{2}{\alpha^6}\int\Big[\frac{y^3\{\partial\phi_\alpha(y)\}^3}{\{\phi_\alpha(y)\}^2} - \frac{y^2\{\partial\phi_\alpha(y)\}^2 + y^3\partial\phi_\alpha(y)\partial^2\phi_\alpha(y)}{\phi_\alpha(y)}\Big]dy,

    h_n^{-(1-1/\alpha)}\partial_\gamma J_n(\theta') \to^{a.s.}_u \frac{2}{\sigma\alpha^4}\int\Big[\frac{y^2\{\partial\phi_\alpha(y)\}^3}{\{\phi_\alpha(y)\}^2} - \frac{y\{\partial\phi_\alpha(y)\}^2 + y^2\partial\phi_\alpha(y)\partial^2\phi_\alpha(y)}{\phi_\alpha(y)}\Big]dy,

both limits being finite. Therefore we have shown that, $P$-a.s.,

    \sup_{\theta'\in\Theta'}\big|\{\log(1/h_n)\}^{-1}\partial_\alpha J_n(\theta')\big| = O(1), \qquad \sup_{\theta'\in\Theta'}\big|h_n^{-(1-1/\alpha)}\partial_\gamma J_n(\theta')\big| = O(1),

which, combined with (28), gives $H_{1n}\le O(n^{-1/2}) + o(1) = o(1)$ $P$-a.s., as desired.
4.3 Proof of Theorems 3.1 and 3.4 for Case B

The proof for Case B proceeds along the same lines as that for Case A, except for the following:

(i) the display (16) changes to $\int F_1(y)F_3(y)\phi_\alpha(y)\,dy = J_\alpha$;

(ii) the asymptotic behavior (11) remains valid only for $y\uparrow\infty$.

Actually, (i) does not matter; it just changes the expression of $I(\theta)$, and the positivity of $J_\alpha$ is clear from the definition. As for (ii), invoking the series expansion of Sato [11, Remark 14.18(vi)] together with the scaling property of $\phi_\alpha(y)$, we see that there exist constants $c_\alpha$, $c'_\alpha$, and $c''_{\alpha j}$ ($j\ge 1$) for which, given any $m\in\mathbb{N}$,

    \phi_\alpha(y) = c_\alpha\exp\{-c'_\alpha y^{-\alpha/(1-\alpha)}\}\,y^{-(2-\alpha)/(1-\alpha)}\Big\{1 + \sum_{j=1}^m c''_{\alpha j}y^{\alpha j/(1-\alpha)} + O\big(y^{\alpha(1+m)/(1-\alpha)}\big)\Big\}  (29)

as $y\downarrow 0$. Here the constants $c_\alpha$, $c'_\alpha$, and $c''_{\alpha j}$ depend smoothly on $\alpha$ over $(0,1)$. Due to the exponential factor $\exp\{-c'_\alpha y^{-\alpha/(1-\alpha)}\}$ appearing in (29), we see that $\partial^k\partial_\alpha^{k'}\phi_\alpha(y)$ decreases to 0 very fast as $y\downarrow 0$ for any $k, k'\in\mathbb{N}\cup\{0\}$, which enables us to verify, in particular, the finiteness of the Fisher information matrix $I(\theta)$, and indeed to follow the same line of argument as in Case A.
Acknowledgment

This work was partly supported by a Grant-in-Aid for Scientific Research from the Ministry of Education, Japan, and by the Cooperative Research Program of the Institute of Statistical Mathematics, Japan.
References

[1] Aït-Sahalia, Y. and Jacod, J. (2004), Fisher's information for discretely sampled Lévy processes. Preprint.
[2] Aït-Sahalia, Y. and Jacod, J. (2004), Volatility estimators for discretely sampled Lévy processes. Preprint.
[3] Gobet, E. (2001), Local asymptotic mixed normality property for elliptic diffusion: a Malliavin calculus approach. Bernoulli 7, 899-912.
[4] Gobet, E. (2002), LAN property for ergodic diffusions with discrete observations. Ann. Inst. H. Poincaré Probab. Statist. 38, 711-737.
[5] Greenwood, P. E. and Shiryayev, A. N. (1985), Contiguity and the Statistical Invariance Principle. Gordon and Breach Science Publishers, New York.
[6] Hájek, J. (1972), Local asymptotic minimax and admissibility in estimation. Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability (Univ. California, Berkeley, Calif., 1970/1971), Vol. I: Theory of Statistics, pp. 175-194. Univ. California Press, Berkeley, Calif.
[7] Ibragimov, I. A. and Has'minskii, R. Z. (1981), Statistical Estimation. Asymptotic Theory. Springer-Verlag, New York-Berlin.
[8] Kabanov, Yu., Liptser, R. S. and Shiryaev, A. N. (1980), Absolute continuity and singularity of locally absolutely continuous probability distributions. II. Math. USSR Sbornik 36, 31-58. (English translation)
[9] Kessler, M. (1997), Estimation of an ergodic diffusion from discrete observations. Scand. J. Statist. 24, 211-229.
[10] Nolan, J. P. (2001), Maximum likelihood estimation and diagnostics for stable distributions. In: Lévy Processes. Theory and Applications (Barndorff-Nielsen, O. E., Mikosch, T. and Resnick, S. I., eds.), 379-400, Birkhäuser Boston, Boston.
[11] Sato, K. (1999), Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press, Cambridge.
[12] Sweeting, T. J. (1980), Uniform asymptotic normality of the maximum likelihood estimator. Ann. Statist. 8, 1375-1381. [Corrections: (1982) Ann. Statist. 10, 320.]
[13] Woerner, J. H. C. (2001), Statistical Analysis for Discretely Observed Lévy Processes. PhD thesis, University of Freiburg.
[14] Yoshida, N. (1992), Estimation for diffusion processes from discrete observation. J. Multivariate Anal. 41, 220-242.
13
List of MHF Preprint Series, Kyushu University21st Century COE Program
Development of Dynamic Mathematics with High Functionality
MHF2003-1 Mitsuhiro T. NAKAO, Kouji HASHIMOTO & Yoshitaka WATANABE
  A numerical method to verify the invertibility of linear elliptic operators with applications to nonlinear problems

MHF2003-2 Masahisa TABATA & Daisuke TAGAMI
  Error estimates of finite element methods for nonstationary thermal convection problems with temperature-dependent coefficients

MHF2003-3 Tomohiro ANDO, Sadanori KONISHI & Seiya IMOTO
  Adaptive learning machines for nonlinear classification and Bayesian information criteria

MHF2003-4 Kazuhiro YOKOYAMA
  On systems of algebraic equations with parametric exponents

MHF2003-5 Masao ISHIKAWA & Masato WAKAYAMA
  Applications of Minor Summation Formulas III, Plücker relations, Lattice paths and Pfaffian identities

MHF2003-6 Atsushi SUZUKI & Masahisa TABATA
  Finite element matrices in congruent subdomains and their effective use for large-scale computations

MHF2003-7 Setsuo TANIGUCHI
  Stochastic oscillatory integrals - asymptotic and exact expressions for quadratic phase functions -

MHF2003-8 Shoki MIYAMOTO & Atsushi YOSHIKAWA
  Computable sequences in the Sobolev spaces

MHF2003-9 Toru FUJII & Takashi YANAGAWA
  Wavelet based estimate for non-linear and non-stationary auto-regressive model

MHF2003-10 Atsushi YOSHIKAWA
  Maple and wave-front tracking — an experiment

MHF2003-11 Masanobu KANEKO
  On the local factor of the zeta function of quadratic orders

MHF2003-12 Hidefumi KAWASAKI
  Conjugate-set game for a nonlinear programming problem
MHF2004-1 Koji YONEMOTO & Takashi YANAGAWA
  Estimating the Lyapunov exponent from chaotic time series with dynamic noise

MHF2004-2 Rui YAMAGUCHI, Eiko TSUCHIYA & Tomoyuki HIGUCHI
  State space modeling approach to decompose daily sales of a restaurant into time-dependent multi-factors

MHF2004-3 Kenji KAJIWARA, Tetsu MASUDA, Masatoshi NOUMI, Yasuhiro OHTA & Yasuhiko YAMADA
  Cubic pencils and Painlevé Hamiltonians

MHF2004-4 Atsushi KAWAGUCHI, Koji YONEMOTO & Takashi YANAGAWA
  Estimating the correlation dimension from a chaotic system with dynamic noise

MHF2004-5 Atsushi KAWAGUCHI, Kentarou KITAMURA, Koji YONEMOTO, Takashi YANAGAWA & Kiyofumi YUMOTO
  Detection of auroral breakups using the correlation dimension

MHF2004-6 Ryo IKOTA, Masayasu MIMURA & Tatsuyuki NAKAKI
  A methodology for numerical simulations to a singular limit

MHF2004-7 Ryo IKOTA & Eiji YANAGIDA
  Stability of stationary interfaces of binary-tree type

MHF2004-8 Yuko ARAKI, Sadanori KONISHI & Seiya IMOTO
  Functional discriminant analysis for gene expression data via radial basis expansion

MHF2004-9 Kenji KAJIWARA, Tetsu MASUDA, Masatoshi NOUMI, Yasuhiro OHTA & Yasuhiko YAMADA
  Hypergeometric solutions to the q-Painlevé equations

MHF2004-10 Raimundas VIDUNAS
  Expressions for values of the gamma function

MHF2004-11 Raimundas VIDUNAS
  Transformations of Gauss hypergeometric functions

MHF2004-12 Koji NAKAGAWA & Masakazu SUZUKI
  Mathematical knowledge browser

MHF2004-13 Ken-ichi MARUNO, Wen-Xiu MA & Masayuki OIKAWA
  Generalized Casorati determinant and Positon-Negaton-Type solutions of the Toda lattice equation

MHF2004-14 Nalini JOSHI, Kenji KAJIWARA & Marta MAZZOCCO
  Generating function associated with the determinant formula for the solutions of the Painlevé II equation

MHF2004-15 Kouji HASHIMOTO, Ryohei ABE, Mitsuhiro T. NAKAO & Yoshitaka WATANABE
  Numerical verification methods of solutions for nonlinear singularly perturbed problem

MHF2004-16 Ken-ichi MARUNO & Gino BIONDINI
  Resonance and web structure in discrete soliton systems: the two-dimensional Toda lattice and its fully discrete and ultra-discrete versions

MHF2004-17 Ryuei NISHII & Shinto EGUCHI
  Supervised image classification in Markov random field models with Jeffreys divergence

MHF2004-18 Kouji HASHIMOTO, Kenta KOBAYASHI & Mitsuhiro T. NAKAO
  Numerical verification methods of solutions for the free boundary problem

MHF2004-19 Hiroki MASUDA
  Ergodicity and exponential β-mixing bounds for a strong solution of Lévy-driven stochastic differential equations

MHF2004-20 Setsuo TANIGUCHI
  The Brownian sheet and the reflectionless potentials

MHF2004-21 Ryuei NISHII & Shinto EGUCHI
  Supervised image classification based on AdaBoost with contextual weak classifiers

MHF2004-22 Hideki KOSAKI
  On intersections of domains of unbounded positive operators

MHF2004-23 Masahisa TABATA & Shoichi FUJIMA
  Robustness of a characteristic finite element scheme of second order in time increment

MHF2004-24 Ken-ichi MARUNO, Adrian ANKIEWICZ & Nail AKHMEDIEV
  Dissipative solitons of the discrete complex cubic-quintic Ginzburg-Landau equation

MHF2004-25 Raimundas VIDUNAS
  Degenerate Gauss hypergeometric functions

MHF2004-26 Ryo IKOTA
  The boundedness of propagation speeds of disturbances for reaction-diffusion systems

MHF2004-27 Ryusuke KON
  Convex dominates concave: an exclusion principle in discrete-time Kolmogorov systems

MHF2004-28 Ryusuke KON
  Multiple attractors in host-parasitoid interactions: coexistence and extinction

MHF2004-29 Kentaro IHARA, Masanobu KANEKO & Don ZAGIER
  Derivation and double shuffle relations for multiple zeta values

MHF2004-30 Shuichi INOKUCHI & Yoshihiro MIZOGUCHI
  Generalized partitioned quantum cellular automata and quantization of classical CA
MHF2005-1 Hideki KOSAKI
  Matrix trace inequalities related to uncertainty principle

MHF2005-2 Masahisa TABATA
  Discrepancy between theory and real computation on the stability of some finite element schemes

MHF2005-3 Yuko ARAKI & Sadanori KONISHI
  Functional regression modeling via regularized basis expansions and model selection

MHF2005-4 Yuko ARAKI & Sadanori KONISHI
  Functional discriminant analysis via regularized basis expansions

MHF2005-5 Kenji KAJIWARA, Tetsu MASUDA, Masatoshi NOUMI, Yasuhiro OHTA & Yasuhiko YAMADA
  Point configurations, Cremona transformations and the elliptic difference Painlevé equations

MHF2005-6 Kenji KAJIWARA, Tetsu MASUDA, Masatoshi NOUMI, Yasuhiro OHTA & Yasuhiko YAMADA
  Construction of hypergeometric solutions to the q-Painlevé equations

MHF2005-7 Hiroki MASUDA
  Simple estimators for non-linear Markovian trend from sampled data: I. ergodic cases

MHF2005-8 Hiroki MASUDA & Nakahiro YOSHIDA
  Edgeworth expansion for a class of Ornstein-Uhlenbeck-based models

MHF2005-9 Masayuki UCHIDA
  Approximate martingale estimating functions under small perturbations of dynamical systems

MHF2005-10 Ryo MATSUZAKI & Masayuki UCHIDA
  One-step estimators for diffusion processes with small dispersion parameters from discrete observations

MHF2005-11 Junichi MATSUKUBO, Ryo MATSUZAKI & Masayuki UCHIDA
  Estimation for a discretely observed small diffusion process with a linear drift

MHF2005-12 Masayuki UCHIDA & Nakahiro YOSHIDA
  AIC for ergodic diffusion processes from discrete observations

MHF2005-13 Hiromichi GOTO & Kenji KAJIWARA
  Generating function related to the Okamoto polynomials for the Painlevé IV equation

MHF2005-14 Masato KIMURA & Shin-ichi NAGATA
  Precise asymptotic behaviour of the first eigenvalue of Sturm-Liouville problems with large drift

MHF2005-15 Daisuke TAGAMI & Masahisa TABATA
  Numerical computations of a melting glass convection in the furnace

MHF2005-16 Raimundas VIDUNAS
  Normalized Leonard pairs and Askey-Wilson relations

MHF2005-17 Raimundas VIDUNAS
  Askey-Wilson relations and Leonard pairs

MHF2005-18 Kenji KAJIWARA & Atsushi MUKAIHIRA
  Soliton solutions for the non-autonomous discrete-time Toda lattice equation

MHF2005-19 Yuu HARIYA
  Construction of Gibbs measures for 1-dimensional continuum fields

MHF2005-20 Yuu HARIYA
  Integration by parts formulae for the Wiener measure restricted to subsets in R^d

MHF2005-21 Yuu HARIYA
  A time-change approach to Kotani's extension of Yor's formula

MHF2005-22 Tadahisa FUNAKI, Yuu HARIYA & Marc YOR
  Wiener integrals for centered powers of Bessel processes, I

MHF2005-23 Masahisa TABATA & Satoshi KAIZU
  Finite element schemes for two-fluids flow problems

MHF2005-24 Ken-ichi MARUNO & Yasuhiro OHTA
  Determinant form of dark soliton solutions of the discrete nonlinear Schrödinger equation

MHF2005-25 Alexander V. KITAEV & Raimundas VIDUNAS
  Quadratic transformations of the sixth Painlevé equation

MHF2005-26 Toru FUJII & Sadanori KONISHI
  Nonlinear regression modeling via regularized wavelets and smoothing parameter selection

MHF2005-27 Shuichi INOKUCHI, Kazumasa HONDA, Hyen Yeal LEE, Tatsuro SATO, Yoshihiro MIZOGUCHI & Yasuo KAWAHARA
  On reversible cellular automata with finite cell array

MHF2005-28 Toru KOMATSU
  Cyclic cubic field with explicit Artin symbols

MHF2005-29 Mitsuhiro T. NAKAO, Kouji HASHIMOTO & Kaori NAGATOU
  A computational approach to constructive a priori and a posteriori error estimates for finite element approximations of bi-harmonic problems

MHF2005-30 Kaori NAGATOU, Kouji HASHIMOTO & Mitsuhiro T. NAKAO
  Numerical verification of stationary solutions for Navier-Stokes problems

MHF2005-31 Hidefumi KAWASAKI
  A duality theorem for a three-phase partition problem

MHF2005-32 Hidefumi KAWASAKI
  A duality theorem based on triangles separating three convex sets

MHF2005-33 Takeaki FUCHIKAMI & Hidefumi KAWASAKI
  An explicit formula of the Shapley value for a cooperative game induced from the conjugate point

MHF2005-34 Hideki MURAKAWA
  A regularization of a reaction-diffusion system approximation to the two-phase Stefan problem
MHF2006-1 Masahisa TABATA
  Numerical simulation of Rayleigh-Taylor problems by an energy-stable finite element scheme

MHF2006-2 Ken-ichi MARUNO & G R W QUISPEL
  Construction of integrals of higher-order mappings

MHF2006-3 Setsuo TANIGUCHI
  On the Jacobi field approach to stochastic oscillatory integrals with quadratic phase function

MHF2006-4 Kouji HASHIMOTO, Kaori NAGATOU & Mitsuhiro T. NAKAO
  A computational approach to constructive a priori error estimate for finite element approximations of bi-harmonic problems in nonconvex polygonal domains

MHF2006-5 Hidefumi KAWASAKI
  A duality theory based on triangular cylinders separating three convex sets in R^n

MHF2006-6 Raimundas VIDUNAS
  Uniform convergence of hypergeometric series

MHF2006-7 Yuji KODAMA & Ken-ichi MARUNO
  N-Soliton solutions to the DKP equation and Weyl group actions

MHF2006-8 Toru KOMATSU
  Potentially generic polynomial

MHF2006-9 Toru KOMATSU
  Generic sextic polynomial related to the subfield problem of a cubic polynomial

MHF2006-10 Shu TEZUKA & Anargyros PAPAGEORGIOU
  Exact cubature for a class of functions of maximum effective dimension

MHF2006-11 Shu TEZUKA
  On high-discrepancy sequences

MHF2006-12 Raimundas VIDUNAS
  Detecting persistent regimes in the North Atlantic Oscillation time series

MHF2006-13 Toru KOMATSU
  Tamely Eisenstein field with prime power discriminant

MHF2006-14 Nalini JOSHI, Kenji KAJIWARA & Marta MAZZOCCO
  Generating function associated with the Hankel determinant formula for the solutions of the Painlevé IV equation

MHF2006-15 Raimundas VIDUNAS
  Darboux evaluations of algebraic Gauss hypergeometric functions

MHF2006-16 Masato KIMURA & Isao WAKANO
  New mathematical approach to the energy release rate in crack extension

MHF2006-17 Toru KOMATSU
  Arithmetic of the splitting field of Alexander polynomial

MHF2006-18 Hiroki MASUDA
  Likelihood estimation of stable Lévy processes from discrete data