Generalized Local Regularization for
Ill-Posed Problems
Patricia K. Lamm
Department of Mathematics, Michigan State University
AIP2009, July 22, 2009
Based on joint work with Cara Brooks,
Zhewei Dai, and Xiaoyue Luo
AIP2009 July 22, 2009 P. K. Lamm Generalized Local Regularization for Ill-Posed Problems
Basic problem
Problem: Solve Au = f
for u ∈ dom(A) ⊂ X , where the “ideal” data f ∈ R(A) ⊂ X is only approximately given.
We assume
the operator A may be linear or nonlinear;
X is a Hilbert or reflexive Banach space;
Existence/uniqueness of a solution u for f ∈ R(A), which does not depend continuously on f ;
The data is only approximately given by
f δ ∈ X , with ‖f − f δ‖X < δ.
(Precise conditions on A to be given later.)
Examples of (related) regularization methods
• Equation for 0th order Tikhonov regularization:
Let α > 0 and find uδα ∈ dom(A) ⊂ X minimizing
‖Au − f δ‖2X + α‖u‖2X .
If A is bounded linear, the regularized solution uδα satisfies

A*Au + αu = A*f δ. (1)
Features (for the generic linear case):

Pros:
• Solution of (1) easily handled for f δ ∈ X
• “All-purpose” regularizer
• Easy to implement

Cons:
• Stability of (A*A)−1 vs A−1
• A nonlinear =⇒ modify or discard (1)
• Ignores special structure of A
• Can be oversmoothing
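As a concrete sketch (ours, not part of the talk): once A is discretized, equation (1) is a dense linear solve. Below, A is a quadrature discretization of the simple Volterra operator Au(t) = ∫_0^t u(s) ds (kernel k = 1), so the exact solution u(t) = 1 has data f(t) = t. The grid size, noise level, and α are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of 0th-order Tikhonov for a discretized operator.
# A discretizes Au(t) = \int_0^t u(s) ds (kernel k = 1), so the exact
# solution u(t) = 1 corresponds to data f(t) = t.  Grid size, noise
# level, and alpha are illustrative assumptions, not from the talk.
n = 100
h = 1.0 / n
A = h * np.tril(np.ones((n, n)))            # lower-triangular quadrature matrix
u_true = np.ones(n)
f = A @ u_true                              # "ideal" data
f_delta = f + 1e-3 * np.random.default_rng(0).standard_normal(n)

alpha = 1e-3
# Regularized normal equations (1): (A^T A + alpha I) u = A^T f_delta
u_alpha = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ f_delta)
```

Note that one solves with AᵀA rather than A itself, which is exactly the “stability of (A*A)−1 vs A−1” trade-off listed above.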
Examples of (related) regularization methods
• Equation for Lavrent’ev (or simplified) regularization:
In contrast to the Tikhonov regularization equation for linear A,
A*Au + αu = A*f δ,
the method of Lavrent’ev finds uδα ∈ dom(A) satisfying
Au + αu = f δ.
Features:
Pros:
• Underlying operator is still A, so preserves the structure of the equation
• Method is the same for both linear and nonlinear problems
• Easy to implement

Cons:
• Solvability guaranteed only for certain A
• Numerical results not always satisfactory
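A matching sketch (again ours, purely illustrative) for Lavrent’ev: the same discretized Volterra operator as in the Tikhonov sketch, but now one solves (A + αI)u = f δ directly, with no adjoint. The discretized A here is lower triangular with positive diagonal, so A + αI is invertible; α and the noise level are our choices.

```python
import numpy as np

# Illustrative Lavrent'ev sketch: solve (A + alpha I) u = f_delta directly,
# keeping the underlying operator A (no normal equations).
# A discretizes Au(t) = \int_0^t u(s) ds; grid/alpha/noise are our choices.
n = 100
h = 1.0 / n
A = h * np.tril(np.ones((n, n)))
u_true = np.ones(n)
f_delta = A @ u_true + 1e-3 * np.random.default_rng(1).standard_normal(n)

alpha = 0.05
u_alpha = np.linalg.solve(A + alpha * np.eye(n), f_delta)
# For this simple kernel the continuous analogue gives
# u_alpha(t) = 1 - exp(-t/alpha): accurate away from t = 0, but the
# reconstruction relaxes toward the true value on a t-scale of about alpha.
```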
“Diagonal” quantities
Comparisons:
Both Tikhonov (linear case) and Lavrent’ev methods stabilize using the “diagonal quantity” αu.
But local regularization also involves a “diagonal quantity”:
• Tikhonov and Lavrent’ev regularization:
* αu is added to the underlying operator, which makes a small shift of the singular values of the operator away from zero.
• Local regularization:
* We first subtract a diagonal quantity from the operator . . . and use this to construct a diagonal ∼ αu to be added back in. =⇒ αu is more closely linked to A =⇒ can preserve structure.
* The subtraction effectively splits the underlying operator. (As we’ll see later, the splitting should be related to the sensitivity of the data to local changes in the solution.)
Local regularization
• The equation for local regularization:
For each α ∈ (0, ᾱ] and Xα ⊂ X , let uδα ∈ dom(Aα) ⊂ Xα satisfy
Aαu + aαu = f δα
for given
(nonlinear) operator Aα : dom(Aα) ⊂ Xα 7→ X ,
parameter (function/operator) aα(·), and for
f δα ∈ Xα defined via
f δα = Tα f δ,
for a family {Tα} of bounded linear operators, Tα : X 7→ Xα ,
‖Tα f ‖Xα ≤ ‖f ‖X , uniformly in f ∈ X .
[E.g., Tα = I , or Tα = a (normalized) mollification operator,
or a number of other choices.]
Compare Tikhonov, Lavrent’ev, and local regularization
Comments about the local regularization equation,
Aαu + aα u = Tα f δ
• Tikhonov and Lavrent’ev are special cases, with parameter-independent Aα and Xα = X for each:
Method Aα aα Tα
Tikhonov: A*A α (scalar) A*
Lavrent’ev: A α (scalar) I
• In local regularization, Tα plays the role of “consolidating”
a local piece of the function f δ:
Tikhonov: Tα = A* (not local at all)
Lavrent’ev: Tα = I (may be too local)
Operators/parameters for local regularization
Selection of Aα, aα, and Tα for local regularization method:
Motivation: Look at what’s needed to show stability and
convergence to u of uδα , the solution of the (generic) equation
Aαu + aαu = Tαf δ. (2)
To simplify the motivation:
* Take operators A, Aα bounded linear, α > 0.
* Take Xα = X , α > 0.
Will make comparisons with:
* stability/convergence results for Tikhonov/Lavrent’ev
(since these methods also take the form of equation (2) ).
Stability of the three methods
(1) Stability for methods based on Aαu + aαu = Tαfδ:
Want: solution uδα unique + depends continuously on data f δ.
=⇒ need (Aα + aαI ) invertible with bounded inverse.
• This is easy for Tikhonov regularization:
Aα = A*A is automatically nonnegative and self-adjoint.
• Not automatic for Lavrent’ev or local regularization:
Need added conditions on Aα.
E.g., assume Aα nonnegative self-adjoint.
Note difference:
? For Lavrent’ev =⇒ assumption is on Aα = A;
? For local reg. =⇒ assumption is on Aα (we construct).
Convergence for the three methods
So assume we already have stability, or that

‖(Aα + aαI )−1‖ = O( 1/c(α) ),

where c(α) → 0 as α → 0; here ‖ · ‖ denotes the operator norm.
(2) Convergence for methods based on Aαu + aαu = Tαfδ:
Must show:
For each f ∈ R(A) there is a choice of α = α(δ, f δ) for which
α(δ, f δ) → 0 as δ → 0,
and for which uδα(δ,f δ) → u as δ → 0,
for all f δ ∈ X satisfying ‖f − f δ‖X ≤ δ.
Convergence for the three methods, cont’d.
Verification of convergence of uδα to u, where
Aαuδα + aαuδα = Tαf δ, (3)
with Aα, aα, Tα given:
• First note u ∈ X uniquely satisfies
Au = f ,
=⇒ TαAu = Tαf ,
=⇒ Aαu + aαu = Tαf + (Aαu + aαu − TαAu) . (4)
• So subtract: equation (3)−(4) to obtain
(Aα + aαI )(uδα − u) = Tα dδ − (Aαu + aαu − TαAu),
where dδ ≡ (f δ − f ) ∈ X satisfies ‖dδ‖X ≤ δ.
Convergence for the three methods, cont’d.
We then have
(uδα − u) = (Aα + aαI )−1 Tαdδ + (Aα + aαI )−1 [ (TαAu − Aαu) − aαu ],

where

‖(Aα + aαI )−1 Tαdδ‖X = O( δ/c(α) ).
Goal: select α = α(δ, f δ) so that α(δ, f δ) → 0 as δ → 0, and so that both terms above in (uδα − u) go to zero as δ → 0. That is,

The term due to data error goes to zero, i.e.,

δ / c( α(δ, f δ) ) → 0 as δ → 0.
The term due to method error goes to zero:
(Aα + aαI )−1 (Dαu − aαu) → 0 as δ → 0.
Here we have defined Dα = TαA−Aα.
Convergence for the three methods, cont’d.
Note:
• There is generally little difficulty in making an a priori choice of α = α(δ, f δ) so that α(δ, f δ) → 0 and

δ / c( α(δ, f δ) ) → 0 as δ → 0.
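A trivial numeric illustration (our example, not from the talk): with c(α) = α, the a priori choice α(δ) = √δ meets both requirements simultaneously.

```python
# With c(alpha) = alpha, the choice alpha(delta) = sqrt(delta) gives
# alpha -> 0 and delta / c(alpha) = sqrt(delta) -> 0 as delta -> 0.
deltas = [1e-2, 1e-4, 1e-6, 1e-8]
alphas = [d ** 0.5 for d in deltas]
ratios = [d / a for d, a in zip(deltas, alphas)]   # equals sqrt(delta)
```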
• The problem then is to show that for a selection of α-values with α → 0, the operator (Aα + aαI ) is continuously invertible, and
(Aα + aαI )−1 (Dαu − aαu) → 0, as α → 0.
It is well-known that these last statements are true (for linear A) in the case of:
Tikhonov regularization, for A bounded;
Lavrent’ev regularization, for a narrower class of operators A (e.g. A nonnegative self-adjoint).
Local regularization: choice of Aα and aα
For local regularization, we’ll use the desired convergence condition,
(Aα + aαI )−1 (Dαu − aαu) → 0 as α → 0, (5)
to motivate the selection of Aα and aα. (Recall Dα ≡ TαA−Aα.)
• That is, u satisfies Au = f
=⇒ TαAu = Tαf ,
where the operator TαA may be split as follows:
TαA = Aα + (TαA−Aα)
=⇒ TαA = Aα + Dα.
• Using (5), this splitting of TαA needs to be such that (roughly)
Dαu ≈ aαu for α ≈ 0,
for some choice of aα , i.e., Dα ≈ “diagonal” when applied to u.
Local regularization: choice of Aα and aα
So in local regularization:
We first select Tα (selection criteria to be discussed later).
We then select Aα and aα (and define Dα ≡ TαA−Aα)
so that for α > 0 we have:
1 (Aα + aαI ) continuously invertible, and

2 the operator (Dα − aαI ) satisfies

‖(Aα + aαI )−1 (Dα − aαI ) u‖X → 0 as α → 0.
Roughly, we take the modified equation TαAu = Tαf , and split its leading operator via TαA = Aα + Dα, where
• Aα = the “bigger, nearly continuously invertible” part of TαA,
• Dα = the “smaller, diagonal-like” part of TαA.
Statement of convergence
The above statement of convergence, namely,

‖(Aα + aαI )−1 (Dα − aαI ) u‖X → 0 as α → 0, (6)

is obtained for Tikhonov & Lavrent’ev, without splitting TαA:

Tikhonov: TαA = Aα = A*A, Dα = 0, aα = α.
Lavrent’ev: TαA = Aα = A, Dα = 0, aα = α.

In these cases the convergence as α → 0 in (6) takes the form (respectively) of

‖(A*A + αI )−1 (−αu)‖X → 0,

‖(A + αI )−1 (−αu)‖X → 0.
In contrast: With local regularization we can obtain
a stronger statement of convergence (than (6)) for
many problems.
Local regularization: choice of Aα and aα
Stronger statement of convergence: A stronger result than

‖(Aα + aαI )−1 (Dαu − aαu)‖X → 0 as α → 0,

is obtained if the method satisfies

‖Dαu − aαu‖X = o( c(α) ) as α → 0 (7)

(recall that ‖(Aα + aαI )−1‖ = O( 1/c(α) ) ).

• The stronger result (7) cannot hold for Tikhonov or Lavrent’ev (in general). For these methods the usual case is

‖(Aα + αI )−1‖ = O( 1/α ) as α → 0,

i.e., c(α) = α, but ‖Dαu − aαu‖X = ‖−αu‖X = O(α)

=⇒ ‖Dαu − aαu‖X ≠ o( c(α) ) as α → 0.
Local regularization: the selection of Tα
How to select Tα in the method of local regularization ?
• We will select Tα in order to:
Facilitate a regularization method which is “localized” in an appropriate sense.
Work toward the goal of finding a method which is easier/faster than classical regularization methods.
Effectively utilize information about the sensitivity of the data f to (local) changes in u (for f , u satisfying Au = f ).
We will demonstrate the selection of Tα, as well as of Aα and aα, through some examples.
Examples of local regularization
Three motivating examples . . . from the setting where local regularization theory is most complete, i.e., for Volterra operators.
1 Linear problem on X = Lp(0, 1), 1 < p ≤ ∞:

∫_0^t k(t, s)u(s) ds = f (t), a.e. t ∈ [0, 1].

2 Nonlinear Hammerstein problem on X = C [0, 1]:

∫_0^t k(t, s)g(s, u(s)) ds = f (t), a.e. t ∈ [0, 1],

under (standard) conditions on g .

3 Nonlinear autoconvolution problem, X = L2(0, 1) or C [0, 1]:

∫_0^t u(t − s)u(s) ds = f (t), a.e. t ∈ [0, 1].
Linear Volterra example
• First consider the linear Volterra problem

Au(t) = ∫_0^t k(t, s)u(s) ds = f (t), a.e. t ∈ [0, 1],

in the special case of X = L2(0, 1), and for k ∈ L2([0, 1]×[0, 1]).

• To determine an appropriate choice for Tα, we first look at sensitivities of the data f = Au to local parts of u:

A is bounded linear on X =⇒ the Fréchet differential of A at u ∈ X with increment ∆u ∈ X is given by

DA(u; ∆u) = A∆u,

i.e.,

DA(u; ∆u)(t) = ∫_0^t k(t, s)∆u(s) ds, a.e. t ∈ [0, 1].
Sensitivities of data for linear Volterra problem
Graphs of: (1) ∆u = “unit impulse” of width .1 (1st & 3rd rows of graphs), and (2) the response DA(u; ∆u) (2nd & 4th rows), immediately below each ∆u.
(Here the kernel for A is k(t, s) = 1.)
Sensitivities of data for linear Volterra problem
Combined graphs of DA(u; ∆u) for each “unit impulse” ∆u of width .1. (As before, the kernel for A is k(t, s) = 1.)
Sensitivities of data for linear Volterra problem
Combined graphs of DA(u; ∆u) for each “unit impulse” ∆u of width .1. (Here the kernel for A is k(t, s) = t − s.)
Sensitivities of data for linear Volterra problem
Combined graphs of DA(u; ∆u) for each “unit impulse” ∆u of width .1. (Here the kernel for A is k(t, s) = (t − s)^2.)
Sensitivities of data for linear Volterra problem
Combined graphs of DA(u; ∆u) for each “unit impulse” ∆u of width .1. (Here the kernel for A is k(t, s) = (t − s)^3.)
Selection of Tα for linear Volterra problem
=⇒ for the linear Volterra problem, the solution u at t most influences the data f on the interval [t, t + α], for some α > 0 small.
The length α of the interval is the regularization parameter.
(Alternatively, use two parameters – one for the sensitivity region and another for regularization. E.g., see [PKL & Dai, 2005].)
• One possible choice of Tα:

Tαf (t) = (1/α) ∫_0^α f (t + ρ) dρ, a.e. t ∈ [0, 1 − α]

(an averaging of f over [t, t + α]).

• More generally,

Tαf (t) = ∫_0^α f (t + ρ) dηα(ρ) / ∫_0^α dηα(ρ), a.e. t ∈ [0, 1 − α],

for a suitable measure ηα on [0, α].
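On a uniform grid the averaging choice of Tα is just a sliding forward mean; the sketch below (our discretization, with illustrative grid sizes) also checks the norm bound ‖Tαf ‖ ≤ ‖f ‖ required of the family {Tα}.

```python
import numpy as np

# Sliding-mean discretization (ours) of
#   T_alpha f(t) = (1/alpha) \int_0^alpha f(t + rho) d rho,
# defined for t in [0, 1 - alpha]; r grid cells span the local interval.
n, r = 100, 10
f = np.sin(3.0 * np.linspace(0.0, 1.0, n))
Tf = np.array([f[i:i + r].mean() for i in range(n - r + 1)])

# T_alpha is an average, hence non-expansive: max |T_alpha f| <= max |f|
assert np.max(np.abs(Tf)) <= np.max(np.abs(f))
```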
Selection of Aα, aα for linear Volterra problem.
For example, we will apply the simple averaging Tα to the original linear Volterra equation. Assume α ∈ (0, ᾱ], for ᾱ > 0 small:
• Start with the original equation Au = f , or

∫_0^t k(t, s)u(s) ds = f (t), a.e. t ∈ [0, 1].

• Next compute TαAu = Tαf : for a.e. t ∈ [0, 1 − α],

(1/α) ∫_0^α ∫_0^{t+ρ} k(t + ρ, s)u(s) ds dρ = (1/α) ∫_0^α f (t + ρ) dρ.

• Now split: TαA = Aα + Dα, where Dα is nearly “diagonal”:

TαAu(t) = (1/α) ∫_0^α ∫_0^t k(t + ρ, s)u(s) ds dρ
        + (1/α) ∫_0^α ∫_t^{t+ρ} k(t + ρ, s)u(s) ds dρ.
Selection of Aα, aα for linear Volterra problem.
So the splitting of TαA is thus given by

TαA = Aα + Dα,

where, for a.e. t ∈ [0, 1 − α],

Aαu(t) = ∫_0^t kα(t, s)u(s) ds,

Dαu(t) = (1/α) ∫_0^α ∫_t^{t+ρ} k(t + ρ, s)u(s) ds dρ,

and kα(t, s) ≡ (1/α) ∫_0^α k(t + ρ, s) dρ.

=⇒ Aα is of the same form as A.

• Finally: select aα = aα(t) so that Dαu ≈ aαu for α small.
Selection of aα for linear Volterra problem.
• Need aα = aα(t) so that, for a.e. t ∈ [0, 1 − α],

Dαu(t) = (1/α) ∫_0^α ∫_t^{t+ρ} k(t + ρ, s)u(s) ds dρ ≈ aα(t)u(t),

when u is the exact solution.

• If we approximate u on the local interval [t, t + α] by the constant u(t), then we have (formally)

Dαu(t) ≈ ( (1/α) ∫_0^α ∫_t^{t+ρ} k(t + ρ, s) ds dρ ) u(t) = aα(t)u(t),

for

aα(t) = (1/α) ∫_0^α ∫_t^{t+ρ} k(t + ρ, s) ds dρ, a.e. t ∈ [0, 1 − α].
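For the simplest kernel k(t, s) = 1 this double integral collapses: the inner integral equals ρ, so aα(t) = (1/α) ∫_0^α ρ dρ = α/2 for every t. A quick numerical check (our sketch):

```python
import numpy as np

# Check a_alpha(t) = (1/alpha) \int_0^alpha \int_t^{t+rho} k(t+rho, s) ds drho
# for k(t, s) = 1: the inner integral equals rho, so a_alpha(t) = alpha/2.
alpha = 0.1
rho = np.linspace(0.0, alpha, 2001)
a_alpha = np.trapz(rho, rho) / alpha     # (1/alpha) * (alpha**2 / 2)
```

For a ν-smoothing kernel behaving like (t − s)^(ν−1), the same computation gives aα ∼ const · α^ν, which (we believe, hedging slightly) is one natural source of a c(α) that vanishes as α → 0.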
Local regularization for the linear Volterra problem
To summarize: The particular construction we have chosen leads to a local regularization equation for the linear Volterra problem given, for noisy data f δ, by

Aαu + aαu = f δα on Xα = L2(0, 1 − α),

where for a.e. t ∈ [0, 1 − α],

f δα(t) = Tαf δ(t) = (1/α) ∫_0^α f δ(t + ρ) dρ,

Aαu(t) = ∫_0^t kα(t, s)u(s) ds = ∫_0^t ( (1/α) ∫_0^α k(t + ρ, s) dρ ) u(s) ds,

aα(t) = (1/α) ∫_0^α ∫_t^{t+ρ} k(t + ρ, s) ds dρ.
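To make this concrete, here is a small self-contained numerical sketch (all discretization choices are ours) for the kernel k(t, s) = 1, where kα ≡ 1 and aα ≡ α/2, so the regularized equation becomes the well-posed second-kind Volterra equation ∫_0^t u(s) ds + (α/2) u(t) = Tαf δ(t) on [0, 1 − α], solvable by a single forward sweep:

```python
import numpy as np

# Local regularization for k(t,s) = 1: k_alpha = 1 and a_alpha = alpha/2, so
#   \int_0^t u(s) ds + (alpha/2) u(t) = (T_alpha f^delta)(t),  t in [0, 1-alpha].
# Grid, r, noise level, and seed below are illustrative assumptions.
n = 200
h = 1.0 / n
t = (np.arange(n) + 0.5) * h              # midpoint grid
u_true = 1.0 + t                          # trial exact solution
f = t + t**2 / 2                          # exact data: \int_0^t (1 + s) ds
f_delta = f + 1e-3 * np.random.default_rng(2).standard_normal(n)

r = 10                                    # local interval spans r grid cells
alpha = r * h
m = n - r                                 # solve only on [0, 1 - alpha] (= X_alpha)
Tf = np.array([f_delta[i:i + r].mean() for i in range(m)])   # T_alpha f^delta

u = np.zeros(m)
cum = 0.0                                 # running quadrature of \int_0^{t_i} u
for i in range(m):
    # second-kind structure: cum + (alpha/2) u_i = Tf_i, solved sequentially
    u[i] = (Tf[i] - cum) / (alpha / 2.0)
    cum += h * u[i]
```

Note the solve is one O(n) forward sweep beyond forming Tαf δ; this sequential structure is what makes the local regularization equation cheap compared with a dense Tikhonov normal-equations solve.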
Local regularization for the linear Volterra problem
Result: For a large class of kernels k and for α sufficiently small,

1 The quantity aα,

aα(t) = (1/α) ∫_0^α ∫_t^{t+ρ} k(t + ρ, s) ds dρ,

satisfies

0 < c1 c(α) ≤ aα(t) ≤ c2 c(α),

a.e. t ∈ [0, 1 − α], for some c(α) > 0 with the property that c(α) → 0 as α → 0; and,

2 (Aα + aαI ) is invertible with

‖(Aα + aαI )−1‖ = O( 1/c(α) ) as α → 0.
Stronger convergence result
Question: So why does the local regularization method satisfy the stronger convergence result?

Recall that in this case we need to verify

‖Dαu − aαu‖Xα = o( c(α) ) as α → 0.

In contrast to Tikhonov and Lavrent’ev, where Xα = X and

‖Dαu − αu‖Xα = ‖0 − αu‖Xα ≠ o(α),

the method of local regularization gives

Dαu(t) − aα(t)u(t)
  = (1/α) ∫_0^α ∫_t^{t+ρ} k(t + ρ, s)u(s) ds dρ − ( (1/α) ∫_0^α ∫_t^{t+ρ} k(t + ρ, s) ds dρ ) u(t).
Stronger convergence result, cont’d
So for local regularization we have

Dαu(t) − aα(t)u(t) = (1/α) ∫_0^α ∫_t^{t+ρ} k(t + ρ, s) (u(s) − u(t)) ds dρ,

a.e. t ∈ [0, 1 − α), and thus as α → 0,

‖Dαu − aαu‖Xα = O( ‖aα( · )‖L∞ ) · ‖Sαu − u‖Xα
             = O( c(α) ) · ‖Sαu − u‖Xα ,

where

Sαu(t) = (1/α) ∫_0^α u(t + s) ds.

Note that Sαu(t) is a special case of the Hardy–Littlewood maximal function for u at t.
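A quick numerical illustration (our grid and test function, not from the talk) of the key fact used next, that ‖Sαu − u‖ → 0 as α → 0:

```python
import numpy as np

# S_alpha u(t) = (1/alpha) \int_0^alpha u(t + s) ds, discretized as a
# forward sliding mean over r grid cells; u(t) = t**2 is a test function.
tt = np.linspace(0.0, 1.0, 10001)
u = tt**2

def S(u, r):
    # sliding forward average, defined where the window fits in [0, 1]
    return np.array([u[i:i + r].mean() for i in range(len(u) - r)])

# max error of S_alpha u vs u for successively smaller alpha (= r * h)
errs = [np.max(np.abs(S(u, r) - u[:len(u) - r])) for r in (400, 200, 100)]
```

For this smooth u the error shrinks roughly linearly in α; the Lebesgue differentiation theorem cited below is what gives convergence for general u ∈ X.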
Stronger convergence result, cont’d
So we have

‖Dαu − aαu‖Xα = O( c(α) ) · ‖Sαu − u‖Xα ,

and it follows from the Lebesgue differentiation theorem that

‖Sαu − u‖Xα → 0 as α → 0,

for any u ∈ X . Thus

‖Dαu − aαu‖Xα = o( c(α) ) as α → 0,

and the stronger convergence result is obtained.

For smoother u, this same estimate also yields a rate of convergence.
General results for linear Volterra problems
More general results for the linear Volterra problem
([PKL] for earlier work; [Brooks thesis, 2007] and [Brooks & PKL, 2009] for Lp results and a discrepancy principle):

f ∈ R(A) ⊂ X = Lp[0, 1], f δ ∈ Lp[0, 1], 1 < p ≤ ∞.

For ν-smoothing problems, with a large class of measures (depending on ν) used to define suitable operators Tα.

Convergence as δ → 0 of uδα to u in Xα = Lp[0, 1 − α] for a priori choices of α = α(δ, f δ).

Optimal rates of convergence for the ν-smoothing problem.

A discrepancy principle of the form

d(α) = τδ,

where

d(α) ≡ ( α^m ‖f δ‖X / ‖Tαf δ‖Xα ) ‖Aαuδα − Tαf δ‖Xα .
Nonlinear Volterra problems
Nonlinear Hammerstein problem:
Start with the usual local regularization equation
Aαu + aαu = Tαf δ,
except now Aα and the term aαu are both nonlinear in u.
• How to handle the nonlinearity? Use the local regularization interval to facilitate linearization of the nonlinear problem [Luo thesis ’07; Brooks, PKL, Luo ’09]:
i.e., for t > 0, let τ = min{t, α}, and linearize aα = aα(t, u(t)) about the prior u-value u(t − τ).

• Numerical implementation of the linearized problem:

u = (Tαf δ − Aαu)/aα(·), t > 0

=⇒ solve a linear problem for u(t) sequentially, since Aαu and (the linearized) aα only involve values of u prior to t.
Nonlinear Volterra problems, cont’d
Nonlinear autoconvolution problem:
For the autoconvolution problem (as with Hammerstein), the usual local regularization equation
involves nonlinearities in both Aα and the term aα u.
• First: Solve (8) for u(t), t ∈ [0, α] (α > 0 small):
* Nonlinear equation (8) not too difficult to solve on [0, α];
* Need only a reasonable approximation to u on [0, α] for convergence
• Then: Solve (8) for u(t), t ∈ (α, 1] using linear methods:
* The (approximate) solution u on (α, 1] can be found sequentially
from (8) using linear techniques.
• Thus, linearization of (8) is not required in practice for the autoconvolution problem (but it is used as a tool in the convergence theory).
[Dai thesis, 2006; Dai & PKL, 2008; Brooks, Dai, & PKL, 2009]
Nonlinear Volterra problems, cont’d
• Numerical results for nonlinear Volterra problems:
In general,
Qualitatively equivalent to the Tikhonov solution, except not oversmoothed; considerably better than the Lavrent’ev solution.
Many times faster than a nonlinear Tikhonov (iterative) approach in practice.
(See above references.)
• Other non-Volterra work:
2-D image deblurring. Older work: [Cui thesis], [Cui, Lamm, Scofield, ’07]. Newer results (to appear) are based on the generalized local regularization approach =⇒ less computationally intensive.
General n-dimensional problems governed by a nonlinear monotone operator (to appear; from the generalized theory).
Sample numerical results
Sample numerical results, cont’d