Optimal data-independent point locations for RBF interpolation
S. De Marchi, R. Schaback and H. Wendland
Università di Verona (Italy), Universität Göttingen (Germany)
Metodi di Approssimazione: lecture of 11 May 2004
Preliminaries
$X = \{x_1, \dots, x_N\} \subset \Omega \subset \mathbb{R}^d$, distinct, data sites.
$f_1, \dots, f_N \in \mathbb{R}$, data values to be interpolated.
RBF interpolation (easiest): fix a symmetric PD kernel $\Phi : \Omega \times \Omega \to \mathbb{R}$ and form
$$s_{f,X} = \sum_{j=1}^{N} \alpha_j\, \Phi(\cdot, x_j), \qquad (1)$$
with the coefficients $\alpha_j$ determined by the interpolation conditions $s_{f,X}(x_k) = f_k$, $k = 1, \dots, N$.
$A_{\Phi,X} := \big(\Phi(x_j, x_k)\big)_{1 \le j,k \le N}$: the interpolation matrix, invertible.
If $A_{\Phi,X}$ is even positive definite for all $X$, then $\Phi$ is called a positive definite (PD) kernel. It is often radial, $\Phi(x, y) = \varphi(\|x - y\|_2)$, and therefore defined on $\mathbb{R}^d \times \mathbb{R}^d$.
Because every CPD kernel has an associated normalized PD kernel, we confine ourselves to the PD case.
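To make the scheme concrete, here is a minimal numerical sketch of (1), not from the original slides, assuming the Gaussian kernel and randomly chosen data sites purely for illustration (all function names are ours):

    import numpy as np

    def gaussian_kernel(X, Y, scale=1.0):
        # Phi(x, y) = exp(-||x - y||_2^2 / scale^2): a radial, positive definite kernel.
        d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / scale ** 2)

    def rbf_interpolant(X, f, kernel):
        # The interpolation conditions s_{f,X}(x_k) = f_k give the linear system
        # A_{Phi,X} alpha = f with the (positive definite) matrix A_{Phi,X}.
        alpha = np.linalg.solve(kernel(X, X), f)
        # s_{f,X}(x) = sum_j alpha_j Phi(x, x_j)
        return lambda Xeval: kernel(Xeval, X) @ alpha

    # Illustration: interpolate a smooth function at 50 random sites in [-1, 1]^2.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(50, 2))
    f = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])
    s = rbf_interpolant(X, f, gaussian_kernel)
    print(np.max(np.abs(s(X) - f)))   # ~ 0: the data are reproduced exactly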
Some useful notations
Take $V_X := \mathrm{span}\{\Phi(\cdot, x_j) : x_j \in X\}$. The interpolant $s_{f,X}$ can be written in terms of cardinal functions $u_j \in V_X$, $u_j(x_k) = \delta_{jk}$, i.e.
$$s_{f,X} = \sum_{j=1}^{N} f(x_j)\, u_j.$$
For the purpose of stability and error analysis the following quantities are important:
separation distance: $q_X := \tfrac{1}{2} \min_{j \ne k} \|x_j - x_k\|_2$
fill distance: $h_{X,\Omega} := \sup_{x \in \Omega} \min_{x_j \in X} \|x - x_j\|_2$
uniformity: $\rho_{X,\Omega} := q_X / h_{X,\Omega}$
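A small sketch (our own helper code, not from the slides) of how these quantities can be estimated numerically, using a fine grid Omega_grid that discretizes $\Omega$:

    import numpy as np

    def separation_distance(X):
        # q_X = (1/2) min_{j != k} ||x_j - x_k||_2
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        np.fill_diagonal(D, np.inf)
        return 0.5 * D.min()

    def fill_distance(X, Omega_grid):
        # h_{X,Omega} = sup_{x in Omega} min_j ||x - x_j||_2, with Omega replaced by a grid
        D = np.linalg.norm(Omega_grid[:, None, :] - X[None, :, :], axis=-1)
        return D.min(axis=1).max()

    def uniformity(X, Omega_grid):
        # rho_{X,Omega} = q_X / h_{X,Omega}
        return separation_distance(X) / fill_distance(X, Omega_grid)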
The problem
Are there any good or even optimal point sets for the interpolation problem?
Literature
1. BEYER, A. Optimale Centerverteilung bei Interpolation mit radialen Basisfunktionen. Diplomarbeit, Universität Göttingen, 1994.
2. BOS, L. P., AND MAIER, U. On the asymptotics of points which maximize determinants of the form $\det\big(g(|x_i - x_j|)\big)$. In Advances in Multivariate Approximation (Berlin, 1999), W. Haussmann, K. Jetter, and M. Reimer, Eds., vol. 107 of Math. Res., Wiley-VCH, pp. 1–22.
3. ISKE, A. Optimal distribution of centers for radial basis function methods. Tech. Rep. M0004, Technische Universität München, 2000.
Literature
1. BEYER, A. considered numerical aspects of the problem.
2. BOS, L. P., AND MAIER, U. investigated Fekete-type points for univariate RBFs for a broad class of functions $\varphi$, proving that equally spaced points give asymptotically largest determinants for the interpolation matrix $A_{\Phi,X}$.
3. ISKE, A. constructed and characterized admissible sets by varying the centers for stability and quality of approximation by RBFs, proving that uniformly distributed points give better results. He also provided a bound for the uniformity $\rho_{X,\Omega}$ in terms of the space dimension $d$.
Our approach
(I) Power function estimates.
(II) Geometric arguments.
Power function estimates
The kernel $\Phi$ defines on the space $V_\Omega := \mathrm{span}\{\Phi(\cdot, x) : x \in \Omega\}$ an inner product
$$\Big( \sum_{j} \alpha_j\, \Phi(\cdot, x_j),\ \sum_{k} \beta_k\, \Phi(\cdot, y_k) \Big)_\Phi := \sum_{j} \sum_{k} \alpha_j \beta_k\, \Phi(x_j, y_k),$$
so that $\Phi$ is a reproducing kernel of $V_\Omega$. Set $\mathcal{N}_\Phi(\Omega) := \overline{V_\Omega}$, the native Hilbert space. If $f \in \mathcal{N}_\Phi(\Omega)$, then
$$f(x) - s_{f,X}(x) = \Big( f,\ \Phi(\cdot, x) - \sum_{j=1}^{N} u_j(x)\, \Phi(\cdot, x_j) \Big)_\Phi,$$
and by the Cauchy–Schwarz inequality
$$|f(x) - s_{f,X}(x)| \le P_{\Phi,X}(x)\, \|f\|_\Phi, \qquad (2)$$
where $P_{\Phi,X}(x) := \big\| \Phi(\cdot, x) - \sum_{j} u_j(x)\, \Phi(\cdot, x_j) \big\|_\Phi$ is the power function.
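Numerically, the power function can be evaluated directly from the interpolation matrix: with $u(x) = A_{\Phi,X}^{-1}\,k(x)$ and $k(x) = (\Phi(x, x_j))_j$, one gets $P_{\Phi,X}(x)^2 = \Phi(x,x) - k(x)^\top A_{\Phi,X}^{-1} k(x)$. A sketch under that formula (helper names are ours, as above):

    import numpy as np

    def power_function(kernel, X, Xeval):
        # P_{Phi,X}(x)^2 = Phi(x, x) - k(x)^T A^{-1} k(x), with k(x) = (Phi(x, x_j))_j.
        A = kernel(X, X)                                  # interpolation matrix A_{Phi,X}
        K = kernel(Xeval, X)                              # row m holds k(x_m)^T
        diag = np.array([kernel(x[None, :], x[None, :])[0, 0] for x in Xeval])  # Phi(x, x)
        P2 = diag - np.sum(K * np.linalg.solve(A, K.T).T, axis=1)
        return np.sqrt(np.maximum(P2, 0.0))               # clip tiny negative round-off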
Some properties of the Power function
1. $P_{\Phi,X}(x)$ is the norm of the pointwise error functional;
2. error estimates bound $P_{\Phi,X}(x)$ in terms of the fill distance $h_{X,\Omega}$;
3. if $X \subseteq Y$, then $P_{\Phi,X}(x) \ge P_{\Phi,Y}(x)$ for all $x \in \Omega$.
If $\Phi$ is translation invariant, integrable and has a Fourier transform $\hat\Phi$ such that
$$c\, (1 + \|\omega\|_2^2)^{-\beta} \le \hat\Phi(\omega) \le C\, (1 + \|\omega\|_2^2)^{-\beta}$$
with $\beta > d/2$ and $0 < c \le C < \infty$, then $\mathcal{N}_\Phi(\mathbb{R}^d)$ is norm-equivalent to the Sobolev space $W_2^{\beta}(\mathbb{R}^d)$. Therefore
$$\|f - s_{f,X}\|_{L_\infty(\Omega)} \le C\, h_{X,\Omega}^{\beta - d/2}\, \|f\|_{W_2^{\beta}(\mathbb{R}^d)}. \qquad (3)$$
Main result
The hypotheses on $\Phi$ and $\Omega$ are as before.
Theorem 1. There exists a constant $c > 0$ with the following property: if $\varepsilon > 0$ and $X = \{x_1, \dots, x_N\} \subset \Omega$ are given such that
$$|f(x) - s_{f,X}(x)| \le \varepsilon\, \|f\|_\Phi \quad \text{for all } x \in \Omega \text{ and all } f \in \mathcal{N}_\Phi(\Omega), \qquad (4)$$
then the fill distance of $X$ satisfies
$$h_{X,\Omega} \le c\, \varepsilon^{1/(\beta - d/2)}. \qquad (5)$$
Comment: optimally distributed data sites are sets that cannot leave a large region of $\Omega$ without centers, i.e. $h_{X,\Omega}$ is sufficiently small.
Quasi-uniformity and fill-distance
The previous theorem fails in two situations:
(a) when $\beta = \infty$, so that we do not obtain $h_{X,\Omega}^{\beta - d/2} \le C \varepsilon$;
(b) when $\Phi$ is the Gaussian (cf. Paley–Wiener theory).
Now, assuming that $X$ is already quasi-uniform, i.e. $h_{X,\Omega} \approx q_X$, we can define
$$f_y := \Phi(\cdot, y) - \sum_{j=1}^{N} u_j(y)\, \Phi(\cdot, x_j) \quad \text{for every } y \in \Omega.$$
For this function we have
$$|f_y(y) - s_{f_y,X}(y)| = P_{\Phi,X}(y)\, \|f_y\|_\Phi,$$
i.e. there is equality in (2). Hence, the assumption on the approximation properties of the set $X$ gives $P_{\Phi,X}(y) \le \varepsilon$, and the desired results follow from lower bounds on the power function.
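The construction of $f_y$ is easy to check numerically: it is a finite kernel expansion with centers $\{y, x_1, \dots, x_N\}$, it vanishes on $X$ (so $s_{f_y,X} = 0$), and both sides of (2) equal $P_{\Phi,X}(y)^2$. A small illustrative sketch of ours, assuming a kernel routine as above:

    import numpy as np

    def equality_in_2(kernel, X, y):
        # f_y = Phi(., y) - sum_j u_j(y) Phi(., x_j): coefficients c = (1, -u(y)).
        A = kernel(X, X)
        k = kernel(y[None, :], X)[0]                     # vector (Phi(y, x_j))_j
        u = np.linalg.solve(A, k)                        # cardinal values u_j(y)
        centres = np.vstack([y[None, :], X])
        c = np.concatenate([[1.0], -u])
        gram = kernel(centres, centres)
        norm_fy = np.sqrt(max(c @ gram @ c, 0.0))        # native-space norm ||f_y||_Phi
        # f_y vanishes on X, hence s_{f_y,X} = 0 and the error at y is just f_y(y).
        fy_at_y = c @ kernel(centres, y[None, :])[:, 0]
        P_at_y = np.sqrt(max(kernel(y[None, :], y[None, :])[0, 0] - k @ u, 0.0))
        return abs(fy_at_y), P_at_y * norm_fy            # both sides of (2), equal up to round-off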
The Greedy Method (G.M.)
Idea: we generate larger and larger data sets by adding the maxima of the power function with respect to the preceding set. This method produces well-distributed point sets.
Greedy Algorithm (G.A.)
- starting step: $X_1 := \{x_1\}$, with $x_1 \in \Omega$ arbitrary.
- iteration step: $X_j := X_{j-1} \cup \{x_j\}$ with $P_{\Phi,X_{j-1}}(x_j) = \|P_{\Phi,X_{j-1}}\|_{L_\infty(\Omega)}$.
Convergence: we hope that $\|P_{\Phi,X_k}\|_{L_\infty(\Omega)} \to 0$ as $k \to \infty$, e.g. when $\Omega$ is convex and $\Phi \in C^2(\Omega \times \Omega)$.
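A minimal sketch of the G.A. on a discretized domain, reusing the power_function helper sketched earlier (the candidate grid Omega_grid stands in for $\Omega$, and the first point is chosen arbitrarily):

    import numpy as np

    def greedy_points(kernel, Omega_grid, n_points, first=0):
        selected = [first]                        # starting step: an arbitrary site x_1
        for _ in range(n_points - 1):
            P = power_function(kernel, Omega_grid[selected], Omega_grid)
            selected.append(int(np.argmax(P)))    # add the maximum of the power function
        return Omega_grid[selected]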
The greedy algorithm converges
Theorem 2. Suppose $\Omega$ is compact and satisfies an interior cone condition. Suppose further that $\Phi : \tilde\Omega \times \tilde\Omega \to \mathbb{R}$ is a positive definite kernel defined on a convex and compact region $\tilde\Omega \supseteq \Omega$. Then the greedy algorithm converges at least like
$$\|P_n\|_{L_\infty(\Omega)} \le C\, n^{-1/(2d)}$$
with a constant $C > 0$.
Remark: $P_n := P_{\Phi,X_n}$.
Geometric Greedy Method (G.G.M.)
Notice: practical experiments show that the greedy algorithm driven by the power function fills the currently largest hole in the data set, close to the center of the hole.
Geometric Greedy Algorithm (G.G.A.)
- starting step: $X_0 := \emptyset$, and define $\mathrm{dist}(x, \emptyset) := A$ with a constant $A \ge \mathrm{diam}(\Omega)$.
- iteration step: given $X_n \subset \Omega$ with $|X_n| = n$, pick $x_{n+1} \in \Omega$ such that $x_{n+1} = \arg\max_{x \in \Omega \setminus X_n} \mathrm{dist}(x, X_n)$, and form $X_{n+1} := X_n \cup \{x_{n+1}\}$.
Remark: the algorithm works very well, producing subsets $X$ of $\Omega$ with small fill distance $h_{X,\Omega}$ and large separation distance $q_X$.
Convergence of the G.G.A.
Define
$$q_n := \tfrac{1}{2} \min_{x \ne y \in X_n} \|x - y\|_2, \qquad d(x) := \min_{y \in X_n} \|x - y\|_2,$$
and
$$h_n := \max_{x \in \Omega} d(x) = \max_{x \in \Omega} \min_{y \in X_n} \|x - y\|_2 = d(x_{n+1}) = h_{X_n,\Omega}.$$
Lemma 1. The G.G.A. produces point sets which are quasi-uniform. To be more precise,
$$h_n \ge q_n \ge \tfrac{1}{2}\, h_{n-1} \ge \tfrac{1}{2}\, h_n \quad \text{for all } n \ge 2.$$
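As an illustrative check of Lemma 1, reusing the helpers sketched earlier (grid size and point counts are arbitrary choices of ours):

    import numpy as np

    t = np.linspace(-1.0, 1.0, 41)
    Omega_grid = np.array([(a, b) for a in t for b in t])   # a discretized square
    for n in (10, 20, 40, 80):
        Xn = geometric_greedy_points(Omega_grid, n)
        q, h = separation_distance(Xn), fill_distance(Xn, Omega_grid)
        print(n, q, h, q / h)   # the uniformity q/h should stay bounded away from 0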
Remarks
If $\Omega$ is a bounded region in $\mathbb{R}^d$, the G.G.A. constructs asymptotically uniformly distributed data sets that cover $\Omega$ in an asymptotically optimal way, since the balls $B(x_j, h_n)$ cover $\Omega$ while the balls $B(x_j, q_n)$ are disjoint. With $\Omega_{q_n} := \{ y \in \mathbb{R}^d : \mathrm{dist}(y, \Omega) \le q_n \}$ we find
$$n\, q_n^d\, \omega_d \le \mathrm{vol}(\Omega_{q_n}), \qquad \mathrm{vol}(\Omega) \le n\, h_n^d\, \omega_d, \qquad \omega_d := \mathrm{vol}\big(B_1(\mathbb{R}^d)\big),$$
showing that both $h_n$ and $q_n$ decay asymptotically like $n^{-1/d}$.
Examples
$\Omega = [-1, 1] \times [-1, 1]$, discretized on a regular grid of $5041 = 71 \times 71$ points.
The kernels are: the Gaussian (with scale 1) and Wendland's function (with scale 15).
Greedy method (G.M.): executed until $\|P_{\Phi,X}\|^2_{L_\infty(\Omega)} \le 2 \cdot 10^{-5}$.
Geometric greedy method (G.G.M.): the sets $X$ are computed by the G.G.A., while the error is evaluated on this point set.
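For reference, a sketch of a setup of this kind; the specific Wendland function is an assumption on our part (here the $C^2$ function $\varphi_{3,1}(r) = (1-r)_+^4(4r+1)$, rescaled), since the slides only state the scale:

    import numpy as np

    def wendland_kernel(X, Y, scale=15.0):
        # phi_{3,1}(r) = (1 - r)_+^4 (4 r + 1): compactly supported, C^2, PD for d <= 3.
        r = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1) / scale
        return np.clip(1.0 - r, 0.0, None) ** 4 * (4.0 * r + 1.0)

    # Omega = [-1, 1] x [-1, 1], discretized by a regular 71 x 71 grid.
    t = np.linspace(-1.0, 1.0, 71)
    Omega_grid = np.array([(a, b) for a in t for b in t])

    # Example (using the greedy_points helper sketched earlier): 100 greedy points
    # for Wendland's function, cf. Figure 4 below.
    X100 = greedy_points(wendland_kernel, Omega_grid, 100)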
G.M. and G.G.M. : Gaussian I
Figure 1: Gaussian: (left) the N = 48 optimal points; (right) the error as a function of N, decaying algebraically in N.
G.M. and G.G.M. : Gaussian II
Figure 2: Gaussian: (left) the N = 13 optimal points obtained with a larger stopping tolerance on $\|P_{\Phi,X}\|^2_{L_\infty(\Omega)}$; (right) the power function, showing where the maxima are taken.
G.M. and G.G.M. : Gaussian III
Figure 3: Gaussian: (left) the geometric greedy data $X_{48}$; (right) the error, which is larger by a factor of about 4 and decays algebraically in N.
G.M. and G.G.M.: Wendland’s function I
Figure 4: Wendland's function: (left) the N = 100 optimal points; (right) the error as a function of N, decaying algebraically in N.
G.M. and G.G.M.: Wendland’s function II
Figure 5: Wendland's function: (left) the geometric greedy data $X_{100}$; (right) the error, which is larger by a factor of about 1.4 and decays algebraically in N.
Distances
Figure 6: Gaussian: separation distances (left) and fill distances (right) of the geometric and greedy point sets, as functions of the number of points.
Distances
Figure 7: Inverse multiquadrics: separation distances (left) and fill distances (right) of the geometric and greedy point sets.
Distances
Figure 8: Wendland's function: separation distances (left) and fill distances (right) of the geometric and greedy point sets.
Final remarks
- The G.G.A. is independent of the kernel, and we proved that it generates asymptotically optimal sequences. It is still inferior to the G.A., which takes the maxima of the power function.
- So far, we have no proof of the fact that the G.A. generates a sequence with $h_{X_n,\Omega} \le C\, n^{-1/d}$, as required by asymptotic optimality.