
Linear Shepard Interpolation for High Dimensional Piecewise Smooth Functions

Vladimir B. Gantovnik∗, Zafer Gürdal†, and Layne T. Watson‡

Virginia Polytechnic Institute and State University

1 Introduction

Scattered data interpolation and approximation problems arise in a variety of applications including data mining, engineering, meteorology, computer graphics, and scientific visualization. The basic problem, referred to as the functional scattered data problem, is to find a surface that interpolates or approximates a finite set of points in an m-dimensional space R^m. If the scattered data obtained is noisy, approximation is desirable. If the data obtained is exact or fairly accurate, interpolation is desired. This paper concerns the functional scattered data interpolation problem in high dimensions.

Solutions to the scattered data interpolation problem are varied. Popular choices include subdivision methods, radial basis function (RBF) methods, Shepard's techniques, multivariate adaptive regression splines (MARS), and non-uniform rational B-splines (NURBS). In spite of the large number of techniques available for scattered data interpolation, there is a need for new techniques and a better understanding of existing algorithms. This situation obtains because there are serious gaps and shortcomings in many existing techniques. For example, NURBS is a very successful technique for gridded data, but it does not work on scattered data. Radial basis function methods cannot

∗Department of Engineering Science and Mechanics, Virginia Polytechnic Institute and State University, Blacksburg, VA, 24061

†Departments of Aerospace and Ocean Engineering, and Engineering Science and Mechanics, Virginia Polytechnic Institute and State University, Blacksburg, VA, 24061

‡Departments of Computer Science and Mathematics, Virginia Polytechnic Institute and State University, Blacksburg, VA, 24061


10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, 30 August – 1 September 2004, Albany, New York

AIAA 2004-4486

Copyright © 2004 by Vladimir Gantovnik. Published by the American Institute of Aeronautics and Astronautics, Inc., with permission.

effectively work with data sets consisting of more than 300 data points. The generalized version of Shepard's method, which constructs a global interpolant by blending local interpolants with locally supported weight functions, can produce good solutions. However, these solutions depend on a number of parameters, such as the support of the weight functions, the choice of weight functions, and the choice of the local interpolant.

Typically, the researcher makes an a priori choice regarding the type of solution. This choice may be guided by the application domain, the distribution of the data points, the type of approximating function, or the methodology prevalent in the discipline.

Although all of the techniques mentioned above are useful in diverse applications, the discussion here is restricted to multivariate adaptive regression splines and the quadratic and linear Shepard methods. These techniques are likely to be useful in practical engineering applications, do not assume any underlying grid, and can easily be extended to high dimensions.

The purpose of the present paper is to study the usefulness of the linear Shepard method for response surface approximation, and its performance in comparison with multivariate adaptive regression splines and the quadratic Shepard method for high dimensional scattered data interpolation of piecewise smooth functions.

2 Engineering Motivation

The enormous computational cost of complex high fidelity engineering simulations makes it impractical to use exact simulation codes for design analysis and optimization. To reduce computational cost, surrogate models are constructed from, and then used instead of, the actual simulation models. Response surface techniques have been widely used for design analysis and optimization in many engineering applications. Surrogate applications in structural optimization are surveyed by Barthelemy and Haftka [1], and in multidisciplinary design optimization by Sobieszczanski-Sobieski and Haftka [2].

A typical nonlinear programming problem can be formulated as

minimize f(x), (1)

where x = (x_1, . . . , x_m) ∈ E^m is subject to bound constraints

ℓ ≤ x ≤ u (2)

and general inequality constraints

g(x) ≤ 0, (3)

where g : E^m → E^p. Equality constraints

h(x) = 0 (4)

can be approximated by inequality constraints |h(x)| − δ ≤ 0, where δ is a small positive vector that indicates the degree of constraint violation.

The primary engineering motivation for this study is the nature of the response functions in constrained engineering design optimization problems. Many engineering responses, such as critical buckling load, are piecewise smooth functions. A constrained optimization problem is often transformed into an unconstrained one by augmenting the objective function of the original problem with penalty terms associated with individual constraint violations. This constraint handling technique is known as the penalty function method. Since the individual response functions g_k(x) in many engineering problems are mostly piecewise smooth functions of continuous variables (although they can be highly nonlinear), high quality interpolations to individual functions can be constructed without requiring a large number of function evaluations. In previous work [3] the authors successfully used the modified quadratic Shepard method for 2D interpolation of response functions. Attempts to apply this method in higher dimensions showed that the number of data points required to solve a scattered data interpolation problem is very large, and prohibitive for most engineering optimization problems, where m ≫ 3 and function evaluations are computationally expensive.
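To make the penalty transformation concrete, the following minimal sketch shows one common variant, a quadratic exterior penalty; the multiplier rho and the callables f and g are illustrative placeholders, not the specific penalty used in [3].

    import numpy as np

    def penalized_objective(f, g, x, rho=1.0e3):
        # Quadratic exterior penalty: the original objective plus
        # rho * sum_k max(0, g_k(x))**2, which vanishes whenever all
        # inequality constraints g(x) <= 0 are satisfied.
        # rho and the quadratic form are illustrative choices only.
        violations = np.maximum(0.0, g(x))
        return f(x) + rho * np.sum(violations**2)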

3 Description of the Methods

3.1 Multivariate Adaptive Regression Splines

The multivariate adaptive regression splines (MARS) technique [4] adaptively selects a set of basis functions for approximating the response function through a forward/backward iterative approach.

A MARS model for m variables x = (x_1, . . . , x_m) can be written as

f(x) = \sum_{j=1}^{n} a_j B_j(x),   (5)

where the basis functions B_j(x) are multivariate B-splines of the form

B_j(x) = \prod_{k=1}^{K_j} B_{i(k,j)}(x_{v(k,j)}),   (6)

where K_j is the number of factors (interaction order) in the j-th basis function, x_{v(k,j)} is the v-th variable, 1 ≤ v(k, j) ≤ m, and B_{i(k,j)}(x_v) is a one-dimensional B-spline whose order and knot sequence are adaptively determined.

The major advantages of MARS are accuracy and efficiency, in the sense that the number n of basis functions is kept as small as possible while still representing the data well. The MARS technique has become particularly popular in the area of data mining because it does not assume any particular type of relationship between the predictor variables and the dependent variable. Instead, MARS constructs this relation from a set of coefficients and basis functions that are entirely determined by the regression data. The algorithm partitions the input space into regions, each with its own regression equation.
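As a concrete illustration of Eqs. (5)–(6), the sketch below evaluates a MARS-style model as a weighted sum of products of one-dimensional basis functions. For brevity it uses truncated linear splines (hinge functions) as the one-dimensional factors; MARS itself chooses the splines and knots adaptively, so the basis and coefficients here are hypothetical.

    import numpy as np

    def hinge(x, s, t):
        # Truncated linear spline max(0, s*(x - t)); s is +1 or -1.
        return np.maximum(0.0, s * (x - t))

    def mars_eval(x, coeffs, basis):
        # Eq. (5): f(x) = sum_j a_j B_j(x), where each B_j is a product
        # of K_j one-dimensional factors (Eq. (6)).
        # basis[j] is a list of (variable index v, sign s, knot t) tuples.
        total = 0.0
        for a_j, factors in zip(coeffs, basis):
            B_j = 1.0
            for v, s, t in factors:
                B_j *= hinge(x[v], s, t)
            total += a_j * B_j
        return total

    # Hypothetical three-term model in two variables:
    coeffs = [1.0, 0.5, -2.0]
    basis = [[],                              # constant term, B_1 = 1
             [(0, +1, 0.3)],                  # max(0, x1 - 0.3)
             [(0, +1, 0.3), (1, -1, 0.7)]]    # two-factor interaction
    print(mars_eval(np.array([0.5, 0.4]), coeffs, basis))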

3.2 Shepard Algorithm

Local methods are attractive for very large data sets because the interpolation or approximation at any point can be achieved by considering only a local subset of the data. Many local methods can be characterized as weighted sums of local approximations P_k(x), where the weights W_k(x) form a partition of unity. In order for the overall method to be local, it is necessary that the weight functions have local support, that is, be nonzero over a bounded region, or at a limited number of the data points.

The original global inverse distance weighted interpolation method is due to Shepard [5]. All methods of this type may be viewed as generalizations of Shepard's method.

Let E^m denote m-dimensional Euclidean space, x = (x_1, . . . , x_m) ∈ E^m, and for real w let w_+ = max{0, w}. The scattered data interpolation problem can be defined as: given a set of irregularly distributed points x^(i) ∈ E^m, i = 1, . . . , n, and scalar values f_i associated with each point satisfying f_i = f(x^(i)) for some underlying function f : E^m → E, look for an interpolating function \tilde{f} ≈ f such that \tilde{f}(x^(i)) = f_i. Assume that every m-simplex formed from the points x^(i) is nondegenerate (has a nonempty interior). Define an approximation to f(x) by

\tilde{f}(x) = \frac{\sum_{k=1}^{n} W_k(x) f_k}{\sum_{i=1}^{n} W_i(x)},

where the weight functions W_k(x) are defined in the original paper [5] as

W_k(x) = \frac{1}{\left\| x - x^{(k)} \right\|_2^2}.

However, this form of the weight functions accords too much influence to data points that are far away from the point of approximation, and may be unacceptable in some cases.
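A minimal sketch of the original global method, implementing the interpolant and inverse squared-distance weights above directly (the eps guard for evaluation exactly at a data site is an implementation detail added here):

    import numpy as np

    def shepard_global(x, pts, f, eps=1e-12):
        # pts: (n, m) array of data sites x^(k); f: (n,) data values f_k.
        d2 = np.sum((pts - x)**2, axis=1)   # ||x - x^(k)||_2^2 for every k
        if np.any(d2 < eps):                # at a data site, interpolate exactly
            return f[np.argmin(d2)]
        W = 1.0 / d2                        # the weights W_k(x) defined above
        return np.dot(W, f) / np.sum(W)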

Franke and Nielson [6] developed a modification that eliminates the deficiencies of the original Shepard method. They modified the weight function W_k(x) to have local support, and hence to localize the overall approximation, and replaced f_k with a suitable local approximation P_k(x). This method is called the local quadratic Shepard method and has the general form

\tilde{f}(x) = \frac{\sum_{k=1}^{n} W_k(x) P_k(x)}{\sum_{i=1}^{n} W_i(x)},   (7)

where P_k(x) is a local approximant to the function f(x) centered at x^(k), with the property that P_k(x^(k)) = f_k. The choice of weight functions W_k(x) used by Renka [9] was suggested by Franke and Nielson [6] and is of the form

W_k(x) = \left[ \frac{\left( R_w^{(k)} - d_k(x) \right)_+}{R_w^{(k)} \, d_k(x)} \right]^2,   (8)

where d_k(x) = \| x - x^{(k)} \|_2 is the Euclidean distance between the points x and x^(k), and the constant R_w^(k) > 0 is a radius of influence about the point x^(k), chosen just large enough to include N_w points. The data at x^(k) only influences \tilde{f}(x) values within this radius.

The polynomial function P_k is written as a Taylor series about the point x^(k) with constant term f_k = P_k(x^(k)) and coefficients chosen to minimize the weighted square error

\sum_{\substack{i=1 \\ i \ne k}}^{n} \omega_i(x^{(k)}) \left[ P_k(x^{(i)}) - f_i \right]^2

with weights

\omega_i(x^{(k)}) = \left[ \frac{\left( R_p^{(i)} - d_i(x^{(k)}) \right)_+}{R_p^{(i)} \, d_i(x^{(k)})} \right]^2,

and R_p^(k) > 0 defining a radius about x^(k) within which data is used for the least squares fit. R_w and R_p are taken by Franke and Nielson [6] as

R_w = \frac{D}{2} \sqrt{\frac{N_w}{n}}, \qquad R_p = \frac{D}{2} \sqrt{\frac{N_p}{n}},

where D = \max_{i,j} \| x^{(i)} - x^{(j)} \|_2 is the maximum distance between any two data points, and N_w and N_p are arbitrary constants. The constant values for R_w and R_p are appropriate assuming uniform data density. If the data density is not uniform, then the radii R_w and R_p should depend on k.

The basis function P_k(x) was the constant f_k in the original Shepard algorithm [5], and later variants used a quadratic polynomial [6, 9, 10, 8], a cubic polynomial [11], and a cosine trigonometric polynomial [12]. The primary disadvantage for large data sets is that a considerable amount of preprocessing is needed to determine the closest points and calculate the local approximations. A second order polynomial model has (m + 2)(m + 1)/2 coefficients for m design variables, so the number of coefficients for P_k(x) is at least (m + 2)(m + 1)/2 (for example, 66 coefficients per node when m = 10), which becomes prohibitive for a typical engineering problem, where m ≫ 5 and function values are expensive. Also, the use of multivariate polynomials for the evaluation of the nodal functions forfeits the main advantage of Shepard's original method, namely that its evaluation phase is independent of the space dimension. In fact, as the space dimension m increases, the evaluation of the nodal functions P_k(x) can become computationally expensive.

Of course, the use of polynomials of degree < 2 for P_k(x) is inadequate to describe the local behavior of highly nonlinear objective functions, e.g., the penalty function based fitness functions in genetic algorithms (GAs). However, since the individual response functions (constraints, components of the objective function) in many engineering problems are slowly varying smooth functions of continuous variables, high quality approximations to these component functions can be constructed without requiring a large number of function evaluations by using linear local approximations.

In the context of a genetic algorithm with a memory binary tree for the discrete and continuous variables, each tree node would have to accumulate N_p = Ω(m^2) function values f_k before an approximation \tilde{f}(x) could be constructed at that node using quadratic P_k(x). These complexity considerations motivate the choice of P_k(x) as linear, which only requires N_p > m function values to construct the local least squares fit P_k(x). The radii R_w and R_p vary with k and are taken to be

R_w^{(k)} = 2 \min_{\substack{1 \le i \le n \\ i \ne k}} \left\| x^{(k)} - x^{(i)} \right\|_2,
\qquad
R_p^{(k)} = \min \left\{ r \mid B(x^{(k)}, r) \text{ contains at least } 3m/2 \text{ of the points } x^{(i)} \right\},   (9)

where B(x, r) is the closed ball of radius r with center x.

The linear Shepard method chooses P_k(x) as

P_k(x) = f_k + \sum_{j=1}^{m} a_j^{(k)} \left( x_j - x_j^{(k)} \right).   (10)

Let S = \{ i_1, i_2, . . . , i_{s_k} \} = \{ i \mid i ≠ k and \omega_i(x^{(k)}) ≠ 0 \}, the set of indices corresponding to the points and weights \omega_i(x^{(k)}) ≠ 0 that determine the local least squares approximation P_k(x). Define the s_k × m matrix A and the s_k-vector b by

A_{j \cdot} = \sqrt{\omega_{i_j}(x^{(k)})} \left( x^{(i_j)} - x^{(k)} \right)^t, \qquad b_j = \sqrt{\omega_{i_j}(x^{(k)})} \left( f_{i_j} - f_k \right).

The coefficients a^(k) of P_k(x) are then the minimum norm solution of the linear least squares problem

\min_{a \in E^m} \| A a - b \|_2,

found by using a complete orthogonal factorization of A via the LAPACK subroutine DGELSX.
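The following NumPy sketch assembles the pieces of the linear Shepard method as described above: the radii of Eq. (9), the weights of Eq. (8), the local linear fits of Eq. (10), and the blend of Eq. (7). It substitutes numpy.linalg.lstsq, which also returns a minimum norm least squares solution, for the LAPACK complete orthogonal factorization (DGELSX); it is a sketch of the published formulas, not the authors' code, and assumes distinct data sites.

    import numpy as np

    def lshep(x, pts, f):
        # pts: (n, m) data sites x^(k); f: (n,) values f_k; x: query point.
        n, m = pts.shape
        dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
        np.fill_diagonal(dists, np.inf)
        Rw = 2.0 * dists.min(axis=1)                  # Eq. (9): weight radii
        q = int(np.ceil(1.5 * m))                     # at least 3m/2 points
        Rp = np.sort(dists, axis=1)[:, q - 1]         # Eq. (9): fit radii

        # Local linear fits P_k (Eq. (10)) by weighted least squares.
        a = np.zeros((n, m))
        for k in range(n):
            w = (np.maximum(0.0, Rp - dists[k]) / (Rp * dists[k]))**2  # omega_i(x^(k))
            idx = np.nonzero(w)[0]                    # the index set S
            sw = np.sqrt(w[idx])
            A = sw[:, None] * (pts[idx] - pts[k])     # rows sqrt(w_ij)(x^(ij) - x^(k))^t
            b = sw * (f[idx] - f[k])
            a[k], *_ = np.linalg.lstsq(A, b, rcond=None)  # minimum norm solution

        # Blend the local fits with the localized weights W_k (Eqs. (7)-(8)).
        d = np.linalg.norm(pts - x, axis=1)
        if np.any(d == 0.0):
            return f[np.argmin(d)]                    # exact interpolation at a node
        W = (np.maximum(0.0, Rw - d) / (Rw * d))**2
        P = f + np.sum(a * (x - pts), axis=1)         # P_k(x) for all k at once
        return np.dot(W, P) / np.sum(W)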

4 Approximation Model Test Problems

In order to compare the performance of the methods, five m-variate test functions were chosen. A common feature of these functions is that they are continuous and piecewise smooth. An essential aspect of this comparison is how the accuracy of the different methods varies as the number of dimensions m increases. Data is compiled for the test problems with m = 2, 3, 5.

4.1 Test Problem Formulation

The test functions for m = 2 are shown in Figure 1, and are defined as follows:

f_1(x) = \begin{cases}
  \dfrac{2}{m} \sum_{i=1}^{m} x_i, & \sum_{i=1}^{m} x_i \le \dfrac{m}{2}, \\
  -\dfrac{2}{m} \sum_{i=1}^{m} x_i + 2, & \sum_{i=1}^{m} x_i > \dfrac{m}{2},
\end{cases}

maximum: f_1(x^*) = 1 at \sum_{i=1}^{m} x_i^* = m/2.   (11)

Figure 1: Test functions. [Surface plots over (x_1, x_2) ∈ [0, 1]^2: (a) f_1, (b) f_2, (c) f_3, (d) f_4, (e) f_5; plots not reproduced in this transcript.]

f_2(x) = 1 - \frac{2}{m} \sum_{i=1}^{m} |x_i - 0.5|,

maximum: f_2(x^*) = 1 at x_i^* = 0.5, for i = 1, . . . , m.   (12)

f_3(x) = 1 - 2 \max_{1 \le i \le m} |x_i - 0.5|,

maximum: f_3(x^*) = 1 at x_i^* = 0.5, for i = 1, . . . , m.   (13)

f_4(x) = \prod_{i=1}^{m} g_i(x),   (14)

where

g_i(x) = \begin{cases} 2 x_i, & x_i \le \frac{1}{2}, \\ 2 (1 - x_i), & \text{otherwise}, \end{cases}

maximum: f_4(x^*) = 1 at x_i^* = 0.5, for i = 1, . . . , m.

f_5(x) = 1 - \frac{1}{0.5\,m + 0.5^m} \left( \sum_{i=1}^{m} |x_i - 0.5| + \prod_{i=1}^{m} |x_i - 0.5| \right),

maximum: f_5(x^*) = 1 at x_i^* = 0.5, for i = 1, . . . , m.   (15)

4.2 Evaluation of Accuracy

The test functions have been sampled at n_0 random (uniformly distributed) scattered points of Q = [0, 1]^m. Let

S = \{ P_i \}_{i=1}^{n_0} \subset Q   (16)

be this point set.

The approximation error is characterized using three error metrics: the maximum absolute error e_max, the mean absolute error \bar{e}, and the root mean squared error (RMSE) e_r. The absolute approximation error is defined as

e_i = \left| \tilde{f}(x_i) - f(x_i) \right|,   (17)

where \tilde{f}(x) is the interpolant, f(x) is the test function, and the x_i are the points of a uniform n_1 × · · · × n_1 grid G ⊂ [0.1, 0.9]^m. The total number of grid points is n = n_1^m. Using this notation, the maximum absolute error is

e_{max} = \max_{1 \le i \le n} e_i,   (18)

the mean absolute error is

\bar{e} = \frac{1}{n} \sum_{i=1}^{n} e_i,   (19)

and the root mean squared error is defined as

e_r = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} e_i^2 }.   (20)

5 Results

5.1 2D Test Problem

For the 2D test problem, n_0 = 10, 20, 40, 60, 80, 100 and n = 121. The 121 test sites are the points of the 11 × 11 grid G. The approximation errors for the MARS [4], QSHEP [9], and LSHEP (P_k(x) in Eq. (7) is linear) methods were calculated for all test functions and are listed in Table 1.

5.2 3D Test Problem

For the 3D test problem, n_0 = 50, 100, 200, 300, 400, 500 and n = 1331. The 1331 test sites are the points of the 11 × 11 × 11 grid G. The approximation errors for the MARS [4], QSHEP [10], and LSHEP methods were calculated for all test functions and are listed in Table 2.

5.3 5D Test Problem

For the 5D test problem, n_0 = 500, 1000, 2000, 3000, 4000, 5000 and n = 3125. The 3125 test sites are the points of the 5 × 5 × 5 × 5 × 5 grid G. The approximation errors for the MARS and LSHEP methods were calculated for all test functions and are listed in Table 3.

Table 1: Interpolation errors for 2D test problems.

fi    n0   mars                      qshep                     lshep
           e       er      emax      e       er      emax      e       er      emax

f1    10   -       -       -         -       -       -         7.36e-2 9.71e-2 2.48e-1
      20   1.34e-1 1.61e-1 3.50e-1   3.81e-2 5.29e-2 1.70e-1   4.30e-2 5.65e-2 1.47e-1
      40   8.98e-2 1.10e-1 2.73e-1   1.76e-2 3.02e-2 1.17e-1   3.57e-2 4.77e-2 1.31e-1
      60   4.29e-2 5.57e-2 1.68e-1   1.14e-2 1.93e-2 7.72e-2   3.16e-2 4.56e-2 1.13e-1
      80   2.81e-2 4.10e-2 1.42e-1   7.80e-3 1.47e-2 4.67e-2   2.67e-2 4.18e-2 1.05e-1
     100   2.54e-2 3.72e-2 1.39e-1   7.68e-3 1.33e-2 3.61e-2   2.18e-2 3.14e-2 9.82e-2

f2    10   -       -       -         -       -       -         8.19e-2 9.96e-2 2.73e-1
      20   6.99e-2 8.79e-2 2.30e-1   4.48e-2 5.74e-2 1.77e-1   5.50e-2 7.32e-2 2.30e-1
      40   1.52e-2 2.61e-2 1.12e-1   2.08e-2 3.12e-2 1.14e-1   3.88e-2 5.87e-2 2.28e-1
      60   7.38e-3 1.34e-2 6.22e-2   1.34e-2 2.15e-2 7.46e-2   3.19e-2 4.33e-2 1.28e-1
      80   4.70e-3 7.91e-3 3.41e-2   1.07e-2 1.62e-2 6.40e-2   2.64e-2 3.53e-2 1.23e-1
     100   2.53e-3 5.81e-3 2.98e-2   8.91e-3 1.37e-2 5.10e-2   2.56e-2 3.47e-2 1.01e-1

f3    10   -       -       -         -       -       -         9.63e-2 1.20e-1 2.89e-1
      20   1.28e-1 1.59e-1 4.53e-1   5.82e-2 7.52e-2 2.09e-1   7.30e-2 9.29e-2 2.78e-1
      40   9.91e-2 1.22e-1 3.21e-1   3.18e-2 4.24e-2 1.23e-1   5.31e-2 6.77e-2 2.22e-1
      60   6.54e-2 7.89e-2 1.62e-1   2.02e-2 3.06e-2 1.19e-1   4.63e-2 6.13e-2 1.73e-1
      80   5.24e-2 6.37e-2 1.36e-1   1.68e-2 2.55e-2 9.83e-2   4.27e-2 5.56e-2 1.59e-1
     100   3.85e-2 4.81e-2 1.29e-1   1.50e-2 2.34e-2 8.29e-2   3.24e-2 4.48e-2 1.35e-1

f4    10   -       -       -         -       -       -         1.15e-1 1.37e-1 3.46e-1
      20   7.14e-2 1.02e-1 3.61e-1   6.60e-2 9.20e-2 3.57e-1   1.04e-1 1.15e-1 3.36e-1
      40   6.81e-2 7.50e-2 2.88e-1   3.11e-2 4.83e-2 1.66e-1   5.02e-2 6.88e-2 2.81e-1
      60   6.10e-2 6.98e-2 2.03e-1   1.78e-2 2.84e-2 1.04e-1   4.74e-2 6.24e-2 1.55e-1
      80   5.49e-2 6.33e-2 1.49e-1   1.24e-2 2.00e-2 8.13e-2   3.73e-2 4.98e-2 1.47e-1
     100   5.31e-2 6.11e-2 1.42e-1   1.04e-2 1.82e-2 7.87e-2   3.67e-2 4.83e-2 1.42e-1

f5    10   -       -       -         -       -       -         7.63e-2 9.38e-2 2.69e-1
      20   6.23e-2 7.78e-2 1.73e-1   4.21e-2 5.31e-2 1.41e-1   5.15e-2 6.72e-2 1.91e-1
      40   8.48e-3 1.09e-2 3.18e-2   1.51e-2 2.20e-2 6.90e-1   2.95e-2 4.02e-2 1.24e-1
      60   2.99e-3 4.81e-3 1.61e-2   1.10e-2 1.62e-2 5.27e-2   2.45e-2 3.29e-2 1.08e-1
      80   2.21e-3 4.13e-3 1.56e-2   9.69e-3 1.47e-2 5.07e-2   2.32e-2 3.13e-2 8.54e-2
     100   1.81e-3 3.63e-3 1.44e-2   8.08e-3 1.27e-2 3.91e-2   2.28e-2 3.03e-2 1.04e-1

Table 2: Interpolation errors for 3D test problems.

fi    n0   mars                      qshep                     lshep
           e       er      emax      e       er      emax      e       er      emax

f1    50   7.33e-2 7.51e-2 3.43e-1   2.14e-2 2.82e-2 1.25e-1   3.75e-2 5.09e-2 2.03e-1
     100   6.31e-2 6.54e-2 3.38e-1   1.87e-2 2.77e-2 1.06e-1   3.12e-2 4.25e-2 1.37e-1
     200   6.15e-2 6.36e-2 3.28e-1   1.28e-2 2.13e-2 9.95e-2   3.07e-2 4.12e-2 1.36e-1
     300   6.09e-2 6.30e-2 3.19e-1   1.09e-2 1.81e-2 9.74e-2   2.46e-2 3.38e-2 1.28e-1
     400   5.85e-2 6.22e-2 3.10e-1   8.47e-3 1.55e-2 9.21e-2   2.01e-2 3.14e-2 1.20e-1
     500   4.79e-2 5.12e-2 3.02e-1   6.52e-3 1.21e-2 8.35e-2   1.41e-2 2.46e-2 1.03e-1

f2    50   1.05e-2 1.47e-2 7.33e-2   3.04e-2 3.97e-2 1.50e-1   4.82e-2 6.00e-2 1.64e-1
     100   7.96e-3 1.08e-2 4.42e-2   2.14e-2 2.96e-2 1.24e-1   4.48e-2 5.56e-2 1.69e-1
     200   6.65e-3 8.66e-3 3.55e-2   1.36e-2 2.04e-2 1.03e-1   3.02e-2 3.77e-2 1.29e-1
     300   4.75e-3 6.54e-3 2.70e-2   1.18e-2 1.82e-2 9.40e-2   2.78e-2 3.65e-2 1.17e-1
     400   3.44e-3 5.37e-3 2.28e-2   1.05e-2 1.59e-2 7.46e-2   2.50e-2 3.14e-2 1.02e-1
     500   1.67e-3 3.10e-3 1.54e-2   8.48e-3 1.26e-2 5.44e-2   2.22e-2 2.83e-2 9.51e-2

f3    50   8.98e-2 1.22e-1 5.16e-1   7.19e-2 9.27e-2 2.64e-1   7.94e-2 1.00e-1 2.71e-1
     100   8.94e-2 1.20e-1 4.97e-1   4.23e-2 5.37e-2 2.03e-1   6.71e-2 8.24e-2 2.59e-1
     200   8.49e-2 1.17e-1 4.44e-1   3.02e-2 3.98e-2 1.54e-1   5.02e-2 6.49e-2 2.38e-1
     300   8.46e-2 1.11e-1 4.04e-1   2.40e-2 3.25e-2 1.52e-2   4.60e-2 5.80e-2 2.16e-1
     400   8.31e-2 1.07e-1 3.72e-1   2.09e-2 2.86e-2 1.33e-2   4.43e-2 5.61e-2 2.06e-1
     500   7.67e-2 1.01e-1 3.12e-1   1.96e-2 2.76e-2 1.23e-2   4.18e-2 5.41e-2 1.86e-1

f4    50   1.06e-1 1.40e-1 5.95e-1   3.87e-2 5.39e-2 3.24e-1   7.06e-2 9.00e-2 3.24e-1
     100   7.18e-2 1.11e-1 5.63e-1   3.01e-2 4.41e-2 2.16e-1   5.25e-2 7.38e-2 3.33e-1
     200   6.57e-2 1.07e-1 5.52e-1   1.90e-2 3.01e-2 1.71e-1   4.46e-2 6.57e-2 2.78e-1
     300   6.47e-2 1.01e-2 4.92e-1   1.23e-2 2.13e-2 1.38e-1   3.94e-2 5.82e-2 2.03e-1
     400   6.28e-2 9.95e-2 4.42e-1   1.20e-2 2.03e-2 1.61e-1   3.41e-2 5.07e-2 1.42e-1
     500   6.14e-2 9.71e-2 3.71e-1   1.17e-2 1.97e-2 1.53e-1   3.17e-2 4.84e-2 1.31e-1

f5    50   1.24e-2 1.55e-2 5.53e-2   2.87e-2 3.75e-2 1.40e-1   4.59e-2 5.72e-2 1.51e-1
     100   1.07e-2 1.38e-2 4.65e-2   2.03e-2 2.80e-2 1.16e-1   3.68e-2 4.82e-2 1.39e-1
     200   1.05e-2 1.34e-2 4.29e-2   1.36e-2 1.97e-2 7.89e-2   2.74e-2 3.49e-2 1.27e-1
     300   9.14e-3 1.16e-2 3.45e-2   1.09e-2 1.53e-2 6.81e-2   2.44e-2 3.15e-2 1.16e-1
     400   8.11e-3 9.97e-3 3.32e-2   9.69e-3 1.37e-2 6.65e-2   2.37e-2 2.97e-2 9.36e-2
     500   7.48e-3 9.34e-3 2.52e-2   8.11e-3 1.21e-2 5.22e-2   2.11e-2 2.67e-2 8.39e-2

Table 3: Interpolation errors for 5D test problems.

fi    n0     mars                      lshep
             e       er      emax      e       er      emax

f1    500    3.60e-2 5.15e-2 1.43e-1   3.56e-2 5.03e-2 1.62e-1
     1000    3.46e-2 4.55e-2 1.24e-1   3.16e-2 4.42e-2 1.54e-1
     2000    3.19e-2 4.08e-2 1.25e-1   2.89e-2 3.89e-2 1.23e-1
     3000    3.17e-2 4.03e-2 1.21e-1   2.87e-2 3.86e-2 1.16e-1
     4000    3.15e-2 3.97e-2 1.12e-1   2.82e-2 3.80e-2 1.09e-1
     5000    3.06e-2 3.87e-2 1.08e-1   2.72e-2 3.66e-2 1.01e-1

f2    500    8.54e-3 1.11e-2 4.00e-2   3.99e-2 5.02e-2 1.75e-1
     1000    4.72e-3 6.57e-3 1.95e-2   3.23e-2 4.07e-2 1.69e-1
     2000    2.13e-3 2.86e-3 1.02e-2   3.11e-2 3.93e-2 1.65e-1
     3000    2.07e-3 2.75e-3 7.50e-3   3.05e-2 3.83e-2 1.56e-1
     4000    1.56e-3 2.05e-3 8.81e-3   3.00e-2 3.79e-2 1.51e-1
     5000    1.17e-3 1.56e-3 5.82e-3   2.91e-2 3.68e-2 1.45e-1

f3    500    9.07e-2 1.17e-1 5.94e-1   8.10e-2 9.32e-2 3.14e-1
     1000    7.99e-2 1.05e-1 5.86e-1   7.19e-2 8.16e-2 2.92e-1
     2000    7.65e-2 9.89e-2 4.80e-1   6.79e-2 7.05e-2 2.68e-1
     3000    7.43e-2 9.74e-2 4.74e-1   5.87e-2 6.53e-2 1.30e-1
     4000    7.05e-2 9.71e-2 4.65e-1   5.18e-2 6.31e-2 1.21e-1
     5000    7.00e-2 9.60e-2 4.59e-1   4.73e-2 5.07e-2 1.18e-1

f4    500    3.64e-2 6.41e-2 7.63e-1   3.17e-2 5.38e-2 3.48e-1
     1000    3.52e-2 6.32e-2 7.53e-1   3.10e-2 5.28e-2 3.08e-1
     2000    3.46e-2 6.18e-2 7.49e-1   2.84e-2 5.17e-2 2.05e-1
     3000    3.36e-2 6.15e-2 7.42e-1   2.75e-2 4.85e-2 2.02e-1
     4000    3.34e-2 6.14e-2 7.44e-1   2.68e-2 4.38e-2 1.92e-1
     5000    3.28e-2 6.12e-2 7.36e-1   2.61e-2 4.13e-2 1.81e-1

f5    500    1.30e-2 1.85e-2 6.34e-2   3.95e-2 4.96e-2 1.73e-1
     1000    1.23e-2 1.75e-2 5.84e-2   3.17e-2 3.89e-2 1.60e-1
     2000    1.09e-2 1.40e-2 4.88e-2   3.09e-2 3.88e-2 1.55e-1
     3000    1.07e-2 1.37e-2 4.72e-2   3.07e-2 3.85e-2 1.49e-1
     4000    9.02e-3 1.17e-2 4.04e-2   2.96e-2 3.75e-2 1.45e-1
     5000    6.81e-3 9.14e-3 2.84e-2   2.87e-2 3.64e-2 1.17e-1

6 Conclusions

Table 1 makes the point that, for a small number n_0 of sample points, MARS and the quadratic Shepard algorithm cannot even generate an approximation, whereas the linear Shepard algorithm gives very reasonable estimates. This advantage of the linear Shepard algorithm grows much stronger as the dimension m increases, for the theoretical reasons discussed earlier. For large n_0, all three algorithms are comparable, with one or another being slightly better depending on the problem. For example, for f_2 and f_5 the ridges align with the coordinate axes, making the tensor product splines of MARS a very good approximation to the ridges (perfect, in fact, if the spline knots were optimally placed). For f_1, f_3, and the locally nonlinear f_4, tensor product splines do not approximate well the diagonal ridges of f_1 and f_3, or the high degree polynomial local behavior of f_4.

All the results in Tables 1–3 are consistent and predictable from the nature of the test functions and the approximation power of the basis functions used by the three algorithms.

References

[1] Barthelemy JM, Haftka RT. Approximation concepts for optimum structural design. Structural Optimization 1993;5:129–144.

[2] Sobieszczanski-Sobieski J, Haftka RT. Multidisciplinary aerospace design optimization: Survey of recent developments. Structural Optimization 1997;14:1–23.

[3] Gantovnik VB, Gürdal Z, Watson LT, Anderson-Cook CM. A genetic algorithm for mixed nonlinear programming problems with separate constraint approximations. 44th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, AIAA Paper No. 2003-1700. Norfolk, VA; 2003.

[4] Friedman JH. Multivariate adaptive regression splines. Annals of Statistics 1991;19:1–141.

[5] Shepard D. A two-dimensional interpolation function for irregularly spaced data. Proceedings of the 23rd ACM National Conference 1968;517–523.

[6] Franke R, Nielson G. Smooth interpolation of large sets of scattered data. International Journal for Numerical Methods in Engineering 1980;15:1691–1704.

[7] Renka RJ. Multivariate interpolation of large sets of scattered data. ACM Transactions on Mathematical Software 1988;14(2):139–148.

[8] Berry MW, Minser KS. Algorithm 798: High-dimensional interpolation using the modified Shepard method. ACM Transactions on Mathematical Software 1999;25(3):353–366.

[9] Renka RJ. Algorithm 660: QSHEP2D: Quadratic method for bivariate interpolation of scattered data. ACM Transactions on Mathematical Software 1988;14(2):149–150.

[10] Renka RJ. Algorithm 661: QSHEP3D: Quadratic method for trivariate interpolation of scattered data. ACM Transactions on Mathematical Software 1988;14(2):151–152.

[11] Renka RJ. Algorithm 790: CSHEP2D: Cubic Shepard method for bivariate interpolation of scattered data. ACM Transactions on Mathematical Software 1999;25(1):70–73.

[12] Renka RJ. Algorithm 791: TSHEP2D: Cosine series Shepard method for bivariate interpolation of scattered data. ACM Transactions on Mathematical Software 1999;25(1):74–77.

