A Robust and Efficient Probabilistic Approach for Challenging Industrial Applications with High-dimensional and Non-monotonic Design Spaces

Liping Wang
GE Global Research Center
K1-2A62B, One Research Circle
Niskayuna, NY 12309
Tel: 518-387-5500

Don Beeson
GE Aviation
1 Neumann Way
Cincinnati, OH 45215

Gene Wiggs
GE Aviation
1 Neumann Way
Cincinnati, OH 45215

Abstract

The objective of this paper is to apply state-of-the-art meta-modeling techniques to achieve more efficient and robust probabilistic analysis for challenging industrial applications with high dimensional and non-monotonic design spaces. The proposed approach enables Cumulative Distribution Function (CDF) and Probability Density Function (PDF) calculations in design spaces that are monotonic or non-monotonic and have a large number of variables (100+). The proposed method includes 1) constructing an accurate and fast running meta-model from a small number of training points; 2) applying a large number of Monte Carlo runs to the meta-model; 3) post-processing the Monte Carlo output in a special way so that accurate CDF and PDF curves and other probabilistic information are obtained. Since accurate meta-models can be constructed for design spaces that are non-monotonic or have a very large number of variables (100+), this approach provides a practical general-purpose solution process that is applicable to most probabilistic design problems encountered in industry.

Introduction

In spite of the exponential growth of computing power, the enormous computational cost of complex and large-scale engineering design problems makes it impractical to rely exclusively on high fidelity simulation codes. This is especially true for probabilistic and robust design problems. For example, a single Computational Fluid Dynamics (CFD) analysis may require a few days of simulation execution, even if it is performed in a parallel processing environment. Other types of activities, such as heat transfer, mechanical, material and product life analysis, may also require prohibitively large amounts of computation time. Performing probabilistic and robust design on these long running applications is a formidable task in any industry.

11th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference6 - 8 September 2006, Portsmouth, Virginia

AIAA 2006-7014

Copyright © 2006 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.


To alleviate the high computational cost issue, there has been an increasing trend toward fast probabilistic approaches based upon various approximations. Well-known examples are the First-Order Second-Moment (FOSM) [1-4] and the Point Estimate Method (PEM) [6-9]. FOSM performs probability analysis based upon approximations to the performance functions and approximations to the uncertainties of the input parameters. The HL-RF method in [1] is a commonly used FOSM method that is often referred to in the literature. This method approximates the limit state surface by its tangent plane at the most probable failure point (MPP), and then obtains an improved point by computing the shortest distance from the approximate hypersurface to the origin. The HL-RF method converges rapidly for problems that are linear or close to linear, but it requires several iterations for nonlinear problems. In fact, it may converge slowly, oscillate, or even fail to converge if the limit surface is complicated or highly nonlinear. The efficient safety index method in [4] approximates the limit state surface using a two-point nonlinear adaptive approximation (TANA). This technique is significantly more efficient and stable than HL-RF and other iterative algorithms due to the high quality approximation that TANA provides. The AMV method in [3] provides an efficient way to estimate cumulative distribution functions (CDFs) based upon a linear approximation at the mean value point and function corrections at multiple selected CDF points. By fitting a spline function through the estimated CDF points, the failure probability and safety index at any given limit of the responses can be calculated.
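The classical HL-RF update described above can be sketched in a few lines. This is an illustrative stand-in, not the implementation referenced in [1]: it assumes independent standard normal variables and an analytic gradient, and the linear limit state g(u) = 3 - u1 - u2 is a made-up example whose exact safety index is 3/sqrt(2).

```python
import math

def hl_rf(g, grad_g, n, tol=1e-8, max_iter=100):
    """Classical HL-RF iteration in standard normal space:
    u_{k+1} = ((grad.u_k - g(u_k)) / |grad|^2) * grad."""
    u = [0.0] * n  # start at the mean value point
    for _ in range(max_iter):
        gv = g(u)
        gr = grad_g(u)
        norm2 = sum(c * c for c in gr)
        dot = sum(c * ui for c, ui in zip(gr, u))
        u_new = [(dot - gv) / norm2 * c for c in gr]
        if max(abs(a - b) for a, b in zip(u_new, u)) < tol:
            u = u_new
            break
        u = u_new
    beta = math.sqrt(sum(c * c for c in u))  # safety index = distance to origin
    return u, beta

# Hypothetical linear limit state: g(u) = 3 - u1 - u2, exact beta = 3/sqrt(2)
g = lambda u: 3.0 - u[0] - u[1]
grad = lambda u: [-1.0, -1.0]
u_star, beta = hl_rf(g, grad, 2)
```

For this linear case the iteration converges in one step, which matches the rapid-convergence behavior described above; nonlinear limit states generally require several iterations.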

Most FOSM approaches require gradient information to find the "most probable point" (MPP) of failure and to create the limit state surface approximation for the safety index and failure probability calculations. In most practical engineering applications, the necessary partial derivatives are not available directly from the simulation codes. Instead, they must be approximated by a finite-difference (FD) method, which replaces each occurrence of an exact partial derivative by a finite-difference approximation. This technique may be computationally expensive, but it is easy to implement and very popular. Unfortunately, finite-difference approximations have accuracy problems when nonlinear problems are encountered and the step size is incorrectly chosen, or when the design space is "noisy". Noise may be caused by convergence of the simulation algorithms or by truncated digits in the performance output files that the simulation produces. For problems with significant design space noise, many of the gradient-based FOSM methods fail to converge to the correct MPPs. Automatic step size procedures proposed in [10-12] sometimes provide more accurate gradients for situations when the noise is caused by insufficient output digits, but they are still incapable of solving all problems with noise issues. In addition, these automatic step size adjustments will become time consuming if the number of input random variables is large.
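The step-size issue described above can be demonstrated with a toy function whose output is rounded to four digits, mimicking truncated digits in a simulation output file. Everything here is a hypothetical illustration, not one of the paper's simulations: with a tiny step the rounding noise swamps the forward difference, while a larger step gives a far better derivative estimate.

```python
import math

def forward_diff(f, x, h):
    """One-sided forward-difference approximation of f'(x)."""
    return (f(x + h) - f(x)) / h

# "Noisy" simulation stand-in: sin(x) with output truncated to 4 digits
f_noisy = lambda x: round(math.sin(x), 4)

x = 1.0
exact = math.cos(x)  # true derivative of sin at x = 1
err_small = abs(forward_diff(f_noisy, x, 1e-8) - exact)  # noise-dominated
err_large = abs(forward_diff(f_noisy, x, 1e-2) - exact)  # much more accurate
```

With h = 1e-8 the two rounded outputs are identical, so the computed derivative is zero; the larger step recovers the derivative to about two digits.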

A non-gradient based FOSM approach given in [5] improves the efficiency and robustness of probabilistic analysis for noisy problems in general and, in particular, for problems with output files that contain insufficient digits for the performance functions. The method uses the non-gradient downhill simplex optimization algorithm of Nelder and Mead [27] to solve for the safety index. Noisy example problems are given in [5], which demonstrate the efficiency and robustness of the method.


In addition to FOSM approaches, the Point Estimate Method (PEM) is another fast probabilistic approach that has been explored in depth at the General Electric Company to achieve efficient probabilistic and robust design. This method, which was originally proposed by Rosenblueth [6], is a simple but powerful technique for evaluating the statistical moments and probability distribution information of performance functions. Early versions of PEM have been used in many geo-technical and civil engineering analyses [7-8]. Unlike most of the FOSM techniques, PEM does not require the derivatives of the performance functions with respect to the input variables. Also, it does not use any optimization search technique, and can deal with many output factors simultaneously without additional simulation calls. The distribution of each performance function is approximated at the end of the analysis by a 4-parameter Beta or Lambda distribution. The non-calculus nature of the algorithm makes it robust to typical engineering simulations that may be noisy or discontinuous. A unique strength of the PEM method is that it predicts the moments of the output variables accurately. This is possible because it uses the first four moments (mean, standard deviation, skewness and kurtosis) of each input variable to calculate the output information. Inherently, the computational cost of PEM significantly increases with the number of design variables. An improved PEM approach is given in [9] that dramatically reduces the number of runs when the number of design variables is large.
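Rosenblueth's original two-point scheme can be sketched as follows for symmetric (zero-skew) inputs: the function is evaluated at every mu_i +/- sigma_i corner, each with equal weight, and the output moments are assembled from those 2^n values. This is a minimal illustration of the classical rule, not the improved GE PEM of [9], and the function y = x1 + 2*x2 is a made-up example with known exact moments.

```python
from itertools import product

def rosenblueth_2pm(func, means, sigmas):
    """Rosenblueth two-point estimate (symmetric inputs): evaluate func at
    every mu_i +/- sigma_i corner, each corner weighted by 2**-n."""
    n = len(means)
    w = 0.5 ** n
    samples = []
    for signs in product((-1.0, 1.0), repeat=n):
        x = [m + s * sd for m, sd, s in zip(means, sigmas, signs)]
        samples.append(func(x))
    mean = w * sum(samples)
    var = w * sum(v * v for v in samples) - mean ** 2
    return mean, var

# Hypothetical linear response: y = x1 + 2*x2
# exact mean = 1 + 2*2 = 5, exact variance = 0.5^2 + 4*0.25^2 = 0.5
f = lambda x: x[0] + 2.0 * x[1]
m, v = rosenblueth_2pm(f, [1.0, 2.0], [0.5, 0.25])
```

Note the 2^n growth in function calls visible in the `product` loop; this is the computational cost increase with dimension mentioned above.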

While the FOSM and PEM methods have been successfully applied to many problems in industry, a significant shortcoming of both methods is their inherent inability to correctly handle non-monotonic design spaces. This is because calculating with random variables in non-monotonic spaces often produces unusual CDF and PDF curves that the FOSM and PEM mathematics cannot construct. For example, non-monotonic PDF plots will frequently be multimodal. Sometimes, corrective steps can be added to the FOSM or PEM mathematics to deal with non-monotonic spaces, but in general, the methods fail dramatically.

An approach that efficiently and robustly enables random variable calculations in all types and sizes of design spaces, both monotonic and non-monotonic, is highly desirable. The following three-step process defines a method that realizes this goal:

(1) An accurate and fast running meta-model is constructed from a small number of training points.

(2) A large number of Monte Carlo runs are applied to the meta-model.

(3) The Monte Carlo output is post-processed in a special way so that accurate CDF and PDF curves and other probabilistic information are obtained.

Since accurate meta-models can be constructed for design spaces that are non-monotonic or have a very large number of variables (100+), this approach provides a practical general-purpose solution process that is applicable to most probabilistic design problems encountered in industry.
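The three-step process can be illustrated end to end with a toy "simulation" and a piecewise-linear surrogate standing in for the RBF meta-model (the RBF construction itself is described later in the paper). Everything here is a hypothetical sketch: `simulation`, the training grid, and the sample counts are all made up for illustration.

```python
import bisect
import random

# "Expensive" simulation stand-in: y = (x - 0.3)^2
def simulation(x):
    return (x - 0.3) ** 2

# Step 1: small training set -> fast surrogate (piecewise-linear here)
xs = [i / 10.0 for i in range(11)]
ys = [simulation(x) for x in xs]

def surrogate(x):
    j = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    t = (x - xs[j]) / (xs[j + 1] - xs[j])
    return ys[j] + t * (ys[j + 1] - ys[j])

# Step 2: many cheap Monte Carlo runs on the surrogate (x ~ Uniform[0,1])
random.seed(0)
mc = sorted(surrogate(random.random()) for _ in range(20000))

# Step 3: empirical CDF from the sorted output
def cdf(y):
    return bisect.bisect_right(mc, y) / len(mc)
```

Only 11 "expensive" evaluations are used; the 20,000 Monte Carlo runs hit only the cheap surrogate, which is the economy the three-step process is after.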

The computational cost of this proposed approach occurs when the meta-model training data is being obtained. However, since these training points are independent, the necessary simulation runs can easily be distributed over a network of computers or servers to shorten the overall analysis time.

Specifically, the objective of this paper is to apply state-of-the-art meta-modeling techniques to achieve more efficient and robust probabilistic analysis for challenging industrial applications with high dimensional and non-monotonic design spaces. As a result of comparisons of non-parametric meta-models [26] including Gaussian Process (GP) [13-17], Kriging [18-19], Multivariate Adaptive Regression Splines (MARS) [20], Radial Basis Functions (RBF) [21-23], Artificial Neural Network (ANN) [24], Adaptive Weighted Least Squares (AWLS – GE Internal) and Support Vector Regression (SVR) [25], the RBF meta-model was demonstrated to be the most accurate and efficient method under practical industry requirements. Therefore, in this paper, RBF is used to create the meta-models for the engineering performance factors of interest. Using these RBF meta-models, Monte Carlo simulation is then applied with special post-processing to obtain accurate CDF, PDF, statistical moments and other probabilistic estimations. In addition, space filling DACE (Design and Analysis of Computer Experiments) methods, including Hammersley, Latin Hypercube and Optimized Latin Hypercube designs, will be discussed and applied for large X (100+) problems to further improve the accuracy and robustness of the RBF meta-models used. The predictive quality of each RBF meta-model is measured by a simple "leave-one-out" cross validation procedure. The results of this cross validation process can be viewed graphically or condensed to a single metric. In general, the predictive metric will improve as more training points are used to build the meta-model. Finally, the paper demonstrates the ability to use RBF meta-models to solve difficult engineering probabilistic and robust design problems by using several realistic but easily understood engineering problems that possess the difficulties of high dimensional and non-monotonic design spaces.
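For reference, a Hammersley point set, one of the space-filling DACE designs mentioned above, can be generated from radical-inverse sequences. This is a standard textbook construction sketched for illustration, not the specific DACE tool used by the authors.

```python
def radical_inverse(i, base):
    """Reflect the base-b digits of i about the radix point: 6 in base 2
    (110) becomes 0.011 = 0.375."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += (i % base) * f
        i //= base
        f /= base
    return inv

def hammersley(n, primes=(2, 3, 5)):
    """n points of a (len(primes)+1)-dimensional Hammersley set in [0,1)^d:
    first coordinate is i/n, the rest are radical inverses in prime bases."""
    return [[i / n] + [radical_inverse(i, b) for b in primes]
            for i in range(n)]
```

Unlike purely random sampling, the points fill the unit cube with low discrepancy, which is why such designs help RBF accuracy when training budgets are small.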

Three examples are presented to demonstrate the concepts discussed above.

Radial Basis Function Meta-Model

A Radial Basis Function (RBF) meta-model is constructed from a surface fitting technique that can create response surfaces not limited to polynomial equations. At General Electric, RBF model construction is a specific tool in a general-purpose software system called PEZ. Similar to Gaussian Process, the RBF technique does not assume any specific form (such as polynomial) for the overall approximation function. Instead, the method uses linear combinations of a radially symmetric function based on the Euclidean distance or other metric to approximate the response functions. RBF models have a significant advantage over response surface method (RSM) models when the region of interest is a large portion of a highly non-linear or non-monotonic design space – especially in cases when there are many input factors but only a limited number of sampled points are available. The construction of a RBF model is much faster than the construction of a GP model when a large set of training points (>300) is used to build the model. Similar to the GP model, a RBF model is mathematically more complicated than the polynomial based response surfaces currently used by many engineers.

A classical radial basis function model can be expressed as:

$\tilde{f}(x) = \sum_{i=1}^{N} \lambda_i \,\varphi(\lVert x - x_i \rVert_2)$

where $\lambda_i$ are interpolation constants to be determined and $N$ is the number of sample or training data points. The radial basis functions $\varphi$ are functions of the Euclidean norm $\lVert x - x_i \rVert_2$ from node $i$, which is the radial distance $r$ of the point $x$ from the center $x_i$:

$r = \lVert x - x_i \rVert_2 = \sqrt{(x - x_i)^T (x - x_i)} = \sqrt{(x_1 - x_{1,i})^2 + (x_2 - x_{2,i})^2 + \dots + (x_n - x_{n,i})^2}$

The unknown interpolation coefficients $\lambda_i$ can be determined by minimizing

$J = \sum_{k=1}^{N} \Big[ f(x_k) - \sum_{i=1}^{N} \lambda_i \,\varphi(\lVert x_k - x_i \rVert_2) \Big]^2$

The minimization equation in matrix form can be written as

$[A]\{\lambda\} = \{F\}$

where

$[A] = [\varphi(\lVert x_i - x_j \rVert_2)]$, $\{\lambda\}^T = \{\lambda_1, \lambda_2, \dots, \lambda_N\}$, $\{F\}^T = \{f(x_1), f(x_2), \dots, f(x_N)\}$

Since there are $N$ equations with $N$ unknown constants $\lambda_j$, $j = 1, 2, \dots, N$, the resulting surface is an interpolating surface.
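The interpolation system $[A]\{\lambda\} = \{F\}$ can be illustrated in one dimension with a multiquadric basis and a small dense solver. This is a didactic sketch, not the PEZ implementation; the one-dimensional training data (f(x) = x^2 at three points) is made up, and a real code would use a library linear solver.

```python
import math

def multiquadric(r, c=1.0):
    """Multiquadric radial function phi(r) = sqrt(r^2 + c^2)."""
    return math.sqrt(r * r + c * c)

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(xs, fs, c=1.0):
    """Build A_ij = phi(|x_i - x_j|) and solve A lambda = F (1D inputs)."""
    A = [[multiquadric(abs(xi - xj), c) for xj in xs] for xi in xs]
    return solve(A, fs)

def rbf_eval(x, xs, lam, c=1.0):
    return sum(l * multiquadric(abs(x - xi), c) for l, xi in zip(lam, xs))

# Demo on hypothetical data: interpolate f(x) = x^2 from three points
xs = [0.0, 0.5, 1.0]
fs = [0.0, 0.25, 1.0]
lam = rbf_fit(xs, fs)
```

Because the system enforces the N interpolation conditions exactly, the fitted surface passes through every training point, as the text notes.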

The above classical radial functions often have the following limitations:

(1) RBF does not possess compact support, i.e., a change in any one of the center coordinates $x_i$ affects the entire interpolation;

(2) Matrix [A] may not be positive definite; and

(3) Almost all RBFs can reproduce a constant function only in the limit as N → ∞, i.e., when the number of sampling points is large.

Therefore, many radial functions are used in the literature. The most commonly used radial function is the multiquadric function, which is given as


$\varphi(r) = \sqrt{r^2 + c^2}$

The constant c in the radial basis function is adjusted to obtain the best fit. In PEZ, all the training data are automatically scaled to the range (0, 1); therefore, the values of c for most examples are in the range (0, 1]. The default value is set to 1, which works well for most problems. For some complex problems, PEZ will automatically find the best c value.

In order to validate the accuracy of RBF meta-models, cross-validation is performed during the creation of the model. Cross-validation allows PEZ to assess the accuracy of the model without sampling any new points beyond those training points used to fit the model. The basic idea of cross-validation is to leave out one observation and then predict it back based only on the N−1 remaining training points. The average relative percentage error of the cross-validation predictions at all the training points is computed to measure the accuracy of the model. In order to obtain accurate cross-validation predictions, the interpolation coefficients $\lambda_i$ are re-estimated for each left-out point when the number of training points is small. When the number of training points is greater than 300, the interpolation coefficients $\lambda_i$ estimated from all N training points are used for the cross-validation.
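The leave-one-out procedure can be sketched generically. Here a simple one-dimensional linear least-squares fit stands in for the RBF model purely to keep the example short, and the data is hypothetical; the structure (drop one point, refit, predict it back, average the relative errors) is the same regardless of the surrogate.

```python
def fit_linear(xs, ys):
    """Ordinary least-squares line y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def loo_avg_relative_error(xs, ys):
    """Leave each point out, refit, predict it back; average |rel. error| %."""
    errs = []
    for i in range(len(xs)):
        xs_i = xs[:i] + xs[i + 1:]
        ys_i = ys[:i] + ys[i + 1:]
        a, b = fit_linear(xs_i, ys_i)
        pred = a + b * xs[i]
        errs.append(abs((pred - ys[i]) / ys[i]) * 100.0)
    return sum(errs) / len(errs)

# Perfectly linear hypothetical data -> LOO error is essentially zero
err = loo_avg_relative_error([1.0, 2.0, 3.0, 4.0, 5.0],
                             [3.0, 5.0, 7.0, 9.0, 11.0])
```

The returned average percentage error is the kind of single condensed metric the paper reports for its examples (e.g., 0.37% in Example 2).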

Since the RBF meta-model can be constructed for design spaces that are non-monotonic or have a very large number of variables (100+), this approach provides a practical general-purpose solution process for CDF and PDF analysis that FORM and SORM mathematics cannot provide.

CDF and PDF Analysis

To be useful, the MCS-RBF approach described above must also include post-processing utilities, which make it easy to obtain accurate and smooth CDF and PDF plots. Probabilistic analysis over non-monotonic spaces will often produce PDF curves that are multi-modal. The ability to visualize these curves becomes important because the rules-of-thumb associated with the common, well known probability distributions may no longer be true. For example, the mean of a multi-modal distribution may have a very small probability of ever being realized. (Illustrated in the first example below.)

Obtaining accurately estimated non-parametric PDF curves from Monte Carlo data is an interesting and challenging mathematical problem. Successful approaches have been documented using the kernel methods of computational statistics and the signal processing mathematics of control theory.

A different and novel approach, which has more of an engineering flavor, has been used to obtain the PDF and CDF curves presented in this paper. The approach can be summarized with the following steps:

(1) Monte Carlo results are obtained from the fast running RBF model.

(2) These results are sorted into increasing order and scaled to the interval [0,1].

(3) Each sorted Monte Carlo point is associated with the fraction of the total data that it represents – also an interval [0,1]. When plotted, this curve represents a non-smooth scaled empirical approximation to the true CDF.

(4) To smooth the CDF and also obtain the PDF (which is the first derivative of the CDF), the coordinate system is rotated counterclockwise 45 degrees, so that in the new coordinate system, the scaled curve extends from (0,0) to (0, √2).

(5) In the rotated system, the curve looks like a pinned-pinned vibrating beam with a complex mode shape. A linear combination of the well-known, orthogonal, vibrating beam mode shapes is then used to fit and smooth the curve in a Fourier least-squares sense (10 to 20 modes are usually sufficient).

(6) An approximate smooth first derivative of the scaled and rotated CDF is then obtained by differentiating the Fourier fit expression term by term.

(7) Finally, the desired smooth CDF and PDF curves are obtained by rotating the coordinate system clockwise 45 degrees and scaling it back to the original space.
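The rotate-and-fit steps above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' code: the pinned-pinned beam modes are represented by their sine shapes, and the least-squares fit is approximated by a simple mode-by-mode projection rather than a full normal-equations solve.

```python
import math
import random

def smooth_cdf_pdf(samples, n_modes=15, n_grid=200):
    """Sketch of the rotate-and-Fourier-fit smoothing:
    (1)-(3) sorted samples -> scaled empirical CDF on [0,1]x[0,1];
    (4) rotate 45 deg CCW so the curve runs (0,0) -> (0, sqrt(2));
    (5) fit pinned-pinned sine modes; (6) differentiate; (7) rotate back."""
    ys = sorted(samples)
    n = len(ys)
    lo, hi = ys[0], ys[-1]
    L = math.sqrt(2.0)
    # scaled ECDF points, then rotated coordinates: u as a function of v
    pts = [((y - lo) / (hi - lo), (i + 1) / n) for i, y in enumerate(ys)]
    vs = [(x + p) / L for x, p in pts]
    us = [(x - p) / L for x, p in pts]
    # sine-mode coefficients by discrete projection (approximate least squares)
    coef = []
    for k in range(1, n_modes + 1):
        num = sum(u * math.sin(k * math.pi * v / L) for u, v in zip(us, vs))
        den = sum(math.sin(k * math.pi * v / L) ** 2 for v in vs)
        coef.append(num / den if den else 0.0)
    cdf, pdf = [], []
    for j in range(1, n_grid):
        v = L * j / n_grid
        u = sum(a * math.sin(k * math.pi * v / L) for k, a in enumerate(coef, 1))
        du = sum(a * (k * math.pi / L) * math.cos(k * math.pi * v / L)
                 for k, a in enumerate(coef, 1))
        x = (u + v) / L                    # rotate back: scaled response value
        p = (v - u) / L                    # smoothed CDF value
        slope = (1.0 - du) / (1.0 + du)    # d(CDF)/dx in scaled coordinates
        cdf.append((lo + x * (hi - lo), p))
        pdf.append((lo + x * (hi - lo), slope / (hi - lo)))
    return cdf, pdf

# Demo on hypothetical uniform samples (true CDF is the diagonal)
random.seed(1)
cdf_pts, pdf_pts = smooth_cdf_pdf([random.random() for _ in range(2000)])
```

For uniform samples the rotated curve is nearly flat, the sine coefficients are near zero, and the recovered CDF is close to the diagonal; multimodal data would instead produce a wavy rotated curve and a multi-peaked PDF.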

Numerical Examples

The Radial Basis Function meta-modeling technique has been tested with dozens of benchmark engineering problems. In this paper, three examples are presented to validate the proposed approach.

Example 1

The first example is a one-dimensional regression problem. The exact function is given as

$y = x + \sin(2x)$

If x is assumed to be a random variable with a uniform distribution over the interval [0,1], y will also be a random variable. For this simple one-dimensional case, the exact shapes of the y CDF and PDF can be calculated by integration. These results are shown in Figure 1.

Figure 1. One Dimensional Non-monotonic Example – Exact Result


The probabilistic challenge is to obtain comparable CDF and PDF results from only a few evaluations of the non-monotonic function. This can be accomplished by using a RBF approximation model. The RBF model is created from fifteen randomly selected training points shown in Figure 2 (black circles). The predicted curve (blue line) matches the exact function (red). This example demonstrates that the RBF meta-model predictions are accurate at all the validation and training points for this multi-mode function. Traditional low-degree polynomial regression models would have difficulty in accurately modeling this function. In particular, the quadratic RSM regression produces a very poor fit.

Figure 2. RBF Meta-Model Comparison

By applying the Monte Carlo Simulation to the RBF meta-model generated from the 15 training points, the Cumulative Distribution Function (CDF) and Probability Density Function (PDF) of the output variable y are computed. Figure 3 shows that the CDF and PDF curves calculated from MCS+RBF match favorably with the true CDF and PDF curves generated from the exact integration of the original function even though only 15 function evaluations were used.

Figure 3. One Dimensional Non-monotonic Example – RBF Meta-Model Result

Both the PEM and the FOSM methods fail to produce accurate CDF results for this example. Figure 4 shows the FOSM (FPI) CDF (11 function calls) compared to the exactly integrated CDF. The significant error caused by the non-monotonic nature of the function is obvious.

Figure 4. Comparison of FPI CDF to Exact CDF

Example 2

The second example is a special two-dimensional non-monotonic problem. The exact function is given as

$y = x_1^3 + x_2^3 + x_1^2 + x_2^2 - x_1^2 x_2 + x_1 x_2^2 - x_1 x_2 - 1$

The RBF model based on 25 training points (black) and 40 randomly selected validation points (red) is shown in Figure 5. Figure 6 shows the cross-validation result at each training point. The average relative percentage error for the 25 training points is 0.37%; for the 40 validation points the average percentage error is 0.075%.

Figure 5. 2D Cubic Function


Figure 6. Cross-validation at 25 Training Points

By applying Monte Carlo Simulation to the RBF model generated based on 25 training points, the Cumulative Distribution Function (CDF) of the output variable y is computed. The result (navy blue) from MCS + RBF matches the CDF curve (pink) generated from 100,000 MCS runs (interpreted to be the exact solution). The PEM result (light blue) is close to the correct solution, but FOSM failed to provide an accurate CDF analysis.

Figure 7. CDF Comparison for Example 2

Example 3

Figure 8 is a cantilevered beam problem with a variable number of elements, which is used by the authors to simulate large-variable engineering problems. By changing the number of beam sections, the problem can have as many design variables as desired. In this study, 33 sections are used and each section has 3 variables (length, height and width). By including Young's modulus and the loading, this makes a total of 101 design variables. Four output variables are the stress, the volume, and the displacements at the base and tip.

Figure 8. Cantilevered Beam Problem with Variable Sections

For this problem, 300 training and 50 validation points are generated using the Latin Hypercube method. Figure 9 shows the average percentage errors of RBF at the 50 validation points for the four output variables. The output variable stress has the largest average error, but it is only 1.6285%. Figure 10 shows the CDF and PDF results of the beam displacement at the tip. These results are obtained from only 300 function evaluations. Since this problem has 101 design variables, it is not practical to use PEM for the CDF and PDF analysis.
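Basic Latin Hypercube sampling, as used above to generate the training and validation points, can be sketched as follows. This is the standard construction for illustration, not the specific DACE tool used by the authors: each dimension is divided into n equal bins and exactly one sample falls in each bin.

```python
import random

def latin_hypercube(n, d, rng=random):
    """n samples in [0,1)^d: per dimension, a random permutation assigns one
    sample to each of n equal-probability bins, jittered within its bin."""
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(p + rng.random()) / n for p in perm])
    return [[cols[j][i] for j in range(d)] for i in range(n)]
```

Stratifying every one of the 101 input dimensions this way is what lets only 300 training points cover the space well enough for an accurate RBF fit.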

Figure 9. Average percentage error of RBF at 50 validation points (Stress: 1.6285%; Volume: 0.8467%; Displacement: 0.3866%; Slope: 0.8045%)


Figure 10. CDF and PDF of Beam Displacement at Tip

Summary

This paper proposed a practical approach for CDF and PDF analysis in design spaces that are non-monotonic or have a large number of design variables. Numerical results from the test examples demonstrate that the proposed approach is capable of providing accurate CDF and PDF analysis with a small number of simulation runs.

References

1. H. O. Madsen, S. Krenk, N. C. Lind, "Methods of Structural Safety", Prentice-Hall International Series in Civil Engineering and Engineering Mechanics, 1986, pp. 44-101.

2. R. E. Melchers, "Structural Reliability Analysis and Prediction", Ellis Horwood Limited Publishers, Halsted Press, a Division of John Wiley & Sons, 1987, pp. 104-141.

3. Y. T. Wu and O. H. Burnside, "Efficient Probabilistic Structural Analysis Using An Advanced Mean Value Method", Proceedings of 5th ASCE Specialty Conference – Probabilistic Methods in Civil Engineering, ASCE, New York, pp. 492-495.

4. Liping Wang and Ramana Grandhi, "Efficient Safety Index Calculation for Structural Reliability Analysis", Journal of Computers and Structures, Vol. 52, Nov. 1, 1994, pp. 103-111.

5. Liping Wang and Don Beeson, "Non-Gradient Based Methods For Probabilistic Analysis", 44th AIAA/ASME/ASCE/AHS Structures, Structural Dynamics, and Materials Conference, AIAA 2003-1782, 7-10 April 2003, Norfolk, Virginia.

6. Emilio Rosenblueth, "Point Estimates for Probability Moments", Applied Mathematics Modeling, Vol. 5, October 1981, pp. 329-335.

7. Milton Harr, "Reliability Based Design in Civil Engineering", McGraw-Hill Book Company, New York, 1987.


8. Hyun Seok Seo and Byung Man Kwak, "Efficient Statistical Tolerance Analysis for General Distributions Using Three-Point Information", Int. J. Prod. Res., 2002, Vol. 40, No. 4, pp. 931-944.

9. Liping Wang, Don Beeson and Gene Wiggs, "Efficient and Accurate Point Estimate Method for Moments and Probability Distribution Estimation", 10th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, August 30 – Sept 1, 2004, Albany, New York.

10. P. E. Gill, W. Murray, M. A. Saunders and M. H. Wright, "Computing Forward-Difference Intervals for Numerical Optimization", SIAM J. Sci. Stat. Comput., Vol. 4, No. 2, June 1983.

11. J. Iott, R. T. Haftka, and H. M. Adelman, "Selecting Step Sizes in Sensitivity Analysis by Finite Differences," NASA TM-86382, 1985.

12. Liping Wang and Kristy Gao, "Automatic Step-size Procedure in Forward-difference for Reliability and Design Optimization", DETC99/DAC-8603, 1999 ASME Design Engineering Technical Conference, Sept. 12-15, 1999, Las Vegas, Nevada.

13. M. N. Gibbs and David J. C. MacKay, "Efficient Implementation of Gaussian Processes", 1997.

14. David J. C. MacKay, "Introduction to Gaussian Processes", 1999.

15. Debora D. Daberkow and Dimitri N. Mavris, "An Investigation of Metamodeling Techniques for Complex System Design", 9th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Sept. 4-6, 2002, Atlanta, Georgia.

16. Radford M. Neal, "Monte Carlo Implementation of Gaussian Process Models for Bayesian Regression and Classification", Technical Report No. 9702, Department of Statistics, University of Toronto.

17. Liping Wang, Don Beeson, Gene Wiggs, "Gaussian Process Meta-models for Efficient Probabilistic Design In Complex Engineering Design Spaces", ASME 2005 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, September 24-28, 2005, Long Beach, California.

18. Sacks, J., Welch, W. J., Mitchell, T. J. and Wynn, H. P., "Design and Analysis of Computer Experiments," Statistical Science, 4(4), pp. 409-435, 1989.

19. Booker, A. J., Dennis, J. E., Jr., Frank, P. D., Serafini, D. B., Torczon, V. and Trosset, M. W., "A Rigorous Framework for Optimization of Expensive Functions by Surrogates," Structural Optimization, 17(1), pp. 1-13, 1999.

20. Friedman, J. H., "Multivariate Adaptive Regression Splines," The Annals of Statistics, 19(1), pp. 1-141, 1991.

21. Dyn, N., Levin, D. and Rippa, S., "Numerical Procedures for Surface Fitting of Scattered Data by Radial Basis Functions," SIAM Journal of Scientific and Statistical Computing, Vol. 7, No. 2, pp. 639-659, 1986.

22. Hardy, R. L., "Multiquadric Equations of Topography and Other Irregular Surfaces," J. Geophys. Res., Vol. 76, pp. 1905-1915, 1971.

23. Anoop A. Mullur and Achille Messac, "Extended Radial Basis Functions: More Flexible and Effective Metamodeling", 10th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, August 30 – Sept 1, 2004, Albany, New York.


24. Smith, M., "Neural Networks for Statistical Modeling", Van Nostrand Reinhold, New York, 1993.

25. Stella M. Clarke, Jan H. Griebsch, and Timothy W. Simpson, "Analysis of Support Vector Regression for Approximation of Complex Engineering Analyses", ??

26. Liping Wang, Don Beeson, Gene Wiggs, and Mahidhar Rayasam, "A Comparison of Meta-modeling Methods Using Practical Industry Requirements", 47th AIAA/ASME/ASCE/AHS Structures, Structural Dynamics, and Materials Conference, 1-4 May 2006, Newport, Rhode Island.

27. Nelder, J. A. and Mead, R., "A Simplex Method for Function Minimization," Comput. J., 7, pp. 308-313, 1965.

