Efficient and Accurate Point Estimate Method for Moments and Probability Distribution Estimation

Liping Wang
General Electric Global Research Center, Niskayuna, NY 12309

Don Beeson & Gene Wiggs
GE Transportation, Cincinnati, OH 45215

10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, 30 August - 1 September 2004, Albany, New York. AIAA 2004-4359. Copyright © 2004 by L. Wang, D. Beeson, G. Wiggs. Published by the American Institute of Aeronautics and Astronautics, Inc., with permission.

Abstract

The point estimate method (PEM) is an alternative to Monte Carlo Simulation (MCS) and First Order Second Moments (FOSM) for evaluating the moments and probability distribution of system or component performance. Although PEM is a powerful and simple method, it is often limited by the need to make $2^n$ or even $3^n$ evaluations when there are n random variables, which is unaffordable for many engineering applications. In addition, robustly finding the four parameters of a Beta or Lambda distribution is a difficult task that has not been well discussed in the past. This paper proposes two unique methods that enable faster and more accurate estimation of moments and probability distributions: 1) affordable and more accurate moment estimation obtained by applying different numbers of points to different variables, and 2) a robust procedure for finding the four parameters of the Beta and Lambda distributions. The efficiency and accuracy of the proposed methods are validated with three benchmark problems.

Introduction

Performance function probability distributions and their associated first moments are important statistical unknowns in engineering probabilistic design activities for robustness and reliability. Methods of predicting these quantities using computer simulations enable engineers to more accurately model the physical world and make design changes which will produce engineering systems, components or processes that are less sensitive to manufacturing and environmental variation while at the same time achieving requirements for performance, reduced costs and improved reliability.

Commonly used methods for moment calculations are: Taylor's series expansion, transfer function construction, Monte Carlo Simulation (MCS), and First Order Second Moment (FOSM). FOSM and Taylor's series expansion both require accurate gradient calculations that can impose excessive restrictions on the performance functions (smoothness and the existence and continuity of the first or first few derivatives). On the other hand, MCS and transfer functions generated from DOEs do not require gradient calculations, but may also have difficulty because their computational cost can be too high if high accuracy is required or if the number of input variables is large (greater than 10).

Commonly used methods for probability distribution analysis, i.e., for generating the Cumulative Distribution Function (CDF) and Probability Density Function (PDF), are MCS and FOSM. As with moment estimation, MCS usually requires a minimum of thousands of runs for CDF and PDF generation. The AMV (Advanced Mean Value) method [1] can produce CDF or PDF results with 2-4 orders of magnitude fewer runs than MCS, and the FOSM method usually works well for linear or nearly linear problems. However, it may fail to provide accurate results for problems with a noisy performance function or multiple MPPs (Most Probable failure Points). In addition, the method becomes less efficient when applied to problems with multiple performance functions.

The Point Estimate Method (PEM), originally proposed by Rosenblueth [2], is a simple but powerful technique for evaluating the moments and probability distribution information of performance functions. It has been widely used in many geotechnical and civil engineering analyses [3-10]. Unlike most FOSM techniques [1, 11-13], PEM does not require the gradients of the performance functions with respect to the input variables. Also, it does not use any search techniques. The distribution of the performance function can be described in terms of a 4-parameter Beta or Lambda distribution. The non-calculus nature of the algorithm makes it extremely robust for typical engineering simulations that may be noisy, discontinuous or non-monotonic.

Many PEM methods described in the literature [2-10] use two points per variable. This requires $2^n$ function calls, where n is the number of input variables. N. C. Lind [9] developed a face center point method that requires $4n^3$ runs and a two-point approximation method that requires $4n^2$ runs. Furthermore, Milton Harr and H. P. Hong attempted to reduce the number of runs to 2n [10]. Lind's, Harr's and Hong's methods all involve replacing the points at the $2^n$ corners of the hypercube with 2n points at or near the intersections of the circumscribing hyper-surface with its principal axes. However, these approaches must move the evaluation points farther away from the means as n increases, since the radius of a hyper-sphere circumscribing a unit hypercube of n dimensions is $\sqrt{n}$; for a bounded variable, the points may easily fall outside the domain of definition of the variable. Seo and Kwak [14] developed a more accurate method for moment calculations using three points per variable, which requires $3^n$ function calls. This approach becomes unaffordable if the number of input variables is large (greater than 10).

In addition, most articles on PEM do not discuss in detail how to solve for the four parameters of the Beta and Lambda distributions from the predicted four moments of the performance functions. Accurately calculating these four distribution parameters is a key activity in the PEM method. As in Seo and Kwak's paper [14], the four parameters are solved using the Pearson system. In fact, robustly finding the four parameters of the Beta and Lambda distributions is not an easy task, and no single published approach can be applied universally to guarantee the correct determination of the four parameters. There is a need for a new, robust procedure to calculate the four distribution parameters.

The objective of this paper is to develop a new, efficient and accurate method for moment and probability distribution estimation that can be used with all computer simulations. This paper documents and describes the two unique techniques of this method: 1) efficient and accurate moment estimation obtained by applying a variable number of PEM points to each input variable, and 2) a robust procedure for finding the four parameters of the Beta and Lambda distributions. The efficiency and accuracy of the proposed methods have been validated with many examples; three benchmark problems are included below.

Efficient and Accurate Moment Estimation

All existing PEM approaches apply the same number of points to each input variable, which is inefficient for many practical problems since different variables may contribute differently to the performance. In order to reduce the number of runs while at the same time achieving better accuracy, the proposed new PEM applies a different number of points to each input variable. The percent contribution of each variable x to the total variation and the non-linearity of y with respect to x are taken into account to determine how many points to use for each variable. In general, more points (3 or 4) are used if a variable causes high non-linearity and makes a significant contribution to y, while fewer points (1 or 2) are used if a variable is linear or close to linear and has a low contribution to y. The percent contribution of the ith variable to the total variation, $Pct_i$, is computed as

$$Pct_i = \frac{\left(\dfrac{\partial y}{\partial x_i}\,\sigma_i\right)^2}{\left(\dfrac{\partial y}{\partial x_1}\,\sigma_1\right)^2 + \left(\dfrac{\partial y}{\partial x_2}\,\sigma_2\right)^2 + \cdots + \left(\dfrac{\partial y}{\partial x_n}\,\sigma_n\right)^2} \qquad (1)$$
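A minimal Python sketch of Eq. (1), assuming forward finite differences are an acceptable approximation of the partial derivatives; the function names, step size and example inputs are illustrative and are not taken from the paper.

import numpy as np

def percent_contributions(y, means, sigmas, h=1e-4):
    # Approximate Eq. (1): share of each input in the first-order variance of y.
    # y: callable on a 1-D array; means, sigmas: input means and standard deviations;
    # h: relative forward-difference step (an assumption, not from the paper).
    means = np.asarray(means, dtype=float)
    sigmas = np.asarray(sigmas, dtype=float)
    y0 = y(means)
    terms = np.empty_like(means)
    for i in range(means.size):
        x = means.copy()
        step = h * max(abs(x[i]), 1.0)
        x[i] += step
        dy_dxi = (y(x) - y0) / step           # forward-difference estimate of dy/dx_i
        terms[i] = (dy_dxi * sigmas[i]) ** 2  # (dy/dx_i * sigma_i)^2
    return terms / terms.sum()                # Pct_i

# Example: y = x1^2 + x2 with both inputs Normal(10, 3)
print(percent_contributions(lambda x: x[0] ** 2 + x[1], [10.0, 10.0], [3.0, 3.0]))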

A Two-Point Adaptive Nonlinear Approximation (TANA) [16] is used to determine the non-linearity index $r_i$. At the two pre-selected points, $X_1$ and $X_2$, the approximation is given as

$$\tilde{g}(X) = g(X_1) + \sum_{i=1}^{n} \frac{\partial g(X_1)}{\partial x_i}\, \frac{x_{i,1}^{\,1-r_i}}{r_i}\left(x_i^{\,r_i} - x_{i,1}^{\,r_i}\right) + \frac{1}{2}\,\varepsilon \sum_{i=1}^{n}\left(x_i^{\,r_i} - x_{i,1}^{\,r_i}\right)^2 \qquad (2)$$

The non-linearity index $r_i$ and the constant $\varepsilon$ are solved by matching the function value and gradients at $X_2$, that is,

$$\frac{\partial g(X_2)}{\partial x_i} = \frac{\partial g(X_1)}{\partial x_i}\left(\frac{x_{i,2}}{x_{i,1}}\right)^{r_i-1} + \varepsilon\, r_i\, x_{i,2}^{\,r_i-1}\left(x_{i,2}^{\,r_i} - x_{i,1}^{\,r_i}\right), \qquad i = 1,2,\ldots,n \qquad (3a)$$

$$g(X_2) = g(X_1) + \sum_{i=1}^{n} \frac{\partial g(X_1)}{\partial x_i}\, \frac{x_{i,1}^{\,1-r_i}}{r_i}\left(x_{i,2}^{\,r_i} - x_{i,1}^{\,r_i}\right) + \frac{1}{2}\,\varepsilon \sum_{i=1}^{n}\left(x_{i,2}^{\,r_i} - x_{i,1}^{\,r_i}\right)^2 \qquad (3b)$$

(See Ref. [16] for more details on TANA)
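As a rough illustration of how a non-linearity index can be extracted from two-point gradient information, the sketch below uses the closed-form index of the original TANA; it deliberately drops the $\varepsilon$ correction of Eqs. (2)-(3), so it is a simplification of the paper's TANA-2 procedure rather than a reproduction of it, and the example function is ours.

import numpy as np

def nonlinearity_index(grad1, grad2, x1, x2):
    # Closed-form r_i from matching gradient components at two points X1 and X2.
    # All components of x1 and x2 are assumed positive and distinct.
    grad1, grad2, x1, x2 = map(np.asarray, (grad1, grad2, x1, x2))
    return 1.0 + np.log(np.abs(grad1 / grad2)) / np.log(x1 / x2)

# Example: g(x) = x1^3 + x2, gradients evaluated at X1 = (1, 1) and X2 = (2, 2)
grad = lambda x: np.array([3.0 * x[0] ** 2, 1.0])
print(nonlinearity_index(grad(np.array([1.0, 1.0])), grad(np.array([2.0, 2.0])),
                         np.array([1.0, 1.0]), np.array([2.0, 2.0])))

The printout is [3. 1.], i.e., the cubic variable is flagged as highly non-linear (r = 3) and the linear variable as linear (r = 1), which is the kind of information used to assign more or fewer PEM points to each variable.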

The number of points for each variable can then be determined based on the non-linearity index and the percentage contribution of each variable to Y. If the number of points is one, the point is located at the mean and its probability is 1.0, that is,

$$x = \mu_x, \qquad p = 1.0 \qquad (4)$$

where $\mu_x$ is the mean of variable x.

If the number of points is two, the Rosenblueth equations [2] are used for the x-locations and probabilities, that is,

$$
\begin{aligned}
x_1 &= \mu_x + \left(\frac{\upsilon_x}{2} + \sqrt{1+\left(\frac{\upsilon_x}{2}\right)^2}\,\right)\sigma_x \\
x_2 &= \mu_x + \left(\frac{\upsilon_x}{2} - \sqrt{1+\left(\frac{\upsilon_x}{2}\right)^2}\,\right)\sigma_x \\
p_1 &= 0.5\left(1 - \frac{\upsilon_x/2}{\sqrt{1+\left(\upsilon_x/2\right)^2}}\right), \qquad p_2 = 1 - p_1
\end{aligned}
\qquad (5)
$$

where $\sigma_x$ and $\upsilon_x$ are the standard deviation and skewness of the variable x, respectively.
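A small sketch of Eq. (5) for a single input variable; it is a direct transcription of the formulas above with illustrative names.

import numpy as np

def rosenblueth_two_points(mu, sigma, skew):
    # Two point locations and probabilities of Eq. (5).
    half = skew / 2.0
    root = np.sqrt(1.0 + half ** 2)
    x1 = mu + (half + root) * sigma
    x2 = mu + (half - root) * sigma
    p1 = 0.5 * (1.0 - half / root)
    return (x1, p1), (x2, 1.0 - p1)

# Symmetric input (zero skew): points at mu +/- sigma, each with probability 0.5
print(rosenblueth_two_points(10.0, 3.0, 0.0))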

If the number of points is three, the Seo-Kwak equations are applied to find the three point locations and probabilities. The equations are given as

$$
x_1 = \mu_x + \frac{\upsilon_x - \sqrt{4\gamma_x - 3\upsilon_x^2}}{2}\,\sigma_x, \qquad
x_2 = \mu_x, \qquad
x_3 = \mu_x + \frac{\upsilon_x + \sqrt{4\gamma_x - 3\upsilon_x^2}}{2}\,\sigma_x \qquad (6a)
$$

$$
p_1 = \frac{4\gamma_x - 3\upsilon_x^2 + \upsilon_x\sqrt{4\gamma_x - 3\upsilon_x^2}}{2\left(4\gamma_x - 3\upsilon_x^2\right)\left(\gamma_x - \upsilon_x^2\right)}, \qquad
p_3 = \frac{4\gamma_x - 3\upsilon_x^2 - \upsilon_x\sqrt{4\gamma_x - 3\upsilon_x^2}}{2\left(4\gamma_x - 3\upsilon_x^2\right)\left(\gamma_x - \upsilon_x^2\right)}, \qquad
p_2 = 1 - p_1 - p_3 \qquad (6b)
$$

where $\gamma_x$ is the kurtosis of the variable x. Detailed information on the Seo-Kwak approach can be found in Ref. [14].
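A sketch of Eqs. (6a)-(6b) as written above. For a normal input (zero skewness, kurtosis 3) it reduces to the classical three-point rule with locations at the mean and at $\mu_x \pm \sqrt{3}\,\sigma_x$ and probabilities 2/3 and 1/6, which is a convenient sanity check; the function name is ours.

import numpy as np

def three_point_estimates(mu, sigma, skew, kurt):
    # Three point locations and probabilities of Eqs. (6a)-(6b) for one variable.
    s = np.sqrt(4.0 * kurt - 3.0 * skew ** 2)
    x1 = mu + 0.5 * (skew - s) * sigma
    x3 = mu + 0.5 * (skew + s) * sigma
    denom = 2.0 * (4.0 * kurt - 3.0 * skew ** 2) * (kurt - skew ** 2)
    p1 = (4.0 * kurt - 3.0 * skew ** 2 + skew * s) / denom
    p3 = (4.0 * kurt - 3.0 * skew ** 2 - skew * s) / denom
    return (x1, p1), (mu, 1.0 - p1 - p3), (x3, p3)

# Normal input: recovers (mu - sqrt(3)*sigma, 1/6), (mu, 2/3), (mu + sqrt(3)*sigma, 1/6)
print(three_point_estimates(10.0, 3.0, 0.0, 3.0))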

In the case that the number of points equals four, the four point locations and probabilities can be derived from the area and the first seven moments, that is,

$$
\begin{aligned}
p_1 + p_2 + p_3 + p_4 &= 1 \\
z_1 p_1 + z_2 p_2 + z_3 p_3 + z_4 p_4 &= m_1 \\
z_1^2 p_1 + z_2^2 p_2 + z_3^2 p_3 + z_4^2 p_4 &= m_2 \\
z_1^3 p_1 + z_2^3 p_2 + z_3^3 p_3 + z_4^3 p_4 &= m_3 \\
z_1^4 p_1 + z_2^4 p_2 + z_3^4 p_3 + z_4^4 p_4 &= m_4 \\
z_1^5 p_1 + z_2^5 p_2 + z_3^5 p_3 + z_4^5 p_4 &= m_5 \\
z_1^6 p_1 + z_2^6 p_2 + z_3^6 p_3 + z_4^6 p_4 &= m_6 \\
z_1^7 p_1 + z_2^7 p_2 + z_3^7 p_3 + z_4^7 p_4 &= m_7
\end{aligned} \qquad (7)
$$

where $z_1, z_2, z_3, z_4$ and $p_1, p_2, p_3, p_4$ are the point locations in the standardized space and the corresponding probabilities, and $m_1, \ldots, m_7$ are the standardized moments.

It is difficult to derive analytical equations directly for the point locations and probabilities from the above system of equations without introducing any assumptions. If the point locations are solved based on the area and the first three moments only, the approach may work for linear or close-to-linear problems; however, it may not be accurate for nonlinear or highly nonlinear problems because the higher moments (fourth and above) are neglected. Directly applying optimization algorithms to search for the eight unknown parameters ($z_1, z_2, z_3, z_4$ and $p_1, p_2, p_3, p_4$) may find solutions for some problems, but will not be robust and accurate for all problems.

The proposed new approach takes advantage of the fact that all the probabilities $p_i \; (i = 1, 2, 3, 4)$ are linear factors in the above system of equations. Taking advantage of this feature, the probabilities can be expressed simply as

$$P = \left(Z^T Z\right)^{-1} Z^T M \qquad (8)$$

where

$$
P = \begin{bmatrix} p_1 \\ p_2 \\ p_3 \\ p_4 \end{bmatrix}, \qquad
Z = \begin{bmatrix}
1 & 1 & 1 & 1 \\
z_1 & z_2 & z_3 & z_4 \\
z_1^2 & z_2^2 & z_3^2 & z_4^2 \\
z_1^3 & z_2^3 & z_3^3 & z_4^3 \\
z_1^4 & z_2^4 & z_3^4 & z_4^4 \\
z_1^5 & z_2^5 & z_3^5 & z_4^5 \\
z_1^6 & z_2^6 & z_3^6 & z_4^6 \\
z_1^7 & z_2^7 & z_3^7 & z_4^7
\end{bmatrix}, \qquad
M = \begin{bmatrix} 1 \\ m_1 \\ m_2 \\ m_3 \\ m_4 \\ m_5 \\ m_6 \\ m_7 \end{bmatrix}
$$

The errors for the area and moments can then be calculated as

$$E(z_1, z_2, z_3, z_4) = M - Z \cdot P \qquad (9)$$

Solving for the four point locations and probabilities from Eq. (7) can then be expressed as the following optimization problem:

$$
\begin{aligned}
\text{Minimize} \quad & E^T(z_1, z_2, z_3, z_4)\, E(z_1, z_2, z_3, z_4) \\
\text{Subject to} \quad & z_1 < z_2 < z_3 < z_4
\end{aligned} \qquad (10)
$$

The optimum search for the above problem is much more robust and accurate since the problem has only four variables instead of eight parameters. This approach can be applied to solve the four point locations for any distribution.
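A minimal sketch of Eqs. (8)-(10). It uses NumPy's least-squares solver for the probabilities and SciPy's Nelder-Mead search in place of whatever optimizer the authors used; the starting point, tolerances and function names are illustrative assumptions. Because the probabilities are eliminated analytically, the search runs over only the four ordered locations, which is exactly the robustness argument made above.

import numpy as np
from scipy.optimize import minimize

def build_Z(z):
    # 8 x 4 matrix of Eq. (8): row k holds the k-th powers (k = 0..7) of the four locations.
    return np.vander(np.asarray(z, dtype=float), 8, increasing=True).T

def probs_and_residual(z, M):
    Z = build_Z(z)
    P, *_ = np.linalg.lstsq(Z, M, rcond=None)  # least-squares form of P = (Z'Z)^-1 Z'M, Eq. (8)
    return P, M - Z @ P                        # residual E of Eq. (9)

def four_point_fit(M, z0):
    # Eq. (10): search only the four ordered locations; the probabilities follow linearly.
    def objective(z):
        _, E = probs_and_residual(np.sort(z), M)
        return float(E @ E)
    res = minimize(objective, z0, method="Nelder-Mead",
                   options={"xatol": 1e-12, "fatol": 1e-16, "maxiter": 20000})
    z = np.sort(res.x)
    return z, probs_and_residual(z, M)[0]

# Standard normal target: the area plus the seven standardized moments of Eq. (12)
M = np.array([1.0, 0.0, 1.0, 0.0, 3.0, 0.0, 15.0, 0.0])
z, P = four_point_fit(M, z0=np.array([-2.0, -0.5, 0.5, 2.0]))
print(z)  # close to [-2.3344, -0.7420, 0.7420, 2.3344], cf. Eq. (13a)
print(P)  # close to [ 0.0459,  0.4541, 0.4541, 0.0459], cf. Eq. (13b)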

For distributions like the normal, lognormal, uniform and exponential, more robust and simpler methods can be used to find the point locations and probabilities. Instead of performing optimization searches for the point locations every time PEM is run, a pre-calculated table that stores the point locations for standardized distributions can be generated. In order to obtain the pre-calculated table, the transformation functions of the distributions are used to convert the first seven moments to standardized moments, and the four point locations in the standardized space Z are computed. Based on $Z_1$, $Z_2$, $Z_3$ and $Z_4$, the four point locations in the original space X can then be easily computed from the transformation functions. For example, for a normal distribution, the transformation function is given as

$$z(x) = \frac{x - \mu_x}{\sigma_x} \qquad (11)$$

The seven standardized moments are given as

$$M = \begin{bmatrix} 1 \\ M_1 \\ M_2 \\ M_3 \\ M_4 \\ M_5 \\ M_6 \\ M_7 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 1 \\ 0 \\ 3 \\ 0 \\ 15 \\ 0 \end{bmatrix} \qquad (12)$$

The four point locations in the standardized space Z and probabilities are computed as

$$\begin{bmatrix} Z_1 \\ Z_2 \\ Z_3 \\ Z_4 \end{bmatrix} = \begin{bmatrix} -2.33441421 \\ -0.74196411 \\ \phantom{-}0.74196315 \\ \phantom{-}2.33441397 \end{bmatrix} \qquad (13a)$$

$$\begin{bmatrix} P_1 \\ P_2 \\ P_3 \\ P_4 \end{bmatrix} = \begin{bmatrix} 0.04587585 \\ 0.45412384 \\ 0.45412444 \\ 0.04587588 \end{bmatrix} \qquad (13b)$$
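As a quick consistency check (ours, not the paper's), the tabulated rule of Eqs. (13a)-(13b) should reproduce the area and the standardized normal moments of Eq. (12); the digits below are simply those recovered from the extracted table.

import numpy as np

z = np.array([-2.33441421, -0.74196411, 0.74196315, 2.33441397])  # Eq. (13a)
p = np.array([0.04587585, 0.45412384, 0.45412444, 0.04587588])    # Eq. (13b)

targets = [1.0, 0.0, 1.0, 0.0, 3.0, 0.0, 15.0, 0.0]  # Eq. (12)
for k in range(8):
    print(k, round(float(np.sum(p * z ** k)), 6), "target:", targets[k])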

For skewed distributions like the Weibull and extreme value distributions, a one-dimensional table of the point locations and probabilities as a function of the shape parameter β can be generated by solving the optimization problem given by Eq. (10).

The performance function values at all combinations of the point locations for all the variables are computed by running the simulation code. The total number of runs is

$$N_{run} = N_1 N_2 \cdots N_n \qquad (14)$$

where $N_i$ is the number of points for the ith variable. For a problem with n = 10 factors (x's), if 4 x's need 3 points, 3 x's need 2 points, and the remaining 3 x's need only 1 point, the total number of runs is $(3^4)(2^3)(1^3) + 2(10+1) = 670$, which is only 1.1% of the 59049 runs required by the 3-level Seo-Kwak PEM.

The first four moments of the performance function Y, which is non-linear in general, are then calculated from the following equations,

$$
\begin{aligned}
M_1 &= \sum_{i_1=1}^{N_1}\cdots\sum_{i_n=1}^{N_n} p_{1,i_1}\cdots p_{n,i_n}\, y\!\left(x_{1,i_1},\ldots,x_{n,i_n}\right) \\
M_2 &= \sum_{i_1=1}^{N_1}\cdots\sum_{i_n=1}^{N_n} p_{1,i_1}\cdots p_{n,i_n}\left(y\!\left(x_{1,i_1},\ldots,x_{n,i_n}\right) - M_1\right)^2 \\
M_3 &= \sum_{i_1=1}^{N_1}\cdots\sum_{i_n=1}^{N_n} p_{1,i_1}\cdots p_{n,i_n}\left(y\!\left(x_{1,i_1},\ldots,x_{n,i_n}\right) - M_1\right)^3 \Big/ M_2^{3/2} \\
M_4 &= \sum_{i_1=1}^{N_1}\cdots\sum_{i_n=1}^{N_n} p_{1,i_1}\cdots p_{n,i_n}\left(y\!\left(x_{1,i_1},\ldots,x_{n,i_n}\right) - M_1\right)^4 \Big/ M_2^{2}
\end{aligned} \qquad (15)
$$

where $M_1$, $M_2$, $M_3$ and $M_4$ are the mean, variance, skewness and kurtosis of the performance function.
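A direct transcription of Eq. (15) into Python, consuming a design such as the one enumerated in the previous sketch; the toy one-variable example is ours and only illustrates the bookkeeping. With three points, the mean and variance of $y = x^2$ for a standard normal x are exact, while the skewness and kurtosis are not, which is precisely why more points are assigned to highly non-linear variables.

import numpy as np

def response_moments(y, X, w):
    # Eq. (15): mean, variance, skewness and kurtosis of Y from the PEM runs X
    # and the product probabilities w attached to each run.
    g = np.array([y(x) for x in X])
    m1 = np.sum(w * g)
    m2 = np.sum(w * (g - m1) ** 2)
    m3 = np.sum(w * (g - m1) ** 3) / m2 ** 1.5
    m4 = np.sum(w * (g - m1) ** 4) / m2 ** 2
    return m1, m2, m3, m4

# Toy check: y = x^2 with x at the three-point standard normal rule of Eq. (6)
X = np.array([[-np.sqrt(3.0)], [0.0], [np.sqrt(3.0)]])
w = np.array([1 / 6, 2 / 3, 1 / 6])
print(response_moments(lambda x: x[0] ** 2, X, w))  # mean 1.0 and variance 2.0 are exact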

Robust procedure for finding the four parameters of Beta and Lambda distributions

In order to robustly solve for the four parameters of the Beta and Lambda distributions, the proposed procedure starts by finding analytical solutions for the Beta distribution and follows up with numerical solutions using a set of optimization algorithms, including the Newton method, BFGS and FR (Vanderplaats' Design Optimization Tool, DOT), and the well-known downhill simplex method.

The analytical solution of the four parameters of Beta distribution can be found by solving the following four equations,

$$M_1 = a + \frac{\lambda_1}{\lambda_1+\lambda_2}\,(b-a) \qquad (16a)$$

$$M_2 = \frac{(b-a)^2\,\lambda_1\lambda_2}{(\lambda_1+\lambda_2)^2(\lambda_1+\lambda_2+1)} \qquad (16b)$$

$$M_3 = \frac{2\,(\lambda_2-\lambda_1)\sqrt{\lambda_1+\lambda_2+1}}{(\lambda_1+\lambda_2+2)\sqrt{\lambda_1\lambda_2}} \qquad (16c)$$

$$M_4 = \frac{3\,(\lambda_1+\lambda_2+1)\left[2(\lambda_1+\lambda_2)^2 + \lambda_1\lambda_2(\lambda_1+\lambda_2-6)\right]}{\lambda_1\lambda_2\,(\lambda_1+\lambda_2+2)(\lambda_1+\lambda_2+3)} \qquad (16d)$$

where $M_1$, $M_2$, $M_3$ and $M_4$ are the mean, variance, skewness and kurtosis of the performance function, $\lambda_1$ and $\lambda_2$ are the shape parameters of the Beta distribution, and a and b are its lower and upper bounds.

For most test cases, the analytical equations can provide exact solutions for the four parameters of the Beta distribution and produce a good fit of the CDF/PDF for the response Y. In the case that analytical solutions are not available, numerical approaches such as the Newton method and optimization algorithms are applied to find the solutions. The optimization problem for the Beta distribution is defined as

$$
\begin{aligned}
\text{Minimize} \quad & Err1(\lambda_1,\lambda_2) + Err2(\lambda_1,\lambda_2) \\
\text{Subject to} \quad & \lambda_1 > 0, \quad \lambda_2 > 0
\end{aligned} \qquad (17)
$$

where

$$Err1(\lambda_1,\lambda_2) = M_3 - \frac{2\,(\lambda_2-\lambda_1)\sqrt{\lambda_1+\lambda_2+1}}{(\lambda_1+\lambda_2+2)\sqrt{\lambda_1\lambda_2}}$$

$$Err2(\lambda_1,\lambda_2) = M_4 - \frac{3\,(\lambda_1+\lambda_2+1)\left[2(\lambda_1+\lambda_2)^2 + \lambda_1\lambda_2(\lambda_1+\lambda_2-6)\right]}{\lambda_1\lambda_2\,(\lambda_1+\lambda_2+2)(\lambda_1+\lambda_2+3)}$$
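A minimal sketch of the Beta fit of Eqs. (16)-(17), using SciPy's least_squares in place of the authors' optimizer cascade; the starting point is an illustrative assumption, and the bounds a and b are recovered from Eqs. (16a)-(16b) once the shape parameters are known.

import numpy as np
from scipy.optimize import least_squares

def beta_skew_kurt(lam):
    # Skewness and kurtosis of a Beta(lam1, lam2) distribution, Eqs. (16c)-(16d).
    l1, l2 = lam
    s = l1 + l2
    skew = 2.0 * (l2 - l1) * np.sqrt(s + 1.0) / ((s + 2.0) * np.sqrt(l1 * l2))
    kurt = 3.0 * (s + 1.0) * (2.0 * s ** 2 + l1 * l2 * (s - 6.0)) / (l1 * l2 * (s + 2.0) * (s + 3.0))
    return skew, kurt

def fit_beta(m1, m2, m3, m4, start=(2.0, 2.0)):
    # Shape parameters from Eq. (17); lower/upper bounds from Eqs. (16a)-(16b).
    def residual(lam):
        s, k = beta_skew_kurt(lam)
        return [s - m3, k - m4]               # Err1 and Err2 of Eq. (17)
    sol = least_squares(residual, start, bounds=(1e-6, np.inf))
    l1, l2 = sol.x
    span = np.sqrt(m2 * (l1 + l2) ** 2 * (l1 + l2 + 1.0) / (l1 * l2))  # b - a
    a = m1 - l1 * span / (l1 + l2)
    return l1, l2, a, a + span

# Moments of a symmetric Beta(2, 2) on [0, 1]: mean 0.5, variance 0.05, skew 0, kurtosis 15/7
print(fit_beta(0.5, 0.05, 0.0, 15.0 / 7.0, start=(1.0, 3.0)))  # roughly (2, 2, 0, 1)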

The optimization problem for the Lambda distribution is defined as

$$
\begin{aligned}
\text{Minimize} \quad & Err1(\lambda_3,\lambda_4) + Err2(\lambda_3,\lambda_4) \\
\text{Subject to} \quad & 0 \le \lambda_3 \le 1, \quad 0 \le \lambda_4 \le 1
\end{aligned} \qquad (18)
$$

where

$$Err1(\lambda_3,\lambda_4) = \frac{S_3 - 3S_1S_2 + 2S_1^3}{\left(S_2 - S_1^2\right)^{3/2}} - M_3 \qquad (19a)$$

$$Err2(\lambda_3,\lambda_4) = \frac{S_4 - 4S_1S_3 + 6S_1^2S_2 - 3S_1^4}{\left(S_2 - S_1^2\right)^2} - M_4 \qquad (19b)$$

Equations for computing $S_1$, $S_2$, $S_3$ and $S_4$ are given in Appendix 1. From $\lambda_3$ and $\lambda_4$ above, $\lambda_1$ (location) and $\lambda_2$ (scale) are computed as

$$\lambda_2 = \sqrt{\frac{S_2 - S_1^2}{M_2}}, \qquad \lambda_1 = M_1 - \frac{S_1}{\lambda_2} \qquad (20)$$
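A sketch of the Lambda fit of Eqs. (18)-(20). The paper's Appendix 1 is not reproduced in this transcript, so the S1-S4 expressions below are the standard Ramberg-Schmeiser formulas for the generalized Lambda distribution, which we assume is what the appendix contains; the optimizer, bounds handling and starting point are likewise illustrative.

import numpy as np
from scipy.special import beta as beta_fn
from scipy.optimize import least_squares

def gld_S(l3, l4):
    # Assumed S1..S4 (standard Ramberg-Schmeiser expressions; cf. Appendix 1 of the paper).
    S1 = 1.0 / (1.0 + l3) - 1.0 / (1.0 + l4)
    S2 = 1.0 / (1.0 + 2 * l3) + 1.0 / (1.0 + 2 * l4) - 2.0 * beta_fn(1 + l3, 1 + l4)
    S3 = (1.0 / (1.0 + 3 * l3) - 1.0 / (1.0 + 3 * l4)
          - 3.0 * beta_fn(1 + 2 * l3, 1 + l4) + 3.0 * beta_fn(1 + l3, 1 + 2 * l4))
    S4 = (1.0 / (1.0 + 4 * l3) + 1.0 / (1.0 + 4 * l4)
          - 4.0 * beta_fn(1 + 3 * l3, 1 + l4) + 6.0 * beta_fn(1 + 2 * l3, 1 + 2 * l4)
          - 4.0 * beta_fn(1 + l3, 1 + 3 * l4))
    return S1, S2, S3, S4

def fit_lambda(m1, m2, m3, m4, start=(0.15, 0.15)):
    # l3, l4 from Eqs. (18)-(19); l1 (location) and l2 (scale) from Eq. (20).
    def residual(lam):
        S1, S2, S3, S4 = gld_S(*lam)
        var = S2 - S1 ** 2
        err1 = (S3 - 3 * S1 * S2 + 2 * S1 ** 3) / var ** 1.5 - m3                   # Eq. (19a)
        err2 = (S4 - 4 * S1 * S3 + 6 * S1 ** 2 * S2 - 3 * S1 ** 4) / var ** 2 - m4  # Eq. (19b)
        return [err1, err2]
    sol = least_squares(residual, start, bounds=(1e-6, 1.0))
    l3, l4 = sol.x
    S1, S2, _, _ = gld_S(l3, l4)
    l2 = np.sqrt((S2 - S1 ** 2) / m2)   # Eq. (20)
    l1 = m1 - S1 / l2
    return l1, l2, l3, l4

# Near-normal target moments: should land close to the classical GLD normal
# approximation l1 = 0, l2 ~ 0.1975, l3 = l4 ~ 0.1349
print(fit_lambda(0.0, 1.0, 0.0, 3.0))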

After finding the four parameters, the first four moments of the performance function are checked. If the total error of the four moments is less than 0.001, the optimization search is considered to be successful; otherwise, the system will switch to another optimizer until the total error is less than 0.001.

The detailed steps of the proposed PEM approach are shown in Figure 1.

Examples

The proposed PEM approach has been tested with many nonlinear examples, particularly examples for which existing methods such as the Seo-Kwak method (Ref. [14]) could not produce accurate CDF and PDF curves. Three benchmark problems are presented below to demonstrate the efficiency and accuracy improvements of the new PEM.

Example 1

The performance function is

$$g(x_1, x_2) = x_1^2 + x_2 - 8.7577$$

in which $x_1$ and $x_2$ are random variables with normal distributions (mean = 10, standard deviation = 3). Figure 2 shows that the CDF curve generated from the new PEM matches the Monte Carlo curve exactly, whereas the 3-level DOE based Seo-Kwak PEM is off from the correct solution at the tail. Table 1 compares the four moments from MCS, the Seo-Kwak approach and the new PEM; the moments from the new PEM almost exactly match the MCS results.

Method      Seo-Kwak    New PEM     MCS (10^5)
Mean        110.2423    110.2423    110.1703
Std Dev      61.4085     61.4580     61.6242
Skewness      0.84577     0.86777     0.86804
Kurtosis      4.02432     4.01803     4.01879

Table 1. Four Moments Computed from MCS, Seo-Kwak and New PEM
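The MCS column of Table 1 can be cross-checked with a few lines of Python; for contrast, the same snippet evaluates a simple 3-point-per-variable tensor rule. The seed and sample size are arbitrary, and the tensor rule is our own illustrative stand-in rather than the exact Seo-Kwak implementation used for Table 1.

import numpy as np
from itertools import product

g = lambda x1, x2: x1 ** 2 + x2 - 8.7577

# Monte Carlo reference (compare with the MCS column of Table 1)
rng = np.random.default_rng(0)
x1, x2 = rng.normal(10.0, 3.0, (2, 100_000))
y = g(x1, x2)
print(y.mean(), y.std(),
      ((y - y.mean()) ** 3).mean() / y.std() ** 3,
      ((y - y.mean()) ** 4).mean() / y.std() ** 4)

# 3-point-per-variable tensor rule: mean 110.2423 and std dev 61.41 are exact,
# skewness comes out near 0.846 (cf. the Seo-Kwak column); kurtosis is less
# accurate with only three points per variable.
pts = np.array([10.0 - 3.0 * np.sqrt(3.0), 10.0, 10.0 + 3.0 * np.sqrt(3.0)])
prob = np.array([1 / 6, 2 / 3, 1 / 6])
yv = np.array([g(a, b) for a, b in product(pts, pts)])
w = np.array([pa * pb for pa, pb in product(prob, prob)])
m1 = np.sum(w * yv)
m2 = np.sum(w * (yv - m1) ** 2)
print(m1, np.sqrt(m2),
      np.sum(w * (yv - m1) ** 3) / m2 ** 1.5,
      np.sum(w * (yv - m1) ** 4) / m2 ** 2)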

Example 2: 10 Variables I-Beam Problem

In the example of an I-Beam CDF/PDF analysis shown in Figure 3, 10 input variables are taken into account (see Figure 3 for detailed inputs), and the performance function is computed as

$$Y = \sigma_{max} - S$$

where S is the strength, which is a random variable, and $\sigma_{max}$ is the maximum stress, which is given as

$$\sigma_{max} = \frac{P\, a\, (L-a)\, d}{2\, L\, I}$$

where

$$I = \frac{b_f\, d^3 - (b_f - t_w)(d - 2 t_f)^3}{12}$$

The new PEM requires 213 function evaluations while the Seo-Kwak PEM needs 59049 runs; its computational cost is 0.36% of the Seo-Kwak approach. Figure 4 shows that the CDF curve generated from the new PEM matches the MCS results with 100,000 runs well.
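To make the reconstructed I-beam relations concrete, the short sketch below evaluates I, sigma_max and Y at the mean values listed in Figure 3; it is a deterministic spot check only (ours, for illustration), not part of the probabilistic analysis.

# Mean values of the Figure 3 inputs (illustrative spot check only)
P, L, a, S = 6070.0, 120.0, 72.0, 170e3
d, bf, tw, tf = 2.3, 2.3, 0.16, 0.26

I = (bf * d ** 3 - (bf - tw) * (d - 2.0 * tf) ** 3) / 12.0  # section moment of inertia
sigma_max = P * a * (L - a) * d / (2.0 * L * I)             # maximum bending stress
Y = sigma_max - S                                           # performance function of Example 2
print(I, sigma_max, Y)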

Example 3: Burst Margin of Disk

This example has six random variables. The burst margin of a disk is computed as

$$M_b = \sqrt{\frac{385.82\,(MUF)\,(UTS)}{\delta \left(\dfrac{2\pi N}{60}\right)^{2} \dfrac{R^3 - R_0^3}{3\,(R - R_0)}}}$$

where

M_b = Burst Margin
UTS = Ultimate Tensile Strength (lb/in^2) ~ Normal (μ = 220,000, σ = 5,000)
δ = Density (lb/in^3) ~ Uniform (0.28, 0.30)
N = Rotor Speed (rpm) ~ (μ = 21,000, σ = 1,000)
R = Outer radius (in) ~ Normal (μ = 24, σ = 0.5)
R_0 = Inner radius (in) ~ Normal (μ = 8, σ = 0.3)
MUF = Material Utilization Factor ~ Weibull (β = 25.508, η = 0.958)

The new PEM needs 77 function evaluations while Seo-Kwak PEM takes 729 runs. Figure 5 shows the CDF curve generated from the new PEM matches the MCS results with 100,000 runs well.

Summary

The new PEM provides nearly identical results to Monte Carlo simulation with a large number of runs, while requiring significantly fewer runs than Monte Carlo. The benchmark examples demonstrate that the new PEM improves both the efficiency of the existing 3-level PEM and the accuracy of the CDF/PDF results for highly non-linear problems.

References

1. Y. T. Wu and O. H. Burnside, "Efficient Probabilistic Structural Analysis Using an Advanced Mean Value Method", Proceedings of the 5th ASCE Specialty Conference - Probabilistic Methods in Civil Engineering, ASCE, New York, pp. 492-495.

2. Emilio Rosenblueth, "Point Estimates for Probability Moments", Applied Mathematics Modeling, Vol. 5, October 1981, pp. 329-335.


3. Milton E. Harr, "Reliability-Based Design in Civil Engineering", McGraw-Hill Book Company, New York, 1987.

4. John T. Christian and Gregory B. Baecher, “The Point-Estimate Method With Large Numbers of Variables”, International Journal for Numerical and Analytical Methods in Geo-mechanics, 26, pp1515-1529, 2002.

5. Chengqing Wu, Hong Hao and Yingxin Zhou, “Distinctive and Fuzzy Failure Probability Analysis of An Anisotropic Rock Mass to Explosion Load”, International Journal for Numerical Methods In Engineering, 56, pp767-786, 2003.

6. Geethanjali Panchalingam, “Modeling of Many Correlated and Skewed Random Variables”, Appl. Math. Modeling, Vol. 18, Nov. pp. 635-640, 1994.

7. K. S. Li, "Point Estimate Method for Calculating Statistical Moments", Journal of Engineering Mechanics, Vol. 118, No. 7, July 1992.

8. Nagaraman Sivakugan and Ali Al-Harthy, “ Probabilistic Solutions to Geotechnical Problems”, Probabilistic Mechanics & Structural Reliability: Proceeding of the Seventh Specialty Conference, Worcester Polytechnic Institute, Worcester, Massachusetts, Aug. 7-9, 1996.

9. N. C. Lind, “Modeling of Uncertainty In Discrete Dynamical Systems”, Appl. Math. Modeling, Vol. 7, June, 1983.

10. H. P. Hong, “An Efficient Point Estimate Method for Probabilistic Analysis”, Reliability Engineering and System Safety, 1998, 59, pp. 261-267.

11. H. O. Madsen, S. Krenk, N. C. Lind, “Methods of Structural Safety”, Prentice-Hall International Series in Civil Engineering and Engineering Mechanics, 1986, pp. 44-101.

12. R. E. Melchers , “Structural Reliability Analysis and Prediction”, Ellis Horwood Limited Publishers, Halsted Press, a Division of John Wiley & Sons, 1987, pp. 104-141.

13. Liping Wang and Ramana Grandhi, “Efficient Safety Index Calculation for Structural Reliability Analysis”, Journal of Computers and Structures, Vol. 52, Nov.1, 1994, pp. 103-111.

14. Hyun Seok Seo and Byung Man Kwak, “Efficient Statistical Tolerance Analysis for General Distributions Using Three-Point Information”, Int. J Prod. Res. 2002, Vol. 40, No. 4, pp931-944.

15. Allen C. Miller, III and Thomas R. Rice, “ Discrete Approximations of Probability Distributions”, Management Science, Vol. 29, No. 3, March 1983.

16. Liping Wang, and Ramana Grandhi, “Improved Two-point Function Approximation for Design Optimization”, AIAA Journal, Vol. 33, No. 9, 1995, pp. 1720-1727.

[Figure 1 is a flow-chart of the proposed PEM. Key steps: (1) determine how many points to use for each input variable x, based on the percent contribution of x to the total variation of y and the non-linearity of y with respect to x (for example, x1 may use 4 points while x2 uses 1 point at its mean); (2) run the engineering simulation code at every combination of the point locations to obtain the function values g and their product probabilities p; (3) compute the four moments of the response Y from the weighted sums (P1)(g1^k) + (P2)(g2^k) + (P3)(g3^k) + (P4)(g4^k) = Mk, with k = 1 (mean), 2 (standard deviation), 3 (skewness), 4 (kurtosis); (4) find the four parameters of the Beta or Lambda distribution (a, b or λ1, λ2) from the four moments of Y, using analytical equations and numerical methods to ensure robustness and accuracy.]

Figure 1. Flow-chart of Proposed PEM

Figure 2. CDF Curves Generated from New PEM, Seo-Kwak PEM and MCS

[Figure 2 plots the CDF of Y for g(x1, x2) = x1^2 + x2 - 8.7577 with X1, X2 ~ Normal (mean = 10.0, standard deviation = 3.0), comparing Monte Carlo (10^5 runs), the new PEM and the previous PEM over a response range of roughly -100 to 600. Annotation: the new PEM (5 runs) exactly matches MCS (10^5 runs) at the tail, while the 3-point PEM (9 runs) is far off the correct answer.]

Figure 3. I-Beam Example

[Figure 3 shows the simply supported I-beam with load P applied at distance a along span L, section depth d, flange width bf, web thickness tw, flange thickness tf, and material properties E and rho, together with the input variables:]

Variable   Type     Mean     Std
P          Normal   6070     200
L          Normal   120      6
a          Normal   72       6
S          Normal   170E3    4760
E          Normal   30E6     3E6
rho        Normal   0.28     0.028
d          Normal   2.3      1/24
bf         Normal   2.3      1/24
tw         Normal   0.16     1/24
tf         Normal   0.26     1/24

Figure 4. CDF Curve Generated from New PEM and MCS

[Figure 4 plots the CDF of Y from the new PEM against the MCS results; the response ranges roughly from -1.0 x 10^5 to 1.2 x 10^5.]

Figure 5. CDF Curve Generated from New PEM and MCS

[Figure 5 plots the CDF of the burst margin from Monte Carlo (100,000 runs) and the new PEM (77 runs); the burst margin ranges roughly from 0.35 to 0.6.]

