Probabilistic design & optimization of aging structures

S. Vittal & P. Hajela

Department of Mechanical, Aerospace and Nuclear Engineering, Rensselaer Polytechnic Institute, NY (USA)

ABSTRACT

Probabilistic approaches are becoming popular as design tools for reliability-based design. One shortcoming of current probabilistic design methods is the inability to predict time-dependent reliability analytically. In this paper, a new simulation-based approach to predicting the degradation of parts using probabilistic design algorithms is presented. The proposed approach combines probabilistic design algorithms, such as FORM/SORM, with stochastic degradation models and statistical resampling theory. It is hoped that this approach can provide a bridge between the methods of analytical probabilistic design and classical reliability theory. The algorithm can be used to model a variety of fleet life management scenarios, ranging from optimal operating intervals to estimates of part residual life. This paper contains details of the proposed algorithm as well as results from representative numerical case studies.

Keywords: Probabilistic Design, Aging Structures, Statistical Resampling

INTRODUCTION

In this paper, an approach to analytically compute the reliability of aging structures is presented. This provides a bridge between classical reliability theory, which requires time-dependent failure data, and pure probabilistic design that computes a static time-independent probability of failure. It is hoped that the methodology presented in this paper will provide a possible solution to the practical and important problem of managing the risk of aging fleets like aircraft, turbines, marine structures, etc.

A major problem facing decision makers today is the management of aging structures [1]. The term 'aging' refers to any kind of degradation that takes place with service and renders the product or structure unsafe after a certain period of use. This could be corrosion, wear, fatigue, chemical erosion, etc. Classic examples of aging structures include fleets of aircraft that have flown beyond their intended design lives, power plants and boilers that were designed decades ago but continue to produce power, ships that have been exposed to significant hull corrosion, and civil engineering structures like bridges which have been exposed to decades of fatigue loading and corrosion. These products are typically high-value assets, and it is both impractical and uneconomical to retire them without assessing the risk involved in their continued operation.

There is thus a tradeoff to be made between the benefits of continued operation and the costs of unintended failure.

From an engineering perspective, it is now recognized that a significant percentage of a product's life cycle cost is tied to the product design. Modern design techniques recognize that it is critical to understand the relationship between design variability and risk, which has given rise to the field of probabilistic design. Popular probabilistic design algorithms include the Mean Value First-Order Second-Moment (MV-FOSM) method, the First and Second Order Reliability Methods (FORM/SORM), Fast Probability Integration (FPI), etc. A significant shortcoming of all current probabilistic design techniques is the inability to add time as a dimension of risk, i.e., every estimate of failure probability is a snapshot at a single point in time. An algorithm is proposed in this paper that uses the best features of probabilistic design algorithms [2] (like the First and Second Order Reliability Methods) and statistical resampling methods like the Bootstrap algorithm [3] to analytically model the time-dependent reliability of structures still in the design stage.

METHODS OF PROBABILISTIC DESIGN

The primary objective of any modern reliability estimation algorithm is to evaluate the probability of a limit state being violated. If X = {x_1, x_2, ..., x_n} is the set of design variables of interest, then a performance function can be established in the form shown in equation (1).

* Member AIAA. ** Dept. of Mechanical Engineering, Rensselaer Polytechnic Institute & Fellow, AIAA.

10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, 30 August - 1 September 2004, Albany, New York. AIAA 2004-4617.

Copyright © 2004 by Sameer Vittal. Published by the American Institute of Aeronautics and Astronautics, Inc., with permission.


g = f(x_1, x_2, \ldots, x_n)    (1)

The failure surface, or limit state, is the surface defined by g = 0. Using this approach, failure occurs when g < 0. If f_X(x_1, ..., x_n) is the joint probability density function of the design variables, then the probability of failure, P_f, is given by the multiple integral (2),

P_f = \int \cdots \int_{g(\cdot) < 0} f_X(x_1, \ldots, x_n) \, dx_1 \cdots dx_n    (2)

The integration is performed over the entire failure region, i.e., g(·) < 0. This approach, also called the full distributional approach, requires the evaluation of the joint density function in addition to multiple integrals, and is computationally expensive. If the limit state can be expressed in closed form, then the integral (2) can be solved using a variety of algorithms that include the MV-FOSM, FORM/SORM and FPI methods. Rackwitz and Fiessler developed a flexible and popular variant of the FORM algorithm, which is shown in Figure 1 [Ditlevsen, 1986]. They proposed that any probability distribution could be approximated by an equivalent normal distribution at the point of interest. This involves matching the CDF of the original distribution, F(X*), and its PDF, f(X*), to the CDF, Φ(·), and the PDF, φ(·), of an equivalent normal variable, both of which are evaluated at the point of interest (also called the checking point), X*. This point is assumed to lie on the failure surface, i.e., g(X*) = 0.
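As an illustrative sketch of the equivalent-normal step, the Rackwitz-Fiessler parameters can be computed as below; the lognormal distribution and the checking point are assumptions for illustration, not values from the paper:

    # Rackwitz-Fiessler two-parameter equivalent normal: match the CDF and PDF
    # of the original (non-normal) distribution to those of a normal at X*.
    from scipy.stats import norm, lognorm

    dist = lognorm(s=0.2, scale=100.0)  # assumed non-normal design variable
    x_star = 85.0                       # assumed checking point, g(X*) = 0

    z = norm.ppf(dist.cdf(x_star))             # z = Phi^-1( F(X*) )
    sigma_eq = norm.pdf(z) / dist.pdf(x_star)  # sigma' = phi(z) / f(X*)
    mu_eq = x_star - z * sigma_eq              # mu' = X* - z * sigma'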

For the special case of a limit state in which all the probabilistic design variables are normally distributed with coefficients of variation (COV) less than 0.15, the reliability safety index, β, can be written in the simple form shown in equation (3). This is the MV-FOSM method.

\beta = \frac{\bar{g}}{\sqrt{\sum_{i=1}^{n} \left( \frac{\partial g}{\partial x_i} \right)^2 \sigma_i^2}}    (3)

Here, \bar{g} is the mean value of the limit state and σ_i is the standard deviation of the i-th design variable. The probability of failure is then calculated from the safety index in the same way as in FORM, i.e., P_f = Φ(−β). In this paper, the reliability of aging structures has been computed using (3) to keep the calculations simple. However, real-world design variables are usually non-normal, and the FORM algorithm described in Figure 1 below should be used to calculate the failure probability.
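A minimal numerical sketch of equation (3) and P_f = Φ(−β) follows; the two-variable strength-minus-stress limit state and its moments are placeholders, and the gradient is taken by central differences rather than analytically:

    import numpy as np
    from scipy.stats import norm

    def mv_fosm(g, mu, sigma):
        """MV-FOSM: beta = g(mu) / sqrt( sum_i (dg/dx_i)^2 * sigma_i^2 )."""
        mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
        grad = np.zeros_like(mu)
        for i in range(mu.size):
            d = np.zeros_like(mu)
            d[i] = 1e-6 * max(1.0, abs(mu[i]))       # central-difference step
            grad[i] = (g(mu + d) - g(mu - d)) / (2 * d[i])
        beta = g(mu) / np.sqrt(np.sum((grad * sigma) ** 2))
        return beta, norm.cdf(-beta)                 # safety index, Pf

    # Placeholder limit state: strength minus stress
    beta, pf = mv_fosm(lambda x: x[0] - x[1], mu=[387.0, 290.0], sigma=[19.35, 29.0])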

Figure 1: Overview of the FORM method

OVERVIEW OF THE PROPOSED ALGORITHM

A "population" of similar designs is generated at the initial design stage (t = 0), in which each design has the same overall set of limit states, and the design variables X = [X_1, ..., X_n] are probabilistic and vary from unit to unit due to manufacturing variability. These virtual units are then allowed to "degrade" according to some probabilistic time-based degradation rule. In this paper, a simple linear degradation model is assumed.

The average values of the design variables at the next time increment can be obtained using equation (4):

X_i^{(t+\Delta t)} = X_i^{(t)} - C_i(\cdot) \, \Delta t    (4)

Here, C_i(·) is the degradation rate for the i-th design variable at a known level of environmental severity / usage. Its parameters depend on the past operating history, the severity of usage, the operating environment, etc. This allows flexibility in including the effect of stochastic mission environments while computing the expected degradation. Using this approach, the effect of a real-life stochastic operating environment can be included in the design process itself.

Several "virtual parts" are created at the time increment t + Δt. If we assume that we are tracking a population of N units in the field, then N, or a multiple of N, virtual designs can be generated. These could be closed-form models or larger finite element models of the part with the new mean values of the random design variables. Using this "virtual fleet" as a representative sample, the probability of failure of each unit is individually computed using probabilistic design algorithms like MV-FOSM or HLRF, or by stochastic finite element codes. At the end of this stage, we have a sample of reliability estimates for our virtually degraded fleet. The sample of "degraded" parts at each time period now provides the seed for computing a probability distribution function using statistical resampling approaches like the Bootstrap method. Thus, a distribution of part probabilities of failure is obtained. From the bootstrap algorithm, estimates of the statistics of interest can be made. These include the expected value of the probability of failure (the "mean"), variance, mode, median, confidence bounds on the failure probability, and percentiles of interest. These steps are repeated at different time points, and a series of reliability curves can then be developed at various levels of confidence. Using maximum likelihood methods, parametric probability density functions can be derived that characterize the required percentile of reliability as a function of time. The output from these algorithms is a curve in which the reliability of the structure, along with its confidence intervals, is plotted against time.
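The resampling step can be sketched as follows; the per-unit failure probabilities are synthetic stand-ins for the values the probabilistic design algorithms would produce, and the percentile method is one of several possible bootstrap interval constructions:

    import numpy as np

    rng = np.random.default_rng(1)
    # Synthetic per-unit Pf estimates for a virtual fleet of N = 20 units
    pf_fleet = rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=20)

    # Bootstrap: resample the fleet with replacement, record the mean Pf
    B = 1000
    boot = np.array([rng.choice(pf_fleet, size=pf_fleet.size, replace=True).mean()
                     for _ in range(B)])

    pf_mean = boot.mean()                       # expected fleet Pf
    lo, hi = np.percentile(boot, [5.0, 95.0])   # 90% percentile bounds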

Using the proposed algorithm, a "classical" failure distribution function, F(t), is obtained; it is now possible to incorporate optimization approaches to compute the optimal operating time, t* (or t_opt), for the new "virtual" design. At discrete time periods, it is also possible to compute the expected residual life, which is an indication of the residual value in a part that has survived until time t*. One can also develop an "optimal operating time confidence bound" by calculating a value of t* at each level of confidence. Thus, we not only have an optimal solution, but also its bounds at a known level of confidence. The various steps required in this approach are described in the flowchart shown in Figure 2.

Figure 2: Flowchart of proposed approach

The primary motivation in the development of the proposed part degradation algorithm is to predict time-dependent reliability and its confidence intervals as a function of time, rather than a static probability of failure. The proposed algorithm can be easily extended to include the following three main areas as part of an overall life and risk management framework:
1) Part risk analysis and failure forecasting,
2) Optimal maintenance and repair intervals based on the relative costs of replacement and failure, and
3) Estimation of mean residual life after a given operating interval.

[Figure 2 (flowchart): starting design → initial fleet reliability, D_j = f(x_1^{(0)}, ..., x_n^{(0)}), j = 1..m → random degradation at each time increment, x_i^{(t+Δt)} = x_i^{(t)} − C_i(·)Δt → updated fleet reliability → bootstrap distribution of P_f(t) → time-dependent reliability and confidence bounds, F(t) = 1 − exp[−(t/η)^β] → output R(t), L(t), t*. Companion plots show CPUT(t) and 1 − P_f(t) against time, with t_L, t_opt and t_U marked.]


The expected number of part failures, X(t; U), in a fleet can be calculated using equation (5).

X(t;U) = \sum_{i=1}^{N} \frac{F_i(t+U) - F_i(t)}{1 - F_i(t)}    (5)

Here, X(t;U) is the expected number of future failures in a future operating period U for a fleet of N units, given that the fleet has survived until time t, and F(t) is the cumulative failure distribution function calculated using the proposed algorithm.
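Equation (5) is straightforward to evaluate once each unit's failure CDF is known; the sketch below assumes Weibull CDFs with illustrative unit-to-unit parameters:

    import numpy as np

    def weibull_cdf(t, eta, beta):
        return 1.0 - np.exp(-(t / eta) ** beta)

    def expected_failures(t, U, etas, betas):
        """Eq. (5): summed conditional failure probability over (t, t+U]."""
        F_t = weibull_cdf(t, etas, betas)
        F_tU = weibull_cdf(t + U, etas, betas)
        return np.sum((F_tU - F_t) / (1.0 - F_t))

    etas = np.array([10.4, 9.8, 11.1])   # assumed characteristic lives (yrs)
    betas = np.array([2.3, 2.2, 2.4])    # assumed Weibull slopes
    x_future = expected_failures(t=5.0, U=1.0, etas=etas, betas=betas)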

Another critical aspect of part life management is the estimation of safe and optimal operating lives. A "safe life" is essentially determined by regulatory constraints (e.g., in the aircraft industry, which mandates 10^-7 as the approved single-flight probability of failure) or by economic considerations. The two main cost drivers for the optimal life are the cost of an unplanned failure (which is the cost of all damage resulting from the failure) and the part replacement cost. Usually, unplanned failure costs are significantly higher than planned replacement / maintenance costs. The cost per unit operating time, CPUT(t), is given by,

CPUT(T_P) = \frac{C_P R(T_P) + C_U \left[ 1 - R(T_P) \right]}{\int_0^{T_P} R(\tau) \, d\tau}    (6)

Here, C_U is the unplanned failure cost, C_P is the planned (replacement / maintenance) cost, T_P is the operating interval, and R(t) is the reliability function. The optimal operating interval, t*, is the time that minimizes (6), and can be easily computed for the "virtual fleet" being analyzed using the proposed algorithm.
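Equation (6) reduces to a one-dimensional numerical minimization. The sketch below uses the fleet Weibull parameters reported later for the pressure tank case study (Eta = 10.364 yr, Beta = 2.283) with C_U = 5 C_P, and should place t* in the neighborhood of the reported 5.13 years:

    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import minimize_scalar

    eta, beta = 10.364, 2.283     # fleet Weibull parameters (tank case study)
    C_P, C_U = 1.0, 5.0           # planned and unplanned costs, C_U = 5 C_P

    R = lambda t: np.exp(-(t / eta) ** beta)

    def cput(T):
        # Eq. (6): [C_P R(T) + C_U (1 - R(T))] / integral_0^T R(tau) dtau
        return (C_P * R(T) + C_U * (1.0 - R(T))) / quad(R, 0.0, T)[0]

    t_star = minimize_scalar(cput, bounds=(0.1, 15.0), method="bounded").x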

Finally, the mean residual life of the part can also be computed. This is usually calculated at the end of the part's design life, or at its optimal operating interval. The residual life of a part whose reliability is expressed in the form R(t) is "the expected remaining life, given that the product has survived to time t". The equation for mean residual life is,

L(t) = \frac{1}{R(t)} \int_t^{\infty} \tau f(\tau) \, d\tau - t    (7)

The failure probability density function is given by the expression,

f(\tau) = -\frac{dR(\tau)}{d\tau}    (8)
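Since f(τ) = −dR/dτ, integrating Eq. (7) by parts gives the equivalent form L(t) = (1/R(t)) ∫_t^∞ R(τ) dτ, which is convenient numerically; a sketch using the same Weibull reliability model as above:

    import numpy as np
    from scipy.integrate import quad

    eta, beta = 10.364, 2.283
    R = lambda t: np.exp(-(t / eta) ** beta)

    def mean_residual_life(t):
        # L(t) = (1/R(t)) * integral_t^inf R(tau) dtau, equivalent to Eq. (7)
        return quad(R, t, np.inf)[0] / R(t)

    mrl = mean_residual_life(5.13)   # residual life at the optimal interval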

The degradation-based time-dependent algorithm described in this paper can be combined with any or all of the life management techniques described above to provide a composite picture of the various risks involved in the part design. It is important to note that, using the proposed approach, one can compute all of these risks at the design stage itself, thus reducing the costs of repair and redesign. It is also possible to devise a two-stage optimization approach in which compromises are made between the "design reliability" of the part and its operational worth.

The proposed approach is subsequently demonstrated via a pressure tank case study and an I-beam optimization case study.

NUMERICAL CASE STUDY 1: PRESSURE TANK

The pressure tank case study was originally described by Dai and Wang [5] and is based on actual test and inspection data. The objective of the original analysis was to evaluate the reliability of a Liquefied Natural Gas (LNG) pressure tank using analytic limit states and experimentally determined stochastic design variables. In support of this, measurements were made of the wall thickness of the pressure tank, the tensile strength of the tank material, and the burst pressure of the vessel. From inspection data, a random sample of liquefied petroleum gas storage tanks was drawn from a pool of eight manufacturers and tested to failure. The problem layout is shown in Figure 8.

The material tensile strength, Su, was normally distributed with a mean of 387 MPa and a coefficient of variation of 0.05. The wall thickness, h, was normally distributed with a mean of 3 mm and a coefficient of variation of 0.03777. The bursting pressure, Pb, in bursting tests is assumed to be normal with a mean of 14.495 MPa and a coefficient of variation of 0.1. The average radius, R, was 158.5 mm, and the height, H = 1.5*R. The design pressure, p, can be written in terms of the burst pressure, Pb as p = Pb/n, where ‘n’ is a factor of safety for bursting pressure. This should not be confused with the factor of safety for the tank, which is a different quantity. The probability of failure for this tank is defined as the probability that the hoop stress exceeds the ultimate tensile strength of the material. Accordingly, the limit state, g(X) is formulated as follows.

g(X) = S_u - \frac{P_b R}{n h} \left( 1 + \frac{R}{2H} \right) \geq 0    (9)


Using the Mean-Value First-Order-Second-Moment approach, the reliability safety index of the tank is as shown in (10).

\beta_{Tank} = \frac{\bar{S}_u - \frac{\bar{P}_b R}{n \bar{h}} \left( 1 + \frac{R}{2H} \right)}{\sqrt{\sigma_{S_u}^2 + \left[ \frac{R}{n \bar{h}} \left( 1 + \frac{R}{2H} \right) \right]^2 \sigma_{P_b}^2 + \left[ \frac{\bar{P}_b R}{n \bar{h}^2} \left( 1 + \frac{R}{2H} \right) \right]^2 \sigma_h^2}}    (10)

The probability of failure, P_f, is computed from the standard normal distribution as P_f = Φ(−β_Tank).
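For concreteness, the mean-value calculation of Eqs. (9)-(10) can be coded as below. The limit state follows the reconstruction in Eq. (9), and since the text does not give a numerical value for the bursting-pressure safety factor, n = 3 is an assumption:

    import numpy as np
    from scipy.stats import norm

    # Case-study data: means and coefficients of variation
    Su, cov_Su = 387.0, 0.05    # tensile strength, MPa
    h0, cov_h = 3.0, 0.0377     # wall thickness, mm
    Pb, cov_Pb = 14.495, 0.10   # burst pressure, MPa
    Rad = 158.5                 # average radius, mm
    H = 1.5 * Rad               # tank height, mm
    n = 3.0                     # assumed safety factor (not stated in the text)

    g = lambda su, pb, h: su - (pb * Rad / (n * h)) * (1.0 + Rad / (2.0 * H))

    mu = np.array([Su, Pb, h0])
    sig = mu * np.array([cov_Su, cov_Pb, cov_h])
    grad = np.zeros(3)
    for i in range(3):
        d = np.zeros(3)
        d[i] = 1e-6 * mu[i]
        grad[i] = (g(*(mu + d)) - g(*(mu - d))) / (2 * d[i])
    beta_tank = g(*mu) / np.sqrt(np.sum((grad * sig) ** 2))  # Eq. (10)
    pf = norm.cdf(-beta_tank)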

The next step is to develop a probabilistic model for corrosion of the tank walls. Experimental data was obtained for atmospheric corrosion of 10 cm x 15 cm steel samples collected over a two-year period at various locations in the US, Panama, Canada, Australia, the Philippines and the UK. The distribution that best fits the corrosion rate data is a two-parameter Weibull, with a characteristic value η_C = 0.095024 mm/yr and slope β_C = 1.352. The subscript 'C' is used to distinguish these from the reliability safety index and also identifies the active failure mode, i.e., corrosion. The cumulative distribution function for this dataset can be written using Eq. (11).

F(\Delta h) = 1 - \exp\left[ -\left( \frac{\Delta h}{\eta_C} \right)^{\beta_C} \right]    (11)

Here, Δh is the loss in thickness per year (mm/yr). In the algorithm, values of Δh are generated using a uniform random number generator, U(0,1), combined with the inverse of the Weibull CDF. The corrosion rate is modeled using Eq. (12).

\Delta h = \eta_C \left[ -\ln\left( 1 - U(0,1) \right) \right]^{1/\beta_C}    (12)

The Weibull plot for the corrosion rate data along with two-sided 95% confidence bounds is shown in Figure 3. The change in tank wall thickness over some time period can now be written using Eq. 13.

h^{(t+\Delta t)} = h^{(t)} - \Delta h \, \Delta t    (13)
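Equations (11)-(13) amount to inverse-transform sampling of the Weibull corrosion rate followed by a thickness update; a minimal sketch of one fifteen-year degradation history:

    import numpy as np

    rng = np.random.default_rng(7)
    eta_C, beta_C = 0.095024, 1.352   # Weibull corrosion-rate parameters (mm/yr)

    def corrosion_rate():
        # Eq. (12): inverse Weibull CDF applied to a U(0,1) draw
        return eta_C * (-np.log(1.0 - rng.uniform())) ** (1.0 / beta_C)

    h, dt = 3.0, 1.0                  # initial wall thickness (mm), 1-yr steps
    history = [h]
    for year in range(15):
        h = h - corrosion_rate() * dt # Eq. (13)
        history.append(h)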

Using the process described in Figure 2, a virtual fleet of N tanks was created and allowed to degrade. Several simulations were run to study the accuracy of increasing the fleet size N and the number of bootstrap samples required to generate the distribution. Note that the whole purpose of bootstrapping is to reduce overall computational cost by reducing the "virtual fleet" size. This is very useful if complex finite element runs are required to evaluate the reliability at each time step. For the simple limit state used in the pressure tank case study, increasing the virtual fleet size to 100 or more without any bootstrapping would have the same effect as a small virtual fleet with many bootstrap samples. Results from both cases, with and without resampling (bootstrap), are presented.

Figure 3: Weibull plot of corrosion data

Reliability results without resampling: The reliability of the virtual fleet is calculated over the expected design life of the tank, i.e., fifteen years, in increments of one year. The mean, standard deviation and confidence bounds of the fleet reliability are computed at each time step. The 5th and 95th percentiles of reliability predicted at each time step are also computed and displayed. The results for virtual fleet sizes N = 10, 30, 50 and 100 are shown in the tables and figures below.

Figure 4: Tank reliability output, No resampling

Increasing the size of the "virtual fleet" resulted in a stable value of the fleet reliability function, as well as tight confidence bounds. The average fleet characteristic life (Eta) was 10.364 years, with a slope (Beta) of 2.283 for a virtual fleet size of 100 tanks. The optimal operating interval was calculated by assuming that a failure in service is five times more expensive than replacing a good tank before it has failed. The optimal operating time, t* = 5.13 years, was obtained by minimizing the cost per unit time calculated using Eq. (6). The mean residual life (MRL) of the tank was calculated at t* using Eq. (7). For a 100-sample "virtual fleet", the residual life (defined as the expected future time to failure given survival at the current time) was 5.85 years. Results indicated that the confidence bounds do not provide adequate coverage (they are narrower than percentile bounds for the same sample) and are highly sensitive to the size of the virtual fleet. In addition, as mentioned previously, this method becomes computationally expensive and impractical when limit states have to be evaluated using finite element models or when the degradation process is complex. The bootstrap version of this algorithm reduces computational costs significantly with negligible changes in accuracy.

Reliability results with resampling (Bootstrap): The bootstrap version of the proposed algorithm, as described in Figure 2, overcomes the disadvantages mentioned in the preceding section. It must be kept in mind that any statistical resampling technique has a computational cost of its own; however, if the trade-off is between a reduced number of finite element runs and an increased number of bootstrap samples, it may be worthwhile. As most engineering degradation problems need to be handled using finite element codes, the bootstrap approach may be the method of choice. The key advantage is that this method bases its confidence intervals directly on percentile estimates. It is extremely robust and provides a good coverage probability without placing a burden on the size of the virtual population. A reasonable seed of starting designs is required to ensure that the population has enough variability for the bootstrap method to work. In this section, a variety of bootstrap runs have been performed to study the method in detail and to compare the accuracy and stability of the algorithm as the sample population and the number of bootstrap samples are varied. A typical output from the bootstrap algorithm is shown in Figure 5. Results are shown in Figures 6 & 7 and are compared in Table 1.

Results indicate that a bootstrap version of the probabilistic aging algorithm with a virtual fleet size of just 20 tanks and 1000 bootstrap samples provides a fleet reliability with a characteristic life (Eta) of 10.34 years and a slope (Beta) of 2.27. The optimal operating interval was 5.12 years, with a mean residual life of 5.37 years. These results are very close to those obtained from a virtual fleet size of 100 tanks; that is, using the resampling method, the number of complex function (or FEA) evaluations can be reduced by a factor of five without any loss in accuracy. The results are summarized in Table 1 below.

Table 1: Results from pressure tank case study

                    Without Bootstrap   With Bootstrap   Error (%)
Eta (yrs)           10.364              10.339           0.24%
Beta                2.283               2.271            0.52%
t* (yrs)            5.13                5.12             0.19%
MRL (yrs)           5.85                5.37             8.21%
Analyses required   100                 20               NA

Figure 5: Pressure Tank Reliability With Bootstrap

Figure 6: Pressure tank results of t* with Bootstrap

Figure 7: Pressure Tank Residual Life with Bootstrap

Figure 8: Pressure Tank Case Study Schematic

Results & Discussion: A new approach for modeling the time-dependent reliability of structures is presented using probabilistic design and stochastic degradation analysis methods. This algorithm is further refined using statistical resampling theory (bootstrapping). Results from the pressure tank reliability case study indicate that these algorithms work well. This approach could act as the theoretical "link" between the static probabilities of failure calculated by traditional probabilistic design algorithms and the time-dependent reliability functions obtained from field test data. Using the proposed method, it will now be possible to simulate the aging behavior of a fleet of proposed parts at the design stage itself and to quantify all time-dependent reliability characteristics of interest. In addition, it will be possible to make optimal trade-offs between part reliability, its optimal service life, and its salvage value (obtained from the mean residual life). This approach therefore provides a theoretical framework for optimally designing parts by taking both design variability and lifecycle degradation / usage into account.

PROBABILISTIC OPTIMIZATION OF AGING STRUCTURES

In the previous section, a new approach to probabilistic design was proposed. In actual design practice, several techniques need to be applied in an optimal manner to create a cost effective and reliable design. One such integrated, practical approach to probabilistic design and optimization is now proposed and validated using a representative case study.

The proposed approach is based on the two-level design optimization approach described in Figure 8. At the first level, a preliminary set of optimal designs is generated from a reliability-based optimization algorithm like FORM or SORM. These designs are allowed to deteriorate in time in a stochastic manner, and the time-dependent reliability equation for each design is obtained, along with confidence bounds. The reliability at each step can be computed using MV-FOSM, HLRF or the proposed empirical-distribution-based probabilistic design algorithms. The time-dependent reliability is computed using the bootstrap approaches described previously. The second level of optimization is based on the optimal operating interval and the mean residual life computed at that interval. For each "optimal design" from the original Pareto front, there is a corresponding optimal operating time and mean residual life. From these estimates, a metric that summarizes the life cycle costs, risks and benefits can be computed. From an optimization perspective, one seeks to maximize a life cycle Return-On-Investment (RoI) metric that is a function of the design variables, expected operating environment, optimal service life and expected mean residual life.

The various terms that factor into life cycle worth calculations are described below.

Costs:
1. Initial cost of the design (material, manufacture, engineering, etc.), C_D
2. Operating costs per unit time, C_O
3. Replacement / repair costs, C_P
4. Costs due to an unpredicted failure, C_U

Benefits (revenues):
1. Revenues from successful operation of the part, B_O
2. Salvage value, linearly proportional to the mean residual life, B_MRL

3. Other benefits, such as a bonus from successfully meeting a performance or availability guarantee, B_X

Risks:
1. Probability of failure due to design (static risk), P_f(0) = 1 − R(t = 0)
2. Probability of failure due to degradation or aging (time-dependent risk), P_f(t) = 1 − R(t)

Choosing variables:
- T* = the optimal operating interval obtained from Eq. (6)
- R(t) = the time-dependent reliability function; a typical choice is a two-parameter Weibull distribution with shape parameter β_R and scale parameter η_R
- L(T*) = the mean residual life obtained from Eq. (7)

The equation for return on investment, RoI, takes the form shown in Eq. (14)

RoI = \frac{R(T^*) \, B_O \, T^* + L(T^*) \, B_{MRL} + R(T^*) \, B_X}{C_D \, R(0) + C_O \, T^*}    (14)

The optimization problem seeks to obtain a design that maximizes Eq. (14) subject to constraints on design variables, acceptable reliability, minimum expected service life and other design constraints.
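A sketch of evaluating Eq. (14) as reconstructed above; the benefit and cost inputs are placeholders, and the reliability model reuses the tank fleet Weibull parameters for illustration:

    import numpy as np

    eta, beta = 10.364, 2.283
    R = lambda t: np.exp(-(t / eta) ** beta)

    def roi(T_star, L_star, B_O=1.2, B_MRL=0.6, B_X=0.2, C_D=1.0, C_O=0.5):
        # Eq. (14): reliability-weighted benefits accrued over T* against
        # design and operating costs (placeholder benefit/cost values)
        num = R(T_star) * B_O * T_star + L_star * B_MRL + R(T_star) * B_X
        den = C_D * R(0.0) + C_O * T_star
        return num / den

    value = roi(T_star=5.13, L_star=5.37)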

CASE STUDY 2: I-BEAM OPTIMIZATION

The objective of the I-beam case study is to develop a beam cross-sectional design that minimizes weight, mid-span displacement and ensures high reliability. A schematic of the problem is shown in Figure 9.

Figure 9: I-Beam problem layout

The deterministic form of the multiobjective optimization problem is written as shown in Eq. (15)

\min \; W_1 \frac{f_1}{f_1^*} + W_2 \frac{f_2}{f_2^*}
s.t. \; G \geq 0,
\quad X_i^L \leq X_i \leq X_i^U, \; i = 1, 2, 3, 4    (15)

The probabilistic version of the optimization problem is written as follows (Eq. 16)

1 21 2

1 2* *. .,

max, 1, 2,3, 4L U

i i i

f fMin w w

f f

s t

Pf Pf

X X X i

(16)

The deterministic constraint is shown in Eq. (17)

0y zg

y z

M MG k

W W (17)

The weight of the beam, f_1, is given in (18).

f_1 = 2 x_2 x_4 + x_3 (x_1 - 2 x_4)    (18)

The maximum displacement, f_2, is written as Eq. (19):

f_2 = \frac{P L^3}{48 E I}, \quad I = \frac{x_3 (x_1 - 2 x_4)^3 + 2 x_2 x_4 \left[ 4 x_4^2 + 3 x_1 (x_1 - 2 x_4) \right]}{12}    (19)

The section moduli of the beam are given in Eqs. (20) and (21).

W_y = \frac{x_3 (x_1 - 2 x_4)^3 + 2 x_2 x_4 \left[ 4 x_4^2 + 3 x_1 (x_1 - 2 x_4) \right]}{6 x_1}    (20)

W_z = \frac{2 x_4 x_2^3 + (x_1 - 2 x_4) x_3^3}{6 x_2}    (21)

The side constraints on the design variables are,

10 cm \leq x_1 \leq 80 cm
10 cm \leq x_2 \leq 50 cm
0.9 cm \leq x_3 \leq 5 cm
0.9 cm \leq x_4 \leq 5 cm    (22)

The four design variables are normally distributed with a coefficient of variation equal to 0.1. The nominal values of the deterministic problem parameters are P = 600 kN, Q = 50 kN, E = 2 x 10^4 kN/cm^2, k_g = 16 kN/cm^2, L = 200 cm, M_y = 30000 kN-cm, and M_z = 2500 kN-cm.

The probability of failure of the beam is the probability that limit state (17) is violated, i.e.,

P_f = P(G \leq 0)    (23)

where the reliability safety index, β, is calculated using the MV-FOSM method as shown in Eq. (24). Note that for complex limit states with non-normal design variables, the FORM algorithm described in Figure 1 should be used.

\beta = \frac{\bar{G}}{\sqrt{\sum_i \left( \frac{\partial G}{\partial x_i} \right)^2 \sigma_i^2}}    (24)
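The beam relations (17)-(21) translate directly into code; the sketch below evaluates the limit state and the MV-FOSM failure probability at one candidate point (Design 1 from Table 2), with sensitivities taken by central differences:

    import numpy as np
    from scipy.stats import norm

    kg = 16.0               # allowable stress, kN/cm^2
    My, Mz = 30000.0, 2500.0  # applied bending moments, kN-cm

    def G(x):
        x1, x2, x3, x4 = x
        num = x3 * (x1 - 2*x4)**3 + 2*x2*x4 * (4*x4**2 + 3*x1*(x1 - 2*x4))
        Wy = num / (6.0 * x1)                               # Eq. (20)
        Wz = (2*x4*x2**3 + (x1 - 2*x4)*x3**3) / (6.0 * x2)  # Eq. (21)
        return kg - My / Wy - Mz / Wz                       # Eq. (17)

    mu = np.array([80.0, 50.0, 0.9004, 1.6118])   # Design 1, Table 2
    sig = 0.1 * mu                                # COV = 0.1
    grad = np.zeros(4)
    for i in range(4):
        d = np.zeros(4)
        d[i] = 1e-6 * mu[i]
        grad[i] = (G(mu + d) - G(mu - d)) / (2 * d[i])
    beta = G(mu) / np.sqrt(np.sum((grad * sig) ** 2))   # Eq. (24)
    pf = norm.cdf(-beta)                                # Eq. (23)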

The various steps in the multilevel optimization process are described below.

Step 1: An initial set of "Level 1" optimal designs is generated using different problem formulations. These could be deterministic, probabilistic-constraint or probabilistic multiobjective formulations. The objective is to generate a population of candidate solutions that will then be analyzed with the probabilistic degradation algorithm. The problem formulations considered in this case study are:
1) Deterministic weighted-objective optimization with deterministic constraints
2) Deterministic weighted-objective optimization with probabilistic constraint, Pf_max = 0.1
3) Deterministic weighted-objective optimization with probabilistic constraint, Pf_max = 0.01
4) Deterministic weighted-objective optimization with probabilistic constraint, Pf_max = 0.001

Step 2: From the optimal designs obtained in Step 1, pick a representative sample of optimal designs to be subjected to degradation analysis. The four cases considered in Step 1 generated a total of 44 optimal designs (all on the Pareto front). A sample of twelve designs was chosen from this set.

Step 3: The twelve designs were individually analyzed with the degradation algorithm presented earlier. The degradation mechanism was assumed to be reduction in wall thickness due to corrosion. The probabilistic corrosion model is identical to that used in the pressure tank case study. The algorithm generated two-sided 90% confidence intervals based on bootstrap percentiles. The time-dependent reliability equation for each design was obtained by allowing a virtual population of twenty units and 1000 bootstrap samples for every time period. The analysis was run for 15 time periods (one per year) per design.

Step 4: Once the time-dependent reliability of each design was obtained, a two-parameter Weibull distribution was fitted to each design's reliability curve.
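One way to implement this fit (a sketch, using a simple linearized regression rather than the maximum likelihood approach mentioned earlier) is to regress ln(−ln R) on ln t, since a two-parameter Weibull satisfies ln(−ln R(t)) = β_W ln t − β_W ln η_W:

    import numpy as np

    # Synthetic R(t) points standing in for the degradation algorithm's output
    rng = np.random.default_rng(3)
    t = np.arange(1.0, 16.0)   # 15 yearly periods
    R_emp = np.exp(-(t / 10.4) ** 2.3) * (1.0 + 0.01 * rng.standard_normal(t.size))

    # Linearize: ln(-ln R) = beta_W * ln t - beta_W * ln eta_W
    y = np.log(-np.log(np.clip(R_emp, 1e-9, 1.0 - 1e-9)))
    slope, intercept = np.polyfit(np.log(t), y, 1)
    beta_W = slope
    eta_W = np.exp(-intercept / slope)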

Step 5: From the time-dependent reliability equation for each design obtained in Step 4, the optimal operating interval, T*, was calculated by minimizing Eq. (6). For this calculation, the cost of replacing an I-beam was assumed to be one unit, and the cost of an unexpected failure was assumed to be 50 units.

Step 6: The mean residual life was calculated for each design at its optimal operating interval using Eq. (7). Steps 5 and 6 form the second level of optimization, in which the "optimal design" is frozen and optimal operating parameters are computed for that design.

Step 7: The last step is to obtain the Return on Investment (RoI) for each design based on design costs, risk and consequences of failure. The final "optimal" design is selected based on a design's ability to provide a high RoI. In a deterministic decision model, single-value cost assumptions are used and the design with the highest RoI is selected. A more powerful approach would be a probabilistic decision approach in which the costs are assumed to have a distribution, and the design whose RoI distribution has the highest expected value and lowest variability is selected. The final list of twelve candidate designs from the Level 1 stage (Step 1) is shown in Table 2.

Table 2: I-Beam optimization Level 1 results

The time-dependent reliability curves were obtained by running the probabilistic degradation algorithm described in Figure 2.

Design   x1 (cm)   x2 (cm)   x3 (cm)   x4 (cm)   Level 1 Reliability
1        80        50        0.9004    1.6118    0.999409
2        80        50        0.9004    0.9004    0.793812
3        80        50        0.9004    2.5365    1
4        80        49.1755   0.9004    1.1941    0.947022
5        80        50        0.9004    1.0641    0.9
6        49.1138   50        0.9004    1.0708    0.9
7        77.4613   50        0.9004    1.3824    0.99
8        80        50        0.9004    1.3759    0.99
9        47.8355   50        0.9004    1.2324    0.99
10       80        50        0.9004    1.6118    0.999409
11       80        50        0.9004    1.5742    0.999
12       47.3788   50        0.9004    1.3503    0.999


Figure 10: Weibull approximations for Level 2 reliability designs, I-Beam Case Study

It is interesting to note that the most "reliable" initial design (static probability at the design stage) is not necessarily the most reliable when time-dependent reliability (field usage) is considered. This could explain why designs that appear reliable when analyzed using conventional probabilistic design algorithms are sometimes sub-optimal and fail prematurely in the field. A parametric formulation of reliability was necessary to calculate optimal operating intervals, residual life and expected cost per unit time. The "equivalent Weibull" curves are plotted in Figure 10, and are a close approximation to the reliability estimates obtained from the probabilistic degradation algorithm.

At this point, all the information required to compute the optimal operating time, T*, for a particular design is available. It is generally recognized that a structural element like an I-beam can be used in a variety of applications, and hence the consequence of an unexpected failure cannot be quantified precisely. This is taken into account by generating different failure scenarios in which the consequences of failure vary widely. The replacement cost of the I-beam, C_P, is assumed to be one unit, and the cost of an unexpected failure, C_U, is assumed to be five, ten and one hundred times the cost of a scheduled replacement; these are operating scenarios 1, 2 and 3 respectively. Results from the Level 2 optimization analyses are presented in Table 3. This includes the optimal operating time for each design, T*, obtained for Scenarios 1, 2 and 3, as well as the mean residual life, MRL*, obtained at T*.

The ratio of the sectional area of a design to the minimum area produced by the set of candidate designs (C_D*) is computed and called the design cost factor, K_5. Dividing the numerator and denominator of (14) by the base design cost C_D*, we obtain a modified form of the RoI equation, as shown in Eq. (25).

RoI = \frac{R(T^*) \frac{B_O}{C_D^*} T^* + MRL(T^*) \frac{B_{MRL}}{C_D^*} + R(T^*) \frac{B_X}{C_D^*}}{\frac{C_D}{C_D^*} R(0) + \frac{C_O}{C_D^*} T^*}    (25)

Choosing factors,

K_1 = \frac{B_O}{C_D^*}, \quad K_2 = \frac{B_{MRL}}{C_D^*}, \quad K_3 = \frac{B_X}{C_D^*}, \quad K_4 = \frac{C_O}{C_D^*}, \quad K_5 = \frac{C_D}{C_D^*}

the RoI equation becomes,

RoI = \frac{R(T^*) K_1 T^* + MRL(T^*) K_2 + R(T^*) K_3}{K_5 R(0) + K_4 T^*}    (26)

where:
K_1 = ratio of part revenues per year to part cost,
K_2 = ratio of residual life value per year to part cost,
K_3 = ratio of annual warranty benefits per year to part cost,
K_4 = ratio of operating costs per year to part cost,
K_5 = ratio of design cost to the minimum design cost amongst the candidate designs.

The following values of the factors are assumed in calculating the best design for each operating scenario: K_1 = 1.2, K_2 = 0.6, K_3 = 0.2, K_4 = 0.5. The design cost, C_D, is assumed to be proportional to the cross-sectional area of the I-beam. The design cost of the part with the lowest sectional area is assumed to be unity, and the remaining values of C_D are scaled in the ratio of each design's sectional area to the minimum sectional area achieved in the candidate design set.

Results & Discussion: Design No. 12 appears to be the best design under Scenario 1, in which the consequences of failure are low. It has a sectional area of 175.2 cm^2, a design reliability of 99.9%, and a final reliability of 80.9% at the optimal operating interval.

In Scenario 2, where the cost of an unplanned failure is ten times the replacement cost, Design No. 10 was the optimal solution. This design has a sectional area of 230.31 cm^2, a design reliability of 99.9%, and a final reliability of 85.5% at the end of its optimal operating interval.

In Scenario 3, where the cost of an unplanned failure is one hundred times the replacement cost, Design No. 1 turned out to be the optimal solution. This design has a sectional area of 230.31 cm^2, a design reliability of 99.94%, and a final reliability of 98.6% at the end of its optimal operating interval.

The optimal operating periods are strongly dependent on the ratios between the costs of a planned replacement and unplanned failure. The higher the cost of an unexpected failure relative to the design cost, the smaller the operating interval. This is again consistent with engineering practice, where high-risk parts are often removed even when they have significant service life left. Note that current probabilistic design approaches do not even consider time dependent reliability while formulating optimal designs. Results also indicate that the residual life increases as the optimal operating interval, T*, is reduced. As T* depends on the costs of unplanned failure relative to the design cost in addition to the reliability of the part, it is possible to develop mathematically optimal strategies for life management. This is an area of future research, where algorithms could be developed to optimally shift parts from risky to benign operating environments as they spend time in service.

To summarize, a new method for computing the time-dependent reliability of aging structures using analytical probabilistic design algorithms is presented. The method is also extended into the domain of multilevel probabilistic design-based optimization of aging structures. Both approaches are validated with numerical case studies, and it is hoped that these methods will become a part of an intelligent part life management system.

References
[1] Haldar, A. and Mahadevan, S., Probability, Reliability and Statistical Methods in Engineering Design, John Wiley & Sons, 2000.
[2] Vittal, S. and Hajela, P., "Probabilistic Design using Empirical Distributions", Proc. of 44th AIAA/ASME/ASCE SDM Conf., Norfolk, VA, Apr. 2003.
[3] Meeker, W. and Escobar, L., Statistical Methods for Reliability Data, Wiley Series in Probability and Statistics, 1998.
[4] Vittal, S. and Hajela, P., "State Transition Methods in RBDO", Proc. of 44th AIAA/ASME/ASCE SDM Conf., Norfolk, VA, Apr. 2003.
[5] Dai, Shu-Ho and Wang, Ming-O, Reliability Analysis in Engineering Applications, Van Nostrand Reinhold, New York, 1992.
[6] Vittal, S. and Hajela, P., "Approaches to Reliability-based Multicriteria Optimization", Proc. of 9th AIAA/ISSMO Symposium on MDO, Atlanta, GA, Sep. 2002.

Disclaimer: The views expressed in this paper are those of the authors only, and do not necessarily reflect the views of Rensselaer Polytechnic Institute.

Table 3: Optimal operating time and residual lives for I-Beam designs

            Weibull Parameters       C_U = 5 C_P         C_U = 10 C_P        C_U = 100 C_P
Design      beta_W      eta_W        T*        MRL*      T*        MRL*      T*        MRL*
Design 1    1.722642    248.8437     139.55    142.91    85.32     164.88    20.89     218.90
Design 2    0.421173    101.7403     --        --        --        --        --        --
Design 3    5.735964    870.0173     521.32    303.76    452.42    362.71    297.76    509.21
Design 4    0.740733    170.2356     --        --        --        --        --        --
Design 5    0.597266    147.1353     --        --        --        --        --        --
Design 6    1.150715    24.0865      57.27     17.74     21.79     19.58     2.33      22.14
Design 7    1.115595    214.1317     810.37    163.54    261.31    179.51    24.54     199.73
Design 8    1.08988     231.4111     1529.4    177.95    390.53    196.41    31.94     217.58
Design 9    1.660304    45.464       26.48     26.68     15.85     30.66     3.68      37.54
Design 10   1.730277    270.8598     151.29    155.96    92.73     179.78    22.85     221.71
Design 11   1.66535     207.4381     120.38    121.88    72.18     139.95    16.81     171.31
Design 12   2.218477    68.1563      33.86     35.35     23.31     41.35     7.86      52.96

(--: not available for beta_W < 1.0)

