UNCERTAINTY QUANTIFICATION AND OPTIMIZATION OF STRUCTURAL RESPONSE
USING EVIDENCE THEORY
A dissertation submitted in partial fulfillment of the requirements for the degree of
Doctor of Philosophy
By
HA-ROK BAE
B.S., Ajou University, South Korea, 1999
M.S., Ajou University, South Korea, 2001
_____________________________________
2004 Wright State University
WRIGHT STATE UNIVERSITY SCHOOL OF GRADUATE STUDIES
November 20, 2004
I HEREBY RECOMMEND THAT THE DISSERTATION PREPARED UNDER MY SUPERVISION BY Ha-Rok Bae ENTITLED Uncertainty Quantification and Optimization of Structural Response Using Evidence Theory BE ACCEPTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Doctor of Philosophy
______________________________
Ramana V. Grandhi, Ph.D.
Dissertation Director
Director, Engineering Ph.D. Program

______________________________
Robert A. Canfield, Ph.D.
Co-Director

______________________________
Joseph F. Thomas, Jr., Ph.D.
Dean, School of Graduate Studies

Committee on Final Examination

______________________________
Ramana V. Grandhi, Ph.D., WSU

______________________________
Richard J. Bethke, Ph.D., WSU

______________________________
Joseph C. Slater, Ph.D., PE, WSU

______________________________
Robert A. Canfield, Ph.D., AFIT

______________________________
Gary Kinzel, Ph.D., OSU
ABSTRACT
Bae, Ha-Rok. Ph.D., Department of Mechanical and Materials Engineering, Wright State University, 2004. Uncertainty Quantification and Optimization of Structural Response Using Evidence Theory.

For the last two decades, non-deterministic analysis has been studied extensively to enable analytical certification of an engineering structural component, or an entire system, against demanding performance requirements. Probability theory, with its strong mathematical formulation, has gained popularity in Uncertainty Quantification (UQ). Recently, however, many scientific and engineering communities have recognized that the intrinsic uncertainties in an engineering system have a multifaceted nature (randomness, non-randomness, partial randomness, vagueness, and so forth) and that traditional probability theory does not always provide an appropriate framework for describing multiple types of uncertainty, especially for a large-scale and complex engineering system. In developing high-performance practical mechanical systems, it becomes obvious that our knowledge and data suffer from sheer imprecision, because we must explore beyond the current level of technological knowledge and experience. The primary objective of this research is to develop an appropriate and unified UQ framework for multiple types of uncertainty sources. One of the main challenges of UQ for practical implementation in engineering design is the computational cost, and that is the focus of this dissertation: efficient computational algorithm development. Evidence theory is advanced for large-scale aircraft structural design in a multi-physics environment.
TABLE OF CONTENTS
1. Introduction
2. Structural Reliability Analysis
   2.1 Limit State Function
   2.2 Probabilistic Approaches
   2.3 Non-probabilistic Approaches
3. Evidence Theory
   3.1 Set Operations and Mappings
   3.2 Frame of Discernment
   3.3 Basic Belief Assignment
   3.4 Combination of Evidence
   3.5 Belief and Plausibility Functions
4. Structural Uncertainty Quantification Using Evidence Theory
   4.1 Problem Definition
   4.2 BBA Structure in Engineering Applications
   4.3 Evaluation of Belief and Plausibility Functions
   4.4 Numerical Example
5. System Reanalysis Methods for Reliability Analysis
   5.1 Surrogate-Based Reanalysis Techniques
   5.2 Coefficient Matrix-Based Reanalysis Techniques
   5.3 Combined Iterative Technique
6. Cost-Efficient Evidence Theory Algorithm
   6.1 Multi-Point Approximation
   6.2 Cost-Efficient Algorithm for Structural Uncertainty Quantification
   6.3 Numerical Examples
7. Comparison of Reliability Approaches with Imprecise Information
   7.1 Problem Definition with Imprecise Information
   7.2 Case Study I: Three Bar Truss
   7.3 Case Study II: Intermediate Complexity Wing
8. Reliability Assessment Using Evidence Theory and Design Optimization
   8.1 Plausibility Decision Function
   8.2 Sensitivity Analysis Using Evidence Theory
   8.3 Reliability-Based Design Optimization Using Evidence Theory
   8.4 Numerical Example
9. Summary
10. References
LIST OF FIGURES
Figure 1.1 Uncertainty Quantification Techniques
Figure 2.1 Limit-State Surface Between Failure and Safe Domains
Figure 2.2 Graphical Interpretation of the Reliability Index
Figure 2.3 The Relationship Between the Reliability Index and the Safe Probability
Figure 2.4 Triangular Fuzzy Membership Function
Figure 3.1 Frame of Discernment with Elementary Intervals
Figure 3.2 Constructing BBA Structure in Evidence Theory
Figure 3.3 Degree of Ignorance, m(X)
Figure 3.4 Probabilistic BBA Structure
Figure 3.5 Complementary BBA Structure
Figure 3.6 Consonant BBA Structure
Figure 3.7 General BBA Structure
Figure 3.8 Rules of Combination by Parameter k
Figure 3.9 Belief (Bel) and Plausibility (Pl)
Figure 3.10 Bel and Pl in a Given BBA Structure
Figure 3.11 BBA Structure (m(x1)=0.5, m(x2)=0.3, m(x1, x2)=0.1, m(Ω)=0.1)
Figure 3.12 Belief (Bel), Plausibility (Pl), and Probability (Pf) in Elementary Propositions
Figure 4.1 Multiple Interval Information and BBA for an Uncertain Parameter, x1
Figure 4.2 The Failure Set, UF, and Joint BBA Structure for Two Uncertain Parameters
Figure 4.3 Uncertainty Quantification Algorithm in Evidence Theory
Figure 4.4 ICW Structure Model
Figure 4.5 Elastic Modulus Factor Information
Figure 4.6 Load Factor Information
Figure 4.7 Combined Information for Elastic Modulus Factor
Figure 4.8 Combined Information for Load Factor
Figure 4.9 Complementary Cumulative Plausibility and Belief Functions
Figure 5.1 Three-Bar Truss
Figure 5.2 Two Design Points of the Three-Bar Truss
Figure 5.3 Relative Error Plots of Various Approximation Methods
Figure 5.4 Successive Matrix Inversion (SMI) Algorithm for m-Column Modification
Figure 5.5 Relative Computational Cost Ratios of SMI to LU Decomposition
Figure 5.6 Plane Truss Structure
Figure 5.7 Design Variables (βi) for Elements Under Uncertainty in the Elastic Modulus
Figure 5.8 Sequential Computation Procedure of the SMI Method in Monte Carlo Simulation for Two Probabilistic Variables (β1 and β2)
Figure 5.9 Combined Iterative (CI) Method
Figure 5.10 Separating [∆K] into the Parts for SMI and an Iterative Method
Figure 5.11 Successive Predicting Process of the BSI Method
Figure 5.12 BSI Method Flowchart
Figure 5.13 Iterative Result and the Improved Eigenvalue Distribution
Figure 5.14 Intermediate Complexity Wing Structure Model and Design Variables
Figure 5.15 Iterative Solution History of CI Method
Figure 5.16 Improved Eigenvalue Distribution During Reanalysis Using CI Method
Figure 6.1 Identifying the Failure Region Using an Optimization Technique
Figure 6.2 Deploying AEPs and Constructing the Surrogate on the Failure Region
Figure 6.3 The Cost-Efficient Algorithm for Assessing Bel and Pl
Figure 6.4 Composite Cantilever Beam Structure Model
Figure 6.5 Scale Factors (α, β) Information for EL and ET
Figure 6.6 Tip Displacement (δTip) of the Composite Cantilever Beam with Respect to the Scale Factors (α and β) and the Surrogate Failure Region Using the Proposed Method
Figure 6.7 ICW Structure with Uncertainties in the Root Region
Figure 6.8 Aerodynamic Model of ICW
Figure 6.9 Aerodynamic Pressure (Cp_lift) Distributions from Steady Aeroelastic Trim Analysis of Lift Forces
Figure 6.10 Aerodynamic Pressure (Cp_roll) Distributions from Steady Aeroelastic Rolling Trim Analysis
Figure 6.11 Interval Information for Uncertain Variables (α, β, and γ) from the First Expert
Figure 6.12 Interval Information for Uncertain Variables (α, β, and γ) from the Second Expert
Figure 7.1 Three Bar Truss
Figure 7.2 Imprecise Information for the Scale Factors of Uncertain Parameters (E and P)
Figure 7.3 Consonant Intervals and an Approximate Membership Function for the Scale of Uncertain Parameter (E) Using the Inclusion Technique
Figure 7.4 Consonant Intervals and an Approximate Membership Function for the Scale of Uncertain Parameter (P) Using the Inclusion Technique
Figure 7.5 System Response (Displacement) Membership Function for the Three Bar Truss
Figure 7.6 PDF of e (Scale of Elastic Modulus) Using Uniform Distribution Assumption
Figure 7.7 PDF of p (Scale of Force) Using Uniform Distribution Assumption
Figure 7.8 Complementary Cumulative Measurements of Possibility Theory, Probability Theory, and Evidence Theory for the Three Bar Truss Example
Figure 7.9 Discretized Normal PDF (N: the Number of Discretizations)
Figure 7.10 The Convergence of Bel, Pl, and Probability with Respect to the Number of Discretizations
Figure 7.11 Scale Factor Information for Static Force from Different Sources
Figure 7.12 Discretized Intervals for Elastic Modulus with Given Interval Statistics
Figure 8.1 The Failure Region, f⁻¹(Uy) ∩ ck, in a Joint Proposition ck
Figure 8.2 The Network of Local Approximations
Figure 8.3 Linear Response Surface Models (LRSMs) for Sensitivity Analysis
Figure 8.4 ICW for RBDO
Figure 8.5 Elastic Modulus Factor Information
Figure 8.6 Load Factor Information
Figure 8.7 Combined Information for Elastic Modulus Factor
Figure 8.8 Combined Information for Load Factor
Figure 8.9 Proposition Sensitivities of Plausibility for the Elastic Modulus Factor
Figure 8.10 Proposition Sensitivities of Plausibility for the Load Factor
Figure 8.11 Sensitivity of Plausibility with Thickness Factors (TH 1, TH 2, and TH 3)
Figure 8.12 The Optimization History of the Objective Function and Design Variables
Figure 8.13 Trust Region Uncertainty Quantification for Sequential Optimization Under Multiple Types of Uncertain Variables
LIST OF TABLES
Table 3.1 The Evidence for the True Value of x
Table 4.1 Tip Wing Skin Thickness Factor (t1)
Table 4.2 Root Wing Skin Thickness Factor (t2)
Table 6.1 Composite Cantilever Beam Results Using the Vertex and Proposed Methods
Table 6.2 ICW Results Using the Sampling, Vertex, and Proposed Methods
Table 7.1 Comparison of Results and Costs for the Three Bar Truss Example
Table 7.2 ICW Results Using the Vertex and Proposed Methods
Table 8.1 Intermediate Complexity Wing Results
Table 8.2 Failure Degrees of Belief, Plausibility Decision, and Plausibility
ACKNOWLEDGEMENTS
I am deeply grateful to my advisor, Professor Ramana V. Grandhi, for his
academic guidance and individual attention. During the first year of my doctoral studies,
his respect for and belief in my limited knowledge made me responsible and passionate
about my study and research. His continuous suggestions and encouragement sustained me throughout the research and writing of this dissertation.
I wish to express my sincere thanks to Professor Robert A. Canfield of the Air Force Institute of Technology. Many discussions with him and his excellent advice were greatly helpful to my studies.

I would also like to extend my thanks to the Ph.D. committee members and my colleagues at the Computational Design Optimization Center at Wright State University for their valuable suggestions and comments. I especially thank Dr. Ravi Penmetsa and Mr. Ed Alyanak, whose friendship and encouragement were among the benefits of my Ph.D. research, and Brandy Foster, whose corrections and suggestions on English style and grammar are greatly appreciated. I am also grateful to Professor Youngsuk Shin of Ajou University, who introduced me to the field of structural optimization and gave me the opportunity to continue my studies in the U.S.A.
I would like to acknowledge the support from the Air Force Office of Scientific
Research (AFOSR) under grant F49620-00-1-0377 and from the Ph.D. fellowship
granted by the Dayton Area Graduate Studies Institute (DAGSI).
I would like to give my special thanks to my parents and sisters, Sun-Jung Bae
and Hee-Jung Bae, for their continuous support that enabled me to complete this work.
I am greatly indebted to my lovely wife, Min-Suk Chun, and my little daughter,
Jae-Hee Bae, for their patient love and trust in me.
To my father and mother.
1. Introduction
In addition to deterministic analysis, non-deterministic analysis has been adopted over the last several decades for Uncertainty Quantification (UQ) in many structural systems. Probability theory has gained popularity in many research areas, and stochastic analysis techniques based on probability theory have been widely used in engineering systems to model and propagate uncertainties. However, as mechanical systems and multidisciplinary performance requirements become more complex and stringent, it is imperative to consider various types of uncertainties that cannot be addressed by a probabilistic framework alone.

When conceiving innovative mechanical systems, it becomes obvious that available resources, such as our knowledge, experimental budget, and timeframe, may often be very limited and never enough. Some uncertainties in those systems, which are random in nature, can be modeled with well-known probabilistic functions such as a Probability Density Function (PDF). Other parameters, however, cannot be assigned any random function of probability theory due to a lack of sufficient information and data. In that case, the uncertain parameters may take values within certain bounds instead of having explicit PDFs.
In a probabilistic UQ framework, strong assumptions are usually made to impose complete randomness on the imprecise, bounded uncertain parameters. These strong assumptions include approximating or assuming a PDF within given bounds without sufficient supporting evidence. Consequently, the result of a reliability analysis using the probabilistic framework may merely reflect the imposed assumptions. In this work, to address these limitations of the traditional UQ framework and to enable certification of a system's performance, alternative UQ techniques are explored for reliable structural design.
[Figure 1.1 Uncertainty Quantification Techniques: parameter uncertainty (material properties, loads, geometries), physical system modeling uncertainty (initial conditions, model form), and scenario abstraction uncertainty (failure modes) are classified as aleatory (random) or epistemic (subjective); depending on whether sufficient data exist, probability theory, possibility theory, or evidence theory is applied.]
Depending on the nature of uncertainty in a system, various UQ techniques can be
applied for appropriate propagation and quantification of uncertainty as shown in Fig. 1.1.
Uncertainties can be classified into two distinct types in the risk assessment community:
aleatory and epistemic uncertainty [1-4]. Aleatory uncertainty is also called irreducible or
inherent uncertainty. Parameter uncertainties with variability are basically aleatory
uncertainties, but they should be treated as epistemic uncertainties when data is
insufficient to construct a complete and smooth PDF.
Epistemic uncertainty is subjective or reducible uncertainty that stems from lack
of knowledge and data. Model form and scenario abstraction uncertainties, which usually
come from boundary conditions, different choices of solution approaches, unexpected
failure modes, and so on, are included in epistemic uncertainties. Formal theories
introduced to handle those uncertainties are classical probability theory, possibility theory,
evidence theory, and so forth. The common issue among these theories is how to determine the degree to which uncertain events are likely to occur. A distinct difference among them lies in how degrees of belief are assigned [5, 6]. Both classical probability
theory and evidence theory limit the total belief for all possible events to be unity. On the
other hand, there is no such restriction in possibility theory, since one may have perfect
confidence for a certain event and may give a possibility of one through a possibility
distribution.
Probability theory, the popular approach to uncertainty quantification in engineering structural problems over the last several decades, has been developed mostly for aleatory uncertainty. With complete and sufficient information, aleatory uncertainty is well represented by a probabilistic function, such as a PDF. The most familiar technique is Monte Carlo Simulation (MCS) [7], which generates random values of the aleatory uncertain variables in a target system from given PDFs; the model is simulated with these random values to evaluate a certain performance probability. Besides MCS, there are several well-developed methods for reliability analysis using probability theory: the First-Order Reliability Method (FORM) [8], the Second-Order Reliability Method (SORM) [9-11], the Stochastic Finite Element Method (SFEM) [12], and so on.
However, there may exist some epistemic uncertainties for which probability theory is not appropriate, because epistemic uncertainty often cannot be assigned to every single event in a way that satisfies the axioms of probability theory. Many researchers prefer possibility theory to probability theory for modeling these epistemic uncertainties in a system. Fuzzy set theory, which is also called possibility theory, was first introduced by Zadeh in 1965 [13] and was intended to deal with vagueness and imprecision in real-life problems. Classically, a set of an uncertain variable is defined by its members: an event is either a member or a non-member of the set. Fuzzy set theory, by contrast, admits different degrees of membership, characterized by α-cuts. Usually, the membership of a fuzzy variable is given by a continuous mathematical function, which can be viewed as the counterpart of a PDF in probability theory. Structural design problems with fuzzy parameters were investigated by researchers such as Wood, Otto, and Antonsson [14], and Penmetsa and Grandhi [15].
In a real engineering structural system, there may be only partial evidence about an uncertain variable, for which neither the probabilistic nor the possibilistic framework is appropriate; only certain intervals can be given for the uncertain parameter. Moreover, in many practical engineering cases, both aleatory and epistemic uncertainties may be present simultaneously. For instance, in an aircraft design, sufficient data for the dimensions and material properties of some parts of the structure may exist in the form of probability distributions, while the information on other issues, such as gust loads, control surface settings, and operating conditions, might be available only as a membership function or interval information. Until now, when multiple types of uncertainties coexist in a target structural reliability analysis, UQ analyses have been performed by treating them separately or by making assumptions to accommodate either the probabilistic framework or the fuzzy set framework.
As an alternative UQ technique, Shafer [16] developed Dempster's work and presented evidence theory, also called Dempster-Shafer Theory. Evidence theory is a generalization of classical probability and possibility theories from the perspective of bodies of evidence and their measures, even though its methodology for manipulating evidence is totally different. Hence, evidence theory can handle not only epistemic uncertainty, but also aleatory uncertainty within its framework. The framework of evidence theory allows pre-existing probability information to be treated together with epistemic information (such as a membership function, interval information, and so on) to assess the likelihood of a limit-state function of interest. However, most applications of evidence theory have been for system maintenance or artificial intelligence, such as radio communication systems, image processing, system management in the nuclear industry, and decision making in design optimization problems [17-19]. Oberkampf and Helton [20] first demonstrated evidence theory by quantifying uncertainty for problems involving closed-form mechanical equations.
In this work, we attempt to apply evidence theory to practical engineering systems
with implicit analysis techniques. In evidence theory, a Basic Belief Assignment (BBA)
structure, which is similar to a PDF in probability theory, is constructed with imprecise
and insufficient information. The information or hypotheses for an uncertain variable are
given with flexible multiple intervals that might overlap one another. The BBA structures
can be given from several independent knowledge sources over the same frame of
discernment, but based on distinct bodies of evidence. Dempster introduced Dempster's rule of combination [16], which enables us to compute the orthogonal sum of belief structures given by multiple sources, fusing interval data from different independent sources.
Unlike the PDF or the fuzzy membership function, the BBA structure in evidence
theory usually cannot be expressed by a continuous explicit function with the given
imprecise information. Because of the discontinuity in BBA, the resulting uncertainty in a
system is usually quantified by many repetitive system simulations for all the possible
propositions given by BBA structures of uncertain variables. However, in modern
structural designs, structural systems are usually numerically simulated with intensive
computer codes, such as Finite Element Analysis (FEA), Computational Fluid Dynamics
(CFD), and so on. Hence the computational cost of UQ analysis can be very high and
prohibitive for most engineering structural systems.
To alleviate the intensive computational cost, which is one of the major difficulties in applying evidence theory to engineering structures, a robust and efficient technique is developed. In many engineering structural UQ analyses, the failure region is comparatively small relative to the entire function space of interest, and a large amount of computational resources is wasted on regions that do not contribute to the UQ result. Therefore, in the proposed cost-efficient UQ algorithm, the computational resources are focused only on the failure region, reducing the overall computational cost with a robust surrogate model approach.
First, the proposed algorithm identifies the failure region in the defined UQ space by employing a mathematical optimization technique; an approximation approach then constructs a surrogate of the original limit-state function for the repetitive simulations of the UQ analysis. In this work, the Multi-Point Approximation (MPA) method [21] is employed for the robust surrogate model. MPA is a network of multiple local approximations that are combined through a weighting function determining the contribution of each local approximation. The accuracy of MPA mainly depends on that of the local approximations; hence the choice of local approximation is important. The Two-Point Adaptive Nonlinear Approximation (TANA2) method, developed by Wang and Grandhi [22], is employed as the local approximation method. The efficiency and accuracy of TANA2 have been demonstrated extensively in many engineering disciplines [21-26]; TANA2 is very efficient for highly nonlinear implicit problems with a large number of design variables. With the proposed cost-efficient UQ algorithm, the belief and plausibility functions were computed efficiently without sacrificing the accuracy of the resulting measurements.
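As an illustration of the blending idea behind MPA, the following sketch combines several local approximations through normalized weights. It is a simplified stand-in, not the dissertation's implementation: the local models are first-order Taylor expansions rather than TANA2, and the inverse-distance weighting is an assumed form of weighting function.

```python
# A minimal sketch of a multi-point approximation: local models built at
# different expansion points are blended by normalized weights that favor
# the nearest expansion point. Local models and weighting form are
# illustrative simplifications (not TANA2, not the dissertation's weights).
import numpy as np

class LocalLinear:
    """First-order Taylor expansion of f about an expansion point x0."""
    def __init__(self, x0, f0, grad0):
        self.x0, self.f0, self.grad0 = np.asarray(x0), f0, np.asarray(grad0)

    def __call__(self, x):
        return self.f0 + self.grad0 @ (np.asarray(x) - self.x0)

def mpa(x, local_models, eps=1e-12, p=2.0):
    """Weighted blend of local approximations (inverse-distance weights)."""
    x = np.asarray(x)
    w = np.array([1.0 / (np.linalg.norm(x - L.x0) ** p + eps)
                  for L in local_models])
    w /= w.sum()  # normalized weights: contributions sum to one
    return sum(wi * L(x) for wi, L in zip(w, local_models))

# two local models of f(x) = x1^2 + x2 around different expansion points
f = lambda x: x[0] ** 2 + x[1]
grad = lambda x: np.array([2.0 * x[0], 1.0])
pts = [np.array([1.0, 0.0]), np.array([2.0, 1.0])]
local_models = [LocalLinear(pt, f(pt), grad(pt)) for pt in pts]
print(mpa([1.5, 0.5], local_models), f([1.5, 0.5]))  # surrogate vs. exact
```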
To reduce the computational cost further, a new direct and exact reanalysis technique, the Successive Matrix Inversion (SMI) method, is developed based on the binomial-series expansion of a structural stiffness matrix. The SMI method gives exact solutions for any variation of an initial design in a Finite Element Analysis (FEA); that is, there is no restriction on the valid bounds of the design modification when using SMI. The SMI method can update both the inverse of the modified stiffness matrix and the modified response vector of a target structural system by introducing an influence-vector storage matrix and a vector-updating operator. Since the cost of reanalysis using SMI is proportional to the ratio of the changed portion to the initial stiffness matrix, the SMI method is especially effective for regional modifications of a structural FEA model. As a complementary reanalysis technique to SMI, the Binomial Series Iterative (BSI) method is also developed for global modifications with small degrees of change. By coupling the SMI method with an iterative method, a Combined Iterative (CI) technique, in which the weaknesses of a typical iterative method are overcome by the direct SMI method, is introduced for the first time. The CI method is a new class of linear system solver. With the cost-efficient system reanalysis techniques and UQ algorithm, the general UQ framework of evidence theory can be successfully applied to practical, large-scale engineering applications.
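The following sketch illustrates the kind of successive inverse updating that motivates SMI, under the interpretation that a stiffness change confined to a few columns can be applied as a sequence of rank-one Sherman-Morrison updates. The dissertation's actual SMI bookkeeping, with its influence-vector storage matrix and vector-updating operator, is not reproduced here.

```python
# A minimal sketch of successive inverse updating: a stiffness modification
# touching m columns is applied as m rank-one updates via the
# Sherman-Morrison identity, so the modified inverse is obtained without
# refactoring the full matrix. Generic sketch, not the exact SMI algorithm.
import numpy as np

def update_inverse_by_columns(K0_inv, dK):
    """Update K0^{-1} to (K0 + dK)^{-1}, one modified column at a time."""
    K_inv = K0_inv.copy()
    for j in np.nonzero(np.any(dK != 0.0, axis=0))[0]:  # modified columns only
        u = dK[:, j]                  # column change: K += u e_j^T
        Ku = K_inv @ u
        denom = 1.0 + Ku[j]           # 1 + e_j^T K^{-1} u
        if abs(denom) < 1e-14:
            raise np.linalg.LinAlgError("Singular update encountered.")
        K_inv -= np.outer(Ku, K_inv[j, :]) / denom  # Sherman-Morrison step
    return K_inv

rng = np.random.default_rng(1)
K0 = rng.random((5, 5)) + 5.0 * np.eye(5)       # illustrative initial matrix
dK = np.zeros((5, 5))
dK[:, 1] = 0.1 * rng.random(5)                  # regional (one-column) change
K_inv = update_inverse_by_columns(np.linalg.inv(K0), dK)
print(np.allclose(K_inv, np.linalg.inv(K0 + dK)))  # True
```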
The strengths and weaknesses of evidence theory, and improvements for solving large-scale uncertainty quantification problems, are also discussed and compared with those of probability theory and possibility theory. Probability theory does not allow any impreciseness in the given information, so it gives a single-valued result. On the other hand, possibility theory and evidence theory give bounded results. Possibility theory gives the most conservative bound ([Necessity, Possibility]), essentially because of Zadeh's extension principle [27], in which the degree of membership of the system response corresponds to the degree of membership of the overall most preferred set of fuzzy variables. Evidence theory gives an intermediate bounded result ([Belief, Plausibility]), which always includes the probabilistic result; that is, it provides lower and upper bounds of probability based on the available information. It was found that a BBA structure in evidence theory can, due to its flexibility, be used to model both probability and possibility distribution functions. This explains why different types of information (fuzzy membership functions and PDFs) can be incorporated into the single framework of evidence theory to quantify uncertainty in a system. The bounded result of evidence theory can be viewed as the best estimate of system uncertainty, because the given imprecise information is propagated through the given limit-state function without any unnecessary assumptions.
Sensitivity information for the quantified uncertainties can be very useful in the design phase of an engineering structural system. With sensitivity analysis, we can determine the primary contributors to the uncertainties in a designed structural system, and we can improve the structural design by decreasing the uncertainties in the system. In computing the sensitivity of plausibility with respect to expert opinions, the goal in this work is to identify the primary contributing expert opinion; the result indicates the proposition on which computational effort and future collection of information should be focused. This sensitivity analysis can easily be shifted from the sensitivity of plausibility to the sensitivity of ignorance, which is defined as the difference between plausibility and belief. By decreasing the degree of ignorance, we can be more confident in the reliability analysis result. The sensitivity with respect to a deterministic parameter of an engineering structural system is also developed, to improve the current design by efficiently decreasing the failure plausibility of a limit-state function. However, the plausibility function in evidence theory is a discontinuous function of a varying deterministic parameter, because of the discontinuity of the BBA structures of the uncertain parameters. The gradient of plausibility is therefore represented using the degree of plausibility decision (Pl_dec), which is introduced by applying the generalized insufficient reason principle [73] to the plausibility function. Pl_dec can be used as a supplemental measurement for deciding whether a system can be accepted.
For the optimization of structural systems based on performance reliabilities, UQ analysis is incorporated into mathematical optimization techniques. The performance reliability is obtained not only from perfect and complete data, but also from imprecise and insufficient information, using the framework of evidence theory. By virtue of the developed cost-efficient UQ algorithm and the innovative system reanalysis techniques, an intrinsically discontinuous and repetitive design optimization procedure with performance reliabilities is tackled successfully in this work. The development is demonstrated using several structural models, including a space truss structure, a composite cantilever beam, and an intermediate complexity wing representing a fighter aircraft.
2. Structural Reliability Analysis
The uncertainty in structural systems has been recognized by many researchers, and it is widely accepted in the engineering community that the performance of a system is non-deterministic and should be addressed by reliability analysis. Reliability is a measure of the belief that a system will perform its designed function over a specified period of time and under specified service conditions. Generally, by performing Uncertainty Quantification (UQ), we can obtain a better understanding of real structural behaviors and of the reliability of the designed structural system. Even with advanced computing techniques, the challenges in structural reliability analysis are to achieve accurate and fast reliability methods for calculating and predicting the propagated uncertainty in multidisciplinary analyses, and to obtain the optimum design efficiently under uncertainty. In this chapter, the popular probabilistic approach and the non-probabilistic fuzzy approach are introduced after a brief description of the limit-state function in UQ.
2.1 Limit State Function
In the context of UQ in engineering systems, a limit-state function describes the state of a structure or a structural element. With a specific limit-state value on a desired performance measure, the design space of the structure is separated into 'failure' and 'safe' regions such that:

$$g(\mathbf{x}) > 0, \quad \mathbf{x} \in \text{Safe Region} \qquad (2.1.1)$$

$$g(\mathbf{x}) = 0, \quad \mathbf{x} \in \text{Failure Surface} \qquad (2.1.2)$$

$$g(\mathbf{x}) < 0, \quad \mathbf{x} \in \text{Failure Region} \qquad (2.1.3)$$

The definition of a limit-state function is not unique, but the function is usually expressed as:

$$g(X) = \text{Allowable Function} - \text{Response Function} \qquad (2.1.4)$$

where the Allowable Function defines the acceptable level of the response and the Response Function is the structural response obtained from an explicit or implicit function of the design parameters of interest. The limit-states in most engineering structures can be classified into ultimate, damage, and serviceability limit-states. Compared to the ultimate and damage limit-states, the serviceability limit-states, which define the state of serviceability by measuring excessive deflection, excessive vibration, and so on, can be less critical.
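As a minimal illustration of this convention, the following sketch encodes a serviceability limit state of the form of Eq. (2.1.4); the cantilever-beam response formula and the allowable deflection are illustrative assumptions, not values from the dissertation.

```python
# A minimal sketch of the limit-state convention of Eqs. (2.1.1)-(2.1.4):
# g = allowable - response, so g > 0 is safe and g < 0 is failed.
# Beam formula and allowable value are illustrative assumptions.
def tip_deflection(P, L, E, I):
    """Tip deflection of a cantilever under an end load (response function)."""
    return P * L**3 / (3.0 * E * I)

def g(P, L, E, I, allowable=0.01):
    """Serviceability limit state: allowable deflection minus response."""
    return allowable - tip_deflection(P, L, E, I)

# g > 0: safe; g = 0: on the failure surface; g < 0: failed
print(g(P=1000.0, L=1.0, E=70e9, I=8e-7))
```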
2.2 Probabilistic Approaches

As shown in Fig. 1.1, when we have only aleatory uncertain parameters, with complete and sufficient information and data on their randomness, probabilistic approaches are appropriate for UQ. The simplest example of a limit-state function is the following stress-strength problem:

$$g(R, S) = R - S \qquad (2.2.1)$$

where R is the strength, S is the stress resultant, and g(R, S) is the limit-state function of the structural reliability. R and S are assumed to be non-negative, independent random variables with Probability Density Functions (PDFs) $f_R(R)$ and $f_S(S)$.
[Figure 2.1 Limit-State Surface Between Failure and Safe Domains: contours of the joint density $f_R(R) f_S(S) = \text{const}$ in the R–S plane, with the limit-state surface $g(R, S) = R - S = 0$ separating the safe domain ($g > 0$) from the failure domain ($g < 0$); the marginal PDFs $f_R(R)$ and $f_S(S)$ are shown along the axes.]
In Fig. 2.1, the failure domain and the safe domain are separated by the limit-state surface, $g(R, S) = 0$. The probability of failure with this simple limit-state function is computed as

$$P_f = \iint_{\Omega_f} f_{RS}(R, S)\, dR\, dS \qquad (2.2.2)$$

where $f_{RS}(R, S)$ is the joint PDF of R and S, and $\Omega_f$ is the failure domain shown in Fig. 2.1. Probabilistic techniques can be classified into sampling-based methods and analytical approximation methods.

Among the sampling methods, the Monte Carlo Simulation (MCS) method [7] is one of the most popular techniques. A failure region is defined by a limit-state function g and a random variable vector X as $g(X) \le 0$. The failure probability is

$$P_f = P[g(X) \le 0] = \int_{g(X) \le 0} f_X(X)\, dX \qquad (2.2.3)$$

A failure-set indicator function, $I[\cdot]$, can be defined as

$$I[\cdot] = \begin{cases} 1 & \text{if } [\cdot] \text{ is true} \\ 0 & \text{if } [\cdot] \text{ is false} \end{cases} \qquad (2.2.4)$$
Then, Eq. (2.2.3) can be written as

$$P_f = \int I[g(X) \le 0]\, f_X(X)\, dX \qquad (2.2.5)$$

In general, the joint PDF $f_X(X)$ is equal to the product of the marginals when all the random variables are mutually independent:

$$f_X(X) = \prod_{i=1}^{n} f_{X_i}(x_i) \qquad (2.2.6)$$

where n is the number of random variables. Instead of performing the multidimensional integration of Eq. (2.2.5), the failure probability can be estimated by picking N randomly distributed points:

$$\hat{P}_f = \frac{1}{N} \sum_{k=1}^{N} I[g(X_k) \le 0] \qquad (2.2.7)$$

where $\hat{P}_f$ represents the crude Monte Carlo estimator of the failure probability $P_f$. The variance of the sample mean is computed as

$$\mathrm{Var}[\hat{P}_f] = \frac{1}{N^2} \sum_{k=1}^{N} \mathrm{Var}\big[I[g(X_k) \le 0]\big] = \frac{1}{N}\, \mathrm{Var}\big[I[g(X) \le 0]\big] \qquad (2.2.8)$$
The variance is proportional to 1/N; that is, the standard deviation is proportional to $1/\sqrt{N}$. To decrease the variance of sampling methods efficiently, there are several useful techniques, including the Importance Sampling technique [28], the Latin Hypercube Sampling technique [29], and so on. For large-scale, high-fidelity simulations, it is well known that sampling methods might not be efficient or practical. There are advanced sampling techniques, such as adaptive importance sampling [30], for handling complex and large-scale problems. However, in the importance sampling procedure, the quality of the result depends on the quality of the analytical approximation to the original probability density function of interest.
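The following sketch shows the crude Monte Carlo estimator of Eq. (2.2.7) applied to the stress-strength limit state of Eq. (2.2.1); the normal distributions and their parameters are assumed for illustration, not taken from the dissertation.

```python
# A minimal sketch of the crude Monte Carlo estimator, Eq. (2.2.7), for the
# stress-strength limit state g(R, S) = R - S of Eq. (2.2.1).
# Distribution choices and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=0)

def g(R, S):
    """Limit-state function: failure when g <= 0."""
    return R - S

N = 1_000_000                                   # number of samples
R = rng.normal(loc=300.0, scale=20.0, size=N)   # strength samples (assumed PDF)
S = rng.normal(loc=220.0, scale=25.0, size=N)   # stress samples (assumed PDF)

fail = g(R, S) <= 0.0                 # indicator I[g(X_k) <= 0], Eq. (2.2.4)
P_f = fail.mean()                     # crude estimator, Eq. (2.2.7)
se = np.sqrt(P_f * (1.0 - P_f) / N)   # standard error, consistent with Eq. (2.2.8)
print(f"P_f ~= {P_f:.3e} +/- {se:.1e}")
```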
On the other hand, the analytical approximation methods have been devised to alleviate the high computational cost by employing truncated series expansions of the original model. One such method is the First-Order Reliability Method (FORM) [8]. In the mean-value FORM method, the limit-state function is expanded in a first-order Taylor series at the mean of the random variables, $\mu = [\mu_{x_1}, \mu_{x_2}, \ldots, \mu_{x_n}]^T$, as

$$\tilde{g}(X) \approx g(\mu) + (X - \mu)^T \nabla g(\mu) \qquad (2.2.9)$$

The mean value ($\mu_{\tilde g}$) and the variance ($\sigma_{\tilde g}^2$) of the approximate limit-state function $\tilde g(X)$ are

$$\mu_{\tilde g} \approx E[\tilde g(X)] = g(\mu) \qquad (2.2.10)$$
$$\sigma_{\tilde g}^2 \approx \mathrm{Var}\big[g(\mu) + (X - \mu)^T \nabla g(\mu)\big] = \sum_{i=1}^{n} \left( \frac{\partial g}{\partial x_i}\bigg|_{\mu} \right)^2 \sigma_{x_i}^2 \qquad (2.2.11)$$

The reliability (safety) index β is computed as

$$\beta = \frac{\mu_{\tilde g}}{\sigma_{\tilde g}} \qquad (2.2.12)$$

The reliability index can be interpreted as the shortest distance from the mean point to the limit-state surface, as shown in Fig. 2.2.
[Figure 2.2 Graphical Interpretation of the Reliability Index: the mean $\mu_g$ of the limit-state function lies a distance $\beta \sigma_g$ from the limit-state surface $g = 0$, which separates the safe region ($g > 0$) from the failure region ($g < 0$).]

Once the reliability index is obtained, the safe probability can be easily computed as follows:
$$P_{safe} = \int_{u_L}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{u^2}{2}\right) du = \Phi(\beta) \qquad (2.2.13)$$

where Φ is the standard normal Cumulative Distribution Function (CDF), u is the standard normalized variable, and $u_L$ is the lower limit of u for the limit-state function, as shown in Fig. 2.3.
[Figure 2.3 Relationship Between the Reliability Index and the Safe Probability: the safe probability Φ(β) is the area under the standard normal density over the safe region ($g > 0$), beyond the lower limit $u_L$ at distance β from the limit-state surface ($g = 0$).]
By linearizing the original limit-state function at the mean value point, the original complex problem is reduced to a simple one. However, due to the linearization of the given limit-state function, the approximation method can give erroneous estimates for highly nonlinear cases. To increase the accuracy of the approximate estimates, variations of the approximation method were developed, such as Second-Order Reliability Methods (SORM) [9-11], the Advanced Mean Value (AMV) method [31], and so on.
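As a concrete illustration of the mean-value procedure of Eqs. (2.2.9)-(2.2.13), the following sketch linearizes an assumed limit-state function at the mean, estimates the gradient by finite differences, and computes the reliability index and safe probability; the function and the moments of the random variables are illustrative assumptions.

```python
# A minimal sketch of the mean-value first-order method, Eqs. (2.2.9)-(2.2.13).
# Limit-state function, means, and standard deviations are assumed;
# independence of the random variables is also assumed, as in Eq. (2.2.11).
import numpy as np
from math import erf, sqrt

def g(x):
    return 300.0 - x[0] * x[1]  # assumed limit state (allowable minus response)

mu = np.array([10.0, 20.0])     # means of the random variables (assumed)
sigma = np.array([1.0, 2.0])    # standard deviations (assumed)

# central finite-difference gradient of g at the mean, for Eq. (2.2.9)
grad = np.empty_like(mu)
for i in range(len(mu)):
    e = np.zeros_like(mu)
    e[i] = 1e-6 * max(abs(mu[i]), 1.0)
    grad[i] = (g(mu + e) - g(mu - e)) / (2.0 * e[i])

mu_g = g(mu)                                    # Eq. (2.2.10)
sigma_g = np.sqrt(np.sum((grad * sigma) ** 2))  # Eq. (2.2.11)
beta = mu_g / sigma_g                           # Eq. (2.2.12)
P_safe = 0.5 * (1.0 + erf(beta / sqrt(2.0)))    # Phi(beta), Eq. (2.2.13)
print(f"beta = {beta:.3f}, P_safe = {P_safe:.5f}")
```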
2.3 Non-probabilistic Approaches
The framework of probability theory for UQ is mathematically precise, rigorous, and straightforward. However, in complex and large-scale systems, the probabilistic approach might not be so effective, because the description tools of classical probability theory are not sufficiently expressive to characterize propagating uncertainty with imprecise and incomplete information. To address the imprecision and incompleteness encountered in reality, classical set theory and several alternative frameworks proposed in the middle of the last century are reviewed. The alternative frameworks include fuzzy set theory [13], interval theory [32, 33], evidence theory [16], and so on. In this section, fuzzy set theory is introduced briefly as a popular non-probabilistic tool for UQ.

In classical set theory, an individual element is either a member or a non-member of a specified set; a sharp, crisp, and unambiguous (Boolean) distinction between members and non-members of a well-defined set is its basic concept. In probability and statistics it can be said, "The probability for an individual to be a member of a set is 80%." The final outcome is still either "it is" or "it is not" a member of the set; that is, there is an 80% chance that the prediction "it is a member of the set" is right. The prediction does not mean that the individual has 80% membership in the set and 20% non-membership in the same set. Basically, classical set theory does not allow an individual to be partially in a set and partially not in the same set at the same time. On the contrary, fuzzy set theory models the degree of membership to express partial membership in a specified set. The fundamental mathematical difference between fuzzy set theory and classical probability theory lies in the way the mass of belief is assigned to a set: classical probability theory assigns its basic mass of belief to each element or individual set, whereas fuzzy set theory allocates it to consonant subsets of a set.
In fuzzy set theory, a membership function is associated with the referential fuzzy set of a variable. The referential fuzzy set can be viewed as the counterpart of a finite sample space in probability theory. A subset of the fuzzy set is determined by the membership function with respect to a specified level of membership. In most engineering problems, a fuzzy variable is defined as a continuous variable and the set of interest is expressed as an interval. With the interval of a fuzzy variable, the membership function can also be described as a continuous function. For example, with the fuzzy membership function shown in Fig. 2.4, the α1-level of the fuzzy variable x is defined as $X_{\alpha_1}$ in the fuzzy set X. At all levels of membership, from zero (non-membership) to one (full membership), different intervals of confidence can be considered with the given membership function. Generally, a subset $X_\alpha$ denotes the α-cut of the fuzzy set X at a specified α-level of the given membership function $\mu_X(x)$:

$$X_\alpha = \{ x \in X \mid \mu_X(x) \ge \alpha \} \qquad (2.3.1)$$
[Figure 2.4 Triangular Fuzzy Membership Function: α-cuts $X_{\alpha_1}$ and $X_{\alpha_2}$ of the fuzzy variable x at membership levels α1 and α2.]
Although a membership function does not have to be continuous or integrable, there are two basic properties: normality and convexity.

Normality: a fuzzy set is said to be a normal fuzzy set if and only if

$$\max_{x \in R} \mu_X(x) = 1 \qquad (2.3.2)$$

Convexity: a fuzzy set X ⊂ R with membership function $\mu_X(x)$ is convex if

$$\forall x_1, x_2 \in X,\ \lambda \in [0, 1] \qquad (2.3.3)$$

$$\mu_X[\lambda x_1 + (1 - \lambda) x_2] \ge \min\big(\mu_X(x_1), \mu_X(x_2)\big) \qquad (2.3.4)$$
When multiple fuzzy variables are considered in a functional relationship, the corresponding fuzzy responses are computed by Zadeh's extension principle [27]. For instance, let X and Y be two fuzzy sets with Z ⊆ R, and consider a two-variable function:

$$F: X \times Y \to Z \qquad (2.3.5)$$

Let $\mu_X(x)$, $\mu_Y(y)$, and $\mu_Z(z)$ be their associated membership functions. Given $\mu_X(x)$ and $\mu_Y(y)$, define

$$\mu_Z(z) = \max_{z = F(x, y)} \min\big(\mu_X(x), \mu_Y(y)\big) \qquad (2.3.6)$$

The fuzzy membership function for the implicit response of an engineering application is usually obtained by using interval analysis techniques [34] at each α-level with Eq. (2.3.6).
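The α-cut propagation can be sketched as follows, under the simplifying assumption that the response function is monotonic over each cut, so that evaluating the interval vertices bounds the response (the vertex method); the triangular fuzzy numbers are illustrative assumptions.

```python
# A minimal sketch of propagating fuzzy variables through a function via
# alpha-cuts, in the spirit of Eq. (2.3.6). At each alpha level the cuts are
# intervals, and the response interval is bounded by evaluating F at the
# interval vertices -- valid only when F is monotonic over the cuts, which
# is an assumption of this sketch. Fuzzy numbers are illustrative.
from itertools import product

def tri_alpha_cut(lo, peak, hi, alpha):
    """Alpha-cut [a, b] of a triangular fuzzy number (lo, peak, hi)."""
    return (lo + alpha * (peak - lo), hi - alpha * (hi - peak))

def F(x, y):
    return x * y  # assumed response function

for alpha in (0.0, 0.5, 1.0):
    X = tri_alpha_cut(0.9, 1.0, 1.1, alpha)      # fuzzy scale factor (assumed)
    Y = tri_alpha_cut(80.0, 100.0, 130.0, alpha)  # fuzzy load (assumed)
    vals = [F(x, y) for x, y in product(X, Y)]    # vertex method
    print(f"alpha={alpha:.1f}: Z in [{min(vals):.1f}, {max(vals):.1f}]")
```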
Recently, many scientific and engineering communities have recognized that both aleatory and epistemic uncertainties coexist in practical engineering systems. However, neither probability theory nor fuzzy set theory always provides an appropriate framework for handling multiple types of uncertainties, because the two frameworks are fundamentally incompatible with each other. Until now, when multiple types of uncertainties have coexisted in a structural reliability analysis, UQ analyses have been performed by treating them separately or by making strong assumptions to force the problem into either the probabilistic framework or the fuzzy set framework.

However, due to the flexibility of its basic axioms, evidence theory can accept aleatory uncertainty information (pre-existing probability information) as well as any epistemic information (certain bounds, possibilistic membership functions, etc.) to assess the likelihood of a limit-state function. As a generalization of classical probability and possibility theories from the perspective of bodies of evidence and their measures, evidence theory is investigated, and a unified framework for multiple types of uncertainties is developed, in the following chapters.
3. Evidence Theory
Evidence theory [16], also known as Dempster-Shafer Theory, was originated by Arthur P. Dempster and further developed by Glenn Shafer. Evidence theory allows us to express not only aleatory uncertainty, but also epistemic uncertainty. Aleatory uncertainty is irreducible and related to natural variability; epistemic uncertainty can be defined as a lack of knowledge or data in any phase or activity of the modeling process. The derivation of evidence theory is based on set theory, because the possible propositions of interest can be expressed as subsets of the set of all possible events, and set theory provides useful tools for handling subset and superset relationships in an explicit and consistent manner. Hence, some basic notions and notations of set theory are introduced first.
3.1 Set Operations and Mappings
A set consists of a finite or infinite number of elements, and there are several ways to denote a set. First, we can denote a set by listing its elements within braces. For example, if $a_1, a_2, \ldots, a_n$ are the elements of a set A, then we write

$$A = \{a_1, a_2, \ldots, a_n\} \qquad (3.1.1)$$

Alternatively, we can use a condition expression for a set, as in Eq. (3.1.2):

$$\{x \mid \text{the condition for } x\} \qquad (3.1.2)$$

The most basic and well-known symbols used in set theory are ∈, ⊂, ⊆, and =. We write $x \in A$ to indicate that x is an element of A; x is said to be contained in A. $A \subseteq B$ indicates that A is a subset of B and B is a superset of A. We say $A = B$ if and only if $A \subseteq B$ and $B \subseteq A$. When one of these symbols is negated, we put a slash through it: $a \notin A$, $A \not\subset B$, and $A \neq B$. There are also set operators that make a new set from available sets. The notation $A \cap B$ denotes the intersection of A and B, the set of all elements that are in both sets:

$$A \cap B = \{x \mid x \in A \text{ and } x \in B\} \qquad (3.1.3)$$

The notation $A \cup B$ denotes the union of the two sets:

$$A \cup B = \{x \mid x \in A \text{ or } x \in B\} \qquad (3.1.4)$$

The difference of the two sets A and B is denoted as

$$A - B = \{x \mid x \in A \text{ and } x \notin B\} \qquad (3.1.5)$$

The complementary set of A, which is defined as a subset of a set Θ, is indicated as

$$\bar{A} = \Theta - A \qquad (3.1.6)$$

The symbol ∅ is used to denote the empty set:

$$\emptyset = \{\,\} \qquad (3.1.7)$$

A mapping from A into B, which assigns each element $x \in A$ to an element $\sigma(x) \in B$, is denoted by

$$\sigma: A \to B \qquad (3.1.8)$$

A mapping σ from a set A into B is called a function on A, and we may denote the function σ by f, g, and so on. For $X \subseteq A$, we denote

$$\sigma(X) = \{\sigma(x) \mid x \in X\} \qquad (3.1.9)$$

For $y \in B$,

$$\sigma^{-1}(y) = \{x \mid x \in A,\ \sigma(x) = y\} \qquad (3.1.10)$$

A mapping is said to be one-to-one if the elements of A and B are distinctly mapped to each other, and a mapping is from A onto B if for every $b \in B$ there exists $a \in A$ such that $\sigma(a) = b$. If there exists a one-to-one mapping from A onto B, then we say there is a one-to-one correspondence between A and B.

Given two sets A and B, we can form the set $A \times B$, called the Cartesian product of A and B:

$$A \times B = \{(a, b) \mid a \in A,\ b \in B\} \qquad (3.1.11)$$

For a given set A, the collection of all subsets of A, including A itself, is called the power set of A, denoted by

$$2^A = \{X \mid X \subseteq A\} \qquad (3.1.12)$$
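Since evidence theory assigns belief over the power set $2^X$, a small sketch of generating it is useful; the use of frozensets, so that subsets can later serve as dictionary keys of a BBA structure, is an implementation choice of this sketch, not of the dissertation.

```python
# A minimal sketch of the power set of Eq. (3.1.12), which evidence theory
# uses as the set 2^X of possible propositions. Frozensets make the subsets
# hashable, so they can later serve as keys of a BBA structure.
from itertools import chain, combinations

def power_set(A):
    """Return 2^A as a list of frozensets, from the empty set up to A itself."""
    A = list(A)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(A, r) for r in range(len(A) + 1))]

X = {"x1", "x2", "x3"}
print(len(power_set(X)))  # 2^3 = 8 propositions, as in Eq. (3.2.2)
```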
3.2 Frame of Discernment
Evidence theory starts by defining a frame of discernment, a set of mutually exclusive "elementary" propositions. Any question of likelihood takes some set of possibilities as given. The given propositions might be nested in one another or might partially overlap, and complex hierarchies of events can be imagined; however, the finest subdivision of the set becomes the "elementary" proposition. The frame of discernment consists of all finite elementary propositions and may be viewed as the counterpart of a finite sample space in probability theory. For instance, all the basic components of a system can be elementary components for determining the failed component of the system. The frame of discernment is denoted by Ω or X.

In a structural design problem, uncertainty can exist in structural parameters of the analysis model as epistemic uncertainty. For an operating load, only interval information might be given, with suspected elementary propositions as shown in Fig. 3.1, due to the lack of information or data. In Fig. 3.1, elementary proposition x1 has interval [0, 1], and x1 represents the proposition that the true load value lies in the interval [0, 1]. The other elementary propositions are interpreted in a similar way. In this example, the frame of discernment can be given as

$$X = \{x_1, x_2, x_3\} \qquad (3.2.1)$$

where x1, x2, and x3 are elementary propositions.

[Figure 3.1 Frame of Discernment with Elementary Intervals: x1 = [0, 1], x2 = [1, 2], x3 = [2, 3] on the load axis.]
Various propositions can be expressed through negation, conjunction, and disjunction of elementary propositions. The power set of X (Eq. 3.1.12) represents all the possible distinct propositions; the total number of possible propositions is $2^n$, where n is the number of elementary propositions. Hence, elementary propositions should be defined so that all of the available evidence can be reflected within the power set of X. The power set of X is given as

$$2^X = \{\emptyset, \{x_1\}, \{x_2\}, \{x_3\}, \{x_1, x_2\}, \{x_2, x_3\}, \{x_1, x_3\}, X\} \qquad (3.2.2)$$

Proposition {x1, x2} in $2^X$ means that one and only one of the two elementary propositions is true, but we do not know which one. Because elementary propositions are selected to be mutually exclusive, the true value of the load is assumed not to lie in both elementary propositions. The proposition X in $2^X$ means that the true value of the load is located in the interval of x1, x2, or x3; it is always true, because we assume that the true value of the load exists in the frame of discernment. However, proposition X does not convey any useful information for quantifying uncertainty with respect to the defined elementary propositions of the load parameter. Hence, we may say that proposition X represents our degree of complete uncertainty rather than our degree of belief in proposition X.
3.3 Basic Belief Assignment

In evidence theory, the basic propagation of information is through the Basic Belief Assignment (BBA). The BBA expresses our degree of belief in a proposition. It is determined by various forms of information: sources, experimental methods, and the quantity and quality of information. The BBA is assigned by a mapping function m that expresses our belief in a proposition as a number in the unit interval [0, 1]:

$$m: 2^X \to [0, 1] \qquad (3.3.1)$$

The number m(A) represents the portion of total belief assigned exactly to proposition A; the total belief is obtained by considering the Belief and Plausibility functions discussed later. The basic belief assignment function m must satisfy the following three axioms:

I. $m(A) \ge 0$ for any $A \in 2^X$ (3.3.2)

II. $m(\emptyset) = 0$ (3.3.3)

III. $\sum_{A \in 2^X} m(A) = 1$ (3.3.4)
We do not assign any degree of belief to the empty proposition ∅; that is, in evidence theory we ignore the possibility of an uncertain parameter lying outside the frame of discernment. Though these three axioms look similar to those of probability theory, the axioms for BBA functions are less restrictive than those for a probability measure. In probability theory, the probability mass function p is defined only for elementary, single propositions. For instance, for a sample space

$$\Omega = \{y_1, y_2, y_3\} \qquad (3.3.5)$$

we can obtain a probability distribution by a probability mass function p:

p(y1) = 0.2 (3.3.6)

p(y2) = 0.6 (3.3.7)

p(y3) = 0.2 (3.3.8)

p(y1) + p(y2) + p(y3) = 1 (3.3.9)
On the other hand, in evidence theory, the frame of discernment is initially defined in terms of elementary propositions with all available evidence. The given evidence may not exactly correspond to a defined elementary proposition. For example, suppose a frame of discernment is given as X = {x1, x2, x3}. Evidence may not be available for each of the single, elementary propositions {x1}, {x2}, and {x3}; instead, there may exist evidence for the proposition {x1, x2} that cannot be divided between the two propositions {x1} and {x2}. In this case, in order to use probability theory, the evidence for proposition {x1, x2} would have to be distributed to its subsets, propositions {x1} and {x2}, by employing a baseless assumption, such as a uniform distribution, without reasonable supporting information.

[Figure 3.2 Constructing BBA Structure in Evidence Theory: available evidence for {x1} and {x1, x2} maps directly onto those propositions among the possible events of 2^X; no BBA is defined for the remaining propositions.]

However, with the BBA function m of evidence theory, a BBA can be given to any possible subset of X. The evidence for the event {x1, x2} corresponds to the proposition {x1, x2}, which is already defined in 2^X, and this evidence can be used to assign the degree of belief (BBA) to the proposition {x1, x2} directly, without being split between the two propositions {x1} and {x2} individually. Moreover, the given evidence might not be sufficient to assign BBAs to all of 2^X. This is a more natural and intuitive way to express one's degree of belief with partial information. In Fig. 3.2, evidence for {x1} and {x1, x2} is available, so only the BBAs for {x1} and {x1, x2} are defined.
All the possible sets over the defined elementary propositions are

$$2^X: \{\emptyset, \{x_1\}, \{x_2\}, \{x_3\}, \{x_1, x_2\}, \{x_2, x_3\}, \{x_1, x_3\}, X\} \qquad (3.3.10)$$

For example, a BBA structure can be given as

m({x1}) = 0.75, m(X) = 0.25 (3.3.11)

where m({x1}), obtained from the E1 evidence, is interpreted to mean that we are certain with a 0.75 degree of belief that x1 is true. As shown in Fig. 3.3, m({x1}) is obtained from the E1 evidence, and it is assumed that the E1 evidence can be used only to define the BBA for proposition {x1}; that is, the E1 evidence implies nothing about m({x2}), m({x3}), or m({x2, x3}). Hence, we cannot give the remaining 0.25 to m({x2}), m({x3}), or m({x2, x3}) based on the evidence for {x1}.

In other words, in evidence theory, if evidence that exactly corresponds to proposition {x1} is available, its information is not transmitted to the rest of the propositions of X as evidence for determining the BBAs of propositions {x2} or {x3}. For example, assume there is another evidence source, E2, for {x1}. If the E2 evidence supports just the proposition {x1}, then m({x1}) becomes 1.0; if instead the E2 evidence supports the proposition {x2}, then m({x1}) remains 0.75 and m({x2}) becomes 0.25. However, before it is possible to access the E2 evidence, we are in total
ignorance about the remaining BBA degree of 0.25. Hence, the remaining 0.25 of BBA should be given to proposition X to express our degree of uncertainty.

[Figure 3.3 Degree of Uncertainty, m(X): the E1 evidence gives m({x1}) = 0.75 and may or may not support proposition {x1} further, leaving m(X) = 0.25 as uncertainty; subsequent E2 evidence yields m({x1}) = 1.0 if it supports {x1}, or m({x1}) = 0.75 and m({x2}) = 0.25 if it supports {x2}.]
For example, consider a murder case with three suspects. The frame of discernment consists of three elementary propositions, X = {x1, x2, x3}, where {x1} means that x1 is the murderer. If there is a witness (E1 evidence) who gave testimony only against x1, then we can believe that x1 is the murderer with a degree of belief derived from the testimony, say m({x1}) = 0.75. However, we cannot assign the remaining 0.25 to m({x1, x2}), m({x2}), or m({x3}), because the evidence given by the witness concerns only suspect x1; he did not testify about suspects x2 and x3. When we find other witnesses (E2 evidence), our degree of belief can change, and at this point we do not
know how it will change. At least we know that the other testimonies (E2 evidence) could support {x1}, {x2}, or {x3}, which means that we have no idea about the remaining 0.25; it should be included in the degree of uncertainty.

As another example, a BBA structure with X = {x1, x2, x3} can also be given as

m({x1}) = 0.5, m({x2}) = 0.3, m({x1, x2}) = 0.1, m(X) = 0.1 (3.3.12)

The function m satisfies the three axioms; thus m is a basic belief assignment function. The BBA can be determined by various information: sources, methods, and the quantity and quality of information. We accept that our knowledge and information are too deficient to produce a perfect and complete opinion. In this BBA structure, it appears that no evidence related to x3 is available; that is, m({x3}) is zero. From these examples of a BBA structure, the following properties can be summarized:

1) Additivity does not necessarily hold:

$$m(\{x_1\}) + m(\{x_2\}) \neq m(\{x_1, x_2\}) \qquad (3.3.13)$$

In probability theory, additivity is one of the axioms: $p(a) + p(b) = p(a \cup b)$ for disjoint events. In evidence theory, it is not necessarily true. For instance, m({x1}) + m({x2}) is not the same as m({x1, x2}), because there is uncertainty in the information. The BBA for
proposition x1, x2 is not obtained by adding up m(x1) and m(x2), rather it is
obtained from evidence for x1, x2 and the evidence for x1, x2 might be independent of
m(x1) and m(x2), shown in Fig.3.2. However, if we handle only aleatory uncertainties
and there is sufficient information for all elementary propositions, then BBA structure
will be the PDF of probability theory.
2) Monotonicity does not necessarily hold:

m({x1}) ≥ m({x1, x2}) is possible even though {x1} is a subset of {x1, x2} (3.3.14)

In probability theory, the probability of x1, p(x1), can never exceed the probability of x1 ∪ x2, p({x1, x2}). In evidence theory, Fig. 3.3 shows how BBAs are assessed with the given partial evidence. The evidence for {x1} is not transmitted to {x1, x2}, and the evidence for {x1, x2} also does not affect its subsets {x1} and {x2}; we cannot determine any distribution of the BBA of the proposition {x1, x2} to its subsets. Hence, m({x1, x2}) can serve both as a "degree of uncertainty" between x1 and x2 and as a "degree of belief" for the proposition {x1, x2}, by making use of the Belief and Plausibility functions. Therefore, it is possible that m({x1}) ≥ m({x1, x2}) even though {x1} is a subset of {x1, x2}. When we are interested in the degree of belief in {x1, x2}, then m({x1, x2}) is counted in one's total degree of belief by the belief function.
3) It is not required that m(X) = 1, but m(X) ≤ 1:

In probability theory, p(∅) = 0 implies that p(X) = 1. In evidence theory, however, this implication is not accepted. The BBA can be assigned only where there is reasonable evidence or other information.
In summary, a BBA is not a probability; it is just a belief in a particular proposition, irrespective of other propositions. In evidence theory, the BBA is not the final goal in which we are interested; rather, it expresses the portion of the total belief assigned exactly to a proposition. The final goal is to determine a bound with the degrees of belief and plausibility by considering all of the possible beliefs, which may be partial and incomplete. By contrast, in probability theory, we finally obtain a single value of probability for a proposition.

The BBA structure provides the flexibility to express belief for possible propositions with the given partial and insufficient evidence, and it also makes it possible to avoid excessive or baseless assumptions when assigning our belief to propositions. With this flexibility, the BBA structure can be used to express typical partial-belief structures. For instance, with a frame of discernment X = {x1, x2, x3, x4, x5}, the following BBA structures are valid:
Probabilistic BBA Structure

BBAs are assigned to all of the elementary propositions:

m({x1})=0.4, m({x2})=0.2, m({x3})=0.1, m({x4})=0.2, m({x5})=0.1
Complementary BBA Structure
BBAs are given to a subset of X and its complementary subset. The
complementary belief structure is not necessarily a probabilistic BBA structure
because the subset is not always a single, elementary proposition.
m({x1, x3})=0.7, m({x2, x4, x5})=0.3
Figure 3.4 Probabilistic BBA Structure

Figure 3.5 Complementary BBA Structure

Consonant BBA Structure

BBAs are given to subsets which are consonant subsets of each other.
m({x3})=0.2, m({x2, x3, x4})=0.3, m({x1, x2, x3, x4, x5})=0.5
General Belief Structure

In this BBA structure, the BBA can be assigned in any way: discontinuous, partially consonant, or partially overlapping.

m({x1})=0.7, m({x1, x3})=0.2, m({x4, x5})=0.1
Figure 3.6 Consonant BBA Structure

Figure 3.7 General BBA Structure

3.4 Combination of Evidence

Different BBA structures can be obtained from several independent knowledge sources over the same frame of discernment. In evidence theory, the combination of evidence or information is still an open question, and there is no unique method as there is
in probability theory. Initially, Dempster introduced Dempster’s rule of combination,
which enables us to compute the orthogonal sum of given belief structures from multiple
sources. After that, several combination rules have been introduced to overcome the
criticism of Dempster’s rule of combining [35]. Recently, Sentz and Ferson [36] surveyed combination rules by defining types of evidence and investigated Dempster’s rule of combining by comparing its algebraic properties with those of other combination rules. Here,
some of the combination rules are introduced.
3.4.1 Dempster’s rule of combining

Two BBA structures, m1 and m2, given by two different evidence sources, can be fused by Dempster’s rule of combining to create a new BBA structure, as shown in Eq. (3.4.1),
m(A) = \frac{\sum_{C_i \cap C_j = A} m_1(C_i)\, m_2(C_j)}{1 - \sum_{C_i \cap C_j = \emptyset} m_1(C_i)\, m_2(C_j)}, \quad A \neq \emptyset    (3.4.1)
where Ci and Cj denote propositions from each source. In Eq. (3.4.1), the denominator
can be viewed as a contradiction or conflict among the information given by independent
knowledge sources. Even when some conflicts are found among the information,
Dempster’s rule disregards every contradiction by normalizing with the complementary
degree of contradiction because it is designed to use consistent opinions from different
42
sources as much as possible. However, this normalization can cause a counterintuitive
and numerically unstable combining of information when the given information from
independent sources contains extreme contradictions or conflicts [35, 37]. In other words,
Dempster’s rule can be appropriate to a situation in which there is some consistency and
sufficient agreement among the opinions of different sources. On the other hand, Yager
[35] has proposed an alternative rule of combination in which all contradiction is
attributed to total ignorance. In this work, we assume that there is enough consistency among the given sources to use Dempster’s rule of combining.
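As an illustration only, a minimal Python sketch of Eq. (3.4.1) follows; propositions are represented as frozensets, and the function name is our own, not taken from any standard library.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combining, Eq. (3.4.1): the orthogonal sum of two
    BBA structures, normalized by the complementary degree of conflict."""
    combined, conflict = {}, 0.0
    for (c_i, v1), (c_j, v2) in product(m1.items(), m2.items()):
        a = c_i & c_j                       # intersection of the propositions
        if a:
            combined[a] = combined.get(a, 0.0) + v1 * v2
        else:
            conflict += v1 * v2             # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: the combination is undefined")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# e.g., fusing two hypothetical testimonies from the earlier murder example
m1 = {frozenset({"x1"}): 0.75, frozenset({"x1", "x2", "x3"}): 0.25}
m2 = {frozenset({"x2"}): 0.50, frozenset({"x1", "x2", "x3"}): 0.50}
print(dempster_combine(m1, m2))   # {x1}: 0.6, {x2}: 0.2, frame: 0.2
```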
3.4.2 Yager’s rule of combination
The main difference between Dempster’s rule of combining and Yager’s rule of
combination is in the handling of contradiction in given belief structures. Yager [35]
argued that the conflict or contradiction comes from our ignorance: thus, instead of
normalizing out the contradiction, he allocates the contradicted portion to the frame of
discernment (X or Ω), which implies total ignorance. The ground probability mass
assignment (q) is introduced in Yager’s formulation and has different properties that
allow the ground probability mass assignment of the null set to be greater than 0, i.e.
q(\emptyset) \ge 0    (3.4.2)
Yager’s rule of combination is given by:
q(C) = \sum_{A \cap B = C} m_1(A)\, m_2(B)    (3.4.3)

m(C) = q(C) \quad \text{for } C \neq \emptyset, X    (3.4.4)

m(X) = q(X) + q(\emptyset)    (3.4.5)
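For comparison with the Dempster sketch above, a hedged sketch of Yager's rule under the same frozenset representation (again our own illustrative code) shows how the conflicting mass is moved to the whole frame instead of being normalized out.

```python
from itertools import product

def yager_combine(m1, m2, frame):
    """Yager's rule, Eqs. (3.4.3)-(3.4.5): the ground probability mass q of
    the empty set (the conflict) is reassigned to the whole frame X."""
    q = {}
    for (a, v1), (b, v2) in product(m1.items(), m2.items()):
        c = a & b
        q[c] = q.get(c, 0.0) + v1 * v2      # ground mass; empty set allowed
    conflict = q.pop(frozenset(), 0.0)
    q[frame] = q.get(frame, 0.0) + conflict  # m(X) = q(X) + q(empty set)
    return q
```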
3.4.3 Inagaki’s unified rule of combining

Toshiyuki Inagaki introduced a combination rule with continuously parameterized combination operations [38], which includes both Dempster’s rule and Yager’s rule. Inagaki uses Yager’s ground probability assignment (q) and develops a rule of combination in a systematic manner. Any rule of combination can thus be expressed as:
m(C) = q(C) + f(C)\, q(\emptyset), \quad C \neq \emptyset    (3.4.6)

\sum_{C \in 2^X,\, C \neq \emptyset} f(C) = 1, \quad f(C) \ge 0    (3.4.7)
where f(C) denotes an allocation coefficient for proposition C, subject to a restriction such as

\frac{m(C)}{m(D)} = \frac{q(C)}{q(D)}    (3.4.8)
For any propositions C and D, except X or ∅, the above equation expresses that no knowledge is assumed regarding the relative importance or credibility of propositions. The general expression in Eq. (3.4.6) can be rewritten with the restriction of Eq. (3.4.8):

\frac{q(C) + f(C)\, q(\emptyset)}{q(C)} = \frac{q(D) + f(D)\, q(\emptyset)}{q(D)}    (3.4.9)
where f(C) can be interpreted as a scaling function for q(∅), and the conflict parameter k is defined by:

k = \frac{f(C)}{q(C)} \quad \text{for any } C \neq X, \emptyset    (3.4.10)
From the above equations, we obtain a unified rule of combination as follows:

m(C) = [1 + k\, q(\emptyset)]\, q(C) \quad \text{for } C \neq X, \emptyset    (3.4.11)

m(X) = [1 + k\, q(\emptyset)]\, q(X) + [1 + k\, q(\emptyset) - k]\, q(\emptyset)    (3.4.12)

m(\emptyset) = 0, \quad 0 \le k \le \frac{1}{1 - q(\emptyset) - q(X)}    (3.4.13)
With Inagaki’s rule, Dempster’s rule is obtained by setting k = 1/[1 − q(∅)], and Yager’s rule is realized when k = 0. Since k is continuous-valued, the unified rule of combination represents infinitely many rules of combination, as shown in Fig. 3.8. Inagaki found that system safety could be changed not only by the type of
safety-control policy, but also by the choice of a rule of combination. Hence, Inagaki proposed finding an optimal value of k from safety-control policies and the resulting plausibility of a system event.
Figure 3.8 Rules of Combination by Parameter k
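A sketch of the unified rule in the same style as the previous sketches may clarify the role of k; this is our own illustrative code, and the choice of k within its admissible range is left to the caller.

```python
from itertools import product

def inagaki_combine(m1, m2, frame, k):
    """Inagaki's unified rule, Eqs. (3.4.11)-(3.4.12); k = 0 reproduces
    Yager's rule and k = 1/(1 - q(empty set)) reproduces Dempster's rule."""
    q = {}
    for (a, v1), (b, v2) in product(m1.items(), m2.items()):
        c = a & b
        q[c] = q.get(c, 0.0) + v1 * v2
    q_empty = q.pop(frozenset(), 0.0)
    q_frame = q.get(frame, 0.0)
    k_max = 1.0 / (1.0 - q_empty - q_frame)    # upper bound, Eq. (3.4.13)
    if not (0.0 <= k <= k_max):
        raise ValueError("k outside the admissible range of Eq. (3.4.13)")
    m = {c: (1.0 + k * q_empty) * v for c, v in q.items() if c != frame}
    m[frame] = ((1.0 + k * q_empty) * q_frame
                + (1.0 + k * q_empty - k) * q_empty)   # Eq. (3.4.12)
    return m
```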
3.4.4 Mixing or averaging method

Information from multiple independent sources is treated as equally credible, and contradiction or conflict among those sources is not taken into consideration; the given opinions are simply averaged. The formula for the mixing combination rule is

m_{1 \ldots n}(A) = \frac{1}{n} \sum_{i=1}^{n} w_i\, m_i(A)    (3.4.14)
where the mi are the BBAs of the belief structures and the wi are the weights assigned based on the credibility of the evidence.
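A one-function sketch of Eq. (3.4.14), in our own illustrative code, follows; it simply accumulates the weighted masses without any treatment of conflict.

```python
def mixing_combine(bbas, weights):
    """Mixing/averaging rule, Eq. (3.4.14): a credibility-weighted average
    of n BBA structures; conflict is not redistributed at all."""
    n = len(bbas)
    out = {}
    for m_i, w_i in zip(bbas, weights):
        for a, v in m_i.items():
            out[a] = out.get(a, 0.0) + w_i * v / n
    return out
```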
There are several other combination methods. The most crucial point in all of them is the reallocation of the BBA associated with contradiction or conflict. The mixing method generalizes the averaging operation that is usually used for aleatory uncertainty under an assumed uniform distribution. Inagaki’s unified rule of combining gives us a useful tool to interpolate or extrapolate between the rules of combination proposed by Yager and Dempster. However, the procedure to determine k is not yet well justified, and this rule is not associative except at the k value that coincides with Dempster’s rule. Yager’s rule of combination transfers the degree of contradiction to the degree of ignorance, which seems very persuasive. However, Yager’s rule is only quasi-associative; hence, when there are multiple knowledge sources, the resulting combined BBA structure may be affected by the order of combination.

In this study, Dempster’s rule of combining is selected to aggregate information from different independent sources, with the assumption that there is some consistency among the given information. It has been shown that Dempster’s rule of combination performs satisfactorily under situations of low conflict [36].
3.5 Belief and Plausibility Functions
Due to a lack of information, it is more reasonable to present bounds for the result
of uncertainty quantification, as opposed to a single value of probability. Our total degree
of belief in a proposition “A” is expressed within a bound [Bel(A), Pl(A)], which lies in
the unit interval [0, 1], as shown in Fig. 3.9, where Bel(•) and Pl(•) are given as,
Bel(A) = \sum_{C_i \subset A} m(C_i) : Belief function    (3.5.1)

Pl(A) = \sum_{C_i \cap A \neq \emptyset} m(C_i) : Plausibility function    (3.5.2)
Due to Uncertainty, the degree of belief for the proposition A and the degree of
belief for a negation of the proposition A do not have to sum up to unity. Bel(A) is
obtained by a summation of the BBAs for propositions that are included in the
proposition A. With this viewpoint, Bel(A) is our “total” degree of belief. We called m(Ci)
a “portion” of total belief in the proposition A in the previous section. The degree of
plausibility, Pl(A), is calculated by adding the BBAs of propositions whose intersection with the proposition A is not an empty set. That is, every proposition that allows the proposition A to be included at least partially is considered to contribute to the plausibility of proposition A, because the BBA of a proposition is not divided in any way among its subsets.
Figure 3.9 Belief (Bel) and Plausibility (Pl)

Figure 3.10 Bel and Pl in a given BBA structure (the proposition A is the shaded area)
Again, Bel(A) is obtained by adding the BBAs of propositions that imply the proposition
A; whereas, Pl(A) is plausibility calculated by adding the BBAs of propositions that
imply or could imply the proposition A. In a sense, these two measurements constitute
lower and upper probability bounds. For example, Fig. 3.10 represents a BBA structure
where the proposition A is expressed in the shaded area. The belief function, Bel(A), is
obtained by adding up the BBAs for C1 and C3 that are totally included in the shaded area.
On the other hand, C1, C2, C3, C4, and C5 are added up for Pl(A) because those propositions partially or totally imply the proposition A.
- Assessing Bel and Pl with evidence theory
For a simple numerical example, assume that there are three different methods to
detect the true value of x.
Table 3.1 The Evidence for the True Value of x

Method       BBA   Result (interval)
1st method   0.5   [0, 1]
             0.3   [1, 2]
2nd method   0.1   [0, 2]
3rd method   0.1   [0, 3]

Figure 3.11 BBA Structure (m({x1})=0.5, m({x2})=0.3, m({x1, x2})=0.1, m(Ω)=0.1)
The first method is suspected to have a ±0.5 error range from a median value. We
have no further evidence to decide what kind of PDF exists in the error range, and we
cannot even find whether the error comes from unknown PDF of input data or from an
incompletely defined model. Therefore, the error range can be viewed as epistemic
uncertainty. The second method has a ±1 error range, and the third method has a ±1.5
error range. After consuming all of the available resources, we obtained the test results shown in Table 3.1, where the BBA was assumed to be determined by multi-criteria evaluations, including the number of experiments, the reliability of the experimental method, the skill of the engineers, and so on.
The BBA structure is shown in Fig. 3.11 with a frame of discernment Ω = {x1, x2, x3}. With this BBA structure, the degrees of belief and plausibility for the elementary propositions x1, x2, and x3 are computed as follows,
Bel({x1}) = m({x1}) = 0.5    (3.5.3)

Pl({x1}) = m({x1}) + m({x1, x2}) + m(Ω) = 0.7    (3.5.4)

Bel({x2}) = m({x2}) = 0.3    (3.5.5)

Pl({x2}) = m({x2}) + m({x1, x2}) + m(Ω) = 0.5    (3.5.6)

and,

Bel({x3}) = 0.0    (3.5.7)

Pl({x3}) = m(Ω) = 0.1    (3.5.8)
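The following short Python sketch (our own illustrative code, using the frozenset representation of the earlier sketches) reproduces these six values from the BBA structure of Fig. 3.11.

```python
def bel(m, a):
    """Belief, Eq. (3.5.1): sum of BBAs of propositions contained in A."""
    return sum(v for c, v in m.items() if c <= a)

def pl(m, a):
    """Plausibility, Eq. (3.5.2): sum of BBAs of propositions intersecting A."""
    return sum(v for c, v in m.items() if c & a)

omega = frozenset({"x1", "x2", "x3"})
m = {frozenset({"x1"}): 0.5, frozenset({"x2"}): 0.3,
     frozenset({"x1", "x2"}): 0.1, omega: 0.1}

for x in ("x1", "x2", "x3"):
    a = frozenset({x})
    print(x, bel(m, a), pl(m, a))
# x1 0.5 0.7 ; x2 0.3 0.5 ; x3 0.0 0.1, matching Eqs. (3.5.3)-(3.5.8)
```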
The Plausibility and Belief functions for each elementary proposition can be
expressed as shown in Fig. 3.12. Probabilities over the frame of discernment with the
assumption of uniform distribution for BBAs are also illustrated. As mentioned before,
Belief and Plausibility can be viewed as lower and upper bounds of probability. So, the
probability value is always supposed to be placed between Belief and Plausibility, even if
different distribution functions are assumed. The degree of Uncertainty, the difference
between Belief and Plausibility, becomes smaller as we obtain more information and
knowledge.
Figure 3.12 Belief (Bel), Plausibility (Pl), and Probability (Pf) in Elementary
Propositions
Even though evidence theory does not give us a single value, the given bound [Bel, Pl]
retains all of the information without any excessive and baseless assumptions. That is, the
result of evidence theory is consistent with given partial information. Since the bound
represents the current uncertainty situation based on available evidence, a decision-maker
can obtain insight into the problem and avoid mistakes made by misusing assumptions.
(In Fig. 3.12, for x1, x2, and x3 respectively: Pl = 0.7, 0.5, 0.1; Bel = 0.5, 0.3, 0.0; Pf = 0.583, 0.383, 0.033.)
4. Structural Uncertainty Quantification Using Evidence Theory
Uncertainty Quantification (UQ) using the framework of evidence theory for
engineering structural systems is introduced in this chapter. First, the problem definition
of UQ and the Basic Belief Assignment (BBA) structure of engineering applications are
presented. Some computational issues of using evidence theory are also discussed.
4.1 Problem Definition
The structural responses can be expressed as a vector Y that depends upon an
input vector X, with a system model f.
Y = f(X)    (4.1.1)
It is assumed that the variables in this model are independent of each other and
that uncertainties exist only in system parameters. When only parametric uncertainties in
a system model are considered, the uncertainties in responses are determined by the
uncertainties in input parameters.
Parametric uncertainty is typically classified as aleatory uncertainty due to its stochastic nature. With incomplete and insufficient information, however, the input parameters can be represented as aleatory uncertainties only through crude approximations of probability density functions. Hence, the nature of parameter uncertainty in insufficient-information situations is better characterized as epistemic.
4.2 BBA Structure in Engineering Applications
In this work, we consider the situation in which multiple intervals for an uncertain parameter in an engineering structural system are given by information sources, instead of an approximated PDF. Each interval represents a proposition about the true value of the uncertain parameter, and BBAs from each expert are assigned to the intervals through the mapping function m based on the available evidence. The intervals can be discontinuous and scattered, and they may even overlap, as in Fig. 4.1.
x11=[0, 0.25], x12=[0.5, 0.75], x13=[0.9, 1.0], x14=[0, 0.5], x15=[0, 1.0]
m(x11)=0.4, m(x12)=0.15, m(x13)=0.1, m(x14)=0.25, m(x15)=0.1

Figure 4.1 Multiple Interval Information and BBA for an Uncertain Parameter, x1
In Fig. 4.1, the two subscripts indicate the uncertain parameter and the interval, respectively. From the given information, the frame of discernment for the uncertain parameter x1 is defined as the interval [0, 1.0]. The BBA structure satisfies the three axioms of a BBA structure. As mentioned before, since evidence is not transmitted to other propositions, the BBA of x11 can be higher than that of x14, which includes the interval proposition x11. When a proposition in the given information, like x15, indicates the whole frame of discernment, its BBA is also viewed as a degree of ignorance, because such a proposition can be interpreted to mean that the information source has no idea how to assign specific interval propositions within the frame of discernment from the available partial evidence. The BBA for an interval proposition is not distributed over the interval by any distribution function. When enough discretized intervals are obtained from the available evidence, the BBA structure can express an unidentified PDF acceptably.
Multiple sources for the BBA structure, such as two experts, are assumed.
Dempster’s rule of combining fuses interval information from independent sources
without refining the intervals. It is employed because there is no assumed distribution
function of BBA within an interval. It is the basic concept in Dempster’s rule of
combining that the propositions in agreement with other information sources are given
more credence. Those propositions are emphasized by the normalization of the
complementary degree of contradiction in Dempster’s rule of combining.
After obtaining the combined BBA structure for each uncertain parameter, the joint propositions ck are constructed for the structural system model by using the Cartesian product of the uncertain parameters. The joint BBA structure must follow the three axioms of a BBA structure. For example, for only two uncertain parameters, the joint proposition is defined as

c_k = x_{c1_m} \times x_{c2_n} = \{ [x_{c1}, x_{c2}] : x_{c1} \in x_{c1_m},\; x_{c2} \in x_{c2_n} \}    (4.2.1)

and the BBA for the joint proposition set is defined by

m_c(c_k) = m(x_{c1_m})\, m(x_{c2_n})    (4.2.2)
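As an illustration of Eqs. (4.2.1) and (4.2.2), the following Python sketch (our own code; the interval values of the second parameter are hypothetical) builds a joint BBA structure from marginal interval BBAs.

```python
from itertools import product

def joint_bba(marginals):
    """Joint BBA structure, Eqs. (4.2.1)-(4.2.2): the Cartesian product of
    the interval propositions of independent parameters, with multiplied
    BBAs. Each marginal maps an interval (lo, hi) to its combined BBA."""
    joint = {}
    for combo in product(*(m.items() for m in marginals)):
        cell = tuple(interval for interval, _ in combo)  # hyper-rectangle c_k
        bba = 1.0
        for _, v in combo:
            bba *= v                                     # m(c_k), Eq. (4.2.2)
        joint[cell] = bba
    return joint

# e.g., the parameter of Fig. 4.1 with a hypothetical two-interval parameter
x1 = {(0.0, 0.25): 0.4, (0.5, 0.75): 0.15, (0.9, 1.0): 0.1,
      (0.0, 0.5): 0.25, (0.0, 1.0): 0.1}
x2 = {(0.8, 1.0): 0.6, (1.0, 1.2): 0.4}
c = joint_bba([x1, x2])
print(len(c), round(sum(c.values()), 12))   # 10 joint cells, total mass 1.0
```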
4.3 Evaluation of Belief and Plausibility Functions

The two measurements of evidence theory, the degree of plausibility and the degree of belief, are obtained by defining the set XF of uncertain input vectors and the set UF of failure system responses, as in Eqs. (4.3.1) and (4.3.2). The failure of a target system response is defined with a limit-state value, v.

U_F = \{ y : y = f(x) > v \text{ and } x = [x_1, x_2, \ldots, x_n] \in X \}    (4.3.1)

X_F = \{ x : y = f(x) > v \text{ and } x = [x_1, x_2, \ldots, x_n] \in X \}    (4.3.2)
After determining the sets, XF and UF, the Belief and Plausibility functions are evaluated
by checking all propositions of the joint BBA structure, as given in Eqs. (4.3.3) and
(4.3.4).
Bel(U_F) = \sum_{c_k :\, c_k \subset X_F,\, c_k \in C} m_c(c_k)    (4.3.3)

Pl(U_F) = \sum_{c_k :\, c_k \cap X_F \neq \emptyset,\, c_k \in C} m_c(c_k)    (4.3.4)
Since the uncertain parameters in a joint proposition are continuous in an engineering
application, it is required in the evaluation of Belief and Plausibility functions to find the
maximum and minimum responses over the joint proposition range.
[y_{min}, y_{max}] = [\min f(c_k), \max f(c_k)]    (4.3.5)
Then, by comparing the range of system responses with the limit-state value, v, the Belief
and Plausibility functions are calculated. For instance, when joint propositions ck border
on each other in two-dimensional uncertain parameter space as shown in Fig.4.2, each
joint proposition will be evaluated as to whether the response range of the joint
proposition is included in the UF set partially or entirely. Graphically, it is observed that
joint propositions c2, c3, and c6 are partially included in the UF set and the BBAs for those
propositions will be added up for the degree of plausibility.
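A minimal Python sketch of this classification follows (our own illustrative code with hypothetical numbers); it assumes the response is monotonic in each parameter so that vertex evaluations bound the response range over each joint proposition.

```python
from itertools import product

def bel_pl_failure(joint, response, v):
    """Classify each joint interval proposition by its response range over
    its vertices and accumulate Bel and Pl for the failure event f(x) > v,
    per Eqs. (4.3.3)-(4.3.4)."""
    bel = pl = 0.0
    for cell, m in joint.items():
        corners = [response(*x) for x in product(*cell)]  # vertex responses
        if min(corners) > v:       # proposition lies entirely in U_F
            bel += m
            pl += m
        elif max(corners) > v:     # proposition partially in U_F
            pl += m
    return bel, pl

# toy joint BBA structure over two parameters (hypothetical values)
joint = {((0.9, 1.1), (0.9, 1.1)): 0.8,
         ((0.9, 1.1), (1.1, 1.3)): 0.1,
         ((1.1, 1.3), (0.9, 1.1)): 0.05,
         ((1.1, 1.3), (1.1, 1.3)): 0.05}
f = lambda a, b: a * b             # monotonic toy response
print(bel_pl_failure(joint, f, 1.2))
```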
Figure 4.2 Failure Set, UF and Joint BBA Structure for Two Uncertain Parameters
Several methods have been proposed to find the system response range for each
joint proposition in engineering applications: the vertex method [39], sampling method [7,
28, 29, 40], optimization method [36], and so forth. When a system response is
continuous and monotonic with respect to every uncertain parameter, the vertex method
[39] can be used to find the system response range. However, when the limit-state
function is expressed as a nonlinear function, as in many engineering applications,
sampling or sub-optimization techniques can be applied to find the maximum and
minimum range values in each joint proposition. Those two techniques may require
intolerable computational effort in a complex and large-scale system. Hence, in order to
alleviate the computational requirement without sacrificing accuracy, a surrogate model
can be introduced by taking advantage of available approximation methods. To secure the
accuracy of a surrogate model, the function space defined by the frame of discernment
for a joint BBA structure can be divided into several sub-spaces, and surrogate models
will be constructed over the sub-spaces.
Figure 4.3 Uncertainty Quantification Algorithm in Evidence Theory
However, since it is our intention to introduce the BBA structure and two
measures of evidence theory to an uncertainty quantification problem of an engineering
application, the simple vertex method is used in the following example. Applying the
vertex method is justified by the assumption that the target system model has linear
variations in a small function space of each joint proposition with respect to every
uncertain parameter. The summary of the uncertainty quantification scheme using
evidence theory is presented in Fig. 4.3. The following are the major steps in evaluating
uncertainty using evidence theory. After combining the information for each parameter,
the joint BBA structure is constructed under the assumption of independence of the uncertain parameters. The joint BBA structure must follow the three axioms of a BBA structure. The
function evaluation spaces are determined by constructing the joint BBA structure. The
degrees of plausibility and belief are obtained by checking all of the joint propositions
with the Belief and Plausibility functions.
Figure 4.4 ICW Structure Model
4.4 Numerical Example
Fig. 4.4 shows the structural model of an Intermediate Complexity Wing (ICW).
There are 62 quadrilateral composite membrane elements ([0°/90°/±45°]) for upper and
lower skins and 55 shear elements for ribs and spars. Root chord nodes are constrained as
supports. Static loads, which represent moments of aerodynamic lifting forces, are
applied along the surface nodes, and the tip displacement at the identified point in Fig.
4.4 is considered as a limit-state response function. It is assumed that there are four uncertain parameters: the elastic modulus factor, the load factor, and the tip and root region wing skin thickness factors. The nominal value of each parameter is fixed, and the real values are obtained by multiplying by the uncertain scale factors; for instance, the nominal value of the elastic modulus is 1.85×10^7 psi. Physical linking is used for the skin thicknesses, so there are two uncertain factors, for the tip and the root regions, as shown in Fig. 4.4. We consider the situation in which two experts (Expert1, Expert2) give their uncertain information for the four parameters with discontinuous and discrete intervals, because the available data for the parameters are not enough to predict any variability. Interval information is considered to be the most appropriate way to express those uncertainties based on insufficient evidence.
Figure 4.5 Elastic Modulus Factor Information
Two equally credible experts are assumed to give their opinion with multiple
intervals for each uncertain parameter. The interval information for elastic modulus and
load are given in Figs. 4.5 and 4.6 where PID denotes an indicator of each interval.
Because of the lack of information, the interval information in evidence theory may not
be continuous and intervals can overlap. In Fig. 4.5, E11 indicates the first expert’s first
interval proposition for E factor. There is a discontinuous interval [0.7, 0.8] that is not
covered by Expert1’s opinion. That is, there is no evidence from Expert1 that supports the
proposition that the elastic modulus factor exists in that interval.
Figure 4.6 Load Factor Information
As mentioned before, even though the interval E22 includes the interval E23, the
BBA of E23 is higher than that of E22 because the evidence that is supporting the interval
E23 is independent of the evidence supporting the interval E22. This scheme allows us to
express our opinion intuitively and realistically for given partial information without
making additional assumptions. The tip and root skin thickness factors information is
shown in Tables 4.1 and 4.2.
Table 4.1 Tip Wing Skin Thickness Factor (t1)

Expert1: Interval  [0.8, 1.0]   [0.95, 1.05]  [1.0, 1.2]
         BBA       0.05         0.9           0.05
Expert2: Interval  [0.8, 0.95]  [0.95, 1.05]  [1.05, 1.2]
         BBA       0.1          0.85          0.05

Table 4.2 Root Wing Skin Thickness Factor (t2)

Expert1: Interval  [0.7, 0.9]   [0.9, 1.1]    [1.1, 1.3]
         BBA       0.08         0.82          0.1
Expert2: Interval  [0.7, 0.9]   [0.9, 1.0]    [1.0, 1.1]   [1.1, 1.3]
         BBA       0.03         0.2           0.7          0.07
The opinions from the two different experts are consolidated by using Dempster’s rule of combining. For example, the combined information for the elastic modulus and load factors is given in Figs. 4.7 and 4.8.

PID:  Ec1     Ec2     Ec3     Ec4     Ec5     Ec6
BBA:  0.0014  0.0355  0.8173  0.1393  0.0057  0.007

Figure 4.7 Combined Information for Elastic Modulus Factor
PID:  Pc1     Pc2     Pc3     Pc4     Pc5     Pc6     Pc7     Pc8
BBA:  0.0005  0.0032  0.4427  0.4744  0.0621  0.0034  0.0136  0.0002

Figure 4.8 Combined Information for Load Factor
The structural analyses were conducted by using ASTROS [42] to obtain the tip displacements. Here, our goal is to obtain an assessment of the likelihood that the tip displacement exceeds the limit-state value of 0.5″.

U_F = \{ disp_{Tip} : disp_{Tip} \ge 0.5'' \}    (4.4.1)
This goal is realized by obtaining the plausibility Pl(U_F) for the set U_F with the joint BBA structure for the uncertain parameters: elastic modulus, force, and thickness.

Pl(U_F) = \sum_{c_k :\, c_k \cap X_F \neq \emptyset} m_c(c_k)    (4.4.2)
In this structural analysis problem with four uncertain parameters, the vertex method
requires 1800 function evaluations that are performed by using ASTROS. As a result, the
belief is 0.0001 and the plausibility is 0.0236 for exceeding the tip displacement limit state. This result shows that the degree of plausibility of violating the tip displacement limit state is 0.0236, whereas there is at least 0.0001 belief in the failure. Belief and Plausibility can be accepted as lower and upper bounds of an unspecified probability. Thus, the probability of U_F can be as low as 0.0001 and as high as 0.0236 with the given body of evidence. The complementary cumulative functions for plausibility and belief (CCPF & CCBF) are defined with the functions Pl and Bel for the set U_{Fv} with respect to the varying value v ∈ U, where U and U_{Fv} are defined as in Eqs. (4.4.3) and (4.4.4).

U = \{ disp_{Tip} : disp_{Tip} = f(x),\; x = [x_1, x_2, \ldots, x_n] \in X \}    (4.4.3)

U_{Fv} = \{ disp_{Tip} : disp_{Tip} \ge v,\; v \in U \}    (4.4.4)
The CCPF and CCBF functions are illustrated in Fig. 4.9. The CCPF can be interpreted in the same way as a cumulative distribution function (CDF) in probability theory. For instance, when we want the plausibility of the occurrence y > 1.0 (v = 1.0), the plausibility value 0.0016 is read from the plausibility axis, as indicated in Fig. 4.9. The difference between plausibility and belief can be viewed as the degree of Uncertainty, as shown in Fig. 4.9; it reflects the lack of confidence in the result of the analysis and varies with the limit function value v. By increasing the available information, the Uncertainty will ultimately become zero, and the three measures, plausibility, belief, and probability, will have the same value.
Figure 4.9 Complementary Cumulative Plausibility and Belief Functions
In some cases, it is difficult to make a decision when Uncertainty is too large.
However, the bound [Bel(UFv), Pl(UFv)] is obtained based on given evidence and without
any assumptions. The two measures from evidence theory bracket the failure probability
values that could result from any assumed probability distributions within the given
interval information. Hence, we can say that the bound result from evidence theory is
reasonably consistent with the given partial information. With this bound, we can obtain
and apply the insight regarding the possible uncertainty in a system response.
5. System Reanalysis Methods for Reliability Analysis
Unlike probability theory, in evidence theory, the uncertainty in a system is
propagated through a discrete Basic Belief Assignment (BBA) structure, which cannot be
expressed by any explicit function. Hence, the resulting uncertainty in a system is usually
quantified by many repetitive system simulations for all of the possible propositions
given by BBA structures of uncertain variables. The popular numerical methods of
calculating the resulting uncertainty using evidence theory are the sampling method [7,
28, 29, 40] and the vertex method [39]. However, in modern structural designs, systems
are usually numerically simulated with high fidelity tools, such as Finite Element
Analysis (FEA), Computational Fluid Dynamics (CFD), and so on. The computational
cost of UQ analysis using the sampling method can be prohibitive in most engineering
structural systems. Hence, in this work, efficient computational tools, namely system reanalysis techniques, are explored and developed. There are two general categories of reanalysis
techniques: surrogate-based methods and coefficient matrix-based methods. General
reviews of reanalysis methods can be found in literature [43, 44].
Surrogate-based methods generally construct an approximation model of a specific response of a target system with minimal interaction with the original system analyzer, which is usually a "black box," such as a computationally intensive Finite Element Analysis (FEA). The approximation model is usually constructed as a simple, closed-form equation based on series expansions [22, 25, 40, 41] or Design of Experiments [47-49]. Surrogate-based methods have been extensively demonstrated in engineering disciplines and successfully applied to many engineering design tasks, such as optimization, reliability analysis, reliability-based optimization, and so forth. Once the surrogate model is obtained, the system response of interest can be regenerated without the actual simulation. However, the solutions of surrogate-based methods are valid only within certain bounds. The valid bounds depend on the efficiency of the surrogate method and the characteristics of the original system.
One of the robust ways to increase the accuracy and efficiency of an applied
surrogate-based method is to provide more simulation data. On the other hand, in
coefficient matrix-based methods of structural system reanalysis techniques, the response
of the modified system is obtained by using a special linear system solver for the
discretized system directly. Coefficient matrix-based methods include iterative methods
[50-53], the Sherman-Morrison and Woodbury (SMW) formulas [54, 55], Combined
Approximation (CA) method [56], and so forth. Iterative methods are found to be
effective for a small degree of changes in a design and for a sparse stiffness matrix.
However, the iterative procedure should be continued until the solutions are converged,
and the convergence rate might be slow or even divergent in certain numerical conditions
of the stiffness matrix. Since the SMW formulas were introduced, there have been many efforts to incorporate SMW in structural reanalysis [57]. The application of SMW
is limited to modifications on either extremely small portions of an initial structure or a
specific type of element (truss) in FEA. Moreover, most FEA solvers do not obtain the
inverse matrix directly but use a decomposition method to solve the FE equilibrium
equations. Some techniques using the SMW formulas [58,59] have been developed to
compute the modified displacement vector instead of the modified inverse matrix of the
global stiffness matrix. However, if the displacement vector rather than the inverse of the
modified stiffness matrix is updated, sequential reanalyses for modifications of different
parts of the structure, which are the main processes of optimization and reliability
analysis, cannot be performed successively.
In this work, the Successive Matrix Inversion (SMI) method, which is an improvement on SMW but originates from the binomial series expansion, is developed with the capability to update both the inverse of the modified stiffness matrix and the modified response vector efficiently. By employing SMI in an iterative method, the Combined Iterative (CI) method, in which a direct matrix method (SMI) and an iterative method are coupled, is also developed and presented in this chapter.
5.1 Surrogate-Based Reanalysis Techniques

Most surrogate-based techniques are based on a polynomial expansion or a Taylor series expansion at a given design point. The simplest approximation using gradient information is the linear approximation based on a first-order Taylor series expansion. There are several one-point approximations (linear, reciprocal, and conservative) that can be constructed with function value and gradient information. The accuracy of these one-point approximations can be increased by adding higher-order gradient information, such as second-order gradients. However, obtaining the higher-order gradients could be computationally expensive in many engineering problems. Since most nonlinear solutions of engineering systems are sequential and iterative, we have function and gradient information at more than one point. There are several approximation methods in which the information from two design points is used, such as the Two-point Exponential Approximation (TPEA) method by Fadel et al. [60], the generalized convex approximation [61], and the Two-Point Adaptive Nonlinear Approximation (TANA) method [21-26]. In this work, the TANA method presented by Wang and Grandhi is employed as the surrogate-based method.
5.1.1 Two-Point Adaptive Nonlinear Approximation (TANA)

TANA with adaptive intervening variables has the capability of adjusting its nonlinearity to any target function automatically by using two-point information. The intervening variables are defined as

y_i = x_i^r, \quad i = 1, 2, \ldots, n    (5.1.1)
where r denotes the nonlinearity index, which is the same for all variables. The first-order Taylor series is expanded at the second point, X2, in terms of the intervening variables yi:

\tilde{g}(Y) = g(Y_2) + \sum_{i=1}^{n} \left. \frac{\partial g}{\partial y_i} \right|_{Y_2} (y_i - y_{i,2})    (5.1.2)
We can apply the chain rule to obtain the derivatives with respect to the physical variables as

\frac{\partial g}{\partial y_i} = \frac{\partial g}{\partial x_i} \frac{\partial x_i}{\partial y_i} = \frac{\partial g}{\partial x_i} \frac{1}{r} x_i^{1-r}, \qquad \frac{\partial y_i}{\partial x_i} = r x_i^{r-1}    (5.1.3)
By substituting the intervening variables with the physical variables, the TANA function is

\tilde{g}_T(X) = g(X_2) + \sum_{i=1}^{n} \left. \frac{\partial g}{\partial x_i} \right|_{X_2} \frac{x_{i,2}^{1-r}}{r} \left( x_i^r - x_{i,2}^r \right)    (5.1.4)
The unknown nonlinearity index is determined by matching the function value at the previous design point; that is, r is numerically calculated so that the difference between the exact and approximate values of g at the previous point X1 is zero or minimized:

g(X_1) - g(X_2) - \sum_{i=1}^{n} \left. \frac{\partial g}{\partial x_i} \right|_{X_2} \frac{x_{i,2}^{1-r}}{r} \left( x_{i,1}^r - x_{i,2}^r \right) = 0    (5.1.5)
Therefore, r can be any positive or negative real number (not equal to zero).
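A small Python sketch of Eqs. (5.1.4) and (5.1.5) is given below (our own illustrative code, not the original implementation); the root of Eq. (5.1.5) is located here by a crude grid search, and the search range [-5, 5] and the toy function are assumptions of the sketch.

```python
import numpy as np

def tana(g, x1, x2, g2, grad2):
    """TANA surrogate, Eqs. (5.1.4)-(5.1.5): a first-order expansion at the
    current point X2 in intervening variables y_i = x_i^r, with the single
    nonlinearity index r chosen to match the exact function value at the
    previous point X1."""
    def residual(r):
        term = np.sum(grad2 * (x2 ** (1.0 - r) / r) * (x1 ** r - x2 ** r))
        return g(x1) - g2 - term                   # Eq. (5.1.5)

    # crude grid search for the root of Eq. (5.1.5); r = 0 is excluded
    candidates = [r for r in np.linspace(-5, 5, 2001) if abs(r) > 1e-3]
    r = min(candidates, key=lambda rr: abs(residual(rr)))

    def g_tilde(x):
        x = np.asarray(x, dtype=float)
        return g2 + np.sum(grad2 * (x2 ** (1.0 - r) / r) * (x ** r - x2 ** r))
    return g_tilde, r

# toy usage: g(x) = 1/x1 + 1/x2 is exactly linear in y = x^r for r = -1
g = lambda x: 1.0 / x[0] + 1.0 / x[1]
x1p, x2p = np.array([0.75, 1.25]), np.array([1.0, 1.0])
grad2 = np.array([-1.0, -1.0])                     # exact gradient of g at X2
gt, r = tana(g, x1p, x2p, g(x2p), grad2)
print(r, gt([0.9, 1.1]), g([0.9, 1.1]))            # r close to -1
```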
5.1.2 Improved Two-Point Adaptive Nonlinear Approximations (TANA1 & TANA2)

As mentioned earlier, TANA uses the same nonlinearity index for all design variables, and only the function value at the previous design point is matched to construct the approximation. In the improved versions, TANA1 and TANA2, both the function and derivative values of the two points are utilized to determine the nonlinearity indices, which are different for each design variable. The intervening variables are defined as

y_i = x_i^{p_i}, \quad i = 1, \ldots, n    (5.1.6)

where pi is the nonlinearity index for each design variable. The approximate function is assumed as
\tilde{g}_1(X) = g(X_1) + \sum_{i=1}^{n} \left. \frac{\partial g}{\partial x_i} \right|_{X_1} \frac{x_{i,1}^{1-p_i}}{p_i} \left( x_i^{p_i} - x_{i,1}^{p_i} \right) + \varepsilon_1    (5.1.7)
where ε1 is a constant representing the residue of the first-order Taylor approximation in terms of the intervening variables yi. Unlike the other two-point approximations, this approximation is expanded at the previous point X1 instead of the current point X2. The reason is that if the approximation were constructed at X2, the approximate function value would not be equal to the exact function value at the expansion point because of the correction term ε1. By differentiating Eq. (5.1.7), the derivative of the approximate function with respect to the ith design variable xi is written as
\frac{\partial \tilde{g}_1(X)}{\partial x_i} = \left( \frac{x_i}{x_{i,1}} \right)^{p_i - 1} \left. \frac{\partial g}{\partial x_i} \right|_{X_1}, \quad i = 1, 2, \ldots, n    (5.1.8)
From this equation, pi can be evaluated by letting the exact derivatives at X2 equal the approximate derivatives at this point:

\frac{\partial g(X_2)}{\partial x_i} = \left( \frac{x_{i,2}}{x_{i,1}} \right)^{p_i - 1} \left. \frac{\partial g}{\partial x_i} \right|_{X_1}, \quad i = 1, 2, \ldots, n    (5.1.9)
Eq. (5.1.9) provides n equations for the n unknown nonlinearity indices, which can be solved by using any numerical technique. Eq. (5.1.9) matches only the derivative values at the current point, so a difference between the exact and approximate function values at the current point may exist. This difference is eliminated by adding the correction term ε1 to the approximation. Then, ε1 is computed by matching the approximate and exact function values at the current point:
\varepsilon_1 = g(X_2) - g(X_1) - \sum_{i=1}^{n} \left. \frac{\partial g}{\partial x_i} \right|_{X_1} \frac{x_{i,1}^{1-p_i}}{p_i} \left( x_{i,2}^{p_i} - x_{i,1}^{p_i} \right)    (5.1.10)
TANA1 is simple to formulate, and more importantly, the approximate function
and derivative values are equal to the exact values at the current point. TANA2 uses the
same intervening variables used in TANA1. The approximate function is given as
\tilde{g}_2(X) = g(X_2) + \sum_{i=1}^{n} \left. \frac{\partial g}{\partial x_i} \right|_{X_2} \frac{x_{i,2}^{1-p_i}}{p_i} \left( x_i^{p_i} - x_{i,2}^{p_i} \right) + \frac{1}{2} \varepsilon_2 \sum_{i=1}^{n} \left( x_i^{p_i} - x_{i,2}^{p_i} \right)^2    (5.1.11)
The approximation is a second-order Taylor series expansion in which the Hessian matrix
has only diagonal elements of the same value ε2. As in TANA1, there are n+1 unknown
constants and they are obtained by using the following equations.
\frac{\partial g(X_1)}{\partial x_i} = \left( \frac{x_{i,1}}{x_{i,2}} \right)^{p_i - 1} \left. \frac{\partial g}{\partial x_i} \right|_{X_2} + \varepsilon_2\, p_i\, x_{i,1}^{p_i - 1} \left( x_{i,1}^{p_i} - x_{i,2}^{p_i} \right), \quad i = 1, 2, \ldots, n    (5.1.12)
g(X_1) = g(X_2) + \sum_{i=1}^{n} \left. \frac{\partial g}{\partial x_i} \right|_{X_2} \frac{x_{i,2}^{1-p_i}}{p_i} \left( x_{i,1}^{p_i} - x_{i,2}^{p_i} \right) + \frac{1}{2} \varepsilon_2 \sum_{i=1}^{n} \left( x_{i,1}^{p_i} - x_{i,2}^{p_i} \right)^2    (5.1.13)
From Eq. (5.1.12), the nonlinearity indices are determined, and the diagonal element of the Hessian matrix follows from Eq. (5.1.13). In the TANA2 method, the approximate function and derivative values are equal to the exact values at both points. Therefore, this approximation is more accurate than the others. The TANA method and its variations (TANA1 and TANA2) have been extensively used in truss, frame, plate, and turbine blade structural optimization and probabilistic design. The results presented in Refs. [21-26] demonstrate the accuracy and the adaptive nature of building the nonlinear approximation.
5.1.3 Numerical Example
A three bar truss (Fig. 5.1) presented by Haftka and Gurdal [68] is selected to
compare the accuracy of various approximations.
Figure 5.1 Three Bar Truss
The horizontal force p can act either to the right or to the left. The truss is designed subject to stress and displacement constraints, with the design variables being the cross-sectional areas A1 and A2. The stress of member C is required to be less than σ0 both in tension and compression. After defining normalized design variables, the constraint function for the stress in member C is written as

g(x) = 1 - \frac{\sigma_C}{\sigma_0} \ge 0    (5.1.14)
where x_1 = A_1 \sigma_0 / p and x_2 = A_2 \sigma_0 / p. As shown in Fig. 5.2, two design points are selected for the approximate functions. The function and derivative values at the design points are
X_1 = [0.75, 1.25]: \quad g(X_1) = 0.3785, \quad \frac{\partial g}{\partial x_1} = -0.7844, \quad \frac{\partial g}{\partial x_2} = 0.9679    (5.1.15)

X_2 = [1.00, 1.00]: \quad g(X_2) = -0.0226, \quad \frac{\partial g}{\partial x_1} = -0.2574, \quad \frac{\partial g}{\partial x_2} = 1.28    (5.1.16)
The following constants are obtained for the TANA approximations:

TANA: r = 1.5553    (5.1.17)
TANA1: p_i = [-2.8742, -0.2527], \varepsilon_1 = 0.0083    (5.1.18)
TANA2: p_i = [-0.5482, -0.3109], \varepsilon_2 = 2.6224    (5.1.19)
Figure 5.2 Two Design Points of the Three Bar Truss
To compare the approximations along a straight line, a design point is given as a function of t:

X = (0.5 + t) X_2 + (0.5 - t) X_1    (5.1.20)

In Fig. 5.3, the relative errors of the estimates from the various approximation methods are shown with respect to the value of t. The relative error is calculated as follows:

\text{Relative Error} = \frac{\text{Exact} - \text{Approximation}}{\text{Exact}}    (5.1.21)
Figure 5.3 Relative Error Plots of Various Approximation Methods
In Fig. 5.3, gL, gR, gC, and gQR denote the one-point linear, reciprocal, conservative, and quadratic reciprocal approximations, respectively. The approximations TANA, TANA1, and TANA2 are indicated by gT, gT1, and gT2. It is observed that the approximations have zero error at the current design point (t=0.5) and that TANA2 gives the most accurate results over a wide range of design points.
5.2 Coefficient Matrix-Based Reanalysis Techniques

In this section, a new reanalysis technique, the Successive Matrix Inversion (SMI) method, is developed with the capability of updating both the inverse of the modified stiffness matrix and the modified response vector. The SMI method is an improved version of the Sherman-Morrison and Woodbury (SMW) formulas [54, 55], but it originates from the binomial series expansion. The SMI method, as a direct and exact matrix solver, has a wider applicable range of modification than the other techniques, and its computational cost is significantly reduced. The updating processes for the modified inverse matrix and the modified response vector can be used in combination over many sequential reanalyses to reduce the overall computational cost.
On the other hand, over the last century, a number of iterative methods for solving
large and sparse linear systems have been developed. The most popular methods are
Conjugate Gradient (CG) type methods (CG, BiCG, CGS, BiCGSTAB) [51] and
Generalized Minimal Residual (GMRES) methods [52]. An excellent review of these
iterative methods can be found in the literature [53]. For a small change to the previous
system in a system reanalysis, the modified response can be obtained very efficiently
within a few iterations by using information from the previous analysis. However, when a
design change is arbitrarily large, the iterative solution converges very slowly, and it is even hard to predict whether the iterative solution will converge or not. Hence, it has been desirable to develop an efficient iterative solver that is combined with an exact solution
technique to alleviate the difficulties of iterative methods and to improve their
performance [53]. In this work, the Combined Iterative (CI) method in which an iterative
method is coupled with an exact matrix solver, the SMI method, is introduced. By
employing SMI, it is found that the convergence rate is accelerated, and even a divergent iteration can be brought to convergence. Additionally, in this work, a new iterative technique, the
Binomial Series Iterative (BSI) method, is developed from the binomial series expansion
by using the same technical concept as SMI.
5.2.1 Successive Matrix Inversion Method

In Finite Element Analysis (FEA), most of the computational cost is incurred in inverting or decomposing the stiffness matrix of an engineering structure to solve the equilibrium equations. In a sequential analysis, the target structure is changed with small modifications, so the main idea of reanalysis techniques is to regenerate the modified system response efficiently without another complete system analysis. The SMI method updates the inverse of the stiffness matrix by considering only the modified portion of the stiffness matrix for the reanalysis of the modified structure. Assume that the initial simulation given by Eq. (5.2.1) is performed using FEA:

[K_0]\{d_0\} = \{f\}    (5.2.1)

where [K_0] is the initial stiffness matrix, \{f\} is the force vector, and \{d_0\} is the initial response vector. From this initial analysis, the inverse of the stiffness matrix, [K_0]^{-1},
and the initial response vector \{d_0\} are available. In a sequential analysis, the structural design is changed as follows:
([K_0] + [\Delta K])\{d\} = \{f\}    (5.2.2)

where [\Delta K] is the stiffness modification matrix and \{d\} is the modified response vector. To evaluate the modified response vector, premultiplying Eq. (5.2.2) by [K_0]^{-1} gives

([I] + [K_0]^{-1}[\Delta K])\{d\} = [K_0]^{-1}\{f\}    (5.2.3)
We assume for convenience that the first m columns of [\Delta K] have non-zero elements. For Eq. (5.2.3), the binomial series is considered to obtain the inverse of ([I] + [K_0]^{-1}[\Delta K]) as follows:

([I] - [B])^{-1} = [I] + [B] + [B]^2 + [B]^3 + \cdots    (5.2.4)

where

[B] = -[K_0]^{-1}[\Delta K]    (5.2.5)

This series expansion is known variously as the binomial series expansion, the geometric series expansion, and the Neumann series expansion. However, there are some limitations [62] on using this series expansion directly to find the inverse of the matrix ([I] - [B])^{-1}:
1. A sufficient condition for the convergence of the series is that the spectral radius of the matrix [B] is less than unity.

2. The convergence can be quite slow in some cases.

Due to the first limitation, there is a valid bound on the amount of design modification allowed when using the series method. Even if the convergence criterion is satisfied, using more than three series expansion terms to find an inverse matrix might not be prudent from a computational cost point of view.
However, the inverse of the matrix ([I] - [B])^{-1} can be calculated at the element level of the infinite series expansion in order to alleviate the aforementioned problems. In Eq. (5.2.4), we define the matrix [P] for the [B] matrix series expansion terms, as shown in Eq. (5.2.6):

[P] = [B] + [B]^2 + [B]^3 + \cdots    (5.2.6)

The elements of [P] can be obtained as follows:

P_{ij} = B_{ij}^{(1)} + B_{ij}^{(2)} + B_{ij}^{(3)} + \cdots + B_{ij}^{(k)} + \cdots    (5.2.7)
where B_{ij}^{(k)} is the (i, j)th element of [B]^k in Eq. (5.2.6). The kth recursive factor in the element series expansion, r_{ij}^{(k)}, in terms of Eq. (5.2.7), is obtained as

r_{ij}^{(k)} = \frac{B_{ij}^{(k+1)}}{B_{ij}^{(k)}}    (5.2.8)
In the case where the recursive factor is constant through all of the series expansion terms, that is, r_{ij}^{(k)} = r_{ij}, Eq. (5.2.7) can be expressed as follows:

P_{ij} = B_{ij} (1 + r_{ij} + r_{ij}^2 + r_{ij}^3 + r_{ij}^4 + \cdots)    (5.2.9)

The P_{ij} term can be obtained by assuming that there exists an original equation for the series expansion of each B matrix element, as given in Eq. (5.2.9). The right side (the series expansion) of Eq. (5.2.9) is then transformed into the simple expression

P_{ij} = \frac{B_{ij}}{1 - r_{ij}}    (5.2.10)
However, in the general case, it is easily observed that the kth recursive factor, r_{ij}^{(k)}, is not the same as the neighboring recursive factors; that is, the recursive factor is not constant but variable through the series expansion. Hence, the transformation in Eq. (5.2.10) is not valid in general for obtaining the series solution.
However, the variability of the recursive factor in the series can be eliminated by decomposing the modified stiffness matrix into separate matrices as follows:

[\Delta K] = \sum_{j=1}^{N} [\Delta K^{(j)}]    (5.2.11)
where N is the total number of degrees of freedom in the structural model and [\Delta K^{(j)}] is the matrix that has non-zero elements only in the jth column. When [\Delta K^{(j)}] is considered with the definition in Eq. (5.2.5), the B matrix also has only jth-column elements. By calculating the series terms with this B matrix, it is easily observed that the recursive factor for the B matrix is nothing but the (j, j)th element of the B matrix, a constant value:

r = B_{jj}    (5.2.12)

Each element of [P] is then simply given as

P_{ij} = \frac{B_{ij}}{1 - r}    (5.2.13)
Due to the decomposed column vectors of [\Delta K], the inverse of the modified stiffness matrix is obtained by a successive inversion procedure using the following three equations:

\{B^{(j)}\} = -[K^{(j-1)}]^{-1} \{\Delta K^{(j)}\}    (5.2.14)

\{Bs^{(j)}\} = \{B^{(j)}\} / (1 - B_j^{(j)})    (5.2.15)

[K^{(j)}]^{-1} = [K^{(j-1)}]^{-1} + \{Bs^{(j)}\}\{Kb^{(j-1)}\}^T    (5.2.16)
where the superscript (j) indicates the successive step and the subscript j indicates the jth element of a vector. Furthermore, \{\Delta K^{(j)}\} is the jth column vector of [\Delta K], \{Kb^{(j-1)}\} is the jth row vector of [K^{(j-1)}]^{-1}, and the initial [K^{(0)}]^{-1} is given as [K_0]^{-1}. The required number of successive steps is the number of non-zero columns in [\Delta K]. Since the inverse of the modified stiffness matrix, ([K_0] + [\Delta K])^{-1}, is obtained in SMI, for the next modification [\Delta K_2] the inverse of the second modified stiffness matrix, ([K_0] + [\Delta K] + [\Delta K_2])^{-1}, can be obtained by setting ([K_0] + [\Delta K])^{-1} as the new initial inverse of the stiffness matrix. However, it is noted that most FEA solvers do not actually obtain the inverse matrix, and computational resources are wasted in repeatedly updating the whole inverse of the modified stiffness matrix unnecessarily.
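A compact numpy sketch of Eqs. (5.2.14)-(5.2.16) follows (our own illustrative code, not the original implementation); it verifies the successively updated inverse against a direct inverse on a small random system.

```python
import numpy as np

def smi_update_inverse(K0_inv, dK):
    """Successive Matrix Inversion, Eqs. (5.2.14)-(5.2.16): update the
    inverse of K0 for the modification dK, one non-zero column at a time."""
    K_inv = K0_inv.copy()
    for j in np.nonzero(np.any(dK != 0.0, axis=0))[0]:
        B = -K_inv @ dK[:, j]                        # Eq. (5.2.14)
        Bs = B / (1.0 - B[j])                        # Eq. (5.2.15)
        K_inv = K_inv + np.outer(Bs, K_inv[j, :])    # Eq. (5.2.16)
    return K_inv

# verification on a small, diagonally dominant random system
rng = np.random.default_rng(0)
K0 = np.eye(5) * 5.0 + rng.random((5, 5))
dK = np.zeros((5, 5))
dK[:, 1] = rng.random(5)
dK[:, 3] = rng.random(5)
print(np.allclose(smi_update_inverse(np.linalg.inv(K0), dK),
                  np.linalg.inv(K0 + dK)))          # True
```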
Therefore, for Eq. (5.2.3), we formulate another problem whose initial matrix is [I]. The modification matrix is [B] = -[K_0]^{-1}[\Delta K], and the right side of Eq. (5.2.3) is the initial response, \{d_0\}. The ultimate purpose of this formulation is to obtain the influence matrix, [S] = ([I] - [B])^{-1}, which updates the initial response to the modified response with respect to the given modification matrix, as shown in Eq. (5.2.17):

\{d\} = [S]\{d_0\}    (5.2.17)

As in the SMI procedure, by decomposing [B] into column vectors, the influence matrix is updated from the initial identity matrix by successive procedures with the following three equations:

\{B^{(j)}\} = [S^{(j-1)}]\{B^{(j)}\}    (5.2.18)

\{Bs^{(j)}\} = \{B^{(j)}\} / (1 - B_j^{(j)})    (5.2.19)

[S^{(j)}] = [S^{(j-1)}] + \{Bs^{(j)}\}\{Sb^{(j-1)}\}^T    (5.2.20)
where \{Sb^{(j-1)}\} is the jth row vector of [S^{(j-1)}]. Compared to the previous procedure (Eqs. (5.2.14)-(5.2.16)), this procedure is more cost-effective because only the influence matrix, which is initially [I], is updated successively, rather than the whole inverse of the stiffness matrix. The column vector of [B] must be altered by the influence matrix, as shown in Eq. (5.2.18), at each step. Note that the inverse of the modified stiffness matrix, ([K_0] + [\Delta K])^{-1}, can be obtained as [S][K_0]^{-1}. For the next modification, [\Delta K_2], the inverse of the second modified stiffness matrix, ([K_0] + [\Delta K] + [\Delta K_2])^{-1}, can be computed sequentially as [S_2][S][K_0]^{-1}. Hence, not only the modified response vector but also the inverse of the modified stiffness matrix can be tackled through the influence matrix. This means that the influence matrix makes it possible to perform sequential reanalyses.
However, close examination of the equations reveals that updating the [S] matrix in Eq. (5.2.20), which starts with an identity matrix, is also unnecessary. The jth column of [S^{(j)}] is filled with \{Bs^{(j)}\}, and the previously updated (j-1) columns in [S^{(j-1)}] are updated due to the jth column \{Bs^{(j)}\}. Because of this updated influence matrix [S^{(j)}], at the next step the column vector of [B] is changed directly by a matrix-vector multiplication, as shown in Eq. (5.2.18), which is a kind of simultaneous superposition operation for the jth column of [B]. However, if a successive vector-updating scheme is employed instead of Eqs. (5.2.18) and (5.2.20), then the process of updating the influence matrix, Eq. (5.2.20), can be avoided. Since the successive scheme updates only vectors, no additional cost is required beyond the computation cost of Eq. (5.2.18). Thus, to save the unnecessary computational cost in the SMI procedure, the updating of the influence matrix, Eq. (5.2.20), is skipped, and a new influence vector storage matrix and a new vector-updating operator are introduced.
The influence vector storage matrix, [P], which eventually becomes an N×m matrix for an m-column modification, starts empty. At the first updating step, the first column vector of [B] is not changed, because [P] is empty, and the manipulated vector, \{Bs^{(1)}\}, is stored in the first column of [P]. At the next stage, the
second column vector of [B] is updated by the influence matrix, [P], as follows,
\{B^{(2)}\} = \{B^{(2)}\} + [P^{(1)}] B_2^{(2)}    (5.2.21)
The vector \{Bs^{(2)}\} = \{B^{(2)}\}/(1 - B_2^{(2)}) is stored as the second column of [P]. At the jth stage, the influence vector storage matrix is an N×(j-1) matrix, and the jth column vector of [B] is updated sequentially with the (j-1) columns of [P], one by one, as follows:

\{Br^{(k+1)}\} = \{Br^{(k)}\} + Br_k^{(k)} \{P^{(k)}\}, \quad k = 1, \ldots, j-1    (5.2.22)
where \{Br^{(1)}\} is \{B^{(j)}\}, and \{Br^{(j)}\} is the updated vector \{B^{(j)}\}, which is the same as the vector from Eq. (5.2.18). For convenience, the operation of Eq. (5.2.22) is expressed from now on with a new successive vector-updating operator, U, as follows:

\{B^{(j)}\} = \mathop{U}_{i=1}^{j-1} [P] \{B^{(j)}\}    (5.2.23)
After the computation of Eq. (5.2.23), the scaled vector \{B^{(j)}\}/(1 - B_j^{(j)}) is simply stored in the jth column of [P]. In summary, at the jth stage, \{B^{(j)}\} is updated sequentially as shown in Eq. (5.2.23), instead of by the simultaneous superposition operation shown in Eq. (5.2.18). Then the influence vector storage matrix [P] simply stores the manipulated vector in the corresponding column and becomes a matrix of size N×j, without the updating procedure of Eq. (5.2.20). The procedure for the proposed SMI method is shown in Fig. 5.4.
Figure 5.4 Successive Matrix Inversion (SMI) Algorithm for m Columns Modification
Finally, after obtaining [P] (of size N×m) for all non-zero columns of [B], the modified response vector is obtained as follows:

\{d\} = \mathop{U}_{i=1}^{m} [P] \{d_0\}    (5.2.24)

(Fig. 5.4 summarizes the algorithm: given [B] = -[K_0]^{-1}[\Delta K] and an empty [P], for j = 1 to m, where m is the number of non-zero columns of [B], update \{B^{(j)}\} = U_{i=1}^{j-1}[P]\{B^{(j)}\}, compute \{P^{(j)}\} = \{B^{(j)}\}/(1 - B_j^{(j)}), and store it in [P], which becomes an N×m matrix.)
When the non-zero columns of [\Delta K] are scattered randomly, a vector holding the locations of the non-zero columns in [\Delta K] may be needed and accounted for in the SMI procedure.
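The vector-updating form of the algorithm in Fig. 5.4 can be sketched as follows (again our own illustrative code on a small random system); only influence vectors are stored, and neither [S] nor the updated inverse is formed.

```python
import numpy as np

def smi_response(K0_inv, dK, d0):
    """Vector-updating SMI (Fig. 5.4): accumulate influence vectors in [P]
    per Eqs. (5.2.21)-(5.2.23) and update only the response, Eq. (5.2.24)."""
    B = -K0_inv @ dK                       # [B] = -[K0]^-1 [dK], Eq. (5.2.5)
    P, pcols = [], []                      # influence vector storage [P]
    for j in np.nonzero(np.any(dK != 0.0, axis=0))[0]:
        Br = B[:, j].copy()
        for p, jc in zip(P, pcols):        # Eq. (5.2.22): successive updates
            Br = Br + Br[jc] * p
        P.append(Br / (1.0 - Br[j]))       # store the scaled vector in [P]
        pcols.append(j)
    d = d0.copy()
    for p, jc in zip(P, pcols):            # Eq. (5.2.24): d = U [P] d0
        d = d + d[jc] * p
    return d

# verification against a direct solve on a small random system
rng = np.random.default_rng(1)
K0 = np.eye(6) * 6.0 + rng.random((6, 6))
dK = np.zeros((6, 6))
dK[:, 2] = rng.random(6)
dK[:, 4] = rng.random(6)
f = rng.random(6)
d0 = np.linalg.solve(K0, f)
print(np.allclose(smi_response(np.linalg.inv(K0), dK, d0),
                  np.linalg.solve(K0 + dK, f)))   # True
```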
5.2.2 Some Computational Issues of SMI in Engineering Applications

The computational cost of SMI, expressed by the number of floating point operations (flops), is compared with that of a popular direct method, LU decomposition. One flop is approximately the work required to compute one addition and one multiplication. Since the SMI method gives an exact solution for both symmetric and non-symmetric modification matrices, the LU decomposition method, rather than the Cholesky decomposition method, is selected as the direct complete-analysis method against which the efficiency of SMI is compared. For an N×N matrix, the LU decomposition method requires (2/3)N^3 flops to solve the system, which may be modified with any rank size. With the proposed SMI method, by contrast, the computational cost depends on the size of the modified rank of the initial stiffness matrix. From Eq. (5.2.23), the cost of the proposed SMI method is about (1/2)m(m-1)N flops for an m-column modification of the stiffness matrix. Fig. 5.5 shows the ratio of the computational cost of SMI to that of LU decomposition. It is found from Fig. 5.5 that the reanalysis cost for a 50% rank modification of the initial stiffness matrix is less than 20% of the complete analysis cost using LU decomposition. It is noted that even for a full modification of the N×N stiffness matrix, the SMI method is more efficient than the conventional LU decomposition method, with about 25% cost savings, and besides, there is no pivoting procedure in SMI.
Figure 5.5 Relative Computational Cost Ratios of SMI to LU Decomposition
In many practical engineering structural problems, computational methods such as
FEM lead to a sparse matrix, which has a small number of non-zero elements. Most
direct solvers use decomposition methods; even when the stiffness matrix is very sparse, it is very hard to take advantage of the sparseness because of the unpredictable fill-in during the process of decomposition. However, in the proposed SMI method, the
sparseness of the stiffness matrix can be considered more explicitly to obtain the
computational benefit. Since the modification stiffness matrices, [∆K], in engineering
structures are usually very sparse, the cost of obtaining [B] is ignored throughout this
work. For a specially structured matrix such as a diagonally banded stiffness matrix, the
influence vector storage matrix, [P], can be obtained by efficient systematic computations
due to the pattern of sparseness in [K].
In a specific analysis, such as reliability analysis, design optimization, and so on,
many simulations may be required for sequential modifications to a target structure. By
using SMI, the sequential reanalysis can be performed by accumulating the influence
vectors sequentially in [P]. For example, the first modified response, d1, is obtained
from the initial response, d0, and [P] for the m1 rank modification as follows,
$d_1 = \mathop{U}_{i=1}^{m_1}[P]\;d_0$    (5.2.25)
For the next modification, $[\Delta K]_2$ with $m_2$ non-zero columns, the additional influence
vectors are accumulated in the previous [P] matrix. The second modified response, d2,
can be computed as follows,
$d_2 = \mathop{U}_{i=m_1+1}^{m_1+m_2}[P]\;d_1 = \mathop{U}_{i=1}^{m_1+m_2}[P]\;d_0$    (5.2.26)
Hence, for the kth sequentially modified system with $m_k$ modified columns, only $m_k$ additional updating processes of Eq. (5.2.23) are required, using the kth initial [P] matrix, whose size becomes N×($m_1+m_2+\cdots+m_{k-1}$). However, as the number of columns of [P] grows over sequential modifications, the cumulative cost of the updating process in Eq. (5.2.23) grows quadratically. Therefore, in the case of many sequential reanalyses with small
change fractions, the intermediate process of updating the inverse of the modified
stiffness matrix can reduce the overall computational cost by decreasing the sequential
updating processes in Eq. (5.2.23). For example, when Tn sequential reanalyses are
required with q modification ratio to N, that is, mk (= q×N) independent modified columns
to the previous columns, the total cost (flops) of Tn sequential reanalyses is as follows,
$\mathrm{SSMI\_cost} = \sum_{j=1}^{T_n}\left[\,\sum_{i=1}^{qN}(i-1)N + (j-1)(qN)^2 N\,\right]$    (5.2.27)
On the right-hand side of the above equation, the first term is the cost of finding [P] for the qN modified columns and the second term is the cost of updating B in Eq. (5.2.23) with the previously stored updating vectors in [P]. On the other hand, if the inverse stiffness matrix is updated $d_n$ times in the $T_n$ sequential reanalyses, the total cost (flops) is computed as follows,
$D = \dfrac{T_n}{d_n + 1}$    (5.2.28)

$\mathrm{pSMI} = \left\{\,\sum_{j=1}^{D}\left[\,\sum_{i=1}^{qN}(i-1)N + (j-1)(qN)^2 N\,\right]\right\}\times(d_n + 1)$    (5.2.29)

$\mathrm{UK} = D\,qN^3\,d_n$    (5.2.30)

$\mathrm{TSMI\_cost} = \mathrm{pSMI} + \mathrm{UK}$    (5.2.31)
where pSMI is the cost of dn+1 SMI procedures with D sequential reanalyses at each
procedure and UK is the cost of updating the inverse of the stiffness matrix dn times. The
marginal number of sequential reanalyses, Tm, for SSMI_cost against TSMI_cost can be
found by solving the following problem
$\left(\mathrm{SSMI\_cost} - \mathrm{TSMI\_cost}\right)\Big|_{T_n = T_m} = \dfrac{3\,q\,T_n\,d_n\,(q\,T_n - 2)}{4\,(d_n + 1)}\times(\mathrm{LU\ decomposition\ cost}) = 0$    (5.2.32)
It is obvious from the above equation that the marginal $T_m$ is 2/q. The marginal $T_m$ indicates that, in the sense of overall cost savings, it is better to employ the process of updating the stiffness matrix inverse for more than $T_m$ sequential reanalyses. For more than $T_m$ sequential reanalyses, the minimum TSMI_cost is obtained with a sufficiently large $d_n$ as follows:
$\mathrm{Minimum\ TSMI\_cost} = \lim_{d_n \to T_n,\; N \to \infty} \mathrm{TSMI\_cost} = \dfrac{3\,q\,(q+2)\,T_n^2}{4\,(T_n + 1)}\times(\mathrm{LU\ decomposition\ cost})$    (5.2.33)
For example, when q=0.01, one million simulation results can be obtained with only
about the cost of 15,000 complete analyses by using SMI, i.e., 1.5% of the cost of a
complete solver using LU decomposition. In other words, for one complete analysis cost,
about 66 simulation results can be obtained by SMI for q=0.01.
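The break-even behavior above can be reproduced with a small cost model; the sketch below is a hedged reading of Eqs. (5.2.27)-(5.2.31), with all names invented for illustration and D rounded to an integer:

```python
def smi_sequential_cost(Tn, q, N, dn=0):
    """Flops for Tn sequential reanalyses, each modifying qN columns,
    with the inverse stiffness matrix updated dn times (dn=0: pure SSMI)."""
    m = int(q * N)
    find_P = sum((i - 1) * N for i in range(1, m + 1))   # per-stage [P] cost
    if dn == 0:                                          # Eq. (5.2.27)
        return sum(find_P + (j - 1) * m * m * N for j in range(1, Tn + 1))
    D = Tn // (dn + 1)                                   # Eq. (5.2.28)
    pSMI = (dn + 1) * sum(find_P + (j - 1) * m * m * N
                          for j in range(1, D + 1))      # Eq. (5.2.29)
    UK = D * q * N**3 * dn                               # Eq. (5.2.30)
    return pSMI + UK                                     # Eq. (5.2.31)

lu_cost = (2 / 3) * 1000**3                              # one complete LU solve
# q = 0.01, N = 1000: updating the inverse pays off beyond Tm = 2/q = 200
print(smi_sequential_cost(10_000, 0.01, 1000) / lu_cost)            # ~7500
print(smi_sequential_cost(10_000, 0.01, 1000, dn=9_999) / lu_cost)  # ~150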
5.2.3 Numerical Examples
5.2.3.1 Plane Truss
The plane truss shown in Fig. 5.6 has 30 rod elements, each with an elastic modulus of $E = 10^7$ psi and an initial uniform cross-section area of $A_0 = 5.0\ \mathrm{in}^2$.
Figure 5.6 Plane Truss Structure (L = 2160 in, H = 360 in; six 20,000 lb loads; response monitored at node 7)
A design optimization can be performed to minimize the mass of the structure by setting
the cross section areas of the rod elements as design variables. The maximum
displacement at node 7, as shown in Fig. 5.6, can be considered as a design constraint. In
most gradient-based optimization techniques, there are two major steps in every
optimization iteration: finding a search direction and performing a one-dimensional
search for a step size. First, for the search direction, sensitivity information of the
objective and constraint functions is usually utilized. In this example, the sensitivity
analysis of the constraint function with respect to the design variables involves structural
simulations for the specified response, the displacement at node 7. If a non-intrusive
sensitivity technique such as the Finite Difference Method (FDM) is employed, at least
30 additional simulations are required to obtain sensitivity information in each
optimization iteration. However, by using the SMI method, the sensitivity information
can be efficiently calculated with half the cost of one simulation as follows:
$\dfrac{\partial\,\mathrm{Disp}_{node\,7}}{\partial x_i} = \left[\;-2.27\ \ 0.00\ \ -2.47\ \ -0.21\ \ -0.21\ \ -1.45\ \ \cdots\ \ -0.20\ \ -0.10\;\right]^{T}$    (5.2.34)
where xi denotes a design variable, Ai. Since each design variable makes changes in about
16% of the ranks of the stiffness coefficient matrix, the computational cost of SMI is
about 1.5% of the complete simulation cost using the popular LU decomposition
technique, as shown in Fig. 5.5. This means that the cost of the FDM-based sensitivity analysis is reduced from the equivalent of 31 complete simulations to about 1.5.
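A hedged sketch of this FDM-plus-SMI sensitivity computation follows; dK_of is a hypothetical callback returning the stiffness change caused by perturbing one design variable, and smi_solve is the sketch given earlier:

```python
import numpy as np

def fdm_sensitivity(K0, f, dK_of, n_dv, dx=1e-4):
    """Forward-difference response sensitivities where every perturbed
    solve is an SMI reanalysis instead of a complete analysis."""
    d0 = np.linalg.solve(K0, f)                   # one complete analysis
    grads = []
    for i in range(n_dv):
        d_pert = smi_solve(K0, dK_of(i, dx), f)   # cheap low-rank reanalysis
        grads.append((d_pert - d0) / dx)
    return np.array(grads)
```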
5.2.3.2 The Application of SMI to Reliability Analysis Using a Sampling Technique
The Intermediate Complexity Wing (ICW) structure shown in Fig. 5.7 is selected
to demonstrate the efficiency of the proposed SMI method in reliability analysis. The
metallic structural model of ICW is a representative wing-box structure for a fighter
aircraft. There are 62 quadrilateral membrane elements for upper and lower skins and 55
shear elements for eight ribs and three spars. Structural reliability analysis is performed to
determine the probability of failure of a structure with a limit-state function in which a
required performance of a target structure is defined. The limit-state function (G)
separates the design space into failure and safe regions.
$G(X) > 0,\quad X \in$ failure region    (5.2.35)
$G(X) = 0,\quad X \in$ failure boundary surface    (5.2.36)
$G(X) < 0,\quad X \in$ safe region    (5.2.37)
where X (∈ℜn) is a vector of uncertain parameters in the structural design, including
random loads, uncertain geometric dimensions, material properties, and so on. Each
uncertain parameter is assumed to have an independent Probability Density Function
(PDF). With the limit-state function, the probability of failure (Pf) is computed as
$P_f = \displaystyle\int_{G(X)>0} p(X)\,dX$    (5.2.38)
where p(X) is the joint probability density function of X. In engineering structural
reliability applications, numerical methods, such as the Monte Carlo Simulation (MCS)
[7], can generally be performed to evaluate the multiple integration in Eq. (5.2.38). The
crude MCS can be expressed as follows:
$\hat{P}_f = \dfrac{1}{n}\sum_{i=1}^{n} I\left[\,G(X_i) > 0\,\right]$    (5.2.39)
where $X_i$ indicates a realization of the random parameters from the given PDFs, $\hat{P}_f$ represents the crude MCS estimator of the failure probability, and n is the total number of MCS samples.
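As a minimal illustration of this estimator, the sketch below computes a crude MCS failure probability; sample_X is a hypothetical sampler for the random vector X, and in the SMI setting each evaluation of G would internally be a reanalysis rather than a complete solve:

```python
import numpy as np

def crude_mcs(G, sample_X, n=100_000, seed=0):
    """Crude Monte Carlo estimate of Pf = P[G(X) > 0], Eq. (5.2.39)."""
    rng = np.random.default_rng(seed)
    hits = sum(G(sample_X(rng)) > 0 for _ in range(n))   # I[G(X_i) > 0]
    return hits / n
```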
Figure 5.7 Design Variables (βi) for Elements Under Uncertainty in the Elastic Modulus (β1–β5 assigned to regions of the upper skin, the ribs and spars, and the lower skin)
In this example, the proposed SMI method is applied to MCS to demonstrate its
applicability to sequential repetitive reanalyses in structural reliability analysis. In Fig. 5.7, five scale factors (β1~β5) of the elastic modulus ($E = 1.05\times10^{7}$ psi) at different local parts
of ICW are defined as random variables to describe a locally damaged situation. In MCS,
samples are obtained from the Cartesian product of the samples of each random variable
generated from each PDF. To describe the sequential procedure of MCS, a simple case
that has only two random variables (β1 and β2) is shown in Fig. 5.8 as an example.
Figure 5.8 Sequential Computation Procedure of the SMI Method in Monte Carlo Simulation for Two Probabilistic Variables (β1 and β2): a) first stage reanalysis, b) second stage reanalysis
For the first random variable (β1), first stage reanalyses are performed from the
initial design for the selected n1 samples from the PDF of β1, as shown in Fig. 5.8a. Then,
as shown in Fig. 5.8b, for the n2 samples of the second variable (β2) from the given PDF,
second stage reanalyses are performed by considering [P] from each first stage reanalysis
of the first random variable. It is noted that as shown in Fig. 5.8a, the total computational
cost of the first stage is only the number of samples (n1) times the cost of SMI for the β1
modification, which is about 0.68% of the complete analysis cost. The total cost of the
second stage for β2 modifications is the total number of second stage samples (n1×n2)
times the cost for sequential SMI, which can be obtained from Eq. (5.2.29). The total cost
is about 2.8% of the complete analysis cost. This means that if both n1 and n2 are 100,
then the results of 10,000 simulations are obtained while incurring only the cost of less
than three complete analyses through SMI. For the current example with five random
variables, SMI is applied in the same sequential way as in the case of two random
variables. In this ICW example, the limit-state function is as follows:
$G(\beta_1, \beta_2, \beta_3, \beta_4, \beta_5) = \dfrac{\mathrm{Disp}_{tip}(\beta_1, \beta_2, \beta_3, \beta_4, \beta_5)}{11.5\ \mathrm{in}} - 1$    (5.2.40)
where β1 = Normal [0.9, 0.1] (5.2.41)
β2 = Uniform [0.7, 1.0] (5.2.42)
β3 = Uniform [0.8, 1.0] (5.2.43)
β4 = Normal [0.8, 0.1] (5.2.44)
β5 = Normal [0.7, 0.1] (5.2.45)
To obtain the failure probability for the displacement response of ICW, MCS is
performed with 25 samples of each random variable and, as a result, gives about 0.42%
failure probability. In this MCS, the total number of simulations is about 10 million.
However, through the sequential reanalyses using the proposed SMI method, the
computational cost of MCS is reduced to about 6.5% of the cost of using complete
analyses without reducing the total number of samples. Moreover, the successive SMI analyses for each successive random variable can be assigned to separate computers in an efficient parallel computation scheme.
5.3 Combined Iterative Technique
Over the last century, a number of iterative methods for solving large and sparse
linear systems have been developed. The most popular methods are Conjugate Gradient
(CG) type methods [51] and Generalized Minimal Residual (GMRES) methods [52].
Since the modification in a reanalysis problem is usually small, these iterative methods
can be used as efficient tools for a system reanalysis by utilizing the information from the
previous analysis. That is, the inverse stiffness coefficient matrix of the previous system
can be selected as a preconditioner to speed up the convergence, and the previous
response vector can be used as an initial solution vector in the iterative procedures. For a
small change to the previous system, the modified response can be obtained very
efficiently within a few iterations by using information from the previous analysis.
Unfortunately, when a design change is large, the iterative solution converges very slowly, and it is even hard to predict whether it will converge at all. However, in spite of these numerical difficulties, the use of iterative
methods is increasing in practical applications due to several important benefits (in terms
of computing time and computer storage). Therefore, it is desirable to develop an efficient iterative solver combined with an exact solution technique to alleviate the difficulties of iterative methods and to improve their performance [53].
Figure 5.9 Combined Iterative (CI) Method: the pros and cons of direct methods (exact solutions, inexpensive for local modifications, but costly for global changes) and of iterative methods (O(N²) cost, small storage, sparsity exploitation, but slow and unguaranteed convergence) are combined into a partially direct solution (SMI) plus an iterative procedure with adjustable accuracy, small storage, and low cost
It is the main objective of this section to propose the Combined Iterative (CI)
method with an exact matrix solver, SMI. As shown in Fig. 5.9, by combining the
techniques from direct and iterative methods, we can expect a more robust and efficient
solver for a linear system reanalysis. The SMI method, which originated from the
binomial series expansion, requires computational cost proportional to the amount the
system is changed. Also, the SMI method makes it possible to perform a sequential
reanalysis for both symmetric and non-symmetric coefficient matrices by employing an
Influence Vector Storage (IVS) matrix and a Successive Vector-Updating (SVU)
operator. Even for a full-rank modification in a non-symmetric coefficient matrix, the
SMI method performs better than the popular LU decomposition method. The IVS matrix can be obtained partially for any part of the given
modification, and the intermediate solution for the partial modification can be calculated
exactly by using the SVU operator. Hence, the SMI method can be applied to only certain
parts of the whole modification so that the numerical properties for an iterative process
with the rest of the modification are effectively improved.
The IVS matrix obtained for a certain part of a whole modification from SMI can
be used as a successive preconditioner in an iteration procedure for the rest of the
modification. It is found that the convergence rate is accelerated, and even a solution that diverges under other stationary iterative methods can be made to converge with the SMI method.
Additionally, in this work, a new iterative technique, the Binomial Series Iterative (BSI)
method, is developed from the binomial series expansion by using the same concept as
SMI. Since the BSI method is also valid for non-symmetric cases, the performance of
BSI is compared with that of the most advanced iterative method, BiCGSTAB [51]. The
CI method from SMI and BSI shows improved efficiency and robustness through a stable
iterative behavior due to simple and straightforward computations in its procedure.
5.3.1 Combined Iterative (CI) Method with SMI
In practice, the design of an engineering structure can involve the modification of
the entire structure. For an overall modification with a small degree of change, iterative
methods can be applied more efficiently than the SMI method, because the reanalysis
cost of using SMI is fixed by the modified rank ratio in a coefficient matrix. Popular
iterative techniques for the structural reanalysis problem include preconditioned iterative
Krylov-type methods, such as the Conjugate Gradient (CG) type methods (BiCGSTAB
[51] and GMRES [52]).
Iterative methods use successive approximation to obtain an accurate solution in a
structural reanalysis. In most reanalyses of design optimization procedures, the given
modification is less than the previous system, that is, the norm of [K] is larger than that of
[∆K]. In those cases, an iterative procedure to find the modified response can be started
by using the previous response as an initial iterative solution, and the inverse of the
previous stiffness matrix, [K0]-1, as a preconditioner to the modified linear system to
speed up the solution convergence as follows:
$[K_0]^{-1}[K]\;d = [K_0]^{-1} f$    (5.3.1)
When the modification is very small, the preconditioned system, $[K_0]^{-1}[K]$, is close to
the identity matrix, [I], and the modified solution can be found in a few iterations. The
modification is generally given by sensitivity information of interest in a design
optimization. In every iteration of an optimization procedure, the sensitivity information
for objective and constraint functions with respect to design variables is changed to
improve the current design. Among the defined design variables of a design optimization,
typically there are major and minor contributing variables which impose large and small
modifications on the current structural design based on the sensitivity information. It is
obvious that the major contributing design variables have a large influence on the
numerical properties of the iterative procedure.
Figure 5.10 Separating [∆K] Into the Parts for SMI and an Iterative Method
In the iterative procedure, the SMI method can be efficiently employed to
improve the numerical properties for better convergence with minimum computational
cost. The basic idea is that when an arbitrary [∆K] is given, we separate the major
contributing part to the numerical condition from [∆K] and apply SMI to the major part
and an iterative method to the rest of [∆K], as shown in Fig. 5.10. The part of [∆K] to
which SMI is applied is denoted as [∆K]smi and the rest is denoted as [∆K]iter, which is
handled by an iterative procedure. To maximize the efficiency of SMI when improving
the numerical condition of an iterative process, the extremal eigenvalues of the
preconditioned system, which are related to the spectral radius, should be eliminated. In
this work, the numerical properties are improved by applying SMI for the columns of [B],
which have the largest diagonal elements. After obtaining the IVS matrix for [∆K]smi, the
preconditioned linear system for the reanalysis is given as follows:
$[M]_{smi}^{-1}[K]\;d = [M]_{smi}^{-1} f$    (5.3.2)
where [M]smi is a preconditioner augmented with the SMI method as
$[M]_{smi}^{-1} = \mathop{U}_{i=1}^{m}[P]\,[K_0]^{-1}$    (5.3.3)
After obtaining the iterative solution with [∆K]iter first, the modified response can be
transformed by the [P] matrix, which is computed for [∆K]smi from SMI as follows:
$d = \mathop{U}_{k=1}^{m}[P]\;d_{iter}$    (5.3.4)
Generally, since the SMI method is also valid for a non-symmetric matrix,
BiCGSTAB which combines BiCG with repeated GMRES is employed for the
preconditioned linear system in this work. The convergence behavior of the CG-like
methods is known to depend on the distribution of the extremal eigenvalues of the matrix $[M]_{smi}^{-1}[K]$. This is because CG tends to eliminate components of the error in the
direction of eigenvectors associated with extremal eigenvalues successively. The fast
convergence rate can be obtained as the condition number of the linear system, which is
the function of the extremal eigenvalues, becomes smaller in each iteration. By using the
augmented preconditioner with SMI in a system reanalysis, the iteration method can start
with improved numerical properties of the system and show better performance.
Moreover, the SMI method can also be applied to an initial complete analysis
with any preconditioning iterative technique, such as Incomplete LU (ILU)
decomposition. The partial matrix that is accounted for by the ILU decomposition is accepted as an initial matrix, and the remaining part that is not addressed by the ILU decomposition is treated as a given modification matrix, [∆K]. As described previously,
the SMI method can be used for a certain [∆K]smi, which is selected from [∆K] to improve
the numerical condition of the iterative system.
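One way to realize the preconditioned system of Eqs. (5.3.2) and (5.3.3) is sketched below with SciPy's BiCGSTAB, applying the SMI-augmented preconditioner as a LinearOperator. The splitting of [∆K] into [∆K]smi and [∆K]iter is assumed to be given, and the helper names are illustrative:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import LinearOperator, bicgstab, splu

def ci_solve(K0, dK_smi, dK_iter, f):
    """CI sketch: SMI influence vectors for [dK]_smi augment the
    [K0]^-1 preconditioner for a BiCGSTAB solve of the full system."""
    N = K0.shape[0]
    lu0 = splu(csc_matrix(K0))                  # factor [K0] once
    cols = [j for j in range(N) if np.any(dK_smi[:, j])]
    P = np.zeros((N, len(cols)))                # SMI step on [dK]_smi
    for k, j in enumerate(cols):
        b = -lu0.solve(np.asarray(dK_smi[:, j]))
        for i in range(k):
            b += b[cols[i]] * P[:, i]
        P[:, k] = b / (1.0 - b[j])

    def apply_U(v):                             # successive vector updating
        v = v.copy()
        for k, j in enumerate(cols):
            v += v[j] * P[:, k]
        return v

    # [M]_smi^-1 v = U [P] [K0]^-1 v, Eq. (5.3.3)
    M = LinearOperator((N, N), matvec=lambda v: apply_U(lu0.solve(v)))
    K = K0 + dK_smi + dK_iter                   # fully modified system
    d, info = bicgstab(K, f, M=M)               # preconditioned solve, Eq. (5.3.2)
    return d, info
```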
5.3.2 Binomial Series Iterative (BSI) Method
A new iterative method, the Binomial Series Iterative (BSI) method, based on the binomial series expansion, is developed in this work as an efficient and robust iterative reanalysis method. In an optimization procedure of an
engineering structural design, the modification is usually smaller than the previous design.
This means that the spectral radius of the linear system is usually less than unity. In those
cases, the binomial series solution always converges. The BSI method is developed to
compute the binomial series expansion efficiently as in Eq. (5.3.5).
$d_{i+1} = d_1 + [B]\;d_i$    (5.3.5)
where d1 is the previous response vector, d0. The B matrix can be replaced by [B]iter
if the SMI method is applied to improve the numerical condition before the iterative
procedure.
Figure 5.11 Successive Predicting Process of the BSI Method: a) nonlinear and constant recursive terms (response vs. iteration), b) successive predicting process
In Fig. 5.11a, it is shown that an element of the d vector is converged to its true
value as the iteration number of Eq. (5.3.5) is increased. However, it might be tedious and
computationally expensive to find a converged solution from the iterative procedure,
which requires a matrix and vector multiplication in every iteration. In special cases in
the binomial series expansion, a constant recursive vector, $rd_c$, for a response vector can
be found, and the converged solution dc can be computed with the following simple
equation:
$d_c = d_1\,./\,(1 - rd_c)$    (5.3.6)
Unfortunately, the recursive vector usually is not a constant vector, and Eq.
(5.3.6) cannot be used directly. It is found that when the series has a converged solution,
the recursive term in each element of the iterative solution is also converged to a constant
after showing nonlinear behavior for some number of iterations. Hence, as shown in Fig.
5.11a, the iteration history is divided into a nonlinear recursive part and a constant
recursive part. The converged solution can be obtained efficiently by reducing the
computational cost for the constant recursive part as follows:
$dc = d_1 + \sum_{i=1}^{n-2} sd_i + sd_{n-1}\,./\,(1 - rd_{n-1})$    (5.3.7)
where n is the number of iterations for finding the converged recursive vector, and the
two vectors, sd and rd, are defined as follows:
$sd_i = d_{i+1} - d_i$    (5.3.8)
$rd_i = sd_{i+1}\,./\,sd_i$    (5.3.9)
The second and the third terms in the right hand side of Eq. (5.3.7) denote the
nonlinear recursive part and the constant recursive part, respectively. To accelerate this
procedure in finding an acceptable solution, the converged solution is predicted
successively with an approximated recursive vector using Eq. (5.3.7). The approximated
recursive vector is obtained with a minimum number of iterations (m), which is at least
three, and gives better accuracy than previous solutions without verifying the
convergence of the recursive vector. If the predicted solution is not satisfactory, the
predicting procedure is repeated by using the current predicted solution as an initial
vector, as shown in Fig. 5.11b. Hence, there is a main loop in the BSI method for
computing the predicted converged solution and an inner loop for obtaining the
approximated recursive vector.
To simplify the computations in the jth predicting procedure, the computations of
Eq. (5.3.5) in the inner loop are rewritten as follows:
$d_{j,i+1} = d_{j,i} + r_{j,i}$    (5.3.10)

where $r_{j,i}$ is the residual vector, $r_{j,i} = d_0 - ([I]-[B])\,d_{j,i} = [B]\,r_{j,i-1}$. The recursive vector in the inner loop is obtained in terms of the residuals as

$rd_{j,i-2} = r_{j,i-1}\,./\,r_{j,i-2}$    (5.3.11)
From Eqs. (5.3.10) and (5.3.11), the j+1th predicted solution with the minimum iteration
number, m, is given as
$dc_j = d_{j,m-1} + r_{j,m-1}\,./\,(1 - rd_{j,m-2})$    (5.3.12)
The BSI procedure is shown in Fig. 5.12. The convergence is checked directly by
computing the norm of the residual vector.
Figure 5.12 BSI Method Flowchart
[Figure 5.12 flowchart: initialize $d_{1,1} = d_1$, $r_{1,1} = r_0$; inner loop over i: $d_{j,i+1} = d_{j,i} + r_{j,i}$, $r_{j,i+1} = [B]\,r_{j,i}$, $rd_{j,i-2} = r_{j,i-1}\,./\,r_{j,i-2}$; predict $dc_j = d_{j,i-1} + r_{j,i-1}\,./\,(1 - rd_{j,i-2})$; if $dc_j$ is improved, restart with $d_{j+1,1} = dc_j$ and $r_{j+1,1} = d_0 - ([I]-[B])\,dc_j$; stop when $r_{j,i+1}$ has converged, returning $d = d_{j,i+1}$.]
The minimum number of iterations for an approximated recursive vector in the jth prediction procedure can be determined by checking the residual with $dc_j$. This check can be performed approximately by comparing several selected elements of the residual vectors from $d_{j,i}$ and $dc_j$, instead of a full matrix-vector computation for the norm of the residual with $dc_j$. Thus, every inner loop requires one matrix-vector multiplication and two vector-vector multiplications. Since the
by vector multiplication and two vector by vector multiplications are required. Since the
BSI method does not build up orthogonal basis vectors, any possible breakdown or
stagnation, which is possible in Krylov subspace methods, can be avoided. The BSI
method, which has a stationary iterative procedure in the inner loop, can be used in both
symmetric and non-symmetric cases and shows stable convergence behavior. As
described in the previous section, the SMI method can be used to improve the numerical
properties of a linear system for the BSI method.
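A minimal sketch of the BSI recursions of Eqs. (5.3.10)-(5.3.12) for the fixed-point form d = d0 + [B]d is given below; the clipping guard on the recursive vector is an added safeguard for illustration and is not part of the method as described:

```python
import numpy as np

def bsi_solve(B, d0, tol=1e-10, m=3, max_outer=200):
    """BSI sketch: m stationary inner iterations, then a geometric-tail
    prediction of the converged solution from the residual ratio."""
    d = d0.copy()
    r = d0 - (d - B @ d)                 # residual of (I - B) d = d0
    r_prev = r
    for _ in range(max_outer):
        for _ in range(m - 1):           # inner loop, Eq. (5.3.10)
            d = d + r
            r_prev, r = r, B @ r         # residual recursion, Eq. (5.3.11)
        with np.errstate(divide="ignore", invalid="ignore"):
            rd = np.where(np.abs(r_prev) > 0, r / r_prev, 0.0)
        rd = np.clip(rd, -0.99, 0.99)    # illustrative safeguard only
        d = d + r / (1.0 - rd)           # predicted solution, Eq. (5.3.12)
        r = d0 - (d - B @ d)             # residual of the prediction
        r_prev = r
        if np.linalg.norm(r) < tol * np.linalg.norm(d0):
            break
    return d
```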
5.3.3 Numerical Examples
In this section, two examples of engineering structure reanalysis are presented to
demonstrate the efficiency and accuracy of the proposed methods.
5.3.3.1 Plane Truss
The plane truss shown in Fig. 5.6 is considered again. After obtaining the sensitivity
information from the previous example in section 5.2.3, a one-dimensional search is
usually performed in a design optimization. Suppose that the negative of the gradient
information (Eq. 5.2.34) is the feasible and usable search direction at the current stage.
The one-dimensional search is performed by changing the value of a step size α in the
following equation:
$x_{i+1} = x_i + \alpha\,S$    (5.3.13)
where xi is the current design and S is the search direction. For every value of α, the
constraint function, which involves structural simulations for the maximum displacement,
is evaluated repeatedly to find an appropriate α value in most optimization algorithms.
Figure 5.13 Iterative Results and the Improved Eigenvalue Distribution: a) iterative solution history (residual norm vs. iteration for BSI, BSI+SMI(1), BiCGSTAB, and BiCGSTAB+SMI(1)); b) eigenvalue distribution (eigenvalue vs. order for [B] and for [B] improved by SMI)
Since the one-dimensional search usually requires overall modification in the
stiffness coefficient matrix, the SMI method might not be so effective at reducing the
total computational cost. However, the structural system is modified based on the
previous structure, and the modification is usually smaller than the current structural
design.
It makes sense to use the information from the analysis of the previous structural
system for the modified structural analysis using the iterative methods, BiCGSTAB and
BSI. In the search direction vector, there are major and minor direction elements. The design variables of the major search directions have a major effect on the numerical condition of the linear system. Fig. 5.13a shows the iterative results of each method for
α=2.0 in Eq. (5.3.13). The spectral radius of the B matrix in this numerical example is
more than unity. Even though the series solution should theoretically diverge, the BSI
method gives a converged solution. This is because when a small number of extremal
eigenvalues is more than unity, the effect of the high spectral radius of the B matrix is
suppressed by the repeated prediction using the minimum number of inner loops in the
BSI method. However, the performance of the iterative methods improved significantly when combined with the SMI method as a Combined Iterative (CI) method. As shown in
Fig. 5.13b of the eigenvalue distributions of the B matrix with the SMI method, the
extremal eigenvalues are eliminated by using the SMI method so that the numerical
properties of the matrix are improved for the iterative methods. This explains the fast
convergence in iteration methods with the augmented preconditioner. The SMI method is
conducted at the cost of only one matrix-vector multiplication in this example, and it is indicated by "SMI(1)" in Fig. 5.13.
5.3.3.2 Intermediate Complexity Wing (ICW)
The metallic structural model of the ICW shown in Fig. 5.14 is a representative wing-
box structure of a fighter aircraft.
Figure 5.14 Intermediate Complexity Wing Structure Model and Design Variables: a) shape parameters ($t_1$–$t_6$) of the wing thickness, b) interpolated skin thickness $T(x, y)$
The tip displacement of the wing structure is considered as a target response. The
design variables are the shape parameters of the wing skin thicknesses, as shown in Fig.
5.14a. The skin thickness of an arbitrary location (x, y), which is symmetric between the
upper and lower skins on the wing, is obtained as shown in Fig. 5.14b by applying a
weighting function to the shape parameters as
$T(x,y) = \sum_{i=1}^{NDV} t_i\,W_i(x,y)$    (5.3.14)
where x and y indicate the rectangular coordinate system on the wing skin, NDV is the
number of design variables, ti is the ith design variable, and Wi is the weighting function
$W_i(x,y) = \dfrac{\phi_i}{\sum_{k=1}^{NDV}\phi_k}$    (5.3.15)
The weighting function, Wi, determines the contribution of ti to the thickness at the
location of interest. And, the blending function φ is the inverse of the distance between
the locations of interest and the shape parameters as follows:
$\phi_i = \dfrac{1}{h_i^{\gamma}}$    (5.3.16)
where h is the distance between the location of a design variable and the current location
and γ is a nonlinear index for the blending function (e.g. γ=2.0). Unlike the previous
plane truss example, here one design variable causes changes to all of the skin elements on the wing, so the SMI method, which is an exact method, might not be cost-efficient in sensitivity analysis using FDM. In this case, approximated responses can be
computed by employing iterative methods for the sensitivity analysis. It is obvious that
the additional solutions, which have enough accuracy for FDM, can be obtained in a few
iterations through preconditioned iterative procedures with information from the initial
analysis, because usually a small deviation of each design variable is imposed in FDM.
For a one-dimensional search with the following current design and direction
vectors, the results with α=0.7 from different iterative methods are shown in Fig. 5.15.
xT=[3.0000 3.0000 0.7500 3.0000 1.5000 1.1250]×10-2 (5.3.17)
ST=[2.9400 -0.1837 0.0735 0.5512 -0.3675 -0.0735]×10-2 (5.3.18)
Figure 5.15 Iterative Solution History of the CI Method (residual norm vs. iteration for BSI, BSI+SMI(1), BiCGSTAB, and BiCGSTAB+SMI(1))
Again, the performance of iterative methods combined with the SMI method is
improved for each iterative method. Fig. 5.16 shows the distributions of the eigenvalues
of the B matrix. After applying SMI to the B matrix, the extremal eigenvalues are
eliminated, and the band of eigenvalues becomes small. The SMI method is applied to the
B matrix at the cost of only one matrix-vector computation.
Figure 5.16 Improved Eigenvalue Distribution During Reanalysis Using the CI Method (eigenvalue vs. order for [B] and for [B] improved by SMI)
Even though the BSI method alone is not competitive with other CG-like methods,
by enhancing the numerical condition with SMI, the number of iterations of the BSI
method is significantly reduced to obtain an acceptable solution. However, the
BiCGSTAB method, whose convergence rate mainly depends on the separation of
extremal eigenvalues of the B matrix, is less sensitive to the value of the spectral radius
than the BSI method and obtains a relatively small benefit from the SMI method. As
shown in Fig. 5.15, the iterative behavior of the BSI method is usually more stable than
the CG-like methods because there is no computation that might involve a numerical
instability. Other iterative methods that are not mentioned in this work can also take
advantage of the SMI method.
In this chapter, the SMI method for minor modification in structures, which is
useful in gradient calculation, is presented. Also, the CI method is developed by coupling
a direct matrix method (SMI) with any iterative method. In the CI method, the numerical
conditions for a converged iterative solution are successfully improved by SMI.
Additionally, a new iterative technique, the BSI method, is also developed by using the
same technical concept as SMI.
6. Cost-Efficient Evidence Theory Algorithm
For multiple uncertain parameters in a structural system, a joint BBA structure,
which is similar to the joint probability density function in probability theory, is defined
by the Cartesian product of the combined BBA structures. The Belief and Plausibility
functions are calculated by comparing the range of system responses with the limit-state
value. The popular methods for computing those minimum and maximum are the
sampling method [7, 28, 29, 40] and the vertex method [39]. In the sampling method, the
simplest way is to assume a uniform PDF for each possible event. After generating a
desired sample population from the assumed PDF, those Belief and Plausibility functions
could be evaluated by simulating the target system for the limit-state function. If the
population is large enough, then the sampling method gives a robust result. However, it
requires extensive computational effort for repetitive simulations with FEA or CFD codes
and it could be inappropriate for other engineering design problems, such as the
sensitivity analysis in evidence theory [13]. By using the vertex method, in which only
the structural simulations of vertices of each possible event are required, the evaluations
of the Belief and Plausibility functions are simplified and the computational cost is
reduced.
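A minimal sketch of the vertex method follows: each joint proposition is treated as an interval box and the limit-state function g is evaluated only at its corners, which is exact only when g is monotonic in each variable — precisely the limitation discussed next:

```python
from itertools import product
import numpy as np

def vertex_bounds(g, interval_box):
    """Min/max of g over an interval box, estimated from its 2^n corners."""
    values = [g(np.array(c)) for c in product(*interval_box)]
    return min(values), max(values)

# e.g., a joint proposition [0.8, 1.0] x [0.7, 1.0] has four vertices:
# vertex_bounds(g, [(0.8, 1.0), (0.7, 1.0)])
```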
With the assumption that the limit-state function is monotonic, the vertex method
could be useful to quantify the uncertainty. However, if the limit-state function is
nonlinear and non-monotonic, the response sets of joint events of a limit-state function
can be inaccurate and the prediction of uncertainty can fail. Even though structural
system responses, such as displacement, stress, buckling load, fundamental frequency,
and so on, can be monotonic with given uncertain parameters in some cases, the system
failures can be defined by non-monotonic limit-state functions. However, in many
engineering structural UQ analyses, the failure region is usually small, and a large amount of computational resources is wasted on regions that do not contribute to the resulting uncertainty. Therefore, the motivation of this section is to develop a cost-
effective algorithm by using a surrogate model approach to reduce the overall
computational cost and by focusing the computational resources only on the failure
region. First, the proposed algorithm identifies the failure region in a defined UQ space
by employing a mathematical optimization technique, and then an approximation
approach is adopted to construct a surrogate of the original limit-state function for the
repetitive simulations of UQ analysis.
6.1 Multi-Point Approximation
In this work, the Multi-Point Approximation (MPA) method [21] is employed.
The general formulation of MPA is given as follows:
$\tilde{F}(X) = \sum_{i=1}^{N} w_i(X)\,\tilde{F}_i(X)$    (6.1.1)
where N is the number of local approximations, X is the vector of uncertain variables, $\tilde{F}_i(X)$ is a local approximation of the original limit-state function, and $w_i(X)$ is a weighting function that determines the contribution of each local approximation function. The weighting function can be expressed as follows:
$w_i(X) = \dfrac{\phi_i(X)}{\sum_{i=1}^{N}\phi_i(X)}$    (6.1.2)
where $\phi_i(X)$ is a blending function. The weighting functions in Eq. (6.1.2) are constructed to reproduce the exact function value and gradient values at the points where the local approximations were built. It is assumed that the information at the sampled points is accurate. There are several possible blending functions, and in this work the blending function is given by:
$\phi_i(X) = \dfrac{1}{h_i}$    (6.1.3)
where $h_i$ is basically the distance between a current target point and the sampled points that are used for constructing local approximations. Physically, when a current target is
far from a sampling point of a particular local approximation, the contribution of that
local approximation is minimal. The details for evaluating the weight function and the
blending function can be found in Ref. [21]. The accuracy of MPA mainly depends on
the local approximation, hence the choice of local approximation is important. In this
work, the Two-Point Adaptive Nonlinear Approximation (TANA2) method, developed
by Wang and Grandhi [22], is employed as a local approximation method. The efficiency and accuracy of this method were extensively demonstrated in many engineering disciplines [21-26].
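The following sketch assembles the MPA of Eqs. (6.1.1)-(6.1.3); first-order Taylor expansions stand in here for the TANA2 local approximations, and the exponent gamma generalizes the blending function (gamma = 1 recovers Eq. (6.1.3)):

```python
import numpy as np

def mpa_surrogate(aps, gamma=1.0):
    """Build F~(X) from a list of (x_i, F_i, grad_i) approximation points;
    linear local models stand in for TANA2 in this illustration."""
    def F_tilde(x):
        num, den = 0.0, 0.0
        for xi, Fi, gi in aps:
            h = np.linalg.norm(x - xi)
            if h < 1e-12:
                return Fi                      # exact value reproduced at an AP
            phi = 1.0 / h**gamma               # blending function, Eq. (6.1.3)
            num += phi * (Fi + gi @ (x - xi))  # local approximation F~_i
            den += phi                         # weights of Eq. (6.1.2)
        return num / den
    return F_tilde
```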
6.2 Cost Efficient Algorithm for Structural Uncertainty Quantification
When conducting the UQ analysis using the sampling method or the vertex
method, it is required to explore the entire joint frame of discernment, defined by the Cartesian product of the frames of discernment of the uncertain parameters, with the given
imprecise information. The main computational cost of UQ analysis is from the large
number of structural model simulations needed to explore the entire joint frame of
discernment. However, in many cases of UQ analysis of engineering structural systems,
the failure region is small compared to the entire space of the joint frame of discernment.
Hence, instead of investigating over the entire space for a limit-state function, the
computational effort of structural simulations could be allocated efficiently by identifying
the failure region. Also, a surrogate of the original limit-state function constructed by
using the MPA method can be used instead of the repetitive simulations to reduce the
computational cost in UQ analysis.
The proposed algorithm consists of two main steps: i). finding the failure region
in a defined joint frame of discernment and ii). constructing a surrogate of the original
limit-state function using the MPA method. For the first step, it is assumed that the
failure region is comparatively small in the defined joint frame of discernment.
Figure 6.1 Identifying the Failure Region Using an Optimization Technique (search from an initial point to a failure boundary point between the safe and failure regions in the (x1, x2) space)
This failure region could be identified by solving an optimization problem. The
problem can be formulated as follows,
minimize: $\left|\,Y_{Limit} - f(X^i)\,\right|$    (6.2.1)
subject to: $X^L \le X^i \le X^U$    (6.2.2)
where X L and X U indicate the lower and upper bounds of each parameter from the frame
of discernment, X i is the design vector of uncertain parameters at the ith iteration, and
YLimit is the limit-state value of a system response. To solve this optimization problem, a
number of techniques are available [41]. In this work, a gradient-based optimization
technique, the Sequential Quadratic Programming method, is applied. Identifying the
failure region by an optimization technique is illustrated in Fig. 6.1 with an arbitrary limit-state function and value.
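A hedged sketch of this search is shown below, with SciPy's SLSQP standing in for the SQP optimizer and a deliberately loose tolerance, since only an approximate boundary point is required:

```python
from scipy.optimize import minimize

def find_failure_boundary(f, y_limit, lower, upper, x0):
    """Drive the response f(X) toward the limit-state value Y_Limit
    inside the joint frame of discernment, Eqs. (6.2.1)-(6.2.2)."""
    obj = lambda x: abs(y_limit - f(x))           # Eq. (6.2.1)
    bounds = list(zip(lower, upper))              # Eq. (6.2.2)
    res = minimize(obj, x0, method="SLSQP", bounds=bounds, tol=1e-3)
    return res.x                                  # approximate boundary point
```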
Figure 6.2 Deploying APs and Constructing the Surrogate on the Failure Region (approximation points and the constructed MPA surrogate over the (x1, x2) space)
The cost for this optimization procedure can be reduced by relaxing the
convergence criteria, because the exact optimum point (or the exact failure boundary
point) is not required. Only the approximate optimum, which is close to the boundary of
the failure region, is needed in this step. After obtaining the failure boundary point,
Approximation Points (APs) for constructing local approximations are deployed over the
failure region, as shown in Fig. 6.2. The deployment of APs can be performed with a
factorial design, which is a Design Of Experiments (DOE) technique [47-49]. The first
APs are deployed with large variations of factorial design and TANA2s are constructed
between the neighboring points. To confirm the MPA accuracy, the exact simulation
values and the approximation values are obtained and compared at several intermediate
sampling points. If the MPA accuracy is not acceptable, additional APs are distributed
with small variations of factorial design and the local approximations are updated. Until
the desired accuracy of MPA is obtained, this procedure is repeated.
For a special case in which multiple failure regions (e.g. multiple most probable
failure points in the probabilistic context) are expected, the procedure of identifying the
failure region can be performed with multiple initial points to find the multiple failure
boundary points. After finding the multiple failure regions, the MPA is constructed over
the failure regions as previously described. Once the surrogate from the proposed
algorithm is obtained, the two measurements of evidence theory, the degree of
plausibility and the degree of belief, are calculated by Eqs. (4.3.3) and (4.3.4). Since the
uncertain parameters in a joint proposition are continuous in an engineering application,
it is numerically required in the evaluation of Belief and Plausibility functions to find the
maximum and minimum responses in each joint proposition, ck.
$[\,y_{min},\ y_{max}\,] = [\ \min f(c_k),\ \max f(c_k)\ ]$    (6.2.3)
The maximum and minimum responses are obtained with trivial computational
cost by using the surrogate model constructed by the proposed algorithm because the
surrogate is just a closed form equation and it replaces computationally intensive
simulations, such as FEA or CFD.
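Putting the pieces together, the sketch below accumulates Bel and Pl over the joint propositions using the surrogate; the interval-corner evaluation (reusing the vertex_bounds helper sketched earlier) stands in for the min/max search of Eq. (6.2.3), which could equally be done by dense sampling of the closed-form surrogate:

```python
from itertools import product

def belief_plausibility(f_tilde, bbas, y_limit):
    """Bel and Pl of the failure set {y : y >= y_limit}; `bbas` holds,
    per variable, a list of ((lo, hi), mass) focal elements."""
    bel = pl = 0.0
    for joint in product(*bbas):                    # joint propositions c_k
        mass = 1.0
        for _, m in joint:
            mass *= m                               # joint BBA (independence)
        box = [iv for iv, _ in joint]
        y_min, y_max = vertex_bounds(f_tilde, box)  # response range, Eq. (6.2.3)
        if y_min >= y_limit:
            bel += mass                             # c_k lies entirely in U_F
        if y_max >= y_limit:
            pl += mass                              # c_k intersects U_F
    return bel, pl
```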
Figure 6.3 The Cost-Efficient Algorithm for Assessing Bel and Pl
[Figure 6.3 flowchart: given information → combining information → defining the function evaluation space (JFD) → identifying the failure region boundary → initial factorial design for TANAs → constructing MPA with the FEM analyzer; if the whole failure region is not covered, the factorial design is reconstructed by expanding the design space, and if the MPA accuracy is not acceptable, it is reconstructed with decreased variation; once acceptable, the surrogate model (MPA) is used for evaluating the Bel & Pl functions → [Bel, Pl].]
In the calculation of the Belief and Plausibility functions, joint propositions in the
failure region are evaluated as to whether the response range of the joint proposition is
included in the $U_F$ set partially or entirely, instead of by obtaining the $X_F$ set. To summarize the proposed cost-effective algorithm, Fig. 6.3 shows the procedure of UQ analysis using evidence theory.
6.3 Numerical Examples
6.3.1 Composite Cantilever Beam
A composite cantilever beam with a point load is considered, as shown in Fig. 6.4.
To simplify the calculation of tip displacement of the composite beam, a symmetric
laminated beam is used with one composite material and [±45]s angle plies.
Figure 6.4 Composite Cantilever Beam Structure Model (width b, height h, length L, tip load $F_0$, ply stack [45°/−45°/−45°/45°])
The tip displacement is obtained by the classical laminated plate theory [63] in
terms of composite material properties as follows:
$\delta_{Tip} = -\dfrac{F_o L^3}{h^3}\;\Phi\!\left(E_L,\,E_T,\,G_{LT},\,\nu_{LT}\right)$    (6.3.1)

in which $\Phi$ is the rational bending-compliance function of the $[\pm 45]_s$ laminate in the material properties, and
where h, L, and F0 are the height (3.81 cm), length (50.8 cm) of the beam, and the applied
load per width (350 kN) respectively.
For the composite material (graphite fabric-carbon matrix), EL and ET are the
longitudinal and transverse Young’s moduli (173 GPa and 33.1 GPa), GLT is the shear
modulus (9.38 GPa), and νLT is the Poisson’s ratio (0.036). In this example, the Young’s
moduli, EL and ET, are considered as uncertain variables, and the goal is to obtain the
assessment of the likelihood that the tip displacement exceeds the limit-state value of
5.59 cm.
$U_F = \left\{\,\delta_{Tip} : \delta_{Tip} \ge 5.59\ \mathrm{cm}\,\right\}$    (6.3.2)
Due to the lack of data and knowledge, only the multiple interval information for
the scales (α and β) of the Young’s moduli (EL and ET) is available, as shown in Fig. 6.5.
The interval information for the uncertain variables, EL and ET, are taken as BBA
structures without imposing any additional assumptions on the intervals. The BBAs of
intervals may not be continuous and they could overlap. The possible values of the
uncertain Young’s moduli are obtained by multiplying the scale factors to the previously
given material properties.
α1 α2 α3 α4 α5 α6 α7 α8 α9
BBA 0.0086 0.0086 0.0240 0.0103 0.2243 0.4966 0.0993 0.0514 0.0771
β1 β2 β3 β4 β5 β6 β7
BBA 0.0075 0.0075 0.0226 0.3158 0.5263 0.0902 0.0301
Figure 6.5 Scale Factor (α, β) Information for $E_L$ and $E_T$ (the α intervals lie between scale values 0.375 and 1.500, the β intervals between 0.500 and 2.000)
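In code, the BBA structures of Fig. 6.5 are simply focal-element masses that must sum to unity; the sketch below records the published masses (interval bounds omitted here) and forms the joint BBA under the independence assumption:

```python
# Focal-element masses from Fig. 6.5; the published alpha values sum to
# 1.0002 due to rounding, so a loose tolerance is used for that check.
alpha_bba = [0.0086, 0.0086, 0.0240, 0.0103, 0.2243, 0.4966, 0.0993, 0.0514, 0.0771]
beta_bba = [0.0075, 0.0075, 0.0226, 0.3158, 0.5263, 0.0902, 0.0301]
assert abs(sum(alpha_bba) - 1.0) < 1e-3 and abs(sum(beta_bba) - 1.0) < 1e-9
# Joint BBA of a proposition (alpha_i, beta_j) under independence:
joint_bba = [[ma * mb for mb in beta_bba] for ma in alpha_bba]
```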
In this example, it is expected that the vertex method will not fail to calculate the
plausibility and belief, because the limit-state function is monotonic with respect to
uncertain variables, as shown in Fig. 6.6.
Figure 6.6 Tip Displacement ($\delta_{Tip}$, cm) of the Composite Cantilever Beam with Respect to the Scale Factors (α and β), and the Surrogate Failure Region Using the Proposed Method
The vertex method requires 72 original function evaluations to check the vertices
of the joint BBA structure. However, by using the proposed method, the number of total
function evaluations required for identifying the failure region boundary and for
constructing MPA is only 24. The computational cost saved is about 67% and the same
UQ analysis results as the vertex method are obtained, as shown in Table 6.1. The
computational savings garnered by the proposed method mainly depends on the ratio of
failure region to the entire joint frame of discernment; that is, the smaller the ratio
becomes, the lower the computational cost.
Table 6.1 Composite Cantilever Beam Results Using the Vertex and Proposed Methods
                   Bel                      Pl       Number of function evaluations
Vertex method      $1.2875\times10^{-4}$    0.0100   72
Proposed method    $1.2875\times10^{-4}$    0.0100   24
In this example from Table 6.1, the degree of plausibility is 0.0100 for the failure of the composite cantilever beam regarding the defined tip-displacement limit-state, whereas there is at least $1.2875\times10^{-4}$ belief for the failure. Belief and Plausibility can be accepted as lower and upper bounds of an unspecified probability density function for the given interval information. Thus, a probability for $U_F$ can be as low as $1.2875\times10^{-4}$ and as high as 0.0100 with the given imprecise information.
6.3.2 Intermediate Complexity Wing (ICW)
The structural model of an intermediate complexity aircraft wing is shown in Fig.
6.7. In this model, the relative tip displacement at the marked point is restricted to less
than 20.3 cm as a limit-state function, and the system failure set is defined by Eq. (6.3.3).
$U_F = \left\{\,\delta_{Tip} : \delta_{Tip} \ge 20.3\ \mathrm{cm}\,\right\}$    (6.3.3)
Figure 6.7 ICW Structure with Uncertainties in the Root Region (upper and lower wing skins, spars and ribs, tip displacement location, and the uncertain Young's moduli region at the wing root; span and chord in cm)
The uncertainties are assumed to exist in the static loads, Young’s moduli, and ply angles
of the composite elements. The uncertainties of the Young’s moduli and ply angles are
considered only in the root region, as indicated in Fig. 6.7, in order to represent the structural integrity defects that can reduce the structural stiffness through fatigue, crack propagation, and so on.
Figure 6.8 Aerodynamic Model of ICW (span and chord in cm)
The actual values of the Young’s moduli and ply angles are obtained by uncertain
factors (α , β ) as follows:
$E = \alpha \times$ (original Young's modulus)    (6.3.4)
$\theta = \beta \times$ (original ply angle)    (6.3.5)
Due to various operational conditions, different aerodynamic pressure distributions are
imposed on the wing model. Two aerodynamic pressure distributions are obtained by the
steady aeroelastic trim analyses (roll and lift) with an aerodynamic model of ICW, shown
in Fig. 6.8 at 0.7 Mach using ASTROS [42].
Figure 6.9 Aerodynamic Pressure ($C_{p\_lift}$) Distribution from Steady Aeroelastic Trim Analysis of Lift Forces ($C_p$ in kPa over span and chord in cm)

Figure 6.10 Aerodynamic Pressure ($C_{p\_roll}$) Distribution from Steady Aeroelastic Rolling Trim Analysis ($C_p$ in kPa over span and chord in cm)
The aerodynamic pressure distribution of rolling trim analysis (Cp_roll) is obtained
from the rolling rate 1.0 (rad/sec), and the aerodynamic pressure distribution of lifting
analysis (Cp_lift) is for the angle of attack 5°, as shown in Figs. 6.9 and 6.10. In this
example, the static loads on the structural model are assumed to be independent of
material properties and they are obtained by the combination of the aerodynamic pressure
distributions, as given by Eq. (6.3.6):
$C_p = \dfrac{\gamma}{1.5}\,C_{p\_roll} + \left(1 - \dfrac{\gamma}{1.5}\right) C_{p\_lift}$    (6.3.6)
where γ is the uncertain combination factor in this example.
After obtaining the combined aerodynamic pressure distribution on the
aerodynamic model, the structural static loads along the surface nodes are obtained by the
equivalent force transfer method integrated with the spline transformation technique [42].
Therefore, there are three uncertain scale variables (α , β , and γ ). It is assumed that
only imprecise information is available because of lack of data. Multiple intervals of
imprecise information for each variable are given by two independent experts, as shown
in Figs. 6.11 and 6.12. The interval information from two experts is aggregated by
Dempster’s rule of combining [Eq. (3.4.1)] to obtain the combined BBA structure of each
uncertain variable.
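A minimal sketch of Dempster's rule of combining for interval-valued focal elements is given below; the dictionary representation and helper names are illustrative, not the dissertation's implementation:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two BBA structures mapping (lo, hi) intervals to masses."""
    def intersect(a, b):
        lo, hi = max(a[0], b[0]), min(a[1], b[1])
        return (lo, hi) if lo < hi else None   # degenerate overlap -> empty
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        c = intersect(a, b)
        if c is None:
            conflict += ma * mb                # mass sent to the empty set
        else:
            combined[c] = combined.get(c, 0.0) + ma * mb
    k = 1.0 - conflict                         # normalization constant
    return {c: m / k for c, m in combined.items()}
```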
α1 α2 α3 α4 α5 α6 α7 α8 BBA 0.010 0.020 0.020 0.050 0.120 0.700 0.050 0.030
β1 β2 β3 β4 β5 β6 BBA 0.550 0.150 0.100 0.050 0.100 0.050
γ1 γ2 γ3 γ4 γ5 γ6 γ7 γ8 BBA 0.005 0.020 0.040 0.150 0.500 0.200 0.050 0.025
Figure 6.11 Interval Information for the Uncertain Variables from the First Expert: α (scale factor for Young's moduli), β (scale factor for ply angles), and γ (combination factor of aerodynamic pressures), with intervals spanning scale values 0.50 to 1.50
α1 α2 α3 α4 α5 α6 BBA 0.030 0.070 0.400 0.400 0.070 0.030
β1 β2 β3 β4 β5 BBA 0.400 0.100 0.150 0.050 0.300
γ1 γ2 γ3 γ4 γ5 BBA 0.100 0.050 0.700 0.050 0.100
Figure 6.12 Interval Information for the Uncertain Variables (α, β, and γ) from the Second Expert, with intervals spanning scale values 0.50 to 1.50
As a result, Table 6.2 shows the UQ analysis result of the proposed method by
comparing it to the results from the sampling method containing 150,000 simulations and
the vertex method.
Table 6.2 ICW Results Using the Sampling, Vertex, and Proposed Methods
                   Bel        Pl        Number of function evaluations
Sampling method    0.006491   0.200417  150,000
Vertex method      0.006500   0.156408  9504
Proposed method    0.006491   0.200417  1631
Even though the vertex method reduces the number of simulations of the limit-
state function from 150,000 to 9504, it fails to calculate the correct degrees of belief and
plausibility due to the non-monotonicity of the limit-state function for the ICW structure.
However, the proposed method gives robust results, as does the sampling method,
because the nonlinearity and the non-monotonicity are captured by the surrogate model.
The original function evaluation number is significantly reduced to 1631 by using the
proposed method, compared to 9504 by using the vertex method (around 80%
computational cost savings). From the three methods, the results of UQ analysis using evidence theory show a belief of 0.006491 and a plausibility of 0.200417 for failure of the wing structure with respect to the tip-displacement limit-state function.
The gap of the bound can be reduced, or even a single value result can be calculated by
employing additional assumptions. However, it should be remembered that without
justifying the assumptions with evidence or data, the result could be merely the reflection
of the assumptions. Hence, the bound result ([0.006491, 0.200417]) using evidence
theory can be viewed as a robust result because it is obtained without any additional
assumption and it includes all the probability results that could be obtained by using
different assumptions to the given imprecise information in probability theory. In this
example, the surrogate of the limit-state function is constructed for the tip displacement
limit-state function using MPA with three variables. In the local approximation TANA2,
the addition of one more uncertain variable needs only one function gradient with respect
to the additional variable and one error correction term, as shown in Eq. (5.1.11). TANA2
can handle a large number of uncertain variables efficiently. Unlike the vertex method or
sampling method, once the surrogate is constructed, there is no additional high
computational cost in evidence theory for updating the bound result with a reinforced
expert opinion or refined interval information. Moreover, since the limit-state in UQ
analysis is expressed by a single closed-form equation using MPA, the benefits of the
proposed algorithm can be realized in other analyses using evidence theory, such as
sensitivity analysis, reliability-based optimization, and so on.
7. Comparison of Reliability Approaches With Imprecise Information
Until now, when both aleatory and epistemic uncertainties are present together in
a system, Uncertainty Quantification (UQ) has been performed by treating them
separately, or by making assumptions to accommodate either a probabilistic framework
or a possibilistic framework. However, because of the flexibility of the basic axioms in
evidence theory, not only epistemic uncertainty, but also aleatory uncertainty can be
tackled in its framework without any baseless assumptions. In this section, the possibility
of adopting evidence theory as a general tool of UQ in an engineering structural system is
investigated with the cost-efficient UQ methodology that was introduced in the previous
chapter.
7.1 Problem Definition with Imprecise Information
The form of the mathematical model that describes the physical system can be
expressed abstractly as Eq. (7.1.1):
$Y = f(X)$    (7.1.1)
where Y =[y1, y2, … , yn] is a vector of system responses and X=[x1, x2, … , xn] is a vector
of input data. In this work, only parametric uncertainty is considered; that is, there is no
uncertainty in the defined mathematical modeling, system failure modes, and so on.
When only parametric uncertainty is considered, the uncertainty of Y is determined from
the uncertainty of X in the model. Once enough data for those parameters of X are
obtained, the parametric uncertainties in X can be expressed by PDFs and probabilistic
UQ techniques can be used. When available data is not sufficient to construct a PDF,
upper and lower bounds might be provided from experts’ opinions. For the imprecise
bound information (epistemic uncertainty) of an uncertain parameter, the Bayesian
method can be used in probability theory under the assumption that the imprecise
information is given to events which are mutually exclusive and exhaustive [64]; that is,
the uncertain information consists of a probability density p on all finite elementary
events of S, the universal set of events, such that p: S [0,1] and
∈
=Ss
sp 1)( (7.1.2)
Hence, in case the imprecise information is given to any subset of S, the
probability information for each elementary event should be reproduced by using any
assumption for the probability mass distribution in the subset. On the other hand, in
possibility theory, for the given bound information, a membership function is defined to
represent the degree of belonging or not belonging to the leveled interval (membership)
by taking the uncertain variable as a fuzzy variable. With different levels of degree of
membership (α cuts), fuzzy subsets of the fuzzy variable are obtained. Since the fuzzy set
is originally developed with the contention that meaning in natural language is a matter of
degree [65], the fuzzy subsets are consonant sets with corresponding α cuts. When the
imprecise information is given by multiple non-consonant intervals with corresponding
degrees of belief, the fuzzy membership function should be approximated to solve with
possibility theory [66].
In evidence theory, imprecise information expressed by any subset of FD is
assigned to a BBA structure without any additional assumption. The subsets (intervals of
an uncertain variable) to which the bodies of information (BBAs) are assigned can be
consonant or non-consonant and continuous or discrete. The interval can be the interval
of physical value or the interval of imprecise statistics. As mentioned previously,
evidence theory gives a bounded result ([Bel, Pl]) due to lack of information, and the
bounded result includes the probability result, which can be obtained by assuming any
distribution for the given interval information. The measurements (Bel, Pl and
probability) eventually will converge to a single value when the information is increased
sufficiently. However, unlike a PDF of probability theory and a membership function of
possibility theory, the BBA structure in evidence theory cannot be expressed with an
explicit function.
For multiple uncertain parameters, the joint BBA structure, which is similar to the
joint probability density function in probability theory, is defined for UQ analysis of a
structural system. The possible joint set, denoted by C, is constructed by using the
Cartesian product of the propositions of each uncertain parameter, as shown in Eqs.
(4.2.1) and (4.2.2). The joint BBA structure must follow the three axioms of BBA
structure. Every possible event is required to be checked in the evaluation of the Belief
and Plausibility functions, [Eqs. (4.3.3) and (4.3.4)], by finding the maximum and
minimum responses using the proposed cost-efficient algorithm in the previous chapter.
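To make the joint BBA construction and the Belief/Plausibility evaluation concrete, a minimal Python sketch is given below. The intervals, BBAs, and response function are hypothetical stand-ins, and the interval extrema are found by brute-force corner enumeration (valid for a monotonic response) instead of the cost-efficient surrogate-based algorithm of the previous chapter.

from itertools import product

# Hypothetical BBA structures: (interval, BBA) pairs for two uncertain inputs
bba_e = [((0.8, 1.0), 0.6), ((0.9, 1.2), 0.4)]
bba_p = [((0.9, 1.1), 0.7), ((1.0, 1.3), 0.3)]

def response(e, p):
    # Stand-in limit-state response; monotonic in both inputs
    return 2.0 * p / e

limit = 2.6  # failure when the response reaches or exceeds this value

bel = pl = 0.0
for (ie, me), (ip, mp) in product(bba_e, bba_p):
    m_joint = me * mp  # joint BBA of the Cartesian-product proposition
    # For a monotonic response, the extrema occur at the interval corners
    corners = [response(e, p) for e in ie for p in ip]
    if min(corners) >= limit:   # proposition lies entirely in the failure set
        bel += m_joint
    if max(corners) >= limit:   # proposition intersects the failure set
        pl += m_joint

print(f"[Bel, Pl] = [{bel:.4f}, {pl:.4f}]")

Every joint proposition contributes its mass to Plausibility as soon as it touches the failure set, but to Belief only when it is contained in it, which is why Bel ≤ Pl always holds.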
7.2 Case Study I: Three Bar Truss
The structural model of a three bar truss is shown in Fig. 7.1. There are three truss
elements and a static load is applied at node 4. The finite element analysis (FEA) of this
structure was performed using GENESIS 6.0 [67].
Figure 7.1 Three Bar Truss (three elements with areas A1, A1, and A2 on 10″ bays; material E = 1.0×10⁶ psi; static load P = (40000 lb, −40000 lb) at node 4)
The displacement of node 4 is considered as a limit-state response function. It is
assumed that uncertainties exist in the independent parameters of elastic modulus (E) and
applied force (P). The nominal values for the uncertain parameters are fixed and the
actual values are obtained by multiplying the nominal values by uncertain factors. The
goal of this problem is to obtain an assessment of the likelihood that the displacement of
node 4 is larger than the limit-state value (δlimit = 3.0″); that is, the likelihood that the
displacement is in the set given by Eq. (7.2.1).
$\delta_{fail} = \{\delta_{Node4} : \delta_{Node4} \ge \delta_{limit}\}$   (7.2.1)
Figure 7.2 Imprecise Information for the Scale Factors of Uncertain Parameters
(E and P)
In this example, we consider the situation in which an expert gives multiple
interval information for the two uncertain parameters, as shown in Fig. 7.2. Different
solution approaches (evidence theory, possibility theory, and probability theory) are
investigated and discussed in the following subsections.
[Figure 7.2 shows six interval propositions e1–e6 for the elastic modulus factor, carrying BBAs of 0.03, 0.05, 0.60, 0.15, 0.07, and 0.10, and six interval propositions p1–p6 for the force factor, carrying BBAs of 0.10, 0.05, 0.70, 0.02, 0.03, and 0.10, over the factor range 0.5–1.5.]
Possibility theory approach
Since only parametric uncertainties, which are characteristically aleatory
uncertainties, are considered in this example, it is possible to calculate bounds on the
probability of system failure with a frequentistic view of fuzzy sets of possibility theory.
A fuzzy set is characterized by a fuzzy membership grade (also called a possibility) that
ranges from 0.0 to 1.0, indicating a continuous increase from non-membership to full
membership. A degree of membership is associated with every element x, and a fuzzy set A
over the referential X is defined by means of a membership function $\mu_F: X \to [0, 1]$.
The referential X could be viewed as the frame of discernment in evidence theory and
also as the sample space in probability theory. For any x in X, $\mu_F(x)$ is the membership
degree of x in A. The α-level cut of A is the subset defined by $\{x : \mu_F(x) \ge \alpha\}$. As a
special case, a BBA structure can be interpreted as a fuzzy set when its
intervals are consonant [69]. In this example, since the given intervals shown in Fig. 7.2
are not consonant, the possibility theory approach cannot be applied directly. When the
given interval sets are not consonant, the consonant interval information can be
reproduced by performing inclusion techniques. The inclusion procedure proposed by
Tonon et al. [66] is applied to the current problem. In the inclusion procedure, consonant
intervals are constructed to give a conservative result by decreasing the loss of
information. The intervals are ordered based on the effect on the reliability index and
extended to include other intervals. The BBAs of the obtained consonant intervals are
corrected by introducing a correction mass β. The reader is referred to reference [66] for
the details of the inclusion procedure.
Figure 7.3 Consonant Intervals and an Approximate Membership Function for the Scale
of Uncertain Parameter (E) Using the Inclusion Technique
The reproduced consonant intervals and the plausibility function of the singletons
are shown in Figs. 7.3 and 7.4. The plausibility function for focal sets is accepted as the
approximate membership function of the fuzzy set in this procedure. When multiple
fuzzy variables are considered in a functional relationship, the corresponding fuzzy
responses must be computed via Zadeh’s extension principle [27].
[Figure 7.3: consonant intervals e1′–e6′ of the elastic modulus factor, constructed with correction mass β = 0.0001; updated masses m(e1′) = 0.03+5β = 0.0305, m(e2′) = 0.05+3β = 0.0503, m(e3′) = 0.10+β = 0.1001, m(e4′) = 0.60−β = 0.5999, m(e5′) = 0.15−3β = 0.1497, m(e6′) = 0.07−5β = 0.0695. The plausibility of the singletons gives the approximate membership function of E.]
Figure 7.4 Consonant Intervals and an Approximate Membership Function for the Scale of Uncertain Parameter (P) Using the Inclusion Technique [with β = 0.0001 and updated masses m(p1′) = 0.10−5β = 0.0995, m(p2′) = 0.70−3β = 0.6997, m(p3′) = 0.05−β = 0.0499, m(p4′) = 0.02+β = 0.0201, m(p5′) = 0.03+3β = 0.0303, m(p6′) = 0.10+5β = 0.1005; the plausibility of the singletons gives the approximate membership function of P]
Based on Zadeh’s extension principle, Dong and Wong [34] proposed the Level
Interval Algorithm (LIA), also called the Fuzzy Weighted Average algorithm and the
vertex method. LIA, which is basically the vertex method, is reliable only for a
monotonic system response. Several variation methods were developed to improve the
computational performance in the fuzzy sets context by Liou and Wang [70], Guh et al.
[71], and others.
Figure 7.5 System Response (Displacement) Membership Function for the Three Bar
Truss
In this example, LIA is applied due to its simplicity of implementation. LIA
simplifies the process to obtain the fuzzy output by discretizing the membership functions
of the input fuzzy variables into prescribed α-cuts. The reader is referred to reference
[72] for the details of the LIA procedure. With the approximate membership functions of
the uncertain variables (E and P) from the inclusion technique, the fuzzy response
(displacement) is obtained by LIA, as shown in Fig. 7.5. From the response membership
function, the possibility of failure for the failure set defined in Eq. (7.2.1) is 0.1308.
Further discussions comparing the results with those of evidence theory and
probability theory are presented later.
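As an illustration of the LIA procedure described above, the following minimal Python sketch discretizes hypothetical triangular input membership functions at prescribed α-cuts and propagates each cut with the vertex method; the truss simulation is replaced by a simple monotonic stand-in response.

import numpy as np

def alpha_cut(peak, half_width, alpha):
    # α-cut of an assumed symmetric triangular membership function
    return (peak - (1 - alpha) * half_width,
            peak + (1 - alpha) * half_width)

def response(e, p):
    # Stand-in for the FEA displacement; monotonic in e and p
    return 2.0 * p / e

limit = 3.0
cuts = []
for a in np.linspace(0.0, 1.0, 11):      # prescribed α levels
    e_lo, e_hi = alpha_cut(1.0, 0.5, a)  # assumed membership of the E factor
    p_lo, p_hi = alpha_cut(1.0, 0.5, a)  # assumed membership of the P factor
    corners = [response(e, p) for e in (e_lo, e_hi) for p in (p_lo, p_hi)]
    cuts.append((a, min(corners), max(corners)))

# Each (alpha, lo, hi) triple is an α-cut of the fuzzy response; the possibility
# of failure is the largest α whose response cut still reaches the limit state.
poss_fail = max((a for a, lo, hi in cuts if hi >= limit), default=0.0)
print(f"possibility of failure ≈ {poss_fail:.2f}")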
Probability theory approach
Since in the probabilistic framework probability should be assigned to only
elementary events, the given imprecise information shown in Fig. 7.2 is not appropriate
for the probabilistic analysis. In probability theory, when a PDF for an uncertain variable
is not available, the uniform distribution function is often used, which is justified by
Laplace’s Principle of Insufficient Reason [73]. This principle can be interpreted to mean
that all simple events for which a PDF is unknown have equal probabilities.
Figure 7.6 PDF of e (Scale of Elastic Modulus) Using Uniform Distribution Assumption
Figure 7.7 PDF of p (Scale of Force) Using Uniform Distribution Assumption
In this example, there is no further information to select or approximate a PDF for
the given intervals, but only the probability masses (BBAs) are assigned by available
evidence (expert’s opinion or experimental data). The approximate PDFs of uncertain
variables are obtained, as shown in Figs. 7.6 and 7.7, by the assumption that probability
mass in each interval is distributed uniformly. The popular sampling technique, Monte
Carlo Simulation (MCS), is performed with 100,000 samples for the obtained PDFs of the
uncertain variables (e and p). The resulting failure probability is 0.0058 for
the current example. Discussion of this result is presented later.
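A minimal Python sketch of this procedure is given below; the intervals, BBAs, and response function are hypothetical. Each BBA is treated as the probability mass of its interval and spread uniformly inside it, and a plain Monte Carlo loop then estimates the failure probability.

import numpy as np

rng = np.random.default_rng(0)

def sample_piecewise_uniform(bba, n):
    # Sample a PDF built by spreading each BBA uniformly over its interval
    intervals, masses = zip(*bba)
    idx = rng.choice(len(intervals), size=n, p=masses)
    lo = np.array([intervals[i][0] for i in idx])
    hi = np.array([intervals[i][1] for i in idx])
    return rng.uniform(lo, hi)

# Hypothetical mutually exclusive interval/BBA data for the two scale factors
bba_e = [((0.7, 0.9), 0.3), ((0.9, 1.1), 0.6), ((1.1, 1.3), 0.1)]
bba_p = [((0.8, 1.0), 0.5), ((1.0, 1.2), 0.5)]

n = 100_000
e = sample_piecewise_uniform(bba_e, n)
p = sample_piecewise_uniform(bba_p, n)

disp = 2.0 * p / e          # stand-in for the FEA displacement response
pf = np.mean(disp >= 3.0)   # failure: displacement reaches the limit state
print(f"estimated failure probability = {pf:.4f}")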
Evidence theory approach
In evidence theory, unlike in possibility theory and probability theory, there is no
need to make any assumption or approximation for the given imprecise information
because the BBA structure can consist of any combination of the possible subset of FD
(see the three axioms of Basic Belief Assignment). The given imprecise interval
information is adopted as a BBA structure itself. For multiple independent uncertain
parameters in a structural system, a joint BBA structure, which is similar to the joint
probability density function in probability theory, is defined by using the Cartesian
product in the JFD. As a result, the Belief and Plausibility functions are evaluated with
the cost-efficient algorithm, giving the bounded result [0.0039, 0.0345] for the
probability of system failure based on the given limit-state function. It is intuitive and
reasonable to obtain a bounded result instead of a single value, such as a probability,
because the given information is not precise.
Comparison and discussions of different approaches
Table 7.1 shows the results from each approach and corresponding computational
cost. Possibility theory and evidence theory give bounded results and probability theory
gives a single-valued result. The necessity in possibility theory is zero because the
interval for determining the measurements is set to [δlimit, +∞]. Figure 7.8 shows the
Complementary Cumulative Functions (CCFs) for each measurement. CCFs are defined
for the set $\delta_{fail}$, with a varying value of $\delta_{limit} \in \delta$, where $\delta$ and $\delta_{fail}$ are defined as in Eqs.
(7.2.2) and (7.2.3):

$\delta = \{\delta_{Tip} : \delta_{Tip} = f(\mathbf{x}),\; \mathbf{x} = \{x_1, x_2, \ldots, x_n\} \in X\}$   (7.2.2)

$\delta_{fail} = \{\delta_{Tip} : \delta_{Tip} \ge \delta_{limit},\; \delta_{limit} \in \delta\}$   (7.2.3)
CCFs can be interpreted in the same way as the cumulative distribution function in
probability theory. From Fig. 7.8, useful insights into the confidence of the result from
UQ analysis with imprecise information can be obtained.
Probability theory does not allow any impreciseness on the given information, so
it gives a single-valued result. However, possibility theory and evidence theory give a
bounded result. In particular, the difference between plausibility and belief in evidence
theory can be defined as the Ignorance (= Pl − Bel). This Ignorance reflects the lack
of confidence in a UQ analysis result. By increasing the available data and knowledge,
the difference (Ignorance) decreases to zero and the confidence in the resulting
measurement increases to one.
Table 7.1 Comparison of Results and Costs for Three Bar Truss Example

UQ Approach | UQ Result | Solution technique / Number of simulations
Possibility theory | [0.0000, 0.1308] | LIA / 48
Probability theory | 0.0058 | MCS / 100,000
Evidence theory | [0.0039, 0.0345] | Proposed algorithm / 17
Figure 7.8 Complementary Cumulative Measurements of Possibility Theory, Probability
Theory, and Evidence Theory for Three Bar Truss Example
If Pl and Bel are the same for a certain limit-state value, so that the degree of
Uncertainty is zero, then it can be interpreted that there is no doubt about the resulting
degree of belief of system failure. Regarding computational cost, Table 7.1 shows
that the cost-efficient algorithm substantially reduces the number of simulations. The
computational performance of possibility theory and probability theory can be enhanced
by using advanced techniques; however, the cost-efficient algorithm offers the most
efficiency and generality, and it can be incorporated even in the possibilistic and
probabilistic approaches to reduce their computational cost. Detailed discussions
of the result of each approach follow:
1) The result from possibility theory gives the most conservative value essentially
because of Zadeh’s extension principle. In that principle, the degree of membership of the
system response corresponds to the degree of membership of the overall most preferred
set of fuzzy variables, as in Eq. (7.2.4).
$\mu_F(y) = \sup_{x : y = f(x)} [\mu_F(x)]$   (7.2.4)
where x can be viewed as a vector of fuzzy variables for a multiple dimension problem.
However, in the inclusion procedure to reproduce consonant intervals, the location where
the reliability is maximized in the referential X should be correctly identified to avoid the
extreme conservative result. Hence, there are no unique consonant intervals, and the
extension of intervals in the inclusion technique is not limited to only one side; that is, the
constructed consonant intervals are dependent on the given limit-state functions. For
example, in a convex limit-state function, when the maximizing reliability location is at
the middle of X, the original intervals are extended in both directions (right and left) to be
the new inclusion intervals. However, in a concave limit-state function which gives two
boundary points in the referential X as maximizing reliability locations, the inclusion
technique can give an extreme result (0 or 1) for the possibility and necessity
measurements, unless other assumptions or criteria are introduced for the inclusion
technique. Thus, even though it is not clearly stated in reference [66], the inclusion
technique can be applied only to a system whose limit-state functions are monotonic.
By expanding the intervals to include other intervals in the inclusion technique,
the information given to an interval could lose its physical meaning. For example, the
BBA of e1 interval in Fig. 7.3, which can be viewed as a probability mass of the interval,
is assigned to the new interval, which is the same as the referential X ([0.5, 1.5]) to
include the other consonant intervals with the given correction mass β. Moreover, based
on Zadeh’s basic idea of fuzzy sets, the transition between membership and non-
membership of a location in the set is gradual [13]; the sharp boundaries in the
approximate membership function shown in Fig. 7.5 should be smoothed by introducing
other assumptions.
In this example, non-consonant multiple intervals are reproduced as a fuzzy
membership function to apply the possibilistic approach. Conversely, the membership
function can be modeled as a consonant BBA structure to analyze within the evidence
theory framework. When the membership function is modeled by a BBA structure, there
is no need for additional techniques or assumptions once the α cut is accepted as a level
of basic belief. The consonant BBA structure can be constructed with discretized α cuts.
2) Contrary to possibility theory, probability theory gives the smallest prediction
of system failure among the upper limits (possibility, plausibility, and probability). Based
on assumptions other than the uniform distribution function, the resulting
probability changes significantly. Hence, probability theory can seriously
underestimate a possible event unless the additional assumption (uniform distribution) is
justified properly. In other words, once an assumption is introduced, the resulting
probability would be merely the reflection of the assumption on a target system with
imprecise information. Moreover, since it just gives a single value result, additional
techniques might be required to obtain supplementary measurements (expectation,
variance, confidence bounds, and so on), which can be used in a decision making
situation.
3) Evidence theory gives a bounded result ([Belief, Plausibility]) that always
includes the probabilistic result; that is, the lower and upper bounds of probability based
on the available information. The two main reasons that structural analysts were not
familiar with evidence theory are the high computational cost and the misunderstanding
of the capability of incorporating the pre-existing probabilistic information.
As discussed throughout this paper, a BBA structure in evidence theory can be
used to model both fuzzy sets and probability distribution functions due to its flexibility.
That is, different types of information (fuzzy membership function and PDF) can be
incorporated in one framework to quantify uncertainty in a system. The obtained
bounded result of evidence theory, which tends to be less conservative than that of
possibility theory, and less marginal than the result of probability theory, can be viewed
as the best estimate of system uncertainty, because the given imprecise information is
propagated through the given limit-state function without any unnecessary assumptions in
evidence theory. As shown in Table 7.1, the computational cost of evidence theory can be
significantly reduced by using the cost effective algorithm. It shows that even though
there is no closed-form function for the given imprecise information, the Belief and
Plausibility evaluations can be performed efficiently by the proposed algorithm.
As mentioned previously, even in possibilistic and probabilistic approaches, the
algorithm can be employed to reduce the computational cost. Once the surrogate model is
constructed, there is no additional cost for updating the result with increased information.
For example, suppose that two exact normal PDFs exist for the scale factors, e and p
(means of 1.0 and standard deviations of 0.2), but an imprecise information situation is
assumed due to lack of information or data in the current three bar example. For such
imprecision, discretized mutually exclusive probability sets might be obtained, as shown in Fig. 7.9,
with different levels of discretization.
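The discretization in Fig. 7.9 can be reproduced with a short sketch like the following (Python with SciPy); the ±3σ truncation span is an assumption. A known normal PDF is converted into N mutually exclusive intervals whose BBAs are the probability masses they enclose.

import numpy as np
from scipy.stats import norm

def discretize_normal(mu, sigma, n_intervals, span=3.0):
    # Turn a normal PDF into a BBA structure of n mutually exclusive intervals
    edges = np.linspace(mu - span * sigma, mu + span * sigma, n_intervals + 1)
    masses = np.diff(norm.cdf(edges, loc=mu, scale=sigma))
    masses /= masses.sum()  # fold the truncated tails back onto the intervals
    return [((edges[i], edges[i + 1]), masses[i]) for i in range(n_intervals)]

# Scale factors with mean 1.0 and standard deviation 0.2, as in the text
for n in (5, 30):
    bba = discretize_normal(1.0, 0.2, n)
    print(n, sum(m for _, m in bba))  # the BBAs sum to 1 for every N

Feeding such BBA structures to the surrogate-based Bel/Pl evaluation for increasing N reproduces the narrowing bounds of Fig. 7.10 without any new simulations.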
Figure 7.9 Discretized Normal PDF (N: the number of discretization levels; panels show N = 5 and N = 30)
As the number of discretization levels increases, Fig. 7.10 shows that the bound of
evidence theory decreases. This result shows that the three measurements (belief,
probability, and plausibility) eventually converge to a single value by increasing the data
sufficiently. The updated bounds in Fig. 7.10 are calculated without additional
simulations due to the construction of the surrogate for the limit-state function.
Figure 7.10 The Convergence of Bel, Pl, and Probability Regarding the Number of
Discretization
7.3 Case Study II: Intermediate Complexity Wing (ICW)
For the second numerical example, the structural model of an intermediate
complexity wing is shown in Fig. 4.4. This is a representative wing-box structure for a
fighter aircraft. The dominant frequency and tip displacement at the marked point, as
shown in Fig. 4.4, are considered as multiple limit-state functions.
1. Displacement: $\dfrac{Disp_{tip}\,(\mathrm{in})}{2.0} \le 1.0$   (7.3.1)

2. Frequency: $\dfrac{Freq\,(\mathrm{Hz})}{20.0} \le 1.0$   (7.3.2)

3. Combination: $\dfrac{Disp_{tip}\,(\mathrm{in})}{0.45} \le 1.0$ and $\dfrac{Freq\,(\mathrm{Hz})}{6.5} \le 1.0$   (7.3.3)
In this example, the uncertainties are expressed by intervals of scale factor for the static
loads and by an interval of statistical mean value of the elastic modulus of the skin
elements from two information sources, as shown in Figs. 7.11 and 7.12.
Figure 7.11 Scale Factor Information for Static Force from Different Sources
The information of force factor from two different sources is aggregated by
Dempster’s rule of combining, and the averaging discretization method [74] has been
used to obtain the BBA structure with the interval mean value of the normal distribution
of elastic modulus factor, as shown in Fig. 7.12. The surrogates are constructed for each
limit-state function.
[Figure 7.11: Source 1 gives five interval propositions P11–P15 carrying BBAs of 0.025, 0.5, 0.025, 0.25, and 0.2; Source 2 gives five interval propositions P21–P25 carrying BBAs of 0.04, 0.7, 0.02, 0.1, and 0.14, over the factor range 0.7–1.5.]
Figure 7.12 Discretized Intervals for Elastic Modulus with Given Interval Statistics
As a result, Table 7.2 was obtained with multiple limit-state functions. The result
of the proposed method shows us that we have as much as 0.0526 plausibility, which is
determined by the third limit-state function, for the failure of the wing structure. When
the limit-state function is not monotonic, the failure event can be missed and plausibility
can be underestimated by using the vertex method, as shown in Table 7.2, unless other
considerations, such as linear variations of responses, are given. However, by using the
proposed algorithm, the nonlinearity and non-monotonicity can be reflected to assess
more accurate Bel and Pl measures. The number of computations also decreases by
approximately 85% by using the proposed method instead of the simple vertex method.
The benefit of the proposed method is expected to increase as the scale of the problem
increases.
[Figure 7.12: cumulative normal distributions with interval mean µ = [0.8, 1.0] and σ = 0.12, discretized into 32 intervals of the elastic modulus factor.]
Table 7.2 ICW Results Using the Vertex and Proposed Methods

Method | Bel | Pl | Number of function evaluations
Vertex Method | 0.000 | 0.0101 | 512
Proposed Method | 0.000 | 0.0526 | 79
8. Reliability Assessment Using Evidence Theory and Design
Optimization
Due to the inevitable natural variability and uncertainties of design parameters in
engineering structural systems, design optimization without any consideration of a
reliability or safety index might be unreliable and vulnerable to a system failure in service.
Reliability Based Design Optimization (RBDO) techniques are developed to address the
analytical certification of the performance of a structural system. In many engineering
applications, probability theory has been employed in a multidisciplinary design
optimization procedure to address uncertainty in a structural system. However, it is not
always possible to obtain the precise and complete information for the probabilistic
uncertainty description in practice. In such cases, the probabilistic approach might not be
appropriate for RBDO unless strong assumptions are accepted for the uncertainty of interest.
Therefore in this section, the Uncertainty Quantification (UQ) using evidence theory
proposed in the previous chapters is employed for the reliability assessment in a design
optimization procedure with multiple types of uncertainty. To address the discontinuity of
the measurements (Bel and Pl), a supplementary measurement, plausibility decision, is
introduced first. Sensitivity analyses of evidence theory are developed for effective
design modification and data acquisition.
8.1 Plausibility Decision Function
The plausibility function is a discontinuous step function, as can be seen from Fig.
4.9. However, in a decision-making situation, such as in a procedure of design
optimization in which an initial design is to be improved by considering uncertainty with
evidence theory, one needs a continuous measurement that can be used to make a
decision in the sequential iterative procedure. So as a supplementary continuous
measurement, a plausibility decision (Pl_dec) can be introduced by employing the
generalized insufficient reason principle [73] to obtain a continuous function. In this
principle it is assumed that the BBA for a set A, m(A), can be equally distributed to the
focal subsets of A, when the given information is very poor. The Pl_dec function is
obtained for the degree of plausibility after the BBA structures are combined by Eq.
(4.3.4) as follows:
$Pl\_dec(U_f) = \sum_{c_k : c_k \cap f^{-1}(U_f) \ne \emptyset,\; c_k \in C} m(c_k)\, \dfrac{|f^{-1}(U_f) \cap c_k|}{|c_k|}$   (8.1.1)
where $|\cdot|$ indicates the total magnitude of a proposition. However, Pl_dec can also be
viewed as white probability, introduced by Elishakoff (1999), if white probability is
defined after applying Dempster’s rule of combining. Basically, with the limit-state
function f(x,b), Pl_dec is obtained by calculating the ratio of the failure region to the
entire region, which is expressed by the proposition shown in Fig. 8.1 for a one-
dimensional example.
Figure 8.1 The Failure Region, $f^{-1}(U_f) \cap c_k$, in a Joint Proposition $c_k$
The failure region, $f^{-1}(U_f) \cap c_k$, can be obtained numerically by defining the H function as
follows:
$H(b, Limit, \mathbf{x}) = I(f(\mathbf{x}, b) - Limit)$   (8.1.2)

where

$I = 1$ if $f - Limit > 0$, and $I = 0$ otherwise   (8.1.3)

and where b is the vector of system deterministic parameters and x is the vector of
uncertain parameters. Then, the failure region can be obtained by integrating the H
function as follows:

$|f^{-1}(U_f) \cap c_k| = \int_{x_{1,1}}^{x_{1,2}} \int_{x_{2,1}}^{x_{2,2}} \cdots \int_{x_{i,1}}^{x_{i,2}} \cdots H(b, Limit, x_1, x_2, \ldots, x_i, \ldots)\, dx_1\, dx_2 \cdots dx_i \cdots = \int_{\Omega} H(b, \mathbf{x})\, d\Omega$   (8.1.4)
where Ω indicates the multidimensional uncertain space. As a continuous, single-valued
function between Pl and Bel, Pl_dec makes it possible to compute the sensitivities of
plausibility with respect to other model parameters.
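A minimal one-dimensional Python sketch of Eq. (8.1.1) follows. The limit-state function, intervals, and BBAs are hypothetical, and the failure fraction |f⁻¹(U_f) ∩ c_k| / |c_k| inside each focal interval is estimated by integrating the indicator H on a dense grid rather than exactly.

import numpy as np

def f(x, b=1.0):
    # Hypothetical one-dimensional limit-state function
    return b * x**2

limit = 1.0  # failure when f(x, b) - limit > 0

# Hypothetical BBA structure of the single uncertain parameter
bba = [((0.0, 0.8), 0.3), ((0.6, 1.4), 0.5), ((1.2, 2.0), 0.2)]

bel = pl = pl_dec = 0.0
for (lo, hi), m in bba:
    x = np.linspace(lo, hi, 2001)
    H = (f(x) - limit > 0).astype(float)   # indicator of the failure region
    frac = np.trapz(H, x) / (hi - lo)      # |f^-1(Uf) ∩ ck| / |ck|
    pl_dec += m * frac
    if frac > 0.0:
        pl += m            # the proposition touches the failure region
    if frac > 1.0 - 1e-9:
        bel += m           # the proposition lies entirely in the failure region

print(f"Bel = {bel:.3f} <= Pl_dec = {pl_dec:.3f} <= Pl = {pl:.3f}")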
8.2 Sensitivity Analysis Using Evidence Theory
Sensitivity information for the quantified uncertainties that are expressed with
degrees of plausibility and belief can be useful in a structural system design procedure.
With the sensitivity analysis, we can determine the primary contributor to the
measurements, Plausibility and Belief, which are obtained by the limit-state function of a
structural system. Sensitivity analysis also makes it possible to improve the current
design by efficiently decreasing the quantified failure likelihood in the structural system.
Since the degree of plausibility as an upper bound is more interesting than the degree of
belief, the sensitivity is derived for the degree of plausibility of an engineering structure
problem that has epistemic uncertain parameters. A similar procedure could be applicable
for the sensitivity of belief. Two sensitivities are derived: the sensitivity of plausibility
with respect to the BBA of a proposition, $\partial Pl(C)/\partial m(A_{ij})$, and the sensitivity with respect to
a deterministic parameter b, $\partial Pl/\partial b$.
8.2.1 Sensitivity Analysis of Plausibility for BBAs of Propositions
In sensitivity analysis, it is our goal to find the primary contributing expert
opinion for the degree of plausibility. The result from sensitivity analysis indicates to
which proposition the computational effort and future collection of information should be
focused. Additionally, this sensitivity analysis can be easily shifted from the sensitivity
for plausibility to the sensitivity for the degree of ignorance, which is defined by the
discrepancy of belief from plausibility. By decreasing the degree of ignorance, we can be
more confident in the reliability analysis result. In this work, it is assumed that the
number of experts and their intervals are given and fixed. The sensitivity of plausibility
with respect to the BBA of a proposition, $m(A_{emn})$, is obtained by analytically differentiating the
degree of plausibility by the proposition's BBA, as shown in Eq. (8.2.1):
$\dfrac{\partial Pl(U)}{\partial m(A_{emn})} = \dfrac{\partial \left[ \sum_{c_k \cap U \ne \emptyset} m(c_k) \right]}{\partial m(A_{emn})} = \dfrac{\partial \left[ \sum_{c_k \cap U \ne \emptyset} m(A_i)\, m(B_j) \right]}{\partial m(A_{emn})} = \sum_{c_k \cap U \ne \emptyset} \dfrac{\partial m(A_i)}{\partial m(A_{emn})}\, m(B_j)$   (8.2.1)
where Ai and Bj are combined propositions for parameters A and B, and Aemn indicates the
nth proposition of the mth expert (e). Assume that there are two experts, (m=1, 2), who
give us their opinion for the following derivation. If we want to derive the sensitivity of
plausibility with respect to the BBA of the nth proposition of Expert 1, $m(A_{e1n})$, then Eq.
(8.2.1) can be expanded for Dempster's rule of combining:
$\dfrac{\partial m(A_i)}{\partial m(A_{e1n})} = \dfrac{\partial}{\partial m(A_{e1n})} \left[ \dfrac{\sum_{A_{e1p} \cap A_{e2q} = A_i} m(A_{e1p})\, m(A_{e2q})}{1 - \sum_{A_{e1p} \cap A_{e2q} = \emptyset} m(A_{e1p})\, m(A_{e2q})} \right]$   (8.2.2)

Using the notation $comb[A_{e1p}, A_{e2q}] = \sum_{A_{e1p} \cap A_{e2q} = A_i} m(A_{e1p})\, m(A_{e2q})$ and $constr[A_{e1p}, A_{e2q}] = \sum_{A_{e1p} \cap A_{e2q} = \emptyset} m(A_{e1p})\, m(A_{e2q})$, Eq. (8.2.2) becomes

$\dfrac{\partial}{\partial m(A_{e1n})} \left[ \dfrac{comb[A_{e1p}, A_{e2q}]}{1 - constr[A_{e1p}, A_{e2q}]} \right] = \dfrac{comb'[A_{e1p}, A_{e2q}]}{1 - constr[A_{e1p}, A_{e2q}]} + \dfrac{comb[A_{e1p}, A_{e2q}] \times constr'[A_{e1p}, A_{e2q}]}{(1 - constr[A_{e1p}, A_{e2q}])^2}$   (8.2.3)

where $comb'[A_{e1p}, A_{e2q}]$ is the derivative of $comb[A_{e1p}, A_{e2q}]$, and it is expanded as follows:

$comb'[A_{e1p}, A_{e2q}] = \dfrac{\partial}{\partial m(A_{e1n})} \sum_{A_{e1p} \cap A_{e2q} = A_i} m(A_{e1p})\, m(A_{e2q}) = \sum_{A_{e1p} \cap A_{e2q} = A_i} \left[ \dfrac{\partial m(A_{e1p})}{\partial m(A_{e1n})}\, m(A_{e2q}) + m(A_{e1p})\, \dfrac{\partial m(A_{e2q})}{\partial m(A_{e1n})} \right]$   (8.2.4)

The terms $\partial m(A_{e2q})/\partial m(A_{e1n})$ and $\partial m(A_{e1p})/\partial m(A_{e1n})$ on the right side of Eq. (8.2.4) are defined through the basic axioms for BBAs as follows:

$\dfrac{\partial m(A_{e2q})}{\partial m(A_{e1n})} = 0$   (8.2.5)

$\dfrac{\partial m(A_{e1p})}{\partial m(A_{e1n})} = 1$, when $p = n$   (8.2.6)

$\dfrac{\partial m(A_{e1p})}{\partial m(A_{e1n})} = -\dfrac{m(A_{e1p})}{1 - m(A_{e1n})}$, when $p \ne n$, since $\sum_{n=1}^{N} m(A_{e1n}) = 1$   (8.2.7)

For $constr'[A_{e1p}, A_{e2q}]$ in Eq. (8.2.3), the same procedure as for $comb'[A_{e1p}, A_{e2q}]$ is applied.
In the case that there are N experts who are giving their opinion, the combined
proposition for a parameter is obtained by applying Dempster’s rule of combining
sequentially, as given in Eqs. (8.2.8) and (8.2.9), due to its algebraic commutative and
associative properties [36].
$m_c(A_{pk}) = m(A_{ekp})$,  $k = 1$   (8.2.8)

$m_c(A_{p(k+1)}) = \dfrac{\sum_{A_{pk} \cap A_{e(k+1)q} = A_{p(k+1)}} m_c(A_{pk})\, m(A_{e(k+1)q})}{1 - \sum_{A_{pk} \cap A_{e(k+1)q} = \emptyset} m_c(A_{pk})\, m(A_{e(k+1)q})}$,  $k = 2, 3, \ldots, N-1$   (8.2.9)
Therefore, the whole procedure for sensitivity analysis with the differential of Dempster’s
rule of combining is repeated for N experts.
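The derivation above can be checked numerically. The following Python sketch uses hypothetical BBA structures from two experts over a discretized frame, combines them with Dempster's rule, and estimates ∂Pl/∂m(A_e11) by a central finite difference while renormalizing the remaining BBAs of Expert 1 in the manner of Eq. (8.2.7).

from itertools import product

def dempster(m1, m2):
    # Dempster's rule of combining for BBAs over frozenset propositions
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        c = a & b
        if c:
            combined[c] = combined.get(c, 0.0) + ma * mb
        else:
            conflict += ma * mb
    return {c: m / (1.0 - conflict) for c, m in combined.items()}

def plausibility(m, event):
    return sum(mass for prop, mass in m.items() if prop & event)

# Hypothetical expert opinions over a frame discretized into cells {0, 1, 2, 3}
m_e1 = {frozenset({0, 1}): 0.6, frozenset({1, 2}): 0.3, frozenset({2, 3}): 0.1}
m_e2 = {frozenset({0, 1, 2}): 0.7, frozenset({1, 2, 3}): 0.3}
fail = frozenset({2, 3})  # hypothetical failure event

def pl_perturbed(delta):
    # Perturb m(A_e11) and rescale the other BBAs so they still sum to one
    (a1, m1), *rest = m_e1.items()
    scale = (1.0 - (m1 + delta)) / (1.0 - m1)
    m_pert = {a1: m1 + delta, **{a: m * scale for a, m in rest}}
    return plausibility(dempster(m_pert, m_e2), fail)

h = 1e-5
print("dPl/dm(A_e11) ~", (pl_perturbed(h) - pl_perturbed(-h)) / (2 * h))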
8.2.2 Sensitivity Analysis of Plausibility for Structural Parameters
It is useful to obtain the sensitivities of deterministic parameters in an engineering
structural system. With the results of sensitivity analysis, we can improve a current
design efficiently by changing the current deterministic (controllable) design parameters
to decrease the expected failure likelihood in the structural system. However, the
plausibility function in evidence theory is a discontinuous function for varying values of
a deterministic parameter, because of the discontinuity of a BBA structure of an uncertain
parameter. The gradient of plausibility is approximated using the degree of plausibility
decision: Pl_dec. Pl_dec can be used as a supplemental measurement to make a decision
whether a system can be accepted or not when the resulting bound [Bel, Pl] is too large.
Also, Pl_dec makes it possible to compute the sensitivities of plausibility of the system
deterministic parameters, because the Pl_dec function is a continuous function whose
value lies between Pl and Bel. The sensitivity with respect to a system deterministic
parameter is derived with Pl_dec as follows:
$\dfrac{\partial Pl\_dec}{\partial b_i} = \dfrac{\partial}{\partial b_i} \left[ \sum_{c_k : c_k \cap f^{-1}(U_f) \ne \emptyset,\; c_k \in C} m(c_k)\, \dfrac{|f^{-1}(U_f) \cap c_k|}{|c_k|} \right] = \sum_{c_k : c_k \cap f^{-1}(U_f) \ne \emptyset,\; c_k \in C} \dfrac{m(c_k)}{|c_k|} \int_{\Omega} \dfrac{\partial H(b, \mathbf{x})}{\partial b_i}\, d\Omega$   (8.2.10)

where $\partial(|f^{-1}(U_f) \cap c_k|)/\partial b_i$ is the gradient of the failure region with respect to a deterministic
parameter, $b_i$. However, the multi-dimensional integral of the limit-state function in Eq.
(8.2.10) might be quite complex or even impossible in many engineering applications.
To alleviate the numerical and computational difficulties, the proposed cost-efficient
algorithm can be employed to construct a surrogate model of a limit-state function for
each joint interval proposition, ck, shown in Eq. (4.2.1). The local approximations are
constructed in the subspaces of the total function evaluation space. The subspaces are
determined by the given interval information for each uncertain parameter. For example,
as shown in Fig. 8.2a, the subspaces for local approximations are defined by the
Cartesian products of the disjointed intervals of each uncertain parameter.
a) The subspaces for local approximations defined by disjointed
intervals of each uncertain parameter
b) Constructing a network of local approximations
Figure 8.2 The Network of Local Approximations
The surrogate model of the limit-state function is expressed by the network of
local approximations of each subspace. The joint interval proposition, ck, is evaluated by
using the surrogate model instead of the actual limit-state function. As a local
approximation, a quadratic response surface model (RSM) is selected due to its simplicity
of implementation. As shown in Fig. 8.2b, the local RSM is constructed by obtaining
sampling points. The degree of fitness of the constructed RSM in a subspace can be
checked by performing a residual analysis. When the fitness is not satisfactory, the
subspace can be further divided into more than two subspaces for better accuracy of the
surrogate model.
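A sketch of fitting one local quadratic RSM in a two-dimensional subspace is shown below (Python/NumPy least squares; the response function, subspace bounds, and sample count are hypothetical). A full implementation would repeat this for every subspace of the network and subdivide wherever the residual check fails.

import numpy as np

rng = np.random.default_rng(1)

def true_response(x1, x2):
    # Stand-in for the actual limit-state simulation
    return 2.0 * x2 / x1 + 0.1 * x1 * x2

def basis(X):
    # Quadratic basis: 1, x1, x2, x1^2, x2^2, x1*x2
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

# One subspace of the network: the Cartesian product of two disjoint intervals
lo, hi = np.array([0.8, 0.9]), np.array([1.0, 1.1])
X = rng.uniform(lo, hi, size=(15, 2))      # sampling points in the subspace
y = true_response(X[:, 0], X[:, 1])

coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)

# Residual analysis: subdivide the subspace if the fit is unsatisfactory
resid = y - basis(X) @ coef
print(f"max residual in subspace: {np.abs(resid).max():.2e}")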
a) Identifying and projecting a failure limit-state surface on the function evaluation space
b) Second level subdivisions of LRSMs for the integration of failure regions
Figure 8.3 Linear Response Surface Models (LRSMs) for Sensitivity Analysis
To perform the multidimensional integration in Eq. (8.2.10) efficiently, as shown
in Fig. 8.3, the failure region of the limit-state function is identified by any available
technique, such as a random search or an optimization technique. Over the identified
failure region, linear RSM (LRSM) functions are reconstructed over the network of the
original RSM models by making finer subdivisions, as shown in Fig. 8.3b. Since the
integration of the H function in Eq. (8.2.10) is not the integration for the limit-state
function value, but for the failure region of the function evaluation space, LRSMs are
constructed by selecting one of the uncertain parameters as a dependent variable with a
given limit-state value. The dimensions of the LRSM are decreased by one. This
numerical procedure is performed with the obtained closed-form surrogate model without
a high computational cost. After obtaining the LRSM functions, the multidimensional
integration of the H function is obtained by the summation of the integrations of LRSMs
as follows:
$\int_{\Omega} H(b, \mathbf{x})\, d\Omega = \sum_{j=1}^{m} \int_{\Omega_{n-1}} LRSM_j\, d\Omega_{n-1}$   (8.2.11)
where m is the number of subdivisions for LRSM. For the sensitivity analysis, the
integration term in Eq. (8.2.10) is also obtained as follows:
$\dfrac{\partial}{\partial b} \int_{\Omega} H(b, \mathbf{x})\, d\Omega = \sum_{j=1}^{m} \int_{\Omega_{n-1}} \dfrac{\partial LRSM_j}{\partial b}\, d\Omega_{n-1}$   (8.2.12)
Hence, after linearizing the obtained nonlinear surrogate model sequentially, the multi-
dimensional integral in Eq. (8.2.10) is performed by using a conventional numerical
integration scheme, such as the trapezoidal rule, Simpson's rule, and so on.
8.3 Reliability-Based Design Optimization Using Evidence Theory
The Reliability-Based Design Optimization (RBDO) can generally be formulated as:
Minimize $f(\mathbf{d})$   (8.3.1)

Subject to $U(G_j(\mathbf{d}, \mathbf{X}) \le 0) \ge R_j$, $j = 1, \ldots, Ng$   (8.3.2)

$d_i^l \le d_i \le d_i^u$, $i = 1, \ldots, Nd$; $X_k$, $k = 1, \ldots, Nr$   (8.3.3)
where f and Gj are the objective and constraint functions, respectively; X is the uncertain
design vector; d is the controllable deterministic design vector; and Nd, Nr, and Ng are
the number of deterministic design variables, uncertain design variables, and non-
deterministic uncertainty-based constraints, respectively. The non-deterministic,
uncertainty-based constraints are described by an uncertainty measure $U(\cdot)$, and it is
required that the value of the uncertainty measure be greater than the reliability, $R_j$, for a
failure event $G_j(\mathbf{d}, \mathbf{X}) \ge 0$. There are many studies of RBDO with probabilistic
uncertainties. In the probabilistic framework, the uncertainty constraints [Eq. (8.3.2)] can
be characterized by a failure probability, $P(\cdot)$, and a required reliability index, $\beta_t$, as
follows:
$P(G_j(\mathbf{d}, \mathbf{X}) \ge 0) - \Phi(-\beta_t) \le 0$, $j = 1, \ldots, Ng$   (8.3.4)
The failure probability for the jth constraint can be expressed by a multiple dimensional
integration:

$P(G_j(\mathbf{d}, \mathbf{X}) \ge 0) = \int_{G_j(\mathbf{d}, \mathbf{X}) \ge 0} f_{\mathbf{X}}(\mathbf{X})\, dX_1 \cdots dX_{Nr} \le \Phi(-\beta_t)$   (8.3.5)

where $f_{\mathbf{X}}(\mathbf{X})$ is the joint probability density function of all probabilistic variables.
The traditional RBDO with Eqs. (8.3.1) ~ (8.3.3) requires a double loop iteration
process with reliability objective and constraints. The inner loop is to find the uncertainty
measurements with many repetitive simulations and the outer is the regular design
optimization loop to find the optimum design that satisfies the given constraints. Usually,
a reliability requirement is imposed on a constraint and it is well known that the multiple
integration for a reliability constraint function in a practical engineering application is
computationally prohibitive. To address the computational difficulty, some approximate
probability integration methods have been developed, such as the first-order reliability
method (FORM) [8], or the second-order reliability method (SORM) [9-11].
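For reference, FORM replaces the integration in Eq. (8.3.5) by a search for the most probable failure point in standard normal space; a minimal SciPy sketch is given below, assuming a hypothetical linear limit state in two standard normal variables.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def g(u):
    # Hypothetical limit state in standard normal space; g <= 0 means failure
    return 3.0 - u[0] - 0.5 * u[1]

# FORM: beta = min ||u|| subject to g(u) = 0, then Pf ~ Phi(-beta)
res = minimize(lambda u: u @ u, x0=np.array([1.0, 1.0]),
               constraints={"type": "eq", "fun": g})
beta = np.sqrt(res.fun)
print(f"beta = {beta:.3f}, Pf ~ {norm.cdf(-beta):.4e}")

SORM would refine this estimate with curvature corrections at the most probable point.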
Several popular approaches are proposed for design optimization under
probabilistic uncertainties. To reduce the computational cost in the inner loop, response
surface models for constraint functions or reliability indices can be constructed for fast
probability calculation. These methods are useful only for functions that can be well
approximated by the pre-fixed, regression-based model of limited nonlinearity. The safety-factor
based approach proposed by Wu et al. [75] uses an “approximately equivalent
deterministic constraint.” The basic idea is to replace random variables with the safety-
factor based values. By varying the design parameters, the original probabilistic
constraints can be adjusted to reach a specified reliability target. Sequential Optimization
and Reliability Assessment (SORA) is presented by Du and Chen [76] using a shifting
design vector to move back the boundaries of violated constraints to a feasible region
based on the reliability information from the previous cycle. An enriched Performance
Measure Approach (PMA+) is proposed by Youn et al. [77]. In the PMA+, the
probabilistic constraint is replaced with the performance measure and the RBDO model
using PMA can be redefined as
Minimize $f(\mathbf{d})$   (8.3.6)

Subject to $G_j^p(\mathbf{d}, \mathbf{X}) \le 0$, $j = 1, \ldots, Ng$   (8.3.7)

$d_i^l \le d_i \le d_i^u$, $i = 1, \ldots, Nd$; $X_k$, $k = 1, \ldots, Nr$   (8.3.8)

where $G_j^p$ is the jth performance measure and is obtained from a nonlinear optimization
problem in U-space, defined as

Minimize $G_j^p(\mathbf{d}, \mathbf{X})$   (8.3.9)

Subject to $\|\mathbf{U}\| = \beta_t$   (8.3.10)
where $\beta_t$ is a prescribed target reliability index. Furthermore, PMA+ for RBDO has three key
ideas: as a way to launch RBDO at a deterministic optimum design, as a probabilistic
feasibility check, and as a fast reliability analysis under the condition of design closeness.
Most of the efficient RBDO approaches in a probabilistic framework employ
representative indicators (i.e., reliability index or performance measure) to avoid the
actual multidimensional integration of reliability of failure. In a practical large-scale and
complex structural system, there might be enough probabilistic information for some
uncertain design variables and they can be expressed by well-known Probability Density
Functions (PDFs). However, the other uncertain variables might not be described by
probabilistic functions due to insufficient and incomplete data. Therefore, the RBDO
methods based on the probabilistic reliability index or performance measure are not valid
for multi-type uncertain variables. There are several alternative frameworks to handle
non-probabilistic uncertainties, such as possibility theory [13], evidence theory [16],
interval mathematics [33], and so forth. Among the non-probabilistic theories, it is found
that evidence theory can provide a unique generality in the incorporation of various types
of uncertainties (e.g. probabilistic data, fuzzy membership, and interval information), as
shown in the previous chapters. In this chapter, RBDO is tackled with multiple types of
uncertain parameters in an engineering structural problem with the proposed cost-
efficient algorithm.
8.4 Numerical Example
Figure 8.4 shows the structural model of an Intermediate Complexity Wing (ICW)
for RBDO. Static loads, which represent aerodynamic lifting forces, are applied along the
surface nodes, and the tip displacement at the marked point in Fig. 8.4 is considered as a
limit-state function.
Figure 8.4 ICW for RBDO
In this system, it is assumed that there are two uncertain factors describing
parameters for elastic modulus (E) and load (P). The nominal value for each parameter is
fixed and the actual values are obtained by multiplying the uncertain factors. For instance,
the basic value of the elastic modulus is 1.85×10⁷ psi. There are three deterministic factors
for the thicknesses of three wing sections (TH1, TH2, and TH3 ), as shown in Fig. 8.4. Let us
consider the situation in which two experts (Expert1, Expert2) give their uncertain
information for the two uncertain parameters with discontinuous intervals. Because the
available data for the parameters is not enough to predict precise variability, interval
information is considered to be the most appropriate way to express those variabilities
based on available partial evidence. It is assumed that two equally credible experts are
giving their opinions with multiple intervals for each uncertain parameter with respective
BBAs. The interval information for elastic modulus and load is given in Figs. 8.5 and 8.6.
Because of the lack of information, the interval information in evidence theory may not
be continuous and intervals can overlap.
Figure 8.5 Elastic Modulus Factor Information
[Figure 8.5: Expert 1 gives six interval propositions E11–E16 carrying BBAs of 0.015, 0.18, 0.4, 0.25, 0.15, and 0.005; Expert 2 gives six interval propositions E21–E26 carrying BBAs of 0.002, 0.022, 0.26, 0.7, 0.015, and 0.001, over the factor range 0.5–2.0.]
Figure 8.6 Load Factor Information
In Fig. 8.5, E11 indicates the first expert’s first interval proposition for the factor E.
Even though the interval E12 includes the interval E13, the BBA of E13 is higher than that
of E12. The reason is that the evidence that is supporting interval E13 is not included in the
evidence supporting the interval E12. Unlike probability in probability theory, BBAs do
not necessarily possess monotonicity and additivity. This scheme allows the expression
of opinions intuitively and realistically without making assumptions to reproduce any
kind of probabilistic information. The opinions from two different experts are combined
using Dempster’s rule of combining. It is the basic concept in Dempster’s rule of
combining that the propositions in agreement with other information sources are given
more credence and they are emphasized by the normalization with the degree of
[Figure 8.6: Expert 1 gives six interval propositions P11–P16 carrying BBAs of 0.003, 0.07, 0.115, 0.8, 0.01, and 0.002; Expert 2 gives four interval propositions P21–P24 carrying BBAs of 0.005, 0.9, 0.005, and 0.09, over the factor range 0.5–1.5.]
contradiction in Dempster’s rule of combining. By using Dempster’s rule of combining,
the combined information is given in Figs. 8.7 and 8.8.
Figure 8.7 Combined Information for Elastic Modulus Factor
[Combined propositions Ec1–Ec9 with BBAs 0.00001, 0.0004, 0.0069, 0.0113, 0.1133, 0.8670, 0.0010, 0.0001, and 0.00001]

Figure 8.8 Combined Information for Load Factor
[Combined propositions Pc1–Pc8 with BBAs 0.00004, 0.0005, 0.0054, 0.5550, 0.3827, 0.0533, 0.0031, and 0.00006]
The structural analyses were conducted with the finite element analysis method
by using commercial finite element analysis software (GENESIS7.0 [67]) to obtain the
tip displacements. Here, our goal is to obtain an assessment of the likelihood that the tip
displacement exceeds a limit value, 1.0″ as given in Eq. (8.4.1).
$\delta_{fail} = \{\delta_{Tip} : 1.0'' - \delta_{Tip} < 0\}$   (8.4.1)
This goal is realized by obtaining the plausibility $Pl(\delta_{fail})$ for the set $\delta_{fail}$ with the
given experts' opinions.

$Pl(\delta_{fail}) = \sum_{\varepsilon : \varepsilon \cap \delta_{fail} \ne \emptyset} m_{combined}(\varepsilon)$   (8.4.2)
As presented in Table 8.1, three measurements, Pl, Pl_dec, and Bel, are obtained
by using the surrogate model, the network of response surface functions in each joint
proposition. The results are compared with those from the uniform sampling of the
original function. It is found that the number of simulations is significantly reduced by
using this surrogate model as compared to the uniform sampling scheme. This result
shows us that we have a 0.3198×10⁻² degree of plausibility of facing the failure of the wing
structure with the displacement limit-state function, and there is a 0.1576×10⁻⁴ degree of
belief for the failure based on the given partial evidence.
Table 8.1 Intermediate Complexity Wing Results
Method | Bel(disp_fail) | Pl_dec(disp_fail) | Pl(disp_fail) | Number of simulations
Proposed Method | 0.1576×10⁻⁴ | 0.5121×10⁻³ | 0.3198×10⁻² | 360
Sampling Method | 0.1576×10⁻⁴ | 0.4970×10⁻³ | 0.3198×10⁻² | 100,000
The degrees of belief and plausibility give the bounds of possible probabilities. If
designers want a robust design, then the degree of plausibility might need to be used as
an upper bound of probability. The degree of Uncertainty can also be used to
determine how much one can rely on the result of UQ, and the sensitivity information can
be used effectively to improve the certainty of the UQ result. As a supplementary
measurement, Pl_dec, which is placed between Bel and Pl, is 0.5121×10⁻³.
8.4.1 Sensitivity Analysis
The sensitivity of plausibility with respect to each proposition of each expert is
shown in Figs. 8.9 and 8.10. With those sensitivity analysis results, one can tell which
propositions have negative or positive contributions to the degree of plausibility, and
which expert’s opinion is a major uncertainty propagation source. In this example, the
fifth proposition of the first expert for the parameter E factor, E15, in Fig. 8.9 can be seen
as the primary contributor for decreasing the plausibility. On the other hand, the
sensitivity for the first interval of the first expert of parameter P factor, P11, in Fig. 8.10
is almost zero. This means that the first expert’s BBA for P11 has a trivial effect on the
degree of plausibility. By comparing the magnitudes of sensitivities for each parameter’s
BBA, the BBA for the parameter E factor is found to be a more significant contributor to
the degree of plausibility than the parameter P. The difference of magnitudes of
sensitivity between the elastic modulus factor and the load factor stems from the effect
of structural sensitivity of those parameters and from the formation of given interval
information in each parameter.
Figure 8.9 Proposition’s Sensitivities of Plausibility of Elastic Modulus Factor
As mentioned previously, the sensitivity information can be used to determine
future data acquisition strategies in which limited resources (due to financial budget,
human power, limited time, and so on) should be invested efficiently to quantify the
uncertainty in a system. For example, based on sensitivity analysis results in this
example, the most contributing intervals (E15 and P12) to the degree of plausibility
should be investigated by investing resources to collect more data on the intervals, and
by refining the intervals to obtain a more reliable UQ result.
Figure 8.10 Proposition’s Sensitivities of Plausibility of Load Factor
For the sensitivity of plausibility with respect to deterministic parameters, three
thickness factors for three sections of ICW, as shown in Fig. 8.4, are considered. The
purpose is to evaluate the effect of those parameters on the degree of plausibility. Figure
8.11 shows the resulting sensitivities for each thickness factor. In this example, the
sensitivities for plausibility have the same trend as the sensitivities for the limit-state
function with respect to those deterministic parameters because the deterministic
parameters are independent of the uncertain parameters and the limit-state function is also
monotonic for the deterministic parameters. In general, the sensitivity of plausibility does
not necessarily have the same tendency as the sensitivity of deterministic analysis for a
system deterministic parameter because of the dependency of plausibility on the
uncertain parameters.
Figure 8.11 Sensitivity of Plausibility with Thickness Factors (TH 1, TH 2, and TH 3)
The sensitivity results with respect to deterministic parameters could be used in a
reliability-based design phase. That is, when a desired level of plausibility in a system has
to be achieved with given imprecise information for uncertain parameters, the plausibility
could be efficiently controlled by changing the values of other deterministic parameters
with the obtained sensitivity information. For instance, it is found from Fig. 8.11 that the
designer can decrease the failure plausibility to a desired level in the current system much
more by increasing the value TH 3 than by increasing the value TH 1. This work is the first
attempt to develop the sensitivity analysis of an uncertainty quantification problem using
evidence theory. The additional Pl_dec measurement has been employed to address the
sensitivity analysis problem. The sensitivity offers an appropriate and efficient tool for a
robust system design based on reliability prediction.
8.4.2 Reliability Based Design Optimization
Fig. 8.4 shows the structural model of the Intermediate Complexity Wing (ICW),
which was used in the ASTROS manual [42], to demonstrate design optimization based
on reliability analysis using evidence theory. The model consists of 62 quadrilateral
membrane elements with uniform upper and lower skin thicknesses (0.25 in).
Aerodynamic loads are applied along the wing surface. The thicknesses are expressed
using three thickness factors, TH1, TH2, and TH3, for three parts of the model, as shown
in Fig. 8.4. The structural analysis of ICW is performed by finite element analysis (FEA)
using GENESIS 6.0 [67]. It is assumed that uncertainties exist in the scale factors of the
elastic modulus (E) and the applied force (F) in the structural model. Hence, there are
deterministic design variables, $\mathbf{X}_d = \{TH_1, TH_2, TH_3\}$, and uncertain variables, $\mathbf{X}_u = \{E, F\}$,
in the ICW model. The design variables define the design space of interest for the design
optimization with given side bounds, and the uncertain variables define the finite
uncertain parameter hyperspace with the frame of discernment. The total space of both
types of variables, $\mathbf{X} = \{\mathbf{X}_d, \mathbf{X}_u\}$, is denoted as the function evaluation space. The following
are the limit-state functions in the context of evidence theory.
Tip displacement: $\dfrac{Disp_{tip}\,(\mathrm{in})}{7.0} \le 1.0$   (8.4.3)

Frequency: $\dfrac{Freq\,(\mathrm{Hz})}{2.3} \ge 1.0$   (8.4.4)
For example, when the thickness factors, TH1, TH2, and TH3, are [0.2, 0.3, 0.5], the failure
degrees of belief, plausibility decision, and plausibility are obtained, as shown in Table
8.2, for each limit-state function.
Table 8.2 Failure Degrees of Belief, Plausibility Decision, and Plausibility (×10⁻⁵)

Limit state | Bel | Pl_dec | Pl
Tip displacement | 0.0005 | 0.3174 | 4.9488
Frequency | 0.0000 | 0.1230 | 0.4101
For the ICW model, we assume that uncertainties in the elastic modulus and the
applied force are inevitable. The uncertainties in those parameters determine the
uncertainties in responses of the system. In this ICW example, the design variables are
the scale factors of skin thickness that are controllable and free from uncertainty. The
objective is to minimize the volume of ICW while placing the constraint on the degrees
of safe plausibility for the limit-state functions that should be greater than an acceptable
degree (0.99). The degrees of plausibility and belief are discontinuous; hence, the degree
of plausibility decision (Pl_dec) is used for the constraint functions. By using the Pl_dec
measure, the sensitivity of plausibility with respect to structural design can be obtained.
The response surface method for a partially suspected proposition has been employed to
obtain the gradient of the failure region [18]. In MATLAB 6.0 [78], the sequential
quadratic programming (SQP) method with the BFGS formulation is selected for the
following optimization problem (a schematic set-up sketch is given after the statement):
Objective:
To minimize the total volume of the wing for a lighter aircraft.
Constraints:
The safe degrees of plausibility decision (Pl_dec) for limit-state functions (tip
displacement and fundamental frequency) >0.99
Side bounds of design variables ([0.2, 2.0])
Design variables:
The scale factors of the thickness for each part of the wing (TH1, TH2, and TH3).
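A schematic Python set-up of this problem is shown below, with SciPy's SLSQP standing in for MATLAB's SQP/BFGS routine; the volume and safe-Pl_dec evaluations are hypothetical stand-ins for the surrogate-based quantities used in the actual study.

import numpy as np
from scipy.optimize import minimize

def volume(th):
    # Stand-in for the wing volume as a function of the thickness factors
    return 400.0 * th[0] + 500.0 * th[1] + 600.0 * th[2]

def pl_dec_safe_disp(th):
    # Stand-in for the surrogate-based safe Pl_dec of the displacement limit state
    return 1.0 - 0.05 * np.exp(-5.0 * (np.sum(th) - 0.6))

def pl_dec_safe_freq(th):
    # Stand-in for the surrogate-based safe Pl_dec of the frequency limit state
    return 1.0 - 0.02 * np.exp(-4.0 * (np.sum(th) - 0.6))

constraints = [
    {"type": "ineq", "fun": lambda th: pl_dec_safe_disp(th) - 0.99},
    {"type": "ineq", "fun": lambda th: pl_dec_safe_freq(th) - 0.99},
]
res = minimize(volume, x0=np.array([1.0, 1.0, 1.0]), method="SLSQP",
               bounds=[(0.2, 2.0)] * 3, constraints=constraints)
print("optimum thickness factors:", res.x, " volume:", res.fun)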
Fig. 8.12 shows the optimization history of the objective function and design
variables for the wing skin. Both displacement and frequency constraints are active for
the obtained optimum; that is, the safe degrees of plausibility decision (Pl_dec) for both
limit-state functions are 0.99. The optimum result is obtained in 12 iterations with 112
function evaluations. At the optimum, the safe degrees of plausibility decision of both
constraints are computed with actual simulations to validate the optimum result. Those
safe degrees of plausibility decision for each tip displacement and fundamental frequency
constraints are 0.9921 and 0.9995. In each function evaluation, the degrees of plausibility
decision for both displacement and frequency constraints should be obtained for the
uncertain function evaluation space. Since the MPA model is constructed with uncertain
parameters and deterministic variables all together at the initial stage, the creation of the
MPA model is required only once in the total iterative procedure of design optimization.
In this ICW example, a total of 5,423 FEA simulations are needed to construct the MPA
model. The number of actual simulations is highly dependent on the accuracy of the local
approximation method and the size of the failure region in the function evaluation space
of both the deterministic design variables and the uncertain variables.
Figure 8.12 The Optimization History of Objective Function and Design Variables
[Figure 8.12 plots the wing thickness factors (TH1, TH2, and TH3) and the total volume (OBJ, in³) versus iteration number; the optimum is [TH1*, TH2*, TH3*] = [0.208, 0.245, 0.302] with OBJ* = 431 in³.]
By constructing the MPA model at the initial stage in this ICW example, the
whole uncertain function evaluation space defined by only uncertain variables can be
included in the failure region for some levels of deterministic variables. In those cases,
the computational cost for an optimization routine might be very expensive and
prohibitive. To avoid the high computational cost of both the optimization (outer) loop
and the UQ (inner) loop, an efficient sequential optimization strategy for multi-type
uncertain variables, Trust Region Based Reliability Optimization (TRBRO) can be
proposed.
In this method, the key idea is to define trust regions for both deterministic design
variables and uncertainty variables in sequential optimization iteration. UQ using
evidence theory is performed only for a limited trust region of uncertain variables with a
surrogate model to reduce computational cost and the partial measurement (plausibility or
belief) from the trust region is employed as a representative UQ indicator, instead of the
reliability index or performance measure, as in probability theory. To increase the
efficiency of the proposed method, RBDO can start from the deterministic optimum
design with mean-like values of uncertain variables similar to SORA and PMA+. The
deterministic optimum design will have a reliability of approximately 0.5.
The overall design procedure is to move the deterministic optimum design back to
a reliability-based optimum design with the trust-regional sequential scheme. Trust
region approaches [79, 80] manage the selection of move limits (i.e., local variable
bounds) for each sequence of approximate minimization based on the value of the
objective and constraint functions obtained in the previous sequence. At the tth iteration, a
local optimization problem is formulated with surrogate models in a limited trust region
from Eqs. (8.3.1) ~ (8.3.3) as follows:
Minimize $\tilde{f}_t(\mathbf{d})$ (8.4.6)

Subject to $Pl\left(\tilde{G}_{t,j}(\mathbf{d}, \mathbf{X}) \le 0\right) \ge R_j, \quad j = 1, \ldots, Ng$ (8.4.7)

$d_{i,t}^{l} \le d_{i,t} \le d_{i,t}^{u}, \; i = 1, \ldots, Nd; \qquad X_{k,t}^{l} \le X_{k,t} \le X_{k,t}^{u}, \; k = 1, \ldots, Nr$ (8.4.8)
where $Pl$, the degree of plausibility of evidence theory, is selected as the measure of uncertainty because it is an upper bound of probability. Eq. (8.4.7) is solved around the current trust region of $\mathbf{d}_t$ and $\mathbf{X}_t$. The move limits are defined by the trust region radius $\Delta_{d,t}$, where $\left\| \mathbf{d} - \mathbf{d}_t \right\|_p \le \Delta_{d,t}$ and the $p$-norm defines the shape of the region. Similarly, $\Delta_{X,t}$ is defined for the uncertain variables. In this work, $\Delta_t$ defines a hypercube (the $\infty$-norm) around $\mathbf{X}_t$ and $\mathbf{d}_t$, which gives the local bounds $[\mathbf{X}_t^L, \mathbf{X}_t^U]$ and $[\mathbf{d}_t^L, \mathbf{d}_t^U]$, respectively. TRBRO requires that probabilistic variables with unbounded PDFs be described as bounded uncertain variables by lumping the marginal probability onto appropriately trimmed boundaries. The move limits of a trust region are restricted by the global limits (the entire function evaluation space) of the deterministic and uncertain variables.
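To make the hypercube move limits concrete, the clipping of the local bounds against the global limits can be written in a few lines. The following Python sketch is illustrative only; the function name, variable names, and sample numbers are assumptions, not the dissertation's implementation.

```python
import numpy as np

def local_bounds(center, delta, global_lower, global_upper):
    """Hypercube move limits (infinity norm) of half-width `delta` around the
    current point, clipped to the global limits of the function evaluation
    space. A minimal sketch; names and clipping rule are assumptions."""
    lower = np.maximum(center - delta, global_lower)
    upper = np.minimum(center + delta, global_upper)
    return lower, upper

# Local bounds [d_t^L, d_t^U] for the deterministic variables at iteration t ...
d_lo, d_hi = local_bounds(np.array([0.2, 0.3]), 0.05, np.zeros(2), np.ones(2))
# ... and [X_t^L, X_t^U] for the (trimmed, hence bounded) uncertain variables.
x_lo, x_hi = local_bounds(np.array([1.0, 1.0]), 0.1, np.full(2, 0.7), np.full(2, 1.3))
```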
Since surrogate models are used for the UQ of the uncertainty constraints within a defined trust region, the uncertainty measure (the degree of plausibility) can be obtained at minor computational cost. The trust region of the deterministic design variables is updated in the traditional way, based on the values of the previous objective and constraint functions. The trust region of the uncertain variables, on the other hand, is updated so that the degree of failure plausibility within the trust region remains greater than the required value, unless the trust region reaches the global bounds of the uncertain variables.
Figure 8.13 Trust Region Uncertainty Quantification for Sequential Optimization Under Multiple Types of Uncertain Variables
[Flowchart: deterministic design optimization; construction of the entire function evaluation space (EFES); definition of an initial trust region (TR) for both deterministic and uncertain variables; construction of surrogate models for the defined TR; reliability-based design optimization within the TR; decision blocks check whether Pl_dec equals the Pl_dec target, whether the TR has reached the global limits of the uncertain variables, and whether the iteration has converged, and accordingly the TR of the deterministic or the uncertain variables is updated or the procedure ends.]
It is noted that the initial design of the reliability-based optimization has approximately 0.5 reliability, whereas in most engineering problems a very high reliability (e.g., 0.9 or six-sigma reliability) is required. That is, the failure region might be very small compared to the entire function evaluation space. As a result, the computational cost is reduced significantly by limiting the region over which the UQ calculation is performed. The updating procedure for the trust region of the uncertain variables needs to check the failure surface and its sensitivity information. The continuous Pl_dec function is used as a supplementary measure in TRBRO to obtain the UQ sensitivity with respect to the deterministic design variables. The conceptual numerical procedure of the proposed TRBRO is illustrated in Fig. 8.13. In the proposed method, there is no iterative procedure at the UQ level, and the actual UQ is performed instead of using an approximating UQ index. Hence, TRBRO is robust and efficient. The proposed method is valid not only for non-probabilistic variables, but also for probabilistic variables, by defining reasonable trimmed boundaries.
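One plausible reading of the Fig. 8.13 flow, expressed as a Python driver loop, is sketched below. This is a conceptual outline, not the dissertation's code: build_surrogate, rbdo_within_tr, and evaluate_pl_dec are assumed problem-specific callbacks, and the trust-region update rules are simplified placeholders.

```python
import numpy as np

def trbro(d0, pl_dec_target, build_surrogate, rbdo_within_tr, evaluate_pl_dec,
          delta_d=0.1, delta_x=0.1, delta_x_global=1.0, tol=1e-3, max_iter=50):
    """Conceptual TRBRO driver (a sketch of one reading of Fig. 8.13).
    The three callbacks are assumed, problem-specific helpers."""
    d = np.asarray(d0, dtype=float)  # start at the deterministic optimum (~0.5 reliability)
    for _ in range(max_iter):
        model = build_surrogate(d, delta_d, delta_x)  # local model over the current TR only
        d_new = rbdo_within_tr(model, d, delta_d)     # approximate RBDO step inside the TR
        pl = evaluate_pl_dec(model, d_new)            # UQ restricted to the uncertain-variable TR
        if abs(pl - pl_dec_target) <= tol:
            if np.linalg.norm(d_new - d) <= tol:
                return d_new                          # converged
            # A real implementation grows or shrinks delta_d from the agreement
            # between surrogate and true responses [79, 80]; kept fixed here.
        elif delta_x < delta_x_global:
            # Enlarge the uncertain-variable TR toward its global limits so that
            # the required degree of failure plausibility is captured.
            delta_x = min(2.0 * delta_x, delta_x_global)
        d = d_new
    return d
```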
9. Summary
As a generalization of classical probability and possibility theories from the
perspective of bodies of evidence and their measures, evidence theory can handle both
epistemic and aleatory uncertainties in one framework. Evidence theory allows for pre-
existing probability information to be utilized together with epistemic information (such as bounds or a possibilistic membership function) to assess the likelihood for a limit-state function. Until now, when multiple types of uncertainty coexist in a structural reliability analysis, UQ analyses have been performed by treating them separately or by making assumptions to accommodate either the probabilistic framework
or the fuzzy set framework. Hence, the possibility of adopting evidence theory as a
general tool of UQ analysis for multiple types of uncertainties was investigated. It was
found that because of the flexibility of the basic axioms in evidence theory, not only
aleatory (random) uncertainty, but also epistemic (non-random) uncertainty could be
tackled in its framework without any baseless assumptions. The Basic Belief Assignment (BBA) structure in evidence theory is usually not a continuous explicit function of the given imprecise information. Because of this discontinuity in the BBA, intensive computational cost can be unavoidable when quantifying uncertainty with evidence theory.
To alleviate the intensive computational cost, a cost-efficient algorithm using
MPA was developed. In the algorithm, optimization and approximation techniques were
employed to identify the failure region and invest the computational resources only on
the identified failure region. It was found that, with the proposed cost-efficient algorithm, the Belief and Plausibility functions were computed efficiently without sacrificing the accuracy of the resulting measures.
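For reference, Belief and Plausibility follow from focal-element sums: Bel adds the BBA of every focal element lying entirely inside the event of interest, and Pl adds the BBA of every focal element that intersects it. The sketch below illustrates those sums for interval focal elements; it estimates the extrema of the limit-state function over each box by coarse grid sampling, whereas the algorithm developed here locates the failure region by optimization with local approximations. Function names and the example are assumptions.

```python
import itertools
import numpy as np

def bel_pl(focal_elements, g, n_grid=5):
    """Belief and Plausibility of the event {g(x) <= 0} for a BBA defined on
    interval focal elements. Extrema of g over each box are estimated by
    coarse grid sampling (a shortcut, for illustration only)."""
    bel = pl = 0.0
    for box, mass in focal_elements:          # box: sequence of (lo, hi) pairs
        axes = [np.linspace(lo, hi, n_grid) for lo, hi in box]
        g_vals = [g(np.array(pt)) for pt in itertools.product(*axes)]
        if max(g_vals) <= 0.0:                # box lies entirely inside the event
            bel += mass
        if min(g_vals) <= 0.0:                # box at least intersects the event
            pl += mass
    return bel, pl

# Tiny 2-D example with three joint focal elements (masses sum to one).
g = lambda x: x[0] + x[1] - 1.5               # event: x1 + x2 <= 1.5
fes = [([(0.0, 0.5), (0.0, 0.5)], 0.5),       # entirely inside -> counts in Bel and Pl
       ([(0.5, 1.0), (0.5, 1.0)], 0.3),       # straddles the boundary -> Pl only
       ([(1.0, 1.5), (1.0, 1.5)], 0.2)]       # entirely outside -> neither
print(bel_pl(fes, g))                         # (0.5, 0.8)
```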
In an effort to reduce the computational cost further, a new direct and exact
reanalysis technique, the Successive Matrix Inversion (SMI) method, is developed based
on the binomial series expansion of a structural coefficient matrix. The SMI method gives
exact solutions for any variations to an initial design of a Finite Element Analysis (FEA);
that is, there is no restriction on the valid bounds of the design modification in the use of
SMI. The SMI method includes the capability to update both the inverse of the modified
coefficient matrix and the modified response vector of a target structural system by
introducing an influence vector storage matrix and a vector-updating operator. Since the cost of reanalysis using SMI depends on the ratio of the changed portion to the initial coefficient matrix, the SMI method is especially effective for a regional modification in a
structural FEA model. The SMI method is utilized in an iterative reanalysis procedure to accelerate the convergence rate and even to make an iterative solution converge that would otherwise have diverged. A new iterative method, the Binomial Series Iteration (BSI) method, is also developed and demonstrated with numerical examples; since BSI requires no computation to build up orthogonal basis vectors, it can be applied efficiently to a general, non-symmetric coefficient matrix. As a new class of linear system solver that combines a direct solver and an iterative solver for the first time, the proposed Combined Iterative (CI) methods (SMI with BSI) can be applied efficiently in a design optimization to reduce the computational cost of many repetitive simulations. It is found that, with these cost-efficient system reanalysis techniques and the UQ algorithm, the general UQ framework of evidence theory can be applied successfully to practical, large-scale engineering applications.
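Although the SMI derivation itself appears in the earlier chapters, the exact-update idea it builds on can be recalled with the classical rank-one identity of Sherman and Morrison [54]. The sketch below shows only that basic building block, not SMI's influence-vector bookkeeping; the names and the random test matrix are illustrative.

```python
import numpy as np

def sherman_morrison_update(K_inv, u, v):
    """Exact inverse of (K + u v^T) from the known inverse of K [54].
    Direct reanalysis methods of this family apply successive updates of
    this kind, so a local design change needs no full refactorization."""
    w = K_inv @ u                     # influence vector K^{-1} u
    denom = 1.0 + v @ w               # scalar 1 + v^T K^{-1} u
    return K_inv - np.outer(w, v @ K_inv) / denom

# Verify on a rank-one stiffness-like modification K_new = K + du dv^T.
n = 4
rng = np.random.default_rng(0)
K = rng.standard_normal((n, n)) + n * np.eye(n)   # diagonally dominant test matrix
K_inv = np.linalg.inv(K)
du, dv = rng.standard_normal(n), rng.standard_normal(n)
K_new_inv = sherman_morrison_update(K_inv, du, dv)
assert np.allclose(K_new_inv, np.linalg.inv(K + np.outer(du, dv)))
```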
In the comparison study of the different reliability approaches, it was shown that probability theory does not allow any imprecision in the given information, so it gives a single-valued result, whereas possibility theory and evidence theory give bounded results. The result from possibility theory gives the most conservative bound ([0, Necessity]), essentially because of Zadeh's extension principle, in which the degree of membership of the system response corresponds to the degree of membership of the overall most-preferred set of fuzzy variables. Evidence theory gives an intermediate bounded result ([Belief, Plausibility]), which always includes the probabilistic result; that is, Belief and Plausibility are the lower and upper bounds of the probability based on the available information. It was found that, owing to its flexibility, a BBA structure in evidence theory can be used to model both fuzzy sets and probability distribution functions. That is, multiple types of information (a fuzzy membership function, a PDF, interval information, and so forth) can be incorporated in the unified framework of evidence theory to quantify the uncertainty in a system.
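As a small illustration of this flexibility, probabilistic information can be cast into a BBA structure by discretizing a (trimmed) PDF into contiguous interval focal elements whose masses are the enclosed probabilities, while an interval is simply a single focal element of mass one. The sketch below shows one such conversion under assumed parameters (bin count, ±3σ-style trimming); these are modeling choices, not prescribed rules.

```python
import numpy as np
from scipy import stats

def bba_from_pdf(dist, lo, hi, n_bins=10):
    """Discretize a PDF over the trimmed support [lo, hi] into contiguous
    interval focal elements whose masses are the enclosed probabilities --
    one plausible way to cast probabilistic information into a BBA."""
    edges = np.linspace(lo, hi, n_bins + 1)
    cdf = dist.cdf(edges)
    masses = np.diff(cdf) / (cdf[-1] - cdf[0])   # renormalize over [lo, hi]
    return [((edges[i], edges[i + 1]), masses[i]) for i in range(n_bins)]

# A probabilistic variable, trimmed to +/- 3 sigma ...
bba_prob = bba_from_pdf(stats.norm(loc=1.0, scale=0.1), 0.7, 1.3)
# ... and a pure interval variable: a single focal element with mass one.
bba_interval = [((0.8, 1.2), 1.0)]
```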
The bounded result obtained from evidence theory, which tends to be less conservative than that of possibility theory and less restrictive than that of probability theory, can be viewed as the best estimate of the system uncertainty, because evidence theory propagates the given imprecise information through the limit-state function without any unnecessary assumptions.
In the sensitivity analysis of plausibility with respect to an expert opinion, the goal was to identify the expert opinion contributing most to the degree of plausibility. The result of the sensitivity analysis indicates the proposition on which the computational effort and the future collection of information should be focused. This analysis can easily be shifted from the sensitivity of plausibility to the sensitivity of uncertainty, which is defined as the difference between plausibility and belief. By decreasing the degree of uncertainty, we can be more confident in the reliability analysis result. The sensitivity with respect to a deterministic parameter of an engineering structural system was also developed, to improve the current design by efficiently decreasing the failure plausibility of a limit-state function. However, the plausibility function in evidence theory is discontinuous with respect to a deterministic parameter because of the discontinuity of the BBA structure of the uncertain parameters. The gradient of plausibility was therefore represented using the degree of plausibility decision (Pl_dec), which was introduced by applying the generalized insufficient reason principle to the plausibility function. Pl_dec can thus be used as a supplemental measure for deciding whether a system can be accepted.
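Conceptually, applying the generalized insufficient reason principle spreads each focal element's BBA uniformly over that element, so the resulting Pl_dec varies continuously as the limit state moves, unlike the stepwise Bel and Pl. A minimal one-dimensional sketch, with assumed helper names and illustrative numbers, follows.

```python
def pl_dec(focal_elements, failure_fraction):
    """Degree of plausibility decision: each focal element's mass is spread
    uniformly over the element (generalized insufficient reason principle),
    so the measure varies continuously with the failure region.
    `failure_fraction(box)` is an assumed helper returning the fraction of
    the box lying in the failure region."""
    return sum(mass * failure_fraction(box) for box, mass in focal_elements)

# 1-D illustration with failure region {x <= x_f}: Pl_dec is now a smooth
# function of x_f (and hence of design parameters that shift the limit state).
def frac_below(x_f):
    def fraction(box):
        lo, hi = box
        return min(max((x_f - lo) / (hi - lo), 0.0), 1.0)
    return fraction

value = pl_dec([((0.0, 1.0), 0.6), ((0.5, 2.0), 0.4)], frac_below(0.8))
print(value)   # 0.6*0.8 + 0.4*0.2 = 0.56
```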
For the high-efficiency design of engineering structures, mathematical optimization techniques are usually employed. However, without considering the uncertainty in the design parameters, operating conditions, and physical behavior, the optimized design might carry a catastrophically high risk. Hence, in conjunction with the cost-efficient algorithm and the sensitivity techniques, evidence theory was applied to reliability-based design optimization using an efficient sequential optimization strategy. To avoid the high computational cost of both the optimization (outer) loop and the UQ (inner) loop, an efficient sequential optimization strategy for multi-type uncertain variables, Trust Region Based Reliability Optimization (TRBRO), was proposed. The proposed method starts from the deterministic optimum design with mean-like values of the uncertain variables, similar to SORA and PMA+, and moves the deterministic optimum design back to a reliability-based optimum design with the trust-regional sequential scheme. The key idea is to define trust regions for both the deterministic design variables and the uncertain variables in each sequential optimization iteration, with UQ using evidence theory performed only over a limited trust region of the uncertain variables with a surrogate model. The resulting optimum design of a target structure exhibits robust optimal performance under the intrinsic uncertainties.
Future Directions
As mentioned earlier, evidence theory is not yet well known in the structural mechanics community. Recently, owing to its physically appealing theoretical strengths, many structural researchers have started to show interest in evidence theory and its applications. However, many issues are still under discussion. Some of the open issues in UQ and system reanalysis techniques are listed as follows:
First, methods for aggregating imprecise uncertainty information from multiple sources need to be investigated. The correlation of both the uncertain variables and the multiple sources must be considered for an unbiased reliability analysis.
Second, as addressed in this work, the BBA structure can express other types of information (e.g., possibilistic and probabilistic distributions) because of its flexibility. The conversion of information must be studied for an appropriate and reasonable translation between the different formats of belief assignment, because in some cases slightly different reasoning can make a large difference in the UQ result. It is important to incorporate pre-existing probabilistic or possibilistic information appropriately into the framework of evidence theory.
Third, in this work, only the parametric uncertainty is addressed. In practice, the
uncertainty from an imperfect or vague model form can be more influential and critical to
the uncertainty propagation in a system. There have been some attempts to express model form uncertainty from the probabilistic viewpoint; however, model form uncertainty is fundamentally epistemic, and it can be tackled properly within the framework of evidence theory.
Fourth, advanced computational schemes for evidence theory can be investigated for (sampling-based or analytical) reliability analysis, sensitivity analysis, and optimization, for better computational performance in engineering structural design. Even though many computational methods are presented in this work, more efficient methodologies can still be developed by using different approaches.
Fifth, the proposed Combined Iterative (CI) method needs further study to provide solid and efficient guidelines for the combination schemes between SMI and an iterative method.
10. References
1. Oberkampf, W. L., and Helton J. C., “Investigation of Evidence Theory for
Engineering Applications,” Non-Deterministic Approaches Forum, Denver, CO,
April 2002, AIAA-2002-1569.
2. Hoffman, F. O., and Hammonds, J. S., “Propagation of Uncertainty in Risk
Assessments: The Need to Distinguish Between Uncertainty Due to Lack of
Knowledge and Uncertainty Due to Variability,” Risk Analysis, Vol. 14, No. 5,
1994, pp. 707-712.
3. Helton, J. C., “Treatment of Uncertainty in Performance Assessments for Complex
Systems,” Risk Analysis, Vol. 14, No. 4, 1994, pp. 483-511.
4. Ferson, S., and Ginzburg, L. R., “Different Methods are Needed to Propagate
Ignorance and Variability,” Reliability Engineering and System Safety, Vol. 54,
1996, pp. 133-144.
5. Hunter, A., and Parsons, S., Applications of Uncertainty Formalisms, Springer,
New York, 1998, pp. 8-32.
6. Nikolaidis, E., Cudney, H. H., Chen, S., Haftka, R., and Rosca R., “Comparison of
Probabilistic and Possibility Theory-Based Methods for Design Against
Catastrophic Failure Under Uncertainty,” ASME International Conference on
Design Theory and Methodology, Las Vegas, NV, AIAA paper 99-1570, Sept.,
1999.
7. Metropolis, N., and Ulam, S. “The Monte Carlo Method,” Journal of the American
Statistical Association, Vol. 44, 1949, pp. 335-341.
8. Hasofer, A. M., and Lind, N. C., “Exact and Invariant Second-Moment Code
Format,” Journal of the Engineering Mechanics Division, ASCE, 100(EM), 1974,
pp. 111-121.
9. Breitung, K., “Asymptotic Approximations for Multinormal Integrals,” Journal of
the Engineering Mechanics Division, ASCE, Vol.110, No.3, 1984, pp. 357-366.
10. Tvedt, L., “Distribution of Quadratic Forms in Normal Space-Application to
Structural Reliability,” Journal of the Engineering Mechanics Division, ASCE,
Vol.116, 1990, pp. 1183-1197.
11. Tvedt, L., “Two Second Order Approximations to the Failure Probability,” Section
on Structural Reliability, A/S Veritas Research, Hovik, Norway, 1984.
12. Ghanem, G. R., and Spanos, P. D., Stochastic Finite Elements: A Spectral
Approach, Springer-Verlag, New York, 1991.
13. Zadeh, L., “Fuzzy Sets,” Information and Control, Vol. 8, 1965, pp. 338-353.
14. Wood, L. K., Otto, N. K., and Antonsson, K. E., “Engineering Design Calculations
with Fuzzy Parameters,” Fuzzy Sets and Systems, Vol. 53, 1992, pp. 1-20.
15. Penmetsa, R. C., and Grandhi, R. V., “Efficient Estimation of Structural Reliability
for Problems with Uncertain Intervals,” International Journal of Computers and
Structures, Vol. 80, 2002, pp. 1103-1112.
16. Shafer, G., A Mathematical Theory of Evidence, Princeton University Press, Princeton, NJ, 1976.
17. Dempster, A. P., Laird, N.M., and Rubin, D.B., “Maximum Likelihood from
Incomplete Data Via the EM Algorithm,” Journal of the Royal Statistical Society, Series B, Vol. 39, No. 1,
1977, pp. 1-38.
18. Bloch, I., and Maitre, H., “Data Fusion in 2D and 3D Image Processing,” X
Brazilian Symposium on Computer Graphics and Image Processing, Campos do
Jord, Brazil, Oct., 1997, pp. 127-136.
19. Chen, L., and Rao, S. S., “A Modified Dempster-Shafer Theory for Multicriteria
Optimization,” Engineering Optimization, Vol. 30, 1998, pp. 177-201.
20. Oberkampf, W. L., and Helton, J. C., “Mathematical Representation of
Uncertainty,” Non-Deterministic Approaches Forum, Seattle, WA, AIAA-2001-1645, 2001.
21. Xu, S., and Grandhi, R. V., “Multi-Point Approximation Development: Thermal
Structural Optimization Case Study,” International Journal for Numerical Methods
in Engineering, Vol. 48, 2000, pp. 1151-1164.
22. Wang, L. P., and Grandhi, R. V., “Improved Two-Point Function Approximations
for Design Optimization,” AIAA Journal, Vol. 33, No. 9, 1995, pp. 1720-1727.
23. Xu, S., and Grandhi, R. V., “Structural Optimization with Thermal and Mechanical
Constraints,” Journal of Aircraft, Vol. 36, No. 1, 1999, pp. 29-35.
24. Wang, L.P., and Grandhi, R.V., “Multi-Point Approximations: Comparisons Using
Structural Size, Configuration and Shape Design,” Structural Optimization, Vol. 12,
1996, pp. 177-185.
25. Wang, L. P., and Grandhi, R. V., “Effective Safety Index Calculation for Structural
Reliability Analysis,” International Journal of Computers and Structures, Vol. 52,
No. 1, 1994, pp. 103-111.
26. Wang, L. P., Grandhi, R. V., and Hopkins, D. A., “Structural Reliability
Optimization Using An Effective Safety Index Calculation Procedure,”
International Journal for Numerical Methods in Engineering, Vol. 38, 1995, pp.
1721-1738.
27. Zadeh, L. A., “The Concept of a Linguistic Variable and its Application to
Approximate Reasoning,” Journal of Information Science, Vol. 8, 1975, pp. 199-
249.
28. Hammersley, J. M., and Handscomb, D. C., Monte Carlo Methods, Methuen & Co., Ltd.,
London, 1964.
29. McKay, M. D., Beckman, R. J. and Conover, W. J., “A Comparison of Three
Methods for Selecting Values of Input Variables in the Analysis of Output from a
Computer Code,” Technometrics, Vol. 21, No. 2, 1979, pp. 239-245.
30. Wu, Y.-T., “Computational Method for Efficient Structural Reliability and
Reliability Sensitivity Analysis,” AIAA Journal, Vol. 32, 1994, pp. 1319-1336.
31. Wu, Y.-T., Millwater, H. R., and Cruse, T. A., “Advanced Probabilistic Structural
Analysis Methods for Implicit Performance Functions,” AIAA Journal, Vol. 28, No.
9, 1990, pp. 1663-1669.
32. Moore, R. E., Methods and Applications of Interval Analysis, SIAM Publ.,
Philadelphia, PA, 1979.
33. Alefeld, G., and Herzberger, J., Introduction to Interval Computations, Academic
Press, New York, 1983.
34. Dong, W.M., and Wong, F. S., “Fuzzy Weighted Averages and Implementation of
the Extension Principle,” Fuzzy Sets and Systems, Vol. 21, 1987, pp. 183-199.
35. Yager, R. R., “On the Dempster-Shafer Framework and New Combination Rules,”
Information Sciences, Vol. 41, 1987, pp. 93-137.
36. Sentz, K., and Ferson, S., “Combination of Evidence in Dempster-Shafer Theory,”
SAND2002-0835 Report, Sandia National Laboratories, April 2002.
37. Zadeh, L., “Review of Shafer’s A Mathematical Theory of Evidence,” Artificial
Intelligence Magazine, Vol. 5, 1984, pp. 81-83.
38. Inagaki, T., “Interdependence Between Safety-Control Policy and Multiple-Sensor
Schemes Via Dempster-Shafer Theory,” IEEE Transactions on Reliability, Vol. 40,
No. 2, 1991, pp. 182-188.
39. Dong, W. M., and Shah, H. C., “Vertex Method for Computing Functions of Fuzzy
Variable,” Fuzzy Sets and Systems, Vol. 24, 1987, pp. 65-78.
40. Walpole, R. E., Probability and Statistics for Engineers and Scientists, Prentice Hall, Upper Saddle River, NJ, 1998.
41. Arora, J. S., Introduction to Optimum Design, New York, McGraw-Hill, 1989.
42. ASTROS Theoretical Manual for Version 20, Universal Analytics, Inc., Torrance,
CA, 1997.
43. Arora J. S., “Survey of Structural Reanalysis Techniques,” Journal of the Structural
Division, Vol. 102, 1976, pp. 783-802.
44. Barthelemy, J.-F. M., and Haftka, R. T., “Approximation Concepts for Optimum
Design—a Review,” Structural Optimization, Vol. 5, 1993, pp. 129-144.
45. Haftka, R. T., Nachlas, J. A., Watson, L.T., Rizzo, T., and Desai, R., “Two-point
Constraint Approximation in Structural Optimization,” Computer Methods in
Applied Mechanics and Engineering, Vol. 60, No.3, 1987, pp. 289-301.
46. Starnes, J. H., and Haftka, R. T., “Preliminary Design of Composite Wings for
Buckling, Stress and Displacement Constraints,” Journal of Aircraft, Vol. 16, 1979, pp. 564-570.
47. Box, G. E. P., and Draper, N. R., Evolutionary Operation: A Statistical Method for
Process Improvement, John Wiley & Sons, New York, 1969.
48. Sacks, J., Welch, W. J., Mitchell, T. J., and Wynn, H. P., “Design and Analysis of
Computer Experiments,” Statistical Science, Vol. 4, No. 4, 1989, pp. 409-435.
49. Toropov, V.V., Filatov, A.A., and Polynkin, A.A., “Multiparameter Structural
Optimization Using FEM and Multipoint Explicit Approximations,” Structural
Optimization, Vol. 6, No. 1, 1993, pp. 7-14.
50. Noor, A.K., and Lowder, H.E., “Approximate Techniques of Structural
Reanalysis,” Computers and Structures, Vol. 4, 1974, pp. 801-812.
51. Van der Vorst, H. A., “Bi-CGSTAB: A Fast and Smoothly Converging Variant of
Bi-CG for the Solution of Nonsymmetric Linear Systems,” SIAM Journal on
Scientific and Statistical Computing, Vol. 13, 1992, pp. 631-644.
52. Saad, Y., and Schultz, M. H., “GMRES: a Generalized Minimal Residual
Algorithm for Solving Nonsymmetric Linear Systems.” SIAM Journal on Scientific
and Statistical Computing, Vol. 7, 1986, pp. 856-869.
53. Saad, Y., and Van der Vorst, H., “Iterative Solution of Linear Systems in the 20th
Century,” Journal of Computational and Applied Mathematics, Vol. 123, 2000, pp.
1-33.
54. Sherman, J., and Morrison, W. J., “Adjustment of an Inverse Matrix Corresponding
to a Change in One Element of a Given Matrix,” Annals of Mathematical Statistics,
Vol. 21, 1950, pp. 124-127.
55. Woodbury, M., “Inverting Modified Matrices,” Memorandum Report No. 42,
Statistical Research Group, Princeton University, Princeton, NJ, 1950.
56. Kirsch, U., “Efficient-Accurate Reanalysis for Structural Optimization,” AIAA
Journal, Vol. 37, No. 12, 1999, pp. 1663-1669.
57. Akgün, M. A., Garcelon, J. H., and Haftka, R. T., “Fast Exact Linear and Non-linear
Structural Reanalysis and the Sherman-Morrison-Woodbury Formulas,”
International Journal for Numerical Methods in Engineering, Vol. 50, 2001, pp.
1587-1606.
58. Kavlie, D., Graham, H., and Powell, G. H., “Efficient Reanalysis of Modified
Structures,” Journal of the Structural Division, Vol. 97 (ST1), 1971, pp. 377-392.
59. Ohsaki, M., “Random Search Method Based on Exact Reanalysis for Topology
Optimization of Trusses with Discrete Cross-Sectional Areas,” International
Journal of Computers and Structures, Vol. 79, 2001, pp. 673-679.
60. Fadel, G. M., Riley, M. F., and Barthelemy, J.-F. M., “Two Point Exponential
Approximation in Structural Optimization,” Structural Optimization, Vol. 2, 1990, pp. 117-124.
61. Chickermane, H., and Gea, H. C., “Structural Optimization Using a New Local
Approximation Method,” International Journal for Numerical Methods in
Engineering, Vol. 39, 1996, pp. 829-846.
62. Pozrikidis, C., Numerical Computation in Science and Engineering, Oxford
University Press, Inc., New York, 1998.
63. Reddy, J. N., Mechanics of Laminated Composite Plates: Theory and Analysis.
CRC Press, Boca Raton, 1997.
64. Cohen, P. R., Heuristic Reasoning about Uncertainty: An Artificial Intelligence
Approach, Morgan Kaufmann, London, 1985.
65. Nguyen, H. T., and Walker, E. A., A First Course in Fuzzy Logic, CRC Press, Boca
Raton, 1997.
66. Tonon, F., Bernardini, A., and Mammino, A., “Determination of Parameters Range
in Rock Engineering by Means of Random Set Theory,” Reliability Engineering
and System Safety, Vol. 70, 2000, pp. 241-261.
67. GENESIS User Manual., Vanderplaats Research & Development, Colorado, 2000.
68. Haftka, R. T., and Gürdal, Z., Elements of Structural Optimization, 3rd ed., Kluwer
Academic Publishers, Dordrecht, Netherlands, 1992.
69. Dubois, D., and Prade, H., “Random Sets and Fuzzy Interval Analysis,” Fuzzy Sets
and Systems, Vol. 38, 1990, pp. 308-312.
70. Liou, T. S., and Wang, M. J., “Fuzzy Weighted Average: an Improved Algorithm,”
Fuzzy Sets and Systems, Vol. 49, 1992, pp. 307-315.
71. Guh, Y. Y., Hon, C. C., Wang, K. M., and Lee, E. S., “Fuzzy Weighted Average: a
Max-min Paired Elimination Method,” Computers and Mathematics with
Applications, Vol. 32, 1996, pp. 115-123.
72. Antonsson, E. K., and Otto, N. K., Improving Engineering Design with Fuzzy Sets
In: Dubois, D., Prade, H., Yager, R. R., editors. Fuzzy Information Engineering: A
Guided Tour of Applications, John Wiley & Sons, New York, 1997.
73. Savage, L. J., The Foundations of Statistics. Dover Pub, New York, 1972.
74. Tonon, F., “Using Random Set Theory to Propagate Epistemic Uncertainty Through
a Mechanical System,” Reliability Engineering and System Safety, Vol. 85, 2004,
pp. 169-181.
75. Wu, Y.-T., Shin, Y., Sues, R., and Cesare, M., “Safety Factor Based Approach for
Probability-based Design Optimization,” 42nd AIAA/ASME/ASCE/AHS/ASC
Structures, Structural Dynamics and Materials Conference, Seattle, WA, AIAA
paper 2001-1645, April, 2001.
76. Du, X., and Chen, W., “Sequential Optimization and Reliability Assessment
Method for Efficient Probabilistic Design,” ASME Design Engineering Technical
Conferences, DETC2002/DAC-34127, Montreal, Canada, 2002.
77. Youn, B., Choi, K., and Du, L., “Enriched Performance Measure Approach
(PMA+) and its Numerical Method for Reliability-Based Design Optimization,”
10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference,
Albany, NY, AIAA-2004-4401, 2004.
78. MATLAB Optimization Toolbox User’s Guide, The MathWorks, Inc., Natick, MA,
1997.
79. Moré, J. J., and Sorensen, D. C., “Computing a Trust Region Step,” SIAM Journal
on Scientific and Statistical Computing, Vol. 4, 1983, pp. 553-572.
80. Eldred, M. S., Giunta, A. A., Wojtkiewicz, S. F., Jr., and Trucano, T. G.,
“Formulations for Surrogate-Based Optimization Under Uncertainty,” 9th
AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Atlanta,
GA, AIAA-2002-5585, 2002.