Source: shodhganga.inflibnet.ac.in/.../27321/6/06_chapter-1.pdf · 2018-07-02

CHAPTER - 1

INTRODUCTION AND BASIC CONCEPTS

1.1 INTRODUCTION

The mathematical theory of reliability has grown out of the demands of modern technology, and particularly out of the experience with complex military systems in World War II. In the early 1950s, certain areas of reliability, especially life testing and electronic and missile reliability problems, began to receive a great deal of attention both from mathematical statisticians and from engineers in the military-industrial complex. Evidence of the intimate relationship between reliability and statistics is found in the significant number of papers written on statistical methods in reliability. In December 1950, the Air Force formed an ad hoc group on the reliability of electronic equipment to study the whole reliability situation and to recommend measures that would increase equipment reliability and reduce maintenance. By late 1952, the Department of Defense had established the Advisory Group on Reliability of Electronic Equipment (AGREE), which published its first report on reliability in June 1957. This report included minimum acceptability limits, requirements for reliability tests, the effect of storage on reliability, and so on.

The overall scientific discipline that deals with the general methods and procedures to be followed during the planning, preparation, acceptance, transportation and use of manufactured articles so as to ensure their maximum effectiveness in use (or, when examining a treatment, to ensure that it is effective enough to produce maximum lifetime in a certain disease), and which develops general methods of evaluating the quality of systems from the known qualities of their component parts or from strength and stress variables, has received the name reliability/survival theory. Obviously, reliability is an important consideration in the planning, design and operation of systems.

Reliability theory is concerned with the random occurrence of undesirable events, or failures, during the life of a physical or biological system. Reliability is an inherent attribute of a system, just as the system's capacity or power rating is. The concept of reliability has been known for a number of years but has gained greater significance and importance during the past decade, particularly under the impact of automation and the development of complex missile and space programmes. With increasing automation and the use of highly complex systems, the importance of obtaining highly reliable systems has come to be widely recognized. From a purely economic point of view, high reliability is desirable to reduce overall costs; for example, the yearly cost of maintaining some military systems in an operable state is as high as ten times the original cost of the system. The failure of a component most often results in the breakdown of the system as a whole: a space satellite may be rendered completely useless if a single switch fails to operate. Safety is an equally important consideration; a leaky brake cylinder could result in personal injury and undue expense. Unreliability also causes schedule delays, inconvenience, customer dissatisfaction and perhaps even the loss of national security. The need for reliability has been felt by both government and industry. For example, the Department of Defense and NASA (USA) impose reliability requirements: MIL-STD-785 (Requirements for Reliability Program for Systems and Equipment) and NASA NPC 250-1 (Reliability Program Provisions for Space System Contractors) set out in detail the requirements of a reliability programme for achieving reliable products.

Everyone has experienced the frustration of waiting in line to obtain service; it usually seems like an unnecessary waste of time. In our private lives we have the option of seeking service elsewhere or going without the service. Such defections have direct economic consequences for the organization providing the service: when a customer leaves a waiting line, an opportunity cost is incurred, for the opportunity to make a profit by providing the service is lost. An important aspect of system design is to balance this cost against the expense of additional capacity. The study of waiting lines, called queuing theory, is one of the oldest and most widely used operations research techniques. The first recognized effort to analyze queues was made by a Danish engineer, A.K. Erlang, in his attempts to eliminate bottlenecks created by telephone calls on switching circuits. A queuing situation is basically characterized by a flow of customers arriving at one or more service facilities. On arrival at the facility the customer may be serviced immediately or, if willing, may have to wait until the facility becomes available. The service time allotted to each customer may be fixed or random, depending on the type of service. Situations of this type exist in everyday life. A typical example occurs in a barbershop, where the arriving individuals are the customers and the barbers are the servers. Another example is letters arriving at a typist's desk: the letters are the customers and the typist is the server.

In general, a queue is formed when either the units requiring service (commonly referred to as customers) wait for service or the service facilities stand idle and wait for customers. Some customers wait when the total number of customers requiring service exceeds the number of service facilities; some service facilities stand idle when the total number of service facilities exceeds the number of customers requiring service. In a few situations waiting lines cause significant congestion and a corresponding increase in operating costs: ships wait to be unloaded at docks, projects await attention by the engineering staff, aircraft wait to land at an airport and breakdowns await repair by maintenance crews. These examples show that the term "customer" may be interpreted in a variety of ways. Also, a service may be performed by moving either the server to the customer or the customer to the server. Such service facilities are difficult to schedule optimally because of the element of randomness in the arrival and service patterns. A mathematical theory has thus evolved that provides a means of analyzing such situations. This is queuing (or waiting line) theory, which is based on describing the arrival and/or departure (service) patterns by appropriate probability distributions. The operating characteristics of a queuing situation are then derived using probability theory; examples of such characteristics are the expected waiting time until a customer's service is completed, or the percentage of idle time per server. The availability of such measures enables analysts to make inferences concerning the operation of the system. The parameters of the system (such as the service rate) may then be adjusted to ensure more effective utilization from the viewpoints of both customer and server.

Queuing theory analysis involves the study of a system's behaviour over time. A system is said to be in a transient state when its operating characteristics (behaviour) vary with time. This usually occurs at the early stages of the system's operation, where its behaviour still depends on the initial conditions. However, since one is mostly interested in the long-run behaviour, most attention in queuing theory analysis has been directed to steady-state results. A steady-state condition is said to prevail when the behaviour of the system becomes independent of time.
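As a concrete sketch of such steady-state operating characteristics, the single-server Markovian (M/M/1) queue is a convenient model; it is our illustrative assumption here, not a model singled out by the text. With arrival rate lam and service rate mu, the long-run measures below follow from the standard steady-state results:

```python
# Steady-state operating characteristics of a single-server (M/M/1) queue --
# an illustrative model assumed here; the chapter discusses queues in general.
# lam = arrival rate, mu = service rate; steady state requires rho = lam/mu < 1.

def mm1_characteristics(lam: float, mu: float) -> dict:
    """Return standard long-run measures of an M/M/1 queue."""
    if lam >= mu:
        raise ValueError("steady state requires lam < mu")
    rho = lam / mu                  # server utilization
    L = rho / (1 - rho)             # expected number in system
    Lq = rho ** 2 / (1 - rho)       # expected number waiting in queue
    W = 1 / (mu - lam)              # expected time in system
    Wq = rho / (mu - lam)           # expected waiting time before service
    return {"rho": rho, "L": L, "Lq": Lq, "W": W, "Wq": Wq,
            "idle_fraction": 1 - rho}

m = mm1_characteristics(lam=4.0, mu=5.0)
```

The returned measures satisfy Little's law (L = lam * W), which serves as an internal consistency check on the formulas.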

1.2 BASIC CONCEPTS IN RELIABILITY THEORY

Reliability engineering is a branch of science, and every branch of science is studied systematically: first of all, its basic concepts must be understood. The following subsections give the definitions and mathematical expressions of some important concepts which one needs to understand before entering reliability theory.

System: A system is defined as an arbitrary device performing an activity. By nature, systems are classified as follows:

(i) Man-Made or Engineering Systems - As a result of the advancement of science, man today has to his credit many sophisticated systems fully designed by his hands and brain; the computer system, the electric power supply system and the television system are some examples of man-made systems.

(ii) Natural or God-Made Systems - Besides the man-made systems, the universe has other systems whose existence is independent of human hands, and these are called natural or God-made systems. The human body, the solar energy system and the weather system are some examples. Generally, when we perform life-testing experiments with man-made systems we call it 'Reliability Analysis', while when we deal with God-made systems we name it 'Survival Analysis'; hence reliability and survival are interchangeable terms. The reliability characteristics have been defined in many ways by different authors. In the present study, the following definitions of the various reliability characteristics have been used.

1.2.1 Reliability

According to Bazovsky (1961), reliability is a yardstick of the capability of an equipment to operate without failure when put into service. A more rigorous definition of reliability is as follows:

The 'reliability' of a component (or a system) is the probability that the component performs its intended function adequately for a specified period of time, without a major breakdown, under the stated operating conditions or environment. Mathematically, if T is the time until failure of the unit occurs, then the probability that it will not fail in a given environment before time t is

R(t) = P(T > t) = 1 - F(t)    ...(1)

where F(t) is the cumulative distribution function (c.d.f.) of T, called the unreliability of the system, so that F(t) + R(t) = 1. Thus reliability is a function of time and depends on environmental conditions, which may or may not vary with time.

Noticeable features of the reliability function are:

(i) 0 \le R(t) \le 1

(ii) \lim_{t \to 0} R(t) = 1 and \lim_{t \to \infty} R(t) = 0

(iii) R(t) is, in general, a decreasing function of time.
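A minimal numerical sketch of these properties, assuming an exponential lifetime with rate lam (our choice of example, not a distribution singled out by the text):

```python
import math

# Assumed exponential lifetime with rate lam: R(t) = exp(-lam*t),
# F(t) = 1 - R(t).  The three listed properties of a reliability
# function can be checked pointwise.

lam = 0.5

def R(t: float) -> float:
    return math.exp(-lam * t)

def F(t: float) -> float:
    return 1.0 - R(t)            # unreliability: F(t) + R(t) = 1

# (i) 0 <= R(t) <= 1; (ii) R(0) = 1 and R(t) -> 0 as t grows;
# (iii) R is decreasing in t.
ts = [0.0, 1.0, 2.0, 5.0, 50.0]
values = [R(t) for t in ts]
```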

1.2.2 Failure Rate or Hazard Rate Function

Experience has shown that a very good way to present failure data is to compute and plot either the failure density function or the hazard rate as a function of time. The failure density is a measure of the overall speed at which failures are occurring, whereas the hazard rate is a measure of the instantaneous speed of failure. As time passes, a unit wears out and begins to deteriorate. There are several causes of component failure, such as:

(a) Careless planning, substandard equipment and raw materials, lack of or poor quality control, etc.

(b) Human errors

(c) Random or chance causes: random failures occur at quite unpredictable intervals and cannot be eliminated by steps taken at the planning, production or inspection stage.

(d) Poor manufacturing techniques

Since the item is likely to fail at any time, it is customary to assume that the life of the item is a random variable with a distribution function F(t), where F(t) is the probability that the item fails before time t. A failure is the partial or total loss of, or change in, the properties of a device (or system) in such a way that its functioning is seriously affected or completely stopped. The hazard function of a system is a useful concept in life-testing experiments: it gives the instantaneous failure rate of a unit at time t, given that the unit has survived up to time t. The hazard function is usually denoted by h(t). In economics, the reciprocal of this function is called "Mill's ratio"; in demography it is called the 'age-specific death rate'; and in actuarial and life-contingency problems it is known as the force of mortality. Mathematically, the expression is developed as follows:

h(t) = \lim_{dt \to 0} \frac{P[\text{a device of age } t \text{ fails in } (t, t+dt) \mid \text{it has survived up to } t]}{dt}

= \lim_{dt \to 0} \frac{P[t < T \le t+dt \mid T > t]}{dt}

= \lim_{dt \to 0} \frac{P[T \le t+dt] - P[T \le t]}{P[T > t] \, dt}

= \lim_{dt \to 0} \frac{F(t+dt) - F(t)}{[1 - F(t)] \, dt}

= \lim_{dt \to 0} \frac{R(t) - R(t+dt)}{R(t) \, dt}

= -\frac{1}{R(t)} \frac{d}{dt} R(t)

= \frac{f(t)}{R(t)}    ...(2)

On integrating the expression in (2), one gets

\int_0^t h(u) \, du = -\log[1 - F(t)] = -\log R(t)

so that

R(t) = \exp\left(-\int_0^t h(u) \, du\right)    ...(3)

It follows that the probability of failure-free operation during the interval (t_1, t_2) is given by

R(t_1, t_2) = \exp\left(-\int_{t_1}^{t_2} h(t) \, dt\right)

Let us denote \int_0^t h(u) \, du by H(t), which represents the cumulative hazard function.

From (2) and (3), it follows that

f(t) = h(t) \exp\left(-\int_0^t h(u) \, du\right)    ...(4)

From relation (4) it is clear that the hazard function is helpful in deciding the form of the p.d.f. of the failure-time distribution. In general, the failure phenomenon of a system can be represented by

(i) a monotonically increasing hazard function, or increasing failure rate (IFR),

(ii) a monotonically decreasing hazard function, or decreasing failure rate (DFR),

(iii) a constant hazard function, or constant failure rate (CFR),

(iv) a bath-tub-shaped or U-shaped hazard function.
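The Weibull family is a convenient way to sketch cases (i)-(iii); Weibull is our choice of example, as the text lists the shapes only abstractly. With scale eta and shape beta, the hazard h(t) = (beta/eta)(t/eta)^{beta-1} is increasing for beta > 1, constant for beta = 1 and decreasing for beta < 1:

```python
# Weibull hazard (our illustrative example): h(t) = (beta/eta)*(t/eta)**(beta-1).
# beta > 1 gives IFR, beta = 1 gives CFR, beta < 1 gives DFR.

def weibull_hazard(t: float, beta: float, eta: float = 1.0) -> float:
    return (beta / eta) * (t / eta) ** (beta - 1)

t1, t2 = 0.5, 2.0
ifr = [weibull_hazard(t, beta=2.0) for t in (t1, t2)]   # increasing
cfr = [weibull_hazard(t, beta=1.0) for t in (t1, t2)]   # constant
dfr = [weibull_hazard(t, beta=0.5) for t in (t1, t2)]   # decreasing
```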

1.2.3 Mean Time To System Failure (MTSF)

Mean time to system failure, or mean life of the system, is the expected value of the failure-time distribution, i.e.

MTSF = E(T) = \int_0^\infty t \, f(t) \, dt

But f(t) = \frac{d}{dt} F(t) = -\frac{d}{dt} R(t)

Hence

MTSF = -\int_0^\infty t \, dR(t) = \left[-t R(t)\right]_0^\infty + \int_0^\infty R(t) \, dt

MTSF = \int_0^\infty R(t) \, dt    ...(5)
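Relation (5) can be checked numerically. Assuming an exponential lifetime with rate lam (our illustrative choice), the mean life is 1/lam, and a simple trapezoidal sum over R(t) should reproduce it:

```python
import math

# Numerical check of MTSF = integral of R(t) dt, for an assumed
# exponential lifetime with rate lam (so the exact MTSF is 1/lam).

lam = 0.25                       # exact MTSF = 1/lam = 4.0

def R(t: float) -> float:
    return math.exp(-lam * t)

def integrate(f, a: float, b: float, n: int = 200_000) -> float:
    """Composite trapezoidal rule on [a, b] with n panels."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

mtsf = integrate(R, 0.0, 100.0)  # tail beyond t = 100 is negligible
```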

1.3 SOME STATIC SYSTEM CONFIGURATIONS

A system is a combination of units or components performing an activity, and to find the system reliability we must know the reliabilities of its units/components and their network sufficiently well. Suppose that a system with lifetime T consists of n different units/components C_1, C_2, ..., C_n, with lifetime T_i for the i-th unit/component. At time t the system reliability is

R(t) = P(T > t)

and the reliability of the i-th unit/component is

R_i(t) = P[T_i > t]

The system configurations generally considered are as follows:


1.3.1 Series Configuration

The simplest and most common structure in reliability analysis is the series configuration. In this case the functional operation of the system depends on the proper operation of all the system units/components. Examples of series configuration are:

(i) The aircraft electronics system, consisting mainly of a sensor subsystem, a guidance subsystem, a computer subsystem and the fire-control subsystem: this system can operate successfully only if all of these operate simultaneously.

(ii) Deepawali or Christmas glow bulbs, where if one bulb fails, the entire system fails.

The reliability of an n-unit/component series system is given by

R(t) = P[T_1 > t, T_2 > t, ..., T_n > t] = \prod_{i=1}^{n} P[T_i > t] = \prod_{i=1}^{n} R_i(t)    ...(6)

If h_i(t) is the instantaneous failure rate of the i-th unit and h(t) is the instantaneous failure rate of the system, then

h(t) = \sum_{i=1}^{n} h_i(t)    ...(7)

Thus, in a series configuration the unit/component reliabilities are multiplied to obtain the system reliability, and the unit/component hazard rates are added to obtain the system hazard rate. An n-component series configuration is shown in Fig. 1.


FIG. 1
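Relations (6) and (7) can be sketched numerically for an assumed 3-component series system with constant component failure rates (i.e. exponential lifetimes, our simplifying assumption): reliabilities multiply, hazard rates add.

```python
import math

# Assumed series system of 3 components with constant failure rates h_i
# (exponential lifetimes).  (6): system reliability is the product of
# component reliabilities.  (7): the system hazard rate is sum(rates), so
# the series system is itself exponential with that rate.

rates = [0.1, 0.2, 0.3]          # h_i for components C1..C3
t = 2.0

component_R = [math.exp(-h * t) for h in rates]

system_R = 1.0
for r in component_R:            # relation (6): multiply reliabilities
    system_R *= r

system_R_via_hazard = math.exp(-sum(rates) * t)   # relation (7)
```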

1.3.2 Parallel Configuration

In many systems, several signal paths perform the same operation. If the system configuration is such that the functioning of at least one path is sufficient to do the job, the system can be represented by a parallel model. In this configuration all the units/components of the system are arranged in parallel, and the system fails only when all of its units/components fail.

The reliability of the n-unit/component parallel system is given by

R(t) = 1 - \prod_{i=1}^{n} (1 - R_i(t))    ...(8)

An n-component parallel system configuration is shown in Fig. 2.

FIG. 2

1.3.3 k-out-of-n Configuration

Another practical system is one in which more than one of its parallel components is required to meet the demand. For instance, two of the four

generators in a generating station may be necessary to supply the required power to the customers, the other two being added to increase the reliability of supply.

Let us consider such a system, which functions if at least k (1 \le k \le n) of its n units/components function. For identical and statistically independent units/components the system reliability is given by

R(t) = \sum_{i=k}^{n} \binom{n}{i} R_c(t)^i \, [1 - R_c(t)]^{n-i}    ...(9)

where R_c(t) is the reliability of each component at time t. In particular, the block diagram for the 2-out-of-3 configuration, having three identical components C_1, C_2, C_3, can be represented as follows:

FIG. 3

The series and parallel configurations are particular cases of the k-out-of-n configuration, with k = n and k = 1 respectively.
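Relation (9), together with the closing remark about special cases, can be sketched directly (the component reliability Rc below is an arbitrary illustrative value):

```python
from math import comb

# Relation (9) for identical, independent components, plus a check that
# k = n reduces to the series formula Rc**n and k = 1 to the parallel
# formula 1 - (1 - Rc)**n.

def k_out_of_n_reliability(k: int, n: int, Rc: float) -> float:
    return sum(comb(n, i) * Rc**i * (1 - Rc)**(n - i) for i in range(k, n + 1))

Rc, n = 0.9, 4
series = k_out_of_n_reliability(n, n, Rc)        # k = n: series case
parallel = k_out_of_n_reliability(1, n, Rc)      # k = 1: parallel case
two_of_four = k_out_of_n_reliability(2, n, Rc)   # e.g. the generator example
```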

1.4 CENSORING IN LIFE TESTING EXPERIMENTS

Life-testing experiments are usually time-consuming and expensive. There are several situations where it is neither possible nor desirable to use complete sampling; we therefore censor the sample. Obviously, in a censoring situation only a portion of the sample of individuals is studied. Censoring is broadly classified in two ways.

1.4.1 Time Censoring or Type I Censoring

In this type of censoring plan, the amount of time required to obtain information from a complete sample is reduced: we may put the complete sample on test and terminate the test at a prefixed time. This type of censoring is called Time Censoring or Type I Censoring. It frequently arises in medical and agricultural research. Mathematically, let n items be put on test and let the test be terminated at time t_0. Let T be the random variable denoting the lifetime of an item, with

F(t_0) = P[T \le t_0]

Let m be the number of items that fail before time t_0. Then

m \sim B(n, F(t_0)),   m = 0, 1, 2, ..., n

Note that from a Type I censored sample we get the following information:

(i) Out of n items, m items failed before time t_0, with lifetimes x_{(1)} \le x_{(2)} \le ... \le x_{(m)}.

(ii) The remaining (n - m) items did not fail before time t_0. The likelihood of the sample (m, x_{(1)}, x_{(2)}, ..., x_{(m)}) is given by

L(x_{(1)}, x_{(2)}, ..., x_{(m)} \mid F(t_0)) =
  [1 - F(t_0)]^n,   for m = 0;
  \frac{n!}{(n-m)!} \prod_{i=1}^{m} f(x_{(i)}) \, [1 - F(t_0)]^{n-m},   for m = 1, 2, ..., n

where f(.) is the p.d.f. of T.
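A small simulation sketch of Type I censoring, assuming exponential lifetimes with rate lam (our illustrative model): the number of failures observed by time t0 is Binomial(n, F(t0)), so the observed failure fraction should be close to F(t0) = 1 - exp(-lam*t0).

```python
import math
import random

# Type I censoring simulation: n items on test, test stopped at fixed
# time t0.  Lifetimes are assumed exponential with rate lam.

random.seed(42)
lam, n, t0 = 1.0, 20_000, 1.5

lifetimes = [random.expovariate(lam) for _ in range(n)]
observed = [x for x in lifetimes if x <= t0]     # failures seen before t0
m = len(observed)                                # (n - m) items censored

F_t0 = 1.0 - math.exp(-lam * t0)
fraction_failed = m / n                          # should approximate F(t0)
```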

1.4.2 Failure Time or Type II Censoring

If we put n individuals on test and terminate the experiment at the failure of the first r individuals, where r (< n) is a pre-assigned number of failures, then the resulting sample is called a Failure-Censored Sample or Type II Censored Sample.

In this type of censoring the number of observed failures r is decided before the data are collected, but the time at which the r-th (fixed) failure occurs is a random variable. It is mostly used when dealing with costly, sophisticated items such as vacuum tubes, X-ray machines, colour T.V. picture tubes, etc.

Suppose the failure times consist of the first r smallest lifetimes x_{(1)} \le x_{(2)} \le ... \le x_{(r)} out of a random sample of n lifetimes x_1, x_2, ..., x_n, which are i.i.d. random variables having p.d.f. f(t) and reliability function R(t). Then it follows from the general results on order statistics that the likelihood function of the sample x_{(1)}, x_{(2)}, ..., x_{(r)} is given by

L(x_{(1)}, x_{(2)}, ..., x_{(r)}) = \frac{n!}{(n-r)!} \prod_{i=1}^{r} f(x_{(i)}) \, [R(x_{(r)})]^{n-r}
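For an assumed exponential lifetime with mean theta, maximizing this likelihood gives the well-known estimate theta_hat = [sum of the r observed lifetimes + (n - r) x_{(r)}] / r, i.e. total time on test divided by the number of failures. A simulation sketch (exponential model and all numbers below are our illustrative assumptions):

```python
import random

# Type II censoring: observe only the first r of n ordered failures.
# For exponential lifetimes with mean theta, the MLE is total time on
# test divided by r.

random.seed(7)
theta, n, r = 10.0, 5_000, 2_000

x = sorted(random.expovariate(1.0 / theta) for _ in range(n))
observed = x[:r]                     # first r order statistics
total_time_on_test = sum(observed) + (n - r) * observed[-1]
theta_hat = total_time_on_test / r   # should be close to theta
```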


1.5 BAYESIAN APPROACH AT A GLANCE

The Bayesian framework has several interesting features that make it more attractive to applied statisticians than its frequentist counterpart. It combines prior information with the information contained in the data to form the posterior distribution, which is the basis of Bayesian inference. It possesses a well-developed and straightforward procedure for facing the problem of optimal action in a state of uncertainty. Applications of Bayesian concepts and methods abound in econometrics, sociology, engineering, reliability estimation, operations research and quality control. It addresses the question of how the model underlying the data may be revised in the light of new information and experience. Bayesian analysis is an essentially self-contained paradigm for statistics, and an excellent alternative to the use of large-sample asymptotic statistical procedures; Bayesian procedures are almost always equivalent to classical large-sample procedures when the sample size is very large. The foundation stone of this technique is Bayes' theorem and conditional probability. The theorem was presented by the Reverend Thomas Bayes, an English minister who lived in the 18th century; Laplace modified Bayes' basic result, and the modified version is what is now known as Bayes' theorem.

1.5.1 Bayes Theorem

Consider an unobservable vector \theta and an observable vector y, of lengths k and n respectively, with joint density p_1(\theta, y). From standard probability theory we have

p_1(\theta, y) = p(\theta \mid y) \, p(y)    ...(10)

p_1(\theta, y) = p(y \mid \theta) \, p(\theta)    ...(11)

From (10) and (11), we get

p(\theta \mid y) = \frac{p(y \mid \theta) \, p(\theta)}{p(y)}    ...(12)

Note that

p(y) = \int p(\theta, y) \, d\theta = \int p(y \mid \theta) \, p(\theta) \, d\theta = E_\theta[p(y \mid \theta)]    ...(13)

where E_\theta indicates averaging with respect to the distribution of \theta [e.g. Box and Tiao (1973); Gelman, Carlin, Stern and Rubin (1995); Lee (1997)]. It is clear that p(y) is not a function of \theta; as a result, (12) can be rewritten as

p(\theta \mid y) \propto p(y \mid \theta) \, p(\theta)    ...(14)

This is the well-known Bayes theorem. In Bayesian terminology, p(\theta) is the prior density of \theta, which tells us what is known about \theta without knowledge of the data. The density p(y \mid \theta) is the likelihood function of \theta, which represents the contribution of y (the data) to knowledge about \theta [e.g. Berger (1985); Zellner (1971)]. Finally, p(\theta \mid y) is the posterior density, which tells us what is known about \theta given knowledge of the data y.

A good feature of Bayesian analysis is that it takes explicit account of prior information and gives a satisfactory way of explicitly organizing assumptions regarding prior knowledge or ignorance, leading to posterior inferences. By treating the parameters as random variables, it also creates the possibility of minimizing expected loss.

Suppose that n items are placed on test. It is assumed that their recorded lifetimes form a random sample, say x_1, x_2, ..., x_n, which follows a distribution with p.d.f. f(x, \theta). Here we assume \theta to be a random variable. Let g(\theta) be the p.d.f. of \theta, which is known or otherwise estimated by statistical techniques. Thus the failure-time p.d.f. f(x, \theta) can be regarded as a conditional p.d.f. of x given \theta, and g(\theta) is known as the prior p.d.f. Therefore, the joint p.d.f. of (x_1, x_2, ..., x_n; \theta) is

H(x_1, x_2, ..., x_n; \theta) = \prod_{i=1}^{n} f(x_i, \theta) \, g(\theta) = L(x_1, x_2, ..., x_n; \theta) \, g(\theta)    ...(15)

The marginal p.d.f. of (x_1, x_2, ..., x_n) is given by

p(x_1, x_2, ..., x_n) = \int H(x_1, x_2, ..., x_n; \theta) \, d\theta    ...(16)

Therefore

g(\theta \mid x_1, x_2, ..., x_n) = \frac{H(x_1, x_2, ..., x_n; \theta)}{p(x_1, x_2, ..., x_n)} = \frac{L(x_1, x_2, ..., x_n; \theta) \, g(\theta)}{\int L(x_1, x_2, ..., x_n; \theta) \, g(\theta) \, d\theta}    ...(17)

The variation in \theta observed prior to the data (x_1, x_2, ..., x_n) is represented by g(\theta), known as the prior p.d.f. The conditional distribution of \theta given (x_1, x_2, ..., x_n), obtained posterior to the experiment, is called the posterior distribution and is denoted by g(\theta \mid x_1, x_2, ..., x_n). Just as the prior distribution reflects beliefs about \theta prior to experimentation, so g(\theta \mid x_1, x_2, ..., x_n) reflects updated beliefs about the parameter after observing the sample x_1, x_2, ..., x_n. In other words, the uncertainty about

the parameter prior to the experiment is represented by the prior p.d.f. g(\theta), and the same after the experiment by the posterior p.d.f. g(\theta \mid x_1, x_2, ..., x_n).

This process is a straightforward application of Bayes' theorem. After obtaining the posterior distributions of the parameters involved in the parent population, any statistical inference about these parameters, such as estimation or testing of hypotheses, may be drawn with the help of these distributions. When the prior distribution of \theta is discrete, the integral signs in (16) and (17) are replaced by summation over \theta.

In Bayesian analysis the estimator of \theta, i.e. \theta^*, is the one which minimizes the expected loss with respect to the posterior distribution; that is, it depends on the loss function chosen. If the loss function is taken to be the quadratic loss function L(\theta^*, \theta) = (\theta^* - \theta)^2, then the Bayes estimator of \theta is the posterior mean, i.e.

\theta^* = E[\theta \mid x_1, x_2, ..., x_n] = \int \theta \, g(\theta \mid x_1, x_2, ..., x_n) \, d\theta

Using the posterior distribution, a (1 - \alpha)100% Bayes confidence interval (P_1, P_2) for \theta may be obtained from

\int_{P_1}^{P_2} g(\theta \mid x_1, x_2, ..., x_n) \, d\theta = 1 - \alpha    ...(18)

The origin of Bayesian theory may be attributed to a seminal paper by the Rev. Thomas Bayes, republished in 1958 on account of its fundamental importance. Details of Bayesian statistical theory can be found in Raiffa and Schlaifer (1961), Jeffreys (1961), Savage (1962), Lindley (1965), Box and Tiao (1973), Berger (1985), S.K. Sinha (1998) and Bernardo and Smith (1993).

1.6 CONCEPT OF PRIOR DISTRIBUTION

A prior distribution, which is supposed to represent what is known about the unknown parameter before the data are available, plays an important role in Bayesian analysis. The prior information concerning the parameter \theta can be summarized mathematically in the form of a prior distribution g(\theta) on the parameter space \Theta. A detailed discussion of the choice of a prior distribution for \theta is given in Raiffa and Schlaifer (1961); here we shall confine ourselves to defining them. Priors for the parameters of a distribution differ in respect of their properties.

The natural conjugate priors satisfy the closure property, meaning that the prior and posterior distributions of the parameter belong to the same family. This method of choosing priors is popular because it leads to mathematical simplicity and tractability. This property of conjugate priors is also known as the 'closed under sampling' property (Wetherill, 1961). Raiffa and Schlaifer (1961) considered a method of generating prior densities on the parameter space; a family of such densities has been called by them a 'natural conjugate family'. For example, in the case of an exponential density, the gamma priors form such a family.
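This conjugacy can be sketched directly. For exponential data with rate \theta and a Gamma(a, b) prior (shape a, rate b), the posterior is Gamma(a + n, b + \sum x_i), and the posterior mean is the Bayes estimator of \theta under squared-error loss. The hyperparameters a, b below are arbitrary choices for the illustration:

```python
import random

# Exponential likelihood with rate theta + Gamma(a, b) prior (rate
# parameterization, an illustrative choice) => Gamma(a + n, b + sum(x))
# posterior.  The posterior mean is the Bayes estimator of theta under
# quadratic loss.

random.seed(1)
true_theta = 2.0
n = 5_000
x = [random.expovariate(true_theta) for _ in range(n)]

a, b = 2.0, 1.0                        # prior: Gamma(a, b)
a_post, b_post = a + n, b + sum(x)     # posterior: Gamma(a_post, b_post)
posterior_mean = a_post / b_post       # Bayes estimator of theta
```

With this much data the posterior mean is dominated by the likelihood, so it lands close to the data-generating rate regardless of the (mild) prior.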

In case the decision maker does not have any prior knowledge about the parameter, a non-informative or quasi prior density may be used. The role of non-informative and quasi prior densities in Bayesian analysis is discussed in Bhattacharya


(1967). Jeffreys (1961) proposed a general rule for obtaining the prior distribution of θ. According to this rule, the unknown parameter θ, which is assumed to be a random variable, follows a distribution proportional to the square root of the Fisher information on θ. Mathematically, we have

g(θ) ∝ √I(θ) ...(19)

or g(θ) = constant · √I(θ)

where

I(θ) = E[(∂ log f(x, θ)/∂θ)²] = −E[∂² log f(x, θ)/∂θ²] ...(20)

for the case when there is a single unknown parameter θ. For a situation when θ is a vector-valued parameter, the determinant of the information matrix, i.e. |I(θ)|, takes its place, where

I(θ)ᵢⱼ = −E[∂² log f(x, θ)/∂θᵢ ∂θⱼ] ...(21)

A difficulty arises when the prior information about the parameter is vague or, worse still, there is no prior information whatever. This leads to the consideration of what are known as improper or quasi prior distributions. For a proper prior we have

g(θ) ≥ 0

and ∫ g(θ) dθ = 1,

when θ is a continuous random variable,

or Σ g(θ) = 1,

when θ is a discrete random variable.

While for an improper prior


g(θ) ≥ 0

but ∫ g(θ) dθ ≠ 1,

when θ is a continuous random variable,

or Σ g(θ) ≠ 1,

when θ is a discrete random variable.

The Jeffreys prior may or may not be proper. Various other rules have also been suggested for the selection of a prior, but no neat solution to the problem has appeared so far.

In the Bayesian analysis of reliability characteristics, the prior knowledge is updated by using experimental data, and variations in the parameters are represented by the posterior distribution. The admissibility of the parameters of the prior distribution cannot be tested unless we make use of additional information on the prior distribution. However, the variations in the parameters can be neutralized by averaging, as we do in the case of the compound distribution of the concerned variable. Obviously, it seems more logical to infer about the parameters of the prior distribution with the help of the compound distribution, which also involves these parameters.

1.7 LOSS FUNCTION

Suppose θ is an unknown parameter of some distribution f(x|θ) and we estimate θ by some statistic θ̂. Let L(θ, θ̂) represent the loss incurred when the true value of the parameter is θ and we estimate it by the statistic θ̂. The loss function L(θ, θ̂) is defined to be a real-valued function satisfying:

(i) L(θ, θ̂) ≥ 0 for all possible estimates θ̂ and all θ ∈ Θ


(ii) L(θ, θ̂) = 0 for θ̂ = θ.

We now consider the following loss functions:

1.7.1 Quadratic Loss Function

A function defined as L(θ, θ̂) = k(θ̂ − θ)² is called a quadratic loss function. Such a loss function is widely used in most estimation problems. If k is a function of θ, the loss function is called a weighted quadratic loss function. If k = 1, we have L(θ, θ̂) = (θ̂ − θ)², known as the squared error loss function (SELF). Under the SELF, the Bayes estimator is the posterior mean. The squared error loss function is a symmetric function of θ̂ and θ. The reason for the popularity of the SELF is that it makes the calculations relatively straightforward and simple.
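The fact that the posterior mean minimizes the posterior expected squared error loss can be checked numerically; the sketch below uses a hypothetical discrete posterior on a grid (illustrative values, not from the text) and scans candidate estimates.

```python
# hypothetical discrete posterior over a grid of theta values
thetas = [0.1 * i for i in range(1, 11)]
weights = [t ** 2 for t in thetas]          # unnormalized posterior weights
z = sum(weights)
post = [w / z for w in weights]

post_mean = sum(t * p for t, p in zip(thetas, post))

def expected_self(est):
    # posterior expected squared error loss for the estimate `est`
    return sum(p * (est - t) ** 2 for t, p in zip(thetas, post))

# scan candidate estimates; the minimizer matches the posterior mean
candidates = [0.001 * i for i in range(1, 1000)]
best = min(candidates, key=expected_self)
print(round(best, 3), round(post_mean, 3))
```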

1.7.2 Linex Loss Function

A symmetric loss function assumes that positive and negative errors are equally serious. However, in some estimation problems such an assumption may be inappropriate. Canfield (1970) points out that the use of a symmetric loss function may be inappropriate in the estimation of the reliability function. Overestimation of the reliability function or average lifetime is usually much more serious than underestimation of the reliability function or mean time to failure (MTTF). Also, an underestimation of the failure rate results in more serious consequences than an overestimation of the failure rate. This led statisticians to consider asymmetric loss functions, which have been proposed in the statistical literature. Ferguson (1967), Zellner and Geisel (1968), Aitchison and Dunsmore (1975) and Berger (1980) have considered the linear asymmetric loss function. Varian (1975)


introduced the following convex loss function, known as the Linex (linear-exponential) loss function:

L(Δ) = b e^{aΔ} − cΔ − b;  a, c ≠ 0, b > 0 ...(23)

where Δ = θ̂ − θ. It is clear that L(0) = 0 and the minimum occurs when c = ab; therefore,

L(Δ) = b[e^{aΔ} − aΔ − 1];  a ≠ 0, b > 0

where a and b, the parameters of the loss function, may be regarded as shape and scale parameters respectively. This loss function has been considered by Zellner (1986) and Rojo (1987). Basu and Ebrahimi (1991) considered L(Δ) with

Δ = (θ̂ − θ)/θ

and studied Bayesian estimation under the Linex loss function for the exponential lifetime distribution. This loss function is suitable for situations where overestimation of θ is more costly than its underestimation. The loss function L(Δ) has the following nice properties:

(i) For a = 1, the function is quite asymmetric about zero, with overestimation being more costly than underestimation.

(ii) For a < 0, L(Δ) rises almost exponentially when Δ < 0 (underestimation) and almost linearly when Δ > 0 (overestimation).

(iii) For small values of |a|, L(Δ) is almost symmetric and not far from a squared error loss function. Indeed, on expanding


e^{aΔ} ≈ 1 + aΔ + a²Δ²/2,

L(Δ) ≈ (ba²/2)Δ², a squared error loss function. Thus for small values of |a|, optimal estimates are not far different from those obtained with a squared error loss function.
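For a normal posterior, minimizing the posterior expectation of b[e^{aΔ} − aΔ − 1] gives the known closed-form Bayes estimate μ − aσ²/2. The sketch below (illustrative parameter values, midpoint-rule integration) recovers this numerically.

```python
import math

mu, sigma, a = 1.0, 0.5, 1.0   # assumed posterior N(mu, sigma^2) and Linex shape a

def expected_linex(est, n=1001, width=8.0):
    # posterior expected Linex loss b[e^{a*d} - a*d - 1] with b = 1,
    # d = est - theta, integrated over the normal posterior (midpoint rule)
    lo = mu - width * sigma
    h = 2 * width * sigma / n
    total = 0.0
    for i in range(n):
        th = lo + (i + 0.5) * h
        dens = math.exp(-((th - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
        d = est - th
        total += (math.exp(a * d) - a * d - 1) * dens * h
    return total

grid = [0.6 + 0.001 * i for i in range(401)]   # candidate estimates
best = min(grid, key=expected_linex)
closed_form = mu - a * sigma ** 2 / 2          # known result for a normal posterior
print(round(best, 3), closed_form)
```

Note how the Bayes estimate sits below the posterior mean μ, reflecting the extra penalty the Linex loss (with a > 0) places on overestimation.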

1.7.3 Precautionary Loss

Norstrom (1996) introduced an alternative asymmetric precautionary loss function and also presented a general class of precautionary loss functions with the quadratic loss function as a special case. These loss functions approach infinity near the origin to prevent underestimation, thus giving conservative estimators, especially when low failure rates are being estimated. These estimators are very useful when underestimation may lead to serious consequences. A very useful and simple asymmetric precautionary loss function is

L(θ, θ̂) = (θ̂ − θ)²/θ̂
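Minimizing the posterior expectation of (θ̂ − θ)²/θ̂ gives the Bayes estimate θ̂ = √E(θ²), which lies above the posterior mean, illustrating the "conservative" behaviour described above. A small numerical sketch with a hypothetical discrete posterior:

```python
import math

# hypothetical discrete posterior (illustrative values)
thetas = [0.5, 1.0, 1.5, 2.0]
post = [0.1, 0.4, 0.3, 0.2]

def expected_prec(est):
    # posterior expectation of the precautionary loss (est - theta)^2 / est
    return sum(p * (est - t) ** 2 / est for t, p in zip(thetas, post))

grid = [0.001 * i for i in range(500, 2500)]
best = min(grid, key=expected_prec)
closed = math.sqrt(sum(p * t * t for t, p in zip(thetas, post)))  # sqrt(E[theta^2])
print(round(best, 3), round(closed, 3))
```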

1.7.4 Entropy Loss

In many practical situations, it appears to be more realistic to express the loss in terms of the ratio θ̂/θ. In this case, Calabria and Pulcini (1994) point out that a useful asymmetric loss function is the entropy loss

L(δ) = δᵖ − p log_e(δ) − 1 ...(27)

where δ = θ̂/θ.


Its minimum occurs at θ̂ = θ. A positive value of p is used when a positive error (θ̂ > θ) causes more serious consequences than a negative error, and vice-versa. For small |p| values, the function is almost symmetric when both θ̂ and θ are measured on a logarithmic scale, and approximately

L(δ) ≈ (p²/2) [log_e(θ̂) − log_e(θ)]² ...(28)

Also, the loss function L(δ) has been used in Dey et al. (1987) and Dey and Liu (1992), in its original form having p = 1. Thus, L(δ) can be written as

L(δ) = b[δ − log_e(δ) − 1];  b > 0 ...(29)

where δ = θ̂/θ.
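For the entropy loss in (29) (i.e. p = 1), minimizing the posterior expectation of δ − log δ − 1 gives the Bayes estimate θ̂ = [E(1/θ)]⁻¹, the posterior harmonic mean. A numerical sketch with a hypothetical discrete posterior:

```python
import math

# hypothetical discrete posterior (illustrative values)
thetas = [0.5, 1.0, 1.5, 2.0]
post = [0.1, 0.4, 0.3, 0.2]

def expected_entropy(est):
    # E[ est/theta - ln(est/theta) - 1 ] under the posterior (p = 1, b = 1)
    return sum(p * (est / t - math.log(est / t) - 1) for t, p in zip(thetas, post))

grid = [0.001 * i for i in range(200, 2000)]
best = min(grid, key=expected_entropy)
harmonic = 1.0 / sum(p / t for t, p in zip(thetas, post))  # 1 / E[1/theta]
print(round(best, 3), round(harmonic, 3))
```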

1.8 SOME IMPORTANT FAILURE-TIME DISTRIBUTION MODELS

Life-testing is an important and useful area of statistics for the engineering sciences. The analysis of failure-time data over the years has given birth to a number of parametric models. These models were found suitable for representing a wide range of situations, particularly in problems related to the modeling of various aging or failure processes. Among univariate models, a few particular distributions occupy a central role because of their demonstrated usefulness. The choice of a failure-time model is largely a skill. In most experiments, the measurements are assumed to be drawn from a certain distribution. The choice of the distribution depends on past experience with the process, mathematical expediency and, to some extent, faith. However, in some cases, the relationship between the failure mechanism and the failure-time function may be used in making a choice. A useful reference for this purpose is


Johnson and Kotz (1970), which extensively catalogs mathematical and statistical

properties of most of the distributions and provides additional references

concerning their areas of application. Some frequently used lifetime models are as

follows:

1.8.1 The Exponential Distribution

In reliability theory, the exponential distribution plays much the same role in life-testing experiments as the normal distribution plays in agricultural experiments for studying the effect of different treatments on yield. Historically, the exponential distribution was the first lifetime model for which statistical methods were extensively developed. Early on, Epstein and Sobel (1953, 1954, 1955) and Epstein (1954, 1960) gave numerous results and popularized the exponential as a lifetime distribution, especially in the area of industrial life testing. The desirability of the exponential distribution is due to its simplicity and its inherent association with the well-developed theory of the Poisson process.

The p.d.f. of the one-parameter exponential distribution is written as

f(x; θ) = (1/θ) e^{−x/θ};  x > 0, θ > 0 ...(30)

where θ, the scale parameter, is the average or mean life of the item.

In life testing, 1/θ, the first part of the density function in (30), is referred to as the constant hazard rate. The reliability function for time t of items whose lifetimes follow the exponential distribution in (30) is

R(t) = P[X > t] = e^{−t/θ} ...(31)

For θ = 1, the density in (30) is called the standard exponential distribution.


In many life-testing problems, it has often been found useful to fit a two-parameter exponential model to failure-time data, with p.d.f.

f(x; θ, μ) = (1/θ) exp[−(x − μ)/θ];  x > μ, θ > 0 ...(32)

where μ ≥ 0 is called the guarantee time or threshold parameter. If μ = 0, we get the one-parameter exponential density. Again, for this model the reliability function for time t is

R(t) = 1, if t ≤ μ
R(t) = exp[−(t − μ)/θ], if t > μ ...(33)
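The reliability functions (31) and (33) can be evaluated directly; the sketch below (illustrative values) also checks the well-known memoryless property of the one-parameter case, R(s + t) = R(s)·R(t).

```python
import math

def reliability_exp(t, theta, mu=0.0):
    # R(t) for the two-parameter exponential of eq. (33); mu = 0 gives eq. (31)
    return 1.0 if t <= mu else math.exp(-(t - mu) / theta)

theta = 2.0
# memoryless check for the one-parameter case: R(s + t) = R(s) * R(t)
s, t = 1.0, 3.0
print(reliability_exp(s + t, theta), reliability_exp(s, theta) * reliability_exp(t, theta))
# with a guarantee time mu, reliability is 1 up to mu
print(reliability_exp(0.5, theta, mu=1.0))
```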

1.8.2 The Gamma Distribution

The gamma distribution is a natural extension of the exponential distribution. The p.d.f. of the two-parameter gamma distribution is given by

f(x; θ, k) = (θᵏ/Γ(k)) e^{−θx} x^{k−1};  x, θ, k > 0 ...(34)

where θ and k are the scale and shape parameters respectively. For k = 1, the gamma distribution reduces to the one-parameter exponential distribution with parameter θ. For integer values of k, the gamma p.d.f. is also known as the Erlangian p.d.f. Its reliability and hazard functions involve the incomplete gamma function, i.e.

R(t) = 1 − I(k, θt) and h(t) = f(t)/R(t),  t > 0 ...(35)

where I(k, t) is the incomplete gamma function ratio defined as

I(k, t) = (1/Γ(k)) ∫₀ᵗ e^{−τ} τ^{k−1} dτ ...(36)


It can be shown that h(t) is monotonically decreasing (increasing) for k < 1 (k > 1) and constant for k = 1. The shape parameter k is also defined as the intensity of IFR or DFR by Sharma and Rana (1990).
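A sketch evaluating the incomplete gamma ratio of (36) numerically (midpoint rule; the rate-θ parametrization of (34) is assumed), checking that k = 1 reduces the reliability of (35) to the exponential form e^{−θt}:

```python
import math

def incomplete_gamma_ratio(k, t, n=20000):
    # I(k, t) = (1/Gamma(k)) * integral_0^t e^{-u} u^{k-1} du  (midpoint rule)
    if t <= 0:
        return 0.0
    h = t / n
    s = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        s += math.exp(-u) * u ** (k - 1) * h
    return s / math.gamma(k)

def gamma_reliability(t, theta, k):
    # R(t) = 1 - I(k, theta * t), as in eq. (35)
    return 1.0 - incomplete_gamma_ratio(k, theta * t)

# k = 1 must reduce to the exponential: R(t) = e^{-theta * t}
print(gamma_reliability(1.0, 0.5, 1.0), math.exp(-0.5))
```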

1.8.3 The Weibull Distribution

The Weibull distribution is perhaps the most extensively used life distribution model in reliability theory. Weibull (1951) and Berretoni (1964) advocated its application in connection with the lifetimes of many types of manufactured items. It has been used as a model for diverse types of items such as vacuum tubes (Kao, 1959), ball bearings (Lieblein and Zelen, 1956) and electrical insulation (Nelson, 1972). The Weibull distribution is also widely used in biomedical applications, for example in studies on the time to occurrence of tumours in human populations (Whittemore and Altschuler, 1976) or in laboratory animals (Pike, 1966; Peto et al., 1972). The p.d.f. of the Weibull distribution is given by

f(x; θ, k) = (k/θ) x^{k−1} e^{−xᵏ/θ};  x, k, θ > 0 ...(37)

Here, θ is referred to as a scale parameter and k as a shape parameter. The random variable X having the p.d.f. in (37) is said to have a two-parameter (k and θ) Weibull distribution. For this distribution, the reliability function for time t is given by

R(t) = e^{−tᵏ/θ} ...(38)

and the instantaneous failure rate or hazard function becomes

h(t) = (k/θ) t^{k−1} ...(39)


Obviously, h(t) is monotonically decreasing (increasing) for k < 1 (k > 1), while k = 1 leads to the exponential distribution. This distribution was first introduced by the Swedish physicist Weibull (1939).
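The hazard behaviour just described can be verified directly from (38) and (39); a minimal sketch (illustrative parameter values):

```python
import math

def weibull_hazard(t, theta, k):
    # h(t) = (k / theta) * t^(k-1), from eq. (39)
    return (k / theta) * t ** (k - 1)

def weibull_reliability(t, theta, k):
    # R(t) = exp(-t^k / theta), from eq. (38)
    return math.exp(-t ** k / theta)

# k > 1: increasing hazard; k < 1: decreasing; k = 1: constant (exponential)
ts = [0.5, 1.0, 2.0]
print([round(weibull_hazard(t, 1.0, 2.0), 3) for t in ts])  # increasing in t
print([round(weibull_hazard(t, 1.0, 0.5), 3) for t in ts])  # decreasing in t
```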

1.8.4 The Geometric Distribution

Generally, the lifetime distributions of a system's components are assumed to be continuous. However, there exist systems whose component lifetimes are measured in terms of the number of completed cycles. Even for a continuous operation involving continuous measurement of lifetimes, observations made at periodic time points give rise to a discrete situation, and therefore a discrete model may be more appropriate. Yakub and Khan (1981), Mishra (1982), Patel and Gajjan (1990), Mishra and Singh (1992), Patel (2003), Dillip (2004), Krishna and Jain (2004), and Anwar Hasan et al. (2007), etc. considered the geometric distribution in such analyses. Consider an experiment consisting of independent trials, called Bernoulli trials, such that there are only two outcomes E₁ (occurrence of a particular event) and E₂ (non-occurrence of the particular event). Thus, the sample description space of this experiment is S = {E₁, E₂}. Define a r.v. Xᵢ such that

Xᵢ = 0 : E₁ occurs on the i-th trial
Xᵢ = 1 : E₂ occurs on the i-th trial ...(40)

Also, let P(Xᵢ = 0) = θ and P(Xᵢ = 1) = (1 − θ) for all i. Define another random variable X to denote the number of independent trials up to the first non-occurrence. The sample description space of X is S = {x : x = 0, 1, 2, ...} and

P(X = x) = (1 − θ) θˣ;  x = 0, 1, 2, ... ...(41)


The p.m.f. in (41) has been suggested as a lifetime distribution, as it gives the probability of x successful cycles of a system followed by one non-survival, where θ is the probability of survival. For this distribution, the reliability function for time t is

R(t) = θᵗ ...(42)

and the hazard rate becomes

h(t) = f(t, θ)/R(t) = (1 − θ) ...(43)

The exponential distribution, as a lifetime distribution, has some nice properties: it represents a constant hazard rate, has the memoryless property and possesses much mathematical feasibility. Being the discrete analogue of the exponential distribution, the geometric distribution also has a place of pride among lifetime distributions. It is the only discrete lifetime distribution having a constant hazard rate and also following the memoryless property.
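A minimal sketch verifying the constant hazard rate of (43) and the discrete memoryless property P(X ≥ s + t | X ≥ s) = P(X ≥ t), reading R(t) = P(X ≥ t) = θᵗ as in (42):

```python
def geom_pmf(x, theta):
    # P(X = x) = (1 - theta) * theta**x, x = 0, 1, 2, ...  (eq. 41)
    return (1 - theta) * theta ** x

def geom_reliability(t, theta):
    # R(t) = theta**t  (eq. 42), t = 0, 1, 2, ...
    return theta ** t

theta = 0.8
# constant hazard: h(t) = P(X = t) / R(t) = 1 - theta for every t
hazards = [geom_pmf(t, theta) / geom_reliability(t, theta) for t in range(5)]
print(hazards)
# memoryless: P(X >= s + t | X >= s) = P(X >= t)
s, t = 2, 3
print(geom_reliability(s + t, theta) / geom_reliability(s, theta), geom_reliability(t, theta))
```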

1.9 COMPOUND DISTRIBUTION

These distributions are formed by 'mixtures of distributions'. A compound distribution in symbolic form can be written as F₁ ∧θ F₂, where F₁ represents the original distribution, θ the varying parameter and F₂ the compounding (mixing) distribution. If the cumulative distribution function of a random variable is F(x|θ₁, θ₂, ..., θₙ), depending on n parameters θ₁, θ₂, ..., θₙ, then a compound distribution is constructed by ascribing to some or all of the θ's a probability distribution. The new distribution has the cumulative distribution function E


[F(x|θ₁, θ₂, ..., θₙ)], the expectation being taken with respect to the joint distribution of the θ's.

Some well-known compound distributions are:

(i) Exponential (θ) ∧θ Gamma (λ, ν)

This means a compound exponential distribution formed by ascribing the gamma distribution with parameters λ and ν to the parameter θ of the exponential distribution. This compound distribution is a Pareto distribution and is defined by

f(x; λ, ν) = ∫₀^∞ θ e^{−θx} · [λ^{ν}/Γ(ν)] θ^{ν−1} e^{−λθ} dθ
           = ν λ^{ν} / (x + λ)^{ν+1};  λ, ν, x > 0

This is the p.d.f. of the Pareto distribution, with

mean = λ/(ν − 1);  ν > 1

and

variance = ν λ² / [(ν − 1)² (ν − 2)];  ν > 2

The coefficient of variation is √(ν/(ν − 2)) and is independent of the scale parameter λ.
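The mixture identity in (i) can be checked numerically: integrating the exponential density against a gamma density for θ (midpoint rule; illustrative values λ = 2, ν = 3) reproduces the closed-form Pareto density.

```python
import math

def compound_density(x, lam, nu, n=40000, theta_max=60.0):
    # numerically mix Exponential(theta) over a Gamma(nu, lam) density for theta
    h = theta_max / n
    s = 0.0
    c = lam ** nu / math.gamma(nu)
    for i in range(n):
        th = (i + 0.5) * h
        s += th * math.exp(-th * x) * c * th ** (nu - 1) * math.exp(-lam * th) * h
    return s

def pareto_density(x, lam, nu):
    # closed form: nu * lam**nu / (x + lam)**(nu + 1)
    return nu * lam ** nu / (x + lam) ** (nu + 1)

x, lam, nu = 1.0, 2.0, 3.0
print(round(compound_density(x, lam, nu), 6), round(pareto_density(x, lam, nu), 6))
```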

(ii) Geometric (θ) ∧θ Beta (u, v)

i.e.

f(x; u, v) = [1/B(u, v)] ∫₀¹ (1 − θ) θˣ · θ^{u−1} (1 − θ)^{v−1} dθ
           = v u^{[x]} / (u + v)^{[x+1]};  x = 0, 1, 2, ...

This is the p.m.f. of the inverse Pólya-Eggenberger distribution. Here


u^{[r]} = u (u + 1) ... (u + r − 1).

(iii) Poisson (λ) ∧λ Gamma (α, β)

This means a compound Poisson distribution formed by ascribing the gamma distribution to the expected value λ of a Poisson distribution. This compound Poisson distribution is a negative binomial distribution.

(iv) Binomial (n, p) ∧p Beta (α, β)

This means a compound binomial distribution formed by ascribing the beta distribution of the first kind to the parameter p of a binomial distribution. This compound distribution is a Pólya-Eggenberger distribution.

(v) Normal (θ, σ₁²) ∧θ Normal (μ, σ₂²)

This compound normal distribution is also normal, with parameters μ and (σ₁² + σ₂²).

(vi) Hypergeometric (n, x, N) ∧x Binomial (N, p)

This compound distribution is a binomial with parameters n and p.

1.10 ROBUSTNESS

Robust statistics provides an alternative approach to classical statistical methods. The motivation is to produce estimators that are not unduly affected by small departures from model assumptions. In statistics, classical methods rely heavily on assumptions which are often not met in practice. In particular, it is often assumed that the data residuals are normally distributed, at least approximately, or that the central limit theorem can be relied on to produce normally distributed estimates. Unfortunately, when there are outliers in the data,


classical methods often have very poor performance. Robust statistics seeks to provide methods that emulate classical methods but are not unduly affected by outliers or other small departures from model assumptions. The subject of robustness has been receiving considerable attention of late. The first theoretical approach to robust statistics was introduced by Huber, whose book (Huber, 1981) brought this fundamental work to a wider audience. Other good books on robust statistics are Hampel et al. (1986) and Rousseeuw and Leroy (1987); a modern treatment is given by Maronna et al. (2006). Huber's book is quite theoretical, whereas the book by Rousseeuw and Leroy is very practical; Hampel et al. (1986) and Maronna et al. (2006) fall somewhere in the middle ground. Robust parametric statistics tends to rely on replacing the normal distribution in classical methods with the t-distribution with low degrees of freedom (i.e. high kurtosis; degrees of freedom between 4 and 6 have often been found to be useful in practice) or with a mixture of two or more distributions. Examples of robust and non-robust statistics are:

(i) The 'median' is a robust measure of central tendency, while the 'mean' is not.

(ii) The 'median absolute deviation' and the 'inter-quartile range' are robust measures of statistical dispersion, while the 'standard deviation' and 'range' are not.

Robust statistics, in a loose, non-technical sense, is concerned with the fact that assumptions commonly made in statistics are at best approximations to reality. As a collection of related theories, robust statistics is the statistics of approximate parametric methods. The moral is clear: one should check carefully that the underlying assumptions are satisfied before using a parametric method.
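The contrast in (i) is easy to demonstrate: a single gross outlier drags the mean far away while the median barely moves (illustrative data):

```python
def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

clean = [9.8, 10.1, 10.0, 9.9, 10.2]
contaminated = clean + [100.0]   # a single gross outlier

print(mean(clean), median(clean))                 # both near 10
print(mean(contaminated), median(contaminated))   # mean jumps to 25, median barely moves
```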


1.11 REVIEW OF LITERATURE

In recent years reliability has been formulated as the science of predicting, estimating or optimizing the probability of survival, the mean life or, more generally, the life distribution of components or systems. To study and solve problems that arise in reliability theory, knowledge of the methods of probability theory and mathematical statistics is necessary. At present, not only engineers and scientists but also government leaders are concerned with increasing the reliability of systems.

In order to obtain the different parameters of interest, like the reliability (survival) function, the nature of the hazard rate, the mean time to system failure, availability, etc., called reliability characteristics, the research area can be broadly classified into the following two categories:

(1) In studies like Dhillon and Singh (1980), Govil (1983), and Balagurusamy (1984), the authors developed stochastic models under various assumptions which best fit the engineering systems used in day-to-day practical life. They obtained the reliability characteristics and the net expected profit during a finite interval of time using well-known techniques such as the regenerative point technique, semi-Markov processes and the supplementary variable technique.

(2) On the other hand, studies like Epstein and Sobel (1952), Barlow and Proschan (1965, 1975), Mann et al. (1974), Kapur and Lamberson (1977), Elandt-Johnson and Johnson (1980), Kalbfleisch and Prentice (1980), Miller (1981), Cox and Oakes (1984), Lawless (1982), Martz and Waller (1982) and Sinha (1986) include the recording of lifetime data on individuals; various inference


techniques are then used to estimate various reliability characteristics of the system. The reliability characteristics are analysed in respect of variations in the parameters involved in the lifetime and repair-time distributions.

The literature on reliability analysis includes, in a broad sense, the analysis of a positive-valued random variable representing the time to failure of a physical or biological system, and such analysis is gaining importance among research workers in the fields of industry and engineering. Obviously, the nature of the problems in such analysis is extremely varied. In this category, experiments are conducted to record lifetime data, which are then used for analysing the life phenomenon of various systems (man-made or biological) in terms of the reliability or survival function, increasing or decreasing hazard rate, mean time to survival, etc. In other words, the recorded lifetime data are used to draw inferences on the reliability characteristics of the system or subunit to see its worth in accomplishing an intended task, and therefore we can rightly call it "Inferential Reliability Analysis". A vast literature on this aspect is available in Lawless (1982), Sinha (1986), Kapur and Lamberson (1977), Mann et al. (1974), Martz and Waller (1982), Harris and Soms (1983), Nandi and Aich (1974), Basu and Ebrahimi (1991) and Sharma and Krishna (1994, 1995).

Life-testing experiments are costly and time-consuming, and therefore it should be recognized that the parameters characterizing the reliability characteristics of a lifetime distribution are bound to follow some random variations due to environmental changes. This is therefore a factor which should be considered along with the experimental data when analysing the reliability characteristics of a system. Thomas Bayes (1763) introduced Bayesian inference in his famous


research paper entitled "An Essay towards Solving a Problem in the Doctrine of Chances". Further, for the basic theory and foundations one can also refer to Jeffreys (1961) and Savage (1962). Lindley (1965) and Box and Tiao (1973) have popularized this approach and given it a uniquely important place in the field of statistics; they developed a literature based on Bayes's approach. Today a vast literature on the Bayesian analysis of life-testing problems is available in standard texts. A few of them are Savage (1962), Bhattacharya (1967), Martz and Waller (1982), Sinha (1986) and Gelman et al. (1995), which present the Bayesian analysis of system reliability using many prior distributions. Some priors, with their inherent statistical properties, are also given in the study by Raiffa and Schlaifer (1961). Studies like Sharma et al. (1993, 1994, 1995) are also efforts in the same direction. Apostolakis (1990) reviewed the literature on Bayesian theory

to assess the probabilistic safety of various engineering systems. But in many practical situations it may happen that operational experience with the complete system is limited, non-existent or very expensive to realize. Moreover, we often need to predict the reliability of the complete system at the early design stage. In this regard, Kaplan et al. (1989) studied the prediction of the reliability of a complete system, assuming that operational experience with the complete system is limited, non-existent or very expensive to realize, by using the information available on the boxes or subunits of the system.

The study analysed the behaviour of various probability curves, which in turn may be used to express our degree of confidence about the complete system reliability, and the way in which Bayes' theorem updates prior probability curves to account for various pieces of evidence. Like reliability, availability is also a measure of


effectiveness of a system for long-term performance. Gray and Lewis (1967) and Masters and Lewis (1987) obtained confidence intervals for the steady-state availability using failure and repair information on the respective distributions. However, this approach was not considered satisfactory, and a modified approach was discussed later. In this modified approach, a confidence interval for availability was developed by obtaining simultaneous intervals for the MTBF and MTTR. However, when failure and repair information are recorded over a large interval of time, it may be reasonable to assume random variations in the parameters of the failure-time and repair-time distributions. These parametric random variations might result from the environmental impact on the operating conditions of the system or component. Recent contributions in these directions are by Sharma and Krishna (1994, 1995a, b), Sharma and Bhutani (1994a, b) and Sharma et al. (2004).

Queuing theory is concerned with the statistical description of the behaviour of queues, with findings such as the probability distribution of the number in the queue, from which the mean and variance of the queue length can be found. In queuing theory, the investigator must measure the existing system to make an objective assessment of its characteristics and must determine how changes may be made to the system and what the effect of various kinds of changes in the system's characteristics would be. The probability mass function (p.m.f.) obtained in the steady-state situation is the basis for analysing various queue systems in respect of their characteristics. The traffic intensity (ρ), defined as the ratio of the arrival rate to the service rate, is an important parameter of the p.m.f. Saaty (1961), Ackoff and


Sasieni (1968) and Taha (1976) studied various queue characteristics which have been defined using the parameter ρ. D.G. Kendall (1953) introduced a useful notation for multiple-server queuing models which describes three characteristics, namely, the arrival distribution, the departure distribution and the number of parallel service channels. Later, A. Lee (1966) added the fourth and fifth characteristics to the notation, that is, the service discipline and the maximum number in the system. In Taha (1976), the Kendall-Lee notation is augmented by a sixth characteristic describing the calling source. The complete notation thus appears in the following symbolic form:

(a / b / c) : (d / e / f)

where

a ≡ arrival (or inter-arrival) distribution
b ≡ departure (or service-time) distribution
c ≡ number of parallel service channels
d ≡ service discipline
e ≡ maximum number allowed in the system (in service + waiting)
f ≡ calling source.

The following conventional codes are usually used to replace the symbols a, b and d.

Symbols a and b are replaced by:

M ≡ Poisson (Markovian) arrival or departure distribution (or, equivalently, exponential inter-arrival or service-time distribution).
D ≡ Deterministic inter-arrival or service times.


Ek ≡ Erlangian or gamma inter-arrival or service-time distribution with parameter k.
GI ≡ General independent distribution of arrivals (or inter-arrival times).
G ≡ General distribution of departures (or service times).

Symbol d is replaced by:

FCFS ≡ first come, first served
LCFS ≡ last come, first served
SIRO ≡ service in random order
GD ≡ general service discipline

The symbol c is replaced by any positive number representing the number of parallel servers. The symbols e and f represent a finite or infinite number in the system and in the calling source, respectively.

To illustrate the use of this notation, consider (M/M/c) : (FCFS/N/∞). This denotes Poisson arrivals (exponential inter-arrival times), Poisson departures (exponential service times), c parallel servers, a "first come, first served" discipline, a maximum allowable number N in the system, and an infinite calling source.
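For the simplest member of this family, the (M/M/1) : (FCFS/∞/∞) queue, the steady-state p.m.f. and the mean number in the system are standard closed forms in the traffic intensity ρ; a minimal sketch (illustrative value ρ = 0.5):

```python
def mm1_pn(n, rho):
    # steady-state probability of n customers in an (M/M/1):(FCFS/inf/inf) queue
    return (1 - rho) * rho ** n

def mm1_mean_number(rho):
    # mean number in the system: L = rho / (1 - rho), valid for rho < 1
    return rho / (1 - rho)

rho = 0.5   # traffic intensity = arrival rate / service rate
probs = [mm1_pn(n, rho) for n in range(200)]
# the p.m.f. sums to 1 and its mean matches the closed form
print(round(sum(probs), 6), round(sum(n * p for n, p in enumerate(probs)), 6), mm1_mean_number(rho))
```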

In all such analyses, ρ is assumed to be constant. However, over a long period of time the assumption of a constant ρ seems restrictive. With the advancement of science and technology over a period, the parameters involved in the queue characteristics cannot be considered constant. Here, it should be recognized that the investigator has considerable a priori knowledge about the variations in these parameters. From the repeated analysis of various queue systems, we can have a strong base for collecting prior information showing variations in ρ.


Following this concept, Muddapur (1972) and Armero (1985) presented Bayesian analyses of some queue characteristics. The primary aim of these studies has been to update the prior knowledge about the parameters using experimental data. The studies in Ackoff and Sasieni (1968) and Taha (1976) did not consider the estimation and hypothesis-testing aspects of queue characteristics. Sharma and Kumar (1999) studied statistical inference on various important technological performance measures for an (M/M/1):(∞/FIFO) queue system model. More recently, Maurya (2004) has devoted considerable attention to statistical inference on the useful characteristics of the more general (M/G/∞):(∞/GD) queue.

The arrival and servicing patterns in the system are greatly influenced by a number of factors which cannot be controlled or assessed in advance, and therefore sample information may be used to draw valid inferences about queue characteristics. Here, it should be recognized that the investigator has a-priori knowledge about the variations in these parameters and needs to combine this knowledge with the operational data on the queue system. This objective can obviously be met with a Bayesian analysis of the various queue characteristics. Studies such as Apostoolakis (1990) and Kaplan et al. (1989) include the conceptual framework and methodology for such analysis.

Some further studies (Martz and Waller, 1982; Sharma and Bhutani, 1992, 1994; Krishna and Sharma, 1995; Sharma et al., 2004) include classical and Bayesian analyses of the steady-state, point-wise and interval availability of systems.


Reviewing all the above studies, the investigator has been able to highlight some vital problems of a very practical nature in engineering reliability and queuing theory.

1.12 THESIS AT A GLANCE

The present thesis comprises six chapters. The contents, developments and findings of the different chapters are discussed below.

Chapter 1 contains a brief account of the introduction and development of reliability and queuing theory. Important statistical techniques and concepts such as Bayesian inference, prior distributions, lifetime models (lifetime distributions) and compound distributions have been discussed in concise form. Since the thesis also includes work on queue systems, a brief summary of queue systems is given in this chapter. The chapter closes with a review of the literature and "thesis at a glance", providing a brief outline of the results of the present research work.

For time-consuming life-testing experiments, it seems unrealistic to treat the parameters involved in the lifetime distribution as constant throughout. Thus, the parameters of the lifetime distribution are treated as random variables. Following this concept, Chapter 2 of the thesis deals with the development of statistical methodology useful in the analysis of the robust character of various static system configurations with geometric lifetimes of components.

Chapter 3 of the present thesis deals with the reliability analysis of different types of system configurations, such as series, parallel, m-out-of-n and non-series-parallel complex systems, with Rayleigh lifetimes of components. In this chapter an easier alternative method for the construction of the structure function of different types of system configurations is suggested. Further, evaluation as well as estimation of the different reliability characteristics has been carried out.

Chapter 4 deals with the analysis of the robust character of a queue system in a power supply problem when the traffic intensity is treated as a random variable with a beta distribution of the second kind. Here, it is assumed that the random variable X follows a binomial distribution whose parameter is itself random, the prior belief about it being described by a beta distribution of the second kind. The compound distribution of X then comes out to be the Polya-Eggenberger distribution. The analysis depends upon the information available on the units of the system. For developing the updated compound distribution (the predictive basic distribution), the posterior distribution of the parameter is derived for the given data. The predictive basic distribution also comes out to be a Polya-Eggenberger distribution. The study reveals that the values of the estimates under the predictive distribution are uniformly higher than the estimates obtained using the compound distribution. It is also noted that the estimates tend to be more precise and consistent in the case of the predictive basic distribution as compared with the simple compound distribution.
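As an aside (not the thesis's own derivation), the compounding step can be sketched numerically: integrating a binomial likelihood against a beta prior yields the Polya-Eggenberger (beta-binomial) form below. For simplicity this sketch uses the ordinary beta distribution of the first kind, whereas the thesis works with the beta distribution of the second kind; all function names are illustrative.

```python
from math import comb, gamma

def beta_fn(a: float, b: float) -> float:
    """Beta function B(a, b) computed via the gamma function."""
    return gamma(a) * gamma(b) / gamma(a + b)

def polya_eggenberger_pmf(x: int, n: int, a: float, b: float) -> float:
    """P(X = x) for the Polya-Eggenberger (beta-binomial) distribution:
    a binomial(n, p) likelihood compounded with a beta(a, b) prior on p,
    giving C(n, x) * B(x + a, n - x + b) / B(a, b)."""
    return comb(n, x) * beta_fn(x + a, n - x + b) / beta_fn(a, b)

# Sanity check: the pmf sums to 1, and with a = b = 1 (a uniform prior)
# every outcome 0..n is equally likely (discrete uniform on n + 1 points).
n, a, b = 5, 1.0, 1.0
pmf = [polya_eggenberger_pmf(x, n, a, b) for x in range(n + 1)]
print(round(sum(pmf), 10))  # 1.0
```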

Chapter 5, in its first section, presents the Bayesian analysis of various queue characteristics in a power supply system model. The analysis depends upon operational data on the queue system. The arrival and service time distributions of the system are taken to be exponential. Prior beliefs about the arrival and service rates of the system have been employed in the analysis. The posterior distribution of the traffic intensity is essentially a powerful tool for analyzing the system characteristics in the Bayesian framework. For this purpose, the Squared Error Loss Function (SELF) and the Linex Loss Function (LLF) have been used in the analysis. The second section of this chapter deals with the development of methodology to study the effect of random variations in the parameters of the arrival and service time distributions on various queue characteristics in the power supply system.
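As a hedged illustration of the two loss functions (an aside, not the thesis's own computation): under SELF the Bayes estimator is the posterior mean, while under the Linex loss L(d - t) = exp(a(d - t)) - a(d - t) - 1 it is -(1/a) log E[exp(-a t)], the expectation taken over the posterior. The sketch below uses an assumed Beta(8, 4) posterior for the traffic intensity, chosen purely for illustration.

```python
import math
import random

def bayes_estimates(posterior_draws, a=1.0):
    """Bayes estimators of a parameter from Monte Carlo posterior draws.

    Under squared error loss (SELF) the estimator is the posterior mean;
    under the Linex loss it is -(1/a) * log E[exp(-a * t)] over the posterior.
    """
    n = len(posterior_draws)
    self_est = sum(posterior_draws) / n
    linex_est = -math.log(sum(math.exp(-a * t) for t in posterior_draws) / n) / a
    return self_est, linex_est

# Illustrative posterior for the traffic intensity rho (an assumption,
# not the thesis's posterior): Beta(8, 4), sampled by Monte Carlo.
random.seed(0)
draws = [random.betavariate(8, 4) for _ in range(50_000)]
self_est, linex_est = bayes_estimates(draws, a=2.0)

# For a > 0 the Linex estimator lies below the posterior mean (Jensen's
# inequality), reflecting the heavier penalty placed on overestimation.
print(self_est > linex_est)  # True
```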

The sixth chapter of the present thesis considers the (M/M/1):(∞/FCFS) queue system model. Over a long period of time, the assumption of constant arrival and service rates seems restrictive. To overcome this, the parameters involved in the arrival and service time distributions are treated as random variables. To study the robust character of various queue characteristics of the (M/M/1):(∞/FCFS) queue system, this chapter deals with the development of methodology for updating the basic arrival and service time distributions in respect of their prior variations. These updated distributions have been used to study the robust character of various queue characteristics of the (M/M/1):(∞/FCFS) queue system.
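For reference, the constant-rate baseline that this chapter relaxes has familiar closed-form steady-state measures. The following minimal sketch (standard textbook results, not the thesis's randomized-parameter analysis) computes them for an (M/M/1):(∞/FCFS) queue.

```python
def mm1_measures(lam: float, mu: float):
    """Steady-state measures of an (M/M/1):(inf/FCFS) queue with constant
    arrival rate lam and service rate mu; requires rho = lam / mu < 1."""
    rho = lam / mu
    if rho >= 1:
        raise ValueError("steady state requires lam < mu (rho < 1)")
    L = rho / (1 - rho)        # expected number in the system
    Lq = rho ** 2 / (1 - rho)  # expected number in the queue
    W = 1 / (mu - lam)         # expected time in the system
    Wq = rho / (mu - lam)      # expected waiting time in the queue
    return {"rho": rho, "L": L, "Lq": Lq, "W": W, "Wq": Wq}

m = mm1_measures(lam=3.0, mu=5.0)
print(m["L"], m["W"])  # L is approximately 1.5, W = 0.5 (Little's law: L = lam * W)
```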

