Exponential Distribution Theory and Methods


MATHEMATICS RESEARCH DEVELOPMENTS

EXPONENTIAL DISTRIBUTION:

THEORY AND METHODS

    No part of this digital document may be reproduced, stored in a retrieval system or transmitted in any form or

    by any means. The publisher has taken reasonable care in the preparation of this digital document, but makes no

    expressed or implied warranty of any kind and assumes no responsibility for any errors or omissions. No

liability is assumed for incidental or consequential damages in connection with or arising out of information contained herein. This digital document is sold with the clear understanding that the publisher is not engaged in rendering legal, medical or any other professional services.


MATHEMATICS RESEARCH DEVELOPMENTS

Additional books in this series can be found on Nova's website under the Series tab.

Additional E-books in this series can be found on Nova's website under the E-book tab.


MATHEMATICS RESEARCH DEVELOPMENTS

    EXPONENTIAL DISTRIBUTION:

    THEORY AND METHODS

M. AHSANULLAH

AND G.G. HAMEDANI

    Nova Science Publishers, Inc.

    New York


Copyright © 2010 by Nova Science Publishers, Inc.

    All rights reserved. No part of this book may be reproduced, stored in a retrieval system or

transmitted in any form or by any means: electronic, electrostatic, magnetic, tape, mechanical, photocopying, recording or otherwise without the written permission of the Publisher.

    For permission to use material from this book please contact us:

Telephone 631-231-7269; Fax 631-231-8175
Web Site: http://www.novapublishers.com

    NOTICE TO THE READER

    The Publisher has taken reasonable care in the preparation of this book, but makes no expressed

    or implied warranty of any kind and assumes no responsibility for any errors or omissions. No

    liability is assumed for incidental or consequential damages in connection with or arising out of

    information contained in this book. The Publisher shall not be liable for any special,

consequential, or exemplary damages resulting, in whole or in part, from the reader's use of, or

    reliance upon, this material. Any parts of this book based on government reports are so indicated

    and copyright is claimed for those parts to the extent applicable to compilations of such works.

    Independent verification should be sought for any data, advice or recommendations contained in

    this book. In addition, no responsibility is assumed by the publisher for any injury and/or damage

    to persons or property arising from any methods, products, instructions, ideas or otherwise

    contained in this publication.

    This publication is designed to provide accurate and authoritative information with regard to the

    subject matter covered herein. It is sold with the clear understanding that the Publisher is not

    engaged in rendering legal or any other professional services. If legal or any other expert

assistance is required, the services of a competent person should be sought. FROM A DECLARATION OF PARTICIPANTS JOINTLY ADOPTED BY A COMMITTEE OF THE AMERICAN BAR ASSOCIATION AND A COMMITTEE OF PUBLISHERS.

    LIBRARY OF CONGRESS CATALOGING-IN-PUBLICATION DATA

    Ahsanullah, M. (Mohammad)

    Exponential distribution : theory and methods / Mohammad Ahsanullah, G.G.

    Hamedani.

    p. cm.

    Includes bibliographical references and index.

ISBN 978-1-61324-566-8 (eBook)

    1. Distribution (Probability theory) 2. Exponential families (Statistics)

    3. Order statistics. I. Hamedani, G. G. (Gholamhossein G.) II. Title.

    QA273.6.A434 2009

    519.2'4--dc22

    2010016733

    Published by Nova Science Publishers, Inc. New York


    To Masuda, Nisar, Tabassum, Faruk,

    Angela, Sami, Amil and Julian

    MA

    To Azam , Azita , Hooman , Peter ,

    Holly , Zadan and Azara

    GGH


    Contents

    Preface ix

    1. Introduction 1

    1.1 Preliminaries 3

    2. Order Statistics 11

    2.1 Preliminaries and Definitions 11

2.2 Minimum Variance Linear Unbiased Estimators Based on Order Statistics 18

2.3 Minimum Variance Linear Unbiased Predictors (MVLUPs) 24

    2.4 Limiting Distributions 27

    3. Record Values 31

    3.1 Definitions of Record Values and Record Times 31

    3.2 The Exact Distribution of Record Values 31

    3.3 Moments of Record Values 38

    3.4 Estimation of Parameters 44

3.5 Prediction of Record Values 46

3.6 Limiting Distribution of Record Values 48

    4. Generalized Order Statistics 51

    4.1 Definition 51

    4.2 Generalized Order Statistics of Exponential Distribution 52


    5. Characterizations of Exponential Distribution I 65

5.1 Introduction 65

5.2 Characterizations Based on Order Statistics 66

    5.3 Characterizations Based on Generalized Order Statistics 86

    6. Characterizations of Exponential Distribution II 99

    6.1 Characterizations Based on Record Values 99

    6.2 Characterizations Based on Generalized Order Statistics 120

    References 121

    Index 143


    Preface

The univariate exponential distribution is the most commonly used distribution in modeling reliability and life-testing analysis. The exponential distribution is often used to model the failure time of manufactured items in production. If X denotes the time to failure of a light bulb of a particular make, with exponential distribution, then $P(X > x)$ represents the survival probability of the light bulb. The larger the average rate of failure, the smaller the expected failure time. One of the most important properties of the exponential distribution is the memoryless property: $P(X > x + y \mid X > x) = P(X > y)$. Given that a light bulb has survived x units of time, the chance that it survives a further y units of time is the same as that of a fresh light bulb surviving y units of time. In other words, past history has no effect on the light bulb's performance. The exponential distribution is also used to model Poisson processes, in situations in which an object currently in state A can change to state B with constant probability per unit time.

The aim of this book is to present various properties of the exponential distribution and inferences about them. The book is written at a lower technical level and requires only elementary knowledge of algebra and statistics. It will be a unique resource that brings together general as well as special results for the exponential family. Because of the central role that the exponential family of distributions plays in probability and statistics, this book will be a rich and useful resource for probabilists, statisticians and researchers in the related theoretical as well as applied fields. The book consists of six chapters. The first chapter describes some basic properties of the exponential distribution. The second chapter describes order statistics and inferences based on order statistics. Chapter 3 deals with record values and Chapter 4 presents generalized order statistics. Chapters 5 and 6 deal with characterizations of the exponential distribution based on order statistics, record values and generalized order statistics.

A summer research grant and sabbatical leave from Rider University enabled the first author to complete part of the book. The first author expresses his sincere thanks to his wife Masuda for her longstanding support and


encouragement for the preparation of this manuscript. The second author thanks his family for their encouragement during the preparation of this work. He is grateful to Marquette University for partial support during the preparation of part of this book.

    The authors wish to thank Nova Science Publishers for their willingness

    to publish this manuscript.

M. Ahsanullah

G.G. Hamedani


    About the Authors

Dr. M. Ahsanullah is a Professor of Statistics at Rider University. He earned his Ph.D. from North Carolina State University, Raleigh, North Carolina. He is a Fellow of the American Statistical Association and of the Royal Statistical Society, and an elected member of the International Statistical Institute. He is Editor-in-Chief of the Journal of Applied Statistical Science and Co-Editor of the Journal of Statistical Theory and Applications. He has authored and co-authored more than twenty books and published more than 200 research articles in reputable journals. His research areas are record values, order statistics, statistical inference, and characterizations of distributions.

Dr. Hamedani is a Professor of Mathematics and Statistics at Marquette University in Milwaukee, Wisconsin. He received his doctoral degree from Michigan State University, East Lansing, Michigan in 1971. He is Co-Editor of the Journal of Statistical Theory and Applications and a member of the Editorial Boards of the Journal of Applied Statistical Science and the Journal of Applied Mathematics, Statistics and Informatics. Dr. Hamedani has authored or co-authored over 110 research papers in mathematics and statistics journals. His main research areas are characterizations of continuous distributions and differential equations.


    Chapter 1

    Introduction

The exponential family of distributions is a very rich class of distributions with an extensive domain of applicability. The structure of the exponential family allows for the development of important theory, as is shown by the body of work related to this family in the literature.

We will be using some terminology in the next few paragraphs which will be formally defined later in the chapter. To give the reader some idea about the nature of the univariate exponential distribution, let us start with a basic random experiment, a corresponding sample space and a probability measure. We follow the usual notational convention: $X, Y, Z, \ldots$ stand for real-valued random variables; boldface $\mathbf{X}, \mathbf{Y}, \mathbf{Z}, \ldots$ denote vector-valued random variables.

Suppose that X is a real-valued continuous random variable for the basic experiment with cumulative distribution function F and corresponding probability density function f. We perform n independent replications of the basic experiment to generate a random sample of size n from X: $(X_1, X_2, \ldots, X_n)$. These are independent random variables, each with the same distribution as that of X. If the $X_i$'s are exponential random variables with cumulative distribution function $F(x) = 1 - e^{-\lambda x}$, $x \ge 0$, where $\lambda > 0$ is a parameter, then $\sum_{i=1}^{n} X_i$ is distributed as Gamma with parameters n and $\lambda$, and the random variable $2\lambda \sum_{i=1}^{n} X_i$ has a chi-square distribution with $2n$ degrees of freedom. Consider a series system (a system which works only if all the components work) with independent components having common cumulative distribution function $F(x) = 1 - e^{-\lambda x}$, $x \ge 0$, and let T be the life of the system.

  • 8/2/2019 Exponential Distribution Theory and Methods

    16/158

    2 M. Ahsanullah and G.G. Hamedani

Then $P(T > t) = P(\min_{1 \le i \le n} X_i > t) = P(X_1 > t, X_2 > t, \ldots, X_n > t) = \prod_{i=1}^{n} P(X_i > t) = e^{-n\lambda t}$, so T is an exponential random variable with parameter $n\lambda$.
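These two facts — the Gamma distribution of the sum and the exponential distribution of the minimum — can be checked numerically. The following sketch is our own illustration (standard-library Python; the parameter values, seed and tolerances are arbitrary choices, not from the book):

```python
import random
import statistics

random.seed(0)
lam, n, reps = 2.0, 5, 20000

sums, mins = [], []
for _ in range(reps):
    xs = [random.expovariate(lam) for _ in range(n)]
    sums.append(sum(xs))  # should follow Gamma(n, lam): mean n/lam, variance n/lam^2
    mins.append(min(xs))  # should follow E(n*lam): mean 1/(n*lam)

assert abs(statistics.mean(sums) - n / lam) < 0.05
assert abs(statistics.variance(sums) - n / lam ** 2) < 0.1
assert abs(statistics.mean(mins) - 1 / (n * lam)) < 0.01
```

Here `random.expovariate` is parameterized by the rate $\lambda$, matching the cdf $1 - e^{-\lambda x}$ above.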

Let N be a geometric random variable with probability mass function $P(N = k) = pq^{k-1}$, $k = 1, 2, \ldots$, where $p + q = 1$. Now if the $X_i$'s are independent and identically distributed with cumulative distribution function $F(x) = 1 - e^{-\lambda x}$, $x \ge 0$, and if $V = \sum_{i=1}^{N} X_i$ is the geometrically compounded random variable, then $pV \stackrel{d}{=} X_i$ ($\stackrel{d}{=}$ means equal in distribution). To see this, let $L(t)$ be the Laplace transform of V; then

$$L(t) = E\left[E\left(e^{-tV} \mid N\right)\right] = \sum_{k=1}^{\infty} \left(1 + \frac{t}{\lambda}\right)^{-k} p q^{k-1} = \left(1 + \frac{t}{p\lambda}\right)^{-1}.$$

Thus, $pV \stackrel{d}{=} X_i$.
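The geometric-compounding identity can also be seen by simulation. A minimal sketch (our own, stdlib Python; parameter values are arbitrary) draws N geometrically, sums N exponentials, and compares the moments of $pV$ with those of $E(\lambda)$:

```python
import random
import statistics

random.seed(1)
lam, p, reps = 1.0, 0.3, 40000

vals = []
for _ in range(reps):
    # draw N ~ Geometric(p) on {1, 2, ...}: number of trials up to the first success
    n = 1
    while random.random() >= p:
        n += 1
    v = sum(random.expovariate(lam) for _ in range(n))  # V = X_1 + ... + X_N
    vals.append(p * v)  # the claim is that p*V is again E(lam)

# E(lam) has mean 1/lam and variance 1/lam^2
assert abs(statistics.mean(vals) - 1 / lam) < 0.05
assert abs(statistics.variance(vals) - 1 / lam ** 2) < 0.1
```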

Suppose the random variable X has cumulative distribution function $F(x) = 1 - e^{-\lambda x}$, $x \ge 0$, and let $Y = [X]$, the integral part of X. Then Y has the geometric distribution with probability mass function $P(Y = k) = pq^k$, $k = 0, 1, \ldots$, where $p = 1 - e^{-\lambda}$, since

$$P(Y = y) = P(y \le X < y + 1) = F(y + 1) - F(y) = e^{-\lambda y} - e^{-\lambda(y+1)} = \left(1 - e^{-\lambda}\right) e^{-\lambda y}.$$
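The identity $F(k+1) - F(k) = p q^k$ is exact, so it can be verified directly (a small sketch of our own; the value of $\lambda$ is arbitrary):

```python
import math

lam = 0.7
p = 1 - math.exp(-lam)

def F(x):
    # cdf of E(lam)
    return 1 - math.exp(-lam * x)

for k in range(20):
    # P(Y = k) = P(k <= X < k+1) = F(k+1) - F(k), which should equal p * q^k
    assert abs((F(k + 1) - F(k)) - p * math.exp(-lam * k)) < 1e-12

# the geometric probabilities p * q^k, k = 0, 1, ..., sum to 1
assert abs(sum(p * math.exp(-lam * k) for k in range(2000)) - 1.0) < 1e-9
```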

Let $X_{k,n}$ denote the kth smallest of $(X_1, X_2, \ldots, X_n)$. Note that $X_{k,n}$ is a function of the sample variables, and hence is a statistic, called the kth order statistic. Our goal in Chapter 2 is to study the distribution of the order statistics, their properties and their applications. Note that the extreme order statistics are the minimum and maximum values:

$$X_{1,n} = \min\{X_1, X_2, \ldots, X_n\} \quad \text{and} \quad X_{n,n} = \max\{X_1, X_2, \ldots, X_n\}.$$

If X has cumulative distribution function $F(x) = 1 - e^{-\lambda x}$, $x \ge 0$, then $F_{1,n}(x) = P(X_{1,n} \le x) = 1 - e^{-n\lambda x}$ and $F_{n,n}(x) = P(X_{n,n} \le x) = \left(1 - e^{-\lambda x}\right)^n$.
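The two extreme-value cdfs can be checked against empirical frequencies (our own sketch, stdlib Python, arbitrary parameters):

```python
import math
import random

random.seed(2)
lam, n, reps, x = 1.5, 4, 30000, 0.5

cnt_min = cnt_max = 0
for _ in range(reps):
    xs = [random.expovariate(lam) for _ in range(n)]
    cnt_min += min(xs) <= x
    cnt_max += max(xs) <= x

# F_{1,n}(x) = 1 - exp(-n*lam*x) and F_{n,n}(x) = (1 - exp(-lam*x))^n
assert abs(cnt_min / reps - (1 - math.exp(-n * lam * x))) < 0.02
assert abs(cnt_max / reps - (1 - math.exp(-lam * x)) ** n) < 0.02
```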

Record values arise naturally in many real-life applications such as in sports, the environment, economics and business, to name a few. Let X be a random variable. We keep drawing observations from X and, from time to time, an observation will be larger than all the previously drawn observations: this observation is


then called a record, and its value a record value or, more precisely, an upper record value. The first observation is obviously a record; we call it the first record. The second upper record is the first observation whose value is larger than that of the first one. We can define lower records similarly by considering lower values. In Chapter 3 we will study record values, in particular when the underlying random variable X has an exponential distribution.

Order statistics and record values are special cases of generalized order statistics, and many of their properties can be obtained from those of generalized order statistics. In Chapter 4, we present generalized order statistics of the exponential distribution.

The problem of characterizing a distribution is an important one which has attracted the attention of many researchers in recent years. Consequently, various characterization results have been reported in the literature. These characterizations have been established in many different directions. The goal of Chapters 5 and 6 is to present characterizations of the exponential distribution based on order statistics and on generalized order statistics (Chapter 5), as well as based on record values (Chapter 6).

For the sake of self-containment, we mention here some elementary definitions, with which most readers may very well be familiar. Readers with a knowledge of introductory probability theory may skip this chapter altogether and go straight to the next chapter.

    1.1. Preliminaries

Definition 1.1.1. A random or chance experiment is an operation whose outcome cannot be predicted with certainty.

We denote a random experiment by E. Throughout this book, "experiment" means random experiment.


    Examples 1.1.2.

    (a) Flipping a coin once.

    (b) Rolling a die once.

Definition 1.1.3. The set of all possible outcomes of an experiment E is called the sample space for E and is denoted by S.

Examples 1.1.4. Sample spaces corresponding to Examples (a) and (b) above are:

$$S_a = \{H, T\},\ H \text{ for heads and } T \text{ for tails}; \qquad S_b = \{1, 2, \ldots, 6\}.$$

Note that the set {even, odd} is also an acceptable sample space for E of Example 1.1.2 (b), so the sample space is not unique.

Definition 1.1.5. An event is a collection of outcomes of an experiment. Hence every subset of the sample space is an event.

We denote events by capital letters A, B, C, .... Two events are called mutually exclusive if they have no common elements.

Definition 1.1.6. A probability function is a real-valued set function defined on the power set of S, denoted $\mathcal{P}(S)$, whose range is a subset of $[0, 1]$, i.e.

$$P : \mathcal{P}(S) \to [0, 1],$$

satisfying the following axioms of probability:

(i) $P(A) \ge 0$ for any $A \in \mathcal{P}(S)$;
(ii) $P(S) = 1$;
(iii) if $A_1, A_2, \ldots$ is a sequence (finite or infinite) of mutually exclusive events (subsets) of S (or elements of $\mathcal{P}(S)$), then

$$P(A_1 \cup A_2 \cup \cdots) = P(A_1) + P(A_2) + \cdots.$$

Definition 1.1.7. A random variable (rv for short) is a real-valued function defined on S, a sample space for an experiment E.

We denote rvs by capital letters $X, Y, Z, \ldots$ (as mentioned before) and their values by lower-case letters $x, y, z, \ldots$. The range of a rv X is the set of all possible values of X and is denoted by $R(X)$.


Definition 1.1.8. A rv X is called

(i) discrete if $R(X)$ is countable;
(ii) continuous if $R(X)$ is an interval and $P(X = x) = 0$ for all $x \in R(X)$;
(iii) mixed if X is neither discrete nor continuous.

Definition 1.1.9. Let X be a rv. The cumulative distribution function (cdf) of X, denoted by $F_X$, is a real-valued function defined on $\mathbb{R}$ whose range is a subset of $[0, 1]$. $F_X$ is defined by

$$F_X(t) = P(X \le t), \quad t \in \mathbb{R}.$$

Properties of the cdf $F_X$:

(i) $\lim_{t \to -\infty} F_X(t) = 0$ and $\lim_{t \to +\infty} F_X(t) = 1$;
(ii) $F_X$ is non-decreasing on $\mathbb{R}$;
(iii) $F_X$ is right-continuous on $\mathbb{R}$.

Proposition 1.1.10. The set of discontinuity points of a distribution function is at most countable.

Remark 1.1.11. A point x is said to belong to the support of the cdf F if and only if for every $\varepsilon > 0$, $F(x + \varepsilon) - F(x - \varepsilon) > 0$. The set of all such points is called the support of F and is denoted by Supp F.

We will restrict our attention, throughout this book, to continuous rvs, in particular the exponential rv.

Definition 1.1.12. Let X be a continuous rv with cdf $F_X$. Then the probability density function (pdf) of X (or pdf corresponding to the cdf $F_X$) is denoted by $f_X$ and is defined by

$$f_X(t) = \begin{cases} \frac{d}{dt} F_X(t), & \text{if the derivative exists}, \\ 0, & \text{otherwise}. \end{cases}$$

Remark 1.1.13. Since $F_X$ is continuous and non-decreasing, its derivative exists for all t, except possibly for at most a countable number of points in $\mathbb{R}$. We define $f_X(t) = 0$ at those points.


Properties of the pdf $f_X$:

(i) $f_X(t) \ge 0$ for all $t \in \mathbb{R}$;
(ii) $\int_{\mathbb{R}} f_X(t)\, dt = 1$.

Definition 1.1.14. The rv X has an exponential distribution with location parameter $\mu$ ($-\infty < \mu < \infty$) and scale parameter $\sigma$ ($\sigma > 0$) if its cdf is given by

$$F_X(t) = \begin{cases} 0, & t < \mu, \\ 1 - e^{-\lambda(t - \mu)}, & t \ge \mu, \end{cases}$$

where $\lambda = \frac{1}{\sigma}$.

It is clear that $\frac{d}{dt} F_X(t)$ exists everywhere except at $t = \mu$, so the corresponding pdf of $F_X$ is given by

$$f_X(t) = \begin{cases} \lambda e^{-\lambda(t - \mu)}, & t > \mu, \\ 0, & \text{otherwise}. \end{cases}$$

Figure 1.1. Graph of $F_X$ for $\mu = 0$ and different values of $\sigma$.


Figure 1.2. Graph of $f_X$ for $\mu = 0$ and different values of $\sigma$.

We use the notation $X \sim E(\mu, \sigma)$ for such a rv. The rv $X \sim E(0, \sigma)$ will be denoted by $X \sim E(\sigma)$, and we use the notation $X \sim E(1)$ for the standard exponential random variable.

We observe that the condition $P(X > s + t \mid X > s) = P(X > t)$ is equivalent to $1 - F(s + t) = (1 - F(s))(1 - F(t))$. Now, if X is a non-negative and non-degenerate rv satisfying this condition, then the cdf of X will be $F(x) = 1 - e^{-\lambda x}$, $x \ge 0$. To see this, note that the condition $1 - F(s + t) = (1 - F(s))(1 - F(t))$ leads to the condition

$$1 - F(nx) = (1 - F(x))^n, \quad \text{for all } n \ge 1 \text{ and all } x \ge 0,$$

that is, $1 - F(x) = \left(1 - F\left(\frac{x}{n}\right)\right)^n$. The solution of this last equation with boundary conditions $F(0) = 0$ and $F(\infty) = 1$ is $F(x) = 1 - e^{-\lambda x}$.
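The functional equation above is just the memoryless property in disguise, and it is easy to check empirically. A small simulation sketch of our own (stdlib Python; the values of s, t and the seed are arbitrary):

```python
import random

random.seed(3)
lam, s, t, reps = 1.0, 0.7, 1.2, 100000

draws = [random.expovariate(lam) for _ in range(reps)]
survivors = [x for x in draws if x > s]

# compare P(X > s + t | X > s) with P(X > t)
cond = sum(x > s + t for x in survivors) / len(survivors)
uncond = sum(x > t for x in draws) / reps
assert abs(cond - uncond) < 0.02
```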

The hazard rate $f(x)/(1 - F(x))$ is constant for $E(\mu, \sigma)$. In fact, $E(\mu, \sigma)$ is the only family of continuous distributions with constant hazard rate. It can easily be shown that a constant ($\lambda$) hazard rate of a continuous cdf F, together with the boundary conditions $F(0) = 0$ and $F(\infty) = 1$, implies that $F(x) = 1 - e^{-\lambda x}$.


The linear exponential distribution, which has increasing hazard rate, has pdf of the form

$$f(x) = (\lambda + \nu x)\, e^{-(\lambda x + \nu x^2/2)}, \quad \lambda, \nu > 0,\ x \ge 0,$$

and the corresponding cdf is $F(x) = 1 - e^{-(\lambda x + \nu x^2/2)}$, $\lambda, \nu > 0$, $x \ge 0$. The hazard rate is $\lambda + \nu x$. If $\nu = 0$, then it reduces to the exponential with cdf $F(x) = 1 - e^{-\lambda x}$.

If $X \sim E(\sigma)$, then $P(X > s + t \mid X > s) = P(X > t)$ for all $s, t \ge 0$. This property is known as the memoryless property of the exponential random variable (or distribution).

The pth quantile of a rv X is defined by $F^{-1}(p)$. For $X \sim E(\sigma)$, we have $F^{-1}(p) = -\frac{\ln(1 - p)}{\lambda}$. The first, second and third quartiles are $\frac{1}{\lambda} \ln \frac{4}{3}$, $\frac{1}{\lambda} \ln 2$ and $\frac{1}{\lambda} \ln 4$, respectively.
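The quantile formula and the three quartile values are exact, so they can be verified directly (our own sketch; the rate value is arbitrary):

```python
import math

lam = 2.0

def F(x):
    # cdf of the exponential with rate lam
    return 1 - math.exp(-lam * x)

def F_inv(p):
    # quantile function: F_inv(p) = -ln(1 - p) / lam
    return -math.log(1 - p) / lam

q1, q2, q3 = F_inv(0.25), F_inv(0.5), F_inv(0.75)
assert abs(q1 - math.log(4 / 3) / lam) < 1e-12
assert abs(q2 - math.log(2) / lam) < 1e-12
assert abs(q3 - math.log(4) / lam) < 1e-12
assert abs(F(F_inv(0.9)) - 0.9) < 1e-12  # F and F_inv are inverses
```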

Definition 1.1.15. Let X be a continuous rv with pdf $f_X$; then the rth moment of X about the origin is defined by

$$\mu'_r = E[X^r] = \int_{\mathbb{R}} x^r f_X(x)\, dx, \quad r = 0, 1, \ldots,$$

provided the integral is absolutely convergent.

Note that throughout this book we will use the notation $E[h(X)] = \int_{\mathbb{R}} h(x)\, dF_X(x)$ for the expected value of the rv $h(X)$.

Remarks 1.1.16.

(a) $\mu'_0 = 1$, and $\mu'_1 = E[X]$ is the expected value or mean of X. $\sigma^2_X = \mu'_2 - \mu'^2_1$ is the variance of X, and $\sigma_X$ is the standard deviation of X.

(b) The rth moment of X about $\mu_X = \mu'_1$ is defined by

$$\mu_r = E[(X - \mu_X)^r] = \int_{\mathbb{R}} (x - \mu_X)^r f_X(x)\, dx, \quad r = 1, 2, \ldots,$$

provided the right-hand side (RHS) exists. Note that $\mu_2 = \sigma^2_X$.

(c) It is easy to show that from the $\mu_r$'s one can calculate the $\mu'_r$'s and vice versa. In fact, if the moments about any real number a are known, then moments about

    In fact if the moments about any real number a are known, then moments about


any other real number b can be calculated from those about a. Moments about zero, the $\mu'_r$'s, are the most commonly used moments.

Example 1.1.17. Let $X \sim E(\sigma)$. Find all the moments of X which exist.

Solution:

$$\mu'_r = \int_0^\infty x^r \lambda e^{-\lambda x}\, dx = \frac{\Gamma(r + 1)}{\lambda^r}, \quad r = 1, 2, \ldots.$$
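The closed form $\mu'_r = r!/\lambda^r$ can be confirmed by numerical integration of the defining integral (our own sketch; the rate, integration range and step count are arbitrary choices):

```python
import math

lam = 1.5

def moment(r, upper=60.0, steps=200000):
    # trapezoidal integration of x^r * lam * exp(-lam*x) over [0, upper];
    # the tail beyond `upper` is negligible for this lam
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        x = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * x ** r * lam * math.exp(-lam * x)
    return total * h

for r in range(1, 5):
    # mu'_r = Gamma(r + 1) / lam^r = r! / lam^r
    assert abs(moment(r) - math.factorial(r) / lam ** r) < 1e-6
```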

Definition 1.1.18. Let X be a continuous rv with pdf $f_X$. The MGF (moment generating function) of X, denoted by $M_X(t)$, is defined by

$$M_X(t) = E\left[e^{tX}\right] = \int_{\mathbb{R}} e^{tx} f_X(x)\, dx,$$

for those t's for which the RHS exists.

Properties of the MGF:

(i) $M_X(0) = 1$;
(ii) $M_X^{(r)}(0) = \mu'_r$, $r = 1, 2, \ldots$, where $M_X^{(r)}(0)$ is the rth derivative of the MGF evaluated at 0.

Example 1.1.19. For $X \sim E(\mu, \sigma)$, the MGF is

$$M_X(t) = \int_\mu^\infty e^{tx} \lambda e^{-\lambda(x - \mu)}\, dx = e^{t\mu} \int_0^\infty \lambda e^{-(\lambda - t)x}\, dx = e^{t\mu} \lambda (\lambda - t)^{-1}, \quad \text{if } t < \lambda,$$

from which we obtain

$$\mu'_1 = M_X^{(1)}(0) = \mu + \frac{1}{\lambda}, \qquad \mu'_2 = M_X^{(2)}(0) = \mu^2 + \frac{2\mu}{\lambda} + \frac{2}{\lambda^2}.$$

So, $\mu_X = \mu + \frac{1}{\lambda}$, $\sigma^2_X = \mu^2 + \frac{2\mu}{\lambda} + \frac{2}{\lambda^2} - \left(\mu + \frac{1}{\lambda}\right)^2 = \frac{1}{\lambda^2}$, and $\sigma_X = \frac{1}{\lambda}$.

For $X \sim E(\sigma)$, $M_X(t) = \lambda(\lambda - t)^{-1}$, if $t < \lambda$, and

$$M_X^{(r)}(t) = (r!)\, \lambda (\lambda - t)^{-(r+1)}, \quad \text{for } r = 1, 2, \ldots.$$


Then

$$\mu'_r = \frac{r!}{\lambda^r} = \frac{r}{\lambda}\, \mu'_{r-1},$$

which is a recurrence relation for the moments of $E(\sigma)$.

The nth cumulant of a rv X is defined by $K_n = \frac{d^n}{dt^n} \ln M_X(t)\big|_{t=0}$, where ln denotes the natural logarithm. For $X \sim E(\sigma)$, $M_X(t) = \lambda(\lambda - t)^{-1}$, $t < \lambda$, and $K_n = \frac{(n-1)!}{\lambda^n}$.

Remarks 1.1.20. If $X_1, X_2, \ldots, X_n$ form an independent sample from an exponential distribution with parameter $\lambda$, then

(i) the method of moments estimator of $\lambda$ is $\hat{\lambda} = \frac{1}{\bar{X}}$, where $\bar{X} = \frac{1}{n} \sum_{i=1}^{n} X_i$;

(ii) the maximum likelihood estimator of $\lambda$ is also $\hat{\lambda} = \frac{1}{\bar{X}}$;

(iii) the entropy of $E(\lambda)$ is $1 - \ln \lambda$.

For $E(\mu, \sigma)$, the maximum likelihood estimators of $\mu$ and $\lambda$ are given by $\hat{\mu} = X_{1,n}$ and $\hat{\lambda} = 1/(\bar{X} - X_{1,n})$, respectively, where, as mentioned before, $X_{1,n} = \min\{X_1, X_2, \ldots, X_n\}$. The entropy of $E(\mu, \sigma)$, denoted by $\mathcal{E}_X$, is

$$\mathcal{E}_X = \int_{-\infty}^{\infty} (-\ln f(x))\, f(x)\, dx = -\int_\mu^\infty \ln\left(\lambda e^{-\lambda(x - \mu)}\right) \lambda e^{-\lambda(x - \mu)}\, dx = 1 - \ln \lambda,$$

which does not depend on the location parameter $\mu$. It is the same as the entropy of the exponential distribution $E(\lambda)$.
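These estimators and the entropy value can be illustrated by simulation. The sketch below is our own (stdlib Python; sample size, seed and tolerances are arbitrary), not a recipe from the book:

```python
import math
import random
import statistics

random.seed(4)
mu, lam, n = 2.0, 4.0, 50000  # location mu, rate lam = 1/sigma

xs = [mu + random.expovariate(lam) for _ in range(n)]

mu_hat = min(xs)                               # MLE of mu: the sample minimum
lam_hat = 1 / (statistics.mean(xs) - min(xs))  # MLE of lam

assert abs(mu_hat - mu) < 0.01   # X_{1,n} - mu is E(n*lam), so very close to 0
assert abs(lam_hat - lam) < 0.1

# Monte Carlo estimate of the entropy -E[ln f(X)]; exact value is 1 - ln(lam)
ent = -statistics.mean(math.log(lam) - lam * (x - mu) for x in xs)
assert abs(ent - (1 - math.log(lam))) < 0.05
```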


    Chapter 2

    Order Statistics

    2.1. Preliminaries and Definitions

Let $X_1, X_2, \ldots, X_n$ be n independent and identically distributed (i.i.d.) rvs with common cdf F and pdf f. Let $X_{1,n} \le X_{2,n} \le \cdots \le X_{n,n}$ denote the order statistics corresponding to $X_1, X_2, \ldots, X_n$. We call $X_{k,n}$, $1 \le k \le n$, the kth order statistic based on a sample $X_1, X_2, \ldots, X_n$. The joint pdf of the order statistics $X_{1,n}, X_{2,n}, \ldots, X_{n,n}$ has the form

$$f_{1,2,\ldots,n:n}(x_1, x_2, \ldots, x_n) = \begin{cases} n! \prod_{k=1}^{n} f(x_k), & -\infty < x_1 < x_2 < \cdots < x_n < \infty, \\ 0, & \text{otherwise}. \end{cases} \tag{2.1.1}$$

Let $f_{k:n}$ denote the pdf of $X_{k,n}$. From (2.1.1) we have

$$f_{k:n}(x) = \int \cdots \int f_{1,2,\ldots,n:n}(x_1, \ldots, x_{k-1}, x, x_{k+1}, \ldots, x_n)\, dx_1 \cdots dx_{k-1}\, dx_{k+1} \cdots dx_n$$
$$= n!\, f(x) \int \cdots \int \prod_{j=1}^{k-1} f(x_j) \prod_{j=k+1}^{n} f(x_j)\, dx_1 \cdots dx_{k-1}\, dx_{k+1} \cdots dx_n, \tag{2.1.2}$$

where the integration is over the domain

$$-\infty < x_1 < \cdots < x_{k-1} < x < x_{k+1} < \cdots < x_n < \infty.$$


The symmetry of $\prod_{j=1}^{k-1} f(x_j)$ with respect to $x_1, \ldots, x_{k-1}$ and that of $\prod_{j=k+1}^{n} f(x_j)$ with respect to $x_{k+1}, \ldots, x_n$ help us to evaluate the integral on the RHS of (2.1.2) as follows:

$$\int \cdots \int \prod_{j=1}^{k-1} f(x_j) \prod_{j=k+1}^{n} f(x_j)\, dx_1 \cdots dx_{k-1}\, dx_{k+1} \cdots dx_n$$
$$= \frac{1}{(k-1)!} \prod_{j=1}^{k-1} \int_{-\infty}^{x} f(x_j)\, dx_j \cdot \frac{1}{(n-k)!} \prod_{j=k+1}^{n} \int_{x}^{\infty} f(x_j)\, dx_j$$
$$= (F(x))^{k-1} (1 - F(x))^{n-k} / (k-1)!\,(n-k)!. \tag{2.1.3}$$

Combining (2.1.2) and (2.1.3), we arrive at

$$f_{k:n}(x) = \frac{n!}{(k-1)!\,(n-k)!}\, (F(x))^{k-1} (1 - F(x))^{n-k} f(x). \tag{2.1.4}$$

Clearly, equality (2.1.3) follows immediately from the corresponding formula for the cdfs of single order statistics, but the technique which we used to arrive at (2.1.3) is applicable in more complicated situations; the following result illustrates this.
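Formula (2.1.4) can be cross-checked against the binomial-tail expression for the cdf of an order statistic, $P(X_{k,n} \le x) = \sum_{j=k}^{n} \binom{n}{j} F(x)^j (1 - F(x))^{n-j}$. The sketch below is our own consistency check for the exponential case (arbitrary parameter values):

```python
import math

lam, n, k, x0 = 1.0, 6, 3, 0.8

def F(x):
    return 1 - math.exp(-lam * x)

def f(x):
    return lam * math.exp(-lam * x)

def f_kn(x):
    # formula (2.1.4): pdf of the kth order statistic
    c = math.factorial(n) / (math.factorial(k - 1) * math.factorial(n - k))
    return c * F(x) ** (k - 1) * (1 - F(x)) ** (n - k) * f(x)

# integrate (2.1.4) from 0 to x0 by the trapezoidal rule
steps = 100000
h = x0 / steps
integral = sum((0.5 if i in (0, steps) else 1.0) * f_kn(i * h)
               for i in range(steps + 1)) * h

# P(X_{k,n} <= x0) = P(at least k of the n draws are <= x0): a binomial tail
tail = sum(math.comb(n, j) * F(x0) ** j * (1 - F(x0)) ** (n - j)
           for j in range(k, n + 1))
assert abs(integral - tail) < 1e-8
```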

The joint pdf $f_{k(1),k(2),\ldots,k(r):n}(x_1, x_2, \ldots, x_r)$ of the order statistics $X_{k(1),n}, X_{k(2),n}, \ldots, X_{k(r),n}$, where $1 \le k(1) < k(2) < \cdots < k(r) \le n$, is given by

$$f_{k(1),k(2),\ldots,k(r):n}(x_1, x_2, \ldots, x_r) = \frac{n!}{\prod_{j=1}^{r+1} (k(j) - k(j-1) - 1)!} \prod_{j=1}^{r+1} \left(F(x_j) - F(x_{j-1})\right)^{k(j)-k(j-1)-1} \prod_{j=1}^{r} f(x_j),$$

if $x_1 < x_2 < \cdots < x_r$, and $= 0$ otherwise (with the conventions $k(0) = 0$, $k(r+1) = n+1$, $x_0 = -\infty$ and $x_{r+1} = \infty$). In particular, if $r = 2$, $1 \le i < j \le n$, and $x_1 < x_2$, then

$$f_{i,j:n}(x_1, x_2) = \frac{n!}{(i-1)!\,(j-i-1)!\,(n-j)!}\, (F(x_1))^{i-1} [F(x_2) - F(x_1)]^{j-i-1} [1 - F(x_2)]^{n-j} f(x_1) f(x_2).$$


The conditional pdf of $X_{j,n}$ given $X_{i,n} = x_1$ is

$$f_{j|i,n}(x_2 \mid x_1) = \frac{(n-i)!}{(j-i-1)!\,(n-j)!} \left(\frac{F(x_2) - F(x_1)}{1 - F(x_1)}\right)^{j-i-1} \left(\frac{1 - F(x_2)}{1 - F(x_1)}\right)^{n-j} \frac{f(x_2)}{1 - F(x_1)}.$$

Thus, $X_{j,n}$ given $X_{i,n} = x_1$ is the $(j-i)$th order statistic in a sample of $n - i$ from the truncated distribution with cdf $F_c(x_2 \mid x_1) = \frac{F(x_2) - F(x_1)}{1 - F(x_1)}$. For $F(x) = 1 - e^{-\lambda x}$, $x \ge 0$, we will have $F_c(x_2 \mid x_1) = 1 - e^{-\lambda(x_2 - x_1)}$, $x_2 \ge x_1$.

If $X \sim E(1)$ and $Z_{1,n} \le Z_{2,n} \le \cdots \le Z_{n,n}$ are the n order statistics corresponding to a sample of size n from X, then it can be shown that the joint pdf of $Z_{1,n}, Z_{2,n}, \ldots, Z_{n,n}$ is

$$f_{1,2,\ldots,n}(z_1, z_2, \ldots, z_n) = n!\, e^{-\sum_{i=1}^{n} z_i}, \quad 0 \le z_1 \le z_2 \le \cdots \le z_n,$$


and, using the representation $Z_{k,n} = \sum_{i=1}^{k} \frac{W_i}{n-i+1}$, where $W_1, W_2, \ldots, W_n$ are i.i.d. $E(1)$ rvs,

$$\mathrm{Cov}(Z_{k,n}, Z_{s,n}) = \sum_{i=1}^{k} \mathrm{Var}\left(\frac{W_i}{n-i+1}\right) = \sum_{i=1}^{k} \frac{1}{(n-i+1)^2}, \quad k \le s.$$

Furthermore, letting $\mu^{(k)}_{i,n} = E\left[X^k_{i,n}\right]$, $k \ge 1$, $n \ge 1$, for order statistics from the standard exponential distribution, we have the following theorems (see Joshi (1978)).

Theorem 2.1.1. $\mu^{(k)}_{1,n} = \frac{k}{n}\, \mu^{(k-1)}_{1,n}$, $k \ge 1$, $n \ge 1$.

Proof.

$$\mu^{(k)}_{1,n} = \int_0^\infty x^k\, n e^{-nx}\, dx = \left[-x^k e^{-nx}\right]_0^\infty + \int_0^\infty k x^{k-1} e^{-nx}\, dx = \frac{k}{n}\, \mu^{(k-1)}_{1,n}.$$

Theorem 2.1.2. $\mu^{(k)}_{i,n} = \mu^{(k)}_{i-1,n-1} + \frac{k}{n}\, \mu^{(k-1)}_{i,n}$, $k \ge 1$, $2 \le i \le n$.

Proof. For $k \ge 1$ and $2 \le i \le n$,

$$\mu^{(k-1)}_{i,n} = \frac{n!}{(i-1)!\,(n-i)!} \int_0^\infty x^{k-1} \left(1 - e^{-x}\right)^{i-1} e^{-(n-i+1)x}\, dx.$$

Integrating by parts, we obtain

$$\mu^{(k-1)}_{i,n} = \frac{n!}{(i-1)!\,(n-i)!}\, \frac{1}{k} \left[\int_0^\infty (n-i+1)\, x^k \left(1 - e^{-x}\right)^{i-1} e^{-(n-i+1)x}\, dx - \int_0^\infty (i-1)\, x^k \left(1 - e^{-x}\right)^{i-2} e^{-(n-i+2)x}\, dx\right]$$
$$= \frac{n!}{(i-1)!\,(n-i)!}\, \frac{1}{k} \left[n \int_0^\infty x^k \left(1 - e^{-x}\right)^{i-1} e^{-(n-i+1)x}\, dx - (i-1) \int_0^\infty x^k \left(1 - e^{-x}\right)^{i-2} e^{-(n-i+1)x}\, dx\right]$$
$$= \frac{n}{k}\, \mu^{(k)}_{i,n} - \frac{n}{k}\, \mu^{(k)}_{i-1,n-1}.$$

Thus,

$$\mu^{(k)}_{i,n} = \mu^{(k)}_{i-1,n-1} + \frac{k}{n}\, \mu^{(k-1)}_{i,n}.$$
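Both recurrences can be verified numerically by computing $E[X^k_{i,n}]$ directly from the marginal pdf (2.1.4) for the standard exponential. A sketch of our own (arbitrary choices of n, k, i and integration settings):

```python
import math

def mom(k, i, n, upper=50.0, steps=200000):
    # E[X_{i,n}^k] for standard exponential order statistics, via (2.1.4)
    # and trapezoidal integration
    c = math.factorial(n) / (math.factorial(i - 1) * math.factorial(n - i))
    h = upper / steps
    total = 0.0
    for j in range(steps + 1):
        x = j * h
        w = 0.5 if j in (0, steps) else 1.0
        total += w * x ** k * c * (1 - math.exp(-x)) ** (i - 1) \
                 * math.exp(-(n - i + 1) * x)
    return total * h

n = 5
# Theorem 2.1.1: mu^(k)_{1,n} = (k/n) * mu^(k-1)_{1,n}, here with k = 2
assert abs(mom(2, 1, n) - (2 / n) * mom(1, 1, n)) < 1e-6
# Theorem 2.1.2: mu^(k)_{i,n} = mu^(k)_{i-1,n-1} + (k/n) * mu^(k-1)_{i,n},
# here with k = 2, i = 3
assert abs(mom(2, 3, n) - (mom(2, 2, n - 1) + (2 / n) * mom(1, 3, n))) < 1e-6
```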


Theorem 2.1.3. Let $\mu_{i,j,n} = E[X_{i,n} X_{j,n}]$, $1 \le i < j \le n$; then

$$\mu_{i,i+1,n} = \mu^{(2)}_{i,n} + \frac{1}{n-i}\, \mu_{i,n}, \quad 1 \le i \le n-1,$$

and

$$\mu_{i,j,n} = \mu_{i,j-1,n} + \frac{1}{n-j+1}\, \mu_{i,n}, \quad 1 \le i < j \le n,\ j - i \ge 2.$$

Proof.

$$\mu_{i,n} = E\left[X_{i,n} X^0_{i+1,n}\right] = \frac{n!}{(i-1)!\,(n-i-1)!} \int_0^\infty \int_{x_i}^\infty x_i \left(1 - e^{-x_i}\right)^{i-1} e^{-x_i}\, e^{-(n-i)x_{i+1}}\, dx_{i+1}\, dx_i$$
$$= \frac{n!}{(i-1)!\,(n-i-1)!} \int_0^\infty x_i \left(1 - e^{-x_i}\right)^{i-1} e^{-x_i}\, I_{x_i}\, dx_i,$$

where

$$I_{x_i} = \int_{x_i}^\infty e^{-(n-i)x_{i+1}}\, dx_{i+1} = \left[x_{i+1} e^{-(n-i)x_{i+1}}\right]_{x_i}^\infty + (n-i) \int_{x_i}^\infty x_{i+1} e^{-(n-i)x_{i+1}}\, dx_{i+1}.$$

Thus,

$$\mu_{i,n} = \frac{n!\,(n-i)}{(i-1)!\,(n-i-1)!} \int_0^\infty \int_{x_i}^\infty x_i x_{i+1} \left(1 - e^{-x_i}\right)^{i-1} e^{-x_i}\, e^{-(n-i)x_{i+1}}\, dx_{i+1}\, dx_i - \frac{n!}{(i-1)!\,(n-i-1)!} \int_0^\infty x_i^2 \left(1 - e^{-x_i}\right)^{i-1} e^{-(n-i+1)x_i}\, dx_i$$
$$= (n-i)\, \mu_{i,i+1,n} - (n-i)\, \mu^{(2)}_{i,n}.$$

Upon simplification, we obtain

$$\mu_{i,i+1,n} = \mu^{(2)}_{i,n} + \frac{1}{n-i}\, \mu_{i,n}, \quad 1 \le i \le n-1.$$

For $j > i + 1$,

$$\mu_{i,n} = E\left[X_{i,n} X^0_{j,n}\right] = \frac{n!}{(i-1)!\,(j-i-1)!\,(n-j)!} \times$$


$$\int_0^\infty \int_{x_i}^\infty x_i \left(1 - e^{-x_i}\right)^{i-1} \left(e^{-x_i} - e^{-x_j}\right)^{j-i-1} e^{-x_i}\, e^{-(n-j+1)x_j}\, dx_j\, dx_i = \frac{n!}{(i-1)!\,(j-i-1)!\,(n-j)!} \int_0^\infty x_i \left(1 - e^{-x_i}\right)^{i-1} e^{-x_i}\, J_{x_i}\, dx_i,$$

where

$$J_{x_i} = \int_{x_i}^\infty \left(e^{-x_i} - e^{-x_j}\right)^{j-i-1} e^{-(n-j+1)x_j}\, dx_j$$
$$= (n-j+1) \int_{x_i}^\infty x_j \left(e^{-x_i} - e^{-x_j}\right)^{j-i-1} e^{-(n-j+1)x_j}\, dx_j - (j-i-1) \int_{x_i}^\infty x_j \left(e^{-x_i} - e^{-x_j}\right)^{j-i-2} e^{-(n-j+2)x_j}\, dx_j.$$

Thus,

$$\mu_{i,n} = \frac{n!}{(i-1)!\,(j-i-1)!\,(n-j)!} \int_0^\infty x_i \left(1 - e^{-x_i}\right)^{i-1} e^{-x_i} \left[(n-j+1) \int_{x_i}^\infty x_j \left(e^{-x_i} - e^{-x_j}\right)^{j-i-1} e^{-(n-j+1)x_j}\, dx_j - (j-i-1) \int_{x_i}^\infty x_j \left(e^{-x_i} - e^{-x_j}\right)^{j-i-2} e^{-(n-j+2)x_j}\, dx_j\right] dx_i$$
$$= (n-j+1)\, \mu_{i,j,n} - (n-j+1)\, \mu_{i,j-1,n}.$$

Upon simplification, we arrive at

$$\mu_{i,j,n} = \mu_{i,j-1,n} + \frac{1}{n-j+1}\, \mu_{i,n}, \quad 1 \le i < j \le n,\ j - i \ge 2.$$

The relation, in this case, to a uniform rv is interesting. If we let U be a uniformly distributed rv on $(0, 1)$ and $U_{i,n}$ is the ith order statistic from U, then it can be shown that

$$X_{i,n} \stackrel{d}{=} -\ln U_{n-i+1,n}, \quad \text{or equivalently,} \quad X_{i,n} \stackrel{d}{=} -\ln(1 - U_{i,n}).$$
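The uniform-to-exponential relation can be illustrated by simulation: transformed uniform order statistics and directly sampled exponential order statistics should share the same distribution. A sketch of our own (arbitrary n, i and seed; the exact mean $E[X_{2,5}] = 1/5 + 1/4$ follows from the $W_i$ representation above):

```python
import math
import random
import statistics

random.seed(5)
n, i, reps = 5, 2, 20000

a, b = [], []
for _ in range(reps):
    us = sorted(random.random() for _ in range(n))
    a.append(-math.log(1 - us[i - 1]))                  # -ln(1 - U_{i,n})
    xs = sorted(random.expovariate(1.0) for _ in range(n))
    b.append(xs[i - 1])                                 # X_{i,n} from E(1)

exact_mean = 1 / n + 1 / (n - 1)  # E[X_{2,5}] = 1/5 + 1/4
assert abs(statistics.mean(a) - exact_mean) < 0.02
assert abs(statistics.mean(b) - exact_mean) < 0.02
```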

Let

$$f_{1,\ldots,r-1,r+1,\ldots,n|r}(x_1, \ldots, x_{r-1}, x_{r+1}, \ldots, x_n \mid v)$$

denote the joint conditional pdf of the order statistics $X_{1,n}, \ldots, X_{r-1,n}, X_{r+1,n}, \ldots, X_{n,n}$ given that $X_{r,n} = v$. We suppose that $f_{r:n}(v) > 0$ for this value of


v, where $f_{r:n}$, as usual, denotes the pdf of $X_{r,n}$. The standard procedure gives us the required pdf:

$$f_{1,\ldots,r-1,r+1,\ldots,n|r}(x_1, \ldots, x_{r-1}, x_{r+1}, \ldots, x_n \mid v) = f_{1,2,\ldots,n:n}(x_1, \ldots, x_{r-1}, v, x_{r+1}, \ldots, x_n)/f_{r:n}(v). \tag{2.1.6}$$

Upon substituting (2.1.1) and (2.1.4) in (2.1.6), we obtain

$$f_{1,\ldots,r-1,r+1,\ldots,n|r}(x_1, \ldots, x_{r-1}, x_{r+1}, \ldots, x_n \mid v) = (r-1)! \prod_{j=1}^{r-1} \frac{f(x_j)}{F(v)} \cdot (n-r)! \prod_{j=r+1}^{n} \frac{f(x_j)}{1 - F(v)}, \quad x_1 < \cdots < x_{r-1} < v < x_{r+1} < \cdots < x_n, \tag{2.1.7}$$

and equal to zero otherwise.

Finally, we would like to present Fisher's information, I_\theta, for the order statistics from E(\theta). Fisher's information for a continuous random variable X with pdf f(x,\theta) and parameter \theta, under certain regularity conditions, is given by

    I_\theta = E\left[-\frac{\partial^2}{\partial\theta^2}\ln f(X,\theta)\right].

The exponential distribution E(\theta) satisfies the regularity conditions, and the Fisher information for order statistics from this distribution is as follows.

For X_{1,n},

    I_1 = E\left[-\frac{\partial^2}{\partial\theta^2}\ln\left(n\theta e^{-n\theta X}\right)\right] = \frac{1}{\theta^2}.

For X_{2,n},

    I_2 = E\left[-\frac{\partial^2}{\partial\theta^2}\ln\left\{n(n-1)\theta\left(1-e^{-\theta X}\right)e^{-(n-1)\theta X}\right\}\right]
        = E\left[\frac{1}{\theta^2} + \frac{X^2 e^{-\theta X}}{\left(1-e^{-\theta X}\right)^2}\right]
        = \int_0^\infty \left[\frac{1}{\theta^2} + \frac{x^2 e^{-\theta x}}{\left(1-e^{-\theta x}\right)^2}\right] n(n-1)\theta\left(1-e^{-\theta x}\right)e^{-(n-1)\theta x}\, dx
        = \frac{1}{\theta^2} + \int_0^\infty n(n-1)\theta\,\frac{x^2 e^{-n\theta x}}{1-e^{-\theta x}}\, dx
        = \frac{1}{\theta^2} + \frac{2n(n-1)}{\theta^2}\sum_{k=0}^{\infty}\frac{1}{(n+k)^3}.

For X_{r,n}, r > 2,

    I_r = E\left[-\frac{\partial^2}{\partial\theta^2}\ln\left\{\frac{n!}{(r-1)!\,(n-r)!}\,\theta\left(1-e^{-\theta X}\right)^{r-1}e^{-(n-r+1)\theta X}\right\}\right]
        = E\left[\frac{1}{\theta^2} + \frac{(r-1)\,X^2 e^{-\theta X}}{\left(1-e^{-\theta X}\right)^2}\right]
        = \int_0^\infty \left[\frac{1}{\theta^2} + \frac{(r-1)\,x^2 e^{-\theta x}}{\left(1-e^{-\theta x}\right)^2}\right]\frac{n!}{(r-1)!\,(n-r)!}\,\theta\left(1-e^{-\theta x}\right)^{r-1}e^{-(n-r+1)\theta x}\, dx
        = \frac{1}{\theta^2} + \frac{n!}{(r-2)!\,(n-r)!}\,\theta\int_0^\infty x^2\left(1-e^{-\theta x}\right)^{r-3}e^{-(n-r+2)\theta x}\, dx
        = \frac{1}{\theta^2} + \frac{n(n-r+1)}{r-2}\cdot\frac{(n-1)!}{(r-3)!\,(n-r+1)!}\,\theta\int_0^\infty x^2\left(1-e^{-\theta x}\right)^{r-3}e^{-(n-r+2)\theta x}\, dx
        = \frac{1}{\theta^2} + \frac{n(n-r+1)}{r-2}\, E\left[X^2_{r-2,n-1}\right]
        = \frac{1}{\theta^2}\left[1 + \frac{n(n-r+1)}{r-2}\left\{\sum_{j=1}^{r-2}\frac{1}{(n-j)^2} + \left(\sum_{j=1}^{r-2}\frac{1}{n-j}\right)^2\right\}\right].
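The series expression for I_2 and the closed form for I_r can be checked numerically (a sketch with \theta = 1; grid sizes and truncation points are arbitrary choices). For r = 3, n = 3 the closed form gives I_3 = 1 + 3(1/4 + 1/4) = 5/2, which the code confirms:

```python
import math

def I2_series(n, terms=100_000):
    """I_2 = 1 + 2 n (n-1) * sum_{k>=0} 1/(n+k)^3, with theta = 1."""
    return 1.0 + 2.0 * n * (n - 1) * sum(1.0 / (n + k) ** 3 for k in range(terms))

def I2_numeric(n, h=1e-3, x_max=60.0):
    """E[1 + X^2 e^{-X}/(1-e^{-X})^2] under the pdf of X_{2,n}, via a Riemann sum."""
    total = 0.0
    for i in range(1, int(x_max / h)):
        x = i * h
        pdf = n * (n - 1) * (1.0 - math.exp(-x)) * math.exp(-(n - 1) * x)
        total += (1.0 + x * x * math.exp(-x) / (1.0 - math.exp(-x)) ** 2) * pdf * h
    return total

def Ir_closed(n, r):
    """Closed form for I_r, r > 2, theta = 1:
    1 + (n(n-r+1)/(r-2)) * (sum_{j=1}^{r-2} 1/(n-j)^2 + (sum_{j=1}^{r-2} 1/(n-j))^2)."""
    s1 = sum(1.0 / (n - j) ** 2 for j in range(1, r - 1))
    s2 = sum(1.0 / (n - j) for j in range(1, r - 1))
    return 1.0 + n * (n - r + 1) / (r - 2) * (s1 + s2 * s2)
```

The two routes to I_2 agree to the accuracy of the quadrature, and Ir_closed(3, 3) returns exactly 2.5.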

    2.2. Minimum Variance Linear Unbiased Estimators

    Based on Order Statistics

We will write MVLUE for minimum variance linear unbiased estimator. Let us begin with the MVLUEs of the location and scale parameters. Suppose that X has an absolutely continuous cdf F of the form

    F\left((x-\mu)/\sigma\right),   -\infty < \mu < \infty,\; \sigma > 0.

Further, assume that

    E[X_{r,n}] = \mu + \alpha_r\sigma,   r = 1,2,\ldots,n,
    Var[X_{r,n}] = v_{rr}\sigma^2,   r = 1,2,\ldots,n,
    Cov(X_{r,n},X_{s,n}) = Cov(X_{s,n},X_{r,n}) = v_{rs}\sigma^2,   1 \le r < s \le n.

Let

    X = (X_{1,n},X_{2,n},\ldots,X_{n,n})'.

We can write

    E[X] = \mathbf{1}\mu + \alpha\sigma, (2.2.1)

where

    \mathbf{1} = (1,1,\ldots,1)',   \alpha = (\alpha_1,\alpha_2,\ldots,\alpha_n)',

and

    Var(X) = \sigma^2\Sigma,

where \Sigma is the matrix with elements v_{rs}, 1 \le r,s \le n. Then the MVLUEs of the location and scale parameters \mu and \sigma are

    \hat{\mu} = -\frac{1}{\Delta}\,\alpha'\Sigma^{-1}\left(\mathbf{1}\alpha' - \alpha\mathbf{1}'\right)\Sigma^{-1}X, (2.2.2)

and

    \hat{\sigma} = \frac{1}{\Delta}\,\mathbf{1}'\Sigma^{-1}\left(\mathbf{1}\alpha' - \alpha\mathbf{1}'\right)\Sigma^{-1}X, (2.2.3)

where

    \Delta = \left(\mathbf{1}'\Sigma^{-1}\mathbf{1}\right)\left(\alpha'\Sigma^{-1}\alpha\right) - \left(\mathbf{1}'\Sigma^{-1}\alpha\right)^2.

The variances and covariance of these estimators are given by

    Var(\hat{\mu}) = \frac{\sigma^2\,\alpha'\Sigma^{-1}\alpha}{\Delta}, (2.2.4)

    Var(\hat{\sigma}) = \frac{\sigma^2\,\mathbf{1}'\Sigma^{-1}\mathbf{1}}{\Delta}, (2.2.5)

and

    Cov(\hat{\mu},\hat{\sigma}) = -\frac{\sigma^2\,\mathbf{1}'\Sigma^{-1}\alpha}{\Delta}. (2.2.6)

The following lemma (see Graybill, 1983, p. 198) will be useful in finding the inverse of the covariance matrix.

Lemma 2.2.1. Let \Sigma = (\sigma_{rs}) be an n \times n matrix with elements satisfying the relation

    \sigma_{rs} = \sigma_{sr} = c_r d_s,   1 \le r \le s \le n,

for some positive numbers c_1,c_2,\ldots,c_n and d_1,d_2,\ldots,d_n. Then its inverse \Sigma^{-1} = (\sigma^{rs}) has elements given as follows:

    \sigma^{11} = \frac{c_2}{c_1\left(c_2d_1 - c_1d_2\right)},   \sigma^{nn} = \frac{d_{n-1}}{d_n\left(c_nd_{n-1} - c_{n-1}d_n\right)},
    \sigma^{k+1,k} = \sigma^{k,k+1} = \frac{-1}{c_{k+1}d_k - c_kd_{k+1}},
    \sigma^{kk} = \frac{c_{k+1}d_{k-1} - c_{k-1}d_{k+1}}{\left(c_kd_{k-1} - c_{k-1}d_k\right)\left(c_{k+1}d_k - c_kd_{k+1}\right)},   k = 2,3,\ldots,n-1,

and

    \sigma^{ij} = 0, if |i-j| > 1.
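The lemma is easy to verify numerically. The sketch below (plain Python, no external libraries) builds \Sigma for the exponential choice c_r = \sum_{j=1}^{r} 1/(n-j+1)^2, d_s = 1 used later in this section, forms \Sigma^{-1} from the lemma, and checks that the product is the identity matrix:

```python
def tridiag_inverse(c, d):
    """Inverse of the matrix V with V[r][s] = c[min(r,s)] * d[max(r,s)] (0-based),
    using the tridiagonal formulas of the lemma."""
    n = len(c)
    inv = [[0.0] * n for _ in range(n)]
    inv[0][0] = c[1] / (c[0] * (c[1] * d[0] - c[0] * d[1]))
    inv[n - 1][n - 1] = d[n - 2] / (d[n - 1] * (c[n - 1] * d[n - 2] - c[n - 2] * d[n - 1]))
    for k in range(n - 1):
        inv[k][k + 1] = inv[k + 1][k] = -1.0 / (c[k + 1] * d[k] - c[k] * d[k + 1])
    for k in range(1, n - 1):
        inv[k][k] = (c[k + 1] * d[k - 1] - c[k - 1] * d[k + 1]) / (
            (c[k] * d[k - 1] - c[k - 1] * d[k]) * (c[k + 1] * d[k] - c[k] * d[k + 1]))
    return inv

n = 5
c = [sum(1.0 / (n - j + 1) ** 2 for j in range(1, r + 1)) for r in range(1, n + 1)]
d = [1.0] * n
V = [[c[min(r, s)] * d[max(r, s)] for s in range(n)] for r in range(n)]
Vinv = tridiag_inverse(c, d)
prod = [[sum(V[r][k] * Vinv[k][s] for k in range(n)) for s in range(n)] for r in range(n)]
```

The product `prod` is the 5 x 5 identity matrix up to floating-point error.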

Let

    f(x) = \frac{1}{\sigma}\exp\left(-(x-\mu)/\sigma\right),   -\infty < \mu < x < \infty,\; 0 < \sigma < \infty,

and 0 otherwise. We have seen that

    E[X_{r,n}] = \mu + \sigma\sum_{j=1}^{r}\frac{1}{n-j+1},
    Var[X_{r,n}] = \sigma^2\sum_{j=1}^{r}\frac{1}{(n-j+1)^2},   r = 1,2,\ldots,n,

and

    Cov(X_{r,n},X_{s,n}) = \sigma^2\sum_{j=1}^{r}\frac{1}{(n-j+1)^2},   1 \le r \le s \le n.

One can write

    Cov(X_{r,n},X_{s,n}) = \sigma^2 c_r d_s,   1 \le r \le s \le n,

where

    c_r = \sum_{j=1}^{r}\frac{1}{(n-j+1)^2},   1 \le r \le n,   and   d_s = 1,   1 \le s \le n.

Using Lemma 2.2.1, we obtain

    \sigma^{jj} = (n-j)^2 + (n-j+1)^2,   j = 1,2,\ldots,n,
    \sigma^{j+1,j} = \sigma^{j,j+1} = -(n-j)^2,   j = 1,2,\ldots,n-1,

and

    \sigma^{ij} = 0, if |i-j| > 1,   i,j = 1,2,\ldots,n.

It can easily be shown that

    \mathbf{1}'\Sigma^{-1} = (n^2,0,0,\ldots,0),   \alpha'\Sigma^{-1} = (1,1,\ldots,1),

so that

    \mathbf{1}'\Sigma^{-1}\mathbf{1} = n^2,   \alpha'\Sigma^{-1}\alpha = n,   \mathbf{1}'\Sigma^{-1}\alpha = n,   and   \Delta = n^2(n-1).

The MVLUEs of the location and scale parameters \mu and \sigma are respectively

    \hat{\mu} = \frac{nX_{1,n} - \bar{X}}{n-1}, (2.2.7)

and

    \hat{\sigma} = \frac{n\left(\bar{X} - X_{1,n}\right)}{n-1}, (2.2.8)

where \bar{X} = \frac{1}{n}\sum_{r=1}^{n}X_{r,n}. The corresponding variances and covariance of the estimators are

    Var[\hat{\mu}] = \frac{\sigma^2}{n(n-1)}, (2.2.9)

    Var[\hat{\sigma}] = \frac{\sigma^2}{n-1}, (2.2.10)

and

    Cov(\hat{\mu},\hat{\sigma}) = -\frac{\sigma^2}{n(n-1)}. (2.2.11)
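A worked example of (2.2.7) and (2.2.8) (the four data values are hypothetical):

```python
def mvlue_exponential(x_sorted):
    """MVLUEs of mu and sigma from a complete ordered sample of E(mu, sigma),
    using (2.2.7)-(2.2.8): mu* = (n X_{1,n} - Xbar)/(n-1), sigma* = n(Xbar - X_{1,n})/(n-1)."""
    n = len(x_sorted)
    xbar = sum(x_sorted) / n
    mu_hat = (n * x_sorted[0] - xbar) / (n - 1)
    sigma_hat = n * (xbar - x_sorted[0]) / (n - 1)
    return mu_hat, sigma_hat

mu_hat, sigma_hat = mvlue_exponential([1.2, 1.5, 2.0, 3.3])
```

Here n = 4 and \bar{X} = 2.0, so \hat{\mu} = (4(1.2) - 2.0)/3 = 2.8/3 and \hat{\sigma} = 4(2.0 - 1.2)/3 = 3.2/3.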

The remainder of this section will be devoted to MVLUEs based on censored samples. We consider the case when some of the smallest and largest observations are missing; in this situation we construct the MVLUEs of the location and scale parameters. Suppose that the smallest r_1 and the largest r_2 of the observations are lost, so that we can deal only with the order statistics

    X_{r_1+1,n} \le \cdots \le X_{n-r_2,n}.

We will consider here the MVLUEs of the location and scale parameters based on X_{r_1+1,n},\ldots,X_{n-r_2,n}. Suppose that X has an absolutely continuous cdf F of the form F((x-\mu)/\sigma), -\infty < \mu < \infty, \sigma > 0. Further, we assume that

    E[X_{r,n}] = \mu + \alpha_r\sigma,
    Var[X_{r,n}] = v_{rr}\sigma^2,   r_1+1 \le r \le n-r_2,
    Cov(X_{r,n},X_{s,n}) = v_{rs}\sigma^2,   r_1+1 \le r,s \le n-r_2.

Let X = (X_{r_1+1,n},\ldots,X_{n-r_2,n})'; then we can write

    E[X] = \mathbf{1}\mu + \alpha\sigma,

with \mathbf{1} = (1,1,\ldots,1)', \alpha = (\alpha_{r_1+1},\ldots,\alpha_{n-r_2})', and

    Var(X) = \sigma^2\Sigma,

where \Sigma is the (n-r_2-r_1)\times(n-r_2-r_1) matrix with elements v_{rs}, r_1 < r,s \le n-r_2. The MVLUEs of the location and scale parameters \mu and \sigma based on the order statistics X are

    \hat{\mu} = -\frac{1}{\Delta}\,\alpha'\Sigma^{-1}\left(\mathbf{1}\alpha' - \alpha\mathbf{1}'\right)\Sigma^{-1}X, (2.2.12)

and

    \hat{\sigma} = \frac{1}{\Delta}\,\mathbf{1}'\Sigma^{-1}\left(\mathbf{1}\alpha' - \alpha\mathbf{1}'\right)\Sigma^{-1}X, (2.2.13)

where

    \Delta = \left(\mathbf{1}'\Sigma^{-1}\mathbf{1}\right)\left(\alpha'\Sigma^{-1}\alpha\right) - \left(\mathbf{1}'\Sigma^{-1}\alpha\right)^2.

The variances and covariance of these estimators are given by

    Var[\hat{\mu}] = \frac{\sigma^2\,\alpha'\Sigma^{-1}\alpha}{\Delta}, (2.2.14)

    Var[\hat{\sigma}] = \frac{\sigma^2\,\mathbf{1}'\Sigma^{-1}\mathbf{1}}{\Delta}, (2.2.15)

and

    Cov(\hat{\mu},\hat{\sigma}) = -\frac{\sigma^2\,\mathbf{1}'\Sigma^{-1}\alpha}{\Delta}. (2.2.16)

Now, we consider the exponential distribution with cdf F given by

    F(x) = 1 - \exp\left\{-(x-\mu)/\sigma\right\},   -\infty < \mu < x < \infty,\; 0 < \sigma < \infty.

For this distribution one finds, in particular,

    Cov(\hat{\mu},\hat{\sigma}) = -\frac{\alpha_{r_1+1}\,\sigma^2}{n-r_2-r_1-1}.

Sarhan and Greenberg (1967) prepared tables of the coefficients, the best linear unbiased estimators (BLUEs), and the variances and covariances of these estimators for n up to 10.

    2.3. Minimum Variance Linear Unbiased Predictors

    (MVLUPs)

Suppose that X_{1,n},X_{2,n},\ldots,X_{r,n} are the first r (r < n) order statistics from a distribution with location and scale parameters \mu and \sigma respectively. Then the best (in the sense of minimum variance) linear predictor \hat{X}_{s,n} of X_{s,n} (r < s \le n) is given by

    \hat{X}_{s,n} = \hat{\mu} + \alpha_s\hat{\sigma} + W_s'V^{-1}\left(X - \mathbf{1}\hat{\mu} - \alpha\hat{\sigma}\right),

where \hat{\mu} and \hat{\sigma} are the MVLUEs of \mu and \sigma respectively based on

    X = (X_{1,n},X_{2,n},\ldots,X_{r,n})',
    \alpha_s = E\left[(X_{s,n}-\mu)/\sigma\right],

and

    W_s = (W_{1s},W_{2s},\ldots,W_{rs})',   where   \sigma^2 W_{js} = Cov(X_{j,n},X_{s,n}),   j = 1,2,\ldots,r.

Here V^{-1} is the inverse of the covariance matrix of X (in units of \sigma^2). Suppose that for the exponential distribution with cdf

    F(x) = 1 - \exp\left\{-(x-\mu)/\sigma\right\},   -\infty < \mu < x < \infty,\; 0 < \sigma < \infty,

all the observations were available. We recall that

    E[X_{r,n}] = \mu + \sigma\sum_{j=1}^{r}\frac{1}{n-j+1},
    Var[X_{r,n}] = \sigma^2\sum_{j=1}^{r}\frac{1}{(n-j+1)^2},   r = 1,2,\ldots,n,

and

    Cov(X_{r,n},X_{s,n}) = \sigma^2\sum_{j=1}^{r}\frac{1}{(n-j+1)^2},   1 \le r \le s \le n.

To obtain the MVLUEs for the case when r_1 + r_2 observations are lost, we need to deal with the covariance matrix of size (n-r_1-r_2)\times(n-r_1-r_2), whose elements coincide with

    Cov(X_{r,n},X_{s,n}) = \sigma^2 c_r d_s,   r_1+1 \le r \le s \le n-r_2,

where

    c_r = \sum_{j=1}^{r}\frac{1}{(n-j+1)^2},   and   d_s = 1.

We can obtain the inverse matrix \Sigma^{-1} = (\sigma^{ij}) using Lemma 2.2.1; it is tridiagonal, with elements

    \sigma^{11} = (n-r_1-1)^2 + 1/c_{r_1+1},
    \sigma^{n-r_1-r_2,\,n-r_1-r_2} = (r_2+1)^2,
    \sigma^{jj} = (n-r_1-j)^2 + (n-r_1-j+1)^2,   j = 2,3,\ldots,n-r_1-r_2-1,
    \sigma^{j+1,j} = \sigma^{j,j+1} = -(n-r_1-j)^2,   j = 1,2,\ldots,n-r_1-r_2-1,

and

    \sigma^{ij} = 0, for |i-j| > 1,   i,j = 1,2,\ldots,n-r_1-r_2.

Note also that we have

    \alpha = (\alpha_{r_1+1},\ldots,\alpha_{n-r_2})',   where   \alpha_r = E\left[(X_{r,n}-\mu)/\sigma\right] = \sum_{j=1}^{r}\frac{1}{n-j+1}.

Simple calculations show that

    \alpha'\Sigma^{-1} = \left(\frac{\alpha_{r_1+1}}{c_{r_1+1}} - (n-r_1-1),\; 1,\; 1,\;\ldots,\;1,\; r_2+1\right),
    \alpha'\Sigma^{-1}\alpha = \frac{\alpha^2_{r_1+1}}{c_{r_1+1}} + (n-r_1-r_2-1),
    \alpha'\Sigma^{-1}\mathbf{1} = \mathbf{1}'\Sigma^{-1}\alpha = \frac{\alpha_{r_1+1}}{c_{r_1+1}},
    \mathbf{1}'\Sigma^{-1}\mathbf{1} = \frac{1}{c_{r_1+1}},
    \mathbf{1}'\Sigma^{-1} = \left(\frac{1}{c_{r_1+1}},\;0,\;0,\;\ldots,\;0\right),

and hence

    \Delta = \left(\mathbf{1}'\Sigma^{-1}\mathbf{1}\right)\left(\alpha'\Sigma^{-1}\alpha\right) - \left(\mathbf{1}'\Sigma^{-1}\alpha\right)^2 = \frac{n-r_1-r_2-1}{c_{r_1+1}}.

Upon simplification, we obtain

    \hat{\sigma} = \frac{1}{\Delta}\,\mathbf{1}'\Sigma^{-1}\left(\mathbf{1}\alpha' - \alpha\mathbf{1}'\right)\Sigma^{-1}X
                 = \frac{1}{n-r_2-r_1-1}\left[\sum_{j=r_1+1}^{n-r_2}X_{j,n} - (n-r_1)X_{r_1+1,n} + r_2\, X_{n-r_2,n}\right].

Analogously, from (2.2.12) and (2.2.14)-(2.2.16) we obtain the necessary expressions for \hat{\mu}, Var[\hat{\mu}], Var[\hat{\sigma}], and Cov(\hat{\mu},\hat{\sigma}).

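The simplified expression for \hat{\sigma} from a doubly censored sample is easy to code (a sketch; the data values are hypothetical). With r_1 = r_2 = 0 it must reduce to the complete-sample estimator (2.2.8), which the example verifies:

```python
def sigma_hat_censored(x_sorted, r1, r2):
    """MVLUE of sigma using only X_{r1+1,n},...,X_{n-r2,n} from a sorted sample:
    (sum of available order statistics - (n-r1) X_{r1+1,n} + r2 X_{n-r2,n}) / (n-r2-r1-1)."""
    n = len(x_sorted)
    avail = x_sorted[r1:n - r2]          # X_{r1+1,n}, ..., X_{n-r2,n}
    return (sum(avail) - (n - r1) * avail[0] + r2 * avail[-1]) / (n - r2 - r1 - 1)

data = [0.3, 0.9, 1.4, 2.2, 3.1, 4.8]    # hypothetical ordered sample, n = 6
n = len(data)
uncensored = sigma_hat_censored(data, 0, 0)
complete = n * (sum(data) / n - data[0]) / (n - 1)   # formula (2.2.8)
censored = sigma_hat_censored(data, 1, 1)            # smallest and largest lost
```

For this data, censoring one observation at each end gives \hat{\sigma} = (7.6 - 5(0.9) + 3.1)/3 = 6.2/3.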

    2.4. Limiting Distributions

Let X_1,X_2,\ldots,X_n be i.i.d. exponentially distributed rvs with cdf F(x) = 1 - e^{-x}, x \ge 0. Then, with the sequences of real numbers a_n = \ln n and b_n = 1, we have

    P(X_{n,n} \le a_n + b_n x) = P(X_{n,n} \le \ln n + x) = \left(1 - e^{-(\ln n + x)}\right)^n = \left(1 - \frac{e^{-x}}{n}\right)^n \to e^{-e^{-x}}   as n \to \infty.

Thus the limiting distribution of X_{n,n}, with the constants a_n = \ln n and b_n = 1, when the X_j's are E(1), is the type 1 extreme value distribution. The numbers a_n and b_n are known as normalizing constants.

Remark 2.4.1. We know that if Y has the type 1 extreme value distribution, then E[Y] = \gamma, the Euler constant. Thus E[X_{n,n}] - \ln n \to \gamma as n \to \infty. But E[X_{n,n}] = \sum_{j=1}^{n}\frac{1}{n-j+1}, so we recover the known result

    \sum_{j=1}^{n}\frac{1}{n-j+1} - \ln n \to \gamma,   as n \to \infty.
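Both statements above can be checked deterministically (a sketch; the values of n are arbitrary large choices):

```python
import math

def cdf_max_exponential(n, x):
    """Exact P(X_{n,n} - ln n <= x) for i.i.d. E(1): (1 - e^{-x}/n)^n."""
    return (1.0 - math.exp(-x) / n) ** n

def harmonic_minus_log(n):
    """H_n - ln n = sum_{j=1}^{n} 1/j - ln n, which converges to Euler's constant."""
    return sum(1.0 / j for j in range(1, n + 1)) - math.log(n)

gumbel = math.exp(-math.exp(-1.0))          # limit exp(-e^{-x}) at x = 1
approx = cdf_max_exponential(10_000, 1.0)
gamma_approx = harmonic_minus_log(1_000_000)
```

At n = 10,000 the exact cdf already matches the Gumbel limit to several decimals, and H_n - ln n is close to \gamma \approx 0.5772.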

For the derivation of the limiting distribution of X_{1,n}, we need the following lemma.

Lemma 2.4.2. Let (X_n)_{n\ge1} be a sequence of i.i.d. rvs with cdf F. Consider a sequence (e_n)_{n\ge1} of real numbers. Then for any \tau with 0 \le \tau < \infty, the following two statements are equivalent:

(a) \lim_{n\to\infty} n F(e_n) = \tau;
(b) \lim_{n\to\infty} P(X_{1,n} > e_n) = e^{-\tau}.

Since \lim_{n\to\infty}\left(1 - \frac{x}{n}\right)^n = e^{-x}, if the X_j's are i.i.d. E(1), then

    \lim_{n\to\infty} P\left(X_{1,n} > \frac{x}{n}\right) = e^{-x}.

Thus the limiting distribution of X_{1,n}, with the constants c_n = 0 and d_n = 1/n, when the X_j's are i.i.d. E(1), is the type 3 extreme value distribution. Here again the numbers c_n and d_n are normalizing constants.

Let us now consider the asymptotic distribution of X_{n-k+1,n} for fixed k as n tends to \infty. It is given in the following theorem.

Theorem 2.4.3. Let X_1,X_2,\ldots,X_n be n i.i.d. rvs with cdf F, and let X_{n-k+1,n} be their (n-k+1)th order statistic. If for some stabilizing constants a_n and b_n, F^n(a_n + b_n x) \to T(x) as n \to \infty, for all x, for some distribution T(x), then

    P(X_{n-k+1,n} \le a_n + b_n x) \to \sum_{j=0}^{k-1} T(x)\,\frac{\left(-\ln T(x)\right)^j}{j!}   as n \to \infty,

for any fixed k and all x.

Proof. Let us consider a sequence (c_n)_{n\ge1} such that c_n \to c as n \to \infty. Then \lim_{n\to\infty}\left(1 - \frac{c_n}{n}\right)^n = e^{-c}. Take c_n(x) = n\left(1 - F(a_n + b_n x)\right). Now, for every fixed x,

    P(X_{n-k+1,n} \le a_n + b_n x) = \sum_{j=n-k+1}^{n}\binom{n}{j}\left(F(a_n+b_nx)\right)^j\left(1 - F(a_n+b_nx)\right)^{n-j}
                                   = \sum_{j=0}^{k-1}\binom{n}{j}\left(c_n(x)/n\right)^j\left(1 - c_n(x)/n\right)^{n-j}.

Thus, for each fixed x, the right-hand side of the above equality can be viewed as the value at k-1 of a binomial cdf with parameters n and c_n(x)/n. Since F^n(a_n+b_nx) \to T(x) as n \to \infty, we have

    n\ln\left[1 - \left(1 - F(a_n+b_nx)\right)\right] \to \ln T(x),   as n \to \infty.

For sufficiently large n,

    n\ln\left[1 - \left(1 - F(a_n+b_nx)\right)\right] \approx -n\left(1 - F(a_n+b_nx)\right) = -c_n(x),

from which we obtain \lim_{n\to\infty} c_n(x) = -\ln T(x). Now, using the Poisson approximation to the binomial, we arrive at

    P(X_{n-k+1,n} \le a_n + b_n x) \to \sum_{j=0}^{k-1} T(x)\,\frac{\left(-\ln T(x)\right)^j}{j!},   as n \to \infty, for all x.

For the special case of i.i.d. E(1) rvs, with a_n = \ln n and b_n = 1, we have F^n(a_n + b_n x) \to e^{-e^{-x}} as n \to \infty. We will then have

    P(X_{n-k+1,n} \le \ln n + x) \to \sum_{j=0}^{k-1}\frac{e^{-jx}}{j!}\, e^{-e^{-x}},   as n \to \infty.
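The quality of the limit for finite n can be checked deterministically by comparing the exact binomial expression with the limiting sum (a sketch; n, k, and x are arbitrary choices):

```python
import math

def exact_cdf_kth_max(n, k, x):
    """Exact P(X_{n-k+1,n} <= ln n + x) for E(1): at most k-1 of the n values exceed ln n + x."""
    p_exceed = math.exp(-x) / n          # 1 - F(ln n + x)
    return sum(math.comb(n, j) * p_exceed ** j * (1 - p_exceed) ** (n - j)
               for j in range(k))

def limit_cdf_kth_max(k, x):
    """Limiting value: sum_{j=0}^{k-1} e^{-jx}/j! * exp(-e^{-x})."""
    return sum(math.exp(-j * x) / math.factorial(j) for j in range(k)) * math.exp(-math.exp(-x))

exact = exact_cdf_kth_max(5000, 3, 0.5)
lim = limit_cdf_kth_max(3, 0.5)
```

Already at n = 5000 the exact value and the Poisson-type limit agree to roughly four decimal places.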

The asymptotic distribution of X_{k,n} for fixed k as n \to \infty is given by the following theorem, whose proof is similar to that of Theorem 2.4.3 and hence will be omitted.

Theorem 2.4.4. Let X_1,X_2,\ldots,X_n be n i.i.d. rvs with cdf F, and let X_{k,n} be their kth order statistic. If for some stabilizing constants a_n and b_n, \left(1 - F(a_n + b_n x)\right)^n \to G(x) as n \to \infty, for all x, for some function G(x), then

    P(X_{k,n} > a_n + b_n x) \to \sum_{j=0}^{k-1} G(x)\,\frac{\left(-\ln G(x)\right)^j}{j!},   as n \to \infty, for any fixed k and all x.

Again, for the special case of i.i.d. E(1) rvs, with a_n = 0 and b_n = 1/n, we have \left(1 - F(a_n + b_n x)\right)^n \to e^{-x} as n \to \infty. But in this case \left(1 - F\left(0 + \frac{x}{n}\right)\right)^n = e^{-x} for all n, and hence we will have

    P(X_{k,n} > a_n + b_n x) = \sum_{j=0}^{k-1} G(x)\,\frac{\left(-\ln G(x)\right)^j}{j!} = \sum_{j=0}^{k-1}\frac{e^{-x}x^j}{j!},   for all x \ge 0 and all n.


    Chapter 3

    Record Values

    3.1. Definitions of Record Values and Record Times

Suppose that (X_n)_{n\ge1} is a sequence of i.i.d. rvs with cdf F. Let Y_n = \max(\min)\{X_j \mid 1 \le j \le n\} for n \ge 1. We say X_j is an upper (lower) record value of \{X_n \mid n \ge 1\} if X_j > (<)\, Y_{j-1}, j > 1. By definition X_1 is an upper as well as a lower record value. One can transform the upper records to lower records by replacing the original sequence (X_n)_{n\ge1} by (-X_n)_{n\ge1} or (if P(X_n > 0) = 1 for all n) by (1/X_n)_{n\ge1}; the lower record values of this sequence will correspond to the upper record values of the original sequence.

The indices at which the upper record values occur are given by the record times \{U(n), n \ge 1\}, where

    U(n) = \min\left\{j \mid j > U(n-1),\; X_j > X_{U(n-1)}\right\},   n > 1,

and U(1) = 1. The record times of the sequence (X_n)_{n\ge1} are the same as those of the sequence (F(X_n))_{n\ge1}. Since F(X) has a uniform distribution for a continuous rv X, it follows that the distribution of U(n), n \ge 1, does not depend on F. We will denote by L(n) the indices at which the lower record values occur. By our assumption U(1) = L(1) = 1. The distribution of L(n) also does not depend on F.

    3.2. The Exact Distribution of Record Values

Many properties of the record value sequence can be expressed in terms of the function R(x) = -\ln\left(1 - F(x)\right), 0 < F(x) < 1. If we define F_n(x) as the cdf of X_{U(n)}


for n \ge 1, then we have

    F_1(x) = P\left(X_{U(1)} \le x\right) = F(x),

    F_2(x) = P\left(X_{U(2)} \le x\right)
           = \int_{-\infty}^{x}\int_{-\infty}^{y}\sum_{j=1}^{\infty}\left(F(u)\right)^{j-1} dF(u)\, dF(y)
           = \int_{-\infty}^{x}\int_{-\infty}^{y}\frac{dF(u)}{1-F(u)}\, dF(y) = \int_{-\infty}^{x} R(y)\, dF(y). (3.2.1)

If F has a pdf f, then the pdf of X_{U(2)} is

    f_2(x) = R(x)\, f(x). (3.2.2)

The cdf

    F_3(x) = P\left(X_{U(3)} \le x\right)
           = \int_{-\infty}^{x}\int_{-\infty}^{y}\sum_{j=0}^{\infty}\left(F(u)\right)^{j} R(u)\, dF(u)\, dF(y)
           = \int_{-\infty}^{x}\int_{-\infty}^{y}\frac{R(u)}{1-F(u)}\, dF(u)\, dF(y) = \int_{-\infty}^{x}\frac{\left(R(u)\right)^2}{2!}\, dF(u). (3.2.3)

The pdf f_3 of X_{U(3)} is

    f_3(x) = \frac{\left(R(x)\right)^2}{2!}\, f(x). (3.2.4)

It can similarly be shown that the cdf F_n of X_{U(n)} is

    F_n(x) = \int_{-\infty}^{x}\frac{\left(R(u)\right)^{n-1}}{(n-1)!}\, dF(u),   -\infty < x < \infty. (3.2.5)

This can be expressed as

    F_n(x) = \int_{0}^{R(x)}\frac{u^{n-1}}{(n-1)!}\, e^{-u}\, du,   -\infty < x < \infty,

and

    \bar{F}_n(x) = 1 - F_n(x) = \left(1 - F(x)\right)\sum_{j=0}^{n-1}\frac{\left(R(x)\right)^j}{j!} = e^{-R(x)}\sum_{j=0}^{n-1}\frac{\left(R(x)\right)^j}{j!}.

The pdf f_n of X_{U(n)} is

    f_n(x) = \frac{\left(R(x)\right)^{n-1}}{(n-1)!}\, f(x),   -\infty < x < \infty. (3.2.6)

Note that F_{n-1}(x) - F_n(x) = \frac{1-F(x)}{f(x)}\, f_n(x), and for E(\theta),

    F_{n-1}(x) - F_n(x) = \frac{\theta^{n-1}x^{n-1}}{\Gamma(n)}\, e^{-\theta x}.
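For E(1) we have R(x) = x, so (3.2.6) gives f_n(x) = x^{n-1}e^{-x}/(n-1)!, the Gamma(n,1) density. A numeric-integration sketch (grid sizes are arbitrary choices) confirms that it has unit mass and mean n, consistent with E[X_{U(n)}] = n recorded in Section 3.3:

```python
import math

def record_pdf_E1(n, x):
    """pdf of the n-th upper record value from E(1): x^{n-1} e^{-x} / (n-1)!."""
    return x ** (n - 1) * math.exp(-x) / math.factorial(n - 1)

def mass_and_mean(n, h=1e-3, x_max=80.0):
    """Riemann-sum estimates of the total mass and the mean of f_n."""
    mass = mean = 0.0
    for i in range(1, int(x_max / h)):
        x = i * h
        p = record_pdf_E1(n, x) * h
        mass += p
        mean += x * p
    return mass, mean

mass, mean = mass_and_mean(4)
```

For n = 4 the numerical mass is 1 and the numerical mean is 4, up to quadrature error.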

A rv X is said to be symmetric about zero if X and -X have the same distribution function. If f is their pdf, then f(x) = f(-x) for all x. Two rvs X and Y with cdfs F and G are said to be mutually symmetric if F(x) = 1 - G(-x) for all x, or, equivalently, if their corresponding pdfs f and g exist, f(x) = g(-x) for all x. If a sequence of i.i.d. rvs is symmetric about zero, then the rvs are also mutually symmetric about zero, but not conversely. It is easy to show that for a symmetric or mutually symmetric (about zero) sequence (X_n)_{n\ge1} of i.i.d. rvs, X_{U(n)} and -X_{L(n)} are identically distributed.

The joint pdf f(x_1,x_2,\ldots,x_n) of the n record values X_{U(1)}, X_{U(2)},\ldots,X_{U(n)} is given by

    f(x_1,x_2,\ldots,x_n) = \prod_{j=1}^{n-1} r(x_j)\cdot f(x_n),   -\infty < x_1 < x_2 < \cdots < x_{n-1} < x_n < \infty, (3.2.7)

where, as before,

    r(x) = \frac{d}{dx}R(x) = \frac{f(x)}{1-F(x)},   0 < F(x) < 1.

The joint pdf of X_{U(i)} and X_{U(j)} is

    f_{ij}(x_i,x_j) = \frac{\left(R(x_i)\right)^{i-1}}{(i-1)!}\, r(x_i)\,\frac{\left[R(x_j)-R(x_i)\right]^{j-i-1}}{(j-i-1)!}\, f(x_j),   -\infty < x_i < x_j < \infty. (3.2.8)

In particular, for i = 1 and j = n we have

    f_{1n}(x_1,x_n) = r(x_1)\,\frac{\left[R(x_n)-R(x_1)\right]^{n-2}}{(n-2)!}\, f(x_n),   for -\infty < x_1 < x_n < \infty.

The conditional pdf of X_{U(j)} given X_{U(i)} = x_i is

    f(x_j \mid x_i) = \frac{f_{ij}(x_i,x_j)}{f_i(x_i)} = \frac{\left[R(x_j)-R(x_i)\right]^{j-i-1}}{(j-i-1)!}\cdot\frac{f(x_j)}{1-F(x_i)},   for -\infty < x_i < x_j < \infty. (3.2.9)

For j = i+1,

    f(x_{i+1} \mid x_i) = \frac{f(x_{i+1})}{1-F(x_i)},   for -\infty < x_i < x_{i+1} < \infty. (3.2.10)

Moreover, for 1 \le k < m, the joint conditional pdf of X_{U(i+k)} and X_{U(i+m)} given X_{U(i)} = z is

    f_{(i+k)(i+m)}\left(x,y \mid X_{U(i)} = z\right)
      = \frac{1}{\Gamma(m-k)}\cdot\frac{1}{\Gamma(k)}\left[R(y)-R(x)\right]^{m-k-1}\left[R(x)-R(z)\right]^{k-1}\frac{f(y)\, r(x)}{1-F(z)},

for -\infty < z < x < y < \infty.

The marginal pdf of the nth lower record value can be derived by using the same procedure as that for the nth upper record value. Let H(u) = -\ln F(u), 0 < F(u) < 1, and h(u) = -\frac{d}{du}H(u) = \frac{f(u)}{F(u)}; then

    P\left(X_{L(n)} \le x\right) = \int_{-\infty}^{x}\frac{\left(H(u)\right)^{n-1}}{(n-1)!}\, dF(u), (3.2.11)

and the corresponding pdf f_{(n)} can be written as

    f_{(n)}(x) = \frac{\left(H(x)\right)^{n-1}}{(n-1)!}\, f(x). (3.2.12)

The joint pdf of X_{L(1)}, X_{L(2)},\ldots,X_{L(m)} can be written as

    f_{(1)(2)\ldots(m)}(x_1,x_2,\ldots,x_m) = \prod_{j=1}^{m-1} h(x_j)\cdot f(x_m),   -\infty < x_m < x_{m-1} < \cdots < x_1 < \infty.


The joint pdf of X_{L(i)} and X_{L(j)} is

    f_{(i)(j)}(x,y) = \frac{\left(H(x)\right)^{i-1}}{(i-1)!}\cdot\frac{\left[H(y)-H(x)\right]^{j-i-1}}{(j-i-1)!}\, h(x)\, f(y),   j > i,   -\infty < y < x < \infty.

Let

    f(x) = \begin{cases}\dfrac{1}{\sigma}\exp\left(-(x-\mu)/\sigma\right), & \mu \le x < \infty,\\[4pt] 0, & \text{otherwise},\end{cases} (3.2.15)

where \mu (-\infty < \mu < \infty) and \sigma (> 0) are parameters. The corresponding cdf F and the hazard rate r of the rv X with pdf (3.2.15) are respectively

    F(x) = 1 - \exp\left(-(x-\mu)/\sigma\right),   x \ge \mu,   and   r(x) = f(x)/\left(1-F(x)\right) = 1/\sigma. (3.2.16)

Again, as before, we will denote the exponential distribution with pdf (3.2.15) by E(\mu,\sigma), the exponential distribution with \mu = 0 and \sigma = 1/\theta by E(\theta), and the standard exponential distribution by E(1). For E(\mu,\sigma), the joint pdf of X_{U(m)} and X_{U(n)}, m < n, is

    f_{m,n}(x,y) = \frac{\sigma^{-n}}{\Gamma(m)\,\Gamma(n-m)}\,(x-\mu)^{m-1}(y-x)^{n-m-1}\exp\left(-(y-\mu)/\sigma\right),   \mu < x < y < \infty. (3.2.17)

It can be shown that X_{U(m)} \overset{d}{=} X_{U(m-1)} + U (m > 1), where U is independent of X_{U(m-1)} and is identically distributed as X_1, if and only if X_1 has an exponential distribution. For E(1), with n \ge 1,

    P\left(X_{U(n+1)} > w\,X_{U(n)}\right) = \int_{0}^{\infty}\int_{wx}^{\infty}\frac{x^{n-1}}{\Gamma(n)}\, e^{-y}\, dy\, dx = \int_{0}^{\infty}\frac{x^{n-1}}{\Gamma(n)}\, e^{-wx}\, dx = w^{-n},   w \ge 1.

The conditional pdf of X_{U(n)} given X_{U(m)} = x, m < n, is

    f(y \mid x) = \begin{cases}\dfrac{\sigma^{-(n-m)}\,(y-x)^{n-m-1}}{\Gamma(n-m)}\exp\left(-(y-x)/\sigma\right), & x < y < \infty,\\[4pt] 0, & \text{otherwise}.\end{cases} (3.2.18)

Thus P\left(X_{U(n)} - X_{U(m)} \le y \mid X_{U(m)} = x\right) does not depend on x. It can be shown that if \mu = 0, then X_{U(n)} - X_{U(m)} is identically distributed as X_{U(n-m)}, m < n. We now take \mu = 0 and \sigma = 1, and let T_n = \sum_{j=1}^{n} X_{U(j)}.

Since

    T_n = \left(X_{U(n)} - X_{U(n-1)}\right) + 2\left(X_{U(n-1)} - X_{U(n-2)}\right) + \cdots + (n-1)\left(X_{U(2)} - X_{U(1)}\right) + n\,X_{U(1)} = \sum_{j=1}^{n} j\,W_j,

where the W_j's are i.i.d. E(1), the characteristic function of T_n can be written as

    \phi_n(t) = \prod_{j=1}^{n}\frac{1}{1 - \mathrm{i}\,jt}. (3.2.19)

Inverting (3.2.19), we obtain the pdf f_{T_n} of T_n as

    f_{T_n}(u) = \sum_{j=1}^{n}\frac{(-1)^{n-j}\, j^{n-2}}{\Gamma(j)\,\Gamma(n-j+1)}\, e^{-u/j}. (3.2.20)

Theorem 3.2.1. Let (X_n)_{n\ge1} be a sequence of i.i.d. rvs with the standard exponential distribution, and let \rho_j = X_{U(j)}/X_{U(j+1)}, j = 1,2,\ldots,m-1. Then the \rho_j's are independent.

Proof. The joint pdf of X_{U(1)}, X_{U(2)},\ldots,X_{U(m)} is

    f(x_1,x_2,\ldots,x_m) = e^{-x_m},   0 < x_1 < x_2 < \cdots < x_m < \infty.

Making the change of variables \rho_j = x_j/x_{j+1}, j = 1,\ldots,m-1, and \rho_m = x_m, one can check that the joint pdf of (\rho_1,\ldots,\rho_m) factors as

    \left[\prod_{j=2}^{m-1} j\,\rho_j^{\,j-1}\right]\frac{\rho_m^{\,m-1}\,e^{-\rho_m}}{\Gamma(m)},   0 < \rho_j < 1\ (j \le m-1),\ \rho_m > 0,

with \rho_1 uniform on (0,1). This is a product of marginal densities, which establishes the independence.

    3.3. Moments of Record Values

Without any loss of generality, we will consider in this section the standard exponential distribution E(1), with pdf f(x) = \exp(-x), x > 0, for which we also have f(x) = 1 - F(x). We already know that X_{U(n)}, in this setting, can be written as the sum of n i.i.d. rvs V_1,V_2,\ldots,V_n with common distribution E(1). Further, we have also seen that

    E\left[X_{U(n)}\right] = n,   Var\left[X_{U(n)}\right] = n,

and

    Cov\left(X_{U(n)},X_{U(m)}\right) = m,   m < n. (3.3.1)

For m < n,

    E\left[X^{p}_{U(m)}X^{q}_{U(n)}\right] = \int_{0}^{\infty}\int_{0}^{u}\frac{1}{\Gamma(m)}\cdot\frac{1}{\Gamma(n-m)}\, u^{q}\, e^{-u}\, v^{m+p-1}\,(u-v)^{n-m-1}\, dv\, du.

Substituting v = tu and simplifying, we get

    E\left[X^{p}_{U(m)}X^{q}_{U(n)}\right] = \int_{0}^{\infty}\int_{0}^{1}\frac{1}{\Gamma(m)}\cdot\frac{1}{\Gamma(n-m)}\, u^{n+p+q-1}\, e^{-u}\, t^{m+p-1}\,(1-t)^{n-m-1}\, dt\, du = \frac{\Gamma(m+p)\,\Gamma(n+p+q)}{\Gamma(m)\,\Gamma(n+p)}.
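This closed form makes moment checks one-liners (a sketch using math.gamma). For instance, with p = q = 1 it yields E[X_{U(m)}X_{U(n)}] = m(n+1), hence Cov(X_{U(m)},X_{U(n)}) = m, in agreement with (3.3.1):

```python
import math

def record_product_moment(m, n, p, q):
    """E[X_{U(m)}^p X_{U(n)}^q] = Gamma(m+p) Gamma(n+p+q) / (Gamma(m) Gamma(n+p)), m < n."""
    return (math.gamma(m + p) * math.gamma(n + p + q)
            / (math.gamma(m) * math.gamma(n + p)))

m, n = 2, 5
exy = record_product_moment(m, n, 1, 1)   # E[X_{U(2)} X_{U(5)}]
cov = exy - m * n                         # subtract E[X_{U(m)}] E[X_{U(n)}] = m*n
```

With m = 2, n = 5 this gives E[X_{U(2)}X_{U(5)}] = 12 and covariance 2 = m.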

Using (3.2.19), it can be shown that for T_n = \sum_{j=1}^{n} X_{U(j)} we have

    E[T_n] = n(n+1)/2   and   Var[T_n] = n(n+1)(2n+1)/6.

Some simple recurrence relations satisfied by the single and product moments of record values are given by the following theorems.

Theorem 3.3.1. For n \ge 1 and k = 0,1,2,\ldots,

    E\left[X^{k+1}_{U(n)}\right] = E\left[X^{k+1}_{U(n-1)}\right] + (k+1)\,E\left[X^{k}_{U(n)}\right], (3.3.2)

and consequently, for 0 \le m \le n-1 we can write

    E\left[X^{k+1}_{U(n)}\right] = E\left[X^{k+1}_{U(m)}\right] + (k+1)\sum_{j=m+1}^{n} E\left[X^{k}_{U(j)}\right], (3.3.3)

with E\left[X^{k+1}_{U(0)}\right] = 0 and E\left[X^{0}_{U(n)}\right] = 1.

Proof. For n \ge 1 and k = 0,1,2,\ldots, we have

    E\left[X^{k}_{U(n)}\right] = \frac{1}{\Gamma(n)}\int_{0}^{\infty} x^{k}\left(R(x)\right)^{n-1} f(x)\, dx = \frac{1}{\Gamma(n)}\int_{0}^{\infty} x^{k}\left(R(x)\right)^{n-1}\left(1-F(x)\right) dx,

since f(x) = 1 - F(x) for E(1). Upon integration by parts, treating x^k for integration and the rest of the integrand for differentiation, we obtain

    E\left[X^{k}_{U(n)}\right] = \frac{1}{(k+1)\,\Gamma(n)}\left[\int_{0}^{\infty} x^{k+1}\left(R(x)\right)^{n-1} f(x)\, dx - (n-1)\int_{0}^{\infty} x^{k+1}\left(R(x)\right)^{n-2} f(x)\, dx\right]
    = \frac{1}{k+1}\left[\int_{0}^{\infty} x^{k+1}\frac{\left(R(x)\right)^{n-1}}{\Gamma(n)}\, f(x)\, dx - \int_{0}^{\infty} x^{k+1}\frac{\left(R(x)\right)^{n-2}}{\Gamma(n-1)}\, f(x)\, dx\right]
    = \frac{1}{k+1}\left\{E\left[X^{k+1}_{U(n)}\right] - E\left[X^{k+1}_{U(n-1)}\right]\right\},

which, when rewritten, gives the recurrence relation (3.3.2). Repeated application of (3.3.2) then yields the recurrence relation (3.3.3).

Remark 3.3.2. The recurrence relation (3.3.2) can be used in a simple way to compute all the single moments of all the record values. Once again, using the property that f(x) = 1 - F(x), we can derive some simple recurrence relations for the product moments of record values.

Theorem 3.3.3. For m \ge 1 and p,q = 0,1,2,\ldots,

    E\left[X^{p}_{U(m)}X^{q+1}_{U(m+1)}\right] = E\left[X^{p+q+1}_{U(m)}\right] + (q+1)\,E\left[X^{p}_{U(m)}X^{q}_{U(m+1)}\right], (3.3.4)

and for 1 \le m \le n-2, p,q = 0,1,2,\ldots,

    E\left[X^{p}_{U(m)}X^{q+1}_{U(n)}\right] = E\left[X^{p}_{U(m)}X^{q+1}_{U(n-1)}\right] + (q+1)\,E\left[X^{p}_{U(m)}X^{q}_{U(n)}\right]. (3.3.5)

Proof. Let us consider 1 \le m < n and p,q = 0,1,2,\ldots:

    E\left[X^{p}_{U(m)}X^{q}_{U(n)}\right] = \frac{1}{\Gamma(m)\,\Gamma(n-m)}\int_{0}^{\infty} x^{p}\left(R(x)\right)^{m-1} r(x)\, I(x)\, dx, (3.3.6)

where

    I(x) = \int_{x}^{\infty} y^{q}\left[R(y)-R(x)\right]^{n-m-1} f(y)\, dy = \int_{x}^{\infty} y^{q}\left[R(y)-R(x)\right]^{n-m-1}\left(1-F(y)\right) dy,

since f(y) = 1 - F(y). Upon performing integration by parts, treating y^q for integration and the rest of the integrand for differentiation, we obtain, when n = m+1,

    I(x) = \frac{1}{q+1}\left[\int_{x}^{\infty} y^{q+1} f(y)\, dy - x^{q+1}\left(1-F(x)\right)\right],

and when n \ge m+2,

    I(x) = \frac{1}{q+1}\left[\int_{x}^{\infty} y^{q+1}\left\{R(y)-R(x)\right\}^{n-m-1} f(y)\, dy - (n-m-1)\int_{x}^{\infty} y^{q+1}\left\{R(y)-R(x)\right\}^{n-m-2} f(y)\, dy\right].

Substituting the above expressions for I(x) in equation (3.3.6) and simplifying, we obtain, when n = m+1,

    E\left[X^{p}_{U(m)}X^{q}_{U(m+1)}\right] = \frac{1}{q+1}\left\{E\left[X^{p}_{U(m)}X^{q+1}_{U(m+1)}\right] - E\left[X^{p+q+1}_{U(m)}\right]\right\},

and when n \ge m+2,

    E\left[X^{p}_{U(m)}X^{q}_{U(n)}\right] = \frac{1}{q+1}\left\{E\left[X^{p}_{U(m)}X^{q+1}_{U(n)}\right] - E\left[X^{p}_{U(m)}X^{q+1}_{U(n-1)}\right]\right\}.

The recurrence relations (3.3.4) and (3.3.5) follow readily when the above two equations are rewritten.

Remark 3.3.4. By repeated application of the recurrence relation (3.3.5), with the help of the relation (3.3.4), we obtain, for n \ge m+1,

    E\left[X^{p}_{U(m)}X^{q+1}_{U(n)}\right] = E\left[X^{p+q+1}_{U(m)}\right] + (q+1)\sum_{j=m+1}^{n} E\left[X^{p}_{U(m)}X^{q}_{U(j)}\right]. (3.3.7)

Corollary 3.3.5. For n \ge m+1, Cov\left(X_{U(m)},X_{U(n)}\right) = Var\left[X_{U(m)}\right].

Proof. By setting p = 1 and q = 0 in (3.3.7), we obtain

    E\left[X_{U(m)}X_{U(n)}\right] = E\left[X^{2}_{U(m)}\right] + (n-m)\,E\left[X_{U(m)}\right]. (3.3.8)

Similarly, by setting p = 0 in (3.3.3), we obtain

    E\left[X_{U(n)}\right] = E\left[X_{U(m)}\right] + (n-m),   n > m. (3.3.9)

With the help of (3.3.8) and (3.3.9), we get, for n \ge m+1,

    Cov\left(X_{U(m)},X_{U(n)}\right) = E\left[X_{U(m)}X_{U(n)}\right] - E\left[X_{U(m)}\right]E\left[X_{U(n)}\right]
      = E\left[X^{2}_{U(m)}\right] + (n-m)\,E\left[X_{U(m)}\right] - \left(E\left[X_{U(m)}\right]\right)^2 - (n-m)\,E\left[X_{U(m)}\right]
      = Var\left[X_{U(m)}\right].

Corollary 3.3.6. By repeated application of the recurrence relations (3.3.4) and (3.3.5), we also obtain, for m \ge 1,

    E\left[X^{p}_{U(m)}X^{q+1}_{U(m+1)}\right] = \sum_{j=0}^{q+1}(q+1)^{(j)}\, E\left[X^{p+q+1-j}_{U(m)}\right],

and for 1 \le m \le n-2,

    E\left[X^{p}_{U(m)}X^{q+1}_{U(n)}\right] = \sum_{j=0}^{q+1}(q+1)^{(j)}\, E\left[X^{p}_{U(m)}X^{q+1-j}_{U(n-1)}\right],

where

    (q+1)^{(0)} = 1   and   (q+1)^{(j)} = (q+1)\,q\cdots(q+1-j+1),   for j \ge 1.

Remark 3.3.7. The recurrence relations (3.3.4) and (3.3.5) can be used in a simple way to compute all the product moments of all record values.

Theorem 3.3.8. For m \ge 2 and p,q = 0,1,2,\ldots,

    E\left[X^{p+1}_{U(m-1)}X^{q}_{U(m)}\right] = E\left[X^{p+q+1}_{U(m)}\right] - (p+1)\,E\left[X^{p}_{U(m)}X^{q}_{U(m+1)}\right], (3.3.10)

and for 2 \le m \le n-2 and p,q = 0,1,2,\ldots,

    E\left[X^{p+1}_{U(m-1)}X^{q}_{U(n-1)}\right] = E\left[X^{p+1}_{U(m)}X^{q}_{U(n-1)}\right] - (p+1)\,E\left[X^{p}_{U(m)}X^{q}_{U(n)}\right]. (3.3.11)

Proof. For 2 \le m < n and p,q = 0,1,2,\ldots,

    E\left[X^{p}_{U(m)}X^{q}_{U(n)}\right] = \int_{0}^{\infty}\int_{0}^{y} x^{p}\, y^{q}\, f_{mn}(x,y)\, dx\, dy = \frac{1}{(m-1)!\,(n-m-1)!}\int_{0}^{\infty} y^{q}\, f(y)\, J(y)\, dy, (3.3.12)

where

    J(y) = \int_{0}^{y} x^{p}\left\{-\ln\left(1-F(x)\right)\right\}^{m-1}\left\{-\ln\left(1-F(y)\right) + \ln\left(1-F(x)\right)\right\}^{n-m-1}\frac{f(x)}{1-F(x)}\, dx.

Upon integration by parts, treating x^p for integration and the rest of the integrand for differentiation, we obtain, when n = m+1,

    J(y) = \frac{1}{p+1}\left[y^{p+1}\left\{-\ln\left(1-F(y)\right)\right\}^{m-1} - (m-1)\int_{0}^{y} x^{p+1}\left\{-\ln\left(1-F(x)\right)\right\}^{m-2}\frac{f(x)}{1-F(x)}\, dx\right],

and when n \ge m+2,

    J(y) = \frac{1}{p+1}\Big[(n-m-1)\int_{0}^{y} x^{p+1}\left\{-\ln\left(1-F(x)\right)\right\}^{m-1}\frac{f(x)}{1-F(x)}\left\{-\ln\left(1-F(y)\right) + \ln\left(1-F(x)\right)\right\}^{n-m-2} dx
      - (m-1)\int_{0}^{y} x^{p+1}\left\{-\ln\left(1-F(x)\right)\right\}^{m-2}\frac{f(x)}{1-F(x)}\left\{-\ln\left(1-F(y)\right) + \ln\left(1-F(x)\right)\right\}^{n-m-1} dx\Big].

Now, substituting the above expressions for J(y) in equation (3.3.12) and simplifying, we obtain, when n = m+1,

    E\left[X^{p}_{U(m)}X^{q}_{U(m+1)}\right] = \frac{1}{p+1}\left\{E\left[X^{p+q+1}_{U(m)}\right] - E\left[X^{p+1}_{U(m-1)}X^{q}_{U(m)}\right]\right\},

and when n \ge m+2,

    E\left[X^{p}_{U(m)}X^{q}_{U(n)}\right] = \frac{1}{p+1}\left\{E\left[X^{p+1}_{U(m)}X^{q}_{U(n-1)}\right] - E\left[X^{p+1}_{U(m-1)}X^{q}_{U(n-1)}\right]\right\}.

The recurrence relations (3.3.10) and (3.3.11) follow readily when the above two equations are rewritten.

Corollary 3.3.9. By repeated application of the recurrence relation (3.3.11), with the help of (3.3.10), we obtain, for 2 \le m \le n-1 and p,q = 0,1,2,\ldots,

    E\left[X^{p+1}_{U(m-1)}X^{q}_{U(n-1)}\right] = E\left[X^{p+q+1}_{U(n-1)}\right] - (p+1)\sum_{j=m}^{n-1} E\left[X^{p}_{U(j)}X^{q}_{U(n)}\right].

Corollary 3.3.10. By repeated application of the recurrence relations (3.3.10) and (3.3.11), we also obtain, for m \ge 2,

    E\left[X^{p+1}_{U(m-1)}X^{q}_{U(m)}\right] = \sum_{j=0}^{p+1}(-1)^{j}\,(p+1)^{(j)}\, E\left[X^{p+q+1-j}_{U(m+j)}\right],

and for 2 \le m \le n-2,

    E\left[X^{p+1}_{U(m-1)}X^{q}_{U(n-1)}\right] = \sum_{j=0}^{p+1}(-1)^{j}\,(p+1)^{(j)}\, E\left[X^{p+1-j}_{U(m+j)}X^{q}_{U(n-1+j)}\right],

where (p+1)^{(j)} is as defined earlier.

It is also important to mention here that this approach can easily be adapted to derive recurrence relations for product moments involving more than two record values.

    3.4. Estimation of Parameters

We shall consider here linear estimators of \mu and \sigma.

(a) Minimum Variance Linear Unbiased Estimator (MVLUE)

Suppose X_{U(1)}, X_{U(2)},\ldots,X_{U(m)} are the first m record values from an i.i.d. sequence of rvs with common cdf E(\mu,\sigma). Let Y_i = \frac{1}{\sigma}\left(X_{U(i)} - \mu\right), i = 1,2,\ldots,m. Then

    E[Y_i] = i = Var[Y_i],   i = 1,2,\ldots,m,

and

    Cov(Y_i,Y_j) = \min\{i,j\}.

Let

    X = \left(X_{U(1)},X_{U(2)},\ldots,X_{U(m)}\right)';

then

    E[X] = L\mu + \delta\sigma,   Var[X] = \sigma^2 V,

where

    L = (1,1,\ldots,1)',   \delta = (1,2,\ldots,m)',   V = (V_{ij}),   V_{ij} = \min\{i,j\},   i,j = 1,2,\ldots,m.

The inverse V^{-1} = (V^{ij}) can be expressed as

    V^{ij} = \begin{cases} 2, & i = j = 1,2,\ldots,m-1,\\ 1, & i = j = m,\\ -1, & |i-j| = 1,\ i,j = 1,2,\ldots,m,\\ 0, & \text{otherwise}.\end{cases}

The MVLUEs \hat{\mu}, \hat{\sigma} of \mu and \sigma respectively are

    \hat{\mu} = -\delta' V^{-1}\left(L\delta' - \delta L'\right)V^{-1}X/\Delta,   \hat{\sigma} = L' V^{-1}\left(L\delta' - \delta L'\right)V^{-1}X/\Delta,

where

    \Delta = \left(L'V^{-1}L\right)\left(\delta'V^{-1}\delta\right) - \left(L'V^{-1}\delta\right)^2,

and

    Var[\hat{\mu}] = \sigma^2\,\delta'V^{-1}\delta/\Delta,   Var[\hat{\sigma}] = \sigma^2\,L'V^{-1}L/\Delta,   Cov(\hat{\mu},\hat{\sigma}) = -\sigma^2\,L'V^{-1}\delta/\Delta.

It can be shown that

    L'V^{-1} = (1,0,0,\ldots,0),   \delta'V^{-1} = (0,0,\ldots,0,1),
    L'V^{-1}L = 1,   \delta'V^{-1}\delta = m,   L'V^{-1}\delta = 1,   and   \Delta = m-1.

Upon simplification we get

    \hat{\mu} = \frac{m\,X_{U(1)} - X_{U(m)}}{m-1},   \hat{\sigma} = \frac{X_{U(m)} - X_{U(1)}}{m-1}, (3.4.1)

with

    Var[\hat{\mu}] = \frac{m\,\sigma^2}{m-1},   Var[\hat{\sigma}] = \frac{\sigma^2}{m-1},   and   Cov(\hat{\mu},\hat{\sigma}) = -\frac{\sigma^2}{m-1}. (3.4.2)
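A worked example of (3.4.1) (the record values are hypothetical):

```python
def mvlue_from_records(records):
    """MVLUEs of mu and sigma of E(mu, sigma) from the first m record values,
    using (3.4.1): mu* = (m X_{U(1)} - X_{U(m)})/(m-1), sigma* = (X_{U(m)} - X_{U(1)})/(m-1).
    Only the first and last records enter the estimators."""
    m = len(records)
    sigma_star = (records[-1] - records[0]) / (m - 1)
    mu_star = (m * records[0] - records[-1]) / (m - 1)
    return mu_star, sigma_star

mu_star, sigma_star = mvlue_from_records([2.0, 5.0, 9.5])
```

With m = 3 records, \hat{\mu} = (3(2.0) - 9.5)/2 = -1.75 and \hat{\sigma} = (9.5 - 2.0)/2 = 3.75.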

(b) Best Linear Invariant Estimator (BLIE)

The best linear invariant (in the sense of minimum mean squared error and invariance with respect to the location parameter μ) estimators, BLIEs, μ̃, σ̃ of μ and σ are

μ̃ = μ̂ − σ̂ E₁₂(1 + E₂₂)^{−1},

and

σ̃ = σ̂/(1 + E₂₂),

where μ̂ and σ̂ are the MVLUEs of μ and σ and

[ Var[μ̂]      Cov(μ̂, σ̂) ]         [ E₁₁  E₁₂ ]
[ Cov(μ̂, σ̂)  Var[σ̂]     ]  =  σ² [ E₂₁  E₂₂ ].

The mean squared errors of these estimators are

MSE[μ̃] = σ²(E₁₁ − E₁₂²(1 + E₂₂)^{−1}),

and

MSE[σ̃] = σ²E₂₂(1 + E₂₂)^{−1}.

We have

E[(μ̃ − μ)(σ̃ − σ)] = σ²E₁₂(1 + E₂₂)^{−1}.

Using the values of E₁₁, E₁₂ and E₂₂ from (3.4.2), we obtain

μ̃ = ((m + 1)X_{U(1)} − X_{U(m)})/m,  σ̃ = (X_{U(m)} − X_{U(1)})/m,

Var[μ̃] = σ²(m² + m − 1)/m²,

and

Var[σ̃] = σ²(m − 1)/m².

3.5. Prediction of Record Values

We will predict the sth upper record value based on the first m record values for s > m. Let

W = (W₁, W₂, …, W_m)′,

where

σ²W_i = Cov(X_{U(i)}, X_{U(s)}), i = 1, 2, …, m,

and

α = σ^{−1}E[X_{U(s)} − μ].

The best linear unbiased predictor of X_{U(s)} is X̂_{U(s)}, where

X̂_{U(s)} = μ̂ + ασ̂ + W′V^{−1}(X − μ̂L − σ̂δ),

and μ̂ and σ̂ are the MVLUEs of μ and σ respectively. It can be shown that α = σ^{−1}E[X_{U(s)} − μ] = s and that W′V^{−1}(X − μ̂L − σ̂δ) = 0, and hence

X̂_{U(s)} = ((s − 1)X_{U(m)} + (m − s)X_{U(1)})/(m − 1),   (3.5.1)

E[X̂_{U(s)}] = μ + sσ,

Var[X̂_{U(s)}] = σ²(m + s² − 2s)/(m − 1),

MSE[X̂_{U(s)}] = E[(X̂_{U(s)} − X_{U(s)})²] = σ²(s − m)(s − 1)/(m − 1).

Let X̃_{U(s)} be the best linear invariant predictor of X_{U(s)}. Then it can be shown that

X̃_{U(s)} = X̂_{U(s)} − σ̂ c₁₂(1 + E₂₂)^{−1},   (3.5.2)

where

σ²c₁₂ = Cov(σ̂, μ̂(1 − W′V^{−1}L) + σ̂(α − W′V^{−1}δ))

and

σ²E₂₂ = Var[σ̂].

Upon simplification, we get

X̃_{U(s)} = ((m − s)/m)X_{U(1)} + (s/m)X_{U(m)},

E[X̃_{U(s)}] = μ + σ(ms + m − s)/m,

Var[X̃_{U(s)}] = σ²(m² + ms² − s²)/m²,

MSE[X̃_{U(s)}] = MSE[X̂_{U(s)}] − ((s − m)²/(m(m − 1)))σ² = (s(s − m)/m)σ².

It is well-known that the best (unrestricted) least squares predictor of X_{U(s)} is

X̂*_{U(s)} = E[X_{U(s)} | X_{U(1)}, …, X_{U(m)}] = X_{U(m)} + (s − m)σ.   (3.5.3)

But X̂*_{U(s)} depends on the unknown parameter σ. If we substitute the minimum variance unbiased estimator σ̂ for σ, then X̂*_{U(s)} becomes equal to X̂_{U(s)}. Now

E[X̂*_{U(s)}] = μ + sσ = E[X_{U(s)}],

Var[X̂*_{U(s)}] = mσ²,

and

MSE[X̂*_{U(s)}] = E[(X̂*_{U(s)} − X_{U(s)})²] = (s − m)σ².

We would also like to mention that, by comparing the mean squared errors of X̂_{U(s)}, X̃_{U(s)} and X̂*_{U(s)}, it can be shown that

MSE[X̂*_{U(s)}] ≤ MSE[X̃_{U(s)}] ≤ MSE[X̂_{U(s)}].

3.6. Limiting Distribution of Record Values

We have seen that for μ = 0 and σ = 1, E[X_{U(n)}] = n and Var[X_{U(n)}] = n. Hence

P((X_{U(n)} − n)/√n ≤ x) = P(X_{U(n)} ≤ n + x√n)
= ∫₀^{n+x√n} u^{n−1}e^{−u}/Γ(n) du   (3.6.1)
= p_n(x), say.

Let

Φ(x) = (1/√(2π)) ∫_{−∞}^{x} e^{−t²/2} dt.

The following table gives values of p_n(x) for various values of n and x and values of Φ(x) for comparison.

Table 1. Values of p_n(x)

n\x      −2       −1       0        1        2
5        0.0002   0.1468   0.5595   0.8475   0.9590
10       0.0046   0.1534   0.5421   0.8486   0.9601
15       0.0098   0.1554   0.5343   0.8436   0.9653
25       0.0122   0.1568   0.5243   0.8427   0.9684
45       0.0142   0.1575   0.5198   0.8423   0.9698
Φ(x)     0.0226   0.1587   0.5000   0.8413   0.9774
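For integer n, p_n(x) in (3.6.1) is an Erlang (gamma) cdf and has the closed form 1 − e^{−t} ∑_{k=0}^{n−1} t^k/k! with t = n + x√n, so the p_n(x) entries can be recomputed directly. The following short script is our illustration; the helper names are ours.

```python
import math

def gamma_cdf_int(t, n):
    # P(Gamma(n, 1) <= t) for integer n (Erlang cdf):
    # 1 - exp(-t) * sum_{k=0}^{n-1} t^k / k!
    if t <= 0:
        return 0.0
    term, total = 1.0, 1.0
    for k in range(1, n):
        term *= t / k
        total += term
    return 1.0 - math.exp(-t) * total

def p_n(x, n):
    # eq. (3.6.1): P((X_U(n) - n)/sqrt(n) <= x) for standard exponential records
    return gamma_cdf_int(n + x * math.sqrt(n), n)

def phi(x):
    # standard normal cdf via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

row_45 = [p_n(x, 45) for x in (-2, -1, 0, 1, 2)]
```

Evaluating `row_45` reproduces the n = 45 row of the table to four decimals, already close to the Φ(x) row.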

Thus for large values of n, Φ(x) is a good approximation of p_n(x).

Finally, the entropy of the nth upper record value X_{U(n)} is

n + ln σ + ln Γ(n) − (n − 1)ψ(n),

where ψ(n) is the digamma function, ψ(n) = Γ′(n)/Γ(n). To see this we observe that the pdf of X_{U(n)} is given by

f_n(x) = (σ^{−n}/Γ(n)) x^{n−1} e^{−x/σ}, x ≥ 0,

and its entropy is computed as follows:

E_n = E[−ln f_n(X_{U(n)})]
= ∫₀^∞ (σ^{−n}/Γ(n)) x^{n−1} e^{−x/σ} (ln Γ(n) + n ln σ + x/σ − (n − 1) ln x) dx
= ln Γ(n) + n ln σ + n − (n − 1){ln σ + ψ(n)}
= n + ln σ + ln Γ(n) − (n − 1)ψ(n).
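As a numerical sanity check of the entropy formula (our illustration, taking σ = 1 so the ln σ term drops), one can compare the closed form with a direct quadrature of −∫ f_n ln f_n, using the exact value ψ(n) = −γ + ∑_{k=1}^{n−1} 1/k for integer n:

```python
import math

EULER_GAMMA = 0.5772156649015329

def digamma_int(n):
    # psi(n) = -gamma + sum_{k=1}^{n-1} 1/k for a positive integer n
    return -EULER_GAMMA + sum(1.0 / k for k in range(1, n))

def record_entropy(n):
    # closed form (sigma = 1): E_n = n + ln Gamma(n) - (n - 1) psi(n)
    return n + math.lgamma(n) - (n - 1) * digamma_int(n)

def entropy_by_quadrature(n, upper=100.0, steps=200000):
    # -integral of f_n ln f_n for f_n(x) = x^(n-1) e^(-x) / Gamma(n), trapezoid rule
    h = upper / steps
    total = 0.0
    for i in range(1, steps):   # endpoints contribute ~0 for n >= 2
        x = i * h
        logf = (n - 1) * math.log(x) - x - math.lgamma(n)
        total -= math.exp(logf) * logf
    return total * h

n = 5
closed = record_entropy(n)
numeric = entropy_by_quadrature(n)
```

For n = 5 both routes give E₅ ≈ 2.1536, confirming the derivation above.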

    Chapter 4

    Generalized Order Statistics

In this chapter we will consider some of the basic properties of the generalized order statistics from the exponential distribution. We shall present some inferences based on the distributional properties of the generalized order statistics.

    4.1. Definition

The concept of generalized order statistics (gos) was introduced by Kamps (1995) in terms of their joint pdf. The order statistics, record values and sequential order statistics are special cases of the gos. The rvs X(1,n,m,k), X(2,n,m,k), …, X(n,n,m,k), k > 0, m ∈ R, are n gos from an absolutely continuous cdf F with corresponding pdf f if their joint pdf f_{1,2,…,n}(x₁, x₂, …, xₙ) can be written as

f_{1,2,…,n}(x₁, x₂, …, xₙ) = k (∏_{j=1}^{n−1} γ_j) (∏_{j=1}^{n−1} (1 − F(x_j))^m f(x_j)) (1 − F(xₙ))^{k−1} f(xₙ),

F^{−1}(0+) < x₁ < ⋯ < xₙ < F^{−1}(1),   (4.1.1)

where γ_j = k + (n − j)(m + 1) ≥ 1 for all j, 1 ≤ j ≤ n, k is a positive integer and m ≥ −1. A more general form of (4.1.1), again due to Kamps, with a new notation for the joint pdf will be given in Chapter 5.

If k = 1 and m = 0, then X(s,n,m,k) reduces to the ordinary sth order statistic and (4.1.1) will be the joint pdf of the n order statistics X_{1,n} ≤ X_{2,n} ≤ ⋯ ≤ X_{n,n}. If k = 1 and m = −1, then (4.1.1) will be the joint pdf of the first n upper record values of the i.i.d. rvs with cdf F and pdf f.

Integrating out x₁, x₂, …, x_{s−1}, x_{s+1}, …, xₙ from (4.1.1) we obtain the pdf f_{s,n,m,k} of X(s,n,m,k):

f_{s,n,m,k}(x) = (c_s/(s − 1)!) (1 − F(x))^{γ_s − 1} f(x) g_m^{s−1}(F(x)),   (4.1.2)

where c_s = ∏_{j=1}^{s} γ_j and

g_m(x) = (1/(m + 1))(1 − (1 − x)^{m+1}), m ≠ −1,
g_m(x) = −ln(1 − x), m = −1, x ∈ (0, 1).

Since lim_{m→−1} (1/(m + 1))(1 − (1 − x)^{m+1}) = −ln(1 − x), we will write g_m(x) = (1/(m + 1))(1 − (1 − x)^{m+1}) for all x ∈ (0, 1) and for all m, with g_{−1}(x) = lim_{m→−1} g_m(x).
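A minimal sketch (ours) of the two ingredients just introduced, γ_j = k + (n − j)(m + 1) and g_m, checking the order-statistics and record-value special cases and the m → −1 limit numerically:

```python
import math

def gamma_j(j, n, m, k):
    # gamma_j = k + (n - j)(m + 1), as in (4.1.1)
    return k + (n - j) * (m + 1)

def g_m(x, m):
    # g_m(x) = (1 - (1 - x)^(m+1))/(m + 1) for m != -1;  g_{-1}(x) = -ln(1 - x)
    if m == -1:
        return -math.log(1.0 - x)
    return (1.0 - (1.0 - x) ** (m + 1)) / (m + 1)

# order statistics (k = 1, m = 0): gamma_j = n - j + 1, and g_0(x) = x
os_gamma = gamma_j(3, 10, 0, 1)
# record values (k = 1, m = -1): gamma_j = 1 for every j
rec_gamma = gamma_j(3, 10, -1, 1)
# the m -> -1 limit of g_m reproduces -ln(1 - x)
near_limit = g_m(0.3, -1 + 1e-8)
exact_limit = g_m(0.3, -1)
```

Here `os_gamma` equals n − j + 1 = 8 and `rec_gamma` equals 1, the two special cases noted above.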

4.2. Generalized Order Statistics of Exponential Distribution

Recall that the pdf of X ∼ E(μ, σ) is given by

f(x) = (1/σ) exp(−(x − μ)/σ), x > μ, σ > 0,
f(x) = 0, otherwise.   (4.2.1)

Lemma 4.2.1. Let (X_i)_{i≥1} be a sequence of i.i.d. rvs from E(μ, σ); then γ₁X(1,n,m,k) ∼ E(γ₁μ, σ) and X(s,n,m,k) =d μ + σ ∑_{j=1}^{s} W_j/γ_j, where the W_j ∼ E(0, 1) = E(1) are independent for all j ≤ s.

Proof. From (4.1.2), the pdf f_{s,n,m,k} of X(s,n,m,k), in this case, is

f_{s,n,m,k}(x) = (c_s/(s − 1)!) (1/σ) e^{−γ_s(x−μ)/σ} g_m^{s−1}(1 − e^{−(x−μ)/σ}).   (4.2.2)

(c) From (4.2.6) it follows that γ_s{X(s,n,m,k) − X(s − 1,n,m,k)} ∼ E(0, σ). This property can also be obtained by considering the joint pdf of X(s,n,m,k) and X(s − 1,n,m,k) and using the transformation U₁ = X(s − 1,n,m,k) and T_s = γ_s{X(s,n,m,k) − X(s − 1,n,m,k)}.

(d) For k = 1 and m = −1, we obtain X_{U(s)} = μ + σ ∑_{j=1}^{s} W_j.

For X ∼ E(μ, σ), we have from (4.2.5), E[X(s,n,m,k)] = μ + σ ∑_{j=1}^{s} 1/γ_j, and the recurrence relation

E[X(s,n,m,k)] − E[X(s − 1,n,m,k)] = σ/γ_s.

Let

D(1,n,m,k) = γ₁X(1,n,m,k),
D(s,n,m,k) = γ_s(X(s,n,m,k) − X(s − 1,n,m,k)), 2 ≤ s ≤ n;

then for X ∼ E(σ) all the D(j,n,m,k), j = 1, 2, …, n, are i.i.d. E(σ). Thus, we have the obvious recurrence relation

E[D(s,n,m,k)] = E[D(s − 1,n,m,k)].

For k = 1 and m = 0, this coincides with the known results corresponding to order statistics. For k = 1 and m = −1, it coincides with the known results for upper record values.

In the remainder of this section we would like to present two recurrence relations for the moments (single moments and product moments) of gos from the standard exponential distribution E(1).

The joint pdf of X(r,n,m,k) and X(s,n,m,k), denoted by f_{r,s,n,m,k}(x, y), is (see p. 68 of Kamps (1995))

f_{r,s,n,m,k}(x, y) = (c_s/((r − 1)!(s − r − 1)!)) (1 − F(x))^m f(x) g_m^{r−1}(F(x)) [h(F(y)) − h(F(x))]^{s−r−1} (1 − F(y))^{γ_s − 1} f(y), x < y,   (4.2.7)

where

h(x) = −(1/(m + 1))(1 − x)^{m+1}, m ≠ −1,
h(x) = −ln(1 − x), m = −1.

Theorem 4.2.3. For E(1) and s > 1,

E[(X(s,n,m,k))^{p+1}] = E[(X(s − 1,n,m,k))^{p+1}] + ((p + 1)/γ_s) E[(X(s,n,m,k))^p],

and consequently for s > r,

E[(X(s,n,m,k))^{p+1}] = E[(X(r,n,m,k))^{p+1}] + ∑_{j=r+1}^{s} ((p + 1)/γ_j) E[(X(j,n,m,k))^p].

Proof. We have

E[(X(s,n,m,k))^p] = ∫₀^∞ x^p (c_s/(s − 1)!) e^{−γ_s x} g_m^{s−1}(1 − e^{−x}) dx
= ∫₀^∞ (γ_s c_s/((p + 1)(s − 1)!)) x^{p+1} e^{−γ_s x} g_m^{s−1}(1 − e^{−x}) dx
− ∫₀^∞ (c_s(s − 1)/((p + 1)(s − 1)!)) x^{p+1} e^{−γ_s x} g_m^{s−2}(1 − e^{−x}) e^{−(m+1)x} dx
= (γ_s/(p + 1)) { E[(X(s,n,m,k))^{p+1}] − E[(X(s − 1,n,m,k))^{p+1}] },

from which the result follows.

For k = 1 and m = −1, Theorem 4.2.3 coincides with Theorem 3.3.1. For k = 1 and m = 0, we obtain

E[(X_{s,n})^{p+1}] = E[(X_{s−1,n})^{p+1}] + ((p + 1)/(n − s + 1)) E[(X_{s,n})^p]

and consequently

E[(X_{s,n})^{p+1}] = E[(X_{r,n})^{p+1}] + ∑_{j=r+1}^{s} ((p + 1)/(n − j + 1)) E[(X_{j,n})^p].

Letting p = 0 and r = 1 in the last equation, we have

E[X_{s,n}] = E[X_{1,n}] + ∑_{j=2}^{s} 1/(n − j + 1),

that is,

E[X_{s,n}] = ∑_{j=1}^{s} 1/(n − j + 1).
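This harmonic-sum formula is easily confirmed by simulation (an illustration we add here; the tolerance is a loose Monte Carlo bound):

```python
import random

def harmonic_mean_formula(s, n):
    # E[X_{s,n}] = sum_{j=1}^{s} 1/(n - j + 1) for the standard exponential
    return sum(1.0 / (n - j + 1) for j in range(1, s + 1))

rng = random.Random(3)
n, s, trials = 8, 5, 40000
acc = 0.0
for _ in range(trials):
    sample = sorted(rng.expovariate(1.0) for _ in range(n))
    acc += sample[s - 1]          # the s-th order statistic X_{s,n}
emp = acc / trials
theory = harmonic_mean_formula(s, n)
```

For n = 8 and s = 5 the formula gives 1/8 + 1/7 + 1/6 + 1/5 + 1/4 ≈ 0.8845, and the empirical mean agrees.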

Theorem 4.2.4. For E(1), 1 ≤ r < s ≤ n and p, q = 0, 1, 2, …, we have

E[(X(r,n,m,k))^p (X(s,n,m,k))^{q+1}] = E[(X(r,n,m,k))^p (X(s − 1,n,m,k))^{q+1}] + ((q + 1)/γ_s) E[(X(r,n,m,k))^p (X(s,n,m,k))^q].

Proof. We have

E[(X(r,n,m,k))^p (X(s,n,m,k))^q] = ∫₀^∞ x^p (c_s/((r − 1)!(s − r − 1)!)) e^{−(m+1)x} g_m^{r−1}(1 − e^{−x}) I(x) dx,   (4.2.8)

where

I(x) = ∫_x^∞ y^q [ (1/(m + 1))(1 − e^{−(m+1)y}) − (1/(m + 1))(1 − e^{−(m+1)x}) ]^{s−r−1} e^{−γ_s y} dy
= (γ_s/(q + 1)) ∫_x^∞ y^{q+1} [ (1/(m + 1))(1 − e^{−(m+1)y}) − (1/(m + 1))(1 − e^{−(m+1)x}) ]^{s−r−1} e^{−γ_s y} dy
− ((s − r − 1)/(q + 1)) ∫_x^∞ y^{q+1} [ (1/(m + 1))(1 − e^{−(m+1)y}) − (1/(m + 1))(1 − e^{−(m+1)x}) ]^{s−r−2} e^{−γ_{s−1} y} dy.

Upon substituting for I(x) in (4.2.8), we obtain

E[(X(r,n,m,k))^p (X(s,n,m,k))^q]
= (γ_s/(q + 1)) { E[(X(r,n,m,k))^p (X(s,n,m,k))^{q+1}] − E[(X(r,n,m,k))^p (X(s − 1,n,m,k))^{q+1}] }.

Thus,

E[(X(r,n,m,k))^p (X(s,n,m,k))^{q+1}] = E[(X(r,n,m,k))^p (X(s − 1,n,m,k))^{q+1}] + ((q + 1)/γ_s) E[(X(r,n,m,k))^p (X(s,n,m,k))^q].

For k = 1 and m = −1, Theorem 4.2.4 coincides with Theorem 3.3.3. For k = 1 and m = 0, we obtain from Theorem 4.2.4

E[X_{r,n}^p X_{s,n}^{q+1}] = E[X_{r,n}^p X_{s−1,n}^{q+1}] + ((q + 1)/(n − s + 1)) E[X_{r,n}^p X_{s,n}^q].

Estimation of μ and σ

Minimum Variance Linear Unbiased Estimators (MVLUEs)

Lemma 4.2.5. Let μ̂ and σ̂ be the MVLUEs of μ and σ respectively, based on the n gos X(1,n,m,k), X(2,n,m,k), …, X(n,n,m,k) from an absolutely continuous cdf F with pdf f. Then

μ̂ = X(1,n,m,k) − σ̂/γ₁

and

σ̂ = (1/(n − 1)) [ ∑_{j=1}^{n} (γ_j − γ_{j+1}) X(j,n,m,k) − γ₁X(1,n,m,k) ], with γ_{n+1} = 0,

Var[μ̂] = nσ²/((n − 1)γ₁²),
Var[σ̂] = σ²/(n − 1),
Cov(μ̂, σ̂) = −σ²/((n − 1)γ₁).

Proof. It is not hard to show that

E[X(s,n,m,k)] = μ + σδ_s,
Var[X(s,n,m,k)] = σ²V_s, for 1 ≤ s ≤ n,

where δ_s = ∑_{j=1}^{s} 1/γ_j and V_s = ∑_{j=1}^{s} 1/γ_j². Let

X = (X(1,n,m,k), X(2,n,m,k), …, X(n,n,m,k))′,

then

E[X] = μ1 + σδ,
Cov(X(j,n,m,k), X(i,n,m,k)) = σ²V_i, 1 ≤ i < j ≤ n,
Var[X] = σ²V,

where, as in Chapter 3, 1 is an n × 1 vector of units, δ = (δ₁, δ₂, …, δₙ)′, V = (V_{ij}) and V_{ij} = V_i for 1 ≤ i ≤ j ≤ n.

Let V^{−1} = (V^{ij}); then

V^{ii} = γ_i² + γ_{i+1}², i = 1, 2, …, n, with γ_{n+1} = 0,
V^{i,i+1} = V^{i+1,i} = −γ_{i+1}²,
V^{ij} = 0, for |i − j| > 1.

The MVLUEs μ̂ and σ̂ respectively are (see David (1981))

μ̂ = δ′V^{−1}(δ1′ − 1δ′)V^{−1}X/Δ,
σ̂ = 1′V^{−1}(1δ′ − δ1′)V^{−1}X/Δ,

where

Δ = (1′V^{−1}1)(δ′V^{−1}δ) − (1′V^{−1}δ)².

We also have

Var[μ̂] = σ²(δ′V^{−1}δ)/Δ, Var[σ̂] = σ²(1′V^{−1}1)/Δ,

and

Cov(μ̂, σ̂) = −σ²(1′V^{−1}δ)/Δ.

It can easily be shown that

1′V^{−1} = (γ₁², 0, 0, …, 0), δ′V^{−1} = (γ₁ − γ₂, γ₂ − γ₃, …, γ_{n−1} − γ_n, γ_n),
δ′V^{−1}δ = n, 1′V^{−1}1 = γ₁², 1′V^{−1}δ = γ₁ and Δ = (n − 1)γ₁².

Now,

1′V^{−1}(1δ′ − δ1′)V^{−1}X/Δ = (1/Δ)((1′V^{−1}1) δ′V^{−1}X − (1′V^{−1}δ) 1′V^{−1}X)
= (1/Δ)(γ₁² δ′V^{−1}X − γ₁³ X(1,n,m,k)) = (1/(n − 1))(δ′V^{−1}X − γ₁X(1,n,m,k)).

Hence

σ̂ = (1/(n − 1)) [ ∑_{j=1}^{n} (γ_j − γ_{j+1}) X(j,n,m,k) − γ₁X(1,n,m,k) ].

We can write δ′ = (1/γ₁)1′ + c′, where

c′ = (0, 1/γ₂, 1/γ₂ + 1/γ₃, 1/γ₂ + 1/γ₃ + 1/γ₄, …, ∑_{j=2}^{n} 1/γ_j).

Thus

μ̂ = c′V^{−1}(δ1′ − 1δ′)V^{−1}X/Δ − σ̂/γ₁.

We have c′V^{−1}1 = 0 and c′V^{−1}δ = n − 1, and hence μ̂ = X(1,n,m,k) − σ̂/γ₁.

If k = 1 and m = 0, then γ_j = n − j + 1 and μ̂ and σ̂ coincide with the MVLUEs given by order statistics (see Arnold et al. (1992), p. 176). If k = 1 and m = −1, then γ_j = 1 and μ̂ and σ̂ coincide with the MVLUEs given by Ahsanullah ((1980), p. 466).

The variances and covariance of μ̂ and σ̂ are

Var[μ̂] = σ²(δ′V^{−1}δ)/Δ = nσ²/((n − 1)γ₁²),
Var[σ̂] = σ²(1′V^{−1}1)/Δ = σ²/(n − 1),
Cov(μ̂, σ̂) = −σ²(1′V^{−1}δ)/Δ = −σ²/((n − 1)γ₁).
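The coefficient form of σ̂ in Lemma 4.2.5 can be checked numerically. The sketch below (our code, simulating gos via the representation of Lemma 4.2.1) verifies by Monte Carlo that μ̂ and σ̂ are unbiased for one arbitrary choice of n, m, k:

```python
import random

def gamma_j(j, n, m, k):
    # gamma_j = k + (n - j)(m + 1)
    return k + (n - j) * (m + 1)

def gos_sample(n, m, k, mu, sigma, rng):
    # Lemma 4.2.1: X(s,n,m,k) = mu + sigma * sum_{j<=s} W_j / gamma_j
    xs, acc = [], 0.0
    for j in range(1, n + 1):
        acc += rng.expovariate(1.0) / gamma_j(j, n, m, k)
        xs.append(mu + sigma * acc)
    return xs

def mvlue_gos(xs, n, m, k):
    # sigma_hat = (1/(n-1)) [ sum_j (gamma_j - gamma_{j+1}) X(j) - gamma_1 X(1) ]
    # mu_hat    = X(1) - sigma_hat / gamma_1              (gamma_{n+1} = 0)
    gam = [gamma_j(j, n, m, k) for j in range(1, n + 1)] + [0.0]
    s = sum((gam[j] - gam[j + 1]) * xs[j] for j in range(n)) - gam[0] * xs[0]
    sigma_hat = s / (n - 1)
    mu_hat = xs[0] - sigma_hat / gam[0]
    return mu_hat, sigma_hat

rng = random.Random(11)
n, m, k, mu, sigma, trials = 5, 1, 2, 1.0, 2.0, 20000
mu_acc = sig_acc = 0.0
for _ in range(trials):
    mh, sh = mvlue_gos(gos_sample(n, m, k, mu, sigma, rng), n, m, k)
    mu_acc += mh
    sig_acc += sh
mu_bar, sig_bar = mu_acc / trials, sig_acc / trials
```

With k = 1 and m = −1 the same routine collapses to the record-value estimators (3.4.1), since every γ_j = 1.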

Best Linear Invariant Estimators (BLIEs)

The best linear invariant (in the sense of minimum mean squared error and invariance with respect to the location parameter μ) estimators μ̃ and σ̃ of μ and σ are

μ̃ = μ̂ − σ̂ E₁₂(1 + E₂₂)^{−1} and σ̃ = σ̂/(1 + E₂₂),

where μ̂ and σ̂ are the MVLUEs of μ and σ and

[ Var[μ̂]      Cov(μ̂, σ̂) ]         [ E₁₁  E₁₂ ]
[ Cov(μ̂, σ̂)  Var[σ̂]     ]  =  σ² [ E₂₁  E₂₂ ].

The mean squared errors of these estimators are

MSE[μ̃] = σ²(E₁₁ − E₁₂²(1 + E₂₂)^{−1})

and

MSE[σ̃] = σ²E₂₂(1 + E₂₂)^{−1}.

Substituting the values of E₁₂ and E₂₂ in the above equations, we have, on simplification, that

μ̃ = μ̂ + σ̂/(nγ₁) and σ̃ = ((n − 1)/n)σ̂,

MSE[μ̃] = ((n + 1)/n)(σ²/γ₁²) and MSE[σ̃] = σ²/n.

Prediction of X(s,n,m,k)

We shall assume that s > n. Let ω = (ω₁, ω₂, …, ωₙ)′, where

σ²ω_j = Cov(X(s,n,m,k), X(j,n,m,k)), j = 1, 2, …, n,

and α = σ^{−1}E[X(s,n,m,k) − μ]. The best linear unbiased predictor (BLUP) X̂(s,n,m,k) of X(s,n,m,k) is

X̂(s,n,m,k) = μ̂ + ασ̂ + ω′V^{−1}(X − μ̂1 − σ̂δ),

where μ̂ and σ̂ are the MVLUEs of μ and σ respectively. But α = δ_s and ω = (V₁, V₂, …, Vₙ)′. It can be shown that ω′V^{−1} = (0, 0, …, 0, 1), and hence

X̂(s,n,m,k) = μ̂ + δ_sσ̂ + X(n,n,m,k) − μ̂ − δ_nσ̂
= X(n,n,m,k) + (δ_s − δ_n)σ̂.   (4.2.9)

If k = 1 and m = 0, then γ_j = n − j + 1 and X̂(s,n,m,k) coincides with the BLUP based on the order statistics (see Arnold et al. (1992), p. 181). If k = 1 and m = −1, then γ_j = 1 and X̂(s,n,m,k) coincides with the BLUP based on record values (see Ahsanullah (1980), p. 467).
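In the record-value case (k = 1, m = −1) the predictor (4.2.9) reduces to X_{U(n)} + (s − n)σ̂. The following simulation (ours) checks that the resulting prediction error X̂_{U(s)} − X_{U(s)} has mean zero:

```python
import random

def simulate_records(count, mu, sigma, rng):
    # X_U(i) = mu + sigma * (W_1 + ... + W_i), W_j i.i.d. E(1)
    total, rec = 0.0, []
    for _ in range(count):
        total += rng.expovariate(1.0)
        rec.append(mu + sigma * total)
    return rec

def blup_record(rec_first_n, s):
    # eq. (4.2.9) with gamma_j = 1 (k = 1, m = -1):
    # X_hat_U(s) = X_U(n) + (s - n) sigma_hat, sigma_hat = (X_U(n) - X_U(1))/(n - 1)
    n = len(rec_first_n)
    sigma_hat = (rec_first_n[-1] - rec_first_n[0]) / (n - 1)
    return rec_first_n[-1] + (s - n) * sigma_hat

rng = random.Random(9)
n, s, mu, sigma, trials = 5, 8, 1.0, 2.0, 20000
err_acc = 0.0
for _ in range(trials):
    rec = simulate_records(s, mu, sigma, rng)   # generate records up to index s
    err_acc += blup_record(rec[:n], s) - rec[s - 1]
bias = err_acc / trials
```

The average prediction error stays near zero, as unbiasedness of the BLUP requires.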

We have

E[X̂(s,n,m,k)] = μ + σδ_s,

Var[X̂(s,n,m,k)] = σ²Vₙ + (δ_s − δ_n)²σ²/(n − 1) + 2(δ_s − δ_n)Cov(X(n,n,m,k), σ̂)
= σ² [ Vₙ + (δ_s − δ_n)²/(n − 1) + 2(δ_s − δ_n)(δ_n − δ₁)/(n − 1) ],

where δ_s − δ_n = ∑_{j=n+1}^{s} 1/γ_j and δ_n − δ₁ = ∑_{j=2}^{n} 1/γ_j, and

MSE[X̂(s,n,m,k)] = E[(X̂(s,n,m,k) − X(s,n,m,k))²]
= E[(X(n,n,m,k) − X(s,n,m,k) + (δ_s − δ_n)σ̂)²]
= σ² [ Vₙ + V_s − 2Vₙ + (δ_s − δ_n)²/(n − 1) ]
= σ² [ V_s − Vₙ + (δ_s − δ_n)²(n − 1)^{−1} ].

If k = 1 and m = −1, then the BLUP X̂_{U(s)} of the sth upper record value from

(4.2.9) is

X̂_{U(s)} = ((s − 1)X_{U(n)} − (s − n)X_{U(1)})/(n − 1),   (4.2.10)

and

Var[X̂_{U(s)}] = σ²(n + s² − 2s)/(n − 1).   (4.2.11)

Let X̃(s,n,m,k) be the best linear invariant predictor of X(s,n,m,k). Then

X̃(s,n,m,k) = X̂(s,n,m,k) − σ̂ c₁₂(1 + c₂₂)^{−1},   (4.2.12)

where

σ²c₁₂ = Cov(σ̂, μ̂(1 − ω′V^{−1}1) + σ̂(α − ω′V^{−1}δ)) and σ²c₂₂ = Var[σ̂].

It can easily be shown that c₁₂ = (δ_s − δ_n)/(n − 1), and since c₂₂ = 1/(n − 1), we have c₁₂(1 + c₂₂)^{−1} = (δ_s − δ_n)/n. Thus

X̃(s,n,m,k) = X̂(s,n,m,k) − ((δ_s − δ_n)/n)σ̂
= X(n,n,m,k) + ((n − 1)/n)(δ_s − δ_n)σ̂,   (4.2.13)

E[X̃(s,n,m,k)] = μ + σδ_s − σ(δ_s − δ_n)/n,   (4.2.14)

and

Var[X̃(s,n,m,k) − X(s,n,m,k)] = σ² [ V_s − Vₙ + ((n − 1)/n)²(δ_s − δ_n)²/(n − 1) ]
= σ² [ V_s − Vₙ + ((n − 1)/n²)(δ_s − δ_n)² ].   (4.2.15)

The bias term is

E[X̃(s,n,m,k) − X(s,n,m,k)] = σ(δ_n − δ_s) + ((n − 1)/n)(δ_s − δ_n)σ = −(σ/n)(δ_s − δ_n).

Thus,

MSE[X̃(s,n,m,k)] = Var[X̃(s,n,m,k) − X(s,n,m,k)] + (bias)²
= σ² [ V_s − Vₙ + ((n − 1)/n²)(δ_s − δ_n)² ] + σ²((δ_s − δ_n)/n)²
= σ² [ V_s − Vₙ + (1/n)(δ_s − δ_n)² ]
= MSE[X̂(s,n,m,k)] − (σ²/(n(n − 1)))(δ_s − δ_n)².

For k = 1 and m = −1, we obtain

E[X̃_{U(s)}] = μ + σ(ns + n − s)/n,

Var[X̃_{U(s)}] = σ²(n² + ns² − s²)/n².

    Chapter 5

Characterizations of Exponential Distribution I

    5.1. Introduction

The more serious work on characterizations of the exponential distribution based on the properties of order statistics, as far as we have gathered, started in the early sixties with Ferguson (1964, 1965), Tanis (1964), Basu (1965), Crawford (1966) and Govindarajulu (1966). Most of the results reported by these authors were based on the independence of suitable functions of order statistics. Chan (1967) reported a characterization result based on the expected values of extreme order statistics. The goal of this chapter is first to review characterization results related to the exponential distribution based on order statistics (Section 5.2) and then based on generalized order statistics (Section 5.3). We will discuss these results in chronological order rather than by their importance. We apologize in advance if we have missed reporting some of the existing pertinent results.

Let X₁ and X₂ be two i.i.d. random variables with common cdf F(x), and let X_{(1)} = min{X₁, X₂} and X_{(2)} = max{X₁, X₂}. Basu (1965) showed that if F(x) is absolutely continuous with F(0) = 0, then a necessary and sufficient condition for F to be the cdf of an exponential random variable with parameter σ is that the random variables X_{(1)} and X_{(2)} − X_{(1)} are independent. Ferguson (1964) and Crawford (1966) also used the property of independence of X_{(1)} and (X₁ − X₂) to characterize the exponential distribution. Puri and Rubin (1970)

showed that X_{(2)} − X_{(1)} =d X₁ (=d means having the same distribution) characterizes the exponential distribution among the class of absolutely continuous distributions. Seshadri et al. (1969) reported a characterization of the exponential distribution based on the identical distribution of the (n − 1)-dimensional random vector V_r = S_r/S_n, r = 1, 2, …, n − 1, where S_r is the rth partial sum of the random sample, and the vector of order statistics of n − 1 i.i.d. U(0, 1) random variables. Csorgo et al. (1975) and Menon and Seshadri (1975) pointed out that the proof given in Seshadri et al. was incorrect and presented a new proof. Puri and Rubin (1970) established a characterization of the exponential distribution based on the identical distribution of X_{s,n} − X_{r,n} and X_{s−r,n−r} (these rvs will be defined in the next paragraph). Rossberg (1972) gave a more general result when s = r + 1, which will be stated in the following section. A different type of result, characterizing the exponential distribution based on a function of the order statistics having the same distribution as the one sampled, was established by Desu (1971); it is stated in the following section as well.

Let X₁, X₂, …, Xₙ be a random sample from a random variable X with cdf F, and let

X_{1,n} ≤ X_{2,n} ≤ ⋯ ≤ X_{n,n}

be the corresponding order statistics. As pointed out by Gather et al. (1997), the starting point for many characterizations of the exponential distribution via identically distributed functions of order statistics is the well-known result of Sukhatme (1937): the normalized spacings

D_{1,n} = nX_{1,n} and D_{r,n} = (n − r + 1)(X_{r,n} − X_{r−1,n}), 2 ≤ r ≤ n,   (5.1.1)

from an exponential distribution with parameter σ, i.e., F(x) = 1 − e^{−x/σ}, x ≥ 0, σ > 0, are again independent and identically exponentially distributed. Thus, we have

F ∼ E(σ) implies that D_{1,n}, D_{2,n}, …, D_{n,n} are i.i.d. E(σ).   (5.1.2)
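Sukhatme's result (5.1.2) lends itself to a quick simulation check (our illustration; sample size and tolerances are arbitrary): each normalized spacing should again have mean σ.

```python
import random

def normalized_spacings(sample):
    # eq. (5.1.1): D_1 = n X_(1);  D_r = (n - r + 1)(X_(r) - X_(r-1)), r = 2..n
    xs = sorted(sample)
    n = len(xs)
    ds = [n * xs[0]]
    for r in range(2, n + 1):
        ds.append((n - r + 1) * (xs[r - 1] - xs[r - 2]))
    return ds

rng = random.Random(5)
n, sigma, trials = 6, 1.5, 20000
sums = [0.0] * n
for _ in range(trials):
    # expovariate takes a rate, so rate = 1/sigma gives mean sigma
    sample = [rng.expovariate(1.0 / sigma) for _ in range(n)]
    for i, d in enumerate(normalized_spacings(sample)):
        sums[i] += d
means = [s / trials for s in sums]
```

All n spacing means cluster around σ = 1.5, consistent with each D_{r,n} being E(σ).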

    5.2. Characterizations Based on Order Statistics

    We start this section with the following result due to Desu (1971).

Theorem 5.2.1. If F is a nondegenerate cdf, then for each positive integer k, kX_{1,k} and X₁ are identically distributed if and only if F(x) = 1 − e^{−x/σ} for x ≥ 0, where σ is a positive parameter.
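The "if" direction of Theorem 5.2.1 is easy to see: the minimum of k i.i.d. E(σ) variables is E(σ/k), so kX_{1,k} ∼ E(σ). A small Monte Carlo illustration (ours) comparing the first two moments of kX_{1,k} with those of X₁ (E[X₁] = σ, E[X₁²] = 2σ²):

```python
import random

# X_i ~ E(sigma) with sigma = 2, i.e. rate 1/2; k * min(X_1,...,X_k) should be E(sigma) again
rng = random.Random(13)
k, sigma, trials = 4, 2.0, 30000
acc1 = acc2 = 0.0
for _ in range(trials):
    scaled_min = k * min(rng.expovariate(1.0 / sigma) for _ in range(k))
    acc1 += scaled_min
    acc2 += scaled_min ** 2
mean1, mean2 = acc1 / trials, acc2 / trials
```

Both empirical moments match σ = 2 and 2σ² = 8 within Monte Carlo error, as the theorem predicts.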

Arnold (1971) proved that the characterization is preserved if in Theorem 5.2.1 the assumption "for all k" is replaced with the assumption "for two relatively prime positive integers k₁, k₂ > 1". Here is his theorem:

Theorem 5.2.2. Let supp(F) = (0, ∞). Then X₁ ∼ E(σ) if and only if n_iX_{1,n_i} ∼ E(σ) for 1 < n₁ < n₂ with ln n₁/ln n₂ irrational.

    Ahsanullah and Rahman (1972)

