MATHEMATICS RESEARCH CENTER

NATIONAL TECHNICAL INFORMATION SERVICE

  • THE UNIVERSITY OF WISCONSIN

MATHEMATICS RESEARCH CENTER

    Contract No. : DA-31-124-ARO-D-462

A TEST OF FIT FOR CONTINUOUS DISTRIBUTIONS BASED ON GENERALIZED MINIMUM CHI-SQUARE

John Gurland and Ram C. Dahiya

    This document has been approved for public release and sale; its distribution is unlimited.

    MRC Technical Summary Report #1057 April 1970

    Madison, Wisconsin 53706



    ABSTRACT

    A test of fit based on minimum chi-square techniques is developed

    for continuous distributions. This procedure is investigated in detail for the

    special case of testing for normality, where the test statistic is based on

    the first four sample moments. The asymptotic non-null distribution of the

    general test statistic is obtained, and in particular the power of the test

    of normality is derived for several alternative families of distributions.

  • A TEST OF FIT FOR CONTINUOUS DISTRIBUTIONS BASED ON

    GENERALIZED MINIMUM CHI-SQUARE

    John Gurland and Ram C. Dahiya

    1. Introduction

In this paper a test of fit for continuous distributions is developed based

    on generalized minimum chi-square techniques. Although the Pearson chi-

    square test of fit is widely used especially in the case of discrete distributions,

    there are difficulties in applying it, particularly in the case of continuous

    distributions. A discussion of these difficulties is included in the paper by Dahiya

and Gurland (1970a). A motivation for obtaining the results in the present paper is to

develop a test which is free of complications associated with the Pearson chi-square

test. In particular the question of how to form class intervals does not arise in the test

of fit presented here. Furthermore the asymptotic distribution is exactly that of a $\chi^2$,

    in contradistinction to the asymptotic distribution of the statistic employed in the

Pearson $\chi^2$ test when the estimators of parameters are obtained from the ungrouped

    sample (cf. Chernoff and Lehmann (1954)).

    The asymptotic non-null distribution of the test statistic proposed here

    is developed for general alternatives. As a special case the asymptotic power

is obtained for testing normality against several specific alternative families of

    distributions. The power of this test is compared with that of a modified form of

the Pearson chi-square test based on random intervals presented by Dahiya

    and Gurland (1970a, 1970b).

This work was supported in part by the National Science Foundation, the Wisconsin Alumni Research Foundation, and the United States Army under Contract No.: DA-31-124-ARO-D-462.

  • Although the test of fit presented here is for continuous distributions,

    the method based on minimum chi-square techniques is quite general and can

    in fact be adapted to discrete distributions. Hinz and Gurland (1970) have

    applied such techniques to develop a test of fit for the negative binomial

    and other contagious distributions.

    2. Formulation of a test statistic based on sample moments

    First we consider the problem in a general context and show how to

    construct a statistic for testing the fit of a hypothesized distribution based on

    a set of sample moments. In a subsequent section the result obtained here

    will be applied to develop a test of normality.

Let $X_1, X_2, \ldots, X_n$ be a random sample from a certain distribution

with p.d.f.

$$p_X(x \mid \theta) \qquad (2.1)$$

where $\theta$ is a parameter vector of $q$ components, that is

$$\theta' = [\theta_1, \theta_2, \ldots, \theta_q] \quad . \qquad (2.2)$$

Denote the $j$-th raw sample moment by

$$m_j' = \frac{1}{n} \sum_{i=1}^{n} X_i^{\,j} \quad . \qquad (2.3)$$

Let

$$m' = [m_1', m_2', \ldots, m_s'] \qquad (2.4)$$


where $s$ $(s > q)$ is a fixed number that remains to be specified. (A low

value of $s$ is generally desirable due to the large sampling fluctuations of

higher order moments.) Under the assumption that the $(2s)$-th order moment of

$X$ exists, we can easily show by making use of the Central Limit Theorem that

the asymptotic distribution of $\sqrt{n}\,(m - \mu)$ is normal,

$$N(0;\, G) \qquad (2.5)$$

where the vector $\mu$ is the population counterpart of $m$, given by

$$\mu' = [\mu_1', \mu_2', \ldots, \mu_s'] \,, \qquad (2.6)$$

and the $s \times s$ covariance matrix $G$ is given by

$$G = \big(\mu_{i+j}' - \mu_i'\,\mu_j'\big) \,, \qquad i, j = 1, 2, \ldots, s \quad . \qquad (2.7)$$

If $h_1, h_2, \ldots, h_s$ are $s$ functions of $m$, that is

$$h_i = h_i(m_1', m_2', \ldots, m_s') \,, \qquad i = 1, 2, \ldots, s \,, \qquad (2.8)$$

such that their population counterparts

$$\xi_i = h_i(\mu_1', \mu_2', \ldots, \mu_s') \,, \qquad i = 1, 2, \ldots, s \,, \qquad (2.9)$$

are differentiable to the second order with respect to $\mu_1', \mu_2', \ldots, \mu_s'$, then

the asymptotic distribution of

$$\sqrt{n}\,(h - \xi) \qquad (2.10)$$


is given by

$$N(0;\, \Sigma) \qquad (2.11)$$

where

$$h' = [h_1, h_2, \ldots, h_s] \,, \qquad \xi' = [\xi_1, \xi_2, \ldots, \xi_s] \,, \qquad \Sigma = J\,G\,J' \,, \qquad (2.12)$$

and $J$ is the $s \times s$ Jacobian matrix $\big(\partial \xi_i / \partial \mu_j'\big)$.

From this result it follows that the asymptotic distribution of

$$Q = n\,(h - \xi)'\,\Sigma^{-1}\,(h - \xi) \qquad (2.13)$$

is that of $\chi^2_s$. Furthermore, if $\hat{\Sigma}$ is a consistent estimator of $\Sigma$, which is

obtained from $\Sigma$ on replacing parameters by maximum likelihood or some

other consistent estimators, then according to Gurland (1948), Barankin and

Gurland (1951), the asymptotic distribution of

$$Q^{*} = n\,(h - \xi)'\,\hat{\Sigma}^{-1}\,(h - \xi) \qquad (2.14)$$

is the same as the asymptotic distribution of $Q$.

Now suppose we select functions $h_i$ such that the $\xi_i$ are linear functions

of the parameters $\theta_1, \theta_2, \ldots, \theta_q$, that is

$$\xi = W\theta \quad . \qquad (2.15)$$


where $W$ is an $s \times q$ matrix of known constants. In such a case we can

find an estimator for $\theta$ by minimizing the expression for $Q^{*}$ in (2.14).

This estimator, $\hat{\theta}$ say, is given by

$$\hat{\theta} = (W'\,\hat{\Sigma}^{-1}\,W)^{-1}\,W'\,\hat{\Sigma}^{-1}\,h \quad . \qquad (2.16)$$

Let

$$\hat{\xi} = W\hat{\theta} \qquad (2.17)$$

and

$$\hat{Q} = n\,(h - \hat{\xi})'\,\hat{\Sigma}^{-1}\,(h - \hat{\xi}) \quad . \qquad (2.18)$$

Now let

$$R = W\,(W'\,\hat{\Sigma}^{-1}\,W)^{-1}\,W'\,\hat{\Sigma}^{-1} \,, \qquad \hat{A} = \hat{\Sigma}^{-1}\,(I - R) \quad . \qquad (2.19)$$

Then

$$\hat{Q} = n\,(h - Rh)'\,\hat{\Sigma}^{-1}\,(h - Rh) = n\,h'(I - R)'\,\hat{\Sigma}^{-1}\,(I - R)\,h = n\,h'\hat{A}h \quad . \qquad (2.20)$$

From results of Gurland (1948), Barankin and Gurland (1951), the asymptotic

distribution of $n\,h'\hat{A}h$ is the same as the asymptotic distribution of $n\,h'Ah$,

where $A$ is obtained from $\hat{A}$ on replacing $\hat{\Sigma}$ by $\Sigma$. In order to find the

distribution of $n\,h'Ah$ we make use of the following lemma.


  • Lemma 1.

If $X$ is distributed as $N(\mu;\, \Sigma)$ and $B$ is a matrix such that $\Sigma B$ is

idempotent, then the distribution of $X'BX$ is non-central chi-square with $r$

degrees of freedom and noncentrality parameter $\lambda$, denoted by $\chi^2_{r,\lambda}$,

where $r = \mathrm{rank}(B)$ and $\lambda = \mu'B\mu$.

    Proof:

Let $P$ be a nonsingular matrix such that

$$P\,\Sigma\,P' = I \,, \qquad (2.21)$$

an identity matrix. On making use of the transformation

$$Y = PX \qquad (2.22)$$

it follows that $Y$ is distributed as $N(P\mu;\, I)$ and

$$X'BX = Y'\,P'^{-1}B\,P^{-1}\,Y \quad . \qquad (2.23)$$

Now $P'^{-1}B\,P^{-1}$ is an idempotent matrix of rank $r$ since $\Sigma B = P^{-1}P'^{-1}B$

is idempotent of rank $r$. Hence the distribution of $X'BX$ is $\chi^2_{r,\lambda}$, where

$$\lambda = (P\mu)'\,P'^{-1}B\,P^{-1}\,(P\mu) = \mu'B\mu \quad . \qquad (2.24)$$

    This proves Lemma 1.

From the above lemma, and also assuming $W$ to be of full rank $q$, we see

that the asymptotic null distribution of $n\,h'Ah$ is $\chi^2_{s-q}$, since $\Sigma A$ is an

idempotent matrix of rank $s-q$ and $\xi'A\xi = 0$, which can easily be verified.

Thus it follows that the asymptotic distribution of $\hat{Q}$ is that of $\chi^2_{s-q}$.
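For concreteness, the estimator (2.16) and the statistic (2.18), or equivalently (2.20), can be evaluated directly once $h$, $\hat{\Sigma}$ and $W$ are available. A minimal sketch in Python follows; the function name and calling convention are illustrative assumptions rather than anything prescribed in the report.

```python
import numpy as np
from scipy import stats

def gmc_test_statistic(h, Sigma_hat, W, n):
    """Generalized minimum chi-square fit statistic following (2.16)-(2.20).

    h         : length-s vector of moment functions computed from the sample
    Sigma_hat : s x s consistent estimate of the asymptotic covariance of sqrt(n)*h
    W         : s x q matrix of known constants with xi = W theta (full column rank)
    n         : sample size
    """
    h = np.asarray(h, dtype=float)
    W = np.asarray(W, dtype=float)
    Sigma_inv = np.linalg.inv(np.asarray(Sigma_hat, dtype=float))
    # theta_hat = (W' S^-1 W)^-1 W' S^-1 h, equation (2.16)
    theta_hat = np.linalg.solve(W.T @ Sigma_inv @ W, W.T @ Sigma_inv @ h)
    resid = h - W @ theta_hat                # h - xi_hat, with xi_hat = W theta_hat (2.17)
    Q_hat = n * resid @ Sigma_inv @ resid    # equation (2.18), equal to n h'Ah of (2.20)
    df = len(h) - W.shape[1]                 # s - q degrees of freedom under the null
    return Q_hat, theta_hat, stats.chi2.sf(Q_hat, df)
```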


The statistic $\hat{Q}$ can be utilized for testing the fit of an assumed

    distribution. In order to ascertain how well such a test of fit behaves, its

    power against specific alternatives can be obtained from the non-null

    distribution given in section 3.

3. Asymptotic non-null distribution of $\hat{Q}$

The asymptotic non-null distribution of $\hat{Q}$ turns out to be that of a

weighted sum of independent non-central $\chi^2$ random variables, each with one

degree of freedom. A derivation of this result, along with the precise weights and

non-centralities, is given in the following theorem.

    Theorem 1.

Let the null and alternative hypotheses $H_0$, $H_1$ respectively be as

follows:

$$H_0: \ X \ \text{has p.d.f.} \ p_X(x \mid \theta) \,, \qquad -\infty < x < \infty \,, \quad \theta' = [\theta_1, \theta_2, \ldots, \theta_q] \qquad (3.1)$$

$$H_1: \ X \ \text{has p.d.f.} \ p_X(x \mid \gamma) \,, \qquad -\infty < x < \infty \,, \quad \gamma' = [\gamma_1, \gamma_2, \ldots, \gamma_p] \qquad (3.2)$$

where $\theta$ and $\gamma$ are parameter vectors of $q$ and $p$ components respectively.

Then the asymptotic non-null distribution of $\hat{Q}$, defined in (2.20), is of the form

$$\sum_{i=1}^{s-q} d_i\, \chi^2_{1,\alpha_i} \quad . \qquad (3.3)$$



The constants $d_i$ are given by (3.7) and the $\alpha_i$ by (3.8) and (3.10).

    Proof:

Let us denote the matrix to which $\hat{\Sigma}$ converges in probability under $H_1$

by $\Sigma^{*}$, that is

$$\hat{\Sigma} \ \xrightarrow{\ P\ } \ \Sigma^{*} \quad \text{under } H_1 \quad . \qquad (3.4)$$

Then $\Sigma^{*}$ involves the parameter vector $\gamma$. Now the asymptotic

non-null distribution of $n\,h'\hat{A}h$ is the same as that of

$$Q^{(1)} = n\,h'A^{*}h \qquad (3.5)$$

where $A^{*}$ is obtained from $\hat{A}$ on replacing $\hat{\Sigma}$ by $\Sigma^{*}$. Let $\Sigma^{(1)}$ denote

the asymptotic covariance matrix of $\sqrt{n}\,h$ under $H_1$, which can be found

in the same way as $\Sigma$ is found under $H_0$. Also, if $\xi^{(1)}$ denotes the

population counterpart of $h$ under $H_1$, then the asymptotic non-null

distribution of $\sqrt{n}\,(h - \xi^{(1)})$ is that of $N(0;\, \Sigma^{(1)})$. There exists a non-

singular matrix $T$ and an orthogonal matrix $P$ such that

$$T\,\Sigma^{(1)}\,T' = I \qquad (3.6)$$

and


$$P\,T'^{-1}A^{*}\,T^{-1}\,P' = D = \mathrm{diag}\,(d_1, \ldots, d_{s-q}, 0, \ldots, 0) \qquad (3.7)$$

where $I$ is the identity matrix and $D$ is a diagonal matrix such that the last $q$

diagonal elements of $D$ are zero. This is possible since $\mathrm{rank}(A^{*}) = s - q$.

Let

$$u = P\,T\,h \,, \qquad \eta = P\,T\,\xi^{(1)} \quad . \qquad (3.8)$$

Then we have

$$n\,h'A^{*}h = n\,(T^{-1}P'u)'\,A^{*}\,(T^{-1}P'u) = n\,u'Du = \sum_{i=1}^{s-q} d_i\,(\sqrt{n}\,u_i)^2 \quad . \qquad (3.9)$$

Now since the asymptotic distribution of $\sqrt{n}\,(h - \xi^{(1)})$ is $N(0;\, \Sigma^{(1)})$,

it follows that the asymptotic distribution of $\sqrt{n}\,(u - \eta)$ is $N(0;\, I)$.

Hence the asymptotic distribution of $n\,h'A^{*}h$ is that of


$$\sum_{i=1}^{s-q} d_i\, \chi^2_{1,\alpha_i}$$

where

$$\alpha_i = n\,\eta_i^2 \,, \qquad i = 1, \ldots, s-q \,, \qquad (3.10)$$

and $\eta_i$ is the $i$-th element of $\eta$. This proves the theorem.
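As a computational companion to Theorem 1, the weights $d_i$ and non-centralities $\alpha_i$ can be obtained numerically by the same construction used in the proof: a Cholesky factor supplies $T$, and a spectral decomposition supplies $P$. A minimal sketch, assuming $A^{*}$, $\Sigma^{(1)}$ and $\xi^{(1)}$ are available and that $A^{*}$ is symmetric:

```python
import numpy as np

def theorem1_weights(A_star, Sigma1, xi1, n, tol=1e-10):
    """Weights d_i and noncentralities alpha_i of the limiting law in Theorem 1.

    A_star : s x s symmetric matrix A* (A-hat with Sigma-hat replaced by Sigma*)
    Sigma1 : s x s asymptotic covariance Sigma^(1) of sqrt(n)*h under H1
    xi1    : length-s population counterpart of h under H1
    n      : sample size
    """
    L = np.linalg.cholesky(np.asarray(Sigma1, dtype=float))  # Sigma^(1) = L L', so T = inv(L) satisfies (3.6)
    C = L.T @ A_star @ L                   # equals T'^-1 A* T^-1 of (3.7)
    eigvals, eigvecs = np.linalg.eigh(C)   # C = V diag(eigvals) V'
    order = np.argsort(eigvals)[::-1]      # nonzero weights first; the last q are ~0
    d = eigvals[order]
    P = eigvecs[:, order].T                # orthogonal, with P C P' = diag(d) as in (3.7)
    eta = P @ np.linalg.solve(L, np.asarray(xi1, dtype=float))  # eta = P T xi^(1), cf. (3.8)
    alpha = n * eta**2                     # noncentralities, cf. (3.10)
    keep = d > tol
    return d[keep], alpha[keep]
```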

4. Test of fit for normal distribution based on $\hat{Q}$

We shall now consider a test of fit based on $\hat{Q}$ when the null

distribution is normal, that is, $X$ has p.d.f.

$$p_X(x \mid \theta) = \frac{1}{\sqrt{2\pi\theta_2}}\; e^{-\frac{(x-\theta_1)^2}{2\theta_2}} \qquad (4.1)$$

$$-\infty < x < \infty \,, \qquad -\infty < \theta_1 < \infty \,, \qquad \theta_2 > 0 \quad .$$

Let $m_2$, $m_3$ and $m_4$ be the second, third and fourth central sample

moments respectively. The statistics $b_1$, $b_2$ given by

$$b_1 = m_3/m_2^{3/2} \,, \qquad b_2 = m_4/m_2^{2} \qquad (4.2)$$

    are sometimes employed for testing normality by means of skewness and

kurtosis. Instead of considering these two statistics separately, it

    appears more rational to formulate a single statistic involving the first

    four moments. This motivates our selection of functions h based on the

    first four sample moments. The mean, variance, third and fourth central

    moments of X are respectively given by:


$$\mu_1' = \theta_1 \,, \qquad \mu_2 = \theta_2 \,, \qquad \mu_3 = 0 \,, \qquad \mu_4 = 3\theta_2^2 \quad . \qquad (4.3)$$

If we define

$$\theta_2^{*} = \log \theta_2 \,, \qquad \xi' = \Big[\,\mu_1' \,,\ \log \mu_2 \,,\ \mu_3 \,,\ \log\!\Big(\frac{\mu_4}{3}\Big)\Big] \,, \qquad (4.4)$$

then the elements of $\xi$ are linear functions of the parameters $\theta_1$ and $\theta_2^{*}$.

We can now write

$$\xi = W\theta^{*} \,, \qquad \theta^{*\prime} = [\theta_1, \theta_2^{*}] \,, \qquad (4.5)$$

with

$$W = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ 0 & 2 \end{bmatrix} \quad .$$

The corresponding $h$ functions are given by

$$h_1 = m_1' \,; \qquad h_2 = \log m_2 \,; \qquad h_3 = m_3 \,; \qquad h_4 = \log\!\Big(\frac{m_4}{3}\Big) \,, \qquad (4.6)$$

where $m_1'$ is the sample mean and $m_2$, $m_3$, $m_4$ denote the second, third and

fourth central sample moments, respectively, as previously indicated.

The transformation from the sample raw moments to the functions $h$ is

achieved in two stages, that is, from $(m_1', m_2', m_3', m_4')$ to

$(m_1', m_2, m_3, m_4)$ and then finally to $(h_1, h_2, h_3, h_4)$. In the notation

of section 2, $\sqrt{n}\,(h - \xi)$ is asymptotically $N(0;\, \Sigma)$, where


$$\Sigma = (TJ)\,G\,(TJ)' \qquad (4.7)$$

with

$$J = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ -3\theta_2 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \,, \qquad T = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1/\theta_2 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1/(3\theta_2^2) \end{bmatrix} \,, \qquad (4.8)$$

$$G = \begin{bmatrix} \theta_2 & 0 & 3\theta_2^2 & 0 \\ 0 & 2\theta_2^2 & 0 & 12\theta_2^3 \\ 3\theta_2^2 & 0 & 15\theta_2^3 & 0 \\ 0 & 12\theta_2^3 & 0 & 96\theta_2^4 \end{bmatrix} \quad . \qquad (4.9)$$

Here $J$ is the Jacobian of $(\mu_1', \mu_2, \mu_3, \mu_4)$ with respect to the raw moments, $T$ is the Jacobian of $\xi$ with respect to $(\mu_1', \mu_2, \mu_3, \mu_4)$, and $G$ is the matrix (2.7) with $i, j = 1, 2, 3, 4$, all evaluated under the null hypothesis (4.1) with $\theta_1 = 0$; this involves no loss of generality, since the resulting $\Sigma$ is free of the location parameter.

After simplification we obtain

$$\Sigma = \begin{bmatrix} \theta_2 & 0 & 0 & 0 \\ 0 & 2 & 0 & 4 \\ 0 & 0 & 6\theta_2^3 & 0 \\ 0 & 4 & 0 & 32/3 \end{bmatrix} \quad . \qquad (4.10)$$
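Equation (4.10) is easy to check numerically: simulate normal samples, form $h = (m_1', \log m_2, m_3, \log(m_4/3))$, and compare the empirical covariance of $\sqrt{n}\,h$ with $\Sigma$. A small sketch, with $\theta_1 = 0$ and $\theta_2 = 2$ chosen arbitrarily for the check:

```python
import numpy as np

rng = np.random.default_rng(0)
theta2, n, reps = 2.0, 2000, 4000

h = np.empty((reps, 4))
for r in range(reps):
    x = rng.normal(0.0, np.sqrt(theta2), size=n)
    c = x - x.mean()
    m2, m3, m4 = (c**2).mean(), (c**3).mean(), (c**4).mean()
    h[r] = [x.mean(), np.log(m2), m3, np.log(m4 / 3.0)]

Sigma_mc = n * np.cov(h, rowvar=False)   # empirical covariance of sqrt(n)*h
Sigma_410 = np.array([[theta2, 0, 0,             0],
                      [0,      2, 0,             4],
                      [0,      0, 6 * theta2**3, 0],
                      [0,      4, 0,        32 / 3]])
print(np.round(Sigma_mc, 2))             # should be close to Sigma_410
print(Sigma_410)
```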

Now let

$$\hat{\Sigma} = \Sigma \,\Big|_{\,\theta_2 = m_2} \,, \qquad (4.11)$$

where $m_2$ is the maximum likelihood estimator of $\theta_2$. Then a statistic

$\hat{Q}$ for testing normality is given by

$$\hat{Q} = n\,h'\hat{A}h \qquad (4.12)$$



where

$$\hat{A} = \hat{\Sigma}^{-1}(I - R) \,, \qquad R = W\,(W'\hat{\Sigma}^{-1}W)^{-1}\,W'\hat{\Sigma}^{-1} \quad . \qquad (4.13)$$

After simplification we can show that

$$\hat{A} = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 1.5 & 0 & -.75 \\ 0 & 0 & 1/(6m_2^3) & 0 \\ 0 & -.75 & 0 & .375 \end{bmatrix} \quad . \qquad (4.14)$$

Hence a simplified form of $\hat{Q}$ is given by

$$\hat{Q} = n\,u'Bu \qquad (4.15)$$

where

$$u' = [h_2, h_3, h_4] = \Big[\log m_2 \,,\ m_3 \,,\ \log\!\Big(\frac{m_4}{3}\Big)\Big] \qquad (4.16)$$

and

$$B = \begin{bmatrix} 1.5 & 0 & -.75 \\ 0 & 1/(6m_2^3) & 0 \\ -.75 & 0 & .375 \end{bmatrix} \quad . \qquad (4.17)$$

The statistic $\hat{Q}$ in (4.15) can easily be computed on a desk calculator.
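It is equally straightforward to compute (4.15)-(4.17) in a few lines of code. A minimal sketch in Python; the function name is an illustrative choice:

```python
import numpy as np
from scipy import stats

def normality_Q(x):
    """Q-hat of (4.15)-(4.17); asymptotically chi-square with 2 d.f. under normality."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    c = x - x.mean()
    m2, m3, m4 = (c**2).mean(), (c**3).mean(), (c**4).mean()
    u = np.array([np.log(m2), m3, np.log(m4 / 3.0)])             # (4.16)
    B = np.array([[1.5,   0.0,                  -0.75],
                  [0.0,   1.0 / (6.0 * m2**3),   0.0],
                  [-0.75, 0.0,                   0.375]])         # (4.17)
    Q = n * u @ B @ u                                             # (4.15)
    return Q, stats.chi2.sf(Q, df=2)

# Example use: reject normality at level .05 when Q exceeds the chi-square(2) critical point.
x = np.random.default_rng(1).exponential(scale=1.0, size=100)
Q, p = normality_Q(x)
print(Q, p, Q > stats.chi2.ppf(0.95, df=2))
```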


The asymptotic distribution of $\hat{Q}$ is $\chi^2_2$, since here $s = 4$ and

$q = 2$ in the notation of section 2. Thus to carry out a test of fit for

normality at a particular level of significance, one merely requires the

corresponding critical point of the $\chi^2_2$ distribution.

    5. Power of the test of normality

Let $p_X(x \mid \gamma)$, where $\gamma' = [\gamma_1, \gamma_2, \ldots, \gamma_p]$ is a parameter vector,

denote a general alternative to the null hypothesis of normality. If we denote

its $i$-th raw moment by $\mu_i'^{(1)}$ and the corresponding central moment by

$\mu_i^{(1)}$, then the asymptotic non-null distribution of $\sqrt{n}\,(h - \xi^{(1)})$ is $N(0;\, \Sigma^{(1)})$,

where

$$\xi^{(1)\prime} = \Big[\,\mu_1'^{(1)} \,,\ \log \mu_2^{(1)} \,,\ \mu_3^{(1)} \,,\ \log\!\Big(\frac{\mu_4^{(1)}}{3}\Big)\Big] \,, \qquad \Sigma^{(1)} = \big(T^{(1)}J^{(1)}\big)\,G^{(1)}\,\big(T^{(1)}J^{(1)}\big)' \,, \qquad (5.1)$$

$$G^{(1)} = \big(\mu_{i+j}'^{(1)} - \mu_i'^{(1)}\mu_j'^{(1)}\big) \,, \quad i, j = 1, 2, 3, 4 \,, \qquad J^{(1)} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ a_{2,1} & 1 & 0 & 0 \\ a_{3,1} & a_{3,2} & 1 & 0 \\ a_{4,1} & a_{4,2} & a_{4,3} & 1 \end{bmatrix} \,, \qquad (5.2)$$

with


$$a_{2,1} = -2\mu_1'^{(1)} \,, \qquad a_{3,1} = -3\mu_2'^{(1)} + 6\big(\mu_1'^{(1)}\big)^2 \,, \qquad a_{3,2} = -3\mu_1'^{(1)} \,,$$

$$a_{4,1} = -4\mu_3'^{(1)} + 12\,\mu_1'^{(1)}\mu_2'^{(1)} - 12\big(\mu_1'^{(1)}\big)^3 \,, \qquad a_{4,2} = 6\big(\mu_1'^{(1)}\big)^2 \,, \qquad a_{4,3} = -4\mu_1'^{(1)} \,,$$

$$T^{(1)} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1/\mu_2^{(1)} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1/\mu_4^{(1)} \end{bmatrix} \quad . \qquad (5.3)$$

$\Sigma$ and $\Sigma^{(1)}$ are the asymptotic covariance matrices of $\sqrt{n}\,h$ under $H_0$ and $H_1$ respectively.

  • Proof:

Since $m_2 \xrightarrow{\ P\ } \mu_2^{(1)}$ under $H_1$, the asymptotic non-null distribution

of $\hat{Q} = n\,u'Bu$ is the same as the asymptotic non-null distribution of

$$Q^{(1)} = n\,u'B^{(1)}u \qquad (5.4)$$

where

$$B^{(1)} = \begin{bmatrix} 1.5 & 0 & -.75 \\ 0 & 1/\big(6\,(\mu_2^{(1)})^3\big) & 0 \\ -.75 & 0 & .375 \end{bmatrix} \quad . \qquad (5.5)$$

The distribution of $u$ is invariant with respect to the location

parameter, and $B^{(1)}$ does not involve this parameter; hence it follows that

the asymptotic non-null distribution of $\hat{Q}$ does not involve the location

parameter.

Now let $\beta$ be the scale parameter in the alternative distribution

of $X$. If we take

$$Y = X/\beta \,, \qquad (5.6)$$

then the distribution of $Y$ does not involve $\beta$.

Let $V' = [V_1, V_2, V_3]$ be such that

$$V_1 = u_1 - 2\log\beta = \log m_2(y)$$

$$V_2 = u_2/\beta^3 = m_3(y) \qquad (5.7)$$

$$V_3 = u_3 - 4\log\beta = \log m_4(y) - \log 3$$


where

$$m_i(y) = m_i/\beta^{\,i} \,, \qquad i = 2, 3, 4 \quad . \qquad (5.8)$$

Then the distribution of $V$ does not involve the parameter $\beta$, since

the distributions of $m_2(y)$, $m_3(y)$ and $m_4(y)$ do not involve this parameter.

If $\mu_2(Y)$ denotes the variance of $Y$, then we have

$$\mu_2^{(1)} = \mu_2(Y)\,\beta^2 \qquad (5.9)$$

and

$$Q^{(1)} = n\,u'B^{(1)}u = n\,\big[\,V_1 + 2\log\beta \,,\ V_2\beta^3 \,,\ V_3 + 4\log\beta\,\big]\; B^{(1)} \begin{bmatrix} V_1 + 2\log\beta \\ V_2\beta^3 \\ V_3 + 4\log\beta \end{bmatrix} = n\,V'B^{*}V \,, \qquad (5.10)$$

where

$$B^{*} = \begin{bmatrix} 1.5 & 0 & -.75 \\ 0 & 1/\big(6\,\mu_2^3(Y)\big) & 0 \\ -.75 & 0 & .375 \end{bmatrix} \quad .$$

It is surprising that although the scale parameter $\beta$ is involved in

$u$ and $B^{(1)}$, it cancels out in $u'B^{(1)}u$, as is evident in (5.10).


Since the distribution of $V$ does not involve $\beta$, and since $B^{*}$ is

also free of this parameter, the asymptotic distribution of $Q^{(1)}$, and hence

that of $\hat{Q}$, does not involve the scale parameter $\beta$. This completes the

proof of Theorem 2.
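Theorem 2 is also easy to check empirically: simulating $\hat{Q}$ under an alternative at two different location-scale settings should give essentially the same distribution. A small Monte Carlo sketch; the exponential alternative, sample size and number of replications are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)

def q_stat(x):
    # Q-hat of (4.15)-(4.17), written out inline
    n = len(x)
    c = x - x.mean()
    m2, m3, m4 = (c**2).mean(), (c**3).mean(), (c**4).mean()
    u = np.array([np.log(m2), m3, np.log(m4 / 3.0)])
    B = np.array([[1.5,   0.0,                 -0.75],
                  [0.0,   1.0 / (6.0 * m2**3),  0.0],
                  [-0.75, 0.0,                  0.375]])
    return n * u @ B @ u

def q_quantiles(loc, scale, n=100, reps=2000):
    qs = [q_stat(loc + scale * rng.standard_exponential(n)) for _ in range(reps)]
    return np.percentile(qs, [50, 90, 95])

print(q_quantiles(loc=0.0, scale=1.0))   # exponential alternative, standard form
print(q_quantiles(loc=5.0, scale=3.0))   # same family, shifted and rescaled
# The two rows should agree up to Monte Carlo error, as Theorem 2 asserts.
```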


6. Calculation of power for the test of normality based on $\hat{Q}$

For studying the behavior of the test of fit for normality based on $\hat{Q}$,

    we have carried out power computations for several alternative families of

    distributions. The null hypothesis has been stated in (4.1) and the test

statistic $\hat{Q}$ formulated in (4.15).

The following alternative distributions $A_1$, $A_2$, $A_3$, $A_4$, $A_5$ are

considered.

$A_1$: Exponential

$$p_X(x \mid \gamma) = \frac{1}{\gamma_2}\, e^{-\frac{x-\gamma_1}{\gamma_2}} \,, \qquad x > \gamma_1 \,, \quad -\infty < \gamma_1 < \infty \,, \quad \gamma_2 > 0$$

$A_2$: Double Exponential

$$p_X(x \mid \gamma) = \frac{1}{2\gamma_2}\, e^{-\frac{|x-\gamma_1|}{\gamma_2}} \,, \qquad -\infty < x < \infty \,, \quad -\infty < \gamma_1 < \infty \,, \quad \gamma_2 > 0$$

$A_3$: Logistic

$$p_X(x \mid \gamma) = \frac{e^{-\frac{x-\gamma_1}{\gamma_2}}}{\gamma_2\Big(1 + e^{-\frac{x-\gamma_1}{\gamma_2}}\Big)^2} \,, \qquad -\infty < x < \infty \,, \quad -\infty < \gamma_1 < \infty \,, \quad \gamma_2 > 0$$

$A_4$: Pearson Type III

$$p_X(x \mid \gamma) = \frac{1}{\gamma_2\,\Gamma(\beta)}\, e^{-\frac{x-\gamma_1}{\gamma_2}} \Big(\frac{x-\gamma_1}{\gamma_2}\Big)^{\beta - 1} \,, \qquad x > \gamma_1 \,, \quad -\infty < \gamma_1 < \infty \,, \quad \gamma_2 > 0 \,, \quad \beta > 0$$

$A_5$: "Power Distribution"

$$p_X(x \mid \gamma) = \frac{1}{\gamma_2\, 2^{(3+\beta)/2}\, \Gamma\!\big(\frac{3+\beta}{2}\big)}\; e^{-\frac{1}{2}\left|\frac{x-\gamma_1}{\gamma_2}\right|^{\frac{2}{1+\beta}}} \,, \qquad -\infty < x < \infty \,, \quad -\infty < \gamma_1 < \infty \,, \quad \gamma_2 > 0 \,, \quad \beta > -1 \quad .$$

All the alternatives $A_1$ to $A_5$, inclusive, involve unknown parameters

$\gamma_1$ and $\gamma_2$, which are location and scale respectively. Thus the power will

be the same for all possible values of $\gamma_1$ and $\gamma_2$, according to the result

proved in Theorem 2.


The asymptotic power is given by

$$P\Big\{\sum_{i=1}^{2} d_i\, \chi^2_{1,\alpha_i} > \chi^2_{2,\,\alpha}\Big\} \qquad (6.1)$$

where the asymptotic non-null distribution of $\hat{Q}$ is that of $\sum_{i=1}^{2} d_i\, \chi^2_{1,\alpha_i}$, as

proved in Theorem 1, and $\chi^2_{2,\,\alpha}$ is the $100(1-\alpha)$ percent point of the

$\chi^2_2$ distribution.

A generalization of Gurland's (1955, 1956) Laguerre series expansion

has been given by Kotz et al. (1967) for the distribution of quadratic forms

in non-central normal variates. We make use of this expansion in order to

compute the power given by (6.1). These calculations have been carried out

for sample sizes $n = 50, 75, 100$, and the two levels of significance $\alpha = .05$,

$.01$. The results appear in Table 1 for all the alternatives $A_1$ through $A_5$,

with several different specified values of the parameter $\beta$ in the case of $A_4$

and $A_5$, as indicated in the table.
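The Laguerre series machinery of Kotz et al. (1967) is what is actually used for (6.1). As a plain substitute for illustration, the same tail probability can also be approximated by Monte Carlo once the weights $d_i$ and non-centralities $\alpha_i$ of Theorem 1 have been computed; the sketch below assumes they are given:

```python
import numpy as np
from scipy import stats

def asymptotic_power_mc(d, alpha, level=0.05, df_null=2, reps=200_000, seed=0):
    """Monte Carlo approximation of the power expression (6.1).

    d, alpha : weights and noncentralities of the limiting law of Q-hat under the
               alternative (Theorem 1), assumed to have been computed already.
    Estimates P{ sum_i d_i * chi2(1 df, noncentrality alpha_i) > chi2 critical point }.
    This is only a simple stand-in for the Laguerre series expansion used in the paper.
    """
    rng = np.random.default_rng(seed)
    d = np.asarray(d, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    crit = stats.chi2.ppf(1.0 - level, df=df_null)
    draws = rng.noncentral_chisquare(df=1.0, nonc=alpha, size=(reps, len(d)))
    return float(np.mean(draws @ d > crit))
```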

A modified form of the Pearson chi-square test has been considered by

Dahiya and Gurland (1970b), where the test statistic is denoted by $X_R^2$. According

to this modification, the estimators obtained from the ungrouped sample are

utilized in determining the class interval end points as well as in the test statistic

$X_R^2$. For convenience in making some comparisons with the $\hat{Q}$ test, the values of

power of the $X_R^2$ test against the alternatives listed in Table 1 are included for

those cases corresponding to sample sizes $n = 50, 100$ which are available from

Dahiya and Gurland (1970b). These values are enclosed in parentheses and are

based in each case on the number of class intervals giving the maximum power in

Tables 1, 2 of Dahiya and Gurland (1970b). For example, in the case of alternative

$A_1$ the power of the $X_R^2$ test attains a maximum value of 1.000 for sample sizes

$n = 50, 100$ when the number of class intervals is 7, and in the case of

alternative $A_2$ its power attains maximum values .547, .800 corresponding

to sample sizes 50, 100 respectively, based on 3 class intervals.

It is evident on examining the values of power for the $\hat{Q}$ test in Table 1

that for most of the cases considered there its value is higher, and sometimes

very much higher, than the value for the $X_R^2$ test.

As we examine Table 1 in detail, we note that for alternatives $A_1$ and

$A_2$, namely the exponential and double exponential, the power is rather high.

For the exponential, the power is slightly lower than that of the $X_R^2$ test,

whereas for the double exponential, the reverse is true. For a logistic alternative

the difference in the power of the two tests is dramatic. For example, when

$n = 100$ the power of the $X_R^2$ test with optimal number of classes $k = 3$ is .180

for $\alpha = .05$, whereas for the $\hat{Q}$ test the corresponding power is .654.

As regards $A_4$, namely the Pearson Type III, it is evident from the

table that the power is higher for low values of the parameter $\beta$ and decreases

slowly as $\beta$ increases. The decrease in the power is explained by the fact

that the alternative $A_4$ tends to normal as $\beta$ becomes increasingly large. As is

evident from the few values of power of the $X_R^2$ test appearing for alternative $A_4$,

it behaves similarly to the $\hat{Q}$ test, although its power is substantially less.

Alternative $A_5$ is considered in the table with values of $\beta$ decreasing

from 3.0 to -.95. Similar to the behavior of the $X_R^2$ test, the power increases as $\beta$

increases for $\beta > 0$, and it also increases as $\beta$ decreases for $\beta < 0$; this behavior

is explained by the fact that the normal distribution is a special case of the family $A_5$

with $\beta = 0$. For all the values of $\beta$ considered here, except $\beta = -.50$, the

power of the $\hat{Q}$ test is obviously higher than that of the $X_R^2$ test.

TABLE 1

Power of the Q̂ test for normality

                              α = .05                         α = .01
Alternative               n = 50        75     100           50      75     100
A₁: Exponential          .927 (1.000)  .941   .953 (1.000)  .892    .913    .930
A₂: Double Exponential   .833 (.547)   .858   .879 (.800)   .754    .789    .818
A₃: Logistic             .606 (.128)   .631   .654 (.180)   .465    .495    .523
A₄: β = .5               .966          .972   .976          .949    .957    .964
A₄: β = 2.0              .865          .898   .923          .804    .849    .883
A₄: β = 2.5              .839 (.502)   .879   .909 (.864)   .765    .820    .861
A₄: β = 3.0              .814 (.391)   .860        (.716)   .732    .791    .837
A₄: β = 3.5              .790 (.318)   .841   .881 (.597)   .698    .762    .814
A₄: β = 4.0              .767 (.268)   .822   .865 (.506)   .667    .734    .789
A₄: β = 5.0              .722 (.205)   .784   .834 (.381)   .608    .680    .741
A₄: β = 10.0             .548          .617   .678          .407    .473    .534
A₅: β = 3.0              .996          .996   .997          .994    .995    .995
A₅: β = 2.0              .974          .976   .978          .960    .963    .966
A₅: β = .95              .815          .842   .866          .730    .767    .799
A₅: β = .75              .721 (.376)   .759   .792 (.603)   .604    .652    .695
A₅: β = .50              .527 (.211)   .571   .611 (.343)   .372    .418    .462
A₅: β = .25              .249          .271   .292          .117    .133    .149
A₅: β = -.50             .036 (.144)   .105   .216 (.262)   .002    .009    .029
A₅: β = -.75             .154          .483   .785          .008    .078    .280
A₅: β = -.95             .328 (.331)   .779   .969 (.583)   .027    .237    .621

n = sample size

α = level of significance

A₄ corresponds to the Pearson Type III distribution

A₅ corresponds to the "power distribution"

  • 7. Conclusion

The use of the statistic $\hat{Q}$ in testing for normality results in high

    values of power for many of the alternatives considered in Table 1. The form

    of Q for this test turns out to be relatively simple and could, in fact, be

    computed on a desk calculator if need be. A modified form of the Pearson

chi-square statistic, designated as $X_R^2$, which could also be used to test for

    normality as shown by Dahiya and Gurland (1970a, 1970b) has been compared

    with the Q test for several cases of the alternatives considered in Table 1

    and found to have lower power for the most part.


  • REFERENCES

    1. Barankin, E. W. and Gurland, J. (1951). On asymptotically normal

    efficient estimators: 1. University of California Publications in

    Statistics 1: 89-129.

    2. Chernoff, H. and Lehmann, E. L. (1954). The use of maximum

likelihood estimates in $\chi^2$ tests for goodness of fit. Ann. Math. Stat. 25,

    579-586.

3. Dahiya, R. C. and Gurland, J. (1970a). Pearson chi-square test of fit with

    random intervals. I. Null Case. MRC Technical Summary Report #1046, 1970.

4. Dahiya, R. C. and Gurland, J. (1970b). Pearson chi-square test of

fit with random intervals. II. Non-null case. MRC Technical Summary

    Report #1051, 1970.

5. Gurland, J. (1948). Best asymptotically normal estimates. Unpublished

    Ph. D. Thesis, University of California, Berkeley.

6. Gurland, J. (1955). Distribution of definite and of indefinite quadratic forms.

    Ann. Math. Stat. 26, 122-127. Correction: Ann. Math. Stat. 33(1962), 813.

    7. Gurland, J. (1956). Quadratic forms in normally distributed random

    variables. Sankhyä 17, 37-50.

    8. Hinz, P. and Gurland, J. (1970). A test of fit for the negative binomial

    and other contagious distributions. To appear in June, 1970 issue

of J. Amer. Stat. Assoc.

    9. Kotz, S., Johnson, N. L. and Boyd, D. W. (1967). Series representations

    of distributions of quadratic forms in normal variables. II. Non-central

    case. Ann. Math. Stat. 38, 838-848.


AR 70-31                                                     Unclassified

DOCUMENT CONTROL DATA - R & D

Originating activity: Mathematics Research Center, University of Wisconsin, Madison, Wis. 53706 (Unclassified)

Report title: A TEST OF FIT FOR CONTINUOUS DISTRIBUTIONS BASED ON GENERALIZED MINIMUM CHI-SQUARE

Descriptive notes: Summary Report; no specific reporting period.

Authors: John Gurland and Ram C. Dahiya

Report date: April 1970          Total no. of pages: 24

Contract No. DA-31-124-ARO-D-462          Report number: 1057

Distribution statement: Distribution of this document is unlimited.

Sponsoring military activity: Army Research Office-Durham, N.C.

Abstract: The asymptotic null and non-null distributions of the proposed test statistic are obtained. In particular, a test of normality is presented and its power investigated.

DD Form 1473                                                 Unclassified

