
Mathematical Statistics with Applications
Student Solutions Manual

Kandethody M. Ramachandran
Department of Mathematics and Statistics
University of South Florida
Tampa

Chris P. Tsokos
Department of Mathematics and Statistics
University of South Florida
Tampa

AMSTERDAM · BOSTON · HEIDELBERG · LONDON
NEW YORK · OXFORD · PARIS · SAN DIEGO
SAN FRANCISCO · SINGAPORE · SYDNEY · TOKYO

Academic Press is an imprint of Elsevier


Elsevier Academic Press
30 Corporate Drive, Suite 400, Burlington, MA 01803, USA
525 B Street, Suite 1900, San Diego, California 92101-4495, USA
84 Theobalds Road, London WC1X 8RR, UK

Copyright © 2009, Elsevier Inc. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher.

Permissions may be sought directly from Elsevier's Science & Technology Rights Department in Oxford, UK: phone: (+44) 1865 843830, fax: (+44) 1865 853333, e-mail: [email protected]. You may also complete your request on-line via the Elsevier homepage (http://elsevier.com), by selecting Customer Support and then Obtaining Permissions.

    Library of Congress Cataloging-in-Publication Data

    Applications submitted

    For all information on all Elsevier Academic Press publications

    visit our Web site at www.elsevierdirect.com

    Typeset by: diacriTech, India

    09 10 9 8 7 6 5 4 3 2 1

    ISBN 13: 978-0-08-096443-0


    Contents

CHAPTER 1  Descriptive Statistics ............................................................. 1

CHAPTER 2  Basic Concepts from Probability Theory ................................. 11

CHAPTER 3  Additional Topics in Probability ............................................. 21

CHAPTER 4  Sampling Distributions .......................................................... 37

CHAPTER 5  Point Estimation ................................................................... 49

CHAPTER 6  Interval Estimation ................................................................ 59

CHAPTER 7  Hypothesis Testing ................................................................ 69

CHAPTER 8  Linear Regression Models ...................................................... 81

CHAPTER 9  Design of Experiments .......................................................... 87

CHAPTER 10 Analysis of Variance ............................................................. 91

CHAPTER 11 Bayesian Estimation and Inference ....................................... 101

CHAPTER 12 Nonparametric Tests ........................................................... 109

CHAPTER 13 Empirical Methods .............................................................. 117

CHAPTER 14 Some Issues in Statistical Applications: An Overview ............. 127


Chapter 1 Descriptive Statistics

    EXERCISES 1.2

1.2.1. The suggested solutions:

For qualitative data we can have color, sex, race, ZIP code, and so on. For quantitative data we can have age, temperature, time, height, weight, and so on. For cross-sectional data we can have school funding for each department in 2000. For time-series data we can have the crude oil price from 1995 to 2008.

1.2.3. The suggested questions can be:

1. What type of data is the amount?

2. Do these federal agencies get the same amount of money? If not, why?

3. Which federal agency should get more money? Why?

The suggested inferences we can make are:

1. These federal agencies get different amounts of money.

2. The differences in funding between the agencies are fairly large.

    EXERCISES 1.3

1.3.1. For a stratified sample: suppose we decide to sample 100 college students from a population of 1000 (that is, 10% of the population). We know these 1000 students come from three different majors: Math, Computer Science, and Social Science, with 200 Math, 400 CS, and 400 SS students. Then we choose 10% of each group (20 Math, 40 CS, and 40 SS) by using random sampling within each major.

For a cluster sample: suppose we decide to sample some college students from a population of 2000. We know these 2000 students come from 20 different countries, and we choose 3 of the 20 countries by random sampling. Then we collect the individual information from everyone in each of the 3 chosen countries.


    EXERCISES 1.4

1.4.1. By Minitab:

(a) Bar graph of the percent of road mileage (categories Poor, Mediocre, Fair, Good, Very good; vertical axis 0% to 35%).

(b) Pie chart of the percent of road mileage (same five categories).

1.4.3. (a) Bar graph of the energy percentages (categories Coal, Natural Gas, Nuclear Electric Power, Petroleum, Renewable Energy; vertical axis 0% to 40%).


(b) Pareto chart of the energy percentages. The chart's accompanying table:

Percentage: 0.40  0.23  0.22  0.08  0.07
Percent:    40.0  23.0  22.0   8.0   7.0
Cum %:      40.0  63.0  85.0  93.0  100.0

over the categories Petroleum, Natural Gas, Coal, Nuclear Electric Power, and Renewable Energy.

(c) Pie chart of the five categories: Coal, Natural Gas, Nuclear Electric Power, Petroleum, Renewable Energy.

1.4.5. (a) Bar graph of the grade counts (categories A, B, C, D, F; count axis 0 to 6).


(b) Pie chart of the grade categories A, B, C, D, F.

1.4.7. (a) Pie chart of the categories Mining, Construction, Manufacturing, Transportation, Wholesale, Retail, Finance, Services.

(b) Bar graph of the same eight categories (vertical axis 0 to 8000).


1.4.9. Bar chart of the values for the years 1900, 1960, 1980, 1990, and 2000 (vertical axis 0 to 80).

1.4.11. (a) Bar graph of death rates by cause (Accidents, Chronic C…, Cancer, Diabetes, Heart, Kidney, Pneumonia, Stroke, Suicide; vertical axis 0 to 300).

(b) Pareto graph of the death rates. The chart's accompanying table (rate / percent / cumulative %): 268.0 / 38.7 / 38.7; 199.4 / 28.8 / 67.6; 58.5 / 8.5 / 76.0; 42.3 / 6.1 / 82.1; 35.1 / 5.1 / 87.2; 34.5 / 5.0 / 92.2; 23.9 / 3.5 / 95.6; 30.2 / 4.4 / 100.0, over the categories Heart, Cancer, Stroke, Chronic C…, Pneumonia, Accidents, Diabetes, and Other.


1.4.13. Histogram of C1 (frequency axis 0 to 9; horizontal axis from about 60 to 90).

1.4.15. (a) Stem-and-leaf of C1, N = 20
Leaf Unit = 10

 1   4 | 7
 3   4 | 99
 8   5 | 00011
10   5 | 22
10   5 | 4455
 6   5 | 6667
 2   5 | 9
 1   6 | 0

(b) Histogram of C1 (bins from 480 to 600; frequency axis 0 to 5).


(c) Pie chart of the individual values: 475, 493, 499, 502, 503, 506, 510, 517, 525, 526, 542, 546, 553, 558, 565, 568, 572, 595, 605.

    EXERCISES 1.5

1.5.1. The mean is 165.6667 and the standard deviation is 63.15397.

1.5.3. The data are 3, 3, 5, 13, and the standard deviation is 4.760952.
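Sample statistics like those in 1.5.3 are easy to check with Python's statistics module; a quick verification sketch (not part of the original manual):

```python
from statistics import mean, stdev

data = [3, 3, 5, 13]

# Sample standard deviation: sqrt(sum((x - xbar)^2) / (n - 1))
xbar = mean(data)           # 6
s = stdev(data)             # sample standard deviation

print(xbar, round(s, 6))    # 6 4.760952
```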

1.5.5. (a) The lower quartile is 80, the median is 95, the upper quartile is 115, and the interquartile range is 35. The lower limit for outliers is 27.5 and the upper limit is 167.5.

(b) Box plot of the data.

(c) Therefore there are no outliers.


1.5.7. Σ_{i=1}^{l} fᵢ(mᵢ − x̄) = Σ_{i=1}^{l} fᵢmᵢ − x̄ Σ_{i=1}^{l} fᵢ = nx̄ − nx̄ = 0

1.5.9. (a) The mean is 33.105, the variance is 177.0430, and the range is 48.19.

(b) The lower quartile is 24.9225, the median is 32, and the upper quartile is 42.985. The interquartile range is 18.0625. The lower limit for outliers is −2.17125 and the upper limit is 70.07875. Therefore there are no outliers.
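The outlier limits in 1.5.9 (and in 1.5.5) come from the usual 1.5 × IQR rule; a quick check using the quartiles reported in the solution:

```python
# 1.5 * IQR outlier fences from the quartiles reported in 1.5.9
q1, q3 = 24.9225, 42.985
iqr = q3 - q1                  # 18.0625
lower = q1 - 1.5 * iqr         # -2.17125
upper = q3 + 1.5 * iqr         # 70.07875
print(iqr, lower, upper)
```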

(c) Box plot of the data.

(d) Histogram of y (frequency axis 0 to 8; y from 10 to 60).

    1.5.11. (a) Mean is 110, standard deviation is 83.4847.

    (b) 68%, 95%, 99.7%.

    1.5.13. (a) Mean is 3.7433, variance is 3.501 and standard deviation is 1.871323.


(b) The mean is 74.0625, the median is 74, the variance is 7.223892, and the standard deviation is 2.68773.

(c) The lower limit for outliers is 66 and the upper limit is 82. Therefore we have no outliers.


Chapter 2 Basic Concepts from Probability Theory

    EXERCISES 2.2

2.2.1. (a) S = {(R, R, R), (R, R, L), (R, L, R), (L, R, R), (R, L, L), (L, R, L), (L, L, R), (L, L, L)}
(b) P = 7/8 (c) P = 4/8 (d) P = 3/8 (e) P = 2/8

2.2.3. (a) P = 5/36 (b) P = 5/36 (c) P = 4/8 (d) P = 3/8

2.2.5. A ∪ B = {(H, H), (H, T), (T, H)}.

2.2.7. (a) The sample space is
S = {(1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (1, 6), (2, 1), (2, 2), (2, 3), (2, 4), (2, 5), (2, 6),
(3, 1), (3, 2), (3, 3), (3, 4), (3, 5), (3, 6), (4, 1), (4, 2), (4, 3), (4, 4), (4, 5), (4, 6),
(5, 1), (5, 2), (5, 3), (5, 4), (5, 5), (5, 6), (6, 1), (6, 2), (6, 3), (6, 4), (6, 5), (6, 6)}
(b) P = 6/36 (c) P = 1/36
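The two-dice sample space written out in 2.2.7 can be enumerated in a few lines of Python; a quick verification sketch (the specific events for parts (b) and (c) are my assumption of a common reading: a sum of 7, and one particular ordered outcome):

```python
from itertools import product

# (a) All 36 equally likely ordered outcomes of rolling two dice
S = list(product(range(1, 7), repeat=2))
print(len(S))                          # 36

# An event of probability 6/36, e.g. "the sum is 7" (assumed event)
sum_is_7 = [o for o in S if sum(o) == 7]
print(len(sum_is_7))                   # 6

# A single ordered outcome has probability 1/36 (assumed event)
print(S.count((6, 6)))                 # 1
```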

2.2.9. (a) The sample space is S = {N, N, N, S, S}, where N stands for normal and S stands for spoiled.
(b) P = (3/5)(2/4) = 6/20
(c) "No more than one" means none or exactly one:
P = (3/5)(2/4) + 2·(2/5)(3/4) = .9

2.2.11. P = p + 2q


2.2.13. (a) Since A ⊂ B, write B = A ∪ A′ with A′ = B − A; then P(A) + P(A′) = P(B) by Axiom 3. Since P(A′) ≥ 0 by Axiom 1, we know P(A) + P(A′) ≥ P(A), and therefore P(A) ≤ P(B).

(b) Let C = A ∩ B, A = A′ ∪ C, and B = B′ ∪ C. By Axiom 3 we know P(A ∪ B) = P(A′) + P(B′) + P(C). Since P(A) = P(A′) + P(C) and P(B) = P(B′) + P(C), again by Axiom 3, we know P(A′) = P(A) − P(C) and P(B′) = P(B) − P(C). Plugging these back into the previous equation, we get P(A ∪ B) = P(A) + P(B) − P(C) = P(A) + P(B) − P(A ∩ B). If A ∩ B = ∅, then P(A ∩ B) = 0, and substituting completes the proof.

2.2.15. (a) From 2.2.13 we know P(A ∪ B) = P(A) + P(B) − P(A ∩ B), and from Axiom 2 we know P(A ∪ B) ≤ 1. Hence P(A) + P(B) − P(A ∩ B) ≤ 1, which completes the proof: P(A) + P(B) − 1 ≤ P(A ∩ B).

(b) From 2.2.13 we know P(A₁ ∪ A₂) = P(A₁) + P(A₂) − P(A₁ ∩ A₂). From Axiom 1 we know P(A₁ ∩ A₂) ≥ 0, so −P(A₁ ∩ A₂) ≤ 0 and we get the inequality P(A₁ ∪ A₂) = P(A₁) + P(A₂) − P(A₁ ∩ A₂) ≤ P(A₁) + P(A₂).

2.2.17. (a) P = .24 + .67 − .09 = .82
(b) P = 1 − .82 = .18
(c) P = 1 − .09 = .91
(d) P = 1 − .09 = .91
(e) P = 1 − .82 = .18

2.2.19. (a) P = .55 (b) P = .3 (c) P = .7

2.2.21. (a) P = (3/5)(2/4) + (2/5)(1/4) = .4
(b) P = (3/5)(2/4) + (3/5)(2/4) = .6
(c) P = 2·(3/5)(2/4) + (2/5)(1/4) = .7
(d) P = (3/5)(2/4) = .3

2.2.23. Without loss of generality, let us assume Aₙ is an increasing sequence: A₁ ⊂ A₂ ⊂ ⋯ ⊂ Aₙ ⊂ ⋯. We know that if A₁ ⊂ A₂ ⊂ ⋯ ⊂ Aₙ ⊂ ⋯, then ⋃_{i=1}^{∞} Aᵢ = lim_{n→∞} Aₙ. From the condition we know lim_{n→∞} Aₙ = ⋃_{i=1}^{∞} Aᵢ, and if we take probabilities on both sides, then

lim_{n→∞} P(Aₙ) = P(⋃_{i=1}^{∞} Aᵢ) = P(lim_{n→∞} Aₙ)

    EXERCISES 2.3

2.3.1. (a) 45 (b) 1 (c) 10 (d) 5400 (e) 2520

2.3.3. 1024

2.3.5. 53130

2.3.7. 155117520

2.3.9. 440

2.3.11. (a) p = (.4313)(.44425)(.46798)(.5263)(1) = .04719
(b) p = .0001189
(c) p = (.4313)(.21419)(.10344)(1)(1) = .009557

2.3.13. 180

2.3.15. (a) p = 1 − [365·364⋯(365 − 20 + 1)]/365²⁰ = .4114
(b) p = 1 − .2936 = .7063
(c) If n = 23 then p = .4927
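The birthday computation in 2.3.15 is a one-liner to reproduce; a sketch (note that for n = 23 the probability that all birthdays are distinct is 1 − 0.5073 = 0.4927, which lines up with the .4927 figure in part (c) if that part reports the no-match probability):

```python
def p_shared_birthday(n: int) -> float:
    """P(at least two of n people share a birthday), 365 equally likely days."""
    p_distinct = 1.0
    for k in range(n):
        p_distinct *= (365 - k) / 365
    return 1 - p_distinct

print(round(p_shared_birthday(20), 4))   # 0.4114
print(round(p_shared_birthday(23), 4))   # 0.5073
```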

2.3.17. p = 1 − .27778 − .16667 = .5556

2.3.19. (a) 7776
(b) 3.954 × 10²¹ (c) 5.36447 × 10²⁸ (d) 3.022285 × 10¹²

2.3.21. The question asks about the cell splitting to produce a child: there will be a cell with half of the chromosomes. According to this understanding we have
(a) 2²³
(b) (23 choose 9)/2²³ = .097416

    EXERCISES 2.4

2.4.1. (a) .999
(b) 1/3

2.4.3. (a) P(A|B) + P(Aᶜ|B) = P(A ∩ B)/P(B) + P(Aᶜ ∩ B)/P(B) = [P(B|A)P(A) + P(B|Aᶜ)P(Aᶜ)]/P(B) = P(B)/P(B) = 1

(b) (i) If P(A|B) + P(A|Bᶜ) = 1, then we would know P(A|Bᶜ) = 1 − P(A|B) = P(Aᶜ|B); that means A and B are symmetric in probability. But it is not always true.
(ii) If P(A|B) + P(Aᶜ|Bᶜ) = 1, then we would know P(Aᶜ|Bᶜ) = 1 − P(A|B) = P(Aᶜ|B). That means conditioning on B and on Bᶜ gives the same probability of Aᶜ, which is the same as saying A and B are independent. But that is not always true.


2.4.5. If A and B are independent, then P(A ∩ B) = P(A)·P(B).

(i) P(Aᶜ ∩ B) = P(B) − P(A ∩ B) = P(B) − P(A)P(B) = (1 − P(A))P(B) = P(Aᶜ)P(B), so Aᶜ and B are independent.
(ii) According to (i), just switch A and B and we can prove it.
(iii) P(Aᶜ ∩ Bᶜ) = P(Bᶜ) − P(A ∩ Bᶜ) = P(Bᶜ) − P(A)P(Bᶜ) = (1 − P(A))P(Bᶜ) = P(Aᶜ)P(Bᶜ).

2.4.7. P(E|F) = 1/13 = 4/52 = P(E), so E and F are independent.

2.4.9. .1948

2.4.11. (a) P = .031125 (b) P = .06

2.4.13. .8

2.4.15. (a) P(a dime is selected) = Σ_{i=2}^{12} P(a dime is selected | box i is selected)·P(box i is selected)
= Σ_{i=2}^{12} (i/12)·P(the sum of the dice = i)
= [2(1) + 3(2) + 4(3) + 5(4) + 6(5) + 7(6) + 8(5) + 9(4) + 10(3) + 11(2) + 12(1)]/(12·36) = .583333

(b) P(box 4 is selected | a penny is selected)
= P(box 4 is selected and a penny is selected)/P(a penny is selected)
= (3/36)(8/12)/(1 − P(a dime is selected)) = .13333

2.4.17. P = .6575

2.4.19. P = .60976

2.4.21. (a) P(accident) = .25(.086) + .257(.044) + .347(.056) + .146(.098) = .066548
(b) P(group 4 | accident) = .146(.098)/.0665 = .215

2.4.23. P = .16667

2.4.25. P(working) = P(B, C) + P(A, B, not C) + P(A, not B, C)
= [1 − P(not B) − P(not C) + P(not B, not C)] + P(A)P(B | not C)P(not C) + P(A)[P(not B) − P(not B, not C)]
= [1 − 0.1 − 0.05 + 0.75(0.05)] + 0.85(0.25)(0.05) + 0.85[0.1 − 0.75(0.05)]
= 0.8875 + 0.010625 + 0.053125 = 0.95125

2.4.27. (a) P(same type of blood) = (18/40)(17/39) + (16/40)(15/39) + (4/40)(3/39) + (2/40)(1/39) = .358974
(b) P(type is O | type is B) = 18/39 = .4615
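Total-probability sums like the one in 2.4.15(a) can be verified exactly with fractions; a sketch reproducing the .583333 figure (dice-sum distribution as in that solution):

```python
from fractions import Fraction
from itertools import product

# P(sum of two dice = i) for i = 2..12, by enumeration
counts = {}
for a, b in product(range(1, 7), repeat=2):
    counts[a + b] = counts.get(a + b, 0) + 1

# P(dime) = sum over boxes i of (i/12) * P(sum = i)
p_dime = sum(Fraction(i, 12) * Fraction(counts[i], 36) for i in range(2, 13))
print(p_dime, float(p_dime))   # 7/12, approximately 0.583333
```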

2.4.29. Let E denote the event that A ends up with all the money when he starts with i, and let F denote the event that A ends up with all the money when he starts with N − i. A starting with N − i means B starts with i, because N is the total money A and B have; so if we find P(E), then P(F) = 1 − P(E).

Let H denote the event that the first flip lands heads, and let p denote the probability of heads on the first flip. Then

P(E) = P(E|H)P(H) + P(E|Hᶜ)P(Hᶜ)

Now we let P(E) = P(E|H)p + P(E|Hᶜ)(1 − p) = Pᵢ and define this as the first round. Given that the first flip lands heads, the situation after the first bet is that A has i + 1 units and B has N − (i + 1) units. Since the successive flips are assumed independent with a common probability p of heads, it follows that, from that point on, A's probability of winning all the money is exactly the same as if the game were just starting with A having an initial fortune of i + 1 and B having an initial fortune of N − (i + 1). Therefore P(E|H) = P_{i+1} and, by the same reasoning after a first tail, P(E|Hᶜ) = P_{i−1}. Letting q = 1 − p,

Pᵢ = pP_{i+1} + qP_{i−1},  i = 1, 2, …, N − 1

with the conditions P₀ = 0 and P_N = 1. Since Pᵢ = (p + q)Pᵢ = pP_{i+1} + qP_{i−1},

P_{i+1} − Pᵢ = (q/p)(Pᵢ − P_{i−1}),  i = 1, 2, …, N − 1

Plugging in i, we get

P₂ − P₁ = (q/p)(P₁ − P₀) = (q/p)P₁
P₃ − P₂ = (q/p)(P₂ − P₁) = (q/p)²P₁
⋮
P_N − P_{N−1} = (q/p)(P_{N−1} − P_{N−2}) = (q/p)^{N−1}P₁

If q/p = 1, then P₂ − P₁ = P₁, so P₂ = 2P₁, P₃ = 3P₁, …, P_N = NP₁. Since P_N = 1, this means P₁ = 1/N, and therefore Pᵢ = i/N.

If q/p ≠ 1, adding all the equations gives

P_N − P₁ = P₁[(q/p) + (q/p)² + ⋯ + (q/p)^{N−1}]

so 1 = P₁·(1 − (q/p)^N)/(1 − q/p), which gives

P₁ = (1 − q/p)/(1 − (q/p)^N)

Adding the first i − 1 equations gives

Pᵢ − P₁ = P₁[(q/p) + (q/p)² + ⋯ + (q/p)^{i−1}]

so Pᵢ = P₁·(1 − (q/p)^i)/(1 − q/p) = (1 − (q/p)^i)/(1 − (q/p)^N).

Then, if we start with N − i, just replace p by q and i by N − i:

Qᵢ = (1 − (p/q)^{N−i})/(1 − (p/q)^N)  if p/q ≠ 1
Qᵢ = (N − i)/N  if p/q = 1

Pᵢ + Qᵢ = 1
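The closed form for Pᵢ in 2.4.29 can be checked against the recurrence and the boundary conditions; a sketch with illustrative values p = 0.6 and N = 10 (these numbers are my choice, not from the text):

```python
def ruin_win_prob(i: int, N: int, p: float) -> float:
    """P(A wins all N units starting from i); fair case handled separately."""
    q = 1 - p
    if abs(q / p - 1) < 1e-12:
        return i / N
    r = q / p
    return (1 - r**i) / (1 - r**N)

p, N = 0.6, 10
P = [ruin_win_prob(i, N, p) for i in range(N + 1)]

# Boundary conditions and the recurrence P_i = p*P_{i+1} + q*P_{i-1}
assert P[0] == 0 and abs(P[N] - 1) < 1e-12
for i in range(1, N):
    assert abs(P[i] - (p * P[i + 1] + (1 - p) * P[i - 1])) < 1e-9

# P_i + Q_i = 1: B's win probability from N - i uses q in place of p
q = 1 - p
Q = [ruin_win_prob(N - i, N, q) for i in range(N + 1)]
print(all(abs(P[i] + Q[i] - 1) < 1e-9 for i in range(N + 1)))  # True
```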

    EXERCISES 2.5

2.5.1. (a) c = e^{−λ} (b) P = e^{−λ} (c) P = 1 − e^{−λ} − λe^{−λ} − (λ²/2)e^{−λ}

2.5.3. F(x) = 0 for x < −5; .2 for −5 ≤ x < 0; .3 for 0 ≤ x < 3; .7 for 3 ≤ x < 6; 1 for x ≥ 6

2.5.5. p(x) = 0 for x < 1; .2 for 1 ≤ x < 3; .6 for 3 ≤ x < 9; .2 for x ≥ 9

2.5.7. (a) c = 1/9 (b) P = .7037
(c) F(x) = 0 for x ≤ 0; (1/27)x³ for 0 ≤ x ≤ 3; 1 for x ≥ 3

2.5.9. f(x) = 0 for x ≤ 0; 2x/(1 + x²)² for x > 0

2.5.11. p = .7013 − .55809 = .1432
p = .7364

2.5.13. f(t) = 0 for t ≤ 0; αλ(λt)^{α−1}e^{−(λt)^α} for t > 0

    EXERCISES 2.6

2.6.1. m(t) = (1/6)(e^t + e^{2t} + e^{3t} + e^{4t} + e^{5t} + e^{6t}), Var(X) = 2.9167
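The variance in 2.6.1 can also be obtained directly from E(X²) − (E(X))² = 91/6 − 49/4 = 35/12 ≈ 2.9167; an exact check:

```python
from fractions import Fraction

# A fair die: faces 1..6, each with probability 1/6
faces = range(1, 7)
EX = sum(Fraction(x, 6) for x in faces)        # 7/2
EX2 = sum(Fraction(x * x, 6) for x in faces)   # 91/6
var = EX2 - EX**2
print(var, float(var))                         # 35/12, approximately 2.9167
```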

2.6.3. (a) E(Y) = 3.6, E(Y²) = 17.2, E(Y³) = 95.3, Var(Y) = 4.24
(b) M_Y(t) = e^t(.1) + .05 + e^{2t}(.25) + e^{5t}(.4) + e^{6t}(.2)

2.6.5. E(X) = Σₓ x·P(X = x) = Σ_{n=1}^{∞} 2ⁿ·P(X = 2ⁿ) = Σ_{n=1}^{∞} 2ⁿ·(1/2ⁿ) = Σ_{n=1}^{∞} 1 = ∞

2.6.7. a = 1/2, b = 1

2.6.9. (a) E(c) = Σₓ c·f(x) = c

(b) E(c·g(X)) = Σₓ c·g(x)f(x) = c Σₓ g(x)f(x) = c·E(g(X))

(c) E(Σᵢ gᵢ(X)) = Σₓ Σᵢ gᵢ(x)f(x) = Σᵢ Σₓ gᵢ(x)f(x) = Σᵢ E(gᵢ(X))

(d) V(aX + b) = E((aX + b − E(aX + b))²) = E((aX + b − aE(X) − b)²) = E(a²(X − E(X))²) = a²V(X)

Plugging in b = 0 gives the other identity.

    Plug inb = 0 get another one.2.6.11. E(X) = c 1 + 0 = c

    V(X) = Ex2 (E(x))2 = c2 c2 = 0CDF isF (X) = (x c)where is indicator function

    2.6.13. Mx(t) = e(et1)E(X) = Mx(0) = ete(e

    t1)|t=0= E(X2) = Mx (0) = ete(e

    t1) + (et)2e(et1)|t=0= + 2V(X) = E(X2) (E(x))2 = + 2 2 =
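The Poisson moments in 2.6.13 can be spot-checked by differentiating M_X(t) = e^{λ(e^t − 1)} numerically with central differences (λ = 3 is an arbitrary illustrative choice):

```python
import math

lam = 3.0
M = lambda t: math.exp(lam * (math.exp(t) - 1))  # Poisson MGF

h = 1e-5
M1 = (M(h) - M(-h)) / (2 * h)              # approximates M'(0) = lam
M2 = (M(h) - 2 * M(0) + M(-h)) / h**2      # approximates M''(0) = lam + lam^2

print(round(M1, 3))            # close to 3.0
print(round(M2, 3))            # close to 12.0
print(round(M2 - M1**2, 3))    # variance, close to lam = 3.0
```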

2.6.15. (a) Let q = 1 − p, where x runs from 0 to infinity and counts the number of failures before the 1st success; the total number of trials is therefore x + 1.

E(X) = Σ_{x=0}^{∞} x·p(1 − p)ˣ = pq Σ_{x=0}^{∞} x q^{x−1} = pq Σ_{x=0}^{∞} (x + 1 choose x) qˣ = pq/p² = q/p, by the negative binomial series.

Another way to prove it is

E(X) = Σ_{x=0}^{∞} x·p(1 − p)ˣ = pq Σ_{x=0}^{∞} d(qˣ)/dq = pq · d[Σ_{x=0}^{∞} qˣ]/dq = pq · d[1/(1 − q)]/dq = pq·1/(1 − q)² = pq/p² = q/p

E(X²) = Σ_{x=0}^{∞} x²·p(1 − p)ˣ = pq Σ_{x=0}^{∞} x²q^{x−1} = pq Σ_{x=0}^{∞} d(xqˣ)/dq = pq · d[Σ_{x=0}^{∞} xqˣ]/dq = pq · d[q/(1 − q)²]/dq
= pq(1 − 2q + q² + 2q − 2q²)/(1 − q)⁴ = pq(1 − q²)/p⁴ = (pq − pq³)/p⁴

V(X) = E(X²) − (E(X))² = (pq − pq³)/p⁴ − q²/p² = (pq − pq³ − p²q²)/p⁴ = (pq − pq²(q + p))/p⁴ = pq(1 − q)/p⁴ = p²q/p⁴ = q/p²

(b) M_X(t) = Σ_{x=0}^{∞} e^{xt}·p(1 − p)ˣ = p Σ_{x=0}^{∞} (e^t q)ˣ = p/(1 − e^t q) = p/(1 − (1 − p)e^t)

valid when e^t q < 1; taking logarithms on both sides gives ln(e^t) < ln(1/(1 − p)), that is, t < −ln(1 − p).

2.6.17. E(X) = ∫₀¹ x·(x²/2) dx + ∫₁² x·((6x − 2x² − 3)/2) dx + ∫₂³ x·((x − 3)²/2) dx = 1/8 + 1 + 3/8 = 1.5

2.6.19. M_X(t) = ∫_{−∞}⁰ e^{xt}·(1/2)eˣ dx + ∫₀^{∞} e^{xt}·(1/2)e^{−x} dx = −1/((t + 1)(t − 1)) = 1/(1 − t²), for |t| < 1

2.6.21. M_Y(t) = ∫₀^{∞} e^{ty}·λe^{−λy} dy = λ ∫₀^{∞} e^{−(λ−t)y} dy = λ/(λ − t), for t < λ and λ > 0.
Since M_Y(t) = M_X(t) = λ/(λ − t) and the MGF uniquely determines the distribution, X has the same distribution as Y, and

g(x) = λe^{−λx} for λ > 0 and 0 ≤ x < ∞; 0 otherwise


Chapter 3 Additional Topics in Probability

    EXERCISES 3.2

3.2.1. (a) P(X = 7) = (10 choose 7)(0.5)⁷(0.5)^{10−7} = 120(0.5)⁷(0.5)³ = 0.117

(b) P(X ≤ 7) = 1 − P(8) − P(9) − P(10) = 1 − 0.044 − 0.010 − 0.001 = 0.945

(c) P(X > 0) = 1 − P(0) = 1 − 0.001 = 0.999

(d) E(X) = 10(0.5) = 5, Var(X) = 10(0.5)(1 − 0.5) = 2.5
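The binomial answers in 3.2.1 (n = 10, p = 0.5) can be reproduced with Python's math.comb; a quick check:

```python
from math import comb

n, p = 10, 0.5

def pmf(x: int) -> float:
    """Binomial(n, p) probability mass at x."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

print(round(pmf(7), 3))                         # 0.117
print(round(1 - pmf(8) - pmf(9) - pmf(10), 3))  # 0.945
print(round(1 - pmf(0), 3))                     # 0.999
print(n * p, n * p * (1 - p))                   # 5.0 2.5
```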

3.2.3. (a) P(Z > 1.645) = 0.05, so z₀ = 1.645. (b) P(Z ≤ 1.645) = 0.95, so z₀ = 1.645.

3.2.5. (a) P(X ≤ 20) = P(Z ≤ (20 − 10)/5) = P(Z ≤ 2) = 0.9772

(b) P(X > 5) = P(Z > (5 − 10)/5) = P(Z > −1) = 0.8413

(c) P(12 ≤ X ≤ 15) = P((12 − 10)/5 ≤ Z ≤ (15 − 10)/5) = P(0.4 ≤ Z ≤ 1) = P(Z ≤ 1) − P(Z ≤ 0.4) = 0.8413 − 0.6554 = 0.1859
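The normal probabilities in 3.2.5 follow from the standard normal CDF, which Python can evaluate through the error function, Φ(z) = (1 + erf(z/√2))/2; a verification sketch:

```python
from math import erf, sqrt

def Phi(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

mu, sigma = 10, 5
z = lambda x: (x - mu) / sigma

print(round(Phi(z(20)), 4))                # 0.9772
print(round(1 - Phi(z(5)), 4))             # 0.8413
print(round(Phi(z(15)) - Phi(z(12)), 4))   # 0.1859
```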

(d) P(|X − 12| ≤ 15) = P(−15 ≤ X − 12 ≤ 15) = P(−3 ≤ X ≤ 27)
= P((−3 − 10)/5 ≤ Z ≤ (27 − 10)/5) = P(−2.6 ≤ Z ≤ 3.4) = P(Z ≤ 3.4) − P(Z ≤ −2.6) = 0.995

3.2.11. p(x) = e^{−λ}λˣ/x!, with λ > 0 and x ≥ 0.

Since each p(x) ≥ 0, then

Σₓ p(x) = Σₓ e^{−λ}λˣ/x! = e^{−λ} Σₓ λˣ/x! = e^{−λ}(1 + λ/1! + λ²/2! + ⋯) = e^{−λ}(e^{λ}) = 1, where we apply the Taylor expansion of e^{λ}.

This shows that p(x) ≥ 0 and Σₓ p(x) = 1.

3.2.13. The probability density function is given by

f(x) = 1/10 for 0 ≤ x ≤ 10; 0 otherwise

Hence,

P(5 ≤ X ≤ 9) = ∫₅⁹ (1/10) dx = 0.4.

Hence, there is a 40% chance that a piece chosen at random will be suitable for kitchen use.

3.2.15. The probability density function is given by

f(x) = 1/100 for 0 ≤ x ≤ 100; 0 otherwise

(a) P(60 ≤ X ≤ 80) = ∫₆₀⁸⁰ (1/100) dx = 0.2.
(b) P(X > 90) = ∫₉₀¹⁰⁰ (1/100) dx = 0.1.

(c) There is a 20% chance that the efficiency is between 60 and 80 units; there is a 10% chance that the efficiency is greater than 90 units.

3.2.17. Let X = the failure time of the component; X follows an exponential distribution with rate 0.05, so the p.d.f. of X is given by

f(x) = 0.05e^{−0.05x}, x > 0.

Hence,

R(10) = 1 − F(10) = 1 − ∫₀¹⁰ 0.05e^{−0.05x} dx = 1 − (1 − e^{−0.5}) = e^{−0.5} = 0.607.

3.2.19. The uniform probability density function is given by

f(x) = 1, 0 ≤ x ≤ 1.

Hence,

P(0.5 ≤ X ≤ 0.65 | X ≤ 0.75) = P(0.5 ≤ X ≤ 0.65 and X ≤ 0.75)/P(X ≤ 0.75)
= P(0.5 ≤ X ≤ 0.65)/P(X ≤ 0.75) = (∫_{0.5}^{0.65} 1 dx)/(∫₀^{0.75} 1 dx) = 0.15/0.75 = 0.2
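The reliability computation in 3.2.17 reduces to R(t) = e^{−0.05t}; a one-line check:

```python
import math

rate = 0.05

def reliability(t: float) -> float:
    """R(t) = 1 - F(t) = exp(-rate * t) for an exponential failure time."""
    return math.exp(-rate * t)

print(round(reliability(10), 3))   # 0.607
```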

3.2.21. First, find z₀ such that P(Z > z₀) = 0.15. P(Z > 1.036) = 0.15, so z₀ = 1.036.
x₀ = 72 + 1.036(6) = 78.22

The minimum score that a student needs to get an A grade is 78.22.

3.2.23. P(1.9 ≤ X ≤ 2.02) = P((1.9 − 1.96)/0.04 ≤ Z ≤ (2.02 − 1.96)/0.04) = P(−1.5 ≤ Z ≤ 1.5) = 0.866

P(X < 1.9 or X > 2.02) = 1 − P(1.9 ≤ X ≤ 2.02) = 0.134

13.4% of the balls manufactured by the company are defective.

3.2.25. (a) P(X > 125) = P(Z > (125 − 115)/10) = P(Z > 1) = 0.16
(b) P(X …

3.2.27. We need z-values corresponding to probabilities 0.2, 0.5, and 0.8. Using the standard normal table, we can find that z₁ = −0.842, z₂ = 0, and z₃ = 0.842. Then
y₁ = 0 + (−0.842)(0.65) = −0.5473, so x₁ = exp(y₁) = 0.58; similarly we can obtain x₂ = 1 and x₃ = 1.73.

For the probability of surviving 0.2, 0.5, and 0.8 the experimenter should choose doses 0.58, 1, and 1.73, respectively.

3.2.29. (a) M_X(t) = E(e^{tX}) = ∫₀^{∞} e^{tx}·(1/(Γ(α)β^α))·x^{α−1}e^{−x/β} dx

= (1/(Γ(α)β^α)) ∫₀^{∞} x^{α−1} exp(−((1 − βt)/β)x) dx

= (1/(Γ(α)β^α)) ∫₀^{∞} (βu/(1 − βt))^{α−1} e^{−u}·(β/(1 − βt)) du, by letting u = ((1 − βt)/β)x with 1 − βt > 0

= (1/Γ(α))·(1 − βt)^{−α} ∫₀^{∞} u^{α−1}e^{−u} du

note that the integrand is the kernel density of Γ(α, 1)

= (1/Γ(α))·(1 − βt)^{−α}·Γ(α) = (1 − βt)^{−α}, when t < 1/β.

(b) E(X) = M′_X(0) = αβ(1 − βt)^{−(α+1)}|_{t=0} = αβ, and

E(X²) = M″_X(0) = d/dt[αβ(1 − βt)^{−(α+1)}]|_{t=0} = α(α + 1)β²(1 − βt)^{−(α+2)}|_{t=0} = α(α + 1)β²

Then Var(X) = E(X²) − (E(X))² = α(α + 1)β² − (αβ)² = αβ².

3.2.31. (a) First consider the following product

Γ(α)Γ(β) = [∫₀^{∞} u^{α−1}e^{−u} du][∫₀^{∞} v^{β−1}e^{−v} dv]

= [∫₀^{∞} x^{2(α−1)}e^{−x²}·2x dx][∫₀^{∞} y^{2(β−1)}e^{−y²}·2y dy], by letting u = x² and v = y²

= [2∫₀^{∞} |x|^{2α−1}e^{−x²} dx][2∫₀^{∞} |y|^{2β−1}e^{−y²} dy], noting that the integrands are even functions

= [∫_{−∞}^{∞} |x|^{2α−1}e^{−x²} dx][∫_{−∞}^{∞} |y|^{2β−1}e^{−y²} dy]

= ∬ |x|^{2α−1}|y|^{2β−1}e^{−(x²+y²)} dx dy

Transforming to polar coordinates with x = r cos θ and y = r sin θ:

Γ(α)Γ(β) = ∫₀^{2π} ∫₀^{∞} |r cos θ|^{2α−1}|r sin θ|^{2β−1}e^{−r²}·r dr dθ

= [∫₀^{∞} r^{2α+2β−2}e^{−r²}·r dr][∫₀^{2π} |cos θ|^{2α−1}|sin θ|^{2β−1} dθ]

= [(1/2)∫₀^{∞} s^{α+β−1}e^{−s} ds][4∫₀^{π/2} (cos θ)^{2α−1}(sin θ)^{2β−1} dθ], by letting s = r²

= Γ(α + β)·2∫₀^{π/2} (cos θ)^{2α−1}(sin θ)^{2β−1} dθ

= Γ(α + β)·2∫₀¹ t^{α−1/2}(1 − t)^{β−1/2}·(1/(2√(t(1 − t)))) dt, by letting t = cos²θ

= Γ(α + β) ∫₀¹ t^{α−1}(1 − t)^{β−1} dt

= Γ(α + β)·B(α, β)

Hence, we have shown that

B(α, β) = Γ(α)Γ(β)/Γ(α + β).

(b) E(X) = (1/B(α, β)) ∫₀¹ x·x^{α−1}(1 − x)^{β−1} dx = (B(α + 1, β)/B(α, β)) ∫₀¹ x^{(α+1)−1}(1 − x)^{β−1}/B(α + 1, β) dx

= (B(α + 1, β)/B(α, β))·1 = [Γ(α + 1)Γ(β)/Γ(α + β + 1)]·[Γ(α + β)/(Γ(α)Γ(β))]

= αΓ(α)Γ(β)Γ(α + β)/[(α + β)Γ(α + β)Γ(α)Γ(β)] = α/(α + β), and

E(X²) = (1/B(α, β)) ∫₀¹ x²·x^{α−1}(1 − x)^{β−1} dx = (B(α + 2, β)/B(α, β)) ∫₀¹ x^{(α+2)−1}(1 − x)^{β−1}/B(α + 2, β) dx

= (B(α + 2, β)/B(α, β))·1 = [Γ(α + 2)Γ(β)/Γ(α + β + 2)]·[Γ(α + β)/(Γ(α)Γ(β))]

= (α + 1)αΓ(α)Γ(β)Γ(α + β)/[(α + β + 1)(α + β)Γ(α + β)Γ(α)Γ(β)]

= α(α + 1)/[(α + β)(α + β + 1)].

Then Var(X) = E(X²) − (E(X))² = α(α + 1)/[(α + β)(α + β + 1)] − (α/(α + β))² = αβ/[(α + β)²(α + β + 1)].

3.2.33. In this case, the number of breakdowns per month can be assumed to have a Poisson distribution with mean 3.

(a) P(X = 1) = e^{−3}·3¹/1! = 0.1494. There is a 14.94% chance that there will be just one network breakdown during December.

(b) P(X ≥ 4) = 1 − P(0) − P(1) − P(2) − P(3) = 0.3528. There is a 35.28% chance that there will be at least 4 network breakdowns during December.

(c) P(X ≤ 7) = Σ_{x=0}^{7} e^{−3}·3ˣ/x! = 0.9881. There is a 98.81% chance that there will be at most 7 network breakdowns during December.
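The Poisson answers in 3.2.33 are quick to verify numerically:

```python
from math import exp, factorial

lam = 3

def pois(x: int) -> float:
    """Poisson(lam) probability mass at x."""
    return exp(-lam) * lam**x / factorial(x)

print(round(pois(1), 4))                              # 0.1494
print(round(1 - sum(pois(x) for x in range(4)), 4))   # 0.3528
print(round(sum(pois(x) for x in range(8)), 4))       # 0.9881
```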

3.2.35. (a) P(1 < X …) = 0.6442. The probability that an acid solution made by this procedure will satisfactorily etch a tray is 0.6442.

(b) P(1 < X …

EXERCISES 3.3

Thus, if c = 1/4, then ∫₋₁¹ ∫₋₁¹ f(x, y) dx dy = 1. And we also see that f(x, y) ≥ 0 for all x and y. Hence, f(x, y) is a joint probability density function.

3.3.5. By definition, the marginal pdf of X is given by the row sums, and the marginal pdf of Y is obtained by the column sums. Hence,

x_i        1    3    5    otherwise
f_X(x_i)   0.6  0.3  0.1  0

y_i        −2   0    1    4    otherwise
f_Y(y_i)   0.4  0.3  0.1  0.2  0

3.3.7. From Exercise 3.3.5 we can calculate the following.

P(X = 1 | Y = 0) = P(X = 1, Y = 0)/f_Y(0) = 0.1/0.3 = 0.33.

3.3.9. (a) The marginal of X is

f_X(x) = ∫ₓ² f(x, y) dy = ∫ₓ² (8/9)xy dy = (4/9)(4x − x³), 1 ≤ x ≤ 2.

(b) P(1.5 < X ≤ 1.75) = ∫_{1.5}^{1.75} [∫ₓ² (8/9)xy dy] dx = ∫_{1.5}^{1.75} (4/9)(4x − x³) dx

= (4/9)[2x² − x⁴/4]_{1.5}^{1.75} = 0.2426.

3.3.11. Using the joint density in Exercise 3.3.9 we can obtain the joint mgf of (X, Y) as

M_{(X,Y)}(t₁, t₂) = E(e^{t₁X + t₂Y}) = ∫₁² ∫ₓ² e^{t₁x + t₂y}·(8/9)xy dy dx

= ∫₁² (8/9)x e^{t₁x} [∫ₓ² e^{t₂y}y dy] dx = ∫₁² (8/9)x e^{t₁x} [K − (x/t₂)e^{t₂x} + (1/t₂²)e^{t₂x}] dx

where K = (e^{2t₂}/t₂²)(2t₂ − 1)

= (8/9)K ∫₁² x e^{t₁x} dx − (8/(9t₂)) ∫₁² x²e^{(t₁+t₂)x} dx + (8/(9t₂²)) ∫₁² x e^{(t₁+t₂)x} dx

= (8/9)K [(x/t₁)e^{t₁x} − (1/t₁²)e^{t₁x}]₁²

− (8/(9t₂)) [(x²/(t₁ + t₂))e^{(t₁+t₂)x} − (2x/(t₁ + t₂)²)e^{(t₁+t₂)x} + (2/(t₁ + t₂)³)e^{(t₁+t₂)x}]₁²

+ (8/(9t₂²)) [(x/(t₁ + t₂))e^{(t₁+t₂)x} − (1/(t₁ + t₂)²)e^{(t₁+t₂)x}]₁²

After simplification we then have

M_{(X,Y)}(t₁, t₂) = [(t₁ + 3t₂ − t₁² − 3t₂² − 4t₁t₂ + t₁²t₂ + 2t₁t₂² + t₂³)/(t₂²(t₁ + t₂)³)]e^{t₁+t₂} + [((2t₂ − 1)(1 − t₁))/(t₁²t₂²)]e^{t₁+2t₂}

+ [(−t₁ − 3t₂ + 2t₁² + 6t₂² + 8t₁t₂ − 4t₁²t₂ + 8t₁t₂² − 4t₂³)/(t₂²(t₁ + t₂)³) + ((2t₂ − 1)(2t₁ − 1))/(t₁²t₂²)]e^{2t₁+2t₂}

3.3.13. (a) f_X(x) = Σ_{y=1}^{n} f(x, y) = Σ_{y=1}^{n} [6xy/(n(n + 1)(2n + 1))]² = (36x²/[n(n + 1)(2n + 1)]²) Σ_{y=1}^{n} y²

= 6x²/(n(n + 1)(2n + 1)), x = 1, 2, …, n.

f_Y(y) = Σ_{x=1}^{n} f(x, y) = (36y²/[n(n + 1)(2n + 1)]²) Σ_{x=1}^{n} x²

= 6y²/(n(n + 1)(2n + 1)), y = 1, 2, …, n.

Given y = 1, 2, …, n, we have

f(x|y) = f(x, y)/f_Y(y) = [6xy/(n(n + 1)(2n + 1))]² / [6y²/(n(n + 1)(2n + 1))]

= 6x²/(n(n + 1)(2n + 1)), x = 1, 2, …, n.

(b) Given x = 1, 2, …, n, we have

f(y|x) = f(x, y)/f_X(x) = [6xy/(n(n + 1)(2n + 1))]² / [6x²/(n(n + 1)(2n + 1))]

= 6y²/(n(n + 1)(2n + 1)), y = 1, 2, …, n.

3.3.15. (a) E(XY) = Σ_{x,y} xy f(x, y) = Σ_{x=1}^3 Σ_{y=1}^3 xy f(x, y) = 35/12.
(b) E(X) = Σ_{x,y} x f(x, y) = Σ_{x=1}^3 Σ_{y=1}^3 x f(x, y) = 5/3, and
E(Y) = Σ_{x,y} y f(x, y) = Σ_{x=1}^3 Σ_{y=1}^3 y f(x, y) = 11/6.


CHAPTER 3 Additional Topics in Probability

Then, Cov(X, Y) = E(XY) − E(X)E(Y) = 35/12 − (5/3)(11/6) = −5/36.
(c) Var(X) = Σ_{x,y} [x − E(X)]² f(x, y) = Σ_{x=1}^3 Σ_{y=1}^3 (x − 5/3)² f(x, y) = 5/9, and
Var(Y) = Σ_{x,y} [y − E(Y)]² f(x, y) = Σ_{x=1}^3 Σ_{y=1}^3 (y − 11/6)² f(x, y) = 23/36.
Then, ρ_XY = Cov(X, Y)/√(Var(X)Var(Y)) = (−5/36)/√((5/9)(23/36)) = −0.233.

3.3.17. Assume that a and c are nonzero.
Cov(U, V) = Cov(aX + b, cY + d) = ac Cov(X, Y),
Var(U) = Var(aX + b) = a²Var(X), and Var(V) = Var(cY + d) = c²Var(Y).
Then,
ρ_UV = Cov(U, V)/√(Var(U)Var(V)) = ac Cov(X, Y)/√(a²Var(X) c²Var(Y)) = (ac/√((ac)²)) ρ_XY = (ac/|ac|) ρ_XY
= ρ_XY if ac > 0, and −ρ_XY otherwise.
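The sign behavior above is easy to see numerically; a sketch assuming numpy, with arbitrary illustrative constants a = 3, b = 2, c = −5, d = 7 (so ac < 0):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=10_000)
y = 0.6 * x + rng.normal(size=10_000)             # a positively correlated pair

r_xy = np.corrcoef(x, y)[0, 1]
r_uv = np.corrcoef(3 * x + 2, -5 * y + 7)[0, 1]   # ac = 3(-5) < 0 flips the sign
print(round(r_xy, 3), round(r_uv, 3))
```

The sample correlation is exactly invariant under affine maps up to this sign flip, so r_uv = −r_xy to machine precision.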

3.3.19. We first state the famous Cauchy–Schwarz inequality:
|E(XY)| ≤ √(E(X²)E(Y²)),
and the equality holds if and only if there exist constants α and β, not both zero, such that P(αX = βY) = 1. Now, consider
|ρ_XY| = 1 ⟺ |Cov(X, Y)|/√(Var(X)Var(Y)) = 1 ⟺ |Cov(X, Y)| = √(Var(X)Var(Y))
⟺ |E[(X − μ_X)(Y − μ_Y)]| = √(E(X − μ_X)² E(Y − μ_Y)²).
By the equality case of the Cauchy–Schwarz inequality, this holds if and only if
P(X − μ_X = K(Y − μ_Y)) = 1 for some constant K ⟺ P(X = aY + b) = 1 for some constants a and b.

3.3.21. (a) First, we compute the marginal densities:
f_X(x) = ∫_x^∞ f(x, y) dy = ∫_x^∞ e^{−y} dy = e^{−x}, x ≥ 0, and
f_Y(y) = ∫_0^y f(x, y) dx = ∫_0^y e^{−y} dx = y e^{−y}, y ≥ 0.
For given y ≥ 0, we have the conditional density
f(x | Y = y) = f(x, y)/f_Y(y) = e^{−y}/(y e^{−y}) = 1/y, 0 ≤ x ≤ y.
Then (X | Y = y) follows Uniform(0, y). Thus, E(X | Y = y) = y/2.
(b) E(XY) = ∫_y ∫_x xy f(x, y) dx dy = ∫_0^∞ [∫_0^y xy e^{−y} dx] dy = ∫_0^∞ (1/2)y³ e^{−y} dy = 3,
E(X) = ∫_x x f_X(x) dx = ∫_0^∞ x e^{−x} dx = 1, and
E(Y) = ∫_y y f_Y(y) dy = ∫_0^∞ y² e^{−y} dy = 2.
Then, Cov(X, Y) = E(XY) − E(X)E(Y) = 3 − (1)(2) = 1.
(c) To check for independence of X and Y:
f_X(1)f_Y(1) = e^{−2} ≠ e^{−1} = f(1, 1).
Hence, X and Y are not independent.
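The moments in part (b) can be verified numerically; a sketch assuming scipy:

```python
import numpy as np
from scipy.integrate import dblquad

# f(x, y) = exp(-y) on 0 <= x <= y; outer variable y in (0, inf), inner x in (0, y).
# dblquad calls the integrand as func(inner, outer).
moment = lambda g: dblquad(lambda x, y: g(x, y) * np.exp(-y),
                           0, np.inf, lambda y: 0, lambda y: y)[0]

E_XY = moment(lambda x, y: x * y)
E_X = moment(lambda x, y: x)
E_Y = moment(lambda x, y: y)
cov = E_XY - E_X * E_Y
print(round(E_XY, 3), round(E_X, 3), round(E_Y, 3), round(cov, 3))  # 3.0 1.0 2.0 1.0
```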

3.3.23. Let σ² = Var(X) = Var(Y). Since X and Y are independent, we have Cov(X, Y) = E(XY) − E(X)E(Y) = E(X)E(Y) − E(X)E(Y) = 0. Then Cov(X, aX + Y) = a Cov(X, X) + Cov(X, Y) = a Var(X) = aσ², and Var(aX + Y) = a²Var(X) + Var(Y) = (a² + 1)σ². Thus,
ρ_{X, aX+Y} = Cov(X, aX + Y)/√(Var(X)Var(aX + Y)) = aσ²/√(σ²(a² + 1)σ²) = a/√(a² + 1).

    EXERCISES 3.4

3.4.1. The pdf of X is f_X(x) = 1/a if 0 < x < a and zero otherwise.
F_Y(y) = P(Y < y) = P(cX + d < y) = P(X < (y − d)/c), for c > 0.

3.4.7. Let U = X + Y and V = Y. Then X = U − V and Y = V, and

J = | ∂x/∂u  ∂x/∂v |   | 1  −1 |
    | ∂y/∂u  ∂y/∂v | = | 0   1 | = 1.

Then the joint pdf of U and V is given by
f_{U,V}(u, v) = f_{X,Y}(u − v, v)|J| = f_{X,Y}(u − v, v).
Thus, the pdf of U is given by f_U(u) = ∫ f_{X,Y}(u − v, v) dv.


3.4.9. (a) Here let g(x) = (x − μ)/σ, and hence g⁻¹(z) = σz + μ. Thus (d/dz)g⁻¹(z) = σ. Also,
f_X(x) = (1/(√(2π)σ)) e^{−(1/2)((x−μ)/σ)²}, −∞ < x < ∞.
Therefore, the pdf of Z is
f_Z(z) = f_X(g⁻¹(z)) |(d/dz)g⁻¹(z)| = (1/√(2π)) e^{−z²/2}, −∞ < z < ∞,
which is the pdf of N(0, 1).
(b) The cdf of U is given by
F_U(u) = P(U ≤ u) = P((X − μ)²/σ² ≤ u) = P(−√u ≤ (X − μ)/σ ≤ √u) = P(−√u ≤ Z ≤ √u)
= ∫_{−√u}^{√u} (1/√(2π)) e^{−z²/2} dz = 2 ∫_0^{√u} (1/√(2π)) e^{−z²/2} dz, since the integrand is an even function.
Hence, the pdf of U is
f_U(u) = (d/du)F_U(u) = (2/√(2π)) e^{−u/2} (1/(2√u)) = (1/√(2π)) u^{−1/2} e^{−u/2}, u > 0, and
zero otherwise, which is the pdf of χ²(1).
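The identity behind part (b), that U = Z² has the χ²(1) distribution, can be confirmed against scipy's chi-square cdf:

```python
import numpy as np
from scipy.stats import norm, chi2

u = np.linspace(0.1, 5, 50)
cdf_from_normal = 2 * norm.cdf(np.sqrt(u)) - 1   # P(-sqrt(u) <= Z <= sqrt(u))
cdf_chi2 = chi2.cdf(u, df=1)
max_gap = np.max(np.abs(cdf_from_normal - cdf_chi2))
print(max_gap < 1e-12)  # True
```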

3.4.11. Since the support of the pdf of V is v > 0, g(v) = (1/2)mv² is a one-to-one function on the support. Hence g⁻¹(y) = √(2y/m). Thus (d/dy)g⁻¹(y) = (1/2)√(2/(my)) = 1/√(2my). Therefore, the pdf of E is given by
f(y) = f_V(g⁻¹(y)) |(d/dy)g⁻¹(y)| = c (2y/m) e^{−2y/m} (1/√(2my)) = c √(2y/m³) e^{−2y/m}, y > 0.

3.4.13. Let U = √(X² + Y²) and V = tan⁻¹(Y/X). Here U is considered to be the radius and V the angle; this is a polar transformation and hence is one-to-one. Then X = U cos V and Y = U sin V, and

J = | cos v  −u sin v |
    | sin v   u cos v | = u cos²v + u sin²v = u.

Then the joint pdf of U and V is given by
f_{U,V}(u, v) = f_{X,Y}(u cos v, u sin v)|J| = (1/(2πσ²)) exp(−(1/(2σ²))(u² cos²v + u² sin²v)) · u
= (u/(2πσ²)) e^{−u²/(2σ²)}, u > 0, 0 ≤ v < 2π.


3.4.15. Applying the result in Exercise 3.4.14 with the parameter value 2, we have the joint pdf of U = X − Y/2 and V = Y as f_{U,V}(u, v) = (1/2)e^{−(u+v)}, v > −2u, v > 0. Thus, the pdf of U is given by
f_U(u) = ∫_v f_{U,V}(u, v) dv = ∫_0^∞ (1/2)e^{−(u+v)} dv = (1/2)e^{−u}, u ≥ 0, and
f_U(u) = ∫_{−2u}^∞ (1/2)e^{−(u+v)} dv = (1/2)e^{u}, u < 0.


EXERCISES 3.5

3.5.5. Applying Chebyshev's theorem, we have
P(|Xn/n − p| < ε) ≥ 1 − p(1 − p)/(nε²) → 1 as n → ∞ …

… P(X̄ > 2) = P((X̄ − 2)/(2/√100) > (2 − 2)/(2/√100)) = P(Z > 0) = 0.5, where Z ∼ N(0, 1).

3.5.13. First note that E(Xi) = 1/2 and Var(Xi) = 1/12. Then, by the CLT, we know that
Zn = (Sn − n/2)/√(n/12) = (X̄ − 1/2)/(√(1/12)/√n)
approximately follows N(0, 1) for large n.


Chapter 4

Sampling Distributions

    EXERCISES 4.1

4.1.1. (a) There are C(5, 3) = 10 equally likely possible samples of size 3 without replacement, so the probability of each is 1/10:

Sample        X̄      M    S
(-2, -1, 0)   -1     -1   1
(-2, -1, 1)   -2/3   -1   √(7/3)
(-2, -1, 2)   -1/3   -1   √(13/3)
(-2, 0, 1)    -1/3    0   √(7/3)
(-2, 0, 2)     0      0   2
(-2, 1, 2)     1/3    1   √(13/3)
(-1, 0, 1)     0      0   1
(-1, 0, 2)     1/3    0   √(7/3)
(-1, 1, 2)     2/3    1   √(7/3)
(0, 1, 2)      1      1   1

(i)
X̄:    -1    -2/3   -1/3    0     1/3    2/3    1
p(X̄): 1/10  1/10   2/10   2/10   2/10   1/10   1/10

(ii)
M:    -1    0     1
p(M): 3/10  4/10  3/10

(iii)
S:    1     √(7/3)  2     √(13/3)
p(S): 3/10  4/10    1/10  2/10

(iv) E(X̄) = (-1)(1/10) + (-2/3)(1/10) + (-1/3)(2/10) + (0)(2/10) + (1/3)(2/10) + (2/3)(1/10) + (1)(1/10) = 0
E(X̄²) = (-1)²(1/10) + (-2/3)²(1/10) + (-1/3)²(2/10) + 0²(2/10) + (1/3)²(2/10) + (2/3)²(1/10) + 1²(1/10) = 1/3
Var(X̄) = E(X̄²) − [E(X̄)]² = 1/3 − 0² = 1/3.

(b) With replacement we can get 5³ = 125 samples of size 3.

4.1.3. Population: {1, 2, 3}; p(x) = 1/3 for x in {1, 2, 3}.

(a) μ = (1/N) Σ_{i=1}^N c_i = 2, σ² = (1/N) Σ_{i=1}^N (c_i − μ)² = 2/3.
(b)
Sample   X̄    | Sample   X̄    | Sample   X̄
(1,1,1)  1    | (2,1,1)  4/3  | (3,1,1)  5/3
(1,1,2)  4/3  | (2,1,2)  5/3  | (3,1,2)  2
(1,1,3)  5/3  | (2,1,3)  2    | (3,1,3)  7/3
(1,2,1)  4/3  | (2,2,1)  5/3  | (3,2,1)  2
(1,2,2)  5/3  | (2,2,2)  2    | (3,2,2)  7/3
(1,2,3)  2    | (2,2,3)  7/3  | (3,2,3)  8/3
(1,3,1)  5/3  | (2,3,1)  2    | (3,3,1)  7/3
(1,3,2)  2    | (2,3,2)  7/3  | (3,3,2)  8/3
(1,3,3)  7/3  | (2,3,3)  8/3  | (3,3,3)  3

X̄:    1     4/3  5/3  2     7/3  8/3  3
p(X̄): 1/27  1/9  2/9  7/27  2/9  1/9  1/27

(c) E(X̄) = Σ x̄ p(x̄) = 2, E(X̄²) = Σ x̄² p(x̄) = 38/9, so Var(X̄) = 38/9 − 4 = 2/9.
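Both sampling distributions above can be generated by brute-force enumeration with the standard library; a sketch:

```python
from itertools import combinations, product
from fractions import Fraction

def dist(samples):
    """Exact mean and variance of the sample mean over equally likely samples."""
    xbars = [Fraction(sum(s), 3) for s in samples]
    m = sum(xbars) / len(xbars)
    v = sum((x - m) ** 2 for x in xbars) / len(xbars)
    return m, v

# Exercise 4.1.1: C(5,3) = 10 samples without replacement from {-2,-1,0,1,2}.
m1, v1 = dist(list(combinations([-2, -1, 0, 1, 2], 3)))
# Exercise 4.1.3: 3^3 = 27 samples with replacement from {1,2,3}.
m2, v2 = dist(list(product([1, 2, 3], repeat=3)))
print(m1, v1, m2, v2)  # 0 1/3 2 2/9
```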

4.1.5. Write S′² = (1/n) Σ_i (x_i − x̄)². Since Σ_i (x_i − x̄)² = Σ_i x_i² − n x̄², we have
E(S′²) = (1/n)[Σ_i E(X_i²) − n E(X̄²)].
Assuming sampling from a population with mean μ and variance σ², we have
E(S′²) = (1/n)[n(σ² + μ²) − n(σ²/n + μ²)] = σ² − σ²/n = ((n − 1)/n)σ² < σ² = E(S²).

4.1.7. Let X be the weight of sugar, X ∼ N(μ = 5 lb, σ = 0.2 lb).
Then X̄ = (1/n) Σ_{i=1}^n X_i is the mean weight, where n = 15.
By Corollary 4.2.2, E(X̄) = μ and Var(X̄) = σ²/n. Then X̄ ∼ N(5, 0.2²/15), and (X̄ − 5)/√(0.2²/15) = Z ∼ N(0, 1²). Therefore, the probability requested is
P(−0.2 < X̄ − 5 < 0.2) = P(−√15 < Z < √15) ≈ 0.9999.

4.1.9. … P(X̄ > 170) = P(Z > 22.6) ≈ 0.

4.1.11. Let X be the time, X ∼ N(μ = 95, σ² = 10²). Then (X − 95)/10 = Z ∼ N(0, 1²). Therefore,
P(X > 85) = P(Z > −1) = 0.8413, so 84.13% of measurement times will exceed 85 seconds.

    P( X< 85) = P( Z< 1) = 0.8413, or 84.13% of measurement times will fall below85 seconds.

4.1.13. According to the given information, μ = 215 and σ = 35.
(a) If n = 55, we can assume X̄ ∼ N(μ, σ²/n); then P(X̄ > 230) = P(Z > (230 − 215)/(35/√55)) = 0.0007.
(b) If n = 200, we can assume X̄ ∼ N(μ, σ²/n); then P(X̄ > 230) = P(Z > (230 − 215)/(35/√200)) ≈ 0.
(c) If n = 35, we can assume X̄ ∼ N(μ, σ²/n); then P(X̄ > 230) = P(Z > (230 − 215)/(35/√35)) = 0.0056.
(d) Increasing the sample size decreases this probability.

4.1.15. Let T be the temperature. Since n = 60, we assume T ∼ N(98.6, 0.95²). Then T̄ ∼ N(98.6, 0.95²/60). Therefore, P(T̄ ≥ 99.1) ≈ 0.

    EXERCISES 4.2

4.2.1. We have Y ∼ χ²(15).
(a) We can see, for example from a table, that P(Y ≤ 6.26) = 0.025. Then y₀ = 6.26.
(b) Choosing upper and lower tail areas of 0.025 each: since P(Y ≤ 27.5) = 0.975 and P(Y ≤ 6.26) = 0.025, P(a < Y < b) = 0.95 with b = χ²_{0.975,15} = 27.5 and a = χ²_{0.025,15} = 6.26.
(c) P(Y ≥ 22.307) = 1 − P(Y < 22.307) = 0.10.
… = 0.4232.


4.2.5. Since X₁, X₂, …, X₅ are i.i.d. N(55, 223), Y = Σ_{i=1}^5 (X_i − 55)²/223 ∼ χ²(5).
(a) Since Z = Y − 5(X̄ − 55)²/223, and 5(X̄ − 55)²/223 ∼ χ²(1), Z is chi-square distributed with 4 degrees of freedom and Y is chi-square distributed with 5 degrees of freedom.
(b) Yes.
(c) (i) P(0.62 ≤ Y ≤ 0.76) = 0.0075; (ii) P(0.77 ≤ Z ≤ 0.95) = 0.0251.

4.2.7. Since the random sample comes from a normal distribution, (n − 1)S²/σ² ∼ χ²(n − 1). Setting the upper and lower tail areas equal to 0.05 (this is not the only choice), and using a chi-square table with n − 1 = 14 degrees of freedom, we have (n − 1)b/σ² = χ²_{0.95,14} = 23.68 and (n − 1)a/σ² = χ²_{0.05,14} = 6.57. Then, with σ = 1.41, b = 3.36 and a = 0.93.

4.2.9. Since T ∼ t₈:
(a) P(T ≤ 2.896) = 0.99.
(b) P(T ≤ −1.860) = 0.05.
(c) Since the t-distribution is symmetric, we find a such that P(T > a) = 0.01/2 = 0.005. Then a = 3.355.

4.2.11. According to the given information, μ = 11.4, n = 20, ȳ = 11.5, and s = 2, so t = (ȳ − μ)/(s/√n) = 0.224. The degrees of freedom are n − 1 = 19, so the critical value is t_{0.10,19} = 1.328 at the α = 0.10 level. Since 0.224 < 1.328, the data tend to agree with the psychologist's claim.

4.2.13. If X ∼ χ²(v), then X is Gamma with α = v/2 and β = 2, so E(X) = αβ = (v/2)(2) = v and Var(X) = αβ² = (v/2)(2)² = 2v.

4.2.15. If X₁, X₂, …, Xₙ is from N(μ, σ²), then, by Theorem 4.2.8, (n − 1)S²/σ² is from χ²(n − 1). Then, by Exercise 4.2.13,
Var((n − 1)S²/σ²) = 2(n − 1).
Since Var(aX) = a²Var(X), ((n − 1)²/σ⁴)Var(S²) = 2(n − 1). Simplifying after multiplying by σ⁴/(n − 1)², we obtain Var(S²) = 2σ⁴/(n − 1).

4.2.17. If X and Y are independent random variables from an exponential distribution with common parameter θ = 1, then using Exercise 4.2.16 with n = 1, 2X ∼ χ²(2) and 2Y ∼ χ²(2). Then X/Y = 2X/2Y ∼ F(2, 2).

4.2.19. If X ∼ F(9, 12):
(a) P(X ≤ 3.87) = 0.9838.
(b) P(X ≤ 0.196) = 0.01006.
(c) For the upper limit, b = F_{0.025}(9, 12) = 3.4358. For the lower limit,
0.025 = P(X < F_{0.975}) = P(1/X > 1/F_{0.975}), where 1/X ∼ F(12, 9).
Then 1/F_{0.975} = 3.8682 and F_{0.975} = 0.258518. Thus, a = 0.2585, b = 3.4358.
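Part (c), including the reciprocal identity used above, can be checked with scipy's F quantile function:

```python
from scipy.stats import f

b = f.ppf(0.975, 9, 12)        # upper 2.5% point of F(9, 12)
a = 1 / f.ppf(0.975, 12, 9)    # lower 2.5% point via 1/X ~ F(12, 9)
print(a, b)  # approximately 0.2585 and 3.4358, the table values above
```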


4.2.21. If X ∼ F(n1, n2), the pdf is given by
f(x) = [Γ((n1+n2)/2)/(Γ(n1/2)Γ(n2/2))] (n1/n2)^{n1/2} x^{n1/2−1} (1 + (n1/n2)x)^{−(n1+n2)/2}, 0 < x < ∞,
and 0 otherwise. Then
E(X) = ∫_0^∞ x f(x) dx = [Γ((n1+n2)/2)/(Γ(n1/2)Γ(n2/2))] (n1/n2)^{n1/2} ∫_0^∞ x^{n1/2} (1 + (n1/n2)x)^{−(n1+n2)/2} dx.
Let y = (1 + (n1/n2)x)^{−1}; then x = (n2/n1)(1 − y)y^{−1} and dx = −(n2/n1)y^{−2} dy, and as x runs from 0 to ∞, y runs from 1 to 0. Then
E(X) = [Γ((n1+n2)/2)/(Γ(n1/2)Γ(n2/2))] (n2/n1) ∫_0^1 y^{n2/2−2}(1 − y)^{n1/2} dy, which converges for n2 > 2.
For α > 0, β > 0,
∫_0^1 y^{α−1}(1 − y)^{β−1} dy = Γ(α)Γ(β)/Γ(α + β), where Γ(α) = ∫_0^∞ x^{α−1} e^{−x} dx with the property Γ(α) = (α − 1)Γ(α − 1).
Then
E(X) = (n2/n1) [Γ((n1+n2)/2)/(Γ(n1/2)Γ(n2/2))] Γ(n1/2 + 1)Γ(n2/2 − 1)/Γ((n1+n2)/2)
= (n2/n1)(n1/2)/((n2/2) − 1) = n2/(n2 − 2), n2 > 2.
Similarly,
E(X²) = [Γ((n1+n2)/2)/(Γ(n1/2)Γ(n2/2))] (n2/n1)² ∫_0^1 y^{n2/2−3}(1 − y)^{n1/2+1} dy, which converges for n2 > 4,
= (n2/n1)² Γ(n1/2 + 2)Γ(n2/2 − 2)/(Γ(n1/2)Γ(n2/2)) = (n2/n1)² n1(n1 + 2)/[(n2 − 2)(n2 − 4)], n2 > 4.
Now, Var(X) = E(X²) − (E(X))². Therefore,
E(X) = n2/(n2 − 2), n2 > 2, and Var(X) = n2²(2n1 + 2n2 − 4)/[n1(n2 − 2)²(n2 − 4)], n2 > 4.
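The closed forms derived above agree with scipy's F-distribution moments; a quick check with n1 = 9, n2 = 12 (any values with n2 > 4 would do):

```python
from scipy.stats import f

n1, n2 = 9, 12
mean_formula = n2 / (n2 - 2)
var_formula = n2**2 * (2 * n1 + 2 * n2 - 4) / (n1 * (n2 - 2) ** 2 * (n2 - 4))
mean_scipy, var_scipy = f.stats(n1, n2, moments="mv")
print(mean_formula, var_formula)  # 1.2 0.76
```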


4.2.23. If X₁, X₂, …, X_{n1} is a random sample from a normal population with mean μ1 and variance σ², and if Y₁, Y₂, …, Y_{n2} is a random sample from an independent normal population with mean μ2 and variance σ², then
X̄ ∼ N(μ1, σ²/n1), Ȳ ∼ N(μ2, σ²/n2), (n1 − 1)S1²/σ² ∼ χ²(n1 − 1), and (n2 − 1)S2²/σ² ∼ χ²(n2 − 1).
Then X̄ − Ȳ ∼ N(μ1 − μ2, σ²/n1 + σ²/n2), so
(X̄ − Ȳ − (μ1 − μ2))/√(σ²/n1 + σ²/n2) ∼ N(0, 1²), and (n1 − 1)S1²/σ² + (n2 − 1)S2²/σ² ∼ χ²(n1 + n2 − 2).
Then, since the samples are independent, we have by definition that

[(X̄ − Ȳ − (μ1 − μ2))/√(σ²/n1 + σ²/n2)] / √{[(n1 − 1)S1²/σ² + (n2 − 1)S2²/σ²]/(n1 + n2 − 2)} ∼ T(n1 + n2 − 2).

After simplification this becomes

(X̄ − Ȳ − (μ1 − μ2)) / [√(((n1 − 1)S1² + (n2 − 1)S2²)/(n1 + n2 − 2)) √(1/n1 + 1/n2)] ∼ T(n1 + n2 − 2). Q.E.D.

4.2.25. If X ∼ χ²(v) with v > 0, then the pdf of X is given by
f(x) = (1/(Γ(v/2)2^{v/2})) e^{−x/2} x^{v/2−1}, 0 < x < ∞, and 0 for x ≤ 0.
Then, by the definition of the mgf (substituting w = x(1/2 − t)),
M_X(t) = (1/(Γ(v/2)2^{v/2})) ∫_0^∞ e^{x(t−1/2)} x^{v/2−1} dx = (1 − 2t)^{−v/2} ∫_0^∞ (e^{−w} w^{v/2−1}/Γ(v/2)) dw
= (1 − 2t)^{−v/2}, t < 1/2.

EXERCISES 4.3

4.3.1. Each component's life length θᵢ is exponentially distributed with mean 10, so the cumulative distribution of θᵢ is
F_{θᵢ}(t) = ∫_0^t (1/10)e^{−x/10} dx = [−e^{−x/10}]_0^t = 1 − e^{−t/10}.
Let Y represent the life length of the system; then Y = min(θ₁, θ₂) and F_Y(y) = 1 − [1 − F_{θᵢ}(y)]², so the pdf of Y is of the form f_Y(y) = 2f_{θᵢ}(y)[1 − F_{θᵢ}(y)], which gives
f_Y(y) = (1/5)e^{−y/5}, 0 < y < ∞, and 0 otherwise.

4.3.3. X₁, X₂ take values 0, 1; X₃ takes values 1, 2, 3; and Y₁ = min{X₁, X₂, X₃}.
Since the values of X₁, X₂ are less than or equal to the values of X₃, Y₁ takes values 0, 1.
Since the values of X₃ are greater than or equal to the values of X₁, X₂, Y₃ = max{X₁, X₂, X₃} takes values 1, 2, 3.
Since Y₁ ≤ Y₂ ≤ Y₃, Y₂ takes values 0, 1.

4.3.5. Let X₁, X₂, …, Xₙ be a random sample from an exponential distribution with mean β; the common pdf is given by f(x) = (1/β)e^{−x/β}, if x > 0.
Using Theorem 4.3.2, the pdf of the k-th order statistic is given by
f_k(y) = f_{Y_k}(y) = [n!/((k − 1)!(n − k)!)] f(y)(F(y))^{k−1}(1 − F(y))^{n−k}, where F(y) = 1 − e^{−y/β};
then f_k(y) = [n!/((k − 1)!(n − k)!)] f(y)(1 − e^{−y/β})^{k−1}(e^{−y/β})^{n−k}.
Then the pdf of Y₁ is f₁(y) = n f(y)(e^{−y/β})^{n−1} = (n/β)e^{−ny/β}, which is the pdf of an exponential distribution with mean β/n, and the pdf of Yₙ is
f_n(y) = n f(y)[F(y)]^{n−1} = (n/β)e^{−y/β}(1 − e^{−y/β})^{n−1}.
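The distribution of Y₁ can also be seen by simulation (numpy assumed): the minimum of n i.i.d. exponential(β) variables behaves like an exponential with mean β/n.

```python
import numpy as np

rng = np.random.default_rng(0)
beta, n = 10.0, 5
# 200,000 samples of size n; take the minimum of each sample.
mins = rng.exponential(scale=beta, size=(200_000, n)).min(axis=1)
print(round(mins.mean(), 1))  # close to beta / n = 2.0
```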


4.3.7. X₁, …, Xₙ, a random sample, are i.i.d. with pdf f(x) = 1/2, 0 ≤ x ≤ 2.
Then F(x) = ∫_0^x (1/2) dt = x/2, if 0 ≤ x ≤ 2; thus
F(x) = 0 for x < 0, x/2 for 0 ≤ x ≤ 2, and 1 for x > 2.

4.3.9. P(Yₙ > 10) = 1 − P(Yₙ ≤ 10).
The cdf Fₙ(y) of Yₙ is [F(y)]ⁿ, where F(y) is the cdf of X evaluated at y.
Then P(Yₙ ≤ y) = Fₙ(y) = [F(y)]ⁿ = [P(X ≤ y)]ⁿ, and P(X ≤ y) = P(Z ≤ (y − 10)/2).
Then P(Yₙ ≤ y) = [P(Z ≤ (y − 10)/2)]ⁿ, so P(Yₙ > y) = 1 − [P(Z ≤ (y − 10)/2)]ⁿ.
Therefore P(Yₙ > 10) = 1 − [P(Z ≤ 0)]ⁿ = 1 − (0.5)ⁿ.

4.3.11. X₁, …, Xₙ is a random sample from Beta(α = 2, β = 3). The joint pdf of Y₁ and Yₙ, according to Theorem 4.3.3, is given by
f_{Y₁,Yₙ}(x, y) = [n!/((1 − 1)!(n − 1 − 1)!(n − n)!)] [F(x)]^{1−1}[F(y) − F(x)]^{n−1−1}[1 − F(y)]^{n−n} f(x)f(y)
= n(n − 1)[F(y) − F(x)]^{n−2} f(x)f(y), if x < y.
Since Xᵢ ∼ Beta(α = 2, β = 3) for i = 1, 2, …, n, the pdf is
f(x) = [Γ(α + β)/(Γ(α)Γ(β))] x^{α−1}(1 − x)^{β−1}, x ∈ [0, 1],
and the cdf is
F(x) = 0 for x ≤ 0; [Γ(α + β)/(Γ(α)Γ(β))] ∫_0^x t^{α−1}(1 − t)^{β−1} dt for 0 ≤ x ≤ 1; and 1 for x ≥ 1.
In our case,
f(x) = [Γ(5)/(Γ(2)Γ(3))] x^{2−1}(1 − x)^{3−1} = (4!/(1!2!)) x(1 − x)² = 12x(1 − x)², if 0 ≤ x ≤ 1,
and
F(x) = ∫_0^x 12t(1 − t)² dt = 12(x²/2 − 2x³/3 + x⁴/4), if 0 ≤ x ≤ 1.
Then the joint pdf f_{Y₁,Yₙ}(x, y), using Theorem 4.3.3, is given by
f_{Y₁,Yₙ}(x, y) = n(n − 1)[12(y²/2 − 2y³/3 + y⁴/4) − 12(x²/2 − 2x³/3 + x⁴/4)]^{n−2} · 12x(1 − x)² · 12y(1 − y)²
= 12ⁿ n(n − 1)[(1/2)(y² − x²) − (2/3)(y³ − x³) + (1/4)(y⁴ − x⁴)]^{n−2} xy(1 − x)²(1 − y)²,
if 0 ≤ x ≤ y ≤ 1, and f_{Y₁,Yₙ}(x, y) = 0 otherwise.

    EXERCISES 4.4

4.4.1. X₁, X₂, …, Xₙ, where n = 150, μ = 8, σ² = 4; then μ_{X̄} = 8 and σ²_{X̄} = 4/150.
By Theorem 4.4.1:
lim_{n→∞} P((X̄ − μ_{X̄})/σ_{X̄} ≤ z) = (1/√(2π)) ∫_{−∞}^z e^{−u²/2} du.
Then P(7.5 < X̄ < …


4.4.3. Let T be the time spent by a customer coming to a certain gas station to fill up with gas. Suppose T₁, T₂, …, Tₙ are independent random variables with μ_T = 3 minutes and σ²_T = 1.5 minutes², and n = 75. Then Y = Σ_{i=1}^n Tᵢ is the total time spent by the n customers, with E(Y) = 75(3) = 225 and Var(Y) = 75(1.5) = 112.5. Since 3 hours = 180 minutes,
P(Y < 180) ≈ P(Z < (180 − 225)/√112.5) = P(Z < −4.24) ≈ 0.


4.4.13. SIDS occurs between the ages of 28 days and one year, and the rate of death due to SIDS is 0.00103 per year. Take a random sample of 5000 infants between the ages of 28 days and one year, and let X be the number of SIDS-related deaths. With p = 0.00103 and n = 5000, X ∼ Bin(n = 5000, p = 0.00103). The probability requested is
P(X > 10) ≈ P(Z > (10 − np − 0.5)/√(np(1 − p))) = 0.0274.
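The continuity-corrected approximation above can be compared with the exact binomial tail (scipy assumed):

```python
from math import sqrt
from scipy.stats import binom, norm

n, p = 5000, 0.00103
z = (10 - n * p - 0.5) / sqrt(n * p * (1 - p))
approx = 1 - norm.cdf(z)          # continuity-corrected normal approximation
exact = 1 - binom.cdf(10, n, p)   # exact P(X > 10)
print(round(approx, 4))
```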


CHAPTER 5 Point Estimation

5.2.11. We have E(X) = μ, estimated by x̄, and
σ² = E(X²) − μ², estimated by (1/n) Σ xᵢ² − x̄² = (1/n) Σ (Xᵢ − X̄)².
The method of moments estimator for σ² is given by
T(X₁, …, Xₙ) = (1/n) Σ Xᵢ² − (Σ Xᵢ/n)².

5.2.13. The method of moments estimator for μ is T(X₁, …, Xₙ) = X̄, since E(X) = μ. Here μ and σ² are both unknown. Since
σ̂² = (1/n) Σ (Xᵢ − X̄)² = ((n − 1)/n) · (1/(n − 1)) Σ (Xᵢ − X̄)² = ((n − 1)/n)s²,
the method of moments estimator for σ² is given by
s′² = (1/n) Σ_{i=1}^n (Xᵢ − X̄)².

    EXERCISES 5.3

5.3.1. f(x) = C(n, x) pˣ(1 − p)^{n−x}.
log L(p; x₁, …, xₙ) = log[Π C(n, xᵢ)] + Σ xᵢ log p + (n − Σ xᵢ) log(1 − p).
(∂/∂p) log L(p; X₁, …, Xₙ) = Σ Xᵢ/p − (n − Σ Xᵢ)/(1 − p).
For the maximum likelihood estimator of p, set (∂/∂p) log L = 0:
(1 − p)Σ Xᵢ − p(n − Σ Xᵢ) = 0, which implies Σ Xᵢ = pΣ Xᵢ + np − pΣ Xᵢ = np, so
p̂ = Σ Xᵢ/n = X̄.
Hence, the MLE of p is p̂ = X̄. By the invariance property, q̂ = 1 − p̂ is the MLE of q.


5.3.3. f(x) = (1/β)e^{−x/β} implies L(β) = (1/βⁿ)e^{−Σxᵢ/β}, so
ln L(β) = −n ln β − Σxᵢ/β.
Now taking the derivative with respect to β and setting it equal to zero, we have
(∂/∂β) ln L = −n/β + Σxᵢ/β² = 0 ⟹ −nβ + Σxᵢ = 0 ⟹ β̂ = X̄.
From the given data, the MLE of β is given by β̂ = x̄ = 6.07.

5.3.5. Here the pdf of X is given by
f(x) = (2x/θ²)e^{−x²/θ²} if x > 0, and 0 otherwise.
L(θ; X₁, …, Xₙ) = (2ⁿ/θ^{2n}) Π_{i=1}^n Xᵢ e^{−ΣXᵢ²/θ²}, so
ln L = n ln 2 − 2n ln θ + Σ_{i=1}^n ln Xᵢ − Σ_{i=1}^n Xᵢ²/θ².
(∂/∂θ) ln L = −2n/θ + 2Σ_{i=1}^n Xᵢ²/θ³ = 0 implies −nθ² + ΣXᵢ² = 0, so
θ̂² = ΣXᵢ²/n, i.e., θ̂ = √(ΣXᵢ²/n).

5.3.7. f(x) = (1/(Γ(α)β^α)) x^{α−1} e^{−x/β} if x ≥ 0, and 0 otherwise.
L(α, β; X) = (1/(Γ(α)ⁿ β^{nα})) Π Xᵢ^{α−1} e^{−ΣXᵢ/β}, so
ln L(α, β; x) = −n ln Γ(α) − nα ln β + (α − 1)Σ ln Xᵢ − ΣXᵢ/β.
(∂/∂α) ln L = −n Γ′(α)/Γ(α) − n ln β + Σ_{i=1}^n ln Xᵢ, and
(∂/∂β) ln L = −nα/β + ΣXᵢ/β².
Setting (∂/∂β) ln L = 0 and solving for β gives β̂ = ΣXᵢ/(nα̂) = X̄/α̂. Substituting this into (∂/∂α) ln L = 0 gives
−n Γ′(α̂)/Γ(α̂) − n ln(X̄/α̂) + Σ ln Xᵢ = 0.
There is no closed-form solution for α̂ and β̂. In this case, one can use a numerical method such as the Newton–Raphson method to solve for α̂, and then with this value solve for β̂.

5.3.9. f(x) = [Γ(2θ)/Γ(θ)²][x(1 − x)]^{θ−1}, 0 < x < 1.
ln L(θ; x) = n ln Γ(2θ) − 2n ln Γ(θ) + (θ − 1)Σ_{i=1}^n ln[Xᵢ(1 − Xᵢ)].
(∂/∂θ) ln L(θ; x) = n[2Γ′(2θ)/Γ(2θ) − 2Γ′(θ)/Γ(θ)] + Σ_{i=1}^n ln[Xᵢ(1 − Xᵢ)].

5.3.13. f(x) = 1/(3θ + 2) if 0 ≤ x ≤ 3θ + 2, and 0 otherwise.
This implies L(θ; x) = 1/(3θ + 2)ⁿ for 0 ≤ xᵢ ≤ 3θ + 2.
When 3θ + 2 ≥ max(Xᵢ), the likelihood is 1/(3θ + 2)ⁿ, which is positive and a decreasing function of θ (for fixed n). However, for θ < (max(Xᵢ) − 2)/3, the likelihood drops to 0, creating a discontinuity at the point (max(Xᵢ) − 2)/3. Hence we will not be able to find the derivative. The MLE is given by the largest order statistic:
θ̂ = (max(Xᵢ) − 2)/3 = (X₍ₙ₎ − 2)/3.

5.3.15. Here X ∼ N(μ, σ²).
ln L(μ, σ; x) = −(n/2) ln 2π − (n/2) ln σ² − Σ_{i=1}^n (xᵢ − μ)²/(2σ²).
∂L/∂μ = (1/σ²)Σ_{i=1}^n (Xᵢ − μ). Similarly, ∂L/∂σ² = −n/(2σ²) + (1/(2σ⁴))Σ_{i=1}^n (Xᵢ − μ)².
For the maximum likelihood estimates of μ and σ²:
Σ(Xᵢ − μ) = 0 implies μ̂ = X̄. Similarly, for σ²,
−n/(2σ²) + (1/(2σ⁴))Σ_{i=1}^n (Xᵢ − μ̂)² = 0 implies σ̂² = Σ(Xᵢ − X̄)²/n.

5.3.17. f(x) = (1/θ)e^{−x/θ}, so F(x) = 1 − e^{−x/θ}.
It is given that the reliability is R(x) = 1 − F(x) = e^{−x/θ}.
The MLE of θ is θ̂ = X̄, so by the invariance property the MLE of the reliability is
R̂(x) = e^{−x/X̄}.

    EXERCISES 5.4

5.4.1. E(X̄) = E(ΣXᵢ/n), i.e., E(X̄) = (1/n)ΣE(Xᵢ), where E(X) = ∫_θ^∞ x e^{−(x−θ)} dx. By integration by parts, E(X) = 1 + θ. Thus E(X̄) = (1/n)·n(1 + θ), which implies E(X̄) = 1 + θ.

5.4.3. The sample standard deviation is s = √((1/(n − 1))Σ(xᵢ − X̄)²). Write
(1/n)Σ(xᵢ − X̄)² = (1/n)Σ(xᵢ − μ)² − (X̄ − μ)².
Taking expectations,
E[(1/n)Σ(Xᵢ − X̄)²] = (1/n)ΣE(Xᵢ − μ)² − E(X̄ − μ)² = σ² − σ²/n.
Since the square root is concave, Jensen's inequality gives E(s) ≤ √(E(s²)) = σ, so s is a biased estimator of σ.

5.4.5. Let Y = C₁X₁ + C₂X₂ + ⋯ + CₙXₙ.
For an unbiased estimator, we need E(Y) = C₁E(X₁) + ⋯ + CₙE(Xₙ) = (C₁ + ⋯ + Cₙ)μ = μ, that is, C₁ + ⋯ + Cₙ = 1.
In particular, Cᵢ = 1/n for all i = 1, 2, …, n satisfies this: 1/n + 1/n + ⋯ + 1/n = n/n = 1, so E(Y) = μ. Verified.


5.4.7. Xᵢ ∼ U(0, θ); Yₙ = max{X₁, …, Xₙ} and θ̂₁ = Yₙ is the MLE of θ.
(a) By the method of moments, E(X) = X̄ = (θ + 0)/2, which implies θ = 2X̄. Hence the method of moments estimator is θ̂₂ = 2X̄.
(b) E(θ̂₁) = E(Yₙ) = E[max(X₁, …, Xₙ)] = (n/(n + 1))θ ≠ θ, so the MLE is biased, while
E(θ̂₂) = E(2X̄) = 2E(X̄) = 2(θ/2) = θ, so θ̂₂ = 2X̄ is an unbiased estimator of θ.
(c) E(θ̂₃) = E(((n + 1)/n)Yₙ) = ((n + 1)/n)(n/(n + 1))θ = θ, so θ̂₃ = ((n + 1)/n)Yₙ is an unbiased estimator of θ.

5.4.9. Here Xᵢ ∼ N(μ, σ²) with
f(x) = (1/√(2πσ²)) exp(−(x − μ)²/(2σ²)).
We have E(μ̂) = E(X̄) = μ, so X̄ is an unbiased estimator of μ. By definition, the unbiased estimator that minimizes the mean square error is called the minimum variance unbiased estimator (MVUE) of μ:
MSE(μ̂) = E(μ̂ − μ)².
For an unbiased estimator, MSE(μ̂) = Var(μ̂), so minimizing the MSE amounts to minimizing Var(μ̂). X̄ is the MVUE for μ.

5.4.11. E(M) = E(X̄) = μ. Thus, the sample median is an unbiased estimator of the population mean μ. Now compare Var(X̄) = E(X̄ − μ)² with Var(M) = E(M − μ)²: since Var(X̄) ≤ Var(M), the sample mean is the more efficient estimator.

    Where Var(X) VarM5.4.13.

    f(X) = 1

    2exp

    |X|

    for < x <

    The likelihood function is given by:

    f (X1, X2, . . . . . . . . Xn, ) = 12n

    1n

    exp |Xi|

    .

    Takeg |Xi|, = 1nexp |Xi| andh(X1, X2 . . . . . . . Xn) = 12n

    |Xi| is sufficient for.


5.4.21. The likelihood function is given by
f(x₁, …, xₙ) = θⁿ Π_{i=1}^n xᵢ^{θ−1} for 0 < xᵢ < 1, and 0 otherwise.
Let U = Π Xᵢ; then g(Π xᵢ, θ) = θⁿ(Π xᵢ)^{θ−1} and h(x₁, …, xₙ) = 1. Therefore U is sufficient for θ.

5.4.23. The likelihood function is given by
f(x₁, …, xₙ) = (2ⁿ/θⁿ)(Π_{i=1}^n xᵢ) e^{−Σxᵢ²/θ}.
Let g(Σxᵢ², θ) = (2ⁿ/θⁿ) e^{−Σxᵢ²/θ} and h(x₁, …, xₙ) = Π_{i=1}^n xᵢ.
Hence, ΣXᵢ² is sufficient for the parameter θ.

    EXERCISES 5.5

5.5.1. ln L(p; x) = ln Π_{i=1}^n C(n, xᵢ) + Σxᵢ ln p + Σ_{i=1}^n (n − xᵢ) ln(1 − p). For the MLE, (1 − p)Σxᵢ − pΣ(n − xᵢ) = 0, which implies p̂ = Σxᵢ/n². Let Yₙ = Σxᵢ; thus p̂ = Yₙ/n², where
E(Yₙ/n²) = (1/n²)E(Yₙ) = (1/n²)(n)(np) = p,
so p̂ is an unbiased estimator of p. Similarly,
Var(p̂) = Var(Yₙ/n²) = Var(Σxᵢ)/n⁴ = (n/n⁴)(np(1 − p)) = p(1 − p)/n².
Thus Var(p̂) → 0 as n → ∞, so Yₙ/n² is consistent.

5.5.3. E(s′²) = (1/n)E[Σ(Xᵢ − X̄)²] = (n − 1)σ²/n, so s′² is a biased estimator of σ². Here s′² = ((n − 1)/n)s², and since (n − 1)s²/σ² ∼ χ²(n − 1),
Var((n − 1)s²/σ²) = 2(n − 1), which implies ((n − 1)²/σ⁴)Var(s²) = 2(n − 1).
Finally, Var(s²) = 2σ⁴/(n − 1), so Var(s′²) = ((n − 1)/n)² · 2σ⁴/(n − 1) → 0 as n → ∞.
Moreover, Bias(s′²) = E(s′²) − σ² = −σ²/n → 0 as n → ∞.
Thus s′² is a consistent (asymptotically unbiased) estimator of σ².

5.5.5. Here E(X) = θ and Var(X) = θ² (for the exponential distribution). Now E(X̄) = θ and Var(X̄) = θ²/n → 0 as n → ∞, so X̄ is an unbiased and consistent estimator of θ.

5.5.7. Here ln L(θ; X) = n ln θ + (θ − 1)Σ_{i=1}^n ln Xᵢ.
Differentiating and equating to zero, n/θ + Σ ln Xᵢ = 0, and we get θ̂ = −n/Σ_{i=1}^n ln Xᵢ.


5.5.9. Here Var(θ̂₁) = 1/(12n) and Var(θ̂₂) = 1/12.
Thus the efficiency of θ̂₂ relative to θ̂₁ is e(θ̂₂, θ̂₁) = Var(θ̂₁)/Var(θ̂₂) = 1/n ≤ 1.
Similarly, e(θ̂₂, θ̂₃) = (n + 2)/3 > 1 if n > 1, so θ̂₂ is more efficient than θ̂₃ if n > 1.


5.5.17. It can easily be verified that E(θ̂₁) = θ, E(θ̂₂) = θ, and E(θ̂₃) = θ.
Similarly, Var(θ̂₁) = (31/81)σ², Var(θ̂₂) = [(6n − 17)/(25(n − 3))]σ², and Var(θ̂₃) = σ²/n.
Now the corresponding efficiencies are given by
e(θ̂₂, θ̂₁) = 775(n − 3)/(81(6n − 17)), e(θ̂₃, θ̂₁) = 31n/81, and e(θ̂₃, θ̂₂) = (6n − 17)n/(25(n − 3)).

5.5.21. The ratio of the joint density function at two sample points is given by
L(x₁, …, xₙ)/L(y₁, …, yₙ) = exp{−(1/(2σ²))(Σ_{i=1}^n Xᵢ² − Σ_{i=1}^n Yᵢ²) + (μ/σ²)(Σ_{i=1}^n Xᵢ − Σ_{i=1}^n Yᵢ)}.
For this ratio to be free of μ and σ², we must have Σ_{i=1}^n Xᵢ = Σ_{i=1}^n Yᵢ and Σ_{i=1}^n Xᵢ² = Σ_{i=1}^n Yᵢ².
Thus Σ_{i=1}^n Xᵢ and Σ_{i=1}^n Xᵢ² are jointly minimal sufficient statistics for μ and σ². Since X̄ is an unbiased estimator for μ and s² is an unbiased estimator for σ², and these estimators are functions of the minimal sufficient statistics, X̄ and s² are MVUEs for μ and σ².

5.5.23. Taking the ratio of the joint density function at two sample points, we have
L(x₁, …, xₙ)/L(y₁, …, yₙ) = [Π_{i=1}^n yᵢ!/Π_{i=1}^n xᵢ!] θ^{Σxᵢ − Σyᵢ}.
For the ratio to be free of θ we must have Σxᵢ − Σyᵢ = 0. Thus ΣXᵢ forms the minimal sufficient statistic for θ.

5.5.25. L(x₁, …, xₙ)/L(y₁, …, yₙ) = e^{−(Σxᵢ − Σyᵢ)/θ}.
For the ratio to be independent of θ, we need Σxᵢ = Σyᵢ. Thus ΣXᵢ is minimal sufficient for θ. Now E(ΣXᵢ) = nθ, so X̄ is unbiased and, being a function of the minimal sufficient statistic, X̄ is the UMVUE by the Rao–Blackwell theorem.

5.5.27. L(x₁, …, xₙ)/L(y₁, …, yₙ) = [Π_{i=1}^n yᵢ/Π_{i=1}^n xᵢ] e^{−(Σxᵢ² − Σyᵢ²)/θ}.
For the ratio to be free of θ, we must have Σxᵢ² − Σyᵢ² = 0. Therefore ΣXᵢ² is minimal sufficient for θ. Moreover, s² is an unbiased estimator for σ².


CHAPTER 6 Interval Estimation

P(χ²_{1−α/2} < (n − 1)s²/σ² < χ²_{α/2}) = 1 − α
implies
P((n − 1)s²/χ²_{α/2} < σ² < (n − 1)s²/χ²_{1−α/2}) = 1 − α.

… Since np̂ > 5 and n(1 − p̂) > 5, the given data can be approximated by a normal distribution.
Here 1 − α = 0.98, α = 0.02, α/2 = 0.01, z_{α/2} = z_{0.01} = 2.325.
Thus the 98% confidence interval is given by
p̂ ± z_{α/2}√(p̂(1 − p̂)/n) = 9/25 ± 2.325√((9/25)(16/25)/50) = (0.202, 0.518).

6.2.7. n = 50, x̄ = 11.4, σ = 4.5, 1 − α = 0.95, α = 0.05, z_{α/2} = 1.96. The 95% confidence interval is
x̄ ± z_{α/2}(σ/√n) = 11.4 ± 1.96(4.5/√50) = (10.153, 12.647).
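The same interval computed with scipy instead of a z-table:

```python
from math import sqrt
from scipy.stats import norm

xbar, sigma, n = 11.4, 4.5, 50
half = norm.ppf(0.975) * sigma / sqrt(n)   # z_{0.025} * sigma / sqrt(n)
ci = (round(xbar - half, 3), round(xbar + half, 3))
print(ci)  # (10.153, 12.647)
```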

6.2.9. n = 400, p̂ = 0.3; np̂ = 120 > 5 and n(1 − p̂) = 280 > 5. The 95% confidence interval is given by
p̂ ± z_{α/2}√(p̂(1 − p̂)/n) = 0.3 ± 1.96√((0.3)(0.7)/400) = (0.255, 0.345).

6.2.11. The proportion of defectives is p̂ = 40/500 = 2/25; 1 − α = 0.90, α/2 = 0.05, z_{α/2} = 1.645. The 90% confidence interval is given by
2/25 ± 1.645√((2/25)(23/25)/500) = (0.06, 0.10).


6.2.13. X ∼ N(μ, 16). For P(x̄ − 2 < μ < x̄ + 2) = 0.95 we need z_{α/2}(σ/√n) = 2, so
n = (z_{α/2}σ/2)² = (1.96(4)/2)² = 15.37 ≈ 16.

6.2.15. n = 425, p̂ = 0.45; np̂ = 425(0.45) > 5 and n(1 − p̂) = 425(0.55) > 5. The 95% confidence interval is given by
p̂ ± z_{α/2}√(p̂(1 − p̂)/n) = 0.45 ± 1.96√((0.45)(0.55)/425) = (0.403, 0.497).
For a 98% confidence interval, 1 − α = 0.98, α = 0.02, z_{α/2} = 2.335:
0.45 ± 2.335√((0.45)(0.55)/425) = (0.394, 0.506).

6.2.19. p̂ = 52/60, 1 − α = 0.95, α/2 = 0.025, z_{α/2} = 1.96. The 95% confidence interval is given by
p̂ ± z_{α/2}√(p̂(1 − p̂)/n) = 52/60 ± 1.96√((52/60)(8/60)/60) = (0.781, 0.953).

6.2.21. σ = 35, E = 15. Since E = z_{α/2}(σ/√n),
n = (z_{α/2}σ/E)² = (1.96(35)/15)² = 20.92 ≈ 21.

6.2.23. x̄ = 12.07, σ = 1.91, n = 35, 1 − α = 0.98, α/2 = 0.01, z_{α/2} = 2.335. The 98% confidence interval for the mean is given by
x̄ ± z_{α/2}(σ/√n) = 12.07 ± 2.335(1.91/√35) = (11.32, 12.82).

    EXERCISES 6.3

6.3.1. (a) When the population standard deviation is not known and the sample size is not large enough, we use the t-distribution.
(b) As the allowed difference decreases, the required sample size n increases, which means we are closing in on the true parameter value of μ.
(c) The data are normally distributed, and the values of x̄ and the sample standard deviation are known.

6.3.3. x̄ = 20, s = 4, 1 − α = 0.95.
(a) x̄ ± t_{α/2,4}(s/√n) = 20 ± t_{0.025,4}(4/√5)
(b) 20 ± t_{0.025,9}(4/√10)
(c) 20 ± t_{0.025,19}(4/√20)

6.3.5. x̄ = −2.22, s = 1.67, n = 26. The 98% confidence interval for μ is

x̄ ± t_{0.01,25}(s/√n) = −2.22 ± 2.485(1.67/√26) = (−3.03, −1.41)

6.3.7. x̄ = 0.905, s = 1.67, 1 − α = 0.98, n = 10. The 98% confidence interval for μ is

0.905 ± t_{0.01,9}(1.67/√10)

    6.3.9. Similar to 6.3.8

6.3.11. x̄ = 410.93, s = 312.87, n = 15


The 95% confidence interval for μ is

x̄ ± t_{0.025,14}(s/√n) = 410.93 ± 2.145(312.87/√15) = (237.65, 584.21)

6.3.13. x̄ = 3.12, s = 1.04, n = 17. The 99% confidence interval for μ is

x̄ ± t_{0.005,16}(s/√n) = 3.12 ± 2.921(1.04/√17) = (2.38, 3.86)

6.3.15. x̄ = 3.85, s² = 4.55, n = 20. The 98% confidence interval for μ is

x̄ ± t_{0.01,19}√(s²/n) = 3.85 ± 2.539√(4.55/20) = (2.64, 5.06)

6.3.17. x̄ = 148.18, s² = 1.91, n = 10. The 95% confidence interval for μ is

x̄ ± t_{0.025,9}√(s²/n) = 148.18 ± 2.262√(1.91/10) = (147.19, 149.17)
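The t-intervals of this section differ from the z-intervals only in the table value used; a sketch with the critical value passed in by hand (t_interval is our name, not from the text):

```python
import math

def t_interval(xbar, s, n, t_crit):
    # CI for the mean with s estimated: xbar +/- t * s / sqrt(n)
    half_width = t_crit * s / math.sqrt(n)
    return (xbar - half_width, xbar + half_width)

# Exercise 6.3.11: xbar = 410.93, s = 312.87, n = 15, t_{0.025,14} = 2.145
lo, hi = t_interval(410.93, 312.87, 15, 2.145)
```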

    EXERCISES 6.4

6.4.1. x̄ = 2.2, s = 1.42, 1 − α = 0.95, α = 0.05, n = 20. The 95% confidence interval for σ² is given by

((n − 1)s²/χ²_{α/2,19}, (n − 1)s²/χ²_{1−α/2,19}) = (19(1.42)²/32.85, 19(1.42)²/8.90655) = (1.1663, 4.3015)

6.4.3. x̄ = 60.908, s² = 12.66, 1 − α = 0.99, α = 0.01, n = 10


The 99% confidence interval for σ² is given by

((n − 1)s²/χ²_{0.005,9}, (n − 1)s²/χ²_{0.995,9}) = (9(12.66)/23.58, 9(12.66)/1.73) = (4.8321, 65.8613)

6.4.5. x̄ = 2.27, s² = 1.02, 1 − α = 0.99, α = 0.01, n = 18. The 99% confidence interval for σ² is given by

((n − 1)s²/χ²_{α/2,17}, (n − 1)s²/χ²_{1−α/2,17}) = (17(1.02)/27.58, 17(1.02)/5.69) = (0.6287, 3.0475)

6.4.9. From Excel or by direct calculation, the sample variance is s² = 148.44, the sample mean is x̄ = 97.24, and n = 25. The 99% confidence interval for the population variance is given by

((n − 1)s²/χ²_{α/2,24}, (n − 1)s²/χ²_{1−α/2,24}) = (24(148.44)/36.41, 24(148.44)/9.886) = (97.8456, 360.3642)

6.4.11. x̄ = 13.95, s² = 495.085, 1 − α = 0.98, α = 0.02, n = 25. The 98% confidence interval for σ² is given by

((n − 1)s²/χ²_{0.01,24}, (n − 1)s²/χ²_{0.99,24}) = (24(495.085)/42.97, 24(495.085)/10.85) = (276.5194, 1095.1189)
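The χ² intervals in 6.4.1–6.4.11 share one formula; a minimal sketch with both table values supplied by hand (var_interval is our name):

```python
def var_interval(s2, n, chi2_upper, chi2_lower):
    # CI for sigma^2: ((n-1)s^2 / chi2_{alpha/2}, (n-1)s^2 / chi2_{1-alpha/2})
    return ((n - 1) * s2 / chi2_upper, (n - 1) * s2 / chi2_lower)

# Exercise 6.4.3: s^2 = 12.66, n = 10, chi2_{0.005,9} = 23.58, chi2_{0.995,9} = 1.73
lo, hi = var_interval(12.66, 10, 23.58, 1.73)
```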

    EXERCISES 6.5

6.5.1. For procedure I, x̄₁ = 98.4, s₁² = 235.6, n₁ = 10. For procedure II, x̄₂ = 95.4, s₂² = 87.15, n₂ = 10. α = 0.02, α/2 = 0.01, z_{α/2} = 2.985. The 98% confidence interval for the difference of means is

x̄₁ − x̄₂ ± z_{α/2}√(s₁²/n₁ + s₂²/n₂) = (98.4 − 95.4) ± 2.985√(235.6/10 + 87.15/10) = (−13.9580, 19.9580)


6.5.3. x̄₁ = 16.0, s₁ = 5.6, n₁ = 42; x̄₂ = 10.6, s₂ = 7.9, n₂ = 45. α = 0.01, α/2 = 0.005, z_{α/2} = 2.575. The 99% confidence interval for the difference of means is

x̄₁ − x̄₂ ± z_{α/2}√(s₁²/n₁ + s₂²/n₂) = (16.0 − 10.6) ± 2.575√((5.6)²/42 + (7.9)²/45) = (1.6388, 9.1612)

6.5.5. x̄₁ = 58,550, s₁ = 4,000, n₁ = 25; x̄₂ = 53,700, s₂ = 3,200, n₂ = 23.

Since σ₁² = σ₂² but unknown, we can use the pooled estimator

S_p² = [(n₁ − 1)s₁² + (n₂ − 1)s₂²]/(n₁ + n₂ − 2) = [24(4,000)² + 22(3,200)²]/46, so S_p = 3639.398

The 90% confidence interval is

x̄₁ − x̄₂ ± t_{α/2,(n₁+n₂−2)} S_p √(1/n₁ + 1/n₂) = (58,550 − 53,700) ± 2.326 · 3639.398 · √(1/25 + 1/23) = (2404, 7296)

6.5.7. x̄₁ = 28.4, s₁ = 4.1, n₁ = 40

x̄₂ = 25.6, s₂ = 4.5, n₂ = 32

(a) The MLE of μ₁ − μ₂ is given by x̄₁ − x̄₂.

(b) The 99% confidence interval for μ₁ − μ₂ is

x̄₁ − x̄₂ ± z_{α/2}√(s₁²/n₁ + s₂²/n₂) = (28.4 − 25.6) ± 2.565√((4.1)²/40 + (4.5)²/32) = (0.1678, 5.4322)
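The large-sample interval for μ₁ − μ₂ used in 6.5.7(b) can be sketched as follows (two_sample_z_interval is our name; 2.565 is the multiplier used in the text):

```python
import math

def two_sample_z_interval(x1, x2, s1, s2, n1, n2, z):
    # CI for mu1 - mu2: (x1 - x2) +/- z * sqrt(s1^2/n1 + s2^2/n2)
    diff = x1 - x2
    half_width = z * math.sqrt(s1**2 / n1 + s2**2 / n2)
    return (diff - half_width, diff + half_width)

lo, hi = two_sample_z_interval(28.4, 25.6, 4.1, 4.5, 40, 32, 2.565)
```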

6.5.9. x̄₁ = 148,822, s₁ = 21,000, n₁ = 100; x̄₂ = 155,908, s₂ = 23,000, n₂ = 150. 1 − α = 0.98, α = 0.02


α/2 = 0.01, z_{α/2} = 2.335. The 98% confidence interval for the difference of means is given by

x̄₁ − x̄₂ ± z_{α/2}√(s₁²/n₁ + s₂²/n₂) = (148,822 − 155,908) ± 2.335√((21,000)²/100 + (23,000)²/150) = (−13,664, −508)

6.5.11. x̄₁ = 35.18, s₁² = 19.76, n₁ = 11

x̄₂ = 38.76, s₂² = 12.69, n₂ = 13. 1 − α = 0.9, α = 0.1.

The 90% confidence interval for σ₁²/σ₂² is given by

((s₁²/s₂²)(1/F_{α/2}(n₁−1, n₂−1)), (s₁²/s₂²)F_{α/2}(n₂−1, n₁−1))

= ((19.76/12.69)(1/2.75), (19.76/12.69)(2.91)) = (0.5662, 4.5313)

where F_{0.05}(10, 12) = 2.75 and F_{0.05}(12, 10) = 2.91.

6.5.13. x̄₁ = 68.91, s₁² = 281.17, n₁ = 12

x̄₂ = 80.66, s₂² = 117.87, n₂ = 12. 1 − α = 0.95, α = 0.05, α/2 = 0.025.

(a) The 95% confidence interval for the difference of means is

x̄₁ − x̄₂ ± z_{α/2}√(s₁²/n₁ + s₂²/n₂) = (68.91 − 80.66) ± 1.96√(281.17/12 + 117.87/12) = (−23.0525, −0.4475)

(b) The 95% confidence interval for σ₁²/σ₂² is given by

((s₁²/s₂²)(1/F_{0.025}(11, 11)), (s₁²/s₂²)F_{0.025}(11, 11)) = ((281.17/117.87)(1/3.48), (281.17/117.87)(3.48)) = (0.6855, 8.3055)
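The F-based interval used in 6.5.11 and 6.5.13(b) can be sketched as follows (var_ratio_interval is our name; the two F table values are passed in by hand):

```python
def var_ratio_interval(s1_sq, s2_sq, f_12, f_21):
    # CI for sigma1^2/sigma2^2, with r = s1^2/s2^2:
    #   (r / F_{alpha/2}(n1-1, n2-1), r * F_{alpha/2}(n2-1, n1-1))
    r = s1_sq / s2_sq
    return (r / f_12, r * f_21)

# Exercise 6.5.13(b): F_{0.025}(11, 11) = 3.48 on both sides
lo, hi = var_ratio_interval(281.17, 117.87, 3.48, 3.48)
```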


Chapter 7 Hypothesis Testing

    EXERCISES 7.1

7.1.1. (a) H₀: μ = μ₀ versus H₁: μ > μ₀

(b) H₀: μ = 1.20 versus H₁: μ > 1.20

7.1.3. H₀: p = 0.5, H₁: p > 0.5, n = 15

(a) α = probability of type I error = P(reject H₀ | H₀ is true)

= P(y ≥ 10 | p = 0.5) = 1 − P(y ≤ 9 | p = 0.5)

= 1 − Σ_{y=0}^{9} C(15, y)(0.5)^y(0.5)^{15−y} = 1 − 0.849 = 0.151

(b) β = P(accept H₀ | H₀ is false) = P(y ≤ 9 | p = 0.7)

= Σ_{y=0}^{9} C(15, y)(0.7)^y(0.3)^{15−y} = 0.278

(c) β = P(y ≤ 9 | p = 0.6) = Σ_{y=0}^{9} C(15, y)(0.6)^y(0.4)^{15−y} = 0.597
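The binomial error probabilities in parts (a)–(c) can be checked directly, using the acceptance region y ≤ 9 (binom_cdf is our helper name):

```python
from math import comb

def binom_cdf(k, n, p):
    # P(Y <= k) for Y ~ Binomial(n, p)
    return sum(comb(n, y) * p**y * (1 - p)**(n - y) for y in range(k + 1))

alpha = 1 - binom_cdf(9, 15, 0.5)   # P(reject | p = 0.5), rejection region y >= 10
beta_07 = binom_cdf(9, 15, 0.7)     # P(accept | p = 0.7)
beta_06 = binom_cdf(9, 15, 0.6)     # P(accept | p = 0.6)
```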


(d) For α = 0.01: we need P(y ≤ k | p = 0.5) ≤ 0.01. From the binomial table, α = 0.01 falls between k = 2 and k = 3. However, for k = 3, α = 0.018, which exceeds 0.01. If we want to limit α to be no more than 0.01, we take k = 2; that is, we reject H₀ if y ≤ 2.

For α = 0.05: we need P(y ≤ k | p = 0.5) ≤ 0.05. From the binomial table, α = 0.05 falls between k = 3 and k = 4. However, for k = 4, α = 0.059, which exceeds 0.05. If we want to limit α to be no more than 0.05, we take k = 3; that is, we reject H₀ if y ≤ 3.

(e) When α = 0.01, from part (d) the rejection region is of the form {y ≤ 2}.

For p = 0.7, β = P(y ≥ 3 | p = 0.7) = 1 − P(y ≤ 2 | p = 0.7)

H₀: μ = 10, H₁: μ > 10

(a) α = probability of type I error = P(reject H₀ | H₀ is true) = P(x̄ > 11.2 | μ = 10)

= P((x̄ − μ)/(σ/√n) > (11.2 − 10)/(4/√25) | μ = 10) = P(z > 1.5) = 0.0668

(b) β = P(accept H₀ | H₀ is false) = P(x̄ ≤ 11.2 | μ = 11)

= P((x̄ − μ)/(σ/√n) ≤ (11.2 − 11)/(4/√25) | μ = 11) = P(z ≤ 0.25) = 0.5987

(c) μ₀ = 10, μₐ = 11, z_α = z_{0.01} = 2.33, z_β = −0.525

n = (z_α + z_β)²σ²/(μₐ − μ₀)² = (2.33 − 0.525)²(4)²/(11 − 10)² = 52.13, rounded up to 53


7.1.9. σ² = 16, H₀: μ = 25, H₁: μ ≠ 25

n = (z_{α/2} + z_β)²σ²/(μₐ − μ₀)² = (1.645 + 1.645)²(16)/(1)² = 173.19, rounded up to 174

    EXERCISES 7.2

7.2.1. H₀: μ = μ₀, H₁: μ = μ₁

L(μ) = (1/(2πσ²))^{n/2} exp(−Σ(xᵢ − μ)²/(2σ²))

L(μ₀) = (1/(2πσ²))^{n/2} exp(−Σ(xᵢ − μ₀)²/(2σ²))

L(μ₁) = (1/(2πσ²))^{n/2} exp(−Σ(xᵢ − μ₁)²/(2σ²))

L(μ₀)/L(μ₁) = exp(−Σ(xᵢ − μ₀)²/(2σ²) + Σ(xᵢ − μ₁)²/(2σ²)) = exp([Σ(xᵢ − μ₁)² − Σ(xᵢ − μ₀)²]/(2σ²))

ln[L(μ₀)/L(μ₁)] = [Σ(xᵢ − μ₁)² − Σ(xᵢ − μ₀)²]/(2σ²)

= [(Σxᵢ² − 2nx̄μ₁ + nμ₁²) − (Σxᵢ² − 2nx̄μ₀ + nμ₀²)]/(2σ²)

= [2nx̄(μ₀ − μ₁) − n(μ₀ − μ₁)(μ₀ + μ₁)]/(2σ²)

= (μ₀ − μ₁)Σxᵢ/σ² − n(μ₀² − μ₁²)/(2σ²)

Therefore, the most powerful test is to reject H₀ if

(μ₀ − μ₁)Σxᵢ/σ² − n(μ₀² − μ₁²)/(2σ²) ≤ ln k

(μ₀ − μ₁)Σxᵢ ≤ σ² ln k + n(μ₀² − μ₁²)/2

Assuming μ₀ < μ₁, the rejection region for μ = μ₁ is given by Σxᵢ ≥ C, where

C = [σ² ln k + n(μ₀² − μ₁²)/2]/(μ₀ − μ₁)


If instead μ₀ > μ₁, the rejection region is given by Σxᵢ ≤ C.

7.2.5. H₀: θ = θ₀, H₁: θ = θ₁, where θ₁ < θ₀

f(y) = (2y/θ²)e^{−y²/θ²}, y > 0

L(θ) = Π_{i=1}^{n} (2yᵢ/θ²)e^{−yᵢ²/θ²} = (2/θ²)ⁿ(Πyᵢ) exp(−Σyᵢ²/θ²)

L(θ₀)/L(θ₁) = (θ₁²/θ₀²)ⁿ exp(Σyᵢ²/θ₁² − Σyᵢ²/θ₀²)

ln[L(θ₀)/L(θ₁)] = 2n ln(θ₁/θ₀) + Σyᵢ²/θ₁² − Σyᵢ²/θ₀² ≤ ln k

Σyᵢ² ≤ C, where C = [ln k − 2n ln(θ₁/θ₀)] θ₀²θ₁²/(θ₀² − θ₁²)

7.2.7. H₀: p = p₀, H₁: p = p₁, where p₁ > p₀

f(x; p) = p^x(1 − p)^{1−x}

L(p) = p^{Σxᵢ}(1 − p)^{n−Σxᵢ}

L(p₀)/L(p₁) = (p₀/p₁)^{Σxᵢ} ((1 − p₀)/(1 − p₁))^{n−Σxᵢ} ≤ k

Taking the natural logarithm, we have

Σxᵢ ln(p₀/p₁) + (n − Σxᵢ) ln((1 − p₀)/(1 − p₁)) ≤ ln k

[ln(p₀/p₁) − ln((1 − p₀)/(1 − p₁))] Σxᵢ + n ln((1 − p₀)/(1 − p₁)) ≤ ln k

Since p₁ > p₀, the bracketed coefficient is negative and the inequality reverses on dividing. To find the rejection region for a fixed value of α, write the region as Σxᵢ ≥ C, where

C = [ln k − n ln((1 − p₀)/(1 − p₁))] / [ln(p₀/p₁) − ln((1 − p₀)/(1 − p₁))]


    EXERCISES 7.3

7.3.1. f(x) = (1/√(2πσ²)) exp(−(x − μ)²/(2σ²))

L(σ²) = (1/(2πσ²))^{n/2} exp(−Σ(xᵢ − μ)²/(2σ²))

Here Θ₀ = {σ₀²} and Θₐ = {σ² > 0 : σ² ≠ σ₀²}. Hence

L(σ₀²) = (1/(2πσ₀²))^{n/2} exp(−Σ(xᵢ − μ)²/(2σ₀²))

Since the only unknown parameter in the parameter space is σ², 0 < σ² < ∞, the maximum of the likelihood function is achieved when σ² equals its maximum likelihood estimator

σ̂² = (1/n)Σ_{i=1}^{n}(xᵢ − x̄)²

λ = L(σ₀²)/L(σ̂²) = (σ̂²/σ₀²)^{n/2} exp(Σ(xᵢ − μ)²/(2σ̂²) − Σ(xᵢ − μ)²/(2σ₀²))

= (Σ(xᵢ − x̄)²/(nσ₀²))^{n/2} exp(nΣ(xᵢ − μ)²/(2Σ(xᵢ − x̄)²) − Σ(xᵢ − μ)²/(2σ₀²))

The likelihood ratio test has the rejection region: reject if λ ≤ k, which is equivalent to

(n/2) ln Σ(xᵢ − x̄)² − (n/2) ln(nσ₀²) + nΣ(xᵢ − μ)²/(2Σ(xᵢ − x̄)²) − Σ(xᵢ − μ)²/(2σ₀²) ≤ ln k

7.3.3. f(x) = (1/√(2πσ²)) exp(−(x − μ)²/(2σ²))

L(σ₁²) = (1/(2πσ₁²))^{n/2} exp(−Σ(xᵢ − μ₁)²/(2σ₁²))

L(σ₂²) = (1/(2πσ₂²))^{n/2} exp(−Σ(yᵢ − μ₂)²/(2σ₂²))

Let λ = L(σ₁²)/L(σ₂²) = (σ₂/σ₁)ⁿ exp(Σ(yᵢ − μ₂)²/(2σ₂²) − Σ(xᵢ − μ₁)²/(2σ₁²))

Thus the likelihood ratio test has the rejection region: reject H₀ if λ ≤ k, i.e.,

n ln(σ₂/σ₁) + Σ(yᵢ − μ₂)²/(2σ₂²) − Σ(xᵢ − μ₁)²/(2σ₁²) ≤ ln k

Σ(yᵢ − μ₂)²/σ₂² − Σ(xᵢ − μ₁)²/σ₁² ≤ 2 ln k − 2n ln(σ₂/σ₁)


With σ̂₁² = (1/n)Σ_{i=1}^{n}(xᵢ − x̄)² and σ̂₂² = (1/n)Σ_{i=1}^{n}(yᵢ − ȳ)², this becomes

nΣ(yᵢ − μ₂)²/Σ(yᵢ − ȳ)² − nΣ(xᵢ − μ₁)²/Σ(xᵢ − x̄)² ≤ 2 ln k − 2n ln(σ̂₂/σ̂₁)

The rejection region is

nΣ(yᵢ − μ₂)²/Σ(yᵢ − ȳ)² − nΣ(xᵢ − μ₁)²/Σ(xᵢ − x̄)² ≤ C

7.3.5. f(x) = (1/θ)e^{−x/θ} for x > 0

L(θ) = (1/θⁿ) exp(−Σxᵢ/θ)

L(θ₀) = (1/θ₀ⁿ) exp(−Σxᵢ/θ₀)

L(θ₁) = (1/θ₁ⁿ) exp(−Σxᵢ/θ₁)

L(θ₀)/L(θ₁) = (θ₁ⁿ/θ₀ⁿ) exp(Σxᵢ/θ₁ − Σxᵢ/θ₀) = (θ₁/θ₀)ⁿ exp(Σxᵢ/θ₁ − Σxᵢ/θ₀)

We reject the null hypothesis if

n ln(θ₁/θ₀) + (1/θ₁ − 1/θ₀)Σxᵢ ≤ ln k

(1/θ₁ − 1/θ₀)Σxᵢ ≤ ln k − n ln(θ₁/θ₀)

so the rejection region has the form Σxᵢ ≥ m (when θ₁ > θ₀) or Σxᵢ ≤ m (when θ₁ < θ₀), where

m = [ln k − n ln(θ₁/θ₀)] θ₀θ₁/(θ₀ − θ₁)

    EXERCISES 7.4

7.4.1. n = 50, α = 0.02, x̄ = 62, s = 8. H₀: μ ≥ 64, H₁: μ < 64

z = (x̄ − μ₀)/(s/√n) = (62 − 64)/(8/√50) = −1.769, and p-value = P(z < −1.769) = 0.0384


(c) The smallest α at which we would reject is 0.0384. Since the p-value 0.0384 > α = 0.02, we fail to reject the null hypothesis.

7.4.3. H₀: μ = 0.45, H₁: μ ≠ 0.45, with observed z = 0.85

(a) p-value = P(z > 0.85) = 0.1977

(b) Here z = 0.85 and z_{α/2} = z_{0.005} = 2.58. The rejection region is {z < −z_{0.005} or z > z_{0.005}}, i.e., {z < −2.58 or z > 2.58}. p-value = P(z < −0.85) + P(z > 0.85) = 0.3954

(c) Assumptions: even though the population standard deviation is unknown, because of the large sample size the normal distribution can be assumed.

7.4.5. x̄ = $58,800, n = 15, s = $8,300

(a) P(reject H₀ | H₀ is true) = 0. Since the probability of rejecting the null hypothesis equals zero, the null hypothesis is accepted.

(b) α = 0.01, H₀: μ = 55,648, H₁: μ > 55,648

t = (x̄ − μ₀)/(s/√n) = (58,800 − 55,648)/(8,300/√15) = 3152/2143.05 = 1.47

The rejection region is {t > t_{0.01,14}}, where t_{0.01,14} = 2.624. Since t = 1.47 is less than 2.624, we fail to reject the null hypothesis.

7.4.7. H₀: p = 0.3, H₁: p > 0.3

p̂ = 550/1500 = 0.366

z = (p̂ − p₀)/√(p₀q₀/n) = (0.366 − 0.3)/√((0.3)(0.7)/1500) = 0.066/0.0118 = 5.593

The rejection region is {z > z_{0.01}}, where z_{0.01} = 2.33


i.e., {z > 2.33}. Since 5.593 > 2.33, we reject H₀: yes, customers have a preference for the ivory color.
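The one-proportion z statistic of 7.4.7 in code (prop_z is our name; keeping full precision for p̂ gives 5.634 rather than the rounded 5.593 in the text):

```python
import math

def prop_z(phat, p0, n):
    # z = (phat - p0) / sqrt(p0 * (1 - p0) / n)
    return (phat - p0) / math.sqrt(p0 * (1 - p0) / n)

z = prop_z(550 / 1500, 0.3, 1500)  # exercise 7.4.7
```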

7.4.9. (a) x̄ = 42.9, s = 6.3674, α = 0.1, n = 10

H₀: μ = 44, H₁: μ ≠ 44. The data are normally distributed.

z = (x̄ − μ₀)/(s/√n) = (42.9 − 44)/(6.3674/√10) = −1.1/2.013 = −0.5464

The rejection region for z is {|z| > z_{0.05}}, where z_{0.05} = 1.645. Since |−0.5464| < 1.645, we fail to reject the null hypothesis.

(b) The 90% confidence interval for μ is

(x̄ − z_{α/2}(s/√n), x̄ + z_{α/2}(s/√n)) = (42.9 − 1.645(2.013), 42.9 + 1.645(2.013)) = (39.588, 46.212)

(c) From (a), we can see that it is reasonable to take μ = 44. The argument is supported by the confidence interval in (b), which contains 44.

7.4.11. x̄ = 13.7, s = 1.655, n = 20. H₀: μ = 14.6, H₁: μ < 14.6

z = (13.7 − 14.6)/(1.655/√20) = −2.43

The rejection region is {z < −z_{0.01}}, where z_{0.01} = 2.33. Since −2.43 < −2.33, we reject the null hypothesis. Thus there is statistical evidence to support this claim.

7.4.13. x̄ = 32,277, s = 1,200, n = 100, α = 0.05. H₀: μ = 30,692, H₁: μ > 30,692

z = (32,277 − 30,692)/(1,200/√100) = 13.21

The rejection region is {z > z_{0.05}}, where z_{0.05} = 1.645. Since 13.21 > 1.645, we reject the null hypothesis; that is, the expenditure per consumer increased from 1994 to 1995.

7.4.15. H₀: μ = 1.129, H₁: μ ≠ 1.129

t = (1.24 − 1.129)/(0.01/√24) = 0.111/0.00204 = 54.41

t_{0.005,23} = 2.807. Since t = 54.41 > 2.807, we reject the null hypothesis; that is, the price of gas has changed recently.


    EXERCISES 7.5

7.5.1. H₀: μ₁ − μ₂ = 0, H₁: μ₁ − μ₂ ≠ 0

z = [(ȳ₁ − ȳ₂) − (μ₁ − μ₂)]/√(s₁²/n₁ + s₂²/n₂) = (74 − 71)/√(81/50 + 100/50) = 3/1.9026 = 1.5767

The rejection region for z is {|z| > z_{0.025}}, where z_{0.025} = 1.96. Since 1.5767 < 1.96, we fail to reject the null hypothesis. To see a significant difference, we would need an α = 0.0582 level of significance.

7.5.3. x̄₁ = 58,550, x̄₂ = 53,700, s₁ = 4,000, s₂ = 3,200, n₁ = 25, n₂ = 23

H₀: μ₁ − μ₂ = 0, H₁: μ₁ − μ₂ > 0

S_p² = [(n₁ − 1)s₁² + (n₂ − 1)s₂²]/(n₁ + n₂ − 2) = [24(4000)² + 22(3200)²]/(25 + 23 − 2) = 13,245,217

S_p = 3639.3979

t = (x̄₁ − x̄₂ − 0)/(S_p√(1/n₁ + 1/n₂)) = (58,550 − 53,700)/(3639.4√(1/25 + 1/23)) = 4.61288

The rejection region is {t > t_{0.05,46}}, i.e., {t > 1.679}. Since 4.61288 > 1.679, we reject the null hypothesis. This implies that there is significant evidence that the males' salary is higher than that of the females.
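The pooled two-sample t statistic of 7.5.3 (the same form is used in 7.5.7(a)) in code (pooled_t is our name):

```python
import math

def pooled_t(x1, x2, s1, s2, n1, n2):
    # Pooled variance, then t = (x1 - x2) / (sp * sqrt(1/n1 + 1/n2))
    sp_sq = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    return (x1 - x2) / (math.sqrt(sp_sq) * math.sqrt(1 / n1 + 1 / n2))

t = pooled_t(58550, 53700, 4000, 3200, 25, 23)  # exercise 7.5.3
```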

7.5.5. x̄₁ = 105.9, x̄₂ = 100.5, s₁² = 0.21, s₂² = 0.19, n₁ = 80, n₂ = 100

H₀: μ₁ − μ₂ = 0, H₁: μ₁ − μ₂ ≠ 0

Use t = (x̄₁ − x̄₂ − 0)/√(s₁²/n₁ + s₂²/n₂) and a two-sided t-test.

7.5.7. x̄₁ = 7.65, x̄₂ = 9.75, s₁ = 0.9312, s₂ = 0.852, n₁ = 10, n₂ = 10

(a) H₀: μ₁ − μ₂ = 0, H₁: μ₁ − μ₂ > 0

S_p² = [(n₁ − 1)s₁² + (n₂ − 1)s₂²]/(n₁ + n₂ − 2) = [9(0.9312)² + 9(0.852)²]/18

S_p = 0.892479

t = (x̄₁ − x̄₂ − 0)/(S_p√(1/n₁ + 1/n₂)) = (7.65 − 9.75)/(0.89√(1/10 + 1/10)) = −5.276

The rejection region is {t > t_{0.05,18}}, i.e., {t > 1.734}. Since −5.276 < 1.734, we fail to reject the null hypothesis.

(b) H₀: σ₁² = σ₂², H₁: σ₁² ≠ σ₂²

Test statistic F = s₁²/s₂² = (0.9312)²/(0.852)² = 1.1945

From the F-table, F_{0.025}(9, 9) = 4.03


and F_{0.975}(9, 9) = 1/4.03 = 0.248. The rejection region is F > 4.03 or F < 0.248.

Since the observed value of the statistic satisfies 0.248 < 1.1945 < 4.03, we fail to reject the null hypothesis.

(c) H₀: μ_d = 0, H₁: μ_d > 0

d̄ = 2.1, S_d = 1.1670

t = (d̄ − d₀)/(S_d/√n) = 2.1/(1.1670/√10) = 5.69

From the t-table, t_{0.05,9} = 1.833. Since t > 1.833, we reject the null hypothesis; this implies that the downstream mean is less than the upstream mean.

7.5.9. (a) x̄₁ = 2.04, x̄₂ = 3.55, s₁ = 0.551, s₂ = 0.6958, n₁ = 14, n₂ = 14

H₀: μ₁ − μ₂ = 0, H₁: μ₁ − μ₂ < 0. Since the observed t > −1.77, we fail to reject the null hypothesis.

(b) Test statistic F = s₁²/s₂² = 0.6281

From the F-table, F_{0.025}(13, 13) = 3.11 and F_{0.975}(13, 13) = 1/3.11 = 0.321. The rejection region is F > 3.11 or F < 0.321. Since 0.321 < 0.6281 < 3.11, we fail to reject the null hypothesis.


Rejection region is F > 2.585.

EXERCISES 7.6

7.6.1. The rejection region is χ² > χ²_{0.05,4} = 9.48. Using the contingency table, χ² = 43.862, which falls in the rejection region at α = 0.05, so we reject the null hypothesis. That is, collective bargaining is dependent on employee classification.

7.6.3. O₁ = 12, O₂ = 14, O₃ = 78, O₄ = 40, O₅ = 6

We now compute the pᵢ (i = 1, 2, 3, 4, 5) using the continuity correction:

p₁ = P(x ≤ 5.5), p₂ = P(z ≤ (6.5 − μ)/σ), p₃ = P(z ≤ (7.5 − μ)/σ), p₄ = P(z ≤ (8.5 − μ)/σ), p₅ = P(z ≤ (9.5 − μ)/σ)

Taking the above probabilities, find the Eᵢ and follow 6.6.5.

7.6.5. E₁ = 950(0.35), E₂ = 950(0.15), E₃ = 950(0.20), E₄ = 950(0.30); O₁ = 950(0.45), O₂ = 950(0.25), O₃ = 950(0.02), O₄ = 950(0.28)

χ² = Σ(Oᵢ − Eᵢ)²/Eᵢ = 834.7183. From the chi-square table, χ²_{0.05,3} = 7.81. Since χ² = 834.71 > 7.81, we reject the null hypothesis: at least one of the probabilities differs from its hypothesized value.
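The goodness-of-fit statistic in 7.6.5 is Σ(Oᵢ − Eᵢ)²/Eᵢ; a sketch using the printed proportions (chi_square_stat is our name; with the counts as extracted here, our total differs from the manual's 834.72, so take the block only as an illustration of the mechanics, and note that the conclusion, rejection at χ²_{0.05,3} = 7.81, is the same):

```python
def chi_square_stat(observed, expected):
    # Pearson statistic: sum of (O - E)^2 / E over the categories
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

n = 950
expected = [n * p for p in (0.35, 0.15, 0.20, 0.30)]
observed = [n * p for p in (0.45, 0.25, 0.02, 0.28)]
chi2 = chi_square_stat(observed, expected)
```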


Chapter 8 Linear Regression Models

    EXERCISES 8.2

8.2.1. (a) Proof: see Example 8.2.1.

(b) SSE/σ² follows a central chi-square distribution with n − 2 degrees of freedom, so E(SSE/σ²) = n − 2, and σ² is a constant. Therefore E(SSE) = (n − 2)σ².

8.2.3. (a) The least-squares regression line is ŷ = 84.1674 + 5.0384x

(b) Scatter plot of y versus x with the fitted regression line (figure omitted).

8.2.5. (a) Check the proof in the derivation of β̂₀ and β̂₁.

(b) We know the line of best fit is ŷ = β̂₀ + β̂₁x. Plugging in the point (x̄, ȳ) gives β̂₀ = ȳ − β̂₁x̄, which completes the proof.

8.2.7. y = β₁x + ε

SSE = Σ_{i=1}^{n} eᵢ² = Σ_{i=1}^{n} [yᵢ − β̂₁xᵢ]²


∂(SSE)/∂β̂₁ = −2Σ_{i=1}^{n} [yᵢ − β̂₁xᵢ]xᵢ = −2Σ_{i=1}^{n} [xᵢyᵢ − β̂₁xᵢ²] = 0

β̂₁ = Σ_{i=1}^{n} xᵢyᵢ / Σ_{i=1}^{n} xᵢ²

ŷ = 40.175 + 0.9984x

8.2.9. The least-squares regression line is ŷ = 0.62875 + 0.83994x

Scatter plot of y versus x with the fitted regression line (figure omitted).
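The no-intercept estimator β̂₁ = Σxᵢyᵢ/Σxᵢ² derived in 8.2.7 can be sketched as follows (slope_through_origin is our name; the data points are made up for illustration):

```python
def slope_through_origin(xs, ys):
    # Least squares through the origin: beta1_hat = sum(x*y) / sum(x^2)
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# On data generated exactly from y = 2x, the estimator recovers the slope.
b1 = slope_through_origin([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
```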

8.2.11. The least-squares regression line is ŷ = 2.2752 + 0.00578x

Scatter plot of y versus x with the fitted regression line (figure omitted).


    EXERCISES 8.3

8.3.1. (a) The least-squares regression line is ŷ = 57.2383 − 0.4367x

(b) Scatter plot of y versus x with the fitted regression line (figure omitted).

(c) The 95% confidence interval for β₀ is (40.5929, 73.8837).

The 95% confidence interval for β₁ is (−0.6806, −0.1928).

8.3.3. β̂₁ and ȳ are both normally distributed. To show that these two are independent, we need only show that their covariance is 0 (a property of the normal distribution).

Cov(β̂₁, ȳ) = E(β̂₁ȳ) − E(β̂₁)E(ȳ) = E[(S_xy/S_xx)ȳ] − β₁ E[(1/n)Σ_{i=1}^{n} yᵢ]

= E[ (Σ(yᵢ − ȳ)(xᵢ − x̄)/Σ(xᵢ − x̄)²) ȳ ] − β₁ · (1/n)Σ_{i=1}^{n}(β₀ + β₁xᵢ)

= β₁(β₀ + β₁x̄) − β₁β₀ − β₁²x̄ = 0