
IV Multiple Comparisons

A. Contrast Among Population Means (ψi)

1. A contrast among population means is a

difference among the means with appropriate

algebraic sign.

Pairwise contrast: ψi = μj − μj′

Nonpairwise contrast: ψi = μj − (μj′ + μj″)/2

2. Contrasts are defined by a set of underlying coefficients (cj) with the following characteristics:

The sum of the coefficients must equal zero,

Σ_{j=1}^{p} cj = c1 + c2 + ⋯ + cp = 0,

with cj ≠ 0 for some j.

For convenience (to put all contrasts on the same measurement scale), the coefficients are chosen so that

Σ_{j=1}^{p} |cj| = 2.

3. Pairwise contrast for means 1 and 2, where c1 = 1 and c2 = −1:

ψ1 = (1)μ1 + (−1)μ2

4. Nonpairwise contrast for means 1, 2, and 3, where c1 = 1, c2 = −1/2, and c3 = −1/2:

ψ2 = (1)μ1 + (−1/2)μ2 + (−1/2)μ3 = μ1 − (μ2 + μ3)/2

5. Pairwise contrast: all of the coefficients except

two are equal to 0.

6. Nonpairwise contrast: at least three coefficients

are not equal to 0.

7. A contrast among sample means, denoted by ψ̂i, is a difference among the sample means with appropriate algebraic sign.

Pairwise contrast: ψ̂1 = (1)X̄.1 + (−1)X̄.2

Nonpairwise contrast: ψ̂2 = (1)X̄.1 + (−1/2)X̄.2 + (−1/2)X̄.3
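The coefficient rules and the two kinds of contrasts can be checked numerically. The sketch below is not part of the slides; it assumes Python, uses the weight-loss sample means (8.00, 9.00, 12.00) that appear in the later examples, and the helper names is_valid_contrast and contrast are illustrative.

```python
# Illustrative sketch (not from the slides): validate contrast coefficients and
# compute pairwise and nonpairwise contrasts among sample means.

def is_valid_contrast(c, tol=1e-9):
    """Coefficients must sum to zero, include at least one nonzero value,
    and (by the convention above) have absolute values that sum to 2."""
    return (abs(sum(c)) < tol
            and any(abs(cj) > tol for cj in c)
            and abs(sum(abs(cj) for cj in c) - 2) < tol)

def contrast(c, means):
    """psi_hat = c1*Xbar.1 + c2*Xbar.2 + ... + cp*Xbar.p"""
    return sum(cj * xj for cj, xj in zip(c, means))

means = [8.00, 9.00, 12.00]        # weight-loss sample means
pairwise = [1, -1, 0]              # psi_hat_1 = Xbar.1 - Xbar.2
nonpairwise = [1, -1/2, -1/2]      # psi_hat_2 = Xbar.1 - (Xbar.2 + Xbar.3)/2

print(is_valid_contrast(pairwise), contrast(pairwise, means))        # True -1.0
print(is_valid_contrast(nonpairwise), contrast(nonpairwise, means))  # True -2.5
```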


V Fisher-Hayter Multiple Comparison Test

A. Characteristics of the Test

1. The test uses a two-step procedure. The first step consists of testing the omnibus null hypothesis using an F statistic.

2. If the omnibus test is significant, the

Fisher-Hayter statistic is used to test all

pairwise contrasts among the p means.


B. Fisher-Hayter Test Statistic

q_FH = (X̄.j − X̄.j′) / √[(MSWG/2)(1/nj + 1/nj′)]

where X̄.j and X̄.j′ are means of random samples from normal populations, MSWG is the denominator of the F statistic from an ANOVA, and nj and nj′ are the sizes of the samples used to compute the sample means.


1. Reject H0: μj = μj′ if the |q_FH| statistic exceeds or equals the critical value, q_{α; p−1, ν}, from the Studentized range table (Appendix Table D.10).

C. Computational Example Using the Weight-Loss Data

1. Step 1. Test the omnibus null hypothesis:

F = MSBG/MSWG = 43.334/5.037 = 8.60*

2. Step 2. Because F is significant, test all pairwise contrasts using q_FH.

*p < .02


q_FH = (X̄.j − X̄.j′) / √[(MSWG/2)(1/nj + 1/nj′)]

ψ̂1 = X̄.1 − X̄.2:  q_FH = (8.00 − 9.00) / √[(5.037/2)(1/10 + 1/10)] = −1.41

ψ̂2 = X̄.1 − X̄.3:  q_FH = (8.00 − 12.00) / √[(5.037/2)(1/10 + 1/10)] = −5.64*

ψ̂3 = X̄.2 − X̄.3:  q_FH = (9.00 − 12.00) / √[(5.037/2)(1/10 + 1/10)] = −4.23*

Critical value: q_{.05; 3−1, 27} ≅ 2.90
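The three q_FH statistics above can be reproduced compactly. This sketch is not part of the slides; it assumes Python with SciPy 1.7 or later, whose studentized_range distribution supplies the critical value that Appendix Table D.10 gives as roughly 2.90.

```python
# Illustrative sketch: Fisher-Hayter statistics for all pairwise contrasts.
from itertools import combinations
from math import sqrt
from scipy.stats import studentized_range

means = {1: 8.00, 2: 9.00, 3: 12.00}     # weight-loss sample means
n = {1: 10, 2: 10, 3: 10}                # sample sizes
MSWG, p, nu, alpha = 5.037, 3, 27, .05

q_crit = studentized_range.ppf(1 - alpha, p - 1, nu)   # about 2.90

for j, jp in combinations(sorted(means), 2):
    q_fh = (means[j] - means[jp]) / sqrt((MSWG / 2) * (1 / n[j] + 1 / n[jp]))
    print(f"Xbar.{j} - Xbar.{jp}: q_FH = {q_fh:.2f}, "
          f"reject H0 = {abs(q_fh) >= q_crit}")
# Output: -1.41 (do not reject), -5.64 (reject), -4.23 (reject)
```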


D. Assumptions of the Fisher-Hayter Test

1. Random sampling or random assignment

of participants to the treatment levels

2. The j = 1, . . . , p populations are normally

distributed.

3. The variances of the j = 1, . . . , p populations are

equal.


VI Scheffé Multiple Comparison Test and Confidence Interval

A. Characteristics of the Test

1. The test does not require a significant

omnibus test.

2. Can test both pairwise and nonpairwise contrasts

and construct confidence intervals.

3. The test is less powerful than the Fisher-Hayter

test for pairwise contrasts.


B. Scheffé Test Statistic

F_S = (c1X̄.1 + c2X̄.2 + ⋯ + cpX̄.p)² / [MSWG(c1²/n1 + c2²/n2 + ⋯ + cp²/np)]

where c1, c2, . . . , cp are coefficients that define a contrast, X̄.1, . . . , X̄.p are sample means, MSWG is the denominator of the ANOVA F statistic, and n1, n2, . . . , np are the sizes of the samples used to compute the sample means.


1. Reject a null hypothesis for a contrast if the F_S statistic exceeds or equals the critical value, (p − 1)F_{α; ν1, ν2}, where F_{α; ν1, ν2} is obtained from the F table (Appendix Table D.5).

C. Computational Example Using the Weight-Loss Data

1. Critical value is (3 − 1)F_{.05; 2, 27} = (2)(3.35) = 6.70.
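The Scheffé critical value can also be obtained without the table. A minimal sketch, assuming Python with SciPy (not part of the slides):

```python
# Illustrative sketch: Scheffé critical value (p - 1) * F(alpha; nu1, nu2).
from scipy import stats

p, nu1, nu2, alpha = 3, 2, 27, .05
F_crit = stats.f.ppf(1 - alpha, nu1, nu2)   # about 3.35 (Appendix Table D.5)
print(round((p - 1) * F_crit, 2))           # about 6.70
```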


F_S = (c1X̄.1 + c2X̄.2 + ⋯ + cpX̄.p)² / [MSWG(c1²/n1 + c2²/n2 + ⋯ + cp²/np)]

ψ̂1 = X̄.1 − X̄.3:
F_S = [(1)8.00 + (0)9.00 + (−1)12.00]² / {5.037[(1)²/10 + (0)²/10 + (−1)²/10]} = 15.88*

ψ̂2 = X̄.2 − X̄.3:
F_S = [(0)8.00 + (1)9.00 + (−1)12.00]² / {5.037[(0)²/10 + (1)²/10 + (−1)²/10]} = 8.93*


ψ̂3 = (X̄.1 + X̄.2)/2 − X̄.3:
F_S = [(1/2)8.00 + (1/2)9.00 + (−1)12.00]² / {5.037[(1/2)²/10 + (1/2)²/10 + (−1)²/10]} = 16.21*
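The three F_S values above can be reproduced with a small helper. This sketch is not from the slides; the function name F_S is illustrative, and the inputs are the weight-loss means, MSWG, and n = 10 per group quoted earlier.

```python
# Illustrative sketch: Scheffé F_S statistic for each contrast tested above.
means = [8.00, 9.00, 12.00]     # weight-loss sample means
n = [10, 10, 10]                # sample sizes
MSWG = 5.037
scheffe_crit = 6.70             # (3 - 1) * F(.05; 2, 27) from the previous slide

def F_S(c):
    """F_S = (sum cj * Xbar.j)^2 / (MSWG * sum cj^2 / nj)."""
    psi_hat = sum(cj * xj for cj, xj in zip(c, means))
    return psi_hat ** 2 / (MSWG * sum(cj ** 2 / nj for cj, nj in zip(c, n)))

for label, c in [("Xbar.1 - Xbar.3", [1, 0, -1]),
                 ("Xbar.2 - Xbar.3", [0, 1, -1]),
                 ("(Xbar.1 + Xbar.2)/2 - Xbar.3", [1/2, 1/2, -1])]:
    fs = F_S(c)
    print(f"{label}: F_S = {fs:.2f}, reject H0 = {fs >= scheffe_crit}")
# Output: 15.88, 8.93, 16.21 -- all significant at alpha = .05
```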

D. Two-Sided Confidence Interval

ψ̂i − √[(p − 1)F_{α; ν1, ν2}] √[MSWG Σ_{j=1}^{p} cj²/nj] < ψi < ψ̂i + √[(p − 1)F_{α; ν1, ν2}] √[MSWG Σ_{j=1}^{p} cj²/nj]


1. Computational example for ψ = (1/2)μ1 + (1/2)μ2 − μ3:

[(1/2)8.0 + (1/2)9.0 + (−1)12.0] − √[(2)(3.35)] √{(5.037)[(1/2)²/10 + (1/2)²/10 + (−1)²/10]}

< ψ <

[(1/2)8.0 + (1/2)9.0 + (−1)12.0] + √[(2)(3.35)] √{(5.037)[(1/2)²/10 + (1/2)²/10 + (−1)²/10]}

−5.45 < ψ < −1.55

[Figure: number line from −6 to 0 marking the confidence limits L1 = −5.45 and L2 = −1.55 for the contrast (1/2)μ1 + (1/2)μ2 − μ3]
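For completeness, here is a minimal sketch of the two-sided interval computation for this contrast, assuming Python (not part of the slides); it simply plugs the quantities above into the formula in D.

```python
# Illustrative sketch: two-sided Scheffé confidence interval for a contrast.
from math import sqrt

means = [8.0, 9.0, 12.0]        # weight-loss sample means
n = [10, 10, 10]                # sample sizes
c = [1/2, 1/2, -1]              # contrast: (1/2)mu1 + (1/2)mu2 - mu3
MSWG = 5.037
scheffe_crit = (3 - 1) * 3.35   # (p - 1) * F(.05; 2, 27)

psi_hat = sum(cj * xj for cj, xj in zip(c, means))          # -3.5
half_width = sqrt(scheffe_crit) * sqrt(MSWG * sum(cj ** 2 / nj
                                                  for cj, nj in zip(c, n)))
print(f"{psi_hat - half_width:.2f} < psi < {psi_hat + half_width:.2f}")
```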


E. Assumptions of the Scheffé Test and Confidence Interval

1. Random sampling or random assignment

of participants to the treatment levels

2. The j = 1, . . . , p populations are normally

distributed.

3. The variances of the j = 1, . . . , p populations are

equal.


F. Comparison of Fisher-Hayter and Scheffé Tests

1. The Fisher-Hayter test controls the Type I error

at α for the collection of all pairwise contrasts.

2. The Scheffé test controls the Type I error at α for

the collection of all pairwise and nonpairwise

contrasts.

3. The Scheffé statistic can be used to construct

confidence intervals.


VII Practical Significance

A. Omega Squared

1. Omega squared estimates the proportion of the population variance in the dependent variable that is accounted for by the p treatment levels.

2. Computational formula (ω²)

ω² = (p − 1)(F − 1) / [(p − 1)(F − 1) + np]


3. Cohen’s guidelines for interpreting omega squared

ω2 =.010 is a small association

ω2 =.059 is a medium association

ω2 =.138 is a large association

4. Computational example for the weight-loss data

ω² = (p − 1)(F − 1) / [(p − 1)(F − 1) + np] = (3 − 1)(8.60 − 1) / [(3 − 1)(8.60 − 1) + (10)(3)] = .34
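A one-line check of this computation, assuming Python (the helper name omega_squared is illustrative, not from the slides):

```python
# Illustrative sketch: omega squared from F, the number of treatment levels p,
# and the number of observations per level n.
def omega_squared(F, p, n):
    return (p - 1) * (F - 1) / ((p - 1) * (F - 1) + n * p)

print(round(omega_squared(F=8.60, p=3, n=10), 2))   # 0.34 -- a large association
```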


B. Hedges’s g Statistic

1. g is used to assess the effect size of contrasts

g = (X̄.j − X̄.j′) / σPooled,   where σPooled = √MSWG

2. Computational example for the weight-loss data

σPooled = √5.037 = 2.244


ψ̂1 = X̄.1 − X̄.2:  g = |8 − 9| / 2.244 = 0.45

ψ̂2 = X̄.1 − X̄.3:  g = |8 − 12| / 2.244 = 1.78

ψ̂3 = X̄.2 − X̄.3:  g = |9 − 12| / 2.244 = 1.34

3. Guidelines for interpreting Hedges's g statistic

g = .2 is a small effect

g = .5 is a medium effect

g = .8 is a large effect
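A minimal sketch of the g computations above, assuming Python (not part of the slides):

```python
# Illustrative sketch: Hedges's g for the three pairwise contrasts,
# with sigma_pooled = sqrt(MSWG).
from math import sqrt

means = {1: 8.0, 2: 9.0, 3: 12.0}   # weight-loss sample means
sigma_pooled = sqrt(5.037)          # about 2.244

for j, jp in [(1, 2), (1, 3), (2, 3)]:
    g = abs(means[j] - means[jp]) / sigma_pooled
    print(f"Xbar.{j} - Xbar.{jp}: g = {g:.2f}")
# Output: 0.45, 1.78, 1.34 (compare with the guidelines above)
```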