Optical illusion? Correlation (r, R, or ρ)


Optical illusion?

Correlation (r, R, or ρ)
-- One-number summary of the strength of a relationship
-- How to recognize
-- How to compute

Regressions
-- Any model has predicted values and residuals.
   (Do we always want a model with small residuals?)
-- Regression lines
--- how to use
--- how to compute
-- The “regression effect” (Why did Galton call these things “regressions”?)
-- Pitfalls: Outliers
-- Pitfalls: Extrapolation
-- Conditions for a good regression

Which looks like a stronger relationship?

[Two scatterplots of Y vs. X, drawn with different axis scales]

Mortality vs. Education

[Scatterplot: Mortality vs. Education, shown on two consecutive slides]

Optical Illusion?

Kinds of Association…

Positive vs. Negative

Strong vs. Weak

Linear vs. Non-linear

CORRELATION

CORRELATION (or, the CORRELATION COEFFICIENT) measures the strength of a linear relationship.

If the relationship is non-linear, it measures the strength of the linear part of the relationship. But then it doesn’t tell the whole story.

Correlation can be positive or negative.

[A sequence of example scatterplots with their correlations: .97, .71, –.97, –.71, .97, .97, .24, .90, .50, and 0]

Computing correlation…

1. Replace each variable with its standardized version:

x_i' = (x_i − x̄) / s_x
y_i' = (y_i − ȳ) / s_y

2. Take an “average” of ( x_i' times y_i' ):

r = Σ x_i' y_i' / n

(use n − 1 if you used n − 1 to standardize)

Computing correlation

r = Σ x_i' y_i' / n

-- r, or R, or Greek ρ (rho)
-- n − 1 or n?
-- the numerator is the sum of all the products
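To make the recipe concrete, here is a minimal Python sketch (the function name and style are mine, not from the slides) that standardizes both variables and then averages the products:

import math

def correlation(xs, ys):
    """Correlation by the slide's recipe: standardize, then average the products."""
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    # Sample SDs divide by n - 1, ...
    s_x = math.sqrt(sum((x - x_bar) ** 2 for x in xs) / (n - 1))
    s_y = math.sqrt(sum((y - y_bar) ** 2 for y in ys) / (n - 1))
    # ... so the "average" of the products also divides by n - 1.
    return sum(((x - x_bar) / s_x) * ((y - y_bar) / s_y)
               for x, y in zip(xs, ys)) / (n - 1)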

Good things about correlation

It’s symmetric (the correlation of x and y is the same as the correlation of y and x).

It doesn’t depend on scale or units: adding a constant to either variable, or multiplying it by a positive constant, doesn’t change r. Of course not; r depends only on the standardized versions.

r is always in the range from −1 to +1.
-- +1 means perfect positive correlation; dots on a line
-- −1 means perfect negative correlation; dots on a line
-- 0 means no relationship, OR no linear relationship
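A quick check of the symmetry and scale claims, using the correlation sketch above (the numbers are made up for illustration):

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]

print(correlation(xs, ys))                        # some value r
print(correlation(ys, xs))                        # symmetric: same r
print(correlation([3 * x + 7 for x in xs], ys))   # rescaled x: same r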

Bad things about correlation

Sensitive to outliers

Misses non-linear relationships

Doesn’t imply causality

Made-up Examples

[Four made-up scatterplots: STATE AVE SCORE vs. PERCENT TAKING SAT; IQ vs. SHOE SIZE; JUDGE’S IMPRESSION vs. BAKING TEMP; LIFE EXPECTANCY vs. GDP PER CAPITA]

Observed Values, Predictions, and Residuals

[Scatterplot builds: response variable vs. explanatory variable]

Observed value

Predicted value

Residual = observed – predicted
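In code the definitions are one-liners; a hypothetical sketch (names mine), given a fitted intercept b0 and slope b1:

def predicted(x, b0, b1):
    """Predicted value on the fitted line: y-hat = b0 + b1 * x."""
    return b0 + b1 * x

def residual(x, y_observed, b0, b1):
    """Residual = observed - predicted."""
    return y_observed - predicted(x, b0, b1)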

Linear models and non-linear models

Model A: y = a + bx + error
Model B: y = a·x^(1/2) + error

Model B has smaller errors. Is it a better model?

aa opas asl poasie ;aaslkf 4-9043578

y = 453209)_(*_n &*^(*LKH l;j;)(*&)(*& + error

This model has even smaller errors. In fact, zero errors.

Tradeoff: Small errors vs. complexity.

(We’ll only consider linear models.)

JPM (vertical axis) vs. DJI (horizontal axis) daily changes

[Scatterplot: JPM daily change vs. DJI daily change, shown on two consecutive slides]

About Lines

y = mx + b
y = b + mx
y = α + βx
y = β0 + β1x
y = b0 + b1x

y intercept: b0    slope: b1

Computing the best-fit line

In a STANDARDIZED scatterplot:
-- the line goes through the origin
-- its slope is r

In the ORIGINAL scatterplot:
-- the line goes through the “point of means” (x̄, ȳ)
-- its slope is r × (s_Y / s_X)
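Those two facts pin down the line. A minimal sketch (my own, reusing math and the correlation function from earlier) that turns them into an intercept and slope:

def best_fit_line(xs, ys):
    """Least-squares line via r, the SDs, and the point of means."""
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    s_x = math.sqrt(sum((x - x_bar) ** 2 for x in xs) / (n - 1))
    s_y = math.sqrt(sum((y - y_bar) ** 2 for y in ys) / (n - 1))
    b1 = correlation(xs, ys) * s_y / s_x   # slope = r × (s_Y / s_X)
    b0 = y_bar - b1 * x_bar                # forces the line through (x̄, ȳ)
    return b0, b1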

[Four scatterplots: y1 vs. x1, y2 vs. x2, y3 vs. x3, and y4 vs. x4]

The “Regression” Effect

A preschool program attempts to boost children’s reading scores.

Children are given a pre-test and a post-test.

Pre-test: mean score ≈ 100, SD ≈ 10
Post-test: mean score ≈ 100, SD ≈ 10

The program seems to have no effect.

A closer look at the data shows a surprising result:

Children who were below average on the pre-test tended to gain about 5-10 points on the post-test

Children who were above average on the pre-test tended to lose about 5-10 points on the post-test.


Maybe we should provide the program only for children whose pre-test scores are below average?

Fact: In most test–retest and analogous situations, the bottom group on the first test will on average tend to improve, while the top group on the first test will on average tend to do worse.

Other examples:
• Students who score high on the midterm tend on average to score high on the final, but not as high.

• An athlete who has a good rookie year tends to slump in his or her second year. (“Sophomore jinx”, "Sports Illustrated Jinx")

• Tall fathers tend to have sons who are tall, but not as tall. (Galton’s original example!)

[Scatterplot: post-test score vs. pre-test score]

It works the other way, too:

• Students who score high on the final tend to have scored high on the midterm, but not as high.

• Tall sons tend to have fathers who are tall, but not as tall.

• Students who did well on the post-test showed improvements, on average, of 5-10 points, while students who did poorly on the post-test dropped an average of 5-10 points.

Students can do well on the pre-test…
-- because they are good readers, or
-- because they get lucky.

The good readers, on average, do exactly as well on the post-test. The lucky group, on average, scores lower.

Students can get unlucky, too, but fewer of that group are among the high scorers on the pre-test.

So the top group on the pre-test, on average, tends to score a little lower on the post-test.

Extrapolation

Interpolation: Using a model to estimate Yfor an X value within the range on which the model was based.

Extrapolation: Estimating based on an X value outside the range.


Interpolation Good, Extrapolation Bad.
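One way to act on that warning in code: a hypothetical helper (my own sketch, not from the slides) that flags any prediction made outside the X range the line was fit on:

def predict_checked(x, b0, b1, x_min, x_max):
    """Predict b0 + b1 * x, but warn when x is outside the fitted range."""
    if not (x_min <= x <= x_max):
        print(f"warning: x = {x} is outside [{x_min}, {x_max}] -- extrapolating")
    return b0 + b1 * x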

Nixon’s Graph: Economic Growth

[Chart builds: economic growth over time, annotated “Start of Nixon Adm.”, “Now”, and “Projection”]

Conditions for regression

“Straight enough” condition (linearity)

Errors are mostly independent of X

Errors are mostly independent of anything else you can think of

Errors are more-or-less normally distributed

How to test the quality of a regression—

Plot the residuals. Pattern bad, no pattern good.

R²

How sure are you of the coefficients?
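As a closing sketch (mine, reusing best_fit_line from above): compute the residuals, which you can then plot and inspect for a pattern, along with R².

def regression_quality(xs, ys):
    """Residuals for inspection, plus R^2 = 1 - SS_res / SS_tot."""
    b0, b1 = best_fit_line(xs, ys)
    residuals = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
    y_bar = sum(ys) / len(ys)
    ss_res = sum(e ** 2 for e in residuals)
    ss_tot = sum((y - y_bar) ** 2 for y in ys)
    return residuals, 1 - ss_res / ss_tot

For a simple straight-line fit like this one, R² is just the square of the correlation r.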