
Teacher Evaluation and Performance Measurement

Doug Staiger, Dartmouth College
Transcript
Page 1: Teacher Evaluation and Performance Measurement

Teacher Evaluation and Performance Measurement

Doug Staiger, Dartmouth College

Page 2: Teacher Evaluation and Performance Measurement

Not this.

Weisberg, D., Sexton, S., Mulhern, J. & Keeling, D. (2009). The Widget Effect: Our National Failure to Acknowledge and Act on Differences in Teacher Effectiveness. New York: The New Teacher Project.

[Chart legend: Satisfactory (or equivalent) / Unsatisfactory (or equivalent)]

Page 3: Teacher Evaluation and Performance Measurement

3

Not this.

Page 4: Teacher Evaluation and Performance Measurement

4

Transformative Feedback

Page 5: Teacher Evaluation and Performance Measurement

Recent Work on Teacher Evaluation

Efforts to identify effective teaching using achievement gains
– Work with Tom Kane & others in LAUSD, NYC, Charlotte…
– www.dartmouth.edu/~dstaiger

Efforts to better identify effective teaching
– Measures of Effective Teaching (MET) Project (Bill & Melinda Gates Foundation): www.metproject.org
– National Center for Teacher Effectiveness (NCTE) (US Department of Education): www.gse.harvard.edu/ncte

5

Page 6: Teacher Evaluation and Performance Measurement

The Measures of Effective Teaching Project

• Two school years: 2009-10 and 2010-11
• Grades 4-8: ELA and Math
• High School: ELA I, Algebra I and Biology

Participating Teachers

Page 7: Teacher Evaluation and Performance Measurement

The MET data is unique …

… in the variety of indicators tested:
– 5 instruments for classroom observations (use FFT here)
– Student surveys (Tripod Survey)
– Value-added on state tests

… in its scale:
– 3,000 teachers
– 22,500 observation scores (7,500 lesson videos x 3 scores)
– 900+ trained observers
– 44,500 students completing surveys and supplemental assessments in year 1
– 3,120 additional observations by principals/peer observers in Hillsborough County, FL

… and in the variety of student outcomes studied:
– Gains on state math and ELA tests
– Gains on supplemental tests (BAM & SAT9 OE)
– Student-reported outcomes (effort and enjoyment in class, grit)

7

Page 8: Teacher Evaluation and Performance Measurement

What is “Effective” Teaching?

Can be an inputs-based concept
– Observable actions or characteristics

Can be an outcomes-based concept
– Measured by student success

Ultimately, we care about impact on student outcomes
– Current focus on standardized exams
– Interest in other outcomes (college, non-cognitive)

8

Page 9: Teacher Evaluation and Performance Measurement

Multiple Measures of Teaching Effectiveness

9

Page 10: Teacher Evaluation and Performance Measurement

10

Measure #1: Student Achievement Gains

(“Value Added”)

Page 11: Teacher Evaluation and Performance Measurement

11

Basics of Value Added Analysis

Teacher value added compares actual student achievement at the end of the year to an expectation for each student.

It is the difference between actual and expected achievement, averaged over all of a teacher’s students.

Expected achievement is the typical achievement of other students who looked similar at the start of the year:
– Same prior-year test scores
– Same demographics, program participation
– Same characteristics of peers in classroom or school

Various flavors, all of which work similarly:
– Student growth percentiles
– Average change in score or percentile
– Based on prior-year test or fall pre-test
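To make the mechanics concrete, here is a minimal sketch of the actual-minus-expected logic described above, assuming a simple linear model for expected achievement; the DataFrame and column names are hypothetical, not part of any specific district's implementation.

```python
import numpy as np
import pandas as pd

def value_added(df, score_col="score", prior_col="prior_score",
                controls=("frl", "ell"), teacher_col="teacher_id"):
    # Design matrix: intercept, prior-year score, demographic/program controls.
    X = df[[prior_col, *controls]].to_numpy(dtype=float)
    X = np.column_stack([np.ones(len(df)), X])
    y = df[score_col].to_numpy(dtype=float)

    # "Expected achievement": OLS prediction based on students who looked
    # similar at the start of the year.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    expected = X @ beta

    # Teacher value added: actual minus expected, averaged over each teacher's students.
    return (df.assign(residual=y - expected)
              .groupby(teacher_col)["residual"].mean()
              .rename("value_added"))
```

Real systems add peer and school characteristics and shrink noisy estimates, but the core idea is this residual average per teacher.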

Page 12: Teacher Evaluation and Performance Measurement

There are Large Differences in Teacher Effects on Student Achievement Gains

Most evidence from “value added” analysis, but similar findings from randomized experiments

Huge literature about “teacher effects” on achievement:
– Large, persistent variation across teachers
– Difficult to predict at hire
– Partially predictable after hire
– Improve only in the first few years of teaching
– Not related to most determinants of pay (certification, degrees, experience beyond the first few years)

Page 13: Teacher Evaluation and Performance Measurement

Large Variation in Value Added of LAUSD Teachers is Not Related to Teacher Certification

[Figure: Teacher Impacts on Math Performance, by Initial Certification. Histogram of the proportion of classrooms by change in percentile rank of the average student (-15 to +15), shown separately for traditionally certified, alternatively certified, and uncertified teachers. Note: Classroom-level impacts on average student performance, controlling for baseline scores, student demographics and program participation. LAUSD elementary teachers, grades 2 through 5.]

Page 14: Teacher Evaluation and Performance Measurement

Variation in Value Added of LAUSD Teachersis Related to Prior Performance

[Figure: Teacher Impacts on Math Performance in Third Year, by Ranking After First Two Years. Histogram of the proportion of classrooms by change in percentile rank of the average student (-15 to +15), shown for teachers ranked in the bottom quartile, 3rd quartile, 2nd quartile, and top quartile after their first two years. Note: Classroom-level impacts on average student performance, controlling for baseline scores, student demographics and program participation. LAUSD elementary teachers, < 4 years experience.]

Page 15: Teacher Evaluation and Performance Measurement

Why Not Just Hire Good Teachers?

“Wise selection is the best means of improving the school system, and the greatest lack of economy exists wherever teachers have been poorly chosen.”
– Frank Pierrepont Graves, NYS Commissioner, 1932

Unfortunately, this is easier said than done:
– Decades of work on type of certification, graduate education, exam scores, GPA, college selectivity, TFA
– (Very) small, positive effects on student outcomes

Page 16: Teacher Evaluation and Performance Measurement

Large Variation in Value Added of NYC Teachers is Not Related to Recruitment Channel

[Figure: Kernel density estimates of teachers' impacts on student achievement (in student-level standard deviations, -.4 to +.4), shown for traditionally certified, Teaching Fellow, Teach for America, and uncertified teachers. Note: Shown are estimates of teachers' impacts on average student performance, controlling for teachers' experience levels and students' baseline scores, demographics and program participation; includes teachers of grades 4-8 hired since the 1999-2000 school year.]

Page 17: Teacher Evaluation and Performance Measurement

Of Course, Teacher Impact on State Test Scores is Not All We Care About

Depends on the design & content of the test.

Test scores are proximate measures
– But recent evidence suggests they capture long-run impact on student learning and other outcomes

Test scores are only one dimension of performance
– Non-cognitive skills (grit, dependability, …)

Page 18: Teacher Evaluation and Performance Measurement

Value Added is Controversial

“We need to find a way to measure classroom success and teacher effectiveness. Pretending that student outcomes are not part of the equation is like pretending that professional basketball has nothing to do with the score.” (Arne Duncan, 2009)

“There is no way that any of this current data could actually, fairly, honestly or with any integrity be used to isolate the contributions of an individual teacher.” (Randi Weingarten, 2008)

18

Page 19: Teacher Evaluation and Performance Measurement

19

What we learned from MET: Value-added measures

• Identified teachers who caused students to learn more on state tests following random assignment.
• The same teachers also caused students to learn more on supplemental assessments and to enjoy class more.
• Low year-to-year correlations in value-added (and other performance measures) understate year-to-career correlations.
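A tiny simulation can illustrate the last point, under the purely illustrative assumption of a stable teacher effect plus independent yearly noise (the numbers are made up, not MET estimates): a single year correlates weakly with another single year but much more strongly with the career average.

```python
import numpy as np

rng = np.random.default_rng(0)
n_teachers, n_years = 5000, 10
stable = rng.normal(0, 1, n_teachers)                               # persistent teacher effect
yearly = stable[:, None] + rng.normal(0, 2, (n_teachers, n_years))  # noisy single-year value-added

career_avg = yearly.mean(axis=1)
year_to_year = np.corrcoef(yearly[:, 0], yearly[:, 1])[0, 1]    # one single year vs. another
year_to_career = np.corrcoef(yearly[:, 0], career_avg)[0, 1]    # one single year vs. career average
print(round(year_to_year, 2), round(year_to_career, 2))         # roughly 0.2 vs. 0.5
```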

Page 20: Teacher Evaluation and Performance Measurement

20

Page 21: Teacher Evaluation and Performance Measurement

21

[Figure 1: Actual and Predicted Achievement of Randomized Classrooms (Math). Scatter plot of actual achievement after random assignment (y-axis, -.1 to +.1) against predicted achievement using teachers' past measures of teaching (x-axis, -.1 to +.1), with the "Actual = Predicted" line for reference. Note: Teachers were sorted into 20 groups by their predicted student achievement relative to the randomization group mean. Means are reported for each of the 20 groups. Predictions are adjusted for non-compliance.]

Page 22: Teacher Evaluation and Performance Measurement

22

[Figure 2: Actual and Predicted Achievement of Randomized Classrooms (ELA). Scatter plot of actual achievement after random assignment (y-axis, -.1 to +.1) against predicted achievement using teachers' past measures of teaching (x-axis, -.1 to +.1), with the "Actual = Predicted" line for reference. Note: Teachers were sorted into 20 groups by their predicted student achievement relative to the randomization group mean. Means are reported for each of the 20 groups. Predictions are adjusted for non-compliance.]

Page 23: Teacher Evaluation and Performance Measurement

23

Measure #2: Classroom Observations

Page 24: Teacher Evaluation and Performance Measurement

Classroom Observation Using Digital Video

24

Page 25: Teacher Evaluation and Performance Measurement

Helping Districts Test Their Own New Classroom Observations

Access to the Validation Engine — what you can expect from us:
– SEA/LEA chooses a rubric and trains raters
– The MET Project delivers sample videos
– SEA/LEA ratings are used to predict value added and to gauge reliability

25

Page 26: Teacher Evaluation and Performance Measurement

26

Two Cross-Subject Observation Instruments

Framework for Teaching (FFT)
– Developer: Charlotte Danielson
– Origin: Outgrowth of ETS’s PRAXIS III licensing exam
– Instructional focus: Constructivism; intellectual engagement
– Structure: 4 domains; 22 components (MET uses 8 components*)
– Scoring: 4 points

Classroom Assessment Scoring System (CLASS)
– Developer: Robert Pianta, Univ. of Virginia
– Origin: Tool for research on early childhood development
– Instructional focus: Teacher-student interactions
– Structure: 3 domains; 12 dimensions
– Scoring: 7 points

*not: “flexibility & responsiveness” & “organization of physical space”

Page 27: Teacher Evaluation and Performance Measurement

27

FFT competencies scored:

CLASSROOM ENVIRONMENT
– Creating an environment of respect and rapport
– Establishing a culture of learning
– Managing classroom procedures
– Managing student behavior

INSTRUCTION
– Communicating with students
– Using questioning and discussion techniques
– Engaging students in learning
– Using assessments in instruction

Page 28: Teacher Evaluation and Performance Measurement

28

Math Observation Instruments

Mathematical Quality of Instruction (MQI)
– Developer: Heather Hill, Harvard
– Origin: Outgrowth of a written test of math teaching knowledge
– Instructional focus: Math errors and imprecision
– Structure: 6 overall elements of instruction
– Scoring: 3 points

UTEACH Observation Protocol (UTOP)
– Developer: Michael Marder, Univ. of Texas-Austin
– Origin: Teacher prep program for math & science majors
– Instructional focus: Values different modes, from direct instruction to inquiry-based
– Structure: 4 sections; 22 subsections
– Scoring: 5 points

Page 29: Teacher Evaluation and Performance Measurement

29

ELA Observation Instrument

Protocol for Language Arts Teaching Observations (PLATO)
– Developer: Pam Grossman, Stanford
– Origin: Research on effective middle-grade ELA instruction
– Instructional focus: Modeling, explicit teaching of strategies, guided practice
– Structure: 13 elements (6 elements included in the MET study)
– Scoring: 4 points

Page 30: Teacher Evaluation and Performance Measurement

30

What we learned from MET: Classroom observations

• Observation scores were correlated with a teacher’s value-added (.15-.27).

• Different instruments were highly correlated with each other (although subject-specific instruments were distinct from the general-pedagogical instruments).

• Reliability requires certified observers and more than one observer per teacher (because rater judgments differ).

• Principals rate their own teachers higher than other observers do, but their rankings are similar.

• When teachers select their own videos, scores are higher, but ranking remains the same.

Page 31: Teacher Evaluation and Performance Measurement

31

Four Steps

Four Steps to High-Quality Classroom Observations

Page 32: Teacher Evaluation and Performance Measurement

Step 1: Define Expectations
Framework for Teaching (Danielson) — actual scores for 7,500 lessons.

32

Four Steps

Unsatisfactory: Yes/no questions, posed in rapid succession; teacher asks all questions; same few students participate.

Basic: Some questions ask for student explanations; uneven attempts to engage all students.

Proficient: Most questions ask for explanation; discussion develops/teacher steps aside; all students participate.

Advanced: All questions high quality; students initiate some questions; students engage other students.

Page 33: Teacher Evaluation and Performance Measurement

Step 2: Ensure Accuracy of Observers

33

Four Steps

Page 34: Teacher Evaluation and Performance Measurement

Step 3: Monitor Reliability

34

Four Steps

Page 35: Teacher Evaluation and Performance Measurement

35

More than 1 observer:
– Scoring one more lesson: +.07
– Adding one more observer: +.16
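One conventional way to reason about why extra lessons and observers buy reliability (an illustrative model, not necessarily the exact MET calculation) is the Spearman-Brown prophecy formula, sketched below with an assumed single-lesson reliability of 0.35.

```python
def spearman_brown(single_score_reliability: float, n_scores: int) -> float:
    """Reliability of the average of n_scores parallel ratings."""
    r = single_score_reliability
    return n_scores * r / (1 + (n_scores - 1) * r)

# Averaging more independent scores raises reliability, with diminishing returns.
for n in (1, 2, 3, 4):
    print(n, round(spearman_brown(0.35, n), 2))   # 0.35, 0.52, 0.62, 0.68
```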

Page 36: Teacher Evaluation and Performance Measurement

Step 4: Verify Alignment with Outcomes

36

Four Steps

Teachers with Higher Observation Scores Had Students Who Learned More

Page 37: Teacher Evaluation and Performance Measurement

37

Measure #3: What do students say?

Page 38: Teacher Evaluation and Performance Measurement

38

Students Distinguish Between Teachers: Percent of Students by Classroom Agreeing

Page 39: Teacher Evaluation and Performance Measurement

39

Students Distinguish Between Teachers: Percent of Students by Classroom Agreeing

Page 40: Teacher Evaluation and Performance Measurement

40

Students Distinguish Between Teachers: Percent of Students by Classroom Agreeing

Page 41: Teacher Evaluation and Performance Measurement

41

Students Distinguish Between Teachers: Percent of Students by Classroom Agreeing

Page 42: Teacher Evaluation and Performance Measurement

42

Students Distinguish Between Teachers: Percent of Students by Classroom Agreeing

Page 43: Teacher Evaluation and Performance Measurement

43

What we learned from MET: Student surveys

• Surveys are a low-cost way to cover untested grades and subjects.

• Student surveys are related to teacher value-added (.15-.25).

• Student surveys are the most reliable measures we tested.

Page 44: Teacher Evaluation and Performance Measurement

44

Multiple Measures: The “Dynamic Trio”

Classroom observations, student feedback, and student achievement gains.

Page 45: Teacher Evaluation and Performance Measurement

Dynamic Trio

45

Three Criteria:
– Predictive power: Which measure could most accurately identify teachers likely to have large gains when working with another group of students?
– Reliability: Which measures were most stable from section to section or year to year for a given teacher?
– Potential for Diagnostic Insight: Which have the potential to help a teacher see areas of practice needing improvement? (We’ve not tested this yet.)

Page 46: Teacher Evaluation and Performance Measurement

Dynamic Trio

Measures have different strengths … and weaknesses

Measure          Predictive power   Reliability   Potential for Diagnostic Insight
Value-added      H                  M             L
Student survey   M                  H             M/H
Observation      L                  M             H

46

Page 47: Teacher Evaluation and Performance Measurement

Dynamic Trio

Combining Measures Improved Reliabilityas well as Predictive Power

47

[Figure: The Reliability and Predictive Power of Measures of Teaching. Plot of the difference in math value-added between the top 25% and bottom 25% of teachers (y-axis, .05 to .25) against reliability (x-axis, 0 to .7), for five measures: observation alone (FFT), student survey alone, VA alone, combined (equal weights), and combined (criterion weights). Note: For the equally weighted combination, we assigned a weight of .33 to each of the three measures. The criterion weights were chosen to maximize ability to predict a teacher's value-added with other students. The next MET report will explore different weighting schemes. Reliability based on one course section and 2 observations (Table 16 of the research report).]

Page 48: Teacher Evaluation and Performance Measurement

48

What we learned from MET: Combining measures

• The teachers identified as more effective caused students to learn more following random assignment.
• Combining value added with student surveys and classroom observations produces two benefits:
  – Increased reliability
  – Increased correlation with other outcomes, such as value-added on supplemental assessments and happiness in class
• Weighting value-added below .33, though, lowered correlation with other outcomes and lowered reliability.
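As a rough sketch of what an equal-weights composite looks like in practice (the function, measure names, and the assumption that each measure is already standardized are illustrative, not MET's implementation):

```python
import numpy as np

def composite_score(observation, survey, value_added, weights=(0.33, 0.33, 0.33)):
    # Weighted average of three standardized (z-scored) measures per teacher.
    measures = np.vstack([observation, survey, value_added])   # shape: 3 x n_teachers
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                             # normalize weights to sum to 1
    return w @ measures

# Example: standardized scores for three hypothetical teachers.
obs = np.array([0.5, -0.2, 1.1])
srv = np.array([0.3,  0.1, 0.9])
va  = np.array([0.8, -0.5, 0.4])
print(composite_score(obs, srv, va))
```

Criterion weights would instead be chosen, for example by regression, to best predict a teacher's value-added with other students.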

Page 49: Teacher Evaluation and Performance Measurement

49

Can the measures be used for “high stakes”?

High-stakes decisions are being made now, with little or no data. No information is perfect, but better information should lead to better decisions and fewer mistakes.

Scenario 1: Teacher

You have been teaching biology for 10 years and want to improve your practice. What weaknesses should you focus on and how will you know if you're making progress?

Scenario 2: Principal

A probationary teacher in your school is approaching the end of their 2nd year. If you retain them, the teacher automatically earns tenure under the collective bargaining agreement. Should you grant tenure (or recruit a new novice teacher)?

Scenario 3: Superintendent

Your district is considering offering coaching opportunities/higher pay to a subset of your teachers. Should you (i) allocate those slots on the basis of seniority, or (ii) ensure that only excellent instructors become coaches? How would you measure effectiveness fairly?

Page 50: Teacher Evaluation and Performance Measurement

50

No information is perfect. But better information → better decisions.

How do these compare to existing measures?
• Master’s degrees
• Years of experience
• Classroom observations alone

Page 51: Teacher Evaluation and Performance Measurement

Compared to What?

Compared to MA Degrees and Years of Experience, the Combined Measure Identifies Larger Differences

51

… on state tests

Page 52: Teacher Evaluation and Performance Measurement

Compared to What?

…and on low stakes assessments

52

Page 53: Teacher Evaluation and Performance Measurement

Compared to What?

…as well as on student-reported outcomes.

53

Page 54: Teacher Evaluation and Performance Measurement

54

The Value of Going Beyond Classroom Observation

• Observations
• Observations + Student Perceptions
• Observations + Student Perceptions + VA on state tests

Page 55: Teacher Evaluation and Performance Measurement

55

Compared to Classroom Observations Alone, the Combined Measure Identifies Larger Differences (Math Value Added)

[Figure: Average math value added in the teacher's other class (y-axis, -.2 to +.3) by percentile rank on FFT (x-axis, 0 to 100), comparing ranks using FFT only, FFT and Tripod, and FFT, Tripod, and value added.]

Page 56: Teacher Evaluation and Performance Measurement

56

Improving Teaching: What are Districts Doing?

Page 57: Teacher Evaluation and Performance Measurement

Robust evaluation systems themselves improve teaching outcomes

Source: Eric S. Taylor and John H. Tyler, “Can Teacher Evaluation Improve Teaching?” Education Next, Fall 2012

Page 58: Teacher Evaluation and Performance Measurement

Teacher Effectiveness Continues to Improve in Better Environments

Source: Matthew A. Kraft and John P. Papay, “Can Professional Environments in Schools Promote Teacher Development? Explaining Heterogeneity in Returns to Teaching Experience,” January 2013 (on NCTE website).

Page 59: Teacher Evaluation and Performance Measurement

59

The Best Foot Forward Project

1. Teachers record their own lessons.
   • Record ≥1 lesson every 2 weeks.
   • Submit 5 lessons over the course of the year.
   • Viewed by principals, content experts.
2. Observers view and discuss videos with teachers.
   • Observers trained to use video for feedback.
   • Identify discrete, coachable changes.
3. Teachers can share videos with each other.
4. Students provide anonymous feedback.

Page 60: Teacher Evaluation and Performance Measurement

60

Next Up: Dashboard for Tracking Teacher Evaluations and Benchmarking Performance

1. Distribution of Observation Scores: What are the differences in scores and are the differences between schools, districts, grades and subjects larger than might have occurred by chance?

2. Observations and Value-Added: What are the relationships among the different measures? Do they differ by district, school, grade level, subject? Are they weaker/stronger than we observed in MET?

3. Reliability: How does each measure vary from school to school and year to year?
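A minimal sketch of the kind of statistics such a dashboard might compute, assuming a hypothetical pandas DataFrame of teacher-year records (column names 'district', 'teacher_id', 'year', 'obs_score', 'value_added' are illustrative, not the dashboard's actual schema):

```python
import pandas as pd

def obs_va_correlation_by_district(df: pd.DataFrame) -> pd.Series:
    # Question 2: relationship between observation scores and value-added, by district.
    return df.groupby("district").apply(lambda g: g["obs_score"].corr(g["value_added"]))

def year_to_year_stability(df: pd.DataFrame, measure: str) -> pd.Series:
    # Question 3: how stable a measure is for the same teacher across adjacent years.
    wide = df.pivot_table(index="teacher_id", columns="year", values=measure)
    years = sorted(wide.columns)
    return pd.Series({y: wide[y].corr(wide[y + 1])
                      for y in years[:-1] if y + 1 in wide.columns})
```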

Page 61: Teacher Evaluation and Performance Measurement

Useful Resources

Available at: http://www.metproject.org/resources.php

Student surveys: Tripod survey and “Asking Students about Teaching Practitioner Brief”

Roster Validation: Report by Battelle for Kids on ways to allow teachers to verify students in their class: “Identifying The Importance of Accurately Linking Instruction to Students to Determine Teacher Effectiveness”

Software for Certifying Observers using Pre-Scored Videos: Certification engine from Empirical Education

Available at: http://www.gse.harvard.edu/ncte/resources/default.php

Classroom Observation: Links to FFT, CLASS, etc., and webinars with six organizations currently supporting classroom observations

Additional examples of sites with useful resources: TNTP: http://tntp.org/ideas-and-innovations

Pearson: http://educatoreffectiveness.pearsonassessments.com/

