IMPROVING INSTRUCTIONAL DELIVERY: REFLECTIONS ON STUDENT FEEDBACK

A thesis presented by

Lisa C. Oliveira

to The School of Education

In partial fulfillment of the requirements for the degree of

Doctor of Education

in the field of

Education

College of Professional Studies, Northeastern University

July 2013


Abstract

Teachers work countless hours preparing instruction for their students with the best intentions in mind. Each day many students enter these classrooms to experience the delivery of the prepared instruction, yet their perceptions do not always match those of their teachers. The purpose of this research study is to develop and pilot a student feedback tool and then to determine whether it is an appropriate and useful instrument for eliciting feedback from students on instructional delivery. Gathering these data will allow teachers to reflect on their practices as interpreted by students and to make informed changes to their instructional delivery with the intention of improving student outcomes. In order to incorporate this innovation, the teacher must feel safe and must be provided not only with the data but with the opportunity to reflect. Identifying where the teacher falls in relation to Gene Hall's stages of concern, an instrument that "describes, explains and predicts probable behaviors throughout the change process" (George, 2006, p. 5), will provide administration with the information needed for successful implementation of the student feedback tool. The student feedback tool not only provided the teacher with the data needed for reflection but will also meet the mandates of the Massachusetts Department of Elementary and Secondary Education Model for Teacher Evaluation. This study serves these purposes in that it pilots the use of student feedback for reflection with the goal of improving teacher delivery of instruction to improve student learning. The Stages of Concern Questionnaire allowed the researcher to reflect on its results in order to design a plan for a larger pilot or implementation of a student feedback tool in her building in the future.


Table of Contents

CHAPTER 1: INTRODUCTION
  STATEMENT OF THE PROBLEM AND SIGNIFICANCE
  PRACTICAL AND INTELLECTUAL GOALS
  RESEARCH QUESTIONS AND DESIGN
  LIMITATIONS
  THEORETICAL FRAMEWORK
  JEAN PIAGET AND CONSTRUCTIVISM
  CHARLOTTE DANIELSON'S FRAMEWORK FOR TEACHING
  ALBERT BANDURA'S CONCEPT OF SELF-EFFICACY
  GENE HALL'S CONCERNS-BASED ADOPTION MODEL

CHAPTER 2: LITERATURE REVIEW
  REFLECTIVE PRACTICE
  STUDENT EVALUATIONS OF TEACHING
  STUDENT GROWTH
  TEACHER EFFECTIVENESS AND TEACHER QUALITY
  TEACHER EVALUATION

CHAPTER 3: RESEARCH DESIGN
  RESEARCH QUESTIONS
  RESEARCH SEQUENCE
  APPROACH
  SITE AND PARTICIPATION
  PROTECTION OF PARTICIPANTS
  DATA COLLECTION AND ANALYSIS
  VALIDITY AND RELIABILITY
  LIMITATIONS

CHAPTER 4: REPORT OF RESEARCH FINDINGS
  TEACHER PARTICIPANT MS1
  TEACHER PARTICIPANT HS1
  TEACHER PARTICIPANT MS2
  TEACHER PARTICIPANT HS2
  TEACHER PARTICIPANT MS3
  TEACHER PARTICIPANT HS3
  TEACHER PARTICIPANT HS4
  RESEARCH QUESTION 3
  RESEARCH QUESTION 5

CHAPTER 5: DISCUSSION OF RESEARCH FINDINGS
  INTRODUCTION
  REFLECTIVE PRACTICE
  I-SAID AS A TOOL FOR TEACHER EVALUATION
  SUMMARY OF FINDINGS
  DELIMITATIONS AND LIMITATIONS OF THE STUDY
  RECOMMENDATIONS FOR FUTURE PRACTICE
  RECOMMENDATIONS FOR FURTHER STUDY

REFERENCES

APPENDIX A


APPENDIX B: TEST-RETEST DATA BY TEACHER PARTICIPANT

APPENDIX C: QUESTIONS ASKED OF CRITICAL PEERS

APPENDIX D: STUDENT FEEDBACK SURVEY

APPENDIX E: REFLECTIVE MEMO

APPENDIX F: STAGES OF CONCERN QUESTIONNAIRE

APPENDIX G: PERMISSION FROM DR. GENE HALL

APPENDIX H: EXAMPLE OF STUDENT DATA PRESENTED TO TEACHER FOR REFLECTION

APPENDIX I: RESULTS OF OPEN-ENDED REFLECTIVE MEMO

APPENDIX J: IRB APPROVAL



Chapter 1: Introduction

"If you think in terms of a year, plant a seed; if in terms of ten years, plant trees; if in terms of 100 years, teach the people" (Confucius, 551-479 BCE).

When one chooses to teach, the choice is made not because of the hours or wages, but because there is great power and fulfillment in guiding the learning of others. Additionally, our teachers believe that our students are the most important assets we have; therefore, student feedback on instructional delivery is the next logical step in improving teacher practice and, ultimately, increasing student learning. Most of the current research on the use of student ratings has occurred at the college level; however, due to recent federal grant opportunities and reform of the teacher evaluation process, school districts are expected to incorporate evidence of the use of student feedback into their teacher evaluation systems. Obtaining student feedback will shift the focus from teacher evaluations based solely on a few classroom observations to a teacher evaluation process that encompasses a more complete picture of how students learn. The time is now to develop a student feedback instrument, grounded in theory, that provides students with the opportunity to share feedback on instructional delivery with their teacher. The goal of providing feedback to teachers is two-fold: first, to improve instruction, and second, to serve evaluation purposes. A student feedback instrument would provide teachers with information to reflect upon and, as a result, develop individual professional development plans aimed at increasing their effectiveness in the classroom. For example, the survey could reveal whether students' perceptions of instructional practices align with the intended design, providing teachers with data on student perceptions of their instructional delivery upon which they could reflect. As with a formative assessment, teachers could change their practices while it is still meaningful for the students right in front of them. In addition, student feedback could assist administrators in measuring teacher effectiveness and growth.

Statement of the Problem and Significance

Research has shown that the single most important factor in student achievement is having an effective teacher (Bain-Pate, 1989; 2011, p. 158; Munoz, 2007). "In comparison to gains from higher teacher quality, even a very costly ten student reduction in class size produces smaller benefits than a one standard deviation improvement in teacher quality" (Rivkin, Hanushek, & Kain, 2005, p. 419). Student feedback has been suggested as one mechanism for improving teacher quality. According to Bailey, "classroom teachers can acquire more knowledge about what they are doing in the classroom and how they can systematically improve their performance by using student feedback" (Bailey, 1983, p. 5); however, the practice of obtaining student feedback with the goal of improving instructional practices is not widespread in K-12 education. Student feedback has remained highly utilized at the college level. Typically, K-12 teachers only receive feedback from administrators as a result of planned observations of specific lessons. Students spend much more time observing instructional delivery than administrators do and could provide powerful information to their teachers. Students not only have a unique perspective on what increases their own learning, but what is clear to adults may not be so clear to students, as their cognitive structures differ from those of adults. According to Dr. Francis Jensen, the teenage brain is structurally different in that it has less grey matter, and the myelin sheath that covers the nerves in the brain is also less developed. She states, "the thinning of gray matter that starts around puberty corresponds to increasing cognitive abilities. This probably reflects improved neural organization, as the brain pares redundant connections and benefits from increases in the white matter that helps brain cells communicate" (Jensen, 2010). Students can provide information such as what they think the teacher is doing, what they like about what the teacher is doing, and why they like or dislike those approaches. Students may be the key to providing the information teachers need for the purposes of reflection and for having meaningful conversations about their instructional practices with their colleagues and administrators. Beaty states, "The practice of reflection is important to the development of all professionals because it enables them to learn from experience" (1997, p. 8). Improvement of instructional practices through reflection is the driver behind the development of a student feedback tool; a secondary driver, however, is the requirement in Massachusetts that evidence of the use of student feedback be included as part of a teacher's evaluation.

The No Child Left Behind Act (NCLB), written in 2001, brought curriculum standards into education along with accountability for schools. Schools became accountable both for their student body as a whole and for the performance of particular sub-groups, such as special education and low-income groups, with the goal of helping each and every child reach their full potential. The mandate was that all students must reach proficiency by the year 2014. This has been problematic for many school districts. The latest federal intervention allows states to apply for a waiver releasing them from the regulation that every student reach proficiency by 2014. "An estimated 48% of the nation's public schools did not make adequate yearly progress (AYP) in 2011. This marks an increase from 39% in 2010 and is the highest percentage since NCLB took effect" (Usher, 2011b, p. 2). In another report, Usher informs us that "81% of Massachusetts schools did not make AYP in 2011 compared to 57% in 2010" (Usher, 2011a, p. 5). Massachusetts has been granted waiver status, releasing it from the obligation of ensuring that all students achieve proficiency by 2014, due in large part to its agreement to implement the Common Core State Standards Initiative (CCSSI) and its commitment to implement a new teacher evaluation system based in part on student performance.

The Massachusetts Department of Elementary and Secondary Education (MADESE) has recently begun the rollout of its Massachusetts Model System for Educator Evaluation. It has solidified the first phase, which calls for districts to "Rate every educator based on attainment of goals and performance against the following four standards: 1-Curriculum, Planning and Assessment; 2-Teaching All Students; 3-Family and Community Engagement; 4-Professional Culture" (MADESE, 2011, p. 5). MADESE is looking for districts to begin developing and piloting phase II, "Rate every educator's impact on student learning gains based on trends and patterns on state and district determined measures of student learning," and phase III, "the use of feedback from students as evidence in the evaluation process" (MADESE, 2011, p. 8). This study may therefore yield information to support larger-scale pilots of the developed student feedback tool to meet MADESE's mandate.

In conclusion, the success of implementing a new innovation such as a student feedback tool will depend heavily on the teacher's stage of concern and on how that concern is supported and worked through by an effective change facilitator. "Change success depends less on whether the source of change is internal or external and significantly more on the degree to which the culture of the organization is open and ready to consider what is currently being done and continually examining ways to improve" (Hall & Hord, 2001, p. 2). The Concerns Based Adoption Model was developed by Gene Hall. Through his research he identified a developmental process through which individuals proceed in their adoption and implementation of a new innovation; earlier stages must be worked through in order to proceed in the process (George, 2006, p. 8). Therefore, this study will evaluate the stage of concern of each teacher participant by using the Stages of Concern Questionnaire.

Practical and Intellectual Goals

The practical goals (Maxwell, 2005, p. 21) for this study are to create a student feedback tool (SFT) and to obtain student feedback that teachers can reflect on and use to create an action plan to improve their instructional practices. A final practical goal is to better understand the teachers' levels of concern after the process of reviewing and reflecting on the data. The SFT will be designed using the Framework for Teaching (FFT) by Charlotte Danielson and will hereafter be called the Individual Student Assessment of Instructional Delivery (I-SAID). The I-SAID may also serve as an appropriate instrument for collecting student feedback as required in Phase III of the Massachusetts Model System for Educator Evaluation.

The intellectual goal (Maxwell, 2005, p. 21) of this proposed research project is to develop a deeper understanding of how teachers could use the I-SAID as a tool for reflective practice and meaningful conversation, and of how to craft a plan for successful implementation of the use of student feedback as a reflective tool utilizing the Concerns Based Adoption Model.

Research Questions and Design

To begin to understand whether the developed student feedback tool, the I-SAID, would be a valid and reliable tool for teachers to reflect on with the intention of improving their instructional practices, this study explores the following central research questions:

1.) What elements should be in a valid and reliable student feedback tool?

2.) What can teachers learn from student feedback data?

3.) How do teachers incorporate student feedback as a means of improving instructional delivery?

4.) Can the developed SFT meet the needs of the Massachusetts Model System for Educator Evaluation?

5.) What was the indicated Stage of Concern of selected teacher participants regarding the implementation of the I-SAID?

My problem of practice is the lack of a valid, useful instrument for obtaining student feedback to assist secondary school teachers in reflecting on their practices, identifying professional development needs, developing instructional goals, and increasing student learning gains. On a broader scope, it is to develop an instrument that may meet the requirements of Phase III of the Massachusetts Model System for Educator Evaluation, which requires the educator to provide evidence of the use of student feedback as an element of the evaluation system.

To address this problem, a two-phase mixed methods pilot study was conducted. The first part of phase one consisted of the development of the student feedback tool, and the second part of phase one consisted of administering the survey and benchmarking it against Danielson's observation rubric for effective teaching. Phase two of the study assembled the data and made it available to the participants. They were then asked to complete a reflective memo to share with the researcher their experience and the insights garnered from the process and the tool. The final step in this phase was for the teacher participants to complete the Stages of Concern Questionnaire in order to obtain information on the interventions needed to assist in implementation of the I-SAID. This design takes into consideration the perspectives and experiences of the teacher participants and attempts to examine the process holistically.

Limitations

This study is a small-scale pilot study for which a student feedback instrument was designed, implemented, and analyzed. The implementation occurred within a 7-12 suburban school district, utilizing only teachers who have professional status and were not on-cycle to be evaluated during the 2012-2013 school year. As a result, there may be limitations with respect to generalizing the findings of this study to other school districts with different contexts and demographics. Every effort has been made to minimize potential researcher bias, as the researcher serves as principal of the district's high school, a role that could create potential pressure to meet individual or shared needs.

Theoretical Framework

The two phases of this study are supported by distinct theoretical frameworks. The first phase aims at developing a tool to provide student feedback on teachers' instructional delivery. The development of the I-SAID was grounded in the Framework for Teaching developed by Charlotte Danielson. Danielson's framework is rooted in the earlier works of Jean Piaget, particularly his work on constructivism, which explains how one constructs learning. These theories working together provide the framework under which the I-SAID was developed. Albert Bandura's concept of self-efficacy provides the researcher with a richer understanding of teacher experiences and reactions toward data from the I-SAID. Self-efficacy is a determining factor in whether the teacher continues to invest time and energy into refining their teaching or succumbs to feelings of inadequacy and gives up. "Efficacy expectations determine how much effort people will expend and how long they will persist in the face of obstacles and aversive experiences" (Bandura, 1977, p. 80).

The second phase of the research study aims at developing a deeper understanding of how teachers could use the I-SAID as a tool for reflective practice and meaningful conversation. The Concerns Based Adoption Model (CBAM) is utilized to determine where the teachers who piloted the I-SAID stand with regard to their willingness to adopt it. This model addresses the personal side of change and the stages through which an individual travels in the implementation of an innovation (Hall & Hord, 2001). This model will be discussed further later in this section.

Jean Piaget and constructivism

Jean Piaget's work centered on the acquisition of knowledge by both students and adults. Piaget defines intelligence as "the state of equilibrium towards which tend all the successive adaptations of a sensory-motor and cognitive nature, as well as all assimilatory and accommodatory interactions between the organism and the environment" (1950, p. 21). Piaget believed that individuals rely on linked images called schema. The central premise of this theory is that as knowledge is introduced, it is either assimilated into our existing cognitive schema, or our cognitive schema changes, making accommodations, in order to incorporate the new knowledge. True learning occurs through changes in one's schema (Piaget, 1950). The acquisition of new learning must then occur by assimilating new information into the existing cognitive schema or accommodating the existing schema to incorporate the new learning. The teacher's role in the acquisition of new learning is to assess what the student already knows, learn about the student's existing schema, and then present new knowledge in a way the student can experience so that accommodation or assimilation can occur. "It is by adapting to things that thought organizes itself and it is by organizing itself that it structures things" (Piaget, 1950, p. 8).

Piaget also believed that the acquisition of knowledge is developmentally based. This means that adults may acquire knowledge in a different way than students, lending more credibility to the need to develop a survey of student feedback on instructional delivery. As one develops, he or she is able to assimilate or accommodate more abstract knowledge than in the earlier stages. Piaget defined four stages through which one develops; the end stage, often referred to as the formal operational stage, is the stage in which one can assimilate and accommodate the most abstract material (Piaget, 1950). According to these stages, the typical middle and high school student is somewhere between stage three and stage four. Thus, while they are capable of logical thought, they struggle with abstract reasoning.

Consequently, in order to achieve high student learning gains, framing of the learning must occur. Educators must make connections to prior learning and clarify misunderstandings if they hope for students to accurately assimilate and accommodate the new learning. Additionally, they must present the information in multiple ways, offering all students the opportunity to experience the learning in ways that match their developmental stage. Students cannot be presented with new information with the expectation that they immediately understand it. They must be provided the opportunity to connect new to old, and to construct their own knowledge as a result.

It is important for the teacher to use instructional methods that assess prior knowledge and check for understanding while presenting new knowledge if the expectation is growth. Students must have pre-existing knowledge upon which to assimilate or accommodate new knowledge. Constructivism also emphasizes the importance of clarity: effective learning can only occur in the presence of clarity, and teachers need to incorporate strategies that reinforce clarity in the classroom. Middle and high school students have the ability to assess logically; therefore, they are at the correct developmental stage to provide feedback to their teachers regarding the instructional framework and its level of use.

Charlotte Danielson’s framework for teaching

In 1986 Bodner described teaching and learning by saying, "Teaching and learning are not synonymous; we can teach, and teach well, without the students learning" (p. 873); however, today the expectation is that unless students are learning, one is not teaching well. Danielson has developed a framework for teaching to capture the complexity of the teacher's responsibilities to facilitate and ensure learning. The Framework for Teaching (FFT) is based in Piaget's constructivism, and Danielson states, "Constructivism recognizes that, for all human beings-adults as well as children-it is the learner who does the learning. That is, people's understanding of any concept depends entirely on their experience in deriving that concept for themselves" (2007, p. 15). Since the learner is doing the learning, students are the target audience and should have the opportunity to reflect on the delivery of instruction as it pertains to them and provide this feedback to their teachers. Danielson's framework gives teachers and administrators a deeper understanding of the teaching practices that enhance the construction of knowledge and lead to higher learning gains.

The FFT addresses commonalities that occur in the classroom across all subjects. "Educators know, there is only a single framework for teaching. The framework for teaching is not prescriptive; it does not endorse any particular teaching methodology. It provides a structure that educators can use as a guide against which to examine their own practice" (Danielson, 2009, pp. 1-2). Reflective thinking dates back to John Dewey, who defined it as "Active, persistent, and careful consideration of any belief or supposed form of knowledge in the light of the grounds that support it and the further conclusion to which it tends" (Dewey, 1910, p. 6). The FFT provides teachers opportunities to increase reflection in their practice, as it is a set of performance standards with accompanying rubrics under which teachers and administrators can assess performance (Danielson, 2007). Danielson's framework is divided into four domains: 1-planning and preparation, 2-the classroom environment, 3-instruction, and 4-professional responsibilities (2007, pp. 26-31). Domain 3, Instruction, is further subdivided into five components: "3a communicating with students, 3b using questioning and discussion techniques, 3c engaging students in learning, 3d using assessment in instruction and 3e demonstrating flexibility and responsiveness" (Danielson, 2007, p. 29).

Domain 3 of Charlotte Danielson's Framework for Teaching will be used to develop the student feedback tool for this research project. "Domain 3 contains the components that are the essential heart of teaching-the actual engagement of students in content" (Danielson, 2007, p. 29). The student feedback tool will attempt to elicit information from the students on the teacher's instructional framework. Statements ranging from "my teacher gives challenging assignments" to "my teacher asks follow up questions" provide valuable information the teacher can use to design goals, assess professional development needs, and measure personal improvement. All questions contained in the student feedback tool have been designed based on Charlotte Danielson's Domain 3-Instruction. Danielson divides the components of Domain 3 into smaller measurable units, which assisted this researcher in developing the questions. Component 3a is divided into "expectations for learning, directions and procedures, explanations of content and use of oral and written language" (Danielson, 2007, p. 80). Sample questions from this component include "my teacher uses real life examples to explain new material" and "my teacher gives clear directions." Answers to these questions can assist educators in refining their instructional practices by setting professional goals and identifying their professional development needs.

Component 3b encompasses "quality of questions, discussion techniques and student participation" (Danielson, 2007, p. 82). Component 3b informs the teacher on how she questions and leads discussion in order to construct new learning, check for understanding, and promote deeper engagement. Component 3c includes the elements "activities and assignments, groupings of students, instructional material and resources, and structure and pacing" (Danielson, 2007, p. 85). Danielson claims that this aspect of the framework is the most important, stating, "Engaging students in learning is the raison d'etre of school; it is through student engagement that students learn complex content" (2007, p. 82). Questions related to this component include: "The work in this class challenges me"; "The activities and assignments require me to think deeply"; "My teacher provides choice of activities"; and "I start on a warm up/bell work when I take my seat." These questions speak to the degree of engagement fostered by the teacher.

Component 3d looks at using assessment in instruction and is broken down into the following elements: "assessment criteria, monitoring of student learning, feedback to students and students' self-assessment and self-monitoring of progress" (Danielson, 2007, p. 89). This element has become important as teachers have realized that appropriate feedback enhances lesson design, allows for unscrambling confusion, and keeps a check on pacing. Finally, Component 3e includes the elements "lesson adjustment, response to students, and persistence" (Danielson, 2007, p. 91). Domain 3, with its five components and multiple elements, creates a sound framework from which to elicit student feedback.
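To make this mapping concrete, the sketch below arranges the sample items quoted in this section under their Domain 3 components. It is a minimal illustration only: the item wordings come from this chapter, but the grouping shown (for example, placing the follow-up-question item under Component 3b) and the data structure itself are assumptions made for illustration, not the actual layout of the I-SAID (the full survey appears in Appendix D).

# Illustrative sketch only: sample I-SAID items grouped by Danielson
# Domain 3 component. Item wordings are quoted in this chapter; the
# grouping and structure are hypothetical, not the actual instrument.
I_SAID_SAMPLE_ITEMS = {
    "3a Communicating with students": [
        "My teacher uses real life examples to explain new material.",
        "My teacher gives clear directions.",
    ],
    "3b Using questioning and discussion techniques": [
        "My teacher asks follow up questions.",  # assumed placement
    ],
    "3c Engaging students in learning": [
        "The work in this class challenges me.",
        "The activities and assignments require me to think deeply.",
        "My teacher provides choice of activities.",
        "I start on a warm up/bell work when I take my seat.",
    ],
}

# Print the items component by component, as a teacher might review them.
for component, items in I_SAID_SAMPLE_ITEMS.items():
    print(component)
    for item in items:
        print(f"  - {item}")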


In addition to the straightforward approach taken by Danielson, the FFT "identifies those aspects of a teacher's responsibilities that have been documented through empirical studies and theoretical research as promoting improved student learning" (Danielson, 2007, p. 1). The FFT was developed by Danielson as a result of her work with the Educational Testing Service (ETS) on the Praxis III: Classroom Performance Assessment. The Praxis series is utilized for licensing beginning teachers. The best practices within the framework are based not only on the research conducted by the ETS for the Praxis series but also on the research of Madeline Hunter. "Hunter was one of the first educators to argue persuasively that teaching is not only an art but also a science; some instructional practices are demonstrably more effective than others" (Danielson, 2007, p. 7).

Charlotte Danielson has developed a rubric based on her FFT that can be utilized for self-evaluation or observation. Danielson states, "Domain 3 comprises the components that are at the core of teaching and reflects the primary mission of school to enhance student learning. Each of the components in this domain represents a distinct aspect of instructional skill" (2007, p. 249). The components of Domain 3 are each divided further into elements, which are represented in rubric form for easy self-assessment or observational assessment. The relationship between the components of the framework, the elements of the rubric that will be used for the observations, and the questions developed for the I-SAID can be viewed in Appendix A. The following section explores Albert Bandura's concept of self-efficacy as it relates to the second phase of this study.

Albert Bandura’s concept of self-efficacy

Phase two of this study explores the experiences of the teacher during the implementation and data review of the I-SAID. How teachers engage with, interpret, and put into practice the information garnered from the I-SAID may be understood through Bandura's concept of self-efficacy. This concept is embedded in Bandura's social learning theory. "Social learning theory emphasizes the prominent roles played by vicarious, symbolic, and self-regulatory processes in psychological functioning" (Bandura, 1995, p. vii). Furthermore, "social learning theory approaches the explanation of human behavior in terms of a continuous reciprocal interaction between cognitive, behavioral and environmental determinants" (Bandura, 1995, p. vii). Teachers have expressed concern over the use of SFTs for the purpose of evaluation. This concern, and how the teacher approaches the data provided by the developed instrument, may be understood through Bandura's self-efficacy concept. Bandura's theory is distinguished by its affirmation of self-regulatory behaviors; it is through these affirmations that behaviors are determined (Bandura, 1995). Teachers may approach the administration of an SFT with confidence or with ambivalence; nonetheless, there is a mandate that student feedback be incorporated into the evaluation process. How teachers approach and engage with data from SFTs may largely depend on their prior experiences with feedback. Bandura states, "The strength of people's convictions in their own effectiveness determines whether they will even try to cope with difficult situations" (1995, p. 79). Feedback can be challenging to reflect on, yet we have the capacity to do so, and our self-efficacy beliefs can assist us. "Perceived self-efficacy not only reduces anticipatory fears and inhibitions but, through expectations of eventual success, it affects coping efforts once they are initiated" (Bandura, 1995, p. 80). Participation in the pilot study affords teachers in the district the opportunity to reduce their fears of incorporating student feedback into the evaluation process and to increase their coping responses through reflection.


Gene Hall's concerns-based adoption model

Innovations occur in two phases: implementation and adoption. This study is concerned with both phases for the I-SAID. The implementation phase will allow this researcher to determine whether the tool is reliable, valid, and useful to the teacher. Analysis will be conducted to determine the potential this tool has for adoption within this school district. Utilizing student feedback as a component of the teacher evaluation process is relatively new to secondary education. New innovations can be uncomfortable, and the concerns-based model describes stages of concern that each individual typically moves through as a new innovation is implemented and adopted. "When people are excited about change they will try it. But if they perceive threat or loss, people will hold back from engaging with the process. These feelings and perceptions can be sorted into what we call concerns" (Hall & Hord, 2001, p. 68). This model identifies seven specific categories of concern about an innovation: "unconcerned, informational, personal, management, consequence, collaboration and refocusing" (Hall & Hord, 2001, p. 73). See Figure 1 for a detailed description of the seven categories. "An aroused state of personal feelings and thought about a demand is concern" (Hall & Hord, 1987, p. 59). Identifying the stages of concern of those involved in the adoption of an innovation allows the change facilitator to craft a plan that addresses both individual and group concerns in order to move forward with implementing the innovation. The Stages of Concern theory has been utilized to determine the concerns of teachers about a variety of innovations. The focus of this research was the Stages of Concern of teachers in the adoption of the I-SAID.


Figure 1

Descriptions for Categories of the Stages of Concern

6 Refocusing: The focus is on the exploration of more universal benefits from the innovation, including the possibility of major changes or replacement with a more powerful alternative. Individual has definite ideas about alternatives to the proposed or existing form of the innovation.

5 Collaboration: The focus is on coordination and cooperation with others regarding the use of the innovation.

4 Consequence: Attention focuses on the impact of the innovation on "clients" in the immediate sphere of influence.

3 Management: Attention is focused on the processes and tasks of using the innovation and the best use of information and resources. Issues related to efficiency, organizing, managing, scheduling, and time demands are utmost.

2 Personal: Individual is uncertain about the demands of the innovation, his/her inadequacy to meet those demands, and his/her role with the innovation. This includes analysis of his/her role in relation to the reward structure of the organization, decision making, and consideration of potential conflicts with existing structures or personal commitment. Financial or status implications of the program for self and colleagues may also be reflected.

1 Informational: A general awareness of the innovation and interest in learning more detail about it is indicated. The person seems to be unworried about himself/herself in relation to the innovation. She/he is interested in substantive aspects of the innovation in a selfless manner, such as general characteristics, effects, and requirements of use.

0 Unconcerned: Little concern about or involvement with the innovation is indicated. Concern about other thing(s) is more intense.

The figure groups the stages as Impact (stages 4-6), Task (stage 3), Self (stages 1-2), and Unrelated (stage 0).

Reprinted with permission (Hall & Hord, 2001, p. 73)
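As a rough illustration of how Stages of Concern data might be summarized, the sketch below identifies the peak (highest-scoring) stage in a concerns profile, a common first step in interpreting the questionnaire. It is a minimal sketch under stated assumptions: the published SoC Questionnaire converts 35 raw item responses into percentile scores using norm tables (George, 2006), which are not reproduced here, so the example scores and the simple peak-stage rule are hypothetical.

# Minimal sketch: find the peak Stage of Concern in a profile.
# The published SoCQ converts 35 raw item responses to percentile
# scores via norm tables (George, 2006); the scores below are
# hypothetical and used only to illustrate the interpretation step.
STAGES = [
    "0 Unconcerned",
    "1 Informational",
    "2 Personal",
    "3 Management",
    "4 Consequence",
    "5 Collaboration",
    "6 Refocusing",
]

def peak_stage(percentiles):
    """Return the label of the highest-scoring stage."""
    if len(percentiles) != len(STAGES):
        raise ValueError("Expected one percentile score per stage.")
    high = max(range(len(STAGES)), key=lambda i: percentiles[i])
    return STAGES[high]

# Hypothetical profile: high Informational and Personal scores would
# suggest a teacher early in adoption, needing information and support.
example_profile = [40, 90, 85, 55, 30, 25, 20]
print(peak_stage(example_profile))  # -> 1 Informational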


Theories as a framework

These theories working together assist in the development, and piloting of the I-SAID as

well as in developing a deeper understanding of how teachers could use the I-SAID as a tool for

reflective practice and meaningful conversation. The Stages of Concern identify where the

participants fall within the category of concern which accompanies the implementation of a new

innovation. Such knowledge provides the researcher with the information to craft a plan of action

for implementation of the I-SAID into the evaluation practices of the school district. Danielson’s

framework presents the structure from which the I-SAID was developed. This framework

provides insight into the questions one can ask students in order to receive feedback on the

delivery of instruction. It also provides the researcher with the framework for the I-SAID that is

based on teaching practices and captures the essential work a teacher must complete in order to

promote student learning. The framework’s grounding in constructivism provides its logical sequence. This allowed the teacher, during reflection, to assess the practices they are

using, plan professional development for areas of weakness that they may have identified

through the student feedback results, and set and measure progress on goals.

Knowledge is not all factual and static. It has to be continuously constructed

and enriched by investigation, predicting, imagining, manipulation of

information and invention. Finally, meaningful learning involves reflective

learning that seeks to resolve cognitive conflicts by improving on the prior

framework for understanding (Hamat, 2010, p. 238).

Reflective thinking and integration of new knowledge may be further explained by the

individuals’ self-efficacy. Teachers who demonstrate high self-efficacy may embrace the

opportunity to gather formative feedback to inform their practice more readily than those


teachers with lower senses of self-efficacy. Therefore, by looking at teachers’ experiences

through the lens of self-efficacy and the concerns-based adoption model, we may be able to

better construct a plan for introducing the I-SAID into the teacher’s repertoire.


Chapter 2: Literature Review

The literature reviewed examines five major themes, all related to this study. The first theme is reflective practice. This section identifies the components of reflective

practice, where reflective practice has been widely used, and how reflective practice improves

outcomes. Student growth will be the second body of literature reviewed. This body of literature

examines the historical perspective of student growth and how it is now utilized as part of the

new evaluation system. Next, teacher effectiveness/quality was reviewed in relation to its

importance for student success and to describe characteristics which identify high-quality, highly

effective teachers. The assimilation of information from this body of literature provides the

reader with an understanding of why it is important to identify the appropriate tools that lend

themselves to creating reflective practice in our teachers, with the goal of improving instructional

delivery. Then the process of teacher evaluation is discussed with a focus on its important role in

ensuring quality teachers for all students. The new Massachusetts Model System for Educator

Evaluation is reviewed in this section, and a discussion on how student evaluation tools are

currently used closes out this topic. The last body of literature discusses the use of student

feedback. This section of literature describes how the implementation of a student feedback tool

improves instruction and thus improves student learning gains.

Reflective Practice

Reflective practice has been around for centuries; however, in education it has been used

primarily by novice teachers and teachers in training. “Reflective practice was born out of

constructivism, cognitive reflection is the key process through which individuals extract

knowledge from their concrete experience” (Jordi, 2011, p. 182). John Dewey defined reflective

thought as, “active, persistent, and careful consideration of any belief or supposed knowledge in


the light of the grounds that support it, and the further conclusions to which it tends” (1910, p.

8). It is through reflection that we can make sense of our experiences and the cognitions related to those experiences. Reflection assists us in interpreting the impact experiences and behaviors have on thought processes and provides insight that experience alone cannot present. “The

purpose of reflective practice is to increase learning at the individual and organizational levels”

(Kim, 1993, p. 31).

Donald Schön emphasizes that knowing-in-action is what separates the expert from the

novice practitioner. Schön described two forms of reflection: reflection-in-action and reflection-on-action, the first being reflection that occurs in the midst of an action and the latter being a look back at the action after the fact (1987, p. 5). The reflective memo designed for this study is an

example of reflection-on-action as the teacher participants examined the data from the I-SAID

and reflected on the results and their practice. This reflective tool was designed to allow the

teacher to reflect on the frame the student holds and compare it to their own frame. “When

practitioners are unaware of their frames for roles or problems, they do not experience the need

to choose among them. They do not attend to the ways in which they construct the reality in

which they function: for them it is simply the given reality” (Schön, 1983, p. 310). Reflecting on

the frame of the student enhances the ability of the teacher to function in the reality of the

student, and how the student learns best.

Osterman and Kottkamp state that, “reflective practice is a meaningful and effective

professional development strategy. Even more, it is a way of thinking that fosters personal

learning, behavioral change and improved performance” (2004, p. 1). The I-SAID has been

developed upon an explicit framework. Reflecting on the feedback elicited by the I-SAID

provides the teacher with an authentic opportunity to develop professionally and improve their


performance. Reflecting on student feedback encourages the teacher to examine their

instructional delivery from the perspective of the student and to change their practice accordingly.

Reflective practice is critical to developing expertise. “One of the key differences

between experts and novices is that experts’ knowledge is organized and structured differently

than novices. The expert acquires a rich highly complex conceptual structure that is used

consciously to represent and reason about situations” (Marzano, 2012, p. 5). Developing

expertise as a teacher requires changes in practices that lead to improved educational outcomes.

Reflecting on student feedback provides the educator with an opportunity to assess the view from the student’s perspective and to change practices to achieve better results. It provides an excellent opportunity for action-oriented change that can be measured.

Jennifer York-Barr speaks of twelve potential benefits to incorporating reflective practice

into the profession. These benefits are,

Guidance, continuous learning, bridges between theory and practice,

consideration of multiple perspectives, productive engagement of conflict,

knowledge for immediate action, embedded formative assessment, growth in

cultural competence, understanding of role and identity, individual and collective

efficacy, strengthened connections among staff, greater professionalism and voice

and reduced external mandates (2006, p. 15).

She goes on to encourage reflective practice that begins with the individual and spirals out to

develop a culture of collaboration. Reflection and collaboration are essential pieces of the

Massachusetts Model System for Educator Evaluation and the focus of standard IV Professional

Culture on the teacher rubric. This research provides the teacher with the data to reflect both

individually and collaboratively on their students’ perspectives of their instructional delivery.


Student Evaluations of Teaching

Student feedback systems have been widely used by institutions for over forty years

now. In the late 1920s, Douglass noted the ease of collecting the opinions of students on

instruction and the potential importance of reviewing these opinions (1928). Hulpiau states,

“Evaluation systems that make use of student feedback generally serve two main functions:

improvement and accountability” (2007, p. 35). Much of the current research has focused on the

use of student feedback for accountability or evaluative purposes. “College administrators

eagerly embraced SETE [student evaluation of teacher effectiveness] in the 1960’s because they

were perceived to be able to offer a ready vehicle for assessing faculty hired to teach the droves

of students entering post-secondary institutes” (Charles, Tracy, & Robert, 2003, p. 38). Yet, the

controversy over their validity and reliability remains a constant in the conversation regarding

students evaluating teacher effectiveness.

Student evaluations of instruction have long been used to evaluate the

teaching performance of instructors. However, despite the widespread

use of data from student evaluations for the purpose of determining faculty

teaching effectiveness, a review of the literature in the area indicates that

issues concerning validity and usefulness of such evaluations remain

unresolved (Wright, 2006, p. 417).

Many researchers have found that student scores on evaluations can be related to grades, teacher

characteristics, and student perceptions. These researchers have made the argument that they are

biased instruments as a result (Cashin, 1989; Madichie, 2011; Vevere & Kozlinskis, 2011; Weinberg, Hashimoto, & Fleisher, 2009; Wright,

2006). However, a significant body of research exists which purports that while these aspects can


factor into student evaluations of teachers, they do not negate the validity or reliability of these

evaluations (Brockx, Spooren, & Mortelmans, 2011; C. L. Patrick, 2011; Remedios &

Lieberman, 2008).

Despite the disagreement in the literature, one point is salient: “SET [student evaluation

tool] instruments are only effective if they assist professors in improving teaching performance

by providing diagnostic information that can result in actionable changes” (Engelland, 2004, p.

45). The typical student feedback tool which asks questions such as, “The teacher was accepting

and supportive of students” or “The teacher handled discipline fairly” may not provide the

quality diagnostic information alluded to by Engelland. The I-SAID relies on the framework for

teaching developed by Danielson in order to provide quality diagnostic information to teachers.

Therefore, changes can be made through reflection on instructional delivery in order to improve

the quality of learning. The purpose of the I-SAID is to assist teachers in improving their

performance by identifying areas where professional development is needed and by setting professional practice goals; it may also serve to fulfill the requirements of the 2014 mandate to incorporate student feedback into the teacher evaluation system. This is all being done to improve student growth and ultimately narrow the achievement gap. “NCLB goal of 100 percent

proficient is being replaced with a new goal of reducing proficiency gaps by half by 2017”

(Chester, 2012, p. 7). Marsh, who has widely studied student evaluations of teaching states,

“Student evaluation tools are probably the most thoroughly studied of all forms of personnel

evaluation, and one of the best in terms of being supported by empirical research” (2001, p. 184).

Student Growth

The evolution of the student growth model began with the focus on education

accountability in the late 1980s and accelerated in 1994 with the reauthorization of the


Elementary and Secondary Education Act, (ESEA) by the Clinton Administration. This version

was called the Improving America’s Schools Act and was focused on, “four key elements of

comprehensive education improvement: 1) high standards for all students; 2) teachers better

trained for teaching to high standards; 3) flexibility to stimulate local reform, coupled with

accountability for results; and 4) close partnerships among families, communities and schools”

(Riley, 1995, p. 3). Goals 2000 also drafted by the Clinton Administration quickly followed and

it was signed into law on March 31, 1994. This set in motion a focus on standards and

accountability. This act provided resources to states and communities to “ensure that all students

reach their full potential” (Laboratory, 1995). The act focused on six education goals: school

readiness, school completion, student academic achievement, leadership in math and science,

adult literacy, and safe and drug free schools (Laboratory, 1995). This act later added two

additional goals focusing on professional development for teachers and increasing parental

involvement in the education of their children. Goals 2000 was the precursor to the next re-

authorization of the Elementary and Secondary Education Act, which was written in 2001 as the

No Child Left Behind Act, often referred to as NCLB. This re-authorization act brought

curriculum standards into education along with accountability for schools. Schools became

accountable not only for their entire student body but also for the performance of subgroups, such as special education and low-income students, with the goal of helping each and every child reach his or her full potential.

More recently, with the debates and discussions concerning the reauthorization of the ESEA, the focus on accountability has shifted to evaluating teachers based on student performance. The

desire to ensure highly qualified teachers in the profession is the driving force behind developing

and incorporating student growth data into the teacher evaluation process. Student growth


models originated in Tennessee where the statistician, William Sanders, “used the state’s

recently created annual test data to gauge the effectiveness of individual teachers by comparing

an estimate of how their students’ test scores were expected to grow, based on the students’

performance history, to how much their students’ test scores actually grew” (Carey, 2011, p. 2).

In 2001, President George W. Bush brought Sandy Kress on as his chief educational

advisor. Kress, a former school board member in the state of Texas, supported the use of growth

models as a measure of teacher effectiveness. “In 2005, U.S. Secretary of Education Margaret

Spellings announced that states would be allowed to apply for permission to incorporate growth

models into their accountability system” (Carey, 2011, p. 3). Although this provision has been

made to incorporate growth models into accountability systems, Secretary Spellings also decided

that the accountability system had to, “remain anchored to a criterion-referenced proficiency

measure” (Carey, 2011, p. 4). This means that schools would remain accountable for ensuring

that all students reach proficiency by 2014.

Since Secretary Spellings’ announcement that states could utilize growth models as one method to assist them in monitoring accountability, nine states have developed and implemented growth models and twelve states are in the process of doing so (Potemski, 2010, p. 2). Student growth, as defined by the U.S. Department of Education, is “The

change in student achievement for an individual student between two or more points in time”

(2009).

Massachusetts utilizes student growth percentiles in order to measure student growth.

According to this model, “student growth percentiles are a measure of student progress that

compares changes in a student’s MCAS scores to changes in MCAS scores of other students

with similar scores in prior years” (Chester, 2011, p. 1). This method of measuring student


progress was developed by Dr. Damian Betebenner at the National Center for Assessment in

Dover, NH. In the spring of 2009 the Massachusetts Department of Elementary and Secondary

Education piloted the student growth model with nine school districts. These nine districts

represented a wide population with the goal of developing reports and interpretive material for

use with growth data (Chester, 2009, p. 1). Subsequently, in the fall of 2009, preliminary growth scores were released to districts for preview, with the official release of statewide growth scores in the fall of 2010. The importance of this lies in the promise that Massachusetts

makes in its application for Race to the Top funding. The Massachusetts application states,

“Massachusetts will develop an approach to differentiate educator effectiveness using multiple

measures, including student growth data, and align these measures of effectiveness with

decisions along the educator career continuum” (D. C. Patrick, M.D., Banta, M., 2010, p. 13).

Standardized tests can assess only a limited range of skills, and there is much more that we need our students to know and be able to do than what such assessments test. This is important because, thus far, growth measures “of teaching effectiveness rest exclusively on skills assessable on very narrow standardized tests” (Corcoran, 2010, p. 14). Therefore, the need to develop phase III, a

tool to incorporate student feedback, is essential to obtaining a well-rounded picture of teacher

effectiveness and encouraging reflective practice. The Massachusetts Model System for Educator

Evaluation allows for multiple data points to assess the educator, as ultimately the goal is to

improve student learning by increasing teacher effectiveness. Danielson has provided us with the

framework to assess and develop the actions required to increase growth, thus making it the ideal

framework from which to develop the I-SAID.


Teacher Effectiveness and Teacher Quality

Policy makers continuously focus on teachers when they are looking for ways to improve

education and student achievement. The focus on teacher effectiveness is most likely a direct

result of the large portion of budgets that teacher salaries consume and of the relationship between teacher quality and student

achievement. “Reviewers of these empirical studies have almost uniformly agreed that the body

of research on teacher quality stands up to scrutiny. Teacher quality is the single most important

feature of the schools that drives student achievement” (Haskins & Loeb, 2007, p. 53).

Furthermore, Cook reports, “Teacher quality more heavily influences differences in student

performance than does race, class, or school of the student; disadvantaged students benefit more

from good teachers than do advantaged students” (2006, p. 58). The spotlight on teacher

effectiveness is important as there is a significant body of research that speaks to the importance

of “teacher quality” and “teacher effectiveness” with respect to student achievement; however,

much like defining at-risk status or absenteeism, defining what makes a teacher effective has proven to be a formidable challenge, as there is no agreed-upon definition of teacher quality (Hinchey, 2010, p. 2). What we do have is a framework for teaching

developed by Charlotte Danielson which “identifies those aspects of a teacher’s responsibilities

that have been documented through empirical studies and theoretical research as promoting

improved student learning” (2007, p. 1). College end-of-course assessments often focus on the

student’s perception of internal characteristics. These characteristics are represented in Domain 2: The Classroom Environment of Danielson’s Framework for Teaching. The I-SAID focuses on the external aspects of teaching by utilizing Domain 3: Instruction. These external aspects of

teaching can be identified by both student and evaluator observers. They can easily be enhanced

through reflective practice and professional development.


Studies representing the body of literature examining external characteristics of teacher quality have thus far focused on characteristics such as years of service, level of licensure, degree, and where the degree was earned; these studies have been inconclusive or limited at best in determining the relationship between these characteristics and student achievement. Many of these studies conclude that there is a relationship between teacher experience and student achievement; however, the effects are seen only for the years just after a teacher moves from novice to professional, and they level off with greater amounts of experience.

Additionally, education characteristics, such as the institution where the teacher was educated and degree level, produced no significant gains in student achievement (Hanushek & Rivkin, 2010; Kane, Rockoff, & Staiger, 2008; Marco & Florence, 2007; Rivkin et al., 2005; Wayne & Youngs, 2003). Coursework and certification in the subject area taught, particularly in mathematics, displayed a positive correlation with student achievement (Wayne &

Youngs, 2003, pp. 103-104). It appears that there are multiple teacher characteristics which

impact student achievement; in spite of this, “meaningful conversations about teaching and valid

evaluations of teaching must be grounded in a clear definition of practice: a framework for

teaching” (Danielson, 2008, p. 1).

It is significant that the Race to the Top is specific in its prescription of an effective

teacher. Highly effective teachers are those whose students achieve high rates of growth, defined

narrowly as “a change in test scores between two or more points in time” (Corcoran, 2010, p. 2).

Despite the variation in definitions of highly effective, one thing that most seem to agree upon is,

“a succession of good teachers goes a long way toward closing the achievement gap” (Rivkin, et

al., 2005, p. 449). Being able to identify good teachers is critical toward closing the achievement


gap and ensuring higher student learning gains for all students. The impact teacher quality has on

student achievement was a focal point of NCLB 2002.

Teacher effectiveness matters so much that low-income students lucky enough

to have three very good teachers in a row in elementary school earn test scores

that, on average, are similar to middle-class children. Conversely, almost all

children, regardless of their socio-economic status, will be harmed academically

by poor teaching three years running (National Academy of Education, 2009, p. 1).

The next section in this literature review focuses on teacher evaluation. It provides a

historical perspective as well as information on the Massachusetts Model System for

Teacher Evaluation.

Teacher Evaluation

In Massachusetts, teacher evaluation is governed by legislation tied to the Education

Reform Act of 1993 and has remained unchanged since 1995, until now. In 2009 the movement

to focus on improving student growth by improving the evaluation systems for teachers was

conceived in the federal American Recovery and Reinvestment Act. It was under this act that

the US Department of Education developed its Race to the Top Program which encouraged states

to implement comprehensive education reform.

The Race to the Top program, a $4.35 billion fund created under the American

Recovery and Reinvestment Act of 2009 (ARRA), is the largest competitive

education grant program in U.S. history, warranting unprecedented transparency

and participation to ensure the best possible results. The $4 billion for the Race to

the Top State competition is designed to provide incentives to States to implement

large-scale, system-changing reforms that improve student achievement, close


achievement gaps, and increase graduation and college enrollment rates (U.S. Department of Education, 2009).

Although Massachusetts is a Race to the Top state, control over the evaluation process in

Massachusetts previously rested at the local level in an effort to allow school districts to design

evaluation systems which address their needs. The Board of Elementary and Secondary

Education’s task force has been assigned the challenge of developing an evaluation system

which, “makes student learning and growth a significant factor in educator evaluation”

(Administrators, 2011, p. 10). Keeping the status quo with regard to teacher evaluation is simply too costly for our students.

The consequences of a poor teacher evaluation process are two-fold: little

improvement in teachers’ instruction in the classroom and the continued

employment of weak teachers. Given the profound influence that teachers have on

student achievement, accurately evaluating their performance is a natural leverage

point for increasing teacher quality and expanding student learning (Donaldson & Peske, 2010, p. 1).

Massachusetts recently released its model, entitled The Massachusetts Model System for Educator Evaluation. This model, designed by the Department of Elementary and Secondary Education, is available to districts to adopt or adapt. There are three phases to the model. Phase I

calls for the rating of educators based on attainment of goals and performance against

performance standards. Phase II requires the use of measurements of student learning gains to be

incorporated into the rating of the educator during the evaluation process. Phase III, the

requirement for educators to provide evidence of the use of student feedback as a component of


their evaluation, is the secondary focus of this research as an instrument to obtain student

feedback has not yet been developed. There is a gap in the knowledge for phase III as the

Department of Elementary and Secondary Education plans to provide guidelines for the use of

student feedback in June of 2013. This research could inform the guidelines to be developed.

Knowing that teacher quality is imperative to student achievement, it should be a priority to develop a system of evaluation that can not only identify highly effective teachers but also provide teachers with tools to increase their opportunities to reflect on and refine their

instructional delivery. Goe states, “Although research has shown that teachers are the most

significant school-based factor in student achievement, traditional methods of evaluating

teachers have not been able to capture or explain differences between effective and ineffective

teachers” (2011, p. 2). Consistency in teacher evaluation has been absent due not only to the

subjective nature of the current evaluation tools but also to the lack of consensus on

which assessment strategies accurately assess teacher performance. Hinchey describes three

categories to be considered when evaluating teachers: teacher quality, teacher performance, and

teacher effectiveness. She defines the three as follows: “Teacher quality refers to teacher

characteristics such as education, experience, and beliefs. Teacher performance refers to what the

teacher does, both inside and outside the classroom, and teacher effectiveness refers to teacher

influence on student learning” (2010, p. 3).

Currently, there appear to be two types of teacher assessment included in the evaluation process in the school district where this study is located. This evaluation system includes both

formative and summative assessment. The formative assessment occurs in the form of classroom

observations where feedback is provided to the teacher from the administrator conducting the

scheduled classroom observation. The teacher is then encouraged to reflect on the observational


data and conference with the administrator after doing so. The summative assessment combines

information collected throughout the year and, unlike the observations, is presented to the teacher

with a list of recommendations for the following year. James Popham argued that each type of

assessment is “splendid in itself but that they are counterproductive when combined” (1998, p.

270). He further describes formative evaluation as “fixing” the teacher and summative evaluation

as “firing” the teacher. This dichotomy appears problematic in that it creates an environment of

mistrust limiting the growth of the teacher and ultimately restricting the range of student

achievement. Popham’s insight may suggest the need for a comprehensive evaluation system

which approaches evaluation from multiple angles rather than a single process for which principals alone are responsible. In conclusion, the National Board for Professional

Teaching outlines five recommendations for teacher assessment systems:

Be grounded in student learning, not student achievement; employ measures of student learning explicitly aligned with the elements of curriculum for which the teachers are responsible; strive to attribute student growth to the teachers responsible; establish the link between student learning and teacher practice; and use measures that, to the greatest extent possible, reflect the full curriculum, the full scope of a teacher’s responsibilities, and the full domain of skills and competencies students are expected to develop (National Board for Professional Teaching, 2011, pp. 14-15).

Patricia Hinchey suggests a systemic model of teacher evaluation which includes

classroom observations, instructional artifacts, peer reviews, portfolios, self-assessments, student

surveys, as well as value-added assessments (Hinchey,


2010). Classroom observations have long been a component of the teacher evaluation

tools; however, there is little standardization of these observations. Currently, our school district

trains administrators and evaluators to utilize the CEIJ system. CEIJ refers to claims, evidence,

interpretation, and judgment. While the evaluator must provide evidence to support the claims they make, two evaluators observing the same lesson can produce widely different observation reports. Danielson, in her Framework for Teaching, provides tools for the educator and evaluator that refine this process and focus attention on the practices that have been

demonstrated to improve student learning. Danielson writes, “The observation of classroom

practice is the cornerstone of the evidence of a teacher’s skill; engaging students in important

learning is rightly considered to be the key to professional teaching” (2008, p. 2). This study

utilizes the rubric for classroom observation designed by Charlotte Danielson. The rubric

provides four levels of performance: unsatisfactory, needs improvement, proficient and

exemplary. Each performance level provides a description of what is expected at that particular level.

Utilizing the Framework for Teaching in classroom observations ensures that both evaluators and

teachers understand the components and how they are evident in the everyday classroom. Data

from the observation can easily be shared as the Framework provides a common language.

Furthermore, the standardization of observation provides the potential for more than supervisory observation. It allows for peer observation and self-evaluation, lending itself to deeper reflection in practice.


Chapter 3: Research Design

This section discusses in detail the development of the student feedback tool (SFT), the I-SAID. Subsequently, the site and participants are discussed in depth along with the data collection and data analysis process. Finally, a discussion of validity, credibility, generalizability, and the protection of human

subjects ends the section. The research design is a pilot study with two phases. The research

incorporated both qualitative and quantitative data. The pilot study allowed the researcher to

develop and test the validity and reliability of the I-SAID. The content validity of the dimensions

that Danielson identified was established through the pilot study. The phases of the study are

described in detail later in this section.

Research Questions

The student feedback tool, the I-SAID, was developed based on Charlotte Danielson’s Framework for Teaching. The primary driver for its development was to create a tool for teachers to use to obtain student feedback on their instructional delivery. The data gleaned

from this tool enhances the teacher’s ability to reflect on the feedback and take action to improve

their instruction for their students. A secondary purpose was to design a student feedback tool

that could assist teachers and school districts in meeting the mandates of the new evaluation

system in Massachusetts.

The research questions for the study are:

1.) What elements should be in a valid and reliable SFT?

2.) What can teachers learn from student feedback data?

3.) How do teachers incorporate student feedback as a means of improving their

instructional delivery?


4.) Can the developed SFT meet the need of the Massachusetts Model System for Teacher Evaluation?

5.) What was the indicated Stage-of-Concern regarding the implementation of the I-SAID

for selected teacher participants?

Research Sequence

In order for the researcher to assess the content validity of the survey, feedback from

critical peers was elicited. A structured interview was created to elicit feedback on the content

and readability of the I-SAID from critical peers. It was distributed through email to several

school districts in the surrounding area. The data collected yielded minor changes to the

questions as most participants approved the survey as written. The survey was then administered

by the teacher participants to their students. During this time, two observations by the researcher occurred, and each teacher was rated using the rubric developed by Charlotte Danielson for Domain 3: Instruction. Data was compiled, and student evaluation data was presented to the

teacher participants. They were given time to reflect on the data and asked to complete the

reflective memo. Lastly, once the reflective memo was completed, the teachers were asked to complete the Stages of Concern questionnaire, keeping the I-SAID in mind as the

innovation.

Approach

These research questions call for a sequential approach because the study seeks to develop

and pilot the student feedback survey, the I-SAID, as well as develop a deeper understanding of

how teachers could use the I-SAID as a tool for reflective practice and meaningful conversation.


The qualitative strand used a reflective memo in order to develop a deeper understanding

of how teacher participants used the I-SAID as a tool for reflective practice and meaningful

conversation. The participants completed the Stages of Concern questionnaire which provided

the researcher with rich information on how the participants felt about the implementation and

adoption of the innovation, the I-SAID.

The quantitative strand used a correlational approach and an examination of standard deviation, linear regression, and the coefficient of determination to explore whether the I-SAID could meet the need of the Massachusetts Model System for Teacher Evaluation. Data from the I-SAID and the classroom observations were compared in an effort to examine how the I-SAID might be used in conjunction with other measures used to evaluate teachers. Calculations of the standard deviation between the I-SAID scores and the observation scores allowed the

researcher to better assess the quality of the I-SAID instrument. A small deviation indicated that

the observer scores and the student scores were similar when evaluating the teaching practices of

the particular teacher.
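To make this comparison concrete, the following is a minimal sketch of one way such a calculation could be carried out. The component labels, score values, and the specific choice to take the standard deviation of per-component differences are illustrative assumptions, not the study’s actual data or computation.

```python
import statistics

# Hypothetical mean student ratings from the I-SAID and observer rubric
# scores for the five Domain 3 components (all values are invented).
i_said_means = {"communicating": 3.2, "questioning": 2.8, "engaging": 3.0,
                "assessment": 2.6, "flexibility": 3.1}
observer_scores = {"communicating": 3.0, "questioning": 3.0, "engaging": 3.2,
                   "assessment": 2.5, "flexibility": 3.0}

# Per-component differences between the two measures.
diffs = [i_said_means[c] - observer_scores[c] for c in i_said_means]

# A small spread in the differences suggests that student ratings and
# observer ratings track one another across components.
print("mean difference:", round(statistics.mean(diffs), 3))
print("standard deviation of differences:", round(statistics.stdev(diffs), 3))
```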

This study utilized a sequential design as data from each phase relies on the information

gathered from the previous phase. “Sequential timing occurs when the researcher implements the

strands in two distinct phases, with the collection and analysis of one type of data occurring after the collection and analysis of the other type” (Creswell & Plano Clark, 2007, p.

66). The study not only sought to develop and pilot an SFT, the I-SAID, but also to develop a

deeper understanding of how teachers could use the I-SAID as a tool for reflective practice and

meaningful conversation. It was important to assess the concerns associated with implementation

and adoption of the I-SAID as a new innovation, as this information is useful in planning a larger

scale implementation. This method also provided the researcher with the opportunity to correlate


the proposed SFT, the I-SAID, with other measures of teacher effectiveness prescribed by the

Massachusetts Model System for Teacher Evaluation. This information provided further

validation of the tool and may promote its use across Massachusetts.

Site and Participation

The participants of this study were professional status off-cycle teachers who teach in

grades 7-12 in a suburban east coast school district. Off-cycle teachers are those teachers who

will not be evaluated by an administrator during the current school year. These teachers were

solicited to safeguard the participants by ensuring that information gathered from the study

would not affect their evaluation. Teacher participants were drawn from both tested and non-

tested subject areas as Danielson’s framework is based on actions which apply to all teaching

disciplines (Danielson, 2007). Each teacher teaches five classes of approximately 24 students and

is therefore responsible for the growth of approximately 145 students. The high school principal,

who is also the researcher, conducted all classroom observations. The I-SAID was administered to students by their classroom teacher on two separate occasions utilizing a web-based survey program. All students in the classrooms of teacher participants participated in this study. Consent was sought from their parents/guardians utilizing an opt-out/passive consent procedure.

Students themselves were offered the opportunity to decline participation each time they were

asked to complete the online survey. Students were brought to the computer lab with an assigned

identification number to complete the I-SAID. The identification number was utilized so that

students could feel confident that their identity was protected. Only the researcher had access to

this information.


Protection of Participants

“Ethical practices involve much more than merely following a set of static guidelines” (Creswell, 2009, p. 88). This research study involved both teacher and student participants at

some level, thus the utmost care was taken to ensure dignity of the participants and preserve the

integrity of the study. This approach began with the inception of the study concept and was

carried throughout the study. Great consideration was given to the practical and intellectual goals

of this research so that its results may yield benefits not only to students and teachers in this east

coast suburban school district but potentially to all school districts within the Commonwealth. These

goals were shared with potential participants so that they could make an informed decision about

their potential role in the study.

During the data collection portion of this study, the rights and privacy of the participants

and site of this research were protected. Permission was sought and obtained from the

superintendent, the District Teachers Association, the parents and the students. The results of the

study will be made available to all upon its final approval. Teachers who expressed interest in

participating were met with to further describe the details of the study. The teachers who chose

to become participants were presented with an informed consent form, “acknowledging that

participants’ rights will be protected during data collection” (Creswell, 2009, p. 89). All

students and parents of students within the classrooms of the teacher participants were informed

of the study and, through an opt-out method, were also presented with an informed consent form.

Students were given another opportunity to opt out on the day the data was collected by their

classroom teacher. Student identities were protected by coding them in an effort to protect their

anonymity and privacy. Only the researcher could match the codes. In addition, the Institutional


Review Board (IRB) of Northeastern University reviewed the study and the researcher followed

all policies and procedures outlined by the IRB.

During the data analysis and interpretation, the identity of the teacher participants was

protected by having their names redacted and codes assigned to them. This researcher ensured

that all electronic data was kept in a password protected computer and that paper surveys were

kept in a locked file cabinet. Upon completion of the study all electronic files will be erased and

the paper copies shredded.

Data Collection and Analysis

Initial data regarding the language, content and readability of the I-SAID was collected

from a critical peer group which consisted of teachers and administrators. This data was

reviewed and minor adjustments were made to the I-SAID. The second part of phase one used

observational data collected by the researcher. Observations for each participating teacher

occurred on two occasions and the observations were rated utilizing the Danielson rubric for

Domain 3. In phase two the data from the I-SAID was presented to the teachers individually and

they were asked if they had any clarifying questions. Teacher participants were then asked to

complete a reflective memo. At the completion of this part of phase two teacher participants

were provided with the Stages of Concern questionnaire and asked to complete it.

Research question 1. What elements should be in a valid and reliable SFT?

The question is broken down into the following open-ended sub-questions:

a. To what extent do the responses to the questions from the I-SAID correlate with or

deviate from observational data matching components?

b. To what extent do the questions on the I-SAID accurately measure students’ perception

of the teachers’ use of the instructional framework?


To answer this research question, feedback from critical peers on the design of the I-SAID

was elicited from two independent school districts. Teachers and administrators in grades 5

through 12 were invited to provide feedback on the usefulness, readability and wording of the I-

SAID (see Appendix C). Information received from critical peers was used by the researcher to reshape the questions as needed. Responses provided by critical

peers were obtained through email. The responses were tracked and questions with more than

one quarter of the responders expressing concern were reshaped. Only questions 7 and 19 were re-

shaped by adding the term “Bell Work” as more than 25% of the responses indicated that they

frequently use this term for their activator. The review of the literature, particularly Charlotte Danielson’s Framework for Teaching, provided the content validity for the I-SAID.
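The reshaping rule described here, flagging any question that concerned more than one quarter of responders, can be sketched as follows; the tallies and responder count below are invented for illustration.

```python
# Hypothetical counts of critical-peer responses flagging a concern with
# a given I-SAID question (question number -> number of concerns raised).
concerns = {7: 5, 12: 2, 19: 6, 23: 1}
total_responders = 16

# Reshape any question flagged by more than 25% of responders.
to_reshape = sorted(q for q, n in concerns.items() if n / total_responders > 0.25)
print("questions to reshape:", to_reshape)  # -> [7, 19]
```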

Research question 2. What can teachers learn from student feedback data?

The question is broken down into the following open-ended sub-questions:

a. What aspects of the I-SAID had the most benefit for reflection?

b. What additional questions would benefit teachers in reflecting to improve their

instructional delivery and would you change any of the questions?

c. What benefit or limitations does pairing additional data have on the reflective process?

• Teacher growth data?

• Self-Assessment data?

• Observational data?

Research question 3. How do teachers incorporate student feedback as a means of improving

their instructional delivery?

The question is broken down into the following open-ended sub-questions:

a. What actions/processes did you utilize after reflecting on the data?


b. What ways will you use the I-SAID in the future?

c. How can the I-SAID assist you in developing your professional goals?

d. How can the I-SAID contribute to developing your professional development plan?

e. How could you use the results from the I-SAID to enhance peer/mentor observation?

f. What must you do to facilitate improving your instructional practice based on the results

of the I-SAID?

To answer research questions 2 and 3, a reflective memo was designed (see Appendix E) and distributed to the teacher participants. The transcripts were read through twice to become familiar with their contents. In-vivo and descriptive coding was used to describe relevant features of the text. “In-vivo coding as a code refers to a word or short phrase from the actual language found in the qualitative data” (Saldana, 2013, p. 87). Pattern coding was used next to interpret themes related to the question and sub-questions. “Pattern codes are explanatory

or inferential codes, ones that identify an emergent theme, configuration, or explanation. They

pull together a lot of material into a more meaningful and parsimonious unit of analysis”

(Saldana, 2013, p. 157). Pattern coding is used in the second cycle of coding. “Second cycle

coding is to develop a sense of categorical, thematic, conceptual, and/or theoretical organization

from your first array of First Cycle codes” (Saldana, 2013, p. 149).
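As a schematic picture of this two-cycle process, the sketch below tallies invented first-cycle codes under broader pattern codes; the codes and themes are hypothetical examples, not codes drawn from the study’s memos.

```python
from collections import Counter

# Hypothetical first-cycle (in-vivo and descriptive) codes drawn from
# reflective memo text.
first_cycle = ["wait time", "student voice", "pacing", "wait time",
               "checking understanding", "pacing", "student voice"]

# Hypothetical second-cycle pattern codes: each theme pulls several
# first-cycle codes into a more parsimonious unit of analysis.
pattern_codes = {
    "questioning and discussion": {"wait time", "student voice"},
    "managing instruction": {"pacing", "checking understanding"},
}

counts = Counter(first_cycle)
for theme, members in pattern_codes.items():
    print(theme, "->", sum(counts[m] for m in members))
```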

Research question 4. Can the developed SFT meet the need of the Massachusetts Model System

for Teacher Evaluation?

The question is broken down into the following open-ended sub-questions:

a. What is the best way to incorporate student feedback into the teacher evaluation process?


b. How could the I-SAID be used to meet the need of the Massachusetts Model System for

Teacher Evaluation requirement of incorporating evidence of student feedback?

To answer research question 4, responses from the reflective memo were analyzed utilizing

the same methods as those used to answer research question 3.

Research question 5. What was the indicated Stage-of-Concern regarding the implementation of

the I-SAID for selected teacher participants?

The question is broken down into the following sub-questions:

(a) Were there differences in the teachers’ Stage of Concern when grouped by their grade

level?

(b) Were there differences in the teachers’ Stages of Concern when grouped by years of

experience?

To answer question 5 and its sub-questions, data from the Stages of Concern questionnaire was examined and separated into subgroups of grade level and number of years of service.
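The published Stages of Concern questionnaire contains 35 items, five per stage, and its raw stage scores are normally converted to percentile scores using tables supplied with the instrument; that conversion is omitted in the minimal sketch below, and all participant data shown are invented.

```python
# Hypothetical raw SoCQ stage scores (stages 0-6) for three teacher
# participants, tagged with grade level and years of experience.
participants = [
    {"grade": "7-8",  "years": 4,  "stages": [10, 25, 22, 18, 12, 9, 7]},
    {"grade": "9-12", "years": 15, "stages": [6, 14, 12, 20, 24, 18, 11]},
    {"grade": "9-12", "years": 9,  "stages": [8, 20, 19, 16, 15, 12, 10]},
]

def peak_stage(p):
    """Return the stage (0-6) whose raw score is highest."""
    return p["stages"].index(max(p["stages"]))

# Group peak stages by grade level, mirroring sub-question (a); the same
# pattern applies to grouping by years of experience for sub-question (b).
by_grade = {}
for p in participants:
    by_grade.setdefault(p["grade"], []).append(peak_stage(p))
print(by_grade)  # -> {'7-8': [1], '9-12': [4, 1]}
```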

Validity and Reliability

The design of this pilot study incorporated the use of quantitative data to demonstrate the instrument validity of the I-SAID. Fraenkel states, “It is not uncommon for researchers to examine the relationship of responses to one question in a survey to another or of a score based on another set of data” (2009, p. 392). This research attempted to determine the correlation of survey questions on the I-SAID to corresponding components on the Danielson teacher observation rubric. While a perfect correlation was not expected, a correlation which meets the significance level of p ≤ .05 was considered significant. According to Fraenkel and Wallen,

validity is the “appropriateness, correctness, meaningfulness, and usefulness” (2009, p. 147) of


the instrumentation. Fraenkel and Wallen further state that the importance of validity for

instrumentation rests in the ability to draw correct conclusions from the instrument (2009, p.

148).

The I-SAID has been assessed for content and criterion-related validity by aligning the questions to the particular components of Domain 3 from the FFT by Danielson. Criterion-related

validity is defined as, “when a correlation is used to describe the relationship between a set of

scores obtained by the same group of individuals on a particular instrument and their scores on

some criterion measure” (Fraenkel, 2009, p. 152). The higher the correlation, the more assured one can be that I-SAID questions can estimate future performance on the observational scoring rubric for Domain 3 of the FFT; however, for the purposes of this study, a correlation significant at the p ≤ .05 level was considered sufficient.
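For reference, the correlation referred to throughout this section is the standard Pearson product-moment coefficient between paired scores, here a student-rating score and the matching observation-rubric score across n score pairs; the formula is standard and is supplied only as a reader aid.

```latex
r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}
         {\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^{2}}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^{2}}}
```

Whether a given r is significant at p ≤ .05 then depends on the number of pairs n, since smaller samples require larger correlations to rule out chance.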

Reliability of this study was determined using a test-retest method. “Reliability refers to

the consistency of the scores obtained-how consistent they are for each individual from one

administration of an instrument to another” (Fraenkel, 2009, p. 154). The standard deviation

between the two administrations of the I-SAID was calculated to determine if the I-SAID can be

administered over time on separate occasions and yield the same results, provided the instruction remains relatively constant. Standard deviation is “the most useful index of variability. It is a single

number that represents the spread of a distribution” (Fraenkel, 2009, p. 195). The reliability

coefficient is able to “express the relationship between scores of the same individuals on the

same instrument at different times” (Fraenkel, 2009, p. 155). For the purposes of this research

study, the time lapse between administrations of the I-SAID was no longer than two weeks. This

was done in order to minimize the potential of finding a lower reliability coefficient due to

change in instruction after the first administration of the I-SAID. According to Fraenkel and


Wallen, “for most educational research, stability of scores over a two-to-three month period is

usually viewed as sufficient evidence of test-retest reliability” (2009, p. 156). Finally, the

coefficient of determination was calculated. “The coefficient of determination indicates the

percentage of variability among the criterion scores that can be attributed to differences in the

scores on the predictor variable” (Fraenkel, 2009, p. 332).
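Because the coefficient of determination is simply the square of the correlation coefficient, a small worked example illustrates the interpretation quoted above; the value of .70 is invented for illustration.

```latex
r^{2} = (0.70)^{2} = 0.49
```

That is, a test-retest correlation of .70 would indicate that roughly 49% of the variability in scores on the second administration can be attributed to differences in scores on the first.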

Limitations

Limitations occur in every research study and should be taken into consideration when

attempting to generalize results. The time lapse between the first administration and the second

administration of the I-SAID is a limitation to this study. While the time lapse was not greater

than two weeks, it is possible that a teacher participant made changes to his or her instructional delivery significant enough to change student perceptions between administrations of the I-

SAID. The researcher attempted to control this limitation by not providing the data from the

student feedback until all administrations of the I-SAID were completed.

A second limitation to this study occurred through the use of a survey for this research.

Fraenkel and Wallen identify three potential biases in survey research: one must ensure that the questions are clear and not misleading, respondents may not answer survey questions truthfully, and the number of respondents must be large enough to make analysis of the data meaningful (2009, p. 12). Attempts were made through the use of a critical peer group to review

and provide feedback on the I-SAID to ensure that questions are clear and not misleading;

however, limitations remain with regard to participants answering truthfully, and further study should occur to determine whether students understood the questions as the author intended. This could be accomplished through the use of student focus groups.


The researcher’s role as principal of a school within this east coast suburban school

district could be a prospective source of bias. The researcher’s role may have limited the number of teachers who chose to participate in this study. Furthermore, participation rates may have been limited, as teachers may have been unsure of the confidentiality of the results and the influence the results might have on future evaluations of their job performance. This was addressed by reviewing the

procedures individually with all who were interested in participating. Assurance of anonymity

was provided to teacher subjects and the only persons to review individual data were the

researcher and the teacher participant.


Chapter 4: Report of Research Findings

The primary focus of this research was to explore the experience of the teacher as he/she utilizes student feedback to improve his/her instructional practice. This chapter is organized into an introduction, the body of research, and a conclusion. The introduction reviews the scope and sequence of the research. The body of research is divided into seven sections, one for each teacher participant, presenting the results by teacher: data from the student feedback tool, the observations, the reflective memo, and the Stages of Concern questionnaire. The final section summarizes the results in order to preview Chapter 5.

The teachers in this study presented their students with the I-SAID on two occasions. During the same time frame, each teacher was observed twice by this researcher. A data meeting was held with each teacher, and the student feedback data was shared at this time; an example of the data shared with each teacher participant can be viewed in Appendix H. The pilot instrument, the I-SAID, was developed to provide an opportunity for teachers to reflect on student feedback, and in response to the upcoming requirement from the Massachusetts Department of Elementary and Secondary Education to incorporate evidence of student feedback on the instructional practices of teachers into the new teacher evaluation system by 2014. The I-SAID was designed utilizing the Framework for Teaching authored by Charlotte Danielson and reflects the components of Domain 3, Instruction: communicating with students, questioning and discussion techniques, engaging students in learning, using assessment in instruction, and demonstrating flexibility and responsiveness. Each component is further divided into three elements. The I-SAID contains two to three questions for


each element contained in Domain 3 of Danielson's Framework for Teaching. The breakdown of questions to components and elements can be reviewed in Appendix A.

In order to develop, pilot, and evaluate the newly designed I-SAID, a number of processes occurred. The first was to determine the reliability of the instrument; a test-retest method was utilized, and test-retest scores for each teacher participant can be viewed in Appendix B. Additionally, utilizing the corresponding teacher observation rubric from Charlotte Danielson's Framework for Teaching, the scores from the I-SAID were compared with the scores from the observations conducted by this researcher, a trained observer. This process speaks to the validity of the instrument. Finally, the usefulness of the instrument was explored through the reflective process: the participants completed a reflective memo after reviewing their individual student data. An example of the data presented to each teacher for purposes of reflection is represented in Appendix H. The last stage of the process asked the teacher participants to complete the Stages of Concern Questionnaire. Since implementing a student feedback tool is foreign to these teachers, this questionnaire assisted the researcher in determining where in the change process the teachers are and how best to plan for further school-wide implementation based on concerns expressed by this pilot group.

Table 1 indicates the questions, by number, that relate to each component of Danielson's Framework for Teaching for Domain 3. The components are further divided into elements, and the I-SAID questions can likewise be divided by element. However, for the purposes of this study Domain 3 is examined at the component level, as the observation rubric utilized examines teacher performance at this level. The specific questions can be viewed in Appendix A, separated out by component and element, or in Appendix D as presented to the students on the I-SAID instrument.


Table 1

Relationship of I-SAID Questions to Components

Component   Questions
3a          1-7
3b          8-13
3c          14-19
3d          20-25
3e          26-31

Teacher Participant MS1

Teacher MS1 is an 8th grade English Language Arts teacher. MS1 has been teaching in the district for twelve years. MS1 is the curriculum team leader for ELA as well as the mentor supervisor, and she recently re-wrote the mentoring curriculum, which focuses on supporting best instructional practices for new teachers. MS1 has been recognized as a Massachusetts Teacher of the Year finalist and is a Wal-Mart Teacher of the Year recipient.

Table B1 in Appendix B represents the average student rating scores on the I-SAID, per question, from the first and the second time the students scored their teacher, MS1. The standard deviations between the first and second rating scores are small, allowing the researcher to conclude that the instrument is reliable for teacher MS1, as the scores were not widely dispersed. These results suggest that one could expect similar scores on the I-SAID repeatedly if there were no change in instruction.

Teacher MS1 completed the reflective memo after spending some time reviewing her student data (see Appendix H). According to teacher MS1, she felt that as a result of reviewing the data she "needs to do more pre-assessments to ascertain at what entry point my students are coming to for a unit of study". In response to the question, what actions/processes did you utilize after reflecting on the data, she further indicated, "I created a number of pre-assessments". Finally, she indicated that "question #25 (using pre-tests…) was very important in knowing what level of knowledge my students bring to a new unit of study. I have not been assessing that but need to in the future to improve my instruction". In response to research question 3, how do teachers incorporate student feedback as a means of improving their instructional delivery, teacher MS1 wrote, "A SMART goal can be developed to improve upon my shortcomings in my repertoire of teaching tools based on how the student answered the survey questions". She further wrote, "the SMART goal can become part of my professional development plan".

The questions for MS1 were grouped by component for the first and second ratings, and the standard deviations between the first and second ratings per component were calculated to test whether there was consistency between the ratings. This speaks to the reliability of the instrument. Small standard deviations indicate that the two scores were not widely dispersed, thus indicating that the I-SAID is a reliable tool for reporting ratings based on the groupings of questions that form components. Table 2 depicts this data.

Table 2

MS1 I-SAID Scores by Component (with Standard Deviations in Parentheses)

Component   Average Score, First Testing   Average Score, Second Testing   Mean Score, Both Testings
3a          3.31 (.11)                     3.14 (.11)                      3.22
3b          3.11 (.06)                     3.02 (.06)                      3.06
3c          3.15 (.05)                     3.07 (.05)                      3.11
3d          3.05 (.05)                     2.97 (.05)                      3.01
3e          3.27 (.14)                     3.06 (.14)                      3.17
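As a concrete illustration, the minimal sketch below recomputes Table 2's per-component deviations, assuming the tables' "Stdeva" is the sample standard deviation of the two administration means (Excel's STDEV); small differences from the printed values reflect rounding of the underlying data.

    from statistics import mean, stdev

    # Per-component average scores for MS1 from the first and second
    # administrations, transcribed from Table 2.
    ms1_by_component = {
        "3a": (3.31, 3.14),
        "3b": (3.11, 3.02),
        "3c": (3.15, 3.07),
        "3d": (3.05, 2.97),
        "3e": (3.27, 3.06),
    }

    for component, (first, second) in ms1_by_component.items():
        # stdev() uses the sample (n - 1) formula, matching Excel's STDEV;
        # small differences from the printed table reflect rounding.
        print(f"{component}: mean = {mean((first, second)):.2f}, "
              f"stdev = {stdev((first, second)):.2f}")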

Table 3 examines the observation scores of MS1 for observation one and observation

two. The standard deviation was calculated for the scores of the first and second observation by


component. Again the standard deviations were small, indicating that the scores were not widely

dispersed, thus the repeated observations for MS1 appear reliable.

Table 3

MS1 Scores from Observation Instrument (with Standard Deviations in Parentheses)

Component   First Observation Score   Second Observation Score   Mean Score for Observations
3a          3.50 (0)                  3.50 (0)                   3.50
3b          3.50 (0)                  3.50 (0)                   3.50
3c          3.50 (0)                  3.50 (0)                   3.50
3d          3.50 (0)                  3.50 (0)                   3.50
3e          3.00 (.35)                3.50 (.35)                 3.25

Figure 2 below visually demonstrates the students' scoring of MS1 and the observer's scoring of MS1 on the components within Domain 3. This comparison is done in order to assess the validity of the I-SAID; the I-SAID is a valid instrument if it measures what it claims to measure. It is plausible to state that what the students saw in relation to Domain 3, the observer saw in a very similar manner. The scores do not deviate widely. Overall, the observer scored teacher MS1 slightly higher than the students did; this will be discussed later in chapter five. The standard deviations between the average observation score and the average student scores on the I-SAID per component are very low. This indicates that these scores are not widely dispersed, which is a sign that both the observer and the students saw very similar things with respect to the instructional delivery of MS1. The numerical results can be seen below in Table 4. As a result, it appears that the I-SAID is a valid tool in the case of MS1.

Figure 2

Juxtaposition of the Mean Score from I-SAID and Observation Instrument by Component for MS1. [Figure: horizontal bar graph comparing the mean I-SAID score and the mean observation score for components 3a through 3e; scale 0-4.]

Table 4

MS1 Observation and I-SAID Score Comparisons (with Standard Deviations in Parentheses)

Component   Mean Observation Score   Mean I-SAID Score
3a          3.50 (.19)               3.22 (.19)
3b          3.50 (.31)               3.06 (.31)
3c          3.50 (.27)               3.11 (.27)
3d          3.50 (.34)               3.01 (.34)
3e          3.25 (.05)               3.17 (.05)

Figure 3 below is the line graph for the Stages of Concern Questionnaire for teacher participant MS1. "The Stages of Concern Questionnaire is the primary tool for determining where an individual is in the stages relative to the implementation of an innovation. They are called stages because usually there is developmental movement through them" (George, 2006, pp. 6-7). MS1's scaled score for Stage 0 was 22. This low score indicates that the I-SAID is of

high priority to her and that student feedback is central to her thinking when reflecting on her teaching. MS1's scaled score for Stage 1 is 69, indicating that MS1 is interested in learning more about the I-SAID; she is interested in fundamental areas such as the reliability and validity of the instrument, how the instrument can assist her in growing professionally, and whether or not the instrument captures her teaching accurately. Stage 2, with a scaled score of 78, is her second-highest scaled score. This stage speaks to MS1's personal concerns regarding the I-SAID; concerns relative to this stage may include: Will the students rate me fairly? Will my evaluator hold student feedback in perspective relative to other available data? How will students who do not do as well as expected rate my teaching? Teacher MS1 had low scaled scores for Stages 3 and 4, 23 and 21 respectively. Low scores in these stages indicate that teacher MS1 does not have concerns regarding management; she does not have concerns relative to organizing, managing, and scheduling the administration of the I-SAID. MS1's highest scaled score was for Stage 5, collaboration; given her role as curriculum team leader and mentoring supervisor, this is not surprising. Finally, Stage 6, refocusing, was a relatively low score, indicating that MS1 is comfortable with the I-SAID and desires to make little to no change to the instrument.


Figure 3

Line Graph of the Scaled Scores for the Stages of Concern Questionnaire for Teacher MS1. [Figure: relative intensity by stage; scaled scores 22, 69, 78, 23, 21, 84, and 30 for Stages 0 through 6.]

Teacher Participant HS1

HS1 is a foreign language teacher at the high school level. This is her first year teaching at the high school, but her seventh year teaching in the district; prior to this year she taught at the middle school. HS1 is also one of the coordinators of the mentoring program and developed its curriculum with teacher MS1. With a partner she trains mentors and delivers monthly instruction to the mentors and mentees. She was the curriculum team leader while at the middle school.

HS1 was presented with the data from her students and asked to reflect on the data. She indicated that "I liked the variety of questions which address multiple areas" was the most important aspect of the I-SAID in relation to reflection. She felt that the instrument could be improved by "utilizing the language I specifically use, for example I call the activator a warm up"; additionally, she indicated that "it would be helpful to have some feedback regarding the pacing of my lesson. Question 28 touches upon it but maybe something more specific." When reflecting on the pairing of data from the I-SAID with teacher growth data, teacher HS1



writes, “Comparing I-SAID data from year to year with the teacher growth data may paint a

more complete picture. If teachers are able to use the data from the I-SAID and improve their

instructional strategies, the growth data should also show improvement." HS1 indicates that pairing the I-SAID student results with a self-assessment using the I-SAID could "benefit a teacher as they can see how aligned their self assessment is with the feedback from their students." When asked how beneficial it would be to pair the I-SAID data with observational data, HS1 commented:

the data from the I-SAID would most likely be based on typical practices over a longer period rather than an observation or walk through that shows a snap shot. Some of the information gathered on the I-SAID may not be gathered in an observation; for instance, depending on when the observation took place, the observer may not see the teacher use pre-tests, but the students could determine if this is done.

HS1 reflected on the information provided by her students by "using the average score to find the areas ranked the lowest and then I tried to think of examples in my teaching practice that would have led the students to answer in the way they did. The pie charts were helpful by providing a visual representation for the data." HS1 indicated that the I-SAID "is helpful in pointing out areas of strength and where improvement is needed. If I would be able to give this same I-SAID to students in June I would hope to see the numbers shift towards the 'strongly agree' category. I think it is a nice tool in addition to self-reflection and observation to create a more complete picture." Finally, HS1 feels that the I-SAID is "helpful. The mentor could review the results before observing the mentee and look for specific things during the observation." "It (I-SAID) is useful in making a long term plan to address areas needing improvement". HS1 states, "I think I need to search for more opportunities to offer my students choices (question #16). Perhaps finding a way to incorporate more choice would help reach the students who are reluctant to study a foreign language." HS1's reflection on the data allows her to cite specific areas for improvement. She is able to see the benefits of pairing data to create a complete picture, as well as the I-SAID's use as a tool to improve the conversation between the mentor and mentee.

Table B2 in Appendix B indicates consistency in the results of the test and re-test administrations of the I-SAID. Standard deviation scores are low, which suggests that the scores from the first time the students rated HS1 and the second time they rated HS1 do not vary widely. Table 5 represents the I-SAID component scores for the two administrations of the instrument. The deviations for the component scores are very small, thus supporting the reliability of the instrument.

Table 5

HS1 I-SAID Scores by Component (with Standard Deviations in Parentheses)

Component   Average Score, First Testing   Average Score, Second Testing   Mean Score, Both Testings
3a          3.16 (.02)                     3.12 (.02)                      3.14
3b          2.88 (.02)                     2.85 (.02)                      2.87
3c          3.02 (.08)                     3.15 (.08)                      3.08
3d          3.12 (0)                       3.12 (0)                        3.12
3e          2.93 (.14)                     3.13 (.14)                      3.03

Table 6 presents the observation scores for HS1 on the first and second occasions. Observer scores from the first observation to the second did not deviate much; the observer saw very similar instructional strategies on both occasions for HS1.


Table 6

HS1 Scores from Observation Instrument (with Standard Deviations in Parentheses)

Component   First Observation Score   Second Observation Score   Mean Score for Observations
3a          3.00 (.35)                3.50 (.35)                 3.25
3b          3.00 (0)                  3.00 (0)                   3.00
3c          3.00 (0)                  3.00 (0)                   3.00
3d          3.00 (0)                  3.00 (0)                   3.00
3e          3.00 (0)                  3.00 (0)                   3.00

Figure 4 and Table 7 are comparative models, comparing the student-reported data from the I-SAID with the observer data from the observations. When rating teacher HS1, the students and the observer appear to rate this teacher participant very closely. The component with the greatest deviation is component 3b, yet even this deviation is small, 0.09. A discussion of the possible causes for the deviations will occur in chapter five. It can be concluded by reviewing the data for HS1 that the I-SAID is a valid tool to measure instructional delivery for teacher HS1.
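For illustration, each reported deviation appears consistent with the sample standard deviation of the pair of mean scores; the sketch below, using HS1's component 3b values from Table 7, reproduces the 0.09 figure. This is an assumption about the computation, not a statement from the study's methods.

    from statistics import stdev

    # HS1's component 3b scores from Table 7: the mean observation score
    # and the mean I-SAID score.
    mean_observation = 3.00
    mean_isaid = 2.87

    # Sample standard deviation of the pair; approximately 0.09, the
    # largest deviation reported for HS1.
    print(f"deviation = {stdev([mean_observation, mean_isaid]):.2f}")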


Figure 4

Juxtaposition of the Mean Score from I-SAID and Observation Instrument by Component for HS1. [Figure: horizontal bar graph comparing the mean I-SAID score and the mean observation score for components 3a through 3e; scale 0-4.]

Table 7

HS1 Observation and I-SAID Score Comparisons (with Standard Deviations in Parentheses)

Component   Mean Observation Score   Mean I-SAID Score
3a          3.25 (.07)               3.14 (.07)
3b          3.00 (.09)               2.87 (.09)
3c          3.00 (.05)               3.08 (.05)
3d          3.00 (.08)               3.12 (.08)
3e          3.00 (.02)               3.03 (.02)

The final piece of data for HS1 is the Stages of Concern Questionnaire that she completed at the conclusion of the pilot study. HS1's score of 97 on Stage 0 indicates that she feels there are a number of initiatives, tasks, and activities that are of concern to her; therefore this innovation, the I-SAID, is of little concern to her. Stage 1 has a scaled score of 75, which indicates interest in the I-SAID and that she would like to know more about it.


Fundamental knowledge of how to implement, interpret, and use the I-SAID would be of most interest to HS1. Stage 2, with a scaled score of 87, indicates personal concerns; concerns in this area relate to the effects the innovation may have on HS1 personally. In order for implementation of the I-SAID to be successful with HS1, she will have to feel that the results yielded from the I-SAID will not jeopardize her career. Stage 3, management, detects concerns over the time demands and logistical aspects of the I-SAID; HS1's scaled score of 27 indicates that she has little to no concern regarding the management of implementing the I-SAID. Similarly, HS1 has a low Stage 4 scaled score, indicating that she has minimal concerns about the effects of the I-SAID on her students. Stage 5, with a scaled score of 55, and Stage 6, with a scaled score of 65, are within the average range. Stage 5 assesses collaboration, the investment of the individual in collaborating with colleagues around the use of the I-SAID; HS1's score of 55 indicates that she is interested in such collaboration. Stage 6, refocusing, with a scaled score of 65, indicates that HS1 may be interested in refining or looking for alternatives to the I-SAID; however, this is not a strong desire of hers.


Figure 5

Line Graph of the Scaled Scores for the Stages of Concern Questionnaire for Teacher HS1. [Figure: relative intensity by stage; scaled scores 97, 75, 87, 27, 30, 55, and 65 for Stages 0 through 6.]

Teacher Participant MS2

Teacher participant MS2 is a middle school mathematics teacher. MS2 is a career changer who began her career in business and has been teaching middle school mathematics for twelve years. MS2 was presented with the data from her students in order to reflect and respond to the reflective memo. MS2 found that the most important benefit of the I-SAID was "to see the student's perspective and look for ways to improve teaching effectiveness". MS2 would like to see questions specific to challenging mathematical processes added to the I-SAID as a means of improving the tool. This teacher felt that questions 21, "I know when and why my work meets or does not meet the standards/teacher's expectations"; 24, "My teacher asks questions to check to see if we understand"; and 31, "my teacher has high expectations for my success" provided the most informative data. MS2 also indicated that the tool could be "used to measure growth" and that it could be used to "focus on key areas as needed" and provide



information for teachers to "look for professional development in the areas of focus". In particular, teacher MS2 felt that the I-SAID has led her to "specifically work on more deliberate ways to emphasize the learning goals, closing activities and improve my wait time."

The questions for MS2 were grouped by component for the test and re-test administrations of the I-SAID, and the standard deviations between the first and second ratings per component were calculated to test whether there was consistency between the ratings. Small standard deviations indicate that the two scores were not widely dispersed, thus indicating that the I-SAID is a reliable tool for reporting ratings based on the groupings of questions that form components. One can expect that administering the I-SAID on two separate occasions with little to no change in instruction will produce similar scores.

Table 8

MS2 I-SAID Scores by Component (with Standard Deviations in Parentheses)

Component   Average Score, First Testing   Average Score, Second Testing   Mean Score, Both Testings
3a          3.08 (.06)                     3.00 (.06)                      3.04
3b          2.83 (.01)                     2.86 (.01)                      2.84
3c          2.82 (.08)                     2.94 (.08)                      2.88
3d          3.13 (.05)                     3.05 (.05)                      3.09
3e          2.96 (.03)                     3.01 (.03)                      2.98

Table 9 is the compilation of the data for the first and second observations by component. The observer scored teacher participant MS2 very similarly on observations one and two, with the highest deviation occurring on component 3a (0.35).


Table 9

MS2 Scores from Observation Instrument (with Standard Deviations in Parentheses)

Component   First Observation Score   Second Observation Score   Mean Score for Observations
3a          3.00 (.35)                2.50 (.35)                 2.75
3b          2.50 (0)                  2.50 (0)                   2.50
3c          3.00 (0)                  3.00 (0)                   3.00
3d          3.00 (0)                  3.00 (0)                   3.00
3e          2.50 (0)                  2.50 (0)                   2.50

Figure 6 and Table 10 compare the data from the I-SAID to the observation data for teacher participant MS2. Thus far, the greatest deviation between what the students reported through the I-SAID and what the observer reported on the scoring rubric is found in teacher participant MS2's scores. Reasons for the deviation will be explored further in chapter five.

Figure 6

Juxtaposition of the Mean Score from I-SAID and Observation Instrument by Component for MS2. [Figure: horizontal bar graph comparing the mean I-SAID score and the mean observation score for components 3a through 3e; scale 0-4.]

Table 10

MS2 Observation and I-SAID Score Comparisons (with Standard Deviations in Parentheses)

Component   Mean Observation Score   Mean I-SAID Score
3a          2.75 (.20)               3.04 (.20)
3b          2.50 (.24)               2.84 (.24)
3c          3.00 (.08)               2.88 (.08)
3d          3.00 (.06)               3.09 (.06)
3e          2.50 (.33)               2.98 (.33)

Figure 7 is the line graph representing teacher participant MS2's scaled scores on the Stages of Concern Questionnaire. Generally speaking, MS2 has high scores across all stages; this individual poses the most resistance to the adoption of a new innovation. She feels strongly that there are too many initiatives to add another one on, and she has strong personal reservations about adopting the I-SAID. MS2 feels that the management of the I-SAID would pose challenges, and she has concerns about collaborating with others around its use. Her lowest score was for Stage 4, with a score of 63, indicating that she has less concern about the implications the I-SAID has on her students than in the other stages.


Figure 7

Line Graph of the Scaled Scores for the Stages of Concern Questionnaire for Teacher MS2. [Figure: relative intensity by stage; scaled scores 87, 99, 92, 83, 63, 71, and 77 for Stages 0 through 6.]

Teacher Participant HS2

Teacher participant HS2 is a foreign language teacher at the high school level. She has been teaching in this district for five years and has previous teaching experience from another district as well. She teaches all levels of Spanish, and she is the English as a second language teacher coordinator as well as the curriculum team leader for the department. Teacher participant HS2 administered the I-SAID to her students and then reflected on the data, sharing those reflections through the completion of a reflective memo. She reported that the "importance of bell activities and activators" provided her the most benefit for reflection. When asked for suggestions, she stated, "create some frequency rated questions as opposed to all agree/disagree", as well as including the following questions: "are you engaged in class? How much elapses between taking an assessment and having it returned?" HS2 did not comment on the usefulness of pairing data from the I-SAID with additional data. She indicated that the most informative data came from questions 24 and 25, which are "my teacher asks questions to check



to see if we understand" and "my teacher uses pre-tests, review games/activities before a test to see if we understand the material". HS2 stated that she is "not sure what is meant by students asking questions of other students" when asked which questions seemed irrelevant to her subject area or too difficult for students to answer. Finally, she stated that "it is helpful in seeing how students perceive our teaching" when asked how the I-SAID could assist her in developing her professional goals, and she identified "more activators and more emphasis on objectives, perhaps pass out an agenda" as what she needs to do to facilitate improving her instructional practice.

HS2's I-SAID data by component for the test and re-test administrations is contained in Table 11. The standard deviation between the two administrations is very low for all components, thus indicating that the I-SAID was reliable when used in this pilot situation. Students in HS2's class viewed her instructional delivery very similarly on two separate occasions. This strong similarity holds true not only for the data from the students on the I-SAID but also for the data from the observer on the observation rubric from the first scoring to the second. This data is depicted in Table 12.

Table 11

HS2 I-SAID Scores by Component (with Standard Deviations in Parentheses)

Component   Average Score, First Testing   Average Score, Second Testing   Mean Score, Both Testings
3a          3.05 (.03)                     3.01 (.03)                      3.03
3b          3.17 (.02)                     3.13 (.02)                      3.15
3c          2.55 (.05)                     2.62 (.05)                      2.58
3d          3.25 (.01)                     3.23 (.01)                      3.24
3e          3.46 (.10)                     3.31 (.10)                      3.38


Table 12

HS2 Scores from Observation Instrument (with Standard Deviations in Parentheses)

Component   First Observation Score   Second Observation Score   Mean Score for Observations
3a          3.00 (0)                  3.00 (0)                   3.00
3b          3.00 (0)                  3.00 (0)                   3.00
3c          2.50 (0)                  2.50 (0)                   2.50
3d          3.00 (.35)                3.50 (.35)                 3.25
3e          3.50 (0)                  3.50 (0)                   3.50

Figure 8 and Table 13 compare the results from the students on the I-SAID to the observer ratings on the rubric. The standard deviations between the I-SAID and the observation scores are low for all components, indicating that the I-SAID is a valid measurement of student assessment of instructional delivery.

Figure 8

Juxtaposition of the Mean Score from I-SAID and Observation Instrument by Component for HS2. [Figure: horizontal bar graph comparing the mean I-SAID score and the mean observation score for components 3a through 3e; scale 0-4.]


Table 13

HS2 Observation and I-SAID Score Comparisons (with Standard Deviations in Parentheses)

Component   Mean Observation Score   Mean I-SAID Score
3a          3.00 (.02)               3.03 (.02)
3b          3.00 (.10)               3.15 (.10)
3c          2.50 (.05)               2.58 (.05)
3d          3.25 (0)                 3.24 (0)
3e          3.50 (.08)               3.38 (.08)

Figure 9 below represents the scaled scores for teacher participant HS2 on the Stages of Concern Questionnaire. HS2's profile suggests a high level of resistance to the innovation, the I-SAID. The tailing up of Stage 6 suggests that she has strong ideas about how to gather and use student feedback differently than what has been presented by the I-SAID; this desire is also mirrored throughout her reflective memo. Like several of the teacher participants, HS2 has a high scaled score for Stage 0, which indicates that she is concerned about the number of innovations she presently has to deal with and that this innovation is not a priority. The high scaled score for Stage 1 indicates that HS2 is interested in learning more about the I-SAID. HS2 appears to have intense personal concerns regarding the I-SAID, as evidenced by her scaled score of 92 for Stage 2; personal concerns include the fear of performing inadequately on the I-SAID and the consequences that could bring. This will be discussed further in chapter five. Managing the implementation of the I-SAID is another concern of HS2 with relatively high intensity. Her concern over consequence is relatively low compared to her other scores, indicating that she is less concerned about the impact of the I-SAID on her students, and HS2 is least concerned about working with others on the use of the I-SAID. This profile is one which will require a great deal of exposure to the


innovation and will require the participant to contribute to further use and development if

implementation becomes required. HS2’s strong scaled score on Stage 6 could indicate that she

may easily sabotage the implementation if not allowed adequate input.

Figure 9

Line Graph of the Scaled Scores for the Stages of Concern Questionnaire for Teacher HS2. [Figure: relative intensity by stage; scaled scores 99, 93, 92, 92, 66, 36, and 92 for Stages 0 through 6.]

Teacher Participant MS3

Teacher participant MS3 has been teaching in the district as a middle school science teacher for five years. She holds a PhD in biology and is a second-career teacher, as she was previously a researcher for a reproductive science company. She is currently mentoring a new teacher. MS3 was provided her student data for reflection and then completed the reflective memo. When asked which aspect provided her the most benefit for reflection, MS3 stated, "I reflected on the question that involved reviewing material at the end of the class and checking if we met the goal". When answering the question regarding what additional questions she would like to see on the instrument, MS3 replied, "perhaps asking if there are enough hands on activities or demos that connect the ideas we learn, as science is based on asking questions". In



addition to this question, she suggests, "it may be nice to have a question assessing if students really feel that the teacher is approachable". Teacher MS3 does not comment on the pairing of the I-SAID data with additional data, but does state that "all of the questions that had a wide response range were to me most informative since somehow I am not connecting with some students in certain ways". When asked if any questions were too difficult for students to answer, she stated that "the question, students ask questions of other students, could be clarified", and that "the question, I start on an activator when I take my seat, could be clarified to it is expected that I start my activator when I take my seat". MS3 states that she could "use the I-SAID as a periodic reflective tool to see if I have succeeded in meeting goals perhaps unmet previously such as an obvious summarizer". Furthermore, she states, "I could foresee the development of a SMART goal that strives to measure success by attaining a certain percentage of students agreeing or strongly agreeing that we summarize most days". With respect to how the I-SAID could be used in the mentoring relationship, she states, "we could certainly discuss between the mentor and mentee which classroom practices were being implemented successfully based on student responses vs. those that were not being implemented successfully". General pattern coding from this teacher participant, as well as the others, will be discussed further in this chapter.

Scores from the test and re-test of the I-SAID are displayed in Table 14. The scores from the first test and the second test are very similar, resulting in low standard deviations. The observations from the first and second occasions show more deviation on components 3a, 3b, and 3c than for some of the other teacher participants; this data can be viewed in Table 15 below, and the causes for this variation will be discussed further in chapter five.


Table 14

MS3 I-SAID Scores by Component (with Standard Deviations in Parentheses)

Component   Average Score, First Testing   Average Score, Second Testing   Mean Score, Both Testings
3a          3.55 (0)                       3.55 (0)                        3.55
3b          3.24 (.08)                     3.36 (.08)                      3.30
3c          3.06 (.06)                     3.22 (.06)                      3.14
3d          3.54 (.03)                     3.49 (.03)                      3.52
3e          3.60 (.02)                     3.63 (.02)                      3.62

Table 15

MS3 Scores from Observation Instrument (with Standard Deviations in Parentheses)

Component   First Observation Score   Second Observation Score   Mean Score for Observations
3a          3.50 (.35)                4.00 (.35)                 3.75
3b          3.00 (.35)                3.50 (.35)                 3.25
3c          3.50 (.35)                3.00 (.35)                 3.25
3d          3.50 (0)                  3.50 (0)                   3.50
3e          3.50 (0)                  3.50 (0)                   3.50

Figure 10 and Table 16 depict the comparisons of mean I-SAID scores and mean observation scores, with the standard deviations in parentheses. The standard deviations between the two scores for teacher participant MS3 are low, confirming that the I-SAID is a valid tool for assessing the instructional delivery of MS3 by her students.

Figure 10

Juxtaposition of the Mean Score from I-SAID and Observation Instrument by Component for MS3. [Figure: horizontal bar graph comparing the mean I-SAID score and the mean observation score for components 3a through 3e; scale 0-5.]

Table 16

MS3 Observation and I-SAID Score Comparisons (with Standard Deviations in Parentheses)

Component   Mean Observation Score   Mean I-SAID Score
3a          3.75 (.14)               3.55 (.14)
3b          3.25 (.03)               3.30 (.03)
3c          3.25 (.07)               3.14 (.07)
3d          3.50 (.01)               3.52 (.01)
3e          3.50 (.08)               3.62 (.08)

Figure 11 is the line graph of teacher participant MS3's scaled scores on the Stages of Concern Questionnaire. MS3's peak scores occurred in Stages 0 and 1. The Stage 0 result indicates that MS3 is not concerned about implementation of the I-SAID, as she has other initiatives taking priority. The Stage 1 result indicates that MS3 is interested in learning more about the I-SAID and its functions as a method of obtaining student feedback. Her Stage 2 results are average, indicating


that, along with her previous stage scores, her response pattern indicates that she is open to and interested in this innovation. She does not have strong personal concerns about her involvement with the I-SAID. MS3 does not indicate great concern over the management of implementing the I-SAID, and even lower concern is expressed regarding the implementation's consequences for her students. Furthermore, MS3's lowest scaled score is on Stage 5, collaboration, which indicates her willingness to collaborate around the I-SAID. The scaled score of 47 on Stage 6 suggests that she does not have strong feelings concerning the instrument itself and does not feel that major changes to the I-SAID need to occur prior to implementation.

Figure 11

Line Graph of the Scaled Scores for the Stages of Concern Questionnaire for Teacher MS3. [Figure: line graph of relative intensity by stage.]

Teacher Participant HS3

Teacher participant HS3 is a high school chemistry teacher. She is also a second-career teacher, as she was previously a pharmacist. HS3 has been teaching in the district for 19 years. She is co-director of mentoring for the high school and recently re-wrote the mentoring curriculum with MS1 and HS1. She has received many awards and recognitions, the most recent



being named a finalist for the Massachusetts Teacher of the Year. HS3 was presented with student feedback data similar to that found in Appendix H, and then she was asked to complete the reflective memo. In-vivo coding resulted in the following salient comments. HS3 comments that the aspect of the I-SAID which provided her the most benefit was "the ranking of the answers which allowed me to reflect on my student's perception of how the class is taught". She agreed with some of the other teacher participants that it may be beneficial for "departments to include department specific questions on the questionnaire". When asked about pairing the I-SAID data with additional data, HS3 states, "it would be beneficial as it gives the teacher an opportunity to reflect on her teaching with the student's perception of the teaching. It provided a check and balance system between the two types of data which could help to eliminate biases". When asked which questions were the most informative, HS3 commented that "the ones students agreed with gave me confirmation of my techniques and the ones that they disagreed with allowed me to reflect on my practices". Finally, HS3 felt that the data from the I-SAID could assist her in "self-reflection" and in "identifying courses or workshops that could help her learn new techniques". She also felt that "if I scored low on a question I was surprised about I could ask a colleague to stop in and observe to give me feedback to see if the student's perception was correct and if I needed to change methods".

Table 17 represents the test-retest data for teacher HS3 by component. The data indicate that the I-SAID is a reliable tool in the case of HS3, as the standard deviations between the scores of the two administrations are low; the two scores do not deviate much from each other. Thus one can expect similar scores from separate administrations if there is no change to the instructional delivery.


Table 17

HS3 I-SAID Scores by Component (with Standard Deviations in Parentheses)

Component   Average Score, First Testing   Average Score, Second Testing   Mean Score, Both Testings
3a          3.07 (.04)                     3.13 (.04)                      3.10
3b          2.76 (.02)                     2.80 (.02)                      2.78
3c          2.70 (.08)                     2.83 (.08)                      2.76
3d          2.96 (.03)                     3.01 (.03)                      2.99
3e          2.90 (.15)                     3.11 (.15)                      3.01

Table 18

HS3 Scores from Observation Instrument (with Standard Deviations in Parentheses)

Component   First Observation Score   Second Observation Score   Mean Score for Observations
3a          3.00 (0)                  3.00 (0)                   3.00
3b          2.50 (.35)                3.00 (.35)                 2.75
3c          2.50 (.35)                3.00 (.35)                 2.75
3d          3.00 (0)                  3.00 (0)                   3.00
3e          3.00 (0)                  3.00 (0)                   3.00

Figure 12 and Table 19 below contain the I-SAID and observation score comparisons for teacher HS3. According to the data, the students and the observer rated HS3 in very similar manners; the standard deviations were lower for teacher participant HS3 than for any of the other teacher participants. This similarity allows us to conclude that the I-SAID is a reliable and valid tool to measure student assessment of instructional delivery for teacher HS3.

Figure 12

Juxtaposition of the Mean Score from I-SAID and Observation Instrument by Component for HS3. [Figure: horizontal bar graph comparing the mean I-SAID score and the mean observation score for components 3a through 3e; scale 0-4.]

Table 19

HS3 Observation and I-SAID Score Comparisons (with Standard Deviations in Parentheses)

Component   Mean Observation Score   Mean I-SAID Score
3a          3.00 (.07)               3.10 (.07)
3b          2.75 (.02)               2.78 (.02)
3c          2.75 (0)                 2.76 (0)
3d          3.00 (0)                 2.99 (0)
3e          3.00 (0)                 3.01 (0)

Figure 13 is the line graph of scaled scores for teacher participant HS3 on the Stages of Concern Questionnaire. HS3's peak score occurs in Stage 1, information, indicating that most of her concern is in the area of obtaining more information with regard to the I-SAID. As with several of the teacher participants, HS3 is a respondent who is generally open to and interested in using the I-SAID as the student feedback tool which will soon be required. She is not fully


aware of the innovation, the I-SAID, and has some concern with other initiatives in her life. She does not have significant management concerns and is not intensely concerned about the I-SAID's consequences for students or about collaborating with others. Finally, her profile contains a low tailing off, which indicates that she does not have any competing ideas which could potentially replace the I-SAID.

Figure 13

Line Graph of the Scaled Scores for the Stages of Concern Questionnaire for Teacher HS3. [Figure: relative intensity by stage; scaled scores 75, 91, 72, 65, 33, 64, and 52 for Stages 0 through 6.]

Teacher Participant HS4

Teacher HS4 is a 30-year veteran teacher who teaches physical science. She is the current science department curriculum team leader and was recently the mathematics curriculum team leader as well. Teacher HS4 is a mentor and is mentoring a new teacher this year. HS4 was presented with her student data and asked to complete the reflective memo. She indicated that "the questions that focus on clarity of instruction, challenging my students, presenting material with clarity and in a way that allows students to relate to meaningfulness and real life situations as well as those queries focusing on my ability to move through material in a



way that addresses the needs of all are the aspects that have allowed me to really reflect on my teaching practices" when asked which aspects of the I-SAID provided the most benefit for reflection. With regard to changing or adding questions, HS4 comments, "I would not add any additional questions" and "I would change the question regarding choices as it does not really apply to the high school experience in science". HS4 was asked if there would be any benefits or limitations to pairing the I-SAID data with other data, and she states, "it would be a great benefit to pair I-SAID data with self assessment data as we all tend to be our own worst critics"; additionally, when asked about the benefits of pairing the I-SAID data with observation data, she comments, "this would be very beneficial as the observer may see things very differently from the students as well as the teacher". When reviewing which questions provided her with the most informative data, HS4 cites "the questions that involve clarity, those that incorporate the use of real life experiences and those that involve challenging my students, as well as the use of feedback". HS4 was asked in what ways she could use the I-SAID in the future, to which she responded, "The I-SAID is a good way for a teacher to take a hard look at themselves through the eyes of the students who are the largest target audience. It is an objective survey". She further states, "The survey will point me in the direction of areas that will benefit my students. Most of the tools that I need are in my possession but I must take the time to review available information". HS4 indicates that the I-SAID can assist her in developing her professional goals by "reviewing the data and using the results to focus on improving my weaknesses and continuing my strengths". HS4 states, "as a mentor I can use this information to become a better observer of those I mentor and to offer more concrete examples to benefit those being mentored. Also, seeing a student's perspective allows us to focus on the most important things; are we presenting material in the best way to engage the greatest number of students, and providing an environment that supports the motivation to learn". Based on her results, HS4 realizes that "reviewing the data and reflecting on my current practices indicates that I should focus my attention to practices that make students feel that they are involved". Pattern coding was applied to the in-vivo coding and will be presented later in this chapter.

Table 20 represents the average scores from the first and second administrations of the I-SAID, with the standard deviations in parentheses. The standard deviations are low, which indicates that the deviation between the average scores from the first and second administrations is not wide; thus the I-SAID is a reliable tool for measuring student perception of the instructional delivery of HS4. Observations of the teacher participants served as a benchmark for criterion-related validity for the I-SAID. Table 21 reports the scores from the first and second observations of teacher participant HS4; again, the standard deviations were calculated to demonstrate that the two scores did not deviate widely.

Table 20

HS4 I-SAID Scores by Component (with Standard Deviations in Parentheses)

Component   Average Score, First Testing   Average Score, Second Testing   Mean Score, Both Testings
3a          3.55 (.04)                     3.62 (.04)                      3.58
3b          3.18 (.06)                     3.27 (.06)                      3.23
3c          2.98 (.12)                     3.16 (.12)                      3.07
3d          3.28 (.05)                     3.35 (.05)                      3.32
3e          3.32 (.17)                     3.57 (.17)                      3.45


Table 21

HS4 Scores from Observation Instrument (with Standard Deviations in Parentheses)

Component   First Observation Score   Second Observation Score   Mean Score for Observations
3a          3.50 (0)                  3.50 (0)                   3.50
3b          3.00 (0)                  3.00 (0)                   3.00
3c          3.00 (0)                  3.00 (0)                   3.00
3d          3.50 (.35)                3.00 (.35)                 3.25
3e          3.50 (0)                  3.50 (0)                   3.50

Figure 14 and Table 22 depict the comparison between the average score per component on the I-SAID and the average observation score per component for teacher HS4. Comparing these two instruments speaks to criterion-related validity, as the observation rubric is an instrument that has been benchmarked and is both valid and reliable when scored by a trained observer. The deviations between the scores on the I-SAID and the observations for teacher HS4 are small; thus one can conclude that the I-SAID is a valid tool for teacher participant HS4.

Figure 14

Juxtaposition of the Mean Score from I-SAID and Observation Instrument by Component for HS4. [Figure: horizontal bar graph comparing the mean I-SAID score and the mean observation score for components 3a through 3e; scale 0-4.]

Table 22

HS4 Observation and I-SAID Score Comparisons (with Standard Deviations in Parentheses)

Component   Mean Observation Score   Mean I-SAID Score
3a          3.50 (.05)               3.58 (.05)
3b          3.00 (.16)               3.23 (.16)
3c          3.00 (.04)               3.07 (.04)
3d          3.25 (.04)               3.32 (.04)
3e          3.50 (.03)               3.45 (.04)

Figure 15 represents the line graph of the scaled scores on the Stages of Concern Questionnaire for teacher participant HS4. HS4's profile is similar to MS1's profile: like MS1, HS4 treats obtaining and reflecting on student feedback as a priority over other initiatives that may be taking place in her life. Additionally, she, like many of the teacher participants, is interested in learning more about the I-SAID and how it can be incorporated into


the evaluation process in order to meet the mandates of the Massachusetts Department of Elementary and Secondary Education. Generally speaking, the relative intensity of HS4's scaled scores is very low; however, she demonstrates four peak scores in the areas of information (Stage 1), personal (Stage 2), consequence (Stage 4), and collaboration (Stage 5). This translates into the profile of a non-user who is interested in more information and who has some personal concerns, concerns for her students, and concerns regarding the collaboration process when reflecting on the I-SAID. HS4 demonstrates little desire to modify or change the I-SAID in order to consider implementation.

Figure 15

Line Graph of the Scaled Scores for the Stages of Concern Questionnaire for Teacher HS4. [Figure: relative intensity by stage; scaled scores 5, 26, 25, 5, 23, 26, and 9 for Stages 0 through 6.]

As a result of the individual data analysis, the newly developed and piloted I-SAID appears to be a valid and reliable tool to measure student perception of instructional delivery for the teacher participants when examined on a case-by-case basis. The following section presents the research questions as developed in chapter three and attempts to present data to answer the questions proposed by this researcher.



Research Question 1: What elements should be in a reliable and valid SFT?

The reliability of the instrument, the I-SAID, was assessed by analyzing the test-retest data. The first research question is broken down into the following open-ended sub-questions to address the area of validity:

a. To what extent do the responses to the questions from the I-SAID deviate from or correlate with observational data matching components?

b. To what extent do the questions on the I-SAID accurately measure students' perception of the teachers' use of the instructional framework?

Reliability of the instrument was assessed by statistical analysis of test re-test data. Standard deviations and Pearson's r were calculated for questions and components to determine the reliability of the I-SAID. Appendix B contains the standard deviation tables for questions by teacher. Data have also been presented by teacher per component in the above section. Table 23 presents the largest, smallest, and average standard deviation per component between the test and re-test. Component 3e had the largest average standard deviation, 0.10; however, it is still considered low enough to indicate that the I-SAID is a reliable tool. A discussion of why 3e yielded the highest score will occur in depth in chapter five.
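To make the test re-test computation concrete, the following minimal Python sketch (requiring Python 3.10+ for statistics.correlation) calculates the average per-question standard deviation between two administrations along with Pearson's r; the class-mean scores used here are hypothetical placeholders, not the study data.

import statistics

def test_retest_stats(test, retest):
    """Return the average test/re-test standard deviation and Pearson's r."""
    # Standard deviation of each (test, re-test) pair, averaged across
    # questions, mirroring the per-component values reported in Table 23.
    avg_sd = statistics.mean(
        statistics.stdev([t, r]) for t, r in zip(test, retest)
    )
    r = statistics.correlation(test, retest)  # Pearson's r
    return avg_sd, r

# Hypothetical class-mean scores on a 1-4 scale for one component.
test_means = [3.2, 2.8, 3.5, 3.0, 3.4, 2.9, 3.1]
retest_means = [3.1, 2.9, 3.4, 3.0, 3.5, 2.8, 3.2]
avg_sd, r = test_retest_stats(test_means, retest_means)
print(f"average test/re-test stdev = {avg_sd:.2f}, Pearson's r = {r:.3f}")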


Table 23

Standard Deviations between the test and re-test per Component Across all Seven Teacher

Participants.

Component    Largest Stdeva    Smallest Stdeva    Average Stdeva
3a           .11               0                  .04
3b           .08               .01                .02
3c           .12               .05                .08
3d           .05               0                  .03
3e           .17               .02                .10

Regression analysis was used to test whether the scores on the test could significantly predict the scores on the re-test per component. Figures 16 through 20 below present the scatter plots of the test, re-test linear regressions for components 3a, 3b, 3c, 3d, and 3e. Review of these figures indicates that for components 3a, 3b, 3c, and 3d the coefficient of determination is 0.75 or above; thus the variation in the first test can be explained by the variation in the re-test with 75% or better accuracy for those components. Component 3e yielded a coefficient of determination of 0.45, which is moderate, as the variation in the first test can only be explained by the variation in the re-test with 45% accuracy. This will be discussed further in chapter five; however, the scatter plot shows an increased number of outliers for this component compared to the other four components. As a result of reviewing all of the data it is plausible to state that for this pilot study the I-SAID has proved to be a reliable instrument to obtain student perception of teacher delivery of instruction.
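The regression computation summarized above can be sketched in a few lines of Python; the scores below are hypothetical, and the slope, intercept, and coefficient of determination follow the standard least-squares formulas rather than any particular software used in the study.

import numpy as np

retest = np.array([3.1, 2.9, 3.4, 3.0, 3.5, 2.8, 3.2])  # predictor (x)
test = np.array([3.2, 2.8, 3.5, 3.0, 3.4, 2.9, 3.1])    # response (y)

slope, intercept = np.polyfit(retest, test, 1)  # least-squares fit
predicted = slope * retest + intercept
ss_res = np.sum((test - predicted) ** 2)        # residual sum of squares
ss_tot = np.sum((test - test.mean()) ** 2)      # total sum of squares
r_squared = 1 - ss_res / ss_tot                 # coefficient of determination

print(f"y = {slope:.4f}x + {intercept:.4f}, R^2 = {r_squared:.4f}")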


Figure 16

Scatter plot for test scores and re-test scores for Teachers 1-7 on Component 3a

[Fitted regression line: y = 0.7608x + 0.7658, R² = 0.7509; x-axis: Re-Test, y-axis: Test.]

Figure 17

Scatter plot for test scores and re-test scores for Teachers 1-7 on Component 3b

[Fitted regression line: y = 0.7671x + 0.7219, R² = 0.8416; x-axis: Re-Test, y-axis: Test.]


Figure 18

Scatter plot for test scores and re-test scores for Teachers 1-7 on Component 3c

[Fitted regression line: y = 0.8265x + 0.6028, R² = 0.9137; x-axis: Re-Test, y-axis: Test.]

Figure 19

Scatter plot for test scores and re-test scores for Teachers 1-7 on Component 3d

[Fitted regression line: y = 0.7923x + 0.6592, R² = 0.8243; x-axis: Re-Test, y-axis: Test.]


Figure 20

Scatter plot for test scores and re-test scores for Teachers 1-7 on Component 3e

[Fitted regression line: y = 0.5167x + 1.6071, R² = 0.4539; x-axis: Re-Test, y-axis: Test.]

To inform the question of validity an evaluation of the data was performed to determine the standard deviation, for all seven teacher participants, between the I-SAID scores and the observation scores per component of Domain 3. The data were also assessed to determine whether a statistically significant correlation occurred between Domain 3 components on the I-SAID and the observations. To obtain this information the questions which related to each component were averaged for each teacher. The two observations for each teacher per component were also averaged. These averages were entered into Microsoft Excel in order to calculate the standard deviations and Pearson's r for the measures. Once the data were entered, standard deviations and correlations were run on the data sets; significance was then checked at the 0.05 and 0.01 levels. Scatter plots were created to check for outliers and to complete regression analysis on the data.
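As a minimal sketch of this validity check, the snippet below correlates one teacher's per-component I-SAID means with the matching observation means (the sample values are HS4's means from Table 22) and tests significance at the 0.05 and 0.01 levels; scipy's pearsonr returns both the correlation and its p-value.

from scipy.stats import pearsonr

isaid_means = [3.58, 3.23, 3.07, 3.32, 3.45]        # components 3a-3e (I-SAID)
observation_means = [3.50, 3.00, 3.00, 3.25, 3.50]  # components 3a-3e (observer)

r, p_value = pearsonr(isaid_means, observation_means)
for alpha in (0.05, 0.01):
    verdict = "significant" if p_value <= alpha else "not significant"
    print(f"r = {r:.3f}, p = {p_value:.3f} -> {verdict} at the {alpha} level")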



Table 25 below presents Pearson's r for correlations between I-SAID scores and observation scores for all seven teacher participants. Standard deviations were presented earlier for each teacher individually. Table 24 lists the highest, lowest, and average standard deviation found per component across all seven teacher participants. The average standard deviation for the components ranges from 0.07 to 0.13; thus one can expect that the student scores do not deviate from the observer scores by more than 0.13, suggesting that the I-SAID is a valid tool.

Table 24

Standard Deviations between the I-SAID and Observations per Component Across all Seven

Teacher Participants.

Component    Largest Stdeva    Smallest Stdeva    Average Stdeva
3a           .20               .02                .10
3b           .31               .02                .13
3c           .27               0                  .08
3d           .34               0                  .07
3e           .33               0                  .08

The Pearson's r reliability coefficients for the components for each teacher can be reviewed in Table 25 below. Reliability coefficients which do not meet the significance level of p ≤ .05 are marked with an asterisk. According to the data, MS2's results for three components do not meet the level of significance. It is possible that overall she is an outlier and the elimination of her data may allow the remaining data to meet the level of significance. Components 3d and 3e contain more nonsignificant correlations than do the other components;


however, according to summary Table 26, the two components which did not meet the level of significance were component 3b and component 3d.

Table 25

Pearson's r Reliability Coefficients for Test Re-Test of the I-SAID per Component for Teachers 1-7

        Component 3a    Component 3b    Component 3c    Component 3d    Component 3e
MS1     .833            .937            .899            .940            -.117*
HS1     .834            .966            .997            .279*           .805
MS2     .752*           .872            .968            .093*           .678*
HS2     .913            .901            .957            .469*           .786
MS3     .856            .952            .937            .464*           .822
HS3     .928            .912            .970            .952            .774
HS4     .548*           .952            .993            .932            .454*

* Did not meet the critical value for p ≤ .05

Table 26

Pearson’s r for Mean I-SAID Scores by Component for all 7 Teachers to Mean Observation

Score by Component for all 7 Teachers

             Observation 3a    Observation 3b    Observation 3c    Observation 3d    Observation 3e
I-SAID 3a    .854*
I-SAID 3b                      .604
I-SAID 3c                                        .875**
I-SAID 3d                                                          .502
I-SAID 3e                                                                            .861*

* p ≤ .05   ** p ≤ .01


Figure 21

Scatter plot for I-SAID scores and Observations for Teachers 1-7 on Component 3a

[Fitted regression line: y = 1.297x - 0.9487, R² = 0.7307; x-axis: I-SAID Scores Teachers 1-7, y-axis: Observation Scores Teachers 1-7.]

Figure 22

Scatter plot for I-SAID scores and Observations for Teachers 1-7 on Component 3b

[Fitted regression line: y = 0.9511x + 0.1155, R² = 0.3652; x-axis: I-SAID Scores Teachers 1-7, y-axis: Observation Scores Teachers 1-7.]


Figure 23

Scatter plot for I-SAID scores and Observations for Teachers 1-7 on Component 3c

[Fitted regression line: y = 1.3295x - 0.9164, R² = 0.7658; x-axis: I-SAID Scores Teachers 1-7, y-axis: Observation Scores Teachers 1-7.]

Figure 24

Scatter plot for I-SAID scores and Observations for Teachers 1-7 on Component 3d

[Fitted regression line: y = 0.597x + 1.3133, R² = 0.2528; x-axis: I-SAID Scores Teachers 1-7, y-axis: Observation Scores Teachers 1-7.]


Figure 25

Scatter plot for I-SAID scores and Observations for Teachers 1-7 on Component 3e

[Fitted regression line: y = 1.2843x - 0.9751, R² = 0.7416; x-axis: I-SAID Scores Teachers 1-7, y-axis: Observation Scores Teachers 1-7.]

Sub-question b asks: to what extent do the questions on the I-SAID accurately measure students' perception of the teachers' use of the instructional framework? According to the data, the overall student perception appears very similar to the overall observer's perception of the teacher's instructional delivery. The observer is trained in using the rubric for Domain 3 of Charlotte Danielson's Framework for Teaching; thus it can be inferred that overall the I-SAID has accurately measured the students' perception of the teachers' use of the instructional framework. As this pilot study was limited to seven teacher participants, wider pilot programs should be run in order to draw definite conclusions.

Research Question 2

What can teachers learn from student feedback data?

a) What aspects of the I-SAID have had the most benefit for reflection?



b) What additional questions would benefit teachers in reflecting to improve their

instructional delivery?

c) What benefit or limitations does pairing additional data have on the reflective process?

• Teacher growth data?

• Self-Assessment data?

• Observational data?

To inform this question teacher participants completed an online reflective memo after reviewing their individual student data reports, which contained the results from the I-SAID. An example of the data provided to each teacher can be viewed in Appendix H. The reflective memos (Appendix E) were analyzed utilizing first and second cycle coding. Initially the memos were coded utilizing in-vivo coding to identify important aspects of individual responses from the transcript. Second cycle coding utilized pattern coding; the in-vivo codes were reviewed with a lens to identify patterns in the responses. Tabled results of the open-ended reflective memo responses can be found in Appendix I. Analysis of the reflective memos revealed salient themes in the data. In response to sub-question a, the emergent themes were that information on student participation, student perception, lesson structure, identifying deficits, and presenting a complete picture were noted as the most important benefits for reflection when using the I-SAID. Teacher participants felt that including additional questions on specific strategy inclusion and student perception would be helpful to teachers. Finally, when asked to reflect on the benefits or limitations of pairing the I-SAID with additional data, the predominant theme was that this process would create a more complete picture for reflection and evaluation. More specifically, the teacher participants indicated their unfamiliarity with growth data, thus leading to tentative responses over the pairing of growth data and I-SAID


data. When reflecting on self-assessment data being paired with I-SAID data, the participants indicated that they felt this would allow for a check and balance approach. They further indicated that reviewing the data in this grouping would provide them with data on student versus teacher perceptions, thus allowing them to refine their practices. Lastly, when evaluating the benefits and limitations of pairing the I-SAID data and observational data, the themes which emerged were again that paired data leads to a more complete picture of instructional delivery and that paired data has the potential to serve as a check and balance system; the theme that students are able to form a bigger picture, as they spend more time with the teacher than the observer does, was salient throughout all responses. Limitations noted throughout all three pairings were the possibility of biased opinions and the need to personalize the survey to better suit the individual subject or teacher.

Research Question 3

How do teachers incorporate student feedback as a means of improving their instructional

delivery?

a. What actions/processes did you utilize after reflecting on the data?

b. What ways will you use the I-SAID in the future?

c. How can the I-SAID assist you in developing your professional goals?

d. How can the I-SAID contribute to developing your professional development plan?

e. How could you use the results from the I-SAID to enhance peer/mentor observation?

f. What must you do to facilitate improving your instructional practices based on the

results of the I-SAID?

Research question 3 was answered using the same techniques as those used to answer

research question 2. Coding was completed to identify prominent themes derived from the


individual responses to the reflective memo. Although incorporating student feedback into the new teacher evaluation system is a requirement set forth by the Massachusetts Department of Elementary and Secondary Education, the most important reason to incorporate student feedback is to improve student learning outcomes. The sub-questions contained in research question 3 seek to determine how teachers may use the information from the I-SAID to improve their instructional delivery and ultimately improve student learning.

The first sub-question of research question 3 asks participants what processes/actions they utilized after reflecting on the data. Responses from the participants fell into two themes, action planning and action steps. Participants who demonstrated action planning described a plan they undertook after reviewing the data. Responses in this category included ones such as these: "used data to change some of the things that happen in my class" and "I used the average score category to find areas that were ranked lowest and reflect on my practice". Responses in the action steps category included ones such as: "I created a number of pre-assessments" and "I am trying not to lose track of time and explain the assignments better". When asked how they will use the I-SAID in the future, three predominant themes emerged: to use the I-SAID as a reflective tool, to assist teachers in identifying deficits, and to gather the perceptions of their students in order to improve their instruction. Nearly half of the participants indicated that they would like to use the tool at least two times per year, starting at the beginning of the second semester and again at the end of the year. Two out of seven of the participants indicated that they would use the tool quarterly and another two of the seven indicated that they would use it annually. The responses to how the I-SAID could assist teachers in developing their professional goals centered on the themes of developing SMART goals and identifying areas of the lesson structure or a particular strategy on which to focus a goal. Participants further indicated that the I-SAID


could assist them in identifying areas where professional development is needed. This would then become a part of the teacher's professional development plan. Finally, the participants indicated that the I-SAID could play an important role in the mentor/mentee relationship. It was felt that the I-SAID could be used to develop discussions, identify deficits, guide the mentor and mentee, and provide insight from the student perspective. The last sub-question, what must you do to facilitate improving your instructional practice based on the results from the I-SAID, seeks to explore what the teacher was able to obtain from the data. Six of the seven participants identified a skill that they need to better develop as a result of reviewing the data; one participant did not respond to the question. Thus the I-SAID data provided the participants the opportunity to reflect, identify deficits, understand the student perspective of their instructional delivery, and choose a skill which needs development.

Research Question 4

Can the developed SFT meet the need of the Massachusetts Model System for Teacher

Evaluation?

The question is broken down into the following open-ended sub-questions:

a. What is the best way to incorporate student feedback into the teacher evaluation process?

b. How could you use the I-SAID to meet the need of the Massachusetts Model System for

Teacher Evaluation requirement of incorporating evidence of student feedback?

Data analysis of the reflective memo was completed in order to answer research question 4 and its sub-questions. Teacher participants were asked how often they would use the I-SAID to collect information from their students. Table I-8 in Appendix I lays out the number of times each teacher participant would like to use the I-SAID per year. Three out of seven teachers state that they would like to use the I-SAID two times per year. The logic behind using the tool multiple


times relates to sub-question b. The Massachusetts Model System for Teacher Evaluation at this point simply requires evidence of the use of student feedback by 2014, and these teacher participants have made suggestions on how to use the feedback in order to improve their instructional delivery. Data from Tables I-9 and I-10 in Appendix I suggest the following themes: SMART goal development, identification of professional development needs, a growth measurement tool, a reflective tool, and a tool to identify deficits. These themes further suggest that the I-SAID would be an ideal tool for teachers to use to meet the Massachusetts Department of Elementary and Secondary Education's requirement of student feedback. A large component of the new evaluation system is the development of tools to measure growth and the use of SMART goals to measure successful teaching. The I-SAID provides the teacher with opportunities to do both, as suggested by these teacher participants.

Research Question 5

What were the indicated Stage-of-Concern levels regarding the I-SAID of selected teachers?

The question is broken down into the following sub-questions:

(a) Were there differences in the teachers’ Stages of Concern when grouped by their grade

level?

(b) Were there differences in the teachers' Stages of Concern when grouped by years of experience?

To answer research question 5 and its subsequent sub-questions, data from the Stages of Concern questionnaire were examined and separated into subgroups by grade level and number of years of service. Figure 26 and Figure 27 are the line graphs of the Stages of Concern for the middle school teachers and the high school teachers. This grouping is the most logical, as the district in which this study is being conducted has two buildings, one


middle and one high school. Examination of the data below makes clear that five out of seven teacher participants have high scaled scores for Stage 0. This indicates that these teachers feel that implementing a new innovation, the I-SAID, is not a priority. High scores in this area tell us that the teacher participants may feel overwhelmed by the number of initiatives they are faced with. This is not surprising, as several district initiatives are underway: curriculum writing to update the curriculum to the Common Core, the negotiation of a new evaluation system, emphasis on the instructional framework in the classroom, and the increased usage of the student information system, among others. In addition, the high school has had four new principals over the past six years. Five out of seven teacher participants had Stage 1, informational, as a relative high. This may be a function of their knowledge that the state of Massachusetts will require the use of student feedback as a component of the evaluation process by 2014; therefore gleaning information about the process and possible instrument is important to all these participants. Four out of seven teachers had profiles in which the end scores "tailed up". This indicates that these teachers have ideas about the I-SAID and how to improve the instrument. Most of the teacher participants indicated in their reflective memos that they found the I-SAID to have benefits; however, to move toward adoption of this innovation it will be important to modify the instrument to provide the potential of adding several teacher-designed questions. This would increase the possibility of the teachers using the instrument. The two teacher participants, MS1 and HS4, who had a relative peak score on collaboration are curriculum leaders. They may have concerns about convincing the colleagues in their departments to implement and adopt the I-SAID, while the other five participants do not have such responsibilities, which may explain their relatively lower scores. There do not appear to be any differences or


similarities related to grade level; however, there do seem to be similar trends within the years-of-experience groups. Educators who have less than 10 years of experience have profiles similar to those with 10-15 years of experience. Essentially, the educators with less than 15 years of teaching experience demonstrate higher intensity on the stages concerning self and task. They may worry about how much time it will take to implement the I-SAID in all of their classes, whether it will impact them personally, or how they will manage the data. All teacher participants in this category, with the exception of MS1, have profiles that tail up at the end. This indicates that these individuals may be resistant to using the I-SAID without modifying it. This information matches their responses on the reflective memo, as this group indicated that the I-SAID would better suit them if they were allowed to add questions of interest or questions related to their department or discipline. MS1, while being in the 10-15 year category, has tremendous experience with mentoring and undertaking leadership roles within her building and profession; thus her profile is more similar to those of the individuals with 15+ years of teaching experience. These individuals are less concerned about personal or management issues related to the I-SAID but have greater concern around having to collaborate with or lead colleagues in the use of the I-SAID.
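For readers who wish to visualize profiles of this kind, the short Python sketch below plots hypothetical Stages of Concern profiles grouped by years of experience, in the style of Figures 26 through 30; every relative-intensity value in it is an invented placeholder, not data from this study.

import matplotlib.pyplot as plt

stages = ["0 Awareness", "1 Informational", "2 Personal", "3 Management",
          "4 Consequence", "5 Collaboration", "6 Refocusing"]
profiles = {  # hypothetical percentile (relative intensity) scores by group
    "Under 10 years": [90, 85, 80, 70, 40, 30, 55],
    "10-15 years": [85, 80, 75, 60, 45, 35, 60],
    "15+ years": [75, 70, 50, 35, 50, 65, 40],
}

for group, intensities in profiles.items():
    plt.plot(stages, intensities, marker="o", label=group)
plt.ylabel("Relative Intensity")
plt.ylim(0, 120)
plt.legend()
plt.xticks(rotation=45, ha="right")
plt.tight_layout()
plt.show()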


Figure 26

Line Graph of Stages of Concern for Middle School Teachers

[Line graph: Relative Intensity (0-120) by Stage of Concern; series labeled MS1, HS1, and MS3; chart title: SoC Middle School Teachers.]

Figure 27

Line Graph of Stages of Concern for High School Teachers

[Line graph: Relative Intensity (0-120) by Stage of Concern; series labeled HS1, HS2, HS3, and HS4; chart title: SoC High School Teachers.]


Figure 28

Line graph of Stages of Concern for teachers with under ten years of experience.

[Line graph: Relative Intensity (0-120) by Stage of Concern; two series labeled Under 10; chart title: SoC Under 10 Years of Experience.]

Figure 29

Line graph of Stages of Concern for teachers with 10-15 years of experience

[Line graph: Relative Intensity (0-120) by Stage of Concern; three series labeled 10-15 Years; chart title: SoC 10-15 Years of Experience.]


Figure 30

Line graph of Stages of Concern for teachers with more than 15 years of experience

[Line graph: Relative Intensity (0-100) by Stage of Concern; two series labeled 15+ years; chart title: SoC 15+ Years of Experience.]

The next chapter will be the discussion of the research findings, where the researcher will tie together the findings of this research project with the theoretical framework and literature review upon which this study is based. Discussion of the significance of this study along with its limitations will be presented. In addition, the researcher will form conclusions and make recommendations for future practice and further research.



Chapter 5: Discussion of Research Findings

Introduction

Chapter five discusses the research findings and is broken down into the introduction,

which discusses the methodology chosen and the relationship of the study to the theoretical

framework. It continues with an analysis of the findings in relation to reflective practice, a

section on the I-SAID as a tool to meet the mandates of MADESE for their Massachusetts Model

Teacher Evaluation System and a conclusion section. The conclusion section will present

recommendations for future practice and future research.

This research encompassed the importance of reflection in becoming skilled and competent in what we do: deliver instruction. The I-SAID was the vehicle developed to assist us in getting there. Throughout this work, it has become more evident that reflective practice is what separates the average from the exceptional and that change cannot occur in the absence of reflection. Teachers and administrators need to spend time reviewing data, exchanging ideas, and reflecting on how to improve their instructional delivery so that we can continue to improve student growth.

Prior to this research there was a gap in the knowledge surrounding the use of a student feedback tool in K-12 education. It became the purpose of this research to develop and pilot a student feedback tool, focusing on what teacher participants could glean through using the data from the tool for reflective purposes to improve their instructional delivery. Therefore, this pilot study utilized both a quantitative and a qualitative approach. The quantitative approach examined the statistical data from the study in order to determine the validity and reliability of the developed instrument, the I-SAID, while the qualitative strand focused on gaining a deeper understanding of how the teachers could use the data from the I-SAID to improve their


instructional delivery and meet the mandates of phase III of the Massachusetts Model for

Teacher Evaluation.

Charlotte Danielson's Framework for Teaching was the theoretical framework upon which the I-SAID was built. Her Framework for Teaching also provided a basis for the development of the new teacher evaluation system in Massachusetts; therefore it made logical sense to develop the tool for collecting student feedback upon this framework. The framework is broad and encompasses all aspects of teaching; as such, Domain 3: Instruction was chosen as the basis for developing the I-SAID. This domain encompasses the visible aspects of research-based teaching. The intent of this researcher was to develop a student feedback tool which solicited the opinions of students on whether or not their teacher was delivering instruction in the manner supported by the Framework for Teaching. This method of instructional delivery was originally supported by Jean Piaget's constructivism, as learning occurs best in an environment which is rich in structure and clarity.

Gene Hall frequently points out that change is a process, and incorporating evidence of the use of student feedback into the teacher evaluation process will be a big change for K-12 education (Hall, 2001, p. 8). Whether or not this suburban east coast school district is ready for this change depends heavily on understanding the concerns associated with implementation and adoption of the I-SAID and on mindful planning. Utilizing the Stages of Concern questionnaire as a component of this research study has provided this researcher with a deeper understanding of the challenges of change and of how to assist her faculty in moving from implementation to the adoption of using evidence of student feedback in the evaluation process. The following section discusses the findings of this research in relation to reflective practice.


Reflective Practice

The ancient Masters didn't try to educate the people but kindly taught them

to not-know. When they think that they know the answers, people are difficult to

guide. When they know that they don't know, people can find their own way (Tzu,

500 B.C.E).

The use of reflection in practice has a long-standing history, as it has been used to gain knowledge that one did not already possess (Osterman, 2004, p. 2). While reflection on action as described by Donald Schön is the most common type of reflective practice used in teaching, reflection in action is worth cultivating and is made possible by using data from student feedback twice per year (1987, p. 5). Reflection in action allows teachers to make adjustments to their teaching while they are in front of the group providing the feedback. This has often been labeled demonstrating flexibility; however, K-12 teachers have previously lacked the tool necessary to collect data on student perception of their instructional delivery so that they may make timely adjustments to their instructional delivery.

Teachers in higher education have long used student feedback as a tool for reflection on

their practices. K-12 education is new to the dialogue of engaging teachers in reflecting on

student feedback as a means of improving their instruction. “Reflection is an important human

activity in which people recapture their experience, think about it, mull it over, and evaluate it. It

is this working with experience that is important in learning” (Boud, 1985, p. 19). This research

study allows teachers to utilize student feedback as a tool for reflection. The information gleaned

from their reflection allows them to make changes to their instructional practices in order to

increase their students’ growth.


The usefulness of the I-SAID rests heavily on how the teachers can reflect on the data to improve their professional practices. In addition to how the teacher will use the data, they must also be willing to implement the I-SAID and reflect on the data. This is where knowledge of the teacher's stage of concern is important. Teacher participants reviewed the data from the I-SAID then reflected on how the data from the I-SAID could be used to inform their instructional practice. Teacher participants indicated that they could use the I-SAID to assist them in creating SMART goals, for action planning, to inform their professional development needs, and to expand their knowledge of the student perception of the instruction that they are delivering.

SMART goals are goals that are Specific, Measurable, Achievable, Relevant and Time

Bound. According to the teacher participants the data from the I-SAID allows them to set goals

around areas that the students perceive as deficient. They reported that they could use the tool as

a formative assessment to identify these areas and also as a summative assessment to measure

progress on the goals they have set. Goal setting as a result of reflection is the practice that is

encouraged by historical and current practitioners from Dewey to Schön, Osterman, Kottkamp

and York-Barr.

The literature of Donald Schön emphasizes that knowing in action is what separates the

expert from the novice practitioner (1985, p.5). The new Massachusetts Model for Teacher

Evaluation requires the use of reflective practice. Throughout the evaluation teachers are

required to gather evidence, reflect on the evidence and make changes to their professional

practice. By 2014 they must also incorporate evidence of their use of student feedback into this

process.

The teacher participants all reported that the I-SAID could assist them in reflecting on

student feedback in order to provide evidence of its use in the evaluation process. Three out of


seven participants reported that they would like to utilize the tool two times per year, as it would allow them to identify areas of growth and then measure their progress in those areas. Two teacher participants reported they would like to undertake this process four times per year, while two participants indicated that using the I-SAID once per year would satisfy them.

Potential threats to repeated use of the instrument include instrument decay. Student participants could remember the questions or attempt to skew the results either positively or negatively for the teacher. Under a model of administering the I-SAID in each class two times per year, a student could potentially respond to the I-SAID fourteen times per year. This threat can be mitigated by either randomizing all the questions or allowing for the addition of teacher or department questions. The I-SAID has been developed so that it can be broken down into multiple forms. Each component of Domain 3 in the FFT is broken down into elements; the I-SAID was developed to have two student response statements per element, thus lending itself to the creation of two forms, as sketched below. Furthermore, teachers who administer the instrument midyear could identify the areas which they would like to focus on improving and at the end of the year administer only those student statements which relate to the component or element they are interested in measuring. Thus the first administration would provide the teacher with formative data to reflect on and to make instructional changes from. The second administration of the I-SAID could provide data which would measure the teacher's growth in use of the practice.
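As a minimal sketch of the two-form idea described above, the following Python snippet assigns each element's two response statements to alternate forms and shuffles the question order on each administration to further reduce instrument decay; the element labels and statement texts are hypothetical placeholders, not the actual I-SAID items.

import random

# Two hypothetical response statements per Domain 3 element.
element_statements = {
    "3a, element 1": ("statement 1", "statement 2"),
    "3a, element 2": ("statement 3", "statement 4"),
    "3b, element 1": ("statement 5", "statement 6"),
}

form_a, form_b = [], []
for element, (first, second) in element_statements.items():
    form_a.append((element, first))   # Form A gets each element's first statement
    form_b.append((element, second))  # Form B gets the second

random.shuffle(form_a)  # randomize question order per administration
random.shuffle(form_b)
print("Form A:", form_a)
print("Form B:", form_b)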

Osterman refers to reflection as "meaningful and effective professional development" (2004, p. 1). The teacher participants indicated that they found value in the information from the I-SAID, which could inform their professional development. Teacher participants clearly felt that


obtaining the student perception of their instructional delivery is important and can assist them in improving their instruction, resulting in student growth.

The teacher participants appeared to perceive that the addition of specific department-related questions could improve the I-SAID and make it more useful to them personally. While these questions would not be tied to Danielson's FFT, they would provide the teacher with some flexibility regarding the data that they are personally concerned with. Teachers may like to know if they are timely in their feedback to their students, or if the student feels welcomed in their classroom. The department might want to gather student perception on the use of specific lessons, or a specific technique, such as whether or not the interactive notebook has worked well for the student. Many of the teacher participants indicated that they would like to add some subjective questions to garner more information on the students' perceptions of the likeability of the lessons and the approachability of the teacher. Specific examples of such questions expressed in the reflective memos include the following: "My teacher uses manipulatives that help me to discover a mathematical concept", "my teacher conducts the class at a speed where I do not feel bored", "I understand what my grade average is from day to day and my teacher allows adequate testing time." All participants indicated that they thought pairing the data produced by the I-SAID with growth data, observation data, and self-assessment data creates a more comprehensive picture of their instructional practices than when any of these measures is used alone to evaluate a teacher's performance. This aligns with the literature that supports changing the evaluation system to incorporate more evidence rather than sole reliance on a few scheduled observations. 360° evaluations refer to incorporating evidence from students, parents, colleagues, and supervisors into the evaluation process (M. L. Donaldson, 2009, p. 4). As a result


of incorporating student feedback into the teacher evaluation system, we can now paint a broader picture of teaching practices, one that includes data from all stakeholders.

While Charlotte Danielson's FFT was the basis out of which the I-SAID was developed, Gene Hall's concerns-based adoption model shed light on the stages of concern for each teacher participant. When a new innovation is introduced, the users follow a pattern through the various stages of concern. At times the concern is intense. The concern may then subside as the user moves to a different concern, although concerns can occur at high or low levels of intensity at the same time. The seven stages of concern are awareness, informational, personal, management, consequence, collaboration, and refocusing (George, 2006, p. 7). This model is significant in providing information on the direction that the district should take in order to implement the use of the I-SAID or another student feedback tool. The data from this portion of the study explain how the participants feel about the implementation of the I-SAID and describe where each participant lies in relation to the seven stages of adoption. The profiles of the teacher participants were very informative, and although similar profiles did not appear to be tied to level of teaching (high school versus middle school) or to years of teaching experience, the profiles did appear to be tied to specific roles that the teacher participants hold within their respective buildings.

Interestingly, the teacher participants who indicated their desire to use the I-SAID as a bi-annual planning tool all scored high on information, low on management, and high on collaboration on their stages of concern profiles. According to their profiles these teachers would like more information on the fundamental areas of the I-SAID, such as how it can assist them in their personal growth and whether it is a reliable and valuable tool. These teacher participants, with a low score on management, do not feel that utilizing the I-SAID would be disruptive to their classroom, and


have little concern around the tasks associated with organizing and implementing the I-SAID. Lastly, all three teacher participant profiles showed a relative high for stage five, collaboration. Given their roles, the results of the Stages of Concern Questionnaire are not surprising, as all three teacher participants have leadership roles within their respective buildings. Their concern lies in coordinating the I-SAID for others to use, and in achieving buy-in from their colleagues. These individuals are the "front line users" according to Gene Hall, and their perspectives are critical to the change process (1987, p. 53).

I-SAID as a Tool for Teacher Evaluation

The evaluation of teachers has been a long-standing practice in education. Author Mary Lynne Derrington describes the teacher evaluation process best when she states,

Principals, checklist in hand, head down the hall once or twice a school year to conduct the obligatory classroom observation. Then the principal determines if what is seen in the 60-minute-or-less observation complies with a checklist of items believed to correlate to effective teaching. Months later, when the summative evaluation is due, the busy principal often chooses from a menu of narrative phrases, resulting in strikingly similar comments for each recipient's evaluation, causing teachers to feel that the reports were a product of a cut and paste activity (2011, p. 51)

We have been conducting benign evaluations despite what research tells us about what having a highly qualified teacher in front of each student does for student growth (Haskins & Loeb, 2007; Hinchey & University of Colorado at Boulder, 2010; Rivkin et al., 2005; Wayne & Youngs, 2003). Collaboratively, teachers, scholars, and the Massachusetts Department of Elementary and Secondary Education examined our evaluation process and drafted a model with


significant changes from the old method of evaluating teachers. No longer will the evaluation process be a "drive by"; educators and administrators will have to work together to collect evidence of teacher practices that support student learning gains. Part of that evidence will have to be the use of student feedback in the evaluation process by 2014. This section provides an examination of the findings in relation to the I-SAID as a tool for teacher evaluation. A discussion of how the I-SAID as a tool for teacher evaluation is supported by the literature and the theoretical framework will be woven throughout this section. Finally, the significance of this study for the field of education is discussed and suggestions are made for next steps for further study.

Using the lens of Charlotte Danielson's Framework for Teaching, which was guided by and grounded in the roots of Jean Piaget's constructivism, this study developed an instrument to gather student feedback on a teacher's delivery of instruction. Danielson's observation rubrics for Domain 3: Instruction were also used, as they were created as a result of her FFT. This instrument was piloted, and teachers were given an opportunity to reflect on the data and make suggestions on how these data could assist them in growing professionally. The results of the study indicated that the newly developed student feedback tool, the I-SAID, was a valid and reliable tool for the purpose of this limited study.

The content of the I-SAID was derived from the works of Charlotte Danielson and Jean Piaget. Piaget informs us that in order for the learner to learn new information he/she must assimilate that new information into his or her existing structure or accommodate the information in a way that allows it to fit with his or her schema (Piaget, 1950). Therefore, when considering the practices of teaching, it makes sense that the teacher utilizes a framework through which he or she presents new knowledge. Charlotte Danielson has developed this framework, and this researcher developed questions based upon that framework. The questions contained in the


I-SAID read like a story. They ask the student if the teacher uses practices that present previously learned material to assist them in recalling the schema upon which the new learning will occur. They ask the student if the teacher regularly checks for understanding, allows for feedback from multiple sources, and summarizes the lesson to ensure that the new information has its place within the existing schema. Danielson's Framework for Teaching provides even greater detail on the practices that have been proven to increase student learning, as it is a framework that grew out of the research behind the Praxis III criteria.

The criteria [for the Praxis III] were based on formal analyses of important

tasks of beginning teachers; reviews of research; analysis of state regulations

and extensive field work that included pilot testing the criteria and assessment

process (Danielson, 2007, p. vii).

The response statements for the I-SAID were developed directly from the information contained in the evaluation rubric for Domain 3 of the FFT. These theories working together provide the content validity for the I-SAID. The instrument was further validated by comparing the scores from the instrument with the scores from the observations conducted by this researcher. The observations utilized the rubric from the FFT, which was the same rubric from which the I-SAID statements were developed. In order to assess whether the students reported on their teacher's delivery of instruction in the same manner that the observer did, Pearson's r was calculated for mean I-SAID scores by component for all seven teacher participants against mean observation scores by component for all seven teacher participants. The results were mixed. Components 3a, 3c, and 3e showed significant correlations at the p ≤ .05 level while components 3b and 3d did not. This could have been a result of the language of the I-SAID statements not matching the concepts of the rubric closely enough to yield the same results. The I-SAID questions were


designed to reflect the concepts in the rubric, matching element concept to I-SAID question. It is possible that this match is not accurate enough for both student and observer to report the same results. As one explanation could be a matter of language, further research could incorporate the use of student focus groups so that the researcher could better understand whether the students' perception of the statements is the same as her own. Questions could be tweaked and re-tested until the results indicate a stronger correlation. This finding could also be real and indicate that the students and the observer do not perceive this aspect of lesson delivery in the same manner. Even where the observations do not match, this is still relevant data for teachers to reflect on. The students taking the I-SAID in December had four months of exposure to their teacher's instructional delivery, whereas this observer conducted two observations totaling 94 minutes to report out on teacher delivery of instruction. The same limitation of traditional teacher evaluations, which are based on one or two observations, has been widely discussed in the literature as a factor contributing to the need to revamp the current system. Kane and Cantrell state, "Teachers should be evaluated on three factors—classroom observations, student achievement gains, and feedback from students. The use of multiple measures is meant to compensate for the imperfections of each individual measure and produce more accurate and helpful evaluations" (T. J. Kane, Cantrell, S., 2012, p. 2). Thus, by adding evidence of the use of student feedback, teachers may be able to provide their evaluator with a more accurate picture of their instructional delivery that includes student perception. Lastly, the teen brain and the adult brain process information differently, as they are developmentally different. The wide difference between the researcher's and students' perceptions of teacher instructional delivery on components 3b and 3d could simply be a factor of the developmental differences between teens and adults.


When teenagers perform certain tasks, their prefrontal cortex, which handles decision making, is

working much harder than the same region in adults facing the same circumstances. The teen

brain also makes less use of other regions that could help out. Under challenging conditions,

adolescents may assess and react less efficiently than adults (Sabbagh, 2007, p. 1).

The reliability of the instrument was analyzed using the test re-test method, and the instrument was found to be reliable, as the average standard deviations for components 3a, 3b, 3c, and 3d were all ≤ 0.08. Component 3e had an average standard deviation of 0.10, which is slightly higher but still indicates that the scores from that component should not deviate from one administration to the next by more than +/- 0.10 with no changes in instruction. According to the coefficient of determination, one can be confident that the variation in the scores from the first test on components 3a, 3b, 3c, and 3d could be explained by the variation in the re-test scores with 75% accuracy or better. Component 3e's variation in the test scores can be explained by the variation in the re-test scores with only 45% accuracy. The statements developed to represent component 3e may rely too heavily on students' perception, creating much variability in the answers. Students may not appreciate the rigor of the classroom, and/or they may rate a teacher high or low based solely on their like or dislike of that teacher. The I-SAID could be affected by the relationship that the student perceives he/she has with the teacher; however, steps were taken, such as coding the student identity so that the student responses were anonymous, in order to lessen this potential limitation. According to the MADESE, evidence of the use of student feedback will be required during the 2014 evaluation cycle. As they have currently not provided additional guidelines, the I-SAID is an effective tool to accomplish this. Larger scale studies should also be performed to further evaluate the correlation of components 3b and 3d to the observation rubric, along with student focus groups to ensure the language of the I-SAID accurately matches


the concepts of the elements within the components of Domain 3. However, as a reflective tool the I-SAID has proven useful to teachers in identifying areas of their teaching where additional professional development is needed, in setting their professional practice goals, as a tool to measure teacher progress on these goals, and as a source of data that provides a more complete picture of the teaching practices of our teachers.

Teacher participants all demonstrated unique characteristics within the results of the Stages of Concern Questionnaire; however, as mentioned earlier, our three teacher leaders all demonstrated strong similarities. For this researcher, as the building principal, the most important information gleaned from the results is that many of the teachers feel overwhelmed by the number of new innovations that they are presented with. Gene Hall's following statement encompasses the feelings of these teachers: "All too frequently innovations are 'laid on' teachers or presented during an August 'God bless you' workshop. The teachers are then left to struggle and discover through trial and error what the innovation is about and how to use it effectively" (G. Hall, Hord, S.M., 1987, p. 17). The profiles which resulted from analysis of the Stages of Concern questionnaires provide this researcher with information about precursors that must be put into place prior to expanding the use of the I-SAID. As we focus on the new teacher evaluation system in the district, it will be important to reflect further on the stages of concern that the teachers in this study arrived at, as well as to consider using this instrument to assess the stages that the larger district population is at prior to expanding this study. Bandura's concept of self-efficacy offers the reader a theory through which to better understand the stage of concern that a teacher is in. According to Bandura, a teacher's self-efficacy beliefs can facilitate or hinder their process of engagement. Working through the personal concerns, so that teachers may expect that the use of the student feedback tool will enhance their evaluation rather than hinder it, will


allow that teacher to increase their sense of self-efficacy and thus be a more engaged participant

in the innovation (A. Bandura, 1995, p. 80).

The MADESE will require evidence of the use of student feedback in the teacher

evaluation process by 2014. This means that teachers will need to demonstrate how they use

student feedback to guide their instruction. As stated earlier, according to the feedback from the

teacher participants, the I-SAID can provide teachers with data with which to identify areas of weakness. This information allows the teacher to create SMART goals with specific professional development action steps to guide the improvement of their instructional practices. The teacher participants also indicated that the data from the I-SAID can provide them with a deeper understanding of the perspectives of their students. This is helpful, as the literature provides information on how the teen brain and teen perceptions differ from those of adults. Additionally, the new model requires that teachers gather and produce evidence as a part of their evaluation. Since the data from the I-SAID can be reported out at the statement level, the element level, and the component level, it provides the teacher with various means of producing evidence of the use of student feedback; thus the I-SAID meets the requirements of the MADESE mandate for the use of student feedback.
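To illustrate this multi-level reporting, the following sketch (in Python) rolls statement-level class averages up to element and component averages. It is a minimal illustration, not part of the I-SAID itself: the mapping excerpt is taken from the Appendix A blueprint for component 3a, the statement numbering follows the Appendix D survey, and the sample scores are the MS1 first-administration class averages for statements 1 through 7 from Appendix H.

# Minimal sketch of multi-level I-SAID reporting. Statements are numbered as in
# the Appendix D survey; this blueprint excerpt covers component 3a only.
from statistics import mean

# Statement number -> (component, element), per the Appendix A blueprint.
BLUEPRINT = {
    1: ("3a", "Expectations for Learning"),
    2: ("3a", "Expectations for Learning"),
    3: ("3a", "Expectations for Learning"),
    4: ("3a", "Directions and Procedures"),
    5: ("3a", "Directions and Procedures"),
    6: ("3a", "Explanation of Content"),
    7: ("3a", "Explanation of Content"),
}

def roll_up(statement_averages):
    """Aggregate statement-level class averages to element and component levels."""
    by_element, by_component = {}, {}
    for statement, score in statement_averages.items():
        component, element = BLUEPRINT[statement]
        by_element.setdefault((component, element), []).append(score)
        by_component.setdefault(component, []).append(score)
    element_means = {key: round(mean(scores), 2) for key, scores in by_element.items()}
    component_means = {key: round(mean(scores), 2) for key, scores in by_component.items()}
    return element_means, component_means

# MS1 class averages for statements 1-7, first administration (Appendix H).
ms1_scores = {1: 3.66, 2: 3.28, 3: 3.57, 4: 3.20, 5: 3.26, 6: 2.75, 7: 3.47}
element_means, component_means = roll_up(ms1_scores)
print(element_means)    # ('3a', 'Expectations for Learning') averages to 3.5, etc.
print(component_means)  # component 3a averages to 3.31

Reporting at all three levels lets a teacher trace a low component score back to the specific element, and ultimately the specific statements, driving it.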

Summary of Findings

The results of the pilot study appear to support the reliability and validity of the I-SAID as a reflective tool which lends itself to the improvement of instructional practices. The data from the reflective memo indicate that the I-SAID is a student feedback tool which meets the mandates of the MADESE in the Massachusetts Model for Teacher Evaluation. The statistical data indicated that the average standard deviation for repeated-measures testing was between 0.02 and 0.10 per component; thus one can expect a student’s score to deviate no more than one tenth of a


point when the instrument is administered repeatedly with no change in instruction. Statistical analysis comparing the I-SAID scores by component to the observation scores by component indicated that on components 3a, 3c, and 3e the observer and the students rated the teacher in a similar manner. The statistical results for component 3b missed the p ≤ .05 level of significance by 0.15 (p = .20), and the results for component 3d missed it by 0.25 (p = .30). This means that on components 3b and 3d the students’ perception of teacher instructional delivery differed from the observer’s perception. Factors influencing these differences include

the difference between the amount of time students spend observing their teacher’s instructional delivery and the amount of time an administrator spends doing so. Another factor could be the change in instruction that often occurs when an administrator is observing a class versus what regularly occurs when no administrator is present. While full agreement between the students and the observer would have yielded a distinct picture, the mixed agreement found here indicates that including evidence of student feedback in the evaluation remains critical to understanding the complex nature of teaching. Reaching this understanding can assist both the

teacher and the administrator in crafting SMART goals that lead to significant improvement in

instructional practices.
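As a worked example of the repeated-measures figures reported in Appendix B, and assuming the tabled value is the sample standard deviation of the two class averages, statement 4 for teacher MS1 (Table B1, class averages 3.20 and 2.80) gives

\[
s = \frac{|x_1 - x_2|}{\sqrt{2}} = \frac{|3.20 - 2.80|}{\sqrt{2}} \approx 0.28,
\qquad
\bar{x} = \frac{3.20 + 2.80}{2} = 3.00,
\]

matching the standard deviation and mean reported for that statement in Table B1.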


Delimitations and Limitations of the Study

This research study examined the use of a pilot instrument by teachers in one suburban

east coast school district in Massachusetts. The instrument was newly developed for the purpose

of this study and had not been previously tested. Teacher participants were restricted to those teachers who held professional status and who were not being evaluated during the current school year. Student participation was limited to the students of the teachers who agreed to participate in the study and thus may not represent all types of student groups; therefore, the instrument may not be usable with subgroups such as special education students without further piloting with those subgroups. As such, the generalizability of this study is limited. Finally, it must be

noted that the researcher is the principal of one of the schools within this east coast suburban

school district and is also a member of the evaluation committee that is currently negotiating language for the new evaluation system with the teachers’ association. The researcher has an interest in developing a student feedback tool to meet the mandates put forth by the MADESE, which poses a threat to the credibility of the study. This limitation was addressed by the researcher’s adherence to the Ethical Standards of the American Educational Research Association (AERA, 1992).

Reactivity may have also played a role in the results of this study. Students may have

responded to the statements on the I-SAID untruthfully, whether out of a desire to make their teacher look good or to make them look bad. An attempt to control for this limitation was made by

coding the students’ identities. Students were carefully informed that their identity and individual

responses would not be shared with their teacher.


Recommendations for Future Practice

The use of student feedback will become part of every educator’s practice by 2014. How

educators collect student feedback and how they provide evidence of its use have yet to be

determined by the MADESE. The I-SAID is a tool through which teachers can obtain feedback

from their students on their instructional practices. The following are recommendations for

future practice using the I-SAID:

1) It is recommended that the I-SAID be used two times per year. The first administration should occur in the middle of the first semester. The data from this administration allow for reflection on areas where change or growth is needed.

2) The second administration of the I-SAID should occur near the end of the second semester.

This administration would be a summative assessment of the progress the teacher has made on

the particular elements or components identified earlier for improvement. Only the student response statements that reflect the component or element the teacher is trying to measure should be used in the second administration, in order to protect against instrument decay.

3) Reflection on the data from the I-SAID should occur in order to a) identify areas of weakness, b) develop SMART goals related to the identified areas of weakness, and c) identify relevant professional development in the areas of instructional practice where the teacher scores low.

4) The evidence of the use of student feedback should not be the direct data from the I-SAID but

rather the goals and professional development plans developed as a result of the teacher

reflecting on the data. This will assist the teacher in diminishing personal concerns over the implementation of the I-SAID.


Recommendations for Further Study

Based upon the results of the study, there is a strong indication that a larger pilot study

would be worthwhile to improve the generalizability of the research. The following are

additional recommendations for further study:

1) Focus groups with students to assess if their interpretation of the I-SAID statements

(language) agrees with the researcher’s intent for the statements (an accurate portrayal of the elements from Domain 3), as student perception did not always agree with observer perception in this

study.

2) Future studies should occur to assess whether substantially separate populations in middle and high school are able to accurately complete the I-SAID. This will increase the generalizability of the tool. Questions related to this would include: Can students in substantially separate classrooms read and participate in using the I-SAID in the same manner as students from regular education classrooms? Can students in substantially separate classrooms, given their intellectual limitations, identify the instructional framework of a teacher in the same manner as students in regular education classrooms?

3) Future studies with a larger population should also occur to increase the generalizability of the

I-SAID. Larger population studies could further support the reliability of the I-SAID as a tool for

obtaining student feedback on instructional delivery.

4) Longitudinal studies of how teachers use the data from the I-SAID to enhance their

instructional practices will solidify its role in meeting the mandates of the MADESE.


References

Massachusetts Task Force on the Evaluation of Teachers and Administrators. (2011). Building a Breakthrough Framework for Educator Evaluation in the Commonwealth.

AERA. (1992). The Ethical Standards of the American Educational Research Association.

Bailey, G. D. (1983). Teacher-Designed Student Feedback: A Strategy for Improving Classroom

Instruction. Washington, D.C.: National Education Association.

Bain-Pate, H., Lintz, M.N., Wood, E. (1989). A Study of Fifty Effective Teachers Whose Class

Average Gain Scores Ranked in the Top 15% of Each of Four School Types in Project

STAR. (Paper presented at the Annual Meeting of the American Educational Research

Association, San Francisco, CA, March 27-31, 1989). Knoxville: Tennessee State

University.

Bandura, A. (1977). Social Learning Theory. Englewood Cliffs, NJ: Prentice Hall Inc.

Bandura, A. (1995). Self-Efficacy in Changing Societies. Cambridge: Cambridge University

Press.

Beaty, L. (1997). Developing Your Teaching Through Reflective Practice. Buckingham,UK:

Society for Research into Higher Education.

Bodner, G. M. (1986). Constructivism: The Theory of Knowledge. The Journal of Chemical

Education, 63(10), 873-878.

Boud, D., Keogh, Rosemary, Walker, David. (1985). Reflection: Turning Experience into

Learning. London: Nicholas.

Brockx, B., Spooren, P., & Mortelmans, D. (2011). Taking the grading leniency story to the

edge. The influence of student, teacher, and course characteristics on student evaluations


of teaching in higher education. Educational Assessment, Evaluation & Accountability,

23(4), 289.

Carey, K., Manwaring, R. (2011). Growth Models and Accountability: A Recipe for Remaking

ESEA. Washington, D.C.: Education Sector.

Cashin, W. E. (1989). Defining and Evaluating College Teaching. IDEA Paper No. 21. Manhattan, KS: Kansas State University, Center for Faculty Evaluation and Development in Higher Education.

Emery, C. R., Kramer, T. R., & Tian, R. G. (2003). Return to academic standards: A critique of student evaluations of teaching effectiveness. Quality Assurance in Education: An International Perspective, 11(1), 37-46.

Chester, M. D. (2009). Growth Model Pilot.

Chester, M. D. (2011). MCAS Student Growth Percentiles: Interpretive Guide.

Chester, M. D. (2012). ESEA Flexibility: Changes to School and District Accountability and

Assistance.

Cook, G. (2006). What's a teacher worth? Houston joins the push for merit pay.

American School Board Journal, 193(3), 4-6.

Corcoran, S. P. (2010). Can Teachers Be Evaluated by Their Students' Test Scores? Should They

Be? The Use of Value-Added Measures of Teacher Effectiveness in Policy and Practice.

Executive Summary. Education Policy for Action Series: Annenberg Institute for School

Reform at Brown University.

Corcoran, S. P. (2010). Can Teachers Be Evaluated by Their Students' Test Scores? Should They Be? The Use of Value-Added Measures of Teacher Effectiveness in Policy and Practice. Education Policy for Action Series. Providence, RI: Annenberg Institute for School Reform at Brown University.


Creswell, J. A. (2009). Research Design: Qualitative, Quantitative, and Mixed Methods

Approaches (Third ed.). Thousand Oaks, CA: Sage.

Creswell, J. A., Plano Clark, V.L. (2007). Designing and Conducting Mixed Methods Research.

Thousand Oaks, CA: Sage Publications.

Danielson, C. (2007). Enhancing Professional Practice A Framework for Teaching (2nd ed.).

Alexandria, VA: Association for Supervision and Curriculum Development.

Danielson, C. (2008). The Handbook of Enhancing Professional Practice; Using the Framework

for Teaching in Your School. Alexandria, VA: Association of Supervision and

Curriculum Development.

Danielson, C. (2009). Implementing the Framework for Teaching in Enhancing Professional

Practice. Alexandria, VA: Association for Supervision and Curriculum Development.

Derrington, M. L. (2011). Changes in Teacher Evaluation: Implications for the Principal's Work.

The Delta Kappa Gamma Bulletin: International Journal for Professional Educators,

77(3), 51-55.

Dewey, J. (1910). How We Think. Boston: D.C. Heath & Co.

Donaldson, M. L. (2009). So Long, Lake Wobegon? Using Teacher Evaluation to Raise Teacher

Quality. Washington DC: Center for American Progress.

Donaldson, M. L., Peske, H.G. (2010). Supporting Effective Teaching Through the Teacher

Evaluation Process. Washington D.C.: Center for American Progress.

Douglass, H. (1928). Rating the Teacher Effectiveness of College Instructors. School and

Society, 28, 192-197.

National Academy of Education. (2009). Teacher Quality: Education Policy White Paper. Washington, DC: Author.


U.S. Department of Education. (2009). Race to the Top: Game-Changing Reforms. Retrieved from www.ed.gov/open/plan/race-top-game-changing-reforms

Engelland, B. T. (2004). Making Effective Use of Student Evaluations to Improve Teaching

Performance. Journal for Advancement of Marketing Education, 5, 40-46.

Fraenkel, J. R., Wallen, N.E. (2009). How to Design and Evaluate Research in Education. New

York, NY: McGraw-Hill Higher Education.

Gentry, M., Steenbergen-Hu, S., & Choi, B.-y. (2011). Student-Identified Exemplary Teachers:

Insights From Talented Teachers. Gifted Child Quarterly, 55(2), 111-125.

George, A. A., Hall, G.E., Stiegelbauer, S.M. (2006). Measuring Implementation in Schools: The

Stages of Concern Questionnaire. Austin, TX: SEDL.

Goe, L., Holdheide, L., & Miller, T. (2011). A Practical Guide to Designing Comprehensive Teacher Evaluation Systems: A Tool to Assist in the Development of Teacher Evaluation Systems. Washington, DC: National Comprehensive Center for Teacher Quality.

Hall, G., Hord, S.M. (1987). Change in Schools: Facilitating the Process. Albany, NY: State University of New York Press.

Hall, G. E., Hord, S.M. (2001). Implementing Change: Patterns, Principles, and Potholes (Third

ed.). Upper Saddle River, New Jersey: Pearson Education Inc.

Hamat, A., Embi, M.A. (2010). Constructivism in the Design of Online Learning Tools.

European Journal of Educational Studies, 2(3).

Hanushek, E. A., & Rivkin, S. G. (2010). Using Value-Added Measures of Teacher Quality. Brief 9. Washington, DC: National Center for Analysis of Longitudinal Data in Education Research, Urban Institute.


Haskins, R., & Loeb, S. (2007). A Plan to Improve the Quality of Teaching. Education

Digest, 73(1), 51-56.

Hinchey, P. H. (2010). Getting Teacher Assessment Right: What Policymakers Can Learn from Research. Boulder, CO: National Education Policy Center.

Hulpiau, V., Masschelein, E., Van Der Stockt, L., Verhesschen, P., & Waeytens, K. (2007). A

System of Student Feedback: Considerations of Academic Staff Taken into Account.

Tertiary Education & Management, 13(1), 35-45.

Jensen, F. (2010). The Teen Brain: It's Not Just Grown Up Yet. On NPR News Boston. Boston:

National Public Radio.

Jordi, R. (2011). Reframing the Concept of Reflection: Consciousness, Experiential Learning,

and Reflective Learning Practices. Adult Education Quarterly: A Journal of Research

and Theory, 61(2), 181-197.

Kane, T. J., Cantrell, S. (2012). Learning about Teaching: Initial Findings from the Measuring

Effective Teaching Project. Seattle, WA: Bill and Melinda Gates Foundation.

Kane, T. J., Rockoff, J.E., Staiger, D.O. (2008). What Does Certification Tell Us About Teacher

Effectiveness? Evidence from New York City. Economics of Education Review, 27, 615-

631.

Kim, D. (1993). The Link Between Individual and Organizational Learning. Sloan Management

Review, 37-50.

North Central Regional Educational Laboratory. (1995). Summary of Goals 2000 Act. Retrieved from http://www.ncrel.org/sdrs/areas/issues/envrnmnt/stw/sw0goals.htm


MADESE. (2011). Massachusetts Model System for Educator Evaluation Part I District-Level

Planning Guide.

Madichie, N. O. (2011). Students' evaluation of teaching (SET) in higher education: A question

of reliability and validity. Marketing Review, 11(4), 381-391.


Marsh, H. W. (2001). Distinguishing between Good (Useful) and Bad Workloads on Students'

Evaluations of Teaching. American Educational Research Journal, 38(1), 183-212.

Marzano, R., Boogren, T., Heflebower, T., Kanold-McIntyre, J., Pickering, D. (2012). Becoming

a Reflective Teacher. Bloomington, IN: Marzano Research Laboratories.

Maxwell, J. A. (2005). Qualitative Research Design An Interactive Approach (Second ed. Vol.

42). Thousand Oaks, Ca: Sage Publications.

Munoz, M. A., & Chang, F. C. (2007). The Elusive Relationship Between Teacher Characteristics and Student Academic Growth: A Longitudinal Multilevel Model for Change. Journal of Personnel Evaluation in Education, 20(3/4), 147-164.

National Board for Professional Teaching Standards. (2011). Student Learning, Student Achievement: How Do Teachers Measure Up? Executive Summary. Arlington, VA: Author.

Osterman, K. F., Kottkamp, R.B. (2004). Reflective Practice for Educators (second ed.).

Thousand Oaks, CA: Corwin Press.

Piaget, J. (1950). The Psychology of Intelligence (M. Piercy & D. E. Berlyne, Trans., ebook ed.). New York: Routledge & Kegan Paul.


Patrick, C. L. (2011). Student evaluations of teaching: effects of the Big Five personality traits,

grades and the validity hypothesis. Assessment & Evaluation in Higher

Education, 36(2), 239-249.

Patrick, D., Chester, M. D., & Banta, M. (2010). Race to the Top Application for Initial Funding.

Popham, J. (1998). The Dysfunctional Marriage of Formative and Summative Teacher

Evaluation. Journal of Personnel Evaluation in Education, 1, 269-273.

Potemski, A., Brennan-Gac, T., Oakes, A., Kershaw, L., Lohse, C., Johnston, M., Stillman, L.

(2010). Measurement of Student Growth; Emerging Trends Reflected in the State Phase 1

Race to the Top Applications. Naperville, IL: Learning Points & Council of Chief State

School Officers.

Remedios, R., & Lieberman, D. A. (2008). I liked your course because you taught me well: the

influence of grades, workload, expectations and goals on students' evaluations of

teaching. British Educational Research Journal, 34(1), 91-115.

Riley, R. W. (1995). Improving America's Schools Act of 1994.

Rivkin, S. G., Hanushek, E. A., & Kain, J. F. (2005). Teachers, Schools, and Academic

Achievement. Econometrica, 73(2), 417-458.

Sabbagh, L. (2007). The Teen Brain, Hard at Work. Scientific American Special

Edition, 17(2), 54-59.

Saldaña, J. (2013). The Coding Manual for Qualitative Researchers. London: Sage.

Schon, D. A. (1983). The Reflective Practitioner: How Professionals Think in Action. New York,

NY: Basic Books.

Schon, D. A. (1987). Educating the Reflective Practitioner. San Francisco, CA: Jossey-Bass.

Tzu, L. (ca. 500 B.C.E.). Tao Te Ching.


Usher, A. (2011a). AYP Results 2010-2011. Washington, DC: Center on Education Policy.

Usher, A. (2011b). How Many Schools Have Not Made AYP: Update with 2009-2010 Data and Five-Year Trends. Washington, DC: Center on Education Policy.

Vevere, N., & Kozlinskis, V. (2011). Students' Evaluation of Teaching Quality. Online

Submission.

Wayne, A. J., & Youngs, P. (2003). Teacher Characteristics and Student Achievement Gains: A

Review. Review of Educational Research, 73(1), 89-122.

Weinberg, B. A., Hashimoto, M., & Fleisher, B. M. (2009). Evaluating Teaching in Higher

Education. Journal of Economic Education, 40(3), 227-261.

Wright, R. E. (2006). Student Evaluations of Faculty: Concerns Raised in the Literature, and

Possible Solutions. College Student Journal, 40(2), 417-422.

York-Barr, J., Sommers, W. A., Ghere, G. S., & Montie, J. (2006). Reflective Practice to Improve Schools: An Action Guide for Educators. Thousand Oaks, CA: Corwin Press.


Appendix A

Component 3a

Communicating with Students

I. Expectations for Learning
A. Near the end of the class we review what we have learned to check if we met the learning goal
B. The learning goal is posted in my classroom
C. I know how each activity supports our learning goal

II. Directions and Procedures
A. Directions for homework and class work are clear
B. I understand the directions my teacher gives

III. Explanation of Content
A. Real life examples are used to help me understand new material
B. Our warm up/bell work connects to what we are learning


Component 3b

Using Questioning and Discussion Techniques

I. Quality of Questions
A. My teacher asks questions that take more than a few words to answer
B. Students ask questions of other students

II. Discussion Techniques
A. Follow-up questions such as, "Can anyone tell me more?" are used
B. My teacher waits until many hands are raised to answer a question

III. Student Participation
A. All voices are heard in a discussion
B. All students participate in our discussions


Component 3c

Engaging Students in Learning

I. Activities and assignments
A. The work in my class challenges me
B. The activities and assignments require me to think deeply

II. Instructional materials and resources
A. My teacher provides choice of activities
B. There is a closing activity that reviews what we have learned

III. Structure and pacing
A. I can view the agenda to know what will happen next
B. I start on a warm up/bell work when I take my seat


Component 3d

Using Assessment in Instruction

I. Assessment criteria
A. I know what criteria need to be met to be successful on my assignments
B. I know when and why my work meets or does not meet the standards

II. Feedback to students
A. I can use the feedback from my teacher to improve my work
B. Feedback on my work comes from my teacher and my peers

III. Monitoring of student learning
A. My teacher asks questions to check if we understand
B. My teacher uses pre-tests, review games/activities before a test to see if we understand the material


Component 3e

Demonstrating Flexibility and Responsiveness

I. Lesson Adjustment
A. When we don't understand my teacher uses examples to explain
B. My teacher uses many techniques to help me learn such as: lectures, video, readings, Internet, group work

II. Response to Students
A. My teacher only moves on when we all understand
B. My teacher makes me feel that my questions are important

III. Persistence
A. My teacher gives hints or asks the question a different way when we don't respond or understand
B. My teacher has high expectations for my success


Appendix B: Test-Retest Data by Teacher Participant

Table B1
MS1 Test-Retest Scores from I-SAID Instrument (with Standard Deviations in Parentheses)
Columns: statement number; average class score, first testing (SD); average class score, second testing (SD); mean class score of the two testings.

1 3.66 (0) 3.66 (0) 3.66

2 3.28 (.13) 3.09 (.13) 3.19

3 3.57 (.20) 3.28 (.20) 3.42

4 3.20 (.28) 2.80 (.28) 3.00

5 3.26 (.11) 3.10 (.11) 3.18

6 2.75 (.07) 2.85 (.07) 2.80

7 3.47 (.16) 3.23 (.16) 3.35

8 3.71 (.06) 3.61 (.06) 3.66

9 2.57 (.11) 2.73 (.11) 2.65

10 3.47 (.16) 3.23 (.16) 3.35

11 2.85 (0) 2.85 (0) 2.85

12 3.61 (.33) 3.14 (.33) 3.38

13 2.47 (.06) 2.57 (.06) 2.52

14 3.30 (.03) 3.25 (.03) 3.27

15 3.33 (.10) 3.19 (.10) 3.26

16 2.66 (.06) 2.57 (.06) 2.61

17 2.95 (0) 2.95 (0) 2.95

18 3.19 (.06) 3.28 (.06) 3.23

19 3.50 (.21) 3.20 (.21) 3.35

20 3.47 ( .20) 3.19 (.20) 3.33

21 2.80 (.10) 2.95 (.10) 2.87

22 3.45 (.07) 3.35 (.07) 3.40

23 2.95 (.10) 2.80 (.10) 2.88

24 3.28 (0) 3.28 (0) 3.28

25 2.38 (.06) 2.28 (.06) 2.33

26 3.04 (.03) 3.00 (.03) 3.02

27 3.09 (.06) 3.19 (.06) 3.14

28 2.95 (.13) 3.14 (.13) 3.04

29 3.25 (.24) 2.90 ( .24) 3.07

30 3.38 (.20) 3.09 (.20) 3.23

31 3.95 (.17) 3.70 (.17) 3.82


Table B2
HS1 Test-Retest Scores from I-SAID Instrument (with Standard Deviations in Parentheses)
Columns: statement number; average class score, first testing (SD); average class score, second testing (SD); mean class score of the two testings.

1 3.90 (.14) 3.70 (.14) 3.80

2 3.26 (.18) 3.00 (.18) 3.13

3 3.15 (.24) 2.80 (.24) 2.97

4 3.30 (.14) 3.10 (.14) 3.20

5 3.26 (.11) 3.10 (.11) 3.18

6 2.90 (.17) 2.65 (.17) 2.77

7 3.25 (.10) 3.40 (.10) 3.32

8 3.45 (.17) 3.20 (.17) 3.32

9 2.42 (.14) 2.63 (.14) 2.52

10 3.15 (.11) 3.00 (.11) 3.07

11 2.85 (.03) 2.80 (.03) 2.82

12 3.23 (0) 3.23 (0) 3.23

13 2.23 (.04) 2.29 (.04) 2.26

14 2.90 (.07) 3.00 (.07) 2.95

15 2.71 (.13) 2.90 (.13) 2.80

16 2.34 (.17) 2.60 (.17) 2.47

17 2.89 (.07) 3.00 (.07) 2.94

18 3.65 (.03) 3.70 (.03) 3.67

19 3.65 (.03) 3.70 (.03) 3.67

20 3.15 (.03) 3.20 (.03) 3.17

21 3.0 0 (.07) 3.15 (.07) 3.07

22 3.00 (.07) 3.10 (.07) 3.05

23 3.16 (.11) 3.00 (.11) 3.08

24 3.35 (.10) 3.20 (.10) 3.27

25 3.10 (.03) 3.15 ( .03) 3.12

26 3.05 (.11) 3.21 (.11) 3.13

27 3.21 (.26) 3.57 (.26) 3.39

28 2.88 (.07) 3.00 (.07) 2.94

29 2.89 (.03) 2.84 (.03) 2.86

30 2.70 (.10) 2.85 (.10) 2.77

31 2.89 (.33) 3.36 (.33) 3.13


Table B3
MS2 Scores from Observation Instrument (with Standard Deviations in Parentheses)
Columns: statement number; average class score, first testing (SD); average class score, second testing (SD); mean class score of the two testings.

1 3.34 (.27) 2.95 (.27) 3.15

2 3.09 (.09) 3.22 (.09) 3.15

3 2.77 (0) 2.77 (0) 2.77

4 3.30 (.14) 3.10 (.14) 3.20

5 3.25 (.07) 3.15 (.07) 3.2

6 3.18 (.12) 3.00 (.12) 3.09

7 2.68 (.03) 2.73 (.03) 2.71

8 3.13 (.06) 3.04 (.06) 3.09

9 3.13 (0) 3.13 (0) 3.13

10 2.95 (.14) 2.75 (.14) 2.85

11 2.29 (.24) 2.64 (.24) 2.47

12 2.95 (0) 2.95 (0) 2.95

13 2.56 (.06) 2.65 (.06) 2.60

14 2.90 (.03) 2.85 (.03) 2.88

15 2.85 (.13) 3.04 (.13) 2.95

16 2.25 (.24) 2.60 (.24) 2.42

17 2.30 (.17) 2.55 (.17) 2.42

18 3.31 (.03) 3.36 (.03) 3.34

19 3.33 (.06) 3.23 (.06) 3.28

20 3.40 (.28) 3.00 (.28) 3.20

21 3.08 (.06) 3.17 (.06) 3.13

22 3.12 (0) 3.12 (0) 3.12

23 2.78 (.15) 3.00 (.15) 2.89

24 3.09 (.09) 2.95 (.09) 3.02

25 3.31 (.16) 3.09 (.16) 3.20

26 3.27 (.22) 2.95 (.22) 3.11

27 3.00 (.03) 3.04 (.03) 3.02

28 2.50 (.09) 2.62 (.09) 2.56

29 3.13 (.06) 3.04 (.06) 3.08

30 2.69 (.19) 2.97 (.19) 2.83

31 3.18 (.19) 3.45 (.19) 3.31


Table B4
HS2 Scores from Observation Instrument (with Standard Deviations in Parentheses)
Columns: statement number; average class score, first testing (SD); average class score, second testing (SD); mean class score of the two testings.

1 3.21 (.14) 3.42 (.14) 3.31

2 3.33 (.15) 3.11 (.15) 3.22

3 2.47 (.03) 2.52 (.03) 2.50

4 3.11 (.07) 3.00 (.07) 3.05

5 3.42 (.22) 3.10 (.22) 3.26

6 3.52 (.07) 3.42 (.07) 3.47

7 2.33 (.11) 2.50 (.07) 2.41

8 3.36 (.18) 3.10 (.18) 3.23

9 2.72 (.15) 2.94 (.15) 2.83

10 3.15 (.07) 3.26 (.07) 3.21

11 2.72 (.03) 2.66 (.03) 2.69

12 3.83 (.11) 3.66 (.11) 3.75

13 3.21 (0) 3.21 (0) 3.21

14 2.94 (.11) 2.78 (.11) 2.86

15 3.00 (.11) 2.84 (.11) 2.92

16 2.63 (.07) 2.73 (.07) 2.68

17 2.31 (.18) 2.57 (.18) 2.44

18 2.66 (.07) 2.77 (.07) 2.72

19 1.76 (.20) 2.05 (.20) 1.91

20 3.42 (.14) 3.21 (.14) 3.31

21 3.26 (.11) 3.10 (.11) 3.18

22 3.31 (.03) 3.36 (.03) 3.34

23 2.89 (.11) 3.05 (.11) 2.97

24 3.44 (.11) 3.27 (.11) 3.36

25 3.21 (.14) 3.42 (.14) 3.31

26 3.52 (.11) 3.36 (.11) 3.44

27 3.77 (.23) 3.44 (.23) 3.61

28 3.15 (.03) 3.10 (.03) 3.12

29 3.36 (.07) 3.26 (.07) 3.31

30 3.47 (.18) 3.21 (.18) 3.34

31 3.50 (0) 3.50 (0) 3.50


Table B5
MS4 Scores from Observation Instrument (with Standard Deviations in Parentheses)
Columns: statement number; average class score, first testing (SD); average class score, second testing (SD); mean class score of the two testings.

1 3.79 (0) 3.79 (0) 3.79

2 3.58 (.02) 3.51 (.02) 3.56

3 3.14 (.10) 3.28 (.10) 3.21

4 3.65 (.12) 3.47 (.12) 3.56

5 3.50 (.02) 3.45 (.02) 3.47

6 3.56 (.06) 3.65 (.06) 3.60

7 3.69 (.03) 3.73 (.03) 3.71

8 3.73 (.03) 3.69 (.03) 3.71

9 3.00 (.09) 3.13 (.09) 3.06

10 3.59 (0) 3.59 (0) 3.59

11 2.71 (.26) 3.09 (.26) 2.90

12 3.58 (.11) 3.75 (.11) 3.66

13 2.86 (.06) 2.95 (.06) 2.91

14 2.85 (.26) 3.23 (.26) 3.04

15 3.36 (.09) 3.50 (.09) 3.43

16 2.66 (0) 2.66 (0) 2.66

17 2.70 (.17) 2.95 (.17) 2.82

18 3.16 (.02) 3.20 (.02) 3.18

19 3.65 (.09) 3.78 (.09) 3.71

20 3.62 (.11) 3.45 (.11) 3.54

21 3.45 (0) 3.45 (0) 3.45

22 3.75 (.14) 3.54 (.14) 3.64

23 3.50 (.14) 3.30 (.14) 3.40

24 3.65 (.09) 3.78 (.09) 3.71

25 3.31 (.09) 3.45 (.09) 3.38

26 3.50 (.11) 3.66 (.11) 3.58

27 3.79 (.11) 3.62 (.11) 3.70

28 3.33 (.05) 3.41 (.05) 3.37

29 3.69 (.03) 3.73 (.03) 3.71

30 3.50 (.05) 3.58 (.05) 3.54

31 3.83 (0) 3.83 (0) 3.83


Table B6
HS3 Scores from Observation Instrument (with Standard Deviations in Parentheses)
Columns: statement number; average class score, first testing (SD); average class score, second testing (SD); mean class score of the two testings.

1 3.76 (.06) 3.85 (.06) 3.80

2 3.15 (.10) 3.30 (.10) 3.22

3 2.70 (.03) 2.75 (.03) 2.72

4 3.15 (.10) 3.00 (.10) 3.07

5 3.14 (0) 3.14 (0) 3.14

6 2.60 (.21) 2.90 (.21) 2.75

7 3.00 (0) 3.00 (0) 3.00

8 3.20 (0) 3.20 (0) 3.20

9 2.76 (.13) 2.57 (.13) 2.66

10 2.90 (.06) 3.00 (.06) 2.95

11 2.60 (.14) 2.80 (.14) 2.70

12 2.95 (.02) 2.91 (.02) 2.93

13 2.19 (.10) 2.33 (.10) 2.26

14 3.00 (.03) 3.04 (.03) 3.02

15 2.94 (.07) 3.05 (.07) 3.00

16 2.00 (.23) 2.33 (.23) 2.16

17 2.33 (.06) 2.42 (.06) 2.38

18 3.09 (.03) 3.04 (.03) 3.07

19 2.85 (.13) 3.04 (.13) 2.95

20 3.19 (.10) 3.33 (.10) 3.26

21 3.20 (.03) 3.15 (.03) 3.17

22 3.35 (.10) 3.20 (.10) 3.27

23 2.50 (.07) 2.60 (.07) 2.55

24 3.04 (.03) 3.09 (.03) 3.07

25 2.52 (.13) 2.71 (.13) 2.61

26 2.80 (.17) 3.05 (.17) 2.92

27 3.47 (.10) 3.33 (.10) 3.40

28 2.30 (.18) 2.56 (.18) 2.43

29 3.05 (.03) 3.11 (.03) 3.08

30 3.13 (.06) 3.22 (.06) 3.18

31 3.38 (.03) 3.44 (.03) 3.41


Table B7
HS4 Scores from Observation Instrument (with Standard Deviations in Parentheses)
Columns: statement number; average class score, first testing (SD); average class score, second testing (SD); mean class score of the two testings.

1 3.72 (0) 3.72 (0) 3.72

2 3.50 (.12) 3.68 (.12) 3.59

3 3.38 (0) 3.38 (0) 3.38

4 3.22 (.28) 3.63 (.28) 3.43

5 3.50 (.06) 3.59 (.06) 3.54

6 3.90 (.13) 3.71 (.13) 3.80

7 3.63 (0) 3.63 (0) 3.63

8 3.72 (.06) 3.62 (.06) 3.68

9 3.10 (0) 3.10 (0) 3.10

10 3.28 (.10) 3.42 (.10) 3.35

11 3.09 (.13) 3.28 (.13) 3.19

12 3.25 (.03) 3.30 (.03) 3.27

13 2.68 (.19) 2.95 (.19) 2.81

14 3.52 (.10) 3.66 (.10) 3.59

15 3.52 (.03) 3.57 (.03) 3.54

16 1.52 (.23) 1.85 (.23) 1.69

17 2.95 (.23) 3.28 (.23) 3.11

18 3.40 (.06) 3.50 (.06) 3.45

19 3.00 (.09) 3.13 (.09) 3.06

20 3.45 (.09) 3.59 (.09) 3.52

21 3.31 (.09) 3.45 (.09) 3.38

22 3.72 (.12) 3.54 (.12) 3.63

23 3.19 (.03) 3.14 (.03) 3.16

24 3.57 (0) 3.57 (0) 3.57

25 2.45 (.28) 2.86 (.28) 2.65

26 3.38 (.10) 3.52 (.10) 3.45

27 3.09 (.20) 3.38 (.20) 3.23

28 2.90 (.50) 3.65 (.50) 3.27

29 3.31 (.22) 3.63 (.22) 3.47

30 3.47 (0) 3.47 (0) 3.47

31 3.81 (0) 3.81 (0) 3.81


Appendix C: Questions Asked of Critical Peers

I-SAID
Individual Student Assessment of Instructional Delivery

This is a DRAFT of a survey which I developed as part of my Doctoral Dissertation Project. I am very interested in your feedback on the questions, especially on:

Readability—do you think your students can understand the language?

Usefulness—would the information from the survey provide you with diagnostic data upon

which you can reflect and make adjustments to your instructional practice?

Wording—do you use other wording for the same concepts?

Do you have any other thoughts?

Please indicate the grade level you teach.


Appendix D: Student Feedback Survey

I-SAID
Individual Student Assessment of Instructional Delivery

Please reflect on what happens in your classroom and rate whether you disagree or agree with each of the following statements, using the scale Strongly Disagree, Disagree, Neither Agree nor Disagree, Agree, Strongly Agree.

In my classroom:
1. The learning goal is posted in my classroom
2. I know how each activity supports our learning goal
3. Near the end of the class we review what we have learned to check if we met the learning goal
4. Directions for homework and class work are clear
5. I understand the directions my teacher gives
6. Real life examples are used to help me understand new material
7. Our warm up/bell work connects to what we are learning
8. My teacher asks questions that take more than a few words to answer
9. Students ask questions of other students
10. Follow-up questions such as, "Can anyone tell me more?" are used
11. My teacher waits until many hands are raised to answer a question
12. All voices are heard in a discussion
13. All students participate in our discussions
14. The work in my class challenges me
15. The activities and assignments require me to think deeply
16. My teacher provides choice of activities
17. There is a closing activity that reviews what we have learned
18. I can view the agenda to know what will happen next
19. I start on a warm up/bell work when I take my seat
20. I know what criteria need to be met to be successful on my assignments


21. I know when and why my work meets or does not meet the standards
22. I can use the feedback from my teacher to improve my work
23. Feedback on my work comes from my teacher and my peers
24. My teacher asks questions to check if we understand
25. My teacher uses pre-tests, review games/activities before a test to see if we understand the material
26. When we don't understand my teacher uses examples to explain
27. My teacher uses many techniques to help me learn such as: lectures, video, readings, Internet, group work
28. My teacher only moves on when we all understand
29. My teacher makes me feel that my questions are important
30. My teacher gives hints or asks the question a different way when we don't respond or understand
31. My teacher has high expectations for my success


Appendix E: Reflective Memo

What aspects of the I-SAID have had the most benefit for reflection?

What additional questions would benefit teachers in reflecting to improve their instructional delivery and would you change any of the questions to better suit you?

What benefit or limitations does pairing additional data have on the reflective process?

• With teacher growth data?

• With self-Assessment data?

• With observational data?

What actions/processes did you utilize after reflecting on the data?

What ways will you use the I-SAID in the future?

How frequently would you use the I-SAID to gather data from your students?

How can the I-SAID assist you in developing your professional goals?

How can the I-SAID contribute to developing your professional development plan?

How could you use the results from the I-SAID to enhance peer/mentor observation?

What must you do to facilitate improving your instructional practice based on the results of the

I-SAID?


Appendix F: Stages of Concern Questionnaire

Concerns Based Systems International Southwest Educational Development Laboratory

Stages of Concern Questionnaire

Name (optional): ______________________________________________________________

The purpose of this questionnaire is to determine what people who are using or thinking about using

various programs are concerned about at various times during the adoption process.

The items were developed from typical responses of school and college teachers who ranged from no

knowledge at all about various programs to many years’ experience using them. Therefore, many of

the items on this questionnaire may appear to be of little relevance or irrelevant to you at this

time.

For the completely irrelevant items, please circle “0” on the scale. Other items will represent those

concerns you do have, in varying degrees of intensity, and should be marked higher on the scale.

For example:

This statement is very true of me at this time. 0 1 2 3 4 5 6 7

This statement is somewhat true of me now. 0 1 2 3 4 5 6 7

This statement is not at all true of me at this time. 0 1 2 3 4 5 6 7

This statement seems irrelevant to me. 0 1 2 3 4 5 6 7

Please respond to the items in terms of your present concerns, or how you feel about your involvement

with this innovation. We do not hold to any one definition of the innovation so please think of

it in terms of your own perception of what it involves. Phrases such as “this approach” and “the new

system” all refer to the same innovation. Remember to respond to each item in terms of your present

concerns about your involvement or potential involvement with the innovation.

Thank you for taking time to complete this task.


1. I am concerned about students’ attitudes toward the innovation. 0 1 2 3 4 5 6 7

2. I now know of some other approaches that might work better. 0 1 2 3 4 5 6 7

3. I am more concerned about another innovation. 0 1 2 3 4 5 6 7

4. I am concerned about not having enough time to organize myself each day. 0 1 2 3 4 5 6 7

5. I would like to help other faculty in their use of the innovation. 0 1 2 3 4 5 6 7

6. I have a very limited knowledge of the innovation. 0 1 2 3 4 5 6 7


7. I would like to know the effect of reorganization on my professional status. 0 1 2 3 4 5 6 7

8. I am concerned about conflict between my interests and my responsibilities. 0 1 2 3 4 5 6 7

9. I am concerned about revising my use of the innovation. 0 1 2 3 4 5 6 7

10. I would like to develop working relationships with both our faculty and outside faculty using this

innovation. 0 1 2 3 4 5 6 7

11. I am concerned about how the innovation affects students. 0 1 2 3 4 5 6 7

12. I am not concerned about the innovation at this time. 0 1 2 3 4 5 6 7

13. I would like to know who will make the decisions in the new system. 0 1 2 3 4 5 6 7

14. I would like to discuss the possibility of using the innovation. 0 1 2 3 4 5 6 7

15. I would like to know what resources are available if we decide to adopt the innovation 0 1 2 3 4 5 6 7

16. I am concerned about my inability to manage all that the innovation requires. 0 1 2 3 4 5 6 7

17. I would like to know how my teaching or administration is supposed to change. 0 1 2 3 4 5 6 7

18. I would like to familiarize other departments or persons with the progress of this new approach

0 1 2 3 4 5 6 7

19. I am concerned about evaluating my impact on students. 0 1 2 3 4 5 6 7

20. I would like to revise the innovation’s approach. 0 1 2 3 4 5 6 7

21. I am preoccupied with things other than the innovation. 0 1 2 3 4 5 6 7

22. I would like to modify our use of the innovation based on the experience of our students.

0 1 2 3 4 5 6 7

23. I spend little time thinking about the innovation. 0 1 2 3 4 5 6 7

24. I would like to excite my students about their part in this approach. 0 1 2 3 4 5 6 7

25. I am concerned about time spent working with nonacademic problems related to the innovation.

0 1 2 3 4 5 6 7

26. I would like to know what the use of the innovation will require in the immediate future. 0 1 2 3 4 5 6 7

27. I would like to coordinate my efforts with others to maximize the innovation’s effects. 0 1 2 3 4 5 6 7

28. I would like to have more information on time and energy commitments required by the innovation.

0 1 2 3 4 5 6 7

29. I would like to know what other faculty are doing in this area. 0 1 2 3 4 5 6 7


30. Currently, other priorities prevent me from focusing my attention on the innovation. 0 1 2 3 4 5 6 7

31. I would like to determine how to supplement, enhance, or replace the innovation. 0 1 2 3 4 5 6 7

32. I would like to use feedback from students to change the program. 0 1 2 3 4 5 6 7

33. I would like to know how my role will change when I am using the innovation. 0 1 2 3 4 5 6 7

34. Coordination of tasks and people is taking too much of my time. 0 1 2 3 4 5 6 7

35. I would like to know how the innovation is better than what we have now. 0 1 2 3 4 5 6 7


Appendix G: Permission from Dr. Gene Hall

Dear Dr. Hall,

I am a doctoral student working at Northeastern University in Massachusetts. My dissertation

is entitled, "Improving Instructional Practices: Reflection on Student Feedback." In addition to

developing a student feedback tool I am interested in ascertaining information on the stages of

concern the initial adopters are in after piloting the feedback tool. I have determined that your

stages of concern questionnaire would be the best tool to accomplish this.

I am seeking your permission to use your questionnaire, re-print it in the appendix, and to

reprint figure 4.2 Stages of Concern About the Innovation: Paragraph Definitions from your

book, Implementing Change: Patterns, Principles and Potholes. Proper credit for the use of the

SoC questionnaire and this figure will be given. If there are any questions you may contact me at

H-508-883-4094 C-774-571-5007 or by email [email protected]

I look forward to your response.

Sincerely,

Lisa C. Oliveira


Hello Lisa:

Thank you for the email. It sounds like measuring Stages of Concern can be useful in your

study. You have my permission to use the SoC Questionnaire, as long as you do not change the

wording of the items.

I do not know which edition of Implementing Change you are using. If it is not the 3rd edition

(2011) then you will be working with the old form of the SoC Questionnaire. You really need to

use the new form (Form 075). You also will want to obtain the technical manual from the

Southwest Educational Development Laboratory, in Austin, TX.

If you have any questions or want me to look over your findings, please let me know.

Best of success in completing your study,

Dr. Gene Hall


Appendix H: Example of Student Data Presented to Teacher for Reflection

Table H-1
Numerical Data Presented to Teacher MS1

Statement | Strongly Disagree | Disagree | Agree | Strongly Agree | Average Score | Comments
1 | 0 | 0 | 7 | 14 | 3.66 |
2 | 0 | 2 | 11 | 8 | 3.28 |
3 | 0 | 0 | 9 | 12 | 3.57 |
4 | 0 | 3 | 11 | 7 | 3.20 | Not always with homework
5 | 0 | 2 | 11 | 7 | 3.26 | (1) write-in: "sometimes"
6 | 1 | 6 | 10 | 3 | 2.75 | (1) write-in: "sometimes but not always"
7 | 1 | 2 | 4 | 14 | 3.47 |
8 | 0 | 0 | 6 | 15 | 3.71 |
9 | 1 | 8 | 8 | 2 | 2.57 | (2) no responses; "I don't understand the question."
10 | 0 | 0 | 11 | 10 | 3.47 |
11 | 1 | 5 | 8 | 6 | 2.85 | (1) no response: "sometimes"
12 | 0 | 1 | 7 | 13 | 3.61 |
13 | 4 | 4 | 13 | 0 | 2.47 |
14 | 1 | 2 | 11 | 7 | 3.30 | "Depends what we are working on"
15 | 1 | 0 | 10 | 9 | 3.33 | (1) no response: "Not really, they are all the same"
16 | 2 | 6 | 11 | 2 | 2.66 |
17 | 1 | 2 | 15 | 3 | 2.95 |
18 | 1 | 1 | 11 | 8 | 3.19 |
19 | 0 | 1 | 8 | 11 | 3.50 | "There is not always one"
20 | 0 | 1 | 9 | 11 | 3.47 |
21 | 1 | 4 | 13 | 2 | 2.80 | (1) no response: "In the middle between agree and disagree"
22 | 0 | 0 | 11 | 9 | 3.45 | (1) no response: "sometimes"
23 | 0 | 3 | 16 | 2 | 2.95 |
24 | 0 | 2 | 11 | 8 | 3.28 |
25 | 2 | 10 | 9 | 0 | 2.38 |
26 | 1 | 2 | 13 | 5 | 3.04 |
27 | 0 | 4 | 11 | 6 | 3.09 |
28 | 1 | 2 | 15 | 3 | 2.95 |
29 | 0 | 1 | 13 | 6 | 3.25 | (1) no response: "sometimes"
30 | 0 | 1 | 12 | 8 | 3.38 |
31 | 0 | 0 | 1 | 20 | 3.95 |
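For reference, the Average Score column follows from the 1-4 response coding shown in Figure H-A (Strongly Disagree = 1 through Strongly Agree = 4). For statement 8, with 21 respondents,

\[
\bar{x}_8 = \frac{(1)(0) + (2)(0) + (3)(6) + (4)(15)}{21} = \frac{78}{21} \approx 3.71,
\]

which matches the tabled value; rows where students skipped an item or wrote in an alternative response may not reconcile exactly.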


Figure H-A. Visual Student Data for Teacher MS1: one pie chart per I-SAID statement (Questions 1-31) showing the percentage of students selecting each response option, coded Strongly Disagree (1), Disagree (2), Agree (3), and Strongly Agree (4).


Appendix I: Results of Open-Ended Reflective Memo

Table I-1
Coding of Open-Ended Question 1: What aspects of the I-SAID had the most benefit for reflection?

Teacher | Course | In-Vivo Code | Pattern Code
MS2 | MS Mathematics 8 | "see the student's perspective"; "improve teacher effectiveness" | Student perception; Identify deficits
MS1 | MS Language Arts 8 | "knowing I need to do more pre-assessments" | Lesson structure
MS3 | MS Science 8 | "questions involving reviewing material"; "checking if we met the goal" | Lesson structure; Lesson structure
HS1 | HS Spanish & Latin | "variety of questions addressed multiple areas"; "agenda, posting objectives, clarity of teacher directions"; "how the student interprets this" | Complete picture; Lesson structure; Student perception
HS4 | HS Physical Science | "clarity of instruction"; "presenting material"; "relate to real life experiences"; "challenging my students" | Lesson structure; Lesson structure; Student perception; Student perception
HS3 | HS Chemistry | "reflect on my students' perception" | Student perception
HS2 | HS Spanish | "importance of bell activities and activators" | Lesson structure


Table I-2
Coding of Open-Ended Question 2: What additional questions would benefit teachers in reflecting to improve their instructional delivery, and would you change any of the questions to better suit you?

Teacher | Course | In-Vivo Code | Pattern Code
MS2 | MS Mathematics 8 | "effectiveness of strategies with difficult concepts" | Student perception
MS1 | MS Language Arts 8 | "add Focus Activity" | Strategy inclusion
MS3 | MS Science 8 | "enough hands on activity"; "if students feel my classroom welcomes questions"; "if students feel teacher is approachable" | Strategy inclusion; Student perception; Student perception
HS1 | HS Spanish & Latin | "how students take notes"; "what methods are most effective for student"; "pacing of the class" | Strategy inclusion; Strategy inclusion; Student perception; Student perception
HS4 | HS Physical Science | "not at this time" | Satisfaction
HS3 | HS Chemistry | "work in groups"; "discuss or practice skills"; "provides opportunities to help with understanding" | Strategy inclusion; Strategy inclusion; Student perception
HS2 | HS Spanish | "time between taking assessment and having it returned"; "engagement"; "adequate time to test"; "enjoy class"; "take knowledge and apply it outside the classroom" | Student perception (all five codes)


Table I-3
Coding of Open-Ended Question 3a. What benefits or limitations does pairing data from the I-SAID with teacher growth data have on the reflective process?

Teacher | Course | In-Vivo Code | Pattern Code
MS2 | MS Mathematics 8 | "all limited"; "student opinions play a factor" | Limitations; Limitations
MS1 | MS Language Arts 8 | "knowing that my growth is or is not keeping pace with what students feel" | Complete picture
MS3 | MS Science 8 | "not sure" | Unfamiliarity with growth data
HS1 | HS Spanish & Latin | "may paint a more complete picture"; "if data from I-SAID helps improve instruction strategies growth data should show improvement"; "limitation is some questions could be subjective" | Complete picture; Lesson structure; Complete picture; Limitations
HS4 | HS Physical Science | "not sure" | Unfamiliarity with growth data
HS3 | HS Chemistry | "not sure" | Unfamiliarity with growth data
HS2 | HS Spanish | "not enough focus on pertinence of lessons" | Lesson structure


Table I-4
Coding of Open-Ended Question 3b. What benefits or limitations does pairing data from the I-SAID with teacher self-assessment have on the reflective process?

Teacher | Course | In-Vivo Code | Pattern Code
MS2 | MS Mathematics 8 | "opportunities for benefits" | Benefit
MS1 | MS Language Arts 8 | "benefit would be knowing that what I believe I am doing is being recognized by my students" | Checks and balances; Student vs. teacher perceptions
MS3 | MS Science 8 | "compare my responses to the students" | Checks and balances; Student vs. teacher perceptions
HS1 | HS Spanish & Latin | "how aligned their self-assessment is with feedback from their students"; "some data is based on opinion" | Checks and balances; Student vs. teacher perceptions; Limitations
HS4 | HS Physical Science | "great benefit"; "we are our worst critics" | Benefit; Identify deficits
HS3 | HS Chemistry | "reflect on teacher perceptions with the students' perceptions" | Checks and balances; Student vs. teacher perceptions
HS2 | HS Spanish | "needs more frequency based questions" | Limitations


Table I-5
Coding of Open-Ended Question 3c. What benefits or limitations does pairing data from the I-SAID with observational data have on the reflective process?

Teacher | Course | In-Vivo Code | Pattern Code
MS2 | MS Mathematics 8 | "observational data also limited not always reflective of true practices" | Limitations
MS1 | MS Language Arts 8 | "knowing that what is being observed and recommended I am doing for my students" | Checks and balances
MS3 | MS Science 8 | "I am not sure what the observational data is" | Missing information
HS1 | HS Spanish & Latin | "data from the I-SAID would be based on typical practices over a longer period of time"; "observations or walkthrough only shows a snapshot" | Bigger picture for students; Smaller picture for evaluator
HS4 | HS Physical Science | "very beneficial as the observer may see things differently from the students as well as the teacher" | Evaluator vs. student perceptions vs. teacher perceptions
HS3 | HS Chemistry | "provides a check and balance system which could help eliminate biases" | Checks and balances
HS2 | HS Spanish | "not enough open ended questions to embellish answers" | Limitations


Table I-6
Coding of Open-Ended Question 4. What actions or processes did you use after reflecting on the data?

Teacher | Course | In-Vivo Code | Pattern Code
MS2 | MS Mathematics 8 | "I realized something that I do the students don't realize" | Action planning
MS1 | MS Language Arts 8 | "I created a number of pre-assessments" | Action step
MS3 | MS Science 8 | "plan better for summarizing activities" | Action step
HS1 | HS Spanish & Latin | "I used the average score category to find areas that were ranked lowest and reflect on my practice" | Action planning
HS4 | HS Physical Science | "I should push for more students to contribute" | Action step
HS3 | HS Chemistry | "I discussed strategies with colleagues" | Action planning
HS2 | HS Spanish | "discussed with class of seniors" | Action planning


Table I-7
Coding of Open-Ended Question 5. In what ways could you use the I-SAID in the future?

Teacher | Course | In-Vivo Code | Pattern Code
MS2 | MS Mathematics 8 | "as a tool to measure growth" | Growth measurement
MS1 | MS Language Arts 8 | "to see where my shortcomings are and to improve" | Identify deficits
MS3 | MS Science 8 | "use it as a periodic reflective tool to see if I am meeting goals previously unmet" | Reflective tool; Growth measurement
HS1 | HS Spanish & Latin | "it can point out areas of strength and where improvement is needed"; "good tool when paired with self-reflection and observation"; "create a complete picture" | Identify deficits; Complete picture
HS4 | HS Physical Science | "good way to take a hard look through the eyes of the students" | Identify deficits
HS3 | HS Chemistry | "self-reflections" | Reflective tool
HS2 | HS Spanish | "end of year evaluation tool" | Reflective tool

Table I-8
Responses for Open-Ended Question 6. How frequently would you use the I-SAID to gather data?

Teacher | Course | Response
MS2 | MS Mathematics 8 | 4 times per year (quarterly)
MS1 | MS Language Arts 8 | 2 times per year
MS3 | MS Science 8 | 4 times per year (quarterly)
HS1 | HS Spanish & Latin | 2 times per year
HS4 | HS Physical Science | 2 times per year
HS3 | HS Chemistry | 1 time per year
HS2 | HS Spanish | 1 time per year


Table I-9
Coding of Open-Ended Question 7. How can the I-SAID assist you in developing your professional goals?

Teacher | Course | In-Vivo Code | Pattern Code
MS2 | MS Mathematics 8 | "focus on key areas of need" | Identify deficits
MS1 | MS Language Arts 8 | "a SMART goal can be developed to improve upon shortcomings in my repertoire" | SMART goal development; Identify deficits
MS3 | MS Science 8 | "develop a SMART goal that measures success by moving from agree to strongly agree" | SMART goal development
HS1 | HS Spanish & Latin | "reflect on teaching practices"; "from student perspective" | Reflective tool
HS4 | HS Physical Science | "focus on improving my weakness and continuing my strengths" | Identify deficits; Reflective tool
HS3 | HS Chemistry | "help to identify what types of courses or workshops would help me learn new techniques" | Professional development
HS2 | HS Spanish | "seeing how students perceive our teaching" | Student perceptions


Table I-10
Coding of Open-Ended Question 8. How can the I-SAID contribute to developing your professional development plan?

Teacher | Course | In-Vivo Code | Pattern Code
MS2 | MS Mathematics 8 | "look for professional development in the areas I should focus on" | Identify deficits; Identify professional development needs
MS1 | MS Language Arts 8 | "detect my shortcomings, create SMART goal, plan professional development" | Identify deficits; SMART goal development; Identify professional development needs
MS3 | MS Science 8 | "try to recruit professional development by looking at questions" | Identify deficits; Identify professional development needs
HS1 | HS Spanish & Latin | "make long term plans based on the information from the I-SAID then use the I-SAID to track my progress" | Identify deficits; Identify professional development needs; Growth measurement
HS4 | HS Physical Science | "point me in the right direction of areas to benefit my students"; "most of the tools are in my possession but I must take time to review" | Student perception; Identify deficits; Reflective tool
HS3 | HS Chemistry | "self-reflection" | Reflective tool
HS2 | HS Spanish | No response | No response


Table I-11
Coding of Open-Ended Question 9. How could you use the results from the I-SAID to enhance peer/mentor observations?

Teacher | Course | In-Vivo Code | Pattern Code
MS2 | MS Mathematics 8 | "focus on areas to work on" | Identify deficits; Guide mentor/mentee
MS1 | MS Language Arts 8 | "great tool for mentors to zero in on suggestions for the new teacher" | Identify deficits; Guide mentor/mentee
MS3 | MS Science 8 | "mentor mentee discussions" | Guide mentor/mentee; Develop discussions
HS1 | HS Spanish & Latin | "find areas that need improvement"; "discuss with my mentee"; "specific things for mentor to look for" | Identify deficits; Develop discussions; Guide mentor/mentee
HS4 | HS Physical Science | "as a mentor become a better observer"; "offer more concrete examples to mentee"; "seeing students perspective allows focus on most important areas" | Improve mentor skills; Identify deficits; Guide mentor/mentee; Student perspective
HS3 | HS Chemistry | "if I scored low on a question I could ask a colleague to observe and give me feedback"; "determine if student perception is correct" | Identify deficits; Develop discussions; Student perceptions
HS2 | HS Spanish | "useful in gathering information on specific criteria" | Identify deficits


Table I-12
Coding of Open-Ended Question 10. What must you do to facilitate improving your instructional practice based on the results of the I-SAID?

Teacher | Course | In-Vivo Code | Pattern Code
MS2 | MS Mathematics 8 | "work on emphasizing learning goals, closing activities and improve wait time" | Skill development
MS1 | MS Language Arts 8 | "need to create more pre-assessments" | Skill development
MS3 | MS Science 8 | "check in more to see if students have met our goals" | Skill development; Student needs
HS1 | HS Spanish & Latin | "search for more opportunities to offer students choice" | Skill development; Student needs
HS4 | HS Physical Science | "focus my attention on practices that make students feel they are all involved" | Skill development; Student needs
HS3 | HS Chemistry | "explain the purpose of class activities to students" | Skill development; Student needs
HS2 | HS Spanish | "need to include more activators, emphasis on objectives and pass out a unit agenda" | Skill development


Appendix J - IRB Approval

[The IRB approval documentation occupies pages 172 through 177 of the original document and is reproduced there as page images.]

