
Running head: TEACHERS’ SENSEMAKING OF DATA AND IMPLICATIONS FOR EQUITY

Teachers’ sensemaking of data and implications for equity

Melanie Bertrand
Division of Educational Leadership and Innovation
Mary Lou Fulton Teachers College
Arizona State University
P.O. Box 37100, Mail Code 3151
Phoenix, AZ 85069-7100
(602) 543-5042
[email protected]

Julie A. Marsh
Rossier School of Education
University of Southern California
3470 Trousdale Parkway, WPH, 904C
Los Angeles, CA 90089-4039
(213) 740-3710
[email protected]

Recently published in American Educational Research Journal

SEE: Bertrand, M. & Marsh, J. (2015). Teachers’ sensemaking of data and implications for equity. American Educational Research Journal, 52(5), 861-893.

Correspondence concerning this article should be addressed to Melanie Bertrand. Melanie Bertrand is an assistant professor at Arizona State University in Mary Lou Fulton Teachers College, P.O. Box 37100, Mail Code 3151, Phoenix, AZ 85069-7100; email: [email protected]. Her research employs micro- and macro-level lenses to expand conceptions of leadership and explore the role of student voice in challenging systemic racism in education.

Julie A. Marsh is an associate professor at the Rossier School of Education at the University of Southern California. She specializes in research on K-12 policy implementation, educational reform, and accountability. Her research blends perspectives in education, sociology, and political science.

Author note: The authors gratefully acknowledge support for this research from the Spencer Foundation. We also greatly appreciate the cooperation of educators in our case schools and district, as well as contributions from other members of our research team, including Caitlin C. Farrell, Alice Huguet, Beth Katz, Jennifer McCombs, and Brian McInnis. In addition, we benefited greatly from helpful feedback from Robert Rueda and the anonymous reviewers.


Abstract

This article examines an understudied aspect of teachers’ sensemaking of student learning data:

the way in which teachers explain the causes of the outcomes observed in data. Drawing on

sensemaking and attribution theory and data collected in six middle schools, we find that, while

teachers most often attributed outcomes to their own instruction, they also frequently focused on

supposedly stable student characteristics. By citing these characteristics as explanations for the

results analyzed, teachers may have inhibited reflection on their practice and reinforced low

expectations for English Language Learners (ELLs) and students in special education. These

findings yield implications for 1) the effectiveness of data-use reforms and 2) equity in the

education of ELLs and students in special education.

Key words: Attribution, Data use, English Language Learners, Equity, Sensemaking,

Students in special education


Teachers’ Sensemaking of Data and Implications for Equity

The national discourse around data-driven decision making in education frequently touts

the benefits of student learning data—often defined as student assessment results—for changing

teachers’ practice. U.S. Secretary of Education Arne Duncan, for example, asserted, “Good data

promotes transparency and accountability. It shows the public the value that they’re getting in

their investment in education. It gives teachers information they need to change their practices to

improve student achievement” (2010). Much like Duncan, administrators across the country view data as important not only for accountability but also for instructional improvement.

Despite calls for data to drive instruction, relevant research has yielded mixed results

(Marsh, 2012) and few studies have examined a key aspect of such reforms: teachers’

sensemaking of data (Coburn & Turner, 2011, 2012). Sensemaking is critical to consider in light

of implications related to 1) the effectiveness of data-use policies and 2) the possible impacts on

some student groups, such as English Language Learners (ELLs) and students in special

education. As for the first point, research on policy implementation more generally indicates that

teachers’ practice and responses to policy are largely driven by their prior knowledge, beliefs, and

values, which may lead to differences in implementation (Coburn, 2001, 2005; Spillane, Reiser,

& Reimer, 2002). Hence, mixed results about teachers’ responses to data may be explained by

sensemaking, in addition to other reasons, such as variability in teacher supports (Marsh, 2012).

Although some studies point to the potential for data to substantively inform and shape teachers’

practice (Hamilton et al., 2009; Konstantopoulos, Miller, & van der Ploeg, 2013; Marsh, 2012;

Nelson, Slavit, & Deuel, 2012), others indicate that teachers may not significantly alter their

instruction in response to data (Ikemoto & Marsh, 2007; Oláh, Lawrence, & Riggan, 2010).

Teachers’ sensemaking of data may be further complicated by their beliefs about students


in special education and ELLs, who are often the target of accountability policies and data-use

directives. These “subgroups,” as exemplars of “target populations” in policy generally, are

social constructions (Artiles, 2011) created through “cultural characterization or popular images

of the persons or groups” (Schneider & Ingram, 1993, p. 334). No Child Left Behind and more

recent waiver policies require districts and schools to disaggregate student data by subgroup

(Darling-Hammond, 2007). As socially constructed target populations made concrete through

policy, students in special education and ELLs are portrayed in evaluative terms (Schneider &

Ingram, 1993). Common discourse about students in special education has characterized this

group as having medically definable problems that are disconnected from social contexts

(Artiles, 2011). Similarly, ELLs are portrayed as lacking language and academically supportive

home lives (Gutiérrez & Orellana, 2006). As we discuss further below, research also indicates

these students often face low expectations (Cook, Tankersley, Cook, & Landrum, 2000; Pettit,

2011), possibly affecting outcomes (Jussim & Harber, 2005). This complex intersection of

implicit beliefs—reflecting broader discourses—about ability and socially constructed difference

may influence the ways in which teachers interpret and act on data related to these two groups.

In this article, we examine a key aspect of teachers’ sensemaking of data: attribution or

the way in which teachers explain or make sense of the root causes of the outcomes observed in

data. How teachers attribute outcomes is especially important since this shapes their future

instruction and expectations for students. For instance, teachers may attribute low test scores to

prior instruction, as expected by data-use policies (Datnow, Park, & Kennedy-Lewis, 2012), or to

perceived student deficits. Scholarship suggests that these different paths of attribution have

implications for instruction and learning (Jussim & Harber, 2005; Schildkamp & Kuiper, 2010).

We draw on data collected in six middle schools to investigate how teachers make sense


of data—including assessment results, student work, and observations—and the factors shaping

these attributions. While we examine attributions overall, we pay careful attention to those

related to ELLs and special education students—two populations well-represented in our case

study schools. Below we present a unique theoretical framework that synthesizes sensemaking

and attribution theories with a reconceptualization of data-use advocates’ vision of teacher data

use. Next, we review relevant empirical research on teachers’ data use and expectations,

followed by a description of the research methods. We then discuss our findings and conclude

with implications for policy, practice, and future research.

Theoretical Framework

We draw upon three lenses to better understand the cognitive processes involved in

teachers’ attributions of data: 1) a reconceptualization of the data use cycle, which indicates how

data may lead to action; 2) attribution theory, which posits that motivation to act is associated

with individuals’ perceptions of the causes of outcomes; and 3) sensemaking theory, which

addresses how individuals make meaning of their experiences. Figure 1 portrays the links

between these, putting into static form what is actually a multiplicity of overlapping, non-linear,

and dynamic processes. As shown in the middle circle of the figure, we assert that attribution and

engagement in the data cycle are mutually influential aspects of teachers’ sensemaking about

data. The recursive sensemaking process, entailing attribution and changing understandings of

data, is influenced by beliefs and past experiences, depicted in the oval on the left side of the

figure. At the same time, sensemaking (re)shapes beliefs and interpretations of experiences. The

mutual influence of sensemaking, on one hand, and beliefs and past experiences, on the other, is

indicated by a two-headed arrow. Finally, this iterative relationship influences possible responses

to data, represented by the oval on the right. Below we describe our theoretical framework,


moving from components (the data cycle and attribution) to the whole (sensemaking).

The Data Use Cycle

The data use cycle provides a starting point from which to approach the other two

theories. The cycle, adapted from Mandinach, Honey, Light, and Brunner (2008) and Marsh et

al. (2006), provides a normative model of teacher data use, assuming a rational approach to

decision making in which one step logically leads to another. The cycle includes four phases

along a continuum, beginning with teachers accessing data (1). They then analyze the data to

turn it into information (2) and combine it with their understanding and expertise to generate

actionable knowledge (3), which can then be used to respond to data (4). Current discourse in

education emphasizes responses that improve instructional practice. In Figure 1, we complexify

the data cycle, conceiving of the first three elements as central aspects of sensemaking that are

not phases, but possible mutually influential processes that do not necessarily fall along a single

continuum. This reconceptualization is supported by research indicating that data use may not

follow a rational model (Coburn & Turner, 2011; Datnow et al., 2012; Farley-Ripple & Buttram,

2014; Slavit, Nelson, & Deuel, 2013). On a related note, this reconceptualization acknowledges

that teachers may use data in non-normative ways, such as to target students deemed most likely

to improve on state tests (Booher-Jennings, 2005; Marsh et al., 2006).

Attribution Theory

In the process of transforming data into actionable knowledge, teachers may make

decisions about the causes of student academic outcomes (Nelson et al., 2012; Oláh et al., 2010;

Schildkamp & Kuiper, 2010). These attributions—or perceived causes of outcomes (Seifert,

2004)—may, in turn, influence the process itself, a supposition supported by research describing

teachers making generalizations about the causes of student outcomes (Schildkamp & Kuiper,


2010). This suggests that sensemaking entails not only the transformation of data to knowledge,

but also attribution. Within the iterative sensemaking process, teachers may (re)form

understandings of causes of student outcomes, which, in turn, affect how data may be

transformed into knowledge and also what the data signify. Figure 1 reflects this possible mutual

influence in the double-headed arrow between attribution and the reconceptualized data cycle.

Attribution theory identifies three characteristics of attributions that elucidate the

relationship between individuals’ motivation to act and their perceptions of causes of outcomes

(Seifert, 2004; Weiner, 2010). First, there is the locus of causality, ranging from internal (one’s

self) to external (someone or something else). For example, students who blame themselves for

test scores would have an internal locus of causality. The second characteristic is stability, which

refers to a person’s assessment of whether a cause is enduring or transitory. The final

characteristic is controllability, or an individual’s belief in her or his ability to control an

outcome. How an individual formulates attributions along the three axes has behavioral

consequences (Seifert, 2004; Weiner, 2010), including motivation for future achievement,

persistence in a task, and intensity in tackling a task (Dweck & Leggett, 1988; Nicholls, 1984).

Attribution theory, then, provides insights into the nature of teachers’ attributions of data

and their potential influence on motivation to take action in response to the data. For instance, a

teacher may attribute poor student assessment scores to prior instruction (internal locus of

causality) on an “off day” (unstable circumstance), but generally view herself as having control

over the quality of her instruction. According to the theory, this teacher may be motivated to

improve her instruction. However, if she had attributed the poor student outcomes to her students

(external locus of causality), considering them to be “slow learners” (a stable characteristic out

of her control), the theory predicts low motivation to improve instruction. In that attribution is an


aspect of sensemaking, sensemaking itself has implications for teachers’ actions related to data,

as shown by the arrow between the middle circle and the oval on the right in Figure 1.

Sensemaking

The concept of sensemaking (Weick, 1995; Weick, Sutcliffe, & Obstfeld, 2005) sheds

light on why attribution and data analysis may unfold in specific ways and how these phenomena

may influence responses. Empirical scholarship has indicated that data analysis occurs within a

larger sensemaking process (Datnow et al., 2012; Slavit et al., 2013; Spillane, 2012; Spillane &

Miele, 2007). Other scholarship (Nelson et al., 2012; Oláh et al., 2010; Schildkamp & Kuiper,

2010) indicates that attributions can arise in data analysis, suggesting that sensemaking

encompasses both attribution and the reconceptualized data cycle, as shown in Figure 1.

Sensemaking theory posits that people partially construct their reality by creating

meanings for their experiences (Coburn, 2001; Spillane & Miele, 2007; Spillane et al., 2002;

Weick, 1995). Weick (1995) explains, “To talk about sensemaking is to talk about reality as an

ongoing accomplishment that takes form when people make retrospective sense of the situations

in which they find themselves and their creations...” (p. 15). In constructing their experiences

through retrospection, people do not consider all possible stimuli; instead, they filter experiences

through existing knowledge, paying attention to some stimuli and ignoring others (Spillane et al.,

2002; Weick et al., 2005). Throughout this process, mental models—people’s beliefs about

causal relationships—can be used to make predictions in new circumstances (Spillane & Miele,

2007; S. Strauss, 2001). Importantly, mental models reflect often implicit understandings that

can be inferred but not directly observed, and teachers may unknowingly use more than one

model at a time, leading to conflicting conclusions (Spillane & Miele, 2007; S. Strauss, 1993).

Data use is an act of sensemaking (Datnow et al., 2012; Spillane, 2012; Spillane & Miele,


2007) that is influenced by teachers’ past experiences and beliefs. At the same time, teachers’

sensemaking in the present may influence their beliefs and how they understand the past,

including past student outcomes. As the interplay between past and present unfolds, mental

models act as filters through which the data are understood, a process that may (re)form or reify

the models (Spillane & Miele, 2007). The application and (re)formation of these mental models

may give rise to attributions—entailing decisions about the locus of causality, stability, and

controllability—allowing teachers to link present outcomes to past phenomena, such as student

characteristics. For example, teachers’ expectations of ELLs and students in special education, as

beliefs, likely influence their sensemaking and, therefore, attributions. In this way,

sensemaking—and the associated attributions—have implications for the teacher’s beliefs, in

addition to motivation to respond in certain ways (Spillane & Miele, 2007).

In combining sensemaking theory with attribution theory and the reconceptualized data

cycle, our theoretical framework illuminates how teachers may come to understand data and the

possible consequences of the process. Below we explore past research on teachers’ data use and

connect this to scholarship on their expectations for student subgroups.

Past Research on Teachers’ Data Use and Expectations

Below we examine extant research on two areas that are salient to our inquiry: teachers’

use of data and teachers’ expectations for students, specifically those designated for special

education or ELL services. As sensemaking theory would suggest, expectations play an

important role in how teachers make sense of data, possibly contributing to attributions.

Teachers’ Data Use

A growing body of literature has explored teacher data use as a contextualized and

complex practice that does not necessarily follow a technical-rational model. Salient to our study


is research on the factors that shape teachers’ sensemaking about data and their attributions.

Data use is influenced by teachers’ knowledge of how to interpret and respond to data

(Gummer & Mandinach, 2015; Marsh, 2012). For instance, Means et al. (2011) found that

teachers exhibited a limited understanding of several data interpretation concepts (such as

validity and reliability), which influenced their analyses. In addition, data use is influenced by

whether it occurs in a group and the nature of group interactions (Horn, Kane, & Wilson, 2015;

Huguet, Marsh, & Farrell, 2014; Marsh, Bertrand, & Huguet, 2015).

As sensemaking theory would predict, research has found that teachers’ beliefs and

experiences also shape how they approach data. For instance, Datnow, Park, and Kennedy-Lewis

(2012) found that teachers made meaning of data in an eclectic manner, sometimes drawing upon

their intuitions and past interactions with students, while also being influenced by policy and

school contexts. Beliefs can also influence teachers’ attention to certain aspects of data (Coburn

& Turner, 2011). One important subset of teachers’ beliefs more generally is their conceptions of

data and inquiry (Jimerson, 2014). The research of Nelson et al. (2012) and Slavit et al. (2013)

highlights the importance of epistemological stances toward data, showing that teachers in their

study engaged in more in-depth discussions of instruction when they sought to improve

instruction rather than validate past performance (Slavit et al., 2013).

While research has explored the factors influencing sensemaking about data, few studies

have investigated attribution, which we consider an aspect of sensemaking. Nelson et al. (2012)

mention that teachers may attribute data to student background when they seek to validate past

performance. Another study found that some teachers attributed student difficulties in math to

students’ understanding of concepts, while others cited contextual or external factors, such as

students’ supposed “cognitive weaknesses” (Oláh et al., 2010). One study that explicitly


considered attribution found that teachers at two of six study schools were not using data to

reflect on teaching, instead explaining “poor output simply as a result of unmotivated students,”

thereby hindering the goal of using data to inform instruction (Schildkamp & Kuiper, 2010, p.

494). Similarly, studies on Response to Intervention—an approach to identify and support

students with learning needs—suggested that teachers using this approach attributed student

outcomes to perceived deficits rather than their own teaching (Orosco & Klingner, 2010;

Thorius, Maxcy, Macey, & Cox, 2014).

Though these studies begin to fill an important gap in the research literature, there

remains much to be discovered about teachers’ attributions of student data. How does attribution

arise within sensemaking processes and how does it influence teachers’ responses to data? When

attributions are directed toward students, which student groups are singled out and what are the

implications? These questions are critical in light of scholarship linking teachers’ beliefs about

students to academic performance, which we review below.

Teacher Expectations

As discussed above, teachers may arrive at attributions of data through processes of

sensemaking, in which they draw upon their beliefs and experiences. Expectations for students,

as beliefs, may shape attributions, which, in turn, may influence future expectations. This link

between attributions and expectations is important because research indicates that expectations

influence student outcomes (Jussim & Harber, 2005).

Jussim and Harber (2005), in a literature review, conclude that teacher expectations

influence outcomes for all students, but often to a small extent. However, effect sizes are much

larger for students who are members of marginalized groups—such as lower-achieving students

and students of color (Jussim & Harber, 2005). Individual studies support this conclusion (Oates,


2003; van den Bergh, Denessen, Hornstra, Voeten, & Holland, 2010). For instance, McKown

and Weinstein (2008), in a study of 83 classrooms, explain that, for high-bias teachers,

expectations accounted for an average of .29 standard deviations of racial achievement

differences. Similarly, Oates (2003), using a national data set, demonstrates that teachers’

negative perceptions of African American students influenced academic outcomes.

Research has generated mixed results about teacher expectations for ELLs. A literature

review on teachers’ beliefs about this group indicates “that many teachers are frustrated with

ELLs, or even blame ELLs, whereas others hold more positive perceptions of this student

population” (Pettit, 2011, p. 130). The teachers who hold negative perceptions may assume that

ELLs cannot master some curricula or view bilingualism as a deficit in English (Pettit, 2011). In

contrast, Gándara, Maxwell-Jolly, and Driscoll (2005) report that most of the 5,300 California

teachers they surveyed did not blame ELLs for low achievement (p. 6). In summary, this mixed

literature suggests that ELLs may sometimes, but not always, be the target of low expectations.

Other research has illuminated teachers’ expectations of students in special education. In

a literature review, de Boer, Pijl, and Minnaert (2011) find that teachers generally hold neutral or

negative views about including such students in mainstream classrooms. Similarly, Cook et al.

(2000) found that 70 Ohio teachers of inclusive mainstream classrooms disproportionately

named students with disabilities when asked to identify students they were concerned about or

wanted to have removed from the classroom.

Overall, attitudes toward both students in special education and ELLs range from neutral

to negative, possibly reflecting low expectations and suggesting serious consequences. This

literature is important considering the research on teachers’ expectations of some groups of

students of color. This is the case because ELLs are often racialized as students of color (Aud,


Fox, & KewalRamani, 2010; Gutiérrez & Orellana, 2006; Jimenez, 2012) and students in special

education are disproportionately students of color (Artiles, 2011; Tefera, Thorius, & Artiles,

2014; Waitoller, Artiles, & Cheney, 2010). For this reason, teachers’ expectations for these

groups may intersect with their potentially low expectations for some groups of students of color.

In short, expectations may be shaped by more than ELL or special education designations.

The literature on expectations suggests that the attributions to student characteristics cited

by some researchers (Nelson et al., 2012; Schildkamp & Kuiper, 2010) may both reflect and

promote certain expectations—an aspect of sensemaking—possibly influencing teachers’

response and student outcomes. Also, as Jussim and Harber (2005) point out, the potential

negative effects are more salient for marginalized students, such as students in special education

and ELLs. However, scholarship has yet to link expectations and attribution or examine the types

of attribution and their relation to sensemaking. This article addresses these areas.

Methods

The study presented in this article was part of a larger study with the goal of exploring

the role of coaches and professional learning communities (PLCs) in increasing teachers’

capacity to use data to improve language arts instruction. In alignment with that goal, the larger

study was a year-long comparative case study during the 2011-2012 school year of six low-performing middle schools in four districts implementing strategies to encourage teachers to use

data—including assessment results, student work, and observations—to inform instruction. In

order to generate theory from our data (Bogdan & Biklen, 2007; A. L. Strauss & Corbin, 1994),

the research team used a largely qualitative approach, visiting each study school three times

throughout the school year to capture changes over time. The study included interviews, focus

groups, observations, and surveys, including open-ended survey questions. There were seven


members of the research team, including the co-authors, who identify as White women. Julie A.

Marsh and most team members were involved in both data collection and analysis, and two

members—including Melanie Bertrand—were involved in analysis only. Those who collected

data made clear to participants that data use was the study focus, which may have prompted

participants to increase their consideration of data, possibly influencing interview responses.

During the data analysis phase of the larger study, the two co-authors noticed a

phenomenon that we had not intended to study—attribution. We decided to explore it further by

addressing the following research questions, which guide this article: 1) How do teachers make

sense of student learning data and attribute the results they observe? and 2) What factors appear

to shape this sensemaking process? We designed our analytical approach to shed light on the

patterns associated with attribution and their nature, using two main analytic tools: data displays

(Miles & Huberman, 1994) and aspects of the constant comparative method (A. L. Strauss &

Corbin, 1994). Data displays organized pieces of information so that conclusions could be drawn

(Miles & Huberman, 1994). The constant comparative method traditionally occurs during data

collection, in which a phenomenon is noticed, shaping subsequent data collection in a process

through which data collected over time are compared. However, we identified attribution as a

phenomenon of interest after data collection had ended. The aspects of the constant comparative

method that we used include the following: 1) collecting instances of the same phenomenon, 2)

identifying features of the phenomenon, and 3) identifying processes, relationships, and factors

associated with the phenomenon (Bogdan & Biklen, 2007; A. L. Strauss & Corbin, 1994).

Study Sample

Districts and schools were purposefully selected to reflect the aim of the larger study.

Two districts (Shenandoah and Mammoth) invested in literacy coaches to further data-use goals;


one district (Sequoia) invested in PLCs; and one district (Rainier) invested in data coaches. (All

names are pseudonyms.) In addition, the schools had not met state and federal accountability

targets for more than five years. Each of the six case study schools varied in size, as Table 1

shows, but all of them served significant proportions of Latina/o students, African American

students, ELLs, and/or students in special education. At each school, we selected two to four

focal teachers to participate in the study. Some of these teachers were PLC lead teachers, whom

we categorized separately in other publications. The main selection criteria were 1) a language

arts content focus and 2) whether the teacher was working with a coach and/or PLC. A science teacher also participated briefly in the study. Other participants included coaches, school

administrators, and district administrators. For the purposes of this study, we draw mainly from

data collected related to the focal teachers, described further in Table 1. [INSERT TABLE 1]

Data Collection

During the three visits to schools throughout the school year, the research team conducted

interviews, focus groups, and observations. Focal teachers, coaches, and school administrators

were interviewed up to three times each, resulting in 79 school-level interviews, which were later

transcribed. Of these interviews, we focus on those with the teachers. The research team

followed a semi-structured interview protocol with the teachers, which included questions about

the types of data to which they had access and the use of data in their work. Finally, the research

team asked teachers to bring student data to the second and third interviews and describe how

they used and responded to them. Though these interviews served the purposes of the larger

study, they also created optimal conditions to surface teachers’ attributions of data.

In order to capture social interactions around data, the research team observed 20 school

and district meetings involving data use, including PLC and grade-level meetings. In addition,


the team conducted six focus groups during the second and third school visits with 24 non-focal

teachers, including teachers in areas outside of language arts. Focus group teachers were asked

about their data use and their work with coaches or PLCs. As with the focal teacher interviews,

the focus groups afforded insights into sensemaking and attribution about data.

Further data collection included interviews of 13 district-level leaders—including

superintendents and staff overseeing literacy efforts. Also, the team surveyed the focal teachers

(17) and coaches (four) once a month. The surveys included both open- and closed-ended

questions about participants’ data use. On average, we received completed surveys for 91% of

the coaches or lead PLC teachers and 94% of the case study teachers. Fifteen participants

completed all of their surveys and the remaining six failed to respond to one or two of them. We

draw upon a subset of the survey data, the responses to open-ended survey questions asking

participants to describe their “biggest challenge” and “biggest success” related to data use.

Data Analysis

We began our multi-phased data analysis approach by coding all transcripts, observation

notes, and open-ended survey responses using NVivo qualitative analysis software with an initial

set of codes related to the larger study. In this phase, we were not guided by the theoretical

framework discussed above, but instead by a conceptualization of the data use cycle that posited

four phases of data use (data → information → knowledge → response), which are influenced by

a range of factors (Marsh & Farrell, 2015). With this framework in mind, we coded all

qualitative data using both inductive open coding and a priori codes aligning with the main

aspects of the four-phase data cycle, including: data type, data analysis, responses to data, school

context, and district context. Some of the inductive codes that emerged included data use related to ELLs and students in special education, and attribution. We applied the attribution code to any


instance in which a participant discussed the perceived cause of student data outcomes.

After the first phase of analysis, we turned our focus to attribution, engaging in an

iterative process in which we reconceptualized the data cycle and explored attribution and

sensemaking theories. We discovered that the parameters of our attribution code in the first

phase aligned with definitions from attribution theorists. We studied the patterns of attribution by

creating a series of matrices (Miles & Huberman, 1994) that allowed us to employ the aspects of

the constant comparative method (A. L. Strauss & Corbin, 1994) mentioned above. Our first

matrix, created in Excel, included every instance of attribution—our unit of analysis—each of

which was given one row, allowing us to analyze trends across cases (Averill, 2002). We

identified 112 instances from our first-phase coding. For each row we noted the associated

participant and school and the target of the attribution, facilitating analysis of the factors

associated with patterns (Miles & Huberman, 1994). At this point, we considered three types of

targets: the student, teacher, and wording of the test. We divided the student- and teacher-target

codes to capture whether a participant was referring to one’s own student(s) and/or oneself, and

whether the attribution referred to something positive or negative. In this way, the matrix

reflected sensemaking theory’s insight that mental models provide causal explanations.

In the next phase, we focused on a subset of 62 instances of attribution, those made by

teachers only about data that they could have perceived as being reflective of their own practice.

Such data were usually from their own classrooms, but we also included instances involving

grade-level data and, in some cases, school-level data. The majority of these instances entailed

teachers making sense of assessment data—state tests, interim assessments, and grade-level or

classroom-level tests—while some involved student work. Informed by sensemaking and

attribution theories, we analyzed these instances using aspects of the constant comparative


method (A. L. Strauss & Corbin, 1994), creating a new matrix of just these 62 instances, keeping

the columns and information appearing in the first. We then added columns for locus of

causality, stability, and controllability, along with practices involved in each instance, such as the

use of PLC protocols. We also added columns to provide more information on the targets of

attribution, for instance, considering whether the attribution involved references to ELLs or

students in special education, or to oneself as an individual teacher or part of an instructional

team. We then analyzed the instances one by one to fill in the new columns, while also taking

notes on how each instance would be understood through the lens of sensemaking.

Once we completed the expanded matrix, we began to conduct comparisons within Excel,

sorting the matrix by different columns. Through this process, we began to understand that

attribution to students was more complex than we had originally thought, entailing attribution to

either stable student characteristics or student understanding in the moment. We also began to

view attribution to the “teacher” as actually attribution to instruction. Likewise, we shifted our

understanding of attribution to the wording on tests as attribution to the nature of the tests more

generally. Through this analysis, four mental models of sensemaking emerged: attribution to 1)

instruction, 2) student understanding, 3) the nature of the test, and 4) student characteristics.

To ensure the trustworthiness of this analysis, we conducted a test of inter-rater

reliability, in which the two authors each coded all 62 instances for these models. We then

conducted statistical analyses comparing the two sets of coding. For Models 1, 3, and 4, there

was substantial to high agreement. For Model 1, there was 84% agreement, with a Cohen’s

Kappa (coefficient) of 0.66. For Models 3 and 4, there was 92% agreement, with Cohen’s Kappa

statistics of 0.81 and 0.82 respectively. For Model 2, there was 76% agreement, with a Cohen’s

Kappa of 0.39, indicating fair agreement. Following this analysis, we discussed and reassessed


Model 2, the only model for which agreement was not at least substantial. We then aligned our

coding for all instances of disagreement, recoding instances when necessary.
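The agreement statistics above follow the standard two-rater formulation: percent agreement is the share of instances coded identically, and Cohen's Kappa corrects that figure for chance agreement. As a minimal sketch (not the authors' actual tooling), the computation for one model's binary codes might look like this; the coder lists below are hypothetical, for illustration only:

```python
def percent_agreement(rater_a, rater_b):
    """Share of instances on which the two raters assigned the same code."""
    return sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Cohen's Kappa for two raters with binary (0/1) codes.

    Kappa = (p_o - p_e) / (1 - p_e): observed agreement p_o corrected
    for the agreement p_e expected by chance, given each rater's
    marginal frequency of coding "model present."
    """
    n = len(rater_a)
    p_o = percent_agreement(rater_a, rater_b)
    p_a = sum(rater_a) / n  # rater A's "model present" rate
    p_b = sum(rater_b) / n  # rater B's "model present" rate
    p_e = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes for 20 instances (1 = model present) -- illustration only.
coder_1 = [1] * 10 + [0] * 10
coder_2 = [1] * 8 + [0] * 2 + [0] * 8 + [1] * 2
print(percent_agreement(coder_1, coder_2))       # 0.8
print(round(cohens_kappa(coder_1, coder_2), 3))  # 0.6
```

On common interpretive benchmarks, Kappa values around 0.39 read as fair agreement and values above roughly 0.6 as substantial, consistent with the interpretations reported for the four models.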

Findings

Our analysis of teachers’ self-reports suggests that the teachers activated four distinct

mental models of sensemaking when attributing student outcome data. Encompassing beliefs

about the causes of student outcomes, the models have implications both for teacher motivation to change instruction and for marginalized student groups, such as students in special education and

ELLs. We discuss each mental model below, and then go into more depth about one mental

model that entailed attribution to student characteristics. We end this section with an exploration

of the school-level contextual factors that may have played a role in use of the mental models.

Four Mental Models

Each mental model was associated with certain dimensions of attribution—locus,

stability, and controllability—and encapsulated explanations about the causes of student

outcomes. The models allowed teachers to quickly formulate understandings of data. Teachers

alluded to these models in implicit ways, rarely discussing their beliefs about, for instance, the

possible connection between instruction and improvement in outcomes. Instead, the models

surfaced in repeated explanations that pointed to beliefs, as predicted by sensemaking theory.

These explanations differed within and across teachers; however, taken together, they pointed to

mental models at work. Each of the models cites a different cause of outcomes: 1) instruction, 2)

student understanding, 3) the nature of the test, and 4) student characteristics. Any given teacher

used a range of these models over the study period, often drawing upon more than one at a time

(as shown in Table 2). Below we discuss each of these models in more detail. [TABLE 2 HERE]

Model 1: Instruction


If teachers had made Model 1 explicit, they would have explained: “Classroom

instruction influences student learning, which is reflected in data.” This model aligned with the

expectations of data-use policies, which posit that teachers’ perception of a connection between

teaching and outcomes allows for data to prompt instructional improvement (Mandinach, 2012).

As such, Model 1 can be viewed as normative. Indeed, administrators in our study shared this

view. One principal commented, “I do have [the] expectation that teachers would look at student

work and modify and differentiate their instruction based on student work.”

Teachers often drew upon Model 1 when making sense of data in specific, concrete

situations. For instance, this model was operating when a seventh grade teacher, Ms. Castañeda,

described analyzing the results of a common grade assessment of students’ understanding of

foreshadowing and plot. Noting that one of her classes had more difficulty than another, she said:

…[W]hat…[one class] had a hard time [with] was actually taking the story and analyzing

it…, and I think that was because I maybe didn’t give them specific examples. With my

other group, I think I went into more detail…. So maybe that’s…why my students did,

one group did better than the other.

In attributing one class’s difficulty with her previous instruction, Ms. Castañeda implicitly relied

on Model 1. Most examples of Model 1 paralleled that of Ms. Castañeda.

Model 1 was frequently associated with specific dimensions of attribution. First, teachers

often cited an internal locus of causality with this model: their instruction. This can be seen with

Ms. Castañeda, who said “I” four times to signal that she associated her students’ difficulty with

her own actions in the past. Second, Model 1 often entailed the attributional dimension of

instability in that teachers viewed instruction as changeable. Ms. Castañeda made clear that she

did not view her instruction as reflecting a stable teaching ability by illustrating the differences


between her approaches in the two classes. Finally, this model also involved the dimension of

controllability when teachers suggested they were capable of altering their instruction. For

instance, in another quotation, Ms. Castañeda described jointly crafting an instructional response

to the common grade assessment results: “[The other teachers and I] just talked about what we

were going to do for the following week and…how we’re going to be able to help them with the

analyzing of the story.” Clearly she felt she had control over reaching the goal.

By involving the dimensions of internal locus of causality, instability, and controllability,

Model 1 had the potential to motivate teachers to improve their instruction. If, as teachers made

sense of data, they believed 1) their instruction caused student outcomes, 2) their instruction was

not always the same, and 3) they were in control of their instruction, then, as attribution theory

suggests, they may have been more motivated to alter their instruction, which could have

promoted future sensemaking about the connections between instruction and student outcomes.

Model 2: Student Understanding

Model 2 involved teachers citing student understanding as the cause of student learning

results. A summation of this model could be: “Student understanding affects outcomes.” Similar

to Model 1, such an approach to data is normative, cited as beneficial to instruction in the

research literature (Goertz, Oláh, & Riggan, 2009; Supovitz, 2012). Teachers usually invoked

Model 2 to understand results for specific test questions. A seventh-grade English language arts

and social studies teacher, Mr. Johnson, employed this model to make sense of benchmark

assessment results. Of note, he also invoked Model 3 (involving attribution to test wording);

however, we focus here on aspects of his commentary that illustrate Model 2. He explained:

[On the benchmark] there was stuff for the kids…that was hard reading. For me,

personally, how they ask the questions, the words they used to ask questions, tend to be


difficult. So I use that as kind of test-taking skills rather than just standards, kind of

teaching them what it means, what they are asking you. … [A] lot of times the kids, they

can read, and they know what they are reading, but they don’t understand what they [the

questions] are asking of them. They don’t understand the question. So lot of times, I’ll

take those benchmark questions, and I’ll just put in the words if they can understand, kind

of chart it up, so they’ll have it on the [wall in the] room, so they know.

In this explanation, Mr. Johnson’s sensemaking focused on his students’ understanding when

they took the benchmark. He suggested that the test measured the students’ test-taking skills

rather than their understanding of the “standards.” Also, he seemed to assert that students

understood the passages in the test (“they know what they are reading”), but not the test

questions themselves (“but they don’t understand what they [the questions] are asking of them”).

Mr. Johnson’s analysis, then, appeared to consider student understanding of: test-taking, test

passages, and test questions. In this way, the teacher practiced a nuanced form of sensemaking.

As seen in Mr. Johnson’s quotation, Model 2 involved an external locus of causality. The

cause of the benchmark results, for him, was students’ understanding when they took the test.

Even though he discussed an instructional strategy (posting vocabulary words), this seemed to be

a response to what he considered to be the cause of the outcomes (the students’

misunderstanding of the test questions). In Model 2, the cause—student understanding—was

unstable. For instance, Mr. Johnson explained that, “a lot of times,” the students did not

understand the test questions, suggesting that the misunderstanding occurred frequently, but not

all the time. In addition, he described instructional strategies he either planned to implement

and/or had employed in the past to increase student understanding of the test questions,

suggesting the attributional dimension of instability. Finally, Model 2 appeared to involve a


belief that student understanding is controllable, as illustrated by Mr. Johnson’s indication that

he could influence student understanding through instruction.

Even though Model 2 involved an external locus of causality, it also entailed instability

and controllability. In other words, teachers seemed to believe that student understanding was

changeable and controllable. Attribution theory suggests that teachers’ use of this model could

have spurred them to alter instruction, while sensemaking theory indicates that they could use the

model to predict future student understandings and outcomes.

Model 3: Nature of the Test

The cause of student results in Model 3 was the nature of the test, including question

wording and curricular alignment. The underlying assumption of the model—which applied only

to assessment results—could be voiced this way: “The nature of the test affects student

outcomes.” As with Model 2, this model sometimes entailed a focus on specific test questions

rather than aggregate data. Teachers would look at test scores and then analyze the test itself to

assess the validity of the questions. As the previous example with Mr. Johnson illustrates,

teachers sometimes employed both Models 2 and 3 at the same time.

Mr. Flagler, an eighth-grade English language arts teacher, along with members of his

PLC, used Model 3 when analyzing results of common grade assessments, which the PLC had

created. He described analyzing one of the questions on such an assessment:

There was a question that 100 percent of the students got right, every single one. We

looked at it…and we asked ourselves, “How was that useful? If everybody got it right,

was it a good question? I mean, could we have done, how can [we] tweak it so it would

be more useful and more information could be derived from it? Was it framed in…such a

way that it was too easy?”


Here Mr. Flagler indicated that characteristics of a particular test question were the reason that

every student answered it correctly. The thread of causality, then, is clear: The test question was

too easy, leading to the outcome of every student answering it correctly.

Comparing Mr. Flagler’s explanation of student outcomes to Mr. Johnson’s shows that

dimensions of attribution can vary widely in Model 3. In terms of locus of causality, Mr. Johnson

described a district-created benchmark exam, an external locus. In contrast, Mr. Flagler and his

colleagues created the common grade assessment that he discussed. Before interrupting himself,

he said, “‘I mean, could we have done…,’” pointing to an internal locus of causality. He then

shifted focus mid-sentence to how the question could be tweaked, suggesting the attributional

dimensions of instability and controllability. He viewed the test questions as changeable

(unstable), and controllable. Mr. Johnson, on the other hand, commented that he was able to help

students understand the test questions (indicating a use of Model 2), but of course made no

mention of feeling empowered to change the questions themselves. In terms of Model 3, Mr.

Johnson demonstrated the attributional dimension of uncontrollability.

The implications for motivation related to Model 3 varied by context. When teachers

were responsible for writing assessments, they may have understood test questions to reflect an

internal locus of causality and be unstable and controllable. Attribution theory indicates that this

model could lead to motivation to adjust the test in the responses phase of the data cycle, while

sensemaking theory suggests that the underlying line of reasoning could be self-perpetuating.

However, this model does not appear to provide an impetus to improve instruction.

Model 4: Student Characteristics

In Model 4, the perceived “cause” of student results was inherent student characteristics,

often specific to certain groups rather than all students. This model could be explained as


follows: “Students in this group have inherent abilities and attributes, which affect their learning

and outcomes.” Often this model entailed framing the student group in question as having

intractable difficulties with learning, echoing others’ (Horn, 2007; Stein, 2001) findings about

teachers constructing reified student categories. In addition, this model encompassed attributions

to students’ motivation or work ethic. In citing quasi-immutable student characteristics as the

causes of learning outcomes, this model stood in contrast to the normative Models 1 and 2.

Model 4 often involved a set of unspoken assumptions that allowed causal relationships

to make sense. Ms. Hightower, a seventh-grade English language arts teacher, drew upon this

model when explaining the results of a benchmark exam in which 33% of the class scored

“proficient.” When asked if the results were surprising, she responded: “It’s not surprising

because I have some low boys in there, and I have some resource kids [students in special

education]. So these two resource kids are below basic. I have some low kids in there, even the

fact that there is only four below basic is good.” The adjectives she used to connote struggling

students were “low” and “below basic,” a designation corresponding to benchmark and state test

scores. She presented the explanation of her scores as if the connection between “resource kids”

and lack of proficiency was self-evident, suggesting an assumption that students in special

education score poorly on tests by nature. Even the category of “low boys” betrayed the assumption

that the scores were low because the students were “low.” In addition, both categories were all-

encompassing. She did not describe specific areas of difficulty, but, instead, referred to students

in special education as simply “below basic,” a description aimed at the students themselves, not

their changeable traits. Indeed, she used the adjective “low” to describe students, not their skills.

Model 4 was associated with an external locus of causality, as Ms. Hightower’s

explanation makes clear. She connected the benchmark scores to certain subgroups of her


students, not to herself. This model was also associated with the attributional dimension of

stability. Teachers often referred to students in ways that suggested constancy rather than

change. This was accomplished in subtle ways, as can be seen in Ms. Hightower’s all-

encompassing descriptions of the students. She did not state that the “low boys” or the “resource

kids” were incapable of learning, but the designations implied a level of stability, hinting that, for

instance, the term “resource kids” would delineate the same group of students for the foreseeable

future. This distinction is important. Teachers in our study employed Model 4—which entails

assigning students to rather fixed categories—while also viewing them as capable of learning.

For example, one teacher, Mr. Schneider, explained, “We have classes that [are] going to score

lower due to certain demographics and due to certain prior history.” “Demographics” and

“history” can be considered stable characteristics, which he cited as the causes of lower scores.

He continued by comparing his class with another teacher’s: “I know she has kids that…came in

a little bit higher than mine, and it’s always fun for my kids to try to get as close to them as they

can. They get blown out every time, but it’s still fun to try to get there.” He implied that his

students were capable of improvement, but that surpassing the other class was unlikely,

illustrating how teachers could believe both in student learning and fixed student characteristics.

Finally, Model 4 often entailed the dimension of uncontrollability. Teachers using this

model may not have felt they could change the supposedly stable characteristics of certain

students. Ms. Hightower hinted at this when expressing a lack of surprise at the low proficiency

rate, as if the issue was beyond her control. Mr. Schneider’s example is illustrative here, as well.

He implied that his colleague’s class would always be “higher” than his class, hinting at a fixed

outlook. Importantly, the lack of control lay with the designations in which teachers placed

students—“high,” “low,” “resource,” “English language learner,” etc. For instance, teachers may


have felt incapable of changing a special education student into one who is not so designated.

However, this viewpoint did not preclude the belief that they had control over learning.

Attribution theory suggests that Model 4 could have undermined motivation to adjust

instruction in that it involved an external locus of causality, stability, and uncontrollability. In

addition, past research (Jussim & Harber, 2005) suggests that the use of Model 4—which

involves low expectations—may have had implications for student outcomes. To reiterate a

previous point, the expectations literature indicates that stigmatized groups—such as students in

special education and ELLs—are more likely to be negatively affected by low expectations.

Indeed, sensemaking theory would highlight the possibility that teachers could use Model 4 to

predict future student outcomes, thereby potentially re-entrenching low expectations.

Model 4 in Action

Now we turn to a focus on Model 4, which we explore in-depth because of the possible

consequences of its use, including 1) impacts on students in special education and ELLs, in

light of research on teacher expectations, and 2) impediments to initiatives encouraging data-

driven decision making, considering that the model may suppress teachers’ motivation to

improve instruction. Also, a focus on this model is warranted because teachers often hold low

expectations for some student groups (Cook et al., 2000; Jussim & Harber, 2005; Pettit, 2011),

suggesting that the use of Model 4—entailing similar thought processes—could also be common.

The numbers we present below may be the “tip of the iceberg,” considering the non-normative

nature of this model. In other words, teachers may have avoided voicing Model 4 in order to

represent themselves in a positive light, but may have drawn upon it nonetheless.

To examine teachers’ use of Model 4, we focus on the 62 instances of attribution

described in the Methods section. As shown in Table 3, Model 4 corresponded with 25, or 40%,


of these examples of attribution. In comparison, Model 1 was more common, and Models 2 and

3 were less common. (Since an instance of attribution could involve more than one model, the

model counts do not total 62.) Even though Model 4 was not the most common model, its

frequency is significant considering that it was not normative. Of import, teachers sometimes

used Model 4 along with other models, especially Model 1. Indeed, more than a third of the

instances of the more normative Model 1 co-occurred with Model 4. This means that the number

of instances of Model 1 in its most normative form—without also involving Model 4—was 26,

only 42% of the total examples of attribution. [INSERT TABLE 3 HERE.]
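Because a single attribution instance could carry more than one model code, the per-model counts in Table 3 sum to more than 62, and the "Model 1 without Model 4" subset must be tallied separately. A minimal sketch of that bookkeeping, using hypothetical per-instance code sets rather than the study's data:

```python
from collections import Counter

# Hypothetical codings: each attribution instance carries one or more
# model tags (1-4). These records are illustrative only, not the
# study's 62 coded instances.
instances = [
    {1}, {1, 4}, {2}, {4}, {1}, {3}, {1, 4}, {2, 3}, {4}, {1},
]

# Per-model counts: totals exceed len(instances) when models co-occur.
model_counts = Counter(m for inst in instances for m in inst)

# Co-occurrence of Model 1 with Model 4, and Model 1 in "pure" form.
model1 = [inst for inst in instances if 1 in inst]
model1_with_4 = [inst for inst in model1 if 4 in inst]
model1_pure = [inst for inst in model1 if 4 not in inst]

print(model_counts[1], model_counts[4])      # 5 4
print(len(model1_with_4), len(model1_pure))  # 2 3
```

The same sorting-and-filtering logic underlies the Excel matrix comparisons described in the Data Analysis section.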

In the following subsection, we explore the attributions to ELLs and students in special

education that teachers made when using Model 4. This section represents an in-depth

exploration related to our first research question, about how teachers make sense of student

learning data and attribute the results they observe. We do not purport to convey findings about

the experiences of students in special education or ELLs, but, instead, to illustrate how teachers

made attributions that highlighted these categorizations. Following this discussion, we describe

how teachers used Model 4 along with other models.

Attributions to ELLs and Students in Special Education in Model 4

As described above, Model 4 often involved teachers pointing to supposedly stable

student characteristics as the reasons for outcomes. Of the 25 examples of Model 4, 23 of them

involved undesired outcomes. Usually teachers identified specific groups they felt were to blame

for outcomes, oftentimes pointing to more than one group. Most of the negative examples cited

one or more of the following groups: ELLs, students in special education, or “struggling

students.” Only four examples mentioned struggling students alone, meaning that the majority mentioned students in special education, ELLs, or both.


Some teachers presented ELLs and students in special education as the cause of

undesired outcomes without providing any explanation, implying that the causal chain was self-

evident. For instance, a seventh grade language arts teacher, Ms. Carmichael, discussed ELLs in

this manner when asked about comparing data with other teachers in her PLC. She said these

comparisons were not always helpful: “[S]ometimes it makes it worse because, like, I have EL

[ELL] students and then the other English teacher has all honors students, so mine always do the

worst.” Here Ms. Carmichael assumed that the interviewer would understand why her students

did “the worst” simply because they were ELLs.

In contrast, several teachers provided some explanation when using Model 4. Recall that

most of the teachers in our study taught language arts, so these comments centered on that

subject area. While some teachers cited language skills generally as a problem for both ELLs and

students in special education, others more specifically stated that these students struggled with

analytic or inferential thinking, sometimes lumping both groups of students together. Mr. Flagler,

introduced above, voiced such an explanation. On one common grade assessment, he noted that

the students in one class had difficulty with questions on plot in fictional works. He explained:

I have 14 to 16, I think it’s 16 now individuals with special needs [students in special

education] and a lot of ELs [ELLs] in that classroom, so it’s a lower group, a lower

abilities cohort…. I know that inferential things are difficult for that mindset, okay?

They’re very linear in their thinking, so we should be able to forecast the problem areas.

So areas that are more inferential, like, what was the climax and what was the resolution?

I’ll make sure that I cover that [with] multiple exposures in this classroom in many

different ways: in game format, in videos. I’ll throw in a lot of things…and trying to get

them to understand that standard.


Making a generalization, he characterized both groups as having difficulty with inferential

thinking, examples of which included identifying the climax and resolution of a story.

In summary, some teachers placed blame on student characteristics without any

explanation, while others explained that students—often students in special education and

ELLs—had difficulty with language arts. Some of the latter group claimed that these students

lacked inferential thinking skills. Making sense of student data in this way shifted responsibility

away from the teacher and may have diverted attention from an examination of instruction as

another possible cause of results. However, Model 4 did not seem to limit opportunities to focus

on instruction in order to address undesirable student outcomes. This finding appears to call into

question attribution theory’s implication that Model 4 may not promote motivation to improve

instruction. Mr. Flagler presents a prime example, considering that he appeared to be motivated

to try different teaching strategies. This seeming conflict leads us to conjecture that Model 4

alone may inhibit motivation to reflect on instruction and consider wholesale changes in one’s

approach, and instead encourage a focus on strategies intended to address perceived student

deficiencies. In addition, sensemaking theory suggests that teachers using this model may have

fit new information into their previous beliefs while bolstering these beliefs, which, in light of

the literature on teacher expectations, points to possible negative consequences for ELLs and

students in special education. Also, in that these two groups of students are often racialized,

Model 4 has implications for racial equity. This point is especially salient considering that the

student populations at our study schools were majority African American or Latina/o.

How Teachers Invoked Model 4 with Model 1

As mentioned above and shown in Table 3, more than a third of all instances of Model

1 co-occurred with Model 4. This finding is significant because of the seeming conflict between


the two models and the implications of their joint use. Model 4 involved a non-normative

approach to data use and possible negative implications for student subgroups, whereas Model 1

entailed a more normative approach. From another perspective, the former model may inhibit

motivation to improve instruction, according to theory, whereas the latter may bolster it. It is

possible, then, that the concurrent use of Model 4 could attenuate the motivation that Model 1

may engender. Also of note, we did not find much co-occurrence between Model 4 and Models 2

and 3, as shown in Table 3.

Teachers used Models 1 and 4 together by considering both their teaching and

supposedly stable student characteristics to determine causes of student outcomes and next steps.

Teachers framed their instruction as targeting specific groups of students who embodied stated or

implied deficiencies. Ms. Wexler, a seventh-grade language arts and social studies teacher, used

the two models together when analyzing the results of a common grade assessment about

foreshadowing and plot in fiction. This was the same test that Ms. Castañeda—discussed

above—described interpreting. Here is how Ms. Wexler made sense of the assessment results:

I think with my group that scored lower, I didn’t do enough instruction on it

[foreshadowing] with them. And my group that scored higher is—so one of my groups is

three, four kids; one group is 13 kids. … And then also the bigger group is my RSP

group; there’s 11 RSP kids in that class, so they just move at a slower pace and [are]

trying to stay on track. Sometimes I won’t get into the depth that I need to, and I need to

make the time for it; I need to move them forward. So, it’ll be one of those things

throughout the year, we’ll just keep going back to it anytime we read.

In this discussion, Ms. Wexler framed the students in special education (the “RSP kids”) as

having the presumably stable characteristic of moving at “a slower pace.” She also mentioned


her instruction, saying that she had not spent enough time on foreshadowing in advance of the

test and generally did not go into enough depth on certain topics with the students in special

education. The implication, then, was that the slow pace of the students in special education

played a role in her failure to go into depth on the topic of foreshadowing. To address this issue,

she commented, “I need to make the time for it; I need to move them forward.” She planned to

continue to bring up foreshadowing in the future, “anytime we read.”

Ms. Wexler’s commentary exemplified a pattern we observed with other teachers as well.

With Model 4, teachers illustrated the challenges they faced with their students, while, with

Model 1, they characterized themselves as able and willing to take on the challenges. This form

of sensemaking can be seen with Mr. Flagler’s explanation above. He described the ELLs as

having difficulty with inferential thinking (a teaching challenge due to student characteristics),

while also framing himself as willing to address this challenge through “multiple exposures in

many different ways.” By citing obstacles to desired student outcomes (supposedly stable student

characteristics), teachers like Mr. Flagler mitigated the blame they placed upon themselves,

while characterizing themselves as working toward student progress.

In addition, the joint use of the models may have hindered instructional improvement.

Even though teachers cited the importance of teaching for academic progress for students in

special education and ELLs, Model 4 allowed for a certain degree of complacency. Recall that

attribution theory suggests that teachers’ motivation to improve instruction would be lower when

using Model 4 and higher when using Model 1. The examples in our study indicate that using the

two models together may have engendered motivation, while also providing possible justification

for maintaining lower expectations both for certain student groups and for their own teaching. Moreover,

outside of implications for teacher motivation, the joint use of the models could impact student


outcomes. As discussed earlier, Model 4 appears to involve low expectations, which research

(Jussim & Harber, 2005) indicates can negatively affect achievement, especially for

marginalized subgroups such as ELLs and students in special education. As we have shown, even

when used in conjunction with Model 1, Model 4 continued to entail low expectations.

School Context

What accounts for the ways teachers made sense of data? Our analysis indicates that

school-level factors may have played a role in attribution practices, including 1) organizational

features and 2) interactions with instructional coaches and PLCs.

We found two main examples in which our evidence indicates connections between

organizational features of schools and teachers’ attributions. One of these examples related

to “homogeneous grouping,” a practice that we found in all six study schools. In the examples

cited in this article, and in many more not quoted, teachers referred not only to groupings within

their classes but also to classes sorted by assessment results. This type of grouping facilitated

teachers’ use of Model 4 to make sense of data, as Ms. Carmichael made clear. She commented

that comparing data with other grade-level teachers was not always helpful because she taught

the ELLs, so her class always did “the worst.” It is possible that teachers, then, could more easily

place blame on supposedly stable student characteristics when students were sorted by

assessment results. Indeed, one teacher, Ms. Giordano, voiced this argument herself, explaining that

when teachers analyze data in pairs, they may be more likely to attribute differences in class

outcomes to homogeneous grouping, in contrast to teachers working in groups. She said: “You

and I could have differences because, let me go back to that horrible excuse of, ‘You have all the

low and I have all the high students.’ But if there’s four teachers and we all teach different

classes, that excuse doesn’t really work as well.” The teacher, then, framed homogeneous


grouping as the basis of an “excuse” that teachers used to make sense of data.

The other organizational example comes from one particular school that had a higher-

than-average rate of use of Model 3. At Sherman Middle School, 11 of the 24 total examples of

attribution, or 46%, involved this model. In contrast, across all 62 examples of attribution, 22, or

35%, involved Model 3. Moreover, of the examples from all schools but Sherman, only 18%

involved Model 3. This high incidence of Model 3 at Sherman may have been linked to the

school administration’s interpretation of a district initiative encouraging the use of common

grade assessments. At this school, PLCs were encouraged to create their own such assessments,

and the PLC we investigated spent a significant amount of time during almost daily meetings

engaged in this practice. Because teachers controlled the writing of these tests, and school

administrators prioritized teachers’ efforts to develop them, teachers may have been more

inclined to attribute student results to the nature of the test.

In addition to organizational features of schools, coaches and PLCs—and the

opportunities to collaboratively analyze data that they provided—may have influenced

attribution practices. Coaches and PLC members occasionally mentioned instances in which they

discussed addressing others’ use of Model 4. This can be seen with Ms. Santos, a literacy coach,

when she spoke with teachers about the school’s Program Improvement status at a staff meeting.

In this meeting, teachers invoked Model 4 to explain results, and the coach, according to her

retelling of events, refocused the attention to instruction. She explained:

They had all these excuses for why we weren’t out of PI [Program Improvement], and a

big population is the EL population and our special ed…. What I told the staff is that I

felt like there’s a lack of concern, the level of concern of students is very low and you get

that energy too from the teachers…. And then the students don’t necessarily know what it


is that is being expected of them by the end of the class…. Then I told them that…the

expectation needed to be clear to students.

Here Ms. Santos described teachers drawing upon Model 4 to attribute outcomes to ELLs and

students in special education before she shifted the focus of the meeting to teachers’ instruction.

She indicated that students were exhibiting “a lack of concern,” and suggested that this was

related to teachers not making expectations clear to students. In this way, she attempted to move

teachers from using Model 4 to using Model 1. Though the results of this coach’s efforts are

unclear, it is plausible that she played a role in teachers’ subsequent attribution practices. In

addition to Ms. Santos, two other coaches described efforts to move teachers from Model 4 to

Model 1. Surprisingly, one of these two coaches also seemed to encourage the use of Model 4.

When presenting data at a meeting, this coach, according to her recounting, told teachers, “‘This

isn’t a reflection of your teaching; it’s really based on your population.’” The mixed messages of

this coach may have influenced the ways teachers at her school made sense of data.

Two PLC members, both from Sherman Middle School, asserted that PLC meetings had

made a difference in patterns of attribution. One of these teachers, Ms. Giordano, was quoted

above. The other teacher described seeing changes in how teachers made sense of data since the

introduction of PLCs at her school. She said:

I think what’s been different about PLCs from before when each teacher did their own

thing is now you do have other people catching mistakes that were maybe on the question

or the strategy and how you presented it, and not so much blaming the student for not

doing well on the question.

This teacher did not make clear whether “blaming the student” would entail Model 4 (supposedly

stable student characteristics) or Model 2 (student understanding in the moment). However, since


she used a verb with a negative connotation (“blame”), we can assume that she referred to the

less-normative Model 4 as opposed to the more-normative Model 2. Regardless, what is clear

here is the teacher’s belief that interacting in PLCs had altered teachers’ attribution practices.

Discussion

In summary, we find that, when analyzing student data, teachers in our study invoked one

or more mental models of sensemaking involving attribution to 1) instruction, 2) student

understanding, 3) the nature of the test, and 4) student characteristics. The readily available

explanations embedded in the models allowed teachers to formulate understandings that

informed their choice of next instructional steps. Our data also indicate reasons for hope and

concern. On a positive note, teachers most often attributed student outcomes to instruction.

On the other hand, they frequently focused on student characteristics as plausible explanations

for results, which may have both reflected and reinforced low expectations for ELLs and

students in special education. Finally, our study highlights the ways in which school context

plays an important role in shaping the sensemaking process for teachers. The implications of our

research apply to 1) theory, 2) data use initiatives, and 3) ELLs and students in special education.

Before discussing these three areas, we reflect upon the limitations of the study.

Limitations

Several issues limit our analyses. First, our data may not be entirely representative of the

general population of teachers within a given school or district since our sample is limited to six

schools and approximately 2-10 educators at each school. Also, our findings may reflect the

specific contexts of the schools, including their status as not having met accountability targets. In

addition, we draw heavily on self-reports of teachers, who may have reported more socially

desirable responses when asked about attribution. As such, it is possible that attribution to


student characteristics, such as designations for ELL or special education services, was more

common than this analysis suggests. On a related note, our research team did not originally aim

to study attribution and our research instruments were not designed to capture it. This limitation

is also a strength, however, in that the prevalence of attribution in our data cannot be considered

an overrepresentation. Although this study is best understood as exploratory and theory-building,

we believe it is an important first step in examining the ways in which teachers make sense of

student learning data and lays the groundwork for future research in this area.

Theory

While our study supports the findings of other studies that present data use as a

sensemaking process, it also makes important theoretical contributions. Past scholarship has

framed teachers’ data use as a complex process, entailing a dynamic interplay of beliefs, past

experiences, and present circumstances, including the social context (Coburn & Turner, 2011;

Datnow et al., 2012). Our findings, viewed through the lens of a sensemaking process

encompassing a reconceptualized data cycle, also point to the messiness of data use. In addition,

our study sheds light on a critical, but largely overlooked, aspect of this process: attribution.

As we have shown, our theoretical framework’s consideration of attribution in

sensemaking helps us better understand how this process occurs. Attribution may influence how

teachers understand past, present, and future data and (re)construct and/or reify their beliefs—

including expectations of student groups, such as students in special education and ELLs. For

this reason, sensemaking about data could take dramatically different paths. Moreover, our

theoretical framework illuminates the mechanisms through which sensemaking about data could

lead to certain responses over others.

Another theoretical contribution lies in the four mental models of sensemaking that


resulted from our application of the theoretical framework. Though derived solely from the data

in our study, the mental models provide a typology of data sensemaking that may prove to have

broad traction in light of other scholarship on attribution in data use (Schildkamp & Kuiper,

2010). The models provide insights about four of the possible different data sensemaking paths,

each of which has different implications for teacher responses. Future research could explore the

connections among these models and the possible significance of their interrelationships.

Data Use Initiatives

Our findings indicate that initiatives designed to encourage teachers to use student data

may be overlooking a crucial element that could influence their effectiveness: sensemaking and

attribution. Specifically, teachers’ use of Model 4, even in conjunction with Models 1 and 2, may

undermine their use of data in normative ways and, in turn, may dilute the intended positive

effects on student outcomes.

To address this concern, educational leaders could consider ways to encourage teachers

to reflect upon their sensemaking and attributions. For example, protocols asking teachers to

examine the four mental models could be embedded in professional development, along with

opportunities to examine varying interpretations of data. As Katz and Dack (2014) recently

argued, for data to enable true professional learning and permanent change in practice, they must

be used to help educators “overcome the subtle supports—or cognitive biases—that work to

preserve the status quo and impede new learning” (p. 6). With this in mind, leaders could

investigate mechanisms for “intentional interruption” of biases and mental models, ensuring that

teachers “consider all data, not just those that confirm their beliefs” (p. 6). Given the challenge of

this task, administrators may want to seek out the assistance of university and intermediary partners

to develop such forms of professional development and assistance. In addition, leaders could


investigate the effects of changes in classroom groupings and school-wide tracking on teachers’

sensemaking. For example, if all classes were organized heterogeneously and teachers perceived

their groups of students to be more or less comparable, would they be more likely during data

analysis to attribute outcomes to their instruction and respond accordingly?

Future research could bolster data-use initiatives by further examining how teachers draw

upon mental models in making sense of data and how various contextual factors mediate these

interpretive processes. Studies might, for instance, investigate different student grouping

configurations and their effects on sensemaking or examine the efficacy of tools and professional

development designed to build awareness of these mental models, expand teachers’ approaches

to data interpretation, and challenge teachers’ existing models. Also, research could examine

whether educators attribute student outcomes to other factors, such as school policy or factors

beyond the school. In addition, future research could explore the influence of the nature of the

data—the type, the way it is derived, etc.—on teacher sensemaking. For instance, would state

assessment results be associated with the use of a certain model more than others?

ELLs and Students in Special Education

The implications of our study for students in special education and ELLs are related to

teacher expectations, which must be situated within broader contexts. The two student subgroups

are socially constructed through policy and discourse that racialize members and frame them in

deficit terms (Artiles, 2011; Gutiérrez & Orellana, 2006; Schneider & Ingram, 1993). As such,

beliefs and discourse about students of color are intertwined with those about ELLs and students

in special education. Model 4, in placing blame on supposedly stable characteristics of these

groups, is consistent with broader policy and discourse. This suggests that policy and broader

discourses may play a role in the patterns of teacher attribution that we observed.


At the level of individual teachers in our study, the use of Model 4 may signal pre-

existing beliefs about these student groups and also further entrench these beliefs as teachers fit

new information into the model without altering it. These expectations can have consequences

for students, constraining possibilities for their academic success (Jussim & Harber, 2005). For

this reason, the use of Model 4 raises serious equity-related questions. For example, could

attributions focused on supposedly fixed student characteristics disproportionately harm ELLs

and students in special education? Given that these two groups are disproportionately composed

of students of color (Aud et al., 2010; Tefera et al., 2014), in what ways could the use of Model 4

attributions exacerbate racial inequity?

When considering these equity questions, we support the recommendation of Tefera et al.

(2014), who suggest moving beyond a focus on individual teachers to a consideration of the

broader social and structural forces that shape teachers’ actions. Future research could situate the

relationship between data attributions and equity within the broader policy and discourse

landscape. Model 4 could be further studied through qualitative research involving a

combination of classroom observation and interviews—not just self-reports—perhaps

uncovering more examples of Model 4 and indicating how it unfolds in conjunction with Model

1. In addition, in light of past research showing that white teachers are more likely to hold lower

expectations for some groups of students of color (Oates, 2003), future studies on data attribution

should consider the racial identity of the teachers. These school- and classroom-level inquiries

could be contextualized through an examination of policy documents and implementation.

Collectively, these studies might assist policymakers and education leaders in realizing the goals

of the “educational data movement” while improving learning opportunities for ELLs and

students in special education.


References

Artiles, A. J. (2011). Toward an interdisciplinary understanding of educational equity and

difference: The case of the racialization of ability. Educational Researcher, 40(9), 431-

445.

Aud, S., Fox, M. A., & KewalRamani, A. (2010). Status and trends in the education of racial

and ethnic groups (NCES 2010-015). Washington, DC: U.S. Department of Education.

Averill, J. B. (2002). Matrix analysis as a complementary analytic strategy in qualitative inquiry.

Qualitative Health Research, 12(6), 855-866.

Bogdan, R. C., & Biklen, S. K. (2007). Qualitative research for education. Boston: Pearson.

Booher-Jennings, J. (2005). Below the bubble: “Educational triage” and the Texas accountability

system. American Educational Research Journal, 42(2), 231-268.

Coburn, C. E. (2001). Collective sensemaking about reading: How teachers mediate reading

policy in their professional communities. Educational Evaluation and Policy Analysis,

23, 145-170.

Coburn, C. E. (2005). Shaping teacher sensemaking: School leaders and the enactment of

reading policy. Educational Policy, 19, 476-509.

Coburn, C. E., & Turner, E. O. (2011). Research on data use: A framework and analysis.

Measurement: Interdisciplinary Research & Perspectives, 9, 173-206.

Coburn, C. E., & Turner, E. O. (2012). The practice of data use: An introduction. American

Journal of Education, 118(2), 99-111.

Cook, B. G., Tankersley, M., Cook, L., & Landrum, T. J. (2000). Teachers' attitudes toward their

included students with disabilities. Exceptional Children, 67(1), 115-135.

Darling-Hammond, L. (2007). Race, inequality and educational accountability: The irony of 'No


Child Left Behind'. Race Ethnicity and Education, 10(3), 245-260.

Datnow, A., Park, V., & Kennedy-Lewis, B. (2012). High school teachers' use of data to inform

instruction. Journal of Education for Students Placed at Risk, 17(4), 247-265.

de Boer, A., Pijl, S. J., & Minnaert, A. (2011). Regular primary schoolteachers’ attitudes towards

inclusive education: A review of the literature. International Journal of Inclusive

Education, 15(3), 331–353.

Duncan, A. (2010). Unleashing the power of data for school reform. Paper presented at the

STATS-DC 2010 National Center for Education Statistics Data Conference, Bethesda,

MD.

Dweck, C. S., & Leggett, E. L. (1988). A social-cognitive approach to motivation and

personality. Psychological Review, 95(2), 256-273.

Farley-Ripple, E. N., & Buttram, J. L. (2014). Developing collaborative data use through

professional learning communities: Early lessons from Delaware. Studies in Educational

Evaluation, 42, 41-53.

Gándara, P., Maxwell-Jolly, J., & Driscoll, A. (2005). Listening to teachers of English language

learners: A survey of California teachers’ challenges, experiences, and professional

development needs. Santa Cruz, CA: Center for the Future of Teaching and Learning.

Goertz, M. E., Oláh, L. N., & Riggan, M. (2009). From testing to teaching: The use of interim

assessments in classroom instruction: Consortium for Policy Research in Education.

Gummer, E. S., & Mandinach, E. B. (2015). Building a conceptual framework for data literacy.

Teachers College Record, 117(4).

Gutiérrez, K. D., & Orellana, M. F. (2006). At last: The “problem” of English learners:

Constructing genres of difference. Research in the Teaching of English, 40(4).


Hamilton, L., Halverson, R., Jackson, S. S., Mandinach, E. B., Supovitz, J. A., & Wayman, J. C.

(2009). Using student achievement data to support instructional decision making: What

Works Clearinghouse, U.S. Department of Education.

Horn, I. S. (2007). Fast kids, slow kids, lazy kids: Framing the mismatch problem in

mathematics teachers’ conversations. The Journal of the Learning Sciences, 16(1), 37–

79.

Horn, I. S., Kane, B. D., & Wilson, J. (2015). Making sense of student performance data: Data

use logics and mathematics teachers’ learning opportunities. American Educational

Research Journal, 52(2), 208-242.

Huguet, A., Marsh, J. A., & Farrell, C. C. (2014). Building teachers' data-use capacity: Insights

from strong and developing coaches. Education Policy Analysis Archives, 22(52), 1-28.

Ikemoto, G. S., & Marsh, J. A. (2007). Cutting through the "data-driven" mantra: Different

conceptions of data-driven decision making. Yearbook of the National Society for the

Study of Education, 106(1).

Jimenez, R. M. (2012). Latino immigration, education, and opportunity. Teacher Education and

Practice, 25(4), 569-571.

Jimerson, J. B. (2014). Thinking about data: Exploring the development of mental models for

‘‘data use’’ among teachers and school leaders. Studies in Educational Evaluation, 42, 5-

14.

Jussim, L., & Harber, K. (2005). Teacher expectations and self-fulfilling prophecies: Knowns

and unknowns, resolved and unresolved controversies. Personality and Social

Psychology Review, 9(2), 131-155.

Katz, S., & Dack, L. A. (2014). Towards a culture of inquiry for data use in schools: Breaking


down professional learning barriers through intentional interruption. Studies in

Educational Evaluation, 42, 35-40.

Konstantopoulos, S., Miller, S. R., & van der Ploeg, A. (2013). The impact of Indiana's system

of interim assessments on mathematics and reading achievement. Educational Evaluation

and Policy Analysis, 35(4), 481-499.

Mandinach, E. B. (2012). A perfect time for data use: Using data-driven decision making to

inform practice. Educational Psychologist, 47(2), 71-85.

Mandinach, E. B., Honey, M., Light, D., & Brunner, C. (2008). A conceptual framework for

data-driven decision-making. In E. B. Mandinach & M. Honey (Eds.), Data-driven

school improvement: Linking data and learning (pp. 13-31). New York: Teachers

College Press.

Marsh, J. A. (2012). Interventions promoting educators' use of data: Research insights and gaps.

Teachers College Record, 114(11).

Marsh, J. A., Bertrand, M., & Huguet, A. (2015). Using data to alter instructional practice: The

mediating role of coaches and professional learning communities. Teachers College

Record, 117(4).

Marsh, J. A., & Farrell, C. C. (2015). How leaders can support teachers with data-driven decision

making: A framework for understanding capacity building. Educational Management

Administration & Leadership, 43(2), 269-289.

Marsh, J. A., Pane, J. F., & Hamilton, L. S. (2006). Making sense of data-driven decision making

in education: RAND.

McKown, C., & Weinstein, R. S. (2008). Teacher expectations, classroom context, and the

achievement gap. Journal of School Psychology, 46, 235-261.


Means, B., Chen, E., DeBarger, A., & Padilla, C. (2011). Teachers' ability to use data to inform

instruction: Challenges and supports: U.S. Department of Education, Office of Planning,

Evaluation and Policy Development.

Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook

(2nd ed.). Thousand Oaks: Sage Publications.

Nelson, T. H., Slavit, D., & Deuel, A. (2012). Two dimensions of an inquiry stance toward

student-learning data. Teachers College Record, 114(8), 1-42.

Nicholls, J. G. (1984). Achievement motivation: Conceptions of ability, subjective experience,

task choice, and performance. Psychological Review, 91(3), 328-346.

Oates, G. L. S. C. (2003). Teacher-student racial congruence, teacher perceptions, and test

performance. Social Science Quarterly, 84(3), 508-525.

Oláh, L. N., Lawrence, N. R., & Riggan, M. (2010). Learning to learn from benchmark

assessment data: How teachers analyze results. Peabody Journal of Education, 85, 226-

245.

Orosco, M. J., & Klingner, J. (2010). One school's implementation of RTI with English

Language Learners: "Referring into RTI". Journal of Learning Disabilities, 43(3), 269-

288.

Pettit, S. K. (2011). Teachers' beliefs about English Language Learners in the mainstream

classroom: A review of the literature. International Multilingual Research Journal, 5(2),

123-147.

Schildkamp, K., & Kuiper, W. (2010). Data-informed curriculum reform: Which data, what

purposes, and promoting and hindering factors. Teaching and Teacher Education, 26,

482-496.


Schneider, A., & Ingram, H. (1993). Social construction of target populations: Implications for

politics and policy. The American Political Science Review, 87(2), 334-347.

Seifert, T. (2004). Understanding student motivation. Educational Research, 46(2), 137-149.

Slavit, D., Nelson, T. H., & Deuel, A. (2013). Teacher groups' conceptions and uses of student-

learning data. Journal of Teacher Education, 64(1), 8-21.

Spillane, J. P. (2012). Data in practice: Conceptualizing the data-based decision-making

phenomena. American Journal of Education, 118(2), 113-141.

Spillane, J. P., & Miele, D. B. (2007). Evidence in practice: A framing of the terrain. Yearbook of

the National Society for the Study of Education, 106(1), 46–73.

Spillane, J. P., Reiser, B. J., & Reimer, T. (2002). Policy implementation and cognition:

Reframing and refocusing implementation research. Review of Educational Research,

72(3), 387–431.

Stein, S. J. (2001). 'These are your Title 1 students': Policy language in educational practice.

Policy Sciences, 34, 135-156.

Strauss, A. L., & Corbin, J. M. (1994). Grounded theory methodology: An overview. In N. K.

Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research.

Thousand Oaks: Sage Publications.

Strauss, S. (1993). Teachers' pedagogical content knowledge about children's minds and

learning: Implications for teacher education. Educational Psychologist, 28(3), 279-290.

Strauss, S. (2001). Folk psychology, folk pedagogy, and their relations to subject-matter

knowledge. In B. Torff & R. J. Sternberg (Eds.), Understanding and teaching the

intuitive mind: Student and teacher learning. Mahwah, NJ: Lawrence Erlbaum

Associates.


Supovitz, J. (2012). Getting at student understanding: The key to teachers' use of test data. Teachers College Record, 114(11).

Tefera, A., Thorius, K. K., & Artiles, A. J. (2014). Teacher influences in the racialization of disabilities. In H. R. Milner IV & K. Lomotey (Eds.), Handbook of urban education (pp. 256-270). New York: Routledge.

Thorius, K. A. K., Maxcy, B. D., Macey, E., & Cox, A. (2014). A critical practice analysis of Response to Intervention appropriation in an urban school. Remedial and Special Education, 35(5), 287-299.

van den Bergh, L., Denessen, E., Hornstra, L., Voeten, M., & Holland, R. W. (2010). The implicit prejudiced attitudes of teachers: Relations to teacher expectations and the ethnic achievement gap. American Educational Research Journal, 47(2), 497-527.

Waitoller, F. R., Artiles, A. J., & Cheney, D. A. (2010). The miner's canary: A review of overrepresentation research and explanations. The Journal of Special Education, 44(1), 29-49.

Weick, K. E. (1995). Sensemaking in organizations. Thousand Oaks, CA: Sage Publications.

Weick, K. E., Sutcliffe, K. M., & Obstfeld, D. (2005). Organizing and the process of sensemaking. Organization Science, 16(4), 409-421.

Weiner, B. (2010). The development of an attribution-based theory of motivation: A history of ideas. Educational Psychologist, 45(1), 28-36.


Table 1. Study Schools and Participating Teachers

Mammoth District
  Green. Student population: 27% ELLs; 14% students designated as disabled; 90% Latina/o, 4% Asian/Pacific Islander, 4% White, 2% African American students. Teachers interviewed: 1 6th-grade English teacher, 1 7th-grade English teacher, 1 8th-grade English teacher, 1 science teacher.

Rainier District
  Cascades. Student population: 27% ELLs; 9% students designated as disabled; 85% Latina/o, 5% African American, 5% Asian/Pacific Islander, 5% White students. Teachers interviewed: 2 8th-grade English teachers, 1 7th-grade English teacher.
  Emmons. Student population: 35% ELLs; 14% students designated as disabled; 85% Latina/o, 5% African American, 5% Asian/Pacific Islander, 5% White students. Teachers interviewed: 1 7th-grade English teacher, 1 7th-grade English and Social Studies teacher, 1 7th- and 8th-grade English teacher.

Sequoia District
  Sherman. Student population: 25% ELLs; 8% students designated as disabled; 95% Latina/o, 5% White students. Teachers interviewed: 3 8th-grade English teachers, 1 7th-grade English teacher.
  Whitney. Student population: 25% ELLs; 11% students designated as disabled*; 95% Latina/o, 4% Asian/Pacific Islander, 1% White students. Teachers interviewed: 3 7th-grade English and Social Studies teachers.

Shenandoah District
  Blue Ridge. Student population: 0% ELLs; 20% students designated as disabled; 97% African American, 3% White students. Teachers interviewed: 2 7th-grade English teachers.

Note. While the numbers have been slightly altered to maintain anonymity, the basic proportions remain true. School districts are listed in alphabetical order.
* For all but Blue Ridge, the percentages of students designated as disabled come from state accountability test reporting, which may not accurately reflect the number of students within a school who receive special education services. For Blue Ridge, the percentage comes from school data reporting.


Table 2. The Four Mental Models of Data Attribution

Attribution to:
  Model 1: Instruction
  Model 2: Student understanding
  Model 3: Nature of test
  Model 4: Student characteristics

Dimensions of attribution:
  Locus of causality: Model 1, internal; Model 2, external; Model 3, internal or external*; Model 4, external.
  Stability: Model 1, instability; Model 2, instability; Model 3, stability or instability*; Model 4, stability.
  Controllability: Model 1, controllability; Model 2, controllability; Model 3, controllability or uncontrollability*; Model 4, uncontrollability.

* Depends upon whether the teacher has a role in test creation.


Table 3. Number and Percentage of Instances of Attribution by Model Type

  Model 1 (Instruction): 40 examples (65% of the 62 examples); 14 co-occurring with Model 4.
  Model 2 (Student understanding): 15 examples (24%); 3 co-occurring with Model 4.
  Model 3 (Test wording): 22 examples (39%); 3 co-occurring with Model 4.
  Model 4 (Student characteristics): 25 examples (40%); N/A.


Figure 1. Theoretical Framework

[Figure: a diagram relating the framework's elements: information, data, teacher beliefs and past experiences, knowledge, attribution, and possible future responses.]