A THREE-TIER MODEL OF INTEGRATED BEHAVIOR AND LEARNING SUPPORTS: LINKING SYSTEM-WIDE IMPLEMENTATION TO STUDENT OUTCOMES

By

Anna Leigh Shon Harms

A DISSERTATION

Submitted to Michigan State University

in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

School Psychology

2010

ABSTRACT

A THREE-TIER MODEL OF INTEGRATED BEHAVIOR AND LEARNING SUPPORTS: LINKING SYSTEM-WIDE IMPLEMENTATION TO STUDENT OUTCOMES

By

Anna Leigh Shon Harms

This study explored elementary schools’ implementation of an integrated three-tier model of reading and behavior supports as they participated in a statewide Response to Intervention (RtI) project. The purpose of the study was to examine the process of implementing an integrated three-tier model and to explore the relation between implementation fidelity and student outcomes. Implementation fidelity was measured using the Planning and Evaluation Tool for Effective Schoolwide Reading Programs-Revised (PET-R; Kame’enui & Simmons, 2003), the Positive Behavioral Interventions and Supports Self-Assessment Survey (PBIS-SAS; Sugai, Horner, & Todd, 2003), and the Positive Behavioral Interventions and Supports Team Implementation Checklist (PBIS-TIC; Sugai, Horner, & Lewis-Palmer, 2002). Student outcomes were measured using school-level aggregate data from the Dynamic Indicators of Basic Early Literacy Skills-6th Edition (DIBELS; Good & Kaminski, 2002) and average major discipline referrals per 100 students per day, as measured by the Schoolwide Information System (SWIS; May et al., 2002). A combination of descriptive analyses and generalized estimating equations was used to evaluate implementation fidelity over time and the relation between implementation fidelity and student outcomes. Major results included: (1) average implementation fidelity scores improved over time, although individual schools started with different scores and grew at different rates; (2) approximately half of the elementary schools included in the study attained criterion levels of implementation during their participation in the RtI project; (3) schools made the greatest implementation growth between years 1 and 2; (4) overall implementation improvements and most year-to-year improvements were statistically significant; (5) the reading implementation checklist was a better predictor of student reading outcomes than the behavior implementation checklists were of student behavior outcomes; (6) the combination of reading and behavior implementation checklists added to the prediction of student behavior outcomes beyond the behavior measures alone.

Copyright by ANNA HARMS 2010


ACKNOWLEDGEMENTS

First, I would like to thank my advisor and dissertation chair, Evelyn Oka, whose encouragement and thoughtful feedback have been instrumental to the completion of my graduate work and this dissertation. I would also like to thank my dissertation committee members, Sara Bolt, Troy Mariage, John Koscuilek, and Steve Goodman, for their enthusiasm and positivity as a committee. Third, I would like to thank my family and friends, who have provided constant support throughout this long process. I am especially grateful to my husband, Tim, who has been my number one champion and who has provided endless support to help me finish, including some very motivational pep talks. I would also like to thank my colleagues working with Michigan’s Integrated Behavior and Learning Support Initiative and the Ottawa Area Intermediate School District for their constant support and for doing the outstanding work that this dissertation is based on. Finally, I would like to thank all of the schools that have participated in Michigan’s Integrated Behavior and Learning Support Initiative over the years. Without their dedication to students and willingness to engage in school reform, this dissertation would not have been possible.

Data for this study were collected as part of Michigan’s Integrated Behavior and Learning Support Initiative (MiBLSi), one of Michigan's Integrated Improvement Initiatives (MI3), funded under IDEA-mandated activities through the Michigan Department of Education, Office of Special Education and Early Intervention Services.


TABLE OF CONTENTS

List of Tables
List of Figures
List of Abbreviations
Chapter 1: Introduction
Chapter 2: Literature Review
Chapter 3: Methods
Chapter 4: Results
Chapter 5: Discussion
Appendices
Bibliography


LIST OF TABLES

Table 1: Critical Features of Public Health, Response to Intervention, and Schoolwide Positive Behavior Supports Models
Table 2: Schools Included in the Study
Table 3: Data Availability by Year
Table 4: Percentage of Schools that Submitted Data
Table 5: Schools with Complete Sets of Data
Table 6: Correlation Matrix for Continuous Variables
Table 7: PET-R, PBIS-SAS, and PBIS-TIC Sample Sizes and Mean Scores
Table 8: DIBELS and ODRs Sample Sizes and Mean Scores
Table 9: Pair-wise Comparisons of Mean Score Differences on the Implementation Measures
Table 10: Percentage of Schools that Attained Criterion Scores at Least Once


LIST OF FIGURES

Figure 1: Conceptual Framework
Figure 2: PET-R Highest and Lowest Implementation Scores during Schools’ First Year of Participation with MiBLSi
Figure 3: PBIS-SAS Highest and Lowest Implementation Scores during Schools’ First Year of Participation with MiBLSi
Figure 4: PBIS-TIC Highest and Lowest Implementation Scores during Schools’ First Year of Participation with MiBLSi
Figure 5: Planning and Evaluation Tool for Effective Schoolwide Reading Programs (PET-R) Total Scores over Time
Figure 6: Positive Behavioral Interventions and Supports Self Assessment Survey (PBIS-SAS) Overall Summary Scores over Time
Figure 7: Positive Behavioral Interventions and Supports Team Implementation Checklist (PBIS-TIC) Overall Implementation Scores over Time
Figure 8: Most and Least Amount of Implementation Growth between Single Years
Figure 9: Cohort x Year Interaction Effect on PET-R Scores
Figure 10: Cohort x Year Interaction Effect on PBIS-SAS Scores
Figure 11: Cohort x Year Interaction Effect on PBIS-TIC Scores
Figure 12: Schools’ Attainment of Criterion Scores on the Planning and Evaluation Tool for Effective Schoolwide Reading Programs (PET-R)
Figure 13: Schools’ Attainment of Criterion Scores on the Positive Behavioral Interventions and Supports Self Assessment Survey (PBIS-SAS)
Figure 14: Schools’ Attainment of Criterion Scores on the Positive Behavioral Interventions and Supports Team Implementation Checklist (PBIS-TIC)
Figure 15: Sustainability of Reading Supports: Cohort 3 PET-R Scores
Figure 16: Sustainability of Behavior Supports: Cohort 3 PBIS-SAS Scores
Figure 17: Sustainability of Behavior Supports: Cohort 3 PBIS-TIC Scores


Figure 18: Dynamic Indicators of Basic Early Literacy Skills (DIBELS) Scores over Time
Figure 19: Average Number of Major Discipline Referrals per 100 Students per Day over Time


LIST OF ABBREVIATIONS

CBM: Curriculum-based Measures/Measurement
DIBELS: Dynamic Indicators of Basic Early Literacy Skills
EBI: Evidence-based Intervention/Instruction
MEAP: Michigan Educational Assessment Program
MiBLSi: Michigan’s Integrated Behavior and Learning Support Initiative
ODR: Office Discipline Referral
PBIS-SAS: Positive Behavioral Interventions and Supports Self-Assessment Survey
PBIS-TIC: Positive Behavioral Interventions and Supports Team Implementation Checklist
PET-R: Planning and Evaluation Tool for Effective Schoolwide Reading Programs
RtI: Response to Intervention
SET: School-wide Evaluation Tool
SWIS: School-wide Information System
SWPBS: School-wide Positive Behavior Support
Tier 1: Primary/universal prevention; bottom (green) layer of the triangle
Tier 2: Secondary/strategic/targeted prevention; middle (yellow) layer of the triangle
Tier 3: Tertiary/intensive prevention; top (red) layer of the triangle


CHAPTER 1

INTRODUCTION

Statement of the Problem

Several movements in mental health, education, and school psychology have led to the increased use of practices that are based on rigorous scientific research (Hayes, Barlow, & Nelson-Gray, 1999; Individuals with Disabilities Education Act, 2004; Kratochwill & Stoiber, 2002; No Child Left Behind Act, 2002). One way schools have responded is through the adoption of three-tier models based on public health principles of prevention and early intervention (Strein, Hoagwood, & Cohn, 2003; Ysseldyke et al., 2006). Individual three-tier models for improving reading and behavior have been shown to improve students’ reading skills and reduce the levels and frequency of problem behavior (Burns, Appleton, & Stehouwer, 2005; Mathes et al., 2005; Vaughn et al., 2004). Even larger student gains have been reported for combined reading and behavior models (Stewart, Benner, Martella, & Marchand-Martella, 2007).

While initial empirical support for the use of three-tier models has been established, research and publications on how to effectively implement three-tier practices in a variety of school settings lag far behind (Kovaleski, 2007; Mastropieri & Scruggs, 2005; Sugai & Horner, 2007). Fixsen and colleagues (2004) distinguish the practice and science of “implementation” from the practice and science of “program development” and argue that the two should be viewed differently. In the era of evidence-based practice, the majority of research is devoted to developing and identifying evidence-based practices; significantly less research has addressed the implementation, sustainability, and scaling up of best practices. The study of implementation is presented as one way to bridge the gap between efficacy trials and replication research conducted in applied settings. Understanding the implementation variables in a study also provides a critical backdrop for interpreting associated student outcomes. Studies of implementation are among the most common suggestions for future research on three-tier models (Griffiths, Parson, Burns, VanDerHeyden, & Tilly, 2007; Kovaleski, 2007).

Greenberg, Domitrovich, Graczyk, and Zins (2005) developed an implementation framework which recognizes that interventions and implementation supports are not always carried out as planned. Thus, the “actual implementation” of a program should be interpreted as the actual intervention and actual supports that were completed. Researchers should be cautious about assuming that an intervention was implemented entirely as planned, and outcomes need to be evaluated in the context of the quality of intervention implementation.

While many studies have shown positive student outcomes after the implementation of a three-tier model (Jimerson, Burns, & VanDerHeyden, 2007; Mathes et al., 2005; O’Connor, Fullmer, Harty, & Bell, 2005; Vaughn et al., 2004), the relation between implementation fidelity and student outcomes is relatively understudied, particularly for school-wide reading systems. Several research groups have indicated that improved student outcomes should only be expected once an efficacious practice has been fully implemented (Fixsen, Naoom, Blase, Friedman, & Wallace, 2004; Horner, Sugai, & Lewis-Palmer, 2005). Thus, many researchers only examine outcome data once there is evidence that a practice is fully implemented (McIntosh, Chard, Boland, & Horner, 2006; Scott, 2001). While such studies do provide information about fully implemented programs, they are less useful for understanding the process of reaching full implementation. Other studies describe implementation levels and then separately describe student outcomes, but never draw a connection between the two (U.S. Department of Education, 2006a; U.S. Department of Education, 2006b). If implementation of a practice truly leads to improved student outcomes, then one might expect to see improvement in student outcomes paralleling the improvement in implementation.

Purpose and Significance of the Study

The purpose of this study was to examine the state-wide implementation of a three-tier model of integrated behavior and reading supports (Michigan’s Integrated Behavior and Learning Support Initiative: MiBLSi). The study examined how many years it takes to fully implement an integrated three-tier model and whether systems can be maintained over time. In addition, the relation between implementation quality and student outcomes was explored.

Educators frequently report that initiatives commonly disappear or fail shortly after they have been introduced (Latham, 1988). Three-tier models are sometimes referred to as the latest fad in education (Hale, 2008). Given that so many schools across the nation are adopting three-tier models, it is critical to thoroughly study these models to ensure that they are indeed evidence-based and yield better outcomes for students than existing practices. As schools adopt three-tier models and applications become further removed from their initial implementation sites, model programs could morph into versions that are fundamentally different from those with empirical support. At that point, schools may find that they no longer experience improved student outcomes, attribute that to the failure of three-tier models to benefit students, and subsequently abandon their efforts and once again adopt a new practice.

This study provides insight into schools’ ability to implement an integrated three-tier model with fidelity as they participated in a large-scale project across several years. Results speak to the relevance of implementation quality for creating change in student outcomes. The study also adds to the growing literature on three-tier prevention models.


Theoretical and Conceptual Framework

Chen (1998; 2003) distinguishes between a program’s causal and prescriptive theories. A causal theory refers to the program’s specific theory of change. A prescriptive theory represents the model, guidelines, and steps used to deliver the program. Schools participating in MiBLSi implement Schoolwide Positive Behavior Supports (SWPBS; OSEP Center on Positive Behavioral Interventions and Supports, 2004), a model that is theoretically grounded in applied behavior analysis (Anderson & Freeman, 2000; Anderson & Kincaid, 2005; Carr et al., 2002; Horner et al., 2010). MiBLSi’s schoolwide reading model is based on the Big Ideas in Early Reading (National Reading Panel, 2000). Thus, the underlying causal theories include applied behavior analysis and a five-factor structure of reading skill development. The three-tier model used by MiBLSi is based on a risk and protective factor prescriptive theory (Meyers & Nastasi, 1999). This theory proposes that developmental trajectories are influenced by the balance of children’s risk and protective factors. The goal of MiBLSi is to prevent students from accumulating risk-related experiences and to increase children’s supply of protective factors through prevention and early intervention services. Implementation science is a second prescriptive theory, used to maximize the likelihood that schools will implement MiBLSi with high levels of fidelity and sustain implementation over time (Fixsen et al., 2005).

The conceptual framework used for this study is based on the work of Chen (1998) and Greenberg, Domitrovich, Graczyk, and Zins (2005). The implementation framework proposed by these authors inserts “actual implementation” as a mediator between the intended implementation of an intervention and student outcomes. Figure 1 depicts this framework with relevance to the current study.


Research Questions

The questions for this study fell under three categories:

1. To what extent did schools implement three-tier reading and behavior systems with fidelity across time?

2. How long were fully implemented three-tier reading and behavior systems sustained?

3. To what extent were student outcomes in reading and behavior associated with scores on the implementation checklists?

An in-depth rationale for examining each of these questions is presented at the end of Chapter 2.

Dissertation Content

The literature review in Chapter 2 provides background information and existing research on three-tier models, leading up to the research questions. Chapter 3 describes the data source, sample, measures, and preliminary analyses. Statistical analyses and results are presented in Chapter 4. Chapter 5 closes the paper with a discussion of the study’s findings, implications, and conclusions.


CHAPTER 2

LITERATURE REVIEW

The following literature review will first describe the six critical features of public health, or prevention, models and how each feature is relevant to the work of school psychologists. Next, the six critical features will be discussed in relation to school-based reading and behavior supports. Existing research on the efficacy of three-tier prevention models will be presented, and one state’s integrated three-tier prevention model will be described in detail. Finally, a case will be made for how the existing literature on three-tier prevention models can be enhanced through the study of implementation fidelity. The literature review will end with the specific research questions and hypotheses for this study.

Preventive Models of Service Delivery in Schools

School psychology emerged in the early 1900s as a profession revolving largely around the psycho-educational testing of students to determine eligibility for special education services (Merrell, Ervin, & Gimpel, 2006; Reschly, 2000). While the work of many practitioners continues to involve psycho-educational testing and supporting students with the greatest levels of academic, behavioral, and mental health needs, the past 30 years have also marked a transition in the practice of school psychology (Brown, Hohenshil, & Brown, 1998; Hosp & Reschly, 2002). A shift can be seen from a reactive to a preventive focus and from working at the individual student level to working at the systems level (Bradley-Johnson & Dean, 2000; Fagan, 2002). Preventive models of service delivery have increasingly become the focus of research, innovation, and common practice in public schools (Adelman & Taylor, 2003).

The overarching purpose of a prevention model is to identify risk and intervene with practices that will both reduce the likelihood of problems manifesting in the future and reduce the severity of existing problems (Walker et al., 1996). Schoolwide prevention models are derived from a tiered framework used for conceptualizing disease prevention in the field of public health. Strein, Hoagwood, and Cohn (2003) identify six critical features of a public health model:

• Emphasis on prevention in three levels.

• Use of evidence-based practices and interventions.

• Data-based decision making.

• Focus on strengthening positive behavior.

• School-community collaboration.

• Examination of systems level processes.

These six features most substantially differentiate public health models from clinical/medical models of service delivery, which have historically dominated the practice of school psychology. The remainder of this section of the literature review will further describe these six features of a public health model within schools.

Emphasis on Prevention in Three Levels

Under a public health model, prevention is the most critical component. Public health models are commonly conceptualized as a triangle horizontally divided into three parts (Strein, Hoagwood, & Cohn, 2003). The first level of prevention (primary or universal) is represented by the base layer of the triangle. The goal of primary prevention is to stop learning and behavior problems from first occurring. Thus, the support provided at this level involves all students having access to, and being engaged in, evidence-based instruction and behavior support.

Even when schools implement the highest quality universal prevention, there will likely be some students who still require additional support in order to attain academic and behavioral goals. Secondary prevention (targeted, strategic, or the middle layer of the triangle) involves services and interventions that are more individualized and intense in nature, drawing upon more school resources. Secondary prevention is intended to remediate existing problems and prevent other co-morbid problems from occurring in the future (Adelman & Taylor, 2003).

Despite the best primary and secondary prevention efforts, there will typically be some students whose needs are not adequately met by instruction and intervention in the first two levels. Tertiary prevention (intensive, or the peak layer of the triangle) should reduce the severity of existing problems by engaging students in highly focused, individualized intervention that takes place every school day. It is important to note that the three tiers of the triangle represent additional layers of intervention (Walker et al., 1996; Vaughn, Wanzek, Woodruff, & Linan-Thompson, 2007). Thus, a student who is receiving tertiary interventions should also be involved in a strong, research-based general education curriculum and have exhausted all appropriate secondary interventions. Tertiary-level services may or may not fall under the scope of special education, depending on the specific model being implemented. However, under a tiered prevention model, by the time special education services are being considered, students will have already been involved in high-quality interventions for some time, and their performance will be well documented.

Guidelines have been established regarding the proportion of students who can feasibly be supported at each level of instruction (Walker et al., 2006). When research-based primary prevention is delivered with fidelity, it is estimated that at least 80 to 90% of students will respond positively and require no additional supports to attain academic and behavioral goals. Due to financial and time restrictions, most school systems will only be able to adequately provide secondary prevention services for 5 to 15% of the student population. The intensity of support becomes so great at the tertiary level that ideally no more than 1 to 7% of a student population should require tertiary, or intensive, intervention. Recall that intensive supports are layered on top of primary and secondary interventions. While these proportions have become a common reference point, it is important to remember that they are merely guidelines. Schools will vary in the proportion of students they are able to adequately support at each tier. Schools will also vary in the proportion of students who need support at different levels. Regardless of the variation between schools, the primary goal should always be to have most students’ needs met through primary prevention, with as few students as possible requiring secondary- and tertiary-level support.
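To make the guideline proportions above concrete, the arithmetic can be sketched in a few lines of code. This is purely illustrative: the enrollment of 500 and all function and variable names are hypothetical, while the percentage bands themselves come from the guidelines just described (Walker et al., 2006).

```python
# Illustrative sketch only: translates the guideline percentage bands
# described above into expected headcounts for a hypothetical school.
# The enrollment figure (500) and all names here are assumptions.

# Guideline bands: (lower %, upper %) of students expected at each tier.
TIER_GUIDELINES = {
    "Tier 1 (universal only)": (80, 90),
    "Tier 2 (secondary)": (5, 15),
    "Tier 3 (tertiary)": (1, 7),
}

def expected_headcounts(enrollment):
    """Convert each tier's percentage band into a (low, high) student count."""
    return {
        tier: (enrollment * lo // 100, enrollment * hi // 100)
        for tier, (lo, hi) in TIER_GUIDELINES.items()
    }

counts = expected_headcounts(500)
for tier, (lo, hi) in counts.items():
    # e.g. "Tier 1 (universal only): roughly 400 to 450 students"
    print(f"{tier}: roughly {lo} to {hi} students")
```

For a hypothetical school of 500 students, the bands work out to roughly 400 to 450 students supported by universal prevention alone, 25 to 75 receiving secondary supports, and 5 to 35 receiving tertiary supports, remembering that Tier 2 and Tier 3 supports are layered on top of, not substituted for, universal prevention.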

The work of school psychologists has traditionally focused on issues related specifically to special education, despite the fact that problems calling for remediation through special education services originate in the general education context. School psychologists working under a prevention framework have the opportunity to step outside of a special education “gatekeeper” role and become more involved in primary, secondary, and tertiary prevention efforts.

Two more clarifications are important to make before moving on to a description of the next critical feature of a public health model. Notice that the description of the prevention triangle revolved around the type and intensity of supports provided at each level. Some people mistakenly use the layers to identify students, labeling them as “green,” “yellow,” “strategic,” “red,” or “intensive.” That is not best practice and should always be avoided. The layers of the triangle should only be used to describe students’ level of instructional need in a particular area or the level of instructional intensity provided by adults, and never to label the students themselves (Baker, 2005). A triangle horizontally divided into three layers continues to be a useful conceptual aid. However, the three distinct layers have also led many to mistakenly view the three tiers of prevention as entirely distinct. For this reason, more recent visual depictions of the triangle show the three tiers as blended, making it clearer that the intensity of support is intended to be a continuum rather than three distinct categories.

Use of Evidence-based Practices and Interventions

The use of evidence-based practices is essential to the implementation of a public health model. Emphasis on identifying and using evidence-based interventions (EBIs) began in the medical profession when managed health care systems were introduced (Hayes, Barlow, & Nelson-Gray, 1999). A variety of task forces have developed criteria for determining the effectiveness of a given practice (e.g., American Psychological Association Division 16-School Psychology and Division 53-Child Clinical Psychology, Blueprints for Violence Prevention, Council for Exceptional Children, Office of Juvenile Justice and Delinquency Prevention’s Model Programs Guide, Promising Practices Network, Substance Abuse and Mental Health Services Administration, U.S. Department of Education, The National Autism Center). The criteria established by these different groups vary slightly in the details. However, consensus surrounds the basic notion that an EBI must be well defined and demonstrate positive outcomes related to efficacy, effectiveness, and dissemination (Flay et al., 2005).

Federal legislation has also supported the EBI movement by mandating that methods used within schools be well supported by research (Individuals with Disabilities Education Act, 2004; No Child Left Behind Act, 2002). Thus, this second feature of a public health model is consistent with national and professional emphases on using evidence-based practices to support students. While it may seem logical to employ only interventions with proven outcomes, some of the most commonly used practices in school psychology and special education have only recently been subject to empirical examination (Reschly & Ysseldyke, 2002). This research has shown that many of the traditional practices (e.g., the achievement-ability discrepancy model) used in eligibility decision-making and special education as a whole are not only based on weak theoretical foundations, but also fail to yield adequate outcomes for students (Reschly & Ysseldyke, 2002; Shinn, 1986; Tilly, 2002). School psychologists who are familiar with research methodology and statistics, and perhaps are even experienced researchers themselves, have much to offer schools as they go about selecting which evidence-based interventions will help improve outcomes for their local students.

Data-based Decision Making

Public health models emphasize collecting and analyzing data to adequately identify a

problem, understand why it is happening, select a plan of action that will address the problem,

and determine whether the plan is working (Tilly, 2002). This type of data-based decision

making process can be implemented across all levels of a system (i.e., individual students, small

groups, classrooms, grade levels, whole schools, districts). In order to look across an entire

system, multiple types of student data are needed: Screening, progress monitoring, diagnostic,

and outcome (Hosp, Hosp, & Howell, 2007). This means that school psychologists working

within a public health model need to be able to support schools using all four types of

assessment. While traditional psycho-educational evaluations do involve the collection of a substantial amount of data, those data are primarily diagnostic, and the results are not often used

effectively for instructional planning (Reschly & Ysseldyke, 2002).

Focus on Strengthening Positive Behavior

Public health models view the strengthening of skills and positive behavior as essential

for children’s well-being (Strein, Hoagwood, & Cohn, 2003). Behavioral interventions all too


frequently target the reduction of problem behaviors, while failing to simultaneously address the

increased use of appropriate replacement behaviors. School psychologists can advocate for the

use of practices that will strengthen positive behavior. They can also help to develop and monitor

goals that are positively stated, measurable, objective, and standards-based.

School-community Collaboration

Public health frameworks also call for school-community collaboration in which schools,

families, and community organizations all work together to provide a continuum of services to

students. Traditional school psychological practices have not easily facilitated collaboration with

families and the larger community. Families are often brought in as stakeholders only when a

student has been referred for a special education evaluation (Tilly, 2002). Even then, the parents’

role is often limited to attending meetings set up by the school to approve of decisions already

made by the school. Public health models call for authentic collaborative partnerships among

stakeholders. School psychologists are often in an excellent position to facilitate communication

between families and schools. They can help schools to keep this critical feature in the

foreground when implementing a prevention model.

Examination of Systems Level Processes

The public health approach targets an entire system. Shapiro (2000) emphasizes that the

issues facing today’s schools are much too large to continue addressing them one student at a

time. He argues that in order to solve some of the largest problems (e.g., illiteracy), school psychologists will need to work at the systems level and focus their skills disproportionately on

prevention. Systems level work refers to much more than supporting all three tiers of prevention.

Systems change involves considering how organizational structures and processes such as

leadership, teaming, professional development, effective instruction, coaching, and feedback


loops can all be enhanced to improve student outcomes. When operating under a systems

orientation, educators look to what is within their control to change and improve student

outcomes, rather than viewing problems as residing within students.

School Psychology and Public Health Prevention Models

Several key documents and events can be interpreted as indicators that the profession of

school psychology is embracing the shift toward school-based prevention models. Participants at

the Conference on the Future of School Psychology (2002) identified five critical outcome goals.

Three of the five goals included elements of prevention, one of which specifically called for the

implementation of a public health model (Merrell, Ervin, & Gimpel, 2006). Advocacy for

prevention can also be found in the most recent Blueprint for Training and Practice (Ysseldyke et al., 2006). Its guidelines specifically promote the use of a tiered system of service delivery in

order to build the capacity of systems and realize improved competencies for all students.

Finally, the theme of the 2006 National Association of School Psychologists Convention was "Prevention is an Intervention." Clearly, prevention has come to be viewed as essential to

the work of school psychologists.

Terminology Review

Education and psychology are replete with acronyms and multiple names for similar

concepts. Before exploring the critical components of prevention models for reading and

behavior, it will be beneficial to explain some of the language and terminology that will be used

throughout the rest of this paper. Please see the List of Abbreviations and Symbols (page vii) for

a complete list of the acronyms and corresponding definitions used within this paper.

For clarity and consistency, the term “three-tier model” will be used as a general

reference to the preventive models for reading and behavior discussed in the remainder of this


paper. The application of a public health model to address academic skill development is most

commonly referred to as “Response to Intervention” (RtI) and is the term that will be used here.

Other names include "Response to Instruction," "Three-tier Model," and "Problem Solving Model." Response to Intervention, as described in this paper, refers to a model of service

delivery and prevention, not a method for identifying students with specific learning disabilities

or determining eligibility for special education services. Schoolwide Positive Behavior Support

(SWPBS) is an example of a public health model that addresses student behavior.

The first part of this literature review has provided an overview of the critical

components of a public health prevention model. While it may include more background

information than someone who is very familiar with RtI and SWPBS might need, it is important

to have a solid understanding of the critical features. This is especially true at a time when RtI

and SWPBS are being rapidly adopted (Spectrum K12 School Solutions, 2010; Spaulding,

Horner, May, & Vincent, 2008) and many are still learning about the foundations.

The next section of the literature review will describe RtI (for reading) and SWPBS

models in reference to the general public health framework. Three-tier models have not been

designed exclusively for use in the areas of reading and behavior, so why emphasize these two

areas in this literature review? There are several reasons. Reading is a critical skill for academic

and career success and is therefore among the top priorities of schools (Snow, Burns & Griffin,

1998). Children spend their early years in school learning how to read so that they can later read

to learn. Of all the academic areas of instruction, the most research has been conducted on reading. More measures are available to assess reading skills and more

interventions are available to develop reading skills than for any other academic area. Following

this same pattern, Response to Intervention frameworks have been applied to reading more than


any other academic area (Hughes & Dexter, 2008; RTI Adoption Survey, 2010). After reading

and math, behavior support is the next most common focus of RtI initiatives (RTI Adoption Survey, 2010). If schools' vision and mission statements can be considered indicators of priorities, then it is worth noting that the large majority include statements about preparing students for a life of intellectual and social-behavioral success. Helping students succeed in these two areas seems to be among the most important cultural values

of schools in the United States. An additional reason for focusing on these two areas is due to

how reading and social-behavioral skills interact within individuals (Fleming, Harachi, Cortes,

Abbott, & Catalano, 2004; Morrison, Anthony, Storino, & Dillon, 2001; Nelson, Benner, Lane,

& Smith, 2004). For these reasons, behavior and reading models have been selected to illustrate

how the critical features of a prevention model can be applied to a specific content area. First, the

six critical features will be applied to a three-tier model for reading. Then, the six critical features

will be explained in reference to a three-tier model for behavior. Efficacy research on each will

be presented. Thereafter, a discussion about combined reading and behavior models will be

provided.

Three-tier Models for Improving Reading

A frequently cited definition of Response to Intervention reads "the practice of providing high quality instruction and interventions matched to student need, monitoring progress frequently to make decisions about changes in instruction or goals and applying child response data to important educational decisions" (NASDE & CASE, 2006). Core principles of RtI

include:

• “We can effectively teach all children.

• Intervene early.


• Use a multi-tier model of service delivery.

• Use a problem-solving method to make decisions within a multi-tier model.

• Use research-based, scientifically validated interventions/instruction to the extent

available.

• Monitor student progress to inform instruction.

• Use data to make decisions.

• Use assessments for screening, diagnosing and progress monitoring" (NASDE & CASE, 2006).

Table 1 shows that these eight principles align with three of the six

features of a public health model as identified by Strein, Hoagwood, and Cohn (2003). Let us

now take a closer look at how the features of a public health model apply within an RtI

framework for reading. Many excellent resources are available on the topic of response to

intervention for reading. A search using the key phrase “response to intervention” yielded 204

resources available for purchase on a major bookselling website. That number does not include

the plethora of web-based resources that are available for free. The primary sources used to

describe RtI for reading in the following section come from: the National Center on Response to

Intervention (http://rti4success.org), the RTI Action Network (http://rtinetwork.org), The

Institute of Education Sciences’ (2009) Practice Guide on Assisting Students Struggling with

Reading: Response to Intervention (RtI) and Multi-tier Intervention in the Primary Grades,

NASDE and CASE’s (2008) Response to Intervention Blueprints for Implementation: School

Building Level, and Vaughn, Wanzek and Linan-Thompson’s (2007) overview in Evidence-

Based Reading Practices for Response to Intervention.


Emphasis on Prevention in Three Levels

In a typical schoolwide RtI model for reading, tier 1 consists of a research-based core

reading curriculum that students are engaged in for 90 minutes each school day. Emphasis is

placed on implementing the core curriculum with fidelity. Students’ basic reading skills are

screened at least three times per year to identify those who may need more intensive services. At

tier 2, students participate in all core (tier 1) instruction, and additionally receive small group

intervention in an area specific to their instructional needs for 30 minutes each day. Progress is

monitored at least monthly to determine whether students are responding to the interventions

they are participating in. If a student still does not show progress after a set period of well-

matched, well-implemented tier 2 interventions, he may receive additional intervention services

that are more individualized and time intensive. A student ideally participates in tier 3

interventions for 60 minutes in addition to the 90 minutes of core classroom reading instruction.

Student progress is monitored at least weekly. The three tiers are fluid, meaning that students

receive the intensity of instruction that matches their needs, with the understanding that needs

can both change and be better understood over time.

Use of Evidence-based Practices and Interventions

Schools may find it helpful to identify existing instructional practices as falling within

tier 1, 2, or 3. Categorizing instructional practices by tiers has the potential to help schools to see

areas of strength and weakness within their three-tier reading model. However, schools that are

implementing a three-tier model for reading must understand that RtI is not just about providing

differentiated instruction. It is equally important, if not more so, for schools to ensure that the

practices used within each instructional tier are evidence-based. The curricula and interventions

used within a three-tier model for reading instruction are typically based on the findings of the


National Reading Panel (2000). Thus, reading instruction revolves around the development of

five central skills: phonemic awareness, phonics / alphabetic principle, fluency, vocabulary, and

text comprehension. Several resources are available to help schools select evidence-based

reading programs and instructional practices. Some examples include: the What Works

Clearinghouse (http://ies.ed.gov/ncee/wwc/reports/), the Florida Center for Reading Research

(http://fcrr.org), the Oregon Reading First Center (http://oregonreadingfirst.uoregon.edu), and the

Center on Instruction (www.centeroninstruction.org).

Data-based Decision Making

Data-based decision-making within a three-tier model for reading begins with a question.

Are we adequately supporting students to meet instructional goals and objectives? Are all

students on track for success? Which students need more intensive support? What areas of

reading instruction need to be improved? Is the REWARDS program helping Sandy to make

progress in reading? These are just some examples of questions that might lead schools to

collect data. The question at hand will determine which measures are appropriate for helping to

provide an answer.

Within a three-tier model for reading, the most commonly used measures are

curriculum-based reading probes (Deno, 1985; Good, Gruba, & Kaminski, 2002). When

considering the four types of student assessment (screening, progress monitoring, diagnostic,

outcomes), curriculum based measures (CBM) are appropriate for both screening and progress

monitoring. CBM are favored as screening and progress monitoring tools for two major reasons.

First, curriculum-based measurement is efficient. This is essential because three-tier models call

for all students’ reading skills to be screened at least three times per school year (Fuchs & Fuchs,

2007). The less time that is required to obtain valuable information about all students, the more


time is available to spend on instruction. Second, CBM are sensitive to change resulting from

interventions (Marston, Fuchs, & Deno, 1986). Sensitivity to change makes CBM ideal for

monitoring student progress weekly or monthly, as is best practice for students who are at risk

and participating in strategic and intensive reading interventions.

Large-scale standardized assessments such as the National Assessment of Educational

Progress and the Michigan Educational Assessment Program provide a broad picture of student

skills once per year. They play an important role, as they are tied to state and national standards,

as well as funding streams. While these tests can be effective as outcome assessments, they are

not appropriate for other purposes of assessment. Large-scale outcome assessments take a long

time to administer and may not provide data that are relevant to specific skill sets. In addition,

there is typically a wait time of several months between when the assessments are administered

and when results are available, making the data nearly irrelevant for meaningful intervention

planning (McGlinchey & Hixson, 2004).

CBM data, on the other hand, have been found to be predictive of scores on large-scale

assessments, further supporting their use within a three-tier model (McGlinchey & Hixson, 2004;

Stage & Jacobsen, 2001). CBM data can be analyzed at several levels. Regular screenings help to

identify students who are in need of more intensive services. For students who are participating

in tier 2 and 3 interventions, frequent evaluation of specific skills can help to inform intervention

planning (Fuchs & Fuchs, 2007). Examining data by grade-level, classroom, school building, and

district can provide information about whether groups are on track for becoming successful

readers.

Diagnostic assessments are ideally introduced for students who do not initially respond to

well-implemented tier 2 and 3 interventions. Information from diagnostic assessments can help


educators to better identify skill deficits and select an intervention that will be most likely to

impact the specific skill.

After collecting data, it is recommended that schools check the validity of the data and

spend time analyzing the results so that they can be used appropriately to answer the question at

hand. If improvements are needed, schools will have data to support those decisions and will be

able to determine if their plan worked by starting the data-based decision-making cycle over again.

Focus on Strengthening Positive Behavior

The provision of interventions based on students’ needs should strengthen all students’

reading skills. The ability-achievement discrepancy model for identifying students with specific

learning disabilities has been referred to as a “wait to fail” model. This label comes from the

repeated observation that students must fall far behind their peers before a large enough

ability-achievement discrepancy can be found to qualify them for special education services

(Torgesen et al., 2001).

One criticism of three-tier models for reading is that struggling readers may benefit at the expense of gifted and talented students, whose skills far surpass the minimum

standards for achievement. While the needs of gifted and talented students have not been

emphasized as much in the literature, a three-tier model can provide the opportunity to ensure

that gifted students’ needs are well-met (Council for Exceptional Children, 2007). The same core

principles of Response to Intervention can be applied to gifted learners (Pereles, Omdal, &

Baldwin, 2009; Rollin, Mursky, Sha-Coltrane, & Johnsen, 2009).

School-community Collaboration

As RtI initiatives expand and become more sophisticated, increasing attention is being

paid to how schools can effectively collaborate with families and communities. New work has


outlined the multitude of ways in which families can be involved in RtI efforts. Parents can play

integral roles at multiple levels (Miller, 2010; Reschly, 2010). They can serve on district and

building-level leadership teams as key stakeholders in planning and decision-making. Parents

should also be included in the problem-solving process related to their own children from the

very first indication that a student may be struggling (Tilly, 2002). All of these visions include

families as collaborative partners, and not just as recipients of information about what RtI is.

Examination of Systems Level Processes

Response to intervention is often mistakenly referred to as a special education initiative.

This misconception is likely the result of conversations about RtI revolving around the topic of

Specific Learning Disabilities. A three-tier model for reading actually targets the whole system

before small groups and individual students. Students at all skill levels are exposed to effective

instruction. The type of data collected is meant to inform system-level decision making that will

also influence individual students. Thinking about systems is more than just thinking about all

students together; it involves considering any type of support needed to improve

outcomes for all students (e.g., leadership, professional development, coaching, instructional

materials, improved scope and sequence, staff buy-in, communication and feedback loops, etc.).

Three-tier Models for Improving Behavior

The Schoolwide Positive Behavior Supports (SWPBS) Implementation Blueprint (2010)

provides an excellent general description:

School-wide Positive Behavior Supports (SWPBS) is a framework or approach

comprised of intervention practices and organizational systems for establishing

the social culture, learning and teaching environment, and individual behavior

supports needed to achieve academic and social success for all students. (p. 12)


This definition clearly illustrates that SWPBS is not one particular intervention or a singular

practice. Rather, it is a framework, approach, and set of practices. Critical SWPBS features

include (Sugai & Horner, 2006):

• A prevention-focused continuum of support.

• Proactive instructional approaches to teaching and improving social behaviors.

• Conceptually sound and empirically supported practices.

• Systems change to support effective practices.

• Data-based decision making.

Table 1 shows how these features align with five of the six key components of a public health

model. The primary sources used to describe SWPBS in the following section come from: The

Technical Assistance Center on Positive Behavioral Interventions and Supports (http://pbis.org),

the SWPBS Implementer’s Blueprint and Self Assessment (Technical Assistance Center on

Positive Behavioral Interventions and Supports, 2010), and the seminal work by Walker and

colleagues (1996), which first described a tiered model of behavior support.

Emphasis on Prevention in Three Levels

Regardless of the level of prevention, SWPBS revolves around six principles for

supporting positive student behavior: (1) Identify behavioral expectations, (2) Teach behavior

expectations, (3) Monitor expected behavior, (4) Acknowledge expected behavior, (5) Correct

behavioral errors, and (6) Use data for decision making. These features should be present in

supports across all three tiers.

The first tier of SWPBS can be considered primary prevention. All students have

opportunities to learn appropriate behaviors and expectations in all school settings. Students are

explicitly taught behavioral expectations, given ample opportunity to display them, and are


positively acknowledged for appropriate behavior. Behavioral errors are corrected immediately

and consistently. Data are collected to determine which students might need more behavioral

support. Students who need structured support that goes beyond the frequency and intensity of

that provided under universal prevention are eligible to participate in targeted group

interventions. Data decision rules are applied for determining which students need this second

layer of support and for determining which secondary intervention will be the best match. Daily

behavior report systems such as the Behavior Education Program are examples of tier two

interventions (Crone, Horner & Hawken, 2004). For students whose needs exceed the services

provided in tiers one and two, a tertiary level of intervention is available. At this level, students

have access to interventions and services that are more intense and individualized. It is generally

at the third tier that students acquire direct access to a behavioral support team, which is

comprised of people who know the child best and can provide expertise in conducting functional

behavior assessments and developing individualized behavior intervention plans.

Use of Evidence-based Practices and Interventions

SWPBS is theoretically grounded in applied behavior analysis (Anderson & Freeman,

2000; Anderson & Kincaid, 2005; Carr et al., 2002). This behavioral orientation will be seen in

most practices and interventions considered appropriate for use within a SWPBS framework. For

some schools, taking a behavioral orientation may be a shift from previous practices, which may

have relied primarily on individual or small group counseling and intervention sessions with

limited empirical support. Several resources are available to help schools select evidence-based

behavioral strategies and interventions. Some examples include: The What Works Clearinghouse

(http://ies.ed.gov/ncee/wwc/reports/), the Technical Assistance Center for Positive Behavioral

Interventions and Supports (http://pbis.org), and the Office of Juvenile Justice and Delinquency


Prevention Model Programs Guide (http://www2.dsgonline.com/mpg/). In the past few years,

much more research has been published that establishes the efficacy of practices and

interventions used within a SWPBS framework. Findings will be discussed in the next section of

the literature review.

Data-based Decision Making

Data-based decision-making within a SWPBS approach begins with a question. Is

student behavior a problem within our school? How do our discipline referral data compare to

national averages? When and where are problem behaviors occurring? What types of problem

behaviors are students engaging in? Who needs more intensive support? Are students

responding to function-based intervention? These are just some examples of questions that might

lead schools to collect data. The question at hand will determine which measures are appropriate

for helping to provide an answer.

Office discipline referrals (ODRs) are commonly used to evaluate the impact of SWPBS.

Although not a direct measure of student behavior, ODRs have been shown to be valid and

reliable indicators of school climate (Irvin, Tobin, Sprague, Sugai, & Vincent, 2004; Sugai,

Sprague, Horner, & Walker, 2000; Tobin & Sugai, 1999). The School-Wide Information System

(SWIS; May et al., 1993) is one available tool for measuring behavior problems in schools.

SWIS is a web-based system that can be used to record and monitor ODRs and student

suspensions. Schools using SWIS develop and follow a procedural flow chart for data-based

decision-making, which helps to ensure that data are used in an effective and systematic manner.

SWIS report features allow schools to easily determine what problem behaviors are occurring,

where they are occurring, when they are occurring, and which students are involved. These

different reports can help a leadership team to determine the scope of a problem and where


intervention might be best targeted. SWIS can be used to examine the behavioral climate at a

building level and can also be used to inform decisions on a smaller scale. New applications of

SWIS have recently been launched which allow schools to collect and analyze data for students

participating in the Check-in / Check-out targeted intervention and data for students who are

supported through individualized, function-based support plans. Whether or not schools use

SWIS, it is recommended that some type of system be in place for collecting and analyzing

schoolwide behavioral data.

An additional method of collecting individual student data is through functional

behavioral assessment (FBA). This procedure is typically reserved for students who have not

responded to tier one prevention and may require more individualized programming (Walker et

al., 1996). FBA allows one to determine the antecedents and consequences of a problem

behavior and then alter the contingency accordingly. FBA aligns particularly well with SWPBS

because of its focus on changing the environment to support the child, rather than viewing

children as solely responsible for the problem (Horner & Carr, 1998). An additional benefit of

FBA is that the data can directly inform and evaluate intervention (Repp & Horner, 1999).

After collecting data, it is recommended that schools check the validity of the data and

spend time analyzing the results so that they can be used appropriately to answer the questions at

hand. If improvements are needed, schools will have data to support those decisions and will be

able to determine if their plan worked by starting the data-based decision-making cycle over

again.

Focus on Strengthening Positive Behavior

The definition, teaching, and acknowledgement of expected behavior are all designed to

strengthen positive behavior. A SWPBS framework may help prevent schools from focusing too


heavily on the reduction of problem behaviors, while under-emphasizing the importance of

increasing students’ use of appropriate replacement behaviors. Unfortunately, schoolwide data

on increased positive behavior are not gathered as often as discipline referral patterns. Thus,

schools must rely on the assumption that reductions in misbehavior mean that positive behavior

has been strengthened (Irvin, Tobin, Sprague, Sugai, & Vincent, 2004).

School-community Collaboration

As schools consider adopting SWPBS, parents are often considered key stakeholders and

given voice in the decision-making (OSEP Center on Positive Behavioral Interventions and

Supports, 2004). Tier 2 and 3 interventions for individual students involve collaboration with

parents. Schools may also rely on community partners to support SWPBS principles, to garner recognition for SWPBS efforts, and to provide a variety of rewards for appropriate behavior.

Examination of Systems Level Processes

"The outcome of an effective systems approach is an organization that has three basic features (Gilbert, 1978; Horner, 2003): Common language, common experience, and common vision/values" (Technical Assistance Center on Positive Behavioral Interventions and Supports, 2010, p. 41). The developers of SWPBS have recognized that effective behavioral support

cannot focus solely on the implementation of evidence-based practices. Schools are advised to

develop systems to support staff behavior, practices to support student behavior, and data to

support decision-making. All of these supports should revolve around the attainment of highly

valued academic and behavioral student outcomes.

At this point, the reader should have a full understanding of the critical features of three-

tier models as applied to reading and behavior supports. The next section of this literature review

will summarize the research evidence on the efficacy of these models.


Efficacy of Three-tier Models for Improving Reading and Behavior

As previously illustrated, one of the most essential features of a three-tier model is

implementation of evidence-based practices. While many of the individual practices used within

three-tier models have supporting research, the uniqueness of a three-tier model comes from its

use as an overarching framework. Research which examines the efficacy of Response to

Intervention (RtI) and Schoolwide Positive Behavior Support (SWPBS) as a whole is much

sparser than the research on individual practices (VanDerHeyden, Witt, & Gilbertson, 2007). The

following sections summarize the available research in three parts for RtI and SWPBS. First, the

research on universal strategies is reviewed, followed by the research on secondary and tertiary

levels of support. Finally, research on each model as a whole is presented, including large-scale

evaluation studies.

Efficacy of Three-tier Models for Improving Reading

It has been well documented that without instructional intervention, students who

struggle with reading early in school continue to perform poorly on reading assessments in

subsequent years (Fletcher & Lyon, 1998; Francis, Shaywitz, Stuebing, Shaywitz, & Fletcher,

1996; Juel, 1988). Not only do students continue to struggle, but they continue to fall farther

behind their peers (Torgesen, 1997). The research presented below shows that it is possible to

break a trajectory of reading difficulties when effective instruction is provided. Meaningful

student outcomes include improved performance on standardized tests of reading and decreased

rates of identification for special education services (O’Connor et al., 2005; Vaughn et al., 2004;

VanDerHeyden, Witt, & Gilbertson, 2007).

Tier 1 / universal prevention. Researchers from the University of Texas are in the

middle of a 5-year longitudinal study designed to evaluate the effectiveness of the three tiers of


reading intervention (Vaughn et al., 2004). Preliminary findings reveal that students exposed to

tier 1 interventions showed more improvement on curriculum-based reading probes and

standardized reading tests than students who received typical classroom instruction. O’Connor

and colleagues (2005) trained elementary school teachers on the findings of the National

Reading Panel (2000) and provided structured professional development on how to use those

findings within classrooms. Similar to Vaughn et al., they found that improved teaching within

the core curriculum benefitted the students of trained teachers compared to students in a control

group.

Tiers 2 and 3 / secondary and tertiary prevention. Within an RtI framework, the

majority of empirical support exists at the secondary and tertiary levels. It may be easier to

control variables for studies that focus on small group and individual interventions, compared to

studies which examine universal reading instruction and multiple tiers of instruction.

Mathes and colleagues (2005) identified students who were at risk for reading failure in

first grade. Their study found that at-risk students who participated in tier 2 interventions

performed significantly better on tests of phonological awareness, word reading, and oral reading

fluency than at-risk students who participated in only tier 1 interventions. Students who did not

respond adequately to tier 2 interventions received an additional 16 weeks of intensive

instruction. All students demonstrated significant gains at the end of a 16-week period (Denton,

Fletcher, Anthony, & Francis, 2006). In addition, students who received tier 1, 2 and 3 services

showed more improvement on reading scores than students who received no special

interventions and students who received tier 1 interventions only. A review of similar research

by the Institute of Educational Sciences (IES) resulted in a practice guide, which summarizes

recommendations for using RtI to assist elementary students in the area of reading (Gersten et


al., 2008). The IES practice guide included five recommendations, one of which was supported

by strong research evidence. That recommendation falls under the category of Tier 2

Intervention. It reads:

Provide intensive, systematic instruction on up to three foundational reading skills

in small groups to students who score below the benchmark score on universal

screening. Typically, these groups meet between three and five times a week, for

20-40 minutes. (Gersten et al., 2008, p. 6)

The evidence increasingly indicates that students who are at risk benefit from well-designed,

well-implemented interventions at the secondary and tertiary levels of support.

Complete schoolwide model. Studies that systematically examine the entire

RtI process and effects of implementation at tiers 1, 2 and 3 have not been published at this

point. While several large-scale projects have published articles and book chapters on their

specific models, general outcomes, and lessons learned, no one has published research on the

entire model and process (Jimerson, Burns & VanDerHeyden, 2007). Consider the earlier section

of this literature review that described the critical features of a public health model as applied to

RtI for reading. Even that very brief summary revealed the complexity of RtI systems, data and

practices. Given the multiple components that must be integrated to implement RtI as designed,

as well as the innumerable process variations, the task of evaluating the entire RtI process and

associated outcomes is daunting. That being said, the studies described below do help us begin to

put the larger picture together.

VanDerHeyden, Witt, and Gilbertson (2007) examined the impact of implementing a full

RtI process on schools’ number of initial special education evaluations and the number of

students who qualified for services. After implementing a multiple baseline design with five


elementary schools in the same district, they found that once schools implemented the RtI

process, they had fewer initial referrals. In addition, a higher proportion of the students who were

evaluated ended up qualifying for special education services. Results indicate that the RtI process

helped to reduce the number of students needing special education services and increased the

accuracy of initial referrals.

The RtI Action Network (www.rtinetwork.org) has published a series of web-based

articles that summarize the research on Response to Intervention. One article in the series

reviewed 11 field studies of RtI (Hughes & Dexter, 2008), four of which were specific to

reading. The authors found that all 11 studies showed improved student outcomes.

Unfortunately, the research designs were not rigorous enough to firmly establish that

implementation of RtI was what contributed to the improved outcomes.

Burns, Appleton, and Stehouwer (2005) conducted a meta-analysis comparing the results

of field-based versus research-based RtI studies. They found that the field-based models

produced larger effects overall, most substantially in the area of systemic change. That study

provides promise for the potential of model RtI programs to be adopted by other schools and to

be successful in those settings.

In summary, the current body of research on RtI has established efficacy around specific

practices. More studies continue to be published each year. Future work needs to specify the

most effective systems and procedures, examine large-scale implementation, and employ more

rigorous research designs.

Efficacy of Three-tier Models for Improving Behavior

In the same way that early reading skills are predictive of future reading success, early

social/behavioral skills are also predictive of future social/behavioral and academic performance


(Sprague et al., 2001). One purpose of SWPBS is to break this prediction, such that students with

behavior problems early in school do not continue to have behavior problems throughout their

entire school careers. The research presented below shows that SWPBS is effective for

improving individual student behavior and the behavioral climate of whole schools. In 2010,

Horner, Sugai, and Anderson published an article that examined the evidence base for SWPBS at

Tiers 1, 2, and 3. Much of the research presented below is summarized in that article.

Tier 1 / universal prevention. The school-wide application of positive behavior supports

is relatively new compared to a long history of using applied behavior analysis and positive

behavior supports for individual children (Carr et al., 2002). In the past few years,

methodologically rigorous research has been conducted to evaluate the effects of implementing

universal prevention within a SWPBS framework. Results show that schools were able to

implement tier 1 practices with fidelity, as evidenced by scores of 80% or better on the

Schoolwide Evaluation Tool (Bradshaw, Koth, Bevans, Ialongo, & Leaf, 2008). High-quality

implementation of tier 1 practices also results in perceived and actual improvements in school

climate and student behavior (Bradshaw, Mitchell, & Leaf, 2009; Horner et al., 2009).

Tiers 2 and 3 / secondary and tertiary prevention. The longest standing research on

positive behavior support falls under the category of secondary and tertiary prevention. There

exists a strong evidence base for tertiary level practices which focus on functional behavioral

assessment and functional behavior support plans (Horner, Sugai, & Anderson, 2010).

The added benefit of the SWPBS framework is that it helps schools to select, implement and

evaluate secondary and tertiary interventions consistently and systematically.

Complete schoolwide model. Just as evaluation of the entire RtI system and process is

complex for reading, the same holds true for SWPBS. To date, no single study has examined the


effects of implementing all three tiers of support within the SWPBS framework. Until that

happens, we can only hypothesize that the layered implementation of all three tiers is more

effective than implementation of only one or two tiers of support. Individual studies of SWPBS

do consistently show reductions in office discipline referrals (ODRs; McCurdy, Mannella, &

Eldridge, 2003; Scott, 2001; Sprague et al., 2001). Maryland and Iowa

have both recently published studies on the effectiveness of their state-wide SWPBS initiatives.

Both states report reduced ODRs in participating schools (Barrett, Bradshaw & Lewis-Palmer,

2008; Mass-Galloway, Panyan, Smith & Wessendorf, 2008). Such studies indicate that large-

scale implementation is possible and that such initiatives are making an impact on student

outcomes.

Research on SWPBS more consistently involves the evaluation of implementation

fidelity, using a common set of measurement tools, than is seen in the body of research on RtI for

reading. This can be considered a strength of the studies on SWPBS and an area needing

improvement for studies on RtI for academic areas. Recommendations for future research on

SWPBS include examination of sustainability, cost/benefit analyses, high school applications,

and interaction effects when multiple tiers of support are integrated.

Support for Integrated Three-Tier Models

Up to this point, the literature review has addressed prevention models both in general

and specific to reading and behavior. There exist several potential benefits of using a prevention

framework across multiple content areas. First, the structure of supports will be consistent for all

stakeholders. If we want teachers and other support staff to be able to engage in effective

teaching, then we must minimize the amount of time and energy that staff must spend on

extraneous activities, such as learning and navigating different systems. Applying a prevention


framework across multiple content areas also makes sense for financial reasons. In a time when

many schools are struggling to keep their doors open, schools should invest in professional

development and practices that yield big outcomes for the amount of resources required to

implement and sustain proven practices. This is not to imply that three-tier models are easy

or inexpensive to implement. However, compared to initiatives and programs that

focus on a single practice or subset of students, sustained implementation of three-tier models

has the potential to impact students and schools more broadly.

Research is also supportive of an integrated model. There is evidence that behavior and

academic performance are linked for individual students and that the strength of the relation

between low academic skills and problem behavior grows over time (Fleming, Harachi, Cortes,

Abbott, & Catalano, 2004; Morrison, Anthony, Storino, & Dillon, 2001; Nelson, Benner, Lane,

& Smith, 2004). By improving instruction alone, we can improve student behavior (Filter &

Horner, 2009; Lee, Sugai, & Horner, 1999; Preciado, Horner, & Baker, 2009; Sanford, 2006).

Research examining the function of students’ misbehavior has also shown that students who

engage in disruptive behavior often do so to avoid or escape difficult academic tasks (McIntosh,

Chard, Boland, & Horner, 2006; McIntosh, Horner, Chard, Dickey, & Braun, 2008; Smith &

Heflin, 2001). When behavior plans have an instructional element they are much more effective

for students who are misbehaving to escape or avoid academic tasks (Roberts, Marshall, Nelson,

& Albers, 2001).

Scott and Barrett (2004) calculated the average number of minutes administrators spent

processing a typical office discipline referral and the amount of time a student typically spends

outside of class after engaging in a major problem behavior. They then illustrated how

implementation of SWPBS and the resulting reduction in office discipline referrals could free up


administrator time and also lead to increased amounts of time that students could be engaged in

instruction. It is important to note that improving student behavior can free up more instructional

time. However, it cannot be assumed that reductions in problem behavior alone will lead to

improved academic performance.

Rather the expectation is that establishing a predictable, consistent, positive and

safe social culture will improve the behavioral engagement of students in

learning, and that if this engagement is coupled with functional curriculum and

effective teaching, academic outcomes will be more likely. (Horner, Sugai, &

Anderson, 2010)
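The kind of arithmetic Scott and Barrett (2004) used can be illustrated as follows; the per-referral minute values here are hypothetical placeholders, not the figures reported in their study:

```python
# Illustrative back-of-envelope calculation of time recovered when office
# discipline referrals (ODRs) decline. All numbers are hypothetical.

ADMIN_MINUTES_PER_ODR = 10    # assumed administrator processing time per referral
STUDENT_MINUTES_PER_ODR = 20  # assumed time a student spends out of class per referral

def time_recovered(odrs_before: int, odrs_after: int) -> dict:
    """Estimate administrator and student minutes recovered per year."""
    reduction = odrs_before - odrs_after
    return {
        "odr_reduction": reduction,
        "admin_minutes": reduction * ADMIN_MINUTES_PER_ODR,
        "student_minutes": reduction * STUDENT_MINUTES_PER_ODR,
    }

# Example: a school drops from 500 to 300 ODRs in a year.
saved = time_recovered(500, 300)
print(saved)  # 200 fewer ODRs -> 2,000 admin minutes, 4,000 student minutes
```

Under these assumed values, even a modest reduction in referrals frees hours of administrator time and days of student instructional time over a school year.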

Efficacy of Integrated Models

McIntosh, Chard, Boland, and Horner (2006) examined student outcomes in schools that

had both behavior and reading systems established for at least five years.

descriptive information about the changes in the percent of students proficient on the Dynamic

Indicators of Basic Early Literacy Skills (DIBELS: Good & Kaminski, 2002) test of oral reading

fluency at the end of third grade and the change in percent of students with 0-1, 2-5 and 6 or

more ODRs during an entire school year. That particular study looked at six schools in one

district that already had high levels of implementation fidelity according to the School-wide

Evaluation Tool (SET: Horner et al., 2004). Data analyses compared schools’ data to a national

sample and revealed that, by third grade, students in these schools were more likely to score

above criterion in reading and received fewer office discipline referrals than students in the national sample.

Studies are beginning to show improvements in academic outcomes for schools that are

implementing SWPBS. Horner and colleagues (2009) found small, but statistically significant

differences in student achievement on state-wide standardized tests for schools that implemented


SWPBS compared to those that had not. More research is needed to confirm these findings and

to explore whether implementation of RtI for academic areas affects student behavior.

A meta-analysis of three-tier reading, behavior, and integrated models revealed larger

effect sizes for integrated models on reading outcomes than reading or behavior models

implemented in isolation (Stewart, Benner, Martella & Marchand-Martella, 2007). Although less

dramatic, the effect sizes for integrated models on behavior outcomes were larger than for

behavior models alone. While the research that supports the logic of implementing an integrated

model is clear, much remains to be learned about how integrated models can be effectively

implemented. Many states are supporting schools in the implementation of tiered behavior and

academic supports. However, most are doing so through separate initiatives. Oregon, Illinois and

Michigan are three examples of states with large-scale initiatives that employ an integrated

approach.

Michigan’s Three-tier Model: An Integrated Approach

Michigan's Integrated Behavior and Learning Support Initiative (MiBLSi) is one example

of a three-tier model that focuses dually on behavior and reading. This initiative is supported by

the Michigan Department of Education, Office of Special Education and Early Intervention

Services. The purpose of MiBLSi is "to develop support systems and sustained implementation

of a data-driven, problem solving model to help students become better readers with social skills

necessary for success" (Michigan Department of Education, 2008). Participating schools build

the capacity of their systems to implement SWPBS and effective school-wide reading

instruction. The MiBLSi model is based on the three-tier models for reading and behavior

described in the previous sections. More specifically, MiBLSi supports a problem solving

approach to RtI. This means that not all schools use the same core curriculum materials and


interventions. Guidance is provided for how to select instructional curricula, practices and

interventions that are evidence-based.

Individual schools apply to participate with the project. If accepted, schools identify a

leadership team (typically a subset of the school improvement team) to participate in a three-year

training cycle and support implementation in their schools. Each school also identifies reading

and behavior coaches who come from their own school, district, or intermediate school district

to provide more local support for implementation. The state of Michigan has been divided into

12 regions, where coordinators are located to provide regional technical assistance. More

information about this systems-change model can be found in the fifth edition of Best Practices

in School Psychology (McGlinchey & Goodman, 2008) and on the project website

(http://miblsi.cenmi.org).

Informal evaluation of MiBLSi’s project data shows that cohorts of schools are making

gains in student reading performance, decreasing major discipline referrals, and reducing special

education referrals and identification. A pilot study of MiBLSi showed that some schools

experienced more dramatic improvements than others (Ervin, Schaughency, Goodman,

McGlinchey, & Matthews, 2006). With regard to implementation of the dual systems, schools

were better able to implement SWPBS with fidelity than evidence-based reading supports.

Initiatives like MiBLSi present an opportunity to explore the implementation of integrated three-

tier models on a large scale.

Using Implementation Science to Deepen the Literature on Three-Tier Models

At this point, we have established that three-tier prevention models have the potential to

improve student outcomes in the areas of reading and behavior. An initial body of supporting

research has been established, with a combined model considered to be more effective than


implementing RtI for specific areas in isolation. While initial empirical support for the use of

three-tier models has been established, most writing on Response to Intervention includes

commentary on the vast amount of work yet to be accomplished (Chard & Linan-Thompson,

2008). With tens of thousands of schools adopting and implementing multi-tiered

models of service delivery, there is great urgency to answer the many remaining questions.

After reviewing the literature, it seems clear that what is not needed at the moment

is another study that examines one small aspect of RtI or SWPBS. What is needed is a systematic

look at an integrated model to provide insight into how such models are being implemented on a

large scale.

One way to accomplish this task is by examining a three-tier model through the lens of

implementation science. Fixsen and colleagues (2004) distinguish the practice and science of

“implementation” from the practice and science of “program development” and encourage that

these be viewed differently. They argue that in the era of evidence-based practice, the majority of

research has involved the development and identification of evidence-based practices.

Significantly less research has revolved around the implementation, sustainability, and scaling up

of best practices. Knowing that a practice has supporting research does not ensure that the

practice will be implemented as intended and in a way that will yield beneficial outcomes for

students. Thus, it is essential to attend to the research on both the efficacy of a practice and the

research on how to best implement a practice.

Studies of implementation are among the most common suggestions for future research

on three-tier models (Griffiths, Parson, Burns, VanDerHeyden & Tilly, 2007; Kovaleski, 2007).

Implementation science is complex and involves many intersecting variables, including a series

of stages, specific drivers (systemic supports), and feedback loops (Fixsen et al., 2004).


Examining all aspects of implementation science within an integrated three-tier model was far

beyond the scope of this research study. Instead, this study focused on one feature of

implementation: fidelity. Implementation fidelity has been defined as the delivery of a practice in

the way in which it was designed to be delivered (Gresham, MacMillan, Beebe-Frankenberger,

& Bocian, 2000). Interventions and implementation supports are not always carried out as

planned. Thus, the “actual implementation” of a program should be interpreted as the actual

intervention and actual supports that were completed (Greenberg, Domitrovich, Graczyk, &

Zins, 2005). Researchers should be cautious about assuming that an intervention was

implemented entirely as planned and outcomes need to be evaluated in the context of the quality

of intervention implementation. The current study examined Michigan’s state-wide initiative to

implement a three-tier model of integrated reading and behavior supports in public schools. The

focus of this study was on the implementation fidelity of schoolwide / universal practices.

Research Questions

The questions for this study address (1) schools’ implementation fidelity as they

participated with MiBLSi and worked on implementing a three-tier model of integrated reading

and behavior supports, (2) sustainability of implementation, and (3) the relation between schools’

scores on measures of implementation fidelity and student performance in the areas of reading

and behavior.

Research Question 1: To what extent did schools implement three tier reading and

behavior systems with fidelity across time?

Hypothesis: Schools in a given cohort will have had a range of initial implementation

scores. Over time, schools will make growth, with more growth occurring during the first

few years of implementation. At least 75% of schools will have attained criterion-level


scores on the measures of implementation fidelity at least once. Schools might have been

more likely to attain criterion levels of implementation fidelity during year 2 or later.

• Question 1, Part A: What were the lowest and highest levels of implementation that

schools started with during their first year of participating with MiBLSi?

• Question 1, Part B: How did schools score on measures of implementation fidelity across

time?

• Question 1, Part C: What were the most and least amounts of growth that schools made in

one year?

• Question 1, Part D: Did schools experience statistically significant levels of

implementation growth between each year of participation?

• Question 1, Part E: How many schools attained criterion scores over time?
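The descriptive summaries implied by Parts A through E can be sketched in a few lines, assuming each school has one fidelity score per year on a 0-100 scale and that 80 is the criterion (the school names, scores, and criterion below are hypothetical):

```python
# Sketch of the descriptive summaries in Question 1, Parts A-E, using
# hypothetical fidelity scores (percent of items in place) for three schools
# across three years of participation. The 80-point criterion is an assumption.

CRITERION = 80.0

scores = {  # school -> fidelity score by year (hypothetical data)
    "School A": [55.0, 78.0, 86.0],
    "School B": [70.0, 83.0, 88.0],
    "School C": [40.0, 52.0, 61.0],
}

initial = {s: yrs[0] for s, yrs in scores.items()}
lowest_start, highest_start = min(initial.values()), max(initial.values())  # Part A

one_year_gains = [b - a for yrs in scores.values()
                  for a, b in zip(yrs, yrs[1:])]                            # Part C
least_growth, most_growth = min(one_year_gains), max(one_year_gains)

reached_criterion = {s: any(y >= CRITERION for y in yrs)                    # Part E
                     for s, yrs in scores.items()}

print(lowest_start, highest_start)      # 40.0 70.0
print(least_growth, most_growth)        # 5.0 23.0
print(sum(reached_criterion.values()))  # 2 schools ever reached criterion
```

Parts B and D would extend this with year-by-year score distributions and significance tests of year-to-year growth, which require the full data set rather than a sketch.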

Existing research on implementation fidelity relates primarily to the implementation of

specific interventions (Gresham, Gansle, Noell, Cohen, & Rosenblum, 1993; Mortenson & Witt,

1998). Kincaid, Childs, Blase and Wallace (2007) called attention to the fact that the school-wide

implementation of SWPBS has been studied very little. This gap in the research is significant as

national and local scale-up efforts are underway. The implementation process and fidelity of

SWPBS can be measured using several tools, some of which include: the School-wide

Evaluation Tool (SET; Horner et al., 2004), Positive Behavioral Interventions and Supports

Team Implementation Checklist (PBIS-TIC; Sugai, Horner, & Lewis-Palmer, 2002), Positive

Behavioral Interventions and Supports Self Assessment Survey (PBIS-SAS; Sugai, Horner, &

Todd, 2003), and the Benchmarks of Quality (BoQ; Kincaid, Childs, & George, 2005). The Planning and

Evaluation Tool for Effective Schoolwide Reading Programs-Revised (PET-R; Kame’enui &

Simmons, 2003) is one of the most commonly used measures of implementation fidelity for


schoolwide reading programs and systems, most notably by schools that participated in Reading

First grants. Yet, this measure has not been validated and RtI research studies seldom report its

use.

Some studies provide information about fully implemented programs, but do not describe

data related to the process of reaching full implementation. Other studies have examined

implementation fidelity at single points in the study, but not throughout the entire

implementation process. For example, McCurdy, Mannella, and Eldridge (2003) reported the

student outcomes resulting from 3 years of SWPBS implementation. However, they only

reported fidelity scores from the beginning of the second year of implementation. The current

study examined implementation progress at one-year time increments. It was anticipated that

examining implementation over the course of several years would allow for a more refined

understanding of how long it takes to implement large-scale changes in the form of an integrated

three-tier model.

To date, no large-scale studies have been published which systematically examined the

levels and quality of implementation over the span of several years for integrated three-tier

models. One goal of the current study was to fill that gap in the literature. It was hypothesized

that there would be variation in the length of time it took individual schools to implement three-

tier systems for reading and behavior. It was also hypothesized that the majority of growth would

occur within the first two years of implementation. This hypothesis was based on inconsistent

findings concerning schools’ implementation scores after set periods of time (Mass-Galloway,

Panyan, Smith & Wessendorf, 2008; McCurdy, Mannella & Eldridge, 2003; Sprague et al.,

2001). Schools are often told to anticipate a time frame of 3-5 years before a new practice truly

becomes integrated into a system and can be considered fully implemented (Fixsen, Naoom,


Blase, Friedman & Wallace, 2004; OSEP Center on Positive Behavioral Interventions and

Supports, 2004).

Most developers of fidelity measures recognize a criterion score, or minimum threshold,

that indicates a practice is being implemented with fidelity. When a school attains an

implementation score at or above the criterion, it can be considered to be implementing

a practice with fidelity. Thus, for this study, I was not only interested in examining initial

implementation and performance over time, but also to what extent schools were implementing

an integrated three-tier model with fidelity over time. It was hypothesized that schools would be

most likely to attain criterion scores during their second or third years of implementation.

Attaining a criterion score might have been very difficult during the first year, especially for

schools that were new to RtI and SWPBS. Yet, if a school had not made enough improvement to

reach a criterion score during the first 3 years, it might be unlikely to do so in later years, after

the MiBLSi training sequence had ended and formal support decreased.

Research Question 2: How long were fully implemented three-tier reading and behavior

systems sustained?

Hypothesis: At least 75% of schools that attained criterion scores on the measures of

implementation fidelity will have sustained scores at or above criterion for at least 1 year.
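One way to operationalize "sustained" implementation is a year at or above criterion followed by at least one more such year; a minimal sketch under that assumption (the criterion value and score series are hypothetical):

```python
# Check whether a school that reached the fidelity criterion sustained it for
# at least one subsequent year. Criterion and score series are hypothetical.

CRITERION = 80.0

def sustained(yearly_scores: list) -> bool:
    """True if any year at/above criterion is followed by another such year."""
    return any(a >= CRITERION and b >= CRITERION
               for a, b in zip(yearly_scores, yearly_scores[1:]))

print(sustained([55.0, 82.0, 85.0]))  # True: criterion held for a second year
print(sustained([70.0, 81.0, 74.0]))  # False: reached criterion but slipped back
```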

The first set of research questions addresses how well schools were able to implement an

integrated three-tier model. Implementing an integrated three-tier model with fidelity is a great

accomplishment. However, that might be considered just the first step (Coburn, 2003). If schools

were able to implement with fidelity one year, but not sustain implementation, the likelihood of

having a lasting impact on student achievement could be compromised. One study found that 85%

of schools were able to maintain or improve implementation scores one year after they had


reached criterion (Eber et al., 2004). Other research has shown that schools can successfully

sustain implementation over time (Doolittle, 2006; McIntosh & Horner, 2009; Muscott, Mann, &

LeBrun, 2006). It was therefore expected that a large proportion of schools that attained criterion

scores on the measures of implementation fidelity would be able to maintain or improve their

scores in subsequent years. A review of the literature revealed no empirical studies that

specifically addressed the sustainability of RtI for academic areas.

Research Question 3: To what extent were student outcomes in reading and behavior

associated with scores on the implementation checklists?

Hypothesis: Reading implementation scores will predict student outcomes in reading.

Behavior implementation scores will predict student outcomes in behavior. The

combination of reading and behavior implementation measures will better predict student

outcomes than either set of measures on their own.

• Question 3, Part A: Do scores on the reading implementation checklist (PET-R)

significantly predict student reading outcomes?

• Question 3, Part B: Do scores on the behavior implementation checklists (PBIS-SAS and

PBIS-TIC) significantly predict student behavior outcomes?

• Question 3, Part C: Does the combination of scores on the reading and behavior

implementation checklists better predict student reading outcomes than scores on the

reading implementation checklists alone?

• Question 3, Part D: Does the combination of scores on the behavior and reading

implementation checklists better predict student behavior outcomes than scores on the

behavior implementation checklists alone?
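Parts C and D amount to comparing nested regression models; a minimal ordinary least squares sketch on synthetic data illustrates the comparison (the variable names and data are illustrative, not MiBLSi's, and a real analysis would also test the significance of the R² change):

```python
# Compare a reduced regression model (reading fidelity only) against a full
# model (reading + behavior fidelity) by explained variance (R^2).
# Data are synthetic; in practice one would test the R^2 change for significance.
import numpy as np

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R^2 of an ordinary least squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(0)
pet_r = rng.uniform(40, 100, size=60)   # hypothetical reading fidelity scores
pbis = rng.uniform(40, 100, size=60)    # hypothetical behavior fidelity scores
reading_outcome = 0.5 * pet_r + 0.3 * pbis + rng.normal(0, 5, size=60)

r2_reduced = r_squared(pet_r.reshape(-1, 1), reading_outcome)
r2_full = r_squared(np.column_stack([pet_r, pbis]), reading_outcome)
print(r2_full >= r2_reduced)  # True: adding predictors never lowers in-sample R^2
```

Because in-sample R² never decreases when predictors are added to a nested OLS model, the substantive question is whether the increase is statistically and practically meaningful.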


While several studies have shown positive student outcomes after the implementation of

a three-tier model, the relation between implementation fidelity and student outcomes is

relatively understudied, particularly for reading systems. Instead, several research groups have

indicated that positive student outcomes should only be expected once an efficacious practice has

been fully implemented (Fixsen, Naoom, Blase, Friedman & Wallace, 2004; Horner & Sugai,

2007). Thus, many researchers only examine outcome data once a practice is fully implemented.

Other studies describe implementation levels and then separately describe student outcomes, but

never draw a connection between the two (U. S. Department of Education, 2006a; U. S.

Department of Education, 2006b). If implementation of a practice leads to improved student

outcomes, then it seems likely that improvements in student outcomes might parallel

improvements in implementation.

Studies have also shown the added benefit of implementing integrated systems on both

academic and behavioral outcomes (Horner et al., 2009; McIntosh, Chard, Boland, & Horner,

2006; Stewart, Benner, Martella & Marchand-Martella, 2007). A sub-question in this section

explored whether the combination of both reading and behavior implementation measures better

predicted student outcomes than the reading or behavior measures on their own. Based on

previous research, it was hypothesized that the combination of reading and behavior

implementation measures would better predict student outcomes than the reading or behavior

measures alone.

It was hypothesized that scores on measures of implementation fidelity would be strong

predictors of schools’ student outcomes in the areas of reading and behavior when examined at

the same time. If schools experienced improved student data without corresponding

improvement on the implementation scores, multiple explanations might be considered. First,


there might have been a time-lag between when schools made implementation improvements and

when those implementation improvements were reflected in student outcome data. Second,

implementation of a three-tier model might have been causing the changes, but the measurement

tools might not have been targeting some critical dimensions of implementation. Finally,

something other than implementation of a three-tier model could have caused changes in student

outcomes.


CHAPTER 3

METHOD

Data Source and Sample

Existing project data from Michigan’s Integrated Behavior and Learning Support

Initiative (MiBLSi) were used to answer this study’s research questions. MiBLSi has been

collecting data from participating schools since the first cohort began in January of 2004. Data

used in this study were collected from five cohorts of schools between 2004 and 2009. Schools

were not asked to collect additional data or engage in any additional activities as a part of this

dissertation project.

MiBLSi project data were stored in several web-based data systems, Microsoft Excel

spreadsheets, one SPSS database, and a FileMaker Pro database. While most of the data were housed in the FileMaker Pro database, at the onset of this study data were distributed

across many different electronic files. Thus, the data needed to be reorganized into one larger SPSS file containing all of, and only, the variables relevant to this study. After a data set specific to this study was organized, school names were deleted from

the file and replaced by school identification numbers. A record of the school names and

identification numbers was kept in a location apart from the SPSS data file.

School Selection Criteria

In the fall of 2010 there were 582 public schools participating with MiBLSi. These

schools were split across seven cohorts and included elementary and middle schools.

Participating schools represented 262 school districts and 43 intermediate school districts /

regional service agencies, which equated to 47% of Michigan districts and 74% of Michigan

intermediate school districts. Several inclusion / exclusion criteria were applied to identify a sub-sample of the 582 MiBLSi schools to be included in this study. Criteria were related to a school’s

cohort, grade levels included, and amount of data submitted.

Schools in Cohorts 1, 2, 3, 4.1, 4.2, and 5 were included in this study. Schools from

Cohorts 4.3, 4.4, 6, and 7 were excluded. Cohort 4.3 consisted of primarily middle schools and

the remaining elementary schools were from one region of the state. Cohort 4.4 schools were all

from the same district. Because a specific region or district could have been identified, analysis

of the Cohort 4.3 and 4.4 data would have violated schools’ privacy. It generally took schools

several months to establish their evaluation systems and some measures were only collected once

per year. At the time data were analyzed for this study, Cohort 6 schools had submitted minimal

amounts of data, and were therefore excluded from the study. While Cohort 7 was included in

the total count of buildings participating with MiBLSi, they did not officially start until August,

2010. Therefore, no data were available from Cohort 7 schools. Application of the cohort

criterion narrowed the potential schools from 582 to 274.

Only elementary schools were included. More specifically, only schools including

kindergarten through sixth grade were eligible for inclusion. Schools did not need to have all

grades from kindergarten through sixth grade represented in their buildings. Rather, schools needed to include at least one grade from kindergarten through sixth grade and none from

grades 7 through 12. Schools including grades 7 through 12 were excluded to create consistency

in the type of schools being examined. Additionally, the Dynamic Indicators of Basic Early

Literacy Skills (DIBELS; Good & Kaminski, 2002) curriculum-based literacy measures are only

available through sixth grade. An alternative set of literacy measures (AIMSweb Reading-

Curriculum Based Measures and Maze) was used to measure students’ reading performance at

the middle school level. Restricting data analysis to only kindergarten through sixth grade


allowed for the interpretation of student reading outcomes without having to account for

differences in the reading measures themselves. Research has also shown that office discipline

referral rates tend to be higher in middle and high schools (Irvin et al., 2006; Sugai, Sprague,

Horner & Walker, 2000; Schoolwide Information System, 2010). Therefore, the inclusion of

middle schools in the data analysis could have potentially skewed or confounded office

discipline referral data. Application of the elementary school criterion narrowed the potential

schools from 274 to 256.

In addition, schools must have submitted at least one score from the systems / process

measures (described in the measures section) to be included in the study. There were 6 schools for which no implementation data were available. Application of the data criterion reduced the

potential number of schools from 256 to 238, the final number of schools included in this study’s

sample. Number of schools by cohort, average enrollment, and average percentage of students

eligible for free or reduced-price lunch are available in Table 2. Schools’ average demographic

features are similar across all cohorts, with standard deviations showing large variability within

cohorts.

Measures and Variables

Schools participating with MiBLSi were required to collect two types of data: systems /

process data (implementation fidelity) and student outcome data. Systems / process data

provided regular information about changes in staff behavior related to implementation of

school-wide reading and behavior systems. Systems / process data were intended to provide

more frequent feedback about staff efforts than can be reflected through student outcome

measures. Process data may also be more sensitive to the incremental changes that occur within a

system when implementing a new practice. Student outcome measures were used to evaluate the


impact of research-based practices on students’ academic and behavioral performance. Ideally,

schools will have used the systems / process data and the student outcome data together as they

reviewed progress toward their school improvement goals and developed a plan for

improvement.

Systems / process and student outcome data were intended for use at multiple levels. Data

informed decision making at the school building, region, and whole-project levels to determine

where and how to provide the supports that would allow for improved implementation and

student outcomes. For example, teams systematically reviewed their data at least three times per

year at the Data Review trainings. Time was allotted for teams to identify areas of strength and

weakness and develop an action plan for how to address areas needing improvement. School

teams were then supported by their reading and behavior coaches as they replicated these data

review meetings more frequently within their school buildings. Because schools attended

trainings within specified regions of the state, the regional technical assistance partners may have

looked at the data for an entire region to determine what areas needed to be more thoroughly

addressed through training and coaching. At the whole-project level, data were used in much the same way as at the regional level. In addition, annual data reports to the state and federal Departments of

Education provided information about the progress and success of the MiBLSi project as a

whole.

The first three measures described below examine the implementation fidelity of schools’

behavior and reading supports. Scores on these measures represent staff ratings of

implementation fidelity and are used to answer research questions 1-3. Scores are based on self-assessment. Next are descriptions of the two student outcome measures, which were used as

indicators of students’ reading skills and behavioral outcomes. The student outcome variables in


this study are at the whole-school level and are aggregates of individual students’ performance.

The student outcome measures were used to answer research question 3. For each measure, the

following information is provided: (1) information about the purpose of the measure; (2)

information about items, subscales and scoring; (3) a description of which scores were used for

this study and what criterion, or score, represents a high level of implementation fidelity or

strong student outcomes; and (4) information about when and how MiBLSi schools typically

completed and submitted each measure. Table 3 provides an overview of the data schools were expected to submit from January 2004 through May of 2009. Cohort 1 schools had been engaged

in data collection for the longest period of time, as they started working with MiBLSi the

earliest. The least amount of data were available from Cohort 5 schools, as they started working

with MiBLSi in August of 2008.

Planning and Evaluation Tool for Effective School-wide Reading Programs-Revised

(PET-R)

The PET-R (Kame’enui & Simmons, 2003) is a 38-item tool used for developing and

maintaining effective school-wide reading programs. The PET-R was designed for use in

evaluation and action planning. School teams self-report whether each of the 38 items is fully in

place (2 points), partially in place (1 point), or not in place (0 points). A copy of the PET-R can

be found in Appendix A.

Questions are arranged under the following seven subscales: (1)

Goals/Objectives/Priorities--5 items and 14 possible points, (2) Assessment--8 items and 20

possible points, (3) Instructional Programs and Materials--5 items and 22 possible points, (4)

Instructional Time--5 items and 14 possible points, (5) Differentiated Instruction/ Grouping/

Scheduling--5 items and 10 possible points, (6) Administration/ Organization/ Communication--


6 items and 12 possible points, and (7) Professional Development--4 items and 8 possible points.

Eleven items are more heavily weighted than the rest.

Total scores range between 0 and 100. A total score of 80 or above is used as an indicator

that a MiBLSi school is implementing its school-wide reading system with fidelity. Other studies

have used criteria set at an 85% Assessment subscale score and an 85% overall score (McIntosh,

Chard, Boland & Horner, 2006; McIntosh, Horner, Chard, Boland & Good, 2006). Thus, the

MiBLSi criteria may be slightly more lenient than criteria used by other projects. Information on

the technical adequacy of the PET-R has not been published. However, it was developed using

the Positive Behavioral Interventions and Supports Self Assessment Survey (Sugai, Horner &

Todd, 2003) as a model. MiBLSi leadership teams initially completed the PET-R at a Schoolwide Reading Training in year 1 of the three-year training sequence, which typically occurred in the spring of the school year. In subsequent years, schools completed the PET-R at

either the Spring or Fall Data Review Trainings. In order to compare PET-R scores with student

outcome data, Fall PET-R scores were interpreted as indicators of schools’ reading programs

from the spring of the previous school year.
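The PET-R scoring scheme described above can be sketched in a few lines. This is a hedged illustration rather than the official scoring key: the per-item weights below are hypothetical, since the actual point values of the 11 more heavily weighted items are defined on the instrument itself; on the real PET-R the 38 item maxima sum to 100.

```python
# Hedged sketch of PET-R scoring. Each item is self-rated 0 (not in
# place), 1 (partially in place), or 2 (fully in place); item weights
# (maximum point values) are hypothetical here.

def petr_total(item_ratings, item_max_points):
    """Weighted PET-R total on a 0-100 scale."""
    if len(item_ratings) != len(item_max_points):
        raise ValueError("one rating and one weight are needed per item")
    return sum((r / 2) * mx for r, mx in zip(item_ratings, item_max_points))

def meets_fidelity(total, criterion=80):
    """MiBLSi treats totals of 80 or above as implementation with fidelity."""
    return total >= criterion

# Toy example: four items, one double-weighted.
total = petr_total([2, 1, 2, 0], [4, 2, 2, 2])
print(total)                  # 7.0
print(meets_fidelity(total))  # False for this tiny subset of items
```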

Positive Behavioral Interventions and Supports Self Assessment Survey (PBIS-SAS)

The PBIS-SAS (Sugai, Horner & Todd, 2003) was introduced in 1999 by Lewis and Sugai

as a means of gathering information on educators’ views of SWPBS and for the purpose of

action planning. See Appendix B for a copy of the PBIS-SAS. The Schoolwide Evaluation Tool

(SET; Sugai, Lewis-Palmer, Todd & Horner, 2001) is perhaps a more objective tool for assessing

SWPBS implementation fidelity because it is administered by an external evaluator and has

strong technical properties. However, the SET was not developed to be used specifically during

the initial stages of SWPBS implementation (Safran, 2006), which may make the PBIS-SAS a


more functional tool for schools as they begin implementation. The PBIS-SAS is unique as a

measure of implementation fidelity in that it calls for input from a school’s whole staff, not just

the school leadership team. Internal consistency of the PBIS-SAS has been found to range from

.60-.75 on the current status scales (Safran, 2006). Ratings on the four different systems have

been found to differ significantly, indicating that the four systems are truly distinct (Safran, 2006).

Significant differences have also been observed between the PBIS-SAS scores from different

schools, indicating that this measure is sensitive to implementation in various settings.

This 43-item self-report inventory facilitates evaluation of a school’s level of SWPBS

implementation and need for improvement in four systems: (a) school-wide--15 items; (b) non-

classroom--9 items; (c) classroom--11 items; and (d) individual student--8 items. Respondents

first indicate the current status of each system (e.g., whether the support is in place, partially in

place, or not in place). Respondents then indicate whether the item is high, medium or low

priority for improvement. This study only examined ratings of whether the supports were in

place overall, not priority ratings. Scores for each system range from 0 to 100 percent.

Overall scores can range from 0 to 100%, representing the percentage of respondents who indicated that effective behavior support systems are in place in the school. Overall scores represent an average of scores for the

four sub-systems. SWPBS experts indicate that PBIS-SAS overall and subscale scores ranging

between 50% and 70% suggest strong implementation integrity. This wide range is intended to

reflect the fact that individual staff responses may vary greatly, particularly on measures such as

the PBIS-SAS where respondents are not limited to those on the school leadership team. The

MiBLSi coordinators selected “66% of respondents indicate the system is in place” as a criterion

for implementation fidelity, a number which falls between the recommended 50 and 70 percent.


Each staff member can complete the PBIS-SAS online at the Positive Behavior

Interventions and Supports Surveys website (www.pbssurveys.org). The Evaluation Blueprint for

Schoolwide Positive Behavior Support (Algozzine et al., 2010) recommends that schools

complete this measure annually to develop, review and update related action plans. MiBLSi

schools completed the PBIS-SAS once per school year, typically in late spring. For this study,

only the overall scores were used, not the sub-scores for the four individual systems.
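Because the overall PBIS-SAS score is the mean of the four subsystem "in place" percentages, the MiBLSi criterion check can be sketched as follows; the subsystem values below are hypothetical.

```python
# Sketch of the PBIS-SAS overall score and the MiBLSi fidelity check.

def sas_overall(subsystem_pcts):
    """Mean of the school-wide, non-classroom, classroom, and
    individual-student system percentages."""
    return sum(subsystem_pcts) / len(subsystem_pcts)

def sas_meets_criterion(overall, criterion=66.0):
    """MiBLSi criterion: at least 66% of respondents rate systems in place."""
    return overall >= criterion

overall = sas_overall([72.0, 65.0, 60.0, 55.0])  # hypothetical subsystems
print(overall)                       # 63.0
print(sas_meets_criterion(overall))  # False
```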

Positive Behavioral Interventions and Supports Team Implementation Checklist

(PBIS-TIC)

The PBIS-TIC (version 2.0; Sugai, Horner & Lewis-Palmer, 2002) is a 23-item self-report

inventory that is used to assess the degree to which foundations (Tier 1) of SWPBS are being

implemented. It is also used as an action-planning tool for school leadership teams, meaning that

schools should consider how to make improvements in their SWPBS system based on PBIS-TIC

areas of weakness. See Appendix C for a copy of the PBIS-TIC. Items on the PBIS-TIC are

considered to be essential features and processes associated with implementing SWPBS with

fidelity (OSEP Center on Positive Behavioral Interventions and Supports, 2004).

The PBIS-TIC consists of the Start-Up Activity Checklist (17 items) and the On-Going

Activity Monitoring Checklist (6 items). Items on the Start-Up Activity Checklist fall under the

following categories: (1) establish commitment; (2) establish and maintain team; (3) self-

assessment; (4) establish school-wide expectations; (5) establish information system; and (6)

build capacity for function-based support. The On-Going Activity Monitoring Checklist asks

about the team processes surrounding SWPBS action planning. However, this portion of the

PBIS-TIC is not used to calculate the total implementation score. For all items on both checklists

school teams indicate whether each activity has been achieved (2 points), is in progress (1 point), or has not yet been started (0 points). Raw scores on the Start-

up Activity Checklist can range from 0 to 34.

Overall implementation scores on the PBIS-TIC can range from 0-100% of items scored

as “achieved.” Because the technical properties of the PBIS-TIC have not yet been established,

MiBLSi directors sought the assistance of national SWPBS experts to identify an appropriate

criterion score. “80% of items scored as achieved” was selected as the criterion score due to the

similarity of the PBIS-TIC to the School-wide Evaluation Tool (SET; Horner et al., 2004). The

SET is a SWPBS implementation measure whose technical properties have been well

established, and for which 80% is used as a criterion for implementation fidelity.

The entire PBIS-TIC was completed by each school's leadership team once per quarter

(October, December, March, May). MiBLSi schools typically completed a paper copy of the

PBIS-TIC during a school leadership team meeting or a MiBLSi training and then submitted the

scores online at the PBIS Surveys website (www.pbssurveys.org). Schools’ overall PBIS-TIC

scores were used for this study.
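The PBIS-TIC overall score can be sketched as follows, assuming it is computed as the percentage of the 17 Start-Up Activity Checklist items rated "achieved"; the team ratings below are hypothetical.

```python
# Sketch of the PBIS-TIC overall score. The On-Going Activity
# Monitoring items are excluded from the total, per the scoring rules.

def tic_percent_achieved(item_ratings):
    """item_ratings: 2 = achieved, 1 = in progress, 0 = not started."""
    achieved = sum(1 for r in item_ratings if r == 2)
    return 100.0 * achieved / len(item_ratings)

ratings = [2] * 14 + [1, 1, 0]      # hypothetical team self-ratings
pct = tic_percent_achieved(ratings)
print(round(pct, 1))                # 82.4
print(pct >= 80)                    # True: meets the 80% criterion
```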

Dynamic Indicators of Basic Early Literacy Skills, 6th Edition (DIBELS)

The DIBELS (Good & Kaminski, 2002) are a set of norm-referenced, curriculum-based

reading probes. They can be used for screening and progress monitoring. Scores are predictive of

future reading success in elementary school (Good, Simmons & Kame’enui, 2001; Hosp &

Fuchs, 2004).

Although DIBELS scores can be interpreted at the individual and small group level, this

study examined only aggregate DIBELS data at the whole-school level. The DIBELS data for this

study included only the spring benchmark scores. This allowed for alignment with PET-R scores

and reflected the efforts of the staff and students across the entire school year. At the spring


benchmark, kindergarten students are assessed using the Letter Naming Fluency, Phoneme

Segmentation Fluency, and Nonsense Word Fluency probes. First grade students are assessed

using the Phoneme Segmentation Fluency, Nonsense Word Fluency, and Oral Reading Fluency

probes. Students in grades 2 through 6 are assessed using the Oral Reading Fluency probes. The

DIBELS 6th edition measures can be downloaded for free from www.dibels.uoregon.edu.

This study examined the percentages of students who scored at the benchmark and

intensive instructional recommendations in the spring of each school year. The instructional

recommendation (i.e., benchmark, strategic, intensive) indicates a student’s level of instructional

need based on their DIBELS scores. Students with a “benchmark” instructional recommendation

are considered at low risk for reading failure. For assessment periods when students are assessed

using more than one indicator, the instructional recommendation reflects a weighted summary

score. This study did not examine the percentage of students who scored at the strategic

instructional recommendation. It was assumed that shifts in the percentages of students at the

benchmark and intensive instructional recommendations would be sufficient for revealing any

change. The strategic percentages were also excluded to reduce the number of inter-related

DIBELS variables included in data analyses. Consistent with a public health framework, it is

recommended that schools have at least 80% of students score at benchmark, less than 15% of

students score at the strategic instructional recommendation, and less than 5% of students score

at the intensive instructional recommendation.

MiBLSi schools conducted school-wide screenings using the DIBELS measures at the

beginning, middle, and end of each school year. Students who were identified as at risk for

reading failure then received targeted or intensive interventions in addition to the regular

classroom curriculum and had their reading progress monitored using the appropriate DIBELS


progress monitoring probes. Two DIBELS variables were used for this study. The first was

DIBELSB, which represented the percentage of students in a school who scored at or above the

benchmark score during Spring screening. The second variable was DIBELSI, which represented

the percentage of students in a school who scored in the at-risk or intensive category during

Spring screening. For each year that a school submitted DIBELS data, they had a DIBELSB and

DIBELSI score included in the dataset for this study.
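The two school-level DIBELS variables can be sketched as simple aggregates of individual students' spring instructional recommendations; the student counts below are hypothetical.

```python
from collections import Counter

# Sketch of the DIBELSB and DIBELSI variables: percent of students at
# the benchmark and intensive instructional recommendations.

def school_level_dibels(recommendations):
    """recommendations: one of "benchmark", "strategic", "intensive"
    per student. Returns (DIBELSB, DIBELSI) as percentages."""
    counts = Counter(recommendations)
    n = len(recommendations)
    return 100.0 * counts["benchmark"] / n, 100.0 * counts["intensive"] / n

# Hypothetical school of 200 students.
recs = ["benchmark"] * 165 + ["strategic"] * 25 + ["intensive"] * 10
dibels_b, dibels_i = school_level_dibels(recs)
print(dibels_b, dibels_i)                # 82.5 5.0
print(dibels_b >= 80 and dibels_i <= 5)  # True: meets the 80/15/5 guideline
```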

Office Discipline Referrals (ODRs)

Office discipline referrals (ODRs) have been identified as a valid measure for evaluating

intervention effectiveness at the whole-school level (Irvin, Tobin, Sprague, Sugai & Vincent,

2004; Sugai, Sprague, Horner & Walker, 2000; Tobin & Sugai, 1999). They provide an indicator

of externalizing behavior problems. Most research on SWPBS examines major discipline

referrals (as opposed to minor discipline referrals) because these are the behaviors that are most

severe, requiring the most administrator time and student time away from the classroom. Schools

can use various data systems for collecting, storing, and generating reports on ODR data. The

School-wide Information System (SWIS; May et al., 2002) is one available system. MiBLSi

schools were required to use SWIS to create consistency across all project schools and because

SWIS was designed specifically for use within a multi-tiered model. SWIS is a web-based system

for recording, storing, and analyzing office discipline referral data (e.g. major ODRs, minor

ODRs, suspensions and expulsions). Schools using SWIS are trained by a SWIS facilitator in how

to use the web-based system effectively, categorize referral causes, and generate and analyze data reports. Schools enter ODR data into SWIS at least weekly. Developers of SWIS

strongly recommend that data be printed on a weekly or monthly basis to be shared with the

school leadership team and possibly the entire school staff. SWIS can be used to track an


individual’s office discipline referrals as well as other breakdowns by grade, school location, and time of day. School staff can also examine the types of behavior for which students are being referred to the office. These standard features of SWIS make it a more

reliable measure of ODRs than other informal procedures used by some schools for collecting

and storing ODR data.

The ODR variable for this study was “average number of ODRs per 100 students per

day.” The ODR variable reflects an entire school year of student data and scores are available to

MiBLSi each July. This ratio allows for the comparison of ODRs while accounting for school

size and slight variability in length of the school year. National data show that for elementary

schools, a typical number of ODRs per day per 100 students is 0.22 (SWIS, 2010).
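The ODR metric normalizes raw referral counts by enrollment and length of the school year; a sketch with hypothetical counts:

```python
# Sketch of "average major ODRs per 100 students per day," the metric
# MiBLSi used to compare schools of different sizes and year lengths.

def odr_rate(total_major_odrs, enrollment, school_days):
    return total_major_odrs / school_days / enrollment * 100

# Hypothetical school: 198 major referrals, 450 students, 180 days.
rate = odr_rate(total_major_odrs=198, enrollment=450, school_days=180)
print(round(rate, 2))   # 0.24, slightly above the 0.22 elementary
                        # median reported by SWIS (2010)
```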

Procedure

Outliers

Boxplots were created in SPSS for each set of data. Outliers were identified as schools

with scores falling more than one and a half times the inter-quartile range above the third quartile or below the first quartile. Outliers were transformed to prevent them from influencing the panel analyses. Extreme

high scores were reduced to the next highest score that was not also an outlier. Extreme low

scores were increased to the next lowest score that was not also an outlier. However, before

outliers were transformed, the original values were recorded to preserve a complete picture of schools’ range of scores. The boxplots and graphs of high and low scores

depicted in Figures 1-4 still include the outliers. The panel analyses were run using the

transformed data.
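The outlier transformation described above can be sketched with the standard library. Note that SPSS's quartile algorithm may differ slightly from `statistics.quantiles`, so the fence values here are illustrative.

```python
import statistics

# Sketch of the outlier transformation: scores beyond 1.5 * IQR from
# the quartiles are replaced with the nearest score that is not itself
# an outlier, mirroring the preparation for the panel analyses.

def transform_outliers(values):
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    inliers = [v for v in values if low <= v <= high]
    lo_repl, hi_repl = min(inliers), max(inliers)
    return [lo_repl if v < low else hi_repl if v > high else v
            for v in values]

scores = [10, 52, 55, 58, 60, 62, 65, 95]  # hypothetical PET-R totals
print(transform_outliers(scores))          # [52, 52, 55, 58, 60, 62, 65, 65]
```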

Missing Data


Longitudinal studies are commonly affected by missing data. Although MiBLSi had been

collecting data from participating schools, the overarching purpose of the project was professional development and school improvement, not research. Thus, MiBLSi had

more missing data than would have been expected of a project run for research purposes. This is

important to keep in mind when interpreting the results. With this data set, the missing data

points are potentially due to many factors. Unfortunately, it is not possible at this point to

determine the root causes of missing data. Missing data could represent multiple scenarios: (1)

schools are implementing effective practices, but not collecting data; (2) schools are

implementing effective practices and collecting data, but are not reporting data to the project; (3)

schools are not implementing effective practices and are not collecting data.

A breakdown of the percentage of data submitted by each cohort can be found in Table 4.

After reviewing Table 4, several trends in the missing data are apparent. First, the amount of data

submitted appears to decrease the longer schools are participating with the project. Second, more

student outcome data were submitted than systems / process data. Finally, the PBIS-TIC had the

lowest data submission rates. Although the PBIS-TIC is the fastest measure to complete, it is also

requested the most frequently from schools (3-4 times per school year). Given these trends, it is

highly unlikely that the data are missing completely at random. This is one reason, among others, that Generalized Estimating Equations were selected to analyze this longitudinal data set: they can handle missing data better than other types of statistical analyses, such as repeated measures ANOVA. More information is provided in Chapter 4.
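The practical difference can be illustrated with a toy panel (all scores hypothetical): repeated measures ANOVA requires complete cases, so any school with a missing year is dropped, whereas GEE can make use of every observed school-year.

```python
# Toy illustration of listwise deletion versus GEE's use of all
# available observations in an incomplete longitudinal data set.

panel = {
    "school_a": {2005: 62, 2006: 71, 2007: 80},
    "school_b": {2005: 55, 2007: 74},   # missing 2006
    "school_c": {2006: 68},             # joined the project late
}
years = [2005, 2006, 2007]

# Complete cases: schools with a score for every year.
complete_cases = [s for s, obs in panel.items()
                  if all(y in obs for y in years)]
# All observed school-years, regardless of gaps.
all_observations = sum(len(obs) for obs in panel.values())

print(len(complete_cases))   # 1 school survives listwise deletion
print(all_observations)      # 6 school-years remain usable by GEE
```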

Table 5 further illustrates the amount of data that schools submitted over time. There

were no schools in Cohorts 1-3 that had submitted all requested data. The least complete sets of data were the systems / process data. Further implications will be reviewed throughout Chapters 4

and 5.


CHAPTER 4

DATA ANALYSIS AND RESULTS

The following chapter provides information about the data analyses used to answer the

research questions and the results. Results are organized by research question. The first and third

research questions are broad and are therefore broken down into several sub-parts in order to

provide more structure to the results.

Research Question 1: To what extent did schools implement three-tier reading and

behavior systems with fidelity across time?

Hypothesis: Schools in a given cohort will have had a range of initial implementation

scores. Over time, schools will have made growth, with more growth occurring during the first few years of implementation. At least 75% of schools will have attained criterion-level

scores on the measures of implementation fidelity at least once. Schools might have been

more likely to attain criterion levels of implementation fidelity during year 2 or later.

• Question 1, Part A: What were the lowest and highest levels of implementation that

schools started with during their first year of participating with MiBLSi?

Figure 2 depicts the lowest and highest implementation scores that schools attained

during their first year of participation with MiBLSi. High and low scores for the PET-R, PBIS-

SAS and PBIS-TIC are all presented. Combined with the box plots in Figures 5-7, Figures 2-4

illustrate the wide range of scores during initial implementation. We see a large spread in initial

scores for all three measures of implementation fidelity for all cohorts. It is clear that schools

varied greatly in their implementation fidelity, or perceived implementation fidelity, during year

one, as hypothesized. With the exception of the highest initial score on the PBIS-SAS

from Cohort 2, all other highest initial implementation scores were at or above criterion.


• Question 1, Part B: How did schools score on measures of implementation fidelity across

time?

To answer this question, cross-year, multi-cohort box plots were created in SPSS (Figures

5-7). Box plots were selected because they most comprehensively depict the central tendency

(median), as well as the wide range of schools’ scores within a given year. Each box plot shows

the lowest value, the scores at the 25th, 50th, and 75th percentiles, and the highest value. Outlier schools are represented by circles that fall either below the bottom

whisker or above the top whisker. Extreme outlier schools are represented by stars that fall either

below the bottom whisker or above the top whisker. Each cohort’s data are clustered together,

starting with Cohort 1 on the left. Each graph includes a drawn-in blue line that shows the

criterion score (80% for the PET-R, 67% for the PBIS-SAS and 80% for the PBIS-TIC). Recall

that the criterion scores represent a minimum threshold for schools to be considered

implementing with fidelity.

Figure 5 displays schools’ overall scores on the PET-R over time. General trends in the

data show that the scores increased from year to year for cohorts 1-4. In years where a cohort’s

scores decreased, the rates of data submission were also lower (e.g., Cohort 1 in 2007 and 2009,

Cohort 3 in 2009). However, not all reduced rates of data submission resulted in lower PET-R

scores.

Figure 6 shows schools’ overall scores on the PBIS-SAS over time. General trends in the

data show that the scores increased over time. In years where a cohort’s scores decreased, it was

often the case that data submission rates were lower (e.g., Cohort 1 in 2009, Cohort 3 in 2009).

However, not all reduced rates of data submission resulted in lower PBIS-SAS scores.


Figure 7 shows schools’ overall scores on the PBIS-TIC over time. Cohorts 1-4 showed

large improvements between their first and second years of implementation. Compared to the

PET-R and PBIS-SAS, schools’ scores on the PBIS-TIC reached higher levels. Scores for Cohort

1 stayed roughly the same (above criterion) after 2007, whereas scores for Cohort 2 began declining.

Compared to the other two systems/process measures, there were also more extreme outlier schools with low scores on the PBIS-TIC.

All three box plots show a general trend of improving implementation scores over time, and for some measures the largest average growth occurred between years 1 and 2. Thus the hypothesis that schools would make more growth during the first few years of implementation was supported.

• Question 1, Part C: What were the most and least amounts of growth that schools made in

one year?

New variables were calculated to look at changes in scores between each year of

participation. Variables were calculated by subtracting one year’s score from the previous year’s

score. Figure 8 shows the most and least amounts of implementation growth seen from schools in

cohorts 1-4. The minimum and maximum growth scores were examined for each cohort. The

graphs in Figure 8 include just the lowest and highest growth scores per cohort and measure.

Each bar represents a single school. Data are not disaggregated by year, which means that the

numbers included in the graphs could represent growth between years 1 and 2, 2 and 3, 3 and 4,

or 4 and 5. The top graph depicts the most growth that a school made between single years. The

school that made the most growth from Cohorts 2-4 made larger gains than the school in cohort 1

with top growth scores. The bottom graph depicts the least amount of growth made between


single years. For the PBIS-SAS and PBIS-TIC, later cohorts included schools whose scores

dropped substantially between single years.
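The year-to-year change variables described above can be derived with a grouped difference. A hedged sketch using pandas (the column names and values are illustrative, not the study's actual variables):

```python
import pandas as pd

# Hypothetical long-format fidelity data: one row per school per year
df = pd.DataFrame({
    "school": ["A", "A", "A", "B", "B"],
    "year":   [1, 2, 3, 1, 2],
    "pet_r":  [45.0, 60.0, 72.0, 80.0, 70.0],
})

# Growth = this year's score minus the previous year's score, within each school
df = df.sort_values(["school", "year"])
df["growth"] = df.groupby("school")["pet_r"].diff()

# Most and least growth across all school-year pairs
print(df["growth"].max(), df["growth"].min())  # 15.0 -10.0
```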

• Question 1, Part D: Did schools experience statistically significant levels of

implementation growth between each year of participation?

Parts A through C of Question 1 show that cohorts’ average scores on the measures of

implementation fidelity increased over time and that there was wide variation in where schools

started and how much growth they made between years. The next step was to understand

whether the incremental improvements in implementation for each cohort as a whole were of

statistical significance. To answer this question, scores from the PET-R, PBIS-SAS and PBIS-TIC fidelity measures were analyzed using generalized estimating equations (GEE). GEE was the

most appropriate choice for analyzing this type of longitudinal data. GEE takes into account the

correlated nature of repeated measures and allows for a less biased estimate of the true

population parameters (Hardin & Hilbe, 2002). GEE also more readily compensates for missing

data, which has already been established as an issue for this study (Hardin & Hilbe, 2002). Other

methods of longitudinal data analysis require that cases with missing data be excluded from the

analyses. For this study, excluding schools with missing data would have dramatically decreased

the number of schools included in the statistical analyses. Therefore, GEE maximized the

amount of data that could be included.

Recall that the descriptive analyses presented up to this point included many outliers as

part of the graphs (box plots). They were included previously to present a true picture of the

range of scores that schools attained on the measures of implementation fidelity. However, for

the GEE analyses, the outliers were transformed to reduce their impact on the model (Pallant,

2005). High outliers were reduced to the highest score that was at or below the 95th percentile. Low outliers were increased to the lowest score that was at or above the 5th percentile.
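This adjustment amounts to winsorizing at the 5th and 95th percentiles. A close approximation in NumPy (clipping to the percentile values themselves rather than to the nearest observed score, which the study used):

```python
import numpy as np

def winsorize(scores, low=5, high=95):
    """Pull extreme values in to the 5th/95th percentile bounds."""
    s = np.asarray(scores, dtype=float)
    lo, hi = np.percentile(s, [low, high])
    return np.clip(s, lo, hi)

# Hypothetical scores with one low and one high outlier
adjusted = winsorize([2, 50, 55, 60, 65, 70, 99])
print(adjusted.min(), adjusted.max())
```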

The correlation matrix was used for selecting which continuous predictor variables to

include in the GEE analyses. Enrollment (number of students enrolled at a school) was not used

because it was highly correlated with schools' percentage of students eligible for free and reduced-price lunch (Lunch), and Lunch was more strongly correlated with the implementation and student outcome measures.

Three separate analyses were run, one with each of the implementation measures (i.e.,

PET-R, PBIS-SAS, PBIS-TIC) used as the dependent variable. Main effects were entered

simultaneously into the generalized estimating equations. All three analyses included the same

predictor / independent variables. Year was the primary predictor of interest. Lunch (the

percentage of students in a school who qualified for free or reduced lunch prices) was included in

the GEE analyses as a covariate to control for differences in the socio-economic composition of

schools. For each model the shared variance within cohorts was accounted for by entering in

school’s Cohort as a categorical predictor variable. In addition, interaction terms including the

Year variable were added: Year x Cohort and Year x Lunch. A statistical significance level of .05

was used for all analyses.

PET-R. A total of 378 cases were included in the GEE analysis that explored whether changes in PET-R scores over time were statistically significant. The overall model effect was significant (p < .001). Main effects for Year (p < .001), Cohort (p < .001), and Lunch (p < .05) were all significant. Schools with a higher percentage of students eligible for free and reduced lunch prices also had slightly higher PET-R scores (β = .163). The main effects for Year and


Cohort were not interpreted because a significant interaction between Cohort and Year (p < .001) was also found; that interaction is depicted in Figure 9.

PBIS-SAS. A total of 508 cases were included in the GEE analysis that explored whether changes in PBIS-SAS scores over time were statistically significant. The overall model effect was

significant (p < .001). Main effects for Year (p < .001) and Cohort (p < .001) also were

significant. The main effects were not interpreted because a significant interaction between

Cohort and Year (p < .001) was found. That interaction is depicted in Figure 10.

PBIS-TIC. A total of 418 cases were included in the GEE analysis that explored whether changes in PBIS-TIC scores over time were statistically significant. The overall model effect was

significant (p < .001). Main effects for Year (p < .001) and Cohort (p < .001) were also

significant. The main effects were not interpreted because a significant interaction between

Cohort and Year (p < .001) was found. That interaction is depicted in Figure 11.

The results of all pair-wise comparisons from post-hoc analyses are presented in Table 8.

For all cohorts and all measures, there were statistically significant mean differences in

implementation scores from the first year that data were collected to the last year that data were

collected. Thus, overall growth was significant for all cohorts and measures. In addition, for all

but one comparison, there were statistically significant improvements in scores from the first

year data were collected to the second year. Cohort 1 made statistically significant growth during

4 of 12 time frames (all measures included). Cohort 2 made statistically significant growth during

7 of 10 time frames (all measures included). Cohort 3 made statistically significant growth

during 4 of 8 time frames (all measures included). Cohort 4 made statistically significant growth

during 5 of 5 time frames (all measures included).


In summary, GEE analyses showed that there was an average effect for Year on schools’

implementation scores. The magnitude and direction of that effect varied across cohorts and years. Cohort 4, for example, had fewer years of data, but all of its year-to-year improvements

were statistically significant. It was hypothesized that there would be statistically significant

growth in implementation scores over time, but that changes in scores between the first two years of implementation would be larger than in later years. That hypothesis was supported.

• Question 1, Part E: How many schools attained criterion scores over time?

Schools’ actual scores on the implementation measures were transformed into binomial

variables. Scores at or above the criterion cuts were coded as “1.” Scores below the criterion cuts

were coded as “0.” Figures 12-14 summarize the percentage of schools in each cohort that

attained criterion scores on the PET-R, PBIS-SAS and PBIS-TIC over time. Recall that the PET-R

criterion score is 80%. The PBIS-SAS criterion score is 67%. The PBIS-TIC criterion score is

80%. The percentage of schools that attained criterion scores is presented in combination with

the percentage of schools that did not attain criterion scores and the percentage of schools that

did not submit data. Thus, each bar totals 100% of the schools in each cohort that were included

in this study.
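The recoding into binomial attainment variables is straightforward. A sketch with pandas, using the criterion cuts stated in the text and hypothetical school scores:

```python
import pandas as pd

# Criterion cuts from the text
CRITERIA = {"PET-R": 80, "PBIS-SAS": 67, "PBIS-TIC": 80}

# Hypothetical school-level fidelity scores (one row per school)
scores = pd.DataFrame({
    "PET-R":    [85.0, 72.0],
    "PBIS-SAS": [70.0, 50.0],
    "PBIS-TIC": [90.0, 79.0],
})

# 1 = score at or above criterion, 0 = below criterion
met = (scores >= pd.Series(CRITERIA)).astype(int)
print(met)
```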

The overall percentage of schools that attained criterion scores on the PET-R did not

fluctuate much over time for schools in Cohorts 1 and 2. Cohorts 3 and 4 showed an increasing

trend in percentage of schools attaining criterion scores on the PET-R. The largest jump in

percentage of schools attaining criterion scores on the PBIS-SAS occurred between years 1 and 2

for Cohorts 1, 3 and 4. Similarly, the largest jump in percentage of schools attaining criterion

scores on the PBIS-TIC occurred between years 1 and 2 for cohorts 1-4. Regardless of cohort and


measure, one trend is evident: the amount of missing data increased over time, making it

difficult to accurately examine longitudinal trends in the data.

Table 10 shows the percentage of schools in each cohort that attained criterion scores at

least once on the PET-R, PBIS-SAS and PBIS-TIC. Approximately 50% of schools in Cohorts 1-

4 attained criterion scores on the PET-R at least once. Between 36 and 69% of schools in Cohorts

1-4 attained criterion scores on the PBIS-SAS at least once. Between 58 and 92% of schools

attained criterion scores on the PBIS-TIC at least once. A smaller proportion of schools in Cohort 5 attained criterion scores; however, they also had fewer years of data to examine. Thus for all

Cohorts, but especially Cohort 5, the comparison of percentage of schools that attained criterion

scores at least once was based on different numbers of years that schools had been implementing

the program. It was hypothesized that at least 75% of schools would have attained criterion

scores at least once. For the most part, data show that fewer than 75% of schools attained

criterion at least once. Exceptions were Cohort 1 and 3 schools on the PBIS-TIC.

Research Question 2: How long were fully implemented three-tier reading and behavior

systems sustained?

Hypothesis: At least 75% of schools that attained criterion scores on the measures of

implementation fidelity will have sustained scores at or above criterion for at least 1 year.

At the time this study was proposed, I wanted to answer this question by identifying

schools that had attained criterion scores and following those schools through subsequent years.

Unfortunately, the amount of missing data greatly impacted my ability to conduct this analysis.

While most definitions of sustainability first require that implementation fidelity has been

established (Han & Weiss, 2005), for the purposes of presenting at least some data related to

sustainability, I took a different lens. Sustainability was defined as schools maintaining or


improving their implementation scores over time. I created graphs for Cohort 3 that show

individual schools’ scores over time. Cohort 3 was used as an example because at least three

years of data were potentially available and because the cohort size was large enough to see

trends, but not so large that the graphs were cluttered with too many data points. Only schools

with at least two data points were included in the graphs. Green dots and connecting lines

indicate that a school’s scores increased from one year to the next. Red dots and connecting lines

indicate that a school’s scores decreased from one year to the next. Black horizontal lines

represent the criterion scores on each graph. If a school’s score decreased from one year to the

next, but the second score was still above the criterion score for that measure, then the data are

presented in green.
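The green/red coding rule can be expressed as a small predicate. A sketch (the function name is mine, not the study's):

```python
def trend_color(prev_score, curr_score, criterion):
    """Green if the score improved, or if it dipped but stayed above criterion."""
    if curr_score >= prev_score or curr_score > criterion:
        return "green"
    return "red"

# A drop from 88 to 84 stays green when the criterion is 80
print(trend_color(88, 84, 80))
```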

Figure 15 shows that the majority of schools improved their scores on the PET-R between

years 1 and 2. We can also see that schools that started with lower scores in year one were able

to make gains in subsequent years, such that schools’ long term implementation did not seem to

be predicted by their initial scores. Approximately one third of schools in Cohort 3 attained

scores above 80 percent in year 2. During year 3, only 4 schools submitted data. Two schools

sustained or improved their PET-R scores from year 2 to year 3. PET-R scores for the other two

schools decreased between years 2 and 3. One decreased dramatically from 83% in year 2 to

15% in year 3. Figures 16 and 17 show that the majority of schools in Cohort 3 either maintained

or improved implementation of behavior supports during the first three years of participating

with MiBLSi, as revealed by scores on the PBIS-SAS and PBIS-TIC. Trends beyond year three

show more missing data and decreases in implementation scores (red lines) compared to the first

three years of implementation.


Because data were not examined as planned, the hypothesis that at least 75% of schools

that attained criterion scores will have sustained scores at or above criterion for at least one year

could not be directly answered.

Research Question 3: To what extent are student outcomes in reading and behavior

associated with scores on the implementation checklists?

Hypothesis: Reading implementation scores will predict student outcomes in reading.

Behavior implementation scores will predict student outcomes in behavior. The

combination of reading and behavior implementation measures will better predict student

outcomes than either set of measures on their own.

This third set of questions connects schools’ implementation data with their student

outcome data. The emphasis in this section is on examining the interrelationship. Therefore,

separate discussion of the student outcomes is not provided. Figures 18 and 19 depict student

outcomes based on DIBELS and ODRs data for each cohort over time.

• Question 3, Part A: Do scores on the reading implementation checklist (PET-R)

significantly predict student reading outcomes?

First, the correlation matrix provided in Table 6 was reviewed. The Pearson correlation

between DIBELSB and PET-R was .135 (p < .01). While the positive correlation was found to be

statistically significant, the magnitude was low (Cohen, 1988). The Pearson correlation between

DIBELSI and PET-R was -.077, which was even smaller than the correlation between the PET-R

and DIBELSB. The relation was non-significant.

Next, GEE analyses were used to answer Question 3, Part A for the same reasons already

explained for Question 1, Part D. Two separate analyses were run. I first used the percentage of

students who scored in the benchmark range on DIBELS (DIBELSB) as the outcome variable.


The second analysis used the percentage of students who scored in the intensive range on

DIBELS (DIBELSI) as the outcome variable. Main effects were entered simultaneously into the

generalized estimating equations. Score on the PET-R was the primary predictor of interest. Year

and Lunch (the percentage of students in a school who qualified for free or reduced lunch prices)

were also included in the GEE analyses as covariates to control for differences over time and the

socio-economic make-up of schools. For each model the shared variance within cohorts was

accounted for by entering in school’s Cohort as a predictor variable. In addition, the following

interaction terms were included: PET-R x Cohort, PET-R x Year, and PET-R x Lunch. A

statistical significance level of .05 was used for all analyses.

DIBELSB. A total of 373 cases were included in the GEE analysis that explored whether scores on the PET-R predicted reading outcomes. The overall effect of the model was statistically

significant (p < .001). After controlling for Year, Cohort and Lunch, PET-R scores still

significantly predicted the percentage of students who scored at benchmark on DIBELS (β =

.405, p < .05). For every 1-point increase in PET-R scores, the percentage of students at benchmark increased by .405 percentage points (e.g., a 10-point PET-R gain corresponds to roughly a 4 percentage-point gain in DIBELSB). In addition, there was a significant interaction between

Cohort and PET-R (p < .001). The Cohort x PET-R interaction effect on DIBELSB is complex to

interpret. For Cohorts 1 and 4, DIBELSB scores generally increase as PET-R scores increase. For Cohort 4, however, higher PET-R scores are related to higher DIBELS scores only until PET-R scores reach 60. When PET-R scores are in the 60-100 range, there is an inverse relation with DIBELSB. Namely, with higher PET-R scores, lower percentages of students scored in the

benchmark range. The relation between PET-R and DIBELSB is weak for Cohorts 3 and 5.

DIBELSI. The overall effect of the model was statistically significant (p < .001). After

controlling for Year, Cohort and Lunch, PET-R scores did not significantly predict the


percentage of students who scored at the intensive level on DIBELS. However, there was a

significant interaction between Cohort and PET-R (p < .05).

The hypothesis for Question 3, Part A was only partially supported.

• Question 3, Part B: Do scores on the behavior implementation checklists (PBIS-SAS and

PBIS-TIC) significantly predict student behavior outcomes?

First, the correlation matrix provided in Table 6 was reviewed. The Pearson correlation

between PBIS-SAS and ODRs was -.136 (p < .01). While the negative correlation was found to

be statistically significant, the magnitude was low (Cohen, 1988). The Pearson correlation

between PBIS-TIC and ODRs was -.125 (p < .01), slightly smaller in magnitude than the correlation between the PBIS-SAS and ODRs; both relations, while statistically significant, were weak.

Again, GEE analyses were used to answer this question. Average referrals per

100 students per day (ODRs) was used as the outcome variable. Two separate analyses were run.

The first used PBIS-SAS as the predictor variable of interest. The second used PBIS-TIC as the

predictor of interest. All other predictors and covariates were the same as for Question 3, Part A.

Using PBIS-SAS as predictor. A total of 271 cases were included in the GEE analysis. The overall model effect was not significant (p = .914). Although the correlation between the PBIS-SAS and ODRs was statistically significant (r = -.136, p < .05), the correlation coefficient was small.

Results from the GEE analysis showed that after controlling for Cohort, Year and Lunch,

schools’ PBIS-SAS did not significantly predict schools’ ODRs. However, there was a significant

interaction between Lunch and PBIS-SAS.


Using PBIS-TIC as a predictor. A total of 241 cases were included in the GEE analysis. The overall model was not significant (p = .079). After controlling for Cohort, Year and Lunch,

schools’ PBIS-TIC also did not significantly predict schools’ ODRs.

However, there was a significant interaction between Cohort and PBIS-TIC (p < .01). That interaction was not interpreted because the overall model was not significant. Chapter

5 includes a discussion about potential reasons why the overall model effects for behavior were

not significant. The hypothesis was not supported.

• Question 3, Part C: Does the combination of scores on the behavior and reading implementation checklists better predict student reading outcomes than scores on the reading implementation checklists alone?

The same GEE analyses as Question 3, Part A were run with the addition of PBIS-SAS

and PBIS-TIC as predictors.

DIBELSB. A total of 283 cases were included in the GEE analysis. Adding PBIS-SAS and PBIS-TIC as predictors along with PET-R, Year and Cohort did not result in an improvement to the model. The model as a whole was still significant (p < .001). There was not a significant

main effect for either behavior measure, nor was there a significant interaction term that included

either of the behavior measures. Thus, the combination of scores on the reading and behavior

implementation checklists did not better predict the percentage of students at benchmark on

DIBELS than the PET-R alone.

DIBELSI. When PBIS-SAS was added as a predictor along with PET-R, Year, Lunch and

Cohort, the overall model became non-significant (p = .609). The same was true when PBIS-TIC was added as a predictor: the overall model went from significant to non-significant (p =

.242). There was not a significant main effect for either behavior measure, nor was there a


significant interaction term that included either of the behavior measures. These results indicate

that the combination of scores on the reading and behavior implementation checklists did not

better predict the percentage of students in the intensive range on DIBELS than the PET-R alone.

The hypothesis was not supported.

• Question 3, Part D: Does the combination of scores on the reading and behavior implementation checklists better predict student behavior outcomes than scores on the behavior implementation checklists alone?

The same GEE analyses as Question 3, Part B were run with the addition of PET-R as a

predictor. A total of 176 cases were included in the analysis. When PET-R was added as a predictor along with PBIS-SAS, Year, Lunch, and Cohort, the overall model effect was still not significant (p = .072), but was an improvement over the model without PET-R included. Thus it appears that the

addition of PET-R scores strengthened the overall model effect. PET-R also emerged as a nearly

significant predictor (p = .053) of schools’ ODRs. There was also a significant interaction

between PET-R and Cohort (p < .001).

When PET-R was added as a predictor along with PBIS-TIC, Year, Lunch, and Cohort, there were too few cases with complete data to conduct the analysis.

These results provide a preliminary indication that the combination of scores on the reading and behavior implementation checklists lends to better prediction of student behavior outcomes than scores on the behavior checklists alone. The hypothesis was supported.

Additional Analyses

A series of one-way ANOVAs were run with the Cohort 3 data set to explore whether

there were differences in student outcomes depending on whether or not schools met criterion on

the implementation checklist. No statistically significant differences were found. For example,


the mean score for the 23 schools in Cohort 3 that met criterion on the PET-R in 2007 was one

point higher than the mean score for the 20 schools in Cohort 3 that did not meet criterion on the

PET-R in 2007. That one point difference was not statistically significant.
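A two-group one-way ANOVA of this kind can be run with SciPy. The group values below are hypothetical, chosen only to mirror a small, non-significant mean difference like the one reported:

```python
from scipy.stats import f_oneway

# Hypothetical school-level outcome scores for Cohort 3 schools that
# did vs. did not meet criterion on the PET-R
met_criterion = [78.0, 81.5, 75.2, 80.1, 79.4]
not_met = [77.0, 79.9, 74.8, 79.3, 78.5]

f_stat, p_value = f_oneway(met_criterion, not_met)
print(p_value > 0.05)  # a roughly one-point mean difference is non-significant here
```

With only two groups, the one-way ANOVA is equivalent to an independent-samples t-test (F = t²).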


CHAPTER 5

DISCUSSION

This study explored elementary schools’ implementation of an integrated three-tier model

of reading and behavior supports as they participated with a statewide Response to Intervention

(RtI) project. The purpose of the study was to examine the process of implementing an integrated

three-tier model and to explore the relation between implementation fidelity and student

outcomes. This final chapter of the dissertation will explore the implications of this study’s

findings for schools, for large-scale RtI projects, and for the fields of school psychology and

education. Limitations and recommendations for future research will be discussed.

Summary of Results and Implications

Research Question 1: To what extent did schools implement three-tier reading and

behavior systems with fidelity across time?

Hypothesis: Schools in a given cohort will have had a range of initial implementation

scores. Over time, schools will have made growth, with more growth occurring between the

first few years of implementation. At least 75% of schools will have attained criterion level

scores on the measures of implementation fidelity at least once. Schools might have been

more likely to attain criterion levels of implementation fidelity during year 2 or later.

This study found that schools’ scores on measures of implementation fidelity varied

greatly both initially and over time. This finding was consistent with my first hypothesis that

schools would vary in their initial implementation and the amount of time it took to implement

an integrated three-tier model. Some schools began their work with MiBLSi with very low levels

of implementation fidelity established. Others began already having attained criterion levels of

implementation fidelity. Over time, most schools improved their implementation of an integrated


three-tier model of behavior and reading supports. However, there were also schools whose

implementation scores decreased from year to year. As a group, the implementation

improvements that schools made from year to year were statistically significant. This finding

confirms what many other publications on three-tier models have reported using only descriptive

analyses (McIntosh, Chard, Boland, & Horner, 2006; Sprague et al., 2001). The more research

studies are able to report outcomes using statistical analyses and rigorous research designs, the

better consumers will be able to select which research-based practices and programs to

implement.

The general trend of improvements in implementation fidelity over time is promising.

Results revealed that schools were able to make large gains between the first and second years of

participation and that many schools continued to improve in subsequent years. Yet, over several

years of participation, only around 30% of schools were able to attain criterion scores on the

implementation measures during any given year. It was hypothesized that 75% of schools would

have attained criterion scores on the implementation measures at least once. This hypothesis was

not supported. Higher percentages of schools were able to attain criterion scores on the PBIS-

TIC, but only around 50% of schools attained criterion scores on the other implementation

measures. There may be multiple explanations for these findings. First, it is unclear what the

missing data might have revealed had they been available. Second, the results could be an

indicator that schools need more or different types of support to implement an integrated three-

tier model with fidelity. Results from this study do appear to align with the suggestion that

schools need between 2 and 4 years to fully implement a new practice (Fixsen et al., 2005). The 2

to 4 year estimate tends to refer to a singular practice or program. Therefore, it is not clear how

much longer it might take to fully implement a new practice alongside other new initiatives or


school changes. Recall that MiBLSi supports schools in the implementation of a three-tier model

that is dually focused on behavior and reading. Other practices and programs upon which the 2-4

year rule of thumb is based, tend to have a more narrow focus (National Technical Assistance

Center on Positive Behavioral Interventions and Supports, 2010). However, the fact that many

schools were not able to attain criterion-level scores during their first three years of participation

is concerning because after three years, formal support is reduced.

Results from Research Question 1 yield several implications for practice. First, the range

of initial implementation scores suggests that large-scale initiatives like MiBLSi should be

prepared to work with schools at different levels of readiness. The variation in schools’ ability to

improve implementation fidelity over time suggests that schools are differentially equipped to

implement an integrated three-tier model with fidelity. From this study, it is not clear why some

schools made significant growth over time and were able to sustain it, and why some schools

made very little implementation improvement, and in some cases implementation scores got

worse over time. In other words, we do not know the type and intensity of support that schools

received beyond the core training sequence. However, many schools were able to attain criterion

levels of implementation, which suggests that implementing to criterion is possible for schools

that work with MiBLSi, and potentially other large-scale RtI grants. The fact that not all schools,

or even the majority of schools, attained scores at or above criterion on the PET-R and the PBIS-

SAS during a 3-5 year time period might lead one to consider what actions and policies would

need to be put in place in order for all schools to implement with high levels of fidelity.


Research Question 2: How long were fully implemented three-tier reading and behavior

systems sustained?

Hypothesis: At least 75% of schools that attained criterion scores on the measures of

implementation fidelity will have sustained scores at or above criterion for at least 1 year.

Unfortunately, I was not able to fully explore whether schools that attained criterion

scores were able to sustain high levels of implementation fidelity in subsequent years. Several

attempts to report findings based on this question resulted in a very complex explanation which

was confounded by the different points at which schools initially attained criterion scores and the

fact that not all data were available for those schools during subsequent years.

Using a modified definition of sustainability, the group of schools from Cohort 3 was used to demonstrate that most schools either improved or sustained their implementation scores

during the first 2-3 years of participation. After that, more schools showed decreases in

implementation fidelity and in large part, did not submit implementation data to the MiBLSi

project. The progress that schools make initially could be an indicator that the MiBLSi training

sequence and local supports are effective for helping schools begin implementing an integrated

three-tier model. It may be beneficial to further explore what happens after the first year or two that

causes implementation scores to drop for some schools. It could be that there has been staff

turnover, which might mean that different people are completing the measures and/or that the

school staff is dramatically different from when the school first began working with MiBLSi.

Another possible explanation for the decreasing levels of implementation fidelity in later years

might be that developing and sustaining universal systems of support becomes less of a priority.

During year 2, the MiBLSi training sequence shifts from universal systems and practices to

emphasizing secondary and tertiary level supports. As this shift in training takes place, schools


may exert less effort toward the development and sustainability of the universal supports. It is

also possible that after the first few years, schools are faced with other competing initiatives that

detract from their ability to sustain a three-tier model.

Research Question 3: To what extent were student outcomes in reading and behavior

associated with scores on the implementation checklists?

Hypothesis: Reading implementation scores will predict student outcomes in reading.

Behavior implementation scores will predict student outcomes in behavior. The

combination of reading and behavior implementation measures will better predict student

outcomes than either set of measures on their own.

Results of the GEE analyses showed that the strongest link between implementation

fidelity and student outcomes was in the area of reading. More specifically, the fidelity measure

(PET-R) was better able to predict the percentage of students who scored at benchmark than the

percentage of students who scored in the intensive range on DIBELS measures. However,

the relation between PET-R scores and reading outcomes differed by cohort. The strongest positive relation between PET-R scores and the percentage of students at benchmark on DIBELS was for Cohort 1; other cohorts actually showed an inverse relation during certain years. These results should therefore be interpreted with caution. In addition, the overall correlations between the PET-R and DIBELS variables were low.

Neither of the behavior implementation measures was a significant predictor of ODRs when controlling for cohort, year, and the percentage of students who qualified for free or reduced-price lunch. A few possible reasons for the absence of this relation are presented next; these are offered for discussion and are not supported by further data analyses. First, more DIBELS data than ODR data were available from schools. This could have made the reading


analyses more accurate and generalizable. Second, the emphasis of the PET-R is on a schoolwide

reading model, primarily at the universal level of support. It may be that implementation of

universal reading systems cannot be directly linked to the performance of students who are

struggling readers (as measured by percentage of students who scored in the intensive range on

DIBELS). Similarly, the PBIS-TIC and PBIS-SAS focus on universal behavior supports. The

ODR variable takes into account all discipline referrals and the majority of discipline referrals

tend to come from a smaller subset of the whole school population. It is possible that

implementation of only universal supports may not be as strongly connected to the behavioral

performance of students with high numbers of discipline problems. Another thought is that there

may be a time-lag between when schools make changes in implementation (as reflected on the

measures of implementation fidelity) and when those initial changes make an impact on student

outcomes. It is possible that comparing implementation at year 2 with student outcomes in year 3

or 4 might yield a better model.
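The time-lag idea could be operationalized by shifting each school's implementation scores forward one year before modeling. The sketch below uses hypothetical data and column names to pair year-t fidelity with year t+1 outcomes.

```python
import pandas as pd

# Hypothetical long-format data: one row per school per year.
df = pd.DataFrame({
    "school": ["A", "A", "A", "B", "B", "B"],
    "year":   [2006, 2007, 2008, 2006, 2007, 2008],
    "petr":   [60, 78, 85, 55, 70, 80],       # implementation score
    "dibels_b": [45, 50, 58, 40, 44, 52],     # % of students at benchmark
})

# Shift each school's implementation score forward one year so that
# year-t fidelity is paired with the following year's outcomes.
df = df.sort_values(["school", "year"])
df["petr_lag1"] = df.groupby("school")["petr"].shift(1)

# Rows with a lagged predictor can then enter a lagged outcomes model.
print(df.dropna(subset=["petr_lag1"]))
```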

Because MiBLSi supports implementation of an integrated three-tier model, there was the

unique opportunity to explore the potential effects of reading systems on behavioral outcomes

and behavioral systems on reading outcomes. The combination of reading and behavior

implementation did not improve the ability to predict student reading outcomes (DIBELS).

However, the combination of reading and behavior implementation scores yielded stronger

model effects for predicting behavioral outcomes (ODRs). Note that PET-R scores on their own did not significantly predict ODRs, nor was that overall model statistically significant. By combining the behavior and reading implementation data, however, the overall model became better at predicting ODRs. This suggests that there was some type of link between schools’ reading and

behavior systems and schools’ outcomes. This relation could be explored further with a more


complete data set. Findings provide some additional support for previous studies that found

improved behavioral outcomes when schools implemented RtI for both reading and behavior

support (Stewart, Benner, Martella, & Marchand-Martella, 2007).

Limitations

The implementation data in this study may be biased because they are based on self-

reports. School staff may have an inaccurate understanding of the implementation of behavior

and reading systems in their buildings or be motivated to respond to items in a socially

acceptable way (Anastasi & Urbina, 1998). Further, it is not possible to determine why scores

fluctuate between years. Scores may reflect a change in implementation. Yet, they may also

reflect a change in the understanding of items by individuals who are completing the measures.

For example, implementers often report that they can become more self-critical of their systems

and practices over time because they better understand what full implementation looks like.

Future studies should include more objective measures of implementation, such as the School-

wide Evaluation Tool (Sugai, Lewis-Palmer, Todd & Horner, 2001) and Benchmarks of Quality

(Kincaid, Childs & George, 2005). The PBIS-SAS, PBIS-TIC and PET-R are more commonly

used for the purposes of self-assessment and action planning. Thus, the technical properties of

these tools have not been thoroughly evaluated to date. While this is a notable limitation of the

current study, the use of the PBIS-SAS, PBIS-TIC and PET-R was intentional. Across the United

States, schools are using these very tools or similar, self-developed tools to measure and support

implementation of three-tier models. Thus, it was of interest to examine how schools performed

on these measures over time and whether they could be used to predict school-wide student

outcomes. This study provided in-depth information about schools’ performance on the measures

over time. At this point, it is not clear whether the relation (or non-relation) between


implementation scores and student outcomes should be attributed to adequacy of the measures

themselves, how schools used the measurement tools, a combination of the two, or other factors.

Although improvements in implementation were found to be statistically significant, it is not known whether those improvements were clinically, or practically, significant.

Staff from participating schools frequently report that they see MiBLSi and the RtI framework in

general making a positive impact in their schools. However, those comments are informal and

not directly connected to this study.

The issue of missing data has been discussed at several points throughout this

dissertation. Over time, the amounts of data that schools submitted decreased, particularly for the

systems / process measures. For all three measures, increasing amounts of missing data

prevented a complete picture of implementation fidelity over time. Although generalized

estimating equations are robust in the context of missing data, a more complete data set is always

ideal.

An experimental design was not used in this study. Results are therefore limited by the

fact that a control group was not included and that pre-intervention data were not available.

Therefore, results should be interpreted with caution. The student outcomes and improvements in

implementation fidelity cannot be directly or solely attributed to schools’ participation with

MiBLSi.

Future Research

While this study examined the implementation of an integrated three-tier model in more

detail than has previously been published, several questions remain unanswered. For example,

this study provided information about how well schools implemented an integrated three-tier

model, but the study did not shed light on how schools were able to do so. A great deal of


research is being conducted on the factors that facilitate implementation fidelity (Fixsen, Naoom,

Blase, Friedman & Wallace, 2005; Mihalic, Irwin, Fagan, Ballard & Elliott, 2004). Examination

of these factors was beyond the scope and intentions of this study. However, understanding the

causal mechanisms underlying school-wide implementation of three-tier models is a logical next

step. Future research might empirically examine how the full science of implementation (i.e.,

drivers, continuous feedback cycles, and implementation stages) can be used to improve

implementation quality, sustainability and impact on student outcomes.

Future research might also explore in greater depth the factors that allowed some schools

to be very successful and the barriers that prevented some schools from realizing high levels of

implementation fidelity and meaningful improvements in student outcomes. One method of

exploring this question might be by analyzing subscale scores from the implementation

measures. It is possible that by only looking at schools’ overall score, critical implementation

elements were masked. Within each of the fidelity measures, some items and subscales could be

better at predicting student outcomes than others.

This study examined implementation based on measures primarily used to evaluate

universal or Tier 1 systems. Thus, this study did not provide information about schools’

implementation of Tier 2 and 3 systems and the impact of implementing those systems on

student outcomes. Future work might examine implementation at all three tiers. More tools are

currently being created, piloted, and adopted for use. MiBLSi schools are beginning to use the

Benchmarks for Advanced Tiers (Anderson et al., 2010) and the project is also developing

additional tools for measuring implementation fidelity of literacy systems and practices. A future

study examining the impact of having multiple tiers implemented with fidelity would provide a

much deeper understanding of the full effects of using a prevention framework. The application


of an experimental research design could be very beneficial for evaluating the impact of

implementing an integrated three-tier model. To date, no studies have explored the effects of

implementing an integrated three-tier model through a randomized controlled trial, which is the gold standard for evaluating intervention / program efficacy.

When this study was first proposed, I was working with MiBLSi as a practicum student.

Since then, I completed part of my internship in School Psychology with MiBLSi. The following

year, I was hired as a full time employee. I now serve as the evaluation coordinator for the

project. Having a history with the project and now being employed by the grant, I have a vested

interest in helping the project to succeed. However, I made every effort to remain unbiased in my dissertation work. As part of my continued work with MiBLSi, I strive to understand what the data suggest and to use that knowledge to improve systems of support for Michigan schools. This dissertation has been extremely valuable for identifying some of the project’s strengths and areas of weakness. With this information, we are now in a better position to improve our evaluation

systems and supports for schools.

One example of using data for decision-making was related to the amounts of missing

data. The scope of the problem was brought to the attention of the project directors, regional

coordinators, trainers, coaches and schools. We then put into place several interventions that we

thought might help increase rates of data submission. First, we began presenting more project

data related to implementation fidelity so that schools could see how the data were being used

and how they could use it for planning and decision making in their own schools. Second, we

reviewed the data collection schedule and reworked it to be less redundant, so that the most meaningful data were collected at critical times during schools’ participation with MiBLSi.

It was hypothesized that the more logical the sequence, the more likely schools would be to


complete the measures as requested. Next, the measurement schedules were published on the

project website for participating schools to access at any time. The schedule for the entire year

was posted at the beginning of the school year and schools were given a corresponding

assessment manual that contained the schedule, information about how to collect the data and

submit it, and blank copies of the measures during the months when they would need to

complete them. Current work is also underway to develop and validate a set of systems / process

tools which will allow for complete examination of implementation fidelity of reading and

behavior supports at Tiers 1, 2, and 3. These are some examples of ways that research can inform

practice and how data can be used within a problem-solving framework.


Table 1

Critical Features of Public Health, Response to Intervention and Schoolwide Positive Behavior Supports Models

Public Health: Emphasis on prevention in three levels; Use of evidence-based practices and interventions; Data-based decision making; Focus on strengthening positive behavior; School-community collaboration; Examination of systems level processes

Response to Intervention: Use a multi-tier model of service delivery; Intervene early; Use research based, scientifically validated interventions/instruction; Use data to make decisions; Monitor student progress to inform instruction; Use assessments for screening, diagnosing and progress monitoring; Use a problem-solving method to make decisions; We can effectively teach all children

Schoolwide Positive Behavior Supports: A prevention-focused continuum of support; Conceptually sound and empirically supported practices; Data-based decision making; Proactive instructional approaches to teaching and improving social behaviors; Systems change to support effective practices


Table 2 Schools Included in the Study

| Cohort | Date Training Started | Schools Included in the Study | Average Enrollment (SD) | Average % of Students Eligible for Free or Reduced Price Lunch (SD) |
|--------|----------------------|-------------------------------|-------------------------|---------------------------------------------------------------------|
| 1 | January 2004 | 13 | 377 (148) | 50 (20) |
| 2 | January 2005 | 25 | 356 (133) | 60 (25) |
| 3 | January 2006 | 44 | 361 (134) | 52 (21) |
| 4 | January 2007 | 85 | 354 (104) | 51 (21) |
| 5 | August 2008 | 71 | 379 (141) | 48 (29) |
| Total | | 238 | 361 | 52 |


Table 3 Data Availability by Year

| Measure | 2004 | 2005 | 2006 | 2007 | 2008 | 2009 |
|---------|------|------|------|------|------|------|
| PET-R (May) | 1 | 1 2 | 1 2 3 | 1 2 3 | 4 | 1 2 3 4 5 |
| PBIS-SAS (May) | | 1 2 | 1 2 3 | 1 2 3 4 | 1 2 3 4 | 1 2 3 4 5 |
| PBIS-TIC (May) | | 1 2 | 1 2 3 | 1 2 3 4 | 1 2 3 4 | 1 2 3 4 5 |
| DIBELS (May) | 1 | 1 2 | 1 2 3 | 1 2 3 4 | 1 2 3 4 | 1 2 3 4 5 |
| ODRs (June) | | 1 | 1 2 | 1 2 3 | 1 2 3 4 | 1 2 3 4 |

Numbers in the cells above represent cohorts (e.g., “1” means “cohort 1”).


Table 4 Percentage of Schools that Submitted Data

| Cohort | Year | PET-R | PBIS-SAS | PBIS-TIC | DIBELS | ODRs |
|--------|------|-------|----------|----------|--------|------|
| 1 (n = 13) | 2004 | 100% | n/a | n/a | 92% | n/a |
| | 2005 | 92% | 69% | 38% | 100% | 100% |
| | 2006 | 77% | 62% | 54% | 100% | 100% |
| | 2007 | 69% | 46% | 77% | 100% | 100% |
| | 2008 | n/a | 62% | 62% | 100% | 100% |
| | 2009 | 15% | 38% | 23% | 77% | 85% |
| | average | 71% | 55% | 51% | 95% | 97% |
| 2 (n = 25) | 2005 | 84% | 76% | 72% | 92% | n/a |
| | 2006 | 88% | 76% | 80% | 100% | 92% |
| | 2007 | 76% | 64% | 72% | 100% | 88% |
| | 2008 | n/a | 64% | 52% | 100% | 84% |
| | 2009 | 0% | 28% | 16% | 92% | 68% |
| | average | 62% | 62% | 58% | 97% | 83% |
| 3 (n = 44) | 2006 | 96% | 84% | 59% | 98% | n/a |
| | 2007 | 98% | 71% | 80% | 100% | 86% |
| | 2008 | n/a | 80% | 59% | 100% | 91% |
| | 2009 | 9% | 34% | 36% | 95% | 77% |
| | average | 68% | 68% | 58% | 98% | 85% |
| 4 (n = 85) | 2007 | n/a | 94% | 69% | 91% | n/a |
| | 2008 | 98% | 85% | 71% | 100% | 82% |
| | 2009 | 68% | 67% | 55% | 100% | 86% |
| | average | 83% | 82% | 65% | 97% | 84% |
| 5 (n = 71) | 2009 | 56% | 96% | 61% | 97% | n/a |

n/a: no data were available because schools were not scheduled to submit it at that date


Table 5 Schools with Complete Sets of Data

| Measure | Cohort 1 (n = 13) | Cohort 2 (n = 25) | Cohort 3 (n = 44) | Cohort 4 (n = 85) | Cohort 5 (n = 71) |
|---------|-------------------|-------------------|-------------------|-------------------|-------------------|
| PET-R | 2 (15%) | 0 (0%) | 2 (5%) | 58 (68%) | n/a |
| PBIS-SAS | 1 (8%) | 3 (12%) | 2 (5%) | 48 (57%) | n/a |
| PBIS-TIC | 0 (0%) | 2 (8%) | 2 (5%) | 27 (32%) | n/a |
| DIBELS | 10 (77%) | 23 (92%) | 42 (96%) | 77 (91%) | n/a |
| ODRs | 13 (100%) | 16 (64%) | 36 (82%) | 67 (79%) | n/a |
| All Systems / Process Measures | 0 (0%) | 0 (0%) | 0 (0%) | 16 (19%) | 23 (32%) |
| All Student Outcome Measures | 8 (62%) | 13 (52%) | 21 (48%) | 59 (69%) | 59 (83%) |
| All Measures | 0 (0%) | 0 (0%) | 0 (0%) | 9 (11%) | 18 (25%) |

PET-R: Planning and Evaluation Tool for Effective Schoolwide Reading Programs
PBIS-SAS: Positive Behavioral Interventions and Supports Self Assessment Survey
PBIS-TIC: Positive Behavioral Interventions and Supports Team Implementation Checklist
DIBELS: Dynamic Indicators of Basic Early Literacy Skills
MEAP: Michigan Educational Assessment Program
ODRs: Office Discipline Referrals
Systems / Process Measures: PET-R, PBIS-SAS, PBIS-TIC
Student Outcome Measures: DIBELS, MEAP, ODRs


Table 6 Correlation Matrix for Continuous Variables

Cells show r, with N in parentheses.

| | PET-R | PBIS-TIC | PBIS-SAS | ODRs | DIBELSB | DIBELSI | Enrollment | Lunch |
|---|---|---|---|---|---|---|---|---|
| PET-R | 1 (378) | .388*** (246) | .306*** (286) | .091 (231) | .135** (373) | -.077 (373) | -.091 (378) | .145** (378) |
| PBIS-TIC | | 1 (418) | .702*** (348) | -.125 (241) | .263*** (411) | -.243*** (411) | -.045 (418) | -.025 (418) |
| PBIS-SAS | | | 1 (508) | -.136 (271) | .255*** (496) | -.243*** (496) | -.049 (508) | -.057 (508) |
| ODRs | | | | 1 (401) | -.299*** (396) | .386*** (396) | -.026 (401) | .401*** (401) |
| DIBELSB | | | | | 1 (684) | -.903*** (684) | .088* (684) | -.364*** (684) |
| DIBELSI | | | | | | 1 (684) | -.087* (684) | .409*** (684) |
| Enrollment | | | | | | | 1 (705) | -.316*** (705) |
| Lunch | | | | | | | | 1 (705) |

***. Correlation is significant at the .001 level (2-tailed)
**. Correlation is significant at the .01 level (2-tailed)
*. Correlation is significant at the .05 level (2-tailed)


Table 7 PET-R, PBIS-SAS and PBIS-TIC Sample Sizes and Mean Scores

| Cohort | Year | PET-R n | PET-R µ | PBIS-SAS n | PBIS-SAS µ | PBIS-TIC n | PBIS-TIC µ |
|--------|------|---------|---------|------------|------------|------------|------------|
| 1 | 2004 | 13 | 61.23 | n/a | n/a | n/a | n/a |
| 1 | 2005 | 12 | 68.25 | 9 | 51.78 | 5 | 64.40 |
| 1 | 2006 | 10 | 77.60 | 8 | 65.63 | 7 | 86.57 |
| 1 | 2007 | 9 | 79.33 | 6 | 57.17 | 10 | 92.00 |
| 1 | 2008 | n/a | n/a | 8 | 67.75 | 8 | 91.50 |
| 1 | 2009 | 2 | 85.00 | 5 | 77.20 | 3 | 92.33 |
| 2 | 2005 | 21 | 60.43 | 19 | 30.42 | 18 | 42.56 |
| 2 | 2006 | 22 | 74.64 | 19 | 53.37 | 20 | 73.45 |
| 2 | 2007 | 19 | 79.95 | 16 | 61.25 | 18 | 87.56 |
| 2 | 2008 | n/a | n/a | 16 | 59.06 | 13 | 77.38 |
| 2 | 2009 | 0 | n/a | 7 | 71.14 | 4 | 72.00 |
| 3 | 2006 | 42 | 59.76 | 37 | 32.62 | 26 | 45.73 |
| 3 | 2007 | 43 | 78.00 | 31 | 60.68 | 35 | 90.06 |
| 3 | 2008 | n/a | n/a | 35 | 68.83 | 26 | 87.96 |
| 3 | 2009 | 4 | 62.75 | 15 | 70.13 | 16 | 89.50 |
| 4 | 2007 | n/a | n/a | 80 | 29.50 | 59 | 59.61 |
| 4 | 2008 | 83 | 64.62 | 72 | 58.89 | 60 | 76.85 |
| 4 | 2009 | 58 | 79.41 | 57 | 64.77 | 47 | 85.43 |
| 5 | 2009 | 40 | 61.58 | 68 | 48.82 | 43 | 63.16 |

Table 7 corresponds with the PET-R, PBIS-SAS and PBIS-TIC box plots depicted in Figures 5-7.
PET-R: Planning and Evaluation Tool for Effective Schoolwide Reading Programs
PBIS-SAS: Positive Behavioral Interventions and Supports Self Assessment Survey
PBIS-TIC: Positive Behavioral Interventions and Supports Team Implementation Checklist


Table 8 DIBELS and ODRs Sample Sizes and Mean Scores

| Cohort | Year | DIBELSB n | DIBELSB µ | DIBELSI n | DIBELSI µ | ODRs n | ODRs µ |
|--------|------|-----------|-----------|-----------|-----------|--------|--------|
| 1 | 2004 | 12 | 39.92 | 12 | 32.67 | n/a | n/a |
| 1 | 2005 | 13 | 48.85 | 13 | 27.85 | 13 | .69 |
| 1 | 2006 | 13 | 55.92 | 13 | 22.62 | 13 | .63 |
| 1 | 2007 | 13 | 59.31 | 13 | 19.77 | 13 | .59 |
| 1 | 2008 | 13 | 61.46 | 13 | 18.62 | 13 | .47 |
| 1 | 2009 | 10 | 62.20 | 10 | 17.30 | 11 | .58 |
| 2 | 2005 | 23 | 43.39 | 23 | 30.22 | n/a | n/a |
| 2 | 2006 | 25 | 49.92 | 25 | 26.60 | 23 | .54 |
| 2 | 2007 | 25 | 52.52 | 25 | 24.28 | 22 | .48 |
| 2 | 2008 | 25 | 57.00 | 25 | 22.48 | 21 | .53 |
| 2 | 2009 | 23 | 58.43 | 23 | 19.17 | 17 | .76 |
| 3 | 2006 | 43 | 47.35 | 43 | 27.49 | n/a | n/a |
| 3 | 2007 | 44 | 52.86 | 44 | 23.52 | 38 | .51 |
| 3 | 2008 | 44 | 56.80 | 44 | 20.36 | 40 | .44 |
| 3 | 2009 | 42 | 58.10 | 42 | 20.05 | 34 | .46 |
| 4 | 2007 | 77 | 51.91 | 77 | 23.58 | n/a | n/a |
| 4 | 2008 | 85 | 56.32 | 85 | 20.40 | 70 | .37 |
| 4 | 2009 | 85 | 60.67 | 85 | 17.47 | 73 | .35 |
| 5 | 2009 | 69 | 54.96 | 69 | 21.14 | n/a | n/a |

Table 8 corresponds with Figures 18 and 19.
DIBELSB: Percentage of students who scored in the Benchmark range on the Spring Dynamic Indicators of Basic Early Literacy Skills
DIBELSI: Percentage of students who scored in the Intensive range on the Spring Dynamic Indicators of Basic Early Literacy Skills
ODRs: Average number of major discipline referrals per 100 students per day
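The ODR metric defined above can be computed directly. The brief sketch below is illustrative only — the function name and the example figures are hypothetical, not drawn from the study’s data.

```python
def odr_rate(major_referrals, enrollment, school_days):
    """Average major office discipline referrals per 100 students per day."""
    return major_referrals / enrollment / school_days * 100

# Hypothetical school: 432 major referrals, 400 students, 180 school days.
print(round(odr_rate(432, 400, 180), 2))  # 0.6
```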


Table 9 Pair-wise Comparisons of Mean Score Differences on the Implementation Measures

Cells show the mean difference, with the standard error in parentheses.

PET-R
| Years | Cohort 1 | Cohort 2 | Cohort 3 | Cohort 4 |
|-------|----------|----------|----------|----------|
| 2004-2005 | 6.88 (4.27) | n/a | n/a | n/a |
| 2005-2006 | 10.03** (3.47) | 14.16** (4.59) | n/a | n/a |
| 2006-2007 | 1.41 (3.68) | 5.18* (2.56) | 18.20*** (2.21) | n/a |
| 2007-2009 | -.21 (2.04) | n/a | -5.27 (6.23) | n/a |
| 2008-2009 | n/a | n/a | n/a | 14.49*** (2.12) |
| Between First and Last Years | 18.11*** (2.96) | 19.35*** (5.31) | 12.92* (6.57) | 14.39*** (2.12) |

PBIS-SAS
| Years | Cohort 1 | Cohort 2 | Cohort 3 | Cohort 4 |
|-------|----------|----------|----------|----------|
| 2005-2006 | 15.93* (6.58) | 22.70*** (3.31) | n/a | n/a |
| 2006-2007 | -5.52 (10.44) | 7.77* (3.84) | 28.06*** (3.01) | n/a |
| 2007-2008 | 6.11 (9.25) | -.64 (4.53) | 8.16** (2.43) | 29.45*** (1.97) |
| 2008-2009 | 9.86** (3.46) | 14.13** (4.53) | 1.08 (2.92) | 5.94** (1.95) |
| Between First and Last Years | 26.38*** (5.59) | 43.86*** (3.63) | 37.30*** (3.11) | 35.39*** (1.88) |

PBIS-TIC
| Years | Cohort 1 | Cohort 2 | Cohort 3 | Cohort 4 |
|-------|----------|----------|----------|----------|
| 2005-2006 | 25.87*** (4.58) | 30.45*** (4.71) | n/a | n/a |
| 2006-2007 | 5.16 (2.8) | 14.40*** (3.89) | 43.03*** (4.13) | n/a |
| 2007-2008 | -4.55 (3.25) | -7.41 (5.24) | -2.40 (3.91) | 16.88*** (2.95) |
| 2008-2009 | 8.83 (7.98) | 17.15 (10.76) | 1.74 (5.62) | 8.90** (2.99) |
| Between First and Last Years | 35.31*** (9.30) | 54.59*** (11.14) | 42.37*** (6.20) | 25.77*** (2.42) |


Table 10 Percentage of Schools that Attained Criterion Scores at Least Once

| Measure | Cohort 1 (N = 13) | Cohort 2 (N = 25) | Cohort 3 (N = 44) | Cohort 4 (N = 85) | Cohort 5 (N = 71) |
|---------|-------------------|-------------------|-------------------|-------------------|-------------------|
| PET-R | 7 (54%) | 14 (56%) | 24 (55%) | 46 (54%) | 6 (8%) |
| PBIS-SAS | 9 (69%) | 9 (36%) | 26 (59%) | 40 (47%) | 10 (14%) |
| PBIS-TIC | 12 (92%) | 18 (72%) | 38 (86%) | 49 (58%) | 8 (11%) |


Figure 1 Conceptual Framework

[Flow diagram: Planned Intervention (School-wide Positive Behavior Supports; Response to Intervention for Reading) → Actual Implementation (Scores on Implementation Checklists) → Student Outcomes (Office Discipline Referrals; Performance on Curriculum-Based Literacy Measures)]


Figure 2 PET-R Highest and Lowest Implementation Scores during Schools’ First Year of Participation with MiBLSi


Figure 3 PBIS-SAS Highest and Lowest Implementation Scores during Schools’ First Year of Participation with MiBLSi


Figure 4 PBIS-TIC Highest and Lowest Implementation Scores during Schools’ First Year of Participation with MiBLSi


Figure 5 Planning and Evaluation Tool for Effective Schoolwide Reading Programs (PET-R) Total Scores Over Time (Criterion Score = 80). For interpretation of the references to color in this and other figures, the reader is referred to the electronic version of this dissertation.

See Table 7 PET-R, PBIS-SAS and PBIS-TIC Sample Sizes and Mean Scores for the number of schools represented in each box plot above and the mean score for each cohort and year.

[Box plots of PET-R Total Scores (0-100) by Cohort (1-5) and Year (2004-2009)]


Figure 6 Positive Behavioral Interventions and Supports Self Assessment Survey (PBIS-SAS) Overall Summary Scores Over Time (Criterion Scores = 67%)

See Table 7 PET-R, PBIS-SAS and PBIS-TIC Sample Sizes and Mean Scores for the number of schools represented in each box plot above and the mean score for each cohort and year.

[Box plots of PBIS-SAS Overall Summary Scores (percent of staff who rated PBIS systems as “In Place”) by Cohort (1-5) and Year (2004-2008)]


Figure 7 Positive Behavioral Interventions and Supports Team Implementation Checklist (PBIS-TIC) Overall Implementation Scores Over Time

See Table 7 PET-R, PBIS-SAS and PBIS-TIC Sample Sizes and Mean Scores for the number of schools represented in each box plot above and the mean score for each cohort and year.

[Box plots of PBIS-TIC Overall Scores (percent of items scored as “In Place”) by Cohort (1-5) and Year (2005-2009)]


Figure 8 Most and Least Amount of Implementation Growth between Single Years

[Paired plots of the most and least amount of implementation growth between single years on the PET-R, PBIS-SAS, and PBIS-TIC for Cohorts 1-4]


Figure 9 Cohort x Year Interaction Effect on PET-R Scores

[Line graph of mean PET-R scores for Cohorts 1-5 by Year (2004-2009)]


Figure 10 Cohort x Year Interaction Effect on PBIS-SAS Scores

[Line graph of mean PBIS-SAS scores for Cohorts 1-5 by Year (2005-2009)]


Figure 11 Cohort x Year Interaction Effect on PBIS-TIC Scores

[Line graph of mean PBIS-TIC scores for Cohorts 1-5 by Year (2005-2009)]


Figure 12 Schools’ Attainment of Criterion Scores on the Planning and Evaluation Tool for Effective Schoolwide Reading Programs (PET-R)

[Stacked bar chart of the percentage of schools at criterion for the first time, at criterion (not for the first time), not at criterion, or missing on the PET-R, by cohort and year]


Figure 13 Schools’ Attainment of Criterion Scores on the Positive Behavioral Interventions and Supports Self Assessment Survey (PBIS-SAS)

[Stacked bar chart of the percentage of schools at criterion for the first time, at criterion (not for the first time), not at criterion, or missing on the PBIS-SAS, by cohort and year]


Figure 14 Schools’ Attainment of Criterion Scores on the Positive Behavioral Interventions and Supports Team Implementation Checklist (PBIS-TIC)

[Stacked bar chart of the percentage of schools at criterion for the first time, at criterion (not for the first time), not at criterion, or missing on the PBIS-TIC, by cohort and year]


Figure 15 Sustainability of Reading Supports: Cohort 3 PET-R Scores

[Box plots of PET-R scores (0-100) from schools in Cohort 3 in 2006, 2007, and 2009]


Figure 16 Sustainability of Behavior Supports: Cohort 3 PBIS-SAS Scores

[Box plots of PBIS-SAS scores (0-100) from schools in Cohort 3, 2006-2009]


Figure 17 Sustainability of Behavior Supports: Cohort 3 PBIS-TIC Scores

[Box plots of PBIS-TIC scores (0-100) from schools in Cohort 3, 2006-2009]


Figure 18 Dynamic Indicators of Basic Early Literacy Skills (DIBELS) Scores Over Time

[Stacked bar chart of the average percentage of students who scored in the Benchmark, Strategic, and Intensive ranges on spring DIBELS screening, by cohort and year]


Figure 19 Average Number of Major Discipline Referrals per 100 Students per Day Over Time

[Line graph of the average number of major discipline referrals (ODRs) per 100 students per day for Cohorts 1-4 by Year (2005-2009)]


APPENDICES


Appendix A

Planning and Evaluation Tool for Effective Schoolwide Reading Programs (PET-R)

Planning and Evaluation Tool for Effective Schoolwide Reading Programs - Revised

(PET-R)

Edward J. Kame’enui, Ph.D. Deborah C. Simmons, Ph.D.

Institute for the Development of Educational Achievement

College of Education University of Oregon

Revised May, 2003

*Based on: Sugai, G., Horner, R., & Todd, A. (2000). Effective behavior support: Self-assessment survey. Eugene, OR: University of Oregon.


Planning and Evaluation Tool for Effective Schoolwide Reading Programs - Revised

School: Date: Position (check one):

Administrator

Teacher

Paraprofessional/Educational Assistant

Grade Level Team

Current Grade(s) Taught (if applicable):

Kindergarten

First

Second

Third

Years of Teaching Experience: Years at Present School:

Directions Based on your knowledge of your school’s reading program (e.g., goals, materials, allocated time), please use the following evaluation criteria to rate your reading program’s implementation. Each item has a value of 0, 1, or 2 to indicate the level of implementation (see below). Please note that some items are designated with a factor (e.g., x 2). Items with this designation are considered more important in the overall reading program. Multiply your rating by the number in parentheses and record that number in the blank to the left of the item. In the right-hand column of the table, document evidence available to support your rating for each item.

Levels of Implementation Description

0 = Not in place

1 = Partially in place

2 = Fully in place


Planning and Evaluation Tool for Effective Schoolwide Reading Programs

Internal/External Auditing Form

0 1 2 Not in place Partially in place Fully in place

EVALUATION CRITERIA DOCUMENTATION OF EVIDENCE

I. Goals, Objectives, Priorities – Goals for reading achievement are clearly defined, anchored to research, prioritized in terms of importance to student learning, commonly understood by users, and consistently employed as instructional guides by all teachers of reading.

Goals and Objectives: 1. are clearly defined and quantifiable at each grade level.

2. are articulated across grade levels.

3. are prioritized and dedicated to the essential elements (i.e., phonemic awareness, phonics, fluency, vocabulary, and comprehension) in reading (x 2).

4. guide instructional and curricular decisions (e.g., time allocations, curriculum program adoptions) (x 2).

5. are commonly understood and consistently used by teachers and administrators within and between grades to evaluate and communicate student learning and improve practice.

/14 Total Points %

Percent of Implementation:

7 = 50% 11 = 80% 14 = 100%


0 = Not in place   1 = Partially in place   2 = Fully in place

EVALUATION CRITERIA DOCUMENTATION OF EVIDENCE

II. Assessment – Instruments and procedures for assessing reading achievement are clearly specified, measure essential skills, provide reliable and valid information about student performance, and inform instruction in important, meaningful, and maintainable ways.

Assessment: 1. A schoolwide assessment system and database are established and maintained for documenting student performance and monitoring progress (x 2).

2. Measures assess student performance on prioritized goals and objectives.

3. Measures are technically adequate (i.e., have high reliability and validity) as documented by research.

4. All users receive training and follow-up on measurement administration, scoring, and data interpretation.

5. At the beginning of the year, screening measures identify students' level of performance and are used to determine instructional needs.


6. Progress monitoring measures are administered formatively throughout the year to document and monitor student reading performance (i.e., quarterly for all students; every 4 weeks for students at risk).


7. Student performance data are analyzed and summarized in meaningful formats and routinely used by grade-level teams to evaluate and adjust instruction (x 2).

8. The building has a “resident” expert or experts to maintain the assessment system and ensure measures are collected reliably, data are scored and entered accurately, and feedback is provided in a timely fashion.

/20 Total Points %

Percent of Implementation:

10 = 50% 16 = 80% 20 = 100%


0 = Not in place   1 = Partially in place   2 = Fully in place

EVALUATION CRITERIA DOCUMENTATION OF EVIDENCE

III. Instructional Programs and Materials - The instructional programs and materials have documented efficacy, are drawn from research-based findings and practices, align with state standards and benchmarks, and support the full range of learners.

1. A comprehensive or core reading program with documented research-based efficacy is adopted for use school wide (x 3).

2. The instructional program and materials provide explicit and systematic instruction on critical reading priorities (i.e., phonemic awareness, phonics, fluency, vocabulary, and comprehension) (x 2).

3. The instructional materials and program align with and support state standards/scientifically based practices and provide sufficient instruction in essential elements to allow the majority of students to reach learning goals.

4. Supplemental and intervention programs of documented efficacy are in place to support students who do not benefit adequately from the core program (x 2).

5. Programs and materials are implemented with a high level of fidelity (x 3).

/22 Total Points %

Percent of Implementation:

11 = 50% 18 = 80% 22 = 100%


0 = Not in place   1 = Partially in place   2 = Fully in place

EVALUATION CRITERIA DOCUMENTATION OF EVIDENCE

IV. Instructional Time - A sufficient amount of time is allocated for instruction and the time allocated is used effectively.

1. A schoolwide plan is established to allocate sufficient reading time and coordinate resources to ensure optimal use of time.

2. Reading time is prioritized and protected from interruption (x 2).

3. Instructional time is allocated to skills and practices most highly correlated with reading success (i.e., essential elements of reading including phonemic awareness, phonics, fluency, vocabulary, and comprehension).

4. Students in grades K-3 receive a minimum of 30 minutes of small-group teacher-directed reading instruction daily (x 2).

5. Additional instructional time is allocated to students who fail to make adequate reading progress.

/14 Total Points %

Percent of Implementation:

7 = 50% 11 = 80% 14 = 100%


0 = Not in place   1 = Partially in place   2 = Fully in place

EVALUATION CRITERIA DOCUMENTATION OF EVIDENCE

V. Differentiated Instruction/Grouping/Scheduling - Instruction optimizes learning for all students by tailoring instruction to meet current levels of knowledge and prerequisite skills and organizing instruction to enhance student learning.

1. Student performance is used to determine the level of instructional materials and to select research-based instructional programs.

2. Instruction is provided in flexible homogeneous groups to maximize student performance and opportunities to respond.

3. For children who require additional and substantial instructional support, tutoring (1-1) or small group instruction (< 6) is used to support teacher-directed large group or whole class instruction.

4. Group size, instructional time, and instructional programs are determined by and adjusted according to learner performance (i.e., students with greatest needs are in groups that allow more frequent monitoring and opportunities to respond and receive feedback).

5. Cross-class and cross-grade grouping is used when appropriate to maximize learning opportunities.

/10 Total Points %

Percent of Implementation:

5 = 50% 8 = 80% 10 = 100%


0 = Not in place   1 = Partially in place   2 = Fully in place

EVALUATION CRITERIA DOCUMENTATION OF EVIDENCE

VI. Administration/Organization/Communication - Strong instructional leadership maintains a focus on high-quality instruction, organizes and allocates resources to support reading, and establishes mechanisms to communicate reading progress and practices.

1. Administrators or the leadership team are knowledgeable of state standards, priority reading skills and strategies, assessment measures and practices, and instructional programs and materials.

2. Administrators or the leadership team work with staff to create a coherent plan for reading instruction and implement practices to attain school reading goals.

3. Administrators or the leadership team maximize and protect instructional time and organize resources and personnel to support reading instruction, practice, and assessment.

4. Grade-level teams are established and supported to analyze reading performance and plan instruction.

5. Concurrent instruction (e.g., Title, special education) is coordinated with and complementary to general education reading instruction.

6. A communication plan for reporting and sharing student performance with teachers, parents, and school, district, and state administrators is in place.

/12 Total Points %

Percent of Implementation: 6 = 50% 10 = 80% 12 = 100%


0 = Not in place   1 = Partially in place   2 = Fully in place

EVALUATION CRITERIA DOCUMENTATION OF EVIDENCE

VII. Professional Development - Adequate and ongoing professional development is determined and available to support reading instruction.

1. Teachers and instructional staff have thorough understanding and working knowledge of grade-level instructional/reading priorities and effective practices.

2. Ongoing professional development is established to support teachers and instructional staff in the assessment and instruction of reading priorities.

3. Time is systematically allocated for educators to analyze, plan, and refine instruction.

4. Professional development efforts are explicitly linked to practices and programs that have been shown to be effective through documented research.

/8 Total Points %

Percent of Implementation:

4 = 50% 6.5 = 80% 8 = 100%


Planning and Evaluation Tool for Effective Schoolwide Reading Programs

Individual Summary Score

Directions: Return to each element (e.g., goals; assessment) and total the scores at the bottom of the respective page. Transfer each element's number to the designated space below. Sum the total scores to compute your overall evaluation of the schoolwide reading program. The total possible value is 100 points. The total score can be used to evaluate the overall quality of the school's reading program. Evaluate each element to determine the respective quality of implementation. For example, a score of 11 in Goals/Objectives/Priorities means that in your estimation the school is implementing approximately 80% of the items in that element.

Element Score Percent

I. Goals/Objectives/Priorities /14

II. Assessment /20

III. Instructional Practices and Materials /22

IV. Instructional Time /14

V. Differentiated Instruction/Grouping /10

VI. Administration/Organization/Communication /12

VII. Professional Development /8

Total Score /100
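The element maxima above sum to exactly 100, so the overall total can be read directly as a percentage. A minimal sketch of the summary computation (the dictionary and helper name `summarize` below are ours, not part of the instrument):

```python
# Element maxima from the PET-R summary form; they sum to 100, so the
# overall total doubles as a percentage. The helper name is hypothetical.
ELEMENT_MAX = {
    "Goals/Objectives/Priorities": 14,
    "Assessment": 20,
    "Instructional Practices and Materials": 22,
    "Instructional Time": 14,
    "Differentiated Instruction/Grouping": 10,
    "Administration/Organization/Communication": 12,
    "Professional Development": 8,
}

def summarize(scores):
    """scores: element name -> subtotal from the bottom of each page."""
    percents = {name: round(100 * scores[name] / mx)
                for name, mx in ELEMENT_MAX.items()}
    return sum(scores.values()), percents
```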


Appendix B

Positive Behavioral Interventions and Supports (PBIS) Self-Assessment Survey

Version 2.0 Data Collection Protocol

Conducted annually, preferably in spring. Completed by all staff. Use results to design annual action plan.

EBS Self-Assessment Survey version 2.0 August 2003 ©2000 Sugai, Horner & Todd, Educational and Community Supports University of Oregon Revised 08/27/03 DP


Effective Behavior Support (EBS) Survey

Assessing and Planning Behavior Support in Schools

Purpose of the Survey

The EBS Survey is used by school staff for initial and annual assessment of effective behavior support systems in their school. The survey examines the status and need for improvement of four behavior support systems: (a) school-wide discipline systems, (b) non-classroom management systems (e.g., cafeteria, hallway, playground), (c) classroom management systems, and (d) systems for individual students engaging in chronic problem behaviors. Each question in the survey relates to one of the four systems.

Survey results are summarized and used for a variety of purposes including:

1. annual action planning,
2. internal decision making,
3. assessment of change over time,
4. awareness building of staff, and
5. team validation.

The survey summary is used to develop an action plan for implementing and sustaining effective behavioral support systems throughout the school (see “Developing an EBS Annual Action Plan”).

Conducting the EBS Survey

Who completes the survey?

Initially, the entire staff in a school completes the EBS Survey. In subsequent years and as an on-going assessment and planning tool, the EBS Survey can be completed in several ways:

• All staff at a staff meeting.
• Individuals from a representative group.
• Team member-led focus group.

When and how often should the survey be completed?

Since survey results are used for decision making and designing an annual action plan in the area of effective behavior support, most schools have staff complete the survey at the end or the beginning of the school year.


How is the survey completed?

1. Complete the survey independently.
2. Schedule 20-30 minutes to complete the survey.
3. Base your rating on your individual experiences in the school. If you do not work in classrooms, answer questions that are applicable to you.
4. Mark (i.e., “√” or “X”) the current status of each feature on the left side of the page; for each feature rated as partially in place or not in place, rate the priority for improvement (i.e., high, medium, low) on the right side of the page.

To assess behavior support, first evaluate the status of each system feature (i.e., in place, partially in place, not in place) on the left-hand side of the survey. Next, examine each feature:

a. “What is the current status of this feature (i.e., in place, partially in place, not in place)?”

b. For each feature rated partially in place or not in place, “What is the priority for improvement for this feature (i.e., high, medium, low)?”


Summarizing the Results from the EBS Survey

The results from the EBS Survey are used to (a) determine the status of EBS in a school and (b) guide the development of an action plan for improving EBS. The resulting action plan can be developed to focus on any one or combination of the four EBS system areas.

Three basic phases are involved: (a) summarize the results, (b) analyze and prioritize the results, and (c) develop the action plan.

Phase 1: Summarize the Results

The objective of this phase is to produce a display that summarizes the overall response of school staff for each system on (a) status of EBS features and (b) improvement priorities.

Step 1a. Summarize survey results on a blank survey by tallying all individual responses for each of the possible six choices, as illustrated in Example 1a.

Example 1a. (Columns: Current Status: In Place / Partial in Place / Not in Place; Feature; Priority for Improvement: High / Med / Low. School-wide is defined as involving all students, all staff, & all settings.)

1. A small number (e.g. 3-5) of positively & clearly stated student expectations or rules are defined.
   Current status: In Place √√√√√√√√√, Partial √√√√√√√, Not in Place √√√√. Priority: High √√√√, Med √√√√, Low √√√.

2. Expected student behaviors are taught directly.
   Current status: In Place √√, Partial √√√√√√, Not in Place √√√√√√√√√√√√. Priority: High √√√√√√√√√√, Med √√√√, Low √√√√√√.


Step 1b. Total the number of responses by all staff for each of the six possible choices, as illustrated in Example 1b.

Example 1b. (Columns: Current Status: In Place / Partial in Place / Not in Place; Feature; Priority for Improvement: High / Med / Low. Blank cells indicate no responses.)

1. A small number (e.g. 3-5) of positively & clearly stated student expectations or rules are defined.
   Current status: In Place 9, Partial 7, Not in Place 4. Priority: High 4, Med 4, Low 3.

2. Expected student behaviors are taught directly.
   Current status: In Place 2, Partial 6, Not in Place 12. Priority: High 10, Med 4, Low 6.

3. Expected student behaviors are rewarded regularly.
   Current status: In Place 7, Partial 9, Not in Place 3. Priority: High 6, Med 6, Low 0.

4. Problem behaviors (failure to meet expected student behaviors) are defined clearly.
   Current status: In Place 7, Partial 11, Not in Place 3. Priority: High 6, Med 4, Low 4.

5. Consequences for problem behaviors are defined clearly.
   Current status: In Place 0, Partial 8, Not in Place 9. Priority: High 11, Med 3, Low 3.


Step 1c. For each system area, calculate a total summary by counting the total number of responses for a column (e.g., In Place: 9 + 2 + …) and dividing that number by the total number of responses across the columns (e.g., In Place + Partial + Not in Place), as illustrated in Example 1c.

Example 1c. (Columns: Current Status: In Place / Partial in Place / Not in Place; Feature; Priority for Improvement: High / Med / Low. Blank cells indicate no responses.)

1. A small number (e.g. 3-5) of positively & clearly stated student expectations or rules are defined.
   Current status: In Place 9, Partial 7, Not in Place 4. Priority: High 4, Med 4, Low 3.

2. Expected student behaviors are taught directly.
   Current status: In Place 2, Partial 6, Not in Place 12. Priority: High 10, Med 4, Low 6.

3. Expected student behaviors are rewarded regularly.
   Current status: In Place 7, Partial 9, Not in Place 3. Priority: High 6, Med 6, Low 0.

4. Problem behaviors (failure to meet expected student behaviors) are defined clearly.
   Current status: In Place 7, Partial 11, Not in Place 3. Priority: High 6, Med 4, Low 4.

5. Consequences for problem behaviors are defined clearly.
   Current status: In Place 0, Partial 8, Not in Place 9. Priority: High 11, Med 3, Low 3.

Totals: Current status: 25 + 41 + 31 = 97. Priority for improvement: 37 + 21 + 16 = 74.
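The percentages graphed in Step 1d come from dividing each column total by the grand total of responses on that side of the table. A short sketch of that arithmetic (illustrative only; the helper name `column_percents` is ours, not part of the survey materials):

```python
# Step 1c/1d arithmetic sketch: each column total divided by the grand
# total of responses on its side of the table gives the summary percent
# that is plotted on the bar graph. Function name is hypothetical.
def column_percents(column_totals):
    grand = sum(column_totals)
    return [round(100 * t / grand) for t in column_totals]

# Current-status totals from Example 1c: In Place 25, Partial 41, Not 31.
print(column_percents([25, 41, 31]))  # [26, 42, 32]
```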


Step 1d. Create a bar graph showing total item summary percentages for each of the six choices (take the total responses for each of the six choices and divide by the total number of responses), as illustrated in Example 1d, which uses the results from Example 1c. Complete the EBS Survey Summary by graphing the current status and priority for improvement for each of the four system areas.

Example 1d.

Completing Phase 1 provides a general summary for the current status and priority for improvement ratings for each of the four system areas. For further summary and analysis, follow Phase 2 and Phase 3 activities.


Phase 2: Analyze and Prioritize the Results

The objective of this phase is for teams to narrow the focus of Action Plan activities. Teams also may want to include other data or information (e.g., office discipline referrals, behavior incident reports, attendance) to refine their decisions. Use the EBS Survey Summary to guide and document your analysis. In general, the following guidelines should be considered:

Step 1. Using the EBS Survey Summary Graph results, rate the overall perspective of EBS implementation by circling High, Med., or Low for each of the four system areas.

Step 2. Using the EBS Survey Tally pages, list the three major strengths in each of the four system areas.

Step 3. Using the EBS Survey Tally pages, list the three major areas in need of development.

Step 4. For each system, circle one priority area for focusing development activities.

Step 5. Circle or define the activities for this/next year’s focus to support the area selected for development.

Step 6. Specify system(s) to sustain (S) & develop (D).

Phase 3: Use the EBS Survey Summary Information to Develop the EBS Annual Action Plan

The objective of this phase is to develop an action plan for meeting the school improvement goal in the area of school safety. Multiple data sources will be integrated when developing the action plan. The EBS Survey Summary page summarizes the EBS Survey information and will be a useful tool when developing the EBS Annual Action Plan. The EBS Annual Action Plan process can be obtained by contacting the first author of this document.


Effective Behavior Support (EBS) Survey
Assessing and Planning Behavior Support in Schools

Name of school: ________ Date: ________ District: ________ State: ________

Person Completing the Survey (check one): Administrator, General Educator, Special Educator, Counselor, School Psychologist, Educational/Teacher Assistant, Parent/Family member, Community member, Other

1. Complete the survey independently.
2. Schedule 20-30 minutes to complete the survey.
3. Base your rating on your individual experiences in the school. If you do not work in classrooms, answer questions that are applicable to you.

To assess behavior support, first evaluate the status of each system feature (i.e., in place, partially in place, not in place) on the left-hand side of the survey. Next, examine each feature:

a. “What is the current status of this feature (i.e., in place, partially in place, not in place)?”

b. For those features rated as partially in place or not in place, “What is the priority for improvement for this feature (i.e., high, medium, low)?”

4. Return your completed survey to ________ by ________.


SCHOOL-WIDE SYSTEMS

Current Status (In Place / Partial in Place / Not in Place) | Feature | Priority for Improvement (High / Med / Low)

School-wide is defined as involving all students, all staff, & all settings.

1. A small number (e.g. 3-5) of positively & clearly stated student expectations or rules are defined.

2. Expected student behaviors are taught directly.

3. Expected student behaviors are rewarded regularly.

4. Problem behaviors (failure to meet expected student behaviors) are defined clearly.

5. Consequences for problem behaviors are defined clearly.

6. Distinctions between office v. classroom managed problem behaviors are clear.

7. Options exist to allow classroom instruction to continue when problem behavior occurs.

8. Procedures are in place to address emergency/dangerous situations.

9. A team exists for behavior support planning & problem solving.

10. School administrator is an active participant on the behavior support team.

11. Data on problem behavior patterns are collected and summarized within an on-going system.

12. Patterns of student problem behavior are reported to teams and faculty for active decision-making on a regular basis (e.g., monthly).

13. School has formal strategies for informing families about expected student behaviors at school.

14. Booster training activities for students are developed, modified, & conducted based on school data.

15. School-wide behavior support team has a budget for (a) teaching students, (b) on-going rewards, and (c) annual staff planning.

16. All staff are involved directly and/or indirectly in school-wide interventions.

17. The school team has access to on-going training and support from district personnel.

18. The school is required by the district to report on the social climate, discipline level or student behavior at least annually.

Name of School ______________________________________Date ______________


NONCLASSROOM SETTING SYSTEMS

Current Status (In Place / Partial in Place / Not in Place) | Feature | Priority for Improvement (High / Med / Low)

Non-classroom settings are defined as particular times or places where supervision is emphasized (e.g., hallways, cafeteria, playground, bus).

1. School-wide expected student behaviors apply to non-classroom settings.

2. School-wide expected student behaviors are taught in non-classroom settings.

3. Supervisors actively supervise (move, scan, & interact) students in non-classroom settings.

4. Rewards exist for meeting expected student behaviors in non-classroom settings.

5. Physical/architectural features are modified to limit (a) unsupervised settings, (b) unclear traffic patterns, and (c) inappropriate access to & exit from school grounds.

6. Scheduling of student movement ensures appropriate numbers of students in non-classroom spaces.

7. Staff receives regular opportunities for developing and improving active supervision skills.

8. Status of student behavior and management practices are evaluated quarterly from data.

9. All staff are involved directly or indirectly in management of non-classroom settings.

Name of School _______________________________________Date ______________


CLASSROOM SYSTEMS

Current Status (In Place / Partial in Place / Not in Place) | Feature | Priority for Improvement (High / Med / Low)

Classroom settings are defined as instructional settings in which teacher(s) supervise & teach groups of students.

1. Expected student behavior & routines in classrooms are stated positively & defined clearly.

2. Problem behaviors are defined clearly.

3. Expected student behavior & routines in classrooms are taught directly.

4. Expected student behaviors are acknowledged regularly (positively reinforced) (>4 positives to 1 negative).

5. Problem behaviors receive consistent consequences.

6. Procedures for expected & problem behaviors are consistent with school-wide procedures.

7. Classroom-based options exist to allow classroom instruction to continue when problem behavior occurs.

8. Instruction & curriculum materials are matched to student ability (math, reading, language).

9. Students experience high rates of academic success (> 75% correct).

10.Teachers have regular opportunities for access to assistance & recommendations (observation, instruction, & coaching).

11. Transitions between instructional & non-instructional activities are efficient & orderly.


Name of School _______________________________________Date ______________

INDIVIDUAL STUDENT SYSTEMS

Current Status (In Place / Partial in Place / Not in Place) | Feature | Priority for Improvement (High / Med / Low)

Individual student systems are defined as specific supports for students who engage in chronic problem behaviors (1%-7% of enrollment).

1. Assessments are conducted regularly to identify students with chronic problem behaviors.

2. A simple process exists for teachers to request assistance.

3. A behavior support team responds promptly (within 2 working days) to students who present chronic problem behaviors.

4. Behavioral support team includes an individual skilled at conducting functional behavioral assessment.

5. Local resources are used to conduct functional assessment-based behavior support planning (~10 hrs/week/student).

6. Significant family &/or community members are involved when appropriate & possible.

7. School includes formal opportunities for families to receive training on behavioral support/positive parenting strategies.

8. Behavior is monitored & feedback provided regularly to the behavior support team & relevant staff.

Name of School ________________________________________Date _____________


Appendix C

Positive Behavioral Interventions and Supports Team Implementation Checklist (Quarterly)

School: ________ Date of Report: ________ District: ________ County: ________ State: ________

INSTRUCTIONS: The EBS team should complete both checklists quarterly to monitor activities for implementation of EBS in the school.

EBS Team Members

Person(s) Completing Report

Checklist #1: Start-Up Activity

Complete & submit quarterly. Status: Achieved, In Progress, Not Started.

Oct. / Dec. / Mar. / May. Date: (MM/DD/YY)

Establish Commitment

1. Administrator’s support & active involvement.

Status:

2. Faculty/Staff support (One of top 3 goals, 80% of faculty document support, 3 year timeline).

Status:

Establish & Maintain Team

3. Team established (representative). Status:

4. Team has regular meeting schedule, effective operating procedures.

Status:

5. Audit is completed for efficient integration of team with other teams/initiatives addressing behavior support.

Status:


Self-Assessment

6. Team/faculty completes EBS self-assessment survey.

Status:

7. Team summarizes existing school discipline data.

Status:

8. Strengths, areas of immediate focus & action plan are identified.

Status:

Establish School-wide Expectations

9. 3-5 school-wide behavior expectations are defined.

Status:

10. School-wide teaching matrix developed.

Status:

11. Teaching plans for school-wide expectations are developed.

Status:

12. School-wide behavioral expectations taught directly & formally.

Status:

13. System in place to acknowledge/reward school-wide expectations.

Status:

14. Clearly defined & consistent consequences and procedures for undesirable behaviors are developed.

Status:

Establish Information System

15. Discipline data are gathered, summarized, & reported.

Status:

Build Capacity for Function-based Support

16. Personnel with behavioral expertise are identified & involved.

Status:

17. Plan developed to identify and establish systems for teacher support, functional assessment & support plan development & implementation.

Status:


Checklist #2: On-going Activity Monitoring

Complete & submit Quarterly. Status: Achieved, In Progress, Not Started

1. EBS team has met at least monthly. Status:

2. EBS team has given status report to faculty at least monthly.

Status:

3. Activities for EBS action plan implemented.

Status:

4. Accuracy of implementation of EBS action plan assessed.

Status:

5. Effectiveness of EBS action plan implementation assessed.

Status:

6. EBS data analyzed. Status:

Additional Observations/Comments/Questions:


Action Plan for Completion of Start-Up Activities

(Columns: Activity | Activity Task Analysis | Who | When. Lines a-e under each activity are blank task lines to be completed by the team.)

1. Establish Commitment
   • Administrator
   • Top 3 goal
   • 80% of faculty
   • Three year timeline
   Tasks: a. ____ b. ____ c. ____ d. ____ e. ____

2. Establish Team
   • Representative
   • Administrator
   • Effective team operating procedures
   • Audit of teams/initiatives
   Tasks: a. ____ b. ____ c. ____ d. ____ e. ____

3. Self-Assessment
   • EBS survey
   • Discipline data
   • Identification of strengths, focus
   • Action Plan developed
   • Action Plan presented to faculty
   Tasks: a. ____ b. ____ c. ____ d. ____ e. ____

4. School-wide Expectations
   • Define 3-5 school-wide behavioral expectations
   • Curriculum matrix
   • Teaching plans
   • Teach expectations
   • Define consequences for problem behavior
   Tasks: a. ____ b. ____ c. ____ d. ____ e. ____

5. Establish Information System
   • System for gathering useful information
   • Process for summarizing information
   • Process for using information for decision-making
   Tasks: a. ____ b. ____ c. ____ d. ____ e. ____

6. Build Capacity for Function-based Support
   • Personnel with behavioral expertise
   • Time and procedures for identification, assessment, & support implementation
   Tasks: a. ____ b. ____ c. ____ d. ____ e. ____


BIBLIOGRAPHY


BIBLIOGRAPHY

Adelman, H. S., & Taylor, L. (2003). Rethinking school psychology: Commentary on public health framework series. Journal of School Psychology, 41, 83-90.

Anastasi, A., & Urbina, S. (1998). Psychological Testing (7th ed.). New York: Prentice

Hall. Anderson, C. M., & Freeman, K. A. (2000). Positive behavior support: Expanding the

application of applied behavior analysis. Behavior Analyst, 23, 85-94. Anderson, C. M., & Kincaid, D. (2005). Applying behavior analysis to school violence

and discipline problems: School-wide positive behavior support. Behavior Analyst, 28, 479-492.

Baker, C. K. (2005). The PBS triangle: Does it fit as a heuristic? Journal of Positive

Behavior Interventions, 7, 120-123. Barrett, S. B., Bradshaw, C. P., Lewis-Palmer, T. (2008). Maryland statewide PBS

initiative: Systems, evaluation, and next steps. Journal of Positive Behavior Interventions, 10, 105-.

Bradley-Johnson, S., & Dean, V. J. (2000). Role change for school psychology: The

challenge continues in the new millennium. Psychology in the Schools, 37, 1-5. Bradshaw, C., Koth, C., Bevans, K., Ialongo, N., & Leaf, P. (2008). The impact of school-wide

positive behavioral interventions and supports (PBIS) on the organizational health of elementary schools. School Psychology Quarterly, 23, 462-473.

Brown, M. B., Hohenshil, T. H., & Brown, D. T. (1998). Job satisfaction of school

psychologists in the United States: A national study. School Psychology International, 19, 79-89.

Burns, M. K., Appleton, J. J., & Stehouwer, J. D. (2005). Meta-analytic review of responsiveness-to-intervention research: Examining field-based and research- implemented models. Journal of Psychoeducational Assessment, 23, 381-394.

Carr, E. G., Dunlap, G., Horner, R. H., Koegel, R. L., Turnbull, A. P. & Sailor, W.

(2002). Positive behavior support: Evolution of an applied science. Journal of Positive Behavior Interventions, 4, 4-16.

Chard, D. J., & Linan-Thompson, S. (2008). Introduction to the special series on

systemic, multitier instructional models: Emerging research on factors that support prevention of reading difficulties. Journal of Learning Disabilities, 41, 99-100.

Page 157: A THREE-TIER MODEL OF INTEGRATED BEHAVIOR AND …

147

Chen, H. (1998). Theory-driven evaluations. Advances in Educational Productivity, 7, 15-24.

Chen, H. T. (2003). Theory-driven approach for facilitation of planning and health promotion or other programs. The Canadian Journal of Program Evaluation, 18, 91-113.

Christenson, S. L., Carlson, C., & Valdez, C. R. (2002). Evidence-based interventions in school psychology: Opportunities, challenges, and cautions. School Psychology Quarterly, 17, 466-474.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.

Council for Exceptional Children (2007). Position on Response to Intervention (RTI): The unique role of special education and special educators. Retrieved May 20, 2008, from http://firstsearch.oclc.org.proxy2.cl.msu.edu:2047/WebZ/FSPage?pagetype=return_frameset:sessionid=fsapp6-41330-fgleuf6tuanuub:entitypagenum=12:0:entityframedurl=http%3A%2F%2Fwww.eric.ed.gov%2Fcontentdelivery%2Fservlet%2FERICServlet%3Faccno%3DED499403:entityframedtitle=ERIC:entityframedtimeout=30:entityopenTitle=:entityopenAuthor=:entityopenNumber=:

Crone, D. A., Horner, R. H., & Hawken, L. S. (2004). Responding to problem behavior in schools: The Behavior Education Program. New York: Guilford.

Deno, S. L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52, 219-232.

Denton, C. A., Fletcher, J. M., Anthony, J. L., & Francis, D. J. (2006). An evaluation of intensive intervention for students with persistent reading difficulties. Journal of Learning Disabilities, 39, 447-466.

Doolittle, J. (2006). The sustainability of positive behavior supports in schools (Unpublished doctoral dissertation). University of Oregon, Eugene, OR.

Ervin, R. A., Schaughency, E., Goodman, S. D., McGlinchey, M. T., & Matthews, A. (2006). Merging research and practice agendas to address reading and behavior school-wide. School Psychology Review, 35, 198-223.

Fagan, T. (2002). School psychology: Recent descriptions, continued expansion, and an ongoing paradox. School Psychology Review, 31, 5-10.

Filter, K. J., & Horner, R. H. (2009). Function-based academic interventions for problem behavior. Education and Treatment of Children, 32(1), 1-19.


Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation Research: A Synthesis of the Literature. Tampa, FL: National Implementation Research Network, University of South Florida.

Flay, B. R., Biglan, A., Boruch, R. F., Castro, F. G., Gottfredson, D., Kellam, S., Moscicki, E. K., Schinke, S., Valentine, J. C., & Ji, P. (2005). Standards of evidence: Criteria for efficacy, effectiveness and dissemination. Prevention Science, 3, 151-175.

Fleming, C. B., Harachi, T. W., Cortes, R. C., Abbott, R. D., & Catalano, R. F. (2004). Level and change in reading scores and attention problems during elementary school as predictors of problem behavior in middle school. Journal of Emotional and Behavioral Disorders, 12, 130-144.

Fletcher, J. M., & Lyon, G. R. (1998). Reading: A research-based approach. In W. M. Evers (Ed.), What’s gone wrong in America’s classrooms (pp. 49-90). Palo Alto, CA: Board of Trustees of the Leland Stanford Junior University, Hoover Institution Press.

Francis, D. J., Shaywitz, S. E., Stuebing, K. K., Shaywitz, B. A., & Fletcher, J. M. (1996). Developmental lag versus deficit models of reading disability: A longitudinal individual growth curves analysis. Journal of Educational Psychology, 88, 3-17.

Fuchs, L. S., & Fuchs, D. (2007). The role of assessment in the three-tier approach to reading instruction. In D. Haager, J. Klingner, & S. Vaughn (Eds.), Evidence-based reading practices for response to intervention (pp. 29-42). Baltimore, MD: Paul H. Brookes Publishing Co.

Gersten, R., Compton, D., Connor, C. M., Dimino, J., Santoro, L., Linan-Thompson, S., & Tilly, W. D. (2008). Assisting students struggling with reading: Response to intervention and multi-tier intervention for reading in the primary grades. A practice guide (NCEE 2009-4045). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education. Retrieved from http://ies.ed.gov/ncee/wwc/publications/practiceguides/

Gilbert, T. F. (1978). Human competence: Engineering worthy performance. New York: McGraw-Hill.

Good, R. H., Gruba, J., & Kaminski, R. A. (2002). Best practices in using Dynamic Indicators of Basic Early Literacy Skills (DIBELS) in an outcomes-driven model. In J. Grimes & A. Thomas (Eds.), Best practices in school psychology (4th ed.). Bethesda, MD: National Association of School Psychologists.


Good, R. H., & Kaminski, R. A. (Eds.). (2002). Dynamic Indicators of Basic Early Literacy Skills (6th ed.). Eugene, OR: Institute for the Development of Educational Achievement.

Good, R., Simmons, D., & Kame’enui, E. (2001). Planning and Evaluation Tool for Effective Schoolwide Reading Programs-Revised. (Unpublished measurement tool).

Graczyk, P. A., Domitrovich, C. E., Small, M., & Zins, J. E. (2006). Serving all children: An implementation model framework. School Psychology Review, 35, 266-274.

Greenberg, M. T., Domitrovich, C. E., Graczyk, P. A., & Zins, J. E. (2005). The study of implementation in school-based preventive interventions: Theory, research, and practice (Volume 3). DHHS Pub. No. (SMA)___. Rockville, MD: Center for Mental Health Services, Substance Abuse and Mental Health Services Administration.

Gresham, F. M., Gansle, K. A., Noell, G. H., Cohen, S., & Rosenblum, S. (1993). Treatment integrity of school-based behavioral intervention studies: 1980-1990. School Psychology Review, 22, 254-272.

Gresham, F. M., MacMillan, D. L., Beebe-Frankenberger, M. E., & Bocian, K. M. (2000). Treatment integrity in learning disabilities intervention research: Do we really know how treatments are implemented? Learning Disabilities Research & Practice, 15, 198-205.

Griffiths, A., Parson, L. B., Burns, M. K., VanDerHeyden, A., & Tilly, W. D. (2007). Response to intervention: Research for practice. Alexandria, VA: National Association of State Directors of Special Education.

Hale, J. B. (2008). Response to Intervention: Guidelines for parents and practitioners. Retrieved May 20, 2008, from http://www.wrightslaw.com/idea/art/rti/hale.htm

Han, S. S., & Weiss, B. (2005). Sustainability of teacher implementation of school-based mental health programs. Journal of Abnormal Child Psychology, 6, 665-679.

Hayes, S. C., Barlow, D. H., & Nelson-Gray, R. O. (1999). The scientist-practitioner: Research and accountability in the age of managed care (2nd ed.). Allyn & Bacon.

Hoagwood, K., & Johnson, J. (2003). School psychology: A public health framework I. From evidence-based practices to evidence-based policies. Journal of School Psychology, 41, 3-21.

Horner, R. H., & Sugai, G. (2005). Supporting school-wide positive behavior support. Leadership address presented at the OSEP School Improvement Conference, Washington, DC.


Horner, R. H., Sugai, G., & Lewis-Palmer, T. (2005). School-wide positive behavior support evaluation template. Eugene, OR: University of Oregon.

Horner, R. H., Sugai, G., Smolkowski, K., Eber, L., Nakasato, J., Todd, A. W., et al. (2009). A randomized, wait-list controlled effectiveness trial assessing school-wide positive behavior support in elementary schools. Journal of Positive Behavior Interventions, 12, 133-144.

Horner, R. H., Sugai, G., Todd, A. W., & Lewis-Palmer, T. (2005). School-wide positive behavior support. In L. Bambara & L. Kern (Eds.), Individualized supports for students with problem behaviors: Designing positive behavior plans (pp. 359-390). New York: Guilford Press.

Horner, R. H., Todd, A. W., Lewis-Palmer, T., Irvin, L. K., Sugai, G., & Boland, J. B. (2004). The School-wide Evaluation Tool (SET): A research instrument for assessing school-wide positive behavior support. Journal of Positive Behavior Interventions, 6, 3-12.

Hosp, M. K., Hosp, J. L., & Howell, K. W. (2007). The ABCs of CBM: A practical guide to curriculum-based measurement. New York, NY: Guilford Press.

Hosp, M. K., & Fuchs, L. S. (2005). Using CBM as an indicator of decoding, word reading, and comprehension: Do the relations change with grade? School Psychology Review, 34, 9-26.

Hosp, J. L., & Reschly, D. J. (2002). Regional differences in school psychology practice. School Psychology Review, 31, 11-29.

Individuals with Disabilities Education Improvement Act, U.S.C. H.R. 1350 (2004).

Irvin, L. K., Tobin, T. J., Sprague, J. R., Sugai, G., & Vincent, C. G. (2004). Validity of office discipline referral measures as indices of school-wide behavioral status and effects of school-wide behavioral interventions. Journal of Positive Behavior Interventions, 6, 131-147.

Jimerson, S. R., Burns, M. K., & VanDerHeyden, A. M. (2007). Handbook of response to intervention: The science and practice of assessment and intervention. New York: Springer Science+Business Media, LLC.

Juel, C. (1988). Learning to read and write: A longitudinal study of 54 children from first through fourth grade. Journal of Educational Psychology, 80, 437-447.

Kame’enui, E. J., & Simmons, D. C. (2003). Planning and Evaluation Tool for Effective School-wide Reading Programs-Revised (PET-R). Eugene, OR: Institute for the Development of Educational Achievement.


Kincaid, D., Childs, K., & George, H. (2005). School-wide benchmarks of quality. Unpublished instrument, University of South Florida.

Kincaid, D., Childs, K., Blase, K. A., & Wallace, F. (2007). Identifying barriers and facilitators in implementing schoolwide positive behavior support. Journal of Positive Behavior Interventions, 9, 174-184.

Kovaleski, J. F. (2007). Response to intervention: Considerations for research and systems change. School Psychology Review, 36, 638-646.

Kratochwill, T. R., & Stoiber, K. C. (2002). Evidence-based interventions in school psychology: Conceptual foundations of the Procedural and Coding Manual of Division 16 and the Society for the Study of School Psychology Task Force. School Psychology Quarterly, 17, 341-389.

Kutash, K., Duchnowski, A. J., & Lynn, N. (2006). School-based mental health: An empirical guide for decision-makers. Tampa, FL: University of South Florida, The Research and Training Center for Children’s Mental Health.

Latham, G. (1988). The birth and death cycles of educational innovations. Principal, 68, 41-43.

Lee, Y., Sugai, G., & Horner, R. H. (1999). Using an instructional intervention to reduce problem and off-task behaviors. Journal of Positive Behavior Interventions, 1, 195-204.

Lembke, E. S., McMaster, K. L., & Stecker, P. M. (2010). The prevention science of reading research within a response-to-intervention model. Psychology in the Schools, 47, 22-35.

Lewis, T. J., & Sugai, G. (1999). Effective behavior support: A systems approach to proactive school-wide management. Focus on Exceptional Children, 31, 1-24.

Marston, D., Fuchs, L. S., & Deno, S. L. (1986). Measuring pupil progress: A comparison of standardized achievement tests and curriculum-related measures. Diagnostique, 11, 71-90.

Mass-Galloway, R., Panyan, M. V., Smith, C. R., & Wessendorf, S. (2008). Systems change with school-wide positive behavior supports: Iowa’s work in progress. Journal of Positive Behavior Interventions, 10, 129-.

Mastropieri, M. A., & Scruggs, T. E. (2005). Feasibility and consequences of response to intervention: Examination of the issues and scientific evidence as a model for the identification of individuals with learning disabilities. Journal of Learning Disabilities, 38, 525-531.


Mathes, P. G., Denton, C. A., Fletcher, J. M., Anthony, J. L., Francis, D. J., & Schatschneider, C. (2005). The effects of theoretically different instruction and student characteristics on the skills of struggling readers. Reading Research Quarterly, 40, 148-182.

May, S., Ard, W., III, Todd, A. W., Horner, R. H., Glasgow, A., Sugai, G., et al. (2002). School-wide Information System. Eugene: University of Oregon, Educational and Community Supports.

McCurdy, B. L., Mannella, M. C., & Eldridge, N. (2003). Positive behavior supports in urban schools: Can we prevent the escalation of antisocial behavior? Journal of Positive Behavior Interventions, 3, 158-170.

McGlinchey, M. T., & Goodman, S. (2008). Best practices in implementing school reform. In J. Grimes & A. Thomas (Eds.), Best practices in school psychology (5th ed.). Bethesda, MD: National Association of School Psychologists.

McGlinchey, M. T., & Hixson, M. D. (2004). Using curriculum-based measurement to predict performance on state assessments in reading. School Psychology Review, 33, 193-203.

McIntosh, K., Chard, D. J., Boland, J. B., & Horner, R. H. (2006). Demonstration of combined efforts in school-wide academic and behavioral systems and incidence of reading and behavior challenges in early elementary grades. Journal of Positive Behavior Interventions, 8, 146-154.

McIntosh, K., Horner, R. H., Chard, D. J., Dickey, C. R., & Braun, D. H. (2008). Reading skills and function of problem behavior in typical school settings. Journal of Special Education, 42, 131-147.

McIntosh, K., Horner, R. H., & Sugai, G. (2009). Sustainability of systems-level evidence-based practices in schools: Current knowledge and future directions. In W. Sailor, G. Dunlap, G. Sugai, & R. Horner (Eds.), Handbook of positive behavior support (pp. 327-352). New York: Springer Science+Business Media, LLC.

Merrell, K. W., Ervin, R. A., & Gimpel, G. A. (2006). School psychology for the 21st century: Foundations and practices (pp. 21-41). New York: The Guilford Press.

Meyers, J., & Nastasi, B. K. (1999). Primary prevention in school settings. In C. R. Reynolds & T. B. Gutkin (Eds.), Handbook of school psychology (3rd ed., pp. 764-799). New York: Wiley.

Michigan Department of Education (2008). Design and validity of the MEAP test. State of Michigan.


Michigan Department of Education (2008). Michigan’s Integrated Behavior and Learning Support Initiative. (brochure).

Mihalic, S., Irwin, K., Fagan, A., Ballard, D., & Elliot, D. (2004). Successful program implementation: Lessons learned from Blueprints. Juvenile Justice Bulletin. Retrieved May 20, 2008, from www.ojp.usdoj.gov/ojjdp

Miller, D. (2010, March). Involving families in RtI. Retrieved from http://www.rtinetwork.org/rti-blog/entry/1/56

Morrison, G. M., Anthony, S., Storino, M., & Dillon, C. (2001). An examination of the disciplinary histories and the individual and educational characteristics of students who participate in an in-school suspension program. Education and Treatment of Children, 24, 276-293.

Mortenson, B. P., & Witt, J. C. (1998). The use of weekly performance feedback to increase teacher implementation of a prereferral academic intervention. School Psychology Review, 27, 613-628.

Muscott, H. S., Mann, S., & LeBrun, M. (in press). Positive behavioral interventions and supports in New Hampshire: Effects of large-scale implementation of school-wide positive behavior support on student discipline and academic achievement. Journal of Positive Behavior Interventions.

National Association of State Directors of Special Education and Council of Administrators of Special Education (2006). Response to intervention: NASDSE and CASE white paper on RtI.

National Association of School Psychologists. (2008). Zero tolerance and alternative strategies: A fact sheet for educators and policymakers. Bethesda, MD: Author.

National Reading Panel (2000). Teaching children to read: An evidence-based assessment of the scientific literature on reading and its implications for reading instruction (NIH Pub. No. 00-4769). National Institute of Child Health and Human Development, U.S. Department of Health and Human Services.

Nelson, J. R., Benner, G. J., Lane, K. L., & Smith, B. W. (2004). Academic achievement of K-12 students with emotional and behavioral disorders. Exceptional Children, 71, 59-73.

Nelson, J. R., Benner, G. J., Reid, R. C., Epstein, M. H., & Currin, D. (2002). The convergent validity of office discipline referrals with the CBCL-TRF. Journal of Emotional and Behavioral Disorders, 10, 181-188.


Nersesian, M., Todd, A. W., Lehmann, J., & Watson, J. (2000). School-wide behavior support through district-level system change. Journal of Positive Behavior Interventions, 2, 244-247.

No Child Left Behind Act, U.S.C. 115 STAT. 1426 (2002).

O’Connor, R. E., Fulmer, D., Harty, K. R., & Bell, K. (2005). Layers of reading intervention in kindergarten through third grade. Journal of Learning Disabilities, 38, 440-455.

Office of Special Education Programs Center on Positive Behavioral Interventions and Supports (2004). School-wide Positive Behavior Support Implementers’ Blueprint and Self-Assessment. Eugene, OR: University of Oregon.

Pallant, J. (2004). SPSS survival manual. New York: Open University Press.

Pereles, D. A., Omdal, S., & Baldwin, L. (2009). Response to intervention and twice-exceptional learners: A promising fit. Gifted Child Today, 32, 40-51.

Preciado, J. A., Horner, R. H., & Baker, S. K. (2009). Using a function-based approach to decrease problem behaviors and increase academic engagement for Latino English Language Learners. The Journal of Special Education, 42, 227-240.

Reschly, A. L. (2010). Schools, families and response to intervention. Retrieved from http://rtinetwork.org/essential/family/schools-familes-and-rti

Reschly, D. J. (2000). The present and future status of school psychology in the United States. School Psychology Review, 29, 507-522.

Reschly, D. J., & Ysseldyke, J. (2002). Paradigm shift: The past is not the future. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 3-20). Bethesda, MD: National Association of School Psychologists.

Roberts, M. L., Marshall, J., Nelson, J. R., & Albers, C. A. (2001). Curriculum-based assessment procedures embedded within functional behavioral assessments: Identifying escape-motivated behaviors in a general education classroom. School Psychology Review, 30, 264-277.

Rollins, K., Mursky, C. V., Shah-Coltrane, S., & Johnsen, S. K. (2009). RtI models for gifted children. Gifted Child Today, 32, 20-30.

Safran, S. P. (2006). Using the effective behavior supports survey to guide development of schoolwide positive behavior support. Journal of Positive Behavior Interventions, 8, 3-9.

Sanford, A. (2006). The effects of function-based literacy instruction on problem behavior and reading growth. Unpublished doctoral dissertation, University of Oregon.


Scott, T. M. (2001). A schoolwide example of positive behavior support. Journal of Positive Behavior Interventions, 3, 88-94.

Scott, T. M., & Barrett, S. B. (2004). Using staff and student time engaged in disciplinary procedures to evaluate the impact of school-wide PBS. Journal of Positive Behavior Interventions, 6, 21-27.

Shapiro, E. S. (2000). School psychology from an instructional perspective: Solving big, not little problems. School Psychology Review, 29, 560-572.

Shinn, M. R. (1986). Does anyone care what happens after the refer-test-place sequence: The systematic evaluation of special education program effectiveness. School Psychology Review, 15, 49-58.

Skiba, R., Reynolds, C. R., Graham, S., Sheras, P., Conoley, J. C., & Garcia-Vazquez, E. (2006). Are zero tolerance policies effective in schools? An evidentiary review and recommendations. Washington, DC: American Psychological Association.

Smith, M. L., & Heflin, L. J. (2001). Supporting positive behavior in public schools: An intervention program in Georgia. Journal of Positive Behavior Interventions, 3, 39-47.

Snow, C. E., Burns, M. S., & Griffin, P. (1998). Preventing reading difficulties in young children. Washington, DC: National Academy Press.

Spaulding, S. A., Horner, R. H., May, S. L., & Vincent, C. G. (2008, November). Evaluation brief: Implementation of school-wide PBS across the United States. OSEP Technical Assistance Center on Positive Behavioral Interventions and Supports. Retrieved from http://pbis.org/evaluation/evaluation_briefs/default.aspx

Sprague, J., Walker, H. M., Stieber, S., Simonsen, B., Nishioka, & Wagner, L. (2001). Exploring the relation between school discipline referrals and delinquency. Psychology in the Schools, 38, 197-206.

Spectrum K12 Solutions. (2009). RTI adoption survey. Retrieved December 17, 2010, from http://www.spectrumk12.com/rti_survey_results

Stage, S. A., & Jacobsen, M. D. (2001). Predicting student success on a state-mandated performance-based assessment using oral reading fluency. School Psychology Review, 30, 407-419.

Stewart, R. M., Benner, G. J., Martella, R. C., & Marchand-Martella, N. E. (2007). Three-tier models of reading and behavior: A research review. Journal of Positive Behavior Interventions, 9, 239-253.


Strein, W., Hoagwood, K., & Cohn, A. (2003). School psychology: A public health perspective I. Prevention, populations, and systems change. Journal of School Psychology, 41, 23-38.

Sugai, G., & Horner, R. H. (2006). A promising approach for expanding and sustaining school-wide positive behavior support. School Psychology Review, 35, 245-259.

Sugai, G., & Horner, R. H. (2007). Is school-wide positive behavior support an evidence-based practice? School-wide Positive Interventions and Supports Technical Assistance Center. Retrieved May 20, 2008, from http://pbis.org/files/101007evidencebase4pbs.pdf

Sugai, G., Horner, R., & Todd, A. (2003). Effective Behavior Support Self-Assessment Survey (Version 2.0). Eugene, OR: University of Oregon.

Sugai, G., Horner, R. H., & Lewis-Palmer, T. (2002). Effective Behavior Support Team Implementation Checklist (Version 2). Eugene, OR: University of Oregon.

Sugai, G., Lewis-Palmer, T., Todd, A., & Horner, R. H. (2001). The School-wide Evaluation Tool. Eugene, OR: University of Oregon, Positive Behavioral Interventions and Supports Technical Assistance Center.

Sugai, G., Sprague, J. R., Horner, R. H., & Walker, H. M. (2000). Preventing school violence: The use of office discipline referrals to assess and monitor school-wide discipline interventions. Journal of Emotional and Behavioral Disorders, 8, 94-101.

Tilly, W. D. (2002). Best practices in school psychology as a problem-solving enterprise. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 21-36). Bethesda, MD: National Association of School Psychologists.

Tobin, T. J., & Sugai, G. M. (1999). Using sixth-grade school records to predict school violence, chronic discipline problems, and high school outcomes. Journal of Emotional and Behavioral Disorders, 7, 40-53.

Torgesen, J. K., Alexander, A. W., Wagner, R. K., Rashotte, C. A., Voeller, K. K. S., & Conway, T. (2001). Intensive remedial instruction for children with severe reading disabilities: Immediate and long-term outcomes from two instructional approaches. Journal of Learning Disabilities, 34, 33-58, 78.

U. S. Department of Education (2006a). Reading First Implementation Evaluation: Interim Report. Washington, DC: [Author], Office of Planning, Evaluation and Policy Development, Policy and Program Studies Service.


U. S. Department of Education (2006b). Reading First State APR Data. Washington, DC: [Author], Office of Elementary and Secondary Education, American Institutes for Research.

Vaughn, S., Linan-Thompson, S., Elbaum, B., Wanzek, J., Rodriguez, K. T., Cavanaugh, C. L., et al. (2004). Centers for implementing K-3 behavior and reading intervention models preventing reading difficulties: A three-tiered intervention model. Unpublished report, University of Texas Center for Reading and Language Arts.

Vaughn, S., Wanzek, J., Woodruff, A. L., & Linan-Thompson, S. (2007). Prevention and early identification of students with reading disabilities. In D. Haager, J. Klingner, & S. Vaughn (Eds.), Evidence-based reading practices for response to intervention (pp. 11-28). Baltimore, MD: Paul H. Brookes Publishing Co.

VanDerHeyden, A. M., Witt, J. C., & Gilbertson, D. (2007). A multi-year evaluation of the effects of a response to intervention (RTI) model on identification of children for special education. Journal of School Psychology, 45, 225-256.

Walker, H. M., Horner, R. H., Sugai, G., Bullis, M., Sprague, J. R., Bricker, D., et al. (1996). Integrated approaches to preventing antisocial behavior patterns among school-age children and youth. Journal of Emotional and Behavioral Disorders, 4, 194-209.

Ysseldyke, J., Burns, M., Dawson, P., Kelley, B., Morrison, D., Ortiz, S., et al. (2006). School psychology: A blueprint for training and practice III. Bethesda, MD: National Association of School Psychologists.

