
Curriculum-based Measures (CBM): The Cornerstone of the RTI Pyramid

Long Island Association for

Supervision and Curriculum Development

Friday October 17, 2008

Lindenhurst Public Schools – The Team

Joseph LaMelza – Assistant Superintendent

Donna Smawley – Principal, Bower School

Roni Loud – Psychologist, Bower School

Maria Bohrer – Reading Teacher, Bower School

Carol Grasso – Kindergarten Teacher, Bower School

Debra Mauro – Reading Teacher, Bower School

Presentation Objectives

Understanding curriculum-based measures

Recognizing the importance of early literacy skill development

Identifying factors that contribute to the effective implementation of RTI

Understanding the necessity of managing data

Sharing ideas and insights

Essentials of Reading Instruction

Reading instruction MUST focus on instructional strategies that help overall reading ability, NOT only on isolated skills (Goodman; Pearson, 2006)

What is CBM?

CBM is an approach to measuring the growth of student proficiency in the core educational skills that contribute to success in school. It is a fast, inexpensive, and easy-to-use system that allows teachers to continually measure their students’ growth in performance, determine whether their students are growing at the expected rate, and provide data for teachers to evaluate their instructional strategies if students are not demonstrating adequate growth.

Deno, Lembke, and Anderson

CBM as General Outcome Measures - GOM

Relevant Features:
– Measure Big Ideas
– Efficient
– Standardized
– Sensitive to growth and change over time and to the effects of intervention

CBM Levels of Performance

1. Accuracy – barely able to do something without error, if you go slowly and concentrate

2. Fluency – can do something quickly without errors (no more than 5%). Fluency comes after accuracy and only with practice

3. Automaticity – can do something quickly without error and in the presence of distracters. Automaticity comes after fluency and with considerable practice.

CBM = Improvement in RATE

Fluency and automaticity are measured by rate (how fast it can be performed). Rate increases gradually as proficiency develops - which means it is measured over time. Improvement in rate is a measure of progress.
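
To make the rate idea concrete, here is a minimal Python sketch of how words correct per minute (WCPM) and weekly improvement in rate could be computed from timed oral reading samples; the scores, timing, and interval below are illustrative assumptions, not data from the presentation.

# Minimal sketch (hypothetical numbers): words correct per minute (WCPM)
# from a timed oral reading sample, and the weekly improvement in rate.

def wcpm(words_read: int, errors: int, seconds: float) -> float:
    """Words correct per minute for one timed reading."""
    return (words_read - errors) / (seconds / 60.0)

# Two probes administered four weeks apart (illustrative values)
first_score = wcpm(words_read=52, errors=6, seconds=60)    # 46.0 WCPM
second_score = wcpm(words_read=71, errors=3, seconds=60)   # 68.0 WCPM
weeks_between = 4

weekly_growth = (second_score - first_score) / weeks_between
print(f"First probe: {first_score:.0f} WCPM, four weeks later: {second_score:.0f} WCPM")
print(f"Improvement in rate: {weekly_growth:.1f} WCPM per week")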

READING RATE = COMPREHENSION

Reading connected text rapidly and accurately plays a crucial role in a student’s ability to comprehend.

Rapid word recognition frees up cognitive resources for higher-level comprehension processes (Fuchs et al., 2001)

ORAL READING FLUENCY IS RELATED TO OUTCOMES

Oral reading fluency predicted satisfactory comprehension skills with 80% accuracy for Grade 1 students and with 70% accuracy for Grade 2 students

Students with satisfactory oral reading fluency but low comprehension may have poor vocabulary skills

Students with good reading speed and accuracy but poor comprehension are the exception rather than the rule (Riedel, 2007)

The most legitimate use of oral reading speed is as Deno (1985) brilliantly conceptualized it: a way to monitor student progress.

The danger of using speed as the measure is that some students and teachers focus on speed at the expense of understanding.

Students need to simultaneously decode and comprehend using texts that increase in difficulty (Samuels, 2007)

FLUENCY IS MORE THAN SPEED!

R-CBM as a Predictor

Oral reading fluency correlates highly with comprehension:
– .67 (Good et al., 2001) and .70 (Buck and Torgesen, 2003) with state reading assessment scores for Grade 3
– .73 with Stanford Achievement for Grade 1 (Cook, 2003)
– .76 with Woodcock-Johnson Broad Reading Cluster for Grade 1 (Roberts, 2005)

What Makes a Big Idea a BIG IDEA?

Predictive of reading acquisition and later reading achievement

Something we can do something about, i.e., something we can teach

Something that improves outcomes for children if/when we teach it

Graney, 2006

BIG IDEAS

Phonemic Awareness
Alphabetic Principle
Accuracy and Fluency with connected text
Vocabulary
Comprehension

National Reading Panel, 2000

Early Literacy Probes / DIBELS

Most research is based on the body of knowledge regarding R-CBM

Early literacy probes were designed to be a downward extension of CBM before reading

Early literacy probes are short-term measures

Early literacy probes are in the CBM family, but are pre-skills*

*Don’t test on pre-skills when you can test the skill

Shinn, 2008

How can we use CBM to change Reading Outcomes?

Begin Early

Focus Instruction on the BIG IDEAS of Early Literacy

Focus Assessment on Outcomes for Students

CBM in Practice

The Big Ideas for Preventing Reading Failure:

Increase the quality, consistency, and reach of instruction

Universal screening with timely and valid assessments of reading growth as progress monitoring – formative vs. summative assessment

Provide more intensive interventions to “catch up” the struggling reader

Adapted from: Torgesen/Shinn, 2008

REMEMBER . . .

CBM are indicators

CBM is a specific set of procedures

CBM is for evaluation of instruction. It does not require a specific instructional technique

Use of CBM formative evaluation increases student achievement.

Graney, 2006

Definition of RTI

High-quality instruction/intervention that is matched to students’ needs and has been demonstrated through scientific research and practice to produce high learning rates for most students

Learning rate and level of performance are the primary sources of information used in ongoing decision-making

Important educational decisions about intensity and duration of interventions are based on individual student’s response to instruction across multiple tiers of intervention.

National Association of State Directors of Special Education, 2005

Multi-Tiered Response

Tier III – CSE Referral (FEW)

Tier II – Small Group Intervention, more intensive duration (SOME)

Tier I – Whole group classroom instruction (ALL)

CORE Concepts of RTI

Research-based instruction – core programs are taught with fidelity as intended to maximize effectiveness. Instruction is focused on achieving state standards

Use of data to inform instruction – universal screening of all students to measure and to monitor the development of skills – provide program accountability

Measurement of response – progress monitoring is used to determine the effectiveness of interventions – it is systematic, documented, and shared with staff

Interventions are NOT

Shortened assignments
Preferential seating
Parent contacts
Classroom observations
Suspensions
Doing more of the same assignments
Retention

McCook, J., 2005

Intervention Organized in Tiers

• Layers of intervention responding to students’ needs

• Each tier provides more intensive and supportive intervention

• Aimed at preventing reading disabilities

Torgesen, 2004

Multi-Tiered Response

Tier III – Strategic Monitor

Tier II – Progress Monitor

Tier I – Literacy CBM Benchmarks

3 Tier Model for RTI

Tier 3

More Differentiated Intense Interventions

*Increase frequency and duration of intervention

*Referral to Special Education

Strategic Monitoring

Tier 2

Implementing Supplementary Instruction

*General Ed Teacher, AIS Teacher, Related Service Providers, Special Ed Teachers

*Fundations, Wilson, Small Group Instruction through AIS Reading, ERSS Speech

Progress Monitoring

Tier 1

Implementing Classroom Instruction – General Ed Teacher

* Research-Based Curriculum – Harcourt Reading Program, Differentiated Instruction, focus instruction on the Big Ideas of Literacy.

Benchmark Assessments

Kindergarten

Fall – Initial Sound Fluency (ISF), Letter Naming Fluency (LNF), Letter Sound Fluency (LSF)

Winter – Letter Naming Fluency (LNF), Letter Sound Fluency (LSF), Phoneme Segmentation Fluency (PSF), Nonsense Word Fluency (NWF)

Spring – Same as Winter

Benchmark Assessments

Grade 1

Fall – Letter Naming Fluency (LNF), Letter Sound Fluency (LSF), Phoneme Segmentation Fluency (PSF), Nonsense Word Fluency (NWF)

Winter – Phoneme Segmentation Fluency (PSF), Nonsense Word Fluency (NWF), Oral Reading Fluency (ORF), Maze

Spring – Same as Winter

Benchmark Assessment – Cont’d

Grades 2 – 5
• Oral Reading Fluency
• Maze (Comprehension)
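
For planning data collection, the benchmark schedule above could be kept as a simple lookup table; the Python sketch below just mirrors the slides, and the structure itself is a hypothetical convenience rather than part of the presentation.

# Minimal sketch: the benchmark schedule from the slides as a lookup table,
# so a data-collection plan can list which probes each grade gets per season.
# Measure abbreviations follow the slides (ISF, LNF, LSF, PSF, NWF, ORF, Maze).

BENCHMARK_SCHEDULE = {
    ("K", "Fall"):   ["ISF", "LNF", "LSF"],
    ("K", "Winter"): ["LNF", "LSF", "PSF", "NWF"],
    ("K", "Spring"): ["LNF", "LSF", "PSF", "NWF"],
    ("1", "Fall"):   ["LNF", "LSF", "PSF", "NWF"],
    ("1", "Winter"): ["PSF", "NWF", "ORF", "Maze"],
    ("1", "Spring"): ["PSF", "NWF", "ORF", "Maze"],
    # Grades 2-5 use the same two measures at every benchmark period
    **{(grade, season): ["ORF", "Maze"]
       for grade in ["2", "3", "4", "5"]
       for season in ["Fall", "Winter", "Spring"]},
}

print(BENCHMARK_SCHEDULE[("1", "Winter")])   # ['PSF', 'NWF', 'ORF', 'Maze']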

Problem-solving Model – IST

Process, not interventions, is standardized

Individualized plan for each child that involves different levels of consultation:

•Description of student’s problem

•Data collection and problem analysis

•Intervention design and implementation – differentiated instruction determined by data

•Progress monitoring

•Evaluation of intervention effectiveness

•Flexible groupings throughout the year

Wilson, 2007

Three Levels of Assessment

Benchmark Assessment – 3 times a year
– Are there children who need additional support?
– How many?
– Which children?
– What to do? Evaluate benchmark assessment data

Progress Monitoring – assess at-risk children more frequently (every two weeks)
– Are current programs sufficient to keep progress on track, or are additional supports / interventions needed?

Strategic Monitoring – weekly monitoring
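
To show how a team might turn benchmark data into those answers (are there children who need support, how many, which children), here is a minimal Python sketch; the student names, scores, and cut score are hypothetical placeholders, not values from the presentation.

# Minimal sketch (hypothetical names, scores, and cut score): answering the
# benchmark-screening questions from the slide.

fall_orf_wcpm = {        # student -> oral reading fluency score in WCPM
    "Student A": 71,
    "Student B": 34,
    "Student C": 58,
    "Student D": 22,
}

CUT_SCORE = 40           # hypothetical benchmark target for this grade/season

needs_support = {name: score for name, score in fall_orf_wcpm.items()
                 if score < CUT_SCORE}

print(f"Children needing additional support: {len(needs_support)}")
for name, score in sorted(needs_support.items(), key=lambda item: item[1]):
    print(f"  {name}: {score} WCPM (below the cut score of {CUT_SCORE})")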

What decisions do we make with data?

Plan for support with focus on BIG IDEAS.
– Grouping – small group instruction, homogeneous groups, differentiated instruction, flexible grouping.
– Time – How much? How frequently? When?
– Teacher / Student Interactions – modeling, direct explanation, increase student engagement, increase guided practice with immediate feedback, scaffolding to support learning, review

Getting Started…..

Select a team
– Classroom teachers, reading specialists, psychologist, building principal, special education teacher(s), speech teacher, others. People who have a vested interest in reading and literacy outcomes.

Attend training sessions

Plan for data collection
– Who will collect data? When will you collect data? How will you collect data?

Collecting Data

Plan and Schedule Data Collection
Organize Resources
Collect Data
Enter the Data
Use Data for Educational Decision Making

Scheduling Data Collection

Classroom Approach – Obtain coverage for classroom teacher. Approximately 1-2 minutes per benchmark per student. Teacher works in hallway / room.

Advantages – Teachers assess their own students, less disruptive to the entire school.

Disadvantages – Loss of instructional time, coverage needed, requires more days.

Building-wide Approach – Multiple specialists / trained members of team will assess students. Teacher brings class to library, cafeteria, gym, or other location with tables. Entire class can be assessed in 30 minutes.

Advantages – can be completed in one day, minimal classroom disruptions and loss of instructional time.

Disadvantages – space, trained staff, teachers not assessing.
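
As a rough check on those time estimates, the arithmetic can be sketched in Python; the class size, number of measures, and number of assessors below are assumptions chosen to match the slide’s 1-2 minutes per benchmark per student, not figures from the presentation.

# Minimal sketch (assumed class size, measure count, and assessor count):
# rough time estimates for the two data-collection approaches.

class_size = 24            # hypothetical class size
measures_per_student = 3   # e.g., three benchmark probes in one window
minutes_per_measure = 1.5  # midpoint of the slide's 1-2 minute estimate

# Classroom approach: one teacher assesses every student in sequence
classroom_minutes = class_size * measures_per_student * minutes_per_measure

# Building-wide approach: several trained assessors work in parallel
assessors = 4
buildingwide_minutes = classroom_minutes / assessors

print(f"Classroom approach: about {classroom_minutes:.0f} minutes per class")
print(f"Building-wide approach with {assessors} assessors: about {buildingwide_minutes:.0f} minutes per class")

With these assumptions, the building-wide estimate lands near the 30 minutes per class cited on the slide.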

Data Management System

AIMSweb – Achievement Improvement Monitoring System

www.aimsweb.com

Instructional Recommendations

Improvement Report

Student Record

Progress Monitoring Chart
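
The progress monitoring chart can be sketched in a few lines of Python as an aimline from a baseline score to a goal; the baseline, goal, and weekly scores below are hypothetical, and the check of recent points against the aimline is only one common rule of thumb for judging response, not a rule stated in the presentation.

# Minimal sketch (hypothetical scores): an aimline from baseline to goal and a
# simple check of the most recent probe scores against it.

baseline_wcpm = 30       # score when the intervention begins
goal_wcpm = 60           # goal at the end of the monitoring period
total_weeks = 20

observed = [30, 33, 31, 36, 38, 37, 39]   # weekly probe scores so far

def aimline(week: int) -> float:
    """Expected score at a given week if growth follows the line to the goal."""
    return baseline_wcpm + (goal_wcpm - baseline_wcpm) * week / total_weeks

recent_scores = observed[-3:]
recent_targets = [aimline(week) for week in range(len(observed) - 3, len(observed))]
all_below = all(score < target for score, target in zip(recent_scores, recent_targets))

print("Last three scores:", recent_scores)
print("Aimline targets:  ", [round(target, 1) for target in recent_targets])
print("Consider adjusting the intervention" if all_below else "Progress is on track")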

School Readiness for RTI

Assessment: screening measures, progress monitoring practices and procedures

Curriculum: high-quality, research-based core curricula

Instruction: focus on effective instruction and interventions

School Readiness - Continued

Positive School Climate: school-wide processes and structures, individual student interventions, and a professional learning community

Professional Development: outcome focused content and ongoing assistance

Leadership: problem solving and individual characteristics of strong leaders

Closing the Achievement Gap: School Readiness for RtI, Sopris West Educational Services, 2007

See. . .

Fuchs, L. S., & Fuchs, D. (2006). Best practice in progress monitoring reading and mathematics at the elementary level. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 2147-2164). Bethesda, MD: National Association of School Psychologists.

Hosp, M. K., Hosp, J. L., & Howell, K. W. (2007). The ABCs of CBM: A practical guide to curriculum-based measurement. New York, NY: Guilford.

Wayman, M. M., Wallace, T., Ives Wiley, H., Ticha, R., & Espin, C. (2007). Literature synthesis on curriculum-based measurement in reading. The Journal of Special Education, 41(2), 85-120.

Shinn, M. R. (2008). Best practices in curriculum-based measurement and its use in a problem-solving model. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 243-262). Bethesda, MD: National Association of School Psychologists.

Thank You for Your Attention and Participation

