Hammill Institute on Disabilities

Obscuring Vital Distinctions: The Oversimplification of Learning Disabilities within RTI
Author(s): Robert G. McKenzie
Source: Learning Disability Quarterly, Vol. 32, No. 4 (Fall, 2009), pp. 203-215
Published by: Sage Publications, Inc.
Stable URL: http://www.jstor.org/stable/27740373
Accessed: 23/09/2012 16:40

Your use of the JSTOR archive indicates your acceptance of the Terms & Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp.

JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about JSTOR, please contact [email protected].

Sage Publications, Inc. and Hammill Institute on Disabilities are collaborating with JSTOR to digitize, preserve and extend access to Learning Disability Quarterly.

http://www.jstor.org

OBSCURING VITAL DISTINCTIONS: THE OVERSIMPLIFICATION OF LEARNING DISABILITIES WITHIN RTI

Robert G. McKenzie

Abstract. The assessment procedures within Response to Intervention (RTI) models have begun to supplant the use of traditional, discrepancy-based frameworks for identifying students with specific learning disabilities (SLD). Many RTI proponents applaud this shift because of perceived shortcomings in utilizing discrepancy as an indicator of SLD. However, many professionals and organizations have noted the substantial variability between various RTI models and urged cautious implementation. RTI models that utilize substantively different assessment procedures as a primary or singular means of SLD identification are likely to produce numerous sources of measurement error, threats to validity, inaccuracy in identification, and potential legal challenges. This article examines from a psychometric perspective the risks in replacing discrepancy-based identification of SLD with the myriad options for measuring students' responsiveness and nonresponsiveness to instruction within the intervention tiers of RTI.

ROBERT G. MCKENZIE, Ph.D., Department of Special Education and Rehabilitation Counseling, University of Kentucky.

The category of specific learning disabilities (SLD) has long been beset by challenges to the veracity of its definition. Many of the questions raised have been substantive, whereas others have been little more than rhetorical distraction. Professional rejoinders have ranged from reflexive to deliberate; yet, viewed collectively and in historical perspective, they portray a field in a seemingly perpetual identity crisis. This is evidenced, in part, by the myriad attempts to provide alternative definitions. For example, Hammill (1990) noted that in the first 25 years of SLD as a category of disability, 11 different definitions received some degree of professional endorsement.

Illustrating this point, current introductory-level college textbooks typically provide both the 1977 federal definition of SLD contained in the reauthorized Individuals with Disabilities Education Act (IDEA) of 2004 and various alternative definitions proffered by professional organizations, such as the 1989 proposal by the National Joint Committee on Learning Disabilities (NJCLD) and that of the Learning Disabilities Association of America (LDA) in 1986 (contained in Hallahan, Kauffman, & Pullen, 2009, and Hallahan, Lloyd, Kauffman, Weiss, & Martinez, 2005, respectively). Although such contrasts between definitions provide a context by which to appreciate areas of professional disagreement, they also highlight the protracted and unsuccessful quest for consensus in the field.

Dissatisfaction with the SLD definition derives more commonly from perceived shortcomings in assessing the disability than in the validity of the construct itself.

Volume 32, Fall 2009 203


This is evident in the numerous changes in identification procedures (e.g., grade-level deviations, intelligence-achievement discrepancy formulae) that have been proposed, implemented, and often subsequently revised over the years (Mellard, Deshler, & Barth, 2004) without a corresponding modification of the definition itself. Despite, and perhaps because of, the significant time and attention given historically to the definition (Kavale, 2005), the absence of change has resulted in a widespread belief that SLD cannot be clearly differentiated (Kavale, Spaulding, & Beam, 2009). Such doubt has produced concern that the category is often considered merely an oversophisticated representation of low achievement and that determination of SLD is frequently an arbitrary decision rather than one reached through a cohesive and diagnostically rigorous process (Fuchs, Deshler, & Reschly, 2004; Kavale, Holdnack, & Mostert, 2006).

The most recent source of dissonance related to identification of SLD is also the most prominent and widely discussed to date; namely, the eligibility schema contained in many proposed Response to Intervention (RTI) models. Because the reauthorization of IDEA stipulated that establishing an intelligence-achievement discrepancy (henceforth referred to as "discrepancy") is no longer required to determine SLD (Zirkel & Krohn, 2008), many states have begun to examine the use of RTI for such a purpose, and either already have or are in the process of phasing out the discrepancy option (Berkeley, Bender, Peaster, & Saunders, 2009). Professional concern has been expressed regarding the lack of research undertaken to validate RTI as a process for identifying SLD (Hollenbeck, 2007) and the consequent "mixed message about identification of learning disabilities, one that mistakes the measurement of the construct for the construct itself ..." (Gerber, 2005, p. 519). The often contentious debate surrounding this latest critique of SLD identification reflects the striking differences between viewpoints within the field. Whereas, for example, many professionals believe that RTI will enhance the accuracy of identifying students with SLD (Bradley, Danielson, & Doolittle, 2005; Fuchs, Fuchs, & Compton, 2004; Vaughn & Fuchs, 2003), others are concerned that those processes will have the opposite effect (Kavale & Spaulding, 2008) and further obscure SLD as a disability distinct from low achievement (Kavale, Kauffman, Bachmeier, & LeFever, 2008; Mastropieri & Scruggs, 2005).

As implementation of RTI expands, the relative merits of SLD identification methods in these models must be examined, particularly in relation to the discrepancy approach. Such a comparison must (a) provide a context for arguments against the discrepancy approach, (b) describe the basic structure and assessment components within RTI models, and (c) detail the arguments against the assessment methodology within RTI.

INTELLIGENCE-ACHIEVEMENT DISCREPANCY

The concept of discrepancy and its measurement has historically been at the forefront of controversies related to identification of SLD (Vaughn & Fuchs, 2003). Some attribute these identification issues to the disparity between the federal definition and its operational components. For example, Kavale (2005) noted that despite its standard use as an SLD eligibility marker, discrepancy is not contained in the definition itself. Although this assertion is accurate in principle, use of the discrepancy concept was neither serendipitous nor did it simply evolve among practitioners independent of the definition.

Hallahan and Mercer (2002) pointed out that Monroe introduced the notion of discrepancy among students with reading disabilities in 1932, thus demonstrating that the concept of discrepancy predates the use of the term learning disabilities and its operational criteria by several decades. However, the specific targeting of discrepancy in the evaluation of referred students is traceable to the same federal publication that, separate from the definition, listed suggested procedures for identifying students with SLD. This included reference to such a student having "a severe discrepancy between achievement and intellectual ability ..." (U.S. Office of Education, 1977, p. 65083). Thus, the conceptual underpinnings of determining SLD eligibility entail an expectation that a student demonstrate unexpected low achievement relative to general cognitive ability as measured by intelligence testing (Hallahan et al., 2007; Kavale et al., 2006). Advocates of the discrepancy component contend that because SLD is fundamentally different from low achievement (Johnson, Mellard, & Byrd, 2005; Kavale, 2005; Kavale & Spaulding, 2008; LDA, 2006), the only means to establish that low achievement also represents underachievement is to contrast the outcome with general cognitive ability (Kavale, 2005; Kavale et al., 2008; Mastropieri & Scruggs, 2005). In that respect, discrepancy represents the operational definition of underachievement and a necessary, but not sufficient, condition to establish the presence of, and eligibility for, SLD (Kavale et al., 2006).

Opponents of the discrepancy component include professionals who consider RTI to be not only a viable means by which to target academically at-risk students, but also a more effective means of determining eligibility for SLD. Chief among their criticisms of discrepancy are those related to its (a) presumed role in increased and varying prevalence rates in SLD, (b) lack of power in differentiating among early-grade students with deficient reading skills, and (c) irrelevance as a medium to inform instructional decisions. A comparison between the disparate positions related to discrepancy requires that each of these three concerns be examined.

Increased and Varying Prevalence

The twofold increase in the prevalence of SLD in the last 30 years is often attributed to a presumed increase in "false-positive" outcomes resulting from application of the discrepancy concept (Berkeley et al., 2009). However, if discrepancy is applied as merely a "first gate" in determining eligibility, it cannot be solely, or even primarily, responsible for such an increase. Because a growth in prevalence is as likely to indicate previous underidentification as present overidentification, a circular argument exists that cannot be resolved.

A more cogent concern relates to the significant degree to which SLD prevalence is believed to vary between states (Reschly & Hosp, 2004). However, the research of Hallahan et al. (2007) indicated that contrary to perceptions of widely varying prevalence, SLD actually exhibits the least variability of any category across all states, leading to the assertion that "using state-to-state variability of disability rates as justification for criticizing learning disabilities identification practices is largely unfounded" (p. 142).

Although the Hallahan et al. (2007) research weakens arguments regarding SLD prevalence variability, an additional point that is commonly raised concerning the presumed impact of discrepancy on state-to-state variability merits analysis. Some professionals have noted a lack of unanimity among states in how discrepancy is operationally defined (e.g., by formulae) and the variety of instruments used in its determination (Fuchs, Fuchs, et al., 2004; Reschly, 2005). The speciousness of this argument is seen through a comparison of states' assessment practices in SLD and cognitive disability (i.e., mental retardation), which, like other categories, has undergone changes in definition over time. That is, the same variability that is evident in comparing SLD discrepancy criteria and the measures used to establish it is also evident in the assessment of mental retardation (MR), perhaps to an even greater extent. In their analysis of states' eligibility guidelines for MR, Bergeron, Floyd, and Shands (2008) found considerable variation in the IQ cutoff scores, including those citing specific scores (ranging from below 70 to 80) and standard deviations, from more than -2 to only -1, depending on performance outcomes in other areas. Further illustrating states' autonomy in determining MR eligibility, 77% did not specify an outcome to satisfy the maladaptive behavior dimension.

In the same manner, the fears expressed regarding different assessment instruments being employed to determine discrepancy for SLD are hollow when comparison is made to the same (or greater) variability in the assessment of MR. That is, a variety of measures are used to determine IQ for both SLD and MR (e.g., WISC-IV and KABC-II). Similarly, depending on the district and/or the training of the examiner, various instruments are used to measure adaptive behavior, such as the AAMR Adaptive Behavior Scale-School Edition and the Vineland Adaptive Behavior Scales-2. Therefore, one may reasonably conclude that SLD is no more vulnerable than MR to variations in prevalence that may result from differences in state guidelines and the instruments utilized to determine eligibility.

A more salient issue regarding variability in the rates of SLD identification relates to the degree to which practitioners adhere to state eligibility guidelines. In their analysis of SLD stakeholders' perceptions of identification practices, Mellard et al. (2004) determined that programs were more closely monitored according to caseload size than the degree to which students actually had a learning disability. That is, when service availability exists (i.e., room below cap limits), eligibility teams' decisions are primarily motivated by a desire to provide services to students who are perceived to be able to benefit. The authors concluded:

Hence, addressing classroom needs (rather than the objectivity that is at the cornerstone of most LD identification models) appears to play a major role in the decision-making process, often overriding concerns about following district or state guidelines relative to LD identification. ... Stakeholders appear to be much more concerned about getting services to children ... than ensuring that they are getting the decision right about whether or not the student has a disability. (p. 241)

Such lack of fidelity to established guidelines might contribute to the skepticism of those who question the very construct of SLD because of the variability in its prevalence. However, as is true with any instrumentation, instances of improper implementation do not negate the legitimacy of the measure. Additionally, unless data can be produced to verify that widespread contraventions in identification rigor are present, differences between schools in a particular state are unlikely to have an appreciable influence on statewide prevalence. Even if such was the case, believing that within-state variations contribute to significant differences between states defies reason. To conclude otherwise necessitates the collection, analysis, and disclosure of "quality control" data that are not earmarked and maintained by school districts. Specifically, unless a school-by-school, district-by-district, and state-by-state comparison of individual SLD student eligibility files were undertaken, the conclusion that some districts and states are more rigorous than others is without foundation and will remain unprovable.

As Kavale and Spaulding (2008) concluded, discrepancy presents less of an issue in psychometric validity than in implementation rigor. Analogous to a thermometer that is, in fact, objectively reliable and valid, the accuracy of a reported temperature is also dependent on the vision and attentiveness of the person who reads it. Inaccuracy in the reading that is due to the latter factors does not invalidate the utility of the instrument itself.

Lack of Diagnostic Discrimination

Many critics of discrepancy have noted that it lacks sufficient power to differentiate among students with low reading ability (Fuchs, Fuchs, et al., 2004; Vaughn & Fuchs, 2003). For example, among students with deficient reading skills, many exhibit a discrepancy between reading skill and cognitive ability whereas many do not. From a statistical standpoint, any concern about such differences lacks cogency. A pool of students with poor reading skills have that trait in common, but there are likely substantive differences in the explanations for this deficit, vis-à-vis their general cognitive ability. For instance, students with cognitive ability in the IQ range of 70-85 constitute approximately 14% of the population and, if performing at expectation, will be between one and two standard deviations below the mean of grade peers in reading. In those cases, a discrepancy does not exist. Conversely, the same pool of students will also contain students whose IQ scores predict greater skill development than what is witnessed. Those students reflect a discrepancy. In short, within the same pool, some students have a discrepancy and some do not, which is entirely plausible statistically.

Discrepancy is also often judged to lack power as a discriminating variable except among older students (Fuchs & Young, 2006), causing many students to remain unidentified until fifth grade and beyond (Bradley et al., 2005; Fuchs & Fuchs, 2007). If the degree of discrepancy were static among SLD students, this would be a valid criticism of the construct; however, discrepancy is not static. Many SLD students are not identified until the middle- and high-school years for a variety of reasons. The shift in the nature and pace of content instruction (Mastropieri & Scruggs, 2005) and the reliance on large-group arrangements (Harbort et al., 2007) accentuate the impact of deficits in basic skills. In such cases, the ability to acquire information that will be expressed on the standardized, norm-referenced tests used in eligibility determination is further compromised, and the degree of discrepancy widens correspondingly.

These factors are natural, reasonable, and educationally relevant explanations of delayed identification. Although SLD students should be identified as early as possible, doing so would necessitate one of two actions. One would entail narrowing the discrepancy criteria to be more sensitive. Unfortunately, this would exacerbate the likelihood of false positives, and thereby further increase prevalence rates. The second action, and one common to many RTI models, would be to eliminate discrepancy-based identification entirely. As will be discussed later in this article, such an approach is imprudent and would likely result in an even greater number of false positives.

Irrelevance to Instruction

Among the assertions made by RTI advocates is the argument that the field of SLD "has concentrated more on the search for the specific condition of LD and its cause than on intervention effectiveness" (Bradley, Danielson, & Doolittle, 2007, p. 9) and that the discrepancy approach to identification "does not provide information that can be used to make instructional decisions" (Bradley et al., 2005, p. 485).

A distinction between the dual assessment purposes of eligibility determination and diagnostic insight is necessary. The determination of discrepancy was never conceived for the purpose of informing instructional decisions (Kavale, 2005). A discrepancy verifies unexpected underdevelopment of basic academic skills and nothing more. Depending on the nature of the skill measures employed for contrast with a student's IQ, instructional insight may or may not be derived. This illustrates the importance of exercising quality control and rigor in assessment and eligibility endeavors. For example, if the instrument employed to measure academic skills is simply a broad-based measure with limited sampling of reading, writing, and arithmetic, little (if any) instructionally relevant diagnostic information will be yielded. Conversely, a measure that focuses exclusively on the diagnosis and error analysis of a skill area not only explains classroom performance, but also guides practitioners toward meaningful instructional goals. Measures used to determine SLD eligibility should never stand alone, but should be supplemented by informal and/or curriculum-based assessment designed to pinpoint specific patterns of deficit.

STRUCTURE OF RESPONSE TO INTERVENTION

RTI is most appropriately characterized as a preventive, multitiered process of instruction and assessment designed to enhance early identification of students who exhibit deficits in basic skills relative to developmental expectation and who are, therefore, at risk for falling further behind without instructional modifications. The process involves "implementing high-quality, scientifically validated instructional practices based on learner needs, monitoring student progress, and adjusting instruction based on the student's response" (Bender & Shores, 2007, p. 7).

Although RTI has garnered considerable professional attention recently, numerous areas in need of research remain (Fuchs & Deshler, 2007). Many professionals and organizations have noted the substantial variability and inconsistencies between various RTI models (Berkeley et al., 2009; Fuchs & Fuchs, 2007; Mastropieri & Scruggs, 2005; Zirkel & Krohn, 2008) and urged cautious implementation. The purpose of this article is not to provide an exhaustive review of the various RTI models. However, analysis of the assessment methods used to identify skill deficits (and potentially SLD) within RTI and their relative merits compared to the discrepancy approach requires depiction of the instructional and assessment tiers.

Although RTI is theoretically applicable to any basic skill, including writing and mathematics, to date models have focused almost exclusively on early literacy skills (Johnson et al., 2005; Kavale, 2005; Mellard, Byrd, Johnson, Tollefson, & Boesche, 2004) and have often ignored secondary-level students (Berkeley et al.; Mastropieri & Scruggs; Mellard et al.). Readers who are interested in detailed descriptions of RTI will find many excellent references to serve that purpose. These include the work of Bender and Shores (2007), Berkeley et al. (2009), Bradley et al. (2007), and Fuchs and Fuchs (2007).

Tiered Instruction and Assessment

Considerable variability is evident in proposed RTI models as well as those that have already been implemented in many states (Berkeley et al., 2009; Fuchs & Deshler, 2007). Some differences are relatively superficial; others are substantive and likely to have significant influence on identification of students with SLD and the role special educators play. Although the majority of models utilize three tiers (Berkeley et al.; Hoover & Patton, 2008), Bradley et al. (2007) noted that four-tier models exist, such as that described by Reschly (2005). Built into all RTI models is the utilization of research-validated instructional methodology (Bender & Shores, 2007; Fuchs & Fuchs, 2007) within each tier. The three tiers common to RTI models are summarized below.

Tier one. Although the entirety of the RTI process may reasonably be considered preventive in intent, the first tier has been referred to as "preventive" by Berkeley et al. (2009). It has also been labeled as the "universal core program" (Council for Exceptional Children [CEC], 2008). This tier typically consists of whole-group instruction and the administration of universal screening at the outset of the school year to identify students who are underperforming in basic skills (e.g., reading). Students who perform above the criterion chosen to indicate risk (discussed in the next section) continue receiving the entirety of their instruction in the established general education manner. These students are judged to be "responsive." Those who fall below the criterion are "nonresponsive" and proceed to the second tier.

Tier two. This tier has been variously referred to as "secondary intervention" (Berkeley et al., 2009) and "secondary prevention" (Reschly, 2005). Here, supplemental, more intensive instruction is provided, most commonly in small groups (Berkeley et al.; Reschly), though individual tutoring is also possible (Fuchs, Fuchs, et al., 2004). Considerable variability in the recommended duration is noted, ranging from 8 weeks or less (Bradley et al., 2007) to 15 weeks of 4 weekly sessions lasting 45 minutes each (Fuchs & Fuchs, 2007). Although service is most often conceived to occur in the general education classroom, it may entail pull-out (Hoover & Patton, 2008).

Instruction within this tier (as well as the next) utilizes one of the following approaches: (a) standard treatment protocol, (b) problem solving, or (c) a combination of each. Standard treatment protocol refers to the utilization of research-validated intervention methodology designed for the acquisition of new skills (Fuchs & Fuchs, 2007). Problem solving has been likened to the long-standing concept and practice of prereferral intervention (Kavale & Spaulding, 2008; Kavale et al., 2006), in that a student's poor performance in the classroom prompts a team-based examination of possible modifications within the general education classroom that will benefit the student. As noted by Berkeley et al. (2009), the majority of states use a combination of standard protocol and problem solving intervention approaches.

Within Tier Two, progress monitoring may reveal a student's growth (i.e., "responsiveness"), resulting in intervention maintenance or, if sufficiently robust, a return to Tier One. Conversely, if performance below the criterion continues, the "nonresponsive" student progresses to the third tier of intervention. Similar to effective prereferral intervention, RTI modifications are individually tailored according to the student's needs and acquired skills (Fuchs & Fuchs, 2007), which necessitates training in collaboration skills among both general and special educators. Although the literature does not reveal research related to teacher training in RTI, a recent national survey of special education preservice training programs (McKenzie, in press) indicated a paucity of structured collaboration content and collaborative experiences among special and general educators, which is likely to specifically include RTI.

Tier three. This represents the most intensive level of intervention, likely necessary for less than 5% of the general education population (Berkeley et al., 2009), and utilizes the same scheme to determine a student's responsiveness or nonresponsiveness. This tier appears to present the most variability among RTI models (Berkeley et al.) and has generated the most debate relative to the role of special education. Whereas some professionals consider this final tier to represent special education (Fuchs & Fuchs, 2007; Reschly, 2005), others are less decisive, suggesting that this level of intervention "might or might not be similar to traditional special education services" (Bradley et al., 2007, p. 9). Still others feel that special education must be entirely separate and that all RTI functions must be the exclusive province of general education (CEC, 2008; Kavale et al., 2008).

In light of such widely discrepant postures regarding the role and occurrence of special education within RTI, the results of a national survey conducted by Berkeley et al. (2009) are instructive. The authors noted that, "in all three-tier models, special education placement is considered to be a separate process that occurs after RTI interventions have been exhausted" (p. 91), and that although some states permit formal referral to special education in earlier tiers, most states indicate that it occurs after failure to reach criterion in Tier Three. That many RTI proponents portray discrepancy-based SLD identification as a wait-to-fail approach (e.g., Lyon, 2005) is also instructive as well as paradoxical, for in many models (such as those described above), referral for full evaluation and potential special education service is utterly dependent on the occurrence (and often the persistence) of failure. Thus, the emerging characterization of RTI by some professionals as a "watch-them-fail" model (Reynolds & Shaywitz, 2009) is not without substance.

The picture that emerges regarding implementation of RTI is strikingly similar to previous efforts within the field to enhance prereferral intervention, which has been characterized as "a process that can best be described as a theme and a variation ... a process that is conceptualized in a similar manner, across and within states, but a process that is carried out in a variety of ways" (Buck, Polloway, Smith-Thomas, & Cook, 2003, p. 358). The variation evident in the intervention tiers, however, is surpassed by the myriad proposals for measuring student progress within RTI, which serve to influence the likelihood (positively or negatively) of a student receiving special education.

RTI ASSESSMENT MODELS

The professional literature reflects considerable differences in opinion concerning the role of RTI in SLD eligibility decisions, including if and when referral for a comprehensive special education evaluation is warranted (Berkeley et al., 2009; Burns, Jacob, & Wagner, 2008). Although most RTI models indicate that referral should occur when a student fails to show growth after third-tier intervention (Bradley et al., 2007), CEC (2008) has expressed concern that such a process protracts the referral and identification of students suspected of having a disability.

Considerable debate has also resulted from concerns

regarding the capability of RTI to serve as the primary (and perhaps sole) SLD identification and eligibility

mechanism. For example, some RTI proponents contend that measuring student performance within each of the tiers eliminates the necessity of administering intelligence tests to determine SLD (Bradley et al., 2005). To this point, Vaughn and Fuchs (2003) noted, "the assumption is that if corrective adaptation in general education cannot produce growth for the individual, then the student has some intrinsic deficit (i.e., disability) ..." (p. 138). Many RTI proponents also support the corollary judgment; namely, if a student does not fall below the performance criterion selected within the tiers, the student is judged to not have a disability (Fuchs, Fuchs, et al., 2004). Such views are in sharp contrast with those who consider intelligence testing the only means by which to determine the "expected" level of a student's performance, and thus enable differentiation between SLD, low achievement due to low ability, and other categories of mild disability (Kavale, 2005; Kavale et al., 2008; Mastropieri & Scruggs, 2005).

Amid the ongoing, multifaceted debate of its merit as an SLD identification model, RTI is being implemented in precisely that manner in many areas. For example, Berkeley et al. (2009) noted that RTI currently is the

only option for SLD identification in two states. Of

greater import, as many as one third of states intend to

implement RTI as the sole means of SLD identification in the near future (Hoover, Baca, Wexler-Love, &

Saenz, 2008). As planning evolves for expanded implementation,

professionals are obliged to critically examine the

potential value and risk of this identification scheme,

particularly in comparison to the long-standing discrepancy model. The means by which RTI identifies students as R (responsive) or NR (nonresponsive) must be evaluated according to the measures, criteria, and normative comparisons employed. Considering the dissatisfaction among many RTI advocates with the discrepancy approach to SLD identification, and the position of some that intelligence testing is of dubious value, a balanced analysis requires that RTI assessment methods for determining SLD eligibility be judged

Learning Disability Quarterly 208


according to their ability to stand alone without traditional, full-scale evaluation (including intelligence testing).

Measures

Although considerable autonomy is evident at the

school, district, and state levels in the instruments chosen and the specific criterion selected (Berkeley et al.,

2009; Kavale et al., 2006; LDA, 2006; Mastropieri &

Scruggs, 2005), the variety of measures proposed for use

as an initial universal screen and subsequent post-intervention determination as R or NR may be categorized as

those that involve formal, standardized measures and

those that are informal and curriculum-based (e.g., the

number and percentage of words read correctly per minute). Regardless of the measure employed in RTI, two aspects must be evaluated - its practicality and the

degree to which measurement error is introduced, thus

increasing the probability of false positives. Among formal, standardized assessments, The

Woodcock Reading Mastery Tests-Revised (WRMT; Woodcock, 1998) is often cited as a useful RTI barometer of reading skills (Fuchs & Deshler, 2007), largely owing to its popularity among practitioners as a norm-referenced, diagnostic test of reading (Taylor, 2009). There have also been suggestions that using only the word identification subtest of the WRMT is a reasonable measure of R and NR (Fuchs & Deshler). Another instrument that is commonly cited is the Dynamic Indicators of Basic Early Literacy Skills (DIBELS; Good

& Kaminski, 2002). The popularity of this instrument within RTI models is attributable to its focus on

comparing student performance to local norms (Salvia,

Ysseldyke, & Bolt, 2010) and its applicability to curriculum-based measurement of early literacy skills

(Compton, 2006; Taylor). Questions have been raised regarding the supply and

training of personnel to conduct second- and third-tier interventions in RTI that are administered apart from

special education placement (Fuchs & Deshler, 2007; NJCLD, 2005; Vaughn & Fuchs, 2003). The same questions must be posed regarding administration of the assessment component (Mastropieri & Scruggs, 2005).

Compliance with the Standards for Educational and

Psychological Testing of the American Educational Research Association requires personnel who administer tests to possess sufficient training, credentials, and experience to do so (Taylor, 2009). Thus, comparability of

training in test administration is essential in order to minimize a source of error that will threaten the validity of the measures. The array of individuals who have been suggested as resources for second-tier intervention includes special educators (Fuchs & Deshler), school psychologists and paraprofessionals (Fuchs & Fuchs, 2007),

peer tutors, parent volunteers, and remedial reading teachers (Mellard, Byrd, Johnson, Tollefson, & Boesche,

2004). If a similarly diverse range of individuals serves the assessment function, the cost of ensuring equivalent training and familiarity with the standardization and administration of each instrument will be extraordinary and likely prohibitive.

Notwithstanding concerns related to personnel and

training, the process of administering an entire standardized test such as DIBELS or WRMT (or merely a subtest within the battery) across one or more classes will be excessively time-consuming and impractical if the tests are administered as standardized and used as

intended. That is, each instrument was designed for individual administration (Salvia et al., 2010), so were it

possible to conduct the assessment (or a portion thereof) with a group, the deviation from the instrument's standardization and intended use would undermine the validity of scores that are yielded.

The same issue relates to administering, scoring, and

acting upon the score derived from a single subtest within a battery. A test composed of multiple subtests is

standardized, and the corresponding norms derived, by each subject in the standardization sample completing the test in a prescribed sequence and in its entirety. By choosing to isolate and administer only part of such a

test, the choice is being made to breach standardization, introduce a source of error, and heighten the likelihood of false positives and false negatives.

Finally, even if all the aforementioned variables were

adequately controlled, the possible threat to validity posed by setting differences is significant. Such threats are inherent whenever outcomes in different classrooms are compared and often transcend those that may otherwise be attributed to differences in training. In traditional evaluation for special education, the when and

where of assessment is controlled to the maximum extent possible, thus improving comparability of settings and minimizing error. These controls include, for

example, assessments being individually administered at an optimal time for the student and in a distraction-free testing environment. Exercising similar control in RTI assessment, particularly at the universal screening stage, will be onerous and virtually unattainable in most circumstances. Imagine, for example, the proposition of

designing clinically controlled administrations to every early-grade student in a particular school. Unless that can be accomplished, the natural differences in settings are likely to increase the error present.

The necessity of minimizing sources of error to ensure

validity is not restricted to standardized, norm-referenced measures. Although using repeated measures of

progress monitoring over a number of weeks to determine NR may yield a more utilitarian and accurate

Volume 32, Fall 2009 209


sampling of student performance than a single administration of a norm-referenced measure (e.g., WRMT), if RTI uses informal, curriculum-based measures such as

fluency of word reading, several important matters of control must be exercised when scores across similar

grade levels are compared to determine students' R and NR status. In essence, a de facto manner of standardization is necessary.

For example, ensuring that the teacher/examiner is

reasonably trained to administer and score informal measures is essential. When different individuals are

administering a measure to same-grade students in different classrooms, inter-examiner reliability is critical.

Similarly, comparability of measures must be demonstrated. For instance, imagine that fluency of word reading for first-grade students is being measured, yielding data on the percentage of words read correctly and the rate of words read correctly per minute. In this circumstance, some differences in measure are obvious and others are subtler, albeit equally influential. If one teacher administers a word list and another a connected narrative, the inherent differences in the task invalidate the comparison of students' performances.

Additionally, regardless of the material used, the content validity of the measure must be examined and, in the case of different classrooms and examiners, correlation between the measures is necessary. Suppose, for example, that two teachers choose to measure word-reading fluency in the same manner. If this involves reading narrative, they cannot accept at face value that the passages are at the same readability level simply because they are presented as such by the publisher. This potential problem is compounded when each teacher uses a different passage as the source of outcome measures.

Less obvious are differences in examiner scoring that

may contribute covert error. Using the same example, whether through a word list or narrative, one teacher

may score a student's self-correction on a word as an error whereas another may consider it correct. Such differences in judgment will contribute to substantive differences in some students' scores and hence threaten the validity of R and NR determination.

RTI Criteria

To the extent that RTI is often presented as a framework for identifying SLD students without using the achievement-intelligence discrepancy criterion, it is perhaps somewhat ironic that the various criteria proposed to determine students' standing as R or NR all involve measuring discrepancy in some fashion, albeit without measured IQ. Kavale et al.

(2006) categorized these RTI alternative discrepancy models as producing indices of "absolute" and "relative" discrepancy, depending on the measurement framework adopted by a school or district.

Absolute discrepancy. As noted earlier, various measures may be selected and applied as both a universal screen and an assessment of student responsiveness to intervention. One approach is the benchmark method

(Fuchs & Deshler, 2007; Fuchs & Fuchs, 2007), which

targets a criterion-referenced indicator of satisfactory progress within the curriculum. Students who fail to attain the standard reflect an absolute discrepancy, are considered NR, and proceed to the next tier of intervention. Various criterion sources have been proposed, including state-approved grade-level standards (Bradley et al., 2007), curriculum-based measurement (Fuchs &

Fuchs), and the use of DIBELS to indicate whether word-reading fluency benchmarks have been attained at each

grade level (Compton, 2006). A second approach to

determining an absolute discrepancy utilizes standardized, norm-referenced tests (e.g., WRMT) and the resultant standard scores and percentile ranks.

Fuchs and Deshler (2007) and Fuchs and Fuchs

(2007) have cited the method proposed by Torgesen et al. (2001) to determine responsiveness to intervention. This approach classifies outcomes on the WRMT (or merely the word identification subtest), for example, as follows: Students with standard scores of 90 or above

(or a percentile rank of 25 or above) are determined to be R, whereas those below are NR. This method is illustrative of the philosophical chasm between those who

agree with Bradley et al. (2005) that, "IQ tests do not need to be given in most evaluations of children with SLD" (p. 485), and those who believe that SLD identification, and the corresponding differentiation from low achievement due to low ability, must account for the intra-individual level of expectation by means of intelligence testing (Kavale, 2005; Mastropieri & Scruggs, 2005).
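This cutoff rule reduces to a few lines (a sketch only; the function name is ours, while the thresholds are the ones cited from Torgesen et al.):

```python
def classify_absolute(standard_score, percentile_rank=None):
    """R if the standard score is 90 or above (or the percentile rank is
    25 or above); otherwise NR. Thresholds as cited from Torgesen et al."""
    if standard_score >= 90:
        return "R"
    if percentile_rank is not None and percentile_rank >= 25:
        return "R"
    return "NR"

print(classify_absolute(92))  # R
print(classify_absolute(88))  # NR
```

Note that the rule is purely absolute: no information about the student's expected level of performance enters the decision.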

A parallel issue in this approach concerns the oversimplification in many cases of what actually constitutes an individual student's status as R or NR. Two

examples will illustrate this problem. Imagine a pool of early-grade students whose literacy

skills are being assessed in Tier One (the example is

applicable at any tier). Although none of the students'

intelligence has been assessed, and hence no IQ scores have been established, a range of cognitive ability exists within this pool. In effect, each student has a level of intelligence that is yet to be formally measured and authenticated as an IQ score. One student's IQ, if measured, would indicate functioning in the 80-85

range. During RTI assessment, this student attained a standard score of 88 on the WRMT.

Using the absolute discrepancy approach, this student would be deemed NR due to the WRMT score, proceed to the next tier, and possibly be determined to

be eligible for special education if the outcome

occurred in Tier Two or Three. But is this student actually NR? If the concept of expected level of achievement were applied to determining R vs. NR, this student is actually slightly above expectation and would be reasonably considered R. In fact, a legitimate argument could be made that altering instruction for this student

likely poses more risks than benefits. A full, traditional evaluation for SLD, including intelligence testing, would produce a profile that reveals the expected level of achievement, the student's "success" in his current instructional environment, and the folly of changing it. This student is, quite simply, a "false positive" that

directly resulted from the RTI absolute discrepancy approach.

The same logic applies to cases in which the benchmark method is used to establish performance discrepancy. Because approximately 23% of students have

IQ scores between 70 (the commonly used cut point for mental retardation) and 90, unless a convincing argument can be made that all children, including those 23%, can be expected to attain the benchmark, some will fall below but actually be quite responsive to the instruction that they are afforded. The issues raised within the preceding example are also germane to the "relative" discrepancy and "dual discrepancy" approaches that are discussed in the next section of this article.
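The false-positive mechanism in this example can be made concrete with a short sketch (illustrative only: the 90 cut point is the one cited earlier, while the expectation-referenced comparison, its inputs, and the function names are our own assumptions, not a clinical formula):

```python
def absolute_judgment(score, cut=90):
    """R/NR under an absolute cut point (90, as in the method cited above)."""
    return "NR" if score < cut else "R"

def expectation_judgment(score, ability_estimate):
    """Hypothetical expectation-referenced view: a student scoring at or
    above the level suggested by measured ability is treated as responsive.
    Purely illustrative, not a diagnostic procedure."""
    return "R" if score >= ability_estimate else "NR"

# The text's hypothetical student: ability (if measured) in the 80-85
# range, and a WRMT standard score of 88.
print(absolute_judgment(88))         # NR under the absolute cut
print(expectation_judgment(88, 85))  # R relative to expectation
```

Applied to a score of 92 against an ability estimate near 115, the same two functions disagree in the opposite direction (R absolutely, NR relative to expectation).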

Although "false positives" (as in the preceding example) are most often cited as the greatest concern in misidentification (Kavale & Spaulding, 2008; Kavale et

al., 2006), a second example will portray the risk of "false negatives" within an absolute discrepancy approach in RTI; namely, a student is misidentified as R when, in fact, she is NR. Imagine that in the pool of students described in the first example there is a student whose standard score on the WRMT is 92, and hence satisfies the R designation. Were intelligence assessed, her IQ score would fall in the 113-118 range. However, without that score established, she will be

judged to be responsive to the instruction she is receiving and, hence, remain in the same tier. This example reflects the greater probability of not

identifying "twice exceptional" students (i.e., gifted and SLD) when responsiveness to instruction is reduced to an absolute standard of low achievement in RTI (LDA, 2006; NJCLD, 2005). Kavale et al. (2006) noted that concerns related to nonidentification of

gifted/SLD students are exaggerated and irrelevant

because, whether within RTI or a traditional identification process, the typically "average-range" performance of such students would not generate a referral without their IQs already being known, which would have

necessitated the unrealistic and unreasonable process of administering intelligence tests schoolwide.

Though accurate on its face, this notion ignores the fact that "twice exceptional" is part of the professional lexicon and consciousness for a reason; namely, because of the role general educators' clinical judgment has played historically in bringing these students to the attention of special educators. In traditional referral

processes, many general educators refer such students for evaluation not because of absolute classroom failure, but due to their intuitive sense that they are capable of performing to a higher level. If an absolute cut point is used to determine students' responsiveness, the ability of general educators to utilize their knowledge, experience, and collaborative dialogue with special educators to refer students who may be "twice exceptional" will be eliminated.

Slope of improvement and dual discrepancy. The

degree of improvement in student performance as a

result of instruction in any RTI tier is also often considered an essential determinant of R and NR. As will be discussed further in the next section, local norms (e.g., classes within a school and/or district) are applied to determine the slope of each student's growth as referenced by the criterion selected, thus making this analysis one of "relative" discrepancy. Fuchs and Deshler

(2007) described the "median split" method proposed by Vellutino et al. (1996), in which the WRMT is administered to a class of students at multiple points, from initial (i.e., "universal") screening through a general instructional period. The slope of each student's

performance is rank-ordered, a median slope point is

determined, and students below the median are deemed NR.

Several variations in instrumentation and standards of responsiveness have also been proposed. The same authors cited curriculum-based measures such as fluency in reading or word identification proposed by Fuchs and Fuchs (1998). Additionally, Fuchs, Fuchs et al. (2004) referred to the recommendation of Speece and Case (2001) that a discrepancy of one standard deviation below the class mean in slope serves as an indication of NR. Each of these measures also contributes to an oft-cited measure of responsiveness, dual

discrepancy. In this approach, a student is determined to be NR when both the level of performance and the

slope of improvement fall below the determined cut

points in reference to the comparison class, whether the median advocated by Fuchs and Fuchs (2007) or the aforesaid standard deviation below the mean.
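The dual-discrepancy determination can be sketched in a few lines (an illustrative reading of the median-split variant; the data, function names, and the least-squares slope over equally spaced weekly probes are our assumptions, and the Speece and Case one-standard-deviation criterion would simply swap the cut computation):

```python
import statistics

def slope(scores):
    """Least-squares slope of repeated measures taken at weeks 0, 1, 2, ..."""
    n = len(scores)
    x_mean = (n - 1) / 2
    y_mean = statistics.mean(scores)
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(scores))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den

def dual_discrepancy(class_scores):
    """Flag a student NR only when BOTH final level and growth slope fall
    below the class median (median-split variant; illustrative)."""
    levels = {s: scores[-1] for s, scores in class_scores.items()}
    slopes = {s: slope(scores) for s, scores in class_scores.items()}
    level_cut = statistics.median(levels.values())
    slope_cut = statistics.median(slopes.values())
    return {s: "NR" if levels[s] < level_cut and slopes[s] < slope_cut else "R"
            for s in class_scores}

# Four hypothetical students, weekly word-reading scores:
probes = {"A": [10, 14, 18, 22], "B": [12, 13, 14, 15],
          "C": [8, 9, 9, 10], "D": [20, 24, 28, 32]}
print(dual_discrepancy(probes))  # {'A': 'R', 'B': 'NR', 'C': 'NR', 'D': 'R'}
```

In this toy class, students B and C fall below both the median level and the median slope and would proceed to the next tier; A and D clear the level cut and are deemed responsive.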

The examples in the previous section related to the

hypothetical and undocumented level of cognitive ability in students warrant revisiting with regard to determination of slope (and hence also dual) discrepancy. It was noted that some students who are judged to be NR due to their failure to attain a curriculum benchmark or statistical standard (i.e., standard score or

percentile rank) may actually be performing up to

expectation, or nearly so, given their IQs. Those students comprised NR "false positives" using the absolute

discrepancy model. They will likely also be judged NR within a "dual discrepancy" framework for the same reason. For example, a pool of students is deemed

"potentially NR" (pending measurement of their slope of improvement) because each fell below the criterion cut point. Among this pool are students whose performance aligns with cognitive expectation (i.e., both are in the low-average range) and others who fall well below (i.e., reflecting the "traditional" SLD discrepancy). The next measurement will indicate their slope and degree of improvement in response to the intervening instruction.

By all logic and statistical probability, the former

group of students will reflect less improvement than the latter because they had a smaller margin for improvement since they were, unbeknownst to those making educational decisions about them, already performing up to expectation. Similar to the example used previously, these "false-positive" NR students will proceed to a more intensive intervention in the next tier. With a

continued marginal slope of improvement in comparison to their peers, which is again statistically probable given their cognitive ability, and absent a comprehensive evaluation for special education, these same students will likely also be designated as SLD. Examples such as these illustrate the caution expressed by Kauffman (2004) regarding RTI that, "a prevention model increases false positives - always, inevitably, and with mathematical certainty" (p. 313).

Performance comparisons. With the exception of models that determine a student's R or NR status based on an absolute discrepancy through the use of a standard score or percentile rank on a nationally standardized, norm-referenced instrument (e.g., Torgesen, 2001), there appears to be little agreement among RTI advocates on the scope of comparison to be employed. Some references can only be classified as vague, perhaps reflecting a belief that the specific normative reference used should be determined at the local level. Fuchs, Fuchs et al. (2004) suggested that student performance should be evaluated "in comparison to other classes ... in the same building, the same district, or the entire nation" (p. 217). Similarly, Fuchs and Deshler (2007) noted that "students above a normative cut point referenced to the classroom, school district, or nation, are deemed responsive, and others are designated nonresponsive" (p. 133). Bradley et al. (2007) seemed more inclined toward establishing statewide standards, yet

hedged somewhat by stating, "states may create criteria that take local variations into consideration" (p. 10).

In addition to the concerns discussed earlier regarding differences in instrumentation, the potential variations in normative comparison to determine R, NR, and

potentially SLD present several vexing predicaments, both practical and legal, that merit examination. The

following examples illustrate the risks inherent in using a normative reference that becomes increasingly localized (i.e., moving from comparison nationally toward state, district, and school).

Imagine two schools (or districts within a state) with student populations having substantive differences in variables that generally correlate with educational performance, such as socioeconomic status, parental educational level, and so forth. Each school utilizes the same instrument as a universal screen and growth indicator. Each school has also established the same cut point to distinguish between R and NR. For the sake of this example, students whose performance on the measure is in the lowest 24% of their class/grade in their school are determined to be NR; those above are considered R. The raw score of the student with the 24th

percentile rank in School A on the measure used is 35; the score for the same percentile rank in School B is 45.

In this example, students with scores in the same

range (36-44) will be judged differently in terms of

responsiveness to instruction depending upon which school they attend. Within a framework that introduces confounds into the comparative process, "the individual school setting (i.e., context) becomes the primary influence on the way the presence or absence of a disability is determined" (Kavale et al., 2006, p. 118). Even if this example were extended to differences between districts rather than individual schools, the problem of

divergent standards persists. As noted by Zirkel and Krohn (2008), "School districts will have various practical problems not only in terms of cross pressures from

neighboring districts but also from parents of students

coming from other districts and from private schools"

(p. 73). The dilemmas described above are exacerbated when

schools or districts differ in the performance measures

utilized, particularly when some are curriculum-based, others utilize an entire standardized, norm-referenced

battery (e.g., WRMT), and still others use only a subtest of the battery (e.g., word identification within the

WRMT). Mastropieri and Scruggs (2005) aptly summarized concerns related to validity as follows: "The issue of using non-standardized procedures associated with RTI for identifying LD remains problematic until issues of standardization, cutoff scores, and validity can be

fully addressed" (p. 528).
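The two-school predicament above can be sketched directly (hypothetical score distributions and a simplified boundary calculation; the 24% cut mirrors the example in the text):

```python
def judge(score, school_scores, pct=0.24):
    """NR if the score falls within the lowest `pct` of the school's own
    score distribution (a local-norm comparison; illustrative only)."""
    ordered = sorted(school_scores)
    cut = ordered[int(len(ordered) * pct)]  # boundary of the lowest 24%
    return "NR" if score < cut else "R"

# Two hypothetical schools whose distributions differ, as in the example:
school_a = list(range(20, 70))   # lowest-24% boundary near a raw score of 32
school_b = list(range(30, 80))   # lowest-24% boundary near a raw score of 42
print(judge(38, school_a), judge(38, school_b))  # R NR
```

An identical raw score of 38 is judged responsive in one school and nonresponsive in the other, purely because the local norm group differs.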



SUMMARY

Professional dialogue related to RTI has afforded the field a rich opportunity to reflect upon and reexamine beliefs about a range of critical issues, such as prereferral systems, preservice training of general and special educators, research-based models of intervention, personnel supply for multitiered intervention, factors contributing to variability in prevalence, and the construct, definition, and assessment of SLD. The discussion has been spirited, thought-provoking, and occasionally contentious. Although clear demarcations exist between advocates of immediate implementation of RTI and those who urge a cautious approach, it would be unreasonable to characterize the latter group as "opposed in

principle" to RTI. Nevertheless, many within this group fear that RTI poses a direct threat to the continued existence of the category of SLD (Kavale & Spaulding, 2008).

Mastropieri and Scruggs (2005) succinctly conveyed this concern:

If elimination of the category of LD is sought, then this specifically should be the topic of discussion. If it is not, then discussion is needed that demonstrates how RTI identification procedures will preserve the category of LD while improving the identification of students with LD. (p. 529)

Such concerns stem from and reflect fundamentally different beliefs regarding the assessment of SLD

and, to a large degree, differences in the "language" and intent of assessment. The debate concerning the ability of RTI assessment processes to identify SLD reflects the

futility inherent in attempting to reconcile two measurement approaches that are logically and statistically incompatible, one that is absolute (low achievement) and another that is relative (underachievement).

The professional disequilibrium evident within RTI

dialogue is partially attributable to the lack of distinction between several basic terms and concepts; namely, low achievement, underachievement, and the manner in which these relate to SLD and the determination of

NR.

Low achievement is absolute and readily identified by performance aligned with a marker, whether in the curriculum or as an outcome on a test. Inasmuch as RTI utilizes such threshold markers to determine responsiveness, for all practical purposes NR is conceptually equivalent to low achievement. As discussed earlier, however, low achievement alone may or may not reflect the reality of whether a particular student is R, NR, and/or has an SLD

and, therefore, should not be utilized as a diagnostic criterion for disability (Kavale, 2005). If NR either designates SLD or constitutes a primary path toward identification (as suggested by Fuchs, Fuchs et al., 2004), the fear expressed by Kavale that RTI will produce a shift

from "'All students with SLD have learning problems' to 'All students with learning problems have SLD'" (p. 554) is justified.

Underachievement (or "unexpected low achievement"), which is the foundational construct within SLD, represents a relative determination that cannot be derived

solely by an absolute marker and is dependent upon clinical judgment employed on a case-by-case basis to determine whether the absolute outcome represents an

unanticipated departure from expectation. Therein resides the litmus test for professional judgment about whether RTI-based assessment is a legitimate means of

identifying SLD. Support for such an identification model constitutes a tacit, and perhaps outright, rejection of SLD as a category of disability in which underachievement is a central diagnostic criterion and an endorsement of the contention that RTI has created new operational criteria for its diagnosis (Kavale et al., 2006). Conversely, support for the long-standing conceptualization of SLD and its manifestations necessitates that the NR standing within RTI be deemed wholly insufficient in identifying this category of disability.

Implications for Practice

The issues related to identification of students with SLD presented in this article suggest that professionals should be especially circumspect in planning RTI implementation. Such vigilance must extend from the pursuit of assessment-related research to the practical matters that confront teachers in public schools. The considerable variation in measures used to determine NR, and hence potentially SLD, must be addressed. Broad implementation of RTI models that utilize substantively different assessment procedures as a primary or singular means of SLD identification is likely to produce even

greater variability in prevalence than discrepancy-based approaches (Berkeley et al., 2009; Fuchs & Deshler, 2007; Hallahan et al., 2007), which, as noted earlier, is

among the primary criticisms of the traditional SLD identification model. Although the local variability in RTI assessment protocols presents significant barriers to

psychometric research (Hollenbeck, 2007), efforts should nevertheless be undertaken to determine the

degree of predictive validity among the various measures and NR cut points with populations of students

concurrently identified as SLD through traditional discrepancy-based approaches. Professionals must also consider the role of full evaluation for SLD and the referral for such by teachers within any RTI model. For the reasons described earlier, NR status alone is incapable of distinguishing SLD from other possible causes of low achievement, thus

answering the question posed by Mastropieri and

Scruggs (2005), "If RTI cannot discriminate, how can it

Volume 32, Fall 2009 213

Page 13: Hammill Institute on Disabilitiesdanabanegas.yolasite.com/resources/project article 5.pdfHammill Institute on Disabilities Obscuring Vital Distinctions: The Oversimplification of Learning

classify?" (p. 528). Consonant with the recommenda

tions of CEC (2008), LDA (2006), and NJCLD (2005), RTI implementation must be accompanied by the

option for teachers to initiate a referral for full evalua tion at any time for a student suspected of having a dis

ability. The manner in which progression through RTI

tiers is determined presents certain risks in this regard, even if referral for full evaluation is an option. In the same manner that instructional modifications derived from prereferral intervention may not be employed as a

legally defensible substitute for special education, the RTI tiers through which students with learning prob lems progress and the measures employed to produce this movement must not be considered a surrogate for full evaluation. To this end, teachers must be cautioned and fully realize that they should not simply await the outcome of the next tier of intervention before initiat

ing a referral, particularly when using RTI models that do not determine students' level of responsiveness (and thus risk their underidentification) in mathematics and

writing. The evaluative structures within RTI and traditional

SLD determination can not only coexist, but can also enhance earlier identification of underachievement. For this to occur, however, classroom teachers, special edu

cators, administrative personnel, researchers, and teacher trainers must recognize that (a) nonresponsive ness and SLD are not always equivalent, (b) responsive ness does not preclude the presence of SLD among students with superior cognitive ability, (c) research related to efficacious identification of secondary-level students with SLD must be expanded, and (d) pre- and inservice education must fully equip current and future

general and special educators to improve SLD identifi cation through meaningful participation in RTL

REFERENCES

Bender, W. N., & Shores, C. (2007). Response to intervention: A practical guide for every teacher. Joint publication. Arlington, VA: Council for Exceptional Children; Thousand Oaks, CA: Corwin Press.

Bergeron, R., Floyd, R. G., & Shands, E. I. (2008). States' eligibility guidelines for mental retardation: An update and consideration of part scores and unreliability of IQs. Education and Training in Developmental Disabilities, 43, 123-131.

Berkeley, S., Bender, W. N., Peaster, L. G., & Saunders, L. (2009). Implementation of response to intervention: A snapshot of progress. Journal of Learning Disabilities, 42, 85-95.

Bradley, R., Danielson, L., & Doolittle, J. (2005). Response to intervention. Journal of Learning Disabilities, 38, 485-486.

Bradley, R., Danielson, L., & Doolittle, J. (2007). Responsiveness to intervention: 1997 to 2007. Teaching Exceptional Children, 39, 8-12.

Buck, G. H., Polloway, E. A., Smith-Thomas, A., & Cook, K. W. (2003). Prereferral intervention processes: A survey of state practices. Exceptional Children, 69, 349-360.

Burns, M. K., Jacob, S., & Wagner, A. R. (2008). Ethical and legal issues associated with using response-to-intervention to assess learning disabilities. Journal of School Psychology, 46, 263-279.

Compton, D. L. (2006). How should 'unresponsiveness' to secondary intervention be operationalized? It is all about the nudge. Journal of Learning Disabilities, 39, 170-173.

Council for Exceptional Children. (2008). CEC's position on response to intervention (RTI): The unique role of special education and special educators. Retrieved March 23, 2009, from http://www.cec.sped.org/AM/Template.cfm?Section=Search&Template=/search/searchdisplay.cfm&sq=position%20on%20response%20to%20intervention

Fuchs, D., & Deshler, D. D. (2007). What we need to know about responsiveness to intervention (and shouldn't be afraid to ask). Learning Disabilities Research & Practice, 22, 129-136.

Fuchs, D., Deshler, D. D., & Reschly, D. J. (2004). National Research Center on Learning Disabilities: Multimethod studies of identification and classification issues. Learning Disability Quarterly, 27, 189-195.

Fuchs, D., Fuchs, L. S., & Compton, D. L. (2004). Identifying reading disabilities by responsiveness-to-intervention: Specifying measures and criteria. Learning Disability Quarterly, 27, 216-227.

Fuchs, D., & Young, C. L. (2006). On the irrelevance of intelligence in predicting responsiveness to reading instruction. Exceptional Children, 73, 8-30.

Fuchs, L. S., & Fuchs, D. (2007). A model for implementing responsiveness to intervention. Teaching Exceptional Children, 39, 14-20.

Gerber, M. M. (2005). Teachers are still the test: Limitations of response to instruction strategies for identifying children with learning disabilities. Journal of Learning Disabilities, 38, 516-524.

Good, R. H., & Kaminski, R. A. (2003). Dynamic indicators of basic early literacy skills. Longmont, CO: Sopris West Educational Services.

Hallahan, D. P., & Mercer, C. D. (2002). Learning disabilities: Historical perspectives. In R. Bradley, L. Danielson, & D. P. Hallahan (Eds.), Identification in learning disabilities: Research to practice (pp. 1-67). Mahwah, NJ: Lawrence Erlbaum Associates.

Hallahan, D. P., Kauffman, J. M., & Pullen, P. C. (2009). Exceptional learners: An introduction to special education (11th ed.). Boston: Allyn & Bacon.

Hallahan, D. P., Keller, C. E., Martinez, E. A., Byrd, E. S., Gelman, J. A., & Fan, X. (2007). How variable are interstate prevalence rates of learning disabilities and other special education categories? A longitudinal comparison. Exceptional Children, 73, 136-146.

Hallahan, D. P., Lloyd, J. W., Kauffman, J. M., Weiss, M. P., & Martinez, E. A. (2005). Learning disabilities: Foundations, characteristics, and effective teaching (3rd ed.). Boston: Allyn & Bacon.

Hammill, D. (1990). On defining learning disabilities: An emerging consensus. Journal of Learning Disabilities, 23, 74-84.

Harbort, G., Gunter, P. L., Hull, K., Brown, Q., Venn, M. L., Wiley, L. P., et al. (2007). Behaviors of teachers in co-taught classes in a secondary school. Teacher Education and Special Education, 30, 13-23.

Hollenbeck, A. F. (2007). From IDEA to implementation: A discussion of foundational and future responsiveness-to-intervention research. Learning Disabilities Research & Practice, 22, 137-146.

Hoover, J. J., & Patton, J. R. (2008). The role of special educators in a multitiered instructional system. Intervention in School and Clinic, 43, 195-202.

Hoover, J. J., Baca, L., Wexler-Love, E., & Saenz, L. (2008). National implementation of response to intervention (RTI): Research summary. Alexandria, VA: National Association of State Directors of Special Education. Retrieved February 3, 2009, from http://www.nasdse.org/Projects/ResponsetoInterventionRtIProject/tabid/411/Default.aspx

Individuals with Disabilities Education Improvement Act, Pub. L. 108-446 U.S.C. (2004).

Johnson, E., Mellard, D. F., & Byrd, S. E. (2005). Alternative models of learning disabilities identification: Considerations and initial conclusions. Journal of Learning Disabilities, 38, 569-572.

Kauffman, J. M. (2004). The President's commission and the devaluation of special education. Education and Treatment of Children, 27, 307-324.

Kavale, K. A. (2005). Identifying specific learning disability: Is responsiveness to intervention the answer? Journal of Learning Disabilities, 38, 553-562.

Kavale, K. A., Holdnack, J. A., & Mostert, M. P. (2006). Responsiveness to intervention and the identification of specific learning disability: A critique and alternative proposal. Learning Disability Quarterly, 29, 113-127.

Kavale, K. A., Kauffman, J. M., Bachmeier, R. J., & LeFever, G. B. (2008). Response to intervention: Separating the rhetoric of self-congratulation from the reality of specific learning disability identification. Learning Disability Quarterly, 31, 135-150.

Kavale, K. A., & Spaulding, L. S. (2008). Is response to intervention good policy for specific learning disability? Learning Disabilities Research & Practice, 23, 168-179.

Kavale, K. A., Spaulding, L. S., & Beam, A. P. (2009). A time to define: Making the specific learning disability definition prescribe specific learning disability. Learning Disability Quarterly, 32, 39-48.

Learning Disabilities Association of America. (2006). Response to intervention: Position paper of the Learning Disabilities Association of America. Retrieved March 23, 2009, from http://www.ldanatl.org/about/position/rti.asp

Lyon, G. R. (2005). Why scientific research must guide educational policy and instructional practices in learning disabilities. Learning Disability Quarterly, 28, 140-143.

Mastropieri, M. A., & Scruggs, T. E. (2005). Feasibility and consequences of response to intervention: Examination of the issues and scientific evidence as a model for the identification of individuals with learning disabilities. Journal of Learning Disabilities, 38, 525-531.

McKenzie, R. G. (in press). A national survey of pre-service preparation for collaboration. Teacher Education and Special Education.

Mellard, D. F., Byrd, S. E., Johnson, E., Tollefson, J. M., & Boesche, L. (2004). Foundations and research on identifying model responsiveness-to-intervention sites. Learning Disability Quarterly, 27, 243-256.

Mellard, D. F., Deshler, D. D., & Barth, A. (2004). LD identification: It's not simply a matter of building a better mousetrap. Learning Disability Quarterly, 27, 229-242.

National Joint Committee on Learning Disabilities. (2005). Responsiveness to intervention and learning disabilities. Learning Disability Quarterly, 28, 249-260.

Reschly, D. J. (2005). Learning disabilities identification: Primary intervention, secondary intervention, and then what? Journal of Learning Disabilities, 38, 510-515.

Reschly, D. J., & Hosp, J. L. (2004). State SLD identification policies and practices. Learning Disability Quarterly, 27, 197-212.

Reynolds, C. R., & Shaywitz, S. E. (2009). Response to intervention: Ready or not? Or, from wait-to-fail to watch-them-fail. School Psychology Quarterly, 24, 130-145.

Salvia, J., Ysseldyke, J. E., & Bolt, S. (2010). Assessment in special and inclusive education (11th ed.). Belmont, CA: Wadsworth.

Speece, D. L., & Case, L. P. (2001). Classification in context: An alternative approach to identifying early reading disability. Journal of Educational Psychology, 93, 735-749.

Taylor, R. L. (2009). Assessment of exceptional students: Educational and psychological procedures (8th ed.). Columbus, OH: Pearson.

Torgesen, J. K., Alexander, A., Wagner, R., Rashotte, C., Voeller, K., & Conway, T. (2001). Intensive remedial instruction for children with severe reading disabilities: Immediate and long-term outcomes from two instructional approaches. Journal of Learning Disabilities, 34, 33-58.

U.S. Office of Education. (1977). Assistance to states for education of handicapped children: Procedures for evaluating specific learning disabilities. Federal Register, 42, 65082-65085.

Vaughn, S., & Fuchs, L. S. (2003). Redefining learning disabilities as inadequate response to instruction: The promise and potential problems. Learning Disabilities Research & Practice, 18, 137-146.

Vellutino, F., Scanlon, D. M., Sipay, E. R., Small, S. G., Pratt, A., Chen, R., et al. (1996). Cognitive profiles of difficult-to-remediate and readily remediated poor readers: Early intervention as a vehicle for distinguishing between cognitive and experiential deficits as a basic cause of specific reading disability. Journal of Educational Psychology, 88, 601-638.

Woodcock, R. (1998). Woodcock reading mastery tests - revised. Circle Pines, MN: American Guidance Service.

Zirkel, P. A., & Krohn, N. (2008). RTI after IDEA: A survey of state laws. Teaching Exceptional Children, 40, 71-73.

Please address correspondence about this article to: Robert G. McKenzie, 229 Taylor Education Building, University of Kentucky, Lexington, KY 40506-0001; e-mail: [email protected]
