Research Report

Training Users in the Gross Motor Function Measure: Methodological and Practical Issues

Dianne J Russell, Peter L Rosenbaum, Mary Lane, Carolyn Gowland, Charles H Goldsmith, William F Boyce, Nancy Plews

Key Words: Cerebral palsy, Motor function, Reliability, Training methods, Videorecordings.

Background and Purpose. The Gross Motor Function Measure (GMFM) is a criterion-referenced observational measure for assessing change in gross motor function for children with cerebral palsy (CP). The purposes of this report are to present data on the effects of training pediatric developmental therapists to administer and score the GMFM and to discuss some practical and methodological issues associated with training. Subjects and Methods. A weighted kappa estimate was used to determine participants' pretraining and posttraining workshop agreement of scoring a videotaped GMFM assessment against experts' scoring of the same videotaped assessment. Several children with CP, representing a spectrum of ages, severities, and levels of function, were shown on the videotape. Results. There was a significant improvement in agreement, from a mean kappa of .58 to .82 (t=15.38, df=75, P<.05).

DJ Russell, MSc, is Research Coordinator (NCRU), Department of Clinical Epidemiology and Biostatistics, Faculty of Health Sciences, McMaster University, Bldg 74, Chedoke Campus, Hamilton, Ontario, Canada.

Development of a clinical test is a complex and time-consuming process. Researchers usually spend the majority of their effort to establish that the test is reliable (consistent) and valid.


Table. Review of Manuals for Suggested Training Methods

Criteria for suggested training or test/examiner qualifications(a)     BSID(b)  BOTMP  PDMS  MAP  PEDI  GDS
Experience with children                                                  +       +      +     +    +     -
Experience with standardized testing procedures                          +       +      +     +    +     -
Read and be familiar with manual and administration guidelines           +       +      +     +    +     -
Practice                                                                  +       +      +     +    +     -
Suggested testing for reliability                                         -       -      +     +    +     -
Formal training offered by test developers or detailed
  explanation of how to train oneself                                     -       -      -     +    +     -
Formal evaluation of training methods                                     -       -      -     -    -     -

(a) Plus sign (+) indicates manual meets the criteria; minus sign (-) indicates manual does not meet criteria or information is missing.
(b) BSID=Bayley Scales of Infant Development, BOTMP=Bruininks-Oseretsky Test of Motor Proficiency, PDMS=Peabody Developmental Motor Scales, MAP=Miller Assessment for Preschoolers, PEDI=Pediatric Evaluation of Disability Inventory, GDS=Gesell Developmental Scales.

All measurements can be affected by several sources of variation, which can affect the reliability of the measure, including factors within the examinee (subject of the assessment), the examination (or test), the examiner (user), and the environment (context).2 Some important variables of the examinee are age, functional activity level, and degree of disability. The length of the assessment and the clarity of the administration guidelines are factors that may vary in the examination. Factors associated with the environment include the test setting (eg, room), temperature, and time of day. Other factors thought to be less controllable, but relevant, are patient compliance, age and background experience of the examiner, the examiner's familiarity with the examinee, and the method of assessment (direct contact or analysis of videotaped activities). One major controllable source of variation is the training of potential test users in the background, concepts, and application of the test.

Rothstein3 states that when evaluating tests for clinical use, it is important to consider population-specific reliability for the particular group being measured and for the type of people administering the measures. It is useful to know the type of patients used in the reliability studies (and whether they are similar to the subjects who will be assessed) and the level of training of the examiners. It is important to know whether the examiners were a sample of typical therapists or whether they were part of the team developing the measure and therefore probably more expert in its administration and scoring. For reliability to be generalizable, reliability testing needs to be conducted with people thought to be typical of the users of the test. When considering whether to incorporate a new test for research or clinical practice, it is important to determine whether the test manual provides advice or, preferably, evidence about the best methods for training.

The Standards for Tests and Measurements in Physical Therapy Practice require primary test purveyors or test developers to include in a test manual

. . . descriptions of the qualifications and competencies needed by test users. These descriptions should include statements regarding potential consequences of unqualified users administering the test.1(p590)

Test manuals should also " . . . describe how potential test users can obtain the competencies necessary to administer the tests."1(p598) Among the standards for clinicians using tests is the following:

Test users must be able to determine before they use a test whether they have the ability to administer that test . . . based on an understanding of the test user's own skills and knowledge (competency) as compared with the competencies described by the test purveyor.1

Stengel4 reviews a number of tests for assessing motor development in nonnewborn children for the tests' reliability, validity, and usefulness to clinicians. He chose the tests because they are comprehensive, familiar to investigators studying the management of children with neurologic dysfunction, and readily available. These tests include the Bayley Scales of Infant Development,5 Bruininks-Oseretsky Test of Motor Proficiency,6 Peabody Developmental Motor Scales,7 Miller Assessment for Preschoolers,8 Pediatric Evaluation of Disability Inventory,9 and Manual of Developmental Diagnosis.10 Stengel states that the tests are " . . . fairly easy to learn to administer without the need for extensive special instruction,"4 but he presents no evidence to support this contention.

To determine how well test manuals addressed the issue of training, the manuals from the nonnewborn pediatric tests identified by Stengel4 were reviewed by the primary author (DJR). The results of this review are presented in the Table. Most of the manuals recommend experience with children, experience with standardized testing procedures, and practice as important factors in learning their measures, but few manuals actually explain how to obtain the necessary skills to ensure


competency. The Pediatric Evaluation of Disability Inventory9 manual has the most detailed description of procedures for training. The authors of the manual advocate attending a training workshop; however, they also suggest methods of training with an experienced examiner. Overall, the authors recommend "high agreement" with an experienced examiner, but they do not specify a particular level of reliability. Case scenarios are also provided, which can be scored by test users to evaluate their reliability compared with the authors' scoring. Although formal training may be available for some of the measures reviewed, information on training is not presented in the Table if this information was not included in the test manual. We are not aware that any authors have evaluated the effects of training.

The Gross Motor Function Measure (GMFM) was developed for use by pediatric physical therapists as an evaluative measure for assessing change over time in gross motor function of children with cerebral palsy. The GMFM is an 88-item, criterion-based observational measure that assesses motor function in five "dimensions": (1) lying and rolling; (2) sitting; (3) crawling and kneeling; (4) standing; and (5) walking, running, and jumping. Each item is scored on a four-point scale (0=does not initiate activity, 1=initiates activity, 2=partially completes activity, and 3=completes activity). Specific descriptions of how to score each item are found in the administration and scoring guidelines contained within the test manual,11 which is available from the primary author (DJR). The results of the initial validation work on the GMFM have been published.12

In the original GMFM validation study, reliability of administering and scoring the GMFM over two occasions was assessed. A small number of developmental pediatric physical therapists familiar with the development of the GMFM and trained in the use of the measure completed interrater (n=11) and intrarater (n=10) reliability testing on a sample of children with cerebral palsy. These children represented a spectrum of ages, diagnostic types, and severities. Using intraclass correlation coefficients (ICC [2,1]),13 reliability estimates were calculated for each dimension as well as for total scores, and these values varied from .87 to .99.

Following minor revisions to the items and guidelines, a second reliability study using a balanced incomplete block design was completed by 16 developmental pediatric therapists using the original and the revised guidelines. The therapists involved in this study were not regularly using the GMFM but had undergone some training. Although the initial reliability studies had all therapists administering and scoring the GMFM, this study required therapists to score from videotapes. The results of the study demonstrated ICCs of .75 to .97 between therapists scoring with the old and the new guidelines. Although the range of values for the reliability coefficients was greater in the study using videotapes, they were still high enough for us to conclude that trained therapists could score the modified GMFM reliably. Because all our reliability and validity data were collected using trained pediatric physical therapists who were involved in the development and validation of the measure, we needed to know whether training was generalizable to those therapists who would be typical users of the test (ie, clinicians working in children's treatment centers). Training would allow test users to determine their competency with scoring the GMFM and allow us to evaluate the value and impact of the training.

The purposes of this report are (1) to present data on the effects of training developmental pediatric clinicians in the use of the GMFM using videotapes for training and testing, and (2) to discuss some practical and methodological issues that arose and may be generalizable to other measurement training situations.
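To make the scoring arithmetic described above concrete, the following minimal Python sketch computes dimension and total scores from the 0-to-3 item ratings. It assumes the percent-of-maximum convention used for the GMFM (each dimension is the sum of its item ratings over the maximum possible, and the total is the mean of the five dimension percentages); the per-dimension item counts are the commonly cited GMFM-88 figures and should be verified against the test manual11 before any real use.

    # A minimal sketch (not taken from the GMFM manual) of percent-of-maximum
    # scoring: items are rated 0-3, each dimension is expressed as a
    # percentage of its maximum, and the total is the mean of the five
    # dimension percentages.

    # Commonly cited GMFM-88 item counts per dimension (an assumption here;
    # verify against the manual).
    DIMENSIONS = {
        "lying and rolling": 17,
        "sitting": 20,
        "crawling and kneeling": 14,
        "standing": 13,
        "walking, running, and jumping": 24,
    }

    def dimension_percent(item_scores):
        """Percent score for one dimension; the maximum is 3 per item."""
        if any(s not in (0, 1, 2, 3) for s in item_scores):
            raise ValueError("GMFM items are scored 0, 1, 2, or 3")
        return 100.0 * sum(item_scores) / (3 * len(item_scores))

    def total_score(scores_by_dimension):
        """Total GMFM score: the unweighted mean of the five dimensions."""
        percents = [dimension_percent(scores_by_dimension[name])
                    for name in DIMENSIONS]
        return sum(percents) / len(percents)

For example, a child who completes every sitting item but initiates none of the standing items would score 100% and 0% on those two dimensions, and each dimension contributes equally to the total regardless of its item count.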

Method

A 1-day GMFM training program was developed. The workshops commenced with a description of the research background and psychometric properties of the GMFM, followed by a videotaped pretest (40 minutes, including pauses between items). Four children with cerebral palsy (1 with athetosis, 2 with diplegia, and 1 with hemiplegia), varying in age from 2 to 14 years, were shown on the testing videotape. An overview of general concepts in administering and scoring the test was followed by group discussion of the scoring issues of each GMFM item using videotaped examples (4 hours). The teaching videotape showed 3 children (1 with quadriplegia, 1 with diplegia, and 1 with hemiplegia), who varied in age from 6 to 14 years. Later in the afternoon, 45 minutes was spent viewing a videotape and discussing how to calculate a total score and issues related to goal setting. The day ended with the readministration of the same videotape used at the pretest. The pretest and posttest scores were used to ascertain whether the training workshop had an impact on participants' ability to observe and score a videotaped GMFM assessment. The correct pretest and posttest scores were previously determined by three of the workshop trainers, who viewed and independently scored the testing videotapes using the GMFM administration and scoring guidelines. Disagreements were identified and discussed, and the videotaped activities were replayed until consensus on the correct or "criterion" score was achieved.

Before commencing the pretest, participants were given a GMFM manual and instructed to use the administration and scoring guidelines when scoring the test videotape. Prior to being shown each GMFM item on videotape, the item number and the number of trials the participants would see the child attempt for that item were identified. The tape was stopped between items to allow participants time to score and prepare for the next item. No items were replayed. This protocol was repeated using the same


videotape for the posttraining test. At the time of the pretest, participants completed a questionnaire about their previous clinical experience and their familiarity and experience with the GMFM.

Prior to initiating training workshops, the plan was to develop three separate criterion videotapes, each approximately 20 minutes in duration. Each tape was to contain one third of the total number of items, randomly selected from each of the five GMFM dimensions. These items were examined to ensure that a mixture of items was included (eg, a variety of starting positions, static and dynamic items). A sample of items from various levels of function (from children who were functioning primarily in the first GMFM dimension of lying and rolling activities to those capable of independent ambulation) was shown on each tape. Items were grouped by dimension and shown in numerical order.

The first of the three videotapes was developed according to plan and was used in the first three workshops. Results using this videotape are summarized in the "Training Study A" section. Upon close examination of the individual item scores from workshop participants, it became apparent that one person could have fewer errors than another but have a lower overall estimated kappa. Because criterion videotape "A" did not sample equally all possible GMFM scores (0, 1, 2, and 3), a bias was created. Participants had one chance on videotape "A" to score 0 (indicating "does not initiate movement"), so that if they scored that item incorrectly, they were severely penalized (from a statistical point of view). When making the subsequent videotapes, this inequality was corrected by sampling more equally across GMFM scores. The results using the second videotape are summarized in the "Training Study B" section. The third videotape had not been used extensively at the time this article was written.

Data Analysis

A weighted estimated kappa statistic with a quadratic weight was used to analyze chance-corrected reliability between the rater's scoring and the criterion scoring.14 To get a composite measure of agreement across all categories, a weighted mean of the individual item kappas was calculated as described by Fleiss.14 A kappa of 1.00 would indicate perfect agreement with the criterion scoring, and a kappa of 0.00 would be equal to chance agreement. A kappa statistic using a quadratic weight penalizes the rater more the further away the rater is from the correct score. When a weighted kappa is calculated using quadratic weights, it yields results similar to the ICC and has a similar interpretation.15

A paired t test was used to examine the statistical significance of pretraining and posttraining estimated kappa scores, and an independent-sample t test was used to compare the posttest estimated kappa scores with the criterion test scores. All t tests were two-tailed. A Pearson product-moment correlation (r) was used to measure the relationship of criterion scores with previous clinical experience and previous experience with the GMFM, using SPSS/PC+ version 4.0.16 The .05 level was used to test for statistical significance.

Results

Training Study A

The data for training study A were derived from a total of 76 participants who attended the first three workshops. Eighty-six percent of the participants were physical therapists, 13% were occupational therapists, and 1% were kinesiologists. Workshop participants had a mean of 7.7 years of neurological pediatric experience, which varied from 0 to 25 years. The criterion kappa score for this videotape was set at .70, based on experience from training with the Gross Motor Performance Measure.17
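The quadratic-weighted kappa described under "Data Analysis" can be sketched in a few lines of Python. This is not the study's analysis code (the published analyses were run in SPSS/PC+16); it is a minimal reimplementation in the spirit of Fleiss,14 followed by invented example scores illustrating why videotape "A"'s unequal sampling mattered: a rater who badly misses the single 0-scored item can earn a lower kappa than a rater who makes two small errors.

    import numpy as np

    def quadratic_weighted_kappa(rater, criterion, categories=(0, 1, 2, 3)):
        """Chance-corrected agreement of one rater with the criterion
        scoring, using quadratic weights: a disagreement is penalized in
        proportion to its squared distance from the correct score."""
        k = len(categories)
        index = {c: i for i, c in enumerate(categories)}
        joint = np.zeros((k, k))
        for r, c in zip(rater, criterion):
            joint[index[r], index[c]] += 1
        joint /= joint.sum()
        # Agreement expected by chance is the product of the marginals.
        expected = np.outer(joint.sum(axis=1), joint.sum(axis=0))
        # Quadratic agreement weights: 1 on the diagonal, decreasing with
        # the squared distance between assigned and criterion categories.
        i, j = np.indices((k, k))
        weights = 1.0 - (i - j) ** 2 / (k - 1) ** 2
        po = float((weights * joint).sum())
        pe = float((weights * expected).sum())
        return (po - pe) / (1.0 - pe)

    # Invented scores mimicking videotape "A", where only one item had a
    # criterion score of 0.
    criterion = [0, 1, 2, 3, 3, 2, 1, 2, 3, 1]
    two_small_errors = [0, 2, 1, 3, 3, 2, 1, 2, 3, 1]  # two off-by-one errors
    one_big_error = [3, 1, 2, 3, 3, 2, 1, 2, 3, 1]     # misses the lone 0

    print(quadratic_weighted_kappa(two_small_errors, criterion))  # about .90
    print(quadratic_weighted_kappa(one_big_error, criterion))     # about .48

With quadratic weights, the single distance-3 error is weighted as a complete disagreement, while each off-by-one error costs only one ninth as much, so the rater with fewer errors scores lower.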

A paired t test comparing the pretraining and posttraining scores to determine whether scores were significantly different for the total group (n=76) showed a statistically significant improvement in reliability, from a mean estimated kappa of .58 to .82 (t=15.38, df=75, P<.05).
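For illustration, the paired comparison just reported can be reproduced in outline with synthetic data; the kappa values below are generated to match the reported means and are not the study data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 76
    # Synthetic per-participant kappas centered near the reported means of
    # .58 (pretest) and .82 (posttest); the real per-participant data are
    # not reproduced here.
    pre = np.clip(rng.normal(0.58, 0.12, n), 0.0, 1.0)
    post = np.clip(pre + rng.normal(0.24, 0.10, n), 0.0, 1.0)

    t, p = stats.ttest_rel(post, pre)  # two-tailed paired t test
    print(f"mean pre = {pre.mean():.2f}, mean post = {post.mean():.2f}")
    print(f"t({n - 1}) = {t:.2f}, P = {p:.2g}")

Modeling the posttest score as the pretest score plus an improvement keeps the pairs correlated, as they would be in a real pretest-posttest design.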


The number of years of pediatric neurological experience was correlated with the estimated kappa values for the entire sample (n=149) to determine whether experienced clinicians were more reliable than less experienced clinicians. This was not the case at the pretest, with years of pediatric neurological experience correlating at r=-.04 (t=0.54, df=147, P>.05). At the posttest, there was a small negative correlation of r=-.16 (t=1.92, df=147, P>.05) between years of pediatric neurological experience and improvement in estimated kappa scores.

Discussion

The results of these studies demonstrate that clinicians who attend a 1-day GMFM training workshop improve their scoring reliability significantly when tested using videotaped assessments. Note, however, that the methods used in this study relate to evaluating the reliability of scoring a videotape and do not take into account other sources of variability that are present when clinicians are assessing children in the clinic (eg, variability due to different testers, children, and environments). Whether reliability values obtained using videotapes are higher or lower than those obtained using real-life assessments was not addressed as part of this study. There are good reasons to believe either that real-life assessments might be easier to perform and hence more reliable (eg, because more information is available to the examiner than can be provided on a videotape) or that these assessments are more difficult to perform and hence less reliable (eg, because the assessor must simultaneously test and score), so empirical studies are needed to address these questions appropriately. Results from reliability work with the Gross Motor Performance Measure show high levels of intrarater reliability when therapists administer and score an assessment and then rescore the videotape of the same assessment 6 weeks later. Boyce et al17 report ICCs (2,1) varying from .90 to .97 on individual attribute scores and .93 overall.

There was a marked difference in the pretest reliability scores and in the number of people reaching the criterion level of reliability, depending on which testing videotape was used to assess scoring. Higher pretest scores in training study B could have been due to the finding that more than twice as many participants in training study B had reported reading the administration guidelines prior to the workshop (23% as compared with 10% in training study A). We have not yet separately evaluated this particular source of variation in trainee skill (ie, how much familiarity with the material prior to the training workshop influences success in reaching a criterion level of reliability). Further work is needed to assess whether a certain amount of practice with the measure prior to testing would be sufficient to reach a criterion level of reliability, without the need of a formal workshop. It is important to note that although 63% of therapists reached the criterion score on the pretest in training study B, their scores still improved significantly following the workshop. Interestingly, our results showed that years of pediatric experience had little effect on the participants' ability to learn to administer the GMFM, and we believe that, from what we currently know, years of experience should not preclude people from undergoing the training process.
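The experience-reliability correlations reported in the Results follow the usual Pearson machinery; the sketch below, with placeholder data, shows how an r and a t statistic of the form quoted above (t = r * sqrt(n - 2) / sqrt(1 - r^2), with df = n - 2) can be computed.

    import numpy as np
    from scipy import stats

    # Placeholder data: years of pediatric neurological experience and
    # posttest kappas (the study pooled n = 149 participants).
    years = np.array([2.0, 5.0, 0.5, 12.0, 7.0, 25.0, 3.0, 9.0])
    kappa = np.array([0.81, 0.74, 0.86, 0.70, 0.78, 0.69, 0.84, 0.72])

    r, p = stats.pearsonr(years, kappa)
    n = len(years)
    t = r * np.sqrt(n - 2) / np.sqrt(1 - r ** 2)  # t statistic, df = n - 2
    print(f"r = {r:.2f}, t({n - 2}) = {t:.2f}, P = {p:.2g}")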

The videotape used in training study B appeared to be much easier to score than the videotape used in training study A. Things we learned after preparing the first videotape were used to improve the second videotape. These improvements included a longer "lead-in" time prior to the desired movement and a more equal sampling of scores across the total number of response options. By chance, we may also have sampled some items with less contentious scoring issues on the second videotape.

An important consideration in all reliability studies is the need to sample the range of performance across the range of items. For example, if therapists were determining their agreement of scoring the GMFM with a child who was an independent ambulator, it is likely that the child would score "3" (completes independently) on most items in the first three dimensions (lying and rolling; sitting; crawling and kneeling). Therapists would have a high level of agreement strictly because there was little room for disagreement. A more credible estimate of whether therapists agree would be determined by sampling more items for which the child is likely to have a mixture of item scores (0s, 1s, 2s, and 3s), as would be the case (in this example) in the two higher GMFM dimensions (standing; walking, running, and jumping). By including samples of GMFM items from children performing across the spectrum of function, a more realistic estimate of agreement is obtained.

Primary purveyors of measures usually spend a great deal of time developing and validating a new instrument and collecting normative data. Generally, a much smaller amount of effort is directed toward issues of training. Although clinicians have a responsibility to acquire the necessary training before using a new measure, it is often not clear what the necessary training is, or how to acquire it in a systematic and effective manner. The time and cost associated with setting up a training package have likely been deterrents to its development.

Several methodological issues were considered in planning this evaluation of the impact of the GMFM training program. Precautions were taken to minimize a learning effect as a result of doing the pretest. Workshop participants were not given any feedback on their performance on the test either following the pretest or during the workshop. A separate videotape with different children was used during training. The pretest videotape was used again at the posttest. Had we used different testing videotapes, this


would have added another source of variation, and any differences in pretest-posttest scores might have been due to variability in the videotapes.

Although written feedback from participants indicated the training workshops were beneficial for them, with each new testing videotape (encompassing different items and therefore different issues), the workshop trainers learned more about problematic wording and scoring issues with the GMFM. This has allowed for revisions to the test manual19 (available from the primary author) to highlight difficult training issues currently being dealt with in the workshop. We do not yet know whether the second edition of the manual will provide untrained users of the GMFM with a clearer set of directions for self-learning of the measure. To make training more accessible to therapists who are unable to attend a workshop, the Gross Motor Measures Group has developed a videodisc training package that contains videotape examples of children similar to those used for a workshop, along with a written commentary. This method will need to be evaluated to determine whether individuals learning by the videodisc can reach similar levels of scoring reliability as workshop participants.

There are a number of disadvantages and advantages to the use of videotapes as a medium for training and evaluating new users of a test such as the GMFM. One of the main disadvantages of using criterion videotapes to assess reliability is that this method is only testing the participant's ability to score the videotaped test reliably and is not providing an indication of the assessor's ability to administer and score the test in a clinical situation. For example, can the examiner elicit appropriate responses from the examinee as well as score them reliably? This is particularly important for a test that involves direct observation of performance rather than being scored from videotaped assessments. This aspect of learning and performing the GMFM needs to be studied further by examining the reliability of workshop

participants in a clinical situation and comparing the reliability with that achieved in the workshop.

Another problem with using videotapes is the quality of videotaping, in particular, the ability to capture on videotape, from the best possible camera angle, the movement the therapist is trying to test. Experience has shown it may be more difficult to judge whether a child is "initiating" a movement from videotape than from real life. We relied on the use of expert audiovisual personnel to develop our training materials in an effort to address and overcome these problems.

There are, however, a number of advantages to using videotapes as a method of assessing reliability. First, it is possible to evaluate the effects of an intervention (such as a training workshop) in a standardized manner. Second, the use of videotapes allows an efficient means of assessing several patients of varying diagnostic and functional levels while eliminating the issue of patient compliance. This advantage is particularly appealing when dealing with children. Videotapes can be edited to ensure they are capturing different training issues and covering an appropriate spectrum of function. Third, by having a criterion testing videotape with the "correct" score, as determined by experts, the therapist can ensure that responses are not only reliable but valid. For example, if therapists in the clinical setting learn an assessment together, they may make an administration or scoring decision that is different from the intent of the test developer. When the therapists then assess interrater reliability, it may be high because everyone agrees on how to score, but the score is not the correct (valid) one. Finally, another use for criterion testing videotapes is to have an easy method of assessing ongoing levels of competency. Tests can be completed at regular intervals to ensure that high levels of reliability are maintained over time. Gross20 and Gross and Conrad21 offer further discussion of the advantages and disadvantages of

using videotape to capture observational data.

The issue of how reliable is reliable enough is an interesting one. Although a number of guidelines are suggested in the literature,18,22 this is still an arbitrary decision. Streiner and Norman23 suggest that an acceptable level of reliability is dependent on the size of the sample, and they point out that clinical assessments used to make decisions on individuals need to be more reliable than those using grouped data. This is because data that are grouped (as in research studies) and used as the mean of several individuals have smaller measurement error. A reliability coefficient itself does not let the therapist know how many errors were made, which is why we provided participants who did not reach our preset criterion level with feedback on individual item problems. A test does not have a single level of reliability; therefore, identifying sources of variability is useful because this information can be used to try to reduce large sources of error variance.23 As clinicians and researchers, we want to be as reliable as possible to make valid decisions about the management of children. In our work, we chose to increase the criterion level of reliability required with the second videotape, based on our experience with the first videotape, to ensure a more rigorous level of reliability.

As we have tried to illustrate in this communication, there are methodologic and design features that can be used to address these issues. It is clear that as much care is needed in the preparation and testing of training in the use of a test as in its creation and validation. We believe that primary purveyors have a responsibility to their clinical colleagues and that they can learn useful and important lessons about their measure while providing training in its use.

Summary


We have shown that training improves workshop participants' agreement of scoring a videotaped GMFM assessment. Although there are a number of advantages to using videotapes to train test users and assess scoring reliability, this method does not evaluate participants' ability to administer the measure. Therefore, further work is needed to determine whether reliability is maintained in a clinical situation in which it is necessary to both administer and score the GMFM.

Acknowledgments

We gratefully acknowledge the contribution of data and thoughtful comments and questions from participants of the training workshops. We also thank Jim Chen for his computer assistance and Marilyn Marshall and Gerry Karlovic for the preparation of this manuscript.

References

1 Task Force on Standards for Measurement in Physical Therapy. Standards for tests and measurements in physical therapy practice. Phys Ther. 1991;71:589-622.
2 Sackett DL, Haynes RB, Guyatt GH, Tugwell P. Clinical Epidemiology: A Basic Science for Clinical Medicine. 2nd ed. Boston, Mass: Little, Brown and Co Inc; 1991.
3 Rothstein JM. Measurement and clinical practice: theory and application. In: Rothstein JM, ed. Measurement in Physical Therapy. New York, NY: Churchill Livingstone Inc; 1985:1-46.
4 Stengel TJ. Assessing motor development in children. In: Campbell SK, ed. Pediatric Neurologic Physical Therapy. New York, NY: Churchill Livingstone Inc; 1991:33-65.
5 Bayley N. Bayley Scales of Infant Development. New York, NY: The Psychological Corporation; 1969:5.
6 Bruininks RH. Bruininks-Oseretsky Test of Motor Proficiency. Circle Pines, Minn: American Guidance Service; 1978:42.
7 Folio MR, Fewell RR. Peabody Developmental Motor Scales and Activity Cards. Allen, Tex: DLM-Teaching Resources; 1983:13.
8 Miller LJ. Miller Assessment for Preschoolers. Littleton, Colo: The Foundation for Knowledge in Development; 1982:3.
9 Haley SM, Coster WJ, Ludlow LH, et al. Pediatric Evaluation of Disability Inventory (PEDI): Development, Standardization and Administration Manual (Version 1). Boston, Mass: New England Medical Center Hospital; 1992:80-88.
10 Knobloch H, Stevens S, Malone AF. Manual of Developmental Diagnosis. Rev ed. New York, NY: Harper & Row; 1980.
11 Russell DJ, Rosenbaum PL, Gowland C, et al. Manual for the Gross Motor Function Measure: A Measure of Gross Motor Function in Cerebral Palsy. Hamilton, Ontario, Canada: McMaster University; 1990.
12 Russell DJ, Rosenbaum PL, Cadman DT, et al. The gross motor function measure: a means to evaluate the effects of physical therapy. Dev Med Child Neurol. 1989;31:341-352.
13 Shrout PE, Fleiss JL. Intraclass correlations: uses in assessing rater reliability. Psychol Bull. 1979;86:420-428.
14 Fleiss JL. Statistical Methods for Rates and Proportions. 2nd ed. New York, NY: John Wiley & Sons Inc; 1981:219.
15 Kramer MS, Feinstein AR. Clinical biostatistics LIV: the biostatistics of concordance. Clin Pharmacol Ther. 1981;29:111-123.
16 Norusis MJ. SPSS/PC+ Statistics 4.0 for the IBM PC/XT/AT and PS/2. Chicago, Ill: SPSS Inc; 1990.
17 Boyce WF, Gowland C, Rosenbaum PL, et al. Gross Motor Performance Measure for children with cerebral palsy: study design and preliminary findings. Can J Public Health. 1992;83(suppl):S34-S40.
18 Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159-174.
19 Russell D, Rosenbaum P, Gowland C, et al. Gross Motor Function Measure Manual. 2nd ed. Hamilton, Ontario, Canada: McMaster University; 1993.
20 Gross D. Issues related to validity of videotaped observational data. West J Nurs Res. 1991;13:658-663.
21 Gross D, Conrad B. Issues related to reliability of videotaped observational data. West J Nurs Res. 1991;13:798-803.
22 Law M. Measurement in occupational therapy: scientific criteria for evaluation. Canadian Journal of Occupational Therapy. 1987;54:133-138.
23 Streiner DL, Norman GR. Health Measurement Scales: A Practical Guide to Their Development and Use. New York, NY: Oxford University Press Inc; 1989.
